
torch.nn.functional.scaled_dot_product_attention

torch.nn.functional.scaled_dot_product_attention(query, key, value, attn_mask=None, dropout_p=0.0, is_causal=False, scale=None) → Tensor

Computes scaled dot product attention on query, key and value tensors, using an optional attention mask if passed, and applying dropout if a probability greater than 0.0 is specified.
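In symbols (a schematic of the reference code shown next, using B for the additive attention bias built from attn_mask / is_causal, a notation introduced here for clarity):

\text{Attention}(Q, K, V) = \mathrm{softmax}\left(\frac{Q K^\top}{\sqrt{E}} + B\right) V

where E is the embedding dimension of the query and key, and B is zero where attention is allowed and -\infty where it is blocked.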

# Efficient implementation equivalent to the following:
import math
import torch

def scaled_dot_product_attention(query, key, value, attn_mask=None, dropout_p=0.0, is_causal=False, scale=None) -> torch.Tensor:
    L, S = query.size(-2), key.size(-2)
    scale_factor = 1 / math.sqrt(query.size(-1)) if scale is None else scale
    attn_bias = torch.zeros(L, S, dtype=query.dtype, device=query.device)
    if is_causal:
        assert attn_mask is None
        # Lower-triangular mask: query position i may only attend to key positions <= i.
        temp_mask = torch.ones(L, S, dtype=torch.bool, device=query.device).tril(diagonal=0)
        attn_bias.masked_fill_(temp_mask.logical_not(), float("-inf"))

    if attn_mask is not None:
        if attn_mask.dtype == torch.bool:
            # Boolean mask: True means "attend"; blocked positions receive a -inf bias.
            attn_bias.masked_fill_(attn_mask.logical_not(), float("-inf"))
        else:
            # Float mask: added directly to the attention scores.
            attn_bias += attn_mask
    attn_weight = query @ key.transpose(-2, -1) * scale_factor
    attn_weight += attn_bias
    attn_weight = torch.softmax(attn_weight, dim=-1)
    attn_weight = torch.dropout(attn_weight, dropout_p, train=True)
    return attn_weight @ value
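As a quick sanity check (a sketch with arbitrary, hypothetical shapes, reusing the reference definition above), the reference implementation should match the built-in op up to numerical tolerance when dropout is disabled:

import torch
import torch.nn.functional as F

# Hypothetical shapes: N=2 batches, 4 heads, L=S=8, E=Ev=16.
query = torch.randn(2, 4, 8, 16)
key = torch.randn(2, 4, 8, 16)
value = torch.randn(2, 4, 8, 16)

ref = scaled_dot_product_attention(query, key, value, is_causal=True)   # reference above
out = F.scaled_dot_product_attention(query, key, value, is_causal=True)
print(torch.allclose(ref, out, atol=1e-5))  # expected: True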

Warning

This function is beta and subject to change.

Note

There are currently three supported implementations of scaled dot product attention:
  • FlashAttention: Fast and Memory-Efficient Exact Attention with IO-Awareness
  • Memory-Efficient Attention
  • A PyTorch implementation defined in C++ matching the above formulation

The function may call optimized kernels for improved performance when using the CUDA backend. For all other backends, the PyTorch implementation will be used.

All implementations are enabled by default. Scaled dot product attention attempts to automatically select the most appropriate implementation based on the inputs. To give finer-grained control over which implementation is used, the following functions are provided for enabling and disabling implementations; the context manager is the preferred mechanism:
  • torch.backends.cuda.sdp_kernel(): a context manager used to enable or disable any of the implementations.
  • torch.backends.cuda.enable_flash_sdp(): globally enables or disables FlashAttention.
  • torch.backends.cuda.enable_mem_efficient_sdp(): globally enables or disables Memory-Efficient Attention.
  • torch.backends.cuda.enable_math_sdp(): globally enables or disables the PyTorch C++ implementation.
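For illustration only (a sketch, assuming the corresponding torch.backends.cuda query helpers flash_sdp_enabled(), mem_efficient_sdp_enabled(), and math_sdp_enabled() are available), the global toggles can be flipped and inspected like this; the context manager shown in the Examples section is usually the better choice:

import torch

# Globally disable the PyTorch C++ (math) implementation so only fused kernels are eligible.
torch.backends.cuda.enable_math_sdp(False)
print(torch.backends.cuda.math_sdp_enabled())           # False
print(torch.backends.cuda.flash_sdp_enabled())          # True (unchanged)
print(torch.backends.cuda.mem_efficient_sdp_enabled())  # True (unchanged)

# Restore the default before moving on.
torch.backends.cuda.enable_math_sdp(True)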

Each of the fused kernels has specific input limitations. If a particular fused implementation is required, disable the PyTorch C++ implementation using torch.backends.cuda.sdp_kernel(). If no fused implementation is available, an error is raised stating the reasons why it cannot run, as sketched below.
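A hedged sketch of that failure mode (assuming a CUDA build where FlashAttention accepts only float16/bfloat16 inputs): restricting execution to FlashAttention while passing float32 tensors should raise a RuntimeError explaining why the kernel cannot run.

import torch
import torch.nn.functional as F

# float32 inputs -- FlashAttention typically requires float16/bfloat16 (assumption for this sketch).
query = key = value = torch.randn(2, 8, 128, 64, device="cuda")

try:
    with torch.backends.cuda.sdp_kernel(enable_flash=True, enable_math=False, enable_mem_efficient=False):
        F.scaled_dot_product_attention(query, key, value)
except RuntimeError as err:
    print("No fused kernel could be selected:", err)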

Due to the nature of fusing floating point operations, the output of this function may differ depending on which backend kernel is chosen. The C++ implementation supports torch.float64 and can be used when higher precision is required. For more information, see Numerical accuracy.
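A minimal sketch of the higher-precision path (hypothetical shapes; float64 inputs are handled by the C++ implementation and keep their dtype):

import torch
import torch.nn.functional as F

query = key = value = torch.randn(2, 4, 16, 32, dtype=torch.float64)
out = F.scaled_dot_product_attention(query, key, value)
print(out.dtype)  # torch.float64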

Note

In some circumstances when given tensors on a CUDA device and using CuDNN, this operator may select a nondeterministic algorithm to increase performance. If this is undesirable, you can try to make the operation deterministic (potentially at a performance cost) by setting torch.backends.cudnn.deterministic = True. See Reproducibility for more information.
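A minimal sketch of that workaround (the benchmark flag is an extra assumption for repeatability, not required by the note above):

import torch

torch.backends.cudnn.deterministic = True   # prefer deterministic cuDNN algorithms (may cost performance)
torch.backends.cudnn.benchmark = False      # assumption: also disable autotuning for a repeatable kernel choice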

Parameters
  • query (Tensor) – Query tensor; shape (N, ..., L, E).
  • key (Tensor) – Key tensor; shape (N, ..., S, E).
  • value (Tensor) – Value tensor; shape (N, ..., S, Ev).
  • attn_mask (Optional[Tensor]) – Attention mask; shape (N, ..., L, S). Two types of masks are supported: a boolean mask, where a value of True indicates that the element should take part in attention, or a float mask of the same dtype as query, key, and value that is added to the attention score. See the mask sketch after this parameter list.
  • dropout_p (float) – Dropout probability; if greater than 0.0, dropout is applied.
  • is_causal (bool) – If True, assumes causal attention masking; an error is raised if both attn_mask and is_causal are set.
  • scale (Optional[float]) – Scaling factor applied prior to softmax. If None, the default value is 1 / sqrt(E).
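A small sketch of both mask types (hypothetical shapes; the boolean mask broadcasts from (L, S) to (N, ..., L, S)):

import torch
import torch.nn.functional as F

query = torch.randn(1, 2, 4, 8)   # (N, num_heads, L, E)
key = torch.randn(1, 2, 6, 8)     # (N, num_heads, S, E)
value = torch.randn(1, 2, 6, 8)   # (N, num_heads, S, Ev)

# Boolean mask: True means "this query position may attend to this key position".
bool_mask = torch.ones(4, 6, dtype=torch.bool).tril(diagonal=0)
out_bool = F.scaled_dot_product_attention(query, key, value, attn_mask=bool_mask)

# Float mask: added to the attention scores; -inf blocks a position entirely.
float_mask = torch.zeros(4, 6)
float_mask[:, -1] = float("-inf")  # e.g. ignore the last key position
out_float = F.scaled_dot_product_attention(query, key, value, attn_mask=float_mask)

print(out_bool.shape, out_float.shape)  # torch.Size([1, 2, 4, 8]) torch.Size([1, 2, 4, 8])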
Returns

Attention output; shape (N, ..., L, Ev).

Return type

output (Tensor)

Shape legend:
  • N: Batch size
  • ...: Any number of other batch dimensions (optional)
  • S: Source sequence length
  • L: Target sequence length
  • E: Embedding dimension of the query and key
  • Ev: Embedding dimension of the value

Examples:

>>> # Optionally use the context manager to ensure one of the fused kernels is run
>>> query = torch.rand(32, 8, 128, 64, dtype=torch.float16, device="cuda")
>>> key = torch.rand(32, 8, 128, 64, dtype=torch.float16, device="cuda")
>>> value = torch.rand(32, 8, 128, 64, dtype=torch.float16, device="cuda")
>>> with torch.backends.cuda.sdp_kernel(enable_math=False):
...     F.scaled_dot_product_attention(query, key, value)
