Understanding the Transformer self-attention mechanism through PyTorch code

import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiHeadAttention(nn.Module):
    def __init__(self, d_model, num_heads):
        super(MultiHeadAttention, self).__init__()
        assert d_model % num_heads == 0  # d_model must be divisible by num_heads
        self.d_model = d_model
        self.num_heads = num_heads
        self.d_head = d_model // num_heads  # feature dimension of each head
        # Linear projections
        self.W_Q = nn.Linear(d_model, d_model)  # (d_model, d_model)
        self.W_K = nn.Linear(d_model, d_model)  # (d_model, d_model)
        self.W_V = nn.Linear(d_model, d_model)  # (d_model, d_model)
        self.W_O = nn.Linear(d_model, d_model)  # (d_model, d_model)

    def forward(self, Q, K, V, mask=None):
        batch_size = Q.size(0)  # (batch_size, seq_len, d_model)
        # Project the inputs to Q, K, V
        Q = self.W_Q(Q)  # (batch_size, seq_len, d_model)
        K = self.W_K(K)  # (batch_size, seq_len, d_model)
        V = self.W_V(V)  # (batch_size, seq_len, d_model)
        # Split Q, K, V into separate heads
        Q = Q.view(batch_size, -1, self.num_heads, self.d_head).transpose(1, 2)  # (batch_size, num_heads, seq_len, d_head)
        K = K.view(batch_size, -1, self.num_heads, self.d_head).transpose(1, 2)  # (batch_size, num_heads, seq_len, d_head)
        V = V.view(batch_size, -1, self.num_heads, self.d_head).transpose(1, 2)  # (batch_size, num_heads, seq_len, d_head)
        # Compute attention
        scores = torch.matmul(Q, K.transpose(-2, -1)) / (self.d_head ** 0.5)  # (batch_size, num_heads, seq_len, seq_len)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float('-inf'))  # (batch_size, num_heads, seq_len, seq_len)
        attention_weights = F.softmax(scores, dim=-1)  # (batch_size, num_heads, seq_len, seq_len)
        attention_output = torch.matmul(attention_weights, V)  # (batch_size, num_heads, seq_len, d_head)
        # Concatenate the outputs of all heads
        attention_output = attention_output.transpose(1, 2).contiguous().view(batch_size, -1, self.d_model)  # (batch_size, seq_len, d_model)
        # Final linear projection
        output = self.W_O(attention_output)  # (batch_size, seq_len, d_model)
        return output

# Test MultiHeadAttention
d_model = 512
num_heads = 8
batch_size = 64
seq_len = 10

mha = MultiHeadAttention(d_model, num_heads)
Q = torch.randn(batch_size, seq_len, d_model)  # (64, 10, 512)
K = torch.randn(batch_size, seq_len, d_model)  # (64, 10, 512)
V = torch.randn(batch_size, seq_len, d_model)  # (64, 10, 512)
output = mha(Q, K, V)  # (64, 10, 512)
print(output.shape)  # Output: torch.Size([64, 10, 512])
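
The test above runs self-attention without a mask. As a minimal sketch (not part of the original listing, reusing mha, Q, K, V and seq_len defined above), a causal mask can be built with torch.tril; its shape (1, 1, seq_len, seq_len) broadcasts over the batch and head dimensions inside forward:

# Hypothetical extension of the test: causal (look-ahead) mask.
# Positions where mask == 0 are set to -inf before the softmax.
causal_mask = torch.tril(torch.ones(seq_len, seq_len)).unsqueeze(0).unsqueeze(0)  # (1, 1, seq_len, seq_len)
masked_output = mha(Q, K, V, mask=causal_mask)  # (64, 10, 512)
print(masked_output.shape)  # torch.Size([64, 10, 512])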

Detailed explanation of the comments

nn.Linear initialization:

self.W_Q, self.W_K, self.W_V, self.W_O are linear layers that map the input feature dimension to the same feature dimension (d_model); each weight matrix has shape (d_model, d_model).
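
As a quick check (a sketch assuming d_model = 512 as in the test below), such a layer's weight matrix is indeed (d_model, d_model), plus a bias vector of size d_model:

import torch.nn as nn

proj = nn.Linear(512, 512)       # same configuration as W_Q / W_K / W_V / W_O
print(proj.weight.shape)         # torch.Size([512, 512])
print(proj.bias.shape)           # torch.Size([512])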

Forward pass:

Q, K, V enter with shape (batch_size, seq_len, d_model); after the linear projections they keep the same shape.
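
This works because nn.Linear acts only on the last dimension, so a 3-D tensor passes through with its shape preserved; a minimal sketch using the test sizes above:

import torch
import torch.nn as nn

x = torch.randn(64, 10, 512)     # (batch_size, seq_len, d_model)
W_Q = nn.Linear(512, 512)
print(W_Q(x).shape)              # torch.Size([64, 10, 512]) -- shape unchanged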

The view operation reshapes Q, K, V from (batch_size, seq_len, d_model) to (batch_size, seq_len, num_heads, d_head), and the subsequent transpose(1, 2) turns this into (batch_size, num_heads, seq_len, d_head).
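
A shape-only sketch of this two-step split, using batch_size=64, seq_len=10, num_heads=8, d_head=64 from the test above:

import torch

x = torch.randn(64, 10, 512)                 # (batch_size, seq_len, d_model)
x = x.view(64, -1, 8, 64)                    # (batch_size, seq_len, num_heads, d_head)
print(x.shape)                               # torch.Size([64, 10, 8, 64])
x = x.transpose(1, 2)                        # swap the seq_len and num_heads axes
print(x.shape)                               # torch.Size([64, 8, 10, 64])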

scores is obtained by taking the dot product of the queries and keys and dividing by sqrt(d_head); its shape is (batch_size, num_heads, seq_len, seq_len).
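
A sketch (with dummy random tensors of the shapes above) showing that each score entry is the dot product of one query row with one key row, scaled by sqrt(d_head) = 8:

import torch

Q = torch.randn(64, 8, 10, 64)               # (batch_size, num_heads, seq_len, d_head)
K = torch.randn(64, 8, 10, 64)
scores = torch.matmul(Q, K.transpose(-2, -1)) / (64 ** 0.5)
print(scores.shape)                          # torch.Size([64, 8, 10, 10])
# Entry [b, h, i, j] is the scaled dot product of query i with key j:
manual = (Q[0, 0, 2] * K[0, 0, 5]).sum() / (64 ** 0.5)
print(torch.allclose(scores[0, 0, 2, 5], manual))  # True (up to float tolerance)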

attention_weights are the attention weights after applying softmax; their shape is also (batch_size, num_heads, seq_len, seq_len).
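
Because the softmax is taken over the last dimension, every row of attention weights is a probability distribution that sums to 1; a quick check with the same shapes:

import torch
import torch.nn.functional as F

scores = torch.randn(64, 8, 10, 10)          # (batch_size, num_heads, seq_len, seq_len)
attention_weights = F.softmax(scores, dim=-1)
print(attention_weights.shape)               # torch.Size([64, 8, 10, 10])
# Each row is a distribution over the 10 key positions:
print(attention_weights[0, 0, 0].sum())      # tensor(1.0000), up to float error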

attention_output is the weighted sum over the value matrix; its shape is (batch_size, num_heads, seq_len, d_head).
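
A sketch illustrating the weighted sum over values: with identity attention weights (each position attending only to itself) the result is exactly V, and each head produces a (seq_len, d_head) output:

import torch

V = torch.randn(64, 8, 10, 64)               # (batch_size, num_heads, seq_len, d_head)
# Identity weights: each position attends only to itself,
# so the weighted sum simply returns the corresponding row of V.
eye_weights = torch.eye(10).expand(64, 8, 10, 10)
attention_output = torch.matmul(eye_weights, V)
print(attention_output.shape)                # torch.Size([64, 8, 10, 64])
print(torch.allclose(attention_output, V))   # True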

The outputs of all attention heads are merged back into (batch_size, seq_len, d_model), and the final linear projection self.W_O produces the output with shape (batch_size, seq_len, d_model).
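
Merging reverses the earlier split: transpose back to (batch_size, seq_len, num_heads, d_head) and collapse the last two axes into d_model; a sketch with the same sizes:

import torch
import torch.nn as nn

attention_output = torch.randn(64, 8, 10, 64)          # (batch_size, num_heads, seq_len, d_head)
merged = attention_output.transpose(1, 2).contiguous().view(64, -1, 512)
print(merged.shape)                                    # torch.Size([64, 10, 512])
W_O = nn.Linear(512, 512)                              # final output projection
print(W_O(merged).shape)                               # torch.Size([64, 10, 512])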

Hopefully this helps you better understand each step of the code and how the tensor shapes change!

 
