How do I handle `attention1_output = MultiHeadAttention(num_heads=8, key_dim=64)(gru1out, gru1out)`; `attention1_output = LayerNorm…`?

小小草 LV1 · posted 2023-09-23 23:40
You can make this change where the `MultiHeadAttention` layer is defined. Find the definition of the `MultiHeadAttention` class and, in its `_masked_softmax` method, set the `axis` argument to `-1` so that the softmax is computed over the last axis:

```python
import tensorflow as tf
from tensorflow import keras


class MultiHeadAttention(keras.layers.Layer):
    def __init__(self, num_heads, key_dim, name="multi_head_attention"):
        super(MultiHeadAttention, self).__init__(name=name)
        self.num_heads = num_heads
        self.key_dim = key_dim
        self.query_dense = keras.layers.Dense(units=key_dim, name="query")
        self.key_dense = keras.layers.Dense(units=key_dim, name="key")
        self.value_dense = keras.layers.Dense(units=key_dim, name="value")
        self.combine_heads = keras.layers.Dense(units=key_dim, name="combine_heads")

    def _masked_softmax(self, attention_scores, attention_mask):
        """Softmax with a mask to prevent over-attention to padding tokens."""
        if attention_mask is not None:
            # Push masked positions to a large negative value so they
            # receive (almost) zero weight after the softmax.
            attention_scores = attention_scores - 1e9 * (1.0 - attention_mask)
        # Compute the softmax over the last axis (the key sequence length)
        return tf.nn.softmax(attention_scores, axis=-1)

    # ... (the rest of the layer, e.g. the call method, stays unchanged)
```
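A quick way to see what this masking plus `axis=-1` softmax does (the numbers below are made up for illustration): positions whose mask value is 0 end up with essentially zero attention weight, while the remaining weights still sum to 1 over the last (key) axis.

```python
import tensorflow as tf

# Made-up scores for one query attending to four keys; the last key is padding.
scores = tf.constant([[2.0, 1.0, 0.5, 3.0]])
mask = tf.constant([[1.0, 1.0, 1.0, 0.0]])   # 0 marks the padded position

masked_scores = scores - 1e9 * (1.0 - mask)
weights = tf.nn.softmax(masked_scores, axis=-1)

print(weights.numpy())         # weight of the padded position is ~0
print(weights.numpy().sum())   # the weights still sum to 1
```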
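For context, here is a minimal sketch of how the snippet from the title is typically wired up using the built-in `keras.layers.MultiHeadAttention` and `LayerNormalization` rather than the custom class above; the input feature size and the residual connection are assumptions made only to keep the example self-contained.

```python
from tensorflow import keras

# Assumed input: a padded sequence of 32-dimensional feature vectors.
inputs = keras.Input(shape=(None, 32))

# return_sequences=True keeps one output vector per timestep for the attention layer.
gru1out = keras.layers.GRU(64, return_sequences=True)(inputs)

# Self-attention: query and value are both the GRU output sequence,
# as in the snippet from the title.
attention1_output = keras.layers.MultiHeadAttention(
    num_heads=8, key_dim=64)(gru1out, gru1out)

# Residual connection plus layer normalization (an assumption here,
# shown because it is the usual pattern around attention blocks).
attention1_output = keras.layers.LayerNormalization()(
    keras.layers.Add()([attention1_output, gru1out]))

model = keras.Model(inputs, attention1_output)
model.summary()
```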