Based on this code, replace the GRU with a BiGRU, and add a multi-head attention mechanism after the BiGRU: def LSTNet(trainX1,trainX2,trainY,config): input1 = Input(shape=(trainX1.s

小小草 LV1 · posted 2023-09-23 23:40
OK, here is the modified code:

```python
import os
os.environ['TF_KERAS'] = '1'  # let keras-self-attention run on top of tf.keras

from tensorflow.keras.layers import (Input, Dense, Bidirectional, GRU, Conv1D,
                                     MultiHeadAttention, Dropout, Lambda,
                                     concatenate, add)
from tensorflow.keras.models import Model
from keras_self_attention import SeqSelfAttention  # pip install keras-self-attention


def create_model(trainX1, trainX2, trainY, config):
    input1 = Input(shape=(trainX1.shape[1], trainX1.shape[2]))
    input2 = Input(shape=(trainX2.shape[1], trainX2.shape[2]))

    # Calling one Conv1D instance on both inputs ties the weights, which is
    # what the original two-copies-plus-set_weights workaround was after
    # (set_weights also fails on a layer that has not been built yet).
    conv = Conv1D(filters=48, kernel_size=6, strides=1, activation='relu')
    conv1out = conv(input1)
    conv2out = conv(input2)

    # Branch 1: BiGRU (in TF2 the plain GRU layer picks the cuDNN kernel
    # automatically, so CuDNNGRU is no longer needed), followed by
    # self-attention and multi-head attention over the sequence.
    # tf.keras MultiHeadAttention requires key_dim; 64 matches the GRU width.
    bigru1out = Bidirectional(GRU(64, return_sequences=True))(conv1out)
    bigru1out = SeqSelfAttention(attention_activation='sigmoid')(bigru1out)
    bigru1out, attn_w1 = MultiHeadAttention(num_heads=config.num_heads, key_dim=64)(
        bigru1out, bigru1out, return_attention_scores=True)
    bigru1out = Dropout(config.dropout)(bigru1out)

    # Branch 2: same structure for the second input.
    bigru2out = Bidirectional(GRU(64, return_sequences=True))(conv2out)
    bigru2out = SeqSelfAttention(attention_activation='sigmoid')(bigru2out)
    bigru2out, attn_w2 = MultiHeadAttention(num_heads=config.num_heads, key_dim=64)(
        bigru2out, bigru2out, return_attention_scores=True)
    bigru2out = Dropout(config.dropout)(bigru2out)

    bigru_out = concatenate([bigru1out, bigru2out])
    # Keep only the last time step so the Dense output is 2-D and matches
    # trainY's (samples, horizon) shape.
    bigru_out = Lambda(lambda x: x[:, -1, :])(bigru_out)
    output = Dense(trainY.shape[1])(bigru_out)
    # highway: a Dense layer emulates the AR autoregressive process, adding a
    # linear component to the prediction so the output can track the input's scale
```
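The posted answer breaks off at the highway comment. Purely as a sketch of how the function might be finished in the usual LSTNet style, the autoregressive highway can be a Dense layer over the last few raw input steps; `config.highway_window` and the compile settings below are assumptions, not from the original post.

```python
    # Hypothetical continuation of create_model() -- the original answer is
    # truncated, so everything below is an assumption in LSTNet style.
    # Linear AR highway over the last `config.highway_window` steps of the
    # first feature of input1 (config.highway_window is an assumed field).
    highway = Lambda(lambda x: x[:, -config.highway_window:, 0])(input1)
    highway = Dense(trainY.shape[1])(highway)  # pure linear projection
    output = add([output, highway])  # nonlinear part + linear AR part

    model = Model(inputs=[input1, input2], outputs=output)
    model.compile(loss='mse', optimizer='adam')
    return model
```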
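For a quick sanity check, here is a hypothetical smoke test with made-up shapes and a minimal `config` object; none of these values come from the original post.

```python
import numpy as np
from types import SimpleNamespace

# Made-up data: 500 samples, 24 time steps, 8 features per input.
trainX1 = np.random.rand(500, 24, 8).astype('float32')
trainX2 = np.random.rand(500, 24, 8).astype('float32')
trainY = np.random.rand(500, 1).astype('float32')

# Assumed config fields, matching those the model function reads.
config = SimpleNamespace(num_heads=4, dropout=0.2, highway_window=6)

model = create_model(trainX1, trainX2, trainY, config)
model.summary()
model.fit([trainX1, trainX2], trainY, epochs=2, batch_size=32)
```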