
How do I predict and train LSTM with different batch sizes?

I am using TensorFlow, not Keras. During training I set the batch size to 128, but for prediction it needs to be 1. The relevant code is below:

# Project the raw input into the hidden dimension before the LSTM.
input_tensor = tf.reshape(x, [-1, n_input])
input_rnn = tf.matmul(input_tensor, weights['in']) + biases['in']
input_rnn = tf.reshape(input_rnn, [-1, n_steps, n_hidden])

# Batch normalization between the input projection and the LSTM layer.
input_rnn = tf.layers.batch_normalization(input_rnn, training=is_train)
lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden, forget_bias=0.95)
# zero_state is built with a fixed batch_size (128 at training time).
init_state = lstm_cell.zero_state(batch_size, dtype=tf.float32)
output_rnn, final_states = tf.nn.dynamic_rnn(lstm_cell, input_rnn, initial_state=init_state, dtype=tf.float32,
                                             time_major=False)
# Transpose to time-major and unstack so the last time step can be taken.
output_rnn = tf.unstack(tf.transpose(output_rnn, [1, 0, 2]))

I apply batch normalization between the input layer and the LSTM layer. How can I make the trained model usable for prediction with a different batch size? I have seen ways to handle an LSTM batch size that differs between training and prediction, but almost all of them are for Keras and do not use batch normalization. I tried making batch_size a tf.placeholder, but then the prediction accuracy is always 0.
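To make the question concrete, here is a cut-down, self-contained sketch of the kind of graph I mean, with batch_size as a placeholder. The shapes, optimizer, and random toy data are only stand-ins for illustration, not my real setup:

import numpy as np
import tensorflow as tf

n_input, n_steps, n_hidden, n_classes = 8, 10, 32, 2

x = tf.placeholder(tf.float32, [None, n_steps, n_input])
y = tf.placeholder(tf.float32, [None, n_classes])
is_train = tf.placeholder(tf.bool, [])
batch_size_ph = tf.placeholder(tf.int32, [])  # 128 for training, 1 for prediction

weights = {'in': tf.Variable(tf.random_normal([n_input, n_hidden])),
           'out': tf.Variable(tf.random_normal([n_hidden, n_classes]))}
biases = {'in': tf.Variable(tf.zeros([n_hidden])),
          'out': tf.Variable(tf.zeros([n_classes]))}

input_rnn = tf.matmul(tf.reshape(x, [-1, n_input]), weights['in']) + biases['in']
input_rnn = tf.reshape(input_rnn, [-1, n_steps, n_hidden])
input_rnn = tf.layers.batch_normalization(input_rnn, training=is_train)

lstm_cell = tf.nn.rnn_cell.BasicLSTMCell(n_hidden, forget_bias=0.95)
init_state = lstm_cell.zero_state(batch_size_ph, dtype=tf.float32)
output_rnn, _ = tf.nn.dynamic_rnn(lstm_cell, input_rnn, initial_state=init_state, time_major=False)
outputs = tf.unstack(tf.transpose(output_rnn, [1, 0, 2]))  # list of n_steps tensors
logits = tf.matmul(outputs[-1], weights['out']) + biases['out']

loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(labels=y, logits=logits))
# tf.layers.batch_normalization only updates its moving mean/variance if the
# train op depends on the UPDATE_OPS collection.
update_ops = tf.get_collection(tf.GraphKeys.UPDATE_OPS)
with tf.control_dependencies(update_ops):
    train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    xb = np.random.randn(128, n_steps, n_input).astype(np.float32)
    yb = np.eye(n_classes)[np.random.randint(n_classes, size=128)].astype(np.float32)
    # Training step: batch size 128, batch norm in training mode.
    sess.run(train_op, {x: xb, y: yb, is_train: True, batch_size_ph: 128})
    # Prediction: batch size 1, batch norm in inference mode (moving statistics).
    sess.run(logits, {x: xb[:1], is_train: False, batch_size_ph: 1})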
