tf.nn.dynamic_rnn creates a recurrent neural network from a given cell, which is an instance of RNNCell, and returns a pair consisting of:

outputs: The RNN output Tensor
state: The final state

Here is a toy recurrent neural network as well as its output[*]:
import numpy as np
import tensorflow as tf
dim = 3     # input feature dimension
hidden = 4  # number of LSTM units

lengths = tf.placeholder(dtype=tf.int32, shape=[None])
inputs = tf.placeholder(dtype=tf.float32, shape=[None, None, dim])
cell = tf.nn.rnn_cell.LSTMCell(hidden, state_is_tuple=True)
output, final_state = tf.nn.dynamic_rnn(
    cell, inputs, lengths, dtype=tf.float32)

# Batch of 3 sequences, each padded to 3 timesteps with 3 features per step.
inputs_ = np.asarray([[[0, 0, 0], [1, 1, 1], [2, 2, 2]],
                      [[6, 6, 6], [7, 7, 7], [8, 8, 8]],
                      [[9, 9, 9], [10, 10, 10], [11, 11, 11]]],
                     dtype=np.float32)
# Effective length of each sequence in the batch.
lengths_ = np.asarray([3, 1, 2], dtype=np.int32)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_, final_state_ = sess.run(
        [output, final_state],
        {inputs: inputs_, lengths: lengths_})
    print('hidden states:')
    print(output_)
    print('final state :')
    print(final_state_)
Output:
hidden states:
[[[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[-3.0096283e-02 1.6747195e-01 2.3113856e-02 -4.5677904e-02]
[-6.0795926e-02 3.5036778e-01 6.0140129e-02 -1.6039203e-01]]
[[-2.1957003e-03 8.1749000e-02 1.2620161e-02 -2.8342882e-01]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]]
[[-1.7376180e-04 2.7789388e-02 3.1011081e-03 -3.5858861e-01]
[-2.5059914e-04 4.5771234e-02 4.5708413e-03 -6.5035087e-01]
[ 0.0000000e+00 0.0000000e+00 0.0000000e+00 0.0000000e+00]]]
final state :
LSTMStateTuple(
c=array([[-1.0705842e-01, 5.2945197e-01, 1.5602852e-01, -2.5641304e-01],
[-3.3140955e-03, 8.6112522e-02, 7.2794281e-02, -3.6088336e-01],
[-3.4701003e-04, 4.6147645e-02, 6.7321308e-02, -8.6465287e-01]],
dtype=float32),
h=array([[-6.0795926e-02, 3.5036778e-01, 6.0140129e-02, -1.6039203e-01],
[-2.1957003e-03, 8.1749000e-02, 1.2620161e-02, -2.8342882e-01],
[-2.5059914e-04, 4.5771234e-02, 4.5708413e-03, -6.5035087e-01]],
dtype=float32))
My understanding is as follows: the final state contains the last cell state of each sequence (c component) as well as the last hidden state of each sequence (h component); thus, am I not supposed to get the same values in the h component of the final state and in the last hidden state of each sequence?
[*] Code largely inspired by this post
The h component of the final state contains the last output of your LSTM, and in the case of dynamic_rnn "last" takes into account the sequence lengths you pass as a parameter (lengths).
As you can see in your example, final_state.h[0] is equal to output[0][2] because the length of the first example is 3, final_state.h[1] is equal to output[1][0] because the length of your second example is 1, and so on.
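A minimal sketch of that check (reusing output_, final_state_ and lengths_ from the snippet above; the variable names introduced here are just illustrative): it gathers, for each sequence i, the output at timestep lengths_[i] - 1 and compares it with final_state_.h.

# Gather the output at the last valid timestep of each sequence and
# compare it with the h component of the final state.
last_outputs = np.stack(
    [output_[i, lengths_[i] - 1] for i in range(output_.shape[0])])
np.testing.assert_allclose(last_outputs, final_state_.h, rtol=1e-6)
print('final_state.h matches output[i, lengths[i] - 1] for every sequence')

Outputs at timesteps beyond a sequence's length are simply zero, which is why the padded rows in output_ above are all zeros.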