I am using a simple TensorFlow example to flip an image. I have tried both the reverse_sequence() and the reverse() methods, and the result is the same. If the reverse() method alone can flip the image, why should we use the reverse_sequence() method? I just want to know the primary difference between these two methods. Thanks in advance :)
import tensorflow as tf
import matplotlib.image as mpimg
import matplotlib.pyplot as plt
# First, load the image again
filename = "MarshOrchid.jpg"
image = mpimg.imread(filename)
# Create a TensorFlow Variable
x = tf.Variable(image, name='x')
height, width, depth = image.shape
model = tf.initialize_all_variables()
with tf.Session() as session:
    # x = tf.reverse(x, dims=[False, True, False], name="reverse")
    # Flip left-to-right: reverse all `width` elements of each of the `height` rows
    x = tf.reverse_sequence(x, [width] * height, 1, batch_dim=0)
    session.run(model)
    result = session.run(x)
    print(result)

plt.imshow(result)
plt.show()
The tf.reverse_sequence() op is designed to be used on sequential data that has been padded to make a dense tensor. Consider the following matrix, x, in which the non-zero elements appear to be "left-justified":
x = [[1 2 3 4 0 0 0]
     [1 2 3 0 0 0 0]
     [1 2 3 4 5 6 7]]
seq_lens = [4, 3, 7]
Evaluating tf.reverse_sequence(x, seq_lens, seq_dim=1, batch_dim=0) gives:
result = [[4 3 2 1 0 0 0]
          [3 2 1 0 0 0 0]
          [7 6 5 4 3 2 1]]
Note that the result still appears to be "left-justified".
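If you want to run that example yourself, here is a minimal sketch; it assumes the same pre-1.0 TensorFlow API as the question's code (tf.Session, the seq_dim/batch_dim keywords, int64 sequence lengths):

import tensorflow as tf

x = tf.constant([[1, 2, 3, 4, 0, 0, 0],
                 [1, 2, 3, 0, 0, 0, 0],
                 [1, 2, 3, 4, 5, 6, 7]])
seq_lens = tf.constant([4, 3, 7], dtype=tf.int64)

# Reverse only the first seq_lens[i] elements of row i; the padding stays in place.
reversed_rows = tf.reverse_sequence(x, seq_lens, seq_dim=1, batch_dim=0)

with tf.Session() as session:
    print(session.run(reversed_rows))
    # [[4 3 2 1 0 0 0]
    #  [3 2 1 0 0 0 0]
    #  [7 6 5 4 3 2 1]]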
By contrast, if you evaluate tf.reverse(x, [False, True]), the sequence lengths are ignored and you get a "right-justified" result:
result = [[0 0 0 4 3 2 1]
          [0 0 0 0 3 2 1]
          [7 6 5 4 3 2 1]]
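The same comparison as a minimal sketch, again assuming the pre-1.0 API used in the question, where tf.reverse takes a list of per-dimension bool flags:

import tensorflow as tf

x = tf.constant([[1, 2, 3, 4, 0, 0, 0],
                 [1, 2, 3, 0, 0, 0, 0],
                 [1, 2, 3, 4, 5, 6, 7]])

# Reverse every row end to end; the trailing zeros end up at the front.
flipped = tf.reverse(x, [False, True])

with tf.Session() as session:
    print(session.run(flipped))
    # [[0 0 0 4 3 2 1]
    #  [0 0 0 0 3 2 1]
    #  [7 6 5 4 3 2 1]]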
Reading the documentation of reverse and reverse_sequence, the difference is that the former reverses an entire axis (or several axes) wholesale, while the latter works entry by entry along the batch dimension and reverses only the first seq_lens[i] elements of each entry along the sequence dimension. In your image example, seq_lens is [width] * height, so every row is reversed over its full length and the two ops produce the same flipped image; they only differ when the sequences have varying (padded) lengths.
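As a sanity check, here is a small sketch under the same API assumptions, using a tiny synthetic array as a hypothetical stand-in for the JPEG, showing that the two flips agree whenever every "sequence" spans the full width:

import numpy as np
import tensorflow as tf

image = np.arange(24, dtype=np.uint8).reshape(2, 4, 3)  # stand-in for the loaded image
height, width, depth = image.shape

full_flip = tf.reverse(image, [False, True, False])
seq_flip = tf.reverse_sequence(image,
                               tf.constant([width] * height, dtype=tf.int64),
                               seq_dim=1, batch_dim=0)

with tf.Session() as session:
    a, b = session.run([full_flip, seq_flip])
    print(np.array_equal(a, b))  # True: both produce the same left-right flip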