For example, I have:

- an input tensor (input), shaped (?, 10), dtype=float32, where the first dimension is the batch size;
- a mask tensor (mask), shaped (?, 10), where mask[sample_number] looks like [True, True, False, ...] and marks the positions to average for that sample;
- a label tensor (avg_label), shaped (?,), holding the correct mean value of the masked positions for each sample.
I want to train the model, but I can't find a good way to compute the output. The tf.reduce_... functions (e.g. tf.reduce_mean) don't seem to take a mask argument. I tried tf.boolean_mask, but it flattens the output to a single dimension, dropping the sample_number dimension, so it can no longer distinguish between samples.
I considered tf.where, like:

masked = tf.where(mask, input, tf.zeros(tf.shape(input)))
avg_out = tf.reduce_mean(masked, axis=1)
loss = tf.pow(avg_out - avg_label, 2)

But the code above certainly doesn't work, because setting the False positions to 0 changes the average: the sum is still divided by 10 rather than by the number of True positions. If I use np.nan instead, the result is always nan. I wonder whether there is a value that represents "absent" in reduce operations.

How can I do this?
You can use tf.ragged.boolean_mask to keep the dimensionality:
tf.reduce_mean(tf.ragged.boolean_mask(x, mask=mask), axis=1)
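For example, a minimal sketch assuming TensorFlow 2.x eager execution; x, mask, and avg_label below are made-up values for illustration:

import tensorflow as tf

x = tf.constant([[1., 2., 3.], [4., 5., 6.]])                    # (batch, 3) here instead of (?, 10)
mask = tf.constant([[True, True, False], [False, True, True]])
avg_label = tf.constant([1.5, 5.5])

# Keeps one variable-length row per sample: [[1., 2.], [5., 6.]]
masked = tf.ragged.boolean_mask(x, mask=mask)

# Averages only the surviving values of each row: [1.5, 5.5]
avg_out = tf.reduce_mean(masked, axis=1)

loss = tf.square(avg_out - avg_label)

Note that a sample whose mask is all False yields an empty row, and its mean will come out as NaN, so such rows would need special handling.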
You can use tf.boolean_mask.
In [17]: tensor = tf.constant([[1, 2], [3, 4], [5, 6]])
In [18]: mask = np.array([[True, False], [False, True], [True, False]])
In [19]: masked = tf.boolean_mask(tensor, mask)
In [20]: masked.eval()
Out[20]: array([1, 4, 5], dtype=int32)
In [21]: tf.reduce_mean(masked).eval()
Out[21]: 3
To select the values at the False positions instead, you can use tf.logical_not to invert the mask.
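For instance, continuing the session above with the same tensor and mask, something like:

In [22]: tf.boolean_mask(tensor, tf.logical_not(mask)).eval()
Out[22]: array([2, 3, 6], dtype=int32)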