Suppose x is a tensor in PyTorch. One can either write:
x_lowerthanzero = x.lt(0)
or:
x_lowerthanzero = (x<0)
with seemingly identical results. Many other operations have PyTorch built-in equivalents: x.gt(0) for (x > 0), x.neg() for -x, x.mul(y) for x * y, and so on.
Is there a good reason to use one form over the other?
They are equivalent: `<` is simply a more readable alias.
Python operators have canonical function mappings, e.g.:
Algebraic operations
| Operation | Syntax | Function |
|---|---|---|
| Addition | a + b | add(a, b) |
| Subtraction | a - b | sub(a, b) |
| Multiplication | a * b | mul(a, b) |
| Division | a / b | truediv(a, b) |
| Exponentiation | a ** b | pow(a, b) |
| Matrix Multiplication | a @ b | matmul(a, b) |
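These mappings are exposed directly in Python's standard `operator` module. A quick sketch with plain Python numbers (the same functions work on any objects, including tensors, that implement the corresponding dunder methods):

```python
import operator

# Each arithmetic operator has a named function in the operator module.
assert operator.add(2, 3) == 2 + 3        # 5
assert operator.sub(7, 4) == 7 - 4        # 3
assert operator.mul(2, 3) == 2 * 3        # 6
assert operator.truediv(7, 2) == 7 / 2    # 3.5
assert operator.pow(2, 5) == 2 ** 5       # 32
```

(`operator.matmul` exists too, but built-in numbers do not support `@`, so it is omitted here.)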
Comparisons
| Operation | Syntax | Function |
|---|---|---|
| Ordering | a < b | lt(a, b) |
| Ordering | a <= b | le(a, b) |
| Equality | a == b | eq(a, b) |
| Inequality | a != b | ne(a, b) |
| Ordering | a >= b | ge(a, b) |
| Ordering | a > b | gt(a, b) |
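Likewise for the comparisons; note that these are the same names PyTorch uses for its tensor methods (`lt`, `le`, `eq`, `ne`, `ge`, `gt`). Sketched with plain Python values:

```python
import operator

# operator.lt(a, b) is the functional form of a < b, and so on.
assert operator.lt(1, 2) == (1 < 2)   # True
assert operator.le(2, 2) == (2 <= 2)  # True
assert operator.eq(3, 3) == (3 == 3)  # True
assert operator.ne(3, 4) == (3 != 4)  # True
assert operator.ge(4, 3) == (4 >= 3)  # True
assert operator.gt(5, 4) == (5 > 4)   # True
```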
You can check in the PyTorch source that these operators are indeed mapped to the correspondingly named torch functions, e.g.:
def __lt__(self, other):
    return self.lt(other)
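To see that dispatch mechanism in isolation, here is a toy class (hypothetical, not part of PyTorch) whose `__lt__` delegates to a named `lt` method, the same pattern as the Tensor snippet above, so that `a < b` calls `a.lt(b)` under the hood:

```python
class Box:
    """Toy container (hypothetical) mirroring how Tensor.__lt__ delegates to lt()."""
    def __init__(self, value):
        self.value = value

    def lt(self, other):
        # The named comparison, analogous to Tensor.lt.
        return self.value < other

    def __lt__(self, other):
        # The < operator simply forwards to the named method.
        return self.lt(other)

b = Box(3)
assert (b < 5) == b.lt(5) == True   # both forms give the same result
assert (b < 1) == b.lt(1) == False
```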