I was reading about fpectl (Floating point exception control) in the Python documentation. It warns users that

> ...its usage is discouraged and may be dangerous except in the hands of experts. See also the section Limitations and other considerations on limitations for more details.

Why? Reading the linked page did not help me.
For many reasons, it's unsafe in the way that "not driving defensively" is unsafe. You might do it and have no trouble whatever. Or you might get unlucky and run off the road, across the median, and into oncoming traffic.
Just a few reasons why it's chaotic at best, and so difficult to reason about that using it to create safe, reliable code is extremely hard:
- It's not thread-safe, meaning it will act differently in a threaded vs. non-threaded program, and probably differently again in just about every threaded program.
- The support code it needs and its behavior can vary widely based on the specific floating point implementation. So if you get your code working right on one system, that doesn't mean you can safely distribute that code to other systems, or to users who might run it on a different CPU family (e.g. ARM vs. x86 vs. POWER), or on a different implementation of the same CPU family (e.g. AMD vs. Intel x86).
- Floating point code is remarkably portable in the IEEE-754 era. But there are still subtle imperfections that those writing numerical algorithms take pains to avoid. Understanding boundary conditions, how floating point approximations behave (or fail to behave) at those edges, and how different FP implementations handle them is still important (the first sketch after this list shows two such edges). Reducing variables is a key way of restricting risk when writing those algorithms. Using a poorly-understood, poorly-documented, rarely-used exception handling module that isn't part of standard Python is the opposite of "reducing risk."
- The risk involves data integrity. It's possible that squelching exceptions will let bad or inconsistent data into your further calculations. Exceptions are like pain: a systemic signal that something's wrong, a "don't do that!" Turning them off has a long history in numerical processing of leading to uncaught data corruption. In this age of widespread NaN and Inf support, that's less likely than in ages past, since there are in-band signaling mechanisms (in the data values themselves) that can serve as alternate indicators. But without studying the source code, do you really know what happens to the data in place of an OverflowError? Can you guarantee it will be the same outcome if it's processed in a vector rather than a scalar FP instruction? Doubtful. Will your code know the difference between a NaN that results from a squelched numerical exception and a run-of-the-mill missing datum? Probably not (the second sketch after this list demonstrates the ambiguity). So many new uncertain outcomes!
- The documentation itself basically warns "you're going to need to
drop down and study the source code to see how this really works."
That is a significant, triple-black-diamond path.
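To make the boundary-condition point concrete, here's a minimal pure-Python sketch (no fpectl involved) of two edges where IEEE-754 doubles, which CPython floats use on essentially all current platforms, go quietly wrong:

```python
# Absorption at the edges: near 1e16 the gap between adjacent doubles
# is 2.0, so adding 1.0 is silently lost to rounding.
print((1e16 + 1.0) - 1e16)    # 0.0, not 1.0 -- and no exception raised

# Gradual underflow: values slide into subnormals and lose precision
# long before they reach zero, again with no signal by default.
tiny = 5e-324                  # smallest positive subnormal double
print(tiny / 2)                # 0.0 -- quietly underflowed to zero
```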
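And a sketch of the data-integrity worry. Stock CPython already signals the same mathematical overflow two different ways depending on the code path, and an in-band NaN carries no record of where it came from:

```python
import math

# The same mathematical overflow, reported two different ways:
print(1e308 * 10)                    # inf -- in-band value, no exception
try:
    math.exp(1000)                   # out-of-band OverflowError
except OverflowError as exc:
    print("math.exp(1000) raised:", exc)

# An in-band NaN carries no history: a NaN produced by a bad calculation
# is indistinguishable from one used as a missing-value marker.
missing = float("nan")               # placeholder for an absent measurement
bad = float("inf") - float("inf")    # NaN from an invalid operation
print(math.isnan(missing), math.isnan(bad))   # True True -- can't tell apart
```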
They don't enable it in the standard builds for a reason; the final sketch below shows a guarded way to check whether it's even present. If you don't already understand the numerical analysis dos and don'ts of traveling that path, it's not for you.
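For completeness, here's what using it would look like. turnon_sigfpe() and turnoff_sigfpe() are the documented fpectl API; whether the multiplication below actually traps is an assumption about how the interpreter was compiled (it requires a CPython built with --with-fpectl, and the module was removed entirely in Python 3.7):

```python
# fpectl exists only in an interpreter compiled with --with-fpectl;
# standard builds don't ship it.
try:
    import fpectl
except ImportError:
    fpectl = None
    print("fpectl not available in this build -- the usual case")

if fpectl is not None:
    fpectl.turnon_sigfpe()         # documented API: start trapping SIGFPE
    try:
        # Assumption: this build wraps float operations in the PyFPE
        # protection macros, so the overflow raises instead of giving inf.
        print(1e308 * 1e308)
    except FloatingPointError as exc:
        print("trapped:", exc)
    finally:
        fpectl.turnoff_sigfpe()    # documented API: restore default behavior
```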