One of the major projects ongoing in the current decade is moving the standard math library functions to being fully correctly rounded, as opposed to the traditional accuracy target of ~1 ULP (the last bit may be off).
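To make "the last bit may be off" concrete, here's a small illustration (my own sketch, not from the thread): a result that misses correct rounding by 1 ulp is the immediately adjacent representable double, which `nextafter` can step to.

```c
/* Rough illustration: a 1-ulp error means returning the representable double
 * immediately adjacent to the correctly-rounded one.  The "correct" value here
 * is just a stand-in for whatever the exact result rounds to. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double correct = 0.1;                                  /* pretend this is the correctly-rounded answer */
    double off_by_one_ulp = nextafter(correct, INFINITY);  /* the next double up */
    printf("correct:  %.17g\n", correct);
    printf("1 ulp up: %.17g\n", off_by_one_ulp);
    printf("gap:      %g\n", off_by_one_ulp - correct);
    return 0;
}
```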
For single-precision unary functions, it's easy enough to just exhaustively test every single input (there's only 4 billion of them). But double precision has prohibitively many inputs to test, so you have to resort to actual proof techniques to prove correct rounding for double-precision functions.
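For a sense of what "exhaustively test every single input" looks like, here's a rough sketch (mine, not from the paper) that walks all 2^32 float bit patterns of `expf` and compares against a reference. It assumes the double-precision `exp` is an accurate enough oracle, which isn't strictly true; real verification efforts use an arbitrary-precision library such as MPFR for the reference value.

```c
/* Sketch: exhaustive check of a single-precision unary function.
 * Assumption: double exp() is a good-enough reference for expf();
 * real projects use an arbitrary-precision oracle (e.g. MPFR) instead. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    uint64_t mismatches = 0;
    /* Walk every one of the 2^32 float bit patterns. */
    for (uint64_t bits = 0; bits <= 0xFFFFFFFFu; bits++) {
        uint32_t b32 = (uint32_t)bits;
        float x;
        memcpy(&x, &b32, sizeof x);

        float got = expf(x);
        /* "Correctly rounded" reference under the stated assumption. */
        float want = (float)exp((double)x);

        /* Compare bit patterns so signed zeros are caught; treat any NaN == any NaN. */
        uint32_t gb, wb;
        memcpy(&gb, &got, sizeof gb);
        memcpy(&wb, &want, sizeof wb);
        if (gb != wb && !(isnan(got) && isnan(want)))
            mismatches++;
    }
    printf("inputs where expf() differs from the reference: %llu\n",
           (unsigned long long)mismatches);
    return 0;
}
```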
For what it’s worth, this is basically the first word you learn when discussing numerical precision; and I mean word—nobody thinks of it as an abbreviation, to the point that it’s very often written in lower case. So welcome to the club.
To me this feels like wasted effort spent solving the wrong problem. The extra half ulp of error makes no difference to the accuracy of calculations. The real problem is that languages traditionally rely on an OS-provided libm, leading to cross-architecture differences. If languages instead used a specific libm, all of these problems would vanish.
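If you want to see those cross-architecture/libm differences for yourself, something like the following (a minimal sketch of my own) prints the exact bit patterns of a few transcendental results so the output can be diffed between two machines; any mismatch means the two math libraries rounded differently.

```c
/* Minimal sketch: dump exact bit patterns of a few libm results so the output
 * can be diffed across machines or libm versions. */
#include <math.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

static void dump(const char *name, double v) {
    uint64_t bits;
    memcpy(&bits, &v, sizeof bits);
    printf("%-11s %.17g  0x%016llx\n", name, v, (unsigned long long)bits);
}

int main(void) {
    dump("sin(1e22)",  sin(1e22));      /* large-argument reduction is a classic divergence point */
    dump("exp(700)",   exp(700.0));
    dump("pow(2,0.1)", pow(2.0, 0.1));
    return 0;
}
```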
Standardizing on a particular libm essentially locks out any further optimizations, because that libm's implementation quirks would have to be followed exactly. By comparison, the "most correct" (0.5 ulp) answer is easy to standardize and agree upon.
Many of the conversions so far have been clearly faster. I don't think anything has been merged which shows a clear performance regression, at least not on CPUs with FMA support.
Yeah, one of the Trackmania games features a nominally deterministic physics engine, allowing replays to be reconstructed from a recorded sequence of inputs... except the physics engine relies on libc transcendental functions. Players are generally on Windows, but the backend servers doing anti-cheat validation via replays run Linux. This resulted in false cheat detections when the Linux server was running a glibc from before the rounding fixes, and as a result the guy's world record kept being flagged as a cheat. It's a pretty good video with a lot of detail on how they narrowed it down to specific glibc versions, etc.
As the paper mentions, this particular routine was the work of Alexei Sibidanov, though Zimmermann seems to have been maintaining it since it was contributed. (Sibidanov doesn't work for Red Hat either, though.)
The author works at a French university. Some French researchers do choose to cross-post to arXiv (and Zimmermann may have too, I haven’t checked), but HAL is the default.