
Conversation

@JacobHass8

Implementation of log incomplete gamma function using asymptotic approximations where the incomplete gamma function underflows. See #1173 and #1338.
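For a sense of the intended usage, here is a minimal sketch (assuming the lgamma_q(a, x) signature exercised by the spot checks later in this thread; the expected value for a = 5, x = 1000 comes from one of those checks):

#include <cmath>
#include <iostream>
#include <boost/math/special_functions/gamma.hpp>

int main()
{
    double a = 5.0, x = 1000.0;
    // gamma_q(a, x) is of order 1e-424 here, far below the smallest double,
    // so it underflows to zero and taking the log afterwards gives -inf:
    std::cout << std::log(boost::math::gamma_q(a, x)) << std::endl; // -inf
    // The new function computes log(Q(a, x)) directly via the asymptotic
    // expansion and stays finite:
    std::cout << boost::math::lgamma_q(a, x) << std::endl; // roughly -975.543
    return 0;
}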

@JacobHass8
Author

@jzmaddock I've tried to add error checking and the promotion boilerplate based on your comment #1338 (comment). Does this look okay for a start?

@jzmaddock
Collaborator

Thanks @JacobHass8, that looks good to me. BTW the separation between lgamma_incomplete_imp_final and lgamma_incomplete_imp shouldn't be needed in this case: that was a hack Matt introduced for some functions to work around the lack of recursion support in some GPU contexts, but there's no recursion here, so we should be good :)

Some tests and docs and hopefully this should be good to go! Thanks for this.

@JacobHass8
Author

JacobHass8 commented Dec 31, 2025

Some tests and docs and hopefully this should be good to go! Thanks for this.

What file should I put the tests in, math/tests/test_igamma.hpp? Are the spot checks I've implemented so far sufficient?

@jzmaddock
Collaborator

Drone tests were failing for 128-bit long double platforms, so I've updated the test data with values from WolframAlpha in case the data was truncated to double precision somewhere along the way.

Also, not sure why no GitHub Actions are being run - anyone have any ideas? @mborland?

With regard to further tests, the area most likely to fail would be where the implementation first switches to the log asymptotic expansion, and at higher precision (we should support 128-bit long doubles at least; I'm less concerned about full arbitrary precision just yet).
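For illustration, one way to exercise the region around the switch-over without relying on externally tabulated values is to cross-check the new function against log(gamma_q) at points where Q(a, x) is still comfortably representable. A rough sketch only - the arguments below are placeholders rather than the actual switch points, and tolerance is the percentage tolerance already used by the existing spot checks:

   // Hypothetical cross-check: where gamma_q(a, x) does not underflow,
   // the direct log and lgamma_q should agree closely.
   long double a = 100.0L;
   for (long double x : { 150.0L, 200.0L, 250.0L })
   {
      long double expected = std::log(boost::math::gamma_q(a, x));
      BOOST_CHECK_CLOSE(::boost::math::lgamma_q(a, x), expected, tolerance);
   }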

@jzmaddock
Collaborator

Ah... I needed to approve the github run: now done!

@codecov

codecov bot commented Jan 1, 2026

Codecov Report

✅ All modified and coverable lines are covered by tests.
✅ Project coverage is 95.29%. Comparing base (f6be8e8) to head (2a6fb33).

@@           Coverage Diff            @@
##           develop    #1346   +/-   ##
========================================
  Coverage    95.28%   95.29%           
========================================
  Files          814      814           
  Lines        67364    67422   +58     
========================================
+ Hits         64191    64249   +58     
  Misses        3173     3173           
Files with missing lines                          Coverage Δ
include/boost/math/special_functions/gamma.hpp    99.85% <100.00%> (+<0.01%) ⬆️
test/compile_test/instantiate.hpp                 100.00% <100.00%> (ø)
test/test_igamma.cpp                              98.07% <100.00%> (ø)
test/test_igamma.hpp                              100.00% <100.00%> (ø)


Legend:
Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@JacobHass8
Author

With regard to further tests, the area most likely to fail would be where the implementation first switches to the log asymptotic expansion, and at higher precision (we should support 128-bit long doubles at least; I'm less concerned about full arbitrary precision just yet).

I've added some documentation but don't know how to generate the HTML. I tried running b2 in the doc folder but received an error from a different file.

I also added a test for $x=999$, which is just below the point where the asymptotic expansion is used. Is there anything else that needs to be done before this can be merged?

Add more tests where the logic changes method for large a, x.
So we can find the cause of failures more easily.
@JacobHass8
Author

@jzmaddock I started looking into implementing the log of the incomplete beta function. I'm not sure this even needs to be implemented, though, because the precision is already very good, as in this example:

#include <cmath>
#include <iostream>
#include <boost/math/special_functions/beta.hpp>

int main(int argc, char* argv[])
{
    std::cout << std::log(boost::math::ibeta(15., 10., 1e-20)) << std::endl; // -676.692
    return 0;
}

I'm only seeing precision issues if a or b gets large. I'm not sure if that's a regime that people are interested in though.
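For example, in that large-parameter regime the regularized incomplete beta itself underflows before the log is ever taken. A purely illustrative case - the parameter values are only chosen to force underflow in double precision:

#include <cmath>
#include <iostream>
#include <boost/math/special_functions/beta.hpp>

int main()
{
    // For small x, I_x(a, b) scales roughly like x^a (times a large but
    // finite constant), so with a = 1000 and x = 1e-10 the result is far
    // below the smallest double and underflows to zero:
    std::cout << std::log(boost::math::ibeta(1000., 1000., 1e-10)) << std::endl; // -inf
    return 0;
}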

If we decide it should be implemented, how would I go about that? There doesn't appear to be a simple function that could be used to approximate the tail, as in the log gamma case. I only found boost::math::beta_small_b_large_a_series, which is already used to calculate boost::math::ibeta.

@jzmaddock
Collaborator

Re incomplete beta, I don't see anything particularly easy, unless maybe you're so far out in the extreme that https://dlmf.nist.gov/8.18#E14 can be applied with the sum part neglected?

@dschmitz89
Contributor

@JacobHass8 This looks great! Could you check the test cases requested by the user in the SciPy issue scipy/scipy#8424? The OP mentioned underflow and overflow issues; in this conversation I only see underflow.

@JacobHass8
Author

@JacobHass8 This looks great! Could you check the test cases requested by the user in the SciPy issue scipy/scipy#8424? The OP mentioned underflow and overflow issues; in this conversation I only see underflow.

I've added the following test cases which should address the issue:

   BOOST_CHECK_CLOSE(::boost::math::lgamma_q(static_cast<T>(500), static_cast<T>(10)), static_cast<T>(-3.79666711621207197039397438773960431648625558027046365463e-639L), tolerance * 3);
   BOOST_CHECK_CLOSE(::boost::math::lgamma_q(static_cast<T>(5), static_cast<T>(1000)), static_cast<T>(-975.5430287171020511929200293377669175923128826278957569928895945L), tolerance);

Note that the first test, with $k=500$ and $\lambda=10$, underflows unless long double precision is used. The log of the CDF for the Poisson distribution there is of order $10^{-639}$, so the underflow makes sense.
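(For reference, the connection is that a Poisson variable $X$ with mean $\lambda$ satisfies $P(X \le k) = Q(k+1, \lambda)$, so this spot value is $\log P(X \le 499)$ for $\lambda = 10$; that probability is $1 - \varepsilon$ with $\varepsilon \approx 3.8 \times 10^{-639}$, hence its log is $\approx -\varepsilon$.)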

@mborland
Member

mborland commented Jan 5, 2026

@JacobHass8 I added the CUDA markers and testing. Everything runs without changes needed. It does look like you have a tolerance issue on CI machines with 128-bit long doubles (s390x and ARM64):

====== BEGIN OUTPUT ======

Running 1 test case...

Tests run with GNU C++ version 11.2.0, GNU libstdc++ version 20220324, linux

Testing spot values with type float

Testing spot values with type double

Testing spot values with type long double

test_igamma.hpp(283): error: in "test_main": difference{7.89562e-31} between ::boost::math::lgamma_q(static_cast<T>(1200), static_cast<T>(1250.25)){-2.59193486211758620551930971222062819} and static_cast<T>(-2.591934862117586205519309712218581885256650074210410262843591453L){-2.59193486211758620551930971221858169} exceeds 4.81482e-29%

Testing spot values with type real_concept

test_igamma.hpp(283): error: in "test_main": difference{4.8521e-31} between ::boost::math::lgamma_q(static_cast<T>(1200), static_cast<T>(1250.25)){-2.59193} and static_cast<T>(-2.591934862117586205519309712218581885256650074210410262843591453L){-2.59193} exceeds 4.81482e-29%

Testing tgamma(a, z) medium values with type float

@JacobHass8
Author

@JacobHass8 I added the CUDA markers and testing. Everything runs without changes needed. It does look like you have a tolerance issue on CI machines with 128-bit long doubles (s390x and ARM64):

Would simply increasing the tolerances in this case be acceptable? This is for very large arguments and the difference seems quite small.

In the future, how can I test with s390x and ARM64 CI machines?

@mborland
Member

mborland commented Jan 5, 2026

@JacobHass8 I added the CUDA markers and testing. Everything runs without changes needed. It does look like you have a tolerance issue on CI machines with 128-bit long doubles (s390x and ARM64):

Would simply increasing the tolerances in this case be acceptable? This is for very large arguments and the difference seems quite small.

I think that would be fine.

In the future, how can I test with s390x and ARM64 CI machines?

They are automatically included in the check titled "continuous-integration/drone/pr". You can click on that runner's hyperlink, and it will show you the several hundred configurations that Drone runs for us.

@JacobHass8
Author

They are automatically included in the check titled "continuous-integration/drone/pr". You can click on that runner's hyperlink, and it will show you the several hundred configurations that Drone runs for us.

Oh, I see where that is now. Thanks! Is there a way to see if these tests will pass with my revisions before pushing?

@mborland
Member

mborland commented Jan 6, 2026

They are automatically included in the check titled "continuous-integration/drone/pr". You can click on that runner's hyperlink, and it will show you the several hundred configurations that Drone runs for us.

Oh, I see where that is now. Thanks! Is there a way to see if these tests will pass with my revisions before pushing?

I use multi-arch docker containers integrated into CLion to test platforms like these I don't have. Outside of that generally no, you'll have to wait and see.

@JacobHass8
Author

I use multi-arch docker containers integrated into CLion to test platforms like these I don't have. Outside of that generally no, you'll have to wait and see.

I had been trying to use various Boost.Multiprecision types but without success. I'm going to try to set up a Docker container. Thanks for the help!

@ckormanyos
Member

ckormanyos commented Jan 6, 2026

I had been trying to use various Boost.Multiprecision types but without success.

Most (if not all) of the special functions in Math run with Multiprecision types. Sometimes these need quite specialized test cases. The actual testing of special functions (in Math) with Multiprecision types, however, occurs in the CI/CD of Multiprecision.

That can make it tricky when you write a cool new Math function: we then add Multiprecision tests for it, and any CI/CD issues show up over in Multiprecision.

Down the road, when you're ready to really consider running with Multiprecision, we can help you add tests there too if you like.

@jzmaddock
Collaborator

I've just tweaked the expected error rate again.

Also hooked up some more tests - sorry, I should have mentioned these sooner: all new functions should be added to test/compile_test/instantiate.hpp so that they get concept-checked (this includes, but isn't limited to, multiprecision concept checking). There should also be an include test (in this case test/compile_test/sf_gamma.cpp), and some trivial bits in include/boost/math/special_functions/math_fwd.hpp.
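For anyone following along, the math_fwd.hpp part is just forward declarations. A rough sketch, assuming the two-argument signature used by the tests in this thread (the real file also declares the Policy overloads, so treat this as illustrative only):

#include <boost/math/tools/promotion.hpp>

namespace boost { namespace math {

// Forward declaration so that other headers can refer to the function
// without pulling in the full gamma implementation:
template <class T1, class T2>
typename tools::promote_args<T1, T2>::type lgamma_q(T1 a, T2 z);

}} // namespace boost::math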

BTW If you need to test 128-bit reals locally I tend to reach for cpp_bin_float_quad from multiprecision.

@JacobHass8
Author

I've just tweaked the expected error rate again.

It looks like we're still failing for real_concept types. I'll push a commit to change that.

Also hooked up some more tests - sorry, I should have mentioned these sooner: all new functions should be added to test/compile_test/instantiate.hpp so that they get concept-checked (this includes, but isn't limited to, multiprecision concept checking). There should also be an include test (in this case test/compile_test/sf_gamma.cpp), and some trivial bits in include/boost/math/special_functions/math_fwd.hpp.

That's good to know for the future. Thanks for doing a lot of the heavy lifting on this PR!

BTW If you need to test 128-bit reals locally I tend to reach for cpp_bin_float_quad from multiprecision.

I tried that earlier but wasn't able to recreate the error for some reason. I was finding a difference of 1e-21, not 1e-31 as the error message stated. Maybe I was just doing something wrong though.

@jzmaddock
Collaborator

I tried that earlier but wasn't able to recreate the error for some reason. I was finding a difference of 1e-21, not 1e-31 as the error message stated. Maybe I was just doing something wrong though.

Random thought: were you comparing the result to a long double constant? You need to construct the value to be tested against (the true/expected value) from a string of digits, not a floating-point type, which will truncate the constant before the multiprecision type gets to see it.

@JacobHass8
Author

Random thought: were you comparing the result to a long double constant? You need to construct the value to be tested against (the true/expected value) from a string of digits, not a floating-point type, which will truncate the constant before the multiprecision type gets to see it.

I think that's the issue! Here's the code I was running

#include <iostream>
#include <boost/math/special_functions/gamma.hpp>
#include <boost/multiprecision/cpp_bin_float.hpp>

typedef boost::multiprecision::cpp_bin_float_quad T;
auto a = static_cast<T>(1200);
auto z = static_cast<T>(1250.25);
auto trueVal = static_cast<T>(-2.591934862117586205519309712218581885256650074210410262843591453L);

std::cout << boost::math::lgamma_q(a, z) - trueVal << std::endl; // 1e-20

So I really should have been using auto trueVal = static_cast<T>("-2.591934862117586205519309712218581885256650074210410262843591453");.

@JacobHass8
Author

It looks like we're now only failing on Cygwin and windows_gcc. Could I just wrap the failing test in

#if defined(WIN32) || defined(_WIN32) || defined(__WIN32) || defined(__CYGWIN__)

and increase the tolerance?

@jzmaddock
Collaborator

Yup, I've pushed some tolerance changes.

@ckormanyos
Member

ckormanyos commented Jan 8, 2026

Yup, I've pushed some tolerance changes.

I'm not sure what's red now. Maybe some compilers don't accept the # hash symbol within the scope of a macro's parentheses - there are a couple of new compile errors. Sigh. I suspect the whole line needs to be written out separately for MINGW/CYGWIN and for the lack thereof.
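(For reference, the underlying restriction: preprocessor directives inside a macro invocation's argument list have undefined behaviour, and several compilers reject them outright, so the conditional has to wrap whole statements. A hypothetical sketch with placeholder names:)

   // Won't work reliably: an #if placed inside BOOST_CHECK_CLOSE's parentheses.
   // Instead, write the whole check out once per branch:
   #if defined(__CYGWIN__) || defined(__MINGW32__)
      BOOST_CHECK_CLOSE(result, expected, tolerance * 10);
   #else
      BOOST_CHECK_CLOSE(result, expected, tolerance);
   #endif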

@jzmaddock
Collaborator

Yup, I put a macro inside a macro, my bad, just pushed a fix.

@ckormanyos
Member

ckormanyos commented Jan 8, 2026

Failing after 24m

For some reason, there is a failure in test_reverse_mode_autodiff_stl_support.cpp:982, which looks like a tolerance issue and is seemingly unrelated to this new function.

After that, we might consider starting some preliminary testing of this function in Multiprecision.
