
Small difference in Fortran Sine function using MacLaurin expansion

I'm creating a program in Fortran that reads an angle x in radians for sin(x), followed by the number of MacLaurin-series terms to compute.

This is my program:

! Sine value using MacLaurin series 

program SineApprox
    implicit none
    integer :: numTerms, denom, i
    double precision :: x, temp, summ

    ! Read Angle in Radians, and numTerms
    read(*,*) x, numTerms

    ! Computing MacLaurin series
    denom = (2*numTerms)-1  ! Last odd index, 2n-1
    temp = x                ! temp holds the current term
    summ = x

    do i = 3, denom, 2
        temp = (((-temp)*(x**2))/(i*(i-1)))
        summ = summ + temp 
    end do
    print *, summ

end program SineApprox

However, I'm not getting the same value that my prof requires for the input: 5 30

The output of my code is:

-0.95892427466314001  

But, the required output is:

-0.95892427466313568
                ^^^^

I can't figure out where the error is.

zuzu asked Nov 29 '25 11:11

2 Answers

I can't figure out where the error is.

A high-precision sine(5.0) is -0.95892427466313846889...

OP’s result is better than Prof’s.

OP's result is within 14 ULP of the best answer whereas Prof's is 25 ULP off.

So no problem on OP’s part. To get an exact match to the Prof's answer, you would have to code an inferior approach.

A simple possible reason for the Prof's inferior answer is that the Prof's code looped only while i <= 30 (15 terms rather than 30), which would just about explain the difference. Try your code with fewer iterations and see which iteration count most closely matches the Prof's answer.


//  sine(5.0)             Delta from correct  Source
//  
//-0.95892427466313846889...    0 000000      calc
//-0.95892427466313823        -23 889...      chux (via some C code)
//-0.95892427466314001       +154 111...      OP
//-0.95892427466313568       -278 889...      Prof
// 0.00000000000000089      +/-89 ...         ulp(5.0)
// 0.00000000000000011      +/-11 ...         ulp(-0.9589)
//   12345678901234567(digit count)

Notes:
Near x == 5.0, after about the 17th term, the temp term is too small to significantly affect the result. So certainly enough terms are used.

With the common FP representation (binary64), the unit in the last place of 5.0 is ~89e-17. The ULP of -0.9589 is ~11e-17.

chux - Reinstate Monica answered Dec 02 '25 04:12


I will emulate the two algorithms with ArbitraryPrecisionFloat to illustrate how much worse the factorial solution is numerically. I'll use Squeak Smalltalk here, but the language does not really matter; you could do it in Maple or Python, as long as some arbitrary-precision library is available...

The closest binary64 floating point number to the exact result of sin(5) is -0.9589242746631385.

We'll see how well the two algorithms approximate that ideal value at different precisions, ranging from 24 significand bits (single precision) to 64 bits (x87 extended precision).

p := 24 to: 64.
e1 := p collect: [:n |
    | x numTerms denom temp summ closest |
    closest := (5 asArbitraryPrecisionFloatNumBits: 100) sin asFloat.
    x := 5 asArbitraryPrecisionFloatNumBits: n.
    numTerms := 30.
    denom := (2*numTerms)-1.
    temp := x.
    summ := x.
    3 to: denom by: 2 do: [:i |
        temp := (((0-temp)*(x**2))/(i*(i-1))).
        summ := summ + temp ].
    (summ asFloat - closest) abs].

Then the factorial rewrite:

e2 := p collect: [:n |
    | x numTerms denom temp summ closest fact |
    closest := (5 asArbitraryPrecisionFloatNumBits: 100) sin asFloat.
    x := 5 asArbitraryPrecisionFloatNumBits: n.
    numTerms := 30.
    denom := (2*numTerms)-1.
    temp := x.
    summ := x.
    3 to: denom by: 2 do: [:i |
        fact := ((1 to: i) collect: [:k | k asArbitraryPrecisionFloatNumBits: n]) product.
        temp := ((x ** i)*(-1 ** (i//2)))/fact.
        summ := summ + temp ].
    (summ asFloat - closest) abs].

We can then plot the result in whatever language (here Matlab)

p=24:64;
e1=[1.8854927952283163e-8 4.8657250339978475e-8 2.5848555629259806e-8 6.355841153382613e-8 3.953766758435506e-9 2.071757310151412e-8 2.0911216092045493e-9 6.941377472813315e-10 4.700154709880167e-10 9.269683909352011e-10 6.256184459374481e-11 3.1578795134379334e-10 2.4749646776456302e-11 3.202560439063973e-11 1.526812010155254e-11 8.378742144543594e-12 3.444688978504473e-12 6.105005390111273e-12 9.435785486289205e-13 7.617240171953199e-13 2.275957200481571e-14 1.6486811915683575e-13 2.275957200481571e-14 5.1181281435219717e-14 1.27675647831893e-14 1.2101430968414206e-14 1.2212453270876722e-15 2.7755575615628914e-15 5.551115123125783e-16 1.5543122344752192e-15 1.1102230246251565e-16 1.1102230246251565e-16 0.0 1.1102230246251565e-16 0.0 0.0 0.0 0.0 0.0 0.0 0.0];
e2=[9.725292443585332e-7 4.281799078631465e-7 2.721746682476933e-7 1.823107481646602e-7 9.336073392152144e-8 5.1925587718493205e-8 1.6992282803052206e-8 6.756442849642497e-9 5.1179199767048544e-9 3.0311525511805826e-9 1.2180066955025382e-9 6.155346232716852e-10 2.8668412088705963e-10 6.983780220792823e-11 6.476741365446514e-11 3.8914982347648674e-11 1.7473689162272876e-11 1.2084888645347291e-11 4.513389662008649e-12 1.7393864126802328e-12 1.273314786942592e-12 5.172529071728604e-13 2.5013324744804777e-13 1.6198153929281034e-13 6.894484982922222e-14 2.8754776337791554e-14 1.6542323066914832e-14 8.770761894538737e-15 4.773959005888173e-15 2.7755575615628914e-15 7.771561172376096e-16 3.3306690738754696e-16 3.3306690738754696e-16 1.1102230246251565e-16 1.1102230246251565e-16 0.0 0.0 0.0 0.0 0.0 0.0];
figure; semilogy(p,e1,p,e2,'-.'); legend('your algorithm','factorial algorithm'); xlabel('float precision'); ylabel('error')

Your algorithm performs better: about one order of magnitude less error than the factorial variant:

(Plot: error vs. float precision, your algorithm against the factorial algorithm.)

The worst thing about the factorial variant is that it depends on the intrinsic power function x**power. That function does not necessarily return the floating-point number nearest the exact result, and its accuracy may vary with the underlying math-library implementation. So asking for a bit-identical result that depends not only on strict IEEE 754 compliance but also on implementation-defined accuracy is a really dumb thing, unless all students have the exact same hardware and software. But even then, what kind of lesson is that with respect to what every scientist should know about floating point?

aka.nice answered Dec 02 '25 04:12


