
Integrate is a small, lightweight Rust library for numerical integration of real-valued functions. It provides a simple and efficient way to approximate definite integrals using a variety of numerical methods. Numerical integration is concerned with developing algorithms to approximate the integral of a function \(f(x)\). The most commonly used algorithms are Newton-Cotes formulas, Romberg’s method, and Gaussian quadrature, and, to a lesser extent, Hermite’s formulas and certain adaptive techniques.

Caveats

All of the numerical integration techniques listed above assume that the integrand is continuous on the interval of integration. The error estimates generally require that the integrand is differentiable to whatever order is needed for the error formula to make sense. Below is a checklist to verify before using any of the numerical algorithms.

Integrand check list

  • Are there any singularities of the integrand in the interval of integration?
    • If there are, then can they be removed?
    • A function with jump discontinuities can be integrated by splitting the interval of integration into subintervals on which the function is continuous, numerically integrating over each subinterval, and adding the results.
    • Using integration by parts, certain types of singular integrals can be expressed as the sum of a function and a non-singular integral.
    • In other cases, an approximation of the integral in a small neighborhood of the singularity can be obtained by hand and the interval of integration can be split into three intervals in which numerical integration can be used on two of them.
  • Does the function oscillate over the region of integration? If it does, make sure that the step size is smaller than the wavelength of the function. The interval of integration can also be split into subintervals, each half a wavelength long, and the algorithm applied to each subinterval.
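As an illustration of the splitting strategy for jump discontinuities, the sketch below integrates a step function by integrating each continuous piece separately. `midpoint_sum` is a hypothetical helper written for this example, not part of the crate’s API:

```rust
// Sketch: integrating a function with a jump discontinuity by splitting
// the interval at the jump. `midpoint_sum` is a hypothetical helper
// written for illustration; it is not part of the integrate crate.
fn midpoint_sum(f: &dyn Fn(f64) -> f64, a: f64, b: f64, n: usize) -> f64 {
    let h = (b - a) / n as f64;
    // Sample f at the midpoint of each of the n subintervals.
    (0..n).map(|i| f(a + (i as f64 + 0.5) * h)).sum::<f64>() * h
}

fn main() {
    // f jumps from 1 to 3 at x = 0.5; the exact integral over [0, 1] is 2.
    let f = |x: f64| if x < 0.5 { 1.0 } else { 3.0 };
    // Integrate each continuous piece separately, then add the results.
    let integral = midpoint_sum(&f, 0.0, 0.5, 1_000) + midpoint_sum(&f, 0.5, 1.0, 1_000);
    println!("{integral}"); // close to 2.0
}
```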

Getting Started

Installation

Add integrate to your Cargo.toml:

[dependencies]
integrate = "0.1"

Your First Integral

Let’s compute \(\int_0^1 e^x \, dx = e - 1 \approx 1.71828\) using the trapezoidal rule:

use integrate::newton_cotes::trapezoidal_rule;

let result = trapezoidal_rule(|x: f64| x.exp(), 0.0_f64, 1.0_f64, 1_000_usize);
println!("Result:   {:.6}", result);
println!("Exact:    {:.6}", std::f64::consts::E - 1.0);

Using the Prelude

For convenience, you can import all methods at once with:

use integrate::prelude::*;

This exposes all 11 integration functions directly.

Choosing a Method

| You want to integrate… | Recommended method |
|---|---|
| A smooth function on \([a, b]\) | [legendre_rule] or [trapezoidal_rule] |
| A function with variable smoothness | [adaptive_simpson_method] |
| A smooth function to high precision | [romberg_method] |
| \(f(x)\,e^{-x}\) over \([0, \infty)\) | [gauss_laguerre_rule] |
| \(f(x)\,e^{-x^2}\) over \((-\infty, \infty)\) | [gauss_hermite_rule] |
| \(f(x)/\sqrt{1-x^2}\) over \([-1,1]\) | [gauss_chebyshev1_rule] |
| \(f(x)\sqrt{1-x^2}\) over \([-1,1]\) | [gauss_chebyshev2_rule] |

Generics: f32 and f64

All Newton-Cotes and Romberg methods are generic over float types. You can pass f32 functions:

use integrate::newton_cotes::simpson_rule;

let result = simpson_rule(|x: f32| x * x, 0.0_f32, 1.0_f32, 1_000_usize);
println!("{:.6}", result); // ≈ 0.333333

Newton-Cotes methods

Newton-Cotes methods approximate the integral of a function by a weighted sum of the function evaluated at equally spaced points (nodes). If the endpoints of the interval of integration are excluded from the sum, the method is called an open Newton-Cotes method; if the endpoints are included, it is called a closed Newton-Cotes method.

Rectangle rule

Motivation

The rectangle rule (also called the midpoint rule) is the simplest Newton-Cotes formula for numerical integration. It is useful as a baseline method and is exact for linear functions. The composite form approximates a definite integral by summing rectangle areas, making it easy to implement.

Example

use integrate::newton_cotes::rectangle_rule;


let square = |x: f64| x * x;

let a = 0.0;
let b = 1.0;

let num_steps: usize = 1_000_000;

let integral = rectangle_rule(square, a, b, num_steps);

println!("{}", integral);

Understanding the Rectangle rule

The rectangle rule approximates the integral of a function \(f(x)\) on the closed and bounded interval \([a, a+h]\) of length \(h > 0\) by the (signed) area of the rectangle with length \(h\) and height the value of the function \(f(x)\) evaluated at the midpoint of the interval, \(f(a + \frac{h}{2} )\).

The composite rectangle rule is used to approximate the integral of a function \(f(x)\) over a closed and bounded interval \([a, b]\) where \(a < b\), by decomposing the interval \([a, b]\) into \(n > 1\) subintervals of equal length \(h = \frac{b-a}{n}\) and adding the results of applying the rectangle rule to each subinterval.

By abuse of language, both the composite rectangle rule and the rectangle rule are sometimes referred to simply as the rectangle rule.

Let \(\int_{a}^{b} f(x) dx\) be the integral of \(f(x)\) over the closed and bounded interval \([a ,b ]\), and let \(R_h(f)\) be the result of applying the rectangle rule with \(n\) subintervals of length \(h\), i.e.

\[ R_h(f)=h \left[ f\left( a+\frac{h}{2} \right) + f\left(a+\frac{3h}{2}\right) + \cdots + f\left(b-\frac{h}{2}\right) \right] \]
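As a sketch (not the crate’s implementation), \(R_h(f)\) can be coded directly from this sum:

```rust
// Illustrative sketch of the composite rectangle (midpoint) rule R_h(f);
// not the crate's own implementation.
fn r_h(f: impl Fn(f64) -> f64, a: f64, b: f64, n: usize) -> f64 {
    let h = (b - a) / n as f64;
    // h * [ f(a + h/2) + f(a + 3h/2) + ... + f(b - h/2) ]
    (0..n).map(|i| f(a + (i as f64 + 0.5) * h)).sum::<f64>() * h
}

fn main() {
    // For f(x) = x^2 on [0, 1] the exact value is 1/3; with n = 1_000 the
    // error is on the order of the leading h^2 term below.
    let approx = r_h(|x| x * x, 0.0, 1.0, 1_000);
    println!("{}", (approx - 1.0 / 3.0).abs());
}
```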

An immediate consequence of the Euler-Maclaurin summation formula yields the following equation relating \(\int_{a}^{b} f(x) dx\) and \(R_h(f)\):

\[ \begin{align} R_h(f) & = \int_{a}^{b} f(x) dx - \frac{h^2}{24} \left[ f^\prime (b) - f^\prime (a) \right] + \frac{7h^4}{5760} \left[ f^{(3)}(b) - f^{(3)}(a) \right] + \\ & \cdots + K h^{2p-2} \left[ f^{(2p-3)}(b) - f^{(2p-3)}(a) \right] + O(h^{2p}) \end{align} \]

where \(f^\prime\), \(f^{(3)}\), and \(f^{(2p-3)}\) are the first, third, and \((2p-3)^{\text{rd}}\) derivatives of \(f\) and \(K\) is a constant.

The last term, \(O(h^{2p})\), is important. Given an infinitely differentiable function in which the first \(2p-3\) derivatives vanish at both endpoints of the interval of integration, it is not true that \(R_{h}(f) = \int_{a}^{b} f(x) dx\), but rather what the theorem says is that

\[ \lim_{h \to 0} \left| \dfrac{R_h(f) - \int_{a}^{b} f(x)dx}{h^{2p}} \right| < M \]

where \(M>0\).

If \(f\) is at least twice differentiable on the interval \([a,b]\), then applying the mean-value theorem to

\[ \begin{align} R_h(f) - \int_{a}^{b} f(x) dx & = -\frac{h^2}{24} \left[ f^\prime (b) - f^\prime (a) \right] + \frac{7h^4}{5760} \left[ f^{(3)}(b) - f^{(3)}(a) \right] \\ & + \cdots + K h^{2p-2} \left[ f^{(2p-3)}(b) - f^{(2p-3)}(a) \right] + O(h^{2p}) \end{align} \]

yields the standard truncation error expression

\[ R_h(f) - \int_{a}^{b} f(x) dx = -\frac{h^2}{24} (b - a) f^{\prime\prime}(c) \]

for some point \(c\) where \(a \leq c \leq b\).

A corollary of which is that if \(f^{\prime\prime}(x) = 0\) for all \(x\) in \([a,b]\), i.e. if \(f(x)\) is linear, then the rectangle rule is exact.

The Euler-Maclaurin summation formula also shows that usually \(n\) should be chosen large enough so that \(h = \frac{b-a}{n} < 1\). For example, if \(h = 0.1\) then

\[ \begin{split} R_{0.1}(f) &= \int_{a}^{b} f(x) dx - 0.00042 \left[ f^\prime(b) - f^\prime(a) \right] \\ & + 0.00000012 \left[f^{(3)}(b) - f^{(3)}(a) \right] + \cdots \end{split} \]

and if \(h = 0.01\) then

\[ \begin{split} R_{0.01}(f) &= \int_{a}^{b} f(x) dx - 0.0000042 \left[ f^\prime(b) - f^\prime(a) \right] \\ &+ 0.000000000012 \left[ f^{(3)}(b) - f^{(3)}(a) \right] + \cdots \end{split} \]

while if \(h=10\) then

\[ \begin{split} R_{10}(f) &= \int_{a}^{b} f(x) dx - 4.1667 \left[ f^\prime(b) - f^\prime(a)\right] \\ &+ 12.15 \left[ f^{(3)}(b) - f^{(3)}(a) \right] + \cdots \end{split} \]

However, if the function \(f(x)\) is linear, then \(n\) may be chosen to be \(1\).

Limitations

The rectangle rule converges as \(O(h^2)\), the lowest order among the Newton-Cotes rules in this crate. Prefer Simpson’s rule for smooth integrands.

Trapezoidal rule

Motivation

The trapezoidal rule approximates a definite integral by replacing the integrand with straight-line segments between adjacent sample points. It is a practical first choice for smooth functions and is the building block for Richardson extrapolation methods such as Romberg’s method.

Example

use integrate::newton_cotes::trapezoidal_rule;


let square = |x: f64| x * x;

let a = 0.0;
let b = 1.0;

let num_steps: usize = 1_000_000;

let integral = trapezoidal_rule(square, a, b, num_steps);
println!("{}", integral);

Understanding the trapezoidal rule

The trapezoidal rule approximates the integral of a function \(f(x)\) on the closed and bounded interval \([a, a+h]\) of length \(h > 0\) by the (signed) area of the trapezoid formed by the line segments joining \((a, 0)\) to \((a+h, 0)\), \((a+h, 0)\) to \((a+h, f(a+h))\), \((a+h, f(a+h))\) to \((a, f(a))\), and \((a, f(a))\) to \((a, 0)\).

The composite trapezoidal rule is used to approximate the integral of a function \(f(x)\) over a closed and bounded interval \([a, b]\) where \(a < b\), by decomposing the interval \([a, b]\) into \(n > 1\) subintervals of equal length \(h = \dfrac{b-a}{n}\), then adding the results of applying the trapezoidal rule to each subinterval.

By abuse of language, both the composite trapezoidal rule and the trapezoidal rule are sometimes referred to simply as the trapezoidal rule.

Let \(\int_{a}^{b} f(x) dx\) be the integral of \(f(x)\) over the closed and bounded interval \([a,b]\), and let \(T_h(f)\) be the result of applying the trapezoidal rule with \(n\) subintervals of length \(h\), i.e.

\[ T_h(f) = h \left[ \frac{f(a)}{2} + f(a+h) + \cdots + f(b-h) + \frac{f(b)}{2} \right] \]
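As a sketch (not the crate’s implementation), \(T_h(f)\) can be coded directly from this sum:

```rust
// Illustrative sketch of the composite trapezoidal rule T_h(f);
// not the crate's own implementation.
fn t_h(f: impl Fn(f64) -> f64, a: f64, b: f64, n: usize) -> f64 {
    let h = (b - a) / n as f64;
    // Interior nodes a + h, ..., b - h carry weight 1; endpoints weight 1/2.
    let interior: f64 = (1..n).map(|i| f(a + i as f64 * h)).sum();
    h * (0.5 * f(a) + interior + 0.5 * f(b))
}

fn main() {
    // For f(x) = x^2 on [0, 1] the exact value is 1/3.
    println!("{}", t_h(|x| x * x, 0.0, 1.0, 1_000));
}
```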

The Euler-Maclaurin summation formula relates \(\int_{a}^{b} f(x) dx\) and \(T_h (f)\)

\[ \begin{align} T_h(f) &= \int_{a}^{b} f(x) dx + \frac{h^2}{12} \left[f^\prime(b) - f^\prime(a)\right] - \frac{h^4}{720} \left[f^{(3)}(b) - f^{(3)}(a) \right] \\ &+ \cdots + K h^{2p-2} \left[f^{(2p-3)}(b) - f^{(2p-3)}(a)\right] + O(h^{2p}) \end{align} \]

where \(f^\prime\), \(f^{(3)}\), and \(f^{(2p-3)}\) are the first, third, and \((2p-3)^{\text{rd}}\) derivatives of \(f\) and \(K\) is a constant.

The last term, \(O(h^{2p})\), is important. Given an infinitely differentiable function in which the first \((2p-3)\) derivatives vanish at both endpoints of the interval of integration, it is not true that \(T_h (f) = \int_{a}^{b} f(x) dx\), but rather what the theorem says is that

\[ \lim_{h \to 0} \left| \frac{T_h(f) - \int_{a}^{b} f(x)dx}{h^{2p}} \right| < M \]

where \(M>0\).

If \(f\) is at least twice differentiable on the interval \([a,b]\), then applying the mean-value theorem to

\[ \begin{align} T_h(f) - \int_{a}^{b} f(x) dx &= \frac{h^2}{12} \left[f^\prime(b) - f^\prime(a) \right] - \frac{h^4}{720} \left[ f^{(3)}(b) - f^{(3)}(a) \right] \\ &+ \cdots + K h^{2p-2} \left[f^{(2p-3)}(b) - f^{(2p-3)}(a) \right] + O(h^{2p}) \end{align} \]

yields the standard truncation error expression \[ T_h(f) - \int_{a}^{b} f(x) dx = \frac{h^2}{12} (b - a) f^{\prime\prime}(c) \]

for some point \(c\) where \(a \leq c \leq b\).

A corollary of which is that if \(f^{\prime\prime}(x) = 0\) for all \(x\) in \([a,b]\), i.e. if \(f(x)\) is linear, then the trapezoidal rule is exact.

The Euler-Maclaurin summation formula also shows that usually \(n\) should be chosen large enough so that

\[ h = \frac{b-a}{n} < 1 \]

For example, if \(h = 0.1\) then \[ T_{0.1}(f) = \int_{a}^{b} f(x) dx + 0.00083 \left[f^\prime(b) - f^\prime(a) \right] - 0.00000014 \left[f^{(3)}(b) - f^{(3)}(a) \right] + \cdots \] and if \(h = 0.01\) then \[ T_{0.01}(f) = \int_{a}^{b} f(x) dx + 0.0000083 \left[ f^\prime(b) - f^\prime(a) \right] - 0.000000000014 \left[f^{(3)}(b) - f^{(3)}(a)\right] + \cdots \]

while if \(h=10\) then \[ T_{10}(f) = \int_{a}^{b} f(x) dx + 8.3333 \left[ f^\prime(b) - f^\prime(a) \right] - 13.89 \left[ f^{(3)}(b) - f^{(3)}(a) \right] + \cdots \]

However, if the function \(f(x)\) is linear, then \(n\) may be chosen to be \(1\).

Limitations

The trapezoidal rule converges as \(O(h^2)\) for general smooth functions. While more accurate than the rectangle rule, it is outperformed by Simpson’s rule for smooth integrands. It should not be used for functions with discontinuities or singularities in \([a, b]\) without splitting the interval first.

Simpson’s rule

Motivation

Simpson’s rule fits a quadratic polynomial through the endpoints and midpoint of each subinterval and integrates it exactly, yielding fourth-order convergence for smooth functions. It is one of the most widely used Newton-Cotes formulas and is exact for polynomials of degree three or less.

Example

use integrate::newton_cotes::simpson_rule;


let square = |x: f64| x * x;

let a = 0.0;
let b = 1.0;

let num_steps: usize = 1_000_000;

let integral = simpson_rule(square, a, b, num_steps);

println!("{}", integral);

Understanding Simpson’s rule

Simpson’s rule approximates the integral of a function \(f(x)\) on the closed and bounded interval \([a, a+h]\) of length \(h > 0\) by the integral on \([a, a+h]\) of the quadratic passing through the points \(\left(a, f(a)\right)\), \(\left(a+\dfrac{h}{2}, f\left(a+\dfrac{h}{2}\right)\right)\) and \(\left(a+h, f(a+h)\right)\).

The composite Simpson’s rule is used to approximate the integral of a function \(f(x)\) over a closed and bounded interval \([a, b]\) where \(a < b\), by decomposing the interval \([a, b]\) into \(n > 1\) subintervals of equal length \(h = \dfrac{b-a}{n}\), then adding the results of applying Simpson’s rule to each subinterval. By abuse of language, both the composite Simpson’s rule and Simpson’s rule are sometimes referred to simply as Simpson’s rule. Let \(\int_{a}^{b} f(x) dx\) be the integral of \(f(x)\) over the closed and bounded interval \([a,b]\), and let \(S_h(f)\) be the result of applying Simpson’s rule with \(n\) subintervals of length \(h\), i.e.

\[ \begin{align} S_h(f) &= \frac{h}{6} \left[ f(a) + 4f(a+\frac{h}{2}) + 2f(a+h) \right. \\ & + \left. \cdots + 2f(b-h) + 4f(b-\frac{h}{2}) + f(b) \right] \end{align} \]
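As a sketch (not the crate’s implementation), \(S_h(f)\) can be coded directly from this sum:

```rust
// Illustrative sketch of the composite Simpson's rule S_h(f) with the
// midpoint-based weights shown above; not the crate's own implementation.
fn s_h(f: impl Fn(f64) -> f64, a: f64, b: f64, n: usize) -> f64 {
    let h = (b - a) / n as f64;
    let mut sum = f(a) + f(b);
    for i in 0..n {
        let left = a + i as f64 * h;
        sum += 4.0 * f(left + 0.5 * h); // midpoint of each subinterval
        if i > 0 {
            sum += 2.0 * f(left); // endpoint shared by two subintervals
        }
    }
    sum * h / 6.0
}

fn main() {
    // Simpson's rule is exact for cubics: ∫₀¹ x³ dx = 1/4 even with n = 4.
    println!("{}", s_h(|x| x * x * x, 0.0, 1.0, 4));
}
```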

An immediate consequence of the Euler-Maclaurin summation formula relates \(\int_{a}^{b} f(x) dx\) and \(S_h(f)\):

\[ \begin{align} S_h(f) & = \int_{a}^{b} f(x) dx + \frac{h^4}{2880} \left[ f^{(3)}(b) - f^{(3)}(a) \right] - \frac{h^6}{96768} \left[ f^{(5)}(b) - f^{(5)}(a) \right] \\ & + \cdots + K h^{2p-2} \left[ f^{(2p-3)}(b) - f^{(2p-3)}(a) \right] + O(h^{2p}) \end{align} \]

where \(f^{(3)}\), \(f^{(5)}\), and \(f^{(2p-3)}\) are the third, fifth, and \((2p-3)^{\text{rd}}\) derivatives of \(f\) and \(K\) is a constant.

The last term, \(O(h^{2p})\), is important. Given an infinitely differentiable function in which the first \(2p-3\) derivatives vanish at both endpoints of the interval of integration, it is not true that

\[ S_h(f) = \int_{a}^{b}f( x ) dx \]

but rather what the theorem says is that

\[ \lim_{h \to 0} \left| \frac{S_h(f) - \int_{a}^{b} f(x)dx}{h^{2p}} \right| < M \]

where \(M > 0\).

If \(f\) is at least four times differentiable on the interval \([a,b]\), then applying the mean-value theorem to

\[ \begin{align} S_h(f) - \int_{a}^{b}f( x ) dx &= \frac{h^4}{2880} \left[ f^{(3)}(b) - f^{(3)}(a) \right] - \frac{h^6}{96768} \left[ f^{(5)}(b) - f^{(5)}(a) \right] \\ &+ \cdots + K h^{(2p - 2)} \left[f^{(2p-3)}(b) - f^{(2p-3)}(a) \right] + O(h^{2p}) \end{align} \]

yields the standard truncation error expression

\[ S_h(f) - \int_{a}^{b} f( x ) dx = \frac{h^4}{2880} (b-a) f^{(4)}(c) \]

for some point \(c\) where \(a \leq c \leq b\).

A corollary of which is that if \(f^{(4)}(x) = 0\) for all \(x\) in \([a,b]\), i.e. if \(f(x)\) is a cubic, then Simpson’s rule is exact.

The Euler-Maclaurin summation formula also shows that usually $n$ should be chosen large enough so that \(h = \frac{b-a}{n} < 1\).

For example, if \(h = 0.1\) then \[ \begin{align} S_{0.1}(f) & = \int_{a}^{b} f(x) dx + 3.5 \cdot 10^{-8} \left[ f^{(3)}(b) - f^{(3)}(a) \right] \\ & - 1.033 \cdot 10^{-11} \left[ f^{(5)}(b) - f^{(5)}(a) \right] + \cdots \end{align} \]

and if \(h = 0.01\) then

\[ \begin{split} S_{0.01}(f) & = \int_{a}^{b} f(x) dx + 3.5 \cdot 10^{-12} \left[ f^{(3)}(b) - f^{(3)}(a) \right] \\ & - 1.033 \cdot 10^{-17} \left[ f^{(5)}(b) - f^{(5)}(a) \right] + \cdots \end{split} \]

while if \(h = 10\) then \[ \begin{align} S_{10}(f) & = \int_{a}^{b} f(x) dx + 3.47 \left[ f^{(3)}(b) - f^{(3)}(a) \right] \\ & - 10.33 \left[ f^{(5)}(b) - f^{(5)}(a) \right] + \cdots \end{align} \]

However, if the function \(f(x)\) is a cubic, then \(n\) may be chosen to be \(1\).

Limitations

Simpson’s rule converges as \(O(h^4)\) and requires an at least four-times differentiable integrand for its error estimate to apply. It is not suitable for functions with discontinuities or singularities without splitting the interval. For functions with highly variable smoothness, prefer the adaptive Simpson method.

Newton’s 3/8 rule

Motivation

Newton’s 3/8 rule fits a cubic polynomial through four equally spaced points per subinterval and integrates it exactly. It achieves the same order of accuracy as Simpson’s rule (\(O(h^4)\)), but evaluates the integrand at four points per subinterval rather than three.

Example

use integrate::newton_cotes::newton_rule;


let square = |x: f64| x * x;

let a = 0.0;
let b = 1.0;

let num_steps: usize = 1_000_000;

let integral = newton_rule(square, a, b, num_steps);
println!("{}", integral);

Understanding Newton’s 3/8 rule

Newton’s \(3/8\) rule approximates the integral of a function \(f(x)\) on the closed and bounded interval \([a, a+h]\) of length \(h > 0\) by the integral on \([a, a+h]\) of the cubic passing through the points \((a, f(a))\), \((a+\frac{h}{3}, f(a+\frac{h}{3}))\), \((a+\frac{2h}{3}, f(a+\frac{2h}{3}))\) and \((a+h, f(a+h))\).

The composite Newton’s 3/8 rule is used to approximate the integral of a function \(f(x)\) over a closed and bounded interval \([a, b]\) where \(a < b\), by decomposing the interval \([a, b]\) into \(n > 1\) subintervals of equal length \(h = \dfrac{b-a}{n}\), then adding the results of applying Newton’s 3/8 rule to each subinterval.

By abuse of language, both the composite Newton’s 3/8 rule and Newton’s 3/8 rule are sometimes referred to simply as Newton’s 3/8 rule. Let \(\int_{a}^{b} f(x)dx\) be the integral of \(f(x)\) over the closed and bounded interval \([a, b]\), and let \(N_h(f)\) be the result of applying Newton’s 3/8 rule with \(n\) subintervals of length \(h\), i.e.

\[ \begin{align} N_h(f) &= \frac{h}{8} \left[ f(a) + 3 f\left(a+ \frac{h}{3} \right) + 3f\left(a+ \frac{2h}{3} \right) + 2 f(a + h) \right.\\ & \left. + \cdots + 2f(b-h) + 3f \left( b-\frac{2h}{3} \right) + 3f \left( b - \frac{h}{3} \right) + f(b) \right] \end{align} \]
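As a sketch (not the crate’s implementation), \(N_h(f)\) can be coded directly from this sum:

```rust
// Illustrative sketch of the composite Newton 3/8 rule N_h(f), sampling
// each subinterval at thirds; not the crate's own implementation.
fn n_h(f: impl Fn(f64) -> f64, a: f64, b: f64, n: usize) -> f64 {
    let h = (b - a) / n as f64;
    let mut sum = f(a) + f(b);
    for i in 0..n {
        let left = a + i as f64 * h;
        // The interior third-points of each subinterval carry weight 3.
        sum += 3.0 * (f(left + h / 3.0) + f(left + 2.0 * h / 3.0));
        if i > 0 {
            sum += 2.0 * f(left); // endpoint shared by two subintervals
        }
    }
    sum * h / 8.0
}

fn main() {
    // Newton's 3/8 rule is exact for cubics: ∫₀¹ x³ dx = 1/4.
    println!("{}", n_h(|x| x * x * x, 0.0, 1.0, 3));
}
```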

An immediate consequence of the Euler-Maclaurin summation formula relates \(\int_{a}^{b} f(x)dx\) and \(N_h(f)\)

\[ \begin{align} N_h(f) &= \int_{a}^{b} f(x)dx + \frac{h^4}{6480} \left[ f^{(3)}(b) - f^{(3)}(a) \right] - \frac{h^6}{244944} \left[ f^{(5)}(b) - f^{(5)}(a) \right] \\ & + \cdots + K h^{2p-2} \left[ f^{(2p - 3)}(b) - f^{(2p - 3)}(a) \right] + O(h^{2p}) \end{align} \]

where \(f^{(3)}\), \(f^{(5)}\), and \(f^{(2p-3)}\) are the third, fifth, and \((2p-3)^{\text{rd}}\) derivatives of \(f\) and \(K\) is a constant.

The last term, \(O(h^{2p})\), is important. Given an infinitely differentiable function in which the first \(2p-3\) derivatives vanish at both endpoints of the interval of integration, it is not true that

\[ N_h(f) = \int_{a}^{b} f(x) dx \]

but rather what the theorem says is that

\[ \lim_{h \to 0} \left| \frac{N_h(f) - \int_{a}^{b} f(x)dx}{h^{2p}} \right| < M \]

where \(M > 0\).

If \(f\) is at least four times differentiable on the interval \([a,b]\), then applying the mean-value theorem to

\[ \begin{align} N_h(f) - \int_{a}^{b} f(x)dx &= \frac{h^4}{6480} \left[ f^{(3)}(b) - f^{(3)}(a) \right] - \frac{h^6}{244944} \left[ f^{(5)}(b) - f^{(5)}(a) \right] \\ & + \cdots + K h^{2p - 2} \left[ f^{(2p - 3)}(b) - f^{(2p - 3)}(a) \right] + O(h^{2p}) \end{align} \]

yields the standard truncation error expression

\[ N_h(f) - \int_{a}^{b} f(x)dx = \frac{h^4}{6480} (b-a) f^{(4)}(c) \]

for some point \(c\) where \(a \leq c \leq b\).

A corollary of which is that if \(f^{(4)}(x) = 0\) for all \(x\) in \([a,b]\), i.e. if \(f(x)\) is a cubic, then Newton’s 3/8 rule is exact.

The Euler-Maclaurin summation formula also shows that usually \(n\) should be chosen large enough so that \(h = \dfrac{b-a}{n}< 1\).

For example, if \(h = 0.1\) then

\[ \begin{align} N_{0.1}(f) &= \int_{a}^{b} f(x)dx + 1.5 \cdot 10^{-8} \left[ f^{(3)}(b) - f^{(3)}(a) \right] \\ &- 4.1 \cdot 10^{-12} \left[ f^{(5)}(b) - f^{(5)}(a) \right] + \cdots \end{align} \]

and if \(h = 0.01\) then

\[ \begin{align} N_{0.01}(f) &= \int_{a}^{b} f(x)dx + 1.5 \cdot 10^{-12} \left[ f^{(3)}(b) - f^{(3)}(a) \right] \\ &- 4.1 \cdot 10^{-18} \left[ f^{(5)}(b) - f^{(5)}(a) \right] + \cdots \end{align} \]

while if \(h = 10\) then

\[ \begin{align} N_{10}(f) &= \int_{a}^{b} f(x)dx + 1.54 \left[ f^{(3)}(b) - f^{(3)}(a) \right] \\ &- 4.08 \left[ f^{(5)}(b) - f^{(5)}(a) \right] + \cdots \end{align} \]

However, if the function \(f(x)\) is a cubic, then \(n\) may be chosen to be \(1\).

Limitations

Newton’s 3/8 rule converges as \(O(h^4)\), the same order as Simpson’s rule, but requires more integrand evaluations per subinterval. For general use, Simpson’s rule is usually preferred. Like all fixed-step Newton-Cotes formulas, it is not suitable for integrands with discontinuities or singularities.

Gaussian quadrature

Instead of evaluating the integrand at equally spaced nodes as in Newton-Cotes methods, Gaussian quadrature methods make a judicious choice of nodes so as to maximize the precision of the numerical integration relative to the number of integrand evaluations.

The common Gaussian quadrature methods are:

  • Gauss-Legendre used to integrate a function \(f(x)\) over a closed and bounded interval \([a,b]\).
  • Gauss-Laguerre used to integrate a function of the form \(f(x) e^{-x}\) over the positive x-axis \(\lbrace x \in \mathbb{R} : x > 0 \rbrace\).
  • Gauss-Hermite used to integrate a function of the form \(f(x) e^{-x^2}\) over the entire x-axis, \(\lbrace x \in \mathbb{R} : -\infty < x < \infty \rbrace\).
  • Gauss-Chebyshev First Kind used to integrate a function of the form \(\frac{f(x)}{\sqrt{1-x^2}}\) over the interval \([-1,1]\).
  • Gauss-Chebyshev Second Kind used to integrate a function of the form \(f(x) \cdot \sqrt{1-x^2}\) over the interval \([-1,1]\).

Gauss-Legendre

Example

use integrate::gauss_quadrature::legendre_rule;

let square = |x: f64| x * x;

let a = 0.0;
let b = 1.0;

let num_steps: usize = 1_000_000;

let integral = legendre_rule(square, a, b, num_steps);
println!("{}", integral);

Understanding Gauss-Legendre rule

Gauss-Legendre quadrature formulas are used to integrate functions \(f(x)\) over a closed and bounded interval \([a, b]\).

Let \(\int_{a}^{b} f(x) dx\) denote the integral of \(f(x)\) from \(a\) to \(b\). After making the change of variable \(t = \dfrac{2(x-a)}{b-a} - 1\),

\[ \int_{a}^{b} f(x) dx = \frac{b-a}{2} \int_{-1}^{1} f \left( \frac{t(b-a) + (b+a)}{2} \right) dt \]

With respect to the inner product

\[ \langle f,g \rangle = \int_{-1}^{1} f(x) \cdot g(x) \cdot w(x) dx \]

the Legendre polynomials, defined by

\[ P_n(x) = \frac{1}{2^n n! } \frac{\partial^{n} (x^2 - 1)^n}{\partial x^n} \quad \text{for} \quad n>0 \]

and \(P_0(x) = 1\), form an orthogonal family of polynomials with weight function \(w(x) = 1\) on the interval \([-1,1]\).

The \(n\)-point Gauss-Legendre quadrature formula, \(GL_n ( f )\), for approximating the integral of \(f(x)\) over \([-1,1]\), is given by

\[ GL_n ( f ) = A_1 f(x_1) + \cdots + A_n f(x_n) \]

where \(x_i\), \(i = 1,\dots,n\), are the zeros of \(P_n\) and

\[ A_i = 2 \cdot \frac{1 - x_i^2}{n^2 P_{n-1} (x_i)^2} \quad \text{for} \quad i = 1,\dots,n \]

The truncation error is

\[ \int_{-1}^{1} f(x) dx - GL_n(f) = K \cdot \frac{ f^{(2n)}(c) }{(2n)!} \]

where \(K\) is a constant, and \(c\) is some unknown number that verifies \(-1 < c < 1\). The constant \(K\) is easily determined from

\[ K = \int_{-1}^{1} x^{2n} dx - GL_n (x^{2n} ) \]

Generalizing, in order to integrate \(f(x)\) over \([a,b]\), the \(n\)-point Gauss-Legendre quadrature formula, \(GL_n ( f(x), a, b )\), is given by

\[ GL_n ( f(x), a, b ) = A_1^\prime f(x_1^\prime) + \cdots + A_n^\prime f(x_n^\prime) \quad \text{where} \quad x_i^\prime = \frac{b-a}{2} \cdot x_i + \frac{b+a}{2} \]

\(x_i\), \(i = 1,\dots,n\), are the zeros of \(P_n\) and

\[ A_i^\prime = (b-a) \frac{1 - x_i^2}{n^2 P_{n-1}(x_i)^2} = \frac{b-a}{2} \cdot A_i, \quad i = 1,\dots,n \]

The truncation error is

\[ \int_{a}^{b} f(x) dx - GL_n(f, a, b) = K \cdot \frac{ f^{(2n)}(c) }{(2n)!} \]

where \(K\) is a constant, and \(c\) is some unknown number that verifies \(a < c < b\).
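As a concrete check, the two-point rule (\(n = 2\)) can be written out by hand: the zeros of \(P_2(x) = (3x^2-1)/2\) are \(\pm 1/\sqrt{3}\) and both weights equal \(1\), so after mapping to \([a, b]\) the rule is exact for polynomials of degree up to \(2n - 1 = 3\). This is an illustrative sketch, not the crate’s implementation:

```rust
// Hand-rolled 2-point Gauss-Legendre rule on [a, b]: the zeros of
// P_2(x) = (3x^2 - 1)/2 are ±1/sqrt(3) and both weights equal 1.
// A sketch for illustration, not the crate's implementation.
fn gauss_legendre_2(f: impl Fn(f64) -> f64, a: f64, b: f64) -> f64 {
    let x = 1.0 / 3.0_f64.sqrt();
    let half = (b - a) / 2.0;
    let mid = (b + a) / 2.0;
    // Map the nodes from [-1, 1] to [a, b]; weights scale by (b - a)/2.
    half * (f(mid - half * x) + f(mid + half * x))
}

fn main() {
    // With only two evaluations the rule is exact for degree <= 3:
    // ∫₀¹ x³ dx = 1/4.
    println!("{}", gauss_legendre_2(|x| x * x * x, 0.0, 1.0));
}
```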

Limitations

Gauss-Legendre quadrature is best suited for smooth functions on finite intervals. It is not appropriate for functions with singularities, discontinuities, or functions defined on unbounded domains. Rapidly oscillating functions may also require specialized treatment or a very large number of nodes \(n\).

References:

Original C++ code implemented by Ignace Bogaert.

The main features of this software are:

  • Speed: due to the simple formulas and the \(O(1)\) complexity computation of individual Gauss-Legendre quadrature nodes and weights.
  • Accuracy: the error on the nodes and weights is within a few ulps.

Ignace Bogaert’s paper:

[1] Ignace Bogaert, Iteration-free computation of Gauss-Legendre quadrature nodes and weights, SIAM Journal on Scientific Computing, Volume 36, Number 3, 2014, pages A1008-A1026.

Gauss-Chebyshev Quadrature

Gauss-Chebyshev quadrature formulas are used to integrate functions like \(\frac{f(x)}{\sqrt{1- x^2}}\) and \(f(x) \cdot \sqrt{1- x^2}\) from \(-1\) to \(1\).

Chebyshev Polynomials of the First Kind

Example

use integrate::gauss_quadrature::gauss_first_kind_chebyshev_rule;

let f = |x: f64| 1.0;

let n: usize = 100;

let integral = gauss_first_kind_chebyshev_rule(f, n);
println!("{}", integral);

Understanding Chebyshev Polynomials of the First Kind

With respect to the inner product

\[ \langle f,g \rangle = \int_{-1}^{1} f(x) \cdot g(x) \cdot w(x) dx \]

the Chebyshev polynomials are defined by

\[ T_n(x) = \cos(n \cdot \arccos(x)) \quad \text{for} \quad n>0 \]

and \(T_0(x)=1\) form an orthogonal family of polynomials with weight function \(w(x)=\frac{1}{\sqrt{1 - x^2}}\) on \([-1, 1]\).

The \(n\)-point Gauss-Chebyshev quadrature formula, \(GC_n(f(x))\), for approximating the integral of \(\frac{f(x)}{\sqrt{1 - x^2}}\) over \([-1, 1]\), is given by

\[ GC_n ( f(x) ) = A_1 f(x_1) + \cdots + A_n f(x_n) \]

where \(x_i\), \(i = 1,\dots,n\), are the zeros of \(T_n\) and \(A_i = \frac{\pi}{n}\), \(i = 1,\dots,n\).
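Since both the nodes \(x_i = \cos\left(\frac{(2i-1)\pi}{2n}\right)\) and the weights are known in closed form, the rule is easy to sketch (an illustration, not the crate’s implementation):

```rust
use std::f64::consts::PI;

// Sketch of the n-point Gauss-Chebyshev rule of the first kind:
// nodes are the zeros of T_n and every weight is π/n.
// Not the crate's implementation.
fn gauss_chebyshev_1(f: impl Fn(f64) -> f64, n: usize) -> f64 {
    (1..=n)
        .map(|i| {
            // x_i = cos((2i - 1)π / (2n)), the i-th zero of T_n.
            let x = ((2 * i - 1) as f64 * PI / (2.0 * n as f64)).cos();
            f(x)
        })
        .sum::<f64>()
        * PI
        / n as f64
}

fn main() {
    // With f ≡ 1 this approximates ∫_{-1}^{1} dx / sqrt(1 - x²) = π.
    println!("{}", gauss_chebyshev_1(|_| 1.0, 100));
}
```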

Chebyshev Polynomials of the Second Kind

Example

use integrate::gauss_quadrature::gauss_second_kind_chebyshev_rule;

fn f(x: f64) -> f64 {
    1.0
}

let n: usize = 100;

let integral = gauss_second_kind_chebyshev_rule(f, n);
println!("{}", integral);

Understanding Chebyshev Polynomials of the Second Kind

With respect to the inner product

\[ \langle f,g \rangle = \int_{-1}^{1} f(x) \cdot g(x) \cdot w(x) dx \]

the Chebyshev polynomials are defined by

\[ U_n(x) \cdot \sin(\arccos(x)) = \sin((n+1) \cdot \arccos(x)) \quad \text{for} \quad n>0 \]

and \(U_0(x)=1\) form an orthogonal family of polynomials with weight function \(w(x)=\sqrt{1 - x^2}\) on \([-1, 1]\).

The \(n\)-point Gauss-Chebyshev quadrature formula, \(GC_n(f(x))\), for approximating the integral of \(f(x) \cdot \sqrt{1 - x^2}\) over \([-1, 1]\), is given by

\[ GC_n ( f(x) ) = A_1 f(x_1) + \cdots + A_n f(x_n) \]

where \(x_i\), \(i = 1,\dots,n\), are the zeros of \(U_n\) and

\[ A_i = \frac{\pi}{n + 1} \cdot \sin^2\left(\frac{i\pi}{n + 1} \right) \quad \text{for} \quad i = 1,\dots,n. \]
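As with the first kind, the nodes \(x_i = \cos\left(\frac{i\pi}{n+1}\right)\) and weights are known in closed form, so the rule is easy to sketch (an illustration, not the crate’s implementation):

```rust
use std::f64::consts::PI;

// Sketch of the n-point Gauss-Chebyshev rule of the second kind:
// nodes x_i = cos(iπ/(n+1)) with weights (π/(n+1)) sin²(iπ/(n+1)).
// Not the crate's implementation.
fn gauss_chebyshev_2(f: impl Fn(f64) -> f64, n: usize) -> f64 {
    (1..=n)
        .map(|i| {
            let theta = i as f64 * PI / (n as f64 + 1.0);
            PI / (n as f64 + 1.0) * theta.sin().powi(2) * f(theta.cos())
        })
        .sum()
}

fn main() {
    // With f ≡ 1 this approximates ∫_{-1}^{1} sqrt(1 - x²) dx = π/2.
    println!("{}", gauss_chebyshev_2(|_| 1.0, 100));
}
```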

Limitations

Gauss-Chebyshev quadrature is only suitable when the integrand can be expressed in the weighted form \(f(x)/\sqrt{1-x^2}\) (first kind) or \(f(x)\sqrt{1-x^2}\) (second kind) on \([-1, 1]\). It is not appropriate for functions on other intervals or without the corresponding weight factor, as the accuracy guarantees rely on orthogonality with respect to that weight.

Gauss-Laguerre

Motivation

Gauss-Laguerre quadrature is designed for integrals of the form \(\int_0^{+\infty} f(x)\,e^{-x}\,dx\), where the exponential decay naturally arises in probability distributions, physics, and engineering problems. By using the zeros and weights of Laguerre polynomials, the method achieves high accuracy with far fewer function evaluations than general-purpose rules on a truncated domain.

Example

use integrate::gauss_quadrature::gauss_laguerre_rule;

let f = |x: f64| 1.0;

let n: usize = 100;

let integral = gauss_laguerre_rule(f, n);
println!("{}", integral);

Understanding Gauss-Laguerre rule

Gauss-Laguerre quadrature formulas are used to integrate functions \(f(x) e^{-x}\) over the positive \(x\)-axis.

With respect to the inner product

\[ \langle f,g \rangle = \int_{0}^{+\infty} f(x) \cdot g(x) \cdot w(x) dx \]

the Laguerre polynomials are defined by

\[ L_n(x) = e^x \dfrac{\partial^{n} x^n e^{-x}}{\partial x^n}, \quad \text{for} \quad n > 0 \]

and \(L_0(x) = 1\) form an orthogonal family of polynomials with weight function \(w(x) = e^{-x}\) on the positive \(x\)-axis.

The \(n\)-point Gauss-Laguerre quadrature formula, \(GL_n ( f(x) )\), for approximating the integral of \(f(x) e^{-x}\) over \([0, \infty)\), is given by

\[ GL_n ( f(x) ) = A_1 f(x_1) + \cdots + A_n f(x_n) \]

where \(x_i\), \(i = 1,\dots,n\), are the zeros of \(L_n\) and

\[ A_i = \dfrac{n!^2}{ x_i L_{n-1} (x_i)^2} \quad \text{for} \quad i = 1,\dots,n \]
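As a concrete check, the two-point rule can be written out with the standard hardcoded nodes \(2 \mp \sqrt{2}\) (the zeros of the degree-2 Laguerre polynomial) and weights \((2 \pm \sqrt{2})/4\); an illustrative sketch, not the crate’s implementation:

```rust
// Hardcoded 2-point Gauss-Laguerre rule using the standard nodes
// 2 ∓ sqrt(2) and weights (2 ± sqrt(2))/4; a sketch for illustration,
// not the crate's implementation.
fn gauss_laguerre_2(f: impl Fn(f64) -> f64) -> f64 {
    let s = 2.0_f64.sqrt();
    let nodes = [2.0 - s, 2.0 + s]; // zeros of the degree-2 Laguerre polynomial
    let weights = [(2.0 + s) / 4.0, (2.0 - s) / 4.0];
    weights[0] * f(nodes[0]) + weights[1] * f(nodes[1])
}

fn main() {
    // The 2-point rule is exact for polynomials of degree <= 3:
    // ∫₀^∞ x e^{-x} dx = 1 and ∫₀^∞ x³ e^{-x} dx = 6.
    println!("{}", gauss_laguerre_2(|x| x));
    println!("{}", gauss_laguerre_2(|x| x * x * x));
}
```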

Limitations

Gauss-Laguerre quadrature is only appropriate for integrals of the form \(\int_0^{+\infty} f(x)\,e^{-x}\,dx\). If the integrand does not have exponential decay, the method will produce inaccurate results. It is also not suitable for integrands defined on finite intervals or on the full real line.

Gauss Hermite

Motivation

Gauss-Hermite quadrature is designed for integrals of the form \(\int_{-\infty}^{+\infty} f(x)\,e^{-x^2}\,dx\), which appear naturally in quantum mechanics, probability theory (Gaussian distributions), and signal processing. By exploiting the orthogonality of Hermite polynomials, it provides exponential convergence for smooth integrands with Gaussian-type decay.

Example

use integrate::gauss_quadrature::gauss_hermite_rule;

let f = |x: f64| 1.0;

let n: usize = 100;

let integral = gauss_hermite_rule(f, n);
println!("{}", integral);

Understanding Gauss-Hermite rule

Gauss-Hermite quadrature formulas are used to integrate functions \(f(x) e^{-x^2}\) from \(-\infty\) to \(+\infty\).

With respect to the inner product

\[ \langle f,g \rangle = \int_{-\infty}^{+\infty} f(x) \cdot g(x) \cdot w(x) \, dx \]

the Hermite polynomials are defined by

\[ H_n(x) = (-1)^n \cdot e^{x^2} \cdot \frac{\partial^{n} e^{-x^2}}{\partial x^n} \quad \text{for} \quad n > 0 \]

and \(H_0(x) = 1\) form an orthogonal family of polynomials with weight function \(w(x) = e^{-x^2}\) on the entire \(x\)-axis.

The \(n\)-point Gauss-Hermite quadrature formula, \(GH_n ( f(x) )\), for approximating the integral of \(f(x) e^{-x^2}\) over the entire \(x\)-axis, is given by

\[ GH_n ( f(x) ) = A_1 f(x_1) + \cdots + A_n f(x_n) \]

where \(x_i\), for \(i = 1,\ldots,n\), are the zeros of \(H_n\) and

\[ A_i = \frac{2^{n-1} \cdot n! \cdot \sqrt{\pi}}{n^2 \cdot H_{n-1} (x_i)^2} \quad \text{for} \quad i = 1,\ldots,n \]
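As a concrete check, the two-point rule uses the standard nodes \(\pm 1/\sqrt{2}\) (the zeros of \(H_2(x) = 4x^2 - 2\)), each with weight \(\sqrt{\pi}/2\); an illustrative sketch, not the crate’s implementation:

```rust
// Hardcoded 2-point Gauss-Hermite rule using the standard nodes
// ±1/sqrt(2), each with weight sqrt(π)/2; a sketch for illustration,
// not the crate's implementation.
fn gauss_hermite_2(f: impl Fn(f64) -> f64) -> f64 {
    let x = 1.0 / 2.0_f64.sqrt(); // zeros of H_2(x) = 4x^2 - 2
    let w = std::f64::consts::PI.sqrt() / 2.0;
    w * (f(-x) + f(x))
}

fn main() {
    // ∫_{-∞}^{∞} x² e^{-x²} dx = sqrt(π)/2; the 2-point rule is exact
    // for polynomials of degree <= 3.
    println!("{}", gauss_hermite_2(|x| x * x));
}
```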

Limitations

Gauss-Hermite quadrature is only appropriate for integrals of the form \(\int_{-\infty}^{+\infty} f(x)\,e^{-x^2}\,dx\). If the integrand does not have Gaussian decay, the method will produce inaccurate results. It is not suitable for integrands on finite intervals or those with non-Gaussian tail behavior.

Adaptive Methods

Adaptive methods are used to estimate the integral of a function on a closed and bounded interval. The basic idea is to take smaller subintervals in regions where the function changes significantly and larger subintervals in regions where it does not. What counts as significant depends on the quadrature technique involved: for Simpson’s rule, if the function is nearly cubic (or quadratic, linear, or constant) on a subinterval, then it does not change significantly from being cubic and a large subinterval can be used.

Adaptive Simpson

Motivation

Fixed-step quadrature methods use the same subinterval size everywhere, which is inefficient when the integrand is smooth in most of the domain but has rapid variation or near-singularities in a small region. Adaptive quadrature automatically concentrates function evaluations where the integrand is hardest to approximate, achieving a target accuracy with fewer total evaluations.

Example

use integrate::adaptive_quadrature::adaptive_simpson_method;


let f = |x: f64| x.exp();

let a = 0.0;
let b = 1.0;

let tolerance = 10.0e-6;
let min_h = 10.0e-3;


let result = adaptive_simpson_method(f, a, b, min_h, tolerance);


// ∫_0^1 e^x dx = e − 1 ≈ 1.718281…
match result {
    Ok(res) => println!("{}", res),
    Err(err) => println!("{}", err),
};

Understanding adaptive Simpson method

If an integrand is poorly behaved in a small interval about a point, then integrating over an interval containing that point forces a choice: either small subintervals must be used for the composite quadrature over the entire interval, or the interval must be decomposed into three intervals, two on which the function is well behaved, where relatively large subintervals suffice, and one containing the troublesome point, where smaller subintervals are needed.

Adaptive techniques are attempts to automatically detect and control the length of subintervals.

The adaptive_simpson_method routine uses Simpson’s rule for integrating a function \(f(x)\) on a closed and bounded interval \([a,b]\).
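The core idea can be sketched as follows. This is a minimal self-contained sketch of the general adaptive Simpson scheme, not the library's implementation: the estimate on an interval is compared against the sum of the estimates on its two halves, and only intervals where the discrepancy is too large are subdivided further.

```rust
// Simpson's rule on a single interval [a, b].
fn simpson(f: &impl Fn(f64) -> f64, a: f64, b: f64) -> f64 {
    (b - a) / 6.0 * (f(a) + 4.0 * f(0.5 * (a + b)) + f(b))
}

// Recursive adaptive Simpson: subdivide only where the whole-interval
// estimate disagrees with the two half-interval estimates.
fn adaptive(f: &impl Fn(f64) -> f64, a: f64, b: f64, tol: f64, min_h: f64) -> f64 {
    let m = 0.5 * (a + b);
    let whole = simpson(f, a, b);
    let halves = simpson(f, a, m) + simpson(f, m, b);
    // The factor 15 comes from the h⁴ error term of Simpson's rule.
    if (halves - whole).abs() < 15.0 * tol || b - a < min_h {
        halves
    } else {
        adaptive(f, a, m, 0.5 * tol, min_h) + adaptive(f, m, b, 0.5 * tol, min_h)
    }
}

fn main() {
    // ∫_0^1 e^x dx = e − 1
    let approx = adaptive(&|x: f64| x.exp(), 0.0, 1.0, 1e-9, 1e-4);
    println!("{}", approx);
}
```

The min_h floor plays the same role as in the library routine: it guarantees termination even when the tolerance cannot be met locally.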

Limitations

The adaptive Simpson method requires the integrand to be continuous on \([a, b]\). It may fail or produce unreliable results for functions with true singularities or jump discontinuities. The recursion depth is implicitly bounded by min_h; choosing min_h too large may miss fine structure, while too small a value can cause excessive recursion.

Romberg’s method

Motivation

Romberg’s method combines Richardson extrapolation with the composite trapezoidal rule to eliminate successive error terms from the Euler-Maclaurin expansion, achieving high-order accuracy without evaluating higher derivatives. It is an excellent choice when high precision is required for smooth integrands on finite intervals.

Example

use integrate::romberg::romberg_method;


fn square(x: f64) -> f64 {
    x.powi(2)
}

let a = 0.0;
let b = 1.0;

let num_steps: usize = 10;

let integral = romberg_method(square, a, b, num_steps);
println!("{}", integral);

Understanding Romberg’s method

Romberg’s method is used to estimate the integral of a function on a closed and bounded interval.

Classically, the method consists of successively applying the composite trapezoidal rule, each time halving the length of the subintervals, and using a linear combination of the resulting sequence of estimates to estimate the integral, by successively deleting the low order error terms in the Euler-Maclaurin summation formula.

The process terminates when successive estimates differ by less than a preassigned tolerance, or when a preassigned maximum number of estimates has been computed.

The Euler-Maclaurin summation formula relates the integral of a function \(f(x)\) over a closed and bounded interval \([a,b]\), \(\int_{a}^{b} f(x)\, dx\), and the composite trapezoidal rule,

\[ T_h (f) = h \left[ \frac{f(a)}{2} + f(a+h) + \cdots + f(b-h) + \frac{f(b)}{2} \right] \]

by

\[ \begin{split} T_h(f) &= \int_{a}^{b} f(x) dx + \left(\frac{h^2}{12}\right) \left[f^\prime(b) - f^\prime(a)\right] - \left(\frac{h^4}{720}\right) \left[f^{(3)}(b) - f^{(3)}(a) \right] \\ &+ \cdots + K h^{2p-2} \left[ f^{(2p-3)}(b) - f^{(2p-3)}(a) \right] + O(h^{2p}) \end{split} \]

where \(f^\prime\), \(f^{(3)}\), and \(f^{(2p-3)}\) are the first, third, and \((2p-3)\)rd derivatives of \(f\), and \(K\) is a constant.

If the subinterval length is halved, then

\[ \begin{split} T_{\frac{h}{2}}(f) &= \int_{a}^{b} f(x) dx + \frac{h^2}{4·12} \left[ f^\prime(b) - f^\prime(a) \right] - \left( \frac{h^4}{16·720} \right) \left[ f^{(3)}(b) - f^{(3)}(a) \right] \\ &+ \cdots + K \left( \frac{h}{2} \right)^{2p-2} \left[ f^{(2p-3)}(b) - f^{(2p-3)}(a) \right] + O(h^{2p}) \end{split} \]

So that

\[ \begin{split} \frac{4 T_{\frac{h}{2}}(f) - T_{h}(f)}{3} &= \int_{a}^{b} f(x) dx + \left( \frac{h^4}{2880} \right) \left[ f^{(3)}(b) - f^{(3)}(a) \right] - \left( \frac{h^6}{96768} \right) \left[ f^{(5)}(b) - f^{(5)}(a) \right] \\ &+ \cdots + K h^{2p-2} \left[ f^{(2p-3)}(b) - f^{(2p-3)}(a) \right] + O(h^{2p}) \end{split} \]

The \(h^2\) term has vanished. This process can be continued, each halving of the subinterval length results in a new composite trapezoidal rule estimate of the integral, which can be combined with previous estimates to yield an estimate, in which the lowest order term involving \(h\) vanishes.

The easiest way to combine all the estimates from applications of the trapezoidal rule by halving the length of the subintervals is to arrange the estimates in a column, from the coarsest estimate to the finest estimate. To form the second column, take two adjacent values from the first column, subtract the coarser estimate from the finer estimate, divide by 3, and add the result to the finer estimate.

The second column is automatically arranged from the coarsest estimate to the finest, with one less element. Continue: form the third column by taking two adjacent values from the second column, subtracting the coarser estimate from the finer estimate, dividing by 15, and adding the result to the finer estimate.

The third column is automatically arranged from the coarsest to the finest estimate with one fewer element than in the second column.

This process is continued until there is only one element in the last column; this element is the estimate of the integral.

The number used to divide the difference of two adjacent elements in the \(i^{th}\) column is \(4^i - 1\).
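The tableau described above can be sketched in a few lines. This is a self-contained illustration of the scheme, not the library's implementation: column 0 holds trapezoidal estimates with 1, 2, 4, … subintervals, and each later column applies Richardson extrapolation with divisor \(4^i - 1\).

```rust
// Composite trapezoidal rule with n subintervals on [a, b].
fn trapezoid(f: &impl Fn(f64) -> f64, a: f64, b: f64, n: usize) -> f64 {
    let h = (b - a) / n as f64;
    let interior: f64 = (1..n).map(|k| f(a + k as f64 * h)).sum();
    h * (0.5 * f(a) + interior + 0.5 * f(b))
}

// Romberg integration: successively extrapolate the trapezoidal estimates.
fn romberg(f: &impl Fn(f64) -> f64, a: f64, b: f64, cols: usize) -> f64 {
    // First column: trapezoidal rule with 1, 2, 4, ... subintervals.
    let mut col: Vec<f64> = (0..cols).map(|j| trapezoid(f, a, b, 1usize << j)).collect();
    for i in 1..cols {
        // Combine adjacent entries: finer + (finer - coarser) / (4^i - 1).
        let d = 4f64.powi(i as i32) - 1.0;
        col = col.windows(2).map(|w| w[1] + (w[1] - w[0]) / d).collect();
    }
    col[0] // the single remaining entry is the estimate
}

fn main() {
    // ∫_0^1 x² dx = 1/3; one extrapolation already makes this exact,
    // since eliminating the h² term reproduces Simpson's rule.
    let approx = romberg(&|x: f64| x * x, 0.0, 1.0, 5);
    println!("{}", approx);
}
```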

Limitations

Romberg’s method requires the integrand to be infinitely differentiable on \([a, b]\) for the Euler-Maclaurin expansion to hold. It is not appropriate for functions with singularities, discontinuities, or even kinks (non-differentiable points) in the integration interval. For such functions, consider the adaptive Simpson method or splitting the interval.

Python API

integrate-py exposes all 11 integration methods as a Python package, built with PyO3 and Maturin. The Rust core is called directly from Python, with the GIL released during integration so other Python threads can run concurrently.

Installation

pip install integrate-py

Or build from source (requires Rust ≥ 1.63):

git clone https://github.com/mtantaoui/Integrate
cd Integrate/python
pip install maturin
maturin develop --release

Quick start

import math
import integrate_py as ig

# ∫₀¹ x² dx = 1/3
result = ig.legendre_rule(lambda x: x**2, 0.0, 1.0, 100)
print(f"{result:.10f}")   # 0.3333333333

# ∫₀¹ eˣ dx = e − 1
result = ig.adaptive_simpson(math.exp, 0.0, 1.0, min_h=1e-3, tol=1e-6)
print(f"{result:.10f}")   # 1.7182818284

# ∫₀^π sin(x) dx = 2
result = ig.romberg(math.sin, 0.0, math.pi, n_cols=10)
print(f"{result:.10f}")   # 2.0000000000

Available methods

Choosing a method

You want to integrate…                              Recommended
Smooth \(f\) on \([a, b]\)                          legendre_rule or romberg
Highly smooth \(f\), maximum precision              romberg
\(f\) with variable smoothness                      adaptive_simpson
Simple estimate, any \(f\)                          trapezoidal_rule
\(\int_0^\infty f(x)\,e^{-x}\,dx\)                  gauss_laguerre_rule
\(\int_{-\infty}^{\infty} f(x)\,e^{-x^2}\,dx\)      gauss_hermite_rule
\(\int_{-1}^{1} f(x)/\sqrt{1-x^2}\,dx\)             gauss_chebyshev1_rule
\(\int_{-1}^{1} f(x)\sqrt{1-x^2}\,dx\)              gauss_chebyshev2_rule

A note on performance

The Rust core runs without Python overhead between function evaluations. For best performance:

  • Prefer methods with small \(n\) (legendre_rule, romberg, adaptive_simpson) — they need far fewer function evaluations than Newton-Cotes for the same accuracy.
  • Use NumPy-vectorized integrands when very large \(n\) is needed; pre-evaluate into an array and integrate with numpy.trapezoid / scipy.integrate.simpson.
  • The GIL is released during integration, so other Python threads are not blocked while the Rust core computes.

Benchmarks vs scipy

The table below compares integrate-py against the equivalent scipy.integrate methods on \(\int_0^1 e^x\,dx = e - 1\), using pytest-benchmark. Results will vary by machine.

Run the benchmarks yourself:

cd Integrate
pip install pytest pytest-benchmark scipy numpy maturin
maturin develop --release --manifest-path python/Cargo.toml
pytest python/tests/test_integrate.py --benchmark-only -v
Method                      scipy equivalent
trapezoidal_rule (n=500)    numpy.trapezoid
simpson_rule (n=500)        scipy.integrate.simpson
legendre_rule (n=200)       scipy.integrate.fixed_quad
adaptive_simpson            scipy.integrate.quad
romberg (n_cols=15)         scipy.integrate.romberg