<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en">
	<id>https://wiki.sarg.dev/index.php?action=history&amp;feed=atom&amp;title=Divided_differences</id>
	<title>Divided differences - Revision history</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.sarg.dev/index.php?action=history&amp;feed=atom&amp;title=Divided_differences"/>
	<link rel="alternate" type="text/html" href="https://wiki.sarg.dev/index.php?title=Divided_differences&amp;action=history"/>
	<updated>2026-04-18T23:24:32Z</updated>
	<subtitle>Revision history for this page on the wiki</subtitle>
	<generator>MediaWiki 1.44.2</generator>
	<entry>
		<id>https://wiki.sarg.dev/index.php?title=Divided_differences&amp;diff=577867&amp;oldid=prev</id>
		<title>imported&gt;Nolord: /* Definition */</title>
		<link rel="alternate" type="text/html" href="https://wiki.sarg.dev/index.php?title=Divided_differences&amp;diff=577867&amp;oldid=prev"/>
		<updated>2025-11-03T01:37:09Z</updated>

		<summary type="html">&lt;p&gt;&lt;span class=&quot;autocomment&quot;&gt;Definition&lt;/span&gt;&lt;/p&gt;
&lt;p&gt;&lt;b&gt;New page&lt;/b&gt;&lt;/p&gt;&lt;div&gt;{{Short description|Algorithm for computing polynomial coefficients}}&lt;br /&gt;
&lt;br /&gt;
In [[mathematics]], &amp;#039;&amp;#039;&amp;#039;divided differences&amp;#039;&amp;#039;&amp;#039; is an [[algorithm]], historically used for computing tables of [[logarithms]] and [[trigonometric functions]].{{Citation needed |date=October 2017}} [[Charles Babbage]]&amp;#039;s [[difference engine]], an early [[mechanical calculator]], was designed to use this algorithm in its operation.&amp;lt;ref name=&amp;quot;Isaacson2014&amp;quot;&amp;gt;{{cite book |last1=Isaacson |first1=Walter |title=The Innovators |date=2014 |publisher=Simon &amp;amp; Schuster |isbn=978-1-4767-0869-0 |page=20 }}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Divided differences is a [[recursion|recursive]] [[division (mathematics)|division]] process. Given a sequence of data points &amp;lt;math&amp;gt;(x_0, y_0), \ldots, (x_{n}, y_{n})&amp;lt;/math&amp;gt;, the method calculates the coefficients of the [[polynomial interpolation|interpolation polynomial]] of these points in the [[Newton polynomial|Newton form]].&lt;br /&gt;
&lt;br /&gt;
It is sometimes denoted by a delta with a bar: &amp;lt;math&amp;gt;\text{△} \!\!\!|\,\,&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;\text{◿} \! \text{◺}&amp;lt;/math&amp;gt;.&lt;br /&gt;
&lt;br /&gt;
==Definition==&lt;br /&gt;
Given &amp;#039;&amp;#039;n&amp;#039;&amp;#039;&amp;amp;nbsp;+&amp;amp;nbsp;1 data points&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;(x_0, y_0),\ldots,(x_{n}, y_{n})&amp;lt;/math&amp;gt;&lt;br /&gt;
where the &amp;lt;math&amp;gt;x_k&amp;lt;/math&amp;gt; are assumed to be pairwise distinct, the &amp;#039;&amp;#039;&amp;#039;forward divided differences&amp;#039;&amp;#039;&amp;#039; are defined as:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\begin{align}&lt;br /&gt;
\mathopen[y_k] &amp;amp;:= y_k, &amp;amp;&amp;amp; k \in \{ 0,\ldots,n\} \\&lt;br /&gt;
\mathopen[y_k,\ldots,y_j] &amp;amp;:= \frac{[y_{k+1},\ldots , y_j] - [y_{k},\ldots , y_{j-1}]}{x_j-x_k}, &amp;amp;&amp;amp; k\in\{0,\ldots,n-1\},\ j\in\{k+1,\ldots,n\}.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
To make the recursive process of computation clearer, the divided differences can be put in tabular form, where the columns correspond to the value of &amp;#039;&amp;#039;j&amp;#039;&amp;#039; above, and each entry in the table is computed from the difference of the entries to its immediate lower left and to its immediate upper left, divided by a difference of corresponding &amp;#039;&amp;#039;x-&amp;#039;&amp;#039;values:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\begin{matrix}&lt;br /&gt;
x_0 &amp;amp; y_0 = [y_0] &amp;amp;           &amp;amp;               &amp;amp; \\&lt;br /&gt;
        &amp;amp;       &amp;amp; [y_0,y_1] &amp;amp;               &amp;amp; \\&lt;br /&gt;
x_1 &amp;amp; y_1 = [y_1] &amp;amp;           &amp;amp; [y_0,y_1,y_2] &amp;amp; \\&lt;br /&gt;
        &amp;amp;       &amp;amp; [y_1,y_2] &amp;amp;               &amp;amp; [y_0,y_1,y_2,y_3]\\&lt;br /&gt;
x_2 &amp;amp; y_2 = [y_2] &amp;amp;           &amp;amp; [y_1,y_2,y_3] &amp;amp; \\&lt;br /&gt;
        &amp;amp;       &amp;amp; [y_2,y_3] &amp;amp;               &amp;amp; \\&lt;br /&gt;
x_3 &amp;amp; y_3 = [y_3] &amp;amp;           &amp;amp;               &amp;amp; \\&lt;br /&gt;
\end{matrix}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
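The recursion above translates directly into code. The following is a minimal Python sketch (not part of the original article; the function name is our own choice), which builds the table column by column:&lt;br /&gt;

```python
def divided_differences(xs, ys):
    """Return the divided-difference table as a list of columns.

    Column j holds [y_k, ..., y_{k+j}] for k = 0, ..., n-j, computed by
    [y_k,...,y_j] = ([y_{k+1},...,y_j] - [y_k,...,y_{j-1}]) / (x_j - x_k).
    """
    n = len(xs)
    table = [list(ys)]  # column j = 0 is just the y-values
    for j in range(1, n):
        prev = table[-1]
        col = [(prev[k + 1] - prev[k]) / (xs[k + j] - xs[k])
               for k in range(n - j)]
        table.append(col)
    return table

# For f(x) = x^2 on the nodes 0, 1, 3 the second divided difference
# recovers the leading coefficient 1.
table = divided_differences([0.0, 1.0, 3.0], [0.0, 1.0, 9.0])
```

The last entry of each column is the top edge of the triangular table shown above.&lt;br /&gt;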
&lt;br /&gt;
=== Notation ===&lt;br /&gt;
Note that the divided difference &amp;lt;math&amp;gt;[y_k,\ldots,y_{k+j}]&amp;lt;/math&amp;gt; depends on the values &amp;lt;math&amp;gt;x_k,\ldots,x_{k+j}&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;y_k,\ldots,y_{k+j}&amp;lt;/math&amp;gt;, but the notation hides the dependency on the &amp;#039;&amp;#039;x&amp;#039;&amp;#039;-values. If the data points are given by a function &amp;#039;&amp;#039;f&amp;#039;&amp;#039;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;(x_0, y_0), \ldots, (x_n, y_n)&lt;br /&gt;
=(x_0, f(x_0)), \ldots, (x_n, f(x_n))&amp;lt;/math&amp;gt;&lt;br /&gt;
one sometimes writes the divided difference in the notation&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f[x_k,\ldots,x_{k+j}]&lt;br /&gt;
\ \stackrel{\text{def}}= \  [f(x_k),\ldots,f(x_{k+j})] &lt;br /&gt;
= [y_k,\ldots,y_{k+j}].&amp;lt;/math&amp;gt;Other notations for the divided difference of the function &amp;#039;&amp;#039;&amp;amp;fnof;&amp;#039;&amp;#039; on the nodes &amp;#039;&amp;#039;x&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;0&amp;lt;/sub&amp;gt;,&amp;amp;nbsp;...,&amp;amp;nbsp;&amp;#039;&amp;#039;x&amp;#039;&amp;#039;&amp;lt;sub&amp;gt;&amp;#039;&amp;#039;n&amp;#039;&amp;#039;&amp;lt;/sub&amp;gt; are:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f[x_0,\ldots,x_n]=\mathopen[x_0,\ldots,x_n]f= &lt;br /&gt;
\mathopen[x_0,\ldots,x_n;f]=&lt;br /&gt;
D[x_0,\ldots,x_n]f.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Example==&lt;br /&gt;
&lt;br /&gt;
Divided differences for &amp;lt;math&amp;gt;k=0&amp;lt;/math&amp;gt; and the first few values of &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt;:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
  \mathopen[y_0] &amp;amp;= y_0 \\&lt;br /&gt;
  \mathopen[y_0,y_1] &amp;amp;= \frac{y_1-y_0}{x_1-x_0} \\&lt;br /&gt;
  \mathopen[y_0,y_1,y_2]&lt;br /&gt;
&amp;amp;= \frac{\mathopen[y_1,y_2]-\mathopen[y_0,y_1]}{x_2-x_0}&lt;br /&gt;
 =  \frac{\frac{y_2-y_1}{x_2-x_1}-\frac{y_1-y_0}{x_1-x_0}}{x_2-x_0}&lt;br /&gt;
 = \frac{y_2-y_1}{(x_2-x_1)(x_2-x_0)}-\frac{y_1-y_0}{(x_1-x_0)(x_2-x_0)}&lt;br /&gt;
\\&lt;br /&gt;
  \mathopen[y_0,y_1,y_2,y_3] &amp;amp;= \frac{\mathopen[y_1,y_2,y_3]-\mathopen[y_0,y_1,y_2]}{x_3-x_0}&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt; &amp;lt;!-- the \mathopen command is there because latex otherwise thinks that [...] denotes an optional argument --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Thus, the table corresponding to these terms, written out up to two columns of differences, has the following form:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\begin{matrix}&lt;br /&gt;
   x_0 &amp;amp; y_{0} &amp;amp;                                 &amp;amp; \\&lt;br /&gt;
       &amp;amp;        &amp;amp; {y_{1}-y_{0}\over x_1 - x_0}  &amp;amp; \\&lt;br /&gt;
   x_1 &amp;amp; y_{1} &amp;amp;                                 &amp;amp; {{y_{2}-y_{1}\over x_2 - x_1}-{y_{1}-y_{0}\over x_1 - x_0} \over x_2 - x_0} \\&lt;br /&gt;
       &amp;amp;        &amp;amp; {y_{2}-y_{1}\over x_2 - x_1}  &amp;amp; \\&lt;br /&gt;
   x_2 &amp;amp; y_{2} &amp;amp;                                 &amp;amp; \vdots \\&lt;br /&gt;
       &amp;amp;        &amp;amp; \vdots                          &amp;amp; \\&lt;br /&gt;
\vdots &amp;amp;        &amp;amp;                                 &amp;amp; \vdots \\&lt;br /&gt;
       &amp;amp;        &amp;amp; \vdots                          &amp;amp; \\&lt;br /&gt;
   x_n &amp;amp; y_{n} &amp;amp;                                 &amp;amp; \\&lt;br /&gt;
\end{matrix}  &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Properties==&lt;br /&gt;
* [[Linear functional|Linearity]] &amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\begin{align}&lt;br /&gt;
(f+g)[x_0,\dots,x_n] &amp;amp;= f[x_0,\dots,x_n] + g[x_0,\dots,x_n] \\&lt;br /&gt;
(\lambda\cdot f)[x_0,\dots,x_n] &amp;amp;= \lambda\cdot f[x_0,\dots,x_n]&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
* [[Leibniz rule (generalized product rule)|Leibniz rule]] &amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;(f\cdot g)[x_0,\dots,x_n] = f[x_0]\cdot g[x_0,\dots,x_n] + f[x_0,x_1]\cdot g[x_1,\dots,x_n] + \dots + f[x_0,\dots,x_n]\cdot g[x_n] = \sum_{r=0}^n f[x_0,\ldots,x_r]\cdot g[x_r,\ldots,x_n]&amp;lt;/math&amp;gt;&lt;br /&gt;
* Divided differences are symmetric: If &amp;lt;math&amp;gt;\sigma : \{0, \dots, n\} \to \{0, \dots, n\}&amp;lt;/math&amp;gt; is a permutation then &amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f[x_0, \dots, x_n] = f[x_{\sigma(0)}, \dots, x_{\sigma(n)}]&amp;lt;/math&amp;gt;&lt;br /&gt;
* [[Polynomial interpolation]] in the [[Newton polynomial|Newton form]]: if &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is a polynomial function of degree &amp;lt;math&amp;gt;\leq n&amp;lt;/math&amp;gt;, then &amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;p(x) = p[x_0] + p[x_0,x_1](x-x_0) + p[x_0,x_1,x_2](x-x_0)(x-x_1) + \cdots + p[x_0,\ldots,x_n] (x-x_0)(x-x_1)\cdots(x-x_{n-1})&amp;lt;/math&amp;gt;&lt;br /&gt;
* If &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; is a polynomial function of degree &amp;lt;math&amp;gt;&amp;lt;n&amp;lt;/math&amp;gt;, then &amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;p[x_0, \dots, x_n] = 0.&amp;lt;/math&amp;gt;&lt;br /&gt;
*[[Mean value theorem for divided differences]]: if &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; is &amp;#039;&amp;#039;n&amp;#039;&amp;#039; times differentiable, then &amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f[x_0,\dots,x_n] = \frac{f^{(n)}(\xi)}{n!}&amp;lt;/math&amp;gt; for a number &amp;lt;math&amp;gt;\xi&amp;lt;/math&amp;gt; in the open interval determined by the smallest and largest of the &amp;lt;math&amp;gt;x_k&amp;lt;/math&amp;gt;&amp;#039;s.&lt;br /&gt;
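The Newton-form property can be checked numerically. Below is a small Python sketch (ours, not from the article): `dd` computes a divided difference by the defining recursion and `newton_eval` evaluates the Newton form; since the sample function is a cubic and four nodes are used, the Newton polynomial reproduces it exactly:&lt;br /&gt;

```python
def dd(xs, ys):
    # forward divided difference [y_0, ..., y_n] via the defining recursion
    if len(ys) == 1:
        return ys[0]
    return (dd(xs[1:], ys[1:]) - dd(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

def newton_eval(xs, coeffs, x):
    # Horner-style evaluation of the Newton form
    result = coeffs[-1]
    for c, xk in zip(reversed(coeffs[:-1]), reversed(xs[:-1])):
        result = result * (x - xk) + c
    return result

xs = [0.0, 1.0, 2.0, 4.0]
f = lambda t: t ** 3 - 2.0 * t
ys = [f(x) for x in xs]
# Newton coefficients p[x_0], p[x_0,x_1], ..., p[x_0,...,x_n]
coeffs = [dd(xs[:j + 1], ys[:j + 1]) for j in range(len(xs))]
```

Evaluating `newton_eval` at any point, not just the nodes, agrees with the cubic `f`.&lt;br /&gt;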
&lt;br /&gt;
==Matrix form==&lt;br /&gt;
The divided difference scheme can be put into an upper [[triangular matrix]]:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;T_f(x_0,\dots,x_n)=&lt;br /&gt;
\begin{pmatrix}&lt;br /&gt;
f[x_0] &amp;amp; f[x_0,x_1] &amp;amp; f[x_0,x_1,x_2] &amp;amp; \ldots &amp;amp; f[x_0,\dots,x_n] \\&lt;br /&gt;
0      &amp;amp; f[x_1]     &amp;amp; f[x_1,x_2]     &amp;amp; \ldots &amp;amp; f[x_1,\dots,x_n] \\&lt;br /&gt;
0      &amp;amp; 0          &amp;amp; f[x_2]         &amp;amp; \ldots &amp;amp; f[x_2,\dots,x_n] \\&lt;br /&gt;
\vdots &amp;amp; \vdots     &amp;amp;                &amp;amp; \ddots &amp;amp; \vdots           \\&lt;br /&gt;
0      &amp;amp; 0          &amp;amp; 0              &amp;amp; \ldots &amp;amp; f[x_n]&lt;br /&gt;
\end{pmatrix}.&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Then the following properties hold:&lt;br /&gt;
* &amp;lt;math&amp;gt;T_{f+g}(x) = T_f(x) + T_g(x)&amp;lt;/math&amp;gt;&lt;br /&gt;
* &amp;lt;math&amp;gt;T_{\lambda f}(x) = \lambda T_f(x)&amp;lt;/math&amp;gt; if &amp;lt;math&amp;gt;\lambda&amp;lt;/math&amp;gt; is a scalar&lt;br /&gt;
* &amp;lt;math&amp;gt;T_{f\cdot g}(x) = T_f(x) \cdot T_g(x)&amp;lt;/math&amp;gt; {{pb}} This follows from the Leibniz rule. It implies that multiplication of such matrices is [[commutativity|commutative]]. In summary, the matrices of divided difference schemes with respect to the same set of nodes &amp;#039;&amp;#039;x&amp;#039;&amp;#039; form a [[commutative ring]].&lt;br /&gt;
* Since &amp;lt;math&amp;gt; T_f(x) &amp;lt;/math&amp;gt; is a triangular matrix, its [[eigenvalue]]s are simply its diagonal entries, &amp;lt;math&amp;gt; f(x_0), \dots, f(x_n) &amp;lt;/math&amp;gt;.&lt;br /&gt;
* Let &amp;lt;math&amp;gt;\delta_\xi&amp;lt;/math&amp;gt; be a [[Kronecker delta]]-like function, that is &amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\delta_\xi(t) = \begin{cases}&lt;br /&gt;
1 &amp;amp;: t=\xi , \\&lt;br /&gt;
0 &amp;amp;: \mbox{else}.&lt;br /&gt;
\end{cases}&amp;lt;/math&amp;gt; Since &amp;lt;math&amp;gt;f\cdot \delta_\xi = f(\xi)\cdot \delta_\xi&amp;lt;/math&amp;gt;, the function &amp;lt;math&amp;gt;\delta_\xi&amp;lt;/math&amp;gt; is an [[eigenfunction]] of pointwise function multiplication. In this sense, &amp;lt;math&amp;gt;T_{\delta_{x_i}}(x)&amp;lt;/math&amp;gt; is an &amp;quot;[[eigenmatrix]]&amp;quot; of &amp;lt;math&amp;gt;T_f(x)&amp;lt;/math&amp;gt;: &amp;lt;math&amp;gt; T_f(x) \cdot T_{\delta_{x_i}} (x) = f(x_i) \cdot T_{\delta_{x_i}}(x) &amp;lt;/math&amp;gt;. However, since all columns of &amp;lt;math&amp;gt;T_{\delta_{x_i}}(x)&amp;lt;/math&amp;gt; are multiples of each other, the [[matrix rank]] of &amp;lt;math&amp;gt;T_{\delta_{x_i}}(x)&amp;lt;/math&amp;gt; is 1. One can therefore compose the matrix of all eigenvectors of &amp;lt;math&amp;gt;T_f(x)&amp;lt;/math&amp;gt; from the &amp;lt;math&amp;gt;i&amp;lt;/math&amp;gt;-th column of each &amp;lt;math&amp;gt;T_{\delta_{x_i}}(x)&amp;lt;/math&amp;gt;. Denote this matrix of eigenvectors by &amp;lt;math&amp;gt;U(x)&amp;lt;/math&amp;gt;. For example, &amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; U(x_0,x_1,x_2,x_3) = \begin{pmatrix}&lt;br /&gt;
1 &amp;amp; \frac{1}{(x_1-x_0)} &amp;amp; \frac{1}{(x_2-x_0) (x_2-x_1)} &amp;amp; \frac{1}{(x_3-x_0) (x_3-x_1) (x_3-x_2)} \\&lt;br /&gt;
0 &amp;amp; 1 &amp;amp; \frac{1}{(x_2-x_1)} &amp;amp; \frac{1}{(x_3-x_1) (x_3-x_2)} \\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 1 &amp;amp; \frac{1}{(x_3-x_2)} \\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 1&lt;br /&gt;
\end{pmatrix} &amp;lt;/math&amp;gt; The [[diagonalizable matrix|diagonalization]] of &amp;lt;math&amp;gt;T_f(x)&amp;lt;/math&amp;gt; can be written as &amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; U(x) \cdot \operatorname{diag}(f(x_0),\dots,f(x_n)) = T_f(x) \cdot U(x) .&amp;lt;/math&amp;gt;&lt;br /&gt;
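The ring structure of these matrices can be verified numerically. The following Python sketch (our own illustration; `T` builds the upper triangular divided-difference matrix entrywise from the recursion) checks the Leibniz property in matrix form and, as a consequence, commutativity:&lt;br /&gt;

```python
def dd(xs, ys):
    # forward divided difference via the defining recursion
    if len(ys) == 1:
        return ys[0]
    return (dd(xs[1:], ys[1:]) - dd(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

def T(xs, f):
    # upper triangular divided-difference matrix of f on the nodes xs
    n = len(xs)
    return [[dd(xs[i:j + 1], [f(x) for x in xs[i:j + 1]]) if j >= i else 0.0
             for j in range(n)] for i in range(n)]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

xs = [0.0, 1.0, 2.0, 4.0]
f = lambda t: t * t
g = lambda t: t + 1.0
lhs = T(xs, lambda t: f(t) * g(t))   # T_{f*g}
rhs = matmul(T(xs, f), T(xs, g))     # T_f * T_g
rhs2 = matmul(T(xs, g), T(xs, f))    # T_g * T_f
```

Both products agree with the matrix of the pointwise product, so the multiplication indeed commutes.&lt;br /&gt;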
&lt;br /&gt;
===Polynomials and power series===&lt;br /&gt;
The matrix &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
J =&lt;br /&gt;
\begin{pmatrix}&lt;br /&gt;
x_0 &amp;amp; 1 &amp;amp; 0 &amp;amp; 0 &amp;amp; \cdots &amp;amp; 0 \\&lt;br /&gt;
0 &amp;amp; x_1 &amp;amp; 1 &amp;amp; 0 &amp;amp; \cdots &amp;amp; 0 \\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; x_2 &amp;amp; 1 &amp;amp;        &amp;amp; 0 \\&lt;br /&gt;
\vdots &amp;amp; \vdots &amp;amp; &amp;amp; \ddots &amp;amp; \ddots &amp;amp; \\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp;    \; \ddots     &amp;amp; 1\\&lt;br /&gt;
0 &amp;amp; 0 &amp;amp; 0 &amp;amp; 0 &amp;amp;          &amp;amp; x_n&lt;br /&gt;
\end{pmatrix}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
contains the divided difference scheme for the [[identity function]] with respect to the nodes &amp;lt;math&amp;gt;x_0,\dots,x_n&amp;lt;/math&amp;gt;, thus &amp;lt;math&amp;gt;J^m&amp;lt;/math&amp;gt; contains the divided differences for the [[monomial|power function]] with [[exponent]] &amp;lt;math&amp;gt;m&amp;lt;/math&amp;gt;.&lt;br /&gt;
Consequently, you can obtain the divided differences for a [[polynomial function]] &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; by applying &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; to the matrix &amp;lt;math&amp;gt;J&amp;lt;/math&amp;gt;: If&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;p(\xi) = a_0 + a_1 \cdot \xi + \dots + a_m \cdot \xi^m&amp;lt;/math&amp;gt;&lt;br /&gt;
and&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;p(J) = a_0 + a_1\cdot J + \dots + a_m\cdot J^m&amp;lt;/math&amp;gt;&lt;br /&gt;
then&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;T_p(x) = p(J).&amp;lt;/math&amp;gt;&lt;br /&gt;
This is known as &amp;#039;&amp;#039;Opitz&amp;#039; formula&amp;#039;&amp;#039;.&amp;lt;ref&amp;gt;[[Carl de Boor|de Boor, Carl]], &amp;#039;&amp;#039;Divided Differences&amp;#039;&amp;#039;, Surv. Approx. Theory  1  (2005), 46–69, [http://www.emis.de/journals/SAT/papers/2/]&amp;lt;/ref&amp;gt;&amp;lt;ref&amp;gt;Opitz, G. &amp;#039;&amp;#039;Steigungsmatrizen&amp;#039;&amp;#039;, Z. Angew. Math. Mech. (1964), 44, T52–T54&amp;lt;/ref&amp;gt;&lt;br /&gt;
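Opitz' formula can be illustrated concretely. In the Python sketch below (ours, not from the cited sources), we take the polynomial &lt;math&gt;p(\xi) = 2 + 3\xi + \xi^2&lt;/math&gt; on three nodes, form &lt;math&gt;p(J) = 2I + 3J + J^2&lt;/math&gt;, and compare its top row with the divided differences &lt;math&gt;p[x_0], p[x_0,x_1], p[x_0,x_1,x_2]&lt;/math&gt;:&lt;br /&gt;

```python
def dd(xs, ys):
    # forward divided difference via the defining recursion
    if len(ys) == 1:
        return ys[0]
    return (dd(xs[1:], ys[1:]) - dd(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

xs = [0.0, 1.0, 3.0]
n = len(xs)
# J has the nodes on the diagonal and ones on the superdiagonal.
J = [[xs[i] if j == i else (1.0 if j == i + 1 else 0.0) for j in range(n)]
     for i in range(n)]
I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]

J2 = matmul(J, J)
pJ = [[2.0 * I[i][j] + 3.0 * J[i][j] + J2[i][j] for j in range(n)]
      for i in range(n)]  # p(J) for p(xi) = 2 + 3*xi + xi^2

p = lambda t: 2.0 + 3.0 * t + t * t
top_row = [dd(xs[:j + 1], [p(x) for x in xs[:j + 1]]) for j in range(n)]
```

The top row of `pJ` is exactly the first row of the divided difference scheme &lt;math&gt;T_p(x)&lt;/math&gt;, as Opitz' formula asserts.&lt;br /&gt;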
&lt;br /&gt;
Now consider increasing the degree of &amp;lt;math&amp;gt;p&amp;lt;/math&amp;gt; to infinity, i.e. turn the Taylor polynomial into a [[Taylor series]].&lt;br /&gt;
Let &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; be a function which corresponds to a [[power series]].&lt;br /&gt;
You can compute the divided difference scheme for &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; by applying the corresponding matrix series to &amp;lt;math&amp;gt;J&amp;lt;/math&amp;gt;:&lt;br /&gt;
If&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f(\xi) = \sum_{k=0}^\infty a_k \xi^k&amp;lt;/math&amp;gt;&lt;br /&gt;
and&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f(J)=\sum_{k=0}^\infty a_k J^k&amp;lt;/math&amp;gt;&lt;br /&gt;
then&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;T_f(x)=f(J).&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Alternative characterizations==&lt;br /&gt;
&lt;br /&gt;
===Expanded form===&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\begin{align}&lt;br /&gt;
f[x_0] &amp;amp;= f(x_0) \\&lt;br /&gt;
f[x_0,x_1] &amp;amp;= \frac{f(x_0)}{(x_0-x_1)} + \frac{f(x_1)}{(x_1-x_0)} \\&lt;br /&gt;
f[x_0,x_1,x_2] &amp;amp;= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)} + \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)} \\&lt;br /&gt;
f[x_0,x_1,x_2,x_3] &amp;amp;= \frac{f(x_0)}{(x_0-x_1)\cdot(x_0-x_2)\cdot(x_0-x_3)} + \frac{f(x_1)}{(x_1-x_0)\cdot(x_1-x_2)\cdot(x_1-x_3)} +\\&lt;br /&gt;
&amp;amp;\quad\quad \frac{f(x_2)}{(x_2-x_0)\cdot(x_2-x_1)\cdot(x_2-x_3)} + \frac{f(x_3)}{(x_3-x_0)\cdot(x_3-x_1)\cdot(x_3-x_2)} \\&lt;br /&gt;
f[x_0,\dots,x_n] &amp;amp;=&lt;br /&gt;
\sum_{j=0}^{n} \frac{f(x_j)}{\prod_{k\in\{0,\dots,n\}\setminus\{j\}} (x_j-x_k)}&lt;br /&gt;
\end{align}&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
With the help of the [[polynomial function]] &amp;lt;math&amp;gt;\omega(\xi) = (\xi-x_0) \cdots (\xi-x_n)&amp;lt;/math&amp;gt; this can be written as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
f[x_0,\dots,x_n] = \sum_{j=0}^{n} \frac{f(x_j)}{\omega&amp;#039;(x_j)}.&lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
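The expanded form and the recursive definition are two routes to the same number, which is easy to confirm in code. A minimal Python sketch (ours; the sample nodes and the use of the sine function are arbitrary choices):&lt;br /&gt;

```python
import math

def dd(xs, ys):
    # forward divided difference via the defining recursion
    if len(ys) == 1:
        return ys[0]
    return (dd(xs[1:], ys[1:]) - dd(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

def dd_expanded(xs, ys):
    # sum over j of f(x_j) / prod_{k != j} (x_j - x_k)
    total = 0.0
    for j in range(len(xs)):
        denom = 1.0
        for k in range(len(xs)):
            if k != j:
                denom *= xs[j] - xs[k]
        total += ys[j] / denom
    return total

xs = [0.0, 0.5, 2.0, 3.0]
ys = [math.sin(x) for x in xs]
```

Both evaluations agree up to rounding error.&lt;br /&gt;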
&lt;br /&gt;
===Peano form===&lt;br /&gt;
If &amp;lt;math&amp;gt;x_0&amp;lt;x_1&amp;lt;\cdots&amp;lt;x_n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;n\geq 1&amp;lt;/math&amp;gt;, the divided differences can be expressed as&amp;lt;ref&amp;gt;{{Cite book |last=Skof |first=Fulvia |url=https://books.google.com/books?id=ulUM2GagwacC&amp;amp;dq=This+is+called+the+Peano+form+of+the+divided+differences&amp;amp;pg=PA41 |title=Giuseppe Peano between Mathematics and Logic: Proceeding of the International Conference in honour of Giuseppe Peano on the 150th anniversary of his birth and the centennial of the Formulario Mathematico Torino (Italy) October 2-3, 2008 |date=2011-04-30 |publisher=Springer Science &amp;amp; Business Media |isbn=978-88-470-1836-5 |pages=40 |language=en}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;f[x_0,\ldots,x_n] = \frac{1}{(n-1)!} \int_{x_0}^{x_n} f^{(n)}(t)\;B_{n-1}(t) \, dt&amp;lt;/math&amp;gt;&lt;br /&gt;
where &amp;lt;math&amp;gt;f^{(n)}&amp;lt;/math&amp;gt; is the &amp;lt;math&amp;gt;n&amp;lt;/math&amp;gt;-th [[derivative]] of the function &amp;lt;math&amp;gt;f&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;B_{n-1}&amp;lt;/math&amp;gt; is a certain [[B-spline]] of degree &amp;lt;math&amp;gt;n-1&amp;lt;/math&amp;gt; for the data points &amp;lt;math&amp;gt;x_0,\dots,x_n&amp;lt;/math&amp;gt;, given by the formula&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;B_{n-1}(t) = \sum_{k=0}^n \frac{(\max(0,x_k-t))^{n-1}}{\omega&amp;#039;(x_k)}&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This is a consequence of the [[Peano kernel theorem]]; it is called the &amp;#039;&amp;#039;Peano form&amp;#039;&amp;#039; of the divided differences and &amp;lt;math&amp;gt;B_{n-1}&amp;lt;/math&amp;gt; is the &amp;#039;&amp;#039;Peano kernel&amp;#039;&amp;#039; for the divided differences, all named after [[Giuseppe Peano]].&lt;br /&gt;
&lt;br /&gt;
===Forward and backward differences===&lt;br /&gt;
{{details|Finite difference}}&lt;br /&gt;
&lt;br /&gt;
When the data points are equidistantly distributed we get the special case called &amp;#039;&amp;#039;&amp;#039;forward differences&amp;#039;&amp;#039;&amp;#039;. They are easier to calculate than the more general divided differences.&lt;br /&gt;
&lt;br /&gt;
Given &amp;#039;&amp;#039;n&amp;#039;&amp;#039;+1 data points&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;(x_0, y_0), \ldots, (x_n, y_n)&amp;lt;/math&amp;gt;&lt;br /&gt;
with&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;x_{k} = x_0 + k h,\ \text{ for } \ k=0,\ldots,n \text{ and fixed } h&amp;gt;0&amp;lt;/math&amp;gt;&lt;br /&gt;
the forward differences are defined as&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\begin{align}&lt;br /&gt;
\Delta^{(0)} y_k &amp;amp;:= y_k,\qquad k=0,\ldots,n \\&lt;br /&gt;
\Delta^{(j)}y_k &amp;amp;:= \Delta^{(j-1)}y_{k+1} - \Delta^{(j-1)}y_k,\qquad k=0,\ldots,n-j,\ j=1,\dots,n.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;whereas the backward differences are defined as:&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;\begin{align}&lt;br /&gt;
\nabla^{(0)} y_k &amp;amp;:= y_k,\qquad k=0,\ldots,n \\&lt;br /&gt;
\nabla^{(j)}y_k &amp;amp;:= \nabla^{(j-1)}y_{k} - \nabla^{(j-1)}y_{k-1},\qquad k=0,\ldots,n-j,\ j=1,\dots,n.&lt;br /&gt;
\end{align}&amp;lt;/math&amp;gt;&lt;br /&gt;
Thus the [[Finite difference|forward difference]] table is written as:&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\begin{matrix}&lt;br /&gt;
y_0 &amp;amp;               &amp;amp;                   &amp;amp;                  \\&lt;br /&gt;
    &amp;amp; \Delta y_0 &amp;amp;                   &amp;amp;                  \\&lt;br /&gt;
y_1 &amp;amp;               &amp;amp; \Delta^2 y_0 &amp;amp;                  \\&lt;br /&gt;
    &amp;amp; \Delta y_1 &amp;amp;                   &amp;amp; \Delta^3 y_0\\&lt;br /&gt;
y_2 &amp;amp;               &amp;amp; \Delta^2 y_1 &amp;amp;                  \\&lt;br /&gt;
    &amp;amp; \Delta y_2 &amp;amp;                   &amp;amp;                  \\&lt;br /&gt;
y_3 &amp;amp;               &amp;amp;                   &amp;amp;                  \\&lt;br /&gt;
\end{matrix}&lt;br /&gt;
&amp;lt;/math&amp;gt;whereas the backwards difference table is written as:&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;&lt;br /&gt;
\begin{matrix}&lt;br /&gt;
y_0 &amp;amp;               &amp;amp;                   &amp;amp;                  \\&lt;br /&gt;
    &amp;amp; \nabla y_1 &amp;amp;                   &amp;amp;                  \\&lt;br /&gt;
y_1 &amp;amp;               &amp;amp; \nabla^2 y_2 &amp;amp;                  \\&lt;br /&gt;
    &amp;amp; \nabla y_2 &amp;amp;                   &amp;amp; \nabla^3 y_3\\&lt;br /&gt;
y_2 &amp;amp;               &amp;amp; \nabla^2 y_3 &amp;amp;                  \\&lt;br /&gt;
    &amp;amp; \nabla y_3 &amp;amp;                   &amp;amp;                  \\&lt;br /&gt;
y_3 &amp;amp;               &amp;amp;                   &amp;amp;                  \\&lt;br /&gt;
\end{matrix} &lt;br /&gt;
&amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The relationship between divided differences and forward differences is&amp;lt;ref&amp;gt;{{cite book|last1=Burden|first1=Richard L.| last2=Faires|first2=J. Douglas | title=Numerical Analysis |url=https://archive.org/details/numericalanalysi00rlbu |url-access=limited | date=2011|page=[https://archive.org/details/numericalanalysi00rlbu/page/n146 129]|publisher=Cengage Learning | isbn=9780538733519| edition=9th}}&amp;lt;/ref&amp;gt;&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;[y_j, y_{j+1}, \ldots , y_{j+k}] = \frac{1}{k!h^k}\Delta^{(k)}y_j,    &amp;lt;/math&amp;gt;whereas for backward differences:{{Citation needed|date=December 2023}}&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt;[{y}_{j}, y_{j-1},\ldots,{y}_{j-k}] = \frac{1}{k!h^k}\nabla^{(k)}y_j.       &amp;lt;/math&amp;gt;&lt;br /&gt;
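The forward-difference relation can be checked with a few lines of Python (our own sketch; `forward_diff` applies the differencing operator &lt;math&gt;\Delta&lt;/math&gt; repeatedly):&lt;br /&gt;

```python
import math

def dd(xs, ys):
    # forward divided difference via the defining recursion
    if len(ys) == 1:
        return ys[0]
    return (dd(xs[1:], ys[1:]) - dd(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

def forward_diff(ys, k):
    # k-fold forward difference Delta^(k) y_0
    d = list(ys)
    for _ in range(k):
        d = [d[i + 1] - d[i] for i in range(len(d) - 1)]
    return d[0]

h = 0.5
xs = [1.0 + i * h for i in range(5)]   # equidistant nodes
ys = [x ** 3 for x in xs]
# |[y_0,...,y_k] - Delta^(k) y_0 / (k! h^k)| for k = 0,...,4
checks = [abs(dd(xs[:k + 1], ys[:k + 1])
              - forward_diff(ys, k) / (math.factorial(k) * h ** k))
          for k in range(5)]
```

All differences vanish up to rounding error, confirming &lt;math&gt;[y_0,\ldots,y_k] = \Delta^{(k)}y_0 / (k!\,h^k)&lt;/math&gt;.&lt;br /&gt;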
&lt;br /&gt;
===Explicit formula===&lt;br /&gt;
&lt;br /&gt;
When the data points are equispaced we can also derive an explicit formula for &amp;lt;math&amp;gt;[y_j,\ldots,y_{j+k}]&amp;lt;/math&amp;gt;. For any fixed &amp;lt;math&amp;gt;k \leq n&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;j + k \leq n&amp;lt;/math&amp;gt;,&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; [y_j,\ldots,y_{j+ k}] = \frac{(-1)^{j+k}}{k!h^k} \sum_{i=j}^{j+k} (-1)^i y_i \binom{k}{i-j}. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;&amp;#039;Proof.&amp;#039;&amp;#039;&amp;#039; We prove this by induction on &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;:&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Base case:&amp;#039;&amp;#039; Let &amp;lt;math&amp;gt;k = 0&amp;lt;/math&amp;gt; and &amp;lt;math&amp;gt;j \in \{0, \ldots, n\}&amp;lt;/math&amp;gt;. Then by definition &amp;lt;math&amp;gt;[y_j] = y_j&amp;lt;/math&amp;gt;, and &lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \frac{(-1)^{j}}{0!h^0} \sum_{i=j}^{j} (-1)^i y_i \binom{0}{i-j} = (-1)^{2j} y_j = y_j. &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;#039;&amp;#039;Induction step:&amp;#039;&amp;#039; Assume the formula holds for all orders up to &amp;lt;math&amp;gt;k&amp;lt;/math&amp;gt;, and consider &amp;lt;math&amp;gt;j&amp;lt;/math&amp;gt; such that &amp;lt;math&amp;gt;j+k+1 \leq n&amp;lt;/math&amp;gt;. By the [[recursive definition]] we have&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; [y_j,\ldots,y_{j+k+1}] = \frac{1}{h(k+1)} \left ( [y_{j+1},\ldots,y_{j+k+1}] - [y_j,\ldots,y_{j+k}] \right ). &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
We can apply the inductive hypothesis to both divided differences on the right-hand side, and obtain&lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \frac{1}{h(k+1)} \left ( \frac{(-1)^{j+k+1}}{k!h^k} \sum_{i=j+1}^{j+k+1} (-1)^i y_i \binom{k}{i-j} - \frac{(-1)^{j+k}}{k!h^k} \sum_{i=j}^{j+k} (-1)^i y_i \binom{k}{i-j} \right ). &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
This can be rearranged as &lt;br /&gt;
&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \frac{(-1)^{j+k+1}}{(k+1)!h^{k+1}} \sum_{i=j}^{j+k+1} (-1)^i y_i \left ( \binom{k}{i-j} + \binom{k}{i-j-1} \right ) &amp;lt;/math&amp;gt;&lt;br /&gt;
&lt;br /&gt;
where &amp;lt;math&amp;gt; \binom{k}{s} = 0 &amp;lt;/math&amp;gt; when &amp;lt;math&amp;gt;s &amp;lt; 0&amp;lt;/math&amp;gt; or &amp;lt;math&amp;gt;s &amp;gt; k&amp;lt;/math&amp;gt;. We obtain the claim for &amp;lt;math&amp;gt; k+1 &amp;lt;/math&amp;gt; by substituting the identity&lt;br /&gt;
&amp;lt;math display=&amp;quot;block&amp;quot;&amp;gt; \binom{k}{i - j} + \binom{k}{i - j - 1} = \binom{k + 1}{i - j}. &amp;lt;/math&amp;gt;&lt;br /&gt;
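The explicit formula just proved can also be tested numerically against the recursion for all admissible &lt;math&gt;j&lt;/math&gt; and &lt;math&gt;k&lt;/math&gt;. A short Python sketch (ours; the exponential sample data is an arbitrary choice):&lt;br /&gt;

```python
import math

def dd(xs, ys):
    # forward divided difference via the defining recursion
    if len(ys) == 1:
        return ys[0]
    return (dd(xs[1:], ys[1:]) - dd(xs[:-1], ys[:-1])) / (xs[-1] - xs[0])

def dd_explicit(j, k, ys, h):
    # (-1)^(j+k)/(k! h^k) * sum_{i=j}^{j+k} (-1)^i y_i C(k, i-j)
    s = sum((-1) ** i * ys[i] * math.comb(k, i - j)
            for i in range(j, j + k + 1))
    return (-1) ** (j + k) * s / (math.factorial(k) * h ** k)

h = 0.25
xs = [i * h for i in range(6)]         # equispaced nodes
ys = [math.exp(x) for x in xs]
errs = [abs(dd(xs[j:j + k + 1], ys[j:j + k + 1]) - dd_explicit(j, k, ys, h))
        for k in range(6) for j in range(6 - k)]
```

The two computations agree for every pair &lt;math&gt;(j, k)&lt;/math&gt; with &lt;math&gt;j + k \leq n&lt;/math&gt;.&lt;br /&gt;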
&lt;br /&gt;
== See also ==&lt;br /&gt;
* [[Difference quotient]]&lt;br /&gt;
* [[Neville&amp;#039;s algorithm]]&lt;br /&gt;
* [[Polynomial interpolation]]&lt;br /&gt;
* [[Mean value theorem for divided differences]]&lt;br /&gt;
* [[Nörlund–Rice integral]]&lt;br /&gt;
* [[Pascal&amp;#039;s triangle]]&lt;br /&gt;
* [[Lagrange polynomial]]&lt;br /&gt;
&lt;br /&gt;
==References==&lt;br /&gt;
{{Reflist|1}}&lt;br /&gt;
* {{cite book|author=Louis Melville Milne-Thomson|author-link=L. M. Milne-Thomson| title=The Calculus of Finite Differences | year=2000|orig-year=1933| publisher=American Mathematical Soc.| isbn=978-0-8218-2107-7|at=Chapter 1: Divided Differences}}&lt;br /&gt;
* {{cite book|author1=Myron B. Allen|author2=Eli L. Isaacson| title=Numerical Analysis for Applied Science | year=1998|publisher=John Wiley &amp;amp; Sons|isbn=978-1-118-03027-1| at=Appendix A}}&lt;br /&gt;
* {{cite book|author=Ron Goldman|title=Pyramid Algorithms: A Dynamic Programming Approach to Curves and Surfaces for Geometric Modeling|year=2002| publisher=Morgan Kaufmann| isbn=978-0-08-051547-2| at=Chapter 4:Newton Interpolation and Difference Triangles}}&lt;br /&gt;
&lt;br /&gt;
== External links  ==&lt;br /&gt;
* An [http://code.henning-thielemann.de/htam/src/Numerics/Interpolation/DividedDifference.hs implementation] in [[Haskell (programming language)|Haskell]].&lt;br /&gt;
&lt;br /&gt;
[[Category:Finite differences]]&lt;br /&gt;
&lt;br /&gt;
[[de:Polynominterpolation#Bestimmung der Koeffizienten: Schema der dividierten Differenzen]]&lt;/div&gt;</summary>
		<author><name>imported&gt;Nolord</name></author>
	</entry>
</feed>