# Interpolation

## Polynomial Interpolation

Interpolation is a way of extending discrete data points to a function. If the given data points lie in ${\displaystyle \mathbf {R} ^{2}}$, then polynomial interpolation is common. The main idea behind polynomial interpolation is that given n+1 discrete data points, there exists a unique polynomial of degree n that fits those data points. A polynomial of lower degree is not generally possible, and a polynomial of higher degree is not unique. If the polynomial ${\displaystyle p(x)}$ is defined as

${\displaystyle p(x)=a_{n}x^{n}+a_{n-1}x^{n-1}+\ldots +a_{1}x+a_{0}}$

then the unknown coefficients can be solved for with the system

${\displaystyle {\begin{bmatrix}x_{0}^{n}&x_{0}^{n-1}&x_{0}^{n-2}&\ldots &x_{0}&1\\x_{1}^{n}&x_{1}^{n-1}&x_{1}^{n-2}&\ldots &x_{1}&1\\\vdots &\vdots &\vdots &&\vdots &\vdots \\x_{n}^{n}&x_{n}^{n-1}&x_{n}^{n-2}&\ldots &x_{n}&1\end{bmatrix}}{\begin{bmatrix}a_{n}\\a_{n-1}\\\vdots \\a_{0}\end{bmatrix}}={\begin{bmatrix}y_{0}\\y_{1}\\\vdots \\y_{n}\end{bmatrix}}.}$

where ${\displaystyle (x_{0},y_{0}),\ldots ,(x_{n},y_{n})}$ are the given data points. The matrix in the system is known as the Vandermonde matrix. Assuming all of the ${\displaystyle x_{i}}$ are distinct, the Vandermonde matrix is nonsingular and therefore the system can be solved uniquely.

If the number of data points being interpolated becomes large, the degree of the polynomial becomes large as well, which may result in oscillations between the data points and an increase in error. For large numbers of data points, alternative methods such as the radial basis function interpolation discussed below are therefore preferred.
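The coefficients can be computed directly from the Vandermonde system above. A minimal sketch in Python (the data values here are made up for illustration):

```python
import numpy as np

# Hypothetical data: four points determine a unique cubic (degree 3).
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 5.0])

V = np.vander(x)           # Vandermonde matrix, columns x^3, x^2, x, 1
a = np.linalg.solve(V, y)  # coefficients a_3, a_2, a_1, a_0
p = np.poly1d(a)           # p(x) = a_3 x^3 + a_2 x^2 + a_1 x + a_0

print(p(x))                # reproduces y at the nodes
```

Since the nodes are distinct, `V` is nonsingular and the solve recovers the unique interpolating cubic.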

A common problem in science and engineering is the multivariate interpolation of a function f whose values are known only on a finite set of points. Therefore, let ${\displaystyle \Omega \subset \mathbf {R} ^{d}}$ be an open bounded domain. Given a set of distinct points ${\displaystyle X=\{x_{1},\ldots ,x_{N}\}\subset \Omega }$ and real numbers ${\displaystyle f_{1},\ldots ,f_{N}}$, the task is to construct a function ${\displaystyle s:\mathbf {R} ^{d}\rightarrow \mathbf {R} }$ that satisfies the interpolation conditions

${\displaystyle s(x_{i})=f_{i},\quad i=1,\ldots ,N.}$

In recent decades, the approach of radial basis function (RBF) interpolation has become increasingly popular for problems in which the points in X are irregularly distributed in space. In its basic form, radial basis function interpolation chooses a fixed function ${\displaystyle \phi :\mathbf {R} _{+}\rightarrow \mathbf {R} }$ and defines an interpolant by

${\displaystyle s(x)=\sum _{i=1}^{N}c_{i}\phi (\|x-x_{i}\|),}$

where the coefficients ${\displaystyle c_{i}}$ are real numbers, ${\displaystyle \|\cdot \|}$ is usually chosen to be the Euclidean norm, and ${\displaystyle \phi }$ is a radial basis function.

We see that the approximating function s is a linear combination of radially symmetric functions, each centered on a definite point ${\displaystyle x_{i}}$. The points ${\displaystyle x_{i}}$ are often called the centers or collocation points of the RBF interpolant. This approach goes back to Hardy, who used the multiquadric RBF as early as 1971 to reconstruct geographical surfaces from a set of scattered measurements. Note also that the centers ${\displaystyle x_{i}}$ can be located at arbitrary points in the domain and do not require a grid. For certain RBFs, exponential convergence has been shown. Radial basis functions have been applied successfully to problems as diverse as computer graphics, neural networks, and the solution of differential equations via collocation methods.
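As a concrete illustration, the following Python sketch interpolates scattered 2-D data with Hardy's multiquadric ${\displaystyle \phi (r)={\sqrt {1+(\epsilon r)^{2}}}}$; the centers, data values, and shape parameter are all made-up assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((20, 2))                 # scattered centers in the unit square
f = np.sin(X[:, 0]) + X[:, 1] ** 2      # hypothetical data f_i

eps = 3.0

def phi(r):
    # Hardy's multiquadric phi(r) = sqrt(1 + (eps*r)^2)
    return np.sqrt(1.0 + (eps * r) ** 2)

# A_ij = phi(||x_i - x_j||): only pairwise distances enter the matrix
R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
c = np.linalg.solve(phi(R), f)

def s(x):
    # s(x) = sum_j c_j phi(||x - x_j||)
    return phi(np.linalg.norm(x - X, axis=-1)) @ c
```

Because only distances appear, the same code works unchanged in any spatial dimension; only the shape of `X` changes.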

Other popular choices for ${\displaystyle \phi }$ include the Gaussian function and the so-called thin plate splines, invented by Duchon in the 1970s. Thin plate splines result from the solution of a variational problem, and their conditioning is invariant under scalings. Another class of functions are Wendland's compactly supported RBFs, which are relevant because they lead to a sparse interpolation matrix, so the resulting linear system can be solved efficiently; however, the conditioning of the problem gets worse as the number of centers grows. When interpolating with RBFs of global support, the matrix A usually becomes ill-conditioned with an increasing number of centers. Gaussians, multiquadrics and inverse multiquadrics are infinitely smooth and involve a scale or shape parameter ${\displaystyle \epsilon >0}$; see \ref{fig:var-eps2} for the Gaussian RBF with different values of ${\displaystyle \epsilon }$. Decreasing ${\displaystyle \epsilon }$ tends to flatten the basis function, and for a given function the quality of approximation may depend strongly on this parameter. In particular, increasing ${\displaystyle \epsilon }$ improves conditioning (the separation distance ${\displaystyle q_{X}:=\min _{j\neq k}\Vert x_{j}-x_{k}\Vert }$ of the scaled points increases).
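The effect of ${\displaystyle \epsilon }$ on conditioning can be observed numerically. The sketch below (with made-up 1-D centers) prints the condition number of the Gaussian-RBF interpolation matrix for a few values of ${\displaystyle \epsilon }$:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 15)            # hypothetical 1-D centers
r = np.abs(x[:, None] - x[None, :])      # pairwise distances

for eps in (0.5, 2.0, 8.0):
    # Gaussian RBF matrix: a_ij = exp(-(eps * |x_i - x_j|)^2)
    A = np.exp(-(eps * r) ** 2)
    print(f"eps = {eps}: cond(A) = {np.linalg.cond(A):.2e}")
```

Smaller ${\displaystyle \epsilon }$ (flatter basis functions) produces a dramatically larger condition number, consistent with the discussion above.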

Solving the interpolation problem results in the linear system of equations

${\displaystyle Ac=f}$

with ${\displaystyle a_{ij}=\phi (\|x_{i}-x_{j}\|),\;c=(c_{j})_{j=1,\ldots ,N},\;f=(f_{j})_{j=1,\ldots ,N}}$. Because only the distances between the points appear in A, the effort to solve the system is independent of the spatial dimension, which becomes especially important when interpolating in higher dimensions. In 1982, Franke posed the question whether the matrix A is nonsingular, so that a unique interpolant is guaranteed to exist for any choice of f. A few years later, this conjecture was proven true for a number of useful RBFs, independently by results of Micchelli and of Madych and Nelson. The only requirement is that there are at least two centers and that all centers are distinct. An exception are the thin plate splines, for which the matrix A can be singular for nontrivial distributions of points. For example, if the points ${\displaystyle x_{2},\ldots ,x_{N}}$ are distributed on the unit sphere around the center ${\displaystyle x_{1}}$, then the first row and column of A consist of zeros. A remedy for this problem is to add to the interpolant a polynomial p of degree ${\displaystyle m\geq 1}$, so that the interpolant becomes

${\displaystyle s(x)=\sum _{j=1}^{N}c_{j}\phi (\|x-x_{j}\|)+p(x)}$

and to require that the node points are unisolvent for ${\displaystyle \Pi _{m}^{d}}$, which means that every polynomial ${\displaystyle p\in \Pi _{m}^{d}}$ is determined by its values on X. Here ${\displaystyle \Pi _{m}^{d}}$ denotes the space of all polynomials in d variables up to degree m, and the ${\displaystyle p_{k}}$ are the standard basis polynomials of this space. For example, a point set is unisolvent for ${\displaystyle \Pi _{0}^{d}}$ if all points are distinct, and unisolvent for ${\displaystyle \Pi _{1}^{d}}$ if not all points lie on one line.
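Unisolvency for ${\displaystyle \Pi _{1}^{d}}$ can be checked numerically via the rank of the polynomial collocation matrix; a small sketch (the function name and test points are illustrative):

```python
import numpy as np

def unisolvent_linear(X):
    # X is unisolvent for linear polynomials in R^2 iff the matrix with
    # rows (1, x_i, y_i) has full column rank, which fails exactly when
    # all points lie on one line.
    P = np.hstack([np.ones((len(X), 1)), X])
    return np.linalg.matrix_rank(P) == P.shape[1]

collinear = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
general   = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
print(unisolvent_linear(collinear), unisolvent_linear(general))
```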

The interpolation conditions are now

${\displaystyle \sum _{j=1}^{N}c_{j}\phi (\Vert x_{i}-x_{j}\Vert )+\sum _{j=1}^{l}d_{j}p_{j}(x_{i})=f_{i},\qquad i=1,...,N}$

Because the interpolation conditions result in N equations in N + l unknowns, the system is under-determined. Therefore, extra side conditions are imposed, which represent the property of polynomial reproduction. This means that if the data come from a polynomial ${\displaystyle p\in \Pi _{m}^{d}}$, i.e. ${\displaystyle f_{i}=p(x_{i})}$, then the interpolant s must coincide with p. These conditions amount to

${\displaystyle \sum _{i=1}^{N}c_{i}p_{j}(x_{i})=0,\quad 1\leq j\leq l.}$

To find c and d, we need to solve the linear system

${\displaystyle A\,c+P\,d=f,\qquad P^{T}c=0,}$

where ${\displaystyle P_{ij}=p_{j}(x_{i})}$ and A is defined as above. The equations may be summarized as

${\displaystyle {\begin{bmatrix}A&P\\P^{T}&0\end{bmatrix}}{\begin{bmatrix}c\\d\end{bmatrix}}={\begin{bmatrix}f\\0\end{bmatrix}}.}$
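Assembling and solving this augmented system is straightforward. The sketch below uses thin plate splines in ${\displaystyle \mathbf {R} ^{2}}$ with an appended linear polynomial (basis 1, x, y); all data and names are made up for illustration:

```python
import numpy as np

def tps(r):
    # thin plate spline phi(r) = r^2 log r, with the limit phi(0) = 0
    with np.errstate(divide="ignore", invalid="ignore"):
        v = r ** 2 * np.log(r)
    return np.where(r > 0, v, 0.0)

rng = np.random.default_rng(1)
X = rng.random((12, 2))                      # scattered centers
f = X[:, 0] ** 2 + np.cos(X[:, 1])          # hypothetical data

R = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
A = tps(R)
P = np.hstack([np.ones((len(X), 1)), X])     # P_ij = p_j(x_i), basis 1, x, y

# block system [[A, P], [P^T, 0]] [c; d] = [f; 0]
N, q = len(X), P.shape[1]
M = np.block([[A, P], [P.T, np.zeros((q, q))]])
sol = np.linalg.solve(M, np.concatenate([f, np.zeros(q)]))
c, d = sol[:N], sol[N:]

def s(x):
    # s(x) = sum_j c_j phi(||x - x_j||) + p(x)
    return tps(np.linalg.norm(x - X, axis=-1)) @ c + np.array([1.0, *x]) @ d
```

The random centers are (almost surely) unisolvent for linear polynomials, so the block matrix is nonsingular and the interpolant both matches the data and satisfies the side conditions ${\displaystyle P^{T}c=0}$.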

The system has a unique solution if the function ${\displaystyle \phi }$ belongs to the following class of functions.

Definition. A continuous function ${\displaystyle \Phi :\mathbf {R} ^{d}\rightarrow \mathbf {R} }$ is said to be conditionally positive definite of order m, shortly ${\displaystyle \Phi \in CPD(m)}$, if for every set ${\displaystyle X=\{x_{1},\ldots ,x_{N}\}\subset \Omega }$ of distinct points and for every vector of coefficients ${\displaystyle c=(c_{1},\ldots ,c_{N})}$ satisfying

${\displaystyle \sum _{j=1}^{N}c_{j}p(x_{j})=0\qquad {\text{for all }}p\in \Pi _{m}^{d},}$

the quadratic form is nonnegative,

${\displaystyle c\,A\,c^{T}=\sum _{j=1}^{N}\sum _{k=1}^{N}c_{j}c_{k}\Phi (x_{j}-x_{k})\geq 0.}$

If, additionally, ${\displaystyle c\,A\,c^{T}=0}$ implies ${\displaystyle c=0}$, then ${\displaystyle \Phi }$ is said to be strictly conditionally positive definite of order m. For ${\displaystyle m=0}$, ${\displaystyle \Phi }$ is said to be positive definite.

A radial basis function ${\displaystyle \phi (\|x-y\|)=\Phi (x-y)}$ is called conditionally positive definite if the corresponding multivariate function ${\displaystyle \Phi }$ is conditionally positive definite.

If a function is positive definite, then the corresponding interpolation matrix is positive definite as well. Thin plate splines and multiquadrics are conditionally positive definite functions; Gaussians, inverse multiquadrics and Wendland's functions are positive definite.

Besides proving that the problem is well posed, a proof of convergence is necessary. However, for most RBFs convergence is guaranteed only for global interpolation on infinite square grids. For the thin plate splines, convergence proofs for local interpolation are also known.

## The Generalized Hermite Interpolation Problem

The interpolation problem can also be expressed in terms of point evaluations, i.e. applications of the Dirac delta functionals acting on the function f. If ${\displaystyle {\mathcal {S}}}$ is a vector space of functions, then the Dirac delta functional ${\displaystyle \delta _{x}}$ corresponding to x is defined by

${\displaystyle \delta _{x}(f)=f(x),\quad f\in {\mathcal {S}}}$

The data is generated via

${\displaystyle f_{j}=\int f(x)\delta (x-x_{j})dx}$

As before, let ${\displaystyle \{x_{1},\ldots ,x_{N}\}}$ be the centers. More generally, the data can be generated by any set of linear functionals ${\displaystyle \lambda _{1},\ldots ,\lambda _{N}}$ acting on some function f. The functionals need to be linearly independent in order to avoid redundant data. The ${\displaystyle \lambda _{i}}$ are not restricted to point evaluations, but can be arbitrary linear functionals, including differentiation and difference operators; this generalization includes classical Hermite interpolation. Let ${\displaystyle {\mathcal {E}}'}$ denote the set of all distributions of compact support. To a distribution ${\displaystyle \lambda \in {\mathcal {E}}'}$ corresponds a linear functional acting on ${\displaystyle f\in {\mathcal {E}}}$ via the convolution of distributions. It can be written as

${\displaystyle (\lambda ,f)=\int (\lambda *f)(x)dx=\int f(y)\lambda (x-y)dy}$

Now the task is to interpolate the given data ${\displaystyle f_{i}=(\lambda _{i},f)}$, i.e. to find an interpolant of the form ${\displaystyle s=\Phi *\lambda +p}$ that satisfies

${\displaystyle (\lambda _{i},s)=f_{i},\quad i=1,\ldots ,N.}$

Here, the interpolant is of the form

${\displaystyle s(x)=\sum _{j=1}^{N}c_{j}\lambda _{j}^{y}\phi (\Vert x-y\Vert )+p(x)}$

where the ${\displaystyle c_{j}}$'s are suitable real coefficients, and ${\displaystyle \lambda _{j}^{y}}$ indicates that ${\displaystyle \lambda _{j}}$ acts on ${\displaystyle \phi }$ viewed as a function of the argument y.

As in the basic case, side conditions are imposed to ensure polynomial reproduction; they now read

${\displaystyle \sum _{j=1}^{N}c_{j}\lambda _{j}(p)=0\qquad {\text{for all }}p\in \Pi _{m}^{d}.}$

The entries of the interpolation matrix ${\displaystyle A=(a_{ij})}$ in \ref{eq:rbf-lgs} are now given by

${\displaystyle a_{ij}=\lambda _{i}^{x}\lambda _{j}^{y}\phi (\Vert x-y\Vert )}$

If the number of linear functionals equals the number of centers and if ${\displaystyle \phi }$ belongs to the class of conditionally positive definite functions, then a unique solution exists.
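A minimal 1-D sketch of such a Hermite problem, assuming a Gaussian basis function and functionals consisting of point evaluations plus one derivative evaluation (all node locations and data values are made up):

```python
import numpy as np

eps = 2.0  # shape parameter (an assumption)

def G(t):
    # Phi(x - y) = exp(-eps^2 (x - y)^2), written as a function of t = x - y
    return np.exp(-(eps * t) ** 2)

def Gy(t):
    # d/dy Phi(x - y) = 2 eps^2 (x - y) Phi(x - y)
    return 2 * eps**2 * t * G(t)

def Gxy(t):
    # d^2/(dx dy) Phi(x - y) = (2 eps^2 - 4 eps^4 (x - y)^2) Phi(x - y)
    return (2 * eps**2 - 4 * eps**4 * t**2) * G(t)

xv = np.array([0.0, 0.5, 1.0])    # nodes carrying point-evaluation functionals
xd = 0.25                         # node carrying a derivative functional
f = np.sin(np.pi * xv)            # made-up data f_i = f(x_i)
fd = np.pi * np.cos(np.pi * xd)   # made-up datum f'(x_d)

# a_ij = lambda_i^x lambda_j^y Phi(x - y)
A = np.empty((4, 4))
A[:3, :3] = G(xv[:, None] - xv[None, :])  # value       x value
A[:3, 3] = Gy(xv - xd)                    # value       x derivative
A[3, :3] = -Gy(xd - xv)                   # derivative  x value (d/dx flips sign)
A[3, 3] = Gxy(0.0)                        # derivative  x derivative
c = np.linalg.solve(A, np.append(f, fd))

def s(x):
    # s(x) = sum_j c_j lambda_j^y Phi(x - y)
    return c[:3] @ G(x - xv) + c[3] * Gy(x - xd)
```

By construction, the interpolant matches the function values at the nodes `xv` and the derivative datum at `xd`; since the Gaussian is positive definite and the four functionals are linearly independent, the matrix A is nonsingular.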