Sedimentation/Parameter Identification

Even though this chapter on parameter identification of flocculated suspensions is kept as self-contained as possible, preliminary knowledge of numerical methods and of the modeling of suspensions is useful. In particular, the Newton-Raphson scheme for solving nonlinear systems of equations is applied in the optimization, and finite-volume methods are applied for the solution of the partial differential equations.

Modeling of flocculated suspensions

The batch settling process of flocculated suspensions is modeled as an initial value problem

${\displaystyle {\frac {\partial u}{\partial t}}+{\frac {\partial f(u)}{\partial x}}={\frac {\partial ^{2}A(u)}{\partial x^{2}}},\quad u(0,x)=u_{0}(x),\quad x\in [0,L]}$

where ${\displaystyle u}$ denotes the volume fraction of the dispersed solids phase. For the closure, the convective flux function is given by the Kynch batch settling function with Richardson-Zaki hindrance function

${\displaystyle f(u)={\begin{cases}u_{\infty }u(1-u)^{C}&{\mbox{for }}0\leq u\leq u_{\max },\\0&{\mbox{otherwise}},\end{cases}}}$

and the diffusive flux is given by

${\displaystyle A(u)=\int _{0}^{u}a(s)ds,\quad a(u)=a_{0}(1-u)^{C}u^{k-1},}$

which results from the insertion of the power law

${\displaystyle \sigma _{\mathrm {e} }(u)={\begin{cases}\sigma _{0}\left(\left({\frac {u}{u_{\mathrm {c} }}}\right)^{k}-1\right)&{\mbox{for }}u>u_{\mathrm {c} },\\0&{\mbox{for }}u\leq u_{\mathrm {c} },\end{cases}}}$

into

${\displaystyle a(u)={\frac {f(u)\sigma _{\mathrm {e} }'(u)}{\Delta \varrho gu}},\quad a_{0}={\frac {u_{\infty }\sigma _{0}k}{\Delta \varrho gu_{\mathrm {c} }^{k}}}.}$

In the closure, the constants ${\displaystyle u_{\infty },C,a_{0},u_{\mathrm {c} },k}$ are only partly known; the remaining ones have to be identified from experimental data.
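As an illustration, the closure functions can be sketched in Python; all parameter values below are assumptions chosen for demonstration, not fitted values. Note that ${\displaystyle a(u)}$ vanishes for ${\displaystyle u\leq u_{\mathrm {c} }}$, since the effective stress is zero there.

```python
import numpy as np

# Illustrative parameter values (assumptions, not fitted to any data set)
u_inf = 1e-4   # settling velocity scale u_oo [m/s]
C = 5.0        # Richardson-Zaki exponent
u_max = 0.66   # maximum solids volume fraction
u_c = 0.1      # critical concentration (gel point)
k = 6.0        # power-law exponent of the effective stress
a_0 = 1e-6     # diffusion coefficient scale

def f(u):
    """Kynch batch flux with Richardson-Zaki hindrance function."""
    return np.where((u >= 0.0) & (u <= u_max), u_inf * u * (1.0 - u)**C, 0.0)

def a(u):
    """Diffusion coefficient a(u) = a_0 (1-u)^C u^(k-1), zero below the
    gel point u_c where the effective stress vanishes."""
    return np.where((u > u_c) & (u <= u_max),
                    a_0 * (1.0 - u)**C * u**(k - 1.0), 0.0)

def A(u, n=200):
    """Primitive A(u) = int_0^u a(s) ds, composite trapezoidal rule."""
    s = np.linspace(0.0, u, n)
    vals = a(s)
    return float(np.sum((vals[1:] + vals[:-1]) * np.diff(s) / 2.0))
```

The quadrature resolution `n` is a hypothetical choice; in a production solver one would tabulate ${\displaystyle A}$ once and interpolate.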

Numerical scheme

The numerical scheme for the solution of the direct problem is written in conservative form as a marching formula for the interior points ("interior scheme") as ${\displaystyle u_{j}^{n+1}=u_{j}^{n}-{\frac {\Delta t}{\Delta x}}\left(f_{j+1/2}^{n+1}-f_{j-1/2}^{n+1}\right)+{\frac {\Delta t}{\Delta x^{2}}}\left(A(u_{j-1}^{n+1})-2A(u_{j}^{n+1})+A(u_{j+1}^{n+1})\right),\quad j=2,\dots ,J-1}$

and at the boundaries ("boundary scheme") as

${\displaystyle u_{1}^{n+1}=u_{1}^{n}-{\frac {\Delta t}{\Delta x}}f_{3/2}^{n+1}+{\frac {\Delta t}{\Delta x^{2}}}\left(A(u_{2}^{n+1})-A(u_{1}^{n+1})\right),\quad u_{J}^{n+1}=u_{J}^{n}+{\frac {\Delta t}{\Delta x}}f_{J-1/2}^{n+1}-{\frac {\Delta t}{\Delta x^{2}}}\left(A(u_{J}^{n+1})-A(u_{J-1}^{n+1})\right)}$

For a first-order scheme, the numerical flux function becomes

${\displaystyle f_{j+1/2}^{n}=f(u_{j}^{n},u_{j+1}^{n}).}$ If the flux function has a single maximum, denoted by

${\displaystyle u_{\mathrm {m} }}$, the Engquist-Osher numerical flux function can be stated as

${\displaystyle f^{\mathrm {EO} }(u,v)=\left\{{\begin{matrix}&f(u),&{\mbox{for }}u\leq u_{\mathrm {m} },v\leq u_{\mathrm {m} },\\&f(u)+f(v)-f(u_{\mathrm {m} }),&{\mbox{for }}u\leq u_{\mathrm {m} },v>u_{\mathrm {m} },\\&f(u_{\mathrm {m} }),&{\mbox{for }}u>u_{\mathrm {m} },v\leq u_{\mathrm {m} },\\&f(v),&{\mbox{for }}u>u_{\mathrm {m} },v>u_{\mathrm {m} }.\end{matrix}}\right.}$
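The four cases above are equivalent to the compact form ${\displaystyle f^{\mathrm {EO} }(u,v)=f(\min(u,u_{\mathrm {m} }))+f(\max(v,u_{\mathrm {m} }))-f(u_{\mathrm {m} })}$, which a short sketch can exploit; the flux parameters below are illustrative assumptions:

```python
# Kynch batch flux with illustrative (assumed) parameters
def f(u, u_inf=1e-4, C=5.0):
    return u_inf * u * (1.0 - u)**C

# u(1-u)^C attains its single maximum at u_m = 1/(1+C)
u_m = 1.0 / (1.0 + 5.0)

def f_eo(u, v):
    """Engquist-Osher flux: f(min(u, u_m)) + f(max(v, u_m)) - f(u_m)."""
    return f(min(u, u_m)) + f(max(v, u_m)) - f(u_m)
```

The compact form reproduces all four cases: e.g. for ${\displaystyle u>u_{\mathrm {m} },v\leq u_{\mathrm {m} }}$ both the minimum and the maximum equal ${\displaystyle u_{\mathrm {m} }}$, leaving ${\displaystyle f(u_{\mathrm {m} })}$.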

For linearization, the Taylor formulae

${\displaystyle f(u_{j}^{n+1},u_{j+1}^{n+1})=f(u_{j}^{n},u_{j+1}^{n})+{\frac {\partial f(u_{j}^{n},u_{j+1}^{n})}{\partial u_{j}^{n}}}(u_{j}^{n+1}-u_{j}^{n})+{\frac {\partial f(u_{j}^{n},u_{j+1}^{n})}{\partial u_{j+1}^{n}}}(u_{j+1}^{n+1}-u_{j+1}^{n}),\quad j=1,\dots ,J}$

and

${\displaystyle A(u_{j}^{n+1})=A(u_{j}^{n})+{\frac {\partial A(u_{j}^{n})}{\partial u_{j}^{n}}}(u_{j}^{n+1}-u_{j}^{n}),\quad j=1,\dots ,J}$

are inserted. Abbreviating the time evolution step as

${\displaystyle \Delta u_{j}^{n}=u_{j}^{n+1}-u_{j}^{n},\quad j=1,\dots ,J,}$

the linearized marching formula for the interior scheme becomes

${\displaystyle \Delta u_{j}^{n}=-{\frac {\Delta t}{\Delta x}}\left(f_{j+1/2}^{n}+{\mathcal {J}}_{j+1/2}^{-}\Delta u_{j}^{n}+{\mathcal {J}}_{j+1/2}^{+}\Delta u_{j+1}^{n}-f_{j-1/2}^{n}-{\mathcal {J}}_{j-1/2}^{-}\Delta u_{j-1}^{n}-{\mathcal {J}}_{j-1/2}^{+}\Delta u_{j}^{n}\right)}$ ${\displaystyle +{\frac {\Delta t}{\Delta x^{2}}}\left(A(u_{j-1}^{n})-2A(u_{j}^{n})+A(u_{j+1}^{n})+a(u_{j-1}^{n})\Delta u_{j-1}^{n}-2a(u_{j}^{n})\Delta u_{j}^{n}+a(u_{j+1}^{n})\Delta u_{j+1}^{n}\right),}$

where

${\displaystyle {\mathcal {J}}_{j+1/2}^{+}={\frac {\partial f(u_{j}^{n},u_{j+1}^{n})}{\partial u_{j+1}^{n}}},\quad {\mathcal {J}}_{j+1/2}^{-}={\frac {\partial f(u_{j}^{n},u_{j+1}^{n})}{\partial u_{j}^{n}}}.}$
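If analytic differentiation of the numerical flux through its case distinctions is inconvenient, the Jacobian entries ${\displaystyle {\mathcal {J}}^{\pm }}$ can be approximated by central differences; a minimal sketch, with the two-point flux passed in as an argument:

```python
def flux_jacobian(f_num, u, v, h=1e-7):
    """Central-difference approximation of J^- = df/du and J^+ = df/dv
    for a two-point numerical flux f_num(u, v). Step size h is an
    illustrative choice."""
    j_minus = (f_num(u + h, v) - f_num(u - h, v)) / (2.0 * h)
    j_plus = (f_num(u, v + h) - f_num(u, v - h)) / (2.0 * h)
    return j_minus, j_plus
```

Near the kinks of the Engquist-Osher flux at ${\displaystyle u_{\mathrm {m} }}$ the one-sided derivatives differ, so finite differences there should be used with care.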

Rearrangement leads to a tridiagonal linear system

${\displaystyle -\left({\frac {\Delta t}{\Delta x}}{\mathcal {J}}_{j-1/2}^{-}+{\frac {\Delta t}{\Delta x^{2}}}a(u_{j-1}^{n})\right)\Delta u_{j-1}^{n}+\left(I+{\frac {\Delta t}{\Delta x}}\left({\mathcal {J}}_{j+1/2}^{-}-{\mathcal {J}}_{j-1/2}^{+}\right)+{\frac {2\Delta t}{\Delta x^{2}}}a(u_{j}^{n})\right)\Delta u_{j}^{n}}$ ${\displaystyle +\left({\frac {\Delta t}{\Delta x}}{\mathcal {J}}_{j+1/2}^{+}-{\frac {\Delta t}{\Delta x^{2}}}a(u_{j+1}^{n})\right)\Delta u_{j+1}^{n}}$ ${\displaystyle =-{\frac {\Delta t}{\Delta x}}\left(f(u_{j}^{n},u_{j+1}^{n})-f(u_{j-1}^{n},u_{j}^{n})\right)+{\frac {\Delta t}{\Delta x^{2}}}\left(A(u_{j-1}^{n})-2A(u_{j}^{n})+A(u_{j+1}^{n})\right),\quad j=2,\dots ,J-1,}$

which is of the form

${\displaystyle m_{j,j-1}\Delta u_{j-1}^{n}+m_{j,j}\Delta u_{j}^{n}+m_{j,j+1}\Delta u_{j+1}^{n}=b_{j},}$

or, in more compact notation,

${\displaystyle M\Delta u^{n}=b,\quad M={\begin{bmatrix}m_{11}&m_{12}&0&\cdots &\cdots &\cdots &\cdots &0\\m_{21}&m_{22}&m_{23}&0&\cdots &\cdots &\cdots &0\\\vdots &&&&&&&\vdots \\\vdots &&&&&&\ddots &\vdots \\0&\cdots &\cdots &\cdots &\cdots &0&m_{J,J-1}&m_{J,J}\end{bmatrix}},\quad b={\begin{bmatrix}b_{1}\\\vdots \\b_{J}\end{bmatrix}},\quad \Delta u^{n}={\begin{bmatrix}\Delta u_{1}^{n}\\\vdots \\\Delta u_{J}^{n}\end{bmatrix}}.}$
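Since ${\displaystyle M}$ is tridiagonal, the system can be solved in ${\displaystyle O(J)}$ operations with the Thomas algorithm. A minimal sketch, without pivoting, assuming the diagonal dominance that a sufficiently small time step is expected to provide:

```python
def thomas_solve(lower, diag, upper, b):
    """Solve a tridiagonal system M x = b with the Thomas algorithm.
    lower[j] = m_{j,j-1}, diag[j] = m_{j,j}, upper[j] = m_{j,j+1}
    (0-based indexing; lower[0] and upper[-1] are unused)."""
    n = len(diag)
    c = [0.0] * n   # modified superdiagonal coefficients
    d = [0.0] * n   # modified right-hand side
    c[0] = upper[0] / diag[0]
    d[0] = b[0] / diag[0]
    # forward elimination
    for j in range(1, n):
        denom = diag[j] - lower[j] * c[j - 1]
        c[j] = upper[j] / denom if j < n - 1 else 0.0
        d[j] = (b[j] - lower[j] * d[j - 1]) / denom
    # back substitution
    x = [0.0] * n
    x[-1] = d[-1]
    for j in range(n - 2, -1, -1):
        x[j] = d[j] - c[j] * x[j + 1]
    return x
```

In each time step one would assemble `lower`, `diag`, `upper`, and `b` from the coefficients ${\displaystyle m_{j,j-1},m_{j,j},m_{j,j+1},b_{j}}$ above and add ${\displaystyle \Delta u^{n}}$ to ${\displaystyle u^{n}}$.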

Parameter identification as optimization

The goal of the parameter identification by optimization is to minimize the cost function over the parameter space

${\displaystyle \min _{e}q(e),\quad q(e)=\|h(e)-H\|^{2}}$,

where ${\displaystyle h(e)}$ denotes the suspension-liquid interface computed by simulation and ${\displaystyle H}$ is the measured interface. Without loss of generality, we consider a parameter set ${\displaystyle e=(e_{1},e_{2})}$ consisting of two parameters. The optimization can be done iteratively by employing the Newton method as

${\displaystyle Q(e_{k})\Delta e_{k}=-\nabla _{e}q(e_{k}),\quad e\subset \{C,u_{\infty },a_{0},u_{\mathrm {c} },k\},}$

where

${\displaystyle Q={\begin{bmatrix}{\frac {\partial ^{2}q}{\partial e_{1}^{2}}}&{\frac {\partial ^{2}q}{\partial e_{1}e_{2}}}\\{\frac {\partial ^{2}q}{\partial e_{2}e_{1}}}&{\frac {\partial ^{2}q}{\partial e_{2}^{2}}}\end{bmatrix}},}$

is the Hessian of q. The Newton method is motivated by the Taylor expansion

${\displaystyle q(e^{*})\approx q(e)+\nabla _{e}q(e)^{\mathrm {T} }\Delta e+{\frac {1}{2}}\Delta e^{\mathrm {T} }Q(e)\Delta e,\quad \Delta e=e^{*}-e,}$

where ${\displaystyle e^{*}}$ is the optimal parameter choice. Setting the gradient of this quadratic model with respect to ${\displaystyle \Delta e}$ to zero recovers the Newton system above.
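A minimal sketch of such a Newton iteration, with gradient and Hessian approximated by finite differences and a toy quadratic cost standing in for ${\displaystyle \|h(e)-H\|^{2}}$; all names, step sizes, and tolerances below are illustrative assumptions:

```python
import numpy as np

def newton_minimize(q, e0, tol=1e-8, max_iter=50, h=1e-5):
    """Newton's method for min_e q(e) with central-difference gradient
    and Hessian. q is a scalar cost function of a parameter vector e
    (a stand-in for ||h(e) - H||^2); e0 is the initial guess."""
    e = np.asarray(e0, dtype=float)
    n = e.size
    for _ in range(max_iter):
        # central-difference gradient
        grad = np.zeros(n)
        for i in range(n):
            ei = np.zeros(n); ei[i] = h
            grad[i] = (q(e + ei) - q(e - ei)) / (2.0 * h)
        if np.linalg.norm(grad) < tol:
            break
        # central-difference Hessian Q
        Q = np.zeros((n, n))
        for i in range(n):
            for j in range(n):
                ei = np.zeros(n); ei[i] = h
                ej = np.zeros(n); ej[j] = h
                Q[i, j] = (q(e + ei + ej) - q(e + ei - ej)
                           - q(e - ei + ej) + q(e - ei - ej)) / (4.0 * h * h)
        # Newton step: Q * de = -grad
        e = e + np.linalg.solve(Q, -grad)
    return e

# toy cost with known minimum at (0.5, 0.2)
e_star = newton_minimize(lambda e: (e[0] - 0.5)**2 + (e[1] - 0.2)**2,
                         [0.0, 0.0])
```

In the actual identification each evaluation of ${\displaystyle q(e)}$ requires a full forward simulation of the settling model, so the number of cost evaluations per iteration matters in practice.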