Newton with Line Search Algorithm





This command is used to construct a NewtonLineSearch algorithm object, which introduces a line search to the Newton-Raphson algorithm to solve the nonlinear residual equation. Line search increases the effectiveness of the Newton method when convergence is slow due to roughness of the residual. The command is of the following form:

algorithm NewtonLineSearch <-type $typeSearch> <-tol $tol> <-maxIter $maxIter> <-minEta $minEta> <-maxEta $maxEta>


$typeSearch   line search algorithm (optional; default is InitialInterpolated). Valid types are: Bisection, Secant, RegulaFalsi, InitialInterpolated
$tol   tolerance for the line search (optional)
$maxIter   maximum number of line search iterations used to find <math>\eta\,\!</math> (optional)
$minEta   minimum value allowed for the step scaling <math>\eta\,\!</math> (optional)
$maxEta   maximum value allowed for the step scaling <math>\eta\,\!</math> (optional)
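
As an example of how the command is typically issued, the sketch below sets up a static analysis in the OpenSees Tcl interpreter. The surrounding commands are only one common configuration, and the model (nodes, elements, loads) is assumed to have been defined already:

 # one possible static analysis setup using NewtonLineSearch
 constraints Plain
 numberer RCM
 system BandGeneral
 test NormDispIncr 1.0e-6 25
 algorithm NewtonLineSearch -type Bisection -tol 0.8 -maxIter 10
 integrator LoadControl 0.1
 analysis Static
 analyze 10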



NOTES:



REFERENCES:

M.A. Crisfield, "Non-linear Finite Element Analysis of Solids and Structures", Vol. 1, Wiley.


THEORY:

The rationale behind line search is that:

  • the direction <math>\Delta U\,\!</math> found by the Newton method is often a good direction, but the step size <math>|| \Delta U|| \,\!</math> is not.
  • it is cheaper to compute the residual at several points along <math>\Delta U\,\!</math> than to form and factor a new system Jacobian.

In NewtonLineSearch the regular Newton-Raphson method is used to compute the <math>\Delta U\,\!</math>, but the update that is used is modified. The modified update is:

<math> U_{n+1} = U_n + \eta \Delta U\,\!</math>


The different line search algorithms use different root-finding methods to obtain <math>\eta\,\!</math>, the step scaling that (approximately) zeroes the scalar function <math>s(\eta) = \Delta U \cdot R(U_{n} + \eta \Delta U)\,\!</math>. The starting values are taken at <math>\eta = 0\,\!</math> and at the full Newton step <math>\eta = 1\,\!</math>:

<math> s_0 = \Delta U \cdot R(U_n)\,\!</math>
<math> s = \Delta U \cdot R(U_{n} + \Delta U)\,\!</math>

InterpolatedLineSearch (starting from <math>\eta_0 = 1\,\!</math>, <math>r = |s/s_0|\,\!</math>, count = 0):

while (r > tol && count < maxIter) {

<math> \eta_{n+1} = \frac{\eta_n \, s_0}{s_0 - s}\,\!</math>
<math> s = \Delta U \cdot R(U_{n} + \eta_{n+1} \Delta U)\,\!</math>
<math> r = \left| \frac{s}{s_0} \right|\,\!</math>
count = count + 1

}
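
Read as plain root finding on the scalar function <math>s(\eta)\,\!</math>, the loop above can be exercised stand-alone. The Tcl sketch below does this for an assumed example function s(eta); the quadratic, the tolerance of 0.5, and the eta bounds are illustrative choices, not values taken from the OpenSees source:

 # stand-alone sketch of the interpolated line search recurrence
 # s(eta) stands in for Delta_U . R(U_n + eta * Delta_U); this particular
 # quadratic is an assumed example so the script runs on its own
 proc s {eta} { expr {1.0 - 2.5*$eta + 0.8*$eta*$eta} }

 set tol 0.5; set maxIter 10; set minEta 0.1; set maxEta 10.0
 set s0   [s 0.0]      ;# projection of the residual on Delta_U at eta = 0
 set eta  1.0          ;# eta = 1 is the unmodified Newton update
 set sCur [s $eta]
 set count 0
 while {abs($sCur/$s0) > $tol && $count < $maxIter} {
     set eta [expr {$eta*$s0/($s0 - $sCur)}]   ;# eta_{n+1} = eta_n*s0/(s0 - s)
     if {$eta < $minEta} { set eta $minEta }
     if {$eta > $maxEta} { set eta $maxEta }
     set sCur [s $eta]
     incr count
 }
 puts "accepted eta = $eta after $count correction(s)"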

RegulaFalsi Line Search:

The regula falsi (false position) method keeps a pair of <math>\eta\,\!</math> values at which <math>s(\eta)\,\!</math> has opposite signs, and repeatedly replaces one end of the bracket with the linear interpolation of the two endpoints until <math>s(\eta)\,\!</math> is small enough. The Bisection and Secant options proceed similarly, applying the bisection and secant root-finding updates to the same scalar function before the modified update <math>U_{n+1} = U_n + \eta \Delta U\,\!</math> is applied.
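
As a companion sketch, the stand-alone Tcl example below applies the regula falsi update to the same kind of scalar function; the example s(eta), the bracket, and the tolerance are illustrative assumptions rather than the OpenSees implementation:

 # stand-alone sketch of the regula falsi (false position) update on s(eta)
 proc s {eta} { expr {1.0 - 2.5*$eta + 0.8*$eta*$eta} }   ;# assumed example

 set etaL 0.0; set sL [s $etaL]    ;# lower end of the bracket, s > 0 here
 set etaU 1.0; set sU [s $etaU]    ;# upper end of the bracket, s < 0 here
 set tol 1.0e-4; set maxIter 20; set count 0
 set eta $etaU; set sEta $sU
 while {abs($sEta) > $tol && $count < $maxIter} {
     # linear interpolation between the two bracket endpoints
     set eta  [expr {$etaU - $sU*($etaL - $etaU)/($sL - $sU)}]
     set sEta [s $eta]
     if {$sEta*$sL < 0.0} {
         set etaU $eta; set sU $sEta   ;# root lies between etaL and eta
     } else {
         set etaL $eta; set sL $sEta   ;# root lies between eta and etaU
     }
     incr count
 }
 puts "regula falsi estimate: eta = $eta after $count iteration(s)"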

NewtonLineSearch builds on the standard Newton-Raphson iteration, which for a scalar equation <math>f(x) = 0\,\!</math> follows from expanding <math>f\,\!</math> in a Taylor series about the current iterate <math>x_n\,\!</math>. Keeping terms only to first order,

<math>f(x_n+\Delta x) \approx f(x_n)+f'(x_n)\Delta x = f(x_n)+ \frac{df(x_n)}{dx}\Delta x</math>

and since the function equals zero at the root we seek, <math>x_n + \Delta x</math>, i.e. <math>f(x_n+\Delta x) = 0</math>, we can solve for an approximate <math>\Delta x</math>:

<math> \Delta x \approx -\frac{f(x_n)}{f'(x_n)} = - \left(\frac{df(x_n)}{dx}\right)^{-1}f(x_n)</math>

The Newton-Raphson method is thus an iterative method in which, starting at a good initial guess <math>x_0\,\!</math>, we keep iterating until <math>\Delta x</math> is small enough, using the following:

<math> \Delta x = - \left(\frac{df(x_n)}{dx}\right)^{-1}f(x_n)\,\!</math>
<math> x_{n+1} = x_n + \Delta x\,\!</math>
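
As a concrete instance of this scalar iteration, the stand-alone Tcl sketch below applies it to the assumed example f(x) = x^2 - 2, whose positive root is sqrt(2):

 # scalar Newton-Raphson iteration on the assumed example f(x) = x^2 - 2
 proc f  {x} { expr {$x*$x - 2.0} }
 proc df {x} { expr {2.0*$x} }        ;# df/dx

 set x 1.0                            ;# initial guess x_0
 for {set n 0} {$n < 10} {incr n} {
     set dx [expr {-[f $x]/[df $x]}]  ;# delta_x = -f(x_n)/f'(x_n)
     set x  [expr {$x + $dx}]         ;# x_{n+1} = x_n + delta_x
     if {abs($dx) < 1.0e-12} { break }
 }
 puts "root estimate: $x"             ;# converges to 1.41421356..., i.e. sqrt(2)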


The method is generalized to n unknowns by replacing the above scalar equations with matrix ones.

<math>R(U_n+\Delta U) = R(U_n)+\frac{\partial R(U_n)}{\partial U} \Delta U + O(\Delta U ^2) \,\!</math>

The matrix <math>\frac{\partial R(U_n)}{\partial U}\,\!</math> is called the system Jacobian matrix and will be denoted K:

<math>K = \frac{\partial R(U_n)}{\partial U}\,\!</math>


resulting in our iterative procedure: starting from a good initial guess, we iterate until <math>\Delta U\,\!</math> is small enough, using the following:

<math> \Delta U = - K^{-1}R(U_n)\,\!</math>
<math> U_{n+1} = U_n + \Delta U\,\!</math>
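
Putting the two pieces together, the sketch below performs the Newton step followed by the interpolated line search scaling for a single unknown, in which case the Jacobian reduces to the scalar dR/du; the residual R(u) = u^3 + u - 1 and the tolerances are assumed for illustration and are not taken from the OpenSees code:

 # single-DOF sketch: Newton step plus interpolated line search (assumed example)
 # residual R(u) = u^3 + u - 1, so the "Jacobian matrix" is the scalar dR/du
 proc R {u} { expr {$u*$u*$u + $u - 1.0} }
 proc K {u} { expr {3.0*$u*$u + 1.0} }

 set U 0.0
 for {set n 0} {$n < 20} {incr n} {
     set dU [expr {-[R $U]/[K $U]}]               ;# delta_U = -K^{-1} R(U_n)
     set s0 [expr {$dU*[R $U]}]
     set eta 1.0
     set s  [expr {$dU*[R [expr {$U + $eta*$dU}]]}]
     for {set k 0} {abs($s/$s0) > 0.8 && $k < 10} {incr k} {
         set eta [expr {$eta*$s0/($s0 - $s)}]     ;# interpolated line search
         set s   [expr {$dU*[R [expr {$U + $eta*$dU}]]}]
     }
     set U [expr {$U + $eta*$dU}]                 ;# U_{n+1} = U_n + eta*delta_U
     if {abs([R $U]) < 1.0e-10} { break }
 }
 puts "converged: U = $U, R(U) = [R $U]"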



Code Developed by: fmk