This is a method for finding close approximations to solutions of equations of the form g(x) = 0. It is similar to the Secant Method, except that tangents are used instead of secants. We start with a guess, say x₁, near the root.
When g(x) = x² − Q, we get the formula x₂ = (x₁ + Q/x₁)/2 .
That is, to find √Q, you substitute your guess x₁ into the formula. The value x₂ thus obtained becomes the new guess, which you plug into the formula again; continuing in this way yields √Q to greater and greater precision. In general,

xₙ₊₁ = (xₙ + Q/xₙ)/2
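The iteration above is easy to try out. Here is a minimal Python sketch; the function name, the starting guess, and the stopping tolerance are illustrative choices, not part of the original method:

```python
def babylonian_sqrt(Q, x=None, tol=1e-12):
    """Approximate sqrt(Q) by repeating the iteration x -> (x + Q/x)/2."""
    if Q < 0:
        raise ValueError("Q must be non-negative")
    if Q == 0:
        return 0.0
    # Any positive starting guess works; using Q itself is a common default.
    x = float(x) if x is not None else float(Q)
    while abs(x * x - Q) > tol * Q:
        x = (x + Q / x) / 2
    return x
```

For example, `babylonian_sqrt(2.0)` settles on a value within rounding error of √2 after only a handful of iterations.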
Observe that the above formula for xₙ₊₁ is the same as the one in the Babylonian Algorithm. The Newton-Raphson method gives a "quadratically convergent" algorithm for "simple" roots, which is the case when finding square roots. Quadratic convergence means that after a few steps, the number of zeros following the decimal point in the error of the approximation doubles at each succeeding step. More precisely, there exists an integer N, depending on Q, such that for all n ≥ N, the error at the (n+1)-st step is a constant times the square of the error at the n-th step.
For finding square roots, the constant is 1/(2√Q), where Q is the number whose square root we are looking for. (This follows from the exact relation: the error satisfies eₙ₊₁ = eₙ²/(2xₙ), and xₙ → √Q.) This is interesting, for the larger Q is, the faster the convergence!
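One can watch this constant emerge numerically. The sketch below (the function name, Q = 10, and the starting guess are my choices for illustration) tracks the ratio of each error to the square of the previous error; it should settle near 1/(2√Q):

```python
import math

def error_ratios(Q, x0, steps=4):
    """Track err_{n+1} / err_n**2 for the iteration x -> (x + Q/x)/2.
    Quadratic convergence means this ratio approaches 1/(2*sqrt(Q))."""
    root = math.sqrt(Q)
    x, ratios = x0, []
    prev_err = abs(x - root)
    for _ in range(steps):
        x = (x + Q / x) / 2
        err = abs(x - root)
        if err > 0 and prev_err > 0:
            ratios.append(err / prev_err ** 2)
        prev_err = err
    return ratios
```

Running `error_ratios(10.0, 5.0)` shows the ratios climbing toward 1/(2√10) ≈ 0.1581 within a few steps.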
Newton-Raphson Method applied to tasks other than the finding of square roots
We have limited our discussion of the Newton-Raphson method to finding square roots. However, the method has wider applicability, namely to finding the roots of functions possessing "continuous derivatives".
The roots of functions and the Newton-Raphson method are discussed in most textbooks on differential calculus.
Suppose we wish to find an approximation to a simple root of g(x) = 0. Let xₙ denote the approximation obtained at the n-th iteration.
The expression for xₙ₊₁ in this case is

xₙ₊₁ = xₙ − g(xₙ)/g′(xₙ) ------- Equation (1)
For finding the square root of a positive real number Q, the function we used was g(x) = x² − Q, because the positive solution of the equation g(x) = 0 is √Q.
Now suppose we are not seeking square roots, but the root of a function more complicated than g(x) = x² − Q. Equation (1) then requires evaluating both g(xₙ) and g′(xₙ) to compute xₙ₊₁. So when comparing the method's rate of convergence with that of a method requiring only one function evaluation per step, we must also account for the effort per step, which for complicated functions is roughly twice that of the one-evaluation method.
When function evaluations are taken into account, the convergence of Newton's Method turns out to be not quadratic, but only better than linear; this is called super-linear convergence.
Copyright © 2001- MathPath
Last updated - March 3, 2004