I'm going to talk about Newton's Iterative Method, which is a way of estimating a square root. I say estimating because most square roots are irrational numbers, so it is impossible to represent them exactly using floating point arithmetic. We just get as close to them as we can.
There are other ways of calculating square roots. The square root of n can also be expressed as e^(ln(n) / 2). However, this is slower to calculate.
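Just to illustrate that identity, here's a minimal Python sketch (the variable names are my own and aren't part of the method described below):

    import math

    # sqrt(n) expressed as e^(ln(n) / 2), compared against the library sqrt
    n = 2.0
    print(math.exp(math.log(n) / 2))  # ~1.4142135623730951
    print(math.sqrt(n))               # ~1.4142135623730951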
How does this method work? Well, to understand it fully you need to understand calculus, but with square roots the calculus is really simple, so you can just take my word for it if you don't know calculus.
Let's try to find the square root of 2. Our function is therefore:
f(x) = x^2 - 2

The derivative¹ f′ of this function is 2x.
We start off with a guess. Let's say 1. If you want you can start off with any other number except zero (I'll explain why you can't use zero in a bit), but I'm going to start with 1.
What we want to do now is find the linear approximation (first order Taylor series if you want to be more pedantic) of f around the point x = 1. We'll call this point x0. The linear approximation, which we'll call T, is:
T(x) = f(x0) + f′(x0) * (x - x0)

For our function, it looks like this:
T(x) = x0^2 - 2 + 2x0(x - x0) = 2x0x - x0^2 - 2

At x0 = 1, we have:
T(x) = 2x - 3
Now for the more interesting part. Let's look at a picture of these two functions (the blue line is the x-axis):
We want to find out where the linear approximation hits the x-axis. By setting T to zero and solving for x we get:
x = x0 - (x0^2 - 2) / (2x0) = 1.5

(In general, each step of Newton's method is x = xi - f(xi) / f′(xi); for our f that works out to the formula above.) We now use this 1.5 as our new x0, except that we will call it x1. Now we plug that into our formula:
x = x1 - (x1^2 - 2) / (2x1) = 1.416667

Call that x2, plug that in again, and we get:
x = x2 - (x2^2 - 2) / (2x2) = 1.414216

We're getting pretty close, eh?
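Here's a minimal Python sketch of those three steps, starting from the guess of 1 (the variable names are my own):

    # Three Newton steps for f(x) = x^2 - 2, starting from x0 = 1
    x = 1.0
    for i in range(3):
        x = x - (x * x - 2) / (2 * x)  # x - f(x) / f'(x)
        print(x)
    # Prints roughly 1.5, 1.4166667, 1.4142157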
When do we stop looping? Well, since we can't get absolute precision, we can stop looping once f(xi) is within some error threshold ε of zero:

while |f(xi)| > ε
    do iterative method

You can set this ε to whatever you want. A higher value will mean less precision but more speed; however, the iterative method moves pretty damn quickly toward the zero, so it is not a huge deal.
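Putting the update and the stopping condition together, a complete sketch in Python might look like this (the function name, the default guess, and the default ε are my own choices, not part of the method itself):

    def newton_sqrt(n, guess=1.0, epsilon=1e-10):
        x = guess
        while abs(x * x - n) > epsilon:    # loop while |f(xi)| > epsilon
            x = x - (x * x - n) / (2 * x)  # Newton step: x - f(x) / f'(x)
        return x

    print(newton_sqrt(2))  # ~1.4142135623746899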
There are a few catches. What would happen if we used a guess of zero? Well, we'd end up with a divide-by-zero error. Basically, the linear approximation to f would be flat and never touch the x-axis.
What would happen if we used a negative guess? Well, that leads to a more interesting discussion. Newton's Iterative Method finds local zeroes. The closest zero to a negative number (say -1) is not around 1.41421; it is around -1.41421, which is the other square root of 2. When a function has multiple zeroes, the iterative method will tend toward whichever one it is sloping toward. So if you're trying to find the zero of a function that crosses the x-axis several times, choose your initial guess wisely!
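With the newton_sqrt sketch above, that looks like:

    print(newton_sqrt(2, guess=-1.0))  # ~-1.4142135623746899, the negative root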
Finally, here's an interesting problem. Try finding the square root of -1 using this method. Try it with different guesses.
¹ If you don't know what a derivative is, it is the function which tells you the slope of f at any given value of x.