Algorithms that compute ⌊√y⌋ do not run forever. They are nevertheless capable of computing √y up to any desired accuracy ε.
Choose any k ≥ 1 and compute isqrt(y · 100^k).
For example (setting y = 2):
isqrt(2 · 100^6) = isqrt(2000000000000) = 1414213.
Compare the results with
√2 = 1.41421356237309504880...
It appears that the multiplication of the input by 100^k gives an accuracy of k decimal digits.[note 2]
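This behaviour is easy to check with Python's built-in math.isqrt, which computes exactly the procedure the text calls isqrt (a minimal sketch of the idea above):

```python
from math import isqrt

# Multiplying the input by 100**k multiplies the square root by 10**k,
# so the integer square root carries k extra decimal digits of sqrt(y).
y, k = 2, 6
approx = isqrt(y * 100**k)
print(approx)  # 1414213, i.e. sqrt(2) to 6 decimal digits: 1.414213
```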
To compute the (entire) decimal representation of √y, one can execute isqrt(y) an infinite number of times, increasing y by a factor of 100 at each pass.
Assume that in the next program (sqrtForever) the procedure isqrt(y) is already defined and, for the sake of the argument, that all variables can hold integers of unlimited magnitude.
Then sqrtForever(y) will print the entire decimal representation of √y.[note 3]
// Print sqrt(y), without halting
void sqrtForever(unsigned int y)
{
    unsigned int result = isqrt(y);
    printf("%d.", result);          // print result, followed by a decimal point
    while (true)                    // repeat forever ...
    {
        y = y * 100;                // theoretical example: overflow is ignored
        result = isqrt(y);
        printf("%d", result % 10);  // print last digit of result
    }
}
The conclusion is that algorithms which compute isqrt() are computationally equivalent to algorithms which compute sqrt().[1]
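In a language with unbounded integers, the non-halting routine above can be recast as a digit generator; the following Python sketch (an adaptation of ours, not part of the original) uses math.isqrt in place of isqrt:

```python
from itertools import islice
from math import isqrt

def sqrt_digits(y: int):
    """Yield the decimal digits of sqrt(y): the integer part first,
    then one fractional digit per pass, exactly as sqrtForever prints them."""
    yield isqrt(y)           # digits before the decimal point (as one number)
    while True:              # repeat forever ...
        y = y * 100          # Python integers never overflow
        yield isqrt(y) % 10  # last digit of the refined approximation

print(list(islice(sqrt_digits(2), 9)))  # [1, 4, 1, 4, 2, 1, 3, 5, 6]
```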
One way of calculating √n and isqrt(n) is to use Heron's method, which is a special case of Newton's method, to find a solution for the equation x² − n = 0, giving the iterative formula

x_{k+1} = (1/2) (x_k + n / x_k),   k ≥ 0,   x_0 > 0.

The sequence {x_k} converges quadratically to √n as k → ∞.
One can prove[citation needed] that c = 1 is the largest possible number for which the stopping criterion

|x_{k+1} − x_k| < c

ensures ⌊x_{k+1}⌋ = ⌊√n⌋ in the algorithm above.
In implementations which use number formats that cannot represent all rational numbers exactly (for example, floating point), a stopping constant less than 1 should be used to protect against round-off errors.
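A floating-point sketch of this precaution (the stopping constant 0.5 is an arbitrary choice below 1, and the function name isqrt_newton is ours):

```python
def isqrt_newton(n: int) -> int:
    """Heron's method with stopping constant c = 0.5 < 1 to guard
    against round-off errors in floating-point arithmetic."""
    if n == 0:
        return 0
    x = float(n)  # initial estimate; any positive start value converges
    while True:
        x_next = 0.5 * (x + n / x)
        if abs(x_next - x) < 0.5:  # stopping criterion |x_{k+1} - x_k| < c
            return int(x_next)     # floor of the converged approximation
        x = x_next
```

For example, isqrt_newton(2000000) returns 1414.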
Although √n is irrational for many n, the sequence {x_k} contains only rational terms when x_0 is rational. Thus, with this method it is unnecessary to exit the field of rational numbers in order to calculate isqrt(n), a fact which has some theoretical advantages.
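This can be seen concretely with exact rational arithmetic; the following check (ours, using Python's fractions.Fraction) iterates Heron's formula for n = 2 starting from x_0 = 1:

```python
from fractions import Fraction

n = Fraction(2)
x = Fraction(1)          # rational start value x_0 = 1
for _ in range(3):
    x = (x + n / x) / 2  # every iterate remains a rational number
print(x)  # 577/408, a classical rational approximation of sqrt(2)
```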
For computing ⌊√n⌋ for very large integers n, one can use the quotient of Euclidean division for both of the division operations. This has the advantage of only using integers for each intermediate value, thus making the use of floating point representations of large numbers unnecessary. It is equivalent to using the iterative formula

x_{k+1} = ⌊(1/2) (x_k + ⌊n / x_k⌋)⌋,   k ≥ 0,   x_0 > 0,   x_0 ∈ ℤ.
By using the fact that

⌊(1/2) (x_k + ⌊n / x_k⌋)⌋ = ⌊(1/2) (x_k + n / x_k)⌋,

one can show that this will reach ⌊√n⌋ within a finite number of iterations.
In the original version, one has x_k ≥ √n for k ≥ 1, and x_k > x_{k+1} for x_k > √n. So in the integer version, one has x_k ≥ ⌊√n⌋ and x_k > x_{k+1} until the final solution x_s is reached. For the final solution x_s, one has ⌊√n⌋ ≤ x_s < √n + 1 and x_{s+1} ≥ x_s, so the stopping criterion is x_{k+1} ≥ x_k.
However, ⌊√n⌋ is not necessarily a fixed point of the above iterative formula. Indeed, it can be shown that ⌊√n⌋ is a fixed point if and only if n + 1 is not a perfect square. If n + 1 is a perfect square, the sequence ends up in a period-two cycle between ⌊√n⌋ and ⌊√n⌋ + 1 instead of converging.
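A quick numerical check of this dichotomy (a sketch of ours, not from the source): for n = 3, n + 1 = 4 is a perfect square and the iteration cycles, while for n = 4, n + 1 = 5 is not a perfect square and ⌊√4⌋ = 2 is a fixed point.

```python
def step(x: int, n: int) -> int:
    # One integer Heron step using Euclidean (floor) division.
    return (x + n // x) // 2

# n = 3: the sequence alternates between 2 and 1 = isqrt(3) forever.
seq, x = [], 2
for _ in range(6):
    seq.append(x)
    x = step(x, 3)
print(seq)  # [2, 1, 2, 1, 2, 1]

# n = 4: isqrt(4) = 2 is a fixed point.
print(step(2, 4))  # 2
```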
// Square root of integer
unsigned int int_sqrt(unsigned int s)
{
    // Zero yields zero
    // One yields one
    if (s <= 1)
        return s;

    // Initial estimate (must be too high)
    unsigned int x0 = s / 2;

    // Update
    unsigned int x1 = (x0 + s / x0) / 2;

    while (x1 < x0)  // Bound check
    {
        x0 = x1;
        x1 = (x0 + s / x0) / 2;
    }
    return x0;
}
For example, if one computes the integer square root of 2000000 using the algorithm above, one obtains the sequence

1000000 → 500001 → 250002 → 125004 → 62509 → 31270 → 15666 → 7896 → 4074 → 2282 → 1579 → 1422 → 1414 → 1414.
In total 13 iteration steps are needed. Although Heron's method converges quadratically close to the solution, less than one bit precision per iteration is gained at the beginning. This means that the choice of the initial estimate is critical for the performance of the algorithm.
When a fast computation for the integer part of the binary logarithm or for the bit-length is available (like e.g. std::bit_width in C++20), it is better to start at

x_0 = 2^(⌊(log₂ n) / 2⌋ + 1),

which is the least power of two bigger than √n. In the example of the integer square root of 2000000, ⌊log₂ n⌋ = 20, x_0 = 2^11 = 2048, and the resulting sequence is

2048 → 1512 → 1417 → 1414 → 1414.
In this case only four iteration steps are needed.
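The difference can be reproduced with a small instrumented version of the loop above (heron_steps is a helper name of ours; it counts how many times the update x1 = (x0 + s/x0)/2 is evaluated):

```python
def heron_steps(s: int, x0: int):
    """Integer Heron iteration from initial estimate x0.
    Returns (integer square root, number of update steps)."""
    steps = 1
    x1 = (x0 + s // x0) // 2
    while x1 < x0:
        x0 = x1
        x1 = (x0 + s // x0) // 2
        steps += 1
    return x0, steps

print(heron_steps(2000000, 2000000 // 2))  # (1414, 13) with the naive estimate
print(heron_steps(2000000, 2048))          # (1414, 4) starting from 2**11
```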
The traditional pen-and-paper algorithm for computing the square root √n is based on working from higher digit places to lower, and at each new digit position picking the largest digit such that the square of the result so far still does not exceed n. If stopping after the one's place, the result computed will be the integer square root.
If working in base 2, the choice of digit is simplified to that between 0 (the "small candidate") and 1 (the "large candidate"), and digit manipulations can be expressed in terms of binary shift operations. With * being multiplication, << being left shift, and >> being logical right shift, a recursive algorithm to find the integer square root of any natural number is:
def integer_sqrt(n: int) -> int:
    assert n >= 0, "sqrt works for only non-negative inputs"
    if n < 2:
        return n

    # Recursive call:
    small_cand = integer_sqrt(n >> 2) << 1
    large_cand = small_cand + 1
    if large_cand * large_cand > n:
        return small_cand
    else:
        return large_cand

# equivalently:
def integer_sqrt_iter(n: int) -> int:
    assert n >= 0, "sqrt works for only non-negative inputs"
    if n < 2:
        return n

    # Find the shift amount. See also find first set,
    # shift = ceil(log2(n) * 0.5) * 2 = ceil(ffs(n) * 0.5) * 2
    shift = 2
    while (n >> shift) != 0:
        shift += 2

    # Unroll the bit-setting loop.
    result = 0
    while shift >= 0:
        result = result << 1
        large_cand = result + 1  # Same as result ^ 1 (xor), because the last bit is always 0.
        if large_cand * large_cand <= n >> shift:
            result = large_cand
        shift -= 2

    return result
Traditional pen-and-paper presentations of the digit-by-digit algorithm include various optimizations not present in the code above, in particular the trick of pre-subtracting the square of the previous digits which makes a general multiplication step unnecessary. See Methods of computing square roots § Binary numeral system (base 2) for an example.[2]
Some programming languages dedicate an explicit operation to the integer square root calculation in addition to the general case or can be extended by libraries to this end.
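Python is one example: since version 3.8 its standard library provides math.isqrt, which operates on arbitrarily large integers:

```python
from math import isqrt

print(isqrt(2000000))  # 1414
print(isqrt(10**100))  # 10**50: exact far beyond float precision
```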