Computing decimal digits of square roots

We define the integer square root of a number n as the largest integer s such that s^2 <= n (naturally we have s = floor(sqrt(n))). For given numbers s and n, it is easy to check if s is the integer square root of n, because in that case we have s^2 <= n < (s+1)^2.

The difference of the squares of two consecutive numbers k-1 and k is the k-th odd number:

k^2 - (k-1)^2 = 2k - 1

So by summing the first s odd numbers we can find the square of s:

1 + 3 + 5 + ... + (2s - 1) = s^2

This can be used to find the integer square root of a number n: we just keep on adding odd numbers until adding the next one would make the sum bigger than n.

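In code, this accumulation loop might look as follows (a minimal sketch; the function name is ours):

```python
def isqrt_by_odd_numbers(n: int) -> int:
    """Largest integer s with s*s <= n, found by summing odd numbers."""
    s = 0
    square = 0   # invariant: square == s*s
    odd = 1      # the next odd number to add, 2*s + 1
    while square + odd <= n:
        square += odd
        odd += 2
        s += 1
    return s
```

For example, isqrt_by_odd_numbers(11) returns 3, since 3^2 = 9 <= 11 but 4^2 = 16 > 11.
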
Now, we have sqrt(100^k * n) = 10^k * sqrt(n), so the integer square root of 100^k * n gives us the first k digits behind the decimal point of sqrt(n). For example, to find the first 3 digits of sqrt(11.66) we compute the integer square root of 100^3 * 11.66 = 11660000 and get

isqrt(11660000) = 3414, so sqrt(11.66) ≈ 3.414

Note that this way of computing the square root digits does not round the digits. The actual value is sqrt(11.66) = 3.41467..., so it would be more accurate to round up to 3.415 here. If you want a rounded answer, you should compute one digit more and use the extra digit to round.

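For integer inputs, this scaling trick takes only a couple of lines (a sketch using Python's math.isqrt; the helper name and the string formatting are ours, and we assume k >= 1):

```python
from math import isqrt

def sqrt_digits_truncated(n: int, k: int) -> str:
    """First k decimal digits of sqrt(n), truncated (not rounded), as a string.

    Relies on isqrt(100**k * n) == floor(10**k * sqrt(n)). Assumes k >= 1.
    """
    scaled = isqrt(100**k * n)
    s = str(scaled).rjust(k + 1, "0")  # pad so an integer part always exists
    return s[:-k] + "." + s[-k:]
```

For example, sqrt_digits_truncated(2, 3) gives "1.414", truncating rather than rounding the last digit.
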
It is possible to convert this algorithm to a streaming algorithm, which produces digits one by one. We already saw how to compute the digits before the decimal point. We use this to calculate an approximation x to sqrt(11.66). When we have found a digit of x, we simply start increasing the next digit until we find that x^2 would become larger than 11.66 if we increased it further.

x      x^2      delta between consecutive x^2
-------------------------
0      0
                1
1      1
                3
2      4
                5
3      9
                0.61
3.1    9.61               (4^2 > 11.66, go to the next digit)
                0.63
3.2    10.24
                0.65
3.3    10.89
                0.67
3.4    11.56
                0.0681
3.41   11.6281            (3.5^2 > 11.66, go to the next digit)
                0.006821
3.411  11.634921          (3.42^2 > 11.66, go to the next digit)
                0.006823
3.412  11.641744
                0.006825
3.413  11.648569
                0.006827
3.414  11.655396
...                       (3.415^2 > 11.66, go to the next digit)

We see that when we move to the nth digit after the decimal dot, the delta becomes 2 * x / 10^n + 10^(-2n). This makes sense because we'll be computing the next digit by trying a new approximation of the form x + 10^(-n), which makes the delta

(x + 10^(-n))^2 - x^2 = 2 * x / 10^n + 10^(-2n)

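We can check this formula against the table above, for example the 0.0681 step when moving from x = 3.4 to the second digit after the dot (a throwaway check; the variable names are ours):

```python
from decimal import Decimal

x = Decimal("3.4")            # approximation found so far
n = 2                         # moving on to the 2nd digit after the dot
step = Decimal(10) ** -n      # 10^(-n)
actual_delta = (x + step) ** 2 - x ** 2
formula = 2 * x * step + step ** 2   # 2*x/10^n + 10^(-2n)
assert actual_delta == formula == Decimal("0.0681")
```
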
Now, we can implement these ideas to write a simple function that generates the digits of the square root of an input number:

from decimal import Decimal, getcontext

def compute_sqrt_digits(input: Decimal, num_digits=10) -> Decimal:
    getcontext().prec = 2 * num_digits  # set precision of Decimal arithmetic; the squares need about twice as many digits as the root
    precision = Decimal('0.1') ** num_digits

    delta = Decimal(1)
    digit_value = Decimal(1)
    current_approximation = Decimal(0)
    current_square_approximation = Decimal(0)

    while True:
        if current_square_approximation + delta <= input:
            current_approximation += digit_value
            current_square_approximation += delta
            delta += 2 * digit_value * digit_value
        else:
            digit_value /= 10
            delta = (2 * current_approximation + digit_value) * digit_value
            if input - current_square_approximation <= precision:
                return current_approximation

If x is our approximation of sqrt(n) and n - x^2 becomes smaller than precision, the function stops improving and returns x. This is guaranteed to return only correct digits, because x never exceeds sqrt(n) (e.g. it can’t happen that it outputs 1.99 but the square root actually turns out to be two or larger.)

This code can be slightly improved by noting that we only ever compare current_square_approximation with input, so that we can use a single variable rest = input - current_square_approximation instead.

from decimal import Decimal, getcontext

def compute_sqrt_digits(input: Decimal, num_digits=10) -> Decimal:
    getcontext().prec = 2 * num_digits  # set precision of Decimal arithmetic; guard digits for intermediate values
    precision = Decimal('0.1') ** num_digits

    delta = Decimal(1)
    digit_value = Decimal(1)
    current_approximation = Decimal(0)
    rest = input

    while True:
        if delta <= rest:
            current_approximation += digit_value
            rest -= delta
            delta += 2 * digit_value * digit_value
        else:
            digit_value /= 10
            delta = (2 * current_approximation + digit_value) * digit_value
            if rest <= precision:
                return current_approximation
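To try the function out, here it is once more as a standalone snippet together with two example calls. Note the extra guard digits on the Decimal context: the approximation and the deltas can need more significant digits than num_digits, so giving the context roughly twice that keeps the arithmetic exact enough.

```python
from decimal import Decimal, getcontext

def compute_sqrt_digits(input: Decimal, num_digits=10) -> Decimal:
    getcontext().prec = 2 * num_digits  # guard digits for intermediate values
    precision = Decimal('0.1') ** num_digits

    delta = Decimal(1)
    digit_value = Decimal(1)
    current_approximation = Decimal(0)
    rest = input

    while True:
        if delta <= rest:
            current_approximation += digit_value
            rest -= delta
            delta += 2 * digit_value * digit_value
        else:
            digit_value /= 10
            delta = (2 * current_approximation + digit_value) * digit_value
            if rest <= precision:
                return current_approximation

print(compute_sqrt_digits(Decimal(9)))            # exact squares terminate immediately: 3
print(compute_sqrt_digits(Decimal("11.66"), 6))   # → 3.4146742
```

Note that num_digits bounds the error of the square, not the number of digits produced: the second call returns seven decimals, since the loop only stops once 11.66 - x^2 drops below 10^(-6).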

If you want the digits to come out one by one, shift the decimal dot left or right an even number of places, so that the input number has either one or two digits before the decimal dot. If you shifted the decimal dot in the input by 2k places, the decimal dot in the output needs to be shifted in the other direction by k places. If you do not do this, there might be a series of zeroes before the first non-zero digit. Another possibility is that the first ‘digit’ that is printed actually contains multiple digits. For example, if you compute the square root of 10000, the accumulating loop will take 100 steps, and then output 100 as a single digit. This is inefficient, since it will take the loop 100 iterations before moving on to the next digit, instead of the maximum of 9 iterations it would take if the input had one or two digits before the decimal dot.

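The streaming variant might be sketched as a generator (our own code, which assumes the caller has already done the even shift so that the input lies in [1, 100); a real implementation would also grow the Decimal precision as digits are consumed, rather than fixing it up front):

```python
from decimal import Decimal, getcontext
from itertools import islice

def sqrt_digit_stream(n: Decimal):
    """Yield digits of sqrt(n): the digit before the decimal dot, then decimals.

    Assumes 1 <= n < 100, so every yielded value is a single digit 0-9.
    """
    getcontext().prec = 100        # enough for the digits consumed here
    approximation = Decimal(0)
    rest = n                       # invariant: rest == n - approximation**2
    digit_value = Decimal(10)
    while True:
        digit_value /= 10          # move to the next digit position
        digit = 0
        delta = (2 * approximation + digit_value) * digit_value
        while delta <= rest:       # raise the current digit while the square fits
            approximation += digit_value
            rest -= delta
            delta += 2 * digit_value * digit_value
            digit += 1
        yield digit

print(list(islice(sqrt_digit_stream(Decimal("11.66")), 8)))  # → [3, 4, 1, 4, 6, 7, 4, 2]
```

The inner loop is the table above: for an exact square the generator simply yields zeroes forever, so the caller decides how many digits to take.
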
A similar algorithm is described by Lionel Dangerfield on Paul Bourke’s site.