# How did the Babylonians know √2 up to six digits?

### The greatest known computational accuracy in the ancient world

This clay tablet from 1800-1600 BC shows that ancient Babylonians were able to approximate the square root of two with 99.9999% precision.

How did they do it?

# Deciphering the tablet

First, let’s decipher the tablet itself. It is called YBC 7289 (short for the 7289th item in the Yale Babylonian Collection), and it depicts a square, its diagonal, and numbers written around them. Here is a stylized version from the book *Episodes from the Early History of Mathematics* by Asger Aaboe.

As the Pythagorean theorem implies, the diagonal’s length for a unit square is √2. Let’s focus on the symbols there!

These are numbers, written in Babylonian cuneiform numerals. They read as 1, 24, 51, and 10.

Since the Babylonians used the base 60 numeral system (also known as *sexagesimal*), the number *1;24,51,10* reads as *1.41421296296…* in decimal.
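We can check the base-60 conversion with a couple of lines of Python (a quick sketch, not part of the original tablet discussion):

```python
# Interpreting the tablet's digits 1;24,51,10 in base 60:
# each successive digit is divided by one more power of 60.
digits = [1, 24, 51, 10]
value = sum(d / 60**i for i, d in enumerate(digits))
print(value)  # 1.41421296296...
```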

This matches √2 up to the sixth digit, meaning a 99.9999% accuracy!

The computational accuracy is stunning. To appreciate this, pick up a pen and try to reproduce this without a calculator. It’s not that easy!

Here is how they did it.

# The Babylonian square root algorithm

Now, I am going to play the magician. Let’s describe the algorithm first! I’ll pull back the curtain afterward and explain it all.

We start by picking a number x₀ between 1 and √2. I know, this feels random, but let’s just roll with it for now. One such example is 1.2, which is going to be our first approximation.

Because of this, 2/x₀ is larger than √2: since x₀ < √2, dividing 2 by x₀ gives 2/x₀ > 2/√2 = √2.

Thus, the interval [x₀, 2/x₀] contains √2.

From this, it follows that the mid-point of the interval [x₀, 2/x₀] is a better approximation to √2. As you can see in the figure below, this is significantly better!

Let’s define x₁ as this midpoint: *x₁ = (x₀ + 2/x₀)/2*.

Continuing on this thread, we can define an approximating sequence by taking the midpoints of such intervals: *xₙ₊₁ = (xₙ + 2/xₙ)/2*.

Here are the first few terms of the sequence. Even the third member is a surprisingly good approximation.
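The sequence is easy to reproduce in a few lines of Python (a minimal sketch, starting from the guess 1.2 used above):

```python
# The Babylonian iteration for sqrt(2), starting from x0 = 1.2.
x = 1.2
for n in range(4):
    x = (x + 2 / x) / 2  # midpoint of the interval [x, 2/x]
    print(n + 1, x)
```

Already the second term, 1.41434…, agrees with √2 to three decimal places.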

If we put these numbers on a scatterplot, we practically need a microscope to tell the difference from √2 after a few steps.

As you can see, this converges to √2 extremely fast.

How fast exactly?

## The error of the Babylonian approximation

The error between a given approximation and √2 is simply their distance, measured by the absolute value of their difference. For instance, e₀, the error of our first guess, is given by *e₀ = |x₀ − √2|*.

However large or small e₀ might be, we can use it to estimate the subsequent errors.

Let’s do some algebra and see how e₀ relates to e₁! First, we express e₁ as a fraction:

*e₁ = x₁ − √2 = (x₀ + 2/x₀)/2 − √2 = (x₀ − √2)² / (2x₀) = e₀² / (2x₀)*.

Then, as x₀ was selected to be larger than one, we can estimate this in terms of e₀. As the numerator is e₀ squared, we have a simple job: *e₁ ≤ e₀²/2*.

Repeating this argument, we see that the error is (at least) squared at every step, so the convergence is lightning fast, even faster than exponential!
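A quick numerical check of this bound (a sketch; 1.2 is the starting value from above):

```python
import math

def babylonian_step(x):
    # One step of the Babylonian iteration for sqrt(2):
    # replace x with the midpoint of [x, 2/x].
    return (x + 2 / x) / 2

x = 1.2  # initial guess between 1 and sqrt(2)
errors = []
for _ in range(4):
    errors.append(abs(x - math.sqrt(2)))
    x = babylonian_step(x)

# Each error is bounded by half the square of the previous one,
# so the number of correct digits roughly doubles per step.
for e_prev, e_next in zip(errors, errors[1:]):
    print(e_next, "<=", e_prev**2 / 2)
```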

Were the Babylonians just lucky, or did they hit the nail right on the head?

The latter one. It’s time to finally pull back the curtain!

# The Newton-Raphson method

Let’s rephrase the problem of approximating the square root of two. Instead of evaluating the function *f(x) = √x* at a given point, let’s try to find the (positive) root of *f(x) = x² - 2*. (Which, as it turns out, is also √2.)

Is there a general method for such a task? Yes, and it’s called the Newton-Raphson method. To illustrate how it works, let’s zoom in on the root of *f(x)*.

How can we move from our initial guess x₀ to the root?

One idea is to follow the direction of the tangent, and see where it intersects the *x*-axis. As the derivative describes the tangent’s slope, this intersection can be computed straight away. I’ll show you how.

The equation of the tangent line at x₀ is *y = f(x₀) + f′(x₀)(x − x₀)*.

By setting it equal to zero and solving for *x*, we obtain the point where the tangent intersects the *x*-axis: *x = x₀ − f(x₀)/f′(x₀)*.

Thus, by selecting our next guess x₁ as this intersection point, we obtain a (hopefully) better approximation.

And here we go! We can define a recursive sequence based on this idea: *xₙ₊₁ = xₙ − f(xₙ)/f′(xₙ)*.

This is called the Newton-Raphson method. (Sometimes Newton-Raphson iteration.) Here is the next step. As you can see, the third step is almost at √2.
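Here is a minimal generic sketch of the iteration in Python; the names `newton_raphson`, `f`, and `df` are my own labels for illustration, not from the original text:

```python
# A generic Newton-Raphson sketch: repeatedly follow the tangent
# of f down to the x-axis, starting from an initial guess x0.
def newton_raphson(f, df, x0, steps=5):
    x = x0
    for _ in range(steps):
        x = x - f(x) / df(x)  # x-intercept of the tangent at x
    return x

# Finding the positive root of f(x) = x^2 - 2:
root = newton_raphson(lambda x: x**2 - 2, lambda x: 2 * x, x0=1.2)
print(root)  # ≈ 1.41421356...
```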

One big question remains: is this the same as the Babylonian method? Yes, and here is why.

## Newton-Raphson and the Babylonian algorithm

In the previous example, we opted to find the root of *f(x) = x² - 2*. Let’s find an explicit formula for the recursive sequence given by the Newton-Raphson method. Its derivative, *f′(x) = 2x*, is simple to compute, so we are ready to go.

With a little algebra, we can reach a not-so-surprising conclusion.
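Spelled out, the Newton step for *f(x) = x² − 2* simplifies exactly to the Babylonian midpoint:

```latex
x_{n+1} = x_n - \frac{x_n^2 - 2}{2x_n} = \frac{x_n^2 + 2}{2x_n} = \frac{1}{2}\left(x_n + \frac{2}{x_n}\right).
```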

Thus, the Babylonian algorithm is a special case of the Newton-Raphson iteration!

Recall that the convergence is extremely fast in this case. Is this true in general? Only if we are lucky.

## The rate of convergence

Without going into the details, the convergence and its rate depend on the local behavior of the function.

For instance, if *f(x)* is twice differentiable, then the error term for the *n*-th element can be described in terms of the derivatives and the square of the *(n-1)*-th error.
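In symbols, with *r* denoting the root and *eₙ = |xₙ − r|*, the standard bound (under the smoothness assumptions just mentioned) reads:

```latex
e_n \le \frac{\sup_{x \in I} |f''(x)|}{2 \inf_{x \in I} |f'(x)|} \, e_{n-1}^2,
```

where *I* is a suitable interval around the root.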

(If you are interested in the exact details, you can find the proof on Wikipedia.)

In particular, if the derivatives “behave nicely” (that is, the first derivative is bounded away from zero and the second derivative is bounded), then the convergence is quadratic.

The quadratic convergence is true not only for finding the square root of two by approximating the positive root of *f(x) = x² - 2*, but for a wide array of functions.

## Drawbacks

Unfortunately, it is not all perfect. The Newton-Raphson iteration can fail miserably in certain not-so-rare cases, and there are several drawbacks.

For instance, if the function is “flat” around the root, the convergence can be painfully slow. One such case is illustrated below.
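As a quick numerical illustration (the function *f(x) = x²*, with a double root at 0, is my own example, not from the text), the iteration only halves the error at each step instead of squaring it:

```python
# Newton-Raphson on f(x) = x^2, which has a double root at x = 0.
# The step x - f(x)/f'(x) = x - x/2 = x/2 merely halves the error:
# linear convergence instead of quadratic.
x = 1.0
for _ in range(5):
    x = x - (x**2) / (2 * x)
    print(x)  # prints 0.5, 0.25, 0.125, 0.0625, 0.03125
```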

This happens when the root has a higher multiplicity; that is, some of the derivatives also vanish there. Speaking of derivatives, unlike in the Babylonian square root case, they can be hard to evaluate, rendering the method impractical.

Moreover, the entire process depends heavily on the initial guess: the iteration might converge to the wrong root, or even diverge.

# Conclusion

It is pretty stunning that ancient Babylonians were able to compute √2 up to the sixth digit. This accuracy is extremely respectable, especially considering that it was achieved almost four thousand years ago, computed entirely by hand.

As it turns out, they were not merely lucky; they stumbled upon the special case of a powerful method able to approximate the root for a wide array of functions. This became known as the Newton-Raphson method.

The idea is simple:

1. guess an initial value x₀,
2. temporarily replace the function with its tangent at x₀,
3. determine where the tangent intersects the *x*-axis,
4. and use this intersection x₁ as the new starting point of this process.

If the function behaves nicely enough (that is, its derivative is locally bounded away from zero and its second derivative is bounded), the convergence is extremely fast: this is why the Babylonians were able to achieve the “greatest known computational accuracy in the ancient world”.