Naive Methods

We know at least two ways to multiply two integers $a$ and $b$ to get the product $a \cdot b$.

First, we know that $a \times b$ means adding up $b$ copies of $a$, which gives an $O(n \cdot 10^n)$ time algorithm to multiply two $n$-digit base-10 numbers. (Each addition is $O(n)$, and we do this $b$ times, where $b$ could be as large as $10^n$.)
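As a quick illustration, the repeated-addition definition translates directly into code (an illustrative Python sketch; the function name is a choice made here, not from the text):

```python
def repeated_addition_multiply(a, b):
    """Multiply a by b by adding up b copies of a (assumes b >= 0)."""
    total = 0
    for _ in range(b):
        total += a
    return total
```

Since the loop runs $b$ times and each addition of long numbers costs $O(n)$, this is exactly the $O(n \cdot 10^n)$ behavior described above.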

Then you learned to multiply long numbers on a digit-by-digit basis, forming one shifted partial product per digit of the second number and adding the partial products together.

Recall that those zeros we pad the digit-terms with are not really computed as products. We implement their effect by shifting the product digits into the correct place. Assuming we perform each real digit-by-digit product in constant time, by looking it up in a times table, this algorithm multiplies two $n$-digit numbers in $O(n^2)$ time.
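The digit-by-digit method can be sketched as follows (illustrative Python; the names and digit-list representation are assumptions made here). Note how the shift is implemented by the slot index `i + j` rather than by multiplying by powers of 10:

```python
def digit_by_digit_multiply(a, b):
    """Schoolbook multiplication: one partial product per digit of b,
    shifted into place by its index rather than multiplied by a power of 10."""
    a_digits = [int(d) for d in str(a)][::-1]   # least-significant digit first
    b_digits = [int(d) for d in str(b)][::-1]
    result = [0] * (len(a_digits) + len(b_digits))
    for i, db in enumerate(b_digits):
        carry = 0
        for j, da in enumerate(a_digits):
            cur = result[i + j] + da * db + carry  # constant-time digit product
            carry, result[i + j] = divmod(cur, 10)
        result[i + len(a_digits)] += carry
    return int("".join(map(str, result[::-1])))
```

The two nested loops each run over at most $n$ digits, giving the $O(n^2)$ bound.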

Divide-and-Conquer Method

Here we present an even faster algorithm for multiplying large numbers. It is a classic divide and conquer algorithm.

Suppose each number has $n$ digits. We can split each number into two pieces, each of $n/2$ digits, such that the product of the full numbers can easily be constructed from the products of the pieces, as follows.

Let $w = 10^{n/2}$, and represent $a = a_1 w + a_0$ and $b = b_1 w + b_0$, where $a_1, a_0$ and $b_1, b_0$ are the high and low pieces of the respective numbers. Then:

$$a \cdot b = (a_1 w + a_0)(b_1 w + b_0) = a_1 b_1 w^2 + (a_1 b_0 + a_0 b_1) w + a_0 b_0$$

Example

Let’s say we have $a = 1234$ and $b = 5678$, so $n = 4$ and $w = 10^{n/2} = 100$.

Then:

  • $a$ can be split into $a_1 = 12$ and $a_0 = 34$.
  • $b$ can be split into $b_1 = 56$ and $b_0 = 78$.

We have $a \cdot b = a_1 b_1 w^2 + (a_1 b_0 + a_0 b_1) w + a_0 b_0 = 672 \cdot 10^4 + (936 + 1904) \cdot 10^2 + 2652 = 7{,}006{,}652$.

This procedure reduces the problem of multiplying two $n$-digit numbers to four products of $(n/2)$-digit numbers. Recall that multiplication by $w = 10^{n/2}$ doesn’t count: it is simply padding the product with zeros. We also have to add together these four products once computed, which is $O(n)$ work.
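The four-product scheme can be sketched recursively as follows (an illustrative Python sketch, assuming $n$ is a power of two; `dc_multiply` is a name chosen here):

```python
def dc_multiply(a, b, n):
    """Divide-and-conquer multiplication of two n-digit numbers (n a power of 2):
    a*b = a1*b1*w^2 + (a1*b0 + a0*b1)*w + a0*b0, where w = 10^(n/2)."""
    if n == 1:
        return a * b              # base case: a one-digit "times table" lookup
    w = 10 ** (n // 2)
    a1, a0 = divmod(a, w)         # split a into high and low halves
    b1, b0 = divmod(b, w)
    return (dc_multiply(a1, b1, n // 2) * w * w
            + (dc_multiply(a1, b0, n // 2) + dc_multiply(a0, b1, n // 2)) * w
            + dc_multiply(a0, b0, n // 2))
```

The four recursive calls on half-length pieces, plus the $O(n)$ additions and zero-padding shifts, are exactly what the recurrence below counts.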

Let $T(n)$ denote the amount of time it takes to multiply two $n$-digit numbers. Assuming we use the same algorithm recursively on each of the smaller products, the running time of this algorithm is given by the recurrence:

$$T(n) = 4T(n/2) + O(n)$$

For the master-theorem form $T(n) = aT(n/b) + f(n)$, we have:

  • $a = 4$, $b = 2$, and $f(n) = n$. Thus $n^{\log_b a} = n^{\log_2 4} = n^2$.
  • Comparing $f(n) = n$ and $n^{\log_b a} = n^2$, we see that $f(n)$ grows slower than $n^2$. This means that the recursive subproblems dominate the internal evaluation cost $f(n)$.

Thus, this is a Case 1 divide-and-conquer recurrence, which tells us that if $f(n) = O(n^{\log_b a - \epsilon})$ for some constant $\epsilon > 0$, then $T(n) = \Theta(n^{\log_b a})$. Consequently, we see that this algorithm runs in $\Theta(n^2)$ time, exactly the same as the digit-by-digit method.

Karatsuba’s Algorithm

Karatsuba’s algorithm uses an alternative recurrence for multiplication, one which yields a better running time. Suppose we compute the following three products:

$$a_1 b_1, \qquad a_0 b_0, \qquad (a_1 + a_0)(b_1 + b_0)$$

Note that

$$a \cdot b = a_1 b_1 w^2 + \left[ (a_1 + a_0)(b_1 + b_0) - a_1 b_1 - a_0 b_0 \right] w + a_0 b_0,$$

so now we have computed the full product with just three half-length multiplications and a constant number of additions and subtractions. Again, the $w$ terms don’t count as multiplications: recall that they are really just zero shifts.
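The three-product scheme can be sketched as follows (illustrative Python; note that the split via `divmod` keeps the identity $a = a_1 w + a_0$ valid even when a sum like $a_1 + a_0$ carries an extra digit into a recursive call):

```python
def karatsuba(a, b, n):
    """Karatsuba multiplication: three half-length products instead of four.
    a*b = m1*w^2 + (m3 - m1 - m2)*w + m2."""
    if n == 1:
        return a * b
    w = 10 ** (n // 2)
    a1, a0 = divmod(a, w)
    b1, b0 = divmod(b, w)
    m1 = karatsuba(a1, b1, n // 2)
    m2 = karatsuba(a0, b0, n // 2)
    m3 = karatsuba(a1 + a0, b1 + b0, n // 2)   # one product of the sums
    return m1 * w * w + (m3 - m1 - m2) * w + m2
```

Only three recursive calls are made per level, which is the whole source of the improvement analyzed next.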

The time complexity of this algorithm is therefore governed by the recurrence

$$T(n) = 3T(n/2) + O(n)$$

For this recurrence, we have:

  • $a = 3$, $b = 2$, and $f(n) = n$. Thus $n^{\log_b a} = n^{\log_2 3} \approx n^{1.585}$, which grows faster than $f(n) = n$.

This is once again Case 1 of the divide-and-conquer recurrence master theorem, so we have

$$T(n) = \Theta(n^{\log_2 3}) = \Theta(n^{1.585})$$

This is a substantial improvement over the quadratic algorithm for large numbers, and indeed it beats the standard multiplication algorithm soundly once the numbers grow sufficiently long.

Strassen Algorithm

This approach of defining a recurrence that uses fewer multiplications but more additions also lurks behind fast algorithms for matrix multiplication. The nested-loops algorithm for multiplying two $n \times n$ matrices takes $O(n^3)$ time, because we compute the dot product of $n$ terms for each of the $n^2$ elements in the product matrix. However, Strassen discovered a divide-and-conquer algorithm that combines seven $(n/2) \times (n/2)$ matrix products to yield the product of two $n \times n$ matrices. This yields a time-complexity recurrence

$$T(n) = 7T(n/2) + O(n^2)$$

Because $\log_2 7 \approx 2.81$, $n^{\log_2 7}$ dominates $n^2$, so Case 1 of the master theorem applies and $T(n) = \Theta(n^{2.81})$.
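A minimal sketch of Strassen’s scheme, assuming $n$ is a power of two and using NumPy for the block arithmetic (the $M_1, \ldots, M_7$ combinations below are the standard Strassen formulas):

```python
import numpy as np

def strassen(A, B):
    """Strassen multiplication of two n x n matrices, n a power of 2:
    seven recursive half-size products instead of eight."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    # Reassemble the four quadrants of the product from the seven M's.
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4,           M1 - M2 + M3 + M6]])
```

The extra matrix additions and subtractions cost only $O(n^2)$ per level, which is exactly the $f(n)$ term in the recurrence above.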

This algorithm has been repeatedly “improved” by increasingly complicated recurrences, and the current best runs in roughly $O(n^{2.37})$ time.