Multiplication algorithm
From Academic Kids

A multiplication algorithm is an algorithm (or method) to multiply two numbers. Depending on the size of the numbers, different algorithms are in use. Efficient multiplication algorithms have been around since the advent of the decimal system.
Long multiplication
A major advantage of positional numeral systems over other systems of writing down numbers is that they facilitate the usual grade-school method of long multiplication: multiply the first number by each digit of the second number and then add up all the properly shifted results. In order to perform this algorithm, one needs to know the products of all possible pairs of digits, which is why multiplication tables have to be memorized. Humans use this algorithm in base 10, while computers employ the same algorithm in base 2. The algorithm is much simpler in base 2, since the multiplication table has only 4 entries. Rather than first computing all the products and then adding them together in a second phase, computers add each product to the result as soon as it is computed. Modern chips implement this algorithm for 32-bit or 64-bit numbers in hardware or in microcode. To multiply two numbers with n digits using this method, one needs about n^{2} operations. More formally, the time complexity of multiplying two n-digit numbers using long multiplication is O(n^{2}).
Example
        23958233
  ×         5830
  --------------
        00000000    (23958233 × 0)
       71874699     (23958233 × 3, shifted one place)
     191665864      (23958233 × 8, shifted two places)
    119791165       (23958233 × 5, shifted three places)
  --------------
    139676498390
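The digit-by-digit procedure can be sketched in Python (the function name and the digit-array representation are illustrative choices; only single-digit products from the multiplication table are used, with carries propagated at the end):

```python
def long_multiply(x, y):
    """Grade-school long multiplication of non-negative integers in base 10."""
    xs = [int(d) for d in str(x)][::-1]   # digits, least significant first
    ys = [int(d) for d in str(y)][::-1]
    out = [0] * (len(xs) + len(ys))
    for i, a in enumerate(xs):
        for j, b in enumerate(ys):
            out[i + j] += a * b           # a single-digit product from the table
    carry = 0
    for k in range(len(out)):             # propagate carries through the columns
        carry, out[k] = divmod(out[k] + carry, 10)
    return int("".join(map(str, reversed(out))).lstrip("0") or "0")
```

The two nested loops over the digits are what make the running time about n^{2} operations for two n-digit inputs.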
Shifts and adds
An old method for multiplication that does not require multiplication tables is the peasant multiplication algorithm; this is in effect multiplication in base 2.
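A minimal sketch of the method in Python (the function name is mine; note that only halving, doubling, and addition are needed, never a digit product):

```python
def peasant_multiply(a, b):
    """Peasant (binary) multiplication: repeatedly halve a and double b,
    adding b to the total whenever a is odd. No multiplication table needed."""
    total = 0
    while a > 0:
        if a & 1:          # a is odd: this doubling of b contributes
            total += b
        a >>= 1            # halve a (drop its lowest bit)
        b <<= 1            # double b
    return total
```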
A similar technique is still used in computers when a binary number is multiplied by a small integer constant. Since multiplication of a binary number by a power of two can be expressed as a bit shift, a series of bit shifts and addition operations can perform the multiplication without the use of any conditional logic or hardware multiplier. For many processors, this is often the fastest way to perform these simple multiplications.
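As an illustration (the function names and chosen constants are mine, not from any particular processor): the constant is decomposed into the powers of two read off its binary representation, and each power of two becomes a shift.

```python
# 10 = 8 + 2, so x * 10 becomes two shifts and one add.
def times_10(x):
    return (x << 3) + (x << 1)

# 100 = 64 + 32 + 4: three shifts and two adds.
def times_100(x):
    return (x << 6) + (x << 5) + (x << 2)
```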
Multiplication algorithms for computer algebra
Karatsuba multiplication
For systems that need to multiply numbers in the range of several thousand digits, such as computer algebra systems and bignum libraries, long multiplication is too slow. These systems employ Karatsuba multiplication, which was discovered in 1962 and proceeds as follows. To multiply two n-digit numbers x and y represented in base W, where n is even and equal to 2m (if n is odd instead, or the numbers are not of the same length, this can be corrected by adding zeros at the left end of x and/or y), we can write
 x = x_{1} W^{m} + x_{2}
 y = y_{1} W^{m} + y_{2}
with m-digit numbers x_{1}, x_{2}, y_{1} and y_{2}. The product is given by
 xy = x_{1}y_{1} W^{2m} + (x_{1}y_{2} + x_{2}y_{1}) W^{m} + x_{2}y_{2}
so we need to quickly determine the numbers x_{1}y_{1}, x_{1}y_{2} + x_{2}y_{1} and x_{2}y_{2}. The heart of Karatsuba's method lies in the observation that this can be done with only three rather than four multiplications:
 compute x_{1}y_{1}, call the result A
 compute x_{2}y_{2}, call the result B
 compute (x_{1} + x_{2})(y_{1} + y_{2}), call the result C
 compute C - A - B; this number is equal to x_{1}y_{2} + x_{2}y_{1}.
To compute these three products of m-digit numbers, we can employ the same trick again, effectively using recursion. Once the products are computed, we need to add them together with the appropriate shifts, which takes about n operations.
If T(n) denotes the time it takes to multiply two n-digit numbers with Karatsuba's method, then we can write
 T(n) = 3 T(n/2) + cn + d
for some constants c and d, and this recurrence relation can be solved, giving a time complexity of Θ(n^{ln(3)/ln(2)}). The number ln(3)/ln(2) is approximately 1.585, so this method is significantly faster than long multiplication. Because of the overhead of recursion, Karatsuba's multiplication is not very fast for small values of n; typical implementations therefore switch to long multiplication if n is below some threshold.
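The recursive scheme above can be sketched in Python (arbitrary-precision ints stand in for a bignum type; the function name, the base-2 split, and the 64-bit cutoff are illustrative choices, not part of the algorithm's definition):

```python
def karatsuba(x, y):
    """Karatsuba multiplication of non-negative integers, splitting in base 2."""
    THRESHOLD = 64  # bits; below this, fall back to ordinary multiplication
    if x.bit_length() <= THRESHOLD or y.bit_length() <= THRESHOLD:
        return x * y
    m = max(x.bit_length(), y.bit_length()) // 2
    x1, x2 = x >> m, x & ((1 << m) - 1)   # x = x1 * 2^m + x2
    y1, y2 = y >> m, y & ((1 << m) - 1)   # y = y1 * 2^m + y2
    A = karatsuba(x1, y1)                 # three recursive multiplications,
    B = karatsuba(x2, y2)                 # not four
    C = karatsuba(x1 + x2, y1 + y2)
    # xy = A * 2^(2m) + (C - A - B) * 2^m + B
    return (A << (2 * m)) + ((C - A - B) << m) + B
```

The threshold switch at the top is exactly the fallback to long multiplication for small n mentioned above.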
Toom-Cook
Another method of multiplication is called Toom-Cook or Toom-3. The Toom-Cook method splits each number to be multiplied into multiple parts; Karatsuba is the special case of Toom-Cook using two parts. A three-way Toom-Cook can do a size-3N multiplication for the cost of five size-N multiplications, an improvement by a factor of 9/5 compared to the Karatsuba method's improvement by a factor of 4/3. Using more parts eventually comes up against the law of diminishing returns.
Fourier transform methods
There exist even faster algorithms, based on the fast Fourier transform. The idea, due to Strassen (1968), is the following: multiplying two numbers represented as digit strings is virtually the same as computing the convolution of those two digit strings. Instead of computing a convolution directly, one can first compute the discrete Fourier transforms of the two strings, multiply them entry by entry, and then compute the inverse Fourier transform of the result. (See convolution theorem.) The fastest known method based on this idea was described in 1971 by Schönhage and Strassen (Schönhage-Strassen algorithm) and has a time complexity of Θ(n ln(n) ln(ln(n))).
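The convolution-theorem idea can be demonstrated with a plain recursive FFT in Python (the function names, the base-10 digit representation, and the use of floating-point complex arithmetic with rounding are illustrative choices; real implementations such as Schönhage-Strassen work quite differently in the details):

```python
import cmath

def fft(a, invert=False):
    """Recursive radix-2 FFT; len(a) must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = 1 if invert else -1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n) * odd[k]
        out[k] = even[k] + w
        out[k + n // 2] = even[k] - w
    return out

def fft_multiply(x, y):
    """Multiply non-negative integers by convolving their digit strings."""
    xs = [complex(int(d)) for d in str(x)][::-1]   # digits, least significant first
    ys = [complex(int(d)) for d in str(y)][::-1]
    n = 1
    while n < len(xs) + len(ys):                   # pad so the convolution fits
        n *= 2
    fa = fft(xs + [0j] * (n - len(xs)))
    fb = fft(ys + [0j] * (n - len(ys)))
    prod = fft([a * b for a, b in zip(fa, fb)], invert=True)
    coeffs = [round(c.real / n) for c in prod]     # un-normalize, round off error
    # Carry propagation is folded into the final sum over powers of ten.
    return sum(c * 10 ** i for i, c in enumerate(coeffs))
```

Rounding the real parts back to integers works here only because the inputs are small; the number-theoretic transforms mentioned below sidestep that issue entirely.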
The GIMPS distributed Internet prime search project deals with numbers having several million digits and also employs a Fourier transform based multiplication algorithm.
Using number-theoretic transforms instead of discrete Fourier transforms avoids rounding error problems by using modular arithmetic instead of complex numbers.
Polynomial multiplication
All the above multiplication algorithms can also be adapted to multiply polynomials.
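For instance, the long-multiplication scheme carries over directly (the function name and coefficient-list representation are illustrative; note that, unlike integer digits, polynomial coefficients have no carries to propagate):

```python
def poly_multiply(p, q):
    """Schoolbook product of two polynomials given as coefficient lists,
    lowest degree first -- the polynomial analogue of long multiplication."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out
```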
External links
 Multiplication Algorithms used by GMP (http://www.swox.com/gmp/manual/MultiplicationAlgorithms.html#Multiplication%20Algorithms)