Multiplication results

Graph of results (Note the logarithmic axes)

In January 1999, I measured the program on a Sun SPARCstation 4 using g++ at optimization level 4. (It was an antiquated computer even at that time.) The results below are for pairs of random n-digit numbers. (Yes, digits. I implemented it in decimal.) All times are in milliseconds.
# digits     Karatsuba    grade school
       8      0.059923        0.063902
      16      0.106360        0.121773
      32      0.278862        0.414594
      64      0.798085        1.481481
     128      2.325581        5.780347
     256      6.944444       22.727273
     512     21.276596       88.333333
    1024     63.750000      370.000000
    2048    195.000000     1650.000000
What we see is that Karatsuba, properly implemented, beats grade-school multiplication even for 16-digit numbers. It is significantly better at 32 digits, and of course after that it just blows grade-school away.
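
To make the comparison concrete, the grade-school baseline on decimal-digit arrays looks something like the following sketch. It is only an illustration: the function name, the least-significant-digit-first layout, and the use of std::vector are my assumptions here, not necessarily the exact code that was timed.

    #include <cstddef>
    #include <vector>

    // Grade-school (long) multiplication of two decimal-digit arrays.
    // Digits are stored least-significant first, one decimal digit per int.
    // Illustrative sketch; the timed program's layout may differ.
    std::vector<int> gradeSchool(const std::vector<int>& a, const std::vector<int>& b)
    {
        std::vector<int> result(a.size() + b.size(), 0);
        for (std::size_t i = 0; i < a.size(); ++i) {
            int carry = 0;
            for (std::size_t j = 0; j < b.size(); ++j) {
                int t = result[i + j] + a[i] * b[j] + carry;
                result[i + j] = t % 10;
                carry = t / 10;
            }
            result[i + b.size()] = carry;  // top digit of this row's partial product
        }
        return result;  // may include a leading zero digit
    }

The two nested loops are where the O(n^2) running time comes from, which is what the grade-school column above reflects.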

More recently (November 2000), I compiled the program using g++ (highest optimization level) on a 700MHz Pentium III Linux box. (Here I set the KARAT_CUTOFF constant - the point at which the recursive Karatsuba function falls back to grade-school multiplication - to 16.) On this machine, Karatsuba pulls significantly ahead only once the numbers reach 64 digits; a sketch of how the cutoff fits into the recursion appears after the table. Again, all times are in milliseconds.
# digits     Karatsuba    grade school
       8      0.002838        0.002703
      16      0.006230        0.006141
      32      0.016622        0.017437
      64      0.048354        0.058411
     128      0.140272        0.212314
     256      0.437445        0.808407
     512      1.317523        3.164557
    1024      4.016064       12.048193
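
Here is a rough sketch of how a KARAT_CUTOFF-style fallback fits into the Karatsuba recursion. Only the idea of dropping to grade school below the cutoff comes from the program described above; the helper names (addDigits, subDigits), the splitting details, and the digit layout are assumptions made for illustration, and gradeSchool is the routine sketched earlier on this page.

    #include <algorithm>
    #include <cstddef>
    #include <vector>

    const std::size_t KARAT_CUTOFF = 16;  // at or below this many digits, use grade school

    // Grade-school routine as sketched earlier on this page.
    std::vector<int> gradeSchool(const std::vector<int>& a, const std::vector<int>& b);

    // Sum of two little-endian decimal-digit arrays.
    static std::vector<int> addDigits(const std::vector<int>& a, const std::vector<int>& b)
    {
        std::vector<int> r(std::max(a.size(), b.size()) + 1, 0);
        int carry = 0;
        for (std::size_t i = 0; i < r.size(); ++i) {
            int t = carry;
            if (i < a.size()) t += a[i];
            if (i < b.size()) t += b[i];
            r[i] = t % 10;
            carry = t / 10;
        }
        return r;
    }

    // Difference a - b, assuming a >= b (always true for the Karatsuba middle term).
    static std::vector<int> subDigits(const std::vector<int>& a, const std::vector<int>& b)
    {
        std::vector<int> r(a.size(), 0);
        int borrow = 0;
        for (std::size_t i = 0; i < a.size(); ++i) {
            int t = a[i] - borrow - (i < b.size() ? b[i] : 0);
            if (t < 0) { t += 10; borrow = 1; } else { borrow = 0; }
            r[i] = t;
        }
        return r;
    }

    // Karatsuba multiplication with a grade-school cutoff.
    std::vector<int> karatsuba(const std::vector<int>& a, const std::vector<int>& b)
    {
        if (a.size() <= KARAT_CUTOFF || b.size() <= KARAT_CUTOFF)
            return gradeSchool(a, b);  // small operands: recursion overhead isn't worth it

        std::size_t half = std::min(a.size(), b.size()) / 2;

        // Split each operand into a low half (first 'half' digits) and a high half.
        std::vector<int> a0(a.begin(), a.begin() + half), a1(a.begin() + half, a.end());
        std::vector<int> b0(b.begin(), b.begin() + half), b1(b.begin() + half, b.end());

        std::vector<int> z0 = karatsuba(a0, b0);  // low  * low
        std::vector<int> z2 = karatsuba(a1, b1);  // high * high
        std::vector<int> z1 = karatsuba(addDigits(a0, a1), addDigits(b0, b1));
        z1 = subDigits(subDigits(z1, z0), z2);    // middle term: a0*b1 + a1*b0

        // Recombine: z2 * 10^(2*half) + z1 * 10^half + z0.
        std::vector<int> result(a.size() + b.size() + 1, 0);
        int carry = 0;
        for (std::size_t i = 0; i < result.size(); ++i) {
            int t = carry;
            if (i < z0.size()) t += z0[i];
            if (i >= half && i - half < z1.size()) t += z1[i - half];
            if (i >= 2 * half && i - 2 * half < z2.size()) t += z2[i - 2 * half];
            result[i] = t % 10;
            carry = t / 10;
        }
        return result;  // may include leading zero digits
    }

The cutoff matters because, for small operands, the bookkeeping of the three recursive calls costs more than the quadratic work it saves - which is exactly what the 8- and 16-digit rows above show.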

Of course, this raises the question: why would one want to multiply 100-digit numbers with exact precision? One response is cryptographic applications: some protocols (including RSA) involve many multiplications of numbers hundreds of digits long. (Another application is breaking mathematical records (largest prime, whatever), but it's not clear how practical that is.)