A Brief History of Computing
- Mathematical Discoveries important to computing

© Copyright 1996-2005, Stephen White


1614 Scotsman John Napier (1550-1617) published a paper outlining his discovery of the logarithm. Napier also invented an ingenious system of moveable rods (referred to as Napier's Rods or Napier's bones). These allowed the operator to multiply, divide, and calculate square and cube roots by moving the rods around and placing them in specially constructed boards.
1848 British Mathematician George Boole devised binary algebra (Boolean algebra) paving the way for the development of a binary computer almost a century later. See 1939.
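As an illustration (not part of the original timeline), the identities of Boolean algebra can be checked mechanically. The following Python sketch verifies De Morgan's laws and the distributivity of AND over OR exhaustively over all truth values:

```python
from itertools import product

# Check two families of Boolean-algebra identities over every
# combination of three truth values.
for a, b, c in product((False, True), repeat=3):
    # De Morgan's laws
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))
    # Distributivity of AND over OR
    assert (a and (b or c)) == ((a and b) or (a and c))

print("Boolean identities verified")
```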
1937 Alan M. Turing (1912-1954), of Cambridge University, England, publishes a paper on "computable numbers" - the mathematical theory of computation. The paper solves a mathematical problem by reasoning about a theoretical, simplified computer, known today as the Turing machine.
1938 Claude E. Shannon (1916-2001) publishes a paper on the implementation of symbolic logic using relays.
1950 The British mathematician and computer pioneer Alan Turing declared that one day there would be a machine that could duplicate human intelligence in every way and prove it by passing a specialized test. In this test, a computer and a human hidden from view would be asked random identical questions. If the computer were successful, the questioner would be unable to distinguish the machine from the person by the answers.
1956 First conference on Artificial Intelligence held at Dartmouth College in New Hampshire.
1956 Edsger Dijkstra invented an efficient algorithm for shortest paths in graphs as a demonstration of the abilities of the ARMAC computer.
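Dijkstra's algorithm remains the standard method for this problem. A minimal Python sketch using a priority queue (the graph and function names are invented for illustration):

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths for non-negative edge weights.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]          # (distance-so-far, node)
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue              # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd      # found a shorter route to v
                heapq.heappush(heap, (nd, v))
    return dist

graph = {
    "a": [("b", 7), ("c", 9), ("f", 14)],
    "b": [("c", 10), ("d", 15)],
    "c": [("d", 11), ("f", 2)],
    "d": [("e", 6)],
    "e": [],
    "f": [("e", 9)],
}
print(dijkstra(graph, "a"))
```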
1965 Fuzzy logic introduced by Lotfi Zadeh (University of California, Berkeley); it is used to process approximate data - such as 'about 100'.
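To illustrate the idea (this is an assumed example, not Zadeh's original formulation), a fuzzy set such as "about 100" can be modelled with a membership function that grades how well a value fits, instead of a hard yes/no test:

```python
def about_100(x, spread=20.0):
    """Triangular membership function for 'about 100':
    1.0 at exactly 100, falling linearly to 0.0 at 100 +/- spread.
    The spread of 20 is an arbitrary choice for this sketch."""
    return max(0.0, 1.0 - abs(x - 100.0) / spread)

for x in (80, 90, 100, 105, 130):
    print(x, about_100(x))
```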
1976

Whitfield Diffie and Martin Hellman published their famous 'key exchange' algorithm in an article called "New Directions in Cryptography". It was the first published algorithm that allowed two parties, communicating only over an insecure medium, to agree on a shared secret that no one eavesdropping on their communication could discover. This secret could then be used as a key for further secure communication. It is widely regarded as the beginning of public key cryptography, as until its publication many regarded secure communication in this manner as impossible.
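The exchange itself is plain modular arithmetic. A toy Python sketch follows; the prime and generator here are deliberately small and chosen only for illustration (real deployments use primes of 2048 bits or more):

```python
import random

# Public parameters, agreed in the open.
p = 2**61 - 1   # a Mersenne prime - far too small for real security
g = 3           # public base

a = random.randrange(2, p - 1)   # Alice's secret exponent
b = random.randrange(2, p - 1)   # Bob's secret exponent

A = pow(g, a, p)   # Alice sends A = g^a mod p over the insecure channel
B = pow(g, b, p)   # Bob sends B = g^b mod p

shared_alice = pow(B, a, p)   # Alice computes (g^b)^a mod p
shared_bob = pow(A, b, p)     # Bob computes (g^a)^b mod p

assert shared_alice == shared_bob   # both hold the same secret
print("shared secret established")
```

An eavesdropper sees p, g, A, and B, but recovering a or b from them is the discrete-logarithm problem, believed to be intractable for suitably large primes.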

Diffie-Hellman Key Exchange does not provide a mechanism to authenticate the other party in the communication, and so is vulnerable to a "man in the middle" attack - one where a third party is able to modify the communications between the two parties. RSA, published in 1977, provided a solution to this problem.

1977

Ron Rivest, Adi Shamir and Len Adleman (the initial letters of their surnames forming 'RSA') described an asymmetric algorithm for public key cryptography. This algorithm is widely used for encrypting traffic on the modern internet. Its strength rests on the belief that there is no efficient algorithm for finding the integer factors of large numbers.
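A toy Python illustration using small textbook primes (real keys use primes hundreds of digits long, so this sketch shows the mechanics only):

```python
# Toy RSA key generation with tiny primes.
p, q = 61, 53
n = p * q                  # public modulus, n = 3233
phi = (p - 1) * (q - 1)    # Euler's totient of n, 3120
e = 17                     # public exponent, coprime to phi
d = pow(e, -1, phi)        # private exponent: modular inverse (Python 3.8+)

message = 65               # a message encoded as an integer < n
ciphertext = pow(message, e, n)   # encrypt with the public key (e, n)
recovered = pow(ciphertext, d, n) # decrypt with the private key (d, n)

assert recovered == message
print(ciphertext, recovered)
```

Anyone may encrypt with the public pair (e, n), but computing d requires phi, and hence the factors of n - which is exactly the hard problem the entry above describes.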

It should be noted that a similar algorithm was devised by Clifford Cocks in 1973, while working for the British intelligence service at GCHQ. The discovery was, however, classified and not released until 1997 - by which time RSA had firmly planted itself into history.

1987 Fractal Image Compression Algorithm devised by English mathematician Michael F. Barnsley, allowing digital images to be compressed and stored as fractal codes rather than normal image data. In theory this allows more efficient storage of the images.
