
Tom Deneire ⚡

Posted on • Originally published at tomdeneire.Medium

What every programmer should know about Alan Turing

Enigma encryption-machine — Photo by [Mauro Sbicego](https://unsplash.com/@maurosbicego?utm_source=medium&utm_medium=referral) on [Unsplash](https://unsplash.com?utm_source=medium&utm_medium=referral)

Biography

Alan Turing (1912–1954) was a British mathematician, who was highly influential in the development of theoretical computer science. Turing formalized the concepts of algorithm and computation by describing the so-called Turing machine, which can be considered a model of a general-purpose computer. Turing is widely considered to be the father of computer science and artificial intelligence.

Outside the realm of computer science, Turing is especially remembered for the role he played in World War Two whilst working for Britain’s codebreaking centre. He was particularly instrumental in improving the Allied techniques for breaking the ciphers of the famous Enigma machine, which was used extensively by Nazi Germany to encipher even their most top-secret military messages.

Sadly, despite his accomplishments in both cryptanalysis and computer science, Turing, who often worked under the cover of secrecy, was never fully recognized in his home country during his lifetime. Worse still, in 1952 Turing was prosecuted for homosexual acts. As an alternative to prison, he accepted chemical castration, which rendered him impotent and caused him to grow breasts. Turing died at home in 1954, only 41 years old, of cyanide poisoning; a half-eaten apple was found beside him. Although the circumstances of his death remain unclear, it appears most likely that he took his own life.

Turing machines

Computing

Turing first described his “computing” machine in 1936, with a strictly mathematical purpose. Without going into too much detail, Turing was essentially looking for a way to prove that, while it is possible to formulate a procedure to produce all rational numbers (numbers that can be expressed as a fraction, e.g. 1 = 1/1, 0.5 = 1/2 or 0.333… = 1/3), it is impossible to formulate such a procedure for all real numbers, which include the irrational numbers that cannot be expressed as a fraction, e.g. √2 or π.

The problem with the irrationals is that while they contain some numbers that are computable (i.e. numbers that can be computed to within any desired precision by a finite, terminating process, like √2 or π), the vast majority of irrational numbers cannot be computed. They consist of a seemingly random, infinite progression of digits without a discernible pattern, such as 0.2896574609817847850521… There is no way to formally describe how to produce such numbers.
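
To make “computable to any desired precision by a finite, terminating process” a bit more tangible, here is a minimal sketch in Python (my own illustration, nothing from Turing’s paper): the standard library’s decimal module lets us compute √2 to as many digits as we like, and the computation always terminates.

```python
# A minimal sketch: the "computable" irrational √2, calculated to any desired
# precision by a finite, terminating procedure (standard library only).
from decimal import Decimal, getcontext

getcontext().prec = 50        # ask for 50 significant digits
print(Decimal(2).sqrt())      # 1.4142135623730950488...
```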

The notion “computability” is a good way to understand what a Turing machine is supposed to do. It is a machine that, given a set of instructions, is able to compute numbers, even if their decimal expansion goes on for ever. It is not the end result that counts, but the formal procedure required to produce that result. For instance, we can easily think of a set of instructions to produce 1/3:

0.33333333333333333333333333333333333333333333...

It could be something like (this is not how Turing does it, but that does not matter for now):

A: print "0"
B: print "."
C: print "3"
D: goto C

I’m using instructions here that will ring a bell for anyone familiar with computer science. This is not exactly how Turing designed his machine, but it is the logical consequence of it.
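
For readers who like to see this run, here is the same idea as a minimal Python sketch (my own illustration, not Turing’s formalism; a real machine would never halt here, so the loop is cut off after a fixed number of digits):

```python
import sys

# The four "instructions" A-D above, written as an ordinary loop.
def print_one_third(digits=40):
    sys.stdout.write("0")      # A: print "0"
    sys.stdout.write(".")      # B: print "."
    for _ in range(digits):    # C: print "3" / D: goto C
        sys.stdout.write("3")
    print()

print_one_third()              # prints 0.333... (40 threes)
```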

Turing machine

In actuality, a Turing machine is more like a typewriter: it can both print and read characters or “symbols”, it has a head that can move both left and right, and it works on an infinite one-dimensional tape divided into cells. The machine’s actions are not so much the result of “instructions” (that would imply that it can be programmed), but of “states” or “configurations”. In a certain state, the machine will perform a certain action (Turing also allows a series of actions) depending on which symbol it reads at the current position of the head. These actions (simulated in the code sketch after this list) can be:

  • print a symbol

  • erase a symbol

  • move to the right

  • move to the left

  • change to another state
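
Here is a minimal simulator for such a machine in Python (a sketch of my own: the transition format, state names and the dictionary used as “tape” are assumptions for illustration, not Turing’s notation):

```python
# A minimal Turing machine simulator. Transition format (my own):
#   (state, symbol read) -> (symbol to write, move "L"/"R", next state)
# "Erasing" a symbol is simply writing the blank symbol " ".

def run(transitions, start_state, steps):
    tape = {}                             # infinite tape: cell index -> symbol
    head = 0
    state = start_state
    for _ in range(steps):                # a real machine may run forever, so cut off
        symbol = tape.get(head, " ")      # unwritten cells read as blank
        write, move, state = transitions[(state, symbol)]
        tape[head] = write                # print (or erase) a symbol
        head += 1 if move == "R" else -1  # move right or left
    return "".join(tape[i] for i in sorted(tape))

# The 1/3 machine from the previous section: states A, B and C.
one_third = {
    ("A", " "): ("0", "R", "B"),
    ("B", " "): (".", "R", "C"),
    ("C", " "): ("3", "R", "C"),   # stay in C and keep printing "3"
}
print(run(one_third, "A", 20))     # prints "0." followed by eighteen 3s
```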

Although difficult to believe at first, this unbelievably simple, even “slavish” (in Turing’s own words) machine, which in itself is not even able to do simple addition or multiplication, can be used to print every computable number. For instance, we can easily see how the machine’s capacity for conditional logic means that we can construct a Turing machine that keeps repeating the sequence 0123456789 (a transition table for this machine is sketched right after the example):

0.01234567890123456789...
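
Reusing the run() helper from the simulator sketch above, the repeating machine needs little more than one state per digit, each printing its digit and handing over to the next (again my own encoding, with assumed state names S0, S1 and D0 to D9):

```python
# Repeat 0123456789 forever: two start-up states print "0" and ".",
# then states D0..D9 cycle through the digits.
repeater = {
    ("S0", " "): ("0", "R", "S1"),
    ("S1", " "): (".", "R", "D0"),
}
for i, d in enumerate("0123456789"):
    repeater[(f"D{i}", " ")] = (d, "R", f"D{(i + 1) % 10}")

print(run(repeater, "S0", 32))   # "0." followed by three full cycles of 0123456789
```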

In his original 1936 paper, Turing gives even trickier examples, such as this one (with increasingly long runs of 0s):

0.01001000100001...

Any finite “algorithm” (a word I have carefully avoided so far, because it is of course anachronistic) that one can think of to construct a number can be formulated as a specific Turing machine. What’s more, it is possible to design a Universal Turing machine: one single machine (U) that can be fed the description of any other specific machine (M) as input, and will consequently compute the same number as M.
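
This is not a real Universal Turing machine, of course (the decoder below is ordinary Python rather than a Turing machine itself), but here is a toy sketch of the key idea: a machine’s description is just data, in a made-up string format of my own, that another program can read and then execute with the simulator from above.

```python
# The 1/3 machine written down as a plain string, then decoded and executed.
description = "A, ,0,R,B;B, ,.,R,C;C, ,3,R,C"

def decode(desc):
    table = {}
    for rule in desc.split(";"):
        state, read, write, move, nxt = rule.split(",")
        table[(state, read)] = (write, move, nxt)
    return table

print(run(decode(description), "A", 20))   # same output as the 1/3 machine above
```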

When you consider what Turing is describing here — the notion of I/O, machine states or instructions implementing an algorithm’s logic, a stream of binary data (Turing used binary fractions exclusively), a way to “store” programs, and so on — it is easy to see how the Turing machine is widely considered a basic model for a general computer as we know it today.

Turing completeness

A system of data-manipulation rules, such as a computer’s instruction set or a programming language, is said to be Turing-complete, if it can be used to simulate any Turing machine. For example, an imperative language is Turing-complete if it has conditional branching and the ability to change an arbitrary amount of memory.
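
To see how little that actually takes, here is a toy counter machine in the spirit of Minsky’s register machines (the instruction names and tuple format are my own): an increment instruction plus a single “jump if zero, otherwise decrement” instruction over unbounded registers is already enough for Turing completeness.

```python
# A tiny counter-machine interpreter: unbounded registers (arbitrary memory)
# plus one conditional jump (branching) suffice for Turing completeness.
def run_counter_machine(program, max_steps=10_000):
    registers = {}                 # unbounded memory: register name -> integer
    pc = 0                         # program counter
    for _ in range(max_steps):     # cut off, since programs may never halt
        if pc >= len(program):
            break
        op, *args = program[pc]
        if op == "inc":            # inc r: r += 1
            registers[args[0]] = registers.get(args[0], 0) + 1
            pc += 1
        elif op == "jzdec":        # jzdec r, target: if r == 0 jump, else r -= 1
            if registers.get(args[0], 0) == 0:
                pc = args[1]
            else:
                registers[args[0]] -= 1
                pc += 1
    return registers

# Example: move the value of register "a" (here 3) into register "b".
program = [("inc", "a"), ("inc", "a"), ("inc", "a"),
           ("jzdec", "a", 6), ("inc", "b"), ("jzdec", "z", 3)]
print(run_counter_machine(program))   # {'a': 0, 'b': 3}
```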

Of course, all well-known programming languages are Turing-complete, and so are some lesser-known, esoteric ones like Whitespace, in which only spaces, linefeeds and tabs have meaning, or Piet, whose programs are bitmaps that look like abstract art. However, it might come as a surprise that even programs like Microsoft Excel or PowerPoint are Turing-complete. This means that, at least in theory, they are computationally equivalent to any other Turing-complete language. Imagine writing an operating system in PowerPoint slides! And we can go further still: even video games like Minecraft, card games like Magic: The Gathering, and chemical reaction networks have been shown to be Turing-complete!

Notable examples of languages that are not Turing-complete are markup languages such as HTML or XML, and query languages. Standard SQL, for instance, is not Turing-complete, although with some additions (recursive queries being the usual example) it can be made so. Classic regular expressions and parsing languages are also not Turing-complete.

Turing test

In his writings, Turing often connected his computing machine to the way the human mind works. For instance, he likened the states of the machine to the states of mind of a person performing computations. Vice versa, it is possible that he modeled certain aspects of the Turing machine on the way humans calculate. This interest led Turing to explore what we would now call “artificial intelligence”.

Besides the apparent resemblance between the human mind and his computing machine, it is not difficult to see how Turing would arrive at the notion of making machines “intelligent”. Let’s go back to the Universal Turing machine, which builds on the fact that it is possible to describe or encode any Turing machine as a string. Conversely, any string that can be produced by an algorithm can be produced by a Turing machine. Hence, it is conceivable to create an algorithm that produces the encoding of a Turing machine: in other words, software writing software.
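
To make that concrete with the toy pieces used earlier, here is a small sketch (my own illustration, reusing the assumed decode() and run() helpers from the simulator sketches above) of a program that generates the description of a Turing machine and then runs it:

```python
# "Software writing software" in miniature: generate the description string
# of a machine printing 0.ddd... for any digit d, then decode and run it.
def make_repeater_description(digit):
    return f"A, ,0,R,B;B, ,.,R,C;C, ,{digit},R,C"

print(run(decode(make_repeater_description("7")), "A", 12))   # 0.7777777777
```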

When thinking about the problem of artificial intelligence, Turing proposed what he called the “imitation game” (hence the title of the 2014 movie about Turing’s life), nowadays known as the Turing test. The test is designed to determine whether a machine is able to exhibit intelligent behaviour indistinguishable from that of a human. In the standard interpretation, it is a conversation between a human interrogator and two other parties, one of which is a machine. The “game” is for the interrogator to determine which one is human and which is not, using written communication only (to rule out factors like articulation, tone, etcetera).

While chatbot programs have been claimed to pass restricted versions of the Turing test ever since the 1960s (ELIZA being an early and famous example), the test remains a highly influential, but also widely criticized, way to approach artificial intelligence.

Want to know more?

A lot more can be said about Turing machines or Alan Turing’s influence on theoretical computer science in general, and this short blog cannot really do justice to the richness of his ideas.

For those who want to know more, I highly recommend reading *The Annotated Turing: A Guided Tour Through Alan Turing’s Historic Paper on Computability and the Turing Machine* (2008) by Charles Petzold (ISBN 978-0-470-22905-7). It is written as a commentary on Turing’s original paper “On Computable Numbers, with an Application to the Entscheidungsproblem” (doi:10.1112/plms/s2-42.1.230). It not only takes the reader sentence by sentence through Turing’s paper, but also adds explanations, further examples, corrections, and biographical information. Moreover, it requires nothing but a general, high-school level of mathematics to start with.

In short, this is an invaluable source for those who want to explore Alan Turing’s groundbreaking insights on the power of mechanical computation, and — more importantly — its fundamental limitations.


Hi! 👋 I’m Tom. I’m a software engineer, a technical writer and IT burnout coach. If you want to get in touch, check out https://tomdeneire.github.io
