What is a computer?

Sean Williams

At the close of the 19th century, it was widely thought that math and science were about to be "solved," and that all that remained was cleaning up the details. The famous mathematician David Hilbert posed what he considered the last big questions. One of them was the Entscheidungsproblem, German for "decision problem." The question was:

Given a set of axioms in first-order logic, is there a mechanical process that decides whether a particular proposition follows from those axioms?

The first step in solving this problem was to figure out what constitutes a mechanical process for solving logic problems. Two great mathematicians came to the fore: Alonzo Church and Alan Turing. Church came up with a mathematically oriented process, called lambda calculus, and Turing came up with a mechanically oriented process, the Universal Turing Machine.

Both mathematicians proved that, for the problem-solving processes they devised, the answer to the Decision Problem is "no." They got to this answer through another, better-known problem:

Given an algorithm and a set of inputs, can you decide whether the algorithm will eventually halt or fall into an infinite loop?

What they actually proved is that the answer to this question, the Halting Problem, is "no." The undecidability of the Decision Problem then follows from the undecidability of the Halting Problem.

Indeed, this is one way of defining computers: a computer is a machine for which the Halting Problem is undecidable.
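
To make the argument concrete, here is a sketch of the diagonal argument in C. The `halts` function, the `PARADOX_ID` constant, and the idea of numbering programs are all assumptions for the sake of argument, not anything from the original post; no correct `halts` can exist, and the stub body is only there so the sketch compiles.

```c
#include <stdio.h>

/* HYPOTHETICAL decider: returns 1 if program number `prog` halts when run
   on `input`, 0 if it runs forever. The stub body only lets the sketch
   compile; Turing's argument shows no correct version can exist. */
int halts(int prog, int input) { (void)prog; (void)input; return 1; }

/* Pretend this very function has been assigned the number PARADOX_ID in
   whatever scheme we use to encode programs as integers. */
#define PARADOX_ID 42   /* placeholder, purely illustrative */

int paradox(int input) {
    if (halts(input, input))
        for (;;) ;      /* decider says "halts," so do the opposite: loop forever */
    return 0;           /* decider says "loops forever," so halt immediately */
}

int main(void) {
    /* Whatever halts(PARADOX_ID, PARADOX_ID) answers about paradox run on
       its own number, paradox does the opposite. Any claimed decider is
       therefore wrong about at least one program. */
    printf("the stub decider claims: %d\n", halts(PARADOX_ID, PARADOX_ID));
    return 0;
}
```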

The limits of analysis

One of the more eye-opening things for me, as a computer science student, came from the required graduate-level theory course. We looked at proofs about the behavior of computer programs. Proofs generally work by substituting things for equivalent things, to change the form of some logical statement into a form you're happier with.

When it comes to computer programs, this breaks down once you have a loop. What's the value of a loop? Sometimes you can work it out; sometimes you can prove that a loop "reduces its problem space" with each iteration until it collapses down to a particular value. OpenMP, a directive-based extension for C, C++, and Fortran, can automatically parallelize loops that follow certain "canonical forms," precisely because those forms can be analyzed. The strange programming language Idris has a totality checker that detects certain recursive patterns guaranteed to produce a value.
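
As an illustration (my example, not the post's), here is roughly what an OpenMP canonical-form loop looks like. The single index, fixed bounds, and fixed increment are what make the iteration count knowable up front; compile with an OpenMP-enabled compiler (e.g., `gcc -fopenmp`).

```c
#include <stdio.h>

#define N 1000000

int main(void) {
    static double a[N];
    double sum = 0.0;

    for (int i = 0; i < N; i++)
        a[i] = 0.5 * i;

    /* The loop header matches OpenMP's canonical form, so the iterations
       can be split across threads; the reduction clause combines the
       per-thread partial sums. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++)
        sum += a[i];

    printf("%f\n", sum);
    return 0;
}
```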

But for the most part, the value of a loop is unknowable. After all, outside certain special cases, you can't prove whether a loop will take on any value at all. Computer scientists have a special name for this: bottom, written with the symbol ⊥. ⊥ is the value of a "divergent computation," one that never actually produces a value, and it's what we're stuck with whenever we can't prove that a computation produces one. (⊥ is also the value of an exception in languages like Haskell.)
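
A concrete example of this, mine rather than the author's: the Collatz iteration. Nobody has proved that the loop below terminates for every starting value, so for an arbitrary input its value is exactly the kind of thing you have to write down as ⊥.

```c
#include <stdio.h>

int main(void) {
    unsigned long long n = 27;  /* an arbitrary starting value */
    unsigned long steps = 0;

    /* Halts for every input anyone has tried, but nobody has proved it
       halts for all inputs, so an analyzer must treat its result as ⊥. */
    while (n != 1) {
        n = (n % 2 == 0) ? n / 2 : 3 * n + 1;
        steps++;
    }

    printf("reached 1 after %lu steps\n", steps);
    return 0;
}
```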

The most obvious consequence of this is quite simple: a computer program cannot robustly write another computer program. In order to write a robust program-writing-program, you need to understand what the inputs and outputs of each piece of the program will be—just as normal programming is about understanding the flow of input and output.

If we try to model a computer program, which is the first step in writing a program to write programs, most of the inputs and outputs will be ⊥, and you can't analyze ⊥. After all, ⊥ isn't any particular value; there's no equivalent thing you can substitute into the proof. It's a sort of logical black hole, an immovable logical object.

The other obvious consequence of this is that we are nothing like computers. The Halting Problem presents some obstacles to programming, or rather to testing, but we can look at looping code and figure out what it's supposed to do. We also have problem-solving intuitions that let us sort out when the loop isn't doing what it's supposed to, and bring it more into line. I don't know what problem-solving intuition is, but I know for a fact that it isn't computation.

What about compiler optimization? The truth is, compiler optimization starts by breaking your code down into "basic blocks," which are runs of code that contain no branching. No ifs, no loops, no function calls. Basic blocks can be robustly analyzed, but they're also not Turing complete.
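
For a sense of what that looks like, here is a small function of my own invention, annotated with roughly where a compiler would draw the basic-block boundaries. The exact split varies between compilers; the point is that every branch ends a block, and what's left inside each block is straight-line code.

```c
#include <stdio.h>

/* Each numbered region is straight-line code: control enters at the top,
   leaves at the bottom, and nothing jumps into the middle. */
int clamp_sum(const int *xs, int n, int limit) {
    int sum = 0;              /* [1] entry block */
    int i = 0;

    while (i < n) {           /* [2] loop test: a branch, so it ends a block */
        sum += xs[i];         /* [3] loop body, up to the next branch */
        if (sum > limit)
            return limit;     /* [4] early exit */
        i++;                  /* [5] increment, then back to the test */
    }

    return sum;               /* [6] normal exit */
}

int main(void) {
    int data[] = { 3, 5, 9, 2 };
    printf("%d\n", clamp_sum(data, 4, 10));  /* prints 10: clamped after 3+5+9 */
    return 0;
}
```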

This is also the reason that the dream of "programming for business managers" has never worked out (despite decades of trying). Once your needs exceed the walled garden of curated functionality you can mix and match in a reasonably safe way, you're back to normal programming.

What a computer isn't

Another dream that hasn't panned out very well is declarative programming. Prolog is a language that purports to let you pose questions in first-order logic and have the machine answer them. Of course, computing began with the discovery that, in general, you can't do that.

Indeed, Prolog programs are highly sensitive to the order in which clauses are presented. This is because Prolog programs are still dynamical processes: there's still an evaluation system, a depth-first search with backtracking, that tries to establish the truth of the logic you present it, and writing the clauses in the wrong order can send that search into an infinite descent.

This gets to the most important point of what computers aren't: they are not logic machines. They contain integrated circuits that are designed from logical principles, but that no more makes them logic machines than the presence of silicon makes a beach a computer.

Similarly, the standard engineering tool for complex math used to be the slide rule. A slide rule is based on a more fundamental principle of what addition is: if you walk forward 6 feet and then walk forward another 3 feet, you've walked forward 6 + 3 = 9 feet. Is walking forward the essence of arithmetic?

[Image: Keuffel & Esser Model 4081-3 slide rules, from Wikimedia Commons]

Physically, a slide rule is just one bar that's allowed to slide between two other bars. You do addition by sliding the inner rule so that its 0 mark lines up with the left operand on the outer scale, finding the right operand on the inner rule, and reading off the value directly opposite it on the outer scale. This is like doing addition with a tape measure: to do 6 + 3, start from a wall, measure out 6 feet, and mark the position. Measure an additional 3 feet from your mark, and mark that position. Now measure the distance from the wall to the second mark.

Slide rules do another trick, though. Knowing that log(a * b) = log(a) + log(b), you can logarithmically scale the tick marks and thereby add log(a) to log(b). Look up the position of log(a) + log(b) on the same logarithmic scale, and the result is a * b.
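
Here is a quick numeric check of that trick (my own illustration, not the post's), mimicking the slide rule with base-10 logarithms: add the logs, then read the sum back off the same log scale by exponentiating. Compile with `-lm`.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double a = 6.0, b = 3.0;  /* arbitrary operands */

    /* Sliding one log scale past the other is just adding log(a) + log(b). */
    double distance = log10(a) + log10(b);

    /* Reading that position back off the log scale recovers the product. */
    double product = pow(10.0, distance);

    printf("log10(6) + log10(3) = %f\n", distance);
    printf("read back off the scale: %f\n", product);  /* ~18 */
    return 0;
}
```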

Does this mean that arithmetic is the essence of tape measures and slide rules—that a couple of sticks with notches on them "do" math? In a certain sense the answer is "yes," but accepting that answer also forces math to come down from the mountaintops and linger with us commoners.

In terms of Plato's cave, though, the answer is obviously "no." Adding some transistors doesn't make a computer any more of a "mathematical entity" or "pure abstractum" than a tape measure is.

What computers are

A friend of mine has the best, but also most abstruse, characterization of computers: he says they're instruments for which we control the output. With most instruments, a voltmeter for example, the output is some fact of the world. Instruments are calibrated to react to the world in certain predictable ways, but it's the world that determines how far the needle moves.

Computers are the strange case in which we condition the input and control the output. In my experience, this means computers have only two fundamental uses: accounting and control.

Accounting is a situation where we've created a system for ourselves, and we've set up the rules about how the inputs should be conditioned (debits must equal credits) and what the outputs should look like (e.g., a profit and loss statement). If the output were a fact of the world, there would be no need for a computer. This shows us another fact of computing: a bug is simply the computer producing output other than the output we wanted.
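
As a toy illustration of what "conditioning the input" means here (the names and structure are my own sketch, not a real accounting system): a double-entry ledger rejects any transaction whose debits don't equal its credits, and everything downstream assumes that rule held.

```c
#include <stdio.h>
#include <math.h>

struct entry {
    double debit;
    double credit;
};

/* The rule we impose on every input: total debits must equal total credits. */
int balanced(const struct entry *tx, int n) {
    double debits = 0.0, credits = 0.0;
    for (int i = 0; i < n; i++) {
        debits  += tx[i].debit;
        credits += tx[i].credit;
    }
    return fabs(debits - credits) < 1e-9;
}

int main(void) {
    /* Buy $500 of equipment with cash: debit equipment, credit cash. */
    struct entry tx[] = { { 500.0, 0.0 }, { 0.0, 500.0 } };

    printf("transaction %s\n", balanced(tx, 2) ? "accepted" : "rejected");
    return 0;
}
```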

This is the most fundamental problem with artificial intelligence, by the way. One thing that's very clear about AI research is that it has no interest in relinquishing control over the output; after all, AI is joined at the hip with "model validation." But then again, there isn't really such a thing as a computer whose programmers don't control the output. This is, in my opinion, a rather meager view of "intelligence."

Control is a situation where you want to drive some real-world process according to rules you've set up. Cruise control is meant to keep your car's speed in a range around some set point, and it does that by adjusting the throttle. We still control the output in this situation, but the output is not the car's speed—it's how open the throttle is. Controlling speed is the goal, but it isn't what the program does.
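
A minimal sketch of that point (the proportional controller and the toy car model are my own stand-ins, not how any real cruise control is implemented): the thing the program actually outputs is a throttle position, and the speed is whatever the world does with it.

```c
#include <stdio.h>

int main(void) {
    const double setpoint = 30.0;  /* desired speed, m/s */
    const double kp = 0.05;        /* proportional gain, picked arbitrarily */
    double speed = 20.0;           /* current speed, m/s */

    for (int step = 0; step < 10; step++) {
        /* The program's output: a throttle opening between 0 and 1. */
        double throttle = kp * (setpoint - speed);
        if (throttle < 0.0) throttle = 0.0;
        if (throttle > 1.0) throttle = 1.0;

        /* The world's response, crudely faked: the car speeds up with
           more throttle and loses a little speed to drag. */
        speed += 5.0 * throttle - 0.1;

        printf("step %d: throttle %.2f -> speed %.1f\n", step, throttle, speed);
    }
    return 0;
}
```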

Anyway, most computer programs you and I write are reskinned accounting systems. Video games are accounting systems with much fancier user interfaces. Even large scientific simulations are no different: almost all of the actual work in simulation design goes into making the simulation produce correct output. If that sounds like anathema to your understanding of the scientific method, I'm sorry to be the bearer of bad news.

You should, though, feel better about the fact that it's provably impossible for you to optimize yourself out of the job, at least in the aggregate.
