Turing Machines and Computability


What are the limits (if any) of computers?

To answer this question, we need to clarify the notions of computer and computation.

A good theory should be as simple as possible, but not simpler.
—Albert Einstein

A computation is any process that can be described by a set of unambiguous instructions.

Alan Turing invented the idea of a Turing Machine in 1935-36 to describe computations.


Example: Inversion

State   Symbol   New State   New Symbol   Move
  1       0          1           1         R
  1       1          1           0         R
  1       b          2           b         R

Start State: 1
Halt State: 2

This Turing machine can be viewed as a function that takes an input sequence and returns the corresponding inverted sequence (all 1's replaced by 0's and vice versa).

1100  -->  0011
01  -->  10

Turing machine description:

    (1, 0, 1, 1, R)
    (1, 1, 1, 0, R)
    (1, b, 2, b, R)
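The five-tuple format above can be brought to life with a small Scheme sketch. Everything here (run-tm, lookup, replace-at, the list-of-lists rule format) is our own illustration, not part of any standard; the tape is a finite list, and we assume the head never moves off either end.

```scheme
;; A rule is a five-element list: (state symbol new-state new-symbol move).
(define inversion-rules
  '((1 0 1 1 R)
    (1 1 1 0 R)
    (1 b 2 b R)))

;; Find the rule matching the current state and scanned symbol.
(define lookup
  (lambda (state symbol rules)
    (cond ((null? rules) (error "no matching rule"))
          ((and (equal? (caar rules) state)
                (equal? (cadar rules) symbol))
           (car rules))
          (else (lookup state symbol (cdr rules))))))

;; Return a copy of lst with element i replaced by x.
(define replace-at
  (lambda (lst i x)
    (if (= i 0)
        (cons x (cdr lst))
        (cons (car lst) (replace-at (cdr lst) (- i 1) x)))))

;; Run until the halt state is reached.
(define run-tm
  (lambda (rules state tape pos halt)
    (if (equal? state halt)
        tape
        (let ((rule (lookup state (list-ref tape pos) rules)))
          (run-tm rules
                  (caddr rule)                        ; new state
                  (replace-at tape pos (cadddr rule)) ; write new symbol
                  (if (eq? (car (cddddr rule)) 'R)    ; move head
                      (+ pos 1)
                      (- pos 1))
                  halt)))))

;; (run-tm inversion-rules 1 '(1 1 0 0 b) 0 2)  =>  (0 0 1 1 b)
```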


Turing Machine Simulator from The Analytical Engine

Turing Machine Simulator from Buena Vista University


Is this really enough to compute everything?

Consider:


Church-Turing Thesis

Anything that can be computed can be computed by a Turing machine.
 


Choice of programming language doesn't really matter — all are "Turing equivalent"

When we talk about Turing machines, we're really talking about computer programs in general.


Corollary

If the human mind is really a kind of computer, it
must be equivalent in power to a Turing machine



The Halting Problem


Is there anything a Turing machine cannot do, even in principle?   YES!

Example: Looper TM eventually halts on input 0000bbb... but loops forever on input 0000111bbb...

No Turing machine can infallibly tell if another Turing machine will get stuck in an infinite loop on some given input.

In other words, no computer program can infallibly tell if another computer program will ever halt on some given input.

Put another way, no computer program can infallibly tell whether another program is free of infinite-loop bugs.

How did Turing prove that such a program is in principle impossible?

We'll use Scheme instead of Turing machines to illustrate the argument, but the argument is valid no matter what language we use to describe computations (Scheme, Turing machines, BASIC, Java, etc.)

Turing's approach was to assume that a loop-detector program could be written.

He then showed that this leads directly to a logical contradiction!

So, following in Turing's steps, let's just assume that it's possible to write a Scheme program that correctly tells whether other Scheme programs will eventually halt when given particular inputs.  Let's call our hypothetical program halts?.

(define halts?
  (lambda (program input)
    (cond

          ... lots of complicated code ...

      ((... more code ...) #t)
      (else #f))))  

For example, let's write a couple of simple Scheme programs to test halts?:

(define halter
  (lambda (input)
    'done))

(define looper
  (lambda (input)
    (cond
      ((= input 1) (looper input))
      (else 'done))))

halter always halts, no matter what input we feed it.

looper loops forever if we happen to feed it the value 1; any other value will cause it to halt.
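If halts? really existed, we would expect it to behave as follows on our two test programs. These calls are purely hypothetical, since (as we are about to show) halts? cannot actually be written:

```scheme
;; Hypothetical behavior, assuming halts? could exist:
(halts? halter 42)   ; => #t  (halter halts on every input)
(halts? looper 1)    ; => #f  (looper runs forever on 1)
(halts? looper 2)    ; => #t  (looper halts on anything else)
```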

So far, we have every reason to believe that halts? could exist, at least in principle, even though it might be a rather hard program to write.

At this point, Turing says "OH YEAH? If halts? exists, then I can define the following program called turing which accepts any Scheme program as its input..."

(define turing
  (lambda (program)
    (cond
      ((halts? program program) (looper 1))
      (else 'done))))

At this point, we say "Yes, so what?"

Turing laughs and says "Well, what happens when I feed the turing program to itself?"

(turing turing)

What happens indeed? Let's analyze the situation:

Suppose (turing turing) halts. Then (halts? turing turing) returns #t, so turing calls (looper 1) and runs forever. Now suppose (turing turing) runs forever. Then (halts? turing turing) returns #f, so turing immediately returns 'done and halts. Either way, (turing turing) halts if and only if it does not halt — a logical contradiction.

Thus our original assumption about the existence of halts? must have been invalid, since the logically impossible turing program is easy to define whenever halts? is available to us.

Q.E.D.

Conclusion

The task of deciding if an arbitrary computation will
ever terminate cannot be described computationally.



Universal Turing Machines


Turing discovered another amazing fact about Turing machines:


A single Turing machine, properly programmed, can simulate any other Turing machine.


Such a machine is called a Universal Turing Machine (UTM)


How can we "encode" a Turing machine?   Here's one way:

Example: Looper TM

States:  1, 2  --> 0, 00
Symbols:  0, 1, b  -->  0, 00, 000
Moves:  L, R  -->  0, 00

Rule 1:   (1, 0, 1, 0, R)   -->  0101010100
Rule 2:   (1, 1, 1, 1, L)   -->  01001010010
Rule 3:   (1, b, 2, b, R)   -->  010001001000100

1110101010100110100101001011010001001000100111
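As a sanity check, the encoding scheme can be written out in Scheme. The helper names (unary, encode-rule, looper-code) and the numeric codes we pass in (state 1 → one 0, symbol b → three 0's, move R → two 0's) are our own rendering of the tables above:

```scheme
;; n zeros in a row, following the encoding tables above.
(define unary
  (lambda (n) (make-string n #\0)))

;; Encode one rule: the five codes, separated by single 1's.
(define encode-rule
  (lambda (state symbol new-state new-symbol move)
    (string-append (unary state) "1"
                   (unary symbol) "1"
                   (unary new-state) "1"
                   (unary new-symbol) "1"
                   (unary move))))

;; The whole machine: 111, the rules separated by 11, then 111 again.
(define looper-code
  (string-append "111"
                 (encode-rule 1 1 1 1 2) "11"  ; (1, 0, 1, 0, R)
                 (encode-rule 1 2 1 2 1) "11"  ; (1, 1, 1, 1, L)
                 (encode-rule 1 3 2 3 2)       ; (1, b, 2, b, R)
                 "111"))

;; looper-code => "1110101010100110100101001011010001001000100111"
```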


We could run our hypothetical loop-detector Turing machine on the above encoding together with the input 0000111.

Eventually the machine would halt with a single 1 as output, meaning that an infinite loop was detected.

But, alas, we know that such a loop-detecting Turing machine is impossible, as Turing showed.