### Friday, November 15, 2002

**It all started with a machine...**

Steve Homer and I have finally finished our chapter *A Short History
of Computational Complexity*. It will eventually appear in a book
collection on the history of mathematical logic. You can also see a
talk I gave at the 2002 Complexity Conference based on the paper.

7:52 AM

### Thursday, November 14, 2002

**Tic Tac Toe Variation**

In my daughter's second-grade math homework there was an interesting
variation of Tic-Tac-Toe designed to teach addition and
subtraction. Take a 3 x 3 grid and randomly give each square a
different number between 2 and 18. We have two players, X and O. Play
goes as follows:

1. Player X chooses a number from 1 to 9.
2. Player O chooses a number from 1 to 9 that she had not picked before.
3. Player O adds that number and the last number picked by X; if the
   sum is a square on the board and that square is unmarked, it is
   marked O.
4. Player X chooses a number from 1 to 9 that he had not picked before.
5. Player X adds that number and the last number picked by O; if the
   sum is a square on the board and that square is unmarked, it is
   marked X.
6. Go to step 2.

Play ends when either X or O has three in a row, and that player is
declared the winner, or when all the numbers run out and the game is
declared a draw.

Here is an example:

```
12 |  5 |  7
------------
14 | 11 |  3
------------
 4 | 13 |  9
```

X picks 1; O picks 3 (making 4); X: 8 (11); O: 4 (12); X: 3 (7); O:
6 (9). At this point the board looks like:

```
 O |  5 |  X
14 |  X |  3
 O | 13 |  O
```

Defensively X plays 2, O: 1, X: 1, O: 2, and whatever X plays next O
has a forced win by making 13 or 14.

Despite its simplicity this is quite a challenging game. For every
initial configuration, is there always a forced draw, as in real
Tic-Tac-Toe, or do some configurations have a forced win for X or O?
How complicated is it to compute an optimal strategy? For a single
fixed board one can at least search exhaustively, as sketched below.
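
To make the question concrete, here is a minimal brute-force sketch
in Python (my own encoding of the rules above; the representation,
function names and conventions are assumptions, not from the
homework). It computes the value of the example grid under optimal
play by memoized minimax; the state space is large, so expect a long
run.

```python
from functools import lru_cache

GRID = (12, 5, 7, 14, 11, 3, 4, 13, 9)     # the example board, row by row
LINES = ((0, 1, 2), (3, 4, 5), (6, 7, 8),  # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),  # columns
         (0, 4, 8), (2, 4, 6))             # diagonals

def winner(marks):
    """Return 'X' or 'O' if that player has three in a row, else None."""
    for a, b, c in LINES:
        if marks[a] != '.' and marks[a] == marks[b] == marks[c]:
            return marks[a]
    return None

@lru_cache(maxsize=None)
def value(marks, used_x, used_o, last, turn):
    """Game value under optimal play: +1 X wins, -1 O wins, 0 draw.
    marks: 9-char string over '.XO'; used_x, used_o: bitmasks of numbers
    already picked; last: the opponent's previous pick; turn: who moves."""
    w = winner(marks)
    if w is not None:
        return 1 if w == 'X' else -1
    used = used_x if turn == 'X' else used_o
    picks = [n for n in range(1, 10) if not used & (1 << n)]
    if not picks:
        return 0  # the mover has run out of numbers: a draw
    outcomes = []
    for n in picks:
        total = last + n
        new_marks = marks
        if total in GRID and marks[GRID.index(total)] == '.':
            i = GRID.index(total)
            new_marks = marks[:i] + turn + marks[i + 1:]
        if turn == 'X':
            outcomes.append(value(new_marks, used_x | 1 << n, used_o, n, 'O'))
        else:
            outcomes.append(value(new_marks, used_x, used_o | 1 << n, n, 'X'))
    return max(outcomes) if turn == 'X' else min(outcomes)

# X's opening pick marks nothing, so try all nine seeds and keep the best.
best = max(value('.' * 9, 1 << n, 0, n, 'O') for n in range(1, 10))
print({1: 'X wins', 0: 'draw', -1: 'O wins'}[best])
```

The memoization keys on the whole position, so the same search also
answers the forced-win question for this one grid; deciding it for
every initial grid at once is where the interesting complexity lies.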

My daughter was frustrated at how hard it is to win this game, but
she shouldn't be ashamed--I couldn't figure out the best strategy
either. Amazing what complicated things can come out of a
second-grade class.

9:53 AM

### Tuesday, November 12, 2002

**Kolmogorov Complexity Web Site**

Can't get enough Kolmogorov complexity? Check out Marcus Hutter's site on Kolmogorov Complexity and Solomonoff Induction. The site is a bit dated but contains many useful links and information about the Kolmogorov mailing list, which still seems quite active.
3:46 PM

**The Union of Complexity Classes**

We often see the intersection of two classes as an interesting class
in and of itself. For example, factoring is in NP∩co-NP. In some
cases you get interesting equalities, like ZPP = RP∩co-RP. But we
rarely see the union of two classes. Ever wonder why?

In fact, no complexity class can be the nontrivial union of two other
classes. To formalize and prove this statement we need some
definitions.

Let A and B be subsets of {0,1}*. We define the join,
A⊕B, as the union of {0x | x is in A} and {1y | y is in B}. Given
a set C we define the 0-projection of C as {x | 0x is in C} and the
1-projection of C as {y | 1y is in C}. Note that the 0-projection of
A⊕B is just A and the 1-projection is just B.
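
To sanity-check these definitions, here is a toy illustration in
Python on finite sets of strings (finite stand-ins for languages; the
helper names are my own):

```python
def join(A, B):
    """The join A⊕B: prefix members of A with 0 and members of B with 1."""
    return {'0' + x for x in A} | {'1' + y for y in B}

def projection(C, bit):
    """The bit-projection of C: strip the leading bit from matching strings."""
    return {w[1:] for w in C if w.startswith(bit)}

A = {'0', '11'}
B = {'1', '00'}
C = join(A, B)
assert projection(C, '0') == A  # the 0-projection recovers A
assert projection(C, '1') == B  # the 1-projection recovers B
```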

Essentially every complexity class is closed under joins and
projections. For example, if A and B are in NP then A⊕B is also
in NP. The fact that no complexity class is the nontrivial union of
two other classes follows from the lemma below.

**Lemma**: Let **E**, **F** and **G** be classes of
languages that are closed under joins and projections and **G** =
**E**∪**F**. Then either **G** = **E** or **G** =
**F**.

**Proof**: Suppose the lemma is false. Let A be a set in
**G**-**E** and B be a set in **G**-**F**. Let C = A⊕B. We have that
C is in **G** since **G** is closed under joins. Thus C is in either
**E** or **F**. Suppose C is in **E**. Since **E** is closed under
projections, we have that A is in **E**, a contradiction. If C is in
**F**, then B is in **F**, also a contradiction.

10:57 AM

### Monday, November 11, 2002

**Foundations of Complexity**

Lesson 7: The Recursion Theorem

Previous lesson | Next Lesson

Here we are in Lesson 7 and we have not yet talked about complexity
per se. I felt it important to give some background on computability
theory, not only for the importance of the results but also to
introduce the basic concepts of Turing machines, diagonalization and
reducibility. We will start complexity in the next lesson.

Let me end the discussion of computability with one of my favorite
theorems. Suppose you wanted to create the ultimate computer virus,
one that attacks any program and makes it change its behavior. The
*recursion theorem* states that no matter how powerful the virus,
some program will remain unscathed. At first this seems impossible:
just consider the virus that simulates a program and then adds one to
the answer. But that transformation does not affect a machine that
never halts.

**Theorem:** Let f be any computable function. There is some Turing
machine M such that

L(M) = L(N), where N is the machine described by f(<M>).

The recursion theorem, sometimes called the fixed-point theorem, has
one of the most unintuitive proofs I know of; I cannot explain why it
works, only that it does.

**Proof:** Fix a computable function f. For each machine N,
construct a Turing machine R that on input x simulates N(<N>) to
produce the description of a machine and then simulates that machine
on x. Let g be the computable function that maps <N> to the
description <R>. Note that if N(<N>) halts then the machines
described by g(<N>) and N(<N>) accept the same language.

Note that g is computable even if N(<N>) does not halt. Let T be the
machine that on input x computes f(g(x)). We let M be the machine
described by g(<T>). Then we have that

M accepts input x

if and only if the machine described by g(<T>) accepts input x

if and only if the machine described by T(<T>) accepts input x

if and only if the machine described by f(g(<T>)) accepts input x
(since T(x) = f(g(x)))

if and only if the machine described by f(<M>) accepts input x. QED

As an application, consider the computable function f where f(x)
outputs the description of a machine that accepts exactly {x}. By the
recursion theorem there must be some M with L(M) = {<M>}, that is, M
accepts exactly its own description. As an experiment, pick your
favorite programming language and find a program that outputs its own
code. By an argument based on the recursion theorem such a task is
always possible, but it is trickier than it seems.
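
For the curious, here is one classic Python solution (one of many;
the exercise works in any language). The string s is a template for
the entire program, and printing s formatted with its own
representation reproduces the source exactly; a true quine cannot
carry extra comments, since they too would have to print themselves:

```python
s = 's = %r\nprint(s %% s)'
print(s % s)
```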

This ends the section on computability theory, which is an exciting
area of research in and of itself. For further reading, the book by
Homer and Selman goes into these ideas in more detail with examples.
For more advanced concepts I recommend the books of Soare, Odifreddi
or Shoenfield.

1:47 PM