### Friday, September 13, 2002

**Complexity Class of the Week: Factoring**

Factoring is not a class, it is not even a language. Nevertheless it
has some interesting connections to complexity classes that are worth
discussing.

Factoring is the problem of taking a number m and producing the prime
factors of m. There are a number of algorithms for factoring, but all
known algorithms take time super-polynomial in the length of the input
(log m) in the worst case. Here is a good place to start reading about
these algorithms.

Factoring gives a great example of an issue that often arises in
complexity: search versus decision. It has always amazed me that we
can easily determine whether a number has nontrivial factors, yet
finding these factors is apparently much more difficult. This
distinction and some other nice properties of numbers make factoring
very useful in cryptography.

To consider the computational complexity of factoring, we need to make
a language version.

FACTOR = {(m,r) | There is an s, 1<s<r such that s divides m}

FACTOR is computationally equivalent to factoring: (m,r) is in FACTOR
if and only if r is greater than the least prime factor of m. Given a
black box for FACTOR, one can use binary search on r to find the least
prime factor of m; dividing it out and repeating recovers the full
factorization.
If one insists on a complexity class, one could consider all the
languages reducible to FACTOR, but there is not much value beyond
considering the complexity of the language FACTOR itself.
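The binary search can be sketched as follows. Here the FACTOR oracle is simulated by trial division just to make the sketch runnable; with a genuine black box, the search would still make only O(log m) queries. The function names are my own.

```python
def factor_oracle(m, r):
    """Simulated FACTOR oracle: is there an s with 1 < s < r dividing m?"""
    s = 2
    while s < r:
        if m % s == 0:
            return True
        s += 1
    return False

def least_prime_factor(m):
    """Find the least prime factor of m >= 2 by binary search on r.

    (m, r) is in FACTOR iff r exceeds the least prime factor of m,
    so we search for the smallest r giving a 'yes' answer and
    return r - 1.
    """
    lo, hi = 2, m + 1          # the smallest 'yes' value of r is at most m + 1
    while lo < hi:
        mid = (lo + hi) // 2
        if factor_oracle(m, mid):
            hi = mid           # 'yes' answers persist for all larger r
        else:
            lo = mid + 1
    return lo - 1

print(least_prime_factor(91))   # 7, since 91 = 7 * 13
print(least_prime_factor(97))   # 97, since 97 is prime
```

Repeatedly dividing out the factor found and re-running the search yields the complete prime factorization.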

FACTOR is in NP∩co-NP: guess the prime factorization of m, verify
that each claimed factor is prime and that their product is m, and
then check directly whether the least prime factor is less than r.
Since every number has a unique prime factorization, this witness is
unique, which puts FACTOR in UP∩co-UP, where UP is the set of
languages accepted by NP machines with at most one accepting path for
every input. The class UP∩co-UP is arguably the smallest interesting
complexity class not known to have efficient algorithms and, assuming
factoring is hard, it really does not have efficient algorithms. I
should note that FACTOR in UP∩co-UP was known before the recent AKS
primality algorithm, but the proof was more difficult.
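The certificate check can be made explicit. Below is a minimal sketch, with trial division standing in as a placeholder for a real polynomial-time primality test such as AKS; the function names are my own.

```python
def is_prime(p):
    """Trial-division primality check, a stand-in for the AKS test."""
    if p < 2:
        return False
    d = 2
    while d * d <= p:
        if p % d == 0:
            return False
        d += 1
    return True

def check_certificate(m, r, factorization):
    """Verify a claimed prime factorization of m, then decide (m, r).

    The same (unique) certificate serves as both an NP witness and a
    co-NP witness: once the factorization is verified, membership of
    (m, r) in FACTOR is completely determined either way.
    """
    product = 1
    for p in factorization:
        if not is_prime(p):
            return None          # invalid certificate: reject it
        product *= p
    if product != m:
        return None              # invalid certificate: reject it
    # (m, r) is in FACTOR iff the least prime factor of m is below r.
    return min(factorization) < r

print(check_certificate(15, 4, [3, 5]))   # True: 3 divides 15 and 3 < 4
print(check_certificate(15, 3, [3, 5]))   # False: no s with 1 < s < 3 divides 15
```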

Peter Shor almost singlehandedly justified the study of quantum
computing with his efficient quantum algorithm for factoring; in
complexity terms, FACTOR is in BQP. His proof uses the fact that
factoring a number m reduces to finding the order of a randomly
chosen element of the multiplicative group mod m. Shor then showed
that quantum computers can solve this kind of order-finding question
efficiently.
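The classical half of this reduction can be sketched as follows, with brute-force order finding standing in for the quantum step (which is the only part requiring a quantum computer). The function names are my own, and m is assumed odd, composite, and not a prime power.

```python
import math
import random

def multiplicative_order(a, m):
    """Order of a modulo m, by brute force here; this is the step
    Shor's algorithm performs efficiently on a quantum computer."""
    x, r = a % m, 1
    while x != 1:
        x = (x * a) % m
        r += 1
    return r

def shor_classical_part(m, attempts=20):
    """Reduce factoring m to order finding, as in the classical
    wrapper around Shor's quantum subroutine."""
    for _ in range(attempts):
        a = random.randrange(2, m)
        g = math.gcd(a, m)
        if g > 1:
            return g                  # lucky: a already shares a factor with m
        r = multiplicative_order(a, m)
        if r % 2 == 1:
            continue                  # odd order: try another a
        y = pow(a, r // 2, m)
        if y == m - 1:
            continue                  # trivial square root of 1: try again
        return math.gcd(y - 1, m)     # nontrivial factor of m
    return None

f = shor_classical_part(15)
print(f, 15 % f)   # a nontrivial factor of 15, and remainder 0
```

A random a gives a usable even order with constant probability, so a few attempts suffice in expectation.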

Factoring does not appear to be hard for any nontrivial complexity
class. This means that, unlike Satisfiability, we don't have any
reason to believe that factoring is hard other than the fact that many
smart people have failed to solve it. On the other hand, the hardness
of factoring is taken as a given in the study of complexity and
cryptography and used to argue that classes like UP∩co-UP lack
efficient algorithms.

8:33 AM

### Thursday, September 12, 2002

Who says you can't make money with mathematics? Here is an interesting Wired article on the MIT Blackjack team.
12:53 PM

**Science Screenplay Contest**

Have you secretly been writing a movie script about a computer scientist? Now is your chance. Robert De Niro's Tribeca Film Institute, in conjunction with the Sloan Foundation, is looking for movie scripts with a lead character who is a scientist, mathematician or engineer. No science fiction stories will be accepted.
**Update 8/4/05:** See the winners.

5:49 AM

### Wednesday, September 11, 2002

On September 11, 2001 we lost a member of the theoretical computer science community. Danny Lewin, an MIT graduate student who went on to co-found Akamai, was on board the American Airlines flight that crashed into the World Trade Center in New York City. He and the other victims of that day will always be remembered.
6:14 AM

### Sunday, September 08, 2002

**Foundations of Complexity**

Lesson 1: What is a computer?

This is the first of a long series of posts giving an informal
introduction to computational complexity.

Computational complexity theorists try to determine which problems are
efficiently solvable on a computer. This sentence already leads to
many questions: What is a problem? What is efficiently solvable?
Let us first start off with a truly basic question: what is a
computer?

In 1936, Alan Turing invented a theoretical computing device now
called a *Turing Machine*. This was before electronic computers
existed. He tried to give a simple model of the thought processes of
mathematicians. His model has stood the test of time and represents
the official model of computation that we still study today.

Instead of giving the formal definition of a Turing machine, let us
try a more modern approach. Consider some current programming language
like Java. Let us consider the (imaginary) world where a Java program
has access to a potentially infinite amount of storage. A Turing
machine corresponds to a specific Java program. You might find it a
little confusing to think of Turing *machine* = Java
*program*, but that is the best way to think about it.
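To make the machine = program correspondence concrete, here is a minimal sketch of a Turing machine simulator (written in Python rather than Java for brevity). The encoding of a machine as a transition table is one arbitrary choice among many; all names here are my own.

```python
def run_turing_machine(program, tape_input, blank="_", start="q0", accept="halt"):
    """A bare-bones one-tape Turing machine simulator.

    program maps (state, symbol) -> (new_state, new_symbol, move),
    with move in {"L", "R"}. The tape is a dict indexed by integers,
    so storage is unbounded in both directions, as in the ideal model.
    """
    tape = {i: c for i, c in enumerate(tape_input)}
    state, head = start, 0
    while state != accept:
        symbol = tape.get(head, blank)
        state, tape[head], move = program[(state, symbol)]
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape)).strip(blank)

# A small machine that adds one to a binary number: scan to the
# rightmost bit, then flip trailing 1s to 0 until a 0 (or a blank)
# can be flipped to 1.
increment = {
    ("q0", "0"): ("q0", "0", "R"),        # scan right past the input
    ("q0", "1"): ("q0", "1", "R"),
    ("q0", "_"): ("carry", "_", "L"),     # reached the end; start carrying
    ("carry", "1"): ("carry", "0", "L"),  # 1 + carry = 0, carry continues
    ("carry", "0"): ("halt", "1", "L"),   # 0 + carry = 1, done
    ("carry", "_"): ("halt", "1", "L"),   # overflow into a new leading 1
}

print(run_turing_machine(increment, "1011"))  # 1100
```

The transition table plays the role of the program's source code, and the simulator plays the role of the computer running it.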

Does it
matter which programming language we choose? What if we used C++ or
Visual Basic or the original Turing machine model? No, it makes no
difference. Consider a C++ program P. There are, at least
theoretically, Java programs that will interpret C++ programs. If you
consider a Java program that interprets P, it will have the same
computational behavior as P. This idea holds between any two
programming languages including the original Turing machine and leads
to the Church-Turing thesis:

*Everything computable is computable by a Turing machine.*

You can interpret this as saying that everything computable is
computable by a Java program. The Church-Turing thesis cannot be
proven, as it is a thesis, but it has led us to define computable as
computable by a Turing machine. Now, after more than half a century of
having real computers, the Turing machine has really proven itself as
the right model of computation.

7:25 PM