Computational Complexity

 


Friday, November 08, 2002

 
STACS

The STACS Conference has just posted the list of accepted papers for its 20th meeting. STACS alternates between France and Germany (and there is only some truth to the rumor that it alternates between great food and great organization). The upcoming 2003 conference will be held in Berlin, February 27 to March 1.

I have always considered STACS, the Symposium on Theoretical Aspects of Computer Science, the best venue for computational complexity in Europe. I have attended the conference many times and they consistently have several strong papers in the area as well as good attendance by complexity theorists from both Europe and America. You can see the weight complexity carries on the conference web page, where "Computational and structural complexity" is listed as a topic on par with "Algorithms and data structures, including: parallel and distributed algorithms, computational geometry, cryptography, algorithmic learning theory".

The ICALP conference has a longer history, a larger audience, more traditions and does a better job of representing Europe as a whole. But the scope of ICALP is quite broad and computational complexity often gets lost in the shuffle.

6:45 AM

Wednesday, November 06, 2002

 
Complexity Class of the Week: SPP, Part II

Previous CCW

Last week we gave the history of the complexity class SPP and described GapP functions. This week we will give a definition of SPP and many of the class' amazing properties.

A language L is in SPP if there is a GapP function f such that

  1. If x is in L then f(x)=1.
  2. If x is not in L then f(x)=0.
That is, if x is in L there is exactly one more accepting path than rejecting paths; if x is not in L there are the same number of each.

If we used #P functions instead of GapP functions we would get the definition of UP. SPP contains UP since every #P function is a GapP function. In fact SPP contains FewP and even Few, classes that we do not believe are contained in UP.
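
To make this concrete, here is a toy brute-force sketch in Python. It models a nondeterministic machine as a predicate over all path strings of a fixed length, applies the standard padding trick that turns a #P function into a GapP function, and checks the SPP condition on a unique-witness language; the perfect-square language and the path length are just illustrative choices, nothing canonical.

  from itertools import product

  PATH_BITS = 8  # number of nondeterministic bits; enough for the toy inputs below

  def accepts_square_root(n, path):
      """Machine M: interpret the path bits as a guess w and accept iff w*w == n."""
      w = int("".join(map(str, path)), 2)
      return w * w == n

  def gap_of_padded_machine(n):
      """Gap (accepting minus rejecting paths) of the padded machine M'.

      M' runs M on each path; an accepting path of M stays a single accepting
      path, while a rejecting path of M is split into one accepting and one
      rejecting branch, contributing 0 to the gap.  So the gap equals the number
      of accepting paths of M: 1 if n is a nonzero perfect square (unique
      witness), 0 otherwise, which is exactly the SPP condition."""
      gap = 0
      for path in product([0, 1], repeat=PATH_BITS):
          if accepts_square_root(n, path):
              gap += 1        # single accepting path
          else:
              gap += 1 - 1    # one accepting plus one rejecting branch, net 0
      return gap

  for n in [9, 10, 49, 50]:
      print(n, gap_of_padded_machine(n))   # gap 1 for the squares 9 and 49, gap 0 for 10 and 50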

SPP is the smallest Gap-definable class, i.e., the smallest class that can be defined by GapP functions as above. There are a number of common Gap-definable classes, for example from the Zoo: ⊕P, AWPP, C=P, ModP, Mod_kP, MP, AmpMP, PP, WPP and of course SPP. SPP is contained in all of these classes. AWPP is the smallest classical class known to contain BQP, the class of problems with efficient quantum algorithms, though it is not known whether BQP is itself Gap-definable.

SPP consists exactly of the sets that are low for GapP, i.e., SPP is exactly the set of oracles A such that for any NP machine M, the number of accepting minus the number of rejecting paths of M^A(x) remains an (unrelativized) GapP function. This means that SPP is low for all of the Gap-definable classes, for example ⊕P^SPP = ⊕P. It also means that SPP is self-low, SPP^SPP = SPP, which implies that SPP is closed under union, complementation and in fact under any Turing reduction.

Köbler, Schöning and Torán showed that graph automorphism is in SPP, and very recently Arvind and Kurur have shown that graph isomorphism is in SPP. This means that graph isomorphism is contained in, and in fact low for, every Gap-definable class.

The decision tree version of SPP is interesting. A function f on n bits is in this class if there is a polynomial g of polylogarithmic degree such that f(x)=g(x) for all x in {0,1}^n. All such functions have low deterministic decision tree complexity, the first complexity application of a combinatorial lemma of Nisan and Szegedy. Applications of this result include relativized worlds where SPP does not have complete sets or where P = SPP and the polynomial-time hierarchy is infinite.
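
For a feel of what that polynomial is, here is a small Python sketch that computes the degree of the unique multilinear real polynomial agreeing with a given Boolean function on {0,1}^n, by Möbius inversion over subsets; the example functions are arbitrary illustrations.

  from itertools import combinations

  def exact_degree(f, n):
      """Degree of the unique multilinear polynomial g with g(x) = f(x) on {0,1}^n.

      The coefficient of the monomial prod_{i in S} x_i is
          c_S = sum over subsets T of S of (-1)^(|S|-|T|) * f(indicator of T)."""
      degree = 0
      for size in range(n + 1):
          for S in combinations(range(n), size):
              c = 0
              for r in range(size + 1):
                  for T in combinations(S, r):
                      x = tuple(1 if i in T else 0 for i in range(n))
                      c += (-1) ** (size - r) * f(x)
              if c != 0:
                  degree = max(degree, size)
      return degree

  OR = lambda x: int(any(x))
  PARITY = lambda x: sum(x) % 2
  print(exact_degree(OR, 3), exact_degree(PARITY, 3))   # both 3: OR and parity have full degree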

2:47 PM

Monday, November 04, 2002

 
Foundations of Complexity
Lesson 6: The Halting Problem

Previous Lesson | Next Lesson

Last lesson we learned about using reductions to show problems are hard. Now consider the most famous of undecidable problems, the halting problem:

LH = {<M> | M eventually halts when run with blank tape as input}
We will now show that LH is not computable. We do this by reducing the universal language LU to LH where LU is the set of pairs (<M>,x) such that M(x) accepts.

Given <M> and x, consider the following program:
  1. Replace the input with x.
  2. Simulate M on x.
  3. If M(x) accepts then halt.
  4. If M(x) does not accept then go into an infinite loop.

Let us call this program N. Note that M(x) accepts if and only if N halts on blank tape.

Now here is the important point. Consider the function f that, given <M> and x, produces the program N. Even though M(x) and N may not halt, the actual procedure that converts <M> and x into N is computable: it is just converting one program to another.
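
As a concrete (if simplified) sketch, here is f written in Python, under the assumption that machines are given as Python source strings defining a function M(y) that returns True exactly when M accepts y, rather than as actual Turing machine encodings:

  def f(M_source, x):
      """Given <M> (as Python source) and an input x, produce the source of N.

      N ignores its own input, simulates M on x, halts if M(x) accepts, and
      goes into an infinite loop otherwise."""
      return (
          M_source + "\n"
          + "def N():\n"
          + "    if M(" + repr(x) + "):\n"
          + "        return          # M(x) accepts: halt\n"
          + "    while True:         # M(x) does not accept: loop forever\n"
          + "        pass\n"
      )

  example_M = "def M(y):\n    return y == 'hello'\n"
  print(f(example_M, "hello"))   # prints the source of N; f never runs M or N

Building N's source is pure string manipulation, so f is computable even when running M(x) or N would never halt.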

So (<M>,x) is in LU if and only if M(x) accepts, if and only if N=f(<M>,x) halts on blank tape, if and only if N is in LH. Thus f reduces LU to LH, and by the Lemma of Lesson 5, LH is not computable.

I consider the noncomputability of the halting problem to be the single most important result in theoretical computer science. For some programs, of course, it is easy to determine whether or not they will halt. But in general, no matter how smart you are or how fast your computers are, it is simply impossible to analyze a piece of code and decide whether it will terminate.

Using similar techniques one can prove a general result known as Rice's Theorem: every nontrivial property of the computably enumerable languages is undecidable. More formally:
Rice's Theorem: Let P be any non-empty proper subset of the computably enumerable languages. Then the language

LP = {<M> | L(M) is in P}
is not computable.

For example, the following languages are not computable (a sketch of the reduction for the first one follows the list):

  • {<M> | L(M) is empty}
  • {<M> | L(M) is computable}
  • {<M> | L(M) is finite}
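
For the first example the reduction idea is much like the sketch above: given <M> and x (machines again given as Python source defining M(y)), produce a machine N whose language is nonempty exactly when M(x) accepts.

  def g(M_source, x):
      """Produce the source of N with L(N) nonempty iff M(x) accepts.

      N ignores its input y and simply simulates M on the fixed input x, so N
      accepts every string if M(x) accepts and accepts nothing otherwise."""
      return (
          M_source + "\n"
          + "def N(y):\n"
          + "    return M(" + repr(x) + ")\n"
      )

A decider for {<M> | L(M) is empty} applied to g(<M>,x) would decide LU, which we already know is not computable; Rice's Theorem generalizes this argument to any nontrivial property.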

3:59 PM
