### Friday, November 01, 2002

**Conferences**

November is a month for conference deadlines. STOC and FOCS are the two
major theoretical computer science conferences; STOC has a submission
deadline of November 6, while FOCS is being held November 16-19. STOC
this year is part of the 2003 Federated Computing Research Conference in
San Diego in June. Several other theory conferences are also part of
FCRC, and many of them have deadlines in November or soon thereafter.

My favorite conference, the IEEE Conference on Computational Complexity,
will be held in Denmark in July. Its submission deadline is November 27.

In computer science in general, and theoretical computer science in
particular, conferences are the primary outlet for announcing and
publishing results. Since computer science is a relatively young
discipline, the field changes dramatically from year to year, and the
long process of journal publication often means work is outdated by the
time it appears. More mature fields like mathematics and physics use
journals as the primary venue for publication.

The main disadvantage of the computer science system is that while
computer scientists are encouraged to submit their work to refereed
journals, many of the important papers in the area never make it that
far.

There have been at least two recent major exceptions to this
process. Alexander Razborov wrote a paper last spring on lower bounds
for quantum communication complexity that would have been the best
quantum paper at FOCS, if not the best paper outright. Instead he chose
to submit it directly to a journal, Izvestiya of the Russian Academy of
Sciences: Mathematics. The Agrawal-Kayal-Saxena Primality Paper, which
would easily be the best paper at the upcoming STOC, is not being
submitted to a conference either but directly to the Annals of
Mathematics. "Why should I send it to a conference," Manindra Agrawal
asks, "when everyone already knows the result?"

Are these two papers a trend? Are conferences less important as papers
are easily available online? Or is computer science finally becoming a
mature field?

8:13 AM

### Wednesday, October 30, 2002

**Complexity Class of the Week: SPP, Part I**

Previous CCW

With the new FOCS paper by Arvind and Kurur, "Graph Isomorphism in
SPP", people have asked me why they should be interested in SPP, a
class first defined in a paper by Steve Fenner, Stuart Kurtz and
myself. I thought I would discuss how this class was developed and why
we feel it is important.

Gill, in his seminal paper on probabilistic complexity classes,
defined the class PP and asked whether the class was closed under
intersection. In 1990, Fenner and Kurtz, and later myself, decided
to try a new approach to the question: consider a class defined like
PP but with additional restrictions, show that this class is closed
under intersection, and then show that the class is really the same as
PP. Kurtz had a philosophical approach to the problem and defined three
variants of PP: Epicurean-PP, Cynical-PP and Stoic-PP.

Recall that PP is the set of languages L accepted by probabilistic
machines such that x is in L exactly when the probability of accepting
is greater than the probability of rejecting. Epicurean-PP machines
were happy to accept but would only barely reject, with just one more
rejecting path than accepting paths. Cynical-PP machines were
the opposite, willing to reject in any way but would only barely
accept. Stoic-PP machines stood their ground and would just barely
accept or barely reject. Cynical-PP turned out to be the same as the
well-studied class C_{=}P and Epicurean-PP was
co-C_{=}P. Stoic-PP, or SPP, was new and thus a complexity class
was born.
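
For concreteness, here is one way to write the PP and stoic conditions,
using acc_M(x) and rej_M(x) for the number of accepting and rejecting
computation paths of a nondeterministic polynomial-time machine M on
input x. This is my shorthand for this post, not necessarily the exact
formulation in the original paper.

```latex
% gap_M(x) denotes acc_M(x) - rej_M(x), for M a nondeterministic
% polynomial-time machine.
L \in \mathrm{PP}  \iff \exists M\ \forall x:\ x \in L \Leftrightarrow \mathrm{gap}_M(x) > 0

L \in \mathrm{SPP} \iff \exists M\ \forall x:\
  (x \in L \Rightarrow \mathrm{gap}_M(x) = 1)\ \text{ and }\ (x \notin L \Rightarrow \mathrm{gap}_M(x) = 0)
```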

While it was easy to show SPP is closed under intersection, it is
unlikely to be the same as PP, and thus we failed in this attempt to
show PP is closed under intersection. While we were having this
discussion, sitting on the printer was a paper Richard Beigel had
emailed me earlier, his paper with Nick Reingold and Daniel Spielman
entitled "PP is Closed Under Intersection". Their successful approach
was completely different from ours: they used rational functions to
approximate the sign function.

Not to be deterred, we started studying SPP and related classes, which
also led to GapP functions. Valiant had defined the class #P, the
functions f such that there is some nondeterministic polynomial-time
Turing machine M with f(x) equal to the number of accepting paths of
M(x). GapP functions are the closure of #P functions under
subtraction, or equivalently the difference, or gap, between the number
of accepting and rejecting computation paths of an NP machine.
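
In the same path-counting shorthand as above, the two function classes
can be summarized as follows (again my notation rather than a quote
from the papers):

```latex
% M ranges over nondeterministic polynomial-time Turing machines.
f \in \#\mathrm{P}   \iff \exists M\ \forall x:\ f(x) = \mathrm{acc}_M(x)

f \in \mathrm{GapP}  \iff \exists M\ \forall x:\ f(x) = \mathrm{acc}_M(x) - \mathrm{rej}_M(x)
```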

GapP functions are closed under many of the same operations as #P
functions, such as polynomial products and exponential sums, as well as
subtraction of course. The power of subtraction made GapP a much
cleaner approach to studying counting classes, and the study of GapP
showed the great importance of the class SPP.
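
Stated loosely in the notation above (see the paper with Fenner and
Kurtz for the precise statements), the closure properties and the
resulting one-line characterization of SPP look like this:

```latex
% If f and g are GapP functions and q is a polynomial, then so are
f + g, \qquad f - g, \qquad f \cdot g, \qquad
x \mapsto \sum_{|y| \le q(|x|)} f(x,y), \qquad
x \mapsto \prod_{i=1}^{q(|x|)} f(x,i).

% SPP then has a one-line characterization:
L \in \mathrm{SPP} \iff \exists f \in \mathrm{GapP}\ \forall x:\
  f(x) = 1 \text{ if } x \in L \text{ and } f(x) = 0 \text{ otherwise}.
```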

Independently of us, Ogihara and Hemachandra defined a class XP and
Gupta defined a class ZUP, both of which were equivalent to SPP.

I will stop here and in a later post describe the actual properties of SPP that
make it such an interesting class.

4:22 PM

### Tuesday, October 29, 2002

**Talks by Manindra Agrawal**

I don't usually give talk announcements in this web log, but if you are in the Boston area this week you can see Manindra Agrawal give talks about prime numbers and his new algorithm with Kayal and Saxena. This new algorithm, the first deterministic algorithm proven to test primality in polynomial time, will go down as one of the classic results in theoretical computer science.
Manindra is giving a non-technical talk on the history of primes at the Clay Mathematics Institute on Wednesday and technical talks on the primality algorithm at MIT on Thursday and Harvard on Friday.

5:58 AM

### Monday, October 28, 2002

**Foundations of Complexity**

Lesson 5: Reductions

Previous Lesson | Next Lesson

In the previous lesson we gave examples of two noncomputable sets. The set

L_{A} = { <M> | Machine M accepts input <M> }

is computably enumerable but not computable, while the set

L_{D} = { <M> | Machine M does not accept input <M> }

is not even computably enumerable.

We also defined computable functions. In this lesson we will use
computable functions to create other languages that are not
computable. To do so we use the notion of reduction. Informally, a
reduction takes one decision problem and *reduces* it to another
problem. For example, to know your current longitude you only need to
compare the local time, read off the sun, with the time of day at a
fixed reference location, so the problem of computing longitude reduces
to the problem of proper time-keeping. This is one way the longitude
problem was dealt with before the days of GPS.

Formally we say a language A reduces to language B if there is a
computable function f such that for all x in Σ^{*}, x
is in A if and only if f(x) is in B.
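
For readers who have seen it elsewhere, this is exactly the textbook
many-one reduction; in symbols (the \le_m notation is standard but has
not appeared in these lessons):

```latex
A \le_m B \iff \exists\ \text{computable } f\ \ \forall x \in \Sigma^{*}:\ x \in A \Leftrightarrow f(x) \in B
```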

The power of reductions comes from the following lemma.

**Lemma:** Let A and B be sets such that A is reducible to B.

1. If B is computable then A is computable.
2. If B is computably enumerable then A is computably enumerable.
3. If A is not computable then B is not computable.
4. If A is not computably enumerable then B is not computably enumerable.

Statements 1 and 2 are easy to see: just compute f(x) and simulate the
program for B on f(x). Statements 3 and 4 are just the contrapositives
of 1 and 2 and turn out to be especially useful.
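
As a minimal sketch of statement 1 in code, assume we had a decision
procedure decide_B for B and a program computing the reduction f; both
names are hypothetical placeholders, not anything defined in the
lessons:

```python
def decide_B(y: str) -> bool:
    """Assumed total decision procedure for B (hypothetical placeholder)."""
    raise NotImplementedError

def f(x: str) -> str:
    """Assumed computable reduction from A to B (hypothetical placeholder)."""
    raise NotImplementedError

def decide_A(x: str) -> bool:
    # x is in A if and only if f(x) is in B, so deciding A takes one
    # computation of f followed by one call to the decider for B.
    return decide_B(f(x))
```

Statement 2 works the same way with a recognizer for B, a program that
halts and accepts exactly the members of B, in place of the decider.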

For example, consider the universal Turing machine language

L_{U} = { (<M>,x) | Machine M accepts input x }

We have seen that L_{U} is computably enumerable. Let
f(<M>) = (<M>,<M>). The function f is easily seen to be computable and
reduces L_{A} to L_{U}. Thus by statement 3 of the Lemma, L_{U} is not
computable.
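
As a sketch (again with hypothetical names), the reduction itself is
just a pairing function; if any program could decide L_{U}, composing
it with this reduction would decide L_{A}, which we know is impossible:

```python
def reduce_LA_to_LU(machine_description: str) -> tuple[str, str]:
    # f(<M>) = (<M>, <M>): the question "does M accept its own description?"
    # becomes a membership question for the universal language L_U.
    return (machine_description, machine_description)
```
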
Reductions play a major role in computational complexity, so it is very
instructive to see them in this context. Next lesson we will use more
complicated reductions to show that other simple languages are not
computable.

8:47 AM