Computational Complexity

 


Thursday, May 08, 2003

 
Foundations of Complexity
Lesson 17: Space Complexity

Previous Lesson | Next Lesson

In addition to time, computer scientists also worry about the memory, or space, that a Turing machine uses. Roughly, one can measure space as the number of tape squares used by a Turing machine. We would like to talk about space bounds like log n and still be able to read the whole input, so we need a slightly different model.

We now allow our Turing machine to have a read-only input tape, one or more work tapes and, for a Turing machine computing a function, a write-only output tape. On the input tape the head can move back and forth but it cannot change the values in any cell. On the output tape the head can only move right, writing as it moves. The amount of space a Turing machine uses on input x is the number of cells of the work tapes that it uses. We will assume a space bound of at least log n since we need log n bits to describe the location of the input head.

In real-world terms, think of your computer accessing the internet: you can still reach many pages even though you cannot store the entire internet on your computer.
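
As a toy illustration (my own sketch, in Python rather than on a Turing machine), here is the classic log-space example: deciding whether a binary string has equally many 0s and 1s. The input is scanned read-only; the only work space is a counter, which needs O(log n) bits.

    # A log-space style algorithm: the input x is read-only and the only
    # "work tape" is the integer counter, which takes O(log n) bits.
    def balanced(x):
        counter = 0
        for bit in x:          # the input head sweeps left to right
            counter += 1 if bit == '1' else -1
        return counter == 0

    print(balanced("0101"))    # True: two 0s and two 1s
    print(balanced("0111"))    # False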

Any machine that runs in time t(n) clearly also runs in space t(n). If a machine uses space s(n) and it halts then it will halt in time c^{s(n)} for some constant c. Otherwise the machine will repeat a configuration and run forever.
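
To see where that bound comes from (a back-of-the-envelope count, assuming a single work tape with state set Q and work alphabet Γ): a configuration is determined by the state, the input head position, the work head position and the contents of the work tape, so a machine using space s(n) has at most

    |Q| · n · s(n) · |Γ|^{s(n)} ≤ c^{s(n)}

distinct configurations for a suitable constant c, and a halting computation can never visit the same configuration twice.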

We define the class DSPACE(s(n)) as the set of languages accepted by Turing machines using at most O(s(n)) space. NSPACE(s(n)) is the analogous class for nondeterministic machines. We will always assume s(n)≥log n and that s(n) is an easily computable function.

Common space classes include L=DSPACE(log n), NL=NSPACE(log n), PSPACE=∪_k DSPACE(n^k) and NPSPACE=∪_k NSPACE(n^k).

Unlike the P versus NP question, we know quite a bit about the relationship of deterministic versus nondeterministic space through the following two results.

  • Savitch's Theorem: NSPACE(s(n))⊆DSPACE(s(n)^2). In particular this means NPSPACE = PSPACE. (A sketch of the recursion behind the proof appears below.)
  • Immerman-Szelepcsényi Theorem: If L is in NSPACE(s(n)) then the complement of L is also in NSPACE(s(n)).
We will discuss these theorems in more detail in upcoming lessons.
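
As a preview of Savitch's Theorem, here is a sketch of the midpoint recursion at its heart (my own Python rendering; the node list and edge predicate stand in for the machine's configuration graph). reach(u, v, k) asks whether v is reachable from u in at most 2^k steps by trying every possible midpoint and reusing the same space for the two half-length questions.

    # A sketch of the recursion behind Savitch's Theorem. The configuration
    # graph is assumed to be given as a list of nodes and an edge predicate.
    def reach(u, v, k, nodes, edge):
        """Is v reachable from u in at most 2**k steps?"""
        if k == 0:
            return u == v or edge(u, v)
        # The two recursive calls run one after the other and reuse the
        # same space, so the total space is the recursion depth k times
        # the bits needed to store one midpoint w.
        return any(reach(u, w, k - 1, nodes, edge) and
                   reach(w, v, k - 1, nodes, edge)
                   for w in nodes)

On the configuration graph of a nondeterministic s(n)-space machine, which has at most c^{s(n)} nodes, the recursion depth is O(s(n)) and each level stores one configuration of O(s(n)) bits, giving the O(s(n)^2) deterministic space bound.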

3:38 PM

Tuesday, May 06, 2003

 
Universal Search

Psst. Want to know the fastest algorithm for factoring? I can give you an algorithm that is within a constant multiplicative factor of the best possible factoring algorithm.

Actually this is an idea due to Levin. Let p_1, p_2, ... be a list of all programs. On input m, simulate program p_1 for half of your computation time, p_2 for a quarter of the time, p_3 for an eighth of the time, etc., until one of these programs outputs a factor of m. If p_i is the fastest algorithm for factoring then our algorithm will run in time at most 2^i times the running time of p_i. The multiplicative factor 2^i is independent of m but unfortunately could be quite large.
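
Here is a toy rendering of the idea (my own sketch, in Python; modeling each program as a generator that yields None per step of work and finally yields its answer is an assumption standing in for step-by-step simulation of arbitrary programs). In each doubling phase, program i receives a 1/2^(i+1) share of the budget, matching the half, quarter, eighth schedule above.

    # A toy version of Levin's universal search. check() verifies a
    # candidate answer, e.g. that a claimed factor really divides m.
    def universal_search(programs, check, max_phase=30):
        gens = [p() for p in programs]
        dead = [False] * len(gens)
        for phase in range(1, max_phase + 1):
            budget = 2 ** phase                     # total steps this phase
            for i, g in enumerate(gens):
                if dead[i]:
                    continue
                for _ in range(budget >> (i + 1)):  # program i's share
                    try:
                        result = next(g)
                    except StopIteration:
                        dead[i] = True
                        break
                    if result is not None and check(result):
                        return result
        return None

    # Example candidate "program": trial division for factoring m.
    def trial_division(m):
        def program():
            d = 2
            while d * d <= m:
                if m % d == 0:
                    yield d    # found a factor
                    return
                yield None     # one unit of work
                d += 1
            yield m            # no divisor found: m is prime
        return program

    print(universal_search([trial_division(91)],
                           lambda f: 1 < f < 91 and 91 % f == 0))  # prints 7

Because the generators keep their state between phases, no work is repeated: each program simply continues where it left off whenever its share of the time comes around.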

Marcus Hutter gives another algorithm that has a multiplicative factor of 5 but a large additive constant. The trick is to spend some of your time searching for a proof that an algorithm is correct and runs in a certain amount of time. You then only need to simulate the provably fastest algorithm found so far.

Hutter's algorithm runs only as fast as the best algorithm that has both a proof of correctness and a provable running time. It could very well be that some good heuristic for factoring lacks such proofs; Levin's technique will still capture this case.

Of course, neither of these papers gives a practical algorithm as the constants involved go beyond huge. Nevertheless it is still interesting to see the theoretical possibilities of universal search.

5:58 AM

Sunday, May 04, 2003

 
FCRC

Even the largest theoretical computer science conferences draw at most a couple of hundred people. Many (but not all) other areas of computer science also do not draw large numbers of participants. To have a larger and more noticeable conference and possibly to get press attention, the CS community decided to hold a joint conference covering many different areas in computer science. In 1993, the first Federated Computing Research Conference was held in San Diego.

The press didn't show, but with some cross-collaboration the conference was considered a mild success. After stops in Philadelphia and Atlanta, FCRC returns to San Diego June 7-14. The 2003 FCRC includes the main spring theory conference (STOC), Electronic Commerce (EC), Computational Geometry and Principles and Practice of Parallel Programming, among many others. The registration deadline is May 7.

For the first time the Complexity Conference will not be part of FCRC, as 2003 is a Europe year for us. I am planning to attend FCRC for STOC and the EC meeting. If you attend FCRC, stop by and say hi.

5:45 AM
