Computational Complexity

 


Friday, February 14, 2003

 
Traveling Salesman on the Plane

Yesterday's post on NP-complete problems reminds me of one of the more interesting open questions, the complexity of the traveling salesman problem on the plane.

The traveling salesman problem consists of a collection of n cities, a symmetric distance function d(i,j) and a number k. The question is whether there is an ordering of the cities so that a tour through the cities in that order and back to the start has total distance at most k. If d takes on rational values, the problem is NP-complete in general and for many restrictions on the possible functions d, such as requiring d to satisfy the triangle inequality.
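To make the decision problem concrete, here is a brute-force sketch in Python (my own illustration, not from the post): it simply tries every ordering of the cities, so it runs in exponential time.

```python
# Naive decision procedure for TSP: given a symmetric distance matrix d and a
# bound k, is there a closed tour of total length at most k?
from itertools import permutations

def tsp_at_most_k(d, k):
    """d is an n x n symmetric distance matrix; True iff some tour has length <= k."""
    n = len(d)
    for order in permutations(range(1, n)):   # fix city 0 as the start
        tour = (0,) + order + (0,)            # return to the start
        length = sum(d[tour[i]][tour[i + 1]] for i in range(n))
        if length <= k:
            return True
    return False

# Example: 4 cities on a line at positions 0, 1, 2, 3; the best tour has length 6.
d = [[abs(i - j) for j in range(4)] for i in range(4)]
print(tsp_at_most_k(d, 6), tsp_at_most_k(d, 5))  # True False
```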

Consider the case where the cities are given as points (x,y) in the plane and the distance function is the regular Euclidean distance, i.e.,

d((r,s),(u,v)) = ((r-u)^2 + (s-v)^2)^{1/2}.
It is open whether this traveling salesman problem on the plane is NP-complete. In most cases, it is easy to show the problem is in NP and more difficult to show it NP-hard. For TSP on the plane, Garey, Graham and Johnson showed it NP-hard way back in 1976; it remains open whether the problem is in NP.

To put the problem in NP one can guess the ordering of the cities; the issue is checking whether the sum of the distances is at most k. Since we can approximate each square root to a polynomial number of digits, the question is how many digits of accuracy we need. So it boils down to the following purely mathematical question: given an expression of length m over the integers using addition, subtraction, squares and square roots (but no nested squares or square roots), what is the smallest positive number that can be expressed? If the answer is at least 2^{-m^c} for some constant c then TSP on the plane is in NP and hence NP-complete.
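Here is a sketch of the verification issue in Python (my own illustration, not from the post; the precision-doubling loop and the crude error bound are choices of this sketch). Given a tour through integer points and an integer bound k, we want to compare a sum of square roots against k; the open question is how much precision is needed in the worst case.

```python
# Verify a tour's Euclidean length against k by computing square roots to more
# and more digits until the comparison becomes unambiguous.
from decimal import Decimal, getcontext

def tour_length(points, prec):
    """Length of the closed tour through `points`, computed to `prec` digits."""
    getcontext().prec = prec
    total = Decimal(0)
    n = len(points)
    for i in range(n):
        (r, s), (u, v) = points[i], points[(i + 1) % n]
        total += Decimal((r - u) ** 2 + (s - v) ** 2).sqrt()
    return total

def tour_at_most_k(points, k, max_prec=1 << 14):
    """Return True/False once the comparison can be certified, None if we give up."""
    prec = 32
    while prec <= max_prec:
        length = tour_length(points, prec)
        # Crude, generous bound on the accumulated rounding error.
        error = (length + 1) * len(points) * Decimal(10) ** (5 - prec)
        if length + error <= k:
            return True
        if length - error > k:
            return False
        prec *= 2  # still ambiguous: ask for more digits
    return None

# The unit-square tour has length exactly 4.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(tour_at_most_k(square, 5), tour_at_most_k(square, 3))  # True False
```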

Let me also mention that one can approximate traveling salesman on the plane quite well; in fact it admits a polynomial-time approximation scheme.

10:01 AM

Thursday, February 13, 2003

 
Foundations of Complexity
Lesson 15: More NP-complete Problems

Previous Lesson | Next Lesson

Now that we have shown that CNF-SAT is NP-complete we can use it to show many other problems are NP-complete via the following lemma.

Lemma: If a set A is NP-complete and B is in NP and A polynomial-time reduces to B then B is also NP-complete.

Proof: Let C be an arbitrary set in NP. We need to show C reduces to B. We know C reduces to A (since A is NP-complete) and A reduces to B, and it is a simple exercise to see that reductions compose: applying the second reduction to the output of the first gives a reduction from C to B that still runs in polynomial time. ◊
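As a tiny sketch (my own illustration, not part of the lesson; the toy "languages" over integers are made up), the composition step is just function composition:

```python
# If f reduces C to A in polynomial time and g reduces A to B in polynomial
# time, then g(f(x)) reduces C to B, and composing two polynomial bounds still
# gives a polynomial bound.
def compose_reductions(f, g):
    """Given reductions f: C -> A and g: A -> B, return a reduction C -> B."""
    return lambda x: g(f(x))

# Toy example: C = even numbers, A = multiples of 4, B = multiples of 8.
f = lambda x: 2 * x           # x in C  iff  f(x) in A
g = lambda y: 2 * y           # y in A  iff  g(y) in B
h = compose_reductions(f, g)  # x in C  iff  h(x) in B
print(h(6) % 8 == 0, h(7) % 8 == 0)  # True False
```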

For example consider 3-SAT, the set of satisfiable CNF formulas with exactly 3 literals in each clause. 3-SAT is in NP by just guessing an assignment and verifying that it satisfies the formula. To show 3-SAT is NP-complete we will reduce CNF-SAT to 3-SAT. We take each clause and convert it to a set of clauses, each with exactly three literals, adding additional variables for this reduction.

If the clause C has three literals then no conversion is necessary.

If the clause C has two literals, say C = x OR y, we use a new variable z and replace C with two clauses, x OR y OR z and x OR y OR ¬z. Notice that C is satisfied if and only if both new clauses can be satisfied.

If the clause C has one literal, say C = x, we add two new variables, u and v, and replace C with four new clauses, x OR u OR v, x OR u OR ¬v, x OR ¬u OR v and x OR ¬u OR ¬v.

If the clause C has k literals for k > 3, we replace C by two clauses, one with k-1 literals and one with 3 literals, and then recurse. Suppose C = x1 OR ... OR xk. We add a new variable w and replace C with x1 OR ... OR xk-2 OR w and ¬w OR xk-1 OR xk. Again note that C is satisfied if and only if both of the new clauses are satisfied for some value of w.

That is the reduction and the proof that it works is straightforward.
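As an illustrative sketch of the conversion (the integer encoding of literals, where -x denotes NOT x, and the supply of fresh variable indices are assumptions of this sketch, not part of the lesson), the clause-by-clause translation might look like this:

```python
# Convert one CNF clause into an equivalent set of clauses with exactly 3
# literals each, introducing new variables as described above.
import itertools

fresh = itertools.count(1000)  # supply of new variable indices, assumed unused

def to_three_literals(clause):
    """clause is a list of nonzero integers (literals); returns 3-literal clauses."""
    k = len(clause)
    if k == 3:
        return [list(clause)]
    if k == 2:                       # x OR y  ->  (x OR y OR z), (x OR y OR ¬z)
        x, y = clause
        z = next(fresh)
        return [[x, y, z], [x, y, -z]]
    if k == 1:                       # x  ->  four clauses over new u, v
        x = clause[0]
        u, v = next(fresh), next(fresh)
        return [[x, u, v], [x, u, -v], [x, -u, v], [x, -u, -v]]
    # k > 3: one clause with k-1 literals plus one 3-literal clause, then recurse.
    w = next(fresh)
    return to_three_literals(list(clause[:-2]) + [w]) + [[-w, clause[-2], clause[-1]]]

# Example: a 5-literal clause becomes three 3-literal clauses.
print(to_three_literals([1, 2, 3, 4, 5]))
```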

There are many, many natural NP-complete problems and we could spend many lessons showing that they are NP-complete. Personally, I find NP-completeness proofs more algorithmic than complexity-theoretic, and I will instead focus on relationships between complexity classes in future lessons.

This site is a collection of optimization problems but it also gives you a good idea of what NP-complete problems are out there. The best source for all things NP-complete still remains the excellent book of Garey and Johnson.

9:53 AM

Tuesday, February 11, 2003

 
NEXP not in P/poly?

Daniel Varga asks about the question of NEXP not in P/poly and whether it is "fundamentally" easier than NP not in P/poly. As a reminder: NEXP is the class of languages accepted in nondeterministic exponential (2^{n^{O(1)}}) time. P/poly is the class of languages accepted by nonuniform polynomial-size circuits, or equivalently by a polynomial-time machine given a polynomial amount of "advice" that depends only on the input size.

First one must mention the beautiful paper of Impagliazzo, Kabanets and Wigderson showing that NEXP is in P/poly if and only if NEXP equals MA.

NEXP not in P/poly should be much easier to prove than NP not in P/poly, as NEXP is a much larger class than NP. Also NEXP not in P/poly is just below the limit of what we can prove. We know that MAEXP, the exponential time version of MA, is not contained in P/poly. MAEXP sits just above NEXP and under some reasonable derandomization assumptions, MAEXP = NEXP.

There is also the issue of uniformity. If one could use the nondeterminism to reduce the advice just a little bit, then one could diagonalize against the P/poly machine. Also, if one could slightly derandomize MA machines, then one could diagonalize NEXP away from MA and thus from P/poly.

Still the problem remains difficult. BPP is contained in P/poly and we don't even know whether BPP is different from NEXP. Virtually any weak unconditional derandomization of BPP would separate it from NEXP, but so far we seem stuck.

4:05 PM
