This work is licensed under a
Creative Commons License.

Friday, January 16, 2004
Live From QIP (by guest blogger Scott Aaronson)
As my plane descended toward Toronto on Monday, it felt as though I was landing on the surface of another planet (though maybe the Mars rover was too fresh in my mind). All I could see out the window was white snow crisscrossed by black highways. On the ground, the weather was probably the coldest I've ever experienced. Call me a wuss if you're from Alaska, northern Canada, Siberia, or Antarctica, but I did go to Cornell.
Before QIP started I visited the University of Western Ontario for a day, to work with Dan Christensen on the complexity of simulating spin-foam models of quantum gravity. We didn't get far. The trouble is that no one knows how to define measurement in these models, and the answer could strongly affect their computational complexity. Maybe spin-foam models can solve graph isomorphism in polynomial time; then again, maybe they can't even simulate garden-variety quantum computers.
I took the train to Waterloo on Tuesday night, then on Wednesday hung around the Perimeter Institute, which is a converted tavern full of theoretical physicists and quantum computing people. The conference talks started on Thursday; here are summaries of a few.
 Richard Cleve spoke about some extremely cool joint work with Peter Høyer, Ben Toner, and John Watrous. They point out that the classical proof of MIP = NEXP breaks down if the two provers share entanglement, regardless of whether the verifier is able to manipulate, or even knows anything about, quantum information. (It might still be true that multiple provers who share entanglement can convince us of any language in NEXP, but if so it will need a new proof.) Cleve et al. give explicit examples of 2-prover interactive proof systems that are classically sound but become unsound if the provers share entanglement. To me, the most exciting aspect of this work is that it offers a new, complexity-theoretic way to understand the famous Bell inequalities. It turns out that Bell inequality violation is "really" about two provers convincing a verifier that (say) a certain graph has a 3-coloring when in fact it doesn't, by using entanglement to correlate their answers to the verifier's queries.
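To see the "Bell inequality as a two-prover game" idea concretely, here is a minimal sketch (all function names are mine) of the standard CHSH game: a verifier sends independent random bits x, y to two isolated provers, who answer bits a, b and win iff a XOR b = x AND y. Brute-forcing all deterministic strategies recovers the classical value 3/4; entangled provers can do strictly better, winning with probability cos²(π/8).

```python
import itertools
import math

def classical_value():
    """Best classical winning probability in the CHSH game.

    A deterministic strategy is a pair of answer tables {0,1} -> {0,1},
    one per prover. Shared randomness is a mixture of deterministic
    strategies, so it can't beat the best one; this search is exhaustive.
    """
    best = 0.0
    for a in itertools.product([0, 1], repeat=2):      # Alice's answer a[x]
        for b in itertools.product([0, 1], repeat=2):  # Bob's answer b[y]
            wins = sum((a[x] ^ b[y]) == (x & y)
                       for x in (0, 1) for y in (0, 1))
            best = max(best, wins / 4)
    return best

print(classical_value())            # 0.75 (the Bell/CHSH bound)
print(math.cos(math.pi / 8) ** 2)   # ~0.8536, achievable with entanglement
```

The gap between the two printed numbers is exactly what lets entangled provers "cheat": they can win a game no classical pair could win as often, which is what breaks the classical soundness analysis.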
 John Watrous spoke about stronger error reduction for QMA. Recall that QMA, or Quantum MA, is the class of languages for which there exist polynomial-size quantum proofs that convince a polynomial-time quantum verifier that an input is in the language when indeed it is. Here the completeness and soundness errors are 1/3. Early on, Kitaev observed that the prover can amplify the correctness probability to 1 - 2^{-p(n)} by giving the verifier O(p(n)) copies of the proof. The verifier then checks each proof independently (destroying it in the process) and outputs the majority result. Against everyone's intuitions (or at least mine!), Watrous now shows that O(p(n)) copies are overkill: the verifier can amplify the correctness probability arbitrarily using a single copy of the proof! This means that a "quantum library" could store proofs on the shelves, to be borrowed and returned intact by quantum patrons who want to convince themselves of the truth of various statements. The conventional wisdom, that learning something from a quantum state always disturbs that state, is wrong in the case of proofs. (Could this be related to zero-knowledge proofs?) Another implication of Watrous' result is that QMA_{log} = BQP.
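For a feel of the Kitaev-style amplification that Watrous' result renders unnecessary, here is a small sketch (function names are mine) that computes, for a per-copy error of 1/3, exactly how many independent proof copies a majority vote needs before its error drops below 2^{-p}:

```python
from math import comb

def majority_error(k, q=1/3):
    """Probability that the majority of k independent checks errs,
    when each check errs independently with probability q."""
    return sum(comb(k, i) * q**i * (1 - q)**(k - i)
               for i in range(k // 2 + 1, k + 1))

def copies_needed(p, q=1/3):
    """Smallest odd k whose majority error is at most 2**-p.

    This is the parallel-repetition amplification: O(p) copies suffice,
    each checked independently and destroyed in the process."""
    k = 1
    while majority_error(k, q) > 2.0 ** (-p):
        k += 2
    return k

for p in (10, 20, 40):
    print(p, copies_needed(p))   # k grows linearly in p, as Chernoff predicts
```

The linear growth of k in p is the "O(p(n)) copies" in Kitaev's argument; Watrous' surprise is that a single copy, measured cleverly and non-destructively, achieves the same error bound.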
 Scott Aaronson spoke about Multilinear Formulas and Skepticism of Quantum Computing. Journalistic objectivity precludes me from commenting on the excellence or otherwise of that particular talk. Next, Ran Raz explained why Multi-Linear Formulas for Permanent and Determinant Are of Super-Polynomial Size, a brilliant result whose relevance to quantum computing is that it provides the technical tools for my talk. I'd say more about Raz's result, but it's 1 AM and I have to get up early tomorrow for another day of manipulating entanglement at ultracold temperatures.
12:20 AM
Thursday, January 15, 2004
Advice, Not The Quantum Kind (by guest blogger Scott Aaronson)
A comment to my last post asked for advice for people interested in getting into complexity research. So here it is. Keep in mind that I'm still a grad student; for advice from more experienced researchers, read Lance's earlier post, and this essay by physicist Steven Weinberg.
I think the key is to start doing creative original research right away. My first year at Berkeley, I took three courses a semester, hoping to prepare by stuffing my brain with knowledge. This was a mistake. Take as few courses as you can get away with, besides directly relevant ones like complexity theory. Learn what you need to know while doing research, not beforehand.
This approach has two advantages. First, you never know what you need to know until you need to know it. Not even Einstein could have predicted as a student that he'd need differential geometry to invent general relativity. And second, you don't really understand anything unless you have a personal stake in it, meaning that you discovered it, rediscovered it, extended it, applied it, tested it, implemented it, reinterpreted it, explained it to others, etc. This is the reason most students forget everything in a course right after the exam. (As Feynman said, "What I cannot create, I do not understand.")
So then, how do you do original research? By throwing your entire life into it. Many researchers play piano, go to clubs, sail, etc., but if they're any good they probably think about research while they're doing these things. I know grad students who never suffer the indignity of working late into the night. They go surfing with friends every weekend and are constantly away on road trips. At this rate, they'll enjoy life more than I will but won't be successful researchers.
I can't offer any advice on research topics, other than to solve the open problems listed in my papers. Blanket advice is difficult because your research ought to be intimately connected to who you are as an individual. Lance suggests leafing through conference proceedings until you find what excites you, while Weinberg suggests getting involved in the "messes" that nobody understands. As for me, I like to start with physical or philosophical questions (can we assign any meaning to "the past" besides memories and records in the present? is there a theory that agrees with quantum mechanics on all experiments to date but that wouldn't allow quantum computation? why should we expect information content to be proportional to area rather than volume?), and then look for related questions that can be addressed using complexity theory. But I don't know if anyone else works that way.
9:42 AM
Wednesday, January 14, 2004
Ingredients for Serious Thought (by guest blogger Scott Aaronson)
To prove theorems I need a particular kind of intense concentration, sustained for hours, that I don't need for programming, fiction writing, guest blogging, or anything else I've ever done. This kind of concentration seems to come naturally to some researchers, but it never has to me. So over the past four years, I've been keeping a list of what in my physical environment and state of mind facilitates the proving of STOC/FOCS-type results. Although this list is personal and idiosyncratic (and even a bit embarrassing), I offer it in the hope that its very specificity will inspire you to add your own ingredients. Feel free to do so in the comments section.
 Lots of light.
 Adequate sleep the night before (duh).
 Freedom from buzzing insects, screaming babies, ringing phones, slamming doors, and car alarms. I'll never know what I could have proved if not for these things.
 A well-ventilated room with fresh, non-oxygen-depleted air at about room temperature. (Bug screens allow the last two ingredients simultaneously.)
 Caffeine or other stimulants.
 A comfortable swivel chair, or else a couch or bed to sprawl across.
 Long deserted halls or outdoor walkways. (Pacing around in tight circles is no good.)
 Hours and hours of concentration with no end in sight. I've never been able to set aside (say) two hours for serious work, in between other commitments. That's why I work at night.
 Lack of awareness of how much time has elapsed with no new ideas. Before starting to work I take off my watch and hide the Windows taskbar so I can't see the little clock in the corner.
 Comfortable clothes. I've never proved a publishable result wearing a shirt with a too-tight collar.
 Black erasable pens, unruled paper (the backs of printouts serve nicely), Scientific Workplace for TeX, and (don't laugh) MS-DOS QBasic for quick calculations. Substitute your own favorite tools.
 No tempting distractions. Train rides are good: plenty of room to spread out papers and a laptop, but no Internet access (something I hope doesn't change soon).
 No people around toward whom I have strong unresolved feelings (attraction being only one example).
 Freedom from bodily annoyances and pains. Advil, cold medicine, lip balm, a nail clipper, and a glasses cleaning cloth are important weapons in my theory arsenal. Also, I can't do serious work until about half an hour after a meal.
 A positive attitude, which is fostered by a calm, uneventful week in my life.
 Colleagues to talk to. People able to shoot down wrong proofs are ideal, but even "write-only black boxes" are invaluable as sounding boards. Of course I try to reciprocate both services.
 A problem that I consider "mine": either because I posed the problem, I've had recent successes on subproblems or related problems, the problem is important for one of my research goals (or even better, two goals), or I'm (rightly or wrongly) seen as the world expert on the problem.
 A problem that others are eager to see solved. It's easier to let myself down than to let others down.
 Conference deadlines. They motivate me to work, but then if I miss them (as I do), my "research GPA" doesn't suffer: there's always the next conference.
12:01 PM
Monday, January 12, 2004
Arrr, Even Pirates Be Polynomially Bounded (by guest blogger Scott Aaronson)
Leaving home after the holidays, I said goodbye tonight to my friend since junior high school, Alex Halderman. You might have read about Alex in the news: Princeton computer science graduate student, breaker of music CD copy-protection schemes, and the first person ever to attain national fame for holding down a Shift key. (Alas, I tease him, my own research too often entails pressing multiple keys.)
Alex's recent run-in with the recording industry got me thinking about whether anti-piracy technology can have any theoretical basis. Passive experiences like listening to music are hard to copy-protect for an obvious reason: if you can see or hear something, then you can also record it, especially since disk space is almost never a limitation today. (Admittedly, there's some loss of quality any time you convert from digital to analog and back. Also, this theory would predict rampant piracy of books, which hasn't happened (yet).)
The copy-protection problem is more interesting for interactive experiences like video games and application software. The standard solution, in which you send the software company your computer's hardware ID, X, and the company sends you back a key f(X) that unlocks the program, is insecure. You could always copy the unlocked program, then run it on another computer using an emulator. Whenever the program asks for the hardware ID, the emulator says it's X.
A better solution involves a program that constantly needs to communicate with a central server in order to run. For example, the program could demand a new key each time it's executed (based on its current input), a key the server supplies only after receiving back the previous key it issued. That way, pirated copies of the program not only have to spoof IP addresses; they have to remain in communication with each other (or else be coordinated by a "renegade" server) in order to synchronize their keys.
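Here is a toy sketch of that rolling-key idea (the class, method names, and protocol details are all my own illustration, not any real product's scheme): the server chains per-execution keys with an HMAC, so a cloned install that replays an old key falls out of sync and is refused.

```python
import hashlib
import hmac
import secrets

class LicenseServer:
    """Toy rolling-key license server: each execution's key depends on
    the previous one, so copies must stay synchronized with the server."""

    def __init__(self):
        self.secret = secrets.token_bytes(32)   # known only to the company
        self.last_key = {}                      # hardware ID -> last key issued

    def activate(self, hw_id: str) -> bytes:
        key = hmac.new(self.secret, hw_id.encode(), hashlib.sha256).digest()
        self.last_key[hw_id] = key
        return key

    def next_key(self, hw_id: str, previous: bytes) -> bytes:
        # Refuse unless the client returns exactly the key issued last time.
        if not hmac.compare_digest(self.last_key.get(hw_id, b""), previous):
            raise PermissionError("key chain out of sync (copied install?)")
        key = hmac.new(self.secret, previous, hashlib.sha256).digest()
        self.last_key[hw_id] = key
        return key

server = LicenseServer()
k0 = server.activate("machine-A")
k1 = server.next_key("machine-A", k0)   # the legitimate install stays in sync
try:
    server.next_key("machine-A", k0)    # a clone replaying the stale key fails
except PermissionError as e:
    print(e)
```

Of course, as the post notes, a "renegade" server that mirrors the chain defeats this; the sketch only shows why pirated copies must coordinate.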
An even more centralized solution is to run the whole program off a server and charge for each use. In this situation, a program can be "pirated" only if (in learning theory terms) the function it computes is PAC-learnable from membership queries. The downside, of course, is the high cost in server computing time and communication latency.
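A minimal illustration of "piracy as learning from membership queries" (the example function and names are mine): if the server-hosted program computes something as simple as an affine function f(x) = ax + b over the integers, two queries suffice to reconstruct it exactly, after which the pirate never pays again. Richer function classes are precisely those that resist this kind of query-efficient reconstruction.

```python
def make_server(a, b):
    """The hosted program: the pirate sees only this query interface."""
    return lambda x: a * x + b

def pirate(query):
    """Learn f(x) = a*x + b exactly from two membership queries."""
    b = query(0)             # f(0) = b
    a = query(1) - b         # f(1) - f(0) = a
    return lambda x: a * x + b   # a perfect local clone: no more queries needed

server = make_server(7, 3)
clone = pirate(server)
print(all(server(x) == clone(x) for x in range(-100, 100)))   # True
```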
Open Research Issue #1. Is there a way for the user's machine to do almost all the actual computation, yet to still need a short message from the server to "unlock" its results? If so, how much can the required communication with the server be reduced (especially the number of rounds)? Boaz Barak has pointed me to some relevant crypto papers, including this one by Sander, Young, and Yung; but the general problem seems wide open.
Of course there's always copy-protection based on physically opaque hardware, such as dongles, smartcards, or the 'Fritz' chip. Since I have no idea how secure these technologies really are, I prefer to end this post with a question more up my alley:
Open Research Issue #2. Can the No-Cloning Theorem of quantum mechanics be exploited to create unpirateable software? What we want is a quantum state ψ such that (1) a program P can be written that needs to measure ψ in order to work correctly; (2) ψ can be prepared by a polynomial-size quantum circuit, given secret information known only to the software company; and (3) a circuit preparing ψ can't be efficiently inferred, even given P's source code and unlimited copies of ψ. More impressive still would be if P used the same state ψ over and over, without the software company needing to provide a fresh copy for each execution. I suspect the latter is impossible. Proof?
3:30 AM
Sunday, January 11, 2004
Complexity Class of the Week: PP (by guest blogger Scott Aaronson)
Yeah, I know: PP has already been this weblog's complexity class of the week. But once you've seen how to define PP using superpowerful variants of quantum mechanics, you might never look at Probabilistic Polynomial-Time the same way again! (Then again, you might.)
Let's define PostBQP (or BQP with postselection) as the class of languages L for which there exists a uniform family of polynomial-size quantum circuits such that:
 For all inputs x, the circuit's first qubit has a nonzero probability of being measured '1' at the end of the computation.
 If x is in L, the second qubit will be measured '1' with probability at least 2/3, conditioned on the first qubit being measured '1'.
 If x is not in L, the second qubit will be measured '1' with probability at most 1/3, conditioned on the first qubit being measured '1'.
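The conditioning in that definition is easy to see numerically. Here is a minimal sketch (function name and example state are mine) that postselects a 2-qubit state vector on the first qubit measuring '1', renormalizes, and reads off the second qubit's conditional statistics:

```python
import math

def postselect_first_qubit(amps):
    """Pr[second qubit = 1 | first qubit measured 1].

    amps: four amplitudes in basis order |00>, |01>, |10>, |11>."""
    kept = amps[2:]                              # components with first qubit = 1
    norm = sum(abs(a) ** 2 for a in kept)        # probability of the postselected event
    if norm == 0:
        raise ValueError("postselection event has probability zero")
    return abs(kept[1]) ** 2 / norm              # renormalized |amp of |11>|^2

# Unconditionally this state gives the second qubit '1' with probability 0.5;
# conditioned on the first qubit being '1', that jumps to 0.9.
state = [math.sqrt(0.45), math.sqrt(0.05), math.sqrt(0.05), math.sqrt(0.45)]
print(postselect_first_qubit(state))   # about 0.9
```

The first bullet of the definition is exactly the `norm == 0` guard: the postselected event must have nonzero probability for the conditional probabilities in the other two bullets to make sense.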
In physics, "postselection" means you throw away all runs of an experiment for which a measurement of some quantity X doesn't yield a desired outcome. (Hopefully, X isn't itself what you're trying to measure; otherwise it's not postselection, it's fraud!) But you can also think of PostBQP as the quantum analogue of the classical complexity class BPP_{path} (another previous CCW).
Clearly PostBQP sits between BQP and PP. I became interested in PostBQP when I realized that the containment BQP/qpoly ⊆ EXP/poly (discussed earlier in this weblog) can be improved to BQP/qpoly ⊆ PostBQP/poly.
 Exercise 1. Imagine that the gates of a quantum computer only needed to be invertible, not unitary. Since states might no longer be normalized, let's define the probability of measuring a basis state x with amplitude α_{x} to be |α_{x}|^{2} divided by the sum over all basis states y of |α_{y}|^{2}. Show that we could decide exactly the languages in PostBQP.
 Exercise 2. Now imagine that the gates are unitary, but the probability of measuring a basis state x equals |α_{x}|^{p} divided by the sum over all basis states y of |α_{y}|^{p}, where p is a nonnegative real number not equal to 2. Show that we could decide all languages in PostBQP.
I was getting more and more excited about the fundamental new complexity class I'd discovered. Alas:
Exercise 3. Show that PostBQP equals PP.
The moral is that when you make a quantum class too powerful, it turns into a classical class! (Another example of this is NQP = coC_{=}P.)
5:10 AM
