For your final project, you are expected to read one or several papers related to a complexity theory topic of your choice. You will be required to
You can work either individually or in pairs. A list of suggested topics appears below, but you should not feel obliged to choose from it; you are strongly encouraged to find a topic on your own.
Here are some places where you can look for topics. A good source is the Electronic Colloquium on Computational Complexity (ECCC), where you can find much of the exciting recent work in the subject. You can also browse recent proceedings of FOCS, STOC, or CCC (the Conference on Computational Complexity), though many papers from those venues also appear (in more readable form) on ECCC. Some papers may require too much background reading, so feel free to talk to me if you are not sure a project is appropriate. ECCC also hosts several excellent surveys on various topics that you can read (and explore further if you feel motivated).
The suggestions below vary somewhat in difficulty and preparation time. I am providing a subjective rating from * (easier) to *** (harder). You should not choose your project based on the difficulty level; choose a topic that interests you. These indicators are merely there to help you plan your time more effectively near the end of the semester.
In one of the first lectures we saw that P ≠ EXP and NP ≠ NEXP. These are examples of time hierarchy theorems: they say that, given more time, we can solve more problems. What if we allow the algorithms to use randomness? Surprisingly, no analogous hierarchy theorem is known for bounded-error randomized time. A good starting point for this project is the survey
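For orientation, here is the deterministic statement (a standard fact; the phrasing is mine): if f and g are time-constructible functions with f(n) log f(n) = o(g(n)), then DTIME(f(n)) ⊊ DTIME(g(n)). No such theorem is known for BPTIME; for instance, it is open whether BPTIME(n) ⊊ BPTIME(n²).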
In our treatment of NP we measure hardness with respect to the size of the instance and not the size of the witness. But is the real hardness determined by the instance size or the witness size? One way to formalize and answer this question is via compressibility:
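One formalization, roughly along the lines of Harnik and Naor (the exact parameters here are my paraphrase): call an NP language L compressible if there is a polynomial-time algorithm that maps an instance x of length n with witness length m to an instance y of some language L' with |y| ≤ poly(m, log n) and y ∈ L' if and only if x ∈ L. For example, one can ask whether a CNF of size n over m ≪ n variables can be compressed in polynomial time to an equivalent instance of size poly(m).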
Circuit and formula lower bounds are very hard to prove, yet recently there has been some progress in the arithmetic setting:
The Nisan-Wigderson paradigm derives derandomization from (nonuniform) circuit lower bounds. Can we obtain the same results from uniform lower bounds, e.g., EXP ⊄ BPP?
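To make the construction concrete, here is a minimal Python sketch of the mechanics of the Nisan-Wigderson generator. The names greedy_design and nw_generator are mine, and parity merely stands in for the hard function (parity is of course easy for small circuits), so this illustrates only the structure of the generator, not an actual pseudorandom generator:

    from itertools import combinations

    def greedy_design(m, l, d, max_overlap):
        """Greedily pick m subsets of {0, ..., d-1}, each of size l, with
        pairwise intersections of size at most max_overlap.  (The real NW
        construction needs max_overlap = O(log m); greedy search is enough
        for toy parameters, but may return fewer than m sets if d is small.)"""
        sets = []
        for cand in combinations(range(d), l):
            if all(len(set(cand) & s) <= max_overlap for s in sets):
                sets.append(set(cand))
                if len(sets) == m:
                    break
        return [sorted(s) for s in sets]

    def nw_generator(f, design, seed):
        """Output bit i is f evaluated on the seed restricted to the i-th set."""
        return [f([seed[j] for j in S]) for S in design]

    # Placeholder "hard" function.  NOT actually hard for circuits; a genuine
    # instantiation needs a function that is hard on average for small circuits.
    def parity(bits):
        return sum(bits) % 2

    design = greedy_design(m=8, l=4, d=12, max_overlap=2)
    seed = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1, 0, 1]   # 12 truly random seed bits
    print(nw_generator(parity, design, seed))      # 8 output bits

Note that with these toy parameters the output is shorter than the seed; in the real construction the design has m = 2^Ω(l) sets over a seed of length O(l²), so the m output bits far exceed the seed length. The sketch only shows how each output bit is computed from a restriction of the seed, with restrictions overlapping pairwise in few coordinates.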
Applications of the Nisan-Wigderson generator extend beyond giving evidence that BPP = P. It can be used to eliminate randomness from interactive proofs, approximate sampling algorithms, and so on. Some of these applications are described in
The PCP theorem used to be proved along the same lines as the NEXP ⊂ PCP proof from class, but with much more intricate constructions. Recently, Irit Dinur came up with a completely different proof of the PCP theorem. However, reading her proof requires some background on expander graphs and a few other topics we did not cover in class.
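Very roughly (this one-line summary is mine, not part of the assigned reading): Dinur views a PCP as a constraint graph and repeatedly applies an amplification step G ↦ G' with size(G') = O(size(G)) and gap(G') ≥ min(2 · gap(G), c) for a universal constant c > 0; starting from a gap of 1/poly(n), about log n iterations yield a constant gap, which gives the PCP theorem.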