Title: What Do You Mean, It’s Hard?
Abstract: Suppose someone gives you a computer and asks you to perform one of the following tasks:
- Task A: solve a 17 × 17 × 17 Rubik’s cube, or
- Task B: decide whether a given list of 100 integers can be broken into two parts having equal sums.
If your life depended on it, which task would you choose? Which is harder, computationally?
In 1971, Stephen Cook proposed a strong measure of efficiency – polynomial time, or simply P – as a desirable standard to which we should hold solutions to computational problems. Task A is an instance of a problem with such a solution. He also identified a seemingly less stringent measure – nondeterministic polynomial time, or simply NP – in which one merely has to check, efficiently, that a given solution is correct. Tasks A and B are both instances of problems satisfying this condition. The big question raised by Cook is whether these two measures of computational efficiency are actually distinct.
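To make the NP condition concrete, here is a minimal sketch of a verifier for Task B (in Python; the function name and the example numbers are my own illustration, not part of the talk). Given the list of integers and a proposed choice of one part, it confirms in linear time that the two parts have equal sums, while saying nothing about how such a split might be found.

```python
def verify_partition(numbers, chosen_indices):
    """Check, in time linear in len(numbers), whether the part of `numbers`
    selected by `chosen_indices` and the remaining part have equal sums.

    This is the 'check a given solution efficiently' half of the NP
    condition for Task B; it does not find the split itself.
    """
    chosen = set(chosen_indices)
    left = sum(x for i, x in enumerate(numbers) if i in chosen)
    right = sum(x for i, x in enumerate(numbers) if i not in chosen)
    return left == right


# [3, 1, 4, 2] splits as {3, 2} and {1, 4}: indices 0 and 3 form one part.
print(verify_partition([3, 1, 4, 2], [0, 3]))  # True  (3 + 2 == 1 + 4)
print(verify_partition([3, 1, 4, 2], [0, 1]))  # False (3 + 1 != 4 + 2)
```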
One can argue that the “P ≠ NP Problem,” as it is now known, is the most important open problem in all of mathematics and computer science. Certainly, it has far-reaching implications both within these two fields and beyond. Cook showed in his 1971 paper that there are NP problems that seem really hard: remarkably, if you can solve any one of these problems efficiently, then you can solve every NP problem efficiently. Task B is an instance of one of these “NP-complete” problems. To prove that P ≠ NP, one could look for NP problems that are unlikely to be hard in this sense and try to show that they’re also not easy. We have yet to find such a problem, but in this talk I will try to persuade you that there are some candidates worthy of further scrutiny!
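For contrast, the obvious way to decide Task B without any hint is exhaustive search. The sketch below (again my own illustration, not an algorithm from the talk) tries every subset, so the work grows like 2^n in the length n of the list; for 100 integers that is roughly 2^100 candidate splits, which is why an efficient check of a proposed solution does not automatically yield an efficient way to find one.

```python
from itertools import combinations


def partition_exists(numbers):
    """Decide Task B by exhaustive search: try every subset of indices and
    check whether it sums to half the total. There are 2**len(numbers)
    subsets, so this is hopeless long before the list reaches 100 integers.
    """
    total = sum(numbers)
    if total % 2 != 0:
        return False  # an odd total can never be split into two equal sums
    target = total // 2
    indices = range(len(numbers))
    for r in range(len(numbers) + 1):
        for subset in combinations(indices, r):
            if sum(numbers[i] for i in subset) == target:
                return True
    return False


print(partition_exists([3, 1, 4, 2]))  # True  ({3, 2} vs {1, 4})
print(partition_exists([1, 2, 5]))     # False (no subset sums to 4)
```

Cleverer methods exist, such as dynamic programming over the achievable sums, but no algorithm is known whose running time is polynomial in the number of bits needed to write down the input, which is what membership in P would require.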