My pure amateur, possibly muddled understanding is that P != NP is rather different from most open conjectures.
A) It's extremely general or even "foundational" in the sense that it's asking whether any algorithm at all exists to solve an extremely general sort of problem in polynomial time.
B) All of the standard methods used to solve problems of this sort have at least been pronounced exhausted at this point.
C) Very few theorems that inherently limit the speed of a class of problems actually have ever been proven. Very few methods for proving these constraints are known.
D) A lot of famous theorems have yielded results through being embedded in a larger, different field where they are just one result of many proved with new machinery (Fermat most prominently). But given that P != NP is so general, it can't really be embedded in a larger, tractable space - lots of things are equivalent to it, but all these things are kind of the same.
Great post. Here are a few pointers if you want more information about your points:
B) The situation is actually worse than exhausted. We know (have proven) that the regular methods simply won't work (look up "the relativization barrier", "natural proofs", and "algebrization").
C) We actually can construct problems which take at least a certain amount of time to solve (though admittedly many of them are kind of unnatural). Look up "the time hierarchy theorem"; this is probably one of the theorems you mention. However, you're right that, as far as natural problems go, it appears to be extremely difficult to prove that a natural problem requires even more than linear time (https://mathoverflow.net/questions/4953/super-linear-time-co...). Note that the log* factor there grows extremely slowly.
In fact, it is not known whether SAT requires more than linear time. Since SAT is NP-complete, we expect it to have no polynomial-time algorithm... and yet here we are wondering whether it requires even more than linear time. If you are interested in this sort of thing, maybe look up fine-grained complexity. Also note that proving P != NP would in particular establish P != PSPACE (since NP is contained in PSPACE, P != NP implies P != PSPACE). However, even this seemingly easier separation has seen no progress and seems like it would require a huge breakthrough.
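To make the gap concrete: the only upper bound we get for free is the trivial exponential one from checking every assignment. Here is a minimal brute-force sketch in Python (the clause encoding and function name are my own, purely for illustration); the whole game in fine-grained complexity is asking how much of that 2^n can be shaved off.

    from itertools import product

    def brute_force_sat(num_vars, clauses):
        """Decide satisfiability by trying all 2^n assignments.

        A clause is a list of non-zero ints: +i means variable i, -i its negation
        (the usual DIMACS-style convention). Runs in O(2^n * total clause size),
        the trivial upper bound that fine-grained complexity asks us to beat.
        """
        for bits in product([False, True], repeat=num_vars):
            assignment = {i + 1: bits[i] for i in range(num_vars)}
            if all(any(assignment[abs(lit)] == (lit > 0) for lit in clause)
                   for clause in clauses):
                return True   # found a satisfying assignment
        return False          # every assignment falsifies some clause

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(brute_force_sat(3, [[1, 2], [-1, 3], [-2, -3]]))  # True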
From this it seems that the question itself might have been hopelessly naive
Is there any evidence that P might be equal to NP? This seems different from other famous conjectures like Fermat's Last Theorem or the Riemann Hypothesis, where what's hard is finding a counterexample that would disprove the conjecture.
I think RJ Lipton (the blog author) and Knuth believe it could be true. However, most complexity theorists don't (I think Fortnow does a survey of theorists on this every few years). Knuth says something like "well, all it would take is a single algorithm for any of these hundreds of problems". A fine take for the godfather of algorithms. At the same time, there are many people who have in some sense tried to find such an algorithm, even non-explicitly (e.g. factoring would be in P if P = NP). Moreover, there is a lot of complexity-theoretic work which makes it hard to believe it to be true (I think Scott Aaronson's blog has some presentable information about this).
>>B) The situation is actually worse than exhausted. We know (have proven) that the regular methods simply won't work (look up "the relativization barrier", "natural proofs", and "algebrization").
I've heard of those, and even tried to read the proofs. Still, my understanding is that those claims are ultimately "informal proofs" despite having formal steps, so the situation isn't entirely closed up, but it is still very bad.
Maybe this will help with understanding the relativization barrier. Most proofs of separations of complexity classes relativize, in that not only do they show
A is a strict subset of B
they also show
A^O is a strict subset of B^O
for ANY oracle O. Most of these proofs don't explicitly state this, but they readily extend this way (e.g. the diagonalizing proofs of the time hierarchy theorems).
However, we know (Baker, Gill, Solovay) that there exists an oracle O1 where
P^O1 = NP^O1
as well as an oracle O2 such that
P^O2 != NP^O2
Therefore, we know that a proof of P != NP must not relativize, or else it would contradict the aforementioned result.
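Spelled out in notation (my restatement of the argument, not the parent's wording):

    \text{For every oracle } O:\ P^{O} \subseteq NP^{O} \text{ holds trivially.}
    \text{Baker--Gill--Solovay: } \exists\, O_1 \text{ with } P^{O_1} = NP^{O_1} \quad\text{and}\quad \exists\, O_2 \text{ with } P^{O_2} \neq NP^{O_2}.
    \text{So a relativizing proof of } P \neq NP \text{ would force } P^{O_1} \neq NP^{O_1} \text{ (contradiction),}
    \text{and a relativizing proof of } P = NP \text{ would force } P^{O_2} = NP^{O_2} \text{ (also a contradiction).}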
It isn’t that different - most complexity theory conjectures are this way.
Basically, there are all these classes of problems: P can be solved in polynomial time, NP can be solved in polynomial time by a computer that gets really lucky with its rng, PSPACE can be solved with polynomial space, L can be solved in logarithmic space. For many of these pairs, it kinda feels like they are not the same, but we can't prove it.
So in particular we don’t know that P != PSPACE, which is a much weaker statement, and a very similar one, compared to P != NP.
Similarly, we can’t prove there are cryptographically secure hash functions, and we can’t prove it’s hard to factor numbers.
Basically, we just have very few mathematical tools for proving that complexity classes are different. It's hard to prove that algorithmic problems are hard. We have diagonalization, which shows that the halting problem is undecidable and that P != EXPTIME, and that's about it.
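For reference, here is the standard chain these classes sit in, and the few separations the hierarchy theorems actually give us (my summary of textbook facts, not something the parent spelled out):

    L \subseteq P \subseteq NP \subseteq PSPACE \subseteq EXPTIME
    \text{Known: } L \neq PSPACE \text{ (space hierarchy)} \quad\text{and}\quad P \neq EXPTIME \text{ (time hierarchy).}
    \text{Open: every adjacent question in the chain, including } P \overset{?}{=} NP \text{ and } P \overset{?}{=} PSPACE.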
Anyway, I would conclude not that P != NP is a uniquely hard problem, as much as that complexity theory is very young compared to most fields of mathematics, and there are still many statements we can’t prove.
P vs NP is also distinguished in that many people think it has profound philosophical implications. Nick Land seems to think this (http://www.uf-blog.net/crypto-current-021/), and so does Scott Aaronson (“if P=NP, the world would be a profoundly different place than we usually assume it to be ... there would be no special value in “creative leaps””). Though I think Land’s idea is a lot more interesting: he says P vs NP is a paraphrase of the analytic vs synthetic distinction.
Should we even care about a mathematical proof that P=NP? We know that in practice we haven't been able to come up with algorithms that solve NP-complete problems in polynomial time. Suppose we're told that this is possible (i.e. P=NP): will that help us invent such algorithms? I don't see how, unless the proof is constructive.
> Should we even care about a mathematical proof that P=NP?
Should we care about the most important computational problem in the world today? I'd say yes.
> Suppose we're told that this is possible (i.e. P=NP): will that help us invent such algorithms? I don't see how, unless the proof is constructive.
Why does the method of proof matter? If it is proven that P=NP, then many of the most pressing computational problems today are solvable in polynomial time, so we can redouble our efforts to find those algorithms. If it is proven that P!=NP, then we can stop wasting our time, or focus on tractable subsets of these problems. The problem is that right now we don't know whether these problems can be solved in polynomial time.
Put it this way: which scenario would you prefer?
Scenario 1: There MAY be a billion dollars hidden somewhere in Mount Denali.
Scenario 2: There IS a billion dollars hidden somewhere in Mount Denali.
Scenario 3: There ISN'T a billion dollars hidden somewhere in Mount Denali.
Currently, we are at scenario 1. We don't know whether our efforts are for naught. We don't know if we haven't found the billion dollars because it's in a place we haven't looked or if the billion dollars isn't even in the mountain.
Proving P=NP would get us to scenario 2: we'd know the billion dollars is there, and we'd just have to find it. Proving P!=NP would get us to scenario 3: we'd know the billion dollars isn't there, so we wouldn't have to bother wasting our time.
I'd say we'd be in a much better position in scenario 2 or 3 than in scenario 1, where we are right now.
Working programmers, and even people concerned with producing useful algorithms, probably shouldn't care about P=NP?. It seems very likely to be true, and if it isn't true, it seems like a polynomial algorithm would be unwieldy indeed.
Further, P!=NP is a statement entirely about worst-case behavior. Many NP-complete problems are actually quite solvable in the average case, and the worst case can have little relation to the average case (SAT solvers that are quite fast on average exist now, for example).
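To make "quite solvable in the average case" concrete, here is a toy DPLL-style solver in Python: just clause simplification, unit propagation, and backtracking, the skeleton that modern CDCL solvers elaborate on. This is my own minimal sketch, nowhere near a production solver:

    def dpll(clauses, assignment=None):
        """Tiny DPLL SAT solver: simplify, unit-propagate, branch.

        Clauses are lists of non-zero ints (DIMACS-style literals).
        Returns a satisfying set of true literals, or None if unsatisfiable.
        """
        if assignment is None:
            assignment = set()

        # Simplify clauses under the current partial assignment.
        simplified = []
        for clause in clauses:
            if any(lit in assignment for lit in clause):
                continue                       # clause already satisfied
            reduced = [lit for lit in clause if -lit not in assignment]
            if not reduced:
                return None                    # clause falsified -> backtrack
            simplified.append(reduced)

        if not simplified:
            return assignment                  # all clauses satisfied

        # Unit propagation: a one-literal clause forces that literal.
        for clause in simplified:
            if len(clause) == 1:
                return dpll(simplified, assignment | {clause[0]})

        # Branch on the first literal of the first clause.
        lit = simplified[0][0]
        return (dpll(simplified, assignment | {lit})
                or dpll(simplified, assignment | {-lit}))

    # (x1 or x2) and (not x1 or x3) and (not x2 or not x3)
    print(dpll([[1, 2], [-1, 3], [-2, -3]]))   # e.g. {1, 3, -2}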
For logicians and mathematicians, a resolution of P=NP? would be an incredible accomplishment simply because it's a problem that, at this point, no one knows where to start on, and so by definition the proof would be a piece of remarkable and surprising mathematics, giving people much to think on.
P=NP is very likely to be true? I would say it's very likely to be false.
Why do I say that? Because it's possible to construct, say, 3-SAT problems such that it seems impossible to solve them in polynomial time. If some problems in NP can't be solved in polynomial time, then P!=NP.
Why do you think that it's "very likely" that P=NP?
Given the next sentence (...if it isn't true, it seems like a polynomial algorithm would be unwieldy indeed), I think the GP actually meant P!=NP to be true.
SAT solvers are not fast on average. Even some very simple random distributions produce small (<100 variables) SAT instances that are very hard for the solvers.
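For anyone who wants to reproduce this: the classic recipe is uniform random 3-SAT near the empirically observed satisfiability threshold of roughly 4.27 clauses per variable, where even instances with around 100 variables can choke complete solvers. A quick DIMACS generator sketch (my own code; the parameters are illustrative):

    import random

    def random_3sat(num_vars, ratio=4.27, seed=0):
        """Generate a uniform random 3-SAT instance in DIMACS CNF format.

        ratio ~ 4.27 clauses per variable sits near the empirical sat/unsat
        phase transition, where random instances tend to be hardest.
        """
        rng = random.Random(seed)
        num_clauses = int(ratio * num_vars)
        lines = [f"p cnf {num_vars} {num_clauses}"]
        for _ in range(num_clauses):
            variables = rng.sample(range(1, num_vars + 1), 3)              # 3 distinct variables
            clause = [v if rng.random() < 0.5 else -v for v in variables]  # random signs
            lines.append(" ".join(map(str, clause)) + " 0")
        return "\n".join(lines)

    print(random_3sat(100))  # feed the output to any DIMACS-accepting SAT solver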
Thanks for the link, but it seems difficult to answer the question "find a SAT instance with no more than 100 variables which all competing solvers failed to solve in the allowed time" from the information presented here.
I'm going to download the benchmarks and try to correlate them with the results CSV table (which doesn't show number of variables for each instance), but since the full random benchmarks are 2.9 GB compressed this might take a fair amount of work.
If you can provide any further guidance on finding the relevant instances I'd appreciate it.
Also note that just specifying the number of variables may not be terribly informative if you don't also specify the maximum clause size and the number of clauses.
For example, I see some instances here with 120 variables, but they are 7-SAT with around 10k clauses. Unless I'm mistaken, reducing those to <=3-SAT with the standard clause-splitting trick would add about 40k more variables and roughly quintuple the number of clauses (each 7-literal clause becomes 5 three-literal clauses with 4 fresh variables; sketch below).
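Here is the clause-splitting reduction I mean, as a sketch (my own code, not tied to those benchmark files): a k-literal clause becomes k-2 three-literal clauses using k-3 fresh variables, chained so the new formula is equisatisfiable with the old one.

    def clause_to_3sat(clause, next_fresh_var):
        """Split one k-literal clause into equisatisfiable 3-literal clauses.

        Standard chaining: (l1 v l2 v ... v lk) becomes
        (l1 v l2 v y1)(-y1 v l3 v y2)...(-y_{k-3} v l_{k-1} v lk),
        i.e. k-2 clauses and k-3 fresh variables for k > 3.
        Returns (new_clauses, next unused fresh-variable index).
        """
        k = len(clause)
        if k <= 3:
            return [list(clause)], next_fresh_var
        fresh = list(range(next_fresh_var, next_fresh_var + k - 3))
        out = [[clause[0], clause[1], fresh[0]]]
        for i in range(k - 4):
            out.append([-fresh[i], clause[i + 2], fresh[i + 1]])
        out.append([-fresh[-1], clause[-2], clause[-1]])
        return out, next_fresh_var + k - 3

    # A 7-literal clause over variables 1..7, fresh variables starting at 8:
    clauses, _ = clause_to_3sat([1, 2, 3, 4, 5, 6, 7], 8)
    print(clauses)  # 5 clauses of size 3, using fresh variables 8, 9, 10, 11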
This is an oversimplification, but since the NP-complete problems are all reducible to each other, finding a polynomial-time solution for one of them would give us polynomial-time solutions for every problem in NP.
Depends: if it were a constructive proof by example for a useful problem (e.g. somebody finds a provably polynomial algorithm for the traveling salesman problem), then even without the polynomial reductions to other problems, that would be pretty great...
P=NP? is such a difficult problem that it is more likely than not that a proof either way will necessitate extremely powerful new tools in complexity theory. It's hard to say what the impact on algorithm design and analysis would be.
More likely it will be a proof that P != NP. Which, yes, would not really teach us anything except that we would finally know how to prove it, and that technique could be useful for something else.
It would be pretty handy for cryptography to have a better understanding of which problems are hard. We are already assuming P != NP, but if we could apply the same machinery to other assumptions (like knowledge-of-exponent, discrete log, etc.) it would help.
In addition to the obvious polynomial reductions between NP-complete problems - which are usually not very practical - it is quite possible that this would help.
> I don't see how
“When I was a child, I spake as a child, I understood as a child, I thought as a child: but when I became a man, I put away childish things. For now we see through a glass, darkly; but then face to face: now I know in part; but then shall I know even as also I am known.” -1 Corinthians 13:11,12
The point of the quote is to say that they don't know the answer either. So, yes.
Think of it as: we are all children when it comes to the ramifications of the proof of P vs NP. We do not know the impact; we cannot know the impact. We will know it when it arrives.
In my mind I always regard P/NP in the same category as the Riemann Hypothesis^. That has been open since 1859 (160 years) and shows no signs of cracking. So in other words we may be in for a long wait.
^Perhaps because one of the hypothesized examples of a hard problem is factoring products of large primes (though factoring is not proven NP-complete). Or perhaps because they are both extremely famous open problems.
We know much less about P vs NP than we do about RH; we can prove weaker versions of RH (e.g. the prime number theorem) and analogues for other number systems (the Weil conjectures). This is unsurprising, since P vs NP hasn't been shaping mathematical research for 160 years the way RH has. Maybe someone will separate P and PSPACE before P and NP, or maybe they'll both fall at once to some brilliant new technique.
One of the things that slows scientific progress is the lack of cross-discipline learning. Every so often you hear about someone taking an old concept from another field and applying it. Quite a few breakout companies have combined interdisciplinary knowledge into one product and made a mint.
So many people are affected by NP complete problems that you'd think that someone would have a problem that appeared simpler on the surface and gave insights into a P solution, but either nobody has, or the right phrasing about it has caught nobody's attention.
There's an XKCD lamenting something pretty similar. You solved some unsolved problem but nobody will ever know because it's buried in a bug fix for your obscure little product.
I think Academia is more likely to criticize your technique because it doesn't generalize to any ring or you didn't prove a certain assumption you had.
Of course the real bug is in some IEEE 754 quirk you forgot to consider.
> Some engineer out there has solved P=NP and it's locked up in an electric eggbeater calibration routine. For every 0x5f375a86 we learn about, there are thousands we never see.
When people try to use 'NP complete' as a dodge for doing difficult coding at work, I point out to them that Claude Shannon proved that generalized compression algorithms can't exist, and yet we are surrounded all day by compression algorithms.
Just because the general problem is NP-hard doesn't mean that your instances are automatically hard. Lots of human-consumed or human-generated data is full of patterns that can be exploited to useful purpose, even when purely random inputs are intractable.
They just mean that there is no general way to compress an arbitrary M-bit string to an N-bit string for M > N, and yet compression algorithms exist.
The point is that the input to your compression is not an arbitrary M-bit string, but some very structured thing which could have a smaller representation. Similarly, when encountering what appears to be an NP-hard problem in the wild, you might still be able to find an efficient solution by exploiting the structure of your input (NP-hardness only applies when considering all inputs).
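The counting argument behind that, spelled out:

    \text{There are } 2^{M} \text{ strings of length } M, \text{ but only } 2^{0} + 2^{1} + \cdots + 2^{M-1} = 2^{M} - 1 \text{ strings of length less than } M.
    \text{So no lossless (injective) encoder can map every } M\text{-bit input to a strictly shorter output: at least two inputs would have to collide.}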
One major issue is that the quote seems entirely backwards. In fact, Claude Shannon described how to optimally compress a string from a known source. https://en.wikipedia.org/wiki/Shannon%27s_source_coding_theo... As such, Shannon's results would suggest the existence of many compression algorithms.
(And conversely, I see no reason to believe that "Claude Shannon proved that generalized compression algorithms can't exist". I assume that result predates Shannon.)
The pigeonhole principle establishes that you cannot fit n distinct messages into m spaces, where n>m
The Shannon coding limit defines the bounds on what subset of n can fit into a channel of capacity m, without excluding any of the others.
By drawing a fence around the possible, he fences out the impossible.
comp.compression has several longstanding bets that one particular high-entropy input cannot be reproduced by any decoder smaller than the difference between the input and output sizes, but I lack their confidence in the infallibility of their entropy source. It is possible someone will win that particular bet, but there will come a time when another similar bet will never be collected.
Yes, and yes. P != NP could be independent of Peano Arithmetic (PA) or of Zermelo-Fraenkel set theory with the Axiom of Choice (ZFC). That independence is a separate conjecture which is also presently open.
Since exhibiting an algorithm that solves NP-complete problems in polynomial time would be enough to prove P=NP, wouldn't the impossibility of proving it just prove that P!=NP?
You could perhaps have a program that for every input you've tried gives the correct answer in polynomial time, but you haven't yet proved that it always will.
Nevertheless, I don't see how the answer to P!=NP could depend on the choice of axioms for set theory. Programs, their inputs and the state of a machine after executing n steps can all be encoded as integers, so P!=NP can be expressed as a statement about integers, and we know what the integers are: we don't need any dodgy set theory axioms for that.
If that's wrong, someone please explain how.
There are lots of interesting unsolved questions about integers (Goldbach's conjecture, ...) but people don't usually suggest that the answer to those questions might depend on the Axiom of Choice. Or do they?
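For what it's worth, the usual formalization backs this up. P != NP can be written as a statement of arithmetic, roughly like this (my paraphrase of the standard encoding):

    P \neq NP \;\iff\; \forall M\ \forall c\ \exists x\ \big[\, \text{machine } M \text{ run on input } x \text{ for } |x|^{c} + c \text{ steps does not correctly decide whether } x \in \mathrm{SAT} \,\big]

Machines M and inputs x range over integers via a standard encoding, and the bracketed predicate describes a bounded computation, so it is decidable; the whole sentence is therefore a Pi_2 statement of first-order arithmetic. No set theory is needed to state it, though (as other comments note) that alone doesn't rule out independence.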
Surprisingly, a statement being unprovable does not mean that it is untrue, even if you can prove its unprovability! This was shown by Goedel (https://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_...). He showed that no consistent, effectively axiomatized system powerful enough to express statements about the natural numbers can prove all true facts about the natural numbers while remaining sound (i.e. not proving things that are untrue).
"impossibility to prove" is different from "proven impossible" because hiding behind the loose English are different models of logics. That's what Godel theorems at e about.
Not really, we don't know if human reasoning is contained in P, BPP, BQP, PP or even PSPACE (is there a ghost in this shell?). It may not be. Though human reasoning probably is contained in P, just like probably P != NP.
I have no idea what will be proven nor when. (My uneducated guess is not any time soon.)
But I hope it isn't "here's a proof that P!=NP". That'd be just too boring... Much cooler possibilities would be "here's a non-constructive proof that P=NP" or "here's a proof that P=NP is independent of ZFC".
While Knuth has a different take on the P=?NP problem than most others (namely, he believes P=NP), he also says that the degree of the polynomial won't ever be known, and that not even an upper bound on it will be known. http://www.informit.com/articles/article.aspx?p=2213858&WT.m... Certainly "if P=NP it'll be a pure non-constructive proof with no practical consequence whatsoever" feels right.