> This is not separate groups of two or three mathematicians each belaboring on a paper on their own. It's not like that. This is not anymore the medieval mathematician's guild--which is how mathematics is still organized today. This is a post-industrial division of labor. It is completely new. It's a math hive. It's exciting, and we haven't seen this sort of thing before, and I think it's going to change mathematics.
Change itself is speeding up. We are going to zoom through the hive era and jump straight into the AI era, where 99%, if not more, of the heavy lifting will be done by machines. The centaur era is close at hand. Beyond that, the hope is there will be some room left for humans, but I wouldn't take a long-term bet on it.
Neither would I. I've been hearing about the downfall of software development since the dawn of the field. Remember 4GL? Fourth-generation languages were supposed to end the dependency on programmers: managers would write what they wanted in mostly plain English, and the computer would do the rest.
That was the vision of SQL too. Managers were supposed to write "select sum(amount) from sales where month = '2021-05'" themselves.
Didn't happen. Developers still prefer function calls, ORMs, and API calls over English-like sentences. And 4GL died off, like UML.
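To make the contrast concrete, here is a minimal sketch (the table name, values, and the `monthly_total` helper are all invented for illustration): the "English-like" query managers were supposed to write, next to the function-call style developers actually standardized on.

```python
import sqlite3

# Toy data so the example is self-contained.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (amount REAL, month TEXT)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [(100.0, "2021-05"), (50.0, "2021-05"), (75.0, "2021-04")],
)

# The 4GL/SQL dream: a sentence-like query a manager would type directly.
(total,) = conn.execute(
    "SELECT sum(amount) FROM sales WHERE month = '2021-05'"
).fetchone()

# What developers reach for instead: a parameterized function call
# (a hypothetical helper standing in for an ORM method).
def monthly_total(db, month):
    (value,) = db.execute(
        "SELECT sum(amount) FROM sales WHERE month = ?", (month,)
    ).fetchone()
    return value

print(monthly_total(conn, "2021-05"))  # same result as the sentence-like query
```

Both forms compute the same thing; the point is that the function-call version is what actually survived in practice.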
But things are better now, and getting easier. You don't need to write assembly anymore, and you can build a web page with WordPress without any coding.
But that created a separate class of developers rather than putting seasoned developers out of a job, even though it's been about 20 years.
Underneath it all there will be people who develop and maintain these AI technologies. And as usage rises, so will the demand for people who can maintain and develop that AI.
That doesn't mean it will never happen. For the past 20 years, progress in IT has been about making IT appear everywhere; that's growth in breadth rather than depth. Before we need tools that let someone program in natural language, we need the industry to be used broadly enough that there is actually enough data for managers to manipulate.
On top of this, the company I work for provides amazing tools for non-technical personnel: for example, Looker is basically SQL for managers. It is also much more powerful than SQL: for simple GROUP BY queries, for instance, it also provides a UI for drilling into the underlying rows behind each row returned by the GROUP BY.
Mathematics in isolation is a social anti-pattern.
Individuals keep arriving at the same conclusions, but they invent different notations/languages to express those results. Communication stalls. Tribes form around languages/styles of expression.
It's much, much worse than reinventing the wheel, too. It can take a very long time to figure out that two mathematical constructions/notations are equivalent.
It took decades for somebody to notice that proofs are programs.
I think it is the future. Writing a mathematical proof is writing a computer program. Today it is common to see, let's say, hacky, untyped work, like some janky bash script that takes you on an adventure you have to spend extra effort exploring. Imagine instead reading a declarative program: everything formalised, with higher-order functions/constructs used where appropriate to encapsulate important generalisable ideas (e.g., in programming, using "map" when you want to loop over a list). Hacky stuff is fine for exploration and experimentation, but something meant for publication should be clear. You could apply engineering principles: optimising your proofs for readability and efficiency, version control. It seems like the future.
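To make the "map" analogy concrete, here is a hedged sketch in Lean 4 syntax (assuming core `List.map` and `List.length`; the theorem name is invented): a proof written as a small, structured, declarative program rather than an ad-hoc script.

```lean
-- A proof-as-program: recursion on the list is the proof by induction.
theorem map_preserves_length {α β : Type} (f : α → β) :
    ∀ l : List α, (l.map f).length = l.length
  | []      => rfl
  | x :: xs => congrArg (· + 1) (map_preserves_length f xs)
```

The base case is definitional, and the inductive step reuses the result on the tail, exactly the kind of encapsulated, generalisable structure the comment describes.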
I am of the opinion that mathematics is very much a human endeavor. Writing a proof is about convincing other mathematicians that a statement is true. While formalized mathematics is one such technique, I don't believe it will replace how mathematics is generally done. Writing a program is something different entirely.
One thing I would be interested in is an objective look at how the limitations of computation affect formalized mathematics. I think there are a lot of sticky computational and philosophical problems there that I don't see mentioned often.
I am not against including a human focused discussion of a proof, stepping through it in human terms. A verified proof is more like the final step or output.
I agree this is not how maths is done; there is a huge exploration stage. People do not just crank out a proof, even though it is sometimes written up in a way that makes you believe that. You can spend days, weeks, months exploring. But at some point you should try to formalise it into something coherent.
Amendment: this seems to be a very controversial discussion with wildly swinging upvotes/downvotes. I hope people will elaborate on their viewpoints.
Proofs are more about understanding why something is true than about verification per se. See this well-known article by the illustrious Bill Thurston (RIP):
I think it's all great. I have been interested in picking up something like Coq for a while. I only made my comment to reflect my personal, tempered expectations. Maybe I am completely wrong, but I just don't see how it could wholly replace how mathematics is done and generated, though I do see how it could helpfully augment it. I think the analogy is with fancy type-system programming languages like Haskell and Idris, which are not practically affecting how people generally write programs and build systems.
In both mathematics and programming, one must come up with something that models the solution to a given problem or system, even if one can formally verify it afterwards; that first part is the hard part (especially in programming, since in mathematics the assumptions are quite explicit and known). I suppose there could be, or are, systems that generate techniques and proofs rather than merely verify them, but I can only imagine that being an intractable problem and one not likely to be practically solved soon.
Yes, the stack of papers on my desk covered with random scribbles is a far cry from a verified proof written in Lean. There is a lot behind a proof that you don't necessarily see reflected in a piece of code. So I agree, something formal is like a final step.
Mathematics is not computation. Mathematics is about stating definitions and proving theorems about those definitions. A mathematician doesn't care whether those definitions are computable. Look at the leading edge (the mathlib community using the Lean prover): a lot of their definitions are non-computable, and it works fine.
I'd rather read something that's formal and readable than something that's merely readable.
> One thing I would be interested in is an objective look at how the limitations of computation affect formalized mathematics. I think there are a lot of sticky computational and philosophical problems there that I don't see mentioned often.
Gödel's theorems are not relevant here. Consider: even if the halting problem had the other answer, the brute-force search could still be impractical. The same applies here.
And frankly, the formal methods community is sick of people spreading FUD based on that stuff. Mixing romantic humanism with technical misunderstandings is no good.
I didn’t mention Goedel’s theorems, so I don’t know what they’re relevant or irrelevant to, and I am not spreading anything.
Maybe I didn’t make that part clear, but it’s more of a question of mine than a statement. It seems to me there would be problems and limits but I haven’t seen them addressed in a way I understand, so the questions and misunderstandings remain.
If you have any references, then by all means share them.
> Mixing romantic humanism with technical misunderstandings is no good.
I think this miscategorizes what I wrote. One could easily do the same.
Mathematics is indeed a human endeavor, as are its goals. If machines are to help, then that’s absolutely great (I’m actually a big fan of computers being used to augment human endeavors), but making grand sweeping claims like I’ve seen in discussion of proof assistants is as romantic as anything.
State-of-the-art research is being explored with Lean. I think a certain level of hand-wavy romanticism is fine for an HN comment if it can inspire people to be interested in the field. I also would prefer to be corrected in a reasonable discussion than with a silent, chilling downvote.
This is actually a great article that helps show that humans are still very much involved in convincing each other, with machines aiding to that effect.
Regarding downvotes: on Hacker News you can't even downvote someone you've replied to, or someone who has replied to you.
I want to second this. I started working on a project in Lean only last December. Since then, I've made several contributions to the mathlib GitHub repo - the first time I've contributed to open source despite wanting to get into it for years. The Zulip forum https://leanprover.zulipchat.com/# is very welcoming and responsive to questions. If you're interested in mathematics and want to contribute to the growing hivemind working on this kind of thing, I highly encourage you to learn Lean and join this community.
I'm not sure if I would recommend that, given that Lean has less support for constructive math than some existing systems, and constructive math can be very convenient for proving stuff about programs/computations.
> constructive math can be very convenient for proving stuff about programs/computations.
How? I can see how proving properties of programs is a place where constructivism won't get in the way, but do you have an example of where it can be more convenient?
The Curry-Howard correspondence only applies if you stick to constructive maths. That means all your proofs are also programs (and computable) for free. E.g., if you're trying to prove "blah(x) = blaz(x)" for a fixed x, where blah and blaz are both computable, the proof goes through simply by evaluating blah(x) and blaz(x). This can't be done if blah or blaz isn't computable.
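A hedged sketch of this in Lean 4 (the definitions `blah` and `blaz` are invented stand-ins for the comment's computable predicates): when both sides are computable, the proof really is "run the program" - `rfl` checks the equality by evaluation.

```lean
def blah (n : Nat) : Nat := n * n
def blaz (n : Nat) : Nat := n + n + (n - 1) * (n - 1) - 1

-- Both sides evaluate to 49, so the kernel accepts the proof
-- by computation alone; no manual reasoning needed.
example : blah 7 = blaz 7 := rfl
```

If either definition were marked `noncomputable`, this evaluation-based proof would no longer be available.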
Sure, but isn't it the case that either your predicates are computable, in which case both a "classical" and a "constructive" proof assistant could evaluate them, or they're not, in which case neither could evaluate them?
Yep, but one could express a computable predicate classically (e.g. by unnecessarily invoking the axiom of choice in its definition). Using a constructive proof assistant prevents accidentally doing this, and encourages using constructive definitions where possible. It also encourages using computable mathematical formulations (e.g. computable reals vs classical reals).
Classical logic can be embedded into constructive logic through any of a number of double-negation translations. Thus it is constructive logic that is arguably more expressive. See https://news.ycombinator.com/item?id=26217587
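The easy half of this is a one-line program; a hedged sketch in Lean 4 (theorem name invented):

```lean
-- Constructively provable: a proof of p refutes any refutation of p.
theorem to_double_neg (p : Prop) : p → ¬¬p :=
  fun hp hnp => hnp hp
```

The converse, ¬¬p → p, is exactly what constructive logic withholds and what the double-negation translations route around.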
The mathematicians who study topos theory, the 'internal logic' of whatever category etc. are definitely "real" mathematicians, and they have to care about constructive settings because these come up very naturally in that field. Constructivism is not an isolated niche anymore, it interacts with "real" math too.
This is way off topic, but that's funny, because it's the other way around for me: I can install Agda with a `pacman -S agda`, but I'd need to go through the AUR for Lean.
I'm wondering about the searchability or canonicalization of such a theorem library. How do you make sure there's no duplicate work? How do you index the thing? Can you match an input/output proposition against the whole database and have it find all related theorems?
I'm thinking of 'automatic lemma matching' for software verification. E.g., in a SPARK proof you might need to use a lemma [0]. If your formal math education is lacking, or your theorem library is not encyclopedic, you might want suggestions from 'the hive'...
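For what it's worth, mathlib already ships a tactic aimed at roughly this question: state the goal and ask the library for a matching lemma. A hedged sketch (Lean 4 / mathlib syntax; the tactic was called `library_search` in the Lean 3 era and is `exact?` in Lean 4):

```lean
import Mathlib.Tactic

-- Ask the library: "which lemma closes this goal?"
-- `exact?` searches mathlib and suggests something like
-- `exact Nat.add_comm a b`.
example (a b : ℕ) : a + b = b + a := by exact?
```

It is keyed on the shape of the goal, so it is closer to the 'match a proposition against the database' idea than to full-text search.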
How does Lean compare to Coq? And which one would be preferable for non-mathematicians trying to learn more about math? Which one would be better if you come from a software background and want to do formal verification for parts of your programs?
There's an excellent online book that introduces Coq and how to use it for verification of programs: https://softwarefoundations.cis.upenn.edu/. There's also a more advanced book, http://adam.chlipala.net/cpdt/, that goes into more detail. In comparison, Lean has little teaching material for formal verification of programs.
This is only a partial answer, but my understanding is that Lean is really not meant for software verification work. That's very much up Coq's alley.
I'm much less certain of this, but I think Lean is better for non-mathematicians trying to learn more about math --- provided, that is, that you're set on playing with a proof assistant. I'm not sure I'd recommend that.
Not true. Lean and Coq are very similar. However it is true that Coq has more of a history when it comes to verifying software (CompCert is written in Coq for example). And Lean is the state of the art when it comes to mathematics (mathlib).
Sorry to be the buzzkill here, but I'll say what needs to be said. Computers are decades away from being able to formalize any serious mathematical results, and "formalized mathematics" is a fancy term for "formal methods in computer science".
I've worked for a good number of years in the US software industry as a compiler engineer, and I am a mathematics master's student now. I've played with proof assistants extensively; in particular, I'm currently working on a paper formalizing semi-cubical sets in Coq. Working with Coq has been most painful, and it requires spoon-feeding to verify even the simplest results. Worst of all, it's riddled with bugs, and I'm constantly astounded at how stupid it is.
That said, Lean is a much newer proof assistant, and while it offers some nice conveniences, it will take at least a decade before its power is comparable to Coq.
Mathematics is a solitary activity for good reason. Much of the work happens on pen-and-paper, and it takes months of toil to prove a result. No serious mathematician is going to go through the pain of encoding her proof in a proof assistant, and spoon-feeding it to verify every little result.
What's the news? That several computer scientists in the area are doing the work of formalizing simple mathematical results in Lean. Don't mistake Lean for some ground-breaking new technology; sure, it has a thriving community, and offers many conveniences, but the level of abstraction it offers isn't even close to the level of abstraction that my subject offers (I study stable ∞-categories).
Sure, if you want to do the thankless work of formalizing well-known results from group theory, ring theory, or vector spaces in Lean, be my guest. But don't be under the illusion that it's going to change the way mathematics is fundamentally done. The work is comparable to typesetting a document, except that it's a LOT more painful.
"I find it absolutely insane that interactive proof assistants are now at the level that within a very reasonable time span they can formally verify difficult original research." Peter Scholze [1].
Nope. You clearly are not up to date on what is happening right now, with leading-edge mathematics being defined and proven correct in Lean, in a collaboration of mathematicians and computer scientists.