Prentice_Hall_International_Series_in_Computer_Science_Richard_Bird.pdf
21.6 MB
Richard Bird & Oege de Moor, Algebra of Programming (1997)
(φ (μ (λ)))
Edsger W. Dijkstra's Request to Programmers on Correctness of Programs
How to practice this? Of course, have your automated testing tools. Haskell has had one of the finest for over twenty years, one that went on to influence a whole generation of program-testing tools: John Hughes' QuickCheck. It checks your programs against properties (not individual results) they should satisfy; a randomized set of tests is generated for those specifications and properties using combinators (which are extremely powerful in a pure functional language such as Haskell).
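A minimal sketch of what such a property looks like (assuming a GHC setup with the QuickCheck package installed; the property names are just illustrative):

```haskell
import Test.QuickCheck

-- A property rather than a single test case: reversing twice is the identity.
prop_reverseTwice :: [Int] -> Bool
prop_reverseTwice xs = reverse (reverse xs) == xs

-- Properties can relate operations: how reverse distributes over (++).
prop_reverseAppend :: [Int] -> [Int] -> Bool
prop_reverseAppend xs ys = reverse (xs ++ ys) == reverse ys ++ reverse xs

main :: IO ()
main = do
  quickCheck prop_reverseTwice   -- runs ~100 randomly generated lists
  quickCheck prop_reverseAppend  -- shrinks any counterexample it finds
```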
But! Dijkstra's claims go above and beyond this, as he said in On the Reliability of Programs:
But because in programming we think not in terms of numerical values but in terms of variables, we have abstracted from the values actually processed by the arithmetic unit and we are only allowed to make this abstraction when the multiplier would do any multiplication correctly. I make this point because it is often not realised that the at first sight extreme and ridiculous reliability requirements imposed by us on the hardware is a direct consequence of the fact that without it we could not afford this vital abstraction. Another consequence of the number of 30.000 years is that sampling testing is hopelessly inadequate to convince ourselves of the correctness even of a simple piece of equipment as a multiplier: whole classes of in some sense critical cases can and will be missed! All this applies a fortiori to programs that claim to cope with many more cases and take more time for each single execution. The first moral of the story is that program testing can be used very effectively to show the presence of bugs but never to show their absence.
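For scale, the arithmetic behind that figure is easy to reconstruct (my own back-of-envelope, not Dijkstra's exact numbers): if the operands are, say, 27 bits each (the exact width doesn't change the point), exhaustive testing means

\[ 2^{27} \times 2^{27} = 2^{54} \approx 1.8 \times 10^{16} \ \text{cases}, \]

and at a few tens of microseconds per multiplication that is on the order of \(10^{12}\) seconds, i.e. tens of thousands of years of testing for a single multiplier.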
And he continues by stabbing at the very assumptions we make about programs and the art of programming: that we may effectively treat all of their mechanisms as black boxes, as if each subroutine and each procedure is declared and then "we don't know how it works" until it magically shows us what we want to see:
But as long as we regard the mechanism as a black box, testing is the only thing we can do. The conclusion is that we cannot afford to regard the mechanism as a black box, i.e. we have to take its internal structure into account. One studies its internal structure and on account of this analysis one convinces oneself that if such and such cases work "all others must work as well." That is, the internal structure is exploited to reduce the number of still necessary testcases, for all the other ones (the vast majority) one tries to convince oneself by reasoning, the only problem being that the amount of reasoning often becomes excessive, with the sad result that bugs remain.
So the first step is to remove the assumption that all mechanisms involved in programming are black boxes, because they aren't. Once you've done that, the goal is to integrate, step by step, techniques of verification (Hoare logic, principles of invariance, and proofs of termination) to aid your algorithm development, top-down. You don't first develop the algorithm and then verify it; that would take ages and isn't efficient. You use the verification tools while laying down the foundations of your algorithm on your structures of choice. This process of refinement needs to go hand in hand with development, the same way that when you're proving a theorem you don't just write down all the equations or steps and then verify their sequence; rather, you start from a lemma or a given statement, try to prove something useful from there, and then use the newly proven property to prove the required result.
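To make this concrete, here is the kind of annotation meant (a textbook-style Hoare triple with its loop invariant for summing an array; my illustration, not something from Dijkstra's text). The invariant and the termination argument are written down while the loop is being designed, not afterwards:

\[ \{\, n \ge 0 \,\}\;\; s := 0;\ i := 0;\ \mathbf{while}\ i < n\ \mathbf{do}\ s := s + a[i];\ i := i + 1\ \mathbf{od} \;\;\{\, s = \textstyle\sum_{k=0}^{n-1} a[k] \,\} \]

with invariant \( 0 \le i \le n \;\wedge\; s = \sum_{k=0}^{i-1} a[k] \) and bound function \( n - i \) for termination.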
And such refinement isn't vague in computer science; it has been well practiced on a large amount of software, both in academia and in production. One of the best papers on this is Niklaus Wirth's Program Development by Stepwise Refinement (1971), which promoted modularity before it became the cool term and the talk of the town. And this modularity isn't just the separation of things into neat categories; it's about using modularity as a process of refining your methods.
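Wirth's own worked example is the eight-queens program; as a compact stand-in, here is a sketch of the same discipline in Haskell, in the calculational style of the Bird & de Moor book shared above: write an executable specification first, then refine it into an efficient program, and keep the two connected. The maximum-segment-sum example and the names below are my illustration, not taken from either text (QuickCheck assumed installed).

```haskell
import Data.List (inits, tails)
import Test.QuickCheck

-- Step 1: an executable specification, obviously right but cubic:
-- the maximum sum over all contiguous segments (the empty segment counts, so the result is >= 0).
mssSpec :: [Int] -> Int
mssSpec = maximum . map sum . segments
  where segments xs = [s | t <- tails xs, s <- inits t]

-- Step 2: the linear-time refinement obtained by calculation.
mss :: [Int] -> Int
mss = maximum . scanr step 0
  where step x acc = max 0 (x + acc)

-- The refinement step is justified by proof; a property test merely keeps us honest.
prop_refinementAgrees :: [Int] -> Bool
prop_refinementAgrees xs = mss xs == mssSpec xs

main :: IO ()
main = quickCheck prop_refinementAgrees
```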
(φ (μ (λ)))
And indeed, this doesn't make programming "easy"; it never was, and no amount of glossy editors or JS packages or even "better languages" like Haskell will make it any easier, as Wirth concludes:
The detailed elaborations on the development of even a short program form a long story, indicating that careful programming is not a trivial subject. If this paper has helped to dispel the widespread belief that programming is easy as long as the programming language is powerful enough and the available computer is fast enough, then it has achieved one of its purposes.
niklas-wirth-stepwise-refinement.pdf
829.7 KB
Niklaus Wirth, Program Development by Stepwise Refinement (1971)
Rustaceans who have bought into the story that Rust is the "next best thing" after C, C++, Java, etc. often forget to mention one important language that still remains actively developed, is just as type-safe, and whose compiler works just as hard to figure out the errors the programmer makes.
It is Ada, a language from the family of Wirthian languages, influenced by Pascal. And it wasn't just an "academic toy", as a lot of Rustaceans consider Haskell or Idris to be: it was developed specifically under the United States Department of Defense for highly critical software that needed to be verified correct.
And just like Rust, it was designed as a systems language, with message passing, extremely strong typing, exception handling, generics, and also object-oriented features. And it got that going before C++.
There's an option in modern implementations for garbage collection, but no one really uses or recommends it. The last standardized edition was in 2023, GNU has one of the best distributions for it with our favorite GPLv3+, and everything still works. You load ada-mode into your Emacs and you're ready.
Moreover, there's a tool with Ada that does a much better job than rust-analyzer (which is a compiler helper behind an LSP interface) at verifying your code. It's SPARK: it restricts Ada to a subset with logical soundness, rigorous but simple semantics, and a core amenable to formal verification.
I hate to break it to you, but no matter how much you fight with rustc, there are still guaranteed ways of hitting errors at runtime. GNATprove (the SPARK prover) would generate verification conditions and then establish whether your program meets those properties. Numerical overflow, array indices running out of range, type violations: these are the minimum checks it conducts.
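To give a flavour of what such verification conditions look like, here is a schematic pair for an assignment A(I) := A(I) + 1 over an array of Integers (my illustration, not actual GNATprove output):

\[ \mathit{A'First} \le I \le \mathit{A'Last} \qquad\text{and}\qquad A(I) + 1 \le \mathit{Integer'Last} \]

The first discharges the index check, the second the overflow check; the prover either proves each condition from the surrounding context or reports it as unproved.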
While Rust obviously remains a new language, and while I use and admire Rust a lot, it's a bit obnoxious to claim that it's a new invention of its kind that promises safety and security. Granted, Rust brought ownership into the mainstream, but is it the only or most secure path towards safety? Doubtful.
Rust is yet to be tested in the highly critical conditions in which Ada has remained a victor for decades, replacing the roughly 450 languages the Department of Defense had been relying on.
And of course, walking the path of Dijkstra, we can say that while Robin Milner's promise that "well-typed programs can't go wrong" is true, "well-proven programs can't help but be right."
(φ (μ (λ)))
And once again, it's less about a particular language than about the attitude. Dijkstra would be curious about Ada or SPARK, but he would once again remind us: when you engage with SPARK's proof verification, you're engaging in an act of proving something, something that can effectively be done by yourself. The prover only mechanizes it for you; in the case of complicated programs it can be a helpful aid, but it can't replace you, the programmer. Just as no amount of "intelligent" software or theorem provers will replace a mathematician's act of constructing proofs from scratch; they only aid him as a tool, the same way books, or certain strategies, have aided us.
Once again, writing good and correct programs has little to do with the tools you have or the language you choose. Knuth wrote the core of TeX, one of the most beautiful programs, as Guy L. Steele reports, without an LSP and in a language usually dismissed as a weak "teaching language": Pascal.
It is just as in mathematics: the only way to become a better mathematician is to construct proofs from scratch, and no amount of intelligent and well-crafted hints or examples will automatically make you better at proofs. They can help in optimizing certain things, but the act remains the same: start with the fundamentals, and slowly prove each step.
Languages, in programming or otherwise, are different paradigms, different ways of speaking and writing, i.e., of expressing. Tools and methods in proof-writing are similar: they are different ways of expressing your approach to the theorem. The crux of the proof always remains an ingenious thought that you had to put in, which is what makes the tools worthwhile. Good problem-solving books aren't focused on techniques and tricks; they are focused on preparing you, through exercises, to develop your thinking about proofs. Programming languages and tools are the same: they are there to develop and aid your thinking, helping you better express that "ingenious thought" you might have had. They can't replace it.
Gauss came up with congruence (on which the entire foundation of modern number theory and cryptography rests) as a 19-year-old at university; Hoare came up with quicksort as a student too. The former did it at a time when books were bulky and you couldn't scroll through thousands of pages while sitting at a table. I wish some amount of intelligent tooling could help us get there, but it can't. Not because of bad technology, but because it's a different domain altogether. Technology was never about replacing human thought; it was an aid. In some respects, our obsession with finding intelligence externally, whether in God, a "theory of everything", or LLMs, is a symptom of how much we've stopped reflecting on our own capabilities and their limitations.
(φ (μ (λ)))
https://invidious.rocks/watch?v=uyLy7Fu4FB4
Use of Formal Methods at Amazon Web Services:
https://lamport.azurewebsites.net/tla/formal-methods-amazon.pdf
To put it quite bluntly: as long as there were no machines, programming was no problem at all; when we had a few weak computers, programming became a mild problem, and now we have gigantic computers, programming has become an equally gigantic problem.
Edsger Dijkstra, The Humble Programmer (1972)
Just finished filling out the GNU Guix User/Contributor Survey; I'd recommend all Guix users fill it out so that the devs and community have better ways of improving.
I recently sent a patch to package an app (jack-mixer) which wasn't part of the Guix repositories. And I absolutely loved the way Guix handles packages: all you need is about 60 lines of Guile Scheme and you have a reproducible, declarative package ready.
As someone who obviously loves Scheme, this has been a good journey. I am glad I switched from Arch Linux after 4 years to Guix: not only am I no longer afraid of breaking my distribution and losing everything, I also feel the ease of just modifying a few Scheme files to handle the entire system.
And as a supporter of Free and Libre Software, I find Guix's adherence to GNU standards commendable. I have to give up a few conveniences, but overall I'm happy with having a fully libre operating system (the BIOS obviously isn't free; only the kernel and above are). And Emacs integrates beautifully with GNU Guix; no other editor has ever let you manage system packages with just a few keybindings inside the editor (cf. emacs-guix).
(φ (μ (λ)))
Recording for the third session: we worked through the β-reduction relation and β-equality. The next few sessions will be devoted to Church-Rosser; we'll be understanding, and maybe mechanizing, its proof from scratch. https://www.youtube.com/live/PX4DlRyxv3M
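For reference, the definitions covered are the standard ones (my notation, following the usual presentation in Type Theory and Formal Proof): one-step β-reduction is the compatible closure of the basic rule

\[ (\lambda x.\, M)\, N \;\rightarrow_{\beta}\; M[x := N], \]

\(\twoheadrightarrow_{\beta}\) is its reflexive-transitive closure, and β-equality \(=_{\beta}\) is the equivalence relation it generates: \(M =_{\beta} N\) iff M and N are connected by a finite zig-zag of \(\rightarrow_{\beta}\) steps in either direction.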
The sessions for Type Theory and Formal Proof are back in action; we finished everything in Chapter 1 except Church-Rosser, and are due to meet on 24th November to discuss the first 10 exercises of the chapter.
The recording of the fourth session is out; check it out.
On top of that, we're also working on starting a parallel project that uses formal methods (Hoare logic, recursive induction) to practice proofs of correctness without always relying on theorem provers (Lean, Coq). This would mean practicing formal methods on two kinds of languages, imperative and functional; we would start with the imperative kind.
To know more and/or participate, comment below or DM me and I'll share the link to the group chat.