Forwarded from Deputy Sheriff The Viking Programmer
45.pdf
24.8 KB
Andrew Appel, Garbage Collection Can be Faster Than Stack Allocation (1986)
Hear, O nobly born: Techniques can be taught, but the Way of the Hacker cannot be taught. Skills can be acquired, but the Way of the Hacker is not a checklist of skills. Programming can be accomplished, but the Way of the Hacker is not a place at which you can stop and say "I have arrived!"
Hear, O nobly born: The Way of the Hacker is a posture of mind; he who seeks a teacher of the Way knows it not, but he is only looking for a mirror. All those competent to teach the Way know that it cannot be taught, only pursued with joyous labor and by emulation of the great hackers of the past.
Hear, O nobly born: Great were the hackers of the past! Subtle and deep in their thinking, shaggy-bearded and with thunder on their brows! You may seek to become as them, but it will not suffice you to grow a beard.
Hear, O Nobly Born: The center of the mystery is the act of coding. You have a keyboard before you; pursue the Way through work.
Eric S. Raymond, The Loginataka
❤4⚡3
Forwarded from Programming Deadlock
How OCaml type checker works -- or what polymorphism and garbage collection have in common
https://okmij.org/ftp/ML/generalization.html
okmij.org
Efficient and Insightful Generalization
A short guide on OCaml type checker, describing the surprisingly elegant algorithm for generalization, which generalizes to first-class polymorphism, MLF and local types. Polymorphism and regions have much in common.
G._W._Leibniz_-_A_New_Method_for_Finding_Maxima_and_Minima.pdf
238.2 KB
G.W. Leibniz, A New Method for Finding Maxima and Minima, and Likewise for Tangents, and with a single kind of calculation for these, which is hindered neither by fractional nor by irrational quantities.
Actis Erud. Lips. Oct. 1684, p.467-473
🤔1
(φ (μ (λ)))
G._W._Leibniz_-_A_New_Method_for_Finding_Maxima_and_Minima.pdf
A bilingual translation of one of several fine papers from Leibniz. It is a disaster that modern introductions to calculus provide students with a way of understanding maxima/minima that leaves them dumbfounded, even when they see the same thing later in different forms, whether in the differential geometry of manifolds or in functional analysis.
🤔1
(φ (μ (λ)))
This paper also contains one of the earliest uses of the "algorithm" concept in mathematical analysis.
🤔1
using llm to write code is like asking an artist to paint for you. if you only want the end result, by all means! if you are someone like me who enjoys the process of painting, then why would you bother automating the fun part away? one may say, “but i am only using llm to write code. i am still doing the problem solving myself!”. to me, programming isn’t complete if i don’t get to express the solution in code myself. it isn’t my art if i don’t create it myself.
https://kennethnym.com/blog/why-i-still-wont-use-llm/
⚡3👍1
Axler - 2020 - Measure, Integration & Real Analysis.pdf
5.9 MB
Sheldon Axler, Measure, Integration & Real Analysis (2020)
Macros are described in Curry's terminology as meta-programming. A meta-program is a program with the sole purpose of enabling a programmer to better write programs. Although meta-programming is adopted to various extents in all programming languages, no language adopts it as completely as lisp. In no other language is the programmer required to write code in such a way to convenience meta-programming techniques. This is why lisp programs look weird to non-lisp programmers: how lisp code is expressed is a direct consequence of its meta-programming needs. As this book attempts to describe, this design decision of lisp—writing meta-programs in lisp itself—is what gives lisp the stunning productivity advantages that it does. However, because we create meta-programs in lisp, we must keep in mind that meta programming is different from U-Language specification. We can discuss meta-languages from different perspectives, including other meta-languages, but there is only one U-Language.
Doug Hoyte, Let Over Lambda (2008)
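To make the quoted point concrete, here is a minimal sketch in Racket (Hoyte's own examples are in Common Lisp, and the names here are purely illustrative) of a macro as a program whose job is to write another program at compile time:

#lang racket

;; define-point is a meta-program: when the module is compiled it writes out
;; the constructor and the two accessor definitions that the programmer would
;; otherwise have to type by hand.
(define-syntax-rule (define-point make-name x-name y-name)
  (begin
    (define (make-name x y) (cons x y))
    (define (x-name p) (car p))
    (define (y-name p) (cdr p))))

(define-point make-point point-x point-y)
(point-x (make-point 3 4))   ; => 3

The expansion of (define-point ...) is ordinary Lisp code; the macro exists only to spare the programmer from writing it, which is exactly the sense of meta-programming above.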
❤1
(φ (μ (λ)))
While most languages just try to give a flexible enough set of these primitives, lisp gives a meta-programming system that allows any and all sorts of primitives. Another way to think about it is that lisp does away with the concept of primitives altogether. In lisp, the meta-programming system doesn't stop at any so-called primitives. It is possible, in fact desired, for these macro programming techniques used to build the language to continue on up into the user application. Even applications written by the highest-level of users are still macro layers on the lisp onion, growing through iterations.
Doug Hoyte, Let Over Lambda (2008)
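In the same spirit, a hedged sketch (again in Racket, with an illustrative name) of what "doing away with primitives" looks like in practice: racket/base ships no while loop, but a user-level macro makes one indistinguishable from a built-in, another layer of the onion written as ordinary library code.

#lang racket

;; `while` is not a primitive of racket/base; this small macro adds it,
;; expanding into a named-let loop before the program runs.
(define-syntax-rule (while test body ...)
  (let loop ()
    (when test
      body ...
      (loop))))

(define n 0)
(while (< n 3)
  (set! n (add1 n)))
n   ; => 3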
❤1
(φ (μ (λ)))
On Lisp and Subversive Mathematical Formalism
One of the reasons why Lisp continues to fascinate me is precisely this capacity of Lisp not to rely on any primitives and to rewrite itself. The power that this form of meta-programming enables goes beyond just allowing for things like closures, which have by now appeared in every major language from Java to Haskell; it acts as a layer of abstraction that doesn't just represent existing rules but rewrites them. It is very similar to mathematical formalism, whether that is Peacock's Principle of the Permanence of Equivalent Forms or Hilbert's formalism, where one considers numbers as nothing but symbols on paper that obey certain laws.
In mathematical formalism one can represent almost everything in a different language (say, an algebraic one) and thereby extend the existing field of knowledge. This happened several times in the history of mathematics, first from the late 17th century through the 18th and 19th centuries, starting with the likes of Euler and Lagrange and continuing with George Peacock and De Morgan. The study of analysis changed radically during these periods. Euler began his studies of analysis by calculating derivatives and logarithms while still having to rely on graphical representations to apply that analysis to mechanics. Lagrange changed the course of things radically when he announced in the preface to his Mécanique analytique that he would set out all the existing laws of Newtonian mechanics without a single diagram in the book. And so he did: today one studies classical mechanics not by imagining solids but by working with algebraic symbolizations of them.
Lisp, because it can rewrite itself, has the same capacity. It allows the programmer to navigate problem domains not by handing him something he must be restricted to, but by giving the problem solver the ability to make his own tools, and moreover to change the very substances with which he makes those tools. In the same way, one can change not only the equations of Newtonian mechanics but the very grounds on which Newtonian mechanics has been theorized. And it is not just classical mechanics; the same revolution occurred again and again, in different areas of mathematics and in different eras. It happened during the mid-19th century, when Gauss, who had already discovered the foundations of non-Euclidean geometry, could not keep it confined to his own constructions; through the works of Riemann, Darboux et al. it went on to rewrite the very way Euclidean geometry had been conceived of until then, and there was no going back after that. A century later, in the early '50s, a similar revolution happened to the field of topology, which had been born out of Euler's Königsberg bridge problem (the same one that gave birth to graph theory) and later gathered together by the efforts of Poincaré. Topology, which had been formulated strictly along the lines of naive set theory (with the Axiom of Choice), got itself rebased when Eilenberg and Steenrod constructed an algebraic generalization of de Rham cohomology. This was also the intersection that gave birth to what is now fashionable among certain computer scientists, namely category theory. But category theory did not arise to serve itself; it was devised precisely to change the ways in which topology had hitherto been conceived of, just as the non-Euclidean geometry of Riemann, building on Gauss, generalized everything about geometry.
(φ (μ (λ)))
This, I believe, is also the case with Lisps, and one sees examples of it across several dialects, such as Common Lisp with its CLOS. While I do agree with Hoyte (2008) when he says Common Lisp is the language to consider for serious Lisp-based macro meta-programming, I would contend that Racket (previously PLT Scheme) must be ranked on similar grounds. Racket's defining concept is this very power of Lisp taken to its logical extreme. When you are designing your programs with Racket and want to exercise its Lisp powers, you don't fit the demands of the program to the "primitives" that some implementation of Racket provides you; rather, you rewrite the very foundations on which you want to solve the problem. In other words, you create a DSL that has every bit of access to what a Lisp (Racket) can do while freely extending its powers. And mind you, in the same way that non-Euclidean geometry was incompatible with Euclid's axioms (the parallel postulate) yet was still able to reincorporate them,
a Racket #lang can be defined even if the very semantics of this new #lang don't exist in, or are incompatible with, Racket itself. This can be illustrated with the Hackett language by Alexis King, which implements several type-level semantics from Haskell, including ones deeply tied to Haskell's type checker: higher-kinded types, lazy evaluation, multi-parameter typeclasses, and so on. And mind you, this isn't a new language for which we have to build a compiler targeting particular systems; it is just a macro, which is exactly what every Racket #lang is. This is what syntactic extension can give you, and only in Lisp. Why only in Lisp? Because there is no difference between syntactic and semantic extension in Lisps: the semantics are implemented in the same structure in which the (otherwise non-existent) syntax is. It's lists all the way.
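As a hedged illustration of how little machinery a #lang needs (the file name arith-only.rkt and the forms below are hypothetical, and this is nowhere near the scale of Hackett): a module that exports its own #%module-begin is already a language, because every surface program is handed to that macro to be rewritten before it runs.

#lang racket

;; --- arith-only.rkt (hypothetical language module) ---
;; Export a replacement #%module-begin plus the only bindings the new
;; language is allowed to see: function application, literals, and arithmetic.
(provide (rename-out [arith-module-begin #%module-begin])
         #%app #%datum #%top-interaction
         + - * /)

;; Every module written in this language has its top-level expressions
;; wrapped so that each result is printed.
(define-syntax-rule (arith-module-begin expr ...)
  (#%module-begin (displayln expr) ...))

;; --- a client program, in its own file ---
;; #lang s-exp "arith-only.rkt"
;; (+ 1 2)          ; prints 3
;; (* 3 (- 7 4))    ; prints 9
;; (displayln "hi") ; rejected: displayln is not part of the language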
Chang et al. (2017), the paper that influenced Alexis King's Hackett, write:

Indeed, programmers need only supply their desired type rules in an intuitive mathematical form. Creating type systems with macros also fosters robust linguistic abstractions, e.g., they report type errors with surface language terms. Finally, our approach produces naturally modular type systems that dually serve as libraries of mixable and matchable type rules, enabling further linguistic reuse. When combined with the typical reuse of the runtime that embedded languages enjoy, our approach inherits the performance of its host and thus produces practical typed languages with significantly reduced effort.

This is exactly how one avoids being cornered while programming (Sussman, 2021), when one faces the consequences of the restrictions that your programming language of choice puts on you, and it is also why Lisp rejects the criticism of "not enough strict/strong static typing" that might be levelled at her. You want strong static typing? Cool, here's a macro for you. You want lazy call-by-name evaluation? Cool, here's a macro for you. You want both static and dynamic typing with call-by-push-value evaluation? Cool, here's a macro for you. You want dependent types to implement Martin-Löf type theory, HoTT, or Thierry Coquand's Calculus of Constructions? Cool, here indeed is a macro for you.
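To keep the "here's a macro for you" refrain honest, here is a toy of my own (not the paper's implementation, and far short of a real type system): a macro that runs a check at expansion time, so the error surfaces before the program ever runs, which is the germ of the type-systems-as-macros idea.

#lang racket
(require (for-syntax racket/base))

;; checked-div behaves like /, except that a literal zero divisor is rejected
;; while the code is being compiled rather than when it is executed.
(define-syntax (checked-div stx)
  (syntax-case stx ()
    [(_ num den)
     (if (and (number? (syntax-e #'den)) (zero? (syntax-e #'den)))
         (raise-syntax-error 'checked-div "division by a literal zero" stx #'den)
         #'(/ num den))]))

(checked-div 10 2)     ; => 5
;; (checked-div 10 0)  ; compile-time error: division by a literal zero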
In the same manner in which Eilenberg and Steenrod's homological algebra eventually redefined all of topology, or in which Errett Bishop's Foundations of Constructive Analysis, by giving up the law of the excluded middle, poor notions of continuity, and the abuse of the Axiom of Choice, redefined the methods of analysis as a whole, so does Lisp, by giving up syntactic, typing, and inheritance restrictions, open a whole new world for the programmer, one that pushes the boundaries of what he can implement to solve his problem. If Perlis' maxim has any weight, then Lisp weighs the most: it not only changes the way you program, it redefines the very coordinates upon which you design and refactor computer programs.
👍1