(φ (μ (λ)))
www.phimulambda.org

https://tv.dyne.org/c/phimulambda

Uncovering underlying intersections between philosophy (φ), mathematics (μ) and logic (λ).

Other embeddings include:
- Computing
- Cognitive Science
- Linguistics
- Statistics

@DivyaRanjan1905
(φ (μ (λ)))
Macros are described in Curry's terminology as meta-programming. A meta-program is a program with the sole purpose of enabling a programmer to better write programs. Although meta-programming is adopted to various extents in all programming languages, no language…
While most languages just try to give a flexible enough set of these primitives, lisp gives a meta-programming system that allows any and all sorts of primitives. Another way to think about it is that lisp does away with the concept of primitives altogether. In lisp, the meta-programming system doesn't stop at any so-called primitives. It is possible, in fact desired, for these macro programming techniques used to build the language to continue on up into the user application. Even applications written by the highest-level of users are still macro layers on the lisp onion, growing through iterations.

Doug Hoyte, Let Over Lambda (2008)
(φ (μ (λ)))
On Lisp and Subversive Mathematical Formalism

One of the reasons Lisp continues to fascinate me is precisely its capacity to rely on no fixed primitives and to rewrite itself. The power that this form of meta-programming enables goes beyond allowing for things like closures, which have since appeared in every major language from Java to Haskell; it acts as a layer of abstraction that doesn't just represent existing rules but rewrites them. It is very similar to mathematical formalism, whether that is Peacock's Principle of the Permanence of Equivalent Forms or Hilbert's formalism, where one considers numbers as nothing but symbols on paper that obey certain laws.

In mathematical formalism one can represent almost everything in a different language (say, an algebraic one) and thereby extend the existing field of knowledge. This happened several times in the history of mathematics, first from the late 17th century through the 18th and 19th centuries, starting with the likes of Euler and Lagrange and continuing with George Peacock and De Morgan. The study of analysis changed radically in this period: Euler began his studies of analysis by calculating derivatives and logarithms while still having to rely on graphical representations to apply that analysis to mechanics. Lagrange changed the course of things radically when he announced in the preface to his Mécanique analytique that he would re-derive all the existing laws of Newtonian mechanics without including a single diagram in the book. And so he did; today one studies classical mechanics not by imagining solids, but by working on algebraic symbolizations of them.

Lisp, because it can rewrite itself, has the same capacity. It allows programmers to navigate problem domains not by handing them something they are restricted to, but by giving them the ability to make their own tools, and moreover to change the very substance from which those tools are made. In the same way, one can not only change the equations of Newtonian mechanics but change the very grounds on which Newtonian mechanics was theorized. And it is not just classical mechanics; the same revolution occurred again and again in different areas of mathematics, in different eras. It happened in the mid 19th century, when the foundations of non-Euclidean geometry that Gauss had already discovered could not be confined to his own constructions; through the works of Riemann, Darboux et al., they went on to rewrite the very way Euclidean geometry had been conceived of until then, and there was no going back after that. A century later, in the early 1950s, a similar revolution happened to topology, the field born out of Euler's Königsberg bridge problem (the same one that also gave birth to graph theory) and later gathered together by the efforts of Poincaré. Topology, which had been formulated strictly along the lines of naive set theory (with the Axiom of Choice), got itself rebased when Eilenberg and Steenrod were able to construct an algebraic generalization of de Rham cohomology. This was also the intersection that gave birth to what is now fashionable among certain computer scientists, i.e., category theory. But category theory didn't arise to serve itself; it was devised precisely to change the ways in which topology had hitherto been conceived of, the same way the non-Euclidean geometry of Riemann, building on Gauss, generalized everything about geometry.
(φ (μ (λ)))
This, I believe, is also the case with Lisps, and one sees examples of it across several dialects, such as Common Lisp with its CLOS. While I do agree with Hoyte (2008) when he says Common Lisp is the language to consider for serious Lisp-based macro meta-programming, I would contend that one must rank Racket (previously PLT Scheme) on similar grounds. Racket's defining concept is based on this power of Lisp, taken to its logical extreme. When you design your programs with Racket and want to exercise its Lisp powers, you don't fit the demands of the program to the "primitives" that some implementation of Racket provides; rather, you rewrite the very foundations on which you want to solve the problem. In other words, you create a DSL that has every bit of access to what a Lisp (Racket) can do, while freely extending its powers. And mind you, the same way non-Euclidean geometry was incompatible with Euclid's parallel postulate yet was still able to reincorporate it, a Racket #lang can be defined even if the semantics of this new #lang don't exist in Racket or are incompatible with it; a minimal sketch of such a #lang follows below. This can be illustrated with the Hackett language by Alexis King, which implements several type-level semantics from Haskell, including ones deeply tied to Haskell's type checker: higher-kinded types, lazy evaluation, multi-parameter typeclasses, and so on. And mind you, this isn't a new language for which we have to build a compiler targeting particular systems; it is just macros, which is exactly what every Racket #lang is. This is what syntactic extension can give you, only in Lisp. Why only in Lisp? Because there is no difference between syntactic and semantic extension in Lisps: the semantics are implemented in the same structure in which the non-existent syntax is. It's lists all the way.
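To make the #lang claim concrete, here is a minimal, hypothetical sketch (the file names and the strict-if rule are mine, nothing to do with Hackett's machinery): a module language that exports all of Racket but swaps if for a stricter variant, changing the language's semantics with nothing but a macro and a provide line.

;; strict-lang.rkt -- a tiny module language built from plain Racket
#lang racket

;; Like `if`, but the condition must be an actual boolean.
(define-syntax-rule (strict-if c t e)
  (let ([v c])
    (unless (boolean? v)
      (error 'if "expected a boolean condition, got: ~v" v))
    (if v t e)))

;; Export everything Racket has, except swap our `if` in for the old one.
(provide (except-out (all-from-out racket) if)
         (rename-out [strict-if if]))

;; client.rkt -- a program written in the new language:
;; #lang s-exp "strict-lang.rkt"
;; (if (= 1 1) 'yes 'no)  ; => 'yes
;; (if 42 'yes 'no)       ; error: if: expected a boolean condition, got: 42

No compiler was built; the new semantics ride entirely on Racket's macro expander.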

Chang et al. (2017), the paper that influenced Alexis King's Hackett, write:
Indeed, programmers need only supply their desired type rules in an intuitive mathematical form. Creating type systems with macros also fosters robust linguistic abstractions, e.g., they report type errors with surface language terms. Finally, our approach produces naturally modular type systems that dually serve as libraries of mixable and matchable type rules, enabling further linguistic reuse. When combined with the typical reuse of the runtime that embedded languages enjoy, our approach inherits the performance of its host and thus produces practical typed languages with significantly reduced effort.
This is exactly how one avoids being cornered while programming (Sussman, 2021), facing the consequences of the restrictions one's programming language of choice imposes, and it is also why Lisp shrugs off the criticism of "not enough strict/strong static typing" that might be leveled against her. You want strong static typing? Cool, here's a macro for you. You want lazy, call-by-name evaluation? Cool, here's a macro for you. You want both static and dynamic typing with call-by-push-value evaluation? Cool, here's a macro for you. You want dependent types, to implement Martin-Löf type theory with HoTT or Thierry Coquand's Calculus of Constructions? Cool, here indeed is a macro for you.
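As a toy flavor of the first of these offers (a sketch of the general technique described by Chang et al., not their implementation; the int-lit name is hypothetical): a macro can run its checks at expansion time, so ill-typed terms are rejected before the program ever runs.

#lang racket
(require (for-syntax racket/base))

;; `int-lit` accepts only integer literals; anything else is rejected
;; during macro expansion, i.e., as a compile-time syntax error.
(define-syntax (int-lit stx)
  (syntax-case stx ()
    [(_ n)
     (begin
       (unless (exact-integer? (syntax-e #'n))
         (raise-syntax-error 'int-lit "expected an integer literal" stx #'n))
       #'n)]))

(int-lit 42)       ; expands to 42
;; (int-lit "no")  ; would not compile: expected an integer literal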

In the same manner that Eilenberg and Steenrod's homological algebra eventually redefined all of topology, and that Errett Bishop's Foundations of Constructive Analysis redefined the methods of analysis as a whole by giving up the law of the excluded middle, poor notions of continuity, and the abuse of the Axiom of Choice, Lisp, by giving up syntactic, typing, and inheritance restrictions, opens a whole new world for the programmer and pushes the boundaries of what he can implement to solve his problem. If Perlis's maxim has any weight, then Lisp weighs the most: it not only changes the way you program, it redefines the very coordinates upon which you design and refactor computer programs.
(φ (μ (λ)))
The classic Euler-Lagrange equations for a mechanical system with Lagrangian L(t, q, q̇): d/dt (∂L/∂q̇) − ∂L/∂q = 0
This weird-looking equation can be expressed with much simplicity as a Scheme procedure, as such:

(define ((Lagrange-equations Lagrangian) w)
  (- (D (compose ((partial 2) Lagrangian) (Gamma w)))
     (compose ((partial 1) Lagrangian) (Gamma w))))


One employs the functional abstraction that is so ubiquitous in Lisp programs to implement Lagrangian, D, Gamma, and partial. Once that has been done, using this equation for a harmonic oscillator or a double pendulum is only a matter of applying the procedure to its specific arguments, as the sketch below shows.
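For instance, in the conventions of Sussman and Wisdom's scmutils library (from which the Lagrange-equations procedure above comes), a harmonic oscillator might look like the following sketch, assuming scmutils' coordinate, velocity, square, and literal-function:

;; Lagrangian of a harmonic oscillator: kinetic minus potential energy
(define ((L-harmonic m k) local)
  (let ((q (coordinate local))
        (v (velocity local)))
    (- (* 1/2 m (square v))
       (* 1/2 k (square q)))))

;; Residual of the Euler-Lagrange equations along an unknown path x(t);
;; setting it to zero recovers m x''(t) + k x(t) = 0.
(((Lagrange-equations (L-harmonic 'm 'k))
  (literal-function 'x))
 't)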
(φ (μ (λ)))
This is not that far from what you can achieve with Haskell, though honestly you hit a wall with Haskell after a while. I had to implement a quick symbolic differentiator for a particular task; here is the code for the final differentiation function that gets called:

d :: Expr -> Char -> Expr

-- Trivial Conditions
d (Const _) _ = 0
d (Var v) x
  | x == v    = 1
  | otherwise = 0

-- Primary Conditions
d (DSum m n) x     = freduce (d m x + d n x)
d (DSub m n) x     = freduce (d m x - d n x)
d (DProduct m n) x = freduce (m * d n x + n * d m x)


Now if you look at the conditions, they're pretty close to what you'd write in mathematics, except for the freduce part, which I need in order to simplify the resulting expressions further. I need a separate function because a simple lambda can't handle that much conditional reduction; in fact freduce itself calls another helper function. This is why I mentioned hitting a wall: you cannot extend things after a while. You cannot easily extend the + operator, or have your own that also works with numbers. I had instead to write an instance of the Num typeclass, define the behavior of these operations (+, -, *) there, and then recursively apply them instead of changing them directly.
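A minimal sketch of what that instance might look like; the post doesn't show the Expr declaration, so the data type below is an assumption reconstructed from the pattern matches above:

-- Reconstructed expression type (assumed, not shown in the original post)
data Expr = Const Integer
          | Var Char
          | DSum Expr Expr
          | DSub Expr Expr
          | DProduct Expr Expr
  deriving (Show, Eq)

instance Num Expr where
  fromInteger = Const      -- lets the literals 0 and 1 in `d` denote Exprs
  (+)         = DSum       -- the operators merely build syntax trees,
  (-)         = DSub       -- so `d m x + d n x` constructs a DSum node
  (*)         = DProduct
  abs         = error "abs: not defined for symbolic expressions"
  signum      = error "signum: not defined for symbolic expressions"

This is exactly the indirection the text complains about: +, -, and * cannot be extended in place, only rerouted through a typeclass instance.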
(φ (μ (λ)))
Now the same symbolic differentiator, when implemented in Scheme, looks almost the same:

(define (diff e v)
  (cond ((constant? e) 0)
        ((variable? e)
         (if (variable-eq? e v) 1 0))
        ((sum? e)
         (make-sum (diff (augend e) v)
                   (diff (addend e) v)))
        ((sub? e)
         (make-sub (diff (augend e) v)
                   (diff (addend e) v)))
        ((product? e)
         (make-sum
          (make-product (multiplier e) (diff (multiplicand e) v))
          (make-product (multiplicand e) (diff (multiplier e) v))))
        (else
         (error "This expression cannot be differentiated." e))))


When one compares the two, they might seem almost the same, and on the surface they are. But suppose you realize that your procedure/function isn't simplifying the expressions and you need to fix this. In the case of the Scheme procedure you can leave diff entirely unchanged and only add new conditions to make-product, make-sum, etc.; a sketch follows below. Even adding exponentiation can be done similarly. This can be achieved in Haskell if you go out of your way and do the typeclass shenanigans, but Scheme symbolically allows you to do it with much more ease. It naturally supports the structure of never having to rely on a fixed set of things: you make structures, break them, and rebuild them as you like. And in doing so you don't have to fight with the semantics of the language; rather, it is waiting for you to make use of it.
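Here is that kind of extension in the style of the classic SICP differentiator (the =number? helper is SICP's); diff itself stays untouched while make-sum learns to simplify:

;; True when expression e is the literal number n.
(define (=number? e n)
  (and (number? e) (= e n)))

(define (make-sum a b)
  (cond ((=number? a 0) b)                       ; 0 + b => b
        ((=number? b 0) a)                       ; a + 0 => a
        ((and (number? a) (number? b)) (+ a b))  ; fold constant sums
        (else (list '+ a b))))                   ; otherwise build syntax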
On his honeymoon in 1961, Knuth discovered that the roads of mathematics and computer programs intersect. For, Jill was not only accompanied by her newly wed husband on their joint trip through Europe, but also by Noam Chomsky's book Syntactic Structures which Knuth was studying eagerly. Chomsky showed Knuth how mathematics and computing can be practiced together. One year later, Knuth met Bob Floyd who would teach him that you really could use mathematical reasoning to understand computer programs. The early sixties thus not only brought Jill and Don Knuth officially together, it also married mathematics and programming.

E. G. Daylight, The Essential Knuth (2013)
The Feyerabend Project by Richard Gabriel

https://www.dreamsongs.com/Feyerabend/Feyerabend.html
Varieties of REPLs

1. C
#include <stdio.h>

int main() {
    char input[100];

    while (fgets(input, sizeof(input), stdin)) {
        printf("%s", input);
    }

    return 0;
}

2. Rust
use std::io;

fn main() {
    let mut input = String::new();
    while io::stdin().read_line(&mut input).unwrap() > 0 {
        print!("{}", input);
        input.clear();
    }
}

3. (g)Forth
: repl
  BEGIN
    TIB DUP >IN !
    0 WORD COUNT TYPE
  AGAIN ;

4. Haskell
main :: IO ()
main = do
  input <- getLine
  putStrLn input
  main

And, finally:

5. Lisp
(loop (print (eval (read))))
The problem is that coding isn’t fun if all you can do is call things out of a library, if you can’t write the library yourself. If the job of coding is just to be finding the right combination of parameters, that does fairly obvious things, then who’d want to go into that as a career?

Donald Knuth in an interview with Peter Seibel. Unfortunately, the answer Knuth would get today is: everybody.
Douglas Hofstadter, Gödel, Escher, Bach: An Eternal Golden Braid (1979)
Forwarded from Symptoms
McKenzie Wark, A Hacker Manifesto (2009)
Steven Levy, Afterword to Hackers: Heroes of the Computer Revolution (2010)
: birthday 100 0 do ." Happy Birthday Chuck " cr loop ;
birthday

Happy birthday Chuck, aka Charles H. Moore, the inventor of Forth, and with it the concatenative programming paradigm, the threaded-code model, and much more.
Forwarded from Mathematics Channel
Interactive Mathematical applets and animations

https://www.dynamicmath.xyz/

"A mathematical formula should never be 'owned' by anybody!"
Donald Knuth, Digital Typography, ch. 1, p. 8 (1999).


#site