Hacker News

My previous, somewhat-cranky comment got downvoted away, but I do think Lisp's "higher-order magic" and "ability to do calculus" is being a bit exaggerated here. From below - evaluating an Nth derivative in 80s-era C is not so atrocious:

    #define DX 0.0001
    typedef double (*func)(double);

    double NthDeriv(int n, func f, double x) {
        if (n == 1) {
            return (f(x + DX) - f(x)) / DX;
        }
        else {
            return (NthDeriv(n - 1, f, x + DX) - NthDeriv(n - 1, f, x)) / DX;
        }
    }

    double Cube(double x) { return x * x * x; }

    double result = NthDeriv(3, &Cube, 5.0);
As mentioned in previous discussions of this article, the equivalent in Python is pretty elegant:

    >>> def deriv(f):
    ...   dx = 0.0001
    ...   def fp(x):
    ...     return (f(x + dx) - f(x)) / dx
    ...   return fp
    ...
    >>> cube = lambda x: x**3
    >>> deriv(cube)(2.0)
    12.000600010022566


I think the thing that impressed the author at the time was more the fact that functions, being first-class values, can be introduced at runtime in a REPL, rather than having to be "planned" at compile time. So the C code isn't really analogous, but the Python code is.

But re: the Python code—I would say that, from the perspective of the 1960s, all modern "dynamic" languages that have REPLs (like Python) are Lisps in essential character.

"Lisp", back then, referred less to "a language that uses a lot of parentheses", and more to things like:

• runtime sum-typing using implicit tagged unions;

• parameterization of functions using linked lists (or hash-maps) of paired interned-string "keys" and arbitrary product-typed values, rather than parameterization using bitflags or product-types of optional positional parameters;

• heap allocation and garbage-collection;

• a compiler accessible by the runtime;

• "symbolic linkage" of functions and global variables, such that a named function or variable "slot" can be redefined (even to a new type!) at runtime, and its call-sites will then use the new version.
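That last point is the one modern dynamic languages inherited most visibly. A minimal Python sketch (toy names, my own illustration) of "symbolic linkage" in action:

```python
# Call sites look up the name `f` at call time, so rebinding `f`
# at the REPL changes behavior everywhere, with no recompilation.
def f(x):
    return x + 1

def caller(x):
    return f(x)   # resolves `f` in the module namespace on each call

print(caller(10))   # 11

def f(x):           # redefine the "slot", even to a new result type
    return str(x)

print(caller(10))   # '10'
```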

We only notice the parens as the differentiating feature of Lisps nowadays, because everything else has become widely disseminated. Perl and Python and PHP and Ruby (and even Bash) are fundamentally Lisps, in all of the above ways. Lisp "won."


However, PHP, Python, Ruby, JS, etc. aren't homoiconic, which would disqualify them as "lisps" in the minds of most Lispers, including Lisp's creator.


One might say that they're all implementations of M-expressions: https://en.wikipedia.org/wiki/M-expression

My point, though, was that while the proponents of Lisp define Lisp one way (by the things only Lisp can do due to e.g. homoiconicity), the opponents of Lisp (like the author was, coming into the course) define Lisp by the set of features that make Lisp "not a Real Programmer†'s programming language"—i.e. the set of things that make them not want to use it.

http://www.catb.org/jargon/html/R/Real-Programmer.html

My assertion was, from these opponents' perspectives, there are very few languages left for "Real Programmers"; most modern languages have inherited nearly all of those horribly convenient Lisp-isms. Heck—modern CPUs are so good at pointer-chasing, they may as well be Lisp Machines!


Python, Ruby, and JS/ES are all on the same side of the Smalltalk family tree, aren't they?

On another side of the Smalltalk family tree, languages like Java embraced a kind of static typing and found strong success with it, leading to the promotion of strongly-typed structures over lambdas. This side, ironically, did the most work on JITs, which are meant to compensate for dynamic language features, and languages like Dart are still marrying JIT techniques to strong static type systems today.

And on another side of the Smalltalk family tree, languages like E are fully oriented around objects with rights, so that instead of structures which can be examined, values must have messages sent to them instead. Homoiconicity, lambdas, pattern-matching lists, and other staples of Scheme are nonetheless common in E and Monte.

I think that the Smalltalk family, overall, could be viewed as a response to the barren syntactic landscape of Lisp, but I think that they could also be viewed like any other language: Taking what they like from ancestors, and not taking what they don't like.


Lisp was developed largely for symbolic computing; see McCarthy's famous paper. Early on it was recognized that Lisp execution can itself be seen as symbolic computing: the Lisp interpreter executes symbolic expressions, and Lisp evaluation is famously defined in Lisp itself. Lisp then had the first self-hosted compiler, the first integrated macro system, and the first integrated structure editors, all in the early 60s.

That was Lisp back then. Early on, Lisp was used in a bunch of symbolic-computing domains: theorem provers, computer algebra, natural language processing, planning, rule-based languages, and programming-language implementation (languages like ML and Scheme were first implemented in Lisp).

In the Lisp sense, Python does not have a REPL, since it does not implement READ, EVAL, and PRINT over symbolic data the way Lisp does.
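To see the gap, here's a sketch of the contrast using Python's ast module, its closest analog to READ (my own example):

```python
import ast

# Lisp's READ returns ordinary lists and symbols, manipulable with
# the same functions used on any other data. Python's closest
# analog, ast.parse, returns a tree of dedicated node objects.
tree = ast.parse("(f(x + dx) - f(x)) / dx", mode="eval")
print(type(tree.body).__name__)   # BinOp, not a plain list

# EVAL exists too, but as a separate compile/eval pipeline:
code = compile(tree, "<repl>", "eval")
print(eval(code, {"f": lambda x: x**3, "x": 2.0, "dx": 1e-4}))
```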


Queinnec's Lisp in Small Pieces explicitly argues that learning eval helps you model all dynamic languages.


Seeing the symbolic derivative is the amazing thing to me. Really highlights the blurring of the lines between data and code.

You are correct that the analytic derivative isn't that hard to do in any language.

Edit: Should have said "many languages" not just "any"


I put up https://news.ycombinator.com/item?id=18343652 to explore a little more of the symbolic derivative fun you can do in lisp. Would love to get feedback on that post.


The Scheme and Python versions both return the derivative function. The C version is significantly less cool. I'd want to do something like:

    func cube_3rd_deriv = NthDeriv(3, &Cube); /* returns a function */
    double result = cube_3rd_deriv(5.0);


You can still do this in C, with the same implementation Lisp uses under the hood. C being C, there is a bit of syntax boilerplate. I also omitted some details which may have been relevant at the time such as ensuring no cast from function pointers to data pointers (not hard to handle, but makes the code a bit longer).

    #include <stdio.h>
    #define DX 0.0001
    typedef double (*function)(double);
    typedef double (*closure_body)(void *, double);
    struct closure { void *env; closure_body body; };
    #define CALL(c, x) ((c)->body((c)->env, (x)))
    double deriv_body(void *env, double x) {
        function f = (function)env;
        return (f(x + DX) - f(x)) / DX;
    }
    void deriv(function f, struct closure *out) {
        out->env = (void *)f;
        out->body = deriv_body;
    }
    double cube(double x) { return x * x * x; }

    int main(void) {
        struct closure cube_deriv;
        deriv(&cube, &cube_deriv);

        printf("%f\n", CALL(&cube_deriv, 2.));
        printf("%f\n", CALL(&cube_deriv, 3.));
        printf("%f\n", CALL(&cube_deriv, 4.));

        return 0;
    }


Maybe not as theoretically cool (doesn't happen at runtime) but isn't this pretty much the same in practice?

    double Cube3rdDeriv(double x) {
        return NthDeriv(3, &Cube, x);
    }
&Cube3rdDeriv can now be passed around in a higher-order way.


Same in Haskell:

   *Main> derivative f x = (f(x+dx)-f(x))/dx where dx = 0.0001
   *Main> derivative (^3) 2
   12.000600010022566


But would you introduce a function pointer on the first day of a student's first programming class? Good luck.


In Python:

    dx = 0.0001
    def deriv(f): return lambda x: (f(x+dx) - f(x))/dx
or even:

    deriv = lambda f: lambda x: (f(x+dx) - f(x))/dx


That's a sort of function, yes, but it's numerical analysis. In a Lisp you can pass in the equivalent of "4x^3 + (x^2)/2 - 5x + 6" and get back "12x^2 + x - 5": a function that is the derivative of the function you put in, rather than a function that approximates the value of the derivative at a point. You'd be manipulating the function rather than its return value.
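A toy sketch of what that manipulation involves, here in Python with expressions as nested tuples (a made-up mini-language in the spirit of SICP's deriv, handling only +, *, and powers of x):

```python
# A toy symbolic differentiator over nested tuples. Expressions are
# numbers, the symbol 'x', ('+', a, b), ('*', a, b), or ('^', 'x', n).
# This is a sketch, not a full computer-algebra system.
def deriv(e):
    if isinstance(e, (int, float)):   # d/dx c = 0
        return 0
    if e == 'x':                      # d/dx x = 1
        return 1
    op, a, b = e
    if op == '+':                     # sum rule
        return ('+', deriv(a), deriv(b))
    if op == '*':                     # product rule
        return ('+', ('*', deriv(a), b), ('*', a, deriv(b)))
    if op == '^' and a == 'x':        # power rule, x^n only
        return ('*', b, ('^', 'x', b - 1))
    raise ValueError(f"unknown expression: {e!r}")

# d/dx (x^3) comes back as an unsimplified expression tree:
print(deriv(('^', 'x', 3)))   # ('*', 3, ('^', 'x', 2))
```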


Lisp is definitely not doing symbolic algebra and finding a pure analytical answer like that under the hood - symbolic math, differentiation rules etc. are a whole other beast (see SymPy for example).

Maybe that's the misunderstanding in the original article that made it seem so mindblowing?

Evaluating (f(5.0001) - f(5.0)) / 0.0001 just gives an approximation of the derivative at a certain point in the function (aka the "finite difference method").
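Concretely, for f(x) = x^3 at x = 5 the exact derivative is 3·5^2 = 75, and the forward difference only approximates it:

```python
# Forward-difference approximation vs. the exact derivative at x = 5.
f = lambda x: x ** 3
dx = 0.0001
approx = (f(5.0 + dx) - f(5.0)) / dx
print(approx)   # roughly 75.0015: close to the exact 75, but not equal
```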


Symbolic derivatives are one of the very next sections in the famous Lisp lectures the author was watching.


Interesting - for symbolic differentiation I imagine it's about code-manipulation rules for e.g. calculus' product rule, chain rule etc.?


Essentially, yes. And if you want it to be pretty, it gets complicated quickly.

https://mitpress.mit.edu/sites/default/files/sicp/full-text/... is the chapter. Turns out it wasn't the "very next" one. But it is in my head.


Here's symbolic differentiation done in ~20 lines of Haskell: https://www.quora.com/What-are-20-lines-of-code-that-capture...



