My previous, somewhat-cranky comment got downvoted away, but I do think Lisp's "higher-order magic" and "ability to do calculus" are being a bit exaggerated here. From below - evaluating an Nth derivative in 80s-era C is not so atrocious:
I think the thing that impressed the author at the time was more the fact that functions, being first-class values, can be introduced at runtime in a REPL, rather than having to be "planned" at compile time. So the C code isn't really analogous, but the Python code is.
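For what it's worth, the Python version of that idea really is just a higher-order function you can type live at the prompt. A minimal sketch (the names `deriv`/`nth_deriv` and the step size `h` are my own choices, not from the article):

```python
def deriv(f, h=1e-5):
    """Return a new function approximating f's derivative (forward difference)."""
    return lambda x: (f(x + h) - f(x)) / h

def nth_deriv(f, n, h=1e-5):
    """Apply deriv n times to approximate the nth derivative."""
    for _ in range(n):
        f = deriv(f, h)
    return f

cube = lambda x: x ** 3
d2 = nth_deriv(cube, 2)   # approximates the second derivative, 6x
print(round(d2(2.0), 2))  # close to 12.0
```

No compile step, no planning ahead: `d2` is a function value manufactured at runtime from another function value.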
But re: the Python code—I would say that, from the perspective of the 1960s, all modern "dynamic" languages that have REPLs (like Python) are Lisps in essential character.
"Lisp", back then, referred less to "a language that uses a lot of parentheses", and more to things like:
• runtime sum-typing using implicit tagged unions;
• parameterization of functions using linked lists (or hash-maps) of paired interned-string "keys" and arbitrary product-typed values, rather than parameterization using bitflags or product-types of optional positional parameters;
• heap allocation and garbage-collection;
• a compiler accessible by the runtime;
• "symbolic linkage" of functions and global variables, such that a named function or variable "slot" can be redefined (even to a new type!) at runtime, and its call-sites will then use the new version.
We only notice the parens as the differentiating feature of Lisps nowadays, because everything else has become widely disseminated. Perl and Python and PHP and Ruby (and even Bash) are fundamentally Lisps, in all of the above ways. Lisp "won."
My point, though, was that while the proponents of Lisp define Lisp one way (by the things only Lisp can do due to e.g. homoiconicity), the opponents of Lisp (like the author was, coming into the course) define Lisp by the set of features that make Lisp "not a Real Programmer†'s programming language"—i.e. the set of things that make them not want to use it.
My assertion was, from these opponents' perspectives, there are very few languages left for "Real Programmers"; most modern languages have inherited nearly all of those horribly convenient Lisp-isms. Heck—modern CPUs are so good at pointer-chasing, they may as well be Lisp Machines!
Python, Ruby, and JS/ES are all on the same side of the Smalltalk family tree, aren't they?
On another side of the Smalltalk family tree, languages like Java embraced a kind of static typing and found strong success with it, leading to the promotion of strongly-typed structures over lambdas. This side, ironically, did the most work on JITs, which are meant to compensate for dynamic language features, and languages like Dart are still marrying JIT techniques to strong static type systems today.
And on another side of the Smalltalk family tree, languages like E are fully oriented around objects with rights, so that instead of structures which can be examined, values must have messages sent to them instead. Homoiconicity, lambdas, pattern-matching lists, and other staples of Scheme are nonetheless common in E and Monte.
I think that the Smalltalk family, overall, could be viewed as a response to the barren syntactic landscape of Lisp, but I think that they could also be viewed like any other language: Taking what they like from ancestors, and not taking what they don't like.
Lisp was developed largely for Symbolic Computing. See the famous McCarthy paper. It was recognized early on that Lisp execution can itself be seen as Symbolic Computing - see the Lisp interpreter executing symbolic expressions, and the famous definition of Lisp evaluation in Lisp itself. Lisp then had the first self-hosted compiler, the first integrated macro system, and the first integrated structure editors in the early 60s.
That was Lisp back then. Lisp then early on was used in a bunch of symbolic computing domains: theorem provers, computer algebra, natural language processing, planning, rule-based languages, programming language implementation (languages like ML or Scheme were implemented first in Lisp).
In the Lisp sense Python does not have a REPL, since it does not implement READ, EVAL and PRINT over symbolic data, like Lisp does.
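A rough illustration of the difference, using Python's stdlib `ast` module (the Lisp-ish nested-list stand-in at the end is my own):

```python
import ast

# In Lisp, READ turns the text "(+ 1 2)" into the list (+ 1 2): ordinary
# data you can take apart with car/cdr and hand straight to EVAL. Python's
# reader instead produces an AST object tree, a separate vocabulary from
# the language's normal data structures.
tree = ast.parse("1 + 2", mode="eval")
print(ast.dump(tree.body))  # a BinOp node, not a list of symbols

# A Lisp-ish READ would give you plain data, e.g.:
sexp = ["+", 1, 2]
op, args = sexp[0], sexp[1:]  # trivially inspectable and rewritable
```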
You can still do this in C, with the same implementation Lisp uses under the hood. C being C, there is a bit of syntax boilerplate. I also omitted some details which may have been relevant at the time such as ensuring no cast from function pointers to data pointers (not hard to handle, but makes the code a bit longer).
That's a sort of function, yes, but it's numerical analysis. In a Lisp you can pass in the equivalent of "4x^3 + (x^2)/2 - 5x + 6" and get back "12x^2 + x - 5" - a function that is the derivative of the function you put in, rather than a function that approximates the value of the derivative at a value. You'd be manipulating the function rather than its return value.
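To make that concrete, here is a toy symbolic differentiator over Lisp-style nested lists, written in Python for readability (the list representation and function names are mine; it handles just +, *, and constant powers, with no simplification):

```python
def diff(e, x="x"):
    """Differentiate a tiny expression language: numbers, the symbol x,
    and nested lists like ["+", a, b], ["*", a, b], ["^", a, n]."""
    if isinstance(e, (int, float)):
        return 0
    if e == x:
        return 1
    op, a, b = e
    if op == "+":
        return ["+", diff(a, x), diff(b, x)]
    if op == "*":   # product rule
        return ["+", ["*", diff(a, x), b], ["*", a, diff(b, x)]]
    if op == "^":   # power rule, constant exponent b
        return ["*", b, ["^", a, b - 1]]
    raise ValueError(op)

def evaluate(e, env):
    """Evaluate an expression in the same representation."""
    if isinstance(e, (int, float)):
        return e
    if isinstance(e, str):
        return env[e]
    op, a, b = e
    a, b = evaluate(a, env), evaluate(b, env)
    return {"+": a + b, "*": a * b, "^": a ** b}[op]

# d/dx of 4x^3 is 12x^2; at x=2 that's 48 (the returned expression is
# unsimplified, but numerically equal):
d = diff(["*", 4, ["^", "x", 3]])
print(evaluate(d, {"x": 2}))  # 48
```

The point is that the derivative comes back as an expression you can keep manipulating, not as a number.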
Lisp is definitely not doing symbolic algebra and finding a pure analytical answer like that under the hood - symbolic math, differentiation rules etc. are a whole other beast (see SymPy for example).
Maybe that's the misunderstanding in the original article that made it seem so mindblowing?
Evaluating (f(5.0001) - f(5.0)) / 0.0001 just gives an approximation of the derivative at a certain point in the function (aka the "finite difference method").
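Spelled out in Python, with f = x^2 as an arbitrary example of mine:

```python
f = lambda x: x ** 2
approx = (f(5.0001) - f(5.0)) / 0.0001
# The exact derivative of x^2 at 5 is 10; the forward difference is
# merely close (about 10.0001), and only at this one point.
print(approx)
```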