Hacker News | curryhoward's comments

You can inspect the styles with browser dev tools. It's IBM Plex Mono.


The suggestion is to use pure Haskell for the rules DSL only. The surrounding system that applies any side effects would presumably not be written in that language.

It's a good idea, and Facebook actually does something like this for their spam filtering rules: https://engineering.fb.com/2015/06/26/security/fighting-spam...
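
As a rough sketch of the shape of the idea (all names and types here are made up for illustration, not Facebook's actual system): rules are pure functions from a description of the event to a verdict, and only the surrounding engine performs effects.

    -- Hypothetical sketch: a pure rules DSL, with effects confined to the
    -- engine that interprets the verdicts. Names/types are illustrative only.
    data Verdict = Allow | Flag | Block deriving (Show, Eq)

    data Post = Post { body :: String, linkCount :: Int }

    -- Rules are pure: no IO anywhere in their type.
    type Rule = Post -> Verdict

    tooManyLinks :: Rule
    tooManyLinks p = if linkCount p > 10 then Flag else Allow

    -- Only the surrounding engine performs side effects (logging,
    -- removing the post, ...), based on the pure verdicts.
    runRules :: [Rule] -> Post -> IO ()
    runRules rules p
      | Block `elem` verdicts = putStrLn "blocking post"      -- stand-in effect
      | Flag  `elem` verdicts = putStrLn "flagging for review"
      | otherwise             = pure ()
      where verdicts = map ($ p) rules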


Looks like it has become abandonware, or at least the component they open sourced. Oh well, such is life in the Haskell ecosystem. https://github.com/facebook/Haxl


Citation needed on "well-established". I find myself often benefiting from Haskell's disciplined approach to effects.


" (beyond the well-established Haskell-esque effects bad)"

Read that as:

"beyond the well-established idea, from Haskell, that side effects are troublesome and need to be explicitly modelled and thought about."


Ah, I think your clarification makes a lot more sense than your original phrasing, because Haskell's position is indeed that side effects are bad, not that effects are bad. The bad thing is "side", not "effects".


For one, Rust's "trait" system, which is a foundational part of the language that enables a lot of Rust's expressivity, is a limited imitation of Haskell's "type classes". Rust would be a very different language without it.
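
For a flavour of what that means on the Haskell side (a toy example, nothing more):

    -- A toy Haskell type class, the feature Rust's traits are modelled on.
    class Pretty a where
      pretty :: a -> String

    instance Pretty Bool where
      pretty True  = "yes"
      pretty False = "no"

    -- A constraint plays the role of a trait bound on a generic function.
    prettyAll :: Pretty a => [a] -> String
    prettyAll = unwords . map pretty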


The Y combinator is not the same as λ calculus. The Y combinator is an expression in λ calculus.


The Y combinator is in a sense the heart of the λ calculus; it's a key discovery for understanding that the λ calculus is universal, which is what makes it a useful concept rather than an arbitrary bundle of rules.


The simply typed, and, especially, the dependently typed lambda calculi are extremely versatile and powerful subsets of the untyped lambda calculus which do not admit the Y combinator. Their usefulness does not at all require universality, and they are, perhaps, more useful for being typed.


There are many useful things that are not "the lambda calculus". "The lambda calculus" is more widely known and important largely because it is universal, hence the Church-Turing thesis.


Of course there are other useful things. But a comment argued that the presence of Y is what makes untyped lambda calculus useful. My rebuttal is that certain subsets which (intentionally) lack Y are more useful. Applications of typed calculi abound, but one rarely sees applications of the untyped calculus. Universality is not so important, it would seem. For example, infinite computations can still be modeled in typed calculi, but one has more control over them in that setting than running on "bare metal" untyped lambda calculus.


Ok, "useless" is a slight exaggeration; non-universal calculi can be useful. But the thing that makes the lambda calculus so important and famous - the thing that makes it "the lambda calculus - is universality, and that's the ur-application that makes all of the subsequent application of calculi to mechanical computation possible.


> heart of the λ calculus

The Y combinator is a specific fixed-point combinator in lambda calculus. It is used to express recursion in lambda calculus where direct recursion is not initially available due to the lack of named functions. The Y combinator demonstrates the power of lambda calculus.
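
For reference, the usual definition, and the one-step unfoldings that make it a fixed-point combinator:

    Y = λf. (λx. f (x x)) (λx. f (x x))

    Y g  →  (λx. g (x x)) (λx. g (x x))  →  g ((λx. g (x x)) (λx. g (x x)))

So Y g is β-equal to g (Y g): applying Y to g hands g "itself" back as an argument, which is all that is needed to encode recursion.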


> which is what makes it a useful concept rather than an arbitrary bundle of rules.

I cannot recall a single practical FP language based on the Y combinator for recursion. The typical approach is to extend lambda calculus with a separate recursion primitive.
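
In Haskell, for instance, the primitive amounts to a recursive let; Data.Function.fix is essentially the one-liner below, and it plays the role that Y plays in the pure calculus.

    -- Haskell's recursion primitive is recursive let; Data.Function.fix
    -- is essentially this:
    fix :: (a -> a) -> a
    fix f = let x = f x in x

    -- Recursion is then ordinary application of fix, no Y required:
    factorial :: Integer -> Integer
    factorial = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))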


The lambda calculus was never meant to be a "practical FP language"; it predates programming languages and even mechanical stored-program computers as we understand them.


What are the useful cases of the Y combinator then? (I don't say that the lambda calculus is useless, I was responding to a comment that claimed the lambda calculus is useless without the Y combinator.)


the great benefit of the y-combinator is that it permits indefinite iteration in a system that seems at first glance to not support iteration at all

so when you're faced with a formal system that doesn't seem to support indefinite iteration, perhaps because you are trying to design it to guarantee termination, a useful exercise is to try to construct an analogue of the y-combinator in it

if you can't, you may have gained enough insight into the problem to show that the system actually does guarantee termination, and maybe even show useful time bounds on it

if you can, you have usually shown the system is turing-complete, which means you can appeal to rice's theorem whenever you are tempted to analyze the behavior of the system in any way, so you can satisfy yourself with partial decision procedures that work often enough to be useful and are conservative in the appropriate way, rather than wasting time trying to find a complete solution

(also if you're designing the system you might consider either supporting iteration more directly or removing the back door)

every once in a while it's also useful in practice to be able to program a weird machine, too


> the great benefit of the y-combinator is that it permits indefinite iteration in a system that seems at first glance to not support iteration at all

This is only the case for untyped systems. If we try to type the Y combinator, we will eventually need some notion of recursion. For example, in Haskell/OCaml we can type it with iso/equi-recursive types. How many truly untyped systems do we use in practice?

The Y combinator as a counterexample of termination is a great insight though!
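
For what it's worth, here is roughly how that iso-recursive trick looks in Haskell; Mu and Roll are just conventional names for the illustration, and the only recursion is the one hidden in the newtype.

    -- Sketch: typing Y in Haskell by wrapping the self-application in an
    -- iso-recursive newtype. Mu/Roll are illustrative names.
    newtype Mu a = Roll { unroll :: Mu a -> a }

    y :: (a -> a) -> a
    y f = g (Roll g)
      where g x = f (unroll x x)

    -- Usable for ordinary recursion, e.g.:
    fib :: Integer -> Integer
    fib = y (\rec n -> if n < 2 then n else rec (n - 1) + rec (n - 2))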


Dynamically typed programming languages are statically untyped.

You might argue they are unityped, but that's really a type theoretician's job security: if a language has the untyped lambda calculus as a subset (like the Python example in this discussion), it's reasonable to call it untyped.


typing is often an excellent way to ensure termination


Emulating recursion, and more generally the whole Church-Turing thesis. The fact that you can represent mechanical computation through, well, computation - the fact that equating those concepts makes sense at all, which was such a huge innovation that today it seems too banal to even notice.


You keep answering with the same words though. My question was _where_ the Y combinator is useful, not _what_ it is designed for, which I am of course aware of.


What do you mean "where"? In Princeton? In the Proceedings of the London Mathematical Society? In the head of anyone who presumes to think about computation?


you could argue that Y is also an advanced CPS transform, and a lot of patterns rely on a similar idea, namely passing the rest of the computation along as a functional parameter


The Miranda programming language was an early lazy functional programming language, an ancestor of Haskell. It compiles down to combinators and uses combinator reduction as its evaluation strategy. This approach became obsolete by the end of the 1980s, but it's still neat to learn how it works. One of the primitive combinators is Y. Here is its implementation: https://github.com/pkreyenhop/miranda/blob/master/reduce.c#L...
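
To give a flavour of the technique (a toy sketch, not what Miranda's reduce.c actually does): Y can be treated as a primitive node with its own rewrite rule, Y f → f (Y f), alongside S, K and I.

    -- Toy illustration only; not Miranda's implementation.
    data Term = S | K | I | Y | Term :@ Term
      deriving Show

    infixl 9 :@

    -- One step of leftmost-outermost reduction, if a rule applies.
    step :: Term -> Maybe Term
    step (I :@ x)           = Just x
    step (K :@ x :@ _)      = Just x
    step (S :@ f :@ g :@ x) = Just (f :@ x :@ (g :@ x))
    step (Y :@ f)           = Just (f :@ (Y :@ f))   -- Y as a primitive
    step (f :@ x)           = (:@ x) <$> step f      -- otherwise reduce the head
    step _                  = Nothing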


> Even more beautiful than the y combinator

That's a bit of a strange statement. The Y combinator is an expression in λ calculus. It's like saying French is even more beautiful than the phrase "nouveau départ".


> Comic Sans is perfect for setting children’s activity timetables that are displayed in a school playground. It’s perhaps not as appropriate for announcing scientific breakthroughs.

Simon Peyton Jones would beg to differ. :)


I enjoyed his talk on adding lambdas to Excel in general, but one particular bright spot was when someone asked why the slides weren't in Comic Sans and he explained (my paraphrase...) that he had to give the talk outside Haskell circles, and the rest of the world doesn't love him enough to forgive Comic Sans.


> and why they are so absolutely lost when it comes to creating successful languages

They aren't lost—they're just more interested in actually good ideas than in popularity. Popular languages must appeal to all kinds of programmers with varying backgrounds, so they are heavily constrained. Your argument is basically that mathematicians don't know what they're doing because their most advanced theories aren't used by mechanical engineers.


Most of the functional programmers I know have a deeper understanding of OOP than the OOP programmers I know. For example, most of the OOP programmers I know do not understand covariance and contravariance (whereas just about every functional programmer I know has mastered them), even though those concepts frequently come up in the context of OOP. People who study programming language theory tend to gravitate toward the functional paradigm, but it's not because they don't understand OOP.


If you aren't aware that contravariance and covariance are typically taught under the banner of the "Liskov substitution principle" rather than under the name "variance", then you don't know OOP better than OOP programmers do. You've just convinced yourself you do because you use different terminology. I think maybe the guy you're responding to is correct.


>most of the OOP programmers I know do not understand covariance and contravariance (whereas just about every functional programmer I know has mastered them)

I very much doubt this and I suspect it is more of a problem of how you framed it. I'm sure you would see more success if you framed it seeing if the following would compile.

    List list = new ArrayList<String>;
and for contravariance, which is not very common in practice:

    Consumer printHashCode = o -> System.out.println(o.hashCode());
    Consumer<String> printStringHashCode = printHashCode;


Where is the variance in your example?


If you mean covariance, it's the first example, because String is a subtype of Object. List is the same thing as List<Object>.

The first example is missing some parentheses, but it should convey my point.


Assuming we are talking about java, List is not the same as List<Object>.

List<Object> list = new ArrayList<String>(); will fail to compile.

Generic types in Java have no variance by default. The only type that can be assigned to List<String> is exactly List<String>.

Your example would want to be List<? extends Object> foo = new ArrayList<String>();

There is a reason for this. Consider:

  List<String> foo = new ArrayList<>();
  List<Object> bar = foo;
  bar.add(new Integer(0));
If the above works, you would be adding an Integer to a collection of Strings, which is not sound.

In general, you typically want to be covariant in type parameters that are used for return values, and contravariant in type parameters that are used for arguments. In the general case this is unworkable (since type parameters are often used for both), so the only reasonable behaviour is to be invariant.

Raw arrays in Java are actually covariant - which is not reasonable - and quickly result in a runtime type exception when you get to the "bar[0] = new Integer(0)" portion of the above example.

Scala allows you to explicitly mark type parameters as covariant or contravariant, as below:

    class Foo[+T] //Foo is covariant in T
    class Foo[-T] //Foo is contravariant in T
    class Foo[T]  //Foo is invariant in T
However, no equivalent mechanism exists in Java.


> Code has limited CPU available as well - if it tries to use too many cycles it quits with an error

That's a pretty strange failure mode. Usually you'd just throttle the sandboxed application (e.g., give it a CFS quota with a CPU cgroup) or give it a limited number of vCPUs.


But then it could just keep running forever, which isn't something you'd want.


That's what (4) is for. Time out after some duration.

