Hacker News

I actually quite like the Haskell and OCaml type systems! I had them in mind while writing my post above. Personally, I dislike type systems which are rigid but weak. Strong compiler verification is awesome in a language like Haskell, which offers sum types, bounded types, contra- and covariant parameterized types (not commonly referred to as such, but implicit in functors), etc. I agree with you that it makes it easier to handle complex codebases, catches huge classes of bugs readily, and allows you to solve problems in less code. [1] Good type systems are programmer amplifiers.
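Since the thread later turns to the JVM, here's a rough Java analogue of the contra- and covariance being described (a sketch only — Haskell expresses this through functor variance, Java through use-site wildcards; the class and method names here are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

public class Variance {
    // Covariant read position: accepts List<Integer>, List<Double>, etc.
    static double sum(List<? extends Number> xs) {
        double total = 0;
        for (Number n : xs) total += n.doubleValue();
        return total;
    }

    // Contravariant write position: accepts Consumer<Integer> or Consumer<Object>.
    static void feed(Consumer<? super Integer> sink) {
        sink.accept(42);
    }

    public static void main(String[] args) {
        System.out.println(sum(List.of(1, 2, 3)));  // covariance: List<Integer> as List<? extends Number>
        List<Object> log = new ArrayList<>();
        feed(log::add);                             // contravariance: List<Object>'s add consumes Integers
        System.out.println(log);
    }
}
```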

I also love the concision and flexibility that comes with Clojure's idiomatic eschewing of type information: it helps me focus on functional composition instead of the particular data. Both have their advantages. Java just makes me angry. ;-)

Regarding the difference in cost: I meant to say that many "strongly typed" compilers are not yet smart enough to elide the run-time indirections and safety checks that dynamic languages must use. Really good type systems, like Haskell's, enable more: precomputing finite-domain functions into lookup tables, finding fixed points, etc.
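The lookup-table idea can be done by hand to see what such a compiler optimization buys (a minimal sketch, using population count over bytes as the finite-domain function):

```java
public class LookupTable {
    // A byte's domain is finite (256 values), so the whole function
    // can be precomputed once into an array.
    static final int[] POPCOUNT = new int[256];
    static {
        for (int i = 0; i < 256; i++) POPCOUNT[i] = Integer.bitCount(i);
    }

    // The "function call" is now a single array read.
    static int popcount(byte b) {
        return POPCOUNT[b & 0xFF];
    }

    public static void main(String[] args) {
        System.out.println(popcount((byte) 0b1011)); // 3 bits set
    }
}
```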

This is not a domain I understand very well, but the comments I've read from language folks (for instance, the Dart VM designers) suggest that type checks in particular have a relatively small impact on performance. Polymorphism still leads to things like vtables, and as I understand it, modern x86 is pretty good at handling those cases.
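The vtable cost in question looks like this on the JVM (a minimal sketch; the call through `Shape` can't be resolved statically, so each iteration dispatches through a per-class method table — and the CPU's indirect-branch predictor usually does well when one receiver type dominates):

```java
public class Dispatch {
    interface Shape { double area(); }

    record Square(double side) implements Shape {
        public double area() { return side * side; }
    }
    record Circle(double r) implements Shape {
        public double area() { return Math.PI * r * r; }
    }

    static double totalArea(Shape[] shapes) {
        double total = 0;
        for (Shape s : shapes) total += s.area(); // dynamic dispatch on every element
        return total;
    }

    public static void main(String[] args) {
        System.out.println(totalArea(new Shape[]{ new Square(2), new Circle(1) }));
    }
}
```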

Again, I know very little about actually writing compilers/vms, so if you have further comments I'd be interested to hear 'em!

[1] That said, Haskell's type system drives me nuts in its proliferation of types which are almost but not quite compatible; nothing worse than trying to use two libraries which will only interact with their own particular variant of a String or ByteArray.



Thanks, that clarified things. Unfortunately, I too know little about actually writing compilers, so my depth is limited. But I will offer some thoughts anyway.

The slowdown in dynamic languages comes from more than type checks, though. It comes from the fact that dynamic languages are like some crazy awesome dream world where anything can change at any moment and things are not necessarily what they seem. So VMs must be extra vigilant, checking many things on every operation: assignments, exceptions, whether the object is still the same class, whether it still has the same methods, etc. The term "dynamic" is almost an understatement! Add to that boxing and the possibility of heterogeneous collections, and slowdowns are the price you pay for all that flexibility. This makes things very hard to predict, both at the compiler level and at the CPU level in terms of branching, which alone is very costly. Dynamic-language programs are not easily compressed.

There are ways around this, and languages like Clojure offer a sort of compromise: by being able to lock certain parts down into a solid reality, structures (in the sense of regularities) coalesce and can be used to speed up parts of your code. You can choose where to trade flexibility for speed.
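The boxing-plus-heterogeneous-collections cost can be made concrete in Java, which is statically typed but lets you opt into the same situation (a rough sketch of the per-element checks a dynamic-language VM performs implicitly on every operation):

```java
import java.util.List;

public class DynamicChecks {
    static double sumNumbers(List<Object> mixed) {
        double total = 0;
        for (Object o : mixed) {
            // Each element's class must be tested at run time before any
            // arithmetic can happen; every branch is a misprediction candidate.
            if (o instanceof Integer i)      total += i;          // unbox
            else if (o instanceof Double d)  total += d;          // unbox
            else if (o instanceof String s)  total += s.length(); // "anything can happen"
        }
        return total;
    }

    public static void main(String[] args) {
        System.out.println(sumNumbers(List.of(1, 2.5, "abc")));
    }
}
```

A homogeneous `double[]` version of this loop needs none of those branches, which is exactly the regularity a static type (or a Clojure type hint) lets the compiler exploit.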

You're right that subtype polymorphism does incur a cost, but modern CPUs handle it well. With parametric polymorphism and value types, though, you get no runtime hit, and some free theorems to boot.
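A free theorem can be sketched even in Java generics (with the caveat that Java erases and boxes, so only the theorem half carries over, not the "no runtime hit" half; the method name here is made up for illustration). Because `rev` knows nothing about `T`, it can only rearrange elements, never invent or inspect them — so any such rearrangement commutes with mapping a function over the list:

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class FreeTheorem {
    // Parametric in T: the implementation cannot depend on what T is.
    static <T> List<T> rev(List<T> xs) {
        List<T> out = new ArrayList<>(xs);
        Collections.reverse(out);
        return out;
    }

    public static void main(String[] args) {
        List<Integer> xs = List.of(1, 2, 3);
        // Free theorem: rev(map f xs) == map f (rev xs), for any f.
        System.out.println(rev(xs.stream().map(x -> x * 10).toList()));
        System.out.println(rev(xs).stream().map(x -> x * 10).toList());
    }
}
```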

That said, how you think has to have some influence. I have never found static types to be constraining; I actually feel like they allow me to more easily plan future consequences. I suppose I trade implementation freedom for the ability to create consequence trees of greater depth and quickly eliminate unproductive branches.


That's a really good observation. I think in cases like Clojure, the dynamic problems aren't quite as bad because of the emphasis on immutability: the compiler can readily generate single-assignment forms from let and def bindings. With type hints, method calls on variables should then be computable at compile-time. (Where type hints are absent, obviously, you pay the runtime reflection cost.)
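The hinted-vs-unhinted difference boils down to two call paths at the JVM level (a rough Java sketch of what the emitted bytecode amounts to, not Clojure's actual compiler output): with a type hint the compiler emits a direct, JIT-inlinable call; without one, it must look the method up by name at run time.

```java
import java.lang.reflect.Method;

public class HintCost {
    // With a hint: the receiver type is known, so the call resolves at
    // compile time and the JIT can inline it.
    static int direct(String s) {
        return s.length();
    }

    // Without a hint: look the method up by name at run time, every call.
    static Object reflective(Object o) throws Exception {
        Method m = o.getClass().getMethod("length");
        return m.invoke(o);
    }

    public static void main(String[] args) throws Exception {
        System.out.println(direct("hello"));
        System.out.println(reflective("hello")); // same answer, much slower path
    }
}
```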

One thing I don't quite understand is how expensive the protocol system is; e.g., if I extend a type with a new protocol at run time, perhaps concurrently with the use of an object of that type, how does the compiler handle it? IIRC protocols are backed by JVM interfaces, so it may just be an update to the interface method table, which is resolved... by invokeinterface, right? I imagine you could pay a significant cost in terms of branch misprediction for the JVM's runtime behavior around interfaces...
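For flavor, the JVM does let you conjure an interface implementation after the program has started, via dynamic proxies (a hedged sketch of runtime extension in general — this is not how Clojure's protocol extension actually works, which also keeps per-protocol dispatch caches; the `Greet` interface here is made up):

```java
import java.lang.reflect.Proxy;

public class RuntimeExtend {
    interface Greet { String greet(String name); }

    static Greet makeGreeter() {
        // The implementation body is supplied at run time; callers still
        // reach it through ordinary invokeinterface dispatch on the proxy.
        return (Greet) Proxy.newProxyInstance(
            Greet.class.getClassLoader(),
            new Class<?>[]{ Greet.class },
            (proxy, method, args) -> "hello, " + args[0]);
    }

    public static void main(String[] args) {
        System.out.println(makeGreeter().greet("clojure"));
    }
}
```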



