Hacker News | nponeccop's comments

I wonder if there are other such typos


Can modern garbage collectors deal with almost-full heaps? There is an urban legend that GC only performs well with 50% of memory free, which is certainly unacceptable for microcontrollers. Also, most garbage collectors require boxing, which prevents tight packing of heap data. Are all these problems already solved somewhere? Jitter is the least important issue, as there is a lot of work on realtime collectors.


Unfortunately I've no idea how the various GCs deal with the trade-offs; my point is that (memory bandwidth / memory size) in these embedded things is high enough that garbage collection is fast, contrary to popular wisdom. I was looking at OCaPIC as an example; it has a very simple stop-and-copy GC which takes 1.5 ms to collect. The trade-offs would have some 2× impact on some metric, but wouldn't change the feasibility of the thing.
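To make "very simple stop-and-copy GC" concrete, here is a toy semispace collector in the Cheney style, sketched in Python. All names and the object representation are mine for illustration; OCaPIC's actual collector works on raw words, not Python dicts.

```python
# Toy semispace (stop-and-copy) collector, Cheney-style sketch.
# Objects are dicts holding a list of references; illustrative only.

class Heap:
    def __init__(self, size):
        self.size = size          # max objects per semispace
        self.from_space = []      # currently allocated objects
        self.roots = []

    def alloc(self, fields=()):
        if len(self.from_space) >= self.size:
            self.collect()
            if len(self.from_space) >= self.size:
                raise MemoryError("heap full even after GC")
        obj = {"fields": list(fields), "forward": None}
        self.from_space.append(obj)
        return obj

    def collect(self):
        # Cheney's algorithm: copy the roots, then scan to-space with a
        # single pointer; reachable objects are copied breadth-first and
        # old copies get a forwarding pointer so sharing is preserved.
        to_space = []

        def copy(obj):
            if obj["forward"] is None:
                clone = {"fields": list(obj["fields"]), "forward": None}
                obj["forward"] = clone
                to_space.append(clone)
            return obj["forward"]

        self.roots = [copy(r) for r in self.roots]
        scan = 0
        while scan < len(to_space):
            to_space[scan]["fields"] = [copy(f) for f in to_space[scan]["fields"]]
            scan += 1
        self.from_space = to_space
```

The pause is proportional to the live data only, which is why such a collector can finish in 1.5 ms on a tiny heap: garbage is never touched, it is simply left behind in from-space.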


My point is that a 2× slowdown is critical for many systems: embedded, gamedev, HPC, systems software. Controlled measurement of the slowdown and heap slack caused by GC is beyond the capabilities of most embedded devs. Academia has provided some experimental results saying that in half-empty heaps GC is lightning fast, but there's no (recent) data for 90%-full heaps or for very large heaps.
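The "half-empty heaps" folklore has a back-of-envelope explanation for copying collectors: each collection copies the live data and only recovers the free space, so the copying work per byte allocated blows up as the heap fills. A tiny model (my own simplification, not from any cited paper):

```python
# Copying-GC cost model: each collection copies `live` bytes and
# frees (heap - live) bytes for new allocations, so the amortized
# copying cost per allocated byte is live / free.

def gc_overhead(heap, live):
    free = heap - live
    return live / free

for occupancy in (0.5, 0.75, 0.9):
    print(f"{occupancy:.0%} full -> "
          f"{gc_overhead(1.0, occupancy):.1f} bytes copied per byte allocated")
```

At 50% occupancy the overhead is 1 byte copied per byte allocated; at 90% it is 9, an order of magnitude worse, which is why the 90%-full regime deserves its own measurements.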


I wonder what happened to SELF-like object systems (slots, prototypes and all that).


JavaScript


I mean high-performance implementations. Techniques to (statically) compile prototypes into efficient code were developed, but they are largely irrelevant to JavaScript. High-performance modern implementations of JS rely on tracing JITs.
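One of the SELF-era techniques that did survive into JS engines is "maps" (hidden classes): objects that add the same slots in the same order share one layout descriptor, so a slot access becomes an array index instead of a dictionary lookup. A minimal sketch (names are illustrative, not any engine's API):

```python
# SELF-style "maps" / hidden classes: objects with the same slot
# layout share a Map, reached via shared transitions per slot name.

class Map:
    def __init__(self, slots=()):
        self.slots = {name: i for i, name in enumerate(slots)}
        self.transitions = {}          # slot name -> successor Map

    def with_slot(self, name):
        if name not in self.transitions:
            self.transitions[name] = Map(tuple(self.slots) + (name,))
        return self.transitions[name]

EMPTY_MAP = Map()

class Obj:
    def __init__(self):
        self.map = EMPTY_MAP
        self.storage = []              # flat slot array, indexed via the map

    def set(self, name, value):
        if name in self.map.slots:
            self.storage[self.map.slots[name]] = value
        else:
            self.map = self.map.with_slot(name)   # shared transition
            self.storage.append(value)

    def get(self, name):
        return self.storage[self.map.slots[name]]
```

Because two objects built the same way end up with the identical `Map`, a compiler (or inline cache) can specialize accesses to a fixed offset whenever the map check succeeds.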


Pretty sure polymorphic inline caches are standard fare in both JavaScript and other VMs (JVM, CLR, etc.). What other techniques are you referring to?
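For concreteness, a polymorphic inline cache is just a per-call-site memo of (type, method) pairs consulted before the generic lookup. A sketch under my own naming, not any VM's internals:

```python
# Minimal polymorphic inline cache (PIC): a call site remembers which
# method each receiver type resolved to, so repeated calls on the same
# types skip the slow lookup path.

class InlineCache:
    def __init__(self, slow_lookup, capacity=4):
        self.slow_lookup = slow_lookup
        self.entries = []              # list of (type, method), checked in order
        self.capacity = capacity
        self.hits = 0
        self.misses = 0

    def __call__(self, receiver, *args):
        t = type(receiver)
        for cached_type, method in self.entries:
            if cached_type is t:
                self.hits += 1
                return method(receiver, *args)
        self.misses += 1
        method = self.slow_lookup(t)
        if len(self.entries) < self.capacity:   # beyond capacity: megamorphic
            self.entries.append((t, method))
        return method(receiver, *args)

class Dog:
    def speak(self): return "woof"

class Cat:
    def speak(self): return "meow"

site = InlineCache(lambda t: t.speak)
sounds = [site(a) for a in (Dog(), Cat(), Dog(), Dog(), Cat())]
# only the first Dog and first Cat take the slow path
```

Real VMs go further by emitting the cached dispatch as straight-line machine code and by feeding the cache contents to the optimizing compiler as type feedback.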


See also Vala, Mercury, Felix and Haxe.


It is not necessary. With a software replacement for the MMU possible (see Singularity OS), hardly any hardware safety support is required nowadays. TAL (typed assembly language) is the way to go: AFAIK it is possible to make plain x86 assembly dependently typable by attaching proof witnesses to the EXE.


I'm shamelessly adding my own HNC here :) http://code.google.com/p/inv/ https://github.com/kayuri/HNC/wiki I'm moving to GitHub, hence the 2 repos. "C++ is an evil language used at early stages of HNC development", so it will go away. See also discussions on reddit (submitted not by me, lol) http://www.reddit.com/r/haskell/comments/mldzq/hnspl_a_bette... and here on HN a long time ago http://news.ycombinator.com/item?id=2116337 (submitted by me)


LLVM bitcode does not exist after linking; it turns into the usual machine code. It is not even a JIT compiler, let alone an interpreter.


What about Cyclone?


I'm not sure I've ever heard the name before. Looks interesting: gradually safer than C, I like that.


Well, BitC is not too popular either. Another interesting experiment is Sing#/Singularity, a successful attempt to match the IO performance of FreeBSD despite designing the whole OS in a modified, dependently-typed C# with the usual JIT and garbage collection. The question of whether GC is applicable in memory-constrained environments is still open; I couldn't find a single piece of research in this direction.


Looks like you have never taken academic papers seriously :)


The dissertation says it's Deca. DecaC stands for Deca Compiler.

