
That's absolutely insane at that clockrate. The way I would get 'animations' (for want of a better term) done is by rendering them frame-by-frame, compressing that and then playing it back at high speed. And even that was next to impossible. Decompressing video @60fps, and doing real-time dithering to increase the effective number of colours and still have time enough for 45KHz audio is totally nuts. This qualifies as art, not just software.


For me, the most interesting part is that his solution - updating only the changed parts between each frame and the previous one, and approximating the changes so that they're not (too) visually perceptible in order to satisfy a bitrate constraint - is one of the ways that modern video codecs achieve their compression.
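To make the idea concrete, here's a toy sketch of delta encoding in Python: store only the pixels that changed between frames, which is the same basic idea behind inter-frame compression in modern codecs. This is illustrative only, not the demo's actual format.

```python
# Toy delta encoder: store only the pixels that changed between two frames.
# Illustrative sketch, not the actual on-disk format the demo uses.

def encode_delta(prev, curr):
    """Return a list of (index, new_value) pairs for changed pixels."""
    return [(i, c) for i, (p, c) in enumerate(zip(prev, curr)) if p != c]

def apply_delta(prev, delta):
    """Reconstruct the current frame from the previous frame plus a delta."""
    frame = list(prev)
    for i, v in delta:
        frame[i] = v
    return frame

frame1 = [0, 0, 0, 0, 5, 5, 5, 5]
frame2 = [0, 0, 1, 0, 5, 5, 9, 5]

delta = encode_delta(frame1, frame2)
assert apply_delta(frame1, delta) == frame2
assert len(delta) == 2  # only two pixels changed, so only two pairs stored
```

Real codecs do this per block rather than per pixel, and add the lossy "approximate the changes" step on top to hit the bitrate target.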

I agree it's also amazing that apparently, the true limitations of hardware from over 30 years ago are still rather elusive... this is the complete opposite of the "throw more hardware at it" attitude towards most software problems today, but instead it's "throw more brainpower at it".


The more I progress in our domain of expertise, the more I observe we're being incredibly wasteful† all over the place. For all the expressive power of our platforms and languages, it somehow sounds insane that time ruby -e '100_000_000.times {}' reports four solid seconds on my 3.4GHz machine††. I know, bogoMIPS are no benchmark; this is just to illustrate that layers of abstraction, while useful (necessary, even), are also harmful. The underlying question: how many layers are too many layers?

I dream of a system redesigned from the ground up, where hardware and software components, while conceptually isolated, cooperate instead of segregating each other into layers. See how ZFS made previously segregated layers cooperate to offer a robust system, see how TRIM operates at the lowest hardware levels by notifying drives of filesystem events, see how OSI levels get pierced through for QoS and reliability concerns. Notice how the increase in layers, and thus in holistic complexity, rampantly leads to more bugs, more vulnerabilities, more energy wasted. We all know the fastest code is the code that does not execute, the most robust code is the code that doesn't get written, the most secure code is the code that doesn't exist. Why do I still see redraws and paintings and flashes in 2014? Why does a determined adversary have such a statistical advantage that he is almost guaranteed to get a foothold into my system? This is completely unacceptable. For as much as we love playing with it, the whole web stack, while a significant civilization milestone, is, as a whole, a massive technological failure (the native stack barely fares better).

† I consider wasteful and bloated subtly distinct

†† not at all an attack on Ruby, just what I happen to have at hand right now


I think the underlying cause of this overabstraction is largely a result of abstraction being excessively glorified (mostly) by academics and formal CS curricula. In some ways, it's similar to the OOP overuse that has thankfully decreased somewhat recently but was extremely prevalent throughout the 90s. In software engineering, we're constantly subjected to messages like: Abstraction is good. Abstraction is powerful. Abstraction is the way to solve problems. More abstraction is better. Even in the famously acclaimed SICP lecture series [1] there is this quote:

"So in that sense computer science is like an abstract form of engineering. It's the kind of engineering where you ignore the constraints that are imposed by reality."

There is an implication that we should be building more complex software just because we can, since that is somehow "better". Efficiency is only thought of in strictly algorithmic terms, constants are ignored, and we're almost taught that thinking about efficiency should be discouraged unless absolutely necessary because it's "premature optimisation". The (rapidly coming to an end) exponential growth of hardware power made this attitude acceptable, and lower-level knowledge of hardware (or just simple things like binary/bit fields) is undervalued "because we have these layers of abstraction" - often leading to adding another layer on top just to reinvent things that could be easily accomplished at a lower level.

The fact that many of those in the demoscene who produce amazing results have never formally studied computer science leads me to believe that there's a certain amount of indoctrination happening, and I think to reverse this there will need to be some very massive changes within CS education. The demoscene is all about creative, pragmatic ways to solve problems by making the most of available resources, which often leads to very simple and elegant solutions; that is something that should definitely be encouraged more in mainstream software engineering. Instead, the latter seems more interested in building large, absurdly complex, baroque architectures to solve simple problems. The "every byte and clock cycle counts" attitude might not be ideal for all problems either, but not thinking at all about the amount of resources really needed to do something is worse.

> how many layers are too many layers?

Any more than is strictly necessary to perform the given task.

[1] http://www.youtube.com/watch?v=2Op3QLzMgSY#t=10m28s


"Demoscene is all about creative, pragmatic ways to solve problems by making the most of available resources"

It probably doesn't hurt that nobody expects a demo scene app to adapt to radical changes in requirements, or to interoperate with other things that are changing as well - for that matter, to even conform to any specific requirements other than "being epic".

For instance, the linked 8088 demo encodes video in a format that's tightly coupled to both available CPU cycles and available memory bandwidth. Its goal is "display something at 24fps".

Not that I'm a fan of abstraction-for-its-own-sake, but putting scare-quotes around real problems like premature optimization is an excessive counter-reaction.


The period up to the ~'60s gave us a vast theoretical foundation, and from then on we toyed with it, endlessly rediscovering it (worst case) or prodding slightly forward (best case), trying to turn this body of knowledge into something useful while accreting it into platforms of code, copper and silicon. My hope is that the next step will eventually be for some of us to stop our prototyping, think about what matters, and build stuff this time, not as a hyperactive yet legacy-addicted child, but as a grown-up, forward-thinking body that understands it's not just about a funny toy or a monolithic throwaway tool that will end up lasting decades, but about a field that has a purpose and a responsibility.

To correct the quote:

Computer science is not an abstract form of engineering. Software engineering (and hardware engineering, where the hardware is made to run software) is leveraging CS within the constraints imposed by reality.

> Any more than is strictly necessary to perform the given task.

Easy to say, but hard to define up front when 'task' is an OS + applications + browser + the hardware that supports it ;-)

This[0] is the typical scenario I'm hoping we would build a habit of doing.

[0]: http://www.folklore.org/StoryView.py?story=Negative_2000_Lin...


> abstraction being excessively glorified (mostly) by academics and formal CS curricula.

It's not just academics, it's many developers, too.

We're in an old-school thread. We like what's really going on. Hang out in the Web Starter Kit from last night though, and you'll find tons of people who glorify abstraction.

The reality is that competing forces spread out the batter in different directions: the abstractionists write Java-like stuff. The old-schoolers exploit subtle non-linearities.

Actual commercial shipments rely on a complex "sandwich" of these opposed practices.

> Demoscene is all about creative, pragmatic ways to solve problems

Yes and I grew up with the demoscene (c64 and amiga 500) and it's also about magic, misdirection, being isolated for long winters and celebrating a peculiar set of values. Focus is shifted toward things that technologists know are possible, such as tight loops running a single algorithm that connects audio or video with pre-rendered data, not on what people want or need, such as CAD software or running mailing lists. Flexibility, integration and portability are eschewed in favor of performance.

Don't get me wrong, I LOVE the demoscene - it's the path that got me to love music. And I have near-total apathy for functional programming. I only code in Javascript when weapons are pointed at my heart, but with the proper balance, there are some very real reasons to make use of abstraction. It's not just academics, it's people solving real problems. The trick is to act strategically with respect to the question: which parts will you optimize and which parts will you offload to inefficient frameworks?


> I think to reverse this there will need to be some very massive changes within CS education.

For instance, starting it in elementary school. A surprisingly large amount of the mathematical portion of CS has very little in the way of prerequisites.


Having been in the demoscene (Imphobia) for a long time and having also done more abstract stuff (quad-tree construction optimizations), I can say that writing a demo is not the same as computing theory. Writing a demo is most often exploiting a very narrow area of a given technology to produce a seductive effect (more often than not, to fake something thought impossible so that it looks possible). So you're basically constraining the problem to fit your solution.

On the other hand, designing pure algorithms is about figuring out a solution to a given, canonical and often unforgiving problem (quicksort, graph colouring?). To me, this is much harder. It involves about the same amount of creativity, but somehow it's harder on your brain: no, you can't cheat; no, you can't linearize n² that easily :-)

To take an example: you can make "convincing" 3D on a C64 in a demo because you can cheat, precalculate, and optimize in various ways for a given 3D scene. Now, if you want the same level of 3D in a video game where the user can look at your scene from unplanned points of view, then you need more flexible algorithms such as BSP trees. So you end up working at the algorithmic/abstract level...
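To illustrate what that flexibility buys you, here's a minimal, hypothetical 2D BSP sketch in Python (real engines also split polygons that span the partition; this toy assumes none do): a back-to-front traversal gives correct painter's-algorithm draw order from any viewpoint, which is exactly what a precalculated demo scene can't do.

```python
# Minimal 2D BSP sketch: wall segments act as splitting planes, and a
# back-to-front traversal yields painter's-algorithm order for ANY viewer.
# Illustrative only; assumes no segment spans a splitter (no splitting).

def side(seg, p):
    """Which side of seg's supporting line is point p on? (+1, -1, or 0)"""
    (ax, ay), (bx, by) = seg
    cross = (bx - ax) * (p[1] - ay) - (by - ay) * (p[0] - ax)
    return (cross > 0) - (cross < 0)

def build(segs):
    """Build a BSP tree, using each list head as the splitter."""
    if not segs:
        return None
    splitter, rest = segs[0], segs[1:]
    mid = lambda s: ((s[0][0] + s[1][0]) / 2, (s[0][1] + s[1][1]) / 2)
    front = [s for s in rest if side(splitter, mid(s)) > 0]
    back = [s for s in rest if side(splitter, mid(s)) <= 0]
    return (build(front), splitter, build(back))

def back_to_front(node, viewpoint):
    """Yield segments farthest-first relative to the viewpoint."""
    if node is None:
        return
    front, splitter, back = node
    near, far = (front, back) if side(splitter, viewpoint) > 0 else (back, front)
    yield from back_to_front(far, viewpoint)
    yield splitter
    yield from back_to_front(near, viewpoint)

# Three parallel walls at x = 1, 0, 2 (the x=1 wall becomes the root).
walls = [((1, 0), (1, 1)), ((0, 0), (0, 1)), ((2, 0), (2, 1))]
tree = build(walls)
# Viewer on the right: the farthest wall (x=0) must be drawn first.
assert list(back_to_front(tree, (3, 0.5))) == \
    [((0, 0), (0, 1)), ((1, 0), (1, 1)), ((2, 0), (2, 1))]
# Viewer on the left: the order reverses, with no precalculation needed.
assert list(back_to_front(tree, (-1, 0.5))) == \
    [((2, 0), (2, 1)), ((1, 0), (1, 1)), ((0, 0), (0, 1))]
```

The demo approach bakes one viewpoint's answer into the data; the BSP pays a little generality tax up front and then answers every viewpoint for free.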

A very good middle ground here was Quake's 3D engine. It used a BSP engine optimized with regular techniques (including the very smart idea of potentially visible sets), but it also used techniques found in demos (M. Abrash's work on optimizing texture mapping is a nice "cheat" -- and super clever).

Now don't get me wrong, academia is not more impressive than the demoscene (though certainly a bit more "useful" for society as a whole). These are just two different problems, and there are bright minds making super impressive stuff in both of them...

stF


"I think to reverse this there will need to be some very massive changes within CS education."

Well, I mean, that is most definitely true regardless. But in my experience getting my BS in CS a few years ago, it had nothing to do with "mainstream software engineering" either. I had classes on formal logic and automata, algorithms (using CLRS), programming-language principles (where we compared the paradigms of Java, Lisp, Prolog, and others), microprocessor design (ASM, Verilog, VHDL), compilers, linear algebra, and so on. There was very little in the way of architecting and implementing large, abstracted, real-world business applications, or anything remotely web-related. In my experience I did not meet anyone interested in glorifying heaps of whiz-bang abstraction; they seemed more in line with the stereotypical "stubbornly resisting all change and new development" camp of academics.


I sense the frustration around this subject is building. What I'm afraid of is that once it boils over into action, it will lead to a repetition of the same moves. That's the hard part: getting a 'fresh start' going is ridiculously easy, and that ease is one of the reasons we have this mess in the first place.

Very hard to avoid the 'now you have two problems' trap.


Indeed. The problem with starting over is that anything you start over with is going to be simpler, at first. Thus potentially faster, easier, etc, etc.

Rewrites are hard and costly, which is rarely taken into account. Even just maintaining a competent fork is hard enough.

I think it's probably worth the effort, but I'm not quite sure how you get from A to B without just having some super competent eccentric multi-billionaire finance a series of massive development projects.


> I think it's probably worth the effort, but I'm not quite sure how you get from A to B without just having some super competent eccentric multi-billionaire finance a series of massive development projects.

And Elon Musk is busy doing rockets and electric cars!


I think it hasn't happened because the people who feel this way are precisely the ones positioned to understand how vast and hard an undertaking it is, not only to attempt, but also to see through to success.

Few have attempted a reboot, yet the zeitgeist is definitely there: ZFS, Wayland, Metal, A7, even TempleOS (or whatever its name is these days). Folks are starting to say to themselves, 'hey, we built things, we learned a ton, we feel the result, while useful, is a mess, but we now genuinely understand that we need to start afresh, and how.' It's as if everyone were using LISP on x86 and suddenly realised they might as well use LISP machines.

I too fear we'll just keep looping, yet my hope is that with each loop our field iteratively improves.


I'd answer in two ways. One, it is already happening. The 10M problem (10 million concurrent open network connections) is solved by getting the Linux kernel out of the way and managing your own network stack: http://highscalability.com/blog/2013/5/13/the-secret-to-10-m... - The beauty of their approach is that they still run Linux on the side to manage the non-network hardware, so you have a stable base to build and debug on.

Two, I am not sure we are that much smarter now than we were then. Since you quoted a language problem, I'll use one myself as an example. See this SO question: https://stackoverflow.com/questions/24015710/for-loop-over-t... . I wanted a "simple" loop over some code instantiating several templates. I say simple because I had first written the same code in Python, found it was too slow for my purposes, and thus rewrote it in C++. In Python this loop is dead simple to implement: just use a standard for loop over a list of factory functions. In C++ I pay for the high efficiency by turning this same problem into an advanced case of template metaprogramming, which in the end didn't even work out for me because one of the arguments was actually a "template template". And on the other hand, making the C++ metaprogramming environment more powerful has its own set of problems: http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2013/n361...
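The Python version described above really is trivial; here's a hypothetical sketch (the factory names are stand-ins, not the commenter's actual code), showing that Python pays no syntactic price for abstracting over the type being constructed:

```python
# A standard for loop over a list of factory functions: each factory
# knows how to build one kind of object, and the loop treats them
# uniformly. The factories here are hypothetical stand-ins.

def make_counter():
    return {"kind": "counter", "value": 0}

def make_buffer():
    return {"kind": "buffer", "data": []}

def make_logger():
    return {"kind": "logger", "lines": []}

factories = [make_counter, make_buffer, make_logger]

objects = [factory() for factory in factories]
assert [o["kind"] for o in objects] == ["counter", "buffer", "logger"]
```

The C++ equivalent has to express "a list of types" rather than "a list of functions", which is what pushes the same idea into template metaprogramming.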


I'm finding that an inherent psychological part of software development is to accept that nothing will be perfect. Everything is fucked up at some level, and there's no practical way around it. You just bite the bullet.

You stop worrying and learn to love the bomb.


My machine is slower than yours, and LuaJIT does your hundred-million-iteration benchmark in 0.037s:

    time luajit -e 'for i=1,100000000 do end'
    
    real	0m0.037s
    user	0m0.034s
    sys         0m0.002s
Just plain old Lua

    time lua -e 'for i=1,100000000 do end'
    
    real	0m0.502s
    user	0m0.497s
    sys         0m0.004s


> "throw more brainpower at it"

Back then there was simply no other way. I remember doing a 3D real-time fly-by of a big architectural development in Amsterdam ("Meervaart") in the 80's. I custom-built the machine, pulled a trick where I clocked the FP coprocessor faster than the main processor, and had a Tseng graphics card (just about as fast as they went at the time). All the rest was software: hidden-line removal, 800x600 on some primitive beamer at 25 fps. It was the best I could do at the time, and it took many weeks to prepare for that demo. Just digitizing the whole neighbourhood was a monk's job; I still have the aerial photograph as a souvenir from the job.

I got paid with a rusty old car that I wanted the engine from :)


Wow, as someone who saw some "cutting edge" 3D as a young student in the early 90's, this is beautiful. Weren't the Tseng cards in the 80's pretty much the first consumer cards with features hinting at FMV / 3D? I was a tad young to know the details; I know their cards in the early 90's were incredible, but I wasn't there for the first Tseng Labs stuff. Friends of mine claim the early Tseng stuff was so impressive they suspected fakery in some of the demos!

Your clocking antics remind me of when I had to match a motherboard/processor combo to the maximum serial data rate acceptable to an old milling machine. The controlling software was no longer supported and relied on the clock speed for timing (disastrous for controlling motors/servos etc.), so I trialled a bunch of processor/MB combos until the milling machine accepted the output. It involved underclocking a Cyrix Cx-something on an unknown-brand MB that supported non-standard clock multipliers.

I got paid with a set of 5 year old race skis :-)


I loved the Tseng mostly because of its nice memory map and the fact that the registers weren't very secret. Before that it was "VGA Wonder" (ATI).

The Tseng vesa cards did not do 3D but they were blisteringly fast (for the time) if you knew how to hit them 'just so'. Do everything by the row and avoid bank switches at all cost.

The funny thing is that the driver I wrote for the card was only about 2% Tseng-specific. gp_wdot, gp_rdot, gp_wrow and gp_rrow were the only routines, out of about 150 or so, that were optimized, and they were quite short to begin with. That alone was enough to get very close to the maximum bandwidth between the CPU and the graphics memory (this was across the VLB).

I like your clocking trick a lot better than mine; I just soldered an extra socket for an oscillator to the motherboard and ran one wire under the chip to the right pin (and I cut one trace on the motherboard). Then I kept plugging in oscillators until the FP chip started to behave weirdly (and then added a little fan and pushed it some more :) ).

Interesting how those payments worked out.

Now I'm seriously wondering if there is a way in which I could resurrect that demo. No idea what I did with the data, I probably still have the code in some form or a descendant of it.

This was the card I originally wrote the code for:

http://www.vgamuseum.info/index.php/component/content/articl...

But by then I may have upgraded to a et4000 (the 3000 was 16 bit ISA).


Your clocking trick is exactly what I spent 3 weeks swapping CPU / MB combos trying to avoid!

Kudos for actually doing it, and making it work!


Oh trust me, if I had had the money I would have happily pursued your route. Cutting a trace on your only working computer and soldering bits & pieces onto the motherboard in order to land a job (talk about risk/reward; I'm not sure how I would have worked without that machine, but I really wanted that engine ;) ) made me pretty nervous. If I could have saved myself that batch of cold sweat I would have happily done so.

What got me is that it worked at all; I fully expected there to be some level of synchronization between the chips that would require both of them to be clocked at the same rate. The only reason I tried this is that the main CPU had reached its limit, and I figured it was worth a shot to see if the FP could go faster. And it did, and not just a little bit faster! Apparently Intel's engineers were quite thorough when they designed the interaction between the two processors, because in spite of the huge discrepancy in clock speed between the two chips it worked incredibly well.


> the true limitations of hardware from over 30 years ago are still rather elusive

That was the basic idea that kept the Apple II line alive for ~15 years on an 8-bit processor running at 1 MHz. Of course, at the end there were a handful of faster configurations, but the IIgs @ 2.8 MHz and the short-lived IIc+ at 4 MHz were the only machines Apple produced with faster processors.


Why the Apple II was still kept around for that long is kind of a mystery to me. It wasn't games. Maybe educational customers? Maybe next to no migration path for business users? I had an uncle who ran a veterinary clinic off AppleWorks and several floppies' worth of data for god knows how long. "Works for me" is a powerful force, and they'd probably squeezed all the costs out of the Apple II line.


"Apple II was still kept around for that long is kind of a mystery to me."

There was a very strong following, especially in the educational market. I remember seeing schools purchasing labs of IIGS's as late as the early 1990's.

Basically, the Apple ][ was the cash cow that kept Apple afloat for years while they tried to sell 68k Macs. Apple tried to kill the II for a decade but was never successful enough to just cut off the customer base that was crying for new models.


There are many reasons, but one of the big subtleties that should be remembered is that the Apple II essentially had two great epochs:

Epoch 1: the Apple II sold with no expansion cards, but many expansion slots. Hackers and business designed addons for years.

Epoch 2: the Apple IIe (and later IIc) sold with an optimal set of expansion cards built in.

So you had one generation of experimentation and a second generation that leveraged all that hard work!


"Hackers and business designed addons for years."

Hackers and "business" continue to design and sell cards for them!

(CompactFlash & USB-storage interface card) http://dreher.net/?s=projects/CFforAppleII&c=projects/CFforA...

(ethernet boards) http://a2retrosystems.com/

(RAM boards) http://www.brielcomputers.com/wordpress/?p=321


See also http://bespin.org/~qz/pc-gpe/fli.for - the .fli format was pretty common at one time and does this same delta encoding.


> real-time dithering

Wasn't dithering done before the encoding? I thought that was the reason he needed ordered dithering.
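For context, the core of ordered dithering is just a tiled threshold matrix, which is what makes it cheap enough for real time: the threshold depends only on pixel position, with no error propagated between pixels. A hypothetical sketch (not the demo's actual code):

```python
# Ordered (Bayer) dithering sketch: compare each pixel against a
# position-dependent threshold from a small tiled matrix. Illustrative
# only; the demo's real implementation differs.

BAYER_2X2 = [[0, 2],
             [3, 1]]  # classic 2x2 index matrix

def dither(gray, width, height):
    """Map 0..255 grayscale pixels to a 1-bit image (row-major list)."""
    out = []
    for y in range(height):
        for x in range(width):
            threshold = (BAYER_2X2[y % 2][x % 2] + 0.5) / 4 * 255
            out.append(1 if gray[y * width + x] > threshold else 0)
    return out

# A uniform mid-gray field dithers to a 50% checkerboard pattern.
mid_gray = [128] * 16
result = dither(mid_gray, 4, 4)
assert sum(result) == 8            # half the pixels are on
assert result[:4] == [1, 0, 1, 0]  # alternating along the first row
```

Whether this ran at playback time or at encoding time is exactly the question above; either way, the per-pixel cost is just a table lookup and a compare.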


"This qualifies as art, not just software."

For me there was never any doubt.

A bit unrelated, but I've got an old 5150 at my parents' place, so when I'm visiting next Xmas I'll try to load this demo onto it. The only problem is transferring files to it: it only has a 5.25" floppy drive, and I don't have a means of copying files onto those floppies.

Any suggestions?


I have, in the past, been forced to type an Xmodem transfer program into debug.com's hex mode, to get to the point where I can transfer files over a null-modem connection from another box. I can dig up the file in question, if that'd help you out at all.

I ended up typing it in 1k at a time, and independently typing in a CRC32 utility to check that I'd done it properly.

(That was to install Windows 98 on a computer with no drives, if I recall. So, not so very long ago.)


That's how I used to transfer files to my coding buddies.

On the phone, hex dump in S-record format, then read out loud while the other side would type in the line. Checksum matches? Next line...
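The checksum doing the work in that ritual is simple; here's a sketch of the Motorola S-record flavour (an illustrative reconstruction of the format, not anyone's original tool):

```python
# Sketch of the "S1" records read out over the phone: a count byte, a
# 16-bit address, the data bytes, then a checksum that is the one's
# complement of the low byte of the sum of everything after the type.

def s1_record(address, data):
    """Build an S1 record line for a chunk of data at a 16-bit address."""
    count = len(data) + 3              # 2 address bytes + data + checksum
    body = [count, (address >> 8) & 0xFF, address & 0xFF] + list(data)
    checksum = 0xFF - (sum(body) & 0xFF)
    return "S1" + "".join(f"{b:02X}" for b in body + [checksum])

def verify(record):
    """All bytes after the type, including the checksum, must sum to 0xFF."""
    body = bytes.fromhex(record[2:])
    return sum(body) & 0xFF == 0xFF

rec = s1_record(0x1000, b"\x01\x02\x03")
assert verify(rec)
corrupted = rec[:-2] + "00"            # a mistyped checksum is caught
assert not verify(corrupted)
```

One line, one checksum: a single mistyped hex digit almost always breaks the sum, which is why "checksum matches? next line" was a workable protocol.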

Our respective parents were not too happy about this unplanned usage of their phone lines but it saved a ton of cycling.


That's how you do it! What's today's equivalent that kids do?


Yes, I would love to find that specific program - as there are several Xmodem transfer programs out there and I'd like to use one that's not only small in size, but also most likely to work.


You can run Norton Commander on both computers and set them to connect mode. One PC is set as master and the other as slave; connect them physically with a parallel cable.


Good idea - but then you would need Norton Commander, which I don't have for some reason.


KryoFlux is a USB-based floppy controller http://www.kryoflux.com/


Will check it out!


Squirt it in bit by bit down a serial port?


Laplink?


Isn't dithering done offline at encoding time?



