
Amazing! Very impressive how you got all the details in. I'd be interested in hearing what the development process is like.

Do you work directly in the minified code, or do you create the demo in normal code first then look for ways to minimize it?



I'm not the author but I've done this before [1], so here's what I could quickly work out (EDIT: updated using information gathered from p01's comment below):

    // c is a canvas created outside
    d = [ // 2 times audio frequencies used, I think
      2280,
      1280,
      1520,
      c.width = 1920,
      // d[4] is not used, not sure why this stmt was stuffed into d
      // required to hide the PNG bootstrap; the bare minimum would be `0`, but this probably compresses better?
      document.body.style.font = "0px MONOSPACE"
    ],
    g = new AudioContext,
    o = g.createScriptProcessor(4096,
                                // clears the margin and initializes vars
                                // (t: time in seconds, n: last t when speak occurred)
                                document.body.style.margin = t = n = 0,
                                1),
    o.connect(g.destination),
    o.onaudioprocess = o => { // periodically called to fill the audio buffer, used in place of setInterval
      o = o.outputBuffer.getChannelData(
        e = Math.sin(
          t / 16 % 1, // this is the only arg to sin, others are for shoving exprs into a single stmt
          m = Math.sin(Math.min(1, y = t / 128) * Math.PI) ** .5 + .1,
          c.height = 1080, // setting canvas.width/height clears the canvas
          b.shadowOffsetY = 32420,
          // results in `radial-gradient(#222,black` or so, reinterpreting the decimal number as hex; the final `)` is not required
          c.style.background = "radial-gradient(#" + [222, 222, 222, 222, 155, 155, 102, 102][t / 16 & 7] + ",black",
          b.font = "920 32px MONOSPACE",
          // each function determines the dot size for 16 seconds, also sometimes used as a display text
          f = [
            (x, y, t) => x / y * 2 - t,
            (x, y, t) => (x ** 2 + y ** 2) ** .5 - t,
            (x, y, t) => x / 4 ^ y / 4 - t,
            (x, y, t) => y % x - t
          ][t / 16 & 3],
          // determines a string to print and speaks it every 16 seconds
          // the inner [...][t/16|0] can return undefined, which gets coerced to an empty string by `""+[...]`
          u = "" + [[, f, f, " CAN YOU HEAR ME", f, f, , "MONOSPACE", "THE END"][t / 16 | 0]],
          t > n && speechSynthesis.speak(new SpeechSynthesisUtterance(u, n += 16)))
      );
      for (i = 0; 4096 > 4 * i; i++) // for each dot; `4096>4*i` probably compresses better than `1024>i`
        // calculate the dot size and mix with the radius in the previous frame for easing
        // f and g are objects (function and AudioContext), so can be abused as a generic store
        g[i] = r = (f(x = 16 - i % 32, a = 16 - (i / 32 | 0), t) / 2 & 1) + (g[i] || 0) / 2,
        x += o[0] / 4 + 4 * (1 - m ** .3) * Math.sin(i + t + 8),
        a += o[64] / 4 + 4 * (1 - m ** .3) * Math.sin(i + t),
        h = x * Math.sin(y * 2 + 8) + a * Math.sin(y * 2),
        p = 4096 / (m * 32 + 4 * h * Math.sin(e) + t % 16),
        b.beginPath(f[i] = r / p),
        b.arc(h * Math.sin(e + 8) * p + 1280,
              x * Math.sin(y * 2) * p - a * Math.sin(y * 2 + 8) * p - 31920,
              p > 0 && p / (2 + 32 - r * 16),
              0,
              8), // anything larger than `2*Math.PI` will draw a full circle
        b.shadowBlur = o[0] ** 2 * 32 + 32 - m * 32 + 4 + h * h / 2,
        // `[a,b,c]` coerces into a string `a,b,c`
        b.shadowColor = "hsl(" + [f(x, y, t) & 2 ? t - a * 8 : 180, (t & 64) * m + "%", (t & 64) * m + "%"],
        b.fill();
      b.shadowBlur = o[0] ** 2 * 32,
      b.shadowColor = "#fee";
      for (i = 0; 4096 > i; i++) // generate each sample, also prints the glitched text
        o[i] = o[i] / 2 + (
          (
            Math.sin(t * d[t / [4, 4, 4, 4, 1/4, 1/4, 16, 4][t / 16 & 7] & 3] * Math.PI) * 8 +
            (t * d[t / 8 & 3] / 2 & 6) + t * d[t / 16 & 3] / 4 % 6
          ) / 64 + f[i / 4 | 0] // f[0..1023] is the visual data, reused as a noise
        ) * m,
        // prints at most 64 characters of u;
        // 0th and 64th samples (o[0] & o[64]) of the prev/current buffer act as x/y jitter,
        // first 64 samples also displaces the char offset for the glitched text effect
        64 > i & t % 16 * 6 > i &&
          b.fillText([u[i + (o[i] * 2 & 1)]], // again, [undefined] coerces into an empty string
                     i % 9 * 32 + o[0] * 16 + 180,
                     (i / 9 | 0) * 64 + o[64] * 16 - t - 31920),
        t += 1 / g.sampleRate // so t increments by 1 per second
    }
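One micro-golf trick worth isolating from the annotations above (a plain sketch, not code from the demo): wrapping a possibly-undefined value in an array before string concatenation coerces `undefined` to the empty string, whereas direct concatenation would print the word "undefined".

```javascript
// Array-to-string coercion goes through Array.prototype.join,
// which renders undefined/null elements as empty strings.
const words = ["HELLO", undefined, "WORLD"];
const picked = "" + [words[1]]; // "" + [undefined] -> ""
const direct = "" + words[1];   // "" + undefined  -> "undefined"
console.log(JSON.stringify([picked, direct]));
```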
While the obfuscation itself is fairly standard, I think the real magic here is the carefully selected motion and jitters, which I can't easily figure out at a glance.
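The setInterval replacement noted in the annotations can be checked in isolation: each audio callback fills 4096 samples, and bumping t by 1/sampleRate per sample makes t count wall-clock seconds. A standalone sketch (the sample rate is hardcoded here as an assumption; the real demo reads it from the AudioContext):

```javascript
const sampleRate = 44100;  // typical AudioContext rate (assumed)
const bufferSize = 4096;   // same buffer size the demo requests
let t = 0;
// enough callbacks to cover at least one second of audio
const calls = Math.ceil(sampleRate / bufferSize);
for (let call = 0; call < calls; call++)
  for (let i = 0; i < bufferSize; i++)
    t += 1 / sampleRate;   // t advances by exactly the audio time rendered
console.log(t.toFixed(3) + " seconds simulated"); // slightly over 1s
```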

> Do you work directly in the minified code, or do you create the demo in normal code first then look for ways to minimize it?

Also, in my experience you end up structuring everything so that it is easily minifiable (by hand or with something like terser-online [2]). This doesn't necessarily mean the code is unreadable (variables can be renamed, statements can be converted to comma expressions, and so on), but the resulting code will be very unorthodox. See the source code of my JS1024 entry for an example.
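For illustration (my own toy example, not code from either entry), "structuring for minifiability" often means collapsing a multi-statement body into comma expressions, so a hand pass or a minifier can shrink it almost mechanically:

```javascript
// Readable prototype: separate statements, descriptive names.
function stepReadable(state) {
  state.t += 0.1;
  state.x = Math.sin(state.t);
  return state.x * 2;
}

// Minify-friendly restructuring: comma expressions collapse the body
// into a single expression, so the function can become a short arrow.
const stepGolfed = a => (a.t += 0.1, a.x = Math.sin(a.t), a.x * 2);

const s1 = { t: 0, x: 0 }, s2 = { t: 0, x: 0 };
console.log(stepReadable(s1) === stepGolfed(s2)); // true
```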

[1] https://www.js1024.fun/demos/2020#46

[2] https://xem.github.io/terser-online/


Thanks for the expansion and comments! On your project, I noticed the comments were about 5 times longer than the source code :) - so that explains how you work with it.


Hi there.

I do not use any minifier. I minify the code by hand, and typically prototype ideas and performance tests in normal non-minified code. Once the main idea and approach are settled, I minify the code by hand and keep an eye on the heatmap of the DEFLATE stream to stay within the 1024-byte limit.

MONOSPACE took ~4 months on and off to create, tallying ~60h of work. You know, 2020 + trying to balance work & family and remain sane these days.

As I said on my site, the Audio and Visuals feed each other: the background noise is based on each dot, and the camera shake is based on the Audio. That way the Audio & Visuals stay perfectly in sync, for free ;)

For the X & Y camera shake, I use the values 0 and 64 from the Audio buffer. The 64 is because that is the maximum number of characters rendered by the Text writer, which happens in the loop updating the Audio buffer. Using something lower than the 64th value would make the last characters of the Text shake differently from the rest.
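A minimal sketch of that coupling (illustrative names, not the demo's actual variables): two fixed samples of the current audio buffer become the frame's x/y offset, so the shake is driven by, and automatically synchronized with, the sound.

```javascript
// o plays the role of the demo's channel data (Float32Array in [-1, 1]).
// Sample 0 drives x and sample 64 drives y; per the explanation above,
// any index below 64 would also displace the glitched text.
function cameraShake(o, scale = 16) {
  return { dx: o[0] * scale, dy: o[64] * scale };
}

const o = new Float32Array(4096);
o[0] = 0.5; o[64] = -0.25;
console.log(cameraShake(o)); // { dx: 8, dy: -4 }
```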

-- P01, author of MONOSPACE


Hi P01, thanks for your great work. Can you explain what this means?

> keep an eye on the heatmap of the DEFLATE stream

How is the heatmap generated?


I use gzThermal by Caveman - https://encode.su/threads/1889-gzthermal-pseudo-thermal-view...

He is very talented and was nice enough to implement a couple of features to help with this kind of production when I was working on BLCK4777 - https://www.p01.org/BLCK4777


> tallying ~60h of work

This is far less than I expected, which usually means I am hopelessly out of my league on the topic. Great job :)


That is 60h for this project alone.

I made 100s of creative projects and failed experiments before getting there. Don't get scared by that number. All it means is that it is possible to do in 60h. Some would take 600, others 20.

I threw that number out to put it in context with the 4 months since the first prototype. I could only work on it from time to time, sometimes not touching the code at all for weeks.


Awesome work!

That makes sense. I imagine with some experience you have a decent idea about what you can make fit.



