Maybe I'm missing something from my quick read, but the idea of using this for plugins seems like a real misfire. I'm curious whether the devs come from the pro audio world at all. It's one area where trading performance for development time by using high-level garbage-collected languages just isn't done, because your users are always concerned with how many instances they can run and how low they can push latency, and that latency constraint applies across your entire system. So if one plugin hoses the latency by needing a time slice big enough to run a GC inside the audio DSP routine, pro audio buyers won't touch it.
It looks like a nice way to do interactive web audio work, but plugins are a different beast altogether. If you want a place where old-school skating-with-razor-blades pointer coding is still common practice, it's DSP code.
You're totally right that in this domain you have to be extremely careful with performance and latency. Elementary takes great care to deliver that: it's a JavaScript "frontend" API, but all of the actual audio processing is done natively under proper realtime constraints. The core engine is native C/C++; on the web we compile it to wasm and run it inside a web worker, and for a desktop app or audio plugin the native engine is compiled directly into the target binary.
So, while JavaScript and garbage collectors can raise real performance concerns, we only use JavaScript to handle the lightweight virtual representation of the underlying engine state, and for that role JavaScript is plenty fast and the garbage collector is actually quite helpful!
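To make that split concrete, here's a toy sketch of what "JavaScript only holds a lightweight virtual representation" means. This is NOT Elementary's actual API; the helper names and shapes here are invented for illustration. The JS side only builds plain data describing the signal graph, and a (here imaginary) native engine would receive and render that description:

```javascript
// A "node" is just plain data: no samples are processed in JavaScript.
const node = (kind, props = {}, children = []) => ({ kind, props, children });

// Hypothetical helpers in the spirit of an el.* frontend (illustrative only).
const cycle = (freq) => node('cycle', { freq });
const mul = (a, b) => node('mul', {}, [a, b]);
const gain = (g, input) => mul(node('const', { value: g }), input);

// "Rendering" here just serializes the graph description. In the real
// architecture this description crosses the boundary to native C/C++ code,
// which does all the sample-by-sample work on the audio thread.
function render(graph) {
  return JSON.stringify(graph);
}

const patch = gain(0.5, cycle(440));
console.log(render(patch));
```

Building and diffing small plain objects like this is the kind of work a GC'd language handles comfortably; none of it happens on the audio callback.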
That's great on paper, but how do I actually use it? If I want to play back a wavetable, there's el.table(), but it's missing the oversampling and decimation tools required for antialiasing. el.lowpass() is a biquad, which is not well suited to modulation. How can this compete with JUCE when the documentation and features are so sparse?
You should really fix the initial impression the site makes and get that front and centre, though. Experienced audio devs are going to dismiss this unless you make it clear very quickly.
Ah cool, after I posted I was wondering if this was the case. I was just thinking "maybe the renderer is in WASM"?
That's pretty cool, it will be interesting to watch. I do something very vaguely similar in my computer music work where Scheme orchestrates C level objects.
Personally, I wouldn't want to use JS as the high level orchestrator myself as it's such a dog's breakfast of a language, but I can see that many might.
This is my new favorite comment for illustrating the perilous future of general computing and how it needs to be taken away from JavaScript if we have any chance of survival. Electron, Webassembly fetishism, the pursuit of language synchrony at the expense of common sense, it all gets you this. This comment. Right here. This is the future of software and it should scare the shit out of you.
Let me get this straight: you realized latency was a concern, so you wrote in C/C++ (which, exactly?), then turned it into wasm so you can run it in a Web worker? What the hell was the point of that? That’s like saying you bought an M6 and converted it to a foot-powered bicycle. What exactly do you think wasm does? You seem to be implying that you think the native engineering you invested in continues to matter, in the same way, after you do that. You also imply heavily that you understand the wasm you’re executing to still be native. Do you think that? Do you understand what you’re giving up in the Web worker? As in, directly tied to latency and real-time requirements, your whole reason to go native in the first place?
Whatever your response is, deep down, you and I both know it'll be a justification to do Web things for Web's sake. I know this because everyone I've had this discussion with has played the same notes ("imagine the deployment model!") while failing to understand that they're justifying their preference. The only people who build Web stuff want to build Web stuff. In the high-performance sphere, of which DSP is a very mature corner, this looseness with the fundamentals of software is going to put you out of the game before you've even started.
I'm a web person defending web things, but providing something on the web has a significant advantage.
I know you've heard this again and again, but I can't emphasize it enough.
You can use the site from most platforms, including PCs and mobiles.
You don't have to install software, a single click is enough.
Of course, browsers have considerable limitations, and serious users will eventually choose other tools, but providing such accessible software is a really huge advantage for me.
> You can use the site from most platforms, including PCs and mobiles. You don't have to install software, a single click is enough. Of course, browsers have considerable limitations, and serious users will eventually choose other tools, but providing such accessible software is a really huge advantage for me.
In the audio world that is cool for toys.
There are a lot of really cool things to play with in the browser/multi media space, maybe this is another.
But when it comes to getting shit done, writing plugins and transports, creating instruments and rigs for live use (my hobby these days), the quite demeaning comments here are on the mark.
This "Elementary" is prioritising the wrong things, and there are a lot of frameworks that do not require you to know C or C++.
Pure Data, and Sonic Pi are two I have played with (the former much more than the latter).
Platform independence is simply not an issue when building these systems. Installing software is not an important issue.
Sorry. This is, on the face of it, a waste of time. I hate saying that, but if it were me doing this I would pitch it as a fun toy.
Do you understand how WASM and Web workers work? Do you understand that low-enough-latency audio doesn't take a supercomputer anymore? Yeah, if you were working on DSP stuff in the 1990s, you were a hot-shit programmer. Nowadays, it doesn't really say much at all, and it certainly doesn't justify talking as if failing to treat DSP with due reverence were a moral failure.
> Do you understand that low-enough-latency audio doesn't take a super computer anymore
It never did. Low-latency audio has almost nothing to do with CPU power. Here's a summary of some of the issues faced on modern general-purpose computers:
I know how WASM and Web workers work. Since nothing you can do in WASM or a web worker touches either (a) realtime scheduling priority or (b) actual audio hardware I/O, they don't have much to do with solving the hard parts of this. Browsers in general do not solve it: they rely on relatively large amounts of buffering between themselves and the platform audio API. Actual music-creation/pro-audio software sometimes (not always) requires much lower latencies than you can get routing audio out of a browser.
And even when we set a latency, we don't get it, because we blithely read a "latency" label in a GUI instead of measuring the round-trip latency on the specific device in question.
That wouldn't be correct either, at least half the time. The problem is that "latency" is used with at least two different meanings:
1. time from an acoustic pressure wave reaching a transducer (microphone), being converted to a digital representation, being processed by a computer, being converted back to an analog representation and finally causing a new acoustic pressure wave care of another transducer (speaker).
2. time between when someone uses some kind of physical control (mouse, MIDI keyboard, touch surface, many others) to indicate that they would like something to happen (a new note, a change in a parameter) and an acoustic pressure wave emerging somewhere that reflects that change.
The first one is "roundtrip" latency; the second one is playback latency.
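Either way, simple buffer arithmetic puts a floor on the numbers. The sketch below (my own back-of-the-envelope helper, not anyone's measurement) assumes one period of buffering per direction; real systems add converter, driver, and (in browsers) extra queueing on top, so treat these as a floor, never a measured figure:

```javascript
// Minimum latency contributed by one audio buffer, in milliseconds.
// Sense (2) above pays this at least once on the output path.
const bufferLatencyMs = (frames, sampleRate) => (frames / sampleRate) * 1000;

// Sense (1), mic-to-speaker, pays the buffer at least twice (input + output).
const roundTripFloorMs = (frames, sampleRate) =>
  2 * bufferLatencyMs(frames, sampleRate);

console.log(bufferLatencyMs(128, 48000).toFixed(2));   // 128 frames @ 48 kHz
console.log(roundTripFloorMs(128, 48000).toFixed(2));  // both directions
```

At 128 frames and 48 kHz that's under 3 ms per buffer, which is why the hard part is scheduling and hardware access, not raw CPU throughput.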
my friend is running an independent studio, i'm a bedroom musician, and we're planning some elementary plugins together. we tried some stuff with max and it wasn't expressive enough, we tried the steinberg sdks but they were too much of a lift, elementary is good enough for our workflows and in that perfect zone with enough control and enough familiarity that we feel we can be productive. we'll see in due time if it's too good to be true, but we're excited!
or even if you can't! I use Max and, to a lesser extent, Pd, but >90% of my actual work is done in Scheme or in C running inside them. Many advanced computer music people wind up doing this sort of thing because the graphical environments just can't be beat for quickly lashing together an environment for interaction, where interaction could be GUI, MIDI, network/OSC, custom hardware over serial, etc.