angleofrepose's comments | Hacker News

This is cool. I enjoy seeing these kinds of projects, thanks for sharing it and thanks for making it.

There has been so much innovation over the years around transpilers/compilers to JS, it makes me wonder what a programming paradigm of à la carte first class language functionality could look like and how it might interoperate. A system in which I might grab Haskell type syntax, Clojure list comprehensions and JS arrow functions, all together, and all working just fine.

You'd probably want to break files up into more granular chunks: something like next-gen polyglot notebooks, but going beyond a cell per language to custom languages composed of features within any given cell.

The system could be made to translate between functionality based on editor preference. Like python list comprehensions? Read my Clojure comprehension in that syntax.
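To make the idea concrete, a Python-style `[x*x for x in xs if x % 2 == 0]` and a Clojure `(for [x xs :when (even? x)] (* x x))` could both render one underlying form that desugars to plain JS. A toy sketch of that shared form, not a real translator:

```javascript
// One underlying "comprehension" form, shown desugared to JS array
// methods; different editors could render Python or Clojure surface
// syntax on top of the same structure.
const squaresOfEvens = (xs) =>
  xs.filter((x) => x % 2 === 0).map((x) => x * x);
```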

I know this would not perfectly map, as some language functionality is more powerful than others. Still interesting to think about.


Thank you and other commenters for the great rundowns here. I'm interested in a related question and I wonder if you or others could point me in the right direction: why was the mainstream consensus around solar power (and/or batteries) apparently so wrong for so long? More specifically -- and maybe a better question -- why didn't progress in solar and batteries happen sooner?

I'm less interested in blame than in a systems analysis of how, over the last half century, powerful players seem to have missed the opportunity to invest earlier in solar and battery technology. Solar and batteries are unique in energy infrastructure, as even any casual observer knows by now, and are certain to change many aspects of politics, industry and culture. It seems inevitable that energy infrastructure will evolve from large, complex components towards small, simple ones, and I'm interested in engaging with the history of why "now" is the moment, rather than decades ago.


> why didn't progress in solar and batteries happen sooner?

The rate of progress in cost reduction has been astonishing. It's unlike anything except Moore's Law. This catches people out.

There are also the usual suspects: cheap fossil fuels, failure to take global warming seriously, belief that nuclear power would see a similar exponential cost reduction rather than the opposite, and of course anti-green politics.

But if 95% cost reduction is the result of not taking it seriously, would taking it seriously earlier have been even better? Hard to say.


Right! Good points for optimism here, and acknowledging broken mental models.

We had silicon solar modules in the 1950s and Moore's law in the 1960s. Another take on the question, then: today we use Moore's law to describe progress in solar modules; to what extent was that realization possible in the 1960s from the fundamentals, or "first principles"?

If it was clear, why did we not see rapid prioritization of solar and energy storage technology research? Or did we and I don't know the actual history? Or what influences am I undervaluing or not recognizing?

If it wasn't clear, why not? Gaming out many positive impacts of solar technology feels easy today in a way it appears was not easy in the past. Why wasn't it clear in the past?


Battery progress was in some ways slowed, but also accelerated, by oil companies that kept buying up promising-looking solar and battery patents and then sat on them, refusing to license them.

One oil company bought Cobasys, which owned all the NiMH patents. Thereafter, Cobasys refused to license NiMH batteries to anyone making a vehicle, except large ones like transit buses. Several early EVs used NiMH batteries until Cobasys was acquired and set up the restrictions.

This really lit a fire under researchers and the battery industry to try and improve lithium-ion, which had hit the market in the early '90s. Once the price of lithium-ion started falling, the market very quickly forgot about NiMH batteries. In about ten years prices have fallen to one fifth of what they were. That fall has slowed, but prices are still dropping.


It's a false assumption that technological progress happens automatically or even that it's based upon the passage of time.

Progress happens as a result of many choices made by individuals to invest time and energy solving problems. Why is solar rapidly improving now? Because way more people are invested in making it better.

Nascent technologies almost always face an uphill battle because they compete against extremely optimized legacy technologies while themselves having no optimization at first. We only get to the current rapid period of growth because enough people pushed us through the early part of the S curve.


Sure, that makes sense. This is where I'm coming from with my interest in history:

I heard an interesting argument somewhere that solar cells are an ideal manufactured good. Whether you are building a module for a calculator or a GW scale plant, the modules are the same. This is fundamentally different for steam turbines. On the "concrete-internal combustion engine" spectrum of complexity, solar modules are closer to concrete and turbines are closer to ICEs.

Shouldn't this have led to a special interest in advancing solar module research? Or widespread understanding that eventually the unique set of attributes that define a solar module would lead to its takeover of a significant portion of global energy generation? Shouldn't that have been apparent from the earliest days of photovoltaic research as a sort of philosophical truth, before the advances in material science, extraction or manufacturing of the last fifty years?


I think another important part is that solar has low minimum useful quantities and customization. Lots of the problem with nuclear power is that you only need ~100 to power the US, and each one takes years to build, so getting scale is basically impossible. With a 50-100 year lifespan per plant, that means you only get to build 1-2 a year, and you can't learn much from the 5 you've most recently started since they're still under construction.


Solar and batteries got cheaper when we scaled up and built a lot. You have to pay current prices to get the next price drop, because it's all learning by doing.

If we had pushed harder in the 80s, 90s, and 2000s, solar might have gotten cheaper sooner. Solar fit in at the edges of the market as it grew: remote locations for power, or small-scale settings where running a wire is inconvenient or impractical. The really big push that put solar over the edge was Germany's Energiewende public policy, which encouraged deploying a ton of solar in a country with exceptionally poor solar resources; but even with that promise of a market, massive scale-up was hardly guaranteed.

It's in many ways a collective action problem. Even in this thread, in 2025, you will see people wondering when we will have effective battery technology; they have been misinformed for so long that batteries are ineffective that they don't see the evidence even in the linked article.

Also, most people do not understand technology learning curves, and how exponential growth changes things. Even in Silicon Valley, where the religion of the singularity is prevalent and where everyone is familiar with Moore's law, the propaganda against solar and batteries has been so strong that many do not realize the tech curves that solar and batteries enjoy.

A lot of this comes down to who has the money to spend on public influence, too, which is largely the fossil fuel industry, which spends massive amounts both on politicians and on setting up a favorable information environment in the media. Solar and batteries are finally getting significant revenues, but they have been focused more on execution than on buying politics and buying media. They have benefited from environmental advocates who want to decarbonize, without a doubt, but that doesn't have the same effect as a very targeted media propaganda campaign that produces zealots who, whenever they see an article about climate change, call up their local paper and chew out the management. Much of the media is very afraid of right-wing nuts on the matter, and it puts a huge tilt on mass media coverage in favor of fossil fuels and against climate science.


Indeed. You widen the conversation here, and remind me of the idea that moneyed influence is underrepresented in analysis and understanding of the world. Maybe the most appropriate way to understand big questions is who is funding the various players.

I like to think about "learn by doing". While I have of course lived it, I try to think of counterpoints. It seems clear that solar owes its growth to Germany and California policies which subsidized the global solar industry with taxes on their economies, most disproportionately placed on individual ratepayers. But why couldn't solar research have been long-term funded based on its fundamental value? Talk about national security, or geopolitical stability -- especially post 1970s! Skip the intermediate, expensive buildouts of the 2000s and the heavily subsidized failed companies, and fund research instead, to hopefully pull the late 2010s forward in time?

What's a good model here, or concrete example? We see the same side of the history in electric vehicles. I think Tesla and Rivian, to pick two, both lost money on every sale in early years. Why not skip that expensive step in company history, and develop better products to sell at a profit from the beginning of mass manufacturing? Are there industries or technologies where this expensive/slow process went the other way?


> It seems clear that solar owes its growth to Germany and California policies which subsidized the global solar industry with taxes on their economies, most disproportionately placed on individual ratepayers. But why couldn't solar research have been long-term funded based on its fundamental value

I think this is a really important distinction: research in the lab versus research on the factory floor. Tesla in particular has talked about how much they value engineers who get down into the production process versus those working in the lab. That's the "doing" that needs to happen, as well as shaking out parts of the upstream supply chains and making all that cheaper.

We can theorize about what's going to work in practice, but the price drops are the combination of 1% savings here, 0.75% savings there, 0.5% there, and until you have the full factory going you won't be able to fully estimate your actual numbers, much less come up with all the sequential small improvements that build on each other. And all that comes together in the design of the next factory that's the next magnitude up in size.
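The "learning by doing" effect upthread is usually modeled as Wright's law: every doubling of cumulative production cuts unit cost by a fixed fraction. A small sketch; the 20% learning rate is an illustrative assumption, roughly the figure often quoted for solar PV:

```javascript
// Wright's law: cost(x) = cost(1) * (1 - rate)^log2(x), where x is
// cumulative units produced and rate is the per-doubling cost reduction.
const wrightCost = (initialCost, cumulativeUnits, learningRate = 0.2) =>
  initialCost * Math.pow(1 - learningRate, Math.log2(cumulativeUnits));
```

At a 20% learning rate, ten doublings (a 1024x increase in cumulative production) take a $100 module to roughly $10.74, which is why scale, not the passage of time, drives the curve.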


I hear that, it seems a common observation. Maybe a fundamental truth of enterprise.

> until you have the full factory going you won't be able to fully estimate your actual numbers, much less come up with all the sequential small improvements that build on each other.

Why not? Is there a theory or school of management or industry that establishes this foundational principle that seems so commonly invoked? It feels true, but I don't really know why it might be true. There must also be great examples of counterpoints in this too!

Maybe it goes back to learn by doing: it's a common refrain in outdoor recreation that safety rules are written in blood; that many of our guidelines directly follow from bad things that happened. But certainly we can also design safety rules by thinking critically about our activities. Learn by doing vs theory.


It's literally studied as "learning" in the management science literature.

For example: https://pubsonline.informs.org/doi/abs/10.1287/mnsc.2015.235...

> We find that productivity improves when multiple generations of the firm’s primary product family are produced concurrently, reflecting the firm’s ability to augment and transfer knowledge from older to newer product generations.


Could you expand on what you see as exciting developments? I’ll have to check out the op post link as well as yours and others in the thread.

It’s been a few years since I seriously looked at options for my personal use, but I remember being quite disappointed in the options I found. Zotero and org-noter seemed two of the best (though in completely different ways) pieces of software I could find for reading or organizing PDFs. I trialed OneNote for a year and liked it in the moment, but zero support for navigation, discovery or review of information made it untenable for building a knowledge base or doing literature review.

I imagine that software for reading and connecting document information (in any form: PDF, HTML, video or other) could be so much better than what I use daily.


1) The post-roam research note-taking apps (Obsidian, logseq) have shown the usefulness of creating notes with links, back-links and databases.

2) Document editor apps (Notion, Craft) have popularized the concept of documents as a set of text and non-text blocks. They're useful and provide rich building blocks for documents.

3) Some design engineers are exploring multi-modal text editors. Text, audio and video in the same document, integrated with CRDTs for collaboration.

One would think that digital text editing had already reached the state of the art, but the work above shows that there's plenty left to discover. I'd love to hear your take on what you think could be much better.
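As a toy illustration of point (1), the back-link model those note apps popularized reduces to an inverted index over wiki-style links. A minimal sketch; the `[[target]]` regex and data shape are assumptions, not any particular app's real format:

```javascript
// Build a back-link index: for each note, find [[target]] links in its
// body and record which notes point at each target.
const backlinks = (notes) => {
  const index = {};
  for (const [name, body] of Object.entries(notes)) {
    for (const [, target] of body.matchAll(/\[\[([^\]]+)\]\]/g)) {
      (index[target] ??= []).push(name);
    }
  }
  return index;
};
```

For example, `backlinks({ a: "see [[b]]", c: "also [[b]] and [[d]]" })` groups the pointing notes under each target, which is all a back-links panel needs to render.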


Off topic, but I’m wondering if anyone attracted to this topic could help me understand why JavaScript doesn’t have macros.

I’m aware of much conversation around dismissing macros, often in the context of bad dev experience — but this sounds like a shallow dismissal to me.

At the end of the day, we have some of the results of macros in the JavaScript ecosystem, but rather than being supported by the language they are kicked out to transpilers and compilers.

Can anyone point me to authoritative sources discussing macros in JavaScript? I have a hard time finding deep and earnest discussion around macros by searching myself.


Interpreted languages rarely have macros.

But more importantly, do you really want script tags on webpages defining macros that globally affect how other files are parsed/interpreted? What if the macro references an identifier that's not global? What if I define a macro in a script that loads after some other JavaScript has already run? Do macros affect eval() and the output of Function.prototype.toString?

Sure, you could scope macros to one script/module to prevent code from blowing up left and right, but now you need to repeat your macro definitions in every file you create. You could avoid that by bundling your js into one file, but now you're back to using a compiler, which makes the whole thing moot.


It turns out there might actually be a benefit to the compilation step that has been introduced now that everyone uses TypeScript... It would be really interesting to see macros get added, though I suspect it's too far from TypeScript's mandate of adding as few new features on top of JavaScript as possible.


Macros don't really make sense in JS runtime spec. Since you can mostly already achieve macro level features by using eval or new Function, but it's not very efficient. Macros make most sense at build time, and there have been a few attempts at generalized build macros with various bundlers / transpiler plugins. I think the space needs more time to mature. I'm optimistic that we'll eventually see some sort of (un)official macro spec emerge.
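For what it's worth, the eval/new Function route mentioned above looks like this: code is assembled as a string and compiled at runtime, which gets you code generation but not compile-time expansion (`makeAdder` is an illustrative name, not a real API):

```javascript
// Runtime code generation with the Function constructor: the closest
// plain JS gets to macro-like behavior without a build step. The body
// is compiled each time makeAdder is called, not once at build time,
// which is the inefficiency the parent comment alludes to.
const makeAdder = (n) => new Function("x", `return x + ${n};`);

const add5 = makeAdder(5);
```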



A great resource that I should have found on my own. Thank you. I’ll look through this later. Giving it a quick glance now I see some of the same language I see other places; here that macros are “too far.”

I don’t know why macros are approached with apprehension. As I briefly get at in my first comment, I’m aware of a lot of dismissals of macros as a tool, but those dismissals don’t make sense to me in context. I’m missing some backstory or critical mind-share tipping points in the history of the concept.

What could be a good set of sources to understand the background perspective with which TC39 members approach the concept of macros?


I picked this up at a used bookstore a while back for a dollar or two, and enjoy flipping through it from time to time. There’s something deeply satisfying about the quantity and density of the graphics in the book, and the visual simplicity of the prints.

Why do you post it here? What do you think about the book?


This is a great project in a space that I've been playing around for a little while, fun to see it here!

I'm interested in hearing what you think are some of the more difficult problems or bugs you've come across during development. Did you hit any stumbling blocks around handling user code or integrating babel or the terminal? Do you have any insights about preventing errors or crashes in how you parse and eval user code? (My typical test of a while(true) loop crashes this system, but you're still in good company; it crashes replit, browser dev tools, observable and just about every other clientside execution tool I've come across. The most popular solution appears to be the loop timeout transformation.)

I think the examples pages do a better than usual job of demonstrating your system, in particular the ubiquity of one liners and your connections between them. Do you have ideas or responses about the classic "mess of wires" critique that graphical coding systems inevitably receive?

This is such a fun domain to think about, thanks for sharing your work!


The eval of natto maps surprisingly well to React's primitives (memoization, effects). This is pseudocode for the main eval https://gist.github.com/paulshen/9889b6067609f9053a0d56d4641...

The expression is only transformed with Babel if you enable the JSX React transform. Otherwise, it's just straight eval-ed by your browser. It's by no means battle-tested (eg while (true)). I haven't tested circular deps and am leaving that as a surprise for myself in a little bit. One thing that I do is run the canvas in an iframe on a different domain for security reasons.

Parsing is something I'm trying to avoid as much as possible but it's likely I'll add it. Referencing things as inputs[2] doesn't feel stable. May help with implicit deps and avoiding wires (see observablehq.com)

As for mess of wires, I'm still forming my opinion! I want to learn more about nodes-and-wire programming and why it isn't mainstream. The hunch I'm getting is that visual programming feels better to create than consume. The space is great for exploration but looking at someone else's canvas can be chaotic. Maybe there are features that can alleviate this (multiple views, autolayout). Look at this haha https://twitter.com/_paulshen/status/1321872376234082305


Would you mind sharing other interesting examples of projects in this vein? Thanks!


The future of coding link in the parent has a large list of similarly spirited projects. I have scattered lists of similar projects, but none handy or packaged well.

I'll point you to the Ink&Switch article on end-user programming, https://www.inkandswitch.com/end-user-programming.html, and encourage you to check out the personal sites of the people involved.

The Lively Kernel is a programming-kit project that's been around in various incarnations for a long time: https://lively-next.org/

The history of Eve (also linked by that future of coding page) is rich and full of references to other projects: http://witheve.com/

VPRI similarly is a gateway to lots of history on personal computing: http://www.vpri.org/ Of particular interest to me there are the graphical language Nile and the meta compiler Ohm.

Bret Victor's site, http://worrydream.com/, is another gateway you may have heard of, and the researchers at Dynamicland are also well worth exploring.

More future of coding resources: https://github.com/d-cook/SomethingNew

For more actual environments you can use I recommend https://observablehq.com/, https://starboard.gg/ and emacs along with the links above.


Do you, or the broader community, have any ideas about solving infinite loops? I'm on mobile so I can't test this at the moment, but I imagine that while(1) crashes the tab.

What would an MVP operating system like ctrl-C functionality look like for execution environments in the browser?


Codepen uses a system that measures loop duration, and it's a giant pain; I've done some pens that do ray tracing and image transforms, which can have long-running loops. Given the variable execution time of JS it can be quite random. It just exits the loop without warning, causing weird failures in your code.

Two theoretical solutions (with significant overhead) are:

Run the code in a VM (maybe QuickJS compiled to WASM would work) that suspends code execution periodically if it exceeds a certain duration. This has the advantage that long-running code in general won't block rendering, not just loops.

Transform the AST to use an async generator that yields once per loop. This would allow the loop to be suspended and resumed, but it would require a lot of modification to the AST, effectively making the entire call tree async.


I've done this and it works surprisingly well. I made a timesliced JS scripting system this way... it looked imperative with tight loops, but it was all asynchronous. It felt like a threaded app.



To add to this comment, Stopify is a JS-to-JS compiler that instruments sync JS code to make it interruptible at set points. The paper [0] can explain it better than I ever could.

I work on an experimental Pyret [1] runtime that uses Stopify to instrument compiled Pyret code (plain old JS) so that we can run Pyret code on the main page thread without hanging it up. Main thread execution is important for quick/easy DOM access. In terms of performance cost, we haven't measured too extensively, but so far, on average, we're seeing a 2x slow down compared to un-Stopified programs.

(Disclaimer: paid contributor for Pyret).

[0] https://www.stopify.org/research.html

[1] https://www.pyret.org/


Do you see any other solutions in the same domain as Stopify? Another method that might provide a way to keep UI unblocked but still have user executable code?


If you need the user code to execute on the main thread, then unfortunately I am aware of none besides bundling your own tailored system.

Pyret used to use its own runtime system [0] but Stopify was created in part to replace it due to the maintenance burden and complexity of "vanilla" JS interoperability.

[0] https://www.pyret.org/docs/latest/s_running.html


This is what I use. Seems to be the only option, and it works.


We started with a blacklist to match against while(1), while(true), for(;;), etc, but we eventually found an eslint plugin (goedel.js) that nicely tells you if the code contains an infinite loop or recursion.


That plug-in certainly won’t cover all cases of infinite loops, or they just solved the halting problem :)


Hmmmm maybe you're thinking of entscheidungsproblem.js? This is a fork of that.


I am aware of multiple hacky solutions, such as loop detection and adding timeouts. These fail in most non-trivial creative coding applications due to long-running code. I'm interested in what it would take to come up with a general escape hatch like any shell user has.


On Starboard[0] I approached this by sandboxing the notebook code in an iframe on a different origin. This sandboxing has to be done anyway to prevent XSS.

If you type while(true){} in a notebook, only the iframe breaks (and usually your browser will prompt you after a while to kill it); the rest of the page keeps working.

I don't think there's an elegant way to solve it any differently in the browser.

[0]: https://starboard.gg


Hi, yes! I like your project. I chatted about similar things on your launch post here. You also address this explicitly on your product which I appreciate.

What I'm getting at is that these browser notebooks try to capture the desire for, and feeling of, rapid exploration and iteration. Losing context to a crashing logic error is a massive blow to that ideal.

I'm not saying that your or anyone else's product is only for "rapid prototyping", but it's still true that larger projects could be bitten by the same errors. When I crash my native code I hit C-c and I'm back in an instant. When I crash browser notebook code I lose a bit of time and unsaved code. I crash browsers often in creative coding, where I write many loops and don't always get them right.

It may also be that my Chrome and Firefox experience on Linux is worse than standard, I don't know. But I have crashed my entire browser in Chrome when using Observable, and I thought that wasn't supposed to be possible.


That's clever. Do you know if there is something possible using web workers? Maybe running the "sandboxed code" in the worker? I don't really know how they work and whether it is possible to interrupt them from the main thread.


I think so, but a worker won't have access to the DOM and a bunch of other APIs, so the code would be fairly limited in what it can do. Which may be fine for some usecases!


A timeout would most certainly not trigger during a busy loop in JS. Timeouts can only trigger when the main thread is not running code.

Browsers will complain against code running for too long without interruption though.


Yes, as the other comments get at, the "hack" I'm referring to is a loop transform that adds a timer check to the condition.
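A hand-written sketch of what such a transform's output can look like. The check here lands at the top of the loop body rather than in the condition itself; `guardedWhile` and the time budget are illustrative names and values, not what any particular tool emits:

```javascript
// The compiler wraps each user loop so a deadline check runs every
// iteration; a runaway loop throws instead of hanging the tab.
function guardedWhile(cond, body, maxMs = 50) {
  const deadline = Date.now() + maxMs;
  while (cond()) {
    if (Date.now() > deadline) {
      throw new Error("Loop exceeded time budget");
    }
    body();
  }
}

// User code `while (true) { n++; }` becomes:
let n = 0;
let interrupted = false;
try {
  guardedWhile(() => true, () => { n++; }, 20);
} catch (e) {
  interrupted = true; // loop was stopped by the injected check
}
```

The downside the grandparent describes falls out directly: a legitimately long-running loop hits the same deadline and is killed mid-computation.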


Oh, right, I was not there at all.

Wow, that seems hard to do. One would need to take a lot of things into account, including recursive calls, asynchronous functions/calls and, indeed, even long strings of instructions that are not necessarily part of a loop or recursion.

Would a transform that adds the check after every JS instruction where possible theoretically solve the problem? Is there a solution that does not slow down the code too much and interrupts it within an acceptable margin?


Yeah! The general case of this is the halting problem... The best solution I know of is stopify, which the other comments have talked about. I just wonder if there's another take on the situation, something akin to OS task management.


Well, you can easily detect trivial examples like "while(1)" and "for(i=0;true;i++)". But otherwise how would you know something is an infinite loop?

Put a bit more simply: to work out whether a problem is unsolvable (infinitely looping), you need to evaluate the problem... by trying to solve it. Check out the halting problem for more details.

https://en.m.wikipedia.org/wiki/Halting_problem


"Solving" infinite loops doesn't necessarily mean accurately predicting a priori whether a piece of code will terminate. It can just mean ensuring that if the code does try to run indefinitely, it doesn't have unfortunate effects such as blocking the UI thread without the possibility of being interrupted.


> It can just mean ensuring that if the code does try to run indefinitely, it doesn't have unfortunate effects such as blocking the UI thread without the possibility of being interrupted.

Well, that can be achieved by executing the code in a background worker thread, which doesn't affect the UI thread in browsers. I'm not sure how it's managed, but I think you could terminate it after a certain amount of time too.


> ctrl-C functionality

OS implementations don't solve the halting problem. I agree with the sibling comment.


I have made something similar (https://easylang.online/ide/). It is a language of its own, which is compiled and interpreted by WASM. The problem with hanging in endless loops is solved by running the interpreter in a "web worker" that can be killed and restarted at any time.


Looks interesting, I'll do some digging around. Thank you for sharing.


You could modify the AST using something like jscodeshift to add a function that is called in every loop (and maybe every function), where you can "break" the loop in a pretty clean way.


A determined user will still be able to figure out something that blocks forever, for instance run a WASM program that has an infinite loop in it.


I'll second Computers for Cynics. I don't think any entry in that series is about blockchain, but he does have other videos on it.

For those who don't know, Computers for Cynics is his series questioning the origins of the status quo and considering alternative futures with different foundations.

https://www.youtube.com/playlist?list=PLTI2Kz0V2OFlgbkROVmzk...


It's the seventh in the series: https://youtu.be/3CMucDjJQ4E, published in Sep 2014, six years ago, and a must-watch for any outsider even in 2020. Enjoy it!


Ah I didn't even realize the list I sent wasn't from his account. Thanks for the correction!


That 2014 video is so revealing... I think it deserves a post of its own :-)


It's been a little while since I've heard this one, but one thing I remember hitting me is that the same language used by the interviewer is used today about privacy, end-user programming and any other more powerful technology, programming language or paradigm. It seems that as a culture we're always able to go so far, but not all the way. We see the path between start and end, and no genius is needed for the last push, but after so much progress we reduce our ambition toward the end goal and instead develop arguments against continuing.

At some point we just didn't think people needed help with paper-based tasks: "look around you, it's how everything is done". Yet here we are with the PC 40 years later. And people look around and think there's no chance everyone could be a programmer: "look around you, they're all consumers, they couldn't understand how to make the computer do what they like". In 40 years there's no doubt this viewpoint will be wrong, but the popular opinion of the moment can't see that future.

See Bret Victor's history of computing. The biggest adversary we have to overcome on the way to progress is the mainstream experts of our own field.

We have apps which seem like starting from scratch every time, which can't have abilities known by all because they aren't prepackaged by the devs ahead of time. Every app reinvents a minimal subset of sorting and search. If you have a better idea, or a different connection you want to make, it's just not possible in the app.

Stop pretending that debilitating users is actually good for them in the silly word games we play. Give users power.


> The biggest adversary we have to overcome towards progress is the mainstream experts of our own field.

“Science advances one funeral at a time.” - Max Planck (Apparently)


This Idea Must Die is an interesting take on that concept.


I feel like we don't give tools like Microsoft Excel, Game Maker, Photoshop, or the web enough credit for how much they empower computer users. Without any formal training or education, people are able to use computers to their own ends.


Give Excel to an uncontacted tribe and see how well they go with it.

That we assume reading, numeracy and fine motor skills (which until five centuries ago were the preserve of less than 1% of the population in the West) are not part of formal education or training should tell you all you need to know about how much cultural knowledge we assume people need in daily life to function at the level of a 10-year-old.

That four years to learn to read is considered normal, but reading a 100-page manual is considered unreasonable, shows how much popular culture lags behind our tools. Given how technocentric our culture is, this is as ridiculous as Mongols complaining that they need to learn to ride horses.


oh ok


Good for the interviewer for making counterarguments at the time. Some of the things he said are still true: we still have to come up with categories in our heads. This is true regardless of advancements in ML.


Good point! I might give it another listen. I remember it being an interesting respectful conversation with no ground given on either side. Hence the title, I suppose.


> there is no chance everyone could be a programmer

There is no chance, even if by "everyone" you just meant "a majority". It's intellectually challenging, just for one.


Reading and writing is intellectually challenging. Everyone can do it now, because we make sure to teach them to do it, and because it's necessary to live in modern society.


Almost anyone can walk, not everyone can climb mount Everest, or even a much smaller mountain face. Bad analogies are bad.


Are you seriously trying to claim that programming, of any useful sort, is so hard that only really smart people, presumably like yourself, can do it? Christ. I really hope the software development industry gets its ego kicked in really hard in the near future so everything can stop sucking because of elitist asshats.


Well on one hand, it could be my massive ego, and on the other it could be the truth.

You're the one arguing on the basis of value. I don't. Let me just ask you this; do you think that being a good enough finance trader requires a high IQ? I'm pretty sure of it, and I'm also of the opinion that what they're applying their smarts to is a net negative for society as a whole.

Another question, have you read Steven Pinker's The Blank Slate?


I think the other two comments make good points. There is every chance that everyone will learn to use a small set of fundamentally composable digital tools in the future. That's programming. I think "intellectually challenging" just means poorly explained or resulting from poor access. Anyone can program, it's just artificially hard to do it today.


> There is every chance that everyone will learn to use a small set of fundamentally composable digital tools in the future. That's programming

That's not, unless you stretch the meaning of the word far into meaningless. But if you insist on doing so, then yes, most people should be able to "compose digital tools" for a small enough number of digital tools and a wide enough meaning of "compose." Although, on second thought, it appears so many people had issues with "programming a VCR" back in the day, and that wasn't anywhere close to my meaning of "programming."

So let me rephrase it, "there is no chance everyone or even majority could become a minimally proficient user of a minimally useful programming language for novel tasks beyond a sequential list of actions."


Imagine if "everyone" was literate, what a world that would be /s


Is being able to read at the minimum level comparable to programming?

Do you seriously expect 99% of the population to be able to understand something as simple as the demonstration of Euclid's theorem? And yet any programming is more complicated than that, and more analogous.


Outside of the data science context, I would argue that notebooks like these are how information should be presented in a world of compute. It's ironic that we serve code block upon code block on technical sites and blogs, none of it immediately executable in the context of the post. The most popular software in the world is the JavaScript VM, yet we talk about and teach JavaScript code on websites, inside a JavaScript VM, that can't run the code on the screen.


SO has runnable snippets, there are services like CodePen too, and some articles use them, but there's one simple trick that makes your point moot.

F12 (or Ctrl+Shift+I), then Ctrl+C, Ctrl+V, and you're executing code in the context of the post ;).

It's not a notebook but does the job well enough.
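To make the trick concrete: pasting a post's snippet into the devtools console just evaluates it in the page's JavaScript context. A toy sketch of that idea (using `eval` to stand in for the paste; this is an illustration, not how any notebook tool actually works):

```javascript
// Imagine this string is a code block copied out of a blog post.
const snippetFromPost = `
  const nums = [3, 1, 2];
  nums.sort((a, b) => a - b);
`;

// Pasting into the console amounts to evaluating the text in the
// page's JS environment; eval() plays that role here.
const result = eval(snippetFromPost);

console.log(result); // → [ 1, 2, 3 ]
```

The snippet really does run "in the context of the post", which is exactly why the sandboxing question further down this thread matters.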


Haha, sure, sure. But that's not really the point. We have machine code, right? Who needs an assembler? Tools and workflows are worth thinking about and improving. Right?


It depends a lot. Debugging is awesome when you have source maps and good devtools in a modern browser, but the code itself is not editable. That said, it's possible to open a plain HTML file from disk, edit it with devtools, and save the changes back to disk, which makes devtools essentially a development environment.

I wonder how much value there is in this particular use case, because it lives somewhere between devtools and an online IDE. Also, arguably, testing out a library in a CodePen is more comfortable than an inline snippet you potentially can't save.


Very good points. Makes me think about a persistent web: what if every piece of JavaScript visible on a page could be edited and persisted between loads or visits, or you could send a blog post link to a friend along with your edits/comments/changed environment? Not arguing for that immediately, it's just interesting to think about.

Then what do you do when someone is demonstrating how to reset a page, whether that's document.body.innerHTML = "" or document.body.removeChild(main)? Such snippets could be run in sandboxes, as Starboard does here with iframes, or a powerful history tool could be attached to the page. Every program takes an environment as a parameter; maybe we can make shuffling environments around as easy as playing with cards.

To really simplify it down, I'm exploring the space of digital workspace + shell. What do you get with a compute environment in OneNote, Figma or SketchUp? What do you get if you can position things in a freeform space rather than a filesystem and a text-based browser? Can I build a Sketchpad with code editing, and take advantage of viewport macros to build a space for my project and explorations?

Makespace.fun + JSFiddle/Observable seems like an environment I'd like to work in, with myself, my collaborators and arbitrary content: to play with things like history and alternatives, and with associations of material and ideas.

I have a good example of the success of programmatic control of content and auto-layout in my tiling window manager. Being able to explicitly describe compositions, and to demonstrate computational content, seem to be targets not yet hit by the pop culture of personal computing.

I might be just blowing hot air with all this, but it is fun to play around with.

