Yeah, I mean there is some support for various editors (https://lispcookbook.github.io/cl-cookbook/editor-support.ht...) including VS Code (https://lispcookbook.github.io/cl-cookbook/vscode-alive.html), but it's kind of rough (https://blog.djhaskin.com/blog/experience-report-using-vs-co...) and not exactly at feature parity with the emacs experience, plus you're still left having to figure out how to install and set up a Lisp implementation and quicklisp. I like that mine solves those for a newcomer, especially on Windows. (I myself use vim + slimv, but even that isn't quite at parity with emacs in some respects. The biggest weaknesses are around debugging, especially in the presence of multiple threads. But the essentials do work (stepping, eval-in-frame, continuing from a stack frame, selecting the various types of restarts, compiling changes before selecting restarts) so I'm still fairly productive and don't feel like I'm lacking anything sorely needed for professional work. I've hacked together some automatic refactoring bits as well, which emacs doesn't have either, and I'm eventually going to make a separate GUI test runner.)
I've been kicking the tires with mine a little bit yesterday and today, and I think it's quite good for the beginner experience. But I'm constantly of two minds about reporting some feature requests. The project's primary goal seems to be existing as a stepping stone to even see what Lisp (and especially Coalton) is really all about before "graduating" to something like emacs; being usable by professionals as well feels like a secondary goal (though it is mentioned as a goal), and there's inherent tension there. That's also been a weakness with the other editors: anyone already comfortable with Lisp development, professional or not, in emacs or not, isn't very likely to give the time of day to some new thing that's almost certainly not going to be as good as what they're used to. And so the new thing doesn't get the attention and feedback from experienced developers and the gap never closes.
It's an important concern for those footing the bill, but I expect companies really facing the impact of it to be able to do a cost-benefit calculation and use a mix of models. For the sorts of things GP described (iptables whatever, recalling how to scan open ports on the network, the sorts of things you usually could answer for yourself with 10-600 seconds in a manpage / help text / google search / stack overflow thread), local/open-weight models are already good enough and fast enough on a lot of commodity hardware to suffice. Right now companies might say to just offload such queries to the frontier $200/mo plan, because why not, tokens are plentiful and it's already being paid for; if in the future it goes to $2000/mo with more limited tokens, you might save those for the actually important or latency-sensitive work and use lower-cost local models for the simpler stuff. That lower-cost option might involve a $2000 GPU to be really usable, but it pays for itself quickly by comparison. To use your Uber analogy, people might have used it to get downtown and to the airport, but now that it's way more expensive, they'll take a bus or walk or drive downtown instead -- but the airport trip, even though it's more expensive than it used to be, is still attractive against competing alternatives like taxis and long-term parking.
Are you able to describe any of those internal tools in more detail? How important are they on average? (For example, at a prior job I spent a bit of time creating a slackbot command "/wtf acronym" which would query our company's giant glossary of acronyms and return the definition. It wasn't very popular (read: not very useful/important) but it saved me some time looking things up at least (more time than it took to create, I'm sure). I'd expect modern LLMs to be able to recreate it within a few minutes as a one-shot task.)
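For a sense of scale, the core of such a bot is little more than a dictionary lookup. A minimal sketch in Lisp, with hypothetical names and all of the Slack slash-command plumbing omitted:

    ;; Sketch only: the glossary behind a "/wtf acronym" style command.
    (defvar *glossary* (make-hash-table :test #'equalp)) ; EQUALP = case-insensitive string keys

    (defun define-acronym (acronym meaning)
      (setf (gethash acronym *glossary*) meaning))

    (defun wtf (acronym)
      (or (gethash acronym *glossary*)
          (format nil "No entry for ~A." acronym)))

    ;; (define-acronym "SLA" "Service Level Agreement")
    ;; (wtf "sla") => "Service Level Agreement"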
If it's useless that's a you problem. In the span of 4-5 days I've been building CRUDs that would have taken me a month to get perfectly right, and they save an enormous number of human tech support hours.
Sorry man, but the software world is littered with CRUD apps; they are called CRUD apps for a reason. They're basically the mass-produced stamped L-bracket of the software world. CRUD apps have also had template generators for like 30 years now.
Still useless in the sense that if you died tomorrow and your app was forgotten in a week, the world would still carry on. As it should. Utterly useless in pushing humanity forward, but completely competent at creating busywork that does not matter (much like 99% of CRUD apps and dashboards).
But sure yeah, the dashboard for your SMB is amazing.
The software industry's value proposition for the vast majority of businesses running the world lies in CRUD apps that properly capture business requirements. That's infinitely more relevant in insurance, pharma, banking and logistics than any technological breakthrough of the past 25 years.
Your rant just shows you don't understand why people pay for software.
I have one that serves a few functions: it tracks certificates and licenses (you can export certs in any of the majorly requested formats), has a dashboard that tells you when licenses and certs are close to expiring, a user count, a notification system for alerts (otherwise it's a mostly buried Teams channel most people miss), and a Downtime Tracker that doesn't require people to input easily calculable fields. It gives teams a way to reset their service account password and manage permissions, as well as add, remove, or switch which project is sponsoring which person, edit points of contact, verify project statuses, and a lot more. It even has some quick charts that pull from our Jira helpdesk queue: charts that people used to run once a week for a meeting are now just live in one place. It also has application statuses and links, and a lot more.
I'd been fighting to make this for two years and kept getting told no. I got claude to make a PoC in a day, then got management support to continue for a couple weeks. It's super beneficial, and targets so many of our pain points that really bog us down.
A lot of businesses can get by just fine with making it one person's responsibility to maintain a spreadsheet for this. It can be fragile though as the company grows and/or the number of items increases, and you have to make sure it's all still centralized and teams aren't randomly purchasing licenses or subscriptions without telling anyone, it needs to be properly handed off if the person leaves/dies/takes a vacation, backed up if not using a cloud spreadsheet... I've probably seen at least a dozen startups come and go over the years purporting to solve this kind of problem, other businesses integrate it into an existing Salesforce/other deployment... it seems like a fine choice for an internal tool, so long as the tool is running on infrastructure that is no less stable than a spreadsheet on someone's machine.
In the startup world something like "every emailed spreadsheet is a business" used to be a motivating phrase; it must be rougher out there when LLMs can business-ify so many spreadsheet processes (whether it's necessary for the business yet or not). And of course with this sort of tool in particular, more eyes seeing "we're paying $x/mo for this service?" naturally leads to "can't we just use our $y/mo LLM to make our own version?". Not sure I'd want to be in small-time b2b right now.
Why are you ignoring the fact that grabbing data from heterogeneous sources, combining it and presenting it is generally not a trivial task? This is exactly what LLMs are good for.
If you are using an LLM to actually fetch that data, combine it, and present it to you in an ad hoc way (like you run the same prompt every month or something), I wouldn't trust that at all. It still hallucinates, invents things, and takes shortcuts too often.
If you are using an LLM to create an application that grabs data from heterogeneous sources, combines it, and presents it, that is much better, but it could also basically be the Excel spreadsheet they are describing.
Your knowledge of LLMs is outdated by at least a year. For the past three months at least my team has been one-shotting complex SQL queries that are as semantically correct as your ability to describe them.
And why do you diminish the skill of good data wrangling as if it weren’t the most valuable skill in the vast majority of computer programming jobs? Your cynicism doesn’t correspond with the current ground truth in LLM usage.
Well, that is still having the LLM write code which is more like my second scenario. I use SOTA LLMs for coding literally every day. I don't think my knowledge is "outdated by at least a year".
The ones I can mention... one that watches a specific web site until an offer that is listed expires and then clicks renew (this happens about once a day, but there is no automated way in the system to do it, and having the app do it saves the offer from being unlisted for hours and saves someone logging in to do it). Several that download specific combinations of documents from several different portals, where previously the user would just suck it up and right-click on each one to save it (this has a bunch of heuristics because it really required a human before to determine which links to click and in what order, but Claude was able to determine a solid algo for it). Another one that opens PDFs and pulls the titles and dates from the first page of the documents, which again was just done manually before, but now sends the docs via the free Gemma4 API on Google to extract the data (the docs are a mess of thousands of different layouts).
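The PDF one is roughly this shape (a sketch, not the actual code; pdf-first-page-text and ask-model are hypothetical stand-ins for whatever PDF library and model API get used):

    ;; Sketch only: pull the first page's text, ask a model for title/date as
    ;; JSON, and validate the result before trusting it.
    (defun extract-doc-metadata (pdf-path)
      (let* ((page-text (pdf-first-page-text pdf-path))   ; hypothetical helper
             (prompt (format nil "Return only JSON with \"title\" and \"date\" ~
                                  for this first page:~%~A" page-text)))
        ;; Parse and sanity-check the response (e.g. the date actually parses)
        ;; downstream rather than taking the model's word for it.
        (ask-model prompt)))                               ; hypothetical helper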
So not at all for their work and with a reverse Robin Hood model? That would be terrible for software.
The way artists get paid on streaming is a genius play at catering to the biggest artists and labels while screwing over the smaller ones, and that's especially true on Spotify with their freemium model.
In the US, Aleve is the name-brand pill for naproxen, available in grocery stores next to everything else. I have a bottle of 160 gelcaps. Each pill is 220 mg of naproxen sodium (equivalent to 200 mg of naproxen). The advertised effect is 12 hours / all day; getting anywhere near 4g would only happen in a suicidal "swallow a bottle of pills" situation.
GP meant 4g as the safe limit for paracetamol (hence "liver pain"). That's about 8 typical 500 mg doses over 24 hours. The limit is little known amongst the general population, which occasionally produces extremes like people taking double doses every few hours.
Do you have any of these presentations available publicly? I'm always amused by the glitch names people come up with (force quit wrong warps, skirtless parry-walks...) and it'd be fun to see them in a TLA+ context.
This used to be true, but one trip to any modern e-storefront should dispel the notion. So much slop. Even for arguably non-slop, so much of it just rapidly crashes and is unplayable. The extent of platform certification these days for most titles seems to be: can launch, can back out to the console top level, and maybe doesn't crash if a controller is added/removed.
It's more depressing if you work in a big organization where decisions come down from on high instead of letting teams decide what's best. (Especially if one of those decisions is adopting so-called Agile practices, which were supposed to be about empowering teams against global decisions from management on high, but that's a digression.)
But yes, treat it as a job, and make time for "fun time" after work at home using your favorite tools and languages and OSes, and you can still be happy, especially because the bills are being paid. And even in restrictive corporations there still may be opportunities to introduce a little of your favorite thing... Clojure itself "snuck in" at a lot of places because it was just another jar, and it's not too hard to shim in a bit of Java code that then turns things over to the Clojure system. You can also try getting away with internal-only tooling.
If I had stayed at my last job a little longer I would have tried putting more effort into sneaking Common Lisp in. I had a few slackbot tools I wrote in Lisp running on my machine that I turned over (with pre-built binaries) to someone else when I left (but I doubt they're running still). The main application was Java, and there were already mandates from security people and others not to use other JVM languages... at least in application code. I was thinking (and got a prototype working) of sneaking in Lisp via ABCL, but only for the Selenium WebDriver tests. It was a neat trick to show some coworkers a difference in workflow: you write or edit a web driver test, and one of your asserts fails, or an action click on some ID that's not there fails. In Java you get an exception and it's over; you'll have to restart the whole thing, which for us was expensive because these test suites typically spun up huge sets of state before starting. But in Lisp, errors don't automatically unwind the stack; they pause at the spot they occurred. From there you can do anything in the debugging REPL that you can do normally, redefine functions, classes, whatever, and then resume the computation from where it left off (or from a higher frame in the call stack if you prefer), thus no expensive restart.
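To make that concrete, here's a minimal sketch of the shape of it (hypothetical step names, not the actual ABCL/Selenium code): each step gets wrapped in a restart, so an error pauses in the debugger instead of unwinding, and you can redefine the broken step at the REPL and retry without redoing the setup.

    ;; Sketch only: RUN-STEP funcalls a symbol, so a redefinition made while in
    ;; the debugger is picked up when you choose the RETRY restart.
    (defun run-step (step-name)
      (loop
        (restart-case
            (return (funcall step-name))
          (retry ()
            :report "Retry this step (after redefining it, if needed)."))))

    (defun click-missing-button ()
      ;; Stand-in for a webdriver call whose element id doesn't exist.
      (error "No element with id ~S" "checkout-button"))

    (defun checkout-test ()
      (format t "~&expensive setup done~%") ; imagine minutes of state-building
      (run-step 'click-missing-button)      ; drops into the debugger here
      (format t "~&rest of the test continues, no re-setup~%"))

Running (checkout-test) lands you in the debugger at the failing step; redefine click-missing-button with the right id, pick RETRY, and the test finishes without repeating the setup.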
There are also ways to introduce things you like that aren't just different languages. My team started doing "lunch and learns" at most once a week (sometimes less often); if anyone wanted to talk about whatever for 30-60+ mins during a lunch period, we'd get together and do it. Sometimes that would be about specific work things being built, sometimes about external stuff, ideas (e.g. the Mikado Method) or tools. Once I did a brief presentation about property testing, later got the quicktheories library for Java into the codebase handling some tests, and ended up not being the only one to occasionally make use of it.
For a Masters program it's pretty weird but I assume prospective students will be aware, and they move on to learning Unreal, so...
It's always struck me as a bit silly how so many schools use some very niche tooling as part of "simplifying" or "adding constraints". I would have thought that such stuff was kept at the undergrad level. Even DigiPen (where the "famous" undergrad CS-like degree has you writing your own engine, though it used to also have an elective for GBA games) has a separate, newer game design degree that had classes mandating some crappy in-house engine, or in later years joining teams with students from the other degrees and using someone's custom engine. When I was there, a friend was able to get a professor's exception one semester and was allowed to use a mobile-first engine that got out of the way and let him design while also making it easy to add polish, easy to playtest and develop (it used Lua), and easy to show or give to others since everyone has a phone. The crappy in-house engine stymied the efforts of everyone else, and only ran on Windows.

It took a while longer before the formal curriculum allowed other students to move beyond the in-house crap to consider things like the entire field of mobile games and mobile design, VR games and design, and eventually learning industry-standard tooling that employers will expect familiarity with. (I think the courtesy of using an industry engine was extended to the main degree program too vs. continuing with a custom one; I'm not sure what ratio Unreal/Unity/Godot/other/custom have there these days.) And while last I heard an in-house engine is still used at the beginning (and even replaced the second semester "make a game in pure C with only the Windows text console for 'rendering'" project), it's a rewritten successor and apparently isn't as crappy now.
For the Playdate itself, I've never seen the appeal... I have no interest in going back to that sort of screen. My Game Boy Color, besides having color, also allowed me to have a wormlight attachment plugged in to make up somewhat for not having a backlight. I don't think the Playdate has support for that. And the price...
The article makes it quite clear as you read that the appeal is the constraints: they allow the students to think outside of the box and ask themselves a lot of interesting questions.
That's the intention, sure, and as long as prospective Masters students know that's what they're getting into and paying for, and are looking forward to it, then it's fine or whatever. But it still strikes me as a silly constraint, just as it would be to require an in-house engine that sucks, or to require students to develop for some old Nintendo hardware, or to fit everything in under 96k.[0] Anyone can add arbitrary constraints to anything, and lots of interesting questions will arise from figuring out how to deal with (or work around) such constraints. But is the constraint to develop for this specific device (and all the sub-constraints that implies) actually a good one vs. any other set of constraints, especially for the purposes of game design? I doubt it. Especially when some of the constraints, like only using black-and-white graphics, are easily enforced without also requiring such a specific niche device.
[0] .kkrieger (https://www.youtube.com/watch?v=2NBG-sKFaB0) is my favorite of this genre of constraints, but it's mainly impressive for being possible at all (and you can read up on some of the developer notes for how much effort was put into satisfying this constraint). It didn't actually advance the design of FPSes or anything, and FPS design ideas could be better learned by making and iterating on an FPS without the tiny size constraint. If students want to impose extra constraints on themselves, like developing for the Playdate and making use of its crank for game control, go for it, but it's a bit different when they're imposed from the outside for no real reason other than "hey, it's some constraints, and constraints breed creativity".
No, because the goal of the university is to teach students to think. Not necessarily just to "acquire the skills to apply in industry". Constraints are great for that.
So is teaching them assembly, even though most people no longer directly code in ASM. But a constrained language that's close to the metal gives them an interesting view of how computing really works, etc.
So I'd say it's actually much better for a class teaching coding and creativity.
This is part of a 2-year Masters program focused on Game Design, Development & Innovation, costing a student $113,000 to pursue. If a student enrolls in it without already having learned how to think, this is not the program that is going to teach that. Surely any competent school can teach students how to think within the first year, if they don't already know, leaving the rest of the years (and any Masters or PhD programs) able to assume as much and spend the time teaching actual content.
If students sign up and pay for a class you teach called "Data Structures & Algorithms", and you just read from Hamming's book every lecture and don't actually attempt to teach any data structures and algorithms, expect to not have a teaching job for long.
If it's so easy, all the better. You can learn to build great architecture, optimize resources, and create a creative game all while also using Unity. There are additional bonuses to this beyond the pure knowledge too.
I mean, it's all there in the text... it's for the introductory class: "in an introductory class focused on game design fundamentals, students can’t afford a long learning curve."
I ran a design school for eight years where fourteen-year-olds built real projects—wearable medical devices, robotic systems, public art installations—in two-week studio cycles. No grades, portfolio-based assessment, and a structured constraint I designed that did exactly what the Playdate is doing here.
It was a two-sentence writing assignment. Before you could describe your project, you had to state the idea in one sentence (the soul) and the concrete form in one sentence (the body). Kids who could prototype a working medical device in two weeks couldn't articulate what they'd built. The constraint forced the thinking the tool couldn't.
Jach's argument—"you could impose the same constraints on Unity"—misses the point. You could. Nobody does. The tool shapes the behavior. An engine that can do anything invites you to do everything. Or, for young people, nothing. A 1-bit screen with a crank asks you one question: what's the game? That's not an arbitrary constraint. That's a design decision about where the student's cognitive effort goes.
The expensive tool teaches the tool. The constrained tool teaches the thinking. They're both necessary but they serve different stages, and most programs only do the first one.
For teaching, it depends a lot on what you’re trying to teach. In some courses I’m involved in, we’re intentionally using old, limited, obtuse or otherwise just strange tools and equipment for the sake of practicing debugging, reading specs and approaching an unknown system. The point of those courses is not to learn the tool itself but to learn methodology that can be generalised.
As I said however, it depends on when in the timeline we’re looking. For 3-year bachelor’s programmes, there’s significantly more focus on producing graduates who can move straight into the industry, having already learnt the tools they will use. For theoretical 5-year master’s programmes, knowing specific hardware or software is secondary to the general reasoning, maths and planning that’s expected in research or R&D industry work.
Using more limited or restricted tools, if thought out well, can force students to focus on the parts that matter. I haven’t actually used the Playdate, but for first-year students I would think the most important thing is to actually get to designing games. The core ideas you’d want to teach do not require fancy graphics or platform support; rather, that’d just be a time sink. Learning industry tools can be done in later courses or on the job. While being able to work efficiently is important (I don’t want to discredit the handiwork of the process), learning what buttons to push in e.g. Unreal is arguably much more ephemeral than learning “game design”.
However, using limited tools in teaching must be well motivated. Forcing old, obsolete tech onto students might just as easily be a time sink as a learning experience.
I've thought something like a software archeology class would be really fun as an elective. I agree that it can make sense to use intentionally limited things, especially if something is hard to teach otherwise. e.g. Learning to parse datasheets and probe things with an oscilloscope is best done by actually doing it, but starting off with an n-layer PCB instead of a breadboard would be pretty crazy. A benefit of using old things is sometimes useful simplicity, and sometimes just being cheap. There are also a lot of interesting (if often commercially and methodologically irrelevant these days) things to teach as a matter of history.
I agree it all needs to be well motivated. I'm often suspicious of attempts to teach things indirectly, but of course a lot of indirect learning happens anyway. And a lot (direct and indirect) happens in parallel, and I think it's useful to look for places to usefully exploit that, especially when it comes to the conflict of college as pre-job-training vs. study.

Do you really need a limited or obscure platform to teach or practice most things about debugging? printf and any debugger tool that supports breakpoints and stepping would teach a lot, with modern (even graphical) tools having a lot less friction while not dampening what is learned. Bonus points if you actually teach more advanced debuggers so another generation of developers isn't released thinking only-the-basics console gdb + printf are the extent of what's available to help in the practice of debugging. A danger of only teaching limited or restricted tools is that students end up thinking that's all there is. This happens at every level, from sorting algorithms to programming languages to whole ways of thinking about things. By artificially constraining the box in an attempt to focus on something basic or avoid the clichés of other boxes, all too often the result is just that the thinking doesn't generalize and is now crippled in the constrained box.
Timeline is important; I wonder if we're both interpreting "Master's program" quite differently here. In the US, a Bachelor's program is typically 4 years while a Master's is typically 2, and many Master's programs are industry-oriented (no thesis, just classes/projects) rather than being a stepping stone to full PhD research. The Duke program here seems to work as typical: 2 years + capstone project (and it even seems to require a summer internship). A longer program can in some ways be a bit more forgiving of less-than-ideal teaching efficiency. (At my old school, the game design undergrads had a course that required designing physical board games. There are plausible arguments that board games as a medium make it easier to teach or focus on important things in design that are harder to teach with digital video games. But even if that's not really true (as I'm arguing here applies to the Playdate not being particularly useful over just normal PC/mobile development), at least it's just one course of many for the whole program. And at least there's a >$10bn market for board games.)
The Playdate features a mic, an accelerometer, and a crank as unique inputs, as well as being portable, and those can suggest interesting game design ideas on their own. In one sense, if you want to use those features, it's simpler because you can count on them being there. In another sense, except for I guess the crank, the other two inputs are part of ~every phone and widely available on any PC/laptop. Developing for PC or mobile gives you access to even more interesting input and output for design consideration too: keyboards, mice (with/without scrollwheels), cameras, haptic feedback, gyroscopes, touch, light or temperature sensors, weird whatever devices over USB or wireless (Nintendo wiimotes, steering wheels, arcade sticks), networking... and making use of these things has never been easier, with drivers widely available and especially with the engines that let you click around to configure things. I would think that if your goal is to learn game design, you would want to prioritize doing your design on a platform that is as open and flexible as possible, to allow exploring as much of design space as you can. Perhaps the teacher thinks it's useful to add artificial constraints to narrow the design space or focus from a certain perspective (like: let's design a multiplayer game, but with the constraint that you have only one device, no networking or multiple controllers); fine, but they don't need to start with a platform where those constraints are baked in and can't be lifted.
Similarly, Unreal as well as any of the other popular engines, along with any of the libraries like DirectX, SDL, raylib, pygame, or even just the web browser with HTML Canvas, are all open and flexible in what they allow you to explore in design space. Some are more limited than others (like you're going to have a hard time using a 2D-focused library or engine for a 3D game) and some are easier to express ideas in than others (you're going to have a better time using a 2D-focused library or engine for a 2D game), but they're all pretty easy to express basics in, and they're all pretty good at letting you rapidly prototype and playtest and iterate. If you artificially impose on yourself the same constraints the Playdate has inherently, they can be even easier to use, and easier yet if the teacher provides a template. Browse the games on itch.io tagged with playdate: I don't think any would be particularly harder (and some may even be easier) to do in <random other tooling>. The article mentions it taking "months" to learn Unreal, which is true in some sense (it can be longer, especially if you don't already know C++), but false in another sense in that getting up and running is quick; any competent introduction will have the student getting something on screen and responding to their input within an hour. For the very basic stuff a typical Playdate game does, it won't take that long to learn to do it in Unreal.
Another way of looking at it: take the "Owl Invasion" example from the article, "an endless wave-based action game with tower defense mechanics." Unlike the other game, there's no mention of using any of the unique inputs of the Playdate, so is there anything fundamentally unique about the Playdate that suggests such a game would be easier to develop for it vs. using an arbitrary other tool? Was there anything learned about game design from the experience that wouldn't have been learned otherwise? What if you had mandated the same visual constraints on resolution and (lack of) color, just artificially? Was it useful to be forced to incorporate an owl somehow, vs. a rat, vs. a pirate, vs. having no restrictions? (Perhaps this one; even creative writing workshops like to require something to incorporate, but that's more about trying to unblock creativity and avoid decision paralysis than directly learning some principle.) If the impact of using the Playdate vs. something else is fairly arbitrary for accomplishing the teaching goals, then unless the student is particularly interested in the Playdate on their own, it's more beneficial along several axes to use something else.
(I submitted this the day before but I guess this is HN's second-chance feature kicking in.)