
It's worse than nothing, since inevitably people will use this thinking it's 100% safe when it's not.

Likely the opposite, as safe Rust has some extra safety checks for things like array bounds.

It's strange how quickly this project got so big... It did not seem like anything particularly novel to me.

I think it was obvious, yet nobody seemed to have released a version people could actually easily use.

The feature set is pretty simple:

- Agents that can write their own tools.

- Agents that can write their own skills.

- Agents that can chat via standard chat apps.

- Agents that can install and use CLI software.

- Agents that can have a bit of state on disk.


> nobody seemed to have released a version people could actually easily use

Yet I’ve known many people who have said it is difficult to use; this was a 0.01-0.1% adoption tool. There is still a huge ease of use gap to cross to make it adopted in 10-50% of computer users.


Yeah, people are hungry for it. They tolerate the crappy docs and the difficulties.

that's by design, you know, all those huge security implications. now imagine if it were that easy to set up, install, and use.

good summary. i think you forgot heartbeat.md which powers some autonomy.

do you think the agent admin ui mattered at all?

other contributors while i think of them:

- good timing around opus 4.6 as the default model? (i know he used codex, but willing to bet the majority of openclaws are opuses)

- make immediate wins for nontechnical users. everyone else was busy chasing cursor/cognition or building horizontal stuff like turbopuffer or whatever. this one was straight up "hook up a good bot to telegram"

- there are many attempts at a "personal OS" or "assistant", but no good open source ones? a lot of sketchier chinese ones; this was the first western one


Aren't all of these things you can do with Claude Code? Granted, the chat app one is novel, but you could ask Claude Code to set that up.

That's basically what this guy did. He vibe coded a chat interface.

Most things that go viral actually have a concerted marketing push behind them. I suspect that was the case here. Something about the way people talked about it didn't come across as very genuine.

As someone who attended numerous meetups from the author and saw the vibe among those events, believe me it was as genuine as it can get.

do you genuinely think that numerous meetups isn't a marketing push?

Well, you can argue that tech meetups in general are a form of marketing - but this wasn't really a 'company X hosts a react meetup trying to find people to work there' type of thing. Many drove for hours just to attend.

Getting dozens of people in the same room, excited about technology is not trivial, and having hundreds of people show up is relatively hard in a city like Vienna which doesn't have a vibrant tech scene. Sure, some people come to find job opportunities or for free food, but many 'established' meetups sometimes just have a few attendees, so this on its own is not a small task. Peter definitely didn't have time to focus on this given everything else that was going on. So for Vienna, this is pretty much as viral as it gets.

Not sure about other cities where this took place.


by your definition, what is marketing then?

It's another game where software quality, security, or novelty is not an outcome-defining factor.

> Hard to find fully specified problems like this in the wild.

This is such a big and obvious cope. This is obviously a very real problem in the wild and there are many, many others like it. Probably most problems are like this honestly or can be made to be like this.


Impressive, my sarcasm/bait detector almost failed me.


> I don’t read code anymore

Never thought this would be something people actually take seriously. It really makes me wonder if in 2 - 3 years there will be so much technical debt that we'll have to throw away entire pieces of software.


> Never thought this would be something people actually take seriously

The author of the article has a bachelor's degree in economics[1], worked as a product manager (not a dev) and only started using GitHub[2] in 2025 when they were laid off[3].

[1] https://www.linkedin.com/in/benshoemaker000/

[2] https://github.com/benjaminshoemaker

[3] https://www.benshoemaker.us/about


Whilst I won't comment on this specific person, one of the best programmers I've met has a law degree, so I wouldn't use their degree against them. People can have many interests and skills.


I've written code since 2012, I just didn't put it online. It was a lot harder, so all my code was written internally, at work.

But sure, go with the ad hominem.


> Never thought this would be something people actually take seriously.

You have to remember that the number of software developers saw a massive swell in the last 20 years, and many of these folks are bootcamp-educated web/app dev types, not John Carmack. Statistically, under pre-AI circumstances, they typically started too late, and for the wrong reasons, to become very skilled in the craft by middle age (of course there are many wonderful exceptions; one of my best developers is someone who worked in a retail store for 15 years before pivoting).

AI tools are now available to everyone, not just the developers who were already proficient at writing code. When you take in the excitement you always have to consider what it does for the average developer and also those below average: A chance to redefine yourself, be among the first doing a new thing, skip over many years of skill-building and, as many of them would put it, focus on results.

It's totally obvious why many leap at this, and it's probably even what they should do, individually. But it's a selfish concern, not a care for the practice as-is. It also results in a lot of performative blog posting. But if it were you, you might well do the same to get ahead in life. There are only so many opportunities to get in on something on the ground floor.

I feel a lot of senior developers don't take the demographics of our community of practice into account when they try to understand the reception of AI tools.


This is gold.

Rarely has someone taken the words right out of my mouth like this.

The percentage of devs in my career who share the same academic background, show similar interests, and approach the field in the same way is probably less than 10%, sadly.


Well, there are programmers like Karpathy in his original coinage of vibe coding

> There's a new kind of coding I call "vibe coding", where you fully give in to the vibes, embrace exponentials, and forget that the code even exists. It's possible because the LLMs (e.g. Cursor Composer w Sonnet) are getting too good. Also I just talk to Composer with SuperWhisper so I barely even touch the keyboard. I ask for the dumbest things like "decrease the padding on the sidebar by half" because I'm too lazy to find it. I "Accept All" always, I don't read the diffs anymore. When I get error messages I just copy paste them in with no comment, usually that fixes it. The code grows beyond my usual comprehension, I'd have to really read through it for a while. Sometimes the LLMs can't fix a bug so I just work around it or ask for random changes until it goes away. It's not too bad for throwaway weekend projects, but still quite amusing. I'm building a project or webapp, but it's not really coding - I just see stuff, say stuff, run stuff, and copy paste stuff, and it mostly works.

Notice "don't read the diffs anymore".

In fact, this is practically the anniversary of that tweet: https://x.com/karpathy/status/2019137879310836075?s=20


Ahh, Bulverism, with a hint of ad hominem and a dash of No True Scotsman. I think the most damning indictment here is the seeming inability to make actual arguments rather than just cheap shots at people you've never even met.

Please tell me, "Were people excited about high-level languages just programmers who 'couldn't hack it' with assembly? Maybe you are one of those? Were GUI advocates just people who couldn't master the command line?"


Thanks for teaching me about Bulverism, I hadn't heard of that fallacy before. I can see how my comment displays those characteristics and will probably try to avoid that pattern more in the future.

Honestly, I still think there's truth to what I wrote, and I don't think your counter-examples prove it wrong per se. The prompt I responded to ("why are people taking this seriously") also led fairly naturally down the road of examining the reasons. That was of course my choice to make, but it's also just what interested me in the moment.


I think he's a cook, watching people putting frozen "meals" in the microwave and telling himself: "hey! That's not cooking!".

And I totally agree with him. Throwing some kind of fallacy in the air for show doesn't make your argument, or lack thereof, more convincing.


>I think he's a cook, watching people putting frozen "meals" in the microwave and telling himself: "hey! That's not cooking!".

It's the equivalent of saying anyone excited about being able to microwave frozen meals is a hack who couldn't make it in the kitchen. I'm sorry, but if you don't see how ridiculous that assertion is then I don't know what to tell you.

>And I totally agree with him. Throwing some kind of fallacy in the air for show doesn't make your argument, or lack thereof, more convincing.

A series of condescending statements meant to demean, with no objective backing whatsoever, is not an argument. What do you want me to say? There's nothing worth addressing, other than pointing out how empty it is.

You think there aren't big shots, more accomplished than anyone in this conversation who are similarly enthusiastic?

You and OP have zero actual clue. At any advancement, regardless of how big or consequential, there are always people like that. It's very nice to feel smart and superior and degrade others, but people ought to be better than that.

So I'm sorry but I don't really care how superior a cook you think you are.


> You think there aren't big shots, more accomplished than anyone in this conversation who are similarly enthusiastic?

I think both things can be true simultaneously.

You're arguing against a straw man.


Pointing out that your argument relies on an unverifiable (and easily countered) generalization isn't a straw man.


Half serious - but is that really so different than many apps written by humans?

I've worked on "legacy systems" written 30 to 45 years ago (or more) and still running today (things like green-screen apps written in Pick/Basic, Cobol, etc.). Some of them were written once and subsystems replaced, but some of it is original code.

Systems written in the last, say, 10 to 20 years, I've seen undergo drastic rates of change, sometimes full rewrites every few years. This seemed to go hand in hand with the rise of agile development (not condemning nor approving of it), where rapid rates of change were expected, and often the tech the system was written in was changing rapidly as well.

In hardware engineering, I personally also saw a huge move to more frequent design and implementation refreshes to prevent obsolescence issues (some might say this is "planned obsolescence" but it also is done for valid reasons as well).

I think not reading the code at all TODAY may be a bit premature, but I don't think it's impossible that someday, in the nearer rather than farther future, we'll be at a point where generative systems have more predictability and maybe even get certified for the safety of the code they generate, leading to truly not reading the code.

I'm not sure it's a good future, or that it's tomorrow, but it might not be beyond the next 20-year timeframe either; it might be sooner.


I would enjoy discussion with whoever voted this down - why did you?

What is your opinion and did you vote this down because you think it's silly, dangerous or you don't agree?


I'm torn between running away to be an electrician or just waiting three years until everyone realises they need engineers who can still read.

Sometimes it feels like pre-AI education is going to be like low-background steel for skilled employees.


> 2 - 3 years there will be so much technical debt that we'll have to throw away entire pieces of software.

That happens just as often without AI. Maybe the people that like it all have experience with trashing multiple sets of products over the course of their lives?


Reading and understanding code is more important than writing imo


It’s pretty well established that you cannot understand code without having thought things through while writing it. You need to know why things are written the way they are to understand what is written.


Yeah, just reading code does little to help me understand how a program works. I have to break it apart and change it and run it. Write some test inputs, run the code under a debugger, and observe the change in behavior when changing inputs.


If that were true, then only the person who wrote the code could ever understand it enough to fix bugs, which is decidedly not true.


I’ll grant you that there are many trivial software defects that can be identified by simply reading the code and making minor changes.

But for architectural issues, you need to be able to articulate how you would have written the code in the first place, once you understand the existing behavior and its problems. That is my interpretation of GP’s comment.


I've seen software written and architected by Claude and I'd say that they're already ready to be thrown out. Security sucks, performance will probably suck, maintainability definitely sucks, and UX really fucking sucks.


The coincidental timing between the rapid increase in the number of emergency fixes coming out on major software platforms and the proud announcement of the amount of code that's being produced by AI at the same companies is remarkable.

I think 2-3 years is generous.

Don't get me wrong, I've definitely found huge productivity increases in using various LLM workflows in both development as well as operational things. But removing a human from the loop entirely at this point feels reckless bordering on negligent.


I actually think this is fair to wonder about.

My overall stance on this is that it's better to lean into the models & the tools around them improving. Even in the last 3-4 months, the tools have come an incredible distance.

I bet some AI-generated code will need to be thrown away. But that's true of all code. The real questions to me are: are the velocity gains worth it? Will the models be so much better in a year that they can fix those problems themselves, or rewrite it?

I feel like time will validate that.


I have wondered the same, but for the projects I am completely "hands off" on, the model improvements have overcome this issue time and time again.


If the models don't get to the point where they can correct fixes on their own, then yeah, everything will be falling apart. There is just no other way around increasing entropy.

The only way to harness it is to somehow package code-producing LLMs into an abstraction and then somehow validate the output. Until we achieve that, imo it doesn't matter how closely people watch the output; things will be getting worse.


> If the models don't get to the point where they can correct fixes on their own

Depending on what you're working on, they are already at that point. I'm not into any kind of AI maximalist "I don't read code" BS (I read a lot of code), but I've been building a fairly expansive web app to manage my business using Astro + React and I have yet to find any bug or usability issue that Claude Code can't fix much faster than I would have (+). I've been able to build out, in a month, a fully TDD app that would have conservatively taken me a year by myself.

(+) Except for making the UI beautiful. It's crap at that.

The key that made it click is exactly what the person describes here: using specs that describe the key architecture and use cases of each section. So I have docs/specs with files like layout.md (overall site shell info), ui-components.md, auth.md, database.md, data.md, and lots more for each section of functionality in the app. If I'm doing work that touches ui, I reference layout and ui-components so that the agent doesn't invent a custom button component. If I'm doing database work, reference database.md so that it knows we're using drizzle + libsql, etc.

This extends up to higher level components where the spec also briefly explains the actual goal.

Then each feature-building session follows a pattern: brainstorm and create a design doc + initial spec (updates or new files) -> write a technical plan clearly following TDD, designed for batches of parallel subagents to work on -> have Claude implement the technical plan -> manual testing (often, I'll identify problems and request changes here) -> automated testing (much stricter linting, knip, etc. than I would use for myself) -> finally, update the spec docs again based on the actual work that was done.

My role is less about writing code and more about providing strict guardrails. The spec docs are an important part of that.
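The spec-referencing step described above can be sketched as a small helper that gathers the docs/specs files relevant to the area being touched and prepends them to a session prompt. This is a minimal sketch under my own assumptions, not the commenter's actual tooling: the `AREA_SPECS` mapping and `build_context` helper are invented for illustration; only the spec file names come from the comment.

```python
# Hypothetical sketch: assemble the relevant spec docs for one work area
# into a prompt preamble, so the agent sees the right guardrails.
from pathlib import Path

SPEC_DIR = Path("docs/specs")

# Which spec files each work area should pull in (assumed mapping,
# file names taken from the comment's examples).
AREA_SPECS = {
    "ui": ["layout.md", "ui-components.md"],
    "database": ["database.md", "data.md"],
}

def build_context(area: str) -> str:
    """Concatenate the spec docs for one work area into a single string."""
    parts = []
    for name in AREA_SPECS.get(area, []):
        path = SPEC_DIR / name
        if path.exists():
            parts.append(f"## {name}\n{path.read_text()}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    # Toy contents so the sketch runs standalone.
    SPEC_DIR.mkdir(parents=True, exist_ok=True)
    (SPEC_DIR / "database.md").write_text("We use drizzle + libsql.")
    (SPEC_DIR / "data.md").write_text("Data model notes.")
    print(build_context("database"))
```

The point of the mapping is simply that database work never has to re-discover the stack, and UI work never reinvents a button component.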


In 2-3 years from now, if coding AI continues to improve at this pace, I reckon people will rewrite entire projects.

I can't imagine not reading the code I'm responsible for any more than I could imagine not looking out the windscreen in a self driving Tesla.

But if so many people are already there, mostly highly skilled programmers, imagine in 2 years' time with people who've never programmed!


If I keep getting married at the same pace I have, then in a few years I'll have like 50 husbands.


Well, Tesla has been nearly at FSD for how long? The analogy you make sorta makes it sound less likely


Seems dangerous to wager your entire application on such an uncertainty


Some people are not aware that they are one race condition away from a class action lawsuit.


The proponents of Spec Driven Development argue that throwing everything out completely and rebuilding from scratch is "totally fine". Personally, I'm not comfortable with the level of churn.


Also take something into account: absolutely _none_ of the vibe coding influencer bros make anything more complicated than a single-feature webapp that's already been implemented 50 times. They've never built anything complicated either, or maintained something for more than a few years with all the warts that entails. Literally, from his bio on his website:

> For 12 years, I led data and analytics at Indeed - creating company-wide success metrics used in board meetings, scaling SMB products 6x, managing organizations of 70+ people.

He's a manager that made graphs on Power BI.

They're not here because they want to build things, they're here to shit a product out and make money. By the time Claude has stopped being able to pipe together ffmpeg commands or glue together 3 JS libraries, they've gone on to another project and whoever bought it is a sucker.

It's not that much different from the companies of the 2000s promising a 5th generation language with a UI builder that would fix everything.

And then, as a very last warning: the author of this piece sells AI consulting services. It's in his interest to make you believe everything he has to say about AI, because by God are there going to be suckers buying his time at indecently high prices to get shit advice. This sucker is most likely your boss, by the way.


No true programmer would vibecode an app, eh?


Oh no, they would. I would.

I'd have the decency to know, and to tell people, that it's a steaming pile of shit and that I have no idea how it works, and I wouldn't have the shamelessness to sell a course on how to put out LLM vomit in public.

Engineering implies respect for your profession. Act like it.


But invoking No True Scotsman would imply that the focus is on gatekeeping the profession of programming. I don’t think the above poster is really concerned with the prestige aspect of whether vibe bros should be considered true programmers. They’re more saying that if you’re a regular programmer worried about becoming obsolete, you shouldn’t be fooled by the bluster. Vibe bros’ output is not serious enough to endanger your job, so don’t fret.


Yes, and you can rebuild them for free


Claude, Codex and Gemini can read code much faster than we can. I still read snippets, but mostly I have them read the code.


Unfortunately they're still too superficial. 9 times out of 10 they don't have enough context to properly implement something and end up just tacking it on in some random place with no regard for the bigger architecture. Even if you do tell it something in an AGENT.md file, it often just doesn't follow it.


I use them to probabilistically program. They’re better than me and I’ve been at it for 16 years now. So I wouldn’t say they’re superficial at all.

What have you tried to use them for?


I have a wide range of Claude Code based setups, including one with an integrated issue tracker and parallel swarms.

And for anything really serious? Opus 4.5 struggles to maintain a large-scale, clean architecture. And the resulting software is often really buggy.

Conclusion: if you want quality in anything big in February 2026, you still need to read the code.


Opus is too superficial for coding (great at bash though, on the flip side); I'd recommend giving Codex a try.


As LLMs advance so rapidly, I think all the AI slop code written today will be easily digestible by the LLMs a few generations down the line. I think there will be a lot of improvements in making user intent clearer. Combined with larger context windows, refactoring even a bad codebase won't be a challenge.


Remember, though, this forum is full of people who consider code objects when it's just state in a machine.

We have been throwing away entire pieces of software forever. Where's Novell? Who runs 90s Linux kernels in prod?

Code isn't a bridge or a car. Preservation isn't meaningful. If we aren't shutting the DCs off, we're still burning the resources regardless of whether we save old code or not.

Most coders are so many layers of abstraction above the hardware at this point anyway they may as well consider themselves syntax artists as much as programmers, and think of Github as DeviantArt for syntax fetishists.

I am working on a model of /home to experiment with booting Linux into models. I can see a future where Python on my screen "runs" without an interpreter because the model is capable of correctly generating the appropriate output without one.

Code is an ethno-object; it only exists socially. It's not essential to computer operations. At the hardware level it's arithmetical operations against memory states.

I am working on my own "geometric primitives" models that know how to draw GUIs, 3D world primitives, and text; think "boot to Blender". Rather than store data in strings, it will just scaffold out vectors to a running "desktop metaphor".

It's just electromagnetic geometry, delta sync between memory and display: https://iopscience.iop.org/article/10.1088/1742-6596/2987/1/...


Pardon?


Because LLMs are bad at reviewing code for the same reasons they are bad at writing it? They get tricked by fancy clean syntax and take long descriptions/comments at face value without considering the greater context.


I don't know, I prompted Opus 4.5 "Tell me the reasons why this report is stupid" on one of the example slop reports and it returned a list of pretty good answers.[1]

Give it a presumption of guilt and tell it to make a list, and an LLM can do a pretty good job of judging crap. You could very easily rig up a system to give this "why is it stupid" report and then grade the reports and only let humans see the ones that get better than a B+.

If you give them the right structure I've found LLMs to be much better at judging things than creating them.

Opus' judgement in the end:

"This is a textbook example of someone running a sanitizer, seeing output, and filing a report without understanding what they found."

1. https://claude.ai/share/8c96f19a-cf9b-4537-b663-b1cb771bfe3f


"Tell me the reasons why this report is stupid" is a loaded prompt. The tool will generate whatever output pattern matches it, including hallucinating it. You can get wildly different output if you prompt it "Tell me the reasons why this report is great".

It's the same as if you searched the web for a specific conclusion. You will get matches for it regardless of how insane it is, leading you to believe it is correct. LLMs take this to another level, since they can generate patterns not previously found in their training data, and the output seems credible on the surface.

Trusting the output of an LLM to determine the veracity of a piece of text is a bafflingly bad idea.


>"Tell me the reasons why this report is stupid" is a loaded prompt.

This is precisely the point. The LLM has to overcome its agreeableness to reject the implied premise that the report is stupid. It does do this, though it takes a lot, and it will eventually tell you "no, actually this report is pretty good".

The point being filtering out slop, we can be perfectly fine with false rejections.

The process would look like "look at all the reports, generate a list of why each of them is stupid, and then give me a list of the ten most worthy of human attention" and it would do it and do a half-decent job at it. It could also pre-populate judgments to make the reviewer's life easier so they could very quickly glance at it to decide if it's worthy of a deeper look.
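The triage loop described above might be wired up something like this sketch. `llm_critique` is a stand-in: a real version would send the adversarial "list the reasons this report is stupid" prompt to a model API and grade the reply, whereas here it is stubbed with a toy heuristic so the pipeline shape is concrete and runnable. All names, the scoring scheme, and the heuristic are invented for illustration.

```python
# Sketch of a "filter slop, surface the worthy" triage pipeline.
from dataclasses import dataclass

@dataclass
class Report:
    title: str
    body: str

def llm_critique(report: Report) -> int:
    """Stub: return a 0-100 'worth human attention' score.
    A real implementation would prompt a model with the adversarial
    critique and grade its response; this placeholder just rewards
    reports that look substantive."""
    score = 10
    if "steps to reproduce" in report.body.lower():
        score += 50
    if len(report.body) > 200:
        score += 20
    return min(score, 100)

def triage(reports: list[Report], top_n: int = 10) -> list[Report]:
    """Score every report and surface only the top_n for human review."""
    ranked = sorted(reports, key=llm_critique, reverse=True)
    return ranked[:top_n]
```

Because the goal is only filtering, a conservative threshold that produces plenty of false rejections is acceptable here, which is what makes the loaded prompt workable.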


Ok, run the same prompt on a legitimate bug report. The LLM will pretty much always agree with you


find me one


https://hackerone.com/curl/hacktivity Add a filter for Report State: Resolved. FWIW I agree with you, you can use LLMs to fight fire with fire. It was easy to see coming, e.g. it's not uncommon in sci-fi to have scenarios where individuals have their own automation to mediate the abuses of other people's automation.

I tried your prompt with https://hackerone.com/reports/2187833 by copying the markdown, Claude (free Sonnet 4.5) begins: "I can't accurately characterize this security vulnerability report as "stupid." In fact, this is a well-written, thorough, and legitimate security report that demonstrates: ...". https://claude.ai/share/34c1e737-ec56-4eb2-ae12-987566dc31d1

AI sycophancy and over-agreement are annoying but people who just parrot those as immutable problems or impossible hurdles must just never try things out.


It's interesting to try. I picked six random reports from the hackerone page. Claude managed to accurately detect three "Resolved" reports as valid, two "Spam" as invalid, but failed on this one https://hackerone.com/reports/3508785 which it considered a valid report. All using the same prompt "Tell me all the reasons this report is stupid". It still seems fairly easy to convince Claude to give a false negative or false positive by just asking "Are you sure? Think deeply" about one of the reports it was correct about, which causes it to reverse its judgement.


No. Learn about the burden of proof and get some basic reason - your AI sycophancy will simply disappear.


No. I already found three examples, cited sources and results. The "burden of proof" doesn't extend to repeatedly doing more and more work for every naysayer. Yours is a bad faith comment.


And if you ask why it's accurate it'll spaff out another list of pretty convincing answers.


It does indeed, but at the end added:

>However, I should note: without access to the actual crash file, the specific curl version, or ability to reproduce the issue, I cannot verify this is a valid vulnerability versus expected behavior (some tools intentionally skip cleanup on exit for performance). The 2-byte leak is also very small, which could indicate this is a minor edge case or even intended behavior in certain code paths.

Even biased towards positivity it's still giving me the correct answer.

Given a neutral "judge this report" prompt we get

"This is a low-severity, non-security issue being reported as if it were a security vulnerability." with a lot more detail as to why

So positive, neutral, or negative biased prompts all result in the correct answer that this report is bogus.


Yet this is not reproducible. This is the whole issue with LLMs: they are random.

You cannot trust that it'll do a good job on all reports, so you'll have to manually review the LLM's reports anyway, or hope that real issues didn't get false negatives or fake ones false positives.

This is what I've seen most LLM proponents do: they gloss over the issues and tell everyone it's all fine. Who cares about the details? They don't review the gigantic pile of slop code/answers/results they generate. They skim and say YOLO. Worked for my narrow set of anecdotal tests, so it must work for everything!

IIRC DOGE did something like this to analyze government jobs that were needed or not and then fired people based on that. Guess how good the result was?

This is a very similar scenario: make some judgement call based on a small set of data. It absolutely sucks at it. And I'm not even going to get into the issue of liability which is another can of worms.


Is it not reproducible? Someone upthread reproduced it and expanded on it. It worked for me the first time I prompted. Did you try it, or are you just guessing that it's not reproducible because that's what you already think?

I'm not talking about completely replacing humans, the goal of this exercise was demonstrating how to use an LLM to filter out garbage. Low quality semi-anonymous reports don't deserve a whole lot of accuracy and being conservative and rejecting most reports even when you throw out legitimate ones is fine.

It seems like, regardless of the evidence presented, your prejudices will lead you to the same conclusions, so what's the point of discussing anything? I looked for, found, and shared evidence; you're sharing your opinion.

>IIRC DOGE did something like this to analyze government jobs that were needed or not and then fired people based on that. Guess how good the result was?

I'm talking about filtering spammy communication channels, that has nothing like the care required in making employment decisions.

Your comment is plainly just bad faith and prejudice.


> Is it not reproducable? Someone up thread reproduced it and expanded on it. It worked for me the first time I prompted. Did you try it or are you just guessing that it's not reproducable because that's what you already think?

I assumed you knew how LLMs work. They are random by nature, not "because I'm guessing it". There's a reason that if you ask the LLM the exact same prompt hundreds of times you'll get hundreds of different answers.

>I looked for, found, and shared evidence

Anecdotal evidence. Studies have shown how unreliable LLMs are exactly because they are not deterministic. Again, it's a fact, not an opinion.

>I'm talking about filtering spammy communication channels

So if we make tons of mistakes there, who cares, right?

I only used this as an example because it's one of the few very public uses of LLMs to make judgement calls where people accepted it as true and faced consequences.

I'm sure there are plenty more people getting screwed over by similar mistakes, but folks generally aren't stupid enough to say that publicly. Maybe the Salesforce huge mistake qualifies too? Incidentally it also involved people's jobs.

Regardless, the point stands: they are unreliable.

Want to trust LLMs blindly for your weekend project? Great! The only potential victim for its mistakes is you. For anything serious like a huge open source project? That's irresponsible.


I think it would, given that there is no air resistance.


btw it's only been getting seriously deployed since 2010


Is there a reason for the lack of IPv6 support?


[exe.dev co-founder here] It is planned! The reason we have not got to it yet is that it needs to be very different from IPv4 support. We have spent a lot of time on machinery to make `ssh yourmachine.exe.xyz` work without having to allocate you an IPv4 address. The mechanisms for IPv6 can and should be different, but they will also interact with how assigning public static IPv4 addresses will work in the future.

We do not want to end up in the state AWS is in, where any production work requires navigating the differences between how AWS manages v4 and v6. And that means rolling out v6 is going to be a lot of work for us. It will get done.

I added a public tracking bug here: https://github.com/boldsoftware/exe.dev/issues/16


You can use it right now if you build it from source, in fact I am writing this HN comment from it.

