
VC investment isn’t about margins, it’s about finding a unicorn. It doesn’t matter if margins are negative if your product is dominant in the market as you can fiddle with the margins after the fact. You just need to be invested long enough to see everyone else fail.

The problem with AI is that there doesn't seem to be a durable barrier to entry for a "winner take all" dynamic to work. The biggest barrier to entry seems to be the capital needed to train the models, but even free models are getting "good enough" for some uses and there's little friction to stop users from switching between models. Many frontends make this explicit by letting you pick the model you want to run inside the same environment.

If prices go up, I suspect a bunch of folks will jump to cheaper, less capable models instead of eating the added cost. The whole value proposition of AI in enterprise is around cost-cutting, so that mentality is likely to persist when choosing which model to pay for.


I imagine the calculus changes a little bit when you've invested hundreds of billions (trillions?) of dollars in a relatively short period of time. Priority number one is probably getting that money back. I think the fact that providers are RAPIDLY cutting back/jacking up prices points to this being the case.

We had more build failures in 2025 due to Actions outages or degraded service than any other reason.

Which is fair, but conversely we do many builds throughout the day on most business days and haven't had an impact that we noticed. It could also be that we deploy often and frequently and have set up our builds to be as quick as possible, so any issues would likely go unnoticed.

Personally I'd never use codeberg. Their FAQ on licensing [0] is basically everything that anyone who supports free software should abhor - it's "we might allow you to do what you want to".

[0] https://docs.codeberg.org/getting-started/faq/#how-about-pri...


I’ve been on both sides of this. Engineers who complain loudest about time wasted in too many meetings will also complain the loudest about how disconnected they feel from the decisions and from the product, IME.

What data is that? There's an unlabelled graph and a number at the current peak.


This is the data that should be in the blog post. Thanks for sharing.

IMO it transmits the magnitude of the impact pretty well.

It's kind of hard to read this with a straight face.

The unlabelled graph with big numbers on top, the priorities that don't match with what we're experiencing, and a list of things that they're doing without a real acknowledgement of the _dire_ uptime over the last 12 months....


These are not the worst graphs in the world... Sure, the bottom-left axis isn't labeled, but it still conveys the point correctly: growth from 2023 to 2024 to 2025 to 2026 is accelerating quickly. And at the end/start of 2026 they say there was more growth than the three years before, combined!

You don't need to know the bottom left axis number. We do have to assume the graph is linear, and not some kind of negative exponent log graph. But given the rest of the content, I think that is safe to assume.

Any company that experiences significantly more growth than they were planning for will have capacity issues.

The priorities are mostly in line with that. They are way beyond the point where they can just add more hardware; they need to make the backend more efficient, and all the stated goals are about helping there.


> You don't need to know the bottom left axis number.

We very much do. The graph suggests an insane growth in PRs from almost zero to 90M. Now compare this misleading graph with this much clearer one, which shows that the growth over the last three years has been less than 80%: https://github.blog/wp-content/uploads/2025/10/octoverse-202...


That link shows the number of PRs created to be less than 10M though.

Yes, to be honest, that graph could use some improvements as well. I should probably just link to the blog post with actual numbers: https://github.blog/news-insights/octoverse/octoverse-a-new-...

> These are not the worst graphs in the world... Sure the bottom left axis is not labeled, but it still conveys the point correctly.

No, they're completely useless. Using the "New repos per month" as an example, if the bottom left is 1m, then that's a 20x increase in 2 years which is a lot. If the bottom left is 19m, it's a 5% increase in 2 years which is nothing.

The massive surge on their labelled X axis starts in 2026, and these issues have been going on for a lot longer than that. GHA has been borderline unusable for a year at this point, if not longer.

> But given the rest of the content, I think that is safe to assume.

The rest of the content is "we're working on it", and "here's two outages in the last 14 days, one of which caused actual data loss"


More numbers: https://x.com/kdaigle/status/2040164759836778878

What's the question here: that you don't believe growth is currently exponential, or that you think it shouldn't be hard to scale when 10x YoY is not enough?


As a business user, our costs have gone up while service has gone down dramatically. Meanwhile our marginal cost to GitHub has hardly changed. Where our costs to them have increased, they mostly charge us per CPU minute, so they obviously aren't making any kind of loss on our account.

I’m sure they’re experiencing scaling issues across the platform, but it’s unacceptable for that to have a negative impact on us when we're sending them $250/dev/yr for (what is in all honesty) hosting a bunch of static text files.


I understand that, and maybe GitHub became a bad deal because of that.

But if anything, their post and your reply are precisely an endorsement of usage based billing.

The bit that's growing 13x YoY (and which they expect will easily blow past that) is unmetered - commits. The bit that is metered (for some, not all, folks) - action minutes - grew only 2x YoY.

GitHub was not built to limit the number of commits, checkouts, forks, issues, PRs, etc - nor do we want them to - but that's what's growing ridiculously as people unleash hordes of busy beaver agents on GitHub, because they're either free or unlimited.

Where there are limits - or usage based billing - people add guardrails and find optimizations.

Because for all the talk, agents don't bring a 10x value increase; otherwise, they'd justify a 10x cost increase.

Besides, other forges are having issues too. Even running your own. We have Anubis everywhere protecting them for a reason.


I'm curious how Azure DevOps reliability has been for comparison. My current job is managing stories in DevOps with SCC in GitHub Enterprise. While I like GitHub slightly more, I've been curious about the decision.

> we're sending them $250/dev/yr for (what is in all honesty) hosting a bunch of static text files.

You know, you can just host your own code forge. Or you can just drop gitolite on a server. Or pull directly from each others' dev machines on a LAN.

GitHub is not git.


In that case, why are you using them at all?

> we're sending them $250/dev/yr for (what is in all honesty) hosting a bunch of static text files.

so start a GitHub competitor which bills $50/dev/yr for solving this easy problem and make a lot of money?


These numbers should have been in the blog post, not the graphs that are present.

> What's the question here, you don't believe growth is currently exponential, or do you think it shouldn't be hard to scale

I think you're putting words in my mouth here; I didn't say either of those things. I'm saying that this blog post is a meaningless platitude when the github stability issues predate this, and that all this post says is "we hear you're having issues".


Sorry if I misread your intent.

I just think their charts, taken at face value, show substantially the same thing (for PRs, commits, new repos).

Either those charts are a bald-faced lie (the tweet could be as well), or there is no way for that chart to show anything else.

The only way to fake exponential growth like that would be to use an inverse log scale (which would be a bald-faced lie).

It doesn't even really matter what's the y-axis baseline, unless we really think growth was huge in 2020, then cratered to zero by 2023, now back to the previous normal.

As for the rest of the post, I do think it's panic mode platitudes. But I honestly don't know what I'd write instead that's better.

You can already see people complaining loudly where, instead of "we'll do better", they decided to limit usage.


No problem - it's tough online sometimes.

> I just think their charts, taken at face value, show substantially the same thing (for PRs, commits, new repos).

The problem is that these charts show the massive exponential growth starting in 2026. But this didn't start in 2026; this has been going on since early last year. My team had more build failures in 2025 due to Actions outages or "degraded performance" than _any other reason_, and that includes PRs that failed linting or tests that developers were working on.

> As for the rest of the post, I do think it's panic mode platitudes. But I honestly don't know what I'd write instead that's better.

IMO, this needed to be written 6 months ago (around the time the memo about them prioritising the migration to Azure was released), and then this post should have been "We're still struggling, this isn't good enough. Here's the amount of growth, here's what we've done to try and fix it, and here's what we're planning over the next 3-6 months", instead of "Our priorities are clear: availability first, then capacity, then new features" and "We are committed to improving availability, increasing resilience, scaling for the future of software development, and communicating more transparently along the way." This isn't transparency (yet).


You mean since the GH acquisition 6 years ago: https://damrnelson.github.io/github-historical-uptime/

"We hear you" in ~300 words, basically.

You can do the same with so many clients.

Anecdotally, my path was 2010 MacBook Pro -> 2015 MacBook Pro -> 2021 M1, with each device lasting about 10 years and keeping 2 in flight at once. The 2015 one is showing its age and is likely to be replaced this year or next. Running Linux on it isn't an option due to all the nonsense involved in suspend/sleep and the effect it has on battery life.

I also have a 2007 Intel Mac with FireWire that I use for some audio stuff that's still going strong with just an SSD swap.


20 years ago we were complaining about Steam being bloated and unnecessary; we were 6 months off Vista being a bloated mess and the Office Ribbon debacle being in full swing. PC games were often half-baked console ports with atrocious performance, filled with game-breaking bugs. Software was super rigid - there was no real cross-platform support. We were just heading into the Core 2 Duo realm and it was a mess.

Engineers sucked then as much as they suck now


If you want an example of how this will be abused by companies, https://www.theguardian.com/money/2015/aug/12/airport-shops-...

And if you want an example of who has the power these days, I've encountered airport shops that are "take it or leave it" (WHSmith in Spain, in fact). I was told they can't require my boarding pass, but they won't sell me anything without it... (There was no language barrier.)

Isn't that a tax thing on airports? I don't think it is the same problem.

I’d love to see your attempts at this. I think we’re close to something vaguely resembling this at a first glance but nothing that actually works.
