
Before switching to a hot, branchless code path, I was seeing strangely lower performance on Intel than on AMD under load. It was a little surprising to realize the branch predictor was the most likely cause.

Replace "oven" with a dish washer or a washing machine for your clothes. Those things do exactly all of this. Yet we still complain about washing clothes and doing the dishes, even though it is far less effort than anything our parents did, or their parents before them.

> The students, if they are, should be banned for life.

I'm all for repercussions ... but a life is a long time and students are usually only at the beginning of it.


The company I'm a director for licenses IP from my company. The company wouldn't exist without my IP (I'm one of the founders). Yet, I'm also a customer of the same. Dollars in/out, we're all kosher. This isn't fraud, it's just how companies work.

So long as the prices are fair and reasonable, sure. But that doesn't enable you to run up an arbitrarily large bill without either doing an arbitrary amount of actual work or committing fraud.

Yes. This is called “transfer pricing” and is tightly regulated in almost every country.

We were working on translations for Arabic and in the spec it said to use "Arabic numerals" for numbers. Our PM said that "according to ChatGPT that means we need to use Arabic script numbers, not Arabic numerals".

It took a lot of back-and-forths with her to convince her that the numbers she uses every day are "Arabic numerals". Even the author of the spec could barely convince her -- it took a meeting with the Arabic translators (several different ones) to finally do it. Think about that for a minute. People won't believe subject matter experts over an LLM.

We're cooked.


Kind of a tangent but that did make me curious about how numbers are written in Arabic: https://en.wikipedia.org/wiki/Eastern_Arabic_numerals

I guess "Western Arabic" would have been more precise.

The architect should have required Hindu numbers. Same result, but even more confusion.

Man this is maddening.

Honestly I think we're just becoming more aware of this way of thinking. It's certainly exacerbated now that everyone has "an expert" in their pocket.

It's no different than conspiracy theorists. We saw a lot more of them with the rise in access to the internet. Not because they didn't put in work to find answers to their questions, but because they don't know how to properly evaluate things, and because they think that being wrong is a (very) bad thing.

But the same thing happens with tons of topics, and it's way more socially acceptable. Look how everyone has strong opinions on topics like climate, rockets, nuclear, immigration, and all that. The problem isn't having opinions or thoughts, but the strength of them compared to the level of expertise. How many people think they're experts after a few YouTube videos or just reading the intro to the wiki page?

Your PM is no different. The only difference is the things they believed in, not the way they formed beliefs. But they still had strong feelings about something they didn't know much about. It became "their expert" vs "your expert" rather than "oh, thanks for letting me know". And that's the underlying problem. It's terrifying to see how common it is. But I think it also points to a (partial) solution, or at least a first step. Then again, domain experts typically have strong self-doubt. It's a feature, not a bug, but I'm not sure how many people are willing to be comfortable with being uncomfortable.


My favorite is when you need to rebuild/restart outside of claude and it will "fix the bug" and argue with you about whether or not you actually rebuilt and restarted whatever it is you're working on. It would rather call you a liar than realize it didn't do anything.

this is a pretty annoying problem -- i just intentionally solve it by asking claude to always use the right build command after each batch of modifications, etc

"That's an old run, rebuild and the new version will work" lol

No. Eventually the queues get full and goroutines pause waiting to place the element onto the queue, landing you right back at unfair scheduling.

https://github.com/php/frankenphp/pull/2016 if you want to see a “correctly behaving” implementation that becomes 100% cpu usage under contention.


fair point on blocking sends — but that's an implementation detail, not a structural one.

From my pov, the worker pool's job isn't to absorb saturation. it's to make capacity explicit so the layer above can route around it. a bounded queue that returns ErrQueueFull immediately is a signal, not a failure — it tells the load balancer to try another instance.

saturation on a single instance isn't a scheduler problem, it's a provisioning signal. the fix is horizontal, not vertical. once you're running N instances behind something that understands queue depth, the "unfair scheduler under contention" scenario stops being reachable in production — by design, not by luck.

the FrankenPHP case looks like a single-instance stress test pushed to the limit, which is a valid benchmark but not how you'd architect for HA.


My biggest issue with go is its incredibly unfair scheduler. No matter what load you have, P99 and especially P99.9 latency will be higher than any other language. The way that it steals work guarantees that requests “in the middle” will be served last.

It’s a problem that only go can solve, but fixing it means giving up some throughput: requests that are currently handled immediately would have to wait their turn. So overall latency will go up while P99 drops precipitously. Thus, they’ll probably never fix it.

If you have a system that requires predictable latency, go is not the right language for it.


> Thus, they’ll probably never fix it.

I'm sorry you had a bad experience with Go. What makes you say this? Have you filed an issue upstream yet? If not, I encourage you to do so. I can't promise it'll be fixed or delved into immediately, but filing detailed feedback like this is really helpful for prioritizing work.


> If you have a system that requires predictable latency, go is not the right language for it.

Having a garbage collector already makes this the case; it is a known trade-off.


This may have been practically true for a long time, but as Java's ZGC garbage collector proves, this is not a hard truth.

You can have stop-the-world pauses that are independent of heap size, and thus predictable latency (of course, trading off some throughput, but that trade-off is almost fundamental).


Not really, it is a matter of having the right implementation.

- https://www.ptc.com/en/products/developer-tools/perc

- https://www.aicas.com/products-services/jamaicavm

- https://www.azul.com/products/prime

Not all GCs are born alike.


Nim's GC is deterministic when you need it.

How so?

You can run it for fixed timeslices.

“It’s a problem that only go can solve”

I had this discussion a decade ago and concluded that a reasonably fair scheduler could be built on top of the go runtime scheduler by gating the work presented. The case can be made that the application is the proper, if not the only, place to do this. Performance aside, if you encounter a runtime limitation, filing an issue is how the Go community moves forward.


It's missing a custom scheduler option, like the Java and .NET runtimes offer; unfortunately, that is too many knobs for the usual Go approach to language design.

An interface for how it is supposed to behave, a runtime.SetScheduler() or something, would be enough, but it won't happen.


I find it hard to believe the people who built Go, coming from designing Plan 9 and Inferno, would build a language where it is difficult to swap out a component.

I have this feeling that in their quest to make Go simple, they added complexity in other areas. Then again, this was built at Google, not Bell Labs, so the culture of building absurdly complex things likely influenced this.


The same people refused to support generics for several years, and the current design still has some issues to iron out.

Go also lacks some of Limbo features, e.g. plugin package is kind of abandoned. Thus even though dynamic loading is supported, it is hardly usable.


> If you have a system that requires predictable latency, go is not the right language for it.

I presume that's by design, to trade off against other things google designed it for?


No clue. All I know is that people complain about it every time they benchmark.

> No matter what load you have, P99 and especially P99.9 latency will be higher than any other language

I strongly call BS on that.

Strong claim, and the evidence seems to be a hallucination in your own head.

There are several writeups of large backends ported from node/python/ruby to Go which resulted in dramatic speedups, including drop in P99 and P99.9 latencies by 10x

That's empirical evidence your claim is BS.

What exactly is so unfair about Go scheduler and what do you compare it to?

Node's lack of multi-threading?

Python's and Ruby's GIL?

Just leaving this to the OS thread scheduler, which, unlike Go's, has no idea about i/o and therefore cannot optimize for it?

Apparently the source of your claim is https://github.com/php/frankenphp/pull/2016

Which is optimizing for a very specific micro-benchmark of hammering the std-lib HTTP server with concurrent requests. Which is not what 99% of go servers need to handle. And is exercising way more than the scheduler. And is not benchmarking against any other language, so the sweeping statement about "higher than any other language" is literally baseless.

And you were able to make a change that trades throughput for P99 latency without changing the scheduler, which kind of shows it wasn't the scheduler but an interaction between a specific implementation of HTTP server and Go scheduler.

And there are other HTTP servers in Go that focus on speed. It's just 99.9% of Go servers don't need any of that because the baseline is 10x faster than python/ruby/javascript and on-par with Java or C#.


"There are several writeups of large backends ported from node/python/ruby to Go which resulted in dramatic speedups, including drop in P99 and P99.9 latencies by 10x"

But that's not comparing apples to apples. When you get a dramatic speedup, you will also see big drops in the P99 and P99.9 latencies because what stressed out the scripting language is a yawn to a compiled language. Just going from stressed->yawning will do wonders for all your latencies, tail latencies included.

That doesn't say anything about what will happen when the load increases enough to start stressing the compiled language.


Do I need to share the TLA+ spec that shows it's unfair? Or do you have any actual proof of your claims?

It would be helpful for you to share a link to the Github issue you created. If the TLA+ spec you no doubt put a lot of time into creating is contained there, that would be additionally amazing, but more relevant will be the responses from the maintainers so that we're not stuck with one side of the story.

Of course, expecting you to provide the link would be incredibly onerous. We can look it up ourselves just as easily as you can. Well, in theory we can. The only trouble is that I cannot find the issue you are talking about. I cannot find any issues in the Go issue tracker from your account.

So, in the interest of good faith, perhaps you can help us out this one time and point us in the right direction?


I’m not interested in contributing to go. I tried once, was basically ignored. I have contributed to issues there where it has impacted projects I’ve worked on. But even then, it didn’t feel collaborative; mostly felt like dealing with a tech support team instead of other developers.

That being said, I love studying go and learning how to use it to the best of my ability because I work on sub-µs networking in go.

When I get home, I’ll dig it up. But if you think it’s a fair scheduler, I invite you to just think about it on a whiteboard for a few minutes. It’s nowhere near fair and should be self-evident from first principles alone.


Here’s a much better write up than I’m willing to do: https://www.cockroachlabs.com/blog/rubbing-control-theory/

There are also multiple issues about this on GitHub.

And an open issue that has basically been ignored. golang/go#51071

Like I said. Go won’t fix this because they’ve optimized for throughput at the expense of everything else, which means higher tail latencies. They’d have to give up throughput for lower latency.


> And an open issue that has basically been ignored. golang/go#51071

It doesn't look ignored to me. It explains that the test coverage is currently poor, so they are in a terrible position of not being able to make changes until that is rectified.

The first step is to improve the test coverage. Are you volunteering? AI isn't at a point where it is going to magically do it on its own, so it is going to take a willing human hand. You do certainly appear to be the perfect candidate, both having the technical understanding and the need for it.


Heh. I've had my fair share of mailing list drama. This is political AND technical. Someone saying "let’s cut throughput" is going to get shot down fast, no matter the technical merit. If someone with the political clout were to be willing to champion the work and guide the discussion appropriately while someone like me does the work, that's different. That's at least how things like this are done in other communities, unless go is different.

> If someone with the political clout were to be willing to champion the work and guide the discussion appropriately while someone like me does the work, that's different.

There is unlikely anyone on the Go team with more political clout in this particular area than the one who has already reached out to you. You obviously didn't respond to him publicly, but did he reject your offer in private? Or are you just imagining some kind of hypothetical scenario where they are refusing to talk to you, despite evidence to the contrary?


> You obviously didn't respond to him publicly, but did he reject your offer in private?

I literally have no idea what you're talking about here.


You must not have read all the comments yet? One of Go's key runtime maintainers sent you a message. Now is your opportunity to give him your plan so that he can give you the political support you seek.

I still have no idea what you are talking about.

I thought it was a simple question. You don't know if you have read the comments or not?

> If you have a system, go is not the right language for it.

FTFY


And AI is stuck in the past. As we prepare to launch a new product… people using AI won’t know about it for months or years, potentially. This will make startups have to seed the planet with text so an AI learns about it, not to mention normal SEO and other shit. I’m sure it is only a matter of time before you can pay to inject your product into the models so it knows about it faster, but incumbent companies will pay more to make sure they don’t.

The future is going to suck.


> I’m sure it is only a matter of time before you can pay to inject your product into the models so it knows about it faster, but incumbent companies will pay more to make sure they don’t.

You have just discovered the fully enshittified version of the business model AI companies hope to reach.


Wasn’t expecting to see Hebrew here today.

Eh, you know, when the conversation has devolved to the level of "Python is slower to develop in than PHP because of spaces or tabs", you have to bust out the Hebrew.
