Hacker News | new | past | comments | ask | show | jobs | submit | login | jcranmer's comments

About 20% of the world's oil flows through the Strait of Hormuz, which is effectively closed right now. So far, the economy is coping by drawing down inventories elsewhere and praying that the strait reopens soon, but even so, crude oil futures are skyrocketing (up 50% in a week). If this lasts for a few months--and if a few oil tankers get blown up in the crossfire--this is going to be a repeat of the 1970s oil embargo. There is also the worry that the war will end up targeting and destroying a significant chunk of the oil production facilities in the region, which would prolong the energy crisis well beyond the end of active hostilities.

Combine that with the fact that the war is being led by a senile idiot who is unable to articulate a strategic purpose for the war in the first place and being prosecuted by someone who thinks that war crimes are aspirational, and you begin to understand that there is actually little prospect of this being resolved anytime soon.


One potential knock-on impact of the strait being effectively closed is that at some point the Gulf states will be forced to shut in production as local storage fills up and output can't be exported. Combined with war damage to critical transport/export infrastructure, that will cause a lag where production can't meet global demand even once the strait reopens. Turning oil wells back on is not like flipping a light switch.


Ignoring concerns about the wisdom of the war, there are about three directions this can go:

Fizzle out: the strait reopens because Iran needs ocean shipping.

Continue as now: oil trade disrupted, lots of missiles and drones and such expended; increased munitions demand leads to increased manufacturing jobs.

Boots on the ground: oil trade reopened, long term quagmire, probably more munitions production.


The other Gulf States will now have to greatly increase military spending in order to protect their sea lines of communication against Iranian aggression. A lot of that spending will flow to US defense contractors. (I'm not claiming this is a good thing, just that it's inevitable.)

Not just oil but a lot of fertilizer goes through the strait as well, fertilizer that has to be applied at certain points in the year. Missing a shipment by a few months will decimate crop yields across the world.

If you think food is expensive now, wait until fall!

Unrelated: The Grapes of Wrath is a good read if you're looking for something to distract yourself with.


You're not wrong here. I'm not entirely up to speed on all of the deep Rust discussions, but my sense of the language's evolution is that while there is definitely a loud contingent pushing for the complexity of full effect systems or linear types, these sorts of proposals aren't all that likely to actually move forward in the language.

(I should note that of all of the features mentioned in this blog post, the only one I actually expect to see in Rust someday is pattern types, and that's largely because it partially exists already in unstable form to use for things like NonZeroU32.)
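For context, here is a minimal sketch of the niche that NonZeroU32 already carves out in stable Rust today (this is existing library behavior, not the proposed pattern-types feature, which would let you express such restricted ranges more generally):

```rust
use std::num::NonZeroU32;

fn main() {
    // NonZeroU32 statically excludes 0, so Option<NonZeroU32> can use 0
    // as the "niche" encoding for None: no extra discriminant is needed,
    // and the Option is the same size as a plain u32.
    assert_eq!(
        std::mem::size_of::<Option<NonZeroU32>>(),
        std::mem::size_of::<u32>()
    );

    // The invariant is checked once, at the construction boundary.
    let n = NonZeroU32::new(42).expect("42 is nonzero");
    assert_eq!(n.get(), 42);
    assert!(NonZeroU32::new(0).is_none());
}
```

Pattern types would, roughly speaking, generalize this one hard-coded "not zero" restriction into arbitrary user-specified value ranges.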


Yoshua works directly on developing the language, and he mentions he is working on these features specifically (he is part of the effects initiative), so I wouldn't be so sure you won't see these features in Rust.

Yoshua is part of the "loud contingent" being described. He's not on the lang team, and he's been "working on" things like keyword generics for years without any indication that they are going to make it into the language.

> We (Oli, Niko, and Yosh) are excited to announce the start of the Keyword Generics Initiative, a new initiative 1 under the purview of the language team

https://blog.rust-lang.org/inside-rust/2022/07/27/keyword-ge...

Maybe he's not on the language team (I haven't read enough about Rust governance structures to know definitively), but it's not like he's some random person working on this. And yes, work takes time. I actually disagreed with his initial approach, where the syntax was to have a bunch of effects before the function name, and everyone rightly mentioned how messy it was. So they should be taking it slow anyway.


What complexity?

I'm not sure that's really the case. The fundamental issue with the idea of a Roman Industrial Revolution isn't that Rome didn't have the technical antecedents (although that's still a big issue), but rather that the Industrial Revolution only solves problems that Rome didn't have.

One of the big, if easy, mistakes to make about history is to assume that a historical society is just like modern society at a lower tech level. Bret Devereaux is fond of dunking on George R. R. Martin's question "but what was Aragorn's tax policy like?" as malformed, because Gondor is a polity that doesn't really have the capacity to have a tax policy in the first place (it's pretty clearly modeled off of something like the Byzantine state). Not that Tolkien is immune from this either--the Shire suffers from being a Victorian-era English countryside transplanted to a ~15th-century tech level.


The "unexpectedly" is because the people looking at more real-time (but more indirect) indicators were expecting jobs to increase by about 50k or so.

It's rather more like someone going "based on the daily footfall numbers in my store, I expect sales to be up 1% this month" and the actual data being down 2%.


> Your CPU doesn’t know or care what functions are

This has already been commented on by a couple of people, but yes, your CPU absolutely does care a lot about functions. At the very least, call/ret matching is important for branch prediction, but the big architectures nowadays have shadow stacks and CFI checks that require you to use call/ret for regular functions. x86 has an even more thoroughly built-in notion of functions, since it has a (now mostly defunct) infrastructure for doing task switching via regular-ish call instructions.

> The toString method that gets called depends on the type of the receiver object. This isn’t determined at compile time, but instead a lookup that happens at runtime. The compiler effectively generates a switch statement that looks at the result of getClass and then calls the right method. It’s smarter than that for performance I’m sure, but conceptually that’s what it’s doing.

No, it's conceptually doing the exact opposite. Class objects have a vtable pointer--a pointer to a list of functions--and the compiler is reading the vtable and calling the nth function via function pointer. The difference is quite important: vtables are an inherently open system (anyone can define their own vtable, if they're sufficiently crazy), but switches are inherently closed (the complete set of possible targets has to be known at compile time). Not that I've written it up anywhere, but I've come to think of the closed nature of switch statements as fundamentally anathema to the ideals of object-oriented programming.
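The open/closed contrast can be sketched in a few lines of Rust (the names here are made up for illustration): trait objects dispatch through a vtable, while a match over an enum is the moral equivalent of the switch.

```rust
// Open dispatch: any crate can add a new implementor of Speak.
// A &dyn Speak is a (data pointer, vtable pointer) pair, and the call
// reads the vtable and jumps through a function pointer.
trait Speak {
    fn speak(&self) -> String;
}

struct Dog;
impl Speak for Dog {
    fn speak(&self) -> String {
        "woof".to_string()
    }
}

// Closed dispatch: the match must enumerate every variant at compile
// time, so adding a new case means editing this type and every match.
enum Animal {
    Dog,
    Cat,
}

fn speak_closed(a: &Animal) -> String {
    match a {
        Animal::Dog => "woof".to_string(),
        Animal::Cat => "meow".to_string(),
    }
}

fn main() {
    let open: &dyn Speak = &Dog;
    assert_eq!(open.speak(), "woof");
    assert_eq!(speak_closed(&Animal::Cat), "meow");
}
```

Note the asymmetry: a downstream crate can implement Speak for its own type without touching this code, but nothing outside this module can add a variant to Animal.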


The vtable-vs-switch dichotomy is part of what Philip Wadler called the "expression problem": https://en.wikipedia.org/wiki/Expression_problem

SCOTUS hasn't ruled on any AI copyright cases yet. But it said in Feist v. Rural (1991) that copyright requires a minimal creative spark. The US Copyright Office maintains that human authorship is required for copyright, and the 9th Circuit has explicitly held that a non-human animal cannot hold any copyright.

Functionally speaking, AI is viewed like any other machine tool. Using, say, Photoshop to draw an image doesn't make that image lose copyright, but nor does it imbue the resulting image with copyright. It's the creativity of the human use of the tool (or lack thereof) that creates copyright.

Whether or not AI-generated output a) infringes the copyright of its training data and b) if so, whether that is fair use is not yet settled. There are several pending cases asking this question, and I don't think any of them have reached the appeals-court stage yet, much less SCOTUS. But to be honest, there's enough evidence of LLMs regurgitating training inputs verbatim to show that they're capable of infringing copyright (and a few cases have already found infringement in such scenarios), and given the 2023 Warhol decision, arguing that they're fair use is a very steep claim indeed.


The lack thereof (of human use). Prompts are not copyrightable, and thus neither is the output. Besides, retelling a story is fair use, right? Otherwise we should ban all generative AI and prepare for a Dune/Foundation future. But we're not there, and perhaps we never will be.

So the LLM training question first needs to be settled; then we can talk about whether retelling a whole software package infringes anyone's rights. And even if it does, there are no laws in place to pursue it.


> Prompts are not copyrightable

Surely that varies on a case by case basis? With agentic coding the instructions fed in are often incredibly detailed.


In practice the output of the LLM does not reveal what the prompt was, and the output varies randomly, so it is unlikely you would be sued for copying the prompt. And in fact you would not know what the prompt for the original was, if any, unless you copied the prompt from somewhere.

> Besides retelling a story is fair use, right?

Actually, most of the time, it is not.


> what? 3 of the justices were nominated by Trump. You think the people appointing them didn't have internal deliberations before they were appointed, including about things Trump had thought about like tariffs?

They were nominated in Trump's first term, which had a qualitatively different cabinet assembled around Trump, one much less focused on sycophancy and pleasing him. I don't think anybody in Trump's cabinet six years ago was thinking about a president's potential power to change tariffs based on how he felt waking up in the morning, much less interrogating judicial candidates on how willing they were to go along with that.


> And the supreme court doesn't hear cases that are 100% obviously illegal.

There is an argument in about two months' time as to whether or not the Birthright Citizenship clause of the 14th Amendment actually guarantees birthright citizenship in the US. There is no serious legal argument in favor of the interpretation being advanced by the Trump administration, that it does not. And yet here we are.


I'm not aware of any major BLAS library that uses Strassen's algorithm. There are a few reasons for this; one of the big ones is that Strassen has much worse numerical stability than traditional matrix multiplication. Another big one is that for very large dense matrices--which are multiplied using various flavors of parallel algorithms--Strassen vastly increases the communication overhead. Not to mention that the largest matrices are probably using sparse matrix arithmetic anyway, which is a whole different set of algorithms.
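To make the trade-off concrete, here is the classic 2x2 base case of Strassen's algorithm sketched in Rust, next to the naive product. Strassen trades one of the eight multiplications for a pile of extra additions and subtractions; those extra sums are exactly where the error amplification (and, recursively at scale, the extra data movement) comes from.

```rust
// Naive 2x2 product: 8 multiplications, 4 additions.
fn naive(a: [[f64; 2]; 2], b: [[f64; 2]; 2]) -> [[f64; 2]; 2] {
    let mut c = [[0.0; 2]; 2];
    for i in 0..2 {
        for j in 0..2 {
            for k in 0..2 {
                c[i][j] += a[i][k] * b[k][j];
            }
        }
    }
    c
}

// Strassen's 2x2 base case: 7 multiplications, 18 additions/subtractions.
// In the full algorithm these scalars are submatrix blocks, recursively.
fn strassen(a: [[f64; 2]; 2], b: [[f64; 2]; 2]) -> [[f64; 2]; 2] {
    let m1 = (a[0][0] + a[1][1]) * (b[0][0] + b[1][1]);
    let m2 = (a[1][0] + a[1][1]) * b[0][0];
    let m3 = a[0][0] * (b[0][1] - b[1][1]);
    let m4 = a[1][1] * (b[1][0] - b[0][0]);
    let m5 = (a[0][0] + a[0][1]) * b[1][1];
    let m6 = (a[1][0] - a[0][0]) * (b[0][0] + b[0][1]);
    let m7 = (a[0][1] - a[1][1]) * (b[1][0] + b[1][1]);
    [
        [m1 + m4 - m5 + m7, m3 + m5],
        [m2 + m4, m1 - m2 + m3 + m6],
    ]
}

fn main() {
    let a = [[1.0, 2.0], [3.0, 4.0]];
    let b = [[5.0, 6.0], [7.0, 8.0]];
    assert_eq!(naive(a, b), [[19.0, 22.0], [43.0, 50.0]]);
    assert_eq!(strassen(a, b), naive(a, b));
}
```

On exact small integers the two agree, but each addition of large intermediate sums like m1 can cancel and magnify floating-point rounding error relative to the naive inner products.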

I am extremely skeptical that that would be the case. Local stack accesses are all but guaranteed to be L1 cache hits, and if any memory access can be made fast, it's an access to the local stack. The general rule of thumb for performance engineering is that you're optimizing for L2 cache misses if you can't fit in L2 cache, so overall, I'd be shocked if this convoluted calling convention could eke out more than a few percent improvement, and even 1% I'm skeptical of. Meanwhile, making 14-argument functions is going to create a lot of extra work in several places in LLVM that I can think of (for starters, most of the SmallVector<Value *, N> handling chooses 4 or 8 for N, so there's going to be a lot of heap-allocating of 14-element arrays going on), which will more than eat up the gains you'd be expecting.
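The SmallVector point is about the small-buffer optimization: elements live inline (no allocation) up to a fixed capacity N, and spill to the heap past it. A toy Rust sketch of the idea (not LLVM's actual implementation; names and N are illustrative):

```rust
// A toy small-buffer vector in the spirit of LLVM's SmallVector<T, N>:
// up to N elements are stored inline with no allocation; pushing past
// N spills everything to a heap-allocated Vec.
enum SmallVec<T, const N: usize> {
    Inline { buf: [Option<T>; N], len: usize },
    Heap(Vec<T>),
}

impl<T: Copy, const N: usize> SmallVec<T, N> {
    fn new() -> Self {
        SmallVec::Inline { buf: [None; N], len: 0 }
    }

    fn push(&mut self, v: T) {
        match self {
            SmallVec::Inline { buf, len } if *len < N => {
                buf[*len] = Some(v);
                *len += 1;
            }
            SmallVec::Inline { buf, len } => {
                // Inline capacity exhausted: fall back to the heap.
                let mut vec: Vec<T> =
                    buf[..*len].iter().map(|x| x.unwrap()).collect();
                vec.push(v);
                *self = SmallVec::Heap(vec);
            }
            SmallVec::Heap(vec) => vec.push(v),
        }
    }

    fn is_heap(&self) -> bool {
        matches!(self, SmallVec::Heap(_))
    }
}

fn main() {
    // With a typical inline capacity of 4, fourteen "arguments"
    // blow past the inline buffer and force a heap allocation.
    let mut args: SmallVec<u32, 4> = SmallVec::new();
    for i in 0..14 {
        args.push(i);
    }
    assert!(args.is_heap());
}
```

That spill is the extra work being described: a compiler full of inline capacities tuned for 4 or 8 operands pays an allocation on every 14-element list.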
