Hacker News | theamk's comments

A third-party security service got hacked, and then hackers used that to collect highly sensitive information from that service's users.

To fix this, let's add another third-party security service and give it all the sensitive information. I am sure it won't get hacked!


The Trivy attack did not hack anyone's secrets manager. It just waited until the key was retrieved and sitting in memory as a plaintext string, then read it.

VaultProof solves that specific moment. The key never exists as plaintext in your app or pipeline.

And even if VaultProof gets hacked, that is the whole point. We only store shares. Individual shares are mathematically useless. An attacker who completely owns our infrastructure still gets nothing they can use.

There is nothing to steal. That is the architecture.

Compromise VaultProof and you get worthless shares.
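The load-bearing claim here is standard threshold secret sharing: any share short of the threshold is indistinguishable from random noise. A toy 2-of-2 XOR split (a hypothetical sketch, not VaultProof's actual scheme) shows the property:

```python
import secrets

key = b"supersecretkey!!"

# share1 is pure randomness; share2 is the key masked by share1
share1 = secrets.token_bytes(len(key))
share2 = bytes(a ^ b for a, b in zip(key, share1))

# Either share alone is uniformly random and reveals nothing about key.
# Only XORing both shares together recovers it:
recovered = bytes(a ^ b for a, b in zip(share1, share2))
assert recovered == key
```

Real schemes (e.g. Shamir's) generalize this to k-of-n, but the "one share is useless" property is the same.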


That's how life generally works. If your friend tells you, "I went to that new movie yesterday. It was very boring, I fell asleep midway." - then you either listen to their advice or you don't. You don't ask your friend if they ever made a movie of their own. And you don't ask for third-party research on that movie either.

As for AI specifically: life is too short to read all the interesting pages as it is, and AI just makes it so much worse.

- AI is verbose in general, so you spend a lot of time reading without getting many new facts out of it.

- Heavy AI use often means the author has little idea about the topic themselves, and thus cannot engage in the comments. Since discussions with authors are often the most interesting part of HN, that makes the submission less interesting.

And yes, it is possible to use AI assistance to create a nice, concise report on a topic you can happily talk about, but then it would not be labeled as "AI".


> According to Anthropic, Mythos Preview successfully generates a working exploit for Firefox's JS shell in 72.4% of trials

Why are AI people so dramatic? Ok, there is yet another JS sandbox escape - not the first one, not the last one. It will be patched, and the bar will be raised for a bit... at least until the next exploit is found.

If anything, AI will make _weaponized_ exploits less likely. Before, one had to find a talented person, and get pretty lucky too. If this AI is as good as promised, you can have a dependabot-style exploit finder running 24/7 for a tenth the cost of a single FTE. If it's really that good, I'd expect all browser vendors to adopt one into their development process.


> Before, one had to find a talented person, and get pretty lucky too. If this AI is as good as promised, you can have dependabot-style exploit finder running 24/7 for the 1/10th cost of a single FTE

Not you. EVERYONE doing ANY kind of software will have to, because otherwise attackers can just pick and choose targets to point their exploit-bot at.


Which has always been the case. Attackers only have to find one exploit in the weakest part of the system, and usually that's more a function of grunt work than of being particularly sophisticated.

Well, you can only do that if you have access to the model. We're setting a precedent for the AI labs getting to pick and choose.

Not "ANY" kind of software, only the software that handles untrusted data in a non-trivial way. A lot of software, like local tools, does not.

> doing ANY kind of software

That's not at all clear. JS escape exploits have high value on the current Internet, so there's a lot of prior art. It's not surprising at all that this is what their model found, and it's not a statistic that immediately suggests any broader implications.


Further, Opus identified most of the vulnerabilities itself already. It just couldn’t exploit them.

Mythos seems much, much more creative and self directed, but I’m not yet convinced the core capabilities are significantly higher than what’s possible today.

The full price of finding the vulnerabilities was also something like $20k. That's a price point at which you could bring in a skilled professional to accomplish the same task.


Remember, that's the most expensive this capability will ever be.

Only if its model is opened up and can run on commodity hardware. Otherwise the price could go up as RAM and silicon prices climb.

Yes, but the problem with these models isn't a gradual shift, it's a step function. With a gradual shift, the world has time to react and adapt.

Ding ding ding, and this is why you are hearing about it. It is marketing for enterprise to pay a premium for the next model, with maybe a wakeup call to enforcement agencies as well (which is also marketing).

Codegen for many companies is much less continuous. Security is always on, and always a motivator.


This whole thing has just been a huge PR stunt the whole time. Even the original leak of the blog post was just more fuel to the hype.

All software has bugs. What this tells me is that the actors with the best models (and Anthropic apparently has one so good and expensive it is outstripping compute supply) will find the exploits first, and probably the ones that are hardest to find.

So yeah, dependabot, but the richest actors will have the best bots, and they probably won't share the exploits that nobody else's models can find.


> What this tells me is that the actors with the best models (and Anthropic apparently has one so good and expensive it is outstripping compute supply) they will find the exploits first and probably the ones that are hardest to find

Presumably we would not give the AI models to the "good guys" because then they would also find and patch these vulnerabilities?


Someone's "good guys" are just someone "bad guys". Access to a valuable resource/tool that provides some sort of power and utility will be just another contended item.

Anthropic is saying exactly what you're saying. They don't believe that software security is permanently ruined. They just want to ensure that good defensive techniques like the ones you describe are developed before large numbers of attackers get their hands on the technology.

You’re asking why people are being “dramatic” about an automated system that can do what highly specialized experts get paid hundreds of thousands of dollars to do?

It’s just fascinating to see how AI’s accomplishments are being systematically downplayed. I guess when an AI proves that P!=NP, I’m going to read on this forum “so what, mathematicians prove conjectures all the time, and also, we pretty much always knew this was true anyway”.


I am sceptical because AI companies, and Anthropic in particular, like to overplay their achievements and build undeserved hype. I also don't understand all the caveats (maybe the official announcement is clearer about what this really means).

But yeah, if their model can reliably write an exploit for novel bugs (starting from a crash, not a vulnerable line of code) then it's very significant. I guess we'll see, right?

edit: Actually the original post IS dramatic: "Has Mythos just broken the deal that kept the internet safe? For nearly 20 years the deal has been simple: you click a link, arbitrary code runs on your device, and a stack of sandboxes keeps that code from doing anything nasty". Browser exploits have existed before, and this capability helps defenders as much as it helps attackers, it's not like JS is going anywhere.


The interesting thing is that within a year we will know whether it is vapid hype or a momentous change.

Scepticism means staying wary and keeping one's mind open, and not closing your eyes to a new reality.


It would be warranted if Mythos could jailbreak an up-to-date iPhone. (Maybe it can?) That would actually also be nice, “please rewrite without Liquid Glass”.

> I guess when an AI proves that P!=NP,

What would be the practical impacts of this discovery?


Likely all existing cryptography would become crackable, some of it possibly very readily.

(Assuming you mean P==NP)

Would it become crackable, or just theoretically crackable?

E.g. it's one thing to show it's possible to fly to Mars, it's another thing to actually do it.


Not really:

* It's possible - very likely even - that even if somehow P=NP, the fastest algorithm for any NP problem turns out to be something like n^1000, which is technically P, but not practical in any way.

* The proof may not be constructive, so we may just know that P=NP but it won't help us actually create an algorithm in P (nitpick: technically if P=NP there's a construction to create an algorithm that solves any NP problem in P time, but it's extremely slow - for example it involves iterating over all possible programs).
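To put the n^1000 caveat in numbers: even at a trivially small input size, such a "polynomial" algorithm already needs more steps than there are atoms in the observable universe (roughly 10^80):

```python
# A hypothetical O(n^1000) algorithm, evaluated at input size n = 2.
n = 2
steps = n ** 1000

# 2^1000 is about 1.07e301 - vastly more than ~1e80 atoms in the
# observable universe, so "in P" does not mean "practically crackable".
print(steps > 10 ** 80)
```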


I think you read it backwards - that's a possible consequence of P==NP, not P!=NP.

Yes, I meant the equality.

We already operate on the assumption that P ≠ NP, so little would change if that were proved.


Isn’t it the opposite?

You've read the post, right? Especially this part:

> It is why I paid for your app...

this is about closed-source, paid software - no PRs possible there.


That's the result of Proposition 13, which holds assessed property values very low. Without it, the rising cost of houses would increase property tax payments accordingly, and financially, it would be better to sell rather than hold.

One solution is to double down on Prop 13 ideas and also add limits on how much a house sale can be taxed. Another is to slowly start dialing down Prop 13 and allow higher tax rate increases. Both of those have problems.
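The mechanics can be sketched in a few lines (the ~1% rate cap and 2%/year assessment-growth cap are Prop 13's actual limits; the purchase price and the market growth rate are made-up assumptions):

```python
# Prop 13 sketch: assessed value may rise at most 2%/yr from the
# purchase price, while market value can grow much faster.
rate = 0.01              # Prop 13 caps the base property-tax rate at ~1%
assessed = market = 300_000   # hypothetical purchase price
for year in range(20):
    assessed *= 1.02     # capped assessment growth
    market *= 1.07       # assumed market appreciation

# After 20 years the long-time owner's tax bill is far below what a
# new buyer would pay at market value - hence the incentive to hold.
print(round(assessed * rate), round(market * rate))
```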


neat!

> the machine could be calibrated to an accuracy of 2%.

I always wondered how precise those "physical computers" were - this one apparently had an error of 1/50, or about 6 bits of precision.
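The bits figure is just the base-2 log of the relative error:

```python
import math

# 2% calibration accuracy = 1 part in 50
bits = math.log2(50)
print(round(bits, 2))  # about 5.64, i.e. roughly 6 bits of precision
```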


I don't have full-text access, but from the photo alone, this is far from a complete wifi solution. If you look at a wifi receiver diagram [0], the photo seems to contain the "mixer" and "VCO" blocks, and maybe the LNA and filters. The rest of the "frequency synthesizer" is possible, but less likely, as it needs many more transistors.

But what's definitely missing are the "ADC" and "DSP" parts - you are not getting any usable bits out of that chip; the best you can get is raw analog I and Q signals. You still need a whole bunch of complex rad-hard logic to get usable data.

[0] https://www.researchgate.net/figure/Block-diagram-of-a-typic...
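To illustrate what "raw analog I and Q" leaves you with, here is a toy calculation (made-up sample values) of all you can recover from a single I/Q pair - everything beyond magnitude and phase requires the missing ADC/DSP chain:

```python
import math

# One hypothetical analog I/Q sample pair from the mixer/VCO front end.
i, q = 0.6, 0.8

amplitude = math.hypot(i, q)   # instantaneous signal magnitude
phase = math.atan2(q, i)       # instantaneous carrier phase

# Turning streams of (i, q) pairs into actual bits - OFDM demodulation,
# error correction, framing - is the job of the missing DSP block.
print(amplitude, phase)
```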


wrong link? that's AWS front page, and it has no references to space for me

> Unless this are enterprise disks with capacitors anything can happen when it suddenly looses power. Not the FSes fault.

Most filesystems just get a few files/directories damaged though. ZFS is famous for handling totally crazy things like broken hardware which damages data in-transit. ext4 has no checksum, but at least fsck will drop things into lost+found directory.

The "making all data inaccessible" part is pretty unique to btrfs, and lets not pretend nothing can be done about this.


What kind of "education overhaul" do you have in mind? Some things can be easily verified in class (run a mile), but some require effort (written exams in class / at a testing center), and some require too much effort to be practical (multi-day research or programming projects).

Unfortunately, at the high school level the materials are not that complex, and there are a lot of ways to cheat: answer keys for textbooks, graphing calculators (or CAS systems), reports copied wholesale from websites. AI just made all of this significantly worse.


That's an answer I don't have, and am not qualified to give. I'd defer the decision to teachers and those who already work well with children; I just don't think the current iteration works.

If I had to guess, it would look something like software that confines the student to it and provides interactive lessons and exams... but I'm a computer guy, so my answer will always be "use a computer".


"confines the student to the software and provides interactive lessons and example" - this already exists. It is also useless without continuous supervision, as students will simply take a 2nd device (cell phone or tablet), start LLM app on it, then point to locked-down device's screen and ask to solve the problem. Yes, it slows down the process a bit since the students have to actually re-type the LLM answers instead of copy-pasting them, but it does not eliminate the problem.

"That's an answer I don't have .. I'd defer the decision to teachers" - you are really sounding right now like someone who comes to a town's discussion of whether to get more solar panels, and starts saying how nice it would be if the fusion were solved, and we all had an near-infinite source of cheap and clean energy. Yes, it would be nice, but unless you have a good idea on how to achieve this, please don't distract people from the real problems they have.

The AI-in-education situation is the same: there is a crisis right now, and it seems the only way out is to lean heavily on proctored exams - which students hate, and which are more expensive for schools too. Saying "there should be a better way, I have no idea what that better way is, but meanwhile what you are doing is bad" really does not help much.


1. Yes, and elsewhere in the thread I am suggesting proctored exams as well. Agreed.

2. I believe there is value in identifying issues with the current implementation, as that's required to fix them in the next one. This isn't a project I'm working on, related to my career path, or anything I'm passionate about. I am simply stating that I find the current implementation flawed, and I believe the flaw stems from the mindset of the original comment's "I wouldn't like cheaters to compete with honest students on the job market." I understand there is a difference between being resourceful and cheating, and using LLMs to write essays is clearly cheating, but, as someone who is not an educator and does not have children, I assume it is important to instill a sense of resourcefulness as well. If the entire purpose of education has become the job market, and the job market rewards resourceful people....

3. Seems to be answered in 2.

