Hacker News | sigmar's comments

>agent now proactively detects when your app needs a database or login. After you approve a Firebase integration, it provisions Cloud Firestore for databases and Firebase Authentication for a secure sign-in with Google... securely integrate services like databases, payment processors or Google services like Maps. The agent detects when a key is required and safely stores it in the new Secrets Manager located in the Settings tab.

Has some really neat integrations. Really strikes me as a huge contrast with Apple, which yesterday seemed to oppose vibe coding (by preventing updates to Replit and Vibecode).


Configuring an application environment is a huge obstacle. The number of people who can think logically and break down a business problem into pieces is 10x the number of people who can recite the exact right incantations to get a working cloud setup.

You in the 90s: "Leaded fuel isn't illegal guys, stop your campaigning, let's keep huffing it"

How about coming up with an actual defense of social media rather than an ad hominem about "neurotics"?


Consuming social media doesn't have an inescapable negative impact on other people, unlike burning leaded fuel. In the same way that eating junk food doesn't. Should we ban junk food? What else do you want to ban from others just because it has a risk profile you personally don't feel comfortable with?


> Consuming social media doesn't have an inescapable negative impact on other people

You don't think large portions of an entire generation getting cooked by social media has negative externalities that impact society as a whole?


I don't think anybody has the moral authority to regulate such second-order effects.

Should unhealthy food be banned because of the second-order effects of obesity? What about mandatory church / religious service? After all, I judge that atheism has negative second-order effects on the world. Where would I get this moral authority from?


For fuzzy second-order effects you have laws with fuzzy second-order impact.

You increase disclosure norms, you increase monitoring and you ensure marketing and packaging norms that disclose the potential risks.

You aren’t allowed to put up booze and cigarette stores near schools. These are not new problems that humanity has never encountered before.


> You aren’t allowed to put up booze and cigarette stores near schools.

Huh? Where? In many countries grocery and convenience stores sell both. When I was in school I could have gone across the street to get both. Everywhere I've travelled it's been even more accessible. The only place I've seen these restrictions are in very religious places, which are not analogous to morality in any way.

Let's play a little thought experiment: Is it okay for me and my friend to send each other messages over the internet? Can we send images and videos? What about a group chat with all of our friends? What if our neighbourhood joins in? What if our city joins in? What if our country joins in?

Can you identify the precise step in which this becomes unallowable? Can you articulate a logical reason why it's unallowable, but the previous steps are fine?

Can you do this without it becoming a subjective question about your personal moral values?

This is the problem with laws and mandates. They can't just be based on your own subjective feelings. And as humans, we have very different thoughts and feelings on what is good and bad, what should be allowed and disallowed. Furthermore, many things are perfectly legal despite causing harm. If I reject someone's advances and they suffer negative mental consequences, have I violated their rights? They've suffered harm after all. To whom is that obligation owed?

There can be claimed "fuzzy second order effects" to every single human action. Authoritarians believe they are smarter than everyone else and have the right to enforce their subjective and often incorrect opinions on everyone else. In another country, on another topic, this would be about something else - maybe religion. This does not form a solid legal basis for anything.


I wonder where folks like this came from, and at what point did people who associate themselves with hacker culture decide that censorship is great.

The OG hackers thought of censorship as network damage that needed to be routed around.

People who support censorship always think of themselves as smarter than the rest. Dunning-Kruger however would suggest something different.


I posted above that social media related issues are a problem, and then a bunch of posts accused me of wanting to make it illegal. I never suggested that and I actually don't support censorship, I just wish some people I know didn't spend so much of their time bummed out about social media.



"Defense Department intended to “refocus” the news organization... it “should” republish content created by the Defense Department public affairs offices with a label describing its origin"

Article makes it clear that they're banning the publication of wire services with the goal to make this publication more like a DoD PR team and less like a news source.


Wait, they can't even internally remember that they're the DoW (Department of War) and not DoD?


I suspect that was deliberate. DoW is the preferred nomenclature but DoD is still technically correct.

The article is phrased in a way to imply that the author would rather the publication maintain independence. It is probably the last time she will be permitted to say "department of defense".


They're not. That's executive fiat. The actual name of the department changes if Congress says it does.


The Department of War is an “alternate title”. Department of Defense continues to be correct


> The Department of War is an “alternate title”.

Like "alternative facts"?


You're probably right. I've been thinking about why Anthropic's revenue keeps soaring. I think in terms of "new users trying the product" we're definitely somewhere in the slowing part of the S-curve (at least in the US), but there are other growth contributors. Two big ones are people finding new use-cases and people figuring out how to scale up current use-cases to use more tokens. Perhaps little temporary-usage-boosts like this give people permission to attempt new use-cases or more scale and realize they could use a higher tiered plan.


>Our proprietary AI robots independently recreate any open source project from scratch.

The fact that this is satire aside, why would a company like this limit the methodology to open source projects? They could make a "dirty room" AI that uses computer-use models, plays with an app, observes how it looks from the outside (UI) and inside (with debug tools), creates a spec sheet of how the app functions, and then sends those specs to the "clean room" AI.


> observes how it looks from the outside (UI) and inside (with debug tools), creates a spec sheet of how the app functions, and then sends those specs to the "clean room" AI.

and tbh, i cannot see any issues if this is how it is done - you just have to prove that the clean room ai has never been exposed to the source code of the app you're trying to clone.


>This means the step function has more predictive power (“fits better”) than the linear slope. For fun, we can also fit a function that is completely constant across the entire timespan. That happens to get the best Brier score.

I mean, sure. But it's obvious in that graph that the single OpenAI model is dragging down the right side. Wouldn't it be better to just stick to analyzing models from only one lab, so that this shows change over time rather than differences between models?
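For anyone unfamiliar with the metric being discussed: the Brier score is just the mean squared error between probabilistic forecasts and 0/1 outcomes, so comparing a constant fit against a step fit is easy to do by hand. A minimal sketch (the outcomes, base rate, and breakpoint below are hypothetical, not data from the post):

```python
def brier(preds, outcomes):
    """Mean squared error between probabilistic forecasts and binary outcomes."""
    return sum((p - o) ** 2 for p, o in zip(preds, outcomes)) / len(outcomes)

# Hypothetical binary outcomes, ordered by model release date.
outcomes = [0, 0, 1, 0, 1, 1, 1, 0]

# Constant fit: always predict the overall base rate.
base_rate = sum(outcomes) / len(outcomes)
constant = [base_rate] * len(outcomes)

# Step fit: one probability before a chosen breakpoint, another after.
step = [0.25] * 4 + [0.75] * 4

print(brier(constant, outcomes))  # 0.25
print(brier(step, outcomes))      # 0.1875 (lower = better fit)
```

With this made-up data the step function wins; whether it does on real data depends entirely on which models land on which side of the breakpoint, which is the parent comment's point about one lab's model dragging the fit.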


Will using a voice-to-text app to create my comment get me banned? Especially if it creates a transcription mistake that might be characteristic of an LLM


I wouldn't expect voice-to-text apps to produce anything that looks "Signature LLM" since it's still your words, your grammar, etc.. The occasional transcription mistake is unlikely to be an issue either, given the prevalence of humans here who use em-dashes, speak ESL, etc..


I've been using a voice-to-text app on android that replaces my keyboard. I love it because on mobile I can speak faster than I type, but it does produce perfectly written text with no grammar mistakes and better flow and structure. So it doesn't write my speech 1:1. It has made writing on my phone much more fun and increased my productivity and decreased my threshold for commenting on forums. But now I guess I won't be using it on HN in the future...

(Disclaimer: I did not use it for this comment)


I've got no idea who Codewall is. Is there acknowledgment from McKinsey that they actually patched the issue referenced? I don't see any reference to "codewall ai" in any news article before yesterday, and there are no names on the site.

https://www.google.com/search?q=codewall+ai


Yeah, can't find much information either. I would like to see at least some proof, either via McKinsey or from the security team.


It is weird, isn't it? The Register article implies that it's acknowledged by McKinsey- https://www.theregister.com/2026/03/09/mckinsey_ai_chatbot_h...

Edit: Apparently, this is the CEO https://github.com/eth0izzle


>A McKinsey spokesperson told The Register that it fixed all of the issues identified by CodeWall within hours of learning about the problems.

Ah. Thanks for the link. I'm suspicious of everything posted to a blog without proof these days.


We’re pretty new! :) They didn’t want to provide comment on our post but they did offer comment via The Register.


There's a responsible disclosure timeline at the bottom indicating they'd all been fixed.


I think the point is that we don't have evidence that this actually happened from anyone other than Codewall.


If it's true that there are 58k users in the dump, that would mean former employees are in the dump.

I assume that means McKinsey would need to disclose it, or at least alert the former employees of the breach?


It may not have been your intent, but this comment seems to downplay the crime here. It's a crime to take the data even if he wasn't shopping it around as alleged. And the fact that he was 'young and stupid' makes the circumstances of how this happened much more important for an investigation by the IG (i.e., why was an immature person given so much power?)


I think it’s a great reaction to news stories to imagine how you could have made the same bad decisions. Furthermore, this public confession of being able to imagine making bad decisions might encourage a similarly minded 20-something to wonder why an older version of themself is so afraid of even having such a dataset. It might even prompt someone to destroy some long-forgotten cache of data they exfiltrated a long time ago.

I don’t think there’s a risk that it will influence a rare person in power to enforce the rules to go lighter. I just think it encourages people to be less reckless with hoarding data who might otherwise put themselves in danger.


Over and above the fact that everyone should already know that the SSN database is extremely sensitive, DOGE had to strong-arm people out of the way to gain access to it in the first place. Even a fresh twenty-something should have known better than to download the entire thing onto a flash drive and carry it around, let alone take it home with them, and especially not to share with a future employer.

The idea that this could be done accidentally and innocently strains credulity. It's so far out of the ordinary that I don't think Hanlon's Razor can be applied in good faith.


yeah. ignorantia juris non excusat applies to both the speed limit and passive data theft

