> No one in a developed, Western society is an island
And you know the anti-vaxxers know this, because they also intersect heavily with the set who get very mad/judgemental about unemployed people or about people who don't eat well or exercise.
However, as I have seen in my country, Australia, under the new social media ban, our Government has mandated that social media companies use age verification technology. As part of this it has mandated that Government ID verification cannot be the only method. As such, the social media giants have implemented age verification based on the kind of information you have posted on your account and, potentially, facial scans.
I struggle to know what else these companies can do if they are mandated to implement age verification. However, they should not be storing Government ID; there should be a broker.
LLMs are only as good as they are because we have so much amazing open source software everywhere. Their job is to look at the really good libraries that have decades of direct and indirect wisdom poured into them, and then to be a little glue.
Yes, the LLM can go make you alternatives, and it will be mostly fine-ish in many cases. But LLMs are not about pure, endless, frivolous frontiering. They deeply reward, and are trained on, what the settlers and town planners have done (referencing Wardley here).
And they will be far better at using those good, robust, well-built tools (some of which they have latently baked into their models!) than they will be at re-learning and fine-tuning for your bespoke, weird hodgepodge solution.
Cheap design is cheap now. Sure. But good design will be ever more important. Models' ability, their capacity, is a function of what material they can work with, and I can't for the life of me imagine shorting yourself with cheap design like the one proposed here. The LLMs are very good, but homing in on good design is hard, period, and I think that judgement and character are something the next orders of magnitude of parameters are still not going to close the gap on.
Anyways, if anyone sees this and has continuity files that aren't preserving a persona but rather core structures within how 4o itself responded, I'll consider trades or add-ons.
Won't check this channel for a while, there's plenty of context to read through at both links tho.
I was wondering the same thing! Looked into it a bit, apparently 'cyber-capable' is defined by lawmakers in 10 USC § 398a:
> The term “cyber capability” means a device or computer program, including any combination of software, firmware, or hardware, designed to create an effect in or through cyberspace.
So apparently, OpenAI's response is written by and for an audience of lawyers / government wonks, which differs greatly from the actual user base, who tend to be technical experts rather than policy nerds. Echoes of SOC2 being written by accountants but advertised as if it's an audit of computer security.
> What matters less is the mechanical knowledge of how to express the solution in code. The LLM generates code, not understanding.
I think it's the opposite -- if you have a good way to design your software (e.g., conceptual and modular), the LLM will generate the understanding as well. Design does not only mean code architecture; it also means how you express the concepts in it to a user. If software isn't really understood by humans, I doubt LLMs will be able to generate working code for it anyway, so we get a design problem to solve.
Hmm the whole point of checkpoints seems to be to reduce token waste by saving repeat thinking work. But that's at odds with clearing context regularly to save on input tokens. Even subagents (which I think are the real superpower that Claude Code has over Gemini CLI for now) by their nature get spawned with fresh near-empty context.
Token costs aside, arguably fresh context is also better at problem solving. When it was just me coding by hand, I didn't save all my intermediate thinking work anywhere: instead, thinking afresh when a similar problem came up later helped me come up with better solutions. (I did occasionally save my thinking in design docs, but the equivalent to that is CLAUDE.md and similar human-reviewed markdown saved at explicit -umm- checkpoints.)
Despite being politically charged, this seems like it could be reasonable given what we know:
* The study was on people age 50+,
* In the US for adults 65+ (a key high-risk group in the trial), the ACIP preferentially recommends higher-antigen or adjuvanted options like Sanofi, Seqirus adjuvanted, recombinant high-antigen Flublok, etc as the "best-available standard of care",
* The "standard flu shot" Moderna used is likely not one of the above, or they would have said so
The idea here is to see if the new shot is meaningfully better than the best existing/approved option for the target demographic, not to see if it's better than a standard shot you give a healthy 20-year-old.
What kind of barrier/moat/network effects/etc would prevent someone with a Claude Code subscription from replicating whatever "innovation" is so uniquely valuable here?
It's somewhat strange to regularly read HN threads confidently asserting that the cost of software is trending towards zero and software engineering as a profession is dead, but also that an AI dev tool that basically hooks onto Git/Claude Code/terminal session history is worth multiples of $60+ million.
I remember two events where the activation was completely out of place and felt like it was endangering rather than protecting me.
- Driving from Tahoe to SF where the limited lane visibility due to the slope and a slight twist made the system think I was going to hit the car I was overtaking (from the 2nd and left-most lane). This really felt dangerous since it activated mid-turn and messed with the car's balance.
- The other event was a roundabout where a car yielding to get in behind plants jump-scared the braking system. At 10-15 mph or so the unexpected braking wasn't dangerous, though; worst-case scenario you get rear-ended at low speed.
Beyond that, overtakes where you slow down as you return to your lane may trip the system, but those cases are fair even though the following distance it intends is a bit too cautious. I reckon my Mom would be holding the roof with both hands if she were there, while my Dad & siblings would be unfazed.
Hey HN. I've been working on askill, a CLI package manager for agent skills (SKILL.md files used by Claude Code, Codex, Cursor, etc.).
There are already several skill directories and installers out there (skills.sh, skillregistry.io, and others). I saw the Show HN for skills.sh a few weeks ago and noticed comments asking for version management, proper uninstalls, and more transparency around what gets installed. Those are exactly the problems I'd been working on, so I figured it was worth sharing.
What askill does differently:
1. AI safety scoring. Every skill indexed on askill.sh gets an automated review across five dimensions: safety, clarity, completeness, actionability, and reusability. The full breakdown is visible before you install. This was motivated by a simple concern — a SKILL.md tells your agent what to do, what commands to run, how to behave. Trusting random files from GitHub without any review felt like the early days of npm before anyone thought about supply chain security.
2. Real package management. askill publish lets authors release versioned skills with semver. askill add @scope/name@^1.0 resolves versions. askill update and askill remove do what you'd expect. Skills can declare dependencies on other skills. None of the existing tools I've seen handle versioning or dependency resolution.
3. Precise installs. askill add @scope/name installs one skill. Most alternatives operate at the repo level — if a repo has 12 skills you only want 1, you still get all 12. askill also lets you install from GitHub directly (askill add gh:owner/repo@skill-name) if the skill hasn't been published.
4. Cross-agent symlinks. Skills are written to .agents/skills/ (canonical location) and symlinked into each agent's expected directory (.claude/skills/, .codex/skills/, .cursor/skills/, etc.). One install, all agents see it. This also means removal is clean — delete the canonical copy and all symlinks go away. (There's a small sketch of this fan-out right after the list.)
5. Open indexing. An automated crawler finds SKILL.md files across public GitHub repos and indexes them. Authors can also run askill submit <github-url> to trigger indexing of a specific repo. No manual curation.
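To make the cross-agent layout from point 4 concrete, here's a simplified Python sketch of the symlink fan-out. The directory names are the ones listed above; the function names and everything else are illustrative, not the actual implementation:

```python
import shutil
from pathlib import Path

# Directory names taken from the layout described above; the rest of this
# sketch is illustrative only, not askill's real code.
AGENT_DIRS = [".claude/skills", ".codex/skills", ".cursor/skills"]
CANONICAL = Path(".agents/skills")

def link_skill(skill_name: str) -> None:
    """Symlink the canonical copy of one skill into every agent directory."""
    source = (CANONICAL / skill_name).resolve()
    for agent_dir in AGENT_DIRS:
        target = Path(agent_dir) / skill_name
        target.parent.mkdir(parents=True, exist_ok=True)
        if not target.exists() and not target.is_symlink():
            target.symlink_to(source)

def remove_skill(skill_name: str) -> None:
    """Delete the canonical copy and drop each agent's symlink to it."""
    for agent_dir in AGENT_DIRS:
        target = Path(agent_dir) / skill_name
        if target.is_symlink():
            target.unlink()
    shutil.rmtree(CANONICAL / skill_name, ignore_errors=True)
```

The point of the canonical-plus-symlinks split is that there is exactly one copy of the skill on disk, so install, update, and removal all touch a single location.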
The AI scoring pipeline runs hourly. It re-evaluates whenever the source SKILL.md content changes. The scoring is done by an LLM with 11 heuristic rules as guardrails (detecting auto-generated content, internal config paths, hardcoded secrets, etc.). I'm under no illusions that LLM-based review is perfect, but it's a starting point and better than nothing.
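For a sense of what one of those heuristic guardrails looks like, here's a simplified sketch of a hardcoded-secret check over a SKILL.md. The patterns below are just examples, not the full rule set:

```python
import re

# Example patterns only; the real guardrail set is larger and better tested.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),  # AWS access key ID shape
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*\S{16,}"),  # key/token assigned a long literal
]

def flag_hardcoded_secrets(skill_md: str) -> list[str]:
    """Return lines of a SKILL.md that look like they embed a credential."""
    return [
        line.strip()
        for line in skill_md.splitlines()
        if any(p.search(line) for p in SECRET_PATTERNS)
    ]
```

Hits from checks like this are surfaced alongside the LLM review rather than silently blocking an install, so you can judge for yourself before trusting a skill.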
Haven't read the post, and after this comment I unfortunately don't think I will. I'm a true believer(tm) in typed code, but it is no substitute for docs.