I'm not sure what features I'm supposed to notice that are better, but having built-in API docs and source code browsing is nice. (Though slightly laggy.)
Nit: there are distracting animations, such as on the weekly download graph.
Haven't watched the video, but the end of exponential growth isn't the end of growth. It means the percentage growth per year decreases. The Internet also went through an exponential growth phase at the beginning.
You're describing a standard S-curve (logistic growth), which is definitely what happens to parameter counts or user adoption (like The Internet). But Amodei is applying this to scientific discovery itself.
He’s effectively saying the "S-curve of Science" flatlines because we figure out everything that matters (curing aging, mental health, etc.). My whole point was that science doesn't have a top to the S-curve - it’s an infinite ladder (as per Deutsch).
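To make the S-curve point concrete, here is a toy logistic curve (illustrative parameters, not a model of any real quantity) showing how year-over-year percentage growth keeps shrinking even while the absolute value still rises:

```python
import math

def logistic(t, K=1.0, r=1.0, t0=0.0):
    """Logistic (S-curve) growth: roughly exponential early on,
    then flattening toward the carrying capacity K."""
    return K / (1.0 + math.exp(-r * (t - t0)))

# Sample the curve and compute period-over-period percentage growth.
vals = [logistic(t) for t in range(0, 6)]
pct_growth = [(b - a) / a * 100 for a, b in zip(vals, vals[1:])]
# pct_growth declines monotonically as the curve approaches K.
```

The open question in the thread is whether "science as a whole" has a K at all, or whether each saturating curve just hands off to the next one.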
We're on the verge of getting to the Moon and Mars in more than rare tourist numbers, and with notable payloads. Add to that advances in robotics, which will change things here on Earth as well as in space. The growth is only starting.
>The Internet also went through an exponential growth phase at the beginning.
If we consider the Internet in general as all connected devices, I think the exponential growth is still going on. Take ARM CPU shipments, for example:
2002: Passed 1 billion cumulative chips shipped.
2011: Surpassed 1 billion units shipped in a single year.
2015: Running at ~12 billion units per year.
2020 (Q4): Record 6.7 billion chips shipped in one quarter (842 chips per second).
2020: Total cumulative shipments crossed 150 billion.
2024 (FY): Nearly 29 billion ARM chips shipped in 12 months.
2025: Total cumulative shipments exceeded 250 billion.
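A rough sanity check on the cumulative milestones quoted above (treating the figures as approximate): the compound annual growth rate of the cumulative total can be computed directly, and it is itself slowing, which is what you'd expect partway up an S-curve even while absolute volumes keep climbing.

```python
# Cumulative ARM shipment milestones as quoted above (approximate).
milestones = {2002: 1e9, 2020: 150e9, 2025: 250e9}

def cagr(v0, v1, years):
    """Compound annual growth rate between two cumulative totals."""
    return (v1 / v0) ** (1 / years) - 1

early = cagr(milestones[2002], milestones[2020], 2020 - 2002)  # ~32%/yr
late = cagr(milestones[2020], milestones[2025], 2025 - 2020)   # ~11%/yr
```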
Interesting approach. It requires a Near AI account. Supposedly that's a more private way to do inference, but at the same time they do offer Claude Opus 4.6 (among others), so I wonder what privacy guarantees they can actually offer and whether it depends on Anthropic?
Afaik, Anthropic isn't giving model weights to pretty much any provider, so any inference of Opus is certainly not private: it goes through Anthropic, Bedrock, or Vertex.
Of the three, Bedrock is probably the best for trust, but still not private by any means.
They do verifiable inference in TEEs for the open-source models. The Anthropic ones, I think, they basically proxy for you (also via a trusted TEE) so that requests can't be tied to you. A VPN for LLM inference, so to speak.
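The "VPN for LLM inference" idea reduces, at its simplest, to a proxy that strips anything tying a request to a specific user before forwarding it upstream. A minimal sketch (hypothetical header set; a real deployment would run this inside a TEE with remote attestation, which this does not show):

```python
# Headers that could identify the end user to the upstream provider.
# This set is illustrative, not exhaustive.
IDENTIFYING = {"authorization", "cookie", "x-forwarded-for", "user-agent"}

def scrub(headers: dict) -> dict:
    """Return a copy of the request headers with user-identifying
    fields removed; the proxy then attaches its own credentials."""
    return {k: v for k, v in headers.items() if k.lower() not in IDENTIFYING}
```

The upstream provider then sees only the proxy's identity and an anonymized payload; the privacy guarantee hinges entirely on trusting (or attesting) the proxy itself.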
I can't help but feel there is a funny pattern going on.
A lot of companies want to embrace AI, agents, etc. so they make their platforms easier to use by AI, implementing whatever the latest craze is.
I imagine we're going to see a lot more APIs open up (agentic finances?), a lot of granular access controls, etc.
Where was all of this when regular users had been asking for it for _years_?
Empowering users in general is a good thing, so, in a way, it's a good thing that OpenClaw and things of this nature are exposing all the issues with access controls and API interactions that many of our services have.
Now we just need a reason for AI agents to need "dark mode" on websites...
I was under the impression that they do obey robots.txt now? There are clearly a lot of dumb agents that don’t, but didn’t think it was the major AI labs.
Obeying robots.txt (now) is still better than not obeying it, regardless of what they did before.
The alternative is to say that bugs shouldn’t be fixed because it’s a ladder pull or something. But that’s crazy. What’s the point of complaining if not to get people to fix things?
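For reference, opting crawlers in or out via robots.txt looks like this (the user-agent strings are illustrative; each vendor publishes its own, which should be checked before relying on them):

```
# Block one AI crawler entirely
User-agent: GPTBot
Disallow: /

# Allow another everywhere except a private area
User-agent: ClaudeBot
Disallow: /private/

# Everyone else: unrestricted
User-agent: *
Allow: /
```

Of course, robots.txt is purely advisory, which is exactly why compliance by the major labs matters more than the file itself.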
I don't see why it wouldn't - but I'm not familiar with setup / integration on other platforms. Would love to hear more about your stack and see if we can't find a way for you to try it out
The server exposes a straightforward API, so wrapping it in MCP should be easy. The agent/skill interacts with the server using the CLI implementation (part of the skill definition) at https://github.com/JaredStewart/coderlm/blob/main/plugin/ski...
When I use ChatGPT to do research, I expect it to justify itself by quoting from web pages and linking to those web pages. (I gave it explicit instructions to quote things, but unfortunately it will only do short quotes.)
This might be an extended web search, but it's still a web search. The documents need to exist. Maybe a lot of the surrounding boilerplate disappears, though?
Like many others, I can vouch for “AI is good at programming now,” at least for the generic web programming that I do. But that doesn’t imply that it generalizes to other fields, and this article, at least, doesn’t show that it does.
I would like to read more from people who have other jobs about what they see when they use AI. Did they see a similar change?
It translates PDFs for me and gives me a good enough text dump in the console to understand what I'm being told to do, if the PDF is simple enough (a letter, for example). It doesn't give me a structured English recreation of the PDF.
I'll give it credit that it's probably underpinning improved translation in e.g. Google Translate when I dump in a paragraph of English and then copy the Chinese into an email. But that's not really in the same ballpark.
The only other professional interaction I've had with it was when a colleague saw an industry-slang term and asked AI what it meant. The answer, predictably, was incredibly wrong, but to his completely naive eyes it seemed plausible enough to put in an email. The term related to a metallurgical phenomenon observed by a fault, and the AI found an unrelated industry widget that contained the same term and suggested the fault was due to the use of said widget.
I don’t even really see the telltale AI writing signs of people using it to summarise documents or whatnot. Nor could I think how I could take what I do and use it to do it faster or more efficiently. So I don’t even think it’s being used to ingest and summarise stuff either.