I am not saying this to be sarcastic - the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years, or Boris saying coding is solved and that 100% of his code is written by AI.
It's not good enough to just say that Oreo CEOs say we need more Oreos.
There's a real grey area where these tools are useful in some capacity, and in that confusion we're spending billions. Too many people are saying too many conflicting things, and chaos is never good for clear long-term growth.
Either that 20 years is completely inapplicable to AI, or we're in for a world of hurt. There's no in-between given the kinds of bets that have been made.
AI companies don’t have 20 years; they have at most 5 years in which to turn a profit.
They don’t have time to wait for all the companies to adopt AI tooling at their own pace.
So they lie and try to manufacture demand. Well, the demand is there, but they have to manufacture FOMO so that the demand materializes now and not in 10 or 20 years.
This outlook is as short-sighted as the 2000 fiber optic bust. Critics then thought overcapacity meant the end, yet that infrastructure eventually created the modern internet. Capital does not walk away from a fundamental shift just because of one market correction. While specific companies may fail, the long-term value of the technology ensures that investment will continue far beyond a five-year window.
The massive investment in power grids and data centers provides a permanent physical backbone that outlives any specific silicon generation. This infrastructure serves as a durable shell for the model design knowledge and chip architectural IP gained through each iteration. Capital is effectively funding a structural moat built on energy access and engineering mastery.
Seems like there’s a lot of resources being dumped into those data centers that will not be very useful. Saying it will all be worthwhile because we’ll have the buildings and the modest power grid updates (which are largely paid for by taxpayers, anyway) feels like saying a PS5 is a good long-term investment because the cords and box will still be good long after the PS5 has outlived its usefulness.
The "PS5" analogy fails to account for how "useless" hardware often triggers the next paradigm shift. For decades, traditionalists dismissed high-end GPUs as expensive toys for gamers, yet that specific architecture became the accidental engine of the AI revolution.
And you imagine these incredibly expensive-to-operate, environmentally damaging, highly specialized, years-outdated GPUs will trigger some sort of technological revolution that won’t be infinitely better served by the shiny new GPUs of the day that will not only be dramatically more powerful, but offer a ton more compute for the amount of electricity used?
The AI use of GPUs didn’t stem from a glut of outdated, discarded units with nearly no market value. All of those old discarded GPUs were, and still are, worthless digital refuse.
The closest analog I can think of to what you’re referring to is cluster computing with old commodity PCs that got companies like Google and Hotmail off the ground… for a few years until they could afford big boy servers, and now all of those, and most current PCs on the verge of obsolescence, are also worthless digital refuse.
The big difference is that Google et al chose those PC clusters because they were cheap, commodity pieces right off the bat, not because they were narrowly scoped specialty hardware pieces that collectively cost hundreds of billions of dollars.
Your supposition fails to account for our history with hardware in any reasonable way.
Focusing exclusively on the physical decay and replacement cycle of hardware is a classic case of tunnel vision. It ignores the fact that the semiconductor industry’s true value lies in the evolution of manufacturing processes and architectural design rather than the lifespan of a specific unit. While individual chips eventually become obsolete, the compounding breakthroughs in logic and efficiency are what actually drive the technological revolution you are discounting.
Tunnel vision is ignoring the astonishing amount of money and environmental resources our society is dumping into these very physical, very temporarily useful chips and their housing because… of what we learn by doing that. We should have dumped 1/100th of that money into research and we’d have been further along.
This isn’t a normal tech expenditure: the scale of it threatens the economy in a serious way if they get it wrong. That’s 401ks, IRAs, pension plans, houses foreclosed on, jobs lost, surgeries skipped… if we took a tiny fraction of this race to hypeland and put it towards childhood food insecurity, we could be living in a fundamentally different-looking society. The big takeaway from this whole ordeal has nothing to do with semiconductors; it is that rich guys playing with other people’s money, singularly focused on becoming king of the hill, are still terrible stewards of our financial system.
Dismissing massive capital expenditure as "hypeland" ignores the historical reality that speculative bubbles often build the physical foundation for the next century. The Panic of 1873 saw a catastrophic evaporation of debt-driven capital, yet the "worthless" railroads built during that frenzy remained in the ground. That redundant, overbuilt infrastructure became the literal backbone of American industrialization, providing the logistics required for a global economic shift that far outlasted the initial financial ruin.
Divorcing research from "learning by doing" is a recipe for a bureaucratic ivory tower. If you only funnel money into pure research without the messy, expensive, and often "wasteful" reality of large-scale deployment, you end up with an economy of academic metrics rather than industrial power.
The most damning evidence against the "research-only" model is the birth of the Transformer architecture. It did not emerge from an ivory tower funded by bureaucratic grants or academic peer-review cycles; it was forged in the fires of industrial practice.
History shows that a fixation on immediate social utility or "rational" cost analysis can be a strategic trap. During the same era, Qing Dynasty bureaucrats employed your exact logic, arguing that the astronomical costs of industrialization and rail were a waste of resources better spent elsewhere. By prioritizing short-term stability over "expensive" technological leaps, they missed the industrial window entirely. Two decades later, they faced an industrialized Japan in 1894 and suffered a total collapse. The "waste" of one generation is frequently the essential infrastructure of the next.
How much capital was wiped out for it to be cheap after the bust? Someone is going to eat the exuberance loss in the near term, even if there is long term value.
It's a "Motte and Bailey" system [0], where the extreme "AI will do everything for you" claim keeps getting thrown around to try to get investors to throw in cash, but then somehow it transmutes into "all technologies took time to mature stop being mean to me."
To be fair, it isn't necessarily the same people doing both at once. Sometimes there are two groups under the same general banner, where one makes the big claims, and another responds to perceived criticism of their lesser claim.
> the problem is that people from OpenAI/Anthropic are saying things like superintelligence in 3 years
An even bigger problem is that people listen to them even after they say rationally implausible things. When even Yann LeCun is throwing up his hands and saying "this approach won't work," it's pretty bad.
Researchers looked at GPT-3 in 2023 and saw “sparks of AGI”. The saying “feel the AGI” became widespread not long after, if I’m remembering right. We’ve been saying AGI is right around the corner for a while now. And of course, if you predict the end of the world every day, you’ll eventually be right. But for the moment, what we have is an exceptionally powerful coding assistant that can also speed up entry-level work in various other white collar industries. That is earth-shattering, paradigm-shifting. But given how competitive and expensive the AI game has become, that is not enough, so it needs to be “superintelligence” - and it’s just not.
Ah, that’s my mistake. Thank you. I saw 2023, I thought GPT-3. Even still, people talk about GPT-4 today like it was a quaint little demo. It was a magnificent achievement, it scared the pants off of a lot of people, and sparked a new round of “is AI conscious?” discourse.
What does that mean? By what metric do you measure "AGI", whatever that means? Industry definitions are incredibly vague, perhaps intentionally so, with no benchmarks to define how a model, harness, or other technology might achieve "AGI". They have no intelligence, and can't even reason that you need to take your car to the car wash to have it washed[0].
If somehow Claude became sentient that would be sci-fi. One day it’s wrangling CSS and Spring Boot Controllers and the next it’s telling you opinions it developed through its own experiences on programming languages. Not sure that’s on the near horizon, but it’s definitely impressive technology.
> Superintelligence in 3 years doesn't really sound that crazy given how quickly I can write code with Claude. I mean we're 90%-95% of the way there already.
Yeah? So you must have a clear idea of where "there" is, and of the route from here to there?
Forgive me my skepticism, but I don't believe you. I don't believe that you actually know.
Not to mention the investment is on another level. We've got companies with valuations in the hundred-billions talking about raising trillions to buy all of the computers in the world, before establishing whether they can even turn a profit, nevermind upend the economy.
I wonder how many actually beneficial projects will not be financed by investors too scared to try anything risky after the AI bubble crashes and burns to the ground. :P
The investments are being made by massively profitable companies (our biggest and brightest ones, the ones that have been carrying the economy for quite some time now, even before "AI"). Even just in recent history we have seen companies making large investments and being very unprofitable until they weren't anymore (e.g. Uber). And it is always the same story: everyone is up in arms, "this is not sustainable," etc...
Whether or not these companies can turn a profit - time will tell. But I am betting that our massively profitable companies (which are the biggest spenders, of course) perhaps know what they are doing, and just maybe they should get the benefit of the doubt until they are proven wrong. If I had to make a wager, and on one side I have Google, Microsoft, Amazon, Meta... and on the other side I have a bunch of AI bubble people with plenty of time to predict a "crash", I'd put my money on the former...
The fact that the companies that have already shoveled billions of dollars at this are continuing to do so is equally consistent with AI improvement and adoption stalling as it is with infinite improvement and widespread adoption. Yes, it’s irrational to chase sunk costs - but unlike the VC funds that backed Uber and its competition, many of the players in this game are exposed to public markets, which are not known for being rigorously logical. If you pull back on your AI investments, the markets will punish you - probably vigorously - and if your only concern is the value of your stock options, it is entirely rational for you to act in a way that keeps the market from punishing their value. We’re 3 years in without showing any ROI, and who’s to say we can’t get 3 or 5 or 10 more? Plenty of time to cash out before the eventual reckoning.
There is definitely growing hesitancy in the market, but pulling back at this juncture could set off a full-on race to the bottom, because it would disprove the original point (“all the smart tech companies are all-in, so there must be profit at the end of the tunnel”). Right now, they can point to the skeptics as bears or doomers or whatever. The first big tech company to drop its capex will pierce the aura of invincibility and make the moderate retreat from the exuberant highs of late 2025 look like a blip on the radar.
I'd maybe think twice about assuming Meta knows what they're doing after they just pissed $75 billion up the wall on a Metaverse dream that went nowhere.
Pissed it away, but Zuckerberg is richer than ever and so are his stockholders it seems. I can’t imagine doing it, but also can’t imagine running Meta.
I am certainly not saying that this can’t all come crashing down for the big boys; surely it can. I’m just putting a little more weight on them than on people on the internet and doomsayers hunting for clicks, is all.
I just keep thinking about SGI and, to an extent, Sun. A couple of missteps and a couple of innovations in the commodity direction, and it will start having a negative effect.
For the U.S. economy, productivity is defined as (output measured in $)/(input measured in $). Typically, new technologies (computers, internet, AI) reduce input costs, and due to competition in the market, companies are required to reduce their prices, thereby having an overall deflationary effect on the economy. It's entirely possible that AI will have a small or no effect on productivity as measured above, but society will benefit by getting access to inexpensive products and services powered by inexpensive AI. Individual companies won't use AI to improve their productivity but will need to use AI just to stay competitive.
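To make that concrete with toy numbers (mine, purely illustrative, not from any real data): if AI cuts a firm's input costs by 20% but competition forces its prices, and therefore its measured output in dollars, down by roughly the same 20%, the measured productivity ratio barely moves even though customers are clearly better off.

    # Toy illustration with made-up numbers: productivity measured as ($ output) / ($ input).
    before_output, before_input = 100.0, 80.0   # revenue and input costs before AI
    print(before_output / before_input)          # 1.25

    # AI trims input costs ~20%, but competition pushes prices (and thus revenue) down ~20% too.
    after_output, after_input = 80.0, 64.0
    print(after_output / after_input)            # still 1.25: measured productivity is flat,
                                                 # yet customers now pay 20% less for the same output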
I think this paragraph from the wikipedia article captures it nicely:
>Many observers disagree that any meaningful "productivity paradox" exists and others, while acknowledging the disconnect between IT capacity and spending, view it less as a paradox than a series of unwarranted assumptions about the impact of technology on productivity. In the latter view, this disconnect is emblematic of our need to understand and do a better job of deploying the technology that becomes available to us rather than an arcane paradox that by its nature is difficult to unravel.
Yep, and the same with the internet. During the 1990s and 2000s, people kept wondering why the internet wasn't showing up in productivity numbers. Many asked if the internet was therefore just a fad or bubble. Same as some now do with AI.
It takes time for technology to show measurable impact in enormous economies. No reason why AI will be any different.
Sure, but you have to consider Carl Sagan's point, "The fact that some geniuses were laughed at does not imply that all who are laughed at are geniuses. They laughed at Columbus, they laughed at Fulton, they laughed at the Wright brothers. But they also laughed at Bozo the Clown." Some truly useful technologies start out slow and the question is asked if they are fads or bubbles even though they end up having huge impact. But plenty of things that at first appeared to be fads or bubbles truly were fads or bubbles.
Personally I think AI is unlikely to go the way of NFTs and it shows actual promise. What I'm much less convinced of is that it will prove valuable in a way that's even remotely within the same order of magnitude as the investments being pumped into it. The Internet didn't begin as a massive black hole sucking all the light out of the room for anything else before it really started showing commensurate ROI.
> What I'm much less convinced of is that it will prove valuable in a way that's even remotely within the same order of magnitude as the investments being pumped into it.
I think there are two layers of uncertainty here. One is, as you say, if the value is worth the investment. The other and possibly bigger issue is who is going to capture the value and how.
Assuming AI turns out to be wildly valuable, I'm not at all convinced that at the end of this money spending race that the companies pouring many billions of dollars into commercial LLMs are going to end up notably ahead of open models that are running the race on the cheap by drafting behind the "frontier" models.
For now the frontier models can stay ahead by burning heaps of money but if/when progress slows toward a limit whatever lead they have is going to quickly evaporate.
At some point I suspect some ugly legal battles as some attempt to construct some sort of moat that doesn't automatically drain after a few months of slowed progress. Google's recent complaining about people distilling Gemini could be an early signal of this.
I have no idea how any of that would shake out legally, but I have a hard time sympathizing with commercial LLM providers (who slurped up most existing human knowledge without permission) if/when they start to get upset about people ripping them off.
All those racks of Nvidia machines might not pay off for the companies buying them, but I have a hard time believing that people are still questioning the utility of this stuff. In the last hour, Opus downloaded data for and implemented a couple of APIs that I would’ve otherwise paid hundreds a month for, end to end, from research all the way to testing its implementation. It’s so, incredibly, obviously useful.
That something is useful does not necessarily mean that it will be doable for companies to capture enough of the value to make up for the billions in investments they have made or will make in the coming years.
Right now the frontier AI companies are explicitly running a kind of chicken race - increasing their burn rates so much that it gets harder and harder to stay in, in the hope that they (and not their competitor) will be the one left standing. Especially OpenAI and Anthropic, but non-AI companies like Oracle have also joined.
If they keep it going, the likely outcome is that one of them folds - and the other(s) reap the rewards.
Utility (per cost) will go up the tougher the competition gets. The money captured by any single entity will possibly go down with increased competition.
It's only really useful if what you produce with those APIs is useful. It's easy to feel productive with AI tho, in a way that doesn't show up in economic statistics, hence the disconnect.
Well, it might actually decrease GDP in this case, because it's making it so I can just quickly make products that I would've otherwise purchased. But it's also made me more productive, and purchasing things isn't good for its own sake. So maybe measuring progress via GDP isn't ideal?
The thing I'm making with the APIs is very helpful to me, maybe it'll be helpful to others, who knows.
I mean, it's an apt comparison, given that the Venn diagram between the pro-NFT hucksters and the pro-AI crowd is a circle. When you listen to people who were so publicly and embarrassingly wrong about the future try to sell you on their next hustle, skepticism is the correct posture.
Columbus was not a genius. He was an idiot who believed the earth was smaller than the scientists of his day, and the scientists were right. Columbus became successful through pure luck, genocide and cruelty.
Also, there's no particular reason to group it in with those two. There are plenty of things that never showed up at all. It's just not a signal. It's kind of like "My kid is failing math, but he's just bored. Einstein failed a lot too, you know." Regardless of whether Einstein actually failed anything, there are a lot more non-Einsteins that have failed.
That seems a tad reductionist. Why not just say the iPhone was completely inconsequential because, after all, it's simply another "computer"? Why not go even further back and start the timer at the first physical implementation of a Turing machine?
The growth in tech in the years following the iPhone's release can be directly traced to its killer UX + the App Store.
I think it would have happened regardless - late Symbian from Nokia was pretty close and Maemo was already a thing with N900 not that far off in the future, not to mention Android.
We might possibly have been better off, actually, with the Apple walled garden abominations and user device lockdowns not being dragged into the mainstream.
As someone who worked for Nokia around the iPhone launch (on map search, not phones directly) - I also wanted to believe this at the time. But in retrospect, it feels like what actually mattered was that capacitive multi-touch screens were the only non-garbage interface, and only Apple bought FingerWorks...
Not clear that this is a helpful interpretation, other than "we're in the primordial ooze stage and the thing that matters will be something none of the current players have", but that's hard to take to the bank :-)
I think one of the interesting things here is that AI doesn't need to be able to build B2B SaaS to kill it. So much of the overhead of B2B SaaS companies is thinking about multitenancy, integrating with many auth providers and mapping those concepts to the program's user system, juggling 100 features when any given customer only needs 10 of them, creating PLG upsell flows to optimize conversions, instrumenting A/B tests, etc...
A given company or enterprise does not have to vibe code all of this; they just need to build the 10 features with the SLA they actually care about, driven directly off the systems they care about integrating with. And that new, tight piece of software ends up being much more fit for purpose, with full control of new features given to the company deploying it. While this was always the case (buy vs. build), AI changes the CapEx/OpEx for the build case.
And in many cases, it's 12 features, with 2 of the features not even existing in the big SaaS.
I'm pretty sure every developer who has dealt with janky workflows in products like Jira has planned out their own version that fits like a glove, "if only I had more time".
If companies wanted their own simple JIRA, they could have built it themselves before. I don't think making a kanban board was hard even before AI.
JIRA especially, and I'm always shaking my fist at Atlassian that simple APIs or workflows or reports aren't already included in the tool. I have to pay some other company $10/user/month to get this dumb report your tool should already be able to do?? Insane.
Jira has had free competitors that do at least 75% of what it does since its inception. You could find a dozen on GitHub that actually look good right now.
Until a given company decides they need access control for their contractors that's different from their employees, etc. etc. etc. - I've seen it all before with internal, often data-scientist-written applications that someone then tries to scale out, running into the security nightmare and the lack of internal support for developing and taking them forward. Usually these things fizzle out when someone leaves and it stops working.
Pretty much. My employer was looking to cut costs, and they were spending ~500k a year on a product that does little more than map Entra roles/groups to datasets and integrate with a federated query engine through a plugin. Took a couple of days to build a replacement. The product had only a few features we needed.
I've found in the embedded space that people sell lots and lots of products that do everything you could ever want, and the most efficient thing to do is not buy those things and instead find a way to do just the subset of things you care about with your own back-end systems. The upshot of that is that because you're in total control if something goes wrong you can fix it without getting 6 people on a phone call to point fingers at each other.
As a niche SaaS provider, I'm trying to avoid succumbing to the same fate. The product I built carefully over years would now be within the reach of a senior dev with a couple of focused weeks -- if they knew all the requirements. To avoid being overtaken, I'm working to increase my customers' requirements -- getting them hooked on new reports and features I never had time to build before LLMs could do it for me. This makes it less likely that a competitor will be able to afford to quickly replace me.
At the same time, I have no idea what the cost of LLMs usage will be in the future. So I'm working to ensure the architecture stays clean and maintainable for humans in case this kind of tooling becomes untenable.
That sounds like a good strategy to me. We have a couple other products we're looking to knock out to reduce costs, and the decision comes down to me and another colleague. The thing these businesses have in common - difficult to partner with, rough edges for the use cases we need, and no appetite on their end to shore them up. We're paying premium prices for a subpar experience. If instead they adopted your thinking, perhaps we would've looked for savings elsewhere.
There's no shortage of software engineers; if it were so easy for an organization to replace a SaaS with something built in-house, they'd be doing it all the time. In my experience in enterprise consulting, implementing a well-defined requirement is the easiest part. Getting everyone to agree on the requirement, getting it defined, and stopping it from changing after every demo is the hard part.
Regarding the meta experiment of using LLMs to transpile to a different language, how did you feel about the outcome / process, and would you do the same process again in the future?
I've had some moments recently with my own projects, as I worked through some bottlenecks, where I took a whole section of a project and said "rewrite in rust" to Claude and had massive speedups with a zero-shot rewrite, most recently some video recovery programs, but I then had an output product I wouldn't feel comfortable vouching for outside of my homelab setup.
It depends on the situation. In this case the agent worked only with the reference code provided by Flux's Black Forest Labs, which is basically just the pipeline implemented as a showcase. The fundamental requirement for this process to work is that the agent can get feedback to understand whether it is really making progress, and can debug failures against a reference implementation. But then all the code was implemented with many implementation hints about what I wanted to obtain, and without any reference to other minimal inference libraries or kernels. So I believe this is just the effect of putting together known facts about how Transformer inference works plus a higher-level idea of how the software should appear to the final user. Btw, today somebody took my HNSW implementation for vector sets and translated it to Swift (https://github.com/jkrukowski/swift-hnsw). I'm ok with that, nor do I care whether this result was obtained with AI or not. However it is nice that the target license is the same, given the implementation is so similar to the C one.
When I first saw the OP, panic started to set in that I am fucked and Chat-Completions/LLMs/AI/whatever-you wanna-call-it will soon be able to create anything and eat away at my earning potential. And I will spend my elder years living with roommates, with no wife or children because I will not be able to provide for them. But upon reading that you used a reference implementation, I've realized that you simply managed to leverage it as the universal translator apenwarr believes is the endgame for this new technology [1]. So, now I feel better. I can sleep soundly tonight knowing my livelihood is safe, because the details still matter.
This is pretty great. I’ve gone and hacked your GTE C inference project to Go purely for kicks, but this one I will look at for possible compiler optimizations and building a Mac CLI for scripting…
I have a set of prompts that are essentially “audit the current code changes for logic errors” (plus linting and testing, including double checking test conditions) and I run them using GPT-5.x-Codex on Claude generated code.
It’s surprising how much even Opus 4.5 still trips itself up with things like off-by-one or logic boundaries, so another model (preferably with a fresh session) can be a very effective peer reviewer.
So my checks are typically lint->test->other model->me, and relatively few things get to me in simple code. Contrived logic or maths, though, it needs to be all me.
Once we had a slowdown in our application that went unaddressed for a couple of months. Using git bisect to binary-search across a bunch of different commits and run a perf test on each made it much easier, since every historical commit was a known-good, working commit, and I found the offending commit fast.
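For anyone who hasn't automated this: `git bisect run` will drive the whole binary search for you if you give it a command that exits 0 for a "good" commit and non-zero for a "bad" one (125 means "skip this commit"). A rough sketch of the kind of wrapper that makes a perf regression bisectable; the build command, benchmark invocation, and 2-second threshold are placeholders, not from the comment above:

    #!/usr/bin/env python3
    # perf_check.py -- hypothetical wrapper for `git bisect run`; the commands and
    # threshold below are illustrative placeholders.
    #
    # Usage (in a shell):
    #   git bisect start
    #   git bisect bad HEAD              # the current, slow commit
    #   git bisect good some-fast-tag    # the last commit known to be fast
    #   git bisect run python3 perf_check.py
    import subprocess, sys, time

    THRESHOLD_SECONDS = 2.0  # anything slower than this counts as "bad"

    # Build the commit under test; exit 125 tells bisect to skip commits that don't build.
    if subprocess.run(["make", "build"], capture_output=True).returncode != 0:
        sys.exit(125)

    start = time.monotonic()
    result = subprocess.run(["./app", "--perf-scenario"], capture_output=True)
    elapsed = time.monotonic() - start

    if result.returncode != 0:
        sys.exit(125)  # the scenario itself crashed: skip rather than mislabel the commit
    sys.exit(0 if elapsed < THRESHOLD_SECONDS else 1)  # 0 = good, 1 = bad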
I’ve been using some time off to explore the space and related projects StereoCrafter and GeometryCrafter are fascinating. Applying this to video adds a temporal consistency angle that makes it way harder and compute intensive, but I’ve “spatialized” some old home videos from the Korean War and it works surprisingly well.
I never liked the global leaderboard since I was usually asleep when the puzzles were released. I likely never would have had a competitive time anyway.
I never had any hope or interest to compete in the leaderboard, but I found it fun to check it out, see times, time differences ("omg 1 min for part 1 and 6 for part 2"), lookup the names of the leaders to check if they have something public about their solutions, etc. One time I even ran into the name of an old friend so it was a good excuse to say hi.
I believe that Everybody Codes has a leaderboard where it starts counting from when you first open the puzzle. So if you're looking for coding puzzles with a leaderboard that one would be fair for you.
I've released a templatized local development setup using devcontainers that I've crafted over the last year, that I use on all projects now. This post explains the why and links to the project.
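For anyone who hasn't tried devcontainers: the heart of a setup like this is a devcontainer.json checked into the repo, which VS Code (or any devcontainer-aware tool) uses to build a reproducible environment. A minimal, generic sketch; the image, feature, ports, and commands here are placeholders, not the linked template's actual contents:

    {
      // .devcontainer/devcontainer.json -- placeholder values for illustration only
      "name": "project-dev",
      "image": "mcr.microsoft.com/devcontainers/python:3.12",
      "features": {
        "ghcr.io/devcontainers/features/docker-in-docker:2": {}
      },
      "forwardPorts": [8000],
      "postCreateCommand": "pip install -r requirements.txt",
      "customizations": {
        "vscode": {
          "extensions": ["ms-python.python"]
        }
      }
    }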
It's potentially the opposite. If you instrument a codebase with documentation and configuration for AI agents to work well in it, then in a year that agent will be able to do the same work just as well (or better, with model progress) when adding new features.
This assumes you're adding documentation, tests, instructions, and other scaffolding along the way, of course.
I wonder how soon (or if it's already happening) that AI coding tools will behave like early career developers who claim all the existing code written by others is crap and go on to convince management that a ground up rewrite is required.
(And now I'm wondering how soon the standard AI-first response to bug reports will be a complete rewrite by AI using the previous prompts plus the new bug report? Are people already working on CI/CD systems that replace the CI part with whole-project AI rewrites?)
As the cost of AI-generated code approaches zero (both in time and money), I see nothing wrong with letting the AI agent spin up a dev environment and take its best shot. If it can prove with rigorous testing that the new code works, is at least as reliable as the old code, and is written better, then it's a win/win. If not, delete that agent and move on.
On the other hand, if the agent is just as capable of fixing bugs in legacy code as rewriting it, and humans are no longer in the loop, who cares if it's legacy code?
But I can see it "working". At least for the values of "working" that would be "good enough" for a large portion of the production code I've written or overseen in my 30+ year career.
Some code pretty much outlasts all expectations because it just works. I had a Perl script I wrote in around 1995-1998 that ran from cron and sent email to my personal account. I quit that job, but the server running it got migrated to virtual machines and didn't stop sending me email until about 2017 - at least three sales or corporate takeovers later (It was _probably_ running on CentOS4 when I last touched it in around 2005, I'd love to know if it was just turned into a VM and running as part of critical infrastructure on CentOS4 12 years later).
But most code only lasts as long as the idea or the money or the people behind the idea last - all the website and differently skinned CRUD apps I built or managed rarely lasted 5 years without being either shut down or rewritten from the ground up by new developers or leadership in whatever the Resume Driven Development language or framework was at the time - toss out the Perl and rewrite it in Python, toss out the Python and rewrite it in Ruby On Rails, then decide we need Enterprise Java to post about on LinkedIn, then rewrite that in Nodejs, now toss out the Node and use Go or Rust. I'm reasonably sure this year's or perhaps next years LLM coding tools can do a better job of those rewrites than the people who actually did them...
Will the cost of AI-generated code approach zero? I thought the hardware and electricity needed to power and train the models and run inference were huge and only growing. Today the free and plus plans might be only $20/month; once moats are built, I assume prices will skyrocket by an order of magnitude or a few.
> Will the cost of AI-generated code approach zero?
Absolutely not.
In the short term it will, while OpenAI/Anthropic/Anysphere destroy software development as a career. But they're just running the Uber playbook - right now they're giving away VC money by funding the datacenters that're training and running the LLMs. As soon as they've put enough developers out of jobs and ensured there's no new pipeline of developers capable of writing code and building platforms without AI assistance, they will stop burning VC cash and start charging at rates that not only break even but also return the 100x the investors demand.
They're not directly solving the same problem. MCP is for exposing tools, such as reading files. A2A is for agents to talk to other agents and collaborate.
MCP servers can expose tools that are agents, but don't have to, and usually don't.
That being said, I can't say I've come across an actual implementation of A2A outside of press releases...
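To make the MCP half concrete, here is roughly what "exposing a tool" looks like: a minimal sketch assuming the official MCP Python SDK's FastMCP helper (the server name, tool, and path check are mine, not taken from any real server). The model on the other end simply sees a read_file tool it can call; there is no agent-to-agent conversation involved, which is the part A2A is meant to cover.

    # Hypothetical MCP server sketch, assuming the official Python SDK ("pip install mcp").
    from pathlib import Path
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("file-tools")  # name the client shows for this server

    @mcp.tool()
    def read_file(path: str) -> str:
        """Return the contents of a text file under the current working directory."""
        p = Path(path).resolve()
        if not p.is_relative_to(Path.cwd()):
            raise ValueError("refusing to read outside the working directory")
        return p.read_text()

    if __name__ == "__main__":
        mcp.run()  # serves the tool over stdio so an MCP client (e.g. a coding agent) can call it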
Perhaps naive to say, but I think there was the briefest moment, when your status updates started with "is", feeds were chronological, and photos and links weren't pushed over text, during which it was not an adversarial actor to one's wellbeing.
There was an even briefer moment where there was no such thing as status updates. You didn't have a "wall." The point wasn't to post about your own life. You could go leave public messages on other people's profiles. And you could poke them. And that was about it.
I remember complaining like hell when the wall came out, that it was the beginning of the end. But this was before publicly recording your own thoughts somewhere everyone could see was commonplace, so I did it by messaging my friends on AIM.
And then when the Feed came out? It was received as creepy and stalkerish. And there are now (young) adults born in the time since who can't even fathom a world without ubiquitous feeds in your pocket.
Unless I’m remembering wrong, posting a public message on someone else’s profile was posting on their wall. Or was it called something else before it was somebody’s wall?
It didn't have a name. It wasn't really a "feature." You just went and posted on their "page" I guess I would call it.
The change to being able to post things on your own page and expecting other people to come to your page and read them (because, again, no Feed) wasn't received well at first.
Keep in mind, smartphones didn't exist yet, and the first ones didn't have selfie cameras even once they did. And the cameras on flip phones were mostly garbage, so if you wanted to show a picture, you had to bring a camera with you, plug it in, and upload it. So at first the Wall basically replaced AIM away messages so you could tell your friends which library you were going to go study in and how long. And this didn't seem problematic, because you were probably only friends with people in your school (it was only open to university students, and not many schools at first), and nobody was mining your data, because there were no business or entity pages.
Yeah, that's about when it changed. The lack of a wall was a very early situation. I joined in 2004, back when it was only open to Ivy League and Boston-area schools.
It was still acceptable to write on someone else's wall when they came to be called that. You can still do that now, I think, but it's quite uncommon, and how it works is now complicated by settings.
Sure, you could. That wasn't the problem. The problem was that now you could post on your own.
That's what turned it from a method of reaching out and sending messages to specific people when you had something to say to them to a means of shouting into the void and expecting (or at least hoping) that someone, somewhere, would see it and care what you had to say. It went from something actively pro-social to something self-focused.
Blogs and other self-focused things already existed, but almost nobody used them for small updates throughout the day. Why do you think the early joke about Twitter was that it was just a bunch of self-absorbed people posting pictures of their lunch? Nobody knew what to do with a tool like that yet, but the creation of that kind of tool has led to an intensity of self-focus and obsession the world had never seen before.
I made the mistake of sending a Gen Z (adult) friend a poking finger emoji to try to remind him about something.
It wasn't the first time I've had a generational digital (ha) communication failure, but it was the first time I've had one because I'm old and out of touch with what things mean these days!
My hunch is that instant messaging is slowly taking over that space. If you actually want to connect with people you can without needing much of a platform.
I mean let's be clear on the history and not romanticize anything, Zuck created Facebook pretty much so he could spy on college girls. He denies this of course, but it all started with his Facemash site for ranking the girls, and then we get to the early Facebook era and there's his quote about the "4,000 dumbfucks trusting him with their photos" etc.
There is no benevolent original version of FB. It was a toy made by a college nerd who wanted to siphon data about chicks. It was more user friendly back then because he didn't have a monopoly yet. Now it has expanded to siphoning data from the entire human race and because they're powerful they can be bigger bullies about it. Zuck has kind of indirectly apologized for being a creeper during his college years. But the behavior of his company hasn't changed.
https://en.wikipedia.org/wiki/Productivity_paradox