I am a huge fan of Ben Lerner and have a copy of “Transcription” at home, waiting to be read. Autofiction is in many ways _the_ dominant mode of contemporary American literature, particularly among the literati of NYC/London (cf. Ocean Vuong, Tao Lin, Patricia Lockwood, etc., etc.). It can, for this reason, feel overdone and out of touch. But Lerner comes to the topic with such skill and intelligence that he really defines the genre for me, and in a positive light.
Agreed. Lerner has a unique way of digging into solipsism that's truly genuine and comical (whereas I feel Knausgård and most of the autofiction crowd's work comes across as glorified navel-gazing). It's no wonder he's been compared to Foster Wallace in that regard, who also wrote about deeply human struggles yet is still dismissed as a lit-bro pseudointellectual.
Yeah, good comparison. To Lerner's credit, he is always a poet at heart, which leads to concise, lyrical prose. DFW is voluminous in comparison; when it lands, it's great, but it can feel overinflated when it doesn't.
> People hate AI so much that they are prone to attribute to it everything that’s going wrong in their lives, regardless of the truth. That’s why they mix real arguments, like data theft, with fake ones, like the water stuff. Employers do it, too. Most layoffs are not caused by AI, but it’s the perfect excuse to do something that’s otherwise socially reprehensible.
Pertinent quote. A lot of AI discourse goes in circles trying to evaluate the truthiness of every individual complaint about AI. Obviously it's good to ensure claims are factual! But I believe this misses a broader point: people are resistant to AI, often out of fear, and are grasping for strategies to exert control. Or at least that's my read of it.
Refuting individual claims won't make a difference if the underlying anxieties aren't addressed (e.g., if I lose my job, will I be compensated? Will we protect ourselves against x-risk?).
I doubt there is a single profile advocating "don't accelerate blindly on adoption everywhere."
On my side, the biggest concern is the lack of transparency about ecological impact. This is not strictly related to LLMs, though: data centers are not new, and the concerns about people keeping a leverageable level of control through distributed power are not new either.
How so? Opus and Sonnet are frontier models which cannot easily be replicated. Compute has real physical constraints which require appropriate procurement at this scale. At least those two points seem like pretty strong moats against the majority of companies.
You don't need to "replicate" Opus and Sonnet, you just need to match their overall performance at lower cost. That's been absolutely doable so far, with a steadily decreasing lag time.
That's a fair response. But I'm not aware of any metrics supporting the point that the lag time is decreasing. The discourse I've seen has focused more on the ways Claude/OpenAI/Google have pulled away from the rest of the pack.
To be clear, I accept you might be right, but I think the crux is whether lag time is increasing, steady, or decreasing.
I like the principle, but I also find that we software folk commonly mistake the creation of a website for the goal, rather than the production of "content" (e.g., blog posts). I spent years trying to publish a blog and continually getting derailed building the ultimate static website. Recently I switched to a Substack hosted on my own subdomain, and now I'm finally writing. At least I still own the subdomain.
Hah, reminds me of trying to make a blog as a teenager, 20+ years ago. I built my own CMS in PHP with various features, but never got further than a few lines of text in the draft state. Most of the time was actually spent on getting rounded corners (border-radius didn't exist) with some kind of glass effect for a cool look (inspired by the then-unreleased Windows Longhorn). And I named my tool the generic name Publish-it, because "publi-shit" was funny.
I’m not convinced you read the post. I believe the author makes it quite explicit that their goal was to actually visit these cities, noting this is far from the most efficient bus route. Their itinerary also shows long stays in several spots.
I have had the same hypothesis about the recent operational success of US military interventions, but would agree with other comments here that this is more "vibes" than data. It's been reported that Maven (integrated with Claude) has been used extensively for Iran, but I haven't seen any hard evidence this is directly contributing to greater US military efficiency. I do buy the general thesis that AI would support operational excellence and solve attention problems across concurrent actions. It would be good to see more reporting or combat analysis that tries to measure the contributions of AI (e.g., how many more concurrent aerial sorties are taking place vs. equivalent interventions, how many more strikes are "successful" vs. in past interventions, etc.).
EDIT: I see this post has been flagged. Why? I understand it’s political, but it seems very much within the site’s ethos. I didn’t get the impression it was AI-written either.
I think you might be underrating the value of even that enabling work. Some parents would not have the financial resources to provide those learning materials. And some parents would take a normative stance on how an 8 year old ought to behave.
More importantly, it's not as though individuals like Clements or Erdős were corresponding with Terence directly to arrange a meeting. His parents clearly played an important role in facilitating and allowing these encounters. That deserves a lot of credit!
> I think you might be underrating the value of even that enabling work. Some parents would not have the financial resources to provide those learning materials. And some parents would take a normative stance on how an 8 year old ought to behave.
And most modern parents would swamp the child with a bunch of mind-rotting, autoplaying TV and video games. There's an account of Terence's time at university where he nearly failed his oral qualifying exams because he spent most of his time playing Civ rather than studying. Imagine the tragedy for the world if 5-year-old Terence had been handed an Xbox.
Yeah I agree, an 8 year old isn't setting up these meetings and correspondences.
I think beyond even having supportive parents, the most important part was that he had a parent with a degree in the field he happened to be a genius in. His mother knew exactly how to guide her child through the material, even if it was just to let him go off to a corner and read the books she guided him towards for 3–4 hours a day for fun. So many children have advanced proclivities for certain things and parents who just can't even see what it is their child is brilliant at.
Having someone that knows the path and can point it out to them is a beautiful thing to have as a child.
I think genes and innate characteristics are more important than knowledge and degrees. I happen to have two parents who are both in education: one teaches at a university and one in middle school. Because of this, I also know many friends whose parents are teachers.
This has no statistical significance, but nonetheless the sample size is greater than 5. None of us considers our parents to be great, or even good, teachers. All of the kids squandered some time once they were free from their parents, usually at university.
This experience affected me so much that I now have a bias that teachers should not teach their own kids.
One of my parents was also a teacher, and other than grading their students' 9th grade math exams when I was in elementary school, I was on my own for most of my learning.
So I agree that, yes, just having a parent who is a teacher doesn't necessarily get you much, outside of likely being in a home environment where school is deemed important (many don't have this, unfortunately). But where things become slightly magical is when you have a genetically gifted child and a parent who both knows how to guide that genius and has the resources to do so.
One needs to be a (long term present) parent to understand these subtleties.
You also hear only the success stories, which are extreme edge cases. When such an approach wouldn't fit the development curve of some other potential genius, we would not be hearing their less successful story, would we?
Not to diminish the overall message: in the 80s, even in Western democracies, deeper information was not so readily available, so it's not as if his parents just threw him a wifi-connected tablet with Wikipedia open and that was it.
But I think what should be celebrated more is proper, hard, long-term effort, and not just the usual approach that happens to yield exceptional results.
Apologies, I am not a native speaker, so more complex thoughts sometimes take long sentences to explain.
We are discussing his parents' contribution to his growth. Some, like me, tend to agree they just gave him (good) tools and he found his interest and his way through and beyond thanks to superior analytical skills and overall intelligence, not through some super-duper tutoring by them.
I have a cousin roughly like that. He was way ahead of his class (which was already a math-focused class in secondary school), with a genuine interest in deeper math, physics, and philosophy from an early age. He was even very good at software development in old Pascal or C. Nobody was tutoring him in any way; he just went to the public library and borrowed what he liked.
The stuff that's not hard but still counts as discovery and learning must be self-motivating in a way more average folks simply don't experience, at least not with the same topics.
His parents, as an old saying in my country goes, must have done a tremendous amount of good in their previous life to be rewarded with an easy kid :P
This all sounds like the stochastic parrot fallacy. Total determinism is not the goal, and it is not a prerequisite for general intelligence. As you allude to above, humans are not fully deterministic either. I don't see what hard theoretical barriers you've presented toward AGI or future ASI.
I haven't heard of the stochastic parrot fallacy (though I have heard the phrase "stochastic parrot" before). I also don't believe there are hard theoretical barriers. All I believe is that what we have right now is not enough yet. (I also believe autoregressive models may not be capable of AGI.)
Did you just invent a nonsense fallacy to use as a bludgeon here? A “stochastic parrot fallacy” does not exist, and there is actually quite a bit of evidence supporting the stochastic parrot hypothesis.
I imagine "stochastic parrot fallacy" could be their term for using the hypothesis to dismiss LLMs even where they can be useful; i.e., dismissing them for their weaknesses alone and ignoring their strengths. (Of course, we have no way to know for sure without their input.)
I don’t believe the article makes any claims on the infeasibility of a future ASI. It just explores likely failure modes.
It is fine to be worried about both alignment risks and economic inequality. The world is complex, there are many problems all at once, we don’t have to promote one at the cost of the other.
Totally agreed. Most of the weird concepts in Gas Town are just workarounds for bad behavior in Claude or the underlying models. Anthropic is in the best position to get its own model to adhere to orchestration steps, obviating the need for these extra layers. Beyond that, there shouldn’t actually be much to orchestration beyond a solid messaging and task management implementation.
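To make the "messaging plus task management" claim concrete, here is a minimal, hypothetical sketch (not any real framework's API): an orchestrator reduces to a task queue, a pool of workers, and a results channel. The `worker` function and the `"done:"` result format are stand-ins for handing work to a model/agent.

```python
# Minimal orchestration sketch: a task queue, workers, and a results channel.
import queue
import threading

def worker(tasks: queue.Queue, results: queue.Queue) -> None:
    """Pull tasks until a None sentinel arrives, pushing results back."""
    while True:
        task = tasks.get()
        if task is None:  # sentinel: shut this worker down
            tasks.task_done()
            break
        # Stand-in for dispatching the task to a model/agent.
        results.put((task, f"done:{task}"))
        tasks.task_done()

tasks: queue.Queue = queue.Queue()
results: queue.Queue = queue.Queue()

# Two workers racing on the same queue.
threads = [threading.Thread(target=worker, args=(tasks, results)) for _ in range(2)]
for t in threads:
    t.start()

for task in ["plan", "implement", "review"]:
    tasks.put(task)
for _ in threads:
    tasks.put(None)  # one shutdown sentinel per worker

tasks.join()  # block until every queued item has been processed
for t in threads:
    t.join()

finished = dict(results.get() for _ in range(3))
print(finished)
```

Everything beyond this in a real system is bookkeeping: retries, task dependencies, and a message schema the agents agree on.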