Hacker News | perrygeo's comments

> Succinctness, functionality and popularity of the language are now much more important factors.

Not my experience at all. The most important factor is simplicity and clarity. If an LLM can find the pattern, it can replicate that pattern.

Language matters to the extent it encourages/forces clear patterns. A language with more examples, shorter tokens, greater popularity, etc. doesn't matter at all if the codebase is a mess.

Functional languages like Elixir make it very easy to build highly structured applications. Each fn takes in a thing and returns another. Side effects? What side effects? LLMs can follow this function composition pattern all day long. There's less complexity, objectively.
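A minimal Python sketch of the pipeline style described above (illustrating the pattern, not Elixir syntax; the function names are made up for the example):

```python
# Each function takes a value and returns a new one -- no hidden state,
# no side effects -- so each step is fully determined by the previous
# step's output. This is the pattern an LLM can replicate all day long.

def parse(raw: str) -> list[int]:
    """Turn a comma-separated string into a list of ints."""
    return [int(x) for x in raw.split(",")]

def square(nums: list[int]) -> list[int]:
    """Square every element, returning a new list."""
    return [n * n for n in nums]

def total(nums: list[int]) -> int:
    """Reduce the list to a single sum."""
    return sum(nums)

# Composition reads as one obvious chain, analogous to
# Elixir's "1,2,3" |> parse() |> square() |> total() pipeline.
result = total(square(parse("1,2,3")))
print(result)  # 14
```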

But take languages that are less disciplined. Throw in arbitrary side effects and hidden control flow and mutable state ... the LLM will fail to find an obviously correct pattern and guess wildly. In practice, this makes logical bugs much more likely. Millions of examples don't help if your codebase is a swamp. And languages without said discipline often end up in a swamp.


> explore new approaches and connect ideas faster

This is the hidden superpower of LLMs - prototyping without attachment to the outcome.

Ten years ago, if you wanted to explore a major architectural decision, you would be bogged down for weeks in meetings convincing others, then spend a few more weeks making it happen. And if it didn't work out, it felt like failure and everyone got frustrated.

Now it's assumed you can make it work fast - so do it four different ways and test it empirically. LLMs bring us closer to doing actual science, so we can do away with all the voodoo agile rituals and high emotional attachment that used to dominate the decision process.


That's only because no one understood agile or XP, and they've become a "no one actually does that stuff" joke to many. I have first-hand experience with prototyping full features in a day or two and throwing the result away. It comes with the added benefit of getting your hands dirty and being able to make more informed decisions when doing the actual implementation. It has always been possible; most people just didn't want to do it.

I basically just _accidentally_ added a major new feature to one of my projects this week.

In the sense that, I was trying to explain what I wanted to do to a coworker and my manager, and we kept going back and forth trying to understand the shape of it and what value it would add and how much time it would be worth spending and what priority we should put on it.

And I was like -- let me just spend like an hour putting together a partially working prototype for you, and claude got _so close_ to just completely one-shotting the entire feature in my first prompt, that I ended up spending 3 hours just putting the finishing touches on it and we shipped it before we even wrote a user story. We did all that work after it was already done. Claude even mocked up a fully interactive UI for our UI designer to work from.

It's literally easier and faster to just tell claude to do something than to explain why you want to do it to a coworker.


> Even the crotchetiest and most out-of-touch people I know basically accept that the Earth is warming now

Same. Empirical evidence is just too hard to ignore.

It's quite amazing watching the "climate change isn't real" folks transition to "climate change is no big deal", then to "climate change is too hard/expensive to deal with".


> Empirical evidence is just too hard to ignore.

Except it's the opposite - empirical evidence is very easy to ignore. Between herding, the replication crisis, and the overall insularity of academia, trust in "studies" has never been lower.

But people still respond very well to demonstrative or pragmatic evidence. Empirically there's nothing special about a keto diet. But demonstratively the effects are very convincing.


People who know anything about the replication crisis are a single-digit percentage of the population. Doesn't help explain the public's attitudes.

People just lived through a crisis in which public health officials were telling them to avoid a deadly virus by using glory holes[0]. Skepticism of institutions is at an all time high for good reason.

[0] https://metro.co.uk/2020/07/23/health-officials-recommend-gl...


Thanks for that reminder of some cultural differences (!) between us and our friends across the pond. Hopefully it goes without saying, that rather colorful example is a few steps removed from the replication crisis, although the point about governing institutions spending their credibility in poor ways is taken.

> It's quite amazing watching the "climate change isn't real" folks transition to "climate change is no big deal", then to "climate change is too hard/expensive to deal with".

At the top level (of government and corporate entities) those people always knew it was real, the messaging just changed as it became harder to keep a straight face while parroting the previous message in the face of overwhelming empirical evidence.

Exxon's (internal) research in the 1970s has been very accurate to the observed reality since then.

They just didn't care that it was real because they valued profits/power/etc in the moment over some difficult to quantify (but certainly not good) future calamity.

You would think they would care at least in the cases where they had children and grandchildren who will someday have to really reckon with the outcome, but you'd be wrong, they (still) don't give a shit.


The playbook is The Narcissist's Prayer:

  That didn't happen.
  And if it did, it wasn't that bad.
  And if it was, that's not a big deal.
  And if it is, that's not my fault.
  And if it was, I didn't mean it.
  And if I did, you deserved it.

Narcissism is America’s greatest vice, imo. Not surprising to see it take center stage on what may be the nation’s greatest challenge: ensuring our future in the face of climate change.

something something tilt of the earth.

Unaudited empirical evidence is easy to ignore. The problem is one of physics. It should be simple to show with napkin math.

Reminds me a bit of the Narcissist's Prayer:

That didn't happen. And if it did, it wasn't that bad. And if it was, that's not a big deal. And if it is, that's not my fault. And if it was, I didn't mean it. And if I did, you deserved it.


The proposed mechanism is polyphenols.

Historically, you'd get your polyphenols from your garden or from wild gathering. But we know that industrial crops (even organically grown) have extremely low polyphenol content compared to their wild counterparts. So coffee remains one of the few strong sources you can buy in a grocery store.

Hypothesis: Polyphenols from other sources would be just as protective as coffee.


It must be hard to differentiate:

Hypothesis 1: Polyphenols

Hypothesis 2: 2-3 coffees a day is a symptom of a normal life

You get that kind of issue coming up a lot in this sort of research. For example, people who don't drink at all are probably more likely to drop dead in the next year than moderate drinkers - not because drinking protects them, but because people who are critically ill tend not to drink.


> Coffee and tea contain bioactive ingredients like polyphenols and caffeine

There are also studies that nicotine lowers dementia risk.

Since caffeine and nicotine are both stimulants related to similar receptors, I wouldn't discount this other mechanism.

I'm not saying anything about general healthiness of caffeine though.


It would be interesting to know whether theacrine or paraxanthine can provide the same neuroprotection without as many downsides.

> There are also studies that nicotine lowers dementia risk.

It has been found to be negatively correlated with Parkinson's disease also.

Does it not adversely affect cardiovascular health? Even if it did, I would prefer keeping my mind and mobility.


Sorry it wasn't clear, I was not advocating for anything here, certainly not that people should smoke to prevent dementia.

Large quantities of nicotine are poisonous (just like caffeine) and it is addictive (more so than caffeine).

Regarding cardiovascular health, I'm not sure. As far as I know, nicotine itself is safe unless overdosed, but smoking and vaping of course is unhealthy.

I like coffee, and drinking too much of it is also unhealthy.

However, I love to remind myself of all the pop-sci articles saying that 2-3 cups are healthy when I'm making myself my 5th or 6th cup for the day.


Oh I did not think you were advocating anything. I was just thinking aloud.

I used to drink lots of coffee earlier, but now my caffeine metabolism seems to have ground to a halt. Anything more than two mugs and the night's sleep is history. Even that infrequent second mug pushes it a lot.

In all fairness, my mug is around 2 to 3 espresso shots.

Have to find myself some good local decaf. That's not easy in India.


Oh I see, and tbh it's the same for me with sleep. I should be more disciplined about caffeine given I have trouble sleeping anyway. I still often make the mistake of brewing a coffee in the late afternoon because I love the ritual and it gets me off my desk.

It also has replaced cigarette breaks for me.

Drinking local coffee is admirable, but not an option in Germany :) I often buy "fair" brands in the hope that it does something, but only when I can get them at OK prices...

Since you mention decaf, mixing 50/50 decaf/regular is also a good option to reduce caffeine intake for me.


That. Is. Fascinating. How'd you hear about the industrial crops having low polyphenol content?

An Alarming Decline in the Nutritional Quality of Foods: The Biggest Challenge for Future Generations’ Health

https://pmc.ncbi.nlm.nih.gov/articles/PMC10969708/


> The features I am currently working on cannot be vibe-coded, because no AI can understand the context.

It's always possible to improve understanding and put the context into writing - usually it's just that no one has. Clean up the variable names and docs, and the language models (and other developers!) will understand the context better. This amounts to doubling down on clear technical communication.

Another way to say it: LLMs are pattern matching machines. So in a twisted way, I see LLMs driving code quality improvements - if only to make the LLMs more effective. Improve the pattern, improve the pattern matching.


Witness the giant leap forward in the capabilities of coding agents over the last year. There has been no comparable leap in raw LLM performance. I think the causality is crystal clear: it's nothing about "AGI" and all about existing LLMs learning to use existing tools.

Even a sub-par LLM, put into a context where it has access to unix tools and network and files etc, is vastly more capable than the best LLM chatbot.


I must use AI differently than y'all. Do we not use plan mode?

There is almost no value in watching the stream of intermediate tokens. There's no need to micromanage the agent's steps. Just monitor the artifact and insist the LLM summarize its findings in plain English.

If it can't explain the proposed change coherently, it can't code it coherently either. `git restore .`

I find it much more effective to throw away bad sessions and try a new prompt than to massage the existing context swamp.


For me it comes down to Language. They're LLMs after all. They pattern match on tokens, and if your tokens have muddled semantics, you've lost before you even started.

I have a codebase where variables are named poorly - nah, that's too generous, the variable names are insane: inconsistent even within a single file, and often outright wrong and misleading. No surprise - the LLMs choke and fail to produce viable changesets. Bad pattern = bad code generated from that pattern.

Going through and clarifying the naming (never actually refactoring) was enough to establish the pattern correctly. A little pedantry and the LLM was off to the races.
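A hypothetical before/after in Python (invented names, not from the actual codebase) of the kind of rename-only pass described above - the behavior is identical, only the names change:

```python
# Before: terse, inconsistent names give a pattern matcher
# (human or LLM) nothing to latch onto.
def proc(d, f):
    tmp = [x for x in d if x["st"] == f]
    return tmp

# After: the exact same logic, but the names now state the pattern.
# No refactoring -- the structure and the "st" data key are untouched.
def filter_orders_by_status(orders, status):
    matching_orders = [order for order in orders if order["st"] == status]
    return matching_orders

orders = [{"id": 1, "st": "open"}, {"id": 2, "st": "closed"}]
print(filter_orders_by_status(orders, "open"))  # [{'id': 1, 'st': 'open'}]
```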

If LLMs are the future of coding, the number one highest priority for the software industry should be to fix muddled naming, bad patterns, and obfuscated code. My bet is that building clean code foundations is the fastest way to fully agentic coding.


This. Historically there's been a lot of resistance to the idea of refactoring or refining features. The classic "It works, just ship it" mentality that leaves mountains of tech debt in its wake.

And there _was_ a good reason to resist refactoring. It takes time and effort! After "finishing" something, the timeline, the mental and physical energy, the institutional support, is all dried up. Just ship it and move on.

But LLMs change the equation. There's no reason to leave sloppy sub-optimal code around. If you see something, say something. Wholesale refactoring your PR is likely faster than running your test suite. Literally no excuses for bad code anymore.

You'd think it wouldn't need to be said, but given that we have a tool that makes coding vastly more efficient, some people use that tool to improve quality rather than just to pump out more quantity.


We are becoming spec writers, wearing the PM/lead hats.

1) Do a gap and needs assessment. 2) Build business requirements. 3) Define scope of work to advance fulfillment. 4) Create functional and non-functional specs. 5) Divide-conquer-refine loop.


I feel the same way. LLM errors sound most plausible to those who know the least.

On complex topics where I know what I'm talking about, model output contains so much garbage with incorrect assumptions.

But on complex topics where I'm out of my element, the output always sounds strangely plausible.

This phenomenon writ large is terrifying.

