Hacker News | jwr's comments

No, they can't, not unless we get rid of the fossil fuel lobby, which pretty much runs the world these days. That isn't surprising, given that fossil fuels are the largest industry ever created by mankind. Compare it to anything else that was actively harmful while big money tried to convince you it wasn't (tobacco, alcohol, or really anything else): nothing comes close in scale. So it isn't surprising that the industry fights change.

EV adoption has been successfully held back mostly by PR; Germany shifted from nuclear to coal and gas; the US president is doing everything he can to dismantle anything that isn't fossil fuel while promoting fossil fuels; the list goes on.


I think this sells the German energy mix short - fossil fuel has been on a steady decline in the energy mix for about 2 decades now.

Comparing 2020 [2] to 2025 [1]:

- renewables (solar+wind) went from 181 TWh to 219 TWh

- fossil (coal+gas) stayed constant (177 TWh and 179 TWh)

So I'd say we switched from nuclear (60 TWh in 2020) to renewables & imported nuclear - but the long-term trend is towards renewables.

[1]: https://www.ise.fraunhofer.de/en/press-media/press-releases/... [2]: (pdf) https://www.ise.fraunhofer.de/content/dam/ise/en/documents/N...


I realize there is a lot of verbal gymnastics going on around this issue, and the word "renewables" is being used a lot, but my point still stands.

Another way to look at your numbers is that had the nuclear plants not been turned off, fossil (coal+gas) could have been reduced by 60TWh.

But they weren't reduced. They remained the same.

From the point of view of the fossil fuel industry: WIN!


The fossil fuel lobby can only do so much. Solar has gotten so cheap it's taking over on its own. Companies are doing it for no reason other than the math makes sense. EV batteries are nearing that point too. You can only keep BYD out of the US for so long.

The fossil fuel industry is fighting a rearguard action at this point.

> Germany shifted from nuclear to coal and gas

Sure, but you're attributing this, deliberately or not, to the wrong cause. It wasn't that the fossil fuel industry somehow won - it was a range of factors, possibly including geopolitics, the aging of some existing plants, an emotional response to the Fukushima nuclear disaster, and the Green lobby.

Basically, they voted to kill nuclear without a solid plan for an alternative, and coal/gas became the default option for filling the gaps, in the absence of timely and sufficiently rapid investment in other technologies.


Hmm. After former chancellor Schroeder heavily pushed Russian gas pipelines (Nord Stream 1 and 2) and then swiftly moved on to working for Russian state-owned energy companies, including Nord Stream AG, Rosneft, and Gazprom, I have a different outlook on things.

One can never discount lobbying and influence behind the scenes, but Schroeder's chancellorship ended in 2005, six years before the initial post-Fukushima vote in question, and various aspects of the phase-out continued to be supported by various politicians long after that.

He'd be a spectacularly successful lobbyist if your suspicion is correct.


Why would I assume Schroeder is the only politician under the influence of Russia or any of the fossil-fuel industry businesses?

I mean yeah, but $100 a barrel makes it difficult to argue.

I found it impossible to add a European project to it.

Bottom of the page:

"Any suggestions? Sign up for an account to suggest changes or new products."

(granted: what on earth do I need an account for that can't be done with a form or an email?)


Have you tried? I have. I just checked: my product has been "Waiting for review" since July 15, 2025.

No wonder we Europeans get laughed at when it comes to speed and execution.


I never understood the logic behind the thinking there. Why would you ever want to place menubar items UNDER the notch, if you know it's there and they won't be visible?

It's such an easy problem to fix, with such significant usability consequences, that I just don't get the thinking.
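Conceptually the fix is just an exclusion zone in the menu-bar layout pass. A toy sketch (all names and numbers hypothetical, not Apple's implementation):

```python
def layout_items(widths, notch_left, notch_right):
    """Place menu-bar items left to right, skipping the notch region.
    Returns one x-offset per item."""
    xs, x = [], 0
    for w in widths:
        # If this item would overlap the notch, jump past it.
        if x < notch_right and x + w > notch_left:
            x = notch_right
        xs.append(x)
        x += w
    return xs
```

A real implementation would also handle overflow at the right edge, but the core decision (never straddle the notch) is a few lines.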


The notch itself is probably considered temporary internally. If you code a rule for the notch, then you have to consider which hardware macOS is running on in order to determine whether a notch is present for your "notch width calculation."

"Think Different"

"Courage"

PoE is not obvious to implement (take it from someone who has done it, with a fair share of mistakes): it uses more expensive components than normal Ethernet, takes up more space on the board, makes passing emissions certification more complex, and is more prone to mistakes that ruin boards in the field, causing support/warranty issues. In other words, a can of worms: not impossible to handle, but something you would rather avoid if possible.

And what would a better alternative look like?

I wouldn't call it "better", but the least-effort path among hobbyists and low-end gear is often 12 V or 24 V sent over a pair with ground, and a forgiving voltage regulator on the other end.
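To see why 24 V is the more forgiving choice over long cable runs, here is a back-of-the-envelope voltage-drop check (assuming roughly 0.17 ohms per metre of loop resistance for 24 AWG pairs; verify against your cable's datasheet):

```python
def drop_volts(current_a: float, run_m: float,
               ohms_per_m_loop: float = 0.17) -> float:
    """Ohm's law over the cable's round-trip (loop) resistance."""
    return current_a * run_m * ohms_per_m_loop

# A 5 W load draws ~0.42 A at 12 V but only ~0.21 A at 24 V, so the same
# cable run loses half the volts, and the loss matters less relative to
# the higher supply voltage.
```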

There is none; I never said PoE is "bad". It's a very good solution, it's just difficult to implement.

Really looking forward to testing and benchmarking this on my spam filtering benchmark. gemma-3-27b was a really strong model, surpassed later by gpt-oss:20b (which was also much faster). qwen models always had more variance.

If you wouldn't mind chatting about your usage, my email is in my profile, and I'd love to share experiences with other HNers using self-hosted models.

Does spam filtering really need a better model? My impression is that the whole game is based on having the best and freshest user-contributed labels.

He said it’s a benchmark.

Better models help on the day the spam mutates, before you have fresh labels for the new scam and before spammers can infer from a few test runs which phrasing still slips through. If you need labels for each pivot you're letting them experiment on your users.

In my experience the contents of the message are all but totally irrelevant to the classification, and it is the behavior of the mailing peer that gives all the relevant features.

Based on how much blatant gmail->gmail spam I receive, the gmail team agrees with this strategy.

This is such a waste of effort. Your E-mail address is not and can't be a secret. It will get into spammer databases eventually, no matter what you do. You will spend a lot of effort doing all these fancy tricks, and eventually you will get spam anyway.

Also, a note to those who make fancy "me+someservice@somedomain.com" addresses: make really sure you are in control and these work. Some services (including mine) will need to E-mail you one day, for example to tell you that your account will be deleted because of inactivity. If you don't receive that E-mail because of your fancy spam defenses, your account will be deleted. I've seen people hurt themselves like this and it makes me sad.

On a constructive note: what works very well is spam filtering using LLMs. We have AI to help us with this problem today. I wrote an LLM despammer tool which processes my inbox via IMAP using a local LLM (for privacy reasons). I see >97% accuracy in my benchmarks on my (very difficult) testing corpus. It's nearly perfect in real life usage. I've tested many local models in the 4-32B range and the top practical choice is gpt-oss:20b (GGUF, I run it from LM Studio, MLX quantizations are worse) — not only does it perform very well, but it's also really fast.
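A minimal sketch of the classification step in such a despammer (illustrative only; the prompt wording, function names, and truncation limit are my assumptions, and the actual model call is elided):

```python
# Illustrative LLM spam-classification step: build a prompt from a
# message and parse the model's one-word verdict. The call to the local
# model (e.g. an HTTP request to LM Studio's server) is stubbed out.

def build_prompt(sender: str, subject: str, body: str,
                 max_body_chars: int = 2000) -> str:
    """Construct a classification prompt; long bodies are truncated
    to keep inference fast on a local model."""
    return (
        "Classify the following email as SPAM or HAM. "
        "Answer with exactly one word.\n\n"
        f"From: {sender}\nSubject: {subject}\n\n{body[:max_body_chars]}"
    )

def is_spam(model_reply: str) -> bool:
    """Parse the model's reply; tolerate punctuation and extra text."""
    words = model_reply.upper().replace(".", " ").split()
    return bool(words) and words[0] == "SPAM"
```

In a real pipeline, messages would be fetched over IMAP, the prompt sent to the local model's API, and anything classified as spam moved to a junk folder rather than deleted, so false positives stay recoverable.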


Plus-addressing is built into most email services. There's no 'fancy' setup to break; it just works. That is, there's no way me@gmail.com works but me+someservice@gmail.com doesn't, unless you explicitly configure it not to work. Similarly for custom domains on most services.
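For illustration, the plus-address normalization Gmail-style providers apply can be sketched like this (a simplified model, not any provider's exact rules):

```python
def normalize(address: str) -> str:
    """Strip a '+tag' suffix from the local part, so 'me+shop@gmail.com'
    and 'me@gmail.com' deliver to the same mailbox (Gmail-style rules)."""
    local, _, domain = address.partition("@")
    base = local.split("+", 1)[0]
    return f"{base}@{domain}".lower()
```

Because the tag is dropped server-side before delivery, there is nothing for the sender to misconfigure: mail to the tagged form lands in the same inbox as mail to the bare address.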

If you use a catch-all on a domain, i.e. someservice@somedomain.com, I guess in theory that might break. But it seems about as likely as messing up the overall domain setup.

Also, my account on your service is likely much more disposable to me than my email address/domain. Anything I care about, I'd back up. Not just assume some random website is going to preserve it for me forever.


The techniques in the article have had roughly 95-100% success at avoiding spam and take about 5 minutes to implement. Your approach of putting an LLM in front of your inbox gives 97% accuracy, may have false positives (so you may not receive that account-deletion email after all), requires running inference, and, I assume, would take at least an hour to set up.

Also, the two can be complementary anyway, so I am not sure what your point is.


> Also, a note to those who make fancy "me+someservice@somedomain.com" addresses:

Just wait until one of these companies demands an email from the registered email address of your account!


My email provider allows me to send from + addresses; I just change the From header.

Plus tags annoy signup forms more than they slow spam crawlers. If you're spending this much effort on obfuscation, run a sane mail filter and save the weird tricks for the sites that insist on emailing you later, because some apps treat a plus alias as invalid, and then you get to debug their broken account recovery.

Two things: 1) MLX has been available in LM Studio for a long time now, 2) I found that GGUF produced consistently better results in my benchmarking. The difference isn't big, but it's there.

I've been running my business for 10 years now, relying on Clojure and ClojureScript. It is amazing to be able to base one's livelihood on a foundation that is so stable and well designed. Clojure has been designed by a very smart and very experienced person, and it shows. It has then been maintained and extended by a team built around a culture of maturity and stability, and the result is something you can rely on.

The fact that I can use the same language to develop business model code that runs on both the client and the server, or that I don't have to use a different on-the-wire format for sending data between them (EDN does the job great) is just icing on the cake in this context.

I am very thankful to Rich and the entire Clojure (and ClojureScript) teams for giving me access to all their work (for free!).

BTW, if you haven't seen any of Rich's talks, go see them — they are worth it even if you do not intend to use Clojure.


Rich's talks have been the apex of my programming career. I didn't like sitting in front of a computer to the extent needed to make a living from it, so I moved on to another industry. And maybe I wasn't smart enough to become competent in Clojure. But I'm thankful for the eureka moments that Rich offered me. He's such a beautiful mind.

Would love to know what industry you ended up in. Daydreaming about working with my hands out and about one day lol.

A lot of people abandon tech for less stressful careers. Things like air traffic control and firefighting, or deep sea diving for the oil industry.

I'm pivoting to mental health. But the trades are quite appealing too!

I would love to use Clojure but there are basically no jobs in my area with the language. Seems like the Nordics like Clojure but I'd need to move.

The very good backwards compatibility is attractive but as the result of the small community, there's also a lot of abandoned packages and fewer QoL packages (formatters, linters, etc); I know there are some but for example I had setup `cljfmt` in Emacs and it wouldn't work, didn't look further.


VS Code and its forks (Cursor, Antigravity, etc.) have Calva, a fantastic REPL extension, along with the excellent clj-kondo linter. These are amazing tools; formatting is the very least of it. You don't need Emacs, though I personally use VS Code + Doom Emacs. Also, many packages that look abandoned are simply mature. You can literally use ten-year-old packages.

I'm not a hot-shot programmer, entirely self-taught, but a decent architect who thinks hard about problems, and with LLM agents Clojure shines for me. There are also some fantastic databases, starting with Datomic (free now thanks to Nubank) and everything inspired by it and by the Clojure flavor of Datalog. These include Datalevin, Datahike, DataScript, XTDB. Datomic itself is probably best for enterprise, though there's now an embedded version.

But I'm pretty convinced that most LLMs I've used are more reliable with Clojure (and Elixir) than with most of the popular languages, and I can say they use Datalog extremely well, seemingly much better than SQL despite the vast difference in corpus size. For one thing, Datalog just gets rid of join issues.


> I would love to use Clojure but there are basically no jobs in my area with the language

I created my own job :-)

(although there are Clojure jobs in my area)


Always a solution ofc!

cljfmt is included with both Clojure-LSP and CIDER, so if you have either installed it should work out of the box.

With LSP mode the standard `lsp-format-region` and `lsp-format-buffer` commands should work, and on the CIDER side `cider-format-defun`, `cider-format-region` and `cider-format-buffer` should also invoke cljfmt.


Hey! Thanks for creating the package =) I'll need to try the integration again.

I'll add a note to the cljfmt README to tell people about these commands, as your experience shows that it might not be obvious to people that they likely already have access to cljfmt in Emacs as a result of using LSP or CIDER.

There are still Clojure remote positions. Thankfully, I have used Clojure professionally long enough that my core ability shouldn't atrophy too much now that we have moved away from it at my current position. I am looking forward to Jank actually.

Why did you move from it if I may ask?

There were multiple reasons at our company -- my particular team, all skilled Clojurists, decided to default to python last year for a variety of reasons including both AI code generation suitability and AI model utilization in our code bases; the latter is of high relevance for our particular work. While I find Clojure to be among the best languages for interacting with LLMs via API, it is awkward for interacting with local models directly. Of all on the team, I was probably most open to a polyglot approach.

> AI code generation

Incidentally, I am having great success using AI with Clojure. In fact, from what I read online, better than most. I'm not sure if it's due to Clojure's terseness (and hence, token economy), or other reasons, but it works very, very well.


Fair enough!

Simple Made Easy (https://www.infoq.com/presentations/Simple-Made-Easy/) in particular had a huge impact on the way I think about writing software at a formative time in my development/career. I have not had the chance to use Clojure professionally, but thinking about software in terms of "intertwining" is the idea I return to when evaluating software designs, regardless of technology, and it gave me a way to articulate what makes software difficult to reason about.

FYI, the canonical version (recut slides / video / audio) of this talk can now be found at https://www.youtube.com/watch?v=SxdOUGdseq4

I always wondered what that juggling animation slide looked like!

> open-source is our only hope against enshittification. Everything that is VC backed or publicly traded will become enshittified

Solo founder here. My business is not VC-backed nor publicly traded, and I specifically avoided taking investment so that I can make all the decisions.

I avoid enshittification. This sometimes hurts revenue, but so be it. I wouldn't want to subject my users to anything I wouldn't like.

So, open-source is not the only hope. You can run a sustainable business without enshittification. The problem is money people. The moment money people (career managers, CFOs, etc) take over from product people, the business is on a downward path towards enshittification.


I believe you; it's just that I've seen similar stories where the well-intentioned founder gets tired and eventually sells the business, and the new owner ends up enshittifying the product. I'm not saying in the slightest that it will happen to your company, and I don't hold it against the founder. It's their prerogative, after all.

Even when I use proprietary software, I sleep easier at night knowing that open-source alternatives keep them honest in their approach and I have an out if things do change.


An interesting and sad aspect of the war on bots and scraping that is being waged is that we are hurting ourselves in the process, too. Many tasks I'm trying to get my AI assistant to do cannot be done quickly, because sites defensively prohibit access to their content. I'm not scraping: it's my agent trying to fetch a page or two to perform a task for me (such as check pricing or availability).

We need a better solution.


You aren’t scraping for the sake of training a model, but scraping the prices and availability is still scraping, right?

I think some of the folks running sites would rather have you go to the site and view the items “suggested based on your shopping history” (I consider these ads, the vendors might disagree), etc.

I’m more sympathetic to the people running sites than the LLM training scrapers, but these are two parties in a many-party game and neither one is perfectly aligned with users.


> scraping the prices and availability is still scraping

Web browsing is scraping, too.

I am not doing anything that I myself wouldn't do; it would just take me longer. I'm not mass-scraping, training new models, etc. I'm just using a helper tool to do some work for me.

If you prevent that, you are effectively saying: humans have to perform the manual labor of clicking and browsing through our site, they are not allowed to be helped in any way. I don't think this is the right answer.


I would assume most sites that block access to your AI assistant do so because they want to show a human ads, i.e. not run at a loss. Seems reasonable.
