
People are making their whole personalities about being anti-AI. I'm personally a mix of skeptical+worried about AI; but at some point just being a reactionary ninny makes people tune out on whether there might be some truth at the core of your concern.


It's an interesting point. I too am deeply skeptical and worried, and I would be concerned if people thought I was "anti-AI". Which I am not -- not least because it's a sixty-plus-year-old discipline whose history spans long before OpenAI and the current burst of empty calories.

But if there's something I am building into my personality, it is rejecting (and calling out) grift. That is what makes me seethe about this entire big picture. The hype, the cultural parasitism, and the callous, blasé "oh yeah if you're not using this already you're probably fucked" FUD/FOMO shit smoothie that generative-AI people seem entirely too comfortable dishing up.

Even people who merely seem in a hurry to jump on that hype train are going to catch the same side eye.

Re: iTerm2 specifically: I don't use iTerm2. I didn't mind it when I did. I wish the developer luck; terminal apps always need more attention. And in this case I don't think a tickbox to switch something on would trouble me.

But if I really relied on a product, seeing its developer divide attention and start shoehorning in LLM APIs to gain a bit of contemporary relevance would at least slightly bother me.

Like when one of your least rigorous-thinking friends or relatives starts talking to you about some opportunity to do with Ethereum. Not often a positive sign.

(This Gitlab issue is not the silliest overreaction I've seen. At the height of the Apple/Samsung Android lawsuit proxy war, I once saw someone demand in a support thread that Wacom remove some Android connection tools that one or more of their smart tablets were using, because Android was "stolen property" or somesuch obviously Jobsian phrasing.)


Well said, and I share the same sentiment. AI is not inherently bad, but the hype and the culture surrounding it most certainly have at least weird, if not bad, vibes (looking at you, r/singularity).


I am fully aware that eventually I will need to engage with it -- not least because I want to be teaching.

But I'm really interested in finding a maximally-ethical way through it all. The MagPi magazine has just started an article series about applications of ethical, non-infringing models and on-device AI, so I think there must be an emerging trend line around that.

Though whenever I see people talking about ethical AI stewardship the debate seems to be about one specific corporation which is run by a guy who launched a "let us scan your iris and we'll give you crypto" business.


> start shoehorning in LLM APIs to gain a bit of contemporary relevance

This claim implies that there's no utilitarian reason for this integration, but I don't think that is true. Shell scripting is notoriously arcane, and LLMs are pretty decent at unraveling it. You might notice that there are quite a few comments on the issue where users specifically state that they are using the feature and find it helpful. I was actually mildly skeptical when I saw it show up in the changelog for 3.5.0, but after giving it a try, I think this is exactly the kind of useful AI integration that I'd want to see more of (as opposed to how LLMs are being used most of the time).
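To make "arcane" concrete, here's the flavor of thing I mean -- an invented example, not something from the issue -- that an LLM is genuinely decent at explaining or producing:

    # list every .tar.gz under the current tree, stripped of path and extension
    find . -name '*.tar.gz' -print0 | while IFS= read -r -d '' f; do
      base=${f##*/}            # drop everything up to the last slash
      printf '%s\n' "${base%.tar.gz}"
    done

Nothing exotic, but the ${f##*/} and ${base%.tar.gz} expansions are exactly the syntax people forget.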


I think your reading is fair considering what I actually wrote!

It's just not quite what I had in mind. But what I wrote is still quite scrappy; that line is missing at least an "in general" to broaden that point out beyond iTerm 2 specifically. I need to slow down a bit more.

I don't really agree with you on the LLM side of the equation, and I am bothered by the idea that this is where we're all headed. But where I think this integration is not ridiculous is in doing this at the GUI level.

There is an argument for asking "why isn't this a utility at the remote (shell) end?", but of course from the perspective of not wanting OpenAI API calls bolted onto everything that calls out from the command line, that would be worse: you'd be installing it on every machine you connect to.

So if it belongs anywhere (colour me wholly unconvinced) it definitely belongs somewhere within the terminal client itself.

But as I say, I don't use iTerm 2. I did take this opportunity to look at what iTerm 2 offers, out of fairness to the author, and it is obviously an impressive bit of work. Maybe when I find a need for Python scripting like that I'll come back to it.


iTerm is a "kitchen sink" type of app in general. That is, it is definitely a terminal emulator, but it has lots of features, and I doubt that most users use even half of them. In that sense, if you're using it, you're already at least tacitly accepting that philosophy as valid. So optional LLM integration is not really out of place there in the way it would be in a truly minimalist terminal emulator, IMO.

With regard to LLMs, for what it's worth, I'm not suggesting that people use them to routinely drive their shell. This is the kind of stuff that you use very occasionally, when it is time to use that one command that is immensely useful for very specialized things, and which you can never in your life remember the syntax for precisely because it's not something you do every day. The canonical examples there are ffmpeg, ImageMagick, and similar tools.
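To give a concrete (made-up) example of the kind of syntax I mean:

    # ffmpeg: scale a video to 720p height, keep aspect ratio, copy the audio
    ffmpeg -i input.mov -vf "scale=-2:720" -c:v libx264 -crf 23 -c:a copy out.mp4

    # ImageMagick: batch-shrink JPEGs to 1600px wide (mogrify edits in place!)
    magick mogrify -resize 1600x -quality 80 *.jpg

If you do this once a quarter, you will not remember -2 vs. -1 in that scale filter, or which tool wants -resize and which wants -vf.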

Remember https://linux.die.net/man/1/cdecl? This is basically like that, just built on tech that generalizes much better.
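For anyone who hasn't used it, cdecl translates between C declarations and English in both directions:

    $ cdecl
    cdecl> explain char *(*fptab[])(int)
    declare fptab as array of pointer to function (int) returning pointer to char
    cdecl> declare x as pointer to array 10 of int
    int (*x)[10]

An LLM is, in effect, that idea pointed at every crufty command-line syntax at once.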


> This is the kind of stuff that you use very occasionally, when it is time to use that one command that is immensely useful for very specialized things, and which you can never in your life remember the syntax for precisely because it's not something you do every day.

Don't people make notes of that somewhere they can look it up?

I mean, you're not going to use this to generate command-line arguments for commands you've never heard of before; you're likely not going to use it for commands whose output is crucial or whose behaviour is unsafe; and if you do, you're going to need to use your actual knowledge to check it hasn't hallucinated something dangerous before you run it -- which means consulting the manual and doing the work anyway.

I get that man pages are a particularly rich, standardised form of training text; I just don't believe asking an LLM has as much of an advantage over looking it up as people claim.

This is one of those areas where I think people project success onto LLMs where there is none. It's like the songwriting example. Sure it can write a bad song fast, but so can literally anyone half-skilled, and if you want to help it write a good song, you're going to have to redo half the work.

This is just like having a bad dishwasher: you end up washing half the dishes by hand anyway.


The workflow here is to generate the command line first, then consult the manual to see what exactly it does, which is much easier than reading the whole thing end-to-end trying to find the exact combination for your needs in the first place.
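Concretely -- with an invented prompt and suggestion -- it looks like:

    # 1. Ask: "extract the audio from talk.mp4 as an mp3"
    #    The model suggests something like:
    ffmpeg -i talk.mp4 -vn -c:a libmp3lame -q:a 2 talk.mp3

    # 2. Spot-check the flags you don't recognize before running it:
    man ffmpeg    # confirms: -vn drops video, -q:a 2 is LAME VBR quality

Verifying a couple of flags in a man page takes a few minutes; finding them cold in the ffmpeg docs is the part that eats an afternoon.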


I've never met a person who has made their whole personality about being anti-AI. Can we please calm down a bit?


Luddites are making their whole personalities about being anti-exploitation of labor by capitalists. /s

Love how you just lump everyone who takes the time to voice legitimate concerns about the ethical and privacy nightmare that is proprietary generative AI services into a category called "reactionary ninnies".

While I'm sure some comments went overboard, people had every right to be upset about this integration being added to iTerm—even as a configure-to-use-it feature. I'm glad this is being extracted out to a completely separate add-on.



