Hacker News | dummydummy1234's comments

Arbitrage I assume

I know, I just don't see the arbitrage in what's described? If I order online because it's cheaper than the high street, that's not an arbitrage – the arbitrage would include then selling it on the high street afterwards, getting paid to close the gap until it reflects only delivery fees and the value of immediacy.

I assumed from the context the arb was California salary vs their local salary.

And to make that an arbitrage you'd need to subcontract someone local to do the job you've taken the California pay for. It doesn't mean 'get a better deal in a non-obvious way/place', it's taking both sides of the trade in different markets.
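A toy illustration of the "both sides of the trade" point (the prices below are made up for the example): buying cheaper online is just a better deal, while arbitrage profit only exists once you also sell into the expensive market.

```python
# Illustrative placeholder prices, not real figures.
online_price = 80.0       # the cheap market (buy side)
high_street_price = 100.0 # the expensive market (sell side)
delivery_fee = 5.0        # cost of moving goods between the two

# Just ordering online: you save money, but no trade is closed.
buying_cheaper = high_street_price - online_price

# Actual arbitrage: buy online AND sell on the high street,
# pocketing the gap minus the cost of bridging the markets.
arb_profit = high_street_price - online_price - delivery_fee
```

The gap persists only until enough people take both sides, at which point it shrinks to roughly the delivery fee plus the value of immediacy.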

They might hire them to work on the project and then sell the project themselves, hence the arb; basically outsourcing.

Good point about the project lifecycle. In my experience, open source contributions often get repurposed this way. The key is clear licensing from the start.

Prepaid/paid limits with shutoff is appropriate for this though.

If you have per-key limits this is not possible, and even in a wild situation you should be able to expect that your Firebase key will not run up 50k.
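A minimal sketch of what a prepaid, per-key limit with hard shutoff could look like. `KeyBudget`, `charge`, and `BudgetExceeded` are hypothetical names for illustration, not any real billing API.

```python
class BudgetExceeded(Exception):
    """Raised when a key has exhausted its prepaid budget."""
    pass

class KeyBudget:
    def __init__(self, prepaid_usd: float):
        self.remaining = prepaid_usd

    def charge(self, cost_usd: float) -> None:
        # Reject the request up front instead of billing overages:
        # the worst case is a disabled key, never a surprise bill.
        if cost_usd > self.remaining:
            raise BudgetExceeded("key disabled: prepaid budget exhausted")
        self.remaining -= cost_usd

budget = KeyBudget(prepaid_usd=10.0)
budget.charge(4.0)  # ok, 6.0 remaining; a later 7.0 charge would shut the key off
```

The design choice is that the cap is enforced before the spend happens, which is exactly the guarantee per-key limits without shutoff cannot give you.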


How much are industrial-scale batteries for solar?

The LCOE is better than nuclear and nuclear is not getting cheaper while industrial scale batteries continue to get cheaper.
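As a back-of-envelope illustration of how an LCOE comparison works: levelized cost of energy is lifetime cost divided by lifetime output. The numbers below are placeholders chosen only to show the mechanics, not real market figures, and real LCOE calculations also discount future cash flows.

```python
def lcoe(capex, annual_opex, annual_mwh, years):
    """Simplified LCOE in $/MWh, ignoring discounting for clarity."""
    total_cost = capex + annual_opex * years
    total_energy_mwh = annual_mwh * years
    return total_cost / total_energy_mwh

# Illustrative placeholder inputs:
solar_plus_storage = lcoe(capex=1_200_000, annual_opex=20_000,
                          annual_mwh=2_000, years=25)   # -> 34.0 $/MWh
nuclear = lcoe(capex=10_000_000, annual_opex=80_000,
               annual_mwh=8_000, years=40)              # -> 41.25 $/MWh
```

The structural point is that falling battery capex directly lowers the numerator for solar-plus-storage each year, while nuclear capex has not been falling.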

Mid thinking cycle seems dangerous as it will probably kill caching.

The mid-thinking cycle would require a significant architecture change to the current state of the art, and imo is a key blocker to AGI.

Can you use zellij over ssh on a remote server?

Yes you can!

without running zellij on the remote machine? how?

I'm unclear what's being asked. Zellij is just a TUI-based terminal multiplexer like tmux and screen, you either run it locally and SSH within it to a remote machine, or SSH to a remote machine and run Zellij from within the remote connection.

I guess they mean 'have zellij hold your session when you log off/close controlling terminal'. (that would require zellij on remote)
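A quick sketch of both setups (assuming standard installs of zellij and OpenSSH on whichever machine runs each command):

```shell
# 1) zellij locally, SSH inside a pane. Nothing needed on the remote,
#    but closing your local terminal drops the remote session.
zellij
# ...then inside a pane:
ssh user@remote-host

# 2) zellij ON the remote, so it holds the session across disconnects,
#    like tmux/screen:
ssh user@remote-host
zellij attach --create main   # reattach to session "main", creating it if absent
```

Only the second setup survives a dropped connection, which is what the "session holding" question is really about.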

Never underestimate that people are lazy.


I think there is a difference between things like coding, where it is a semi-closed loop: at the end of the day the software works or not.

Vs fields where there is no reliable feedback path, or where that feedback path is much noisier.


There definitely is, but even then you can get a feel for a loop on more open-ended tasks too: you move forward until the model output starts to look handwavy/contradictory, then pause to talk to it/consult outside sources to improve your own knowledge. Most "fuzzy" fields also have quantitative components, and it's often worth stopping for a moment to put together some kind of quantitative evaluation suite to give the model grounding. Once you've learned the right path yourself, you start moving forward again. It's for sure slower and more error-prone than if you were already an expert when you started, but it's workable, and head-and-shoulders better than what you could do without the AI.
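A minimal sketch of what such a grounding suite could look like for a fuzzy task; the checks and sample answer here are hypothetical, the point is just that even qualitative output can be scored against explicit criteria.

```python
def evaluate(answer: str, checks) -> float:
    """Score an answer as the fraction of domain checks it passes."""
    passed = sum(1 for check in checks if check(answer))
    return passed / len(checks)

# Example: vetting a model's summary of a pricing analysis.
checks = [
    lambda a: "%" in a or "percent" in a.lower(),  # quantifies its claims
    lambda a: "assume" in a.lower(),               # states its assumptions
    lambda a: len(a.split()) < 200,                # stays concise
]

draft = "Assume 12% churn; revenue falls about 8 percent under the new plan."
score = evaluate(draft, checks)  # 1.0: all three checks pass
```

Running every model draft through a suite like this is the "pause for grounding" step made mechanical: a falling score is the signal to stop and consult outside sources.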


I generally think it's better to phrase it as a gift.

My motto is that people have helped me a lot in life, with time, resources and sometimes money.

If I loan money, I explicitly do not expect to be repaid, and will generally say, pass it on.

Also, it's often not the lender's side who cuts off contact. If the person who received the loan cannot repay it, then every time they talk to you they feel guilty and think about it. They might just start avoiding you.


I have found one of the better use cases of LLMs to be as a rubber duck.

Explaining a design, problem, etc and trying to find solutions is extremely useful.

I can bring novelty; what I often want from the LLM is a better understanding of the edge cases that I may run into, and possible solutions.


I always find folks bringing up rubber ducking as a thing LLMs are good at to be misguided. IMO, what defines rubber ducking as a concept is that it is just the developer explaining what they're doing to themselves. Not to another person, and not to a thing pretending to be a person. If you have a "two way" or "conversational" debugging/designing experience it isn't rubber ducking, it's just normal design/debugging.

The moment I bring in a conversational element, I want a being that actually has problem comprehension and creativity which an LLM by definition does not.


Sometimes I don't want creativity though, I'm just not familiar enough with the solution space and I use the LLM as a sort of gradient descent simulator to the right solution to my problem (the LLM which itself used gradient descent when trained, meta, I know). I am not looking for wholly new solutions, just one that fits the problem the best, just as one could Google that information but LLMs save even that searching time.


> I'm just not familiar enough with the solution space

Neither is the LLM


(Trying to find where you might still see this)

I've read the thread and in my mind you're missing that LLMs increase the surface area of visibility of a thing. It's a probe. It adds known unknowns to your train of thought. It doesn't need to be "creative" about it. It doesn't need to be complete or even "right". You can validate the unknown unknown since it is now known. It doesn't need to have a measured opinion (even though it acts as it does), it's really just topography expansion. We're getting in the weeds of creativity and idea synthesis, but if something is net-new to you right now in your topography map, what's so bad about attributing relative synthesis to the AI?


Because if that's it, we've made a ludicrously expensive I Ching.


If there is something LLMs are good at it's knowing some obscure fact that only 10 other people on this planet know.


They're also very good at almost knowing an obscure fact that only 10 people know but getting a detail catastrophically wrong about it


No, this is the kind of thing LLMs are very good at. Knowing the specifics and details and minutiae about technologies, programming languages, etc.


Oh Lord, no. Not at all. That's what they're terrible at. They are ok-ish at superficial overviews and catastrophically bad at specific minutiae


Honest, non-confrontational, non-passive aggressive question: Have you used any of the latest models in the last 6 months to do coding? Or frankly, in the last year?


I have. And the people who say "use a frontier" model are full of it. The frontier models aren't any better than the free ones.


What are you defining as free versus frontier, and for what purpose? For coding there is a big difference between Opus and GPT 5.3/4 versus Sonnet and other models such as open weight ones.


They note in another comment they don't even use search engines so I don't think they're the right person to ask regarding frontier models.


I'd ask them what tools they do use, but I doubt they'll see my comment; I'll see if I can mail it to them.


(Why wouldn't I see your comment?)

I just don't use the web much anymore because the experience has degraded so much over the past several years and it has become decreasingly useful at work as well. I do sometimes need to search for a document and find Kagi pretty good for that, but the old way of using a search engine to kind of explore and discover stuff just isn't viable anymore, unfortunately.

I administer software for a living so I read a lot of documentation of that software, but it comes with the software so I don't ever really need to search for it; I also read and participate in some forums and use the relevant IRC channels.


Oftentimes it is though, good enough for my purposes.


If you're not familiar with the problem space, by definition you don't know whether or not that's the case. The problem spaces I do know well, I know the LLM isn't good at it, so why would I assume it's better at spaces I don't know?


I said familiar enough, not familiar. For example, let's say I'm building an app I know needs caching, the LLM is very good at telling me what types of caching to use, what libraries to use for each type, and so on, for which I can do more research if I really want to know specifically what the best library out of all the rest are, but oftentimes its top suggestion is, like I said, good enough for my purpose of e.g. caching.
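For in-process caching in Python, for instance, the "good enough" top suggestion is usually the standard library's `functools.lru_cache`; a hedged sketch (for cross-process caching the suggestion would likely be something like Redis instead):

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def expensive_lookup(key: str) -> str:
    # Stand-in for a slow computation or network call.
    return key.upper()

expensive_lookup("price")              # computed on the first call
expensive_lookup("price")              # served from the cache on the second
info = expensive_lookup.cache_info()   # hits/misses confirm the cache is working
```

Whether this beats every dedicated caching library is beside the point; it either solves the problem at hand or it doesn't, and that's checkable.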


I still don't get what you're saying. If you possess enough information to accurately judge the LLM's suggestions you possess enough information to decide on your own. There's not really a way around that.


Of course I'm deciding on my own, I'm not letting the LLM decide for me (although some people do). But the point is whatever the suggestion is is merely an implementation detail that either solves my problem or not, not sure what part of that is confusing. Replace LLM with glorified Google and maybe it's less confusing.


No, Google (at least back when it worked) ranked results based on the feedback of other users, so it was a useful signal.


Theoretically the LLM would weight more popular suggestions more too. Regardless you're reading too much into this, either use the LLM or don't, I'm not sure if someone else can convince you. As I said for my purposes of getting shit done it works perfectly fine and works more like a research tool than anything else, especially if it can understand my specific use case unlike general research tools like Google or Stack Overflow.


IDK man this sounds a lot like my junior devs saying "it works fine for me" as they hand in PRs that break prod


If you don't review the code it generates then that's still on you. There isn't an excuse for handing in breaking PRs like your juniors. It's a tool at the end of the day and it's the responsibility of the user to utilize it correctly.


Do you use search engines or do you just memorize all the world’s information?


I don't use search engines for much of anything nowadays (does anybody still?) At work I read documentation if I need to learn something.


This is a very strange and contradictory situation. I'm not sure there's any point in engaging with you since there is nothing but a stream of weak dismissals farming for engagement.

You dismiss LLMs because of factual inaccuracy, which is fair, but now you're doubling down on an anti search engine stance, which is weird, because the modern substitute is letting LLMs either use search engines on your behalf or learn the entire internet with some error and you've dismissed both.

Yes, I'm the "backwards" guy who still uses search engines. We still exist.


I've noticed that HN can attract some of the most extreme people I've ever seen, and I suppose there is precedent in the tech world when I'm reminded of the story of Stallman not using a browser but instead sending webpages to his email where he then reads the content. It's literally nonsensical for 99.9999% of the population and I've read similar absurd things on HN as well.

This person not using LLMs is fine, I understand the argument like you said, but the double down on not using search engines either makes me not take anything they say seriously. Not to be too crass but it reminds me of this situation on the nature of arguing on the internet [0].

[0] https://www.reddit.com/r/copypasta/comments/pxb2kn/i_got_int...


Absolutely, the whole point of the rubber duck is that it's inanimate. The act of talking to the rubber duck makes you first of all describe your problem in words, and secondly hear (or read) it back and reprocess it in a slightly different way. It's a completely free way to use more parts of your brain when you need to.

LLMs are a non-free way for you to make use of less of your brain. It seems to me that these are not the same thing.


Maybe it’s just a semantic distinction, which, sure. I guess I’d just call it research? It’s basically the “I’m reading blogs, repos, issue trackers, api docs etc. to get a feel for the problem space” step of meaningful engineering.

But I definitely reach for a clear and concise way to describe that my brain and fingers are a firewall between the LLM and my code/workspace. I’m using it to help frame my thinking but I’m the one making the decisions. And I’m intentionally keeping context in my brain, not the LLM, by not exposing my workspace to it.


Sometimes people just need something else to tell them their ideas are valid. Validation is a core principle of therapeutic care. Procrastination is tightly linked to fear of a negative outcome. LLMs can help with both of these. They can validate ideas in the now which can help overcome some of that anxiety.

Unfortunately they can also validate some really bad ideas.


I feel I've had the most success with treating it like another developer. One that has specific strengths (reference/checklists/scanning) and weaknesses (big picture/creativity). But definitely bouncing actual questions that I would say to a person off it.


My understanding was that rubber ducking was using a different portion of your brain by speaking the words.

The same discovery often happens when you explain a problem to a coworker and midway through the explanation you say "nvm, I know what I did wrong"


Do you not know any people who can help? Suddenly realised how lonely this sounds.


Coordinating with people is hard and only gets harder as you live. And actually, finding someone that is earnestly receptive to hearing you pitch your half-baked startup ideas (just an example) and is in some capacity qualified to be at all helpful, is uhhh, not easy.


Really? Sometimes I think I'm not very social, then I read something like this. Don't you have any friends? Colleagues? Maybe that's the problem you need to solve rather than sitting in a room burning energy for endless token streams with LLMs that anyone has access to?


Ah, I couldn't help but practice my creative writing in the other reply. This reply is more constructive:

Both LLM-based rubber-ducking and human discussion seem like a win-win. I see no reason to jump to labeling someone's social connections unhealthy just for pairing with LLMs.


lol. nobody is proposing this "well if not friends, then...". Appreciate your concern. I am fine.

This is for Internet posterity: thought-partnering with AI does not in fact make you a sorry socially inept loser that needs globular-toast to come in and help you dial that helpline.

Also: one's friends do not, in reality want to thought-partner about work issues, esoteric hobbies, and that million dollar idea.

Also: these friends, every and any one of them it seems, will not in fact speak the word of God into your ear as manifest insight for said work issue, million dollar idea, and so forth.


Really like the online builder!

