plastic3169's comments | Hacker News

I’ve been testing opencode, and it feels like a TUI in appearance only. I prefer the command line and TUIs, and to my mind the whole idea of a TUI is to be a low-level, extremely portable interface that gets out of the way. Opencode doesn’t offer a low-color, standard terminal theme, so I had to switch to a supported terminal program. Copy paste is hijacked, so I have to write code out to a file just to grab a snippet. The Enter key (the return on the numeric keypad) doesn’t send a line. I haven’t tested it, but I doubt this would even work over SSH. I’ve been googling around to see if I’m holding it wrong, but it breaks the expectations of a terminal app in a way that makes me wish they had just made it a GUI. That makes me sad, because I think the goods are there and it’s otherwise good.

> Copy paste is hijacked

FWIW, in Kitty on Linux, Shift + mouse-select copies and Shift + middle-click pastes. This use of Shift alongside otherwise standard Unix-style copy/paste is common in a lot of TUIs (e.g., weechat).
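If you want this behaviour spelled out explicitly, something like the following in kitty.conf should do it. This is from memory of kitty's default mousemap, so treat it as a sketch and double-check it against the docs:

    # copy any mouse selection automatically
    copy_on_select yes

    # Shift bypasses the application's mouse grab, so selection and paste
    # keep working even when a TUI has taken over the mouse
    mouse_map shift+left press grabbed,ungrabbed mouse_selection normal
    mouse_map shift+middle release grabbed,ungrabbed paste_selection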


I don’t think good TUIs are the same as good command-line programs. Great TUI apps, to me, are things like Norton/Midnight Commander, Borland’s Turbo Pascal, vim, Emacs, and the like.

Yes, CLI and TUI are not the same, but I expect a TUI to work decently in a general terminal emulator and not actively block copying and pasting. Having to install a supported terminal emulator goes against the vibe.

Yes, and it was a massive manual effort. In a way they acknowledged that keying doesn’t really work all the way, and that having that unnatural color everywhere on the set isn’t worth it. It’s a massive production with heavy VFX work, so not something you can apply to your own production. The sand-screen and roto sections of this discussion are interesting.

https://youtu.be/UARrOsNPviA


Anyone have recommendations on EU services where one could run open models before buying expensive hardware?


Koyeb (recently acquired by Mistral, if I’m not mistaken) has GPUs you can rent by the minute, and they also offer one-click deploys of some open models.


Great work! I have been wondering what it would take to train with higher image bit depth (10- or 12-bit) and/or using camera footage only, not already heavily processed images. The usefulness of video generation in most professional use cases is limited because the models are too end-to-end and completely contaminated with stock footage. Maybe the quantity of training material needed simply isn’t there?

Not blaming you, just asking, as I don’t usually have access to professionals working on video training.


It’s a great question. In terms of pre-training, even if there were enough data at that quality, storing it and either demuxing it into raw frames or compressing it with a sufficiently powerful encoder would likely cost a lot of money. But there’s a case for using a much smaller subset of that data to dial in aesthetics toward the end of training. The gotcha there is data diversity: often you see that models adapt to the new distribution and forget patterns from the old data. It’s hard to disentangle a model learning clarity of detail from learning concepts, so you might forget key ideas while picking up these details. Nevertheless, maybe there is a way to use small amounts of this data in an RL fine-tuning setup? In our experience RL post-training changes very little in the underlying model weights, so it might be a “light” enough touch to elicit the desired details.
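To make the idea concrete, here is a minimal sketch of what that late-stage data mix could look like. The pool names and the fraction are hypothetical; it just assumes a standard Python training loop drawing batches from two datasets:

    import random

    def sample_batch(base_pool, hq_pool, batch_size, hq_fraction=0.1):
        """Mostly base data, plus a small slice of high-bit-depth clips.

        Keeping hq_fraction small is the point: enough signal to dial in
        aesthetics, hopefully not enough to forget the base distribution.
        """
        n_hq = max(1, int(batch_size * hq_fraction))
        batch = random.sample(hq_pool, n_hq)
        batch += random.sample(base_pool, batch_size - n_hq)
        random.shuffle(batch)
        return batch

How small hq_fraction can be while still moving the aesthetics is exactly the open question; you would want to sweep it while watching for forgetting on the old distribution.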


Hate to say this, but manufacturing bitcoin would make the most sense. And it’s hard to see how even that would work.


I have started to use it to write small throwaway things. Like: write a standalone debug shader that can display all this state on top of this image in real time. Not in a million years would I have spent the time to mess with fonts in a shading language or bring in an immediate-mode GUI framework for that. Codex could one-shot that kind of thing, and the blast radius is one file that isn’t part of the project. Or: write a separate Python program that implements this core logic, to double-check my thinking. I am not a professional programmer, though.


The credit card tap-to-pay option should be required by law. This register-an-app-and-fob flow is the worst UX imaginable. And while we’re at it, the car should hold the payment info; plugging it in should be enough. I know it’s all coming.


I agree, but I’d go further: accepting cash should be required by law. We shouldn’t require people to have a bank account just to buy electricity.


Great game; I came back to play it the next day.

> I don’t think the gates should animate up into the air. It breaks the visual logic of 2D for no benefit.

I also feel it would make more sense for everything to be either 2.5D or pure top-down. The appear/disappear animation is nice feedback for the user, though.

The other thing is that maybe the hitbox should change when the wall comes up. Right now, to remove it, you need to press the grid cell, essentially the root of the wall, which is unintuitive to me.
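To illustrate what I mean (hypothetical geometry, not the game’s actual code): when the wall is up, the clickable area could extend to cover the whole drawn wall, not just its base cell:

    def wall_hitbox(cell_x, cell_y, cell_size, raised, wall_height):
        # base cell rectangle as (x, y, width, height)
        x, y = cell_x * cell_size, cell_y * cell_size
        if raised:
            # extend the hitbox upward to cover the drawn wall sprite
            return (x, y - wall_height, cell_size, cell_size + wall_height)
        return (x, y, cell_size, cell_size)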

Thanks for the game. Looking forward to when there are multiple horses or sheep to enclose.


> The reason nukes have been good is because it makes it clear that war is unwinnable which effectively ended direct conflict between world powers. Yet of course proxy wars are alive and well with Ukraine being the king of them all.

Not looking forward to being your proxy war or small regional conflict. It’s amazingly frustrating to be dragged into this without any provocation or any ability to actually affect the situation. Just unfortunate geography, I guess.

I don’t think nukes stopped direct conflict between world powers; they made it possible for the first time. Without them, there is no way to reach the US at all.

War in Europe or parts of Asia is easy the old-fashioned way, and seems to happen on a regular basis.


“What a way to show them. You rock! Unfortunately I can’t create the musical art you requested, as you reference multiple existing musical acts by name. How about rephrasing your request in a way that is truly original and unique to you?”


Again, I’m referring to the future. When ChatGPT came out, nobody thought it was good enough to be a coding assistant. That future came to pass.

Nobody gives a fuck about what ChatGPT can currently do. It’s not interesting to talk about because it’s obvious, and I don’t understand why you’re just rehashing the obvious response. I’m talking about the future: the progression of LLMs is leading to a future where my prompt produces a response superior to the same prompt given to a human.

