deskamess's comments | Hacker News

Is there a link to US/Canada retailers?

Edit: Never mind. I always find it after asking a question.


uv has been very useful, but I am also looking at pixi. Does anyone have experience with it? I hear good things about it.

Can highly recommend pixi. It really is "uv, but for Conda", and actually quite a bit more, imo. Don't know how relevant this is for you, but many packages like PyTorch are no longer being built for Intel Macs, and some, like OpenCV, are built to require macOS 13+. That's usually not much of a problem on your (most likely pretty modern) dev machine, but when shipping software to customers you want to support old machines. In the Conda ecosystem, packages are built for a much wider variety of platforms than the wheels you'd find on pypi.org, so that can be very useful.

So I can really recommend trying pixi. You can even mix dependencies from pypi.org with ones from Conda, so in most cases you really get the best of both worlds. Also, the maintainers are really nice and responsive on their Discord server.
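For a rough idea of what mixing the two sources looks like, here is a sketch of a pixi.toml; the project name, platforms, and package choices are just placeholders, not from any real project:

```toml
# hypothetical pixi.toml mixing Conda and PyPI dependencies
[project]
name = "demo"
channels = ["conda-forge"]
platforms = ["osx-64", "linux-64"]

[dependencies]          # resolved from Conda channels
python = "3.11.*"
opencv = "*"

[pypi-dependencies]     # resolved from pypi.org
rich = "*"
```

pixi solves the Conda dependencies first and then layers the PyPI ones on top, which is how you get Conda-built binaries (e.g. OpenCV) alongside pure-PyPI packages.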


I had no idea it was AI-assisted (as another comment put it). However, I am fine with this... I would certainly enhance my long-form content the way the author described. The author mentioned the use of a world bible and style guides, and it shows in the consistency and tightness of the article. And that is key... taking something AI-generated (based on a prompt) and reworking it systematically in an iterative, human-in-the-loop process. The end result was a great read.

Are there any restrictions on how short the error_slug should be? The meat of some of my errors can be pretty long (an ffmpeg error, for example). There are also many phases to a job - call them tasks. Can a canonical log line be a collection of task log lines?

You should avoid dumping the raw error into the slug entirely. The idea is that error_slug is a stable grouping key.

The idea is to consolidate everything that can be grouped into one logical unit. So you would emit one long log line at the end, after all tasks are done.
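A minimal sketch of that shape in Python - one canonical line emitted after all tasks finish, with a stable error_slug separate from the raw error text. The field names and the job/task structure here are assumptions for illustration, not the article's actual schema:

```python
import json
import time


def run_job(tasks):
    """Run (name, fn) tasks, then emit ONE canonical log line for the whole job."""
    canonical = {"event": "job_finished", "tasks": {}}
    error_slug = None
    for name, fn in tasks:
        start = time.monotonic()
        try:
            fn()
            status = "ok"
        except Exception as exc:
            status = "error"
            # stable grouping key: which task failed, not the raw error text
            error_slug = f"{name}_failed"
            # the long raw error goes in its own (truncated) field
            canonical["error_detail"] = str(exc)[:200]
        canonical["tasks"][name] = {
            "status": status,
            "duration_ms": round((time.monotonic() - start) * 1000),
        }
    canonical["error_slug"] = error_slug
    print(json.dumps(canonical))  # the single canonical log line
    return canonical
```

Each task contributes a sub-record, but only one line ever hits the log, so dashboards can group on error_slug while the detail stays attached.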


I see in one of your responses that this is a complement to an existing logging system - a one-line summary. That works for me.

Can you use Handy exclusively via the CLI if you have a file to feed it?


Not sure about that


Not currently


Do they have a good multilingual embedding model? Ideally with a decent context size, like 16/32K. I think Qwen has one at 32K. Even the Gemma context windows are pretty small (8K).


It is still prescribed for epilepsy. I am actually hoping for some medication stories if anyone (or someone they know) has both ADHD and epilepsy. It's for a juvenile, but your stories can be for any age. Or pointers to any resources about the combo.


This makes 50 cookies. I think they are too small (tsp scoop on baking sheet). That's the only mod I would make.


I think there is a book (Chip War) about how the USSR failed to keep up with the edge of the semiconductor revolution. And they have suffered for it.

China has decided they are going to participate in the LLM/AGI/etc. revolution at any cost. So the spending is a sunk cost; the models are just an end product, and any revenue is validation - great, but not essential. The cheaper price points keep their models used and relevant. It challenges the other (US, EU) models to innovate and stay ahead to justify their higher valuations (both the monthly plan and the investor kind). Once those advances are made, they can be brought back into their own models. In effect, the currently leading models are running from a second-place candidate who never gets tired and eventually does what they do at a lower price point.


In some way, the US won the cold war by spending so much on military that the USSR, in trying to keep up, collapsed. I don't see any parallels between that and China providing infinite free compute to their AI labs, why do you ask?


How do you read YouTube videos? Very curious, as I have been wanting to watch PDFs scroll by slowly on a large TV. I am interested in the workflow of getting a PDF/document into a scrolling video format. These days NotebookLM may be an option, but I am curious if there is something custom. If I can get it into video form (mp4), I can even deliver it via Plex.


I use yt-dlp to download the transcript, and if one isn't available I can grab the audio file and run it through Parakeet locally. Then I have the plain text, which could be read out loud (kind of defeating the purpose), but perhaps at triple speed with a computer voice that's still understandable at that speed. I could also summarize it with an LLM. With pandoc or typst I can convert it to a single-column or multi-column PDF to print, or to watch on a TV or my smart glasses. If I strip the vowels and make the font smaller, I can fit more!

One could convert the Markdown/PDF to one very long image first with pandoc + wkhtml, then use ffmpeg to crop and move the viewport slowly down the image. The command below scrolls at 20 pixels per second for 30 seconds; with the mpv player one could change the speed dynamically through keys.

# pan a viewport (full width, 1/10 of the image height) down the image at 20 px/s
ffmpeg -loop 1 -i long_image.png -vf "crop=iw:ih/10:0:t*20" -t 30 -pix_fmt yuv420p output.mp4

Alternatively, one could use a Rapid Serial Visual Presentation (RSVP) / speed-reading / Spritz technique to output to mp4, or use a dedicated RSVP program where one can change the speed.
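The RSVP idea is simple enough to sketch: show one word at a time at a fixed words-per-minute rate. A toy Python version (the function name and the (word, duration) frame format are made up for illustration):

```python
def rsvp_schedule(text, wpm=300):
    """Split text into words and pair each with its display time in seconds."""
    seconds_per_word = 60.0 / wpm
    return [(word, seconds_per_word) for word in text.split()]


# Rendering these (word, duration) frames - to a terminal, or as mp4
# frames via ffmpeg/PIL - is then just a loop over the schedule.
```

At 300 wpm each word stays on screen for 0.2 s; changing the speed is just recomputing the durations.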

One could also output to a braille 'screen'.

Scrolling mp4 text on the TV or laptop would be a good reading aid for my mother and her macular degeneration - or perhaps I should just use an easier-to-see/read magnification browser plugin instead.

