Oh man, this was the release I wanted to link, as it has a new feature (tiled follow link) that I actually started using right away. It's not often that a new browser feature turns out genuinely useful for me, so I got excited.
I have this nagging feeling I'm skimming more and more, not just LLM output but all types of text. I'm afraid people will get too lazy to read once the LLM is almost always right. Maybe it's a silly thought. I hope!
People will say "oh, it's the same as when the printing press came, people were afraid we'd get lazy from not copying text by hand", or any of a myriad of other innovations that made our lives easier. I think this time it's different though, because we're talking about offloading the very essence of humanity – thinking. Sure, getting too lazy to walk after cars became widespread was detrimental to our health, but if we get too lazy to think, what are we?
There are some YouTube videos on the topic, whether it's high school pupils addicted to LLMs or adults losing skills, and not just devs. Society is starting to see strange effects.
One of the more annoying pieces of software that does this is Copilot in Office 365 on the web. Every time (!) I open it, it shows a popup on how to add files to the context. That alone would be annoying, but it also steals focus! So you'd be typing something and suddenly you're not typing anymore, because M$ decided it's time for a popup.
I finally learned to just wait for the popup and then dismiss it with Esc. Ugh!
If you log in to the Exchange Online admin center, you first have to complete a short "on-rails shooter" video game. They constantly shuffle shit around and want to give you a tour of it via popups.
I have the admin accounts for multiple companies, so I have to play the game repeatedly.
I built this recently. I used NVIDIA Parakeet for STT, openWakeWord for wake word detection, Mistral's Ministral 14B as the LLM, and Pocket TTS for TTS. It fits snugly in my 16 GB of VRAM. Pocket is small and fast and has good enough voice cloning. I first used the Chatterbox turbo model, which performed better and even supported some simple paralinguistic words like (chuckle) that made it more fun, but it was just a bit too big for my rig.
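For anyone curious what "glued together" looks like, here's a minimal sketch of that wake word → STT → LLM → TTS loop. The stage functions are hypothetical stand-ins, not the real openWakeWord/Parakeet/Ministral/Pocket APIs; each real component would slot in behind the same interface.

```python
# Sketch of a local voice assistant pipeline: wake word detection,
# then STT -> LLM -> TTS, each stage a pluggable callable.
from dataclasses import dataclass
from typing import Callable, Optional


@dataclass
class VoicePipeline:
    detect_wake_word: Callable[[bytes], bool]  # e.g. openWakeWord
    transcribe: Callable[[bytes], str]         # e.g. Parakeet STT
    generate: Callable[[str], str]             # e.g. a local LLM
    synthesize: Callable[[str], bytes]         # e.g. a small TTS model

    def handle(self, audio: bytes) -> Optional[bytes]:
        """Run one turn: only respond if the wake word was heard."""
        if not self.detect_wake_word(audio):
            return None
        text = self.transcribe(audio)
        reply = self.generate(text)
        return self.synthesize(reply)


# Stub stages just to show the control flow end to end.
pipe = VoicePipeline(
    detect_wake_word=lambda a: a.startswith(b"WAKE"),
    transcribe=lambda a: a.decode()[4:],
    generate=lambda t: f"you said: {t}",
    synthesize=lambda t: t.encode(),
)

print(pipe.handle(b"WAKEhello"))  # synthesized reply bytes
print(pipe.handle(b"noise"))     # None: wake word not detected
```

In a real setup each stage would stream (chunked audio in, tokens out) rather than work on whole buffers, which is exactly where the glue gets fiddly.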
> Is anyone doing true end-to-end speech models locally (streaming audio out), or is the SOTA still “streaming ASR + LLM + streaming TTS” glued together?