Hacker News | rileyt's comments

It's similar to triggers, but with a routing layer that combines semantic triggers and memory. The magic is defining them as files in the repo (like skills) and not worrying about the execution.

The working spec is files like `.agents/daemons/<name>/DAEMON.md`. They have access to the skills and rules in the repo, so you don't need to duplicate them.

you could even have a daemon that just says to run an existing skill.


Seems like an appropriate way to share.


It cost less than $2 to embed all 100+ episodes with the new OpenAI embeddings, and it was as easy as making a bunch of API calls. Pretty hard to beat that experience.
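For anyone curious what "a bunch of API calls" looks like, here's a minimal sketch. The model name, chunk size, and raw-HTTP approach are my assumptions for illustration, not necessarily what the author used:

```python
# Sketch: embed transcript chunks via the OpenAI embeddings endpoint.
# Model name and chunk size are illustrative assumptions.
import json
import os
import urllib.request

EMBEDDING_MODEL = "text-embedding-3-small"  # hypothetical model choice

def chunk_transcript(text: str, max_words: int = 200) -> list[str]:
    """Split a transcript into roughly fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def embed_chunks(chunks: list[str]) -> list[list[float]]:
    """POST a batch of chunks to the embeddings API; returns one vector per chunk."""
    req = urllib.request.Request(
        "https://api.openai.com/v1/embeddings",
        data=json.dumps({"model": EMBEDDING_MODEL, "input": chunks}).encode(),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        payload = json.load(resp)
    return [item["embedding"] for item in payload["data"]]
```

Store the returned vectors alongside their source snippets and you have everything the semantic search needs.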


It's not fine-tuned. You literally just add something like "if the answer to the question isn't in the context, say 'I don't know'". It's wild.


So do you have the entire Huberman podcast transcript in the context of the prompt?


How did you do that?


He just told you? The prompt is a combination of the top 5 search results, the phrase telling it to say it doesn't know if it doesn't have context, and the question the user is actually asking. That is sent to OpenAI, and the response is shown alongside the search results as the references.
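That prompt assembly is just string building. A minimal sketch, with wording that's illustrative rather than the exact phrasing the site uses:

```python
# Sketch of the prompt described above: top search snippets,
# a refusal instruction, then the user's question.
def build_prompt(snippets: list[str], question: str) -> str:
    context = "\n\n".join(snippets[:5])  # top 5 search results
    return (
        "Answer the question using only the context below. "
        "If the answer isn't in the context, say \"I don't know.\"\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
```

The completed string is what gets sent as the prompt, and the snippets double as the citations shown under the answer.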


Thanks a lot. It's nice when people appreciate the stuff I build for fun.

The UI is my own design system that I will open source at some point. The app is Remix with a Redis cache to keep things snappy.


Old version? It works on the latest for me, but the CSS uses @layer, which doesn't have great support in older browsers.


Ah, right, I'm still on Firefox 91 ESR.


The embeddings cost less than $2 for all 100+ episodes. The answering calls to davinci have cost around $30 so far.


It uses Whisper for transcripts, which I believe are better than the YouTube-generated ones.

My guess is that there are more relevant results from the semantic search than I'm including in the context (to reduce costs) and that exact snippet isn't being given to the answering model as context.
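That trade-off (truncating the retrieved results to cut cost) is easy to see in code. A sketch of top-k retrieval, assuming the snippets were embedded as above; function names and the pure-Python cosine are mine:

```python
# Sketch of top-k retrieval over embedded snippets: rank by cosine
# similarity and keep only the k best to limit prompt size (and cost).
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k(query_vec: list[float], snippet_vecs: list[list[float]], k: int = 5) -> list[int]:
    """Return indices of the k most similar snippets. Everything past k is
    dropped, which is how a relevant passage can miss the model's context."""
    ranked = sorted(range(len(snippet_vecs)),
                    key=lambda i: cosine(query_vec, snippet_vecs[i]),
                    reverse=True)
    return ranked[:k]
```

Raising k would catch more of those borderline snippets, at the price of a longer (more expensive) prompt per question.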


As I wrote here: https://news.ycombinator.com/item?id=34035123, I also wrote a tool to access them. I'm pretty sure there are English transcripts which are manually generated, not just the YouTube generated ones. I've always found them to be high quality, enough to make a book out of.


For the Huberman podcast, I imagine he pays someone to do the transcriptions manually, so they're accurate. But on most videos I've found Whisper's transcriptions to be more accurate than YouTube's auto-generated ones. Not to bash YouTube's; they're still great, but occasionally you get some weird results.


You're welcome :)


