Off the shelf, LLMs can’t just look at docs and give you the answer.
But if you properly pre-process the documents and build a RAG-style system (which uses embeddings to find semantically similar docs before inserting them into the LLM's context), then it actually works quite well.
It’s good for big organizations with internal wikis, I’ve found.
It also works well for ingesting articles from online publications.
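The retrieval step behind a RAG setup like that can be sketched in a few lines. This is a minimal illustration only, ranking documents by cosine similarity of their embeddings; the `embed` function here is a toy stand-in (character-trigram hashing) for a real embedding model or API, which a production system would use instead:

```python
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    # Toy stand-in for a real embedding model: hash character trigrams
    # into a fixed-size vector, then L2-normalize. A real system would
    # call an embedding model here instead.
    vec = np.zeros(dim)
    for i in range(len(text) - 2):
        vec[hash(text[i:i + 3]) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def top_k(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank docs by cosine similarity to the query embedding; the top k
    # would then be inserted into the LLM's context before answering.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: float(q @ embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "How to reset your VPN password",
    "Quarterly expense report policy",
    "Setting up the office printer",
]
context = top_k("I forgot my VPN password", docs, k=1)
```

The same shape scales up: swap the toy `embed` for a real model, store vectors in an index, and prepend the retrieved passages to the prompt.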
Well, that’s because Hugo is very simple to set up: there are lots of tutorials online, it hasn’t changed drastically in the past year, and it’s mostly the same across different static site generators.
You won’t find that kind of ease of use with, say, webgpu + winit for building a small renderer in Rust.