Hacker News

We just launched our AI-based API-testing tool (https://ai.stepci.com), despite having competitors like GitHub Copilot.

Why? Because they lack specificity. We're domain experts; we know how to prompt the model correctly to get the best results for a given domain. The moat is having a model do one task extremely well rather than do 100 things "alright".
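To make the "domain-specific prompting" point concrete, here is a minimal, hypothetical sketch of what wrapping a generic LLM with API-testing context might look like. The function name, prompt wording, and spec format are illustrative assumptions, not Step CI's actual implementation:

```python
# Hypothetical sketch: instead of sending a bare user request to an LLM,
# a domain-specific tool assembles a prompt that pins the model to one
# narrow task (here: generating API tests for a single endpoint).

def build_test_prompt(openapi_excerpt: str, endpoint: str) -> str:
    """Assemble a narrowly scoped prompt for API test generation.

    openapi_excerpt -- a snippet of the API spec (assumed input format)
    endpoint        -- the endpoint under test, e.g. "GET /users"
    """
    return (
        "You are an API-testing assistant. Given the OpenAPI excerpt below,\n"
        f"write test cases for `{endpoint}` covering status codes, required\n"
        "fields, and auth failures. Output only runnable test definitions.\n\n"
        f"Spec:\n{openapi_excerpt}"
    )

# Usage: the resulting string would be sent to whatever LLM backend is in use.
prompt = build_test_prompt('paths: {"/users": {"get": {}}}', "GET /users")
```

The value, on this view, lives in the wrapper (task framing, spec injection, output constraints), not in the underlying model.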



Domain specialization could be the moat, not only in the business domain but also in the sheer cost of deployment and refinement.

Check out Will Bennett's "Small language models and building defensibility" - https://will-bennett.beehiiv.com/p/small-language-models-and... (free email newsletter subscription required)


If the primary value proposition for your startup is just customized prompting on top of OpenAI endpoints, then unfortunately it is highly likely it could be easily replicated using the newly announced GPTs.


If you just launched, it is too soon to say.


Of course! Today our assumption is that LLMs are commodities and our job is to get the most out of them for the type of problem we're solving (API Testing for us!)


Sorry to be blunt, but they may turn out to be right if you do not succeed and have to shut down your startup.


It certainly will be a fun experience. But our current belief is that LLMs are a commodity and the real value is in (application-specific) products built on top of them.


Exactly. Everyone is so pessimistic, but for every AWS SKU there is a billion-dollar startup that leads that market.


Time will tell




