We just launched our AI-based API-testing tool (https://ai.stepci.com), despite competitors like GitHub Copilot.
Why? Because they lack specificity. We're domain experts: we know how to prompt the model correctly to get the best results for our domain. The moat is having a model do one task extremely well rather than 100 things "alright".
If the primary value proposition of your startup is just customized prompting on top of OpenAI endpoints, then unfortunately it could likely be replicated easily with the newly announced GPTs.
Of course! Today our assumption is that LLMs are commodities and our job is to get the most out of them for the type of problem we're solving (API Testing for us!)
It certainly will be a fun experience. But our current belief is that LLMs are a commodity and the real value is in (application-specific) products built on top of them.