If a dedicated tool-search tool is available, the client would not load the descriptions of all tools in advance, only those surfaced via the search tool. But it's not widely supported yet.
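Roughly, the pattern is: the model initially sees only the search tool, and full descriptions are fetched on demand. A minimal sketch (all names here, like `TOOL_CATALOG` and `search_tools`, are illustrative, not from any real SDK):

```python
# Hypothetical lazy tool loading via a "tool search" tool.
TOOL_CATALOG = {
    "get_weather": "Return the current weather for a city.",
    "send_email": "Send an email to a recipient.",
    "create_ticket": "Open a ticket in the issue tracker.",
}

def search_tools(query: str) -> dict[str, str]:
    """Return only the tool descriptions matching the query,
    instead of shipping the whole catalog to the model upfront."""
    q = query.lower()
    return {name: desc for name, desc in TOOL_CATALOG.items()
            if q in name or q in desc.lower()}

# The client exposes just this one search tool up front; the matching
# tool definitions are loaded into context only when the model asks.
print(search_tools("email"))  # → {'send_email': 'Send an email to a recipient.'}
```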
Indeed, the article would have been correct one year ago.
Now, modern LLM APIs do require the tools to be described outside the prompt [1]. This undercuts the whole article, though he is right on one point: it does not matter whether those tools are MCP tools or local ones, the call to the LLM looks the same.
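To make that concrete, here is a sketch of a request with a dedicated "tools" field, loosely following the OpenAI-style chat completions schema (the exact shape varies by vendor, and the model name is a placeholder). Nothing in this field says whether the tool is dispatched to an MCP server or to local code:

```python
import json

def build_request(prompt: str, tools: list[dict]) -> dict:
    """Tool definitions travel in a dedicated request field,
    not in the prompt text itself."""
    return {
        "model": "some-model",  # placeholder
        "messages": [{"role": "user", "content": prompt}],
        "tools": tools,  # described outside the prompt
    }

get_weather = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}

req = build_request("What's the weather in Oslo?", [get_weather])
print(json.dumps(req, indent=2))
```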
Weird deflection. GPT-3, a 175-billion-parameter model, came out in 2020, so it very much is not from before LLMs. I guess you can say the story happened before ChatGPT by a few weeks. As for putting "lost" in quotation marks, I have no idea why you think the quantity of money is relevant to the outcome. No one was expecting them to file for bankruptcy over this, only to follow the original agreement.
>Broadly speaking, Russian historians are generally of the opinion that the Holodomor did not constitute a genocide. Among Ukrainian historians the general opinion is that it did constitute a genocide.
An endpoint implementing this protocol would describe the agent and its capabilities, including examples. So I guess you could index that and create a discovery service.
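As a sketch of what indexing such descriptors could look like: the dict below is loosely modeled on an A2A-style "agent card" (the field names are assumptions, not an exact schema), and a discovery service would crawl these endpoints and index the descriptions and examples for search.

```python
# Illustrative agent descriptor; field names are assumptions.
agent_card = {
    "name": "invoice-agent",
    "description": "Extracts line items from uploaded invoices.",
    "url": "https://example.com/agent",
    "skills": [
        {
            "id": "extract-line-items",
            "description": "Parse an invoice PDF into structured line items.",
            "examples": ["Extract the totals from this invoice."],
        }
    ],
}

# A discovery service could build a searchable index from many such cards.
index = {skill["id"]: skill["description"]
         for skill in agent_card["skills"]}
print(index)
```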
It's not so much about what you _can do_ but about the messaging and posturing, which is what drives the adoption of standards as a social phenomenon.
My team has been working on implementing MCP agents and agents-as-tools, and we consistently saw confusion from everyone we were selling this to (people already bought in to hosting an MCP server for their API or SDK) when it came to their agents, because "that's not what it's for".
In fact there is: https://platform.claude.com/docs/en/agents-and-tools/tool-us...