> Google's Web Model Context Protocol (WebMCP) handles authentication by inheriting the user's existing browser session and security context. This means that an AI agent using WebMCP operates within the same authentication boundaries (session cookies, SSO, etc.) that apply to a human user, without requiring a separate authentication layer for the agent itself.
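A minimal sketch of what that looks like from the page's side, assuming the draft `registerTool()` shape (the exact `navigator.modelContext` API is still in flux, and the tool below is made up):

```ts
// Hypothetical sketch of a page exposing a WebMCP tool. The
// navigator.modelContext surface follows the draft proposal's shape
// but is an assumption -- the spec is still being worked out.
interface ModelContext {
  registerTool(tool: {
    name: string;
    description: string;
    inputSchema: object;
    execute(input: unknown): Promise<unknown>;
  }): void;
}

const modelContext = (navigator as Navigator & { modelContext?: ModelContext })
  .modelContext;

modelContext?.registerTool({
  name: "list_orders", // made-up example tool
  description: "List the signed-in user's recent orders",
  inputSchema: { type: "object", properties: {} },
  async execute() {
    // This handler runs in the page's own JS context, so the request
    // is same-origin and carries the user's existing session cookies:
    // the agent operates inside the user's auth boundary and never
    // holds credentials of its own.
    const res = await fetch("/api/orders", { credentials: "include" });
    if (!res.ok) throw new Error(`orders request failed: ${res.status}`);
    return res.json();
  },
});
```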
Here’s what Gemini says about copy-pasting AI answers:
> Avoid "lazy" posting—copying a prompt result and pasting it without any context. If the user wanted a raw AI answer, they likely would have gone to the AI themselves.
I've been doing a version of this in a side project. Instead of saving prompts directly, I keep a roadmap. When implementing a feature, I tell it to brainstorm an implementation from the roadmap; when fixing a bug, I tell it to brainstorm fixes from the roadmap. There's some back and forth, and then it writes up a slice, which is committed. I look that over and verify the scope, and it makes a plan (also committed). Then it generates work logs as it codes.
My prompts are literally "brainstorm next slice" or "brainstorm how to fix this bug" or "talk me through the trade-offs of approach A vs. B", so those prompts aren't meaningful on their own.
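The committed artifacts in a setup like this end up looking roughly like the following (file and directory names are purely illustrative, not from the comment above):

```
docs/
  roadmap.md           # high-level feature roadmap
  plans/
    004-auth-slice.md  # per-slice plan, committed before coding
  worklogs/
    004-auth-slice.md  # work log generated as the agent codes
```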
I wonder how scalable that is. After the twentieth feature has been added, how much connection will the conversation about the first feature still have with the current code? And you'll need a larger and larger context for the LLM to grok the history; or you'll have to have it rewrite the history in shorter form, but that runs into the same failure modes that prevent us from just having it maintain complete documentation (obviating the need to keep a history) in the first place.
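Concretely, "rewrite it in shorter form" amounts to something like this hypothetical sketch, where `summarize()` stands in for whatever model call you'd use and the budget is made up; every compaction pass risks silently dropping the detail you later need, which is exactly the documentation failure mode:

```ts
// Hypothetical history-compaction loop: once the accumulated
// development history exceeds a budget, fold the oldest half into
// a single model-written summary.
type Entry = { feature: string; discussion: string };

const MAX_CHARS = 50_000; // made-up context budget

async function compactHistory(
  history: Entry[],
  summarize: (text: string) => Promise<string>, // stand-in for an LLM call
): Promise<Entry[]> {
  const size = history.reduce((n, e) => n + e.discussion.length, 0);
  if (size <= MAX_CHARS) return history;

  const half = Math.floor(history.length / 2);
  const oldest = history.slice(0, half);
  const summary = await summarize(
    oldest.map((e) => `## ${e.feature}\n${e.discussion}`).join("\n\n"),
  );
  // Detail in `oldest` that the summary omits is gone for good.
  return [
    { feature: "(compacted history)", discussion: summary },
    ...history.slice(half),
  ];
}
```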
Things like MemGPT/Letta, ToM-SWE, and Voltropy have made long-context documentation pretty manageable. You could probably build some specialized tooling/prompts specifically for development artifacts, too. But I'll be the first to admit this is basically "throw more agents at the problem."
I agree with this; I like spec-driven development tooling partly for this reason. That said, what I've found is that I often don't include enough of the "why" in my prompt artifacts. The "what" and "how" are pretty well covered, but sometimes I find myself looking back at them thinking, "Why did I do this?" I've started including it, but it sometimes feels weird, because I wonder, "Why would the LLM 'care' about this story?"
The article doesn't mention which LLM was used or the total cost. If they used ChatGPT or something similar, wouldn't the token cost alone be very expensive?
There is a cost associated with each investigation (that the Mendral agent does), and we spend time tuning the orchestration between agents. Yes, it's expensive, but we're making money on top of what it costs us. So far we've been able to bring the cost down while increasing the relevance of each root-cause analysis.
We're writing another post about that specifically; we'll publish it sometime next week.
Same here in Safari: the first strike was OK, the second froze for a few seconds. I did like the sort of "obstruction" effect of the rain on the house, for example. Obviously a limitation of the character-based renderer, but it adds a pleasing kind of obscuring effect.