Jump straight to the second option. You have to presume that the content they sent you has no relation whatsoever to their actual understanding of the matter.
We all use Claude at my work and I have a very strict rule for my boss and my team: we don’t say “I asked Claude”. We use it a lot, but I expect my team to own it.
I actually think there’s almost an acceptable workflow here of using LLMs as part of the medium of communication. I’m pretty much fine with someone sending me 500 lines of slop with the stated expectation that I’ll dump it into an LLM on my end and interact with it.
It’s the asymmetric expectations—that one person can spew slop but the other must go full-effort—that for me personally feels disrespectful.
I also don't mind that. Summarized information exchange feels very efficient. But for sure, it seems like a societal expectation is emerging around these tools right now: expect me to put as much effort into consuming data as you did producing it. If you shat out a bunch of data from an LLM, I'm going to use an LLM to consume that data as well. It's not reasonable for you to expect me to manually parse it, just as I wouldn't expect you to do the same.
However, since people are not going to readily reveal that they used an LLM to produce said output, the most logical move is to always use an LLM to consume inputs, because there's no reliable way to tell anymore whether something was created by an LLM or a human.
This kinda risks the broken-telephone problem, like translating from one language to another and then to a third: context and nuance are always lost along the way.
Just give me the bullet points, it's more efficient anyway. No need to add tons of adjectives and purple prose around it to fluff it up.
Some day someone brilliant will discover the idea of "sharing prompts" to get around this issue. So, instead of sending the clean and summarized LLM output, you'll just send your prompt, and then the recipient can read that, and in response, share their prompt back to the original sender.
I think we'll eventually move away from using these verbose documents, presentations, etc for communication. Just do your work, thinking, solving problems, etc while verbally dumping it all out into LLM sessions as you go. When someone needs to be updated on a particular task or project, there will be a way to give them granular access to those sessions as a sort of partial "brain dump" of yours. They can ask the LLM questions directly, get bullet points, whatever form they prefer the information in.
That way, thinking is communication! That's kind of why I loved math so much - it felt like I could solve a problem and succinctly communicate with the reader at the same time.
If you write 3 bullet points and produce 500 pages of slop, why would my AI summarize it back to the original 3 bullet points and not something else entirely?
It won't, and that's the joke. They will write three bullet points, but their AI will only focus on the first two and hallucinate two more to fill out the document. Your AI will ignore them completely and go off on some unrelated tangent based on one of the earlier hallucinations. Anthropic collects a fee from both of you and is the only real winner here.
It's way too early to tell. Safe to say that it's different. But it might be better than some of our current async comms.
Suppose I spend time, thought, and research on an idea and a corpus of information, dump all of that into an LLM, and converse with it, eventually producing an artifact that's partly the LLM's processing of that corpus and partly the result of my direction. Then you take that artifact, drop it into your own LLM, and interrogate it through your own perspective and lenses. That's going to go in directions I may not have imagined for you, but it will still contain the kernel of my perspective. And you could genuinely interrogate the thing, not just sit back and think about it.
No idea whether this is faster/better or shallower/deeper or if it encourages us to connect more or differently as people or what-have-you. At present I'm not even sure I care, personally, about measuring differences on these traditional axes. It just seems like a vast new communication medium worthy of some exploration so that we can collectively have some idea what we're talking about when we do start to judge it.
> It’s the asymmetric expectations—that one person can spew slop but the other must go full-effort—that for me personally feels disrespectful.
This has always been the case. Have some junior shit out a few thousand lines of code, move on, and leave it to the senior cleanup crew to figure out what the fuck just happened...
Yes, though setting up asymmetric expectations usually requires a power imbalance, so it might instead be a PM or someone with influence but without technical acuity creating that initial kLoC.
If you shove content at me that I even suspect was AI generated I will summarily hit the delete button and probably ban you from sending me any form of communication ever again.
It's a breach of trust. I don't care if you're my friend, my boss, a stranger, or my dog - it crosses a line.
I value my time and my attention. I will willingly spend it on humans, but I most certainly won't spend it on your slop when you didn't even think I was worth a human effort.
Or I'll walk up to your desk and ask you to explain it.