I think an additional big part of why LLM-aided coding is so draining is that it has you constantly refreshing your mental model of the code.
Making sense of new or significantly changed code is very taxing. Writing new code is less taxing as you're incrementally updating the model as you go, at a pretty modest pace.
LLMs can produce code at a much higher rate than humans can make sense of it, and assisted coding introduces something akin to cache thrashing, where you constantly have to rebuild your mental model of the system to keep up with the changes.
Your bandwidth for comprehending code is as limited as it ever was. Pushing that ability to its limits is pretty unpleasant and, in my experience, comes at the cost of other mental capabilities.
It rings true. But that leaves a question in front of us: when you make the changes yourself, you're also building and adapting your mental model of the system, so which approach takes less effort in total?