It has always been my opinion (and borne out by our internal statistics, when counting self-review in the form of manual testing and automated test writing) that reviewing code (to the level of catching defects) often takes more time than actually building the solution. So I have a pretty big concern that the majority of AI code generation ends up adding more time to tasks than it saves, because it optimizes the cheap tasks at the expense of the costly ones.
It also screws up code smells, disguising what used to be a "this looks weird, better investigate more in-depth" structure into something easily overlooked. So you have to be on guard all the time instead of being able to rely on your experience to know what parts to spend the extra effort on.
Absolutely! When you review code you need to understand the problem space, the thought process that created the code, and the concrete implementation. The second step has always been hard, and AI makes it an order of magnitude harder IMO. Writing code was never the hard part.
as much as you or I may be against it, AI coding will inevitably move away from human review and toward more automated means of measuring program correctness
this was already happening even before AI - human review is limited, linting is limited, type checking is limited, automated testing is limited
if all of these things were perfect at catching errors then we would not need tracing and observability of production systems - but they are imperfect and you need that entire spectrum of things from testing to observability to really maintain a system
so if you said - hey I'm going to remove this biased, error prone, imperfect quality control step and just replace it with better monitoring... not that unreasonable!
I'm actually all for automated measures of program correctness, and I think that manual testing is the last resort of tight budgets, outside of highly complex integration issues. Adding more automated test cases built into the CI pipeline, from the unit level up to the highest levels (as long as they're not useless fluff), usually ensures a much lower defect rate. AI can help with that process, but only if we're diligent in checking that it isn't just generating pages and pages of ineffective fluff tests - so we still end up needing to check the code and the tests that AI has written, and I'm still concerned that that ends up being more expensive in the long run.
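To make the "fluff test" concern concrete, here's a hypothetical sketch (the `apply_discount` function and both tests are invented for illustration, not taken from any real codebase): the first test executes the code and inflates coverage numbers but asserts almost nothing, while the second pins down actual behavior and edge cases.

```python
def apply_discount(price: float, percent: float) -> float:
    """Return price reduced by percent, never below zero."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return max(price * (1 - percent / 100), 0.0)

# Fluff test: runs the code, so coverage goes up, but the assertion
# is vacuous - nearly any buggy implementation still passes.
def test_discount_fluff():
    result = apply_discount(100.0, 25.0)
    assert result is not None

# Effective test: checks concrete values and the error path, so a
# regression in the math or the bounds check actually fails.
def test_discount_effective():
    assert apply_discount(100.0, 25.0) == 75.0
    assert apply_discount(0.0, 50.0) == 0.0
    try:
        apply_discount(100.0, 150.0)
        assert False, "expected ValueError for percent > 100"
    except ValueError:
        pass
```

Both tests "cover" the same lines, which is exactly why line coverage alone can't tell you whether an AI-generated test suite is pulling its weight.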