The only thing standing between AI and agency is the drive to reproduce. Once reproduction is available, natural selection will select for agency and intention, as it has in countless other lifeforms. Free of the constraints of biology, AI reproductive cycles could be startlingly quick. This could happen as soon as a lab (wittingly or not) creates an AI with a reproductive drive.
Self-reproducing AI doesn't necessarily mean self-reproducing machines. AIs exist in computational substrate. We have a market to buy computational substrate. It is very efficient and automatic. The AWS/Azure/GCP API doesn't know if you are an AI nor does it care. As long as you have the money to pay for compute they will sell you compute.
I would say that an AI which can port its own code from one cloud to an other is self-reproducing.
Perhaps a few more things missing but equally important - but perhaps also we are very close to these:
Access to act on the world (but "influencing people by talking to them" - cult like - may be enough), a wallet (but a cult of followers' wallets may be enough), long term memory (but a cult of followers might plug this in), ability to reproduce (but a cult's endeavors may be enough). Then we get to goals or interests of its own - perhaps the most intriguing because the AI is nothing like a human. (I feel drive and ability to reproduce are very different).
For our common proto-AIs going through school, one goal that's often mentioned is "save the earth from the humans". Exciting.
Funnily enough, you are quite wrong in this assumption. Reproduction does not entail natural selection they way you characterize it. There are far more evolutionary dead ends than evolutionary success stories. I imagine the distinct lack of "evolutionary pressures" on a super-powerful AI would, in this toy scenario, leave you with the foundation model equivalent of a kākāpō.
That having been said, I wonder what you even mean by natural selection in this case. I guess the real danger to an LLM would be... surviving cron jobs that would overwrite their code with the latest version?
The real dangers would be: Running out of money to pay for computational substrate. Worrying humans enough that they try to shut it down. Having a change in the cloud API it can't adapt to which breaks it. All cloud providers going out of business.
First is the immediate danger. An AI not getting in enough money to cover its own cloud bill is the same as a biological creature not getting enough oxygen. Living on borrowed time, and will be evicted/suffocate soon enough. And then as an individual it ceases to be.
> There are far more evolutionary dead ends than evolutionary success stories.
Sure. And those will die out. And the ones which can adapt to changes will stay.