They have made posts (well, I can remember at least one, which I think was well-received) about how people working on AI safety/alignment should avoid “leaving their fingerprints on the future”, meaning, aligning the AI specifically to their own values rather than to something more impartial.
So, I think they generally believe they oughtn’t have it aligned specifically to their values, but that they should instead try to align it with a more impartial aggregate of human values. (Though they might imagine that a “coherent extrapolation” of typical human values will be pretty close to their own. But then, I suppose most people think their values are the correct ones, or something else playing the role of “correct” for the people who don’t think there are “correct values”.)
Do they put enough of an emphasis on this? I’m not sure. I can say that I certainly don’t look forward to the development of such a superintelligence (partially on account of how my values differ from theirs and from what I imagine to be “the average”). But, still, “aligned with some people” sounds a fair bit better than “not aligned with anyone”.
So, I’m mostly hoping that either superintelligence won’t be developed, or that if it is, God intervenes and prevents it from causing too much of a problem.
Still, when you find yourself trying to develop God in order to take over the world (or even, sometimes, the universe), you should really be asking "Wait. Are we the baddies?" and not responding to that with "oh no, we'll give other people's ideas of 'value' consideration too, especially if they're not intellectually disabled in our judgement".
> But, still, “aligned with some people” sounds a fair bit better than “not aligned with anyone”.
I'm not sure about that. Part of the capacity to do great harm comes from the near miss. Fighting off mindless grey goo is better than some techbro trying to "optimize" you against your will.
Of course, this line of thinking requires accepting their inane premises, but it's worrisome that even if they were right, their astonishing hubris would make them more of an existential threat than anyone else.
I think the greater relevance is the unhealthy mental states underlying the doom cult. A person living their life in service of a very fringe idea, that we'll all likely suffer ultimate doom beyond measure unless their social group gets its way, is going to exhibit all manner of compromised judgement.