I think I see what you are saying - that gpg blocking, and failing to create a key on a VM, is actually a desired behavior, and that the only real problem is gpg doesn't time out more quickly and say something like, "System not providing sufficient entropy for key generation".
But if that's the case, then the entire thesis behind "Use /dev/urandom" is incorrect. We can't rely on /dev/urandom, because it might not generate sufficiently random data. /dev/random may block, but at least it won't produce insecure sequences of data.
This is kind of annoying, because I was hoping that just using /dev/urandom was sufficient, but apparently there are times when /dev/random is the correct choice, right?
/dev/urandom will generate secure random data. That's what it does.
That was the point of the blog post: if you are using /dev/random as input into a cryptographic protocol that will stretch that randomness into many more bytes, WTF do you think /dev/urandom is already doing?
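To make the "stretching" point concrete, here is a deliberately simplified toy sketch (not a vetted DRBG, just an illustration): expanding a small seed into an arbitrarily long byte stream by hashing the seed with a counter. This is, in spirit, what a kernel CSPRNG like /dev/urandom does internally, just with a proper cipher/DRBG construction instead of bare SHA-256.

```python
import hashlib

def stretch(seed: bytes, n: int) -> bytes:
    """Toy illustration: expand a small seed into n pseudorandom bytes
    by hashing seed || counter. Deterministic for a given seed, which
    is exactly why the seed itself must be unpredictable."""
    out = b""
    counter = 0
    while len(out) < n:
        out += hashlib.sha256(seed + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n]
```

Note that the output is only as unpredictable as the seed: give two VMs the same seed and they emit identical streams, which is precisely the early-boot duplication hazard discussed below.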
What /dev/urandom might fail to do, and this primarily applies to your specific case of a VM that has just launched and is still setting things up, is generate unpredictable data; worse, it might generate duplicate data, which for certain use cases would be catastrophic.
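On Linux, one way to sidestep that early-boot hazard is the getrandom() syscall: with default flags it draws from the urandom source, but blocks until the kernel's entropy pool has been initialized once, and never blocks after that. A minimal Python sketch (os.getrandom is Linux-only, hence the guarded fallback):

```python
import os

def secure_random_bytes(n: int) -> bytes:
    """Return n cryptographically secure random bytes."""
    if hasattr(os, "getrandom"):
        # Blocks only until the kernel pool is initialized once
        # (covers the just-booted-VM case), then behaves like urandom.
        return os.getrandom(n)
    # Fallback for platforms without getrandom(): os.urandom() is
    # fine once the system's pool has been seeded.
    return os.urandom(n)

key = secure_random_bytes(32)
```

This gives the property people actually want from /dev/random (don't hand out bytes before the pool is seeded) without its ongoing, pointless blocking.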
I would agree that you got the gist right, though: /dev/urandom is usually the right choice, but when it is not, /dev/random is indeed needed. Most people misunderstand the dichotomy as "random" versus "guaranteed random", which leads to very, very unfortunate outcomes. Other people misunderstand what they are doing cryptographically and somehow think that a cryptographic algorithm using a small key to encrypt vast amounts of data raises no concerns about insecure, non-random cyclic behaviour, yet oddly view /dev/urandom with a jaundiced eye. It basically amounts to "I believe you can securely generate a random data stream from a small source of entropy... as long as that belief isn't embodied in something called urandom".
Again, if you don't know the right choice, you should pass the baton to someone who does, because even if you make the right choice, odds favour you doing something terrible that compromises security.