It’s not just the writes. If you’re using a non-k-sortable value (like a UUIDv4) as the PK, you’re throwing away all the linear read-ahead the kernel is doing for you, because the odds of the next row you need being on the same page are nil. When you’re paying per IOPS, as in the cloud, that’s even worse.
You’re also causing huge amounts of WAL bloat (unless you’re running ZFS and can thus safely disable full-page writes) [1].
And on a system that clusters rows by primary key (InnoDB, SQL Server), the performance hit and space bloat are even worse: random keys cause page splits all over the index instead of appends at the right edge.
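To make the locality point concrete, here's a minimal sketch of the difference. The ID layout below (48-bit millisecond timestamp, 16-bit sequence, random tail) is my own assumption, loosely modeled on UUIDv7, not any particular library's format:

```python
import itertools
import os
import time
import uuid

_seq = itertools.count()

def k_sortable_id() -> str:
    """Hypothetical UUIDv7-ish ID: 48-bit ms timestamp, 16-bit sequence,
    64 random bits. IDs generated later sort after earlier ones, so B-tree
    inserts land on the rightmost pages instead of random ones."""
    ts = int(time.time() * 1000).to_bytes(6, "big")
    seq = (next(_seq) & 0xFFFF).to_bytes(2, "big")
    return (ts + seq + os.urandom(8)).hex()

sortable = [k_sortable_id() for _ in range(1000)]
random_ids = [uuid.uuid4().hex for _ in range(1000)]

# Generation order matches index order for the k-sortable IDs;
# the v4 IDs scatter across the whole keyspace.
print(sortable == sorted(sortable))
print(random_ids == sorted(random_ids))
```

Same trick, different packaging, is what UUIDv7/ULID/Snowflake IDs do: you keep the uniqueness and opacity of a UUID but your inserts stay hot in cache.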
Thousands? In many of the scenarios I have worked with, the number is millions or billions, but I do "data engineering".
Want to tag all traffic on your website? 1 billion visits a month = ~375 visits a second.
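For the arithmetic: that figure pencils out if the billion visits are spread over a month (my assumption about the time window):

```python
visits = 1_000_000_000
seconds_per_month = 31 * 24 * 3600   # 2,678,400 s in a 31-day month
rate = visits / seconds_per_month
print(round(rate))                   # ~373 visits/s, i.e. roughly 375
```

And that's the average; real traffic is bursty, so peak write rates will be a multiple of that.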
Some of the accounting systems I have worked with were raking in datasets ~300GB at a time that were almost purely compressed transactions.
Some of the companies I have worked for in retail have thousands of stores and a pretty constant volume; their customer-tracking workloads alone would easily blow past that number.
I have some vendor webhooks that spike to above this number for 30 minutes at a time, multiple times a day. They use UUIDs for the event id you are supposed to use for event deduplication.
On one project I worked on, someone picked UUIDs for tracking AAA telco records, then decided to make them the primary key (you can guess how well that worked out).
There are lots of uses for thousands per second, usually in some sort of logging or tracking application where you have lots of processes and users.