IBM's AS/400 (and all of its renames) is a 128-bit architecture. The huge address space is useful for implementing capability security on memory itself, and for using single-level store for the whole system (addresses span RAM and secondary storage such as disks, NVMe, etc.)
One of my mentors is one of the IBM engineers who developed the original AS/400’s capability-based security architecture way back in the early eighties. I can confirm that (according to her) the 128-bit addressing was indeed a very convenient manner of implementing the system. However, nobody ever expected (nor expects, I suspect) that those addresses will ever be used to actually access that amount of memory. It’s a truly astronomical amount of memory, on the order of grains-of-sand-on-countless-planets...
To put it another way, it's not just enough to count the grains of sand on a beach; it's enough to count all the atoms in all the grains of sand on planet Earth. Give or take a few orders of magnitude[1].
(There's no meaningful notion of size here - the comparisons are just to show how many distinct addresses fit in 128 bits of address space.)
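For anyone who wants to sanity-check the comparison, here's a rough back-of-envelope calculation. The grain count and atoms-per-grain figures are loose assumptions (commonly cited ballpark estimates, not measurements), so treat the result as order-of-magnitude only:

```python
# Back-of-envelope: 2^128 addresses vs. atoms in all of Earth's sand.
# All physical constants below are rough assumptions, good to maybe
# an order of magnitude either way.

ADDRESS_SPACE = 2 ** 128          # ~3.4e38 distinct addresses

GRAINS_OF_SAND = 7.5e18           # assumed: rough count of sand grains on Earth
ATOMS_PER_GRAIN = 7.8e19          # assumed: ~1 mm^3 quartz grain, ~2.6 mg SiO2

atoms_in_all_sand = GRAINS_OF_SAND * ATOMS_PER_GRAIN  # ~5.9e38

print(f"2^128 addresses   = {ADDRESS_SPACE:.2e}")
print(f"atoms in all sand = {atoms_in_all_sand:.2e}")
print(f"ratio             = {atoms_in_all_sand / ADDRESS_SPACE:.1f}")
```

Under these assumptions the two numbers land within a single order of magnitude of each other, which is about as close as "give or take a few orders of magnitude" ever gets.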
Blinks a few times
Ultimately fails to mentally grasp and make useful sense of the number due to its sheer size
As an aside, apparently DNA can store a few TB/PB (I don't remember which). The age of optimizing for individual bytes as a routine part of "good programming" is definitely over, I guess. (I realize this discussion is about address space and not capacity, but still)