You can do this with machine-verified programs. It's like proving a maths theorem: you don't check it for every value, you construct a rigorous argument that it must hold for all values.
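To make that concrete, here's about the smallest possible example in Lean (Lean is just my pick of prover for illustration): one proof covers every natural number at once, which no finite test suite can do.

    -- Proved once, holds for all a and b; no enumeration of cases.
    theorem add_comm_all (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b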
That is why I like using random data generators for tests. You feed in some static data and leave the rest random. Every once in a while a bug pops out when a previously passing test suddenly fails.
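Roughly this pattern, sketched in Python with the stdlib (the function under test and all names here are hypothetical):

    import random
    import unittest

    def normalize_whitespace(s: str) -> str:
        # Hypothetical function under test: collapse whitespace runs.
        return " ".join(s.split())

    class TestNormalize(unittest.TestCase):
        def test_random_padding(self):
            core = "hello world"          # the static part
            for _ in range(100):          # the random part
                pad = "".join(random.choice(" \t\n")
                              for _ in range(random.randint(0, 20)))
                # Property that must hold for *any* padding.
                self.assertEqual(normalize_whitespace(pad + core + pad), core)

    if __name__ == "__main__":
        unittest.main()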
None of the above methods are used for testing. You use boundary testing, branch testing, equivalence partitioning, etc. Random data is not a good testing method.
Except for the fact that it is exactly the method that has been used to discover a large number of critical bugs in the most popular OSS projects (including SQLite):
http://lcamtuf.coredump.cx/afl/
Fuzzing isn't really practical if all you do is generate a totally random bit stream as input. There are far more clever and robust strategies for hitting as many edge cases as possible. See AFL[1] for details on generating smart random input files. You can also combine that with fairly advanced dynamic execution analysis to fuzz against unknown processor instruction sets, as in sandsifter[2].
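A toy version of the mutation idea in Python (real AFL is coverage-guided, which this sketch deliberately omits; `target` is whatever hypothetical parser you point it at):

    import random

    def mutate(seed: bytes) -> bytes:
        # AFL-style primitive mutations: flip one bit or append a byte.
        data = bytearray(seed)
        if data and random.random() < 0.5:
            i = random.randrange(len(data))
            data[i] ^= 1 << random.randrange(8)    # single bit flip
        else:
            data.append(random.randrange(256))     # grow the input
        return bytes(data)

    def fuzz(target, seed: bytes, iterations: int = 10_000) -> None:
        # Starting from a valid seed keeps mutants near the interesting
        # parse paths instead of being rejected at the first byte.
        for _ in range(iterations):
            candidate = mutate(seed)
            try:
                target(candidate)
            except Exception as exc:
                print("crashing input:", candidate, "->", exc)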
On the contrary: for years the most prolific fuzzers basically did just generate random bitstreams, and that technique will still find vulnerabilities in any memory-unsafe software whose bugs haven't already been fished out by those same dumb fuzzers.
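And the "dumb" version really is about this short (a sketch; `target_cmd` stands in for whatever command-line program you're feeding):

    import os
    import random
    import subprocess
    import tempfile

    def dumb_fuzz(target_cmd, iterations=1000, max_len=4096):
        # Pure random bytes: no grammar, no mutation, no feedback.
        for _ in range(iterations):
            blob = os.urandom(random.randint(1, max_len))
            with tempfile.NamedTemporaryFile(delete=False) as f:
                f.write(blob)
                path = f.name
            result = subprocess.run(target_cmd + [path], capture_output=True)
            if result.returncode < 0:   # killed by a signal, e.g. SIGSEGV
                print("crashing input kept at", path)
            else:
                os.unlink(path)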
Sorry, I strongly disagree. Random data with a few static arguments is a remarkably effective way to test. Adding in some chaos finds bugs: "why did that test fail on the 100th run... ohhhh"
I try to use random data whenever possible: with a proper setup there are fewer and smaller tests to write. End result: more bugs found.
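One practical addition to that setup (a stdlib-only sketch): log the seed, so the one-in-a-hundred failure is reproducible instead of a ghost.

    import random
    import time

    def fresh_rng() -> random.Random:
        # Seed from the clock, but print the seed: when a randomized test
        # fails once in a hundred runs, you can replay that exact run.
        seed = int(time.time() * 1000)
        print(f"test RNG seed: {seed}")
        return random.Random(seed)

    # Usage: rng = fresh_rng(); payload = rng.randbytes(64)
    # On failure, rerun with random.Random(<logged seed>) to reproduce.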