At my previous job, the two most important C files exceeded 500 KB each: about 20k lines in one and 25k lines in the other (the expression parser and the declaration parser, respectively).
Emacs doesn't do so well on those files either, for reasons I don't quite understand. C-Home (M-<) from the end of the file took 3+ seconds on a 3.8GHz i7 machine, yet the command those keys are bound to - beginning-of-buffer - was instantaneous, as was M-g g 1 (go to line 1).
I would assume it's the syntax highlighting. Emacs uses gap buffers for its data store, and I faintly remember that regexes can be slow against them, but maybe my memory is failing me on this one.
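For context, a gap buffer stores the whole text in one flat array with a block of unused space (the "gap") at the cursor: insertions at the cursor are O(1), while moving the cursor shifts characters across the gap, costing time proportional to the distance moved. Below is a minimal sketch of the idea in C; it is not Emacs's actual implementation, and the fixed capacity and function names are illustrative assumptions.

```c
#include <assert.h>
#include <string.h>

#define CAP 64  /* illustrative fixed capacity; real buffers grow */

/* Text lives in buf[] with an unused gap [gap_start, gap_end);
   gap_start doubles as the cursor position. */
typedef struct {
    char buf[CAP];
    int gap_start;
    int gap_end;
} GapBuf;

void gb_init(GapBuf *g) { g->gap_start = 0; g->gap_end = CAP; }

/* Move the gap (cursor) to pos by shifting characters across it;
   cost is proportional to the distance moved. */
void gb_move(GapBuf *g, int pos) {
    while (g->gap_start > pos)                 /* shift gap left  */
        g->buf[--g->gap_end] = g->buf[--g->gap_start];
    while (g->gap_start < pos)                 /* shift gap right */
        g->buf[g->gap_start++] = g->buf[g->gap_end++];
}

/* Insert a character at the cursor: O(1) once the gap is here. */
void gb_insert(GapBuf *g, char c) {
    assert(g->gap_start < g->gap_end);         /* gap not exhausted */
    g->buf[g->gap_start++] = c;
}

/* Copy the logical text (everything outside the gap) into out. */
void gb_text(const GapBuf *g, char *out) {
    memcpy(out, g->buf, g->gap_start);
    memcpy(out + g->gap_start, g->buf + g->gap_end, CAP - g->gap_end);
    out[g->gap_start + (CAP - g->gap_end)] = '\0';
}
```

This also hints at why regex-based fontification can hurt: a search that scans the buffer has to skip (or work around) the gap, and repeated cursor motion between distant points keeps paying the shift cost.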
Multiple files would mostly just make them more fiddly to work with, juggling all these buffers. In my experience, the modern fashion for tiny source files is not intrinsically better, especially when all the code is related to a functional area (and is mutually recursive, in a compiler's parser).
You'd still rely on search to navigate, and whether the target is in the same file or a different one is just an IDE / editor detail. If anything, having fewer source files meant you always knew which file to switch to for any given function, which you could then find with an incremental search.