I've hit a number of small "oh, I could do that if..." use cases over the years; none immediately springs to mind, though.
There are things that a shell function can do that a spawned program cannot - mostly to do with affecting shell state. Updating shell variables and opening file descriptors are obvious examples.
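A minimal sketch of that distinction (`bump` is just a made-up name): a function runs in the current shell and can mutate its state, while anything run in a child process, even the same function inside a subshell, cannot.

```shell
#!/bin/sh
counter=0

# A function runs in the calling shell, so it can update its variables.
bump() { counter=$((counter + 1)); }

bump
bump
echo "$counter"   # 2

# The same function in a subshell runs in a child process;
# its update to counter is lost when the subshell exits.
( bump )
echo "$counter"   # still 2
```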
This is one of the reasons shell languages feel somewhat functional. But they weren't really designed as functional languages, so the result is kind of clunky.
On the subject of "oh, I could do that if..." I know exactly how you feel. I really want the ability to pass multiple filestreams to a command, so you could, say, use cat on the outputs of two commands. Assuming the syntax was @`<pipeline>`, it would look something like this:
foo|grep bar|cat @`baz|grep bar`|sort -n|tail
The above script assumes that the commands foo and baz generate lines starting with timestamps, and then picks out the most recent entries. I chose the @` syntax for its similarity to the Lisp ,@ splicing syntax, but it could be anything. Unfortunately, I think the only way to do this would be to monkey-patch open(2) or fopen(2), probably both, which would be a Really Bad Idea (tm).
The <(...) syntax seems to be what I was looking for: passing a stream/pipeline to a command where a filename was expected. Hey, you learn something every day.
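For anyone else who hadn't run into it: in bash (and zsh/ksh, not plain POSIX sh), <(cmd) runs cmd and expands to a /dev/fd path, so a command that expects filenames can read the pipeline's output. A toy version of the example above, with printf standing in for foo and baz:

```shell
#!/bin/bash
# Each <(...) expands to a file-like path (e.g. /dev/fd/63) whose
# contents are that pipeline's stdout, so cat sees two "files".
cat <(printf '1 old\n3 newer\n') <(printf '2 mid\n4 newest\n') | sort -n | tail -n 2
# prints the two most recent (largest-timestamp) lines:
#   3 newer
#   4 newest
```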
EDIT: Unless you're just piping both of them to STDIN. Piping two streams to STDIN is useful, just not as useful as the thing that I wanted. This seems to be what you're doing.
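To spell out that simpler case: merging two commands' output into a single stdin is plain POSIX, since a { ...; } group (or a subshell) sends both outputs down one pipe. Again with printf standing in for the real commands:

```shell
#!/bin/sh
# Both printfs write to the same pipe; sort reads one merged stream.
{ printf 'from foo\n'; printf 'from baz\n'; } | sort
# prints:
#   from baz
#   from foo
```

The difference from <(...) is that the consumer sees one concatenated stream on stdin rather than two separately addressable files.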