I assume you're thinking of Kevin Egan's paper from 2009 [1]? There have been some follow-ups since, but the basic idea is "you can filter the hell out of it, kind of". Sadly, while the results look okay, that kind of filtering is still prone to over-blurring. The frames described actually focus on hair, which is a perfect example of what wouldn't work well in those filtering systems: it requires so many samples for anti-aliasing that motion blur comes "for free".
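To make the trade-off concrete, here's a minimal sketch (not Egan's method, just the underlying stochastic sampling it post-processes): each pixel averages shading over jittered times within the shutter interval. At low sample counts the result is noisy and needs aggressive filtering, which is where the over-blurring risk comes from; at high sample counts the blur converges on its own. All names and the toy scene are made up for illustration.

```python
import random

def shade(x, t):
    # Toy scene: a bright region whose right edge sweeps from
    # x = 0.3 to x = 0.7 over the shutter interval t in [0, 1).
    edge = 0.3 + 0.4 * t
    return 1.0 if x < edge else 0.0

def render(width, spp, seed=0):
    # Stochastic motion blur: per pixel, average `spp` shading
    # samples taken at stratified, jittered shutter times.
    rng = random.Random(seed)
    img = []
    for px in range(width):
        x = (px + 0.5) / width
        acc = 0.0
        for s in range(spp):
            t = (s + rng.random()) / spp  # jittered time sample
            acc += shade(x, t)
        img.append(acc / spp)
    return img

# Low spp is noisy in the blurred region and would need
# Egan-style filtering; high spp converges without it.
noisy = render(64, 4)
converged = render(64, 256)
```

Pixels entirely inside or outside the swept region converge immediately; only the blur gradient between x = 0.3 and x = 0.7 is noisy at 4 spp, which is exactly the part a reconstruction filter has to clean up.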
Engineering a cheap, high-quality solution that's viable for production is the real challenge being spoken of, of course. Research work only gives a starting point for that, and in this case it was other parts of the pipeline that were optimized to support the needs of motion blur.
OK, I'm not a rendering person, but didn't Ravi Ramamoorthi have a series of papers that solved motion blur?