Hmm, it's a bit more subtle than that. There are all kinds of ways to misrepresent progress; the simplest is to pick bogus metrics. For example, much of the machine-learning-based dialog-system work uses metrics that were designed for machine translation (essentially n-gram overlap). That is a metric you can improve on, but the improvement is meaningless, so the resulting system is very much of the dog-and-pony-show variety. Similarly, people train and test ML systems in simulation and report that the metrics get better, without ever addressing the sim2real gap.
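To make the n-gram-overlap problem concrete, here is a minimal sketch of a BLEU-like n-gram precision (the function name and example sentences are mine, for illustration): a bland reply that happens to share phrasing with the reference scores perfectly, while an equally valid but differently worded reply scores zero.

```python
from collections import Counter

def ngram_precision(candidate, reference, n=2):
    """Fraction of the candidate's n-grams that also appear in the reference."""
    cand = candidate.lower().split()
    ref = reference.lower().split()
    cand_ngrams = Counter(tuple(cand[i:i + n]) for i in range(len(cand) - n + 1))
    ref_ngrams = Counter(tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
    # Clipped count: each candidate n-gram is credited at most as often
    # as it occurs in the reference.
    overlap = sum(min(c, ref_ngrams[g]) for g, c in cand_ngrams.items())
    total = sum(cand_ngrams.values())
    return overlap / total if total else 0.0

reference = "i am not sure what you mean"
print(ngram_precision("i am not sure", reference))           # 1.0 (perfect score)
print(ngram_precision("could you rephrase that", reference))  # 0.0 (zero credit)
```

Both candidates are reasonable dialog responses, yet the metric ranks one as perfect and the other as worthless purely on surface wording, which is exactly why optimizing such a score says little about dialog quality.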
> I asked an Ivy League professor about hype and they said the exact opposite of your Ivy League professor
Context matters. I have heard the opposite response as well, from a professor I was working for, while he was doing exactly that: packaging narratives without regard to how well the methods actually work. The honest take came at a wine-and-cheese party from a professor I was not working for. There is a real conflict of interest here, and when people stand to gain by withholding information, they will do just that.