
They kill exposure to anything fresh: you teach it a couple of things you like, and then it keeps you swimming in the same pool.

Rather than discovering something new, everyone just watches The Office and Parks and Rec again and again. Now those theme songs make my skin fucking crawl.



> ... and then it keeps you swimming in the same pool.

This is a consequence of the metrics being optimized; it's not a fault of the algorithm per se.


It's not a fault at all. If you'll spend more time watching videos when you're recommended stuff YouTube knows you already like, then that's exactly what it's going to recommend. YouTube just wants you to watch more videos; it doesn't care whether you're exposed to a variety of content.
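To make that concrete, here's a toy sketch in Python (entirely my own construction, not YouTube's actual system): a recommender that purely exploits predicted watch time keeps serving the pool it already knows, and a channel with no history never gets a look in.

    import random

    # Hypothetical per-user estimates of minutes watched per channel,
    # learned from history. A channel with no history starts at zero.
    predicted_watch_minutes = {
        "the_office_clips": 12.0,
        "parks_and_rec_clips": 11.5,
        "brand_new_channel": 0.0,
    }

    def recommend_greedy(estimates):
        # Pure exploitation: always serve the argmax of predicted engagement.
        return max(estimates, key=estimates.get)

    for _ in range(5):
        pick = recommend_greedy(predicted_watch_minutes)
        # Watching reinforces the estimate, so the same channel keeps
        # winning and "brand_new_channel" is never tried.
        predicted_watch_minutes[pick] += random.uniform(0.5, 1.5)
        print(pick)  # prints "the_office_clips" every time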


Except I think there's a convincing argument to be made that engagement will go down over time if the algorithm makes no attempt to prioritize or suggest novel content.

On the rare occasions I discover a new channel, it's almost always from some source other than the algorithm: a referral from a friend, this site, another YouTuber, etc. My viewership of the same repetitive roster of videos absolutely tails off until I find something new elsewhere.

For example, in months of being subscribed to my mechanics [0] (who does incredibly engrossing and relaxing restorations of mechanical stuff), not once was I suggested a video from Baumgartner Restoration [1], an art conservator who produces videos with a similar attention to detail and high production value.

Thematically this should be an easy recommendation for YouTube to make, but evidently the content is just different enough that it scores as a false negative. After finding the latter channel independently, my viewing time absolutely rose for a while.

In theory, YouTube ought to be able to detect and learn from this signal of non-algorithmic discovery. Yet, here we are. (A rough sketch of what that might look like follows the links below.)

[0]: https://www.youtube.com/c/mymechanics

[1]: https://www.youtube.com/c/BaumgartnerRestoration
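For what it's worth, here's a rough Python sketch of that idea. Everything in it is hypothetical (the field names, the similarity map, the weights); it's just the shape of "treat an externally discovered channel as a strong signal and boost similar candidates", not anything YouTube actually exposes.

    from collections import Counter

    # Hypothetical watch log: "source" records how the user reached the video.
    watch_log = [
        {"channel": "mymechanics",            "source": "recommended"},
        {"channel": "BaumgartnerRestoration", "source": "external_link"},
    ]

    def discovery_signals(log):
        # Channels the user reached without the recommender's help.
        return Counter(w["channel"] for w in log if w["source"] == "external_link")

    def boost_candidates(scores, signals, similar_to, weight=2.0):
        # Upweight candidates similar to externally discovered channels.
        boosted = dict(scores)
        for channel in signals:
            for neighbor in similar_to.get(channel, []):
                if neighbor in boosted:
                    boosted[neighbor] *= weight
        return boosted

    # Hypothetical similarity map and baseline candidate scores.
    similar_to = {"BaumgartnerRestoration": ["ties_restoration", "clock_repair_tv"]}
    scores = {"the_office_clips": 0.9, "ties_restoration": 0.3, "clock_repair_tv": 0.2}
    print(boost_candidates(scores, discovery_signals(watch_log), similar_to))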


Stagnation is a known problem in reinforcement learning and similar methods: it's very easy to get stuck at a local maximum. My favorite fun example is BipedalWalkerHardcore-v2 (https://gym.openai.com/envs/BipedalWalkerHardcore-v2/), where a standard DDPG agent (https://arxiv.org/abs/1509.02971) will get stuck at pits in the environment. Although it could get a higher score if it learned to jump, the penalty for falling in makes it settle on standing still and running out the timer. Video: https://www.youtube.com/watch?v=DEGwhjEUFoI

I suspect something similar is going on with video/music recommendations: when a bad novel suggestion is made, the penalty (the user immediately clicks off) is likely too high for traditional reinforcement methods to overcome.
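A toy two-armed bandit (my own construction; nothing like a full DDPG setup) shows the dynamic. The "novel" arm is better in expectation, but a purely greedy learner that eats one early penalty from it stands pat on the "safe" arm forever, while a little epsilon-greedy exploration recovers.

    import random

    random.seed(0)

    def pull(arm):
        # "safe": small steady reward (serve what the user already likes).
        # "novel": better in expectation, but sometimes the user clicks off.
        if arm == "safe":
            return 0.1
        return 2.0 if random.random() < 0.5 else -1.0  # E[novel] = 0.5 > 0.1

    def run(epsilon, steps=2000):
        q = {"safe": 0.0, "novel": 0.0}  # running-average value estimates
        n = {"safe": 0, "novel": 0}
        for arm in q:                    # pull each arm once to seed estimates
            n[arm] += 1
            q[arm] = pull(arm)
        for _ in range(steps):
            if random.random() < epsilon:
                arm = random.choice(["safe", "novel"])  # explore
            else:
                arm = max(q, key=q.get)                 # exploit
            r = pull(arm)
            n[arm] += 1
            q[arm] += (r - q[arm]) / n[arm]  # incremental mean update
        return q, n

    print(run(epsilon=0.0))  # greedy: with this seed, one -1.0 from "novel"
                             # and it never tries that arm again
    print(run(epsilon=0.1))  # exploration learns E[novel] ~ 0.5 and switches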


I agree with you, and I have the same feeling about Spotify: its algorithm just doesn't work for me, so I have to search somewhere else for recommendations.


My (wild) guess is that it would be very hard to come up with a universal algorithm that doesn't exhibit this characteristic, due to some effect comparable to the class-imbalance problem, but with added feedback effects.
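Here's a tiny simulation of that feedback loop (completely made-up numbers): allocate exposure in proportion to past clicks, and five items with identical true appeal still typically end up with very uneven click counts, because early random winners get shown more and therefore clicked more.

    import random

    random.seed(1)

    N_ITEMS = 5
    TRUE_APPEAL = 0.1       # every item equally likely to be clicked when shown
    clicks = [1] * N_ITEMS  # uniform prior: one pseudo-click each

    for _ in range(10_000):
        # The feedback loop: exposure is proportional to past clicks.
        item = random.choices(range(N_ITEMS), weights=clicks)[0]
        if random.random() < TRUE_APPEAL:
            clicks[item] += 1

    print(clicks)  # typically very uneven, despite identical true appeal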


On the other hand, I taught Pandora to only play songs by artists who had done heroin, and that's kind of cool. It has tons of variety, from Ray Charles to Johnny Cash to Alice in Chains, and it finds artists I had no idea about, like James Taylor. Also, don't try to code while listening to my horses channel... there may be variety, but there is also a common quality of alert-sedation.



