I feel like there are some key differences between the companies though.
The second one outlined for Meta is:
> Heavily-redacted undated internal document discussing “School Blasts” as a strategy for gaining more high school users (mass notifications sent during the school day).
This sounds a lot like Meta being intentionally disruptive.
The first one outlined for YouTube is:
> Slidedeck on the role that YouTube’s autoplay feature plays in “Tech Addiction” that concludes “Verdict: Autoplay could be potentially disrupting sleep patterns. Disabling or limiting Autoplay during the night could result in sleep savings.”
This sounds like YouTube proactively looking for solutions to a problem. And later on for YouTube:
> Discussing efforts to improve digital well-being, particularly among youth. Identified three concern areas impacting users 13-24 disproportionately: habitual heavy use, late night use, and unintentional use.
This sounds like YouTube taking actual steps to improve the situation.
> This sounds like YouTube taking actual steps to improve the situation.
The issue I take with statements like that is that they are saying one thing while doing the opposite. This document [1], for instance, shows that YouTube knew as early as April 2025 that infinite feeds of short form content can "displace valuable activities like time with friends or sleep", but that hasn't stopped them from aggressively pushing YouTube shorts everywhere.
The most charitable interpretation I can think of is that there are two factions, one worried about the effects of YouTube in teens and a second one worried about growth at all costs. And I don't think the first one is winning.
I think the reality for any product that has >7,000 employees working on it is that some people's job is to prioritize growth at all costs, some people's job is to prioritize the effects on vulnerable people, and the vast majority of them have other jobs to be doing. This sounds appropriate to me; not everybody can be worried about mental health at all times, and somebody needs to focus on growth.
There are plenty of examples that the mental health people aren't being completely steamrolled. Parental controls allow you to block Shorts for your kids. That doesn't sound like a "growth at all costs" mindset.
> I think the reality for any product that has >7,000 employees working on it is that some people's job is to prioritize growth at all costs, some people's job is to prioritize the effects on vulnerable people, and the vast majority of them have other jobs to be doing.
... it's not at all costs though; that would be easier, because then the situation would be more obvious (legibility is important, and so is plausible deniability)
so of course "growth hackers" (or whatever the folks responsible for growth are called nowadays... other than CFOs and CEOs) are simply the ones whose judgement and "worldview" about whose responsibility it is to manage the negative consequences of their increased revenue is very skewed; in other words, they mostly have elaborate self-serving explanations (excuses)
and many times that overlaps various user freedom arguments, arguments against paternalism, etc...
My YouTube use definitely isn't healthy, but it's still the only social app that asks me to take a break if I use it too long or late at night. That should be standard in any of these apps.
Does it recommend taking a break? Mostly I've seen it ask if I'm still watching. I've always assumed this is not for user benefit, but in order to not spend bandwidth on a screen that is not being looked at.
The only site I'm familiar with that has somewhat decent self-limiting functions built in is HN's no procrastination settings. But that's of course because HN isn't run to make money, but as a hobby.
Youtube isn't doing that for your health. It's so they're not wasting ads and bandwidth on users who aren't watching anymore, to maximize their profits. The sole purpose of a for-profit corporation is to generate revenue for its owners and shareholders. That's it. Nothing more, nothing less. Do not expect anything different. Corporations only care so long as it's profitable.
No, it sounds like youtube being fully aware of the consequences of their offering, but couched in terms that allow them to pretend they were not. 'could' indeed.
Not realistic to reply to all your replies re:youtube, but they've absolutely added some features to mitigate bedtime use and at least for me they were opt-out rather than opt-in.
I guess some companies try to limit the harm they do to children while profiting, and some companies try not to know the harm they do to children while profiting. What remains to be seen is how much harm we allow to be done to children in the name of profits. Maybe we even insist that things need to be a positive influence. Less profit, but maybe better for the economy overall. And the kids, if they matter.
I believe the whole point is that some people inside acknowledged the issue and made leadership aware of it, yet youtube still pushed Shorts aggressively. The documents are proof of awareness, so they can't pretend they were unaware of the issues.
The glass of the window does not have a frame. You want the glass to go into a rubber seal to really prevent air from getting in and whistling at high speeds. If there's a frame around it, then no problem, the seals move with the glass when you open the door. But if you don't have a frame then opening the door without retracting the glass will cause it to pull at the rubber seals. At best it'll wear the rubber faster, but eventually it'll pull the rubber seal out.
This is very common on cars where the windows don't have a frame. Before I had a Tesla I had a convertible Mustang. Because it was a soft top it didn't have the same kinds of seals. Instead it used lateral pressure to hold the window against some rubber. At freeway speeds the window would flex and let air in. Eventually the soft top started blocking the passenger side window from meeting the rubber, and there was always a 1/4" gap unless I rolled the window down a bit and then back up.
I'm a fan of this too, I think it's a very clever design. But I also think it'd be pretty trivial to make in a six axis CNC. Maybe even a 4 axis if you're clever with your mounting.
The algorithm for the checksum (the sixth digit) fails to catch one of the most common human errors, swapping adjacent digits. The UPC checksum algorithm handles this without significantly more complexity: multiply all of the digits in odd positions by 3, then add everything up. The check digit is chosen to make the sum a multiple of 10.
To use your example: 51076, you'd do `5*3 + 1 + 0*3 + 7 + 6*3 = 15 + 1 + 0 + 7 + 18 = 41`. The sixth digit would be 9 ((10 - (41 mod 10)) mod 10). Transposing two adjacent digits a and b changes the weighted sum by 2*(a - b), which is nonzero mod 10 unless the two digits differ by exactly 5, so nearly every adjacent transposition is caught. The weight 3 is chosen because it's the smallest weight greater than 1 that is co-prime with 10.
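A minimal sketch of the scheme described above, assuming the standard UPC convention of weighting the 1st, 3rd, 5th... digits by 3 and the rest by 1 (function name is mine):

```python
def upc_check_digit(digits):
    """UPC-style check digit: weight digits in odd positions by 3,
    the rest by 1, then pick the digit that pads the weighted sum
    to a multiple of 10."""
    total = sum(d * 3 if i % 2 == 0 else d for i, d in enumerate(digits))
    return (10 - total % 10) % 10

# The example from the comment: 5 1 0 7 6 -> weighted sum 41 -> check digit 9
print(upc_check_digit([5, 1, 0, 7, 6]))   # 9

# Swapping adjacent digits usually changes the check digit...
print(upc_check_digit([1, 5, 0, 7, 6]))   # 7, so the swap is caught

# ...except when the two swapped digits differ by 5 (the scheme's blind spot):
print(upc_check_digit([5, 0, 1, 7, 6]))   # 7
print(upc_check_digit([0, 5, 1, 7, 6]))   # also 7 -- this swap goes undetected
```

The last two lines show why "catches any adjacent transposition" needs the caveat: 2*(5 - 0) = 10 ≡ 0 (mod 10), so swapping a 5 next to a 0 leaves the check digit unchanged.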
I'm a software engineer who does woodworking in my spare time. But I've never experienced that satisfaction. I've never made something that is perfect. Every time I look at something I've made all I can see are the flaws. Most of my things are smaller, but I can look at a project I completed 4 years ago and know exactly where the tear-out is that I had to hide, or the errant marking knife line that I tried to sand away, or the snipe from the planer that I didn't have enough spare material to be able to cut off, or the piece of wood that is perfectly shaped but there's a knot that just doesn't look quite right there.
At least with software I can go back and edit my past transgressions.
I've seen people have success with a more legit version of the Circuit City scam.
For the uninitiated / younger generations, Circuit City was the Best Buy of the early 2000's. In 2009 they went out of business and laid off ~60,000 employees. It was a rough time to be looking for work; lots of people had been affected by the financial crisis and a lot of people had gaps on their resume of 1 to 3 years. And then all of a sudden, nobody had a gap. And there were a sudden influx of people who had been managers at Circuit City. And you couldn't confirm it, because Circuit City had just closed.
Nowadays the scam is to find any recently closed, large firm and claim you worked there with whatever BS title you want. A LinkedIn profile can actually be your downfall here, so don't have one. The overemployed community does this, claiming that they had to take it down because of a stalker. But I wouldn't advocate this. If your company finds out then there are probably legal repercussions.
But it doesn't have to be a scam. Form an LLC, spend some time up-leveling skills, and put that on your resume. It explains the gap, and gives you an excuse for why it looks like you weren't doing anything.
>Nowadays the scam is to find any recently closed, large firm and claim you worked there with whatever BS title you want.
This is likely why background checks are being applied even for non-sensitive/non-cleared positions. I just went through one as a Senior Dev for a company. I'd guess they're also more likely the more money you make.
So if you're going to try this, you'll have to avoid companies that run background checks, or start forging W-2s or other documents. But who knows what data they have these days and how easy it is to fool them.
This feels like an area where Google would have an advantage though. Look at all of the data Google has about you that it could mine across Wallet, Maps, Photos, Calendar, GMail, and more. Google knows my name, address, drivers license, passport, where I work, when I'm home, what I'm doing tomorrow, when I'm going on vacation and where I'm going, and a whole litany of other information.
The real challenge for Google is going to be using that information in a privacy-conscious way. If this was 2006 and Google was still a darling child that could do no wrong, they'd have already integrated all of that information and tried to sell it as a "magical experience". Now all it'll take is one public slip-up and the media will pounce. I bet this is why they haven't done that integration yet.
I used to think that, too, but I don't think it's the case.
Many people slowly open up to an LLM as if they were meeting someone. Sure, they might open up faster or share some morally questionable things earlier on, but there are some things that they hide even from the LLM (like one hides thoughts from oneself, only to then open up to a friend). To know that an LLM knows everything about you will certainly alienate many people, especially because who I am today is very different from who I was five years ago, or two weeks ago when I was mad and acted irrationally.
Google has loads of information, but it knows very little of how I actually think. Of what I feel. Of the memories I cherish. It may know what I should buy, or my interests in general. It may know where I live, my age, my friends, the kind of writing I had ten years ago and have now, and many many other things which are definitely interesting and useful, but don't really amount to knowing me. When people around me say "ChatGPT knows them", this is not what they are talking about at all. (And, in part, it's also because they are making some of it up, sure)
We know a lot about famous people, historical figures. We know their biographies, their struggles, their life story. But they would surely not get the feeling that we "know them" or that we "get them", because that's something they would have to forge together with us, by priming us the right way, or by providing us with their raw, unfiltered thoughts in a dialogue. To truly know someone is to forge a bond with them — to me, no one is known alone, we are all known to each other. I don't think google (or apple, or whomever) can do that without it being born out of a two-way street (user and LLM)[1]. Especially if we then take into account the aforementioned issue that we evolve, our beliefs change, how we feel about the past changes, and others.
[1] But — and I guess sort of contradicting myself — Google could certainly try to grab all my data and forge that conversation and connection. Prompt me with questions about things, and so on. Like a therapist who has suddenly come into possession of all our diaries and whom we slowly, but surely, open up to. Google could definitely intelligently go from the information to the feeling of connection.
Maybe. I haven't really heard many of the people in my circles describing an experience like that ("opening up" to an LLM). I can't imagine *anyone* telling a general-purpose LLM about memories they cherish.
Do people want an LLM to "know them"? I literally shuddered at the thought. That sounds like a dystopian hell to me.
But I think Google has, or can infer, a lot more of that data than people realize. If you're on Android you're probably opted into Google Photos, and they can mine a ton of context about you out of there. Certainly infer information about who is important to you, even if you don't realize it yourself. And let's face it, people aren't that unique. It doesn't take much pattern matching to come up with text that looks insightful and deep, but is actually superficial. Look at cold-reading psychics for examples of how trivial it is.
Google's AdMob has been doing these. Often it's something simple like completing a puzzle. I hate that I prefer these ads because it shortens the time until I get back to my game.