
It's been cut in half year-to-date. It's about where it was a full year ago right now.

> January 11, 2023

> Based on current internal deliberations, the company could launch its first touch-screen Mac in 2025

Even if it didn't come to pass, a leak from just a few years ago is more relevant than the every-year-since-the-iPad-launched "rumors."


Yes, and it's an article about a leak from 3 years ago. And there were more "leaks" before that. I just can't be bothered to research and link the obvious to argue against an "opinion."

False equivalence. A text editor does not type characters that you didn't explicitly type or select.

LLMs can make mistakes in different ways than humans tend to. Think "confidently wrong human throwing flags up with their entire approach" vs. "confidently wrong LLM writing convincing-looking code that misunderstands or ignores things under the surface."

Outside of your one personal project, it can also benefit you to understand the current tendencies and limitations of AI agents, either to consider whether they're in a state that'd be useful to use for yourself, or to know if there are any patterns in how they operate (or not, if you're claiming that).

Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.


Sure, the point about LLM "mistakes" etc. being harder to detect is valid, although I'm not entirely sure how to compare this with humans' hard-to-detect mistakes. If anything, I find LLM code shortcomings often a bit easier to spot because a lot of the time they're just unneeded dependencies, useless comments, useless replication of logic, etc. This is where testing comes into play too, and I'm definitely reviewing your tests (obviously).

>Burying your head in the sand and choosing to be a guinea pig for AI companies by reviewing all of their slop with the same care you'd review human contributions with (instead of cutting them off early when identified as problematic) is your prerogative, but it assumes you're fine being isolated from the industry.

I mean, listen: I wish with every fiber of my being that LLMs would disappear off the face of the earth for eternity, but I really don't think I'm "isolating myself from the industry" by not simply dismissing LLM code. If I find a PR to be problematic, I just cut it off; that's how I review in the first place. I'm telling some random human who submitted the code to me that I'm rejecting their PR because it's low quality, not sending Anthropic some long, detailed list of my feedback.

This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.


> If anything I find LLM code shortcomings often a bit easier to spot because a lot of the time they're just unneeded dependencies, useless comments, useless replication of logic, etc.

By this logic, it's useful to know whether something was LLM-generated or not because if it was, you can more quickly come to the conclusion that it's LLM weirdness and short-circuit your review there. If it's human code (or if you don't know), then you have to assume there might be a reason for whatever you're looking at, and may spend more time looking into it before coming to the conclusion that it's simple nonsense.

> This is also kind of a moot point either way, because everyone can just trivially hide the fact that they used LLMs if they want to.

Maybe, but this thread's about someone who said "I'd like to be able to review commits and see which were substantially bot-written and which were mostly human," and you asking why. It seems we've uncovered several feasible answers to your question of "why would you want that?"


>It seems we've uncovered several feasible answers to your question of "why would you want that?"

Fair enough


> For instance, I'm in favor of bets that a certain asteroid will strike the earth at a certain time and place. A signal from the prediction markets might cause somebody to evacuate in a scenario where they'd otherwise cry "fake news."

I understand the point you're making, but in this case, you're still incentivizing someone somewhere not to attempt, to the best of their ability, to intervene against that asteroid. Bets that truly can't cause any change in behavior that might affect the outcome are a mostly theoretical category, in my opinion.


If the bet can't cause any change in behavior then the whole thing is useless. The whole point is to do some good with it. The constraint is that the bettor can't alter the outcome.

Another one would be discovering malware in a PR and betting loudly enough that it won't get merged. The bet is how you make your certainty rise above the bot noise and attract extra attention on the maintainers' part.

Granted that's also theoretical, but it's worth theorising about how we'll get things done in a world where the only way to be heard is to put your money where your mouth is.


> Another one would be discovering malware in a PR and betting loudly enough that it won't get merged. The bet is how you make your certainty rise above the bot noise and attract extra attention on the maintainers' part.

This example seems weaker than the asteroid example. Consider that if you're betting "loudly enough" it won't get merged, the less likely option becomes the malware actually being merged. Now you have a repo maintainer who can bet that it'll get merged (probably a more lucrative bet since it's less likely), and merge it to make money from that bet.


> Gambling/prediction markets are 100% optional to participate in and you should go in with the expectation that you're going to lose.

The stock market is 100% optional to participate in, and every broker tells you (is legally required to tell you, in fact) that you should go in with the expectation that you're going to lose money. That didn't stop whatever forces have made participating essentially required to plan for the normal life stage of retirement these days.


> The law California (and other states) passed doesn't define what content has to be blocked for which ages

No, but it's a framework that would allow other laws to do so. Because...

> it's not as if they had no idea the children endlessly posting selfies and posting "six seven" on their service weren't adults.

...you can make statements like that which sound like common sense, but it would be incredibly hard to regulate based on "if you know, you know" (or "you should have known"/"you had to have known"). The law has to provide (guarantee) a way for them to know in order to actually require them to take action based on it.

> As a parent, I might think that my 16 year old should be allowed to look up information on STDs but the websites that collect my child's age could decide they can't

This is a different problem. It sounds like you're essentially wanting to guarantee access to certain things, not just for your own 16-year-old, but for everyone else's, too (because if it was just yours, you could look it up for/with them if necessary). It'd be difficult to compel businesses to provide services to audiences they don't want to. But again, that's a separate problem that doesn't necessarily conflict with the rest of the system.


> No, but it's a framework that would allow other laws to do so.

I worry that it's the start of a lot of "other laws" which will limit the ability of children and adults to maintain even pseudo-anonymity online.

> The law has to provide (guarantee) a way for them to know in order to actually require them to take action based on it.

That sounds like an argument for even stronger proof of age than what the law calls for. Online platforms should do what nearly every other publisher does and provide a rating for their content. Netflix doesn't need to know how old I am. They provide a "kids" profile populated with their own curated content if that's the kind of thing I want, and for everything else they provide ratings (PG, R, TV-14, etc.). It would be easy enough to push a rating to clients; they could even use HTTP headers for it. If lawmakers really felt the need to interfere in all of our operating systems, they could require some means to collect and act on those ratings.
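To make that concrete, here's a rough sketch of the server side. The "Content-Rating" header name and its values are made up for illustration; no such standard exists today:

    // Hypothetical sketch: a site declaring its own rating on every response.
    // "Content-Rating" is an invented header name, not a real standard.
    import { createServer } from "node:http";

    createServer((req, res) => {
      // A per-path rating, the way TV ratings attach to individual shows.
      const rating = req.url?.startsWith("/kids") ? "TV-Y" : "TV-14";
      res.setHeader("Content-Rating", rating);
      res.end("page body");
    }).listen(8080);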

> It'd be difficult to compel businesses to provide services to audiences they don't want to.

This is the norm. It's what every business does, apart from those who demand ID for every transaction. It's useful for businesses to give people their opinion or intention for who they're targeting, but it's entirely inappropriate for every website and online service to force their opinion onto others. They aren't qualified to know what's appropriate for a specific child, and platforms like Facebook have repeatedly demonstrated that they absolutely can't be trusted to put our children's interests above their own.


> Online platforms should do what nearly every other publisher does and provide a rating for their content.

That only happens for "publications" of particular forms where state regulation has mandated it, or where enough noise was made about state regulation mandating it (or simply censoring content) that the industry adopted a rating system as a way to discourage that. (And in the latter case, there are always plenty of publishers that don't make use of the industry rating system, either at all or at least for selected publications in the field to which the ratings nominally apply.)

> They provide a "kids" profile populated with their own curated content if that's the kind of thing I want and for everything else they provide ratings

Netflix does not provide ratings for "everything else." Most of what they carry has either MPAA or TV Parental Guidelines ratings, and if it has such ratings they provide them. But they also carry content which has no such rating, which is simply noted as not being rated. (Of course, if "not rated" is a valid way to comply with your "you must have ratings in an HTTP header" law, then it's trivial to comply by sending a "not rated" header for every piece of content, but that doesn't actually achieve anything.)


> Online platforms should do what nearly every other publisher does and provide a rating for their content.

That's fine, but it needs an enforcement mechanism, or we're back to where we currently are ("click here if you're 18").

> It would be easy enough to push a rating to clients, they could even use HTTP headers for it. If lawmakers really felt the need to interfere in all of our operating systems it could require some means to collect and act on those ratings.

I completely agree it seems reasonable at a glance to have websites push ratings and have enforcement done at, e.g., the web browser level (with the browser knowing how to enforce based on the OS-supplied age bracket), rather than making websites read the age bracket and act on it directly. It does still run into questions about how you handle websites with content from multiple brackets (like Reddit or X): what's the UX supposed to look like if a child attempts to access adult content on one of those platforms? If the platform can't know what's happening (due to your privacy/safety concerns), then you're limited to the web browser entirely breaking the interaction or somehow redirecting them somewhere else.


> That's fine, but it needs an enforcement mechanism, or we're back to where we currently are ("click here if you're 18").

It'd be dead simple to tell whether a website returned a rating or not: just pull the HTTP headers, and if the rating isn't there, fine them (or warn them first and then fine them, or whatever). You could even have browsers refuse to load pages that didn't include a rating header in their response, and enforcement would take care of itself.
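A compliance check along those lines would be trivial to automate. Something like this, again assuming the invented "Content-Rating" header from above:

    // Sketch: check whether a site returns the (invented) rating header.
    async function hasRatingHeader(url: string): Promise<boolean> {
      const res = await fetch(url, { method: "HEAD" });
      return res.headers.has("content-rating");
    }

    // A regulator's crawler (or a browser deciding whether to load the page)
    // could act on the result:
    hasRatingHeader("https://example.com").then((ok) =>
      console.log(ok ? "rating present" : "no rating: warn, then fine"),
    );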

> it does still run into questions about how you handle websites with content from multiple brackets

I think it'd be up to Reddit (or mods) to set ratings for each subreddit and moderate accordingly. Pages at /r/MsRachel/ would return a different rating than /r/watchpeopledie.

Same with Twitter, I guess. Every user could specify whether their account is intended for children or not. Elmo's Twitter account would be shown to everyone, while accounts that don't intend to self-censor wouldn't.

> what's the UX supposed to look like if a child attempts to access adult content on one of those platforms?

Browsers that detect a rating higher than authorized can just throw up an about:blocked page telling kids to talk to their parents for access to the page they wanted, or to click the back button to return to the page they were on.

The platforms would see that a page was requested, and they'd transmit the data to the client along with the rating header. They wouldn't get any signal that the page was blocked. It'd look no different on the server side than it would if the user had clicked a link and then closed their browser/tab/window. If you wanted to be sneaky, you could actually have the browser load the page in the background to avoid platforms guessing between a closed tab and blocked access.

This not only solves the privacy/safety concerns; most importantly, it puts parents back in control of what their children can access. Parents would even be able to run software that logs the times/URLs of blocked pages and lets them override a rating based on URL or domain. Parents could block roblox.com, even though it returns a "for kids" header, if they didn't want their 8-year-old playing in an ad-infested online pedo playground, but still allow their mature 10-year-old access to plannedparenthood.org, even though it has an adult rating, without exposing them to everything else adult on the internet.
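For illustration, the client-side enforcement plus parent overrides could boil down to something like this. All the names here are invented; nothing like this ships in any browser today:

    // Sketch of browser-side enforcement with parent-configured overrides.
    type Rating = "kids" | "teen" | "adult";
    const order: Rating[] = ["kids", "teen", "adult"];

    const maxAllowed: Rating = "teen"; // set by the parent, not the site
    const overrides: Record<string, "allow" | "block"> = {
      "roblox.com": "block",            // blocked despite its "kids" rating
      "plannedparenthood.org": "allow", // allowed despite an "adult" rating
    };

    function shouldBlock(domain: string, rating: Rating): boolean {
      const rule = overrides[domain];
      if (rule) return rule === "block";
      return order.indexOf(rating) > order.indexOf(maxAllowed);
    }
    // When shouldBlock() is true, show about:blocked and quietly finish
    // loading the page in the background so the server can't tell.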

There are countless better alternatives to what Facebook wants us all to be subjected to, but Facebook couldn't care less about our interests; they're only looking out for themselves, and lawmakers are happy to take their bribes, eager to erode our ability to browse without an ID attached to our every action.


As someone who uses VideoJS on a website with a large video library, and has generally been dismayed at the state of the plugin ecosystem every time I consider doing a major version upgrade of VideoJS, this kind of thing is great to hear.

Drop a note in the discussions some time. I'd love to hear about what you're doing and even help migrate when the time is right for you.

https://github.com/videojs/v10/discussions


One of the footnotes at the bottom of the page says:

> Apple Business Essentials, Apple Business Manager, and Apple Business Connect will no longer be available once Apple Business launches.

So it's a consolidation. They call out Business Connect data as "including claimed locations, place card information, photos, organization information, account details, and more," so that's some of what differs from Business Essentials.


> There is no need for any verification beyond that or it's just government mandated surveillance.

There is no verification beyond that in these sorts of bills (CA, CO, IL). It's the parent's responsibility to watch their kids when they set up an account.

> Legitimate adult websites will not show the content.

This is a big problem (one that won't necessarily be solved by this particular legislation, granted). There are already rating HTML tags websites can add to indicate that parental control software should block them, but they're voluntary and non-standardized. Websites can choose not to comply with no real-world consequences. And I don't think platforms like Reddit or X, which are ostensibly all-ages social media but also have an abundance of adult content, are properly set up to serve tags like that on NSFW posts but not other ones.
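For reference, reading those voluntary tags client-side is trivial; the problem is purely that nothing compels sites to include them. The rating meta tag and the RTA label are real conventions some sites use, but the detection logic below is just a sketch:

    // Sketch: how parental-control software can read today's voluntary tags,
    // e.g. <meta name="rating" content="adult"> or the RTA label. Nothing
    // forces a site to include either one.
    function pageLooksAdult(doc: Document): boolean {
      const meta = doc.querySelector<HTMLMetaElement>('meta[name="rating" i]');
      if (!meta) return false; // no tag at all -- the common case
      const value = meta.content.toLowerCase();
      return value.includes("adult") || value.includes("rta-");
    }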

It's a tricky problem to solve, and, imo, it's one the tech industry has demonstrated it doesn't have any desire to solve itself, hence legislation starting to get involved.

> Websites can send down a single header indicating adult content.

It sounds at first glance like a no-brainer that websites shouldn't have access to any information and the enforcement should be done at a local level (like the current voluntary HTML tags that locally installed parental control software can sometimes read). But some websites might want to display alternate content to minors: e.g., a Wikipedia article with some images withheld, or Reddit sending a user back to an all-ages subreddit instead of fully breaking or failing to load when the user stumbles upon something 18+. For anything like that, the website needs to know, in some form, that the user isn't able to see 18+ content.
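A minimal sketch of that last case, assuming the site receives only a coarse, anonymous bracket. The "Age-Bracket" request header is invented for illustration:

    // Hypothetical: the server learns only a coarse bracket, never an
    // identity, and swaps content accordingly.
    import { createServer } from "node:http";

    createServer((req, res) => {
      const adult = req.headers["age-bracket"] === "adult";
      res.end(adult ? "full article" : "article with 18+ images withheld");
    }).listen(8081);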

