NewsaHackO's comments

I feel the same way, but I think my feelings would change if I didn't actually believe the person was good enough to deserve having their writing immortalized, as in this case. Of course, we only have his side, but the GP doesn't seem to think his dad was a good person, and the dad wrote some hurtful things in the diary about someone they cared about, which I feel is justification for their actions.

>Knowledge distillation works like this: you take a large model, have it perform tasks with detailed reasoning, then feed those reasoning traces to a smaller model until the student learns to mimic the teacher. The smaller model ends up far more capable than if you’d trained it from scratch on the same data. Apple can now do this with the full Gemini, not just their own in-house models, and the distilled output runs locally. No internet required.
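The quote describes sequence-level distillation via reasoning traces; the classic logit-based form of the same idea can be sketched as follows (the logits and temperature are made-up illustrative values, not anything from Apple's or Google's actual setup):

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T spreads probability mass."""
    z = np.asarray(z, dtype=float) / T
    e = np.exp(z - z.max())
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL divergence between the teacher's softened distribution (the 'soft
    labels') and the student's; training minimizes this so the student
    mimics the teacher's full output distribution, not just its top answer."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * np.log(p / q)))

teacher = [4.0, 1.0, 0.5]   # confident teacher over 3 classes
aligned = [3.8, 1.1, 0.6]   # student that mimics the teacher
naive   = [0.5, 4.0, 1.0]   # student that disagrees

# A well-distilled student yields a lower loss than a naive one.
print(distillation_loss(teacher, aligned) < distillation_loss(teacher, naive))
```

The softened soft labels carry more information per example than hard labels (the relative probabilities of wrong answers, the "dark knowledge"), which is why the student ends up more capable than one trained from scratch on the same data.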

No freaking way. AI companies see this as tantamount to pirating their models. There is no way Google is not explicitly banning this in the agreement that allows Apple to use its models.


Here is a technical comparison showing how knowledge distillation works for on-device AI. It has visual maps of the teacher vs. student models and explains things like 'dark knowledge': https://vectree.io/compare/apple-intelligence-vs-knowledge-d...

Yeah, it's weird they even included that. It reads like a psych shelf exam question testing whether you know the connection between marijuana use and acute psychosis. But still, it is difficult to completely rule out the AI as a possible catalyst.

Yeah, everything about this post is just weird. IDK if they are bots, paid actors, or actual people who are just clueless.

Yes, and the natural extension is that a lot of what people do day to day is not driven by intelligence; it is just reusing a known solution for a presented problem in a bespoke manner. And this is something that AI excels at.

> This is focused product development and craftsmanship which is very different from Vibe coding something. So let this be a reminder to all the "I can vibe code this or that in a weekend". Good products / experiences take time.

How do you know? There isn't a git repo whose history one can inspect; he could have coded this in one weekend and spent the rest of the time on noncoding activities. Also, he could have made the entire thing by prompting, without any hands-on coding at all. The fact that it is a web app with a SaaS platform (the thing LLM-assisted coding is best at) doesn't inspire confidence.


I signed up and gave the product a spin, and it's clear that it's not some vibe-coded weekend project. Clearly a lot of effort has gone into it, and OP was also clear that they've spent 10 months on this.

> and used the rest of the time doing noncoding activities

That’s half of the point! Building (and selling) products requires a lot of those too.


if you can build this in one weekend, I'd like to hire you

I hate it when people use commentary articles as fake sources for their points. It's even more aggravating when the "journalists" are making points that play to the ignorance and outrage of the reader, as they know those readers are the easiest to bait for clicks and mislead. For instance, how does Anthropic claiming that its total revenue since January 2025 is $5 billion contradict its expected run-rate revenue of $19 billion for 2026?

> how does Anthropic claiming that its total revenue since January 2025 is $5 billion contradict its expected run-rate revenue of $19 billion for 2026?

Isn’t the “exceeding $5BN” comment lifetime revenue? … on $30BN raised (edit: previously said spent), or something ridiculous.

A lot of the commentary on the frontier model companies comes down to how much money they’ve spent relative to the small amount they’ve made in return, plus the skepticism, fed by almost continuous reporting, that deploying AI in a variety of situations doesn’t seem to yield favorable business outcomes. OpenAI shifting to enterprise / coding type stuff this week also seems potentially informative. Is gen AI actually useful for anything but code? Signs keep pointing to no… and even then, we’re in the early stages of figuring out how to build without destroying everything, something Amazon just recognized as possible with its recent shopping outage.


> on $30BN spent (or something ridiculous.)

Where did you get that figure? The filing says $10 billion has been spent on training and serving customers.


Whoops! Not spent, raised.

Is there an actual book available?


Of course, I saw that, but if the text of the book is not freely available, then the examples wouldn't really be helpful, no?

So buy the book? The expectation of free stuff is all too common.

Regardless, a link to a repo of disjointed examples is not very interesting or helpful.

If you don't want to pay, Library Genesis has the first edition (2004), but if you didn't find the examples at least modestly interesting in themselves, is this even your bag? As a Linux sysadmin and occasional writer of lousy C programs, I often consult NetBSD's source tree when I want good examples that aren't as complex as GNU's, so I expect to come back to these.

Judging by the publisher's sample,[1] the second edition (2025) looked like a worthwhile upgrade, so I ordered it. Much of the material is in the manpages, but this presents it with better explanations.

___

1. <https://ptgmedia.pearsoncmg.com/images/9780135325520/samplep...>



Or perhaps free stuff is all too uncommon…

Linux is rarely a porting issue for C++ or python: https://wxwidgets.org/

Statically linking libraries on macOS or Windows can contaminate an application with GPL/LGPL obligations, which is why the wxWidgets license includes an exception waiving the disclosure requirement.

Also, if you are looking for a VueJS cross-platform GUI framework for most desktop and mobile platforms (modern macOS hardware and a developer account are required for Apple targets):

https://github.com/quasarframework/quasar

Qt5/Qt6 frameworks sooner or later cause more problems than they solve, and require a lot more maintenance/support.

Best of luck =3


"Contaminated by GPL/LGPL code"? Really? If you want to make a profit off of your code, buy commercial libraries for several thousand dollars and you are in the clear. If you don't want to pay for libraries, you have to accept their conditions.

I usually prefer releasing under the Apache 2.0 license, as I can't predict what people will need 10 years from now.

People use LGPL libraries in commercial software all the time, as shared objects (.so) do not contaminate the host application's license. The instant someone statically links an LGPL/GPL lib into an application binary, its source must also carry a compatible open license. Note this obligation differs from publishing patches back to the main lib branch.

It gets messy when people release libraries under multiple licenses. Porting games and applications from *nix systems can be non-trivial. Best regards =3


Also, using a subscription is against the TOS of Claude Code; you need to spoof a token and could get banned for it.

Yup. And right now I'm straight-up breaking Claude's TOS by modifying OpenCode to still accept tokens. But I only have a few days left and don't care if they ban me. I'm using what I paid for.

Also, Microsoft does not allow use of its Python LSP outside of its own editors. You have to use the barebones Jedi LSP.

Fortunately, there are competing LSPs of reasonable quality now. I'm using pyrefly. Not sure if ty/ruff have one too.

Did not know about this, thanks. Now I don't have to do half of my development in Zed and half in VSCodium :)

This very company being acquired (Astral) is set to fix this by making the ty LSP server available.

basedpyright has existed for years and now we have pyrefly from meta too. I think ty is also working on one.
