Hacker News | JustinCS's comments

I agree, real intelligence may ultimately be explainable as all "math and probability" too, whether it's neurons or atoms. A key difference between our brains and LLMs is that the math underlying LLMs is, for now, substantially more comprehensible to us.

It's common to believe that we have a more mystical quality, a consciousness, whether from a soul or simply from being vastly more complex, but few can draw that line clearly.

That said, this article certainly gives a more accurate understanding of LLMs compared to thinking of them as if they had human-like intelligence, but I think it goes too far in insinuating that they'll always be limited due to being "just math".

On a side note, this article seems pretty obviously the product of AI generation, even if human edited, and I think it has lots of fluff, contrary to the name.


I'm not highly concerned but I think there is merit in at least contemplating this problem. I believe that it would be better to reduce suffering in animals, but I am not vegan because the weight of my moral concern for animals does not outweigh my other priorities.

I believe that it doesn't really matter whether consciousness comes from electronics or cells. If something seems identical to what we consider consciousness, I will likely believe it's better to not make that thing suffer. Though ultimately it's still just a consideration balanced among other concerns.


I too think there is merit in exploring to what degree consciousness can be approximated by, or observed in, computational systems of any kind, including neural networks. But I just can't get over how fake and manipulative the framing of "AI welfare" or concern over suffering feels.


That's reasonable, I certainly believe that there are many fake and manipulative people who say what's best for their personal gain, perhaps even the majority. But I still think it's reasonable to imagine that there are some people who are genuinely concerned about this.


When you put it like that, it makes me wonder if we can just stick to using the self-driving cars in the Bay Area and not go to these bad and dangerous places.


anywhere outside the bay area is "bad and dangerous"??


I agree with this, it reminds me of how most people don't need to write assembly anymore, but it still helps with certain projects to have that understanding of what's going on.

So some people do develop that deeper understanding, when it's helpful, and they go on to build great things. I don't see why this will be different with AI. Some will rely too much on AI and possibly create slop, and others will learn more deeply and get better results. This is not a new phenomenon.


Indeed it's not a new phenomenon, so why are we fretting about it? The people who were going to understand (assembly|any code) will understand it, and go on to build great things, and everyone else will do what we've always done.


This stops making sense as soon as the prompter can be automated. Who is going to pay for your artisanal software? Who will be able to afford it?


This sounds like an assignment to learn to use LLMs, which as an isolated assignment sounds reasonable. Students should learn how to use tools of all kinds to maximize their effectiveness. It might be a bigger problem if all assignments are done like this but I doubt that's the case.


Even as AI generates more writing and code, we still have a way of ranking quality: good writing and successful projects tend to get more popular and prominent. This selection can allow LLMs to continue to improve. They get a huge flow of slop, but they can learn from the patterns correlated with higher quality. The model developers can also develop better ways to curate the input data themselves and keep the slop at bay. It's not a guaranteed or trivial mechanism, but I don't think we need a new breakthrough either.
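To picture that curation step, here's a deliberately toy sketch (purely illustrative, not how any actual lab filters data): each candidate document carries some external quality signal, e.g. an engagement or ranking score, and only documents that clear a cutoff are kept for the training corpus. The field names and scores are made up.

```python
# Toy sketch of quality-gated data curation (hypothetical, illustrative only):
# each candidate document carries some external quality signal, and only
# documents above a cutoff are kept for the training corpus.

def curate(docs: list[dict], min_score: float) -> list[str]:
    """Keep the text of documents whose quality score clears the cutoff."""
    return [d["text"] for d in docs if d["score"] >= min_score]

corpus = [
    {"text": "well-regarded essay",   "score": 0.9},
    {"text": "low-effort slop",       "score": 0.2},
    {"text": "popular library docs",  "score": 0.7},
]

print(curate(corpus, min_score=0.5))
# -> ['well-regarded essay', 'popular library docs']
```

Real pipelines use far richer signals (classifiers, dedup, provenance), but the shape is the same: a score, a threshold, and a filtered corpus.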


Maybe.

As a counterpoint: isn't popularity of a library more a metric of API convenience than actual code quality?

And isn't popularity of an essay more about how it conforms to existing beliefs than the quality of the thinking?


Those are good points and that's why progress is not guaranteed or trivial, just plausible.


This isn't really true: everyone has a different basal metabolic rate, and efficiency in absorbing calories from food can vary as well. Even small differences can add up to large effects; they can be the difference between being at net-zero and having a caloric surplus or deficit every day.

That said, in practice it may be reasonable advice on average, but it's also not very practical to eat the "same" calories as someone else unless they're with you all the time.
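The "small differences add up" point is easy to make concrete with back-of-envelope arithmetic, using the common (and only approximate) rule of thumb that a sustained ~3500 kcal surplus corresponds to roughly one pound of body weight:

```python
# Rough illustration: a small, sustained daily calorie surplus compounds.
# Uses the common ~3500 kcal-per-pound rule of thumb, which is only an
# approximation (real metabolism adapts over time).

KCAL_PER_POUND = 3500  # approximate energy content of 1 lb of body fat

def yearly_weight_change_lbs(daily_surplus_kcal: float) -> float:
    """Naive projection of weight change from a constant daily surplus."""
    return daily_surplus_kcal * 365 / KCAL_PER_POUND

# A 100 kcal/day difference -- roughly one cookie -- projects to ~10 lb/year.
print(round(yearly_weight_change_lbs(100), 1))   # -> 10.4
print(round(yearly_weight_change_lbs(-100), 1))  # -> -10.4
```

So a metabolic difference of just a few percent between two people eating identically can, naively projected, diverge by several pounds a year.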


As someone with a fast metabolism who struggled to gain weight: I get that, but at the same time, understanding that there's trial and error with your own body but that it's ultimately all about input and output does more good than saying "haha I just have a fast/slow rate looool" as justification for not taking care of yourself.


I find that frontend takes most of the dev time for most apps, and I certainly consider it "harder" to get everything working to the quality level I want. However, backend work is usually more critical, as problems can result in data loss or security issues. Frontend problems often just result in bad UX, and they are easier to do a surface-level check on, too (just use the app and check that it works).

Due to this, companies may have a higher bar of expertise for backend which may give the impression that it is "harder", but I don't think this is a very important distinction.


I’m a frontend dev. I’ve done backend and database administration work at a few points in my career, but never on sizable products or for more than a year or two at a time.

I largely agree with your points. The backend has the largest security and reliability burden. It’s the bottleneck everything has to go through, and it can’t trust anything it receives. If it breaks, nothing else can work. Also, backend deployments tend to involve a lot more moving parts.

> they are easier to do a surface-level check too (just use the app and check that it works)

If what you were getting at with this is that the make change -> see result loop for the frontend is overall faster, I’d thoroughly agree with that. It’s why I’ve never stayed on the backend for long. It’s pretty cool to be able to make a change and see it reflected in the UI within a couple hundred milliseconds without losing application state.

But while that’s probably the usual case, for a significant amount of the work it’s woefully insufficient. When trying to fix something, “just using the app” often involves significant deviations from the way you’d normally use it. Typical user bases use apps in maddeningly diverse ways. They have different browsers, operating systems, displays (DPI and color systems/accuracy), screen sizes, pointing devices (mouse vs touch), and assistive technologies (e.g. multiple screen readers, each of which has its own quirks). Members of product teams—particularly ones who aren’t web specialists—frequently forget some of these. Surface level tests obviously don’t include the entire testing matrix, but they often involve iterating on at least two combinations of use at the same time, and those combinations may involve significant overhead.

Accessibility presents a particularly hard challenge for quick tests, as most developers I’ve worked with don’t know how to use screen readers to do much, so just using the app isn’t possible for them without some additional learning.

Hopefully your testing matrix is mostly automated, but those automated tests are too slow to use during development. Initial bug isolation and getting proper tests around specific interactions can be extremely tricky.


I'm saying from the perspective of someone overseeing a frontend dev, that I can just try out the app feature and see if things seem to be working as expected. Though as you mention, it's necessary to check a variety of devices and other edge cases, depending on the project requirements.

With backend though, even if it seems to work, there can be severe hidden problems with the architecture and security, so I really need to trust the backend dev or verify things deeply myself in order to ensure quality.

If I'm making a quick app for a startup, I can often hire relatively less experienced frontend devs, but have to care much more about the backend.


I hope you're testing for all of the following:

- All of the widely used mobile and desktop browsers

- Inexpensive Android devices - very common, most devs don't test on them, and they frequently suffer from terrible frontend performance that goes overlooked

- Browsers running common ad blockers

- Screenreaders - frontend accessibility is a whole speciality in itself

- SEO concerns, making sure crawlers see the right stuff

- Slow network connections - simulate how the site behaves on devices in rural areas with bad connections


It really depends what I'm building, and I find that these are often additional tasks done later, after the core functionality is validated. I've most often built apps used internally or to test concepts for early user feedback, which had a relatively low bar. But regardless, I can verify these myself by trying them, without much deep knowledge of the code, whereas I can't easily verify that the backend was coded securely and properly.

But covering all these cases, and doing all the polish and animations expected of high-quality frontends, has usually taken much longer when I've needed to do it; in some cases 80% of the dev time has been frontend.


> It really depends what I'm building, and I find that these are often additional tasks that are done later after the core functionality is validated. I've most often built apps used internally or to test concepts for early user feedback, that had a relatively low bar.

Prototyping and internal tooling are both obviously things with far different bars for quality (much as I wish internal tooling wasn’t treated like that). I’ve not felt much difference in prototyping endpoints on the backend versus new pages in the UI in terms of difficulty. Internal sites usually have dramatically lower quality bars than production apps, though hopefully you’re not treating accessibility any differently for them. A company can mandate that their employees use a specific browser. The company knows which hardware/OS are used. They also can often support only one language unless they’re large. Most tools don’t have to worry much about turning away customers.

Earlier you mentioned hidden architectural issues. Frontends are rife with these, and this is part of why there’s so much churn in frontend frameworks and APIs. It’s often incredibly easy to make a UI which appears to work and satisfy the requirements as given, but unless you’re aware of the things listed in other comments, the resulting codebase might need an entire rewrite to support the actual features which customers expect. Maybe you built an entire UX which fundamentally doesn’t work with a screen reader or a software keyboard. You might have no way to support optimistic updates or undo. Maybe your choice of routing framework won’t allow you to block a route update until some data is loaded or an animation is finished. Sometimes these things are easy to bolt on; sometimes they result in weeks of lost progress. All that might be acceptable for an internal site, but the vast majority of frontend developers spend most of their time on client-facing ones.

I guess my main point is that I frequently hear stuff like this from backend developers, and I also frequently have to fix the frontend code they wrote because they just didn’t know how much goes into making the slightest bit of a non-trivial workflow for customers. As an aside, I think the worst offenders in this regard are actually people who describe themselves as full stack. I have done a fair bit of backend work. I’m not full stack. My backend work rarely has to worry about load balancing or database consistency, but that’s because actual backend devs are catching those things for me. I know those things exist though, and that many more that I’m not aware of do too.


> I'm saying from the perspective of someone overseeing a frontend dev, that I can just try out the app feature and see if things seem to be working as expected

Not every project has a quality bar as low as this.


Related to taking tiny steps, I've set up a daily habit checklist with the lowest bar possible, even lower than the author's suggested log statement. When it comes to software dev, it's just "open my IDE and look at my notes for what to do next". This usually just takes 10 seconds, but it's the first step in starting and usually leads to me doing at least a bit more, so it's helpful when I'm at my lowest in terms of energy. And even if I do nothing else, I get some satisfaction that I at least completed my to-do and did a tiny bit more than nothing for the day.


++ for the “lowest bar”, and for constantly negotiating with oneself on whether every line is still valuable and brings profit rather than despair.

Like “brush teeth”, “do nothing at all for half an hour after work”, “remove trash photos for the day from the phone”, “finish working” (here I have a detailed sublist ending with “close computer lid”), “move todos I did not have time for today to tomorrow”

another cool habit is an “I did” list: add items you did that were not planned, because we sometimes forget why we did not do something “planned” — often we actually did something else important that we are just blind to when “planning”. for example, “meal”, “took some rest that I actually needed”, “took out trash”, “told someone irritating to fuck off”, etc.


This seems like a case of selection bias, where they are looking at all the Gen AI startups and seeing that they are making revenue faster than previous startups. But Gen AI startups have mostly only started very recently, so it's obvious that all the successes must have grown fast, as they haven't been around long enough to grow slowly. Maybe in 5 years, we'll see a lot of cases of successful startups that took a slower growth trajectory instead.

But whether it's short-sighted for the investors or not, I think the takeaway for founders is "investors now expect you to make more revenue faster, and B2C applications are more interesting than before".

