Hacker News | rishabhaiover's comments

I was reading a paper on dark silicon and how it broke the beautiful scaling laws of the past (Moore's law, Dennard scaling). We hit a wall, innovated, and now the hardware industry is thriving. To me, that means scaling the industry and riding that momentum wasn't wrong. In fact, it allowed us to be where we are today.

Why are we so opposed, in principle, to the current pre-training scaling laws? Perhaps we'll require new innovations at some point, but the momentum lets us reach heights we've never climbed before.


haha the NSFW toggle is crazy

Ha, the only feedback I needed :) I spent far too much time on the Unicorn exploding properly...

A high-level language or a compiler wasn't automating end-to-end reasoning for a programming task.

I'm already seeing a degradation in Gemini's responses since they started stuffing YouTube recommendations at the end. Anthropic is right not to add these subtle (or not-so-subtle) monetization incentives.

I mean, that’s almost just fair. They ripped the answer from a YouTube video, but at least link you back to the source now.

I found it a remarkable transition that Opus 4.6 dropped Redis for caching where Sonnet 4.5 had used it. I wonder why that is the case? Maybe I need to see the code to understand the use case of the cache in this context better.

Yeah, was it over-engineered the first time, or neglecting scenarios with multiple replicas the second time?
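The multiple-replicas concern can be made concrete. The following is a hypothetical sketch (the `Replica` class and the `price` key are illustrative, not from the thread) of why a per-process in-memory cache, which is fine for a single instance, can serve stale data once a second replica exists:

```python
class Replica:
    """One app server with its own in-process cache over a shared DB."""
    def __init__(self, db):
        self.db = db             # shared backing store (a dict stands in for it)
        self.local_cache = {}    # private to this replica

    def read(self, key):
        if key not in self.local_cache:           # cache miss: hit the DB
            self.local_cache[key] = self.db[key]
        return self.local_cache[key]

    def write(self, key, value):
        self.db[key] = value
        self.local_cache[key] = value             # only *this* replica sees it

db = {"price": 100}
a, b = Replica(db), Replica(db)

b.read("price")          # replica B warms its local cache with 100
a.write("price", 120)    # replica A updates the value

print(a.read("price"))   # 120 (A updated its own cache on write)
print(b.read("price"))   # 100 (B still serves the stale cached value)
```

A shared cache such as Redis avoids this by giving every replica one view of the cached data, at the cost of a network hop and an extra moving part. For a single-replica deployment the in-process cache may be all that's needed, which could explain either model's choice.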

is it anything like the OpenAI ad model but for tool choice haha

Claude Free suggests Visual Studio.

Claude Plus suggests VSCode.

Claude Pro suggests emacs.


> ~~Claude Pro suggests emacs.~~

Claude Pro asks you about your preferences and needs instead of pushing an opinionated solution?


I'm not quite sure if you're making fun of emacs or actually praising it.

Stallman paying for advertising, now that is good one :)

Copilot suggests leftpad

I'd thought about model providers taking payment to include a language or toolkit in the training set.

I'd be very interested to learn about output quality vs token utilization for both these approaches

When someone asks me to generate a random number, even I don't give a truly random number.

I used to always reflexively blurt out 67 when asked for a random number.

I'm a proto gen alpha. I 6-7'd before it was cool.


For a second I assumed you meant 69, but then it hit me: I'm getting old.

Claude Opus 4.6 says the same

As a student, I constantly worry about this. But everyone in my class is producing output at a pace I can't compete with without AI assistance.

what class are you in that "producing output at a [rapid] pace" is relevant to the grade?

pick any cs class

I have a minor in CS, and no: producing the assignment by the deadline matters, but grades are not based on quantity of code vs. classmates.

I mean, maybe things have changed (I finished college about 20 years ago), but I don't remember producing large volumes of stuff as being a particularly important part of a CS degree.

Between a challenging job market, expanding frontiers of learning (AI, MLOps, parallel hardware), and an average mind like mine, a tool that increases throughput is likely to be adopted by the masses, whether you like it or not. Quality is not a concern for most; passing and getting an A is (most of my professors actively encourage using LLMs for reports, code generation, and presentations).

It will be a very interesting experiment when your generation of computer science graduates enters the job market, to put it mildly.

Individuals believe they act freely, but they are constrained and directed by historical forces beyond their awareness - Leo Tolstoy

Historical forces beyond your awareness cannot force you to submit mountains of slop.

slop is not a thing anymore, stop living in a fantasy world

Remember, you chose this. You chose not to learn, to offload your thinking in the name of competition.

What are you talking about? Slop existed long before AI and it will exist long after.

The last one already killed unique web designs, killed Flash, and gave us soulless flat design and Electron bloat.

They'll have to work pretty hard to outdo that!


That was never a worry in any of my CS classes.

Copying AI slop isn’t producing output! It’s also not conducive to learning

As if you are just such a genius that the models are of no use to you.

How can you not think that makes you sound like a complete moron?


I would urge you to apply some critical thinking, re-read what I stated, and identify where I said that the models are of no use to me. If your ability to think without AI assistance hasn't fully atrophied, you may be able to see that you are the moron in this thread.

I guess I really am just that much smarter than you.

