guessmyname's comments | Hacker News

Why thousands? You never read or delete all your emails within a day?

My inbox, which I've had for almost two decades, only has 28 emails in it. Not 28 unread emails, but 28 total emails. I delete everything within a day of receiving it, except for the very important things, which is why those 28 still remain.

Keeping thousands of emails in your inbox, while virtually free, is an attack vector for hackers, and also a gold mine for advertisement brokers who pay email providers money to show you ads based on your daily habits.


I am not saying I'm right, I'm just explaining how it got this bad.

See, I used to have 2 MB on my Hotmail and 4 MB on my Yahoo! Mail. I used to do exactly what you said. Then I got an invitation to Google Mail. 1 GB and counting!

I got lazy. I no longer had to delete mail. So it started accumulating. There. That's the whole story.


28 important emails in 20 years? Would the information in those emails have gotten to you via a different vector if you did not have email? This sounds like a case for not having email.

OP is aiming to help with a quite common problem. Curious: how many others have you met with an email inbox as spare as yours?

Same. It's much more difficult as an immigrant, because you have to prove a lot more than other candidates, but I have always managed to get a job with cold applications and zero referrals. Even I find that interesting, because I know dozens of people at different companies, have thousands of followers on LinkedIn, and am super active in local meetups as an organizer, which means lots of locals know me, at least by name. Still, no good referrals whenever I apply for a job, which is why I always resort to cold applying.

With Go v1.26.1

  package main

  import "fmt"

  func main() {
    fmt.Printf("Hello World!\n")
  }
Binary sizes:

• 2581616B (2.5MB) → 1714560B (1.6MB) (with -ldflags="-s -w")

• 1531920B (1.5MB) → 753680B (0.7MB) (with upx --force-macos)

That said, a trivial “Hello World!” isn’t a meaningful benchmark. If you’re going to play that game, you might as well swap `fmt.Printf` for `fmt.Println`, or even `println` to avoid the import statement entirely. At that point you’re no longer comparing anything interesting; the binaries end up roughly the same size anyway.
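
For reference, here's a minimal sketch of that import-free variant (assuming nothing beyond the standard toolchain; note that the built-in `println` writes to stderr, so it's only useful for toy comparisons like this one):

  package main

  // No import needed: println is a predeclared built-in (it prints to stderr).
  func main() {
    println("Hello World!")
  }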


I find it quite interesting that importing the "fmt" package alone leads to a 2+ MiB binary :). But, to be fair, TinyGo doesn't seem to treat the "fmt.Printf" function any differently from others, so it compiles the same source code as the regular Go compiler and just has better escape analysis, dead code elimination, etc.

> frizlab: […] while Zed is nice, Sublime is better.

> ramon156: What does this have to do with Tauri?

Not @OP, but I imagine they are thinking: “because Zed is built on top of Tauri and Sublime Text is not.” Sublime Text’s user interface is built on top of a mix of (native) UI renderers for each major OS, mostly based on Google’s 2D graphics library, Skia: https://skia.org/ . Recent versions (v3) go even lower: Vulkan and OpenGL https://www.sublimetext.com/blog/articles/hardware-accelerat...

EDIT: I stand corrected, Zed does not use Tauri (anymore?) but instead gpui ( https://www.gpui.rs ) as seen in their Cargo.toml file → https://github.com/zed-industries/zed/blob/main/Cargo.toml#L...


Doesn't zed use gpui?

Wait, what, Zed is Tauri? How? One of their main things was that they implemented the UI layer completely from scratch using their own GPU-accelerated rendering engine. It's got none of that browser-type stuff.

Problem is, it is often just 1-2 posts on Twitter. Maybe 5… Heck! maybe 10, but that’s it.

And it’s often people who are only superficially involved in the thing they are so expertly talking about.

Sometimes it’s teenagers who just want to troll adults, especially knowing that their posts could appear in the news. Sometimes it’s adults who want to troll other adults for the LOLs or to fulfill a particular agenda. Sometimes it’s bots; actually, usually bots. Sometimes the posts don’t even exist.


> I personally dropped $20k on a high end desktop - 768G of RAM, 96 cores, 96 GB Blackwell GPU - last October, before RAM prices spiked […]

768GB of RAM is insane…

Meanwhile, I’ve been going back and forth for over a year about spending $10k on a MacBook Pro with 128GB. I can’t shake the feeling I’d never actually use that much, and that, long term, cloud compute is going to matter more than sinking money into a single, non-upgradable machine anyway.


> 768GB of RAM is insane.

Before this price spike, it used to be you could get a second-hand rack server with 1TB of DDR4 for about $1000-2000. People were massively underestimating the performance of reasonably priced server hardware.

You can still get that, of course, but it costs a lot more. The recycling company I know is now taking the RAM out of every server and selling it separately.

Apple hardware is incredibly overpriced.


Look at the way age gating is going in a global coordinated push. Can control of compute be far behind?

It wasn't my primary motivator but it hasn't made me regret my decision.

I hummed and hawed on it for a good few months myself.


Just look at ITAR and the various attempts at legislating 3D printing and CNC machining of firearms parts to see one justification point of that.

> Can control of compute be far behind?

How is this going to work? You need uncontrolled compute for developing software. Any country locking up that ability too much will lose to those who don't.


> How is this going to work? You need uncontrolled compute for developing software.

I've read about companies where all software developers have to RDP to the company's servers to develop software, either to save on costs (sharing a few powerful servers with plenty of RAM and CPU between several developers) or to protect against leaks (since the code and assets never leave the company's Citrix servers).


Even for tiny crews doing nothing of fatal significance, this is unironically superior to "throw it on GitHub"

>You need uncontrolled compute for developing software

Oh you sweet summer child :(

You think our best and brightest aren't already working on that problem?

In fact they've fucking aced it, as has been widely celebrated on this website for years at this point.

All that remains is getting the rest of the world to buy in, hahahaha.

But I laugh unfairly and bitterly; getting people to buy in is in fact easiest.

Just put 'em in the pincer of attention/surveillance economy (make desire mandatory again!).

And then offer their ravaged intellectual and emotional lives the barest semblance of meaning, of progress, of the self-evident truth of reason.

And magic happens.

---

To digress. What you said is not unlike "you need uncontrolled thought for (writing books/recording music/shooting movies/etc)".

That's a sweet sentiment, innit?

Except it's being disproved daily by several global slop-publishing industries that have existed since before personal computing.

Making a blockbuster movie, recording a pop hit, or publishing the kind of book you can buy at an airport, all employ millions of people; including many who seem to do nothing particularly comprehensible besides knowing people who know people... It reminds me of the Chinese Brain experiment a great deal.

Incidentally, those industries taught you most of what you know about "how to human"; their products were also a staple in the lives of your parents; and your grandparents... if you're the average bougieprole, anyway.

---

Anyway, what do you think the purpose of LLMs even is?

What's the supposed endgame of this entire coordinated push to stop instructing the computer (with all the "superhuman" exactitude this requires); and instead begin to "build" software by asking nicely?

Btw, no matter how hard we ignore some things, what's happening does not pertain only to software; also affected are prose, sound, video, basically all electronic media... permit yourself your one unfounded generalization for the day, and tell me - do you begin to get where this is going?

Not "compute" (the industrial resource) but computing (the individual activity) is politically sensitive: programming is a hands-on course in epistemics; and epistemics, in turn, teaches fearless disobedience.

There's a lot of money riding on fearless disobedience remaining a niche hobby. And if there's more money riding on anything else in the world right now, I'd like an accredited source to tell me what the hell that would be.

Think for two fucking seconds and once you're done screaming come join the resistance.


Your battery is going to suffer because of the extra RAM as well.

I don't know your workloads, but for me personally 64 GB is the ceiling on RAM - I can run an entire k8s cluster locally with that, and the M5 Pro with the top cores is the same CPU as the M5 Max. I don't need the GPU - the local AI story and OSS models are just a toy for my use cases, and I'm always going to shell out for the API/frontier capabilities. I'm even thinking of the 48 GB config, because those are already at 8% discounts / shipped by Amazon, and I never hit that even on my workstation with 64 GB.


> Your battery is going to suffer because of the extra ram as well.

No, it won't. The power drain of merely refreshing DRAM is negligible, it's no higher than the drain you'd see in S3 standby over the same time period.


Given the DRAM refresh is part of S3 standby, I'm afraid this is circular reasoning.

I suspect this is one of those "it depends" situations; does the 128 GB vs 64 GB SKU have more chips or denser chips? If "more chips", it'll probably draw a tiny bit more power than the smaller version. If "denser chips", it may draw more power, but such a tiny difference that it's immaterial.

Similarly, having more cache may mean less SSD activity, which may mean less energy draw overall.

If I had a chip to put on the roulette table of this "what if" I'd put it on the "it won't make a difference in the real world in any meaningful way" square.


I thought my Z620 with 128GB of RAM was excessive! Actually, HP says they support up to 192GB of RAM, but for whatever reason the machine won't POST with more than 128GB (4Rx4) in it. Flawed motherboard?

With the way legislation is going these days, self-hosting is becoming ever more important. RAM for ZFS + containers on k3s doesn't end up being that crazy if you assume you need to do everything on your own. (At home I've got one 1 TB RAM machine, one 512 GB, and 3x 128 GB, all in a k3s cluster with various GPUs and about half a PB of storage; before ~ last Sept this wasn't _that_ expensive to do.)

My home server has 512 GB RAM and 48 cores; my 4 desktops have 16 cores, 128 GB, and a 4060 GPU each. The server is second hand and I paid around $2500 for it. Just below $3000 for the desktops when I built them. All prices are in Canadian Pesos.

Canadian Pesos?

It's a joke, because the Canadian dollar's value isn't very high right now.

See a $1100 GPU on eBay, but it’s in the US? Actually a $1900 GPU.

A colleague and I were just talking about how well he timed the purchase of his $700 24GB 3090.


It is sarcasm. Our dollar, which used to be on par with the US dollar, is no more.

Please, it's actually Cambodian Dollhairs or Canuckistan Pesos.

> spending $10k on a MacBook Pro with 128GB.

As someone who just bought a completely maxed out 14" MacBook Pro with an M5 Max, 128GB of RAM, and an 8TB SSD: it was not $10k, it was only a bit over $7k. Where is this extra $3k going?


Tangentially, I bought a nearly identically spec'd model (didn't spring for the 8 TB SSD - in retrospect, had I kept it, I would've been OK with the 4 TB), and returned it yesterday due to thermal throttling. I have an M4 Pro w/ 48 GB RAM, and since the M5 Max was touted as being quite a bit faster for various local LLM usages, I decided I'd try it.

Turns out the heatsink in the 14" isn't nearly enough to handle the Max with all cores pegged. I'd get about 30 seconds of full power before frequency would drop like a rock.


I haven't really had a problem with thermal throttling, but my heaviest compute activity is inferencing. The main performance fall-off I've observed is that the curve of token output rate versus cache/context size is way more aggressive than I expected given the memory bandwidth, compared to GPU-based inferencing I've done on a PC. But other than spinning up the fans during prompt processing, I'm able to stay at peak CPU usage without the clock speed reducing. Generally, though, this only maintains peak compute utilization for around 2-3 minutes.

I'm wondering if there was something wrong with your particular unit?


CPU performance was acceptable; the GPU was the one that was falling off a cliff.

Re: particular unit, I’m not sure - it was perfectly fine during anything “normal,” and admittedly, asking a laptop to run at 100% for any extended period of time is already a big ask. But it’s possible, I suppose.

I’m waiting for the Studios to get the Max and / or Ultra, and will reconsider if I want one, or if I don’t really need to play with local LLM at this time.


It could be a different country?

It's really not. I got 128 GB over 5 years ago and I paid far less than 20k for that PC.

Oh yeah! lnav is famous. I remember using it like a decade ago to monitor an array of web servers while at GoDaddy; good ol' times.

First commit is from Sep 13, 2009: https://github.com/tstack/lnav/commit/b4ec432515e95e86ec9d71... . Woah! we’re old.

This is what the UX looked like back in the day: https://github.com/tstack/lnav/commit/bce2caa654160518ec11f6...


Aside from the bitmap font, this looks pretty much the same as it does now lol

I'll also add that I used lnav more recently for viewing logs from many small lab devices centralized via syslog; it was extremely lightweight and effective.


Wow, the GitHub mobile app doesn't preview PNGs. TIL


GitHub website does on mobile.


GitHub mobile requires login, sadly.


I'm not seeing them on Chrome on Windows either, but FF works for me.


There are actually a lot more environment variables:

edit: removed obnoxious list in favor of the link that @thehamkercat shared below.

My favorite is IS_DEMO=1, to trim a little bit of the unnecessary welcome banner.



Curiously this is missing IS_SANDBOX=1 (allows running as root)


There’s actually already an app for that, and I’m not even joking.

edit: I was going to link a specific one I found a few weeks ago, but it turns out there are tons of them now, so I’ll just explain the idea. Most of these apps are basically reminder tools disguised as simple little games. A common example is a flower garden. Each “flower” represents a friend, and you keep the flower alive by staying in touch. That might mean sending a message or planning a hangout. If you don’t, the flower wilts, just like a real one would without care.
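
If it helps make the mechanic concrete, here's a tiny sketch in Go; the names and the one-week threshold are made up for illustration and not taken from any particular app:

  package main

  import (
    "fmt"
    "time"
  )

  // Flower models one friend in a hypothetical garden app: the flower
  // "wilts" once too much time has passed since you last reached out.
  type Flower struct {
    Friend      string
    LastContact time.Time
  }

  func (f Flower) Wilted(now time.Time, maxGap time.Duration) bool {
    return now.Sub(f.LastContact) > maxGap
  }

  func main() {
    now := time.Now()
    garden := []Flower{
      {Friend: "Alex", LastContact: now.Add(-10 * 24 * time.Hour)},
      {Friend: "Sam", LastContact: now.Add(-2 * 24 * time.Hour)},
    }
    for _, f := range garden {
      if f.Wilted(now, 7*24*time.Hour) {
        fmt.Printf("%s's flower is wilting, time to reach out!\n", f.Friend)
      }
    }
  }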


I love how you responded to today’s tech fatigue with “last week’s” tech fatigue slogan lol


please share.


I’m also as pedantic as you and use “LLM” even when talking about these systems, but you need to be flexible and accept that “AI” is already in everyone’s head when referring to GPT variants.

