Hacker News | superkuh's comments

I mostly just listened during homeroom and lunch period. But I was once sent to in-school suspension in high school in the early 2000s for listening to my mp3 player (Diamond Rio PMP300) after I finished taking the yearly standardized tests the state used to judge schools.

It's crazy to me that cell phones, and especially smart phones, were ever allowed in the classroom during class.

I suspect it was sneaky.

The old Nokia in school wasn't a problem; you'd get in trouble for playing Snake. The iPhone 1 wasn't really a problem either. There weren't that many, and it served as a calendar.

But year after year, release after release, the industry deliberately loaded more and more addictive machinery, pushed more and more boundaries, until it's beyond unacceptable.

As an aside, it's amazing how hard it is to turn the modern phone into a no-nonsense tool, and I'm an adult with self-control, a deep understanding of dark patterns, and a fully-functioning brain after 3 cups of coffee.


Completely. I'm a software engineer who has a better shot at this than just about anybody, and I have no idea how to give a child a phone that's not just digital crack. If you think Screen Time etc. will do the job, you probably have no idea what's actually happening on your child's phone.

You can buy a dumbphone. For example, a Nokia 3210 4G.

Interestingly, dumb phones are making a comeback.

They disappeared for a few years, but now you can buy a dumb phone, for example running KaiOS, that charges with USB-C and supports modern cell networks. You can even get a Nokia!

There is absolutely no need to buy a smartphone for any kid younger than 15. For high school students it's a bit different; they should be old enough to have the self-control to respect rules and keep their phones in their bags during class.


They were not. The rule now is that they have to go into a special bag that cannot be opened while school is in session. Before, they could be left in a backpack and snuck out or used between classes.

They are not allowed in any school I've been to, especially during class.

I once had to sit in the principal's office for bringing in some electronic fishing game. How we went from that to phones being allowed is insanity. They came like a tsunami.

And it's worse than this because there is no single wayland. Without a strong reference implementation, and with the very minimal wayland core protocol, each desktop environment picks and chooses and implements its own incompatible extensions for what should be wayland core features. This means you don't develop for linux, or even linux wayland. You develop for linux wayland mutter. Or linux wayland plasma. Or linux wayland hyprland. Because those three waylands are going to be doing things you need every day on an average desktop in their own incompatible ways: https://wayland.app/protocols/

Developers have to decide which DE they'll have their applications run in rather than having their applications function across all linux desktops. This is different from how it was for the last 20 years. No matter what else you say, this is a change from how it was. It's massive fragmentation of the userspace.

Literally the only wayland DE that supports screen readers right now is GNOME's mutter and that's mostly just for GNOME's software because of course they invented something new to work around the problems of the wayland architecture.


What is this “massive fragmentation” you speak of?

Anecdotally, I’m using Plasma, and every Gnome or Gtk app I’ve tried appears to be working perfectly, and vice versa when I occasionally try out Gnome.

Much less so for DIY/BYOB desktops like Hyprland, but I feel like that’s what you sign up for there.


https://wayland.app/protocols/

Click any protocol: very few outside the core and the absolutely essential extensions have universal support.


That's one of the things that freak me out about Wayland.

The DIY/BYOB experience is perfectly viable in the X11 world. I don't think I've ever had a piece of software balk at me because I used FVWM instead of kwin. I don't want to be railroaded into a desktop environment with strong opinions and mediocre tools when there's a sprawling flea market's worth of software to explore.


And surely that isn't happening either, though? Hyprland, Sway, Niri... I hear people are loving them. Enjoy!

I guess you have to decide if you are a GNOME app, an Ubuntu app, or an XFCE app, unfortunately. I'm sorry that this is the case, but it wasn't GNOME's fault that Ubuntu started this fork. And I have no idea what XFCE is or does, sorry.

Prophetic words were once spoken, and mocked, long ago.


Can this establish a QUIC connection without the other end having a CA cert? Or, like most other QUIC libs, will it default to only allowing connections to corporation-approved domains?

The TLS authentication story is fully configurable. This hasn't changed compared to Quinn. We use noq in iroh, and in iroh we use raw public keys (RFC 7250). When you use iroh, you don't need to set up DNS or TLS certificates; you just generate a key, share the public key, and others can connect to you. (Of course, the trouble is sharing the public key securely.)

It turns out that "Have the defaults arranged so that they suit a handful of crazy people but inconvenience literally everybody else" isn't popular. In fact preferring a tiny minority preference is sort of inherently unpopular, that's basically its defining feature as a policy.

Visiting websites (or making connections to IP-based services) that aren't CA-approved is not some crazy niche desire, as you're painting it. It was the default for a very long time and is still the only way to access many websites. When changing the feature requires recompiling a lib and linking it into your browser, which you also recompile, it's basically reality and not just a default. The only reality. And that's bad. It should be a setting at least.

HTTP is incomparably less fragile than HTTPS, which is why HTTP+HTTPS is such a great solution for websites made by human persons for human persons. Let's be clear: corporate or institutional persons using HTTPS alone is fine and reasonable. But for human use cases HTTP+HTTPS gets you the best of both worlds. No HTTPS cert system ever survives longer than a few years without human input/maintenance. There's just too much changing and too much complexity, from the software of the user to the software of the webserver.

Which is to say, HTTP is not some "ancient" tech like an analog television. It is a modern technology used today doing things that HTTPS can't.


I'd rather have some expired cert than http

I once saw my ISP injecting javascript ads into http traffic, and the horror has stayed with me forever


Agree strongly. An expired cert is better than no cert.

Also, I would argue maintenance is only as complicated as you make it for yourself. Countless people keep patched, secure, https web servers running with minimal effort. If it's somehow a lot of effort, introspect a bit on why you are making so much work for yourself.


Might be a bit of each of us touching different ends of the elephant. To be clear, I am talking about long timespans. Let's Encrypt hasn't even existed for a full decade yet. During that time it's dropped support entirely for the original acme protocol. During that time its root certs have expired at least twice (at least the ones I remember causing issues in older software). And that's ignoring the churn in acme/acme2 clients, specific OS/distro cert choice issues, and browser CA issues. Saying that there's no trouble with HTTPS must be coming from experiences on short timescales (i.e., a few years).

HTTP/3 already doesn't allow anything but CA-issued TLS. It won't be too long before they no longer allow you to click through CA TLS warnings.

If human people want things to be on the web for long time periods those things should be served HTTP+HTTPS.


If you can't keep your site's certs working, I don't have much faith you can keep your server working. Maintenance is required in the face of entropy

There is some kind of middle ground here. My first HTML file still renders like it did on Mosaic. The HTTP server I used back then still works today, 35 years later, without maintenance. I do agree that HTTPS is a simple solution, but there is too much cargo cult around it. Honestly, I do not see the need to maintain everything ever published if you follow sane practices.

EDIT: I have 15-year-old things at work that do not compile; those you have to maintain for sure, and the biggest problem is cryptography. I am not sure such unstable tech should ever be part of the application.


Unless I'm misunderstanding your point, your HTTP server from 35 years ago is still working today without any maintenance? Does that mean no security patching and no updates for bugfixes? Or does "no maintenance" mean something else I'm missing? I find it difficult to discuss these topics when comments like these pretend that you can leave your system exposed on the internet for years without any maintenance.

If we're talking applications that don't actively listen on the internet, that's fine, and I would agree that we should have complete software that just works. But a webserver, unless it's for personal/home use, is on the internet, and I don't see how it could work for 35 years without any update/change.


Static html webservers don't really need constant security patching or bugfixes like dynamic, complex stuff does. They literally can just live forever. The sites themselves are just files, not applications.

I hate to break it to you, but HTTP servers (what is an "html server"?) absolutely can have all manner of fun exploits, like RCE.

That's no use when your automated registrar stops working in 3 years because it went out of business or changed protocols. Let's Encrypt has been an outlier.

On the one hand, I agree with you given that state of the world.

On the other hand, that state of the world shouldn't exist. It's incredible to me that it's not illegal.


I thought that was a one-time thing in a 3rd-world country, blown out of proportion into myth status.

Would you mind sharing what ISP it was and what time period this was in?


I’m not sure whether this applies globally, but in Japan, around 2015, some mobile carriers deployed a “traffic optimization” feature that would lossily compress images in transit.

On the platforms of NTT Docomo and KDDI (au), users could opt out of this behavior. However, with SoftBank, it could not be disabled, which led to controversy.

As you might expect, this caused issues—since the image data was modified, the hash values changed. As a result, some game apps detected downloaded image files as corrupted and failed to load them properly.

Needless to say, this was effectively a man-in-the-middle attack, so it did not work over HTTPS.

Within a couple of years, the feature seems to have been quietly discontinued.

There were also concerns that this might violate the secrecy of communications, but at least the government authorities responsible for telecommunications did not take any concrete action against it.

There is a Japanese Wikipedia article about this: https://ja.wikipedia.org/wiki/%E9%80%9A%E4%BF%A1%E3%81%AE%E6...


This event sounds much more realistic/common: an ISP's motivation to save bandwidth costs is much more likely/frequent than its motivation to monetize through ads (in addition to monthly service fees).

Whereas my ISP did not put in ads, they did inject messages, such as notices that maintenance was going to occur, and did things like redirecting bad DNS lookups to their own search.

Also ISPs were monitoring and selling browsing data years ago.



Cox Communications used to do it in California, injecting JS into sites. I remember seeing little Cox popup/toast messages in the corner of other sites.

It was some mobile ISP in Russia, maybe 6 or 8 years ago.

That's when you connect the VPN...

This is such a weird framing. HTTPS is HTTP. TLS is at a different layer of the network stack. You may as well say HTTP through a proxy is better or worse than HTTP through a VPN; all of those statements are equally nonsensical.

You are simply arguing that insecure network requests require less work. Which is obviously true. TLS did not appear out of nothing. Much effort was expended to create it, and there's a reason for that.


My thoughts exactly. By this logic both are fragile because they run over lossy wireless networks.

The composability of TLS/HTTP is really a beautiful thing.


Any fans of retrocomputing will certainly agree. Much of the plain-HTTP internet that's left is there by them and for them.

But, as we learned with the telnet filter going into place, we exist on the network at the pleasure of everyone else. Their concerns must come before ours. The needs of the many outweigh the needs of the few.

That explains why I've been using this to find all the cool stuff :) https://whatsonhttp.com/votes

Agree 100%. HTTP is much more accessible, and HTTPS has more failure modes. When I want to ensure that someone can read my content, I offer both.

Using HTTP does not guarantee your content can be read, since it can be modified in transit. Your content could be replaced entirely and you would never know unless someone reported it to you.
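To make that risk concrete, here is a toy sketch of in-transit injection. Any middlebox that can read a plaintext HTTP stream can also edit it; the function name and ad URL are hypothetical, purely for illustration:

```javascript
// A middlebox rewriting plaintext HTML on the fly: one string replace is
// all it takes to add a script tag before the closing body tag.
function injectScript(html) {
  return html.replace(
    '</body>',
    '<script src="http://injected.example/ad.js"></script></body>'
  );
}

const original = '<html><body><p>My article</p></body></html>';
const tampered = injectScript(original);

console.log(tampered.includes('injected.example')); // true: reader gets the modified page
```

Neither the server nor the reader gets any signal that this happened, which is the core of the integrity argument for TLS.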

This is true, and is a real failure mode of HTTP.

Where I live, and for people with older devices, this happens much less frequently than the HTTPS failure modes of unsupported browsers.


If you don't care about security, you could just use a browser which ignores invalid certificates.

Invalid certificates are one thing, and you can probably click through those. But maybe your older browser tops out at TLS 1.0 and servers don't offer that anymore (I think PCI compliance for credit cards discourages it), or maybe your older browser can't do ECC certs and the server you want to talk to only has an ECC cert.

Or maybe your older server only speaks TLS 1.0 and that's not cool anymore. Or it could only use sha1 certs, so it can't get a current cert.

When I can, I like to serve http and https, serve the favicon over HTTPS, and use HSTS to induce current clients to use https for everything. Finally, a use for the favicon.
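A minimal sketch of the header involved in that trick. Once a client fetches anything over HTTPS with a Strict-Transport-Security header (even just the favicon), it pins to https for the whole host. The max-age value here is just a typical example, not a recommendation:

```javascript
// Response headers a server might attach to the HTTPS favicon so that
// conforming clients upgrade all later requests to https.
function faviconHeaders() {
  return {
    'Content-Type': 'image/x-icon',
    // One year; includeSubDomains extends the policy to subdomains too.
    'Strict-Transport-Security': 'max-age=31536000; includeSubDomains'
  };
}

console.log(faviconHeaders()['Strict-Transport-Security']);
```

Old clients that don't understand HSTS simply ignore the header and keep working over plain http, which is what makes the dual-serving approach viable.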


Someone with an older browser can update the browser outside of very niche situations. I have little concern for that use case.

If a server can't do TLS 1.2, which dates from 2008, I question how it's still stable and unhacked more than anything.


It would be cool if the javascript were rewritten a bit to only use standard, long-existing javascript features so one didn't have to run a modern corporate browser to use the "wander" functionality. It did not work in my browser even with JS enabled.

Sorry to know it did not work in your browser and thank you for reporting the problem here. If someone is able to reproduce it and share details either here or at <https://codeberg.org/susam/wander/issues>, especially which JavaScript features are not working, I would be happy to update the code.

I am aware that I have used a number of relatively modern features. For example, I suspect `for ... of` might be one, and perhaps `Element.append()` (instead of `Node.appendChild()`) could also be an issue. Unfortunately, I do not have old browsers to test this reliably, so any help in identifying the exact constructs or functions causing problems would be very appreciated.


BrowserStack might be a good option to test a bunch of browsers and their different versions on real devices.

Never used it personally, but might get some mileage out of the free plan before their time-based usage expires.

https://www.browserstack.com/docs/automate-self-hosted/getti...


Hello @superkuh, if you happen to revisit this, could you let me know which browser and version you were using? It would help me decide how far back I should support.

Hi susam, thanks for the heads-up on IRC. I'm using an ancient firefox fork from ~2015 called Palemoon 26.5. I use it because it's the best available on my computer with a good screen reader and text-to-speech setup. I am not a javascript developer, but my JS console says

"SyntaxError: Missing ; before statement" on line 136: let consoles = window.wander.consoles

I'd bet it's that 'let', which is ECMAScript 6. I sort of have a polyfill extension for some things from ES6, but not all.


Thank you. This makes a lot of sense. I need to think it through: 'let' is one of the things about modern JavaScript that I quite like, so I need to decide whether I really want to go back to using 'var'.

To be honest, I resisted adopting ES6 features for quite a long time because I was concerned about exactly this kind of situation, where a modern feature might not work in an older browser that is still in use. However, in 2022, prompted by ESLint's no-var rule [1], I began using ES6 features in my code [2]. Now I find myself wondering whether my earlier, more conservative approach to JavaScript might have been the better one.

In any case, thank you very much for following up on this thread. It has given me something to think about.

[1] https://eslint.org/docs/latest/rules/no-var

[2] https://codeberg.org/susam/texme/commit/8e31dbf
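For reference, a hedged sketch of what an ES5 fallback looks like for this kind of code. A pre-2015 engine fails at parse time on `let`, killing the whole script, whereas the `var` spelling below parses everywhere; the data here is made up, standing in for something like `window.wander.consoles`:

```javascript
// ES6 version (parse error in pre-ES6 engines such as Palemoon 26.5):
//   let consoles = [...];
//   for (const c of consoles) { ... }

// ES5-safe equivalent: var declarations and an index-based loop.
var consoles = ['tty1', 'tty2', 'tty3']; // hypothetical stand-in data
var names = [];
for (var i = 0; i < consoles.length; i++) {
  names.push(consoles[i].toUpperCase());
}
console.log(names.join(',')); // TTY1,TTY2,TTY3
```

The trade-off is real: polyfills can supply missing functions like `Element.append()`, but new syntax like `let` and `for...of` cannot be polyfilled, only avoided or transpiled away.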


I haven't been able to read most text on github on my old system with a good screenreader because the browser is outdated and can't run the github javascript applications fully correctly. This gitclassic interface is a lifesaver. From my VPS IP I got blocked but from my home residential IP it worked fine.

The github accessibility team has consistently ignored tickets about the lack of text for years. Or rather, they'll fix it for a month, then revert and make it even worse. And that's fine; Microsoft is a business and making a profit is their goal. Supporting people who want text in the HTML doesn't make money.


Most of the AI facial recognition cameras in the USA are from Flock and use small solar panels to keep the system battery charged. I've noticed that when I run small computers off small batteries and small solar panels, even a bit of bird poop on the panel eventually causes the computer to run out of power. Bird poop, or bird poop simulants (like milk powder, black pepper, corn starch, water, whey powder), are non-destructive to solar panels or anyone's property. Sure would be cool if the birds would start helping.

It sure would be nice if this standard of conduct in court were also upheld for the US federal officials who refuse to answer or straight-up bald-faced lie in court. But nah, it only ever happens to normal people.

I'll definitely give this a try. Older linux distros that have full working accessibility support don't run very well on modern hardware, if at all (i.e., the CSM compat mode quirks CSMWrap is meant to avoid). It'd be great to keep running my xorg linux with a working screen reader on a modern Ryzen system that'll last another 15 years.

> Older linux distros that have full working accessibility support

Such a sad state of affairs


They don't run well in a VM?
