Hacker News | demetris's comments

Some of the stuff we have been adding since then is GOOD though.

Some examples:

We now have to accommodate all types of user agents, and we do that very well.

We now have complex navigation menus that are not accessible without JavaScript, and we do that very well.

Our image elements can now have lots of attributes that add a bit of weight but improve the experience a lot.

Etc.

Also, things are improving/self-correcting. I saw a listing the other day for a senior dev with really good knowledge of the vanilla stuff. The company wants to cut down on the use of their FE framework of choice.

I cannot remember seeing listings like that in 2020 or 2021.

PS.

I did not mean this reply as a counterpoint.

What I meant to say is, even if we leave aside the SPAs that should not be SPAs, we see the problem in simple document pages too. We have been adding lots of stuff there too. Some is good but some is bad.


> We now have to accommodate all types of user agents, and we do that very well.

Simple websites don't even care about the UA.

> We now have complex navigation menus that cannot be accessible without JavaScript, and we do that very well.

Is there an actual menu which is more than a tree? Because a dir element that gets rendered by the UA into native menu controls would be just so much better.


Websites do care about the UA. They don’t care, at least most don’t care, about the User-Agent string. That is different.

About an element that gets rendered into native menu controls, I am not sure. I haven’t been following closely for the last two or three years. But that seems like a good candidate for a native element. 9 out of 10 websites need it.


But how does that work?

Does Cloudflare force firewall rules for those who choose to use it for their websites?

If the tool that does the crawling identifies itself properly, does Cloudflare block it even if users do not tell Cloudflare to block it?


Firefox could (should?) be better in several aspects, but it seems excessive to say it is pretty irrelevant.

It has 4.5% market share in Europe, 9% in Germany (statcounter numbers).

It is the browser that got the Google Labs folks to write a Rust jxl decoder for it, and now, thanks in part to that, Chrome is re-adding support for jxl.

You can be unhappy with Firefox (I often am myself), and Firefox HAS lost relevance, but can you really say it has become pretty irrelevant?


I didn’t pay close attention to the domain and I thought it was the other one:

https://moderncss.dev/

One of the best educational resources for modern CSS.

BTW, one of the reasons I love modern CSS is front-end performance. Among other things, it allows you to make smaller DOMs.

I talk about a modern CSS technique that does that here:

https://op111.net/posts/2023/08/lean-html-markup-with-modern...

It is an idea I started playing with when custom properties landed in browsers, around 2016 or 2017? Around 2021 I started using the technique in client sites too.
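
The linked post has the actual technique; as a rough, hypothetical sketch of the general idea (not necessarily what the post does), an inline custom property can replace per-item wrapper elements:

```html
<!-- Hypothetical example: each item carries one custom property
     instead of an extra wrapper div per variant -->
<ul class="cards">
  <li style="--accent: #c0392b">First card</li>
  <li style="--accent: #2980b9">Second card</li>
</ul>
<style>
  .cards li {
    /* falls back to gray when no --accent is set */
    border-left: 4px solid var(--accent, gray);
  }
</style>
```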

Now I want to write a 2026 version of the post that talks about container queries too. The technique becomes more powerful if you can rely on container queries and on the cqw unit. (You cannot always. That stuff is still new.)

For an example of the convenience cqw offers if you can rely on it, see the snippets I have in this:

https://omnicarousel.dev/docs/css-tips-know-your-width/
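
For anyone who has not used container units yet, a minimal sketch of what cqw buys you (the class names are made up, and this is not necessarily what the linked snippets do):

```html
<!-- 1cqw is 1% of the query container's width, so the title scales
     with the carousel itself, not with the viewport -->
<style>
  .carousel { container-type: inline-size; }
  .carousel .slide-title { font-size: max(1rem, 4cqw); }
</style>
```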


I don’t believe residential-proxy IPs (“resips”) will be with us for long, at least not to the extent they are used now. There is pressure and there are strong commercial interests against the whole thing. I think the problem will solve itself in some part.

Also, I always wonder about Common Crawl:

Is there something wrong with it? Is it badly designed? What is it that all the trainers cannot find there, so they need to crawl our sites over and over again for the exact same stuff, each on its own?


Many AI projects in academia or research get all of their web data from Common Crawl -- in addition to the many non-AI uses of our dataset.

The folks who crawl more appear to mostly be folks who are doing grounding or RAG, and also AI companies who think that they can build a better foundational model by going big. We recommend that all of these folks respect robots.txt and rate limits.


Thank you!

> The folks who crawl more appear to mostly be folks who are doing grounding or RAG, and also AI companies who think that they can build a better foundational model by going big.

But how can they aspire to do any of that if they cannot build a basic bot?

My case, which I know is the same for many people:

My content is updated infrequently. Common Crawl must have all of it. I do not block Common Crawl, and I see it (the genuine one from the published ranges; not the fakes) visiting frequently. Yet the LLM bots hit the same URLs all the time, multiple times a day.

I plan to start blocking more of them, even the User and Search variants. The situation is becoming absurd.


Well, yes, it is a bit distressing that ill-behaved crawlers are causing a lot of damage -- and collateral damage, too, when well-behaved bots get blocked.


I published some benchmarks recently:

https://op111.net/posts/2025/10/png-and-modern-formats-lossl...

I compare PNG and the four modern formats (AVIF, HEIF, WebP, JPEG XL) on tasks/images that PNG was designed for. (Not on photographs or lossy compression.)


It seems like the natural categories are (1) photographs of real things, (2) line art, (3) illustration-style images, (4) text content (e.g., from a scanned document).

Is there a reason you used only synthetic images, i.e., nothing from group 1?


Hey, tasty_freeze!

The motivation behind the benchmarks was to understand what the options are today for optimizing the types of image we use PNG for, so I used the same set of images I had used previously in a comparison of PNG optimizers.

The reason the set does not have photographs: PNG is not good at photographs. It was not designed for that type of image.

Even so, the set could do with a bit more variety, so I want to add a few more images.


Would be nice to also see decompression speed and maybe a photo as a bonus round.


Yeah.

Numbers for decompression speed are one of the two things I want to add.

The other is a few more images, for more variety.


Max memory required during decompression is also important. Thanks for sharing this research.


https://op111.net - My blog

https://omnicarousel.dev - Docs and demos site for Omni Carousel, a library I wrote recently


I did that recently for a couple of personal projects and I like it. I think I will start doing it for client sites too.

https://omnicarousel.dev

The main navigation menu is just above the site footer in the HTML document.

Question for people who know that stuff:

What is the recommended way of hiding features that require JavaScript on browsers that do not support JavaScript, e.g., on w3m?


"What is the recommended way of hiding features that require JavaScript on browsers that do not support JavaScript, e.g., on w3m?"

You can try the <noscript> tag.
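
Beyond <noscript>, a common complementary pattern is to ship the JS-dependent control hidden and let the script itself reveal it, so w3m never shows it at all (hypothetical markup):

```html
<!-- Hidden by default; only a running script removes the attribute,
     so browsers without JavaScript (w3m, etc.) never render the control -->
<button id="carousel-next" hidden>Next</button>
<script>
  document.getElementById('carousel-next').removeAttribute('hidden');
</script>
```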


> The main navigation menu is just above the site footer in the HTML document.

Just letting you know, that stuff is a bit confusing to screen reader users.

Though I really wish we standardized on putting content first, like mobile apps do. At least we wouldn't have to explain to new screen reader users why getting to the f???ing article is so damn hard if you don't know the right incantations to do it quickly.


Thank you!

Would a “Jump to navigation” link next to “Skip to content” make this arrangement better for screen reader users?


Do you know what user agent the browsers send?

I tried with Windows 7 (Firefox 115) and it reports Windows 7.

It seems though that it cannot distinguish between Windows 10 and Windows 11, so, without looking further, I suppose the detection is based on the User-Agent string? (The OS version browsers report on Windows is frozen, so Windows 10 and Windows 11 have the same version there.)
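
For reference (from memory, so treat the exact tokens as approximate), the frozen platform version is why Windows 10 and 11 look identical in the string while Windows 7 still stands out:

```
Firefox 115 on Windows 7:      Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:115.0) Gecko/20100101 Firefox/115.0
Firefox 115 on Windows 10/11:  Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:115.0) Gecko/20100101 Firefox/115.0
```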


I was working on a carousel library a few months ago. I had made a few stress-test demos so that I could catch obvious issues while I was adding things and tweaking things.

One carousel there had 16K slides.

On Windows both Chrome and Firefox managed that fine. They scrolled from start to end and back without issue, and, I think, you could see all the frames on my 60Hz screen.

On GNOME and X11 (dual boot, so same hardware) Chrome was fine but there were issues with Firefox. I was curious so I logged out and logged in with Wayland. On Wayland Firefox was fine too, indistinguishable from Chrome.

I don’t understand hardware, compositors, etc., so I have no idea why that was, but it was interesting to see.


Firefox remains very conservative on enabling modern features on X11. Some distributions force them on, but otherwise it's up to the user to figure out how to do that.

It's likely that some hwaccel flag in about:config wasn't turned on by default. Similarly, if you want smooth touchpad scrolling, you need to set MOZ_USE_XINPUT2=1 in Firefox's environment.


Oh! That’s interesting. Thank you.

My main Firefox in that setup is from the Mozilla repos, rather than the ESR version that is the default in Debian stable. So, it could very well be that. I will have to check to see what the ESR Firefox from the Debian repos does.


> Firefox remains very conservative on enabling modern features on X11.

So old-school throttling if you don't use the "right" version (Apple batterygate, Microsoft wordperfectgate). They could blame it on testing, though (we only use Wayland and we are too lazy to test the X11 version).

