It's not. You won't be writing one byte, ever (even if you had layers that actually supported less-than-block writes), because the instruction overhead would be massive and you'd be murdering both latency and bandwidth for anything non-trivial.
So instead of replacing every 5 years you replace... every 5 years, because if you need that level of performance you're replacing servers every 5 years anyway.
Our developers once managed to hit around 750MB per open website.
They put in a ticket with ops saying the server was slow and could we look at it. So we looked. Every single video on a page with a long video list preloaded part of itself. The only reason the site didn't run like shit for them is because the office had direct fiber to our datacenter a few blocks away.
We really shouldn't allow web developers more than 128kbit of connection speed, anything more and they just make nonsense out of it.
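The arithmetic on this failure mode adds up depressingly fast. A back-of-envelope sketch (the per-video numbers are hypothetical, but in line with the ~750MB figure above):

```javascript
// Hypothetical sizes: a list page with 30 videos where the browser
// buffers ~25 MB of each clip up front (preload="auto"-style behavior).
const videosOnPage = 30;
const preloadedMBPerVideo = 25;

// Total transferred before the user has clicked a single thing.
const totalMB = videosOnPage * preloadedMBPerVideo; // 750 MB
```

The fix is a single attribute per element: `preload="none"` (or `preload="metadata"`) on each `<video>` tells the browser not to buffer the clips until the user actually plays one.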
PSA for those who aren’t aware: Chromium/Firefox-based browsers have a Network tab in the developer tools where you can dial down your bandwidth to simulate a slower 3G or 4G connection.
Combined with CPU throttling, it's a decent sanity check to see how well your site will perform on more modest setups.
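To put rough numbers on why this sanity check matters — the preset bandwidths below are ballpark approximations of the usual devtools profiles, not exact Chrome defaults, and real throttling also adds per-request latency:

```javascript
// Back-of-envelope raw transfer times at throttled connection speeds.
const downlinkKBps = {
  'GPRS':    50 / 8,   // ~50 kbit/s  -> 6.25 kB/s
  'Slow 3G': 400 / 8,  // ~400 kbit/s -> 50 kB/s
  'Fast 3G': 1600 / 8, // ~1.6 Mbit/s -> 200 kB/s
};

function secondsToTransfer(kilobytes, preset) {
  return kilobytes / downlinkKBps[preset];
}

secondsToTransfer(2000, 'Slow 3G'); // a 2 MB page: 40 seconds of raw transfer
```

A page weight that feels instant on office fiber turns into a half-minute wait on a bad mobile link, before any parsing or rendering even starts.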
I once spent around an hour optimizing a feature because it felt slow - turns out that the slower simulated connection had just stayed enabled after a restart (can’t remember if it was just the browser or the OS, but I previously needed it and then later just forgot to turn it off). Good times, useful feature though!
hahaha - I've done something similar. I had an automated vitest harness running and at one point it ended up leaving a bunch of zombie procs/threads just vampirically leeching the crap out of my resources.
I naturally assumed that it was my code that was the problem (because I'm often the programmer equivalent of the Seinfeld hipster doofus) and spent the next few hours optimizing the hell out of it. It turned out to be unnecessary but I'm kind of glad it forced me into that "profiling" mindset.
I wonder if that would help on old computers that freeze up when you try to load the GB of JS that is the ad-auction circus of a news website. I want to browse already-loaded pages while new tabs load. If the client just hangs for 2 minutes, it gets boring fast.
Datapoint: During the pandemic, I had to use an old 2004 Powerbook G4 12" (256 MB RAM, OS X Leopard). Everything sort of worked and was even reasonably snappy. But open one website, and the machine went down. Unusable. Even if, indeed, I just wanted to read or look up a few kB of text. So painful.
One tool I've found useful in low-power/low-bandwidth situations is the Lynx web browser [1]. It used to be installed by default in most Linux distributions, but I think that's probably not the case anymore. Wikipedia says it's also available on OSX and Windows.
It’s the speed of the JavaScript engine; those old browsers were expected to handle a few kilobytes of event listeners at most. The Chrome vs Firefox browser wars sped up JavaScript execution by at least 10x.
I still test mine on GPRS, because my website should work fine in the Berlin U-Bahn. I also spent a lot of time working from hotels and busses with bad internet, so I care about that stuff.
Developers really ought to test such things better.
Thank you for doing this! I really mean it. We need more developers who care about keeping websites lean and fast.
There's no good reason a regular site shouldn't work on GPRS, except maybe if the main content is video.
CPU/network throttling needs to be set for the product manager and management - that's the only way you might see real change.
We have some egregious slowness in our app that only shows up for our largest customers in production but none of our organizations in development have that much data. I created a load testing organization and keep considering adding management to it so they implicitly get the idea that fixing the slowness is important.
For macOS users you can download the Network Link Conditioner preference pane (it still works in the System Settings app) to do this system wide. I think it's in the "Additional Tools for Xcode" download.
I had a fairly large supplier that was so proud of having implemented functionality that deliberately (in their JS) delays the handling of HTTP responses, so that they can showcase all the UI touches like progress bars and spinning circles. It was an option in system settings you could turn on globally.
My mind was blown, are they not aware of F12 in any major browser? They were not, it seems. After I quietly asked about that, they removed the whole thing equally quietly and never spoke of it again. It's still in release notes, though.
It was like 2 years ago, and browsers had been able to do that for 10-14 years by then (depending how you count).
That's great. Well, just to let them know if they ever need something like that in the future, I'm available for hire as an overpriced consultant.
I guarantee with 100% satisfaction that my O(n^n) code will allow visitors sufficient time to fully appreciate the artistic glory of all the progress bars and spinners.
For Firefox users, here's where it's hidden (and it really is hidden): Hamburger menu -> More tools -> Web developer tools, then keep clicking on the ">>" until the Network tab appears, then scroll over on about the third menu bar down until you see "No throttling", that's a combobox that lets you set the speed you want.
Alternatively, run uBlock Origin and NoScript and you probably won't need it.
What a weird comment, not sure what you are trying to achieve. Any web developer knows how to find the network tab of the web developer tools in any browser including Firefox, and then the throttle option is immediately there.
You can make it look like any feature in any UI is hidden by choosing the longest path to reach it, using many words to describe it despite the target audience already knowing this stuff, and making your windows as small as possible.
Moreover, that a developer tool is a bit hidden in submenus in a UI designed for nontechnical users is fair game.
Even considering this, right click > inspect or Ctrl+shift+k also gets you the web developer tools. Not that hidden.
And then usually the network tab is visible immediately, it is one of the first tabs unless you moved it towards the end (even then, usually all the tabs are visible; but it's nice you can order the tabs as you want, and that a scroll button exists for when your window is too small -- and if the web developer panel is too small because it's docked at the left you can resize it, dock it to bottom or undock it).
This stuff is pretty standard across browsers, it's not like Firefox's UI is specifically weird for this. I don't have ideas for improving this a lot, it looks quite well designed and optimized to me already.
And then no, uBlock Origin and NoScript can't help you optimize the size of the web page you are working on. You ought to unblock everything to do this. They are a solution for end users, who have very few reasons to use the throttle feature. And unfortunately for end users, blocking scripts actually breaks too much to be a good, general workaround against web pages being too heavy. I know, I browse the web like this.
A nitpick to add to the sibling comment, more a minor personal annoyance than anything: No throttling is a menu button that, when clicked, gives you a dropdown menu - not a "combobox". A combobox is a text input element that has an associated dropdown menu.
I see this mistake very often from people whose UI learnings came via Visual Studio, because it didn't have a separate UI element named "dropdown menu" or similar. You instead had to add a combobox and configure an option to turn it into a plain dropdown list (e.g. set editable to false in VB6, or change dropDownStyle in VB.net).
Peanuts! My wife’s workplace has an internal photo gallery page. If your device can cope with it and you wait long enough, it’ll load about 14GB of images (so far). In practice, it will crawl along badly and eventually just crash your browser (or more), especially if you’re on a phone.
The single-line change of adding loading=lazy to the <img> elements wouldn’t fix everything, but it would make the page at least basically usable.
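The proper one-line change belongs in whatever template emits the `<img>` tags, but as a sketch of the retrofit when you only have the generated markup (the regex approach is illustrative only, not a robust HTML parser):

```javascript
// Illustrative: opt every <img> that doesn't already declare a loading
// attribute into native lazy loading, so off-screen images aren't
// fetched until the user scrolls near them. A regex can misfire on
// unusual markup; use a real parser or fix the templates in production.
function addLazyLoading(html) {
  return html.replace(/<img(?![^>]*\bloading=)/gi, '<img loading="lazy"');
}

addLazyLoading('<img src="a.png"><img loading="eager" src="hero.png">');
// -> '<img loading="lazy" src="a.png"><img loading="eager" src="hero.png">'
```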
Reserve a huge share of the blame for the “UX dEsIgNeRs”. Let’s demand to reimplement every single standard widget in a way that has 50% odds of being accessible, has bugs, doesn’t work correctly with autofill most of the time, and adds 600kB of code per widget. Our precious branding requires it.
> Let’s demand to reimplement every single standard widget in a way that has 50% odds of being accessible, has bugs, doesn’t work correctly with autofill most of the time, and adds 600kB of code per widget.
You're describing the web developers again. (Or, if UX has the power to demand this from software engineering, then the problem is not the UX designers.)
As a developer, I can't refuse to build as-is what was signed off by the product manager in Figma.
Recently I had to put in so many huge blurs that there was a screen-tearing-like effect whenever you scrolled a table. And no, I was not allowed to use pre-baked blurs because they wouldn't resize "responsively".
If you don’t have an engineering manager or tech lead able to back you on saying no to a PM, there is something seriously broken with that organization.
That e.g. a form should work predictably according to some unambiguous set of principles is of course a UX concern. If it doesn't, then maybe someone responsible for UX should be more involved in the change review process so that they can actually execute on their responsibility and make sure that user experience concerns are being addressed.
But sure, the current state of brokenness is a result of a combination of overambitious designs and poor programming. When I worked as a web developer I was often tasked with making elements behave in some bespoke way that was contrary to the default browser behavior. This is not only surprising to the user, but makes the implementation error prone.
One example is making a form autosubmit or jump to a different field once a text field has reached a certain length, or dividing a pin/validation code entry field into multiple text fields, one for each character. This is stupidity at the UX level which causes bugs downstream, because the default behavior implemented by the browser isn't designed to be idiotic. Then you have to go out of your way to make it stupid enough for the design spec, and some sizeable subset of webpages that do this will predictably end up with bugs related to copying and pasting or autofilling.
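To make the copy/paste failure concrete: every site with an N-box code input ends up hand-rolling paste handling that a single text field gives you for free. A hypothetical sketch of just the distribution logic (names and behavior are illustrative):

```javascript
// Spread a pasted code across N separate input boxes: strip anything
// that isn't a digit (users paste "123-456" or codes with whitespace),
// then assign one character per box. Sites that skip this step are the
// ones where pasting a code fills only the first box.
function distributePaste(pasted, numBoxes) {
  const digits = pasted.replace(/\D/g, '').slice(0, numBoxes);
  return Array.from({ length: numBoxes }, (_, i) => digits[i] ?? '');
}

distributePaste('123-456', 6); // -> ['1', '2', '3', '4', '5', '6']
```

And this is before handling backspace across boxes, focus jumping, and autofill — all of which the browser's default single input already does correctly.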
often we're told to add Google XSS-as-a-serv.. I mean Tag Manager, then the non-tech people in Marketing go ham without a care in the world beyond their metrics. Can't blame them, it's what they're measured on.
Marketing and managers should be restricted as well, because managers set the priorities.
I recently had to clean up a mess and after days asking what’s in use and what’s not, turns out nothing is really needed, and 80 tracking pixels were added “because that’s how we do it”.
You can still make a site unusable without having it load lots of data. Go to https://bunnings.com.au on a phone and try looking up an item. It's actually faster to walk around the store and find an employee and get them to look it up on an in-store terminal than it is to use their web site to find something. A quick visit to profiles.firefox.com indicates it's probably more memory than CPU, half a gigabyte of memory consumed if I'm interpreting the graphical bling correctly.
How gaslit must I be to remark how much more painless this is to use than literally any NA store website I've used.
Less useless shit popping up (with ad block, so I mean just the cookie, store-location, etc. harassments).
The store selector didn't request new pages every time I did anything, resulting in all the popups again. (Just download our spyware and all these popups will go away!)
Somehow my page loads are snappier than local stores despite being across the planet.
Not saying it's a good site. It's almost the same as Home Depot. Just slightly better.
I mean there's an AI button for searching for a product so you can do agentic shopping with a superintelligence on your side.
You don't even need video for this: I once worked for a company that put up a carousel with everything in the product line, and every element pointed straight at the high-resolution photography assets: the ones that might be useful for a full-page print media ad. 6000x4000 PNGs. It worked fine in the office, they said. Add another nice background that size, a few more to have on the sides as you scroll down...
I was asked to look at the site when it was already live, and some VP of the parent company decided to visit the site from their phone at home.
Many web application frameworks already have extensive built-in optimization features, but examples like the one you shared show that there are fundamentals many people contributing to the modern web simply don't grasp, and that the frameworks won't catch for them in many cases. It speaks to an overreliance on the tools and a critical lack of understanding of the technologies they coexist with.
I now wonder if it'd be a good idea to move our end-to-end tests to a pretty slow VM instead of a beefy 8-core 32GB RAM machine and check which timeouts get triggered, because our app may well be unoptimized for slower environments...
For blocking presubmit checks, getting the fastest machine you can is probably reasonable. Otherwise, the advantage of the craptop approach is that it needs basically no infra work and gives an immediate impression of the site, and not CI, being slow.
If you’re willing to build some infra, there’s probably a lot more you can do—nightly slow-hardware runs come to mind immediately, browser devtools have a convincing builtin emulation of slow connections, a page displaying a graph of test runtime over time[1] isn’t hard to set up, etc.—but I don’t really have experience with that.
I kid you not a few jobs ago I found several race conditions in my code and tests by running them at the same time as a multi threaded openssl burn test. :)
Gonna bookmark that article for tomorrow, craptop duty is such a funny way to put it.
Similarly, a colleague I had before insisted on using a crappy screen. Helped a lot to make sure things stay visible on customers’ low contrast screens with horrible viewing angles, which are still surprisingly common.
Music producers often have some shitty speakers known as grot boxes that they use to make sure their mix will sound as good as it can on consumer audio, not just on their extremely expensive studio monitors. Chromebooks are perfectly analogous. As a side note, today I learned that Grotbox is now an actual brand: https://grotbox.com
We should also periodically give designers small displays with low maximum contrast, and have them actually try to accomplish everyday tasks with the UX they have designed.
If you want to see context aware pre-fetching done right go to mcmaster.com ...
There are good reasons to have a small cheap development staging server, as the rate-limited connection implicitly trains people what not to include. =3
Making it easy to buy stuff from them definitely helps their bottom line. Unfortunately, the few companies I wanted to buy from but whose websites were horrible enough to make me go elsewhere either completely ignored or dismissed my complaints about having just lost a customer.
Well, as long as the website is already fully loaded and responsive, and the videos show a thumbnail/placeholder, you are not blocked by that. Preloading, even very aggressive preloading, is a thing nowadays. It is hostile to the user (because it burns network traffic they pay for), but project managers will often override that to maximize gains from ad revenue.
This is a general problem with lots of development: network, memory, GPU speed. The designer/engineer is on a modern Mac with 16-64 GB of RAM and fast internet. They never try how their code/design works on some low-end Intel UHD 630 or whatever. Lots of developers make 8-13-layer blob backgrounds that run at 60 or 120fps on their modern Mac but at 5-10fps on the average person's PC because of 15x overdraw.
I used the text web (https://text.npr.org and the like) through Lynx. Also Usenet, Gopher, Gemini, some 16 KBPS Opus streams, everything under 2.7 KBPS when my phone data plan was throttled and I was using it in tethering mode. Tons of sites did work, and gopher://magical.fish ran really fast.
Bitlbee saved (and still saves) my ass, with tons of protocols available via IRC using nearly nil data to connect. Also, you can connect with any IRC client from the early '90s onward.
Not just web developers. Electron lovers should be throttled with 2GB-of-RAM machines and some older Celeron/Core Duo box with a GL 2.1-compatible video card. If the desktop 'app' is smooth on that machine, your project is ready.
I'm pretty damn sure those videos were put on the page because someone in marketing wanted them. I'm pretty sure then QA complained the videos loaded too slowly, so the preloading was added. Then, the upper management responsible for the mess shrugged their shoulders and let it ship.
You're not insightful for noticing a website is dog slow or that there is a ton of data being served (almost none of which is actually the code). Please stop blaming the devs. You're laundering blame. Almost no detail of a web site or app is ever up to the devs alone.
From the perspective of the devs, they expect that the infrastructure can handle what the business wanted. If you have a problem you really should punch up, not down.
> Please stop blaming the devs. You're laundering blame. Almost no detail of a web site or app is ever up to the devs alone.
If a bridge engineer is asked to build a bridge that would collapse under its own weight, they will refuse. Why should it be different for software engineers?
It's a website and not a bridge. Based on the description given, it's not a critical website either. If it was, the requirements would have specified it must be built differently.
You're not even arguing with me BTW. You're arguing against the entire premise of running a business. Priorities are not going to necessarily be what you value most.
> If it was, the requirements would have specified it must be built differently.
I’ve seen a lot of times where “business people” ask for a feature that sounds good but isn’t technically viable for any number of reasons. The devs not doing pushback would lead to similarly non-functional/broken stuff getting shipped.
The pushback doesn’t even need to be adversarial, just do some requirements engineering, figure out what they want and go “Okay, to implement X in the best possible way, we should do Y and avoid Z because of W.”
In the bridge analogy, the people who are asking for a specific design might not know that it’d collapse under its own weight and the engineers should look for the best solution.
There are environments where devs can't do that sort of requirements engineering and those are generally pretty dysfunctional - obviously you don't need that for every feature request, but it's nice to have that ability be available when needed.
While I assume that there are plenty of critical websites out there which are built with efficiency and resource consumption control in mind, the few I have worked on were not.
On those sites you’re right: the approach was different, but not necessarily better. Tracking library bloat and marketing-driven design were reduced. But insane “security” constraints (e.g. “you have to stay on outdated revisions of this library” or “containers are not allowed on the backend, only bare metal”, no joke—constraints that led to significant increased security risk) and extremely user-hostile design practices increased, as well as there being an exceedingly long hurry-up-and-wait turnaround time for shipping important fixes/improvements.
Working on a safety/state-critical site isn’t a panacea, in other words.
And the devs are responsible for finding a good technical solution under these constraints. If they can't, for communicating their constraints to the rest of the team so a better tradeoff can be found.
this isn't purely laundering blame. it is frustrating for the infrastructure/operations side that dev teams routinely kick the can down to them instead of documenting the performance/reliability weak points. in this case, when someone complains about the performance of the site, both dev and qa should have documented artifacts that explain this potential. as an infrastructure and reliability person, i am happy to support this effort with my own analysis. i am less inclined to support the dev team that just says, "hey, i delivered what they asked for, it's up to you to make it functional."
> From the perspective of the devs, they expect that the infrastructure can handle what the business wanted. If you have a problem you really should punch up, not down.
this belittles the intelligence of the dev team. they should know better. it's like saying "i really thought i could pour vodka in the fuel tank of this porsche and everything would function correctly. must be porsche's fault."
this still undersells the developers' intelligence and presses the metaphor a bit too far. if the implication is that the developers are unaware of (or do not have access to) infrastructure capabilities, that seems like a procedural failure (communication, education, information, etc). i wouldn't expect developers to know everything, but i'd expect them to be curious about how their work will interact with the goal at large.
I agree except for your definition of "developers". I see this all the time and can't understand why the blame can't just be the business as a whole instead of singling out "developers". In fact, the only time I ever hear "developers" used that way it's a gamer without a job saying it.
The blame clearly lies with the contradictory requirements provided by the broader business too divorced from implementation details to know they're asking for something dumb. Developers do not decide those.
Fuck that. I just left a job where the IT dept just said "yes and" to the executives for 30 years. It was the most fucked environment I've ever seen, and that's saying a lot coming from the MSP space. Professionals get hired to do these things so they can say "No, that's a terrible idea" when people with no knowledge of the domain make requests. Your attitude is super toxic.
I suppose I understand why devs who don’t know how to say no, or work with stakeholders, are terrified of AI. What value do you have, at this point, when you’re unwilling to or incapable of pushing back on bad ideas?
You'd have to define a "bad idea" much more precisely and in the context of that particular business.
Developers often do push back and warn against ideas that have too many compromises, but cannot outright say no just because of that. There are too many other people involved.
You seem to think that any one person/group has/wants/should have full control when deemed necessary. That doesn't make sense unless either the success criteria are lacking (you call the shots alone and probably miss a ton of opportunities), or the requirements are so constrained that all the work is just optimizing the implementation (someone else already called the shots without you).
If your work is either of those situations it means the business plan sucks. AI is the least of your worries.
There’s a magic word that can be used in scenarios like this: “No.”
Failing that, interpret the requirements.
Nobody can watch a bunch of videos at once that don’t even show up until you scroll! That’s a nonsense requirement and the dev’s failure to push back or redirect in a more viable direction is a sign of their incompetence, not that of the non-technical manager that saw YouTube’s interface and assumes that that’s normal and doable.
It is! You’d have to know about lazy loading and CDNs, but neither is black magic.
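Worth noting: native `loading="lazy"` covers `<img>` and `<iframe>`, but for video you typically gate the `src` behind an IntersectionObserver yourself. The visibility decision at the core of that is simple enough to sketch as plain data (the function name and the 200px prefetch margin are illustrative):

```javascript
// Decide whether an element's bounding box is within `margin` px of the
// viewport. In the browser, `rect` would come from getBoundingClientRect()
// inside an IntersectionObserver or scroll handler; only then would you
// attach the real video src and let it start buffering.
function shouldLoad(rect, viewportHeight, margin = 200) {
  return rect.top < viewportHeight + margin && rect.bottom > -margin;
}

shouldLoad({ top: 900, bottom: 1400 }, 800);  // true: just below the fold
shouldLoad({ top: 5000, bottom: 5500 }, 800); // false: far down the page
```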
> You’d have to know about lazy loading and CDNs, but neither is black magic.
I suppose you've never experienced the corporate hell that can happen with a CDN. The dev could submit a dozen servicenow tickets only to see half of them rejected by those same incompetent non-technical managers, or they could just make the thing work now and move on.
The next project will be better after the dust settles and those rejections have been reviewed and escalated into proper discussions. Nobody tells the story of that project because it does the things everyone expects. Guess who led those discussions and fought to get the meetings on the calendar? The "incompetent" devs of course!
It's not a sign of their incompetence, it's a sign of the realities of many corporate environments.
But hey, if you want to rail against incompetent developers who exist in a make-believe world where they hold all the power are simply too lazy and incompetent to 'do the right thing' then go ahead!
There’s nothing “make believe” here, incompetent devs, and devs (regardless of competence) who don’t push back against silly requirements _absolutely_ exist.
Stop making excuses and start taking ownership and responsibility of your craft.
I work in huge government departments, large financial orgs, and other "enterprise" places that are the poster child for the "realities of corporate environments".
Automatically saying "yes" to everything makes you a useless meat robot.
If you do everything that the customer asks, without push back, negotiation, or at least a deeper understanding, then you will produce broken garbage.
I see this all the time: "The customer asked for X, so I pressed the button!" is the cry of the incompetent junior tech that will never be promoted.
Nobody wants a uselessly slow website. Nobody wants to piss off their customers. Nobody wants angry rants about their online presence to make headline news.
What the customer wanted was multi media content. That's fine. The technical specifics of how that is presented is up to the engineering team to decide. You're not advisors! You own the technical decision making, so act like it.
If you make the decision to shove nearly a gigabyte down the wire to show the landing page, then that's on you. The manager asking for "video clips" or whatever as the feature probably doesn't even know the difference between megabyte and gigabyte! They shouldn't have to in the same way that I shouldn't have to know about my state's electrical wiring standards if I get a sparky out to add a porch light. If my house burns down, that's the electrician's fault, not mine as the customer!
Similarly, if someone asks for lights inside their pool, an electrician that strings ordinary mains cabling through the water should be jailed for criminal negligence. Obviously, only special low-voltage lighting can be used in water, especially near people. Duh.
Act like an electrician, not like a bored shopkeeper who's memorised the line "the customer is always right" without realising that the full quote ends in "... in matters of taste."
Man, I probably say no to like 40% of the requests I get as a dev. Often we will come up with a better way of doing things by just spending 15-30 mins talking to the business about the actual problem they are having.
Some are just flat out refused as they are just too stupid and will cripple the system in some way.
The devs are the subject matter experts. Does marketing understand the consequences of preloading all those videos? Does upper management? Unlikely. It’s the experts’ job to educate them. That’s part of the job as much as writing code is.
From the perspective of the devs, they have a responsibility to say when something literally won't fly anywhere, ever; saying the business is responsible for every bad decision is a complete abrogation of your responsibilities.
Why don't you tell your boss or team something like that and see how well that flies.
The responsibility of the devs is to deliver what was asked. They can and probably do make notes of the results. So does QA. So do the other stakeholders. On their respective teams they get the same BS from everyone who isn't pleased with the outcome.
Ultimately things are on a deadline and the devs must meet requirements where the priority is not performance. It says nothing about their ability to write performant code. It says nothing about whether that performant code is even possible in a browser while meeting the approval of the dozens of people with their own agendas. It says everything about where you work.
Maybe everyone’s got a different situation, but when a different department tried to put ActiveX avatars all over their site, though it offended me from a UX perspective, I was able to get higher ups to reject it by pointing out that it would shut out 20% of their customers.
We always have discussions here about how you have to learn to communicate your value to clients in a language they understand. Same goes for internal communications.
> The responsibility of the devs is to deliver what was asked.
Software development isn't factory work. And factory workers are expected to notice problems and escalate them.
Anyway, they're paying me far too much to have me turn off my brain and just check the boxes they want checked in all situations. Sometimes, checking boxes because they need to be checked is the thing to do, but usually it's not.
I didn't say anything about their development abilities, what I am pointing to is their professional responsibility. If a doctor is asked by a client to cut off their arm and they say no, and the client fires them, did the doctor err? (No) This doesn't comment on their ability to do surgery.
So just to check, instead of doing something you were told to, that you know is a stupid idea (after telling all concerned it's a dumb idea and being told to go ahead anyway, eg adding a crapton of video to a page), you would just resign, to protect your personal integrity?
No, you offer a technical solution to the problem. "Show some videos" is the problem; downloading almost a GB of video content on page load is the (bad) technical solution. There are better ways, and as a developer it's part of your job to solve things in a way that makes sense.
At $31/month, I'm pretty sure that even at today's prices you could buy a dedicated machine for this purpose alone, and you'd be saving $30 every month within about a year.
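The break-even math, with a guessed one-off price for the dedicated box (only the $31/month figure comes from the comment above):

```javascript
// Hypothetical comparison: $31/month hosted vs a ~$400 one-time machine.
const monthlyCost = 31;
const dedicatedBoxPrice = 400; // assumed price, for illustration

// Months until the one-off purchase pays for itself; every month after
// that is ~$31 saved (ignoring power and maintenance).
const breakEvenMonths = Math.ceil(dedicatedBoxPrice / monthlyCost); // 13
```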