ecmascript's comments

Once I wrote a WordPress image downloader that would recursively crawl a WordPress blog and download all of its images. The only issue was a bug that made it keep redownloading the same images over and over, which made the data-transfer bill go through the roof.

Ended up scrapping the project and didn't pay the bill. Luckily it was hosted on a platform that just complained at me for going over the limit instead of requiring me to pay.


Very cool. I have had my eye on LiveView since it came out, but since what I am building is a heavy user of maps and client-side functionality with offline support, LiveView unfortunately brings pretty little to my table and would be impractical.

Besides, I can get the same functionality for most of the app with Server-Sent Events and an EventEmitter in Node, even if it is a tiny bit more of a hassle. And since SSE is a better protocol than WebSockets (especially with HTTP/3), you also get benefits you can't get with LiveView, such as things still working when customers have proxies or firewalls that block anything that isn't plain HTTP.

I really like Elixir as a language and I think Phoenix LiveView is a game-changer. I can't fathom why more people who don't have the same client-side requirements as I do aren't using it.


I'll start since this year I'm finally in the process of building my first startup, even if it isn't official yet.

What is your tech stack?

I use React with the Remix lib/framework together with Mapbox for mapping purposes and PostgreSQL and SQLite for data storage. I host everything on a GleSYS server. I also use some C# that acts as a plugin to retrieve and sync data to the web app.

Why did you choose it?

I chose it because I am building a client-heavy front-end application and because it is tech I already knew. I need a lot of client-side functionality, so using tech I already have knowledge of seemed easiest, and I can bring whatever I learn to my day job and vice versa to get the synergy effects.

Do you think your choices had any impact on your success or failure?

Haven't had a failure or a success yet since I am still in the process of building it.


The only way of fighting back is to stop using Chrome and preferably Chromium-based browsers altogether.

I recommend using https://librewolf.net/


Very good initiative, I will try it out for my project and donate monthly if it is successful!


I use Element X every day. On the phone it's very buggy and sometimes doesn't get push notifications for the latest messages.


What OS and build, and can you please link me to GH issues so I can chase them? We're not aware of any push problems.


Android 14, build AP2A.240905.003

But it has only happened a few times for me, and I have not created any GitHub issues. I only have one friend on Matrix, so I don't get that many messages :)

But each time I open the app, the syncing is kinda slow, much slower than on the computer in my experience. Isn't it possible to enable fetching in the background?


Hah, sorry - I was trying to find out what build of EX you were on, not Android :D

That said, from these symptoms it really sounds like you are on an older build of the app - and probably using the old sliding-sync proxy rather than the native sync implementation that has now landed in Synapse. The new builds also fetch in the background (whenever you get a push).

Please upgrade, try again, and failing that, file bugs on github.com/element-hq/element-x-android!


I have never understood the rush for "the cloud".

I run all my shit on a VPS (which could be called a cloud) or a rented dedicated server, but that is so easy to set up, and I can run all my projects on the same server. Easy, simple, and if I need to scale I just rent a bigger server.

Scaling vertically is easy; scaling horizontally is hard. Most people never need to scale horizontally but do so anyway because they think they do.

You also get something like 10x the performance for the same money. Using SQLite makes it easy to have backups and even point-in-time testing databases.


Because most companies/startups have this in-built assumption that they will eventually grow like wildfire. Obviously this is because they are selling to investors who want this to happen so they can cash out on their unicorn. So they just build it in the cloud in the first place with that in mind.

Now in some minuscule number of cases this is true, and it probably did help some people whose business 100x'd overnight, but in the vast majority of cases your business will just never get to the point where it needs to be "cloud scale" in the first place. Never mind accidentally shooting yourself in the foot with a recursive lambda here and there, or a misconfiguration causing a huge bill.

Edit: Another is because lots of companies who do actually end up succeeding negotiate a shit-load of credits with cloud providers so they can basically grow their business for free for a while. That is until those credits run out and they get hit with the actual costs.


For most tech startups, when the growth period hits, it's vital that there is NO DELAY in scaling. Miss the timing and it's permanently gone. If word of mouth leads to a sudden surge of orders, you must be able to supply the goods, or you get very angry customers who walk straight through the competitor's door.

Most of the value generated by startups is highly concentrated in the few that succeed. So naturally the industry should optimize itself to go big or go home, not to penny-pinch. That's also why they hire expensive engineers rather than offshored developers: speed matters more than cost.

As for non-tech companies: their demand is more stable in the long run, but they are not tolerant of outages. Amazon cannot have its servers go down during a big sale; too many physical ongoing costs get wasted for every second the central nervous system is down. So the cloud is good for its reliability.


For most people I agree.

When AWS was first getting big, though, it solved genuine, really hard problems for a lot of organisations that were large or growing quickly. NVMe drives didn't exist, SSDs were expensive, and a lot of servers still had spinning SAS drives – a little box with some RAM and some NVMe drives didn't scale as ridiculously far as it does now.

I do think that as computers keep getting faster and smaller, the number of use cases that need a 'cloud' shrinks very quickly though.


The problem is, the larger your organization is, the more difficult it is to break free from vendor lock-in.


The initial rush for what is now called the public cloud was caused by many factors, the main ones being "you don't need to manage things as we will do it for you" (=you save on operations) and "you can scale up and down how much you want without any commitment" (=you save on recurring costs) with success stories like the NYT resizing tons of images fast without having to rent servers, configure them etc.

Fast forward and what we now have is a terribly complex beast with nets of dependencies that got developed partly by marketing and product teams, partly by demands of larger customers. And it's more or less clear that if you are small, you will be much better off using VPS (that's why Amazon decided to offer Lightsail), and if you are very big, you will save a lot of money moving at least part of your infra away from the public cloud.

But what remains is a large part of the market: medium sized businesses and large organizations that depend on the public cloud for many reasons. But they are not stupid: once a project becomes expensive, someone starts asking questions. And after you've exhausted the path of reserved instances, spot etc., and still burn a lot of money with not-always-stellar performance, you'll find a way of moving these workloads where it makes business sense.


One recurring pattern I have seen at multiple customer sites is that scaling makes the engineers lazy to optimize. One production performance calamity forces the team to add CPUs as a quick fix, and from then on the baseline for the product's requirements is set to the new number of CPUs.

"Back in the olden days", if your product was slow but the number of CPUs was fixed (or could not be increased instantly), the solution was to go and fix your code.

Basic system level skills are now no longer taught or practiced at the appropriate levels, so teams end up without engineers who actually know how to profile and optimize.

The cloud providers are the big winners here.


"scaling makes the engineers lazy to optimize"

I've lost track of the number of times I've heard "compute is cheap! engineers are expensive!" Except... that compute cost will live forever. The time it takes someone to debug a bad loop or a poor query is at worst a one-time cost. Longer term, it may even make other stuff faster in the future.


Looking at some numbers on cloud, yeah I don't believe in that statement anymore. End device compute might be cheap, but cloud services certainly are not.


We've also seen a rise (or maybe I just notice it more) of stories of "I changed these 3 lines in 5 minutes and saved my company $40k/month!"

And then you'll get responses like "pfft.... that's hardly the cost of one part time FAANG person who makes $680k/year - what's the point?"

And around we go...
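Taking the (hypothetical) figures from this exchange at face value, the break-even arithmetic is not close:

```javascript
// Hypothetical numbers from the thread above
const monthlySavings = 40_000;       // saved by the 5-minute fix, every month
const engineerYearlyCost = 680_000;  // the "expensive engineer"

// Cost of five minutes of that engineer's time (~2080 working hours/year)
const hourlyRate = engineerYearlyCost / 2080;
const fixCost = hourlyRate * (5 / 60);

// A one-time cost of tens of dollars vs. a recurring saving of $480k/year
console.log(`fix cost: $${fixCost.toFixed(0)}, first-year saving: $${monthlySavings * 12}`);
```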


Because it's cheaper now to throw hardware at a problem than actually try to fix the code or root of the issue. This wasn't the case a couple of decades ago.

It also means fewer engineers are needed at most companies.


> Because it's cheaper now to throw hardware at a problem than actually try to fix the code or root of the issue.

It's not cheaper, it's just more opaque.

Back when your service was deployed on that 2 CPU box and it was too slow for obvious reasons, you optimized it and then it was good.

Today you just shrug, increase that Kubernetes cluster from 16 to 48 nodes, and forget about it. It costs a lot, but the bill shows up somewhere else; in most groups the engineer doesn't even know what it is.


It isn't that engineers are lazy; it is that management tells the engineers to work on features that make money, not performance that will save money.

Sometimes management is correct in that decision, sometimes it is "penny wise, pound foolish".


"scaling makes the engineers lazy to optimize"

More often I think that's more of an overall engineering department time budgeting / culture issue.


For large companies, a big reason is to transform capex into opex, plus the predictability. Moreover, large organizations tend to favor predictability over absolute levels, i.e. they are OK with a higher average if the variance is decreased.


This. The beginning of my career on cloud was a POC, where the director shared with me this was a major driving factor (capex > opex), as well as some of the fringe benefits.

I got to see close up that a team of devs ran their whole solution (with a bunch of paying customers and everything) in the cloud, because cloud automation was good enough that they didn't need dedicated ops people.

Now I work for a cloud provider. I can't say that if I was running a business I'd build it cloud-first instead of on-prem. Certain use cases, sure. If I didn't need a lot of horsepower, I might build it on a cluster of VMs with some segmentation of duties - not quite microservices, not quite a monolith. Most likely, if I was hosting in the cloud, I'd use the provider I work for, just because I know the system, how to get things done, and how to talk to support.

I will say though - learning the ins and outs of cloud computing has made for a great career. Challenging, but lucrative.

FTA:

> Microsoft and Google decided not to officially comment on the survey's findings. However, a representative for one of the hyperscalers retorted that the figures seemed cherry-picked and pointed out that, as an example, customers using reserved instances could realize significant savings.

Reserved instances are a thing for sure. There are lots of other ways you can control cloud spend (enterprise agreements, dev/test subscriptions, spot instances, automated shut down / scale down, etc.) - it's enough complexity by itself that big companies hire entire teams of people just to track, project, and control cloud costs.


The value proposition of "the cloud" is to ease all the "not important at first but you'll need one day" things: logging, alerting, availability, backups, SSO etc which usually requires different know-how from what devs have.

But it has become so complex that instead of an OPS team you now have a Cloud team. With a huge wallet.

Best of both worlds is to set up your own cloud on multiple VPSes, which is relatively easy nowadays: HAProxy, Rancher, Kubernetes, Keycloak, OpenWhisk, GitLab, Harbor, OpenTelemetry + Prometheus + Grafana, and your devops will feel right at home.


If you can run everything on just a couple of servers, then the cloud doesn't make sense. But there are a lot of companies that have tens of thousands of servers scattered around the world; just tracking them is a headache, and you can offload a lot of other overhead as well. Remember, you are paying for the people in that computer room full time whether you are in the cloud or not, but the cloud lets you share those costs when you don't need the computer.


Can't people figure out some other tooling besides bundlers? I mean, how many do we really need?

It's probably fine, but so are all the others. The authors have probably spent a fair amount of time on this project, so I don't want to be negative, but it's just hard to be excited when it brings nothing new to the table.

Why should I use this over Vite or esbuild? Because it's written in Rust? I don't understand why that even matters. Even if it were 10 times faster I wouldn't use it, because Vite is fast enough and has all the plugins I would ever need.


Why does it matter that it is written in Rust? Because there are already a few JS tools written in Rust, so you can now use the crates from projects like Deno[0], OXC[1], BiomeJS[2], etc. to write your own tool with minimal effort.

Also note that the Vite team is writing Rolldown[3], and guess what? They are writing it in Rust.

[0] https://crates.io/search?q=deno [1] https://crates.io/search?q=oxc [2] https://crates.io/search?q=biome [3] https://rolldown.rs


None of those tools you quoted are production-ready based on my investigation, in the sense that if you managed the JS infrastructure of a company with 2,000 developers, you would stick with webpack. A lot of Rust-based tooling is still half-baked and missing things here and there, so much so that you wish these people would work together to create one (or at most two) tool that is comparable to webpack.


> None of those tools you quoted are production ready based on my investigation

This is very true and almost all of them are taking far longer to develop than they initially thought. swc/turbopack is being pushed by Vercel and it has been a huge ongoing disaster.


Yeah okay, but that's not why people put it in the title. They put it in the title because they know many engineers like Rust and think people will immediately be drawn to it.

But the language itself is not a goal, or at least shouldn't be, IMO. Thus it has the opposite effect on me, since I do not care what language my bundler is written in.

If I did, it still wouldn't be a competitive advantage since, as you point out, Vite will soon also be based on Rust.


I've read some of the FAQ and docs.

Their reason is to have a fast bundler with the flexibility needed for their business cases. In other words, they are making internal tooling publicly available.

Being faster than esbuild is not a goal; getting people excited about speed is not a goal. Having control over the tooling, flexibility, being fast enough, and being open source are the goals.


You'll get downvoted but I completely agree, it seems rewriting things in Rust and tinkering with bundlers is the new in-vogue thing to do. Lord knows why


I didn't enjoy the Rust hype on here in years past, but I'm always glad of any better tooling. Just one example from the other week: I swapped out nvm for fnm (Rust), and now I don't have to put up with performance issues, especially slow shell startup times.


Just me being curious, since I have used nvm for years without any issues: what do you mean by slow shell startup times? In what way do you use nvm that you experience any slowness?


I followed the standard nvm install process, so it gets loaded from my .zshrc.

I noticed a second or two of lag between launching the terminal and getting a shell prompt. Commenting out the nvm load as a test removed the delay. I installed fnm, aliased it to nvm, and everything is snappy. It's also nicer if you use tooling to 'nvm use' when changing into a project directory.

There are a few issue threads about it, such as this one: https://github.com/nvm-sh/nvm/issues/2724

BTW, this blog post was great for finding the culprit if there is zsh startup latency: https://stevenvanbael.com/profiling-zsh-startup


Damn, I have a hard time reading articles like this since they feed into the health anxiety I have developed over the years after having many issues with my bowels.

I have several issues due to this, reflux being one, and I am afraid of getting a diagnosis like this man's. Somehow, by reading about it, I can catch myself almost convincing myself that I must have something else badly wrong, even if it is just the same old problems I have had ever since the symptoms first appeared.

I sincerely hope that he survives his disease and that researchers can develop vaccines against cancer.


They’re already doing clinical trials on cancer vaccines in many countries.

https://www.theguardian.com/science/article/2024/may/31/what...


Seems like the server disconnects pretty often. I have gotten disconnected twice now, which ruined the boxes I had selected. :(


Yes, sorry - I just had to upgrade the server and deploy a new version, which now shows an updated count of checked boxes.

