Alternatively: this is an America problem. I'm outside of America and I've been fielding more interviews than ever in the past 3 months. YMMV, but slowed-down hiring can be driven by so many things, including companies just waiting to see how much LLMs affect SWE positions.
It's from AI, either directly or indirectly: either the top SWEs using AI are replacing 10 mid-level/junior engineers, or your job is outsourced to someone doing it at half your salary with an AI subscription. Only the top/lucky/connected SWEs will survive a year or two. If you have used any SOTA agent recently or looked at the job market, you would have seen this coming and had a plan B/C in place, i.e. enough capital to generate passive income to replace your salary, or another career that is AI-safe for the next 5-10 years. Alternatively, stick your head in the sand.
I guess I just don't see that happening right now. I'm at a big public startup, our hiring hasn't changed much, and we still have a ton of work. Claude Code with SOTA models can shortcut some tasks, but I'm still having a hard time saying it's giving us much of a multiplier, even with plenty of .md files describing what we want. It can ad-lib some of the stuff, but it's not AGI yet. In 5-10 years, I have no idea.
In Europe it doesn't seem too bad right now (for the 15+ yr cohort?). I interviewed at a handful of places and got an offer or two, and my current team and company are hiring about the same as in the last few years.
I’ve seen this play out at multiple startups. The people holding things together often don’t fit neatly into the KPIs or eng chart levels. They’re mentoring juniors, redesigning workflows, updating documentation, and bridging gaps between departments. Because their scope isn’t bundled into a single “initiative,” reviews don’t always capture their true impact.
I’ve felt this personally working on the design system used across the entire engineering org. Three years after I left, that system is still the foundation the team builds on. At the time, the cross-team coordination and invisible maintenance work pulled me away from more visible deliverables, so it was harder to show impact in a review cycle. But the endurance of that system is its own validation—it shows how much hidden glue work pays off when invested properly.
The takeaway for me is that the best orgs figure out how to see this kind of work before it fades into the background. If you can spot and reward it early, you not only retain the people doing it, you build resilience into the team itself.
I remember first hearing about protein folding with the Folding @Home project (https://foldingathome.org) back when I had a spare media server and energy was cheap (free) in my college dorm. I'm not knowledgeable on this, but have we come a long way in terms of making protein folding simpler on today's hardware, or is this only applicable to certain types of problems?
It seems like the Folding @Home project is still around!
As I understand it, Folding@home was a physics-based simulation solver, whereas AlphaFold and its progeny (including this) are statistical methods. The statistical methods are much, much cheaper computationally, but they rely on existing protein folds and can't generate strong predictions for proteins that don't have some similarity to proteins in their training set.
In other words, it's a different approach that trades versatility for speed, but the speedup is significant enough to make it viable to generate folds for essentially any protein you're interested in. It moves folding from something that's almost computationally infeasible for most projects to something you can just do for any protein as part of a normal workflow.
1. I would be hesitant to say Folding@home isn't statistics-based; it uses Markov state models, which are very much statistical, and its current force fields are parameterized via machine learning ( https://pubs.acs.org/doi/10.1021/acs.jctc.0c00355 ).
2. The biggest difference between Folding@home and AlphaFold is that Folding@home tries to generate the full folding trajectory, while AlphaFold does only structure prediction, aiming to match the final folded crystal structure. Folding@home can do things like look into how a mutation may make a protein take longer to fold or be more or less stable in its folded state. AlphaFold doesn't try to do that.
You're right, that's true; I'd glossed over the Folding@home methodology a bit. I think the core distinction is still that Folding@home is trying to divine the fold via simulation, while AlphaFold plays closer to a GPT-style predictor relying on training data.
I actually really like AlphaFold because of that. The core recognition that an amino acid string's relationship to a protein's structure and function is akin to the relationship between the cross-interactions of words in a paragraph and the overall meaning of the excerpt is one of those beautiful revelations that come along only so often, and they're typically marked by leaps like the one AlphaFold represented for the field. The technique has a lot of limitations, but it's the kind of field cross-pollination that always generates the most interesting new developments.
The network bandwidth between nodes is a bigger limitation than compute. The newest Nvidia cards come with 400 Gbit buses now to communicate between them, even on a single motherboard.
Compared to SETI or Folding @Home, this would work glacially slow for AI models.
No, the problem is that with training, you do care about latency, and you need a crap-ton of bandwidth too! Think of the all_gather; think of the gradients! Inference is actually easier to distribute.
Yeah, but if you can build topologies based on latencies, you may get some decent trade-offs. For example, with N=1M nodes each doing batch updates in a tree manner, i.e. the all-reduce is layered by latency between nodes; see the sketch below.
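A toy sketch of what that layering could look like, purely illustrative (two hypothetical latency tiers, hand-rolled instead of a real collectives library): gradients are summed cheaply within each low-latency group, and only one representative per group crosses the slow inter-group links before the result is broadcast back down.

```typescript
// Toy latency-layered all-reduce: reduce within fast local groups first,
// then reduce across groups over the slow links, then broadcast back down.
type Worker = { id: number; group: number; grad: number[] };

function sumVec(a: number[], b: number[]): number[] {
  return a.map((x, i) => x + b[i]);
}

function layeredAllReduce(workers: Worker[]): number[] {
  // Tier 1: cheap intra-group reduction (low latency, e.g. same rack / LAN).
  const groupSums = new Map<number, number[]>();
  for (const w of workers) {
    const acc = groupSums.get(w.group);
    groupSums.set(w.group, acc ? sumVec(acc, w.grad) : [...w.grad]);
  }

  // Tier 2: expensive cross-group reduction (high latency, e.g. WAN).
  // Only one "leader" per group ever talks over the slow links.
  let total: number[] | undefined;
  for (const partial of groupSums.values()) {
    total = total ? sumVec(total, partial) : [...partial];
  }

  // The reduced gradient would then be broadcast back down the same tree.
  return total ?? [];
}

// 4 workers in 2 latency groups, each holding a 2-element gradient.
console.log(
  layeredAllReduce([
    { id: 0, group: 0, grad: [1, 2] },
    { id: 1, group: 0, grad: [3, 4] },
    { id: 2, group: 1, grad: [5, 6] },
    { id: 3, group: 1, grad: [7, 8] },
  ])
); // [16, 20]
```

The appeal is that traffic over the slow links scales with the number of groups rather than the number of nodes, so the huge-N case only pays the worst latency once per group per step.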
Apparently, per a F@H blog post [1], it's still useful to know the dynamics of how a protein folded, in addition to the final folded shape, and ML-folded proteins are a rich target for simulation, both to validate them and to understand how the protein works.
lol, I still run it in the winter, but I feel bad running it in the summer, so I don't run it unless the heating would be on anyway. I figure some contribution is infinitely more than zero contribution.
I grew up loving magic. I watched David Copperfield on a grainy old TV, and vividly remember rewatching taped performances of "The World's Greatest Magic" (https://en.wikipedia.org/wiki/The_World%27s_Greatest_Magic) trying to figure out how the big illusions were done. I was part of a magic club and loved peeking behind the curtain. It fascinated me how, as you learned those building blocks of simple sleight of hand, you could compound and build on those components to pull off more and more impressive tricks. A double lift, palming, the French drop, etc. all pulled together into a cohesive "trick".
I feel like a lot of what entertained me about magic also pulled me towards web development. Sites and interactions online seem like magic until you realize they also break down into simple problems, simple components that build upon one another to deliver the trick. That interest in figuring out how things work just never went away I guess!
I had almost this exact interview experience recently with a popular AI startup. The exercise was to build a search UI over a static array of dictionary terms. It was a frontend role, so I wired it up with filter and startsWith and spent more time polishing the UI and UX.
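In essence something like this minimal sketch (the names are mine for illustration, not the actual exercise code):

```typescript
// Naive prefix search over a static in-memory list of dictionary terms.
const terms: string[] = ["array", "argument", "binary", "byte", "cache"];

function search(query: string): string[] {
  const q = query.trim().toLowerCase();
  if (q.length === 0) return [];
  return terms.filter((term) => term.toLowerCase().startsWith(q));
}

console.log(search("ar")); // ["array", "argument"]
```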
The final interview question was: “Okay, how do you make this more performant?” My answer was two-tiered:
- Short term: debounce input, cache results (sketched below).
- Long term: use Algolia / Elastic, or collaborate with a backend engineer to index the data properly.
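For the short-term fixes I was picturing roughly this, reusing the search() helper from the sketch above (the helper names here are hypothetical, not what I typed in the interview):

```typescript
// Memoize results so repeated queries skip the filter pass entirely.
const cache = new Map<string, string[]>();

function cachedSearch(query: string): string[] {
  const q = query.trim().toLowerCase();
  const hit = cache.get(q);
  if (hit) return hit;
  const results = search(q); // naive prefix search from the earlier sketch
  cache.set(q, results);
  return results;
}

// Debounce keystrokes so we only search after the user pauses typing.
function debounce<T extends (...args: any[]) => void>(fn: T, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    if (timer) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage: run the search ~150ms after the last keystroke.
const onInput = debounce((value: string) => {
  console.log(cachedSearch(value)); // stand-in for updating the results list in the UI
}, 150);

onInput("ar");
```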
I got rejected anyway (even with a referral), which drove home OP's point: I wasn't being judged on problem-solving; I was auditioning for the "senior eng" title.
With candidate interview tools and coding aids increasingly hard to detect in interviews, this gap between interview performance and delivering in the role is only going to widen. Curious how many of these "AI-assisted hires" will start hitting walls once they're outside of the interview sandbox.
One of the worst things about the employment market in the US is that you almost never get accurate feedback about how well you actually performed. The reasons for this are of course legal (i.e. the company doesn't want potential liability in case a rejected candidate uses the feedback to sue), but it is one of those things that works against job seekers in a major way.
- At a large tech company, a referral can help you get an interview; it rarely affects the actual hiring decision or the offer.
- As an interviewee, I might feel like I did great, but I don’t know what signal the interviewer wanted or what their bar is for that level.
My son’s school uses an adaptive test three times per year (MAP Growth). It’s designed so each student answers about 50% of the math questions correctly. Most students walk out with a similar perception of:
- how hard the test was, and
- how well they did.
Those perceptions aren’t strongly related to differences in their actual performance.
Interviews are similar. A good interviewer keeps raising the difficulty and probing until you hit an edge. Strong candidates often leave feeling 50/50. So “I crushed it” (or “that was brutal”) isn’t a reliable predictor of the outcome. What matters are the specific signals they were measuring for that role and level, which may not be obvious from the outside, especially when the exercise is intentionally simple.
Many years ago, when I interviewed at an investment bank for a structuring role, I answered all of their questions correctly, even though some of them were about things I'd never heard of (like a 'swaption'). I answered at what I thought was a reasonable pace, and only for one or two questions did I need a minute or two to work out the answer on paper. At the time, I thought I'd done well. I didn't get the job. I now know more about what they were looking for, and I'd say my performance was somewhere between 'meh' and 'good enough'. I'm sure they had better candidates.
When I interviewed at Google (back in 2014), I was asked the classic https://github.com/alex/what-happens-when question. I didn't know it was a common question, and hadn't specifically prepared for it. Nonetheless, I thought I crushed it. I explained a whole bunch of stuff about DNS, TCP, ARP, subnet masks, HTTP, TLS etc.
I said nothing about equally important things that were much less familiar to me: e.g. keyboard interrupts, parsing, rendering, ...
Luckily I passed that interview, but at the time I thought I'd covered everything important, when in reality my answer helped show the interviewer exactly where the gaps were in my understanding.
I mean, you know that the answer the interviewer was looking for was "use a trie/prefix-tree, want me to implement it?", not "that's not my job, ask another team to set up Elasticsearch".
If you're going to do coding interviews, you can say "I would use X tool", but you can't _just_ say that, you also have to say "but if I can't, I would write X algorithm, should I write it?"
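Something in this direction is probably what they wanted to at least hear offered. A minimal prefix-trie sketch, not the interviewer's reference solution:

```typescript
// Minimal prefix trie: insert terms once, then collect every term under a prefix.
class TrieNode {
  children = new Map<string, TrieNode>();
  isWord = false;
}

class Trie {
  private root = new TrieNode();

  insert(word: string): void {
    let node = this.root;
    for (const ch of word.toLowerCase()) {
      let next = node.children.get(ch);
      if (!next) {
        next = new TrieNode();
        node.children.set(ch, next);
      }
      node = next;
    }
    node.isWord = true;
  }

  // Return every stored word that starts with the given prefix.
  searchPrefix(prefix: string): string[] {
    let node = this.root;
    const p = prefix.toLowerCase();
    for (const ch of p) {
      const next = node.children.get(ch);
      if (!next) return [];
      node = next;
    }
    const results: string[] = [];
    const walk = (n: TrieNode, path: string): void => {
      if (n.isWord) results.push(path);
      for (const [ch, child] of n.children) walk(child, path + ch);
    };
    walk(node, p);
    return results;
  }
}

const trie = new Trie();
["array", "argument", "binary"].forEach((t) => trie.insert(t));
console.log(trie.searchPrefix("ar")); // ["array", "argument"]
```

Lookup cost then depends on the prefix length plus the number of matches rather than the size of the whole term list, which is the trade-off discussion the interviewer was presumably fishing for.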
Also, based on your description, you're suggesting going from entirely client-side to adding a server round-trip in order to make it more performant. I could be misunderstanding the full question and context, though.
This immediately reminded me of the teamLab Planets [1] experience in Tokyo, Japan.
Specifically "Flowers and People, Cannot Be Controlled but Live Together" [2] - the entire soundtrack for the experience was incredible, and this ambient garden took me right back. Thank you for sharing!
It's true, this project was always related to exhibits with audio sources laid out in space. While doing research for this, I came across software specifically designed to help lay out audio sources in exhibits. I hadn't heard of teamLab though; some of their art seems almost like a physical version of ambient.garden!
EDIT: After reading the article, I see the OP calls out DIY Perks specifically - the OP's design is much more compact :)
> It's compact. The total size is 19cm x 19cm x 9cm. This is quite compact for a 5cm focal length and an effective lighting area of 18cm x 18cm. Reflective designs like the DIYPerks video or commercial products like CoeLux do not achieve this form factor.
It uses a trash can plus a super-bright LED bulb plus a plastic book magnifier.
The main trick is that you can get a big, magazine-sized, flat plastic Fresnel lens for like 10 bucks.
The original poster's solution is definitely better, but it's also possible to do this on the cheap with no 3D printing (or in fact, any skills whatsoever).
Those who downplay it either are business owners themselves or have been continuously employed for the past 2+ years.
I think a lot of software engineers who _haven't_ looked for jobs in the past few years don't quite realize what the current market feels like.