> “There was no warning that I would not be able to access my accounts for five days,” she added. “If I had to use that money, it was completely inaccessible.”
A good reminder that your emergency fund should be held in cash at a bank, not in shares at a brokerage. Not that a glitch like this couldn’t block access to your bank account too, but rather that the process of liquidating and transferring securities is much slower than ACH or Zelle.
Something like Fidelity's Cash Management Account (has a debit card, automatically liquidates money market funds to cover spending) works fine as an emergency fund. But you need your emergency fund in two separate accounts at two different financial institutions to be safe from shenanigans like this.
This is not merely a theoretical risk. Patelco (a Bay Area credit union with half a million members) got hit with ransomware back in 2024 that disrupted banking services for two weeks.
You can have multiple accounts, yes; but you can also diversify via friends and family.
As a good friend, if I know that you are good for the money, I'd happily spot you even a few tens of thousands for a few days or weeks. (I'm actually more reluctant to do that with some family members, because I might be less likely to get the money back.)
I find a T-bill ladder works best for me. I keep half my emergency savings in 4-week bills. Given that my emergency savings is intended to sustain me for months, I can easily access the back half of my savings over the course of 4 weeks as each bill matures, returning another 12.5% of my emergency savings to me.
And in a worst case scenario where I can’t access the front half of my savings due to a bank run or other failure, I am only a week or so away from getting access to some part of my savings.
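Concretely, the payout schedule looks like this (a toy sketch; percentages are of the total fund, assuming exactly one bill matures per week once I stop reinvesting):

```python
# Toy sketch of the ladder above: half the fund stays liquid, the other half
# sits in four 4-week T-bills staggered one week apart. Once reinvestment
# stops, another 12.5% of the total fund matures each week.
fund_pct = 100.0
liquid = fund_pct / 2        # front half: cash/checking, instantly accessible
bills = [liquid / 4] * 4     # back half: four staggered 4-week bills

accessible = liquid
for week, bill in enumerate(bills, start=1):
    accessible += bill       # one bill matures this week
    print(f"week {week}: {accessible:.1f}% of the fund accessible")
# week 1: 62.5% ... week 4: 100.0%
```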
T-bills are highly liquid. You can sell them before they mature for very low transaction cost and get back their true market value, including the accrued interest, meaning there’s not much monetary benefit to staggering them week by week. You could just as well only roll them once a month and dip in freely if you need the cash.
If you have the capability and discipline to perform a T-bill ladder, you've gotta understand that you're not the person who general financial advice is targeted to. Being deliberately vague, but I've lost nontrivial amounts of money simply because I got distracted when doing a critical financial task and only remembered to get back to it months later. I think I can safely speculate that the story in the source article would never happen to you because you could easily locate the right account numbers if you found yourself locked out of a financial webapp.
I appreciate the vote of confidence but to be clear I just set it up on auto-reinvest for 24 months. There’s an initial setup every two years but the rest is mindless.
Maybe you’re right though. I maintain a non-trivial amount of data in my password manager to ensure I always have a centralized place to begin the hunt for information.
Not totally joking after some thought. If the problem that a person experienced is "complete loss of bank account" then having a physical backup - not at a bank! - would help to cater to that scenario.
And yeah, looks like that's not a foolproof solution either. 3rd backup option might be needed... :D
> the process of liquidating and transferring securities is much slower than ACH or Zelle.
That doesn't seem relevant here. The account disappeared due to a data maintenance error; it wasn't extracted via a legitimate transfer to another account, they literally forgot about it.
Yes, that is why I caveated my comment. But her quote is still a good reason to point out that if there is a chance you might need access to money within short notice (I’d say within a month), it should be in a checking or savings account.
Friendly reminder that banks are not a reliable place to keep emergency funds, they don't really have a vault full of everyone's cash always available.
Which is the way it's supposed to work. You keep enough for daily transactions, because multiple large-scale withdrawals happening in short succession or simultaneously is the least likely scenario during normal operation. A bank keeps records of every account's value, but at any given time it only holds enough cash to cover a fraction of the money on those books (say one fifth, i.e. the bank physically has only 20% of the money it has on record). It has to work this way because there's no way a bank could hold all of the money its customers are said to have, either because of physical space constraints or because there's literally not enough money in existence to cut it out of circulation without creating ridiculous deflation. The move away from the gold standard changed this quite a bit, and so has digital banking, but the numbers in your account are still backed by something that tangibly exists.
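To put that one-fifth figure in concrete terms (a toy sketch; the ratio is a stylized assumption, not a regulatory constant):

```python
# Toy fractional-reserve arithmetic using the stylized one-fifth figure above.
deposits_on_books = 1_000_000_000   # sum of all customer account balances
reserve_ratio = 0.20                # assumed; real ratios vary by bank and era
cash_on_hand = deposits_on_books * reserve_ratio

print(f"cash actually available: ${cash_on_hand:,.0f}")
# Withdrawals beyond that 20% can't be met from cash on hand; the bank has to
# borrow or sell assets, and a fast, simultaneous run is the failure mode.
```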
> the numbers in your account are still backed by something that tangibly exists
Only if you consider fiat money that can be printed in arbitrary amounts by Mr. Bernanke's famous printing press to be "something that tangibly exists".
Why would that be necessary? For most people, liquid funds are something that's electronic anyway, and in most countries banks can't run out of customers' electronic money. (Safeguards kick in pretty quickly.)
Most of the talk in this discussion is about personal emergencies, like being locked out of your accounts; not about system-wide bank collapses.
> they don't really have a vault full of everyone's cash always available.
When the Silicon Valley Bank collapsed, funds were only inaccessible for 72 hours, and no depositors lost any money [0]. Which is still not ideal, but most people will never experience a bank collapse, and there are plenty of banking activities that will take longer than 72 hours to process in regular circumstances anyways.
Indeed; most personal banking customers can fall back on FDIC insurance ($250k should be more than enough to cover your emergency fund). This isn't the 1920s.
Alas, for Silicon Valley Bank they went with 'too big to fail' and also covered uninsured deposits. That's moral hazard and endangers the core purpose of the insurance.
Agreed. That said, FDIC would have not been able to cover all $150 billion or so of uninsured SVB deposits directly from the insurance fund, so had that been the only available option for making depositors whole, then FDIC would have had to pass.
Well, insurance should only have covered insured deposits.
> [...] so had that been the only available option for making depositors whole, [...]
On paper, FDIC might be independent and have its own balance sheets. But in practice and given politics, FDIC itself can't fail / isn't allowed to fail. It'll always be bailed out, and that's what the market expects.
For the stability of the economy, it would have been better not to make uninsured depositors whole.
It sure isn’t the 1920s, it’s the 2020s so things like digital money are ephemeral and whimsical.
The bigger question is how much food and medicine is there in the supply chain buffers? If all production was to stop immediately — how many calories are on the continent? How many grams of insulin or penicillin?
In a crisis how will those things be distributed? Will it be based on immediate need or social class?
What’s keeping the system going anyways? Why do ships continue to come with consumer goods from China? Why do farmers send their grain to market?
It’s kind of neat to think about what will happen in this sort of scenario. I wonder how long the data centres will keep running, churning out models that don’t have a market and aren’t quite good enough for AGI.
That does not match my memory at all. Booting my family’s 386, even into DOS, was a minutes-long affair involving memory tests and messages like “loading HIMEM.SYS”.
I would posit that Windows only became more technically sophisticated than early Mac OS with the release of Windows/386, the version of Windows 2.1 that ran multiple DOS VMs in protected mode.
I’m distinguishing “sophistication” from “complexity”. Windows/386 is “sophisticated” in that it implements a much richer model of execution than its predecessors, with a supervising kernel and memory-protected virtual DOS machines. This is different from the complexity of programming with segmented memory, or punching through the various layers of backward compatibility that had built up even as early as the 286.
Likewise, the Mac had some complexities of its own, even though the 68k wasn’t nearly as challenging to program for. Since the Toolbox shipped in ROM, they had to design syscalls (A-traps) in a way that could be patched by later versions of the system software. They soon had to work around software that wasn’t 32-bit clean when they started shipping machines with the 68020.
One of the more sophisticated bits of later Mac OS was the 68k virtual machine that was used to run major chunks of the Toolbox on PowerPC Macs.
The way 386 enhanced mode worked was that there was a 32-bit preemptive multitasking kernel that would run 16-bit virtual machines. The first VM ran Windows in standard mode, and the rest were the DOS VMs. This meant that Windows programs still shared an address space and were cooperatively multitasked.

This is actually somewhat similar to how Apple shoehorned multiprocessor support into later versions of Classic Mac OS. The OS runs under a microkernel. There's one main thread where all the cooperative multitasking happens (anything that uses the Toolbox must run in this thread), and then both user software and the system can make new threads that get preemptively multitasked with the main thread. The main difference is that I don't believe there's a way for Windows software to make use of the preemptive scheduler (unless maybe they do something hacky with a VxD driver, but that's kind of silly).
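If it helps to see the difference, here's a tiny illustrative sketch (generic Python, nothing Mac- or Windows-specific) of why the cooperative side is fragile: every task has to volunteer control back.

```python
# Cooperative multitasking in miniature: every task shares one scheduler
# "thread" and must explicitly yield, like Toolbox-bound code on Classic
# Mac OS or Win16 apps sharing one VM. A task that never yields starves
# everything else, because nothing can preempt it.

def task(name, steps):
    for i in range(steps):
        print(f"{name}: step {i}")
        yield  # the only point where another task may run

def run_cooperatively(tasks):
    tasks = list(tasks)
    while tasks:
        for t in tasks[:]:
            try:
                next(t)          # run the task until it yields...
            except StopIteration:
                tasks.remove(t)  # ...or finishes

run_cooperatively([task("A", 3), task("B", 2)])
# A preemptive kernel (the 386 enhanced-mode VMM, or the nanokernel behind
# Mac OS's MP tasks) interrupts tasks on a timer instead, so no explicit
# yield is needed and one hung task can't freeze the rest.
```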
I remember reading the PowerPC System Software volume of Inside Macintosh[1] at the time, and what I found impressive wasn't that 68k applications ran under emulation on PowerPC, but that much of the Toolbox and System Software were also still implemented in 68k code, implemented in terms of a general mechanism that could also be used by third-party code.
This came at a performance cost, of course, but I don't recall the Workgroup Server 6150 I was using at the time feeling significantly slower than my Quadra 605, though to be fair the 605 was at the very bottom end of the 040 Mac line. Then again, with the exception of a larger hard drive and bundled AppleShare software, the WGS 6150 was equivalent to the entry-level Power Mac 6100, as well.
That obtained the cutting-edge technology by buying an American company that had been founded to productize technology developed at an American defense laboratory, based on a Japanese researcher’s work.
You are forgetting the 20 years and billions of dollars spent developing it, in collaboration with research institutes like IMEC and with funding from chipmakers like Intel, Samsung, and TSMC.
But it doesn’t fit your ideological narrative of how innovation functions so…
I am not the person you originally replied to. I have no ideological motivation here. I am merely pointing out ASML did not invent EUV, nor did they fund its initial development or the first decade or so of its productization. ASML employs plenty of scientists and engineers who did important work getting EUV to market, but your characterization implied that ASML single-handedly introduced a step-function increase in semiconductor fabrication technology from their labs in the Netherlands, and that is a misleading impression to give. It’s belied by the fact that ASML can’t even choose their own customers without approval from the U.S. government.
I believe that it's a bit more complicated than that, especially if we look at the contributions of IMEC.
But regardless, I can hand you the point that you are making and still say that yours is a very tight standard, one that most of what passes for innovation in Silicon Valley would not meet.
The point I'm trying to make for the initial poster is that they are confusing "technological innovation" with making money. And yes, you don't have a money-printing machine in the EU, but you have A LOT of technological innovation that eventually goes to market through SV.
I think it’s to their credit that they don’t have one and instead got CERN.
A bunch of shitty CRUD apps made by mediocre rent seekers that got rich on tax avoidance, gov money + research, and low interest rates vs actual groundbreaking research that benefits humanity. Silicon Valley is probably the worst thing that happened to humanity between the 2008 crash and Covid. People have been figuring it out, but not before they had already given these scammers permission to inspect their wallets.
Wow. Can't tell if this is a parody or just very, very uninformed. Either way, good day. Hope you understand, one day, how technology has helped millions of people in many different ways.
While I have never personally been invited to a spring gay peptide party at a SoMa warehouse full of twinks, I have not been able to avoid exposure to these sorts of people. It feels like they are making up a larger and larger portion of incoming engineers, but that may have always been true.
Then again, I’m now seeing ads for “leukopaks” on tech websites. It really does feel like the culture has shifted for the worse.
> Although we can already buy commercial transceiver solutions that allow us to use PCIe devices like GPUs outside of a PC, these use an encapsulating protocol like Thunderbolt rather than straight PCIe.
> [snip]
> As explained in the intro, this doesn’t come without a host of compatibility issues, least of all PCIe device detection, side-channel clocking and for PCIe Gen 3 its equalization training feature that falls flat if you try to send it over an SFP link.
So, uh… what’s the benefit? How much overhead does Thunderbolt really introduce, given it solves these other issues?
I go over it in the video but yes, active Thunderbolt is probably a very good choice for a lot of people. I went in another direction for some reasons that don't apply to everyone:
- Learning: I want to learn about the lower levels of PCIe and it's a good project.
- Re-use of cabling: I already have bundles of single-mode fiber running around. You can't find Thunderbolt cables that just have an LC connector ...
- Isolation: active Thunderbolt cables still often have copper for some low-speed signals, so they don't offer true galvanic isolation.
- Avoiding Thunderbolt itself: I want a custom chassis/PCB at one end, and chips to convert from TB back to PCIe are not readily available to make custom stuff with ... (not as an individual anyway).
So yeah, if you want a ready-to-use solution, a TB cable is absolutely a good choice; here I'm having some fun, learning in the process and hopefully sharing some of the knowledge.
Hey, I love a great self-educational deep dive. Don’t have time to watch the video until after the workday, but it sounds enlightening! (I swear that was not intentional.)
The benefits are twofold: physical colocation and bandwidth.
Thunderbolt 5 offers 80Gbps of bidirectional bandwidth. PCIe 5.0 x16 offers 1024Gbps of bidirectional bandwidth. This matters.
TB5 cables can only get so long whereas fiber can go much farther more easily. This means that in a data center type environment, you could virtualize your GPUs and attach them as necessary, putting them in a separate bank (probably on the same rack).
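For concreteness, the per-direction arithmetic (hedged: whether a spec sheet quotes one direction or the sum of both is exactly the "bidirectional" ambiguity that comes up below):

```python
# Rough per-direction throughput comparison. PCIe 5.0 signals 32 GT/s per
# lane with 128b/130b encoding; Thunderbolt 5 is nominally 80 Gb/s each way
# (ignoring its asymmetric 120/40 boost mode and all protocol overhead).
pcie5_gts_per_lane = 32.0
encoding = 128 / 130
lanes = 16
pcie5_x16 = pcie5_gts_per_lane * encoding * lanes   # ~504 Gb/s per direction
tb5 = 80.0

print(f"PCIe 5.0 x16: {pcie5_x16:.0f} Gb/s per direction")
print(f"Thunderbolt 5: {tb5:.0f} Gb/s per direction")
print(f"ratio: {pcie5_x16 / tb5:.1f}x")             # ~6.3x
```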
Active optical (yes!) Thunderbolt cables can be much longer. After all, optical fiber was the original medium for Thunderbolt, back when it was still called Light Peak.
As for bandwidth, the medium transition seems to actually limit the author’s capabilities by losing some of the more advanced link-training features that are necessary for the highest-bandwidth PCIe 3 connections, never mind PCIe 5.
Hundreds of meters is considered short range in the world of *SFP. If you just plan on putting the GPUs in the same rack then I'm not sure it really matters, but you can really put anything anywhere in your DC and have things zoned with *SFP.
I don't think there is any reason TB couldn't do the same, beyond it would be even more niche to want non-modular/patchable cables+transceivers at those lengths (especially since fiber is often bundled dozens/hundreds of strands over a single trunk cable between racks).
Thunderbolt is kind of cursed. To ensure maximum compatibility it mandates a legacy USB 2 connection on a separate physical path. TB3/USB4/TB4 are packetized, but AFAIK there's no defined way to packetize USB 2; it's expected there be a whole separate set of wires for it.
And because of timings, my admittedly so-so understanding is that you can only get about 7m before you absolutely have to have a hub/repeater (unless you can speed up the speed of light considerably). This limit on how long a single length can be can't really be cheated without violating USB specs.
It's awesome if folks have packetized USB2. A pity it's not in the flipping spec though!!
That Corning made it 50m is wild. You need a virtual hub at the start that can pretend to be hubs 1-5 (so it's close enough to time well), then a hub on the other side of the cable at (skinny) tree depth 6, allowing for 4 devices under it (the number of ports on a USB 2 hub in the spec). But you could work around that by faking being not a skinny tree but a fat tree, maybe?
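Back-of-the-envelope numbers on why a plain cable can't just be 50m (the per-segment figures are from my memory of the USB 2.0 delay budget, so treat them as assumptions):

```python
# Why one USB 2.0 segment tops out at a few meters: the spec budgets roughly
# 5.2 ns/m of propagation delay and ~26 ns one-way per cable segment (figures
# from memory -- verify against the spec). Each hub tier resets the budget,
# which is why a 50m "cable" has to hide hub silicon inside it.
ns_per_meter = 5.2
segment_budget_ns = 26.0
max_segment_m = segment_budget_ns / ns_per_meter
print(f"max cable run per segment: {max_segment_m:.0f} m")  # ~5 m

segments_for_50m = 50 / max_segment_m
print(f"segments needed for 50 m: {segments_for_50m:.0f}")  # ~10, i.e. more
# hub hops than the 5-deep hub tree allows -- hence the virtual-hub trickery.
```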
IIRC, USB-PD also requires USB 2.0 signaling. The idea of dedicated lower-bandwidth signaling wires isn’t uncommon in my very limited EE experience: level 3 charging reuses J-1772 signaling to control the charge available on the DC pins.
I was looking into the highest bandwidth optical transceivers. 400Gbps were easy enough to find so thanks for posting this. I honestly didn't know there were 1.6Tbps transceivers like this.
One note: I believe the SMF max fiber length is 2km not 1m [1]. The data sheet [2] also says:
Bidirectional is a lot like biweekly. Depending on context, biweekly means twice a week or once every two weeks; bidirectional can likewise mean either per direction or the total of both directions.
I'm only a single datapoint but I've never encountered that usage. My understanding of a bidirectional link is that it meets the same spec in both directions simultaneously. It's important precisely because many links aren't bidirectional, sharing a single physical link between two logical links.
The video is about a 2x1 link, which the author hopes to eventually scale up to 3x4 using 40 gig transceivers. I'd say thunderbolt is probably safe in the near future.
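Rough arithmetic on that target (assuming "2x1" and "3x4" mean PCIe Gen 2 x1 and Gen 3 x4; that reading is mine, not confirmed by the author):

```python
# Usable line rate for the current link vs. the hoped-for upgrade, assuming
# "2x1" = PCIe Gen 2 x1 (5 GT/s, 8b/10b) and "3x4" = Gen 3 x4 (8 GT/s,
# 128b/130b). These labels are my interpretation of the comment.
gen2_x1 = 5.0 * (8 / 10) * 1     # = 4 Gb/s per direction
gen3_x4 = 8.0 * (128 / 130) * 4  # ~31.5 Gb/s per direction

print(f"PCIe 2.0 x1: {gen2_x1:.1f} Gb/s per direction")
print(f"PCIe 3.0 x4: {gen3_x4:.1f} Gb/s per direction")
# ~31.5 Gb/s fits inside a 40G transceiver's line rate with headroom, which
# is presumably why 40 gig optics are the stated path -- still far below the
# PCIe 5.0 x16 numbers Thunderbolt would need to worry about.
```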