Hacker News | new | past | comments | ask | show | jobs | submit | joerichey's comments

If you accept that _all_ vector spaces have a (Hamel) basis, you can then prove the Axiom of Choice: http://www.math.lsa.umich.edu/~ablass/bases-AC.pdf

This means if you want to deny the Axiom in some cases, you will also have to allow for the existence of vector spaces without a basis.


One thing that makes Secure Boot nice is how it (in theory) works _with_ measured boot. You get a measurement into the TPM that contains the public signing key that was used to verify the signature on your bootloader. This means if you update from one signed bootloader to a newer signed bootloader, you don't need to change any disk encryption or sealing.

Of course blocking execution is orthogonal to verifying the boot chain, but unfortunately those issues are conflated in the UEFI spec.


Part of the issue with this MSI problem is that the firmware also logs TPM events saying "Secure Boot is enabled with this configuration" even when it isn't. These events are (almost always) what FDE binds to (via PCR 7) with a TPM.

This means that even if you set up FDE correctly (binding to, say, PCRs 0, 7, and 11), you would be able to bypass FDE using this MSI bug. For example, BitLocker binds to PCR 7.

You could get around this bug by sealing to PCR 4 (which contains the _hash_ of the bootloader). But then you have to redo FDE sealing every time your bootloader updates.
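The PCR mechanics behind this trade-off can be sketched in Python (a toy model, not real TPM commands): a PCR starts at zeros and each measurement extends it as `new = SHA-256(old || SHA-256(measurement))`, so the final value depends on everything measured into it.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(measurement))."""
    digest = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + digest).digest()

PCR_INIT = b"\x00" * 32

# PCR 7 records the Secure Boot configuration; PCR 4 records the
# bootloader hash itself. (Measurement strings are illustrative.)
pcr7_honest = extend(PCR_INIT, b"SecureBoot=Off")
pcr7_lying  = extend(PCR_INIT, b"SecureBoot=On")  # what the buggy firmware logs

pcr4_old = extend(PCR_INIT, b"bootloader v1 image")
pcr4_new = extend(PCR_INIT, b"bootloader v2 image")

# A key sealed to the "SecureBoot=On" PCR 7 value also unseals under the
# lying firmware, so PCR 7 alone doesn't defend against this bug...
assert pcr7_lying != pcr7_honest
# ...while PCR 4 changes on every bootloader update, forcing a re-seal.
assert pcr4_old != pcr4_new
```

This is why sealing to PCR 4 is robust against the MSI bug but costly on updates: the extend chain makes any change to the measured data visible in the final PCR value.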


I looked into this on my motherboard, and the issue is that MSI's firmware logs TPM events saying "Secure Boot is On", even when it is in this insecure mode.

This means that even if Windows "checks" (via measured boot) that Secure Boot is on, it is still being lied to by the motherboard firmware.


PCR 7 doesn't just indicate whether secure boot was enabled, it also contains information about which certificates were used to boot. Obviously if you'll happily sign something unsigned the unsigned thing can just fake a measurement that contains the expected certificate, but I'd be interested to see what the event log looks like on one of these systems when it boots an unsigned binary.


This is actually what the Precision Time Protocol (PTP) does. It's the successor to NTP, so it improves on some of NTP's mistakes. The protocol uses TAI, but also sends the TAI-UTC offset so the computer can display times in UTC.

https://en.wikipedia.org/wiki/Precision_Time_Protocol
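The idea can be sketched as follows (field names are illustrative, not the actual PTP message layout): the server distributes time in TAI plus the current TAI-UTC offset, and the client derives displayable UTC.

```python
# Sketch only: PTP distributes TAI plus the current UTC offset
# (TAI - UTC = 37 s since the 2017 leap second), so clients can
# derive UTC themselves.
def tai_to_utc(tai_seconds: float, utc_offset: int) -> float:
    """Client-side: derive displayable UTC from the TAI the server sent."""
    return tai_seconds - utc_offset

# TAI ticks uniformly; a leap second is just an offset change, so the
# derived UTC repeats one label instead of the timescale itself jumping:
samples = [(1000, 36), (1001, 37)]  # (tai, offset) around a hypothetical leap
utc = [tai_to_utc(t, off) for t, off in samples]
assert utc == [964, 964]  # UTC "stalls" while TAI keeps counting
```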


PTP and NTP have completely different scopes: PTP requires end-to-end layer 2 support and hateful choice of hardware, so it can only work within a single network; NTP on the other hand was always designed to work across the internet between different organizations, where the network doesn’t help with timekeeping and the organizations don’t work closely with each other.


Whoops! You're right, instead of "successor" I should have just said "newer".


“hateful”? oops! i meant “careful”


Why does the time protocol need to send the UTC offset, rather than have the offset be part of system data files, like with timezones? Wouldn't you need the data anyways to translate historical timestamps to UTC?


I think this is the argument NTP makes for not including the offset: TAI can be treated as just another "timezone", so TZDATA should be used to derive it.

But that's backwards. A Stratum 1 NTP server usually gets its data from GPS, which HAS the offset (GPS runs on TAI). Yet NTP only outputs UTC, not the offset, forcing other programs to compute it from TZDATA. Why does NTP make it harder for user programs to get data that IT ALREADY HAS? Because philosophically, NTP is married to UTC (even though NTP is mostly for computers!)

And providing this offset would largely defuse the body of people (like TFA's author) who want to CHANGE the definition of UTC, which is a far more drastic proposal.


> GPS runs TAI

GPS time is actually 19 seconds behind TAI at all times.
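The fixed relationships can be written out directly (offsets current as of the January 2017 leap second; the TAI-UTC value changes whenever a new leap second is added, while GPS-TAI never does):

```python
# GPS time = TAI - 19 s, constant since the 1980 GPS epoch.
# TAI = UTC + 37 s since January 2017.
GPS_MINUS_TAI = -19
TAI_MINUS_UTC = 37

def gps_to_tai(gps_seconds: float) -> float:
    return gps_seconds - GPS_MINUS_TAI   # i.e. + 19

def gps_to_utc(gps_seconds: float) -> float:
    # GPS broadcasts the current leap-second count, so receivers
    # have everything needed for this conversion.
    return gps_to_tai(gps_seconds) - TAI_MINUS_UTC

# GPS is therefore currently 18 s ahead of UTC:
assert gps_to_utc(1000.0) == 1000.0 - 18
```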


Those are two different problems that require two different solutions:

1. Displaying current time: for that you ideally need the offset directly from the time server, because the system timezone data can be out of date with respect to the current offset.

2. Displaying historical timestamps: for that you use the system timezone file.


Seems like a simple and accurate way to handle it.


For future leap seconds, Google (including GCP) is planning to use a "standard" smear (https://developers.google.com/time/smear). This is also the same smear used by AWS.

It seems like if the ITU decides to keep the leap second (a bad idea, in my opinion), the large infrastructure providers will just use the same standard smear for their clocks.
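A sketch of the 24-hour linear "standard" smear, assuming a positive leap second inserted at the window midpoint (midnight UTC, with the smear running noon to noon):

```python
SMEAR_WINDOW = 86400.0   # 24 h, noon-to-noon around the leap second

def smear_offset(t: float) -> float:
    """Seconds of the leap second absorbed, t seconds into the window.

    Every smeared second is stretched by 1/86400 (~11.6 ppm), a constant
    rate that NTP clients can lock onto, unlike a curve whose slope changes.
    """
    x = min(max(t / SMEAR_WINDOW, 0.0), 1.0)
    return x

def error_vs_utc(t: float) -> float:
    """Smeared clock minus true UTC; the leap occurs at t = 43200."""
    off = smear_offset(t)
    return -off if t < 43200 else 1.0 - off

assert smear_offset(43200) == 0.5       # midpoint: half the second smeared
assert abs(error_vs_utc(43199)) < 0.5   # just under half a second behind
assert error_vs_utc(43200) == 0.5       # half a second ahead right after
```

The maximum error versus true UTC is half a second at the midpoint, which is the "at most half a second" figure discussed below in the thread.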


They will be wrong all day long, not just for one second. AWS will be, too, evidently.

Few others will do anything so idiotic.


They will be wrong by at most one second. And being wrong doesn't even matter, as long as everyone (or rather, all your servers) agree.


At most half a second, in the middle of the smear at midnight when the leap second is applied.

The Facebook smear is asymmetrical, so it starts off a full second off just after the leap second and subsequently corrects itself.

[ The reason Google and Amazon use a linear smear is because NTP clients try to measure the rate difference between their local clock and the reference clocks; if that is different every time the NTP client queries its servers, it will have trouble locking on and accurately matching the smear curve. You can mitigate this somewhat by fixing a higher NTP query frequency, but that’s a heavy-handed fix for an engineering mistake. ]


They are technically "wrong", but no more wrong than a packet routed transatlantic would be. If I send two packets from my laptop in London, one to AWS in London during the smear and one to another laptop in NYC, and timestamp the packets' arrivals, the timestamps would likely be similar. Yes, "wrong", but if that's a problem, then you also have a problem with the speed of light. The answer is to find a different, reliable method of ordering.


What are the practical issues with potentially being wrong for one day by at most one second?


I assume finance and control systems, to start. It might be helpful to have a fallback time-ordering algorithm not dependent upon one monotonic clock, but then you might have a rarely-used fallback for bugs to hide in, I imagine.


What are the practical issues for finance and control systems?


For finance, it matters what order events occur in. I've never messed with control systems, but I suppose similar issues arise.


Control systems, or any form of embedded safety critical system, use monotonic clocks where calendar time is a complete non-issue.

Ordering of events, on what scale are we talking here? If it’s just within a transactional database there are multitude of ways to do it. Even distributed dbs have such features without relying on perfect time. If you are looking at a spanner style db you need a lot more guarantees than “I just used the time my cloud provider assigned to my vm”, plus being in sync only matters within your own cluster?
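The monotonic-clock point can be shown with Python's standard clocks: `time.time()` follows the wall clock (NTP steps, smears, manual changes), while `time.monotonic()` only ever moves forward, making it the safe source for intervals and local event ordering.

```python
import time

t0 = time.monotonic()
time.sleep(0.01)
t1 = time.monotonic()

# Guaranteed even if the wall clock stepped backwards in between:
assert t1 > t0

# A meaningful duration; time.time() differences taken during a smear
# or a clock step would not be.
elapsed = t1 - t0
assert elapsed > 0
```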


Ordering can be solved by a mutex.


I was thinking of more distributed control systems, particularly where testing for edge cases might be difficult and rare, and rigorous methods (did Lamport solve distributed mutex?) are probably off the radar in terms of culture.


Any forum is a distributed system of a server and clients that agree with each other on order of user posts.


One reasonable way to do this could involve running the reference TPM2 simulator [0] on the Arduino. It's just a C library that already implements all the cryptographic routines and TPM2 commands. In fact, this is basically how TPM vendors implement their chips. They just generally have:

  - A lot more hardening against physical attacks
  - Cryptographic libraries optimized for their low-resource hardware
  - (sometimes) a vendor certificate for a primary TPM key, aka an "EK cert"
Certainly a TPM running on an Arduino wouldn't have the physical hardware properties of a "real" TPM. But you could probably get it into a state with similar software properties.

[0] https://github.com/microsoft/ms-tpm-20-ref


I'd use this over a real TPM so that I have more control over my PC.


As someone who's spent too much time with this stuff, you're correct. The TPM (either 1.2 or 2.0) is an entirely _passive_ chip. It only creates keys or measures data if the OS or UEFI asks it to. This means that it can't block or modify programs on your CPU.

Secure Boot is implemented by UEFI, so it can block the loading of a particular bootloader. You can have Secure Boot without a TPM or have a TPM without Secure Boot. They can be useful together though as you can have a disk-encryption key with a policy saying "I can only decrypt stuff if you've booted using Secure Boot in a particular configuration".

As for DRM, the TPM doesn't work very well as part of a DRM solution (as it's entirely passive). This is probably why very few (if any) DRM products use TPM. Most PC DRM that I've heard of either uses Windows Kernel modules or Intel SGX.
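The "passive chip" model above can be illustrated with a toy seal/unseal in Python (purely a sketch; real TPMs encrypt the secret under a policy rather than storing a PCR value alongside it): the TPM never blocks anything, it just refuses to release a key when the boot measurements differ from those at seal time.

```python
import hashlib

def extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(measurement))."""
    d = hashlib.sha256(measurement).digest()
    return hashlib.sha256(pcr + d).digest()

def seal(pcr: bytes, secret: bytes):
    # Toy model: just record which PCR value the secret is bound to.
    return (pcr, secret)

def unseal(blob, pcr: bytes) -> bytes:
    expected, secret = blob
    if expected != pcr:
        raise PermissionError("boot measurements changed; key withheld")
    return secret

# Firmware measures its Secure Boot config into the PCR at boot:
good_boot = extend(b"\x00" * 32, b"SecureBoot=On, db=vendor-cert")
blob = seal(good_boot, b"disk-encryption-key")
assert unseal(blob, good_boot) == b"disk-encryption-key"

# A different boot configuration yields a different PCR, so the key
# stays sealed; note the TPM never blocked the boot itself.
bad_boot = extend(b"\x00" * 32, b"SecureBoot=Off")
try:
    unseal(blob, bad_boot)
except PermissionError:
    pass
```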


I don't think that Windows 11 requires any sort of EK cert at all. If they did, it would require them to restrict the TPMs to a list of "approved" vendors.

In this case, they bought the actual TPM2 part of the chip from Infineon, so it might already have an EK cert on it.


Well, that’s what I get for skimming poorly. They did just slap an off-the-shelf Infineon chip in, so yeah, a real EK cert from a legit vendor.

I skimmed and had some wishful thinking that they just made a cheap off the shelf chip do the job of a TPM2 by slamming in an existing TPM2 implementation.


> I skimmed and had some wishful thinking that they just made a cheap off the shelf chip do the job of a TPM2 by slamming in an existing TPM2 implementation.

Is there any evidence that it wouldn't be possible - does Windows have a list of approved EK certificate authorities it expects?

The only reason I could think for this would be DRM, but I wouldn't expect this to be a requirement merely to install the OS (the un-approved TPM would still be good for any non-DRM uses, and would be useful in VMs where the TPM is already emulated by the hypervisor).


Shouldn't this be marked with (2014) as it was made at DebConf 14?


Right, 2014. Although I’m not sure anything has improved on his main complaints. There is no possibility of compatibility between binaries for the end user, which means the experience will rarely be "download and run". The distros will always be incompatible, the package managers don’t care about you, and the core system components will break things in the name of progress (for them; hope you can come along!).


Things like Flatpak or Snap try to solve these problems. Do you think they don't succeed?


They try, but since it's yet more targets that devs have to support (Flatpak, Snap, AppImage...), the problem is still the same.


Does Linus have any opinion on Flatpak, Snap, or AppImage?


Linus Torvalds uses AppImage for his diving app, Subsurface. He is on record saying he endorses it, and you can find a quote from him on the AppImage website. I don’t think he would have anything good to say about Snap, and I don’t think he has said anything about Flatpak.


In my personal opinion it's not very advisable to run binaries "from somewhere". Some views regarding flatpak: https://www.flatkill.org/2020

