I'm only familiar with Android, and it bothers me that I cannot exert complete sandbox control over every app.
I think I should be able to completely cut it off from the network and/or local storage; prevent it from running even though it is installed; and prevent it from having any personalizing information about me, my movements, my network connectivity status or patterns, my device usage (e.g. screen on versus locked, or any proxy like battery state of charge), etc.
I am very reluctant to install apps because I see that the platform is designed for needs and a mindset that is not my own. I do not see it as essential or preferable that an app be able to monetize my usage or really gather any telemetry at all.
People understand hierarchy. That named file is in a folder in a particular drawer of a particular cabinet in a particular room of a particular building in a particular neighborhood in a...
What some people struggle with is recursive hierarchy where each step doesn't change the kind of container. I guess they never saw a Matryoshka doll when they were little.
Ha, I remember this religious debate all the way back in the days of text-mode word processing in the 80s on CP/M and PC. I was indoctrinated in the WordStar camp where style controls were visible in the editor between actual text characters, so you could move the cursor between them and easily decide to insert text inside or outside the styled region. This will forever seem a more coherent editing UI to me.
This might be why I also liked LaTeX. The markup itself is semantic and meant to help me understand what I am editing. It isn't just some keyboard-shortcut to inject a styling command. It is part of the document structure.
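To sketch what I mean (the `\term` and `\filename` macro names here are my own invention, not standard LaTeX, but the pattern is idiomatic):

```latex
\documentclass{article}
% Define semantic span commands once; presentation lives only in the definition.
\newcommand{\term}[1]{\emph{#1}}       % a technical term being introduced
\newcommand{\filename}[1]{\texttt{#1}} % a file name

\begin{document}
A \term{daisy wheel} printer reads jobs like \filename{REPORT.TXT}.
% Restyling every term or filename later means editing one definition,
% not hunting down ad-hoc \emph or \texttt calls scattered through the text.
\end{document}
```

The span carries a semantic label, and the styling is a detail you can revisit later in one place.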
Heh, I'm not even sure WordStar supported other styles at that level. Changing the color back then would mean having the print job pause and the screen prompt you to change ink ribbon and press a key to continue. I can't remember if it could also prompt to change the daisy wheel, or whether font was a global property of the document. The daisy wheels did have a slant/italic set, so it could select those alternate glyphs on the fly from the same wheel. Bold and underline were done by composition, using overstrike, rather than separate glyphs.
But yeah, this tension you are describing is also where other concepts like "paragraph styles" bothered me in later editors. I think I want/expect "span styles" so it is always a container of characters with a semantic label, which I could then adjust later in the definitions.
Decades later, it still repulses me how the paragraph styles devolve into a bunch of undisciplined characters with custom styling when I have to work on shared documents. At some point, the only sane recourse is to strip all custom styling and then go back and selectively apply things like emphasis again, hoping you didn't miss any.
And... I preferred WordPerfect's separate "reveal codes" pane, which reduced the opportunity for ambiguity. WP 5.1 has never been equalled as a general-purpose word processor.
The brief directly cites some of the compliance frameworks which have supply chain risk controls in them.
This topic is kind of fascinating though. Considering the mindset from the Reflections on Trusting Trust paper, I do wonder how you bootstrap an assured supply chain like this. I know verification of chips and designs has been an active research area. But is there any formal solution to the larger problem of all the transitive dependencies of design and control of production?
How do you get there if you weren't already doing it from the start? It isn't just the chain of custody of the new chip that comes out. What about all the chips used in the production process and in the chain-of-custody tracking process? What about the chain of custody of all the design and process control artifacts that influenced the implementation of these processes? And the chips used to develop and manage those artifacts...
It feels like it most likely is a "turtles all the way down" kind of myth. Eventually, do you just give up and hope your layers of compliance frameworks have produced some kind of defense in depth cocoon?
I'm not sure it is even all that asymmetric. Do all the layers of compliance ritual disrupt the attacker more aggressively than it disrupts the desired production? There is a strong whiff of regulatory capture to these compliance frameworks, making it hard to divine how much it really blocks attackers versus upstart competitors...
In the case of the US, they've been maintaining assured supply chains fully sourced in the US for several decades so they've been able to bootstrap it. It is one of the reasons a domestic manufacturer exists for every kind of computing even though most has moved to Asia. It isn't a coincidence, for example, that Micron is based in Idaho.
Bootstrapping that from scratch today would be slow. The more feasible path is to use an existing assured supply chain to bootstrap initial capability and then swap out those bits with your own.
There’s a role for humans vis-à-vis accountability. Simply recording whose head goes on a pike at each step when something goes wrong can be effective too.
I think this whole genre flirts with Capgras Syndrome, the basic identity perception malfunction behind concepts like changelings and many other "exact duplicates" or "tampering" scenarios which have malice as an optional component.
I think it is something that people are aware of, perhaps subconsciously, from cultural exposure. But, I also think many (most?) people have at least some personal experience of a similar sort. Not the full-blown delusional state, but an anxious moment of having feelings of recognition or safety turn inside-out as they realize things are not as they first appeared.
Our whole flood control and water supply system is designed around the expected storage of water as snow.
Ignoring horrifying drought scenarios, it is also troubling to think about how this will change if we start having warm winters and more of the winter precipitation falls as rain.
I think the worst case would be if we end up like some tropical countries, where they can have disastrous flooding and then drought in very short cycles. The water comes all at once and you cannot hope to control or contain it. But there are also gaps that strain the ability to store enough water and manage consumption rates.
I'm on Colorado's Western Slope. Last summer we got almost no precipitation for 2 months and then 6" in one day. Wooo... very fun.
Even better, in some places like Ruidoso, NM (where I've lived) there have been pretty massive deforestations from wildfires, with the result being that it floods almost any time it rains.
I've spent about 3-5hr/day for the last 4 weeks trying to get rid of stuff that burns as far out from my shacks as I can, but I would bet that when it burns, it's going to go big.
Based on your last sentences, I am pretty sure you will dismiss me. But, I have a null hypothesis to consider...
Like you implied, I think a personal threshold crossing gives this false impression that "everything changed" this month or last month or last year. Like you said, the main thing that changed in one particular month was the observer.
But, perhaps the AI epiphany is not waking up to recognize how good AI already was. Instead, it could be when an individual's standards degrade such that the same AI usage is seen as a benefit instead of a liability. Both interpretations yield the same basic pattern of adoption and commentary that we see right now.
The difference will be in the long-term outcome. Some years from now, will we see that this mass adoption yielded a renaissance of productivity and quality, or a cataclysm of slop-induced liability and loss?
I know appeal to authority can be a fallacy, but there is something to be said for appeal to a preponderance of concurring authorities. Multiple notable personalities known for their technical chops have been endorsing AI-assisted coding, so it's hard to argue that every one of them lowered their standards.
It's been fun seeing the cognitive dissonance in anti-LLM tech circles as technical giants that they idolized, from Torvalds through Carmack all the way up to Knuth, say something positive about AI, let alone sing praises of it!
I have to point out that having "high personal standards" is its own fatal flaw. The worst quality code I've seen comes from developers with little self awareness or humility. They call themselves artisans and take no responsibility for the minefield of bugs and security vulnerabilities left in their wake. The Internet is held together with bubblegum and baling wire [1] [2] because artisans reject self improvement.
These same artisans complain about how bad AI generated code is. The AI is trained on your bad artisan code. It's like they are looking in the mirror for the first time and being disgusted by what they see.