Dude, Spotify is a song streaming service. Holding on to song history is one of the features they offer. Lyft and Uber are taxi services. Holding on to history is a feature. Go start your own privacy-first car service if you want. I'm not in favour of this world where you privacy-first people want all these features removed from applications I use.
If they could tell me where I listened to what, I'd love that. I wish there were services for feature-lovers. I'd love for my player to be able to handle mood (road trips vs. work) and hold all these things without me thinking about it. I don't want checkbox hell.
There have been many free ad-supported services that don't collect your data for many years: newspapers, TV channels, radio, etc.
"Accept data collection or pay for it" is a false dichotomy. Ad-supported websites don't need data collection to be profitable, just as NBC doesn't.
Furthermore, there are many paid services that collect data as well. Last time I flew with KLM, the online check-in didn't work because some JS errored out when its data-collection script was blocked. It turned out the online check-in page was sending data to 17 domains.
Well, Facebook makes about $1.50 per year per user, which works out to roughly $0.13 per month. So they could charge far less than $6 per month, but it would be harder to capture that money.
>Data collection is not going anywhere, so long as people are willing (even unknowingly) to give up info for a perceived discount.
People are willing in large part because they have no clue what their data is actually worth and/or what pieces of their data are actually out there. Data collection survives because people don't realize they're already paying $5.99 a month (or whatever the real break-even number is).
I think it's a pretty good rule of markets that people should know what it is they're exchanging. To that end I should be able to see what these companies gather on me when I use their service.
Someone on here recently recommended uMatrix for this purpose and I find that a nice trade-off between usability and request blocking.
It's an extension, but since it's less opaque than a generic ad-blocker, I feel more in control and it seems less likely to go 'rogue' the way ad-blockers sometimes do.
The most frustrating thing about uMatrix (which is the same problem I had with NoScript back in the day) is that some scripts load other scripts. So, particularly when I'm trying to play an embedded video served by another site through a CDN, the process for getting the damn video to play is something like:
Click video -> Open uMatrix -> whitelist some scripts -> reload -> whitelist more scripts being loaded by the first batch of scripts -> reload -> whitelist some XHR references called by new scripts -> reload -> finally whitelist the actual media being served.
If you do it five times with some intelligence, you have rules that you can apply and have most things work most of the time from then on. It's really not a huge hassle and generally the malware-containing networks aren't the ones you are greylisting.
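For reference, uMatrix persists these as plain-text rules of the form `source-hostname destination-hostname request-type action` (the hostnames below are made up for illustration):

```
videosite.example cdn.example script allow
videosite.example cdn.example xhr allow
videosite.example media-cdn.example media allow
```

Once saved as permanent rules, the whole whitelist-and-reload dance only has to happen once per site.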
Yep, I do that often. It's apparently a very underrated tool because it can pretty much download most of the video content out there on the internet at large. But many people have no idea about it, even technical people.
Yes, a bit bothersome at times. But if I take the trouble to finetune worthwhile sites I run into, and make the settings permanent, life does get markedly easier after a while.
I rarely see an ad. I didn't see Troy's responsible sponsor message either. I should have and would have if he had chosen to display the thing without the need for scripting. So I don't feel hugely guilty.
uBO can block with pattern-matching URLs of network requests and additionally cosmetic filtering (hide DOM elements), while uMatrix works strictly with hostname of network requests and types of resources.
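To make the difference concrete, here are illustrative examples of each syntax (hostnames and class names invented):

```
! uBlock Origin static filters:
! block script requests to this host, on any site
||tracker.example.com^$script
! cosmetic filter: hide matching elements on example.com
example.com##.ad-banner
```

The nearest uMatrix equivalent works only at the hostname/request-type level, e.g. `example.com tracker.example.com script block`, and has no cosmetic filtering at all.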
Use youtube-dl instead? It's win-win: you don't have to compromise your browser's security, and you get a permanent copy of the video that you can watch whenever you like and that the CDN can't comply with takedown requests on or otherwise maliciously bitrot.
For popular stuff, switching to "global" scope and marking the rules as enabled helps reduce the song-and-dance for things like YouTube videos or common CDNs (e.g. Bootstrap).
Here's the thing: gorhill maintains uMatrix and uBlock Origin, and he is one of the most trusted names in several security circles, to the degree that uBO has been deployed in many enterprise settings. Is Troy's unspoken question 'well, do you think gorhill will sell out for a measly 10k?' Or is the market for ad-blocking extensions so inundated with shoddy extensions that simply serve as data-mining tools that Troy wants to make us all aware?
I am happy with uMatrix, too, but FWIW, I could not recommend it to non-technical or impatient people. For many pages, I require multiple iterations of stepwise refining of what is and is not allowed before a site works for me.
I do not mind, but I can imagine it easily gets annoying for many people rather quickly. (OTOH, those people would not care to set up Pi-hole, either.)
I would rather spend some time setting up a solution with minimal maintenance than constantly be adjusting and tweaking my solution just to browse the web. I use uBO because I rarely have to go in and tweak something, and when I do it's mostly just a temporary pause on blocking. A Pi-hole might be nice, but I like how plugins actually remove the spot where the ad once was, so the site looks less like swiss cheese.
I consider myself a pretty savvy user, I'm not a web dev but I understand web technologies, javascript and all that and I simply can't use uMatrix decently. Am I supposed to audit every single external resource to whitelist it? For every website I may want to visit? I don't get it.
uBlock seems to do an okay job of blocking most ads and tracking stuff, so I'll stick with that in the meantime, but I would be really interested to see a uMatrix tutorial or something like that.
uMatrix takes time to grok. It made no sense to me at first. Over time I understood it, and I now see it as a beautiful way of presenting data and controls.
There is a very good YouTube tutorial, about 7 minutes long, that explains its use.
I also love uMatrix. Unfortunately, it's not an option on mobile. You could theoretically install it in Firefox mobile for Android, but it would be so difficult to use. I also use a Pi-hole. I see my Pi-hole as the solution for mobile browsing and apps, whereas uMatrix is the better option for desktop browsing, since it can differentiate between image requests vs. scripts, iframes, cookies, etc.
uMatrix, while having a bit of a dense UI, is what I prefer as well.
I installed a pi-hole in my home network about a week ago, and it survived less than a week.
My wife likes using sites like eBates when she shops online, and it redirects her through a random sequence of tracking sites before landing on a site like the Gap. It caused all sorts of problems for her, as those sites were being blocked.
If I was going to keep the pi-hole running, I would have had to constantly be adding whitelist entries. Or I could have manually created a blacklist from scratch. I was not interested in doing either.
I found that dropping a handful of domains in uMatrix got rid of most ads (but not tracking), and that was good enough for my uses.
Despite not working in infosec, I tune into Risky Business (risky.biz) every week. It's a good round-up of the week without the overreaction you sometimes get from Twitter and here in the immediate aftermath of a big leak or vuln.
If you're on the run, you've probably changed your number plate. If you're just an opposition politician being monitored, then where you buy petrol probably doesn't reveal very much about your activities. And you're probably using a card to pay for the petrol anyway.
Sure; but it raises the bar for criminals. That is, it makes being an effective criminal require more knowledge and more work. And that makes a huge difference in practice. How many criminals actually have good enough opsec to change the license plates on their car? I bet it's well under 20%. And I know that an 80% solution kills me as an engineer, but I bet law enforcement sees an 80% solution as a massive win.
Us technologists should know how much this stuff matters from the huge effect good design has on product adoption. (Or dark patterns on user behaviour). This is the same effect in action - changing defaults changes the behaviour of the majority.
Another example: People say that “if you make guns illegal only criminals will have guns”. Yet here in Australia very few crimes are committed using firearms. This is the same effect in action. (I’m not arguing for gun control - just that these laws have an effect)
And with that in mind, I think the reason why we’re finally seeing a big push from the 5 eyes is because finally, finally one of the big chat platforms (WhatsApp) has rolled out end to end encryption. That lowered the bar far enough that privacy from the government is becoming the default.
One implication of this way of thinking is that it changes where the battle lines are. To win, the government doesn’t need to make end to end encryption impossible. They just need to make end to end encryption a bit difficult and non-obvious. Doing that will probably push the % of criminals who use proper encryption back into single digit percentages. After all, if you can research and understand the implications of application and messaging security, you can probably make a better living working at an IT desk somewhere than you can from stealing cars. Law enforcement would probably see that as a huge win, even if all us techies can keep sideloading Signal or whatever.
Personally I don’t consider that good enough - I want a society where everyone has privacy. Not just those who have opted in to it.
I think a lot of petty criminals are driving vehicles which are not correctly registered. (I've heard about cases in which a cyclist has been hit by a white van, gone to the police with the van's number plate and been told: Oh, they don't seem to have registered themselves properly. Since no one's been killed we can't be bothered to investigate any further.) So a lot of this computer-based, large-scale surveillance is more effective against law-abiding political activists than it is against ordinary criminals, who drive second-hand white vans and pay for everything with cash.
From your last paragraph, I think we basically agree.
> After all, if you can research and understand the implications of application and messaging security, you can probably make a better living working at an IT desk somewhere than you can from stealing cars
I doubt that. I think the main thing keeping cars safe from the 1% or so who don’t care about the law or ethics of theft is that it’s almost impossible to get away with it. Those with the relevant skill and the willingness to be criminals probably just take an easier approach, like card skimming.
This belief is based on how much second hand cars are worth and therefore how few cars a thief would need to steal each month for a very big salary.
I think you can get away with it, if you know what you're doing, but a stolen car is worth a lot less than the same car sold second-hand legitimately. Probably you either have to sell it to someone who knows it's stolen, knows not to take it anywhere near a legitimate service centre, and is prepared to forfeit it if stopped by the police, or you break it up and sell the parts, or you have a way of smuggling it out of the country to somewhere where they don't care about where cars came from.
Right; and this was my point in the first place. The police don't have to make it impossible to get away with stealing a car. They just need to make it difficult and awkward. That's still enough to massively disincentivize car theft, which in turn has resulted in far fewer cars being stolen.
Likewise if they ban end-to-end encrypted chat apps from the app stores, I bet that would decimate the number of people who used them. Even if anyone could just get an android phone and sideload signal, in practice adoption would still fall low enough to make law enforcement happy. Even amongst criminals.
It's the police who generally have the most up to date database and are also the only people really able to do anything with the information (other than court summons, I guess).
Regardless of who owns the ANPR, since their sole purpose (if you ignore the surveillance aspect) is cutting down crime, the data will end up with the state eventually.
If this is a planned escape, a full tank and a couple of 20l jerry cans will get you from Land's End to John o' Groats, and back in a modern estate doing ~50mpg.
Or just swap your plate for one from the same make and model that you found on eBay. You can trigger ANPRs all you want, and even the police vehicle-mounted ones won't be cause for much suspicion.
It's not possible to legally purchase a plate for a registration you don't own. About 10 years ago the DVLA started to crack down on places printing plates without checking eligibility ahead of time.
I know we're already talking about the law, and there are always ways around this, but you're probably better off sticking a foreign plate on the car.
I find places like this the easiest way to purchase number plates, because they don't make you jump through a load of hoops like sending copies of the V5.
I did say legally. The Road Safety Act 2006 requires suppliers of number plates to be registered and, looking at the law, I see no provision for the sale of "things that look like number plates but they're really not, honest guv".
UK statute is painful to read and interpret though. I may have missed something.
You implied that the illegality of purchasing such a plate rested on the customer. I think it is perfectly legal to purchase such a plate, it's just not legal to sell one.
VR isn't decent yet and won't be until it's comfortable enough to slap a VR headset on and watch a movie.
That's comfortable both in terms of literal headset weight but also latency, resolution, sound, etc.
When I sit down in front of a 48" TV, I'm absorbed into a different world and forget where I am. When VR lets you spend hours inside the virtual world like that, that's when it can be called good enough.
It's not there yet, but sure we might be there in a generation or two.
When I first found notebooks I was amazed at their utility. The ability to quickly display algorithms and processes to a wider (potentially layman) audience is amazing.
To be able to have a scratchpad and play around with techniques while keeping a sort-of record is also amazing.
I see it like I see Excel: it's a fantastic tool for data exploration and some visualisation. It's not something that should be used in any final workflow or production system.
Like Excel, it can be badly misused, but it unlocks ways of working that simply weren't possible before it.
The criticisms about hidden state are fair, I think it would be better if previous step data was more explicitly wrapped in the following cells so you could choose to use the "wrapped package" of the previous data or choose to use a re-evaluating version.
I think it works best when most cells have few side-effects, even if that means repeating previous calculations.
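A minimal sketch of that idea in plain Python (the function names are invented, not any notebook API): treat each "cell" as a pure function that takes its inputs explicitly and returns its outputs, instead of mutating shared globals.

```python
# Each "cell" is a pure function: inputs in, outputs out.
# Re-running any step re-runs its dependencies explicitly,
# so there is no stale hidden state from out-of-order execution.

def load_data():
    # stand-in for reading a file or querying a database
    return [3, 1, 4, 1, 5, 9]

def clean_data(raw):
    # drop values at or below a threshold; returns a new list
    return [x for x in raw if x > 1]

def summarize(clean):
    # repeat the cheap computation instead of caching it globally
    return {"n": len(clean), "mean": sum(clean) / len(clean)}

raw = load_data()
clean = clean_data(raw)
stats = summarize(clean)
print(stats)
```

The cost is some repeated computation; the benefit is that re-running the last step always reflects the current inputs.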
XML with schema and validation can be a joy to use compared to the untyped JSON world, but it's the lack of having to specify types which helps JSON remain popular.
Given a choice between strongly typed systems and weakly typed or untyped ones, the weakly typed or untyped systems will prevail in popularity.
Looking at JavaScript itself and its type coercion rules, it is easy to dismiss it as an unworkable mess which leads to expensive bugs in production, but in the real world the ability to mostly ignore types lets people get things done in a more hackish / amateur way, which eases the learning curve and helps popularity.
Typing (enforcing constraints) is an important aspect. But even without typing XML has one fundamental flaw. You are not able to (correctly) represent tuples with attributes which are sets. In XML, tuple attributes are properties, for example:
<book id="1" title="XML"> -- an object with two attributes
Now let us assume that I want to have a list of authors as a property:
<book id="1" title="XML" authors=["Me", "My Friend"]> -- NOT SUPPORTED
Therefore, we use a workaround (a crime actually):
<book id="1" title="XML">
  <authors>Me</authors>
  <authors>My Friend</authors>
</book>
Now a book is a collection where authors are members (IS-IN relationship). This is not what we wanted. Our goal was to have an attribute "authors" which is a collection.
Tim Bray recommended James Clark's Random Thoughts about "Do we need a new kind of schema language?": "If you’re doing original work around the intersection of messaging and programming, you need to read it and think about it."
>Some people propose solving the XML-processing problem by adopting an XML-centric processing model, for which the leading technologies are XQuery and XSLT2. The fundamental problem here is the XQuery/XPath data model. I'm not criticizing the WGs' efforts: they've done about as good a job as could be done given the constraints they were working under. But there is no way it can overcome the constraint that a data model based around XML and XSD is just not a very good data model for general-purpose computing. The structures of XML (attributes, elements and text) are those of SGML and these come from the world of markup. Considered as general purpose data structures, they suck pretty badly. There's a fundamental lack of composability. Why do we need both elements and attributes? Why can't attributes contain elements? Why is the type of thing that can occur as the content of an element not the same as the type of thing that can occur as a document? Why do we still have cruft like processing instructions and DTDs? XSD makes a (misguided in my view) attempt to add a OO/programming language veneer on top. But it can't solve the basic problems, and, in my view, this veneer ends up making things worse not better.
>I think there's some real progress being made in the programming language world. In particular I would single out Microsoft's LINQ work. My doubts on this are with its emphasis on static typing. While I think static typing is invaluable within a single, controlled system, I think for a distributed system the costs in terms of tight coupling often outweigh the benefits. I believe this is less of the case if the typing is structural rather than nominal. But although LINQ (or at least newer versions of C#) have introduced some welcome structural typing features, nominal typing is still thoroughly dominant.
>In the Java world, there's been a depressing lack of innovation at the language level from Sun; outside of Sun, I would single out Scala from EPFL (which can run on a JVM). This adds some nice functional features which are smoothly integrated with Java-ish OO features. XML is fundamentally not OO: XML is all about separating data from processing, whereas OO is all about combining data and processing. Functional programming is a much better fit for XML: the problem is making it usable by the average programmer, for whom the functional programming mindset is very foreign.
>If other formats start to supplant XML, and they support these goals better than XML, I will be happy rather than worried.
>From this perspective, my reaction to JSON is a combination of "Yay" and "Sigh".
>It's "Yay", because for important use cases JSON is dramatically better than XML. In particular, JSON shines as a programming language-independent representation of typical programming language data structures. This is an incredibly important use case and it would be hard to overstate how appallingly bad XML is for this. The fundamental problem is the mismatch between programming language data structures and the XML element/attribute data model of elements. This leaves the developer with three choices, all unappetising:
>live with an inconvenient element/attribute representation of the data;
>descend into XML Schema hell in the company of your favourite data binding tool;
>write reams of code to convert the XML into a convenient data structure.
>By contrast with JSON, especially with a dynamic programming language, you can get a reasonable in-memory representation just by calling a library function.
It is again a workaround, because <authors> is a member of a collection - it is not a tuple attribute. Which suggests that we cannot enforce this separation, and an application has to work out for itself which element is an object property and which element is a member of a collection.
You're trying to impose an arbitrary model on XML.
XML grew from markup languages. It's as if we took some text, parsed it, and then store the resulting syntax tree along with the text. Here elements are non-terminals of the grammar and element attributes are additional augmenting properties of those non-terminals.
The text, of course, does not have to be human-readable; it can very well be a sequence of anything, for example, of bits, bytes, words, etc. In XML we'd have to represent these with special elements, something like <byte value="00" /> or <int32 value="12345" />, but once we do this, we can use XML to enclose them into additional non-terminals that tell us what these bytes mean and let us use automated tools to manipulate them.
XML can represent objects, but in its own way: we have to first sort of serialize our object into a sequence, and once we have this sequence, we can use XML. The model you seem to be talking about is an abstract model of abstract objects in memory. Although memory is technically sequential, we normally ignore this and treat objects as nodes in some graph. This is not the domain of XML; it has to be a sequence to begin with. XML has a concept of IDs and references to IDs and thus can represent graphs reasonably well, but it must be a serialized graph.
So XML is basically a language to express the underlying grammatical structure of an arbitrary sequence. That structure is a tree, but it is based on a sequence nonetheless. It's not quite what abstract objects are in programming; but if you think of files, for example, files are sequences and thus are totally the domain of XML.
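A tiny illustration of the ID/reference mechanism (element and attribute names invented): a cyclic graph has to be flattened into a sequence before XML can hold it.

```xml
<people>
  <person id="p1" friend="p2">Alice</person>
  <person id="p2" friend="p1">Bob</person>
</people>
```

With a schema declaring `id` as ID and `friend` as IDREF, a validator can even check that every reference points at an existing node.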
> It is again a workaround because <authors> is a member of a collection - it is not a tuple attribute.
It's an element, not an attribute, but the set of elements and the set of attributes are both collections.
> Which suggests that we cannot enforce this separation and an application has to understand itself which element is an object property and which element is a member of a collection
It's true that that is not part of bare XML and the nature of attributes vs. elements, even though you want to impose it there.
OTOH to the extent that those words correspond to a well-rounded semantic distinction, it is arguably captured in schema languages, and not mere application-level knowledge.
XML with schema (and namespaces, which schemata necessitate) is an immense pain to work with. XML with simple elements and no namespaces is actually fine to work with, as is something like thrift/protobuf. I don't think the problem is that XML has schema and validation, it's that it has bad schema and validation.
JSON does have schemas and validation (JSON Schema is the relevant standard here), which are backwards compatible with parsers that don't support it (though the number of validators that do is sadly rather low).
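For the curious, here is a minimal JSON Schema (field names invented, echoing the book example above) that a supporting validator would enforce while plain parsers simply ignore:

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "properties": {
    "id":      { "type": "integer" },
    "title":   { "type": "string" },
    "authors": {
      "type": "array",
      "items": { "type": "string" }
    }
  },
  "required": ["id", "title"]
}
```

Note that `authors` here really is an attribute whose value is a collection, exactly the shape the XML discussion above says is awkward to express.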