In the cafe my team were having a discussion about what measures each of us took to protect our data online. Some of us used popular extensions like PrivacyBadger or uBlock Origin to anonymize or remove trackers, some talked about using VPNs on public WiFi, others mentioned bad experiences with NoScript, and there seemed to be general confusion about what purpose each of these tools actually served. Fair enough! Even for a group of people working in cybersecurity, the field is complex and wide-ranging, and not everyone is super technical or interested in the web specifically.
I'm going to fall back to thinking about web threats like any other malware, as they fit the definition of "devices or wares that are used against the will of the user", and sort the tools my team was talking about into three categories:
- Anti-malware: eg. NoScript, TOR - any tool which prevents the involuntary execution of code in your environment. Whilst this is the vaguest category, these tools can stop attacks like cryptominers, which operate solely within the browser whilst you're on a website, or scripts that aid in the deployment of an exploit kit aiming to trigger the next stage of an attack.
- Anti-spyware: eg. VPNs, proxies, DNS-over-HTTPS - any tool which prevents a third party, legitimate or not, from observing your traffic or other sensitive information. Many of these tools seek to prevent man-in-the-middle attacks, which are typically performed by an entity with access to your network, and may be sold under the anti-censorship umbrella. I don't count trackers in this category, as I believe they deserve their own definition:
- Anti-adware: eg. PrivacyBadger, DNT, uBlock Origin, Firefox Containers, DuckDuckGo - any tool which prevents the display of advertising, or blocks the deployment and retrieval of devices (cookies, fingerprints, and the like) that enable unique tracking across sites for commercial gain.
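To make the anti-adware category concrete, here's a minimal sketch of the core idea behind blocklist-based blockers like uBlock Origin: compare each outgoing request's hostname against a list of known tracker domains (and their subdomains) and drop matches. The domain names below are placeholders, and real extensions use large curated filter lists (e.g. EasyList) and intercept traffic via browser APIs rather than a plain function - treat this as illustrative only.

```python
from urllib.parse import urlparse

# Toy blocklist; real blockers ship filter lists with
# tens of thousands of entries and richer matching syntax.
TRACKER_DOMAINS = {"tracker.example", "ads.example"}

def is_blocked(url: str) -> bool:
    """Return True if the URL's host is a listed tracker
    domain or any subdomain of one."""
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the host itself and every parent domain.
    return any(
        ".".join(parts[i:]) in TRACKER_DOMAINS
        for i in range(len(parts))
    )

# → blocks https://cdn.tracker.example/pixel.gif,
#   allows https://example.com/page
```

The subdomain walk matters: trackers routinely serve from hosts like `cdn.tracker.example`, so matching only exact domains would miss most requests.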
Generally it's important to know when each of these tools is necessary: you might not need a VPN on a safe network unless you're doing something shady, but in an airport it should be a necessity[1]; you might not be able to use NoScript on a phone, but you can try to stick to trusted websites. For the moment I'm going to focus on the third category and what those tools seek to improve: privacy. It's a hot topic at the moment, with consortiums and international bodies writing legislation left and right, and I really feel it's been thrust into the public conversation in the last 5 years, which is a great change for the better. If you'd told someone about tracking cookies in 2010, they'd have laughed in your face with a "who cares"!
So who does care? The argument I hear most often, and even agree with to some degree, is that software giants like Google, Amazon, and Facebook provide services that are far from free to operate and maintain - the provision of those things we love so much depends on there being a business model behind it, right? "If you're not the customer, you're the product." The options seem to amount to three: those companies can develop and provide services at a loss to drive sales of other products; they can charge money for all services; or they can not provide the products at all. Let's take Google Maps as an example. The core product generates no revenue; people do not pay for maps or directions, but revenue comes from advertising businesses and making recommendations[2]. If a company wants to appear higher in the results for the search term 'pub' in a certain area, they pay. If a company wants their branding or logo to appear on the map, they pay, and so on. Everyone's happy; maybe the user has been steered towards Shannigan's Irish Pub rather than Harry's the local, but welcome to capitalism, baby. Critically, these are typically businesses the user is already interested in and searching for of their own volition - they were thirsty regardless, Google just tipped them towards one pub and not the other. In essence, the user gave Direct Consent to accept Google's suggestions when they asked for results.
Amazon have perfected the e-commerce experience, offering customers thousands of options from competing sellers at the lowest prices, available at speeds unrivaled anywhere else in the industry, and they have dominated the landscape as a result. You can browse for the products you like, read reviews, see how popular items have been with other users, and perhaps buy something. Numerous products are advertised along the way. Here, the user has given Implied Consent; they didn't ask to be shown products they might be interested in, but ultimately they were there to buy products, and Amazon is suggesting more of the same, or closely related items based on what other users did[3]. The option to buy the product is yours, and the relationship remains clear.
Companies have the right to use user information to better inform how they run the business through which they acquired that information. Personally, I think Amazon is the best example of this; I wouldn't be mad if a bartender noticed me coming in every Thursday at 5PM to order a Bloody Mary, and started preparing it a few minutes early when he had spare time so that he could deal with other customers. He has used customer information to improve his processes and become a better business. Amazon use huge amounts of data[4] to better sell products and services. There is still the question of how comfortable you are with a company holding that sort of personally identifiable information about you, but as long as the context of its use does not change, I don't see a problem. When you give Amazon information through the products you buy, and it uses that data to offer you deals on products it thinks you'll want to buy in the future, the context has remained the same. However, if Amazon see you buying a yoga mat and suggest local yoga studios to attend, the context has shifted from 'what I buy' to 'where I go', crossing an ethical boundary.
How about Facebook? Similarly to Google Maps, a social network where users share their lives with friends and family is provided for free at the expense of having to endure some adverts. It is common knowledge that Facebook monetizes[5] this 'life' information by catering adverts to very specific demographics, offering advertisers an incredibly comprehensive targeted marketing landscape. I believe the reason Facebook has attracted such a 'creepy' reputation[6] is that it acquires information in one realm (photos, statuses, events) and uses it in another (general advertising). I personally would feel much less violated if Facebook advertised only things directly related to what I have openly posted: locations of photos and check-ins, content of statuses, events similar to ones I've been to before, and so on. In that way they would be maintaining the context in which I'd volunteered my data with my implied consent. However, this is not how they operate; instead, inferences are made about you as a person, and you are targeted based on those generalities. Not only is this level of complexity more difficult for a consumer to understand, removing the 'understanding' requisite of consent, but it is often wrong and leads to unintended consequences. The adverts I see are not related to the context of the service I am using, and so there is no consent, implied or direct. I have no way of predicting what kind of adverts I might see, and whilst I can complain about certain adverts, there is no way to use the service without my data being used in ways that are unclear to me. Transparency is an issue Facebook has struggled with massively, and they have abused users' trust as a result.
Why does this matter to the end consumer? As long as you don't look at the adverts, the companies don't win, right? Wrong. The ways that things are advertised to us are incredibly pervasive, and even if you hide every ad as 'Not relevant to me', you are inevitably influenced by the things you see. When you scroll your Facebook Feed you are in a state of mind of taking in information, making snap social judgements that shape your subconscious - and unfortunately the ads you're shown at the same time get processed in the same way. It's no accident that stealth adverts[7][8], adverts which match the visual style of normal content, are more effective[9][10] - for exactly this reason.
The danger comes when the types of adverts shown are weakly controlled or regulated. It is one thing to be nudged towards buying a product you would otherwise not have been interested in, removing an element of free will about how you spend your money, but as a society we hold religious and political freedoms as sacred above that. The abuse of data in advertising campaigns promoting political parties or groups with religious affiliations is perhaps the most blatant violation of personal liberty possible in the area of data ethics, with huge implications for individual free will.
Malicious advertising is an assault on your psyche, and should be treated as non-consensual data abuse. Definitions remain challenging, but the framework we have for discussing these issues becomes more rigorous with every scandal, select committee, and piece of legislation. Ultimately, how much time you invest in protecting yourself from adverts intended to alter your tastes or opinions is your choice, but the tools listed at the start of this article are a great place to start.