What are the new features Apple is implementing in iOS 15 that have privacy and security people all up in arms? And why none of this should come as a surprise to anyone who’s actually paid attention and is thinking for themselves instead of just buying the company’s propaganda.

Everybody is outraged about Apple all of a sudden. They are supposedly violating everybody’s trust in their privacy promise. It looks like this week, on The Private Citizen, I not only have to explain what new features Apple has implemented and what they are actually doing, but also clear up a fundamental misunderstanding about Apple in particular and companies in general.


This podcast was recorded with a live audience on my Twitch channel. Details on the time of future recordings can usually be found on my personal website. Recordings of these streams get saved to a YouTube playlist for easy watching on demand after the fact.

New Image Scanning Features in iOS

Apple has caused quite a stir amid the privacy-conscious section of the public by announcing two new features in iOS 15 to scan photos on people’s phones for porn – in one case legal nude images and in the other illegal material including child pornography. These features should not come as a surprise to anyone who has followed the EU’s regulatory steps to enforce the algorithmic search for child pornography on online platforms (EU Directive 2020/0259) or the German government’s steps in a similar direction, calling for operating system makers to police their software for pornography by default.

The two new features in iOS 15 are:

  1. Apple will scan all photos uploaded to iCloud for known child abuse material. If such material is detected, it will be reported to local and/or federal authorities. This is done by matching hashes of known child porn material to hashes generated from every image uploaded to iCloud.
  2. Apple will scan photos included in iMessage chats locally on each phone. If parental protection features are enabled on a device, the user (presumably underage) will have images containing nudity blurred for them and will be warned that they are about to see sexually explicit content. The message apparently says it is “OK not to look at the image”. If a child under 13 opts to view the image, parents will be alerted to this by the operating system. Nudity is detected, according to Apple, by a locally running algorithm that uses machine learning to analyse images.
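
The first feature rests on perceptual hashing: matching images by similarity rather than by exact bytes, so that recompressed or slightly altered copies of known material still match. Apple’s actual system uses a proprietary hash (“NeuralHash”) combined with cryptographic matching; the sketch below substitutes a generic average hash over an 8×8 grayscale image purely to illustrate the principle – the function names and the distance threshold are invented for the example.

```python
# Hedged sketch of perceptual-hash matching. This is NOT Apple's NeuralHash;
# it only demonstrates the general technique of similarity-based matching.

def average_hash(pixels):
    """64-bit perceptual hash of an 8x8 grayscale image (list of 64 ints 0-255).

    Each bit records whether the pixel is brighter than the image's mean,
    so small brightness changes or recompression leave the hash untouched.
    """
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming(a, b):
    """Number of differing bits between two hashes."""
    return bin(a ^ b).count("1")

def matches_blocklist(image_pixels, blocklist_hashes, threshold=5):
    """True if the image's hash is within `threshold` bits of any known hash."""
    h = average_hash(image_pixels)
    return any(hamming(h, known) <= threshold for known in blocklist_hashes)
```

A slightly brightened copy of a blocklisted image still matches, while an unrelated image does not – which is exactly the property that makes this useful for detection, and also what the adversarial-manipulation criticism below is about.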

Both features have garnered criticism because they could potentially be used in the future to scan for other things. As Edward Snowden points out, Apple is essentially establishing a surveillance infrastructure that might, in the future, be used by intelligence services or repressive governments to look for incriminating material. Many infosec people also see building client-side scanning of encrypted messages into the operating system as a dangerous precedent, and they are asking the obvious next question: Where will it end? Apple is essentially implementing an official version of the best way to circumvent end-to-end encryption, and thus making this encryption a lot less useful – or, in the eyes of the purists, completely useless.

There are also questions about the hashing algorithm and ways it could be reverse-engineered and manipulated. If you can use a sticker on a road sign to make a Tesla change lanes, it’s safe to assume that you can also come up with innocent-looking photos that get the people who have them reported to the FBI once Apple’s scanner sees them. Or at least manipulate people’s family photos so they end up being reviewed by some minimum-wage intern in Croatia that Apple outsourced its “manual review process” to.

And of course, the people who are making a living producing child pornography will have access to the database used for checking anyway. Not to mention that Apple’s initiatives target everyday people and the means by which they communicate and not the means by which people who sell and consume child pornography share such content.

If you look at these new technologies in the context of some other privacy features Apple has recently rolled out, a clear picture forms.

The App Tracking Transparency Smokescreen

For example, there’s the App Tracking Transparency thing: a feature, released in iOS 14.5, that requires apps to present a screen at first start, asking users to consent to being tracked across websites opened through the browser and by other apps on the device. This feature, which Apple billed as a great innovation, seems to me to be simply a necessary implementation of basic GDPR rules.

Apple’s PR claims that this feature is a huge success:

During the company’s Q3 2021 earnings call, Cook was asked how the change to Identifier for Advertisers tracking tag handling was developing, and how it was influencing the trajectory of advertising within Apple’s services. “We’ve been getting quite a bit of customer reaction, positive reaction to being able to make the decision …on whether to be tracked or not,” Cook said, adding that the feature seems to “be going very well from a user point of view.”

The change doesn’t seem to bother the big players in the advertising and tracking business that much, however. Facebook is doing fine, it seems.

Facebook’s original public predictions about App Tracking Transparency’s effect were apocalyptic. But even though App Tracking Transparency took effect during Facebook’s most recent quarter (Q2 of 2021), the company still posted huge ad revenue growth. Facebook’s revenue, which is largely driven by the kinds of advertising that Apple’s iOS change undermines, grew 56 percent year-over-year in Q2, beating investor expectations. The company had 1.9 billion daily active users and 2.9 billion monthly active users. It earned $10.12 of revenue per user, on average.

Data on user opt-in rates for tracking has varied quite a bit. Some firms put the figure at just 4 percent, but others place opt-in rates as high as around 30 percent. The rate likely depends on the app in question. In any case, users who opt in are definitely not the majority; most users are declining to be tracked when prompted. And each user who does is worth a lot less money to Facebook, which makes much of its money leveraging each user’s data to charge advertisers money to microtarget users and others with similar attributes.

But what if Apple didn’t implement this feature because it cares about the privacy of its users? Maybe it was forced to implement this by privacy laws in the EU. And there might also have been some motivation for Apple (notoriously bad at getting traction for its online services and never a big player in the ad business) to gain an advantage over its competitors.

Today, Zuckerberg is dedicating much of his time to describing his vision for the “metaverse,” which he has identified as the new direction for the company. He has described this vision as putting a mixed-reality layer on our lives through which people can interact and socialize with one another virtually in new ways while crossing geographic barriers. But Apple executives have also outlined a somewhat similar long-term vision, albeit with a very different approach. By forcing Facebook to play by different ad-targeting rules, Apple has strengthened its position against the social media company in any coming battle over a future mixed-reality computing landscape.

Could it be that Apple, under the cover of outwardly being a privacy-centric company, is actually implementing changes that shift the playing field from one benefitting its competitors to one that Apple has more control over? Another recently announced iOS feature makes me think so…

Apple’s Weird Tracking Pixel Play

Another privacy feature announced this year is one of those things that sounds good on the face of it: Apple is fighting tracking pixels in email – those invisible one-pixel images that get loaded when you open an HTML email. Since your email client fetches the image from a server belonging to a tracking company, or to whoever sent the mail, they know you opened the email and they can log your IP address.
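
Mechanically, this is trivial to build, which is why everyone does it: the sender embeds a per-recipient token in the pixel’s URL, and when the mail client fetches the image, the sender’s server logs the open along with the client’s IP. A minimal sketch – the URL shape, parameter name and server logic are all invented for the example:

```python
from urllib.parse import urlparse, parse_qs

opens = {}  # recipient token -> number of times their pixel was fetched

def pixel_tag(base_url, token):
    """HTML a sender might embed: an invisible 1x1 image keyed to one recipient."""
    return f'<img src="{base_url}/pixel.gif?uid={token}" width="1" height="1" alt="">'

def record_open(request_path, client_ip):
    """What the tracking server does when a mail client fetches the pixel."""
    qs = parse_qs(urlparse(request_path).query)
    token = qs.get("uid", ["unknown"])[0]
    opens[token] = opens.get(token, 0) + 1
    # A real tracker would also log the IP, user agent and a timestamp.
    return token, client_ip
```

Because the token is unique per recipient, a single fetch tells the sender exactly who opened the mail, when, and from which IP address – no cooperation from the recipient required beyond rendering the HTML.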

At its WWDC this month, Apple announced a new feature aimed at stopping email senders from gathering critical pieces of data about email recipients. Specifically, the Apple Mail Privacy Protection, set to roll out this Fall with iOS 15, allows users to block email senders like publishers, retailers and more from detecting when they open an email, among other things.

The change has many email senders and ad servers in a panic. For years, email open rates have been the de facto standard for measuring campaign success and user engagement and for monetizing emails through advertising. Not only are open rates a primary KPI, but most email advertising vendors operate on a CPM or cost per impression model, which means brands pay and publishers get paid based on how many recipients see the ad. If you can’t track opens, you can’t track ad views, which means CPM-based performance metrics will be null and void.

Apple is fighting that? Sounds great, right? Well, have a look at how they are doing it:

Mail Privacy Protection downloads remote content in the background by default. All remote content downloaded by Mail is routed through multiple proxy servers, preventing the sender from learning your IP address. As a result, email senders will only receive generic information rather than information about your behavior.

So, if I understand this correctly, Apple is opening every single email you receive before you even see it. They will download the images in that mail to one of their servers and then send them on to you.
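
Mechanically, a scheme like the one Apple describes boils down to two steps: rewrite every remote image URL in the message so it points at the proxy, and fetch the remote content eagerly in the background. The sketch below is just my reading of the announcement – the proxy hostname and function names are invented, not Apple’s actual implementation:

```python
import re
from urllib.parse import quote

# Hypothetical proxy endpoint; the real infrastructure is Apple's and opaque.
PROXY = "https://mail-proxy.example.com/fetch?url="

def rewrite_remote_images(html):
    """Point every remote image at the proxy, so the sender only ever sees
    the proxy's IP address instead of the recipient's."""
    return re.sub(
        r'src="(https?://[^"]+)"',
        lambda m: 'src="' + PROXY + quote(m.group(1), safe="") + '"',
        html,
    )

def prefetch(urls, fetch):
    """Eagerly fetch all remote content in the background, whether or not the
    user ever opens the message. This is what kills the sender's open signal --
    and also why every image now passes through the proxy operator's servers."""
    return {url: fetch(url) for url in urls}
```

Note what the design implies: the sender loses the open signal, but only because a third party – the proxy operator – now fetches and handles every piece of remote content instead.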

Am I the only one who sees the glaring privacy problem with this? This means that Apple is reading (albeit automatically) all of your mail. All those images pass through Apple’s servers, where they might be scanned for who-knows-what – remember what we just talked about in the first segment of the episode? And we are talking about a local email client here, not webmail. How is it that Google is under constant fire from privacy advocates for doing this with Gmail’s webmail, but nobody complains when Apple announces the same thing for local email clients? At least you can turn it off. For now.

It seems to me that this move doesn’t have the user’s privacy at heart at all. It’s about giving Apple more data about that very user. And as a side-effect they are hurting some of their competitors who rely on advertising to generate their income.

Trusting Apple is a Dumb Thing to Do

All of this hammers home a point I have been making for at least ten years now: Apple is a company like any other. Companies do not have ethics, they don’t have morals. They don’t care about you. Companies don’t have your best interest at heart. They only do what their leadership thinks is best for the company. And Apple is a company like any other.

All that stuff about being privacy-focused and making all of their money from hardware is PR bullshit. It’s propaganda, designed to make Apple look good. The truth is that Apple was never good with online services and can’t even begin to compete with companies like Google and Facebook when it comes to making money by using their users’ data to track them and serve them ads. If Apple was suddenly able to do this and make a lot of money off it, they would. Their whole privacy PR campaign would go out the window in a heartbeat. Same if someone, say the Chinese Communist Party, was to suddenly pressure them to compromise on privacy or lose the Chinese market – which has happened in the past, by the way.

Over the last decade or so, there have been more instances than I can count of me getting into a discussion with someone who told me they buy Apple devices because “Apple makes money on hardware, not my data” or because “with Apple, their hardware is the product, not me and my data”. This included many security researchers, privacy advocates and a number of journalists. For years, I have told all those people that I think this idea is incredibly naïve. It is an analysis, if you can even call it that, based solely on what a company has said in press releases – in other words: it’s completely based on propaganda. I have told all of them: Apple is a company like any other. If circumstances, which you have no control over, suddenly change, they will reverse this decision (probably without you knowing) and there’s nothing you can do about it.

It is beyond me how people who are incredibly smart and very technically literate can at the same time be dumb enough to not see this as a simple universal truth. It’s common sense, really.

I’ve been on the record on at least three podcasts I was previously involved in (Linux Outlaws, Geek News Radio and c’t Uplink) as saying this. And I’ve written about it in opinion pieces several times throughout the years. And I’ve been belittled for this opinion almost universally. Therefore, it is very satisfying to me to have seen the rest of the world wake up to this fact over the last week or so.

I hope that now that we are all on the same page on this, people will remember a simple lesson from this situation: Companies are all the same. They have no ethics, no morals and they don’t care about you. They do what they believe is best for them. Do not believe their propaganda. Or, more simply: Start to think for yourself once in a while.

Producer Feedback

In our Matrix room Evgeny Kuznetsov says in reply to episode 80:

Funny how you first mention in the episode proper that one of the issues with the internet is that country borders and jurisdictions don’t necessarily apply, and then in the feedback part explain how YouTube violated your rights as a German journalist and should be held liable by the German law. As much as I sympathize with your case in this particular situation, I can’t help thinking that I’m actually extremely glad YouTube doesn’t always exactly follow the Russian regulations, the Chinese regulations, the Turkish regulations, etc. For me living in Russia the fact that YouTube tends to ignore a big part of Russian authorities' demands (however perfectly legal those might be here in Moscow) is definitely a welcome thing. That comes at the cost of YouTube not giving a flying fuck about German regulations either, I guess.

I mean, in Russia saying that same-sex relationships are not entirely bad is usually considered by the courts to be “gay propaganda” which is actually illegal and punished by law. Same goes for the recreational drug use, euthanasia, anti-clericalism, etc, etc. Not even mentioning critics of the-powers-that-be. By the Russian law, YouTube should be banning all of that. I’m glad it doesn’t. By the Russian law, YouTube is absolutely not allowed to ban Russian officials, especially when they say things like “gay people should get mental treatment or, better, be executed on sight”, yet that’s exactly what YouTube does. I’d rather they didn’t, because I think the stupidity and moronity of our public persons should be visible to anyone who cares to watch, but I see how it can be viewed as beneficial, too. So when you say “YouTube should do this and that, and shouldn’t do that and the other, because Grundgesetz” my gut reaction is: “Fuck no! Abiding by the local regulations is not a good idea, no-no-no!”

To which Georges replied:

Interesting point of view. In Russia’s case though I wouldn’t be surprised if YT’s policy is on purpose more liberal in your country as it helps maintain the narrative of the US government that everything ruled by your president must be evil, so maybe they’re just trying to influence the Russian people with YT’s woke contents instead of being just as authoritarian as your government.

Evgeny Kuznetsov answered:

Could well be. In fact, that’s what our authorities constantly say: YT, FB, Twitter etc. actively promote pro-American and anti-Russian (well, anti-Putin, but Putin equals Russia and there can be no Russia without Putin, or so our head of the Parliament says) narrative. I find that entirely possible.

In China’s and Russia’s cases (as opposed to Germany, admittedly) there are local alternatives: we have RuTube for videos and VK for FB. Those suck and very few people voluntarily use them, but hey, it is entirely possible for the Russian authorities to actively ban YT, FB, etc, the same way Chinese did years ago. Russia has successfully banned LinkedIn several years ago, so there’s an example. Who needs Google when there’s Yandex that doesn’t sell your data (only gives it to FSB for free)?! The problem is: this enforcement of the local jurisdiction does not, in my opinion, benefit end users in any way thinkable, nor does it benefit the Internet as a whole or humankind as species.

If you have any thoughts on the things discussed in this or previous episodes, please feel free to contact me. In addition to the information listed there, we also have an experimental Matrix room for feedback. Try it out if you have an account on a Matrix server. Any Matrix server will do.

Toss a Coin to Your Podcaster

I am a freelance journalist and writer, volunteering my free time because I love digging into stories and because I love podcasting. If you want to help keep The Private Citizen on the air, consider becoming one of my Patreon supporters.

You can also support the show by sending money via PayPal, if you prefer.

This is entirely optional. This show operates under the value-for-value model, meaning I want you to give back only what you feel this show is worth to you. If that comes down to nothing, that’s OK with me. But if you help out, it’s more likely that I’ll be able to keep doing this indefinitely.

Thanks and Credits

I like to credit everyone who’s helped with any aspect of this production and thus became a part of the show. This is why I am thankful to the following people, who have supported this episode through Patreon and PayPal and thus keep this show on the air:

Georges, Steve Hoos, Butterbeans, Jonathan M. Hethey, Michael Mullan-Jensen, Dave, 1i11g, Michael Small, Jackie Plage, Philip Klostermann, Vlad, Jaroslav Lichtblau, ikn, Kai Siers, Bennett Piater, Fadi Mansour, Joe Poser, Dirk Dede, tobias, m0dese7en, David Potter, Sandman616, Mika, Martin, Rhodane the Insane, Rizele, avis, MrAmish, Dave Umrysh, drivezero, RikyM, Barry Williams, Jonathan Edwards, Cam, Philip, Captain Egghead, RJ Tracey, D, Rick Bragg, Robert Forster, Superuser and noreply.

Many thanks to my Twitch subscribers: Mike_TheDane, Flash_Gordo, Sandman616, m0dese7en_is_unavailable, epochsky, l_terrestris_jim, redeemerf, Galteran, BaconThePork and jonathanmh_com.

I am also thankful to Bytemark, who are providing the hosting for this episode’s audio file.

Podcast Music

The show’s theme song is Acoustic Routes by Raúl Cabezalí. It is licensed via Jamendo Music. Other music and some sound effects are licensed via Epidemic Sound. This episode’s ending song is Why Don’t We Feel It by Velvethead feat. Easton.