The New York Times recently published an article entitled "Apple Cracks Down on Apps That Fight iPhone Addiction," noting that Apple is removing apps that help users monitor and restrict smartphone usage, shortly after introducing its own "Screen Time" tool, which performs a similar function:
"Over the past year, Apple has removed or restricted at least 11 of the 17 most downloaded screen-time and parental-control apps, according to an analysis by The New York Times and Sensor Tower, an app-data firm. Apple has also clamped down on a number of lesser-known apps."
The apps are helpful to parents who want to monitor or restrict their kids' activities, and to users who want to limit time spent in addictive apps such as social media or games. The Times interviewed app makers affected by the decision, many of whom have had their businesses disrupted because they relied on technologies Apple has deemed inappropriate for privacy reasons.
Apple responded with a public statement clarifying its position, titled "The Facts About Parental Control Apps." It centers on the use of a technology called "Mobile Device Management," or MDM, which Apple believes could be hacked or misused, exposing users' private data:
"We recently removed several parental control apps from the App Store, and we did it for a simple reason: they put users’ privacy and security at risk. It’s important to understand why and how this happened."
So, who is in the right here? Is this a case of Apple bullying independent developers? Or is Apple on the leading edge of protecting users' privacy? We will try to explain what happened and take a look at the bigger picture by drawing parallels to the Cambridge Analytica data breach and recent examples of user privacy protections.
The Facts: An MDM Primer
It's first useful to understand the technology in question here. Mobile Device Management is the name for the set of technologies used to remotely monitor and manage mobile devices. Think of a work phone or a work laptop - the IT department can remotely wipe the device if it gets stolen, or restrict which apps you run and which websites you visit, in the name of ensuring proper use of work resources.
To perform these functions, the IT department managing the device has the ability to know, approve, or block which apps are running, which websites your phone visits, and generally all the activity on your phone. This data can be streamed to a remote server, stored in a database, and logged for later analysis.
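To make this concrete, MDM restrictions are typically delivered to a device as XML configuration profiles. The sketch below is loosely modeled on Apple's Configuration Profile format; the identifiers and bundle IDs are illustrative, and a real profile would carry additional required keys (versions, UUIDs, signing):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
  "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>PayloadType</key>
    <string>Configuration</string>
    <!-- Illustrative identifier; a real deployment would use its own -->
    <key>PayloadIdentifier</key>
    <string>com.example.restrictions</string>
    <key>PayloadContent</key>
    <array>
        <dict>
            <!-- Restrictions payload governing app access -->
            <key>PayloadType</key>
            <string>com.apple.applicationaccess</string>
            <key>PayloadIdentifier</key>
            <string>com.example.restrictions.apps</string>
            <!-- Block a specific app on a managed (supervised) device;
                 the bundle ID here is a hypothetical example -->
            <key>blacklistedAppBundleIDs</key>
            <array>
                <string>com.example.socialapp</string>
            </array>
        </dict>
    </array>
</dict>
</plist>
```

The key point is that whoever installs such a profile - an IT department, or a screen-time app vendor - gains standing control over what the device can do.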
The services Apple has cracked down on use these same MDM tools to monitor users' phones and restrict app usage. Their developers creatively repurposed a tool designed for enterprise management, building products and business models around phone addiction and usage monitoring.
Screen Time: Is it any different?
It would be one thing for Apple to block all access to monitoring technology outside of work settings, but is Apple being hypocritical when it disallows others from using these tools to monitor users' phones while doing the same thing itself?
Functionally, Screen Time, which was released as part of iOS 12 in the fall of 2018, provides many of the same features as MDM-based iPhone addiction apps: the ability to set time limits for apps and to block content that may be age-inappropriate. In addition, parents can set restrictions (i.e. "Downtime" or "App Limits") and view usage reports.
From Apple's point of view, it turns out to not be about capabilities, but about trust. In an email to a MacRumors reader, Phil Schiller, Apple's SVP of Worldwide Marketing, wrote:
"No one, except you, should have unrestricted access to manage your child’s device, know their location, track their app use, control their mail accounts, web surfing, camera use, network access, and even remotely erase their devices." - Phil Schiller
Apple is making the case that, while the monitoring and usage-restriction functionality can be helpful, using MDM as the means provides too much private data to a third party. Even if a person chooses to enter into an agreement with a third party to perform the monitoring, Apple has deemed the risk too great. Apple has likely concluded that users are not aware of the extent to which MDM gives third parties access to and control of their data, so it is issuing a blanket ban on this technology for family monitoring and addiction-control purposes.
The Case for User Privacy
Apple's behavior may seem unfair to individual developers, until you look at the broader picture of how this also fits into the narrative of data protection and the burden of responsibility for tech platforms today.
The biggest recent example is the Cambridge Analytica scandal, in which Facebook took the blowback when personality data on millions of Americans leaked from a developer who had legitimate access at the time to another party who used it for nefarious purposes. From the Wired article "How Cambridge Analytica Sparked the Great Privacy Awakening":
"These scandals and blowbacks have badly bruised Facebook and arguably the entire tech industry. If Zuckerberg had trouble seeing the 'risk' associated with sloppy privacy protections back in 2012, they should be all too familiar to him now. Facebook faces a potential record fine by the Federal Trade Commission, and just this week news broke that the company is under criminal investigation for its data sharing policies."
In fact, Facebook itself recently came under fire from Apple for misusing data gathered from a virtual private network (VPN), another technology that lets a company monitor web traffic, in Facebook Research, a program ostensibly designed to study how people use their phones. The Verge writes:
Controversy over Onavo erupted last year when Facebook was forced to pull the app from the iOS App Store after Apple said it violated rules about data collection. However, code from the service lived on in the Facebook Research app, which paid teenagers as much as $20 a month for access to all their phone activity data. - The Verge
Apple went on to purge the App Store of several similar services from different companies - services that paid users to track their phone and internet usage via VPN technology - deciding on users' behalf that this invasion of privacy was not an appropriate trade-off.
Ultimately, it's true that Apple is restricting the ability of third-party developers to offer usage-monitoring services to iOS users, specifically those that leverage MDM. But to Apple, there is a bigger picture at play. The case Apple is making is that MDM lets a third party see and control too much personal data, with no guarantee that the data will be protected. Apple knows from the Cambridge Analytica scandal that, whether or not any current actors misuse the data, if the data leaks at some point in the future, Apple will take the majority of the blame and its reputation will be irreversibly damaged. The company goes so far as to not even give users the choice to grant third parties this kind of access to their phones.
So what is Apple's calculus here? iOS users already trust Apple to provide the software and hardware for their phones, so there is an innate level of trust that users have with the company. In addition, Apple claims to be a champion of privacy for the user - so by implementing Screen Time themselves, users get the monitoring features without extending the chain of trust to another party. But is the right choice for users no choice? Ultimately it's up to you, given the whole story, to decide what tradeoff you want to be able to make, and whether or not you choose to be a part of the ecosystem.