

No, Really, Let Me Pay — I Insist:
Can a Subscription Model for Technology Platforms
Help Preserve Privacy?

jeremy lee
Fall 2020

I.  Terms of Service

“It’s free and always will be.” Hard to fathom that I once felt relief at the sight of this reassuring tagline when logging in to Facebook. Along with other early users, I signed up during Facebook’s relative infancy, at a time when the website was restricted to college students with a valid “.edu” email. As a freshman undergrad attending university out-of-state, I was a particularly active user of the social network. I missed my friends, most of whom had remained in-state for school, and Facebook gave me a fun, free, quasi-teleportation device to stay up to date and in touch with them. Indeed, I felt social fulfillment from the engagement Facebook provided, but also a sense of dread. I could hardly afford my books for class—what if this service I’d come to rely on decided to start charging? At the time, the tagline, “It’s free and always will be,” helped alleviate that fear. Ironically, however, fifteen years and an eruption of increasingly dangerous privacy concerns later, the “free” use of services such as Facebook is now the source of that dread.

The business model employed by today’s technology platforms has been exposed by works like Shoshana Zuboff’s “The Age of Surveillance Capitalism” and the docudrama “The Social Dilemma,” among others. These modern muckrakers have enabled a more candid public discourse regarding the contract that exists between user and platform—unmasking how these services were never “free” to begin with and how they actually pose an existential threat to democracy and freedom of thought. The purpose of this essay is to discuss whether users should have the right to pay for services like Facebook when the only alternative contract is the monetization of their behavioral data. This right would be rooted in federal regulation and applied to contracts where the primary business model is collecting, analyzing, and selling user data. Two larger questions, which, due to constraints of space, cannot be explored here, are (i) how or where to draw the line in determining what constitutes a “primary business model,” and ultimately (ii) whether such a right would actually work to reduce the existential threats of surveillance capitalism. Given foreseeable collective action problems like financial inequality among users and the indifference some may feel about their privacy, the mere option of a subscription model could prove fruitless against such threats. But while it may be true that this right would be a small knife in a battle against technology platforms that roll in tanks, when the stakes are this high, resistance is never futile.

II.  Money or Data & the Right to Choose

A.    A Sound Business Decision

From 2011 to 2019, Facebook’s annual ARPU (Average Revenue Per User) grew from $5 to about $29 (USD). This metric is calculated by dividing total revenue by the number of users and “shows how effectively companies monetize their users.” Here, the relevance of understanding the ARPU of “free” technology platforms rests on the proposition that the price of a subscription could mirror a company’s ARPU. Under this configuration, users and platforms could realistically transition from “free” to paid because (i) the service would not be prohibitively expensive and (ii) the price would fairly compensate the company for its lost advertising revenue. Granted, when you consider pegging a subscription to a metric that has grown nearly sixfold in just eight years, the future affordability of such a service is in question. But those concerns may hold little weight; technology platforms, and Facebook in particular, are in danger of “becoming victims of their own success.” As more users sign up, the “pool of potential new users” shrinks; what’s more, Facebook specifically has seen its number of active users decline recently, and that number is predicted to “remain flat or decline” in the future. These challenges suggest that ARPU has plateaued and is unlikely to continue the same exponential growth of the last eight years. Therefore, assuming a stabilized ARPU, users who choose a subscription contract would have some assurance that their costs will not mushroom.
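
To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The $5 and $29 figures are the ARPU values cited above; the implied monthly price is my own illustrative assumption (a subscription pegged exactly to the 2019 annual ARPU), not an actual price offered by Facebook.

    # Back-of-the-envelope arithmetic using the ARPU figures cited above.
    # The implied monthly price assumes a subscription pegged exactly to the
    # 2019 annual ARPU -- an illustrative assumption, not Facebook's pricing.
    arpu_2011 = 5.00    # annual revenue per user, USD (2011)
    arpu_2019 = 29.00   # annual revenue per user, USD (2019)

    growth = arpu_2019 / arpu_2011   # ~5.8x, i.e. nearly sixfold over eight years
    monthly_price = arpu_2019 / 12   # ~$2.42 per month if priced at ARPU

    print(f"ARPU growth, 2011-2019: {growth:.1f}x")
    print(f"Implied subscription price: ${monthly_price:.2f}/month")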

B.    Data Privacy as a (Fundamental) Right

The discussion above helped to illuminate the financial plausibility of a subscription model. Yet very few paid options actually exist on the market—but why? Their scarcity invites the inference that technology platforms see value in user data beyond what is reflected in ARPU. Indeed, the data and behavioral patterns collected by these platforms have already demonstrated the ability to influence and modify human behavior. This capacity, viewed from slightly varying perspectives, is the intrinsic value of privacy and of patterns of behavior. It is obviously a dangerous and incredibly powerful tool, yet because technology platforms are private companies offering completely optional services, they do not face the same privacy restrictions imposed on entities such as state actors or common carriers when handling user data. This should be reviewed and reformed through federal regulation. Behavioral user data is so fundamentally sensitive and powerful that people should, at the very least, have a legal choice about whether or not they want to pay to keep it private.

III.  Conclusion

This right to choose, when applied to the types of contracts that presently exist between technology platforms and their users (i.e., conveniently “free” services in exchange for behavioral data collection), could function as another useful tool against surveillance capitalism and help reduce the existential threats to democracy and freedom of thought. Some users may, of course, not have the means to pay for such a choice, while others simply might not care (today) about the consequences of their behavior being collected and sold. These concerns are valid, but they should not preclude the adoption of such a right merely because it may not be equally accessible to all users or produce systemic change on its own. Death by a thousand cuts might be slow, but it can ultimately prove effective.
