What we mean when we talk about platform transparency

Magdalena Ewa Jozwiak, Datasphere Initiative Fellow

Transparency is a vague term and as such can be used by different actors for different purposes. At its most basic, the principle of transparency is meant to enable accountability and enhance the legitimacy of institutions by giving insight into how and why decisions are made, and by which actors.

In the context of the data economy, the actors that have attracted the most numerous calls for transparency are online platforms. Understandably so: platforms are famously powerful yet opaque and complex creatures that mediate and influence many aspects of our modern lives, from the mundane (which clip will play next on YouTube) to the consequential (whom you will vote for).

This opacity concerns above all how personal and non-personal data are captured, used, and stored by platforms, and this lack of transparency fuels growing tension among the actors of the data economy.

For the purposes of this discussion, we introduce the concept of the Datasphere¹, which can be defined as the complex system encompassing all types of data and their dynamic interactions with human groups and norms. In this context, transparency is one of the normative relations that will control the behavior of actors in the Datasphere.

However, I argue here that transparency, while useful, is in its current form under EU norms hardly a meaningful strategy for protecting individuals’ rights in the context of the Datasphere. Worse, ill-implemented transparency practices can breed complacency and unfounded trust in big platforms, ‘digital resignation’, and consent fatigue, further entrenching current data-laden business models rather than empowering users.

Where transparency falls short: the limits of individual action

Transparency in the EU is a fundamental principle of good governance, especially for public entities.² The principle of transparency is also paramount for the EU legislative framework of data protection contained in the General Data Protection Regulation (GDPR),³ a core Datasphere normative element informing actors’ relations and behaviors in the EU. Data controllers (those who determine the purposes of data processing) must provide the individuals concerned with information about their personal data at different stages of processing (Articles 12–15 GDPR), in particular where controllers seek to obtain consent for processing (Article 7 GDPR).⁴ The assumption is that the more information individuals have about how their data is processed, the more control they can exert, as if the main obstacle to rationally managing one’s data were a lack of proper knowledge.

For this reason, there was a push in the EU to enhance the information duties of controllers. Even before the GDPR, many scholars pointed out the problems of consent-based processing of personal data: individuals can hardly understand the legalese in which consent requests are written; the requests are too long; and there are too many of them.⁵ The GDPR was supposed to solve these problems with enhanced transparency requirements on data controllers, stipulating that a ‘request for consent shall be presented in a manner which is clearly distinguishable from the other matters, in an intelligible and easily accessible form, using clear and plain language’.⁶

However, the current GDPR framework for data processing, with its focus on the individual and consent, rests on the proposition that individuals can know the value of their data and the potential harms of processing well enough to rationally assess the quid pro quo of consenting to that processing. This is simply not the case, as I explain below.

First, in the complex environment of the Datasphere, where our daily lives produce masses of personal data that are combined in datasets with non-personal data and used by many different actors, it is impossible to truly understand how given personal data might be operationalized and the consequences this might produce at the individual as well as the population level.

Therefore, even the clearest and most comprehensible consent request cannot reveal the full consequences of an individual giving platforms access to data. The ‘informed consent’ approach places individuals in the market realm, where they can freely exchange data for online services as reasonable agents. This, however, only distracts from the fact that many problems created by power imbalances in the Datasphere, such as disinformation, manipulation, discrimination, segregation, and other forms of abuse of power, do not belong to the market realm but to the political one, and as such would be best addressed at the collective level by public authorities equipped with a powerful accountability apparatus.

Second, a sense of doom and discouragement that scholars call ‘digital resignation’, which to my mind correctly describes the general attitude of people using digital services, belies the vision of an empowered consumer taking control of her data. Digital services are so embedded in daily life that constant attempts to control data flows paradoxically lead to a sense of disempowerment. A consent-based system of data protection that offers transparency as an easy fix is hardly a solution; rather, it perpetuates a quiet acquiescence fostered in bad faith, where companies use dark patterns to induce consent from their users while paying lip service to the idea of transparency.

Indeed, as Draper and Turow argue, digital platforms profit from their users’ sense of resignation, and transparency policies are used to instill this very sentiment.⁷ The researchers write: ‘Under the rubric of self-regulation, companies engage in obfuscatory strategies and tactics that cultivate the perception that efforts at control are pointless. The result is to encourage feelings of resignation by conveying a sense of normalcy around consumer surveillance practices and discouraging collective action’.⁸ Consequently, the main challenge for establishing meaningful transparency is to move from the individual to the collective level of control.

Finally, beyond the individual right to information, there are only a few ways to scrutinize how platforms use their data emporiums, and in any case these are voluntary, meaning that transparency only goes as far as platforms decide to grant it.⁹ For example, in recent years most platforms have voluntarily adopted various transparency policies, in the form of transparency reports on content removal, advertising, or moderation. Additionally, several platforms, like Facebook or Twitter, have granted independent researchers access to their vast data troves, a move praised by academic communities.

But such voluntary disclosures only go so far: while they might help platforms maintain their image as actors supporting freedom and openness, they are not a sufficient tool for scrutinizing platforms’ actions. It is remarkable that information about electoral campaign manipulation, data leaks, and exploitative algorithmic practices all came to light thanks to whistleblowing, not because platforms voluntarily came forward. This series of revelations highlighted the scale of platform power, platforms’ reliance on data, and how enmeshed they have become in our daily lives.

However, these revelations were of little practical consequence, since platform users did not vote with their feet en masse by leaving the platforms. For example, after the whistleblower Frances Haugen came forward with evidence that Instagram knowingly hurt teenage girls’ body image, there was no massive outflow of users from the platform, and the corporation’s shares barely dropped.¹⁰

Similarly, following the Cambridge Analytica scandal, users did not abandon Facebook despite the initial public outrage. This hard-won transparency, obtained at personal cost to the individuals who came forward, thus did not ultimately lead to accountability for the abusers. It appears that public shaming of digital platforms does produce popular outrage but does not diminish platform power. This observation further feeds the feeling of digital resignation, as no amount of transparency seems to contribute to dismantling current corporate power structures. Thus, the asymmetry in the Datasphere persists.

The way forward

The problems mapped out above suggest that it is time to shift away from exclusive reliance on users’ consent in the regulatory framework and to investigate more top-down solutions enforced by public authorities. Against this background, I do not think the regulatory answer should be more of the same: more ‘informed’ consumer choices in the hope of bottom-up control of the Datasphere. I argue that transparency, while crucial, is hardly a meaningful strategy for protecting rights and challenging asymmetries in the Datasphere when it is reduced to informing individuals about the processing of their personal data. The power of platforms and its potential for abuse do require transparency and oversight, but from the state rather than from single individuals, because only at that level can transparency be meaningful, by which I mean understood and acted upon.

The time seems ripe for a new approach to regulating the data ecosystem in both the EU and the US. It is therefore important not to waste this momentum, and to think carefully about what kinds of insights would make transparency meaningful: not as a goal in itself, but as a means of ensuring that the actors using data do so in accordance with certain basic values. Everyone seems to agree that transparency is important, but it is also necessary to reflect on what we want to achieve through it. That reflection should inform future regulatory efforts.

Alternatively, novel forms of governance could also address the power imbalances of the Datasphere, and much more effectively than current individual-level transparency. One avenue to explore is design-level intervention, where digital systems would be designed not to extract data, polarize, escalate incivility, or capture users’ attention. Design-level solutions would shift transparency requirements from ex post assessment of the results a given system is engineered to deliver to the design phase of digital systems. Yet another idea for shaking up the current data ecosystem is decentralized, bottom-up data governance built on forms of data stewardship, such as data trusts.

Such solutions depart completely from the current model, in which powerful platforms acquire and monetize data in exchange for free services and with the resigned approval of individuals. These new models of data governance might instead effectively bring power to the people, the principle behind them being that individuals pool their data and set their own rules for how these data can be used. In this scenario, it is the trustee’s responsibility to manage the data according to the agreed rules and values. Granted, such ideas would revolutionize the current models of using data, but perhaps nothing short of a revolution is needed for a fair and innovative Datasphere.

¹De La Chapelle, B. and L. Porciuncula (2022), “Hello Datasphere — Towards a Systems Approach to Data Governance”, Datasphere Initiative Medium, https://medium.com/@thedatasphere/hello-datasphere-towards-a-systems-approach-to-data-governance-d602f96c9e1d

²Alemanno, A. (2013), “Unpacking the Principle of Openness in EU Law: Transparency, Participation and Democracy”, HEC Paris Research Paper No. LAW-2013-1003, https://ssrn.com/abstract=2303644

³De Hert, P. and S. Gutwirth (2006), “Privacy, data protection and law enforcement: Opacity of the individual and transparency of power”, in Privacy and the Criminal Law (pp. 61–104), Intersentia, https://research.tilburguniversity.edu/en/publications/privacy-data-protection-and-law-enforcement-opacity-of-the-indivi

⁴European Data Protection Board (2020), “Guidelines 05/2020 on consent under Regulation 2016/679”, EDPB, https://edpb.europa.eu/sites/default/files/files/file1/edpb_guidelines_202005_consent_en.pdf

⁵Solove, D. (2013), “Privacy Self-Management and the Consent Dilemma”, Harvard Law Review, https://harvardlawreview.org/wp-content/uploads/pdfs/vol126_solove.pdf

⁶GDPR Text (n.d.), “Article 7 GDPR. Conditions for consent”, GDPR, https://gdpr-text.com/read/article-7/

⁷Draper, N. and J. Turow (2019), “The corporate cultivation of digital resignation”, New Media & Society, 21(8): 1824–1839, https://doi.org/10.1177/1461444819833331

⁸Draper, N. and J. Turow (2019), “The corporate cultivation of digital resignation”, New Media & Society, 21(8): 1824–1839, https://doi.org/10.1177/1461444819833331

⁹Gorwa, R. and T. Garton Ash (2020), “Democratic Transparency in the Platform Society”, in Social Media and Democracy, Cambridge University Press, https://www.cambridge.org/core/books/social-media-and-democracy/democratic-transparency-in-the-platform-society/F4BC23D2109293FB4A8A6196F66D3E41

¹⁰Persily, N. (2021), “Facebook hides data showing it harms users. Outside scholars need access”, The Washington Post, https://www.washingtonpost.com/outlook/2021/10/05/facebook-research-data-haugen-congress-regulation/