Frattone, C. (2025). The Impact of Social Networks’ Recommender Systems for Content Moderation on Autonomy. Legal Issues in the Aftermath of the EU Artificial Intelligence Act. Osservatorio del Diritto Civile e Commerciale, (2), 505-539 [10.4478/119071].
The Impact of Social Networks’ Recommender Systems for Content Moderation on Autonomy. Legal Issues in the Aftermath of the EU Artificial Intelligence Act
Cristina Frattone
2025-01-01
Abstract
In the «onlife» era, the digital and physical worlds are deeply intertwined: what happens online influences what happens offline, and vice versa. In particular, social network platforms are among the main gateways to information and a channel for political engagement. What users of such platforms see depends on AI-enhanced recommender systems, which organise the display of user-generated content (UGC) on individual news feeds based on profiling. Like eclipse glasses, these algorithms shape the way users of social networks see the digital "agora", thereby influencing personal opinions and public debate. Recommender systems therefore have an impact on electoral processes and on fundamental rights such as non-discrimination and freedom of expression and information. These systems are a double-edged sword for users’ autonomy. Profiling increases the chances of finding relevant and entertaining content, in line with the expectations of consumers of social network services. However, recommender systems obey their masters, namely social network companies, which are driven by profit and therefore seek to maximise user engagement. The algorithms running news feeds shape users’ preferences rather than merely inferring them, as they create feedback loops, filter bubbles, and echo chambers that undermine users’ autonomy. Building on previous scholarship on digital vulnerability, this research supports the view that consumers of online services, particularly social networks, are universally vulnerable vis-à-vis tech companies. Given the critical role that social networks’ recommender systems play for individual users and for society, this paper investigates whether the European Union’s Artificial Intelligence Act (AIA) addresses the risks such systems pose to users’ autonomy. Finally, it discusses the interplay between the AIA and other EU legal acts and branches of the law – namely the Digital Services Act, the Unfair Commercial Practices Directive, and Member States’ contract laws – with regard to social networks’ recommender systems.


