Narrative, creative – immersive? The immersive potential of right-wing extremist communication on social media

Sandra Kero
Center for Advanced Internet Studies (CAIS), Bochum


Josephine B. Schmitt 
Center for Advanced Internet Studies (CAIS), Bochum 


Social media is ubiquitous. More and more people are using platforms such as Facebook, Instagram and TikTok to share their experiences, opinions and interests directly with others. But what does this mean for our experiences in these digital spaces – and what effects does it have on the formation of our political opinions? In particular, the immersive experiences that users have in these media environments can trigger powerful emotional effects and influence political or ideological attitudes. In this blog article, we look at these immersive effects of social media platforms. Alongside a detailed examination of what exactly immersiveness can mean in this context, we present mechanisms that contribute to the immersiveness of right-wing communication on social media, drawing on concepts and theories from media studies, communication science and psychology. Based on these findings, we then provide recommendations for action aimed at developing preventive and repressive measures to counter the improper and antidemocratic use of immersive environments.

INTRODUCTION

Social media platforms like Instagram or TikTok are a popular tool of right-wing actors. As political outsiders, they exploit the opportunities that social media offers to establish themselves as independent voices outside journalistic media – orchestrating their narratives, channelling their political opinions and ideological attitudes, and recruiting new members (Fielitz & Marcks, 2020; Rau et al., 2022; Schmitt, Harles, et al., 2020; Schwarz, 2020). On Instagram, right-wing groups deliberately rely on creators, lifestyle content and a connection to nature to make their ideology more easily digestible (Echtermann et al., 2020). As a result, ideological content is not only subtly presented – often, the corresponding ideological perspectives are explicitly stated (Kero, in press). TikTok, too, which is particularly popular among young users (mpfs, 2022), is becoming increasingly important as a platform for the far right to interact with target groups that have not yet formed firm political views (pre:bunk, 2023). The range of content provided is broad here too: alongside supposedly humorous and musical content, there is also material from AfD politicians who showcase their beliefs. Extreme right-wing ideas are made accessible through normalisation strategies, e.g. pseudo-scientific misinformation and emotionalisation (Müller, 2022).

Social media platforms not only give everyone the opportunity to enter the public political discourse – the disseminated content also allows boundaries between political information, entertainment and social issues to become blurred. At the same time, various functions of social media facilitate users’ immersive experiences – something that far-right extremists in particular can benefit from in their communication practices. 

This article aims to contextualise the immersiveness of extreme right-wing content on social media and present selected immersive mechanisms within these environments as examples. On the basis of this, preventive measures are identified. 

WHAT DO WE UNDERSTAND BY IMMERSION?

Immersion essentially means being deeply absorbed or engrossed in a particular environment, activity or experience (Murray, 1998). Immersion is often mentioned as a feature of digital technologies such as virtual reality (VR) or augmented reality (AR), or in the gaming industry (Mühlhoff & Schütz, 2019; Nilsson et al., 2016). From a media theory and media psychology perspective, the term primarily describes a state in which physical and fictional boundaries dissolve, i.e. users’ subjective experience of plunging into an imaginary world or narrative environment and being emotionally involved in it (e.g. E. Brown & Cairns, 2004; Haywood & Cairns, 2006). This is usually associated with a strong feeling of presence in which perception and attention are focused on the immersive content. Conversely, attention to time and (real) space decreases for the duration of the immersion (Cairns et al., 2014; Curran, 2018). Immersive environments and mechanisms can encourage engagement and learning (Dede, 2009), but can also reinforce the ideological effect of content among users (Braddock & Dillard, 2016).

Moving away from an understanding of the term that is limited to a technical, cultural or psychological phenomenon, Mühlhoff and Schütz (2019) consider immersion from an affect theory and social theory perspective. Here, immersion is described as a dynamic interaction between (non-)human individuals and between them and their environment, i.e. as “a specific mode of emotional and affective involvement in a current or mediatised social event” (p. 20). Affects are spontaneous, instinctive reactions that can be influenced both individually and socially and which shape a person’s behaviour (Strick, 2021, among others). The design of certain (media) environments can influence affective reactions in specific ways and thus also modulate power relations and social dynamics (Mühlhoff & Schütz, 2019). In this sense, certain immersion offerings can help to control and regulate behaviour, thus creating “immersive power” (p. 30). Immersion can therefore be considered a form of situational influence exerted on the thoughts and feelings of the individual. It represents an indirect exercise of power that arises without hierarchies and through social contexts.

Immersion is considered in a wide range of scientific disciplines (e.g. psychology, computer science, cognitive science, design); therefore, the synonyms and closely related concepts are just as diverse. The literature thus includes terms such as presence, flow, involvement and engagement, which describe similar phenomena (e.g. Curran, 2018; Nilsson et al., 2016). To describe psychological immersion in the context of the reception of narrative media content, the term transportation is used in media psychology and communication science research (e.g. Moyer-Gusé, 2008). Immersion as a subjective experience of users is not limited to a specific medium or technology. Learning situations, books, films and social media content can therefore also have an immersive effect, as well as computer games or VR applications.

Against the backdrop of this integrated understanding of the term, we focus in this article on immersive mechanisms in and through social media and their content. We are therefore primarily concerned with the social and psychological dimension of immersiveness.

IMMERSIVE MECHANISMS IN SOCIAL MEDIA

Social media can contribute to immersive mechanisms and effects in various ways. With an understanding of immersion from a media theory and media psychology perspective, these experiences relate to the constant presence of smartphones and, with them, social media applications in our everyday lives, resulting in a merging of virtual, physical and social space. Users’ emotional involvement is also intensified through the primary type of use of social media platforms – such as direct sharing of everyday experiences, interests and opinions – and the (psychological) connection with individual producers of this content, hereinafter referred to as creators, and their posts. From a social theory perspective, a power structure is created here in which affective dynamics can be used in a targeted way to influence the behaviour, attitudes and perceptions of individuals. 

In the following, we aim to take a closer look at the underlying mechanisms in order to thoroughly examine the immersive potential of social media in the communication of extreme right-wing creators. A distinction is made between two contexts: the platform environment as an environment of social events, and the interaction between individuals and content in this environment.

Presenting the persona and the story

In the competition for users’ undivided attention, a diverse range of far-right extremists are also taking to social media. The amount of extreme right-wing content has been steadily increasing for years (e.g. Munger & Phillips, 2022). Opinions become commodities. Social media marketing strategies are used to attract and retain users’ attention (Cotter, 2019) and to shape opinions in line with right-wing ideology. How a particular media persona is presented and the way in which stories are told play a crucial role in the immersiveness and impact of the content provided.

A number of creators and influencers have emerged from the far-right scene in recent years. They are an essential prerequisite for reaching users, especially those who are otherwise not interested in political content. Generally speaking, influencers are previously unknown social media users who become well-known personalities through careful self-promotion and regular posting of content on social media, and who cover a range of topics on their channels (Bauer, 2016). 

Female activists in particular use popular social media platforms to present extreme right-wing ideology to a wide audience in a personal and emotive way through recipes, beauty tips and inspiring landscape photos (Ayyadi, 2021; Echtermann et al., 2020; Kero, in press); the boundaries between entertainment, lifestyle and politics are fluid. The creators act as role models (Zimmermann et al., 2022), are opinion leaders as regards content that threatens democracy (Harff et al., 2022), can attract users to content and channels, and encourage interaction via personal connections (Leite et al., 2022). The greater the trustworthiness and centrality of the creators, the more the recipients report immersive experiences with the media offering (Jung & Im, 2021).

Figure 1: TikTok channel “victoria” – a mix of nature, homeland and historical scenes

Figure 2: The youth organisation of the AfD – classified as an extremist group that threatens the constitution – posts satirical clips on TikTok

Above all, it is the communicative presentation styles of the content producers that let recipients really immerse themselves in their everyday world. They tell stories of nature, homeland and a sovereign German people, and provide simple answers to complex political questions. Enemy images – those allegedly responsible for social problems – are quickly constructed and identified on the basis of supposedly distinct characteristics (see Figure 1). Through their narratives, the creators not only create identity and meaning, but also offer recipients a strong group that provides interested individuals with a framework and (political) orientation. Formats with an emphasis on humour, pop culture and youth culture are used to make content easy to digest and to build recipients’ loyalty to channels and creators (Schmitt, Harles, et al., 2020; see also Figure 2). The network of right-wing actors on social media is dense; they refer to each other’s stories and narratives – including across platforms (Chadwick & Stanyer, 2022). An apparent consistency of narratives, i.e. the same story being told by several actors, makes a story appear even more credible.

MECHANISMS OF NARRATIVE PERSUASION ARE USED FOR SPECIFIC PURPOSES

Research in the fields of media psychology and communication science identifies three mechanisms that make stories convincing and thus make right-wing extremist communication particularly immersive and effective: transportation, identification and parasocial interactions (e.g. Braddock, 2020). 

Transportation describes the process whereby, in order to understand the narrative presented in a story, people must shift their attention from the real world around them to the world constructed in the narrative. Ideally, as a result of this psychological immersion, they lose their awareness of the real world and completely immerse themselves in the fictional world (Braddock, 2020). More engagement with the narrative world means less questioning of persuasive information (Igartua & Cachón-Ramón, 2023; Moyer-Gusé, 2008).

Identification means the adoption of a media character’s viewpoint or perspective by recipients. This happens, for example, where the media figure is perceived as particularly similar to oneself.

Parasocial interaction is a psychological process in which media users like and/or trust a media figure so much that they feel as if they are connected to them. Where this happens over a longer period, e.g. several videos or articles, it is also called a parasocial relationship. A parasocial interaction or relationship with a media figure reduces users’ reactance to the content presented and the desire to object to content. This facilitates the adoption of beliefs, attitudes and behaviours (Braddock, 2020).

Through narrative storytelling, the direct presentation of everyday situations, self-disclosure and directly addressing recipients, right-wing creators can increase the level of their authenticity and reinforce parasocial relationships with their followers. The more socially attractive a media figure is perceived to be (Masuda et al., 2022) and the greater the recipients’ identification with the media figure (Eyal & Dailey, 2012), the stronger the parasocial relationship. Social media creators offer young users in particular great potential for identification by presenting themselves as similar to them and approachable (Farivar & Wang, 2022; Schouten et al., 2020). An example of this is the account belonging to Freya Rosi, a young, right-wing creator. With tips on beauty, baking and cooking and photos of nature, she orchestrates the image of an approachable, traditional young far-right supporter who loves her homeland (a detailed analysis of the account is also found in Rösch, 2023). A particularly close and enduring parasocial relationship with social media protagonists can even lead to addiction-like behaviours among users (de Bérail et al., 2019). In other words: users have difficulty escaping the fictional world manufactured by the social media creators – in this context, their content has a highly immersive effect.

Figure 3: Screenshots of Freya Rosi’s Instagram account. With tips on beauty, baking and cooking and photos of nature, the creator orchestrates the image of an approachable, traditional young far-right supporter who loves her homeland. A detailed analysis of the presentation strategies can also be found in Rösch (2023).

Self-disclosure by creators can lead to a higher level of parasocial interactions and relationships among followers via the feeling of social presence, i.e. perceiving the media figure as a natural person (Kim et al., 2019; Kim & Song, 2016), and to greater engagement with and commitment to communicators and their content (Osei-Frimpong & McLean, 2018). In turn, the feeling of social presence makes it less likely that people will check the truth of information (Jun et al., 2017). This is particularly problematic when dealing with extreme right-wing content, e.g. disinformation or conspiracy theories. 

EMOTIONALISATION OF CONTENT

In the context of their communication, radicalised communicators talk about fears, uncertainties and developmental tasks; they also use emotionalising content and images to make their content relatable and convince followers of their world view (Frischlich, 2021; Frischlich et al., 2021; Schmitt, Harles, et al., 2020; Schneider et al., 2019). In the so-called post-truth era, emotions, as well as perceived truths, are becoming a key manipulation tool of extreme right-wing creators. An experimental study by Lillie and colleagues (2021) indicates that narrative content which triggers fear among recipients leads to more flow experiences – i.e. the complete absorption of users in the reception of the content – and encourages behavioural intentions; on the other hand, fear reduces the willingness to contradict content. This makes it appear more convincing. 

The development of social media into platforms for creating, publishing and interacting with (audio-)visual content supports the communication and effect of this sort of content. (Audio-)visual content is particularly suitable for conveying emotional messages (De Houwer & Hermans, 1994) and thus for generating corresponding emotional responses. In turn, this affects information processing, as the brain pays more attention to processing the emotional reaction than to the information itself (Yang et al., 2023). Tritt et al. (2016) found that politically conservative people appear to respond more readily to emotional stimuli than liberals.

The immersive effects outlined so far may result in followers becoming more intensely involved in the world portrayed by creators, forging a deeper emotional bond and identifying more strongly with the content presented (Feng et al., 2021; Hu et al., 2020; Slater & Rouner, 2002; Wunderlich, 2023).

THE ROLE OF PLATFORM FUNCTIONS 

Time plays an important role on social media. Users’ attention must be captured quickly in order to attract them to the platform. It is fair to assume that immersive effects of communication on social media are intensified through the design and functional logic of the platforms. Platform-specific algorithms and dark patterns play a special role here. 

Algorithms determine relevance and importance

In addition to highlighting the relevance and importance of certain content, algorithms encourage the emotional involvement of users by preferentially showing them content that they or similar users potentially prefer. This is designed to attract users to the platform and content and retain them for as long as possible; it is part of the platforms’ business model. In this way, platforms let users plunge deeper and deeper into the virtual worlds. This makes them an immersive and effective recruitment tool. The algorithmic selection and sorting of particular content depends on various factors including the type of content, its placement, the language used, the degree of networking between those disseminating it and the responses from users (Peters & Puschmann, 2017). 
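
To illustrate this logic, the following sketch ranks posts by a weighted engagement score. It is a minimal, hypothetical model: the feature names, weights and boost factors are our own assumptions for illustration, since the actual ranking systems of commercial platforms are proprietary and far more complex.

```python
from dataclasses import dataclass

@dataclass
class Post:
    content_type: str        # e.g. "video", "photo", "text"
    likes: int
    comments: int
    shares: int
    author_connections: int  # degree of networking of the disseminator

# Hypothetical boost per content type: formats that hold attention
# longer are favoured.
TYPE_BOOST = {"video": 1.5, "photo": 1.2, "text": 1.0}

def engagement_score(post: Post) -> float:
    """Toy relevance score combining content type, user responses
    and the disseminator's network size."""
    # Responses signalling strong involvement (comments, shares)
    # weigh more than passive ones (likes).
    responses = post.likes + 3.0 * post.comments + 5.0 * post.shares
    network = 1.0 + post.author_connections / 1000.0
    return TYPE_BOOST.get(post.content_type, 1.0) * responses * network

def rank_feed(posts: list[Post]) -> list[Post]:
    # Content that triggers strong reactions rises to the top,
    # regardless of its accuracy or ideological slant.
    return sorted(posts, key=engagement_score, reverse=True)
```

Even this toy model makes the dynamic visible: posts that provoke many comments and shares – typically emotionally charged content – are ranked highest, independent of their quality or truthfulness.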

Commercial platforms prioritise posts that trigger emotional reactions, polarise opinions or promote intensive communication between members (Grandinetti & Bruinsma, 2023; Huszár et al., 2022; Morris, 2021). Visual presentation modes in particular – photos, videos and memes – trigger affective reactions among recipients and draw them in (Maschewski & Nosthoff, 2019). Algorithmic networking provides technical reinforcement of users’ emotional connection and involvement with the content. This also applies to contact with antidemocratic content. Studies suggest that social media algorithms can facilitate the spread of extreme right-wing activities (Whittaker et al., 2021; Yesilada & Lewandowsky, 2022) – sometimes keywords shared with unproblematic content are enough (Schmitt et al., 2018). In particular, people who are interested in niche topics (e.g. conspiracy theories) (Ledwich et al., 2022) and who follow platform recommendations for longer (M. A. Brown et al., 2022) are manoeuvred into corresponding filter bubbles by the algorithms. At the same time, however, the majority of users are shown moderate content (M. A. Brown et al., 2022; Ledwich et al., 2022).

In recent years, TikTok has faced scrutiny particularly because of its strong algorithmic control. This makes it even more likely that users will unintentionally come into contact with radicalised content (Weimann & Masri, 2021). In addition, the stream of content continues to run if users do not actively interrupt it, thus creating a much more immersive experience on the platform compared to other social media (Su, Zhou, Gong, et al., 2021; Su, Zhou, Wang, et al., 2021). Sometimes the platform is even said to have addictive potential due to its functions (e.g. Qin et al., 2022). It is difficult to break away from, and this poses a great danger for users, especially given the increase in extremist communication on TikTok. 

Dark patterns as mechanisms to enhance immersion

Dark patterns also play a role in discussions around the immersive effect of social media. The term describes platform design decisions that use deceptive or manipulative tactics to keep users on the platforms and encourage them to make (uninformed) usage decisions (Gray et al., 2023). On social media, dark patterns are primarily aimed at retaining users’ attention for long periods, ensuring excessive and immersive use of the platforms and getting users to perform actions (e.g. likes). Examples of dark patterns are activity notifications, the deliberately cumbersome process of logging out of Facebook or deleting an account, and the often counter-intuitive colours of cookie notifications, which lead users to accept all cookies, including unnecessary ones. Nearly all popular platforms and applications contain dark patterns; many users do not notice them (Bongard-Blanchy et al., 2021; Di Geronimo et al., 2020). Creators also aim to influence user behaviour through dark patterns. For example, they manipulate popularity metrics (e.g. likes, number of followers) and images to try to artificially boost their credibility and generate attention for their content (Luo et al., 2022).
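
The “activity notification” pattern mentioned above can be made concrete with a small sketch. Everything here is an illustrative assumption – no platform documents its notification logic – but the two mechanisms shown, variable timing and vague social cues, are characteristic of the pattern:

```python
import random
from datetime import datetime, timedelta

def next_notification_delay(hours_since_last_visit: float) -> timedelta:
    """Hypothetical re-engagement schedule: the longer users stay away,
    the sooner they are pinged; random jitter makes the schedule
    unpredictable (variable reinforcement)."""
    base_minutes = max(15.0, 240.0 - 20.0 * hours_since_last_visit)
    jitter = random.uniform(0.5, 1.5)
    return timedelta(minutes=base_minutes * jitter)

def notification_text(interaction_count: int) -> str:
    # Vague social cues create fear of missing out; they are harder
    # to dismiss than concrete, checkable information.
    return f"{interaction_count} people interacted with content you follow"

if __name__ == "__main__":
    delay = next_notification_delay(hours_since_last_visit=8.0)
    print(datetime.now() + delay, "->", notification_text(3))
```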

WHAT IS AN EFFECTIVE PREVENTIVE RESPONSE?

The immersiveness of extremist social media communication poses a major challenge for preventive measures, particularly because of the many different levels that must be considered. In the following, prevention measures are specified at a) the content level, b) the platform level and c) the media level. To define the measures clearly from the users’ point of view, we use the three prevention levels of awareness, reflection and empowerment (Schmitt, Ernst, et al., 2020). Awareness includes a general awareness (e.g. of one’s own usage behaviour, extremist narratives, platform functioning), while reflection refers to critical reflection on content and the functioning of the media. Empowerment, finally, means the ability of recipients to position themselves with regard to certain media content or functions and to be capable of acting.

At the content level (a), awareness must be raised about extremist content and its specific digital forms of communication and representation on social media. This is about creating awareness of how right-wing extremist creators communicate in order to gain the attention of social media users, generate interest in their narratives and ultimately motivate people to act in accordance with the far-right world view (Schmitt, Ernst, et al., 2020). Furthermore, users should be enabled to take a critical stance towards extremist content so they can position themselves accordingly in social discourse. This positioning does not only have to take place in the context of political discussions – reporting content to the platform itself or to one of the common reporting platforms (e.g. Jugendschutz.net) also expresses a stance. In addition to a strong ability to view media critically, historical, intercultural and political knowledge is particularly important here. Tolerance of ambiguity, which enables people to tolerate complex and possibly contradictory information, is also important.

Platform-related prevention measures (b) should enable users to recognise, understand and reflect on algorithmic logic (e.g. what am I being shown and why?) and raise awareness of the mechanisms of the platform design (e.g. how is content being shown to me? How does that affect my actions?) (Di Geronimo et al., 2020; Silva et al., 2022; Taylor & Brisini, 2023). For the school context, there is, for example, the second learning package in the CONTRA lesson series (Ernst et al., 2020), which was explicitly designed to foster media criticism skills with regard to algorithms.

On the platform side, it is important to create transparency around algorithms. Disclosure of the platform design, such as clarifying the functional logic of the recommendation algorithm, can help people identify dark patterns – and thus also understand the potential spread dynamics of extremist messages and counteract these (Rau et al., 2022). In a social media context, dark patterns have been relatively poorly researched to date (Mildner et al., 2023). In this field, information from research activities is provided, for example, by the Dark Pattern Detection Project, which also enables users to report dark patterns. 

Political regulatory measures, such as those in the Network Enforcement Act (NetzDG) and the Digital Services Act (DSA), can also help as regards the disclosure of platform design and require transparency reports to be provided, for example. With a view to potential new virtual media environments – such as the metaverse – it is essential to enforce transparency rules and consider new forms of design. In what circumstances does the virtual interaction take place? How are the digital activity spaces designed? What criteria are used for the visual representation of the actors, e.g. avatars, and how inclusive is their design? 

On the media level (c), it is crucial to raise users’ awareness of their own media behaviour and encourage them to reflect on it. Among other things, this involves questions about the time spent using certain applications (e.g. how long have I spent on TikTok today?) and the type of use[1] (e.g. what content and mechanisms are drawing me in?). Help could be provided by apps that set time limits, warn about the usage times of other apps or block them (e.g. StayFree or Forest). The digital wellbeing function, which can measure the usage time of individual apps, is already built into smartphones from various manufacturers.
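
The time-limit logic behind such tools can be sketched in a few lines. The app names above are real, but this implementation – including the limit values and function names – is purely illustrative:

```python
import time
from collections import defaultdict

DAILY_LIMITS_MIN = {"tiktok": 45, "instagram": 30}  # assumed self-set limits
usage_min: dict[str, float] = defaultdict(float)    # minutes used today

def record_session(app: str, started: float, ended: float) -> None:
    """Accumulate the duration of one usage session in minutes."""
    usage_min[app] += (ended - started) / 60.0

def check_limit(app: str) -> str | None:
    """Return a warning once the daily limit for an app is reached."""
    limit = DAILY_LIMITS_MIN.get(app)
    if limit is not None and usage_min[app] >= limit:
        return f"You have spent {usage_min[app]:.0f} min on {app} today."
    return None

start = time.time()
record_session("tiktok", start, start + 50 * 60)  # a 50-minute session
print(check_limit("tiktok"))  # -> warning, the 45-minute limit is exceeded
```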

SUMMARY AND OUTLOOK

The statements above illustrate the levels on which the social media communication of far-right creators can have an immersive effect from a social and psychological perspective. This occurs both through the communication styles they choose and through the platforms themselves. Combined, these create a parallel world – a metaverse of sorts – that attracts users in a variety of ways and can therefore take effect.

From a research perspective, a number of unanswered questions arise. For example, TikTok, as an extremely popular platform, should be examined more thoroughly with regard to its immersive potential. The new possibilities offered by AI-based image generation tools also raise questions about the effect and immersiveness of synthetic imagery produced by extremist communicators. The topic of gaming and right-wing extremism has also gained a lot more attention in recent years (e.g. Amadeu Antonio Stiftung, 2022; Schlegel, no date, 2021). Becoming absorbed in virtual worlds is a key motivation for gamers (for an overview see, for example, Cairns et al., 2014). Currently, there are few reliable findings regarding exactly how far-right actors use gaming for their own purposes, what role avatars, 3D worlds and VR play here and what impact this can have in the context of radicalisation processes. However, some initial findings are available on how interactive games and VR can be used for prevention purposes (e.g. Bachen et al., 2015; Dishon & Kafai, 2022). Through a fun and immersive approach, more abstract topics like politics and democracy – including their controversial aspects – can be explored in an engaging and cooperative way. This makes it possible for people to take different points of view and learn and practise interactions that are also relevant in everyday life. Games can also promote knowledge, empathy and critical thinking. This potential should be explored more fully by researchers and those working in prevention. Although we have outlined a selection of considerations for the prevention of immersive extremist communication, those in the field of prevention are asked to address the various facets of immersiveness in greater detail and possibly use comparable mechanisms for their own goals, taking into account Germany’s prohibition on overwhelming students (Überwältigungsverbot) (bpb, 2011; for criticism of the Beutelsbach Consensus see also Widmaier & Zorn, 2016).

REFERENCES

Amadeu Antonio Stiftung. (2022). Unverpixelter Hass—Gaming zwischen Massenphänomen und rechtsextremen Radikalisierungsraum? https://www.amadeu-antonio-stiftung.de/neue-handreichung-unverpixelter-hass-gaming-zwischen-massenphaenomen-und-rechtsextremen-radikalisierungsraum-81173/

Ayyadi, K. (2021, August 25). Rechte Influencerinnen: Rechtsextreme Inhalte schön verpackt. Belltower.News. https://www.belltower.news/rechte-influencerinnen-rechtsextreme-inhalte-schoen-verpackt-120301/

Bachen, C. M., Hernández-Ramos, P. F., Raphael, C., & Waldron, A. (2015). Civic Play and Civic Gaps: Can Life Simulation Games Advance Educational Equity? Journal of Information Technology & Politics, 12(4), 378–395. https://doi.org/10.1080/19331681.2015.1101038

Bongard-Blanchy, K., Rossi, A., Rivas, S., Doublet, S., Koenig, V., & Lenzini, G. (2021). ”I am Definitely Manipulated, Even When I am Aware of it. It’s Ridiculous!”—Dark Patterns from the End-User Perspective. Proceedings of the 2021 ACM Designing Interactive Systems Conference, 763–776. https://doi.org/10.1145/3461778.3462086

bpb. (2011, April 7). Beutelsbacher Konsens. bpb.de. https://www.bpb.de/die-bpb/ueber-uns/auftrag/51310/beutelsbacher-konsens/

Braddock, K. (2020). Narrative Persuasion and Violent Extremism: Foundations and Implications. In J. B. Schmitt, J. Ernst, D. Rieger, & H.-J. Roth (Eds.), Propaganda und Prävention: Forschungsergebnisse, didaktische Ansätze, interdisziplinäre Perspektiven zur pädagogischen Arbeit zu extremistischer Internetpropaganda (pp. 527–538). Springer Fachmedien. https://doi.org/10.1007/978-3-658-28538-8_28

Braddock, K., & Dillard, J. P. (2016). Meta-analytic evidence for the persuasive effect of narratives on beliefs, attitudes, intentions, and behaviors. Communication Monographs, 83(4), 446–467. https://doi.org/10.1080/03637751.2015.1128555

Brown, E., & Cairns, P. (2004). A grounded investigation of game immersion. CHI ’04 Extended Abstracts on Human Factors in Computing Systems, 1297–1300. https://doi.org/10.1145/985921.986048

Brown, M. A., Bisbee, J., Lai, A., Bonneau, R., Nagler, J., & Tucker, J. A. (2022). Echo Chambers, Rabbit Holes, and Algorithmic Bias: How YouTube Recommends Content to Real Users (SSRN Scholarly Paper 4114905). https://doi.org/10.2139/ssrn.4114905

Cairns, P., Cox, A., & Nordin, A. I. (2014). Immersion in Digital Games: Review of Gaming Experience Research. In Handbook of Digital Games (pp. 337–361). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781118796443.ch12

Chadwick, A., & Stanyer, J. (2022). Deception as a Bridging Concept in the Study of Disinformation, Misinformation, and Misperceptions: Toward a Holistic Framework. Communication Theory, 32(1), 1–24. https://doi.org/10.1093/ct/qtab019

Cotter, K. (2019). Playing the visibility game: How digital influencers and algorithms negotiate influence on Instagram. New Media & Society, 21(4), 895–913. https://doi.org/10.1177/1461444818815684

Curran, N. (2018). Factors of Immersion. In The Wiley Handbook of Human Computer Interaction (pp. 239–254). John Wiley & Sons, Ltd. https://doi.org/10.1002/9781118976005.ch13

de Bérail, P., Guillon, M., & Bungener, C. (2019). The relations between YouTube addiction, social anxiety and parasocial relationships with YouTubers: A moderated-mediation model based on a cognitive-behavioral framework. Computers in Human Behavior, 99, 190–204. https://doi.org/10.1016/j.chb.2019.05.007

Dede, C. (2009). Immersive Interfaces for Engagement and Learning. Science, 323(5910), 66–69. https://doi.org/10.1126/science.1167311

Di Geronimo, L., Braz, L., Fregnan, E., Palomba, F., & Bacchelli, A. (2020). UI Dark Patterns and Where to Find Them: A Study on Mobile Applications and User Perception. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, 1–14. https://doi.org/10.1145/3313831.3376600

Dishon, G., & Kafai, Y. B. (2022). Connected civic gaming: Rethinking the role of video games in civic education. Interactive Learning Environments, 30(6), 999–1010. https://doi.org/10.1080/10494820.2019.1704791

Echtermann, A., Steinberg, A., Diaz, C., Kommerell, C., & Eckert, T. (2020). Kein Filter für Rechts. correctiv.org. https://correctiv.org/top-stories/2020/10/06/kein-filter-fuer-rechts-instagram-rechtsextremismus-frauen-der-rechten-szene/?lang=de

Ernst, J., Schmitt, J. B., Rieger, D., & Roth, H.-J. (2020). #weARE – Drei Lernarrangements zur Förderung von Medienkritikfähigkeit im Umgang mit Online-Propaganda in der Schule. In J. B. Schmitt, J. Ernst, D. Rieger, & H.-J. Roth (Eds.), Propaganda und Prävention: Forschungsergebnisse, didaktische Ansätze, interdisziplinäre Perspektiven zur pädagogischen Arbeit zu extremistischer Internetpropaganda (pp. 361–393). Springer Fachmedien. https://doi.org/10.1007/978-3-658-28538-8_17

Eyal, K., & Dailey, R. M. (2012). Examining Relational Maintenance in Parasocial Relationships. Mass Communication and Society, 15(5), 758–781. https://doi.org/10.1080/15205436.2011.616276

Farivar, S., & Wang, F. (2022). Effective influencer marketing: A social identity perspective. Journal of Retailing and Consumer Services, 67, 103026. https://doi.org/10.1016/j.jretconser.2022.103026

Feng, Y., Chen, H., & Kong, Q. (2021). An expert with whom I can identify: The role of narratives in influencer marketing. International Journal of Advertising, 40(7), 972–993. https://doi.org/10.1080/02650487.2020.1824751

Fielitz, M., & Marcks, H. (2020). Digitaler Faschismus: Die sozialen Medien als Motor des Rechtsextremismus. Dudenverlag.

Frischlich, L. (2021). #Dark inspiration: Eudaimonic entertainment in extremist Instagram posts. New Media & Society, 23(3), 554–577. https://doi.org/10.1177/1461444819899625

Frischlich, L., Hahn, L., & Rieger, D. (2021). The Promises and Pitfalls of Inspirational Media: What do We Know, and Where do We Go from Here? Media and Communication, 9(2), 162–166. https://doi.org/10.17645/mac.v9i2.4271

Grandinetti, J., & Bruinsma, J. (2023). The Affective Algorithms of Conspiracy TikTok. Journal of Broadcasting & Electronic Media, 67(3), 274–293. https://doi.org/10.1080/08838151.2022.2140806

Gray, C. M., Sanchez Chamorro, L., Obi, I., & Duane, J.-N. (2023). Mapping the Landscape of Dark Patterns Scholarship: A Systematic Literature Review. Companion Publication of the 2023 ACM Designing Interactive Systems Conference, 188–193. https://doi.org/10.1145/3563703.3596635

Harff, D., Bollen, C., & Schmuck, D. (2022). Responses to Social Media Influencers’ Misinformation about COVID-19: A Pre-Registered Multiple-Exposure Experiment. Media Psychology, 25(6), 831–850. https://doi.org/10.1080/15213269.2022.2080711

Haywood, N., & Cairns, P. (2006). Engagement with an Interactive Museum Exhibit. In T. McEwan, J. Gulliksen, & D. Benyon (Eds.), People and Computers XIX — The Bigger Picture (pp. 113–129). Springer. https://doi.org/10.1007/1-84628-249-7_8

De Houwer, J., & Hermans, D. (1994). Differences in the affective processing of words and pictures. Cognition and Emotion, 8(1), 1–20. https://doi.org/10.1080/02699939408408925

Hu, L., Min, Q., Han, S., & Liu, Z. (2020). Understanding followers’ stickiness to digital influencers: The effect of psychological responses. International Journal of Information Management, 54, 102169. https://doi.org/10.1016/j.ijinfomgt.2020.102169

Huszár, F., Ktena, S. I., O’Brien, C., Belli, L., Schlaikjer, A., & Hardt, M. (2022). Algorithmic amplification of politics on Twitter. Proceedings of the National Academy of Sciences, 119(1), e2025334119. https://doi.org/10.1073/pnas.2025334119

Igartua, J.-J., & Cachón-Ramón, D. (2023). Personal narratives to improve attitudes towards stigmatized immigrants: A parallel-serial mediation model. Group Processes & Intergroup Relations, 26(1), 96–119. https://doi.org/10.1177/13684302211052511

Jun, Y., Meng, R., & Johar, G. V. (2017). Perceived social presence reduces fact-checking. Proceedings of the National Academy of Sciences, 114(23), 5976–5981. https://doi.org/10.1073/pnas.1700175114

Jung, N., & Im, S. (2021). The mechanism of social media marketing: Influencer characteristics, consumer empathy, immersion, and sponsorship disclosure. International Journal of Advertising, 40(8), 1265–1293. https://doi.org/10.1080/02650487.2021.1991107

Kero, S. (in press). Jung, weiblich und extrem rechts. Die narrative Kommunikation weiblicher Akteurinnen auf der Plattform Instagram. easy_social sciences.

Kim, J., Kim, J., & Yang, H. (2019). Loneliness and the use of social media to follow celebrities: A moderating role of social presence. The Social Science Journal, 56(1), 21–29. https://doi.org/10.1016/j.soscij.2018.12.007

Kim, J., & Song, H. (2016). Celebrity’s self-disclosure on Twitter and parasocial relationships: A mediating role of social presence. Computers in Human Behavior, 62, 570–577. https://doi.org/10.1016/j.chb.2016.03.083

Ledwich, M., Zaitsev, A., & Laukemper, A. (2022). Radical bubbles on YouTube? Revisiting algorithmic extremism with personalised recommendations. First Monday. https://doi.org/10.5210/fm.v27i12.12552

Leite, F. P., Pontes, N., & Baptista, P. de P. (2022). Oops, I’ve overshared! When social media influencers’ self-disclosure damage perceptions of source credibility. Computers in Human Behavior, 133, 107274. https://doi.org/10.1016/j.chb.2022.107274

Lillie, H. M., Jensen, J. D., Pokharel, M., & Upshaw, S. J. (2021). Death Narratives, Negative Emotion, and Counterarguing: Testing Fear, Anger, and Sadness as Mechanisms of Effect. Journal of Health Communication, 26(8), 586–595. https://doi.org/10.1080/10810730.2021.1981495

Luo, M., Hancock, J. T., & Markowitz, D. M. (2022). Credibility Perceptions and Detection Accuracy of Fake News Headlines on Social Media: Effects of Truth-Bias and Endorsement Cues. Communication Research, 49(2), 171–195. https://doi.org/10.1177/0093650220921321

Maschewski, F., & Nosthoff, A.-V. (2019). Netzwerkaffekte. Über Facebook als kybernetische Regierungsmaschine und das Verschwinden des Subjekts. In R. Mühlhoff, A. Breljak, & J. Slaby (Eds.), Affekt Macht Netz: Auf dem Weg zu einer Sozialtheorie der Digitalen Gesellschaft (1st ed., Vol. 22, pp. 55–80). transcript Verlag. https://doi.org/10.14361/9783839444399

Masuda, H., Han, S. H., & Lee, J. (2022). Impacts of influencer attributes on purchase intentions in social media influencer marketing: Mediating roles of characterizations. Technological Forecasting and Social Change, 174, 121246. https://doi.org/10.1016/j.techfore.2021.121246

Mildner, T., Savino, G.-L., Doyle, P. R., Cowan, B. R., & Malaka, R. (2023). About Engaging and Governing Strategies: A Thematic Analysis of Dark Patterns in Social Networking Services. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–15. https://doi.org/10.1145/3544548.3580695

Morris, L. (2021, October 27). In Poland’s politics, a ‘social civil war’ brewed as Facebook rewarded online anger. The Washington Post. https://www.washingtonpost.com/world/2021/10/27/poland-facebook-algorithm/

Moyer-Gusé, E. (2008). Toward a Theory of Entertainment Persuasion: Explaining the Persuasive Effects of Entertainment-Education Messages. Communication Theory, 18(3), 407–425. https://doi.org/10.1111/j.1468-2885.2008.00328.x

mpfs. (2022). JIM-Studie 2022. Jugend, Information, Medien. Basisuntersuchung zum Medienumgang 12- bis 19-Jähriger. https://www.mpfs.de/fileadmin/files/Studien/JIM/2022/JIM_2022_Web_final.pdf

Mühlhoff, R., & Schütz, T. (2019). Die Macht der Immersion. Eine affekttheoretische Perspektive. https://doi.org/10.25969/MEDIAREP/12593

Müller, P. (2022). Extrem rechte Influencer*innen auf Telegram: Normalisierungsstrategien in der Corona-Pandemie. ZRex – Zeitschrift für Rechtsextremismusforschung, 2(1), Art. 1. https://www.budrich-journals.de/index.php/zrex/article/view/39600

Munger, K., & Phillips, J. (2022). Right-Wing YouTube: A Supply and Demand Perspective. The International Journal of Press/Politics, 27(1), 186–219. https://doi.org/10.1177/1940161220964767

Murray, J. H. (1998). Hamlet on the Holodeck: The Future of Narrative in Cyberspace. The MIT Press.

Nilsson, N. Chr., Nordahl, R., & Serafin, S. (2016). Immersion revisited: A review of existing definitions of immersion and their relation to different theories of presence. Human Technology, 12(2), 108–134.

Osei-Frimpong, K., & McLean, G. (2018). Examining online social brand engagement: A social presence theory perspective. Technological Forecasting and Social Change, 128, 10–21. https://doi.org/10.1016/j.techfore.2017.10.010

Peters, I., & Puschmann, C. (2017). Informationsverbreitung in sozialen Medien. In J.-H. Schmidt & M. Taddicken (Eds.), Handbuch soziale Medien (pp. 211–232). Springer VS.

pre:bunk. (2023). Rechtsextremismus und TikTok, Teil 1: Von Hatefluencer*innen über AfD und bis rechter Terror. Belltower.News. https://www.belltower.news/rechtsextremismus-und-tiktok-teil-1-147835/

Qin, Y., Omar, B., & Musetti, A. (2022). The addiction behavior of short-form video app TikTok: The information quality and system quality perspective. Frontiers in Psychology, 13. https://www.frontiersin.org/articles/10.3389/fpsyg.2022.932805

Rau, J., Kero, S., Hofmann, V., Dinar, C., & Heldt, A. P. (2022). Rechtsextreme Online-Kommunikation in Krisenzeiten: Herausforderungen und Interventionsmöglichkeiten aus Sicht der Rechtsextremismus- und Platform-Governance-Forschung. Arbeitspapiere des Hans-Bredow-Instituts. https://doi.org/10.21241/SSOAR.78072

Rösch, V. (2023). Heimatromantik und rechter Lifestyle. Die rechte Influencerin zwischen Self-Branding und ideologischem Traditionalismus. GENDER – Zeitschrift für Geschlecht, Kultur und Gesellschaft, 15(2), Art. 2. https://www.budrich-journals.de/index.php/gender/article/view/41966

Schlegel, L. (n.d.). Super Mario Brothers Extreme. Gaming und Rechtsextremismus. Retrieved 9 September 2023, from https://gaming-rechtsextremismus.de/themen/super-mario-brothers-extreme/

Schlegel, L. (2021). The Role of Gamification in Radicalization Processes. modus | zad. https://modus-zad.de/publikation/report/working-paper-1-2021-the-role-of-gamification-in-radicalization-processes/

Schmitt, J. B., Ernst, J., Rieger, D., & Roth, H.-J. (2020). Die Förderung von Medienkritikfähigkeit zur Prävention der Wirkung extremistischer Online-Propaganda. In J. B. Schmitt, J. Ernst, D. Rieger, & H.-J. Roth (Eds.), Propaganda und Prävention: Forschungsergebnisse, didaktische Ansätze, interdisziplinäre Perspektiven zur pädagogischen Arbeit zu extremistischer Internetpropaganda (pp. 29–44). Springer Fachmedien. https://doi.org/10.1007/978-3-658-28538-8_2

Schmitt, J. B., Harles, D., & Rieger, D. (2020). Themen, Motive und Mainstreaming in rechtsextremen Online-Memes. Medien & Kommunikationswissenschaft, 68(1–2), 73–93. https://doi.org/10.5771/1615-634X-2020-1-2-73

Schmitt, J. B., Rieger, D., Rutkowski, O., & Ernst, J. (2018). Counter-messages as Prevention or Promotion of Extremism?! The Potential Role of YouTube Recommendation Algorithms. Journal of Communication, 68(4), 780–808. https://doi.org/10.1093/joc/jqy029

Schneider, J., Schmitt, J. B., Ernst, J., & Rieger, D. (2019). Verschwörungstheorien und Kriminalitätsfurcht in rechtsextremen und islamistischen YouTube-Videos. Praxis der Rechtspsychologie, 1, Art. 1.

Schouten, A. P., Janssen, L., & Verspaget, M. (2020). Celebrity vs. Influencer endorsements in advertising: The role of identification, credibility, and Product-Endorser fit. International Journal of Advertising, 39(2), 258–281. https://doi.org/10.1080/02650487.2019.1634898

Schwarz, K. (2020). Hasskrieger: Der neue globale Rechtsextremismus. Herder.

Silva, D. E., Chen, C., & Zhu, Y. (2022). Facets of algorithmic literacy: Information, experience, and individual factors predict attitudes toward algorithmic systems. New Media & Society, 14614448221098042. https://doi.org/10.1177/14614448221098042

Slater, M. D., & Rouner, D. (2002). Entertainment—Education and Elaboration Likelihood: Understanding the Processing of Narrative Persuasion. Communication Theory, 12(2), 173–191. https://doi.org/10.1111/j.1468-2885.2002.tb00265.x

Strick, S. (2021). Rechte Gefühle: Affekte und Strategien des digitalen Faschismus. Transcript.

Su, C., Zhou, H., Gong, L., Teng, B., Geng, F., & Hu, Y. (2021). Viewing personalized video clips recommended by TikTok activates default mode network and ventral tegmental area. NeuroImage, 237, 118136. https://doi.org/10.1016/j.neuroimage.2021.118136

Su, C., Zhou, H., Wang, C., Geng, F., & Hu, Y. (2021). Individualized video recommendation modulates functional connectivity between large scale networks. Human Brain Mapping, 42(16), 5288–5299. https://doi.org/10.1002/hbm.25616

Taylor, S. H., & Brisini, K. S. C. (2023). Parenting the TikTok Algorithm: An Algorithm Awareness as Process Approach to Online Risks and Opportunities for Teens on TikTok (SSRN Scholarly Paper 4446575). https://doi.org/10.2139/ssrn.4446575

Tritt, S. M., Peterson, J. B., Page-Gould, E., & Inzlicht, M. (2016). Ideological reactivity: Political conservatism and brain responsivity to emotional and neutral stimuli. Emotion, 16(8), 1172–1185. https://doi.org/10.1037/emo0000150

Weimann, G., & Masri, N. (2021). TikTok’s Spiral of Antisemitism. Journalism and Media, 2(4), Art. 4. https://doi.org/10.3390/journalmedia2040041

Whittaker, J., Looney, S., Reed, A., & Votta, F. (2021). Recommender systems and the amplification of extremist content. Internet Policy Review, 10(2). https://doi.org/10.14763/2021.2.1565

Widmaier, B., & Zorn, P. (Eds.). (2016). Brauchen wir den Beutelsbacher Konsens? Eine Debatte der politischen Bildung. Bundeszentrale für politische Bildung. https://www.lpb-mv.de/nc/publikationen/detail/brauchen-wir-den-beutelsbacher-konsens-eine-debatte-der-politischen-bildung/

Wunderlich, L. (2023). Parasoziale Meinungsführer? Eine qualitative Untersuchung zur Rolle von Social Media Influencer*innen im Informationsverhalten und in Meinungsbildungsprozessen junger Menschen. Medien & Kommunikationswissenschaft, 71(1–2), 37–60. https://doi.org/10.5771/1615-634X-2023-1-2-37

Yang, Y., Xiu, L., Chen, X., & Yu, G. (2023). Do emotions conquer facts? A CCME model for the impact of emotional information on implicit attitudes in the post-truth era. Humanities and Social Sciences Communications, 10(1), Art. 1. https://doi.org/10.1057/s41599-023-01861-1

Yesilada, M., & Lewandowsky, S. (2022). Systematic review: YouTube recommendations and problematic content. Internet Policy Review, 11(1), 1652. https://doi.org/10.14763/2022.1.1652

Zimmermann, D., Noll, C., Gräßer, L., Hugger, K.-U., Braun, L. M., Nowak, T., & Kaspar, K. (2022). Influencers on YouTube: A quantitative study on young people’s use and perception of videos about political and societal topics. Current Psychology, 41(10), 6808–6824. https://doi.org/10.1007/s12144-020-01164-7


[1] Tools such as Dataskop can make one’s own TikTok use visible.

Regulatory approaches to immersive worlds: an introduction to metaverse regulation

Matthias C. Kettemann
Department of Legal Theory and Future of Law, University of Innsbruck
Leibniz-Institute for Media Research | Hans-Bredow-Institut, Hamburg
Humboldt Institute for Internet and Society, Berlin

Martin Müller
Department of Legal Theory and Future of Law, University of Innsbruck




Caroline Böck
Department of Legal Theory and Future of Law, University of Innsbruck 




1 INTRODUCTION

Metaverses[1] are phenomenologically diverse, technically complex, offer huge economic potential, challenge traditional concepts of democratic codetermination and still largely lack legal foundations. “In the metaverse, you’ll be able to do almost anything you can imagine – get together with friends and family, work, learn, play, shop, create – as well as completely new experiences that don’t really fit how we think […] today,” said Meta’s Mark Zuckerberg, outlining his vision. Where people shop, learn, play, make statements, enter into contracts, that is where standards are relevant. Where we are active, where we express opinions, we come into contact and conflict with others. Standards in the metaverse – like standards in general – solve distribution problems, coordination problems, cooperation problems; they have a shaping, pacifying and balancing function. But who makes the rules for governing the metaverse, and for governing in the metaverses?

These fundamental questions are not easy to answer from a democratic theory perspective. But we can learn from history. In many respects, the metaverse is now where digital platforms were at the turn of the millennium: less regulated, offering huge potential. As these platforms have emerged, the communicative infrastructures of democratic public spheres have faced significant changes. With the metaverse, the challenges are accelerating. As regards platforms, institutional solutions for democratic reconnection have come to the forefront; they have even made it into the current German government’s coalition agreement.[2] This process has also established itself in the scientific field as the “constitutionalisation” of social media (Celeste/Heldt/Keller 2022; Celeste 2022; De Gregorio 2022). These steps have yet to be taken for the metaverse. An examination of some legal aspects of the metaverse makes a contribution to this. 

The key challenges of regulating the metaverse as a virtual, immersive and interactive space concept created by merging the physical and digital worlds are therefore found in the areas of data protection and privacy, security, content governance, interoperability, openness and democratic participation. This article focuses on some of these.

Following the introduction (1), two chapters examine the regulation of the metaverse (2) and selected legal questions relating to the application of rules in the metaverse (3). Future developments are then considered (4).[3]

2 REGULATING THE METAVERSE

2.1 The basics

So far, neither national nor European Union legislators have developed a well thought out regulatory approach for the emerging metaverses[4]; the stage of normative visions has currently been reached (European Commission 2023).[5] Nevertheless, individual features of the metaverse, its providers and the actants in it (the avatars) are governed by various regulations in the normative multilevel system between private rules, national law and EU law. The technical entry points to the metaverses are also regulated: VR glasses and similar connection devices are subject to existing product safety regulations.

The metaverse communication space is also bound by standards. These include standards under private law (what the platform allows), national law (what the state allows) and, increasingly, European law, due to the growing number of regulations on digital services, markets, data and algorithms.

2.2 European regulatory approaches

The Digital Services Act (DSA)[6] is an EU Regulation that came into force in November 2022. The DSA and the Digital Markets Act (DMA), which was negotiated at the same time, aim to create a safer digital space in which the fundamental rights of all users of digital services are protected and to establish a level playing field to foster innovation, growth, and competitiveness in the European Single Market.

As stated in Article 2, point (1), of the DSA, the Regulation applies to intermediary services offered to recipients in the European Union, irrespective of whether the providers of the services have their place of establishment in the EU. The scope of the DMA is similarly regulated in Article 1, point (2), of the DMA. If offered to users in the EU, metaverses therefore fall within the scope of the DSA. Of the three intermediary services listed in the DSA, metaverses are regarded as hosting services: to present the virtual world and interaction with it, information provided by users must be stored on their behalf by operators, meaning that the requirements of hosting services in Article 3 (g) (iii) of the DSA are met. Fully decentralised metaverses are a special case. Here, there is not one hosting service operating the metaverse – there are multiple operators. 

Firstly, the operators of metaverses must comply with the regulations applicable to all intermediary services in Articles 11–15 of the DSA. Compared with the Electronic Commerce Directive, these introduce further obligations. For example, points of contact for authorities, the Commission and users must be designated (Articles 11 and 12 of the DSA); this obligation should be met through an appropriate form of the legal notice as laid down in Section 5 of the Telemedia Act. The terms and conditions of all intermediary services must meet certain requirements (Article 14 of the DSA) and, in particular, ensure legally secure moderation practices that can be challenged.

In addition to the minimum requirements for the content of terms and conditions as described in Article 14, point (1), of the DSA, Article 14, point (4), stipulates that the interests of users are to be taken into account when content is moderated and when complaints are handled by platforms. The fundamental rights of users are explicitly mentioned, such as the right to freedom of expression. Contrary to previous decisions by the Federal Court of Justice, it is therefore a matter of a direct horizontal commitment to fundamental rights by platforms, irrespective of their size (Quintais/Appelman/Fahy 2022).[7] Metaverse operators must therefore clearly state when and why they moderate and what legal remedies exist. Article 15 on transparency obligations is also relevant with regard to content moderation.

The Digital Markets Act (DMA)[8] attempts to restrict the economic power of the big tech platforms on digital markets. For the DMA to apply, metaverses would have to be designated as core platform services; this has not yet happened, irrespective of whether the variable conditions of designation – a significant impact on the internal market and an entrenched and durable position – might be met in the future.

The Data Governance Act (DGA)[9] is the first legislative act at Union level to address data sharing. While the GDPR deals with the protection of personal data, the DGA initially aims to regulate the commercial use of data in general, i.e., of personal and non-personal data, and thus represents a reorientation of Union policy (Metzger/Schweitzer 2023).

The European Commission’s proposal for a data act[10] is the core component of the data strategy. The aim is to increase the amount of publicly available data. Currently, devices in the Internet of Things (IoT) generate huge quantities of data which usually remain with the manufacturers and can only be accessed in exceptional cases. Developing more data altruism or data trust models would bring added value for metaverse operators, including small operators.

The European Commission’s proposal for an artificial intelligence act[11] is a risk-based regulation (Ebers et al. 2021, 589 (589); De Gregorio/Dunn, 473 (488 ff.)) in which uses of AI systems are divided into various risk categories, with stricter requirements where the risks to users’ fundamental rights are higher.

It also appears possible for interoperability regulations, which are currently primarily geared towards communication services, to ensure a certain standardisation of data formats for metaverses. The regulations on data interoperability currently apply only to data intermediation services (Article 26, points 3 and 4, in conjunction with Article 29 of the Data Act) and operators of data spaces as defined in Article 28 ff. of the Data Act. However, if these services have the success anticipated by the Commission (European Commission, no date), large parts of the digital economy, e.g. operators of metaverses, will use data intermediation services and data spaces in the near future and will therefore inevitably have to follow the standardisation rules. 

2.3 General terms and conditions

It is apparent that the existing regulations address the main features of the metaverse – namely hardware, software and content – only marginally, and that much depends on which metaverses prevail in the market and how they are specifically designed. The more significant rule-setters are instead the above-mentioned digital companies with their contract-based private regulations.

3 REGULATION IN METAVERSES

3.1 Communication space 

Private regulation on digital platforms such as social media today rests, first, on technical settings and, second, on the guidelines that the digital company itself has developed (Quintais/de Gregorio/Magalhães 2023). These rules are often called community guidelines. Community guidelines describe users’ relationships with each other as well as the relationship between the user and the platform.[12] These rules do indeed have an ordering effect, because they constitute private communication rules as a partial regime constitution. Systematically, however, they fall under civil law and take concrete effect through the private-law legal relationship (Quintais/de Gregorio/Magalhães 2023). The admissibility of such regulations can in turn be derived from the Basic Law, specifically from the fundamental rights of private autonomy, occupational freedom and freedom of ownership, since these principles enable private individuals to organise private structures and systems within legal parameters (Teubner 2012, 36 ff.; Mast/Kettemann/Schulz 2023, forthcoming).

Use of a platform is only possible after consenting to the terms of use, which include the community guidelines; a platform usage agreement is thus typically concluded.[13] The terms of use are regularly incorporated into the contract as standard business terms in accordance with Section 305 (1) sentence 1 of the German Civil Code (BGB).[14] This classification is unproblematic because the terms of use are provided unilaterally by the company operating the platform for a multitude of contracts, are pre-formulated and are fundamentally non-negotiable. The law on standard business terms therefore applies to the terms of use, and each individual clause must withstand review under that law in its own right.

In addition to the specific prohibited clauses in Sections 308 f. of the Civil Code, Section 307 (1) sentence 1 of the Civil Code, with its principle of good faith, provides an opening clause through which constitutional valuations enter civil law.[15] Case law has made use of this: by way of the doctrine of indirect third-party effect,[16] it has established a commitment to fundamental rights for operators of online platforms,[17] which must be taken into account when designing terms of use. According to this case law, fundamental rights do not bind platforms directly, because they are private companies without a state or state-like role.[18] A state-like role is only to be assumed if a private actor in fact “grows into a comparable obligation or guarantor role that traditionally belongs to the state”.[19] This can only be accepted in the field of communication – in which platforms, and in future sometimes also metaverses, (will) operate – if “private companies themselves take on the task of providing conditions for public communication and thus take on roles that – like ensuring post and telecommunication services – were formerly assigned to the state as a service for the public”.[20] Such communication provision is not (yet) happening, but it cannot be completely ruled out, since access to metaverses is much more intensely privately regulated.

 3.2 Avatars

In metaverses, human actors act via non-physical actants: avatars. From a civil law perspective, the question is how merging the real world with the world of the metaverse via an avatar affects attribution under civil law. The avatar is clearly the central point of contact for actions in the metaverse. Avatars represent the digital identity of a natural or legal person and are considered an extension of a legal entity (Kaulartz/Schmid/Müller-Eising 2022). The avatar will be the significant object of attribution in the metaverse, performing all actions in the metaverse for or on behalf of the legal entity (Rippert/Weimer 2007). People ordering items on the Internet also act in a technology-mediated way – from this viewpoint, an avatar is no different from an email, albeit one that can be dressed up. Avatars can make declarations of intent in the metaverse, e.g. to purchase concert tickets or other goods and services (Kaulartz/Schmid/Müller-Eising 2022). However, it is not the avatar that acts, but the person behind it.

Can avatars be liable to prosecution? No, but the people acting through them can be. A post or an email cannot commit an offence, so the focus must shift away from the actant – the avatar – to the person whenever legal responsibility is to be attributed. This, however, presupposes the applicability of national criminal law, such as the German Criminal Code (StGB). According to the territorial principle, sovereign powers of punishment are restricted to a state’s own territory (more on the territorial principle: Mills 2006; fundamentally: Schmalenbach/Bast 2017). When determining the location of criminal acts on the Internet, it is recognised that at least acts committed against or by a German national are subject to German criminal law (Schönke/Schröder/Eser/Weißer, no date). This principle can be transferred to the metaverse if the avatar can be associated with a specific real person (concurring: Kaulartz/Schmid/Müller-Eising 2022, 521 (529 f.)).

Conversely, if an avatar is the victim of an act – an insult, for example – does this always affect the person behind it? That depends: an avatar is a communication medium. I cannot insult a mobile phone, but in the case of an avatar a distinction must be made as to how strong the relationship between the avatar and the real person behind it is. If the relationship is very strong – for example, if an avatar depicts a person’s key characteristics so that the real person is identifiable – then an insult (to the person behind it) is likely to be assumed.

Some violations of rights, such as murder or conventional theft, cannot be committed against avatars. For other violations, different offences than in real life may apply: if an avatar is “kidnapped”, for example, someone can be held accountable on the grounds of computer-related offences (hacking).

May avatars be copied or distorted? That also depends. Avatars have no personality rights of their own but, insofar as a real person is recognisable, that person’s rights prevail. Intellectual property rights may also be relevant for avatars that do not resemble anyone: not everyone may create an avatar in the form of a Disney character, for example.

4 A METAVERSE FOR EVERYONE?

Despite substantial investment, metaverse technology is neither market-ready nor widely deployed. National and European legislators therefore still have the opportunity to introduce appropriate rules that proactively protect individual freedoms and reduce negative social consequences. While platform law (DSA and DMA) only emerged around 20 years after platforms gained importance at the beginning of the 21st century, smart metaverse regulation can ensure the effective protection of legal rights from the outset. The constitutionalisation process observed for platforms – internal juridification and external attribution of responsibility, e.g. through procedural obligations, review of terms and conditions, transparency requirements and risk-mitigation obligations – can be deployed in a targeted manner at a much earlier stage for the metaverse, before its use becomes widespread.

Centralised metaverses present risks for democratic values, for open discourse and for rational processes of self-determination. Metaverse operators can abuse their special position of influence over rules and moderation practices. The opening of the platforms and the growing criticism from the perspective of democratic theory clearly show the direction in which the regulatory wind is blowing (Hermann 2022).

With regard to the metaverse in particular, there are good arguments for developing innovative models for institutionally restricting the power of operators and for improving the legitimacy of the metaverse’s social arrangements. This should happen in close collaboration with all stakeholders involved in the digital transformation. As the German National Academy of Sciences has called for, “innovative participation ideas must be specifically promoted because established platform operators and service providers are likely to be closely attached to their current business and participation models, and this could hinder the support of democracy-friendly, commercially less usable formats from private commercial sources” (German National Academy of Sciences Leopoldina/acatech – National Academy of Science and Engineering/Union of the German Academies of Sciences and Humanities 2021, 56). Democracy in the metaverse could thus be supported by initiatives from below and from outside.

The EU has recognised this and has set up comprehensive, Union-wide citizens’ panels. In addition to guidance on good behaviour in the metaverse, these have produced further recommendations and eight basic principles intended to apply to the regulation of the metaverse and within the metaverse (Bürgerrat.de 2023). The protection of users has been highlighted as a particularly important basic principle. Such participatory processes – Meta itself has organised similar events on a global scale – are important preliminary stages in the development of legitimate rules for virtual worlds. Adjusting regulation would be effective against the backdrop of the Brussels effect: the EU’s rules have an impact on the international community and encourage other states to adopt them or to enact similar legislation of their own. The first preliminary studies by European institutions also point in this direction, including a document by the Committee on Culture and Education[21] and a longer study by the European Parliament Committee on Legal Affairs (2023).

An international legal framework governing access to and the development of metaverses would also be desirable, as the metaverse cannot be conceived of nationally but must be considered global, especially since it is designed for global use. Nor should global regulation exclude issues of international solidarity: access to the metaverse is currently a privilege. If the metaverse is to develop into a fair and open communication space in which rights are upheld, perspectives of access for all must also be developed.

BIBLIOGRAPHY

Bürgerrat.de (2023). Virtual worlds for everyone, 24 April 2023, https://www.buergerrat.de/en/news/virtual-worlds-for-everyone/

Celeste (2022). Digital Constitutionalism.

Celeste/Heldt/Keller (eds.) (2022). Constitutionalising Social Media.

Choi, Sam Jungyun et al. (2023). Regulating the Metaverse in Europe, https://www.globalpolicywatch.com/2023/04/regulating-the-metaverse-in-europe

De Gregorio (2022). Digital Constitutionalism in Europe.

De Gregorio/Dunn (2022). Common Market Law Review, 473 (488 ff.).

Ebers et al. (2021). Multidisciplinary Scientific Journal, 589.

European Commission (no date). European Data Strategy, https://commission.europa.eu/strategy-and-policy/priorities-2019-2024/europe-fit-digital-age/european-data-strategy_en (last accessed: 6 June 2023).

European Commission (2023). Virtual worlds (metaverses) – a vision for openness, safety and respect, https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/13757-Virtual-worlds-metaverses-a-vision-for-openness-safety-and-respect_en

European Parliament Committee on Legal Affairs (2023). Metaverse.

German National Academy of Sciences Leopoldina/acatech – National Academy of Science and Engineering/Union of the German Academies of Sciences and Humanities (2021). Digitalisierung und Demokratie, 56.

Hermann (2022). Demokratische Werte nach europäischem Verständnis im Metaverse.

Kettemann/Böck (2023, forthcoming). Regulierung des Metaverse. In Steege/Chibanguza (eds.), Metaverse.

Kettemann/Müller (2023, forthcoming). Plattformregulierung. In Steege/Chibanguza (eds.), Metaverse.

Mast/Kettemann/Schulz (2023, forthcoming). In Puppis/Mansell/van den Bulck (eds.), Handbook of Media and Communication Governance.

Metzger/Schweitzer (2023). Shaping Markets: A Critical Evaluation of the Draft Data Act. ZEuP, 01, 42.

Mills (2006). International and Comparative Law Quarterly 55, 1 (13).

Müller/Kettemann (2023, forthcoming). European approaches to the regulation of digital technologies. In Werthner et al. (eds.), Introduction to Digital Humanism.

Quintais/Appelman/Fahy (2022). Using Terms and Conditions to Apply Fundamental Rights to Content Moderation, https://ssrn.com/abstract=4286147

Quintais/De Gregorio/Magalhães (2023). How platforms govern users’ copyright-protected content: Exploring the power of private ordering and its implications. Computer Law & Security Review, 48, Article 105792.

Schmalenbach/Bast (2017). VVDStRL, 245 (248).


[1] The metaverse is understood in the sense of the EU’s (non-binding) definition as “an immersive and constant virtual 3D world where people interact through an avatar to enjoy entertainment, make purchases and carry out transactions with crypto-assets, or work without leaving their seat” (European Commission’s Analysis and Research Team, Metaverse – Virtual World, Real Challenges, 9 March 2022, p. 3.). Various metaverses exist.

[2] The German National Academy of Sciences Leopoldina also recommends the increased democratic reconnection of platforms. See Leopoldina, the Union of the German Academies of Sciences and Humanities, acatech – National Academy of Science and Engineering (2021): Digitalisierung und Demokratie, https://www.leopoldina.org/uploads/tx_leopublication/2021_Stellungnahme_Digitalisierung_und_Demokratie_web_01.pdf, p. 46.

[3] More on the role of law in the metaverse, with further evidence: Kettemann/Böck (2023, forthcoming) and Kettemann/Müller (2023, forthcoming). More on platform regulation can be found in Müller/Kettemann (2023, forthcoming). This article builds on these explanations and summarises them.

[4] However, consultations have already been carried out, see Choi et al. 2023.

[5] “The European Commission will develop a vision for emerging virtual worlds (e.g. metaverses), based on respect for digital rights and EU laws and values. The aim is open, interoperable and innovative virtual worlds that can be used safely and with confidence by the public and businesses.” (European Commission 2023).

[6] Regulation (EU) 2022/2065 of the European Parliament and of the Council of 19 October 2022 on a Single Market For Digital Services and amending Directive 2000/31/EC (Digital Services Act), OJ L 277, 1.

[7] Spindler, GRUR 2021, 545 (551); Mast, JZ 2023, 287 (289).

[8] Regulation (EU) 2022/1925 of the European Parliament and of the Council of 14 September 2022 on contestable and fair markets in the digital sector and amending Directives (EU) 2019/1937 and (EU) 2020/1828 (Digital Markets Act), OJ L 265, 1.

[9] Regulation (EU) 2022/868 of the European Parliament and of the Council of 30 May 2022 on European data governance and amending Regulation (EU) 2018/1724 (Data Governance Act), OJ L 152, 1.

[10] Proposal for a Regulation of the European Parliament and of the Council on harmonised rules on fair access to and use of data (Data Act), COM(2022) 68 final.

[11] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final.

[12] Compare Instagram’s Community Guidelines, available at: https://help.instagram.com/477434105621119/?helpref=hc_fnav; in more detail: Mast/Kettemann/Schulz (2023, forthcoming).

[13] Friehe NJW 2020, 1697 (1697); basis OLG Munich NJW 2018, 3115 (3116); concurring: BGH ZUM 2021, 953 (957 f.).

[14] Spindler, CR 2019, 238 (240); Friehe, NJW 2020, 1697 (1697); OLG Munich NJW 2018, 3115 (3116).

[15] Munich commentary Civil Code/Wurmnest, Civil Code Section 307 margin no. 57.

[16] BVerfGE 7, 198 (205 ff.).

[17] BVerfG NJW 2019, 1935 (1936) relating to digital platforms; also: BGH NJW 2012, 148 (150 f.); BGH NJW 2016, 2106 (2107 ff.) relating to liability of hosting services.

[18] BGH ZUM 2021, 953 (961).

[19] BGH ZUM 2021, 953 (960); BVerfG ZUM 2020, 58 (70) with further references.

[20] BGH ZUM 2021, 953 (960); BVerfG ZUM 2020, 58 (70 f.) with further references.

[21] Draft Opinion of the Committee on Culture and Education for the Committee on the Internal Market and Consumer Protection on virtual worlds – opportunities, risks and policy implications for the single market (2022/2198(INI)), 27 April 2023, https://www.europarl.europa.eu/doceo/document/CULT-PA-746918_EN.pdf; with amendments: Amendments 1–64, Draft opinion by Laurence Farreng (PE746.918v01-00), Virtual worlds – opportunities, risks and policy implications for the single market (2022/2198(INI)), 5 June 2023, https://www.europarl.europa.eu/doceo/document/CULT-AM-749262_EN.pdf.

Chances and limits of immersive environments (IU) for anti-discrimination and (historical-)political education

Dr. Deborah Schnabel is a psychologist and has been the director of the Anne Frank Educational Center in Frankfurt/Main since 2022.

The digitalization of the public sphere puts political educators under considerable pressure. With every new technology and app, a new forum emerges that can become a gateway for anti-democratic agitation, but that also offers the opportunity to bring democracy education to more people and to improve it. Racism, anti-Semitism and other threats to democracy are present in all areas of life, and they are quick to conquer the new digital worlds. With every new space, democracy is renegotiated, and political education is called upon to intervene in this negotiation.

Immersive environments (IU) offer the potential to reach more people, at a lower threshold and with a different degree of involvement. There is great interest across the entire field of political education in becoming effective in these new worlds. It may even be possible to preserve important testimonies and witnesses of history by recreating them in virtual realities. It is therefore primarily historical-political education, memorial sites and museums that are launching projects, despite the still insufficiently explored possibilities of immersive environments.
Memorials in particular feel the pressure to bring places of remembrance to life, especially in the field of Shoah education: the last eyewitnesses are leaving us, and first-hand accounts will soon no longer be possible. Many actors in political education see immersive environments (IU) as a way to reconstruct the authenticity of an encounter with a contemporary witness. There is, for example, the Anne Frank House in Amsterdam, whose digital VR exhibition allows a tour of the family’s hiding place. There is also the app “Inside Auschwitz”, developed by WDR, which combines the accounts of three eyewitnesses with a virtual visit to the memorial.[1] There is the project “Dimensions in Testimony” by the USC Shoah Foundation (USC SF), which lets eyewitnesses speak to us as AI-based holograms.[2] And there are the first projects in which students “travel back” in the metaverse to key moments in Black history.[3] What all actors in the field have in common is that they consider the new possibilities of IU very important, sometimes invest heavily in the field and have high expectations of its possibilities, but at the same time handle it rather restrictively. The VR Anne Frank House, for example, has no protagonists at all; instead, in contrast to the physical exhibition, it presents the hiding place furnished. Other applications only allow certain characters, who usually have a bystander role. Behind this lies the still unanswered question of how to reconcile the possibilities of the new technology with the principles of political education. Whether IU becomes suitable for widespread use in democracy education and history teaching will depend on the quality of the answers.

However it is approached, IU will come up against limits – technological ones (which I will address later), but often also limits of educational theory: Is “artificial authenticity” even possible? Does it make pedagogical sense? Or is an immediacy being fetishized here that cannot be reconstructed?

HOW REALISTIC CAN REMEMBERING BE?

The limits of immersion are often discussed for such applications – a debate that is older than IU. In memorial-site education, long before the Internet, there was a debate about how deep visitors’ experience should go; the main issue was whether such an experience should be accompanied or unaccompanied. In German-speaking contexts, the so-called Beutelsbach Consensus served as a guideline, which emphasizes the prohibition of overpowering, the requirement of controversy, and student orientation. It is difficult to apply Beutelsbach to IU, as such environments tend to overwhelm by their very game logic. In the USA, by contrast, the focus is often on “empathy education”, the development of compassion and understanding, often with methods that are quite “overwhelming”.[4] The problem of “artificial authenticity” – whether it leads to distancing or to empathy – is obvious: history can be distorted. A purely imagined image of the past emerges that is congruent neither with historical knowledge nor with the victims’ world of experience.

Apart from the pedagogical risks inherent in the technology, the controversy also concerns content. The biggest point of contention is the possibility for non-victims to play victims of racism or anti-Semitism in IU. What some view as an opportunity for empathy and a change of perspective, others see as insensitive: as cultural appropriation, a false equation of perpetrators and victims, a violation of the prohibition of overpowering and a potential trigger for (transgenerational) trauma. Complex compositions of experience are truncated and cannot be explored in depth, especially when a typical VR session usually does not exceed one hour. The socio-political effect on the dominant society must also not be underestimated: problematic memory politics and exoneration narratives can be reinforced if IU allows the descendants of perpetrators to feel even more strongly that they belong to the persecuted.

The balance of distance and empathy that political education has to maintain in IU can be understood as a variant of the “uncanny valley” effect: at a certain degree of realism, the simulation becomes strange, “creepy”. The example of the WDR app “Inside Auschwitz” illustrates how difficult this balancing act is. On the one hand, it deliberately tries to eliminate distance: “The users should consciously not take a distanced position but explore the memorial site as if they were visiting it on foot.”[5] On the other hand, the app’s producers warn against too much closeness and even recommend not using VR glasses in a school context, as the lack of distance can disturb students.[6]

It appears that applications are better accepted when they use alienation effects, creating a mixed space that is not real but not completely fictional either. The point is not identification but sympathy. This seems to apply to the medium as a whole: while VR/AR are still niche products, MMORPGs,[7] for example, function excellently as a mass medium. Here it often appears to be the very artificiality of the environment that makes confident engagement possible. There is never any doubt that one is not in reality. An avatar that does not resemble the user can enable greater identification than the clinical perfection of VR/AR, which often does not let users experience their own body at all: when looking down with VR goggles on, one sees not one’s body but the floor.

It seems almost impossible to consider every potential user and every conceivable concern in advance. Can we not operate as we do in everyday life – with limited knowledge, but with room to learn? Our daily behaviour is also based on limited knowledge, and we get quite far with it.

Educational research findings on the pedagogical impact of IU are still limited: mixed results were found for “Anne Frank House VR” compared to desktop learning – identification with those involved was stronger, but at the same time less content was absorbed.[8] Learning dynamics are complex and require more intensive research.

METAVERSE AND CO.

Certainly the best-known IU application is Mark Zuckerberg’s “Horizon Worlds”, widely known simply as “the Metaverse”[9] – known both for its ambition and for negative headlines. For our field, it matters less whether “the Metaverse”, i.e. Horizon Worlds, has been successful; what remains open is whether a similar platform for IU applications could succeed. Civic education must prepare for the possibility that such a platform becomes a mass medium. Comparisons can be drawn with the failed “Second Life” platform.

The same questions that apply to social media also apply to the metaverse: Who leads the discourse? Who dominates? Do discriminated people feel safe in the metaverse? The higher degree of reality of IU brings the classic problems of social media into sharper focus. There is still little research on whether IU has an even greater radicalization potential than social media already does; heuristically, it is probably safe to assume a similar threat level. Furthermore, adaptive and reactive content in IU will increasingly be produced by AIs, as human content development cannot keep up with the demand of these demanding environments – a view also taken in a contribution by the World Economic Forum, for example.[10] In this scenario, it is conceivable that AI language models could create personalized worlds that essentially affirm users’ attitudes, reinforce problematic dispositions, force ideological cocooning effects and drive users into radicalization tunnels. The already low inhibition threshold in the metaverse would do the rest.[11] These structural aspects of AI racism could have an exponential effect in immersive environments: racist echo chambers in virtual worlds would inevitably leak outward, shaping society – and in turn serving as a source for AI content generators.

Whether this scenario becomes likely also depends on the people who develop the metaverse. They would need to be sensitive to diversity and discrimination from recruitment through beta testing, in all areas of work. Experience shows that this cannot be achieved by decree, but only through the active participation of the people affected. The question of who uses the metaverse, and why, has not yet been answered. What inhibitions might keep certain parts of society from using the metaverse, and why? Do corporations like Meta actively counteract such tendencies? What is being done to provide representation and protection? How do Meta and co. prevent right-wing infiltration of the kind that can be observed on other, more niche digital platforms?

POLITICAL EDUCATION IN THE METAVERSE

Beyond these questions, a task arises for our field of action: how should civic education respond? In our opinion, the question should not be whether political education should now “invest in the metaverse” – such a generalist approach is doomed to fail. But experimentation is definitely warranted: the metaverse should be treated as software in which concrete, limited settings are tried out. The maxim should not be “We’re putting our whole memorial into the Metaverse”, but rather “This specific exhibition can (also) be visited in the Metaverse.”

The potential of such steps can already be observed today, for example in Minecraft. A library of Black literature, for instance, was recreated not only in Meta’s “Horizon Worlds” but also on a Minecraft server. One explanation for Minecraft’s success may lie in the “uncanny valley”: the game does not try to imitate reality. There is currently a project underway to transfer the Yad Vashem memorial to Minecraft,[12] as well as private projects that strive to reflect on the Holocaust on the platform.[13] This should not obscure the fact that Minecraft is also used by the opposite side: users regularly point to servers on which Nazi atrocities are glorified.

EXCURSUS: AI, DISCRIMINATION AND JUSTICE

The increasing importance of AI in a wide range of environments is platform-independent and requires its own analysis. The language used to discuss “artificial intelligence” promises a revolution in all areas of life, from the world of work and macroeconomics to literature and art. At the same time, a promise of justice is being made, based on procedural rationality and the supposed absolute neutrality of the algorithm: exclusions resulting from misanthropic attitudes could be avoided automatically, since the assessment is made on the basis of an alleged objectivity that humans cannot achieve.

In fact, the publicly known language models, first and foremost ChatGPT, do not appear suitable for reproducing discriminatory language; ChatGPT in particular stands out for its robust ethical framework and firmly opposes racism and discrimination. All attempts to use the software to produce racist texts are, in theory, supposed to fail from the outset (although this can be circumvented with appropriate prompts). Nor should it be forgotten that there are projects that specifically train language models to combat misanthropy, such as the international research project “Decoding Antisemitism”.[14]

At the same time, there are increasing reports that structural racism is being reinforced by intelligent image-recognition software. The algorithm bases its decisions on patterns it finds in society and evaluates non-white people differently from white people. Racism is thus avoided on the linguistic level while being perpetuated, automated and industrialized on the operational level, usually without transparency for those affected. If AI is used in the future to decide on credit and housing allocation, job applications and social forecasts, the problems are foreseeable. Unlike human decisions, the machine’s apparent objectivity renders it largely immune to criticism.
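
The mechanism can be made concrete with a deliberately crude sketch. The following Python toy example – all data invented, and far simpler than any deployed system – shows how a model that merely learns decision patterns from biased historical records reproduces the disparity without containing a single explicitly discriminatory rule:

```python
# Toy illustration: a model trained on biased historical decisions
# reproduces the bias. All data here is invented for demonstration.
from collections import defaultdict

# Hypothetical past loan decisions: (group, approved)
history = ([("A", True)] * 80 + [("A", False)] * 20
           + [("B", True)] * 40 + [("B", False)] * 60)

# "Training": estimate the approval rate per group from the past.
counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in history:
    counts[group][0] += int(approved)
    counts[group][1] += 1

def predict_approval(group: str) -> float:
    approved, total = counts[group]
    return approved / total

# No rule in this code mentions hostility towards group B, yet the
# historical disparity returns as an apparently "objective" score.
print(predict_approval("A"))  # 0.8
print(predict_approval("B"))  # 0.4
```

The bias enters through the data, not through the code – which is one reason it remains invisible to those affected.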

The framework conditions of the AI industry also need critical examination – for example, the underpaid Kenyan workers whose labour was essential to ChatGPT’s success.[15] On an economic level, AI reproduces capitalism instead of critically questioning it, and it does so along racial distinctions. Who owns the systems? Who is allowed to market their services, and who benefits from the increased productivity? Who owns AI art whose creation is prompted with the names of famous artists? Who can work in this new AI world, whose job may be replaced by automation, and whose life may be managed by AI? Everything suggests that those affected by racism will not receive positive answers to these questions, and that the negative effects of the revolution will hit them disproportionately.

Furthermore, the implications for anti-Semitism should not be underestimated. Already, the portrayal of Jewish people is subject to a historical bias: image-generation programs draw primarily on historical representations and images of ultra-Orthodox groups. As a result, Jewish people who today participate in modern life as a matter of course are absent from the collective imagination.

Last but not least, artificial intelligence (AI) systems can already be used to generate an endless stream of fake news, deepfakes and conspiracy-theory content – with a plausibility and apparent authenticity that today’s conspiracy ideologues can only dream of. In all likelihood, the effects will be felt particularly strongly by Jewish people.

All these questions arise with increased urgency when AI is not just a nice toy within immersive environments (IUs) but constitutive of them. How will educational IUs change when they no longer present content prefabricated by humans, but AI generates such content dynamically? What happens when AI-based avatars target users and radicalize or intimidate them with personalized propaganda? Where is the control? And how can automated patterns of reproduction of anti-human attitudes be interrupted?

This raises questions that point beyond the field of IU in civic education and clearly require further exploration: What conditions of success do we need for a discrimination-sensitive AI ethics? How can AI be taught to recognize power asymmetries? Here, limits become apparent that are inherent in the current workings of AI – it is fundamentally reproductive.

A COUNTER-MODEL: IU AS LIVED UTOPIA

Perhaps it is not expedient to reconstruct stories of persecution and experiences of discrimination for political education in IU. Perhaps it would make more sense to simulate worlds in IU in which a certain utopian hope has become real. Instead of being placed in a pedagogical capsule with preconceived content and learning goals, learners could independently explore worlds that have already been liberated. The learning effect would arise precisely from the absence of the context of violence that is commonplace today.

An example is the everyday simulation game “The Sims” (released in 2000) and its role in promoting homosexual/queer emancipation. In this PC game, queer relationships could be experienced for the first time in a mainstream title without being a constant theme. Many queer people today report having experienced queer representation and empowerment for the first time through this game – precisely because the possibility was not imposed or used as a teaching tool, but could be explored out of intrinsic motivation.

Perhaps political education in IU should take a similar approach: provide a concrete utopia that can be explored and with whose possibilities users can experiment freely – reducing fears and prejudices in the process. The virtual world should be a model for reality, not the other way round.

This approach might have the advantage of avoiding from the outset the numerous pitfalls mentioned above that threaten political education in IU. The boundaries of the user experience would also have to be drawn less tightly; there would be less need for pedagogical guidelines and less top-down control overall.


[1] Planet Schule: Inside Auschwitz | Erzählungen Überlebender und 360 Grad-Videos (10.12.22), URL: https://www1.wdr.de/schule/digital/unterrichtsmaterial/dreisechzig-inside-auschwitz-100.html (accessed: 12.09.23).

[2] N.N.: Dimensions in Testimony. Speak with survivors and other witnesses to the Holocaust and other genocides through their interactive biographies (n.d.), URL: https://sfi.usc.edu/dit (accessed: 12.09.23).

[3] N.N.: Education in the Metaverse takes students back in history (01.12.22), URL: https://www.wbur.org/hereandnow/2022/12/01/black-history-metaverse (accessed: 12.09.23).

[4] Besand, A., Overwien, B., & Zorn, P. (eds.) (2019): Politische Bildung mit Gefühl. Bundeszentrale für politische Bildung, Bonn.

[5] Verena Lucia Nägel, Sanna Stegmaier: AR und VR in der historisch-politischen Bildung zum Nationalsozialismus und Holocaust – (Interaktives) Lernen oder emotionale Überwältigung? (08.10.2019), URL: https://www.bpb.de/lernen/digitale-bildung/werkstatt/298168/ar-und-vr-in-der-historisch-politischen-bildung-zum-nationalsozialismus-und-holocaust-interaktives-lernen-oder-emotionale-ueberwaeltigung/ (accessed: 12.09.23).

[6] “The stories told by the witnesses and the images from the camp can be challenging for sensitive students or young people with their own experience of war and flight. It can help to watch the videos with a little more distance (for example, over the shoulder of a classmate holding the tablet). In addition, it is advisable not to use headphones or VR.” Planet Schule: Inside Auschwitz | Erzählungen Überlebender und 360 Grad-Videos (10.12.22), URL: https://www1.wdr.de/schule/digital/unterrichtsmaterial/dreisechzig-inside-auschwitz-100.html (accessed: 12.09.23).

[7] Massively Multiplayer Online Role-Playing Games: online video game worlds with many thousands of users.

[8] Miriam Mulders: Learning about Victims of Holocaust in Virtual Reality: The Main, Mediating and Moderating Effects of Technology, Instructional Method, Flow, Presence, and Prior Knowledge (06.03.23), URL: https://www.mdpi.com/2414-4088/7/3/28 (accessed: 12.09.23).

[9] The term “metaverse” in the narrower sense theoretically encompasses all immersive environments and their interconnection – in this sense, “the metaverse” does not yet exist, but individual IUs such as Horizon Worlds, Minecraft, etc. do. ChatGPT offers the following definition of the metaverse (prompt: write a scientific definition of the metaverse): “The Metaverse is a comprehensive, large-scale virtual environment that integrates multiple interconnected digital spaces, facilitating seamless interactions, communication, and collaboration among users, who are represented by digital representations known as avatars. This virtual framework merges aspects of virtual reality, augmented reality, and interconnected networks to create an immersive and persistent ecosystem for users, with wide-ranging applications spanning entertainment, social interaction, education, commerce, and various other domains. By transcending physical limitations and geographical boundaries, the Metaverse fosters novel opportunities for creative expression, innovation, and interaction in the digital realm.”

[10] N.N.: AI is shaping the metaverse – but how? Industry experts explain (09.05.2023), URL: https://www.weforum.org/agenda/2023/05/generative-ai-and-how-can-it-shape-the-metaverse-industry-experts-explain/ (accessed: 12.09.23).

[11] Cf. Yinka Bokinni: A barrage of assault, racism and rape jokes: my nightmare trip into the metaverse (25.04.22), URL: https://www.theguardian.com/tv-and-radio/2022/apr/25/a-barrage-of-assault-racism-and-jokes-my-nightmare-trip-into-the-metaverse (accessed: 12.09.23).

[12] N.N.: Yad Vashem comes to Minecraft for International Holocaust Day (26.01.23), URL: https://www.i24news.tv/en/news/israel/technology-science/1674714331-yad-vashem-comes-to-minecraft-for-international-holocaust-day (accessed: 12.09.23).

[13] Jcirque25: Using Minecraft as a medium to reflect on the Holocaust (2020), URL: https://www.reddit.com/r/Minecraft/comments/htlst6/using_minecraft_as_a_medium_to_reflect_on_the (accessed: 12.09.23).

[15] Billy Perigo: OpenAI Used Kenyan Workers on Less Than $2 Per Hour to Make ChatGPT Less Toxic (18.01.23), URL: https://time.com/6247678/openai-chatgpt-kenya-workers/ (accessed: 12.09.23).

Decentralized Autonomous Organisations (DAOs) in the metaverse: the future of extremist organisation?

Julia Ebner is a Senior Research Fellow at the Institute for Strategic Dialogue (ISD), where she has led projects on online extremism, disinformation, and hate speech. Based on her research, she has given evidence to numerous governments, intelligence agencies, and tech firms, and advised international organizations such as the UN, OSCE, Europol, and NATO. Ebner’s first book, The Rage: The Vicious Circle of Islamist and Far-Right Extremism, won the Bruno-Kreisky Award for Political Book of the Year 2018 and has been disseminated as teaching material across Germany by the Federal Agency for Civic Education. Her second bestselling book, Going Dark: The Secret Social Lives of Extremists, was awarded the “Wissenschaftsbuch des Jahres 2020” Prize (“Science Book of the Year 2020”) by the Austrian Federal Ministry for Education as well as the Caspar Einem Prize 2022. Her new book, Going Mainstream: How Extremists Are Taking Over, was published in June 2023. Julia has just completed her DPhil in Anthropology at the University of Oxford.

INTRODUCTION

The transition from Web 2.0 to Web 3.0 is likely to leverage the latest technological advances including AI, machine learning and blockchain technology. This might not only bring about a new interplay of the physical, virtual and augmented reality but could also fundamentally change the ways in which corporate entities, social communities and political movements organise themselves. 

Decentralisation is a key property of the Metaverse. Policies, legal contracts and financial transactions that were traditionally the domain of governments, courts and banks might be replaced with smart contracts and transactions on the blockchain. Cryptocurrencies can be used as a new medium of exchange outside established banking systems, while non-fungible tokens (NFTs) might serve as unique digital identifiers to certify ownership and authenticity. Taken together, these new forms of self-governance could lead to a proliferation of so-called Decentralised Autonomous Organisations (DAOs) in the Metaverse.

Decentralised Autonomous Organisations (DAOs) are digital entities that are collaboratively governed without central leadership and operate on a blockchain (Jentzsch, 2016). As such, DAOs allow internet users to establish their own organisational structures, which no longer require the involvement of a third party in financial transactions and rulemaking. DAOs allow online communities to simplify their transactions and to establish rules through a community-based approach (World Economic Forum, 2023). However, as this study will explore, they might also give rise to new threats emerging from decentralised extremist mobilisation, pose a risk to minority rights, challenge the rule of law, and disrupt institutions that are currently considered fundamental pillars of our democratic systems.

RESEARCH AIM AND METHODS

The aim of this research paper is to explore potential ways in which DAOs could be exploited by extremist and anti-democratic actors. To what extent are extremist movements incentivised and able to make use of such new forms of self-governance? What types of threats might emerge from anti-democratic or anti-minority DAOs?

To address these research questions, the project reviewed the existing literature and combined expert interviews with digital ethnographic research. More specifically, qualitative interviews were carried out with two leading experts on DAOs, a manual review of roughly 350 existing DAOs was performed, and exploratory digital ethnographic research was conducted across the extremist fringe platforms Odysee, Bitchute and Gab as well as the encrypted apps Telegram and Discord. All quoted and referenced primary-source content was documented and archived in the form of screenshots and can be made available to fellow researchers upon request.

The report first provides an overview of DAOs, their relationship with the Metaverse and current use cases. It then assesses potential areas of misuse before providing an outlook on future trends, including key challenges and opportunities, and ideas for future research.

DAOS AND THE METAVERSE

The next iteration of the internet, the Metaverse, is inherently tied to decentralised forms of communication, collaboration and finance. Many of its emerging platforms are built on blockchain technology, and researchers have pointed out that collective governance and decentralised finance will likely be key characteristics of the Metaverse (Chao et al, 2022; Goldberg & Schär, 2023; Tole & Aisyahrani, 2023). Ownership of an asset or “a piece of land” in the Metaverse would be governed by NFTs (Laeeq, 2022).
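
What NFT-governed ownership of a virtual parcel means can be illustrated with a minimal sketch. The following Python toy registry is a deliberate simplification with invented names – real implementations follow on-chain token standards such as ERC-721 – but it captures the two core properties: each token ID is unique, and only the current owner can transfer it.

```python
# Toy sketch of NFT-style land ownership: one unique token per parcel,
# transfers only by the current owner. Names and IDs are invented.
class LandRegistry:
    def __init__(self) -> None:
        self.owner_of: dict[int, str] = {}  # token_id -> owner address

    def mint(self, token_id: int, owner: str) -> None:
        if token_id in self.owner_of:
            raise ValueError("token already exists")  # uniqueness
        self.owner_of[token_id] = owner

    def transfer(self, token_id: int, sender: str, recipient: str) -> None:
        if self.owner_of.get(token_id) != sender:
            raise PermissionError("only the current owner can transfer")
        self.owner_of[token_id] = recipient

registry = LandRegistry()
registry.mint(42, "alice")             # parcel 42 is certified to alice
registry.transfer(42, "alice", "bob")  # a sale updates the registry
print(registry.owner_of[42])           # bob
```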

The World Economic Forum has described DAOs as “an experiment to reimagine how we connect, collaborate and create” (World Economic Forum, 2023, 27). DAOs differ from traditional organisations in how they allocate resources, coordinate activities and make decisions. Code-driven and community-oriented structures allow stakeholders to be directly involved in governance and operations via voting systems (World Economic Forum, 2023). Decentraland is the first large-scale virtual world based on the architecture and premises of a DAO. With the aim of distributing power among its participating users, it operates on the Ethereum blockchain. The “residents” of Decentraland can initiate policy updates through decentralised voting systems embedded in Decentraland’s DAO governance structure (Goldberg & Schär, 2023, 3).
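
How such code-driven, community-oriented governance works can be sketched in a few lines. The following Python toy model is an off-chain simplification with invented names and numbers – real DAOs such as Decentraland encode comparable rules as smart contracts on a blockchain – but it illustrates token-weighted voting in which a proposal executes automatically once quorum and majority conditions are met:

```python
# Toy sketch of DAO governance: token-weighted voting with automatic
# execution. Off-chain simplification; all names and numbers invented.
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0
    executed: bool = False

@dataclass
class ToyDAO:
    token_balances: dict   # member -> governance tokens held
    quorum: int            # minimum number of tokens that must vote
    proposals: list = field(default_factory=list)

    def propose(self, description: str) -> Proposal:
        proposal = Proposal(description)
        self.proposals.append(proposal)
        return proposal

    def vote(self, member: str, proposal: Proposal, support: bool) -> None:
        weight = self.token_balances[member]  # voting power = tokens held
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight
        self._maybe_execute(proposal)

    def _maybe_execute(self, proposal: Proposal) -> None:
        # No board and no chair: the coded rule itself decides.
        turnout = proposal.votes_for + proposal.votes_against
        if turnout >= self.quorum and proposal.votes_for > proposal.votes_against:
            proposal.executed = True

dao = ToyDAO(token_balances={"alice": 60, "bob": 30, "carol": 10}, quorum=70)
proposal = dao.propose("Fund community project X")
dao.vote("alice", proposal, True)
dao.vote("bob", proposal, False)
print(proposal.executed)  # True: 60 for vs. 30 against, turnout 90 >= 70
```

The sketch also makes the central governance risk visible: whoever accumulates tokens accumulates voting power, and once the coded rule fires there is no central authority to appeal to.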

There is an active, ongoing debate about the legal status of DAOs. Proponents argue that smart contracts are self-executing code on a blockchain that can operate independently of legal systems, pointing to lower transaction costs and a reduced need for intermediaries. More sceptical commentators, however, have suggested that DAOs should be owned and/or operated by humans (Jentzsch, 2016; Mondoh et al, 2022). A more detailed discussion of existing efforts to regulate DAOs can be found in the final section of this report.
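
What “self-executing” means in this debate can likewise be shown with a toy example. The following Python sketch – off-chain, with invented names, and not written in any real smart-contract language – mimics an escrow-style contract whose settlement rule runs mechanically, without a bank or court as intermediary; this mechanical quality is precisely why the relationship between such code and legal systems is contested.

```python
# Toy sketch of a "self-executing" escrow contract: funds go to the
# seller on confirmed delivery, or back to the buyer after a deadline.
# Off-chain simplification; all names and values are invented.
import time
from typing import Optional

class EscrowContract:
    def __init__(self, buyer: str, seller: str, amount: int, deadline: float):
        self.buyer, self.seller = buyer, seller
        self.amount, self.deadline = amount, deadline
        self.confirmed = False
        self.settled_to: Optional[str] = None

    def confirm_delivery(self, caller: str) -> None:
        if caller == self.buyer:  # only the buyer can confirm
            self.confirmed = True

    def settle(self) -> str:
        # The rule executes mechanically; no party can override it.
        if self.settled_to is None:
            if self.confirmed:
                self.settled_to = self.seller
            elif time.time() > self.deadline:
                self.settled_to = self.buyer  # automatic refund
        return self.settled_to or "pending"

contract = EscrowContract("alice", "bob", amount=100,
                          deadline=time.time() + 7 * 24 * 3600)
contract.confirm_delivery("alice")
print(contract.settle())  # bob: released without any intermediary
```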

To date, research into DAOs and the Metaverse remains scarce. In recent years, however, there has been a notable rise in publications investigating the trends and implications of DAOs for various sectors. For example, Chao et al. (2022) discuss the advantages and disadvantages of DAOs for non-profit organisations. While they argue that DAOs provide a “strong democratic system”, they also caution that DAOs “are not subject to legal, physical, or economic constraints” and are therefore capable of operating “outside the control of a single central authority or a single governing body” (Chao et al, 2022).

Mondoh et al (2022) discuss whether DAOs might be the future of corporate governance, while Goldberg and Schär (2023) write that DAOs could disrupt the monopoly market structures in the technology sector. Their assessment is that “open standards and blockchain-based governance are a necessary but not a sufficient condition for a decentralized and neutral platform” (Goldberg & Schär, 2023). Meanwhile, Tole and Aisyahrani (2023) predict that the Metaverse and DAOs have the potential to revolutionise the education system. Through DAOs, they claim, “technology courses, certificates, and more can become automated and authenticated on the blockchain” (Tole & Aisyahrani, 2023, 1). DAOs could provide the basis for what they call a “Metaversity”, whereby the necessary infrastructure is provided for decentralised learning centres with incentive structures and tailored courses in place (Tole & Aisyahrani, 2023, 3).

CURRENT USE OF DAOS

The current DAO ecosystem is highly diverse and growing at a fast pace. By 2023, DAOs collectively managed billions of dollars and counted millions of participants (World Economic Forum, 2023; DeepDAO, 2023). DAOs exist across a range of sectors such as finance, philanthropy and politics. A manual review of the 356 DAOs listed on the website Decentralist.com at the time of writing (July 2023) illustrates their highly diverse nature and purposes. The descriptions and stated mission statements include social, activist, investment, gaming, media and collector aims (Decentralist, 2023). Some DAOs are clearly satirical in nature, including the Café DAO, which aims “to replace Starbucks”; the Doge DAO, which wants to “make the Doge meme the most recognizable piece of art in the world”; and the Hair DAO, “a decentralised asset manager solving hair loss”.


By 2023, DAOs had primarily attracted libertarians, activists, pranksters and hobbyists. There are, however, also activist and political DAOs: Decentralist.com lists DAOs tied to the climate movement and the Black Lives Matter community, as well as social justice and anti-banking DAOs that aim to tackle social inequality. Figure 1 provides an overview of the most common DAOs found on Decentralist:

Figure 1: Overview of the most common existing DAOs

While the review of existing DAOs did not identify any openly anti-democratic or anti-minority DAOs, some make use of alt-right codes and conspiracy-myth references or express support for far-right populist leaders. For example, the “Redacted Club DAO” claims to be a secret network with the aim of “slaying” the “evil Meta Lizard King”. Its official Discord chat, which counts 850 members, contains Pepe the Frog memes and references to “evil lizards” and “good rabbits”. Another DAO, not listed on Decentralist, is called “Free Trump DAO”. Its Twitter account describes it as being “for Patriots” and as “a powerful tool that fights for freedom and liberty around the world.” Its Telegram channel, which counts 474 members, is filled with Trump-glorifying memes, MAGA symbols and announcements such as “We’re supporting Trump. Bring back Trump”. There is also the “Trump DAO – the 47th US President”, which claims to want to “raise money through the crypto to support Trump for 2024”. The aim is to shape political campaigns by fundraising while giving complete anonymity to people who want to support Trump.

EXTREMIST INCENTIVES TO EXPLOIT DAOS

This sub-chapter draws on ethnographic research in extremist forums and chat groups to assess their conversations about DAOs. The analysis covered the far-right fringe platforms Bitchute, Odysee and Gab as well as the encrypted apps Discord and Telegram. An exploratory discourse analysis was used, as the volume of content specifically focused on DAOs in extremist communities is still too limited to provide enough data points for statistical analysis.

A total of 85 pieces of content were identified across the analysed far-right fringe platforms: 46 on Odysee, 23 on Bitchute and 16 on Gab. Additionally, more than 100 channels related to DAOs were detected on Telegram and Discord, including far-right extremist channels. While many of the posts on Odysee, Bitchute and Gab simply explain the characteristics and advantages of DAOs and how they might shape Web 3.0, some of the detected conversations also reveal a strong appetite for decentralised alternative forms of collaboration, communication and crowdfunding. Broadly speaking, far-right activists’ intent to use DAOs can be divided into two overarching types of incentives: practical and ideological motivations.

Important practical reasons are that DAOs can help users circumvent monitoring, regulatory mechanisms and traditional institutions. For example, extremists may view them as useful for escaping surveillance by security services, avoiding perceived censorship by tech firms and finding an alternative to frozen bank accounts. Many extremist and even terrorist movements have already created their own cryptocurrencies and make use of anonymous bitcoin wallets. Privacy-focused cryptocurrencies such as Monero in particular have served extremists whose bank accounts have been frozen. Jihadists used cryptocurrencies as early as 2016 to fund violent activities (Irwin & Milad, 2016). In 2019, then US Treasury Secretary Steven Mnuchin warned that cryptocurrencies pose a national security threat, allowing malicious actors to fund criminal activities (Rappaport & Popper, 2019). While decentralised finance is already a trend among extremists, a shift to entirely decentralised forms of self-governance could be the next step (Krishnan, 2020). For example, Gab users highlighted that DAOs can help organisations to “unlock their full potential and usher in a new era of decentralized governance.” The post continued: “Embrace the power of automation and embark on a journey of efficient and transparent decision-making within your DAO.” Another user shared that DAOs may not be “capable of being subject to sanctions”.

There are also ideological incentives that might lead extremists to use DAOs for their purposes. In particular, fundamental distrust of the establishment makes DAOs an appealing alternative. Users who believe that the “deep state” or the “global Jewish elites” control everything from governments and big tech to the global banking system might prefer to set up their own technological, financial and logistical networks. For example, QAnon-related groups on Telegram were discussing the future of decentralised finance and how it might provide an escape route from the U.S. federal banking system. DAOs also fit ultra-libertarian utopian visions of online worlds that are entirely unregulated, in which speech and actions are “truly free”. “FREEDOM Meta-DAO” declared in its mission statement on Discord: “We believe in freedom of speech, privacy, and protection from cancel-culture bullying. Using a DAO as a service platform, we bring society together as a whole by removing borders and adaptation blockages.” Finally, some users may be motivated by the prospect of creating their own digital state that operates according to their own rules, values and ideologies.

Figure 2 provides an overview of the practical and ideological incentives:

Practical incentives:
- Safe haven from surveillance by security services or journalists
- Escape route from social media removal policies
- Financial solution to frozen bank accounts

Ideological incentives:
- Conspiracy myths about the establishment and financial institutions
- Ultra-libertarianism and desire for unlimited free speech
- Creation of a digital state or corporation according to one’s own rules

Figure 2: Overview of practical and ideological incentives

Most of the posts about DAOs on far-right websites were positive; however, there were also a few warnings among the analysed pieces of content. For example, one user on Gab wrote that “though Blockchain technologies make traditional authoritarianism less likely, they make a new kind of authoritarianism, born of decentralized autonomous organizations (DAOs), more likely.” The user continued: “By design and accident, DAOs will tend to develop into the computational equivalents of eusocial colony animals such as ants, bees, and termites. Once formed into such superorganisms, DAOs will exhibit emergent behaviors like swarming and collective intelligence.” Indeed, the user pointed out that one of the risks is that “humans venturing into the DAOs’ native habitat would then find themselves forced to live under the arbitrary will of not another human, but instead of a vast, mysterious hoard of nonhuman, and perhaps inhuman, entities.”

EXTREMIST CAPABILITIES TO EXPLOIT DAOS

In this section, potential areas for exploitation were analysed based on a review of the literature and interviews with two leading technology experts. The first interviewee was Carl Miller, the Director of the Centre for the Analysis of Social Media (CASM) at the think tank Demos, who has long warned of potential threats emerging from the misuse of DAOs. The second interviewee was Christoph Jentzsch, a leading developer of the Ethereum blockchain and the creator of “The DAO”, which became one of the largest crowdfunding campaigns in history, raising over $150 million via token sale after its launch in April 2016, but was hacked soon afterwards.

This sub-chapter addresses questions such as: Could trolling armies start cooperating via DAOs to launch election-interference campaigns? What happens if anti-minority groups establish their own digital states in which they impose their own governing structures? And how might terrorists leverage DAOs to fund and plot criminal activities and violent attacks?

DAOs can be used by movements to further their social, political and criminal objectives. Three potential threats related to extremist and terrorist use of DAOs were identified: 1) influence campaigns, 2) radicalisation of sympathisers, and 3) attacks on political opponents. In all three threat areas, DAOs can be exploited on a tactical level in at least three ways: a) for coordination and planning activities, b) for crowdfunding and purchasing activities, and c) for radicalisation and training activities. As such, they could change the nature of rebellion movements as well as their relative position of power, their effectiveness and their resilience to governmental countermeasures (Krishnan, 2020).

Figures 3 and 4 summarise the potential threats as well as potential tactical exploitation areas associated with the use of DAOs by extremists and terrorists:

Figure 3: Overview of potential tactical areas for exploitation 

Influence Campaigns

One risk is that DAOs might be used in the future to carry out large-scale influence and election interference campaigns. Carl Miller told me that “beyond the speculative activities around crypto and NFTs, the deeper simmering experimentation around governance provided more fundamental challenges to the state and security.” DAOs allow extremist groups to engage in deliberative and collective governance and decision-making in ways where, according to Carl, it is a.) very unclear who is doing it, and b.) hard for statutory security agencies to do anything about it. “As we gear up for the next U.S. presidential election, we might see DAOs being used for election campaigns,” he warns. For example, a Trump DAO could incentivise people to donate and participate in campaigns. “One could imagine a campaign being funded by tens of thousands of dark wallets that do not have clear links to real-world identities.” Would this be recognised as electoral campaigning? Carl noted that regulators in the U.S. currently regulate at the wallet level, not at the DAO level. However, these wallets can be fully anonymous. “There are already crypto-based gig sites where small jobs can be done anonymously based on crypto remuneration,” Carl said. “In the future we might see gig workers being paid to run disinformation campaigns or stage protests.”

Radicalisation of Sympathisers

Another risk is that DAOs could facilitate radicalisation efforts undertaken by extremist groups. Carl noted that “the basic problem of extremists is that they are being denied the conventional, easiest ways of reaching people”. For example, they find it hard to rent halls, run Facebook groups, raise money or get the word out. He continued: “You could easily see a world where there are protocol-based alternatives or replacements for conventional organizational structures”, allowing extremist movements to surmount collective action problems, create international coalitions and fundraise for their activities.

Extremist movements could potentially even create their own digital states, for example in the form of digital white ethnostates or cyber caliphates. In 2022, Balaji Srinivasan, the former chief technology officer at Coinbase, described in his book The Network State how DAOs could soon give rise to new forms of digital statehood. Any group of online users could decide to start their own country, with its own laws, social services and financial transactions (Srinivasan, 2022). 

Attacking Political Opponents 

Finally, DAOs might also serve as safe havens for the planning and plotting of violent terrorist attacks and cybercrimes. The circumvention of government regulation and monitoring activities could make them particularly useful for violent extremist and criminal organisations. Moreover, white nationalist movements have long advocated decentralised structures and so-called “leaderless resistance” (Michael, 2012; Malone et al., 2022). Jihadist organisations such as Islamic State and Al Qaeda have increasingly adopted similarly decentralised models of leadership. Researchers have argued that leaderless organisations are more resilient to disruptions and interventions than hierarchical organisations. They compared hierarchical organisations to spiders, which die when you cut off their head, while equating leaderless organisations with starfish, which regrow severed arms (Brafman & Beckstrom, 2007). 

According to Christoph Jentzsch, the most notable difference between DAOs and traditional forms of organisations is that, unlike associations or co-operatives, DAOs cannot be outlawed and their assets or bank accounts cannot be frozen. “If you can get organised without relying on traditional infrastructure, this also means that you won’t be controlled by traditional institutions. It would be difficult to intervene on a statutory level”.  Jentzsch argues that cash already provides an alternative route for clandestine groups to fundraise, hold or spend money. Yet, DAOs might make it easier to conduct international, anonymous operations.
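
To make this difference concrete: a DAO’s governance rules are enforced by code on a blockchain rather than by any administrator or bank, so there is no central account to freeze and no office to raid. The following Python sketch is a deliberately simplified, hypothetical model of the core mechanism of token-weighted proposal voting; it is not the contract code of any real DAO, which would typically be written in a smart contract language such as Solidity.

```python
from dataclasses import dataclass, field

@dataclass
class Proposal:
    description: str
    votes_for: int = 0
    votes_against: int = 0
    voters: set = field(default_factory=set)

class ToyDAO:
    """Simplified model of token-weighted DAO governance.

    In a real DAO this logic lives in a smart contract on a blockchain:
    once deployed, no single operator can freeze the treasury or veto
    the outcome of a vote.
    """

    def __init__(self, token_balances):
        self.balances = dict(token_balances)   # wallet address -> tokens held
        self.proposals = []

    def propose(self, description):
        self.proposals.append(Proposal(description))
        return len(self.proposals) - 1         # proposal id

    def vote(self, proposal_id, wallet, support):
        proposal = self.proposals[proposal_id]
        if wallet in proposal.voters:
            raise ValueError("each wallet votes only once")
        weight = self.balances.get(wallet, 0)  # voting power = token holdings
        proposal.voters.add(wallet)
        if support:
            proposal.votes_for += weight
        else:
            proposal.votes_against += weight

    def passed(self, proposal_id):
        p = self.proposals[proposal_id]
        return p.votes_for > p.votes_against

# Wallets are pseudonymous: nothing links "0xa1" to a real-world identity.
dao = ToyDAO({"0xa1": 600, "0xb2": 250, "0xc3": 150})
pid = dao.propose("Fund project X from the treasury")
dao.vote(pid, "0xa1", True)
dao.vote(pid, "0xb2", False)
print(dao.passed(pid))  # True: the outcome follows token weight, not identity
```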

DAOs could give rise to new forms of dark markets. It might even be possible for DAOs to facilitate an anonymous assassination market (Krishnan, 2020). Carl noted that there would probably be law enforcement responses to terrorist plots and other serious criminal activities, which could include “a mix of infiltration and subversion, and perhaps direct cyber offensive activity.” DAOs would be resilient in some ways because they are decentralised and rely on smart contracts. However, they might also be more vulnerable to hacking attacks. “It’s difficult for creators to know when they are safe from a hacking attack due to their complex structures,” Carl explained. 

CONCLUSIONS, FUTURE RESEARCH AND CHALLENGES ON THE HORIZON

DAOs bring both challenges and opportunities for democratic culture in cyberspace. Advocates of DAOs have argued that these blockchain-based forms of self-governance promise enhanced security, transparency and trust and reduce the transaction costs arising through intermediaries. Meanwhile, this paper has explored some of the key challenges and risk areas for national security and democracy. 

The potential exploitation of DAOs for extremist or criminal purposes has not received enough attention in the research and policy communities. This report identified a range of ways in which DAOs might be misused by extremist movements in the future, which could challenge the rule of law, pose a threat to minority groups, and disrupt institutions that are currently considered fundamental pillars of democratic systems. More specifically, the study explored how extremist movements might tap into DAOs to plan, coordinate and launch influence and interference operations, radicalisation campaigns and violent attacks. 

Risks associated with the misuse of DAOs for extremist and criminal purposes have not been on the radar of global policymakers. Many governments have started to develop or pass legal frameworks to regulate AI. However, few countries or regions have even recognised the existence of DAOs or considered regulating them. Technology expert Carl Miller said that “even though DAOs behave like companies, they are not registered as legal entities”. There are only a few exceptions: the U.S. states of Wyoming, Vermont and Tennessee have passed laws to legally recognise DAOs. 

This report is a first attempt to understand the potential risk areas of DAOs in the Metaverse. While this study focused on non-state actors, exploring emerging threats from hostile state actors might be another important area for future research. Studies could also use experimental methods and interviews with policymakers and law enforcement to map the threat landscape. “DAOs can be used by anyone: by charities, investment funds, but yes, they can also be used by terrorist organisations,” Christoph Jentzsch argues. However, he believes that the positive use cases significantly outweigh the negative ones. Future studies should investigate both the potential opportunities and challenges related to collective decision-making and self-governance, diversity and the protection of minorities, and radicalisation and extremism in decentralized communities in the Metaverse. The positive ways in which DAOs can shape future democratic culture are just as poorly understood as the negative impact they could have on politics and society. 

BIBLIOGRAPHY

Brafman, Ori and Rod A. Beckstrom, The Starfish and the Spider: The Unstoppable Power of Leaderless Organizations (London: Penguin, 2007).

Chao, Chian-Hsueng, I-Hsien Ting, Yi-Jun Tseng, Bing-Wen Wang, Shin-Hua Wang, Yu-Qing Wang, and Ming-Chun Chen. “The Study of Decentralized Autonomous Organizations (DAO) in Social Network”. Proceedings of the 9th Multidisciplinary International Social Networks Conference, 2022, 59-65.

Decentralist. “The List of DAOs”. 2023. Available online: https://www.decentralist.com/list-of-daos. 

DeepDAO. “Organizations”. 2023. Available online: https://deepdao.io/organizations.

Goldberg, Mitchell, and Fabian Schär. “Metaverse governance: An empirical analysis of voting within decentralized autonomous organizations”. Journal of Business Research 160 (2023).

Irwin, Angela S.M., and George Milad. “The Use of Crypto-Currency in Funding Violent Jihad”. Journal of Money Laundering Control 19, no. 4 (2016): 411.

Jentzsch, Christoph. “Decentralized autonomous organization to automate governance”. White Paper, 2016.

Krishnan, Armin. “Blockchain Empowers Social Resistance and Terrorism Through Decentralized Autonomous Organizations”. Journal of Strategic Security 13, no. 1 (2020). 

Laeeq, Kashif. “Metaverse: Why, How and What”. Presentation, February 2022. Available online: https://www.researchgate.net/publication/358505001_Metaverse_Why_How_and_What. 

Malone, Iris, Lauren Blasco, and Kaitlyn Robinson. “Fighting the Hydra: Combatting Vulnerabilities in Online Leaderless Resistance Networks”. National Counterterrorism, Innovation, Technology and Education Center, U.S. Department of Homeland Security, September 2022. Available online: https://digitalcommons.unomaha.edu/ncitereportsresearch/24/.

Michael, George. Lone Wolf Terror and the Rise of Leaderless Resistance (Nashville: Vanderbilt University Press, 2012).

Mondoh, Brian Sanya, Sara M. Johnson, Matthew Green, and Aris Georgopoulos. “Decentralised Autonomous Organisations: The Future of Corporate Governance or an Illusion?”. 23 June 2022.

Rappeport, Alan, and Nathaniel Popper. “Cryptocurrencies Pose National Security Threat, Mnuchin Says”. New York Times, July 15, 2019. Available online: https://www.nytimes.com/2019/07/15/us/politics/mnuchin-facebook-libra-risk.html.

Srinivasan, Balaji. The Network State: How to Start a New Country (Self-published book, 2022).

Sutikno, Tole, and Asa Ismia Bunga Aisyahrani. “Non-fungible tokens, decentralized autonomous organizations, Web 3.0, and the metaverse in education: From university to metaversity”. Journal of Education and Learning 17, no. 1 (2023), 1-5.

Venis, Sam. “Could New Countries be Founded – On the Internet?”. The Guardian, 5 July 2022. Available online: https://www.theguardian.com/commentisfree/2022/jul/05/could-new-countries-be-founded-on-the-internet.

World Economic Forum. “Decentralized Autonomous Organizations Toolkit”. Insight Report, January 2023. Available online: https://www3.weforum.org/docs/WEF_Decentralized_Autonomous_Organization_Toolkit_2023.pdf.

1 All quoted and referenced primary source content was documented and archived in the form of screenshots and can be made available to fellow researchers upon request.

A safe space for everyone – a plea for a democratic and participative metaverse

Dr Octavia Madeira, Institute for Technology Assessment and Systems Analysis (ITAS), Karlsruhe Institute of Technology (KIT)

Dr Georg Plattner, Institute for Technology Assessment and Systems Analysis (ITAS), Karlsruhe Institute of Technology (KIT)

1 MALEVOLENT ACTORS IN THE METAVERSE

The vision of a metaverse presented by Meta CEO Mark Zuckerberg in October 2021 was a watershed moment for society and for the tech world. Although the concept of a second digital life, including a digital identity, is neither new nor exclusive to Meta (see, for example, Second Life), this was the first time that almost all the possible functionalities of the metaverse had been presented in one place. There is a special emphasis here on immersion via virtual reality which, as an extension of today’s Internet applications, is meant to give users a completely new sense of participation and let them experience the metaverse in a multimodal and multi-sensory way. The presentation of the vision also focused in particular on technological permeability, on the diffusion of social media into all areas of human life and thus on social media’s shift away from being a pure entertainment platform.

Should the metaverse turn out to be as Zuckerberg and other proponents envision it, this would mean a radical transformation of social interaction with the digital space, and also a radical change in our everyday lives. Shopping could increasingly shift to the metaverse as an immersive experience, sports classes could take place in a virtual environment and virtual church services could be held with believers from all over the world. The world of work has already permanently changed, partly due to the coronavirus pandemic – and we could soon move from working at home to working in the meta-office.

But these innovations will not only change our everyday lives – they will also cause extremism and radicalisation to strike out in new directions and transform to adapt to new environments. Generally speaking, extremists use technologies that are cheap, readily available, easy to use and widely accessible for purposes such as propaganda, communication and recruitment. Using technology for a function other than that intended by the developers, with the intention of doing harm to others, is an inherently creative process. Cropley, Kaufman and Cropley (2008) call this “malevolent creativity”. They define it as a form of creativity that “is deemed necessary by some society, group, or individual to fulfill goals they regard as desirable, but has serious negative consequences for some other group, these negative consequences being fully intended by the first group” (106). We describe actors who display malevolent creativity (such as extremists or spreaders of fake news) as malevolent actors.

In the past, malevolent actors were very creative especially when it came to realigning their own organisation and distributing their own ideology. The digital revolution has equipped them with an unprecedented number of tools with which to further their cause: from (encrypted and instant) mass communication for propaganda and recruitment to alternative instruments for financing operations and logistics through to new means of destruction and terror. Recent technological advances have opened up a wide range of new opportunities for malevolent actors. For example, Web 2.0, the rise of social media and the availability of nearly all content on the Internet have enabled these actors to easily connect with other like-minded individuals and form almost entirely closed communities that reinforce their own views.

Research into the metaverse as the successor to social media and the mobile Internet can provide important insights into how malevolent actors could creatively use the metaverse. While we generally agree with Joe Whittaker and others (Whittaker 2022; Valentini, Lorusso and Stephan 2020) that distinguishing between offline and online radicalisation does not make sense from an analytical perspective, the way in which malevolent actors are currently using social media could give an idea of the metaverse of the future.

It is generally recognised that malevolent actors (with different ideological backgrounds) began to make use of the Internet and its possibilities at an early stage (Feldman 2020; Fisher 2015; Stewart 2021; Lehmann and Schröder 2021). They used new technologies in creative ways in order to evade monitoring and detection and also to improve their own operations. As an anonymous place of countless possibilities where one can find a wealth of information tailored to one’s own interests, the Internet is a gold mine for extremists (Bertram 2016, p. 232).

While research on the radicalisation patterns of convicted jihadi terrorists has shown that offline networks played a much greater role in their radicalisation than online networks (Hamid and Ariza 2022), other research indicates that the Internet has a more important role for right-wing extremists. This applies especially to the planning of their attacks and actions (von Behr et al. 2013; Gill et al. 2017). “The Internet is largely a facilitative tool that affords greater opportunities for violent radicalization and attack planning. Nevertheless, radicalization and attack planning are not dependent on the Internet […].” (Gill et al. 2017, p. 113).

In particular, social media has been used by malevolent groups to create, target and distribute self-generated content without the traditional vetting processes used by established media companies, while avoiding policing and censorship by nation states (Droogan et al. 2018, p. 171). Furthermore, social media has also become an instrument of social interaction for those who are already radicalised and those they want to convince or who are interested in their activities (Conway 2017).

The introduction of the metaverse could further reinforce this momentum. By further bridging the gap between offline and online, it could be even more difficult to maintain the distinction between the two spheres of radicalisation (and extremism and terrorism). At present, offline networks provide familiarity and a close environment and are more likely to evade security services than online extremists (Hamid and Ariza 2022). The future metaverse could bring together these advantages of the offline world in an extensive and immersive digital experience. Combined with the advantages of the online world – instant mass communication and propaganda – the metaverse could become an even bigger game-changer than the Internet and social media were.

2 THE METAVERSE AS A DEMOCRATIC SPACE

The metaverse is still in the early stages of development and has a long way to go before it reaches a stage of maturity in which its promises and actual functionalities are implemented. It is already apparent that the risks of the metaverse are comparable to those of social media, where, in the past, responses often came too late. Freedom and security will probably be the decisive variables in this technology of the future, which makes engaging with malevolent actors all the more crucial (Neuberger 2023). In the initial phase of the metaverse, it is already becoming clear that malevolent actors are finding fertile ground – as illustrated, for example, by the incidents of sexual harassment that have occurred in the current test versions of the metaverse (Basu 2021; Bovermann 2022; Diaz 2022; Wiederhold 2022).

How can these developments be tackled? How can they be prevented before they cause harm? It will be important to ensure the democratic involvement of actors and marginalised groups in decision-making and development processes. While this would now be a genuinely reactive process in the case of social media, the developers of the metaverse still have the opportunity to build beneficial structures. Democratisation of social media is desirable from a sociopolitical perspective because it is a very powerful tool due to its widespread use and its economic and cultural importance. This power should be democratically legitimised and controlled (Engelmann et al. 2020). However, democratic safeguarding should not follow a party political pattern.

In the development of the metaverse, social media should be informative in various ways – from the creativity with which malevolent actors use new media and technologies (see above) through to the democratic involvement of users. Social media operators have already tried to take account of the aspect of participation:

  • Meta conceived the idea of an Oversight Board in 2018 as a body whose independent judgement could help the company make tough content decisions. This board is committed to being independent, accessible and transparent. Meta has granted it the authority to decide whether content should be allowed or removed.
  • Twitter has been advised by a Trust and Safety Council in the past. This consisted of various NGOs and researchers who advised the company on online security issues. Elon Musk dissolved the Council after taking over the company (The Associated Press 2022). 
  • On its YouTube video platform, Google has introduced the Priority Flagger Programme. This enables NGOs and public authorities to use highly effective tools to report content that violates the Community Guidelines. This flagged content is then reviewed by moderators as a priority. However, the deletion criteria are the same as for any other reports. The programme was revised by YouTube in 2021, which led to major criticism from the community (Meineck 2021).
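
A priority flagger scheme of this kind is, at its core, a triage rule: reports from vetted organisations are reviewed first, but judged by the same criteria as everyone else’s. The following Python sketch is a hypothetical simplification of that mechanism (the flagger names and queue design are invented for illustration, not drawn from YouTube’s actual system):

```python
import heapq
import itertools

# Hypothetical register of vetted organisations; a real programme would
# vet participants through an application process.
PRIORITY_FLAGGERS = {"ngo_hate_speech_watch", "media_authority"}

_counter = itertools.count()  # tie-breaker keeps ordering stable within a tier
review_queue = []             # a heap of (priority, seq, content_id, reporter)

def report(content_id, reporter):
    # Priority 0 = vetted flagger, 1 = ordinary user; lower is reviewed first.
    priority = 0 if reporter in PRIORITY_FLAGGERS else 1
    heapq.heappush(review_queue, (priority, next(_counter), content_id, reporter))

def next_for_review():
    # Moderators apply the SAME deletion criteria regardless of the source;
    # only the order of review changes.
    _, _, content_id, reporter = heapq.heappop(review_queue)
    return content_id, reporter

report("video_123", "random_user_42")
report("video_456", "ngo_hate_speech_watch")
print(next_for_review())  # ('video_456', 'ngo_hate_speech_watch') jumps the queue
```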

In general, there seems to be a worrying trend on social media to cut back on these participative models of moderation and security in favour of artificial intelligence (AI) applications (Gorwa et al. 2020; Llansó 2020). However, AI solutions cannot and should not replace the involvement of civil society in decision-making processes and questions of democratic culture, not least because AI-supported content moderation solutions are still prone to error and lack transparency (Gillespie 2020; Gorwa et al. 2020). 

3 ENCOURAGING PARTICIPATION AND DEMOCRATIC INVOLVEMENT

In social media research and particularly in platform governance research, important approaches can be found that may help to enable a democratic and inclusive metaverse. In addition to essential cooperation between operators and governmental and non-governmental actors on issues of transparency and research, there is an emphasis in particular on actively strengthening democratic actors and narratives (Bundtzen and Schwieter 2023; Engelmann et al. 2020; Rau et al. 2022). 

This strategy is crucial in order to ensure that a state’s repressive apparatus is actually only used as a measure of last resort to stop malevolent actors. Democratic argument and discourse must be possible in an inclusive metaverse without people constantly having to fear repression and restriction. Instead, platform operators can also take steps in the metaverse to consciously and actively promote democratic actors and narratives, and thus build democratic resilience in the metaverse. 

Here too, the metaverse can take inspiration from existing approaches in the social media field, such as the YouTube flagger programme described above. Democratic actors, e.g. NGOs and government organisations specialising in areas such as hate speech, group-focused enmity or strengthening democracy, could have access to special reporting tools. They could also be given extended powers to contextualise questionable content. 

However, as well as reinforcing democratic narratives, the democratisation of the platform itself is a crucial factor for inclusivity. Involving users in decision-making and design processes can have enormous added value for a platform that is interested in democratic interaction. Marginalised groups and their representatives know exactly where hate and harassment may be lurking in the digital space. By involving such stakeholders at an early stage, some of the mistakes that were made on social media could be minimised from the outset.

In political practice, mini-publics have already proved effective as an instrument of user participation (Escobar and Elstub 2017; Smith and Setälä 2018). Mini-publics are groups of (randomly or systematically) selected citizens who work together over an extended period to examine socially relevant issues, with the inclusion of external sources, e.g. scientific expertise. Topics are examined, discussed and assessed from a broad range of perspectives, and the resulting recommendations are forwarded to political decision-makers (Escobar and Elstub 2017; Pek et al. 2023). One example of this is the virtual citizens’ assembly in Germany. In June 2022, its members debated the consequences of using artificial intelligence (Buergerrat.de 2022). These types of assemblies allow platform-specific topics to be discussed with the aim of ensuring that decision-making is more democratic. 
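
What the “randomly or systematically” selected membership of such a mini-public might look like can be illustrated with a short sketch. The following Python example is purely illustrative (real citizens’ assemblies stratify on many more attributes and use verified demographic data); it draws a gender- and age-balanced random sample from a hypothetical pool of platform users:

```python
import random

# Hypothetical pool of registered users; real assemblies stratify on more
# attributes (region, education, platform usage, ...) using verified data.
pool = [
    {"id": i,
     "gender": random.choice(["f", "m", "d"]),
     "age_group": random.choice(["16-29", "30-49", "50+"])}
    for i in range(10_000)
]

def draw_mini_public(pool, per_stratum=5, seed=42):
    """Stratified random draw: an equal number of members per (gender, age) cell."""
    rng = random.Random(seed)
    strata = {}
    for person in pool:
        strata.setdefault((person["gender"], person["age_group"]), []).append(person)
    members = []
    for _cell, candidates in sorted(strata.items()):
        members.extend(rng.sample(candidates, min(per_stratum, len(candidates))))
    return members

assembly = draw_mini_public(pool)
print(len(assembly))  # up to 45 members: 3 genders x 3 age groups x 5 each
```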

Although quite controversial (see above), platform councils can also develop potential for promoting democracy if they are able to operate independently, objectively and transparently (Haggart and Keller 2021; Rau et al. 2022). To ensure this, platform councils of this type could be based on the press and broadcasting councils that are already established in Germany, in line with the recommendations of Kettemann and Fertmann (2021). It should be noted, however, that responsibilities (geographical, practical), participants (citizens, experts, NGOs, political decision-makers) and not least powers (quasi-judicial, advisory) must be part of the social discourse and cannot yet be conclusively clarified (Cowls et al. 2022; Kettemann and Fertmann 2021). Furthermore, such councils could boost public confidence: the more diverse and transparent their line-up, and the more publicly visible the effects of their recommendations, the greater the trust they are likely to command.

Last but not least, the aim must also be to strengthen media literacy and policy competence by means of various training opportunities. These should be designed in such a way that individuals who are not (or no longer) associated with the education system are also able to benefit from them. Here it is vital to provide the necessary tools for dealing with fake news, other manipulated or extremist content and also hate speech on the Internet. One example to mention is the Good Gaming – Well Played Democracy project directed by the Amadeu Antonio Foundation, which aims to raise the gaming community’s awareness of extremist content, among other things.

In addition, it must be noted that building a democratic metaverse is not solely a task for citizens. The creation of a digital twin in the sense of a well-fortified democracy is also important. According to Rau et al. (2022), however, this does not exclusively mean the use of repressive measures such as the deletion or suppression of problematic content (see, for example, Bellanova and De Goede 2022) but also, coupled with this, the strengthening of democratic actors, e.g. through algorithmically increased visibility. In this context, the empowerment of marginalised democratic actor groups becomes especially important in order to adequately represent social diversity. Such groups are often well trained in recognising problematic content at an early stage, for example, and can thus also be consulted for advice (Rau et al. 2022). The use of counter speech could be another strategy for tackling extremist content in the metaverse (Clever et al. 2022; Hangartner et al. 2021; Kunst et al. 2021; Morten et al. 2020). The term (digital) counter speech refers to comments or other content posted in response to hate speech in order to minimise and weaken its impact or to support potential victims (Ernst et al. 2022; Garland et al. 2022). In this regard, studies have shown that counter speech can be an effective means of tackling and reducing extremist content (Garland et al. 2022; Hangartner et al. 2021). In the context of newer technological complexes, e.g. AI, consideration is currently being given to implementing counter speech automatically in certain circumstances, although final concepts and responsibilities are still the subject of intensive discussion (Clever et al. 2022). 
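
What “algorithmically increased visibility” could mean in practice can be sketched as a simple re-ranking step. The following Python example is a deliberately naive illustration (the account names and the boost factor are invented; production ranking systems are vastly more complex): content from verified civil-society accounts has its ranking score multiplied by a configurable boost.

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    base_score: float  # whatever the platform's base ranker produces

# Hypothetical allow-list of verified democratic/civil-society accounts.
VERIFIED_DEMOCRATIC_ACTORS = {"civil_society_org", "democracy_ngo"}
BOOST = 1.5  # policy parameter: how strongly to amplify trusted voices

def rank(feed):
    def score(post):
        boost = BOOST if post.author in VERIFIED_DEMOCRATIC_ACTORS else 1.0
        return post.base_score * boost
    return sorted(feed, key=score, reverse=True)

feed = [Post("random_creator", 0.8), Post("democracy_ngo", 0.6)]
for post in rank(feed):
    print(post.author)
# Prints 'democracy_ngo' first (0.6 boosted to 0.9), then 'random_creator' (0.8).
```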

In addition to participatory methods, legislation can also be used to prevent extremist content. In Germany, the dissemination of unconstitutional symbols and signs is forbidden and perpetrators can be prosecuted. Germany’s Network Enforcement Act (Netzwerkdurchsetzungsgesetz, NetzDG) also provides a legal framework for dealing with hate crime on social media. In addition, the Terrorist Content Online Regulation (European Union 2021) requires platform operators offering services in the EU to remove or block reported terrorist content within one hour. Recent results of extremism research indicate, however, that so-called legal but harmful content is already proving to be a major challenge and is likely to be of significance in the metaverse as well (Jiang et al. 2021; Rau et al. 2022). This includes, for example, digital content that may have a subtle radicalising effect but is not unlawful. It should be noted in this regard that content moderation must comply with the constitutional principle of free speech. Consequently, it is to be assumed that the ongoing discussion on the relationship between freedom and security will also significantly influence the design of the metaverse, and that this design will or must be the result of a negotiation process involving society as a whole in order to guarantee its democratic dimension.

4 DISCUSSION

If the immersiveness of the metaverse measures up to Mark Zuckerberg’s vision, it is very likely to have a huge impact on our everyday lives and on social interaction. This immersiveness would mean that the operators of the metaverse (or metaverses) would need to deal intensively with questions of democratisation. Not only would the state probably play a (yet to be defined) role in a metaverse, but its users must also be enabled to participate democratically in it. This would help to make the platform inclusive and as safe as possible from malevolent actors.

Building on the social media research of recent decades, there are many common points of reference which can support and steer the design of a democratic metaverse. As mentioned above, the metaverse is still at an early stage of development. However, given the rapid pace of advancement, it is vital to support this process, stay on the ball and take an active role in discussions. A multi-perspective approach from all stakeholders involved is also relevant to ensure a balance between security and freedom for all users. The possibilities outlined here for building a metaverse present some solutions for implementing democratic pillars. In summary, the following solutions should be deployed by platform operators:

  • early implementation of methods for user participation, e.g. mini-publics or independent platform councils
  • strengthening of democratic actors and inclusion of marginalised groups
  • reference to existing scientific research findings on social media, hate speech and (digital) extremism, as well as open cooperation with research institutions
  • offer of educational opportunities in cooperation with democratic actors

Final and concrete implementation is currently still the subject of lively discussion. However, the early development phase of the metaverse encourages active participation – something that is also reflected in this Immersive Democracy project and can be understood as an invitation to join this process. Participation is not a panacea for the dangers that lurk in the digital space. But it is an important source of support that can help to empower marginalised groups or individuals in specific ways and thus give them the tools to work together with operators against discrimination and hate in the metaverse. Now is the time to develop these tools and make sure that a future metaverse is as safe and secure as possible for everyone.

BIBLIOGRAPHY

Basu, Tanya (2021): The metaverse has a groping problem already. MIT Technology Review. Available online at https://www.technologyreview.com/2021/12/16/1042516/the-metaverse-has-a-groping-problem/, last checked on 28 September 2022.

von Behr, Ines; Reding, Anaïs; Edwards, Charlie; Gribbon, Luke (2013): Radicalisation in the Digital Era: The Use of the Internet in 15 Cases of Terrorism and Extremism | Office of Justice Programs. In: RAND Europe.

Bellanova, Rocco; De Goede, Marieke (2022): Co‐Producing Security: Platform Content Moderation and European Security Integration. In: JCMS: Journal of Common Market Studies 60 (5), pp. 1316–1334. https://doi.org/10.1111/jcms.13306

Bertram, Luke (2016): Terrorism, the Internet and the Social Media Advantage: Exploring how terrorist organizations exploit aspects of the internet, social media and how these same platforms could be used to counter-violent extremism. In: Journal for Deradicalization 2016 (7), pp. 225–252.

Bovermann, Philipp (2022): Online-Belästigungen im Metaverse – Am eigenen Leib. Süddeutsche Zeitung. Available online at https://www.sueddeutsche.de/kultur/metaverse-vr-virtual-reality-microsoft-sexuelle-belaestigung-1.5519527?print=true, last checked on 4 February 2022.

Buergerrat.de (2022): Bürgerrat diskutierte über künstliche Intelligenz. Buergerrat.de. Available online at https://www.buergerrat.de/aktuelles/buergerrat-diskutierte-ueber-kuenstliche-intelligenz/, last checked on 22 June 2023.

Bundtzen, Sara; Schwieter, Christian (2023): Datenzugang zu Social-Media-Plattformen für die Forschung: Lehren aus bisherigen Maßnahmen und Empfehlungen zur Stärkung von Initiativen inner- und außerhalb der EU. Berlin: Institute for Strategic Dialogue (ISD).

Clever, Lena; Klapproth, Johanna; Frischlich, Lena (2022): Automatisierte (Gegen-)Rede? Social Bots als digitales Sprachrohr ihrer Nutzer*innen. In: Julian Ernst, Michalina Trompeta and Hans-Joachim Roth (eds.): Gegenrede digital: Neue und alte Herausforderungen interkultureller Bildungsarbeit in Zeiten der Digitalisierung. Wiesbaden: Springer Fachmedien, (Interkulturelle Studien), pp. 11–26. https://doi.org/10.1007/978-3-658-36540-0_2

Conway, Maura (2017): Determining the Role of the Internet in Violent Extremism and Terrorism: Six Suggestions for Progressing Research. In: Studies in Conflict & Terrorism Routledge, 40 (1), pp. 77–98. https://doi.org/10.1080/1057610X.2016.1157408

Cowls, Josh; Darius, Philipp; Santistevan, Dominiquo; Schramm, Moritz (2022): Constitutional metaphors: Facebook’s “supreme court” and the legitimation of platform governance. In: New Media & Society. https://doi.org/10.1177/14614448221085559

Cropley, David H.; Kaufman, James C.; Cropley, Arthur J. (2008): Malevolent Creativity: A Functional Model of Creativity in Terrorism and Crime. In: Creativity Research Journal 20 (2), pp. 105–115. https://doi.org/10.1080/10400410802059424

Diaz, Adriana (2022): Disturbing reports of sexual assaults in the metaverse: ‘It’s a free show’. New York Post. Available online at https://nypost.com/2022/05/27/women-are-being-sexually-assaulted-in-the-metaverse/, last checked on 28 September 2022.

Droogan, Julian; Waldek, Lise; Blackhall, Ryan (2018): Innovation and terror: an analysis of the use of social media by terror-related groups in the Asia Pacific. In: Journal of Policing, Intelligence and Counter Terrorism Routledge, 13 (2), pp. 170–184. https://doi.org/10.1080/18335330.2018.1476773

Engelmann, Severin; Grossklags, Jens; Herzog, Lisa (2020): Should users participate in governing social media? Philosophical and technical considerations of democratic social media. In: First Monday. https://doi.org/10.5210/fm.v25i12.10525

Ernst, Julian; Trompeta, Michalina; Roth, Hans-Joachim (2022): Gegenrede digital – Einleitung in den Band. In: Julian Ernst, Michalina Trompeta and Hans-Joachim Roth (eds.): Gegenrede digital. Wiesbaden: Springer Fachmedien Wiesbaden, (Interkulturelle Studien), pp. 1–7. https://doi.org/10.1007/978-3-658-36540-0_1

Escobar, Oliver; Elstub, Stephen (2017): Forms of Mini-Publics: An introduction to deliberative innovations in democratic practice. (Research and Development Note) New Democracy.

European Union (2021): Regulation (EU) 2021/784 of the European Parliament and of the Council of 29 April 2021 on addressing the dissemination of terrorist content online (Text with EEA relevance). OJ L.

Garland, Joshua; Ghazi-Zahedi, Keyan; Young, Jean-Gabriel; Hébert-Dufresne, Laurent; Galesic, Mirta (2022): Impact and dynamics of hate and counter speech online. In: EPJ Data Science 11 (1), p. 3. https://doi.org/10.1140/epjds/s13688-021-00314-6

Gill, Paul; Corner, Emily; Conway, Maura; Thornton, Amy; Bloom, Mia; Horgan, John (2017): Terrorist Use of the Internet by the Numbers: Quantifying Behaviors, Patterns, and Processes. In: Criminology & Public Policy 16 (1), pp. 99–117. https://doi.org/10.1111/1745-9133.12249

Gillespie, Tarleton (2020): Content moderation, AI, and the question of scale. In: Big Data & Society 7 (2). https://doi.org/10.1177/2053951720943234

Gorwa, Robert; Binns, Reuben; Katzenbach, Christian (2020): Algorithmic content moderation: Technical and political challenges in the automation of platform governance. In: Big Data & Society 7 (1). https://doi.org/10.1177/2053951719897945

Haggart, Blayne; Keller, Clara Iglesias (2021): Democratic legitimacy in global platform governance. In: Telecommunications Policy 45 (6). https://doi.org/10.1016/j.telpol.2021.102152

Hamid, Nafees; Ariza, Cristina (2022): Offline Versus Online Radicalisation: Which is the Bigger Threat? London: Global Network on Extremism & Technology.

Hangartner, Dominik et al. (2021): Empathy-based counterspeech can reduce racist hate speech in a social media field experiment. In: Proceedings of the National Academy of Sciences 118 (50). https://doi.org/10.1073/pnas.2116310118

Jiang, Jialun Aaron; Scheuerman, Morgan Klaus; Fiesler, Casey; Brubaker, Jed R.; Alexandre Bovet (ed.) (2021): Understanding international perceptions of the severity of harmful content online. In: PLOS ONE 16 (8). https://doi.org/10.1371/journal.pone.0256762

Kettemann, Matthias C.; Fertmann, Martin (2021): Die Demokratie Plattformfest Machen: Social Media Councils als Werkzeug zur gesellschaftlichen Rückbindung der privaten Ordnungen digitaler Plattformen. Potsdam-Babelsberg: Friedrich-Naumann-Stiftung.

Kunst, Marlene; Porten-Cheé, Pablo; Emmer, Martin; Eilders, Christiane (2021): Do “Good Citizens” fight hate speech online? Effects of solidarity citizenship norms on user responses to hate comments. In: Journal of Information Technology & Politics 18 (3), pp. 258–273. https://doi.org/10.1080/19331681.2020.1871149

Llansó, Emma J (2020): No amount of “AI” in content moderation will solve filtering’s prior-restraint problem. In: Big Data & Society 7 (1). https://doi.org/10.1177/2053951720920686

Meineck, Sebastian (2021): Trusted Flagger: YouTube serviert freiwillige Helfer:innen ab. netzpolitik.org. Available online at https://netzpolitik.org/2021/trusted-flagger-youtube-serviert-freiwillige-helferinnen-ab/, last checked on 27 June 2023.

Morten, Anna; Frischlich, Lena; Rieger, Diana (2020): Gegenbotschaften als Baustein der Extremismusprävention. In: Josephine B. Schmitt, Julian Ernst, Diana Rieger und Hans-Joachim Roth (eds.): Propaganda und Prävention. Wiesbaden: Springer Fachmedien Wiesbaden, pp. 581–589. https://doi.org/10.1007/978-3-658-28538-8_32

Neuberger, Christoph (2023): Sicherheit und Freiheit in der digitalen Öffentlichkeit. In: Nicole J. Saam and Heiner Bielefeldt (eds.): Sozialtheorie. Bielefeld: transcript Verlag, pp. 297–308.

Pek, Simon; Mena, Sébastien; Lyons, Brent (2023): The Role of Deliberative Mini-Publics in Improving the Deliberative Capacity of Multi-Stakeholder Initiatives. In: Business Ethics Quarterly 33 (1), pp. 102–145. https://doi.org/10.1017/beq.2022.20

Rau, Jan; Kero, Sandra; Hofmann, Vincent; Dinar, Christina; Heldt, Amélie P. (2022): Rechtsextreme Online-Kommunikation in Krisenzeiten: Herausforderungen und Interventionsmöglichkeiten aus Sicht der Rechtsextremismus- und Platform-Governance-Forschung. In: Arbeitspapiere des Hans-Bredow-Instituts SSOAR – GESIS Leibniz Institute for the Social Sciences. https://doi.org/10.21241/SSOAR.78072

Smith, Graham; Setälä, Maija (2018): Mini-Publics and Deliberative Democracy. In: Andre Bächtiger, John S. Dryzek, Jane Mansbridge and Mark Warren (eds.): The Oxford Handbook of Deliberative Democracy. Oxford University Press, pp. 299–314. https://doi.org/10.1093/oxfordhb/9780198747369.013.27

The Associated Press (2022): Musk’s Twitter has dissolved its Trust and Safety Council. National Public Radio (NPR). Washington, DC, 12 December 2022.

Wiederhold, Brenda K. (2022): Sexual Harassment in the Metaverse. In: Cyberpsychology, Behavior, and Social Networking 25 (8), pp. 479–480. https://doi.org/10.1089/cyber.2022.29253.editorial

Workshop report: VR vs. Hate Crime 

On August 22nd, a workshop was held at the University of the Police in Hamburg, where the English non-profit agency Mother Mountain Productions and the Manchester Police presented the virtual reality app “Affinity” for the further training of police officers. The application aims to prepare police officers to handle hate crime situations in an empathetic and professional manner. The workshop, organized by the Immersive Democracy project in cooperation with the Hamburg-based professors Eva Groß and Ulrike Zähringer, offered prospective police officers and experts the opportunity to try out the English-language VR experience and discuss it with the developers and experts.

Hate crime is a major problem for diversity and inclusion in many European countries. It affects people who are exposed to discriminatory attacks as well as hostile acts based on their identity, such as racism, anti-Semitism or transphobia. Many victims complain that they feel that they are not taken seriously by the police or that they are treated inappropriately. Often, this is not due to bad intentions, but to a lack of empathy, understanding and training on the part of police officers.

How can police officers be prepared to deal with such situations in an empathetic and professional manner? The English non-profit agency Mother Mountain Productions, in collaboration with the Manchester Metropolitan Police, has created the virtual reality app “Affinity” to provide further training for police officers. This app enables them to experience real cases of anti-Semitic, transphobic, and ableist hate crime staged by actors, as well as appropriate and inappropriate police responses to the attacks. In the virtual environment, they can observe how their different approaches, body language, cultural competence, and understanding affect those affected. They also learn about the manifestations and narratives of these discriminatory phenomena in order to better recognize anti-Semitic statements, for example.

The app was developed on the basis of extensive research with victims of hate crimes, in order to increase empathy for these victims and the professionalism of police officers. Evaluation data showed that the app had a strong, long-lasting effect on police officers’ attitudes and behavior, as well as a reinforcing effect among those who were already sensitized. Participants in the workshop also commented positively on “Affinity.” Statements from police students included:

– “It was like I was experiencing it myself”

– “I was spoken to directly”

– “You paid more attention than you would in a film, where you often look away and get distracted”

– “It made me understand more how the characters were feeling”

– “It’s a good way to get people’s perspective, which is important when they interact with the police”

– “I immediately felt a tension that I’m sure victims of hate crime feel when they know it could happen again”

During the discussions, participants not only evaluated their experiences but also explored the possibilities of using VR technologies in Germany, for example in dedicated services to support victims of hate crime.

A detailed report will be published on this website in the coming months.

HATE SPEECH IN THE METAVERSE

Esen Küçüktütüncü is a documentary filmmaker, visual artist, and researcher in VR (Virtual Reality) and Clinical Psychology. She completed her Master’s degree in Cognitive Systems and Interactive Media at Universitat Pompeu Fabra and is currently a PhD Candidate at Universitat Barcelona within the Event Lab under the supervision of Mel Slater. Her PhD research focuses on the development of shared virtual environments and virtual agents, tackling conflict resolution and social harmony.

Danielle Shanley is a post-doctoral researcher at the Faculty of Arts and Social Sciences at Maastricht University in the Netherlands. She is currently exploring the ethical implications of immersive environments (which combine advanced technologies such as virtual reality and artificial intelligence). Dani’s expertise is mainly within science and technology studies (STS) and the philosophy of technology, with a particular focus on reflexive, participatory design methodologies (or, responsible innovation), such as social labs and value sensitive design (VSD).

Hate speech, as defined by the United Nations (UN), refers to any form of communication, gesture, or conduct that may incite violence, discrimination, hostility, or prejudicial action against individuals or groups based on attributes such as race, ethnicity, religion, gender, sexual orientation, or other characteristics. It encompasses expressions that demean, dehumanize, or stereotype individuals or communities, perpetuating harmful stereotypes and promoting intolerance (UN n.d.).

Despite this definition appearing relatively clear and concise, defining hate speech is no easy task. Any attempt at defining hate speech effectively has to try to strike a balance between safeguarding freedom of expression–as a fundamental human right–and protecting individuals and groups from harm. This is further complicated given the fact that different individuals may perceive the same speech act differently, and what one person considers offensive or harmful, another may view as a legitimate expression of opinion. The subjective nature of determining whether specific expressions cross the line into hate speech makes it challenging to establish universally applicable definitions.

Despite these definitional difficulties, it is crucial to engage with the topic, especially in online settings like the metaverse. As we learned from early research into virtual worlds, what happens within these worlds can shape the values that influence individuals’ real-world lives, and vice versa. For example, writing in the early 1990s, Christine Ward Gailey argued that video games reflect dominant cultural values in society, reinforcing and promoting behaviors that align with the dominant ideology (Gailey 1993). In essence, commercially successful games often replicate and reinforce the values and activities associated with prevailing societal norms.

BIASES IN VIRTUAL WORLDS

As a result, it is perhaps unsurprising that the portrayal of different social groups in the virtual worlds of video games tends to reflect existing biases held by individuals in the real world, including sexism and racism. For example, the gender stereotyping of video game characters’ appearances has been well documented: women are often depicted as thin with large breasts and emphasized sexual features (a trend epitomized by the character Lara Croft in the Tomb Raider series), while men are often portrayed with muscular physiques and aggressive attitudes. Psychological studies among adolescents and college students indicate that exposure to these stereotypical portrayals can desensitize individuals to real-world sexism, making instances of sexism seem less shocking and potentially perpetuating harmful beliefs such as “rape myths” that blame victims of sexual assault (Breuer et al. 2015).

Of course, sexism in video games is not only an issue when it comes to content. User behavior, particularly within multiplayer online games, is also often deeply problematic. For example, female players are often targeted and harassed and have to develop their own coping mechanisms and strategies in order to safeguard themselves against undesirable behavior. 

Racist imagery has also been a persistent issue in video games since the early 1990s. Games like Duke Nukem 3D and Shadow Warrior exemplify how troubling racial stereotypes have shaped the narratives of video games. Duke Nukem 3D’s storyline revolves around eugenic panic concerning race mixing between invading aliens and white women in a future Los Angeles depicted as mono-ethnic. The main character embarks on a mission to stop the alien invaders in order to preserve the genetic purity of the human species. In Shadow Warrior meanwhile, the protagonist’s stereotypical and generic “Asian” identity is accompanied by a skill set portrayed as biologically determined, perpetuating ideas about the character’s perceived deficient masculinity (Weise 2021).

SEXIST AND RACIST IDEAS HAVE REAL WORLD CONSEQUENCES

The sexist and racist ideas and representations depicted in video games can transcend the virtual world and have real-world consequences. Psychological research suggests that playing violent video games can increase ethnocentrism and trigger heightened aggression when individuals encounter someone who is different from themselves (Ewoldsen et al. 2012). Despite activists like Anita Sarkeesian taking up these issues within the gaming industry,1 hate and discriminatory attitudes within video games have become normalized over time, which is likely to influence the design and development of other virtual worlds too. In 2022, media outlets were quick to notice the metaverse’s “groping problem”, which only goes to show how important it is that we address and challenge these issues in order to create inclusive and safe online environments. 

HATE SPEECH ON SOCIAL VR

On social VR platforms, such as VRChat, AltSpace, and Meta Horizons, hate speech often takes forms similar to those found on more traditional social media. Users can engage in discriminatory or offensive behavior targeting individuals or groups based on their characteristics or identities.

  • Racist or derogatory remarks: Users may verbally express racial slurs, engage in racial stereotyping, or make discriminatory comments based on a person’s race or ethnicity.
  • Homophobic or transphobic behavior: Hate speech can manifest as verbal harassment, bullying, or exclusion, targeting individuals based on their sexual orientation or gender identity.
  • Religious or cultural intolerance: Users may engage in hate speech by expressing discriminatory attitudes or insulting remarks against specific religions or cultural groups.
  • Cyberbullying and harassment: Similar to traditional social media, social VR platforms can become spaces for targeted harassment, where individuals are subjected to online bullying, threats, or offensive behavior.

While there are important lessons to be learned from how these forms of behavior play out on social media, there are a number of important differences between traditional platforms and social VR, particularly given the latter’s immersive nature. 

Social VR platforms introduce additional elements to user experiences, which can affect the nature and extent of hate speech, bullying, and discrimination. These platforms offer users the opportunity to embody avatars and engage in more immersive interactions. As a result, new or different forms of behavior may emerge. Some examples of this are:

  • Nonverbal expressions: Hate speech can extend beyond verbal communication. Users can utilize avatars to engage in offensive or discriminatory gestures, actions, or visual representations, which can amplify the impact of hate speech.
  • Spatial proximity and presence: In VR environments, users can physically navigate and interact with others in close proximity. This physical presence can intensify the emotional impact of hate speech, leading to increased feelings of harassment or discrimination (a possible countermeasure is sketched after this list).
  • Immersive experiences and anonymity: The immersive nature of VR can provide a heightened sense of anonymity and disinhibition, potentially leading to more extreme or offensive behavior compared to traditional social media platforms.
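
One concrete countermeasure to proximity-based harassment that platforms have experimented with is a “personal boundary”: a minimum avatar-to-avatar distance enforced by the client. The following Python sketch shows only the underlying geometry; the boundary radius is an assumed value, and real implementations are engine-specific and considerably more involved:

```python
import math

PERSONAL_BOUNDARY_M = 1.2  # assumed radius; platforms choose their own values

def too_close(pos_a, pos_b):
    """True if two avatar positions violate the personal boundary."""
    return math.dist(pos_a, pos_b) < PERSONAL_BOUNDARY_M

def clamp_position(mover, other):
    """Push the moving avatar back to the boundary instead of letting it
    intrude: the movement is simply blocked, no user is punished."""
    if not too_close(mover, other):
        return mover
    dx, dy, dz = (m - o for m, o in zip(mover, other))
    norm = math.sqrt(dx * dx + dy * dy + dz * dz) or 1.0  # avoid division by zero
    scale = PERSONAL_BOUNDARY_M / norm
    return tuple(o + d * scale for o, d in zip(other, (dx, dy, dz)))

print(too_close((0, 0, 0), (0.5, 0, 0)))       # True: 0.5 m < 1.2 m
print(clamp_position((0.5, 0, 0), (0, 0, 0)))  # pushed back to (1.2, 0.0, 0.0)
```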

MITIGATING HATE SPEECH ON SOCIAL VR

To mitigate the negative impact of hate speech in social VR platforms, it is important for platform providers to prioritize proactive moderation, establish clear community standards, and promote user empowerment through reporting tools and educational initiatives. Collaborative efforts involving platform developers, users, and relevant stakeholders can help create safe and welcoming virtual environments where users can freely express themselves without fear of harassment or discrimination.

Recent literature on technology ethics emphasizes the value of deliberative engagement for shaping technological development.2 These works widely – both implicitly and explicitly – draw on concepts of “deep democracy” (Buhmann and Fieseler 2021) by highlighting the epistemic potential of open engagement processes. Broadly speaking, this literature proposes that innovators, as proactive participants in a wider public debate and discourse, can contribute to responsible processes of innovation. Essentially, the goal is to harness the potential of different forms of engagement in order to help find optimal solutions. 

Around 2010, the term “Responsible Innovation” (RI) became a popular way of talking about responsibility-related issues for academics and policy-makers alike. It is used to refer to a way of organizing research and innovation so that its impacts are safe, equitable, and aligned with societal needs. 

New and emerging technologies, like AI and VR, are going to shape our future in powerful new ways. As we can already see, this means we will need to confront new questions about risks, ethics, justice, and equity. Responsible innovation essentially provides the concepts and practices required to address these sorts of questions, helping us to think about things like hype, scale, power, and inclusion in research and innovation.

Drawing on the work of Buhmann and Fieseler, responsible innovation can be seen as encompassing three main dimensions (which are often reflected in the public discourse surrounding new and emerging technologies). First, the responsibility to avoid harm, which refers, for example, to risk management approaches intended to control for potentially harmful consequences. Second, the responsibility to do good, which refers to the improvement of living conditions, such as those set out in the Sustainable Development Goals. Finally, governance responsibility, which refers to the responsibility to create and support global governance structures that can facilitate the former two responsibilities (ibid.).  

Examples of the sorts of tools and concepts that fall under the umbrella of responsible innovation are:

  • Value Sensitive Design: A framework for exploring stakeholders’ values in order to then translate those values into operational design criteria, through iterative conceptual, empirical, and technical investigations. VSD asks questions such as: Which values should be included in a design? How can these values be brought to bear on the design process? How should choices and trade-offs between conflicting values be made? How can we verify whether the designed system embodies the intended values?
  • Scenario Planning Workshops: Narratives, or scenarios, are essentially hypothetical sequences of events constructed for focusing attention on causal processes and decision points. In this sense, the development of scenarios can be used for learning and deliberation, producing decision-making processes that are based on the involvement and interaction of different stakeholders.
  • Envisioning Cards: Combining both VSD and scenario planning, the Envisioning Cards are built upon a set of envisioning criteria intended to raise awareness of long-term and systemic issues in design. The cards provide prompts for thinking through various implications and value tensions and can be used within workshops or team meetings to trigger discussion and reflection. 

It is ultimately in and through these sorts of approaches to technology development that we can try to confront and mitigate potential harms, such as hate speech and bias, before they become locked in. 

DISCUSSION

With regard to social VR applications, we can already see the emergence of several key issues that need to be addressed sooner rather than later. For example, managing user behavior on online platforms presents a serious challenge due to the fragmented domain and the multiple systems involved. Each platform may have its own rules and guidelines, leading to conflicting values and standards. Furthermore, enforcing age restrictions becomes difficult, as users can misrepresent their age or simply bypass the restrictions. Another issue is that the speed of development and deployment in the digital space often outpaces the ability to implement effective moderation measures. It is also difficult to legally mandate industry-wide standards, making it hard to establish consistent guidelines for content regulation. Testing these platforms under real-world conditions is also complicated, as the virtual environment is still relatively uncharted territory. In light of these factors, we must ensure that virtual worlds are developed responsibly, which will require continuous adaptation and collaboration between platform developers, users, and regulatory bodies.

Until now, efforts to combat hate speech have largely involved fostering dialogue, promoting education, and encouraging media literacy to enhance understanding, empathy, and respect among individuals and communities. In the case of the development of the metaverse, collaborations between governments, civil society organizations, and technology companies will also play a crucial role in developing guidelines, policies, and tools to address hate speech effectively while respecting diverse perspectives and cultural sensitivities.

It is important to acknowledge that while social VR platforms undoubtedly offer opportunities for socialization and creativity, they also face significant challenges when it comes to addressing hate speech, bullying, and discrimination. As discussed, moderating content and enforcing policies are complex tasks due to the dynamic and immersive nature of VR environments. However, drawing on ideas and concepts from responsible innovation, implementing reporting mechanisms, educating users, and developing community guidelines can, and more importantly should, all play a role in fostering inclusive and respectful virtual communities.
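As a purely illustrative example of one such building block, the following Python sketch shows what a minimal in-world reporting mechanism might look like at the data level: reports are collected, and users who are reported repeatedly are escalated to human review. The categories, the threshold, and all names are hypothetical assumptions on our part, not the API or policy of any real platform.

    import time
    import uuid
    from dataclasses import dataclass, field
    from enum import Enum

    class Category(Enum):
        HATE_SPEECH = "hate_speech"
        HARASSMENT = "harassment"
        OTHER = "other"

    @dataclass
    class Report:
        reporter_id: str
        reported_id: str
        category: Category
        created_at: float = field(default_factory=time.time)
        report_id: str = field(default_factory=lambda: str(uuid.uuid4()))

    class ReportQueue:
        """Collects reports and escalates users who are reported repeatedly."""

        ESCALATION_THRESHOLD = 3  # hypothetical value; real systems tune this

        def __init__(self):
            self._reports = []

        def submit(self, report):
            self._reports.append(report)
            if self._count_against(report.reported_id) >= self.ESCALATION_THRESHOLD:
                self._escalate(report.reported_id)

        def _count_against(self, user_id):
            return sum(1 for r in self._reports if r.reported_id == user_id)

        def _escalate(self, user_id):
            # In practice this would notify human moderators; here we only log.
            print(f"User {user_id} escalated for human review.")

    # Example usage:
    queue = ReportQueue()
    queue.submit(Report("user_a", "user_b", Category.HATE_SPEECH))

Even a sketch this simple makes visible the design choices that the responsible innovation literature asks us to deliberate: what counts as a reportable category, how much context is stored about the reported interaction, and at what point automated counting hands over to human judgment.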

LIST OF REFERENCES

  • UN (n.d.). What is hate speech? https://www.un.org/en/hate-speech/understanding-hate-speech/what-is-hate-speech (Accessed on 27th of June 2023).
  • Gailey, C. W. (1993). Mediated messages: Gender, class, and cosmos in home video games. Journal of Popular Culture, 27(1), 81.
  • Breuer, J., Kowert, R., Festl, R., & Quandt, T. (2015). Sexist games = sexist gamers? A longitudinal study on the relationship between video game use and sexist attitudes. Cyberpsychology, Behavior, and Social Networking, 18(4), 197-202.
  • Weise, M. (2021). The Hidden, Destructive Legacy of ‘Duke Nukem’. https://www.vice.com/en/article/pkdvxb/the-hidden-destructive-legacy-of-duke-nukem (Accessed on 27th of June 2023).
  • Ewoldsen, D. R., Eno, C. A., Okdie, B. M., Velez, J. A., Guadagno, R. E., & DeCoster, J. (2012). Effect of playing violent video games cooperatively or competitively on subsequent cooperative behavior. Cyberpsychology, Behavior, and Social Networking, 15(5), 277-280.
  • Buhmann, A., & Fieseler, C. (2021). Towards a deliberative framework for responsible innovation in artificial intelligence. Technology in Society, 64, 101475.
  • Feminist Frequency (2013). Damsel in Distress: Part 1 – Tropes vs Women in Video Games. https://youtu.be/X6p5AZp7r_Q (Accessed on 27th of June 2023).
  • See, for example, a number of articles published in the Journal of Responsible Innovation and the Journal of Responsible Technology.

The 8th International XR Metaverse Conference was held on June 28-30 at the University of Nevada, Las Vegas. Experts from academia and industry discussed the opportunities and challenges of immersive technologies from different disciplinary perspectives. The Immersive Democracy project of the European Metaverse Research Network presented its research results and networked with the international community.

The 2023 International XR Metaverse Conference was a remarkable event that brought together leading experts, scholars, and enthusiasts from around the world to discuss and explore the future of XR (Extended Reality) and the Metaverse. With its diverse range of sessions, workshops, and presentations, the conference provided an insightful platform for understanding the potential impact of XR and the Metaverse on various industries and aspects of our lives. Researchers from around the world presented projects demonstrating the diverse applications of XR/Metaverse technologies in the fields of medicine, business, tourism, culture, and society, among others.

The presentations of many young researchers showed that the importance of the Metaverse as a field of research is growing, and highlighted the cultural, structural, and technological challenges for the acceptance and adaptation of the diverse technologies. Topics ranged from sustainable travel behavior and communication in VR learning environments to the influence of value expectations and personal characteristics on avatar creation in the Metaverse.

Blair MacIntyre from JP Morgan Chase presented the potential of XR technologies in the financial sector, emphasizing opportunities for diversity, equity, and inclusion in breaking down various barriers. Alex Leavitt from Roblox spoke about safeguarding in the Metaverse, covering issues such as abuse, exploitation, well-being, control, civility, norms, and pro-social design, and how growing Metaverse platforms can address them.

Murphy’s presentation on the role of the Metaverse in international climate law also stimulated discussion on energy consumption and the dangers of greenwashing by companies pushing the energy-intensive Metaverse. 

The importance of the gaming industry and gaming communities to the Metaverse was particularly evident in Las Vegas, where the conference was held at the International Gaming Institute at the University of Nevada, Las Vegas. The juxtaposition of virtual and physical gaming experiences made it clear that immersive virtual experiences can complement, but not replace, activities in physical reality.

Matthias Quent presented the Immersive Democracy Project and the challenges of immersive virtual technologies for democratic culture at the macro, meso and micro levels. He emphasized the importance of involving civil society in this development and of taking minimum democratic standards into account, for example, in online voting.

In the discussions, it became clear that the question of how the Metaverse can be used for societal progress, and how damage to democracy can be prevented, poses an important challenge for many actors.

The 2023 International XR Metaverse Conference was undoubtedly a great success, bringing together brilliant minds, cutting-edge research, and innovative ideas. As we look to the future, it is clear that XR and the Metaverse have immense potential to redefine industries, revolutionize experiences, and change the way we perceive and interact with the world around us. The conference served as a catalyst, paving the way for further exploration and collaboration in this exciting and rapidly evolving field.

Our thanks go to the conference organizers, especially Timothy Jung and M. Claudia tom Dieck of Manchester Metropolitan University, for such a diverse and interesting conference with many opportunities for networking and further collaboration.

Conference Website: https://iaiti.org/conference

Interdisciplinary network aims to advance research on the metaverse

European Metaverse Research Network launched

Barcelona, 17 May 2023

Eight research institutions working in seven different countries in Europe have launched the European Metaverse Research Network. The purpose of the European Metaverse Research Network (EMRN) is to carry out research that maps out both the highly beneficial possibilities inherent in the metaverse and the possible risks and challenges related to it. The network will achieve this through interdisciplinary research and through collaboration with industry partners, civil society organizations and policymakers, ensuring knowledge sharing and building the metaverse responsibly.

“The importance of the type of research carried out by the EMRN cannot be underestimated. The EMRN must negotiate its research program in the pendulum that swings from the exciting possibilities that could emerge from this revolutionary technological transformation to the challenges involved. The role of EMRN is to support the process of developing the metaverse in a responsible manner,” says VR researcher Mel Slater, the coordinator of the network.

The founding members of the EMRN are the following research institutions: Paris Dauphine-PSL University (France), Magdeburg-Stendal University of Applied Sciences (Germany), Milan Polytechnic University (Italy), Renaissance Numérique (France), RISE Research Institutes of Sweden (Sweden), TNO The Netherlands Organisation for Applied Scientific Research (The Netherlands), University of Alicante (Spain), and University of Technology in Poznan (Poland). The network is being led independently by VR researcher Mel Slater (University of Barcelona) in collaboration with the previously listed research institutions. The network is an initiative that stems from a group of research institutions which received funding through an unrestricted donation by Meta in 2022.

“The metaverse is set to change the way we meet and collaborate, enabling us to break distance barriers of our shared experiences and creating new possibilities for social connections combining digital and physical worlds. Exploring the vast potential of the metaverse, for example in the context of defining the future of work, requires a collaborative and interdisciplinary approach, and the European Metaverse Research Network (EMRN) is at the forefront of this effort. By bringing together experts and institutes from a wide range of fields and fostering innovative research, EMRN is playing a crucial role in shaping the future of this transformative emerging technology and ensuring that it reflects European values and priorities in building a responsible metaverse,” says Sylvie Dijkstra-Soudarissanane from TNO The Netherlands Organisation for Applied Scientific Research, one of the EMRN’s founding members.

The idea of this interdisciplinary network is to enhance research on the metaverse, to support knowledge sharing, to arrange regular metaverse lectures, and to produce joint publications. For more information about the network, please visit its website.

Hate Speech in the Metaverse

June 1, 2023 | 10:00 – 11:30 CEST | online

The event will explore the impact of bias and hate speech in extended reality (XR) platforms and discuss ways of prevention.

This presentation will explore the impact of bias and hate speech on social harmony in extended reality (XR) platforms. Drawing on research from psychology, ethics, and linguistics, we will discuss the language and practices that contribute to bias and hate speech and how they affect individuals and communities. Having examined the ways in which bias and hate speech are propagated in XR spaces, we will then explore a number of ways in which they could be prevented or mitigated.

The event will be held in English.

Speaker:

Danielle Shanley

is a post-doctoral researcher at the Faculty of Arts and Social Sciences at Maastricht University in the Netherlands. She is currently exploring the ethical implications of immersive environments (which combine advanced technologies such as virtual reality and artificial intelligence). Dani’s expertise is mainly within science and technology studies (STS) and the philosophy of technology, with a particular focus on reflexive, participatory design methodologies (or, responsible innovation), such as social labs and value sensitive design (VSD).

Esen Küçüktütüncü

is a documentary filmmaker, visual artist, and researcher in VR (Virtual Reality) and Clinical Psychology. She completed her Master’s degree in Cognitive Systems and Interactive Media at Universitat Pompeu Fabra and is currently a PhD candidate at the University of Barcelona in the Event Lab under the supervision of Mel Slater. Her PhD research focuses on the development of shared virtual environments and virtual agents, tackling conflict resolution and social harmony.
