Safeguarding Youth in the Metaverse
This project examines how social dynamics in the Metaverse impact youth self-disclosure. Through virtual bystander experiments, it develops privacy strategies to protect young users from risks.
Factsheet
- Schools involved
  School of Social Work
  School of Engineering and Computer Science
  Business School
- Institute(s)
  Institute for Human Centered Engineering (HUCE)
  Institute for Digital Technology Management
  Institute for Specialised Didactics, Profession Development and Digitalisation
- Strategic thematic field
  Thematic field "Humane Digital Transformation"
- Funding organisation
  BFH
- Duration (planned)
  01.01.2025 - 31.12.2025
- Head of project
  Prof. Dr. Roman Rietsche
- Project staff
  Prof. Dr. Roman Rietsche
  Livia Müller
  Prof. Marcus Hudritsch
  Prof. Dr. Manuel David Bachmann
  Sascha Ledermann
- Keywords
  Metaverse, privacy, self-disclosure, immersion, youth, virtual reality, social interaction, AI, digital ethics, Privacy Calculus
Situation
The Metaverse is becoming a key platform for social interaction, especially among young people. By 2030, it is projected to reach a market volume of $800 billion. A major issue is young users’ lack of privacy awareness, leading to excessive personal information sharing. The absence of physical cues (e.g., body proximity, facial expressions) and the immersive nature of virtual spaces influence behavior and risk perception. This project investigates how virtual environments and the presence of others affect the disclosure of sensitive information. Using theories like Privacy Calculus and Information Disclosure, it aims to identify behavioral patterns and develop new protective mechanisms.
Course of action
- Conceptual model development: examines factors such as trust, risk-benefit assessment, anonymity perception, and immersion.
- Experimental design: adolescents (ages 13–18) are recruited to interact in virtual scenarios; different conditions (with/without virtual bystanders) are tested.
- Data collection & analysis: self-disclosures are coded and analyzed using statistical methods (ANOVA, regression).
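The planned comparison of disclosure behavior across bystander conditions can be sketched with a one-way ANOVA. The following is a minimal illustration only: the group names, scores, and sample sizes are invented, and the project's actual analysis may use dedicated statistics software rather than hand-rolled code.

```python
# Hypothetical illustration: one-way ANOVA comparing self-disclosure scores
# across experimental conditions (no bystander vs. virtual bystander).
# All data below are invented for demonstration purposes.

def one_way_anova(groups):
    """Return the F statistic for a one-way ANOVA over a list of groups."""
    k = len(groups)                                   # number of conditions
    n_total = sum(len(g) for g in groups)             # total observations
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-group sum of squares: group means vs. grand mean
    ss_between = sum(
        len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups
    )
    # Within-group sum of squares: scores vs. their own group mean
    ss_within = sum(
        sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups
    )
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Invented disclosure scores (e.g., number of sensitive items shared)
no_bystander = [7, 9, 8, 10, 9]
with_bystander = [5, 6, 4, 6, 5]
f_stat = one_way_anova([no_bystander, with_bystander])
print(f"F = {f_stat:.2f}")  # prints "F = 28.90"
```

A large F value here would indicate that mean disclosure differs markedly between conditions; in practice the F statistic would be compared against the F distribution to obtain a p-value.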
Result
- Development of data-driven design principles for safe interactions in the Metaverse.
- Identification of key factors that help reduce privacy risks.
- Proposals for adaptive privacy mechanisms, such as visual indicators of virtual listeners or AI-driven warning systems.
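One way to picture such an adaptive privacy mechanism is a simple rule that flags a message when it contains sensitive content while virtual bystanders are present. This sketch is purely illustrative: the function name, keyword list, and threshold are assumptions, and a real system would likely use far richer signals than keyword matching.

```python
# Hypothetical sketch of an AI-driven privacy warning trigger.
# Keyword list, names, and threshold are invented for illustration only.

SENSITIVE_KEYWORDS = {"address", "school", "phone", "password", "age"}

def should_warn(message: str, bystander_count: int, threshold: int = 1) -> bool:
    """Flag a message for a privacy warning when it mentions sensitive
    topics and one or more virtual bystanders can overhear it."""
    hits = sum(
        1 for word in message.lower().split()
        if word.strip(".,!?") in SENSITIVE_KEYWORDS
    )
    return bystander_count > 0 and hits >= threshold

# Example: a message naming a school while two bystanders are listening
print(should_warn("my school is nearby", bystander_count=2))  # prints "True"
```

In a deployed system, such a check could drive the visual indicators mentioned above, e.g., highlighting avatars within listening range before the user sends the message.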
Looking ahead
The project provides empirical insights into digital privacy and supports platform developers in creating safer virtual environments. It contributes to BFH’s strategic research and could serve as a foundation for further third-party funded projects.