Generative AI for the Sound Arts and Music Performance
Investigate the potential of Generative AI through an interdisciplinary project that explores the application of AI for Sound Art creation.
Fact sheet

Participating departments: Haute école des arts de Berne; Technique et informatique
Institute(s): Institut Interprétation; Institute for Data Applications and Security (IDAS)
Research unit(s): Intersections de la musique contemporaine; IDAS / Applied Machine Intelligence
Funding organisation: BFH
Duration (planned): 01.10.2022 - 31.01.2023
Project lead: Prof. Dr. Souhir Ben Souissi
Project team: Prof. Dr. Teresa Carrasco; Franziska Baumann
Keywords: Generative AI, Novel research, Sound Art, Augmented Intelligence
Situation
Generative AI is taking the academic and industrial world by storm. The ability of Deep Learning architectures such as Transformers to sustain textual conversations with humans, produce realistic and surrealistic images from textual descriptions, or generate plausible chemical compositions for drug discovery are merely the first few prominent examples. Here at BFH, our research and educational efforts around AI and Deep Learning have thus far focused on Computer Vision and NLP (Natural Language Processing) for classification, segmentation, regression, prediction and decision making. With this project we aim to expand our portfolio to content generation with an intriguing, interdisciplinary case study: the use of Generative AI for the Sound Arts.
Approach
The project will run as a collaboration between the Engineering and Arts departments of BFH. From a computational perspective, we will explore:

• Generation of music lyrics through transfer learning and LLMs (Large Language Models), in both a cooperative mode (human/machine) and a semi-independent mode (artificial lyrics produced from an initial seed).
• Generation of MIDI music scores for different music genres using RNNs (Recurrent Neural Networks) and Transformers, exploring both offline generation (with longer inference times) and online generation (real-time segments produced during a performance).
• Offline and real-time generation of contextual visualizations (images and video sequences) that can accompany live music performances, using transfer learning and diffusion models.
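To illustrate the core mechanism behind the score-generation strand, the sketch below shows autoregressive sampling, the idea that RNNs and Transformers share: each new MIDI note is drawn conditioned on the notes generated so far. The hand-made transition table is a toy stand-in for a trained model and is purely an assumption for demonstration; a real system would learn these conditional distributions from a score corpus.

```python
import random

# Toy stand-in for a trained sequence model: for each MIDI note, a list
# of plausible successor notes (rough C-major motion). In the project,
# an RNN or Transformer would provide these conditional distributions.
TRANSITIONS = {
    60: [62, 64, 67],   # C4 -> D4 / E4 / G4
    62: [60, 64, 65],
    64: [62, 65, 67],
    65: [64, 67, 69],
    67: [65, 69, 72],
    69: [67, 72, 60],
    72: [69, 67, 64],
}

def generate_notes(seed_note: int, length: int, rng: random.Random) -> list[int]:
    """Autoregressively extend a seed note into a MIDI note sequence:
    at each step, sample the next note conditioned on the previous one."""
    notes = [seed_note]
    for _ in range(length - 1):
        candidates = TRANSITIONS.get(notes[-1], [60])  # fall back to C4
        notes.append(rng.choice(candidates))
    return notes

rng = random.Random(42)          # fixed seed for a reproducible "performance"
melody = generate_notes(60, 16, rng)
print(melody)
```

Because each step only needs the notes produced so far, the same loop supports both offline generation (sample the whole sequence, then render it) and online generation (emit each note to a synthesizer as it is sampled during a performance).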