Generative AI for the Sound Arts and Music Performance
Investigate the potential of Generative AI through an interdisciplinary project that explores the application of AI for Sound Art creation.
Factsheet
- Schools involved: Bern Academy of the Arts; School of Engineering and Computer Science
- Institute(s): Institute Interpretation; Institute for Data Applications and Security (IDAS)
- Research unit(s): Intersection of Contemporary Music; IDAS / Applied Machine Intelligence
- Funding organisation: BFH
- Duration (planned): 01.10.2022 - 31.01.2023
- Head of project: Prof. Dr. Souhir Ben Souissi
- Project staff: Prof. Dr. Teresa Carrasco; Franziska Baumann
- Keywords: Generative AI, Novel research, Sound Art, Augmented Intelligence
Situation
Generative AI is taking the academic and industrial world by storm. Deep Learning architectures such as Transformers can sustain textual conversations with humans, produce realistic and surrealistic images from textual descriptions, and generate plausible chemical compounds for drug discovery; these are merely the first few prominent examples. Here at BFH, our research and educational efforts around AI and Deep Learning have thus far focused on Computer Vision and NLP (Natural Language Processing) for classification, segmentation, regression, prediction and decision making. With this project we aim to expand our portfolio to content generation with an intriguing and interdisciplinary case study: the use of Generative AI for the Sound Arts.
Course of action
The project will run as a collaboration between the Engineering and Art departments of BFH. From a computational perspective, we will explore:

- Generation of music lyrics through transfer learning and LLMs (Large Language Models), in both a co-operative (human/machine) mode and a semi-independent mode (artificial lyrics produced from an initial seed).
- Generation of MIDI music scores for different music genres, using RNNs (Recurrent Neural Networks) and Transformers, exploring both offline generation (with longer inference times) and online generation (with real-time segments produced during a performance).
- Offline and real-time generation of contextual visualizations (images and video sequences) using transfer learning and diffusion models, to accompany live music performances.
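To illustrate the semi-independent mode described above (artificial lyrics produced from an initial seed), the sketch below shows seed-conditioned text continuation with a simple word-level Markov chain. This is a deliberately minimal, dependency-free stand-in for the LLM-based approach the project plans to use; the function names, the toy corpus, and the second-order context are illustrative assumptions, not part of the project's actual pipeline.

```python
import random
from collections import defaultdict

def build_model(corpus, order=2):
    """Map each `order`-word context to the words observed after it.

    A toy stand-in for an LLM: real lyric generation would instead
    fine-tune a pretrained Transformer on a lyrics corpus.
    """
    words = corpus.split()
    model = defaultdict(list)
    for i in range(len(words) - order):
        context = tuple(words[i:i + order])
        model[context].append(words[i + order])
    return model

def generate(model, seed, length=12, rng=None):
    """Continue the seed word by word, sampling from learned contexts.

    Mirrors the semi-independent mode: the human supplies only the
    initial seed, and the model produces the rest of the line.
    """
    rng = rng or random.Random(0)
    out = seed.split()
    order = len(next(iter(model)))
    while len(out) < length:
        context = tuple(out[-order:])
        candidates = model.get(context)
        if not candidates:  # unseen context: stop early
            break
        out.append(rng.choice(candidates))
    return " ".join(out)

# Hypothetical usage on a tiny example corpus:
corpus = "the night is young the night is long the song is young"
model = build_model(corpus)
print(generate(model, "the night", length=8))
```

In the co-operative mode, the same loop would instead alternate between human-written phrases and machine continuations, with the human extending the seed after each generated segment.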