Generative AI for the Sound Arts and Music Performance

Investigate the potential of Generative AI through an interdisciplinary project exploring its application to Sound Art creation.

Fact Sheet

  • Lead department: Technik und Informatik
  • Other departments: Hochschule der Künste Bern
  • Institute(s): Institut Interpretation
    Institute for Data Applications and Security (IDAS)
  • Research unit(s): Schnittstellen der zeitgenössischen Musik
    IDAS / Applied Machine Intelligence
  • Funding organisation: BFH
  • Duration (planned): 01.10.2022 - 31.01.2023
  • Head of project: Prof. Dr. Souhir Ben Souissi
  • Project management: Prof. Dr. Souhir Ben Souissi
  • Project staff: Prof. Dr. Teresa Carrasco
    Franziska Baumann
  • Keywords: Generative AI, Novel research, Sound Art, Augmented Intelligence

Background

Generative AI is taking the academic and industrial world by storm. Deep Learning architectures such as Transformers can sustain textual conversations with humans, produce realistic and surrealistic images from textual descriptions, and generate plausible chemical compounds for drug discovery; these are merely the first few prominent examples. Here at BFH, our research and educational efforts around AI and Deep Learning have thus far focused on Computer Vision and NLP (Natural Language Processing) for classification, segmentation, regression, prediction and decision making. With this project we aim to expand our portfolio to content generation with an intriguing and interdisciplinary case study: the use of Generative AI for the Sound Arts.

Approach

The project will run as a collaboration between the Engineering and Art departments of BFH. From a computational perspective, we will explore:

  • Generation of music lyrics through transfer learning and LLMs (Large Language Models), in both a co-operative mode (human/machine) and a semi-independent mode (artificial lyrics produced from an initial seed); a minimal sketch follows below.
  • Generation of MIDI music scores for different music genres, using RNNs (Recurrent Neural Networks) and Transformers, exploring both offline generation (with longer inference times) and online generation (with real-time segments produced during a performance); see the second sketch below.
  • Offline and real-time generation of contextual visualizations (images and video sequences) using transfer learning and diffusion models, which can accompany live music performances; see the third sketch below.
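For the lyric-generation strand, the following is a minimal sketch of the co-operative mode with a pretrained LLM via the Hugging Face transformers library. The model choice (gpt2), the seed line, and the sampling parameters are illustrative assumptions, not project decisions:

    # Co-operative mode: a human-written seed line is extended by a
    # pretrained causal language model; the artist picks among candidates.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")  # placeholder model

    seed = "Under the neon rain we sing"  # human-written seed line
    candidates = generator(
        seed,
        max_new_tokens=40,       # short segments suit call-and-response work
        num_return_sequences=3,  # offer the artist several continuations
        do_sample=True,
        temperature=0.9,         # higher temperature -> more surprising lyrics
        top_p=0.95,
    )
    for i, c in enumerate(candidates):
        print(f"--- candidate {i} ---\n{c['generated_text']}\n")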
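For the score-generation strand, a toy sketch of the idea behind RNN-based MIDI generation: a small LSTM is trained to predict the next pitch, then sampled autoregressively. The training data (a looped C-major scale) and all hyperparameters are placeholders; a real setup would train on a MIDI corpus and also model durations and velocities.

    import torch
    import torch.nn as nn

    PITCHES = [60, 62, 64, 65, 67, 69, 71, 72]            # C-major scale, MIDI numbers
    seq = torch.tensor(PITCHES * 50, dtype=torch.long)    # toy "corpus"
    vocab = 128                                           # full MIDI pitch range

    class NoteRNN(nn.Module):
        def __init__(self, vocab, hidden=64):
            super().__init__()
            self.emb = nn.Embedding(vocab, 32)
            self.rnn = nn.LSTM(32, hidden, batch_first=True)
            self.out = nn.Linear(hidden, vocab)

        def forward(self, x, state=None):
            h, state = self.rnn(self.emb(x), state)
            return self.out(h), state

    model = NoteRNN(vocab)
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()

    x, y = seq[:-1].unsqueeze(0), seq[1:].unsqueeze(0)    # next-pitch targets
    for _ in range(200):
        opt.zero_grad()
        logits, _ = model(x)
        loss = loss_fn(logits.reshape(-1, vocab), y.reshape(-1))
        loss.backward()
        opt.step()

    # Sample a short continuation from a one-note seed.
    note, state, melody = torch.tensor([[60]]), None, [60]
    for _ in range(16):
        logits, state = model(note, state)
        probs = torch.softmax(logits[0, -1], dim=-1)
        note = torch.multinomial(probs, 1).unsqueeze(0)
        melody.append(note.item())
    print("generated pitches:", melody)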
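For the visualization strand, a minimal sketch of offline image generation with a pretrained latent diffusion model via the diffusers library. The model id, prompt, and output path are illustrative assumptions; online use during a performance would require far fewer inference steps or a distilled, faster sampler.

    import torch
    from diffusers import StableDiffusionPipeline

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",  # placeholder model id
        torch_dtype=torch.float16,
    )
    pipe = pipe.to("cuda")  # assumes a CUDA-capable GPU

    prompt = "abstract sound waves dissolving into light, stage projection"
    image = pipe(prompt, num_inference_steps=25, guidance_scale=7.5).images[0]
    image.save("visual_frame.png")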

This project contributes to the following SDGs

  • 9: Industry, Innovation and Infrastructure
  • 17: Partnerships for the Goals