From the same archive

Introduction to the GdR IASIS study day dedicated to audio synthesis - Thomas Hélie, Mathieu Lagrange

November 7, 2024

Audio Language Models - Neil Zeghidour

November 7, 2024

Poster sessions - Clara Boukhemia, Samir Sadok, Amandine Brunetto, Haoran Sun, Vincent Lostanlen, Morgane Buisson, Xiran Zhang, Reyhaneh Abbasi, Ainė Drėlingytė, Étienne Paul André, Yuexuan Kong, Étienne Bost, Axel Marmoret, Javier Nistal, Hugo Pauget Ballesteros

November 7, 2024

Music sound synthesis using machine learning - Fanny Roche

November 7, 2024

Grey-box modelling informed by physics: Application to commercial digital audio effects - Judy Najnudel

November 7, 2024

Hybrid deep learning for music analysis and synthesis - Gaël Richard

November 16, 2023 · 53 min

Invariance learning for a music indexing robust to sound modifications - Rémi Mignot

November 16, 2023 · 51 min

Basic Pitch: A lightweight model for multi-pitch, note and pitch bend estimations in polyphonic music - Rachel Bittner

November 16, 2023 · 43 min

GDR ISIS, Methods and Models in Signal Processing, Introduction

November 16, 2023 · 5 min

Labeling a Large Music Catalog - Romain Hennequin

November 16, 2023 · 1 h 04 min

AI in 64Kbps: Lightweight neural audio synthesis for embedded instruments

The research project led by the ACIDS group at IRCAM aims to model musical creativity by extending probabilistic learning approaches to multivariate and multimodal time series. Our main object of study lies in the properties and perception of musical synthesis and artificial creativity. In this context, we experiment with deep AI models applied to creative materials, aiming to develop artificial creative intelligence. Over the past years, we have developed several objects that embed this research directly as real-time objects usable in MaxMSP. Our team has produced many prototypes of innovative instruments and musical pieces in collaboration with renowned composers.

However, the often-overlooked downside of deep models is their massive complexity and tremendous computational cost. This aspect is especially critical in audio applications, which rely heavily on specialized embedded hardware with real-time constraints. Hence, the lack of work on efficient lightweight deep models is a significant limitation to the real-life use of deep models on resource-constrained hardware.

We show how these objectives can be attained through several recent theories (the lottery ticket hypothesis (Frankle and Carbin, 2018), mode connectivity (Garipov et al., 2018), and information bottleneck theory) and demonstrate how our research led to lightweight, embedded deep audio models, namely:

1. Neurorack: the first deep-AI-based Eurorack synthesizer
2. FlowSynth: a learning-based device that lets you travel the auditory space of a synthesizer simply by moving your hand
3. RAVE on Raspberry Pi: 48 kHz real-time embedded deep synthesis
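To make the first of these theories concrete, the sketch below illustrates a single round of lottery-ticket-style iterative magnitude pruning in NumPy: prune the smallest-magnitude trained weights, then rewind the survivors to their initialization values. This is a hypothetical illustration of the general technique, not code from the ACIDS projects; all function names are our own.

```python
import numpy as np

def magnitude_prune_mask(weights, sparsity):
    """Binary mask keeping the largest-magnitude weights.

    Entries whose |w| falls at or below the sparsity-quantile
    threshold are pruned (0 in the mask)."""
    threshold = np.quantile(np.abs(weights), sparsity)
    return (np.abs(weights) > threshold).astype(weights.dtype)

def lottery_ticket_round(init_weights, trained_weights, sparsity):
    """One iterative-magnitude-pruning round: prune the *trained*
    weights by magnitude, then rewind the survivors to their
    *initial* values; the surviving subnetwork is the 'ticket'."""
    mask = magnitude_prune_mask(trained_weights, sparsity)
    return mask * init_weights, mask

# Toy usage: a 4x4 weight matrix pruned to 50% sparsity.
rng = np.random.default_rng(0)
w_init = rng.normal(size=(4, 4))                     # weights at initialization
w_trained = w_init + 0.1 * rng.normal(size=(4, 4))   # weights after "training"
ticket, mask = lottery_ticket_round(w_init, w_trained, sparsity=0.5)
```

In the full procedure, the rewound sparse network is then retrained with the mask held fixed, and the prune/rewind/retrain cycle repeats until the target sparsity is reached.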


Information

Type
Seminar / Conference
Performance location
Ircam, Salle Igor-Stravinsky (Paris)
Date
November 7, 2024

IRCAM

1, place Igor-Stravinsky
75004 Paris
+33 1 44 78 48 43


Institut de Recherche et de Coordination Acoustique/Musique
