From the same archive

Invariance learning for a music indexing robust to sound modifications - Rémi Mignot

November 16, 2023 51 min

Basic Pitch: A lightweight model for multi-pitch, note and pitch bend estimations in polyphonic music - Rachel Bittner

November 16, 2023 43 min

GDR ISIS, Methods and Models in Signal Processing, Introduction

November 16, 2023 05 min

Labeling a Large Music Catalog - Romain Hennequin

November 16, 2023 01 h 04 min

Introduction to the GdR IASIS study day dedicated to audio synthesis - Thomas Hélie, Mathieu Lagrange

November 7, 2024

Audio Language Models - Neil Zeghidour

November 7, 2024

Poster sessions - Clara Boukhemia, Samir Sadok, Amandine Brunetto, Haoran Sun, Vincent Lostanlen, Morgane Buisson, Xiran Zhang, Reyhaneh Abbasi, Ainė Drėlingytė, Étienne Paul André, Yuexuan Kong, Étienne Bost, Axel Marmoret, Javier Nistal, Hugo Pauget Ballesteros

November 7, 2024

AI in 64Kbps: Lightweight neural audio synthesis for embedded instruments - Philippe Esling

November 7, 2024

Music sound synthesis using machine learning - Fanny Roche

November 7, 2024

Grey-box modelling informed by physics: Application to commercial digital audio effects - Judy Najnudel

November 7, 2024

Hybrid deep learning for music analysis and synthesis

Access to ever-increasing super-computing facilities, combined with the availability of huge (though largely unannotated) data repositories, has driven a significant trend toward purely data-driven deep learning approaches. However, these methods only loosely account for the nature and structure of the data they process. We believe it is important instead to build hybrid deep learning methods that integrate prior knowledge about the nature of the data, its generation process, or, where possible, its perception by humans. We will illustrate the potential of such model-based (or hybrid) deep learning approaches for music analysis and synthesis.
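The "hybrid" idea in the abstract, constraining a network with a known signal model, can be illustrated by a minimal sketch in the spirit of differentiable additive synthesis (an assumption for illustration; the talk's own methods may differ). Here a fixed harmonic oscillator bank renders audio from interpretable frame-rate controls (fundamental frequency, per-harmonic amplitudes); in a full hybrid system those controls would be predicted by a neural network, while the signal model itself encodes the prior knowledge:

```python
import numpy as np

def harmonic_synth(f0, amps, sr=16000, hop=64):
    """Harmonic oscillator bank: a fixed signal model whose control
    parameters (f0 trajectory, per-harmonic amplitudes) would be
    predicted by a neural network in a hybrid system."""
    n_frames, n_harm = amps.shape
    # Upsample frame-rate controls to audio rate.
    f0_audio = np.repeat(f0, hop)
    # Integrate instantaneous frequency to get the phase trajectory.
    phase = 2.0 * np.pi * np.cumsum(f0_audio) / sr
    audio = np.zeros(n_frames * hop)
    for k in range(1, n_harm + 1):
        a_k = np.repeat(amps[:, k - 1], hop)
        # Prior knowledge baked into the model: silence any harmonic
        # that would alias above the Nyquist frequency.
        a_k = np.where(k * f0_audio < sr / 2, a_k, 0.0)
        audio += a_k * np.sin(k * phase)
    return audio

# Toy controls: a steady 220 Hz tone with 1/k harmonic amplitudes.
n_frames, n_harm = 100, 8
f0 = np.full(n_frames, 220.0)
amps = np.outer(np.ones(n_frames), 1.0 / np.arange(1, n_harm + 1))
audio = harmonic_synth(f0, amps)
```

Because the synthesizer is built from differentiable operations, gradients can flow through it back into whatever network predicts `f0` and `amps`, which is what distinguishes this model-based setup from a purely data-driven one.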

information

Type
Seminar / Conference
performance location
Ircam, Salle Igor-Stravinsky (Paris)
duration
53 min
date
November 16, 2023

IRCAM

1, place Igor-Stravinsky
75004 Paris
+33 1 44 78 48 43


Institut de Recherche et de Coordination Acoustique/Musique
