Media related to this event

Sound design for The Powder Toy - Kieran McAuliffe

March 28, 2025

Sinusoidal run rhythm

March 28, 2025

ART MUSIC DENMARK presents: Presentation of “vssl” (new hardware electronic instrument) - Xavier Bonfill

March 28, 2025

Immersive telematic performance - Randall Packer, Théophile Clet, Federico Foderaro

March 28, 2025

C-LAB Session: Applying Dicy2 in the production of The Day in Gad-Avia - Chia Hui Chen, Jing-shiuan Tsang

March 28, 2025

RAVE Model Challenge - Award ceremony

March 28, 2025

Interactive video installation “Here's the Information We Collect” - Tansy Xiao

March 28, 2025

Overton - Decorrelated spatial synthesis - Martin Antiphon

March 28, 2025

Update on MacIntel and Forum software - Carlos Amado Agon, Riccardo Borghesi, Karim Haddad, Nicholas Ellis

November 29, 2006 · 20 min

What's new in AudioSculpt 2.7 and SuperVP 2.91 - Xavier Rodet, Alain Lithaud, Niels Bogaards, Axel Roebel

November 29, 2006 · 1 h 07 min

What's new in OpenMusic - Gérard Assayag, Jean Bresson, Carlos Amado Agon, Karim Haddad

November 29, 2006 · 59 min

Update on the Spatialisateur - Olivier Warusfel, Rémy Muller, Terence Caulkins

November 29, 2006 · 12 min

What's new in Modalys - Joël Bensoam, Nicholas Ellis, Jean Lochard

November 29, 2006 · 50 min

Mlys - a control interface for Modalys in Max/MSP - Manuel Poletti

November 29, 2006 · 47 min

Welcome - Andrew Gerzso

November 29, 2006 · 18 min

Recent developments from the real-time applications team - Diemo Schwarz, Riccardo Borghesi, Norbert Schnell

November 29, 2006 · 51 min

« Latent Terrain » : Dissecting the Latent Space of Neural Audio Autoencoders


We present Latent Terrain, an algorithmic approach to dissecting the latent space of a neural audio autoencoder into a two-dimensional plane. Latent Terrain questions the conventional paradigm of dimensionality reduction in creative interactive systems, in which the projection from a high- to a low-dimensional space places similar objects at nearby points. Instead, with its mountainous, steep surface, a terrain generated by our approach affords greater spectral complexity when navigating an audio autoencoder's latent space.

Extending from this, we present Latent Terrain Synthesis, a method for sound synthesis in which a waveform is generated by tracing a path across a terrain surface. Latent terrain synthesis aims to help musicians create tailorable, flexible materials for musical expression, leveraging the sonic capabilities of neural audio autoencoders such as RAVE.
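The core idea above can be illustrated with a minimal numpy sketch. Everything here is illustrative, not the authors' actual algorithm: we assume a hypothetical terrain function that maps 2D coordinates to latent vectors via a fixed random projection and a steep, oscillating nonlinearity, and we stand in for the RAVE decoder (which would render the latent trajectory as audio) with a simple shape check.

```python
import numpy as np

# Hedged sketch of "latent terrain synthesis": a terrain maps (x, y) points on
# a 2D plane to latent vectors; walking a path across the terrain yields a
# latent trajectory that a decoder (e.g. a pre-trained RAVE model via nn~)
# would turn into a waveform. Dimensions and the terrain function are assumed.

LATENT_DIM = 8  # assumed latent dimensionality of a pre-trained autoencoder

rng = np.random.default_rng(0)
W = rng.normal(size=(LATENT_DIM, 2))   # fixed random projection: plane -> latent
b = rng.normal(size=LATENT_DIM)

def terrain(x: float, y: float) -> np.ndarray:
    """Map a point on the 2D plane to a latent vector.

    The oscillating nonlinearity makes nearby points map to dissimilar
    latents, mimicking the 'mountainous, steep surface' the abstract
    describes rather than a similarity-preserving projection.
    """
    p = np.array([x, y])
    return np.sin(3.0 * (W @ p) + b)

def path_to_latents(path: np.ndarray) -> np.ndarray:
    """Turn an (N, 2) stylus path into an (N, LATENT_DIM) latent trajectory."""
    return np.stack([terrain(x, y) for x, y in path])

# Example: a straight stylus stroke across the terrain.
stroke = np.stack([np.linspace(0, 1, 100), np.linspace(0, 1, 100)], axis=1)
latents = path_to_latents(stroke)  # frames one would feed to a decoder
print(latents.shape)               # (100, 8)
```

In the actual system the latent frames would be streamed to a RAVE decoder in real time; the sketch only shows how a 2D gesture becomes a latent trajectory.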

We provide nn_terrain, a set of Max/MSP externals that work together with nn~ to generate latent terrains for pre-trained RAVE models and let users navigate the terrain in real time.

In this talk, I will first present the technical details behind latent terrain, the workflow, its integration with RAVE, and a demo interface using a tablet and stylus. I will also present a recent user-study workshop, run with co-authors Anna Xambó Sedó and Nick Bryan-Kinns at the Centre for Digital Music, Queen Mary University of London, in which 18 musicians from various backgrounds explored the musical affordances of latent terrains and derived sonic materials for musical expression.

Acknowledgment: This work is supported by the UKRI Centre for Doctoral Training in Artificial Intelligence and Music, funded by UK Research and Innovation [grant number EP/S022694/1].

speakers

information

Type
Conference series, symposium, congress
Venue
Ircam, Salle Igor-Stravinsky (Paris)
Date
March 28, 2025

IRCAM

1, place Igor-Stravinsky
75004 Paris
+33 1 44 78 48 43

opening hours

Monday to Friday, 9:30 am to 7 pm
Closed Saturday and Sunday

getting there

Metro: Hôtel de Ville, Rambuteau, Châtelet, Les Halles

Institut de Recherche et de Coordination Acoustique/Musique

Copyright © 2022 Ircam. All rights reserved.