For sonic interactive systems, defining user-specific mappings between the sensors that capture a performer's gestures and the sound-engine parameters can be a complex task, especially when a large network of sensors controls a high number of synthesis variables. Generative techniques based on machine learning can compute such mappings only if users provide a sufficient number of examples embedding an underlying learnable model. Instead, combining automated listening with unsupervised learning techniques can minimize the effort and expertise required to implement personalized mappings, while raising the perceptual relevance of the control abstraction. The vocal control of sound synthesis is presented as a challenging context for this mapping approach.
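As a rough illustration of this idea, and not the specific system presented here, the following Python sketch combines "automated listening" (frame-wise audio descriptors extracted from a vocal recording) with an unsupervised step (principal component analysis) to derive a low-dimensional control signal for a sound engine. The file name, the choice of MFCC descriptors, PCA as the unsupervised method, and the number of synthesis parameters are all assumptions made for the example; it relies on the librosa and scikit-learn libraries.

    import numpy as np
    import librosa
    from sklearn.decomposition import PCA

    # Load a vocal recording ("voice.wav" is a placeholder path).
    y, sr = librosa.load("voice.wav", sr=None)

    # "Automated listening": extract frame-wise descriptors from the voice.
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)   # shape (13, n_frames)
    features = mfcc.T                                     # shape (n_frames, 13)

    # Unsupervised learning: reduce the descriptor space to as many
    # dimensions as there are synthesis parameters to control
    # (4 is an arbitrary choice for this sketch).
    n_synth_params = 4
    pca = PCA(n_components=n_synth_params)
    control = pca.fit_transform(features)                 # (n_frames, 4)

    # Rescale each component to [0, 1] so it can drive a synthesis parameter.
    lo, hi = control.min(axis=0), control.max(axis=0)
    control = (control - lo) / (hi - lo + 1e-9)

    # Each row of `control` is now a frame-wise set of parameter values
    # that could be sent to a sound engine (for example over OSC).
    print(control.shape)

In this sketch the mapping is obtained without any labeled examples: the structure of the vocal material itself determines the control dimensions, which is the sense in which unsupervised learning can reduce the effort compared with example-based generative mapping.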