Music generation: Composing behaviours

Compose at the behavioural level and activate through interaction

The aim of this research axis is to develop generative AI models and architectures that merge the usually exclusive “free”, “reactive”, and “scenario-based” paradigms of interactive music generation, so as to adapt to a wide range of musical contexts involving hybrid temporalities and multimodal interactions.

The goal is then to propose a continuum of musical practices, ranging from meta-DJing, where a musician-operator defines real-time intentions as symbolic specifications serving as generation queries to the system, to the composition of behaviours activated by the interaction during the performance. In the latter case, the agent’s “musical memory”, the audio-musical dimensions it listens to and analyses in real time, and the temporality of its listening and reaction mechanisms are all defined before the performance, while the musical form itself is created by the interaction during the improvisation.
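
To make this composition of behaviours concrete, here is a minimal, purely illustrative Python sketch (hypothetical names and structures, not the DYCI2 API): an agent whose “musical memory”, listened dimension, and reaction mechanism are fixed before the performance, and which can be driven either by a symbolic scenario (the meta-DJing pole) or by labels extracted from the live input (the reactive pole).

```python
# Hypothetical sketch only (not the DYCI2 API): a generative agent whose
# behaviour is composed before the performance and activated by interaction.
from dataclasses import dataclass
from typing import List, Optional
import random

@dataclass
class MemoryEvent:
    """One event of the agent's 'musical memory': content plus symbolic labels."""
    content: str   # placeholder for an audio slice or MIDI fragment
    labels: dict   # e.g. {"chord": "Am7", "energy": "low"}

class ComposedBehaviour:
    """Behaviour defined before the performance: which dimension the agent
    listens to, and how it reacts when that dimension matches its memory."""
    def __init__(self, memory: List[MemoryEvent], listened_dimension: str):
        self.memory = memory
        self.dimension = listened_dimension  # e.g. "chord", analysed in real time

    def react(self, heard_label: str) -> Optional[str]:
        """Reactive pole: answer a label extracted from the live input."""
        candidates = [e for e in self.memory
                      if e.labels.get(self.dimension) == heard_label]
        return random.choice(candidates).content if candidates else None

    def run_scenario(self, scenario: List[str]) -> List[Optional[str]]:
        """Scenario-based pole: a symbolic specification (e.g. a chord
        progression) used as a generation query over the whole memory."""
        return [self.react(label) for label in scenario]

if __name__ == "__main__":
    memory = [
        MemoryEvent("phrase_01.wav", {"chord": "Am7",   "energy": "low"}),
        MemoryEvent("phrase_02.wav", {"chord": "D7",    "energy": "high"}),
        MemoryEvent("phrase_03.wav", {"chord": "Gmaj7", "energy": "mid"}),
    ]
    agent = ComposedBehaviour(memory, listened_dimension="chord")

    # Meta-DJing end of the continuum: an operator sends a symbolic scenario.
    print(agent.run_scenario(["Am7", "D7", "Gmaj7"]))

    # Reactive end: the form emerges from what is heard during the improvisation.
    print(agent.react("D7"))
```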

A recent presentation about this research axis:

https://jeromenika.com/research-designing-generative-agents/dyci2-project

Jérôme Nika, Ken Déguernel, Axel Chemla-Romeu-Santos, Emmanuel Vincent, Gérard Assayag. DYCI2 agents: merging the “free”, “reactive”, and “scenario-based” music generation paradigms. International Computer Music Conference (ICMC), Oct. 2017, Shanghai, China.

Jérôme Nika, Marc Chemillier, Gérard Assayag. ImproteK: introducing scenarios into human–computer music improvisation. ACM Computers in Entertainment, 2017.

Dimitri Bouche, Jérôme Nika, Alex Chechile, Jean Bresson. Computer-aided Composition of Musical Processes. Journal of New Music Research, Taylor & Francis (Routledge), 2017, 46(1).

Jérôme Nika. Guiding human–computer music improvisation: introducing authoring and control with temporal scenarios. PhD thesis, UPMC – Université Pierre et Marie Curie (Paris 6), 2016.

The associated DYCI2 library:

https://jeromenika.com/research-designing-generative-agents/code/dyci2-library

Some associated artistic productions