Research

Research in AI Generative Technologies -&/for- Music Creation

As a researcher in the Music Representations Team at Ircam, Jérôme Nika focuses on how to model, learn, and navigate an “artificial musical memory” in creative contexts. In opposition to a “replacement” approach in which AI would substitute for humans, this research aims to design novel creative practices involving a certain level of symbolic abstraction, such as “interpreting / improvising the intentions” and “composing the narration”.

The design of these AI-empowered musical practices involves research on the integration of temporal specifications (scenarios) into music generation processes; on the dialectic between reactivity and planning in human-computer creative interaction; and on the real-time detection and inference of underlying structures in an audio stream. This research is conducted in constant, long-term interaction with expert musicians and encompasses the entire process, from modeling and development through artistic validation to implementation in ambitious artistic productions.
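A minimal sketch of the “scenario” idea described above, under assumptions of my own (the function names, data structures, and the re-planning helper are hypothetical and do not reflect the DYCI2 API): a scenario is treated as a sequence of target labels, for instance a chord progression, that guides which events of an annotated musical memory may be chained, while a reactive revision keeps what has already been played and re-plans only the remainder.

# Illustrative sketch only (hypothetical names, not the DYCI2 API): a scenario is a
# sequence of target labels guiding which events of an annotated memory are used.
import random
from typing import List, Optional, Tuple

Event = Tuple[str, str]  # (label, content), e.g. ("Cmaj7", "memory-slice-042")

def generate(memory: List[Event], scenario: List[str]) -> List[Optional[str]]:
    """For each step of the scenario, pick a memory event carrying the required
    label, preferring events that were contiguous in the original memory."""
    output: List[Optional[str]] = []
    last = None
    for label in scenario:
        candidates = [i for i, (lab, _) in enumerate(memory) if lab == label]
        if not candidates:
            output.append(None)   # no material matches this step of the scenario
            last = None
            continue
        contiguous = [i for i in candidates if last is not None and i == last + 1]
        last = contiguous[0] if contiguous else random.choice(candidates)
        output.append(memory[last][1])
    return output

def revise(scenario: List[str], position: int, new_suffix: List[str]) -> List[str]:
    """Reactive re-planning: keep what has already been played, replace the rest."""
    return scenario[:position] + new_suffix

For example, generate(memory, ["Cmaj7", "A-7", "D7", "Gmaj7"]) would return one memory slice per step of the scenario, and revise lets a live input change the upcoming part of the scenario without discarding what has already been rendered.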

Jérôme Nika’s PhD work, Guiding Human-Computer Music Improvisation (Young Researcher Prize in Science and Music, 2015; Young Researcher Prize awarded by the French Association of Computer Music, 2016), focused on the introduction of authoring, composition, and control in human-computer music co-improvisation.

He then developed the DYCI2 library of generative musical agents, which combines machine learning models and generative processes with reactive listening modules. The library offers a collection of “agents/instruments” embedding a continuum of strategies, ranging from pure autonomy to meta-composition, thanks to an abstract “scenario” structure.
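As a hedged illustration of this continuum (the class and parameter names below are invented for the sketch and are not the library’s API): the same agent can run in pure autonomy, freely drawing from its memory, or in a meta-composition mode in which a scenario constrains every generation step.

# Hypothetical sketch of the autonomy / meta-composition continuum
# (illustrative names only, not the DYCI2 API).
import random
from typing import List, Optional, Tuple

Event = Tuple[str, str]  # (label, content)

class GenerativeAgent:
    def __init__(self, memory: List[Event]):
        self.memory = memory

    def step(self, required_label: Optional[str] = None) -> Optional[str]:
        # Autonomous mode: any memory event may be chosen.
        # Meta-composition mode: only events matching the scenario label.
        pool = self.memory if required_label is None else [
            e for e in self.memory if e[0] == required_label
        ]
        return random.choice(pool)[1] if pool else None

    def run(self, scenario: Optional[List[str]] = None, length: int = 8) -> List[Optional[str]]:
        targets = scenario if scenario is not None else [None] * length
        return [self.step(t) for t in targets]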


Current main research axes


Resources