As a researcher at Ircam Centre Pompidou in Paris (ISMM STMS lab), Jérôme Nika focuses on designing interactions with generative sound processes, as well as on modeling, learning, and navigating “musical memory” models in creative contexts. His research aims to develop new creative practices based on symbolic abstraction, making it possible to interpret live electronics at the level of intention and to compose them at the level of narrative.
Designing systems for interactive sound synthesis means navigating the tension between immediate response and longer-term intention in human–computer interaction, while uncovering structure in sound as it unfolds in real time. This work is grounded in ongoing collaboration with expert musicians and extends from conceptual modeling to its realization in large-scale artistic projects.
Jérôme Nika’s PhD work, “Guiding Human-Computer Music Improvisation” (Young Researcher Prize in Science and Music, 2015; Young Researcher Prize awarded by the French Association of Computer Music, 2016), focused on introducing authoring, composition, and control into human-computer music co-improvisation.
He then developed the Dicy2 for Max and Dicy2 for Ableton Live generative musical agents, which combine machine learning models and generative processes with reactive listening modules. This library offers a collection of “agents/instruments” embedding a continuum of strategies, ranging from pure autonomy to meta-composition, thanks to an abstract “scenario” structure.
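To make the “scenario” idea concrete, here is a minimal conceptual sketch (not the actual Dicy2 API, whose agents run in Max and Ableton Live): a musical memory is modeled as a sequence of (label, content) events, a scenario as an abstract sequence of labels, and generation walks the memory to match the scenario while favoring contiguous continuations. All names below are illustrative assumptions.

```python
# Conceptual sketch of scenario-guided generation (illustrative only,
# not the Dicy2 implementation or API).
# - "memory": a sequence of (label, content) events learned from past material
# - "scenario": an abstract sequence of labels specifying the target narrative

def generate(memory, scenario):
    """Return contents whose labels follow the scenario, favoring continuity."""
    output, pos = [], None
    for label in scenario:
        # Prefer continuing from the previous memory position (musical continuity)...
        if pos is not None and pos + 1 < len(memory) and memory[pos + 1][0] == label:
            pos += 1
        else:
            # ...otherwise jump to any memory event carrying the required label.
            candidates = [i for i, (lab, _) in enumerate(memory) if lab == label]
            if not candidates:
                output.append(None)  # no matching event: leave a gap
                pos = None
                continue
            pos = candidates[0]
        output.append(memory[pos][1])
    return output

memory = [("A", "a1"), ("B", "b1"), ("B", "b2"), ("C", "c1")]
scenario = ["A", "B", "C", "B"]
print(generate(memory, scenario))  # → ['a1', 'b1', 'c1', 'b1']
```

In this toy version the scenario acts as the meta-composition layer: the tighter its label constraints, the less autonomous the agent; an empty or permissive scenario would leave the agent free to navigate its memory on its own.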


