Composing at the Narrative Level for Large-Scale Guided Music Generation
Jérôme Nika’s PhD thesis introduced a scenario to guide the generation process, making it possible to bring anticipation and forward motion into improvised human–computer performances. This symbolic sequence is defined on the same alphabet as the labels annotating the memory (chord labels, chunked audio-descriptor values, or any user-defined labelled items).
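To make the idea concrete, here is a minimal sketch of scenario-guided navigation of a labelled memory. It is an illustrative simplification, not the actual ImproteK/DYCI2 algorithm: the memory is a list of (label, content) events, the output must follow the scenario’s label sequence, and contiguous memory segments are preferred for musical continuity. All names and data are hypothetical.

```python
# Illustrative sketch (NOT the actual ImproteK/DYCI2 algorithm): navigate a
# memory of labelled events so that the output's labels match a scenario,
# preferring contiguous memory segments for continuity.

def generate(scenario, memory):
    """scenario: list of labels; memory: list of (label, content) pairs."""
    output, prev = [], None
    for label in scenario:
        candidates = [i for i, (lbl, _) in enumerate(memory) if lbl == label]
        if not candidates:
            output.append(None)  # no matching event: leave a gap
            prev = None
            continue
        # Prefer the event right after the previously chosen one (continuity),
        # otherwise jump to the first matching event.
        nxt = prev + 1 if prev is not None else None
        i = nxt if nxt in candidates else candidates[0]
        output.append(memory[i][1])
        prev = i
    return output

# Toy memory annotated with chord labels, queried with a short scenario.
memory = [("C", "c1"), ("F", "f1"), ("G", "g1"), ("C", "c2"), ("G", "g2")]
print(generate(["C", "F", "G", "C"], memory))  # → ['c1', 'f1', 'g1', 'c2']
```

Because the second, third, and fourth scenario labels each match the event following the previous choice, the sketch plays a contiguous memory segment rather than jumping around, which is the intuition behind scenario-guided continuity.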
In this research axis we focus on higher-level compositional applications that take advantage of this scenario object to design offline, large-scale structured audio generators. To raise the level of abstraction in the meta-composition process, a second agent can be introduced beforehand to generate the scenario itself. The underlying structures are then unknown to the user, and the meta-composition paradigm changes radically: the user no longer explicitly defines a temporal evolution, but provides a corpus of sequences that serve as “inspirations” articulating the narrative of the generated music.
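One simple way to picture this second agent is as a sequence model trained on the corpus of inspiration sequences. The sketch below uses a first-order Markov chain over labels; this is an assumed stand-in for illustration, not the system’s actual model, and all names and data are hypothetical.

```python
import random
from collections import defaultdict

# Illustrative sketch: a second agent learns label transitions from a corpus
# of "inspiration" sequences, then generates a new scenario that can drive
# the audio generator. A first-order Markov chain is an assumed simplification.

def learn_transitions(corpus):
    trans = defaultdict(list)
    for seq in corpus:
        for a, b in zip(seq, seq[1:]):
            trans[a].append(b)
    return trans

def generate_scenario(corpus, length, seed=0):
    rng = random.Random(seed)
    trans = learn_transitions(corpus)
    label = rng.choice([seq[0] for seq in corpus])  # start like an inspiration
    scenario = [label]
    while len(scenario) < length and trans[label]:
        label = rng.choice(trans[label])
        scenario.append(label)
    return scenario

corpus = [["A", "B", "A", "C"], ["A", "C", "B", "A"]]
print(generate_scenario(corpus, 8))
```

The generated scenario only chains transitions observed in the corpus, so its local narrative resembles the inspirations without reproducing any of them literally; the user shapes the result by curating the corpus rather than by writing the scenario.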
The compositional practices enabled by this association of generative models and computer-assisted composition can be described, metaphorically, as generating musical material “composed at the scale of the narrative”, where the compositional gesture remains fundamental while operating at a high level of abstraction.
The video above shows three simplified stereo examples illustrating this “meta-composition” process, used to generate “singing clouds” for Pascal Dusapin’s Lullaby Experience. Each artificial musical choir is composed abstractly “at the narrative level”: it is defined by a high-level abstract scenario and a “musical memory”, so that generative agents can propose several possible instantiations of the high-level “meta-composition”.
Related projects
Some related articles
Jérôme Nika, Jean Bresson. Composing Structured Music Generation Processes with Creative Agents. 2nd Joint Conference on AI Music Creativity (AIMC), 2021, Graz, Austria. ⟨hal-03325451⟩
Jérôme Nika, Marc Chemillier, Gérard Assayag. ImproteK: introducing scenarios into human–computer music improvisation. ACM Computers in Entertainment, 2017.
Jérôme Nika. Guiding human–computer music improvisation: introducing authoring and control with temporal scenarios. PhD thesis, UPMC – Université Paris 6 Pierre et Marie Curie, 2016.