EXPLORING MUSICAL PRACTICE: INTERVIEW WITH PIERRE JODLOWSKI

Elsa Filipe: As a musician, could you tell us about your musical background?

Pierre Jodlowski: I started with a very classical training (piano, saxophone, music theory, harmony…), but I quickly moved towards a creative approach. At first, it was quite intuitive, composing pieces for piano, then playing in an experimental rock band, and finally entering the Conservatoire in Lyon, where I studied under Philippe Manoury. There, I focused especially on the convergence of acoustic (instrumental writing) and electronic worlds (real-time processing and studio work), which became the foundation of my musical universe. My training culminated with a year at IRCAM’s Computer Music Cursus, where I was able to refine these techniques and, at the same time, build an initial network of collaborators for my professional journey.

EF: Considering the diversity of your work, would you describe yourself as an eclectic composer? What led you to use electronics in your compositions?

PJ: I wouldn’t say I’m an “eclectic” composer. The diversity of my work is more material than conceptual. Of course, I compose for very different formats, for film or theatre, I design installations or staging projects—but all of these are centered on very similar ideas or concepts. I often work around memory or political engagement. Each work, regardless of its form, offers a perspective on those themes. Electronics are fully part of our society, so there’s nothing marginal about using them in music. In amplified music, in cinema, in theatre, this question doesn’t even arise. It’s not something specific—I use these tools just like any other instrument, object, or situation my imagination calls for.

EF: How would you define real-time processing and the concept of interaction? In your view, what are the advantages and drawbacks of using them?

PJ: Real-time processing refers to the principle of transforming sound as it happens—like modifying a vocal or instrumental sound live using various processes, simple or complex. This technique—unlike fixed soundtracks in mixed music—offers performers greater freedom. However, the more complex the processing, the harder it is to set up, and performers can’t work alone (unlike with a fixed track, which is much easier to use). Interaction can also extend to installations. I’ve developed several projects where the audience manipulates data through sensors. This is fascinating, because the gesture that triggers the sound (or image, or light) is entirely unpredictable. You have to develop entirely different strategies from those used to structure time in traditional composition.

EF: Regarding your compositional process, how have you integrated new technologies and their potential into your music? Have they influenced your writing? What are you trying to achieve through their use?

PJ: I don’t think of new technologies as something special. They’re just part of my toolkit, like paper. I enjoy moving from written sketches that reflect mental states (writing, situational descriptions) to improvised sequences where I use complex processing chains to create unique sound materials. All the technological tools (interfaces, software, sensors, controllers…) are fully integrated into my imagination because I’ve worked with them for so long. Ultimately, what matters isn’t the tools themselves, but the strength of the artistic statement.

EF: In your article “Le Geste, question de composition” (“Gesture, a question of composition”), you mention “active music.” Could you explain what that means, and how it relates to mixed music?

PJ: The notion of “active music” refers to an aesthetic idea—or more simply, to what I’d call the core of my artistic approach. It’s about activating zones of memory, emotional states, dynamic processes—in short, ensuring that the act of creation engages multiple perceptual levels consciously and intensely. This stands in contrast to “passive” music, which might exist simply for itself, almost independently of its reception context. This is a tricky point, of course—it raises questions not just about the composer, but also the audience, performance spaces, conditions of presentation, etc. Some concert formats have become so conventional (repeating habits inherited from past centuries) that it feels like the concert is happening “without us.” In that sense, there’s a kind of passivity, which for me represents a renunciation of art’s function as a conscious, worldly act. When I create my works—taking into account not only the sound (the mixture of sources and spatialisation) but also gestures, lighting, movement—I’m essentially exploring a continuous dynamic that activates diverse perceptual fields. It’s a kind of “montage of attractions,” as Eisenstein described in his writings on cinema.

EF: Concerning the relationship between performer and machine, what are your main concerns during the writing process? Given that performers often have a classical background, how do they adapt to using these new technologies?

PJ: I pay particular attention to how instruments are amplified. I don’t amplify them just to make them louder—I want a sound that feels “closer.” By placing microphones near the instruments and amplifying them, the idea is to transform their “physicality,” to reveal all their inner components, as if listening under a microscope. Then, in many of my pieces, I explore this human-machine relationship from a critical or dialectical perspective. The sounds that play alongside the instruments aren’t just there to create a backdrop—they must intervene in the direction indicated by the performer and alter its contours. It’s like several musicians playing together—they have to interact. Otherwise, the experience becomes passive again. Contemporary musicians are increasingly comfortable with electronic tools. If some resist, it’s usually because they haven’t yet had the chance to experience a truly meaningful setup. That said, working with mixed music obviously requires dedicated rehearsal time, which doesn’t always fit the working habits of classically trained musicians. There’s a lot that needs to change in that regard. In theatre, rehearsal time is massive compared to what we get in music. And the score alone can’t replace this hands-on practice. So yes, ideally, we’d have much more time to work—especially with extended setups—but that raises other issues, especially financial ones.

EF: In your view, what future lies ahead for mixed music?

PJ: From what I see among younger generations of composers, the integration of electronic tools—as well as lighting and scenography—is becoming more common, even normalized. In many ensembles and soloist circuits, concerts increasingly involve video, lighting, amplification, and so on. But orchestras are far behind in this regard—and some may never catch up. The programming in orchestral institutions or opera houses remains tragically backward-looking, almost exclusively focused on the past. Many composers from the 1970s already believed that orchestras no longer served the needs of contemporary writing. And indeed, with just a few loudspeakers, one can recreate sound environments that are just as rich and complex as a symphony orchestra. Personally, I find it unfortunate that these ensembles—which consume significant music budgets—don’t embrace these practices more, remaining obsessed with orchestral sound as defined by Beethoven. The construction of new philharmonic halls proves this point: they’re paradoxically tools designed for “museum music,” completely unsuited to modern technologies. Fortunately, there are exceptions, and we can hope that the future for these musicians involves a more diverse practice—one that incorporates other media more fully.
Appendix to a Musicology Thesis, Elsa Filipe - 2017