Forum Posts

DJRoche
Dec 10, 2019
In BLOGS
It’s not fair for me to say that I haven’t had any experience with electronics. I grew up playing electric guitar, an instrument that relies on amplifiers and expects a degree of fluency with effects; I wrote and recorded a planetarium show for the Royal Observatory Greenwich and a theatre piece for Hijinx Odyssey; I wrote an orchestral piece for the Orion Orchestra and Dyson; I wrote a piece for Astrid the Dutch street organ; I frequently mess around with audio recording and editing software; and I have taken courses on MAX MSP and, with Trevor Wishart, electroacoustic music. Even so, when CoDI Sound started I was green about live processing and its relationship to amplification. The closest I’d come to something like this was playing the opening to ‘Sweet Disposition’ by The Temper Trap (cracking song, surprisingly old) and hearing Trevor Wishart’s Imago, made from the sound of two glasses clinking.

As far as I see it, the biggest difficulty when writing for electronics is figuring out what you actually want to do. There is so much available to anyone using live-processing programs that you really need to know what you want: triggered samples, complicated delays, pitch processing, and any significantly more complicated combinations or developments of any of these… The sky’s the limit.

So, what did I actually do? Firstly, I sat down and composed until something seemed to stick (shock!). My initial idea was to make a piece of music using audio samples as a kind of metronome. Live processing would initiate a delay that sat in time with this metronome and everything would tick along aggressively, with the rate of delay changing over time and altering the feel of the pulse. I made some scores and tried this idea out in the first session.
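For anyone curious, the maths behind a tempo-synced delay is pleasingly simple: the delay time just has to line up with the metronome’s pulse. Here’s a rough sketch of that arithmetic in Python (the function name and the beat divisions are mine, purely for illustration – my actual patch didn’t look like this):

```python
# Illustrative sketch: computing delay times that sit in time with a
# sample "metronome" at a given tempo. One beat at 120 BPM = 500 ms.

def delay_ms(bpm: float, beats: float = 1.0) -> float:
    """Delay time in milliseconds for a given number of beats at a tempo."""
    return 60000.0 / bpm * beats

# Changing the beat division over time is what alters the feel of the pulse:
for beats in (1.0, 0.75, 0.5):  # quarter, dotted eighth, eighth
    print(f"{beats} beats at 120 BPM -> {delay_ms(120, beats):.0f} ms")
```

Sweeping that `beats` value during a performance is, in essence, the “rate of delay changing over time” idea from the paragraph above.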
What I realised immediately was that the specific layout of the microphones and the setup of the processing worked more effectively with different sounds – broader sounds with a greater degree of flexibility. So, in response, I decided to try a different idea: I started trying to create large blocks of sound. My idea was this: the microphones will capture the sound but it will not be amplified at first; gradually, as the sounds build up, they will be amplified and will emerge covered in reverbs and delays. As the volume of these sounds increases, this wall of sound eventually overwhelms everything. Excellent. Again, I put my ideas down on paper and tried them with UPROAR in our second session. This was much closer to what I was after – I could shape this idea and control it on the score much more securely, and I think the flexibility of this type of sound generation gave the performers much more to work with. Perfect – now I just have to write the rest of the piece!

One of the hardest parts of this project so far has been finding a way into a composition via sounds that are relevant to me. You can use all sorts of programs: MAX MSP (complicated but not outrageously so – it will take about 2–6 months to get going with it comfortably), Integra (I think this is the easiest, though prone to some crashing), PureData (I know lots of people use this to create really good music – Bristol-based Poisonous Birds use it as part of their setup), SuperCollider (I’m not a coder, so this was very hard for me), or more standard affairs like Ableton and Logic Pro (I am trying to find ways to use these). I think directness and quality of sound are key. So, with that in mind, I am going to prepare two versions of my piece: one for Integra (possibly Ableton) and one with sampled sounds instead of live processing, for which I will use Logic Pro. The gut feeling I get when it comes to electronics is that you should just jump in and mess around at your laptop.
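Under the hood, that “capture silently, then gradually amplify” idea is really just an automation curve on the output gain of the captured signal. A toy sketch of such a ramp (all the timings and numbers here are made up for illustration, not taken from my actual setup):

```python
# Illustrative sketch: a slow linear gain ramp from silence to full
# amplification. The microphones capture throughout; only the output
# gain applied to the captured sound changes over time.

def gain_at(t: float, ramp_start: float, ramp_end: float) -> float:
    """Output gain (0.0 to 1.0) at time t seconds, ramping up between
    ramp_start and ramp_end."""
    if t <= ramp_start:
        return 0.0  # captured but inaudible
    if t >= ramp_end:
        return 1.0  # the wall of sound has taken over
    return (t - ramp_start) / (ramp_end - ramp_start)

for t in (0, 30, 60, 90):
    print(f"t={t}s gain={gain_at(t, 20, 80):.2f}")
```

In practice the same curve would be drawn as volume automation in Integra, Ableton, or Logic rather than written as code.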
Working with UPROAR reminds me of setting up for gigs with a large live band: microphones, the quality of the room, feedback, monitors, and all that sort of thing are just as present. I felt much safer and more confident once I was surer about what I wanted from my piece – I would advise just working on getting some basic sounds that you think will work well in a musical context. Anyone who has heard UPROAR already knows they’re fantastic, patient musicians. Writing effectively for them is extremely important to me in the process of creating this piece. But, above all, I still feel my biggest responsibility is making the best piece of music that I possibly can with the electronics at hand. Bring on the many hours of messing around with delays in Logic!
CoDI Sound - Writing for Ensemble and Electronics!
DJRoche
Oct 17, 2018
In BLOGS
My first post discussed some of my initial impressions and approaches to composing the music for Hansel, Gedeon, and the Grimms’ Wood. Since then I’ve spent a lot of time composing, a lot of time sitting in rehearsal with Hijinx Odyssey, and some time tightening up my approach to the whole project. Before I get down to the nitty gritty of the actual in-production music, I’m going to whip through some important but basic concerns in two blogs: one will look at technical elements, the other at compositional elements. Let’s go!

Straight off the bat, I cannot stress enough how important it is to make notes on a copy of the script, in a book, or on your laptop, phone, or whatever. It’s imperative that you try to understand the difference between what’s on the page and what’s actually going to happen, and making informed notes will guide you through this. There were three main technical elements that were brought to my attention by spending time with the script and theatre group.

1. Getting to know how the show works

The biggest technical concern for any composer working on a production of this kind should be working out whether the music needs to be in the background, foreground, or somewhere in between. It is completely useless and, frankly, bizarre to have a huge, dense, attention-grabbing symphonic sketch over the top of some gentle dialogue. The music has to support the on-stage action; it has to work. In Hansel, Gedeon, and the Grimms’ Wood this is made even more complicated by the fact that scenes are not static: there are fluctuations that beg for musical responses, and I’m not just talking about characters knocking on doors! Sitting in rehearsal allowed me to experience these nuanced changes. A good example, without giving anything away, exists right at the start. There is a scene where some soft, gentle musical material seems appropriate – I popped some in and it was all well and good for the first 30 seconds.
After this, some jokingly sinister dialogue occurs. Because of the flexibility of the performers I can never plan exactly when this will happen. In order to respond to this fluctuation I need two things: firstly, I need to be able to control the entry of a sound that responds to the onstage action (more in point 3); secondly, I need to ensure that this sound works with the music that’s already playing. The latter issue is simpler than it sounds, as one just has to support the harmony or soundworld that has been composed. My gentle material is in C major (a lovely little lullaby); my interjecting sound is a spooktacular low C and D dyad – nothing complicated, but it works without blasting everyone out of the way. Creating strata that can be added or removed at any point is a really important part of the compositional process for this project.

As a quick aside, there are loads of great examples of this happening in video games. Escape From Monkey Island was famous for the way it gradually added or removed musical strata to evoke a sense of location… but my favourite is Super Mario Galaxy. In one boss fight (I’m amazed at how few videos of this there were – apologies it’s not particularly focused!) the introduction of new pitches creates larger scales, and the number of introduced pitches corresponds to the number of times you have to hit the bad guy! Brilliant!

Getting back to Hansel, Gedeon, and the Grimms’ Wood… another important aspect to consider is the quality of voice that the different performers have. People inhabit roles differently, and some members of the cast speak very softly or in a tone of voice that could be easily obscured by music. Part of the process of experiencing the rehearsals and annotating the script was to explore how I could support this. The fact of the matter is that I just have to make a note of who is in each scene and compose accordingly.
If I know a certain character is performing at a given point, I will imagine them reading the lines and respond to what I think will happen on the stage – I’ll be quiet if they are. You won’t know if these things are going to happen unless you’re at rehearsal making notes; it’s as simple as that! Spending time in rehearsals also allows you to see how the director works with the cast, and the discussions about how characters will interact with more complicated onstage characters are super important. Hopefully I’ll be able to talk about this a bit more in future blogs… but I need to do some more composing first!

2. Understanding the structure

Another important aspect of spending time with the script relates to the implementation of recurring themes or ideas (musical or otherwise). One often spots the following scene structure: there’s an introductory moment where the characters enter, a perfect moment for louder music; this melts into a calmer section of speech and interaction where the characters act out the show, requiring quieter music to support it – these sections also need extra sound effects, underscoring, and bits and bobs to support the on-stage action; and finally there’s a section where the characters leave the stage, another point where the music can be prominent. My wall is covered in charts that detail the structures of the scenes; it’s so important to consider the different arcs of different scenes in relation to the structure of the show as a whole.

Less linear approaches are important too. Rehearsals are often split into groups so that different characters can workshop their scenes. The benefit of experiencing rehearsals in these smaller groups is that it allows one to see collections of scenes involving these characters in isolation. This has helped me to more easily map changes of character, recurrence of characters, and recurrence of ideas. For example, there are lots of parties in this show.
The first one is early on and the characters run into some issues; there’s a different party with a separate group of characters in a later scene; and then there’s a third one, with all three groups of characters meeting at the end. The rehearsals made me mindful of the interaction between these groups – I can now plan these scenes to interact with each other musically. This is something I might have missed if I wasn’t sitting in watching the rehearsals and making notes!

3. How are you actually going to do it?

The final thing worth talking about is how I plan on tightening up my approach to the performance of the music for the show. I was a little concerned about how I was actually going to perform the music (more on the possibility of a live musician in later blogs) as it needs to be super sleek. This might sound like a super, super obvious thing to say but, if possible, always speak to someone who has done it before. James Clarke, a previous Hijinx composer, very generously gave up a big chunk of time to chat through how he approached the music for his production, and after chatting with him I’d more-or-less decided how I was going to approach the bulk of the musical performance. He recommended using QLab to stack up different sounds, fade them in and out, loop sounds, and everything in between. Although this can be done in Logic or other sequencers, it’s definitely easier in QLab. Speaking to someone who’d done it previously stopped me from wasting time experimenting on something for which there is a simple solution! We also chatted about microphone usage, hall layout, and being flexible with the music. So, I’m hoping to spend a little bit of time in the performance hall once I’m further along with the composition. I’m certainly going to test as much of my music as I can in rehearsal, and I’m planning on having the whole score drafted by the end of October – fingers crossed I don’t regret typing this!
Hopefully this gives you an insight into how important I find spending time with a script, how important I find the rehearsals, and how much planning and thought goes into this type of composition before anything actually hits the page for keeps. My advice: soak up as much knowledge as possible, spend as much time with the script as you can, and spend as much time engaging with the actors as you can! In my next blog I’ll be chatting about basic compositional concerns, looking specifically at how I respond to these ideas musically.
BLOG 2: David Roche - Composing Hansel, Gedeon, and the Grimms' Wood: Basic Technical Concerns