Been musicking, again. As a hobby. I have few hobbies, these days, but this one suits me and brings me back to my roots, as a musician.
I took out my sax on a few occasions, but most of the musicking I do is on my iPad, which is particularly suited to playful musical activities. Part of the suitability has to do with the interface itself: a large multitouch display makes quite a bit of sense for music. Which might be why there are so many neat musical apps for the iPad.
Among the things I’ve been using on my iPad:

- Thumbjam
- Samvada
- Loopy
- Audiobus
- Sunrizer
It may sound like a lot, but it’s such a small fraction of what’s available. And it’s enough for me to have fun.
Was already having quite a bit of fun with Thumbjam on its own. A reason it’s my favourite instrument is that it makes it incredibly easy to play something pleasurable. It’s also extremely flexible and truly powerful. And it fits my musicking practice in that it’s based on scales and I tend to think melodically. Among all the music apps I’ve tried, Thumbjam is the one which gave me the most in terms of instant and lasting gratification. Not that I was using it constantly, but I kept coming back to it, every so often.
My Thumbjam experience got even more fun, recently, after I went to a bansuri and raga workshop given by Kiran Chandrashekhar Hegde. Went there with my saxophone but, in retrospect, I might have tried using Thumbjam, as translating “sargam” to the saxophone was particularly difficult for me. Still, the workshop unleashed something in me and motivated me to try a few simple things with some simple raga material.
Which is how I got into Samvada. Someone at the workshop was using it and since the app’s name is pretty prominent on its display (good job, iotic!), it was easy for me to find the app afterwards. In a way, it’s a fairly specialized app. You set it to a given raga and notes you play within that raga appear on screen. When you hit a note precisely in tune, that note resonates through the app. You can also use the app to produce a continuous drone, which also helps in tuning.
I’ve used this app with three different instruments, with satisfying results in each case. Trying it with my saxophone, at first, helped me practice the “sargam to saxophone translation” I found so problematic during the workshop. It also helped me practice intonation, which is notoriously difficult on the saxophone. Then, using my voice, I was able to sing scales within a raga, with the notes’ proper swara names. The raga I chose, bhopali, is relatively simple in this context because it’s based on a straightforward pentatonic scale using the same notes in both directions (ascending and descending). I happen to be quite familiar with that pentatonic scale, since a mode of the same scale is used by the main instrument in a Malian band I’ve been playing with for years. But being able to do “solfeggio” in that scale was quite a satisfying experience.
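For those unfamiliar with sargam, bhopali’s structure can be sketched in a few lines. This is my own illustration, not anything from Samvada: it assumes 12-EDO spacing and an arbitrary Sa for simplicity, whereas raga practice (and, as far as I can tell, the app) leans on just-intonation ratios instead.

```python
# Raga bhopali: a five-note scale, the same ascending and descending.
# Each swara is mapped to its offset in semitones from Sa.
# NOTE: 12-EDO spacing is a simplification; just intonation differs slightly.

BHOPALI = {"Sa": 0, "Re": 2, "Ga": 4, "Pa": 7, "Dha": 9}

def swara_frequency(swara: str, sa_hz: float = 261.63) -> float:
    """Frequency of a swara, with Sa set to an arbitrary reference pitch."""
    return sa_hz * 2 ** (BHOPALI[swara] / 12)

for swara in BHOPALI:
    print(f"{swara}: {swara_frequency(swara):.2f} Hz")
```

Seen this way, it’s easy to understand why the scale felt familiar: it’s the major pentatonic under another name.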
The third instrument I’ve tried with Samvada is… Thumbjam, played through Audiobus. (Bringing everything together after going off-track for a while happens to be one of the things I love the most about Classical Indian music.) What’s particularly neat about this is that Thumbjam supports microtonality in general, and Samvada’s raga-based scales in particular. Not to mention that Thumbjam has glide and continuum modes which allow for a lot of work around pitch, like the trombone, theremin, or ondes Martenot. With Samvada’s “sympathetic strings” resonating on the scale’s notes, playing around those notes is quite a bit of fun. Samvada itself is what gave me the idea to give Audiobus another try. Had heard of it through an update to Apple’s GarageBand app. I was particularly interested to hear about this because Apple rarely supports third-party features in its own apps, and interapp communication is a particularly intriguing space for iOS work. In the end, Audiobus is simple enough that it almost feels like a normal part of the platform. Yet it opens some potential for specialization in music apps, with some of them acting primarily as sound generators, effects, or recorders.
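That “specialization” idea can be sketched as a toy pipeline. None of this reflects the actual Audiobus API; the names are made up, and the point is only to illustrate the generator → effect → recorder routing described above.

```python
# Toy model of Audiobus-style routing: apps specialize as sound
# generators, effects, or recorders, and audio flows down the chain.
# Plain lists of floats stand in for audio buffers.

def generator():
    """Stand-in for an instrument app (e.g. Thumbjam) emitting samples."""
    return [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]

def effect(samples, gain=0.5):
    """Stand-in for an effect app: here, simple attenuation."""
    return [s * gain for s in samples]

def recorder(samples):
    """Stand-in for a recorder app (e.g. Loopy) capturing the result."""
    return list(samples)

recorded = recorder(effect(generator()))
print(recorded)  # an attenuated copy of the generated samples
```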
Audiobus is also how I discovered Loopy (apparently made by an Audiobus developer). Was searching for a way to record the audio resulting from my Thumbjam-and-Samvada fun. Some of the apps I already own could do recording, but Loopy sounded especially convenient. Watching videos by Macdetriumphe and Dub Fx was particularly inspiring. Made me realize how simple it was to create and control those loops. Thumbjam has its own looping system and I’ve used it on a fairly regular basis, mostly creating a simple loop on top of which I’d just have fun for a while, occasionally recording the session. What Loopy makes easy is arranging loops together, triggering them on or off quickly. In the case of playing with Thumbjam and Samvada, I realized that creating simple snippets as loops made it easy to lay down a base for my playful musicking. (Thumbjam’s loops can be imported into Loopy, which I find particularly appropriate as I create the initial loop which serves as groundwork for the rest.)
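What a looper does when you stack loops of different lengths can be sketched conceptually: shorter loops wrap around under longer ones, and the layers are summed. This is a toy model with lists of floats standing in for audio buffers, not anything about Loopy’s internals.

```python
# Conceptual sketch of loop layering: each loop repeats (wraps around)
# to fill the mix length, and all layers are summed sample by sample.

def layer_loops(loops, length):
    """Mix loops of different lengths into one buffer of `length` frames."""
    mix = [0.0] * length
    for loop in loops:
        for i in range(length):
            mix[i] += loop[i % len(loop)]  # shorter loops wrap around
    return mix

bass = [1.0, 0.0]              # two-frame loop
melody = [0.1, 0.2, 0.3, 0.4]  # four-frame loop
print(layer_loops([bass, melody], 4))  # [1.1, 0.2, 1.3, 0.4]
```

The wraparound is also where polyrhythmic interest comes from: loops of unequal lengths drift in and out of phase with one another.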
Playing with these four apps (Thumbjam, Samvada, Loopy, and Audiobus), I’ve musicked for a while and recorded two of these sessions, which I put on my Soundcloud.
Speaking of Soundcloud… Going back to it a few times in the past little while, I get to think about its potential. In a way, it’s a bit like a musicians’ GitHub, a place to “show your work” (both in the “portfolio” sense and in the context of math problems). But instead of a straightforward adaptation of the GitHub model to soundtracks (with “version control” and “forking”), it’s clearly built by musicians, for musicians and their communities. It sounds like it may be pushing communities a bit more, recently, and it’s quite possible to be a passive listener of Soundcloud music. In this sense, it can be an alternative to MySpace for musicians. Because musicians post their own sounds on the site, it’s quite a different model from the one based on record labels. As Soundcloud makes it possible to post any kind of sound, it can also be a space to share samples for remixing, or to podcast. Some musicians post actual tracks or full sets of samples, kind of like a portfolio. So far, I’ve been posting some excerpts of my playful musicking, without being too self-conscious about the fact that none of it has any interest outside of my own activities. In this sense, it’s also a bit like Instagram, Flickr, and the plethora of photo-focused online services, these days (Google+ and even Tumblr). As an aural person, I’m more in tune with Soundcloud than with many other services. I have more fun with sound than with images. Musicking is about having fun with sound the way doodling is about having fun with images.
Looping is a special kind of fun. Simple, gratifying, and potentially powerful. It got me to think about musical “inscription”, the different ways music is registered, transcribed, recorded. In fact, I had one of those epiphanies which are hard to describe and rather profound, yet based on something rather simple and well known to large groups of people. It has something to do with recordings, loops, samples, patterns, sequences, and licks in connection to music cognition. As I play a monophonic instrument, I tend to think of music a certain way.
Whether I’m sight-reading or improvising, I try to reproduce melodic patterns through sequences of notes, a bit like words in a phrase. Because producing any note involves a rather complicated set of simultaneous actions (fingers, breath, lip) and because these sets of actions are quite different from one note to another (but basically the same, regardless of tonality), it’s a bit like encoding words in a complicated system. It’s part of the reason Kiran Hegde’s workshop was so difficult for me. I’m sure other sax players (including some with classical training) would have an easier time, but I trained my brain in a specific way and it’s difficult to think about it in other ways.
Also, because I was trained in strict tonal music theory, I really think through scales. More specifically, my muscle memory is relatively good at going from one tone to another based on their connections within a given scale. This can include some uncommon scales, like the whole tone scale or scales I just “invent” by mucking around with existing ones. Unfortunately, I’ve never really been able to learn the connections between scales and chord progressions, so I have a hard time with Jazz improvisation. But the way I think about tones isn’t too far from the way Jazz soloists think of them.
Like most people in music schools, I had to take piano lessons. Unlike many others, I was unable to move beyond some very simple stuff. Again, something about “the way my brain works”. Polyphonic instruments, especially those which make it relatively easy to produce chords or even complicated harmonies, have long intrigued me. But that’s not the way I music, myself. In fact, I prefer having the harmonies come from diverse instruments and several of my favourite experiences as a musician came from the harmonic interplay between instruments (and musicians). Something as simple as the vibration felt, while playing a chord precisely in tune, can give me goosebumps. One reason I don’t like most keyboard instruments is 12-EDO, which doesn’t allow for that kind of vibration. But it’s mostly a technical difficulty which prevented me from ever learning to think like a keyboardist.
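The “vibration” in question comes down to tuning math: a chord locks in when intervals sit on exact frequency ratios, and 12-EDO never quite gets there. As a quick illustration (standard tuning arithmetic, nothing app-specific), compare a just major third with its equal-tempered counterpart:

```python
# Why a 12-EDO major third never quite "locks in": compare the
# equal-tempered third above A3 (220 Hz) with the pure 5:4 ratio.

import math

root_hz = 220.0                      # A3
just_third = root_hz * 5 / 4         # 275.00 Hz, pure 5:4 ratio
edo_third = root_hz * 2 ** (4 / 12)  # ~277.18 Hz, four equal semitones

# The 12-EDO third comes out roughly 13.7 cents sharp of just intonation,
# enough to produce audible beating instead of a stable, felt vibration.
cents_off = 1200 * math.log2(edo_third / just_third)

print(f"just: {just_third:.2f} Hz, 12-EDO: {edo_third:.2f} Hz")
print(f"deviation: {cents_off:.1f} cents")
```

That ~14-cent gap is the same no matter the root, which is why the goosebump-inducing “locked” third is simply unavailable on a conventionally tuned keyboard.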
Many instrument (and other music) apps on the iPad are based on an analogy to keyboard instruments. It’s the case with Sunrizer, mentioned above, along with NanoStudio and BeatMaker 2. Some of these produce interesting results, but I find it quite hard to really get into them. When people talk about “knowing music”, they typically mean “knowing how to play an instrument” or “knowing how to read musical notation”. With these keyboard-based apps, I feel like I don’t “know music” in that I don’t have the necessary knowledge to make them work.
Which is part of the reason Thumbjam is so much fun, for me. Set it to a given scale and everything I play will be within that scale, with notes colour-coded to make it easy to find tonal centres. As far as I can tell, SoundPrism is quite similar to this, and I know it’s meant to make it easy for people to get started musicking while allowing for more complexity. I just discovered Chordion, which adds chords and the ability to switch between scales through chord changes. Though it doesn’t support microtonality and has a fairly convoluted interface, I might be able to have quite a bit of fun with Chordion. But Thumbjam is the one which is closest to the way I think about tones, and it enabled me to expand my approach.
And Thumbjam proves particularly amenable to use as a controller for other apps. Apart from interapp communication through Audiobus, it supports a virtual MIDI connection. Something I’ve done is use Thumbjam to play Sunrizer Synth sounds. Because of its keyboard focus, Sunrizer’s fun seemed inaccessible to me. I’d play a few notes on the keyboard to try a sound, but I wouldn’t go any further. Trying the same sounds using Thumbjam is a completely different story, as I can easily perceive their potential and start musicking with them right away. Set in an appropriate scale, this technique produces something I find quite fun.
Here are some excerpts from a few minutes of playing with this combination: Thumbjam Fun in Sunrizer.
It may all sound self-indulgent or overly easy. The idea is to have fun. Much of it reminds me of Laurie Spiegel’s Music Mouse, with which I had hours of fun, some years ago. I don’t have the skills needed to have this kind of fun with the keyboard approach of so many apps.
A second approach that instrument apps tend to use is a set of “pads” which trigger distinct samples. Both NanoStudio and BeatMaker have this, in addition to the keyboard model. The “pad” approach comes from drum machines and electronic drums (BeatMaker calls them “drum pads”), but the samples need not have anything to do with drums. It sounds like this mode of triggering samples has been particularly common in different forms of electronic music, and it’s one which is native to the genre.
Wherein lies another part of my epiphany on music inscription and cognition.
As I think tonally and monophonically, I hadn’t really grokked the potential for layering or stacking sounds on top of one another. Way back when I used a sampler in my music school’s MIDI studio, it was keyboard-based. I did control it via a MIDI sequencer, but the way I conceived of those samples was still pretty similar to the way I think about notes. Of course, the technical limitations of the Ensoniq EPS were part of this. The result is that I wasn’t able to think through samples. I don’t think like a turntablist any more than I think like a keyboardist.
At least, not in practice. I’m quite able to think about the way it works and I can hear what some things may sound like once put together in a certain way. But I don’t (yet) have the skills to make samples really work for me in a musical context. What makes it sad is that my MIDI studio experience, long ago, triggered (!) a certain appreciation for electronic music, so I feel like I “should” be able to do this.
Playing with Loopy, though, it eventually hit me. Again, it’s hard to explain. But it’s about moving away not only from tones but from notes in general. The unit of inscription isn’t really like a word; it’s more like a phrase (of any length) including its full intonation, accent, grain of voice, etc. In this sense, it’s much more concrete than a note. And the way these things are layered on top of one another is quite different from the harmony and melody of instrumental music.
Again, I knew about much of this, intellectually. Ever since my experience in the MIDI studio, playing with things like Csound, Alchemy, and TurboSynth, I did get some appreciation for working with the sound itself. It led me to some work in sound analysis, including my experience in speech research. Not to mention that following my friend Alexandre Burton’s career occasionally gave me some idea of what was going on in that sphere. But I never really produced any music through these means.
With a few tools, though, I got started with my baby steps into this world.
The spark happened after a fascinating concert involving several of my musician friends. I’ve played Malian hunters’ music and Gospel with some of them. The show brought together several other genres, from Dancehall to Highlife and from Reggae to Afropop. Some things stayed in my head, including a pattern I’ve heard several times in those musical performances.
Not being well-versed in those musical genres and styles, it took me a while to identify it (it’s a pretty basic Dancehall pattern). Exploring music from that whole universe, I got several moments of musical inspiration. Problem is, I wasn’t really able to capture them. Sure, I can record myself singing a pattern, but the musical idea isn’t encapsulated in that pattern. The lack of polyphony (and polyrhythm!) in my voice is severely limiting, here. And though I can try to imitate the sounds themselves, the complexity of the sounds involved is way beyond what I can produce with my mouth.
This is another connection between recording and cognition. There’s something really powerful about recording technology as used by Jamaican soundsystems or British progrockers. Chances are that, without the former, Hip Hop wouldn’t exist. (Without the latter, “ambient music” would be quite different.) Playing with sounds instead of notes is a very specific process and it’s one which has dominated a large number of developments in music over the past fifty years and more. But it’s only now that I start “getting” it.
I guess the breakthrough came when I finally listened to Nate Harrison’s well-known recorded explanation of the “Amen break”, a few months ago. It took a while to seep in, I guess, but I think I get it, now. How the sample itself has been borrowed, transformed, reappropriated. I have no issue thinking of a rhythmic pattern being borrowed, transformed, and reappropriated. Groovologist Charlie Keil did some work on this that I found very inspiring, including some work on participatory discrepancies and “Bo Diddley’s Beat” which found its way into Music Grooves, leading to an insightful dialogue on participation. Despite Music Grooves co-author Steven Feld’s work on schizophonia (which led to its own dialogue), I still couldn’t understand what changes when what’s reappropriated is the recording itself. I mean, I understood it conceptually, but I missed something simple yet profound about making something your own by remixing it.
Speaking of remixing… It’s an important part of the Commons movement, and my contact with Remix the Commons is probably helping me understand these things. Part of what the movement is showing is that the practice of working with recordings isn’t neutral. There’s an interesting connection with collages and graffiti, here, as it’s the material itself which is transformed. No wonder culture jamming is emerging so readily.
Social, economic, legal, and political dimensions may have dominated my thoughts about work with recordings. I’m finally getting into “digital æsthetics”. In fact, the most recent episode of France Culture’s Place de la toile (in French) revolves around the relationship between art and digital technology. Was struck by some comments about shifts in the status of artists and their works, if only because it’s something I find essential yet rarely discussed. What may have had the deepest impact on me, though, has to do with æsthetics, including musical æsthetics (though the show’s focus is on visual arts). For instance, a comment about digital technology being like perspective, in art history, helped me realize something rather deep about work with samples.
All of these realizations have made me listen in new ways. I was already aware of (and interested in) soundscapes. And I’ve been accustomed to listening through recordings for a while. For instance, field recordings were the main part of my doctoral field research and, a few years later, I made informal field recordings in a variety of situations, using a portable MP3 player/recorder. All of these things contribute to the way I listen to things. (Quite difficult to explain, of course, but it has a lot to do with attention.)
Until now, though, I haven’t been so engaged in playing with sound after the fact. In creating something with those recordings. In hearing the world as musical material. Apart from a few things related to my since-defunct ethnography podcast (including creating a jingle), the last time I did so was in 1990 and quite a few things have changed, in the meantime.
Actually, one thing which changed a lot for me is the chilling effect surrounding the legality of recording practices. It was a big part of my field recording experience, and I have chills just thinking about the whole issue. Which makes me really careful not to record anything identifiable.
Still, I was able to go beyond my fears when I recorded the following using Loopy on my iPad, while standing in the bus: BusDuBass.
The experience of recording this reminded me of two apps from Reality Jockeys that I find fascinating:
In both cases, the main idea is that ambient sounds are processed to produce an even more complex aural experience than you would get otherwise. In a way, it’s the acoustic component of “augmented reality”. While the tech world focuses on visuals, I for one wish that more attention were paid to sounds.