Gestural Music Sequencer from Unearthed Music on Vimeo.
Something as simple as remapping a single knob can give you new musical ideas. So expand that to entire gestures and live video input, and you can help push your performance in new directions and out of old habits. That’s why it’s always great to see projects like the Gestural Music Sequencer.
Built entirely in free tools – tools fairly friendly even to non-coders – the GMS lets composer and musician John Keston explore new ideas through gestures captured in a video stream. It’s easier to see than to talk about, so check out the just-completed documentary short by Josh Klos, with the aid of Julie Kistler and Brian Smith. (And yes, documentation makes a huge difference; we’d love to see more of this stuff!)
The ingredients:
- Processing, the free, multiplatform coding environment [site | cdmu tag | cdmo tag]
- controlP5, a lovely, light, quick-and-dirty library for UI controls
- Ableton Live – though you could substitute other software via MIDI, Live makes a nice, familiar interactive music engine
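The core idea behind the pipeline above is easy to approximate in plain Java (the language underneath Processing): scan each video frame for its brightest pixel, then map that pixel's vertical position to pitch and its horizontal position to velocity before sending the result out as MIDI. This is a minimal sketch of that mapping, not the actual GMS source; the grayscale frame array and the pentatonic scale are illustrative assumptions.

```java
// Sketch of a GMS-style mapping: brightest pixel position -> note and velocity.
// The grayscale "frame" array and the pentatonic scale are illustrative
// assumptions, not taken from the actual GMS code.
public class BrightestPixelMapper {
    // Pitches to quantize to (one octave of C minor pentatonic, as MIDI notes).
    static final int[] SCALE = {60, 63, 65, 67, 70};

    // Find the brightest pixel in frame[y][x] (grayscale 0-255) and
    // return {note, velocity}.
    static int[] noteFor(int[][] frame) {
        int bestX = 0, bestY = 0, best = -1;
        for (int y = 0; y < frame.length; y++)
            for (int x = 0; x < frame[y].length; x++)
                if (frame[y][x] > best) { best = frame[y][x]; bestX = x; bestY = y; }
        int h = frame.length, w = frame[0].length;
        // Vertical position picks a scale degree (top of frame = high note)...
        int degree = (h - 1 - bestY) * SCALE.length / h;
        int note = SCALE[degree];
        // ...and horizontal position sets velocity (1-127).
        int velocity = 1 + bestX * 126 / Math.max(1, w - 1);
        return new int[]{note, velocity};
    }

    public static void main(String[] args) {
        int[][] frame = new int[8][8];
        frame[1][7] = 200; // a bright spot near the top-right of the frame
        int[] nv = noteFor(frame);
        System.out.println("note=" + nv[0] + " velocity=" + nv[1]);
    }
}
```

In a real Processing sketch you would read the frame from the Capture video class and hand the result to a MIDI library (GMS uses RWMidi); quantizing to a scale is what keeps an unpredictable video source sounding musical.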
Lots more information on John Keston’s wonderful Audio Cookbook blog, which is fast becoming one of my favorite reads:
http://audiocookbook.org/category/gms/
And here’s a really lovely video that demonstrates what you can do with video. It uses a string of lights in a jar as the source. Yes, in a way, it’s almost like having a very focused random generator, but I think there’s nothing wrong with that. There’s an almost analog approach to seeing the source, and using that to organically create music.
GMS: Chromatic Currents Part II from Unearthed Music on Vimeo.
I have to observe that while this works reasonably well with MIDI, it shows why standardizing on networked communication, as OSC does, makes more sense. In a world of software, “controller” can really mean anything you like. Control is increasingly about software talking to software – including when devices are involved, since they generally have a software layer of their own. Also, because sometimes it’s easier to code this with Processing than with Max, I can see some powerful uses of the Python-based Live API, which we expect to mature later this year. (Yes, the project called Live API seems to be in a holding pattern, but we may be able to work up a more complete, Live 8-ready alternative.)
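Part of what makes OSC attractive for software-to-software control is how simple the wire format is: an address pattern, a type-tag string, then big-endian arguments, each section null-terminated and padded to a 4-byte boundary, dropped into a UDP datagram. Here's a minimal encoder following the OSC 1.0 spec; the address "/gms/note" is a hypothetical example, not an address GMS actually uses.

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Minimal OSC 1.0 message encoder: address pattern, type-tag string,
// then big-endian arguments, each section padded to a 4-byte boundary.
// The address "/gms/note" below is a hypothetical example.
public class OscMessage {
    // Write an ASCII string, null-terminated and padded to a multiple of 4.
    static void writePadded(ByteArrayOutputStream out, String s) {
        byte[] b = s.getBytes(StandardCharsets.US_ASCII);
        out.write(b, 0, b.length);
        int pad = 4 - (b.length % 4); // at least one null terminator
        for (int i = 0; i < pad; i++) out.write(0);
    }

    // Encode an address with int32 arguments (e.g. note number, velocity).
    static byte[] encode(String address, int... args) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writePadded(out, address);
        StringBuilder tags = new StringBuilder(","); // type tags start with ','
        for (int ignored : args) tags.append('i');   // 'i' = 32-bit integer
        writePadded(out, tags.toString());
        for (int a : args)
            out.write(ByteBuffer.allocate(4).putInt(a).array(), 0, 4);
        return out.toByteArray();
    }

    public static void main(String[] args) {
        byte[] msg = encode("/gms/note", 60, 127);
        System.out.println(msg.length + " bytes"); // ready for a UDP datagram
    }
}
```

Compare that to MIDI's fixed seven-bit controller numbers: an OSC address is just a string you invent, which is exactly why "controller" can mean anything you like.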
By the way, our goal is to make noisepages a platform and collection of tools for people doing this sort of work (or anything creative with music and motion), even if you host your blog elsewhere. Stay tuned for the details on that.
This is really cool 🙂
And, amazingly, local to me. Thanks for posting this.
It makes me feel a lot less weird for sitting at home and trying to figure out how to sequence the mapping of what controls actually control along a timeline. If I were waving my hand in the air and then held it still, and what happens at a certain gestural position changed audibly through time, I'd have a lot less trouble getting someone a little more analytical to tweak it into a sweet spot.
Is there anything harder to put into words than nonlinear change-over-time which is non-visualizable?
This video helped me in more than one way!
I am reminded of a piece we performed earlier this year titled Blinky, by Rebecca Fiebrink. She has written a Java-based machine learning tool that extracts features from all kinds of input – though this piece involves waving flashlights at the built-in cameras on our MacBooks. The Wekinator is an incredible tool, as it allows realtime assignment of various controls to different parameters via OSC.
Wekinator: http://wekinator.cs.princeton.edu/
Thanks for the entry, Peter. I would like to note that Grant Muller (grantmuller.com) helped significantly by adding functionality to the RWMidi library including higher PPQ resolution and external sync capabilities.