Going through my GitHub today, I found a repository containing what I submitted for a class at Goldsmiths called Music Computing 1. The goal of the class was to get students playing with generative systems. I seem to remember a session in the Electronic Music Studio where we all plugged our computers into the Disklavier and listened to our generative systems perform on acoustic piano, which was a great experience, if somewhat embarrassing.
I don't have a recording of the Disklavier performance, but I do have a recording of the MIDI synthesizer on my Mac playing the melody that I submitted at the time.
My course report was light on details about what I intended to accomplish, no doubt because I wrote it in the 90-minute run-up to the submission deadline. My code was pretty inscrutable - it's been quite a while since I've touched Java-flavored Processing code (the most non-googleable language of all; maybe that's why they chose it for teaching). It turns out I was running two parallel sine functions at the same rate and using them to pick MIDI notes, rather than to play audio directly.
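I no longer have the original sketch in a readable state, but the idea can be reconstructed roughly like this - a hypothetical sketch, not my actual submission, with the note range, rate, and phase offset all invented for illustration:

```java
// Hypothetical reconstruction: two sine "agents" advancing at the same
// rate, each mapped to a MIDI note number rather than played as audio.
public class SinePicker {

    // Map sin(phase), which lies in [-1, 1], linearly onto [low, high].
    static int sineToNote(double phase, int low, int high) {
        double s = Math.sin(phase);
        return (int) Math.round(low + (s + 1.0) / 2.0 * (high - low));
    }

    public static void main(String[] args) {
        double rate = 0.25; // both agents step their phase at the same rate
        for (int step = 0; step < 8; step++) {
            double phase = step * rate;
            int noteA = sineToNote(phase, 48, 72);       // agent A
            int noteB = sineToNote(phase + 1.0, 48, 72); // agent B, offset in phase
            // one dyad per step, sent to the synth as quarter notes
            System.out.println(noteA + " " + noteB);
        }
    }
}
```

Because both agents run at the same rate, the two note streams are locked together; only the phase offset separates them.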
I evidently didn't think to map the notes to a scale of any kind, or to play with rhythm at all - it's just quarter-note dyads all the way through, so it's not a pleasant thing to listen to.
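The scale mapping I skipped would have been a small addition. A minimal sketch of one way to do it, assuming C major and snapping each raw note down to the nearest scale degree (the scale choice and snapping direction are my own, not anything from the original submission):

```java
// Quantize raw MIDI note numbers onto a scale so the dyads stay diatonic.
public class ScaleMap {

    // Pitch classes of the C major scale.
    static final int[] MAJOR = {0, 2, 4, 5, 7, 9, 11};

    // Snap a note to the nearest scale degree at or below its pitch class.
    static int quantize(int note) {
        int pc = note % 12;
        int best = 0;
        for (int deg : MAJOR) {
            if (deg <= pc) best = deg;
        }
        return note - pc + best;
    }

    public static void main(String[] args) {
        // C#4 (61) snaps down to C4 (60); F#4 (66) snaps down to F4 (65).
        System.out.println(quantize(61) + " " + quantize(66));
    }
}
```

Running each sine-picked note through a filter like this, before it reaches the synth, would at least have kept the output in key.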
After 8 years of distance, it's great to look back at what I submitted and see where I could've gone if I had put more effort in, or had stronger coding skills at the time - how I could've disentangled the relationship between the two "agents" in the piece and generatively automated each agent's parameters.
I piped the MIDI output into Ableton Live, and after a little MIDI wrangling and synth patch knob twiddling, I ended up with this.
Not really musically interesting, but a revisited view of the original, and maybe a starting point for new music based on generative systems.