One of the best experiences at Opcode Systems was getting to know the pioneer, innovator, and musician David Zicarelli, and benefiting from his contributions.
In 1985 Dave Oppenheim, who had just founded Opcode, was demonstrating his new MIDIMac sequencer and MIDI interface in a booth at a "strange little Mac Expo" on the Stanford University campus. Zicarelli bought a MIDI interface and Oppenheim suggested he write a software editor for the extremely popular Yamaha DX7. Because the hardware had very few controls and only a small info LCD, it was very time-consuming to customize sounds.
Right before Summer NAMM that same year Zicarelli gave Oppenheim a floppy disk with a DX7 editor to take to the show, and the relationship with Opcode was born. In addition to everything else, Zicarelli was instrumental in bringing talented engineers and marketing people into the company.
In the late 80s David designed a couple of algorithmic composition applications called Jam Factory and M. They were marketed by Intelligent Music, a company started by one of David's teachers, Joel Chadabe, a pioneer of harnessing the computer for music.
Later David met Miller Puckette (the original author of Max) at IRCAM in Paris. David worked with Miller to make the application more object-oriented and accessible. Opcode became the distributor of Max in 1989 and remained in that role until 1997, when distribution was taken over by David's newly founded company, Cycling '74.
In 2009 the teams at Cycling '74 and Ableton merged parts of Max into Ableton's Live DAW, fulfilling the dream their respective founders shared of a truly powerful DIY environment for musicians and producers willing to learn it.
Typically modest about his achievements and eager to share the credit with his compadres, David spent some time talking about his experiences.
What led you to Stanford?
I wanted to study computer music in graduate school and all six places I applied rejected me. Then I met Andy Schloss and David Jaffe who were at CCRMA at Stanford and they suggested I apply to a small program called Hearing and Speech Sciences and focus on auditory perception rather than music composition. That was excellent advice as I found I enjoyed studying psychology a lot more than music.
What was the genesis of your DX7 Editor?
So Chris, the fact that you even know to ask me this question means I should point out that you and I have known each other for a ridiculously long time; we met while we were both involved with Opcode Systems back in the mid 1980s. I met Dave Oppenheim, who started Opcode, when he came to some kind of fair to show his brand-new MIDI interface for the original Macintosh. That was probably early 1985. I was interested in using it to write software, and he suggested I write an editor for the DX7 I had. Another good piece of advice. I learned a lot from that project.
What inspired you to invent Jam Factory and M?
These were interactive composition programs I worked on after the DX7 Editor. They were inspired by the ideas of my teacher Joel Chadabe. M was a more direct reflection of Joel's concepts of compositional variables and was created with two other students of Joel's, John Offenhartz and Antony Widoff. Joel wanted users to be able to create and adjust aspects of their music while it was being composed by a computer algorithm. These days, this is just kind of the way everything works, but at the time it was a pretty new idea.
Jam Factory performed a simple analysis of what you played into it and immediately started replaying a version of it back to you. I was interested in a certain kind of sound in which, if you zoomed out, the music would sound vaguely the same, but if you focused on the specific notes it wouldn't really be repeating. I haven't really ever gotten past that idea and it's still what I try to do all the time.
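The analyze-and-vary idea described here can be sketched with a simple first-order Markov model over MIDI notes: record which note follows which in the input, then walk those transitions to produce a line that shares the input's local character without literally repeating it. This is only an illustration of the concept, not Jam Factory's actual implementation; the function names and the example phrase are invented.

```python
import random

def build_model(notes):
    """Map each note to the list of notes that followed it in the input."""
    model = {}
    for a, b in zip(notes, notes[1:]):
        model.setdefault(a, []).append(b)
    return model

def vary(notes, length, seed=0):
    """Generate a new line that reuses the input's note-to-note transitions."""
    rng = random.Random(seed)
    model = build_model(notes)
    current = notes[0]
    out = [current]
    for _ in range(length - 1):
        # Follow a recorded transition; fall back to any input note
        # if the current note was never followed by anything.
        current = rng.choice(model.get(current, notes))
        out.append(current)
    return out

phrase = [60, 62, 64, 62, 60, 67, 64, 62]  # a hypothetical played phrase
print(vary(phrase, 16))
```

Zoomed out, the output uses the same notes and note-to-note motions as the input; zoomed in, the exact sequence keeps changing.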
How did you meet Miller Puckette, and how was your vision for Max different from his?
After I had worked on some interactive composition programs for a few years, Miller asked me if I would be interested in "commercializing" Max with him. At the time, before there was the possibility of giving your software away by putting it on the Internet, the only way you could really get your ideas in front of people was putting your program on some floppy disks and selling it.
I don't know that I had a "vision" for Max but the first thing I thought would be good to do was to apply all the stuff I had learned working with Dave Oppenheim, like how to do accurate timing and high performance MIDI I/O. These things were not built into the operating system the way they are now. Music software inventors had to do super low-level programming to get these computers to perform at a level musicians would accept.
How do you see the relative merits of Max's graphic programming environment and line-oriented coding?
What I initially found cool about Max was that it recognizes a fundamental problem very specific to using computers creatively, which is that most of the ideas you have when you're working on art or music are actually lame and you need a way to discover that quickly so you can get to the ideas that are worthwhile. Max as an environment lets you explore creative ideas much more effectively than text-based programming — you can make changes in a second that would take half an hour in a text-based language. But many of the low-level building blocks that you need to do that are usually better created in text-based programs because the code will run faster and you can manage complexity better.
In the past 10 years my co-workers and I at Cycling '74 have been exploring something we call code generation that addresses both the performance problems and some of the low-level things you can only do with textual programming. You can make something that is graphical that looks like a Max patcher and the code generation translates that into actual C++ source code you can use for whatever purpose you want.
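The idea of translating a graphical patch into C++ source can be illustrated with a toy generator: represent a patch as a linear chain of signal operators and emit the corresponding C++ function as text. The patch representation, operator names, and emitted code are all invented for illustration; real code generation (as in Gen) is far more elaborate.

```python
# A hypothetical "patch": a chain of operators applied to one signal.
PATCH = [("mul", 0.5), ("add", 0.1)]  # gain, then DC offset

def generate_cpp(patch, fn_name="process"):
    """Emit C++ source text for a per-sample processing function."""
    ops = {"mul": "x *= {};", "add": "x += {};"}
    body = "\n".join("    " + ops[op].format(arg) for op, arg in patch)
    return (
        f"float {fn_name}(float x) {{\n"
        f"{body}\n"
        f"    return x;\n"
        f"}}\n"
    )

print(generate_cpp(PATCH))
```

The point is that the graphical structure is just data, so it can be compiled into source code for whatever target you want.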
Cycling '74 was merged with Ableton a few years ago. What has changed for you since then?
Nothing, except that my meetings start earlier in the morning because I live in California.
Max and MFL have a very nice way of blurring the lines between operating, reconfiguring and coding. How has this affected the way people use these tools to make music or other art forms?
I think "using" Max is not just making software with it. You can be a user of Max by using MFL devices in your Live set. Or maybe I make the MFL device and then I use it in a Live set I am creating. If you do learn how to create stuff in Max, then anything you don't like about someone else's MFL device you could potentially change to work the way you prefer. I suppose one thing that's happened is that some people might have been influenced to think about their music from a process perspective, not just a content perspective.
How is MFL different from the full version of Max? Have you noticed similarities (or differences) between the users?
The focus with MFL is extending the Live environment by creating audio effects or instruments. There are some restrictions: newer features of Max such as Gen and MC can't be edited in the MFL version. Another way to think about Max for Live is that it provides all the features of Live to users of Max; in other words, it can be Live for Max. For example, you can apply automation to the stuff you do in Max, or leverage the Live mixer or all the stuff you can do with the warp engine. You could build a lot of that stuff in Max, but why go to all the trouble?
What are some of the creative or innovative ways people use Max that have surprised you? Composition? Live Performance? Any favorites?
There are so many amazing things I've seen with the software that it's almost impossible to single anything out, but I will mention two of my favorite artists whose work is surprising.
First, Luke Dubois finds ways to make Max do things it should never do, like generate maps as PDF files and analyze large data sets. For example, he has a piece called A More Perfect Union in which he generated maps of every state in the US, with each city's name replaced by the most common word in the online dating profiles of people who live in that city.
I am also a big fan of Katsuhiro Chiba's music — I simply have no idea how he manages to achieve such incredibly detailed timbres. And the visual interface of the Max patch he uses to perform is just as gorgeous and detailed as the music it produces.
How have you noticed that MFL has changed the way people make music?
If you think about music making in a larger sense than just what recordings people are releasing, then you could say that a lot of the devices people make with MFL can be as much artistic statements as practical tools. I think this has an impact on the people who use them, because Live is a place where people are trying things out, obsessively tweaking stuff. So when you are using Live, you are in a position to be influenced by these creative tools.
Robert Henke's Granulator would be an example for me. The sounds that device makes changed how I think about mixing and effects. You can use Granulator as a kind of "reverb" but it isn't really reverb at all. The other way Granulator influenced me was to think about this more abstract concept of reverb and how I might be able to make other tools that take this idea as a starting point.
Do you use any parts of Max and/or MFL in music practice? Gen, MC, Jitter, The Live Object Model, etc.?
I think about the music I make as building a system or process rather than a fixed recorded track. The two aspects of Max I use the most these days are the MPE support and MC. As a piano player I like how quickly I can be expressive with a single key and apply different parts of that performance to different parameters in my system simultaneously. It's like being able to turn five knobs at the same time but I can do it with one finger.
MC, which allows working with large numbers of audio channels, has completely changed how I work with making sounds in Max. I can quickly make complex sounds that would have otherwise required hours of tedious patching. Because it would have been so tedious to make those patches, I never would have tried making those sounds in the first place. That's always the thing I hope to achieve when working on the software — make an idea visible to someone that they would have never seen before.
Max has been an enabling technology for many people. How does it feel to see the many creative things people make with it?
When I used to show something created with Max to my kids, they would say, "Oh, there's another one of Dad's weird friends." I like that there are at least a few tools out in the world that cater at least a little bit to people who aren't completely normal. I think this is particularly important in the context of education. I've heard so many stories of people who had a transformative experience learning Max. They may not even use the software any more, but Max caused some realization about what they were capable of doing. No other subject in school did that for them. I have no idea what happens in those cases.
Max can be a useful tool for just about anything related to audio, but it's not for everybody. An application as powerful as Max requires some dedication to learning it. Here are some additional resources: