On May 31st, I got to attend the DigiPen Audio Symposium! I had a great time and learned a ton about using computers and other hardware to create audio.
“Peggle 2: Live Orchestra Meets Highly Adaptive Score”
This presentation showed how the interactive audio for the game Peggle 2 came about. It was a really interesting look at creating a demo of what Guy Whitmore wanted to do with the audio, selling it to the decision makers at PopCap, implementing the audio technically, and how all the sounds fit together musically. I really enjoyed this talk because it didn’t just cover implementing the audio; it ran the gamut of how to add audio to a game.
“From Speaking Statues to Ventriloquists to Vocoders to Autotune: A Brief History of Technology and the Expressive Voice”
Perry Cook took us on a tour of music creation “hardware” starting, of course, with the voice and the creation of language through mimicking other animals. The history behind sound is amazing, and as we approached modern times, it only got more fascinating. Perry gave a demonstration of a new electronic instrument he created called the “C.O.W.E.” (Controller, One With Everything), which he built from spare audio parts he had lying around. He blows into the top like a bagpipe chanter, and it has buttons where the chanter would have holes. But there, the similarity ends. The instrument also has motion sensors in it, so he can manipulate the sound by moving the instrument around.
“The Art and Science of Aural Cinema”
With Richard Karpen, we listened to several different types of music, everything from Beethoven to Stevie Ray Vaughan to modern computer-generated compositions. It was a very interesting look at different kinds of music, what they have in common, and how they differ. One of the more fascinating instruments shown was a đàn bầu, a single-string instrument from Vietnam. It’s amazing how much variety can come out of a single string!
“Procedural and Interactive Sound Design and Music I and II”
ArenaNet sent three people who work on different aspects of audio for Guild Wars 2.
The first person works as an audio programmer. He talked about the different aspects of adding audio to games and the challenges some game frameworks present. He is currently creating a new audio server for Unity because Unity’s built-in audio system was not flexible enough for him. I’ve just started playing around with Unity myself, but I’ll leave that for another post 🙂
The second person is a music composer, mixer, and editor. He showed several ways of capturing new sounds (like a bucket of plaster and a plunger, or driving down I-90 with the sunroof open and sticking things out of it for different wind effects). Then he showed how to process those recordings into various effects.
The third person bridged the two previous speakers. He said his basic job is to get the other two to work together to get good audio into the game. But he also talked about how all three of them sometimes wear each other’s hats: all of them program sometimes, or create effects at other times.
“Choirs of the Future? A Brief History, State of the Art, and Projections About Singers and Technology”
Perry Cook came back a second time to talk about some of the directions audio is heading in the future. He showed several recent concerts where each instrument was a computer hooked up to a speaker. Each person at a computer was composing in real time, and together they were generating a song. It was really fascinating. He is also working on a “song book” that he holds in his hands, which contains a tablet, microphones, cameras, and several recorders. He then holds a concert where he is the only singer, but he uses the “song book” to record himself while he is singing and then plays it back during the concert. This has the effect of multiplying his voice so it sounds like a complete choir is singing.
Overall, it was a fascinating look at music and audio. I’m really glad I went, and I learned a ton. Now I have a new list of things to research. I’m also really looking forward to next year’s symposium.