
Field and Toys 2010.

The voice of WALL-E

Today’s article is about Disney-Pixar’s animated film WALL-E and the memorable voice of its little robot protagonist. According to sound designer Ben Burtt, the robot voices are “like a toddler [...] universal language of intonation. ‘Oh’, ‘Hm?’, ‘Huh!’, you know?”[44].

Ben Burtt explains how he created the voice of WALL-E here: “You start with the human voice input and record words or sounds and then it is taken into a computer and I worked out a unique program which allowed me to deconstruct the sound into its component parts.

I could reassemble the WALL-E vocals and perform it with a light pen on a tablet. You could change pitch by moving the pen or the pressure of the pen would sustain or stretch syllables or consonants and you could get an additional level of performance that way, kind of like playing a musical instrument.


Voices are the hardest because the audience listens to them with much more critical ears than sound effects. We are all experts at interpreting the nuances of speech, so the audience really listens carefully to anything that might be interpreted as a voice or an expression.”
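Burtt’s custom program isn’t publicly documented, but the idea of “performing” a recorded sound with a pen can be sketched roughly. Below is a toy granular scrubber in Python/NumPy where a hypothetical pen position selects the playback point in a recording and pen pressure sustains that moment by stacking more grains; every name and value here is my own illustration, not Burtt’s actual tool.

```python
import numpy as np

SR = 44100

def granular_scrub(source, positions, pressures, grain_len=2048, hop=512):
    """Toy 'pen performance': for each control step, take a grain from the
    source at the current pen position and overlap-add it into the output.
    Higher pressure stacks more copies of the grain, sustaining that moment."""
    window = np.hanning(grain_len)
    out = np.zeros(len(positions) * hop + grain_len)
    for step, (pos, pressure) in enumerate(zip(positions, pressures)):
        start = int(pos * (len(source) - grain_len))   # pen x -> position in the recording
        grain = source[start:start + grain_len] * window
        repeats = 1 + int(pressure * 3)                # pen pressure -> sustain
        for r in range(repeats):
            offset = step * hop + r * (hop // repeats)
            out[offset:offset + grain_len] += grain / repeats
    return out

# Stand-in for a recorded vocal: a rising chirp (use real samples in practice).
t = np.arange(SR * 2) / SR
vocal = np.sin(2 * np.pi * (120 + 80 * t) * t)

# A made-up "pen gesture": sweep slowly through the file while varying the pressure.
steps = 200
pen_position = np.linspace(0.0, 1.0, steps)
pen_pressure = 0.5 + 0.5 * np.sin(np.linspace(0, 6 * np.pi, steps))

performed = granular_scrub(vocal, pen_position, pen_pressure)
```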

Here’s a little example of what I’ve been able to recreate in Kyma with my own voice:


FFT Synthesis/Resynthesis

  • Additive synthesis parameters in a discrete-time implementation can be determined using the Fast Fourier Transform (FFT).
  • The analyzed time-domain signal is split into blocks or “frames”, each of which is processed using the FFT (this is referred to as the Short-Time Fourier Transform, or STFT).
  • The STFT provides a means for joint time-frequency analysis.
  • As well, a time-domain signal can be resynthesized using the Inverse Fast Fourier Transform (IFFT). The resulting IFFT frames are “assembled” using overlap-add techniques.
  • With improvements in computer processing speed, it is now possible to perform IFFT resynthesis in real time.
  • FFT/IFFT synthesis lends itself well to sound transformations, such as time-stretching and pitch scaling (a rough code sketch of this analysis/resynthesis chain follows below).
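The list above describes what tools like Kyma do under the hood when they resynthesize a spectrum. As a rough illustration (plain Python/NumPy, not Kyma code), here is a bare-bones STFT analysis followed by IFFT resynthesis with overlap-add; time-stretching or pitch scaling would be done by modifying the frames between the two steps and would also need phase adjustments that are omitted here.

```python
import numpy as np

def stft(x, n_fft=1024, hop=256):
    """Split the signal into overlapping frames, window each one,
    and take its FFT (the Short-Time Fourier Transform)."""
    win = np.hanning(n_fft)
    frames = []
    for start in range(0, len(x) - n_fft, hop):
        frames.append(np.fft.rfft(x[start:start + n_fft] * win))
    return np.array(frames)

def istft(frames, n_fft=1024, hop=256):
    """Inverse-FFT each frame and overlap-add the results back into a
    time-domain signal, compensating for the window gain."""
    win = np.hanning(n_fft)
    length = hop * (len(frames) - 1) + n_fft
    out = np.zeros(length)
    norm = np.zeros(length)
    for i, spectrum in enumerate(frames):
        frame = np.fft.irfft(spectrum, n_fft)
        out[i * hop:i * hop + n_fft] += frame * win
        norm[i * hop:i * hop + n_fft] += win ** 2
    return out / np.maximum(norm, 1e-8)

# Quick check: analyse and resynthesize a test tone.
sr = 44100
t = np.arange(sr) / sr
signal = np.sin(2 * np.pi * 220 * t)
rebuilt = istft(stft(signal))
```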

Here’s a very short and simple explanation of the Kyma patch that you can see at the bottom of this page:

As Ben Burtt put it simply:

We all know how pictures are pixels now and you can rearrange pixels to change the picture. You kind of do the same thing with sound.


  • FFT Synthesis/Resynthesis patch in Kyma:

Digging in Kyma.

This is the very first post of my blog and, as an introduction, I’d like to share the sounds from my demo reel, which you can watch at http://www.jedsound.com/. Type “kyma” for the password.

Everything started when I received a piece of software called Kyma from Symbolic Sound. As I was reading Kyma X Revealed! by Carla Scaletti, I discovered many ways to manipulate and process sound that I had never heard before. It took me over six months to dig into every aspect of the “prototypes” (the leaves of the big tree). Then I could build more complex patches (the branches of the tree) depending on my needs and write little scripts in the Kyma language to control my sounds algorithmically. For example, a sound can itself act as a parameter of another sound, which in turn controls a parameter of a totally different sound. This was a very good learning experience that opened my mind to what a sound actually is: frequencies and amplitudes are nothing more than sequences of numbers that can be modified mathematically.
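To make that idea concrete outside of Kyma, here is a tiny NumPy sketch where one sound literally becomes a parameter of another: a slow sine drives the frequency of a second oscillator, whose output in turn drives the amplitude of a noise texture. The signal chain and values are my own, purely for illustration.

```python
import numpy as np

sr = 44100
t = np.arange(sr * 2) / sr   # two seconds

# Sound A: a slow sine, used not as audio but as a control signal.
control = np.sin(2 * np.pi * 0.5 * t)

# Sound B: its frequency is a parameter driven by sound A (a slow vibrato-like sweep).
freq_b = 440 + 100 * control
sound_b = np.sin(2 * np.pi * np.cumsum(freq_b) / sr)

# Sound C: a noise texture whose amplitude is a parameter driven by sound B.
sound_c = np.random.uniform(-1, 1, len(t)) * (0.5 + 0.5 * sound_b)

# In the end it is all just arrays of numbers being combined mathematically.
```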

  • A mix of my favourite ones:


So, I downloaded a few trailers I found interesting in terms of visual texture and dynamics, converted them into QuickTime DV, brought them into a Pro Tools session and started recording while performing my sounds in real time to picture.

I really wanted to go further and use fresh sounds instead of picking from my libraries. I was so inspired by all those talented and dedicated sound designers sharing their sound effects on the web, such as Chuck Russom, Nathan Moody, Tim Prebble, Michael Raphael, David Steinwedel and others, that I decided to go into the field too…

I came back with a lot of fresh sound material recorded at 96 kHz/24-bit and tagged with metadata in Soundminer. This includes spring coils and slinkies, electromagnetic fields recorded with guitar pickups, neodymium magnets, motors, servos, gadgets and gizmos, metal impacts and underwater metal impacts, wobble boards, car doors, washing machines, sewing machines, dumpsters, bungee cords, elastics, slingshots, balloons, wine glasses, chairs, wind, dragging bags on carpets and more, all of which I will talk about later in other articles on this blog.

From all that stuff, I built around 10 GB of processed elements in Kyma, MetaSynth, the Michael Norris suite, IRCAM’s AudioSculpt and Pro Tools with Waves, SoundToys, GRM Tools and Altiverb. Obviously, only a small percentage of that was used, since some of it was intended for other projects.

For this article though, I’ll stay focused on the sounds processed in Kyma… (read more!)
