
Computer-Generated Sound Effects set to Revolutionize Film Industry

Computer-generated sound effects could spell the end of an entire section of the film production industry. | Image By BokehStore | Shutterstock


Stanford researchers have devised a new system that automatically produces computer-generated sound effects to accompany animated content.

Widely used in films, television, video games, VR, and advertising, Computer-Generated Imagery (CGI) is becoming so realistic that it's getting hard to distinguish from reality.

Highly photorealistic CGI can pose serious problems. In court, for example, jurors may find themselves having to make a judgment based on fake photos presented as genuine evidence.

But while computers can generate images and 3D animations that look more "real" than ever, they still can't do much when it comes to sound.

In the post-production stage of a film, for example, Foley artists recreate many of the sounds and noises we hear, using props and tricks of all kinds.

The job of Foley artists requires a lot of talent, imagination, and dedication.

Now, a new computer-generated sound effects system from Stanford University could make things easy for sound artists, or spell the end of their craft.


Any Sound Effect, at the Push of a Button

A team of Stanford scientists has developed a computer-generated sound effects system that automatically renders realistic sound effects in sync with computer animations.

Researchers think that “In addition to enlivening movies and virtual reality worlds, this system could also help engineering companies prototype how products would sound before being physically produced, and hopefully encourage designs that are quieter and less irritating.”

To produce synchronized sounds, the wave-based system takes into account the geometry and the physical movement of objects to calculate the vibrations that they will naturally generate.

As objects move, they create pressure waves that bounce off surrounding surfaces, and it's these waves that the Stanford algorithm models to recreate sounds. Thanks to "acoustic shaders," the system can also reproduce room acoustics, like the echoes in a big cathedral.
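To give a rough feel for sound synthesis driven by an object's physical vibrations, here is a toy sketch in Python. It uses simple modal synthesis (a sum of exponentially decaying sinusoids, one per vibration mode) rather than the full wave-equation solver the Stanford system employs, and all frequencies, amplitudes, and decay rates below are made-up illustrative values:

```python
import numpy as np

def synthesize_impact(frequencies, amplitudes, decays,
                      duration=1.0, sample_rate=44100):
    """Toy modal synthesis: model a struck object's sound as a sum of
    exponentially decaying sinusoids, one per vibration mode."""
    t = np.linspace(0.0, duration, int(sample_rate * duration), endpoint=False)
    signal = np.zeros_like(t)
    for freq, amp, decay in zip(frequencies, amplitudes, decays):
        signal += amp * np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)
    # Normalize to [-1, 1] for playback
    return signal / np.max(np.abs(signal))

# A rough "metallic ping": a few inharmonic modes, higher ones decaying faster
audio = synthesize_impact(
    frequencies=[523.0, 1410.0, 2650.0],  # hypothetical mode frequencies (Hz)
    amplitudes=[1.0, 0.6, 0.3],
    decays=[3.0, 6.0, 12.0],              # per-second decay rates
)
```

In a real system like Stanford's, the mode frequencies and amplitudes would come from the object's geometry and material, and the resulting pressure waves would then be propagated through the scene, which is where the acoustic shaders come in.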

Watch the video below to hear sounds that are not only realistic but also perfectly synchronized with the animation, whether it's water filling a glass, a cymbal being struck, a Lego brick falling to the ground, a spinning bowl, or a virtual person speaking through a megaphone.

The team will present their work at ACM SIGGRAPH 2018, the annual conference on computer graphics and interactive techniques (Vancouver, August 12-16).

Stanford's Integrated Wavesolver, as it's called in the paper, works offline and could eventually render libraries of pre-recorded sounds obsolete.

What would Foley artists think of such sound synthesis systems?


Zayan Guedim

Trilingual poet, investigative journalist, and novelist. Zed loves tackling the big existential questions and all-things quantum.
