Scientists develop system to synthesise realistic sounds for computer animation

Stanford scientists have developed a system that can automatically synthesise realistic sounds for computer animation.


PTI | Updated: 06-08-2018 17:09 IST | Created: 06-08-2018 16:46 IST

In addition to enlivening movies and virtual reality worlds, the system could also help engineering companies prototype how products would sound before being physically produced.

It could also encourage designs that are quieter and less irritating, the researchers at Stanford University in the US said.

"The first water sounds we generated with the system were among the best ones we had simulated - and water is a huge challenge in computer-generated sound," said Doug James, a professor at Stanford University.

Informed by geometry and physical motion, the system figures out the vibrations of each object and how, like a loudspeaker, those vibrations excite sound waves.
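
The article gives no implementation detail, but the first half of that idea can be sketched very simply. The Python snippet below is a minimal illustration, not the researchers' code: it models a struck object's surface vibration as a bank of damped modal oscillators whose frequencies, dampings and amplitudes are invented here, whereas a real pipeline would derive them from the object's geometry and material. That vibration signal is what would drive the acoustic wave simulation, much as a loudspeaker cone drives the air.

```python
import numpy as np

# Minimal sketch (not the Stanford system): a struck object modelled as a
# bank of damped modal oscillators. The frequencies, dampings and
# amplitudes below are invented; in practice they would come from a modal
# analysis of the object's geometry and material.
sample_rate = 44100
duration = 1.0                       # seconds of vibration to synthesise
t = np.arange(int(sample_rate * duration)) / sample_rate

# (frequency in Hz, damping in 1/s, excitation amplitude) for each mode
modes = [(440.0, 6.0, 1.0), (980.0, 9.0, 0.6), (1640.0, 14.0, 0.3)]

# Surface vibration = sum of exponentially decaying sinusoids.
vibration = sum(a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
                for f, d, a in modes)

# This signal would be fed, as a moving-boundary condition, into the
# acoustic wave simulation that produces the audible pressure field.
print(vibration[:5])
```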

It computes the pressure waves cast off by rapidly moving and vibrating surfaces but does not replicate room acoustics.

Although it does not recreate the echoes in a grand cathedral, it can resolve detailed sounds from scenarios like a crashing cymbal, an upside-down bowl spinning to a stop, a glass filling up with water or a virtual character talking into a megaphone.
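
As a rough illustration of that distinction, the one-dimensional finite-difference sketch below (again Python, with invented parameters, not the published solver) radiates pressure waves from a vibrating surface into open air; the far boundary simply absorbs outgoing waves, so nothing echoes back, mirroring the system's focus on radiated sound rather than room acoustics.

```python
import numpy as np

# Minimal 1-D illustration (not the researchers' solver): pressure waves
# radiated by a vibrating surface at x = 0 into open air. The far end uses
# a simple absorbing condition, so no room reflections are produced.
c = 343.0                  # speed of sound in air, m/s
dx = 0.01                  # grid spacing, m
dt = dx / c                # time step at the 1-D stability limit
n = 400                    # number of grid points
steps = 1200

p_prev = np.zeros(n)       # pressure one step in the past
p = np.zeros(n)            # pressure at the current step
listener = []              # pressure recorded partway down the domain

for k in range(steps):
    lap = np.zeros(n)
    lap[1:-1] = p[:-2] - 2 * p[1:-1] + p[2:]
    p_next = 2 * p - p_prev + (c * dt / dx) ** 2 * lap

    # The vibrating surface drives the left boundary (a bare 500 Hz tone
    # here; in practice this would be the simulated surface motion).
    p_next[0] = np.sin(2 * np.pi * 500.0 * k * dt)

    # First-order absorbing boundary on the right: outgoing waves leave
    # the domain instead of bouncing back as echoes.
    p_next[-1] = p[-2]

    p_prev, p = p, p_next
    listener.append(p[n // 2])

print(f"peak pressure at the listener point: {max(listener):.3f}")
```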

Most sound in animation comes from pre-recorded clips, which require vast manual effort to synchronise with the action on-screen.

These clips are also restricted to noises that already exist; they cannot produce anything new.

"Ours is essentially just a render button with minimal pre-processing that treats all objects together in one acoustic wave simulation," said Ante Qu, a graduate student at Stanford University.

The simulated sound that results from this method is highly detailed.

It not only takes into account the sound waves produced by each object in an animation but also predicts how those waves bend, bounce or deaden based on their interactions with other objects and sound waves in the scene.
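
A toy version of that coupling, and of the "one acoustic wave simulation" mentioned in the quote above, can be sketched with a single two-dimensional pressure grid. The snippet below uses invented geometry and parameters and is not the published method: one vibrating source and one obstacle share the same simulation, so waves reflect off and diffract around the second object instead of each object being handled in isolation.

```python
import numpy as np

# Minimal 2-D illustration (invented setup, not the paper's method): all
# objects share one acoustic grid, so waves from a vibrating source
# scatter off a second object rather than being computed separately.
c, dx = 343.0, 0.01
dt = dx / (c * np.sqrt(2.0))        # 2-D stability limit
n = 200
steps = 400

p_prev = np.zeros((n, n))
p = np.zeros((n, n))

# A block occupying part of the grid; pinning the pressure to zero inside
# it is a crude scattering boundary (a real solver would enforce the
# object's actual surface condition).
obstacle = np.zeros((n, n), dtype=bool)
obstacle[80:120, 120:130] = True

src = (100, 60)                     # location of the vibrating object

for k in range(steps):
    lap = np.zeros((n, n))
    lap[1:-1, 1:-1] = (p[:-2, 1:-1] + p[2:, 1:-1] +
                       p[1:-1, :-2] + p[1:-1, 2:] - 4 * p[1:-1, 1:-1])
    p_next = 2 * p - p_prev + (c * dt / dx) ** 2 * lap
    p_next[src] += np.sin(2 * np.pi * 1000.0 * k * dt)   # vibrating source
    p_next[obstacle] = 0.0                                # second object
    p_prev, p = p, p_next

# The field behind the obstacle now contains reflected and diffracted
# energy that a per-object simulation would miss.
print(f"pressure in the obstacle's shadow: {p[100, 160]:.4f}")
```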

(This story has not been edited by Devdiscourse staff and is auto-generated from a syndicated feed.)
