DROPPING SOUND 3D

Interactive Programming | Fall 2020

This project aims to enable people to edit sound in a more immersive and interactive way. Instead of the mechanical, 2-dimensional graphics that most sound-editing software uses today, this project transforms the traditional soundtrack into simple knocking motions, similar to how we perceive sound in everyday life.

Deliverables

A Unity Demo of a 3D Sound Editor with hand tracking features

Time Span

2 months

Team

Independent Study

DESIGN THOUGHTS

How do we edit sounds?

How do we perceive sounds?

In the physical world, we perceive sound by listening with our ears, and we are constantly exposed to sounds all around us.

However, when we edit sounds, we tend to visualize them as sound waves flattened onto a single surface.
Is there a way to visualize and edit sound in a more interactive way?

BACKGROUND RESEARCH

A group of Japanese researchers proposed a new system to visualize sound. They noted that the traditional planar display of sound waves cannot convey depth information directly, and they solved the problem using an STHMD (see-through head-mounted display) system, which enables binocular vision and movement of the viewpoint. This showed me how helpful an STHMD system can be in conveying information.


A different group of researchers took a completely different approach, focusing on individual aspects of how people perceive sounds. They created three GUI systems based on different logics, such as semantics and retrieval, and by testing how people process this information they derived design implications for sound visualization and recognition.


They also suggested that a thumbnail of the sound-producing object shown together with a spectrogram provides the most information about a sound and is thus the most effective, whereas a spectrogram by itself can be confusing for people who are not familiar with it.


References:

Tatsuya Ishibashi, Yuri Nakao, and Yusuke Sugano. 2020. Investigating Audio Data Visualization for Interactive Sound Recognition. In 25th International Conference on Intelligent User Interfaces (IUI ’20), March 17–20, 2020, Cagliari, Italy. ACM, New York, NY, USA, 11 pages. https://doi.org/10.1145/3377325.3377483

Suddha Sourav, Ramesh Kekunnaya, Idris Shareef, Seema Banerjee, Davide Bottari, and Brigitte Röder. 2019. A Protracted Sensitive Period Regulates the Development of Cross-Modal Sound–Shape Associations in Humans. Psychological Science. https://doi.org/10.1177/0956797619866625

EXPERIMENT ANALYSIS

To further investigate the relationship between the general shape of an object and the sound it produces, I designed an experiment in which I made sounds using objects of different materials and shapes. I used two ways of collecting sounds and looked for patterns in the recordings.

 

I began with 3 sets of objects; each set contains 3 objects of the same material but different shapes.


I then tapped on these objects with a plastic stick and recorded the resulting sounds.


Finally, I dropped each object from a fixed height onto a wooden surface and recorded the resulting sounds.


RESULTS

I found that objects of the same material produce similar wave patterns before the sound reaches its peak, while the shape of the object determines how the sound fades.


Metal (aluminum)


Plastic (PE)


Wood (poplar)
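The comparison above can be made concrete with a simple amplitude-envelope analysis: measure how long a recorded tap takes to reach its peak and how long it takes to fade. This is a minimal sketch of that idea in Python with hypothetical helper names and a synthetic tap; an actual pipeline would read the recorded WAV files (e.g. with Python's `wave` module) instead.

```python
import math

def envelope(samples, window=32):
    """Moving maximum of |samples| -- a crude amplitude envelope."""
    env = []
    for i in range(len(samples)):
        chunk = samples[max(0, i - window):i + window]
        env.append(max(abs(s) for s in chunk))
    return env

def attack_and_decay(samples, rate=44100, floor=0.05):
    """Split a tap into attack (start -> peak) and decay (peak -> quiet)."""
    env = envelope(samples)
    peak = max(range(len(env)), key=lambda i: env[i])
    threshold = floor * env[peak]
    quiet = next((i for i in range(peak, len(env)) if env[i] < threshold),
                 len(env) - 1)
    return peak / rate, (quiet - peak) / rate  # attack time, decay time (s)

# Synthetic "tap": fast attack, exponential decay, 440 Hz tone.
rate = 44100
tap = [math.exp(-8 * i / rate) * math.sin(2 * math.pi * 440 * i / rate)
       for i in range(rate // 2)]
attack, decay = attack_and_decay(tap, rate)
print(f"attack ~{attack * 1000:.1f} ms, decay ~{decay * 1000:.0f} ms")
```

Comparing these attack/decay numbers across the recordings is one way to check the observation that material shapes the attack while object shape shapes the fade.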

I used these sounds, which represent different materials and objects, as the basic inputs of the program. Then, based on the pattern of the sound waves, I can produce many more sounds of the same materials but from objects with distinct shapes and sizes, and use them as inputs as well.
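One way to generate such variants is to keep a material's characteristic frequencies fixed and vary only the decay rate, which stood in for the object's shape and size in my results. This is an illustrative sketch, not the project's actual Unity code; the partial frequencies and decay values below are assumed, not measured from the recordings.

```python
import math

RATE = 44100

def damped_tap(partials, decay, duration=1.0, rate=RATE):
    """Sum of damped sinusoids: one (freq, amp) pair per partial."""
    n = int(duration * rate)
    return [sum(amp * math.exp(-decay * i / rate) *
                math.sin(2 * math.pi * freq * i / rate)
                for freq, amp in partials)
            for i in range(n)]

# "Aluminum-like" partials shared by every variant (assumed values).
aluminum = [(520.0, 1.0), (1310.0, 0.5), (2480.0, 0.25)]

small_plate = damped_tap(aluminum, decay=14.0)  # fades quickly
large_plate = damped_tap(aluminum, decay=4.0)   # rings longer
```

Both variants share the same "material" spectrum but fade differently, mirroring the experiment's finding; the sample lists could be written out as WAV files with the `wave` module for listening.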

CONCEPT DEVELOPMENT

The final concept of the project is an interface that uses VR technology to visualize and edit sound. Such an interface would also let users change the pattern of a sound simply by morphing the shape of the 3D object that represents it.
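The morph-to-sound link could take the form of a simple parameter mapping: a shape parameter of the 3D object drives the pitch and decay of the sound it represents. This is a hypothetical sketch of such a mapping; the function name, constants, and the specific inverse relationship are illustrative assumptions, not the project's actual Unity implementation.

```python
def morph_to_sound(stretch, base_freq=520.0, base_decay=8.0):
    """Map an object's stretch factor to sound parameters.

    Assumed rule: stretching the object lowers its pitch and
    lets it ring longer (smaller decay coefficient).
    """
    if stretch <= 0:
        raise ValueError("stretch must be positive")
    return base_freq / stretch, base_decay / stretch

freq, decay = morph_to_sound(2.0)  # object stretched to twice its length
print(freq, decay)  # 260.0 4.0
```

In the Unity demo, a hand-tracked gesture would update the stretch factor, and the resulting parameters would drive the resynthesized sound in real time.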

 

MAKING PROCESS


FINAL DESIGN