Virtual Reality for Emotional Response

Journal for High Schoolers, 2020

Authors

Sylvia Chin, Tiffany Liu, Aditya Rao

Abstract

Virtual reality (VR) is an immersive platform that enables individuals to place themselves inside simulated experiences. Thus far, VR simulation has been widely applied for entertainment and educational purposes. However, motivated by the recent constraints caused by COVID-19, we explored its potential applications in health and wellness to make mental health support more accessible. Our research studied how VR experiences can better engage and influence a user’s mood compared to a video’s detached 2D experience. Identifying the strengths of VR’s effects on emotion can inform health and wellness programs such as proactive therapy and guided meditation.

Our team created two otherwise identical experiences: a 360º VR experience and a 2D video. Before watching, study participants answered a series of questions quantifying how strongly they were feeling certain emotions at the time. We used a between-subjects experimental design with a control group of 20 users who watched the 2D video and a variable group of 20 users who went through the 360º VR experience. Both experiences were designed to evoke calming emotions using visuals, sound effects, and music. Afterwards, the users answered the same questions again so we could evaluate whether the experience had successfully shifted their moods. After the study, we analyzed video recordings the participants took of themselves during the experiment to determine whether machine learning models could accurately detect their emotions.

1. Background

Virtual reality aims to transform an individual’s perceived environment by controlling their entire field of view. Past studies of “mood induction” have shown that static audio and visual elements in film can affect moods [1]. As VR lenses like the Google Cardboard become more prevalent and accessible, the field for simulated mental health treatment and entertainment is broadening.

Virtual Reality Exposure Therapy (VRET) is an increasingly popular tool for therapists and doctors to use on patients suffering from physical or emotional pain. Through VRET, patients can be treated from the comfort of their homes without a practitioner physically present [2]. Our project aimed to support the development of VRET by exploring the emotional responses of people to VR technology.

2. Materials and Methods

2.1 VR and 2D Experiences

We filmed the video experiences with a Rylo 360º camera mounted on a drone, added auditory supplements in iMovie, and uploaded the videos to YouTube. The video depicted a flyover above a lagoon at sunset and aimed to evoke pleasant, calm emotions. The content was the same for both study groups, but Group A’s video was formatted as a regular 2D YouTube video while Group B’s was a 360º YouTube video. Study subjects in Group B were provided with Google Cardboard viewers and asked to use headphones.

2.2 Survey 

Each participant was given a survey quantifying how strongly they were experiencing eight different emotions or moods at the time of the survey: happy, sad, peaceful, calm, tired, anxious, excited, and alert. Participants rated each mood on a scale of 1 to 5, with 1 indicating strong disagreement and 5 strong agreement, and completed the survey both before and after watching the study video. The survey also included several questions about demographics and preferences to obscure the purpose of the experiment, as well as instructions for navigating to either the video or the virtual reality headset.
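As an illustration, the pre/post ratings can be reduced to the per-mood percentage changes reported in Figure 3. The sketch below assumes the responses are stored in pandas DataFrames with one column per mood; the variable and column names are ours, not part of the original survey tooling.

```python
import pandas as pd

# The eight moods rated on the 1-5 survey scale.
MOODS = ["happy", "sad", "peaceful", "calm",
         "tired", "anxious", "excited", "alert"]

def mood_percent_change(pre: pd.DataFrame, post: pd.DataFrame) -> pd.Series:
    """Percentage change in the mean 1-5 rating of each mood."""
    pre_mean = pre[MOODS].mean()
    post_mean = post[MOODS].mean()
    return (post_mean - pre_mean) / pre_mean * 100

# Hypothetical usage, with one pre/post DataFrame per study group:
# change_2d = mood_percent_change(pre_2d, post_2d)
# change_vr = mood_percent_change(pre_vr, post_vr)
```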

2.3 Machine Learning Algorithm for Detecting Emotions

Participants were instructed to record videos of themselves, specifically their faces, throughout the entire duration of the survey. We found a pre-trained facial recognition model in an open-source Python machine learning repository on GitHub and configured the code to analyze the recorded videos [3]. The facial recognition algorithm uses a deep convolutional neural network to classify emotions into seven categories: angry, disgusted, fearful, happy, sad, surprised, and neutral. We estimated the accuracy of the facial recognition algorithm by comparing the emotions that the algorithm detected from the videos with the survey results.
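A minimal sketch of this pipeline is shown below, assuming a Keras model trained on 48×48 grayscale face crops (as in the referenced repository) and OpenCV’s bundled Haar cascade for face detection; the model file name and the exact label order are assumptions.

```python
import cv2
import numpy as np
from tensorflow.keras.models import load_model

# Label order assumed to follow the common FER-2013 convention.
EMOTIONS = ["angry", "disgusted", "fearful", "happy",
            "sad", "surprised", "neutral"]

# OpenCV ships this cascade file; the model path is a placeholder.
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
model = load_model("emotion_model.h5")  # hypothetical file name

def detect_emotions(video_path):
    """Return a predicted emotion label for each face detected, frame by frame."""
    labels = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
            # Crop the face, scale to the model's input size, normalize.
            face = cv2.resize(gray[y:y + h, x:x + w], (48, 48)) / 255.0
            probs = model.predict(face[np.newaxis, :, :, np.newaxis])
            labels.append(EMOTIONS[int(np.argmax(probs))])
    cap.release()
    return labels
```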

3. Results

Figures 1 and 2: Survey responses for each of the eight moods before and after the experience, for the 2D video and 360º VR groups. (Images not reproduced.)

Figure 3: Percentage change in each mood for both groups, derived from Figures 1 and 2. (Image not reproduced.)

Figure 4: (Image not reproduced.)

4. Conclusion

4.1 Survey

Figures 1 and 2 show a comparison between the before and after answers of both study groups. Figure 3 was derived from these graphs and shows the percentage change in each mood for each group. The results support the hypothesis that the VR experience shifted moods more than the 2D experience did. The direction of the mood shift was the same between the control and variable groups for 7 out of 8 moods. The most significant difference between the emotional shifts of the two groups was in calmness, underscoring the potential for VR in meditation. Anxiety showed the largest percentage change (a decrease) of any mood in both groups, suggesting that the experience affected anxiety the most. In a free-response question asking respondents to describe their thoughts and feelings, 12 out of 20 subjects in the 2D video group used the word “video” in their response and 3 used the word “experience,” whereas the proportions were flipped for the VR group. Ultimately, these findings support the notion that users found VR more real and immersive than 2D video.

4.2 Machine Learning Algorithm for Detecting Emotions

The facial recognition algorithm’s detections matched the participants’ self-reported emotions 52% of the time, and the algorithm was 17% more effective post-experience than pre-experience. This suggests that the video experiences were effective in changing moods, since the facial recognition had a greater success rate after the participants watched the videos. The algorithm could not make predictions for participants wearing the Google Cardboard in portions of the recorded videos, which likely contributed to some of the inaccuracies. Camera quality, lighting, and individuals’ unique neutral expressions could also be sources of error. Nevertheless, given a limited sample size and a relatively mild experience, a 52% success rate is a promising indicator that facial recognition can determine moods.
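The agreement rate can be estimated along the lines of the sketch below. Because the model’s seven categories do not line up one-to-one with the survey’s eight moods, some alignment is needed; the mapping here is hypothetical and serves only to illustrate the comparison.

```python
from collections import Counter

# Hypothetical alignment of the eight survey moods to the model's labels.
MOOD_TO_EMOTION = {
    "happy": "happy", "excited": "happy",
    "sad": "sad", "anxious": "fearful",
    "peaceful": "neutral", "calm": "neutral",
    "tired": "neutral", "alert": "neutral",
}

def agreement_rate(detected_per_person, dominant_mood_per_person):
    """Fraction of participants whose most frequent detected emotion
    matches the label mapped from their highest-rated survey mood."""
    hits = 0
    for labels, mood in zip(detected_per_person, dominant_mood_per_person):
        top_label = Counter(labels).most_common(1)[0][0]
        hits += top_label == MOOD_TO_EMOTION[mood]
    return hits / len(dominant_mood_per_person)
```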

5. Future Directions

After studying the short-term shifts in emotion observed in this project, we hope to explore VR’s potential for prolonged mood stability and positive improvement. With more time and testing equipment, we could create more complex and customized scenes for users, which may lead to more generalizable results. Several factors, such as cultural background, age, and time of day, could be studied more thoroughly; based on these factors, we could tailor the VR experience to certain individuals in order to further engage and enhance their experiences. Additionally, our experimentation with machine learning facial analysis yielded promising results, and it may eventually make it possible to capture precise data and analyze the effects of VR in real time while a headset is worn. It would also be interesting to examine VR’s potential for treating specific physical or mental illnesses.

6. Acknowledgements

We would like to acknowledge founding director Professor Tsachy Weissman of the Stanford Electrical Engineering department for a profound look into research, the humanities, and endless opportunity. Special thanks to Stanford undergraduate senior Rachel Carey for mentoring us throughout the entire process. Another thank you to Devon Baur for her valuable insight on analysis and experimental design. We would also like to thank Cindy Nguyen and Suzanne Marie Sims for ordering and delivering the materials that made our experiment possible. Finally, a big thank you to our fellow SHTEM students, family members, and friends who provided data for our research.

7. References

[1] Fernández-Aguilar, Luz, et al. “How Effective Are Films in Inducing Positive and Negative Emotional States? A Meta-Analysis.” PLOS ONE, Public Library of Science, https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0225040.

[2] Charles, Nduka. “Emotion Sensing In VR: How VR Is Being Used To Benefit Mental Health.” VRFocus, 29 Sept. 2017, https://www.vrfocus.com/2017/05/emotion-sensing-in-vr-how-vr-is-being-used-to-benefit-mental-health/.

[3] Dwivedi, Priya. “Priya-Dwivedi/face_and_emotion_detection.” GitHub, https://github.com/priya-dwivedi/face_and_emotion_detection.

[4] Baños, Rosa María, et al. “Changing Induced Moods Via Virtual Reality.” SpringerLink, Springer, Berlin, Heidelberg, 18 May 2006, https://link.springer.com/chapter/10.1007/11755494_3.

[5] Droit-Volet, Sylvie, et al. “Emotion and Time Perception: Effects of Film-Induced Mood.” Frontiers in Integrative Neuroscience, Frontiers Research Foundation, 9 Aug. 2011, https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3152725/.

[6] Senson, Alex. “Virtual Reality Therapy: Treating The Global Mental Health Crisis.” TechCrunch, TechCrunch, 6 Jan. 2016, https://techcrunch.com/2016/01/06/virtual-reality-therapy-treating-the-global-mental-health-crisis/.
