Effectiveness of Virtual Reality in Surgeries, Surgeon Training, and Medical Education

Journal for High Schoolers 2023

By: Alys Jimenez Peñarrieta, Davyn Paringkoan, Nyali Latz-Torres, Yasmeen Galal, Karen Zhang

Mentor: Suyeon Choi


Augmented Reality (AR) and Virtual Reality (VR) have emerged as transformative tools for medical surgery. These technologies have the potential to enhance surgical precision, drastically improve patient outcomes, and revolutionize medical training; they could also alter the way medical education is approached. However, AR/VR-assisted surgeries raise critical policy, accessibility, and privacy concerns, given the environmental data these systems must capture and the potential inequities in access to VR. This research paper provides a comprehensive review of the existing literature; the results demonstrate how the mechanisms behind VR improve healthcare.

In addition to our literature review, our team programmed an educational brain anatomy simulation for elementary and middle school students. Educational VR programs could be an effective teaching tool: they are more engaging than traditional teaching mediums, and they help students visualize concepts, which is likely to improve learning retention. We used Unity to construct a 3D model of the brain, with its different sections labeled and color-coded. When a student clicked on a label, it took them to a screen with more information about what that section of the brain does. In addition, we made a PDF document with the same information but with 2D visuals.

We distributed our VR product to one test group and our PDF document to a second group of comparable students. After a timed session to read the PDF or explore the program, both groups were given a short test on the information presented. The results suggest that VR programs can be an effective tool for teaching anatomy and medical concepts.


We wanted to include MRI scans in our project, so our mentor pointed us to “Towards 3D Medical Imagery Navigation and Alignment on a Human Body,” whose objective was to navigate and align 3D medical imagery on a human body. We chose to focus on its VR work, as it is most relevant to our research project. The paper explains the authors’ methodology for bringing 3D Digital Imaging and Communications in Medicine (DICOM) models into Unity: they downloaded the DICOM scans, imported them into a DICOM conversion program called 3D Slicer, converted the scans into a 3D model, and then imported that model into Unity, a free game development platform that supports both VR and AR programs. They succeeded in converting DICOM scans for VR; however, the resulting 3D models in Unity were not detailed enough for their target demographic of medical professionals, because the scans lost too much detail in the transfer from 3D Slicer to Unity. Their suggested improvement was to use separate programs specialized for the medical field. While these platforms did not provide enough detail for medical personnel, the level of detail would be acceptable for K-12 educational demonstrations. Since 3D Slicer and Unity are free and accessible, students would only need access to a device that can support the programs.


To showcase emerging technology and emerging medical practices, we wanted to give students access to a three-dimensional model of MRI brain scans. To implement this in our project, we followed the process described in “Towards 3D Medical Imagery Navigation and Alignment on a Human Body”. We imported DICOM brain files from the Harvard Dataverse into 3D Slicer. We then used the markup function to place a point displayed on each view of the MRI slices, which served as a reference point for the program. Under the ‘edit segmentation’ display, we applied a threshold, used the ‘level tracing’ function to trace various MRI slices, and finally applied the ‘fill between slices’ function, which had 3D Slicer fill in the blank areas between the traced slices. Once the ‘fill between slices’ function was applied, the program could display a 3D model of the brain. The resources we referenced suggested creating a separate 3D model for the skull and then hiding it in the settings; interestingly, we found that 3D Slicer produced more accurate brain models when the skull surrounding the brain was not segmented.

During the process, we encountered challenges when the program attempted to fill in slices between manually segmented ones, which occasionally led to inaccuracies and omitted sections of the brain. To correct these issues, we used the segmentation brush tool to manually fill in the missing areas on certain slices, and then reapplied the ‘fill between slices’ function. Notably, the segmentation brush tool proved less precise than the level tracing tool, as it sometimes created smooth areas on the outside of the brain that did not exist.

Image 1: Screenshot of the MRI scans in 3D Slicer

Image 2: 3D model that had been edited using the brush segmentation method.

In the next attempt, we segmented a greater number of slices using the level tracing tool, which yielded a fairly accurate and detailed result. Compared with the other segmentation tools, level tracing allowed the 3D model to retain most of the brain’s texture and curves. The 3D model was then exported in OBJ format and imported into Unity as an asset.

To understand the potential of VR in education and its capacity to yield positive learning outcomes, we created a brain anatomy simulation using the Unity game development platform. Unity is one of the best free tools available for creating VR programs, and as a versatile game development platform it enabled us to implement a wide range of interactive, polished elements in our simulation. We first imported a 3D model of the brain and color-coded it to distinguish the different brain sections for instructional purposes. We then integrated a user interface (UI) so users could interact with the simulation and learn about specific sections. For program functionality and features like the perpetual rotation of the brain, we wrote a straightforward C# script. We also implemented a feature where students could press a button to view a screen with the MRI 3D model and an age-appropriate explanation of how doctors use similar models.
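The perpetual rotation and the MRI-screen button described above can be sketched as a short Unity C# script. This is only an illustrative example, not our exact code: the class name, the rotation speed, and the `mriScreen` panel reference are assumptions, to be assigned and tuned in the Unity Inspector.

```csharp
using UnityEngine;

// Illustrative sketch: attach this to the brain GameObject to rotate it
// continuously, and wire ShowMriScreen to a UI button's OnClick event.
public class BrainSimulation : MonoBehaviour
{
    // Rotation speed in degrees per second (assumed value; tune in the Inspector).
    public float rotationSpeed = 20f;

    // Panel holding the MRI 3D model and its explanation
    // (hypothetical reference, assigned in the Inspector).
    public GameObject mriScreen;

    void Update()
    {
        // Multiplying by Time.deltaTime keeps the rotation frame-rate independent.
        transform.Rotate(Vector3.up, rotationSpeed * Time.deltaTime);
    }

    // Called from a UI button to display the MRI model screen.
    public void ShowMriScreen()
    {
        mriScreen.SetActive(true);
    }
}
```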


Our team then designed an experiment to gauge the efficacy of VR educational content compared to conventional paper-based learning. The control group, eight average-performing middle school students ages ten to twelve, would receive traditional paper packets with 2D images and descriptions of each brain section’s function. The experimental group, seven average-performing middle schoolers, would be presented with the same information within a dynamic 3D environment. Neither group had prior knowledge of the brain or experience with VR equipment. Before the experiment, each student would be assessed on their knowledge of the brain’s anatomy and functions, so that their improvement could later be quantified. The participants would then be separated into Groups A and B: Group A would use the paper packet, and Group B would use VR equipment and the Unity project we created. After both groups finished exploring the educational content, they would individually complete an identical comprehension test, followed by a survey rating their experience and commenting on the experiment. Results from the test and survey would be compiled and compared between the two groups.


Participants: A total of 15 participants, ages 8 to 12, were recruited from Monterrey, México. All of the participants were enrolled in either a public or private institution and had no previous knowledge of the brain’s anatomy and functions. Thirteen of the students had never used VR gear, while two had used it for gaming and other non-educational content. All fifteen participants reported having no access to VR equipment in or out of school. We also asked each student how long lessons take at their schools: they are typically assessed every two weeks (nine forty-five-minute lessons), whereas this experiment assessed each student after a single fifteen-minute lesson.

Procedure: Participants were divided into two groups: Group A engaged in traditional reading tasks, while Group B used VR to learn about the brain’s anatomy and functions. Each part of the experiment had a set time limit, with a total estimated time of twenty-five minutes, and five volunteers guided the participants.

We assessed each participant’s learning using pre- and post-assessment results, comparing Group A with Group B. Throughout the experiment, we recorded observations on the participants’ behavior and responses. Additionally, at the end of the experiment, we asked both groups a set of questions about how they felt about their experience and the experiment itself.

Observations and Insight

First Assessment: For the first assessment, all of the students took the quiz together in one quiet room. From the students’ expressions and hesitation, it was clear they had no prior experience with the content and were guessing at the answers. Many participants asked questions and seemed anxious and overwhelmed.

The assessment consisted of ten questions, three true-or-false and seven multiple-choice, based on the content that would be covered later in the experiment.

After collecting the assessments, we asked for eight volunteers for Group A and seven for Group B. All of the students wanted to be in Group B, as they were excited to use the VR materials. Once we mentioned that every participant would be able to use our Unity brain model with the headsets after the experiment, eight participants volunteered for Group A.

Results from the First Assessment:

Group A: pre-experimental assessment

Graph 1: Group A pre-experimental assessment; eight responses; average score of 5.25/20, or 26.25% accuracy.

Group B: pre-experimental assessment

Graph 2: Group B pre-experimental assessment; seven responses; average score of 4.86/20, or 24.3% accuracy.

Traditional Learning (Group A): Group A participants read materials about the brain. We set up the material in a classroom separate from Group B’s, and each student had a pen and a learning packet at their desk. Group A displayed initial interest in the reading, but engagement later declined. Participants said they found the content complex, and many were visibly distracted about six minutes into the reading. One participant said the text felt burdensome, similar to taking an exam; another needed help comprehending the relationships between text and images. Notably, midway through the experiment, one Group A participant opted out, saying they did not feel like reading the content. These behaviors and interactions with the paper packet point to struggles with concentration and personal interest when using traditional learning materials. All of the Group A participants said they learn from similar materials at school, and they felt they were in a learning environment in which they had to be focused and serious.

Virtual Reality Experience (Group B): Participants in Group B used Oculus Go headsets loaded with our educational content about the brain. They were considerably more excited than Group A to engage with the material. As mentioned, none of the participants had previously used VR equipment, which influenced their experience.

Before handing out the equipment, we set safety rules. While wearing the headsets, students were not allowed to run, jump, walk more than two steps, or make any large hand motions. They were also not allowed to open other apps installed on the headset, and they had to return the equipment when the timer went off. Additionally, they had to tell us if they felt dizzy, nauseous, or had any other reaction. We quickly noticed that the classroom we were using was not big enough for all seven students to participate safely, so we decided to let only two students use the equipment at a time.

As students used the material, they were fully immersed in the learning content, using the controllers to engage with the simulation while being guided through which parts of the Unity project to interact with. Toward the end of the session, some students would turn their heads and move around the room as their attention drifted from the immersive experience. Additionally, some students played with the controllers, pretending they were guns or swords.

Notably, one participant said they were scared and made sudden movements throughout the experiment. Other participants were confused about how to use the equipment even after receiving instructions. Another participant said it felt like a game, like play.

Despite their limited understanding of the subject matter and of VR, participants engaged enthusiastically with the content and were eager to learn more about the brain and about the technology. They were not used to this type of learning and said it made them excited about the possibility of using it at school.

Final Assessment: After both groups completed their learning tasks, they all took a comprehensive assessment based on the content they had reviewed.

Results from the Final Assessment:

Group A: post-experimental assessment

Graph 3: Group A post-experimental assessment; seven responses (one student opted out); average score of 9.71/20, or 48.55% accuracy. Participants improved by an average of 22.3 percentage points from their first assessment.

Group B: post-experimental assessment

Graph 4: Group B post-experimental assessment; seven responses; average score of 10.86/20, or 54.33% accuracy. Participants improved by an average of 30.03 percentage points from their first assessment.

Group A: Participants Assessments Comparison

Table 1: Group A participant assessment scores. Student #4 did not continue with the experiment. All participants who took part in the activities improved their scores, by an average of 22.3 percentage points from the first assessment.

Group B: Participants Assessments Comparison

Table 2: Group B participant assessment scores. All participants who took part in the activities improved their scores, by an average of 30.03 percentage points from the first assessment.

Comparison: During the reading, Group A’s participants displayed focused, serious attitudes reminiscent of traditional learning settings. In contrast, Group B’s interaction with the educational content through VR created a playful atmosphere; although participants seemed less serious, their interest and comprehension were notably enhanced. Students were accustomed to environments like the one we created for Group A and needed time to adjust to the VR technology. Both groups significantly improved their accuracy from the first assessment; on the final assessment, Group B slightly outperformed Group A and improved by an average of 30.03 percentage points, 7.73 points more than Group A’s improvement. Both groups’ scores demonstrate that each learning format successfully taught students the content. Students in Group A were visibly more comfortable than students in Group B, who were at times confused and disoriented. Volunteers for Group B had to be trained to use the VR equipment and take additional measures to ensure the safety of everyone involved, and Group B’s lesson required more volunteers and more space than Group A’s.

Experimental Conclusions:

Traditional Learning (Group A): Group A’s lesson and learning packet were easy to set up, and students seemed accustomed to the learning material. It is worth noting, however, that participants were serious, bored, and unengaged; one student even opted out of the experiment after becoming anxious at how similar the learning packet felt to an exam. The results show that students learned from the traditional material, but if students feel anxious and uninterested, what does this say about their current education? Is it sustainable? Does it encourage them to keep learning? Do they actually enjoy it?

Virtual Reality Experience (Group B): Participants in Group B were engaged and interested throughout the experiment; the environment was playful and exciting. All of the participants said they had never taken part in a learning environment like VR before, and after the assessments they left eager to learn more about the brain and about VR. Because students were still figuring out how to use the unfamiliar platform, and because they were so excited to use the headsets, they sometimes lost focus on the educational content. They also mentioned that although the unfamiliarity of VR made them uncomfortable, they feel they could benefit from this technology in their classrooms. Although students enjoyed the VR experience, we have to acknowledge that it was difficult to install the software, set up the classroom, and generate content for this type of lesson. It is also important to recognize that VR technologies are not easily accessible to many students.

At the end of the experiment and assessments, we allowed students from Group A to try the VR equipment and the Unity simulation. They displayed behavior and interactions similar to those of Group B; notably, one student who had been distracted while using the learning packet was engaged and active while using the headset.

Our team believes that with the growing amount of educational VR content becoming available, increased innovation and educational investment, and proper VR training for teachers and students, these technologies will significantly enhance institutions’ learning environments.


The methods used to convert MRI scans into 3D models proved effective; however, professionals would need higher-quality programs than 3D Slicer and Unity, as it was difficult to create 3D models with the finer details necessary for medical use. Better-quality, medically specialized programs would likely resolve this issue.

Future Directions

To further improve the Unity simulation, we could go more in depth on the functions of the existing brain sections and also include other parts of the brain, such as the pituitary gland, brain stem, and corpus callosum. We could also implement mini-games to teach about particular sections: a “would you rather” game to demonstrate the frontal lobe’s role in planning, organizing, and decision-making; a music memory game to demonstrate the temporal lobe’s role in receiving and comprehending sensory language such as sound and speech; and a balancing game, in which the player controls a character’s movement, to demonstrate the cerebellum’s role in coordinating movement and maintaining balance.

Links to our projects:

Literature Review: https://docs.google.com/document/d/101g9gwvEEyp9MHGHFIj_ItU9lPOotavlU4LlePhSDlQ/edit

Unity Program Download: https://drive.google.com/drive/folders/173oJT1W6vKEWO-LpxWr6P2r4SgiynU0p?usp=sharing


  1. Emmerling, Franziska, 2016, “Raw DICOM files / The role of the insular cortex in retaliation”, https://doi.org/10.7910/DVN/AI2OXS, Harvard Dataverse, V1
  2. Player1. (2023, June 15). Brain segmentation in 3D Slicer via mask scalar volume. https://www.youtube.com/watch?v=S5tOn_AxrbM
  3. Fedorov A., Beichel R., Kalpathy-Cramer J., Finet J., Fillion-Robin J-C., Pujol S., Bauer C., Jennings D., Fennessy F., Sonka M., Buatti J., Aylward S.R., Miller J.V., Pieper S., Kikinis R. 3D Slicer as an Image Computing Platform for the Quantitative Imaging Network. Magnetic Resonance Imaging. 2012 Nov;30(9):1323-41. PMID: 22770690.
  4. Henry, D., & Konz, S. Towards 3D Medical Imagery Navigation and Alignment on a Human Body. http://stanford.edu/class/ee267/Spring2019/report_konz_henry.pdf.
