In 2019, from June to August, 40 high school students attended the STEM to SHTEM (Science, Humanities, Technology, Engineering and Mathematics) summer program hosted by Prof. Tsachy Weissman and the Stanford Compression Forum. During the program, the high schoolers pursued research projects in various domains under the supervision of 18 mentors; the entire collection of their reports can be found below.
Anushka Sheth, Bikrant Das Sharma, Harvey Jarin, Ian Zhang, Jonathon Sneh, Leena Elzeiny, Prayusha Parikh and Vyomika Gupta
Currently, VR systems immerse an individual visually and auditorily, but the lack of olfactory stimulation is part of the void that separates virtual reality from real-life experiences. The prospect of adding scent as a third dimension to virtual reality prompted our research into the effects of adding an olfactory feature to virtual interactions, and in particular its effect on the judgement of the pleasantness and personality of others met in the virtual environment. At the start of the experiment, all participants take a personality quiz that predicts how much each participant will enjoy a certain smell. Each individual then participates in a conversation in virtual reality with a stranger whose responses are scripted; for some participants a scent is dispersed during the conversation, and for others it is not. Immediately afterwards, each participant answers questions about the conversation. The questionnaire quantitatively scores (1) the pleasantness of the stranger, to examine whether scent impacts the perception of someone, and (2) the perceived personality of the stranger, to determine whether olfactory communication aids in an accurate first impression.
Olfactory communication is deeply instinctual. A scent is first processed by the olfactory bulb, which directly activates the amygdala, the control center for emotions in the brain. The association between smells and emotions can run so deep that, according to three clinical studies by two psychiatric professors, patients with post-traumatic stress disorder reported feeling guilt and nausea upon smelling an environment associated with the traumatic incident. Technology is now slowly shifting to incorporate olfaction machinery into physical spaces, ushering in a new era of immersive reality. To simulate the physical world, one must mimic all of its dimensions, yet little research has been conducted to understand the human response to synthetic scents.
The purpose of our research is to determine the effect of aroma on a person’s judgement of the pleasantness and personality of someone they meet. A personality quiz is given to each participant to predict their enjoyment of a certain smell. The participant then interacts with another person; during the virtual reality conversation between the participant and the actor, the scent is dispersed for some subjects without their knowledge.
Virtual reality (VR) is a highly realistic, immersive, simulated experience incorporating sophisticated visual and audio elements. It is primarily used for educational and entertainment purposes, as it simulates realistic interaction. Our two participants interact in virtual reality through AltspaceVR, a social virtual world where users create avatars and explore chat rooms. AltspaceVR serves to supplement social interactions and connects users worldwide. While AltspaceVR and similar virtual communication platforms attempt to mimic and replicate real-life encounters, many subtleties of physical human interaction are lost. Most first impressions, as well as the formation of memories, rely on tangible visual and olfactory cues that cannot currently be conveyed through virtual means. First impressions are formed in the amygdala and posterior cingulate cortex, parts of the brain that process fear and assign value to objects. The factors involved in the assignment of value and trustworthiness are largely derived from visual signals, primarily body language. People can evaluate faces for trustworthiness, status, and attractiveness after even a brief glance (e.g., 33 ms), and extra time (100 or 500 ms) leads only to a small improvement in how well these time-constrained evaluations correspond to an independent set of time-unconstrained judgments. Perceptions of personality are also based on visual cues, including clothing, posture, and eye contact. Many such cues, however, are also conveyed through olfaction.
Yet this strong correlation has not always been appreciated. The potential of olfactory communication has been set aside since the ancient Greek philosophers: Plato and Aristotle described scent as the most animalistic of the five senses. Later philosophers such as Immanuel Kant agreed, because smell did not contribute to the music and art of the Enlightenment. On the other hand, some non-Western cultures have glorified the sense of smell. For example, the Bororo people of Brazil consider body odor to be the life force of an individual and the smell of breath to be linked to the soul.
3. Methods and Materials
We received a scent machine from OWidgets, a device developed at the SCHI Lab at the University of Sussex. AltspaceVR was used for the virtual reality world and experience. The VR headsets used were an Oculus Rift and an HTC Vive. Scents were provided by the Institute of Art and Olfaction in Los Angeles, California. An Arduino board, connected to a laptop, controlled the dispersion of scent from the OWidgets machine.
The experience begins in a waiting room, where participants are ushered in and out. Each participant takes the scent personality quiz anonymously, and a database stores the scent preferences scored by the quiz. From there, participants are told the narrative for the experiment: they have left their lives and are in a state that determines whether they will return to their old life or move on to the afterlife. The participant then enters the virtual reality world, where another person is waiting, and the two users converse. After the experience, each takes an exit survey, judging the other’s personality and pleasantness.
An important factor in conducting the experiment is that participants remain blinded to its true purpose: the study is disguised as an investigation of the effect of virtual reality on first impressions. Each participant meets an “actor”, a person who has memorized his/her responses and gives the memorized response whenever the participant asks a question.
After their interaction, the participant steps out of the VR headset and takes an exit survey on the conversation they just had, judging the actor’s personality and pleasantness.
3.1 AltspaceVR
AltspaceVR is a virtual reality platform used as a meeting space for both private and public events. In AltspaceVR, worlds can be built from scratch, allowing greater customization and control over certain variables (avatars, music, boundaries, setting). In addition to its customization features, AltspaceVR also supports world-building through Unity and in-game app development through its MDK feature. This combination of features makes AltspaceVR ideal for our experimental procedure.
For our world, we wanted to create a neutral, non-religious interpretation of the Afterlife. In order to ensure that the world would not elicit any negative emotions, we also wanted to make the world as serene and calm as possible. The world we determined that would be best included the following features: soothing music, a night time and mountain background, a reflective plane, and a limiting boundary in the form of rocks. To make the experience more interactive, we also implemented a Tic-Tac-Toe game to go along with the questions.
3.2 Personality Quiz
In order to determine the scent that corresponded to a participant’s unique personality, we created our own personality quiz. Based on our research into various personality quizzes (Myers-Briggs, Five Factor Model, Eysenck’s PEN Model), we determined that a quiz following the general structure and trait associations of the FFM would best suit our purpose. We devised 12 questions (11 four-response multiple-choice questions and 1 two-response question) based on the five main personality traits in the FFM: Openness to experience, Conscientiousness, Extraversion, Agreeableness, and Neuroticism (OCEAN). Since there are four main fragrance families (Floral, Oriental, Woody, Fresh) and five trait groups, we combined openness and extraversion due to their similarities. To connect fragrance families and personality traits, we drew on correlations between one’s personality and preferred perfume. The fragrance-trait relations we used were Floral-Neuroticism, Oriental-Conscientiousness, Woody-Agreeableness, and Fresh-Openness/Extraversion.
Early versions of our quiz were skewed, as a majority of people tended to overwhelmingly favor one response over the other answer choices for several questions. This led to a majority of people scoring highest in the Fresh fragrance family as opposed to the other three. After several iterations of modifying and removing biased questions, we were left with 12 questions with even distributions of answer responses. For scoring, we assigned each response three points (with the exception of question 11, which was given four points for more weight). The points corresponded to different scents, as our research suggested that those who chose a specific answer choice were more likely to favor a specific fragrance. Each answer choice that an individual made added points to one or more scents. In total, there were 37 possible points in each of the four scents, so there was an equal opportunity to score high in any of the four fragrance families. After all 12 questions were answered, the points scored in each fragrance family were summed to create the participant’s scent array, which ranked the four scents in order to anticipate which one they would be partial to. Next, we had to evaluate whether an individual’s personality, as judged by our quiz, actually predicted the kinds of smells they preferred. To test the accuracy of our personality quiz, we went to various locations on campus and surveyed people randomly. Participants were first given the quiz so that we could predict the scent they should prefer based on their personality type. Then they were given four strips of paper, each sprayed with a different fragrance oil, and asked to choose the one they preferred.
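The scoring scheme described above can be sketched in a few lines of Python. The point assignments shown are illustrative placeholders, since the actual answer-to-fragrance mapping from our quiz is not reproduced here:

```python
# Minimal sketch of the quiz scoring scheme. The point values below are
# illustrative placeholders, not the actual mapping used in our quiz.

FAMILIES = ["Floral", "Oriental", "Woody", "Fresh"]

# Each answer choice contributes points to one or more fragrance families.
# Question 11 carries four points instead of three for extra weight.
answer_points = {
    ("Q1", "a"): {"Fresh": 3},
    ("Q1", "b"): {"Floral": 2, "Woody": 1},
    ("Q11", "a"): {"Oriental": 4},
    # ... remaining question/answer pairs omitted
}

def score_quiz(responses):
    """Sum points per family, then rank families to form the scent array."""
    totals = {family: 0 for family in FAMILIES}
    for response in responses:
        for family, points in answer_points.get(response, {}).items():
            totals[family] += points
    # Rank families from highest to lowest score.
    return sorted(FAMILIES, key=lambda f: totals[f], reverse=True)

print(score_quiz([("Q1", "b"), ("Q11", "a")]))
```

The ranked list is the participant’s scent array; its first entry is the fragrance family the quiz predicts they will prefer.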
Out of the 24 people we surveyed, 16 chose one of their two top-scoring scents as their favorite, suggesting that our quiz could predict which aromas an individual would prefer a majority of the time.
3.3 OWidgets Scent Machine
To disperse each participant’s unique perfume throughout the room, we looked into various scent delivery systems such as essential oil diffusers, heat diffusers, and nebulizer oil diffusers. While each of these proved effective at dispersing smells throughout a given environment, the fragrance oils they give off tend to linger in the room for hours. Over time, the different smells would blend together to form olfactory white, an unidentifiable scent that would distort our data. We wanted to avoid the Smell-O-Vision disaster of the mid-twentieth century, in which users were overwhelmed by the multiple scents coming at them, so we also had to ensure that the smell was not too strong. Since each VR experience would last only four to five minutes, we needed a way to easily clear the smell after each encounter. To do this, we partnered with OWidgets, a company that designs ways to incorporate olfaction into various experiences. The company was founded by Emanuela Maggioni, an experimental psychologist whose interest in the human olfactory system led her to develop a machine for further experimentation with smells. The scent machine that she and her team created allows three different scents to be sprayed from its nozzles. Through experimentation, we confirmed that perfumes sprayed from the machine lingered noticeably in the environment for only 1-3 seconds after the machine was turned off. We found that the ideal distance between the user and the OWidgets scent machine was five feet. Moreover, we discovered that the best way to spray the fragrances was to run the machine on a cycle of five seconds on and 15 seconds off for the duration of the experience, about four minutes. This ensured that the participant could smell the scent for a majority of the experience without being overwhelmed by a strong fragrance.
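The dispersion cycle described above can be sketched as follows. This is a pure-Python illustration that only computes the on/off schedule; in the actual setup the spraying was driven by the Arduino and the OWidgets hardware:

```python
# Sketch of the 5 s on / 15 s off dispersion cycle over a ~4-minute
# experience. Computes the burst schedule only; the hardware trigger
# (Arduino/serial call) is deliberately left out.
ON_S, OFF_S, TOTAL_S = 5, 15, 240

def dispersion_schedule(on=ON_S, off=OFF_S, total=TOTAL_S):
    """Return (start_time, duration) pairs for each spray burst."""
    bursts, t = [], 0
    while t < total:
        bursts.append((t, min(on, total - t)))
        t += on + off
    return bursts

schedule = dispersion_schedule()
print(len(schedule), "bursts,", sum(d for _, d in schedule), "s of spraying total")
```

With these parameters the machine sprays twelve 5-second bursts, and the lingering scent between bursts keeps the smell present without becoming overpowering.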
4. Future Directions
The next step in continuing this research would be to use more perfumes in the VR interaction. Expanding from only Fresh to also include the Floral, Oriental (warm and spicy), and Woody families would offer more data about responses to smells that people like and dislike.
Afterwards, expanding to a more dynamic set of scents would also be a possible future direction. By implementing scents that correspond to one’s immediate environment, users would gain an even more immersive experience. Smells in an environment could be identified by an e-nose and then transferred to another user with a system similar to ours, but the actual replication of scents, as well as the current capabilities of e-noses, limits this process. Future advancements in these areas would be needed to further the inclusion of dynamic scents in VR or any other platform.
We would like to acknowledge Professor Tsachy Weissman of Stanford’s Electrical Engineering Department and head of the Stanford Compression Forum for his guidance and help with the project. We also want to thank Devon Baur for being the mentor of our group and helping us with designing the project, accessing and getting materials, and running the experiment. Additionally, we also want to thank Professors Subhashish Mitra, Gordon Wetzstein, and Debbie Senesky for all of their help with the project. Also, we want to thank the OWidgets group in London for giving us a scent machine and helping us set it up and troubleshoot. Lastly, we want to thank the Institute of Art and Olfaction in Los Angeles for providing us with the scents we used for the project.
Vermetten, E. & Bremner, J. D. Olfaction as a traumatic reminder in posttraumatic stress disorder: case reports and review. The Journal of Clinical Psychiatry 64 (2003), 202-207.
Ferdenzi, C., Roberts, S. C., Schirmer, A., Delplanque, S., Cekic, S., Porcherot, C., Cayeux, I., Sander, D. & Grandjean, D. Variability of Affective Responses to Odors: Culture, Gender, and Olfactory Knowledge. Chemical Senses 38 (2013), 175-186. https://doi.org/10.1093/chemse/bjs083
Georgina Cortez, Nicole Krukova, Thuan Le, and Suraj Thangellapally
Nanopore technology is a modern way of sequencing DNA; in other words, it determines the order of nucleotides in a DNA strand. First, the DNA strand passes through a motor protein that slows the strand down to improve the accuracy of the readings. Then the strand runs through a nano-sized hole carrying an ionic current. As the DNA strand passes through the nanopore, the individual nucleotides cause disruptions in the electric current that help determine the nucleic acid sequence. Approximately 250 bases can be sequenced per second by nanopore technology. However, the results are “noisy”, or not completely reliable, because of several factors, including the inconsistency of the nucleotides’ dwell times, the periods of time that the central bases spend in the nanopore. The shape of the ionic current signal depends on the dwell time; inconsistent dwell times produce different signals that make it difficult to accurately determine the nucleic acid sequence.
We believe the central base’s dwell time depends on the nucleotides present in the motor protein at that moment. The central base is the nucleotide currently in the nanopore whose dwell time we are focusing on. A k-mer (e.g. “AAT” or “CGATC”) consists of multiple nucleotides that correspond to the central base’s dwell time. To characterize the inconsistencies of the dwell times, we analyzed the data to find correlations between the k-mers and their dwell times. These correlations can help future researchers assign specific characteristics to k-mers and their impact upon dwell times.
We conducted our research with previously acquired sequencing data from the lambda bacteriophage, whose genome has approximately 48,000 nucleotides. Using Python, we manipulated the data to find trends and correlations between the k-mers and their corresponding dwell times. Our main goal in the programming was to create visual representations of the data in the form of plots, which allowed us to discern trends we otherwise might not have observed. Our initial plots were difficult to analyze because there was too much data to make out any trends. To resolve this, we organized the data into a more digestible format by plotting only the average dwell time for each k-mer and sorting the points from lowest dwell time to highest. This allowed us to more easily spot outliers and trends and to compare different k-mers.
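The averaging-and-sorting step can be sketched as follows, assuming the extracted data is available as (k-mer, dwell time) pairs; the function names are ours, not from the original scripts:

```python
# Sketch of the plot-preparation pipeline: average the dwell time per
# k-mer, then sort from lowest to highest mean dwell time.
from collections import defaultdict

def mean_dwell_by_kmer(events):
    """Average the dwell time observed for each k-mer."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for kmer, dwell in events:
        sums[kmer] += dwell
        counts[kmer] += 1
    return {k: sums[k] / counts[k] for k in sums}

def sorted_for_plot(events):
    """Return (kmer, mean dwell) pairs sorted by mean dwell time."""
    means = mean_dwell_by_kmer(events)
    return sorted(means.items(), key=lambda kv: kv[1])

# Tiny illustrative dataset, not real nanopore measurements.
events = [("AAT", 8.0), ("AAT", 9.0), ("GGC", 11.0), ("CCA", 6.5)]
print(sorted_for_plot(events))
```

The sorted output is what gets plotted: one point per k-mer, ordered by average dwell time.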
The plot and table showed that the data for all of the bases was very similar, with barely any differences between their means, standard deviations, etc. These results supported our belief that the dwell time does not depend on the base in the pore but rather on the following bases passing through the motor protein, since the motor protein controls the speed of the DNA strand. To test this idea, we moved on to creating plots of different k-mers’ dwell times to see whether any correlations became more apparent.
We began with an unshifted 3-mer as practice, to construct our function and begin associating k-mers with the central base’s dwell time. As expected, there were no significant trends in this data, because the unshifted 3-mer includes only one nucleotide following the central base. We believed that correlations would appear with a longer k-mer since, as our hypothesis claims, the central base’s dwell time depends on the nucleotides passing through the motor protein at that moment. Following this reasoning, we created a dataset of shifted 3-mers, where each 3-mer is shifted one space to the right.
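The k-mer extraction with a right shift can be sketched like this; it is a minimal illustration of the idea, not the original function:

```python
# Sketch of extracting the k-mer window around a central base, with an
# optional right shift (shift=0 gives the unshifted k-mer).
def kmer_at(sequence, center, k, shift=0):
    """Return the k-mer whose window is shifted `shift` bases to the
    right of the window centered on sequence[center], or None when the
    window runs off either end of the sequence."""
    half = k // 2
    start = center - half + shift
    if start < 0 or start + k > len(sequence):
        return None
    return sequence[start:start + k]

seq = "ACGTACGT"
print(kmer_at(seq, 3, 3))           # unshifted 3-mer around index 3 -> "GTA"
print(kmer_at(seq, 3, 3, shift=1))  # shifted one base right -> "TAC"
```

Shifting the window to the right captures the bases that are about to enter the motor protein, which is exactly what our hypothesis says should matter.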
Since the error bars for the 3-mer dwell times are so large, the 3-mer data is unreliable and should not be used. This further justified our focus on the 5-mer, which holds more reliable data.
We observed that the outlying dwell times, below 7 units and above 9 units, came from k-mers with a consecutive repetition of a single base. The dwell times in these ranges varied greatly from the “normal” dwell times in the 7 to 9 unit range, whose k-mers had minimal repetition by comparison. This led us to believe that the consecutive repetition of a nucleotide causes an irregular dwell time.
As in the previous sample of 5-mer data, k-mers with irregular dwell times showed repetition of a single base. In the 6.5 to 6.7 unit range of dwell times, every k-mer contained a repeated base, and we noticed a higher prevalence of C’s at the lower end of the dwell-time spectrum. In the higher 10 to 12 unit range, we observed a consistent repetition of the nucleotide G in each k-mer. G’s are known to be particularly disruptive, so we believe their repetition caused the change in dwell time. This data supported the idea that the repetition of a nucleotide causes irregularities in dwell times.
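The repetition observation can be made concrete with a small helper that measures the longest run of a single repeated base in a k-mer (our illustration, not code from the original analysis):

```python
# Longest run of a single repeated nucleotide in a k-mer. K-mers with
# long runs (especially of G) were the ones with irregular dwell times.
def longest_run(kmer):
    if not kmer:
        return 0
    best = run = 1
    for prev, cur in zip(kmer, kmer[1:]):
        run = run + 1 if cur == prev else 1
        best = max(best, run)
    return best

print(longest_run("AGGGT"))  # -> 3
print(longest_run("ACGTC"))  # -> 1
```

Scoring each k-mer this way lets one group dwell times by run length and test whether longer runs really correlate with the outlying ranges.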
The first plot shows no difference between the prior k-mers and the 7-mer, indicating that in order to find a trend we need to increase the length of the k-mer and keep shifting it. There are 16,384 possible 7-mer combinations, so we plotted their averages for better visualization. The second plot above represents the averages for the first 10,000 bases of the lambda data, sorted from least to greatest. After analyzing the plot, however, we concluded that a longer k-mer would be needed to spot any significant trend. For future reference, researchers should look into the repetition of nucleotides, focus on specific ranges, and pursue any other trends that could lead to a discovery.
Our project supported the hypothesis that the dwell time is not dependent on the central base but rather on the following bases in the motor protein. A significant finding was that the consecutive repetition of a nucleotide causes irregular dwell times; the nucleotides G and C also appear to affect the consistency of the dwell time. With this information, future researchers should continue to create different shifts of k-mers and gather more data about the impact of base repetition on the dwell time. By understanding this relationship, researchers will be able to characterize and predict the effect a k-mer has on the dwell time. This can lead to more accurate readings of sequenced DNA, which will allow for the expansion of nanopore technology.
Advancements in nanopore technology will expand the applications of DNA sequencing in fields such as healthcare and data compression. Increased reliability will permit the technology to take a more widespread role in personalized medicine, where treatment is tailored to one’s genetics, and in DNA data storage, an incredibly efficient way of storing a substantial amount of data in a microscopic space.
 N. Jetha, C. Feehan, M. Wiggin, V. Tabard-Cossa and A. Marziali, “Long Dwell-Time Passage of DNA through Nanometer-Scale Pores: Kinetics and Sequence Dependence of Motion”, Biophysical Journal, vol. 100, no. 12, pp. 2974-2980, 2011. Available: 10.1016/j.bpj.2011.05.007.
When two people carry an object together, they manage to change direction and avoid obstacles with minimal or even no verbal communication; communication using nudges, forces, and eye contact is often sufficient. In this paper, we study this nonverbal language with the aim of understanding it better. This type of communication is also useful for artificially intelligent robots: the results of our study and further experiments may contribute to better communication between robots and humans as well as between robots. The experimental setup consisted of two people carrying an object to a certain goal while data was collected from force sensing resistors attached to the object and from a camera. The results suggest that there are different styles of communicating. To be able to apply this kind of communication to robots, we think it will be necessary to identify these different styles rather than collecting experimental data and calculating an average.
Materials. A grid made out of masking tape outlined the experimental area. The object the participants carried was a table, fitted with four force sensing resistors (FSRs) to measure the forces applied to it. A web camera (Logitech Pro Stream Webcam C922 1080HD) recorded videos for later analysis in a program (OpenPose). To read the forces, the four FSRs were connected to four breadboards, each connected individually through a 3.3 kΩ resistor and three breadboard cables to an Arduino UNO (REV3), which was connected to a PC through USB.
The pictures above show how the ZJchao force sensitive resistor was attached to the UNO R3 board. The FSR was attached to the breadboard through a F-M Dupont wire. The red and blue breadboard jumper wires are for 5 V power and grounding, while the yellow breadboard jumper wire carries the signal onward to the computer. The resistance of the fixed resistor was 3.3 kΩ. To convert the analog output to digital, the Arduino IDE was used, along with some code to read the output of the force sensors.
Design. The experiment took place in a room with a grid on the floor, with fabric pieces of four different colors placed in some squares of the grid. The grid had 32 squares (8×4), each 50 cm × 50 cm. A camera in the ceiling recorded the procedure; afterwards the videos were analyzed in a program named PoseNet, which tracked the participants’ motions and trajectories. The starting position of the table, with its four force sensing resistors attached, was at the short side of the grid.
On the short sides of the table there were handles, each resting on two force sensing resistors that were attached directly to the table. Each handle had a piece of eraser with the same area as the sensor attached to it, and the handles were loosely fastened to the table with silver tape so that the eraser touched the sensor. No pressure was put on the sensors while at rest.
The schematic above shows the connection between the FSR and the computer through the Arduino UNO, which converts the analog signal from the FSR into a digital signal that can be printed. The relationship between force and resistance is generally linear from 50 g and up, but note its behavior below 50 g, and even more so below 20 g. These sensors have a turn-on threshold: a force that must be present before the resistance drops below 10 kΩ, where the relationship becomes more linear. To counteract this irregularity at the low end of the chart, I altered the code, changing the parabolic reading of pressure on the force sensor to a linear one by dividing the force by a certain constant that was given to us.
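A sketch of the reading-to-force conversion is below, assuming the usual voltage-divider arithmetic for this wiring (FSR on the high side, 3.3 kΩ fixed resistor on the low side, measured with the Arduino’s 10-bit ADC). The calibration constant is a placeholder, not the constant we were actually given:

```python
# Sketch: 10-bit ADC count -> FSR resistance -> approximate force.
# Assumes the FSR sits on the high side of a 3.3 kOhm divider and the
# voltage across the fixed resistor is what the ADC measures.
VCC = 5.0       # supply voltage (V)
R_DIV = 3300.0  # fixed divider resistor (ohms), as in our setup

def adc_to_resistance(adc, vcc=VCC, r_div=R_DIV):
    """FSR resistance in ohms from a 0-1023 ADC count."""
    v_out = adc * vcc / 1023.0
    if v_out <= 0:
        return float("inf")  # no pressure: resistance effectively infinite
    return r_div * (vcc - v_out) / v_out

def resistance_to_force(r_fsr, k=330.0):
    """Rough force estimate treating force as proportional to the FSR's
    conductance. Only valid above the turn-on threshold (~10 kOhm);
    k is a placeholder calibration constant."""
    if r_fsr == float("inf") or r_fsr > 10_000:
        return 0.0  # below the turn-on region
    return k * 1e3 / r_fsr
```

Dividing by a constant this way linearizes the low-end nonlinearity only approximately, which is why readings below the turn-on threshold are simply treated as zero here.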
Procedure. There were two participants at a time. Initially, a general explanation of the experiment was read aloud to them. They were told that they were to carry a table together through a room and that they were not allowed to turn around or speak with the other person. The experiment would be repeated three times with some small changes, and was over when the table was put down on the floor. Further information was given individually.
These were the instructions that were given to the participating pairs on a paper.
Experiment 1, person 1:
“On the floor there will be a grid with some fabric pieces of different colors in it. The grid is the area where the experiment will take place. Remember, you are not allowed to speak with the other person or turn around.”
Experiment 1, person 2:
“On the floor there will be a grid with some fabric pieces of different colors in it. The grid is the area where the experiment will take place. Your task is to place the two legs of the table on your side into two squares with blue colored fabric pieces. The other person doesn’t know about this and you will have to navigate the both of you so that you accomplish the mission. Remember, you are not allowed to speak with the other person or turn around.”
Experiment 2, person 1:
“Your task is to place the two legs of the table on your side into two squares with green colored fabric pieces. The other person doesn’t know about this and you will have to navigate the both of you so that you accomplish the mission. Remember, you are not allowed to speak with the other person or turn around.”
Experiment 2, person 2:
“Your task is to place one of the two legs of the table on your side into a square with a yellow colored fabric piece in it. The other person doesn’t know about this and you will have to navigate the both of you so that you accomplish the mission. Remember, you are not allowed to speak with the other person or turn around.”
Experiment 3, person 1:
“Your task is to place one of the two legs of the table on your side into a square with a yellow colored fabric piece in it, and the other into a square with a blue colored fabric piece. The other person doesn’t know about this and you will have to navigate the both of you so that you accomplish the mission. Remember, you are not allowed to speak with the other person or turn around.”
Experiment 3, person 2:
“Your task is to place one of the two legs of the table on your side into a square with a white colored fabric piece in it, and the other into a square with a blue colored fabric piece. The other person doesn’t know about this and you will have to navigate the both of you so that you accomplish the mission. Remember, you are not allowed to speak with the other person or turn around.”
All the experiments were recorded on film using the Logitech camera in the ceiling. After the experiment, these videos were analyzed in OpenPose, which gave us exact position data, movements, and trajectories. During the experiment, the force sensing resistors were read every 0.5 seconds, and we timed all the experiments with a stopwatch.
To collect the force data, I connected each of the four force sensors to its own breadboard and Arduino UNO board, and through a USB connection to my computer I used the Arduino IDE with some code to print out the force outputs. Then, using a script and vi, I was able to easily isolate the forces and timestamps into a text editor.
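The isolation step can be sketched in Python. The line format below is an assumed example for illustration; the actual printout from our Arduino code may differ:

```python
# Sketch of isolating forces and timestamps from the Arduino serial log.
# Assumed line format: "time_ms,f1,f2,f3,f4" (illustrative, not the
# actual format of our printout).
def parse_force_log(lines):
    """Parse log lines into (seconds, [four forces]) pairs, skipping
    headers and malformed or partial lines."""
    samples = []
    for line in lines:
        fields = line.strip().split(",")
        if len(fields) != 5:
            continue
        try:
            t_ms, *forces = (float(x) for x in fields)
        except ValueError:
            continue  # e.g. a header row such as "time,f1,f2,f3,f4"
        samples.append((t_ms / 1000.0, forces))
    return samples

log = ["time,f1,f2,f3,f4", "500,1.2,0.0,3.4,0.1", "1000,1.1,0.2,3.0,0.0"]
print(parse_force_log(log))
```

Each resulting pair corresponds to one 0.5-second sampling tick, ready for plotting.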
Looking like this:
To have something against which to compare the results of the experiments, we made a baseline: the exact same experiments, but with both participants knowing where they should place the table.
Participants. Participants were recruited from different departments at Stanford University (the Artificial Intelligence Laboratory, Wallenberg Hall, and interns from the STEM to SHTEM internship program). Twelve people participated in the experiment (5 female and 7 male), all between the ages of 15 and 52, with an average age of 21. All participated voluntarily.
All trials were recorded with the webcam attached to the ceiling, and machine learning techniques were then used to extract various body points: OpenPose and TensorFlow allowed us to track positions relative to the camera. A combination of Python scripts and bash tools was needed to extract and format the data.
Different types of people seem to react differently to each of the experiments based on their previous interactions. For example, a father and his daughter moved the table calmly, while a pair of friends made more powerful movements. We also noticed that people pulled more than they pushed.
In the graphs above, we have calculated the difference between the measurements from the pushing and pulling sensors on each handle to get the resulting force, over the whole duration of each experiment. In the bottom right corner of the figure is the baseline, which can be used for comparison.
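The resulting-force computation is simply a per-sample difference between the two sensors on a handle; a minimal sketch, with the sign convention (pull minus push) stated here as an assumption:

```python
# Net force per handle: pulling-sensor reading minus pushing-sensor
# reading at each sampling tick. Positive values mean net pulling,
# negative values net pushing (sign convention assumed for illustration).
def net_forces(pull, push):
    return [a - b for a, b in zip(pull, push)]

print(net_forces([2.0, 1.5, 0.0], [0.5, 2.0, 0.0]))  # -> [1.5, -0.5, 0.0]
```

Plotting this difference over time for each handle gives the curves compared against the baseline in the figure.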
Not surprisingly, the baseline pair solved the task faster than the others in all of the experiments. The baseline pair also used more precise nudges rather than a lot of back-and-forth communication. However, the participants who did not know exactly where the goal was also managed to complete the tasks well.
There is a pattern in the way the different groups solved the task: for example, group 3’s three graphs look a lot alike, and the same holds for groups 1 and 2. More measurements are needed to establish reliable conclusions, but there seem to be a few different styles of solving these kinds of tasks. It may be better to try to identify these different styles rather than collecting data and calculating an average, because a combination of working methods does not necessarily result in a new working one.
Humans have an advanced and efficient way of communicating with each other, and the rapidly growing field of artificial intelligence would benefit from learning this kind of communication. In this paper we focused on nonverbal communication via nudges when carrying an object. We developed a methodology to measure the push and pull forces applied to an object being carried. The results from the experiments showed that the tasks could be solved, but more efficiently when both participants knew the goal. Another conclusion is that there seem to be different methods for solving these kinds of tasks, which may depend on the relationship between the two participants; these different methods should be identified.
Limitations and future work. When we designed this experiment, we wanted to eliminate all ways to communicate except nudges, so that we could be sure the measurements were reliable and correct. One limitation is that we did not exclude the possibility of communicating via eye contact; future work should take this into consideration. An easy way to solve this problem would be to have the participants wear capes or hats. The measurements from the force sensing resistors showed how the two participants pushed and pulled; another dimension that should be added is turning, which would give a more complete idea of how the exchange of information via nudges works. Measuring turning can easily be done with two handles on each side of the table, placed at the four corners (this requires four more force sensing resistors). With four handles, the measurements will show when someone pushes more on one handle or pulls more strongly on one side of the table, making it possible to identify turning. The final product of this project would be to transfer our measurements onto AI robots so that they can communicate more efficiently and in a more human-like way.
We would like to thank Professor Weissman for allowing us to participate in the STEM to SHTEM program. Also many thanks to Mengxi Li and Minae Kwon for mentoring us in this project and finally a big thank you to all the participants who helped us gather data for this paper.
Force Sensitive Resistor Hookup Guide, learn.sparkfun.com/tutorials/force-sensitive-resistor-hookup-guide/all.
Kevin W. Rio, William H. Warren (2016). Interpersonal coordination in biological systems: The emergence of collective locomotion.
Patrick Nalepka, Rachel W. Kallen, Anthony Chemero, Elliot Saltzman, and Michael J. Richardson (2017). Herd Those Sheep: Emergent Multiagent Coordination and Behavioral-Mode Switching.