Implicit Communication in Collaborative Object Transport

Journal for High Schoolers, 2019

Authors:

Munir Bshara and My Pohl

Abstract:

When two people carry an object together, they manage to change direction and avoid obstacles with minimal or even no verbal communication. Communication using nudges, forces, and eye contact is often sufficient. In this paper, we study this nonverbal language with the aim of understanding it better. This type of communication is also useful for artificially intelligent robots: the results of our study and further experiments may contribute to better communication between robots and humans as well as between robots. The experimental setup consisted of two people carrying an object to a given goal while data was collected from force sensing resistors attached to the object and from a camera. The results suggest that there are different styles of communicating. To apply this kind of communication to robots, we think it will be necessary to identify these different styles rather than collecting experimental data and computing an average.

1. Methods

Materials. A grid made out of masking tape was used to outline the experimental area. The object the participants carried was a table. Four force sensing resistors (FSRs) measured the forces applied to the object, and a web camera (Logitech Pro Stream Webcam C922 1080HD) recorded videos that were later analyzed in a pose-tracking program (OpenPose). To measure the forces, the four FSRs were each connected to a breadboard and, through a 3.3 kΩ resistor and three breadboard cables, to an Arduino UNO (REV3) connected to a PC over USB.

Figure 1. A schematic overview of the electrical circuit.
Figure 2. A photograph of the actual set up.

The pictures above show how the ZJchao force sensitive resistor was attached to the UNO R3 board. The FSR was attached to the breadboard through a female-to-male Dupont wire. The red and blue breadboard jumper wires provide 5 V power and ground, while the yellow breadboard jumper wire carries the sensor signal to the Arduino, which the computer reads over USB. The fixed resistor had a resistance of 3.3 kΩ. The Arduino IDE, together with a short piece of code, was used to convert the analog output of the force sensors to digital readings and record them.

Design. The experiment took place in a room with a grid on the floor; pieces of fabric in four different colors were placed in some of the squares of the grid. The grid had 32 squares (8 × 4), and each square was 50 cm × 50 cm. A camera in the ceiling recorded the procedure, and afterwards the videos were analyzed in a program named PoseNet, which tracked the participants' motions and trajectories. The starting position of the table, with the four force sensing resistors attached, was at the short side of the grid.

On each short side of the table there was a handle with two force sensing resistors. The sensors were attached directly to the table, and the handles rested on top of the sensors. Each handle had a piece of eraser, with the same area as the sensor, attached to it. The handles were loosely fastened to the table with silver tape so that the eraser touched the sensor, and no pressure was applied to the sensors while at rest.

Figures 3 and 4. Figure 3 is a photograph showing the construction of the handles. Figure 4 illustrates how the handles were used.
Figure 5. A schematic showing a connection between an FSR and computer using Arduino UNO.
Figure 6. A graph showing the relationship between resistance and force.

The schematic above shows the connection between the FSR and the computer using the Arduino UNO, which converts the analog signal from the FSR into a digital value that can be printed. The relationship between resistance and force is roughly linear from 50 g and up, but it behaves differently below 50 g, and even more so below 20 g. These sensors have a turn-on threshold: a force that must be present before the resistance drops below about 10 kΩ, where the relationship becomes more linear. To counteract this irregularity at the low end of the chart we altered the code so that the reading is divided by a constant that was given to us, turning the roughly parabolic relationship between applied pressure and reading into an approximately linear one.
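As an illustration of this kind of conversion, the sketch below estimates the force from a raw Arduino reading, assuming a 10-bit ADC, a 5 V supply, and the 3.3 kΩ divider resistor from the circuit above; the calibration constant is hypothetical, and the constant we actually used in the Arduino code is not reproduced here.

# Hedged sketch: convert a raw Arduino ADC reading from the FSR voltage divider
# into an approximate force. The calibration constant K_CAL is hypothetical.

VCC = 5.0          # Arduino UNO supply voltage (V)
R_FIXED = 3.3e3    # fixed divider resistor (ohms), as in the circuit above
ADC_MAX = 1023     # 10-bit ADC on the Arduino UNO
K_CAL = 5.0        # hypothetical calibration constant (microsiemens per gram)

def fsr_force_grams(raw_adc: int) -> float:
    """Estimate the force on the FSR in grams from a raw ADC reading."""
    if raw_adc <= 0:
        return 0.0                              # below the turn-on threshold
    v_out = min(raw_adc * VCC / ADC_MAX, VCC - 1e-6)   # voltage across the fixed resistor
    r_fsr = R_FIXED * (VCC - v_out) / v_out     # FSR resistance from the divider equation
    conductance_us = 1e6 / r_fsr                # conductance in microsiemens
    return conductance_us / K_CAL               # roughly linear in force above ~50 g

if __name__ == "__main__":
    for raw in (50, 200, 500, 900):
        print(raw, round(fsr_force_grams(raw), 1))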

Procedure. There were two participants at a time. Initially, a general explanation of the experiment was read out loud to them. They were told that they were supposed to carry a table together through a room and that they were not allowed to turn around or speak with the other person. The experiment would be repeated three times with some small changes, and each trial was over when the table was put down on the floor. Further information was given individually.

Figure 7. A schematic overview of the experimental area with the three goals plotted. The numbers indicate the experiment number.
Figure 8. Photograph of the actual experimental area taken with the web camera from the same view as the video recordings.

These were the instructions given to each participating pair on paper.

Experiment 1, person 1:

“On the floor there will be a grid with some fabric pieces of different colors in it. The grid is the area where the experiment will take place. Remember, you are not allowed to speak with the other person or turn around.”

Experiment 1, person 2:

“On the floor there will be a grid with some fabric pieces of different colors in it. The grid is the area where the experiment will take place. Your task is to place the two legs of the table on your side into two squares with blue colored fabric pieces. The other person doesn’t know about this and you will have to navigate the both of you so that you accomplish the mission. Remember, you are not allowed to speak with the other person or turn around.”

Experiment 2, person 1:

“Your task is to place the two legs of the table on your side into two squares with green colored fabric pieces. The other person doesn’t know about this and you will have to navigate the both of you so that you accomplish the mission. Remember, you are not allowed to speak with the other person or turn around.”

Experiment 2, person 2:

“Your task is to place one of the two legs of the table on your side into a square with a yellow colored fabric piece in it. The other person doesn’t know about this and you will have to navigate the both of you so that you accomplish the mission. Remember, you are not allowed to speak with the other person or turn around.”

Experiment 3, person 1:

“Your task is to place one of the two legs of the table on your side into a square with a yellow colored fabric piece in it, and the other into a square with a blue colored fabric piece. The other person doesn’t know about this and you will have to navigate the both of you so that you accomplish the mission. Remember, you are not allowed to speak with the other person or turn around.”

Experiment 3, person 2:

“Your task is to place one of the two legs of the table on your side into a square with a white colored fabric piece in it, and the other into a square with a blue colored fabric piece. The other person doesn’t know about this and you will have to navigate the both of you so that you accomplish the mission. Remember, you are not allowed to speak with the other person or turn around.”

All the experiments were recorded on film using the Logitech camera in the ceiling. After the experiments, the videos were analyzed in OpenPose, which gave us exact position data, movements, and trajectories. During the experiments the force sensing resistors were read every 0.5 seconds. We also timed all the experiments with a stopwatch.

To collect the force data, each of the four force sensors was connected to its own breadboard and its own Arduino UNO board, and through a USB connection to a computer the Arduino IDE, running a short piece of code, printed out the force outputs. A script and the vi text editor were then used to isolate the forces and elapsed times into a plain text file.
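We used the Arduino IDE's serial output together with a script and vi; as a rough illustration of the same logging step, the Python sketch below reads the Arduino's serial output directly and writes timestamped force lines to a text file. The port name, baud rate, logging duration, and line format here are assumptions.

# Hedged sketch: log the Arduino's printed force readings to a text file.
# Port name, baud rate, and line format are assumptions for illustration.
import time
import serial  # pyserial (pip install pyserial)

PORT = "/dev/ttyACM0"   # typical Arduino UNO port on Linux; differs per machine
BAUD = 9600             # must match the rate set in the Arduino sketch

with serial.Serial(PORT, BAUD, timeout=1) as ser, open("forces.txt", "w") as log:
    start = time.time()
    while time.time() - start < 120:             # log for two minutes
        line = ser.readline().decode(errors="ignore").strip()
        if not line:
            continue
        elapsed = time.time() - start
        log.write(f"{elapsed:.1f}\t{line}\n")     # elapsed time and printed forces
        log.flush()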


To have something to compare the experimental results with, we established a baseline: the same experiments, but with both participants knowing where the table should be placed.

Participants. Participants were recruited from different departments at Stanford University (the Artificial Intelligence Laboratory, Wallenberg Hall, and interns from the internship program "STEM to SHTEM"). The participants were between the ages of 15 and 52, with an average age of 21 years. Twelve people participated in the experiment (5 female and 7 male). Participation was voluntary.

2. Results

Figure 9. The tracked positions of player 2.
Figure 10. The tracked positions of player 1.

All trials were recorded with the webcam attached to the ceiling. Machine learning techniques were then used to extract various body points: OpenPose and TensorFlow allowed us to track the participants' positions relative to the camera. A combination of Python scripts and bash tools was needed to extract and format the data.
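As a rough sketch of this extraction step, the script below reads OpenPose's per-frame JSON output and collects one keypoint per detected person into a trajectory; the output directory, the choice of the neck keypoint, and the keypoint index are assumptions, and the exact scripts we used are not reproduced here.

# Hedged sketch: collect per-frame keypoint positions from OpenPose JSON output.
# Directory name and keypoint index are assumptions for illustration.
import glob
import json

NECK = 1  # keypoint index for the neck in OpenPose's BODY_25/COCO ordering

trajectories = {}  # person index -> list of (x, y) positions, one per frame
for path in sorted(glob.glob("openpose_output/*_keypoints.json")):
    with open(path) as f:
        frame = json.load(f)
    # Note: OpenPose does not track identity, so person order may change between frames.
    for i, person in enumerate(frame.get("people", [])):
        kp = person["pose_keypoints_2d"]        # flat list: x0, y0, c0, x1, y1, c1, ...
        x, y, conf = kp[3 * NECK: 3 * NECK + 3]
        if conf > 0:                            # skip undetected keypoints
            trajectories.setdefault(i, []).append((x, y))

for person, points in trajectories.items():
    print(f"person {person}: {len(points)} tracked frames")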

It seems that different types of people react differently to each of the experiments based on their previous interactions. For example, a father and his daughter moved the table calmly while a pair of friends made more forceful movements. We also noticed that people pulled more than they pushed.

Figure 11. Four graphs illustrating the forces that were put onto the object from the two participants in experiment 1.
Figure 12. Four graphs illustrating the forces that were put onto the object from the two participants in experiment 2.
Figure 13. Four graphs illustrating the forces that were put onto the object from the two participants in experiment 3.

In the graphs above we have calculated the difference between the measurements from the pushing and pulling sensors on each handle to get the resulting force, over the whole duration of each experiment. In the lower right corner of each figure is the baseline, which can be used for comparison.
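A minimal sketch of that calculation is shown below, assuming the logged data has already been parsed into lists of push and pull readings per handle; the variable names and sample values are hypothetical, not measured data.

# Hedged sketch: resulting force per handle as pull minus push at each time step.
# The sample readings below are hypothetical, not measured data.

def resulting_force(pull_readings, push_readings):
    """Return the net force on a handle for each 0.5 s sample."""
    return [pull - push for pull, push in zip(pull_readings, push_readings)]

pull = [0.0, 120.0, 260.0, 180.0]   # grams, hypothetical
push = [0.0,  40.0,  90.0, 210.0]   # grams, hypothetical
print(resulting_force(pull, push))  # positive = net pull, negative = net push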

Not surprisingly, the baseline pairs solved the tasks faster than the others in all of the experiments. The baseline pairs also used more precise nudges rather than a lot of back-and-forth communication. However, the participants who did not know exactly where the goal was also managed to complete the tasks well.

There is a pattern in the way the different groups solved the tasks. For example, group 3's three graphs look a lot alike, and the same holds for groups 1 and 2. More measurements are needed to draw reliable conclusions, but there seem to be a few different styles of solving these kinds of tasks. It may be better to try to identify these different styles rather than collecting data and computing an average, because a combination of working methods does not necessarily result in a new working one.

3. Conclusions

Humans have an advanced and efficient way of communicating with each other, and the rapidly growing research field of artificial intelligence would benefit from learning this kind of communication. In this paper we focused on nonverbal communication via nudges when carrying an object. We developed a methodology for measuring the push and pull forces applied to an object that is being carried. The results from the experiments showed that the tasks could be solved, but that they were solved more efficiently when both participants knew the goal. Another conclusion is that there seem to be different methods of solving these kinds of tasks and that they may depend on the relationship between the two participants. These different methods should be identified.

Limitations and future work. When we designed this experiment we wanted to eliminate all ways of communicating other than nudges, so that we could be sure the measurements were reliable and correct. One limitation is that we did not exclude the possibility of communicating via eye contact; future work should take eye contact into consideration. An easy way to solve this problem would be to have the participants wear capes or hats. The measurements from the force sensing resistors showed how the two participants pushed and pulled. Another dimension that should be added is turning, which would give a more complete picture of how the exchange of information via nudges works. Turning can easily be measured with two handles on each side of the table, placed at the four corners (this requires four more force sensing resistors). With four handles, the measurements would show when someone pushes harder on one handle or pulls harder on one side of the table, making it possible to identify turns. The final goal of this project would be to transfer our measurements onto AI robots so that they can communicate more efficiently and in a more human-like way.
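As a sketch of how such a four-handle setup could be interpreted, the function below flags a turn when the net forces on a side's left and right handles differ by more than a threshold; the threshold value, sign convention, and sample numbers are assumptions for illustration.

# Hedged sketch: flag a turn from the net forces on a side's left and right handles.
# Threshold and sign convention (positive = pull, in grams) are assumptions.

def detect_turn(left_net, right_net, threshold=50.0):
    """Return 'left', 'right', or None depending on which handle is pulled harder."""
    imbalance = right_net - left_net
    if imbalance > threshold:
        return "right"   # pulling noticeably harder on the right handle
    if imbalance < -threshold:
        return "left"    # pulling noticeably harder on the left handle
    return None

print(detect_turn(left_net=30.0, right_net=120.0))   # -> 'right'
print(detect_turn(left_net=80.0, right_net=70.0))    # -> None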

4. Acknowledgements

We would like to thank Professor Weissman for allowing us to participate in the STEM to SHTEM program. Also many thanks to Mengxi Li and Minae Kwon for mentoring us in this project and finally a big thank you to all the participants who helped us gather data for this paper.

5. References

Force Sensitive Resistor Hookup Guide. SparkFun Electronics. learn.sparkfun.com/tutorials/force-sensitive-resistor-hookup-guide/all.

Kevin W. Rio and William H. Warren (2016). Interpersonal Coordination in Biological Systems: The Emergence of Collective Locomotion.

Patrick Nalepka, Rachel W. Kallen, Anthony Chemero, Elliot Saltzman, and Michael J. Richardson (2017). Herd Those Sheep: Emergent Multiagent Coordination and Behavioral-Mode Switching.
