COSI AI Robot

Supported by an "AI grant," this research serves as a preliminary exploration of conversational AI in museums, focusing on AI-powered robot. A robot model, designed based on functional components, was integrated into augmented reality (AR) scenarios using Reality Composer on iPad. Interviews were conducted at COSI, a science museum in Columbus, to gather visitors' insights and opinions about the robot concept. Participants used the iPad to view the virtual robot in real-world settings and provided feedback through targeted questions. The findings informed the design of AI-powered robots in museums and highlighted the potential of AR technology in user research.

Procedure

1. Designed and prototyped the robot based on its internal components.
2. Developed interview materials, including interview questions and AR scenarios built in Reality Composer on iPad.
3. Conducted interviews at the COSI science center, recording both screen and audio.
4. Transcribed the audio recordings and analyzed the data.

Robot Design and Prototyping

At this stage, we considered two main internal components for the robot. First, an iPad captures voice and image data while running the AI models for human recognition, language processing, and dialogue generation. Second, an external monitor mounted on a mobile stand displays visual information. Based on these components, we designed the robot enclosure with the following features:
1) Identifiability: The robot should be instantly recognizable, with internal components like the iPad hidden to avoid distracting visitors from the experience.
2) Approachability: The design should be friendly, approachable, and appealing, especially to children, encouraging interaction. We drew inspiration from popular robot characters in cartoons and movies.
3) Feasibility: The enclosure is designed around the essential components to ensure practicality. A preliminary structural design was also developed to support functionality.
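To make the first component concrete, here is a minimal Swift sketch, for illustration only, of how the iPad's voice capture might feed a dialogue model using Apple's Speech framework. The VoicePipeline class and the generateReply function are hypothetical placeholders, not the project's actual implementation.

```swift
import Speech
import AVFoundation

// Minimal sketch, assuming Apple's Speech framework, of the voice half of the
// on-iPad pipeline: microphone audio streams into a speech recognizer, and the
// final transcript is handed to a dialogue model. generateReply is a
// hypothetical placeholder, not the project's actual model.
final class VoicePipeline {
    private let recognizer = SFSpeechRecognizer(locale: Locale(identifier: "en-US"))
    private let audioEngine = AVAudioEngine()
    private let request = SFSpeechAudioBufferRecognitionRequest()

    func start() throws {
        // A real app must first obtain permission via
        // SFSpeechRecognizer.requestAuthorization(_:).
        let input = audioEngine.inputNode
        let format = input.outputFormat(forBus: 0)
        input.installTap(onBus: 0, bufferSize: 1024, format: format) { [weak self] buffer, _ in
            self?.request.append(buffer)  // feed mic audio to the recognizer
        }
        audioEngine.prepare()
        try audioEngine.start()

        recognizer?.recognitionTask(with: request) { [weak self] result, _ in
            guard let result, result.isFinal else { return }
            let utterance = result.bestTranscription.formattedString
            print(self?.generateReply(to: utterance) ?? "")
        }
    }

    // Hypothetical stand-in for the dialogue-generation model.
    private func generateReply(to utterance: String) -> String {
        "Robot reply to: \(utterance)"
    }
}
```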
Three Reality Composer Projects on iPad

AR Scenarios

We prepared three Reality Composer projects on the iPad. During the interview sessions, researchers switched between the projects according to the questions being asked.
1. Project 1 contains four files of robots with different color schemes.
2. Project 2 contains four files of robots with different poses.
3. In Project 3, the robot is animated to look at the camera, making participants feel that the robot is following and watching them (a code sketch of this behavior follows this list).
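The project authored this look-at behavior in Reality Composer; as an illustration of the same idea in code, the following RealityKit sketch rotates a robot entity toward the camera on every frame. The LookAtCameraBehavior class and its attach method are illustrative names, not part of the project.

```swift
import RealityKit
import ARKit
import Combine

// Minimal sketch (illustrative names, not the project's files): rotate a robot
// entity toward the camera on every frame, approximating the Project 3 behavior.
final class LookAtCameraBehavior {
    private var updateSubscription: Cancellable?

    func attach(robot: Entity, in arView: ARView) {
        updateSubscription = arView.scene.subscribe(to: SceneEvents.Update.self) { _ in
            let cameraPosition = arView.cameraTransform.translation
            let robotPosition = robot.position(relativeTo: nil)
            // Track the camera only in the horizontal plane so the robot stays upright.
            let target = SIMD3<Float>(cameraPosition.x, robotPosition.y, cameraPosition.z)
            // look(at:) points the entity's -Z axis at the target; a model authored
            // with a different forward axis may need an extra fixed rotation.
            robot.look(at: target, from: robotPosition, relativeTo: nil)
        }
    }
}
```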
Project 1: Four Color Schemes
Project 2: Four Poses
Front and Back of the Robot & Project 3: Robot Looking at the Camera

Interview Questions

(Show the white robot in Project 1)
1. Hold the iPad and look at the screen. You will see an installation. What do you think this is?
2. Why do you think this is a robot? Or why do you think this is a ____? Why is this not a robot?
3. Would you walk up to this robot?
4. When you are close to this robot, how would you like the robot to react to you? a. How would you expect the robot to behave when you are near it? b. Would you want to talk to the robot?
(Ask participants to tap on iPad to go through four color versions of the robot)
5. Here are the robots with different body colors and eye colors. Can you tell me which colors you like most? a. Why?
6. If you walked past this robot, would you want the robot to look at you?
a. Or under what circumstances do you hope the robot will and will not look at you?
7. Robots can be made up of many parts. What do you think this robot is made up of?
- Battery. - Display screen. - iPad. - Camera. - Gears. - Chips. - Others _____
(Shift to Project 3 on iPad)
8. Walk around this robot. Do you notice anything different?
a. Do you like that the robot is looking at you? b. Does the robot looking at you make it seem more friendly, or not?
(Shift to Project 2 on iPad)
9. Pick the pose that you think best represents each of the following behaviors of the robot:
- Waiting. - Inviting. - Listening. - Talking. - Done/Go away.
10. Do you have any suggestions on the look of the robot?
a. How could the robot look friendlier? b. How could the robot look angry?
11. Can you give the robot a name?

Data Collection and Analysis

On June 12, 2024, we conducted interviews at the COSI science center with 8 groups totaling 26 visitors. More than half of the participants were children under 10 years old. The iPad recorded both screen and audio during the sessions. The collected data were analyzed as follows.
● Robot Identification: All participants identified the installation as a robot, linking its design to familiar robot characters through features like hinges and a square body.
● Willingness to Interact: All participants were open to interacting with the robot, envisioning interactions such as greetings, question answering, games, and human-like or superhero actions.
● Color Preference: Most participants favored the red and blue robot with green eyes, associating the colors with superheroes, vibrancy, and personal preference.
● Eye-Following Feedback: Opinions on the robot’s eye-following feature were mixed—some found it engaging, while others found it unsettling. AR animations influenced opinions, with some participants changing their views after seeing the robot in action.
● Component Recognition: Participants identified external parts like metal, screens, and wires but did not recognize hidden internal components such as the iPad.
● Pose Interpretation: Participants had similar perceptions for certain poses, like waving representing "inviting," while the fourth pose, with arms crossed, elicited a wide range of interpretations, including impatience, authority, or a superhero stance.
● Design Suggestions: Participants suggested adding more human-like features, such as a mouth or customizable elements like sunglasses, to make the robot more relatable and approachable. These additions would help the robot express a wider range of emotions and enhance user engagement.
● AR Interaction and User Research: Participants, particularly children, were comfortable using the iPad and were excited to interact with the AR robot. The use of AR technology proved effective in user research, allowing participants to see and engage with the digital twin of the robot in real-world environments. This approach reduced the need for physical prototypes in early design stages, saving time while offering valuable insights into how users might interact with the final product.