Self-driving technologies have been increasingly developed and tested in recent years (e.g., Volvo's and Google's self-driving cars). However, only a limited number of investigations have so far been conducted into communication between self-driving cars and pedestrians. For example, when a pedestrian is about to cross a street, that pedestrian needs to know the intention of the approaching self-driving car. In the present study, we designed a novel interface known as "Eyes on a Car" to address this problem. We added eyes onto a car so as to establish eye-contact communication between that car and pedestrians. The car looks at the pedestrian in order to indicate its intention to stop. This novel interface design was evaluated in a virtual reality (VR) simulated environment featuring a street-crossing scenario. The evaluation results show that pedestrians can make the correct street-crossing decision more quickly if the approaching car has the novel "eyes" interface than if it is a normal car. In addition, the results show that pedestrians feel safer about crossing a street if the approaching car has eyes and if the eyes look at them.
Applying eye-gaze techniques to the design of an external human-machine interface (eHMI) for a self-driving car is promising. Several prior "eye" studies exist; however, due to the difficulty of running a study in a real environment, prior research was often evaluated in a controlled VR environment. It is unclear how physical eyes on a car affect pedestrians' thoughts in a real-world outdoor environment. To answer this question, we built and mounted a set of physical eyes of a size suitable for a real car, drove the car in a public open space, activated the physical eyes, and performed the eye-gaze interaction with pedestrians without providing them any prior explanation. We administered a questionnaire to collect pedestrians' thoughts and conducted an inductive thematic analysis. By comparing our findings with previous results through a literature review, we highlight the significance of physically implementing the "eye concept" for future research.
We devised a display technology that utilizes the phenomenon whereby the shading properties of fur change as the fibers are raised or flattened. One can erase drawings by first flattening the fibers by sweeping the surface by hand in the fibers' growth direction, and then draw lines by raising the fibers by moving a finger in the opposite direction. These material properties can be found in various items, such as carpets, in our living environments. We have developed three different devices to draw patterns on a "fur display" utilizing this phenomenon: a roller device, a pen device, and a pressure projection device. Our technology can turn ordinary objects in our environment into rewritable displays without requiring or creating any non-reversible modifications to them. In addition, it can be used to present large-scale images without glare, and the images it creates incur no running costs to maintain.
Computer displays play an important role in connecting the information world and the real world. In the era of ubiquitous computing, it is essential to be able to access information in a fluid way, and the unobtrusive integration of displays into our living environment is a basic requirement for achieving this. Here, we propose a display technology that utilizes the phenomenon whereby the shading properties of fur change as the fibers are raised or flattened. One can erase drawings by first flattening the fibers by sweeping the surface by hand in the fibers' growth direction, and then draw lines by raising the fibers by moving a finger in the opposite direction. These material properties can be found in various items, such as carpets and plush toys, in our living environment. Our technology can turn these ordinary objects into displays without requiring or creating any non-reversible modifications to the objects. It can be used to make a large-scale display, and the drawings it creates incur no running costs.
An evaluation technique is introduced for artifacts appearing on PDP images when the eye of an observer does not follow the motion of the images. These motion artifacts are especially serious when the eye moves arbitrarily. Dividing the light-emission periods of sub-fields into smaller blocks and adding equalizing pulses to the original signal are effective in reducing the disturbances.
We propose a drawing method for creating large-scale pictures in public spaces. We use the anisotropic reflection properties of grass to show images on a grass field. We created a prototype roller-type device that can control the angle of the grass blades. We observed that our system entertained people at a public exhibition.
There is an increasing need for communication between autonomous cars and pedestrians. Some conceptual solutions have been proposed to address this issue, such as using various communication modalities (eyes, a smile, text, light, and a projector) on a car to communicate with pedestrians. However, no detailed study has compared these communication modalities. In this study, we compare five modalities in a pedestrian street-crossing situation via a video experiment. The results show that text is better than the other modalities at expressing the car's intention to pedestrians. In addition, we compare the modalities across different scenarios and environments, as well as pedestrians' perception of the modalities.
Recently, there has been a growing emphasis on autonomous vehicles (AVs), and as they coexist with pedestrians, ensuring pedestrian safety at crosswalks has become paramount. While AVs exhibit commendable performance on traditional roads with established traffic infrastructure, their interaction in different environments, such as shared spaces lacking traffic lights or sign rules (also known as naked streets), can present significant challenges, including right-of-way and accessibility concerns. To address these challenges, this study proposes a novel approach to enhancing pedestrian safety in shared spaces, focusing on the proposed smart pole interaction unit (SPIU) combined with an external human-machine interface (eHMI). By evaluating the proposed SPIU in a virtual reality system, we explore its usability and effectiveness in facilitating vehicle-to-pedestrian (V2P) interactions at crosswalks. Our findings show that the SPIU facilitates safe, quicker decisions on whether to stop or cross at crosswalks in shared spaces, reduces pedestrians' cognitive load compared with scenarios without an SPIU, and reduces the need to check the eHMI on multiple AVs. Adding the SPIU to the vehicles' eHMI yields a noteworthy 21% improvement in response time, enhancing efficiency during pedestrian stops. In both scenarios, whether with a single AV (1-way) or multiple AVs (2-way), the SPIU has a positive impact on interaction dynamics and demonstrates a statistically significant improvement (p = 0.001).