The detection of face-screen distance on a smartphone (i.e., the distance between the user's face and the smartphone screen) is of paramount importance for many mobile applications, including dynamic adjustment of screen on/off state, screen resolution, screen luminance, and font size, for purposes such as power saving and protection of human eyesight. Existing detection techniques for face-screen distance depend on external or internal hardware, e.g., an accessory plug-in sensor (such as an infrared or ultrasonic sensor) to measure the face-screen distance, or a built-in proximity sensor that usually outputs only a coarse-grained, two-valued proximity index (for the purpose of powering the screen on/off). In this paper, we present a fine-grained detection method, called "Look Into My Eyes (LIME)", that utilizes the front camera and inertial accelerometer of the smartphone to estimate the face-screen distance. Specifically, LIME captures a photo of the user's face only when the accelerometer detects certain motion patterns of the phone, and then estimates the face-screen distance from the distance between the user's eyes in the image. Besides, LIME takes care of the user experience when multiple users are facing the phone screen. The experimental results show that LIME achieves a mean squared error smaller than 2.4 cm in all experimented scenarios, and it incurs only a small battery-life cost when integrated into an SMS application that dynamically adjusts font size based on the detected face-screen distance.
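Ranging from the apparent distance between the eyes can be sketched with a pinhole-camera model: the face-screen distance is inversely proportional to the eye separation measured in pixels. The function below is a minimal illustration of that idea, not LIME's actual pipeline; the focal length in pixels and the assumed inter-pupillary distance are hypothetical inputs.

```python
def estimate_face_screen_distance(eye_px, focal_px, ipd_cm=6.3):
    """Estimate face-screen distance (cm) via a pinhole-camera model.

    eye_px   -- measured distance between the eye centers in the image (pixels)
    focal_px -- front-camera focal length expressed in pixels (hypothetical)
    ipd_cm   -- assumed real-world inter-pupillary distance (~6.3 cm adult mean)
    """
    if eye_px <= 0:
        raise ValueError("eye distance in pixels must be positive")
    # similar triangles: ipd_cm / distance == eye_px / focal_px
    return focal_px * ipd_cm / eye_px
```

For example, with a (hypothetical) 600-pixel focal length, eyes 126 pixels apart would place the face about 30 cm from the screen.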
Bootstrapping efforts and scalability issues hinder large-scale deployment of indoor navigation systems. We present FollowUs, an easily-deployable (bootstrap-free) and scalable indoor navigation system. In addition to robust navigation through real-time trace-following, FollowUs integrates cloud services to process and combine traces at large scale. It can also leverage optional floor plans to further enhance navigation performance. We designed and implemented FollowUs, including a mobile app and cloud services on Azure, and validated its real-world usability.
Massive video camera networks are now driving innovation in smart retail stores, road traffic monitoring, and security applications, and realizing live video analytics over these networks is an important challenge that Wi-Fi and cellular networks alone cannot solve. Spider is the first live video analytics network system to use a multi-hop, millimeter-wave (mmWave) wireless relay network to realize such a design. To mitigate mmWave link blockage, Spider integrates a separate low-latency Wi-Fi control plane with the mmWave relay data plane, allowing agile re-routing without suffering the penalty of mmWave beam searching for the new route. With the objective of maximizing video analytics accuracy (rather than simply maximizing data throughput), Spider proposes a novel, scalable flow planning algorithm that operates over hundreds of cameras to simultaneously calculate network routes, load-balance traffic, and allocate video bit rates to each camera. We implement Spider in a mmWave camera network testbed, comparing its object, text, and face detection recall performance against the state-of-the-art wireless video analytics system. Across the different video analytics tasks, Spider improves recall by 41.5%-52.9% on average. Further experiments demonstrate Spider's high scalability and graceful degradation under node and link failures, with a 57% reduction in average failure recovery time.
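The accuracy-driven bit-rate allocation can be illustrated with a simple greedy scheme: repeatedly give the next increment of shared capacity to the camera whose analytics accuracy gains the most from it. This is only a sketch of the general idea under a concave-utility assumption, not Spider's actual flow planning algorithm, and the `allocate_bitrates` function and its accuracy-curve inputs are hypothetical.

```python
def allocate_bitrates(capacity_mbps, cameras, step=1.0):
    """Greedy bitrate allocation over a shared relay bottleneck.

    capacity_mbps -- total shared capacity to divide among cameras
    cameras       -- maps camera name -> accuracy(bitrate), assumed
                     concave and non-decreasing (stand-in utility model)
    step          -- allocation granularity in Mbps
    """
    alloc = {name: 0.0 for name in cameras}
    budget = capacity_mbps
    while budget >= step:
        # pick the camera with the largest marginal accuracy gain
        best = max(cameras,
                   key=lambda n: cameras[n](alloc[n] + step) - cameras[n](alloc[n]))
        alloc[best] += step
        budget -= step
    return alloc
```

With concave accuracy curves, this marginal-gain rule naturally load-balances: once one camera's accuracy saturates, further capacity flows to the others.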
Short video streaming applications have recently gained substantial traction, but the non-linear video presentation they afford to swiping users fundamentally changes the problem of maximizing user quality of experience (QoE) in the face of the vagaries of network throughput and user swipe timing. This paper describes the design and implementation of Dashlet, a system tailored for high QoE in short video streaming applications. With the insights we glean from an in-the-wild TikTok performance study and a user study focused on swipe patterns, Dashlet proposes a novel out-of-order video chunk pre-buffering mechanism that leverages a simple, non-machine-learning model of users' swipe statistics to determine the pre-buffering order and bitrate. The net result is a system that achieves 77-99% of an oracle system's QoE and outperforms TikTok by 43.9-45.1x, while also reducing by 30% the number of bytes wasted on downloaded video that is never watched.
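The essence of out-of-order pre-buffering can be sketched as ranking (video, chunk) pairs by the probability that the user actually watches them, then downloading in that order. The model below is a deliberately simplified stand-in for Dashlet's swipe-statistics model: `stay_prob` and `reach_prob` are hypothetical inputs, not quantities the paper defines.

```python
def prebuffer_order(stay_prob, reach_prob):
    """Rank chunks across the feed by estimated watch probability.

    stay_prob  -- stay_prob[t]: P(user is still watching chunk t of the
                  on-screen video); non-increasing in t
    reach_prob -- reach_prob[v]: P(user ever swipes far enough to reach
                  video v in the feed); non-increasing in v
    Returns (video, chunk) pairs, highest watch probability first.
    """
    scored = [(reach_prob[v] * stay_prob[t], v, t)
              for v in range(len(reach_prob))
              for t in range(len(stay_prob))]
    scored.sort(key=lambda s: -s[0])
    return [(v, t) for _, v, t in scored]
```

When users swipe early (`stay_prob` falls off quickly), this ordering fetches the opening chunk of the next video before the tail chunks of the current one, which is exactly the out-of-order behavior an in-order buffer cannot express.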
Conventional wireless network designs to date focus on the endpoints and view the channel as a given. Examples include rate and power control at the transmitter, sophisticated receiver decoder designs, and high-performance forward error correction for the data itself. We instead explore whether it is possible to reconfigure the environment itself to facilitate wireless communication. In this work, we instrument the environment with a large array of inexpensive antennas (LAIA), and design algorithms to configure the LAIA elements in real time. Our system achieves a high level of programmability through rapid adjustments of an on-board phase shifter in each LAIA element. We design a channel decomposition algorithm to quickly estimate the wireless channel due to the environment alone, which in turn lets us align the phases of the LAIA elements. We implement and deploy a 36-element LAIA array in a real indoor home environment. Experiments in this setting show that, by reconfiguring the wireless environment, we can achieve a 24% TCP throughput improvement on average and a median improvement of 51.4% in Shannon capacity over baseline single-antenna links.
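The phase-alignment step can be illustrated in a few lines: once the per-element channels are known, each element's phase shifter is set so that its reflected path arrives in phase with the direct path, making all contributions add constructively at the receiver. This is a minimal single-receiver sketch assuming the channel-decomposition step has already produced complex path gains; the function name and inputs are hypothetical.

```python
import cmath

def align_phases(direct, via_element):
    """Pick per-element phase shifts that co-phase reflections with the
    direct path at the receiver.

    direct      -- complex gain of the direct Tx->Rx channel
    via_element -- complex gains of the Tx->element->Rx paths, one per
                   LAIA element (from a channel-decomposition estimate)
    Returns (phase shifts in radians, combined channel magnitude).
    """
    # rotate each reflected path onto the direct path's phase
    shifts = [cmath.phase(direct) - cmath.phase(h) for h in via_element]
    combined = direct + sum(h * cmath.exp(1j * s)
                            for h, s in zip(via_element, shifts))
    return shifts, abs(combined)
```

With all paths co-phased, channel magnitudes add linearly, so even many weak reflected paths can noticeably raise the Shannon capacity of the link.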
Autonomous vehicles are predicted to dominate the transportation industry in the foreseeable future. Safety is one of the major challenges to the early deployment of self-driving systems. To ensure safety, self-driving vehicles must sense and detect humans, other vehicles, and road infrastructure in an accurate, robust, and timely manner. However, the existing sensing techniques used by self-driving vehicles may not be absolutely reliable. In this paper, we design REITS, a system to improve the reliability of RF-based sensing modules for autonomous vehicles. We conduct theoretical analysis on possible failures of existing RF-based sensing systems. Based on the analysis, REITS adopts a multi-antenna design, which enables constructive blind beamforming to return an enhanced radar signal in the incident direction. REITS also lets the existing radar system sense identification information by switching between a constructive beamforming state and a destructive beamforming state. Preliminary results show that REITS improves the detection distance of a self-driving car radar by a factor of 3.63.
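The core of constructive blind beamforming is retrodirective phase conjugation: each antenna re-emits its received sample with conjugated phase, so the per-antenna returns add coherently back toward the incident direction without the tag ever estimating that direction. The sketch below is a simplified baseband model of this idea, with a 180-degree phase flip standing in for the destructive state used to encode identification bits; the function and its inputs are hypothetical, not REITS's implementation.

```python
def reflect(received, bit):
    """Retrodirective reflection with on-off phase keying.

    received -- complex baseband samples, one per antenna element
    bit      -- 1 for the constructive state, 0 for the destructive
                (phase-flipped) state
    Returns the combined echo toward the radar: each term
    s * conj(s) / |s| equals |s|, so the constructive state sums the
    per-antenna magnitudes coherently.
    """
    sign = 1 if bit else -1
    return sum(sign * s * s.conjugate() / abs(s) for s in received)
```

Because the constructive echo is the sum of per-antenna magnitudes rather than a random-phase sum, the returned signal strengthens with array size, which is what extends the radar's detection distance; toggling the state per symbol lets the radar read ID bits off the echo amplitude.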