The continually expanding number of electric vehicles in circulation presents challenges for end-of-life disposal, driving interest in the reuse of batteries for second-life applications. A key aspect of battery reuse is quantifying the relative battery condition, or state of health (SoH), to inform the subsequent battery application and to match batteries of similar capacity. Impedance spectroscopy has demonstrated potential for estimating state of health; however, its results are difficult to interpret reliably. This study proposes a model-free, convolutional-neural-network-based estimation scheme for the state of health of high-power lithium-ion batteries, based on a dataset of impedance spectroscopy measurements from 13 end-of-first-life Nissan Leaf 2011 battery modules. As a baseline, this is compared with our previous approach, in which parameters from a Randles equivalent circuit model (ECM), with and without dataset-specific adaptations to the ECM, were extracted from the dataset to train a deep neural network refined using Bayesian hyperparameter optimisation. It is demonstrated that, for a small dataset of 128 samples, the proposed method achieves good discrimination between high and low state of health batteries and superior prediction accuracy to the model-based approach on both RMS error (1.974 SoH%) and peak error (4.935 SoH%) metrics, without requiring dataset-specific model adaptations to improve fit quality. This is accomplished while maintaining the competitive performance of the previous model-based approach relative to previously proposed SoH estimation schemes.
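To make the model-based baseline concrete, the sketch below evaluates the impedance of a basic Randles equivalent circuit (series resistance R0, charge-transfer resistance R_ct in parallel with a double-layer capacitance C_dl, plus a semi-infinite Warburg element) over a frequency sweep. The circuit topology is the textbook Randles cell; the parameter values and frequency range here are illustrative only and are not taken from the study's dataset or its adapted ECM.

```python
import numpy as np

def randles_impedance(freq_hz, r0, r_ct, c_dl, sigma_w):
    """Complex impedance of a basic Randles cell:
    Z = R0 + [ (R_ct + Z_W) parallel C_dl ],
    where Z_W = sigma_w * (1 - j) / sqrt(omega) is a Warburg element."""
    omega = 2 * np.pi * np.asarray(freq_hz, dtype=float)
    z_w = sigma_w * (1 - 1j) / np.sqrt(omega)            # Warburg impedance
    z_par = 1 / (1 / (r_ct + z_w) + 1j * omega * c_dl)   # parallel branch
    return r0 + z_par

freqs = np.logspace(-1, 3, 50)   # 0.1 Hz .. 1 kHz sweep (illustrative)
z = randles_impedance(freqs, r0=0.02, r_ct=0.05, c_dl=0.5, sigma_w=0.003)
print(z.shape)                   # one complex impedance per frequency
```

Fitting these parameters to a measured spectrum (e.g. by least squares on the complex residual) yields the ECM features that the baseline approach feeds to its deep neural network; the model-free approach instead consumes the raw spectrum directly.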
This paper presents a Mixed-Initiative (MI) framework for addressing the problem of control authority transfer between a remote human operator and an AI agent when cooperatively controlling a mobile robot. Our Hierarchical Expert-guided Mixed-Initiative Control Switcher (HierEMICS) leverages information on the human operator's state and intent, with control-switching policies based on a criticality hierarchy. An experimental evaluation was conducted in a high-fidelity simulated disaster response and remote inspection scenario, comparing HierEMICS with a state-of-the-art Expert-guided Mixed-Initiative Control Switcher (EMICS) in the context of mobile robot navigation. Results suggest that HierEMICS reduces conflicts for control between the human and the AI agent, which is a fundamental challenge in both the MI control paradigm and the related shared-control paradigm. Additionally, we provide statistically significant evidence of improved navigational safety (i.e., fewer collisions), LOA switching efficiency, and conflict-for-control reduction.
Inertial parameters characterise an object's motion under applied forces, and can provide strong priors for planning and control of robotic actions to manipulate the object. However, these parameters are not available a priori in situations where a robot encounters new objects. In this paper, we describe and categorise the ways that a robot can identify an object's inertial parameters. We also discuss grasping and manipulation methods in which knowledge of inertial parameters is exploited in various ways. We begin with a discussion of literature investigating how humans estimate the inertial parameters of objects, to provide background and motivation for this area of robotics research. We frame our discussion of the robotics literature in terms of three categories of estimation methods, according to the amount of interaction with the object: purely visual, exploratory, and fixed-object. Each category is analysed and discussed. To demonstrate the usefulness of inertial estimation research, we describe a number of grasping and manipulation applications that make use of the inertial parameters of objects. The aim of the paper is to thoroughly review and categorise existing work in an important, but under-explored, area of robotics research, present its background and applications, and suggest future directions. Note that this paper does not examine methods of identifying the robot's own inertial parameters, but rather the identification of the inertial parameters of other objects that the robot is tasked with manipulating.
This paper addresses the problem of transfer of control authority between a robot's AI and a remote human operator when controlling a Mixed-Initiative (MI) robotic system. We propose a negotiation-theoretic method that enables the robot's AI and the human operator to cooperatively and dynamically determine (i.e., negotiate) the transfer of control authority between these two agents. An experimental study is presented in which a state-of-the-art Expert-guided Mixed-Initiative Control Switcher (EMICS) method is compared with our proposed Negotiation-Enabled Mixed-Initiative Control Switcher (NEMICS) algorithm. Results suggest that the NEMICS framework is able to successfully avoid conflicts for control, which is a fundamental challenge encountered with previous MI control methods. Comparing NEMICS with EMICS, we provide evidence of improved navigational safety (i.e., fewer collisions). Additionally, our usability study suggests that human operators perceived their interactions with NEMICS as less intrusive than with EMICS.
Student-created water quality sensors. Sensor development is a topical and highly interdisciplinary field, providing motivating scenarios for teaching a multitude of science, technology, engineering and mathematics (STEM) subjects and skill sets. This paper describes the development and implementation of high and middle school lessons, tied to the state and national standards in science, math, and technology, that integrate fundamental STEM principles while at the same time introducing students to the field of sensors and sensor networks: technologies that are increasingly important in all fields, but particularly in the world of environmental research. In this project, students build, calibrate and test a set of sensors and circuits to measure a variety of physical quantities. To build and understand their sensors, they must make use of a wide range of core knowledge of mathematics and physical science, as well as learning practical hands-on technology skills such as soldering and debugging circuits. In later modules, students interface their sensors with computers, and write programs to gather raw signals from the sensors, implement calibration curves, and perform data manipulation and data logging. In subsequent modules, students program their own communications protocols for wireless data transmission, and connect their computerised sensor stations together to form a distributed wireless sensor network.
Additional modules explore the use and implications of this technology for environmental research. The project has been highly successful in a wide range of classrooms, including pre-engineering, biology, earth science, physics, chemistry, mathematics and environmental science, for students at all academic levels, and in both rural and inner-city schools. This paper will provide an overview of the educational modules, a description of the sensors built by students, and examples of how these activities are tied to core curricula, enabling the modules to be utilised in regular classes without disrupting the semester's teaching goals, and will briefly discuss the benefits of the professional development model through which they were introduced to the teachers. We will then present the research results of the first three years of classroom implementation, during which over 60 teachers were equipped, trained on the curriculum, and implemented the modules with over 3,000 middle and high school students, along with the resulting modifications to the lessons. Results show that as students engaged in hands-on problem solving, they learned engineering, math, and physics concepts. Not only did building and testing sensors engage the students and increase their interest in STEM subjects and careers, but it also increased their understanding of fundamental concepts of electricity, improved their basic math (algebra) skills, and raised their awareness of water quality as an environmental issue.
<p>This paper is accepted for the IEEE SMC 2023. Adjusting the level of autonomy (LoA) in human-machine systems (e.g., human-robot systems) holds great potential for achieving high system performance while maintaining operator involvement. To support operators with the task of setting the proper LoA, we present a novel approach to realise a Model Predictive Controller that determines the optimal LoA for each tessellation in the robot's path plan based on the estimated performance degradation due to environmental adversities. We also report on an experimental evaluation of a mixed-initiative system in which both the operator and the Model Predictive Controller are in charge of dynamically adjusting the LoA cooperatively while performing a challenging navigational task with a mobile ground robot in a high-fidelity simulation. To this end, we conducted a user study with 15 participants comparing the performance and user experience of the model predictive system with a state-of-the-art system. The results show significant benefits of the model predictive system in terms of a reduction of conflicts for control and an improved user experience. Additionally, there are indications of benefits in terms of robot health and, consequently, performance for the model predictive system.</p>
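The core selection step described above — choosing an LoA per tessellation cell from estimated performance degradation — can be sketched as follows. This is a simplified greedy stand-in for the receding-horizon optimisation of a true Model Predictive Controller; the LoA names, degradation values, and switching penalty are all illustrative assumptions, not taken from the paper.

```python
def plan_loa(degradation, switch_cost=0.1):
    """For each tessellation cell along the path, pick the LoA with the
    lowest estimated performance degradation, charging a small penalty
    for switching LoA between consecutive cells (to avoid thrashing)."""
    plan, prev = [], None
    for cell in degradation:  # cell maps LoA name -> estimated degradation
        best = min(
            cell,
            key=lambda loa: cell[loa]
            + (switch_cost if prev is not None and loa != prev else 0),
        )
        plan.append(best)
        prev = best
    return plan

# Hypothetical per-cell degradation estimates for a three-cell path.
path = [
    {"autonomy": 0.1, "teleop": 0.4},   # open corridor: autonomy degrades least
    {"autonomy": 0.8, "teleop": 0.3},   # cluttered rubble: teleop preferred
    {"autonomy": 0.2, "teleop": 0.35},
]
print(plan_loa(path))   # ['autonomy', 'teleop', 'autonomy']
```

A full MPC formulation would optimise over the whole horizon jointly rather than greedily per cell, but the cost structure (degradation plus switching penalty) is the same idea.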
In this work we introduce the concept of Robot Vitals and propose a framework for systematically quantifying the performance degradation experienced by a robot. A performance indicator or parameter can be called a Robot Vital if it can be consistently correlated with a robot's failure, faulty behaviour or malfunction. Robot Health can be quantified as the entropy of observing a set of vitals. Robot vitals and robot health are intuitive ways to quantify a robot's ability to function autonomously. Robots programmed with multiple levels of autonomy (LOA) do not scale well when a human is in charge of regulating the LOAs. Artificial agents can use robot vitals to assist operators with LOA switches that fix field-repairable, non-terminal performance degradation in mobile robots. Robot health can also be used to aid a tele-operator's judgement and promote explainability (e.g. via visual cues), thereby reducing operator workload while promoting trust and engagement with the system. In multi-robot systems, agents can use robot health to prioritise the robots most in need of tele-operator attention. The vitals proposed in this paper are: rate of change of signal strength; sliding-window average of the difference between expected and actual robot velocity; robot acceleration; rate of increase in area coverage; and localisation error.
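The entropy underlying the health measure is ordinary Shannon entropy over a distribution of vital observations. The sketch below computes it for two illustrative distributions; how the distribution over vitals is actually constructed (and how entropy is mapped to a health score) follows the paper's framework, so the probabilities here are purely hypothetical.

```python
import math

def entropy(probabilities):
    """Shannon entropy H(p) = -sum(p_i * log2 p_i) over a distribution
    of observed vital states (zero-probability states contribute 0)."""
    return -sum(p * math.log2(p) for p in probabilities if p > 0)

# Hypothetical distributions over five vitals' observed states:
spread = [0.2, 0.2, 0.2, 0.2, 0.2]              # no single vital dominates
peaked = [0.9, 0.025, 0.025, 0.025, 0.025]      # one vital dominates
print(entropy(spread), entropy(peaked))          # higher vs lower entropy
```

The useful property for this framework is that entropy distinguishes a concentrated pattern of vital observations from a diffuse one, giving a single scalar that an agent can threshold or rank robots by.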
This paper addresses the problem of RGBD object recognition in real-world applications, where large amounts of annotated training data are typically unavailable. To overcome this problem, we propose a novel, weakly-supervised learning architecture (DCNN-GPC) which combines parametric models (a pair of Deep Convolutional Neural Networks (DCNN) for the RGB and D modalities) with non-parametric models (Gaussian Process Classification). Our system is initially trained using a small amount of labelled data, and then automatically propagates labels to large-scale unlabelled data. We first run 3D-based objectness detection on RGBD videos to acquire many unlabelled object proposals, and then employ DCNN-GPC to label them. As a result, our multi-modal DCNN can be trained end-to-end using only a small amount of human annotation. Finally, our 3D-based objectness detection and multi-modal DCNN are integrated into a real-time detection and recognition pipeline. In our approach, bounding-box annotations are not required and boundary-aware detection is achieved. We also propose a novel way to pretrain a DCNN for the depth modality, by training on virtual depth images projected from CAD models. We pretrain our multi-modal DCNN on public 3D datasets, achieving performance comparable to state-of-the-art methods on the Washington RGB-D Dataset. We then finetune the network by further training on a small amount of annotated data from our novel dataset of industrial objects (nuclear waste simulants). Our weakly supervised approach has proven highly effective in solving a novel RGBD object recognition application that lacks human annotations.
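The label-propagation step described above — scoring unlabelled proposals and keeping only confident ones as pseudo-labels for further DCNN training — can be sketched minimally as below. In the paper the confidence comes from Gaussian Process Classification; this sketch substitutes a plain confidence threshold on hypothetical class-probability outputs, so the function name, threshold, and values are all illustrative assumptions.

```python
import numpy as np

def propagate_labels(probs, threshold=0.9):
    """Keep only unlabelled proposals whose maximum predicted class
    probability exceeds `threshold`; return their indices and the
    corresponding pseudo-labels (argmax class)."""
    probs = np.asarray(probs)
    conf = probs.max(axis=1)          # confidence = top class probability
    keep = conf >= threshold
    return np.flatnonzero(keep), probs.argmax(axis=1)[keep]

# Illustrative class-probability outputs for four unlabelled proposals.
probs = [[0.95, 0.05], [0.55, 0.45], [0.10, 0.90], [0.70, 0.30]]
idx, labels = propagate_labels(probs)
print(idx, labels)   # proposals 0 and 2 pass, with classes 0 and 1
```

The pseudo-labelled proposals would then be appended to the small human-annotated set for the next round of end-to-end DCNN training, while low-confidence proposals are left unlabelled.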