Cyber-physical systems (CPS) constitute a promising paradigm that fits a wide range of applications. Monitoring based on the Internet of Things (IoT) has become a research area that poses new challenges in extracting valuable information. This paper proposes a deep learning sound classification system for execution over CPS. The system is based on convolutional neural networks (CNNs) and focuses on the different types of vocalization of two anuran species. CNNs, in conjunction with mel-spectrogram representations of sound, are shown to be an adequate tool for the classification of environmental sounds. The classification results obtained are excellent (97.53% overall accuracy) and make the system very promising for classifying other biological acoustic targets, as well as for analyzing biodiversity indices in the natural environment. The paper concludes by observing that the execution of this type of CNN, involving low-cost and reduced computing resources, is feasible for monitoring extensive natural areas. The use of CPS enables flexible and dynamic configuration and deployment of new CNN updates over remote IoT nodes.
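As a concrete illustration of the front end described above, the following sketch computes a log mel-spectrogram with NumPy alone. The HTK-style mel formula, FFT size, hop length and band count are illustrative assumptions, not the paper's actual settings; the resulting 2-D array is the kind of input a CNN classifier would consume.

```python
import numpy as np

def mel_filterbank(n_mels, n_fft, sr):
    # Triangular mel-scale filterbank (HTK-style mel formula, assumed here)
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(1, n_mels + 1):
        l, c, r = bins[i - 1], bins[i], bins[i + 1]
        for k in range(l, c):
            fb[i - 1, k] = (k - l) / max(c - l, 1)   # rising slope
        for k in range(c, r):
            fb[i - 1, k] = (r - k) / max(r - c, 1)   # falling slope
    return fb

def mel_spectrogram(x, sr=22050, n_fft=512, hop=256, n_mels=40):
    # Frame the signal, window, FFT, project power spectra onto mel bands
    window = np.hanning(n_fft)
    frames = [x[i:i + n_fft] * window for i in range(0, len(x) - n_fft, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mel = power @ mel_filterbank(n_mels, n_fft, sr).T
    return np.log(mel + 1e-10).T   # (n_mels, n_frames), log scale for the CNN

# A synthetic one-second "call" standing in for a recorded vocalization
sr = 22050
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 2000 * t)
S = mel_spectrogram(x, sr)
print(S.shape)   # (n_mels, n_frames)
```

The log compression at the end mimics the dynamic-range reduction that makes mel-spectrograms behave like images for a CNN.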
The application of machine learning techniques to sound signals requires the prior characterization of those signals. In many cases, they are described using cepstral coefficients that represent the sound spectra. In this paper, the performance of two integral transforms, the Discrete Fourier Transform (DFT) and the Discrete Cosine Transform (DCT), in obtaining cepstral coefficients is compared in the context of processing anuran calls. Due to the symmetry of sound spectra, it is shown that the DCT clearly outperforms the DFT, decreasing the error in representing the spectrum by more than 30%. Additionally, it is demonstrated that DCT-based cepstral coefficients are less correlated than their DFT-based counterparts, which is a significant advantage if these features are later used in classification algorithms. Since the superiority of the DCT rests on the symmetry of sound spectra and not on any intrinsic advantage of the algorithm, the conclusions of this research can be extrapolated to any sound signal.
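The core effect described above, that a truncated set of DCT-based cepstral coefficients represents a log-spectrum better than a DFT-based set of the same size, can be reproduced with a short sketch. The formant-like log-spectrum envelope below is synthetic and merely stands in for a real call's spectrum; its two ends take different values, which is the typical situation that penalizes the DFT's periodic extension.

```python
import numpy as np

def dct_basis(N):
    # Orthonormal DCT-II basis matrix: row k is cos(pi * (n + 0.5) * k / N)
    n = np.arange(N)
    M = np.cos(np.pi * (n[None, :] + 0.5) * n[:, None] / N)
    M[0] *= np.sqrt(1.0 / N)
    M[1:] *= np.sqrt(2.0 / N)
    return M

# Synthetic, smooth log-spectrum envelope (illustrative stand-in for a call)
f = np.linspace(0.0, 4000.0, 129)
log_spec = (5.0 * np.exp(-((f - 800.0) / 500.0) ** 2)
            + 3.0 * np.exp(-((f - 2200.0) / 700.0) ** 2)
            - f / 1000.0)
N, K = len(log_spec), 13   # a 13-number "cepstral" budget, as is common for MFCCs

# DCT-based cepstrum: keep the first K coefficients, invert via the transpose
M = dct_basis(N)
c = M @ log_spec
rec_dct = M.T @ np.where(np.arange(N) < K, c, 0.0)

# DFT-based cepstrum with the same budget of K real numbers
# (~K/2 complex coefficients plus their conjugate partners for a real inverse)
C = np.fft.fft(log_spec)
keep = np.zeros(N, dtype=complex)
keep[: K // 2 + 1] = C[: K // 2 + 1]
keep[-(K // 2):] = C[-(K // 2):]
rec_dft = np.real(np.fft.ifft(keep))

err_dct = np.linalg.norm(log_spec - rec_dct) / np.linalg.norm(log_spec)
err_dft = np.linalg.norm(log_spec - rec_dft) / np.linalg.norm(log_spec)
# The DCT's implicit even extension avoids the boundary jump that the DFT's
# periodic extension creates, so its truncation error is lower
print(err_dct < err_dft)
```

The comparison is fair in the sense that both transforms are given the same budget of real numbers; the DCT wins because its implicit even-symmetric extension of the spectrum has no discontinuity at the boundaries.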
Environmental audio monitoring is an area of great interest for biologists all over the world. Accordingly, several audio monitoring systems have been proposed in the literature, which can be classified into two different approaches: the acquisition and compression of all audio patterns in order to send them as raw data to a main server; or specific recognition systems based on audio patterns. The first approach has the drawback of the large amount of information that must be stored on a main server; moreover, analyzing this information requires considerable effort. The second approach has the drawback of poor scalability when new patterns need to be detected. To overcome these limitations, this paper proposes an environmental Wireless Acoustic Sensor Network architecture based on generic descriptors drawn from the MPEG-7 standard. These descriptors are shown to be suitable for recognizing different patterns, allowing high scalability. The proposed parameters have been tested in recognizing different behaviors of two anuran species that live in Spanish natural parks, the toads Epidalea calamita and Alytes obstetricans, demonstrating high classification performance.
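To give a flavor of descriptor-based recognition, the sketch below computes simplified analogues of MPEG-7 low-level spectral descriptors (spectral centroid, spread and flatness). These are illustrative approximations, not the normative MPEG-7 extraction procedures, but they show how a few generic numbers can separate a narrowband call from broadband background.

```python
import numpy as np

def spectrum_descriptors(x, sr, n_fft=512):
    # Simplified analogues of MPEG-7 low-level descriptors
    # (AudioSpectrumCentroid / AudioSpectrumSpread / AudioSpectrumFlatness);
    # illustrative only, not the normative MPEG-7 algorithms
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)), n_fft)) ** 2
    freqs = np.fft.rfftfreq(n_fft, 1.0 / sr)
    p = spec / spec.sum()
    centroid = np.sum(freqs * p)                          # spectral center of mass
    spread = np.sqrt(np.sum(((freqs - centroid) ** 2) * p))
    flatness = np.exp(np.mean(np.log(spec + 1e-12))) / (np.mean(spec) + 1e-12)
    return centroid, spread, flatness

sr, n = 8000, 512
tone = np.sin(2 * np.pi * 1000 * np.arange(n) / sr)   # narrowband "call"
rng = np.random.default_rng(0)
noise = rng.standard_normal(n)                        # broadband background
c1, s1, f1 = spectrum_descriptors(tone, sr)
c2, s2, f2 = spectrum_descriptors(noise, sr)
# The tone concentrates energy near 1 kHz (low spread, low flatness), whereas
# noise spreads energy across the band (high spread, flatness near 1)
print(round(c1), s1 < s2, f1 < f2)
```

Because such descriptors are generic rather than pattern-specific, new target sounds can be added by retraining the classifier on the same features, which is the scalability argument made above.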
Public utility services (gas, water and electricity) have traditionally been automated with several technologies. The main functions that these technologies must support are automated meter reading (AMR) and supervisory control and data acquisition (SCADA). Most meter manufacturers provide devices with Bluetooth® or ZigBee™ communication features. This characteristic has allowed the inclusion of wireless sensor networks (WSNs) in these systems. Once WSNs appear in such a scenario, real-time AMR and SCADA applications can be developed at low cost. Data must be routed from every meter to a base station. This paper describes the use of a novel QoS-driven routing algorithm, named SIR (Sensor Intelligence Routing), over a network of meters. An artificial neural network is introduced in every node to manage the routes that data must follow. The resulting system is named an intelligent wireless sensor network (IWSN).
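In the spirit of SIR, the sketch below shows a single artificial neuron in a node scoring candidate next hops by QoS metrics. The feature set, hand-set weights and neighbor names are all hypothetical, since the actual SIR inputs and training procedure are not detailed here; the point is only the per-node shape of the decision.

```python
import numpy as np

# Hand-set weights penalizing latency and error rate, rewarding residual energy
# (assumption: illustrative values; the real SIR algorithm learns its own)
w = np.array([-1.0, -5.0, 1.0])

def qos_score(link):
    # One artificial neuron per node scoring a candidate next hop;
    # inputs are normalized [latency, error_rate, residual_energy]
    return np.tanh(link @ w)

# Hypothetical neighbor table of a meter node
neighbors = {
    "meter_17": np.array([0.10, 0.01, 0.90]),  # fast, reliable, fresh battery
    "meter_42": np.array([0.60, 0.25, 0.30]),  # slow, lossy, depleted
}
next_hop = max(neighbors, key=lambda n: qos_score(neighbors[n]))
print(next_hop)   # the link with the better QoS profile wins
```

Keeping the scoring local to each node is what lets routes adapt hop by hop as link quality and battery levels change across the metering network.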
Several biological research studies have shown that the number of individuals of certain anuran species in a specific geographical region, and the evolution of this number over time, can be used as an indicator of climate change. To detect the presence of anurans, Wireless Sensor Networks (WSNs) are usually deployed with the aim of obtaining bio-acoustic information at numerous locations. However, identifying the anuran species in a huge number of recordings is an overwhelming task that has to be undertaken by expert and intelligent systems. Previous studies of this issue have proposed several classification techniques with a common approach: they all take into account the sequential character of sounds by considering syllables or other kinds of vocal segments. In noisy sounds, as is usually the case in recordings made in natural habitats, segmentation of the signal is not a straightforward task and may cause low classification accuracy. To overcome this problem, a new non-sequential approach is proposed in this paper. It is based on considering very small pieces of sound (frames), each of which is then classified without considering preceding or subsequent information. Up to nine frame-based classifiers are explored in this paper, and their performance is compared to that of the most commonly used sequential classifier: the Hidden Markov Model (HMM). Additionally, although many choices have been described for featuring the frames, the Mel Frequency Cepstral Coefficients (MFCCs) have probably become the most common. In this work, an alternative methodology is suggested: the use of a set of MPEG-7 parameters, which offers a normalized solution with much greater semantic content. The experimental results show that the proposed method clearly outperforms the HMM, thereby demonstrating that the non-sequential classification of anuran sounds is feasible.
Among the algorithms tested, the decision-tree classifier showed the best performance, with an overall classification success rate of 87.30%, an especially striking result considering that the analyzed sounds were affected by a decidedly noisy background.
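The non-sequential idea above, classifying each frame independently and then aggregating, can be sketched as follows. The two-feature frame description and the nearest-centroid classifier are illustrative stand-ins for the paper's MPEG-7 feature set and its nine classifiers; what matters is that no frame sees its neighbors.

```python
import numpy as np

def frame_features(x, sr, frame=256, hop=128):
    # Feature each short frame on its own: spectral centroid plus log energy
    feats = []
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    for i in range(0, len(x) - frame, hop):
        p = np.abs(np.fft.rfft(x[i:i + frame] * np.hanning(frame))) ** 2
        p = p / p.sum()
        centroid = np.sum(freqs * p)                       # where the energy sits
        energy = np.log(np.sum(x[i:i + frame] ** 2) + 1e-12)
        feats.append([centroid, energy])
    return np.array(feats)

def classify_clip(x, sr, centroids):
    # Independent per-frame decisions (nearest centroid), then a majority vote
    F = frame_features(x, sr)
    d = np.linalg.norm(F[:, None, :] - centroids[None, :, :], axis=2)
    votes = np.argmin(d, axis=1)
    return np.bincount(votes, minlength=len(centroids)).argmax()

# Two hypothetical species with differently pitched calls (synthetic signals)
sr = 8000
t = np.arange(sr) / sr
low_call = np.sin(2 * np.pi * 600 * t)     # "species A": low-pitched
high_call = np.sin(2 * np.pi * 2400 * t)   # "species B": high-pitched
centroids = np.array([frame_features(low_call, sr).mean(axis=0),
                      frame_features(high_call, sr).mean(axis=0)])
print(classify_clip(high_call, sr, centroids))   # → 1
```

Because each frame is classified in isolation, a burst of noise corrupts only the frames it overlaps, rather than derailing a segmentation stage as it can in syllable-based, sequential pipelines.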
One hundred development and certification ground tests must be performed before the A400M aircraft's first flight. Ground testing is an essential but also expensive process performed by the Airbus Defense and Space (Airbus DS) Company in the manufacturing cycle of an aircraft. This process involves the repetition of a large group of tests due to failures (known in the terminology of the company as “incidences”) in the testing. One or more incidences in a test imply that it will be repeated, which requires a significant investment of resources and time by the company's engineers. In this article, an innovative decision support environment to manage the ground testing sequence is presented and a data mining analysis of the testing time and the trend of test incidences is included. The core application, developed in R language, is supported by an easy-to-use customer web application using the Shiny environment. The environment was used to analyze real-world cases of tests to be performed by Airbus DS, producing a useful decision tool for company experts to evaluate the ground testing sequence. It is currently in the last stage of testing by the Airbus DS ground test staff using real-world data.