Targeted, small-scale nature-based interventions are an approach commonly adopted by urban developers. Public acceptance of their implementation can be improved through participation, with emphasis on residents and shopkeepers located close to the areas of interest. In this work, we propose a methodology that combines 3D technology based on open data sources, user-generated content, 3D software and game engines, both to minimize the time and cost of the whole planning process and to enhance citizen participation. The proposed schemes are demonstrated in Piraeus (Greece) and Gladsaxe (Denmark). The core findings can be summarized as follows: (a) time and cost are minimized by using online databases, (b) the gamification of the planning process enhances decision making and (c) the interactivity provided by the game engine encouraged the participation of non-experts in the planning process (co-creation and co-evaluation), which decentralizes and democratizes the final planning solution.
Hyperspectral data classification is one of the fundamental problems in remote sensing. Several algorithms based on supervised machine learning have been proposed to address it. The performance of these algorithms, however, is inherently dependent on the amount and quality of annotated data. Due to recent advances in hyperspectral imaging and autonomous (unmanned) aerial vehicles, collecting new hyperspectral data is an easy task. Annotating those data, however, is a tedious, time-consuming and costly task requiring the in-situ presence of human experts. One way to relax the requirement for a large amount of annotated data is the shift to semi-supervised learning combined with highly sample-efficient tensor-based neural networks. This study provides a comprehensive experimental analysis of the performance of a variety of graph-based semi-supervised learning techniques combined with tensor-based neural network embeddings for the problem of hyperspectral data classification. Experimental results suggest that the combination of tensor-based neural network embeddings with graph-based semi-supervised learning can significantly improve classification results while minimizing human annotation effort.
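The core idea of graph-based semi-supervised learning — propagating a handful of labels through a neighborhood graph built over learned embeddings — can be illustrated with a minimal sketch. The synthetic "embeddings" below are a stand-in for the tensor-based network outputs, not the study's actual data or pipeline; only one sample per class is annotated.

```python
import numpy as np
from sklearn.semi_supervised import LabelSpreading

rng = np.random.default_rng(0)
# Stand-in for tensor-based neural network embeddings of hyperspectral
# pixels: two well-separated clusters in an 8-D embedding space.
emb = np.vstack([rng.normal(0.0, 0.3, (50, 8)),
                 rng.normal(3.0, 0.3, (50, 8))])
labels = np.full(100, -1)        # -1 marks unlabeled samples
labels[0], labels[50] = 0, 1     # one annotated sample per class

# Graph-based label propagation over a k-NN graph of the embeddings.
model = LabelSpreading(kernel="knn", n_neighbors=7)
model.fit(emb, labels)
pred = model.transduction_       # inferred labels for every sample
print((pred[:50] == 0).mean(), (pred[50:] == 1).mean())
```

With embeddings this well separated, the two seed labels spread across the k-NN graph and recover the full labeling, which is the mechanism that lets the annotation budget stay small.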
Diabetic foot ulcers (DFUs) constitute a serious complication for people with diabetes. The care of DFU patients can be substantially improved through self-management, in order to achieve early diagnosis, ulcer prevention, and management of complications in existing ulcers. In this paper, we investigate two categories of image-to-image translation techniques (ItITT) that can support decision making and monitoring of diabetic foot ulcers: noise reduction and super-resolution. In the former case, we investigated the noise-removal capabilities of convolutional neural network stacked autoencoders (CNN-SAE). The CNN-SAE was tested on RGB images corrupted with Gaussian noise. The latter scenario involves the deployment of four deep learning super-resolution models. The performance of all models, for both scenarios, was evaluated in terms of execution time and perceived quality. Results indicate that the applied techniques constitute a viable and easy-to-implement alternative that could be used by any system designed for DFU monitoring.
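The evaluation setup described above — corrupting RGB images with Gaussian noise and scoring restored quality — can be sketched in a few lines. This is an illustrative reproduction of the noise-induction and quality-measurement steps only (using PSNR as one common quality metric); it does not include the CNN-SAE itself, and the toy image is hypothetical.

```python
import numpy as np

def add_gaussian_noise(img, sigma=25.0, seed=0):
    """Corrupt an 8-bit RGB image with additive Gaussian noise."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

def psnr(ref, test):
    """Peak signal-to-noise ratio in dB for 8-bit images."""
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

img = np.full((64, 64, 3), 128, dtype=np.uint8)   # toy flat-grey image
noisy = add_gaussian_noise(img)
score = psnr(img, noisy)
print(round(score, 1))
```

A denoising model would then be judged by how far it raises the PSNR of `noisy` back toward the clean reference, alongside the execution-time measurements mentioned above.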
One of the main characteristics of the Internet era we are living in is the free and online availability of a huge amount of data. This data is of varied reliability and accuracy and exists in various forms and formats. Often, it is cross-referenced and linked to other data, forming a nexus of text, images, animation and audio enabled by hypertext and, recently, by the Web 3.0 standard. Search engines can search text for keywords using algorithms of varied intelligence and with limited success. Searching images is a much more complex and computationally intensive task, but some initial steps have already been made in this direction, mainly in face recognition. This paper describes our proposed pipeline for integrating data available on Internet repositories and social media, such as photographs, animation and text, to produce 3D models of archaeological monuments, as well as enriching multimedia of cultural/archaeological interest with metadata and harvesting the end products to EUROPEANA. Our main goal is to enable historians, architects, archaeologists, urban planners and affiliated professionals to reconstruct views of historical monuments from the thousands of images floating around the web.
The urban environment appears to affect citizens' health. The implementation of Blue-Green Solutions (BGS) in urban areas has been used to promote public health and citizens' well-being. The aim of this paper is to present the development of an mHealth app for monitoring the health status of patients and citizens in areas where BGS will be applied. The "HEART by BioAssist" application could be used both as a tool for collecting health and other data and as an "intelligent assistant" that monitors and promotes patients' physical activity in areas with Blue-Green Solutions.
The scarcity of green spaces in urban environments constitutes a critical challenge, with multiple adverse effects on the health and well-being of citizens. Small-scale interventions, e.g. pocket parks, are a viable solution, but come with multiple constraints involving the design and implementation over a specific area. In this study, we harness the capabilities of generative AI for multi-scale intervention planning, focusing on nature-based solutions (NBS). By leveraging image-to-image and image inpainting algorithms, we propose a methodology to address the green space deficit in urban areas. Focusing on two alleys in Thessaloniki where greenery is lacking, we demonstrate the efficacy of our approach in visualizing NBS interventions. Our findings underscore the transformative potential of emerging technologies in shaping the future of urban intervention planning.
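Image-inpainting models of the kind used above typically take two inputs: the street photograph and a binary mask marking the region to be regenerated (here, re-imagined as greenery). The sketch below only illustrates this input preparation with a placeholder image and illustrative coordinates; it is not the actual Thessaloniki data or the generative model itself.

```python
import numpy as np

# Placeholder street photo and an inpainting mask of the same spatial size.
h, w = 128, 128
photo = np.zeros((h, w, 3), dtype=np.uint8)   # stand-in for an alley photo
mask = np.zeros((h, w), dtype=np.uint8)

# Mark the lower band of the frame (illustrative pavement strip) as the
# region the inpainting model should replace with planted greenery.
mask[80:, :] = 255

# Inpainting pipelines regenerate only masked pixels; here ~38% of the
# image would be handed to the generative model.
masked_fraction = (mask > 0).mean()
print(round(masked_fraction, 2))
```

In practice the photo and a prompt describing the desired NBS (e.g. trees, planters) would be passed together with this mask to the image-inpainting model, so the surrounding facades stay untouched while only the marked strip is redrawn.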
Accurate prediction of the seawater intrusion extent is necessary for many applications, such as groundwater management or protection of coastal aquifers from water quality deterioration. However, most applications require a large number of simulations, usually at the expense of prediction accuracy. In this study, the Gaussian process regression method is investigated as a potential surrogate model for the computationally expensive variable density model. Gaussian process regression is a nonparametric kernel-based probabilistic model able to handle complex relations between input and output. In this study, the extent of seawater intrusion is represented by the location of the 0.5 kg/m³ iso-chlore at the bottom of the aquifer (seawater intrusion toe). The initial position of the toe, expressed as the distance of the specific line from a number of observation points across the coastline, along with the pumping rates are the surrogate model inputs, whereas the final position of the toe constitutes the output variable set. The training sample of the surrogate model consists of 4000 variable density simulations, which differ not only in the pumping rate pattern but also in the initial concentration distribution. The Latin hypercube sampling method is used to obtain the pumping rate patterns. For comparison purposes, a number of widely used regression methods are employed, specifically regression trees and Support Vector Machine regression (linear and nonlinear). A Bayesian optimization method is applied to all the regressors to maximize their efficiency in the prediction of seawater intrusion. The final results indicate that the Gaussian process regression method, albeit more time-consuming, proved to be the most efficient in terms of the mean absolute error (MAE), the root mean square error (RMSE), and the coefficient of determination (R²).
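The surrogate workflow described above — Latin hypercube sampling of pumping-rate patterns, a set of expensive simulator runs, then a Gaussian process regressor trained to emulate them — can be sketched as follows. The closed-form "toe position" function below is a hypothetical stand-in for the variable density model, and the well count, bounds and sample sizes are illustrative, not the study's 4000-run setup.

```python
import numpy as np
from scipy.stats import qmc
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.metrics import mean_absolute_error

# Latin hypercube sample of pumping-rate patterns (3 wells, toy bounds).
sampler = qmc.LatinHypercube(d=3, seed=0)
rates = qmc.scale(sampler.random(200), [0, 0, 0], [100, 100, 100])

# Hypothetical stand-in for the expensive variable density simulation:
# toe position (m) as a smooth function of the pumping rates.
toe = (500 + 2.0 * rates[:, 0] + 1.5 * rates[:, 1]
       + 0.5 * rates[:, 2] + 0.01 * rates[:, 0] * rates[:, 1])

# Train the GPR surrogate on 150 "simulations", hold out 50 for testing.
train, test = slice(0, 150), slice(150, 200)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=50.0),
                               alpha=1e-6, normalize_y=True)
gpr.fit(rates[train], toe[train])
mae = mean_absolute_error(toe[test], gpr.predict(rates[test]))
print(mae)
```

Once trained, each surrogate prediction is near-instant, which is what makes management applications requiring thousands of evaluations tractable; the MAE/RMSE/R² comparison against regression trees and SVM regression follows the same train/hold-out pattern.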