Abstract. This paper presents the research and development of a volumetric 3D visibility analysis application for assessing public space with regard to safety and security. While most of the academic literature on space visibility concentrates on developing viewsheds from a specific viewpoint, this work integrates dynamic scenarios and real-time calculation of space visibility. Voxel-based spatial representation is used as the basis for the 3D visibility analysis. Different space configurations are tested, illustrating the robustness and usefulness of the developed application. To measure and evaluate different safety zones, an innovative approach is tested in which the number of times each voxel is observed is recorded and then classified into zones. In this way, an accurate calculation of the observed space can be performed, revealing which parts of an area of interest are more or less observed. The research has shown great potential for first responders, researchers, architects and safety experts. The developed application can be used as a simulation tool to assess the safety of different urban environments and identify potentially vulnerable locations.
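The per-voxel observation counting described above can be illustrated with a minimal sketch. All names, grid dimensions, and zone thresholds here are assumptions for illustration, not the paper's implementation: each viewpoint contributes a visibility mask, the counter is incremented for visible voxels, and voxels are then binned into zones by how often they were seen.

```python
import numpy as np

# Hypothetical voxel grid: each cell counts how many times it was observed.
# Shape and thresholds are illustrative assumptions, not the paper's values.
grid = np.zeros((20, 20, 10), dtype=int)

def record_visibility(counter, visible_mask):
    """Increment the observation counter for voxels visible from one viewpoint."""
    counter[visible_mask] += 1

# Simulate three viewpoints, each with a (random, stand-in) visibility mask.
rng = np.random.default_rng(0)
for _ in range(3):
    mask = rng.random(grid.shape) > 0.5
    record_visibility(grid, mask)

# Classify voxels into safety zones by observation count:
# 0 = unobserved, 1 = observed 1-2 times, 2 = observed 3+ times.
zones = np.digitize(grid, bins=[1, 3])
```

In a real system the mask would come from 3D line-of-sight (ray casting) rather than random draws; the counting-and-binning step is the part sketched here.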
Abstract. In recent years, the concept of a Digital Twin (DT) for cities has moved increasingly to the core of most smart city initiatives, as it has been identified as a critical tool for tackling the challenges of this century. A robust city modelling framework is essential if local, state and national governments are to move towards sustainable built environments and work together across complex multi-sectoral problems to drive impacts that improve urban liveability and climate adaptability. Furthermore, the level of collaboration and interoperability required to address these challenges cannot be achieved without proper standardisation of DT components. The aim of this project is to develop a demonstration DT that integrates existing data using a standardised 3D format based on CityGML and that embeds analytics, such as sun exposure and tree coverage, to assess liveability within a 3D city modelling framework. Common urban features such as buildings, roads, railways, vegetation and water bodies are also processed and incorporated. Additionally, IoT sensors are integrated into the model, and all processes are performed using open-source tools to improve accessibility and repeatability. Details of the workflow, including the storage of the city features in a 3D City Database (3DCityDB), the 3D upgrading of urban features commonly available as 2D data, as well as a few use cases, are illustrated and discussed in this paper.
When dealing with complex and multi-faceted urban design challenges, the sheer weight of the information available can make discerning the 'bigger picture' difficult. This paper suggests that intelligent tools and mechanisms are required to assist in capturing, comprehending and communicating solutions to such problems, while keeping the consensus on aims and targets in mind. To make knowledgeable decisions, access to the most relevant sources of information is needed: quality intelligence requires a quality foundation of data. This paper outlines some fundamentals of how best to structure urban components and then examines how these can be applied to improve the design and planning of urban precincts. In conclusion, some next steps are proposed for the development of these tools and their application within an urban context.
The United Kingdom (UK) has placed itself on a transition towards a low-carbon economy and society, through the imposition of a goal of reducing its 'greenhouse' gas emissions by 80% by 2050. A set of three low-carbon 'Transition Pathways' was developed to examine the influence of different governance arrangements on achieving a low-carbon future. They focus on the power sector, including the potential for increasing use of low-carbon electricity for heating and transport. These transition pathways were developed by starting from narrative storylines regarding different governance framings, drawing on interviews and workshops with stakeholders and analysis of historical analogies. Here the quantified pathways are compared and contrasted with the main scenarios developed in the UK Government's 2011 Carbon Plan. This can aid an informed debate on the technical feasibility and social acceptability of realising transition pathways for decarbonising the UK energy sector by 2050. The contribution of these pathways to meeting Britain's energy and carbon reduction goals is therefore evaluated on a 'whole systems' basis, including the implications of 'upstream emissions' arising from the 'fuel supply chain' ahead of the power generators themselves.
The Urban Politics of Squatters’ Movements is an edited collection of essays presenting a series of detailed case studies and analysis of squatting activity across several European countries using ...
In the US, over 20% of collisions between automobiles are rear-end collisions. For transit buses, whose exposure and pattern of movement may be quite different, the figure is significantly higher, at over 35%. If buses are more susceptible to this type of collision than automobiles, one wonders why. In this report we describe a property of human vision that may relate to the answer, together with a theory as to the link. Some formulations of the visual task inherent in avoiding such collisions suggest that an observer may use the angular width of an object ahead, divided by its time rate of change, to calculate a time to collision (TTC). Consider the initial act of perceiving closure with a vehicle ahead. If the quotient of angular width to its time derivative is what is used to initiate braking, then it must be computed accurately. We have discovered that the reaction time (RT) to a step increase in angular width is longer the larger the object. The stimulus contained only the 2D cue to closure, and depth of field was unchanged at 1.5 m. Objects were light gray squares set in a dark gray surround. The size change started at a random time after a ready signal and was effected gradually, averaging close to a pixel per millisecond in 13.3 ms steps. Six normal observers were employed. Average reaction times (±2 s.e.) were 284.5 (1.65) ms for 3.7 deg, 294.2 (1.69) ms for 5.3 deg, and 296.5 (1.80) ms for 7.6 deg. The effect is small, about 10% of the estimated perceptual delay, but it is significant. Suppose that the computation of the time derivative of width, dw/dt, is thus distorted by an elevated dt. This lowers dw/dt and thus raises TTC. This finding accords with a conjecture by Leibowitz to explain collisions in which trains strike vehicles: he suggested that larger objects appear to move more slowly, degrading the accuracy of the required TTC computation. His conjecture may also apply to collisions between automobiles.
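The bias mechanism described above can be made concrete with a small sketch of the TTC computation, TTC = w / (dw/dt). The scaling of the perceived expansion rate by the ratio of reaction times is an illustrative assumption of ours, not a model fitted in the study; only the RT values themselves come from the abstract.

```python
# TTC is estimated as angular width w divided by its rate of change dw/dt.
def time_to_collision(w_deg, dw_dt_deg_per_s):
    """Estimated time to collision from angular width and expansion rate."""
    return w_deg / dw_dt_deg_per_s

# Illustrative scenario (assumed values): an object 5.3 deg wide
# expanding at 1.0 deg/s gives a true TTC of 5.3 s.
true_ttc = time_to_collision(5.3, 1.0)

# If a larger object inflates the effective perceptual delay dt, the
# perceived dw/dt is lowered. Here we (speculatively) scale it by the
# ratio of the reported RTs for the smallest vs. largest targets.
perceived_dw_dt = 1.0 * (284.5 / 296.5)
biased_ttc = time_to_collision(5.3, perceived_dw_dt)

# The lowered dw/dt raises the estimated TTC, i.e. the collision
# appears farther off in time than it really is.
```

Under this toy scaling the biased TTC exceeds the true TTC by a few percent, which is the direction of error consistent with delayed braking.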