While the global impact of plastic waste is increasingly concerning, the reuse of reclaimed materials in the built environment remains largely unexplored. This paper presents research into the reuse of plastic in architecture by means of computational design and robotic fabrication. Design possibilities using reclaimed plastic artefacts were explored by testing their structural stability and robotically modifying them to create a pavilion. While the design conceptualization started from the reclaimed material and an analysis of its potential, the digital workflow involved generative and performance-driven design, structural optimization, and geometry generation for robotic fabrication.
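As a hedged illustration of such a digital workflow, the sketch below shows a minimal generate-and-evaluate loop over reclaimed artefacts; the artefact data, the stability proxy, and all names are assumptions for illustration, not the workflow implemented in the paper.

```python
# Minimal sketch of a generative, performance-driven loop over reclaimed artefacts.
# Assumptions: artefact dimensions/masses are randomly generated stand-ins, and the
# "stability" proxy (relative centre-of-mass height) replaces real structural analysis.
import random
from dataclasses import dataclass

@dataclass
class Artefact:
    height: float  # stacking height in metres
    mass: float    # kilograms

def stability_score(stack):
    """Toy proxy: a lower centre of mass relative to total height scores higher."""
    total_mass = sum(a.mass for a in stack)
    z, weighted = 0.0, 0.0
    for a in stack:
        weighted += (z + a.height / 2.0) * a.mass
        z += a.height
    return 1.0 - (weighted / total_mass) / z

def candidate(inventory, n=8):
    """Sample one stacking order from the reclaimed-artefact inventory."""
    return random.sample(inventory, n)

inventory = [Artefact(random.uniform(0.2, 0.5), random.uniform(1.0, 4.0)) for _ in range(40)]
best = max((candidate(inventory) for _ in range(500)), key=stability_score)
print(f"best stability proxy: {stability_score(best):.3f}")
```

In the project itself, the evaluation step would correspond to proper structural analysis, and the selected configuration would feed into geometry generation for robotic fabrication.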
The Design-to-Robotic-Assembly project presented in this paper showcases an integrative approach for stacking architectural elements of varied sizes in multiple directions. Several processes of parametrization, structural analysis, and robotic assembly are algorithmically integrated into a Design-to-Robotic-Production method. This method is informed by the systematic control of density, dimensionality, and directionality of the elements while taking environmental, functional, and structural requirements into consideration. It is tested by building a one-to-one prototype, which is presented and discussed in the paper with respect to the development and implementation of the computational design workflow, coupled with robotic kinematic simulation, that enables the materialization of a multidirectional and multidimensional assembly system.
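Purely as an illustrative sketch (not the D2RP implementation), the control of density, dimensionality, and directionality mentioned above can be imagined as a parametric model in which each element carries a direction and dimensions, and layers are populated until a local density target is met; all data structures and values below are hypothetical.

```python
# Hypothetical parametrization of stacking elements by dimensionality and directionality,
# with a per-layer density target; values and rules are illustrative assumptions only.
import math
import random
from dataclasses import dataclass

@dataclass
class Element:
    length: float     # element length in metres
    direction: tuple  # unit vector of its stacking direction

def unit(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def populate_layer(target_density, area, mean_length=0.6):
    """Add elements until the requested number of elements per square metre is reached."""
    elements = []
    while len(elements) / area < target_density:
        direction = unit((random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(0.1, 0.3)))
        elements.append(Element(max(0.2, random.gauss(mean_length, 0.1)), direction))
    return elements

# Denser where structural demand is assumed higher, sparser where openness is wanted.
dense_layer = populate_layer(target_density=12.0, area=4.0)
sparse_layer = populate_layer(target_density=4.0, area=4.0)
print(len(dense_layer), len(sparse_layer))
```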
Founded on the imperative to understand, evaluate, and consciously decide about the use of digital media in architecture, this research not only aims to analyze and critically assess computer-based systems in architecture, but also proposes an evaluation and classification of digitally driven architecture through procedural- and object-oriented studies. It furthermore introduces methodologies of digital design that incorporate intelligent computer-based systems, proposing the development of prototypical tools to support the design process. Appendices on DVD in the TU Delft Tresor collection, TR diss 5272.
Over the last decades, digital technology has introduced data-driven representational and generative methodologies based on principles such as parametric definition and algorithmic processing. In this context, the 15th Footprint issue examines the development of data-driven techniques such as digital drawing, modelling, and simulation with respect to their relationship to design. The data propelling these techniques may consist of qualitative or quantitative values and relations that are algorithmically processed. However, the focus here is not on each technique and its respective representational and generative aspects, but on the interface between these techniques and design conceptualization, materialization, and use.
This paper presents the implementation of a facial-identity and -expression recognition mechanism that confirms or negates physical and/or computational actuations in an intelligent built environment. Said mechanism is built via Google Brain's TensorFlow (for facial identity recognition) and Google Cloud Platform's Cloud Vision API (for facial gesture recognition), and it is integrated into the ongoing development of an intelligent built-environment framework, viz., Design-to-Robotic-Production & -Operation (D2RP&O), conceived at Delft University of Technology (TUD). The present work builds on the inherited technological ecosystem and technical functionality of the Design-to-Robotic-Operation (D2RO) component of said framework, and its implementation is validated via two scenarios, one physical and one computational. In the first scenario, which builds on an inherited adaptive mechanism, if building-skin components perceive a rise in interior temperature, natural ventilation is promoted by increasing degrees of aperture. This measure is presently confirmed or negated by a corresponding facial expression on the part of the user in response to said reaction, which serves as an intuitive override/feedback mechanism for the intelligent building skin's decision-making process. In the second scenario, which builds on another inherited mechanism, if an accidental fall is detected and the user remains collapsed, whether conscious or unconscious, a series of automated emergency notifications (e.g., SMS, email) are sent to family and/or caretakers by particular mechanisms in the intelligent built environment. The precision of this measure and its execution are presently confirmed by (a) identity detection of the victim and (b) recognition of a reflexive facial gesture of pain and/or displeasure. The work presented in this paper promotes a considered relationship between the architecture of the built environment and the Information and Communication Technologies (ICTs) embedded and/or deployed within it.
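As a hedged sketch of the confirm/negate step in the first scenario, the snippet below queries the Cloud Vision API for the likelihood of anger or sorrow in a captured reaction image; the decision rule, the no-face fallback, and the commented actuator hook are assumptions for illustration, not the D2RP&O implementation, and the call requires valid Google Cloud credentials.

```python
# Illustrative confirm/negate step using Google Cloud Vision face detection.
# The decision rule and the commented actuator hook are assumptions, not the
# mechanism described in the paper. Requires: pip install google-cloud-vision
# and valid Google Cloud credentials.
from google.cloud import vision

def expression_confirms_action(image_path: str) -> bool:
    """Return False if the user's facial expression reads as displeased, True otherwise."""
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    faces = client.face_detection(image=image).face_annotations
    if not faces:
        return True  # assumption: no visible reaction is treated as no objection
    displeased = (vision.Likelihood.LIKELY, vision.Likelihood.VERY_LIKELY)
    face = faces[0]
    return not (face.anger_likelihood in displeased or face.sorrow_likelihood in displeased)

# Hypothetical use: revert an increased building-skin aperture if the user objects.
# if not expression_confirms_action("reaction.jpg"):
#     building_skin.set_aperture(previous_setting)  # placeholder actuator call
```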