What’s all the fuss about geometry?

Looking at the scientific discourse and, partly, at what we teach our engineering students, one might wonder about the above question a lot. We have very vibrant research and teaching lines in engineering that focus on computational geometry. In practice, too, the discussion around Building Information Modeling and related standards focuses to a great extent on geometry (the 3D model treated as the equivalent of the Building Information Model).

For quite some time I have been wondering whether this focus is still relevant in light of ever-advancing BIM practice. After all, BIM-based design is parametric by nature: designers choose building components (walls, doors, windows, roofs), set the parameters of these components (height, width, depth), and the computer automatically generates the geometry of the elements. Moreover, the computer generates not just one geometry, but many different geometric representations. This makes it possible to represent the object in the multiple views (and levels of representational detail) that the software provides.
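
To make this concrete, here is a minimal Python sketch of the idea (the class and parameter names are hypothetical, not taken from any particular BIM tool): the designer supplies only the component parameters, and every geometric representation is derived from them.

```python
from dataclasses import dataclass

@dataclass
class Wall:
    """A parametric wall: the designer chooses the parameters;
    the geometry is derived on demand, never stored."""
    length: float     # m
    height: float     # m
    thickness: float  # m

    def solid(self):
        """One derived representation: the 3D solid as eight
        corner vertices of an axis-aligned box at the origin."""
        l, h, t = self.length, self.height, self.thickness
        return [(x, y, z) for x in (0, l) for y in (0, t) for z in (0, h)]

    def plan_outline(self):
        """A second representation of the same object: the 2D
        footprint used in a plan view."""
        l, t = self.length, self.thickness
        return [(0, 0), (l, 0), (l, t), (0, t)]

# One parametric definition, several derived geometric views:
w = Wall(length=5.0, height=2.8, thickness=0.3)
print(w.plan_outline())  # plan view
print(len(w.solid()))    # 8 vertices of the 3D solid
```

The point of the sketch: the exchange-worthy information is the three parameters, not the eight vertices.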

Considering that modern design practice is moving more and more towards parametric design tools, I wonder whether teaching and research on how to generate complex geometric models should move increasingly into the domain of software developers.

Does it really make sense to represent geometry within our interoperable product model exchange formats? Or do we rather need to think harder about ways to exchange the logical design parameters and leave it to the software engineers who implement the standards to find the best geometric representation? Do we need geometric model checkers, or can these largely be replaced by software that checks parameters? Will engineers in the future start their design efforts by drawing geometry, or will they jump directly into creating parametric networks of design logic and let the computer generate the geometry for them?

Design optimization: Searching for the Impact

Design optimization tools are becoming readily available and easy to use (see, for example, Karamba or Dynamo). It is not surprising that the number of studies exploring these tools is exploding. Many examples illustrate how to design optimization models and execute optimizations. Often, however, these studies fail to demonstrate the true impact that was expected in terms of improving the (simulated) performance of the engineering design. Showing that the deflection of a structure could be reduced by a centimeter, or that the material used for the structure was reduced by some percent, remains an academic exercise that provides little evidence of the engineering impact of optimization technologies.
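
As a deliberately tiny sketch of the kind of study this paragraph describes: formulating a design optimization model (here, sizing the cross-section of a cantilever) and solving it, in this case with SciPy rather than Karamba or Dynamo. All numbers are illustrative.

```python
import numpy as np
from scipy.optimize import minimize

# Cantilever with a tip load: minimize material (cross-section area)
# while keeping the tip deflection below a limit. Illustrative values.
P, L, E = 10e3, 3.0, 210e9          # load [N], span [m], steel E [Pa]
DEFLECTION_LIMIT = 0.01             # 10 mm

def deflection(x):
    b, h = x                         # section width / height [m]
    I = b * h**3 / 12                # second moment of area
    return P * L**3 / (3 * E * I)    # classic cantilever tip deflection

area = lambda x: x[0] * x[1]         # objective: material use

res = minimize(
    area, x0=[0.1, 0.2], method="SLSQP",
    bounds=[(0.05, 0.5), (0.05, 0.5)],
    constraints=[{"type": "ineq",
                  "fun": lambda x: DEFLECTION_LIMIT - deflection(x)}],
)
print(res.x, deflection(res.x))
```

Whether shaving a few millimeters off the deflection in such a toy model matters in practice is, of course, exactly the question raised above.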

As we move forward in this field of research, we need to develop more studies that move away from simply showing the feasibility of applying relatively mature optimization methods and towards formalizing optimization problems that matter. Finding such problems is not easy, as we cannot truly estimate the outcomes of mathematical optimizations upfront. Whether a specific impact can be achieved can only be determined through experimentation – a long, labor-intensive, and hard process.

Even worse, identifying relevant optimization problems through discussions with experts is difficult. The outcome of each design optimization needs to be compared with the solution an expert designer would have developed using intuition and a traditional design process. Hence, working with expert designers to identify problems might be tricky: after all, they are experts and can probably already come up with pretty good solutions. It seems one would rather need to identify problems that are less well understood, but still relevant. Such problems might be scarce, since the relevant problems are of course also the ones most widely researched.

In the end, I think we need to set ourselves up for a humble and slow approach – an approach that is time-consuming, that will require large-scale cooperation, and that will face many setbacks in terms of providing an impact that truly matters. Maybe this is also the reason why a disruption of design practice is not yet visible. Until we truly understand how we can impact design practice with optimization, we will need to rely on human creativity and expertise for some time to come (which is not to say that we should stop our efforts).

Engineering Design Tools, UX, and Flow

In psychology, flow is the state in which a person performing an activity is fully immersed in it, feeling an energizing focus, full involvement, and joy. The concept of flow was introduced into psychology by Mihály Csíkszentmihályi, a Hungarian psychologist.

I believe that the concept of flow is also relevant for practicing designers, who probably develop their best design ideas within this state. The feeling of being immersed in an activity of quickly sketching, evaluating, discarding, or changing ideas is perhaps a very accurate description of highly productive, innovative, and creative design (it would be very interesting to hear from you whether and how you experience such flow states during your work).

Looking at the ongoing digitalization of design work with computational design support software, I wonder whether the user interfaces of these applications are really designed to get a user into the flow state. I do think it is possible to design user interfaces that seamlessly allow for this. We can borrow here from the computer game industry, which has designed a number of highly addictive games (yes, I started playing Aven Colony yesterday evening and went to bed much too late) that could be considered integrated design tools.

Aven Colony is such an example. The goal of the game is to design a space station that is economically prosperous and provides a happy living space for colonists – basically an integrated urban planning design support tool with advanced simulation features. The user interface of Aven Colony is highly intuitive; after playing the game for 30-60 minutes, its functionality is entirely understood. But it does not stop there: once the interface is understood, the flow starts, and it is very easy to forget the time and play all night long – something to be avoided in real design support applications, from an employee health and safety perspective.

Product Modeling and Product Development Processes (Are we doing this with BIM?)

The design of complex engineering products is characterized by parallel problem solving in large design teams, distributed teamwork, and complex project management patterns. The long-standing question remains how to support this work with digital technologies and databases that can store the quickly evolving, changing, and complex data over the product development life-cycle.

Digital solutions that support such complex process patterns have to serve a number of varying needs with respect to how product information is represented. A good categorization of the different representation forms is provided in the seminal paper by Krause et al., which I want to reiterate here:

  • Structure-oriented product models that describe the components of an engineering system and that lend themselves well to managing the suppliers, parts, and quantities of the engineering product
  • Geometry-oriented product models that, well, describe the geometrical shape of the engineering product
  • Feature-oriented product models that represent the product in terms of form-features that often can be directly related to different geometries
  • Knowledge-based product models that describe the engineering product in terms of an ontology mapping the knowledge that is required during the product development process

All these different ways of representing a product are, of course, required when designing and engineering a complex engineering product. They also need to be combined and adjusted to the engineering processes required at each stage of the engineering design process. Quite a few of these processes also run in parallel (yes: integrated engineering). So the question is: how well do the existing BIM applications in civil engineering design support this? Again, something I would like to post here as food for thought for an ongoing discussion.
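
As a rough sketch of how these representation forms might sit side by side over one product object (Python; all class and rule names are hypothetical, not from Krause et al.):

```python
from dataclasses import dataclass, field

@dataclass
class Component:
    """One product object carrying several coexisting representations."""
    name: str
    supplier: str = ""                            # structure-oriented: parts and suppliers
    children: list = field(default_factory=list)  # structure-oriented: decomposition
    features: list = field(default_factory=list)  # feature-oriented: form features

    def geometry(self):
        """Geometry-oriented view, derived from the form features."""
        return [f"mesh({feat})" for feat in self.features]

def check_knowledge_rules(component):
    """Knowledge-based view: rules over the other representations,
    e.g. 'every load-bearing part needs a supplier'."""
    if "load-bearing" in component.features and not component.supplier:
        yield f"{component.name}: load-bearing part without supplier"
    for child in component.children:
        yield from check_knowledge_rules(child)

wall = Component("Wall-01", supplier="ACME", features=["load-bearing", "door-opening"])
building = Component("Building", children=[wall])
print(wall.geometry())                        # geometry derived from features
print(list(check_knowledge_rules(building)))  # [] – knowledge rule satisfied
```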

Robotic Construction Automation and Engineering Knowledge

The worldwide labor shortage, a continuous push towards increasing construction productivity, and the surge in computational power have lately triggered a new wave of research into construction robotics. So far, much of this work has focused on exploring the technical aspects of robotic design in terms of locomotion, sensing, and manipulation technology. Moreover, research into new construction materials, in particular those that allow for additive manufacturing, is moving prominently into the spotlight of our research activities.

To pave the way forward, I believe that an additional topic of scientific research will be equally important: formalizing advanced engineering and trade knowledge. Considering that robots need to take over tasks that humans conduct on construction sites (not only labor-related, but also planning-related), we need to clearly and explicitly understand the knowledge these humans possess in order to inform the design of robots. Gaining such an understanding will be important for all aspects of robotic design: for deciding on robotic hardware in terms of locomotion, sensing, and manipulation, and for developing robotic software – from sensor interpretation to path finding. Equally important, formalized engineering knowledge is required to develop new design methods and materials.
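
As a toy illustration of what formalized trade knowledge could look like in software (Python; the rules and thresholds are invented for illustration, not actual trade knowledge):

```python
# A toy rule base capturing (invented) trade knowledge that a robot
# planner could query before choosing its next action.
RULES = [
    # (condition on the site state, constraint for the planner)
    (lambda s: s["ambient_temp_c"] < 5,       "do not lay mortar: too cold"),
    (lambda s: s["course_height_m"] > 1.2,    "install scaffold before next course"),
    (lambda s: s["brick_moisture"] == "saturated", "reject brick: too wet"),
]

def plan_constraints(site_state: dict) -> list[str]:
    """Return every formalized constraint that applies to the current state."""
    return [msg for cond, msg in RULES if cond(site_state)]

state = {"ambient_temp_c": 3, "course_height_m": 0.6, "brick_moisture": "dry"}
print(plan_constraints(state))  # -> ['do not lay mortar: too cold']
```

The hard research question is, of course, not the lookup mechanism but eliciting and validating such rules from trades-people in the first place.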

To understand and formalize engineering knowledge for robotic design, research is required: a sound body of research methodologies will need to be developed that is based on design thinking, but that also provides the means for sound validation (both in the real world and simulation-based). We should also review the existing robotic solutions that have been proposed in the past and understand, in retrospective studies, how engineering knowledge has influenced their design.

The NO FREE LUNCH theorem for point cloud processing

As scanning technology gets cheaper, the availability of point clouds for buildings is increasing significantly. At the same time, decades of research exist that have tried to convert point clouds into semantically rich Building Information Models, a practice that has recently been termed Scan2BIM. Despite this significant past research, no breakthrough is visible that would allow us to convert point cloud data into general purpose BIM models. For quite some time, I have therefore been wondering whether a general purpose Scan2BIM conversion is possible at all. To me it rather seems as if conversion processes need to be closely steered by very detailed and specific information requirements. These information requirements should be based on a sound analysis of the engineering decisions that are to be made using the information. Once it is clear what information to extract and in what detail, dedicated extraction algorithms can be developed. Looking at recently published studies, research seems to be shifting towards such specific purposes. However, such processes can hardly be labeled general purpose Scan2BIM.

The entire discussion reminds me of a paper that I was writing some years ago with Robert Amor and Bill East about the sense and non-sense of general purpose information models. While writing, Bill suggested that we should argue for a NO FREE LUNCH (NFL) theorem for information models. We all liked the idea, but the reviewers did not, so the NFL theorem for information models never made it into the final publication. Bill’s idea was inspired by the NFL theorem in search and optimization. Once that theorem was established, it immediately stopped the extensive research efforts into the ideal general purpose optimization method. More about the NFL theorem for search and optimization here.
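
For readers unfamiliar with it, the NFL theorem of Wolpert and Macready states, roughly, that when performance is summed over all possible objective functions f, any two search algorithms a_1 and a_2 are indistinguishable:

```latex
% No Free Lunch (Wolpert & Macready, 1997): for any two algorithms
% a_1, a_2 and any number m of cost evaluations, summed over all
% objective functions f,
\sum_{f} P\left(d^{y}_{m} \mid f, m, a_{1}\right)
  = \sum_{f} P\left(d^{y}_{m} \mid f, m, a_{2}\right)
```

Here d^y_m is the sequence of cost values observed after m evaluations: averaged over all problems, no algorithm beats any other, so gains come only from specializing to a problem class.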

Now, years later, I think we should consider an NFL theorem for point cloud processing as well. For research, the existence of such a theorem would have considerable ramifications. It would require a much more humble approach to point cloud processing, focusing on small, purposeful engineering applications and on the development of clear ontologies describing the knowledge required for these applications. These ontologies would then need to steer the development of the geometric point cloud extraction methods. The resulting methods would, however, not work for generalized purposes.
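
To illustrate what such a purpose-driven extraction could look like: a minimal NumPy sketch that answers one narrow information requirement ("find the dominant floor plane for a flatness check") rather than attempting general Scan2BIM. The thresholds and the synthetic data are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_floor_plane(points, n_iter=200, tol=0.02):
    """RANSAC for one narrow requirement: the dominant plane
    (e.g. a floor slab) in a point cloud – nothing more."""
    best_inliers = np.zeros(len(points), dtype=bool)
    for _ in range(n_iter):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate sample, skip
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        inliers = dist < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return points[best_inliers]

# Synthetic "scan": a noisy floor at z = 0 plus random clutter.
floor = np.c_[rng.uniform(0, 10, (500, 2)), rng.normal(0, 0.005, 500)]
clutter = rng.uniform(0, 10, (200, 3))
cloud = np.vstack([floor, clutter])
print(len(fit_floor_plane(cloud)), "points assigned to the floor plane")
```

Note how the information requirement ("floor plane, 2 cm tolerance") fully determines the algorithm and its thresholds – which is precisely why the result is not a general purpose Scan2BIM method.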

A viable green business model: Building renovation consultancy for a district

I was just working on an initial business plan for a consultancy business for architects to support and manage renovation planning and execution. The consultancy service is targeted at local districts and assumes that the service providers are able to build a strong network with the local property owners in a district. I envisioned a number of services that follow the 4M process we designed during our P2Endure project:

  • providing an initial evaluation of the feasibility of a renovation (Mapping)
  • supporting the detailed planning of the renovation with energy simulation, engineering the renovation, and managing the supply chain for executing the renovation work (Modeling and Making)
  • setting up monitoring to assess renovation possibilities continuously throughout the life-cycle of a building (Monitoring)

I conducted an initial financial assessment for a district of roughly 100 privately owned properties, with roughly 20% of the owners interested in upgrading their properties. The assessment resulted in a sound business for an office with two partners. Such businesses could significantly improve the renovation rate of the building stock in Europe and, in turn, make a large contribution to the reduction of CO2 emissions. All in all, a true Green Deal business. We will now discuss this business model internally within the P2Endure project, but once the model is formally published I will provide an update. Stay tuned!
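
A back-of-the-envelope version of such an assessment might look like the sketch below (Python). Every monetary figure is an invented placeholder, not a number from the actual P2Endure assessment.

```python
# Hypothetical back-of-the-envelope check of the district business case.
# All fee figures are invented placeholders for illustration only.
properties = 100
interest_rate = 0.20                                  # share of owners interested
projects_per_year = properties * interest_rate / 4    # spread over ~4 years

fee_mapping = 1_500      # initial feasibility evaluation (Mapping)
fee_planning = 12_000    # simulation, engineering, supply chain (Modeling/Making)
fee_monitoring = 400     # yearly monitoring subscription (Monitoring)

yearly_revenue = (projects_per_year * (fee_mapping + fee_planning)
                  + properties * interest_rate * fee_monitoring)
print(f"~EUR {yearly_revenue:,.0f} per year")  # enough for a two-partner office?
```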