Modeling, abstraction, and digital twins

In our recently granted Horizon 2020 project Ashvin, we defined a digital twin as a digital replica of a building or infrastructure system together with possibilities to accurately simulate its multi-physics behavior (think of structural, energy, etc.). Additionally, digital twins provide possibilities to represent all important processes around the building or infrastructure system throughout its product development lifecycle. These processes include evolving design information (as-designed vs. as-built) and an accurate description of all relevant construction as well as maintenance activities. The technical enabler for such digital twin representations is the internet of things (IoT), which allows establishing connections between real-time data coming from sensors and cameras so as to establish a close correspondence between the physical entities and processes and their virtual representation. This definition is similar to the definitions provided by others, such as the one by Sacks et al. (2020), https://doi.org/10.1017/dce.2020.16.

A closer look at these definitions, however, reveals the inherent danger of blurring technical possibilities with the realities of engineering design work and with the constraints imposed on engineering by the boundaries of human cognition. The technical possibilities we have nowadays for increasing the sophistication and depth of models representing products and their behavior are growing quickly. At the same time, we are more and more able to fuse and combine different data-driven and physics-based models with each other in ways that would not have been computationally possible just a few years ago. What is missing from technically driven definitions, however, is a clear focus on how an increase in model complexity can support engineers.

Reflecting some more, the notion of a ‘twin’ of the real world that exists in the digital might be ill chosen. After all, engineers establish models as simplified and highly abstracted representations of reality so that they can cognitively deal with reality’s complexity. Instead of ‘accurately simulating the multi-physics behavior’ and ‘representing all important processes’, engineers are probably better served by models built on simplified simulations of the multi-physics behavior and on the representation of very few processes. All of this, of course, with the aim of supporting engineers within their limited cognitive abilities, but also from the utilitarian understanding that a simpler model that allows for a similar understanding is superior to a more complex one.

To account for this aspect, it might be appropriate to start technical research and development efforts from a solid and in-depth empirical understanding of engineering work. From this understanding, clear requirements can be derived, not only in terms of what needs to be modeled, but, equally important, of how abstract these models need to be to still allow engineers to come to creative conclusions within their cognitive abilities.

Applications of the Ashvin project with their envisioned impact.

To allow for such a development approach, the strategy we will therefore follow in Ashvin is a clear focus on how engineers can impact the productivity, resource efficiency, and safety of construction processes within the different phases of the design and engineering life-cycle. From this we derived a number of very specific applications for supporting engineers (we call this the Ashvin toolkit) and drive all digital twin related development work based on the requirements for these applications. The next three years will show how successful we can be with such an approach in achieving our envisioned impact. The project’s website should be online soon at www.ashvin.eu.

Knowing the surroundings before renovating

To design, plan, and execute a building renovation, engineers not only need to understand the building itself, but also need to know quite a lot about the building’s environment. Knowledge about the environment is already required in the early stages of building condition assessment. For example, to plan drone flights around a building it is important to know whether there are trees or other objects around that would inhibit the drone flight. To evaluate different design options later in the process with respect to energy and occupant-related performance, knowledge about the local weather conditions or about objects that might provide shading is required. During execution, one of the most important aspects that need to be accounted for is accessibility.

The above are only a couple of examples of the required knowledge. On our projects, more often than not, this knowledge is not systematically managed using our existing information models. To overcome this problem, Maryam Daneshfar explored, in her research within the European BIM-Speed project, the knowledge that engineers require for renovating buildings. She formalized her findings in an ontology that is freely available here and discussed in a preliminary conference paper. I am sure publications about the ontology will soon follow in scientific journals. I will keep you posted.
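To give a flavour of how such formalized environment knowledge could be used in practice, here is a small hypothetical sketch: querying an RDF graph for objects around a building that could obstruct a drone flight. The namespace, class, and property names (and the example file) are made up for illustration and are not the terms Maryam defined in her ontology.

```python
# Hypothetical sketch: querying formalized knowledge about a building's
# surroundings before planning a drone flight around it. All class and
# property names are illustrative placeholders, not the actual ontology terms.
from rdflib import Graph

g = Graph()
# In practice this would load the published ontology together with project data.
g.parse("building_environment.ttl", format="turtle")   # placeholder file name

# Find surrounding objects (trees, neighbouring structures, ...) that are tall
# enough to obstruct a drone survey of Building_01.
query = """
PREFIX env: <http://example.org/renovation-environment#>
SELECT ?obj ?height WHERE {
    ?obj a env:SurroundingObject ;
         env:isLocatedNear env:Building_01 ;
         env:hasHeight ?height .
    FILTER(?height > 10.0)
}
"""
for obj, height in g.query(query):
    print(f"Potential obstruction: {obj} (height {height} m)")
```

The point is less the specific query and more that, once the knowledge is formalized, questions like “what can block my drone flight?” become answerable from the model rather than from ad hoc site visits.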

Automated Text Mining to the Rescue of Knowledge Management

Over the years I have worked with and for a great number of large engineering and design companies operating globally. The secret to successfully managing these companies revolves largely around strategies to accurately manage knowledge. As my colleague and good friend Amy Javernick-Will wrote, this knowledge can be categorized along two main lines: local market knowledge and global technical engineering knowledge (Javernick-Will and Scott 2010). I feel that most large design and engineering organizations have a good handle on managing the local knowledge by creating decentralized organizational structures and by having an extensive merger and acquisition strategy. However, I also feel that most of the companies I worked for still struggle to a certain extent with managing their global technical expertise. Maybe these struggles are most visible in the efforts to establish central knowledge management systems and to convince the majority of the workforce to use these systems to post knowledge and best-practice studies and to establish global discussion networks around central technical topics of importance.

The main problem with these traditional platforms is that they require extra effort from employees to post, maintain entries, and discuss. In a business that is still mainly oriented towards billing hours to clients this is a difficult problem, as most employees are mainly concerned with selling their valuable work hours. This, of course, particularly holds for the experienced and knowledgeable people who hardly ever find time to support knowledge management tasks. At the same time, however, large design and engineering organizations sit on a gold mine of information that could be leveraged for establishing central knowledge management systems: reports, project reviews, internal memos, and all types of other textual data. In recent years, many studies have shown the feasibility of automatically text mining these documents to establish automated knowledge categories, dedicated search engines, and knowledge networks that link important concepts. One of my most notable colleagues in this area is Nora El-Gohary, who has shown the potential in a myriad of different studies. In some of our recent work we have also shown the potential for renovation projects. Commercially, some early start-ups, such as the Berlin-based Architrave, are also trying to leverage this idea. However, I believe that the true potential to leverage the power of automated text mining lies with the large design and engineering companies. The methods and algorithms exist; the key to successful implementation, however, lies in the availability of large textual databases that only exist at these multinational, large-scale corporations.
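To make this a little more concrete, below is a minimal sketch of what automated knowledge categorization could look like. It assumes a folder of plain-text exports of internal documents (the path and the number of categories are placeholders) and uses off-the-shelf TF-IDF vectorization and k-means clustering from scikit-learn; a real system would of course need OCR, multilingual handling, and domain-specific vocabularies on top of this.

```python
# Minimal sketch: grouping internal project documents into knowledge categories
# using TF-IDF and k-means. Paths and the number of clusters are placeholders.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# Assumed location of plain-text exports of reports, memos, project reviews, ...
documents = [p.read_text(encoding="utf-8", errors="ignore")
             for p in Path("internal_reports").glob("*.txt")]

# Turn each document into a TF-IDF vector; ignore very rare and very common terms.
vectorizer = TfidfVectorizer(stop_words="english", min_df=2, max_df=0.8)
doc_vectors = vectorizer.fit_transform(documents)

# Cluster the documents into a (here arbitrary) number of knowledge categories.
n_categories = 10
kmeans = KMeans(n_clusters=n_categories, random_state=0, n_init=10)
labels = kmeans.fit_predict(doc_vectors)

# Characterize each category by its most distinctive terms.
terms = vectorizer.get_feature_names_out()
for c in range(n_categories):
    top_terms = kmeans.cluster_centers_[c].argsort()[::-1][:8]
    print(f"Category {c}: {', '.join(terms[i] for i in top_terms)}")
```

The same document-term vectors can also back a dedicated search engine or a concept network; the clustering step above is only the simplest possible illustration of turning an unused document archive into navigable knowledge.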

Design optimization: Searching for the Impact

Design optimization tools have become readily available and easy to use (see for example Karamba or Dynamo). It is not surprising that the number of studies exploring these tools is exploding. Many examples exist that illustrate how to set up optimization models and execute optimizations. Often, however, these studies fail to demonstrate the true impact that was expected in terms of improving the (simulated) performance of the engineering design. Showing that the deflection of a structure could be reduced by a centimeter, or that the material used for the structure was reduced by a few percent, remains of course an academic exercise that can provide little evidence of the engineering impact of optimization technologies.
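To illustrate what I mean by such an academic exercise, here is a minimal, hypothetical sketch: minimizing the material of a simply supported steel beam under a mid-span point load while keeping the deflection below a limit. All numbers are made up for illustration, and the closed-form deflection formula stands in for what would normally be a parametric or finite element model.

```python
# Minimal sketch of a textbook design optimization: minimize the mass of a
# simply supported rectangular steel beam while limiting mid-span deflection.
# All parameter values are illustrative only.
from scipy.optimize import minimize

L = 6.0                   # span [m]
P = 20e3                  # mid-span point load [N]
E = 210e9                 # Young's modulus of steel [Pa]
RHO = 7850.0              # density of steel [kg/m^3]
DEFL_LIMIT = L / 250.0    # deflection limit [m]

def mass(x):
    b, h = x              # cross-section width and height [m]
    return RHO * b * h * L

def deflection(x):
    b, h = x
    I = b * h**3 / 12.0                 # second moment of area [m^4]
    return P * L**3 / (48.0 * E * I)    # mid-span deflection, simply supported beam

constraints = [{"type": "ineq", "fun": lambda x: DEFL_LIMIT - deflection(x)}]
bounds = [(0.05, 0.4), (0.1, 0.8)]      # practical limits on b and h [m]

result = minimize(mass, x0=[0.2, 0.4], bounds=bounds,
                  constraints=constraints, method="SLSQP")
b_opt, h_opt = result.x
print(f"b = {b_opt:.3f} m, h = {h_opt:.3f} m, mass = {mass(result.x):.1f} kg, "
      f"deflection = {deflection(result.x)*1000:.1f} mm")
```

Exercises like this are easy to set up; the hard part, as argued below, is finding formulations where the outcome actually changes an engineering decision.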

As we move forward in this field of research, we need to develop more studies that move away from simply showing the feasibility of applying relatively mature optimization methods and towards formalizing optimization problems that matter. Finding such problems is not easy, as we cannot truly estimate the outcomes of mathematical optimizations upfront. Whether a specific impact can be achieved can only be determined through experimentation, a long, labor-intensive, and hard process.

Even worse, identifying relevant optimization problems through discussions with experts is difficult. The outcome of each design optimization needs to be compared with the solution an expert designer would have developed using intuition and a traditional design process. Hence, working with expert designers to identify problems might be tricky: after all, they are experts and can probably already come up with pretty good solutions. It seems as if one would rather need to identify problems that are less well understood, but still relevant. Such problems might also be scarce, as relevant problems are of course much more widely researched.

In the end, I think we need to set ourselves up for a humble and slow approach. An approach that is time consuming, that will require large-scale cooperation, and that will face many setbacks in terms of providing an impact that truly matters. Maybe this is also the reason why a disruption of design practice is not yet visible. Until we are able to truly understand how we can impact design practice with optimization, we will still need to rely on human creativity and expertise for some time to come (which is not to say that we should stop our efforts).

The NO FREE LUNCH theorem for point cloud processing

As scanning technology gets cheaper, the availability of point clouds for buildings is increasing significantly. At the same time, decades of research exist that have tried to convert point clouds into semantically rich Building Information Models, a practice that has recently been termed Scan2BIM. Despite this significant past research, no breakthrough is visible that would allow us to convert point cloud data into general-purpose BIM models. For quite some time, I have therefore been wondering whether a general-purpose Scan2BIM conversion is possible at all. To me it rather seems as if conversion processes need to be closely steered by very detailed and specific information requirements. These information requirements should be based on a sound analysis of the engineering decisions that are to be made on the information. Once it is clear what information to extract and in what detail this information is required, dedicated extraction algorithms can be developed. Looking at recently published studies, research seems to be shifting towards such specific purposes. However, such processes can hardly be labeled general-purpose Scan2BIM.

The entire discussion reminds me of a paper that I was writing some years ago with Robert Amor and Bill East about the sense and non-sense of general-purpose information models. While writing, Bill suggested that we should argue for a NO FREE LUNCH theorem (NFL) for information models. We all liked the idea, but the reviewers did not, so the NFL for information models never made it into the final publication. Bill’s idea was inspired by the NFL theorem in search and optimization. Once this theorem was established, it immediately stopped the extensive research efforts into the ideal general-purpose optimization method. More about the NFL for search and optimization here.

Now, years later, I think we should consider an NFL for point cloud processing as well. For research, the existence of such an NFL would have quite some ramifications. It would require a much more humble approach to point cloud processing, focusing on very small, purposeful engineering applications and on the development of clear ontologies describing the knowledge required for these applications. These ontologies then need to steer the development of the geometric point cloud extraction methods. The developed methods would, however, not work for generalized purposes.
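As a toy illustration of what such a purpose-driven extraction could look like, the sketch below fits a single floor plane to a point cloud with a small hand-rolled RANSAC loop and reports how flat the floor is, because that is the one (hypothetical) engineering question being asked. The file name, thresholds, and the flatness question itself are assumptions; the point is only that the algorithm extracts exactly one piece of decision-relevant information, not a general-purpose BIM model.

```python
# Minimal sketch: purpose-driven extraction from a point cloud.
# Hypothetical question asked of the data: "Is the floor flat enough to keep?"
# We fit one plane with RANSAC and report the deviations from it.
import numpy as np

rng = np.random.default_rng(0)

# Assumed input: an (N, 3) array of points from an ASCII scan export.
points = np.loadtxt("room_scan.xyz")          # placeholder file name

best_inliers = np.array([], dtype=int)
best_plane = None
for _ in range(500):                          # RANSAC iterations
    sample = points[rng.choice(len(points), 3, replace=False)]
    n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
    if np.linalg.norm(n) < 1e-9:              # skip degenerate (collinear) samples
        continue
    n = n / np.linalg.norm(n)
    if abs(n[2]) < 0.9:                       # keep near-horizontal planes only (assumes z is up)
        continue
    d = -n.dot(sample[0])
    dist = np.abs(points @ n + d)
    inliers = np.where(dist < 0.01)[0]        # 1 cm inlier threshold
    if len(inliers) > len(best_inliers):
        best_inliers, best_plane = inliers, (n, d)

n, d = best_plane
deviations = points[best_inliers] @ n + d
print(f"Floor plane supported by {len(best_inliers)} points, "
      f"flatness range {1000 * (deviations.max() - deviations.min()):.1f} mm")
```

Everything this script does is steered by one narrow information requirement; swap the question and the thresholds, filters, and outputs change with it, which is exactly the non-generality an NFL for point cloud processing would predict.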

Reading about circular economy

While preparing for our new module ‘Circular economy for the Built Environment: Principles, Practices and Methods’, I read three papers today in an effort to select some required readings for our students.

I first read Ghisellini et al. (2016), ‘A review on circular economy: the expected transition to a balanced interplay of environmental and economic systems’. This is a very comprehensive review of a large number of papers; however, the results from the review were a little inconclusive. In the end, the paper reiterates the different approaches taken in China and the EU: China employs top-down planning, while the EU follows a bottom-up approach. The paper also stresses the need for good business models.

I then read chapter 18, ‘Products & Services’, from Lacy et al.’s book ‘The Circular Economy Handbook’. This chapter discusses possibilities to improve products during all stages of the product life-cycle, from design, to use, to use extension, to end of use. I liked the inclusion of ‘use extension’ as a separate phase in the product life-cycle, certainly something we should make much more explicit. The chapter also stresses the need for good business models, as well as for developing a sound understanding of a company’s product portfolio.

Finally, I read Bocken et al. (2016), ‘Product design and business model strategies for a circular economy’. I truly enjoyed reading the paper, as it introduces a strong framework for designing products that slow down the resource loop and close the resource loop. I thought these two goals are quite helpful concepts to think along when designing products. To slow loops, the authors suggest designing for attachment and trust, reliability and durability, ease of maintenance and repair, upgradability and adaptability, standardization and compatibility, as well as dis- and reassembly. This reminded me of the classical idea of ‘ilities’ from de Weck. To close resource loops, the authors suggest designing for a technological cycle that allows reusing technical materials and sub-products, as well as for a biological cycle. I found this again a powerful guide for designing products.