By Badri Hiriyur, Ph.D.
From the February 2021 Issue
In late 2019, Erica Tishman, a New York City architect, died after being struck by a piece of façade that fell from a 105-year-old high-rise building in Manhattan. The tragedy brought renewed focus to building façade inspection and safety protocols. Many cities with large stocks of older high-rise buildings, including New York City, already have façade ordinances and other regulations that mandate inspections at regular intervals to identify and remedy damage conditions such as loose or failing façade elements. Traditional inspection methods require swing stages, boom lifts, rope access, or similar means to gain close access to the façade. Inspectors manually photograph and log all observations and field notes and then compile a report, a process that is both slow and expensive.
The adoption of unmanned aerial vehicles (UAVs), or drones, in building and façade inspections offers significant time and cost benefits. Today, drones can be flown around a structure and capture hundreds or even thousands of images in a fraction of the time required by traditional close-range access methods such as scaffolding. Drones can also be equipped with special cameras that face forward, upward, or downward to cover all portions of a façade. The ability to program a desired navigation path also ensures repeatability of image capture locations and vantage points. Furthermore, drones can carry thermographic sensors that capture images in the infrared spectrum, or lidar units that capture 3D point clouds, for additional perspectives. All of this makes a compelling case for the use of drones in inspection, and has prompted municipalities like New York City, which currently prohibits drone flights within much of its city limits, to reexamine their drone laws and consider special allowances for building inspections.
Computer Vision for a New Perspective
While drones can capture large amounts of data (photographic or otherwise), all of that information still needs to be reviewed carefully by a trained eye to detect and classify damage conditions. This creates a slow, tedious bottleneck in the process, since someone must view thousands of repetitive photographs.
Fortunately, with the advent of computer vision, the trained eye need not be a human one. This technology allows machines to be trained to detect, classify, and track objects or features of interest in an image. Computer vision systems can process batches of tens of thousands of images in a matter of minutes, giving the last image the same undivided attention as the first.
Computer vision is not a new technology; it has been around for several decades. These systems are typically built using neural networks, which are “trained,” or calibrated, using large amounts of pre-labelled images.
But it is only during the last eight to 10 years that the field has seen dramatic advancement, driven by new algorithms (very deep neural networks), big data (hundreds of thousands of labelled training images), and improved hardware (large and powerful graphics processing units, or GPUs). Within the last decade, computer vision models have progressed from sub-par to super-human levels of performance as measured by several vision benchmarks, such as the ImageNet image classification challenge.
In other words, the best computer vision algorithms today make fewer errors on average than humans in identifying specific elements in an image. This has led to increased adoption of computer vision across a range of industries, including transportation (e.g., self-driving cars), security, consumer electronics, and even the biomedical industry.
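As a deliberately tiny illustration of what “training” on labelled data means, the sketch below calibrates a single artificial neuron by gradient descent. It is a toy, not how production façade-inspection models are built; real systems use deep networks with millions of parameters, and the two features here (hypothetical crack-length and discoloration scores) are invented for illustration.

```python
import math

def train(samples, epochs=500, lr=0.5):
    """Calibrate a single neuron on labelled examples.

    samples: list of (feature_vector, label) pairs, label 0 or 1.
    Each pass nudges the weights to reduce prediction error,
    which is the essence of "training" at any scale.
    """
    n = len(samples[0][0])
    w, b = [0.0] * n, 0.0
    for _ in range(epochs):
        for x, y in samples:
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))  # predicted probability of damage
            err = p - y                     # how far off the prediction was
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
            b -= lr * err
    return w, b

def predict(w, b, x):
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy labelled data: features = (crack_length, discoloration), label = damaged?
data = [((0.9, 0.8), 1), ((0.8, 0.6), 1), ((0.1, 0.2), 0), ((0.2, 0.1), 0)]
w, b = train(data)
print(predict(w, b, (0.85, 0.7)))  # high probability, like the damaged examples
print(predict(w, b, (0.10, 0.10)))  # low probability, like the intact examples
```

The same adjust-weights-to-reduce-error loop, scaled up to millions of parameters and hundreds of thousands of labelled images, is what drives the deep networks described above.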
The same state-of-the-art computer vision models revolutionizing that broad spectrum of industries are now seeing increased adoption in the architecture, engineering, and construction (AEC) industry. Applications include processing drawings to aid an engineer or draftsman, flagging unsafe conditions at construction sites, and identifying structural damage conditions in image and video data.
In the context of damage detection, the accuracy of computer vision models is measured primarily in terms of false positives (spurious or misclassified detections) and false negatives (missed detections). While false positives are usually fairly benign, costing only extra review time, false negatives can lead to dangerous situations. Therefore, some human oversight of this technology is required until its performance matures to a level of acceptable risk.
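Concretely, these two error types map onto the standard precision and recall metrics. The short sketch below, with invented panel IDs rather than real inspection data, scores a model’s flagged locations against inspector-verified ground truth:

```python
def detection_metrics(predicted, actual):
    """Compare predicted damage locations against verified ones.

    predicted, actual: sets of location IDs flagged as damaged.
    Returns true positives, false positives (spurious detections,
    fairly benign), and false negatives (missed damage, dangerous),
    plus the precision and recall they imply.
    """
    tp = len(predicted & actual)
    fp = len(predicted - actual)  # flagged but not actually damaged
    fn = len(actual - predicted)  # damaged but never flagged
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {"tp": tp, "fp": fp, "fn": fn,
            "precision": precision, "recall": recall}

# Example: model flags panels 1, 2, 5; inspectors confirmed 2, 5, 7.
scores = detection_metrics({1, 2, 5}, {2, 5, 7})
print(scores)  # fn = 1: panel 7's damage was missed
```

A single false negative, like panel 7 here, is exactly the failure mode that makes continued human oversight necessary.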
It is also worth noting some limitations of these technologies. While drone imagery combined with computer vision can be thorough and comprehensive in identifying surface conditions on a façade, there will always be situations in which hands-on human intervention is essential, especially when conditions originate below the surface and may not be visually identifiable. Even in these situations, however, computer vision is extremely useful for identifying “hot spots,” or critical areas of a façade where human experts can be deployed for more focused follow-up.
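One simple way such hot-spot triage could work, sketched here with invented façade-region names and confidence scores, is to aggregate the model’s per-detection confidences by region and send experts to the highest-scoring areas first:

```python
from collections import defaultdict

def rank_hotspots(detections, top_k=3):
    """Rank façade regions for expert follow-up.

    detections: list of (facade_region, confidence) pairs emitted by
    a computer vision model. Summed confidence per region serves as
    a simple proxy for how urgently that region needs a human look.
    """
    totals = defaultdict(float)
    for region, confidence in detections:
        totals[region] += confidence
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

detections = [
    ("N-face parapet", 0.91), ("N-face parapet", 0.84),
    ("E-face spandrel", 0.55), ("W-face lintel", 0.72),
    ("E-face spandrel", 0.60),
]
print(rank_hotspots(detections, top_k=2))
# N-face parapet (1.75) and E-face spandrel (1.15) get expert review first
```

Real systems would weigh detection type and severity, not just confidence, but the principle of concentrating scarce expert time on machine-flagged areas is the same.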
Drone image capture presents an efficient alternative to traditional methods of façade inspection, and computer vision presents a scalable method of converting vast amounts of data into actionable insights. These technologies can be deployed at increased frequency to supplement human expertise, so that façade deterioration is identified at an early stage, before it becomes costly to repair or, worse yet, dangerous.
Dr. Hiriyur is a vice president and director of artificial intelligence at Thornton Tomasetti, as well as the founder and CEO of T2D2.ai. At Thornton Tomasetti, Dr. Hiriyur leads the CORE.AI research and development group, which develops applications that leverage artificial intelligence and machine learning to transform workflows and processes in the AEC sector. T2D2.ai is a technology startup providing cloud-based building health monitoring services that use computer vision to detect and map damage in structures from drone or mobile camera feeds. Prior to establishing the CORE.AI R&D group and T2D2.ai, Dr. Hiriyur spent several years as a computational scientist in the Applied Sciences practice at Thornton Tomasetti, where he developed high-performance computing software used by the U.S. Navy for computational fluid dynamics simulations.