Future Forward Interview: Structural Health Modeling as a Multidisciplinary Pursuit
Michael D. Todd is a professor of structural engineering at the University of California, San Diego (UCSD), and co-director of the UCSD Engineering Institute at Los Alamos National Laboratory. He has combined the coursework of several disciplines to create a graduate program in Structural Health Monitoring, focusing his research on structural dynamics, nonlinear vibrations, time-series modeling, fiber-optic sensor-system design and noise propagation modeling. A standout project of recent research was the demonstration of a remotely-powered and accessed sensor network for civil infrastructure assessment with unmanned aerial systems (UASs).
I2: Can you tell us about your affiliation with the Los Alamos National Laboratory and the focus of that research?
Todd: We have a formal institute with space at both Los Alamos and UCSD. I’m the campus director, and Los Alamos staff member Chuck Farrar is the lab-side director. UCSD rents space in a research-park building outside the classified work area there. It supports a lot of student research and development projects for graduate education. The focus of that research is in structural health monitoring, damage prognosis and validated simulation technologies.
I2: What are some areas of study in terms of courses and curriculum?
Todd: We have the only degree program in the entire United States that spans these three technology areas. It’s a specialized MS/Ph.D. graduate program, and it’s a truly multidisciplinary degree. Our students take equal numbers of courses in structural engineering, mechanical/aerospace engineering, computer science and electrical engineering.
To solve the types of structural health monitoring problems and create smart structures, you need knowledge in all those domains. Students take classes to know the mechanics and physics, data analytics and sensing technologies, and then integrate the technologies to make a useful solution.
People in this program will take courses in basic structural material behavior, dynamics and vibration. They take courses in electrical engineering and signal processing and sensing techniques. They take courses in computer science, data mining, statistical techniques, pattern recognition, image processing and software design. They take classes in mechanical engineering and thermal/mechanical properties of structures and failure mechanisms.
The coursework is designed to give students a real sense of how everything ties together, so they know how to design a monitoring solution.
I2: Is there a pipeline of jobs for your students?
Todd: We’ve never had a student come out of our program and not find a job. We started the program formally in 2005 with Los Alamos after a pilot start in 2003. We’ve graduated more than 50 students in that time, and about 20 of them have ended up with jobs at Los Alamos. It’s a win for them as a recruitment and retention strategy, because it’s hard for them to recruit in a remote area and find the right skill sets.
Everybody wins. The students get exposure to real-world and pretty cool problems in grad school. They come to San Diego and do their degree and get exposed to the multi-physics, multi-engineering and multidisciplinary program. We pay them to spend some time interning at Los Alamos in their summers. Then they get hired back if they’re successful.
Los Alamos has been very good at recruiting and exciting young scientists and engineers. We’ve had students also go to the automotive industry, aerospace, startups and a few academics.
I2: Is the idea of a multidisciplinary approach taking hold?
Todd: The thing that we tapped into early, and more employers are telling us, is that while there are specialist jobs out there, employers really want a multidisciplinary problem solver. In the real world, problems are always solved on multidisciplinary teams. They’re complex, and you have to be able to speak the language of all the subdisciplines to understand the big problems.
Traditionally a Ph.D. is a deep dive in a narrow topic. We, amongst others, are trying to change that paradigm of graduate education. We explain to our students that they still have to demonstrate expertise in an area, but they really have to be solving system-wide problems. That’s what our program does, and I think employers appreciate that.
I2: Could you walk us through the definition of structural health monitoring?
Todd: The analogy to keep in mind for structural health monitoring is the medical profession. When you or I are sick, we go to the doctor, who performs tests to provide a diagnosis of what’s wrong. The idea here is that we’re trying to develop technology solutions to measure things on structures (buildings, bridges, aircraft, automobiles, etc.) and to infer from the data, using analytic techniques, whether they’re successfully performing their design functions.
We also look to diagnose damage or faults and defects that cause the structure to not meet its design goals or perform in the way we want.
Structural health monitoring even borrowed the term “health monitoring,” as we’re trying to get to the health of the structure. It’s a complicated field, as it’s essentially a giant detective problem. You have a limited dataset. There’s no such thing as a sensor that tells you exactly the damage or where it is. Sensors give you indirect clues, so we have to develop the data-interrogation algorithms that know how to mine data to look for subtle changes that reflect different target damage that we’re interested in.
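The "detective problem" Todd describes — mining indirect sensor clues for subtle changes — can be sketched as a simple novelty-detection check. This is an illustrative example with made-up feature values (not the program's actual algorithms): features extracted from a healthy structure's vibration data define a baseline cloud, and a Mahalanobis distance flags measurements that drift away from it.

```python
import numpy as np

def mahalanobis_score(baseline: np.ndarray, test: np.ndarray) -> float:
    """Distance of a test feature vector from the baseline feature cloud."""
    mu = baseline.mean(axis=0)
    cov = np.cov(baseline, rowvar=False)
    diff = test - mu
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

rng = np.random.default_rng(0)
# Baseline: features (e.g., modal frequencies in Hz) from the healthy structure.
baseline = rng.normal([2.1, 5.7, 9.4], 0.05, size=(200, 3))
healthy = np.array([2.12, 5.68, 9.41])
# Damage often shows up indirectly, e.g., small frequency drops from stiffness loss.
damaged = np.array([2.00, 5.45, 9.10])

threshold = 5.0  # in practice, set from the baseline score distribution
print(mahalanobis_score(baseline, healthy) < threshold)  # healthy stays below
print(mahalanobis_score(baseline, damaged) > threshold)  # damaged scores high
```

The key point matches the interview: no sensor reports "damage" directly; the algorithm infers it from a statistical change in features.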
I2: Are you doing a lot of real-time monitoring of conditions?
Todd: Health monitoring, if done right, can do a great job of giving you the current state of your structure. Damage prognosis feeds that information into physics-based models and predicts how the damage will evolve, so when it reaches a critical point, we can do predictive planning: when to do an inspection, when to take something out of service, when to do maintenance, or “look out, it’s about to fail.”
That’s what damage prognosis is, and that’s a lot more high-level computing, running detailed physics-based models of failure processes, system behavior, dynamics and that kind of thing. There’s also a lot of statistics, because we don’t know all the conditions a structure will see in the future, so we have to describe them probabilistically. We have to model the probable load state in the future, so it becomes a very interesting physics/statistics problem.
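The physics/statistics combination Todd mentions can be sketched with a toy Monte Carlo prognosis: future load amplitudes are uncertain, so they are drawn from a probability distribution and pushed through a simple fatigue-life model. The S-N constants and load distribution below are made up for illustration, not real material data.

```python
import numpy as np

rng = np.random.default_rng(1)

def cycles_to_failure(stress_mpa: np.ndarray) -> np.ndarray:
    """Toy S-N fatigue curve: N = C / S^m (illustrative constants only)."""
    C, m = 1e12, 3.0
    return C / stress_mpa**m

# The future load state is uncertain -> describe it probabilistically.
n_scenarios = 10_000
stress = rng.lognormal(mean=np.log(80), sigma=0.15, size=n_scenarios)  # MPa

life_cycles = cycles_to_failure(stress)

# Prognosis reports a distribution, e.g., a conservative 5th-percentile life,
# rather than a single deterministic number.
p5 = np.percentile(life_cycles, 5)
print(f"median life: {np.median(life_cycles):.3g} cycles, 5th pct: {p5:.3g}")
```

The conservative percentile is what would drive predictive planning: schedule the inspection or maintenance before the low-probability early-failure scenarios become plausible.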
I2: Can you explain validated simulation and how it relates to the other two disciplines?
Todd: Validated simulation is the capability to simulate the future. To have confidence in your models, you have to make sure they are validated against reality. There’s a whole field devoted to making sure models and test data match. That’s an important component of this as well.
There’s certainly value to where we are now, but the Holy Grail is to understand where we’re going to be in the future. When do we expect, with quantified certainty, that something critical will happen to the structure? When will we have a failure, or when will we have to do maintenance, or when do we have to mitigate the problem? Nobody is really doing that now. We’re trying to develop capabilities to match this vision.
I2: Does some of this work relate to seismic and other big events?
Todd: What makes structural health monitoring and prognosis challenging is that the things that cause damage or problems in structures span a vast array of time scales. There’s general aging: things wear out slowly over time, and a bridge slowly gets fatigued as traffic knocks it up and down a bit. That’s part of its design; we fully expect that.
Tracking, diagnosing and predicting when that reaches a critical level is very different from responding to a major event. Often, on top of normal service loads, structures will be subjected to natural or manmade disasters like seismic events or blasts (such as in a terrorist attack).
After such an event, the idea is to use the same general class of technology to rapidly assess whether the damage is critical. Obviously, if the structure falls down, we don’t need much sensing to know it won’t work. A lot of times, there is damage that isn’t critical or that can be repaired fairly quickly.
We did work with UASs that was specifically applied to post-seismic assessment. If a moderate earthquake hits a bridge or building, clearly it isn’t destroyed if it hasn’t collapsed, but that doesn’t mean it’s safe. We don’t necessarily want to send inspectors out there.
We demonstrated the use of simple sensors that don’t need any power on a test bridge in New Mexico. They rely on mechanical deformation, and they can hold their shape, so if you fly a UAS out there, you can see what the maximum load states were. You can then model that and make decisions about whether the bridge is safe or not. The whole point of our demonstration was to illustrate how you can do this all wirelessly and without putting a human in harm’s way.
The sensors we used involved RFID tags, the same basic technology we use when we zip through the FastPass lanes on toll roads. The tag helps you understand which sensor is which, the standard way tags are used to track packages and other things. In addition, we had capacitive sensors whose capacitance changes as they deform. With an earthquake that causes the bridge to bend and twist, those sensors hold the maximum deformation they saw during the event, recorded as a capacitance value. The UAS can then fly out there, read the RFID tag via a cellphone-type antenna, and relate the location to the sensor reading. The capacitor value is proportional to the load the sensor saw. With all these readings, you then get a picture of the bridge’s performance.
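The readout Todd describes — capacitance proportional to the peak load the sensor experienced — amounts to inverting a calibration curve for each tagged sensor. The sketch below assumes a hypothetical linear calibration and made-up tag IDs and readings; the actual sensor hardware and constants are not given in the interview.

```python
# Hypothetical calibration: capacitance grows linearly with peak strain.
def peak_strain(cap_pf: float, cap_rest_pf: float = 10.0,
                pf_per_microstrain: float = 0.002) -> float:
    """Invert the assumed linear capacitance-strain relation (microstrain)."""
    return (cap_pf - cap_rest_pf) / pf_per_microstrain

# Readings the UAS might collect, keyed by RFID tag ID (made-up values, pF).
readings = {"TAG-014": 10.4, "TAG-015": 11.1, "TAG-016": 10.1}
for tag, cap in sorted(readings.items()):
    print(f"{tag}: peak ~{peak_strain(cap):.0f} microstrain")
# TAG-014: peak ~200 microstrain
# TAG-015: peak ~550 microstrain
# TAG-016: peak ~50 microstrain
```

Because the tag IDs map to known positions on the bridge, assembling these per-sensor peaks gives the spatial "picture of the bridge's performance" that feeds the safety decision.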
I2: Do the time-series sensing/modeling and the validated simulation go hand in hand?
Todd: With a model, the saying goes “garbage in, garbage out.” You have to make sure your model reflects the physics of reality. You can do things like measure time series in the field or measure other properties from our sensors in the field. You then use that information to tune and calibrate our physics-based models of how the structure behaves under different loads and how the failure mechanism is working. This gives us a digital twin, if you will, of what the structure’s real behavior is.
We want to have a “cyber model” that resides in cyberspace and acts as a surrogate for the real structure. It’s a model of how the structure behaves, and we can go in and put any load on it and see how our real structure is expected to act. This information helps us design and do optimal monitoring to make long-term predictive planning.
I2: Do you use LiDAR and other reality-capture means to create the digital twins of the structures?
Todd: Any data stream can work; it just depends on what you want your cyber model to do. Field measurements range from acceleration, strain, temperature and pressure to things like non-contact LiDAR laser capture. Some or all could be used if you have access to such data streams, and all of them help define the conditions against which you want your model validated.
If you make a LiDAR measurement of displacement, for example, you then use your model to predict the same data you just measured. You can tweak your model parameters so the outputs from the model match what you measured.
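The parameter-tweaking Todd describes is, at its simplest, a least-squares calibration: adjust a model parameter until predicted responses match field measurements. The sketch below calibrates a single stiffness in a linear load-displacement model with made-up measurement data; real model updating involves many parameters and full finite-element models.

```python
import numpy as np

# Field data: applied loads (kN) and measured displacements (mm), made up.
F = np.array([10.0, 20.0, 30.0, 40.0])
u_meas = np.array([0.52, 0.98, 1.55, 2.01])

# Model: u = F / k. Fit the compliance c = 1/k by linear least squares.
c = (F @ u_meas) / (F @ F)
k = 1.0 / c
u_pred = F / k

rms_error = np.sqrt(np.mean((u_pred - u_meas) ** 2))
print(f"calibrated k ≈ {k:.1f} kN/mm, RMS misfit {rms_error:.3f} mm")
```

Once calibrated this way, the model becomes the "digital twin" surrogate: you can apply loads to it in cyberspace and trust that its response tracks the real structure's.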
I2: So the ultimate goal is to be able to quantify many different inputs with a tie to performance?
Todd: The term we use is “Total Ownership Cost Reduction.” We want to minimize the cost of owning and operating a structure. This strategy of monitoring and predictive action planning minimizes ownership costs. You’re going to reduce unnecessary maintenance, reduce downtime, and you may even generate a life-safety advantage by preventing or warning about catastrophic failure.
All of those things are massive cost drains on any structure. The goal is to minimize cost and risk.
I2: The idea of a smart structure; is that an ability to sense as well as learn?
Todd: I would say it’s the ability to sense and reason. It’s the ultimate realization of the technology we’re talking about. It’s not necessarily taking the human out of the loop, but it would be very helpful for the engineers who maintain structures, or for owners, to have the bridge tell them when it’s sick.
You can imagine the bridge learning through sensor data, data mining and analytics, and then alerting engineers to a problem and its most likely cause. To me that’s truly a smart structure: when the patient we’re trying to diagnose calls the doctor; in this case, the engineer.
I2: How are we progressing against the goal of smart structures?
Todd: I think some of the Asian countries are starting to get ahead of America in terms of putting their money where their mouth is. America hasn’t invested, despite the Minneapolis bridge collapse and the generally sorry state of our infrastructure.
The Chinese and the South Koreans actually mandate that new construction has to contain monitoring technology. That’s a big step forward, to actually mandate the function and recognize that smart structures are the way it should be. Requiring designers of buildings and bridges and other civil infrastructure to install the technology drives innovation.
These countries will catch up and pass us if the United States doesn’t make an investment. We are a fundamentally reactive culture when it comes to infrastructure and asset management in general. We wait for something terrible to happen rather than being proactive.