
AI Can Help Disaster Relief If It Can Conquer 1 Hurdle: Bad Data

Until surprisingly recently, U.S. state and local governments had only a limited awareness of their populations and demographics: who lives where, and especially what their health needs might be, was often unclear. That gap becomes glaringly apparent when a weather disaster hits. When Hurricane Sandy pummeled the Northeast in 2012, followed by several other disasters in 2013, the Department of Health and Human Services’ Kristen Finne said, local municipalities came to her HHS agency, the Administration for Strategic Preparedness and Response, asking for data on citizens because hospitals and shelters were overrun with at-risk patients with high levels of need.

As a result, ASPR partnered with the Centers for Medicare & Medicaid Services to assess the 62 million people then on Medicare (a figure now closer to 66 million) and began mining that data.

“We started looking at algorithms and looking at predictive analytics to identify those individuals that might be [at-risk], then translate that data into public restricted and then secure types of data sets. So we have a public map and we put that out. That came out in 2015. But we also had [a map created] in the event of an individual disaster that would happen, if public health authorities ask for individual data, [so] they can actually go do search and rescue for those individuals,” Finne said.
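To make that concrete, here is a minimal sketch of how such a screening might split its output into a public aggregate map and a restricted individual-level file; the risk rule, field names and ZIP-level geography are illustrative assumptions, not ASPR’s actual methodology.

```python
# Illustrative sketch only: the claim fields, risk rule and geography level
# are hypothetical stand-ins, not ASPR's actual algorithms.
from dataclasses import dataclass


@dataclass
class Beneficiary:
    beneficiary_id: str          # hypothetical identifier
    zip_code: str
    oxygen_dependent: bool
    dialysis_dependent: bool
    recent_hospitalizations: int


def is_at_risk(b: Beneficiary) -> bool:
    """Stand-in rule for the kind of predictive screening Finne describes."""
    return b.oxygen_dependent or b.dialysis_dependent or b.recent_hospitalizations >= 2


def build_datasets(beneficiaries: list[Beneficiary]):
    """Split results into a public aggregate (counts by ZIP code) and a
    restricted individual-level list released only to public health
    authorities during a declared disaster."""
    at_risk = [b for b in beneficiaries if is_at_risk(b)]

    public_map: dict[str, int] = {}  # ZIP code -> count of at-risk residents
    for b in at_risk:
        public_map[b.zip_code] = public_map.get(b.zip_code, 0) + 1

    restricted = [(b.beneficiary_id, b.zip_code) for b in at_risk]
    return public_map, restricted
```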

Moderator Dr. Kimberly Elenberg

Finne spoke on a panel about artificial intelligence’s use in humanitarian assistance and disaster relief situations at the Potomac Officers Club’s 2024 Healthcare Summit on Dec. 11, expertly moderated by Dr. Kimberly Elenberg of ECS. Finne said that, although the aforementioned predictive analytics and algorithms constitute AI, the first proper encounter many state and local entities had with AI came a little later.

Don’t miss the Potomac Officers Club’s first GovCon networking event of the year: the 2025 Defense R&D Summit. We’ll be kicking things off with a bang on Jan. 23, hearing from a host of brilliant defense technologists about the tools currently being incubated at the Pentagon. Join the conversation with peers and competitors from across the industry, who will all be present.

Improving Data Transmission

Around 2018–2019, ASPR and HHS worked to solve the problem of stalled data transmission in disaster relief situations by implementing virtual assistants that could work over intermittent connections. Using virtual assistants like those pioneered by Amazon and Google, and with the help of a small contractor, they were able to cut transmission times to fractions of a second. (State and local officials viewed this with awe, she said, and it helped introduce the idea that such technology could be used for good.)

Similarly, fellow panelist Derrick Jastaad, an executive director of the Veterans Health Administration at the Department of Veterans Affairs, credited a telehealth company with which the VA collaborated during the Covid-19 pandemic. Together, they were able to build datasets around where there were available hospital beds—where there was “capacity” and “positive pressure”—so that the raw information the VA already had could become actionable.
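What “actionable” might look like in practice: the sketch below rolls facility-level bed reports up into a regional view of available capacity. The report fields and the rollup are assumptions for illustration, not the VA’s or its partner’s actual data model.

```python
# Hedged sketch: report fields and the capacity calculation are illustrative
# assumptions, not the VA's or its telehealth partner's actual data model.
from collections import defaultdict


def summarize_capacity(facility_reports: list[dict]) -> dict:
    """Roll facility-level bed reports up to a regional picture of available
    capacity. Each report is assumed to look like:
        {"region": "Region A", "staffed_beds": 120, "occupied_beds": 97}
    """
    summary = defaultdict(lambda: {"staffed": 0, "occupied": 0})
    for report in facility_reports:
        summary[report["region"]]["staffed"] += report["staffed_beds"]
        summary[report["region"]]["occupied"] += report["occupied_beds"]

    for counts in summary.values():
        counts["available"] = counts["staffed"] - counts["occupied"]
    return dict(summary)


# Two facilities in one region yield 33 available beds.
print(summarize_capacity([
    {"region": "Region A", "staffed_beds": 120, "occupied_beds": 97},
    {"region": "Region A", "staffed_beds": 80, "occupied_beds": 70},
]))
```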

Data Management

The proper maintenance of data is commonly cited as a core tenet of reliable and trustworthy AI, and the panelists at the Healthcare Summit echoed this idea. Vinay Malkani, CEO of Figure Eight Federal, posited that proper data sharing is as tough to find in the private sector as it is in government.

“Sharing of data, even at a large company, does not happen in the ways that we want it to happen. And often teams within a regular business or company use different tools, have different standards, have different policies because they are trying to get what they need done [and are] not necessarily looking at the overall group,” Malkani shared.

He went on to say that AI is something that has to continuously learn based on new information and learning sets. It can’t be “train[ed] once [and] deploy[ed] forever.”
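As a minimal illustration of that continuous-learning point, the sketch below folds each newly arriving batch of labeled data into an existing model using scikit-learn’s incremental API; the model choice and the synthetic data are assumptions for illustration, not anything Malkani described.

```python
# Sketch of incremental retraining rather than "train once, deploy forever."
# The synthetic data and SGDClassifier choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
model = SGDClassifier(loss="log_loss")
classes = np.array([0, 1])


def new_batch(n: int = 200):
    """Stand-in for newly arriving, labeled operational data."""
    X = rng.normal(size=(n, 4))
    y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
    return X, y


# Fold each new batch into the model instead of freezing it after one pass.
for _ in range(5):
    X, y = new_batch()
    model.partial_fit(X, y, classes=classes)  # incremental update
```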

Training AI With Representative Data

Derrick Jastaad

The panelists all agreed that AI should be trained on as diverse a range of datasets as possible, particularly when it comes to people and populations, noting that rural populations in particular often get overlooked. Carnegie Mellon University biomedical engineering scientist and professor Dr. John Galeotti underlined this idea when discussing situations where off-the-shelf datasets don’t exist.

“When we’re building custom datasets…we are very deliberate to be getting people from different countries, mannequins with different skin tones on them—child mannequins, adult mannequins—doing everything we could within the scope of what our budget and time would allow to create a very diverse set of as-real-as-possible datasets and using those to train the AI and then to test the AI,” Dr. Galeotti said.
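One way to make that deliberateness measurable is a simple coverage audit before training, sketched below; the attribute names and sample records are hypothetical, not Dr. Galeotti’s actual pipeline.

```python
# Hedged illustration of a dataset coverage check; the attributes and records
# are made up, not Dr. Galeotti's lab's data.
from collections import Counter


def coverage_report(samples: list[dict], attributes: list[str]) -> dict:
    """Tally samples by each attribute (e.g., skin tone, age group, country)
    so gaps in representation are visible before training and testing."""
    return {attr: Counter(s[attr] for s in samples) for attr in attributes}


samples = [
    {"skin_tone": "dark", "age_group": "adult", "country": "US"},
    {"skin_tone": "light", "age_group": "child", "country": "KE"},
    {"skin_tone": "medium", "age_group": "adult", "country": "IN"},
]
print(coverage_report(samples, ["skin_tone", "age_group", "country"]))
```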

However, he admitted that he’d “be lying if [he] said this is something [he] solved,” and that just using diverse datasets is the “easy answer.” Rather, he believes AI models should signal when they are less confident in an answer they produce, so that human involvement can be prompted and a more thorough, non-machine-led evaluation can take place.
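A minimal sketch of that deferral pattern follows, with an arbitrary confidence threshold standing in for whatever calibration a real system would require.

```python
# Illustrative only: the 0.85 threshold and triage labels are arbitrary choices.
def route_prediction(label: str, confidence: float, threshold: float = 0.85) -> dict:
    """Accept the model's answer only when it is confident enough;
    otherwise defer the decision to a human reviewer."""
    if confidence >= threshold:
        return {"decision": label, "source": "model"}
    return {"decision": "needs_human_review", "source": "human", "model_guess": label}


print(route_prediction("at_risk", 0.93))  # confident -> model decision stands
print(route_prediction("at_risk", 0.61))  # uncertain -> escalate to a person
```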

Dr. Elenberg reinforced the latter point when summarizing the main takeaways from the session: “We cannot move away from the human-AI collaboration.”

The 2025 Defense R&D Summit, from Potomac Officers Club, is fast approaching. Don’t miss this essential gathering for defense industrial base subject matter experts. Join us at the Hilton McLean in Virginia on Jan. 23!
