
AI has already found its way into the utility toolbox, being deployed to help mitigate wildfires, conduct targeted vegetation management, and assist in rate case filings, to name a few applications. But alongside its benefits, AI poses security risks – something the U.S. federal government is now moving to address.
The U.S. Artificial Intelligence Safety Institute at the U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) has announced the formation of the Testing Risks of AI for National Security (TRAINS) Taskforce, which brings together partners from across the U.S. government to identify, measure, and manage the emerging national security and public safety implications of “rapidly evolving” AI technology.
The task force will research and test advanced AI models across critical national security and public safety domains, such as radiological and nuclear security, chemical and biological security, cybersecurity, critical infrastructure, conventional military capabilities, and more.
The TRAINS Taskforce is chaired by the U.S. AI Safety Institute and includes initial representation from the following federal agencies:
- The Department of Energy and ten of its National Laboratories
  - Argonne National Laboratory
  - Pacific Northwest National Laboratory
  - Lawrence Livermore National Laboratory
  - Sandia National Laboratories
  - Oak Ridge National Laboratory
  - Brookhaven National Laboratory
  - Savannah River National Laboratory
  - Lawrence Berkeley National Laboratory
  - Idaho National Laboratory
  - Los Alamos National Laboratory
- The Department of Defense, including the Chief Digital and Artificial Intelligence Office (CDAO) and the National Security Agency
- The Department of Homeland Security, including the Cybersecurity and Infrastructure Security Agency (CISA)
- The National Institutes of Health (NIH) at the Department of Health and Human Services
Each member will bring unique subject matter experience, technical infrastructure, and resources to the task force. They will collaborate on the development of new AI evaluation methods and benchmarks, as well as conduct joint national security risk assessments and red-teaming exercises.
“Enabling safe, secure, and trustworthy AI innovation is not just an economic priority – it’s a public safety and national security imperative,” said U.S. Secretary of Commerce Gina Raimondo. “Every corner of the country is impacted by the rapid progress in AI, which is why establishing the TRAINS Taskforce is such an important step to unite our federal resources and ensure we’re pulling every lever to address the challenges of this generation-defining technology. The U.S. AI Safety Institute will continue to lead by centralizing the top-notch national security and AI expertise that exists across government in order to harness the benefits of AI for the betterment of the American people and American business.”
The TRAINS Taskforce is expected to expand its membership across the federal government as its work continues.
AI will be a critical piece of the clean energy economy, according to a report released earlier this year by the Department of Energy and its six national laboratories. Last year, President Joe Biden issued an executive order calling for the agency to produce a public report “describing the potential for AI to improve planning, permitting, investment, and operations for electric grid infrastructure and to enable the provision of clean, affordable, reliable, resilient, and secure electric power to all Americans.”
Researchers identified four priority use cases to organize their findings: grid planning, permitting and siting, operations and reliability, and resilience. They identified three specific challenge areas where AI/ML can surpass the performance of human teams: (1) streamlining the licensing and regulatory process; (2) accelerating deployment; and (3) facilitating unattended operation.
Technology leaders are sprinting to serve seemingly every industry with AI-powered tools. The utility industry is a harder target, however: safety and reliability requirements leave little room to "fail fast," and regulatory scrutiny adds further constraints.
Jason Strautman, vice president for data science and analytics engineering for Oracle’s utility division, previously told POWERGRID that deploying AI with utilities presents unique challenges, where reliability is non-negotiable. “This is very different from other industries,” Strautman said, acknowledging that utilities face high stakes with every AI deployment.
However, nearly 90% of utilities view AI and machine learning as crucial for overcoming operational hurdles during the energy transition, according to a recent report, with safety, cybersecurity, and predictive maintenance as the top use cases. Nearly three-quarters of energy and utility companies have implemented or are exploring using AI in their operations, an IBM study found. And while utilities are often cast as laggards for their perceived slow embrace of emerging technology, an earlier IBM study found that energy and resources CEOs are embracing AI opportunities at a higher rate than their global peers.