AI remains vulnerable to cyber attacks, with few specifics addressed by governments

Adversaries could attack the AI of an autonomous vehicle directly, or by confusing the input such as obscuring a stop sign with graffiti. | By Dllu/Wikimedia Commons

The same qualities that make machine learning algorithms beneficial also make attacks on them harder to block or defend against, yet governments have not addressed the specifics of how they will manage the risks artificial intelligence presents, according to a report by the Brookings Institution.

Increasing reliance on AI for critical services makes those services inviting targets, and successful attacks on them can have even worse consequences, the report said.

The integrity of AI decision-making processes can be compromised so that results differ from what was designed or expected. An attacker might take direct control of the system to set its outputs and decisions, or use indirect means such as feeding bad input or training data to the AI model.

In the case of an autonomous vehicle, if gaining remote access to its software is too difficult, an adversary can instead attack its inputs, for example by defacing stop signs so the AI no longer identifies them correctly, a technique known as "adversarial machine learning." Researchers have found that changes to digital images too subtle for the human eye to notice are enough to make AI algorithms misidentify them, Brookings said.
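For readers curious what an adversarial machine learning attack looks like in code, the sketch below uses the widely studied fast gradient sign method to nudge an image just enough to change a classifier's prediction; the `model`, `image` and `label` names are placeholders assumed for illustration and are not drawn from the Brookings report.

```python
# Minimal sketch of a fast-gradient-sign (FGSM) adversarial perturbation.
# Assumes a hypothetical pretrained PyTorch classifier `model` and a
# correctly labeled input tensor `image` with class index tensor `label`.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Shift each pixel by +/- epsilon in the direction that increases the loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # For a small epsilon the change is invisible to a human viewer, yet it
    # can be enough to flip the model's prediction of the image.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()
```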

Instead of manipulating inputs at the moment of decision, adversaries can poison an AI model by slipping mislabeled or inaccurate data into the data used to train it.
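A minimal sketch of such data poisoning, using an invented synthetic dataset rather than anything from the report, is the label-flipping example below: the attacker changes a fraction of the training labels, and the resulting model learns a worse decision boundary.

```python
# Label-flipping data poisoning on a synthetic dataset (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 10))            # clean features
y = (X[:, 0] > 0).astype(int)              # true labels

# The attacker flips 10% of the labels before training.
poison_idx = rng.choice(len(y), size=100, replace=False)
y_poisoned = y.copy()
y_poisoned[poison_idx] = 1 - y_poisoned[poison_idx]

clean_model = LogisticRegression().fit(X, y)
poisoned_model = LogisticRegression().fit(X, y_poisoned)
print("accuracy of clean model:   ", clean_model.score(X, y))
print("accuracy of poisoned model:", poisoned_model.score(X, y))
```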

Close control over the inputs an AI model receives and over the datasets used to train it is important but difficult. AI developers cannot control whether street signs are obscured by graffiti. Training datasets often contain sensitive information, which creates further issues: there are trade-offs between how training is carried out and how much access developers have to those datasets, and adversaries may be able to infer individual training-data points from the model itself. Worse, securing a model against such inference attacks may leave it more open to adversarial machine learning attacks, and vice versa, Brookings said.
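The inference risk Brookings describes can be as simple as checking how confident a model is, since models are often more confident about records they were trained on. The sketch below is a hedged illustration of that idea, assuming a scikit-learn-style classifier; the function name and threshold are hypothetical.

```python
# Confidence-based membership-inference guess (illustrative, not from the report).
def likely_in_training_set(model, record, threshold=0.95):
    """Guess whether `record` was in the training set from the model's confidence.

    Assumes a scikit-learn-style classifier with predict_proba and a single
    record shaped as a 1-D feature array.
    """
    probs = model.predict_proba(record.reshape(1, -1))[0]
    # An overconfident prediction is a weak signal that the record was memorized.
    return probs.max() >= threshold
```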

Governments in 27 countries have developed AI plans or initiatives in the past four years, but few have focused on keeping AI systems secure. The plans fund research, worker training and innovation, and encourage economic growth. Even the governments that have considered AI security policies have done little to figure out how those policies would work.

In the United States, the National Security Commission on Artificial Intelligence proposes an extensive design documentation process and standards for AI models, but its recommendations do not explain how accountability or auditing would actually be accomplished.

“A 2018 report by a French parliamentary mission, titled ‘For a Meaningful Artificial Intelligence: Towards a French and European Strategy,’ offered similarly vague suggestions,” Brookings said.

The French report identified potential security threats but concluded that all that was needed was greater “collective awareness” and greater consideration of safety and security risks.

“For many governments, the next stage of considering AI security will require figuring out how to implement ideas of transparency, auditing, and accountability to effectively address the risks of insecure AI decision processes and model data leakage,” according to the Brookings report.

Documentation of how models are developed and of the results they produce is required to enable identification of vulnerabilities, manipulation of input or training data, and unexpected outputs.

Testing and auditing techniques are required, as are certification programs with guidance for AI users and developers.

Policymakers need to introduce accountability methods and liability regimes to govern AI in cases of security incidents, the report said. Developers whose AI systems are compromised, or who lack certifications, would be liable for the damages their shortcomings cause.
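As a purely illustrative example of the documentation requirement described above, a developer might record the provenance of a model's training data and the checks it passed alongside the model itself; the fields and values below are invented for this article and are not a standard mandated by Brookings or the commission.

```python
# Hypothetical model documentation record; field names and values are illustrative.
model_record = {
    "model_name": "stop-sign-classifier",
    "version": "1.0.0",
    "training_data": {
        "source": "internal-road-sign-dataset",      # where the data came from
        "labeling_process": "two independent human annotators",
    },
    "evaluation": {
        "held_out_accuracy": 0.97,                   # placeholder figure
        "adversarial_robustness_tested": True,       # e.g., against FGSM-style attacks
    },
    "known_limitations": ["accuracy degrades on occluded or defaced signs"],
}
```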

“The proliferation of AI systems in critical sectors—including transportation, health, law enforcement, and military technology—makes clear just how important it is for policymakers to take seriously the security of these systems,” according to Brookings.



