The tech industry is dealing with the implications of an executive order on AI signed by President Joe Biden Oct. 30.
The order aims to establish new standards for AI safety and security, while protecting the privacy of American citizens, promoting innovation and spurring development of responsible AI.
The order also shows how the federal government is promoting AI technology internally, said Forrester analyst Alla Valente.
"From the language of this EO, what's clear is that the federal government is now being mandated to leverage AI, and then use that AI to improve how it does everything it does," she said.
However, AI vendors serving both the private sector and the federal government should pay attention to the order, especially where it calls for standards in AI safety and security, Valente added.
The executive order discusses the need for new AI testing standards built on the National Institute of Standards and Technology's framework.
"What the executive order is hoping to do is identify some of the risks as early as possible," Valente said. If that's accomplished, risk and security management practices can be embedded earlier in the development cycle of the AI lifecycle, she added.
While the intent of the executive order is to create standards and safety guardrails around AI systems, the lack of actionable steps stood out to Gopi Polavarapu, chief solutions officer at Kore.ai.
"From a vendor perspective, it's a welcome governance that's coming from the government, but at the end of the day, we need to know what those standards are, how that's going to be enforced," Polavarapu said. Kore.ai is a startup vendor of conversational AI tools for enterprises.
Esther Ajao is a TechTarget news writer covering artificial intelligence software and systems. Shaun Sutner is senior news director for TechTarget Editorial's enterprise AI, business analytics, data management, customer experience and unified communications coverage areas.