- The Office of Management and Budget is seeking outside comment on its draft implementation guidance for President Joe Biden’s executive order on AI, the agency said Wednesday.
- OMB’s implementation guidance calls for the head of each federal agency to designate a chief AI officer within 60 days after the memorandum is finalized, according to documentation released Wednesday.
- Agencies will also need to pay special attention to vendors offering AI solutions, ensuring procured AI has undergone adequate testing and carries appropriate safeguards. OMB is accepting outside comments until Dec. 5.
While the implementation guidance is only a draft and focuses on government use, it could ultimately benefit enterprises as providers adjust their offerings to comply with federal standards.
Before a federal agency acquires an AI solution, the guidance asks vendors to:
- Equip tools with the ability to reliably label or establish provenance for content generated or modified by AI.
- Provide documentation of models, data and systems.
- Undergo regular AI-performance evaluations.
The guidance requires agencies to take steps to ensure competition among vendors by promoting interoperability. Vendors will not be able to “inappropriately” favor their own products at the expense of competitors’ offerings, according to OMB.
Vice President Kamala Harris, in a bid to rally the international community, delivered a speech in London on the Biden-Harris administration’s vision for the future of governance on AI.
Ten philanthropic groups made an initial commitment of $200 million to a joint project to protect workers’ rights, advance transparency, prevent discrimination, drive innovation in the public interest, and build international rules and norms for the responsible use of AI, Harris said.
“In addition to our work with civil society, President Biden and I will continue to engage with the private companies who are building this technology,” Harris said.
The administration had extensive discussions with leading AI companies to establish a minimum baseline of responsible AI practices, which resulted in a set of voluntary commitments, from reporting vulnerabilities discovered in AI models to implementing certain security protocols, Harris said.
“As history has shown, in the absence of regulation and strong government oversight, some technology companies choose to prioritize profit over the wellbeing of their customers, the safety of our communities and the stability of our democracies,” Harris said.