The White House on Wednesday said that federal agencies have completed all the tasks they committed to completing in the first year of the Biden administration’s executive order on artificial intelligence.
The rapid development of generative AI systems over the past two years caught many federal agencies unprepared. The 2023 executive order aimed to modernize the government’s internal practices so that agencies could responsibly use AI systems, generative or otherwise, and regulate them in their respective domains.
A top priority of the executive order was establishing some government oversight of the tech companies developing the most powerful AI systems, and in its press release on Wednesday, the White House touted its success in that area. The Department of Commerce has signed agreements with OpenAI and Anthropic that grant the agency access to test the companies’ new models before their release. Under a newly proposed regulation, other developers would be required to report the results of their safety tests to the department on a quarterly basis.
That work is part of a broader effort within the Commerce Department, through its newly formed AI Safety Institute, to establish standard practices for testing models and mitigating their risks. At the moment, there is little consensus on when and how to evaluate emerging models, and practices vary by developer.
Other federal agencies have spent the past year creating new rules and guidance documents for organizations that are the end users of AI systems.
The Department of Health and Human Services began tracking instances of harmful AI use in health systems and finalized rules for using AI in clinical settings and for mitigating discrimination in healthcare algorithms.
The Department of Labor, Department of Education, and housing regulators also published AI toolkits and best practices for the organizations they regulate.
The White House said the federal government has hired more than 250 AI practitioners as part of a targeted “talent surge” and created a Chief AI Officers’ Council to coordinate and share resources between agencies.
Many of the most important goals laid out in the 2023 executive order, however, require long-term efforts.
In particular, a new policy from the Office of Management and Budget requires most agencies to implement strict policies for assessing, procuring, using, and monitoring their AI tools. Given the federal government’s purchasing power, effective and consistent implementation of those rules could significantly affect the quality of the AI systems developed for commercial markets.
Of course, those long-term plans could change based on the outcome of next month’s presidential election. Donald Trump has made killing the AI executive order part of his campaign platform.