The Trump administration has announced its long-awaited national policy framework for artificial intelligence, a set of guidelines for Congress on how to regulate the emerging technology. While it was released as a three-page document, it probably could have fit on a Post-It Note.
The framework offers some broad-stroke guidance for lawmakers, encouraging Congress to implement laws to accomplish goals like protecting minors and combating censorship. Those recommendations are in line with the kind of tech industry-friendly policies already being pursued, which makes sense given how much money the big players in the space have spent lobbying and sucking up to the administration.
For instance, Trump called on Congress to introduce “age assurance requirements” for AI, similar to proposed laws like the Kids Online Safety Act, which would implement similar standards on social media platforms. The framework also encourages Congress to establish ways for rights holders to license their material to AI companies for training models and reproduction—though it states “Any such legislation, however, should not address when or whether such licensing is required,” because the administration “believes that training of AI models on copyrighted material does not violate copyright laws.”
As expected, the administration called for its preferred laws to take precedence over states that have already passed more comprehensive laws governing AI. “Congress should preempt state AI laws that impose undue burdens to ensure a minimally burdensome national standard consistent with these recommendations, not fifty discordant ones,” the framework reads, arguing that “Preemption must ensure that State laws do not govern areas better suited to the Federal Government or act contrary to the United States’ national strategy to achieve global AI dominance.”
Tucked in at the very end of the framework is a recommendation that reads like Section 230 for AI companies. “States should not be permitted to penalize AI developers for a third party’s unlawful conduct involving their models,” it states. The inclusion of this from Trump is interesting, given his past disdain for Section 230 of the Communications Decency Act, which spares sites like Reddit and Facebook from legal liability for things posted on their platforms. The idea that AI companies aren’t responsible for the outputs of their models could potentially shield them from facing consequences for misinformation or outputs like non-consensual sexually explicit material, though the Trump administration’s proposal seems more focused on keeping states from carrying out enforcement actions than on providing blanket protection for AI companies.
Time will tell whether Trump’s policy framework actually goes anywhere. He previously backed a 10-year moratorium that would have prevented states from establishing their own AI laws, and it was roundly shot down by everyone, including most Republicans. This framework is likely to have more support, but it’s far from a sure thing that it’ll get picked up by his party’s members of Congress, many of whom have their own policy proposals.
