NOTE: The views expressed here do not reflect those of my employer.
In October 2023, the White House issued an executive order on AI. Predictably, this brought a rush of commentators arguing over whether or not the government should be regulating AI, with all the familiar arguments. On one side, people argued that software is different, that government will hold back innovation, that government does not understand AI, and so on. On the other side were those who said this is essential, a much-needed step to ensure that AI does not cause harm. And of course there was some hyperbole in the mix too: that AI will somehow take over the world and poses an existential risk.
I am still not at the point where I can explicitly take a stand one way or the other. However, given all this activity across many countries to figure out some form of AI regulation, I believe it is better for people in the tech industry to at least be aware of what is being proposed and discussed. That way, we can contribute in a meaningful way, whether in support or in disagreement, instead of being passive outsiders watching the changes go forward.
Here are a few things that stood out for me in the fact sheet:
- The very first item is that the EO requires developers of foundation models that “pose a serious risk to national security, national economic security, or national public health and safety” to notify the federal government. The key thing that stands out here is how the determination that a model poses these threats will be made. The wording is so high level that it is hard to argue against it.
- Almost as if to answer the question raised by the first item, NIST is tasked with setting up “rigorous standards for extensive red-team testing”. The Department of Homeland Security (DHS) and the Department of Energy (DOE) are also tasked with applying strict standards for safe and secure AI applications. There is also an explicit call-out to address the risk of AI being used to develop harmful biological agents.
- These steps seem like standard government procedure for figuring out how a new technology will affect systems that are critical to national security. The guidance in the order is similar to how standards are established for nuclear power, aircraft, defense equipment, medicine, and biological research.
- In fact, I am not sure the risk that AI tools add in some of these industries is any different from the risks that already exist. This ties back to the importance of focusing on marginal risk in these scenarios.
- The net new item that caught my attention is the call-out on the need to safeguard against the risk of fraud using AI-generated content. As generative AI becomes more widely available, this is one of the biggest risks to society. The Department of Commerce has been tasked with developing guidelines for content authentication and watermarking; a toy sketch of the basic idea behind content authentication follows this list.
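The EO does not prescribe any mechanism, and real provenance schemes (statistical watermarks baked into a model's sampling step, or signed-metadata standards such as C2PA) are considerably more involved. But as a minimal sketch of one ingredient of content authentication, here is a toy example using only the Python standard library, in which a generator attaches a cryptographic tag to content it produces and a verifier later checks that the content has not been altered. The key, function names, and flow are all illustrative assumptions of mine, not anything specified in the order.

```python
# Toy illustration of content authentication: the generator tags content
# it produces, and anyone holding the key can later verify that the
# content has not been tampered with. Real schemes (C2PA manifests,
# statistical watermarks) are far more involved; this shows the bare idea.
import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical key held by the generator


def sign_content(content: bytes) -> str:
    """Return a provenance tag to attach to generated content."""
    return hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()


def verify_content(content: bytes, tag: str) -> bool:
    """Check that the tag matches the content (constant-time compare)."""
    expected = hmac.new(SECRET_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)


article = b"This paragraph was produced by a generative model."
tag = sign_content(article)
assert verify_content(article, tag)             # untampered content passes
assert not verify_content(article + b"!", tag)  # any edit breaks the tag
```

A shared-secret scheme like this only works if the verifier can be trusted with the key; a real deployment would use public-key signatures so anyone can verify a tag without being able to forge one. And none of this touches the harder problem the EO gestures at: marking content at generation time in a way that survives reformatting and editing.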
I’ll discuss a few more things related to safety, fairness, and privacy in a future post.