As new technologies enter the market, governments tend to regulate them for a few reasons:
- Ensure the benefits of the technology are widely distributed
- Ensure the risks of the technology are properly managed to reduce impact on society
- And the third, which is cynical but also true in many cases: regulations are pushed by incumbents seeking regulatory capture, i.e. to make it harder for new companies to challenge them.
The recent wave of AI regulations is largely positioned as addressing the first two reasons, although many commentators argue that it is also an attempt at regulatory capture by the leading AI companies. For the rest of this post, I'll stick to open foundation models, since both the regulations and the academic research on this topic are focused on them.
Open models provide many benefits over closed models. Giving the broader technology community access to model weights, and in some cases the model source code and documentation, spurs innovation in adapting these models to unique applications, because a much larger pool of people can build on them at low cost.
Since the release of ChatGPT, there has been a host of concerns about generative AI: disinformation, manipulation of political campaigns, cybersecurity risks, biosecurity risks, automated warfare, and so on. Amidst all these claims, the key question is: how do we evaluate these threats? Sayash et al. provide a simple framework, which largely reuses existing security and risk frameworks, to evaluate whether open foundation models pose any additional risk (marginal risk) beyond what is already present.
A few takeaways:
- With regard to societal benefits, open foundation models offer largely the same advantages as other new open technologies: they increase access for more people, encourage competition and innovation, and reduce concentration of value.
- With regard to risks, the encouraging sign is that we already have well-established frameworks from the general study of security and risk. For computing in particular, the field of cybersecurity has much to offer in how we manage these risks. Instead of readily succumbing to FUD scenarios, we can rationally assess the marginal risk of these models using those frameworks (a rough sketch of what such an assessment might look like follows below). As the biosecurity example shows, many of these risks have been around for a long time, and open LLMs are not meaningfully increasing the risk of attacks.
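To make the marginal-risk idea a bit more concrete, here is a minimal sketch in Python of how such an assessment could be recorded for a single threat. The field names and the biosecurity example values are my own illustrative paraphrase of the kind of questions a security-style risk framework asks; they are not the exact structure or conclusions of the paper.

```python
from dataclasses import dataclass, field


@dataclass
class MarginalRiskAssessment:
    """Sketch of a marginal-risk checklist for a single misuse threat.

    The fields loosely mirror the questions a security-style risk
    framework asks; this is an illustration, not the paper's schema.
    """
    threat: str                  # the misuse scenario being assessed
    baseline_risk: str           # how feasible is the threat today, without open models?
    existing_defenses: str       # defenses already in place against the baseline threat
    marginal_risk_evidence: str  # evidence that open models add risk beyond the baseline
    ease_of_defense: str         # how easily new defenses can counter the added risk
    uncertainties: list[str] = field(default_factory=list)  # open questions / assumptions

    def summary(self) -> str:
        return (
            f"Threat: {self.threat}\n"
            f"  Baseline risk:     {self.baseline_risk}\n"
            f"  Existing defenses: {self.existing_defenses}\n"
            f"  Marginal evidence: {self.marginal_risk_evidence}\n"
            f"  Ease of defense:   {self.ease_of_defense}\n"
            f"  Uncertainties:     {', '.join(self.uncertainties) or 'none listed'}"
        )


# Illustrative example only -- a paraphrase of the kind of argument made
# in the biosecurity discussion, not a quotation of the paper.
biosecurity = MarginalRiskAssessment(
    threat="Using an open LLM to obtain bioweapon synthesis instructions",
    baseline_risk="Much of this information is already accessible via textbooks and web search",
    existing_defenses="Controls on physical precursors, lab equipment, and synthesis services",
    marginal_risk_evidence="Little evidence that open models materially improve on existing access",
    ease_of_defense="Downstream (non-model) controls remain the main defensive layer",
    uncertainties=["capabilities of future model generations"],
)

print(biosecurity.summary())
```

The point of structuring it this way is that the baseline risk and the existing defenses are stated explicitly, so the contribution of open models is judged against them rather than assessed in a vacuum.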