Open Models: The Focus of AI Governance

Every new technology ushers in both excitement and concern, and the latest wave of AI innovation built on large language models is no exception. The leap in capability demonstrated by these models caught most people, even those working in the field of AI, by surprise. Many tasks that call for human-like reasoning, and that were previously thought to be very difficult for machines, are now handled with ease by these Generative AI models. Well-publicized examples include ChatGPT passing the bar exam and the MCAT.

Governments across the globe have rushed to assess the impact of this new Generative AI technology on society. The US and the European Union have already issued regulatory frameworks that seek to govern the technology, with the hope of maximizing its benefits and minimizing harm to society.

At the heart of these regulations, there is a particular focus on “open foundation models”. It is important to note that “open models” are not the same as “open source models”. The technology community is by now very familiar with open source software, where an application’s entire code base is made public, typically along with documentation and, where applicable, datasets. With respect to GenAI and LLMs, however, it is open models, rather than open-source models, that matter most to the community.

A recent academic paper, “On the Societal Impact of Open Foundation Models”, does a great job of laying the groundwork for defining the term “open models”. It lays out five criteria that make a model “open” (and not “open source”); a brief illustration of criteria 1 and 3 follows the list:

  1. The weights of the model should be made public.
  2. The model source code and the data used to train the model need not be made public.
  3. The model should be widely available.
  4. The model need not be released in stages.
  5. The model may have use restrictions.
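
To make the first and third criteria concrete, here is a minimal sketch of what “publicly released, widely available weights” looks like in practice. It assumes the weights are hosted on the Hugging Face Hub; the repository id ("gpt2") and the weights filename are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch (assumptions, not from the paper): fetching the publicly
# released weights of an open model from the Hugging Face Hub. The repo id
# "gpt2" and the filename "model.safetensors" are example choices; any
# open-weights model is obtained the same way.
from huggingface_hub import hf_hub_download

# Criteria 1 and 3 in practice: the weights are public and widely available,
# so anyone can download them directly, without permission from the developer.
weights_path = hf_hub_download(repo_id="gpt2", filename="model.safetensors")
print(f"Open model weights downloaded to: {weights_path}")
```

Note what the criteria do not require: the training code and data behind those weights can remain private (criterion 2), and the download may still be governed by a license with use restrictions (criterion 5).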

Open models are considered to pose a greater risk to society because a closed model can, in theory, block access by malicious users and thereby limit potential harm. Once a model is released in the open, no one retains control over it: there is no central authority that can dictate what an individual user does with the model once they have obtained it. Hence the regulatory frameworks focus primarily on open models.

