AI Self-Regulatory Landscape


Every few weeks, and sometimes more frequently than that, there seems to be another major advancement in AI technology: OpenAI announcing GPT-4o, Apple bringing AI to the iPhone, and tools that can generate TV-show-length content from a sentence or two, to name a few. This swift pace of progress has generated both excitement and concern, and lawmakers around the world are working to determine the best way to legislate and regulate the rapidly evolving field. There is, however, another mode of regulation already helping to shape the development of AI: self-regulation by the AI companies themselves, as well as by AI industry groups.

The companies behind most, if not all, of the most advanced “frontier” AI models have already made voluntary commitments related to safe and responsible AI development. Microsoft, for example, released its Responsible AI Standard, and Google published its AI Principles. Anthropic, the company behind the Claude model, went so far as to poll 1,000 Americans to determine which values and guardrails mattered to the U.S. public, and developed an “AI Constitution” based on those findings. While the specific approaches adopted by each developer and group differ, they share common guiding principles, including transparency, accountability, safety, reliability, and the identification and reduction of bias.

Many AI companies are also looking to the future with their self-regulatory efforts, crafting standards to ensure that, as models grow larger and become capable of increasingly complex tasks, safety measures scale accordingly to address the correspondingly higher levels of risk, a concept known as “Responsible Scaling.” OpenAI, perhaps the most well-known player in the AI space, committed in 2023 to addressing a problem it called “Superalignment,” an effort intended to provide a safety framework for so-called artificial superintelligence (ASI). However, in a move that underscores the tenuous nature of these self-regulatory efforts, OpenAI dissolved its Superalignment team less than a year later and appeared to walk back commitments regarding how much of the company’s computing resources would be dedicated to safety and alignment research.

Many major AI companies are also participating in industry groups that foster collective self-regulatory agreements. One example is the Frontier Model Forum (“FMF”), whose founding members include Google, Anthropic, Microsoft, and OpenAI. FMF helps to promote and organize “cross-organizational discussions and actions on AI safety and responsibility.” Many of those same companies came together in May 2024 at the Seoul AI Safety Summit to commit to an even wider-ranging slate of guidelines and guardrails, including a set of “kill switch” parameters under which they would cease development of their respective models if they were unable to mitigate the risks of continuing to do so.

Even if governmental regulation is inevitable, some degree of self-regulation now may be desirable and beneficial, both for the businesses developing and deploying AI models and for society as a whole. Self-regulation is inherently flexible: rather than facing penalties for technical violations, an AI company can readily adopt new protections to address emerging risks while retiring measures that have become outdated. This flexibility also allows AI companies to implement self-regulatory measures far faster than even the most agile governmental regulator, enabling potentially significant safety protections to take effect sooner. That not only benefits the public but also helps AI companies build trust. As certain frameworks become more widely accepted, the costs of both offering and procuring models could decrease: currently, businesses looking to responsibly acquire an AI-powered tool must conduct extensive due diligence before purchase, which adds significant transaction costs for buyers and sellers alike. Trusted AI self-regulatory frameworks could streamline this process and give comfort to businesses that need assurances about the safety of the AI products they are purchasing.

This is not to say that self-regulatory efforts will always succeed; some are bound to come up short, with OpenAI’s “Superalignment” effort being the most prominent example to date. Even these shortcomings, however, may prove a net positive in the long term by helping to inform future regulatory efforts and providing evidence of what works and what does not. Just as the states can function as “laboratories of democracy” for federal legislators, so too can current industry self-regulatory efforts inform and improve future attempts at AI self-regulation.

Ultimately, some form of governmental AI regulation in the United States seems all but inevitable, whether an expanding patchwork of state laws or an overarching federal law. That does not diminish the immediate and ongoing importance of the largest and most advanced AI companies voluntarily implementing guardrails and guidelines. These efforts not only help protect the public in the short term, but also build trust in the technology and inform future regulatory efforts.

