American and Chinese Scientists Call for International Cooperation on A.I. Safety


As regulatory regimes for artificial intelligence and machine learning begin to take shape at the state, national, and bloc (i.e., the EU) levels, leading artificial intelligence and machine learning scientists have become increasingly vocal in calling for international cooperation on A.I. safety. One prominent effort in this vein is the International Dialogue on AI Safety (“IDAIS”), a collaborative project between the nonprofit FAR.AI and the Berggruen Institute.  The IDAIS, which includes experts from the United States and the People’s Republic of China, as well as representatives from Canada, the U.K., and the E.U., recently convened its third series of international dialogues in Venice in early September.

As the culmination of this round of discussions, the IDAIS issued a joint statement extolling the importance of collaboration and urging countries to work together to ensure the safe development and deployment of A.I., in order to avoid the possibility of “catastrophic risks” to humanity from the quickly advancing technology.  From the group’s statement: “Deep and foundational research needs to be conducted to guarantee the safety of advanced AI systems. This work must begin swiftly to ensure they are developed and validated prior to the advent of advanced AIs. To enable this, we call on states to carve out AI safety as a cooperative area of academic and technical activity, distinct from broader geostrategic competition on development of AI capabilities.”

Importantly, the IDAIS statement also specifically calls for the creation of national A.I. safety authorities, as well as a corresponding international authority to help coordinate safety activities among participating member countries.  The IDAIS suggests that such an international regulatory scheme could “ensure states adopt and implement a minimal set of effective safety preparedness measures, including model registration, disclosure, and tripwires,” as well as foster the implementation and enforcement of an internationally recognized set of A.I. safety standards. 

The IDAIS’s full statement is available here.  

This concept of an internationally recognized A.I. safety organization empowered to enforce certain standards (in concert with the organization’s nation-level counterparts) is still in its nascent stages, but it is especially noteworthy given the participation of prominent A.I.-industry voices from both the United States and China, which are widely seen as the geopolitical leaders competing to produce advanced “frontier”-level A.I.  Accordingly, in a period of increased tension between the United States and China, the IDAIS stands out as a bright spot of potential international cooperation, and underscores the seriousness with which A.I. thought leaders in both countries regard the potential for “catastrophic” risk associated with A.I.’s rapidly increasing capabilities.

Ben Mishkin and Daniel Kilburn are attorneys in Cozen O’Connor’s Technology, Privacy & Data Security practice.

