PETITION TO MAINTAIN THE PACE OF AI RESEARCH WHILE SECURING ITS TRANSPARENCY AND SAFETY

by: LAION.ai, 29 Mar, 2023


LINK TO OUR PETITION

Authors: Christoph Schuhmann, Huu Nguyen, Robert Kaczmarczyk, Jenia Jitsev & LAION community

Securing Our Digital Future: Calling for a CERN-like international organization to transparently coordinate and advance large-scale AI research and its safety

In an era of unparalleled technological advancements, humanity stands on the precipice of a new epoch characterized by the profound influence of artificial intelligence (AI) and its foundational models, such as GPT-4. The potential applications of these technologies are vast, spanning scientific research, education, governance, and small and medium-sized enterprises. To harness their full potential as tools for societal betterment, it is vital to democratize research on and access to them, lest we face severe repercussions for our collective future.

Dominance of a few large corporations in AI development

Increasingly, we are witnessing the emergence of a system wherein educational institutions, government agencies, and entire nations become dependent on the AI technology of a select few large corporations that operate with little transparency or public accountability. To secure our society's technological independence, foster innovation, and safeguard the democratic principles that underpin our way of life, we must act now. We call upon the global community, particularly the European Union, the United States, the United Kingdom, Canada, Australia, and other willing countries, to collaborate on a monumental initiative: the establishment of an international, publicly funded, open-source supercomputing research facility. Analogous to CERN in scale and impact, this facility should house a diverse array of machines equipped with at least 100,000 high-performance, state-of-the-art accelerators (GPUs or ASICs), operated by experts from the machine learning and supercomputing research community and overseen by democratically elected institutions in the participating nations.

Multimodal future

This ambitious endeavor will provide a platform for researchers and institutions worldwide to access and refine advanced AI models, such as GPT-4, harnessing their capabilities for the greater good. By making these models open source and incorporating multimodal data (audio, video, text, and program code), we can significantly enrich academic research, enhance transparency, and ensure data security. Furthermore, granting researchers access to the underlying training data will enable them to understand precisely what these models learn and how they function, something that is impossible when access is restricted to an API (the sketch at the end of this section illustrates the difference). The open-source nature of this project will also promote safety and security research, allowing potential risks to be identified and addressed more rapidly and transparently by the academic community and open-source enthusiasts. This is a vital step in ensuring the safety and reliability of AI technologies as they become increasingly integrated into our lives.

The proposed facility should feature AI Safety research labs with well-defined security levels, akin to those used in biological research labs, where high-risk developments can be studied by internationally renowned experts in the field, backed by regulations from democratic institutions. The results of such safety research should be transparent and available to the research community and society at large. These labs should be capable of designing timely countermeasures by studying developments that, according to broad scientific consensus, would predictably have a significant negative impact on our societies.
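To make the transparency argument concrete, here is a minimal illustrative sketch, assuming the open-source Hugging Face transformers library and an openly released checkpoint (gpt2 as a stand-in for any open model; the prompt and model choice are our own placeholders). It shows inspections that open weights permit but a closed API does not:

# Illustrative sketch: what open model weights allow that a closed API does not.
# Assumes the open-source Hugging Face `transformers` library; "gpt2" is a
# placeholder for any openly released checkpoint.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# 1) Inspect the architecture and parameter count directly.
n_params = sum(p.numel() for p in model.parameters())
print(f"{model_name}: {n_params / 1e6:.1f}M parameters, {model.config.n_layer} layers")

# 2) Look inside the forward pass: per-layer attention patterns for an input.
inputs = tokenizer("Open models permit direct inspection.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs, output_attentions=True)
# outputs.attentions is a tuple of per-layer tensors with shape
# (batch, heads, seq_len, seq_len) -- raw internals an API never exposes.
print(outputs.attentions[0].shape)

Parameter counts, layer structure, and attention tensors are exactly the kind of internals that remain invisible behind an API endpoint, and they are the raw material of the interpretability and safety research described above.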

Economic impact

Economically, this initiative will bring substantial benefits to small and medium-sized companies worldwide. By providing access to large foundation models, businesses can fine-tune these models for their specific use cases while retaining full control over the weights and data (the sketch at the end of this section shows what this looks like in practice). This approach will also appeal to government institutions seeking transparency and control over AI applications in their operations.

The importance of this endeavor cannot be overstated. We must act swiftly to secure the independence of academia and government institutions from the technological monopoly that large corporations hold over AI research. Technologies like GPT-4 are too powerful and significant to be exclusively controlled by a select few. In a world where machine learning expertise and the resources for AI development are increasingly concentrated in large corporations, smaller enterprises, academic institutions, municipal administrations, social organizations, and nation-states must assert their autonomy rather than rely solely on the benevolence of these powerful entities, which are often driven by short-term profit interests and act without properly including democratic institutions in their decision-making. We must take immediate and decisive action to secure the technological independence of our society, nurturing innovation while ensuring the safety of these developments and protecting the democratic principles that form the foundation of our way of life.
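As a hedged sketch of the fine-tuning claim above, assuming the open-source Hugging Face transformers and datasets libraries (the model name, data file, and hyperparameters are placeholders, not recommendations), a company could adapt an open checkpoint on its own text without any data leaving its hardware:

# Minimal fine-tuning sketch (illustrative only; model name, data path, and
# hyperparameters are placeholder assumptions). Requires the open-source
# `transformers` and `datasets` libraries.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "gpt2"  # stand-in for any openly released checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token  # gpt2 defines no pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# Company-private text stays on company hardware: load from a local file.
dataset = load_dataset("text", data_files={"train": "company_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset["train"].map(tokenize, batched=True, remove_columns=["text"])

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model",
                           num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=tokenized,
    # mlm=False means causal language modeling: labels are the input tokens.
    data_collator=DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=False),
)
trainer.train()
trainer.save_model("finetuned-model")  # adapted weights stay under local control

Because both the corpus and the resulting weights remain on local storage, the organization, not an external API provider, retains full control over its data and its model.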

Safety measures

The recent proposal to decelerate AI research as a means of ensuring safety and progress is understandable but untenable, and would be detrimental to both objectives. Under such a slowdown, corporate or state actors would continue making advances in the dark, while the public research community's ability to thoroughly scrutinize the safety of advanced AI systems would be curtailed. Rather than impeding the momentum of public AI development, a more judicious and effective approach is to foster a better-organized, transparent, safety-aware, and collaborative research environment. Transparent, open-source AI safety labs tied to the international large-scale AI research facility described above, staffed by qualified AI safety experts, equipped with corresponding publicly funded compute resources, and operating under regulations issued by democratic institutions, would cover the safety aspect without dampening progress. By embracing this cooperative framework, we can ensure both progress and the responsible development of AI technology, safeguarding the well-being of our society and the integrity of democratic values.

What you can do

We urge you to join us in this crucial campaign. Sign this petition and make your voice heard. Our collective digital future, the autonomy of our academic research, and the equilibrium of our global economy depend on our ability to act quickly and decisively. Together, we can build a future where advanced AI technologies are accessible to all, and where innovation and progress are not confined within the walls of a few powerful corporations. Let us seize this opportunity and build a brighter future for generations to come.