TOWARDS A TRANSPARENT AI FUTURE: THE CALL FOR FEWER REGULATORY HURDLES ON OPEN-SOURCE AI IN EUROPE
by: LAION, 21 Sep, 2023
Following our previous open letter to the European Parliament on the significance of open-source AI, we at LAION, backed by the European Laboratory for Learning and Intelligent Systems (ELLIS) and a long list of highly influential AI researchers, submit this new open letter to the European Parliament:
Link to the PDF
Why Open-Source is the Gold Standard for AI Security
The transparency of open-source AI is its strength. It ensures robustness and security unmatched by closed systems. Why? Open-source AI benefits from the scrutiny of the global community, allowing vulnerabilities to be detected and fixed promptly. Drawing parallels, we can look at the Linux operating system—a paragon of security and robustness stemming from its open-source nature.
Countering Redundancy and Upholding Sustainability
With the environmental toll of extensive AI training becoming a major concern, open-source models have shown a clear path forward. By minimizing redundant training, they reduce computational and energy overheads, reflecting a commitment to a sustainable future.
Ensuring Scientific Reproducibility
Reproducibility and validation are key to scientific integrity and progress. Open-source AI models offer full transparency, allowing diverse research groups to independently verify results and claimed functionality. Unlike closed-source alternatives, open-source foundations guarantee stringent standards for the machine learning and AI field. With these open-source foundation models rigorously tested by a vast expert community, AI applications in sectors from healthcare to finance can build on a trusted, scientifically validated base.
A Catalyst for Innovation
Open-source AI has been instrumental in levelling the playing field. Small and mid-sized enterprises can now fine-tune existing models, fostering innovation without the daunting costs of building from scratch. If Europe's ambition is to retain its brightest minds, ensuring uninterrupted access to these resources is non-negotiable.
Regulating Application, Not Innovation
The clarion call from LAION and its supporters is clear—focus regulations on AI's applications, not the foundational technology. By doing so, the EU will nurture innovation while ensuring that AI's real-world applications are ethical, safe, and in line with European values.
Incentivizing the Open-Source Paradigm
Perhaps the most potent recommendation in this new letter is the incentivization of open-source AI. It's a win-win. Organizations can release foundational models as open-source, maintaining proprietary rights on fine-tuned versions. This ensures that the broader community benefits from the base models while commercial competitiveness remains intact.
The European AI Path Forward
European sovereignty in AI is crucial, and open-source AI research is key to addressing challenges ranging from healthcare to climate change. The future, as outlined in the letter, imagines a Europe at the forefront of AI research, one that champions transparency, security, and sustainability.
Supporters
Name | Description |
---|---|
Board of the European Laboratory for Learning and Intelligent Systems (ELLIS): Serge Belongie, Nicolò Cesa-Bianchi, Florence d'Alché-Buc, Nada Lavrac, Neil D. Lawrence, Nuria Oliver, Bernhard Schölkopf, Josef Sivic, Sepp Hochreiter | European Lab for Learning & Intelligent Systems (ellis.eu) |
Yann André LeCun | Chief AI Scientist at Facebook & Silver Professor at the Courant Institute, New York University |
Jürgen Schmidhuber | Scientific Director of the Swiss AI Lab IDSIA (USI & SUPSI), Co-Founder & Chief Scientist of NNAISENSE, Father of Modern AI |
Kristian Kersting | Full Professor at Technical University of Darmstadt, Co-Director, Hessian Center for AI (hessian.AI) and member of the German Center for Artificial Intelligence (DFKI) |
Björn Ommer | Full professor and head of the Computer Vision & Learning Group at the Ludwig-Maximilians-University of Munich |
Hilde Kuehne | Professor, Institute for Computer Science II, Head of Multimodal Learning, University of Bonn |
Mira Mezini | Professor of Computer Science at Technical University of Darmstadt, Co-Director of Hessian Center for AI (hessian.AI) |
Patrick Schramowski | Senior Researcher at the German Center for Artificial Intelligence (DFKI) and Hessian Center for AI (hessian.AI) |
Jenia Jitsev | Expert in multi-modal foundation models and datasets. LAION core member and contributor. Member of OpenBioML. Researcher at Helmholtz Juelich Supercomputing Center, Germany. |
Marianna Nezhurina | Senior Researcher and Lab Lead at Juelich Supercomputing Center, Helmholtz Research Center Juelich. Scientific Lead and Co-Founder at LAION; Member of European Laboratory for Learning and Intelligent Systems (ELLIS) |
Dominik L. Michels | Full Professor of Intelligent Algorithms in Modeling and Simulation at the Technical University of Darmstadt |
Tim Dettmers | PhD Student at The University of Washington. Creator of the bitsandbytes library. |
Mark Schutera | PhD Student at Karlsruhe Institute of Technology, working on Unsupervised Deep Learning for Cognitive Perception Systems |
Andreas Hochlehnert | PhD Student, University of Tübingen, International Max-Planck Research School for Intelligent Systems (IMPRS-IS) |
Irina Rish | Full Professor at the Université de Montréal, a core member of Mila - Quebec AI Institute. Canada Excellence Research Chair (CERC) in Autonomous AI and CIFAR Chair. PI on a collaborative INCITE project on the Summit supercomputer at OLCF (supported by the U.S. DoE, Office of Science), aiming to build open-source large-scale language and multimodal models (e.g., RedPajama-INCITE was trained as a part of this project). |
Huu Nguyen | Former big-law partner, CEO and co-founder of Ontocord.AI, LAION volunteer, co-author of Data Governance in the Age of Large-Scale Data-Driven Language Technology, FAccT ’22, and co-author of resolution 112 of the ABA on encouraging lawyers to understand the risks and benefits of AI. |
David Ha | Co-Founder and CEO of sakana.ai |
Hessie Jones | Writer, Forbes, Data Privacy, Ethical AI Practitioner, Advocating for Human-centred AI and Ethical Distribution of AI Systems, BOA Women in AI Ethics, Cofounder MyData Canada, Cofounding Member Personally Identifiable Information Standards Architecture (PIISA); former COO Beacon Trust Network, BOD Technology for Good Canada |
Sampo Pyysalo | Research Fellow, University of Turku, co-lead TurkuNLP research group, Principal Investigator, High-Performance Language Models (Horizon EU project), leading multiple efforts to create very large open models. |
Wolfgang Stille | Chief Technical Officer of the Hessian Center for AI (hessian.AI) and project lead of the AI Innovation Lab and the AI Service Center hessian.AISC. He has been involved with digital research infrastructure and open science for many years and was a leading participant in the process of establishing a digital research data culture at Hessian universities. |
Christoph Schuhmann | Organizational Lead & Co-Founder of the Large-scale AI Open Network (LAION), NeurIPS 2022 Outstanding Paper Award & Falling Walls Breakthrough of the Year 2023 Award Winner |
Robert Kaczmarczyk | Medical Lead & Co-Founder of the Large-scale AI Open Network (LAION), NeurIPS 2022 Outstanding Paper Award & Falling Walls Breakthrough of the Year 2023 Award Winner |