NOTES

Welcome to our LAION notes section! Here you will find quick overviews and works in progress covering recent research by our community.

Safety Review for LAION 5B

by: LAION.ai, 19 Dec, 2023


There have been reports in the press about the results of a research project at Stanford University, according to which the LAION-5B training set contains potentially illegal content in the form of CSAM. We would like to comment on this as follows: LAION is a non-profit organization that provides da...

Conditional Pretraining of Large Language Models

by: Rallio, 16 May, 2023


Introduction Large language models (LLMs), such as OpenAI's ChatGPT and similar chatbot products from other organizations, have recently gained widespread adoption. These models can extend text or respond to instructions in a natural and helpful manner. Despite the core technologies behind LLMs, nam...

A Call to Protect Open-Source AI in Europe

by: LAION.ai, 28 Apr, 2023


An Open Letter to the European Parliament: Protecting Open-Source AI for a Safe, Secure, and Sovereign Digital Future LAION, alongside prominent research institutions and developers, has penned an open letter to the European Parliament to express concerns about the draft AI Act's potential impact on...

Training a Binary Classifier to Distinguish Images Generated with Stable Diffusion (v1.4) from Real Ones

by: Christoph Schuhmann, Ilia Zaitsev, 12 Apr, 2023


We present the development and assessment of a binary classifier designed to distinguish between authentic images and images generated using Stable Diffusion (SD) v1.4. We will discuss the dataset employed, describe the model architecture, outline the training process, and present the results obtain...

General-GPT: Breaking the Modality Constraint

by: Shivaen Ramshetty and Christoph Schuhmann, 28 Mar, 2023


Introduction With the rapid rise of large language models and the broad utilization of their applications, most notably ChatGPT, there is a clear promise of more capable and useful AI models and systems. Often, such models are compared to humans using the Turing test or their performance o...