NOTES
Conditional Pretraining of Large Language Models
by: Rallio, 16 May, 2023
Large language models (LLMs), such as OpenAI's ChatGPT and similar chatbot products from other organizations, have recently gained widespread adoption. These models can extend text or respond to instructions in a natural and helpful manner. Despite the core technologies behind LLMs, nam...
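The excerpt above is cut off before the method itself is described. Purely as an illustration of what conditional pretraining can look like, here is a minimal sketch in which human-readable condition tags are prepended to each training document and the model is trained with the ordinary causal language-modeling loss. The tag markers, tag names, and example data are hypothetical and not taken from the post.

```python
# Minimal sketch of conditional pretraining by prepending condition tags.
# Tag markers, tag names, and example data are hypothetical illustrations.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)

# Each raw document is paired with tags describing its style or content.
examples = [
    (["helpful", "instruction"], "Q: How do I boil an egg?\nA: Place the egg in boiling water ..."),
    (["casual", "forum"], "lol that movie was way better than i expected"),
]

def prepend_tags(tags, text):
    # Condition the LM objective on the tags by making them part of the input.
    return "<|conditions|> " + ", ".join(tags) + " <|endofconditions|>\n" + text

model.train()
for tags, text in examples:
    batch = tokenizer(prepend_tags(tags, text), return_tensors="pt", truncation=True)
    # Standard causal language-modeling loss over the tag-prefixed sequence.
    loss = model(**batch, labels=batch["input_ids"]).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
```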
A Call to Protect Open-Source AI in Europe
by: LAION.ai, 28 Apr, 2023
An Open Letter to the European Parliament: Protecting Open-Source AI for a Safe, Secure, and Sovereign Digital Future. LAION, alongside prominent research institutions and developers, has penned an open letter to the European Parliament to express concerns about the draft AI Act's potential impact on...
Training a Binary Classifier to Distinguish Images Generated with Stable Diffusion (v1.4) from Real Ones
by: Christoph Schuhmann, Ilia Zaitsev, 12 Apr, 2023
We present the development and assessment of a binary classifier designed to distinguish between authentic images and images generated using Stable Diffusion (SD) v1.4. We will discuss the dataset employed, describe the model architecture, outline the training process, and present the results obtain...
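As a rough sketch of the kind of setup such a detector involves, the snippet below fine-tunes a pretrained ResNet-18 as a two-class (real vs. generated) image classifier. The directory layout, backbone choice, and hyperparameters are illustrative assumptions, not the configuration actually used for the SD v1.4 classifier described in the post.

```python
# Illustrative sketch only: fine-tune a pretrained ResNet-18 to separate real images
# from Stable Diffusion outputs. Data layout, model, and hyperparameters are assumptions.
import torch
from torch import nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

# Expects data/train/real/*.jpg and data/train/generated/*.jpg (hypothetical layout).
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real vs. generated

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```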
General-GPT: Breaking the Modality Constraint
by: Shivaen Ramshetty and Christoph Schuhmann, 28 Mar, 2023
With the explosive growth of large language models and the applications built on them, most notably ChatGPT, there is a clear promise of more capable and useful AI models/systems. Often, such models are compared to humans using the Turing test or their performance o...