KL3M Dataset Paper Release

Over 132 million documents and more than 1.3 trillion tokens of copyright-clean data for training large language models


Professor Katz, Academic Director of the Center for Legal Technology and Data Science, co-authored a new paper entitled “The KL3M Data Project: Copyright-Clean Training Resources for Large Language Models.”

Practically all large language models (LLMs) have been pre-trained on data that is subject to global uncertainty related to copyright infringement and/or breach of contract. The KL3M Data Project directly confronts this critical issue by introducing the largest comprehensive training data pipeline for LLMs that minimizes risks related to copyright infringement or breach of contract. The foundation of this project is a corpus of over 132 million documents and trillions of tokens spanning 16 different sources, each verified under a strict copyright and licensing protocol. Source material is drawn from several jurisdictions, including the US, UK, and EU.

The release features the entire data processing pipeline, including 1) the source code to acquire and process these documents, 2) the original document formats with associated provenance and metadata, 3) extracted content in a standardized format, 4) pre-tokenized representations of the documents, and 5) various mid- and post-training resources such as question-answer, summarization, conversion, drafting, classification, prediction, and conversational data. All of these resources are freely available to the public on S3, Hugging Face, and GitHub under CC-BY terms.
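To illustrate what a release like this typically looks like in practice, the sketch below works through a document record carrying text, provenance, and pre-tokenized content in JSON Lines form, a common interchange format for large document corpora. The field names and values here are purely illustrative assumptions, not the KL3M project's actual schema.

```python
import json

# Hypothetical KL3M-style document record. Every field name and value
# below is an illustrative assumption, not the project's real schema.
record = {
    "identifier": "us/example-source/doc-00001",  # provenance identifier (hypothetical)
    "source": "example-source",                   # one of the verified sources (hypothetical)
    "license": "public-domain",                   # result of the licensing verification (hypothetical)
    "text": "Example extracted content in a standardized format.",
    "tokens": [101, 2742, 102],                   # pre-tokenized representation (illustrative IDs)
}

# Serialize one record per line (JSON Lines), then parse it back,
# as a consumer of the extracted-content artifacts might.
line = json.dumps(record)
parsed = json.loads(line)
print(parsed["source"], len(parsed["tokens"]))
```

A consumer would stream such lines one at a time rather than loading a multi-terabyte corpus into memory; the per-line structure is what makes that streaming pattern straightforward.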

Access the paper
Data on Hugging Face