Text Tokenizer

Creator
Seonglae Cho
Created
2023 Mar 26 4:39
Edited
2024 Nov 26 12:46
Refs
According to Karpathy, all of the problems of LLMs ultimately trace back to tokenization
Scaling the token set on large datasets with a simple algorithm like BPE works well even for large models, but the tokenizer's simplicity becomes a bottleneck: high-level tokens cause performance problems in complex reasoning and arithmetic, so joint optimization with the LLM via
Text Tokenizer Training
may be needed
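For reference, a minimal byte-level BPE training loop in the spirit of minbpe (a sketch, not the library's actual API): repeatedly count the most frequent consecutive pair of token ids and merge it into a new id until the target vocabulary size is reached.

```python
from collections import Counter

def get_pair_counts(ids):
    """Count occurrences of each consecutive token-id pair."""
    return Counter(zip(ids, ids[1:]))

def merge(ids, pair, new_id):
    """Replace every occurrence of `pair` in `ids` with `new_id`."""
    out, i = [], 0
    while i < len(ids):
        if i < len(ids) - 1 and (ids[i], ids[i + 1]) == pair:
            out.append(new_id)
            i += 2
        else:
            out.append(ids[i])
            i += 1
    return out

def train_bpe(text, vocab_size=300):
    ids = list(text.encode("utf-8"))        # start from raw bytes, ids 0..255
    merges = {}                              # (id, id) -> new token id
    for new_id in range(256, vocab_size):
        counts = get_pair_counts(ids)
        if not counts:
            break
        pair = counts.most_common(1)[0][0]   # most frequent consecutive pair
        ids = merge(ids, pair, new_id)
        merges[pair] = new_id
    print(f"compression: {len(text.encode('utf-8')) / len(ids):.2f}x")
    return merges
```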
Longest prefix matching
is the basis for encoding, so the light inference cost is a plus, but in some cases an encoding other than the greedy one, produced by a more complex algorithm, might be more fundamentally appropriate for language modeling
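As a rough illustration, greedy longest-prefix-match encoding against a fixed vocabulary looks like this (the vocab here is a toy example; real tokenizers add byte-level fallbacks and merge rules):

```python
def encode_longest_prefix(text, vocab):
    """Greedy encoding: always take the longest vocab entry that matches next."""
    max_len = max(len(t) for t in vocab)
    ids, i = [], 0
    while i < len(text):
        for j in range(min(len(text), i + max_len), i, -1):  # try longest match first
            piece = text[i:j]
            if piece in vocab:
                ids.append(vocab[piece])
                i = j
                break
        else:
            raise ValueError(f"no token covers {text[i]!r}")  # real tokenizers fall back to bytes
    return ids

# toy vocabulary: "unbelievable" -> ["un", "believ", "able"]
vocab = {"un": 0, "believ": 1, "able": 2, "a": 3, "b": 4, "e": 5,
         "i": 6, "l": 7, "n": 8, "u": 9, "v": 10}
print(encode_longest_prefix("unbelievable", vocab))  # [0, 1, 2]
```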
Text tokenizing treats the text corpus much like
Text Compression
in that the key is to encode it well, splitting it into repeated tokens that each carry as much meaning as possible
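A quick way to see this with OpenAI's tiktoken: common, meaningful chunks map to a single token id while rarer strings fragment into many (the exact splits depend on the vocabulary).

```python
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")   # GPT-4 vocabulary
for s in ["tokenization", " the", "안녕하세요"]:
    ids = enc.encode(s)
    print(repr(s), "->", ids, [enc.decode([i]) for i in ids])
```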
Text Tokenizer Notion
 
 
Text Tokenizer Libraries
 
 
 

Understand

Let's build the GPT Tokenizer
The Tokenizer is a necessary and pervasive component of Large Language Models (LLMs), where it translates between strings and tokens (text chunks). Tokenizers are a completely separate stage of the LLM pipeline: they have their own training sets, training algorithms (Byte Pair Encoding), and after training implement two fundamental functions: encode() from strings to tokens, and decode() back from tokens to strings. In this lecture we build from scratch the Tokenizer used in the GPT series from OpenAI. In the process, we will see that a lot of weird behaviors and problems of LLMs actually trace back to tokenization. We'll go through a number of these issues, discuss why tokenization is at fault, and why someone out there ideally finds a way to delete this stage entirely.

Chapters:
00:00:00 intro: Tokenization, GPT-2 paper, tokenization-related issues
00:05:50 tokenization by example in a Web UI (tiktokenizer)
00:14:56 strings in Python, Unicode code points
00:18:15 Unicode byte encodings, ASCII, UTF-8, UTF-16, UTF-32
00:22:47 daydreaming: deleting tokenization
00:23:50 Byte Pair Encoding (BPE) algorithm walkthrough
00:27:02 starting the implementation
00:28:35 counting consecutive pairs, finding most common pair
00:30:36 merging the most common pair
00:34:58 training the tokenizer: adding the while loop, compression ratio
00:39:20 tokenizer/LLM diagram: it is a completely separate stage
00:42:47 decoding tokens to strings
00:48:21 encoding strings to tokens
00:57:36 regex patterns to force splits across categories
01:11:38 tiktoken library intro, differences between GPT-2/GPT-4 regex
01:14:59 GPT-2 encoder.py released by OpenAI walkthrough
01:18:26 special tokens, tiktoken handling of, GPT-2/GPT-4 differences
01:25:28 minbpe exercise time! write your own GPT-4 tokenizer
01:28:42 sentencepiece library intro, used to train Llama 2 vocabulary
01:43:27 how to set vocabulary set? revisiting gpt.py transformer
01:48:11 training new tokens, example of prompt compression
01:49:58 multimodal [image, video, audio] tokenization with vector quantization
01:51:41 revisiting and explaining the quirks of LLM tokenization
02:10:20 final recommendations
02:12:50 ??? :)

Exercises:
- Advised flow: reference this document and try to implement the steps before I give away the partial solutions in the video. The full solutions if you're getting stuck are in the minbpe code https://github.com/karpathy/minbpe/blob/master/exercise.md

Links:
- Google colab for the video: https://colab.research.google.com/drive/1y0KnCFZvGVf_odSfcNAws6kcDD7HsI0L?usp=sharing
- GitHub repo for the video: minBPE https://github.com/karpathy/minbpe
- Playlist of the whole Zero to Hero series so far: https://www.youtube.com/watch?v=VMj-3S1tku0&list=PLAqhIrjkxbuWI23v9cThsA9GvCAUhRvKZ
- our Discord channel: https://discord.gg/3zy8kqD9Cp
- my Twitter: https://twitter.com/karpathy

Supplementary links:
- tiktokenizer https://tiktokenizer.vercel.app
- tiktoken from OpenAI: https://github.com/openai/tiktoken
- sentencepiece from Google https://github.com/google/sentencepiece
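The pre-tokenization regex mentioned in the chapters above is GPT-2's pattern from OpenAI's encoder.py; it forces splits across letter, number, punctuation, and whitespace categories before BPE runs (the third-party regex package is needed for the \p{...} classes):

```python
import regex as re  # third-party `regex` module, required for \p{L}/\p{N} classes

gpt2_pat = re.compile(
    r"""'s|'t|'re|'ve|'m|'ll|'d| ?\p{L}+| ?\p{N}+| ?[^\s\p{L}\p{N}]+|\s+(?!\S)|\s+"""
)
print(gpt2_pat.findall("Hello world123, how's it going?!"))
# ['Hello', ' world', '123', ',', ' how', "'s", ' it', ' going', '?!']
```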

Tokenizer Arena

When adding new tokens (not actually part of the tokenizer, but of the embedding layer), ways to initialize the new embedding (see the sketch below):
  • The mean of a combination of existing embeddings
  • A better approach: the context vector produced by the same model with its LM head detached
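A hedged sketch of both initializations, assuming a Hugging Face causal LM; "gpt2", "<my_domain_term>", and "my domain term" are placeholders, and it assumes the hidden size equals the embedding size (true for GPT-2-style models):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "gpt2"                                   # placeholder; any causal LM works similarly
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

new_token = "<my_domain_term>"                  # hypothetical new token
old_ids = tok.encode("my domain term")          # the existing subword pieces it stands for
tok.add_tokens([new_token])
model.resize_token_embeddings(len(tok))
emb = model.get_input_embeddings().weight       # (vocab_size, hidden_dim)
new_id = tok.convert_tokens_to_ids(new_token)

with torch.no_grad():
    # 1) mean of existing embeddings (here: the pieces the new token replaces)
    emb[new_id] = emb[old_ids].mean(dim=0)

    # 2) arguably better: the context vector from the same model with the LM head off,
    #    i.e. the last hidden state after reading the original text
    out = model(torch.tensor([old_ids]), output_hidden_states=True)
    emb[new_id] = out.hidden_states[-1][0, -1]
```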
 
 

Recommendations