Byte Pair Encoding

In information theory, byte pair encoding (BPE) or digram coding is a simple form of data compression in which the most common pair of consecutive bytes of data is replaced with a byte that does not occur within that data. Wikipedia walks through a good example of applying BPE to a single string.
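As an illustrative sketch of the compression variant (not taken from any particular implementation), the idea fits in a few lines: repeatedly find the most frequent pair of adjacent bytes and replace it with a byte that does not occur in the data, recording the replacements so the process can be reversed.

```python
from collections import Counter

def bpe_compress(data: bytes):
    """Repeatedly replace the most frequent adjacent byte pair
    with a byte that does not occur in the data."""
    present = set(data)
    unused = [b for b in range(256) if b not in present]
    table = {}          # replacement byte -> the pair it stands for
    data = list(data)
    while unused:
        pairs = Counter(zip(data, data[1:]))
        if not pairs:
            break
        pair, freq = pairs.most_common(1)[0]
        if freq < 2:
            break       # no pair occurs more than once: stop
        new = unused.pop()
        table[new] = pair
        # rewrite the data, replacing non-overlapping occurrences of the pair
        out, i = [], 0
        while i < len(data):
            if i + 1 < len(data) and (data[i], data[i + 1]) == pair:
                out.append(new)
                i += 2
            else:
                out.append(data[i])
                i += 1
        data = out
    return bytes(data), table

compressed, table = bpe_compress(b"aaabdaaabac")
```

On the string "aaabdaaabac" from the Wikipedia article this compresses 11 bytes down to 5; expanding the replacements in reverse order recovers the original data.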

This technique is also employed in natural language processing models, such as GPT-2, to tokenize word sequences. Why not simply tokenize a text by splitting on spaces? Well, not every language uses spaces. And even for a language that does, space-based tokenization may not be a good idea:

  • If we tokenize a text with heuristic rules (such as the presence of a space), some NLP tasks will produce OOV (out-of-vocabulary) words. For instance, BPE is used for translation tasks precisely because it avoids OOV words (you will understand how if you read on).
  • If we use a tokenization technique that is based on whole words rather than their subunits, we cannot capture the similarity between pairs such as strong -> stronger and weak -> weaker.
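To make the last point concrete, here is a toy illustration (the subword vocabulary and the greedy matcher below are hypothetical, just for demonstration): with a subword vocabulary, related word forms end up sharing tokens.

```python
def greedy_segment(word, vocab):
    """Greedily match the longest known subword from the left.
    Purely illustrative -- real BPE applies learned merge rules instead."""
    tokens = []
    while word:
        for end in range(len(word), 0, -1):
            # fall back to a single character if no subword matches
            if word[:end] in vocab or end == 1:
                tokens.append(word[:end])
                word = word[end:]
                break
    return tokens

vocab = {"strong", "weak", "er"}  # hypothetical subword vocabulary
print(greedy_segment("stronger", vocab))  # ['strong', 'er']
print(greedy_segment("weaker", vocab))    # ['weak', 'er']
```

Both comparatives share the subword "er", so the tokenizer exposes the morphological relationship that a whole-word vocabulary would hide.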

Altogether, BPE is used for tokenization in NLP tasks by building up from the smallest elements of the text, that is, the individual characters that construct it. Let’s see how BPE tokenization is actually implemented!

BPE tokenization was proposed by Sennrich et al. (2015). The workflow of the BPE tokenization algorithm is described below:

  1. First, we define a large corpus to train our tokenizer and a maximum vocabulary size.
  2. Next, we append a word-end token </w> to each word (in place of the space that followed it). This special symbol marks that the subword/character it is attached to ends a word. We also compute the frequency of each word.
  3. Count all pairs of adjacent symbols and merge the most frequent pair into a new subword; add this subword to the vocabulary.
  4. Repeat step 3 until reaching the maximum subword vocabulary size (defined in step 1) or until the most frequent pair occurs only once.

 

In [1]:
from collections import Counter

text = """
        Lay lady lay, 
        Lay across my big brass bed
        Lay lady lay
        Lay across my big brass bed
        """

# split the text into words, space-separate each word's characters,
# and append the word-end token </w>
words = [' '.join(word) + ' </w>' for word in text.split()]
vocab = dict(Counter(words))
print(vocab)
{'L a y </w>': 4, 'l a d y </w>': 2, 'l a y , </w>': 1, 'a c r o s s </w>': 2, 'm y </w>': 2, 'b i g </w>': 2, 'b r a s s </w>': 2, 'b e d </w>': 2, 'l a y </w>': 1}
In [2]:
import re, collections

def get_vocab(filename):
    vocab = collections.defaultdict(int)
    with open(filename, 'r', encoding='utf-8') as fhand:
        for line in fhand:
            words = line.strip().split()
            for word in words:
                vocab[' '.join(list(word)) + ' </w>'] += 1
    return vocab

def get_stats(vocab):
    pairs = collections.defaultdict(int)
    for word, freq in vocab.items():
        symbols = word.split()
        for i in range(len(symbols)-1):
            pairs[symbols[i],symbols[i+1]] += freq
    return pairs

def merge_vocabulary(pair, v_in):
    v_out = {}
    bigram = re.escape(' '.join(pair))
    p = re.compile(r'(?<!\S)' + bigram + r'(?!\S)')
    for word in v_in:
        w_out = p.sub(''.join(pair), word)
        v_out[w_out] = v_in[word]
    return v_out

def get_tokens(vocab):
    tokens = collections.defaultdict(int)
    for word, freq in vocab.items():
        word_tokens = word.split()
        for token in word_tokens:
            tokens[token] += freq
    return tokens

vocab = {'L a y </w>': 4, 'l a d y </w>': 2, 'l a y , </w>': 1, 'a c r o s s </w>': 2, 'm y </w>': 2, 'b i g </w>': 2, 'b r a s s </w>': 2, 'b e d </w>': 2, 'l a y </w>': 1}

print('----------------')
print('Tokens Before BPE')
tokens = get_tokens(vocab)
print('Tokens: {}'.format(tokens))
print('# of tokens: {}'.format(len(tokens)))
print('----------------')

num_merges = 10
for i in range(num_merges):
    pairs = get_stats(vocab)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)
    vocab = merge_vocabulary(best, vocab)
    print('Iter: {}'.format(i))
    print('Best pair: {}'.format(best))
    tokens = get_tokens(vocab)
    print('Tokens: {}'.format(tokens))
    print('# of tokens: {}'.format(len(tokens)))
    print('----------------')

----------------
Tokens Before BPE
Tokens: defaultdict(<class 'int'>, {'L': 4, 'a': 12, 'y': 10, '</w>': 18, 'l': 4, 'd': 4, ',': 1, 'c': 2, 'r': 4, 'o': 2, 's': 8, 'm': 2, 'b': 6, 'i': 2, 'g': 2, 'e': 2})
# of tokens: 16
----------------
Iter: 0
Best pair: ('y', '</w>')
Tokens: defaultdict(<class 'int'>, {'L': 4, 'a': 12, 'y</w>': 9, 'l': 4, 'd': 4, 'y': 1, ',': 1, '</w>': 9, 'c': 2, 'r': 4, 'o': 2, 's': 8, 'm': 2, 'b': 6, 'i': 2, 'g': 2, 'e': 2})
# of tokens: 17
----------------
Iter: 1
Best pair: ('a', 'y</w>')
Tokens: defaultdict(<class 'int'>, {'L': 4, 'ay</w>': 5, 'l': 4, 'a': 7, 'd': 4, 'y</w>': 4, 'y': 1, ',': 1, '</w>': 9, 'c': 2, 'r': 4, 'o': 2, 's': 8, 'm': 2, 'b': 6, 'i': 2, 'g': 2, 'e': 2})
# of tokens: 18
----------------
Iter: 2
Best pair: ('L', 'ay</w>')
Tokens: defaultdict(<class 'int'>, {'Lay</w>': 4, 'l': 4, 'a': 7, 'd': 4, 'y</w>': 4, 'y': 1, ',': 1, '</w>': 9, 'c': 2, 'r': 4, 'o': 2, 's': 8, 'm': 2, 'b': 6, 'i': 2, 'g': 2, 'e': 2, 'ay</w>': 1})
# of tokens: 18
----------------
Iter: 3
Best pair: ('s', 's')
Tokens: defaultdict(<class 'int'>, {'Lay</w>': 4, 'l': 4, 'a': 7, 'd': 4, 'y</w>': 4, 'y': 1, ',': 1, '</w>': 9, 'c': 2, 'r': 4, 'o': 2, 'ss': 4, 'm': 2, 'b': 6, 'i': 2, 'g': 2, 'e': 2, 'ay</w>': 1})
# of tokens: 18
----------------
Iter: 4
Best pair: ('ss', '</w>')
Tokens: defaultdict(<class 'int'>, {'Lay</w>': 4, 'l': 4, 'a': 7, 'd': 4, 'y</w>': 4, 'y': 1, ',': 1, '</w>': 5, 'c': 2, 'r': 4, 'o': 2, 'ss</w>': 4, 'm': 2, 'b': 6, 'i': 2, 'g': 2, 'e': 2, 'ay</w>': 1})
# of tokens: 18
----------------
Iter: 5
Best pair: ('l', 'a')
Tokens: defaultdict(<class 'int'>, {'Lay</w>': 4, 'la': 3, 'd': 4, 'y</w>': 4, 'y': 1, ',': 1, '</w>': 5, 'a': 4, 'c': 2, 'r': 4, 'o': 2, 'ss</w>': 4, 'm': 2, 'b': 6, 'i': 2, 'g': 2, 'e': 2, 'l': 1, 'ay</w>': 1})
# of tokens: 19
----------------
Iter: 6
Best pair: ('la', 'd')
Tokens: defaultdict(<class 'int'>, {'Lay</w>': 4, 'lad': 2, 'y</w>': 4, 'la': 1, 'y': 1, ',': 1, '</w>': 5, 'a': 4, 'c': 2, 'r': 4, 'o': 2, 'ss</w>': 4, 'm': 2, 'b': 6, 'i': 2, 'g': 2, 'e': 2, 'd': 2, 'l': 1, 'ay</w>': 1})
# of tokens: 20
----------------
Iter: 7
Best pair: ('lad', 'y</w>')
Tokens: defaultdict(<class 'int'>, {'Lay</w>': 4, 'lady</w>': 2, 'la': 1, 'y': 1, ',': 1, '</w>': 5, 'a': 4, 'c': 2, 'r': 4, 'o': 2, 'ss</w>': 4, 'm': 2, 'y</w>': 2, 'b': 6, 'i': 2, 'g': 2, 'e': 2, 'd': 2, 'l': 1, 'ay</w>': 1})
# of tokens: 20
----------------
Iter: 8
Best pair: ('a', 'c')
Tokens: defaultdict(<class 'int'>, {'Lay</w>': 4, 'lady</w>': 2, 'la': 1, 'y': 1, ',': 1, '</w>': 5, 'ac': 2, 'r': 4, 'o': 2, 'ss</w>': 4, 'm': 2, 'y</w>': 2, 'b': 6, 'i': 2, 'g': 2, 'a': 2, 'e': 2, 'd': 2, 'l': 1, 'ay</w>': 1})
# of tokens: 20
----------------
Iter: 9
Best pair: ('ac', 'r')
Tokens: defaultdict(<class 'int'>, {'Lay</w>': 4, 'lady</w>': 2, 'la': 1, 'y': 1, ',': 1, '</w>': 5, 'acr': 2, 'o': 2, 'ss</w>': 4, 'm': 2, 'y</w>': 2, 'b': 6, 'i': 2, 'g': 2, 'r': 2, 'a': 2, 'e': 2, 'd': 2, 'l': 1, 'ay</w>': 1})
# of tokens: 20
----------------
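Once the merges are learned, the tokenizer can segment unseen words by applying the merges, in the order they were learned, to each new word. Here is a minimal sketch (the short merge list below is a hypothetical example, not a full learned ruleset):

```python
def apply_merges(word, merges):
    """Apply learned BPE merges, in training order, to a single word."""
    symbols = list(word) + ['</w>']
    for a, b in merges:
        # rewrite the symbol list, merging each occurrence of the pair (a, b)
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == (a, b):
                out.append(a + b)
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        symbols = out
    return symbols

# a hypothetical, shortened merge list in learned order
merges = [('y', '</w>'), ('a', 'y</w>'), ('l', 'a')]
print(apply_merges('lay', merges))    # ['l', 'ay</w>']
print(apply_merges('delay', merges))  # ['d', 'e', 'l', 'ay</w>']
```

Note how "delay", a word never seen during training, still gets segmented into known subwords rather than becoming an OOV token; this is why BPE is attractive for translation.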
I hope it helps 🙂 Until next time!
