The objective of this article is to understand the concepts on which the Transformer architecture (Vaswani et al.) is based.
If you want a general overview of the paper you can check the summary.

Here I’m going to present a summary of:

Byte Pair Encoding

Context

This is an algorithm to define the tokens for which we’re going to learn vector embeddings. The simplest way to do this is to consider each word and punctuation mark in the text as a token. The problem with this approach is that at test time we won’t have an embedding for words we didn’t see before. Some research has successfully used characters as tokens (Kim et al. 2016, Costa-Jussa et al. 2016). Byte pair encoding sits in between these two techniques.

The algorithm

The motivation behind the algorithm is to define a set of tokens that can be used to construct any word, but that also contains the most typical words. This way we can learn good representations for the most common terms while remaining flexible and keeping some knowledge for unknown words. The algorithm is as follows:

  1. We start the token set with each of the possible characters plus an end-of-word character.
  2. We determine the number of merges that we want to do.
  3. For every merge, we count the occurrences of each pair of adjacent tokens in our corpus, and we add the most frequent pair (its concatenation) as a new string, therefore adding 1 new token to our set.

With this, the size of the vocabulary = the number of merges + the number of different characters + 1 (the end-of-word character). So if we define the number of merges as $\infty$ (or just large enough), then the vocabulary ends up being all the possible characters plus all the different words of the corpus.
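To make the merge loop concrete, here is a minimal Python sketch of the procedure. The toy corpus, the function names and the `</w>` end-of-word marker are illustrative choices, not something prescribed by the algorithm:

```python
import re
import collections

def get_pair_counts(corpus):
    """Count how often each pair of adjacent tokens appears, weighted by word frequency."""
    pairs = collections.Counter()
    for word, freq in corpus.items():
        tokens = word.split()
        for pair in zip(tokens, tokens[1:]):
            pairs[pair] += freq
    return pairs

def apply_merge(pair, corpus):
    """Replace every occurrence of `pair` by its concatenation (one new token)."""
    pattern = re.compile(r"(?<!\S)" + re.escape(" ".join(pair)) + r"(?!\S)")
    return {pattern.sub("".join(pair), word): freq for word, freq in corpus.items()}

# Toy corpus: every word is a space-separated sequence of characters plus an
# end-of-word marker "</w>", mapped to its frequency in the text.
corpus = {"l o w </w>": 5, "l o w e r </w>": 2, "n e w e s t </w>": 6, "w i d e s t </w>": 3}

num_merges = 10  # hyperparameter: each merge adds exactly one token to the vocabulary
for _ in range(num_merges):
    pairs = get_pair_counts(corpus)
    if not pairs:
        break
    best = max(pairs, key=pairs.get)   # the most frequent adjacent pair
    corpus = apply_merge(best, corpus)
    print("new token:", "".join(best))
```

Each iteration adds exactly one token (the concatenation of the most frequent adjacent pair), so after `num_merges` iterations the vocabulary has grown by exactly that amount, matching the formula above.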

Beam Search

Context

There are different algorithms to decode the final output sequence. This is because our model outputs a probability distribution over our vocabulary, and we need to choose one word at a time until we arrive at the end character. One option (greedy decoding) would be to choose at each step the word with the highest probability, but the problem is that this may not lead to the highest-probability sentence, because the probability of a sentence is calculated as:

$$P(y_1, \dots, y_T \mid x) = \prod_{t=1}^{T} P(y_t \mid y_1, \dots, y_{t-1}, x)$$

The algorithm

Instead of being that greedy, beam search proposes to maintain beam_size hypotheses (possible sentences). Then, at each step:

  1. We expand each hypothesis with its beam_size most probable next tokens.
  2. From all those candidate sentences we take the beam_size most probable hypotheses.

We can stop when the hypotheses are complete sentences (they arrived at the end character), or after a maximum number of steps. Additionally, they propose to normalize the sentence (log-)probability by dividing it by the sentence length, so longer sentences are not automatically less probable.
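As an illustration, here is a minimal Python sketch of the procedure. The `step_log_probs` callback (assumed to wrap the model and return log-probabilities over the vocabulary given a prefix) and the other names are assumptions made for the example:

```python
def beam_search(step_log_probs, vocab, eos, beam_size=4, max_steps=50):
    """Keep the beam_size most probable partial sentences at every step."""
    beams = [([], 0.0)]        # each hypothesis: (tokens so far, sum of log-probabilities)
    finished = []
    for _ in range(max_steps):
        candidates = []
        for tokens, score in beams:
            log_probs = step_log_probs(tokens)  # model's distribution over the vocabulary
            # Expand the hypothesis with its beam_size most probable next tokens.
            top = sorted(range(len(vocab)), key=lambda i: log_probs[i], reverse=True)[:beam_size]
            for i in top:
                candidates.append((tokens + [vocab[i]], score + log_probs[i]))
        # From all candidates, keep only the beam_size most probable hypotheses.
        candidates.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for tokens, score in candidates[:beam_size]:
            if tokens[-1] == eos:
                finished.append((tokens, score / len(tokens)))  # length-normalised score
            else:
                beams.append((tokens, score))
        if not beams:          # every surviving hypothesis reached the end token
            break
    finished.extend((t, s / len(t)) for t, s in beams)  # hypotheses left after max_steps
    return max(finished, key=lambda c: c[1])[0]
```

Note that greedy decoding is the special case `beam_size=1`, and that the final scores are sums of log-probabilities divided by the sentence length, which is the normalization mentioned above.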

Label smoothing

(Szegedy et al. 2016)

This is a regularization technique that encourages the model to be less confident, therefore more adaptable.

In classification problems we have ground truth data that follows a distribution that we usually define as a one-hot vector:

$$q(k \mid x) = \delta_{k,y} = \begin{cases} 1 & \text{if } k = y \\ 0 & \text{otherwise} \end{cases}$$

where $y$ is the correct label for the input $x$.

If we use the softmax to calculate the output probabilities and cross entropy as the loss, then this label representation can lead to overfitting, so this technique proposes to use smoother labels.

Understanding the problem

In classification problems we predict the label with a softmax as

$$p(k \mid x) = \frac{\exp(z_k)}{\sum_{i=1}^{K} \exp(z_i)}$$

Where $x$ is the input, $k$ is one of the $K$ possible labels, and $z_i$ are the logits (the output scores of our model). And we use the cross entropy loss as shown below for one example ($\ell$) and for all of them ($L$):

$$\ell = -\sum_{k=1}^{K} q(k \mid x) \log p(k \mid x), \qquad L = \sum_{n=1}^{N} \ell_n$$

When we take the ground truth distribution to be the discrete one-hot defined above, that is $q(k \mid x) = \delta_{k,y}$, then $q(k \mid x) = 0$ for all $k$ different from $y$ (the correct label for the example $x$), and the loss reduces to:

$$\ell = -\log p(y \mid x)$$

Let’s now calculate the derivative of this with respect to the logits to find the minimum of the loss. Since $\log p(y \mid x) = z_y - \log \sum_{i=1}^{K} \exp(z_i)$,

$$\frac{\partial \ell}{\partial z_k} = -\frac{\partial}{\partial z_k}\left(z_y - \log \sum_{i=1}^{K} \exp(z_i)\right) = \frac{\exp(z_k)}{\sum_{i=1}^{K} \exp(z_i)} - \delta_{k,y}$$

Thus,

$$\frac{\partial \ell}{\partial z_k} = p(k \mid x) - q(k \mid x)$$

Then the function is minimized (its derivative is zero) when $p(k \mid x) = q(k \mid x)$, that is, when $p(y \mid x) = 1$. This can be approached if $z_y \gg z_k$ for all $k \neq y$; in words, having the correct logit way bigger than the rest, because with these values the softmax would output 1 for the index $y$ and zero elsewhere. This can cause two problems:

  1. If the model learns to assign the full probability $p(y \mid x) = 1$ to the training labels, then it will be overfitting the ground truth data and it’s not guaranteed to generalize.
  2. It encourages the differences between the largest logit and all the others to become large, and this, combined with the fact that the gradient $\frac{\partial \ell}{\partial z_k}$ is bounded between $-1$ and $1$, reduces the ability of the model to adapt. In other words, the model becomes too confident of its predictions.
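A quick numerical check of the gradient (a small NumPy sketch with made-up logits) illustrates both points: the gradient equals $p(k \mid x) - q(k \mid x)$ and lies in $[-1, 1]$, and pushing the correct logit far above the others drives the loss towards zero without ever reaching it:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # stabilise the exponentials
    e = np.exp(z)
    return e / e.sum()

z = np.array([1.0, 4.0, 0.5])       # made-up logits, correct label y = 1
y = 1
q = np.zeros_like(z)
q[y] = 1.0                          # one-hot ground truth distribution

p = softmax(z)
loss = -np.log(p[y])
grad = p - q                        # analytic gradient of the cross entropy w.r.t. z
print(loss, grad)                   # each component of grad lies in [-1, 1]

# Making z_y much larger than the other logits pushes the loss towards 0,
# but it stays positive: the minimum is only reached in the limit z_y -> infinity.
z[y] = 10.0
print(-np.log(softmax(z)[y]))
```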

Proposed solution

Instead of using a one-hot vector, we introduce a noise distribution $u(k)$ in the following way:

$$q'(k \mid x) = (1 - \epsilon)\,\delta_{k,y} + \epsilon\, u(k)$$

Thus we have a mixture between the old distribution $\delta_{k,y}$ and the fixed distribution $u(k)$, with weights $(1 - \epsilon)$ and $\epsilon$. We can see this as: for each label we first set it to the ground truth label, and then with probability $\epsilon$ we replace it with a label drawn from the distribution $u(k)$.

In the paper where this regularization was proposed they used the uniform distribution $u(k) = 1/K$. If we look at the cross entropy now, it would be:

$$H(q', p) = -\sum_{k=1}^{K} q'(k \mid x) \log p(k \mid x) = (1 - \epsilon)\, H(q, p) + \epsilon\, H(u, p)$$

Where the second term $\epsilon\, H(u, p)$ is penalising the deviation of the prediction $p$ from the prior $u$: if the two distributions are very different (that is, the model concentrates all its probability on a single label), then the cross entropy $H(u, p)$ will be bigger and therefore the loss will be bigger.
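A minimal NumPy sketch of this loss with the uniform prior $u(k) = 1/K$ (the function and variable names are just illustrative):

```python
import numpy as np

def label_smoothing_cross_entropy(logits, y, eps=0.1):
    """Cross entropy against q'(k) = (1 - eps) * one_hot(y) + eps / K."""
    K = logits.shape[-1]
    z = logits - logits.max()              # numerically stable softmax
    p = np.exp(z) / np.exp(z).sum()
    q = np.full(K, eps / K)                # the uniform noise distribution u(k) = 1/K
    q[y] += 1.0 - eps                      # mix it with the one-hot ground truth
    return -(q * np.log(p)).sum()          # = (1 - eps) H(one_hot, p) + eps H(u, p)

logits = np.array([1.0, 2.0, 5.0, 0.5, -1.0])  # made-up scores for K = 5 classes
print(label_smoothing_cross_entropy(logits, y=2, eps=0.0))  # plain cross entropy
print(label_smoothing_cross_entropy(logits, y=2, eps=0.1))  # smoothed: slightly larger loss
```

With `eps=0.0` this reduces to the usual cross entropy against the one-hot labels; with `eps>0` the loss is no longer minimised by making a single logit arbitrarily large.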

Dropout

(Srivastava et al. 2014)

This is another regularization technique that is pretty simple but highly effective. It turns off neurons with a probability $1 - p$, or in other words it keeps each neuron with a probability $p$. Doing this, the model can learn more relevant patterns and is less prone to overfitting, therefore it can achieve better performance.

The intuition behind dropout is that when we delete random neurons we’re potentially training exponentially many sub-networks at the same time! Then, at prediction time, we will be averaging the predictions of each of those networks.

At test time we don’t drop (turn off) any neurons, and since it is not feasible to explicitly average the predictions from exponentially many thinned models, we approximate this by multiplying the output of each hidden unit by $p$. In this way the expected output of a hidden unit is the same during training and testing.
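A minimal NumPy sketch of this behaviour, with $p$ as the keep probability (names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def dropout(h, p=0.8, train=True):
    """Keep each unit with probability p during training; scale by p at test time."""
    if train:
        mask = (rng.random(h.shape) < p).astype(h.dtype)  # 1 with probability p, else 0
        return h * mask                                   # randomly thinned activations
    # Test time: no units are dropped; scaling by p matches the expected
    # training-time output, approximating the average over all thinned networks.
    return h * p

h = np.array([0.5, 1.2, -0.3, 2.0])
print(dropout(h, p=0.8, train=True))   # some activations zeroed out
print(dropout(h, p=0.8, train=False))  # all activations kept, scaled by 0.8
```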

Layer Normalization

(Lei Ba et al. 2016)

Motivation: during training, neural networks converge faster if the input is whitened, that is, linearly transformed to have zero mean and unit variance, and decorrelated (LeCun et al. 1998). Since we can see the output of one layer as the input of the next one, it’s clear that normalizing the intermediate values in the network could also be beneficial. The main problem is that, otherwise, each layer has to readjust to the changes in the distribution of its input.

This problem was presented by Ioffe et al. 2015, where they proposed Batch Normalization to overcome this issue, as a way to normalize the inputs of each layer in the network.

Batch Normalization

In a nutshell, we can think of Batch Normalization as an extra layer after each hidden layer that transforms the inputs for the next layer from $x$ to $\mathrm{BN}(x)$. If we consider $\mathcal{B} = \{x_1, \dots, x_m\}$ to be the mini-batch, where each $x_i$ is an input vector of a hidden layer, then the normalization of each dimension $k$ is the following:

$$\hat{x}^{(k)} = \frac{x^{(k)} - E[x^{(k)}]}{\sqrt{\mathrm{Var}[x^{(k)}]}}$$

We’ll approximate the expectation ($\mu$) and the variance ($\sigma^2$) by calculating them at the mini-batch level:

$$\mu_\mathcal{B} = \frac{1}{m}\sum_{i=1}^{m} x_i, \qquad \sigma^2_\mathcal{B} = \frac{1}{m}\sum_{i=1}^{m} (x_i - \mu_\mathcal{B})^2$$

Then the batch normalization will be:

$$\mathrm{BN}_{\gamma,\beta}(x_i) = \gamma\,\hat{x}_i + \beta, \qquad \hat{x}_i = \frac{x_i - \mu_\mathcal{B}}{\sqrt{\sigma^2_\mathcal{B} + \epsilon}}$$

Where $\epsilon$ is a constant added for numerical stability, and $\gamma$ and $\beta$ are parameters of this “layer”, learnt through backpropagation. These parameters are used to keep the representational power of the layer: by setting $\gamma = \sqrt{\mathrm{Var}[x]}$ and $\beta = E[x]$ we can recover the original output, if it were to be the optimal one.

Additionally, during inference we’ll use $\gamma$ and $\beta$ fixed, and the expectation and variance will be computed over the entire population (using the first equation).
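A minimal NumPy sketch of the training-time transformation over a mini-batch (the function name and the toy data are illustrative):

```python
import numpy as np

def batch_norm(x, gamma, beta, eps=1e-5):
    """Training-time batch normalization.

    x has shape (m, d): a mini-batch of m examples with d features. Each feature
    dimension is normalized with the mini-batch mean and variance, then scaled
    and shifted by the learnt parameters gamma and beta.
    """
    mu = x.mean(axis=0)                    # mini-batch mean, one value per dimension
    var = x.var(axis=0)                    # mini-batch variance, one value per dimension
    x_hat = (x - mu) / np.sqrt(var + eps)  # zero mean, unit variance per dimension
    return gamma * x_hat + beta            # restore representational power

rng = np.random.default_rng(0)
x = rng.normal(loc=3.0, scale=2.0, size=(8, 4))   # toy mini-batch
gamma, beta = np.ones(4), np.zeros(4)
y = batch_norm(x, gamma, beta)
print(y.mean(axis=0), y.std(axis=0))              # approximately 0 and 1 per dimension
```

At inference time, `mu` and `var` would instead come from population estimates (for example, running averages accumulated during training), as described above.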

Back to Layer Normalization

Batch normalization is not easily extendable to Recurrent Neural Networks (RNNs), because it requires running averages of the summed input statistics to compute $\mu$ and $\sigma$. However, the summed inputs in an RNN often vary with the length of the sequence, so applying batch normalization to RNNs appears to require different statistics for different time-steps. Moreover, it cannot be applied to online learning tasks (with batch size = 1).

So they proposed layer normalization, that normalises the layers as follows:

Let $a^l$ be the vector of outputs (summed inputs) of hidden layer $l$, and $H$ the number of hidden units in each hidden layer; then:

$$\mu^l = \frac{1}{H}\sum_{i=1}^{H} a_i^l, \qquad \sigma^l = \sqrt{\frac{1}{H}\sum_{i=1}^{H} \left(a_i^l - \mu^l\right)^2}$$

This looks really similar to the above equations for $\mu_\mathcal{B}$ and $\sigma^2_\mathcal{B}$; however, the equations here use only the output of the current hidden layer, whereas the ones above use the whole batch.

Similarly to BN, we’ll learn a linear function ($\gamma$ and $\beta$), or as they call it in the paper, a gain $g$ and a bias $b$.

Unlike BN, LN is used the same way at training and test time.
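An analogous NumPy sketch for layer normalization, which uses the statistics of a single example’s hidden vector instead of the mini-batch (names are illustrative):

```python
import numpy as np

def layer_norm(a, gain, bias, eps=1e-5):
    """Normalize each example over its H hidden units, then apply gain and bias.

    a has shape (m, H); mean and standard deviation are taken along the last axis,
    so the result does not depend on the other examples in the batch. The same
    code is used at training and test time.
    """
    mu = a.mean(axis=-1, keepdims=True)        # one mean per example
    sigma = a.std(axis=-1, keepdims=True)      # one standard deviation per example
    return gain * (a - mu) / (sigma + eps) + bias

rng = np.random.default_rng(0)
a = rng.normal(size=(2, 6))                    # two examples, H = 6 hidden units
gain, bias = np.ones(6), np.zeros(6)
out = layer_norm(a, gain, bias)
print(out.mean(axis=-1), out.std(axis=-1))     # approximately 0 and 1 per example
```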

Comparison to BN

In the paper they showed that LN works better (it converges faster and is robust to changes in the batch size) for RNNs and feed-forward networks. However, BN outperforms LN when applied to CNNs.