This is a lightly edited version of this notebook.

If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers and 🤗 Datasets. Uncomment the following cell and run it.

%%capture
!pip install datasets transformers seqeval

If you're opening this notebook locally, make sure your environment has the latest version of those libraries installed.
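For example, you can upgrade them from a notebook cell (drop the leading "!" if you run it in a terminal instead):

!pip install --upgrade datasets transformers seqeval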

To be able to share your model with the community and generate results like the one shown in the picture below via the inference API, there are a few more steps to follow.

First you have to store your authentication token from the Hugging Face website (sign up here if you haven't already!), then execute the following cell and input your username and password:

(The Hugging Face notebooks skip this bit, but you need to set credential.helper before anything else will work.)

!git config --global credential.helper store
from huggingface_hub import notebook_login

notebook_login()
Login successful
Your token has been saved to /root/.huggingface/token

Then you need to install Git-LFS. Run the following cell:

!apt install git-lfs
Reading package lists... Done
Building dependency tree       
Reading state information... Done
git-lfs is already the newest version (2.3.4-1).
0 upgraded, 0 newly installed, 0 to remove and 37 not upgraded.

Make sure your version of Transformers is at least 4.11.0, since the Hub integration used below was introduced in that version:

import transformers

print(transformers.__version__)
4.12.5

You can find a script version of this notebook to fine-tune your model in a distributed fashion using multiple GPUs or TPUs here.

Fine-tuning a model on a token classification task

In this notebook, we will see how to fine-tune one of the 🤗 Transformers models on a token classification task, which is the task of predicting a label for each token.

[Image: widget inference representing the NER task]

The most common token classification tasks are:

  • NER (named-entity recognition): classify the entities in the text (person, organization, location...).
  • POS (part-of-speech tagging): grammatically classify the tokens (noun, verb, adjective...).
  • Chunking: grammatically classify the tokens and group them into "chunks" that go together.

We will see how to easily load a dataset for these kinds of tasks and use the Trainer API to fine-tune a model on it.

This notebook is built to run on any token classification task, with any model checkpoint from the Model Hub, as long as that model has a version with a token classification head and a fast tokenizer (check on this table if this is the case). It might just need some small adjustments if you decide to use a different dataset from the one used here. Depending on your model and the GPU you are using, you might also need to adjust the batch size to avoid out-of-memory errors. Set those three parameters, and the rest of the notebook should run smoothly:

task = "ner" # Should be one of "ner", "pos" or "chunk"
model_checkpoint = "jimregan/BERTreach"
batch_size = 16

Loading the dataset

We will use the 🤗 Datasets library to download the data and get the metric we need to use for evaluation (to compare our model to the benchmark). This can be easily done with the functions load_dataset and load_metric.

from datasets import load_dataset, load_metric

For our example here, we'll use the Irish ("ga") subset of the WikiANN dataset. The notebook should work with any token classification dataset provided by the 🤗 Datasets library. If you're using your own dataset defined from a JSON or CSV file (see the Datasets documentation on how to load them), it might need some adjustments in the names of the columns used.
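For instance, a minimal sketch of loading your own files instead (the file names below are hypothetical; each record should provide a "tokens" list and a matching tag list so the column names line up with the rest of the notebook):

my_datasets = load_dataset(
    "json",  # or "csv" for comma-separated files
    data_files={"train": "train.jsonl", "validation": "dev.jsonl"},  # hypothetical file names
)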

datasets = load_dataset("wikiann", "ga")
Reusing dataset wikiann (/root/.cache/huggingface/datasets/wikiann/ga/1.1.0/4bfd4fe4468ab78bb6e096968f61fab7a888f44f9d3371c2f3fea7e74a5a354e)

The datasets object itself is a DatasetDict, which contains one key each for the training, validation and test sets.

datasets
DatasetDict({
    validation: Dataset({
        features: ['tokens', 'ner_tags', 'langs', 'spans'],
        num_rows: 1000
    })
    test: Dataset({
        features: ['tokens', 'ner_tags', 'langs', 'spans'],
        num_rows: 1000
    })
    train: Dataset({
        features: ['tokens', 'ner_tags', 'langs', 'spans'],
        num_rows: 1000
    })
})

We can see the training, validation and test sets all have a column for the tokens (the input texts split into words) and a column of NER labels (ner_tags), plus the langs and spans columns that are specific to WikiANN.

To access an actual element, you need to select a split first, then give an index:

datasets["train"][0]
{'langs': ['ga',
  'ga',
  'ga',
  'ga',
  'ga',
  'ga',
  'ga',
  'ga',
  'ga',
  'ga',
  'ga',
  'ga'],
 'ner_tags': [0, 1, 2, 2, 0, 0, 0, 0, 5, 0, 0, 0],
 'spans': ['PER: Pádraig Mac Piarais', 'LOC: Éireannach'],
 'tokens': ['**',
  'Pádraig',
  'Mac',
  'Piarais',
  ',',
  '36',
  ',',
  'réabhlóidí',
  'Éireannach',
  'agus',
  '[[file',
  '.']}

The labels are already coded as integer ids to be easily usable by our model, but the correspondence with the actual categories is stored in the features of the dataset:

datasets["train"].features[f"ner_tags"]
Sequence(feature=ClassLabel(num_classes=7, names=['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC'], names_file=None, id=None), length=-1, id=None)

So for the NER tags, 0 corresponds to 'O', 1 to 'B-PER', etc. On top of 'O' (which means no entity), there are three entity types here, each prefixed with 'B-' (for beginning) or 'I-' (for inside) to indicate whether or not the token is the first one of the current entity span (see the short snippet after this list):

  • 'PER' for person
  • 'ORG' for organization
  • 'LOC' for location
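As a quick illustration of this scheme, the following snippet pairs each token of the first training example (shown above) with its decoded tag; 'Pádraig' gets 'B-PER' while 'Mac' and 'Piarais' get 'I-PER':

# Decode the integer NER tags of the first training example into IOB strings.
ner_feature = datasets["train"].features["ner_tags"].feature
first = datasets["train"][0]
print(list(zip(first["tokens"], [ner_feature.names[i] for i in first["ner_tags"]])))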

Since the label column is a Sequence of ClassLabel, the actual names of the labels are nested in the feature attribute of the object above:

label_list = datasets["train"].features[f"{task}_tags"].feature.names
label_list
['O', 'B-PER', 'I-PER', 'B-ORG', 'I-ORG', 'B-LOC', 'I-LOC']

To get a sense of what the data looks like, the following function will show some examples picked randomly in the dataset (automatically decoding the labels in passing).

from datasets import ClassLabel, Sequence
import random
import pandas as pd
from IPython.display import display, HTML

def show_random_elements(dataset, num_examples=10):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset)-1)
        while pick in picks:
            pick = random.randint(0, len(dataset)-1)
        picks.append(pick)
    
    df = pd.DataFrame(dataset[picks])
    for column, typ in dataset.features.items():
        if isinstance(typ, ClassLabel):
            df[column] = df[column].transform(lambda i: typ.names[i])
        elif isinstance(typ, Sequence) and isinstance(typ.feature, ClassLabel):
            df[column] = df[column].transform(lambda x: [typ.feature.names[i] for i in x])
    display(HTML(df.to_html()))
show_random_elements(datasets["train"])
tokens ner_tags langs spans
0 [Burghley, House, ,, Belton, House] [B-ORG, I-ORG, O, B-ORG, I-ORG] [ga, ga, ga, ga, ga] [ORG: Burghley House, ORG: Belton House]
1 [Ollscoil, Chathair, Bhaile, Átha, Cliath] [B-ORG, I-ORG, I-ORG, I-ORG, I-ORG] [ga, ga, ga, ga, ga] [ORG: Ollscoil Chathair Bhaile Átha Cliath]
2 [Dúchasach, do, réigiún, na, Meánmhara, .] [O, O, O, B-LOC, I-LOC, O] [ga, ga, ga, ga, ga, ga] [LOC: na Meánmhara]
3 [Páirc, an, Chrócaigh, ,, Baile, Átha, Cliath] [B-ORG, I-ORG, I-ORG, O, B-LOC, I-LOC, I-LOC] [ga, ga, ga, ga, ga, ga, ga] [ORG: Páirc an Chrócaigh, LOC: Baile Átha Cliath]
4 [Tráigh, Mhór, ,, An, Tuirc] [B-ORG, I-ORG, O, B-LOC, I-LOC] [ga, ga, ga, ga, ga] [ORG: Tráigh Mhór, LOC: An Tuirc]
5 [Bhí, turas, An, Ríocht, Aontaithe, agus, Éire, acu, ón, Eanair, go, dtí, mBealtaine, .] [O, O, B-LOC, I-LOC, I-LOC, O, B-LOC, O, O, O, O, O, O, O] [ga, ga, ga, ga, ga, ga, ga, ga, ga, ga, ga, ga, ga, ga] [LOC: An Ríocht Aontaithe, LOC: Éire]
6 [Tá, an, staid, tógtha, ar, shuíomh, Bhóthair, Lansdúin, .] [O, O, O, O, O, O, B-ORG, I-ORG, O] [ga, ga, ga, ga, ga, ga, ga, ga, ga] [ORG: Bhóthair Lansdúin]
7 [athsheoladh, Pól, I, na, Rúise] [O, B-PER, I-PER, I-PER, I-PER] [ga, ga, ga, ga, ga] [PER: Pól I na Rúise]
8 [Liam, Ó, Leathlobhair] [B-PER, I-PER, I-PER] [ga, ga, ga] [PER: Liam Ó Leathlobhair]
9 [athsheoladh, Séamas, II, Shasana] [O, B-PER, I-PER, I-PER] [ga, ga, ga, ga] [PER: Séamas II Shasana]

Preprocessing the data

Before we can feed those texts to our model, we need to preprocess them. This is done by a 🤗 Transformers Tokenizer, which will (as the name indicates) tokenize the inputs (including converting the tokens to their corresponding IDs in the pretrained vocabulary) and put them in a format the model expects, as well as generate the other inputs the model requires.

To do all of this, we instantiate our tokenizer with the from_pretrained method, which will ensure:

  • we get a tokenizer that corresponds to the model architecture we want to use,
  • we download the vocabulary used when pretraining this specific checkpoint.

That vocabulary will be cached, so it's not downloaded again the next time we run the cell. Since BERTreach is a RoBERTa-style checkpoint, we load RobertaTokenizerFast directly and pass add_prefix_space=True, which its byte-level BPE tokenizer requires when the inputs are already split into words (as they are in this dataset).

from transformers import RobertaTokenizerFast

tokenizer = RobertaTokenizerFast.from_pretrained(model_checkpoint, add_prefix_space=True)
loading file https://huggingface.co/jimregan/BERTreach/resolve/main/vocab.json from cache at /root/.cache/huggingface/transformers/9f02739afcb15f79a914d1dc3852921b35c28165868f21dc938b1219ff615ae7.dc1449771f2e5fcd30cf6d6723ec65f8c1106371f6ba60c9466df8d5e1567bca
loading file https://huggingface.co/jimregan/BERTreach/resolve/main/merges.txt from cache at /root/.cache/huggingface/transformers/0bd2316742dd7dd681cffbf4529ec3e97708bf173b741af7c38e60b3f649ed5a.2cbdc9a92c69faaa4556153a1d778a80b85e34b0d4cedb5774e31773edef57fd
loading file https://huggingface.co/jimregan/BERTreach/resolve/main/tokenizer.json from cache at None
loading file https://huggingface.co/jimregan/BERTreach/resolve/main/added_tokens.json from cache at None
loading file https://huggingface.co/jimregan/BERTreach/resolve/main/special_tokens_map.json from cache at None
loading file https://huggingface.co/jimregan/BERTreach/resolve/main/tokenizer_config.json from cache at None
loading configuration file https://huggingface.co/jimregan/BERTreach/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/82da4bf21418a60a0d196c50342fe927af2c9187b87d319e7def1608dfdc0954.f6ebc79ab803ca349ef7b469b0fbe6aa40d053e3c1c2da0501521c46c2a51bb7
Model config RobertaConfig {
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.12.5",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 52000
}

The following assertion ensures that our tokenizer is a fast tokenizer (backed by Rust) from the 🤗 Tokenizers library. Those fast tokenizers are available for almost all models, and we will need some of the special features they have for our preprocessing.

import transformers
assert isinstance(tokenizer, transformers.PreTrainedTokenizerFast)

You can check which type of models have a fast tokenizer available and which don't on the big table of models.

You can directly call this tokenizer on one sentence:

tokenizer("Is abairt amháin é seo!")
{'input_ids': [0, 574, 3152, 799, 350, 369, 5, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1]}

Depending on the model you selected, you will see different keys in the dictionary returned by the cell above. They don't matter much for what we're doing here (just know they are required by the model we will instantiate later); you can learn more about them in this tutorial if you're interested.

If, as is the case here, your inputs have already been split into words, you should pass the list of words to your tokenizer with the argument is_split_into_words=True:

tokenizer(["Hello", ",", "this", "is", "one", "sentence", "split", "into", "words", "."], is_split_into_words=True)
{'input_ids': [0, 838, 25201, 1094, 10285, 381, 15195, 50991, 5359, 809, 786, 2512, 22339, 38628, 968, 2], 'attention_mask': [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}

Note that transformers are often pretrained with subword tokenizers, meaning that even if your inputs have been split into words already, each of those words could be split again by the tokenizer. Let's look at an example of that:

example = datasets["train"][4]
print(example["tokens"])
['Tá', 'Áras', 'an', 'Uachtaráin', '(', 'áit', 'chónaithe', 'oifigiúil', 'Uachtarán', 'na', 'hÉireann', ')', ',', "''Deerfield", "''", '(', 'áit', 'chónaithe', 'oifigiúil', 'Ambasadóir', 'Stáit', 'Aontaithe', 'Mheiriceá', ')', ',', 'Zú', 'Bhaile', 'Átha', 'Cliath', ',', 'agus', 'Ceanncheathrú', 'an', 'Gharda', 'Síochána', 'go', 'léir', 'laistigh', 'den', 'pháirc', '.']
tokenized_input = tokenizer(example["tokens"], is_split_into_words=True)
tokens = tokenizer.convert_ids_to_tokens(tokenized_input["input_ids"])
print(tokens)
['<s>', 'ĠTá', 'ĠÃģras', 'Ġan', 'ĠUachtaráin', 'Ġ(', 'Ġáit', 'Ġchónaithe', 'Ġoifigiúil', 'ĠUachtarán', 'Ġna', 'ĠhÃīireann', 'Ġ)', 'Ġ,', "Ġ''", 'De', 'er', 'field', "Ġ''", 'Ġ(', 'Ġáit', 'Ġchónaithe', 'Ġoifigiúil', 'ĠAmbasadóir', 'ĠStáit', 'ĠAontaithe', 'ĠMheiriceá', 'Ġ)', 'Ġ,', 'ĠZ', 'ú', 'ĠBhaile', 'ĠÃģtha', 'ĠCliath', 'Ġ,', 'Ġagus', 'ĠCeanncheathrú', 'Ġan', 'ĠGharda', 'ĠSÃŃochána', 'Ġgo', 'Ġléir', 'Ġlaistigh', 'Ġden', 'Ġpháirc', 'Ġ.', '</s>']

Here the words "Zwingmann" and "sheepmeat" have been split in three subtokens.

This means that we need to do some processing on our labels, as the input ids returned by the tokenizer are longer than the lists of labels our dataset contains: first because some special tokens might be added (we can see a <s> and a </s> above), and then because of those possible splits of words into multiple tokens:

len(example[f"{task}_tags"]), len(tokenized_input["input_ids"])
(41, 47)

Thankfully, the tokenizer returns outputs that have a word_ids method which can help us.

print(tokenized_input.word_ids())
[None, 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 13, 13, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, None]

As we can see, it returns a list with the same number of elements as our processed input ids, mapping special tokens to None and all other tokens to their respective word. This way, we can align the labels with the processed input ids.

word_ids = tokenized_input.word_ids()
aligned_labels = [-100 if i is None else example[f"{task}_tags"][i] for i in word_ids]
print(len(aligned_labels), len(tokenized_input["input_ids"]))
47 47

Here we set the labels of all special tokens to -100 (the index that is ignored by the PyTorch loss function) and the labels of all other tokens to the label of the word they come from. Another strategy is to set the label only on the first token obtained from a given word, and give a label of -100 to the other subtokens from the same word. We support both strategies here; just change the value of the following flag:

label_all_tokens = True
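For comparison, here is a minimal sketch of the first strategy (labelling only the first subtoken of each word) applied to the single example above; it reuses the word_ids and example variables from the previous cells:

# Label only the first subtoken of each word; other subtokens and special tokens get -100.
previous_word_idx = None
first_only_labels = []
for word_idx in word_ids:
    if word_idx is None or word_idx == previous_word_idx:
        first_only_labels.append(-100)
    else:
        first_only_labels.append(example[f"{task}_tags"][word_idx])
    previous_word_idx = word_idx
print(first_only_labels)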

We're now ready to write the function that will preprocess our samples. We feed them to the tokenizer with the argument truncation=True (to truncate texts that are bigger than the maximum size allowed by the model) and is_split_into_words=True (as seen above). Then we align the labels with the token ids using the strategy we picked:

def tokenize_and_align_labels(examples):
    tokenized_inputs = tokenizer(examples["tokens"], truncation=True, is_split_into_words=True)

    labels = []
    for i, label in enumerate(examples[f"{task}_tags"]):
        word_ids = tokenized_inputs.word_ids(batch_index=i)
        previous_word_idx = None
        label_ids = []
        for word_idx in word_ids:
            # Special tokens have a word id that is None. We set the label to -100 so they are automatically
            # ignored in the loss function.
            if word_idx is None:
                label_ids.append(-100)
            # We set the label for the first token of each word.
            elif word_idx != previous_word_idx:
                label_ids.append(label[word_idx])
            # For the other tokens in a word, we set the label to either the current label or -100, depending on
            # the label_all_tokens flag.
            else:
                label_ids.append(label[word_idx] if label_all_tokens else -100)
            previous_word_idx = word_idx

        labels.append(label_ids)

    tokenized_inputs["labels"] = labels
    return tokenized_inputs

This function works with one or several examples. In the case of several examples, the tokenizer will return a list of lists for each key:

tokenize_and_align_labels(datasets['train'][:5])
Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation.
{'input_ids': [[0, 5236, 14, 3650, 1619, 21240, 1094, 8584, 1094, 40980, 3337, 306, 6292, 74, 1806, 968, 2], [0, 5236, 14, 15068, 12965, 15693, 384, 4010, 17, 4294, 2], [0, 1146, 80, 1494, 15796, 691, 1094, 17961, 691, 2], [0, 691, 15693, 6207, 48172, 15693, 691, 2], [0, 1281, 11516, 275, 9918, 384, 756, 9978, 4030, 3476, 304, 1147, 4294, 1094, 15693, 1855, 553, 10428, 15693, 384, 756, 9978, 4030, 34067, 1647, 1927, 3616, 4294, 1094, 3999, 276, 2268, 1461, 1397, 1094, 306, 49963, 275, 5247, 4226, 341, 896, 1813, 460, 8981, 968, 2]], 'attention_mask': [[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1], [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]], 'labels': [[-100, 0, 0, 1, 2, 2, 0, 0, 0, 0, 5, 0, 0, 0, 0, 0, -100], [-100, 0, 0, 3, 4, 0, 0, 0, 0, 0, -100], [-100, 1, 1, 1, 0, 0, 0, 0, 0, -100], [-100, 0, 0, 5, 5, 0, 0, -100], [-100, 0, 3, 4, 4, 0, 0, 0, 0, 3, 4, 4, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 5, 6, 6, 0, 0, 3, 3, 4, 4, 4, 0, 0, 0, 0, 3, 4, 0, 0, 0, 0, 0, 0, -100]]}

To apply this function to all the sentences in our dataset, we just use the map method of the datasets object we created earlier. This will apply the function to all the elements of all the splits in datasets, so our training, validation and test data will be preprocessed in one single command.

tokenized_datasets = datasets.map(tokenize_and_align_labels, batched=True)

Even better, the results are automatically cached by the 🤗 Datasets library to avoid spending time on this step the next time you run your notebook. The 🤗 Datasets library is normally smart enough to detect when the function you pass to map has changed (and thus should not reuse the cached data). For instance, it will properly detect if you change the task in the first cell and rerun the notebook. 🤗 Datasets warns you when it uses cached files; you can pass load_from_cache_file=False in the call to map to not use the cached files and force the preprocessing to be applied again.
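For instance, to force the preprocessing to run again:

# Ignore any cached result and re-apply the preprocessing function.
tokenized_datasets = datasets.map(tokenize_and_align_labels, batched=True, load_from_cache_file=False)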

Note that we passed batched=True to encode the texts in batches. This lets us take full advantage of the fast tokenizer we loaded earlier, which will use multi-threading to process the texts in a batch concurrently.

Fine-tuning the model

Now that our data is ready, we can download the pretrained model and fine-tune it. Since all our tasks are about token classification, we use the AutoModelForTokenClassification class. Like with the tokenizer, the from_pretrained method will download and cache the model for us. The only thing we have to specify is the number of labels for our problem, which we do here by passing id2label and label2id mappings (built from the label list extracted earlier) through an AutoConfig, so the fine-tuned model will also report human-readable label names:

from transformers import AutoModelForTokenClassification, TrainingArguments, Trainer, AutoConfig

config = AutoConfig.from_pretrained(model_checkpoint,
        id2label={i: label for i, label in enumerate(label_list)},
        label2id={label: i for i, label in enumerate(label_list)})

model = AutoModelForTokenClassification.from_pretrained(model_checkpoint, config=config)
loading configuration file https://huggingface.co/jimregan/BERTreach/resolve/main/config.json from cache at /root/.cache/huggingface/transformers/82da4bf21418a60a0d196c50342fe927af2c9187b87d319e7def1608dfdc0954.f6ebc79ab803ca349ef7b469b0fbe6aa40d053e3c1c2da0501521c46c2a51bb7
Model config RobertaConfig {
  "architectures": [
    "RobertaForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "classifier_dropout": null,
  "eos_token_id": 2,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "O",
    "1": "B-PER",
    "2": "I-PER",
    "3": "B-ORG",
    "4": "I-ORG",
    "5": "B-LOC",
    "6": "I-LOC"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "label2id": {
    "B-LOC": 5,
    "B-ORG": 3,
    "B-PER": 1,
    "I-LOC": 6,
    "I-ORG": 4,
    "I-PER": 2,
    "O": 0
  },
  "layer_norm_eps": 1e-12,
  "max_position_embeddings": 514,
  "model_type": "roberta",
  "num_attention_heads": 12,
  "num_hidden_layers": 6,
  "pad_token_id": 1,
  "position_embedding_type": "absolute",
  "transformers_version": "4.12.5",
  "type_vocab_size": 1,
  "use_cache": true,
  "vocab_size": 52000
}

https://huggingface.co/jimregan/BERTreach/resolve/main/pytorch_model.bin not found in cache or force_download set to True, downloading to /root/.cache/huggingface/transformers/tmpv5o9vvd8
storing https://huggingface.co/jimregan/BERTreach/resolve/main/pytorch_model.bin in cache at /root/.cache/huggingface/transformers/dd1b4fd9cac1b246d8d0fd055990d19837145ab67cc89c1c8a1af624e6679469.1da935a4b98fa14d6de9a52c0e4217ff97b262012d6f20bce405f3128b3b539d
creating metadata file for /root/.cache/huggingface/transformers/dd1b4fd9cac1b246d8d0fd055990d19837145ab67cc89c1c8a1af624e6679469.1da935a4b98fa14d6de9a52c0e4217ff97b262012d6f20bce405f3128b3b539d
loading weights file https://huggingface.co/jimregan/BERTreach/resolve/main/pytorch_model.bin from cache at /root/.cache/huggingface/transformers/dd1b4fd9cac1b246d8d0fd055990d19837145ab67cc89c1c8a1af624e6679469.1da935a4b98fa14d6de9a52c0e4217ff97b262012d6f20bce405f3128b3b539d
Some weights of the model checkpoint at jimregan/BERTreach were not used when initializing RobertaForTokenClassification: ['lm_head.decoder.weight', 'lm_head.layer_norm.weight', 'lm_head.decoder.bias', 'lm_head.bias', 'lm_head.dense.bias', 'lm_head.layer_norm.bias', 'lm_head.dense.weight']
- This IS expected if you are initializing RobertaForTokenClassification from the checkpoint of a model trained on another task or with another architecture (e.g. initializing a BertForSequenceClassification model from a BertForPreTraining model).
- This IS NOT expected if you are initializing RobertaForTokenClassification from the checkpoint of a model that you expect to be exactly identical (initializing a BertForSequenceClassification model from a BertForSequenceClassification model).
Some weights of RobertaForTokenClassification were not initialized from the model checkpoint at jimregan/BERTreach and are newly initialized: ['classifier.weight', 'classifier.bias']
You should probably TRAIN this model on a down-stream task to be able to use it for predictions and inference.

The warning is telling us we are throwing away some weights (the lm_head layers) and randomly initializing some others (the classifier layers). This is absolutely normal in this case, because we are removing the head used to pretrain the model on a masked language modeling objective and replacing it with a new token classification head for which we don't have pretrained weights, so the library warns us we should fine-tune this model before using it for inference, which is exactly what we are going to do.

To instantiate a Trainer, we will need to define three more things. The most important is the TrainingArguments, which is a class that contains all the attributes to customize the training. It requires one folder name, which will be used to save the checkpoints of the model, and all other arguments are optional:

model_name = model_checkpoint.split("/")[-1]
args = TrainingArguments(
    f"BERTreach-finetuned-{task}",
    evaluation_strategy = "epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=5,
    weight_decay=0.01,
    push_to_hub=True,
)
PyTorch: setting up devices
The default value for the training argument `--report_to` will change in v5 (from all installed integrations to none). In v5, you will need to use `--report_to all` to get the same behavior as now. You should start updating your code and make this info disappear :-).

Here we set the evaluation to be done at the end of each epoch, tweak the learning rate, use the batch_size defined at the top of the notebook and customize the number of epochs for training, as well as the weight decay.

The last argument sets up everything so we can push the model to the Hub regularly during training. Remove it if you didn't follow the installation steps at the top of the notebook. If you want to save your model locally under a name that is different from the name of the repository it will be pushed to, or if you want to push your model under an organization instead of your own namespace, use the hub_model_id argument to set the repo name (it needs to be the full name, including your namespace: for instance "sgugger/bert-finetuned-ner" or "huggingface/bert-finetuned-ner").
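For instance, a variant with a hypothetical organization namespace (you would pass these arguments to the Trainer instead of args):

# Hypothetical repo name: push under an organization rather than your personal namespace.
org_args = TrainingArguments(
    f"BERTreach-finetuned-{task}",
    evaluation_strategy="epoch",
    learning_rate=2e-5,
    per_device_train_batch_size=batch_size,
    per_device_eval_batch_size=batch_size,
    num_train_epochs=5,
    weight_decay=0.01,
    push_to_hub=True,
    hub_model_id="my-organization/BERTreach-finetuned-ner",
)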

Then we will need a data collator that will batch our processed examples together while applying padding to make them all the same size (each batch will be padded to the length of its longest example). There is a data collator for this task in the Transformers library that not only pads the inputs, but also the labels:

from transformers import DataCollatorForTokenClassification

data_collator = DataCollatorForTokenClassification(tokenizer)
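If you want to see the dynamic padding at work, a quick optional check like the following (a sketch that only keeps the columns the collator expects) should show the shorter example's labels padded out with -100:

# Collate two processed examples of different lengths and inspect the padded labels.
features = [
    {k: v for k, v in tokenized_datasets["train"][i].items()
     if k in ("input_ids", "attention_mask", "labels")}
    for i in range(2)
]
batch = data_collator(features)
print(batch["labels"])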

The last thing to define for our Trainer is how to compute the metrics from the predictions. Here we will load the seqeval metric (which is commonly used to evaluate results on the CONLL dataset) via the Datasets library.

metric = load_metric("seqeval")

This metric takes lists of labels for the predictions and references:

labels = [label_list[i] for i in example[f"{task}_tags"]]
metric.compute(predictions=[labels], references=[labels])
{'LOC': {'f1': 1.0, 'number': 1, 'precision': 1.0, 'recall': 1.0},
 'ORG': {'f1': 1.0, 'number': 4, 'precision': 1.0, 'recall': 1.0},
 'overall_accuracy': 1.0,
 'overall_f1': 1.0,
 'overall_precision': 1.0,
 'overall_recall': 1.0}

So we will need to do a bit of post-processing on our predictions:

  • select the predicted index (with the maximum logit) for each token
  • convert it to its string label
  • ignore everywhere we set a label of -100

The following function does all this post-processing on the result of Trainer.evaluate (which is a namedtuple containing predictions and labels) before applying the metric:

import numpy as np

def compute_metrics(p):
    predictions, labels = p
    predictions = np.argmax(predictions, axis=2)

    # Remove ignored index (special tokens)
    true_predictions = [
        [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]
    true_labels = [
        [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
        for prediction, label in zip(predictions, labels)
    ]

    results = metric.compute(predictions=true_predictions, references=true_labels)
    return {
        "precision": results["overall_precision"],
        "recall": results["overall_recall"],
        "f1": results["overall_f1"],
        "accuracy": results["overall_accuracy"],
    }

Note that we drop the precision/recall/f1 computed for each category and only focus on the overall precision/recall/f1/accuracy.

Then we just need to pass all of this along with our datasets to the Trainer:

trainer = Trainer(
    model,
    args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    data_collator=data_collator,
    tokenizer=tokenizer,
    compute_metrics=compute_metrics
)
Cloning https://huggingface.co/jimregan/BERTreach-finetuned-ner into local empty directory.

We can now fine-tune our model by just calling the train method:

trainer.train()
The following columns in the training set  don't have a corresponding argument in `RobertaForTokenClassification.forward` and have been ignored: spans, tokens, ner_tags, langs.
***** Running training *****
  Num examples = 1000
  Num Epochs = 5
  Instantaneous batch size per device = 16
  Total train batch size (w. parallel, distributed & accumulation) = 16
  Gradient Accumulation steps = 1
  Total optimization steps = 315
[315/315 20:00, Epoch 5/5]
Epoch  Training Loss  Validation Loss  Precision  Recall    F1        Accuracy
1      No log         0.724926         0.364474   0.390508  0.377042  0.758436
2      No log         0.585039         0.452903   0.494831  0.472940  0.807228
3      No log         0.519152         0.494885   0.545583  0.518999  0.828796
4      No log         0.504173         0.520788   0.559211  0.539316  0.834835
5      No log         0.494351         0.520052   0.566729  0.542388  0.836561


The following columns in the evaluation set  don't have a corresponding argument in `RobertaForTokenClassification.forward` and have been ignored: spans, tokens, ner_tags, langs.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 16
The following columns in the evaluation set  don't have a corresponding argument in `RobertaForTokenClassification.forward` and have been ignored: spans, tokens, ner_tags, langs.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 16
The following columns in the evaluation set  don't have a corresponding argument in `RobertaForTokenClassification.forward` and have been ignored: spans, tokens, ner_tags, langs.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 16
The following columns in the evaluation set  don't have a corresponding argument in `RobertaForTokenClassification.forward` and have been ignored: spans, tokens, ner_tags, langs.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 16
The following columns in the evaluation set  don't have a corresponding argument in `RobertaForTokenClassification.forward` and have been ignored: spans, tokens, ner_tags, langs.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 16


Training completed. Do not forget to share your model on huggingface.co/models =)


TrainOutput(global_step=315, training_loss=0.5451592823815724, metrics={'train_runtime': 1204.9135, 'train_samples_per_second': 4.15, 'train_steps_per_second': 0.261, 'total_flos': 40232543021088.0, 'train_loss': 0.5451592823815724, 'epoch': 5.0})

The evaluate method allows you to evaluate again on the evaluation dataset or on another dataset:

trainer.evaluate()
The following columns in the evaluation set  don't have a corresponding argument in `RobertaForTokenClassification.forward` and have been ignored: spans, tokens, ner_tags, langs.
***** Running Evaluation *****
  Num examples = 1000
  Batch size = 16
[63/63 00:44]
{'epoch': 5.0,
 'eval_accuracy': 0.8365605828220859,
 'eval_f1': 0.5423881268270744,
 'eval_loss': 0.49435117840766907,
 'eval_precision': 0.5200517464424321,
 'eval_recall': 0.5667293233082706,
 'eval_runtime': 45.3099,
 'eval_samples_per_second': 22.07,
 'eval_steps_per_second': 1.39}
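Since WikiANN also provides a test split, you could, for instance, evaluate the fine-tuned model on it instead:

# Evaluate on the held-out test split rather than the validation split.
trainer.evaluate(tokenized_datasets["test"])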

To get the precision/recall/f1 computed for each category now that we have finished training, we can apply the same function as before on the result of the predict method:

predictions, labels, _ = trainer.predict(tokenized_datasets["validation"])
predictions = np.argmax(predictions, axis=2)

# Remove ignored index (special tokens)
true_predictions = [
    [label_list[p] for (p, l) in zip(prediction, label) if l != -100]
    for prediction, label in zip(predictions, labels)
]
true_labels = [
    [label_list[l] for (p, l) in zip(prediction, label) if l != -100]
    for prediction, label in zip(predictions, labels)
]

results = metric.compute(predictions=true_predictions, references=true_labels)
results
The following columns in the test set  don't have a corresponding argument in `RobertaForTokenClassification.forward` and have been ignored: spans, tokens, ner_tags, langs.
***** Running Prediction *****
  Num examples = 1000
  Batch size = 16
[63/63 01:29]
{'LOC': {'f1': 0.602130616025938,
  'number': 1026,
  'precision': 0.5736981465136805,
  'recall': 0.6335282651072125},
 'ORG': {'f1': 0.45705024311183146,
  'number': 572,
  'precision': 0.4259818731117825,
  'recall': 0.493006993006993},
 'PER': {'f1': 0.5199240986717268,
  'number': 530,
  'precision': 0.5229007633587787,
  'recall': 0.5169811320754717},
 'overall_accuracy': 0.8365605828220859,
 'overall_f1': 0.5423881268270744,
 'overall_precision': 0.5200517464424321,
 'overall_recall': 0.5667293233082706}

You can now upload the result of the training to the Hub; just execute this instruction:

trainer.push_to_hub()
Saving model checkpoint to BERTreach-finetuned-ner
Configuration saved in BERTreach-finetuned-ner/config.json
Model weights saved in BERTreach-finetuned-ner/pytorch_model.bin
tokenizer config file saved in BERTreach-finetuned-ner/tokenizer_config.json
Special tokens file saved in BERTreach-finetuned-ner/special_tokens_map.json
To https://huggingface.co/jimregan/BERTreach-finetuned-ner
   cbc2561..d938626  main -> main

To https://huggingface.co/jimregan/BERTreach-finetuned-ner
   d938626..bc9642b  main -> main

'https://huggingface.co/jimregan/BERTreach-finetuned-ner/commit/d938626d52f5779f475e84e8c628740fda278353'

You can now share this model with all your friends, family, and favorite pets: they can all load it with the identifier "your-username/the-name-you-picked", so for instance:

from transformers import AutoModelForTokenClassification

model = AutoModelForTokenClassification.from_pretrained("jimregan/BERTreach-finetuned-ner")
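For a quick sanity check, you could also run the fine-tuned checkpoint through a token-classification pipeline (a sketch; the example sentence is just for illustration):

from transformers import pipeline

# Group subtokens back into entity spans with aggregation_strategy="simple".
ner = pipeline(
    "token-classification",
    model="jimregan/BERTreach-finetuned-ner",
    aggregation_strategy="simple",
)
ner("Rugadh Pádraig Mac Piarais i mBaile Átha Cliath sa bhliain 1879.")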