Named Entity Recognition (NER)
This notebook is from the AI for Beginners Curriculum.
In this example, we will learn how to train a NER model on the Annotated Corpus for Named Entity Recognition dataset from Kaggle. Before proceeding, please download the ner_dataset.csv file into the current directory.
Preparing the Dataset
We will start by reading the dataset into a dataframe. If you want to learn more about using Pandas, visit the lesson on data processing in our Data Science for Beginners curriculum.
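A minimal loading sketch, assuming the Kaggle file is named ner_dataset.csv and uses latin-1 encoding, with the "Sentence #" column filled in only on the first word of each sentence:

import pandas as pd

# Read the corpus; forward-fill the "Sentence #" column so every row knows its sentence
df = pd.read_csv('ner_dataset.csv', encoding='latin1').ffill()
df.head()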
Let's get unique tags and create lookup dictionaries that we can use to convert tags into class numbers:
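One possible way to build the lookup tables (id2tag and tag2id are hypothetical names); the outputs below show the array of unique tags and the tag corresponding to class 0:

# Map each tag to a class number and back
tags = df['Tag'].unique()
id2tag = dict(enumerate(tags))                  # class number -> tag
tag2id = {tag: i for i, tag in id2tag.items()}  # tag -> class number
tags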
array(['O', 'B-geo', 'B-gpe', 'B-per', 'I-geo', 'B-org', 'I-org', 'B-tim', 'B-art', 'I-art', 'I-per', 'I-gpe', 'I-tim', 'B-nat', 'B-eve', 'I-eve', 'I-nat'], dtype=object)
'O'
Now we need to do the same with the vocabulary. For simplicity, we will create the vocabulary without taking word frequency into account; in real life you might want to use a Keras vectorizer and limit the number of words.
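A simple sketch of building such a vocabulary (word2id and id2word are hypothetical names; id 0 is reserved for padding):

# Assign an integer id to every distinct word; reserve 0 for padding
words = df['Word'].unique()
id2word = dict(enumerate(words, start=1))
word2id = {w: i for i, w in id2word.items()}
vocab_size = len(id2word) + 1   # +1 for the padding id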
We need to create a dataset of sentences for training. Let's loop through the original dataset and separate all individual sentences into X (lists of words) and Y (lists of corresponding tags):
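One way to do this grouping, assuming the forward-filled "Sentence #" column from above:

# Group the flat word/tag table back into sentences
X, Y = [], []
for _, sent in df.groupby('Sentence #', sort=False):
    X.append(list(sent['Word']))
    Y.append(list(sent['Tag']))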
We will now vectorize all words and tags:
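A sketch of the vectorization step, using the hypothetical word2id and tag2id dictionaries defined above; the output below shows the first vectorized sentence and its tag sequence:

# Replace words with vocabulary ids and tags with class numbers
X_vec = [[word2id[w] for w in s] for s in X]
Y_vec = [[tag2id[t] for t in s] for s in Y]
X_vec[0], Y_vec[0]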
([10386, 23515, 4134, 29620, 7954, 13583, 21193, 12222, 27322, 18258, 5815, 15880, 5355, 25242, 31327, 18258, 27067, 23515, 26444, 14412, 358, 26551, 5011, 30558],
 [0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 2, 0, 0, 0, 0, 0])
For simplicity, we will pad all sentences with 0 tokens to the maximum length. In real life, we might want to use a more clever strategy and pad sequences only within one minibatch.
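A padding sketch using Keras' pad_sequences; the maximum sentence length comes out to 104 here, as the model summary below shows:

from tensorflow import keras

maxlen = max(len(s) for s in X_vec)   # longest sentence in the corpus (104 here)

# Pad both inputs and labels with 0 up to the maximum length
X_pad = keras.preprocessing.sequence.pad_sequences(X_vec, maxlen=maxlen, padding='post')
Y_pad = keras.preprocessing.sequence.pad_sequences(Y_vec, maxlen=maxlen, padding='post')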
Defining Token Classification Network
We will use a two-layer bidirectional LSTM network for token classification. In order to apply a dense classifier to each output of the last LSTM layer, we will use the TimeDistributed construction, which applies the same dense layer to each output of the LSTM at every time step:
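One possible definition that is consistent with the summary below (300-dimensional embeddings and 100 LSTM units per direction; the exact hyperparameters are assumptions):

num_tags = len(id2tag)   # 17 tag classes

model = keras.models.Sequential([
    keras.layers.Embedding(vocab_size, 300, input_length=maxlen),
    keras.layers.Bidirectional(keras.layers.LSTM(100, return_sequences=True)),
    keras.layers.Bidirectional(keras.layers.LSTM(100, return_sequences=True)),
    keras.layers.TimeDistributed(keras.layers.Dense(num_tags, activation='softmax'))
])
model.summary()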
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
embedding_4 (Embedding) (None, 104, 300) 9545400
bidirectional_6 (Bidirectional) (None, 104, 200) 320800
bidirectional_7 (Bidirectional) (None, 104, 200) 240800
time_distributed_3 (TimeDistributed) (None, 104, 17) 3417
=================================================================
Total params: 10,110,417
Trainable params: 10,110,417
Non-trainable params: 0
_________________________________________________________________
Note here that we are explicitly specifying maxlen for our dataset - in case we want the network to be able to handle variable-length sequences, we need to be a bit more clever when defining the network.
Let's now train the model. For speed, we will only train for one epoch, but you may try training for a longer time. Also, you may want to set aside part of the dataset as a validation dataset, to observe validation accuracy.
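A training sketch; sparse categorical cross-entropy works here because the labels are integer class numbers, and the 1499 steps in the log below suggest a batch size of 32 (an assumption):

model.compile(optimizer='adam',
              loss='sparse_categorical_crossentropy',
              metrics=['acc'])
model.fit(X_pad, Y_pad, batch_size=32, epochs=1)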
1499/1499 [==============================] - 740s 488ms/step - loss: 0.0667 - acc: 0.9841
<keras.callbacks.History at 0x16f0bb2a310>
Testing the Result
Let's now see how our entity recognition model works on a sample sentence:
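A sketch of the inference step (the recognize helper and its simplistic whitespace tokenization are hypothetical, not part of the original notebook):

def recognize(sentence):
    # Vectorize and pad the sentence, run the model, and decode the predicted tags
    words = sentence.lower().split()
    ids = [word2id.get(w, 0) for w in words]   # unknown words map to the padding id
    inp = keras.preprocessing.sequence.pad_sequences([ids], maxlen=maxlen, padding='post')
    pred = model.predict(inp)[0].argmax(axis=-1)
    for w, t in zip(words, pred):
        print(f"{w} -> {id2tag[t]}")

recognize("john smith went to paris to attend a conference in cancer development institute")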
john -> B-per
smith -> I-per
went -> O
to -> O
paris -> B-geo
to -> O
attend -> O
a -> O
conference -> O
in -> O
cancer -> B-org
development -> I-org
institute -> I-org
Takeaway
Even a simple LSTM model shows reasonable results at NER. However, to get much better results, you may want to use large pre-trained language models such as BERT. Training BERT for NER using the Huggingface Transformers library is described here.