Multi-label text classification
The Problem
A multi-label classifier maps an input x to a set of y labels. It’s like running y different binary classification problems, one per label. If y=1 we’re back in plain binary classification territory.
Let’s assume we have a dataset where each row contains a piece of text and a label column that is a comma-separated list of labels. Our assignment is to build a multi-label classifier on this dataset.
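As a made-up illustration of the assumed format (the texts and label names below are invented):

```python
import pandas as pd

# Hypothetical raw data: one row per document, with a
# comma-separated list of labels per row.
df = pd.DataFrame({
    "text": [
        "The router keeps dropping the wifi connection",
        "Refund not processed after returning the item",
    ],
    "labels": ["hardware,networking", "billing,returns"],
})
```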
To build a multi-label classifier, we first need to understand binary cross entropy with logits loss.
Binary Cross Entropy Loss
This loss is designed to take a multi-hot encoded label array with as many columns as there are classes and as many rows as there are samples. A column is 1 if that class applies to the sample and 0 otherwise; multiple columns can be 1 for the same sample. See this notebook for a practical example.
The loss for a single sample $n$, with logit $x_n$ and target $y_n \in \{0, 1\}$, is defined as:

$$\ell_n = -\left[\, y_n \cdot \log \sigma(x_n) + (1 - y_n) \cdot \log\big(1 - \sigma(x_n)\big) \,\right]$$

where $\sigma$ is the sigmoid function. This is reduced over all $N$ training samples:

$$L = \operatorname{mean}(\ell_1, \dots, \ell_N) \quad \text{or} \quad L = \operatorname{sum}(\ell_1, \dots, \ell_N)$$

The default reduction method in PyTorch is ‘mean’.
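As a minimal sketch, here is what this looks like with PyTorch’s `nn.BCEWithLogitsLoss` (the tensor values are made up):

```python
import torch
import torch.nn as nn

# 3 samples, 4 classes: targets are multi-hot, logits are raw model outputs.
logits = torch.tensor([[ 1.2, -0.8,  0.3, -2.0],
                       [-0.5,  2.1, -1.7,  0.9],
                       [ 0.0, -0.3,  1.5, -0.6]])
targets = torch.tensor([[1., 0., 1., 0.],
                        [0., 1., 0., 1.],
                        [1., 0., 1., 0.]])

loss_fn = nn.BCEWithLogitsLoss()          # reduction='mean' by default
loss = loss_fn(logits, targets)           # scalar: mean over all 12 elements

per_element = nn.BCEWithLogitsLoss(reduction="none")(logits, targets)
print(loss, per_element.shape)            # per_element has shape (3, 4)
```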
Weighted case, for class $c$ of sample $n$:

$$\ell_{n,c} = -w_{n,c} \left[\, p_c \, y_{n,c} \cdot \log \sigma(x_{n,c}) + (1 - y_{n,c}) \cdot \log\big(1 - \sigma(x_{n,c})\big) \,\right]$$

There are two weights here: $w_{n,c}$ is the sample weight, used to rescale each sample’s loss, but per-sample weights are hard to obtain in practice. A more practical weight is the positive class weight $p_c$, assigned to each class. Setting $p_c$ at the class level helps assign greater importance to a minority class. Technically, $p_c > 1$ improves recall and $p_c < 1$ improves precision.
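A common heuristic for the positive class weight is the ratio of negative to positive samples per class (this is a sketch with made-up counts, not a prescription):

```python
import torch
import torch.nn as nn

# Hypothetical per-class positive counts over a training set of 1000 samples.
positive_counts = torch.tensor([500., 120., 30.])
negative_counts = 1000. - positive_counts

# pos_weight > 1 up-weights positives of rare classes (pushes recall up).
pos_weight = negative_counts / positive_counts      # ~[1.0, 7.3, 32.3]

loss_fn = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

logits = torch.randn(8, 3)                          # batch of 8, 3 classes
targets = torch.randint(0, 2, (8, 3)).float()
loss = loss_fn(logits, targets)
```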
Computationally, there is also a bit of detail around the log-sum-exp trick, which helps prevent numerical overflow when converting probability values to and from log probabilities, but that’s for another post.
Solution
Process and Prepare Text
Here, we process the text, including any preprocessing and tokenization, and add a column indicating the data split to the original data. Remember to stratify your splits, especially in imbalanced-data scenarios, so that every split sees data points from even the minority classes.
Here you might want to think about:
- What is my problem domain, and what kind of preprocessing do I want? e.g. clean up URLs in text, remove digits and punctuation.
- What kind of text representation do I want to use? This may depend on your latency budget, the importance of case sensitivity for your domain, the training budget, the length of the text, the languages involved, etc.
- What’s the quality of my data? Are my multi-labels well propagated? Are there missing multi-labels?
- To prevent overfitting, do I need strong negative samples that have token overlap with my positive samples?
We can start off with a DistilBERT uncased transformer model.
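A hedged sketch of the preprocessing, split, and tokenization step follows. The file name `data.csv`, the column names, and the cleaning rules are assumptions, and the stratification shown is a rough single-label proxy rather than true multi-label stratification:

```python
import re
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer
from sklearn.model_selection import train_test_split
from transformers import AutoTokenizer

def clean(text: str) -> str:
    # Example preprocessing: strip URLs and digits, collapse whitespace.
    text = re.sub(r"https?://\S+", " ", text)
    text = re.sub(r"\d+", " ", text)
    return re.sub(r"\s+", " ", text).strip()

df = pd.read_csv("data.csv")                       # assumed columns: text, labels
df["text"] = df["text"].map(clean)

# Turn the comma-separated label string into a multi-hot row per sample.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(df["labels"].str.split(","))

# Add a split column. True multi-label stratification needs something like
# iterative stratification; stratifying on the rarest label is a rough proxy.
rarest = y[:, y.sum(axis=0).argmin()]
train_idx, test_idx = train_test_split(df.index, test_size=0.2,
                                       stratify=rarest, random_state=42)
df["split"] = "train"
df.loc[test_idx, "split"] = "test"

# Tokenize the training rows with the DistilBERT uncased tokenizer.
tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
train_mask = (df["split"] == "train").to_numpy()
train_encodings = tokenizer(df.loc[train_mask, "text"].tolist(),
                            truncation=True, padding=True)
y_train = y[train_mask]
```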
Save Data
Once we write up the preprocessing and tokenization, we save the data to S3 in a format that works with the SageMaker Hugging Face Estimator.
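One possible shape for that step, continuing from the sketch above (the key prefix is a placeholder; your bucket and layout will differ):

```python
import sagemaker
from datasets import Dataset

# Build an Arrow dataset from the tokenized training split produced above.
train_ds = Dataset.from_dict({
    "input_ids": train_encodings["input_ids"],
    "attention_mask": train_encodings["attention_mask"],
    "labels": y_train.astype("float32").tolist(),   # BCE expects float targets
})
train_ds.save_to_disk("train")                      # local Arrow files

# Upload to S3 so the SageMaker Hugging Face Estimator can read it as a channel.
sess = sagemaker.Session()
train_s3_uri = sess.upload_data(path="train",
                                bucket=sess.default_bucket(),
                                key_prefix="multilabel/train")
```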
Train with BCE loss
After deriving class weights, we fine-tune DistilBERT for 2-3 epochs. Generally, setting class weights too small (e.g. 0.2) doesn’t help the model learn those classes’ samples in the case of extreme imbalance, so I recommend setting a sensible minimum class weight for each class. Conversely, if one class ends up with a very high weight from the weight calculation, it may be worth reducing it to bring it closer in range to the other weights. A few experimentation loops can help select well-performing class weights.
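One way to wire the positive class weights into a Hugging Face `Trainer` is to subclass it and override `compute_loss` with `BCEWithLogitsLoss(pos_weight=...)`. A sketch with hypothetical weights, reusing `train_ds` from the earlier step:

```python
import torch
from torch import nn
from transformers import (AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

num_labels = 3                                # assumed number of classes
pos_weight = torch.tensor([1.0, 3.5, 8.0])    # hypothetical per-class weights

# problem_type makes the head emit independent per-class logits; we still
# override the loss because the built-in BCE loss has no pos_weight.
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased",
    num_labels=num_labels,
    problem_type="multi_label_classification",
)

class WeightedBCETrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False, **kwargs):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        loss_fn = nn.BCEWithLogitsLoss(
            pos_weight=pos_weight.to(outputs.logits.device))
        loss = loss_fn(outputs.logits, labels.float())
        return (loss, outputs) if return_outputs else loss

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16)
trainer = WeightedBCETrainer(model=model, args=args, train_dataset=train_ds)
trainer.train()
```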
Evaluate and Understand Misclassifications
Finally, we evaluate the model and examine the misclassifications through integrated gradient scores. We want to focus on overfitted tokens and ensure the model sees samples containing those tokens across classes, perhaps as negative samples in other classes.
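One way to get per-token attribution scores is Captum’s `LayerIntegratedGradients` over the DistilBERT embedding layer. A sketch, reusing `model` and `tokenizer` from the earlier steps (the example sentence and target class index are made up):

```python
import torch
from captum.attr import LayerIntegratedGradients

model.eval()

def forward_logits(input_ids, attention_mask):
    # Captum needs a forward function that returns the logits tensor.
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

text = "Refund not processed after returning the item"
enc = tokenizer(text, return_tensors="pt")
baseline_ids = torch.full_like(enc["input_ids"], tokenizer.pad_token_id)

lig = LayerIntegratedGradients(forward_logits, model.distilbert.embeddings)
attributions = lig.attribute(inputs=enc["input_ids"],
                             baselines=baseline_ids,
                             additional_forward_args=(enc["attention_mask"],),
                             target=2)                 # class of interest

# Sum over the embedding dimension to get one score per token.
token_scores = attributions.sum(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"].squeeze(0))
for tok, score in zip(tokens, token_scores.tolist()):
    print(f"{tok:>12s}  {score:+.3f}")
```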