Loss Overview
Loss functions play a critical role in the performance of your fine-tuned model. Sadly, there is no “one size fits all” loss function. Ideally, this overview should help narrow down your choice of loss function(s) by matching them to your data formats.
Note: you can often convert one training data format into another, allowing more loss functions to be viable for your scenario. For example, (sentence_A, sentence_B) pairs with class labels can be converted into (anchor, positive, negative) triplets by sampling sentences with the same or different classes.
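As a minimal sketch of that conversion (the function name and data layout are hypothetical, not part of the Sentence Transformers API):

```python
import random
from collections import defaultdict

def pairs_to_triplets(pairs, labels, seed=42):
    """Turn (sentence_A, sentence_B) pairs with class labels into
    (anchor, positive, negative) triplets by sampling sentences
    with the same or different classes."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for (sent_a, sent_b), label in zip(pairs, labels):
        by_class[label].extend([sent_a, sent_b])

    triplets = []
    for label, sentences in by_class.items():
        # Negatives come from every other class
        others = [s for c, group in by_class.items() if c != label for s in group]
        if len(sentences) < 2 or not others:
            continue  # need at least one positive and one negative
        for i, anchor in enumerate(sentences):
            positive = rng.choice(sentences[:i] + sentences[i + 1:])
            negative = rng.choice(others)
            triplets.append((anchor, positive, negative))
    return triplets
```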
Texts | Labels | Appropriate Loss Functions |
---|---|---|
single sentences | class | BatchAllTripletLoss, BatchHardSoftMarginTripletLoss, BatchHardTripletLoss, BatchSemiHardTripletLoss |
single sentences | none | ContrastiveTensionLoss, DenoisingAutoEncoderLoss |
(anchor, anchor) pairs | none | ContrastiveTensionLossInBatchNegatives |
(damaged_sentence, original_sentence) pairs | none | DenoisingAutoEncoderLoss |
(sentence_A, sentence_B) pairs | class | SoftmaxLoss |
(anchor, positive) pairs | none | CachedMultipleNegativesRankingLoss, MultipleNegativesRankingLoss, MultipleNegativesSymmetricRankingLoss, MegaBatchMarginLoss, CachedGISTEmbedLoss, GISTEmbedLoss |
(anchor, positive/negative) pairs | 1 if positive, 0 if negative | ContrastiveLoss, OnlineContrastiveLoss |
(sentence_A, sentence_B) pairs | float similarity score | CoSENTLoss, AnglELoss, CosineSimilarityLoss |
(anchor, positive, negative) triplets | none | CachedMultipleNegativesRankingLoss, MultipleNegativesRankingLoss, TripletLoss, CachedGISTEmbedLoss, GISTEmbedLoss |
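All of these losses plug into training the same way. As a minimal sketch for the (anchor, positive) row, assuming the sentence-transformers v3+ Trainer API and placeholder data:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, SentenceTransformerTrainer, losses

model = SentenceTransformer("microsoft/mpnet-base")

# (anchor, positive) pairs without labels; the column names map to the loss inputs
train_dataset = Dataset.from_dict({
    "anchor": ["What is the capital of France?", "How do planes fly?"],
    "positive": ["Paris is the capital of France.", "Wings generate lift as air flows over them."],
})

loss = losses.MultipleNegativesRankingLoss(model)

trainer = SentenceTransformerTrainer(model=model, train_dataset=train_dataset, loss=loss)
trainer.train()
```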
Loss modifiers
These loss functions can be seen as loss modifiers: they work on top of standard loss functions, but apply those loss functions in different ways to try to instil useful properties into the trained embedding model. For example, models trained with MatryoshkaLoss produce embeddings whose size can be truncated without notable losses in performance, and models trained with AdaptiveLayerLoss still perform well when you remove model layers for faster inference.
Texts | Labels | Appropriate Loss Functions |
---|---|---|
any | any | MatryoshkaLoss, AdaptiveLayerLoss, Matryoshka2dLoss |
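In usage, a modifier wraps a base loss rather than replacing it. A minimal sketch, assuming MultipleNegativesRankingLoss as the base loss and illustrative dimensionalities:

```python
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("microsoft/mpnet-base")

# Any standard loss can serve as the base loss
base_loss = losses.MultipleNegativesRankingLoss(model)

# Train at several embedding sizes so truncated embeddings remain useful;
# the dimensions here are examples and should not exceed the model's output size
loss = losses.MatryoshkaLoss(model, base_loss, matryoshka_dims=[768, 512, 256, 128, 64])
```

AdaptiveLayerLoss and Matryoshka2dLoss wrap a base loss in the same fashion.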
Distillation
These loss functions are specifically designed for distilling the knowledge from one model into another: for example, when fine-tuning a small model to behave more like a larger, stronger one, or when fine-tuning a model to become multilingual.
Texts | Labels | Appropriate Loss Functions |
---|---|---|
single sentences | model sentence embeddings | MSELoss |
(query, passage_one, passage_two) triplets | gold_sim(query, passage_one) - gold_sim(query, passage_two) | MarginMSELoss |
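As a rough sketch of the first row (model names and sentences are placeholders, assuming the v3+ Trainer API), the teacher's embeddings serve as regression targets for the student under MSELoss:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses

teacher = SentenceTransformer("all-mpnet-base-v2")     # larger, stronger model
student = SentenceTransformer("microsoft/mpnet-base")  # model being fine-tuned

train_dataset = Dataset.from_dict({
    "sentence": ["The cat sits on the mat.", "Dogs are loyal animals."],
})

# Precompute teacher embeddings; MSELoss pulls the student's embeddings toward them
def add_teacher_labels(batch):
    return {"label": teacher.encode(batch["sentence"])}

train_dataset = train_dataset.map(add_teacher_labels, batched=True)

loss = losses.MSELoss(student)
```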
Commonly used Loss Functions
In practice, not all loss functions get used equally often. The most common scenarios are:
- (anchor, positive) pairs without any labels: MultipleNegativesRankingLoss is commonly used to train the top performing embedding models. This data is often relatively cheap to obtain, and the models are generally very performant. CachedMultipleNegativesRankingLoss is often used to increase the batch size, resulting in superior performance.
- (sentence_A, sentence_B) pairs with a float similarity score: CosineSimilarityLoss is traditionally used a lot, though more recently CoSENTLoss and AnglELoss are used as drop-in replacements with superior performance (see the sketch below).
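For the second scenario, a minimal sketch with placeholder sentences and scores (assumed to lie in [0, 1]); swapping in losses.CosineSimilarityLoss(model) would require no data changes:

```python
from datasets import Dataset
from sentence_transformers import SentenceTransformer, losses

model = SentenceTransformer("microsoft/mpnet-base")

# (sentence_A, sentence_B) pairs with a float similarity score
train_dataset = Dataset.from_dict({
    "sentence_A": ["A plane is taking off.", "A man is playing a flute."],
    "sentence_B": ["An air plane is taking off.", "A man is playing a guitar."],
    "score": [1.0, 0.3],
})

# CoSENTLoss as a drop-in replacement for CosineSimilarityLoss
loss = losses.CoSENTLoss(model)
```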