Thank you very much.

1) Sorry, I was not clear in that line. I will fix that - thank you for pointing it out.

When pre-training, the custom vocabulary was built from scratch using the Transformers library's implementation of WordPiece training for subword creation. Continual pre-training simply starts off with the vocabulary from pre-training. As you rightly pointed out, we could add tokens to the vocabulary when doing continual pre-training if we want to. I did not do that since I had already created a custom vocabulary from scratch during pre-training.
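As a rough sketch of what building such a vocabulary looks like - assuming the tokenizers library that ships alongside Transformers, and a hypothetical single text file corpus.txt as the pre-training corpus:

```python
from tokenizers import BertWordPieceTokenizer

# Train a WordPiece vocabulary from scratch on the raw pre-training corpus
tokenizer = BertWordPieceTokenizer(lowercase=True)
tokenizer.train(
    files=["corpus.txt"],          # hypothetical corpus file
    vocab_size=30522,              # same size as the original BERT vocab
    min_frequency=2,
    special_tokens=["[PAD]", "[UNK]", "[CLS]", "[SEP]", "[MASK]"],
)
tokenizer.save_model("custom_vocab")  # writes vocab.txt for use during pre-training
```

For the continual pre-training case where you keep the original vocabulary but extend it, the usual route would be tokenizer.add_tokens(...) followed by model.resize_token_embeddings(len(tokenizer)) so the embedding matrix grows to cover the new entries.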

2) Breaking your question down into two parts: "When we pre-train from scratch, what is the impact of random initialization on (a) the vocab vectors and (b) the model layer weights?"

(a) The vocab vectors are nearly orthogonal simply by virtue of random initialization, because of their high dimensionality. (b) I didn't even think of this point - how are the weights initialized in BERT? I don't know the answer - my guess is, as usual, random initialization too, but I will check the code to see if there is any specific way of doing it - thank you for asking this.
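A quick numerical sanity check of the near-orthogonality point, using plain NumPy and BERT-base's hidden size of 768 (the sample count and seed here are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 768                                   # BERT-base hidden size
vecs = rng.normal(size=(1000, dim))         # 1000 randomly initialized "vocab vectors"
vecs /= np.linalg.norm(vecs, axis=1, keepdims=True)

cos = vecs @ vecs.T                         # pairwise cosine similarities
off_diag = cos[~np.eye(len(cos), dtype=bool)]
print(off_diag.mean(), np.abs(off_diag).max())
# mean is ~0 and the largest |cosine| stays small, i.e. the vectors are nearly orthogonal
```

With random 768-dimensional vectors, the off-diagonal cosine similarities cluster tightly around 0, which is what "nearly orthogonal by virtue of random initialization" looks like in practice.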

Regarding the NSP loss, it is only implemented in BERT's original code release. I don't see it in the Transformers implementation - this could be because of evidence that the NSP loss is not necessary and is even considered detrimental (can't remember the reference) - hence models like RoBERTa don't even have an NSP loss.
