Not at all. I should have made clear in the post how I did the extraction.

I extracted the embeddings from a PyTorch model (the pytorch_model.bin file). The code I used is pasted below. It assumes the embeddings are stored under the key bert.embeddings.word_embeddings.weight; you can print out all the keys in your clinicalbert model to see what the key name actually is.
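If you're unsure of the exact key, a quick way to inspect a checkpoint is to load it and print its keys. Here is a minimal self-contained sketch; it saves a tiny dummy state dict in place of your real clinicalbert pytorch_model.bin, purely to illustrate the inspection step:

```python
import torch

# Dummy stand-in for a real clinicalbert checkpoint (hypothetical
# keys/shapes), used only so this snippet runs on its own.
dummy = {
    "bert.embeddings.word_embeddings.weight": torch.zeros(5, 8),
    "bert.encoder.layer.0.attention.self.query.weight": torch.zeros(8, 8),
}
torch.save(dummy, "./dummy_model.bin")

# Load the checkpoint on CPU and print every parameter name it contains.
md = torch.load("./dummy_model.bin", map_location="cpu")
for key in md:
    print(key)
```

With a real checkpoint, you would replace "./dummy_model.bin" with the path to your pytorch_model.bin and skip the torch.save step.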

import torch

# Load the state dict on CPU; keys map parameter names to tensors.
md = torch.load("./pytorch_model.bin", map_location="cpu")

for k in md:
    if k == "bert.embeddings.word_embeddings.weight":
        embeds = md[k]
        # Print one embedding vector per line, values space-separated
        # and rounded to 6 decimal places.
        for l in range(len(embeds)):
            vector = embeds[l]
            for m in range(len(vector)):
                print(round(vector[m].tolist(), 6), end=' ')
            print()

Machine learning practitioner