Use a pretrained BERT model to generate sentence vectors or word vectors. BERT-Base, Uncased: 12-layer, 768-hidden, 12-heads, 110M parameters. BERT-Large, Uncased: 24-layer, 1024-hidden, 16-heads, 340M parameters.