Polish NLP resources
This repository contains pre-trained models and language resources for Natural Language Processing in Polish created during my research. Some of the models are also available on the Hugging Face Hub.
If you'd like to use any of these resources in your research, please cite:
@Misc{polish-nlp-resources,
  author = {S{\l}awomir Dadas},
  title = {A repository of Polish {NLP} resources},
  howpublished = {Github},
  year = {2019},
  url = {https://github.com/sdadas/polish-nlp-resources/}
}
Contents
- Word embeddings
- Language models
- Text encoders
- Machine translation models
- Fine-tuned models
- Dictionaries and lexicons
- Links to external resources
Word embeddings
The following section includes pre-trained word embeddings for Polish. Each model was trained on a corpus consisting of a Polish Wikipedia dump, Polish books, and articles, 1.5 billion tokens in total.
Word2Vec
Word2Vec trained with Gensim: 100 dimensions, negative sampling. The vocabulary contains lemmatized words with 3 or more occurrences in the corpus, plus a set of pre-defined punctuation symbols, all numbers from 0 to 10'000, and Polish first and last names. The archive contains the embeddings in the Gensim binary format. Example of usage:
from gensim.models import KeyedVectors

if __name__ == '__main__':
    word2vec = KeyedVectors.load("word2vec_100_3_polish.bin")
    print(word2vec.similar_by_word("bierut"))
    # [('cyrankiewicz', 0.818274736404419), ('gomułka', 0.7967918515205383), ('raczkiewicz', 0.7757788896560669), ('jaruzelski', 0.7737460732460022), ('pużak', 0.7667238712310791)]
FastText
FastText trained with Gensim. The vocabulary and dimensionality are identical to the Word2Vec model. The archive contains the embeddings in the Gensim binary format. Example of usage:
from gensim.models import KeyedVectors

if __name__ == '__main__':
    word2vec = KeyedVectors.load("fasttext_100_3_polish.bin")
    print(word2vec.similar_by_word("bierut"))
    # [('bieruty', 0.9290274381637573), ('gierut', 0.8921363353729248), ('bieruta', 0.8906412124633789), ('bierutow', 0.8795544505119324), ('bierutowsko', 0.839280366897583)]
GloVe
Global Vectors for Word Representation (GloVe) trained using the reference implementation from Stanford NLP: 100 dimensions, contains lemmatized words with 3 or more occurrences in the corpus. Example of usage:
from gensim.models import KeyedVectors

if __name__ == '__main__':
    word2vec = KeyedVectors.load_word2vec_format("glove_100_3_polish.txt")
    print(word2vec.similar_by_word("bierut"))
    # [('cyrankiewicz', 0.8335597515106201), ('gomułka', 0.7793121337890625), ('bieruta', 0.7118682861328125), ('jaruzelski', 0.6743760108947754), ('minc', 0.6692837476730347)]
High dimensional word vectors
Pre-trained vectors using the same vocabulary as above but with higher dimensionality. These vectors are better suited for representing larger chunks of text, such as sentences or documents, with simple word aggregation methods (averaging, max pooling, etc.), since more semantic information is preserved this way. A minimal aggregation sketch follows the download links below.
- GloVe: 300d - Part 1 (GitHub); 500d - Part 1 (GitHub), Part 2 (GitHub); 800d - Part 1 (GitHub), Part 2 (GitHub), Part 3 (GitHub)
- Word2Vec: 300d (OneDrive), 500d (OneDrive), 800d (OneDrive)
- FastText: 300d (OneDrive), 500d (OneDrive), 800d (OneDrive)
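For illustration only, below is a minimal sketch of averaging word vectors into a sentence vector. It assumes the 300-dimensional Word2Vec model has been downloaded and saved as word2vec_300_3_polish.bin; the file name and the helper function are assumptions for this example, not part of the released archives.

import numpy as np
from gensim.models import KeyedVectors

def sentence_vector(words, model):
    # Average the vectors of in-vocabulary words; fall back to a zero vector
    # when none of the words are known to the model.
    vectors = [model[word] for word in words if word in model]
    if not vectors:
        return np.zeros(model.vector_size)
    return np.mean(vectors, axis=0)

if __name__ == '__main__':
    # File name below is an assumption for the 300d model.
    word2vec = KeyedVectors.load("word2vec_300_3_polish.bin")
    print(sentence_vector(["drewniany", "stół"], word2vec))

Max pooling works the same way with np.max in place of np.mean.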
Compressed Word2Vec
This is a compressed version of the Word2Vec embedding model described above. For compression, we used the method described in Compressing Word Embeddings via Deep Compositional Code Learning by Shu and Nakayama. Compressed embeddings are suited for deployment on storage-constrained devices such as mobile phones. The model weighs 38 MB, only 4.4% of the size of the original Word2Vec embeddings. Although the authors of the article claim that compression with their method does not hurt model performance, we noticed a slight but acceptable drop in accuracy when using the compressed version of the embeddings. Sample decoder class with usage:
import gzip
from typing import Dict, Callable

import numpy as np


class CompressedEmbedding(object):

    def __init__(self, vocab_path: str, embedding_path: str, to_lowercase: bool=True):
        self.vocab_path: str = vocab_path
        self.embedding_path: str = embedding_path
        self.to_lower: bool = to_lowercase
        self.vocab: Dict[str, int] = self.__load_vocab(vocab_path)
        embedding = np.load(embedding_path)
        self.codes: np.ndarray = embedding[embedding.files[0]]
        self.codebook: np.ndarray = embedding[embedding.files[1]]
        self.m = self.codes.shape[1]
        self.k = int(self.codebook.shape[0] / self.m)
        self.dim: int = self.codebook.shape[1]

    def __load_vocab(self, vocab_path: str) -> Dict[str, int]:
        open_func: Callable = gzip.open if vocab_path.endswith(".gz") else open
        with open_func(vocab_path, "rt", encoding="utf-8") as input_file:
            return {line.strip(): idx for idx, line in enumerate(input_file)}

    def vocab_vector(self, word: str):
        if word == "<pad>": return np.zeros(self.dim)
        val: str = word.lower() if self.to_lower else word
        index: int = self.vocab.get(val, self.vocab["<unk>"])
        codes = self.codes[index]
        code_indices = np.array([idx * self.k + offset for idx, offset in enumerate(np.nditer(codes))])
        return np.sum(self.codebook[code_indices], axis=0)


if __name__ == '__main__':
    word2vec = CompressedEmbedding("word2vec_100_3.vocab.gz", "word2vec_100_3.compressed.npz")
    print(word2vec.vocab_vector("bierut"))
Wikipedia2Vec
Wikipedia2Vec is a toolkit for learning joint representations of words and Wikipedia entities. We share Polish embeddings learned using a modified version of the library in which we added lemmatization and fixed some issues regarding parsing wiki dumps for languages other than English. Embedding models are available in sizes from 100 to 800 dimensions. A simple example:
from wikipedia2vec import Wikipedia2Vec
wiki2vec = Wikipedia2Vec.load("wiki2vec-plwiki-100.bin")
print(wiki2vec.most_similar(wiki2vec.get_entity("Bolesław Bierut")))
# (<Entity Bolesław Bierut>, 1.0), (<Word bierut>, 0.75790733), (<Word gomułka>, 0.7276504),
# (<Entity Krajowa Rada Narodowa>, 0.7081445), (<Entity Władysław Gomułka>, 0.7043667) [...]
Download embeddings: 100d, 300d, 500d, 800d.
Language models
ELMo
Embeddings from Language Models (ELMo) is a contextual embedding method presented in Deep contextualized word representations by Peters et al. Sample usage with PyTorch below; for more detailed instructions on integrating ELMo with your model, please refer to the official repositories github.com/allenai/bilm-tf (TensorFlow) and github.com/allenai/allennlp (PyTorch).
from allennlp.commands.elmo import ElmoEmbedder
elmo = ElmoEmbedder("options.json", "weights.hdf5")
print(elmo.embed_sentence(["Zażółcić", "gęślą", "jaźń"]))
RoBERTa
A language model for Polish based on the popular transformer architecture. We provide weights for an improved BERT language model introduced in RoBERTa: A Robustly Optimized BERT Pretraining Approach. Two RoBERTa models for Polish are available: a base and a large model. A summary of pre-training parameters for each model is shown in the table below. We release two versions of each model: one in the Fairseq format and the other in the Hugging Face Transformers format. More information about the models can be found in a separate repository.
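As a quick illustration, the sketch below loads the base model with the Hugging Face Transformers library. The checkpoint identifier used here (sdadas/polish-roberta-base-v1) is an assumption; check the separate repository or the Hugging Face Hub for the exact model names.

from transformers import AutoTokenizer, AutoModel

if __name__ == '__main__':
    # The model name below is an assumption; see the model repository for the released checkpoints.
    model_name = "sdadas/polish-roberta-base-v1"
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModel.from_pretrained(model_name)
    inputs = tokenizer("Zażółcić gęślą jaźń", return_tensors="pt")
    outputs = model(**inputs)
    print(outputs.last_hidden_state.shape)  # contextual token embeddings: (batch, tokens, hidden size)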