Pre-trained ELMo Representations for Many Languages
Documents, papers and code related to Natural Language Processing, including topic models, word embeddings, named entity recognition, text classification, text generation, text similarity, machine translation, etc. All code is implemented in TensorFlow 2.0.
Open Source Pre-training Model Framework in PyTorch & Pre-trained Model Zoo
BERT-NER (nert-bert) with Google BERT (https://github.com/google-research).
A short tutorial on ELMo training (pre-trained models, training on new data, incremental training).
A list of pretrained Transformer models for the Russian language.
Keras implementation of aspect-based sentiment analysis.
Reference TensorFlow code for named entity tagging.
A Chinese keyphrase extraction method based on pre-trained language models (Chinese implementation of the paper "SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model").
Cross-Lingual Alignment of Contextual Word Embeddings
Source code for "Head-Driven Phrase Structure Grammar Parsing on Penn Treebank" published at ACL 2019
Generating Chinese character embeddings from pre-trained models, and testing the effectiveness of BERT and ELMo on Chinese.
Exploring the simple sentence similarity measurements using word embeddings
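As a hedged sketch of the simple averaged-embedding approach such repos explore (toy hand-made vectors stand in for real pre-trained embeddings like GloVe or ELMo; the words and values here are illustrative assumptions, not from any real model):

```python
import numpy as np

# Toy word vectors standing in for real pre-trained embeddings;
# in practice these would be loaded from an embedding file.
EMBEDDINGS = {
    "cat":   np.array([0.9, 0.1, 0.0]),
    "dog":   np.array([0.8, 0.2, 0.1]),
    "car":   np.array([0.0, 0.1, 0.9]),
    "sat":   np.array([0.4, 0.5, 0.1]),
    "drove": np.array([0.1, 0.4, 0.8]),
}

def sentence_vector(sentence):
    """Average the vectors of known tokens -- a common simple baseline."""
    vectors = [EMBEDDINGS[w] for w in sentence.lower().split() if w in EMBEDDINGS]
    return np.mean(vectors, axis=0)

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_related = cosine_similarity(sentence_vector("cat sat"), sentence_vector("dog sat"))
sim_unrelated = cosine_similarity(sentence_vector("cat sat"), sentence_vector("car drove"))
print(sim_related, sim_unrelated)  # related pair should score higher
```

With contextual embeddings such as ELMo, the only change is that each token's vector depends on its sentence, so the same word can average in differently across contexts.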
Dice.com repo to accompany the dice.com 'Vectors in Search' talk by Simon Hughes, from the Activate 2018 search conference, and the 'Searching with Vectors' talk from Haystack 2019 (US). Builds upon my conceptual search and semantic search work from 2015
This repo contains all the notebooks mentioned in the blog.
Reference PyTorch code for named entity tagging.
A collection of resources on using BERT (https://arxiv.org/abs/1810.04805 ) and related Language Models in production environments.
The code of our paper "SIFRank: A New Baseline for Unsupervised Keyphrase Extraction Based on Pre-trained Language Model"
This is a German ELMo deep contextualized word representation, trained on a dedicated German Wikipedia text corpus.
Minimal code to work with pre-trained ELMo models in TensorFlow
Applied Deep Learning (2019 Spring) @ NTU
A text classification example with Bert/ELMo/GloVe in pytorch
TensorFlow code and pre-trained models for "A Dynamic Word Representation Model Based on Deep Context", which combines ideas from BERT with ELMo's deep contextual word representations.
Visualizing ELMo Contextual Vectors for Word Sense Disambiguation
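A toy sketch of the idea behind that visualization: contextual vectors for the same word in different sentences are projected to 2-D (here with hand-rolled PCA in NumPy) so that the word's senses show up as separate clusters. The vectors below are made-up assumptions; real work would use actual ELMo outputs (1024-dimensional).

```python
import numpy as np

# Hypothetical contextual vectors for "bank" in different sentences;
# real ELMo vectors would come from the model, one per occurrence.
contexts = {
    "river bank (1)": np.array([0.9, 0.1, 0.2, 0.0]),
    "river bank (2)": np.array([0.8, 0.2, 0.1, 0.1]),
    "money bank (1)": np.array([0.1, 0.9, 0.0, 0.2]),
    "money bank (2)": np.array([0.2, 0.8, 0.1, 0.1]),
}

# PCA by hand: center the data, then project onto the top-2 eigenvectors
# of the covariance matrix. Plotting these 2-D points is the visualization.
X = np.stack(list(contexts.values()))
X_centered = X - X.mean(axis=0)
cov = np.cov(X_centered, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
top2 = eigvecs[:, -2:]                   # top-2 principal directions
points_2d = X_centered @ top2

for label, point in zip(contexts, points_2d):
    print(f"{label}: {point.round(3)}")
```

In the 2-D projection, occurrences sharing a sense land near each other while the two senses separate, which is exactly what makes contextual (as opposed to static) embeddings useful for word sense disambiguation.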
TensorFlow QANet with ELMo.
Ideally, we'd support something like mnli += {pretrain_data_fraction = 0.5} together with pretrain_tasks = {mnli,boolq}. Currently, pretrain_data_fraction is a single global argument shared by all pretraining tasks.
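Spelled out side by side, the proposed per-task override versus today's behavior might look like this (hypothetical syntax sketched in the same HOCON-like style as the snippet above; the override form is not currently supported):

```
# Hypothetical per-task override (not currently supported):
pretrain_tasks = "mnli,boolq"
mnli += {pretrain_data_fraction = 0.5}   # would apply only to MNLI

# Current behavior: one global fraction for every pretraining task
pretrain_data_fraction = 0.5
```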