Natural language processing
Natural language processing (NLP) is a field of computer science concerned with the interactions between computers and human language. In the 1950s, Alan Turing published an article that proposed a measure of machine intelligence, now called the Turing test. More modern techniques, such as deep learning, have achieved state-of-the-art results in language modeling, parsing, and many other natural-language tasks.
Here are 18,386 public repositories matching this topic...
AiLearning: data analysis + hands-on machine learning + linear algebra + PyTorch + NLTK + TF2
Updated Mar 19, 2022 - Python
TensorFlow code and pre-trained models for BERT
Updated Feb 26, 2022 - Python
Chinese word segmentation, part-of-speech tagging, named entity recognition, dependency parsing, constituency parsing, semantic dependency parsing, semantic role labeling, coreference resolution, style transfer, semantic similarity, new word discovery, keyphrase extraction, automatic summarization, text classification and clustering, pinyin and simplified/traditional Chinese conversion, natural language processing
Updated Apr 14, 2022 - Python
Oxford Deep NLP 2017 course
Updated Jun 12, 2017
In gensim/models/fasttext.py:
model = FastText(
    vector_size=m.dim,
    window=m.ws,
    epochs=m.epoch,
    negative=m.neg,
    # FIXME: these next 2 lines read in unsupported FB FT modes (loss=3 softmax or loss=4 onevsall,
    # or model=3 supervised)
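For context, this constructor is reached when gensim parses a native fastText .bin file and forwards the header fields (dim, ws, epoch, neg, loss, model) as keyword arguments. A minimal sketch of that entry point, assuming gensim 4.x; the model path is a placeholder, not a real artifact:

from gensim.models.fasttext import load_facebook_model

# Parses dim/ws/epoch/neg (and the loss/model modes flagged in the FIXME above)
# from the binary header, then calls the FastText constructor quoted above.
ft = load_facebook_model("model.bin")  # placeholder path
print(ft.vector_size, ft.window, ft.epochs)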
Describe the bug
Streaming Datasets can't be pickled, so any interaction between them and multiprocessing results in a crash.
Steps to reproduce the bug
import transformers
from transformers import Trainer, AutoModelForCausalLM, TrainingArguments
import datasets
ds = datasets.load_dataset('oscar', "unshuffled_deduplicated_en", split='train', streaming=True).with_format("torch")
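The Trainer is not actually needed to trigger the failure; multiprocessing ships objects to worker processes via pickle, so a minimal sketch of the crash (assuming the same streaming dataset as above) is to pickle the IterableDataset directly:

import pickle
import datasets

ds = datasets.load_dataset(
    'oscar', "unshuffled_deduplicated_en", split='train', streaming=True
)
# Expected to raise a pickling TypeError, mirroring the crash described above.
pickle.dumps(ds)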
A comprehensive list of PyTorch-related content on GitHub, such as different models, implementations, helper libraries, tutorials, etc.
Updated Mar 23, 2022
This project covers the knowledge points and code implementations commonly tested in Machine Learning, Deep Learning, and NLP interviews; it is also the theoretical foundation every algorithm engineer should master.
Updated Apr 1, 2022 - Jupyter Notebook
A very simple framework for state-of-the-art Natural Language Processing (NLP)
Updated Apr 14, 2022 - Python
Is your feature request related to a problem? Please describe.
I typically use compressed datasets (e.g. gzipped) to save disk space. This works fine with AllenNLP during training because I can write my dataset reader to load the compressed data. However, the predict command opens the file and reads lines directly for the Predictor, which fails when it tries to load data from my compressed files.
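A reader-side workaround, pending support in the predict command, is to open files transparently based on their extension. A minimal sketch; smart_open_path is a hypothetical helper, not AllenNLP API:

import gzip
from typing import IO

def smart_open_path(path: str) -> IO[str]:
    # gzip.open in text mode yields str lines, matching plain open in "rt" mode,
    # so a DatasetReader._read can iterate lines identically in both cases.
    if path.endswith(".gz"):
        return gzip.open(path, "rt", encoding="utf-8")
    return open(path, "rt", encoding="utf-8")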
Rather than simply caching nltk_data until the cache expires and then being forced to re-download all of nltk_data, we should check index.xml and refresh the cache only when it differs from the previously cached version.
I would advise doing this in the same way that it's done for requirements.txt:
https://github.com/nltk/nltk/blob/59aa3fb88c04d6151f2409b31dcfe0f332b0c9ca/.github/wor
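A minimal sketch of the proposed staleness check; the index URL and cache path below are illustrative assumptions, not the workflow's actual configuration:

import hashlib
import pathlib
import urllib.request

INDEX_URL = "https://raw.githubusercontent.com/nltk/nltk_data/gh-pages/index.xml"  # assumed location
CACHED_INDEX = pathlib.Path(".cache/nltk_data_index.xml")  # assumed cache path

def nltk_cache_is_stale() -> bool:
    # Re-download nltk_data only when the upstream index no longer matches
    # the previously cached copy.
    latest = urllib.request.urlopen(INDEX_URL).read()
    if not CACHED_INDEX.exists():
        return True
    return hashlib.sha256(latest).digest() != hashlib.sha256(CACHED_INDEX.read_bytes()).digest()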
Natural Language Processing Tutorial for Deep Learning Researchers
Updated Jul 25, 2021 - Jupyter Notebook
modest natural-language processing
Updated Apr 14, 2022 - JavaScript
This repository contains code examples for the Stanford course TensorFlow for Deep Learning Research.
Updated Dec 22, 2020 - Python
500 AI Machine learning Deep learning Computer vision NLP Projects with code
Updated Jul 6, 2021
Stanford CoreNLP: A Java suite of core NLP tools.
Updated Apr 14, 2022 - Java
Awesome pre-trained models toolkit based on PaddlePaddle (300+ models covering image, text, audio, and video, with easy inference and serving deployment).
Updated Apr 13, 2022 - Python
All kinds of text classification models, and more, with deep learning.
Updated Nov 2, 2021 - Python
Large Scale Chinese Corpus for NLP
Updated Oct 22, 2020
Pre-Training with Whole Word Masking for Chinese BERT (the Chinese BERT-wwm model series)
Updated Mar 30, 2022 - Python
A PyTorch implementation of the Transformer model from "Attention Is All You Need".
Updated Apr 3, 2022 - Python


Several tokenizers currently have no associated tests. Adding the test file for one of these tokenizers could be a very good way to make a first contribution to transformers; a rough skeleton is sketched after the list below.
Tokenizers concerned
not yet claimed
LED
RemBert
MobileBert
ConvBert
RetriBert
claimed
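A rough skeleton of what such a test might look like, modeled loosely on the round-trip checks in existing tokenizer test files. The class name, checkpoint id, and assertion are illustrative assumptions; the repo's real tests build on a shared tester mixin:

import unittest

from transformers import LEDTokenizer

class LEDTokenizationSmokeTest(unittest.TestCase):
    def test_encode_decode_roundtrip(self):
        # "allenai/led-base-16384" is a public LED checkpoint used for illustration.
        tok = LEDTokenizer.from_pretrained("allenai/led-base-16384")
        text = "Hello world!"
        ids = tok(text).input_ids
        self.assertEqual(tok.decode(ids, skip_special_tokens=True), text)

if __name__ == "__main__":
    unittest.main()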