Natural language processing
Natural language processing (NLP) is a field of computer science concerned with the interactions between computers and human language. In the 1950s, Alan Turing published an article proposing a measure of machine intelligence, now called the Turing test. More recently, techniques such as deep learning have produced strong results in language modeling, parsing, and many other natural-language tasks.
Here are 11,128 public repositories matching this topic...
AiLearning: Machine Learning (ML), Deep Learning (DL), Natural Language Processing (NLP)
Updated Sep 11, 2020 - Python
TensorFlow code and pre-trained models for BERT
Updated Sep 4, 2020 - Python
Natural language processing for Chinese: word segmentation, part-of-speech tagging, named entity recognition, dependency parsing, semantic dependency parsing, new-word discovery, keyphrase extraction, automatic summarization, text classification and clustering, and pinyin and simplified/traditional conversion
Updated Sep 3, 2020 - Python
Oxford Deep NLP 2017 course
Updated Jun 12, 2017
A comprehensive list of PyTorch-related content on GitHub, such as different models, implementations, helper libraries, tutorials, etc.
Updated Sep 8, 2020
This repository contains code examples for the Stanford course "TensorFlow for Deep Learning Research."
Updated Jun 3, 2020 - Python
modest natural-language processing
Updated Sep 11, 2020 - JavaScript
A very simple framework for state-of-the-art Natural Language Processing (NLP)
Updated Sep 11, 2020 - Python
NLTK Source
Updated Sep 11, 2020 - Python
more details at: allenai/allennlp#2264 (comment)
Updated Sep 11, 2020 - TypeScript
Mapping a variable-length sentence to a fixed-length vector using a BERT model
Updated Aug 20, 2020 - Python
Stanford CoreNLP: A Java suite of core NLP tools.
Updated Sep 11, 2020 - Java
Natural Language Processing Tutorial for Deep Learning Researchers
Updated Aug 15, 2020 - Jupyter Notebook
All kinds of text classification models, and more, with deep learning
Updated May 20, 2020 - Python
This project covers the knowledge points and code implementations frequently tested in Machine Learning, Deep Learning, and NLP interviews; it is also the theoretical foundation every algorithm engineer should master.
Updated Apr 20, 2020 - Jupyter Notebook
Hi, I would like to propose a better implementation for `test_indices`: we can remove the unneeded `np.array` casting.
New (cleaner):
test_indices = list(set(range(len(texts))) - set(train_indices))
Old:
test_indices = np.array(list(set(range(len(texts))) - set(train_indices)))
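A minimal sketch of why the two forms are interchangeable here, using hypothetical stand-ins for `texts` and `train_indices` (neither is defined in the snippet above): the set difference already yields exactly the held-out indices, so the extra `np.array` wrapper adds nothing unless downstream code needs an ndarray.

```python
import numpy as np

# Hypothetical data: 10 texts, of which indices 0-6 were picked for training.
texts = [f"doc {i}" for i in range(10)]
train_indices = [0, 1, 2, 3, 4, 5, 6]

# Old version: wraps the list in an unneeded np.array.
old = np.array(list(set(range(len(texts))) - set(train_indices)))

# New version: plain Python list, no NumPy round-trip.
new = list(set(range(len(texts))) - set(train_indices))

# Both hold the same held-out indices; sets are unordered, so sort to compare.
assert sorted(new) == sorted(old.tolist()) == [7, 8, 9]
```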
TensorFlow 2.x tutorials and examples, including CNN, RNN, GAN, Auto-Encoder, Faster R-CNN, GPT, and BERT examples. Introductory example code and hands-on tutorials for TF 2.0.
Updated Aug 30, 2020 - Jupyter Notebook
XLNet: Generalized Autoregressive Pretraining for Language Understanding
Updated Feb 14, 2020 - Python
Large Scale Chinese Corpus for NLP
Updated Dec 1, 2019
Natural Language Processing Best Practices & Examples
Updated Aug 25, 2020 - Python
Created by Alan Turing
- Wikipedia


Currently we have a mixture of negatively and positively formulated arguments, e.g. `no_cuda` and `training` here: https://github.com/huggingface/transformers/blob/0054a48cdd64e7309184a64b399ab2c58d75d4e5/src/transformers/benchmark/benchmark_args_utils.py#L61. We should change all arguments to be positively formulated, e.g. from `no_cuda` to `cuda`. These arguments should
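A minimal sketch of what the proposed flip might look like, using plain `argparse` rather than the dataclass-based benchmark arguments in `transformers` (the backward-compatible `--no_cuda` alias is an assumption, not part of the issue):

```python
import argparse

parser = argparse.ArgumentParser()

# Positively formulated flag: CUDA is on by default, and downstream code
# reads args.cuda directly instead of negating args.no_cuda.
parser.add_argument(
    "--cuda",
    action="store_true",
    default=True,
    help="Run on GPU (positively formulated).",
)
# Hypothetical compatibility shim: the old negative flag still works,
# but simply clears the positive one.
parser.add_argument(
    "--no_cuda",
    dest="cuda",
    action="store_false",
    help="Deprecated alias; disables --cuda.",
)

args = parser.parse_args([])             # default: CUDA enabled
assert args.cuda is True
args = parser.parse_args(["--no_cuda"])  # old flag still understood
assert args.cuda is False
```

Both flags write to the same `cuda` destination, so call sites only ever test the positive name.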