AI Smart Compose for Your Code
Updated Nov 29, 2020 - Shell
Chinese version of GPT2 training code, using BERT tokenizer.
Toolkit for Machine Learning, Natural Language Processing, and Text Generation, in TensorFlow. This is part of the CASL project: http://casl-project.ai/
Kashgari is a production-level NLP transfer-learning framework built on top of tf.keras for text labeling and text classification; includes Word2Vec, BERT, and GPT2 language embeddings.
GPT2 for Chinese chitchat: a GPT2 model for Chinese casual conversation, implementing the MMI idea from DialoGPT.
Large-scale pretraining for dialogue
GPT2 for Multiple Languages, including pretrained models: multilingual GPT2 support, with a 1.5-billion-parameter pretrained Chinese model.
Visual Studio Code client for TabNine. https://marketplace.visualstudio.com/items?itemName=TabNine.tabnine-vscode
Integrating the Best of TF into PyTorch, for Machine Learning, Natural Language Processing, and Text Generation. This is part of the CASL project: http://casl-project.ai/
This Word Does Not Exist
Simple Text-Generator with OpenAI gpt-2 Pytorch Implementation
A curated list of NLP resources focused on BERT, attention mechanism, Transformer networks, and transfer learning.
Vim client for TabNine. https://vimawesome.com/plugin/tabnine-vim
Medical Q&A with Deep Language Models
LightSeq: A High Performance Inference Library for Sequence Processing and Generation
Rust native ready-to-use NLP pipelines and transformer-based models (BERT, DistilBERT, GPT2,...)
✍🏻 gpt2-client: Easy-to-use TensorFlow Wrapper for GPT-2 117M, 345M, 774M, and 1.5B Transformer Models
Text-generation API via GPT-2 for Cloud Run
Your new Telegram buddy based on transformers
Python script to download public Tweets from a given Twitter account into a format suitable for AI text generation.
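Downloader scripts like this typically emit one tweet per record, wrapped in delimiter tokens so a fine-tuning tool can split the corpus back into samples. A minimal sketch of that formatting step, assuming the <|startoftext|>/<|endoftext|> convention used by gpt-2-simple (the actual script's output format may differ):

```python
def tweets_to_corpus(tweets):
    """Join tweets into one training corpus, one delimited block per tweet.

    Uses the <|startoftext|>/<|endoftext|> delimiters that gpt-2-simple
    recognizes when splitting a text file into training samples.
    """
    blocks = []
    for text in tweets:
        text = text.strip()
        if text:  # skip empty/whitespace-only tweets
            blocks.append("<|startoftext|>\n%s\n<|endoftext|>" % text)
    return "\n".join(blocks)

corpus = tweets_to_corpus(["hello world", "", "gpt-2 is fun"])
```

In practice you would write `corpus` to a .txt file and point the fine-tuning script at it.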
OpenAI GPT2 pre-training and sequence prediction implementation in TensorFlow 2.0
A bot that generates realistic replies using a combination of pretrained GPT-2 and BERT models
Transformer language model (GPT-2) with sentencepiece tokenizer
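Subword tokenizers like sentencepiece build their vocabulary by repeatedly merging the most frequent adjacent symbol pair (the byte-pair-encoding idea). A minimal pure-Python sketch of one merge round, for illustration only (not the actual sentencepiece implementation):

```python
from collections import Counter

def most_frequent_pair(tokens):
    """Count adjacent token pairs and return the most common one."""
    pairs = Counter(zip(tokens, tokens[1:]))
    return max(pairs, key=pairs.get)

def merge_pair(tokens, pair):
    """Replace every occurrence of `pair` with a single merged token."""
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) == pair:
            merged.append(tokens[i] + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return merged

# Start from characters and apply a few merge rounds.
tokens = list("low lower lowest")
for _ in range(3):
    tokens = merge_pair(tokens, most_frequent_pair(tokens))
```

After a few rounds, frequent character runs such as "low" become single vocabulary entries; a real tokenizer records the learned merges and replays them at inference time.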
EMNLP 2020: "Dialogue Response Ranking Training with Large-Scale Human Feedback Data"
A list of pretrained Transformer models for the Russian language.
Sublime Text client for TabNine. https://packagecontrol.io/packages/TabNine
Code and UI for running a Magic card text generator API via GPT-2
I'm playing around with this wonderful code, but I'm running into a curious issue when I try to train the model on my own data. I replicated the personachat_self_original.json file structure and added my own data. I deleted the dataset_cache_OpenAIGPTTokenizer file, but when I try to train, I get this error: