transformer
Here are 2,099 public repositories matching this topic...
Updated Aug 10, 2022
Natural Language Processing Tutorial for Deep Learning Researchers
Updated Jul 25, 2021 - Jupyter Notebook
A collection of CVPR 2022 papers and open-source projects
Updated Aug 7, 2022
Bidirectional RNN
Is there a way to train a bidirectional RNN (like LSTM or GRU) on trax nowadays?
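I'm not aware of a built-in bidirectional wrapper in trax, but conceptually a bidirectional RNN is just one RNN run left-to-right and another run right-to-left, with the per-timestep hidden states concatenated. A minimal sketch of that idea in plain NumPy (not trax code; all names are mine):

```python
import numpy as np

def simple_rnn(xs, W_x, W_h, h0):
    """Minimal tanh RNN: returns the hidden state at every timestep."""
    h, out = h0, []
    for x in xs:
        h = np.tanh(W_x @ x + W_h @ h)
        out.append(h)
    return np.stack(out)

def bidirectional(xs, fwd_params, bwd_params, h0):
    """Run one RNN forward and one backward, concat per-timestep states."""
    fwd = simple_rnn(xs, *fwd_params, h0)
    bwd = simple_rnn(xs[::-1], *bwd_params, h0)[::-1]  # reverse back to align
    return np.concatenate([fwd, bwd], axis=-1)

rng = np.random.default_rng(0)
d_in, d_h, T = 4, 8, 6
xs = rng.normal(size=(T, d_in))
params = lambda: (rng.normal(size=(d_h, d_in)), rng.normal(size=(d_h, d_h)))
out = bidirectional(xs, params(), params(), np.zeros(d_h))
print(out.shape)  # (6, 16): hidden size doubles after concatenation
```

The same wiring could in principle be expressed with trax combinators, but I haven't verified which ones the current release provides.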
Code for the paper "Jukebox: A Generative Model for Music"
Updated Sep 10, 2021 - Python
Easy-to-use image segmentation library with awesome pre-trained model zoo, supporting wide-range of practical tasks in Semantic Segmentation, Interactive Segmentation, Panoptic Segmentation, Image Matting, 3D Segmentation, etc.
Updated Aug 10, 2022 - Python
Chinese version of GPT2 training code, using BERT tokenizer.
Updated Mar 17, 2022 - Python
https://github.com/PaddlePaddle/PaddleNLP/tree/develop/examples/machine_translation/transformer
In our Transformer-based machine translation example, the data preparation step only supplies a preprocessed WMT14 en-de dataset; it does not show readers how to process the data themselves or what each step should look like. Could every step be spelled out, as fairseq does (https://github.com/facebookresearch/fairseq/tree/main/examples/translation)?
Also, PaddleNLP's data processing looks tightly coupled to the code: whenever I want to train a model on a new machine translation dataset, I first have to write new code in PaddleNLP to support that dataset. From a user's perspective, this is not
chooses 15% of tokens
The paper states:
"Instead, the training data generator chooses 15% of tokens at random, e.g., in the sentence my dog is hairy it chooses hairy."
This reads as if exactly 15% of the tokens are always chosen. However, in https://github.com/codertimo/BERT-pytorch/blob/master/bert_pytorch/dataset/dataset.py#L68, each token independently has a 15% chance of going through the follow-up masking procedure.
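The difference between the two readings is easy to demonstrate: sampling exactly 15% of positions gives a fixed count, while an independent 15% coin flip per token (as in the linked dataset.py) gives a count that is only 15% on average. A minimal sketch in plain Python (not the repo's code):

```python
import random

random.seed(0)
n = 1000  # pretend corpus of 1,000 tokens

# Reading 1: the generator picks exactly 15% of positions.
exact = set(random.sample(range(n), int(0.15 * n)))

# Reading 2 (what dataset.py does): every token independently has
# a 15% chance of entering the masking procedure.
independent = {i for i in range(n) if random.random() < 0.15}

print(len(exact))        # always 150
print(len(independent))  # varies around 150 from run to run
```

In practice the two behave very similarly for long corpora; the per-token coin flip is simply cheaper to implement in a streaming dataset.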
PositionalEmbedding
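The issue title above gives no further context, but for reference, the sinusoidal positional embedding from "Attention Is All You Need" (which, to my understanding, is what BERT-pytorch's PositionalEmbedding module implements) can be sketched in NumPy as follows; the function name is mine:

```python
import numpy as np

def sinusoidal_positions(max_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(same)."""
    pos = np.arange(max_len)[:, None]        # (max_len, 1)
    i = np.arange(0, d_model, 2)[None, :]    # (1, d_model / 2)
    angles = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)             # even dims get sine
    pe[:, 1::2] = np.cos(angles)             # odd dims get cosine
    return pe

pe = sinusoidal_positions(max_len=50, d_model=16)
print(pe.shape)  # (50, 16)
```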
Please report TTS text frontend bugs here, for example: text normalization, polyphones, tone sandhi, etc.
We encourage developers to solve these problems.
- polyphone: 能说多长(zhang3 ❎)的语音呢?是否可以长(zhang3 ❎)语音合成呢?长(chang2 ✅)语音,长(zhang3 ❎)文本 (the polyphonic character 长 should be read chang2, "long", here, not zhang3, "grow")
BertViz: Visualize Attention in NLP Models (BERT, GPT2, BART, etc.)
Updated Jul 24, 2022 - Python
We keep this issue open to collect feature requests from users and hear your voice. Our monthly release plan is also available here.
You can either:
- Suggest a new feature by leaving a comment.
- Vote for a feature request with 👍 or against it with 👎. (Remember that developers are busy and cannot respond to all feature requests, so vote for the one you want most!)
- Tell us that you wo
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Updated Jul 14, 2022 - Jupyter Notebook
A TensorFlow Implementation of the Transformer: Attention Is All You Need
Updated May 26, 2022 - Python
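The core operation such an implementation centers on is scaled dot-product attention. As a reminder, a NumPy sketch (not this repository's code; single head, no masking):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V, the basic Transformer attention op."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)          # (n_queries, n_keys)
    # Numerically stable row-wise softmax over the key dimension.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                       # weighted average of values

rng = np.random.default_rng(0)
Q, K, V = (rng.normal(size=(5, 8)) for _ in range(3))
out = scaled_dot_product_attention(Q, K, V)
print(out.shape)  # (5, 8): one output vector per query
```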
A ViewPager with parallax pages, together with vertical sliding (or click) and activity transitions
Updated May 3, 2017 - Java
PostHTML is a tool to transform HTML/XML with JS plugins
Updated Aug 8, 2022 - JavaScript
The OCR approach is rephrased as Segmentation Transformer: https://arxiv.org/abs/1909.11065. This is an official implementation of semantic segmentation for HRNet. https://arxiv.org/abs/1908.07919
Updated Jul 20, 2021 - Python
Efficient AI Backbones including GhostNet, TNT and MLP, developed by Huawei Noah's Ark Lab.
Updated Jul 29, 2022 - Python
The GitHub repository for the paper "Informer" accepted by AAAI 2021.
Updated May 15, 2022 - Python
Collect some papers about transformer with vision. Awesome Transformer with Computer Vision (CV)
Updated Aug 5, 2022
Production First and Production Ready End-to-End Speech Recognition Toolkit
Updated Aug 8, 2022 - C++
State-of-the-art Deep Learning library for Time Series and Sequences in PyTorch / fastai
Updated Jul 6, 2022 - Jupyter Notebook
LightSeq: A High Performance Library for Sequence Processing and Generation
Updated Aug 10, 2022 - Cuda
GPT2 for Chinese chitchat (a GPT2 model for Chinese casual conversation, implementing DialoGPT's MMI idea)
Updated Feb 17, 2022 - Python
SwinIR: Image Restoration Using Swin Transformer (official repository)
Updated Aug 1, 2022 - Python
An Open-Source Framework for Prompt-Learning.
Updated Aug 5, 2022 - Python
pix2tex: Using a ViT to convert images of equations into LaTeX code.
Updated Jul 13, 2022 - Python
Large-scale pretraining for dialogue
Updated Jul 13, 2022 - Python


Feature request
We currently have 2 monocular depth estimation models in the library, namely DPT and GLPN.
It would be great to have a pipeline for this task, with the following API:
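The requested API is not spelled out above. Purely as a hypothetical illustration of what a depth-estimation pipeline wrapping DPT or GLPN might look like, here is a toy stand-in; every name below is invented for this sketch and is not the library's actual API:

```python
# Hypothetical sketch only: class, method, and field names are invented
# for illustration and are NOT the transformers library's real API.
from dataclasses import dataclass

@dataclass
class DepthEstimationOutput:
    predicted_depth: list  # a 2-D depth map, one value per pixel

class DepthEstimationPipeline:
    """Toy stand-in for a pipeline wrapping a DPT/GLPN checkpoint."""
    def __init__(self, model_name: str):
        self.model_name = model_name  # e.g. some DPT or GLPN checkpoint id

    def __call__(self, image):
        # A real pipeline would preprocess the image, run the model,
        # and postprocess logits; here we return a dummy zero depth map
        # matching the input's height and width.
        h, w = len(image), len(image[0])
        return DepthEstimationOutput(
            predicted_depth=[[0.0] * w for _ in range(h)]
        )

pipe = DepthEstimationPipeline("some-dpt-checkpoint")
out = pipe([[0, 1], [2, 3]])  # 2x2 "image"
print(len(out.predicted_depth), len(out.predicted_depth[0]))  # 2 2
```

A real implementation would presumably mirror the library's existing vision pipelines (preprocessor + model + postprocessor), but the exact output format would be up to the maintainers.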