A neural network that transforms a design mock-up into a static website.
Simple Binary Encoding (SBE) - High Performance Message Codec
Semantic Segmentation Suite in TensorFlow. Implement, train, and test new Semantic Segmentation models easily!
Tutorials on implementing a few sequence-to-sequence (seq2seq) models with PyTorch and TorchText.
Show, Attend, and Tell | a PyTorch Tutorial to Image Captioning
Sequence-to-sequence framework with a focus on Neural Machine Translation based on Apache MXNet
Latest ffmpeg 3.3 for Android, ported via CMake, implementing encoding/decoding, transcoding, push/pull streaming, filters, and other features.
Tensorflow seq2seq Implementation of Text Summarization.
An open-source tool for sequence learning in NLP built on TensorFlow.
Sequence to sequence learning using TensorFlow.
BERT for Multitask Learning
Conversation models in TensorFlow. (website removed)
Multiple implementations of abstractive text summarization, using Google Colab.
Four styles of encoder decoder model by Python, Theano, Keras and Seq2Seq
FFmpegCommand is an FFmpeg command library for Android for fast audio/video processing. Features include: audio/video cutting, transcoding, decoding to raw data, encoding, video-to-image/GIF conversion, watermarking, multi-view stitching, audio mixing, video brightness/contrast adjustment, audio fade-in/fade-out effects, and more.
Slot filling, intent detection, joint training, ATIS & SNIPS datasets, Facebook's multilingual dataset, MIT corpus, E-commerce Shopping Assistant (ECSA) dataset, CoNLL2003 NER, ELMo, BERT, XLNet
Implementation of a seq2seq model for Speech Recognition using the latest version of TensorFlow. Architecture similar to Listen, Attend and Spell.
Implementation of a seq2seq model for summarization of textual data. Demonstrated on amazon reviews, github issues and news articles.
Code for our paper "Multi-scale Guided Attention for Medical Image Segmentation"
This repository contains my full work and notes on Coursera's NLP Specialization (Natural Language Processing), taught by Younes Bensouda Mourri and Łukasz Kaiser and offered by deeplearning.ai.
Pytorch implementation of "Attention-Based Recurrent Neural Network Models for Joint Intent Detection and Slot Filling" (https://arxiv.org/abs/1609.01454)
News summarization using sequence to sequence model with attention in TensorFlow.
Implementation of abstractive summarization using LSTM in the encoder-decoder architecture with local attention.
Decode All Bases - Base Scheme Decoder
Demo code of the paper: "Deep Image Harmonization", Y.-H. Tsai, X. Shen, Z. Lin, K. Sunkavalli, X. Lu and M.-H. Yang, CVPR 2017
An ongoing PyTorch re-implementation of DeepLab_v3_plus, trained on VOC2012 with a ResNet101 backbone.
Uses PyTorch to build an image temporal-prediction model with an encoder-forecaster structure (ConvGRU and ConvLSTM kernels).
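The repositories above share one core pattern: an encoder folds a source sequence into a context representation, and a decoder generates the target sequence from it. A minimal untrained NumPy sketch of that pattern (all sizes, weight names, and the greedy-decoding loop are illustrative assumptions, not taken from any repo listed here):

```python
import numpy as np

# Toy encoder-decoder (seq2seq) with randomly initialized weights.
# Hypothetical dimensions: vocab of 10 tokens, 8-dim embeddings, 16-dim hidden state.
rng = np.random.default_rng(0)
vocab, emb, hid = 10, 8, 16

E = rng.normal(0, 0.1, (vocab, emb))      # shared embedding table
W_xh = rng.normal(0, 0.1, (emb, hid))     # encoder: input -> hidden
W_hh = rng.normal(0, 0.1, (hid, hid))     # encoder: hidden -> hidden
U_xh = rng.normal(0, 0.1, (emb, hid))     # decoder: input -> hidden
U_hh = rng.normal(0, 0.1, (hid, hid))     # decoder: hidden -> hidden
W_out = rng.normal(0, 0.1, (hid, vocab))  # decoder: hidden -> vocab logits

def encode(tokens):
    """Fold the whole source sequence into one context vector."""
    h = np.zeros(hid)
    for t in tokens:
        h = np.tanh(E[t] @ W_xh + h @ W_hh)
    return h

def decode(context, start_token=0, max_len=5):
    """Greedy decoding: feed the previous prediction back in at each step."""
    h, tok, out = context, start_token, []
    for _ in range(max_len):
        h = np.tanh(E[tok] @ U_xh + h @ U_hh)
        tok = int(np.argmax(h @ W_out))
        out.append(tok)
    return out

print(decode(encode([1, 4, 2, 7])))  # a list of 5 token ids
```

Attention-based variants (as in the Show, Attend and Tell and joint intent/slot-filling repos) replace the single context vector with a weighted sum over all encoder hidden states, recomputed at every decoder step.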