
Seq2Seq

By Google
Released: September 10, 2014

Overview

Seq2Seq LSTM is a classic encoder-decoder model built with LSTM layers that maps a variable-length input sequence to a variable-length output sequence. It is used for machine translation, summarization, dialog, and speech tasks, often with attention for better long-range accuracy.

Description

A Seq2Seq LSTM pairs two recurrent networks. The encoder LSTM reads the input tokens one by one, compressing their information into hidden states and a context representation. The decoder LSTM then generates the output tokens step by step, conditioned on the previous token and the evolving hidden state. Teacher forcing is typically used during training, while inference relies on greedy decoding or beam search.

Attention improves performance by letting the decoder query the encoder's hidden states at every step, which helps on long sentences and noisy inputs. Common upgrades include embeddings, bidirectional encoders, layer stacking, and coverage terms to reduce repetition.

Although Transformers now dominate many sequence tasks, Seq2Seq LSTMs remain attractive when data or compute is limited, when streaming or strict latency is important, or when stability on small datasets matters. They are straightforward to implement, tolerant of modest hardware, and still competitive for compact translation, summarization, and speech pipelines.
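
As a rough illustration of the encoder-decoder pattern described above, here is a minimal sketch in PyTorch, without attention. The vocabulary sizes, dimensions, BOS token id, and toy data are illustrative assumptions rather than settings from any particular release; the sketch shows teacher forcing during training and a simple greedy decoding loop at inference.

# Minimal Seq2Seq LSTM sketch (assumes PyTorch; all sizes and data are
# illustrative placeholders, not an official implementation).
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)

    def forward(self, src):
        # src: (batch, src_len) token ids; the final hidden/cell state
        # serves as the fixed-size context handed to the decoder.
        _, (h, c) = self.lstm(self.embed(src))
        return h, c

class Decoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, tgt, state):
        # Teacher forcing: ground-truth previous tokens are fed in at
        # every step during training.
        out, state = self.lstm(self.embed(tgt), state)
        return self.out(out), state  # logits: (batch, tgt_len, vocab_size)

SRC_VOCAB, TGT_VOCAB, BOS = 8000, 8000, 1   # illustrative sizes / BOS id
enc, dec = Encoder(SRC_VOCAB), Decoder(TGT_VOCAB)

# One training step on random toy data.
src = torch.randint(0, SRC_VOCAB, (4, 12))
tgt = torch.randint(0, TGT_VOCAB, (4, 10))
h, c = enc(src)
logits, _ = dec(tgt[:, :-1], (h, c))        # predict tokens 1..N from 0..N-1
loss = nn.CrossEntropyLoss()(logits.reshape(-1, TGT_VOCAB), tgt[:, 1:].reshape(-1))
loss.backward()

# Greedy decoding at inference: feed back the model's own previous token.
with torch.no_grad():
    state = enc(src[:1])
    token = torch.tensor([[BOS]])
    for _ in range(10):
        logits, state = dec(token, state)
        token = logits.argmax(dim=-1)       # (1, 1) best next token
print(loss.item())

Attention, a bidirectional encoder, or beam search would slot in on top of this skeleton without changing its overall shape.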

About Google

Location: US

Last updated: October 3, 2025