transformer_u2_layer

Encoder and decoder layers for the U2 transformer model.

Module Contents

Classes

TransformerU2EncoderLayer

TransformerU2EncoderLayer is made up of self-attention and a feedforward network.

TransformerU2DecoderLayer

TransformerU2DecoderLayer is made up of self-attention, multi-head cross-attention, and a feedforward network.

class transformer_u2_layer.TransformerU2EncoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation='gelu', unidirectional=False, look_ahead=0, ffn=None, conv_module_kernel_size=0, concat_after: bool = False)

Bases: tensorflow.keras.layers.Layer

TransformerU2EncoderLayer is made up of self-attention and a feedforward network.

Parameters
  • d_model – the number of expected features in the input (required).

  • nhead – the number of heads in the multi-head attention models (required).

  • dim_feedforward – the dimension of the feedforward network model (default=2048).

  • dropout – the dropout value (default=0.1).

  • activation – the activation function of the intermediate layer, relu or gelu (default=gelu).

Examples

>>> encoder_layer = TransformerU2EncoderLayer(d_model=512, nhead=8)
>>> src = tf.random.normal([10, 32, 512])
>>> out = encoder_layer(src)

call(src: tensorflow.Tensor, src_mask: Optional[tensorflow.Tensor] = None, output_cache: Optional[tensorflow.Tensor] = None, cnn_cache: Optional[tensorflow.Tensor] = None, training: Optional[bool] = None)

Pass the input through the encoder layer; a usage sketch follows the parameter list.

Parameters
  • src – the sequence to the encoder layer (required).

  • src_mask – the mask for the src sequence, e.g. tf.zeros([1, 0, 256], dtype=tf.float32) (optional).
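A minimal usage sketch of call, assuming the (time, batch, d_model) layout from the constructor example and reusing the empty float mask from the src_mask description; whether the layer expects an additive float mask or a boolean mask is an assumption here.

>>> import tensorflow as tf
>>> from transformer_u2_layer import TransformerU2EncoderLayer
>>> encoder_layer = TransformerU2EncoderLayer(d_model=256, nhead=4)
>>> src = tf.random.normal([10, 32, 256])  # assumed (time, batch, d_model) layout
>>> src_mask = tf.zeros([1, 0, 256], dtype=tf.float32)  # empty mask, as in the docstring; semantics assumed
>>> out = encoder_layer(src, src_mask=src_mask, training=False)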

set_unidirectional(uni=False)

Whether to apply triangular masks to make the transformer unidirectional.
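For intuition, a lower-triangular (causal) mask can be built with tf.linalg.band_part; this is a sketch of the concept only, not necessarily this layer's internal implementation.

>>> import tensorflow as tf
>>> mask = tf.linalg.band_part(tf.ones([4, 4]), -1, 0)  # step i attends to steps <= i (illustration only)
>>> print(mask)
tf.Tensor(
[[1. 0. 0. 0.]
 [1. 1. 0. 0.]
 [1. 1. 1. 0.]
 [1. 1. 1. 1.]], shape=(4, 4), dtype=float32)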

class transformer_u2_layer.TransformerU2DecoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation='gelu')

Bases: tensorflow.keras.layers.Layer

TransformerU2DecoderLayer is made up of self-attention, multi-head cross-attention, and a feedforward network.

Reference:

“Attention Is All You Need” (Vaswani et al., 2017).

Parameters
  • d_model – the number of expected features in the input (required).

  • nhead – the number of heads in the multi-head attention models (required).

  • dim_feedforward – the dimension of the feedforward network model (default=2048).

  • dropout – the dropout value (default=0.1).

  • activation – the activation function of the intermediate layer, relu or gelu (default=gelu).

Examples

>>> decoder_layer = TransformerU2DecoderLayer(d_model=512, nhead=8)
>>> memory = tf.random.normal([10, 32, 512])
>>> tgt = tf.random.normal([20, 32, 512])
>>> out = decoder_layer(tgt, memory)

call(tgt, memory, tgt_mask=None, memory_mask=None, training=None)

Pass the inputs (and masks) through the decoder layer; a usage sketch follows the parameter list.

Parameters
  • tgt – the sequence to the decoder layer (required).

  • memory – the sequence from the last layer of the encoder (required).

  • tgt_mask – the mask for the tgt sequence (optional).

  • memory_mask – the mask for the memory sequence (optional).
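A minimal usage sketch, assuming the (time, batch, d_model) layout from the example above and a hypothetical causal tgt_mask; the expected mask dtype and shape are assumptions here.

>>> import tensorflow as tf
>>> from transformer_u2_layer import TransformerU2DecoderLayer
>>> decoder_layer = TransformerU2DecoderLayer(d_model=512, nhead=8)
>>> memory = tf.random.normal([10, 32, 512])  # encoder output, as in the example
>>> tgt = tf.random.normal([20, 32, 512])  # decoder input, as in the example
>>> tgt_mask = tf.linalg.band_part(tf.ones([20, 20]), -1, 0)  # hypothetical causal mask
>>> out = decoder_layer(tgt, memory, tgt_mask=tgt_mask, training=False)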