athena.layers.conformer

the conformer model

Module Contents

Classes

Conformer

A conformer model. Users can modify the attributes as needed.

ConformerEncoder

ConformerEncoder is a stack of N encoder layers

ConformerDecoder

ConformerDecoder is a stack of N decoder layers

ConformerEncoderLayer

ConformerEncoderLayer is made up of self-attn, a convolution module and a feedforward network.

ConformerDecoderLayer

ConformerDecoderLayer is made up of self-attn, multi-head-attn and feedforward network.

class athena.layers.conformer.Conformer(d_model=512, nhead=8, cnn_module_kernel=15, num_encoder_layers=6, num_decoder_layers=6, dim_feedforward=2048, dropout=0.1, activation='gelu', custom_encoder=None, custom_decoder=None)

Bases: tensorflow.keras.layers.Layer

A conformer model. Users can modify the attributes as needed. The architecture is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. It is augmented with the convolution module used by the Conformer architecture (see cnn_module_kernel). Users can build the BERT (https://arxiv.org/abs/1810.04805) model with corresponding parameters.

Parameters
  • d_model – the number of expected features in the encoder/decoder inputs (default=512).

  • nhead – the number of heads in the multi-head attention models (default=8).

  • cnn_module_kernel – the kernel size of the convolution module (default=15).

  • num_encoder_layers – the number of sub-encoder-layers in the encoder (default=6).

  • num_decoder_layers – the number of sub-decoder-layers in the decoder (default=6).

  • dim_feedforward – the dimension of the feedforward network model (default=2048).

  • dropout – the dropout value (default=0.1).

  • activation – the activation function of the encoder/decoder intermediate layer, relu or gelu (default=gelu).

  • custom_encoder – custom encoder (default=None).

  • custom_decoder – custom decoder (default=None).

Examples::
>>> conformer_model = Conformer(nhead=16, num_encoder_layers=12)
>>> src = tf.random.normal((10, 32, 512))
>>> tgt = tf.random.normal((10, 20, 512))
>>> out = conformer_model(src, tgt)
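The custom_encoder and custom_decoder arguments accept pre-built stacks. Below is a minimal sketch of supplying one, inferred from the constructor signature above rather than taken from the library docs; the shapes follow the (N, S, E)/(N, T, E) convention documented for call():

>>> import tensorflow as tf
>>> from athena.layers.conformer import Conformer, ConformerEncoder, ConformerEncoderLayer
>>> encoder_layers = [ConformerEncoderLayer(d_model=512, nhead=8, cnn_module_kernel=15)
...                   for _ in range(12)]
>>> conformer_model = Conformer(d_model=512, nhead=8,
...                             custom_encoder=ConformerEncoder(encoder_layers))
>>> src = tf.random.normal((10, 32, 512))    # (N, S, E)
>>> tgt = tf.random.normal((10, 20, 512))    # (N, T, E)
>>> out = conformer_model(src, tgt)          # expected shape (N, T, E) = (10, 20, 512)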
call(src, tgt, src_mask=None, tgt_mask=None, memory_mask=None, return_encoder_output=False, training=None)

Take in and process masked source/target sequences.

Parameters
  • src – the sequence to the encoder (required).

  • tgt – the sequence to the decoder (required).

  • src_mask – the additive mask for the src sequence (optional).

  • tgt_mask – the additive mask for the tgt sequence (optional).

  • memory_mask – the additive mask for the encoder output (optional).

  • return_encoder_output – if True, also return the encoder output (default=False).

  • training – whether the layer is called in training mode (optional).

Shape:
  • src: \((N, S, E)\).

  • tgt: \((N, T, E)\).

  • src_mask: \((N, S)\).

  • tgt_mask: \((N, T)\).

  • memory_mask: \((N, S)\).

Note: [src/tgt/memory]_mask should be a boolean mask where True values are positions that should be masked with float(‘-inf’) and False values will be left unchanged. The mask ensures that no information is taken from position i if it is masked, and a separate mask is applied for each sequence in the batch.

  • output: \((N, T, E)\).

Note: due to the multi-head attention architecture in the transformer model, the output sequence length of a transformer is the same as the input sequence (i.e. target) length of the decoder.

Here S is the source sequence length, T is the target sequence length, N is the batch size, and E is the feature number.

Examples

>>> output = conformer_model(src, tgt, src_mask=src_mask, tgt_mask=tgt_mask)
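A sketch of building the optional masks for a padded batch. It assumes, for illustration, boolean masks of shape (N, S) / (N, T) in which True marks padded positions to be ignored, as described in the note above, and builds them with tf.sequence_mask:

>>> import tensorflow as tf
>>> conformer_model = Conformer()
>>> src_lengths = tf.constant([32, 30, 27])                               # valid frames per utterance
>>> tgt_lengths = tf.constant([20, 18, 15])                               # valid tokens per utterance
>>> src = tf.random.normal((3, 32, 512))                                  # (N, S, E)
>>> tgt = tf.random.normal((3, 20, 512))                                  # (N, T, E)
>>> # True marks positions to be masked (padding), per the note above
>>> src_mask = tf.logical_not(tf.sequence_mask(src_lengths, maxlen=32))   # (N, S)
>>> tgt_mask = tf.logical_not(tf.sequence_mask(tgt_lengths, maxlen=20))   # (N, T)
>>> output = conformer_model(src, tgt, src_mask=src_mask, tgt_mask=tgt_mask)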
class athena.layers.conformer.ConformerEncoder(encoder_layers)

Bases: tensorflow.keras.layers.Layer

ConformerEncoder is a stack of N encoder layers.

Parameters
  • encoder_layers – a list of ConformerEncoderLayer() instances (required).

Examples::
>>> num_layers = 6
>>> encoder_layers = [ConformerEncoderLayer(d_model=512, nhead=8)
...                   for _ in range(num_layers)]
>>> transformer_encoder = ConformerEncoder(encoder_layers)
>>> src = tf.random.normal((10, 32, 512))
>>> out = transformer_encoder(src)
call(src, src_mask=None, training=None)

Pass the input through the encoder layers in turn.

Parameters
  • src – the sequence to the encoder (required).

  • src_mask – the mask for the src sequence (optional).

  • training – whether the layer is called in training mode (optional).

Shape:

see the docs in the Conformer class.

class athena.layers.conformer.ConformerDecoder(decoder_layers)

Bases: tensorflow.keras.layers.Layer

ConformerDecoder is a stack of N decoder layers.

Parameters
  • decoder_layers – a list of ConformerDecoderLayer() instances (required).

Examples::
>>> num_layers = 6
>>> decoder_layers = [ConformerDecoderLayer(d_model=512, nhead=8)
...                   for _ in range(num_layers)]
>>> transformer_decoder = ConformerDecoder(decoder_layers)
>>> memory = tf.random.normal((10, 32, 512))
>>> tgt = tf.random.normal((10, 20, 512))
>>> out = transformer_decoder(tgt, memory)
call(tgt, memory, tgt_mask=None, memory_mask=None, training=None)

Pass the inputs (and masks) through the decoder layers in turn.

Parameters
  • tgt – the sequence to the decoder (required).

  • memory – the sequence from the last layer of the encoder (required).

  • tgt_mask – the mask for the tgt sequence (optional).

  • memory_mask – the mask for the memory sequence (optional).

  • training – whether the layer is called in training mode (optional).

Shape:

see the docs in the Conformer class.
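A short sketch, inferred from the signatures above rather than from the library docs, that feeds the output of a ConformerEncoder stack into the decoder stack during training:

>>> import tensorflow as tf
>>> from athena.layers.conformer import (ConformerEncoder, ConformerEncoderLayer,
...                                       ConformerDecoder, ConformerDecoderLayer)
>>> num_layers = 6
>>> transformer_encoder = ConformerEncoder(
...     [ConformerEncoderLayer(d_model=512, nhead=8) for _ in range(num_layers)])
>>> transformer_decoder = ConformerDecoder(
...     [ConformerDecoderLayer(d_model=512, nhead=8) for _ in range(num_layers)])
>>> src = tf.random.normal((10, 32, 512))                    # (N, S, E)
>>> tgt = tf.random.normal((10, 20, 512))                    # (N, T, E)
>>> memory = transformer_encoder(src, training=True)
>>> out = transformer_decoder(tgt, memory, training=True)    # (N, T, E)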

class athena.layers.conformer.ConformerEncoderLayer(d_model, nhead, cnn_module_kernel=15, dim_feedforward=2048, dropout=0.1, activation='gelu')

Bases: tensorflow.keras.layers.Layer

ConformerEncoderLayer is made up of self-attn, a convolution module and a feedforward network. This encoder layer is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement it in a different way during application.

Parameters
  • d_model – the number of expected features in the input (required).

  • nhead – the number of heads in the multi-head attention models (required).

  • cnn_module_kernel – the kernel size of the convolution module (default=15).

  • dim_feedforward – the dimension of the feedforward network model (default=2048).

  • dropout – the dropout value (default=0.1).

  • activation – the activation function of the intermediate layer, relu or gelu (default=gelu).

Examples::
>>> encoder_layer = ConformerEncoderLayer(d_model=512, nhead=8)
>>> src = tf.random.normal((10, 32, 512))
>>> out = encoder_layer(src)
call(src, src_mask=None, training=None)

Pass the input through the encoder layer.

Parameters
  • src – the sequence to the encoder layer (required).

  • src_mask – the mask for the src sequence (optional).

  • training – whether the layer is called in training mode (optional).

Shape:

see the docs in the Conformer class.
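A sketch of calling a single encoder layer with a padding mask and the training flag; the boolean mask convention and its (N, S) shape are assumptions carried over from the Conformer notes above:

>>> import tensorflow as tf
>>> from athena.layers.conformer import ConformerEncoderLayer
>>> encoder_layer = ConformerEncoderLayer(d_model=512, nhead=8, cnn_module_kernel=15)
>>> src = tf.random.normal((10, 32, 512))                        # (N, S, E)
>>> src_mask = tf.zeros((10, 32), dtype=tf.bool)                 # nothing masked
>>> out = encoder_layer(src, src_mask=src_mask, training=True)   # shape preserved: (10, 32, 512)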

class athena.layers.conformer.ConformerDecoderLayer(d_model, nhead, dim_feedforward=2048, dropout=0.1, activation='gelu')

Bases: tensorflow.keras.layers.Layer

ConformerDecoderLayer is made up of self-attn, multi-head-attn and a feedforward network. This standard decoder layer is based on the paper “Attention Is All You Need”. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 6000-6010. Users may modify or implement it in a different way during application.

Parameters
  • d_model – the number of expected features in the input (required).

  • nhead – the number of heads in the multi-head attention models (required).

  • dim_feedforward – the dimension of the feedforward network model (default=2048).

  • dropout – the dropout value (default=0.1).

  • activation – the activation function of the intermediate layer, relu or gelu (default=gelu).

Examples::
>>> decoder_layer = ConformerDecoderLayer(d_model=512, nhead=8)
>>> memory = tf.random.normal((10, 32, 512))
>>> tgt = tf.random.normal((10, 20, 512))
>>> out = decoder_layer(tgt, memory)
call(tgt, memory, tgt_mask=None, memory_mask=None, training=None)

Pass the inputs (and masks) through the decoder layer.

Parameters
  • tgt – the sequence to the decoder layer (required).

  • memory – the sequence from the last layer of the encoder (required).

  • tgt_mask – the mask for the tgt sequence (optional).

  • memory_mask – the mask for the memory sequence (optional).

  • training – whether the layer is called in training mode (optional).
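A minimal sketch of exercising a single decoder layer with masks; the shapes follow the Conformer shape notes above, and the boolean mask convention is an assumption for illustration:

>>> import tensorflow as tf
>>> from athena.layers.conformer import ConformerDecoderLayer
>>> decoder_layer = ConformerDecoderLayer(d_model=512, nhead=8)
>>> memory = tf.random.normal((10, 32, 512))                       # (N, S, E) encoder output
>>> tgt = tf.random.normal((10, 20, 512))                          # (N, T, E)
>>> tgt_mask = tf.zeros((10, 20), dtype=tf.bool)                   # nothing masked
>>> memory_mask = tf.zeros((10, 32), dtype=tf.bool)
>>> out = decoder_layer(tgt, memory, tgt_mask=tgt_mask,
...                     memory_mask=memory_mask, training=True)    # (N, T, E)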