athena.layers.conformer_ctc

the conformer CTC model

Module Contents

Classes

ConformerCTC

A conformer CTC model. Users can modify the attributes as needed.

ConformerEncoder

ConformerEncoder is a stack of N encoder layers

ConformerEncoderLayer

ConformerEncoderLayer is made up of self-attention and a feedforward network.

class athena.layers.conformer_ctc.ConformerCTC(d_model=512, nhead=8, cnn_module_kernel=15, num_encoder_layers=6, dim_feedforward=2048, dropout=0.1, activation='gelu', custom_encoder=None)

Bases: tensorflow.keras.layers.Layer

A conformer CTC model. Users can modify the attributes as needed. The architecture is based on the paper "Attention Is All You Need". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017.

:param d_model: the number of expected features in the encoder/decoder inputs (default=512).
:param nhead: the number of heads in the multi-head attention models (default=8).
:param cnn_module_kernel: the kernel size of the convolution module (default=15).
:param num_encoder_layers: the number of sub-encoder-layers in the encoder (default=6).
:param dim_feedforward: the dimension of the feedforward network model (default=2048).
:param dropout: the dropout value (default=0.1).
:param activation: the activation function of the encoder intermediate layer, relu or gelu (default=gelu).
:param custom_encoder: custom encoder (default=None).

Examples::
>>> conformer_model = ConformerCTC(nhead=16, num_encoder_layers=12)
>>> src = tf.random.normal((10, 32, 512))
>>> out = conformer_model(src)
call(src, src_mask=None, return_encoder_output=False, training=None)

Take in and process masked source sequences.

:param src: the sequence to the encoder (required).
:param src_mask: the additive mask for the src sequence (optional).
:param return_encoder_output: if True, also return the encoder output (optional).

Shape:
  • src: \((N, S, E)\).

  • src_mask: \((N, S)\).

  • output: \((N, T, E)\).

Note: src_mask should be a boolean tensor where True values are positions that should be masked with float('-inf') and False values will be left unchanged. This mask ensures that no information is taken from position i if it is masked, and a separate mask is used for each sequence in the batch. Due to the multi-head attention architecture, the output sequence length of the encoder is the same as its input sequence length. Here S is the source sequence length, T is the output sequence length, N is the batch size, and E is the feature number.

Examples

>>> output = conformer_ctc_model(src, src_mask=src_mask)
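
For concreteness, the following is a minimal sketch of how a boolean padding mask of shape (N, S) might be built from per-utterance lengths and passed to the model. It follows the convention above that True marks positions to be masked; the lengths and shapes are illustrative values, not taken from the library:

>>> import tensorflow as tf
>>> from athena.layers.conformer_ctc import ConformerCTC
>>> conformer_ctc_model = ConformerCTC(nhead=8, num_encoder_layers=6)
>>> src = tf.random.normal((4, 32, 512))          # N=4, S=32, E=512
>>> lengths = tf.constant([32, 30, 25, 18])       # valid frames per utterance
>>> valid = tf.sequence_mask(lengths, maxlen=32)  # True on valid frames
>>> src_mask = tf.logical_not(valid)              # True on padded frames
>>> output = conformer_ctc_model(src, src_mask=src_mask, training=False)
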
class athena.layers.conformer_ctc.ConformerEncoder(encoder_layers)

Bases: tensorflow.keras.layers.Layer

ConformerEncoder is a stack of N encoder layers.

:param encoder_layers: the list of ConformerEncoderLayer() instances to stack (required).

Examples::
>>> num_layers = 6
>>> encoder_layers = [ConformerEncoderLayer(d_model=512, nhead=8)
...                   for _ in range(num_layers)]
>>> conformer_encoder = ConformerEncoder(encoder_layers)
>>> src = tf.random.normal((10, 32, 512))
>>> out = conformer_encoder(src)
call(src, src_mask=None, training=None)

Pass the input through the encoder layers in turn.

:param src: the sequence to the encoder (required).
:param src_mask: the mask for the src sequence (optional).

Shape:

see the docs in the ConformerCTC class.
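
As a rough sketch of what "in turn" means here, the stacking behaviour can be pictured as below. This is an illustration, not the library's actual implementation, and it assumes each layer accepts the src_mask and training keywords shown in its call signature:

>>> import tensorflow as tf
>>> from athena.layers.conformer_ctc import ConformerEncoderLayer
>>> encoder_layers = [ConformerEncoderLayer(d_model=512, nhead=8) for _ in range(2)]
>>> output = tf.random.normal((10, 32, 512))
>>> for layer in encoder_layers:
...     output = layer(output, src_mask=None, training=False)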

class athena.layers.conformer_ctc.ConformerEncoderLayer(d_model, nhead, cnn_module_kernel=15, dim_feedforward=2048, dropout=0.1, activation='gelu')

Bases: tensorflow.keras.layers.Layer

ConformerEncoderLayer is made up of self-attention and a feedforward network. This standard encoder layer is based on the paper "Attention Is All You Need". Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017.

:param d_model: the number of expected features in the input (required).
:param nhead: the number of heads in the multi-head attention models (required).
:param cnn_module_kernel: the kernel size of the convolution module (default=15).
:param dim_feedforward: the dimension of the feedforward network model (default=2048).
:param dropout: the dropout value (default=0.1).
:param activation: the activation function of the intermediate layer, relu or gelu (default=gelu).

Examples::
>>> encoder_layer = ConformerEncoderLayer(d_model=512, nhead=8)
>>> src = tf.random.normal((10, 32, 512))
>>> out = encoder_layer(src)
call(src, src_mask=None, training=None)

Pass the input through the encoder layer.

:param src: the sequence to the encoder layer (required).
:param src_mask: the mask for the src sequence (optional).

Shape:

see the docs in the ConformerCTC class.
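
As a closing illustration, the classes above can be assembled end to end. This is a minimal sketch that assumes custom_encoder accepts a ConformerEncoder instance, which the constructor parameter suggests but the documentation above does not state explicitly:

>>> import tensorflow as tf
>>> from athena.layers.conformer_ctc import (
...     ConformerCTC, ConformerEncoder, ConformerEncoderLayer)
>>> layers = [ConformerEncoderLayer(d_model=512, nhead=8) for _ in range(12)]
>>> encoder = ConformerEncoder(layers)
>>> model = ConformerCTC(d_model=512, nhead=8, custom_encoder=encoder)
>>> src = tf.random.normal((10, 32, 512))
>>> out = model(src)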