athena.solver¶

High-level abstractions of the different stages in speech processing.
Module Contents¶
Classes¶
BaseSolver: Base Training Solver.
HorovodSolver: A multi-processor solver based on Horovod.
DecoderSolver: ASR DecoderSolver.
SynthesisSolver: SynthesisSolver (TTS Solver).
VadSolver: VadSolver.
AVSolver: Base Solver.
AVHorovodSolver: A multi-processor solver based on Horovod.
AVDecoderSolver: DecoderSolver.
- class athena.solver.BaseSolver(model, optimizer, sample_signature, eval_sample_signature=None, config=None, **kwargs)¶
Bases: tensorflow.keras.Model
Base Training Solver.
- default_config¶
- static initialize_devices(solver_gpus=None)¶
Initialize Horovod devices; should be called first.
- static clip_by_norm(grads, norm)¶
Clip gradients by norm using tf.clip_by_norm.
- train_step(samples)¶
Train the model for one step.
- train(trainset, devset, checkpointer, pbar, epoch, total_batches=-1)¶
Update the model over one epoch.
- save_checkpointer(checkpointer, devset, epoch)¶
- evaluate_step(samples)¶
Evaluate the model for one step.
- evaluate(dataset, epoch)¶
Evaluate the model.
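The train() method above caps an epoch via its total_batches argument, with -1 meaning the full dataset. A toy sketch of that loop contract (illustrative names and a stand-in train_step callable; not Athena's implementation):

```python
def train_epoch(batches, train_step, total_batches=-1):
    """Run train_step over one epoch of batches.

    total_batches=-1 consumes the whole dataset; a non-negative value
    caps the number of update steps (toy sketch, not Athena's code).
    """
    losses = []
    for i, batch in enumerate(batches):
        if 0 <= total_batches <= i:
            break  # reached the per-epoch cap
        losses.append(train_step(batch))
    return sum(losses) / len(losses)  # mean loss for the epoch

# e.g. cap an epoch at 2 of 4 batches:
mean_loss = train_epoch([1.0, 2.0, 3.0, 4.0], lambda b: b, total_batches=2)
```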
- class athena.solver.HorovodSolver(model, optimizer, sample_signature, eval_sample_signature=None, config=None, **kwargs)¶
Bases: BaseSolver
A multi-processor solver based on Horovod.
- static initialize_devices(solver_gpus=None)¶
Initialize Horovod devices; should be called first.
For example, if you have two machines and each of them contains 4 GPUs:
1. Run horovodrun -np 6 -H ip1:2,ip2:4 and set solver_gpus to [0,3,0,1,2,3]; then the first and the last GPU on machine1 and all GPUs on machine2 will be used.
2. Run horovodrun -np 6 -H ip1:2,ip2:4 and set solver_gpus to []; then the first 2 GPUs on machine1 and all GPUs on machine2 will be used.
- Parameters
solver_gpus ([list]) – a list specifying the GPUs to be used.
- Raises
ValueError – if solver_gpus is not empty and its size is smaller than the number of Horovod processes.
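The mapping described above can be sketched as a pure function: each process of global rank rank takes entry solver_gpus[rank], and an empty list falls back to the process's local rank. (select_gpu and its parameter names are illustrative, not part of Athena's API.)

```python
def select_gpu(solver_gpus, rank, local_rank, world_size):
    """Pick the local GPU index for one Horovod process
    (sketch of the documented behaviour, not Athena's code)."""
    if solver_gpus:
        if len(solver_gpus) < world_size:
            raise ValueError("solver_gpus must list a GPU for every process")
        return solver_gpus[rank]
    return local_rank  # default: one GPU per process, in local-rank order

# horovodrun -np 6 -H ip1:2,ip2:4 with solver_gpus=[0, 3, 0, 1, 2, 3]:
# rank 1 (second process on machine1) gets GPU 3.
assert select_gpu([0, 3, 0, 1, 2, 3], rank=1, local_rank=1, world_size=6) == 3
```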
- train_step(samples)¶
Train the model for one step.
- train(trainset, devset, checkpointer, pbar, epoch, total_batches=-1)¶
Update the model over one epoch.
- evaluate(dataset, epoch=0)¶
Evaluate the model.
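Under Horovod, each worker computes gradients on its own data shard, and the workers average them with an allreduce before applying the update. The averaging step, reduced to plain lists (a toy sketch of the arithmetic; Horovod does this with hvd.DistributedGradientTape and MPI/NCCL collectives, not Python loops):

```python
def allreduce_mean(grads_per_worker):
    """Average corresponding gradient values across workers,
    the arithmetic at the heart of Horovod's ring-allreduce."""
    n = len(grads_per_worker)
    return [sum(vals) / n for vals in zip(*grads_per_worker)]

# two workers, two gradient entries each:
avg = allreduce_mean([[1.0, 2.0], [3.0, 4.0]])  # [2.0, 3.0]
```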
- class athena.solver.DecoderSolver(model, data_descriptions=None, config=None)¶
Bases: BaseSolver
ASR DecoderSolver
- default_config¶
- inference(dataset_builder, rank_size=1, conf=None)¶
Run decoding with the model on the dataset.
- inference_saved_model(dataset_builder, rank_size=1, conf=None)¶
Run decoding using a saved (exported) model.
- class athena.solver.SynthesisSolver(model, optimizer=None, sample_signature=None, eval_sample_signature=None, config=None, **kwargs)¶
Bases: BaseSolver
SynthesisSolver (TTS Solver)
- default_config¶
- inference(dataset_builder, rank_size=1, conf=None)¶
Synthesize speech on the dataset using the vocoder.
- inference_saved_model(dataset_builder, rank_size=1, conf=None)¶
Synthesize speech on the dataset using the vocoder.
- class athena.solver.VadSolver(model, optimizer=None, sample_signature=None, eval_sample_signature=None, data_descriptions=None, config=None)¶
Bases: BaseSolver
VadSolver
- default_config¶
- inference(dataset, rank_size=1, conf=None)¶
Run decoding with the model on the dataset.
- class athena.solver.AVSolver(model, optimizer, sample_signature, eval_sample_signature=None, config=None, **kwargs)¶
Bases: tensorflow.keras.Model
Base Solver.
- default_config¶
- static initialize_devices(solver_gpus=None)¶
Initialize Horovod devices; should be called first.
- static clip_by_norm(grads, norm)¶
Clip gradients by norm using tf.clip_by_norm.
- train_step(samples)¶
Train the model for one step.
- train(trainset, devset, checkpointer, pbar, epoch, total_batches=-1)¶
Update the model over one epoch.
- evaluate_step(samples)¶
Evaluate the model for one step.
- evaluate(dataset, epoch)¶
Evaluate the model.
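The clip_by_norm helper above delegates to tf.clip_by_norm, which rescales a tensor whenever its L2 norm exceeds the threshold: the output is t * clip_norm / l2norm(t). The same rule on a flat list of floats (a plain-Python sketch, not the TensorFlow call itself):

```python
import math

def clip_by_norm(values, clip_norm):
    """Rescale values so their L2 norm is at most clip_norm,
    mirroring the semantics of tf.clip_by_norm."""
    l2 = math.sqrt(sum(v * v for v in values))
    if l2 <= clip_norm:
        return list(values)  # already within the threshold
    return [v * clip_norm / l2 for v in values]

clipped = clip_by_norm([3.0, 4.0], 1.0)  # norm 5.0, rescaled to norm 1.0
```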
- class athena.solver.AVHorovodSolver(model, optimizer, sample_signature, eval_sample_signature=None, config=None, **kwargs)¶
Bases: AVSolver
A multi-processor solver based on Horovod.
- static initialize_devices(solver_gpus=None)¶
Initialize Horovod devices; should be called first.
For example, if you have two machines and each of them contains 4 GPUs:
1. Run horovodrun -np 6 -H ip1:2,ip2:4 and set solver_gpus to [0,3,0,1,2,3]; then the first and the last GPU on machine1 and all GPUs on machine2 will be used.
2. Run horovodrun -np 6 -H ip1:2,ip2:4 and set solver_gpus to []; then the first 2 GPUs on machine1 and all GPUs on machine2 will be used.
- Parameters
solver_gpus ([list]) – a list specifying the GPUs to be used.
- Raises
ValueError – if solver_gpus is not empty and its size is smaller than the number of Horovod processes.
- train_step(samples)¶
Train the model for one step.
- train(trainset, devset, checkpointer, pbar, epoch, total_batches=-1)¶
Update the model over one epoch.
- evaluate(dataset, epoch=0)¶
Evaluate the model.
- class athena.solver.AVDecoderSolver(model, data_descriptions=None, config=None)¶
Bases: AVSolver
DecoderSolver
- default_config¶
- inference(dataset_builder, rank_size=1, conf=None)¶
Run decoding with the model on the dataset.
- inference_freeze(dataset_builder, rank_size=1, conf=None)¶
Run decoding with the model on the dataset.
- inference_argmax(dataset_builder, rank_size=1, conf=None)¶
Run decoding with the model on the dataset.