public class MLPTrainer<P extends Serializable>
extends MultiLabelDatasetTrainer<MultilayerPerceptron>

Type Parameters:
P - Type of model update used in this trainer.
Nested classes/interfaces inherited from class DatasetTrainer: DatasetTrainer.EmptyDatasetException

Fields inherited from class DatasetTrainer: envBuilder, environment

| Constructor and Description |
|---|
| `MLPTrainer(IgniteFunction<Dataset<EmptyContext,SimpleLabeledDatasetData>,MLPArchitecture> archSupplier, IgniteFunction<Vector,IgniteDifferentiableVectorToDoubleFunction> loss, UpdatesStrategy<? super MultilayerPerceptron,P> updatesStgy, int maxIterations, int batchSize, int locIterations, long seed)` Constructs a new instance of multilayer perceptron trainer. |
| `MLPTrainer(MLPArchitecture arch, IgniteFunction<Vector,IgniteDifferentiableVectorToDoubleFunction> loss, UpdatesStrategy<? super MultilayerPerceptron,P> updatesStgy, int maxIterations, int batchSize, int locIterations, long seed)` Constructs a new instance of multilayer perceptron trainer. |
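The snippet below is a minimal construction sketch. The layer sizes, the `LossFunctions.MSE` loss, the `SimpleGDUpdateCalculator`-based update strategy and all hyperparameter values are illustrative choices rather than defaults, and the import paths are assumed from the Apache Ignite ML packaging bundled with GridGain (they may differ between versions).

```java
import org.apache.ignite.ml.nn.Activators;
import org.apache.ignite.ml.nn.MLPTrainer;
import org.apache.ignite.ml.nn.UpdatesStrategy;
import org.apache.ignite.ml.nn.architecture.MLPArchitecture;
import org.apache.ignite.ml.optimization.LossFunctions;
import org.apache.ignite.ml.optimization.updatecalculators.SimpleGDParameterUpdate;
import org.apache.ignite.ml.optimization.updatecalculators.SimpleGDUpdateCalculator;

public class MlpTrainerConstructionSketch {
    public static void main(String[] args) {
        // Illustrative topology: 2 inputs -> 10 ReLU neurons (with bias) -> 1 sigmoid output.
        MLPArchitecture arch = new MLPArchitecture(2)
            .withAddedLayer(10, true, Activators.RELU)
            .withAddedLayer(1, false, Activators.SIGMOID);

        // Plain gradient descent; the model-update type P is SimpleGDParameterUpdate.
        MLPTrainer<SimpleGDParameterUpdate> trainer = new MLPTrainer<>(
            arch,
            LossFunctions.MSE,
            new UpdatesStrategy<>(
                new SimpleGDUpdateCalculator(0.1),  // learning rate (assumed value)
                SimpleGDParameterUpdate.SUM_LOCAL,  // reduce updates within a partition
                SimpleGDParameterUpdate.AVG         // reduce updates across partitions
            ),
            3000,   // maxIterations: global synchronization rounds
            4,      // batchSize: rows sampled per partition on each local step
            50,     // locIterations: local steps between synchronizations
            123L    // seed for random weight initialization
        );
    }
}
```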
| Modifier and Type | Method and Description |
|---|---|
| `<K,V> MultilayerPerceptron` | `fitWithInitializedDeployingContext(DatasetBuilder<K,V> datasetBuilder, Preprocessor<K,V> extractor)` Trains a model based on the specified data. |
| `IgniteFunction<Dataset<EmptyContext,SimpleLabeledDatasetData>,MLPArchitecture>` | `getArchSupplier()` Get the multilayer perceptron architecture supplier that defines layers and activators. |
| `int` | `getBatchSize()` Get the batch size (per partition). |
| `int` | `getLocIterations()` Get the maximal number of local iterations before synchronization. |
| `IgniteFunction<Vector,IgniteDifferentiableVectorToDoubleFunction>` | `getLoss()` Get the loss function to be minimized during training. |
| `int` | `getMaxIterations()` Get the maximal number of iterations before the training is stopped. |
| `long` | `getSeed()` Get the seed used by the multilayer perceptron model initializer. |
| `UpdatesStrategy<? super MultilayerPerceptron,P>` | `getUpdatesStgy()` Get the update strategy that defines how to update model parameters during training. |
| `boolean` | `isUpdateable(MultilayerPerceptron mdl)` |
| `protected <K,V> MultilayerPerceptron` | `updateModel(MultilayerPerceptron lastLearnedMdl, DatasetBuilder<K,V> datasetBuilder, Preprocessor<K,V> extractor)` Trains a new model, taking the previous one as a first approximation. |
| `MLPTrainer<P>` | `withArchSupplier(IgniteFunction<Dataset<EmptyContext,SimpleLabeledDatasetData>,MLPArchitecture> archSupplier)` Set up the multilayer perceptron architecture supplier that defines layers and activators. |
| `MLPTrainer<P>` | `withBatchSize(int batchSize)` Set up the batch size (per partition). |
| `MLPTrainer<P>` | `withEnvironmentBuilder(LearningEnvironmentBuilder envBuilder)` Changes the learning environment. |
| `MLPTrainer<P>` | `withLocIterations(int locIterations)` Set up the maximal number of local iterations before synchronization. |
| `MLPTrainer<P>` | `withLoss(IgniteFunction<Vector,IgniteDifferentiableVectorToDoubleFunction> loss)` Set up the loss function to be minimized during training. |
| `MLPTrainer<P>` | `withMaxIterations(int maxIterations)` Set up the maximal number of iterations before the training is stopped. |
| `MLPTrainer<P>` | `withSeed(long seed)` Set up the seed used by the multilayer perceptron model initializer. |
| `MLPTrainer<P>` | `withUpdatesStgy(UpdatesStrategy<? super MultilayerPerceptron,P> updatesStgy)` Set up the update strategy that defines how to update model parameters during training. |
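Because every `with*` setter returns an `MLPTrainer<P>`, configuration can be chained. A short sketch, continuing from the construction example above (the values are arbitrary):

```java
// Reconfigure the trainer; each call returns the trainer with the new setting applied.
MLPTrainer<SimpleGDParameterUpdate> tuned = trainer
    .withMaxIterations(500)   // stop after at most 500 global iterations
    .withBatchSize(16)        // 16 rows per partition per local step
    .withLocIterations(10)    // synchronize after every 10 local steps
    .withSeed(42L)            // reproducible weight initialization
    .withLoss(LossFunctions.MSE);
```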
Methods inherited from class DatasetTrainer: fit, getLastTrainedModelOrThrowEmptyDatasetException, identityTrainer, learningEnvironment, update, withConvertedLabels

public MLPTrainer(MLPArchitecture arch,
                  IgniteFunction<Vector,IgniteDifferentiableVectorToDoubleFunction> loss,
                  UpdatesStrategy<? super MultilayerPerceptron,P> updatesStgy,
                  int maxIterations,
                  int batchSize,
                  int locIterations,
                  long seed)

Constructs a new instance of multilayer perceptron trainer.

Parameters:
arch - Multilayer perceptron architecture that defines layers and activators.
loss - Loss function to be minimized during training.
updatesStgy - Update strategy that defines how to update model parameters during training.
maxIterations - Maximal number of iterations before the training is stopped.
batchSize - Batch size (per partition).
locIterations - Maximal number of local iterations before synchronization.
seed - Random initializer seed.
public MLPTrainer(IgniteFunction<Dataset<EmptyContext,SimpleLabeledDatasetData>,MLPArchitecture> archSupplier,
                  IgniteFunction<Vector,IgniteDifferentiableVectorToDoubleFunction> loss,
                  UpdatesStrategy<? super MultilayerPerceptron,P> updatesStgy,
                  int maxIterations,
                  int batchSize,
                  int locIterations,
                  long seed)

Constructs a new instance of multilayer perceptron trainer.

Parameters:
archSupplier - Multilayer perceptron architecture supplier that defines layers and activators.
loss - Loss function to be minimized during training.
updatesStgy - Update strategy that defines how to update model parameters during training.
maxIterations - Maximal number of iterations before the training is stopped.
batchSize - Batch size (per partition).
locIterations - Maximal number of local iterations before synchronization.
seed - Random initializer seed.
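The supplier-based constructor defers choosing the architecture until the trainer has built the partitioned dataset, which allows the topology to depend on the data. A minimal sketch in which the supplier ignores the dataset handle and returns a fixed topology (a real supplier could inspect the dataset first); the classes, import paths and values follow the construction example above and are assumptions rather than defaults:

```java
// Architecture supplier: receives the prepared Dataset and returns the MLP topology.
IgniteFunction<Dataset<EmptyContext, SimpleLabeledDatasetData>, MLPArchitecture> archSupplier =
    dataset -> new MLPArchitecture(2)
        .withAddedLayer(10, true, Activators.RELU)
        .withAddedLayer(1, false, Activators.SIGMOID);

MLPTrainer<SimpleGDParameterUpdate> trainer = new MLPTrainer<>(
    archSupplier,
    LossFunctions.MSE,
    new UpdatesStrategy<>(
        new SimpleGDUpdateCalculator(0.1),
        SimpleGDParameterUpdate.SUM_LOCAL,
        SimpleGDParameterUpdate.AVG),
    3000, 4, 50, 123L);
```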
public <K,V> MultilayerPerceptron fitWithInitializedDeployingContext(DatasetBuilder<K,V> datasetBuilder,
                                                                     Preprocessor<K,V> extractor)

Trains a model based on the specified data.

fitWithInitializedDeployingContext in class DatasetTrainer<MultilayerPerceptron,double[]>
Type Parameters:
K - Type of a key in upstream data.
V - Type of a value in upstream data.
Parameters:
datasetBuilder - Dataset builder.
extractor - Extractor of UpstreamEntry into LabeledVector.
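Application code normally calls one of the public `fit` overloads inherited from `DatasetTrainer`, which set up the deploying context and delegate to this method. A hedged sketch over a local in-memory map; `LocalDatasetBuilder`, `LabeledDummyVectorizer`, `LabeledVector`, `VectorUtils` and `DenseMatrix` are assumed from the Ignite ML dataset and math APIs, and the XOR-style data and partition count are purely illustrative:

```java
// Toy training set keyed by row index; labels are double[] because MLPTrainer
// is a multi-label trainer (DatasetTrainer<MultilayerPerceptron, double[]>).
Map<Integer, LabeledVector<double[]>> data = new HashMap<>();
data.put(0, new LabeledVector<>(VectorUtils.of(0, 0), new double[] {0}));
data.put(1, new LabeledVector<>(VectorUtils.of(0, 1), new double[] {1}));
data.put(2, new LabeledVector<>(VectorUtils.of(1, 0), new double[] {1}));
data.put(3, new LabeledVector<>(VectorUtils.of(1, 1), new double[] {0}));

MultilayerPerceptron mdl = trainer.fit(
    new LocalDatasetBuilder<>(data, 2),   // 2 local partitions
    new LabeledDummyVectorizer<>()        // values are already LabeledVectors
);

// Each row of the input matrix is one sample; the result holds one prediction per row.
Matrix prediction = mdl.predict(new DenseMatrix(new double[][] {{0, 1}}));
```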
protected <K,V> MultilayerPerceptron updateModel(MultilayerPerceptron lastLearnedMdl,
                                                 DatasetBuilder<K,V> datasetBuilder,
                                                 Preprocessor<K,V> extractor)

Trains a new model, taking the previous one as a first approximation.

updateModel in class DatasetTrainer<MultilayerPerceptron,double[]>
Type Parameters:
K - Type of a key in upstream data.
V - Type of a value in upstream data.
Parameters:
lastLearnedMdl - Previously learned model.
datasetBuilder - Dataset builder.
extractor - Extractor of UpstreamEntry into LabeledVector.
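`updateModel` is protected; callers reach it through the public `update` methods inherited from `DatasetTrainer`, passing the previously trained model as the starting point. A continuation of the sketch above (`moreData` is a hypothetical map shaped like the training data):

```java
// Warm-start: continue training from the existing weights instead of a random init.
MultilayerPerceptron refined = trainer.update(
    mdl,                                     // previously trained model
    new LocalDatasetBuilder<>(moreData, 2),  // new or extended upstream data
    new LabeledDummyVectorizer<>()
);
```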
public IgniteFunction<Dataset<EmptyContext,SimpleLabeledDatasetData>,MLPArchitecture> getArchSupplier()

Get the multilayer perceptron architecture supplier that defines layers and activators.

public MLPTrainer<P> withArchSupplier(IgniteFunction<Dataset<EmptyContext,SimpleLabeledDatasetData>,MLPArchitecture> archSupplier)

Set up the multilayer perceptron architecture supplier that defines layers and activators.
Parameters:
archSupplier - The parameter value.

public IgniteFunction<Vector,IgniteDifferentiableVectorToDoubleFunction> getLoss()

Get the loss function to be minimized during training.

public MLPTrainer<P> withLoss(IgniteFunction<Vector,IgniteDifferentiableVectorToDoubleFunction> loss)

Set up the loss function to be minimized during training.
Parameters:
loss - The parameter value.

public UpdatesStrategy<? super MultilayerPerceptron,P> getUpdatesStgy()

Get the update strategy that defines how to update model parameters during training.

public MLPTrainer<P> withUpdatesStgy(UpdatesStrategy<? super MultilayerPerceptron,P> updatesStgy)

Set up the update strategy that defines how to update model parameters during training.
Parameters:
updatesStgy - The parameter value.

public int getMaxIterations()

Get the maximal number of iterations before the training is stopped.

public MLPTrainer<P> withMaxIterations(int maxIterations)

Set up the maximal number of iterations before the training is stopped.
Parameters:
maxIterations - The parameter value.

public int getBatchSize()

Get the batch size (per partition).

public MLPTrainer<P> withBatchSize(int batchSize)

Set up the batch size (per partition).
Parameters:
batchSize - The parameter value.

public int getLocIterations()

Get the maximal number of local iterations before synchronization.

public MLPTrainer<P> withLocIterations(int locIterations)

Set up the maximal number of local iterations before synchronization.
Parameters:
locIterations - The parameter value.

public long getSeed()

Get the seed used by the multilayer perceptron model initializer.

public MLPTrainer<P> withSeed(long seed)

Set up the seed used by the multilayer perceptron model initializer.
Parameters:
seed - The parameter value.

public boolean isUpdateable(MultilayerPerceptron mdl)

isUpdateable in class DatasetTrainer<MultilayerPerceptron,double[]>
Parameters:
mdl - Model.

public MLPTrainer<P> withEnvironmentBuilder(LearningEnvironmentBuilder envBuilder)

Changes the learning environment.

withEnvironmentBuilder in class DatasetTrainer<MultilayerPerceptron,double[]>
Parameters:
envBuilder - Learning environment builder.
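`isUpdateable` can be used to decide between an incremental update and training from scratch, and `withEnvironmentBuilder` replaces the learning environment used during training. A brief sketch under the same assumptions as the examples above (`LearningEnvironmentBuilder.defaultBuilder()` is used here without further customization):

```java
// Attach a learning environment builder before training.
trainer.withEnvironmentBuilder(LearningEnvironmentBuilder.defaultBuilder());

// Update the existing model only if this trainer can reuse it; otherwise retrain.
MultilayerPerceptron next = trainer.isUpdateable(mdl)
    ? trainer.update(mdl, new LocalDatasetBuilder<>(moreData, 2), new LabeledDummyVectorizer<>())
    : trainer.fit(new LocalDatasetBuilder<>(moreData, 2), new LabeledDummyVectorizer<>());
```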