Metric Storage Strategy#

class rizemind.logging.metric_storage_strategy.MetricPhases(*values)[source]#

Bases: Enum

Phases at which the ServerApp receives metrics from the clients.

AGGREGATE_EVALUATE = 2#
AGGREGATE_FIT = 1#
EVALUATE = 3#
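For example, to record only evaluation-related metrics, the enabled phases can be restricted to a subset (a minimal sketch; by default all three phases are enabled):

from rizemind.logging.metric_storage_strategy import MetricPhases

# Skip AGGREGATE_FIT: only aggregated client evaluations and server-side
# evaluations will be written to the metric storage.
evaluation_only_phases = [MetricPhases.AGGREGATE_EVALUATE, MetricPhases.EVALUATE]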
class rizemind.logging.metric_storage_strategy.MetricStorageStrategy(strategy: Strategy, metrics_storage: BaseMetricStorage, enabled_metric_phases: list[MetricPhases] = [MetricPhases.AGGREGATE_FIT, MetricPhases.AGGREGATE_EVALUATE, MetricPhases.EVALUATE], save_best_model: bool = True)[source]#

Bases: Strategy

A Strategy wrapper that logs metrics to the given metric storage at the enabled MetricPhases.
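A minimal usage sketch, assuming Flower's FedAvg as the wrapped strategy and some concrete BaseMetricStorage implementation (the InMemoryMetricStorage name below is hypothetical):

from flwr.server.strategy import FedAvg

from rizemind.logging.metric_storage_strategy import MetricPhases, MetricStorageStrategy

storage = InMemoryMetricStorage()  # hypothetical BaseMetricStorage implementation
base_strategy = FedAvg()           # any Flower Strategy can be wrapped

strategy = MetricStorageStrategy(
    strategy=base_strategy,
    metrics_storage=storage,
    enabled_metric_phases=[MetricPhases.AGGREGATE_FIT, MetricPhases.AGGREGATE_EVALUATE],
    save_best_model=True,
)

Because MetricStorageStrategy is itself a Strategy, it can be passed to the Flower server wherever base_strategy would otherwise be used.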

aggregate_evaluate(server_round: int, results: list[tuple[ClientProxy, EvaluateRes]], failures: list[tuple[ClientProxy, EvaluateRes] | BaseException]) tuple[float | None, dict[str, bool | bytes | float | int | str]][source]#

Aggregate evaluation results and log metrics.

If save_best_model is enabled, the current evaluation is compared with the best evaluation seen so far, and the parameters of the best model are logged. If the AGGREGATE_EVALUATE phase is enabled, the aggregated metrics are written to the given metric storage (see the sketch below).

Parameters:
  • server_round – The current round of federated learning.

  • results – Successful evaluation results from clients.

  • failures – Failures from clients during evaluation.

Returns:

A tuple containing the aggregated loss and a dictionary of metrics.
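This method is normally invoked by the Flower server, but its call shape can be sketched directly. In the sketch below, client_proxy stands in for a ClientProxy obtained from the client manager, and the metric values are placeholders:

from flwr.common import Code, EvaluateRes, Status

res = EvaluateRes(
    status=Status(code=Code.OK, message="ok"),
    loss=0.41,
    num_examples=1200,
    metrics={"accuracy": 0.88},
)
loss, metrics = strategy.aggregate_evaluate(
    server_round=3,
    results=[(client_proxy, res)],  # client_proxy: a connected client's ClientProxy
    failures=[],
)

Note that with a plain FedAvg as the wrapped strategy, the returned metrics dictionary stays empty unless an evaluate_metrics_aggregation_fn is configured on it.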

aggregate_fit(server_round: int, results: list[tuple[ClientProxy, FitRes]], failures: list[tuple[ClientProxy, FitRes] | BaseException]) tuple[Parameters | None, dict[str, bool | bytes | float | int | str]][source]#

Aggregate fit results and log metrics.

If save_best_model is enabled, the aggregated parameters are kept in memory so that they can be stored later if they turn out to belong to the best model. If the AGGREGATE_FIT phase is enabled, the aggregated metrics are written to the given metric storage (see the sketch below).

Parameters:
  • server_round – The current round of federated learning.

  • results – Successful fit results from clients.

  • failures – Failures from clients during fitting.

Returns:

A tuple containing the aggregated parameters and a dictionary of metrics.
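When the wrapped strategy is FedAvg, per-client fit metrics are only aggregated, and therefore only available for logging, if a fit_metrics_aggregation_fn is supplied. A sketch, assuming the storage object from the earlier example and that clients report a train_loss metric from fit:

from flwr.server.strategy import FedAvg

from rizemind.logging.metric_storage_strategy import MetricStorageStrategy

def weighted_train_loss(results):
    # results: list of (num_examples, metrics) tuples reported by the clients
    total = sum(num_examples for num_examples, _ in results)
    loss = sum(num_examples * m["train_loss"] for num_examples, m in results) / total
    return {"train_loss": loss}

strategy = MetricStorageStrategy(
    strategy=FedAvg(fit_metrics_aggregation_fn=weighted_train_loss),
    metrics_storage=storage,  # assumed BaseMetricStorage instance
)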

configure_evaluate(server_round: int, parameters: Parameters, client_manager: ClientManager) list[tuple[ClientProxy, EvaluateIns]][source]#

Configure the next round of evaluation.

Parameters:
  • server_round (int) – The current round of federated learning.

  • parameters (Parameters) – The current (global) model parameters.

  • client_manager (ClientManager) – The client manager which holds all currently connected clients.

Returns:

evaluate_configuration – A list of tuples. Each tuple in the list identifies a ClientProxy and the EvaluateIns for this particular ClientProxy. If a particular ClientProxy is not included in this list, it means that this ClientProxy will not participate in the next round of federated evaluation.

Return type:

List[Tuple[ClientProxy, EvaluateIns]]
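A sketch of inspecting the configuration produced for a round, assuming a running server context that provides global_parameters and client_manager:

instructions = strategy.configure_evaluate(
    server_round=1,
    parameters=global_parameters,
    client_manager=client_manager,
)
for client, evaluate_ins in instructions:
    # Each selected client receives the current global parameters plus a config dict.
    print(client.cid, evaluate_ins.config)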

configure_fit(server_round: int, parameters: Parameters, client_manager: ClientManager) list[tuple[ClientProxy, FitIns]][source]#

Configure the next round of training.

Parameters:
  • server_round (int) – The current round of federated learning.

  • parameters (Parameters) – The current (global) model parameters.

  • client_manager (ClientManager) – The client manager which holds all currently connected clients.

Returns:

fit_configuration – A list of tuples. Each tuple in the list identifies a ClientProxy and the FitIns for this particular ClientProxy. If a particular ClientProxy is not included in this list, it means that this ClientProxy will not participate in the next round of federated learning.

Return type:

List[Tuple[ClientProxy, FitIns]]
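Per-round training configuration is typically injected through the wrapped strategy; for FedAvg this is done with on_fit_config_fn, as in this sketch (storage is an assumed BaseMetricStorage instance):

from flwr.server.strategy import FedAvg

from rizemind.logging.metric_storage_strategy import MetricStorageStrategy

def fit_config(server_round: int):
    # Values placed here reach each client as FitIns.config.
    return {"server_round": server_round, "local_epochs": 1}

strategy = MetricStorageStrategy(
    strategy=FedAvg(on_fit_config_fn=fit_config),
    metrics_storage=storage,
)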

evaluate(server_round: int, parameters: Parameters) tuple[float, dict[str, bool | bytes | float | int | str]] | None[source]#

Evaluate model parameters on the server and log metrics.

If the EVALUATE phase is enabled, the evaluation metrics are written to the given metric storage (see the sketch below).

Parameters:
  • server_round – The current round of federated learning.

  • parameters – The current global model parameters to be evaluated.

Returns:

An optional tuple containing the loss and a dictionary of metrics from the evaluation.
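Server-side (centralized) evaluation is provided by the wrapped strategy; with FedAvg it is enabled by passing an evaluate_fn, whose result this method can then log when the EVALUATE phase is enabled. A sketch with placeholder loss and accuracy values (storage is an assumed BaseMetricStorage instance):

from flwr.server.strategy import FedAvg

from rizemind.logging.metric_storage_strategy import MetricStorageStrategy

def evaluate_fn(server_round, parameters_ndarrays, config):
    # Hypothetical centralized evaluation; return (loss, metrics) or None to skip.
    loss, accuracy = 0.35, 0.90
    return loss, {"accuracy": accuracy}

strategy = MetricStorageStrategy(
    strategy=FedAvg(evaluate_fn=evaluate_fn),
    metrics_storage=storage,
)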

initialize_parameters(client_manager: ClientManager) Parameters | None[source]#

Initialize the (global) model parameters.

Parameters:

client_manager (ClientManager) – The client manager which holds all currently connected clients.

Returns:

parameters – If parameters are returned, then the server will treat these as the initial global model parameters.

Return type:

Optional[Parameters]
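Initial parameters are presumably delegated to the wrapped strategy; with FedAvg they can be supplied explicitly and are then returned here for the first round. A sketch with placeholder NumPy weights (storage and client_manager are assumed to exist):

import numpy as np

from flwr.common import ndarrays_to_parameters
from flwr.server.strategy import FedAvg

from rizemind.logging.metric_storage_strategy import MetricStorageStrategy

initial_weights = [np.zeros((10, 5)), np.zeros(5)]  # placeholder model weights
strategy = MetricStorageStrategy(
    strategy=FedAvg(initial_parameters=ndarrays_to_parameters(initial_weights)),
    metrics_storage=storage,
)
parameters = strategy.initialize_parameters(client_manager)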