Simple Compensation Strategy

class rizemind.strategies.compensation.simple_compensation_strategy.SimpleCompensationStrategy(strategy: Strategy, model: SupportsDistribute)

Bases: Strategy

A federated learning strategy with equal compensation distribution.

This strategy acts as a decorator around an existing Flower strategy, adding compensation functionality that assigns an equal reward score (1.0) to every participating client after each training round. The rewards are distributed through the blockchain.
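The decorator idea can be illustrated with a minimal, self-contained sketch (the class and method names below are illustrative stand-ins, not the actual rizemind or Flower API):

```python
class CompensatingStrategy:
    """Decorator sketch: hold an inner strategy and forward calls to it."""

    def __init__(self, strategy, model):
        self.strategy = strategy  # the wrapped federated-learning strategy
        self.model = model        # the reward distributor

    def configure_fit(self, server_round, parameters, client_manager):
        # Delegation methods simply pass through to the inner strategy;
        # only aggregate_fit adds compensation behavior.
        return self.strategy.configure_fit(server_round, parameters, client_manager)
```

Because every method delegates, the wrapper can be dropped in wherever the inner strategy was used before.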

strategy

The underlying federated learning strategy to delegate operations to.

Type:

flwr.server.strategy.strategy.Strategy

model

The reward distributor.

Type:

rizemind.strategies.compensation.typings.SupportsDistribute

aggregate_evaluate(server_round: int, results: list[tuple[ClientProxy, EvaluateRes]], failures: list[tuple[ClientProxy, EvaluateRes] | BaseException]) tuple[float | None, dict[str, bool | bytes | float | int | str]]

Aggregate evaluation results from clients.

Delegates the aggregation of evaluation results to the underlying strategy.

Parameters:
  • server_round – Current federated learning round number.

  • results – List of evaluation results from participating clients.

  • failures – List of failed evaluation attempts.

Returns:

Tuple containing aggregated loss and metrics dictionary.

aggregate_fit(server_round, results, failures)

Aggregate training results and distribute compensation to participants.

This method processes training results from clients, calculates compensation scores using the simple equal distribution scheme, distributes the rewards, and then delegates the actual model aggregation to the underlying strategy.

Parameters:
  • server_round – Current federated learning round number.

  • results – List of training results from participating clients.

  • failures – List of failed training attempts.

Returns:

Aggregated model parameters and metrics from the underlying strategy.
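The order of operations described above (score, distribute, then delegate) can be sketched as a standalone function; the names are illustrative, and in the real class this logic lives inside the method and uses the wrapped strategy and model attributes:

```python
def aggregate_fit_with_compensation(strategy, distributor, server_round, results, failures):
    # 1. Score participants: the simple scheme gives every client 1.0.
    scores = [(client_id, 1.0) for client_id, _fit_res in results]
    # 2. Distribute the rewards for this round.
    distributor.distribute(scores)
    # 3. Delegate the actual model aggregation to the underlying strategy.
    return strategy.aggregate_fit(server_round, results, failures)
```

Note that rewards are paid out per round before delegation, so a failure in the underlying aggregation does not silently skip compensation for that round.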

calculate(client_ids: list[Address])

Compensate each client equally.

This method implements a simple equal compensation scheme where all participating clients receive the same reward score of 1.0.

Parameters:

client_ids – List of client blockchain addresses that participated in training.

Returns:

List of tuples containing checksum addresses and their corresponding compensation scores (all equal to 1.0).
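A minimal sketch of this scoring rule (standalone, with plain strings standing in for blockchain addresses):

```python
def calculate(client_ids):
    # Equal compensation: every participating address scores 1.0.
    return [(client_id, 1.0) for client_id in client_ids]
```

The returned list preserves the input order, and an empty participant list yields an empty payout list.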

configure_evaluate(server_round: int, parameters: Parameters, client_manager: ClientManager) list[tuple[ClientProxy, EvaluateIns]]

Configure client evaluation instructions for the current round.

Delegates the configuration of evaluation instructions to the underlying strategy.

Parameters:
  • server_round – Current federated learning round number.

  • parameters – Current global model parameters.

  • client_manager – Manager handling available clients.

Returns:

List of tuples containing client proxies and their evaluation instructions.

configure_fit(server_round: int, parameters: Parameters, client_manager: ClientManager) list[tuple[ClientProxy, FitIns]]

Configure client training instructions for the current round.

Delegates the configuration of training instructions to the underlying strategy.

Parameters:
  • server_round – Current federated learning round number.

  • parameters – Current global model parameters.

  • client_manager – Manager handling available clients.

Returns:

List of tuples containing client proxies and their training instructions.

evaluate(server_round: int, parameters: Parameters) tuple[float, dict[str, bool | bytes | float | int | str]] | None

Evaluate the global model on the server side.

Delegates the server-side evaluation to the underlying strategy.

Parameters:
  • server_round – Current federated learning round number.

  • parameters – Current global model parameters to evaluate.

Returns:

Tuple containing loss and metrics, or None if evaluation is not performed.

initialize_parameters(client_manager: ClientManager) Parameters | None

Initialize global model parameters for federated learning.

Delegates the parameter initialization to the underlying strategy while logging the start of the training phase.

Parameters:

client_manager – Manager handling available clients.

Returns:

Initial model parameters, or None if not applicable.

model: SupportsDistribute
strategy: Strategy