FedSynthetic#

class fl_sim.data_processing.FedSynthetic(alpha: float, beta: float, iid: bool, num_clients: int, num_classes: int = 10, dimension: int = 60, seed: int = 0, **extra_config: Any)[source]#

Bases: FedDataset

Federated synthetic dataset.

This dataset was proposed in the FedProx paper [1] [2].

Parameters:
  • alpha (float) – Variance parameter for generating synthetic data using normal distributions; controls how much local models differ across clients.

  • beta (float) – Variance parameter for generating synthetic data using normal distributions; controls how much local data distributions differ across clients.

  • iid (bool) – Whether to generate independent and identically distributed (i.i.d.) data.

  • num_clients (int) – The number of clients.

  • num_classes (int, default 10) – The number of classes.

  • dimension (int, default 60) – The dimension of data (feature).

  • seed (int, default 0) – The random seed.

  • **extra_config (dict, optional) – Extra configurations.
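
The roles of alpha and beta can be sketched following the generation procedure described in the FedProx paper: each client draws a model perturbation from N(0, alpha) and a feature-mean perturbation from N(0, beta), then samples features and labels from a softmax model. The sketch below (NumPy, non-i.i.d. case only) is illustrative; names and exact formulas follow the paper, not fl-sim's internal code.

```python
import numpy as np

def synthetic_sketch(alpha, beta, num_clients, num_classes=10, dimension=60,
                     samples_per_client=50, seed=0):
    """Illustrative non-i.i.d. generation in the style of FedProx synthetic data."""
    rng = np.random.default_rng(seed)
    # diagonal covariance Sigma with Sigma[j, j] = (j + 1) ** -1.2
    diag_std = np.sqrt(np.array([(j + 1) ** -1.2 for j in range(dimension)]))
    clients = []
    for _ in range(num_clients):
        u_k = rng.normal(0.0, np.sqrt(alpha))   # client-specific model mean
        B_k = rng.normal(0.0, np.sqrt(beta))    # client-specific feature mean
        v_k = rng.normal(B_k, 1.0, size=dimension)
        W_k = rng.normal(u_k, 1.0, size=(dimension, num_classes))
        b_k = rng.normal(u_k, 1.0, size=num_classes)
        X = rng.normal(v_k, diag_std, size=(samples_per_client, dimension))
        y = (X @ W_k + b_k).argmax(axis=1)      # labels from a linear softmax model
        clients.append((X, y))
    return clients
```

Larger alpha spreads the per-client models (W_k, b_k) apart; larger beta spreads the per-client feature distributions apart; alpha = beta = 0 recovers near-homogeneous clients.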

References

[1] Li, Tian, Anit Kumar Sahu, Manzil Zaheer, Maziar Sanjabi, Ameet Talwalkar, and Virginia Smith. "Federated Optimization in Heterogeneous Networks." Proceedings of Machine Learning and Systems (MLSys), 2020.

[2] https://github.com/litian96/FedProx

property candidate_models: Dict[str, Module]#

A set of candidate models.

property doi: List[str]#

DOI(s) related to the dataset.

evaluate(probs: Tensor, truths: Tensor) Dict[str, float][source]#

Evaluation using predictions and ground truth.

Parameters:
  • probs (torch.Tensor) – Predicted probabilities.

  • truths (torch.Tensor) – Ground truth labels.

Returns:

Evaluation results.

Return type:

Dict[str, float]
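The exact metrics fl-sim computes are not listed here; the sketch below only illustrates the (n_samples, n_classes) probabilities → Dict[str, float] shape of the contract, using NumPy arrays in place of tensors and hypothetical metric names ("acc", "top3_acc").

```python
import numpy as np

def evaluate_sketch(probs, truths):
    """Hypothetical metrics dict from class probabilities and true labels."""
    preds = probs.argmax(axis=1)              # predicted class per sample
    acc = float((preds == truths).mean())     # top-1 accuracy
    top3 = np.argsort(probs, axis=1)[:, -3:]  # 3 highest-probability classes
    top3_acc = float((top3 == truths[:, None]).any(axis=1).mean())
    return {"acc": acc, "top3_acc": top3_acc}
```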

extra_repr_keys() List[str][source]#

Extra keys for __repr__() and __str__().

get_dataloader(train_bs: int | None = None, test_bs: int | None = None, client_idx: int | None = None) Tuple[DataLoader, DataLoader][source]#

Get local dataloader at client client_idx or get the global dataloader.

Parameters:
  • train_bs (int, optional) – Batch size for training dataloader. If None, use default batch size.

  • test_bs (int, optional) – Batch size for testing dataloader. If None, use default batch size.

  • client_idx (int, optional) – Index of the client whose dataloader to get. If None, get the dataloader containing all data, usually used for centralized training.

Returns:

Training dataloader and testing dataloader.

Return type:

Tuple[DataLoader, DataLoader]
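
The batch-size parameters behave like standard PyTorch DataLoaders; the slicing semantics can be sketched without the library (a minimal, non-shuffling stand-in, not the actual implementation):

```python
def batches(samples, batch_size):
    """Yield consecutive mini-batches, like a (non-shuffling) DataLoader."""
    for i in range(0, len(samples), batch_size):
        yield samples[i:i + batch_size]
```

For example, 10 samples with batch size 4 yield batches of sizes 4, 4, and 2 (the last batch is partial).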

load_partition_data(batch_size: int | None = None) tuple[source]#

Partition data into all local clients.

Parameters:

batch_size (int, optional) – Batch size for dataloader. If None, use default batch size.

Returns:

  • train_clients_num: int

    Number of training clients.

  • train_data_num: int

    Number of training samples.

  • test_data_num: int

    Number of testing samples.

  • train_data_global: torch.utils.data.DataLoader

    Global training dataloader.

  • test_data_global: torch.utils.data.DataLoader

    Global testing dataloader.

  • data_local_num_dict: dict

    Number of local training samples for each client.

  • train_data_local_dict: dict

    Local training dataloader for each client.

  • test_data_local_dict: dict

    Local testing dataloader for each client.

  • n_class: int

    Number of classes.

Return type:

tuple
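
The shape of the returned tuple can be illustrated with plain lists standing in for the DataLoader objects. This is a hypothetical sketch; the field order follows the list above, which is an assumption about the actual tuple layout.

```python
def partition_sketch(local_train, local_test):
    """Build the documented 9-tuple from per-client sample lists.

    local_train / local_test map client index -> list of samples; plain
    lists stand in for the DataLoaders the real method returns.
    """
    train_clients_num = len(local_train)
    train_data_global = [s for d in local_train.values() for s in d]
    test_data_global = [s for d in local_test.values() for s in d]
    data_local_num_dict = {k: len(d) for k, d in local_train.items()}
    n_class = 10  # matches the default num_classes
    return (
        train_clients_num,
        len(train_data_global),   # train_data_num
        len(test_data_global),    # test_data_num
        train_data_global,
        test_data_global,
        data_local_num_dict,
        dict(local_train),        # train_data_local_dict
        dict(local_test),         # test_data_local_dict
        n_class,
    )
```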

load_partition_data_distributed(process_id: int, batch_size: int | None = None) tuple[source]#

Get local dataloader at client process_id or get the global dataloader.

Parameters:
  • process_id (int) – Index of the client whose dataloader to get. If None, get the dataloader containing all data, usually used for centralized training.

  • batch_size (int, optional) – Batch size for dataloader. If None, use default batch size.

Returns:

Return type:

tuple

reset_seed(seed: int) None[source]#

Reset the random seed and re-generate the dataset.

Parameters:

seed (int) – The random seed.

Return type:

None

property url: str#

URL for downloading the dataset. Empty for synthetic dataset.