Utils

ignite_framework.utils.utils.apply_to_tensor(input_: Union[torch.Tensor, collections.abc.Sequence, collections.abc.Mapping], func: Callable) → Union[torch.Tensor, collections.abc.Sequence, collections.abc.Mapping][source]

Apply a function to a tensor, or to a mapping or sequence of tensors.
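
For example, a minimal sketch that detaches every tensor in a nested batch; the batch structure here is illustrative, assuming mappings and sequences are traversed recursively as the signature suggests:

import torch
from ignite_framework.utils.utils import apply_to_tensor

batch = {"image": torch.rand(2, 3), "labels": [torch.tensor(0), torch.tensor(1)]}
# Detach every tensor found in the mapping and its nested sequence.
detached = apply_to_tensor(batch, lambda tensor: tensor.detach())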

ignite_framework.utils.utils.apply_to_type(input_: Union[Any, collections.abc.Sequence, collections.abc.Mapping], input_type: Union[Type, Tuple[Type[Any], Any]], func: Callable) → Union[Any, collections.abc.Sequence, collections.abc.Mapping][source]

Apply a function to an object of input_type, or to a mapping or sequence of objects of input_type.
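
For example, a hedged sketch doubling every int in a nested structure; this assumes containers are traversed recursively and only objects of input_type are transformed:

from ignite_framework.utils.utils import apply_to_type

data = {"a": 1, "b": [2, 3]}
# Apply the function to every int, recursing into the dict and the list.
doubled = apply_to_type(data, int, lambda x: x * 2)
# doubled == {"a": 2, "b": [4, 6]}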

ignite_framework.utils.utils.convert_tensor(input_: Union[torch.Tensor, collections.abc.Sequence, collections.abc.Mapping], device: Union[str, torch.device, None] = None, non_blocking: bool = False) → Union[torch.Tensor, collections.abc.Sequence, collections.abc.Mapping][source]

Move tensors to the given device.
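
For example, moving a batch to GPU when one is available; the tuple structure is illustrative:

import torch
from ignite_framework.utils.utils import convert_tensor

device = "cuda" if torch.cuda.is_available() else "cpu"
batch = (torch.rand(4, 3), torch.tensor([1, 0, 1, 0]))
# Both tensors in the tuple are moved to the chosen device; non_blocking=True
# permits asynchronous copies when the source tensors live in pinned memory.
batch = convert_tensor(batch, device=device, non_blocking=True)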

ignite_framework.utils.utils.no_arguments(func)[source]

Decorator that converts a function with StateObjectsReference arguments into a function without required arguments.

This avoids having to create a separate argument dictionary and append it, as a tuple together with func, to a callback.

Parameters:
  • func – the function to wrap.
  • *args – positional arguments passed through to func.
  • **kwargs – keyword arguments passed through to func.
Returns:

A function callable without required arguments.
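
A hedged sketch of the intended usage; loss_ref is a hypothetical stand-in for a StateObjectsReference and the binding mechanism is framework-internal, so this illustrates the idea rather than the exact API:

from ignite_framework.utils.utils import no_arguments

loss_ref = ...  # hypothetical StateObjectsReference providing the loss value

@no_arguments
def log_loss(loss=loss_ref):
    print("current loss:", loss)

# log_loss now requires no arguments, so it can be appended to a callback
# directly, without building a separate argument dictionary and tuple.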

ignite_framework.utils.utils.setup_logger(name: str, level: int = 20, format: str = '%(asctime)s %(name)s %(levelname)s: %(message)s', filepath: Optional[str] = None, distributed_rank: int = 0) → logging.Logger[source]

Sets up a logger: name, level, format, etc.

Parameters:
  • name (str) – new name for the logger.
  • level (int) – logging level, e.g. CRITICAL, ERROR, WARNING, INFO, DEBUG
  • format (str) – logging format. By default, %(asctime)s %(name)s %(levelname)s: %(message)s
  • filepath (str, optional) – Optional logging file path. If not None, logs are written to the file.
  • distributed_rank (int, optional) – rank in a distributed configuration; used to skip logger setup on worker processes (non-zero ranks).
Returns:

logging.Logger

For example, to improve log readability when training with a trainer and evaluator:

from ignite_framework.utils.utils import setup_logger
trainer = ...
evaluator = ...
trainer.logger = setup_logger("trainer")
evaluator.logger = setup_logger("evaluator")
trainer.run(data, max_epochs=10)
# Logs will look like
# 2020-01-21 12:46:07,356 trainer INFO: Engine run starting with max_epochs=10.
# 2020-01-21 12:46:07,358 trainer INFO: Epoch[1] Complete. Time taken: 00:05:23
# 2020-01-21 12:46:07,358 evaluator INFO: Engine run starting with max_epochs=1.
# 2020-01-21 12:46:07,358 evaluator INFO: Epoch[1] Complete. Time taken: 00:01:02
# ...

ignite_framework.utils.utils.to_onehot(indices: torch.Tensor, num_classes: int) → torch.Tensor[source]

Convert a tensor of indices of any shape (N, …) to a tensor of one-hot indicators of shape (N, num_classes, …) and of type uint8. Output’s device is equal to the input’s device.
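
For example, the expected output follows directly from the description above:

import torch
from ignite_framework.utils.utils import to_onehot

indices = torch.tensor([0, 2, 1])
onehot = to_onehot(indices, num_classes=3)
# tensor([[1, 0, 0],
#         [0, 0, 1],
#         [0, 1, 0]], dtype=torch.uint8)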