# Trainer [[trainer]]

The [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer) class provides an API for feature-complete training in PyTorch. It supports distributed training on multiple GPUs/TPUs, mixed precision for [NVIDIA GPUs](https://nvidia.github.io/apex/) and [AMD GPUs](https://rocm.docs.amd.com/en/latest/rocm.html), and PyTorch's [`torch.amp`](https://pytorch.org/docs/stable/amp.html). [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer) goes hand in hand with the [TrainingArguments](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.TrainingArguments) class, which offers a wide range of options to customize how a model is trained. Together, these two classes provide a complete training API.
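
As a quick illustration, here is a minimal sketch of how the two classes are typically wired together; the checkpoint name, dataset, and hyperparameter values below are placeholders, not recommendations:

```py
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Tokenize an example dataset; columns the model does not accept are removed by the Trainer.
dataset = load_dataset("imdb")
dataset = dataset.map(lambda batch: tokenizer(batch["text"], truncation=True), batched=True)

args = TrainingArguments(
    output_dir="my_model",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    eval_strategy="epoch",
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=dataset["train"],
    eval_dataset=dataset["test"],
    processing_class=tokenizer,  # a tokenizer, so DataCollatorWithPadding is used by default
)
trainer.train()
```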

[Seq2SeqTrainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Seq2SeqTrainer) and [Seq2SeqTrainingArguments](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Seq2SeqTrainingArguments) inherit from the [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer) and [TrainingArguments](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.TrainingArguments) classes, and they are adapted for training models for sequence-to-sequence tasks such as summarization or translation.

The [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer) class is optimized for 🤗 Transformers models and can behave unexpectedly when used with other models. When using it with your own model, make sure:

- your model always returns tuples or subclasses of [ModelOutput](/docs/transformers/v5.3.0/ko/main_classes/output#transformers.utils.ModelOutput).
- your model can compute the loss when a `labels` argument is provided, and, if your model returns a tuple, the loss is returned as the first element of that tuple (see the sketch below).
- your model can accept multiple label arguments (use `label_names` in [TrainingArguments](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.TrainingArguments) to indicate their names to the [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer)), but none of them should be named `"label"`.
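
A minimal sketch of a custom `torch.nn.Module` that satisfies these requirements; the module and argument names are made up for illustration:

```py
import torch
from torch import nn

class MyRegressionModel(nn.Module):
    def __init__(self, input_dim: int = 128):
        super().__init__()
        self.linear = nn.Linear(input_dim, 1)

    def forward(self, input_values, labels=None):
        logits = self.linear(input_values).squeeze(-1)
        if labels is not None:
            # When labels are provided, compute the loss and return it
            # as the first element of the tuple.
            loss = nn.functional.mse_loss(logits, labels.float())
            return (loss, logits)
        # Otherwise return a plain tuple of outputs.
        return (logits,)
```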

## Trainer [[transformers.Trainer]]

#### transformers.Trainer[[transformers.Trainer]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L255)

Trainer is a simple but feature-complete training and eval loop for PyTorch, optimized for 🤗 Transformers.

Important attributes:

- **model** -- Always points to the core model. If using a transformers model, it will be a [PreTrainedModel](/docs/transformers/v5.3.0/ko/main_classes/model#transformers.PreTrainedModel)
  subclass.
- **model_wrapped** -- Always points to the most external model in case one or more other modules wrap the
  original model. This is the model that should be used for the forward pass. For example, under `DeepSpeed`,
  the inner model is wrapped in `DeepSpeed` and then again in `torch.nn.DistributedDataParallel`. If the inner
  model hasn't been wrapped, then `self.model_wrapped` is the same as `self.model`.
- **is_model_parallel** -- Whether or not a model has been switched to a model parallel mode (different from
  data parallelism, this means some of the model layers are split on different GPUs).
- **place_model_on_device** -- Whether or not to automatically place the model on the device. Defaults to
  `True` unless model parallel, DeepSpeed, FSDP, full fp16/bf16 eval, or SageMaker MP is active. Can be
  overridden by subclassing `TrainingArguments` and overriding the `place_model_on_device` property.
- **is_in_train** -- Whether or not a model is currently running `train` (e.g. when `evaluate` is called while
  in `train`)

**Parameters:**

model ([PreTrainedModel](/docs/transformers/v5.3.0/ko/main_classes/model#transformers.PreTrainedModel) or `torch.nn.Module`, *optional*) : The model to train, evaluate or use for predictions. If not provided, a `model_init` must be passed.    [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer) is optimized to work with the [PreTrainedModel](/docs/transformers/v5.3.0/ko/main_classes/model#transformers.PreTrainedModel) provided by the library. You can still use your own models defined as `torch.nn.Module` as long as they work the same way as the 🤗 Transformers models.   

args ([TrainingArguments](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.TrainingArguments), *optional*) : The arguments to tweak for training. Will default to a basic instance of [TrainingArguments](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.TrainingArguments) with the `output_dir` set to a directory named *tmp_trainer* in the current directory if not provided.

data_collator (`DataCollator`, *optional*) : The function to use to form a batch from a list of elements of `train_dataset` or `eval_dataset`. Will default to [default_data_collator()](/docs/transformers/v5.3.0/ko/main_classes/data_collator#transformers.default_data_collator) if no `processing_class` is provided, or to an instance of [DataCollatorWithPadding](/docs/transformers/v5.3.0/ko/main_classes/data_collator#transformers.DataCollatorWithPadding) if the `processing_class` is a feature extractor or tokenizer.

train_dataset (`torch.utils.data.Dataset` | `torch.utils.data.IterableDataset` | `datasets.Dataset`, *optional*) : The dataset to use for training. If it is a `Dataset`, columns not accepted by the `model.forward()` method are automatically removed.  Note that if it's a `torch.utils.data.IterableDataset` with some randomization and you are training in a distributed fashion, your iterable dataset should either use an internal attribute `generator` that is a `torch.Generator` for the randomization that must be identical on all processes (and the Trainer will manually set the seed of this `generator` at each epoch) or have a `set_epoch()` method that internally sets the seed of the RNGs used.

eval_dataset (`torch.utils.data.Dataset` | dict[str, `torch.utils.data.Dataset`] | `datasets.Dataset`, *optional*) : The dataset to use for evaluation. If it is a `Dataset`, columns not accepted by the `model.forward()` method are automatically removed. If it is a dictionary, it will evaluate on each dataset prepending the dictionary key to the metric name.

processing_class (`PreTrainedTokenizerBase` or `BaseImageProcessor` or `FeatureExtractionMixin` or `ProcessorMixin`, *optional*) : Processing class used to process the data. If provided, will be used to automatically process the inputs for the model, and it will be saved along the model to make it easier to rerun an interrupted training or reuse the fine-tuned model.

model_init (`Callable[[], PreTrainedModel]`, *optional*) : A function that instantiates the model to be used. If provided, each call to [train()](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer.train) will start from a new instance of the model as given by this function.  The function may have zero argument, or a single one containing the optuna/Ray Tune trial object, to be able to choose different architectures according to hyperparameters (such as layer count, sizes of inner layers, dropout probabilities etc).

compute_loss_func (`Callable`, *optional*) : A function that accepts the raw model outputs, labels, and the number of items in the entire accumulated batch (batch_size * gradient_accumulation_steps) and returns the loss. For example, see the default [loss function](https://github.com/huggingface/transformers/blob/052e652d6d53c2b26ffde87e039b723949a53493/src/transformers/trainer.py#L3618) used by [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer).

compute_metrics (`Callable[[EvalPrediction], Dict]`, *optional*) : The function that will be used to compute metrics at evaluation. Must take an [EvalPrediction](/docs/transformers/v5.3.0/ko/internal/trainer_utils#transformers.EvalPrediction) and return a dictionary mapping metric names to metric values. *Note*: when passing `TrainingArguments` with `batch_eval_metrics` set to `True`, your `compute_metrics` function must take a boolean `compute_result` argument. This will be set after the last eval batch to signal that the function needs to calculate and return the global summary statistics rather than accumulating the batch-level statistics.

callbacks (List of [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback), *optional*) : A list of callbacks to customize the training loop. Will add those to the list of default callbacks detailed in [here](callback).  If you want to remove one of the default callbacks used, use the [Trainer.remove_callback()](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer.remove_callback) method.

optimizers (`tuple[torch.optim.Optimizer, torch.optim.lr_scheduler.LambdaLR]`, *optional*, defaults to `(None, None)`) : A tuple containing the optimizer and the scheduler to use. Will default to an instance of `AdamW` on your model and a scheduler given by [get_linear_schedule_with_warmup()](/docs/transformers/v5.3.0/ko/main_classes/optimizer_schedules#transformers.get_linear_schedule_with_warmup) controlled by `args`.

optimizer_cls_and_kwargs (`tuple[Type[torch.optim.Optimizer], dict[str, Any]]`, *optional*) : A tuple containing the optimizer class and keyword arguments to use. Overrides `optim` and `optim_args` in `args`. Incompatible with the `optimizers` argument.  Unlike `optimizers`, this argument avoids the need to place model parameters on the correct devices before initializing the Trainer.

preprocess_logits_for_metrics (`Callable[[torch.Tensor, torch.Tensor], torch.Tensor]`, *optional*) : A function that preprocesses the logits right before caching them at each evaluation step. Must take two tensors, the logits and the labels, and return the logits once processed as desired. The modifications made by this function will be reflected in the predictions received by `compute_metrics`.  Note that the labels (second parameter) will be `None` if the dataset does not have them.
#### add_callback[[transformers.Trainer.add_callback]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L4337)

Add a callback to the current list of [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback).

**Parameters:**

callback (`type` or [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback)) : A [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback) class or an instance of a [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback). In the first case, will instantiate a member of that class.
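
As an illustration of the `callbacks` argument and `add_callback()`, a hedged sketch of a custom callback; the callback itself and the already-built `trainer` are assumptions:

```py
from transformers import TrainerCallback

class LossPrinterCallback(TrainerCallback):
    # Prints the training loss every time the Trainer logs.
    def on_log(self, args, state, control, logs=None, **kwargs):
        if logs is not None and "loss" in logs:
            print(f"step {state.global_step}: loss = {logs['loss']:.4f}")

# Pass it at construction time ...
# trainer = Trainer(..., callbacks=[LossPrinterCallback()])
# ... or register it on an existing trainer, as a class or as an instance:
trainer.add_callback(LossPrinterCallback())
```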
#### autocast_smart_context_manager[[transformers.Trainer.autocast_smart_context_manager]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L2034)

A helper wrapper that creates an appropriate context manager for `autocast` while feeding it the desired
arguments, depending on the situation. We rely on accelerate for autocast, hence we do nothing here.
#### call_model_init[[transformers.Trainer.call_model_init]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L4228)

Invoke `model_init` to get a fresh model instance, optionally conditioned on a hyperparameter trial.
#### compute_loss[[transformers.Trainer.compute_loss]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L1938)

How the loss is computed by Trainer. By default, all models return the loss in the first element.

Subclass and override for custom behavior (see the sketch below). If you are not using `num_items_in_batch` when computing your loss,
make sure to set `self.model_accepts_loss_kwargs` to `False`; otherwise, the loss calculation might be slightly inaccurate when performing gradient accumulation.

**Parameters:**

model (`nn.Module`) : The model to compute the loss for.

inputs (`dict[str, torch.Tensor | Any]`) : The input data for the model.

return_outputs (`bool`, *optional*, defaults to `False`) : Whether to return the model outputs along with the loss.

num_items_in_batch (`torch.Tensor`, *optional*) : The number of items in the batch. If not passed, the loss is computed using the default batch size reduction logic.

**Returns:**

The loss of the model, along with its outputs if `return_outputs` was set to `True`.
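
A minimal sketch of overriding `compute_loss` in a subclass, here with made-up class weights for a two-class classification model:

```py
import torch
from transformers import Trainer

class WeightedLossTrainer(Trainer):
    def compute_loss(self, model, inputs, return_outputs=False, num_items_in_batch=None):
        labels = inputs.pop("labels")
        outputs = model(**inputs)
        logits = outputs.logits
        # Up-weight the second class; the weights here are illustrative only.
        weight = torch.tensor([1.0, 2.0], device=logits.device)
        loss_fct = torch.nn.CrossEntropyLoss(weight=weight)
        loss = loss_fct(logits.view(-1, model.config.num_labels), labels.view(-1))
        return (loss, outputs) if return_outputs else loss
```

Since this override does not use `num_items_in_batch`, the note above suggests also setting `self.model_accepts_loss_kwargs` to `False` (for example in `__init__`).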
#### compute_loss_context_manager[[transformers.Trainer.compute_loss_context_manager]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L2022)

A helper wrapper to group together context managers.
#### create_accelerator_and_postprocess[[transformers.Trainer.create_accelerator_and_postprocess]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L750)

Create the accelerator and perform post-creation setup (FSDP, DeepSpeed, etc.).
#### create_model_card[[transformers.Trainer.create_model_card]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L3912)

Creates a draft of a model card using the information available to the `Trainer`.

**Parameters:**

language (`str`, *optional*) : The language of the model (if applicable)

license (`str`, *optional*) : The license of the model. Will default to the license of the pretrained model used, if the original model given to the `Trainer` comes from a repo on the Hub.

tags (`str` or `list[str]`, *optional*) : Some tags to be included in the metadata of the model card.

model_name (`str`, *optional*) : The name of the model.

finetuned_from (`str`, *optional*) : The name of the model used to fine-tune this one (if applicable). Will default to the name of the repo of the original model given to the `Trainer` (if it comes from the Hub).

tasks (`str` or `list[str]`, *optional*) : One or several task identifiers, to be included in the metadata of the model card.

dataset_tags (`str` or `list[str]`, *optional*) : One or several dataset tags, to be included in the metadata of the model card.

dataset (`str` or `list[str]`, *optional*) : One or several dataset identifiers, to be included in the metadata of the model card.

dataset_args (`str` or `list[str]`, *optional*) : One or several dataset arguments, to be included in the metadata of the model card.
#### create_optimizer[[transformers.Trainer.create_optimizer]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L1143)

Set up the optimizer.

We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the
Trainer's init through `optimizers`, or subclass and override this method in a subclass.

**Returns:**

``torch.optim.Optimizer``

The optimizer instance.
#### create_optimizer_and_scheduler[[transformers.Trainer.create_optimizer_and_scheduler]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L1132)

Set up the optimizer and the learning rate scheduler.

We provide a reasonable default that works well. If you want to use something else, you can pass a tuple in the
Trainer's init through `optimizers`, or subclass and override this method (or `create_optimizer` and/or
`create_scheduler`) in a subclass.
#### create_scheduler[[transformers.Trainer.create_scheduler]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L1219)

Set up the scheduler. The optimizer of the trainer must have been set up either before this method is called or
passed as an argument.

**Parameters:**

num_training_steps (int) : The number of training steps to do.

**Returns:**

``torch.optim.lr_scheduler.LRScheduler``

The learning rate scheduler instance.
#### evaluate[[transformers.Trainer.evaluate]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L2508)

Run evaluation and return metrics.

The calling script will be responsible for providing a method to compute metrics, as they are task-dependent
(pass it to the init `compute_metrics` argument).

You can also subclass and override this method to inject custom behavior.

**Parameters:**

eval_dataset (`Dataset` | dict[str, `Dataset`], *optional*) : Pass a dataset if you wish to override `self.eval_dataset`. If it is a `Dataset`, columns not accepted by the `model.forward()` method are automatically removed. If it is a dictionary, it will evaluate on each dataset, prepending the dictionary key to the metric name. Datasets must implement the `__len__` method.    If you pass a dictionary with names of datasets as keys and datasets as values, evaluate will run separate evaluations on each dataset. This can be useful to monitor how training affects other datasets or simply to get a more fine-grained evaluation. When used with `load_best_model_at_end`, make sure `metric_for_best_model` references exactly one of the datasets. If you, for example, pass in `{"data1": data1, "data2": data2}` for two datasets `data1` and `data2`, you could specify `metric_for_best_model="eval_data1_loss"` for using the loss on `data1` and `metric_for_best_model="eval_data2_loss"` for the loss on `data2`.   

ignore_keys (`list[str]`, *optional*) : A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.

metric_key_prefix (`str`, *optional*, defaults to `"eval"`) : An optional prefix to be used as the metrics key prefix. For example the metrics "bleu" will be named "eval_bleu" if the prefix is "eval" (default)

**Returns:**

A dictionary containing the evaluation loss and the potential metrics computed from the predictions. The
dictionary also contains the epoch number which comes from the training state.
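
For example, a sketch of evaluating on a dictionary of datasets; the dataset names are placeholders:

```py
trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset={"small": val_small, "large": val_large},
)

metrics = trainer.evaluate()
# Metric keys are prefixed with the dictionary keys, e.g.
# "eval_small_loss" and "eval_large_loss".
print(metrics)
```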
#### evaluation_loop[[transformers.Trainer.evaluation_loop]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L2608)

Prediction/evaluation loop, shared by `Trainer.evaluate()` and `Trainer.predict()`.

Works both with or without labels.
#### floating_point_ops[[transformers.Trainer.floating_point_ops]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L3873)

For models that inherit from [PreTrainedModel](/docs/transformers/v5.3.0/ko/main_classes/model#transformers.PreTrainedModel), uses that method to compute the number of floating point
operations for every backward + forward pass. If using another model, either implement such a method in the
model or subclass and override this method.

**Parameters:**

inputs (`dict[str, torch.Tensor | Any]`) : The inputs and targets of the model.

**Returns:**

``int``

The number of floating-point operations.
#### get_batch_samples[[transformers.Trainer.get_batch_samples]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L2092)

Collects a specified number of batches from the epoch iterator and optionally counts the number of items in the batches to properly scale the loss.
#### get_cp_size[[transformers.Trainer.get_cp_size]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L2383)

Get the context parallel size
#### get_decay_parameter_names[[transformers.Trainer.get_decay_parameter_names]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L1280)

Get all parameter names that weight decay will be applied to.

This function filters out parameters in two ways:
1. By layer type (instances of layers specified in ALL_LAYERNORM_LAYERS)
2. By parameter name patterns (containing 'bias', or variation of 'norm')
#### get_eval_dataloader[[transformers.Trainer.get_eval_dataloader]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L882)

Returns the evaluation `~torch.utils.data.DataLoader`.

Subclass and override this method if you want to inject some custom behavior.

**Parameters:**

eval_dataset (`str` or `torch.utils.data.Dataset`, *optional*) : If a `str`, will use `self.eval_dataset[eval_dataset]` as the evaluation dataset. If a `Dataset`, will override `self.eval_dataset` and must implement `__len__`. If it is a `Dataset`, columns not accepted by the `model.forward()` method are automatically removed.
#### get_learning_rates[[transformers.Trainer.get_learning_rates]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer_pt_utils.py#L982)

Returns the learning rate of each parameter from self.optimizer.
#### get_num_trainable_parameters[[transformers.Trainer.get_num_trainable_parameters]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer_pt_utils.py#L974)

Get the number of trainable parameters.
#### get_optimizer_cls_and_kwargs[[transformers.Trainer.get_optimizer_cls_and_kwargs]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L1249)

Returns the optimizer class and optimizer parameters based on the training arguments.

**Parameters:**

args (`transformers.training_args.TrainingArguments`) : The training arguments for the training session.

model (`PreTrainedModel`, *optional*) : The model being trained. Required for some optimizers (GaLore, Apollo, LOMO).

**Returns:**

A tuple containing the optimizer class and a dictionary of optimizer keyword arguments.
#### get_optimizer_group[[transformers.Trainer.get_optimizer_group]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer_pt_utils.py#L992)

Returns optimizer group for a parameter if given, else returns all optimizer groups for params.

**Parameters:**

param (`str` or `torch.nn.parameter.Parameter`, *optional*) : The parameter for which optimizer group needs to be returned.
#### get_sp_size[[transformers.Trainer.get_sp_size]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L2375)

Get the sequence parallel size
#### get_test_dataloader[[transformers.Trainer.get_test_dataloader]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L921)

Returns the test `~torch.utils.data.DataLoader`.

Subclass and override this method if you want to inject some custom behavior.

**Parameters:**

test_dataset (`torch.utils.data.Dataset`, *optional*) : The test dataset to use. If it is a `Dataset`, columns not accepted by the `model.forward()` method are automatically removed. It must implement `__len__`.
#### get_total_train_batch_size[[transformers.Trainer.get_total_train_batch_size]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L2357)

Calculates total batch size (micro_batch * grad_accum * dp_world_size).

Accounts for all parallelism dimensions: TP, CP, and SP.

Formula: dp_world_size = world_size // (tp_size * cp_size * sp_size)

Where:
- TP (Tensor Parallelism): Model layers split across GPUs
- CP (Context Parallelism): Sequences split using Ring Attention (FSDP2)
- SP (Sequence Parallelism): Sequences split using ALST/Ulysses (DeepSpeed)

All dimensions are separate and multiplicative: world_size = dp_size * tp_size * cp_size * sp_size
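
A worked example of the formula under assumed sizes:

```py
world_size = 16                       # total number of processes/GPUs (assumed)
tp_size, cp_size, sp_size = 2, 1, 1   # parallelism dimensions (assumed)
dp_world_size = world_size // (tp_size * cp_size * sp_size)  # 8

micro_batch_size = 4
gradient_accumulation_steps = 2
total_train_batch_size = micro_batch_size * gradient_accumulation_steps * dp_world_size  # 64
```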
#### get_tp_size[[transformers.Trainer.get_tp_size]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L2391)

Get the tensor parallel size from either the model or DeepSpeed config.
#### get_train_dataloader[[transformers.Trainer.get_train_dataloader]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L862)

Returns the training `~torch.utils.data.DataLoader`.

Will use no sampler if `train_dataset` does not implement `__len__`, a random sampler (adapted to distributed
training if necessary) otherwise.

Subclass and override this method if you want to inject some custom behavior.
#### hyperparameter_search[[transformers.Trainer.hyperparameter_search]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L4147)

Launch a hyperparameter search using `optuna` or `Ray Tune`. The optimized quantity is determined
by `compute_objective`, which defaults to a function returning the evaluation loss when no metric is provided,
the sum of all metrics otherwise.

To use this method, you need to have provided a `model_init` when initializing your [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer): we need to
reinitialize the model at each new run. This is incompatible with the `optimizers` argument, so you need to
subclass [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer) and override the method [create_optimizer_and_scheduler()](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer.create_optimizer_and_scheduler) for custom
optimizer/scheduler.

**Parameters:**

hp_space (`Callable[["optuna.Trial"], dict[str, float]]`, *optional*) : A function that defines the hyperparameter search space. Will default to `default_hp_space_optuna()` or `default_hp_space_ray()` depending on your backend.

compute_objective (`Callable[[dict[str, float]], float]`, *optional*) : A function computing the objective to minimize or maximize from the metrics returned by the `evaluate` method. Will default to `default_compute_objective()`.

n_trials (`int`, *optional*, defaults to 100) : The number of trial runs to test.

direction (`str` or `list[str]`, *optional*, defaults to `"minimize"`) : For single-objective optimization, `direction` is a `str` and can be `"minimize"` or `"maximize"`; pick `"minimize"` when optimizing the validation loss and `"maximize"` when optimizing one or several metrics. For multi-objective optimization, `direction` is a `list[str]` containing `"minimize"` and/or `"maximize"` values, chosen the same way.

backend (`str` or `~training_utils.HPSearchBackend`, *optional*) : The backend to use for hyperparameter search. Will default to optuna or Ray Tune, depending on which one is installed. If all are installed, will default to optuna.

hp_name (`Callable[["optuna.Trial"], str]`, *optional*) : A function that defines the trial/run name. Will default to None.

kwargs (`dict[str, Any]`, *optional*) : Additional keyword arguments for each backend:  - `optuna`: parameters from [optuna.study.create_study](https://optuna.readthedocs.io/en/stable/reference/generated/optuna.study.create_study.html) and also the parameters `timeout`, `n_jobs` and `gc_after_trial` from [optuna.study.Study.optimize](https://optuna.readthedocs.io/en/stable/reference/generated/optuna.study.Study.html#optuna.study.Study.optimize) - `ray`: parameters from [tune.run](https://docs.ray.io/en/latest/tune/api_docs/execution.html#tune-run). If `resources_per_trial` is not set in the `kwargs`, it defaults to 1 CPU core and 1 GPU (if available). If `progress_reporter` is not set in the `kwargs`, [ray.tune.CLIReporter](https://docs.ray.io/en/latest/tune/api/doc/ray.tune.CLIReporter.html) is used.

**Returns:**

`trainer_utils.BestRun` or `list[trainer_utils.BestRun]`

All the information about the best run or best
runs for multi-objective optimization. Experiment summary can be found in `run_summary` attribute for Ray
backend.
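
A hedged sketch of a search with the optuna backend; it assumes `optuna` is installed, and the checkpoint name and search space are placeholders:

```py
from transformers import AutoModelForSequenceClassification, Trainer

def model_init(trial):
    # A fresh model is instantiated for every trial.
    return AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

def hp_space(trial):
    return {
        "learning_rate": trial.suggest_float("learning_rate", 1e-5, 5e-5, log=True),
        "per_device_train_batch_size": trial.suggest_categorical("per_device_train_batch_size", [8, 16, 32]),
    }

trainer = Trainer(
    model=None,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    model_init=model_init,
)

best_run = trainer.hyperparameter_search(
    hp_space=hp_space,
    backend="optuna",
    direction="minimize",
    n_trials=10,
)
print(best_run.hyperparameters)
```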
#### init_hf_repo[[transformers.Trainer.init_hf_repo]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L3894)

Initializes a git repo in `self.args.hub_model_id`.
#### is_local_process_zero[[transformers.Trainer.is_local_process_zero]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L4377)

Whether or not this process is the local (e.g., on one machine if training in a distributed fashion on several
machines) main process.
#### is_world_process_zero[[transformers.Trainer.is_world_process_zero]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L4384)

Whether or not this process is the global main process (when training in a distributed fashion on several
machines, this is only going to be `True` for one process).
#### log[[transformers.Trainer.log]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L3838)

Log `logs` on the various objects watching training.

Subclass and override this method to inject custom behavior.

**Parameters:**

logs (`dict[str, float]`) : The values to log.

start_time (`Optional[float]`) : The start of training.
#### log_metrics[[transformers.Trainer.log_metrics]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer_pt_utils.py#L830)

Log metrics in a specially formatted way.

Under distributed environment this is done only for a process with rank 0.

Notes on memory reports:

In order to get memory usage report you need to install `psutil`. You can do that with `pip install psutil`.

Now when this method is run, you will see a report that will include:

```
init_mem_cpu_alloc_delta   =     1301MB
init_mem_cpu_peaked_delta  =      154MB
init_mem_gpu_alloc_delta   =      230MB
init_mem_gpu_peaked_delta  =        0MB
train_mem_cpu_alloc_delta  =     1345MB
train_mem_cpu_peaked_delta =        0MB
train_mem_gpu_alloc_delta  =      693MB
train_mem_gpu_peaked_delta =        7MB
```

**Understanding the reports:**

- the first segment, e.g., `train__`, tells you which stage the metrics are for. Reports starting with `init_`
  will be added to the first stage that gets run, so if only evaluation is run, the memory usage for
  `__init__` will be reported along with the `eval_` metrics.
- the third segment is either `cpu` or `gpu` and tells you whether it's the general RAM or the gpu0 memory
  metric.
- `*_alloc_delta` - is the difference in the used/allocated memory counter between the end and the start of the
  stage - it can be negative if a function released more memory than it allocated.
- `*_peaked_delta` - is any extra memory that was consumed and then freed - relative to the current allocated
  memory counter - it is never negative. When you look at the metrics of any stage you add up `alloc_delta` +
  `peaked_delta` and you know how much memory was needed to complete that stage.

The reporting happens only for process of rank 0 and gpu 0 (if there is a gpu). Typically this is enough since the
main process does the bulk of work, but it could be not quite so if model parallel is used and then other GPUs may
use a different amount of gpu memory. This is also not the same under DataParallel where gpu0 may require much more
memory than the rest since it stores the gradient and optimizer states for all participating GPUs. Perhaps in the
future these reports will evolve to measure those too.

The CPU RAM metric measures RSS (Resident Set Size), which includes both the memory unique to the process and the
memory shared with other processes. It is important to note that it does not include swapped out memory, so the
reports could be imprecise.

The CPU peak memory is measured using a sampling thread. Due to python's GIL it may miss some of the peak memory if
that thread didn't get a chance to run when the highest memory was used. Therefore this report can be less than
reality. Using `tracemalloc` would have reported the exact peak memory, but it doesn't report memory allocations
outside of python. So if some C++ CUDA extension allocated its own memory it won't be reported. And therefore it
was dropped in favor of the memory sampling approach, which reads the current process memory usage.

The GPU allocated and peak memory reporting is done with `torch.cuda.memory_allocated()` and
`torch.cuda.max_memory_allocated()`. This metric reports only "deltas" for pytorch-specific allocations, as
`torch.cuda` memory management system doesn't track any memory allocated outside of pytorch. For example, the very
first cuda call typically loads CUDA kernels, which may take from 0.5 to 2GB of GPU memory.

Note that this tracker doesn't account for memory allocations outside of [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer)'s `__init__`, `train`,
`evaluate` and `predict` calls.

Because `evaluation` calls may happen during `train`, we can't handle nested invocations because
`torch.cuda.max_memory_allocated` is a single counter, so if it gets reset by a nested eval call, `train`'s tracker
will report incorrect info. If this [pytorch issue](https://github.com/pytorch/pytorch/issues/16266) gets resolved
it will be possible to change this class to be re-entrant. Until then we will only track the outer level of
`train`, `evaluate` and `predict` methods. Which means that if `eval` is called during `train`, it's the latter
that will account for its memory usage and that of the former.

This also means that if any other tool that is used along the [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer) calls
`torch.cuda.reset_peak_memory_stats`, the gpu peak memory stats could be invalid. And the [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer) will disrupt
the normal behavior of any such tools that rely on calling `torch.cuda.reset_peak_memory_stats` themselves.

For best performance you may want to consider turning the memory profiling off for production runs.

**Parameters:**

split (`str`) : Mode/split name: one of `train`, `eval`, `test`

metrics (`dict[str, float]`) : The metrics returned from train/evaluate/predict
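
A typical usage sketch, mirroring the pattern used in the example training scripts; a constructed `trainer` is assumed:

```py
train_result = trainer.train()
metrics = train_result.metrics

trainer.log_metrics("train", metrics)   # pretty-printed report (rank 0 only)
trainer.save_metrics("train", metrics)  # writes train_results.json
trainer.save_state()

eval_metrics = trainer.evaluate()
trainer.log_metrics("eval", eval_metrics)
trainer.save_metrics("eval", eval_metrics)
```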
#### metrics_format[[transformers.Trainer.metrics_format]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer_pt_utils.py#L803)

Reformat Trainer metrics values to a human-readable format.

**Parameters:**

metrics (`dict[str, float]`) : The metrics returned from train/evaluate/predict

**Returns:**

`dict[str, float]`

The reformatted metrics
#### num_examples[[transformers.Trainer.num_examples]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L939)

Helper to get number of samples in a `~torch.utils.data.DataLoader` by accessing its dataset. When
dataloader.dataset does not exist or has no length, estimates as best it can
#### pop_callback[[transformers.Trainer.pop_callback]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L4348)

Remove a callback from the current list of [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback) and return it.

If the callback is not found, returns `None` (and no error is raised).

**Parameters:**

callback (`type` or [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback)) : A [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback) class or an instance of a [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback). In the first case, will pop the first member of that class found in the list of callbacks.

**Returns:**

`[TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback)`

The callback removed, if found.
#### predict[[transformers.Trainer.predict]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L2815)

Run prediction and return predictions and potential metrics.

Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method
will also return metrics, like in `evaluate()`.

If your predictions or labels have different sequence length (for instance because you're doing dynamic padding
in a token classification task) the predictions will be padded (on the right) to allow for concatenation into
one array. The padding index is -100.

Returns: *NamedTuple* A namedtuple with the following keys:

- predictions (`np.ndarray`): The predictions on `test_dataset`.
- label_ids (`np.ndarray`, *optional*): The labels (if the dataset contained some).
- metrics (`dict[str, float]`, *optional*): The potential dictionary of metrics (if the dataset contained
  labels).

**Parameters:**

test_dataset (`Dataset`) : Dataset to run the predictions on. If it is a `datasets.Dataset`, columns not accepted by the `model.forward()` method are automatically removed. Has to implement the method `__len__`.

ignore_keys (`list[str]`, *optional*) : A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.

metric_key_prefix (`str`, *optional*, defaults to `"test"`) : An optional prefix to be used as the metrics key prefix. For example the metrics "bleu" will be named "test_bleu" if the prefix is "test" (default)
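
A sketch of running predictions on a test set; a classification model and a prepared `test_dataset` are assumed:

```py
import numpy as np

output = trainer.predict(test_dataset)

logits = output.predictions            # np.ndarray of raw model outputs
labels = output.label_ids              # only present if the dataset had labels
print(output.metrics)                  # e.g. {"test_loss": ..., "test_runtime": ...}

predicted_class_ids = np.argmax(logits, axis=-1)
```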
#### prediction_step[[transformers.Trainer.prediction_step]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L2876)

Perform an evaluation step on `model` using `inputs`.

Subclass and override to inject custom behavior.

**Parameters:**

model (`nn.Module`) : The model to evaluate.

inputs (`dict[str, torch.Tensor | Any]`) : The inputs and targets of the model.  The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument `labels`. Check your model's documentation for all accepted arguments.

prediction_loss_only (`bool`) : Whether or not to return the loss only.

ignore_keys (`list[str]`, *optional*) : A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.

**Returns:**

`tuple[Optional[torch.Tensor], Optional[torch.Tensor], Optional[torch.Tensor]]`

A tuple with the loss,
logits and labels (each being optional).
#### push_to_hub[[transformers.Trainer.push_to_hub]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L3986)

Upload `self.model` and `self.processing_class` to the 🤗 model hub on the repo `self.args.hub_model_id`.

**Parameters:**

commit_message (`str`, *optional*, defaults to `"End of training"`) : Message to commit while pushing.

blocking (`bool`, *optional*, defaults to `True`) : Whether the function should return only when the `git push` has finished.

token (`str`, *optional*, defaults to `None`) : Token with write permission to overwrite Trainer's original args.

revision (`str`, *optional*) : The git revision to commit from. Defaults to the head of the "main" branch.

kwargs (`dict[str, Any]`, *optional*) : Additional keyword arguments passed along to [create_model_card()](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer.create_model_card).

**Returns:**

The URL of the repository where the model was pushed if `blocking=True`, or a `Future` object tracking the
progress of the commit if `blocking=False`.
#### remove_callback[[transformers.Trainer.remove_callback]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L4364)

Remove a callback from the current list of [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback).

**Parameters:**

callback (`type` or [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback)) : A [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback) class or an instance of a [TrainerCallback](/docs/transformers/v5.3.0/ko/main_classes/callback#transformers.TrainerCallback). In the first case, will remove the first member of that class found in the list of callbacks.
#### save_metrics[[transformers.Trainer.save_metrics]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer_pt_utils.py#L921)

Save metrics into a json file for that split, e.g. `train_results.json`.

Under distributed environment this is done only for a process with rank 0.

To understand the metrics please read the docstring of [log_metrics()](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer.log_metrics). The only difference is that raw
unformatted numbers are saved in the current method.

**Parameters:**

split (`str`) : Mode/split name: one of `train`, `eval`, `test`, `all`

metrics (`dict[str, float]`) : The metrics returned from train/evaluate/predict

combined (`bool`, *optional*, defaults to `True`) : Creates combined metrics by updating `all_results.json` with metrics of this call
#### save_model[[transformers.Trainer.save_model]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L3739)

Will save the model, so you can reload it using `from_pretrained()`.

Will only save from the main process.
#### save_state[[transformers.Trainer.save_state]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer_pt_utils.py#L960)

Saves the Trainer state, since Trainer.save_model saves only the tokenizer with the model.

Under distributed environment this is done only for a process with rank 0.
#### set_initial_training_values[[transformers.Trainer.set_initial_training_values]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L2287)

Calculates and returns the following values:
- `num_train_epochs`
- `num_update_steps_per_epoch`
- `num_examples`
- `num_train_samples`
- `total_train_batch_size`
- `steps_in_epoch` (total batches per epoch)
- `max_steps`
#### store_flos[[transformers.Trainer.store_flos]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L3862)

Store the number of floating-point operations that went into the model.
#### train[[transformers.Trainer.train]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L1322)

Main training entry point.

**Parameters:**

resume_from_checkpoint (`str` or `bool`, *optional*) : If a `str`, local path to a saved checkpoint as saved by a previous instance of [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer). If a `bool` and equals `True`, load the last checkpoint in *args.output_dir* as saved by a previous instance of [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer). If present, training will resume from the model/optimizer/scheduler states loaded here.

trial (`optuna.Trial` or `dict[str, Any]`, *optional*) : The trial run or the hyperparameter dictionary for hyperparameter search.

ignore_keys_for_eval (`list[str]`, *optional*) : A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions for evaluation during the training.

**Returns:**

``TrainOutput``

Object containing the global step count, training loss, and metrics.
#### training_step[[transformers.Trainer.training_step]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer.py#L1867)

Perform a training step on a batch of inputs.

Subclass and override to inject custom behavior.

**Parameters:**

model (`nn.Module`) : The model to train.

inputs (`dict[str, torch.Tensor | Any]`) : The inputs and targets of the model.  The dictionary will be unpacked before being fed to the model. Most models expect the targets under the argument `labels`. Check your model's documentation for all accepted arguments.

**Returns:**

``torch.Tensor``

The tensor with training loss on this batch.

## Seq2SeqTrainer [[transformers.Seq2SeqTrainer]]

#### transformers.Seq2SeqTrainer[[transformers.Seq2SeqTrainer]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer_seq2seq.py#L53)

#### evaluate[[transformers.Seq2SeqTrainer.evaluate]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer_seq2seq.py#L137)

Run evaluation and return metrics.

The calling script will be responsible for providing a method to compute metrics, as they are task-dependent
(pass it to the init `compute_metrics` argument).

You can also subclass and override this method to inject custom behavior.

**Parameters:**

eval_dataset (`Dataset`, *optional*) : Pass a dataset if you wish to override `self.eval_dataset`. If it is a `Dataset`, columns not accepted by the `model.forward()` method are automatically removed. It must implement the `__len__` method.

ignore_keys (`list[str]`, *optional*) : A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.

metric_key_prefix (`str`, *optional*, defaults to `"eval"`) : An optional prefix to be used as the metrics key prefix. For example the metrics "bleu" will be named "eval_bleu" if the prefix is `"eval"` (default)

max_length (`int`, *optional*) : The maximum target length to use when predicting with the generate method.

num_beams (`int`, *optional*) : Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search.

gen_kwargs : Additional `generate` specific kwargs.

**Returns:**

A dictionary containing the evaluation loss and the potential metrics computed from the predictions. The
dictionary also contains the epoch number which comes from the training state.
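
A hedged sketch of evaluating a summarization model with generation-specific arguments; the model, datasets, tokenizer, and `compute_metrics` function are assumed to exist, and `predict_with_generate` makes evaluation use `generate()`:

```py
from transformers import Seq2SeqTrainer, Seq2SeqTrainingArguments

args = Seq2SeqTrainingArguments(
    output_dir="summarization_model",
    per_device_eval_batch_size=8,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    train_dataset=train_dataset,
    eval_dataset=eval_dataset,
    processing_class=tokenizer,
    compute_metrics=compute_metrics,
)

# max_length and num_beams are forwarded to generate() during evaluation.
metrics = trainer.evaluate(max_length=128, num_beams=4)
```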
#### predict[[transformers.Seq2SeqTrainer.predict]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/trainer_seq2seq.py#L193)

Run prediction and return predictions and potential metrics.

Depending on the dataset and your use case, your test dataset may contain labels. In that case, this method
will also return metrics, like in `evaluate()`.

If your predictions or labels have different sequence lengths (for instance because you're doing dynamic
padding in a token classification task) the predictions will be padded (on the right) to allow for
concatenation into one array. The padding index is -100.

Returns: *NamedTuple* A namedtuple with the following keys:

- predictions (`np.ndarray`): The predictions on `test_dataset`.
- label_ids (`np.ndarray`, *optional*): The labels (if the dataset contained some).
- metrics (`dict[str, float]`, *optional*): The potential dictionary of metrics (if the dataset contained
  labels).

**Parameters:**

test_dataset (`Dataset`) : Dataset to run the predictions on. If it is a `Dataset`, columns not accepted by the `model.forward()` method are automatically removed. Has to implement the method `__len__`

ignore_keys (`list[str]`, *optional*) : A list of keys in the output of your model (if it is a dictionary) that should be ignored when gathering predictions.

metric_key_prefix (`str`, *optional*, defaults to `"eval"`) : An optional prefix to be used as the metrics key prefix. For example the metrics "bleu" will be named "eval_bleu" if the prefix is `"eval"` (default)

max_length (`int`, *optional*) : The maximum target length to use when predicting with the generate method.

num_beams (`int`, *optional*) : Number of beams for beam search that will be used when predicting with the generate method. 1 means no beam search.

gen_kwargs : Additional `generate` specific kwargs.

## TrainingArguments [[transformers.TrainingArguments]]

#### transformers.TrainingArguments[[transformers.TrainingArguments]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L178)

Configuration class for controlling all aspects of model training with the Trainer.
TrainingArguments centralizes all hyperparameters, optimization settings, logging preferences, and infrastructure choices needed for training.

[HfArgumentParser](/docs/transformers/v5.3.0/ko/internal/trainer_utils#transformers.HfArgumentParser) can turn this class into
[argparse](https://docs.python.org/3/library/argparse#module-argparse) arguments that can be specified on the
command line.
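
For example, a minimal command-line entry point could look like this sketch:

```py
from transformers import HfArgumentParser, TrainingArguments

parser = HfArgumentParser(TrainingArguments)
# e.g. `python train.py --output_dir out --learning_rate 3e-5 --num_train_epochs 2`
(training_args,) = parser.parse_args_into_dataclasses()
print(training_args.learning_rate)
```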

**Parameters:**

output_dir (`str`, *optional*, defaults to `"trainer_output"`) : The output directory where the model predictions and checkpoints will be written.
#### get_process_log_level[[transformers.TrainingArguments.get_process_log_level]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L1962)

Returns the log level to be used depending on whether this process is the main process of node 0, the main process
of a non-0 node, or a non-main process.

For the main process the log level defaults to the logging level set (`logging.WARNING` if you didn't do
anything) unless overridden by the `log_level` argument.

For the replica processes the log level defaults to `logging.WARNING` unless overridden by the
`log_level_replica` argument.

The choice between the main and replica process settings is made according to the return value of `should_log`.
#### get_warmup_steps[[transformers.TrainingArguments.get_warmup_steps]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2051)

Get number of steps used for a linear warmup.
#### main_process_first[[transformers.TrainingArguments.main_process_first]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2000)

A context manager for torch distributed environments where one needs to do something on the main process while
blocking the replicas, releasing the replicas once it is finished.

One such use is for `datasets`'s `map` feature which, to be efficient, should be run once on the main process;
upon completion it saves a cached version of the results, which then automatically gets loaded by the
replicas.

**Parameters:**

local (`bool`, *optional*, defaults to `True`) : If `True`, "first" means the process of rank 0 of each node; if `False`, it means the process of rank 0 of node rank 0. In a multi-node environment with a shared filesystem you will most likely want to use `local=False` so that only the main process of the first node does the processing. If, however, the filesystem is not shared, then the main process of each node will need to do the processing, which is the default behavior.

desc (`str`, *optional*, defaults to `"work"`) : a work description to be used in debug logs
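
A usage sketch with `datasets`'s `map`; the dataset and tokenization function are assumptions:

```py
with training_args.main_process_first(desc="dataset map pre-processing"):
    # Runs once on the main process; replicas wait and then load the cached result.
    tokenized_dataset = raw_dataset.map(tokenize_function, batched=True)
```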
#### set_dataloader[[transformers.TrainingArguments.set_dataloader]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2585)

A method that regroups all arguments linked to the dataloaders creation.

Example:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_dataloader(train_batch_size=16, eval_batch_size=64)
>>> args.per_device_train_batch_size
16
```

**Parameters:**

drop_last (`bool`, *optional*, defaults to `False`) : Whether to drop the last incomplete batch (if the length of the dataset is not divisible by the batch size) or not.

num_workers (`int`, *optional*, defaults to 0) : Number of subprocesses to use for data loading (PyTorch only). 0 means that the data will be loaded in the main process.

pin_memory (`bool`, *optional*, defaults to `True`) : Whether you want to pin memory in data loaders or not. Will default to `True`.

persistent_workers (`bool`, *optional*, defaults to `False`) : If `True`, the data loader will not shut down the worker processes after a dataset has been consumed once. This keeps the workers' `Dataset` instances alive and can potentially speed up training, but will increase RAM usage. Will default to `False`.

prefetch_factor (`int`, *optional*) : Number of batches loaded in advance by each worker. 2 means there will be a total of 2 * num_workers batches prefetched across all workers.

auto_find_batch_size (`bool`, *optional*, defaults to `False`) : Whether to find a batch size that will fit into memory automatically through exponential decay, avoiding CUDA Out-of-Memory errors. Requires accelerate to be installed (`pip install accelerate`)

ignore_data_skip (`bool`, *optional*, defaults to `False`) : When resuming training, whether or not to skip the epochs and batches to get the data loading at the same stage as in the previous training. If set to `True`, the training will begin faster (as that skipping step can take a long time) but will not yield the same results as the interrupted training would have.

sampler_seed (`int`, *optional*) : Random seed to be used with data samplers. If not set, random generators for data sampling will use the same seed as `self.seed`. This can be used to ensure reproducibility of data sampling, independent of the model seed.
#### set_evaluate[[transformers.TrainingArguments.set_evaluate]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2197)

A method that regroups all arguments linked to evaluation.

Example:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_evaluate(strategy="steps", steps=100)
>>> args.eval_steps
100
```

**Parameters:**

strategy (`str` or [IntervalStrategy](/docs/transformers/v5.3.0/ko/internal/trainer_utils#transformers.IntervalStrategy), *optional*, defaults to `"no"`) : The evaluation strategy to adopt during training. Possible values are:  - `"no"`: No evaluation is done during training. - `"steps"`: Evaluation is done (and logged) every `steps`. - `"epoch"`: Evaluation is done at the end of each epoch.  Setting a `strategy` different from `"no"` will set `self.do_eval` to `True`.

steps (`int`, *optional*, defaults to 500) : Number of update steps between two evaluations if `strategy="steps"`.

batch_size (`int` *optional*, defaults to 8) : The batch size per device (GPU/TPU core/CPU...) used for evaluation.

accumulation_steps (`int`, *optional*) : Number of predictions steps to accumulate the output tensors for, before moving the results to the CPU. If left unset, the whole predictions are accumulated on GPU/TPU before being moved to the CPU (faster but requires more memory).

delay (`float`, *optional*) : Number of epochs or steps to wait for before the first evaluation can be performed, depending on the eval_strategy.

loss_only (`bool`, *optional*, defaults to `False`) : Ignores all outputs except the loss.
#### set_logging[[transformers.TrainingArguments.set_logging]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2339)

A method that regroups all arguments linked to logging.

Example:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_logging(strategy="steps", steps=100)
>>> args.logging_steps
100
```

**Parameters:**

strategy (`str` or [IntervalStrategy](/docs/transformers/v5.3.0/ko/internal/trainer_utils#transformers.IntervalStrategy), *optional*, defaults to `"steps"`) : The logging strategy to adopt during training. Possible values are:  - `"no"`: No logging is done during training. - `"epoch"`: Logging is done at the end of each epoch. - `"steps"`: Logging is done every `logging_steps`. 

steps (`int`, *optional*, defaults to 500) : Number of update steps between two logs if `strategy="steps"`.

level (`str`, *optional*, defaults to `"passive"`) : Logger log level to use on the main process. Possible choices are the log levels as strings: `"debug"`, `"info"`, `"warning"`, `"error"` and `"critical"`, plus a `"passive"` level which doesn't set anything and lets the application set the level.

report_to (`str` or `list[str]`, *optional*, defaults to `"none"`) : The list of integrations to report the results and logs to. Supported platforms are `"azure_ml"`, `"clearml"`, `"codecarbon"`, `"comet_ml"`, `"dagshub"`, `"dvclive"`, `"flyte"`, `"mlflow"`, `"swanlab"`, `"tensorboard"`, `"trackio"` and `"wandb"`. Use `"all"` to report to all integrations installed, `"none"` for no integrations.

first_step (`bool`, *optional*, defaults to `False`) : Whether to log and evaluate the first `global_step` or not.

nan_inf_filter (`bool`, *optional*, defaults to `True`) : Whether to filter `nan` and `inf` losses for logging. If set to `True`, the loss of every step that is `nan` or `inf` is filtered out and the average loss of the current logging window is taken instead. `nan_inf_filter` only influences the logging of loss values; it does not change how the gradient is computed or applied to the model.

on_each_node (`bool`, *optional*, defaults to `True`) : In multinode distributed training, whether to log using `log_level` once per node, or only on the main node.

replica_level (`str`, *optional*, defaults to `"passive"`) : Logger log level to use on replicas. Same choices as `log_level`.
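As an illustrative sketch (not from the original docstring), the call below also sets the log level and a reporting integration in the same pass; it assumes `level` is stored as `log_level`, the attribute referenced above:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_logging(strategy="epoch", level="info", report_to=["tensorboard"])
>>> args.log_level
'info'
```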
#### set_lr_scheduler[[transformers.TrainingArguments.set_lr_scheduler]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2539)

A method that regroups all arguments linked to the learning rate scheduler and its hyperparameters.

Example:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_lr_scheduler(name="cosine", warmup_steps=0.05)
>>> args.warmup_steps
0.05
```

**Parameters:**

name (`str` or [SchedulerType](/docs/transformers/v5.3.0/ko/main_classes/optimizer_schedules#transformers.SchedulerType), *optional*, defaults to `"linear"`) : The scheduler type to use. See the documentation of [SchedulerType](/docs/transformers/v5.3.0/ko/main_classes/optimizer_schedules#transformers.SchedulerType) for all possible values.

num_epochs (`float`, *optional*, defaults to 3.0) : Total number of training epochs to perform (if not an integer, the decimal part is treated as the fraction of the last epoch to perform before stopping training).

max_steps (`int`, *optional*, defaults to -1) : If set to a positive number, the total number of training steps to perform. Overrides `num_train_epochs`. For a finite dataset, training is reiterated through the dataset (if all data is exhausted) until `max_steps` is reached.

warmup_steps (`float`, *optional*, defaults to 0) : Number of steps used for a linear warmup from 0 to `learning_rate`.  Should be an integer or a float in range `[0,1)`. If smaller than 1, will be interpreted as ratio of steps used for a linear warmup from 0 to `learning_rate`.
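A hedged sketch combining the scheduler choice with the epoch budget and warmup (assuming `num_epochs` is stored as `num_train_epochs`, the attribute it overrides per the description above):

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_lr_scheduler(name="cosine", num_epochs=5.0, warmup_steps=100)
>>> args.num_train_epochs
5.0
```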
#### set_optimizer[[transformers.TrainingArguments.set_optimizer]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2488)

A method that regroups all arguments linked to the optimizer and its hyperparameters.

Example:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_optimizer(name="adamw_torch", beta1=0.8)
>>> args.optim
'adamw_torch'
```

**Parameters:**

name (`str` or `training_args.OptimizerNames`, *optional*, defaults to `"adamw_torch"`) : The optimizer to use: `"adamw_torch"`, `"adamw_torch_fused"`, `"adamw_apex_fused"`, `"adamw_anyprecision"` or `"adafactor"`.

learning_rate (`float`, *optional*, defaults to 5e-5) : The initial learning rate.

weight_decay (`float`, *optional*, defaults to 0) : The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights.

beta1 (`float`, *optional*, defaults to 0.9) : The beta1 hyperparameter for the adam optimizer or its variants.

beta2 (`float`, *optional*, defaults to 0.999) : The beta2 hyperparameter for the adam optimizer or its variants.

epsilon (`float`, *optional*, defaults to 1e-8) : The epsilon hyperparameter for the adam optimizer or its variants.

args (`str`, *optional*) : Optional arguments that are supplied to AnyPrecisionAdamW (only useful when `optim="adamw_anyprecision"`).
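A hedged sketch that also sets the learning rate and weight decay in the same call:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_optimizer(name="adamw_torch", learning_rate=1e-4, weight_decay=0.01)
>>> args.weight_decay
0.01
```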
#### set_push_to_hub[[transformers.TrainingArguments.set_push_to_hub]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2414)

A method that regroups all arguments linked to synchronizing checkpoints with the Hub.

Calling this method will set `self.push_to_hub` to `True`, which means the `output_dir` will become a git
repository synced with the repo (determined by `model_id`), and the content will be pushed each time a save is
triggered (depending on your `self.save_strategy`). Calling [save_model()](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer.save_model) will also trigger a push.

Example:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_push_to_hub("me/awesome-model")
>>> args.hub_model_id
'me/awesome-model'
```

**Parameters:**

model_id (`str`) : The name of the repository to keep in sync with the local *output_dir*. It can be a simple model ID in which case the model will be pushed in your namespace. Otherwise it should be the whole repository name, for instance `"user_name/model"`, which allows you to push to an organization you are a member of with `"organization_name/model"`.

strategy (`str` or `HubStrategy`, *optional*, defaults to `"every_save"`) : Defines the scope of what is pushed to the Hub and when. Possible values are:

- `"end"`: push the model, its configuration and the processing_class (e.g. the tokenizer, if passed along to the [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer)), together with a draft of a model card, when the [save_model()](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer.save_model) method is called.
- `"every_save"`: push the model, its configuration and the processing_class (e.g. the tokenizer, if passed along to the [Trainer](/docs/transformers/v5.3.0/ko/main_classes/trainer#transformers.Trainer)), together with a draft of a model card, each time there is a model save. The pushes are asynchronous so as not to block training, and if saves are very frequent, a new push is only attempted once the previous one has finished. A last push is made with the final model at the end of training.
- `"checkpoint"`: like `"every_save"`, but the latest checkpoint is also pushed in a subfolder named last-checkpoint, allowing you to resume training easily with `trainer.train(resume_from_checkpoint="last-checkpoint")`.
- `"all_checkpoints"`: like `"checkpoint"`, but all checkpoints are pushed as they appear in the output folder (so you will get one checkpoint folder per checkpoint folder in your final repository).

token (`str`, *optional*) : The token to use to push the model to the Hub. Will default to the token in the cache folder obtained with `hf auth login`.

private_repo (`bool`, *optional*) : Whether to make the repo private. If `None` (default), the repo will be public unless the organization's default is private. This value is ignored if the repo already exists.

always_push (`bool`, *optional*, defaults to `False`) : Unless this is `True`, the `Trainer` will skip pushing a checkpoint when the previous push is not finished.

revision (`str`, *optional*) : The revision to use when pushing to the Hub. Can be a branch name, a tag, or a commit hash.
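A hedged sketch showing the side effect described above (`self.push_to_hub` becomes `True`):

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_push_to_hub("me/awesome-model", strategy="checkpoint")
>>> args.push_to_hub
True
```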
#### set_save[[transformers.TrainingArguments.set_save]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2290)

A method that regroups all arguments linked to checkpoint saving.

Example:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_save(strategy="steps", steps=100)
>>> args.save_steps
100
```

**Parameters:**

strategy (`str` or [IntervalStrategy](/docs/transformers/v5.3.0/ko/internal/trainer_utils#transformers.IntervalStrategy), *optional*, defaults to `"steps"`) : The checkpoint save strategy to adopt during training. Possible values are:

- `"no"`: No save is done during training.
- `"epoch"`: Save is done at the end of each epoch.
- `"steps"`: Save is done every `save_steps`.

steps (`int`, *optional*, defaults to 500) : Number of update steps between two checkpoint saves if `strategy="steps"`.

total_limit (`int`, *optional*) : If a value is passed, will limit the total amount of checkpoints. Deletes the older checkpoints in `output_dir`.

on_each_node (`bool`, *optional*, defaults to `False`) : When doing multi-node distributed training, whether to save models and checkpoints on each node, or only on the main one.  This should not be activated when the different nodes use the same storage as the files will be saved with the same names for each node.
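A hedged sketch that also caps the number of retained checkpoints (assuming `total_limit` is stored as `save_total_limit`):

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_save(strategy="epoch", total_limit=2)
>>> args.save_total_limit
2
```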
#### set_testing[[transformers.TrainingArguments.set_testing]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2254)

A method that regroups all basic arguments linked to testing on a held-out dataset.

Calling this method will automatically set `self.do_predict` to `True`.

Example:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_testing(batch_size=32)
>>> args.per_device_eval_batch_size
32
```

**Parameters:**

batch_size (`int`, *optional*, defaults to 8) : The batch size per device (GPU/TPU core/CPU...) used for testing.

loss_only (`bool`, *optional*, defaults to `False`) : Ignores all outputs except the loss.
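A hedged sketch showing the documented side effect (`self.do_predict` becomes `True`):

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_testing(batch_size=16, loss_only=True)
>>> args.do_predict
True
```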
#### set_training[[transformers.TrainingArguments.set_training]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2122)

A method that regroups all basic arguments linked to the training.

Calling this method will automatically set `self.do_train` to `True`.

Example:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_training(learning_rate=1e-4, batch_size=32)
>>> args.learning_rate
1e-4
```

**Parameters:**

learning_rate (`float`, *optional*, defaults to 5e-5) : The initial learning rate for the optimizer.

batch_size (`int`, *optional*, defaults to 8) : The batch size per device (GPU/TPU core/CPU...) used for training.

weight_decay (`float`, *optional*, defaults to 0) : The weight decay to apply (if not zero) to all layers except all bias and LayerNorm weights in the optimizer.

num_train_epochs (`float`, *optional*, defaults to 3.0) : Total number of training epochs to perform (if not an integer, the decimal part is treated as the fraction of the last epoch to perform before stopping training).

max_steps (`int`, *optional*, defaults to -1) : If set to a positive number, the total number of training steps to perform. Overrides `num_train_epochs`. For a finite dataset, training is reiterated through the dataset (if all data is exhausted) until `max_steps` is reached.

gradient_accumulation_steps (`int`, *optional*, defaults to 1) : Number of update steps to accumulate the gradients for before performing a backward/update pass. When using gradient accumulation, one step is counted as one step with a backward pass. Therefore, logging, evaluation and saving will be conducted every `gradient_accumulation_steps * xxx_step` training examples.

seed (`int`, *optional*, defaults to 42) : Random seed that will be set at the beginning of training. To ensure reproducibility across runs, use the `~Trainer.model_init` function to instantiate the model if it has some randomly initialized parameters.

gradient_checkpointing (`bool`, *optional*, defaults to `False`) : If True, use gradient checkpointing to save memory at the expense of slower backward pass.
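A hedged sketch combining batch size, gradient accumulation and gradient checkpointing in one call:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> args = args.set_training(batch_size=16, gradient_accumulation_steps=4, gradient_checkpointing=True)
>>> args.gradient_accumulation_steps
4
```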
#### to_dict[[transformers.TrainingArguments.to_dict]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2072)

Serializes this instance while replacing `Enum` members by their values (for JSON serialization support). Token
values are obfuscated by removing them.
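For illustration, a minimal sketch of the resulting dictionary:

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> d = args.to_dict()
>>> d["output_dir"]
'working_dir'
```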
#### to_json_string[[transformers.TrainingArguments.to_json_string]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2102)

Serializes this instance to a JSON string.
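For illustration, a minimal sketch round-tripping the JSON string through the standard library:

```py
>>> import json

>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> json.loads(args.to_json_string())["output_dir"]
'working_dir'
```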
#### to_sanitized_dict[[transformers.TrainingArguments.to_sanitized_dict]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args.py#L2108)

Sanitized serialization to use with TensorBoard's hparams.
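For illustration, a minimal sketch (the exact set of keys depends on the installed version):

```py
>>> from transformers import TrainingArguments

>>> args = TrainingArguments("working_dir")
>>> hparams = args.to_sanitized_dict()
>>> isinstance(hparams, dict)
True
```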

## Seq2SeqTrainingArguments [[transformers.Seq2SeqTrainingArguments]]

#### transformers.Seq2SeqTrainingArguments[[transformers.Seq2SeqTrainingArguments]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args_seq2seq.py#L29)

Configuration class for controlling all aspects of model training with the Trainer.
TrainingArguments centralizes all hyperparameters, optimization settings, logging preferences, and infrastructure choices needed for training.

[HfArgumentParser](/docs/transformers/v5.3.0/ko/internal/trainer_utils#transformers.HfArgumentParser) can turn this class into
[argparse](https://docs.python.org/3/library/argparse#module-argparse) arguments that can be specified on the
command line.
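For illustration, a minimal sketch of that argparse integration (a hedged example, not from the original docstring):

```py
>>> from transformers import HfArgumentParser, Seq2SeqTrainingArguments

>>> parser = HfArgumentParser(Seq2SeqTrainingArguments)
>>> (args,) = parser.parse_args_into_dataclasses(["--output_dir", "working_dir"])
>>> args.output_dir
'working_dir'
```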

#### to_dict[[transformers.Seq2SeqTrainingArguments.to_dict]]

[Source](https://github.com/huggingface/transformers/blob/v5.3.0/src/transformers/training_args_seq2seq.py#L84)

Serializes this instance while replacing `Enum` members by their values and `GenerationConfig` by dictionaries (for
JSON serialization support). Token values are obfuscated by removing them.

**Parameters:**

output_dir (`str`, *optional*, defaults to `"trainer_output"`) : The output directory where the model predictions and checkpoints will be written.

