Hugging Face Transformers | Weights & Biases Documentation
The Hugging Face Transformers library makes state-of-the-art NLP models like BERT and training techniques like mixed precision and gradient checkpointing easy to use. The W&B integration adds rich, flexible experiment tracking and model versioning to interactive centralized dashboards without compromising that ease of use.
Next-level logging in a few lines
```python
import os

os.environ["WANDB_PROJECT"] = "<my-amazing-project>"  # name your W&B project
os.environ["WANDB_LOG_MODEL"] = "checkpoint"  # log all model checkpoints

from transformers import TrainingArguments, Trainer

args = TrainingArguments(..., report_to="wandb")  # turn on W&B logging
trainer = Trainer(..., args=args)
```
info
If you'd rather dive straight into working code, check out this Google Colab.
Getting started: track experiments
1) Sign up, install the `wandb` library, and log in

a) Sign up for a free account

b) `pip install` the `wandb` library

c) To log in from your training script, you'll need to be signed in to your account at www.wandb.ai; you can then find your API key on the Authorize page.

If you are using Weights & Biases for the first time, you might want to check out our quickstart.
```shell
pip install wandb

wandb login
```
2) Name the project
A Project is where all of the charts, data, and models logged from related runs are stored. Naming your project helps you organize your work and keep all the information about a single project in one place.
To add a run to a project, simply set the `WANDB_PROJECT` environment variable to the name of your project. The `WandbCallback` will pick up this environment variable and use it when setting up your run.
```python
import os

os.environ["WANDB_PROJECT"] = "amazon_sentiment_analysis"
```
info
Make sure you set the project name before you initialize the `Trainer`.
If a project name is not specified, it defaults to "huggingface".
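The fallback can be illustrated with a short sketch (this mirrors the documented default, not the integration's actual source code; `resolve_project` is a hypothetical helper):

```python
import os

def resolve_project() -> str:
    # Sketch of the documented default: if WANDB_PROJECT is unset,
    # runs are logged to a project named "huggingface".
    return os.environ.get("WANDB_PROJECT", "huggingface")

os.environ.pop("WANDB_PROJECT", None)
print(resolve_project())  # huggingface

os.environ["WANDB_PROJECT"] = "amazon_sentiment_analysis"
print(resolve_project())  # amazon_sentiment_analysis
```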
3) Log your training runs to W&B
This is the most important step: when defining your `Trainer` training arguments, either inside your code or from the command line, set `report_to` to `"wandb"` to enable logging with Weights & Biases.

The `logging_steps` argument in `TrainingArguments` controls how often training metrics are pushed to W&B during training. You can also name the training run in W&B using the `run_name` argument.

That's it! Now your models will log losses, evaluation metrics, model topology, and gradients to Weights & Biases while they train.
```python
from transformers import TrainingArguments, Trainer

args = TrainingArguments(
    # other args and kwargs here
    report_to="wandb",  # enable logging to W&B
    run_name="bert-base-high-lr",  # name of the W&B run (optional)
    logging_steps=1,  # how often to log to W&B
)

trainer = Trainer(
    # other args and kwargs here
    args=args,  # your training args
)

trainer.train()  # start training and logging to W&B
```
info
Using TensorFlow? Just swap the PyTorch `Trainer` for the TensorFlow `TFTrainer`.
4) Turn on model checkpointing
Using Weights & Biases' Artifacts, you can store up to 100GB of models and datasets for free and then use the Weights & Biases Model Registry to register models to prepare them for staging or deployment in your production environment.
Logging your Hugging Face model checkpoints to Artifacts can be done by setting the `WANDB_LOG_MODEL` environment variable to one of `end`, `checkpoint`, or `false`:

- `checkpoint`: a checkpoint will be uploaded every `args.save_steps` from the `TrainingArguments`.
- `end`: the model will be uploaded at the end of training.

Use `WANDB_LOG_MODEL` along with `load_best_model_at_end` to upload the best model at the end of training.
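To make the `checkpoint` option concrete, here is a small sketch (`checkpoint_upload_steps` is a hypothetical helper for illustration, not part of the integration) of which training steps trigger a checkpoint upload for a given `args.save_steps`:

```python
def checkpoint_upload_steps(total_steps: int, save_steps: int) -> list[int]:
    # With WANDB_LOG_MODEL="checkpoint", a checkpoint artifact is uploaded
    # every `save_steps` steps, i.e. at steps save_steps, 2*save_steps, ...
    return list(range(save_steps, total_steps + 1, save_steps))

# e.g. a 2000-step run with save_steps=500 uploads 4 checkpoint versions:
print(checkpoint_upload_steps(2000, 500))  # [500, 1000, 1500, 2000]
```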
```python
import os

os.environ["WANDB_LOG_MODEL"] = "checkpoint"
```
Any Transformers `Trainer` you initialize from now on will upload models to your W&B project. The model checkpoints you log will be viewable through the Artifacts UI, and include the full model lineage (see an example model checkpoint in the UI here).
info
By default, your model will be saved to W&B Artifacts as `model-{run_id}` when `WANDB_LOG_MODEL` is set to `end`, or `checkpoint-{run_id}` when `WANDB_LOG_MODEL` is set to `checkpoint`. However, if you pass a `run_name` in your `TrainingArguments`, the model will be saved as `model-{run_name}` or `checkpoint-{run_name}`.
W&B Model Registry
Once you have logged your checkpoints to Artifacts, you can then register your best model checkpoints and centralize them across your team using the Weights & Biases Model Registry. Here you can organize your best models by task, manage model lifecycle, facilitate easy tracking and auditing throughout the ML lifecycle, and automate downstream actions with webhooks or jobs.
See the Model Registry documentation for how to link a model Artifact to the Model Registry.
5) Visualize evaluation outputs during training

Visualizing your model outputs during training or evaluation is often essential to really understand how your model is training.

By using the callbacks system in the Transformers Trainer, you can log additional helpful data to W&B, such as your model's text generation outputs or other predictions, to W&B Tables.

See the Custom logging section below for a full guide on how to log evaluation outputs to a W&B Table while training.
6) Finish your W&B Run (Notebook only)
If your training is encapsulated in a Python script, the W&B run will end when your script finishes.
If you are using a Jupyter or Google Colab notebook, you'll need to tell us when you're done with training by calling `wandb.finish()`.
```python
trainer.train()  # start training and logging to W&B

# post-training analysis, testing, other logged code

wandb.finish()
```
7) Visualize your results
Once you have logged your training results you can explore your results dynamically in the W&B Dashboard. It's easy to compare across dozens of runs at once, zoom in on interesting findings, and coax insights out of complex data with flexible, interactive visualizations.
Advanced features and FAQs
How do I save the best model?
If `load_best_model_at_end=True` is set in the `TrainingArguments` that are passed to the `Trainer`, then W&B will save the best performing model checkpoint to Artifacts.

If you'd like to centralize all your best model versions across your team to organize them by ML task, stage them for production, bookmark them for further evaluation, or kick off downstream Model CI/CD processes, then ensure you're saving your model checkpoints to Artifacts. Once logged to Artifacts, these checkpoints can then be promoted to the Model Registry.
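As a sketch of what "best performing" means here: with `metric_for_best_model="eval_loss"` and `greater_is_better=False`, the Trainer keeps the checkpoint with the lowest evaluation loss. The helper below is hypothetical, for illustration only:

```python
def best_checkpoint_step(eval_history: list[tuple[int, float]]) -> int:
    # eval_history: (global_step, eval_loss) pairs from each evaluation.
    # With load_best_model_at_end=True and a lower-is-better metric, the
    # checkpoint with the lowest eval loss is reloaded at the end of
    # training, and that is the version saved to Artifacts.
    return min(eval_history, key=lambda pair: pair[1])[0]

history = [(500, 0.42), (1000, 0.31), (1500, 0.35)]
print(best_checkpoint_step(history))  # 1000
```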
Loading a saved model
If you saved your model to W&B Artifacts with `WANDB_LOG_MODEL`, you can download your model weights for additional training or to run inference. You just load them back into the same Hugging Face architecture that you used before.
```python
import wandb
from transformers import AutoModelForSequenceClassification

# Create a new run
with wandb.init(project="amazon_sentiment_analysis") as run:
    # Pass the name and version of Artifact
    my_model_name = "model-bert-base-high-lr:latest"
    my_model_artifact = run.use_artifact(my_model_name)

    # Download model weights to a folder and return the path
    model_dir = my_model_artifact.download()

    # Load your Hugging Face model from that folder
    # using the same model class
    model = AutoModelForSequenceClassification.from_pretrained(
        model_dir, num_labels=num_labels
    )

    # Do additional training, or run inference
```
Resume training from a checkpoint
If you had set `WANDB_LOG_MODEL='checkpoint'`, you can also resume training by using the `model_dir` as the `model_name_or_path` argument in your `TrainingArguments` and passing `resume_from_checkpoint=True` to `Trainer`.
```python
import os

import wandb
from transformers import AutoModelForSequenceClassification, Trainer, TrainingArguments

last_run_id = "xxxxxxxx"  # fetch the run_id from your wandb workspace

# resume the wandb run from the run_id
with wandb.init(
    project=os.environ["WANDB_PROJECT"],
    id=last_run_id,
    resume="must",
) as run:
    # Connect an Artifact to the run
    my_checkpoint_name = f"checkpoint-{last_run_id}:latest"
    my_checkpoint_artifact = run.use_artifact(my_checkpoint_name)

    # Download checkpoint to a folder and return the path
    checkpoint_dir = my_checkpoint_artifact.download()

    # reinitialize your model and trainer
    model = AutoModelForSequenceClassification.from_pretrained(
        "<model_name>", num_labels=num_labels
    )
    # your awesome training arguments here
    training_args = TrainingArguments()

    trainer = Trainer(model=model, args=training_args)

    # make sure to use the checkpoint dir to resume training from the checkpoint
    trainer.train(resume_from_checkpoint=checkpoint_dir)
```
Custom logging: log and view evaluation samples during training
Logging to Weights & Biases via the Transformers `Trainer` is taken care of by the `WandbCallback` in the Transformers library. If you need to customize your Hugging Face logging, you can modify this callback by subclassing `WandbCallback` and adding functionality that leverages additional methods from the Trainer class.
Below is the general pattern to add this new callback to the HF Trainer, and further down is a code-complete example to log evaluation outputs to a W&B Table:
```python
# Instantiate the Trainer as normal
trainer = Trainer()

# Instantiate the new logging callback, passing it the Trainer object
evals_callback = WandbEvalsCallback(trainer, tokenizer, ...)

# Add the callback to the Trainer
trainer.add_callback(evals_callback)

# Begin Trainer training as normal
trainer.train()
```
View evaluation samples during training
The following section shows how to customize the `WandbCallback` to run model predictions and log evaluation samples to a W&B Table during training. We run predictions every `eval_steps` using the `on_evaluate` method of the Trainer callback.

Here, we wrote a `decode_predictions` function to decode the predictions and labels from the model output using the tokenizer.

Then, we create a pandas DataFrame from the predictions and labels and add an `epoch` column to the DataFrame.

Finally, we create a `wandb.Table` from the DataFrame and log it to wandb. Additionally, we can control the frequency of logging by logging the predictions every `freq` epochs.
Note: Unlike the regular `WandbCallback`, this custom callback needs to be added to the trainer after the `Trainer` is instantiated, not during initialization of the `Trainer`. This is because the `Trainer` instance is passed to the callback during initialization.
```python
from transformers.integrations import WandbCallback
import pandas as pd


def decode_predictions(tokenizer, predictions):
    labels = tokenizer.batch_decode(predictions.label_ids)
    logits = predictions.predictions.argmax(axis=-1)
    prediction_text = tokenizer.batch_decode(logits)
    return {"labels": labels, "predictions": prediction_text}


class WandbPredictionProgressCallback(WandbCallback):
    """Custom WandbCallback to log model predictions during training.

    This callback logs model predictions and labels to a wandb.Table at each
    logging step during training. It allows you to visualize the model
    predictions as the training progresses.

    Attributes:
        trainer (Trainer): The Hugging Face Trainer instance.
        tokenizer (AutoTokenizer): The tokenizer associated with the model.
        sample_dataset (Dataset): A subset of the validation dataset
            for generating predictions.
        num_samples (int, optional): Number of samples to select from
            the validation dataset for generating predictions. Defaults to 100.
        freq (int, optional): Frequency of logging. Defaults to 2.
    """

    def __init__(self, trainer, tokenizer, val_dataset, num_samples=100, freq=2):
        """Initializes the WandbPredictionProgressCallback instance.

        Args:
            trainer (Trainer): The Hugging Face Trainer instance.
            tokenizer (AutoTokenizer): The tokenizer associated with the model.
            val_dataset (Dataset): The validation dataset.
            num_samples (int, optional): Number of samples to select from
                the validation dataset for generating predictions.
                Defaults to 100.
            freq (int, optional): Frequency of logging. Defaults to 2.
        """
        super().__init__()
        self.trainer = trainer
        self.tokenizer = tokenizer
        self.sample_dataset = val_dataset.select(range(num_samples))
        self.freq = freq

    def on_evaluate(self, args, state, control, **kwargs):
        super().on_evaluate(args, state, control, **kwargs)
        # control the frequency of logging by logging the predictions
        # every `freq` epochs
        if state.epoch % self.freq == 0:
            # generate predictions
            predictions = self.trainer.predict(self.sample_dataset)
            # decode predictions and labels
            predictions = decode_predictions(self.tokenizer, predictions)
            # add predictions to a wandb.Table
            predictions_df = pd.DataFrame(predictions)
            predictions_df["epoch"] = state.epoch
            records_table = self._wandb.Table(dataframe=predictions_df)
            # log the table to wandb
            self._wandb.log({"sample_predictions": records_table})


# First, instantiate the Trainer
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=lm_datasets["train"],
    eval_dataset=lm_datasets["validation"],
)

# Instantiate the WandbPredictionProgressCallback
progress_callback = WandbPredictionProgressCallback(
    trainer=trainer,
    tokenizer=tokenizer,
    val_dataset=lm_datasets["validation"],
    num_samples=10,
    freq=2,
)

# Add the callback to the trainer
trainer.add_callback(progress_callback)
```
For a more detailed example, please refer to this colab.
Additional W&B settings
Further configuration of what is logged with `Trainer` is possible by setting environment variables. A full list of W&B environment variables can be found here.
| Environment Variable | Usage |
|---|---|
| `WANDB_PROJECT` | Give your project a name (`huggingface` by default) |
| `WANDB_LOG_MODEL` | Log the model checkpoint as a W&B Artifact (`false` by default). `false` (default): no model checkpointing; `checkpoint`: a checkpoint is uploaded every `args.save_steps` (set in the Trainer's `TrainingArguments`); `end`: the final model checkpoint is uploaded at the end of training |
| `WANDB_WATCH` | Set whether you'd like to log your model's gradients, parameters, or neither. `false` (default): no gradient or parameter logging; `gradients`: log histograms of the gradients; `all`: log histograms of gradients and parameters |
| `WANDB_DISABLED` | Set to `true` to disable logging entirely (`false` by default) |
| `WANDB_SILENT` | Set to `true` to silence the output printed by wandb (`false` by default) |
```shell
WANDB_WATCH=all
WANDB_SILENT=true
```
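The same variables can be set from Python (for example, in a notebook), following the `os.environ` pattern used earlier in this guide. Set them before the `Trainer` is created:

```python
import os

# Set before the Trainer is initialized so the WandbCallback picks them up
os.environ["WANDB_WATCH"] = "all"    # log gradient and parameter histograms
os.environ["WANDB_SILENT"] = "true"  # silence wandb's console output
```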
Customize wandb.init
The `WandbCallback` that `Trainer` uses will call `wandb.init` under the hood when the `Trainer` is initialized. You can alternatively set up your runs manually by calling `wandb.init` before the `Trainer` is initialized. This gives you full control over your W&B run configuration.

An example of what you might want to pass to `init` is below. For more details on how to use `wandb.init`, check out the reference documentation.
```python
wandb.init(
    project="amazon_sentiment_analysis",
    name="bert-base-high-lr",
    tags=["baseline", "high-lr"],
    group="bert",
)
```
Highlighted Articles
Below are 6 Transformers and W&B related articles you might enjoy:

- Hyperparameter Optimization for Hugging Face Transformers
- Hugging Tweets: Train a Model to Generate Tweets
- Sentence Classification With Hugging Face BERT and WB
- A Step by Step Guide to Tracking Hugging Face Model Performance
- Early Stopping in HuggingFace - Examples
- How to Fine-Tune Hugging Face Transformers on a Custom Dataset
Issues, questions, feature requests
For any issues, questions, or feature requests for the Hugging Face W&B integration, feel free to post in this thread on the Hugging Face forums or open an issue on the Hugging Face Transformers GitHub repo.