Pretraining SDK#

Creating a pretraining run#

mcli.create_pretraining_run(model, train_data, save_folder, *, compute=None, tokenizer=None, training_duration=None, parameters=None, eval=None, experiment_tracker=None, timeout=10, future=False)[source]

Create a pretraining run.

Parameters
  • model – The name of the Hugging Face model to use. Required.

  • train_data – Either a list of paths to the training data or a mapping of dataset names to the path and proportion of the dataset to use. For example, if you have two datasets, dataset1 and dataset2, and you want to use 80% of dataset1 and 50% of dataset2, you can pass in {"dataset1": {"path": "path/to/dataset1", "proportion": 0.8}, "dataset2": {"path": "path/to/dataset2", "proportion": 0.5}} (see the configuration sketch below). Required.

  • save_folder – The remote location to save the checkpoints. For example, if your save_folder is s3://my-bucket/my-checkpoints, the Composer checkpoints will be saved to s3://my-bucket/my-checkpoints/<run-name>/checkpoints, and Hugging Face formatted checkpoints will be saved to s3://my-bucket/my-checkpoints/<run-name>/hf_checkpoints. The supported cloud provider prefixes are s3://, gs://, and oci://. Required.

  • compute – The compute configuration to use. Currently required.

  • tokenizer – Tokenizer configuration to use. If not provided, the default tokenizer for the model will be used.

  • training_duration – The total duration of your run. This can be specified in batches (e.g. 100ba), epochs (e.g. 10ep), or tokens (e.g. 1_000_000tok). Default is 1ep.

  • parameters –

    Additional parameters to pass to the model:

    • learning_rate: The peak learning rate to use. Default is 5e-7. The optimizer used is DecoupledLionW with betas of 0.90 and 0.95 and no weight decay, and the learning rate scheduler used is LinearWithWarmupSchedule with a warmup of 2% of the total training duration and a final learning rate multiplier of 0.

    • context_length: The maximum sequence length to use. This will be used to truncate any data that is too long. The default is the default for the provided Hugging Face model. We do not support extending the context length beyond each model's default.

  • experiment_tracker – The configuration for an experiment tracker. For example, to add Weights & Biases tracking, you can pass in {"wandb": {"project": "my-project", "entity": "my-entity"}}. To add MLflow tracking, you can pass in {"mlflow": {"experiment_path": "my-experiment", "model_registry_path": "catalog.schema.model_name"}}.

  • eval – Configuration for evaluation.

  • timeout – Time, in seconds, in which the call should complete. If the run creation takes too long, a TimeoutError will be raised. If future is True, this value will be ignored.

  • future – Return the output as a Future. If True, the call to create_pretraining_run will return immediately and the request will be processed in the background. This takes precedence over the timeout argument. To get the Run output, use return_value.result() with an optional timeout argument.

Returns

A Run object containing the pretraining run information.
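
The train_data mapping and experiment_tracker configuration described above are plain Python dictionaries. A minimal sketch of both, with placeholder paths, bucket names, project names, and registry paths:

    # Weighted dataset mixture: each entry maps a dataset name to its path and
    # the proportion of that dataset to use (paths below are placeholders).
    train_data = {
        "dataset1": {"path": "s3://my-bucket/data/dataset1", "proportion": 0.8},
        "dataset2": {"path": "s3://my-bucket/data/dataset2", "proportion": 0.5},
    }

    # Experiment tracking, following the formats shown above (values are placeholders).
    wandb_tracker = {"wandb": {"project": "my-project", "entity": "my-entity"}}
    mlflow_tracker = {
        "mlflow": {
            "experiment_path": "my-experiment",
            "model_registry_path": "catalog.schema.model_name",
        }
    }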

Pretraining runs can be programmatically created, which provides flexibility to define custom workflows or create similar pretraining runs in quick succession. create_pretraining_run() takes a set of fields that let you customize the run. At a minimum, you’ll need to provide the model you want to pretrain, the location of your training dataset, and the location where your checkpoints will be saved. Many optional fields let you perform evaluation, specify a custom tokenizer, configure experiment tracking, and more.
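
A minimal sketch of a call, assuming an S3 save location and illustrative values for the model name, cluster, and GPU count (the compute keys shown are assumptions, not prescribed names):

    import mcli

    run = mcli.create_pretraining_run(
        model="mosaicml/mpt-7b",                      # Hugging Face model name (illustrative)
        train_data=["s3://my-bucket/data/train"],     # or the weighted mapping shown above
        save_folder="s3://my-bucket/my-checkpoints",  # checkpoints saved under <run-name>/
        compute={"gpus": 8, "cluster": "my-cluster"}, # assumed keys; adjust to your cluster setup
        training_duration="10ep",                     # batches (ba), epochs (ep), or tokens (tok)
        parameters={"learning_rate": 5e-7, "context_length": 2048},
        experiment_tracker={"wandb": {"project": "my-project", "entity": "my-entity"}},
    )
    print(run.name)

    # Non-blocking variant: returns a Future immediately; .result() yields the Run.
    run_future = mcli.create_pretraining_run(
        model="mosaicml/mpt-7b",
        train_data=["s3://my-bucket/data/train"],
        save_folder="s3://my-bucket/my-checkpoints",
        compute={"gpus": 8, "cluster": "my-cluster"},
        future=True,
    )
    run = run_future.result(timeout=60)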

Other actions on pretraining runs#

To list, stop, delete, describe, or debug (view logs for) pretraining runs, follow the same workflow as for any other Run.
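
For example, a short sketch assuming the standard mcli run helpers (get_runs, get_run, follow_run_logs, stop_run, delete_run) cover pretraining runs as well; the run name is a placeholder:

    import mcli

    # List runs and fetch one by name (name is a placeholder).
    all_runs = mcli.get_runs()
    run = mcli.get_run("my-pretraining-run")

    # Stream logs for debugging (blocks while the run is still producing logs).
    for line in mcli.follow_run_logs(run):
        print(line, end="")

    # Stop and delete the run.
    mcli.stop_run(run)
    mcli.delete_run(run)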