
Fine-Tuning Hyperparameters using Hyperdrive in Azure Machine Learning SDK

Welcome to the world of hyperparameter tuning using Hyperdrive experiments in Azure Machine Learning! This powerful process optimizes your machine learning model by launching multiple child runs, each with a different hyperparameter configuration. Once all the runs are complete, you can evaluate and register the best model in Azure Machine Learning Studio.

In this article, we will guide you through the process of fine-tuning hyperparameters to optimize your model’s performance.

Understanding Hyperparameters

Before we begin, let’s understand what hyperparameters are. Unlike model parameters, hyperparameters cannot be learned from the data: they are settings defined before training that must be tuned to achieve optimal model performance. Examples include the number of layers in a neural network or the learning rate of a learning algorithm. These values play a crucial role in determining the success of your model. For instance, scikit-learn’s train_test_split function accepts a test_size argument, which sets the proportion of the data to hold out for the test split, and a random_state argument, which seeds the random number generator. By fine-tuning such settings, you can create the best possible model.
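
For illustration, here is a minimal, standalone example (the toy arrays are our own) showing how test_size and random_state affect the split:

import numpy as np
from sklearn.model_selection import train_test_split

X = np.arange(20).reshape(10, 2)  # 10 samples with 2 features each
y = np.arange(10)

# test_size=0.2 holds out 20% of the samples; random_state=42 makes the
# shuffle reproducible, so the same rows land in the test set on every run
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
print(X_train.shape, X_test.shape)  # prints (8, 2) (2, 2)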

Logging in to Workspace

To get started, you’ll need to log in to your Azure workspace using the Azure ML Python SDK. This requires authenticating with Azure, which can be done by clicking on a link and entering a security code on a web page. The following code imports the necessary package and loads the workspace from a saved config file.

import azureml.core
from azureml.core import Workspace

# Load the workspace from the saved config file
ws = Workspace.from_config()
print('Ready to use Azure ML {} to work with {}'.format(azureml.core.VERSION, ws.name))

Preparing the Data

In order to train a regression model, we will use the Diabetes dataset from Azure Open Datasets. The code below registers it in the Azure Machine Learning workspace as a tabular dataset, which can then be used in your experiments.

from azureml.opendatasets import Diabetes
from azureml.core import Dataset

if "diabetes" not in ws.datasets:

    ds_name = "diabetes"

    # Get the tabular dataset from Azure Open Datasets
    tab_data_set = Diabetes.get_tabular_dataset()

    # Register the tabular dataset
    try:
        print("Registering Dataset")
        tab_data_set = tab_data_set.register(
            workspace=ws,
            name=ds_name,
            description="Diabetes Sample",
            tags={"format": "CSV"},
            create_new_version=True,
        )
        print("Dataset is registered")
    except Exception as ex:
        print(ex)
else:
    print("Dataset already registered.")

Setting Up Compute

To run the Hyperdrive experiment, we need a compute target. The code below lists the compute targets available in the workspace and assigns one of them to a variable that will be used later.

from azureml.core.compute import ComputeTarget

# Use the last compute target listed in the workspace
training_cluster = None
for compute in ComputeTarget.list(ws):
    training_cluster = ComputeTarget(workspace=ws, name=compute.name)

if training_cluster is None:
    raise Exception("No compute target was found in the workspace.")
print("Found compute target: {}".format(training_cluster.name))

Creating the Training Script

We need a training script that will be executed during each run of the Hyperdrive experiment. The code below creates a folder to hold that script.

import os

experiment_folder = "diabetes_training-hyperdrive"
os.makedirs(experiment_folder, exist_ok=True)

print("The folder has been created.")

The following cell uses the %%writefile magic to generate a parameterized training script, diabetes_training.py, in the experiment_folder directory. The script exposes arguments for tuning the alpha and tol hyperparameters of the Ridge regression algorithm; it loads the Diabetes dataset, trains the model with the specified settings, and logs the resulting RMSE. When you run the cell, the script is written to disk.

%%writefile $experiment_folder/diabetes_training.py
import os
import argparse
import joblib
import math
from azureml.core import Dataset, Run
import pandas as pd
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

# Set alphas and tols parameters
parser = argparse.ArgumentParser()
parser.add_argument("--input-data", type=str)
parser.add_argument(
    "--alphas", type=float, dest="alpha_value", default=0.01, help="alpha rate"
)
parser.add_argument(
    "--tols", type=float, dest="tol_value", default=0.01, help="tol rate"
)
args = parser.parse_args()
alpha = args.alpha_value
tol = args.tol_value

# Get the experiment run context
run = Run.get_context()
ws = run.experiment.workspace

# Load the Diabetes dataset and split the data into training and test sets
diabetes = Dataset.get_by_id(ws, id=args.input_data).to_pandas_dataframe()

X, y = (
    diabetes[["AGE", "BMI", "S1", "S2", "S3", "S4", "S5", "S6", "SEX"]].values,
    diabetes["Y"].values,
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=66
)

# Train the model with the specified alpha and tol arguments
model = Ridge(alpha=alpha, tol=tol)
model.fit(X=X_train, y=y_train)
y_pred = model.predict(X=X_test)
rmse = math.sqrt(mean_squared_error(y_true=y_test, y_pred=y_pred))
run.log("rmse", rmse)

# Save the model to the outputs folder, which will be uploaded to the experiment record in Azure ML Studio
os.makedirs("outputs", exist_ok=True)
model_name = "model_alpha_" + str(alpha) + ".pkl"
filename = "outputs/" + model_name
joblib.dump(value=model, filename=filename)

run.complete()

Running a Hyperdrive Experiment

Tuning hyperparameters is similar to tuning a musical instrument. You play with different settings to find the best result. The Hyperdrive package automates this process, reducing the tedious work of manually testing each configuration. Run the next cell to import the required packages for the Hyperdrive experiment.

from azureml.core import Environment
from azureml.core import ScriptRunConfig
from azureml.core import Experiment
from azureml.train.hyperdrive import (
    RandomParameterSampling,
    BanditPolicy,
    HyperDriveConfig,
    PrimaryMetricGoal,
    choice,
    uniform,
)
from azureml.widgets import RunDetails

print("Packages imported!")

Azure Machine Learning provides three sampling methods for hyperparameters: random sampling, grid sampling, and Bayesian sampling. Random sampling picks values at random from the defined search space, which makes it especially useful for continuous hyperparameters that span a range of values, and it reduces the manual effort required for tuning. Run the code below to configure random parameter sampling for our experiment.

# Parameter values for random sampling
params = RandomParameterSampling(
    {
        "--alphas": choice(0.001, 0.005, 0.01, 0.05, 0.1, 1.0, 2.0, 4.0, 8.0),
        "--tols": uniform(0.001, 0.01),
    }
)

print("Hyperparameters are set!")

Next, we create a run configuration that specifies the training script to use for each run and the compute target to run the experiments on. We also pass the diabetes dataset as an input, so each run can use the same dataset.

# Get the training Diabetes dataset
diabetes_ds = ws.datasets.get("diabetes")

sklearn_env = Environment.get(workspace=ws, name="AzureML-Tutorial")
run_config = ScriptRunConfig(
    source_directory=experiment_folder,
    script="diabetes_training.py",
    arguments=["--input-data", diabetes_ds.as_named_input("diabetes")],
    compute_target=training_cluster,
    environment=sklearn_env,
)


print("The run configuration has been created!")

We are now ready to set up the Hyperdrive experiment by configuring the experiment settings. This includes the random parameter sampling and the run configuration.

# Configure Hyperdrive settings
hyperdrive = HyperDriveConfig(
    run_config=run_config,
    hyperparameter_sampling=params,
    policy=None,
    primary_metric_name="rmse",
    primary_metric_goal=PrimaryMetricGoal.MINIMIZE,
    max_total_runs=20,
    max_concurrent_runs=4,
)

print("The Hyperdrive experiment is ready!")

Run the experiment and review the results. This may take 10-20 minutes to complete. The status will be displayed in the output as the experiment runs. You can also switch to the Azure Machine Learning Studio to view the status of the run from the Experiments console.

# Run the experiment
experiment = Experiment(workspace=ws, name="diabetes_training_hyperdrive")
run = experiment.submit(config=hyperdrive)

# Show the status
RunDetails(run).show()
run.wait_for_completion()

When the run completes, the widget reports the final status of the parent run. In this example, the run finished with a status of Completed, a best child run ID of HD_77961f54-fea8-4514-8a1c-25a5743ce89e_3, and a best primary metric score (RMSE) of roughly 57.05.

Getting the Best Performing Run

Once all the runs have finished, run the following code to determine the best performing run based on the primary metric used in the experiment.

best_run = run.get_best_run_by_primary_metric()
if best_run is None:
    raise Exception("No best run was found")

best_run
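
With the best run in hand, you can register its model so that it appears under Models in Azure Machine Learning Studio. Here is a minimal sketch; the model name diabetes_ridge is our own choice, and the file name is looked up at runtime because the training script encodes the alpha value in it:

# Find the serialized model in the best run's outputs folder
model_files = [
    f for f in best_run.get_file_names()
    if f.startswith("outputs/") and f.endswith(".pkl")
]

# Register the model against the workspace, tagging it with its RMSE
best_run.register_model(
    model_name="diabetes_ridge",
    model_path=model_files[0],
    tags={"rmse": best_run.get_metrics().get("rmse")},
)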

Automating the hyperparameter tuning process is a game-changer in the field of machine learning. If you want to delve deeper into hyperparameter tuning, check out Microsoft’s Documentation.

Thank you for reading! We hope you found this article interesting and useful. If you have any questions or ideas to discuss, we would be happy to collaborate and exchange knowledge with you. Feel free to visit us at Skrots to learn more about our services and how we can help you optimize your machine learning models.
