Use RAPIDS with Hyperparameter Optimization

Get Started

Accelerate Hyperparameter Optimization in the Cloud

Machine learning models can have dozens of options, or “hyperparameters,” that make the difference between a great model and an inaccurate one. Accelerated machine learning models in RAPIDS give you the flexibility to use hyperparameter optimization (HPO) experiments to explore all of these options to find the most accurate possible model for your problem. The acceleration of GPUs lets data scientists iterate through hundreds or thousands of variants over a lunch break, even for complex models and large datasets.

RAPIDS Integration into Cloud / Distributed Frameworks


Benefits With RAPIDS

Smooth Integration

RAPIDS matches popular PyData APIs, making it an easy drop-in for existing workloads built on Pandas and scikit-learn.
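As a sketch of what "drop-in" means here: code written against the pandas API typically runs unchanged on the GPU by swapping the import for cuDF (and, analogously, cuML mirrors scikit-learn's estimator API). The example below runs on CPU pandas; the cuDF version is the same code with the import changed.

```python
# The same code accelerates on GPU by replacing this line with
# `import cudf as pd` (cuDF mirrors the pandas API).
import pandas as pd

df = pd.DataFrame({
    "feature": [1.0, 2.0, 3.0, 4.0],
    "label":   [0, 0, 1, 1],
})

# Typical preprocessing call; the identical method exists in cuDF.
means = df.groupby("label")["feature"].mean()
print(means.to_dict())  # {0: 1.5, 1: 3.5}
```

The toy DataFrame is illustrative; the point is that no API translation layer is needed when moving an existing pandas/scikit-learn workload onto RAPIDS.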

High Performance

With GPU acceleration, RAPIDS models can train up to 40x faster than their CPU equivalents, enabling more experimentation in less time.

Deploy on Any Platform

The RAPIDS team works closely with major cloud providers and open source hyperparameter optimization solutions to provide code samples so you can get started with HPO in minutes on the cloud of your choice.

Getting Started

RAPIDS supports hyperparameter optimization solutions built on Amazon SageMaker, Azure ML, Google Cloud AI Platform, Dask-ML, and Ray Tune, so you can easily integrate with whichever framework you use today.

Get the HPO example code

Our GitHub repo contains helper code, sample notebooks, and step-by-step instructions to get you up and running on each HPO platform. See our README

Clone the Repo

Start by cloning the open-source cloud-ml-examples repository from the rapidsai GitHub organization. See our Repo
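The clone step above looks like this from a terminal (a shallow clone keeps the download small; the directory listing at the end is just to orient yourself, since each top-level directory targets one cloud or framework):

```shell
# Clone the RAPIDS cloud-ml-examples repository (shallow clone for speed)
git clone --depth 1 https://github.com/rapidsai/cloud-ml-examples.git
cd cloud-ml-examples
ls
```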

Notebook examples

The repo walks you through a sample hyperparameter optimization job step by step. To start running your own HPO experiments, navigate to the directory for your framework or CSP and check out the README.md file there. Walk Through The Notebooks

Minimize Cost, Accelerate Turnaround

[Chart: cost comparison for a 100-job HPO run]

Run your experiments with HPO

It’s easy to work in the cloud of your choice to find the best quality model.

RAPIDS on Cloud
Machine Learning Services

Azure ML, Amazon SageMaker, and Google Cloud AI hyperparameter optimization services free users from the details of managing their own infrastructure. Launch a job from a RAPIDS sample notebook, and the platform automatically scales up and launches as many instances as you need to complete the experiments quickly. From a centralized interface, you can manage your jobs, view results, and find the best model to deploy.

Bring Your Own Cloud
On-Prem or Public

Whether you run a cluster on-prem or manage instances in a public cloud, RAPIDS integrates with HPO platforms that run on your own infrastructure. Ray Tune and Dask-ML both provide cloud-neutral platforms for hyperparameter optimization. Ray Tune combines the scalable Ray platform with state-of-the-art HPO algorithms, including Population Based Training (PBT), Vizier's stopping rule, and more. Dask-ML HPO offers GPU-aware caching of intermediate datasets and a familiar, Pythonic API. Both can benefit from high-performance estimators from RAPIDS.
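Whatever the backend, the core loop these platforms parallelize is the same: sample a configuration, train, score, keep the best. A minimal CPU-only sketch of random search, with a made-up search space and a stand-in scoring function (a real trial would fit a RAPIDS model on a GPU at that point):

```python
import random

random.seed(0)

# Hypothetical search space for a tree-based model (illustrative only).
space = {
    "max_depth": range(2, 16),
    "n_estimators": range(50, 500),
    "learning_rate": (0.01, 0.3),
}

def sample(space):
    """Draw one hyperparameter configuration at random."""
    lo, hi = space["learning_rate"]
    return {
        "max_depth": random.choice(space["max_depth"]),
        "n_estimators": random.choice(space["n_estimators"]),
        "learning_rate": random.uniform(lo, hi),
    }

def score(cfg):
    """Stand-in for train-and-validate; peaks near max_depth=8, lr=0.1."""
    return -(cfg["max_depth"] - 8) ** 2 - abs(cfg["learning_rate"] - 0.1)

best_cfg, best_score = None, float("-inf")
for _ in range(100):  # HPO frameworks run these trials in parallel, on GPUs
    cfg = sample(space)
    s = score(cfg)
    if s > best_score:
        best_cfg, best_score = cfg, s

print(best_cfg)
```

Ray Tune and Dask-ML replace this loop with schedulers that distribute trials across workers and stop unpromising ones early, which is where GPU-fast training compounds the speedup.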

Amazon SageMaker
Azure ML
Google Cloud
Dask
Ray Tune

Get Started with Hyperopt