AWS Cluster via Dask
RAPIDS can be deployed on ECS using Dask’s dask-cloudprovider management tools. For more details, see our blog post on deploying on ECS.
1. Set up AWS credentials. First, you will need AWS credentials so that dask-cloudprovider can interact with AWS on your behalf. If someone else manages your AWS account, you will need to get these keys from them. You can provide the credentials to dask-cloudprovider in a number of ways, but the easiest is to set up your local environment using the AWS command line tools:
>>> pip install awscli
>>> aws configure
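If you prefer not to use the CLI, the underlying AWS client libraries will also pick up credentials from the standard AWS environment variables. A minimal sketch with placeholder values, set before any AWS session is created:
>>> import os
>>> os.environ["AWS_ACCESS_KEY_ID"] = "<your-access-key-id>"      # placeholder
>>> os.environ["AWS_SECRET_ACCESS_KEY"] = "<your-secret-key>"     # placeholder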
2. Install dask-cloudprovider. To install, run the following:
>>> pip install dask-cloudprovider
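To sanity-check the installation, confirm the package imports cleanly (the version attribute below assumes a standard pip install):
>>> import dask_cloudprovider
>>> print(dask_cloudprovider.__version__)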
3. Create an ECS cluster:
In the AWS console, visit the ECS dashboard. From the “Clusters” section on the left hand side, click “Create Cluster”.
Make sure to select an EC2 Linux + Networking cluster so that we can specify our networking options.
Give the cluster a name, e.g. rapids-cluster.
Change the instance type to one with RAPIDS-supported GPUs (see the introduction section for the list of supported instance types). For this example we will use p3.2xlarge instances, each of which comes with one NVIDIA V100 GPU.
In the networking section, select the default VPC and all the subnets available in that VPC.
All other options can be left at defaults. You can now click “create” and wait for the cluster creation to complete.
4. Create a Dask cluster:
Get the Amazon Resource Name (ARN) for the cluster you just created.
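If you would rather not copy the ARN out of the console, a short boto3 sketch can list it for you (assuming your credentials and region are already configured as above):
>>> import boto3
>>> ecs = boto3.client("ecs")
>>> ecs.list_clusters()["clusterArns"]  # ARNs of all ECS clusters in the region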
Set the AWS_DEFAULT_REGION environment variable to your default region:
>>> export AWS_DEFAULT_REGION=[REGION]
[REGION] = the code of the region being used (e.g. us-east-1).
Create the ECSCluster object in your Python session:
>>> from dask_cloudprovider import ECSCluster
>>> cluster = ECSCluster(
...     cluster_arn=[CLUSTER_ARN],
...     n_workers=[NUM_WORKERS],
...     worker_gpu=[NUM_GPUS],
... )
[CLUSTER_ARN] = the ARN of the existing ECS cluster to use for launching tasks.
[NUM_WORKERS] = the number of workers to start on cluster creation.
[NUM_GPUS] = the number of GPUs to expose to each worker.
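Putting the pieces together, here is a minimal sketch with illustrative values; the ARN is hypothetical, and the worker counts are just an example sized for the p3.2xlarge instances above:
>>> from dask_cloudprovider import ECSCluster
>>> cluster = ECSCluster(
...     cluster_arn="arn:aws:ecs:us-east-1:123456789012:cluster/rapids-cluster",  # hypothetical ARN
...     n_workers=2,   # start two Dask workers
...     worker_gpu=1,  # expose one GPU per worker (a p3.2xlarge has one V100)
... )
Note that in newer dask-cloudprovider releases the import lives at dask_cloudprovider.aws.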
5. Test RAPIDS. Create a distributed client for our cluster:
>>> from dask.distributed import Client
>>> client = Client(cluster)
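It can help to block until the requested workers have actually joined before submitting any work; Client.wait_for_workers is the standard distributed call for this (the count should match the n_workers you requested):
>>> client.wait_for_workers(2)  # assuming n_workers=2 above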
Load sample data and test the cluster!
>>> import dask, cudf, dask_cudf
>>> ddf = dask.datasets.timeseries()
>>> gdf = ddf.map_partitions(cudf.from_pandas)
>>> gdf.groupby("name").id.count().compute().head()
...
Name: id, dtype: int64
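Any other Dask-style computation runs the same way. For instance, aggregating one of the numeric columns that dask.datasets.timeseries() generates by default:
>>> gdf.x.mean().compute()  # executed on the GPU workers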
6. Cleanup. Your cluster will continue to run (and incur charges!) until you shut it down. You can either scale the number of nodes down to zero, or shut the cluster down altogether. If you plan to use the cluster again soon, scaling down to zero is probably preferable; both options are sketched below.
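On the Dask side, a sketch of both options, assuming the cluster and client objects from the steps above are still in scope (the EC2 instances themselves are scaled down or terminated from the ECS console):
>>> cluster.scale(0)   # stop all Dask workers but keep the cluster definition
>>> # ...or shut the Dask cluster down entirely:
>>> client.close()
>>> cluster.close()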