4 Things I Like About Amazon SageMaker

- The managed Notebooks
- Deployments with JumpStart
- Deploying models with CDK
- Cost optimizations with Spot instances
- What I'm still missing


Generative AI has taken the world by storm, and cloud providers' machine learning services have become increasingly popular. Amazon SageMaker is one of the leading ML platforms, and I recently decided to explore it out of curiosity. In this post, I'll share my experience and highlight the four things I loved about SageMaker (and a few I'm still missing).

Amazon SageMaker is a managed machine learning service from AWS that provides tools for building, training, and deploying machine learning models at scale.

Whether you're a machine learning engineer or just enjoy hacking on AWS, I hope you'll find this post informative and interesting.

The managed Notebooks

I was able to spin up an ml.g5.2xlarge instance in less than two minutes and have my model fully up and running in five. I could literally transform images of Obama into an anime character or a cyborg while waiting for my next build to finish. This feature is awesome and showcases the speed and efficiency of SageMaker's managed notebooks.

Another feature of SageMaker that impressed me is the recent launch of shared spaces, which lets multiple users collaborate on notebooks concurrently and in real time. This is great for data science teams, although I haven't had a chance to try it myself.

Deployments with JumpStart

I was experimenting with the AlexaTM 20B model; just look how easy the deployment is.

First, go to SageMaker Studio, open JumpStart, and select a model (I'll go with AlexaTM 20B).

After choosing the model, you click the deploy button (changing the configuration if you need to) and wait a few minutes for the model to be ready.

I like how seamless the experience was. SageMaker provides a "plug and play" notebook to test your endpoint, which makes it incredibly easy to experiment with your model.
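The same click-through flow can also be scripted with the SageMaker Python SDK's JumpStart classes. A minimal sketch, assuming AWS credentials are configured; the model ID and instance type are illustrative, so check the JumpStart catalog for the exact ID of the model you want:

```python
# Sketch: deploying a JumpStart model programmatically with the
# SageMaker Python SDK (`pip install sagemaker`). The model ID and
# instance type below are illustrative assumptions.

def jumpstart_args(model_id: str, instance_type: str) -> dict:
    """Collect the deployment arguments in one inspectable place."""
    return {
        "model_id": model_id,
        "instance_type": instance_type,
        "initial_instance_count": 1,
    }

def deploy_jumpstart(model_id: str, instance_type: str = "ml.g5.2xlarge"):
    """Deploy the model to a real-time endpoint and return a Predictor."""
    from sagemaker.jumpstart.model import JumpStartModel  # lazy import: needs AWS setup

    args = jumpstart_args(model_id, instance_type)
    model = JumpStartModel(model_id=args["model_id"])
    return model.deploy(
        initial_instance_count=args["initial_instance_count"],
        instance_type=args["instance_type"],
    )

# Usage (runs against your AWS account and incurs cost):
# predictor = deploy_jumpstart("huggingface-textgeneration1-gpt-j-6b")
# print(predictor.predict({"text_inputs": "Hello, SageMaker"}))
```

The notebook that JumpStart generates for you does essentially the same thing, with the model ID pre-filled.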

Deploying models with CDK

Deploying a model as IaC with the CDK requires some setup work, such as creating a Docker container that will be used by SageMaker and adding the model.tar.gz. However, once you've completed those steps, the deployment process is straightforward.

➜ cdk bootstrap
⏳ Bootstrapping environment aws://0123456789/us-west-2...
✅ Environment aws://0123456789/us-west-2 bootstrapped.

➜ cdk deploy
✨ Synthesis time: 9.38s

SagemakerBlogPythonStack: constructing assets...

✅ SagemakerBlogPythonStack

✨ Deployment time: 587.42s
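Under the hood, the stack boils down to three L1 constructs: a model pointing at the container image and model artifact, an endpoint config, and the endpoint itself. A rough sketch in CDK (Python), assuming `aws-cdk-lib` is installed; the image URI, bucket, and names are placeholders, not the exact code from my repo:

```python
# Sketch of a SageMaker endpoint stack in CDK (Python).
# The image URI and model data URL are placeholders.

def endpoint_config_props(model_name: str, instance_type: str) -> dict:
    """Pure helper: the production-variant settings for the endpoint config."""
    return {
        "production_variants": [{
            "model_name": model_name,
            "variant_name": "AllTraffic",
            "initial_instance_count": 1,
            "instance_type": instance_type,
            "initial_variant_weight": 1.0,
        }]
    }

def build_stack(app, stack_id: str = "SagemakerBlogPythonStack"):
    from aws_cdk import Stack  # lazy imports: need aws-cdk-lib installed
    from aws_cdk import aws_iam as iam
    from aws_cdk import aws_sagemaker as sagemaker

    stack = Stack(app, stack_id)
    role = iam.Role(
        stack, "ExecutionRole",
        assumed_by=iam.ServicePrincipal("sagemaker.amazonaws.com"),
        managed_policies=[iam.ManagedPolicy.from_aws_managed_policy_name(
            "AmazonSageMakerFullAccess")],
    )
    model = sagemaker.CfnModel(
        stack, "Model",
        execution_role_arn=role.role_arn,
        primary_container=sagemaker.CfnModel.ContainerDefinitionProperty(
            image="<account>.dkr.ecr.us-west-2.amazonaws.com/<repo>:<tag>",
            model_data_url="s3://<bucket>/model.tar.gz",
        ),
    )
    props = endpoint_config_props(model.attr_model_name, "ml.g4dn.xlarge")
    config = sagemaker.CfnEndpointConfig(
        stack, "EndpointConfig",
        production_variants=[
            sagemaker.CfnEndpointConfig.ProductionVariantProperty(**variant)
            for variant in props["production_variants"]
        ],
    )
    sagemaker.CfnEndpoint(
        stack, "Endpoint",
        endpoint_config_name=config.attr_endpoint_config_name,
    )
    return stack
```

The role here uses the broad `AmazonSageMakerFullAccess` managed policy for brevity; in practice you'd scope it down to the specific ECR repository and S3 prefix.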

I used the Hugging Face GPT-J model, with the model_data taken from their public S3 bucket: s3://huggingface-sagemaker-models/transformers/4.12.3/pytorch/1.9.1/gpt-j/model.tar.gz. For the image, I used a standard Hugging Face PyTorch inference image from their public ECR repository: arn:aws:ecr:us-west-2:763104351884:repository/huggingface-pytorch-inference. Note that the model's execution role needs sufficient ECR and S3 permissions in order to pull the image and read the model data correctly.
As GPT-J is "only" 6B parameters, an ml.g4dn.xlarge was sufficient. I also added auto-scaling to my endpoint, which was really cool to see in action.
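Endpoint auto-scaling is configured through Application Auto Scaling rather than SageMaker itself. A sketch with boto3, where the endpoint name, variant name, and limits are example values I've chosen for illustration:

```python
# Sketch: target-tracking auto-scaling for a SageMaker endpoint variant
# via Application Auto Scaling (boto3). Names and limits are examples.

def scaling_policy(target_invocations: float = 100.0) -> dict:
    """Target-tracking config: scale on invocations per instance."""
    return {
        "TargetValue": target_invocations,
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "SageMakerVariantInvocationsPerInstance"
        },
        "ScaleInCooldown": 300,  # seconds to wait before removing instances
        "ScaleOutCooldown": 60,  # seconds to wait before adding instances
    }

def enable_autoscaling(endpoint_name: str, variant_name: str = "AllTraffic",
                       min_capacity: int = 1, max_capacity: int = 3) -> None:
    import boto3  # lazy import: needs AWS credentials

    client = boto3.client("application-autoscaling")
    resource_id = f"endpoint/{endpoint_name}/variant/{variant_name}"
    client.register_scalable_target(
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        MinCapacity=min_capacity,
        MaxCapacity=max_capacity,
    )
    client.put_scaling_policy(
        PolicyName=f"{endpoint_name}-invocations-target",
        ServiceNamespace="sagemaker",
        ResourceId=resource_id,
        ScalableDimension="sagemaker:variant:DesiredInstanceCount",
        PolicyType="TargetTrackingScaling",
        TargetTrackingScalingPolicyConfiguration=scaling_policy(),
    )
```

With a target-tracking policy like this, the endpoint adds instances when invocations per instance exceed the target and scales back in when traffic drops.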

In less than 10 minutes my model was served from an endpoint, ready to generate text.

Check out my code on GitHub for more context.

Cost optimizations with Spot instances

If you're not familiar with them, Spot Instances are EC2 instances that provide access to unused EC2 capacity at a significantly discounted rate. Amazon offers these instances to help EC2 users optimize costs by taking advantage of idle capacity, instead of relying solely on the more expensive On-Demand Instances.

With Managed Spot Training in SageMaker, you can specify which training jobs use Spot instances and set a stopping condition that determines how long SageMaker waits for a job to run on Amazon EC2 Spot instances. Additionally, you can set up checkpoints for your training jobs, and SageMaker will automatically restart a job from where it left off if it gets interrupted.

It's very easy to set up; take a look at this example.
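As a sketch of what that setup looks like with the SageMaker Python SDK: enabling Spot is essentially three estimator arguments plus a checkpoint path. The entry point, role, bucket, and framework versions below are placeholder assumptions:

```python
# Sketch: Managed Spot Training with the SageMaker Python SDK.
# Entry point, role, bucket, and versions are placeholders.

def spot_settings(max_run: int = 3600, max_wait: int = 7200,
                  checkpoint_s3_uri: str = "s3://<bucket>/checkpoints/") -> dict:
    """The knobs that turn a normal training job into a Spot one."""
    assert max_wait >= max_run, "max_wait must cover max_run plus Spot wait time"
    return {
        "use_spot_instances": True,
        "max_run": max_run,    # training time limit (seconds)
        "max_wait": max_wait,  # total limit, including waiting for Spot capacity
        "checkpoint_s3_uri": checkpoint_s3_uri,  # resume point after interruption
    }

def run_spot_training(role_arn: str):
    from sagemaker.pytorch import PyTorch  # lazy import: needs AWS setup

    estimator = PyTorch(
        entry_point="train.py",  # the script must save/load its own checkpoints
        role=role_arn,
        instance_count=1,
        instance_type="ml.g4dn.xlarge",
        framework_version="1.13",
        py_version="py39",
        **spot_settings(),
    )
    estimator.fit()
    return estimator
```

The one caveat is that checkpointing is cooperative: your training script has to write checkpoints to the local checkpoint directory (and resume from them on start) for the automatic restart to actually pick up where it left off.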

What I'm still missing

Fortunately, I recently learned about the SageMaker JumpStart Foundation Models announcement, which is currently in closed preview and may address this need. Considering the recent announcement of the expanded long-term strategic partnership between Amazon Web Services (AWS) and Hugging Face, along with the strong collaborations with Cohere, AI21 Labs, and Stability AI, we can only imagine the possibilities and anticipate new breakthroughs in the field of AI and ML.

In my opinion, it would be helpful to have more low-code/no-code AI/ML tools in SageMaker. While SageMaker Canvas does offer useful automatic data analysis tools, I think SageMaker's no-code capabilities could be enhanced and expanded to include features for computer vision and generative AI.


What are your thoughts on this topic?
Let us know in the comments below.


