Amazon SageMaker

Learn how Amazon Health Services improved discovery in Amazon search using AWS ML and gen AI

Healthcare discovery on ecommerce domains presents unique challenges that traditional product search wasn’t designed to handle. Unlike searching for books or electronics, healthcare queries involve complex relationships between symptoms, conditions, treatments, and services, requiring sophisticated understanding of medical terminology and customer intent. This challenge became particularly relevant for Amazon as we expanded beyond traditional ecommerce […]

Fine-tune OpenAI GPT-OSS models using Amazon SageMaker HyperPod recipes

This post is the second part of the GPT-OSS series focusing on model customization with Amazon SageMaker AI. In Part 1, we demonstrated fine-tuning GPT-OSS models using open source Hugging Face libraries with SageMaker training jobs, which support distributed multi-GPU and multi-node configurations, so you can spin up high-performance clusters on demand. In this post,

Enhance AI agents using predictive ML models with Amazon SageMaker AI and Model Context Protocol (MCP)

Machine learning (ML) has evolved from an experimental phase to becoming an integral part of business operations. Organizations now actively deploy ML models for precise sales forecasting, customer segmentation, and churn prediction. While traditional ML continues to transform business processes, generative AI has emerged as a revolutionary force, introducing powerful and accessible tools that reshape

Optimizing Salesforce’s model endpoints with Amazon SageMaker AI inference components

This post is a joint collaboration between Salesforce and AWS and is being cross-published on both the Salesforce Engineering Blog and the AWS Machine Learning Blog. The Salesforce AI Platform Model Serving team is dedicated to developing and managing services that power large language models (LLMs) and other AI workloads within Salesforce. Their main focus

Fine-tune OpenAI GPT-OSS models on Amazon SageMaker AI using Hugging Face libraries

Released on August 5, 2025, OpenAI’s GPT-OSS models, gpt-oss-20b and gpt-oss-120b, are now available on AWS through Amazon SageMaker AI and Amazon Bedrock. These pre-trained, text-only Transformer models are built on a Mixture-of-Experts (MoE) architecture that activates only a subset of parameters per token, delivering high reasoning performance while reducing compute costs. They specialize in
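The excerpt above notes that a Mixture-of-Experts (MoE) architecture activates only a subset of parameters per token. A minimal sketch of the idea behind MoE top-k routing, using toy scalar "experts" rather than the actual GPT-OSS implementation:

```python
# Toy illustration of MoE top-k routing: a router scores every expert for a
# token, but only the k highest-scoring experts are actually evaluated, which
# is how MoE models keep per-token compute below total parameter count.
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def moe_layer(token, experts, router_scores, k=2):
    """Route a token to the top-k experts and mix their outputs.

    token: toy scalar input
    experts: list of callables standing in for expert feed-forward networks
    router_scores: one router logit per expert for this token
    """
    # Pick the k experts with the highest router logits
    topk = sorted(range(len(experts)),
                  key=lambda i: router_scores[i], reverse=True)[:k]
    # Renormalize gate weights over the selected experts only
    gates = softmax([router_scores[i] for i in topk])
    # Only the selected experts run; the rest stay inactive for this token
    return sum(g * experts[i](token) for g, i in zip(gates, topk))

# Four toy "experts"; only 2 of them are evaluated for this token.
experts = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3, lambda x: x ** 2]
out = moe_layer(3.0, experts, router_scores=[0.1, 2.0, -1.0, 1.5], k=2)
```

Here the router selects experts 1 and 3 and blends their outputs by the renormalized gate weights; experts 0 and 2 are never called, which is the source of the compute savings the excerpt describes.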

GPT OSS models from OpenAI are now available on SageMaker JumpStart

Today, we are excited to announce the availability of OpenAI’s new open-weight GPT-OSS models, gpt-oss-120b and gpt-oss-20b, in Amazon SageMaker JumpStart. With this launch, you can now deploy OpenAI’s newest reasoning models to build, experiment, and responsibly scale your generative AI ideas on AWS. In this post, we demonstrate how

Introducing AWS Batch Support for Amazon SageMaker Training jobs

Picture this: your machine learning (ML) team has a promising model to train and experiments to run for their generative AI project, but they’re waiting for GPU availability. The ML scientists spend time monitoring instance availability, coordinating with teammates over shared resources, and managing infrastructure allocation. Simultaneously, your infrastructure administrators spend significant time trying to

Evaluating generative AI models with Amazon Nova LLM-as-a-Judge on Amazon SageMaker AI

Evaluating the performance of large language models (LLMs) goes beyond statistical metrics like perplexity or bilingual evaluation understudy (BLEU) scores. For most real-world generative AI scenarios, it’s crucial to understand whether a model is producing better outputs than a baseline or an earlier iteration. This is especially important for applications such as summarization, content generation,
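The comparison against a baseline that the excerpt describes is typically run as pairwise LLM-as-a-judge evaluation. A minimal sketch, where the `judge` function is a hypothetical stand-in for a call to a judge model such as Amazon Nova (here it simply prefers the longer answer, purely for illustration):

```python
# Pairwise LLM-as-a-judge sketch: for each prompt, a judge compares a
# candidate answer against a baseline answer and emits a verdict; the win
# rate over non-tie verdicts summarizes which model performs better.

def judge(prompt, candidate, baseline):
    """Stand-in judge that prefers the longer answer. A real judge would
    be an LLM call returning 'candidate', 'baseline', or 'tie'."""
    if len(candidate) > len(baseline):
        return "candidate"
    if len(candidate) < len(baseline):
        return "baseline"
    return "tie"

def win_rate(pairs):
    """pairs: iterable of (prompt, candidate_answer, baseline_answer).
    Returns the fraction of decided (non-tie) comparisons the candidate wins."""
    verdicts = [judge(p, c, b) for p, c, b in pairs]
    wins = verdicts.count("candidate")
    losses = verdicts.count("baseline")
    decided = wins + losses
    return wins / decided if decided else 0.0

pairs = [
    ("Summarize A", "a detailed summary", "short"),
    ("Summarize B", "ok", "a fuller baseline answer"),
    ("Summarize C", "another strong summary", "brief"),
]
rate = win_rate(pairs)  # candidate wins 2 of the 3 decided comparisons
```

Unlike perplexity or BLEU, this protocol directly answers the question the excerpt raises: whether the new model's outputs beat the baseline's on the tasks that matter.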

Building enterprise-scale RAG applications with Amazon S3 Vectors and DeepSeek R1 on Amazon SageMaker AI

Organizations are adopting large language models (LLMs), such as DeepSeek R1, to transform business processes, enhance customer experiences, and drive innovation at unprecedented speed. However, standalone LLMs have key limitations such as hallucinations, outdated knowledge, and no access to proprietary data. Retrieval Augmented Generation (RAG) addresses these gaps by combining semantic search with generative AI,
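The RAG pattern the excerpt describes pairs semantic retrieval with generation. A minimal sketch of the retrieval step, using toy bag-of-words embeddings in place of a real embedding model and vector store such as Amazon S3 Vectors (all names here are illustrative):

```python
# Minimal RAG retrieval sketch: embed documents and a query, rank documents
# by cosine similarity, and prepend the best match to the LLM prompt so the
# model answers from retrieved context rather than stale parametric memory.
import math
from collections import Counter

def embed(text):
    """Toy embedding: lowercase bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "SageMaker AI hosts model endpoints for inference",
    "S3 Vectors stores embeddings for semantic search",
    "DeepSeek R1 is a reasoning large language model",
]
context = retrieve("where are embeddings stored for search", docs)[0]
prompt = f"Answer using this context:\n{context}\n\nQuestion: ..."
```

Grounding the prompt in retrieved passages is what addresses the hallucination, staleness, and proprietary-data gaps the excerpt lists for standalone LLMs.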

Build secure RAG applications with AWS serverless data lakes

Data is your generative AI differentiator, and successful generative AI implementation depends on a robust data strategy incorporating a comprehensive data governance approach. Traditional data architectures often struggle to meet the unique demands of generative AI applications. An effective generative AI data strategy requires several key components, such as seamless integration of diverse data sources,
