Amazon SageMaker

How Apoidea Group enhances visual information extraction from banking documents with multimodal models using LLaMA-Factory on Amazon SageMaker HyperPod

This post is co-written with Ken Tsui, Edward Tsoi and Mickey Yip from Apoidea Group. The banking industry has long struggled with the inefficiencies associated with repetitive processes such as information extraction, document review, and auditing. These tasks, which require significant human resources, slow down critical operations such as Know Your Customer (KYC) procedures, loan […]

How Qualtrics built Socrates: An AI platform powered by Amazon SageMaker and Amazon Bedrock

This post is co-authored by Jay Kshirsagar and Ronald Quan from Qualtrics. The content and opinions in this post are those of the third-party author and AWS is not responsible for the content or accuracy of this post. Qualtrics, founded in 2002, is a pioneering software company that has spent over two decades creating exceptional

Customize DeepSeek-R1 671b model using Amazon SageMaker HyperPod recipes – Part 2

This post is the second part of the DeepSeek series focusing on model customization with Amazon SageMaker HyperPod recipes (or recipes for brevity). In Part 1, we demonstrated the performance and ease of fine-tuning DeepSeek-R1 distilled models using these recipes. In this post, we use the recipes to fine-tune the original DeepSeek-R1 671b parameter model.

Responsible AI in action: How Data Reply red teaming supports generative AI safety on AWS

Generative AI is rapidly reshaping industries worldwide, empowering businesses to deliver exceptional customer experiences, streamline processes, and push innovation at an unprecedented scale. However, amidst the excitement, critical questions around the responsible use and implementation of such powerful technology have started to emerge. Although responsible AI has been a key focus for the industry over

InterVision accelerates AI development using AWS LLM League and Amazon SageMaker AI

Cities and local governments are continuously seeking ways to enhance their non-emergency services, recognizing that intelligent, scalable contact center solutions play a crucial role in improving citizen experiences. InterVision Systems, LLC (InterVision), an AWS Premier Tier Services Partner and Amazon Connect Service Delivery Partner, has been at the forefront of this transformation, with their contact

Build an AI-powered document processing platform with open source NER model and LLM on Amazon SageMaker

Archival data in research institutions and national laboratories represents a vast repository of historical knowledge, yet much of it remains inaccessible due to factors like limited metadata and inconsistent labeling. Traditional keyword-based search mechanisms are often insufficient for locating relevant documents efficiently, requiring extensive manual review to extract meaningful insights. To address these challenges, a

Supercharge your LLM performance with Amazon SageMaker Large Model Inference container v15

Today, we’re excited to announce the launch of Amazon SageMaker Large Model Inference (LMI) container v15, powered by vLLM 0.8.4 with support for the vLLM V1 engine. This version now supports the latest open-source models, such as Meta’s Llama 4 models Scout and Maverick, Google’s Gemma 3, Alibaba’s Qwen, Mistral AI, DeepSeek-R1, and many more.

How Salesforce achieves high-performance model deployment with Amazon SageMaker AI

This post is a joint collaboration between Salesforce and AWS and is being cross-published on both the Salesforce Engineering Blog and the AWS Machine Learning Blog. The Salesforce AI Model Serving team is working to push the boundaries of natural language processing and AI capabilities for enterprise applications. Their key focus areas include optimizing large

Optimizing Mixtral 8x7B on Amazon SageMaker with AWS Inferentia2

Organizations are constantly seeking ways to harness the power of advanced large language models (LLMs) to enable a wide range of applications such as text generation, summarization, question answering, and many others. As these models grow more powerful and capable, deploying them in production environments while optimizing performance and cost-efficiency becomes more challenging. Amazon Web Services

Reduce ML training costs with Amazon SageMaker HyperPod

Training a frontier model is highly compute-intensive, requiring a distributed system of hundreds or thousands of accelerated instances running for several weeks or months to complete a single job. For example, pre-training the Llama 3 70B model on 15 trillion training tokens took 6.5 million H100 GPU hours. On 256 Amazon EC2 P5 instances (p5.48xlarge,
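The GPU-hour figure above translates to wall-clock time with simple arithmetic. A minimal back-of-envelope sketch, assuming the 6.5 million H100 GPU-hour number from the excerpt and the fact that each p5.48xlarge instance carries 8 NVIDIA H100 GPUs:

```python
# Back-of-envelope: wall-clock time to pre-train Llama 3 70B
# on 256 p5.48xlarge instances at 6.5M H100 GPU-hours total.
total_gpu_hours = 6_500_000
instances = 256
gpus_per_instance = 8  # each p5.48xlarge has 8 NVIDIA H100 GPUs

total_gpus = instances * gpus_per_instance       # 2048 GPUs
wall_clock_hours = total_gpu_hours / total_gpus  # ~3174 hours
wall_clock_days = wall_clock_hours / 24          # ~132 days

print(f"{total_gpus} GPUs -> {wall_clock_hours:.0f} h (~{wall_clock_days:.0f} days)")
```

This simplified estimate assumes perfect scaling with no communication overhead or failures; real multi-month training runs also pay for restarts and stragglers, which is part of what HyperPod's resiliency features target.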
