Amazon EC2


How Amazon scaled Rufus by building multi-node inference using AWS Trainium chips and vLLM

At Amazon, our team builds Rufus, a generative AI-powered shopping assistant that serves millions of customers at immense scale. However, deploying Rufus at scale introduces significant challenges that must be carefully navigated. Rufus is powered by a custom-built large language model (LLM). As the model’s complexity increased, we prioritized developing scalable multi-node inference capabilities that […]

How Amazon scaled Rufus by building multi-node inference using AWS Trainium chips and vLLM Read More »
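The post above concerns multi-node inference, whose core building block is tensor parallelism: splitting a model's large matrix multiplications across devices or nodes so no single accelerator must hold the whole layer. As a hedged illustration only (a toy numpy sketch, not Amazon's Rufus code or vLLM's implementation), the idea can be shown by sharding one weight matrix column-wise across two "workers" and concatenating their partial outputs:

```python
import numpy as np

# Illustrative sketch: the core idea behind tensor parallelism, which
# engines like vLLM use to split large matmuls across devices or nodes.
# Shapes and names here are invented for the demo.

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))    # batch of 4 activations, hidden size 8
W = rng.standard_normal((8, 16))   # weight matrix, output size 16

# Single-device reference result.
y_ref = x @ W

# "Two-way tensor parallelism": split W column-wise, let each worker
# compute its local matmul, then concatenate the partial outputs.
W_shards = np.split(W, 2, axis=1)            # two 8 x 8 shards
partials = [x @ shard for shard in W_shards] # one matmul per worker
y_tp = np.concatenate(partials, axis=1)

assert np.allclose(y_ref, y_tp)              # identical to the unsharded result
```

In a real multi-node deployment the concatenation step becomes a collective communication (an all-gather) between accelerators, which is why interconnect bandwidth matters at scale.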

AWS AI infrastructure with NVIDIA Blackwell: Two powerful compute solutions for the next frontier of AI

Imagine a system that can explore multiple approaches to complex problems, drawing on its understanding of vast amounts of data, from scientific datasets to source code to business documents, and reasoning through the possibilities in real time. This lightning-fast reasoning isn’t waiting on the horizon. It’s happening today in our customers’ AI production environments. […]

AWS AI infrastructure with NVIDIA Blackwell: Two powerful compute solutions for the next frontier of AI Read More »

Host concurrent LLMs with LoRAX

Businesses are increasingly seeking domain-adapted and specialized foundation models (FMs) to meet specific needs in areas such as document summarization, industry-specific adaptations, and technical code generation and advisory. Generative AI models now offer tailored experiences with minimal technical expertise required, and organizations are increasingly using these powerful models to drive innovation […]

Host concurrent LLMs with LoRAX Read More »
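What makes hosting many concurrent fine-tuned LLMs practical is the low-rank adaptation (LoRA) math that servers such as LoRAX build on: each fine-tune stores only two small matrices A and B, and the effective weight is W + (alpha / r) · B·A, so many adapters can share one base model. The sketch below illustrates that arithmetic with toy numpy shapes; all names and values are hypothetical and this is not LoRAX's actual API:

```python
import numpy as np

# Toy sketch of LoRA adapter multiplexing: one shared base weight W,
# plus per-adapter low-rank pairs (A, B). Effective weight per request:
#   W_eff = W + (alpha / r) * (B @ A)
# Shapes and adapter names are illustrative only.

rng = np.random.default_rng(1)
d_in, d_out, r, alpha = 8, 8, 2, 4

W = rng.standard_normal((d_out, d_in))          # shared base weight

def make_adapter():
    A = rng.standard_normal((r, d_in)) * 0.01   # down-projection (r x d_in)
    B = rng.standard_normal((d_out, r)) * 0.01  # up-projection (d_out x r)
    return A, B

adapters = {"summarize": make_adapter(), "codegen": make_adapter()}

def forward(x, adapter_id=None):
    """Base-model matmul, plus one adapter's low-rank delta if requested."""
    y = W @ x
    if adapter_id is not None:
        A, B = adapters[adapter_id]
        # Apply the delta as two small matmuls; the full d_out x d_in
        # delta matrix is never materialized.
        y = y + (alpha / r) * (B @ (A @ x))
    return y

x = rng.standard_normal(d_in)
# One set of base weights serves every adapter; only tiny A/B pairs differ.
assert not np.allclose(forward(x, "summarize"), forward(x, "codegen"))
```

Because each adapter adds only r·(d_in + d_out) parameters per adapted layer, a single GPU can keep many adapters resident and route each request to the right one.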

Optimizing Mixtral 8x7B on Amazon SageMaker with AWS Inferentia2

Organizations are constantly seeking ways to harness the power of advanced large language models (LLMs) to enable a wide range of applications such as text generation, summarization, question answering, and many others. As these models grow more powerful and capable, deploying them in production environments while optimizing performance and cost-efficiency becomes more challenging. Amazon Web Services […]

Optimizing Mixtral 8x7B on Amazon SageMaker with AWS Inferentia2 Read More »
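Mixtral 8x7B, the model named above, is a sparse mixture-of-experts (MoE) architecture: a router scores eight experts per token, only the top two run, and their outputs are combined with softmax-renormalized weights. As a hedged illustration (toy numpy dimensions and made-up "experts", not the Mixtral implementation), top-2 routing looks like this:

```python
import numpy as np

# Sketch of top-2 mixture-of-experts routing, the technique behind
# models such as Mixtral 8x7B. Dimensions and expert functions are
# toy values for illustration only.

rng = np.random.default_rng(2)
n_experts, d = 8, 4

router_W = rng.standard_normal((n_experts, d))                      # router projection
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]   # toy linear "experts"

def moe_forward(x, top_k=2):
    logits = router_W @ x                     # one routing score per expert
    top = np.argsort(logits)[-top_k:]         # indices of the top-k experts
    weights = np.exp(logits[top] - logits[top].max())
    weights /= weights.sum()                  # softmax over the selected experts only
    # Only the selected experts run, so per-token compute scales with
    # k, not with the total number of experts.
    return sum(w * (experts[i] @ x) for w, i in zip(weights, top))

x = rng.standard_normal(d)
y = moe_forward(x)
assert y.shape == (d,)
```

This sparsity is why an 8x7B MoE can match a much larger dense model's quality while activating only a fraction of its parameters per token, which in turn shapes how it is compiled and sharded on accelerators like Inferentia2.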