Generative AI

Best practices for Meta Llama 3.2 multimodal fine-tuning on Amazon Bedrock

Multimodal fine-tuning represents a powerful approach for customizing foundation models (FMs) to excel at specific tasks that involve both visual and textual information. Although base multimodal models offer impressive general capabilities, they often fall short when faced with specialized visual tasks, domain-specific content, or particular output formatting requirements. Fine-tuning addresses these limitations by adapting models […]

Best practices for Meta Llama 3.2 multimodal fine-tuning on Amazon Bedrock Read More »
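
As a companion to the fine-tuning post above, here is a minimal sketch of what kicking off a Bedrock customization job looks like with boto3. The model ID, S3 paths, IAM role, and hyperparameter names are placeholders (hyperparameter names differ by base model), so treat this as an illustration rather than the post's recipe.

```python
# Minimal sketch: starting a Bedrock fine-tuning job with boto3.
# All identifiers below are placeholders, not values from the post.
import boto3

bedrock = boto3.client("bedrock", region_name="us-west-2")

response = bedrock.create_model_customization_job(
    jobName="llama32-multimodal-ft-demo",                     # unique job name
    customModelName="llama32-11b-charts-v1",                  # name for the resulting model
    roleArn="arn:aws:iam::111122223333:role/BedrockFtRole",   # placeholder IAM role
    baseModelIdentifier="meta.llama3-2-11b-instruct-v1:0",    # assumed Llama 3.2 model ID
    customizationType="FINE_TUNING",
    trainingDataConfig={"s3Uri": "s3://my-bucket/train.jsonl"},  # image+text records
    outputDataConfig={"s3Uri": "s3://my-bucket/output/"},
    hyperParameters={                                          # names vary by base model
        "epochCount": "3",
        "learningRate": "0.00001",
    },
)
print(response["jobArn"])
```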

Amazon Bedrock Model Distillation: Boost function calling accuracy while reducing cost and latency

Amazon Bedrock Model Distillation is generally available, and it addresses the fundamental challenge many organizations face when deploying generative AI: how to maintain high performance while reducing costs and latency. This technique transfers knowledge from larger, more capable foundation models (FMs) that act as teachers to smaller, more efficient models (students), creating specialized models that […]

Amazon Bedrock Model Distillation: Boost function calling accuracy while reducing cost and latency Read More »
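
To make the teacher-to-student idea concrete, the sketch below captures a larger model's answers to a handful of real prompts and writes them to JSONL, the kind of prompt/response data a distillation job learns from. The teacher model ID and the record layout are assumptions; Bedrock Model Distillation has its own input formats and can also generate teacher responses for you.

```python
# Illustrative teacher-response collection for distillation-style training data.
# Model ID and JSONL layout are assumptions, not the exact Bedrock job format.
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")
TEACHER_MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # placeholder teacher

prompts = [
    "List the AWS regions mentioned in this ticket: ...",
    "Classify this support request as billing, technical, or account: ...",
]

with open("teacher_responses.jsonl", "w") as f:
    for prompt in prompts:
        resp = runtime.converse(
            modelId=TEACHER_MODEL_ID,
            messages=[{"role": "user", "content": [{"text": prompt}]}],
        )
        answer = resp["output"]["message"]["content"][0]["text"]
        f.write(json.dumps({"prompt": prompt, "completion": answer}) + "\n")
```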

FloQast builds an AI-powered accounting transformation solution with Anthropic’s Claude 3 on Amazon Bedrock

With the advent of generative AI solutions, a paradigm shift is underway across industries, driven by organizations embracing foundation models (FMs) to unlock unprecedented opportunities. Amazon Bedrock has emerged as the preferred choice for numerous customers seeking to innovate and launch generative AI applications, leading to an exponential surge in demand for model inference capabilities.

FloQast builds an AI-powered accounting transformation solution with Anthropic’s Claude 3 on Amazon Bedrock Read More »

Insights in implementing production-ready solutions with generative AI

As generative AI revolutionizes industries, organizations are eager to harness its potential. However, the journey from production-ready solutions to full-scale implementation can present distinct operational and technical considerations. This post explores key insights and lessons learned from AWS customers in Europe, Middle East, and Africa (EMEA) who have successfully navigated this transition, providing a roadmap […]

Insights in implementing production-ready solutions with generative AI Read More »

Responsible AI in action: How Data Reply red teaming supports generative AI safety on AWS

Generative AI is rapidly reshaping industries worldwide, empowering businesses to deliver exceptional customer experiences, streamline processes, and push innovation at an unprecedented scale. However, amidst the excitement, critical questions around the responsible use and implementation of such powerful technology have started to emerge. Although responsible AI has been a key focus for the industry over […]

Responsible AI in action: How Data Reply red teaming supports generative AI safety on AWS Read More »

Improve Amazon Nova migration performance with data-aware prompt optimization

In the era of generative AI, new large language models (LLMs) are continually emerging, each with unique capabilities, architectures, and optimizations. Among these, Amazon Nova foundation models (FMs) deliver frontier intelligence and industry-leading cost-performance, available exclusively on Amazon Bedrock. Since its launch in 2024, generative AI practitioners, including teams within Amazon, have started transitioning […]

Improve Amazon Nova migration performance with data-aware prompt optimization Read More »
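
One way to read "data-aware prompt optimization" is choosing among prompt variants based on how they score on a labeled sample of your own data. The sketch below shows that selection loop in its simplest form; the model ID and scoring rule are placeholders, and the post's actual method may be more sophisticated.

```python
# Conceptual sketch of data-aware prompt selection: score candidate templates
# against a labeled sample and keep the best performer. Values are placeholders.
import boto3

runtime = boto3.client("bedrock-runtime")
MODEL_ID = "amazon.nova-lite-v1:0"  # assumed Nova model ID

candidates = [
    "Answer with a single word, yes or no: {question}",
    "You are a strict fact checker. Reply only 'yes' or 'no'. Question: {question}",
]
labeled_sample = [
    {"question": "Is Amazon Bedrock a managed service?", "answer": "yes"},
]

def accuracy(template: str) -> float:
    correct = 0
    for ex in labeled_sample:
        resp = runtime.converse(
            modelId=MODEL_ID,
            messages=[{"role": "user",
                       "content": [{"text": template.format(**ex)}]}],
        )
        text = resp["output"]["message"]["content"][0]["text"].strip().lower()
        correct += int(ex["answer"] in text)
    return correct / len(labeled_sample)

best = max(candidates, key=accuracy)
print("Selected prompt:", best)
```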

Customize Amazon Nova models to improve tool usage

Modern large language models (LLMs) excel in language processing but are limited by their static training data. However, as industries require more adaptive, decision-making AI, integrating tools and external APIs has become essential. This has led to the evolution and rapid rise of agentic workflows, where AI systems autonomously plan, execute, and refine tasks. Accurate […]

Customize Amazon Nova models to improve tool usage Read More »
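
For readers unfamiliar with tool use on Bedrock, the sketch below shows the basic shape of a tool definition and a Converse API call that lets the model request it. The Nova model ID and the get_weather tool are illustrative stand-ins, not the post's own setup.

```python
# Minimal tool-use sketch with the Bedrock Converse API.
# Model ID and the get_weather tool are made-up examples.
import boto3

runtime = boto3.client("bedrock-runtime")
MODEL_ID = "amazon.nova-lite-v1:0"  # assumed Nova model ID

tool_config = {
    "tools": [{
        "toolSpec": {
            "name": "get_weather",
            "description": "Look up current weather for a city.",
            "inputSchema": {"json": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            }},
        }
    }]
}

resp = runtime.converse(
    modelId=MODEL_ID,
    messages=[{"role": "user", "content": [{"text": "What's the weather in Seattle?"}]}],
    toolConfig=tool_config,
)

# When the model decides to call the tool, stopReason is "tool_use" and the
# response contains a toolUse content block with the arguments it chose.
if resp["stopReason"] == "tool_use":
    for block in resp["output"]["message"]["content"]:
        if "toolUse" in block:
            print(block["toolUse"]["name"], block["toolUse"]["input"])
```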

Enterprise-grade natural language to SQL generation using LLMs: Balancing accuracy, latency, and scale

This blog post is co-written with Renuka Kumar and Thomas Matthew from Cisco. Enterprise data by its very nature spans diverse data domains, such as security, finance, product, and HR. Data across these domains is often maintained across disparate data environments (such as Amazon Aurora, Oracle, and Teradata), with each managing hundreds or perhaps thousands […]

Enterprise-grade natural language to SQL generation using LLMs: Balancing accuracy, latency, and scale Read More »
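
A stripped-down version of the text-to-SQL pattern looks like this: include the relevant table definitions in the prompt, ask for SQL only, and apply a read-only guardrail before anything executes. The model ID and schema are placeholders; the production concerns the post covers (schema linking at scale, latency, evaluation) are not shown.

```python
# Simplified schema-grounded text-to-SQL sketch with a read-only guardrail.
# Model ID and schema are placeholders, not the post's production pipeline.
import boto3

runtime = boto3.client("bedrock-runtime")
MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"  # placeholder model

SCHEMA = """
CREATE TABLE employees (id INT, name VARCHAR(100), dept VARCHAR(50), salary INT);
CREATE TABLE departments (dept VARCHAR(50), location VARCHAR(50));
"""

def nl_to_sql(question: str) -> str:
    prompt = (
        "You translate questions into SQL for the schema below.\n"
        f"{SCHEMA}\n"
        "Return only a single SELECT statement, no explanation.\n"
        f"Question: {question}"
    )
    resp = runtime.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
    )
    sql = resp["output"]["message"]["content"][0]["text"].strip()
    if not sql.lower().startswith("select"):  # basic read-only guardrail
        raise ValueError(f"Refusing to run non-SELECT statement: {sql}")
    return sql

print(nl_to_sql("Average salary by department located in Austin"))
```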

AWS Field Experience reduced cost and delivered low latency and high performance with Amazon Nova Lite foundation model

AWS Field Experience (AFX) empowers Amazon Web Services (AWS) sales teams with generative AI solutions built on Amazon Bedrock, improving how AWS sellers and customers interact. The AFX team uses AI to automate tasks and provide intelligent insights and recommendations, streamlining workflows for both customer-facing roles and internal support functions. Their approach emphasizes operational efficiency […]

AWS Field Experience reduced cost and delivered low latency and high performance with Amazon Nova Lite foundation model Read More »

Build an AI-powered document processing platform with open source NER model and LLM on Amazon SageMaker

Archival data in research institutions and national laboratories represents a vast repository of historical knowledge, yet much of it remains inaccessible due to factors like limited metadata and inconsistent labeling. Traditional keyword-based search mechanisms are often insufficient for locating relevant documents efficiently, requiring extensive manual review to extract meaningful insights. To address these challenges, a […]

Build an AI-powered document processing platform with open source NER model and LLM on Amazon SageMaker Read More »
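
The two-stage pattern described above can be sketched as a pair of SageMaker endpoint calls: one to an open source NER model that tags entities for metadata, and one to an LLM that writes a searchable summary. Endpoint names, payload shapes, and the response format are hypothetical and depend on how the models were deployed.

```python
# Rough sketch of NER-based enrichment plus LLM summarization on SageMaker.
# Endpoint names and response formats are hypothetical placeholders.
import json
import boto3

smr = boto3.client("sagemaker-runtime")

def extract_entities(text: str):
    # Hypothetical endpoint serving an open source token-classification (NER) model.
    resp = smr.invoke_endpoint(
        EndpointName="ner-endpoint",             # placeholder endpoint name
        ContentType="application/json",
        Body=json.dumps({"inputs": text}),
    )
    return json.loads(resp["Body"].read())       # e.g. [{"entity_group": "ORG", "word": ...}]

def summarize(text: str, entities) -> str:
    # Hypothetical endpoint serving an LLM with a text-generation interface.
    payload = {
        "inputs": f"Entities: {entities}\nSummarize this document in two sentences:\n{text}",
        "parameters": {"max_new_tokens": 120},
    }
    resp = smr.invoke_endpoint(
        EndpointName="llm-endpoint",             # placeholder endpoint name
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    return json.loads(resp["Body"].read())

document = "Report on the 1987 accelerator upgrade at the national laboratory ..."
print(summarize(document, extract_entities(document)))
```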