Advanced (300)

Generate suspicious transaction report drafts for financial compliance using generative AI

Financial regulations and compliance are constantly changing, and automation of compliance reporting has emerged as a game changer in the financial industry. Amazon Web Services (AWS) generative AI solutions offer a seamless and efficient approach to automate this reporting process. The integration of AWS generative AI into the compliance framework not only enhances efficiency but […]

Build a drug discovery research assistant using Strands Agents and Amazon Bedrock

Drug discovery is a complex, time-intensive process that requires researchers to navigate vast amounts of scientific literature, clinical trial data, and molecular databases. Life science customers like Genentech and AstraZeneca are using AI agents and other generative AI tools to increase the speed of scientific discovery. Builders at these organizations are already using the fully […]

Benchmarking Amazon Nova: A comprehensive analysis through MT-Bench and Arena-Hard-Auto

Large language models (LLMs) have rapidly evolved, becoming integral to applications ranging from conversational AI to complex reasoning tasks. However, as models grow in size and capability, effectively evaluating their performance has become increasingly challenging. Traditional benchmarking metrics like perplexity and BLEU scores often fail to capture the nuances of real-world interactions, making human-aligned evaluation […]
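The excerpt above notes that perplexity often fails to capture real-world quality. A minimal sketch of how corpus perplexity is computed from per-token log-probabilities (the values below are purely illustrative, not from any Nova model) makes clear why: it only measures next-token fit, not conversational helpfulness.

```python
import math

def perplexity(token_logprobs):
    """Perplexity is exp of the mean negative log-likelihood per token."""
    nll = -sum(token_logprobs) / len(token_logprobs)
    return math.exp(nll)

# Illustrative log-probabilities a model might assign to four tokens.
logprobs = [-0.2, -1.5, -0.7, -0.1]
print(round(perplexity(logprobs), 3))  # 1.868
```

A lower value means the model found the text less surprising, but two models with similar perplexity can still differ widely on the pairwise, human-preference-style comparisons that MT-Bench and Arena-Hard-Auto target.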

Multi-tenant RAG implementation with Amazon Bedrock and Amazon OpenSearch Service for SaaS using JWT

In recent years, the emergence of large language models (LLMs) has accelerated AI adoption across various industries. However, to further augment LLMs’ capabilities and effectively use up-to-date information and domain-specific knowledge, integration with external data sources is essential. Retrieval Augmented Generation (RAG) has gained attention as an effective approach to address this challenge. RAG is […]
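The core of the multi-tenant pattern in the title is scoping retrieval by a tenant claim carried in the JWT. The sketch below shows the idea with the standard library only; the `tenant_id` claim name and the OpenSearch k-NN filter shape are assumptions for illustration, and a real implementation must verify the JWT signature rather than decode it blindly.

```python
import base64
import json

def tenant_from_jwt(token):
    """Extract a hypothetical tenant_id claim from a JWT payload.
    Signature is NOT verified here -- illustration only."""
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    claims = json.loads(base64.urlsafe_b64decode(payload_b64))
    return claims["tenant_id"]

def tenant_scoped_query(embedding, tenant_id, k=5):
    """Vector search request restricted to one tenant's documents
    (field names assumed; adjust to your index mapping)."""
    return {
        "size": k,
        "query": {
            "bool": {
                "filter": [{"term": {"tenant_id": tenant_id}}],
                "must": [{"knn": {"vector": {"vector": embedding, "k": k}}}],
            }
        },
    }

# Demo with a fabricated, unsigned token.
payload = base64.urlsafe_b64encode(
    json.dumps({"tenant_id": "acme"}).encode()).decode().rstrip("=")
fake_jwt = f"header.{payload}.signature"
print(tenant_from_jwt(fake_jwt))  # acme
```

Applying the filter inside the retrieval query, rather than after retrieval, keeps one tenant's documents from ever entering another tenant's RAG context.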

Amazon Bedrock Knowledge Bases now supports Amazon OpenSearch Service Managed Cluster as vector store

Amazon Bedrock Knowledge Bases has extended its vector store options by enabling support for Amazon OpenSearch Service managed clusters, further strengthening its capabilities as a fully managed Retrieval Augmented Generation (RAG) solution. This enhancement builds on the core functionality of Amazon Bedrock Knowledge Bases, which is designed to seamlessly connect foundation models (FMs) with […]

Build a conversational data assistant, Part 1: Text-to-SQL with Amazon Bedrock Agents

What if you could replace hours of data analysis with a minute-long conversation? Large language models can transform how we bridge the gap between business questions and actionable data insights. For most organizations, this gap remains stubbornly wide, with business teams trapped in endless cycles—decoding metric definitions and hunting for the correct data sources to […]
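The text-to-SQL flow the title describes boils down to: take a business question, have a model produce SQL, run it against the warehouse. A minimal end-to-end sketch with SQLite stands in for that pipeline; the `orders` schema is invented, and the hardcoded query below is a stand-in for what a Bedrock agent would actually generate.

```python
import sqlite3

# Toy warehouse table (schema invented for this example).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)",
                 [("EMEA", 120.0), ("EMEA", 80.0), ("APAC", 50.0)])

question = "What is total order revenue by region?"
# In the post's architecture, an LLM agent translates `question` into SQL;
# this string stands in for that model-generated query.
generated_sql = "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"

rows = conn.execute(generated_sql).fetchall()
print(rows)  # [('APAC', 50.0), ('EMEA', 200.0)]
```

The hard parts the post series addresses sit around this loop: grounding the model in the real schema and metric definitions, and validating generated SQL before execution.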

Build an MCP application with Mistral models on AWS

This post is cowritten with Siddhant Waghjale and Samuel Barry from Mistral AI. Model Context Protocol (MCP) is a standard that has been gaining significant traction in recent months. At a high level, it consists of a standardized interface designed to streamline and enhance how AI models interact with external data sources and systems. Instead […]
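To make the "standardized interface" concrete: MCP messages follow JSON-RPC 2.0, so a client asking a server to run a tool sends a `tools/call` request. A minimal sketch of building such a message (the `get_weather` tool and its arguments are hypothetical, not from the post):

```python
import json

def mcp_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request; MCP messages are JSON-RPC 2.0."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

msg = mcp_tool_call(1, "get_weather", {"city": "Paris"})
print(json.dumps(msg, indent=2))
```

Because every MCP server speaks this same message shape, a model such as Mistral can discover and invoke tools without per-integration glue code, which is the interoperability point the excerpt is making.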

Qwen3 family of reasoning models now available in Amazon Bedrock Marketplace and Amazon SageMaker JumpStart

Today, we are excited to announce that Qwen3, the latest generation of large language models (LLMs) in the Qwen family, is available through Amazon Bedrock Marketplace and Amazon SageMaker JumpStart. With this launch, you can deploy the Qwen3 models—available in 0.6B, 4B, 8B, and 32B parameter sizes—to build, experiment, and responsibly scale your generative AI […]
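Once a model is deployed, invocation from code typically goes through the Bedrock Converse API. The sketch below only builds the request payload; the model ID string is a placeholder (check the Bedrock Marketplace listing for the real Qwen3 identifier), and the actual call, shown commented out, requires AWS credentials and granted model access.

```python
def converse_request(model_id, prompt, max_tokens=512, temperature=0.2):
    """Keyword arguments for an Amazon Bedrock Converse API call.
    model_id is a placeholder here, not a real Qwen3 identifier."""
    return {
        "modelId": model_id,
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "inferenceConfig": {"maxTokens": max_tokens, "temperature": temperature},
    }

request = converse_request("qwen.qwen3-32b-placeholder",
                           "Explain chain-of-thought prompting in two sentences.")

# Actual invocation (requires AWS credentials and model access):
# import boto3
# client = boto3.client("bedrock-runtime")
# response = client.converse(**request)
# print(response["output"]["message"]["content"][0]["text"])
print(request["modelId"])
```

Separating payload construction from the network call also makes the prompt and inference settings easy to unit test without touching AWS.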

End-to-end model training and deployment with Amazon SageMaker Unified Studio

Although rapid generative AI advancements are revolutionizing organizational natural language processing tasks, developers and data scientists face significant challenges customizing these large models. These hurdles include managing complex workflows, efficiently preparing large datasets for fine-tuning, implementing sophisticated fine-tuning techniques while optimizing computational resources, consistently tracking model performance, and achieving reliable, scalable deployment. The fragmented nature of […]

Build conversational interfaces for structured data using Amazon Bedrock Knowledge Bases

Organizations manage extensive structured data in databases and data warehouses. Large language models (LLMs) have transformed natural language processing (NLP), yet converting conversational queries into structured data analysis remains complex. Data analysts must translate business questions into SQL queries, creating workflow bottlenecks. Amazon Bedrock Knowledge Bases enables direct natural language interactions with structured data sources.
