Generative AI


Amazon Bedrock Prompt Optimization Drives LLM Applications Innovation for Yuewen Group

Yuewen Group is a global leader in online literature and IP operations. Through its overseas platform WebNovel, it has attracted about 260 million users in over 200 countries and regions, promoting Chinese web literature globally. The company also adapts quality web novels into films and animations for international markets, expanding the global influence of Chinese culture. […]


Add Zoom as a data accessor to your Amazon Q index

For many organizations, vast amounts of enterprise knowledge are scattered across diverse data sources and applications. Organizations across industries seek to use this cross-application enterprise data from within their preferred systems while adhering to their established security and governance standards. This post demonstrates how Zoom users can access their Amazon Q Business enterprise data directly


Host concurrent LLMs with LoRAX

Businesses are increasingly seeking domain-adapted and specialized foundation models (FMs) to meet specific needs in areas such as document summarization, industry-specific adaptations, and technical code generation and advisory. Generative AI models now offer tailored experiences that require minimal technical expertise, and organizations are increasingly using these powerful models to drive innovation and
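Below is a minimal sketch of what serving many fine-tuned variants on one deployment looks like with LoRAX, assuming a LoRAX server is already running and exposing its TGI-compatible /generate route; the endpoint URL and adapter names are placeholders rather than values from the post.

```python
from typing import Optional

import requests

# Hypothetical LoRAX endpoint; LoRAX exposes a TGI-compatible /generate route.
LORAX_URL = "http://localhost:8080/generate"

def generate(prompt: str, adapter_id: Optional[str] = None) -> str:
    """Send a prompt to the shared base model, optionally routing it
    through a task-specific LoRA adapter that LoRAX loads on demand."""
    payload = {
        "inputs": prompt,
        "parameters": {"max_new_tokens": 256},
    }
    if adapter_id:
        # Each request can name a different adapter; LoRAX swaps adapters
        # in and out of the same base model per request.
        payload["parameters"]["adapter_id"] = adapter_id
    resp = requests.post(LORAX_URL, json=payload, timeout=60)
    resp.raise_for_status()
    return resp.json()["generated_text"]

# Two tenants sharing one deployment, each with its own fine-tuned adapter
# (adapter names are illustrative placeholders).
print(generate("Summarize this contract: ...", adapter_id="legal-summarizer"))
print(generate("Explain this stack trace: ...", adapter_id="code-advisor"))
```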


Optimizing Mixtral 8x7B on Amazon SageMaker with AWS Inferentia2

Organizations are constantly seeking ways to harness the power of advanced large language models (LLMs) to enable a wide range of applications such as text generation, summarization, question answering, and many others. As these models grow more powerful and capable, deploying them in production environments while optimizing performance and cost-efficiency becomes more challenging. Amazon Web Services
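As a rough illustration of the serving side, here is a hedged sketch of calling such a deployment once a Neuron-compiled Mixtral 8x7B model sits behind a SageMaker real-time endpoint; the endpoint name is a placeholder and the payload schema is an assumption that depends on the serving container in use.

```python
import json

import boto3

# Hypothetical endpoint name for a Mixtral 8x7B model already compiled for
# Inferentia2 and deployed as a SageMaker real-time endpoint.
ENDPOINT_NAME = "mixtral-8x7b-inf2"

runtime = boto3.client("sagemaker-runtime")

payload = {
    "inputs": "Summarize the following meeting notes: ...",
    # Parameter names follow the common TGI/LMI convention; adjust them to
    # whatever serving container the endpoint actually runs.
    "parameters": {"max_new_tokens": 256, "temperature": 0.2},
}

response = runtime.invoke_endpoint(
    EndpointName=ENDPOINT_NAME,
    ContentType="application/json",
    Body=json.dumps(payload),
)

print(json.loads(response["Body"].read()))
```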


Elevate business productivity with Amazon Q and Amazon Connect

Modern banking faces dual challenges: delivering rapid loan processing while maintaining robust security against sophisticated fraud. Amazon Q Business provides AI-driven analysis of regulatory requirements and lending patterns. Additionally, you can now report fraud from the same interface with a custom plugin capability that can integrate with Amazon Connect. This fusion of technology transforms traditional


Build multi-agent systems with LangGraph and Amazon Bedrock

Large language models (LLMs) have raised the bar for human-computer interaction: users now expect to communicate with their applications through natural language. Beyond simple language understanding, real-world applications require managing complex workflows, connecting to external data, and coordinating multiple AI capabilities. Imagine scheduling a doctor’s appointment where an AI agent
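A minimal sketch of the pattern, assuming the langgraph and langchain-aws packages and an illustrative Bedrock model ID: two agent nodes wired into a LangGraph state graph, one extracting appointment details and one drafting a confirmation. A real workflow would add conditional edges, tool calls, and external data lookups.

```python
from typing import TypedDict

from langchain_aws import ChatBedrock
from langgraph.graph import StateGraph, END

# Model ID is illustrative; any Bedrock chat model enabled in your account works.
llm = ChatBedrock(model_id="anthropic.claude-3-sonnet-20240229-v1:0")

class AppointmentState(TypedDict):
    request: str       # raw user message
    details: str       # extracted appointment details
    confirmation: str  # final confirmation text

def extract_details(state: AppointmentState) -> dict:
    """First agent: pull scheduling details out of free text."""
    msg = llm.invoke(
        "Extract the requested date, time, and doctor from this message. "
        f"Message: {state['request']}"
    )
    return {"details": msg.content}

def confirm_appointment(state: AppointmentState) -> dict:
    """Second agent: draft a confirmation based on the extracted details."""
    msg = llm.invoke(
        f"Write a one-sentence confirmation for this appointment: {state['details']}"
    )
    return {"confirmation": msg.content}

# Wire the two agents into a simple linear graph.
graph = StateGraph(AppointmentState)
graph.add_node("extract", extract_details)
graph.add_node("confirm", confirm_appointment)
graph.set_entry_point("extract")
graph.add_edge("extract", "confirm")
graph.add_edge("confirm", END)
app = graph.compile()

result = app.invoke({"request": "I need to see Dr. Lee next Tuesday at 3pm."})
print(result["confirmation"])
```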


Dynamic text-to-SQL for enterprise workloads with Amazon Bedrock Agents

Generative AI enables us to accomplish more in less time. Text-to-SQL empowers people to explore data and draw insights using natural language, without requiring specialized database knowledge. Amazon Web Services (AWS) has helped many customers connect this text-to-SQL capability with their own data, which means more employees can generate insights. In this process, we discovered
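For a sense of the calling pattern, here is a hedged sketch of sending a natural-language question to a Bedrock agent that has been set up for text-to-SQL against enterprise data; the agent and alias IDs are placeholders, and the actual SQL generation happens inside the agent's own configuration.

```python
import uuid

import boto3

# Placeholder IDs for a Bedrock agent configured for text-to-SQL.
AGENT_ID = "AGENT_ID"
AGENT_ALIAS_ID = "AGENT_ALIAS_ID"

client = boto3.client("bedrock-agent-runtime")

response = client.invoke_agent(
    agentId=AGENT_ID,
    agentAliasId=AGENT_ALIAS_ID,
    sessionId=str(uuid.uuid4()),
    inputText="Which five products had the highest revenue last quarter?",
)

# The agent streams its answer back as chunks; collect them into one string.
answer = ""
for event in response["completion"]:
    if "chunk" in event:
        answer += event["chunk"]["bytes"].decode("utf-8")

print(answer)
```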


Reduce ML training costs with Amazon SageMaker HyperPod

Training a frontier model is highly compute-intensive, requiring a distributed system of hundreds, or thousands, of accelerated instances running for several weeks or months to complete a single job. For example, pre-training the Llama 3 70B model with 15 trillion training tokens took 6.5 million H100 GPU hours. On 256 Amazon EC2 P5 instances (p5.48xlarge,
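A quick back-of-envelope check of that figure, assuming perfect utilization and ignoring checkpointing and restarts (each p5.48xlarge instance carries 8 NVIDIA H100 GPUs):

```python
# Back-of-envelope: how long 6.5 million H100 GPU hours takes on a
# 256-instance P5 cluster (each p5.48xlarge has 8 H100 GPUs).
gpu_hours = 6.5e6
instances = 256
gpus_per_instance = 8

total_gpus = instances * gpus_per_instance    # 2,048 GPUs
wall_clock_hours = gpu_hours / total_gpus     # ~3,174 hours
wall_clock_days = wall_clock_hours / 24       # ~132 days

print(f"{total_gpus} GPUs -> {wall_clock_days:.0f} days of continuous training")
```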


Model customization, RAG, or both: A case study with Amazon Nova

As businesses and developers increasingly seek to optimize their language models for specific tasks, the decision between model customization and Retrieval Augmented Generation (RAG) becomes critical. In this post, we seek to address this growing need by offering clear, actionable guidelines and best practices on when to use each approach, helping you make informed decisions
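As a point of reference for the RAG side of that decision, here is a minimal sketch of a fully managed retrieval-augmented call through Amazon Bedrock Knowledge Bases; the knowledge base ID is a placeholder and the Nova model identifier is illustrative (some Regions require an inference profile instead of a foundation-model ARN).

```python
import boto3

# Placeholder knowledge base ID and an illustrative Amazon Nova model ARN.
KB_ID = "KNOWLEDGE_BASE_ID"
MODEL_ARN = "arn:aws:bedrock:us-east-1::foundation-model/amazon.nova-pro-v1:0"

client = boto3.client("bedrock-agent-runtime")

response = client.retrieve_and_generate(
    input={"text": "What is our warranty policy for refurbished devices?"},
    retrieveAndGenerateConfiguration={
        "type": "KNOWLEDGE_BASE",
        "knowledgeBaseConfiguration": {
            "knowledgeBaseId": KB_ID,
            "modelArn": MODEL_ARN,
        },
    },
)

# The grounded answer; retrieval over the indexed documents is handled
# by the knowledge base itself.
print(response["output"]["text"])
```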


Automating regulatory compliance: A multi-agent solution using Amazon Bedrock and CrewAI

Financial institutions today face an increasingly complex regulatory world that demands robust, efficient compliance mechanisms. Although organizations traditionally invest countless hours reviewing regulations such as the Anti-Money Laundering (AML) rules and the Bank Secrecy Act (BSA), modern AI solutions offer a transformative approach to this challenge. By using Amazon Bedrock Knowledge Bases alongside CrewAI—an open
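A minimal sketch of the multi-agent shape described here, with two CrewAI agents handing work to each other; the Bedrock model string, roles, and tasks are illustrative assumptions, and a full solution would also attach the Bedrock knowledge base as a retrieval tool for the research agent.

```python
from crewai import Agent, Task, Crew

# The llm string uses the LiteLLM-style "bedrock/<model-id>" form that recent
# CrewAI releases accept; adjust to however your setup reaches Amazon Bedrock.
BEDROCK_LLM = "bedrock/anthropic.claude-3-sonnet-20240229-v1:0"

researcher = Agent(
    role="Regulatory researcher",
    goal="Find AML and BSA requirements relevant to a given transaction pattern",
    backstory="You specialize in U.S. anti-money-laundering regulation.",
    llm=BEDROCK_LLM,
)

reviewer = Agent(
    role="Compliance reviewer",
    goal="Turn regulatory findings into a concrete compliance checklist",
    backstory="You translate regulation into actionable controls.",
    llm=BEDROCK_LLM,
)

research = Task(
    description="Summarize AML/BSA obligations for cross-border wire transfers.",
    expected_output="A short list of applicable obligations with citations.",
    agent=researcher,
)

review = Task(
    description="Draft a compliance checklist from the research summary.",
    expected_output="A numbered checklist a compliance officer can act on.",
    agent=reviewer,
)

# The crew runs the tasks in order, passing the researcher's output to the reviewer.
crew = Crew(agents=[researcher, reviewer], tasks=[research, review])
print(crew.kickoff())
```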
