In Swiggy Bytes — Tech Blog, by Jose Mathew: "Reflecting on a year of generative AI at Swiggy: A brief review of achievements, learnings, and…" In the past year, Swiggy has embarked on an ambitious journey into the realm of generative AI, aiming to integrate these techniques to… (Feb 22, 2024)
In The Deep Hub, by Gaurav Nukala: "Evaluation Metrics for RAG Systems" In the rapidly evolving landscape of artificial intelligence, Retriever-Augmented Generation (RAG) systems have emerged as a powerful tool… (Apr 6, 2024)
Padma Thanumoorthy: "Mastering Large Language Models: Generative AI Project Lifecycle — Part 1" Generative AI — Large Language Models (LLMs) have revolutionized the way we interact with text, enabling us to communicate, analyze, and… (Feb 27, 2024)
In Towards AI, by Ignacio de Gregorio: "RAG 2.0, Finally Getting RAG Right!" The Creators of RAG Present its Successor. (Apr 10, 2024)
In AWS Tip, by Itsuki: "AWS: RAG with Bedrock Knowledge Base" RAG Overview. (Mar 20, 2024)
Nour Eddine Zekaoui: "How to Build High-Accuracy Serverless RAG Using Amazon Bedrock and Kendra on AWS" Serverless RAG on AWS — Amazon Bedrock, Amazon Kendra, AWS Lambda, Claude-2, LangChain, and Streamlit. (Mar 18, 2024)
In Level Up Coding, by Youssef Hosni: "Top Large Language Models (LLMs) Interview Questions & Answers" Demystifying Large Language Models (LLMs): Key Interview Questions and Expert Answers. (Sep 5, 2023)
Tahir Saeed: "Serverless RAG solution using Amazon Bedrock Knowledge Base" In one of my earlier posts, we explored how to create a Retrieval Augmented Generation (RAG) solution by integrating Amazon Bedrock and… (Mar 12, 2024)