<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" media="screen" href="/~files/feed-premium.xsl"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:media="http://search.yahoo.com/mrss/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:feedpress="https://feed.press/xmlns" xmlns:podcast="https://podcastindex.org/namespace/1.0" version="2.0">
  <channel>
    <feedpress:locale>en</feedpress:locale>
    <atom:link rel="self" href="https://feeds.dzone.com/databases"/>
    <atom:link rel="hub" href="https://feedpress.superfeedr.com/"/>
    <title>DZone Databases Zone</title>
    <link>https://dzone.com/databases</link>
    <description>Recent posts in Databases on DZone.com</description>
    <item>
      <title>Custom Model Context Protocol (MCP) for NL2SQL: A Rigorous Evaluation Framework on Oracle Database</title>
      <link>https://feeds.dzone.com/link/23560/17336954/model-context-protocol-mcp-for-nl2sql-a-rigorous-e</link>
      <description><![CDATA[<p data-line="8" dir="auto">When you let an <a href="https://dzone.com/articles/eight-core-llm-development-skills-every-enterprise">LLM</a> turn natural language into <a href="https://dzone.com/articles/sql-server-from-zero-to-advanced-level">SQL</a>, you need to know: is it <em>correct</em>, will it <em>run</em> on your database, and is it <em>efficient</em>? <strong>SQLclMCP</strong> is an open-source framework that answers those questions by comparing LLM-generated SQL to human-written baselines on <strong>Oracle Database&nbsp;</strong>— using the <strong>Model Context Protocol (MCP)</strong> and a 500-question TPC-H benchmark. MCP keeps “how SQL is generated” behind a single HTTP API: the evaluator sends a question and gets back SQL, so you can swap models, prompts, or even the server implementation and still run the <em>same</em> evaluation. This article walks through the pipeline, how to run it, what gets measured, a few example graphs and tables, and Oracle gotchas we fixed in the prompt.</p>
<h2 data-line="12" dir="auto">Why This Matters</h2>
<p data-line="14" dir="auto">Natural language to SQL (NL2SQL) works well for ad-hoc questions and app backends — until the model returns the wrong rows or a query that fails or runs too slowly in production. To ship with confidence you need three guarantees: the result set is <strong>correct</strong> (same logical result as the intended query), the SQL <strong>executes</strong> on your database without syntax or runtime errors, and it’s <strong>efficient</strong> enough (reasonable latency and plan quality, e.g. Oracle EXPLAIN PLAN). The only reliable way to get those guarantees is to compare LLM output to a gold standard on a <em>real</em> database, in a <strong>repeatable</strong> pipeline — so you can improve prompts, compare models, and catch dialect gotchas (Oracle vs MySQL, EXTRACT vs LIMIT, and the like). This framework gives you that pipeline.</p><img src="https://feeds.dzone.com/link/23560/17336954.gif" height="1" width="1"/>]]></description>
      <pubDate>Fri, 08 May 2026 15:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3642362</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=18972883&amp;w=600"/>
      <dc:creator>Sanjay Mishra</dc:creator>
    </item>
    <item>
      <title>RAG Done Right: When to Use SQL, Search, and Vector Retrieval and How To Combine Them</title>
      <link>https://feeds.dzone.com/link/23560/17336955/rag-sql-search-vector</link>
      <description><![CDATA[<p><span data-contrast="none">In this article, I will explain why retrieval-augmented generation (RAG) fails when retrieval is treated as a one-size-fits-all approach.</span></p>
<p>For example, an internal AI assistant looks great at demo time: a vector database ingested overnight, a GPT-4-class model, a clean stakeholder presentation. The team ships.</p><img src="https://feeds.dzone.com/link/23560/17336955.gif" height="1" width="1"/>]]></description>
      <pubDate>Fri, 08 May 2026 14:30:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3653386</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=19011670&amp;w=600"/>
      <dc:creator>Ram Ghadiyaram</dc:creator>
    </item>
    <item>
      <title>How to Test PUT API Request Using REST-Assured Java</title>
      <link>https://feeds.dzone.com/link/23560/17336239/test-put-api-rest-assured-java</link>
      <description><![CDATA[<p data-selectable-paragraph="">PUT requests are typically used for updating an existing resource. This means replacing the current data for the target resource with the data sent in the API request body.</p>
<p data-selectable-paragraph="">Just like POST requests, the content-type header is important because it tells the server how to interpret the data we’re sending.</p><img src="https://feeds.dzone.com/link/23560/17336239.gif" height="1" width="1"/>]]></description>
      <pubDate>Thu, 07 May 2026 14:30:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3653332</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=19009785&amp;w=600"/>
      <dc:creator>Faisal Khatri</dc:creator>
    </item>
    <item>
      <title>When Retries Become a Denial-of-Wallet</title>
      <link>https://feeds.dzone.com/link/23560/17335745/retries-are-a-denial-of-wallet-attack</link>
      <description><![CDATA[<p style="text-align: justify;">There's a particular kind of incident that doesn't show up in your error dashboards. No alerts fire. Latency looks fine, actually — or fine-ish, in that flickering, indeterminate way that makes you suspicious but not certain. What shows up, days later, is a billing anomaly. A line item that's 4x what you budgeted. And when you dig, you find it: retries. Hundreds of thousands of them. Loyal, tireless, utterly pointless retries, hammering a dependency that was never going to recover within the retry window, each one spinning up a Lambda invocation, writing to CloudWatch, touching the database, accruing egress. The system was "retrying" its way into insolvency.</p>
<p style="text-align: justify;">This is what I mean when I call uncontrolled retries a self-inflicted <a href="https://dzone.com/articles/retries-are-a-denial-of-wallet-attack-waiting-to-h">Denial-of-Wallet attack</a>. Not metaphorically. Mechanically.</p><img src="https://feeds.dzone.com/link/23560/17335745.gif" height="1" width="1"/>]]></description>
      <pubDate>Wed, 06 May 2026 20:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3642075</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=18961267&amp;w=600"/>
      <dc:creator>David Iyanu Jonathan</dc:creator>
    </item>
    <item>
      <title>Why PostgreSQL CDC Breaks in Production</title>
      <link>https://feeds.dzone.com/link/23560/17335575/why-postgresql-cdc-breaks-in-production</link>
      <description><![CDATA[<p>Keeping two PostgreSQL databases in sync sounds simple. Until it isn’t.</p>
<p>At first, everything looks fine:</p><img src="https://feeds.dzone.com/link/23560/17335575.gif" height="1" width="1"/>]]></description>
      <pubDate>Wed, 06 May 2026 15:30:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3653329</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=19009783&amp;w=600"/>
      <dc:creator>Dmitry Narizhnykh</dc:creator>
    </item>
    <item>
      <title>The Hidden Latency of Autoscaling</title>
      <link>https://feeds.dzone.com/link/23560/17335009/the-hidden-latency-of-autoscaling</link>
      <description><![CDATA[<p style="text-align: justify;">There is a comfortable fiction at the center of most <a href="https://dzone.com/articles/developer-centric-cloud-architecture-framework-dcaf">cloud architectures</a>, one that gets written into runbooks and repeated in postmortems with the same exhausted confidence: <em>we autoscale</em>. As if the declaration itself is a reliability posture. As if telling your HPA to watch CPU utilization is the same thing as building a system that breathes.</p>
<p style="text-align: justify;">It isn't. And the gap between those two things has eaten more than a few production environments.</p><img src="https://feeds.dzone.com/link/23560/17335009.gif" height="1" width="1"/>]]></description>
      <pubDate>Tue, 05 May 2026 19:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3642048</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=18959219&amp;w=600"/>
      <dc:creator>David Iyanu Jonathan</dc:creator>
    </item>
    <item>
      <title>Engineering LLMOps: Building Robust CI/CD Pipelines for LLM Applications on Google Cloud</title>
      <link>https://feeds.dzone.com/link/23560/17334934/llmops-ci-cd-pipelines-google-cloud</link>
      <description><![CDATA[<p>The transition of large language models (LLMs) from experimental notebooks to production-grade applications requires more than just a well-crafted prompt. As enterprises integrate generative AI into their core workflows, the need for stability, scalability, and reproducibility becomes paramount. This is where LLMOps — the intersection of DevOps, Data Engineering, and machine learning — enters the frame.</p>
<p>Building a <a href="https://dzone.com/articles/what-is-a-cicd-pipeline">CI/CD pipeline</a> for LLM-based applications on Google Cloud Platform (GCP) presents unique challenges. Unlike traditional software, LLM outputs are non-deterministic, making testing complex. Unlike traditional ML, the "model" is often a managed service (like Gemini) or a fine-tuned version of an open-source giant, shifting the focus from training to orchestration, prompt management, and RAG (Retrieval-Augmented Generation) infrastructure.</p><img src="https://feeds.dzone.com/link/23560/17334934.gif" height="1" width="1"/>]]></description>
      <pubDate>Tue, 05 May 2026 16:30:01 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3653311</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=19007243&amp;w=600"/>
      <dc:creator>Jubin Abhishek Soni</dc:creator>
    </item>
    <item>
      <title>Setting Up Claude Code With Ollama: A Guide</title>
      <link>https://feeds.dzone.com/link/23560/17334893/claude-code-ollama-setup-guide</link>
      <description><![CDATA[<div>
 <p>Nowadays, there are quite a lot of AI coding assistants. In this blog, you will take a closer look at Claude Code, a terminal-based AI coding assistant. Since mid-January 2026, Claude Code can also be used in combination with Ollama, a local inference engine. Enjoy!</p>
 <h2>Introduction</h2>
 <p>There are many AI models and also many AI coding assistants. Which one to choose is a hard question. It also depends on whether you run the models locally or in the cloud. When running locally, Qwen3-Coder is a very good AI model to be used for programming tasks. In previous posts, <a href="https://mydeveloperplanet.com/2024/10/08/devoxxgenie-your-ai-assistant-for-idea/" rel="noopener noreferrer" target="_blank">DevoxxGenie</a>, a JetBrains IDE plugin, was often used as an AI coding assistant. DevoxxGenie is nicely integrated within the JetBrains IDEs. But it is also worthwhile to take a look at other AI coding assistants. In a <a href="https://mydeveloperplanet.com/2026/02/25/getting-started-with-qwen-code-for-coding-tasks/" rel="noopener noreferrer" target="_blank">previous blog</a>, Qwen Code was used; now it is time to take a look at Claude Code.</p><img src="https://feeds.dzone.com/link/23560/17334893.gif" height="1" width="1"/>]]></description>
      <pubDate>Tue, 05 May 2026 15:30:06 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3653316</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=19007229&amp;w=600"/>
      <dc:creator>Gunter Rotsaert</dc:creator>
    </item>
    <item>
      <title>Modernization Is Not Migration</title>
      <link>https://feeds.dzone.com/link/23560/17334894/modernization-is-not-migration</link>
      <description><![CDATA[<h2>Industry Context</h2>
<p><a href="https://dzone.com/articles/application-modernization-amp-6rs">Modernization</a> used to mean something simpler: Move the workloads, update the tooling, declare the project done. In practice, that approach meant engineers manually migrating hundreds of DataStage jobs one at a time, a process that was slow, error-prone, and impossible to scale as platforms grew. The traditional model worked when volumes were low. It broke entirely when weekly release windows started carrying 500 jobs, and the only way through was brute-force manual effort.</p>
<p>What changed the equation was not just cloud infrastructure but also a fundamentally different operating model. When a CI/CD-based promotion mechanism replaced manual steps, reducing what once required hours of coordinated effort down to a single parameterized execution, hundreds of jobs could migrate consistently, with less human involvement and a verifiable audit trail. That shift exposed a harder truth: the technology was never the bottleneck. The operating model was.</p><img src="https://feeds.dzone.com/link/23560/17334894.gif" height="1" width="1"/>]]></description>
      <pubDate>Tue, 05 May 2026 15:00:15 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3643489</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=18954001&amp;w=600"/>
      <dc:creator>Vaibhav Sharma</dc:creator>
    </item>
    <item>
      <title>Mastering Kubernetes to Maximize Your Cloud Potential</title>
      <link>https://feeds.dzone.com/link/23560/17332376/mastering-kubernetes-to-maximize-your-cloud-potent</link>
      <description><![CDATA[<p data-end="223" data-start="57" style="text-align: justify;"><a href="https://dzone.com/articles/kubernetes-101-understanding-the-foundation-and-ge">Kubernetes</a> is often introduced as a container orchestrator. That’s like calling a modern city “a collection of buildings.” Technically correct, but wildly incomplete.</p>
<p data-end="515" data-start="225" style="text-align: justify;">In reality, Kubernetes is a layered ecosystem where storage, compute, networking, security, and developer workflows interlock like gears in a precision machine. If one gear slips, everything grinds. If all align, you unlock a platform that scales, heals, and evolves with your applications.</p><img src="https://feeds.dzone.com/link/23560/17332376.gif" height="1" width="1"/>]]></description>
      <pubDate>Mon, 04 May 2026 19:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3642604</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=18958123&amp;w=600"/>
      <dc:creator>Jaswinder Kumar</dc:creator>
    </item>
    <item>
      <title>Building Fault-Tolerant Kafka Consumers in Spring Boot Using Retry, DLQ, and Idempotent Code Patterns</title>
      <link>https://feeds.dzone.com/link/23560/17331903/building-fault-tolerant-kafka-consumers-in-spring</link>
      <description><![CDATA[<p data-end="633" data-start="105"><a href="https://dzone.com/articles/how-to-createand-configureapache-kafka-consumers">Apache Kafka</a> is a robust distributed streaming platform, but building a fault-tolerant consumer requires careful handling of errors and duplicates. In this article, we focus on Spring Boot 3 with Spring Kafka 3.x to implement resilient Kafka consumers using retry mechanisms, dead-letter queues (DLQs), and idempotent processing patterns. We'll walk through how to configure retries, route problematic messages to a DLQ, and ensure that even if the same message is consumed multiple times, it is processed only once.</p>
<h2 data-end="682" data-section-id="1lpzx2h" data-start="635">Challenges in Kafka Consumer Fault Tolerance</h2>
<p data-end="1346" data-start="684">Kafka consumers usually operate in an at-least-once delivery mode, which means a message might be delivered multiple times if not acknowledged properly. Transient errors can cause message processing failures. Without proper handling, such failures might lead to data loss or duplicate processing. If a consumer fails after processing a message but before committing the offset, Kafka will resend that message to another consumer, leading to a duplicate delivery. A fault-tolerant consumer design addresses these scenarios by:</p><img src="https://feeds.dzone.com/link/23560/17331903.gif" height="1" width="1"/>]]></description>
      <pubDate>Mon, 04 May 2026 12:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3642550</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=18954583&amp;w=600"/>
      <dc:creator>Mallikharjuna Manepalli</dc:creator>
    </item>
    <item>
      <title>Understanding MCP Architecture: LLM + API vs Model Context Protocol</title>
      <link>https://feeds.dzone.com/link/23560/17330458/understanding-mcp-architecture-llm-api-vs-mcp</link>
      <description><![CDATA[<p data-line="4" dir="auto">Suppose you want a chatbot that works with PDFs: extract text, search across documents, summarize sections. You can build it two ways: by calling an LLM API directly and wiring tools yourself, or by exposing those tools through the <a href="https://dzone.com/articles/model-context-protocol-mcp-guide-architecture-uses-implementation">Model Context Protocol (MCP)</a>. Same user experience — different architecture. This article uses a PDF example to walk through both routes and explain what MCP adds.</p>
<h2 data-line="8" dir="auto">The Goal</h2>
<p data-line="10" dir="auto">User asks in natural language → chatbot reads/searches PDFs → returns an answer.</p><img src="https://feeds.dzone.com/link/23560/17330458.gif" height="1" width="1"/>]]></description>
      <pubDate>Fri, 01 May 2026 20:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3642063</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=18954567&amp;w=600"/>
      <dc:creator>Sanjay Mishra</dc:creator>
    </item>
    <item>
      <title>How to Log HTTP Incoming Requests in Spring Boot</title>
      <link>https://feeds.dzone.com/link/23560/17330419/how-to-log-http-incoming-requests-in-spring-boot</link>
      <description><![CDATA[<p>In developing <a href="https://dzone.com/articles/restful-services-1" rel="noopener" target="_blank" title="REST ">REST&nbsp;</a>APIs, you often need to log <a href="https://dzone.com/articles/http-protocol-obviously-unobvious" rel="noopener" target="_blank" title="HTTP ">HTTP&nbsp;</a>incoming requests. You want to see exactly what data your application is receiving and how it is processed. You want a detailed view of the passed data to ease troubleshooting and development. <strong>CommonsRequestLoggingFilter</strong> is a class of <a href="https://codingstrain.com/category/java/spring/spring-boot/" rel="noopener" target="_blank" title="Spring Boot">Spring Boot</a> that allows you to log requests with simple configuration steps.</p>
<p>In this article, you'll see how to configure request logging in Spring Boot and inspect request payloads and parameters.</p><img src="https://feeds.dzone.com/link/23560/17330419.gif" height="1" width="1"/>]]></description>
      <pubDate>Fri, 01 May 2026 19:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3641026</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=18947385&amp;w=600"/>
      <dc:creator>Mario Casari</dc:creator>
    </item>
    <item>
      <title>Unlocking Smart Meter Insights with Smart Datastream</title>
      <link>https://feeds.dzone.com/link/23560/17330391/smart-meter-insights-with-smart-datastream</link>
      <description><![CDATA[<p dir="ltr">The <a href="https://dzone.com/articles/steps-for-developers-to-take-toward-green-it">rollout of smart meters</a> across the UK has fundamentally changed how energy data is generated and used. Millions of devices now capture consumption data at fine-grained intervals, offering a much clearer picture of how energy is used across households and businesses.</p>
<p dir="ltr">This shift creates a real opportunity. With the right tools, organizations can move beyond basic reporting and start making informed decisions around efficiency, cost optimization, and sustainability.</p><img src="https://feeds.dzone.com/link/23560/17330391.gif" height="1" width="1"/>]]></description>
      <pubDate>Fri, 01 May 2026 18:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3641124</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=18955505&amp;w=600"/>
      <dc:creator>Muhammad Rizwan</dc:creator>
    </item>
    <item>
      <title>6 Integration Patterns That Look Good on Paper and What Happens When They Hit Production</title>
      <link>https://feeds.dzone.com/link/23560/17330278/integration-patterns-fail-production</link>
      <description><![CDATA[<p>In most enterprise systems, integrations don’t fail immediately. They fail slowly. Everything works fine at first: APIs respond quickly, workflows look clean, and dependencies seem manageable. Then traffic grows, systems evolve, and edge cases appear. That’s when the cracks start to show.</p>
<p>In my experience, these failures are rarely caused by tools. They come from how <a href="https://dzone.com/articles/integration-patterns-in-microservices-world">integration patterns</a> are applied without considering real-world conditions like latency, retries, partial failures, and security boundaries.</p><img src="https://feeds.dzone.com/link/23560/17330278.gif" height="1" width="1"/>]]></description>
      <pubDate>Fri, 01 May 2026 14:00:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3642001</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=18955447&amp;w=600"/>
      <dc:creator>Priyanka Jayavel</dc:creator>
    </item>
    <item>
      <title>Generate Random Test Data in PostgreSQL</title>
      <link>https://feeds.dzone.com/link/23560/17330279/postgresql-random-test-data</link>
      <description><![CDATA[<p>When developing and testing applications that use a PostgreSQL database, it's often helpful to populate your tables with random data. Whether you're testing queries, performance, or database functionality, having a set of test data can help ensure your application performs as expected.</p>
<p>In this guide, we'll walk through how to create an <strong>anonymous PL/pgSQL block</strong> that generates random data and inserts it into a PostgreSQL table. The data will include various types such as integers, strings, dates, booleans, and UUIDs.</p><img src="https://feeds.dzone.com/link/23560/17330279.gif" height="1" width="1"/>]]></description>
      <pubDate>Fri, 01 May 2026 13:30:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3560174</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=19006935&amp;w=600"/>
      <dc:creator>Arvind Toorpu</dc:creator>
    </item>
    <item>
      <title>End-to-End Event Streaming With Kafka, Spring Boot and AWS SQS/SNS (Production-Ready Code Guide)</title>
      <link>https://feeds.dzone.com/link/23560/17328769/end-to-end-event-streaming-with-kafka-spring-boot</link>
      <description><![CDATA[<p data-end="768" data-start="101">Event-driven applications often demand high throughput, reliable delivery, and flexible fan-out messaging. Each platform in our stack plays a distinct role: <a href="https://dzone.com/articles/kafka-real-time-data-dashboards?fromrel=true">Apache Kafka</a> provides a distributed, high-volume event log, Amazon SQS offers durable point-to-point queues, and Amazon SNS enables pub/sub broadcasting to multiple subscribers. Using them together yields a robust pipeline: teams commonly use Kafka for streaming, SQS for decoupled processing, and SNS for multicasting events. This synergy leverages the strengths of each platform to build scalable, loosely coupled systems.</p>
<h2 data-end="1431" data-section-id="18pwj5f" data-start="1407">Architecture Overview</h2>
<p data-end="1529" data-start="1433">The pipeline involves multiple components working together in sequence. Below is the event flow:</p><img src="https://feeds.dzone.com/link/23560/17328769.gif" height="1" width="1"/>]]></description>
      <pubDate>Thu, 30 Apr 2026 18:00:09 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3642551</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=18953051&amp;w=600"/>
      <dc:creator>Mallikharjuna Manepalli</dc:creator>
    </item>
    <item>
      <title>5 Ways Azure AI Search Enhances Enterprise RAG Architectures</title>
      <link>https://feeds.dzone.com/link/23560/17328603/azure-ai-search-enhances-rag</link>
      <description><![CDATA[<p>The transition from experimental Proof of Concepts (POCs) to production-grade applications is the most significant hurdle for enterprises today. At the heart of this transition lies retrieval-augmented generation (RAG). While the "Generation" part — handled by large language models (LLMs) like GPT-4 — is often the focus, the quality of the "retrieval" determines whether an AI application provides value or hallucinates incorrect information.</p>
<p>Azure AI Search (formerly known as Azure Cognitive Search) has emerged as a powerhouse in this space. By moving beyond simple vector databases and offering a comprehensive information retrieval platform, it addresses the unique challenges of the enterprise: scale, security, and precision. In this article, we will deep-dive into the five key ways Azure AI Search is improving enterprise RAG, backed by technical architecture, code examples, and performance insights.</p><img src="https://feeds.dzone.com/link/23560/17328603.gif" height="1" width="1"/>]]></description>
      <pubDate>Thu, 30 Apr 2026 14:30:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3651500</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=19001575&amp;w=600"/>
      <dc:creator>Jubin Abhishek Soni</dc:creator>
    </item>
    <item>
      <title>The Dual Write Problem: What Looks Safe in Code but Breaks in Production</title>
      <link>https://feeds.dzone.com/link/23560/17328022/dual-write-problem-what-looks-safe-in-code</link>
      <description><![CDATA[<p>A system that crashes is easier to fix than one that silently produces wrong results. The dual write problem is exactly that kind of bug.</p>
<p>It is surprisingly common and often misunderstood, even by teams that have encountered it in production. Understanding the dual write problem starts with seeing why the obvious solution fails, and ends with four patterns that address it correctly.</p><img src="https://feeds.dzone.com/link/23560/17328022.gif" height="1" width="1"/>]]></description>
      <pubDate>Wed, 29 Apr 2026 18:00:15 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3642008</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=18949719&amp;w=600"/>
      <dc:creator>Vineet Bhatkoti</dc:creator>
    </item>
    <item>
      <title>Why AWS Kiro Matters for Agentic Development</title>
      <link>https://feeds.dzone.com/link/23560/17327948/aws-kiro-matters-agentic-development</link>
      <description><![CDATA[<p>The evolution of artificial intelligence (AI) has transitioned from passive chat interfaces to active, autonomous agents. This shift, known as agentic development, requires a fundamental rethink of cloud infrastructure. In traditional AI workflows, a single request is sent to a large language model (LLM), and a response is received. In agentic workflows, dozens or even hundreds of small, specialized agents must communicate, share state, and access tools in real-time. This creates a massive networking and latency bottleneck that standard REST-based architectures cannot handle.</p>
<p>Enter <strong>AWS Kiro</strong>. AWS Kiro (Kernel-Integrated Runtime Orchestrator) is a specialized, high-performance infrastructure layer designed specifically for the orchestration of multi-agent systems. It moves beyond the limitations of standard container orchestration to provide a low-latency, state-aware environment where agents can thrive. This article provides a deep dive into what AWS Kiro is, how it works, and why it is the missing piece for the next generation of AI development.</p><img src="https://feeds.dzone.com/link/23560/17327948.gif" height="1" width="1"/>]]></description>
      <pubDate>Wed, 29 Apr 2026 15:30:00 GMT</pubDate>
      <guid isPermaLink="false">https://dzone.com/articles/3651501</guid>
      <media:thumbnail url="https://dz2cdn1.dzone.com/thumbnail?fid=19004164&amp;w=600"/>
      <dc:creator>Jubin Abhishek Soni</dc:creator>
    </item>
  </channel>
</rss>
