Posts

Showing posts from 2025

Automate Library Integration with Cursor's Agent Mode

As developers, we often find ourselves repeating similar integration steps for different libraries. What if your IDE could proactively guide you through the setup, asking for the necessary parameters and generating boilerplate code on the fly? With tools like Cursor's "Agent Requested" mode, this is no longer a dream but a reality. This post delves into how to empower Cursor to integrate a custom library (let's call it "MyGraph") into your Android application, making the setup process remarkably efficient.

Understanding Cursor's "Agent Requested" Mode

Cursor's "Agent Requested" mode is a powerful feature that lets the IDE's AI assistant take the initiative based on the context of your project or specific triggers. Instead of you explicitly asking for help every time, the...

Enhancing LLM Responses with Prompt Stuffing in Spring Boot AI

Large Language Models (LLMs) like OpenAI's GPT series are incredibly powerful, but they sometimes need a little help to provide the most accurate or context-specific answers. One common limitation is their knowledge cut-off date; another is their lack of access to your private, domain-specific data. This is where "prompt stuffing" (a basic form of Retrieval-Augmented Generation, or RAG) comes into play. In this post, we'll explore how to use Spring Boot with Spring AI to "stuff" relevant context into your prompts, guiding the LLM to generate more informed and precise responses. We'll use a practical example involving fetching information about a hypothetical IPL 2025 schedule.

What is Prompt Stuffing?

Prompt stuffing, in simple terms, means providing the LLM with relevant information or context directly within the prompt you send i...
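The core idea can be sketched without any framework at all: gather the relevant text yourself and splice it into the prompt template before the prompt is sent to the model. A minimal Java sketch follows; the template wording and the IPL schedule snippets are illustrative assumptions, not content from the post, and in the real application the finished prompt would be handed to Spring AI's chat client rather than printed.

```java
import java.util.List;

// Minimal illustration of prompt stuffing: retrieved context is
// concatenated directly into the prompt before it reaches the LLM.
public class PromptStuffing {

    // Hypothetical template with slots for the context and the question.
    private static final String TEMPLATE = """
            Answer the question using ONLY the context below.
            If the answer is not in the context, say you don't know.

            Context:
            %s

            Question: %s
            """;

    // Build the final prompt by "stuffing" the context documents in.
    public static String buildPrompt(List<String> contextDocs, String question) {
        String context = String.join("\n", contextDocs);
        return TEMPLATE.formatted(context, question);
    }

    public static void main(String[] args) {
        // Hypothetical domain snippets the model would not otherwise know.
        List<String> docs = List.of(
                "IPL 2025 opening match: 22 March 2025.",
                "The IPL 2025 final is scheduled for 25 May 2025.");
        String prompt = buildPrompt(docs, "When is the IPL 2025 final?");
        System.out.println(prompt);
    }
}
```

Because the context travels inside the prompt itself, no fine-tuning is needed; the trade-off is that everything you stuff in consumes context-window tokens.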

Building a Retrieval-Augmented Generation (RAG) Application with Ollama 3.2 and Spring Boot

This blog post demonstrates how to build a Retrieval-Augmented Generation (RAG) application using Ollama 3.2 for large language models (LLMs) and Spring Boot for creating REST APIs. RAG combines information retrieval with LLMs to provide more accurate and contextually relevant answers. We'll leverage Docker Desktop for containerization and pgvector for vector storage.

Project Setup

We'll use Spring Boot version 3.3.7 for this project. Here's a breakdown of the key components and configurations:

1. Dependencies (Gradle):

    dependencies {
        implementation 'org.springframework.boot:spring-boot-starter-jdbc'
        implementation 'org.springframework.boot:spring-boot-starter-web'
        implementation 'com.fasterxml.jackson.module:jackson-module-kotlin'
        implementation 'org.springframework.ai:spring-ai-ollama-spring-boot-starter'
        ...
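The retrieval half of RAG reduces to a similarity search: embed the user's question, rank the stored document chunks by how close their embedding vectors are, and stuff the best matches into the prompt. A self-contained Java sketch of that ranking step is below, using cosine similarity over plain arrays; it is an assumption for illustration, since in the application described here the embeddings would come from Ollama and the nearest-neighbour search would be done by pgvector in Postgres.

```java
import java.util.Comparator;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Sketch of the retrieval step in RAG: rank stored chunks by cosine
// similarity to the query embedding and keep the top matches.
public class Retriever {

    // Cosine similarity between two equal-length vectors.
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    // Return the texts of the k chunks most similar to the query vector.
    static List<String> topK(double[] query, Map<String, double[]> chunks, int k) {
        return chunks.entrySet().stream()
                .sorted(Comparator.comparingDouble(
                        (Map.Entry<String, double[]> e) -> -cosine(query, e.getValue())))
                .limit(k)
                .map(Map.Entry::getKey)
                .toList();
    }

    public static void main(String[] args) {
        // Toy 3-dimensional "embeddings"; real ones have hundreds of dimensions.
        Map<String, double[]> chunks = new LinkedHashMap<>();
        chunks.put("chunk about match schedules", new double[]{1, 0, 0});
        chunks.put("chunk about ticket pricing",  new double[]{0, 1, 0});
        double[] queryEmbedding = {0.9, 0.1, 0};
        System.out.println(topK(queryEmbedding, chunks, 1)); // most similar chunk
    }
}
```

pgvector performs the same ranking inside the database (for example with its cosine-distance operator), which is what makes it a natural fit for the vector-storage role in this stack.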