Sunday, June 8, 2025

Automate Android Library Integration with Cursor's Agent Mode

As developers, we often find ourselves repeating similar integration steps for various libraries. What if your IDE could proactively guide you through the setup, asking for necessary parameters and generating boilerplate code on the fly? With tools like Cursor's "Agent Requested" mode, this is not just a dream but a reality.

This post delves into how to empower Cursor to integrate a custom library (let's call it "MyGraph") into your Android application, making the setup process remarkably efficient.

Understanding Cursor's "Agent Requested" Mode

Cursor's "Agent Requested" mode is a powerful feature that allows the IDE's AI assistant to take initiative based on the context of your project or specific triggers. Instead of you explicitly asking for help every time, the AI can "sense" when its assistance might be beneficial and offer it automatically.

How the AI Decides: The `rules.mdc` File

The magic behind this proactive assistance lies in a special file named `rules.mdc`. This file, placed in your project's `<project-root>/.cursor/rules/` directory, contains the instructions and conditions that tell the AI when to trigger a specific "rule" or action.

A crucial part of the `rules.mdc` file is its description. This description is what the Large Language Model (LLM) within Cursor reads to understand the purpose of your rule and decide whether it's relevant to the current user query or context. When a user's prompt aligns with the rule's description or trigger conditions, the AI activates it.

Here's what our `rules.mdc` file looks like for the MyGraph library integration:


---
description: This rule helps when the user asks to integrate a graph library into their Android app.
globs:
alwaysApply: false
---

Cursor IDE Rule: MyGraph Android Library Integration Guide
---

## Overview
The MyGraph Android Library enables rendering custom graphs and visualizations within Android applications.

## 🎯 Trigger Condition

When the prompt contains any of the following phrases:

- "render graph"
- "integrate graph library"

---

## ❓ Clarifying Question

Prompt the user:

> Please share your graph configuration ID, so that I can integrate it into your code.
> Please share your project identifier, so that I can integrate it into your code.
> Please share the name of the graph you want to display, so that I can integrate it into your code.

When the user provides these values, save them in the local.properties file. Then read the graph configuration ID, project identifier, and graph name from local.properties, expose them as BuildConfig variables, and use those variables in the Kotlin or Java code that initializes the MyGraph library. Add `import java.util.Properties` to the Gradle file if required.

---
## Android Library Installation
To integrate the MyGraph Library into your Android application, you first need to install the library through a package manager for Android. The MyGraph Android Library can be installed using Gradle.

```gradle
implementation 'com.mygraph:mygraph-android-library:<latestVersion>'
```

Get the latest version from your library's maven page at https://mvnrepository.com/artifact/com.mygraph/mygraph-android-library/

-----

## Library Initialization & Graph Rendering

Once the library is installed, initialize it and render the graph as shown below.

### Basic Setup & Rendering (Kotlin)

```kotlin
import com.mygraph.MyGraph
import com.mygraph.interfaces.MyGraphInitCallback
import com.mygraph.interfaces.MyGraphRenderCallback
import com.mygraph.models.GraphConfig
import com.mygraph.models.GraphOptions
import android.util.Log
import android.widget.LinearLayout // Example UI container

// Assuming 'context' is your Activity or Application context
// And 'graphContainer' is a LinearLayout in your layout where the graph will be rendered

val graphConfig = GraphConfig()
graphConfig.configId = GRAPH_CONFIG_ID        // Required: Configuration ID from MyGraph Dashboard
graphConfig.projectId = PROJECT_IDENTIFIER    // Required: Your MyGraph Project ID

MyGraph.initialize(graphConfig, object : MyGraphInitCallback {
    override fun onInitSuccess(myGraphClient: MyGraph, message: String) {
        // Library initialized successfully - store myGraphClient instance for rendering
        Log.d("MyGraph-App", "MyGraph initialized successfully: $message")

        // Now, render the graph
        val graphOptions = GraphOptions()
        graphOptions.graphName = GRAPH_NAME // The name of the graph to display
        graphOptions.data = mapOf("param1" to "value1", "param2" to 123) // Optional: data for the graph

        myGraphClient.renderGraph(context, graphContainer, graphOptions, object : MyGraphRenderCallback {
            override fun onRenderSuccess(message: String) {
                Log.d("MyGraph-App", "Graph rendered successfully: $message")
            }

            override fun onRenderFailed(message: String) {
                Log.e("MyGraph-App", "Graph rendering failed: $message")
                // Handle rendering failure - display error or fallback
            }
        })
    }

    override fun onInitFailed(message: String) {
        Log.e("MyGraph-App", "MyGraph initialization failed: $message")
    }
})
```

### Basic Setup & Rendering (Java)

```java
import com.mygraph.MyGraph;
import com.mygraph.interfaces.MyGraphInitCallback;
import com.mygraph.interfaces.MyGraphRenderCallback;
import com.mygraph.models.GraphConfig;
import com.mygraph.models.GraphOptions;
import android.util.Log;
import android.widget.LinearLayout; // Example UI container
import androidx.annotation.NonNull; // For annotations

// Assuming 'context' is your Activity or Application context
// And 'graphContainer' is a LinearLayout in your layout where the graph will be rendered

GraphConfig graphConfig = new GraphConfig();
graphConfig.setConfigId(GRAPH_CONFIG_ID);      // Required
graphConfig.setProjectId(PROJECT_IDENTIFIER);  // Required

MyGraph.initialize(graphConfig, new MyGraphInitCallback() {
    @Override
    public void onInitSuccess(@NonNull MyGraph myGraphClient, @NonNull String message) {
        // Library initialized successfully
        Log.d("MyGraph-App", "MyGraph initialized successfully: " + message);

        // Now, render the graph
        GraphOptions graphOptions = new GraphOptions();
        graphOptions.setGraphName(GRAPH_NAME); // The name of the graph to display
        // Optional: add data for the graph
        // Map data = new HashMap<>();
        // data.put("param1", "value1");
        // graphOptions.setData(data);

        myGraphClient.renderGraph(context, graphContainer, graphOptions, new MyGraphRenderCallback() {
            @Override
            public void onRenderSuccess(@NonNull String message) {
                Log.d("MyGraph-App", "Graph rendered successfully: " + message);
            }

            @Override
            public void onRenderFailed(@NonNull String message) {
                Log.e("MyGraph-App", "Graph rendering failed: " + message);
                // Handle rendering failure
            }
        });
    }

    @Override
    public void onInitFailed(@NonNull String message) {
        Log.e("MyGraph-App", "MyGraph initialization failed: " + message);
    }
});
```


Interactive Integration: Asking for User Input

What makes this approach truly powerful is the AI's ability to engage with the user. Our rule includes a "Clarifying Question" section. When the rule is triggered, Cursor will prompt the user with questions like:

Please share your graph configuration ID, project identifier, and the name of the graph, so that I can integrate it into your code.

Once the user provides this information, the rule saves these details (e.g., in `local.properties`) and then uses them to generate the correct initialization and rendering code for the MyGraph library directly in your Android project. It not only stores them in `local.properties` but also modifies build.gradle to define buildConfigField entries for these sensitive keys. This eliminates manual copy-pasting and variable replacement, significantly reducing setup time and potential errors.
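Here is roughly what the generated Gradle wiring can look like, shown as a Kotlin DSL sketch; the property names (GRAPH_CONFIG_ID, PROJECT_IDENTIFIER, GRAPH_NAME) are illustrative assumptions, not fixed by the rule:

```kotlin
// build.gradle.kts (module) — hypothetical sketch of the wiring the agent generates.
import java.util.Properties

// Read the values the rule saved into local.properties (names are assumptions).
val localProps = Properties().apply {
    val file = rootProject.file("local.properties")
    if (file.exists()) file.inputStream().use { load(it) }
}

android {
    buildFeatures { buildConfig = true }
    defaultConfig {
        // Exposed to app code as BuildConfig.GRAPH_CONFIG_ID, etc.
        buildConfigField("String", "GRAPH_CONFIG_ID", "\"${localProps.getProperty("GRAPH_CONFIG_ID", "")}\"")
        buildConfigField("String", "PROJECT_IDENTIFIER", "\"${localProps.getProperty("PROJECT_IDENTIFIER", "")}\"")
        buildConfigField("String", "GRAPH_NAME", "\"${localProps.getProperty("GRAPH_NAME", "")}\"")
    }
}
```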

Setting Up the Rule

To use this rule, simply place the `rules.mdc` file in the following path within your Android project's root directory: `<project-root>/.cursor/rules/graph-integration-rule.mdc`.

Note: This setup was tested and verified in the Cursor IDE with the Claude-4-Sonnet model. Performance and behavior may vary with different Cursor versions or underlying LLM configurations.

Conclusion

Leveraging Cursor's "Agent Requested" mode with custom rules transforms the way you interact with your IDE. By clearly defining trigger conditions and integrating interactive prompts, you can automate complex library integrations, save valuable time, and keep your focus on building amazing Android applications. Give it a try and streamline your development workflow!

Monday, May 19, 2025

Enhancing LLM Responses with Prompt Stuffing in Spring Boot AI

Large Language Models (LLMs) like OpenAI's GPT series are incredibly powerful, but they sometimes need a little help to provide the most accurate or context-specific answers. One common challenge is their knowledge cut-off date or their lack of access to your private, domain-specific data. This is where "prompt stuffing" (a basic form of Retrieval Augmented Generation or RAG) comes into play.

In this post, we'll explore how you can use Spring Boot with Spring AI to "stuff" relevant context into your prompts, guiding the LLM to generate more informed and precise responses. We'll use a practical example involving fetching information about a hypothetical IPL 2025 schedule.

What is Prompt Stuffing?

Prompt stuffing, in simple terms, means providing the LLM with relevant information or context directly within the prompt you send it. Instead of just asking a question, you give the LLM a chunk of text (the "stuffing" or "context") and then ask your question based on that text. This helps the LLM focus its answer on the provided information, rather than relying solely on its pre-trained knowledge.

This technique is particularly useful when:

  • You need answers based on very recent information not yet in the LLM's training data.
  • You're dealing with private or proprietary documents.
  • You want to reduce hallucinations and ensure answers are grounded in specific facts.
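Conceptually, the technique is plain string assembly before any framework gets involved. A minimal Kotlin sketch (the template text mirrors the one we use later):

fun stuffPrompt(context: String, question: String): String =
    """
    Use the following pieces of context to answer the question at the end.

    $context

    Question: $question
    """.trimIndent()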

Setting Up Our Spring Boot Project

First, let's look at the essential dependencies and configuration for our Spring Boot application.

Dependencies (build.gradle)

We'll need Spring Web, Spring AI, and the Spring AI OpenAI starter. Here's a snippet from our build.gradle:


plugins {
    // ... other plugins
    id 'org.springframework.boot' version '3.3.7'
    id 'io.spring.dependency-management' version '1.1.7'
    id 'org.jetbrains.kotlin.jvm' version '1.9.25' // Or your Kotlin version
}

// ... group, version, java toolchain

ext {
    set('springAiVersion', "1.0.0-M4") // Use the latest stable/milestone Spring AI version
}

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'org.jetbrains.kotlin:kotlin-reflect'
    implementation 'org.springframework.ai:spring-ai-openai-spring-boot-starter'
    // ... other dependencies like jackson-module-kotlin, lombok
}

dependencyManagement {
    imports {
        mavenBom "org.springframework.ai:spring-ai-bom:${springAiVersion}"
    }
}
        

Configuration (application.properties)

Next, configure your OpenAI API key and desired model in src/main/resources/application.properties. Remember to keep your API key secure and never commit it to public repositories!


spring.application.name=spring-boot-ai
server.port=8082

# Replace with your actual OpenAI API Key or use environment variables
spring.ai.openai.api-key=YOUR_OPENAI_API_KEY_PLACEHOLDER
# Or your preferred model
spring.ai.openai.chat.options.model=gpt-4o-mini
        

Using gpt-4o-mini is a good balance for cost and capability for many tasks, but you can choose other models like gpt-3.5-turbo or gpt-4 depending on your needs.

The Core: Prompt Template and Context Document

The magic of prompt stuffing lies in how we structure our prompt and the context we provide.

1. The Prompt Template (promptToStuff.st)

We use a prompt template to structure our request to the LLM. This template will have placeholders for the context we want to "stuff" and the actual user question.

src/main/resources/prompts/promptToStuff.st

Use the following pieces of context to answer the question at the end. If you don't know the answer just say "I'm sorry but I don't know the answer to that".

{context}

Question: {question}
        

Here, {context} will be replaced by the content of our document, and {question} will be the user's query.
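Spring AI's PromptTemplate performs this substitution under the hood. As a small standalone sketch of just the rendering step (the inline template text here stands in for promptToStuff.st):

import org.springframework.ai.chat.prompt.PromptTemplate

// Render the template by substituting the {context} and {question} placeholders.
val template = PromptTemplate(
    "Use the following pieces of context to answer the question at the end.\n\n{context}\n\nQuestion: {question}"
)
val rendered: String = template.render(
    mapOf("context" to "IPL 2025 was suspended on May 9...", "question" to "Why was IPL stopped in 2025?")
)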

2. The Context Document (Ipl2025.txt)

This is a simple text file containing the information we want the LLM to use. For our example, it's about IPL 2025 schedule.

src/main/resources/docs/Ipl2025.txt

IPL 2025 will resume on May 17 and end on June 3, as per the revised schedule announced by the BCCI on Monday night.

The remainder of the tournament, which was suspended on May 9 for a week due to cross-border tensions between India and Pakistan, will be played at six venues: Bengaluru, Jaipur, Delhi, Lucknow, Mumbai and Ahmedabad.
The venues for the playoffs will be announced later, but the matches will be played on the following dates: Qualifier 1 on May 29, the Eliminator on May 30, Qualifier 2 on June 1 and the final on June 3. A total of 17 matches will be played after the resumption, with two double-headers, both of which will be played on Sundays.
... (rest of the document content) ...
        

Implementing the Stuffing Logic in Spring Boot (Kotlin)

Now, let's see how to tie this all together in a Spring Boot controller using Kotlin.

src/main/kotlin/com/swapnil/spring_boot_ai/stuffPrompt/OlympicController.kt

package com.swapnil.spring_boot_ai.stuffPrompt

import org.slf4j.LoggerFactory
import org.springframework.ai.chat.client.ChatClient
import org.springframework.ai.chat.client.ChatClient.PromptUserSpec
import org.springframework.beans.factory.annotation.Value
import org.springframework.core.io.Resource
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RequestParam
import org.springframework.web.bind.annotation.RestController
import java.nio.charset.Charset

@RestController
@RequestMapping("stuff/")
class OlympicController(builder: ChatClient.Builder) {

    val log: org.slf4j.Logger? = LoggerFactory.getLogger(OlympicController::class.java)

    private val chatClient: ChatClient = builder.build()

    // Load the prompt template
    @Value("classpath:/prompts/promptToStuff.st")
    lateinit var promptToStuff: Resource

    // Load the context document
    @Value("classpath:/docs/Ipl2025.txt")
    private lateinit var stuffing: Resource

    @GetMapping("ipl2025")
    fun get(
        @RequestParam(
            value = "message",
            defaultValue = "Why IPL was stopped in 2025?"
        ) message: String,
        @RequestParam(value = "isStuffingEnabled", defaultValue = "false") isStuffingEnabled: Boolean
    ): String {

        // Read the content of our context document
        val contextDocumentContent: String = stuffing.getContentAsString(Charset.defaultCharset())
        log?.info("Context Document Loaded. Length: {}", contextDocumentContent.length)

        // Use ChatClient to build and send the prompt
        return chatClient.prompt()
            .user { userSpec: PromptUserSpec ->
                userSpec.text(promptToStuff) // Our template resource
                userSpec.param("question", message)
                // Conditionally add the context
                userSpec.param("context", if (isStuffingEnabled) contextDocumentContent else "")
            }
            .call()
            .content() ?: "Error: Could not get response from LLM!"
    }
}
        

Key Parts Explained:

  • ChatClient.Builder and ChatClient: Spring AI provides ChatClient as a fluent API to interact with LLMs. It's injected and built in the constructor.
  • @Value annotation: We use this to inject our promptToStuff.st template and Ipl2025.txt context document as Spring Resource objects.
  • Reading Context: stuffing.getContentAsString(Charset.defaultCharset()) reads the entire content of our Ipl2025.txt file.
  • Dynamic Prompting:
    • chatClient.prompt().user { ... } starts building the user message.
    • userSpec.text(promptToStuff) sets the base prompt template.
    • userSpec.param("question", message) injects the user's actual question into the {question} placeholder.
    • userSpec.param("context", if (isStuffingEnabled) contextDocumentContent else "") is the crucial part. If isStuffingEnabled is true, it injects the content of Ipl2025.txt into the {context} placeholder. Otherwise, it injects an empty string.
  • .call().content(): This sends the constructed prompt to the LLM and retrieves the response content.

Seeing it in Action!

Let's test our endpoint. You can use tools like curl, Postman, or even your browser.

Consider the question: "Why IPL was stopped in 2025?"

Scenario 1: Stuffing Disabled (isStuffingEnabled=false)

Request URL: http://localhost:8082/stuff/ipl2025?message=Why%20IPL%20was%20stopped%20in%202025%3F&isStuffingEnabled=false

Since we are not providing any context, and the LLM (e.g., GPT-4o-mini) doesn't know about IPL 2025 suspension from its general training, it will likely respond based on the instruction in our prompt template:


I'm sorry but I don't know the answer to that.
        
Expected response when prompt stuffing is disabled.

Scenario 2: Stuffing Enabled (isStuffingEnabled=true)

Request URL: http://localhost:8082/stuff/ipl2025?message=Why%20IPL%20was%20stopped%20in%202025%3F&isStuffingEnabled=true

Now, the content of Ipl2025.txt is "stuffed" into the prompt. The LLM uses this provided context to answer.

Expected Response (based on the provided Ipl2025.txt):


The remainder of the tournament, which was suspended on May 9 for a week due to cross-border tensions between India and Pakistan, will be played at six venues: Bengaluru, Jaipur, Delhi, Lucknow, Mumbai and Ahmedabad.
        

Or a more direct answer like:


IPL 2025 was suspended on May 9 for a week due to cross-border tensions between India and Pakistan.
        
Expected response when prompt stuffing is enabled, using the provided context.

Here's an example of how you might make these requests using curl (as captured in the project's request.http file):


# Request with stuffing disabled
curl -L -X GET 'http://127.0.0.1:8082/stuff/ipl2025?message=Why%20IPL%20was%20stopped%20in%202025%3F&isStuffingEnabled=false'

# Request with stuffing enabled
curl -L -X GET 'http://127.0.0.1:8082/stuff/ipl2025?message=Why%20IPL%20was%20stopped%20in%202025%3F&isStuffingEnabled=true'
        

Making requests to the API endpoint using curl.

Benefits of This Approach

  • Improved Accuracy: LLMs can answer questions based on specific, up-to-date, or private information you provide.
  • Reduced Hallucinations: By grounding the LLM in provided text, you lessen the chance of it inventing facts.
  • Contextual Control: You decide what information the LLM should consider for a particular query.
  • Simplicity: Spring AI makes it relatively straightforward to implement this pattern.

Conclusion

Prompt stuffing is a powerful yet simple technique to significantly enhance the quality and relevance of LLM responses. By leveraging Spring Boot and Spring AI, you can easily integrate this capability into your Java or Kotlin applications, allowing you to build more intelligent and context-aware AI-powered features.

This example focused on a single document, but you can extend this concept to more sophisticated RAG pipelines where relevant document chunks are dynamically retrieved from a vector database based on the user's query before being "stuffed" into the prompt. Spring AI also offers support for these more advanced scenarios.

Happy coding, and I hope this helps you build amazing AI applications!

Friday, February 14, 2025

Building a Retrieval-Augmented Generation (RAG) Application with Ollama 3.2 and Spring Boot

This blog post demonstrates how to build a Retrieval-Augmented Generation (RAG) application using Ollama 3.2 for large language models (LLMs) and Spring Boot for creating REST APIs. RAG combines information retrieval with LLMs to provide more accurate and contextually relevant answers. We'll leverage Docker Desktop for containerization and pgvector for vector storage.

Project Setup

We'll use Spring Boot version 3.3.7 for this project. Here's a breakdown of the key components and configurations:

1. Dependencies (Gradle):

dependencies {
    implementation 'org.springframework.boot:spring-boot-starter-jdbc'
    implementation 'org.springframework.boot:spring-boot-starter-web'
    implementation 'com.fasterxml.jackson.module:jackson-module-kotlin'
    implementation 'org.springframework.ai:spring-ai-ollama-spring-boot-starter'
    implementation 'org.springframework.ai:spring-ai-pgvector-store-spring-boot-starter'
}

This includes the necessary Spring Boot starters, Jackson for Kotlin support, and the Spring AI libraries for Ollama and pgvector integration.

2. application.properties:

spring.application.name=spring-boot-ai
server.port=8082

spring.ai.ollama.embedding.model=mxbai-embed-large
spring.ai.ollama.chat.model=llama3.2

spring.datasource.url=jdbc:postgresql://localhost:5432/sbdocs
spring.datasource.username=admin
spring.datasource.password=password

spring.ai.vectorstore.pgvector.initialize-schema=true
spring.ai.vectorstore.pgvector.index-type=HNSW
spring.ai.vectorstore.pgvector.distance-type=COSINE_DISTANCE
spring.ai.vectorstore.pgvector.dimensions=1024

spring.docker.compose.lifecycle-management=start_only

This configuration sets the application name, port, Ollama model names, database connection details, and pgvector settings. Critically, spring.docker.compose.lifecycle-management=start_only tells Spring Boot to start the Docker Compose services on application startup without stopping them on shutdown.

3. RagConfiguration.kt:


import org.springframework.ai.embedding.EmbeddingModel
import org.springframework.ai.reader.TextReader
import org.springframework.ai.transformer.splitter.TokenTextSplitter
import org.springframework.ai.vectorstore.SimpleVectorStore
import org.springframework.beans.factory.annotation.Value
import org.springframework.context.annotation.Bean
import org.springframework.context.annotation.Configuration
import org.springframework.core.io.Resource
import java.io.File
import kotlin.io.path.Path

@Configuration
open class RagConfiguration {

    @Value("myDataVector.json")
    lateinit var myDataVectorName: String

    @Value("classpath:/docs/myData.txt")
    lateinit var originalArticle: Resource

    @Bean
    open fun getVector(embeddingModel: EmbeddingModel): SimpleVectorStore {
        val simpleVectorStore = SimpleVectorStore(embeddingModel)
        val vectorStoreFile = getVectorStoreFile()
        if (vectorStoreFile.exists()) {
            simpleVectorStore.load(vectorStoreFile)
        } else {
            val textReader = TextReader(originalArticle)
            textReader.customMetadata["filename"] = "myData.txt"
            val documents = textReader.get()
            val splitDocs = TokenTextSplitter()
                .split(documents)
            simpleVectorStore.add(splitDocs)
            simpleVectorStore.save(vectorStoreFile)
        }
        return simpleVectorStore
    }

    private fun getVectorStoreFile(): File {
        val path = Path("src", "main", "resources", "docs", myDataVectorName)
        return path.toFile()
    }
}
    
This configuration class creates a SimpleVectorStore bean. It loads existing vector data from the myDataVector.json file if present, or generates it by reading the myData.txt file, splitting it into chunks, embedding them with the specified embedding model, and saving the result back to the JSON file.

4. RagController.kt:


import org.springframework.ai.chat.client.ChatClient
import org.springframework.ai.chat.client.advisor.QuestionAnswerAdvisor
import org.springframework.ai.vectorstore.SearchRequest
import org.springframework.ai.vectorstore.SimpleVectorStore
import org.springframework.beans.factory.annotation.Value
import org.springframework.core.io.Resource
import org.springframework.web.bind.annotation.GetMapping
import org.springframework.web.bind.annotation.RequestMapping
import org.springframework.web.bind.annotation.RequestParam
import org.springframework.web.bind.annotation.RestController

@RestController
@RequestMapping("/rag")
class RagController(val chatClient: ChatClient, val vectorStore: SimpleVectorStore) {

    @Value("classpath:/prompts/ragPrompt.st")
    lateinit var ragPrompt: Resource

    @GetMapping("question")
    fun getAnswer(@RequestParam(name = "question", defaultValue = "What is the latest news about Olympics?") question: String): String? {

        return chatClient.prompt()
            .advisors(QuestionAnswerAdvisor(vectorStore, SearchRequest.defaults()))
            .user(question)
            .call()
            .content()
    }
}    
    
This controller defines a /rag/question endpoint that takes a question as a parameter. It uses the ChatClient and QuestionAnswerAdvisor to query the Ollama model, retrieving relevant context from the vectorStore and generating an answer.

Running the Application with Docker

1. Start pgvector Docker Container:

docker run --name pgvector-container -e POSTGRES_USER=admin -e POSTGRES_PASSWORD=password -e POSTGRES_DB=sbdocs -d -p 5432:5432 pgvector/pgvector:0.8.0-pg1

2. Pull Ollama Models:

Open a terminal in Docker Desktop, exec into the springboot-ai-ollama-1 container, and run:

ollama pull llama3.2
ollama pull mxbai-embed-large

3. Run the Spring Boot Application:

Start your Spring Boot application. Because of the spring.docker.compose.lifecycle-management property, Spring Boot will manage the Docker Compose file.

4. Access the API:

You can now access the RAG API at http://localhost:8082/rag/question?question=Your question here.

This setup provides a robust and scalable way to use Ollama 3.2 for RAG applications. The use of Docker and Spring Boot simplifies deployment and management. Remember to replace placeholder values like database credentials and file paths with your actual values. This example provides a foundation that you can extend to build more complex RAG applications.

Tuesday, December 31, 2024

Securing Microservices with JWT Authentication and Data Encryption

In modern microservices architectures, securing communication and data integrity are paramount. This article explores how JWT (JSON Web Token) authentication and data encryption can bolster security, ensuring that data exchanges between services remain confidential and trusted.

What is JWT Authentication?

JWT is a compact, URL-safe token format that securely transmits information between parties as a JSON object. It is widely used in microservices for its simplicity and efficiency.

Parts of a JWT Token

A JSON Web Token (JWT) consists of three parts, separated by periods (.):

  • Header: Specifies the token type (JWT) and signing algorithm (e.g., HS256 or RS256).
  • Example: { "alg": "HS256", "typ": "JWT" }
  • Payload: Contains claims about the user or the token itself. Claims can be:
    • Registered claims: Predefined fields like iss (issuer), sub (subject), exp (expiration time), etc.
    • Public claims: Custom claims, such as user roles or permissions.
    • Private claims: Claims specific to the application, like user IDs.
    Example: { "sub": "1234567890", "name": "John Doe", "admin": true, "iat": 1516239022 }
  • Signature: Ensures the token's integrity and authenticity. It is generated by signing the encoded header and payload with a secret or private key.
    Example for HMAC-SHA256:
    HMACSHA256( base64UrlEncode(header) + "." + base64UrlEncode(payload), secret )
A full JWT might look like this: eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJzdWIiOiIxMjM0NTY3ODkwIiwibmFtZSI6IkpvaG4gRG9lIiwiaWF0IjoxNTE2MjM5MDIyfQ.SflKxwRJSMeKKF2QT4fwpMeJf36POk6yJV_adQssw5c
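To make these parts concrete, here is a minimal Kotlin sketch using the open-source jjwt library (an assumption here; any JWT library exposes similar operations) to create and verify an HS256 token:

import io.jsonwebtoken.Jwts
import io.jsonwebtoken.SignatureAlgorithm
import io.jsonwebtoken.security.Keys
import java.util.Date

// The shared secret must be at least 256 bits for HS256.
val key = Keys.hmacShaKeyFor("a-very-long-shared-secret-of-32+-bytes!!".toByteArray())

// Header, payload (claims), and signature are assembled in one call chain.
val token = Jwts.builder()
    .setSubject("1234567890")
    .claim("name", "John Doe")
    .claim("admin", true)
    .setIssuedAt(Date())
    .setExpiration(Date(System.currentTimeMillis() + 3_600_000)) // 1 hour
    .signWith(key, SignatureAlgorithm.HS256)
    .compact()

// Verification throws an exception if the signature or expiry is invalid.
val claims = Jwts.parserBuilder().setSigningKey(key).build().parseClaimsJws(token).body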

Shared Key vs. Public Key JWT in Microservices

Shared Key-Based JWT:

  1. How It Works:
    • A single secret key is used for both signing and verifying the token.
    • This secret must be shared between the microservices.
  2. Advantages:
    • Simple setup.
    • Suitable for small-scale systems with fewer services.
  3. Disadvantages:
    • Security Risk: If the key is compromised, all services relying on it are at risk.
    • Key Distribution: Sharing the key securely across multiple services can be challenging.

Public Key-Based JWT in Microservice

  1. How It Works:
    • The authentication server uses a private key to sign the JWT.
    • Microservices use a public key to verify the token's signature.
  2. Advantages:
    • Better Security: The private key remains on the authentication server, and only the public key is distributed.
    • Scalability: New services can independently verify tokens without needing access to the private key.
    • No Shared Secrets: Eliminates the need to distribute a secret key.
  3. Disadvantages:
    • Slightly more complex setup due to key management.
    • Requires a system to distribute the public key, like a JWKS (JSON Web Key Set) endpoint.
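A hedged Kotlin sketch of the public-key flow with the same jjwt library (the key pair is generated in-process purely for illustration; in production the private key lives only on the authentication server and services fetch the public key, e.g. from a JWKS endpoint):

import io.jsonwebtoken.Jwts
import io.jsonwebtoken.SignatureAlgorithm
import io.jsonwebtoken.security.Keys

val keyPair = Keys.keyPairFor(SignatureAlgorithm.RS256) // auth server keeps keyPair.private

// The auth server signs with the private key.
val token = Jwts.builder()
    .setSubject("user-42")
    .signWith(keyPair.private, SignatureAlgorithm.RS256)
    .compact()

// Any microservice verifies using only the public key.
val subject = Jwts.parserBuilder()
    .setSigningKey(keyPair.public)
    .build()
    .parseClaimsJws(token)
    .body
    .subject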

Data Encryption in Microservices

Encryption ensures sensitive data remains confidential and secure during transmission and storage.

Types of Encryption

  • Symmetric Encryption: Uses the same key for encryption and decryption.
  • Asymmetric Encryption: Utilizes a public key for encryption and a private key for decryption.

Encryption in Microservices Communication

  • Transport-Level Encryption: Secures data in transit using TLS (HTTPS).
  • Message-Level Encryption: Encrypts specific message payloads for added confidentiality (a minimal sketch follows).
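As a concrete example of the symmetric, message-level case, here is a minimal Kotlin sketch using the JDK's built-in javax.crypto APIs:

import java.security.SecureRandom
import javax.crypto.Cipher
import javax.crypto.KeyGenerator
import javax.crypto.spec.GCMParameterSpec

// Generate a 256-bit AES key and a fresh 96-bit IV per message.
val key = KeyGenerator.getInstance("AES").apply { init(256) }.generateKey()
val iv = ByteArray(12).also { SecureRandom().nextBytes(it) }

// Encrypt; GCM also produces a 128-bit authentication tag.
val cipher = Cipher.getInstance("AES/GCM/NoPadding")
cipher.init(Cipher.ENCRYPT_MODE, key, GCMParameterSpec(128, iv))
val ciphertext = cipher.doFinal("sensitive payload".toByteArray())

// Decrypt with the same key and IV; tampering throws AEADBadTagException.
cipher.init(Cipher.DECRYPT_MODE, key, GCMParameterSpec(128, iv))
val plaintext = String(cipher.doFinal(ciphertext))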

Combining JWT and Encryption

  • Token Encryption: Adds a layer of security to JWTs by making intercepted tokens unreadable.
  • Public Key Infrastructure: Manages keys securely for token validation and encrypted communication.

Best Practices

  • Set reasonable expiration times for tokens and use refresh tokens for longer sessions.
  • Rotate encryption keys periodically to minimize security risks.
  • Audit and log token usage to detect anomalies.

Conclusion

JWT authentication and encryption are foundational to building secure microservices. By combining these technologies, you can ensure robust authentication, data confidentiality, and integrity across your system. Follow best practices to simplify implementation and focus on delivering high-quality services.

Tuesday, November 15, 2022

Android aar deployment in Maven - 2022

Introduction

If you are working on an Android library project, you might be wondering how to publish it on Maven. Earlier this was done using the Gradle maven plugin, but with Gradle 7.0+ it no longer works. Now we have to use maven-publish. This post gives you more insight into the procedure.

Generally, there are two types of repositories: local and remote.

A local repository is the repository Maven creates on the computer it is building on. It is usually located under the $HOME/.m2/repository directory.

A remote repository is located on a Maven server. When users want to use our library, they declare the groupId, artifactId, and version of the library in their build.
We will create and deploy a new Android aar artifact on Maven.
The process can be summarized as:
1. Create an account and repository on Nexus Sonatype.
2. Configure Gradle to create, sign, and upload the aar file to Sonatype.
3. Let Sonatype verify the artifacts as per Maven requirements (the 'Close' operation).
4. Release the artifacts to Maven.

Let's go through the steps one by one.

1. Create an account on Sonatype at https://issues.sonatype.org/secure/Dashboard.jspa. Register a new project by creating a new Jira ticket; this creates a new repository in Sonatype.
Create → Create Issue → Community Support - Open Source Project Repository Hosting → New Project → with groupId io.bitbucket.swapnilcpublic e.g. OSSRH-85813

2. You will be asked to prove that you own the domain mentioned in the Jira ticket (e.g. https://bitbucket.org/swapnilcpublic), by placing a file or creating a git repo under that domain. Since I do not own a domain name, I created an empty Bitbucket repository named ossrh-85813 under my Bitbucket account; here ossrh-85813 is the Jira ticket id. For more details follow how-to-set-txt-record and personal groupId. If required, a static web site can be created using Bitbucket.

3. Signing: One of the requirements for publishing your artifacts to the Central Repository is that they have been signed with PGP. Here is how to set up signing with gpg.
Create a new key (gpg --full-generate-key) with details like

Name: SwapnilGpg
Email: email@id.com
Pass: password

After creation, see the created keys with gpg --list-keys.
Export the secret keys using gpg --export-secret-keys -o secring.gpg.
We need the short key; it will be referenced from the Gradle script and appears after `rsa4096/` in the output. Find the short KeyId using gpg --list-keys --keyid-format short.
Once the GPG keys are generated, publish these keys to an open key server, for example with gpg --keyserver keyserver.ubuntu.com --send-keys YYYYYYYY, where YYYYYYYY is the short key from the previous step (E72FECF1 in my case).

Verify these keys using gpg --keyserver keyserver.ubuntu.com --recv-keys YYYYYYYY.

4. Update ProjectRoot/swapnilCalculator/build.gradle with the maven-publish and signing configuration, along the lines of the sketch below.
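A hedged reconstruction in Gradle Kotlin DSL; the coordinates, version, and staging URL are illustrative and may differ for your Sonatype account:

// Hypothetical sketch of the publishing configuration; treat names as illustrative.
plugins {
    id("com.android.library")
    `maven-publish`
    signing
}

publishing {
    publications {
        register<MavenPublication>("release") {
            groupId = "io.bitbucket.swapnilcpublic"
            artifactId = "swapnilCalculator"
            version = "1.0.0"
            // Publish the release variant produced by the Android library plugin.
            afterEvaluate { from(components["release"]) }
        }
    }
    repositories {
        maven {
            // OSSRH staging URL; newer accounts use the s01 host.
            url = uri("https://s01.oss.sonatype.org/service/local/staging/deploy/maven2/")
            credentials {
                username = findProperty("ossrhUsername") as String?
                password = findProperty("ossrhPassword") as String?
            }
        }
    }
}

signing {
    sign(publishing.publications["release"])
}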

5. Add the signing and Sonatype credentials to ProjectRoot/gradle.properties (or ProjectRoot/swapnilCalculator/gradle.properties): typically signing.keyId, signing.password, signing.secretKeyRingFile, ossrhUsername, and ossrhPassword.

6. Upload the aar, jar, and signatures using
./gradlew clean publishReleasePublicationToMavenRepository
After a successful deployment to OSSRH, your components are stored in a separate, temporary staging repository that is private to your project's members. In order to get these components published you will have to 'Close' and 'Release' them. The 'Close' operation checks whether all artifacts are as specified by Maven and takes a few minutes to finish. Once that is successful, proceed with the 'Release' operation. If there are errors, resolve them and re-upload the library.
After uploading and releasing all artifacts, it takes 4-10 hours for Maven to show the library.

7. Find the published library using:

  1. Sonatype staging repository
  2. Maven repository
  3. Sonatype Nexus

8. References

  1. https://gist.github.com/lopspower/6f62fe1492726d848d6d
  2. https://central.sonatype.org/publish/
  3. https://central.sonatype.org/publish/requirements/coordinates/
  4. https://central.sonatype.org/publish/publish-guide/
  5. https://shahsurajk.medium.com/technical-publishing-aars-to-maven-central-7e9c603f9ea1
  6. https://www.baeldung.com/maven-snapshot-release-repository
  7. https://docs.gradle.org/current/userguide/signing_plugin.html#sec:signatory_credentials
  8. https://docs.gradle.org/current/userguide/publishing_maven.html

Friday, February 5, 2021

Flutter: Making dashed line matching width of screen

How can we make a dashed line in Flutter?

Flutter does not have built-in support for this yet. We need to make a custom painter for showing a dashed line. The code is given below.

import 'package:flutter/material.dart';

class LinePainter extends CustomPainter {
  @override
  void paint(Canvas canvas, Size size) {
    var max = size.width;
    debugPrint("LinePainter max=$max");
    var dashWidth = 5.0;
    var dashSpace = 5.0;
    double startX = 0;
    final paint = Paint()..color = Colors.grey;
    // Draw short horizontal segments separated by gaps until the width is covered.
    while (max >= 0) {
      canvas.drawLine(Offset(startX, 0), Offset(startX + dashWidth, 0),
          paint..strokeWidth = 1);
      final space = (dashSpace + dashWidth);
      startX += space;
      max -= space.toInt();
    }
  }

  @override
  bool shouldRepaint(CustomPainter oldDelegate) {
    return false;
  }
}
The code above will paint a dashed line of width = size.width.

To use this painter as a widget, wrap it in a CustomPaint:

CustomPaint(painter: LinePainter(), size: Size(400, 1))

It will draw a dashed line 400px wide. But we want the line to cover the entire width of the screen. For that we apply a constraint via a Container:

Container(
  constraints: BoxConstraints.tightForFinite(height: 1.0),
  child: CustomPaint(painter: LinePainter()),
)

This draws a dashed line spanning the full screen width.

No need to use any external library for showing a dashed line.


Wednesday, September 23, 2020

How to store encryption keys safely in Android 19+?

We have to encrypt our data when saving it or sending it over the Internet, and we use shared-key (symmetric) algorithms for encryption because they are fast. But what do we do with the keys: how do we generate them and keep them safe?

Developers usually use some of the approaches mentioned below:

1. Generate shared key in the app with shared logic in app and server.

This can be used for encrypting/decrypting data locally and for sending data over the Internet. The problem with this approach is that the app can be decompiled and the logic reconstructed. Once the logic is reconstructed, a hacker can generate the keys as and when required.

2. Get key from Server and use it.

Security of the key depends on how it is transported to the mobile app. If someone can grab it, it is compromised and can be used to decrypt the data, and even to modify it.

The recommended approach is to send the key over an HTTPS connection. Ideally a new key should be used with each request, as this gives a hacker very little time to identify the key and decrypt the contents.

3. Generate key using a random number generator and save it safely.

This can be used for encrypting/decrypting data locally, but where do we save the keys? The following sections describe how to use the Android Keystore to store keys safely. This code is tested on devices running OS version 4.4 (API 19) onwards.

Using Android Keystore to save AES key

The Android Keystore system lets you store cryptographic keys in a container to make it more difficult to extract from the device. Once keys are in the keystore, they can be used for cryptographic operations with the key material remaining non-exportable. Moreover, it offers facilities to restrict when and how keys can be used, such as requiring user authentication for key use or restricting keys to be used only in certain cryptographic modes.[1]

Use the Android Keystore provider to let an individual app store its own credentials that only the app itself can access. This provides a way for apps to manage credentials that are usable only by itself while providing the same security benefits that the KeyChain API provides for system-wide credentials. This method requires no user interaction to select the credentials.[1]

For using Keystore we will generate RSA key pair and then use it to encrypt AES key.

import android.content.Context
import android.security.KeyPairGeneratorSpec
import android.util.Base64
import java.math.BigInteger
import java.security.GeneralSecurityException
import java.security.KeyPairGenerator
import java.security.KeyStore
import java.security.PrivateKey
import java.security.PublicKey
import java.security.SecureRandom
import java.util.Calendar
import javax.crypto.Cipher
import javax.security.auth.x500.X500Principal

private val ALIAS = "Alias"
private val RSA_ALGO = "RSA"
private val KEY_STORE_NAME = "AndroidKeyStore"
private val AES_TRANSFORMATION = "AES/GCM/NoPadding"
private val RSA_TRANSFORMATION = "RSA/ECB/PKCS1Padding"

// Generate an RSA key pair inside the Android Keystore.
@Throws(GeneralSecurityException::class)
fun generateRSAKeyPair(context: Context) {
    val start = Calendar.getInstance()
    val end = Calendar.getInstance()
    end.add(Calendar.YEAR, 30)
    val spec = KeyPairGeneratorSpec.Builder(context)
        .setAlias(ALIAS)
        .setSubject(X500Principal("CN=$ALIAS"))
        .setSerialNumber(BigInteger.TEN)
        .setStartDate(start.time)
        .setEndDate(end.time)
        .build()

    val gen = KeyPairGenerator.getInstance(RSA_ALGO, KEY_STORE_NAME)
    gen.initialize(spec)
    gen.generateKeyPair()
}

// We can grab the public key using
@Throws(Exception::class)
fun getRsaPublicKey(): PublicKey {
    val keyStore = KeyStore.getInstance(KEY_STORE_NAME)
    keyStore.load(null)
    return keyStore.getCertificate(ALIAS).publicKey
}

// Private key can be retrieved using
@Throws(Exception::class)
fun getRsaPrivateKey(): PrivateKey? {
    val keyStore = KeyStore.getInstance(KEY_STORE_NAME)
    keyStore.load(null)
    return (keyStore.getEntry(ALIAS, null) as? KeyStore.PrivateKeyEntry)?.privateKey
}

// Generate a random 256-bit AES key.
fun generateAesKey(): ByteArray {
    val secureRandom = SecureRandom()
    val key = ByteArray(32)
    secureRandom.nextBytes(key)
    return key
}

// Generate a random 96-bit IV for AES-GCM.
fun getIv(): ByteArray {
    val secureRandom = SecureRandom()
    val iv = ByteArray(12)
    secureRandom.nextBytes(iv)
    return iv
}

// Encrypt the AES key using the RSA public key.
fun encryptUsingRsa(plain: ByteArray, publicKey1: PublicKey): String {
    val cipher = Cipher.getInstance(RSA_TRANSFORMATION)
    cipher.init(Cipher.ENCRYPT_MODE, publicKey1)
    val encryptedBytes = cipher.doFinal(plain)
    return bytesToString(encryptedBytes)
}

// Decrypt the AES key using the RSA private key.
fun decryptUsingRsa(result: String, privateKey1: PrivateKey?): ByteArray {
    if (privateKey1 == null) return ByteArray(0)
    val cipher1 = Cipher.getInstance(RSA_TRANSFORMATION)
    cipher1.init(Cipher.DECRYPT_MODE, privateKey1)
    return cipher1.doFinal(stringToBytes(result))
}

fun bytesToString(b: ByteArray): String {
    return Base64.encodeToString(b, Base64.NO_WRAP)
}

fun stringToBytes(s: String): ByteArray {
    return Base64.decode(s, Base64.NO_WRAP)
}
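Putting it together, a minimal usage sketch (using the functions defined above; `context` is any Activity or Application Context):

// Generate an AES key, wrap it with the Keystore-backed RSA key for safe
// storage, and unwrap it again when needed.
generateRSAKeyPair(context)
val aesKey = generateAesKey()
val wrappedKey = encryptUsingRsa(aesKey, getRsaPublicKey())        // safe to persist, e.g. in SharedPreferences
val unwrappedKey = decryptUsingRsa(wrappedKey, getRsaPrivateKey()) // original AES key bytes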

References:
[1] https://developer.android.com/training/articles/keystore
