Tags: Architecture, Spring Boot, Microservices, Performance

How I Handled 45M Transactions/Day Using Spring Boot

January 15, 2025
3 min read

When I joined BKASH to work on their AML360 system, I faced a challenging requirement: process 45 million transactions per day with real-time fraud detection capabilities. Here's how we architected and optimized the system.

The Challenge

BKASH processes millions of mobile financial transactions daily. The Anti-Money Laundering (AML) system needed to:

  • Process 45M+ transactions per day
  • Detect fraud patterns in real-time
  • Maintain 99.9% uptime
  • Scale horizontally during peak hours
  • Generate compliance reports efficiently

Architecture Overview

We built a microservices architecture using Spring Boot with the following components:

1. Event-Driven Architecture

import java.util.concurrent.CompletableFuture;

import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

@Service
public class TransactionProcessor {

    private final FraudDetectionService fraudDetectionService;
    private final AlertService alertService;

    public TransactionProcessor(FraudDetectionService fraudDetectionService,
                                AlertService alertService) {
        this.fraudDetectionService = fraudDetectionService;
        this.alertService = alertService;
    }

    @KafkaListener(topics = "transactions")
    public void processTransaction(TransactionEvent event) {
        // Analyze asynchronously so the listener thread returns to Kafka quickly
        CompletableFuture.supplyAsync(() -> fraudDetectionService.analyze(event))
            .thenAccept(result -> {
                if (result.isSuspicious()) {
                    alertService.notify(result);
                }
            });
    }
}

We used Apache Kafka to decouple services and handle massive throughput:

  • 50 partitions for parallel processing
  • Consumer groups for scalability
  • At-least-once delivery semantics
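A listener container configured along these lines ties those settings together. This is a minimal sketch assuming Spring for Apache Kafka; the broker address, group id, deserializers, and concurrency value are illustrative, not taken from the original system:

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class KafkaConsumerConfig {

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "fraud-detection");
        // At-least-once: disable auto-commit so offsets are committed
        // only after the listener has processed the records
        props.put(ConsumerConfig.ENABLE_AUTO_COMMIT_CONFIG, false);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(new DefaultKafkaConsumerFactory<>(props));
        // Consumer threads per instance; Kafka assigns partitions across
        // all instances in the group, up to the partition count (50)
        factory.setConcurrency(10);
        return factory;
    }
}
```

Scaling out is then a matter of adding instances to the same consumer group: Kafka rebalances the 50 partitions across them automatically.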

2. Database Optimization

Partitioning Strategy:

  • Partitioned PostgreSQL tables by date
  • Hot data (last 30 days) on SSD storage
  • Cold data archived to S3
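With PostgreSQL declarative range partitioning, new partitions must exist before rows arrive. One way to handle that is a scheduled maintenance job; this sketch assumes a `transactions` table partitioned by a date column and a monthly partition granularity, both of which are illustrative assumptions rather than details from the original system:

```java
import java.time.LocalDate;
import java.time.format.DateTimeFormatter;

import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

@Component
public class PartitionMaintenanceJob {

    private final JdbcTemplate jdbcTemplate;

    public PartitionMaintenanceJob(JdbcTemplate jdbcTemplate) {
        this.jdbcTemplate = jdbcTemplate;
    }

    // Create next month's partition ahead of time (runs at midnight on the 1st)
    @Scheduled(cron = "0 0 0 1 * *")
    public void createNextPartition() {
        LocalDate start = LocalDate.now().plusMonths(1).withDayOfMonth(1);
        LocalDate end = start.plusMonths(1);
        String name = "transactions_" + start.format(DateTimeFormatter.ofPattern("yyyy_MM"));
        jdbcTemplate.execute(String.format(
            "CREATE TABLE IF NOT EXISTS %s PARTITION OF transactions " +
            "FOR VALUES FROM ('%s') TO ('%s')", name, start, end));
    }
}
```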

Caching Layer:

  • Redis for frequently accessed data
  • Cache hit ratio: 85%+
  • TTL-based expiration

@Cacheable(value = "customerProfiles", key = "#customerId")
public CustomerProfile getCustomerProfile(String customerId) {
    // Spring Data's findById returns Optional<CustomerProfile>
    return customerRepository.findById(customerId).orElseThrow();
}
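The TTL-based expiration mentioned above can be set on the cache manager itself. A sketch using Spring Data Redis; the 30-minute TTL is illustrative, not the production value:

```java
import java.time.Duration;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.redis.cache.RedisCacheConfiguration;
import org.springframework.data.redis.cache.RedisCacheManager;
import org.springframework.data.redis.connection.RedisConnectionFactory;

@Configuration
public class CacheConfig {

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory) {
        // Every cache entry expires 30 minutes after it is written
        RedisCacheConfiguration defaults = RedisCacheConfiguration.defaultCacheConfig()
                .entryTtl(Duration.ofMinutes(30));
        return RedisCacheManager.builder(connectionFactory)
                .cacheDefaults(defaults)
                .build();
    }
}
```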

3. Horizontal Scaling

We deployed on Kubernetes with:

  • Auto-scaling based on CPU and memory
  • Pod disruption budgets for zero-downtime deployments
  • Resource limits: 2GB memory, 1 CPU per pod

4. Performance Optimizations

Batch Processing:

@Scheduled(fixedDelay = 1000)
public void processBatch() {
    // Drain up to 1,000 queued transactions into a single batch
    List<Transaction> batch = new ArrayList<>();
    queue.drainTo(batch, 1000);
    if (!batch.isEmpty()) {
        jdbcTemplate.batchUpdate(
            "INSERT INTO transactions ...",
            batch,
            100, // JDBC batch size
            (ps, txn) -> {
                ps.setString(1, txn.getId());
                // ... set remaining parameters
            }
        );
    }
}

Connection Pooling:

  • HikariCP with 50 connections per instance
  • Connection timeout: 30s
  • Idle timeout: 10 minutes
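Those pool settings map directly onto HikariCP's configuration API. A minimal sketch; the JDBC URL is a placeholder, and in a real Spring Boot app the same values are usually set via `spring.datasource.hikari.*` properties instead:

```java
import javax.sql.DataSource;

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class DataSourceConfig {

    @Bean
    public DataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/aml"); // placeholder URL
        config.setMaximumPoolSize(50);       // 50 connections per instance
        config.setConnectionTimeout(30_000); // 30s to acquire a connection
        config.setIdleTimeout(600_000);      // retire connections idle for 10 minutes
        return new HikariDataSource(config);
    }
}
```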

Results

After optimization:

  • Throughput: 520 transactions/second per pod
  • Latency: p95 < 200ms
  • Cost Reduction: 40% by optimizing resource usage
  • Zero Downtime: During all deployments

Key Takeaways

  1. Event-driven architecture is crucial for handling high throughput
  2. Database partitioning significantly improves query performance
  3. Caching reduces database load by 60%+
  4. Batch processing is more efficient than individual inserts
  5. Monitoring is essential - we used Prometheus + Grafana

Tools & Technologies

  • Spring Boot 2.7
  • Apache Kafka
  • PostgreSQL with TimescaleDB
  • Redis
  • Kubernetes
  • Docker
  • Prometheus & Grafana

Have questions about building high-performance systems? Feel free to reach out on LinkedIn or check out my other articles on microservices architecture.

Shahariar Hossen

Senior Full Stack Engineer with 6+ years of experience in building scalable systems. Specialist in Spring Boot, Microservices, and AI/ML.
