I solve complex problems with simple architectures. From real-time GPS tracking serving 100+ concurrent users to AI research platforms processing thousands of papers—I ship products that perform under pressure.
I prioritize working solutions over perfect architecture. Premature optimization kills momentum. I validate with users first, then optimize based on actual bottlenecks—not theoretical ones.
Every performance claim I make is backed by metrics. I instrument systems early, profile relentlessly, and only optimize what the data proves matters. Intuition guides—data decides.
The best code I've written is boring. I choose Postgres over microservices when monoliths work. I use proven patterns over novel architectures. Complexity is a liability—simplicity compounds.
From database design to real-time frontend state—I build complete systems, not isolated features.
Traditional polling-based GPS tracking consumed excessive bandwidth and battery. The system needed to support 100+ concurrent drivers with sub-second location updates while keeping resource usage low.
Architected a real-time streaming solution with three key components:
This translated to smoother real-time tracking for drivers, lower data costs for vendors, and zero unauthorized access incidents in production.
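The bandwidth and battery savings above come from publishing on movement rather than polling on a fixed timer. A minimal sketch of that filtering idea, with an in-memory callback standing in for the real pub/sub layer (thresholds and names are illustrative, not the deployed values):

```python
import math
import time

MIN_MOVE_METERS = 25    # skip updates when the driver has barely moved
MAX_SILENCE_SECS = 30   # heartbeat so subscribers know the driver is alive

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes, in meters."""
    r = 6_371_000
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

class LocationPublisher:
    """Publishes a GPS fix only when it differs meaningfully from the last one."""

    def __init__(self, publish):
        self.publish = publish   # e.g. a Redis publish call in production
        self.last_fix = None     # (lat, lon, timestamp)

    def on_gps_fix(self, lat, lon, now=None):
        now = time.time() if now is None else now
        if self.last_fix is not None:
            llat, llon, lts = self.last_fix
            moved = haversine_m(llat, llon, lat, lon)
            if moved < MIN_MOVE_METERS and now - lts < MAX_SILENCE_SECS:
                return False     # drop the update: saves bandwidth and battery
        self.last_fix = (lat, lon, now)
        self.publish({"lat": lat, "lon": lon, "ts": now})
        return True
```

With a 1 Hz GPS feed and a mostly stationary driver, this drops the vast majority of messages, while the heartbeat window keeps stale-driver detection working.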
Migrate location streams to Kafka for horizontal scaling beyond 1000 concurrent users.
Why: Current pub/sub architecture works well but lacks geographic sharding capabilities needed for multi-region deployments.
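Geographic sharding can be as simple as deriving the Kafka partition from a coarse location cell, so one consumer can own a region without cross-partition merges. A hypothetical sketch (the cell size and hashing scheme are assumptions for illustration, not the planned design):

```python
import hashlib

def geo_cell(lat, lon, cell_deg=1.0):
    """Coarse grid cell id, e.g. '12:77' for Bengaluru at 1-degree resolution."""
    return f"{int(lat // cell_deg)}:{int(lon // cell_deg)}"

def kafka_partition(lat, lon, num_partitions, cell_deg=1.0):
    """Stable partition for a location: nearby drivers hash to the same
    partition, so a single consumer can own all traffic for a region."""
    cell = geo_cell(lat, lon, cell_deg)
    digest = hashlib.sha256(cell.encode()).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions
```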
Team needed WebRTC video calls, encrypted chat, and shared state (Kanban boards) without latency spikes. A monolith wouldn't isolate failures: one failing component could take the whole platform down with it.
Separated concerns into dedicated microservices:
Used JWT for stateless auth across all services. Redis pub/sub for presence reduced PostgreSQL write load by 60%.
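Stateless verification is what lets each service authenticate requests without a session-store round-trip: any service holding the signing secret can check a token locally. A stdlib-only sketch of HS256-style signing and verification (illustrative only; a production service would use a vetted library such as PyJWT):

```python
import base64, hashlib, hmac, json, time

SECRET = b"demo-secret"  # illustrative; in production this comes from config

def _b64(data):
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _unb64(s):
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign(claims):
    """Builds header.payload.signature, each segment base64url-encoded."""
    header = _b64(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64(hmac.new(SECRET, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify(token, now=None):
    """Returns the claims if the signature is valid and the token unexpired."""
    now = time.time() if now is None else now
    try:
        header, payload, sig = token.split(".")
    except ValueError:
        return None
    expected = _b64(hmac.new(SECRET, f"{header}.{payload}".encode(),
                             hashlib.sha256).digest())
    if not hmac.compare_digest(sig, expected):
        return None  # tampered, or signed with a different secret
    claims = json.loads(_unb64(payload))
    if claims.get("exp", float("inf")) < now:
        return None  # expired
    return claims
```

Because verification needs only the shared secret, any replica of any service can check a token with zero database reads, which is what makes horizontal scaling of the auth path trivial.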
Users experienced a smoother, more responsive real-time collaboration environment. Health checks and automated rollbacks caught critical issues before they reached production.
Add WebRTC SFU (Selective Forwarding Unit) for multi-party calls.
Why: Current mesh topology works for 2-4 participants but breaks beyond that because total bandwidth grows quadratically with participant count (every client uploads a stream to every other peer). An SFU would efficiently scale video sessions to 10+ participants while conserving client-side bandwidth.
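The participant ceiling is easy to make concrete by counting streams. In a full mesh every client uplinks to every other peer; behind an SFU each client uplinks exactly once and the server fans the stream out:

```python
def mesh_uplinks_per_client(n):
    """Full mesh: each client encodes and uploads a stream to every peer."""
    return n - 1

def sfu_uplinks_per_client(n):
    """SFU: each client uploads one stream; the server handles fan-out."""
    return 1

def total_streams_mesh(n):
    """Total streams in flight grows quadratically with participants."""
    return n * (n - 1)
```

At 4 participants a mesh client already encodes 3 simultaneous uplinks (12 streams in total); at 10 it would be 9 per client, which is why the mesh tops out around 4.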
Neuroscience researchers needed to analyze 1000+ papers for pattern discovery and research gaps. Manual review took weeks. Existing tools lacked domain-specific scoring for novelty, reproducibility, and impact assessment.
Built an AI-powered research pipeline:
Researchers could identify high-value research opportunities in hours instead of weeks, accelerating the pace of scientific discovery.
Add graph database (Neo4j) to model citation networks and paper relationships.
Why: Current vector search is effective for semantic similarity but misses implicit connections between papers. Graph traversal would reveal hidden research clusters and citation patterns that influence field direction.
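What graph traversal adds over embedding similarity can be shown with a toy example: papers joined by a chain of citations cluster together even when their abstracts share no vocabulary, which is exactly the signal vector search misses. A hypothetical sketch using plain BFS (a Neo4j deployment would express this as a Cypher query instead):

```python
from collections import defaultdict, deque

def citation_clusters(edges):
    """Connected components of an undirected citation graph.

    edges: iterable of (paper_a, paper_b) citation pairs.
    Returns a list of clusters (sets of paper ids).
    """
    graph = defaultdict(set)
    for a, b in edges:
        graph[a].add(b)
        graph[b].add(a)

    seen, clusters = set(), []
    for start in graph:
        if start in seen:
            continue
        component, queue = set(), deque([start])
        while queue:
            node = queue.popleft()
            if node in component:
                continue
            component.add(node)
            queue.extend(graph[node] - component)  # follow citation chains
        seen |= component
        clusters.append(component)
    return clusters
```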
Accounting teams manually typed invoice data from images—error-prone, time-consuming, and difficult to audit. They needed structured extraction with line items, GST breakdown, and automated validation.
Created a vision-first extraction pipeline:
Used server-side API routes to process images securely and prevent API key exposure in client code.
Accountants saved hours of manual data entry, caught discrepancies automatically through validation formulas, and maintained complete audit trails for compliance.
Add batch processing queue for 100+ invoices with progress tracking.
Why: Current synchronous processing blocks the UI during extraction. Background job queue with WebSocket progress updates would support high-volume invoice processing without freezing the interface.
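The proposed queue boils down to moving extraction off the request path and emitting a progress event as each invoice completes. A simplified in-process sketch (the real version would use a persistent job queue and push progress over a WebSocket; all names here are illustrative):

```python
import queue
import threading

def process_batch(invoices, extract, on_progress):
    """Runs extraction on a worker thread and reports progress per invoice.

    extract: callable per invoice (stand-in for the vision-model call).
    on_progress: callable(done, total); in production this would push a
    WebSocket message instead of being invoked in-process.
    """
    results = queue.Queue()

    def worker():
        total = len(invoices)
        for i, invoice in enumerate(invoices, start=1):
            results.put(extract(invoice))
            on_progress(i, total)   # UI stays responsive while this runs

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join()  # demo only; a real server would return a job id immediately
    return [results.get() for _ in invoices]
```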
Owned frontend architecture for a real-time social platform serving 100+ daily active users. Delivered features ahead of schedule while maintaining zero production incidents.
Chose Firebase for real-time sync over custom WebSocket server. This enabled faster MVP delivery and reduced infrastructure complexity, but limited advanced query capabilities. At 1000+ DAU, I'd migrate to Supabase for better Row-Level Security control and SQL query flexibility.
Drove data-driven inventory optimization for a multi-region retail chain, translating raw sales data into actionable business insights.
Initial analysis used simple mean imputation for missing sales data, which skewed regional patterns and led to inaccurate forecasts. Switched to K-NN imputation based on similar stores and seasons, improving forecast accuracy by 12%. This taught me that data cleaning choices have downstream business impact.
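The imputation switch is straightforward to sketch: instead of the global mean, a missing sales value is estimated from the k most similar store/season rows. A toy version with made-up feature vectors (the real analysis presumably used a library implementation such as scikit-learn's KNNImputer):

```python
def knn_impute(target_features, neighbors, k=3):
    """Estimate a missing sales value from the k nearest known rows.

    target_features: feature vector for the row with the missing value
    neighbors: list of (features, sales) rows with known sales
    Unlike global mean imputation, the estimate reflects only comparable
    stores, so regional and seasonal structure survives.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

    ranked = sorted(neighbors, key=lambda row: dist(row[0], target_features))
    nearest = ranked[:k]
    return sum(sales for _, sales in nearest) / len(nearest)
```

In a two-region example the difference is stark: the nearest-neighbor estimate stays inside the target's own region, while the global mean is dragged toward the other region's sales level.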
Built and optimized a computer vision system for facial recognition using dimensionality reduction and neural networks.
Specifically: systems that need to scale, teams that ship fast, and problems where "it depends" is the right answer.
If you're building something hard, let's talk.
Start a conversation