Full-Stack Engineer

Building systems that scale from prototype to production.

I solve complex problems with simple architectures. From real-time GPS tracking serving 100+ concurrent users to AI research platforms processing thousands of papers—I ship products that perform under pressure.

View case studies
25%
Bandwidth reduction through WebSocket optimization
40%
Query performance improvement via indexing strategy
20%
API latency cut using intelligent caching
100+
Concurrent users supported in production
01

How I approach engineering problems.

Ship fast, refactor faster

I prioritize working solutions over perfect architecture. Premature optimization kills momentum. I validate with users first, then optimize based on actual bottlenecks—not theoretical ones.

Measure before optimizing

Every performance claim I make is backed by metrics. I instrument systems early, profile relentlessly, and only optimize what the data proves matters. Intuition guides—data decides.

Simplicity scales better than cleverness

The best code I've written is boring. I choose Postgres over microservices when monoliths work. I use proven patterns over novel architectures. Complexity is a liability—simplicity compounds.

02

I work across the full stack.

From database design to real-time frontend state—I build complete systems, not isolated features.

Languages

JavaScript TypeScript Python C++ Go

Frontend

React Next.js Tailwind CSS Redux React Query

Backend

Node.js Express Flask WebSockets REST APIs

Data

PostgreSQL MongoDB Redis Prisma

AI & ML

OpenAI Gemini Computer Vision NLP

Infrastructure

Docker GitHub Actions AWS Vercel
03

Case studies: problems I've solved at scale.

Logistics Intelligence Platform

SwiftTrack

Problem

Traditional polling-based GPS tracking consumed excessive bandwidth and battery. System needed to support 100+ concurrent drivers with sub-second location updates while maintaining low resource usage.

Decision

Architected a real-time streaming solution with three key components:

Real-time Transport Socket.IO with WebSocket compression instead of HTTP polling to reduce bandwidth overhead
Event Architecture Pub/sub pattern to isolate tracking streams per vendor, preventing cross-vendor data leakage
Security Layer Supabase Row-Level Security instead of custom auth middleware to reduce attack surface
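The per-vendor isolation above can be sketched as a minimal in-memory broker (a stand-in for the real Socket.IO rooms; the `VendorStreamBroker` name and handler shape are illustrative, not the production code):

```python
from collections import defaultdict
from typing import Callable, DefaultDict, Dict, List

class VendorStreamBroker:
    """Minimal pub/sub broker that isolates location streams per vendor.

    Each vendor gets its own channel; subscribers to one vendor's channel
    never receive another vendor's GPS updates.
    """

    def __init__(self) -> None:
        self._channels: DefaultDict[str, List[Callable[[Dict], None]]] = defaultdict(list)

    def subscribe(self, vendor_id: str, handler: Callable[[Dict], None]) -> None:
        self._channels[vendor_id].append(handler)

    def publish(self, vendor_id: str, location: Dict) -> None:
        # Only handlers registered on this vendor's channel are invoked.
        for handler in self._channels[vendor_id]:
            handler(location)

# Two vendors subscribed; an update to vendor-a never reaches vendor-b.
broker = VendorStreamBroker()
seen_a, seen_b = [], []
broker.subscribe("vendor-a", seen_a.append)
broker.subscribe("vendor-b", seen_b.append)
broker.publish("vendor-a", {"driver": "d1", "lat": 19.07, "lng": 72.87})
```

In the deployed system the channels would be Socket.IO rooms keyed by vendor ID, but the isolation property is the same: the channel key, not application-level filtering, is what prevents cross-vendor delivery.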

Outcome

25%
Bandwidth Reduction
40%
Faster Queries
0
Security Breaches

This translated to smoother real-time tracking for drivers, lower data costs for vendors, and zero unauthorized access incidents in production.

Next iteration:

Migrate location streams to Kafka for horizontal scaling beyond 1000 concurrent users.

Why: Current pub/sub architecture works well but lacks geographic sharding capabilities needed for multi-region deployments.

Socket.IO Supabase PostgreSQL RLS
Real-Time Collaboration Suite

SyncroSpace

Problem

Teams needed WebRTC video calls, encrypted chat, and shared state (Kanban boards) without latency spikes. A single monolith wouldn't isolate failures: one service going down would take everything with it.

Decision

Separated concerns into dedicated microservices:

Video Service WebRTC with STUN/TURN fallback for peer-to-peer connections across NATs
Chat Service AES-256 encryption + Redis pub/sub for real-time message delivery
State Service PostgreSQL with optimistic locking for collaborative board updates

Used JWT for stateless auth across all services. Redis pub/sub for presence reduced PostgreSQL write load by 60%.
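The optimistic-locking approach in the State Service can be sketched with an in-memory stand-in for the versioned PostgreSQL table (the `BoardStore` class and its method names are hypothetical, chosen for illustration):

```python
class VersionConflict(Exception):
    """Raised when a writer's snapshot is stale, forcing a re-read and retry."""

class BoardStore:
    """In-memory stand-in for a board table with a `version` column.

    An update succeeds only if the caller read the latest version; this
    prevents lost updates when two collaborators edit the same board.
    """

    def __init__(self) -> None:
        self._rows = {}  # board_id -> (version, data)

    def create(self, board_id: str, data: dict) -> int:
        self._rows[board_id] = (1, data)
        return 1

    def read(self, board_id: str):
        return self._rows[board_id]

    def update(self, board_id: str, expected_version: int, data: dict) -> int:
        version, _ = self._rows[board_id]
        if version != expected_version:
            # Mirrors UPDATE ... WHERE id = $1 AND version = $2 matching 0 rows.
            raise VersionConflict(f"expected v{expected_version}, found v{version}")
        self._rows[board_id] = (version + 1, data)
        return version + 1

store = BoardStore()
store.create("board-1", {"columns": ["todo"]})
v1, _ = store.read("board-1")
v2 = store.update("board-1", v1, {"columns": ["todo", "done"]})  # succeeds
try:
    store.update("board-1", v1, {"columns": []})  # stale snapshot
    conflict = False
except VersionConflict:
    conflict = True
```

In SQL this is a single conditional `UPDATE` guarded by the version column, so no row locks are held between read and write.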

Outcome

40%
Presence Latency Drop
60%
Fewer DB Writes
3
Bugs Caught Pre-Prod

Users experienced a smoother, more responsive real-time collaboration environment. Health checks and automated rollbacks caught critical issues before they reached production.

Next iteration:

Add WebRTC SFU (Selective Forwarding Unit) for multi-party calls.

Why: Current mesh topology works for 2-4 participants but breaks beyond that because bandwidth grows quadratically: each client uploads a stream to every peer. SFU would efficiently scale video sessions to 10+ participants while conserving client-side bandwidth.
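The bandwidth arithmetic behind the mesh-vs-SFU tradeoff is simple to state (function names here are illustrative):

```python
def mesh_upstreams(n: int) -> int:
    """Full-mesh WebRTC: each participant uploads its stream to every peer."""
    return n - 1

def mesh_total_links(n: int) -> int:
    """Total directed media links in a full mesh: n senders x (n-1) receivers."""
    return n * (n - 1)

def sfu_upstreams(n: int) -> int:
    """With an SFU, every client uploads exactly one stream to the server,
    regardless of how many participants (n) are in the call."""
    return 1

# At 4 participants each client uploads 3 streams (tolerable);
# at 10, mesh demands 9 upstreams per client while an SFU still needs 1.
```

This is why the mesh topology is fine for small calls but falls over around the 4-5 participant mark on typical residential uplinks.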

WebRTC Redis JWT Microservices
Neural Research Intelligence

SpikeMind

Problem

Neuroscience researchers needed to analyze 1000+ papers for pattern discovery and research gaps. Manual review took weeks. Existing tools lacked domain-specific scoring for novelty, reproducibility, and impact assessment.

Decision

Built an AI-powered research pipeline:

API Layer FastAPI backend with 25+ endpoints for paper ingestion, analysis, and query
AI Engine Gemini for contextual Q&A, hypothesis generation, and impact scoring
Caching Strategy Redis-backed result cache to avoid redundant LLM calls and reduce API costs
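The caching strategy can be sketched as a TTL memoizer; this stdlib version stands in for the Redis-backed cache (the `cached` decorator and the `ask_llm` stub are illustrative, not the platform's API):

```python
import time
from functools import wraps

def cached(ttl_seconds: float, clock=time.monotonic):
    """Memoize results keyed by arguments: identical prompts within the TTL
    return the stored answer instead of triggering another billed LLM call.
    In production the store would be Redis with an expiring key per prompt hash."""
    def decorator(fn):
        store = {}
        @wraps(fn)
        def wrapper(*args):
            now = clock()
            hit = store.get(args)
            if hit is not None and now - hit[0] < ttl_seconds:
                return hit[1]
            result = fn(*args)
            store[args] = (now, result)
            return result
        return wrapper
    return decorator

calls = []

@cached(ttl_seconds=3600)
def ask_llm(prompt: str) -> str:
    calls.append(prompt)  # stand-in for the paid API round-trip
    return f"answer to: {prompt}"

ask_llm("summarize paper 42")
ask_llm("summarize paper 42")  # served from cache; no second API call
```

The TTL matters: research Q&A answers for a fixed paper set are stable, so a long TTL captures most of the cost savings without staleness risk.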

Outcome

Weeks → Hours
Review Time
40%
API Cost Savings
12
Research Gaps Found

Researchers could identify high-value research opportunities in hours instead of weeks, accelerating the pace of scientific discovery.

Next iteration:

Add graph database (Neo4j) to model citation networks and paper relationships.

Why: Current vector search is effective for semantic similarity but misses implicit connections between papers. Graph traversal would reveal hidden research clusters and citation patterns that influence field direction.

FastAPI Gemini Redis Docker
Visual Intelligence Processing

SnapToSheet

Problem

Accounting teams manually typed invoice data from images—error-prone, time-consuming, and difficult to audit. They needed structured extraction with line items, GST breakdown, and automated validation.

Decision

Created a vision-first extraction pipeline:

Vision Processing OpenRouter vision models (Nova 2 Lite) for fast, accurate invoice data extraction
Fallback System Tesseract.js OCR as backup when vision API fails or for offline processing
Output Generation 5-sheet Excel workbook with formulas, audit trails, and GST verification

Used server-side API routes to process images securely and prevent API key exposure in client code.
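The vision-first pipeline with OCR fallback reduces to a try/except around two injected extractors; this sketch uses stub functions in place of the real OpenRouter and Tesseract.js calls (all names here are illustrative):

```python
from typing import Callable

class ExtractionError(Exception):
    """Raised by an extractor that cannot produce structured invoice data."""

def extract_invoice(image: bytes,
                    vision: Callable[[bytes], dict],
                    ocr: Callable[[bytes], dict]) -> dict:
    """Try the vision model first; on failure fall back to OCR.

    Injecting `vision` and `ocr` keeps the pipeline testable, and the OCR
    path doubles as the offline-processing mode.
    """
    try:
        result = vision(image)
        result["source"] = "vision"
    except ExtractionError:
        result = ocr(image)
        result["source"] = "ocr"
    return result

def failing_vision(image: bytes) -> dict:
    raise ExtractionError("vision API unavailable")

def stub_ocr(image: bytes) -> dict:
    return {"total": "118.00", "gst": "18.00"}

# Vision API is down; the OCR fallback still yields structured output.
result = extract_invoice(b"...", failing_vision, stub_ocr)
```

Tagging each result with its `source` also feeds the audit trail: reviewers can prioritize OCR-sourced rows, which tend to need more validation.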

Outcome

10min → 30sec
Processing Time
8
Errors Caught Week 1
95%+
Extraction Accuracy

Accountants saved hours of manual data entry, caught discrepancies automatically through validation formulas, and maintained complete audit trails for compliance.

Next iteration:

Add batch processing queue for 100+ invoices with progress tracking.

Why: Current synchronous processing blocks the UI during extraction. Background job queue with WebSocket progress updates would support high-volume invoice processing without freezing the interface.
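The planned batch flow can be sketched as a worker loop that reports progress after each invoice; `on_progress` stands in for the WebSocket push, and all names are hypothetical:

```python
from typing import Callable, List

def process_batch(invoices: List[str],
                  process_one: Callable[[str], str],
                  on_progress: Callable[[int, int], None]) -> List[str]:
    """Process invoices sequentially, emitting (done, total) after each item.

    In the real design this would run in a background job queue with progress
    pushed to the browser over a WebSocket, so the UI never blocks.
    """
    results = []
    total = len(invoices)
    for done, invoice in enumerate(invoices, start=1):
        results.append(process_one(invoice))
        on_progress(done, total)
    return results

events = []
results = process_batch(
    ["inv-1", "inv-2", "inv-3"],
    process_one=lambda inv: f"{inv}:done",
    on_progress=lambda done, total: events.append((done, total)),
)
```

Keeping the progress channel as a callback means the same worker code runs under a CLI, a test harness, or the WebSocket-backed queue.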

OpenRouter Tesseract.js Next.js ExcelJS
04

Where I've shipped under constraints.

Divine Connection

Software Developer

Feb 2024 - Apr 2024

Owned frontend architecture for a real-time social platform serving 100+ daily active users. Delivered features ahead of schedule while maintaining zero production incidents.

  • Reduced API latency by 20% through React Query caching with stale-while-revalidate strategy and request deduplication
  • Increased user session duration by 15% via optimized data fetching patterns and predictive preloading
  • Built 8 reusable components with Framer Motion animations, cutting team development time by 30%
  • Shipped 3 major features ahead of sprint deadlines using feature flags and phased rollouts
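The stale-while-revalidate pattern behind the latency win can be sketched in miniature (synchronous for clarity; React Query does the revalidation in the background, and the `SWRCache` class here is illustrative):

```python
import time

class SWRCache:
    """Stale-while-revalidate in miniature: always answer from cache
    instantly, and once the entry has gone stale, refresh it while still
    serving the stale value for the current request."""

    def __init__(self, fetch, stale_after: float, clock=time.monotonic):
        self.fetch = fetch
        self.stale_after = stale_after
        self.clock = clock
        self._entry = None  # (fetched_at, value)

    def get(self):
        now = self.clock()
        if self._entry is None:
            self._entry = (now, self.fetch())        # cold miss: must fetch
        elif now - self._entry[0] > self.stale_after:
            stale_value = self._entry[1]
            self._entry = (now, self.fetch())        # revalidate
            return stale_value                       # ...but answer instantly
        return self._entry[1]

# Fake clock and fetch counter to make the behavior observable.
t = [0.0]
fetches = []
def fetch():
    fetches.append(t[0])
    return f"data@{t[0]}"

cache = SWRCache(fetch, stale_after=5, clock=lambda: t[0])
first = cache.get()    # cold: fetches
t[0] = 2.0
second = cache.get()   # fresh: cache hit, no fetch
t[0] = 10.0
third = cache.get()    # stale: serves old value, refreshes behind it
fourth = cache.get()   # fresh again: serves the refreshed value
```

The user-visible effect is the 20% latency cut above: repeat navigations render from cache immediately instead of waiting on the network.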
Tradeoff Decision:

Chose Firebase for real-time sync over custom WebSocket server. This enabled faster MVP delivery and reduced infrastructure complexity, but limited advanced query capabilities. At 1000+ DAU, I'd migrate to Supabase for better Row-Level Security control and SQL query flexibility.

Internship Studio

Data Analysis Intern

May 2024 - Jun 2024

Drove data-driven inventory optimization for a multi-region retail chain, translating raw sales data into actionable business insights.

  • Analyzed 50,000+ sales records using Pandas to identify revenue trends, seasonal patterns, and underperforming product categories
  • Created interactive Matplotlib dashboards that guided regional inventory adjustments and improved stock turnover rates
  • Cleaned datasets with 20% missing values using custom seasonal imputation strategies to preserve regional variance
Learning from Failure:

Initial analysis used simple mean imputation for missing sales data, which skewed regional patterns and led to inaccurate forecasts. Switched to K-NN imputation based on similar stores and seasons, improving forecast accuracy by 12%. This taught me that data cleaning choices have downstream business impact.
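The K-NN imputation idea can be sketched in a few lines; this simplification matches on numeric store features only (the real analysis also matched on season), and `knn_impute` and the toy data are illustrative:

```python
from typing import Dict, List

def knn_impute(target: Dict, neighbors: List[Dict], feature: str, k: int = 2) -> float:
    """Fill a missing value with the mean of the k most similar rows.

    Similarity is Euclidean distance over the features present and non-missing
    in both rows, so rows with other gaps still participate.
    """
    def distance(a: Dict, b: Dict) -> float:
        keys = [key for key in a
                if key != feature and key in b
                and a[key] is not None and b[key] is not None]
        return sum((a[key] - b[key]) ** 2 for key in keys) ** 0.5

    candidates = [row for row in neighbors if row.get(feature) is not None]
    candidates.sort(key=lambda row: distance(target, row))
    nearest = candidates[:k]
    return sum(row[feature] for row in nearest) / len(nearest)

# A store with missing December sales, imputed from its two nearest peers
# rather than the global mean (which the big-box store would distort).
stores = [
    {"size": 100, "footfall": 50, "dec_sales": 200.0},
    {"size": 110, "footfall": 55, "dec_sales": 220.0},
    {"size": 400, "footfall": 300, "dec_sales": 900.0},
]
missing = {"size": 105, "footfall": 52, "dec_sales": None}
estimate = knn_impute(missing, stores, "dec_sales", k=2)
```

Note how the dissimilar large store is excluded from the estimate; a global mean would have pulled the imputed value toward 440 and distorted the regional pattern, which is exactly the failure mode described above.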

Internship Studio

AI Intern

Jan 2024 - Feb 2024

Built and optimized a computer vision system for facial recognition using dimensionality reduction and neural networks.

  • Implemented PCA for dimensionality reduction, compressing the feature space from 10,000 dimensions to 150 principal components while retaining 95% of variance
  • Trained ANN classifier achieving 94% accuracy on test set through systematic hyperparameter tuning
  • Preprocessed facial datasets with normalization and histogram equalization to stabilize training and improve generalization
  • Used confusion matrices to identify systematic misclassification patterns and guide model refinement
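The 95%-variance cutoff used to pick 150 components reduces to scanning the cumulative explained-variance spectrum; this sketch uses a toy eigenvalue spectrum (the function name and values are illustrative):

```python
from typing import Sequence

def components_for_variance(eigenvalues: Sequence[float], target: float = 0.95) -> int:
    """Smallest number of principal components whose cumulative explained
    variance reaches the target fraction of total variance."""
    ordered = sorted(eigenvalues, reverse=True)
    total = sum(ordered)
    cumulative = 0.0
    for count, eigenvalue in enumerate(ordered, start=1):
        cumulative += eigenvalue
        if cumulative / total >= target:
            return count
    return len(ordered)

# Toy spectrum: variance concentrated in the leading components,
# as is typical for facial image data.
spectrum = [50.0, 30.0, 12.0, 5.0, 2.0, 1.0]
k = components_for_variance(spectrum, target=0.95)
```

On real facial data the spectrum decays steeply, which is why 150 of 10,000 dimensions can carry 95% of the variance.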

I'm looking for my next challenge.

Specifically: systems that need to scale, teams that ship fast, and problems where "it depends" is the right answer.

If you're building something hard, let's talk.

Start a conversation