Building reliable data infrastructure that scales.
I'm a Data Engineer focused on designing and building the data infrastructure that organizations depend on. I specialize in scalable pipelines, well-modeled warehouses, and automated workflows that make data clean, reliable, and ready to use.
I write production-grade code, care deeply about data quality, and build systems that are easy to maintain for the long term.
Languages
Pipelines & Orchestration
Databases & Warehouses
Dev & Collaboration
| Area | Details |
|---|---|
| 🔄 ETL/ELT Pipelines | Batch and streaming pipelines built for reliability and scale |
| 🏗️ Data Warehousing | Dimensional models and schemas optimized for downstream use |
| ⚙️ Orchestration | Automated, monitored workflows with Airflow and similar tools |
| 🧹 Data Quality | Testing frameworks, validation layers, and governance standards |
| 📦 Data Transformation | Clean, version-controlled transformations using dbt and SQL |
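The validation-layer idea in the table above can be sketched in a few lines of plain Python. This is a minimal, illustrative example, not a real framework; all names (`Check`, `validate`, the field names) are hypothetical:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Check:
    """A named row-level validation rule (hypothetical example)."""
    name: str
    predicate: Callable[[dict], bool]

def validate(rows: list[dict], checks: list[Check]) -> dict[str, int]:
    """Run every check against every row; return failure counts per check."""
    failures = {check.name: 0 for check in checks}
    for row in rows:
        for check in checks:
            if not check.predicate(row):
                failures[check.name] += 1
    return failures

# Example rules a pipeline might enforce before loading to a warehouse.
checks = [
    Check("id_not_null", lambda r: r.get("id") is not None),
    Check("amount_non_negative", lambda r: r.get("amount", 0) >= 0),
]

rows = [
    {"id": 1, "amount": 10.0},
    {"id": None, "amount": -5.0},
]

print(validate(rows, checks))  # {'id_not_null': 1, 'amount_non_negative': 1}
```

In practice this role is usually filled by tools like dbt tests or Great Expectations, but the core pattern is the same: declarative rules, run automatically, with failures surfaced before bad data reaches downstream consumers.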
- Advanced streaming architectures with Kafka and Spark
- Data lakehouse patterns with Delta Lake and Iceberg
- Pipeline testing and observability best practices
- dbt advanced features and package ecosystem
