avtomatik/README.md

Hi, I'm Alexander (@avtomatik)

I build internal systems, automation pipelines, and backend platforms that turn complex business processes into safe, reliable, and scalable workflows. My work reduces manual errors, accelerates operations, and ensures production data integrity.

I primarily work in Python, using FastAPI and Django, while exploring Rust, Go, and Java to broaden my toolkit and improve system performance.


What I Build

  • Data reconciliation and controlled execution pipelines: CSV → staging → dry-run → approval → production execution
  • Automation workflows that enforce operational correctness and reduce production risks
  • ETL and data pipelines for analytics, reporting, and operational monitoring
  • Backend APIs and internal platforms integrating databases, services, and workflow approvals
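The staged flow above (CSV → staging → dry-run → approval → production execution) can be sketched as a minimal state machine. This is an illustrative sketch, not the actual implementation; the `Stage` enum, `Record` class, and the validation rule are all hypothetical stand-ins.

```python
from dataclasses import dataclass
from enum import Enum, auto

# Hypothetical sketch of a staged, gated execution pipeline:
# rows advance CSV -> staging -> dry-run -> approval -> production,
# and only explicitly approved rows ever reach execution.

class Stage(Enum):
    STAGED = auto()
    DRY_RUN_OK = auto()
    APPROVED = auto()
    EXECUTED = auto()

@dataclass
class Record:
    row_id: int
    payload: dict
    stage: Stage = Stage.STAGED

def dry_run(records):
    # Validate without side effects; only valid rows advance.
    ok = []
    for r in records:
        if "id" in r.payload:  # stand-in validation rule
            r.stage = Stage.DRY_RUN_OK
            ok.append(r)
    return ok

def approve(records, approved_ids):
    # Human approval gate (e.g. ticked rows in a review spreadsheet).
    out = []
    for r in records:
        if r.row_id in approved_ids:
            r.stage = Stage.APPROVED
            out.append(r)
    return out

def execute(records):
    # Production execution accepts only approved rows.
    for r in records:
        assert r.stage is Stage.APPROVED
        r.stage = Stage.EXECUTED
    return records
```

The point of the design is that every destructive step is preceded by a cheap, reversible one, so mistakes surface before production is touched.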

Focused on delivering real-world impact rather than just code elegance


Technical Skills

  • Languages: Python 3 (primary); exploring Rust, Go, Java, Bash
  • Frameworks / Libraries: FastAPI, Flask, Django, Django REST Framework, Pydantic
  • Data & Analytics: Pandas, Polars, DuckDB, dbt, SQL
  • DevOps / Automation: Docker, Docker Compose, GitHub Actions, CI/CD basics
  • Databases: PostgreSQL, SQLite, DuckDB
  • Concepts: Workflow automation, data reconciliation, operational safety, production correctness, API design

Key Projects & Impact

Certificate Reconciliation & Execution Pipeline

  • Designed a controlled workflow: CSV → staging → dry-run → Excel approval → production execution
  • Reduced manual errors by ~90%, saving ~10 hours/week of reconciliation work
  • Implemented optimistic concurrency, error tracking, and batch execution flags to safeguard production
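The optimistic concurrency mentioned above can be sketched with a version column: an update succeeds only if the row's version is unchanged since it was read. This is a hedged sketch (the `certs` table and `update_status` helper are hypothetical), shown with sqlite3 for portability.

```python
import sqlite3

# Optimistic concurrency via a version column: the UPDATE's WHERE
# clause rejects writes based on a stale read.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE certs (id INTEGER PRIMARY KEY, status TEXT, version INTEGER)"
)
conn.execute("INSERT INTO certs VALUES (1, 'pending', 1)")

def update_status(conn, cert_id, read_version, new_status):
    cur = conn.execute(
        "UPDATE certs SET status = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_status, cert_id, read_version),
    )
    # rowcount == 0 means another writer bumped the version first.
    return cur.rowcount == 1

assert update_status(conn, 1, 1, "executed")      # matches version 1: applied
assert not update_status(conn, 1, 1, "reverted")  # stale version: rejected
```

No locks are held between read and write; conflicts are detected at commit time instead, which suits batch pipelines where contention is rare but correctness is mandatory.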

Data Pipelines & Analytics

  • Developed ETL pipelines with dbt and DuckDB for analytics and reporting
  • Automated reporting workflows, replacing error-prone manual processes
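The extract → transform → load shape behind these pipelines can be sketched in a few lines. The sample data and table names are invented, and sqlite3 stands in for DuckDB purely so the sketch is self-contained; the transform step is the kind of aggregation a dbt model would own.

```python
import csv
import io
import sqlite3

# Extract: parse raw CSV input (inline here for the sketch).
raw = "region,amount\nnorth,10\nnorth,5\nsouth,7\n"

# Load: insert rows into a staging table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount INTEGER)")
for row in csv.DictReader(io.StringIO(raw)):
    conn.execute(
        "INSERT INTO sales VALUES (?, ?)", (row["region"], int(row["amount"]))
    )

# Transform: aggregate staging data into a reporting result.
report = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
# report == [('north', 15), ('south', 7)]
```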

API & Internal Tools

  • Built REST APIs with FastAPI, integrating multiple databases and external services
  • Delivered pragmatic solutions for operations teams, balancing automation with human approvals

Current Focus

  • Strengthening testing, observability, and CI/CD workflows
  • Expanding workflow automation and operational tooling
  • Exploring asynchronous Python frameworks, Rust, Go, and Java for high-performance systems

Open to Collaborate On

  • Internal systems, workflow automation, and operational tooling
  • Data reconciliation and safe execution pipelines
  • Backend/API development and integration
  • Data pipelines and analytics for business-critical operations

Contact

📄 View my CV

Pinned Repositories

  1. renamex

    A fast CLI tool to clean, normalize, and safely rename files in bulk

    Rust

  2. etl-cobb-douglas-model

    ETL pipeline for analyzing economic data using the Cobb-Douglas production function model.

    Python

  3. etl-brown-kendrick

    ETL Pipeline for Data Fusion: M.G. Brown & J.W. Kendrick Macroeconomic Sources

    Python

  4. superset-dbt-bea

    An end-to-end data pipeline for U.S. Bureau of Economic Analysis (BEA) data.

    Python

  5. welding_ml

    Tiny Project in Machine Learning Applied to Precision Electron-Beam Welding

    Jupyter Notebook

  6. superset-dbt-fed

    An end-to-end data pipeline for Federal Reserve Economic Data (FRED).

    Python