Hatchet Logo

An orchestration engine for background tasks, AI agents, and durable workflows


Hatchet Cloud · Documentation · Website · Issues

What is Hatchet?

Hatchet is a platform for orchestrating background tasks, AI agents, and durable workflows at scale. It supports applications written in Python, TypeScript, Go and Ruby, and can be used either as a managed service through Hatchet Cloud or self-hosted. It provides queuing, automatic retries, durability, real-time monitoring, alerting, and logging out of the box.

Get started quickly

The fastest way to get started with Hatchet is to sign up for Hatchet Cloud and try it out! We recommend this even if you plan on self-hosting, so you can see what a fully deployed Hatchet platform looks like.

To run Hatchet locally, the fastest path is to install the Hatchet CLI (on macOS, Linux, or WSL); note that this requires Docker to be installed locally:

curl -fsSL https://install.hatchet.run/install.sh | bash
hatchet --version
hatchet server start

To view the full documentation for self-hosting and for Hatchet Cloud, have a look at the docs.
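
Once the engine is running, your application connects a worker to it using one of the SDKs. The snippet below is a minimal sketch in Python; the decorator and worker-registration names follow the style of the Python SDK's documentation, but they vary between SDK versions, so treat them as assumptions and check the docs for the exact API.

from hatchet_sdk import Hatchet, Context

hatchet = Hatchet()  # reads the HATCHET_CLIENT_TOKEN from the environment

# Declare a task; the decorator name here is an assumption based on the
# Python SDK docs and may differ in your SDK version.
@hatchet.task(name="say-hello")
def say_hello(input, ctx: Context) -> dict:
    return {"greeting": "hello from Hatchet"}

def main() -> None:
    # A worker connects to the engine and executes queued runs of the tasks
    # registered on it (the "workflows" parameter name is an assumption).
    worker = hatchet.worker("example-worker", workflows=[say_hello])
    worker.start()

if __name__ == "__main__":
    main()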

When should I use Hatchet?

You can use Hatchet for running background tasks, AI agents, or other types of long-running workflows. It is designed to be a feature-complete solution for systems where correctness, reliability, horizontal scalability, and observability are essential. From a technical perspective, it differs from other solutions in that it uses Postgres as a durability layer for both the task runtime and the observability system, making it particularly easy to self-host.

For some end-to-end examples of workflows you can build with Hatchet, check out our cookbooks.

Hatchet Features

Background Tasks

Task orchestration and workflows

Scale

  • Priority so that critical tasks run before work that isn't latency-sensitive, such as backfill jobs
  • Rate limiting to deal with third-party APIs, or even to enforce per-user rate limits using dynamic rate limits
  • Fair scheduling using Hatchet's concurrency policies, which can set a concurrency limit for tasks based on dynamic keys
  • Worker slots for ensuring that workers cannot take on more work than they can handle (a few of these options are sketched below)
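
As a rough sketch of where these options live in the Python SDK (the parameter names below are assumptions based on the docs and are left as comments; the exact task and worker options differ between SDK versions):

from hatchet_sdk import Hatchet

hatchet = Hatchet()

# The option names below are assumptions -- consult the Hatchet docs for the
# exact task options in your SDK version.
@hatchet.task(
    name="sync-user-data",
    # priority=...        -> critical runs jump ahead of backfill-style runs
    # rate_limits=[...]   -> cap calls to a third-party API, or per user
    # concurrency=...     -> fair scheduling keyed on e.g. input.user_id
)
def sync_user_data(input, ctx):
    ...

# Worker slots: a worker declared with a fixed number of slots never takes on
# more concurrent runs than it has slots (parameter names are assumptions).
worker = hatchet.worker("sync-worker", slots=2, workflows=[sync_user_data])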

Monitoring, observability, and management

  • Real-time web UI with alerting, monitoring, and logging
  • OpenTelemetry (using Hatchet's built-in collector or external destinations)
  • Prometheus metrics
  • Multi-tenant by default, so a single Hatchet instance can support multiple teams
  • Users and roles

Hatchet Cloud features

  • Autoscaling and pay-as-you-go plans
  • Multi-region deployments
  • SSO
  • Improved performance for monitoring, logging, and observability

Documentation

The most up-to-date documentation can be found at https://docs.hatchet.run.

Community & Support

  • Discord - best for getting in touch with the maintainers and hanging with the community
  • GitHub Issues - used for filing bug reports
  • GitHub Discussions - used for starting in-depth technical discussions that are suited for asynchronous communication
  • Email - best for getting Hatchet Cloud support and for help with billing, data deletion, etc.

Hatchet vs...

Hatchet vs Durable Execution Platforms (Temporal, DBOS)

Hatchet's durable tasks feature is a drop-in replacement for Temporal or DBOS workflows. You also get:

  • End-to-end observability of durable tasks using OpenTelemetry, monitoring and logging
  • Features built for running workflows at scale, such as rate limiting, complex routing, and worker-level slot control
  • Multi-tenancy, users and roles supported out of the box

In addition to making durable execution easier to use, Hatchet can also be used as a general-purpose queue, a DAG-based orchestrator, a durable execution engine, or all three, allowing teams to centralize their async and background processing in a single platform.
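
If "durable execution" is unfamiliar: the core idea is that the result of each completed step is persisted, so that when a workflow is interrupted and replayed, finished steps are skipped rather than re-executed. The sketch below illustrates only the concept, using an in-memory checkpoint store; it is not Hatchet's API (Hatchet persists this state in Postgres on your behalf -- see the durable tasks docs for the real interface).

from typing import Any, Callable

class Checkpoints:
    """Toy stand-in for durable storage of completed step results."""

    def __init__(self) -> None:
        self._results: dict[str, Any] = {}

    def run_step(self, name: str, fn: Callable[[], Any]) -> Any:
        # On replay, a step that already completed returns its saved result
        # instead of running again.
        if name not in self._results:
            self._results[name] = fn()
        return self._results[name]

def order_workflow(ckpt: Checkpoints) -> str:
    payment = ckpt.run_step("charge-card", lambda: "payment-id-123")
    # If the process crashes here, replaying the workflow will not charge the
    # card a second time -- "charge-card" is already checkpointed.
    ckpt.run_step("send-receipt", lambda: f"emailed receipt for {payment}")
    return payment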

Hatchet vs Task Queues (Celery, BullMQ)

Traditional task queues like BullMQ and Celery trade off durability for throughput. Tasks persist on the broker (typically Redis or RabbitMQ) while they are executing, but are not persisted afterwards. This makes it difficult to build complex workflows, as there is no persistent intermediate state. It also makes it difficult to recover and replay tasks which failed and were removed from the queue, which often leads teams to build custom admin tooling to operate these libraries at scale.

On the other hand, Hatchet is a durable task queue, meaning it persists the history of all executions (up to a defined retention period), which allows for easy monitoring, debugging and durable task features. Hatchet's durability features add some overhead: while Hatchet has been load-tested up to 10k tasks/second, it consumes more resources than a system built on Redis or RabbitMQ, which can reach much higher throughput.

Hatchet vs DAG-based platforms (Airflow, Prefect, Dagster)

These tools are usually built with data engineers in mind, and aren’t designed to run as part of a high-volume application. They’re usually higher latency and higher cost, with their primary selling point being integrations with common datastores and connectors.

When to use Hatchet: when you'd like to use a DAG-based framework, write your own integrations and functions, and need higher throughput (>100 tasks/second)

When to use other DAG-based platforms: when you'd like to use other data stores and connectors that work out of the box
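
For a sense of what DAG-style orchestration looks like in Hatchet, here is a rough sketch in Python; the workflow/task decorators and the parents parameter follow the Python SDK's documented style, but the exact names are assumptions and may differ between SDK versions.

from hatchet_sdk import Hatchet

hatchet = Hatchet()

# A workflow groups tasks into a DAG; the names below are assumptions.
etl = hatchet.workflow(name="nightly-etl")

@etl.task()
def extract(input, ctx):
    return {"rows": 100}

@etl.task(parents=[extract])
def transform(input, ctx):
    # Runs only after "extract" completes; upstream output is read via ctx.
    return {"rows_clean": 95}

@etl.task(parents=[transform])
def load(input, ctx):
    return {"loaded": True}

worker = hatchet.worker("etl-worker", workflows=[etl])
worker.start()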

Issues

Please submit any bugs that you encounter via GitHub issues.

I'd Like to Contribute

Please let us know what you're interested in working on in the #contributing channel on Discord. This will help us shape the direction of the project and will make collaboration much easier!