BCDHub is a set of microservices written in Golang:
- **indexer**: loads and decodes operations related to smart contracts, keeps track of the blockchain, and handles protocol updates.
- **API**: exposes a RESTful JSON API for accessing indexed data (with on-the-fly decoding); also provides a set of methods for authentication and managing user profiles.
These microservices share access to databases and communicate through them:
- **PostgreSQL**: database for storing compilations and user data.
BCDHub also depends on several API endpoints exposed by TzKT, although they are optional:
- List of blocks containing smart contract operations, used for speeding up the indexing process (allows skipping blocks with no contract calls)
- Mempool operations
- Contract aliases and other metadata
These services make sense for public networks only and are not used for sandbox or other private environments.
BCD uses the X.Y.Z version format, where:
- **X** changes every 3-5 months along with a big release that adds significant functionality
- **Y** increases to signal a possibly non-compatible update that requires reindexing (or restoring from a snapshot) or syncing with the frontend
- **Z** is bumped for every stable release candidate or hotfix
The BCD web interface, developed at https://github.com/baking-bad/bcd, uses the same version scheme.
X.Y.* versions of the backend and frontend MUST be compatible, which means that Y has to be increased for every change in API responses.
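The X.Y compatibility rule can be checked mechanically. A minimal shell sketch (the helper name is hypothetical, not part of the repo):

```shell
# Hypothetical helper: compare the X.Y prefixes of two X.Y.Z version strings.
xy_compatible() {
  local a="${1%.*}"  # strip the Z component: "4.2.1" -> "4.2"
  local b="${2%.*}"
  [ "$a" = "$b" ]
}

xy_compatible "4.2.1" "4.2.5" && echo "compatible"    # same X.Y
xy_compatible "4.2.1" "4.3.0" || echo "incompatible"  # Y differs
```

Only the major and minor components are compared; the patch component is deliberately ignored.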
Making a release is essentially tagging commits:

```
make release  # forced tag update
```

For a stable release:

```
git tag X.Y.Z
git push --tags
```

Although you can install and run each part of BCDHub independently, as system services for instance, the simplest approach is to use the dockerized versions orchestrated by docker-compose.
BCDHub Docker images are built on Docker Hub. Tags for stable releases have the format X.Y.
Docker tags are essentially produced from Git tags using the following rules:
- `X.Y.*` → `X.Y`
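For illustration, the Git-tag-to-Docker-tag mapping is just a prefix extraction; a sketch in plain shell (not the actual build script):

```shell
# Illustrative only: how an X.Y.Z Git tag maps to an X.Y Docker tag.
git_tag="4.1.3"             # example stable Git tag
docker_tag="${git_tag%.*}"  # drop the patch component
echo "$docker_tag"          # prints: 4.1
```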
```
make images         # latest
make stable-images  # requires the STABLE_TAG variable in the .env file
```

Make sure you have installed:
- docker
- docker-compose
You will also need the following ports to be free:
- `14000`: API service
- `5432`: PostgreSQL
- `8000`: frontend GUI
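Before starting the stack you can quickly probe whether those ports are already taken. A minimal bash sketch (hypothetical helper, relies on bash's built-in `/dev/tcp` device, so it is bash-only, not POSIX sh):

```shell
#!/usr/bin/env bash
# Hypothetical helper: report whether a local TCP port already has a listener.
port_busy() {
  (exec 3<>"/dev/tcp/127.0.0.1/$1") 2>/dev/null
}

for port in 14000 5432 8000; do
  if port_busy "$port"; then
    echo "port $port is busy"
  else
    echo "port $port is free"
  fi
done
```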
- Clone this repo
```
git clone https://github.com/baking-bad/bcdhub.git
cd bcdhub
```

- Create and fill the `.env` file (see Configuration)
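As a sketch, a minimal `.env` might look like the following. `STABLE_TAG` is the variable mentioned in this document; the database credentials below are illustrative placeholders, so consult the Configuration section for the actual variable names:

```
# STABLE_TAG is used by the stable deployment targets (see below)
STABLE_TAG=4.1

# Placeholder credentials: the real variable names are listed
# in the Configuration section
POSTGRES_USER=bcd
POSTGRES_PASSWORD=changeme
```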
```
your-text-editor .env
```

There are several predefined configurations serving different purposes.
- Stable Docker images `X.Y`; the `/configs/production.yml` file is used internally
- Requires the `STABLE_TAG` environment variable to be set
- Deployed via `make stable`

- The `/configs/development.yml` file is used
- You can spawn local instances of databases or ssh to the staging host with port forwarding
- Run services via `make {service}` (where service is one of `api`, `indexer`)

- The `/configs/sandbox.yml` file is used
- Start via:

```
COMPOSE_PROJECT_NAME=bcd-box docker-compose -f docker-compose.sandbox.yml up -d --build
```

- Stop via:

```
COMPOSE_PROJECT_NAME=bcd-box docker-compose -f docker-compose.sandbox.yml down
```
It takes around 20-30 seconds to initialize all services; API endpoints might return errors until then.
NOTE that if you specified a local RPC node that is not running, BCDHub will wait for it indefinitely.
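Instead of waiting blindly, you can poll until the API answers. A generic retry helper sketched in bash (the health-check URL is an assumption, substitute an endpoint your setup actually exposes):

```shell
#!/usr/bin/env bash
# Generic retry helper (a sketch, not part of the repo):
#   wait_for <attempts> <delay-seconds> <command...>
# Runs the command until it succeeds or the attempt budget runs out.
wait_for() {
  local attempts=$1 delay=$2 i
  shift 2
  for ((i = 0; i < attempts; i++)); do
    "$@" && return 0
    sleep "$delay"
  done
  return 1
}

# Typical use: poll the API service until it responds.
# The URL below is an assumption, not a documented endpoint:
#   wait_for 30 1 curl -sf http://localhost:14000/v1/head >/dev/null
```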
The full indexing process takes about 2 hours; however, there are cases when you cannot afford that.
NOTE: currently we don't provide public snapshots; contact us to be granted access instead.
- Make sure you have the snapshot settings in your `.env` file
```
make s3-creds
```

No further actions required.

```
make s3-repo
```

Follow the instructions; you can choose an arbitrary name for your repo.

```
make s3-snapshot
```

Select an existing repository to store your snapshot. Follow steps 1 and 2 from the `make snapshot` instruction.

```
make s3-restore
```

Select the latest (by date) snapshot from the list. It takes a while; don't worry about the seeming freeze.
This is mostly for the production environment; for all others a simple "start from scratch" approach would work.
E.g. applying hotfixes, with no breaking changes in the database schema.
Make sure you are on the master branch.

```
git pull
make stable-images
make stable
```

Alternatively:

```
make stable-pull
make stable
```
E.g. a new field was added to one of the models; you'd need to write a migration script to update the existing data.
```
git pull
make migration
```

Select your script.
In case you need to reindex from scratch you can set up a secondary BCDHub instance, fill the index, make a snapshot, and then apply it to the production instance.
Typically you'd use staging for that.
```
make upgrade
```

```
make s3-restore
```

Select the snapshot you made.

```
make stable
```