API for acting as a central authenticated data service for all EPIC front-ends
All documentation has been consolidated in the Eagle Documentation Wiki:
- API Architecture - Service map, routing patterns, and request flow
- Configuration Management - ConfigService pattern and environment variables
- Analytics Architecture - Penguin Analytics integration
- API Deployment - Deployment workflows and procedures
- Deployment Pipeline - CI/CD workflows and image tagging
- Rollback Procedures - How to rollback deployments
- Troubleshooting - Common issues and solutions
Eagle is a revision name of the EAO EPIC application suite.
These projects comprise EAO EPIC:
- https://github.com/bcgov/eagle-api
- https://github.com/bcgov/eagle-public
- https://github.com/bcgov/eagle-admin
- https://github.com/bcgov/eagle-mobile-inspections
- https://github.com/bcgov/eagle-reports
- https://github.com/bcgov/eagle-helper-pods
- https://github.com/bcgov/eagle-dev-guides
- https://github.com/bcgov/eao-nginx (rproxy reverse proxy)
- https://github.com/bcgov/penguin-analytics (analytics service)
Requirements: Node 22.x, Docker
# 1. Install dependencies
yarn install
# 2. Configure environment
cp .env.example .env
# 3. Start MongoDB
yarn db:up
# 4. Start the API
yarn start
API available at http://localhost:3000
Swagger UI at http://localhost:3000/api/docs/
For watch mode (auto-restart on changes): yarn start-watch
To stop MongoDB: yarn db:down
For deployment procedures, Helm charts, and CI/CD workflows, see the Deployment Guide in the Eagle documentation wiki.
One can run the EPIC applications on two kinds of data: generated, and backed up from live.
Generated data will typically be cleaner as it is generated against the latest mongoose models. Generated data also does not require transferring PI to dev machines. Live production dumps should only be used in situations where a particular bug cannot be replicated locally, and after replicating, the data generators and unit tests should be updated to include that edge case.
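As a hypothetical illustration of that workflow (the factory and field names below are invented and are not the project's actual generator API), capturing a replicated production edge case in a generator might look like:

```javascript
// Hypothetical sketch: capturing a production edge case in a local data
// generator so it stays covered without shipping a live dump around.
// The factory shape and field names are invented for illustration.

function generateProject(overrides = {}) {
  // Baseline document shaped like the current mongoose model.
  const base = {
    name: 'Generated Project',
    legislation_2018: { hasMetCommentPeriods: false },
    dateAdded: new Date(),
  };
  return { ...base, ...overrides };
}

// Edge case found while debugging against a production dump:
// a project with no legislation_2018 sub-document at all.
const edgeCase = generateProject({ legislation_2018: null });
console.log(edgeCase.legislation_2018);
```

Once the generator can produce the edge case, a unit test asserting the API handles it replaces any further need for the live data.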
Described in the generate README.
Acquire a dump of the database from one of the live environments.
To restore a dump into your local MongoDB:
# Drop the existing database (destructive!)
mongosh --eval 'use epic; db.dropDatabase()'
# Restore from a dump directory
mongorestore -d epic epic/
# Or restore from a gzipped archive
mongorestore --gzip --archive=epic-dump.tar.gz
With yarn db:up running, pipe a mongodump archive directly into the container:
# From a mongodump --archive file
yarn db:restore < epic-prod-dump.archive
# Or stream directly from a live environment
mongodump --uri="<source-uri>" --archive | yarn db:restore
The db:restore script runs mongorestore --drop inside the container, so it
replaces any existing data. The volume (eagle-api_mongodb-data) persists across
yarn db:down / yarn db:up restarts.
In the course of developing this application, we accumulate database conversion scripts that must be run to update the db model so that the newest codebase works properly. There are currently two methods of doing a database conversion, depending on how long-lived and memory-intensive the conversion is.
See https://www.npmjs.com/package/db-migrate for documentation on running the db migrate command. General use case for local development at the root folder:
./node_modules/db-migrate/bin/db-migrate up
For dev/test/prod environments, you will need to change the database.json file in the root folder accordingly and run with the --env param. See https://www.npmjs.com/package/db-migrate for more information.
In the root folder, there are files named migrateDocuments*.js. These are large, long-running, memory-intensive scripts that operate on the vast majority of the EPIC documents. db-migrate proved slow and unreliable for them given the nature of the connection to our database, so these Node.js scripts use the mongodb driver directly and can take a more complicated, robust approach to the database conversion. They can be run from your local machine as long as there is an oc port-forward tunnel from your machine to the OpenShift mongodb database. Change the user/pass/port/host/authenticationDatabase params and the script will execute against the mongodb pod directly.
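The shape of such a script can be sketched as follows. This is illustrative only: the connection string, collection name, query, and transform are placeholders, not values from the real migrateDocuments scripts.

```javascript
// Hedged sketch of a migrateDocuments-style conversion script using the
// native mongodb driver over an `oc port-forward` tunnel. It streams the
// collection with a cursor instead of loading everything into memory.

// Pure transform, kept separate so it can be tested without a database.
function transformDoc(doc) {
  // Example fix-up: backfill a missing documentSource field.
  return { documentSource: doc.documentSource || 'PROJECT' };
}

async function run() {
  // Assumes `oc port-forward <mongodb-pod> 5555:27017` is already running;
  // change user/pass/port/host/authSource to match your tunnel.
  const { MongoClient } = require('mongodb');
  const client = await MongoClient.connect(
    'mongodb://admin:pw@localhost:5555/epic?authSource=admin'
  );
  const docs = client.db('epic').collection('epic');

  // Iterate the cursor one document at a time; updateOne per doc keeps
  // the working set small even for very large collections.
  const cursor = docs.find({ _schemaName: 'Document' });
  let updated = 0;
  for await (const doc of cursor) {
    await docs.updateOne({ _id: doc._id }, { $set: transformDoc(doc) });
    updated += 1;
  }
  console.log(`updated ${updated} documents`);
  await client.close();
}

// Guarded so simply loading the file never fires the migration.
if (process.env.RUN_MIGRATION) {
  run().catch((err) => { console.error(err); process.exit(1); });
}
```

Keeping the per-document transform pure makes it easy to unit-test the conversion logic before pointing the script at a live pod.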
See .env.example for all available environment variables with descriptions and local defaults.
Key variables for local development:
- KEYCLOAK_ENABLED=false: disables Keycloak; uses a local JWT signed with SECRET
- MONGODB_SERVICE_HOST=localhost: MongoDB host (default: localhost)
- MONGODB_DATABASE=epic: database name
Full reference: Configuration Management wiki.
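To make the defaults concrete, here is a minimal sketch of reading those variables with local fallbacks. It mirrors the ConfigService idea described in the wiki, but the helper below is invented for illustration, not the project's actual implementation.

```javascript
// Illustrative sketch (not the real ConfigService): read the key local
// development variables from process.env, falling back to the documented
// local defaults when a variable is unset or empty.

function envOr(name, fallback) {
  const value = process.env[name];
  return value === undefined || value === '' ? fallback : value;
}

const config = {
  // KEYCLOAK_ENABLED=false means local JWT auth instead of Keycloak.
  keycloakEnabled: envOr('KEYCLOAK_ENABLED', 'false') === 'true',
  mongoHost: envOr('MONGODB_SERVICE_HOST', 'localhost'),
  mongoDatabase: envOr('MONGODB_DATABASE', 'epic'),
};

console.log(config);
```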
- Connect to OpenShift by copying the login command
- Choose the project and get pods: oc get pods
- Port-forward: oc port-forward eagle-api-mongodb-5-tj22g 5555:27017
- Connect to the db with the mongo shell (note the forwarded local port): mongo "mongodb://admin:pw@localhost:5555/epic?authSource=admin"
- Query for the project, e.g. db.epic.find({_id : ObjectId("65c661a8399db00022d48849")})
- Set hasMetCommentPeriods to true for the project, e.g. db.epic.updateOne( { _id: ObjectId("65c661a8399db00022d48849") }, { $set: { "legislation_2018.hasMetCommentPeriods": true } })