- Python 3
- Docker.
- Docker Compose.
- Poetry for Python package and environment management.
- Node.js 16 (with Yarn).
- Start the stack with Docker Compose:
```
docker-compose up -d
```

Now you can open your browser and interact with these URLs:
- Frontend, built with Docker, with routes handled based on the path: http://localhost:8080
- Backend, JSON based web API based on OpenAPI: http://localhost:8000/api/
- Automatic interactive documentation with Swagger UI (from the OpenAPI backend): http://localhost:8000/docs
- Alternative automatic documentation with ReDoc (from the OpenAPI backend): http://localhost:8000/redoc
- PGAdmin, PostgreSQL web administration: http://localhost:5050
- Flower, administration of Celery tasks: http://localhost:5555
Note: the first time you start your stack, it might take a minute for it to be ready while the backend waits for the database and configures everything. You can check the logs to monitor it.
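Conceptually, this wait-for-database step is just a TCP polling loop. A minimal sketch (an illustration only, not the project's actual prestart code):

```python
import socket
import time


def wait_for_db(host: str, port: int, timeout: float = 30.0) -> bool:
    """Poll a TCP port until it accepts connections or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            # If the database is accepting connections, we are done.
            with socket.create_connection((host, port), timeout=1.0):
                return True
        except OSError:
            # Not ready yet; back off briefly and retry.
            time.sleep(0.5)
    return False
```

The real backend uses its own retry logic; this only shows why the first startup can take a while.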
To check the logs, run:
```
docker-compose logs
```

To check the logs of a specific service, add the name of the service, e.g.:

```
docker-compose logs backend
```

By default, the backend dependencies are managed with Poetry; install it first.
From ./backend/app/ you can install all the dependencies with:
```
$ poetry install
```

Then you can start a shell session with the new environment with:

```
$ poetry shell
```

Next, open your editor at ./backend/app/ (instead of the project root ./), so that you see an ./app/ directory with your code inside. That way, your editor will be able to find all the imports, etc. Make sure your editor uses the environment you just created with Poetry.
Modify or add SQLAlchemy models in ./backend/app/app/models/, Pydantic schemas in ./backend/app/app/schemas/, API endpoints in ./backend/app/app/api/, and CRUD (Create, Read, Update, Delete) utils in ./backend/app/app/crud/. The easiest approach might be to copy the ones for Items (models, endpoints, and CRUD utils) and adapt them to your needs.
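As a hypothetical sketch of such a model/schema pair (the `Note` model and `NoteCreate` schema below are illustrative, not part of the project):

```python
from sqlalchemy import Column, Integer, String
from sqlalchemy.orm import declarative_base
from pydantic import BaseModel

Base = declarative_base()


class Note(Base):
    """Hypothetical SQLAlchemy model, as would live in app/models/."""
    __tablename__ = "note"

    id = Column(Integer, primary_key=True, index=True)
    title = Column(String, index=True)


class NoteCreate(BaseModel):
    """Matching Pydantic schema, as would live in app/schemas/,
    used to validate request bodies in the API endpoints."""
    title: str
```

The CRUD utils and endpoints would then import both, following the same pattern as the existing Items code.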
Add and modify tasks to the Celery worker in ./backend/app/app/worker.py.
If you need to install any additional package for the worker, add it to the file ./backend/app/celeryworker.dockerfile.
During development, you can change Docker Compose settings in the file docker-compose.override.yml.
The changes to that file only affect your local development environment; it is ignored by git. So, you can add "temporary" changes that help the development workflow.
For example, the directory with the backend code is mounted as a Docker "host volume", mapping the code you change live to the directory inside the container. That allows you to test your changes right away, without having to build the Docker image again. It should only be done during development, for production, you should build the Docker image with a recent version of the backend code. But during development, it allows you to iterate very fast.
There is also a command override that runs /start-reload.sh (included in the base image) instead of the default /start.sh (also included in the base image). It starts a single server process (instead of multiple, as would be used in production) and reloads the process whenever the code changes. Keep in mind that if you have a syntax error and save the Python file, it will break and exit, and the container will stop. After that, you can restart the container by fixing the error and running again:
```
$ docker-compose up -d
```

To get inside the container with a bash session you can start the stack with:

```
$ docker-compose up -d
```

and then exec inside the running container:

```
$ docker-compose exec backend bash
```

You should see an output like:

```
root@7f2607af31c3:/app#
```

That means you are in a bash session inside your container, as the root user, under the /app directory.
There you can use the script /start-reload.sh to run the debug live reloading server. You can run that script from inside the container with:
```
$ bash /start-reload.sh
```

...it will look like:

```
root@7f2607af31c3:/app# bash /start-reload.sh
```

Then hit Enter. That runs the live reloading server that auto-reloads when it detects code changes.
However, if it detects a syntax error rather than a valid change, it will just stop with an error. But as the container is still alive and you are in a Bash session, you can quickly restart the server after fixing the error, running the same command ("up arrow" and "Enter").
This detail is what makes it useful to have the container alive doing nothing and then, in a Bash session, make it run the live reload server.
To test the backend run:
```
$ sh ./scripts/test.sh
```

The file ./scripts/test.sh has the commands to generate a testing docker-stack.yml file, start the stack, and test it.
The tests run with Pytest; modify and add tests in ./backend/app/app/tests/.
If you use GitLab CI the tests will run automatically.
Start the stack with this command:
```
sh ./scripts/test-local.sh
```

The ./backend/app directory is mounted as a "host volume" inside the docker container.
You can rerun the tests on live code:

```
docker-compose exec backend sh /app/tests-start.sh
```

If your stack is already up and you just want to run the tests, you can use:

```
docker-compose exec backend sh /app/tests-start.sh
```

That /app/tests-start.sh script just calls pytest after making sure that the rest of the stack is running. If you need to pass extra arguments to pytest, you can pass them to that command and they will be forwarded.
For example, to stop on first error:
```
docker-compose exec backend bash /app/tests-start.sh -x
```

Because the test scripts forward arguments to pytest, you can enable test coverage HTML report generation by passing --cov-report=html.
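This forwarding relies on the shell's `"$@"` expansion. A minimal illustration (the `run_tests` wrapper is hypothetical, just to show the mechanism the test scripts use):

```shell
#!/usr/bin/env sh
# "$@" expands to all arguments passed to the function or script,
# each preserved as its own word, so a wrapper can hand them to
# pytest unchanged. Here we echo instead of running pytest.
run_tests() {
  echo pytest "$@"
}

run_tests -x --cov-report=html
```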
To run the local tests with coverage HTML reports:
```
sh ./scripts/test-local.sh --cov-report=html
```

To run the tests in a running stack with coverage HTML reports:

```
docker-compose exec backend bash /app/tests-start.sh --cov-report=html
```

If you know about Python Jupyter Notebooks, you can take advantage of them during local development.
The docker-compose.override.yml file passes a variable env with the value dev to the build process of the Docker image (during local development), and the Dockerfile has steps to install and configure Jupyter inside your Docker container.
So, you can enter into the running Docker container:
```
docker-compose exec backend bash
```

And use the environment variable $JUPYTER to run a Jupyter Notebook with everything configured to listen on the public port (so that you can use it from your browser).
It will output something like:
```
root@73e0ec1f1ae6:/app# $JUPYTER
[I 12:02:09.975 NotebookApp] Writing notebook server cookie secret to /root/.local/share/jupyter/runtime/notebook_cookie_secret
[I 12:02:10.317 NotebookApp] Serving notebooks from local directory: /app
[I 12:02:10.317 NotebookApp] The Jupyter Notebook is running at:
[I 12:02:10.317 NotebookApp] http://(73e0ec1f1ae6 or 127.0.0.1):8888/?token=f20939a41524d021fbfc62b31be8ea4dd9232913476f4397
[I 12:02:10.317 NotebookApp] Use Control-C to stop this server and shut down all kernels (twice to skip confirmation).
[W 12:02:10.317 NotebookApp] No web browser found: could not locate runnable browser.
[C 12:02:10.317 NotebookApp]
    Copy/paste this URL into your browser when you connect for the first time,
    to login with a token:
        http://(73e0ec1f1ae6 or 127.0.0.1):8888/?token=f20939a41524d021fbfc62b31be8ea4dd9232913476f4397
```

You can copy that URL and change the "host" to localhost; in the case above it would be:

```
http://localhost:8888/?token=f20939a41524d021fbfc62b31be8ea4dd9232913476f4397
```

and then open it in your browser.
You will have a full Jupyter Notebook running inside your container that has direct access to your database by the container name (db), etc. So, you can just run sections of your backend code directly, for example with VS Code Python Jupyter Interactive Window or Hydrogen.
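For example, from a notebook inside the container you could build the database URL using the service name instead of localhost (the environment variable names below are assumptions based on common Compose setups; adjust them to the project's actual .env keys):

```python
import os

# Assumed variable names, commonly set by the Postgres container/.env file.
user = os.environ.get("POSTGRES_USER", "postgres")
password = os.environ.get("POSTGRES_PASSWORD", "")
db_name = os.environ.get("POSTGRES_DB", "app")

# Inside the Compose network the database is reachable by its
# service name "db", not by localhost.
database_url = f"postgresql://{user}:{password}@db:5432/{db_name}"
```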
As during local development your app directory is mounted as a volume inside the container, you can also run the migrations with alembic commands inside the container and the migration code will be in your app directory (instead of being only inside the container). So you can add it to your git repository.
Make sure you create a "revision" of your models and that you "upgrade" your database with that revision every time you change them, as this is what will update the tables in your database. Otherwise, your application will have errors.
- Start an interactive session in the backend container:
  ```
  $ docker-compose exec backend bash
  ```

- If you created a new model in ./backend/app/app/models/, make sure to import it in ./backend/app/app/db/base.py. That Python module (base.py), which imports all the models, will be used by Alembic.

- After changing a model (for example, adding a column), inside the container, create a revision, e.g.:

  ```
  $ alembic revision --autogenerate -m "Add column last_name to User model"
  ```

- Commit to the git repository the files generated in the alembic directory.

- After creating the revision, run the migration in the database (this is what will actually change the database):

  ```
  $ alembic upgrade head
  ```

If you don't want to use migrations at all, uncomment the line in the file ./backend/app/app/db/init_db.py with:

```
Base.metadata.create_all(bind=engine)
```

and comment out the line in the file prestart.sh that contains:

```
alembic upgrade head
```

If you don't want to start with the default models and want to remove or modify them from the beginning, without having any previous revision, you can remove the revision files (.py Python files) under ./backend/app/alembic/versions/ and then create a first migration as described above.
- Enter the frontend directory, install the NPM packages and start the live server using the npm scripts:

  ```
  cd frontend
  yarn
  yarn serve
  ```

Then open your browser at http://localhost:8080
Notice that this live server is not running inside Docker, it is for local development, and that is the recommended workflow. Once you are happy with your frontend, you can build the frontend Docker image and start it, to test it in a production-like environment. But compiling the image at every change will not be as productive as running the local development server with live reload.
Check the file package.json to see other available options.
If you have Vue CLI installed, you can also run vue ui to control, configure, serve, and analyze your application using a nice local web user interface.
This project uses a formatter and a linter for both the backend and the frontend. The CI checks them for each commit.
In the backend we use yapf and isort as formatters, and mypy and flake8 (with a few plugins) as linters.
To run the formatters:
```
cd backend/app
poetry install --no-root
./scripts/format.sh
```

To run the linters:

```
cd backend/app
poetry install --no-root
./scripts/lint.sh
```

In the frontend we use Prettier as formatter and the Vue CLI linter, which uses ESLint, as linter.
To run the formatter:
```
cd frontend
yarn
yarn format
```

To run the linter:

```
cd frontend
yarn
yarn lint
```

There's a copyright header on (almost) all the files. The CI validates that the header is set correctly:

```
./scripts/license-check-and-add.sh check
```

If you add a new source file, you must add the copyright header:

```
./scripts/license-check-and-add.sh add
```

These are the URLs that will be used and generated by the project.
Production URLs:

- Frontend: https://open-pantheon.org
- Backend: https://api.open-pantheon.org/api/
- Automatic Interactive Docs (Swagger UI): https://api.open-pantheon.org/docs
- Automatic Alternative Docs (ReDoc): https://api.open-pantheon.org/redoc
Staging URLs, from the branch main:

- Frontend: https://dev.open-pantheon.org
- Backend: https://api.dev.open-pantheon.org/api/
- Automatic Interactive Docs (Swagger UI): https://api.dev.open-pantheon.org/docs
- Automatic Alternative Docs (ReDoc): https://api.dev.open-pantheon.org/redoc
Development URLs, for local development:

- Frontend: http://localhost:8080
- Backend: http://localhost:8000/api/
- Automatic Interactive Docs (Swagger UI): http://localhost:8000/docs
- Automatic Alternative Docs (ReDoc): http://localhost:8000/redoc
- PGAdmin: http://localhost:5050