
Setting up Python Projects: Part III


Photo by Gayatri Malhotra on Unsplash

Whether you're a seasoned developer or just getting started with 🐍 Python, it's important to know how to build robust and maintainable projects. This tutorial will guide you through the process of setting up a Python project using some of the most popular and effective tools in the industry. You'll learn how to use GitHub and GitHub Actions for version control and continuous integration, as well as other tools for testing, documentation, packaging and distribution. The tutorial is inspired by resources such as Hypermodern Python and Best Practices for a new Python project. However, this is not the only way to do things and you might have different preferences or opinions. The tutorial is intended to be beginner-friendly but also covers some advanced topics. In each section, you'll automate some tasks and add badges to your project to show your progress and achievements.

The repository for this series can be found at github.com/johschmidt42/python-project-johannes

  • OS: Linux, Unix, macOS, Windows (WSL2 with e.g. Ubuntu 20.04 LTS)
  • Tools: python3.10, bash, git, tree
  • Version Control System (VCS) Host: GitHub
  • Continuous Integration (CI) Tool: GitHub Actions

It is expected that you are familiar with the version control system (VCS) git. If not, here's a refresher for you: Introduction to Git

Commits will be based on best practices for git commits & Conventional Commits. There is a conventional commit plugin for PyCharm and a VSCode extension that help you write commits in this format.

Overview

Structure

  • Testing framework (pytest)
  • Pytest configuration (pytest.ini_options)
  • Testing the application (fastAPI, httpx)
  • Coverage (pytest-cov)
  • Coverage configuration (coverage.report)
  • CI (test.yml)
  • Badge (Testing)
  • Bonus (Report coverage in README.md)

Testing your code is a crucial part of software development. It helps you make sure that your code works as expected. You can test your code or application manually or use a testing framework to automate the process. Automated tests can be of different types, such as unit tests, integration tests, end-to-end tests, penetration tests, etc. In this tutorial, we'll focus on writing a simple unit test for the single function in our project. This will show that our codebase is well tested and reliable, which is a basic requirement for any proper project.

Python has a few testing frameworks to choose from, such as the built-in standard library module unittest. However, this module has some drawbacks, such as requiring boilerplate code, class-based tests and specific assert methods. A better alternative is pytest, a popular and powerful testing framework with many plugins. If you are not familiar with pytest, you should read this introductory tutorial before you continue, because we'll write a simple test without explaining much of the basics.

So let's start by creating a new branch: feat/unit-tests

In our app src/example_app we only have two files that can be tested: __init__.py and app.py. The __init__.py file contains just the version, and app.py contains our fastAPI application and the GET pokemon endpoint. We don't have to test the __init__.py file, because it only contains the version and it will be executed when we import app.py or any other file from our app.
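For context, here is a minimal sketch of what such an app.py could look like. The endpoint path (/pokemon/{name}), the PokeAPI URL and the return value are illustrative assumptions, not the exact code from the repository:

# src/example_app/app.py (illustrative sketch, not the repository's exact code)
import httpx
from fastapi import FastAPI

app = FastAPI()

# public PokeAPI base URL that the endpoint forwards to (assumed here)
POKEAPI_URL: str = "https://pokeapi.co/api/v2"


@app.get("/pokemon/{name}")
async def get_pokemon(name: str) -> dict:
    # fetch the pokemon data with httpx and return the JSON payload
    async with httpx.AsyncClient() as client:
        response: httpx.Response = await client.get(f"{POKEAPI_URL}/pokemon/{name}")
    return response.json()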

We can create a tests folder in the project's root and add the test file test_app.py, so that it looks like this:

.
...
├── src
│   └── example_app
│       ├── __init__.py
│       └── app.py
└── tests
    └── test_app.py

Before we add a test function with pytest, we need to install the testing framework first and add some configuration to make our lives a little easier:

Because the default visual output in the terminal leaves some room for improvement, I like to use the plugin pytest-sugar. This is completely optional, but if you like the visuals, give it a try. We install these dependencies to a new group that we call test. Again, as explained in the last part (Part II), this is done to separate app and dev dependencies.
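The install command itself is not shown above; assuming the group name test used throughout this part, it would look something like this:

> poetry add --group test pytest pytest-sugar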

Because pytest won't know where our tests are located, we can add this information to the pyproject.toml:

# pyproject.toml
...
[tool.pytest.ini_options]
testpaths = ["tests"]
addopts = "-p no:cacheprovider" # deactivating pytest caching.

Here addopts stands for "add options" or "additional options", and the value -p no:cacheprovider tells pytest not to cache runs. Alternatively, we can create a pytest.ini and add these lines there.
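For reference, an equivalent pytest.ini would look roughly like this (a sketch of the same two settings):

[pytest]
testpaths = tests
addopts = -p no:cacheprovider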

Let's continue by adding a test for the fastAPI endpoint that we created in app.py. Because we use httpx, we need to mock the response from the HTTP call (https://pokeapi.co/api). We could use monkeypatch or unittest.mock to change the behaviour of some functions or classes in httpx, but there already exists a plugin that we can use: respx

Mock HTTPX with awesome request patterns and response side effects.

Additionally, because fastAPI is an ASGI application and not a WSGI one, we need to write an async test, for which we can use the pytest plugin pytest-asyncio together with trio. Don't worry if these are new to you, they are just libraries for async Python and you don't need to understand what they do.

> poetry add --group test respx pytest-asyncio trio

Let's create our test in test_app.py:
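The embedded test file is not reproduced here, so below is a minimal sketch of what test_get_pokemon could look like, assuming the /pokemon/{name} endpoint and payload from the app sketch above; the actual test in the repository may differ:

# tests/test_app.py (illustrative sketch)
import httpx
import pytest

from example_app.app import app


@pytest.mark.asyncio
async def test_get_pokemon(respx_mock) -> None:
    # mock the outgoing httpx call to the PokeAPI with the respx pytest fixture
    expected_response: dict = {"name": "pikachu", "id": 25}
    respx_mock.get("https://pokeapi.co/api/v2/pokemon/pikachu").mock(
        return_value=httpx.Response(status_code=200, json=expected_response)
    )

    # call our own fastAPI endpoint through an in-process ASGI transport
    async with httpx.AsyncClient(
        transport=httpx.ASGITransport(app=app), base_url="http://testserver"
    ) as client:
        response: httpx.Response = await client.get("/pokemon/pikachu")

    assert response.status_code == 200
    assert response.json() == expected_response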

I won't go into the details of how to create unit tests with pytest, because this topic could cover a whole series of tutorials! But to summarise: I created an async test called test_get_pokemon in which the response will be the expected_response, because we're using the respx_mock library. The endpoint of our fastAPI application is called and the result is compared to the expected result. If you want to find out more about how to test with fastAPI and httpx, check out the official documentation: Testing in fastAPI

And if you have async functions and don't know how to deal with them, take a look at: Testing with async functions in fastAPI

Assuming that you installed your application with poetry install, we can now run pytest with

> pytest
Running all our tests — Image by author

and pytest knows in which directory it has to look for test files!

To make our linters happy, we should also run them on the newly created file. For this, we need to modify the command lint-mypy so that mypy also covers files in the tests directory (previously only src):

# Makefile
...
lint-mypy:
	@mypy .
...

Finally, we can now run our formatters and linters before committing:

> make format
> make lint
Running formatters and linters — Image by author

The code coverage of a project is a good indicator of how much of the code is covered by unit tests. Hence, code coverage is a good metric (not always) to check whether a particular codebase is well tested and reliable.

We can check our code coverage with the coverage module. It creates a coverage report and provides information about the lines that we missed with our unit tests. We can install it via the pytest plugin pytest-cov:

> poetry add --group test pytest-cov

We can run the coverage module through pytest:

> pytest --cov=src --cov-report term-missing --cov-report=html

To only check the coverage for the src directory, we add the flag --cov=src. We want the report to be displayed in the terminal (--cov-report term-missing) and stored in an HTML file (--cov-report=html).

Coverage report terminal — Image by author

We see that a coverage HTML report has been created in the directory htmlcov, in which we find an index.html.

.
...
├── index.html
├── keybd_closed.png
├── keybd_open.png
├── status.json
└── style.css

Opening it in a browser lets us see visually which lines our tests covered:

Coverage report HTML (overview) — Image by author

Clicking on the link src/example_app/app.py, we see a detailed view of what our unit tests covered in the file and, more importantly, which lines they missed:

Coverage report HTML (detailed) — Image by author

We notice that the code under the if __name__ == "__main__": line is included in our coverage report. We can exclude it by setting the right flag when running pytest, or better, by adding this configuration to our pyproject.toml:

# pyproject.toml
...
[tool.coverage.report]
exclude_lines = [
'if __name__ == "__main__":'
]

The lines after the if __name__ == "__main__": are now excluded*.

*It probably makes sense to also exclude other common lines, such as the following (a combined example follows the list):

  • def __repr__
  • def __str__
  • raise NotImplementedError
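Combined, the configuration could then look like this (a sketch; the exact patterns are up to you, and coverage treats each entry as a regular expression):

# pyproject.toml
...
[tool.coverage.report]
exclude_lines = [
    'if __name__ == "__main__":',
    'def __repr__',
    'def __str__',
    'raise NotImplementedError',
]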

If we run pytest with the coverage module again

> pytest --cov=src --cov-report term-missing --cov-report=html
Coverage report HTML (excluded lines) — Image by author

the last line is no longer included, as expected.

We have covered the basics of the coverage module, but there are more features that you can explore. You can read the official documentation to learn more about the options.

Let's add these commands (pytest, coverage) to our Makefile, the same way we did in Part II, so that we don't have to remember them. Additionally, we add a command that uses the --cov-fail-under=80 flag. This tells pytest to fail if the total coverage is lower than 80 %. We will use this later in the CI part of this tutorial. Because the coverage report creates some files and directories within the project, we should also add a command that removes these for us (clean-up):

# Makefile
...
unit-tests:
	@pytest

unit-tests-cov:
	@pytest --cov=src --cov-report term-missing --cov-report=html

unit-tests-cov-fail:
	@pytest --cov=src --cov-report term-missing --cov-report=html --cov-fail-under=80

clean-cov:
	@rm -rf .coverage
	@rm -rf htmlcov
...

And now we can invoke these with

> make unit-tests
> make unit-tests-cov

and clean up the created files with

> make clean-cov

Once more, we use the software development practice CI to make sure that nothing is broken whenever we commit to our default branch main.

Up until now, we were able to run our tests locally. So let us create our second workflow, which will run on a server from GitHub! We have the option of using codecov.io together with the codecov-action, OR we can create the report in the Pull Request (PR) itself with a pytest-comment action. I'll choose the second option for simplicity.

We can either create a new workflow that runs in parallel to our linter workflow lint.yml (faster) or have one workflow that runs the linters first and then the testing job (more efficient). This is a design choice that depends on the project's needs; both options have pros and cons. For this tutorial, I'll create a separate workflow (test.yml). But before we do that, we need to update our command in the Makefile so that it creates a pytest.xml and a pytest-coverage.txt, which are needed for the pytest-comment action:

# Makefile
...
unit-tests-cov-fail:
	@pytest --cov=src --cov-report term-missing --cov-report=html --cov-fail-under=80 --junitxml=pytest.xml | tee pytest-coverage.txt

clean-cov:
	@rm -rf .coverage
	@rm -rf htmlcov
	@rm -rf pytest.xml
	@rm -rf pytest-coverage.txt
...

Now we can write our workflow test.yml:
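The workflow file itself is not reproduced here; the sketch below shows what it could look like, based on the step-by-step description that follows. The action owner (MishaKav) for the coverage comment, the Python version and the exact inputs are assumptions:

# .github/workflows/test.yml (sketch based on the description below; details may differ)
name: Testing

on:
  push:
    branches: [main]
  pull_request:
    branches: [main]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
        with:
          ref: ${{ github.head_ref }}

      - name: Install poetry
        run: pipx install poetry

      - uses: actions/setup-python@v4
        with:
          python-version: "3.10"
          cache: "poetry"

      - name: Install dependencies
        run: poetry install --with test

      - name: Run unit tests with coverage
        run: poetry run make unit-tests-cov-fail

      - name: Comment coverage report in the PR
        uses: MishaKav/pytest-coverage-comment@main
        with:
          pytest-coverage-path: pytest-coverage.txt
          junitxml-path: pytest.xml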

Let's break it down to make sure we understand each part. GitHub Action workflows must be created in the .github/workflows directory of the repository, in the format of .yaml or .yml files. If you're seeing these for the first time, you can check them out here to better understand them. In the upper part of the file, we give the workflow a name (name: Testing) and define on which signals/events this workflow should be started (on: ...). Here, we want it to run when new commits come into a Pull Request targeting the main branch or when commits are pushed to the main branch directly. The job runs in an ubuntu-latest (runs-on) environment and executes the following steps:

  • Check out the repository using the branch name that is stored in the default environment variable ${{ github.head_ref }}. GitHub action: checkout@v3
  • Install Poetry with pipx, because pipx is pre-installed on all GitHub runners. If you have a self-hosted runner in e.g. Azure, you would have to install it yourself or use an existing GitHub action that does it for you.
  • Set up the Python environment and cache the virtualenv based on the content of the poetry.lock file. GitHub action: setup-python@v4
  • Install the application & its requirements together with the test dependencies that are needed to run the tests with pytest: poetry install --with test
  • Run the tests with the make command: poetry run make unit-tests-cov-fail. Please note that running the tools is only possible in the virtualenv, which we can access through poetry run.
  • Use a GitHub action that allows us to automatically create a comment in the PR with the coverage report. GitHub action: pytest-coverage-comment@main

Once we open a PR targeting the main branch, the CI pipeline will run and we'll see a comment like this in our PR:

Pytest coverage report in PR comment — Image by author

It created a small badge with the total coverage (81%) and linked the tested files with URLs. With another commit in the same feature branch (PR), the same comment for the coverage report is overwritten by default.

To display the status of our new CI pipeline on the homepage of our repository, we can add a badge to the README.md file.

We can retrieve the badge when we click on a workflow run:

Create a status badge from the workflow file on GitHub — Image by author
Copy the badge markdown — Image by author

and select the main branch. The badge markdown can be copied and added to the README.md:
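The copied snippet looks roughly like this (assuming the repository from above and the workflow file name test.yml):

[![Testing](https://github.com/johschmidt42/python-project-johannes/actions/workflows/test.yml/badge.svg?branch=main)](https://github.com/johschmidt42/python-project-johannes/actions/workflows/test.yml)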

The landing page of our GitHub repository now looks like this ❤:

Second badge in README.md: Testing — Image by author

If you are curious how this badge reflects the latest status of the pipeline run in the main branch, you can check out the statuses API on GitHub.
