Packaging Crash Course

This tutorial teaches you briefly how to create a Python package, set up a CI pipeline and publish it to the Acc-Py Package Index. It uses Acc-Py to simplify the process, but also explains what happens under the hood. For each topic, hyperlinks to further information are provided.

Each section should be reasonably self-contained. Feel free to skip boring sections or go directly to the one that answers your question. See also the Acc-Py deployment walkthrough for an alternative approach that converts an unstructured repository of Python code into a deployable Python package.

Loading Acc-Py

If you trust your Python environment, feel free to skip this section. This serves as a baseline from which beginners can start and be confident that none of the experimentation here will impact their other projects.

Start out by loading Acc-Py. We recommend using the latest Acc-Py Base distribution (2021.12 at the time of this writing):

$ source /acc/local/share/python/acc-py/base/pro/setup.sh

If you put this line into your ~/.bash_profile script [1], it will be executed every time you log into your machine. If you don’t want this, but you also don’t want to have to remember this long path, consider putting an alias into your ~/.bash_profile instead:

$ alias setup-acc-py='source /acc/local/share/python/acc-py/base/pro/setup.sh'

This way, you can load Acc-Py by invoking setup-acc-py on your command line.

Note

If you want to use Acc-Py outside of the CERN network, the Acc-Py Package Index wiki page has instructions on how to access it from outside. If you want to use multiple Python versions on the same machine, you may use a tool like Pyenv, Pyflow or Miniconda.

Further reading in the Acc-Py Wiki:

Creating a Virtual Environment

Virtual environments (or venvs for short) separate dependencies of one project from another. This way, you can work on one project that uses PyTorch 1.x, switch your venv, then work on another project that uses PyTorch 2.x.

Venvs also allow you to install dependencies that are not available in the Acc-Py distribution. This approach is much more robust than installing them into your home directory via pip install --user. The latter often leads to hard-to-understand import errors, so it is discouraged.

If you’re working on your BE-CSS VPC, we recommend creating your venv in the /opt directory, since space in your home directory is limited. Obviously, this does not work on LXPLUS, where your home directory is the only choice.

$ # Create a directory for all your venvs.
$ sudo mkdir -p /opt/home/$USER/venvs
$ # Make it your own (instead of root's).
$ sudo chown "$USER:" /opt/home/$USER/venvs
$ acc-py venv /opt/home/$USER/venvs/coi-example

Note

The acc-py venv command is a convenience wrapper around the venv standard library module. In particular, it passes the --system-site-packages flag. This flag ensures that everything that is pre-installed in the Acc-Py distribution is also available in your new environment. Without it, you would have to install common dependencies such as NumPy yourself.
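
Under the hood, this is roughly equivalent to using the venv module directly. As a sketch (the path below is just a placeholder, not the exact implementation of the wrapper):

# Roughly what `acc-py venv` does, expressed via the standard library.
import venv

venv.create(
    "/opt/home/someuser/venvs/coi-example",  # placeholder path
    system_site_packages=True,  # keep access to the Acc-Py packages
    with_pip=True,              # install Pip into the new environment
)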

Once the virtual environment is created, you can activate it like this:

$ source /opt/home/$USER/venvs/coi-example/bin/activate
$ which python  # Where does our Python interpreter come from?
/opt/home/.../venvs/coi-example/bin/python
$ # deactivate  # Leave the venv again.

After activating the environment, you can give it a test run by upgrading the Pip package manager. This change should be visible only within your virtual environment:

$ pip install --upgrade pip

Further reading in the Acc-Py Wiki:

Setting up the Project

Time to get started! Go into your projects folder and initialize a project using Acc-Py:

$ cd ~/Projects
$ acc-py init coi-example
$ cd ./coi-example

Note

Don’t forget to hit the tab key while typing the above lines, so that your shell will auto-complete the words for you!

The acc-py init command creates a basic project structure for you. You can inspect the results via the tree command:

$ tree
.
├── coi_example
│   ├── __init__.py
│   └── tests
│       ├── __init__.py
│       └── test_coi_example.py
├── README.md
└── setup.py

This is usually enough to get started. However, there are two useful files that Acc-Py does not create for us: .gitignore and pyproject.toml. If you’re not in a hurry, we suggest you create them now. Otherwise, continue with Adding Dependencies.

Further reading in the Acc-Py wiki:

Adding .gitignore (Optional)

The .gitignore file tells Git which files to ignore. Ignored files will never show up as untracked or modified if you run git status. This is ideal for caches, temporary files and build artifacts. Without .gitignore, git status would quickly become completely useless.

While you can create this file yourself, we recommend you download Python.gitignore; it is comprehensive and universally used.

Warning

After downloading the file and putting it inside your project folder, don’t forget to rename it to .gitignore!

It is very common to later add project-specific names of temporary files and glob patterns to this list. Do not hesitate to edit it! It only serves as a starting point.

Note

If you use an IDE like PyCharm, it is very common that IDE-specific config and manifest files will end up in your project directory. You could manually add these files to the .gitignore file of every single project.

However, in the long run it is easier to instead add these file names to the global gitignore file that applies to your entire computer (Git locates it via the core.excludesFile setting). This way, you don’t have to ignore the same files again in your next project.

Further reading:

Adding pyproject.toml (Optional)

Setuptools is still the most common tool used to build and install Python packages. Traditionally, it expects project data (name, version, dependencies, …) to be declared in a setup.py file.

Many people don’t like this approach. Executing arbitrary Python code is a security risk and it’s hard to accommodate alternative, more modern build tools such as Poetry, Flit or Meson. For this reason, the Python community has been slowly moving towards a more neutral format.

This format is the pyproject.toml file. It allows a project to declare the build system that it uses and can be read without executing untrusted Python code.

In addition, many Python tools (e.g. Black, Isort, Pylint, Pytest, Setuptools-SCM) can be configured in this file. This reduces clutter in your project directory and makes it possible to do all configuration using a single file format.

If you wonder what a TOML file is, it is a config file format like YAML or INI, but with a focus on clarity and simplicity.

This is what a minimal pyproject.toml file using Setuptools looks like:

# pyproject.toml
[build-system]
requires = ['setuptools']
build-backend = 'setuptools.build_meta'

The section build-system tells Pip how to build and install our package. The key requires gives a list of necessary Python packages. The key build-backend points at a Python module that Pip calls to handle the rest. Across all of your Python projects, this section will almost never change.
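
For the curious: what Pip does with this information is standardized (PEP 517). It imports the module named by build-backend in an isolated environment that contains the packages listed under requires, and then calls hook functions on it. A rough sketch, to be run from the project root (the printed file name is only an example):

# Roughly what Pip does behind the scenes (PEP 517).
import importlib

backend = importlib.import_module("setuptools.build_meta")
# Pip calls this hook in an isolated environment that already
# contains the packages listed under `requires`.
wheel_name = backend.build_wheel(wheel_directory="dist")
print(wheel_name)  # e.g. 'coi_example-0.0.1.dev0-py3-none-any.whl'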

And this is a slightly more complex pyproject.toml that also configures a few tools. Note that, not counting comments, the file is only about 20 lines long:

# We can require minimum versions and [extras]!
[build-system]
requires = [
    'setuptools >= 64',
    'setuptools-scm[toml] ~= 8.0',
    'wheel',
]
build-backend = 'setuptools.build_meta'

# Tell isort to be compatible with the Black formatting style.
# This is necessary if you use both tools.
[tool.isort]
profile = 'black'

# Note that there is no section for Black itself. Normally,
# we don't need to configure a tool just to use it!

# Setuptools-SCM, however, is a bit quirky. The *presence*
# of its config block is required to activate it.
[tool.setuptools_scm]

# PyTest takes its options in a nested table
# called `ini_options`. Here, we tell it to also run
# doctests, not just unit tests.
[tool.pytest.ini_options]
addopts = '--doctest-modules'

# PyLint splits its configuration across multiple tables.
# Here, we minimize the size of its report and disable
# one warning.
[tool.pylint.reports]
reports = false
score = false

# Note how we quote 'messages control' because it contains
# a space character.
[tool.pylint.'messages control']
disable = ['similarities']

Further reading:

Adding Dependencies

Once this is done, we can edit the setup.py file created for us and fill in the blanks. This is what the new requirements look like:

# setup.py
REQUIREMENTS: dict = {
    "core": [
        "cernml-coi ~= 0.9.0",
        "gymnasium >= 0.29",
        "matplotlib ~= 3.0",
        "numpy ~= 1.0",
        "pyjapc ~= 2.0",
    ],
    "test": [
        "pytest",
    ],
}

And this is the new setup() call:

# setup.py (cont.)
setup(
    name="coi-example",
    version="0.0.1.dev0",
    author="Your Name",
    author_email="your.name@cern.ch",
    description="An example for how to use the cernml-coi package",
    long_description=LONG_DESCRIPTION,
    long_description_content_type="text/markdown",
    packages=find_packages(),
    python_requires=">=3.9",
    classifiers=[
        "Programming Language :: Python :: 3",
        "Intended Audience :: Science/Research",
        "Natural Language :: English",
        "Operating System :: OS Independent",
        "Programming Language :: Python :: 3 :: Only",
        "Programming Language :: Python :: 3.9",
        "Programming Language :: Python :: 3.10",
        "Programming Language :: Python :: 3.11",
        "Topic :: Scientific/Engineering :: Artificial Intelligence",
        "Topic :: Scientific/Engineering :: Physics",
    ],
    # Rest as before …
)

Of all these changes, only the description and the requirements were really necessary. Things like classifiers are nice-to-have metadata that we could technically also live without.
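
For completeness, the elided part of the setup() call is where the REQUIREMENTS dictionary from above gets used. One plausible shape (the exact template generated by Acc-Py may differ slightly) is:

# setup.py (sketch of the elided part)
setup(
    # ... metadata as shown above ...
    install_requires=REQUIREMENTS["core"],
    extras_require={
        "test": REQUIREMENTS["test"],
        # A convenience extra that pulls in everything at once.
        "all": [req for reqs in REQUIREMENTS.values() for req in reqs],
    },
)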

Further reading:

Version Requirements (Digression)

Note

This section is purely informative. If it bores you, feel free to skip ahead to Test Run.

When specifying your requirements, you should make sure to put in a reasonable version range for two simple reasons:

  • Being too lax with your requirements means that a package that you use might change something and your code suddenly breaks without warning.

  • Being too strict with your requirements means that other people will have a hard time making your package work in conjunction with theirs, even though all the code is correct.

There are two common ways to specify version ranges:

  • ~= 0.4.2 means: “I am compatible with version 0.4.2 and higher, but not with any version 0.5.X.” This is a good choice if the target adheres to Semantic Versioning. (Not all packages do! NumPy doesn’t, for example!)

  • >=1.23, <1.49 means: “I am compatible with version 1.23 and higher, but not with version 1.49 and beyond.” This is a reasonable choice if you know a version of the target that works for you and a version that doesn’t.

Other version specifiers mainly exist for strange edge cases. Only use them if you know what you’re doing.
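
If you are unsure how a given specifier behaves, you can evaluate it with the third-party packaging library (the same library Pip uses internally). A small sketch:

# Evaluating version specifiers with the `packaging` library.
from packaging.specifiers import SpecifierSet

compatible = SpecifierSet("~= 0.4.2")
print("0.4.9" in compatible)  # True:  any 0.4.x with x >= 2 is allowed
print("0.5.0" in compatible)  # False: 0.5.X is excluded

pinned = SpecifierSet(">= 1.23, < 1.49")
print("1.30" in pinned)  # True
print("1.49" in pinned)  # False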

Further reading:

Test Run

With this minimum in place, your package can already be installed via Pip! Give it a try:

$ pip install .  # "." means "the current directory".

Once this is done, your package is installed in your venv and can be imported by other packages without any path hackery:

>>> import coi_example
>>> coi_example.__version__
'0.0.1.dev0'
>>> import pkg_resources
>>> pkg_resources.get_distribution('coi-example')
coi-example 0.0.1.dev0 (/opt/home/.../venvs/coi-example/lib/python3.9/site-packages)

Of course, you can always remove your package again:

$ pip uninstall coi-example

Warning

Installation puts a copy of your package into your venv. This means that every time you change the code, you have to reinstall it for the changes to become visible.

There is also the option to symlink from your venv to your source directory. In this case, all changes to the source code become visible immediately. This is bad for a production release, but extremely useful during development. This feature is called an editable install:

$ pip install --editable .  # or `-e .` for short

Further reading:

SDists and Wheels (Digression)

Note

This section is purely informative. If it bores you, feel free to skip ahead to Continuous Integration.

The act of bringing Python code into a publishable format has a lot of historical baggage. This section skips most of the history and explains the terms that are most relevant today.

Python is an interpreted language. As such, one could think that there is no compilation step, and that the source code of a program is enough in order to run it. However, this assumption is wrong for a number of reasons:

  • some libraries contain extension code written in C or FORTRAN that must be compiled before using them;

  • some libraries generate their own Python code during installation;

  • all libraries must provide their metadata in a certain, standardized format.

As such, even Python packages must be built to some extent before publication.

The publishable result of the build process is a distribution package (confusingly often called distribution or package for short). There are several historical kinds of distribution packages, but only two remain relevant today: sdists and wheels.

Sdists contain only the above-mentioned metadata and all relevant source files. They do not contain project files that are not packaged by the author (e.g. .gitignore). Because they contain only source code, any C extensions must be compiled during installation. For this reason, installation is a bit slower and may run arbitrary code.

Wheels are a binary distribution format. Under the hood, they are zip files with a certain directory layout and file name. They come fully built and any C extensions are already compiled. This makes them faster and safer to install than sdists. The disadvantage is that if your project contains C extensions, you have to provide one wheel for each supported platform.
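
You can verify the “zip file” claim yourself: assuming you have built a wheel into the dist/ folder (the file name below is only an example), the standard library can list its contents:

# Peeking inside a wheel with the standard library.
import zipfile

with zipfile.ZipFile("dist/coi_example-0.0.1.dev0-py3-none-any.whl") as wheel:
    for name in wheel.namelist():
        print(name)
# Prints something like:
#   coi_example/__init__.py
#   coi_example-0.0.1.dev0.dist-info/METADATA
#   ...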

Given that most projects will be written purely in Python, wheels are the preferred distribution format. Depending on circumstances, it may make sense to publish an sdist in addition. The way to manually create and upload a distribution to the package repository is described elsewhere. See Releasing a Package via CI for the preferred and supported method at CERN.

Further reading:

Continuous Integration

Continuous integration is a strategy that prefers to merge features into the main development branch frequently and early. This ensures that different branches never diverge too much from each other. To facilitate this, websites like Gitlab offer CI pipelines that build and test code on each push automatically.

Continuous delivery takes this a step further and also automates the release of software. When people talk about “CI/CD”, they usually refer to having an automated pipeline of tests and releases.

Why do we care about all of this? Because Gitlab’s CI/CD pipeline is the only supported way to put our Python package on the Acc-Py package index.

You configure the pipeline with a file called .gitlab-ci.yml at the root of your project. Run the command acc-py init-ci to have a template of this file generated in your project directory. It should look somewhat like this:

# Use the acc-py CI templates documented at
# https://acc-py.web.cern.ch/gitlab-mono/acc-co/devops/python/acc-py-gitlab-ci-templates/docs/templates/master/
include:
  - project: acc-co/devops/python/acc-py-gitlab-ci-templates
    file: v2/python.gitlab-ci.yml
variables:
  project_name: coi_example
  # The PY_VERSION and ACC_PY_BASE_IMAGE_TAG variables control the
  # default Python and Acc-Py versions used by Acc-Py jobs. It is
  # recommended to keep the two values consistent. More details
  # https://acc-py.web.cern.ch/gitlab-mono/acc-co/devops/python/acc-py-gitlab-ci-templates/docs/templates/master/generated/v2.html#global-variables.
  PY_VERSION: '3.9'
  ACC_PY_BASE_IMAGE_TAG: '2021.12'

# Build a source distribution for foo.
build_sdist:
  extends: .acc_py_build_sdist

# Build a wheel for foo.
build_wheel:
  extends: .acc_py_build_wheel

# A development installation of foo tested with pytest.
test_dev:
  extends: .acc_py_dev_test

# A full installation of foo (as a wheel) tested with pytest on an
# Acc-Py image.
test_wheel:
  extends: .acc_py_wheel_test

# Release the source distribution and the wheel to the acc-py
# package index, only on git tag.
publish:
  extends: .acc_py_publish

Let’s see what these pieces do.

include

The first block makes a number of Acc-Py CI templates available to you. These templates are a pre-bundled set of configurations that make it easier for us to define our pipeline in a moment. You can distinguish job templates from regular jobs because their names start with a period (.).

variables

The next block defines a set of variables that we can use in our job definitions with the syntax $variable_name. The variables defined here are not special on their own, but the Acc-Py CI templates happen to use them to fill in some blanks, such as which Python version you want to use.

build_sdist

This is our first job definition. The name has no special meaning; in principle, you can name your jobs whatever you want (though you should obviously pick something descriptive).

Each job has a trigger, i.e. the conditions under which it runs. Examples are: on every push to the server, on every pushed Git tag, on every push to the master branch, or only when triggered manually.

Each job also has a stage that determines at which point in the pipeline it will run. Though you can define and order stages as you like, the default is: build → test → deploy. Whenever a trigger fires, all relevant jobs are collected into a pipeline and run, one stage after the other.

In our case, each job contains only one line; it tells us that our job extends a template. This means that it takes over all properties from that template. If you define any further attributes for this job, they will generally override the same properties of the template.

See here for an example of what these templates look like. This gives you an idea of the keys you can and might want to override. Note that a job can extend multiple other jobs; the merge details for how this works are documented on Gitlab.

Further reading:

Testing Your Package

As you might have noticed, the acc-py init call created a sub-package of your package called “tests”. This package is meant for unit tests: small functions that you write to verify that the logic you wrote does what you think it does.

Acc-Py initializes your .gitlab-ci.yml file with two jobs for testing:

  • a dev test that runs the tests directly in your source directory,

  • a wheel test that installs your package and runs the tests in the installed copy. This is particularly important, as it ensures that your package will work not just for you, but also for your users.

Both use the same program, PyTest, to discover and run your unit tests. The way PyTest does this is simple: it searches for files that match the pattern test_*.py and, inside them, for functions whose names match test_*. All functions that it finds are run without arguments. As long as they don’t raise an exception, PyTest assumes they succeeded. The assert statement should be used liberally in your unit tests to verify your assumptions.

If you have any non-trivial logic in your code – anything beyond getting and setting parameters – we strongly recommend putting it into separate functions. These functions should depend only on their parameters and not on any global state. This way, it becomes much easier to write unit tests that ensure they work as expected. And most importantly: that future changes you make won’t silently break them!
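
As a sketch (the helper function and file names are made up for illustration), such a pure function and its unit test could look like this:

# coi_example/utils.py (hypothetical helper module)
def normalize(values):
    """Scale a list of numbers so that their maximum becomes 1.0."""
    peak = max(values)
    return [value / peak for value in values]

# coi_example/tests/test_utils.py (hypothetical test module)
from coi_example.utils import normalize

def test_normalize():
    assert normalize([1.0, 2.0, 4.0]) == [0.25, 0.5, 1.0]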

If you’re writing a COI optimization problem that does not depend on JAPC or LSA, there is one easy test case you can always add: run the COI checker with your class to catch some common pitfalls:

# coi_example/tests/test_coi_example.py
from cernml import coi

def test_checker():
    env = coi.make("YourEnv-v0")
    coi.check(env, warn=True, headless=True)

If your program is in a very strange niche where it is impossible to test it reliably, you can also remove the testing code: remove the “tests” package, and delete the two test jobs from your .gitlab-ci.yml file.

Further reading:

Releasing a Package via CI

Once CI has been set up and tests have been written (or disabled), your package is ready for publication! Outside of CERN, Twine is the command of choice to upload a package to PyPI, but Acc-Py already does this job for us.

Warning

Publishing a package is permanent! Once your code has been uploaded to the index, you cannot remove it again. And once a project name has been claimed, it usually cannot be transferred to another project. Be doubly and triply sure that everything is correct before following the next steps!

If your project is not in a Git repository yet, this is the time to check it in:

$ git init
$ git add --all
$ git commit --message="Initial commit."
$ git remote add origin ...  # The clone URL of your Gitlab repo
$ git push --set-upstream origin master

Then, all that is necessary to publish the next (or first) version of your package is to create a Git tag and upload it to Gitlab.

$ # The tag name doesn't actually matter,
$ # but let's stay consistent.
$ git tag v0.0.1.dev0
$ git push --tags

This will trigger a CI pipeline that builds, tests and eventually releases your code. Once this pipeline has finished successfully (which includes running your tests), your package is published and immediately available anywhere inside CERN:

$ cd ~
$ pip install coi-example

Warning

The version of your package is determined by setup.py, not by the tag name you choose! If you tag another commit but don’t update the version number, and you push this tag, your pipeline will kick off, run through to the deploy stage and then fail due to the version conflict.

Further reading:

Extra Credit

You are done! The following sections give only a little bit more background information on Python packaging, but they are not necessary for you to get off the ground. Especially if you’re a beginner, feel free to stop here and maybe return later.

Getting Rid of setup.py

While setup.py is what Acc-Py generates for us, there are several problems with putting all your project metadata into it:

  • No tools other than Setuptools can read the format.

  • It’s impossible to extract metadata without executing arbitrary, possibly untrusted Python code.

  • The logic before the setup() call quickly becomes hard to read.

  • Most projects don’t need the full flexibility of arbitrary Python to declare their metadata.

For this reason, Setuptools recommends putting all your metadata into pyproject.toml, just like you already do for most other Python tools. The most important patterns you know from setup.py can easily be replicated there using dedicated keys and values.

Take for example this setup script:

# setup.py
from pathlib import Path

from setuptools import find_packages, setup

# Find the source code of our package.
PROJECT_ROOT = Path(__file__).parent.absolute()
PKG_DIR = PROJECT_ROOT / "my_package"

# Find the version string without actually executing our package.
with open(PKG_DIR / "__init__.py", encoding="utf-8") as infile:
    for line in infile:
        name, equals, version = line.partition("=")
        name = name.strip()
        version = version.strip()
        if name == "VERSION" and version[0] == version[-1] == '"':
            version = version[1:-1]
            break
    else:
        raise ValueError("no version number found")

# Read our long description out of the README file.
with open(PROJECT_ROOT / "README.rst", encoding="utf-8") as infile:
    readme = infile.read()

setup(
    name="my_package",
    version=version,
    author="My Name",
    author_email="my.name@cern.ch",
    long_description=readme,
    packages=find_packages(),
    install_requires=[
        "requests",
        'importlib_metadata; python_version < "3.8"',
    ],
    extras_require={
        "pdf": ["ReportLab>=1.2", "RXP"],
        "rest": ["docutils>=0.3", "pack == 1.1, == 1.3"],
    },
)

does the same as this configuration file:

# pyproject.toml
[build-system]
requires = ['setuptools']
build-backend = 'setuptools.build_meta'
# ^^^ same as before ^^^

[project]
name = 'my_package'
readme = { file = 'README.rst' }
dynamic = ['version']
authors = [
    { name = 'My Name', email = 'my.name@cern.ch' },
    # More than one author supported now!
]
dependencies = [
    'requests',
    'importlib_metadata; python_version < "3.8"' # String inside string!
]

[project.optional-dependencies]
pdf = ['ReportLab>=1.2', 'RXP']
rest = ['docutils>=0.3', 'pack ==1.1, ==1.3']

[tool.setuptools.dynamic]
version = { attr = 'my_package.VERSION' }

# [tool.setuptools.packages.find]
# ^^^ Not needed, Setuptools does the right thing automatically!

And with Setuptools version 40.9 or higher (released in 2019), you can completely remove the setup.py file after this change; note that reading the [project] table itself requires Setuptools 61.0 or newer. With older versions, you would still need this stub file:

# setup.py
from setuptools import setup
setup()

Further reading:

Single-Sourcing Your Version Number

Over time, it becomes annoying to increase your version number every time you release a new version of your package. On top of that, Acc-Py requires us to use Git tags to publish our package, but doesn’t actually use the name of the tag at all. It would be nice if we could just make the tag name our version number and read that into our project metadata.

Setuptools-SCM is a plugin for Setuptools that does precisely that. It generates your version number automatically based on your Git tags and feeds it directly into Setuptools. The minimal setup looks as follows:

# pyproject.toml
[build-system]
requires = [
    'setuptools>=45',
    'setuptools_scm[toml]>=6.2',
]
build-backend = 'setuptools.build_meta'

# Warn Setuptools that the version key is
# generated dynamically.
[project]
dynamic = ['version']

# This section is ALWAYS necessary, even
# if it's empty.
[tool.setuptools_scm]

You can also add a key write_to to the configuration section in pyproject.toml to automatically generate – during installation! – a source file in your package that contains the version number:

# pyproject.toml
[tool.setuptools_scm]
write_to = 'my_package/version.py'
# my_package/__init__.py
from .version import version as __version__
...

Warning

Don’t do this! Adding a __version__ variable to your package is deprecated. If you need to gather a package’s version programmatically, do this:

# Use backport on older Python versions.
try:
    from importlib import metadata
except ImportError:
    import importlib_metadata as metadata

version = metadata.version("name-you-gave-to-pip-install")

which is provided by the importlib.metadata standard library package (Python 3.8+) or its backport (Python 3.6+).

Here are some very clever solutions that people come up with every now and then, all of which are broken for one reason or another:

Passing my_package.__version__ to setup() in setup.py

This requires you to import your own package while you’re trying to install it. As soon as you try to import one of your dependencies, this will break because Pip hasn’t had a chance to install your dependencies yet.

Specify version = attr: my_package.__version__ in setup.cfg

On Setuptools before version 46.4, this does the same as the first option. It unconditionally attempts to import the package before it is installed. Thus it also has the same problems.

If you don’t know what setup.cfg is, don’t worry about it; it was an intermediate format before pyproject.toml became popular.

As above, but require setuptools>=46.4 in pyproject.toml:

New versions of Setuptools textually analyze your code and try to find __version__ without executing any of it. If this fails, however, Setuptools still falls back to importing your package and breaks again.

Specify attr = 'my_package.__version__' in pyproject.toml

This is in fact exactly identical to the previous approach.

Further reading:

Automatic Code Formatting

Although a lot of programmers have needlessly strong opinions on it, consistent code formatting has two undeniable advantages:

  1. it makes it easier to spot typos and related bugs;

  2. it makes it easier for other people to read your code.

At the same time, it requires a lot of pointless effort to:

  • pick,

  • follow

  • and enforce

a particular style guide.

Ideally, code formatting would be consistent, automatic and require as little human input as possible. Luckily, Black does all of these:

  • It is automatic. You write your code however messily you want. You simply run black . at the end and it adjusts your files in place so that they are formatted completely uniformly.

  • Editor integration for it is almost universal. No matter which IDE you use, you can configure it so that Black runs every time you save a file or make a Git commit. This way, you can stop thinking about formatting entirely.

  • The Black code style has little configurability. This obviates pointless style discussions as they are known in the C++ world and allows people to focus on the discussions that matter.
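
To give you a feel for it, here is a small sketch of code before and after running Black (the output shown is approximately what Black produces):

# Before: inconsistent quotes, spacing and line breaks.
limits = { 'low':0,'high' : 10}
def clip (value,limits) :
    return min(max(value,limits[ 'low' ]),limits['high'])

# After `black .`: one uniform style, no manual decisions.
limits = {"low": 0, "high": 10}


def clip(value, limits):
    return min(max(value, limits["low"]), limits["high"])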

On top of that, you may also want to run Isort so that your import statements are always grouped correctly and cleaned up. Like Black, it is supported by a large number of editors. To make it compatible with Black, add these lines to your configuration:

# pyproject.toml
[tool.isort]
profile = "black"

Linting

With Python being the dynamically typed scripting language that it is, it is very easy to let accidental bugs slip into your code. Just a small typo and you can spend half an hour wondering why a variable doesn’t get updated.

Static analysis tools that scan your code for bugs and anti-patterns are often called linters as they work like a lint trap in a clothes dryer. For Python beginners, the most comprehensive choice is Pylint. It’s a general-purpose linter that catches, among other things:

  • style issues (line too long),

  • excessive complexity (too many lines per function),

  • suspicious patterns (unused variables),

  • outright bugs (undefined variable).

In contrast to Black, PyLint is extremely configurable and encourages users to enable or disable lints as necessary. Here is an example configuration:

# pyproject.toml
[tool.pylint.format]
# Compatibility with Black.
max-line-length=88
# Lines with URLs shouldn't be marked as too long.
ignore-long-lines = '<?https?://\S+>?$'

[tool.pylint.reports]
# Don't show a summary, just print the errors.
reports = false
score = false

# TOML quirk: because of the space in "messages control",
# we need quotes here.
[tool.pylint.'messages control']
# Every Pylint warning has a name that you can put in this
# list to turn it off for the entire package.
disable = [
    'duplicate-code',
    'unbalanced-tuple-unpacking',
]

Sometimes, PyLint gives you a warning that you find generally useful, but just this time, you think it shouldn’t apply and the code is actually correct. In this case you can add a comment like this to suppress the warning:

# pylint: disable = unused-import

These comments respect scoping. If you put them within a function, they apply to only that function. If you put them at the end of a line, they only apply to that line.
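
A short sketch of both cases (the message names used here are real Pylint message names):

# A line-scoped suppression: applies to this line only.
import os  # pylint: disable = unused-import

def busy_function():
    # A block-scoped suppression: applies to this function only.
    # pylint: disable = too-many-branches
    ...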

You can prevent bugs from silently sneaking into your code by running PyLint in your CI/CD pipeline every time you push code to Gitlab:

# .gitlab-ci.yml
test_lint:
  extends: .acc_py_base
  stage: test
  before_script:
    - python -m pip install pylint black isort
    - python -m pip install -e .
  script:
    # Run each linter, but don't abort on error. Only abort
    # at the end if any linter failed. This way, you get all
    # warnings at once.
    - pylint ${project_name} || pylint_exit=$?
    - black --check . || black_exit=$?
    - isort --check . || isort_exit=$?
    - if [[ pylint_exit+black_exit+isort_exit -gt 0 ]]; then false; fi

If you write Python code that is used by other people, you might also want to add type annotations and use a type checker like Mypy or PyRight.
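
As a small sketch of what that looks like (the function is made up for illustration):

# A type-annotated function; Mypy can then check every call site.
def scale(values: list[float], factor: float) -> list[float]:
    return [value * factor for value in values]

scale([1.0, 2.0], 0.5)    # fine
scale([1.0, 2.0], "0.5")  # flagged by Mypy: "0.5" is not a float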

Further reading: