Managing Python Packages
Warning
Since installing packages can have side effects on other extensions or the main application, here are some best practices to adhere to:
DO:
✅ Always include a confirmation dialog that clearly communicates the installation process.
✅ Document the dependencies your module relies upon.
✅ Consider specifying version requirements using ~=X.Y to avoid incompatible versions.
✅ Verify that all Python packages are distributed as Python wheels. This is particularly important for dependencies including compiled code, as installing a wheel eliminates the need for users to install a compiler.
DON’T:
❌ Do not install packages in use by core Slicer functionality or by other extensions.
❌ Do not install any packages in the global scope (outside of all classes and functions) or in the module class constructor. This can significantly slow down application startup, and it may even prevent the module from loading.
❌ Do not pin to a specific version of the package with ==, as this will generate conflicts with other package versions. Pinning dependencies should be considered only in the context of custom applications where the deployment environment is tightly controlled. When in doubt, prefer the ~=X.Y Compatible Release specifier.
Automatic Installation
This is the preferred method of managing packages, as it automatically handles most of the best practices described above. It requires extension developers to declare their dependencies in a requirements.txt, and the slicer.packaging.Requirements() mechanism guarantees that no module can violate the dependencies of another installed module.
Tip
See the PyPA documentation about Requirements Files and Constraints Files for more information. The documentation on the Requirements File Format lists all the supported features.
Creating a Suitable requirements.txt
To declare your dependencies, you must create a requirements.txt file that will be accessible when your Scripted Loadable Module is installed in users' Slicer environments. To do this in a portable way we use importlib.resources, so the requirements.txt file must be in a valid Python package.
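To see why the file must live in a Python package, consider how a 'package:filename' identifier can be resolved portably with importlib.resources. The helper below is an illustrative sketch, not Slicer's actual lookup code; the name read_resource is hypothetical.

```python
from importlib import resources

def read_resource(identifier: str) -> str:
    """Resolve a 'package:filename' identifier to file contents.

    Illustrative only: Slicer's own resolution logic may differ.
    """
    package, _, filename = identifier.partition(":")
    # resources.files() locates the package whether it is installed
    # as a plain directory, inside a zip, or elsewhere on the import path.
    return resources.files(package).joinpath(filename).read_text()
```

With the libSample layout described below, read_resource('libSample:requirements.txt') would return the file's text.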
For example, say we already have a Scripted Loadable Module Sample.py (as generated by the Extension Wizard), with a directory structure like:
Sample/
├── Resources/
├── Testing/
├── CMakeLists.txt
└── Sample.py
First, we must create a Python package directory to hold our requirements.txt. A Python package directory is just a folder which contains an __init__.py file, so it is recognized by the Python import system. If you don't already have any Python package for your module, we recommend the lib prefix for its name. In our example, that is libSample.
Tip
A Python package like libSample is a great place to put business logic for your extension. If you already have a package for business logic, you can place your requirements.txt there instead of creating a new one.
Create the package directory, __init__.py, and requirements.txt:
Sample/
├── libSample/
│ ├── __init__.py
│ └── requirements.txt
├── Resources/
├── Testing/
├── CMakeLists.txt
└── Sample.py
It is also important to declare these new files in CMakeLists.txt. Add libSample/__init__.py to the MODULE_PYTHON_SCRIPTS list, and add libSample/requirements.txt to the MODULE_PYTHON_RESOURCES list.
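For a module generated by the Extension Wizard, the relevant portion of CMakeLists.txt might then look like the following sketch; your file will already contain additional entries, which should be kept.

```cmake
set(MODULE_PYTHON_SCRIPTS
  Sample.py
  libSample/__init__.py
  )

set(MODULE_PYTHON_RESOURCES
  libSample/requirements.txt
  )
```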
Now you can refer to requirements.txt by its identifier libSample:requirements.txt in a Requirements block.
Populating requirements.txt
Now populate the requirements.txt with your primary dependencies. For example, say we want to use pandas. Suppose we've only tested our tool with version 2.2.1, but we are careful not to use deprecated features. In this case, use the Compatible Release specifier pandas~=2.2 and add this to requirements.txt. (If we did use deprecated features, we'd want to use pandas~=2.2.1.)
Tip
When in doubt, use the package~=X.Y Compatible Release specifier. Be very careful if you use a more specific version specifier than this (e.g. ~=X.Y.Z or <A.B.C), as your extension may become incompatible with other extensions.
Warning
Do not use == specifiers or hashes, as this almost guarantees incompatibility with other extensions.
Do not use -e or --editable options.
Be cautious with VCS dependencies. Always use @ and a separate constraints file.
Using the Requirements Context Manager
Add a with Requirements block at the top of your module file:
from slicer.packaging import Requirements

with Requirements('libSample:requirements.txt'):
    import pandas as pd
This does a few things:
- It registers your requirements as constraints which cannot be overridden by other extensions. If you've written pandas~=2.2, you are guaranteed that your users' Python environments will have a Pandas compatible with that version.
- It checks that your requirements do not conflict with Slicer core or any other extensions. If they do, a clear error message is shown and logged. Check your users' logs in bug reports!
- Requirements are not installed until pd is used. The GuardedImport creates a proxy around the pd module, and doesn't actually import the library until the first time it is used via attribute access.
- The user is prompted before modifying their Python environment. If changes must be made to satisfy the constraints of all installed extensions, a brief summary of changes is presented to the user. Changes are only applied if they click "OK" (or if the tool runs in a script environment where user interaction is not available).
Generally, this means you can use pd as if it were already installed. If it's installed in your development environment, IntelliSense and similar static analysis should correctly recognize the import pandas as pd statement.
For example, you might have code like:
from slicer.packaging import Requirements

with Requirements('libSample:requirements.txt'):
    import pandas as pd

class SampleLogic(ScriptedLoadableModuleLogic):
    def load_data(self, path: Path):
        return pd.read_csv(path)
        # ^ First access triggers installation, if necessary.
Warning
This prompts installation on first attribute access. Recall one of the best practices:
❌ Do not install any packages in the global scope (outside of all classes and functions) or in the module class constructor. This can significantly slow down application startup, and it may even prevent the module from loading.
This means accessing attributes at the global level will trigger package installation during Slicer startup.
with Requirements('libSample:requirements.txt'):
    import pandas as pd

df: pd.DataFrame
# ^ Triggers installation

pd.options.display.max_rows = 999
# ^ Triggers installation
To support type annotations at the global level, avoid this issue with from __future__ import annotations, which defers evaluation of annotations.
from __future__ import annotations

with Requirements('libSample:requirements.txt'):
    import pandas as pd

df: pd.DataFrame
# ^ Does NOT trigger installation
Move any other such usages into your module's widget setup method, and consider manual Requirements resolution for finer control of when the dependencies are resolved.
See Proxy Modules for details on how this works.
Separate Requirements and Constraints
Since not all statements allowed by requirements.txt are allowed by constraints.txt, it may be necessary to declare requirements and constraints separately. For example, it is not possible to specify extras like pandas[excel] in constraints files.
with Requirements(
    requirements='libSample:requirements.txt',
    constraints='libSample:constraints.txt',
):
    import pandas as pd
The constraints define which versions of packages may be changed by other modules, and the requirements are used to install the packages for this module. Thus this usage is only valid if requirements.txt satisfies constraints.txt.
Multiple Import Groups
It is possible to include multiple import groups in a single file. Used with care, this reduces the installation footprint when certain features are never used.
with Requirements('libSample:ml-requires.txt'):
    import torch

with Requirements('libSample:images-requires.txt'):
    import itk
Warning
Do not import the same package from multiple import groups in the same file. It is possible to do this with import ... as ..., but it is not recommended. A better solution is manual Requirements resolution.

with Requirements(...):
    import itk as itk_a

with Requirements(...):
    import itk as itk_b
Proxy Modules
Requirements produces Lazy Proxy Modules. Importing the proxy does not import the actual module until the first time any module attribute (classes, functions, sub-modules, etc.) is accessed. At that point, any required packages are installed and the real module is imported.
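The idea can be illustrated with a minimal self-contained sketch. This is not Slicer's implementation: LazyModule is a hypothetical class, and the real mechanism also installs any missing packages before importing.

```python
import importlib
import types

class LazyModule(types.ModuleType):
    """Defer the real import until the first attribute access."""

    def __init__(self, name: str):
        super().__init__(name)
        self._real = None  # The backing module, once imported.

    def __getattr__(self, attr):
        # Called only when normal lookup fails, i.e. for attributes
        # that belong to the real module.
        if self._real is None:
            # Slicer would install any required packages here first.
            self._real = importlib.import_module(self.__name__)
        return getattr(self._real, attr)

json_proxy = LazyModule("json")
# Nothing has been imported yet; this first access triggers the real import.
print(json_proxy.dumps({"a": 1}))  # prints {"a": 1}
```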
You can test whether a module is a slicer.packaging.LazyProxyModule via isinstance.
with Requirements(...):
    import itk

print(itk)  # <module 'itk' (<LazyProxyLoader object at 0x737286a4cfd0>)>
print(isinstance(itk, LazyProxyModule))  # True
If a module has already been imported, for example by a badly-behaved Slicer extension that installed and imported a dependency at startup, then the already loaded module will be imported instead and a proxy will not be used.
import itk  # Assume ITK is already installed.

with Requirements(...):
    import itk

print(itk)  # <module 'itk' from 'site-packages/itk/__init__.py'>
print(isinstance(itk, LazyProxyModule))  # False
If a proxy module cannot be used in some situation, use the function slicer.packaging.real_module to obtain the backing module from the proxy module.
with Requirements(...):
    import itk

print(itk)  # <module 'itk' (<LazyProxyLoader object at 0x7f9453a50790>)>
print(real_module(itk))  # <module 'itk' from 'site-packages/itk/__init__.py'>
To avoid breaking other modules which do not use the Requirements functionality, import statements that occur outside a with Requirements block are never affected. In particular, local imports cannot guarantee that the dependencies have been resolved.
with Requirements(...):
    import itk

import itk  # ModuleNotFoundError: No module named 'itk'
Manual Requirements Resolution
It is also possible to use Requirements manually, without the context manager, and explicitly indicate when the dependencies should be resolved. Calling resolve() multiple times has no runtime penalty, so use it freely.
requirements = Requirements(...)

try:
    requirements.resolve()
    import itk
except InstallationAbortedError:
    ...  # handle rejection
except CalledProcessError:
    ...  # handle installation failure
If the user rejects installation the first time, that rejection is maintained and the user will not be prompted again. It is possible to undo this with reset_rejection(); then the next resolve() would show the prompt.
You can mix manual resolution and automatic resolution.
with Requirements(...) as requirements:
    import itk

try:
    requirements.resolve()
except InstallationAbortedError:
    ...  # handle rejection
except CalledProcessError:
    ...  # handle installation failure
Prototyping with extra_args
For convenience, Requirements supports an optional keyword-only argument extra_args, which is passed directly to pip_install. By passing None as the resource identifier for requirements, it is possible to completely define the dependency group without any boilerplate: no Python package directory, no requirements.txt, and no CMakeLists modifications.
Danger
While this is much easier for prototyping modules and sharing small scripts, do not use this for published extensions except in exceptional situations. There is no clear way to declare constraints when used in this way, so expect that other extensions may break your module if you do this.
with Requirements(None, extra_args='itk'):
    import itk
Remember that requirements.txt supports many pip options, so extra_args is almost certainly not necessary in production, although you may need to provide separate requirements and constraints. See Requirements File Format for full details.
Manual Installation
The utility functions slicer.util.pip_install and slicer.util.pip_uninstall are still available for backward compatibility and for usage from the Python console.
pip_install
All the prior guidance and best practices still apply, except that manually showing a prompt to the user is no longer necessary, as pip_install automatically requests user confirmation. Pass interactive=False to disable this and rely on existing confirmation systems.
def on_button_press(self):
    pip_install('-U pandas', interactive=False)  # Upgrade pandas without prompting the user.
Danger
Do not use pip_uninstall. Prefer pip_install with version specifiers if a package downgrade is necessary.
It is not possible to apply constraints to pip_uninstall, so this function will almost certainly break functionality. Use it sparingly and with great care.
FileIdentifier and register_constraints
requirements.txt and constraints.txt files are identified by a FileIdentifier object. This contains:
- A name for the requirements, shown in prompts to the user (typically the Slicer module name).
- A resource identifier of the form 'package:filename', used by importlib.resources to locate the file at runtime.
A FileIdentifier identifying a Requirements File may be passed to pip_install.
pip_install(requirements=FileIdentifier(
    'Sample Image Processing Pipeline',
    'libSample:requirements.txt',
))
A FileIdentifier identifying a Constraints File may be passed to register_constraints to enforce constraints in all subsequent pip_install usages. If an operation would violate a constraint, the name is shown in the user prompt and error logs. register_constraints should be called during Slicer startup to ensure all subsequent pip_install calls use the constraints file.
register_constraints(FileIdentifier(
    'Sample Image Processing Pipeline',
    'libSample:constraints.txt',
))
UV and Pip Compatibility
Note that pip_install now uses uv pip install as its backing implementation, for better speed when resolving constraints. This is almost a drop-in replacement for pip install, but there are some inconsistencies. Be mindful of this and refer to astral.sh/uv/pip/compatibility for details.
Messages and Errors
Confirmation Dialogs
Any usage of pip_install or Requirements that would make changes to the Python environment will, if user interaction is available, show a confirmation dialog to the user with a summary of changes. For example, pip_install('-U flywheel') might present to the user:
Resolving dependencies for -U flywheel.
Resolved 7 packages in 509ms
Would download 1 package
Would uninstall 1 package
Would install 1 package
- flywheel==0.5.3
+ flywheel==0.5.4
If the user accepts these changes, they will be applied. If the user rejects them, an InstallationAbortedError is raised and may be handled.
Constraints Resolution
Any usage of pip_install or Requirements that would break the dependencies of another registered constraint (i.e. Slicer Core or another Requirements block) will refuse to do so, present a summary to the user, and add detailed messages to the logs. For example, pip_install('numpy~=1.25.0') might show:
Cannot install packages for 'numpy~=1.25.0' because it would violate constraints:
* Slicer Core
* SampleModule
× No solution found when resolving dependencies:
╰─▶ Because you require numpy>=1.25.0,<1.26.dev0 and numpy==1.26.4, we can conclude that your requirements are unsatisfiable.
Installation Failure
In some cases, constraints may be satisfied and the user may accept installation, but the install
process fails. Usually this is due to some network or filesystem error, or due to a missing build
constraint (eg. compilers, CUDA drivers, etc.). In any case, the uv
process’s output is logged and
a CalledProcessError
is raised and may be handled.
Invalid Arguments
The Slicer pip_install() layer does not validate version specifiers or other pip arguments; these are reported by the uv process. For example, error: unexpected argument or a constraints error like error: failed to parse might appear here. In any case, the uv process's output is logged and a CalledProcessError is raised and may be handled.