Core Classes of This Package¶
See also
- The Core API
User guide page on most classes introduced here.
- Control Flow of Optimization Problems
User guide page on the order in which the methods introduced here are expected to be called.
- Custom Per-Problem Optimizers
User guide page specifically on CustomOptimizerProvider.
- Optimizing Points on an LSA Function
User guide page specifically on FunctionOptimizable.
These are the most prominent classes provided by this package.
They extend the API provided by Gymnasium and are heavily inspired by it. They are, in turn:
- Problem: The root of the interface hierarchy. Both of the following two classes and coi.Env are subclasses of it.
- SingleOptimizable: A coi.Env-like class for numerical optimization problems.
- FunctionOptimizable: A variant of SingleOptimizable for situations where a function over time must be optimized pointwise in order. This is the case e.g. when adjusting the tune of a circular particle accelerator.
- HasNpRandom: A mix-in class that provides a convenient random-number generator (RNG). Already included by Problem.
- CustomOptimizerProvider: An optional mix-in interface for optimization problems that require a specific optimization algorithm to be solved.
Each of the first three classes is an abstract base class (ABC). In short, this means that they can be superclasses of other classes, even if the latter don’t inherit from them. The only requirement is that the subclass provides the same members as the ABC. This follows and extends the idea of structural subtyping implemented by typing.Protocol.
>>> from cernml.coi import Problem
>>> class MyClass:
... metadata = {}
... render_mode = None
... spec = None
... @property
... def unwrapped(self):
... return self
... def close(self):
... pass
... def render(self):
... raise NotImplementedError
... def get_wrapper_attr(self, name):
... return getattr(self, name)
>>> issubclass(MyClass, Problem)
True
In practice, you still want to subclass these ABCs because they provide some conveniences that are bothersome to implement otherwise. See Problem for a list.
- class cernml.coi.Problem(render_mode: str | None = None)¶
Bases: HasNpRandom
Root base class for all optimization problems.
You typically don’t subclass this class directly. Instead, subclass one of its subclasses, e.g. Env or SingleOptimizable. This class exists for two purposes:
- define which parts of the interfaces are common to all of them;
- provide an easy way to test whether an interface is compatible with the generic optimization framework.
This is an abstract base class. This means even classes that don’t inherit from it may be considered a subclass. To be considered a subclass, a class must provide:
- the attributes metadata, render_mode and spec;
- the methods render(), close() and get_wrapper_attr();
- a dynamic property unwrapped.
While this is all that is necessary to be considered a subclass, direct inheritance provides the following additional benefits (illustrated in the sketch below):
- Its __init__() method requires render_mode, verifies it against the "render_modes" key of metadata and assigns it to the render_mode attribute. This reduces the amount of boilerplate code you have to write yourself.
- It implements the context manager protocol to automatically call close() when the user is done with a problem.
- It provides the np_random property as an exclusive and lazily initialized random-number generator for the problem.
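The following is a minimal sketch of these conveniences in action; the class name MyProblem is purely illustrative.
>>> class MyProblem(Problem):
...     metadata = {"render_modes": ["human"]}
>>> # __init__() checks render_mode against metadata["render_modes"]
>>> # and stores it on the instance.
>>> problem = MyProblem(render_mode="human")
>>> problem.render_mode
'human'
>>> # The context manager protocol calls close() automatically.
>>> with MyProblem() as problem:
...     pass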
- metadata: InfoDict = mappingproxy({'render_modes': [], 'cern.machine': <Machine.NO_MACHINE: 'no machine'>, 'cern.japc': False, 'cern.cancellable': False})¶
The capabilities and behavior of this problem. It communicates fundamental properties of the class and how a host application can use it. While the dict keys are free-form, there is a list of standard metadata keys.
This should be a class-level constant attribute. Instances that need different metadata should replace this dict with a custom one; they should never modify the existing dict in place.
- spec: gymnasium.envs.registration.EnvSpec | None = None¶
Information on how the problem was initialized. This is set by make() and you are not expected to modify it yourself. Wrappers should deepcopy() the spec of the wrapped environment and make their modifications on the copy.
- render_mode: str | None = None¶
The chosen render mode. This is either None (no rendering) or an item from the list in the "render_modes" metadata. See also the list of standard render modes.
This attribute is expected to be set inside __init__() and not changed again afterwards.
- close() None ¶
Perform any necessary cleanup.
This method may be overridden to perform cleanup that does not happen automatically. Examples include stopping JAPC subscriptions or canceling any spawned threads. By default, this method does nothing.
After this method has been called, no further methods may be called on the problem, with the following exceptions:
- get_wrapper_attr() must continue to behave as expected;
- unwrapped must continue to behave as expected;
- calling close() again should do nothing.
- render() Any ¶
Render the environment according to the render_mode.
The list of render modes supported by a problem is given by its "render_modes" metadata. See also the list of standard render modes.
Example
>>> import numpy as np
>>> from matplotlib import pyplot
>>> from gymnasium import Env
>>> class MyEnv(Env):
...     metadata = {'render_modes': ['human', 'rgb_array']}
...     def render(self):
...         if self.render_mode == 'rgb_array':
...             # Return RGB frame suitable for video.
...             return np.array(...)
...         if self.render_mode == 'human':
...             # Pop up a window and render.
...             pyplot.plot(...)
...             pyplot.show()
...             return None
...         # Unsupported mode: just let the superclass raise an exception.
...         return super().render()
- property unwrapped: Problem¶
Return the core problem.
By default, this just returns self. However, environment wrappers override this property to instead return the wrapped problem (recursively, if necessary).
Example
>>> class Concrete(Problem):
...     pass
>>> class Wrapper(Problem):
...     def __init__(self, wrapped):
...         self._wrapped = wrapped
...     @property
...     def unwrapped(self):
...         # Note the recursion.
...         return self._wrapped.unwrapped
>>> inner = Concrete()
>>> outer = Wrapper(inner)
>>> inner.unwrapped is inner
True
>>> outer.unwrapped is inner
True
- property np_random: Generator¶
The problem’s internal random number generator.
On its first access, the generator is lazily initialized with a random seed. This property is writeable to support initialization with a fixed seed. Typically, this is done within
get_initial_params() and reset(), which simply accept a seed parameter.
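A short sketch of seeding the generator through the writeable property; the trivial subclass exists only for illustration.
>>> import numpy as np
>>> class Concrete(Problem):
...     pass
>>> problem = Concrete()
>>> # Re-initialize the lazily created RNG with a fixed seed.
>>> problem.np_random = np.random.default_rng(seed=42)
>>> sample = problem.np_random.uniform(-1.0, 1.0)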
- class cernml.coi.SingleOptimizable(render_mode: str | None = None)¶
Bases: Problem, Generic[ParamType]
Interface for single-objective numerical optimization.
Typically, an RL environment (described by Env) contains a hidden state on which actions can be performed. Each action changes the state and produces an observation and a reward. In contrast, an optimization problem has certain parameters that can be set to given values. Each set of values (and not the transition between them) is associated with an objective value that shall be minimized.
This means, in short:
- actions describe a step that shall be taken in the phase space of states;
- parameters describe the point in phase space to move to.
A parameter may be e.g. the electric current supplied to a magnet, and an action may be the value by which to increase or decrease that current. The difference between the parameters and the hidden state is that the parameters may describe only a subset of the state. There may be state variables that cannot be influenced by the optimizer.
Like Problem, this is an abstract base class. While you need not inherit from it directly, doing so provides the following benefits:
- correct default values for all optional attributes;
- get_initial_params() correctly handles the seed argument and seeds np_random if it is passed;
- __init__() handles render_mode correctly;
- the context manager protocol to automatically call close();
- a property np_random for convenient random-number generation.
- optimization_space: Space[ParamType]¶
A Space instance that describes the phase space of parameters. This may be the same as or different from the action_space. This attribute is required.
- constraints: Sequence[Constraint] = ()¶
Optional. The constraints that apply to this optimization problem. For now, each constraint must be either a LinearConstraint or a NonlinearConstraint. In the future, this might be relaxed to allow more optimization algorithms.
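As a sketch, a constraint that keeps the sum of two parameters within ±1 could be declared with SciPy like this; the matrix and the bounds are purely illustrative.
>>> import numpy as np
>>> from scipy.optimize import LinearConstraint
>>> constraints = [LinearConstraint(np.array([[1.0, 1.0]]), -1.0, 1.0)]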
- objective_name: str = ''¶
Optional. A custom name for the objective function. You should only set this attribute if there is a physical meaning to the objective. By default, host applications should pick a neutral name like “objective function”.
- param_names: Sequence[str] = ()¶
Optional. Custom names for each of the parameters of the problem. If set, this list should have exactly as many elements as the optimization_space. By default, host applications should pick neutral names, e.g. “Parameter 1…N”.
- constraint_names: Sequence[str] = ()¶
Optional. Custom names for each of the constraints of the problem. If set, this list should have exactly as many elements as the constraints. By default, host applications should pick neutral names, e.g. “Constraint 1…N”.
- abstractmethod get_initial_params( ) ParamType ¶
Return an initial set of parameters for optimization.
This method is similar to reset() but is allowed to always return the same value, or to skip certain calculations in the case of problems that are expensive to evaluate.
- Parameters:
seed – Optional. If passed, this should be used to initialize the problem’s internal random-number generator. Passing this argument should lead to predictable behavior of the problem. If this is not possible, you should set the nondeterministic flag when registering your problem.
options – Optional. Environments may choose to extract additional information about resets from this argument.
- Returns:
A set of parameters suitable to be passed to compute_single_objective(). It should lie within the problem’s optimization_space. Nonetheless, hosts should verify whether this is indeed the case.
Warning
If the return value lies outside of the given optimization space, and a host application wishes to reset the problem to its original state, it may choose to call
compute_single_objective()
with that return value even though it is out of bounds. Thus, it is crucial to ensure that initial value and optimization space match.
- abstractmethod compute_single_objective(params: ParamType) float ¶
Perform an optimization step.
This function is similar to step(), but it accepts parameters instead of an action. See the class docstring for the difference.
This function may modify the environment, but it should conceptually be stateless: Calling it twice with the same parameters should return the same objective value plus stochastic noise. On real machines, this is rarely the case due to machine drift and other external effects.
- Parameters:
params – The parameters for which the objective shall be calculated. This must have the same shape as the optimization_space. It must further be within that space. However, if get_initial_params() returns an out-of-bounds value, that value may also be passed to this method.
- Returns:
The objective value associated with these parameters. Numerical optimizers may want to minimize this objective. It is often also called cost or loss.
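Putting the pieces together, here is a minimal, self-contained sketch of a SingleOptimizable subclass. The class name, the quadratic objective, the space bounds and the seeding pattern are purely illustrative.
>>> import numpy as np
>>> from gymnasium.spaces import Box
>>> from cernml.coi import SingleOptimizable
>>> class Quadratic(SingleOptimizable):
...     # The parameters live in a two-dimensional box.
...     optimization_space = Box(-1.0, 1.0, shape=(2,), dtype=np.float64)
...     def get_initial_params(self, *, seed=None, options=None):
...         if seed is not None:
...             # np_random is writeable, so we can seed it directly.
...             self.np_random = np.random.default_rng(seed)
...         return self.np_random.uniform(-1.0, 1.0, size=2)
...     def compute_single_objective(self, params):
...         # Distance from the origin; an optimizer drives this to zero.
...         return float(np.sum(np.square(params)))
>>> problem = Quadratic()
>>> initial = problem.get_initial_params(seed=42)
>>> problem.compute_single_objective(initial) >= 0.0
True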
- class cernml.coi.FunctionOptimizable(render_mode: str | None = None)¶
Bases: Problem, Generic[ParamType]
Interface for problems that optimize functions over time.
An optimization problem in which the target is a function over time that is being optimized at multiple skeleton points should implement this interface instead of SingleOptimizable. This interface allows passing through the skeleton points as parameters called cycle_time. The name “cycle” is inspired by the fact that this problem comes up most often in the context of optimizing synchrotron parameters that change during a fill-accelerate-extract cycle. In this case, cycle_time implies that it is measured from the start of the cycle, rather than e.g. the start of injection.
Like Problem, this is an abstract base class. While you need not inherit from it directly, doing so provides the following benefits:
- correct default values for all optional attributes, and default implementations for optional methods;
- get_initial_params() correctly handles the seed argument and seeds np_random if it is passed;
- __init__() handles render_mode correctly;
- the context manager protocol to automatically call close();
- a property np_random for convenient random-number generation.
- constraints: Sequence[Constraint] = ()¶
The constraints that apply to this optimization problem. For now, each constraint must be either a LinearConstraint or a NonlinearConstraint as provided by scipy.optimize. In the future, this might be relaxed to allow more optimization algorithms.
- abstractmethod get_optimization_space(cycle_time: float) Space ¶
Return the optimization space for a given point in time.
This should return a Space instance that describes the phase space of parameters. While one would typically expect this phase space to be constant for all points on the function that is to be optimized, there are cases where this is not true. Trivially, one can imagine a ramping function where the range of allowed values in the flat bottom is smaller than at the flat top.
Nonetheless, all returned spaces should still have the same shape.
- abstractmethod get_initial_params( ) ParamType ¶
Return an initial set of parameters for optimization.
This method is similar to reset() but is allowed to always return the same value, or to skip certain calculations in the case of problems that are expensive to evaluate.
- Parameters:
cycle_time – The point in time at which the objective is being optimized.
seed – Optional. If passed, this should be used to initialize the problem’s internal random-number generator. Passing this argument should lead to predictable behavior of the problem. If this is not possible, you should set the nondeterministic flag when registering your problem.
options – Optional. Environments may choose to extract additional information about resets from this argument.
- Returns:
A set of parameters suitable to be passed to compute_function_objective() with the same cycle_time. It should lie within the space returned by get_optimization_space() for this cycle_time. Nonetheless, hosts should verify whether this is indeed the case.
Warning
If the return value lies outside of the given optimization space, and a host application wishes to reset the problem to its original state, it may choose to call
compute_function_objective()
with that return value even though it is out of bounds. Thus, it is crucial to ensure that initial value and optimization space match.
- abstractmethod compute_function_objective( ) float ¶
Perform an optimization step at the given point in time.
This function is the core of the interface. When called, it should perform the following steps:
1. Convert params into a format suitable for communication with the machine; this may include applying offsets, scaling factors, etc.
2. Send the new settings to the machine. The easiest way to do this is via cernml.lsa_utils.incorporate_and_trim().
3. Receive new measurements from the machine based on the new settings. This may include some sort of waiting logic to ensure that the settings have propagated to the machine.
4. Reduce the measurements into a scalar cost to be minimized. This may involve scaling, averaging over multiple variables and other transformations.
This function may modify the environment, but it should conceptually be stateless: Calling it twice with the same parameters should return the same objective value plus stochastic noise. On real machines, this is rarely the case due to machine drift and other external effects.
- Parameters:
cycle_time – The point in time at which the objective is being optimized.
params – The parameters for which the objective shall be calculated. This should be regarded as corrections on one or more functions over time.
- Returns:
The objective value associated with these parameters and this cycle_time. Numerical optimizers may want to minimize this objective. It is often also called cost or loss.
- get_objective_function_name() str ¶
Return the name of the objective function.
By default, this method returns an empty string. If it returns a non-empty string, it should be the name of the objective function of this optimization problem. A host application may use this name e.g. to label a graph of the objective function’s value as it is being optimized.
- get_param_function_names() Sequence[str] ¶
Return the names of the functions being modified.
By default, this method returns an empty list. If the list is non-empty, it should contain as many names as the corresponding box returned by get_optimization_space(). Each name should correspond to an LSA parameter that is being corrected by the optimization procedure.
A host application may use these names to show the user which functions are being modified.
- override_skeleton_points() Sequence[float] | None ¶
Hook to let the problem choose the skeleton points.
You should only override this method if your problem cannot be solved well when optimized at arbitrary skeleton points. In such a case, this method allows you to handle the selection of skeleton points in a customized fashion in your own implementation of Configurable.
If overridden, this function should return the list of skeleton points at which the problem should be evaluated. As always, each skeleton point should be given as a floating-point time in milliseconds since the beginning of an acceleration cycle. For maximum compatibility, it is suggested not to return fractional cycle times.
A host application should call this method before starting an optimization run. If the return value is None, it may proceed to let the user choose the skeleton points at which to optimize. If the return value is a list, the user should be allowed to review it, but not modify it. In that case, the other methods of this class must not be called with any skeleton point that is not in that list.
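To illustrate the interplay of these methods, here is a sketch of a FunctionOptimizable subclass. The class name, the fixed skeleton points and the _acquire() helper are hypothetical; a real implementation would talk to the machine, e.g. via cernml.lsa_utils.incorporate_and_trim() as described above.
>>> import numpy as np
>>> from gymnasium.spaces import Box
>>> from cernml.coi import FunctionOptimizable
>>> class TuneCorrection(FunctionOptimizable):
...     def get_optimization_space(self, cycle_time):
...         # Same shape at every skeleton point, as recommended above.
...         return Box(-0.5, 0.5, shape=(2,), dtype=np.float64)
...     def get_initial_params(self, cycle_time, *, seed=None, options=None):
...         return np.zeros(2)
...     def compute_function_objective(self, cycle_time, params):
...         # 1. convert params, 2. send settings to the machine,
...         # 3. receive a measurement, 4. reduce it to a scalar cost.
...         measurement = self._acquire(cycle_time, params)
...         return float(np.mean(np.abs(measurement)))
...     def override_skeleton_points(self):
...         # Only optimize at these cycle times (in milliseconds).
...         return [450.0, 900.0, 1350.0]
...     def _acquire(self, cycle_time, params):
...         # Hypothetical stand-in for real machine communication.
...         return params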
- class cernml.coi.Env¶
Bases: Problem, Generic[ObsType, ActType]
See gymnasium.Env. This is re-exported for the user’s convenience.
- class cernml.coi.HasNpRandom¶
Bases: object
Mixin that replicates the gymnasium.Env.np_random property.
This abstracts the property in a generalized fashion. The Problem base class subclasses it for convenience.
- property np_random: Generator¶
The problem’s internal random number generator.
On its first access, the generator is lazily initialized with a random seed. This property is writeable to support initialization with a fixed seed. Typically, this is done within
get_initial_params() and reset(), which simply accept a seed parameter.
Interfaces for Custom Per-Problem Algorithms¶
See also
- Custom Per-Problem Optimizers
User guide page on this topic.
- class cernml.coi.CustomOptimizerProvider(*args, **kwargs)¶
Bases: AttrCheckProtocol, Protocol
Interface for optimization problems with custom optimizers.
This protocol gives subclasses of SingleOptimizable and FunctionOptimizable the opportunity to dynamically define specialized optimization algorithms that are tailored to the problem. Host applications are expected to check for the presence of this interface and, if possible, call get_optimizers() before presenting a list of optimization algorithms to the user. Host applications must also check the entry point cernml.custom_optimizers for matching optimizer providers.
Optimizers provided by this protocol should themselves follow the protocol defined by Optimizer. Beware that that protocol is defined in a separate package, which can be installed with one of these lines:
$ pip install cernml-coi-optimizers   # The concrete package
$ pip install cernml-coi[optimizers]  # as extra of this package
$ pip install cernml-coi[all]         # as part of all extras
Like Problem, this is an abstract base class. This means even classes that don’t inherit from it may be considered a subclass, as long as they adhere to the interface defined by this class.
- abstractmethod classmethod get_optimizers() Mapping[str, cernml.optimizers.Optimizer] ¶
Return the custom optimizers offered by this problem.
The return value is a mapping from optimizer name to optimizer. The name should follow the format of other, registered optimizers and not conflict with any of their names.
Custom optimizers with the same name may be returned by different optimization problems and may be different from each other.
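A sketch of what an implementation might look like. The optimizer object itself must follow the Optimizer protocol from the separate cernml-coi-optimizers package; my_tailored_optimizer is a hypothetical placeholder for such an object, and the class name and optimizer name are illustrative.
>>> from cernml.coi import CustomOptimizerProvider, SingleOptimizable
>>> class TailoredProblem(SingleOptimizable):
...     optimization_space = ...
...     def get_initial_params(self, *, seed=None, options=None): ...
...     def compute_single_objective(self, params): ...
...     @classmethod
...     def get_optimizers(cls):
...         # `my_tailored_optimizer` is a hypothetical object that
...         # follows the `Optimizer` protocol.
...         return {"TailoredProblem-LineSearch": my_tailored_optimizer}
Because CustomOptimizerProvider is a structural protocol, providing a matching get_optimizers() classmethod like this is enough; explicitly inheriting from it is optional.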
- entry point group cernml.custom_optimizers¶
Entry points defined under this group are an alternative to direct implementation on the optimization problem itself. They should either point at a subclass of CustomOptimizerProvider or at a function that acts like get_optimizers(). The syntax is module_name:ClassName and module_name:function_name respectively.
A host application may load and invoke such an entry point if and only if the user selects an optimization problem whose name (including the namespace) matches the entry point name.
- class cernml.coi.CustomPolicyProvider(*args, **kwargs)¶
Bases: AttrCheckProtocol, Protocol
Interface for optimization problems with custom RL algorithms.
This protocol gives subclasses of Env the opportunity to dynamically collect and return RL agents that are tailored to the problem. Host applications are expected to check for the presence of this interface and, if possible, call get_policy_names() before presenting a list of agents to the user. Host applications must also check the entry point cernml.custom_policies for matching policy providers.
The interface is split into two parts:
- get_policy_names() collects the list of available agents or policies and returns a list of names;
- load_policy() receives the name of the chosen agent or policy and should load it.
The object returned by load_policy() is expected to have a method predict(). All algorithms and policy classes of Stable Baselines satisfy this interface.
Like Problem, this is an abstract base class. This means even classes that don’t inherit from it may be considered a subclass, as long as they adhere to the interface defined by this class.
- abstractmethod classmethod get_policy_names() list[str] ¶
Return a list of all available policies.
How this list is acquired is left to the implementation. Possible choices are to hard-code it, to glob a local directory for stored weights, or to request a list from the Internet.
Each policy name should be unique and readable by a human user.
The default implementation returns an empty list.
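A sketch of a provider that globs a local directory for stored weights, one of the choices mentioned above. The directory layout and the choice of a Stable Baselines PPO agent are illustrative.
>>> from pathlib import Path
>>> from stable_baselines3 import PPO
>>> from cernml.coi import CustomPolicyProvider
>>> class LocalPolicyProvider(CustomPolicyProvider):
...     WEIGHTS_DIR = Path("./policies")  # illustrative location
...     @classmethod
...     def get_policy_names(cls):
...         # One human-readable name per stored weights file.
...         return sorted(path.stem for path in cls.WEIGHTS_DIR.glob("*.zip"))
...     def load_policy(self, name):
...         # Stable Baselines agents provide the required predict() method.
...         return PPO.load(self.WEIGHTS_DIR / f"{name}.zip")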
- entry point group cernml.custom_policies¶
Entry points defined under this group are an alternative to direct implementation on the environment itself. They should point at a subclass of CustomPolicyProvider with the syntax module_name:ClassName. The class must be instantiable by calling it with no arguments.
A host application may load and invoke such an entry point if and only if the user selects an environment whose name (including the namespace) matches the entry point name.
- class cernml.coi.Policy(*args, **kwargs)¶
Bases: Protocol
Interface of RL algorithms returned by CustomPolicyProvider.
This interface has been chosen to be compatible with both policy and algorithm objects of Stable Baselines.
This is an abstract base class. This means even classes that don’t inherit from it may be considered a subclass, as long as they adhere to the interface defined by this class.
Warning
When implementing this method yourself, be careful to return an (action, state) tuple! If your policy is non-recurrent, the state should simply be None.
- abstractmethod predict(observation: ndarray | dict[str, ndarray], state: tuple[ndarray, ...] | None = None, episode_start: ndarray | None = None, deterministic: bool = False) tuple[ndarray, tuple[ndarray, ...] | None] ¶
Get the policy action from an observation (and hidden state).
- Parameters:
observation – the input observation.
state – The last hidden states. On the first call and when using non-recurrent policies, this should be None (the default).
episode_start – The last masks. For non-recurrent policies, this is just None (the default). For recurrent policies, this is an array of the same length as state; that length could be e.g. the number of parallel vector environments. An entry should be 1 if the internal state should be reset, 0 otherwise. This means that on the first call (when state is necessarily None), this should be an array of only ones.
deterministic – If True, return deterministic actions. If False (the default), return stochastic actions, e.g. by enabling action noise.
- Returns:
A tuple (action, state), where action is the next environment action chosen by the policy. If this is a recurrent policy, state is the next hidden state; for non-recurrent policies, state should be None.
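A minimal sketch of a non-recurrent policy that satisfies this protocol; the class name, the stored action space and the random sampling are illustrative.
>>> import numpy as np
>>> class RandomPolicy:
...     """Non-recurrent policy that follows the Policy protocol."""
...     def __init__(self, action_space):
...         self.action_space = action_space
...     def predict(self, observation, state=None, episode_start=None,
...                 deterministic=False):
...         action = self.action_space.sample()
...         # Non-recurrent policies return None as the new state.
...         return np.asarray(action), None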
Standard Metadata Keys¶
See also
- Metadata
User guide section on this topic.
Problem.metadata is a mapping that describes the capabilities and behavior of the given optimization problem. While any sort of data can be stored in it, the following keys have a standardized meaning:
Note
Laboratories are encouraged to define their own metadata keys as necessary. Care should be taken that the values stored in the dictionary have a simple type (e.g. numbers, strings, and lists thereof) that is immutable and trivial to serialize and deserialize. Laboratories are encouraged to name their metadata keys in a namespaced manner, analogous to "cern.machine".
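A sketch of a problem declaring its metadata; the values are illustrative, but the keys mirror those that appear in the default metadata of Problem shown above.
>>> from cernml.coi import Machine, Problem
>>> class SpsProblem(Problem):
...     metadata = {
...         "render_modes": ["human", "matplotlib_figures"],
...         "cern.machine": Machine.SPS,
...         "cern.japc": True,
...         "cern.cancellable": False,
...     }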
Standard Render Modes¶
See also
- Rendering
User guide section on this topic.
Render modes declare the ways in which an optimization problem may be visualized. Problems list their supported modes in their "render_modes" metadata. Problems with no supported render mode cannot be visualized.
Users can pass a supported render_mode of their choosing either to make() or to the Problem constructor directly. The problem is expected to store this value in a render_mode attribute. Calling render() should then produce output according to the initially chosen render mode.
- render mode None¶
If no render mode is specified, no rendering should take place. Calling
render()
should do nothing.
- render mode "human" None ¶
The problem renders itself to the current display or the terminal in a way that is fit for human consumption. The display should update automatically, i.e. without the user explicitly calling render(). Other methods may still call it internally to update the display whenever the problem’s state changes.
- render mode "ansi" str | io.StringIO ¶
The problem renders its current state in a terminal-style representation. The representation may contain newlines and ANSI escape codes.
- render mode "rgb_array" NDArray[uint] ¶
The problem renders its current state as a color image. The color image should be returned as a 3D array of shape (width, height, 3), where the last dimension denotes the colors red, green and blue. Values are in the range from 0 to 255 inclusive.
- render mode "matplotlib_figures" Figure | Iterable[Figure | tuple[str, Figure]] | Mapping[str, Figure] ¶
The problem renders itself via Matplotlib to one or more Figure objects. The return value should include all figures whose contents have changed. Figures whose contents haven’t changed needn’t be returned again.
The following return types are allowed (as given in the signature above):
- a single Figure;
- an iterable of Figure objects or of (str, Figure) tuples;
- a mapping from strings to Figure objects.
Strings are interpreted as window titles for their associated figure.
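A sketch of a render() implementation for this mode, returning a mapping from window title to figure; the class name and the figure contents are illustrative.
>>> from matplotlib.figure import Figure
>>> from cernml.coi import Problem
>>> class PlottingProblem(Problem):
...     metadata = {"render_modes": ["matplotlib_figures"]}
...     def __init__(self, render_mode=None):
...         super().__init__(render_mode)
...         self._figure = Figure()
...         self._axes = self._figure.subplots()
...     def render(self):
...         if self.render_mode == "matplotlib_figures":
...             # The string key is interpreted as a window title.
...             return {"Objective history": self._figure}
...         return super().render()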
List-Like Render Modes¶
There are also so-called list-like render modes. In these modes, the problem should render itself automatically after each time step and store the resulting frame in an internal buffer. Whenever render() is called, no rendering should be done; instead, all buffered frames should be returned:
- render mode "ansi_list" list[str] | list[io.StringIO] ¶
Like "ansi", but terminal-style representations are collected at each time step. Calling render() returns the buffered frames.
- render mode "rgb_array_list" list[NDArray[uint]] ¶
Like "rgb_array", but color images are collected at each time step. Calling render() returns the buffered frames.
Note
You typically don’t implement list-like modes yourself. Instead, if the user requests one of them via make(), your problem is wrapped in a RenderCollection wrapper and the non-list equivalent is passed to your __init__() method.
Whether and when the frame buffer should be cleared is implementation-defined; RenderCollection lets the user choose during initialization. Typical choices are to clear it automatically after each render() call, or whenever reset() or get_initial_params() are called.
Supporting Types¶
The following types are not interfaces themselves, but are used by the core interfaces of this package.
- class cernml.coi.Machine(value)¶
Bases:
Enum
Enum of the various accelerators at CERN.
This enum is used for the metadata entry "cern.machine". It declares which accelerator a problem pertains to. This can be used to filter a collection of environments for only those that are interesting to a certain group of operators.
This list is intentionally left incomplete. If you wish to use this API at a machine that is not listed in this enum, please contact the developers to have it included.
In the same vein, if you match a Machine against a list of enum members, you should be prepared that new machines may be added in the future.
>>> # Bad:
>>> def get_proper_value_for(machine):
...     return {
...         Machine.LINAC_2: 1.0,
...         Machine.LINAC_3: 4.0,
...         Machine.LINAC_4: 3.0,
...     }[machine]
>>> get_proper_value_for(Machine.LINAC_4)
3.0
>>> # Oops! ISOLDE was added in cernml-coi v0.7.1.
>>> get_proper_value_for(Machine.ISOLDE)
Traceback (most recent call last):
...
KeyError: <Machine.ISOLDE: 'ISOLDE'>
>>> # Better:
>>> def get_proper_value_for(machine):
...     some_reasonable_default = 0.0
...     return {
...         Machine.LINAC_2: 1.0,
...         Machine.LINAC_3: 4.0,
...         Machine.LINAC_4: 3.0,
...     }.get(machine, some_reasonable_default)
>>> get_proper_value_for(Machine.ISOLDE)
0.0
Of course, if there is no reasonable default for an unknown machine, raising an exception may still be your best bet.
- NO_MACHINE = 'no machine'¶
- LINAC_2 = 'Linac2'¶
- LINAC_3 = 'Linac3'¶
- LINAC_4 = 'Linac4'¶
- LEIR = 'LEIR'¶
- PS = 'PS'¶
- PSB = 'PSB'¶
- SPS = 'SPS'¶
- AWAKE = 'AWAKE'¶
- LHC = 'LHC'¶
- ISOLDE = 'ISOLDE'¶
- AD = 'AD'¶
- ELENA = 'ELENA'¶
- class cernml.coi.Space¶
See gymnasium.spaces.Space. This is re-exported for the user’s convenience.
- cernml.coi.Constraint¶
alias of scipy.optimize.LinearConstraint | scipy.optimize.NonlinearConstraint
- cernml.coi.ParamType: TypeVar¶
The generic type variable for SingleOptimizable and FunctionOptimizable. This is exported for the user’s convenience.