MultiNorm class #29876

Open: wants to merge 9 commits into main
289 changes: 280 additions & 9 deletions lib/matplotlib/colors.py
@@ -1414,10 +1414,10 @@
combination_mode: str, 'sRGB_add' or 'sRGB_sub'
Describe how colormaps are combined in sRGB space

- If 'sRGB_add' -> Mixing produces brighter colors
`sRGB = sum(colors)`
- If 'sRGB_sub' -> Mixing produces darker colors
`sRGB = 1 - sum(1 - colors)`
- If 'sRGB_add': Mixing produces brighter colors
``sRGB = sum(colors)``
- If 'sRGB_sub': Mixing produces darker colors
``sRGB = 1 - sum(1 - colors)``
name : str, optional
The name of the colormap family.
"""
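The two `combination_mode` formulas can be tried directly. A minimal numpy sketch; the final clipping to the valid [0, 1] range is an assumption here, not something stated in this diff:

```python
import numpy as np

# Hypothetical per-component colors, already mapped to sRGB:
colors = np.array([[1.0, 0.2, 0.0],   # component 1
                   [0.0, 0.3, 0.5]])  # component 2

# 'sRGB_add': sRGB = sum(colors) -> mixing brightens.
srgb_add = np.clip(colors.sum(axis=0), 0, 1)
print(srgb_add.tolist())  # [1.0, 0.5, 0.5]

# 'sRGB_sub': sRGB = 1 - sum(1 - colors) -> mixing darkens.
srgb_sub = np.clip(1 - (1 - colors).sum(axis=0), 0, 1)
print(srgb_sub.tolist())  # [0.0, 0.0, 0.0]
```

Note how the additive mode saturates channels toward white while the subtractive mode drives them toward black, which is why the docstring describes them as producing brighter and darker colors respectively.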
@@ -1589,15 +1589,15 @@

Parameters
----------
bad: :mpltype:`color`, default: None
bad : :mpltype:`color`, default: None
If Matplotlib color, the bad value is set accordingly in the copy

under tuple of :mpltype:`color`, default: None
If tuple, the `under` value of each component is set with the values
under : tuple of :mpltype:`color`, default: None
If tuple, the ``under`` value of each component is set with the values
from the tuple.

over tuple of :mpltype:`color`, default: None
If tuple, the `over` value of each component is set with the values
over : tuple of :mpltype:`color`, default: None
If tuple, the ``over`` value of each component is set with the values
from the tuple.

Returns
@@ -2337,6 +2337,12 @@
"""
self.callbacks.process('changed')

@property
@abstractmethod
def n_variables(self):
# Returns the number of variables supported by this normalization
Review comment (Member):

Why is this a comment and not a docstring?

Suggested change
# Returns the number of variables supported by this normalization
"""
The number of normalized variables.
This is the number of elements of the parameter to ``__call__`` and of
*vmin*, *vmax*.
"""

pass


class Normalize(Norm):
"""
Expand Down Expand Up @@ -2547,6 +2553,11 @@
# docstring inherited
return self.vmin is not None and self.vmax is not None

@property
def n_variables(self):
# docstring inherited
return 1


class TwoSlopeNorm(Normalize):
def __init__(self, vcenter, vmin=None, vmax=None):
Expand Down Expand Up @@ -3272,6 +3283,235 @@
return value


class MultiNorm(Norm):
"""
A class which contains multiple scalar norms.
"""

def __init__(self, norms, vmin=None, vmax=None, clip=False):
"""
Parameters
----------
norms : list of (str, `Normalize` or None)
The constituent norms. The list must have a minimum length of 2.
vmin, vmax : float or None or list of (float or None)
Limits of the constituent norms.
If a list, each value is assigned to each of the constituent
norms. Single values are repeated to form a list of appropriate size.
Review comment (Member):

Is broadcasting reasonable here? I would assume that most MultiNorms have different scales and thus need per-element entries anyway. It could also be an oversight to pass a single value instead of multiple values.

I'm therefore tempted to not allow scalars here but require exactly n_variables values. A more narrow and explicit interface may be the better start. We can always later expand the API to broadcast scalars if we see that's a typical case and reasonable in terms of usability.

Reply (Contributor Author):

@timhoffm Perhaps this is also a topic for the weekly meeting :)

Reply (Contributor Author):

I'm perfectly fine with removing this here, and perhaps that is a good starting point.

My entry into this topic was a use case (dark-field X-ray microscopy, DFXRM) where we typically want vmax0 = vmax1 = -vmin0 =-vmin1, i.e. equal normalizations, and centered on zero, and given that entry point it felt natural to me to include broadcasting.


clip : bool or list of bools, default: False
Determines the behavior for mapping values outside the range
``[vmin, vmax]`` for the constituent norms.
If a list, each value is assigned to each of the constituent
norms. Single values are repeated to form a list of appropriate size.

"""

if cbook.is_scalar_or_string(norms):
raise ValueError("A MultiNorm must be assigned multiple norms")

norms = [*norms]
for i, n in enumerate(norms):
if n is None:
norms[i] = Normalize()
elif isinstance(n, str):
scale_cls = _get_scale_cls_from_str(n)
norms[i] = mpl.colorizer._auto_norm_from_scale(scale_cls)()
elif not isinstance(n, Normalize):
raise ValueError(
"MultiNorm must be assigned multiple norms, where each norm "
f"is of type `None`, `str`, or `Normalize`, not {type(n)}")

# Convert the list of norms to a tuple to make it immutable.
# If there is a use case for swapping a single norm, we can add support for
# that later
self._norms = tuple(norms)

self.callbacks = cbook.CallbackRegistry(signals=["changed"])

self.vmin = vmin
self.vmax = vmax
self.clip = clip

for n in self._norms:
n.callbacks.connect('changed', self._changed)

@property
def n_variables(self):
"""Number of norms held by this `MultiNorm`."""
return len(self._norms)

@property
def norms(self):
"""The individual norms held by this `MultiNorm`."""
return self._norms

@property
def vmin(self):
"""The lower limit of each constituent norm."""
return tuple(n.vmin for n in self._norms)

@vmin.setter
def vmin(self, value):
value = np.broadcast_to(value, self.n_variables)
with self.callbacks.blocked():
for i, v in enumerate(value):
if v is not None:
self.norms[i].vmin = v
self._changed()

@property
def vmax(self):
"""The upper limit of each constituent norm."""
return tuple(n.vmax for n in self._norms)

@vmax.setter
def vmax(self, value):
value = np.broadcast_to(value, self.n_variables)
with self.callbacks.blocked():
for i, v in enumerate(value):
if v is not None:
self.norms[i].vmax = v
self._changed()

@property
def clip(self):
"""The clip behaviour of each constituent norm."""
return tuple(n.clip for n in self._norms)

@clip.setter
def clip(self, value):
value = np.broadcast_to(value, self.n_variables)
with self.callbacks.blocked():
for i, v in enumerate(value):
if v is not None:
self.norms[i].clip = v
self._changed()
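The `np.broadcast_to` pattern shared by the *vmin*, *vmax*, and *clip* setters can be seen in isolation. A sketch assuming three constituent norms:

```python
import numpy as np

n_variables = 3

# A scalar is repeated once per constituent norm:
assert np.broadcast_to(0.5, n_variables).tolist() == [0.5, 0.5, 0.5]

# None broadcasts too (as a 0-d object array), so the setters can skip
# every entry and leave the constituent norms untouched:
assert list(np.broadcast_to(None, n_variables)) == [None, None, None]

# A per-norm sequence passes through unchanged; a wrong length raises:
assert np.broadcast_to([0.0, 0.5, 1.0], n_variables).tolist() == [0.0, 0.5, 1.0]
try:
    np.broadcast_to([0.0, 1.0], n_variables)
except ValueError:
    print("length mismatch rejected")
```

This is the behavior the broadcasting discussion above debates: a scalar silently fans out to all norms, which is convenient for symmetric limits but can mask a forgotten per-norm value.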

def _changed(self):
"""
Call this whenever the norm is changed to notify all the
callback listeners to the 'changed' signal.
"""
self.callbacks.process('changed')

def __call__(self, value, clip=None):
"""
Normalize the data and return the normalized data.

Each variate in the input is assigned to the constituent norm.
Review comment (Member):

Suggested change
Each variate in the input is assigned to the constituent norm.

Per #29876 (comment) let's not use "variate".

Alternatives here (in order of my preference): element, component, variable


Parameters
----------
value : array-like
Data to normalize. Must be of length `n_variables` or be a structured
array or scalar with `n_variables` fields.
Comment on lines +3406 to +3408
Review comment (Member):
We need to be more precise. Normalize takes either a scalar or an array-like. How do we generalize? If we have two norms, do you expect [scalar1, scalar2], [array-like1, array-like2]? Is it reasonable to accept a 2D array, and if so what is the dimensionality, (N, 2) or (2, N)?

clip : list of bools or bool, optional
See the description of the parameter *clip* in Normalize.
Review comment (Member):
Suggested change
See the description of the parameter *clip* in Normalize.
Determines the behavior for mapping values outside the range
``[vmin, vmax]``. See the description of the parameter *clip* in
`.Normalize`.

At least give the one-sentence summary to give the idea on what this is about so that people can judge whether it's relevant for them and worth looking up the details.

If ``None``, defaults to ``self.clip`` (which defaults to
``False``).

Returns
-------
list
Normalized input values as a list of length `n_variables`
Comment on lines +3416 to +3417
Review comment (Member):
I think a list is not the right structure. Either we do a tuple of length n_variables - like vmin/vmax, or we return a 2D array - need to think on dimensionality here as well. Also, if the input is a structured array, should the output also be a structured array?


Notes
-----
If not already initialized, ``self.vmin`` and ``self.vmax`` are
initialized using ``self.autoscale_None(value)``.
"""
if clip is None:
clip = self.clip
elif not np.iterable(clip):
clip = [clip]*self.n_variables

value = self._iterable_variates_in_data(value, self.n_variables)
result = [n(v, clip=c) for n, v, c in zip(self.norms, value, clip)]
return result
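The per-component dispatch in ``__call__`` can be sketched without matplotlib: each constituent norm sees only its own slice of the input. Below, ``normalize`` is a simplified stand-in for `Normalize.__call__` that ignores masking and clipping:

```python
import numpy as np

def normalize(v, vmin, vmax):
    # Core of scalar normalization: map [vmin, vmax] onto [0, 1].
    return (np.asarray(v, float) - vmin) / (vmax - vmin)

# Two constituent "norms" with different limits, as a MultiNorm would hold:
limits = [(0.0, 10.0), (-1.0, 1.0)]

# Input: one array-like per constituent norm.
value = [[0.0, 5.0, 10.0], [-1.0, 0.0, 1.0]]

# zip pairs each norm with its slice of the data, exactly as in __call__.
result = [normalize(v, lo, hi) for v, (lo, hi) in zip(limits and value, limits)]
print(result[0].tolist())  # [0.0, 0.5, 1.0]
print(result[1].tolist())  # [0.0, 0.5, 1.0]
```

Both components land on the same normalized values despite their different data ranges, which is the point of holding one norm per variable.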

def inverse(self, value):
"""
Map the normalized value (i.e., index in the colormap) back to image data value.

Parameters
----------
value
Normalized value. Must be of length `n_variables` or be a structured array
or scalar with `n_variables` fields.
Comment on lines +3439 to +3441
Review comment (Member):
Let's be more precise here as well.

"""
value = self._iterable_variates_in_data(value, self.n_variables)
result = [n.inverse(v) for n, v in zip(self.norms, value)]
return result

def autoscale(self, A):
"""
For each constituent norm, set *vmin*, *vmax* to min, max of the corresponding
variate in *A*.

Parameters
----------
A
Data, must be of length `n_variables` or be a structured array or scalar
with `n_variables` fields.
"""
with self.callbacks.blocked():
# Pause callbacks while we are updating so we only get
# a single update signal at the end
A = self._iterable_variates_in_data(A, self.n_variables)
for n, a in zip(self.norms, A):
n.autoscale(a)
self._changed()

def autoscale_None(self, A):
"""
If *vmin* or *vmax* are not set on any constituent norm,
use the min/max of the corresponding variate in *A* to set them.

Parameters
----------
A
Data, must be of length `n_variables` or be a structured array or scalar
with `n_variables` fields.
"""
with self.callbacks.blocked():
A = self._iterable_variates_in_data(A, self.n_variables)
for n, a in zip(self.norms, A):
n.autoscale_None(a)
self._changed()
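The ``callbacks.blocked()`` idiom used by ``autoscale`` and ``autoscale_None`` (suppress the per-norm 'changed' signals during the update, then emit a single one) can be modeled with a toy registry. ``TinyRegistry`` is a hypothetical stand-in, not matplotlib's `CallbackRegistry`:

```python
from contextlib import contextmanager

class TinyRegistry:
    """Minimal stand-in modeling only the signal-blocking behavior."""
    def __init__(self):
        self._listeners = []
        self._blocked = False

    def connect(self, func):
        self._listeners.append(func)

    def process(self):
        # Deliver the signal unless we are inside a blocked() context.
        if not self._blocked:
            for f in self._listeners:
                f()

    @contextmanager
    def blocked(self):
        self._blocked = True
        try:
            yield
        finally:
            self._blocked = False

calls = []
reg = TinyRegistry()
reg.connect(lambda: calls.append("changed"))

# Updating several constituent norms inside blocked() suppresses their
# individual signals; one process() afterwards emits a single notification.
with reg.blocked():
    for _ in range(3):
        reg.process()   # suppressed
reg.process()           # the single final signal
print(len(calls))  # 1
```

Without the blocking, listeners (e.g. a colorbar) would be redrawn once per constituent norm instead of once per autoscale.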

def scaled(self):
"""Return whether both *vmin* and *vmax* are set on all constituent norms."""
return all([n.scaled() for n in self.norms])


@staticmethod
def _iterable_variates_in_data(data, n_variables):
"""
Provides an iterable over the variates contained in the data.

An input array with `n_variables` fields is returned as a list of length
`n_variables`, referencing slices of the original array.

Parameters
----------
data : np.ndarray, tuple or list
The input array. It must either be an array with `n_variables` fields or
have length `n_variables`.

Returns
-------
list of np.ndarray

"""
if isinstance(data, np.ndarray) and data.dtype.fields is not None:
data = [data[descriptor[0]] for descriptor in data.dtype.descr]
if len(data) != n_variables:
raise ValueError("The input to this `MultiNorm` must be of shape "
f"({n_variables}, ...), or be a structured array or scalar "
f"with {n_variables} fields.")
return data
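The structured-array branch can be exercised on its own. Splitting by field name yields views into the original array; ``dtype.names`` is used here as an equivalent of the ``dtype.descr`` lookup in the method:

```python
import numpy as np

# Bivariate data as a structured array with two named fields:
data = np.array([(0.0, 10.0), (1.0, 20.0), (2.0, 30.0)],
                dtype=[('a', float), ('b', float)])

# One array per field, as _iterable_variates_in_data produces:
fields = [data[name] for name in data.dtype.names]
print(fields[0].tolist())  # [0.0, 1.0, 2.0]
print(fields[1].tolist())  # [10.0, 20.0, 30.0]

# The slices are views, not copies: writing through one is visible
# in the original structured array.
fields[0][0] = 99.0
assert data['a'][0] == 99.0
```

Returning views keeps the split cheap for large images, since no per-field copy is made.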


def rgb_to_hsv(arr):
"""
Convert an array of float RGB values (in the range [0, 1]) to HSV values.
Expand Down Expand Up @@ -3909,3 +4149,34 @@

norm = BoundaryNorm(levels, ncolors=n_data_colors)
return cmap, norm


def _get_scale_cls_from_str(scale_as_str):
"""
Returns the scale class from a string.

Used in the creation of norms from a string to ensure a reasonable error
in the case where an invalid string is used. This would normally use
`_api.check_getitem()`, which would produce the error:
'not_a_norm' is not a valid value for norm; supported values are
'linear', 'log', 'symlog', 'asinh', 'logit', 'function', 'functionlog'.
which is misleading because the norm keyword also accepts `Normalize` objects.

Parameters
----------
scale_as_str : str
A string corresponding to a scale

Returns
-------
A subclass of ScaleBase.

"""
try:
scale_cls = scale._scale_mapping[scale_as_str]
except KeyError:
raise ValueError(
"Invalid norm str name; the following values are "
f"supported: {', '.join(scale._scale_mapping)}"
) from None
return scale_cls
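The lookup-plus-friendly-error pattern stands on its own. A sketch using a toy mapping in place of ``scale._scale_mapping`` (the names and values below are illustrative, not matplotlib's actual scale classes):

```python
# Hypothetical stand-in for matplotlib's scale registry:
_scale_mapping = {"linear": "LinearScale", "log": "LogScale"}

def get_scale_cls_from_str(name):
    try:
        return _scale_mapping[name]
    except KeyError:
        # `from None` suppresses the chained KeyError traceback, so the
        # user sees only the ValueError listing the valid options.
        raise ValueError(
            "Invalid norm str name; the following values are "
            f"supported: {', '.join(_scale_mapping)}") from None

print(get_scale_cls_from_str("log"))  # LogScale
try:
    get_scale_cls_from_str("not_a_norm")
except ValueError as e:
    print(e)
```

Raising ``from None`` is the key detail: without it the error report would lead with the raw ``KeyError``, which is less helpful than the curated message.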
38 changes: 38 additions & 0 deletions lib/matplotlib/colors.pyi
@@ -270,6 +270,9 @@ class Norm(ABC):
def autoscale_None(self, A: ArrayLike) -> None: ...
@abstractmethod
def scaled(self) -> bool: ...
@property
@abstractmethod
def n_variables(self) -> int: ...


class Normalize(Norm):
@@ -305,6 +308,8 @@ class Normalize(Norm):
def autoscale(self, A: ArrayLike) -> None: ...
def autoscale_None(self, A: ArrayLike) -> None: ...
def scaled(self) -> bool: ...
@property
def n_variables(self) -> Literal[1]: ...

class TwoSlopeNorm(Normalize):
def __init__(
@@ -409,6 +414,39 @@ class BoundaryNorm(Normalize):

class NoNorm(Normalize): ...

class MultiNorm(Norm):
# Here "type: ignore[override]" is used for functions with a return type
# that differs from the function in the base class.
# i.e. where `MultiNorm` returns a tuple and Normalize returns a `float` etc.
def __init__(
self,
norms: ArrayLike,
vmin: ArrayLike | float | None = ...,
vmax: ArrayLike | float | None = ...,
clip: ArrayLike | bool = ...
) -> None: ...
@property
def norms(self) -> tuple[Normalize, ...]: ...
@property # type: ignore[override]
def vmin(self) -> tuple[float | None, ...]: ...
@vmin.setter
def vmin(self, value: ArrayLike | float | None) -> None: ...
@property # type: ignore[override]
def vmax(self) -> tuple[float | None, ...]: ...
@vmax.setter
def vmax(self, value: ArrayLike | float | None) -> None: ...
@property # type: ignore[override]
def clip(self) -> tuple[bool, ...]: ...
@clip.setter
def clip(self, value: ArrayLike | bool) -> None: ...
def __call__(self, value: ArrayLike, clip: ArrayLike | bool | None = ...) -> list: ... # type: ignore[override]
def inverse(self, value: ArrayLike) -> list: ... # type: ignore[override]
def autoscale(self, A: ArrayLike) -> None: ...
def autoscale_None(self, A: ArrayLike) -> None: ...
def scaled(self) -> bool: ...
@property
def n_variables(self) -> int: ...

def rgb_to_hsv(arr: ArrayLike) -> np.ndarray: ...
def hsv_to_rgb(hsv: ArrayLike) -> np.ndarray: ...
