lightonml.opu

This module contains the OPU class.
class OPU(n_components: int = 200000, opu_device: Optional[Union[lightonml.internal.device.OpuDevice, lightonml.internal.simulated_device.SimulatedOpuDevice]] = None, max_n_features: int = 1000, config_file: str = '', config_override: Optional[dict] = None, verbose_level: int = -1, input_roi_strategy: lightonml.internal.types.InputRoiStrategy = InputRoiStrategy.full, open_at_init: Optional[bool] = None, disable_pbar=False, simulated=False, rescale: Union[lightonml.types.OutputRescaling, str] = OutputRescaling.variance)

    Interface to the OPU.
    \[\mathbf{y} = \lvert \mathbf{R} \mathbf{x} \rvert^2 \mbox{ (non-linear transform, the default)}\]
    \[\mathbf{y} = \mathbf{R}\mathbf{x} \mbox{ (linear transform)}\]

    The main methods are transform, linear_transform, fit1d and fit2d; they accept NumPy arrays or PyTorch tensors.

    The non-linear transform (transform) is a native operation for the OPU and runs at a higher speed than linear_transform.

    Acquiring and releasing hardware device resources is done through open/close or the context-manager interface.

    Unless open_at_init=False, these resources are acquired automatically at init. If another process or kernel has not released them, an error is raised; call close() on the OPU object holding them, or shut down its kernel, to release them.

    Parameters
        n_components (int) – dimensionality of the target projection space.
        opu_device (OpuDevice or SimulatedOpuDevice, optional) – optical processing unit instance linked to a physical or simulated device. If not provided, a device is instantiated automatically. If opu_device is of type SimulatedOpuDevice, the random matrix is generated at __init__, using max_n_features and n_components.
        max_n_features (int, optional) – maximum number of binary features that the OPU will transform. Used only if opu_device is a SimulatedOpuDevice, in order to initialize the random matrix.
        config_file (str, optional) – path to the configuration file (for dev purposes).
        config_override (dict, optional) – overrides of the config_file (for dev purposes).
        verbose_level (int, optional) – deprecated, use lightonml.set_verbose_level() instead.
        input_roi_strategy (types.InputRoiStrategy, optional) – describes how to display the features on the input device. See also lightonml.internal.types.InputRoiStrategy.
        open_at_init (bool, optional) – forces whether hardware resources are acquired at init. If not provided, follows the system's setting (usually True).
        disable_pbar (bool, optional) – disable display of the progress bar when verbose_level is set to 1.
        simulated (bool, optional) – perform the random projection on CPU, in case no OPU is available on your machine; the random matrix is then generated at __init__, using max_n_features and n_components.
        rescale (types.OutputRescaling or str, optional) – output rescaling method for linear_transform. Ignored by transform. See also lightonml.types.OutputRescaling.
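    The two formulas at the top of this class can be sketched with numpy. Everything below is illustrative: the complex Gaussian matrix R and the small dimensions stand in for the device's actual transmission matrix, which is physically realized and much larger (n_components defaults to 200000).

    ```python
    import numpy as np

    # Illustrative sizes; a real OPU projects up to max_n_features binary
    # features into an n_components-dimensional space.
    n_features, n_components = 64, 128

    rng = np.random.default_rng(0)
    # Complex Gaussian stand-in for the OPU's random transmission matrix R.
    R = rng.normal(size=(n_components, n_features)) + 1j * rng.normal(
        size=(n_components, n_features)
    )

    x = rng.integers(0, 2, size=n_features)  # binary input vector

    y_nonlinear = np.abs(R @ x) ** 2  # y = |Rx|^2, what transform() computes natively
    y_linear = R @ x                  # y = Rx, what linear_transform() recovers
    ```

    The non-linear output is real and non-negative (it is an intensity measurement), which is why it is the native, faster operation on the device.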
    rescale
        Output rescaling method for linear_transform. Ignored by transform.
        Type: lightonml.types.OutputRescaling

    max_n_features
        Maximum number of binary features that the OPU will transform. Writeable only if opu_device is a SimulatedOpuDevice, in order to initialize or resize the random matrix.
        Type: int

    device
        Underlying hardware that performs the transformation (read-only).
        Type: OpuDevice or SimulatedOpuDevice

    input_roi_strategy
        Describes how to display the features on the input device.
        Type: types.InputRoiStrategy, optional
    property config
        Returns the internal configuration object.
    fit1d(X=None, n_features: Optional[int] = None, packed: bool = False, online=False, **override)

        Configure the OPU transform for 1d vectors.

        The function can be called either with an input vector, to fit the OPU parameters to it, or with just the vector dimensions, via n_features.

        When the input is bit-packed, the packed flag must be set to True.

        When input vectors must be transformed one by one, performance is improved by setting the online flag to True.

        Parameters
            X (np.ndarray or torch.Tensor) – the fit is made on this vector to optimize transform parameters.
            n_features (int) – number of features of the input, necessary if the X parameter isn't provided.
            packed (bool) – set to True if the input vectors will already be bit-packed.
            online (bool, optional) – set to True if the transforms will be made one vector after the other. Defaults to False.
            override (dict, optional) – keyword args overriding transform settings (advanced parameters).
    fit2d(X=None, n_features: Optional[Tuple[int, int]] = None, packed: bool = False, online=False, **override)

        Configure the OPU transform for 2d vectors.

        The function can be called either with an input vector, to fit the OPU parameters to it, or with just the vector dimensions, via n_features.

        When the input is bit-packed, the packed flag must be set to True, and the number of features must then be provided with n_features.

        When input vectors must be transformed one by one, performance is improved by setting the online flag to True.

        Parameters
            X (np.ndarray or torch.Tensor) – a 2d input vector, or batch of 2d input vectors, binary encoded, packed or not.
            n_features (tuple(int)) – number of features of the input, necessary if the X parameter isn't provided, or if the input is bit-packed.
            packed (bool, optional) – whether the input data is in bit-packed representation. If True, each input vector is assumed to be a 1d array, and the "real" number of features must be provided as n_features. Defaults to False.
            online (bool, optional) – set to True if the transforms will be made one vector after the other. Defaults to False.
            override (dict, optional) – keyword args overriding transform settings (advanced parameters).
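    When packed=True, each 2d input arrives as a flat byte buffer, so its original shape has to be supplied through n_features. A numpy sketch of the packing convention (the shapes are illustrative assumptions; the library's encoders produce such buffers for real data):

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    image = rng.integers(0, 2, size=(8, 16), dtype=np.uint8)  # one binary 2d input

    # Bit-packing flattens the 2d vector into a 1d byte buffer (8 bits per byte),
    # so the original shape is lost and must be passed as n_features=(8, 16).
    packed = np.packbits(image.ravel())  # 8*16 bits -> 16 bytes
    restored = np.unpackbits(packed).reshape(8, 16)
    ```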
    fit_transform1d(X, packed: bool = False, **override) → lightonml.context.ContextArray

        Performs the nonlinear random projections of 1d input vector(s).

        This function is the one-liner equivalent of fit1d and transform calls.

        Warning: when making several transform calls, prefer calling fit1d and then transform, or you might encounter an inconsistency in the transformation matrix.

        The input data can be bit-packed, in which case n_features = 8*X.shape[-1]; otherwise n_features = X.shape[-1].

        If the tqdm module is available, it is used for progress display.

        Parameters
            X (np.ndarray or torch.Tensor) – a 1d input vector, or batch of 1d input vectors, binary encoded, packed or not. The batch can be 1d or 2d; in all cases output.shape[:-1] = X.shape[:-1].
            packed (bool, optional) – whether the input data is in bit-packed representation. Defaults to False.
            override – keyword args overriding transform settings (advanced parameters).

        Returns
            Y – complete array of nonlinear random projections of X, of size self.n_components. If the input is an ndarray, the type is actually ContextArray, with a context attribute holding metadata.

        Return type
            np.ndarray or torch.Tensor
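    The shape contract above (output.shape[:-1] = X.shape[:-1], with the last axis replaced by self.n_components) can be checked with a small numpy stand-in for the projection; the matrix model and sizes are illustrative, not the OPU's:

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_features, n_components = 32, 64
    R = rng.normal(size=(n_components, n_features))  # stand-in random matrix

    X = rng.integers(0, 2, size=(5, 3, n_features))  # a 2d batch of 1d binary vectors
    Y = np.abs(X @ R.T) ** 2                         # stand-in for the OPU projection

    # Only the feature axis changes: (5, 3, 32) -> (5, 3, 64).
    ```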
    fit_transform2d(X, packed: bool = False, n_2d_features=None, **override) → lightonml.context.ContextArray

        Performs the nonlinear random projections of 2d input vector(s).

        This function is the one-liner equivalent of fit2d and transform calls.

        Warning: when making several transform calls, prefer calling fit2d and then transform, or you might encounter an inconsistency in the transformation matrix.

        If the tqdm module is available, it is used for progress display.

        Parameters
            X (np.ndarray or torch.Tensor) – a 2d input vector, or batch of 2d input vectors, binary encoded, packed or not.
            packed (bool, optional) – whether the input data is in bit-packed representation. If True, each input vector is assumed to be a 1d array, and the "real" number of features must be provided as n_2d_features. Defaults to False.
            n_2d_features (list, tuple or np.ndarray of length 2) – if the input is bit-packed, specifies the shape of each input vector. Not needed if the input isn't bit-packed.
            override – keyword args overriding transform settings (advanced parameters).

        Returns
            Y – complete array of nonlinear random projections of X, of size self.n_components. If the input is an ndarray, the type is actually ContextArray, with a context attribute holding metadata.

        Return type
            np.ndarray or torch.Tensor
    linear_transform(X, encoder_cls=lightonml.encoding.base.NoEncoding, decoder_cls=lightonml.encoding.base.NoDecoding) → Union[lightonml.context.ContextArray, Tensor]

        Do a linear transform of X, for Nitro (non-linear) photonic cores.

        Parameters
            X (np.ndarray or torch.Tensor) – input vector, or batch of input vectors. Each vector must have the same dimensions as the one given in fit1d or fit2d.
            encoder_cls (encoding.base.BaseTransformer, optional) – class or instance of a class that transforms the input into binary vectors to be processed by the OPU.
            decoder_cls (encoding.base.BaseTransformer, optional) – class or instance of a class that transforms the output of the OPU back into the appropriate format.

        Returns
            Y – complete array of linear projections of X, of size self.n_components. If the input is an ndarray, the type is actually ContextArray, with a context attribute holding metadata.

        Return type
            np.ndarray or torch.Tensor
    open()

        Acquires hardware resources used by the OPU device.

        See also close(), or use the context-manager interface for closing at the end of an indented block.
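    The open/close discipline follows Python's standard context-manager protocol. A generic sketch of the pattern (this Resource class is a stand-in, not the actual lightonml implementation):

    ```python
    class Resource:
        """Minimal acquire/release wrapper mirroring the OPU's open()/close()."""

        def __init__(self):
            self.opened = False

        def open(self):
            self.opened = True   # acquire hardware resources here
            return self

        def close(self):
            self.opened = False  # release them for other processes/kernels

        def __enter__(self):
            return self.open()

        def __exit__(self, exc_type, exc, tb):
            self.close()         # runs even if the block raised


    with Resource() as r:
        assert r.opened          # resources held inside the indented block
    assert not r.opened          # released automatically on exit
    ```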
    transform(X, encoder_cls=lightonml.encoding.base.NoEncoding, decoder_cls=lightonml.encoding.base.NoDecoding) → Union[lightonml.context.ContextArray, Tensor]

        Performs the nonlinear random projections of one or several input vectors.

        The fit1d or fit2d method must be called beforehand, to set the vector dimensions or the online option. If you need to transform vectors one after the other, add online=True in the fit function.

        Parameters
            X (np.ndarray or torch.Tensor) – input vector, or batch of input vectors. Each vector must have the same dimensions as the one given in fit1d or fit2d.
            encoder_cls (encoding.base.BaseTransformer, optional) – class or instance of a class that transforms the input into binary vectors to be processed by the OPU.
            decoder_cls (encoding.base.BaseTransformer, optional) – class or instance of a class that transforms the output of the OPU back into the appropriate format.

        Returns
            Y – complete array of nonlinear random projections of X, of size self.n_components. If the input is an ndarray, the type is actually ContextArray, with a context attribute holding metadata.

        Return type
            np.ndarray or torch.Tensor
class OutputRescaling(value)

    Strategy used for rescaling the output.

    none = 3
        No rescaling.

    norm = 2
        Ensure approximate conservation of the norm (RIP).

    variance = 1
        Rescale with the standard deviation computed on a Gaussian input.
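The norm option's "approximate conservation of the norm" can be illustrated in numpy: scaling an i.i.d. Gaussian matrix by 1/sqrt(n_components) makes ||Rx|| concentrate around ||x||. The matrix model and tolerance below are illustrative assumptions, not the library's exact rescaling code:

```python
import numpy as np

rng = np.random.default_rng(3)
n_features, n_components = 100, 20000

# Gaussian matrix scaled so that E[||Rx||^2] = ||x||^2.
R = rng.normal(scale=1.0 / np.sqrt(n_components),
               size=(n_components, n_features))

x = rng.normal(size=n_features)
y = R @ x

ratio = np.linalg.norm(y) / np.linalg.norm(x)  # concentrates near 1.0
```

The larger n_components is, the tighter the concentration, which is the restricted-isometry-style property the enum docstring refers to.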
Copyright (c) 2020 LightOn, All Rights Reserved. This file is subject to the terms and conditions defined in file 'LICENSE.txt', which is part of this source code package.

Module containing enums used with the opu.OPU class.
class FeaturesFormat(value)

    Strategy used for the formatting of data on the input device.

    lined = 1
        Features are positioned in a line.

    macro_2d = 2
        Features are zoomed into elements.

    none = 4
        No formatting; the input is displayed as-is, but it must match the number of elements of the input device.
class InputRoiStrategy(value)

    Strategy used for computing the input ROI.

    auto = 3
        Try to find the most appropriate of the two other modes.

    full = 1
        Apply zoom on elements to fill the whole display.

    small = 2
        Center the features on the display, with one-to-one element mapping.
class OutputRoiStrategy(value)

    Strategy used for computing the output ROI.

    mid_square = 2
        Area in the middle & square (Saturn).

    mid_width = 1
        Area in the middle & max width, to have maximum speed (Zeus, Vulcain).
class Context(frametime: Optional[int] = None, exposure: Optional[int] = None, output_roi: Optional[Tuple[Tuple[int, int], Tuple[int, int]]] = None, start: Optional[datetime.datetime] = None, end: Optional[datetime.datetime] = None, gain: Optional[float] = None, input_roi: Optional[Tuple[Tuple[int, int], Tuple[int, int]]] = None, n_ones: Optional[int] = None, fmt_type: Optional[lightonml.internal.types.FeaturesFormat] = None, fmt_factor: Optional[int] = None)

    Describes the context of an OPU transform.

    input_roi
        (offset, size) of the input device region of interest.

    fmt_type
        Type of formatting used to map features to the input device.
        Type: lightonml.types.FeaturesFormat

    from_opu(opu, start: datetime.datetime, end: Optional[datetime.datetime] = None)
        Takes context from an OPU device, namely frametime, exposure, cam_roi and gain, with an optional end time.
class
ContextArray
(input_array, context: lightonml.context.Context)[source]¶ Array with additional ‘context’ attribute
-
from_epoch
(datetime_or_epoch)[source]¶ Convert to datetime if argument is an epoch, otherwise return same
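ContextArray follows numpy's documented pattern for subclassing ndarray with extra metadata. A minimal sketch (the class name and sample metadata are illustrative; only the idea of a context attribute comes from the docs above):

```python
import numpy as np


class MetaArray(np.ndarray):
    """ndarray subclass carrying an extra 'context' attribute, like ContextArray."""

    def __new__(cls, input_array, context=None):
        obj = np.asarray(input_array).view(cls)
        obj.context = context
        return obj

    def __array_finalize__(self, obj):
        # Propagate the attribute through views, slices and ufunc results.
        if obj is not None:
            self.context = getattr(obj, "context", None)


y = MetaArray(np.arange(4), context={"exposure": 400})
```

Because the attribute is restored in __array_finalize__, it survives slicing and views, which is what lets transform results keep their acquisition metadata.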