lightonml.opu

This module contains the OPU class

class OPU(n_components: int = 200000, opu_device: Union[lightonml.internal.device.OpuDevice, lightonml.internal.simulated_device.SimulatedOpuDevice, None] = None, max_n_features: int = 1000, config_file: str = '', config_override: dict = None, verbose_level: int = -1, input_roi_strategy: lightonml.internal.types.InputRoiStrategy = <InputRoiStrategy.auto: 3>, open_at_init: bool = None, disable_pbar=False)[source]

Interface to the OPU.

\[\mathbf{y} = \lvert \mathbf{R} \mathbf{x} \rvert^2\]

Main methods are transform, fit1d and fit2d, and accept NumPy arrays or PyTorch tensors.

Acquiring/releasing hardware device resources is done by open/close and a context-manager interface.
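The projection above can be modeled in a few lines of NumPy. This is a hedged sketch, not the library's implementation: the true transmission matrix R is fixed by the physics of the opaque medium, and is stood in for here by a software-sampled complex Gaussian matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_components = 100, 2000  # small sizes for the sketch

# Stand-in for the medium's transmission matrix R (complex Gaussian)
R = (rng.standard_normal((n_components, n_features))
     + 1j * rng.standard_normal((n_components, n_features))) / np.sqrt(2)

x = rng.integers(0, 2, n_features)  # binary input vector
y = np.abs(R @ x) ** 2              # y = |Rx|^2, the nonlinear random projection
assert y.shape == (n_components,)
```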

Unless open_at_init=False, these resources are acquired automatically at init. If another process or kernel has not released them, an error is raised; call close() on the OPU object, or shut down the other kernel, to release them.

Parameters
  • n_components (int, optional) – dimensionality of the target projection space.

  • opu_device (OpuDevice or SimulatedOpuDevice, optional) – optical processing unit instance linked to a physical or simulated device. If not provided, a device is instantiated automatically. If opu_device is of type SimulatedOpuDevice, the random matrix is generated at __init__, using max_n_features and n_components.

  • max_n_features (int, optional) – maximum number of binary features that the OPU will transform; used only if opu_device is a SimulatedOpuDevice, in order to initialize the random matrix

  • config_file (str, optional) – path to the configuration file (for development purposes)

  • config_override (dict, optional) – overrides entries of the config_file (for development purposes)

  • verbose_level (int, optional) – deprecated, use lightonml.set_verbose_level instead

  • input_roi_strategy (types.InputRoiStrategy, optional) – describes how to display the features on the input device. See also lightonml.types.InputRoiStrategy.

  • open_at_init (bool, optional) – whether to acquire the hardware resources at init. If not provided, follows the system's setting (usually True)

n_components

dimensionality of the target projection space.

Type

int

max_n_features

maximum number of binary features that the OPU will transform; writable only if opu_device is a SimulatedOpuDevice, in order to initialize or resize the random matrix

Type

int

device

underlying hardware that performs transformation (read-only)

Type

OpuDevice or SimulatedOpuDevice

input_roi_strategy

describes how to display the features on the input device

Type

types.InputRoiStrategy, optional

verbose_level

0, 1 or 2. 0 = no messages, 1 = most messages, and 2 = all messages

Type

int, optional

close()[source]

Releases hardware resources used by the OPU device

property config

Returns the internal configuration object

fit1d(X=None, n_features: int = None, packed: bool = False, online=False, **override)[source]

Configure OPU transform for 1d vectors

The function can be called either with an input vector, to fit the OPU parameters to it, or with just the vector dimensions, via n_features.

When the input is bit-packed, the packed flag must be set to True.

When input vectors must be transformed one at a time, set the online flag to True for better performance.

Parameters
  • X (np.ndarray or torch.Tensor) – Fit is performed on this vector to optimize the transform parameters

  • n_features (int) – Number of features of the input; required if the X parameter isn't provided

  • packed (bool) – Set to True if the input vectors are already bit-packed

  • online (bool, optional) – Set to True if the transforms will be made one vector at a time; defaults to False

  • override (keyword args for overriding transform settings (advanced parameters)) –

fit2d(X=None, n_features: Tuple[int, int] = None, packed: bool = False, online=False, **override)[source]

Configure OPU transform for 2d vectors

The function can be called either with an input vector, to fit the OPU parameters to it, or with just the vector dimensions, via n_features.

When the input is bit-packed, the packed flag must be set to True, and the number of features must then be provided with n_features.

When input vectors must be transformed one at a time, set the online flag to True for better performance.

Parameters
  • X (np.ndarray or torch.Tensor) – a 2d input vector, or batch of 2d input vectors, binary encoded, packed or not

  • n_features (tuple(int)) – Number of features of the input; required if the X parameter isn't provided, or if the input is bit-packed

  • packed (bool, optional) – Whether the input data is in bit-packed representation; if True, each input vector is assumed to be a 1d array, and the "real" number of features must be provided as n_features; defaults to False

  • online (bool, optional) – Set to True if the transforms will be made one vector at a time; defaults to False

  • override (keyword args for overriding transform settings (advanced parameters)) –

fit_transform1d(X, packed: bool = False, **override) → lightonml.context.ContextArray[source]

Performs the nonlinear random projections of 1d input vector(s).

This function is the one-liner equivalent of fit1d and transform calls.

Warning

When making several transform calls, prefer calling fit1d and then transform; otherwise you might encounter an inconsistency in the transformation matrix.

The input data can be bit-packed, in which case n_features = 8*X.shape[-1]; otherwise n_features = X.shape[-1].
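The bit-packed shape relation can be checked with NumPy's packbits. This is a hedged illustration of the shape arithmetic only; lightonml's internal bit order may differ:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(10, 24), dtype=np.uint8)  # batch of 1d binary vectors
X_packed = np.packbits(X, axis=-1)                     # 8 binary features per byte

assert X_packed.shape == (10, 3)
assert 8 * X_packed.shape[-1] == X.shape[-1]           # n_features = 8 * X.shape[-1]
```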

If the tqdm module is available, it is used for progress display

Parameters
  • X (np.ndarray or torch.Tensor) – a 1d input vector, or batch of 1d input vectors, binary encoded, packed or not; the batch can be 1d or 2d. In all cases output.shape[:-1] = X.shape[:-1]

  • packed (bool, optional) – Whether the input data is in bit-packed representation; defaults to False

  • override (keyword args for overriding transform settings (advanced parameters)) –

Returns

Y – complete array of nonlinear random projections of X, of size self.n_components. If the input is an ndarray, the type is actually ContextArray, with a context attribute containing metadata

Return type

np.ndarray or torch.Tensor

fit_transform2d(X, packed: bool = False, n_2d_features=None, **override) → lightonml.context.ContextArray[source]

Performs the nonlinear random projections of 2d input vector(s).

This function is the one-liner equivalent of fit2d and transform calls.

Warning

When making several transform calls, prefer calling fit2d and then transform; otherwise you might encounter an inconsistency in the transformation matrix.

If the tqdm module is available, it is used for progress display

Parameters
  • X (np.ndarray or torch.Tensor) – a 2d input vector, or batch of 2d input vectors, binary encoded, packed or not

  • packed (bool, optional) – Whether the input data is in bit-packed representation; if True, each input vector is assumed to be a 1d array, and the "real" number of features must be provided as n_2d_features; defaults to False

  • n_2d_features (list, tuple or np.ndarray of length 2) – If the input is bit-packed, specifies the shape of each input vector. Not needed if the input isn’t bit-packed.

  • override (keyword args for overriding transform settings (advanced parameters)) –

Returns

Y – complete array of nonlinear random projections of X, of size self.n_components. If the input is an ndarray, the type is actually ContextArray, with a context attribute containing metadata

Return type

np.ndarray or torch.Tensor

open()[source]

Acquires hardware resources used by the OPU device

See also

close(), or use the context-manager interface to close at the end of an indented block

transform(X) → Union[lightonml.context.ContextArray, Tensor][source]

Performs the nonlinear random projections of one or several input vectors.

The fit1d or fit2d method must be called beforehand, to set the vector dimensions or the online option. If you need to transform vectors one after another, set online=True in the fit call.

Parameters

X (np.ndarray or torch.Tensor) – input vector, or batch of input vectors. Each vector must have the same dimensions as the one given in fit1d or fit2d.

Returns

Y – complete array of nonlinear random projections of X, of size self.n_components. If the input is an ndarray, the type is actually ContextArray, with a context attribute containing metadata

Return type

np.ndarray or torch.Tensor
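The batch-shape contract of the transform (output.shape[:-1] = X.shape[:-1], with n_components as the last dimension) can be illustrated with a toy stand-in for the optical projection, using a software-sampled matrix in place of the device's fixed transmission matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_components = 50, 300
R = rng.standard_normal((n_components, n_features))  # toy stand-in matrix

X = rng.integers(0, 2, size=(4, 7, n_features))  # a 2d batch of 1d binary vectors
Y = np.abs(X @ R.T) ** 2                         # toy |Rx|^2 applied along the last axis

assert Y.shape[:-1] == X.shape[:-1]
assert Y.shape[-1] == n_components
```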

transform1d(*args, **kwargs) → Union[lightonml.context.ContextArray, Tensor][source]

Performs the nonlinear random projections of one 1d input vector, or a batch of 1d input vectors.

This function is kept for backwards compatibility only; prefer using fit1d followed by transform, or fit_transform1d.

Warning

When making several transform calls, prefer calling fit1d and then transform; otherwise you might encounter an inconsistency in the transformation matrix.

The input data can be bit-packed, in which case n_features = 8*X.shape[-1]; otherwise n_features = X.shape[-1].

Deprecated since version 1.2.

Parameters
  • X (np.ndarray or torch.Tensor) – a 1d input vector, or batch of 1d input vectors, binary encoded, packed or not; the batch can be 1d or 2d. In all cases output.shape[:-1] = X.shape[:-1]

  • packed (bool, optional) – Whether the input data is in bit-packed representation; defaults to False

  • override (keyword args for overriding transform settings (advanced parameters)) –

Returns

Y – complete array of nonlinear random projections of X, of size self.n_components. The type is actually ContextArray, with a context attribute containing metadata

Return type

np.ndarray or torch.Tensor

transform2d(*args, **kwargs) → Union[lightonml.context.ContextArray, Tensor][source]

Performs the nonlinear random projections of one 2d input vector, or a batch of 2d input vectors.

Warning

When making several transform calls, prefer calling fit2d and then transform; otherwise you might encounter an inconsistency in the transformation matrix.

This function is kept for backwards compatibility only; prefer using fit2d followed by transform, or fit_transform2d.

Deprecated since version 1.2.

Parameters
  • X (np.ndarray or torch.Tensor) – a 2d input vector, or batch of 2d input vectors, binary encoded, packed or not

  • packed (bool, optional) – Whether the input data is in bit-packed representation; if True, each input vector is assumed to be a 1d array, and the "real" number of features must be provided as n_2d_features; defaults to False

  • n_2d_features (list, tuple or np.ndarray of length 2) – If the input is bit-packed, specifies the shape of each input vector. Not needed if the input isn’t bit-packed.

  • override (keyword args for overriding transform settings (advanced parameters)) –

Returns

Y – complete array of nonlinear random projections of X, of size self.n_components. If the input is an ndarray, the type is actually ContextArray, with a context attribute containing metadata

Return type

np.ndarray or torch.Tensor

version()[source]

Returns a multi-line string containing the name and versions of the OPU

Copyright (c) 2020 LightOn, All Rights Reserved. This file is subject to the terms and conditions defined in file ‘LICENSE.txt’, which is part of this source code package.

Module containing enums used with the opu.OPU class

class FeaturesFormat[source]

Strategy used for the formatting of data on the input device

auto = 3

Automatic choice

lined if features are 1d, macro_2d if 2d

lined = 1

Features are positioned in line

macro_2d = 2

Features are zoomed into macro-elements

none = 4

No formatting

the input is displayed as-is, but it must have the same number of elements as the input device
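What macro_2d formatting does can be pictured with np.kron: each feature is replicated over a square macro-element on the input device. A hedged sketch, not lightonml's implementation; fmt_factor here stands for the zoom factor reported in Context.fmt_factor:

```python
import numpy as np

features = np.array([[1, 0],
                     [0, 1]], dtype=np.uint8)  # a tiny 2d feature map
fmt_factor = 3                                 # hypothetical zoom factor

# Each feature is replicated over a fmt_factor x fmt_factor macro-element
displayed = np.kron(features, np.ones((fmt_factor, fmt_factor), dtype=np.uint8))
assert displayed.shape == (6, 6)
```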

class InputRoiStrategy[source]

Strategy used for computing the input ROI

auto = 3

Try to find the most appropriate of the two modes below

full = 1

Apply zoom on elements to fill the whole display

small = 2

Center the features on the display, with one-to-one element mapping
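The small strategy's centering amounts to simple offset arithmetic. A hedged sketch of that computation, with made-up display and feature shapes:

```python
# Center an ROI of features on the input display, one display element per feature
display_shape = (1140, 912)   # illustrative input-device resolution
feature_shape = (32, 32)

offset = tuple((d - f) // 2 for d, f in zip(display_shape, feature_shape))
roi = (offset, feature_shape)  # (offset, size), matching Context.input_roi's convention
```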

class OutputRoiStrategy[source]

Strategy used for computing the output ROI

mid_square = 2

Area in the middle & square (Saturn)

mid_width = 1

Area in the middle & max_width, to have max speed (Zeus, Vulcain)

class Context(frametime: int = None, exposure: int = None, output_roi: Tuple[Tuple[int, int], Tuple[int, int]] = None, start: datetime.datetime = None, end: datetime.datetime = None, gain: float = None, input_roi: Tuple[Tuple[int, int], Tuple[int, int]] = None, n_ones: int = None, fmt_type: lightonml.internal.types.FeaturesFormat = None, fmt_factor: int = None)[source]

Describes the context of an OPU transform

exposure_us

Exposure time of the output device (µs)

Type

int

frametime_us

Exposure time of the input device (µs)

Type

int

output_roi

(offset, size) of the output device region of interest

Type

tuple(tuple(int))

input_roi

(offset, size) of the input device region of interest

Type

tuple(tuple(int))

start

epoch of the start time of the transform

Type

float

end

epoch of the end time of the transform

Type

float

n_ones

average number of ones displayed on the input device

Type

int

fmt_type

type of formatting used to map features to the input device

Type

lightonml.types.FeaturesFormat

fmt_factor

size of the macro-elements used when formatting

Type

int

static from_dict(d)[source]

Create a context from a dict (flat or not)

from_opu(opu, start: datetime.datetime, end: datetime.datetime = None)[source]

Takes context from an OPU device, namely frametime, exposure, cam_roi and gain, with an optional end time

class ContextArray[source]

Array with additional ‘context’ attribute

from_epoch(datetime_or_epoch)[source]

Convert to datetime if the argument is an epoch, otherwise return the argument unchanged

get_debug_fn()[source]

Returns a debug logging function, or a no-op if the logging level isn't debug

get_trace_fn()[source]

Returns a trace logging function, or a no-op if the logging level isn't trace

set_verbose_level(verbose_level)[source]

Set the log level for the lightonml module. Once changed, get_trace_fn and similar functions must be called again. Levels are 0: nothing, 1: print info, 2: debug info, 3: trace info
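The four-level scheme maps naturally onto stdlib logging. A hypothetical analogue of such a setter, not lightonml's actual mechanism:

```python
import logging

TRACE = logging.DEBUG - 5  # stdlib has no trace level; define one below DEBUG
_LEVELS = {0: logging.WARNING, 1: logging.INFO, 2: logging.DEBUG, 3: TRACE}

def set_verbose_level(verbose_level: int,
                      logger=logging.getLogger("lightonml")):
    # Translate a 0-3 verbose level into a stdlib logging threshold
    logger.setLevel(_LEVELS[verbose_level])

set_verbose_level(2)
assert logging.getLogger("lightonml").isEnabledFor(logging.DEBUG)
```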