lightonopu.opu

This module contains the OPU class, the main class of the library.

class OPU(n_components: int = 200000, opu_device: Union[lightonopu.device.OpuDevice, lightonopu.simulated_device.SimulatedOpuDevice, None] = None, max_n_features: int = 1000, config_file: str = '/etc/lighton/opu.json', config_override: dict = None, verbose_level: int = 0, features_fmt: lightonopu.types.FeaturesFormat = <FeaturesFormat.auto: 3>, dmd_strategy: lightonopu.types.DmdRoiStrategy = <DmdRoiStrategy.auto: 3>)[source]

Interface to the OPU.

\[\mathbf{y} = \lvert \mathbf{R} \mathbf{x} \rvert^2\]
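As an illustration, this computation can be sketched in NumPy. Modeling R as a complex Gaussian matrix is an assumption (the physical transmission matrix is not directly observable), and the sizes are reduced for the sketch; the OPU performs this operation optically.

```python
import numpy as np

# Illustrative sketch only: the OPU computes y = |R x|^2 in hardware.
# R modeled as a complex Gaussian matrix is an assumption; sizes are
# much smaller than the real device's for readability.
rng = np.random.default_rng(0)
n_features, n_components = 1000, 200
R = rng.normal(size=(n_components, n_features)) \
    + 1j * rng.normal(size=(n_components, n_features))

x = rng.integers(0, 2, size=n_features)  # binary input vector
y = np.abs(R @ x) ** 2                   # nonlinear random projection

print(y.shape)  # (200,)
```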

Main methods are transform1d and transform2d, and accept NumPy arrays or PyTorch tensors.

OPU offers a context-manager interface for acquiring hardware device resources.

Parameters
  • n_components (int) – dimensionality of the target projection space.

  • opu_device (OpuDevice or SimulatedOpuDevice, optional) – optical processing unit instance linked to a physical or simulated device. If not provided, a device is instantiated automatically. If opu_device is of type SimulatedOpuDevice, the random matrix is generated at __init__, using max_n_features and n_components.

  • max_n_features (int, optional) – maximum number of binary features that the OPU will transform. Used only if opu_device is a SimulatedOpuDevice, in order to initialize the random matrix.

  • config_file (str, optional) – path to the configuration file (for development purposes)

  • config_override (dict, optional) – dict overriding entries of the config_file (for development purposes)

  • verbose_level (int, optional) – 0, 1 or 2. 0 = no messages, 1 = most messages, and 2 = messages from OPU device (very verbose).

  • features_fmt (types.FeaturesFormat, optional) – describes how data is formatted on the DMD. Can be “auto”, “lined”, “macro_pixels” or “none”. If “none”, the transform input must be of the same size as device.input_size.

  • dmd_strategy (types.DmdRoiStrategy, optional) – describes how to display the features on the DMD; see types.DmdRoiStrategy.
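A minimal usage sketch, assuming lightonopu is installed. The simulated device avoids the need for hardware; the parameter values are illustrative, and SimulatedOpuDevice is instantiated with default arguments as an assumption (its constructor is not documented on this page).

```python
import numpy as np
from lightonopu.opu import OPU
from lightonopu.simulated_device import SimulatedOpuDevice

# With a SimulatedOpuDevice, the random matrix is generated at __init__
# from max_n_features and n_components (see the parameters above).
opu = OPU(n_components=1000,
          opu_device=SimulatedOpuDevice(),
          max_n_features=784)

X = np.random.randint(0, 2, (10, 784), dtype=np.uint8)  # binary batch
with opu:                   # context manager acquires/releases the device
    Y = opu.transform1d(X)  # Y.shape == (10, 1000)
```

The with block plays the role of explicit open()/close() calls, so hardware resources are released even if the transform raises.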

n_components

dimensionality of the target projection space.

Type

int

max_n_features

maximum number of binary features that the OPU will transform. Writeable only if opu_device is a SimulatedOpuDevice, in order to initialize or resize the random matrix.

Type

int

device

underlying hardware that performs transformation (read-only)

Type

OpuDevice or SimulatedOpuDevice

features_fmt

describes how data is formatted on the DMD

Type

types.FeaturesFormat, optional

dmd_strategy

describes how to display the features on the DMD

Type

types.DmdRoiStrategy, optional

verbose_level

0, 1 or 2. 0 = no messages, 1 = most messages, and 2 = messages from the OPU device (very verbose).

Type

int, optional

batch_transform(ins: numpy.ndarray, output: numpy.ndarray, packed: bool, fmt_func, batch_index)[source]

Format and transform a single batch of encoded vectors

close()[source]

Releases hardware resources used by the OPU device

open()[source]

Acquires hardware resources used by the OPU device

See close(), or use the context-manager interface to release resources at the end of a with block.

transform1d(X, packed: bool = False, dmd_roi_: Tuple[Union[List[int], numpy.ndarray], Union[List[int], numpy.ndarray]] = None, context: Optional[lightonopu.context.Context] = <lightonopu.context.Context object>)[source]

Performs the nonlinear random projections of 1d input vector(s).

The input data can be bit-packed, in which case n_features = 8*X.shape[-1]; otherwise n_features = X.shape[-1].
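The packed-size relation can be checked with NumPy's packbits (a generic illustration, independent of the OPU):

```python
import numpy as np

x = np.random.randint(0, 2, 128, dtype=np.uint8)  # 128 binary features
packed = np.packbits(x)                           # 8 features per byte

print(packed.shape)          # (16,)
print(8 * packed.shape[-1])  # 128, i.e. n_features = 8 * X.shape[-1]
```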

If tqdm module is available, it is used for progress display

Parameters
  • X (np.ndarray or torch.Tensor) – a 1d input vector, or batch of 1d input vectors, binary encoded, packed or not. The batch can be 1d or 2d. In all cases output.shape[:-1] = X.shape[:-1].

  • packed (bool, optional) – whether the input data is in bit-packed representation; defaults to False.

  • dmd_roi_ (tuple, optional) – if provided as (offset, size), overrides the computation of the DMD ROI (advanced parameter).

  • context (Context, optional) – will be filled with information about the transform; see lightonopu.context.Context.

Returns

Y – complete array of nonlinear random projections of X, of size self.n_components

Return type

np.ndarray or torch.Tensor

transform2d(X, packed: bool = False, n_2d_features=None, dmd_roi_: Tuple[Union[List[int], numpy.ndarray], Union[List[int], numpy.ndarray]] = None, context: Optional[lightonopu.context.Context] = <lightonopu.context.Context object>)[source]

Performs the nonlinear random projections of 2d input vector(s).

If tqdm module is available, it is used for progress display

Parameters
  • X (np.ndarray or torch.Tensor) – a 2d input vector, or batch of 2d input vectors, binary encoded, packed or not.

  • packed (bool, optional) – whether the input data is in bit-packed representation. If True, each input vector is assumed to be a 1d array, and the “real” number of features must be provided as n_2d_features; defaults to False.

  • n_2d_features (list, tuple or np.ndarray of length 2) – the real 2d shape of the features; required if the input is packed.

  • dmd_roi_ (tuple, optional) – if provided as (offset, size), overrides the computation of the DMD ROI (advanced parameter).

  • context (Context, optional) – will be filled with information about the transform; see lightonopu.context.Context.

Returns

Y – complete array of nonlinear random projections of X, of size self.n_components

Return type

np.ndarray or torch.Tensor
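To see why n_2d_features is needed for packed input: packing flattens the data, so the 2d geometry must travel separately. A generic NumPy illustration (the 28×28 shape is arbitrary):

```python
import numpy as np

img = np.random.randint(0, 2, (28, 28), dtype=np.uint8)  # 2d binary features
packed = np.packbits(img)    # packing flattens to 1d; the 2d shape is lost

# The buffer alone can't tell (28, 28) from, say, (16, 49), so the real
# feature shape has to be passed alongside -- hence n_2d_features=(28, 28).
restored = np.unpackbits(packed)[:28 * 28].reshape(28, 28)
print((restored == img).all())  # True
```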

version()[source]

Returns a multi-line string containing name and versions of the OPU

lightonopu.types

Module containing enums used with the opu.OPU class

class CamRoiStrategy[source]

Strategy used for computing the camera ROI

mid_square = 2

Area in the middle & square (Saturn)

mid_width = 1

Area in the middle & max_width, to have max speed (Zeus, Vulcain)

class DmdRoiStrategy[source]

Strategy used for computing the DMD ROI

auto = 3

Try to find the most appropriate between these two modes

full = 1

Use macro-pixels to fill the whole dmd display

small = 2

Center the features on the dmd, with one element on each pixel

class FeaturesFormat[source]

Strategy used for the formatting of data on the DMD

auto = 3

Automatic choice

lined if features are 1d, macro_pixels if 2d

lined = 1

Features are positioned in line

macro_pixels = 2

Features are zoomed into pixels

none = 4

No formatting

input is displayed as-is, but it must match the number of elements of the DMD
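The lined and macro_pixels layouts can be sketched with NumPy (illustrative only; the actual DMD mapping is performed by the library, and the zoom factor f here is arbitrary):

```python
import numpy as np

features = np.array([[1, 0],
                     [0, 1]], dtype=np.uint8)  # tiny 2d feature map

# macro_pixels: each feature is zoomed into an f x f block of DMD mirrors
f = 3
macro = np.kron(features, np.ones((f, f), dtype=np.uint8))
print(macro.shape)   # (6, 6)

# lined: features are laid out one per mirror, in a line
lined = features.ravel()
print(lined.shape)   # (4,)
```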

lightonopu.context

class Context(frametime: int = None, exposure: int = None, cam_roi: Tuple[Union[List[int], numpy.ndarray], Union[List[int], numpy.ndarray]] = None, start: datetime.datetime = None, end: datetime.datetime = None, gain: float = None, dmd_roi: Tuple[Union[List[int], numpy.ndarray], Union[List[int], numpy.ndarray]] = None, n_ones: int = None, fmt_type: lightonopu.types.FeaturesFormat = None, fmt_factor: int = None)[source]

Describes the context of an OPU transform

exposure_us

Exposure time of the camera (µs)

Type

int

frametime_us

Exposure time of the DMD (µs)

Type

int

cam_roi

(offset, size) of the camera region of interest

Type

tuple(list(int))

dmd_roi

(offset, size) of the DMD region of interest

Type

tuple(list(int))

start

epoch of the start time of the transform

Type

float

end

epoch of the end time of the transform

Type

float

n_ones

average number of ones displayed on the DMD

Type

int

fmt_type

type of formatting used to map features to the DMD

Type

lightonopu.types.FeaturesFormat

fmt_factor

size of the macro-pixels used when formatting

Type

int

static from_dict(d)[source]

Create a context from a dict (flat or not)

from_opu(opu, start, end=None)[source]

Takes context from an OPU device, namely frametime, exposure, cam_roi and gain, with an optional end time.

from_epoch(datetime_or_epoch)[source]

Convert to datetime if the argument is an epoch; otherwise return the argument unchanged
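from_epoch's behavior can be mimicked with the standard library (a stand-alone equivalent written for illustration, not the library's own code):

```python
from datetime import datetime

def from_epoch(datetime_or_epoch):
    """Convert to datetime if the argument is an epoch; else return it as-is."""
    if isinstance(datetime_or_epoch, (int, float)):
        return datetime.fromtimestamp(datetime_or_epoch)
    return datetime_or_epoch

d = datetime(2020, 1, 1)
print(from_epoch(d) is d)                               # True: passed through
print(isinstance(from_epoch(1_577_836_800), datetime))  # True: converted
```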