Release notes

Version 1.4

  • Add a linear_transform method to the OPU and OPUMap classes, a linear counterpart to the transform method.

  • Allow an encoder and a decoder to be passed directly as arguments to transform and linear_transform

See linear_transform for more details, as well as the What is an OPU? page.
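As a sketch of the difference between the two methods: an OPU's native transform is non-linear (it measures the squared magnitude of a random projection), while linear_transform behaves linearly. The numpy model below is purely illustrative; the matrix W and the |Wx|² form are assumptions for the sketch, not lightonml internals.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_components = 8, 16

# Hypothetical stand-in for the OPU's internal complex random matrix.
W = rng.normal(size=(n_components, n_features)) \
    + 1j * rng.normal(size=(n_components, n_features))

x = rng.normal(size=n_features)

# transform-like operation: non-linear, |Wx|^2.
y_nonlinear = np.abs(W @ x) ** 2

# linear_transform-like operation: a plain linear projection.
y_linear = W @ x

# Linearity holds for the linear projection but not for the non-linear one.
a, b = rng.normal(size=n_features), rng.normal(size=n_features)
assert np.allclose(W @ (a + b), W @ a + W @ b)
assert not np.allclose(np.abs(W @ (a + b)) ** 2,
                       np.abs(W @ a) ** 2 + np.abs(W @ b) ** 2)
```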

Version 1.3

New features

  • lightonml is publicly available on PyPI and GitHub.

  • The OPU class, previously in lightonopu, has been moved to lightonml. This makes it easier to use a simulated OPU on a local machine: simply run pip install lightonml.

Version 1.2

New features

  • Online transform mode gives a large speedup when running transform on single vectors or on small batches (fewer than 100 vectors). To use it, pass online=True to the new OPU fit1d/fit2d methods. On single vectors the speedup is more than 70x compared with online=False.

  • Add fit/transform methods. The new fit1d/fit2d methods allow the OPU to be fit before calling transform. They accept as parameters either the number of features or example input vectors.

  • The transform method now has no 1d/2d variants; the input dimensionality is chosen by calling fit1d or fit2d.

  • Add a fit method to OPUMap objects in lightonml.projections.sklearn and lightonml.projections.torch.
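The fit-then-transform call pattern described above can be sketched as follows. MockOPU is a hypothetical numpy stand-in that only mimics the call shape (fit1d accepting either n_features or example vectors, then a single transform method); it is not the lightonml implementation.

```python
import numpy as np

class MockOPU:
    """Toy stand-in illustrating the fit1d/transform call pattern."""

    def __init__(self, n_components=10):
        self.n_components = n_components
        self._W = None

    def fit1d(self, X=None, n_features=None, online=False):
        # Accept either example input vectors or the number of features.
        if n_features is None:
            n_features = X.shape[-1]
        rng = np.random.default_rng(0)
        self._W = rng.normal(size=(self.n_components, n_features))
        self.online = online
        return self

    def transform(self, X):
        # A single transform method; dimensionality was fixed at fit time.
        return np.abs(X @ self._W.T) ** 2

opu = MockOPU(n_components=4)
opu.fit1d(n_features=8)          # or: opu.fit1d(X_train)
y = opu.transform(np.ones(8))
print(y.shape)                   # (4,)
```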

Minor changes

  • Context metadata with information on the transform is now an attribute of the output array.

  • Internal optimizations speed up transform by ~5%

  • Add a new verbose level (0 to 3, 3 being the most verbose), settable globally with lightonopu.set_verbose_level()

  • Allow transform settings to be overridden in the fit1d/fit2d methods (for advanced usage)
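A minimal sketch of what a global verbose-level setter can look like, assuming a mapping of levels 0-3 onto Python's standard logging module; the real lightonopu.set_verbose_level() may be implemented differently.

```python
import logging

# Hypothetical mapping from verbose levels (0 = quietest, 3 = most
# verbose) to Python logging levels; not lightonopu's actual code.
_LEVELS = {0: logging.ERROR, 1: logging.WARNING,
           2: logging.INFO, 3: logging.DEBUG}

def set_verbose_level(level: int) -> None:
    """Set verbosity globally by adjusting a shared logger."""
    clamped = max(0, min(level, 3))
    logging.getLogger("opu").setLevel(_LEVELS[clamped])

set_verbose_level(3)
assert logging.getLogger("opu").level == logging.DEBUG
```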