All-optical synthesis of an arbitrary linear transformation using diffractive surfaces
Main Authors: Kulce, Onur; Mengu, Deniz; Rivenson, Yair; Ozcan, Aydogan
Format: Online Article Text
Language: English
Published: Nature Publishing Group UK, 2021
Subjects: Article
Online Access: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8463717/ https://www.ncbi.nlm.nih.gov/pubmed/34561415 http://dx.doi.org/10.1038/s41377-021-00623-5
_version_ | 1784572455221723136 |
author | Kulce, Onur; Mengu, Deniz; Rivenson, Yair; Ozcan, Aydogan
author_facet | Kulce, Onur; Mengu, Deniz; Rivenson, Yair; Ozcan, Aydogan
author_sort | Kulce, Onur |
collection | PubMed |
description | Spatially-engineered diffractive surfaces have emerged as a powerful framework to control light-matter interactions for statistical inference and the design of task-specific optical components. Here, we report the design of diffractive surfaces to all-optically perform arbitrary complex-valued linear transformations between an input (N_i) and output (N_o), where N_i and N_o represent the number of pixels at the input and output fields-of-view (FOVs), respectively. First, we consider a single diffractive surface and use a matrix pseudoinverse-based method to determine the complex-valued transmission coefficients of the diffractive features/neurons to all-optically perform a desired/target linear transformation. In addition to this data-free design approach, we also consider a deep learning-based design method to optimize the transmission coefficients of diffractive surfaces by using examples of input/output fields corresponding to the target transformation. We compared the all-optical transformation errors and diffraction efficiencies achieved using data-free designs as well as data-driven (deep learning-based) diffractive designs to all-optically perform (i) arbitrarily-chosen complex-valued transformations including unitary, nonunitary, and noninvertible transforms, (ii) 2D discrete Fourier transformation, (iii) arbitrary 2D permutation operations, and (iv) high-pass filtered coherent imaging. Our analyses reveal that if the total number (N) of spatially-engineered diffractive features/neurons is ≥ N_i × N_o, both design methods succeed in all-optical implementation of the target transformation, achieving negligible error. However, compared to data-free designs, deep learning-based diffractive designs are found to achieve significantly larger diffraction efficiencies for a given N, and their all-optical transformations are more accurate for N < N_i × N_o. These conclusions are generally applicable to various optical processors that employ spatially-engineered diffractive surfaces. |
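To make the abstract's data-free design route concrete: for a single diffractive surface, the synthesized transform factorizes as A' = H2 · diag(t) · H1, where H1 and H2 are the propagation operators from the input FOV to the surface and from the surface to the output FOV, and t holds the N complex transmission coefficients. Because A' is linear in t, matching a target A reduces to a pseudoinverse solve. Below is a minimal NumPy sketch of that idea, not the authors' code: the random complex matrices standing in for H1 and H2 (and all variable names) are assumptions of this sketch; in the paper those operators follow from free-space diffraction between planes.

```python
import numpy as np

rng = np.random.default_rng(0)
Ni, No = 4, 4          # pixels in the input / output fields-of-view (illustrative)
N = Ni * No            # diffractive neurons; the paper's exactness condition is N >= Ni * No

# Random complex stand-ins for the propagation operators (assumption of this sketch).
H1 = (rng.standard_normal((N, Ni)) + 1j * rng.standard_normal((N, Ni))) / np.sqrt(2 * N)
H2 = (rng.standard_normal((No, N)) + 1j * rng.standard_normal((No, N))) / np.sqrt(2 * N)

# Arbitrary complex-valued target transformation (No x Ni).
A = rng.standard_normal((No, Ni)) + 1j * rng.standard_normal((No, Ni))

# Realized transform: A' = H2 @ diag(t) @ H1, i.e. A'[m, p] = sum_n H2[m, n] t[n] H1[n, p].
# This is linear in t, so vec(A') = M @ t with M[m*Ni + p, n] = H2[m, n] * H1[n, p].
M = (H2[:, None, :] * H1.T[None, :, :]).reshape(No * Ni, N)

# Data-free design: Moore-Penrose pseudoinverse solve for the transmittances.
t = np.linalg.pinv(M) @ A.reshape(-1)

A_opt = H2 @ np.diag(t) @ H1
print("relative error:", np.linalg.norm(A_opt - A) / np.linalg.norm(A))  # near machine precision here
```

With N = N_i × N_o the system matrix M is square and generically invertible, which is why the design reaches negligible error once N ≥ N_i × N_o; for smaller N the pseudoinverse returns the least-squares-optimal t at the cost of a larger transformation error. A physical surface is passive, so in practice t would additionally be rescaled so that |t| ≤ 1, which uniformly scales the diffraction efficiency rather than changing the transformation itself.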
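The data-driven alternative described in the abstract can be sketched the same way: the output error is quadratic in t, so the transmission coefficients can be fitted from examples of input/output field pairs. The snippet below uses a plain Wirtinger-gradient descent as a hedged stand-in for the deep-learning toolchain the authors used, with the same assumed random H1 and H2 as above.

```python
import numpy as np

rng = np.random.default_rng(1)
Ni, No = 4, 4
N = Ni * No

# Random complex stand-ins for the propagation operators (assumption of this sketch).
H1 = (rng.standard_normal((N, Ni)) + 1j * rng.standard_normal((N, Ni))) / np.sqrt(2 * N)
H2 = (rng.standard_normal((No, N)) + 1j * rng.standard_normal((No, N))) / np.sqrt(2 * N)

# Target transform and training examples of input/output field pairs.
A = rng.standard_normal((No, Ni)) + 1j * rng.standard_normal((No, Ni))
X = rng.standard_normal((Ni, 512)) + 1j * rng.standard_normal((Ni, 512))
Y = A @ X

U = H1 @ X                           # fields arriving at the diffractive surface
t = np.zeros(N, dtype=complex)       # transmission coefficients to be learned
lr = 0.5
for _ in range(5000):
    E = H2 @ (t[:, None] * U) - Y    # residual fields at the output FOV
    # Wirtinger gradient of the mean squared output error with respect to conj(t)
    grad = np.einsum('mn,mk,nk->n', H2.conj(), E, U.conj()) / X.shape[1]
    t -= lr * grad

A_opt = H2 @ np.diag(t) @ H1
print("relative error:", np.linalg.norm(A_opt - A) / np.linalg.norm(A))  # shrinks toward zero since N = Ni*No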
format | Online Article Text |
id | pubmed-8463717 |
institution | National Center for Biotechnology Information |
language | English |
publishDate | 2021 |
publisher | Nature Publishing Group UK |
record_format | MEDLINE/PubMed |
spelling | pubmed-8463717 2021-10-08 All-optical synthesis of an arbitrary linear transformation using diffractive surfaces Kulce, Onur; Mengu, Deniz; Rivenson, Yair; Ozcan, Aydogan Light Sci Appl Article Spatially-engineered diffractive surfaces have emerged as a powerful framework to control light-matter interactions for statistical inference and the design of task-specific optical components. Here, we report the design of diffractive surfaces to all-optically perform arbitrary complex-valued linear transformations between an input (N_i) and output (N_o), where N_i and N_o represent the number of pixels at the input and output fields-of-view (FOVs), respectively. First, we consider a single diffractive surface and use a matrix pseudoinverse-based method to determine the complex-valued transmission coefficients of the diffractive features/neurons to all-optically perform a desired/target linear transformation. In addition to this data-free design approach, we also consider a deep learning-based design method to optimize the transmission coefficients of diffractive surfaces by using examples of input/output fields corresponding to the target transformation. We compared the all-optical transformation errors and diffraction efficiencies achieved using data-free designs as well as data-driven (deep learning-based) diffractive designs to all-optically perform (i) arbitrarily-chosen complex-valued transformations including unitary, nonunitary, and noninvertible transforms, (ii) 2D discrete Fourier transformation, (iii) arbitrary 2D permutation operations, and (iv) high-pass filtered coherent imaging. Our analyses reveal that if the total number (N) of spatially-engineered diffractive features/neurons is ≥ N_i × N_o, both design methods succeed in all-optical implementation of the target transformation, achieving negligible error. However, compared to data-free designs, deep learning-based diffractive designs are found to achieve significantly larger diffraction efficiencies for a given N, and their all-optical transformations are more accurate for N < N_i × N_o. These conclusions are generally applicable to various optical processors that employ spatially-engineered diffractive surfaces. Nature Publishing Group UK 2021-09-24 /pmc/articles/PMC8463717/ /pubmed/34561415 http://dx.doi.org/10.1038/s41377-021-00623-5 Text en © The Author(s) 2021 https://creativecommons.org/licenses/by/4.0/ Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit https://creativecommons.org/licenses/by/4.0/. |
spellingShingle | Article; Kulce, Onur; Mengu, Deniz; Rivenson, Yair; Ozcan, Aydogan; All-optical synthesis of an arbitrary linear transformation using diffractive surfaces
title | All-optical synthesis of an arbitrary linear transformation using diffractive surfaces |
title_full | All-optical synthesis of an arbitrary linear transformation using diffractive surfaces |
title_fullStr | All-optical synthesis of an arbitrary linear transformation using diffractive surfaces |
title_full_unstemmed | All-optical synthesis of an arbitrary linear transformation using diffractive surfaces |
title_short | All-optical synthesis of an arbitrary linear transformation using diffractive surfaces |
title_sort | all-optical synthesis of an arbitrary linear transformation using diffractive surfaces |
topic | Article |
url | https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8463717/ https://www.ncbi.nlm.nih.gov/pubmed/34561415 http://dx.doi.org/10.1038/s41377-021-00623-5 |
work_keys_str_mv | AT kulceonur allopticalsynthesisofanarbitrarylineartransformationusingdiffractivesurfaces AT mengudeniz allopticalsynthesisofanarbitrarylineartransformationusingdiffractivesurfaces AT rivensonyair allopticalsynthesisofanarbitrarylineartransformationusingdiffractivesurfaces AT ozcanaydogan allopticalsynthesisofanarbitrarylineartransformationusingdiffractivesurfaces |