Jacques Carolan, Masoud Mohseni, Jonathan P. Olson, Mihika Prabhu, Changchen Chen, Darius Bunandar, Murphy Yuezhen Niu, Nicholas C. Harris, Franco N. C. Wong, Michael Hochberg, Seth Lloyd & Dirk Englund

*doi:10.1038/s41567-019-0747-6*

**Abstract:**

A promising route towards the demonstration of near-term quantum advantage (or supremacy) over classical systems relies on running tailored quantum algorithms on noisy intermediate-scale quantum machines. These algorithms typically involve sampling from probability distributions that—under plausible complexity-theoretic conjectures—cannot be efficiently generated classically. Rather than determining the computational features of output states produced by a given physical system, we investigate what features of the generating system can be efficiently learnt given direct access to an output state. To tackle this question, here we introduce the variational quantum unsampling protocol, a nonlinear quantum neural network approach for verification and inference of near-term quantum circuit outputs. In our approach, one can variationally train a quantum operation to unravel the action of an unknown unitary on a known input state, essentially learning the inverse of the black-box quantum dynamics. While the principle of our approach is platform independent, its implementation will depend on the unique architecture of a specific quantum processor. We experimentally demonstrate the variational quantum unsampling protocol on a quantum photonic processor. Alongside quantum verification, our protocol has broad applications, including optimal quantum measurement and tomography, quantum sensing and imaging, and ansatz validation.

A new method determines whether circuits are accurately executing complex operations that classical computers can’t tackle.

**Related Links:**

How to verify that quantum chips are computing correctly (MIT News)

Variational quantum unsampling on a quantum photonic processor (Nature Physics)

*Artistic illustration of solving complex problems with a photonic circuit. The background pattern shows a network of interferometers able to perform arbitrary unitary matrix multiplication. By encoding the problem data into the weights of the optical matrix and letting an optical signal evolve through the optical circuit, one can find the state minimizing the energy of the associated problem (the solution). In the case of the Ising problem, the solution is a given distribution of spins that can only take binary values. Image courtesy of the researchers.*

Many of the most challenging optimization problems encountered in various disciplines of science and engineering, from biology and drug discovery [1] to routing and scheduling [2], can be reduced to NP-complete problems. Intuitively speaking, NP-complete problems are “hard to solve” because the number of operations required by the best known algorithms grows exponentially with the problem size. The ubiquity of NP-complete problems has led to the development of dedicated hardware (such as optical annealing and quantum annealing machines like “D-Wave”) and specialized algorithms (heuristics such as simulated annealing).

Recently, there has been growing interest in solving these hard combinatorial problems with optical machines. These machines consist of a set of optical transformations imparted to an optical signal, such that after some amount of computation the signal encodes the solution to the problem. Such machines could benefit from the fundamental advantages of optical hardware integrated in silicon photonics, such as low loss, parallel processing, optical passivity at low optical powers, and the robust scalability enabled by industrial fabrication processes. However, compact and fast photonic hardware, together with dedicated algorithms that optimally exploit its capabilities, has been lacking.

Today, the path to solving NP-complete problems with integrated photonics is open due to the work of Charles Roques-Carmes, Dr. Yichen Shen, Cristian Zanoci, Mihika Prabhu, Fadi Atieh, Dr. Li Jing, Dr. Tena Dubček, Chenkai Mao, Miles Johnson, Prof. Vladimir Čeperić, Prof. Dirk Englund, Prof. John Joannopoulos, and Prof. Marin Soljačić from MIT and the Institute for Soldier Nanotechnologies, published in *Nature Communications* [3]. In this work, the MIT team developed an algorithm dedicated to solving the well-known NP-complete Ising problem with photonics hardware.

Originally proposed to model magnetic systems, the Ising model describes a network of spins that can point only up or down. Each spin’s energy depends on its interaction with neighboring spins — in a ferromagnet, for instance, the positive interaction between nearest neighbors will incentivize each spin to align with its closest neighbors. An Ising machine will tend to find the spin configuration that minimizes the total energy of the spin network. This solution can then be translated into the solution of other optimization problems [4].
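As a concrete illustration of the model described above, the sketch below computes the Ising energy of a small spin network and runs a naive single-spin-flip descent. The random coupling matrix `J` and the greedy update rule are illustrative assumptions only, not the photonic algorithm developed in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative symmetric coupling matrix for 6 spins (zero diagonal).
n = 6
J = rng.normal(size=(n, n))
J = (J + J.T) / 2
np.fill_diagonal(J, 0.0)

def ising_energy(spins, J):
    """Ising energy E = -1/2 * sum_ij J_ij s_i s_j, with spins s_i in {-1, +1}."""
    return -0.5 * spins @ J @ spins

def greedy_flips(spins, J, sweeps=20):
    """Flip any single spin whose flip lowers the energy (a crude heuristic)."""
    spins = spins.copy()
    for _ in range(sweeps):
        for i in range(len(spins)):
            trial = spins.copy()
            trial[i] = -trial[i]
            if ising_energy(trial, J) < ising_energy(spins, J):
                spins = trial
    return spins

s0 = rng.choice([-1, 1], size=n)
s = greedy_flips(s0, J)
```

Like the heuristic machines discussed below, such a procedure returns a candidate configuration whose energy is no worse than the starting point, without any guarantee of global optimality.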

Heuristic Ising machines, like the one developed by the MIT team, only yield a candidate solution to the problem (one that is, on average, close to the optimal solution). Algorithms that always find the exact solution, by contrast, are difficult to apply to large problem sizes, as they often have to run for hours, if not days, to terminate. Heuristic algorithms are therefore an attractive alternative to exact algorithms, since they provide fast and cheap solutions to hard problems.

The researchers were guided by their knowledge of fundamental photonics. Professor Marin Soljačić from MIT explains: “Optical computing is a very old field of research. Therefore, we had to identify which recent advances in photonic hardware could make a difference. In other words, we had to identify the value proposition of modern photonics.” Graduate student Charles Roques-Carmes adds: “We identified this value proposition to be: (a) performing fast and cheap fixed matrix multiplication and; (b) performing noisy computation, which means that the result of the computation slightly varies from one run to the other, a little bit like flipping a coin. Therefore, these two elements are the building blocks of our work.”

While developing this algorithm and benchmarking it on various problems, the researchers discovered a variety of related algorithms that could also be implemented in photonics to find solutions even faster. Postdoctoral associate Dr. Yichen Shen is enthusiastic about the prospect of this work: “The field of enhancing computing capability with integrated photonics is currently booming, and we believe this work can be part of it. Since the algorithm we developed optimally leverages the strengths and weaknesses of photonic hardware, we hope it could find some short-term application.” The MIT research team is currently working in collaboration with others towards realizing proof-of-concept experiments and benchmarking their algorithm on photonic hardware, versus other photonic machines and conventional algorithms running on computers.

This work was supported in part by the Semiconductor Research Corporation (SRC) under SRC contract #2016-EP-2693-B (Energy Efficient Computing with Chip-Based Photonics MIT). This work was supported in part by the National Science Foundation (NSF) with NSF Award #CCF-1640012 (E2DCA: Type I: Collaborative Research: Energy Efficient Computing with Chip-Based Photonics). This material is based upon work supported in part by the U.S. Army Research Laboratory and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies, under contract number W911NF-18-2-0048. C. Z. was financially supported by the Whiteman Fellowship. M. P. was financially supported by NSF Graduate Research Fellowship grant number 1122374.

**Contact information:**

Charles Roques-Carmes (chrc@mit.edu), Yichen Shen (ycshen@mit.edu), and Marin Soljačić (soljacic@mit.edu).

[1] For instance, the task of determining the three-dimensional structure of a protein, given its sequence of amino-acids.

[2] For instance, the problem of finding the shortest path connecting many cities.

[3] Charles Roques-Carmes, et al. Heuristic recurrent algorithms for photonic Ising machines. Nat. Commun. 11, 249 (2020). doi:10.1038/s41467-019-14096-z

[4] One such procedure has been proposed by Karp in his seminal paper: Karp, Richard M. “Reducibility among combinatorial problems.” Complexity of computer computations. Springer, Boston, MA, 1972. 85–103.

*DOI: 10.1109/JBHI.2019.2961403*

**Abstract:**

Intracranial pressure (ICP) normally ranges from 5 to 15 mmHg. Elevation in ICP is an important clinical indicator of neurological injury, and ICP is therefore monitored routinely in several neurological conditions to guide diagnosis and treatment decisions. Current measurement modalities for ICP monitoring are highly invasive, largely limiting the measurement to critically ill patients. An accurate noninvasive method to estimate ICP would dramatically expand the pool of patients that could benefit from this cranial vital sign. Methods: This work presents a spectral approach to model-based ICP estimation from arterial blood pressure (ABP) and cerebral blood flow velocity (CBFV) measurements. The model captures the relationship between the ABP, CBFV and ICP waveforms and utilizes a second-order model of the cerebral vasculature to estimate ICP. Results: The estimation approach was validated on two separate clinical datasets, one recorded from thirteen pediatric patients with a duration of around seven hours, and the other recorded from five adult patients, one hour and 48 minutes in duration. The algorithm was shown to have an accuracy (mean error) of 0.4 mmHg and −1.5 mmHg, and a precision (standard deviation of the error) of 5.1 mmHg and 4.3 mmHg, in estimating mean ICP (range of 1.3 mmHg to 24.8 mmHg) on the pediatric and adult data, respectively. These results are comparable to previous results and within the clinically relevant range. Additionally, the accuracy and precision in estimating the pulse pressure of ICP on a beat-by-beat basis were found to be 1.3 mmHg and 2.9 mmHg respectively. Conclusion: These contributions take a step towards realizing the goal of implementing a real-time noninvasive ICP estimation modality in a clinical setting, to enable accurate clinical-decision making while overcoming the drawbacks of the invasive ICP modalities.

Yi Yang, Di Zhu, Wei Yan, Akshay Agarwal, Mengjie Zheng, John D. Joannopoulos, Philippe Lalanne, Thomas Christensen, Karl K. Berggren & Marin Soljačić

*DOI: 10.1038/s41586-019-1803-1*

**Abstract:**

The macroscopic electromagnetic boundary conditions, which have been established for over a century1, are essential for the understanding of photonics at macroscopic length scales. Even state-of-the-art nanoplasmonic studies2–4, exemplars of extremely interface-localized fields, rely on their validity. This classical description, however, neglects the intrinsic electronic length scales (of the order of ångström) associated with interfaces, leading to considerable discrepancies between classical predictions and experimental observations in systems with deeply nanoscale feature sizes, which are typically evident below about 10 to 20 nanometres5–10. The onset of these discrepancies has a mesoscopic character: it lies between the granular microscopic (electronic-scale) and continuous macroscopic (wavelength-scale) domains. Existing top-down phenomenological approaches deal only with individual aspects of these omissions, such as nonlocality11–13 and local-response spill-out14,15. Alternatively, bottom-up first-principles approaches—for example, time-dependent density functional theory16,17—are severely constrained by computational demands and thus become impractical for multiscale problems. Consequently, a general and unified framework for nanoscale electromagnetism remains absent. Here we introduce and experimentally demonstrate such a framework—amenable to both analytics and numerics, and applicable to multiscale problems—that reintroduces the electronic length scale via surface-response functions known as Feibelman d parameters18,19. We establish an experimental procedure to measure these complex dispersive surface-response functions, using quasi-normal-mode perturbation theory and observations of pronounced nonclassical effects. We observe nonclassical spectral shifts in excess of 30 per cent and the breakdown of Kreibig-like broadening in a quintessential multiscale architecture: film-coupled nanoresonators, with feature sizes comparable to both the wavelength and the electronic length scale. Our results provide a general framework for modelling and understanding nanoscale (that is, all relevant length scales above about 1 nanometre) electromagnetic phenomena.

During my freshman year I considered both physics and electrical engineering as possible majors of study. I thoroughly enjoyed the former in high school and an acquaintance spoke highly of the latter. Introductory courses in both fields proved interesting and challenging.

The instructors in physics were particularly logical in their solutions of problems that I had struggled to understand, and I aspired to gain this degree of mastery over sensible, coherent thought processes. Physics was displayed as a wonderful expression of our world, made wholly comprehensible through sequential reasoning. Instructors in electrical engineering engaged my attention through the immediate application of new concepts to practical, hands-on projects. The seemingly effortless transition they made between classroom and workshop was a skill I respected and wished to develop in myself.

Electrical engineering and physics were equally absorbing subjects of study to me. Both required disciplined reasoning and intuition, and so I eventually chose to major in both. My favorite classes announced themselves to me immediately. These were the signal processing courses – Signals and Systems (6.003), Signals, Systems, and Inference (6.011), and Discrete-Time Signal Processing (6.341) – none of which I found effortless. Although I was adept at manipulating the mathematics of the courses, it was only after having completed all three that I finally gained the intuitive understanding I sought. I could now apply the concepts to ideas beyond homework exercises. That there is pleasure to be found in learning about sampling theory and filter banks is still a wonder to me.

It was very fortunate timing that in my senior year, Professors Al Oppenheim and Randy Davis sought an undergraduate student to assist in a signal processing project. It concerned the detection and measurement of essential tremor through the use of a digitizing pen. The further objective was to discriminate between essential and Parkinsonian tremor, thereby contributing to a more reliable diagnosis of Parkinson’s disease. I was asked to help in the design of the bandpass filter and the digital differentiator necessary for the detection of essential tremor. It was gratifying to now apply my coursework to a practical problem, and our work in collaboration with others was eventually patented. These months led to a continued mentorship by Professor Oppenheim, as I completed a Master of Engineering thesis in the following year.
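A rough sketch of that kind of tremor-detection front end is given below: a windowed-sinc bandpass filter followed by a first-difference differentiator, in pure NumPy. The sampling rate, the 4–12 Hz tremor band, and the filter design are all illustrative assumptions, not the filters actually used in the project:

```python
import numpy as np

fs = 200.0           # assumed pen sampling rate (Hz)
lo, hi = 4.0, 12.0   # approximate essential-tremor band (Hz), for illustration
numtaps = 101

# Windowed-sinc FIR bandpass: difference of two Hamming-windowed low-pass
# sinc kernels with cutoffs at the band edges.
k = np.arange(numtaps) - (numtaps - 1) / 2

def lowpass(fc):
    return (2 * fc / fs) * np.sinc(2 * fc / fs * k) * np.hamming(numtaps)

h_bp = lowpass(hi) - lowpass(lo)

# Synthetic pen trace: slow drawing motion plus a small 8 Hz tremor.
t = np.arange(0.0, 5.0, 1.0 / fs)
x = np.sin(2 * np.pi * 0.5 * t) + 0.2 * np.sin(2 * np.pi * 8.0 * t)

tremor = np.convolve(x, h_bp, mode="same")  # isolate the tremor band
velocity = np.diff(tremor) * fs             # first-difference differentiator
```

The filtered signal retains the 8 Hz tremor component while suppressing the slow drawing motion, and the differentiator converts the band-limited displacement into a velocity estimate.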

**What problem are you trying to solve with your current research and what are some possible applications?**

My current research draws on my background in both physics and signal processing. Professor Oppenheim encourages his students to think creatively, to find “solutions in search of problems” rather than “problems in search of solutions.” I have found that in an intellectual sense, I thrive in this environment which dismisses rigid boundaries. The necessary regimentation of undergraduate homework assignments has now been replaced by the freedom to contemplate projects that excite me.

This attitude toward research led me to study a topic for my masters thesis – classical receiver operating characteristic (ROC) curves – that is decades old and initially appeared to offer no new areas of exploration. An ROC curve displays the tradeoff between the probabilities of detection and false alarm (also sometimes called the true positive and false positive rates, or the sensitivity and specificity) of a classical binary hypothesis testing system. It was initially developed for use in radar systems during World War II. These curves have since seen expanded use in a broad range of applications including signal detection in noisy environments, screening and diagnostic algorithms in medical tests, and damage detection in airplanes and ships.
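A minimal sketch of how an empirical ROC curve of the kind described above is generated, assuming Gaussian score distributions under the two hypotheses (an illustrative choice, not data from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Scores under the two hypotheses: noise only (H0) vs. signal present (H1).
scores_h0 = rng.normal(0.0, 1.0, 5000)
scores_h1 = rng.normal(1.5, 1.0, 5000)

# Sweeping a decision threshold traces out the ROC curve: each threshold
# yields one (false-alarm probability, detection probability) operating point.
thresholds = np.linspace(-4.0, 6.0, 201)
pfa = np.array([(scores_h0 > th).mean() for th in thresholds])
pd = np.array([(scores_h1 > th).mean() for th in thresholds])

# Area under the curve via the trapezoid rule (pfa decreases with threshold).
auc = np.sum((pfa[:-1] - pfa[1:]) * (pd[:-1] + pd[1:]) / 2)
```

Each threshold choice is one deterministic decision rule; the questions studied in the thesis concern when such an empirically generated curve is, or can be made, optimal.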

We discovered that while the theoretical results surrounding optimal ROC curves are well established and well understood, it is often the case that in practice, the ROC curves themselves are generated experimentally in a non-optimal way. We derived a condition under which an experimental ROC curve is optimal, even if generated in a non-optimal manner. We also developed a constructive procedure for the generation of such an optimal ROC curve from a non-optimal one, with minimal knowledge of the underlying data. These results were presented at and published in the proceedings of a major international signal processing conference.

I am now immersed in my doctoral studies and over the past year and a half, the direction of my research has steadily emerged: signal processing in the context of quantum mechanics. It began when we became intrigued by the problem of binary quantum state detection, which is closely related to the topic of my masters thesis, but in a quantum setting. I am currently exploring how this work expands into other topics such as quantum state estimation, quantum sensing, entanglement concentration, quantum control theory, classical frame theory, and generalized uncertainty principles.

Our most recent publication relating to this research details results regarding the operating characteristics of quantum binary hypothesis testing systems. Two types of operating characteristics are discussed. The first is the direct analogue of a classical ROC curve, while the second does not have a commonly used classical analogue. We further derive a generalization of previous results concerning the correspondence between positive operator-valued measures (POVMs) and classical Parseval frames. POVMs are used in quantum mechanics to model the measurement of a quantum particle.
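As a small worked example of the POVM–frame correspondence mentioned here, the sketch below uses the standard qubit “trine” measurement (an illustrative textbook construction, not one from the publication): the rank-one POVM elements sum to the identity, and the correspondingly scaled vectors form a Parseval frame.

```python
import numpy as np

# Three "trine" qubit states at 60-degree spacing in the real plane.
angles = [0.0, np.pi / 3, 2 * np.pi / 3]
psi = [np.array([np.cos(a), np.sin(a)]) for a in angles]

# Rank-one POVM elements E_k = (2/3)|psi_k><psi_k| sum to the identity.
E = [(2.0 / 3.0) * np.outer(v, v) for v in psi]
assert np.allclose(sum(E), np.eye(2))

# Measurement probabilities for the state |0> = (1, 0): p_k = <0|E_k|0>.
p = [Ek[0, 0] for Ek in E]

# The scaled vectors f_k = sqrt(2/3) psi_k form a Parseval frame:
# ||x||^2 = sum_k <f_k, x>^2 for every real vector x.
f = [np.sqrt(2.0 / 3.0) * v for v in psi]
x = np.array([0.3, -0.7])
assert np.isclose(sum(np.dot(fk, x) ** 2 for fk in f), np.dot(x, x))
```

The completeness relation guarantees that the measurement probabilities for any state sum to one, which is exactly the Parseval identity in frame-theoretic language.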

**What are your future plans?**

After graduation, I plan to continue my research in the signal processing field, ideally in the same type of creative and collaborative environment that now supports me. Several mentors have told me that the research culture of the old Bell Laboratories was exactly this, so I will use it as a touchstone for any future employment. An area of future research that I am eager to explore is experimental work with quantum particles, as this would greatly improve my practical understanding of the problems I am currently studying. But as I have learned well from Professor Oppenheim, any beginning starts with the erasure of boundaries around research possibilities.

As electric motors become more ubiquitous in our everyday lives, found in just about everything we use from automobiles to kitchen appliances to IoT-connected and smart devices, it’s more important than ever to understand the machine characteristics, modern control techniques, and associated interactions with electronic drives that power these objects. Computer-based tools for estimating machine parameters and performance can remarkably speed up a designer’s understanding of when different control and machine design assumptions are applicable, and how gracefully these assumptions fail as performance limits are approached.

Megan Yamoah, Billy Woltz, and Francisca Vasconcelos of the Engineering Quantum Systems Group awarded Rhodes Scholarships

**Related Links:**

Engineering Quantum Systems Group


Friday, October 18, 2019

10:00 AM

Grier A, 34-401A

Professor Jacob B. Khurgin

Johns Hopkins University

Hosted by: Professor Kevin O’Brien

Some myths and realities in nanophotonics:

(1) Excited carriers in metals: from icy cold to comfortably warm to scalding hot

The field of plasmonics has in recent years experienced a certain shift in priorities. Faced with the indisputable fact that loss in metal structures cannot be avoided, or even mitigated (at least not in the optical and near-IR range), the community has turned its attention to applications where loss may not be an obstacle and, in fact, can be put to productive use. Such applications include photodetection, photocatalysis, and others where the energy of plasmons is expended on the generation of hot carriers in the metal. Hot carriers are characterized by short lifetimes, hence it is important to understand their generation, transport, and relaxation thoroughly in order to ascertain the viability of the many proposed schemes involving them.

In this talk we shall investigate the genesis of hot carriers in metals by treating rigorously, within a single quantum framework, all four principal mechanisms responsible for their generation: interband transitions, phonon- and defect-assisted intraband processes, carrier–carrier-scattering-assisted transitions, and Landau damping. For each of these mechanisms we evaluate the generation rate as well as the energy (effective temperature) and momenta (directions of propagation) of the generated hot electrons and holes. We show that as the energy of the incoming photons increases towards the visible range, electron–electron-scattering-assisted absorption becomes important, with dire consequences for prospective “hot electron” devices: the four carriers generated in the absorption of a single photon can at best be characterized as “lukewarm” or “tepid”, as their kinetic energies may be too small to overcome the potential barrier at the metal boundary. Similarly, as the photon energy shifts further towards the blue, interband absorption becomes the dominant mechanism, and the holes generated in the d-shell of the metal can at best be characterized as “frigid” due to their low velocity. It is the Landau damping process, occurring in metal particles smaller than 10 nm, that is the most favorable one for the production of truly “hot” carriers actually directed towards the metal interface.

We also investigate the relaxation processes causing rapid cooling of carriers. Based on our analysis we make predictions about performance characteristics of various proposed plasmonic devices.

(2) Non-magnetic optical isolators: what works and what does not?

The optical isolator is a key component of photonic circuits and systems. An optical isolator requires non-reciprocal propagation, i.e., breaking time-inversion symmetry. Time symmetry cannot be broken in a linear optical system without a magnetic field and/or gain and loss, hence all practical isolators at this point are based on the Faraday (magneto-optic) effect, which makes it difficult to develop isolators for planar integrated photonic circuits. Therefore, in recent years a strong effort has been mounted to develop non-magnetic isolators. A number of schemes have been proposed and demonstrated, such as devices with temporal modulation, acousto-optic and opto-mechanical isolators, various nonlinear schemes, and parity-time schemes with gain and loss.

In this talk we review performance characteristics of all these schemes and find them lacking any advantages in comparison to magnetic isolators. Most of the proposed schemes are severely limited in bandwidth and require high power consumption. Moreover, often they are not true optical isolators but are “optical diodes” in the sense that they do not offer full isolation.

We then make a case for optical isolators based on second- and third-order nonlinearities, which offer good isolation and high dynamic range, and present a detailed analysis of this exciting family of devices.


Wednesday, October 16, 2019

11:00 AM

Haus Room, 36–428

“Quiet Light and Integrated Ultra-Narrow Linewidth SBS Lasers”

Daniel J. Blumenthal

Professor ECE

Director, Terabit Optical Ethernet Center (TOEC)

University of California at Santa Barbara

Hosted by Professor Dirk Englund

Abstract:

Optical sources with near-perfect linewidths and frequency stability approaching that of an atomic transition have ushered in the era of “quiet light.” These spectrally pure, ultra-stable sources serve as the heart of large-scale precision high-end scientific experiments used in timekeeping, positioning, quantum science and spectroscopy, yet have been relegated to the table-top. In this talk the basics of quiet light, the limiting sources of noise and drift, and how such light is measured and characterized will be briefly discussed. The generation and stabilization of quiet light using a new class of chip-scale integrated stimulated Brillouin scattering (SBS) lasers, capable of sub-Hz fundamental linewidth emission, together with frequency reference cavities will be described. Reduction of the integral linewidth and close-to-carrier noise using miniature stabilization cavities will be described, as will improvement of the long-term frequency drift and fractional frequency stability to unprecedented levels for photonic integrated lasers. Applications of quiet light and these sources will be described, including atomic cooling, ultra-low-noise microwave generation, and the ARPA-E-funded FRESCO project on energy-efficient, high-capacity coherent communications.

**Daniel J. Blumenthal** received the Ph.D. degree from the University of Colorado, Boulder (1993), the M.S.E.E. from Columbia University (1988) and the B.S.E.E. from the University of Rochester (1981). He is a Professor in the Department of ECE at UCSB, Director of the Terabit Optical Ethernet Center (TOEC), and heads the Optical Communications and Photonics Integration (OCPI) group (ocpi.ece.ucsb.edu). Dr. Blumenthal is co-founder of Packet Photonics Inc. and Calient Networks and holds 23 patents. He has published over 460 papers in the areas of optical communications, ultra-narrow-linewidth integrated lasers, optical gyros, InP and ultra-low-loss silicon nitride waveguide photonic integration, nano-photonic devices and microwave photonics. He is co-author of *Tunable Laser Diodes and Related Optical Sources* (New York: IEEE–Wiley, 2005) and has published in the Proceedings of the IEEE.

Dr. Blumenthal is a Fellow of the National Academy of Inventors (NAI) and a Fellow of the IEEE and the Optical Society of America. He has served on the Board of Directors for National LambdaRail (NLR) and as an elected member of the Internet2 Architecture Advisory Council. He is a recipient of a Presidential Early Career Award for Scientists and Engineers (PECASE), a National Science Foundation Young Investigator Award (NYI) and an Office of Naval Research Young Investigator Program (YIP) Award, and has served on numerous program committees, including OFC and Photonics in Switching, and as guest editor of multiple IEEE Journal special issues.
