**RLE Profile: Paul Azunre**

Paul Azunre was a member of RLE for six years, first as a graduate student and then as a postdoc. He recently published his Ph.D. and postdoc work. We discuss his research and the career trajectory that has taken him from his native Ghana to MIT, and then to his own startup company.

*Tell us about your research topic and why you picked it.*

I was originally motivated by trying to find ways to improve optical system design. In particular, I was interested in apparently simple problems such as how to design coatings on lenses and other optical components to reduce stray reflections. Each of these optical systems is individually very important – reducing unwanted reflections of sunlight is crucial in solar cells, to pick just one example. But they are also representative of a much broader class of design problems that quickly become intractable as the complexity increases.

*What is the fundamental challenge?*

When I opened state-of-the-art books on optical system design, I noticed that most, if not all, referred to the design process as “more an art than a science”. The reason is that most design tools available at the time were (and still are today) very dependent on an initial solution provided by an expert designer, which the software tool could then improve or refine. So an experienced user could get very good results by initializing the tool at a good starting point, drawing on past work and an intuition for what kind of designs work well for the problem, whereas another designer might get nothing nearly as good from the same tool. We set out to understand and define, in rigorous mathematical terms, why this happens and whether we could take some of this uncertainty and guesswork out of the design of optical systems.

From a modern algorithmic perspective, what is happening is that the optical models rarely, if ever, have nice structure to them – technically speaking, they are *nonconvex*, which means that the design space is littered with ‘trick’ solutions – designs that look like they might be optimal but are in fact not the best design that can be achieved. In technical jargon, each ‘trick’ solution is a “local minimum” – the best solution closest to the initial guess provided by the designer – which may be much worse than the “global optimum”, *i.e.*, the best possible design. Of course, then, the initial guess is everything – and therein lies the “art” component of the design process.
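To make this concrete, here is a minimal sketch using a toy objective of my own choosing (not an actual optical model): a simple nonconvex function with two minima. A local search such as plain gradient descent slides downhill to whichever minimum is nearest the starting guess – it cannot escape its basin, so two designers starting from different guesses get very different “optimal” designs.

```python
def f(x):
    # A toy nonconvex objective with two minima (illustrative only).
    return x**4 - 3*x**2 + x

def grad(x):
    # Derivative of f.
    return 4*x**3 - 6*x + 1

def local_search(x0, lr=0.01, steps=2000):
    # Plain gradient descent: moves downhill from the initial guess and
    # settles into the nearest local minimum -- it cannot escape its basin.
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

good = local_search(-1.0)   # initial guess in the global minimum's basin
bad = local_search(+1.0)    # initial guess near an inferior local minimum
# Same algorithm, different starting points: f(good) is far lower than f(bad).
```

Both runs converge (the gradient vanishes at each endpoint), yet only the first finds the global optimum – exactly the dependence on the initial guess described above.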

*What did you do?*

The next question we asked ourselves was – could we design an algorithm that takes nonconvexity into account and *guarantees* finding the global optimum for a class of optical systems? By far the most popular algorithmic framework that can achieve this is called Branch-and-Bound (BandB) and has been known since the 1960s. This framework systematically divides up the design space and tries to figure out which regions can be immediately excluded because every possible design in them is already worse than the best known solution. BandB is mathematically rigorous and, in principle, can be used to prove that a particular solution is optimal.
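As an illustration of the framework – a toy one-dimensional sketch of my own construction, not the actual optical-design algorithm from the paper – the following repeatedly splits the search interval, keeps the best design found so far, and discards any region whose lower bound proves it cannot contain anything better:

```python
import heapq

def f(x):
    # Toy nonconvex objective (illustrative only, not an optical model).
    return x**4 - 3*x**2 + x

L_CONST = 45.0  # |f'(x)| = |4x^3 - 6x + 1| <= 45 on [-2, 2]

def lower_bound(lo, hi):
    # Bounding function: f cannot fall below this value anywhere in [lo, hi].
    return min(f(lo), f(hi)) - L_CONST * (hi - lo) / 2

def branch_and_bound(lo=-2.0, hi=2.0, tol=1e-6):
    best_x, best_f = lo, f(lo)
    heap = [(lower_bound(lo, hi), lo, hi)]   # best-first queue of regions
    while heap:
        lb, a, b = heapq.heappop(heap)
        if lb > best_f - tol:                # prune: region cannot help
            continue
        mid = (a + b) / 2
        if f(mid) < best_f:                  # improve the incumbent design
            best_x, best_f = mid, f(mid)
        for c, d in ((a, mid), (mid, b)):    # branch: split the region
            child_lb = lower_bound(c, d)
            if child_lb < best_f - tol:
                heapq.heappush(heap, (child_lb, c, d))
    # On exit, every discarded region provably holds nothing better than
    # best_f - tol, so best_f is within tol of the global minimum.
    return best_x, best_f
```

The guarantee comes entirely from the bounding function: a pruned region is not merely skipped but mathematically certified to contain no better design.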

*So if that approach has been known for so long, why hasn’t it been used yet?*

As is often the case, the devil hides in the details. BandB is only a general framework, and for any particular problem various components of it can take significant mathematical effort to develop. A case in point is something called a “bounding function”, which for any particular class of problems can take years to discover, formulate correctly as a mathematical theorem, and prove. Beyond that, these algorithms are significantly more computationally expensive than other methods, since the algorithm needs to intelligently explore the whole space to be sure of finding the global optimum, and so will likely need to be massively parallelized to be practical – another significant effort, this time from a software development perspective. Given the current ubiquity and low cost of parallel computing resources, we didn’t feel this was an obstacle to the practicality of the approach from a cost angle. Note that these skills – distributed software engineering, mathematical analysis – are not typically found in optical system engineers. I set out to close this apparent chasm between the fields.
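To illustrate what a bounding function must deliver – again with a toy objective of my own choosing standing in for an optical model – here is a simple Lipschitz-type bound and a numerical check of its defining property: it must underestimate the objective everywhere in a region, and tighten as the region shrinks.

```python
import random

def f(x):
    # Toy objective standing in for an optical-system model (illustrative only).
    return x**4 - 3*x**2 + x

def lower_bound(lo, hi, lip=45.0):
    # Lipschitz-type bounding function: if |f'| <= lip on [lo, hi], then f can
    # drop at most lip * (half-width) below the better endpoint value.
    return min(f(lo), f(hi)) - lip * (hi - lo) / 2

# The defining property: the bound underestimates f everywhere in the region.
lo, hi = -2.0, 2.0
lb = lower_bound(lo, hi)
for _ in range(10_000):
    x = random.uniform(lo, hi)
    assert f(x) >= lb

# And it tightens as the region shrinks, which is what makes pruning possible.
assert lower_bound(-1.4, -1.2) > lb
```

For a toy polynomial this bound is one line; for a realistic optical model, deriving and proving such a bound is exactly the multi-year mathematical effort described above.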

The next five years of my Ph.D. work were supported by the US Department of Energy via the Excitonics Energy Frontier Research Center. I studied and picked up mathematical analysis, formulated and proved some theorems, developed parallel algorithms on the Amazon AWS cloud, picked a problem in thin-film optics that I cared deeply about – reducing reflection from a solar panel to improve its overall efficiency – and, with the help of some great people (Joel Jean notably, another graduate student at the time), experimentally demonstrated the resulting optical design in the MTL and RLE laboratories. For the very first time, we had shown that it is possible to find the globally optimal design for a practical optical system with a guarantee of global optimality. What should be stressed is that this was a small-scale proof of concept – we did not solve the hardest practical optical design problems, but rather the easiest ones, which is the natural sequence of scientific progression.

*Congratulations on the publication! What are your next steps?*

As I was graduating, I proposed my work to the MIT Deshpande Center, which recognized its commercialization potential, and I was thus able to fund my postdoc. Ultimately, I earned a Green Card (I am from Ghana, West Africa) and spun a startup business – Algorine Inc. – out of MIT and the Deshpande Center Incubator, carrying out a variety of consulting projects spanning business analytics, wind-energy optimization, and artificial intelligence. Algorine is headquartered in Austin, Texas – a location selected for its rapidly emerging startup scene – and has a branch in my native Ghana, where we try to leverage AI to help local businesses. Notably, I helped the Department of Defense (DARPA) on a couple of projects, as Principal Investigator, to develop bleeding-edge Automated Machine Learning (AutoML) techniques and next-generation chatbots that engage attackers in networks (cyber-security). These experiences made me realize 1) how machine learning can make the optimization algorithms even more efficient – by analyzing algorithm behavior on specific problem classes, the algorithm can accumulate “experience” and do better with time, something called “Learning to Optimize” in that field – and 2) how many more potential application areas existed, from vehicle routing to financial modeling.

*I am sure that your success hasn’t come without a lot of effort. Do you have any advice for students beginning their careers?*

Research into truly transformative topics is an inherently difficult problem. No one has done what you are trying to do before, and the buck stops with you. That can be terrifying. But it can also be extremely exciting, with the right attitude – which you get to choose. Remember to do something every day to advance the status of your project, even when it seems to be in vain. Desist from comparing yourself to anyone else – every journey is unique, with a unique starting point and a unique destination. Be helpful and kind to each other along this journey – it really does come around.

**Related Links:**

Guaranteed global optimization of thin-film optical systems (IOP Science New Journal of Physics)

MIT-Harvard Center for Excitonics