Special Solid State & Optics Seminar Series
sponsored by “The Flint Fund Series on Quantum Devices and Nanostructures”
Tuesday, February 11, 2020
Mann Engineering Student Center - Room 107
(located in Dunham Lab – 10 Hillhouse Ave)
Prof. Daniel Brunner
Optics Department, FEMTO-ST Institute, 15B Avenue des Montboucons, 25030 Besançon, France
Towards scalable neural network processors
In recent years, neural networks have significantly shifted the limits of what is computationally possible. Today, these algorithms offer a practical approach to solving complex computational problems that were out of reach of classical algorithmic programming. Inspired by the human brain, neural networks mold their neurons’ nonlinear transformations into a computational result according to the network’s connections. In such a setting, most programming corresponds to modifying connections, simply by comparing the network’s current output to the correct result available from example data.
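The learning principle described above can be sketched in a minimal, hypothetical example: a single nonlinear neuron whose connection weights are adjusted by comparing its output to the correct result from example data (a delta-rule-style update). The toy task (learning logical OR) and all names are illustrative, not part of the speaker's work.

```python
import math
import random

random.seed(0)
weights = [random.uniform(-0.5, 0.5) for _ in range(3)]  # two inputs + bias

def neuron(x):
    # nonlinear transformation of the weighted inputs
    s = weights[0] * x[0] + weights[1] * x[1] + weights[2]
    return math.tanh(s)

# example data: input pairs and their correct results (logical OR)
examples = [([0, 0], 0.0), ([0, 1], 1.0), ([1, 0], 1.0), ([1, 1], 1.0)]
lr = 0.5  # learning rate

# period of learning: repeatedly compare output to target and adjust connections
for _ in range(2000):
    for x, target in examples:
        error = target - neuron(x)        # compare current output to correct result
        weights[0] += lr * error * x[0]   # strengthen/weaken each connection
        weights[1] += lr * error * x[1]
        weights[2] += lr * error          # bias acts as a connection to a constant input
```

After training, the neuron's output is low for input [0, 0] and high for the other three inputs, reproducing the OR function; no explicit program for OR was ever written, only connections were adjusted.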
The hardware equivalent of such computation is, quite literally, a large network of nonlinear elements whose connections are modified during a period of learning. Such a neuromorphic processor differs strongly from the von Neumann computing architecture, and integrating the large number of parallel network connections in a physical substrate is a fundamental challenge that remains to be solved. Beyond these technological hurdles, neuromorphic hardware raises numerous fundamental questions concerning stability, reproducibility, and how learning is best embedded in a physical system. I will discuss what might be lacking in today’s approach to the challenge and present our results on the photonic integration of parallel neural networks, noise propagation, and learning algorithms.
Host: Prof. Hui Cao