Caltech Celebrates 30 Years of its Computation and Neural Systems Option

Sep 7, 2017 | Alumni News

It began small, with a single class.

For the 1981–82 academic year, three giants on Caltech’s faculty—Richard Feynman, Carver Mead (BS ’56, MS ’57, PhD ’60), and John Hopfield—teamed up to co-teach a yearlong course called “The Physics of Computation.” The course was designed to unite their respective fields—physics, engineering, and biology—in exploring the relationship between nanoscale physics, computation, and brain function.

Though the class was only taught for three years, it became the seed of a larger effort to create a program that would combine computer and brain research. Within a few years, then-provost Rochus E. (Robbie) Vogt (now the R. Stanton Avery Distinguished Service Professor and Professor of Physics, Emeritus) had blessed the creation of a new interdivisional program, with Hopfield as its first chair.

Computation and Neural Systems (CNS) at Caltech explores the relationship between the physical structure of a computational system and the dynamics of its operation, as well as the computational problems that it can efficiently solve. CNS research applies both to the design of engineered autonomous intelligent systems and to understanding computation in biological brains.

The option attracted funding and support from the System Development Foundation, the Ralph M. Parsons Foundation, and the Pew Charitable Trusts, which were critical to its early success. More importantly, it attracted top PhD candidates from across the globe. Caltech faculty founded the Annual Conference on Neural Information Processing Systems (NIPS), which has grown exponentially since its modest beginnings; in 2016 it brought together about 6,000 scientists and engineers interested in machine learning and neural systems.

In the intervening three decades, the option has grown into a vibrant community of scholars that includes 29 affiliated faculty members and has produced more than 100 PhDs. One-third of those doctoral recipients have gone on to positions in academia, while others hold prominent positions in industry. Many former students returned to their alma mater for a 30th-anniversary celebration of the option, held on August 11 in the Beckman Institute Auditorium.

Hosted by Pietro Perona, the Allen E. Puckett Professor of Electrical Engineering in Caltech’s Division of Engineering and Applied Science, the all-day event included speakers from throughout CNS’s history. It wrapped up with a reception at a private residence near campus.

Perona told the audience at the symposium that, despite CNS’s success, its faculty members never rest on their laurels; they regularly reevaluate whether to continue the option and how to evolve its scope to keep it intellectually vibrant.

“You should always consider shutting things down … and so every five years or so the faculty discuss this point,” he said. “But if you think of the top challenges in science and engineering right now, intelligent autonomous machines is one of the top challenges in engineering, while brain and behavior is one of the great challenges in science. CNS addresses both. I think that is a very good reason to keep going.”

Perona concluded his own remarks by announcing that he would be stepping down as the executive officer of CNS at the end of the month, turning leadership of the option over to Thanos Siapas, professor of computation and neural systems in the Division of Biology and Biological Engineering.

Other speakers at the event reflected on the history, experiences, and promise of CNS. Here are just a few of the thoughts they shared:

Gabriel Kreiman (MS ’02, PhD ’02), associate professor at Harvard Medical School, on the passion for learning and discovery that he found at Caltech:

“[At Caltech] I realized that scientific enthusiasm can be contagious, and can make almost any topic interesting. … And that’s the thing today that I’m hearing from many people. It’s part of the spirit of CNS. The sheer intellectual pleasure of finding things out. Of trying to discover how things work. The intellectual freedom to get together and go with all the other CNS people to the Athenaeum to have lunch and then spend three hours discussing the minutiae of one particular problem. Or staying until the wee hours in one of the rooms where we have all of the computers and working together and fighting together about absolutely every problem in neuroscience and computational neuroscience. … The magic, the spark of what happened here at CNS was completely unique.”

Thanos Siapas, sharing memories from John Hopfield, who was unable to attend, on how difficult it was to get the option approved, with some arguing that its goals could be achieved by redefining the requirements of existing options and that it might not be in the best long-term interests of students:

“These opinions were shared by a lot of people. … Eventually it happened because of some not-so-subtle, top-down influencing from the provost. … There are many ways to slice and dice science, but there’s no reason to rely on how things were done in the 1940s, or in 2010, for that matter.”

Casimir Wierzynski (PhD ’09), autonomy lead at Qualcomm Research, on the future of CNS-related work:

“It’s very exciting that neuroscience has now inspired very successful computation systems. … We are at the dawn of a new and exciting era for CNS. The CNS ideal of engineering informing neuroscience and vice versa is ready to go exponential once again.”

Peter Welinder (PhD ’12), researcher and engineer at OpenAI, on the breakdown of silos in an interdisciplinary field like CNS:

“We got this sense that there were no boundaries between departments. I didn’t really care about EE or biology or physics; they all kind of blended together. I think that’s also Pietro being my adviser. He’s in [electrical engineering], but is doing a lot of computer science-related stuff. We kept on having neuroscience discussions and talks in the labs; it was very inspiring.”

Carver Mead, Gordon and Betty Moore Professor of Engineering and Applied Science, Emeritus, discussing a suggestion by Wierzynski that there is a convergent evolution of artificial neural networks and biological brains:

“There are similarities in the physics and the constraints aren’t that different. And if you think about it a while, it’s probably true that the lessons that are getting learned by people actually trying to make optimal chips for doing deep learning are probably not so different from the evolutionary lessons that [shaped] the development of animal brains over millions of years. I’m extremely excited by the things we’ve heard today. I think it’s true that the fields we bring together in CNS really do synergize. The goals aren’t so different. Because to build something you have to understand it. And if you understand it, you can build it. That’s a saying that Dick Feynman got from me.”

More information about the CNS option can be found online at cns.caltech.edu.

Written by Robert Perkins

Contact:

Robert Perkins

(626) 395-1862

rperkins@caltech.edu
