Robert Strauss Spends Summer as Student Researcher on Machine Learning Systems in PetaVision Lab

Robert Strauss/Courtesy photo

BY CARRIE TALUS

During the summer of 2023, Robert Strauss worked on machine learning systems as a student researcher in the PetaVision Lab. On this project, Strauss worked with Garrett Kenyon, a division leader in Computer, Computational and Statistical Sciences at Los Alamos National Laboratory.

PetaVision is an open-source, object-oriented neural simulation toolbox optimized for high-performance multi-core, multi-node computer architectures. It is intended for computational neuroscientists who seek to apply neuromorphic models to hard signal-processing problems. At the PetaVision Lab, much of the research focuses on using sparse solvers to tackle hard problems in neuromorphic computing, such as depth reconstruction and image/action classification.

During his time at the PetaVision Lab, Strauss has been researching Hopfield energy models as attractor-based machine learning systems. He has been focusing on the possibility of using these models as a new type of artificial intelligence (AI), one that can be trained and run on ultra-fast, ultra-parallel, energy-efficient neuromorphic hardware, and even trained with a biologically plausible local learning rule called Equilibrium Propagation (EP).
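For readers unfamiliar with the idea, here is a minimal sketch of a classical Hopfield network (an illustration of the general concept, not the lab’s actual code): patterns are stored in a symmetric matrix of recurrent connection weights, and a corrupted input is cleaned up by repeatedly updating units so that an energy function can only decrease, until the state settles into an attractor near a stored memory.

```python
# Minimal sketch of a classical Hopfield network (illustrative only;
# not the PetaVision Lab's code). Patterns live in a symmetric weight
# matrix, and recall works by descending an energy function.
import numpy as np

rng = np.random.default_rng(0)

def store(patterns):
    """Hebbian storage: W = (1/N) * sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    """Hopfield energy E(s) = -1/2 s^T W s; updates never increase it."""
    return -0.5 * s @ W @ s

def recall(W, s, steps=2000):
    """Asynchronous updates: flip one unit at a time toward lower energy."""
    s = s.copy()
    for _ in range(steps):
        i = rng.integers(len(s))
        s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

# Store two random +/-1 patterns, then recover one from a corrupted cue.
patterns = rng.choice([-1.0, 1.0], size=(2, 64))
W = store(patterns)
cue = patterns[0].copy()
cue[:16] *= -1.0                      # corrupt a quarter of the bits
out = recall(W, cue)
print("bits recovered:", int((out == patterns[0]).sum()), "of 64")
print("energy decreased:", energy(W, out) < energy(W, cue))
```

This attractor behavior, where a noisy input falls into the basin of the nearest stored memory, is what makes these models interesting for the robustness questions discussed below.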

Strauss prefers Hopfield energy models to most current AI, which runs on GPUs drawing hundreds of watts, takes many hours to train and run, and works nothing like an actual biological brain.

Working at PetaVision over the summer, Strauss says, “I’ve learned to create simulations of these energy models, and experimented with adding lateral connections. I’m now working on integrating Hopfield models and sparse solvers, with the hope of compounding the robustness of each.” He says his colleagues, Siddarth Mansingh in particular, discovered that in addition to being much faster and more efficient, these Hopfield energy models are noticeably more robust due to their attractor-based nature.

Another benefit is resistance to adversarial attacks. While traditional AI can be fooled by random-looking pixel-level changes added to an image, adversarial attacks on a Hopfield model rarely work: this kind of system is more semantic, and a much larger perturbation is needed to fool it. No one wants to use a self-driving car that might inexplicably do the wrong thing when a few pixels in its image are changed by dust on a lens. Although Hopfield models are much better in this respect, they are still easier to fool than we’d like; the attacks that succeed are semantic, but still slight. This motivates Strauss’s effort to create a Hopfield LCAnet, a neural network preceded by a layer of sparse coding. Unlike a traditional neural net, a Hopfield model could fully couple to the sparse coding layer. He hopes this is a step forward for the robustness of AI.
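For readers curious what a sparse-coding front end looks like, below is a minimal sketch of the Locally Competitive Algorithm (LCA), the family of sparse solvers behind the “LCA” in LCAnet. The toy dictionary, parameter values, and test signal here are assumptions for illustration, not the lab’s actual configuration.

```python
# Minimal sketch of the Locally Competitive Algorithm (LCA) for sparse
# coding, the kind of front end an LCAnet places before a classifier.
# Illustrative only; dictionary and parameters are toy assumptions.
import numpy as np

rng = np.random.default_rng(1)

def soft_threshold(u, lam):
    """Shrink small values to exactly zero, producing a sparse code."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def lca(x, D, lam=0.1, tau=0.05, steps=200):
    """Approximately solve min_a 0.5*||x - D a||^2 + lam*||a||_1.

    u holds membrane potentials; neurons compete via lateral inhibition.
    """
    G = D.T @ D - np.eye(D.shape[1])   # lateral inhibition (competition)
    b = D.T @ x                        # feedforward drive
    u = np.zeros(D.shape[1])
    for _ in range(steps):
        a = soft_threshold(u, lam)
        u += tau * (b - u - G @ a)     # leaky integration dynamics
    return soft_threshold(u, lam)

# Toy example: a random normalized dictionary, and a signal built from
# a sparse mix of two dictionary elements plus a little noise.
D = rng.normal(size=(32, 64))
D /= np.linalg.norm(D, axis=0)
x = 1.5 * D[:, 3] - 1.0 * D[:, 40] + 0.01 * rng.normal(size=32)
a = lca(x, D)
print("nonzero coefficients:", np.count_nonzero(a), "of 64")
print("reconstruction error:", np.linalg.norm(x - D @ a))
```

Because each input is re-expressed as a sparse combination of dictionary elements before anything downstream sees it, small pixel-level perturbations tend to be absorbed at this stage, which is the robustness Strauss hopes to compound with the Hopfield model’s attractor dynamics.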