Hopfield Networks

EE376A (Winter 2019)

Jordan Alexander

(Note: I’d recommend just checking out the link to my personal site: https://jfalexanders.github.io/me/articles/19/hopfield-networks, the version there has a few very useful side notes, images, and equations that I couldn’t include here)

These days there’s a lot of hype around deep learning. We have these things called “deep neural networks” with billions of parameters that are trained on gigabytes of data to classify images, produce paragraphs of text, and even drive cars. But how did we get here? And why are our neural networks built the way they are? In order to answer the latter, I’ll be giving a brief tour of Hopfield networks, their history, how they work, and their relevance to information theory. By studying a path that machine learning could’ve taken, we can better understand why machine learning looks like it does today.

I. A brief history of artificial neural networks

While neural networks sound fancy and modern, they’re actually quite old. Depending on how loosely you define “neural network”, you could probably trace their origins all the way back to Alan Turing’s later work, Leibniz’s logical calculus, or even the vague notions of Greek automata.

In my eyes, however, the field truly comes into shape with two neuroscientist-logicians: Walter Pitts and Warren McCulloch. These two researchers believed that the brain was some kind of universal computing device that used its neurons to carry out logical calculations. Together, they invented the most commonly used mathematical model of a neuron today: the McCulloch–Pitts (MCP) neuron.

Networks of these neurons can then be trained to approximate mathematical functions, and McCulloch and Pitts believed this would be sufficient to model the human mind. Now, whether an MCP neuron can truly capture all the intricacies of a human neuron is a hard question, but what’s undeniable are the results that came from applying this model to solve hard problems.

The first major success came from David Rumelhart’s group in 1986, who applied the backpropagation algorithm to train multi-layer neural networks and showed that neural networks can learn internal representations of data. Before we examine the results, let’s first unpack the concepts hidden in this sentence: training/learning, backpropagation, and internal representation.

Let’s start with learning. While learning conjures up images of a child sitting in a classroom, in practice, training a neural network just involves a lot of math. At its core, a neural network is a function approximator, and “training” a neural network simply means feeding it data until it approximates the desired function. Sometimes this function is a map from images to the digits 0 through 9, and sometimes it’s a map from blocks of text to blocks of text, but the assumption is that there’s always a mathematical structure to be learned.

Training a neural network requires a learning algorithm. Of these, backpropagation is the most widely used. The basic idea of backpropagation is to train a neural network by giving it an input, comparing the output of the neural network with the correct output, and adjusting the weights based on this error.

Backpropagation allows you to quickly calculate the partial derivative of the error with respect to a weight in the neural network. This roughly corresponds to how “significant” this weight was to the final error, and can be used to determine by how much we should adjust the weight of the neural network. If fed enough data, the neural network learns what weights are good approximations of the desired mathematical function.
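To make this a little more concrete, here is a minimal sketch of the kind of gradient-based update backpropagation performs, reduced to a single linear neuron with a squared-error loss. The data, learning rate, and number of steps are illustrative choices, not anything from the original post:

```python
x, target = 2.0, 6.0   # one training example: input and desired output
w = 0.5                # initial weight (arbitrary starting point)
lr = 0.05              # learning rate: how much to adjust per step

for step in range(50):
    y = w * x           # forward pass: the neuron's prediction
    error = y - target  # how far off the prediction is
    grad = error * x    # partial derivative of 0.5 * error**2 with respect to w
    w -= lr * grad      # nudge the weight against the gradient

print(w)  # converges toward 3.0, since 3.0 * 2.0 = 6.0
```

A full backpropagation implementation does exactly this for every weight in the network, using the chain rule to propagate the error backwards through the layers.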

There’s a tiny detail that we’ve glossed over, though. The original backpropagation algorithm is meant for feed-forward neural networks; that is, for the algorithm to successfully train the neural network, the connections between neurons shouldn’t form a cycle. We call neural networks that do have cycles between neurons recurrent neural networks, and it seems like the human brain should be closer to a recurrent neural network than to a feed-forward one, right? Yet backpropagation still works: researchers later generalized it to handle recurrent neural networks, but its success was somewhat puzzling at the time, and it wasn’t always the obvious choice for training neural networks. So what does that mean for our neural network architectures?

In the present, not much. Regardless of the biological implausibility of backprop, our deep neural networks actually perform quite well with it. But a few decades ago, there was an abundance of alternative architectures and training methods that all seemed equally likely to produce massive breakthroughs.

One of these alternative neural networks was the Hopfield network, a recurrent neural network inspired by associative human memory. The hope for the Hopfield network was that it would be able to build useful internal representations of the data it was given. That is, rather than memorize a bunch of images, a neural network with good
internal representations stores data about the outside world in its own, space-efficient internal language. There are a few interesting concepts related to the storage of information that come into play when generating internal representations, and Hopfield networks illustrate them quite nicely. As for practical uses of Hopfield networks, later in this post we’ll
play around with a Hopfield network to see how effective its own internal representations turned out to be.

II. What is a Hopfield network?

Imagine a neural network that’s designed for storing memories in a way that’s closer to how human brains work, not to how digital hard-drives work. We’d want the network to have
the following properties:

  • Associative: Memories are tied to each other and remembering one thing triggers another memory.
  • Robust: Even when the network is damaged, it still retains a good portion of its functionality. For example, if you were to damage your digital hard drive, it would often be a complete loss, but the human brain, even under traumatic injury, can still retain a significant portion of its functionality.

To make this a bit more concrete, we’ll treat memories as binary strings of B bits (written as ±1 values), and each state of the neural network will correspond to a possible memory. This means that there will be a single neuron for every bit we wish to remember, and in this model, “remembering a memory” corresponds to matching a binary string to the most similar binary string in the list of stored memories. Example: Say you have two memories {1, 1, -1, 1} and {-1, -1, 1, -1}, and you are presented with the input {1, 1, -1, -1}. The desired outcome would be retrieving the memory {1, 1, -1, 1}.
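As a quick sketch of what “matching to the most similar binary string” means computationally, here is the example above in Python. For ±1 strings, a larger dot product with the probe means a smaller Hamming distance, so we just pick the memory with the highest overlap (this only illustrates the matching criterion, not yet a Hopfield network):

```python
import numpy as np

memories = np.array([[ 1,  1, -1,  1],
                     [-1, -1,  1, -1]])
probe = np.array([1, 1, -1, -1])   # the corrupted input from the example

# For +/-1 vectors, overlap (dot product) = B - 2 * (Hamming distance),
# so the memory with the largest overlap is the closest one.
overlaps = memories @ probe
closest = memories[np.argmax(overlaps)]
print(closest)   # [ 1  1 -1  1]
```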

Now, how can we get our desired properties? The first, associativity, we can get by using a novel learning algorithm: Hebbian learning, often distilled into the phrase “neurons that fire together wire together.” In other words, when two neurons activate at the same time, the connection between them is strengthened.

So, for example, if we feed a Hopfield network lots of images of tomatoes, the neurons corresponding to the color red and the neurons corresponding to the shape of a circle will activate at the same time, and the weight between these neurons will increase. If we later feed the network an image of an apple, the neurons corresponding to the color red will activate, which in turn will activate the neuron group corresponding to a circular shape, and we’d say that the network was “reminded” of a tomato.
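In a Hopfield network, the Hebbian rule boils down to something very simple: add up the outer products of the stored patterns, so that any two neurons that tend to agree across memories end up with a positive weight between them. A minimal sketch using the two toy memories from earlier (the normalization and the zeroed diagonal are the usual conventions, included here as assumptions rather than anything stated above):

```python
import numpy as np

memories = np.array([[ 1,  1, -1,  1],
                     [-1, -1,  1, -1]])
n_neurons = memories.shape[1]

# Hebbian rule: neurons that "fire together" across the stored memories get
# their connection strengthened; neurons that disagree get a negative weight.
W = np.zeros((n_neurons, n_neurons))
for pattern in memories:
    W += np.outer(pattern, pattern)
np.fill_diagonal(W, 0)   # no self-connections
W /= n_neurons           # conventional normalization

print(W)
```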

The second property, robustness, we can get by thinking of memories as stable states of the network:

If a certain number of neurons were to change (say, by an accident or a data-corruption event), then the network would update in such a way that returns the changed neurons back to the stable state. These states correspond to local “energy” minima, which we’ll touch on below.

Hopfield networks can be used to retrieve binary patterns from a corrupted binary string by repeatedly updating the network until it reaches a stable state. If the weights of the neural network were trained correctly, we would hope for the stable states to correspond to memories. Intuitively, seeing some fraction of the bits should “remind” the neural network of the remaining bits in the memory, since our weights were adjusted to satisfy the Hebbian principle that “neurons that fire together wire together.”
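Putting the pieces together, here is a small self-contained sketch of that retrieval process: build the weights with the Hebbian rule, then repeatedly set each neuron to the sign of its weighted input until nothing changes. Sweeping over the neurons one at a time is just one common way to schedule the updates, and the energy function at the end is the quantity whose local minima correspond to the stable states mentioned above:

```python
import numpy as np

memories = np.array([[ 1,  1, -1,  1],
                     [-1, -1,  1, -1]])

# Hebbian weights, as in the earlier sketch.
W = sum(np.outer(p, p) for p in memories) / memories.shape[1]
np.fill_diagonal(W, 0)

def energy(W, state):
    """Hopfield "energy": one-at-a-time sign updates never increase it."""
    return -0.5 * state @ W @ state

def recall(W, state, max_sweeps=10):
    state = state.copy()
    for _ in range(max_sweeps):
        changed = False
        for i in range(len(state)):          # update one neuron at a time
            new_value = 1 if W[i] @ state >= 0 else -1
            if new_value != state[i]:
                state[i] = new_value
                changed = True
        if not changed:                      # stable state reached
            break
    return state

corrupted = np.array([1, 1, -1, -1])
print(energy(W, corrupted))                 # the corrupted pattern sits at higher energy
restored = recall(W, corrupted)
print(restored, energy(W, restored))        # settles on [ 1  1 -1  1], at lower energy
```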

Now that we know how Hopfield networks work, let’s analyze some of their properties.

III. Storing information in Hopfield networks

Hopfield networks might sound cool, but how well do they work? To answer this question we’ll explore the capacity of our network (I’d highly recommend visiting https://jfalexanders.github.io/me/articles/19/hopfield-networks for the version with proper LaTeX support). The idea of capacity is central to the field of information theory because it’s a direct measure of how much information a neural network can store. For example, in the same way a hard drive with higher capacity can store more images, a Hopfield network with higher capacity can store more memories. To give a concrete definition of capacity: if we assume that the memories of our neural network are randomly chosen, allow a certain tolerance for memory corruption, and fix a satisfactory probability of correctly remembering each pattern in our network, how many memories can we store?

To answer this question we’ll model our neural network as a communication channel. We’re trying to encode N memories into W weights in such a way that prevents:

  • The corruption of individual bits
  • Stable states that do not correspond to any memories in our list

Example: Say you have the same two memories {1, 1, -1, 1} and {-1, -1, 1, -1}, and you are presented with the input {-1, 1, -1, 1}. The desired outcome would be retrieving the memory {1, 1, -1, 1}, the stored memory most similar to the input.

We can use the standard approximation for the area under the Gaussian tail to bound the maximum number of memories that a neural network can reliably retrieve. Using methods from statistical physics, we can also model the capacity when we allow a certain percentage of the memories to be corrupted. Finally, if you wanted to go even further, you could get some additional gains by using the Storkey rule for updating weights, or by minimizing an objective function that measures how well the network stores memories.
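To give a flavor of that Gaussian-tail argument (this is the textbook signal-to-noise sketch, not the full statistical-physics treatment): if p random ±1 patterns are stored in N neurons with the Hebbian rule, the “crosstalk” from the other patterns acting on each bit is approximately Gaussian with variance p/N, so the chance that any given bit gets flipped is roughly Q(√(N/p)), the area under the Gaussian tail. The numbers below are purely illustrative:

```python
from math import erfc, sqrt

def gaussian_tail(x):
    """Q(x): the area under the standard Gaussian to the right of x."""
    return 0.5 * erfc(x / sqrt(2))

N = 1000                        # number of neurons (illustrative)
for p in (50, 100, 150, 250):   # number of stored random memories
    bit_error = gaussian_tail(sqrt(N / p))
    print(f"p = {p:3d}  estimated per-bit error ~ {bit_error:.5f}")
```

Driving this per-bit error down to something negligible is what forces the number of reliably stored memories to grow more slowly than the number of neurons, roughly N divided by a logarithmic factor rather than linearly in N.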

IV. What’s next?

Well, unfortunately, not much. Despite some interesting theoretical properties, Hopfield networks are far outpaced by their modern counterparts. But that doesn’t mean their development wasn’t influential! Research into Hopfield networks was part of a larger
program that sought to mimic different components of the human brain, and the idea that networks should be recurrent, rather than feed-forward, lived on in the deep recurrent neural networks used today for natural language processing.

Outreach

For the outreach portion of the project, I explained the basics of how neural networks store information through my own blog post and a few articles on distill.pub about machine learning interpretability and feature visualization. The focus of my project was letting the kids play around with neural networks to understand how they generate “internal representations” of the data being fed to them, coupled with a high-level explanation of what this meant.

External Links

For a more detailed blog post, with some visualizations and equations, check out my other blog post on my personal site: https://jfalexanders.github.io/me/articles/19/hopfield-networks
