How the brain forms and retrieves memories – new study

Try to remember that last dinner you went out for. Perhaps you can remember the taste of that delicious pasta, the sounds of the jazz pianist in the corner, or that boisterous laugh from the portly gentleman three tables over. What you probably can’t remember is putting any effort into remembering any of these little details.

Somehow, your brain has rapidly processed the experience and turned it into a robust, long-term memory without any serious effort from yourself. And, as you reflect on that meal today, your brain has generated a high-definition movie of the meal from memory, for your mental viewing pleasure, in a matter of seconds.

Undoubtedly, our ability to create and retrieve long-term memories is a fundamental part of the human experience – but we still have lots to learn about the process. For instance, we lack a clear understanding of how different brain regions interact to form and retrieve memories. But our recent study sheds new light on this phenomenon by showing how neural activity in two distinct brain regions interacts during memory retrieval.

The hippocampus, a structure located deep within the brain, has long been seen as a hub for memory. The hippocampus helps “glue” parts of the memory together (the “where” with the “when”) by ensuring that neurons fire together. This is often referred to as “neural synchronisation”. When the neurons that code for the “where” synchronise with the neurons that code for the “when”, these details become associated through a phenomenon known as “Hebbian learning”.

But the hippocampus is simply too small to store every little detail of a memory. This has led researchers to theorise that the hippocampus calls upon the neocortex – a region which processes complex sensory details such as sound and sight – to help fill in the details of a memory.

The neocortex does this by doing the exact opposite of what the hippocampus does – it ensures that neurons do not fire together. This is often referred to as “neural desynchronisation”. Imagine asking an audience of 100 people for their names. If they synchronise their response (that is, they all scream out at the same time), you’re probably not going to understand anything. But if they desynchronise their response (that is, they take turns speaking their names), you’re probably going to gather a lot more information from them. The same is true for neocortical neurons – if they synchronise, they struggle to get their message across, but if they desynchronise, the information comes across easily.
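Synchronisation and desynchronisation of this kind can be quantified. One standard measure (a general tool, not something specific to our study) is the phase-locking value: it is near 1 when two signals keep a fixed phase relation and near 0 when their phases drift apart. A minimal sketch with made-up oscillation parameters:

```python
import numpy as np

t = np.arange(0, 2, 1 / 500)        # 2 s sampled at 500 Hz
rng = np.random.default_rng(0)

phase_a = 2 * np.pi * 8 * t          # phase of an 8 Hz rhythm
phase_sync = phase_a + 0.3           # constant lag: synchronised
# drifting (random-walk) phase offset: desynchronised
phase_desync = phase_a + np.cumsum(rng.standard_normal(t.size))

def plv(p1, p2):
    """Phase-locking value of two phase time series."""
    return np.abs(np.mean(np.exp(1j * (p1 - p2))))

print(round(plv(phase_a, phase_sync), 2))   # 1.0 (locked)
print(plv(phase_a, phase_desync) < 0.5)     # True (drifting)
```

The same statistic, applied to the phases of recorded brain signals, is one way researchers put numbers on "neurons firing together".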

Our research found that the hippocampus and neocortex do in fact work together when recalling a memory. The hippocampus synchronises its activity to glue parts of the memory together and, later, to help recall the memory. Meanwhile, the neocortex desynchronises its activity to help process information about the event and, later, about the memory.

Of cats and bicycles

We tested 12 epilepsy patients between 24 and 53 years of age. All had electrodes placed directly within the brain tissue of their hippocampus and neocortex as part of the treatment for their epilepsy. During the experiment, patients learned associations between different stimuli (such as words, sounds and videos), and later recalled these associations. For example, a patient may be shown the word “cat” followed by a video of a bicycle being ridden down a street.

The patient would then try to create a vivid link between the two (perhaps the cat riding the bike) to help them remember the association between the two items. Later, they would be presented with one of the items and asked to recall the other. We then examined how the hippocampus interacted with the neocortex while the patients were learning and recalling these associations.

During learning, neural activity in the neocortex desynchronised and then, around 150 milliseconds later, neural activity in the hippocampus synchronised. Seemingly, information about the sensory details of the stimuli was first being processed by the neocortex, before being passed to the hippocampus to be glued together.


Fascinatingly, this pattern reversed during retrieval – neural activity in the hippocampus first synchronised and then, around 250 milliseconds later, neural activity in the neocortex desynchronised. This time, it appeared that the hippocampus first recalled a gist of the memory and then began to ask the neocortex for the specifics.

Our findings support a recent theory which suggests that a desynchronised neocortex and synchronised hippocampus need to interact to form and recall memories.


While brain stimulation has become a promising method for boosting our cognitive faculties, it has proved difficult to stimulate the hippocampus to improve long-term memory. The key problem has been that the hippocampus is located deep within the brain and is difficult to reach with brain stimulation that is applied from the scalp. But the findings from this study present a new possibility. By stimulating the regions in the neocortex that communicate with the hippocampus, perhaps the hippocampus can be indirectly pushed to create new memories or recall old ones.

Understanding more about how the hippocampus and neocortex work together when forming and recalling memories could be important for developing new technologies that improve memory – both for people with cognitive impairments such as dementia, and for the population at large.

A breakthrough method that became vital to neuroscience

Originally developed to record currents of ions flowing through channel proteins in the membranes of cells, the patch-clamp technique has become a true stalwart of the neuroscience toolbox.

Information in the brain is thought to be encoded as complex patterns of electrical impulses generated by thousands of neuronal cells. Each impulse, known as an action potential, is mediated by currents of charged ions flowing through a neuron’s membrane. But how the ions pass through the insulated membrane of the neuron remained a puzzle for many years. In 1976, Erwin Neher and Bert Sakmann developed the patch-clamp technique, which showed definitively that currents result from the opening of many channel proteins in the membrane [1]. Although the technique was originally designed to record tiny currents, it has since become one of the most important tools in neuroscience for studying electrical signals — from those at the molecular scale to the level of networks of neurons.

By the 1970s, current flowing through the cell was generally accepted to result from the opening of many channels in the membrane, although the underlying mechanism was unknown. At that time, current was commonly recorded by impaling tissue with a sharp electrode — a pipette with a very fine point. Unfortunately, however, the signal recorded in this way was excessively noisy, and so only the large, ‘macroscopic’ current — the collective current mediated by many different types of channel — that flows through the tissue could be resolved.

In 1972, Bernard Katz and Ricardo Miledi [2], pioneers of the biology of the synaptic connections between cells, managed to infer from the macroscopic current certain properties of the membrane channels, but only after a heroic effort to exclude all possible confounding factors. The problem was that the macroscopic current could be influenced by factors not directly related to channel activity, such as cell geometry and modulatory processes that regulate cell excitability. Also troublesome was that interpretations of macroscopic-current features were based on unverified assumptions about the statistics of individual channel activity [2,3]. Despite Katz and Miledi’s careful analyses, there was a lingering doubt about whether their conclusions were correct. The crucial data were obtained by Neher and Sakmann using patch clamp.

The patch-clamp technique is conceptually rather simple. Instead of impaling the cells, a pipette with a relatively large diameter is pressed against the cell membrane. Under the right conditions, the pipette tip ‘bonds’ with the membrane, forming a tight seal. This substantially reduces the noise compared with that encountered using sharp electrodes, because the small patch of membrane encompassed by the pipette tip is electrically isolated from the rest of the cell’s membrane and from the environment surrounding the cell (Fig. 1).

Figure 1 | The patch-clamp technique used at different scales. a, Neher and Sakmann [1] developed the cell-attached patch-clamp technique. An electrode (a fine pipette) is pressed against a ‘patch’ of the cell membrane so that ion currents (red dotted arrow) passing through channel proteins in the patch under the electrode can be recorded. In the whole-cell configuration, the patch is ruptured so that the whole-cell macroscopic current (blue dotted arrow), which represents the summed currents from the entire cell, can be recorded. b, Simultaneous whole-cell recordings from different parts of a neuron can determine, for example, the direction of travelling signals. c, Whole-cell recordings can be made from a small network of connected neurons. d, Whole-cell recording can even be made in the brains of animals performing a task or walking around freely.

The tiny currents passing through the few channels in the patch were thus observed for the first time. The recording confirmed key channel properties: when channels open, there is a step-like jump in the current trace and, when they close, a step-like drop back to baseline. It was now possible to determine details such as the statistics of the opening and closing of channels, the amplitude of the currents they mediate and the optimal stimuli that trigger their opening. For this work, Neher and Sakmann were awarded the 1991 Nobel Prize in Physiology or Medicine.
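The step-like currents described above are easy to picture with a toy model. The sketch below simulates a hypothetical two-state (closed/open) channel as a discrete-time Markov chain; the transition probabilities and current amplitude are invented for illustration, not measured values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-state channel: closed <-> open, with illustrative switching
# probabilities per time step (not fitted to any real channel).
p_open = 0.02    # P(closed -> open)
p_close = 0.10   # P(open -> closed)
i_open = 2.0     # single-channel current when open (pA), for illustration
n_steps = 5000

state = 0        # 0 = closed, 1 = open
current = np.empty(n_steps)
for t in range(n_steps):
    if state == 0 and rng.random() < p_open:
        state = 1
    elif state == 1 and rng.random() < p_close:
        state = 0
    current[t] = i_open * state   # the trace is either 0 pA or 2 pA

# The recording only ever shows two levels -- the step-like jumps and
# drops that single-channel patch-clamp data revealed.
print(sorted(set(current.tolist())))   # [0.0, 2.0]
print(current.mean())                  # average current reflects open probability
```

From traces like this one can read off exactly the quantities the text mentions: opening and closing statistics, current amplitude, and how they change with stimulation.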

Improvements in patch clamp made it feasible to study channels in various preparations [4] to finally address long-standing questions. There was particular interest in verifying a model for action-potential generation [5] proposed by Nobel laureates Alan Hodgkin and Andrew Huxley in the 1950s. Specific predictions of the model could now be tested directly by examining the current through individual channels and by observing the changes in current that occur when the molecular structure of the channel is modified [6]. Ultimately, the model was shown to be mostly correct and remains the gold standard for computational neuroscientists today.
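The Hodgkin–Huxley model itself can be sketched in a few dozen lines. Below is a minimal forward-Euler simulation using the standard textbook squid-axon parameters; the time step and stimulus current are chosen for illustration:

```python
import numpy as np

# Classic Hodgkin-Huxley parameters (squid giant axon, textbook values)
C, gNa, gK, gL = 1.0, 120.0, 36.0, 0.3     # uF/cm^2 and mS/cm^2
ENa, EK, EL = 50.0, -77.0, -54.4           # reversal potentials, mV
dt, T, I_ext = 0.01, 50.0, 10.0            # ms, ms, uA/cm^2 (illustrative)

# Standard voltage-dependent rate functions for the gating variables
def a_m(V): return 0.1 * (V + 40.0) / (1.0 - np.exp(-(V + 40.0) / 10.0))
def b_m(V): return 4.0 * np.exp(-(V + 65.0) / 18.0)
def a_h(V): return 0.07 * np.exp(-(V + 65.0) / 20.0)
def b_h(V): return 1.0 / (1.0 + np.exp(-(V + 35.0) / 10.0))
def a_n(V): return 0.01 * (V + 55.0) / (1.0 - np.exp(-(V + 55.0) / 10.0))
def b_n(V): return 0.125 * np.exp(-(V + 65.0) / 80.0)

V = -65.0                                  # resting potential, mV
m = a_m(V) / (a_m(V) + b_m(V))             # gates start at steady state
h = a_h(V) / (a_h(V) + b_h(V))
n = a_n(V) / (a_n(V) + b_n(V))

trace = []
for _ in range(int(T / dt)):
    INa = gNa * m**3 * h * (V - ENa)       # sodium current
    IK = gK * n**4 * (V - EK)              # potassium current
    IL = gL * (V - EL)                     # leak current
    V += dt / C * (I_ext - INa - IK - IL)  # membrane equation (Euler step)
    m += dt * (a_m(V) * (1 - m) - b_m(V) * m)
    h += dt * (a_h(V) * (1 - h) - b_h(V) * h)
    n += dt * (a_n(V) * (1 - n) - b_n(V) * n)
    trace.append(V)

print(max(trace) > 0)   # True: the membrane overshoots 0 mV -- an action potential
```

Patch-clamp recordings of single channels are what allowed the macroscopic gNa and gK terms in equations like these to be traced back to populations of individual channel proteins.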

One of the several variants of patch clamp [4] — the whole-cell configuration — found an audience with neuroscientists studying electrical phenomena in neurons beyond the channel level. To achieve whole-cell recording, the patch of membrane under the electrode is ruptured, enabling electrical access to the cell. Compared with the use of sharp electrodes, whole-cell patch clamp allows much more accurate recordings and, crucially, is less damaging to the cell. This allowed systematic investigation of synergistic processes at the cellular level, such as the regulation of macroscopic currents by modulatory molecules, and interactions between the different types of channel in the neuron.

The relatively large opening created in the cell in the whole-cell configuration also provided access to the cell by chemicals, enabling dyes to be delivered for visualizing intricate cell structures, and RNA to be extracted for gene-expression analysis [7]. Neher’s group examined the sequence of events that underlie the transfer of information between cells by introducing chemicals into the cell and simultaneously tracking the resulting changes in the electrical properties of the cell’s membrane [8].

Whole-cell patch clamp proved ideal for studying the collective properties of neurons and neuronal networks in brain slices maintained in vitro. A challenge in working with more-complex systems such as neuronal networks is that the number of possible confounding factors increases. Sakmann’s solution in the 1990s was to carry out simultaneous whole-cell recording using two or three electrodes, which to some seemed excessive because comparable data could be obtained by sequential recordings using fewer electrodes. However, the rationale was that taking time to design the near-perfect experiment mitigated later difficulties in data interpretation analogous to those faced by Katz and Miledi.

Hence, simultaneous recordings from different parts of the neuron definitively confirmed that action potentials are initiated at one part of the main long neuronal protrusion (the axon) and propagate back to the dendrites (clustered protrusions that receive inputs from other neurons) [9]. The mechanisms that underlie signalling between neurons were directly investigated by placing electrodes on either side of a synaptic connection [10]. Moreover, triple recordings from neurons of different classes uncovered certain basic principles of network organization [11].

The patch-clamp technique is also used to examine cell activities under more natural conditions. To study how sensory stimuli and movements are represented in the brain, experiments must be carried out in living animals. The challenge with this approach, however, is that the slightest movement can dislodge an electrode from the neuron. Whole-cell patch-clamping turns out to be remarkably stable because of the tight seal between the electrode and the membrane. Thus, this technique has permitted recording from dendrites [12] and pairs of neurons [13] in anaesthetized rodents, and even from animals that are able to walk and run [14].

Patch-clamp recording is arguably still the most direct and effective way of studying electrical signals in the brain. The data obtained with this technique essentially represent the ground truth for investigators in many branches of neuroscience, from theorists [15] to translational researchers developing drugs for the treatment of certain brain conditions, including epilepsy [16] and autism spectrum disorder [17].

Moreover, patch clamp complements modern ‘optogenetic’ techniques, which enable control and visualization of the activities of large populations of neurons using light [18]. Emerging technologies, such as prostheses for vision [19], will probably rely heavily on patch-clamp recording to establish the optimal conditions for converting external stimuli into electrical signals. Patch-clamping will clearly remain a vital tool for the neuroscientist in the foreseeable future.


1. Neher, E. & Sakmann, B. Nature 260, 799–802 (1976).

Andrew Anzalone was restless. It was late fall of 2017. The year was winding down, and so was his MD/PhD program at Columbia. Trying to figure out what was next in his life, he’d taken to long walks in the leaf-strewn West Village. One night as he paced up Hudson Street, his stomach filled with La Colombe coffee and his mind with Crispr gene editing papers, an idea began to bubble through the caffeine brume inside his brain.

Crispr, for all its DNA-snipping precision, has always been best at breaking things. But if you want to replace a faulty gene with a healthy one, things get more complicated.

In addition to programming a piece of guide RNA to tell Crispr where to cut, you have to provide a copy of the new DNA and then hope the cell’s repair machinery installs it correctly. Which, spoiler alert, it often doesn’t. Anzalone wondered if instead there was a way to combine those two pieces, so that one molecule told Crispr both where to make its changes and what edits to make. Inspired, he cinched his coat tighter and hurried home to his apartment in Chelsea, sketching and Googling late into the night to see how it might be done.


A few months later, his idea found a home in the lab of David Liu, the Broad Institute chemist who’d recently developed a host of more surgical Crispr systems, known as base editors. Anzalone joined Liu’s lab in 2018, and together they began to engineer the Crispr creation glimpsed in the young post-doc’s imagination. After much trial and error, they wound up with something even more powerful. The system, which Liu’s lab has dubbed “prime editing,” can for the first time make virtually any alteration—additions, deletions, swapping any single letter for any other—without severing the DNA double helix. “If Crispr-Cas9 is like scissors and base editors are like pencils, then you can think of prime editors to be like word processors,” Liu told reporters in a press briefing.

Why is that a big deal? Because with such fine-tuned command of the genetic code, prime editing could, according to Liu’s calculations, correct around 89 percent of the mutations that cause heritable human diseases. Working in human cell cultures, his lab has already used prime editors to fix the genetic glitches that cause sickle cell anemia, cystic fibrosis, and Tay-Sachs disease. Those are just three of more than 175 edits the group unveiled today in a scientific article published in the journal Nature.

The work “has a strong potential to change the way we edit cells and be transformative,” says Gaétan Burgio, a geneticist at the Australian National University who was not involved in the work, in an email. He was especially impressed at the range of changes prime editing makes possible, including adding up to 44 DNA letters and deleting up to 80. “Overall, the editing efficiency and the versatility shown in this paper are remarkable.”

Classic Crispr, the most widely used gene editing tool in rotation, is made up of two parts: a DNA slicing enzyme called Cas9 and a strand of guide RNA that essentially says “cut here, but not here.” Other enzymes can be directed to do different things, like sitting on a gene to turn it off, or unzipping the DNA just a bit and knocking out one letter for another.

Anzalone’s prime editor is a little different. Its enzyme is actually two that have been fused together—a molecule that acts like a scalpel combined with something called a reverse transcriptase, which converts RNA into DNA. His RNA guide is a little different too: It not only finds the DNA in need of fixing, but also carries a copy of the edit to be made. When it locates its target DNA, it makes a little nick, and the reverse transcriptase starts adding the corrected sequence of DNA letter by letter, like the strikers on a typewriter. The result is two redundant flaps of DNA—the original and the edited strand. Then the cell’s DNA repair machinery swoops in to cut away the original (marked as it is with that little nick), permanently installing the desired edit.
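As a loose analogy only, the find–nick–rewrite logic described above can be caricatured as a string operation. Nothing below is biologically accurate; the sequences and the function name are invented for illustration:

```python
# Purely illustrative string analogy for the prime-editing steps described
# in the article: locate the target, nick it, write the edited sequence from
# the template, and let "repair" keep the edited version. Not real biology.

def prime_edit(genome: str, target: str, edited: str) -> str:
    """Replace the first occurrence of `target` with `edited`."""
    nick = genome.find(target)     # the guide portion locates the target site
    if nick == -1:
        return genome              # no matching site: no edit is made
    # The reverse transcriptase "types out" the edited flap letter by letter;
    # repair machinery discards the nicked original, keeping the edit.
    return genome[:nick] + edited + genome[nick + len(target):]

dna = "ATGGTGCACCTGACTCCTGAGGAG"   # made-up sequence for the example
print(prime_edit(dna, "GAGGAG", "GTGGAG"))   # ATGGTGCACCTGACTCCTGTGGAG
```

The real system's extra safeguards (the three pairing steps Liu mentions below) have no counterpart in this toy, which is exactly why single-molecule edits in cells are hard and string edits are easy.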

This technique allows for far more flexibility when editing DNA. Whereas base editors could only make four types of genetic “bit” flips—changing one G-C base pair to an A-T, for example—prime editing can change any letter to any other. Prime editors also appear to make fewer mistakes. “We believe this arises from the fact that prime editing requires three different pairing steps,” says Liu. Crispr-Cas9 only needs one. “If any of those three events fail then prime editing can’t proceed.” But Liu says they still need to test that theory further.

The bigger problem, according to folks like Burgio, is that prime editors are huge, in molecular terms. They’re so big that they won’t pack up neatly into the viruses researchers typically use to shuttle editing components into cells. These colossi might even clog a microinjection needle, making them difficult to deliver into mouse (or potentially human) embryos. That, says Burgio, could make prime editing a lot less practical than existing techniques.

But that’s not stopping Liu from moving forward with plans to bring prime editing to patients. In September, he cofounded a company called Prime Medicine that has licensed the technology from the Broad Institute to develop treatments for genetic disease. Liu’s base editing company, Beam Therapeutics, has also been granted a sublicense for certain fields. Other researchers will be able to freely access the technology using a nonprofit repository called Addgene, where Liu’s team has already placed the DNA blueprints for making prime editors. Still, it will take years for the first human experiments to begin.

As for Anzalone, he’s just astonished at how fast something can move from a flash of curiosity to a working molecular machine in the world of Crispr. “There are things that we can do now that seemed impossible when I began graduate and medical school,” he says. That means there’s plenty of fodder for more evening strolls, but this time in Cambridge, Massachusetts, instead of Manhattan.

How many flights does it take for stair-climbing to qualify as a workout?




When it comes to stair climbing, there’s no denying that the 1,776 steps in the CN Tower present a mighty tall challenge. Thankfully, most workplaces don’t expect their employees to hike 144 flights every morning, even if taking the stairs is encouraged.

So how many flights does it take for stair climbing to qualify as a workout? Most workplace health programs highlight the benefits of the long game, urging employees to opt for the stairs on a regular basis, even if it’s just one flight. If the goal is improved health and longevity, the Harvard Alumni Health Study reported that climbing 10-19 flights a week (two to four flights per day) reduces mortality risk. And a host of other studies have shown that consistently choosing to take the stairs can improve cardiovascular fitness, balance, gait, blood pressure, glucose and cholesterol levels, and support weight loss.

From a strictly physiological standpoint, there’s a lot going on when using the stairs — especially compared with the effort associated with taking the elevator or escalator. Most of the muscles in the lower body are called into action both going up and down the stairs. As for the heart, it’s working hard enough on the ascent to qualify as a vigorous intensity workout, while going downstairs is considered a moderate intensity activity.

But that’s not news to anyone who’s climbed more than a couple of flights at a time. Heavy legs and breathlessness set in early. And if that’s not proof enough of its workout potential, that so many people avoid the stairs in favour of a less strenuous option — like pushing the button for the elevator — is a clear indication of the effort it requires. Yet for those who make a conscious decision to travel from floor to floor on their own steam, the payoff is worth it.


What’s the goal for anyone hoping to realize the health and fitness benefits of taking the stairs? An overview of the research suggests that 30-160 minutes of vigorous stair climbing a week for eight to 12 weeks will boost cardiovascular fitness. But in keeping with the trend toward shorter, more intense workouts, a research team from McMaster University recruited 24 university students to perform a series of short, fast stair intervals. The students climbed three flights of stairs (60 steps) three times a day with one to four hours recovery between bouts — a protocol they followed three days a week for six weeks. With instructions to climb the stairs one step at a time as quickly as possible, using the railings as needed, the stair climbers realized a five-per-cent boost in aerobic fitness.

Another stair-climbing study, also performed by a McMaster University research team, involved two sets of subjects. One group performed 20-second bouts of stair climbing (about three to four storeys) three times, with two minutes recovery between each interval. The second group performed 60-second bouts of repeatedly ascending and descending either one or two flights of stairs, three times with 60 seconds recovery between intervals. The two groups performed their workouts three days a week for six weeks.

The 20-second and 60-second interval workouts resulted in similar heart rate response and fitness gains, though the study subjects preferred the repeated bouts of 20 seconds of stair climbing over the 60-second intervals of continually climbing up and down one or two flights. They claimed to find the quick changes in direction destabilizing.

The McMaster studies add to the fitness options for people looking for another simple, accessible, time efficient workout to help achieve their weekly fitness goals. But to be clear, we’re not talking about the type of stair climbing you do while dressed in business casual. These 10-minute workouts demand a level of intensity that brings on a sweat.

But it’s not just the potential to improve health and fitness that makes stair climbing such a great workout option. Climbing the stairs is a functional day-to-day task that requires balance and agility, both of which deteriorate as the decades add up. The ability to go up and down stairs quickly and with confidence is a task worthy of preserving.

Use a set of stairs at home or at the office that will sustain a climb for a minimum of 20 seconds (about 60 steps) or a single/double flight of stairs that can accommodate quick changes in direction. Then use the stairs on those days when time isn’t on your side. A quick warmup, followed by three x 20 seconds or three x 60 seconds of stair climbing with a short recovery (one to two minutes) between bouts is a great stand-in for more traditional workouts. And when you think you’ve mastered the stairs at work or at home, there’s always the CN Tower.

Neurons hide their memories in their imaginary fluctuations

Noisy brain hides memory-like structures in the noise.

This is your brain. Well, not your brain. Presumably your brain isn’t being photographed at this moment.

The brain is, at least to me, an enigma wrapped in a mystery. People who are smarter than I am—a list that encompasses most humans, dogs, and possibly some species of yeast—have worked out many aspects of the brain. But some seemingly basic things, like how we remember, are still understood only at a very vague level. Now, by investigating a mathematical model of neural activity, researchers have found another possible mechanism to store and recall memories.

We know in detail how neurons function. Neurotransmitters, synapse firing, excitation, and suppression are all textbook knowledge. Indeed, we’ve abstracted these ideas to create black-box algorithms to help us ruin people’s lives by performing real-world tasks.

We also understand the brain at a higher, more structural, level: we know which bits of the brain are involved in processing different tasks. The vision system, for instance, is mapped out in exquisite detail. Yet the intermediate level in between these two areas remains frustratingly vague. We know that a set of neurons might be involved in identifying vertical lines in our visual field, but we don’t really understand how that recognition occurs.

Memory is hard

Likewise, we know that the brain can hold memories. We can even create and erase a memory in a mouse. But the details of how the memory is encoded are unclear. Our basic hypothesis is that a memory represents something that persists through time: a constant of sorts (we know that memories vary with recall, but they are still relatively constant). That means there should be something constant within the brain that holds the memory. But the brain is incredibly dynamic, and very little stays constant.

This is where the latest research comes in: the researchers propose abstract constants that may hold memories.

So, what constants have the researchers found? Let’s say that a group of six neurons is networked via interconnected synapses. The firing of any particular synapse is completely unpredictable. Likewise, its influence on its neighbors’ activity is unpredictable. So, no single synapse or neuron encodes the memory.

But hidden within all of that unpredictability is predictability that allows a neural network to be modeled with a relatively simple set of equations. These equations replicate the statistics of synapses firing very well (if they didn’t, artificial neural networks probably wouldn’t work).

A critical part of the equations is the weighting or influence of a synaptic input on a particular neuron. Each weighting varies with time randomly but can be strengthened or weakened due to learning and recall. To study this, the researchers examined the dynamical behavior of a network, focusing on the so-called fixed points (or set points).

Technically, you have to understand complex numbers to understand set points. But I have a short cut. The world of dynamics is divided into stable things (like planets orbiting the Sun), unstable things (like rocks balanced on pointy sticks), and things that are utterly unpredictable.
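For readers who do want the technical version: the stability of a fixed point is read off the real parts of the (generally complex) eigenvalues of the system's linearisation. A minimal sketch, with made-up matrices standing in for the three cases:

```python
import numpy as np

# For a linear system dx/dt = A @ x, the fixed point at the origin is:
#   - stable if every eigenvalue of A has negative real part
#     (perturbations decay, like a planet settling into orbit),
#   - unstable if any eigenvalue has positive real part
#     (perturbations grow, like the rock on the pointy stick),
#   - marginal otherwise. The matrices below are illustrative only.

def classify(A):
    real_parts = np.linalg.eigvals(A).real
    if np.all(real_parts < 0):
        return "stable"
    if np.any(real_parts > 0):
        return "unstable"
    return "marginal"

stable_A = np.array([[-1.0, -2.0],
                     [ 2.0, -1.0]])   # eigenvalues -1 +/- 2i: damped spiral
unstable_A = np.array([[0.5, 1.0],
                       [0.0, 0.3]])   # eigenvalues 0.5 and 0.3: runaway growth

print(classify(stable_A))     # stable
print(classify(unstable_A))   # unstable
```

The complex (imaginary) parts, which this classification ignores, set the speed of rotation around the fixed point; only the real parts decide stable versus unstable.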

Memory is plastic

Neurons are a weird combination of stable and unpredictable. They have firing rates and patterns that stay within certain bounds, but you can never know exactly when an individual neuron will fire. The researchers show that the characteristic that keeps the network stable does not store information for very long. However, the characteristic that drives unpredictability does store information, and it seems to be able to do so indefinitely.

The researchers demonstrated this by exposing their model to an input stimulus, which they found changed the network’s fluctuations. The longer the model was exposed to the stimulus, the stronger its influence.

The individual pattern of firing was still unpredictable, and there was no way to see the memory in the stimulus in any individual neuron or its firing behavior. Yet it was still there, hidden in the network’s global behavior.
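The idea that a noisy network can carry stable structure in its fluctuations can be illustrated with a toy model (not the one used in the paper): run a small noise-driven recurrent network twice with different noise, and the individual traces disagree while the fluctuation structure (the covariance matrix) is reproducible:

```python
import numpy as np

# Toy noise-driven recurrent network, for illustration only.
n, steps = 6, 50000
rng = np.random.default_rng(1)
W = rng.standard_normal((n, n))
W *= 0.8 / np.max(np.abs(np.linalg.eigvals(W)))   # scale to spectral radius 0.8: stable

def run(seed):
    """Simulate the network with an independent noise stream."""
    noise = np.random.default_rng(seed)
    x = np.zeros(n)
    xs = np.empty((steps, n))
    for t in range(steps):
        x = W @ x + noise.standard_normal(n)      # noisy linear recurrence
        xs[t] = x
    return xs

a, b = run(2), run(3)   # two runs, same network, different noise

# A single neuron's trace is unpredictable: the two runs are uncorrelated...
trace_corr = np.corrcoef(a[:, 0], b[:, 0])[0, 1]
# ...but the network's fluctuation structure is nearly identical across runs.
cov_corr = np.corrcoef(np.cov(a.T).ravel(), np.cov(b.T).ravel())[0, 1]
print(trace_corr, cov_corr)
```

In this sketch the hidden, repeatable thing is the covariance; in the paper's model it is a more elaborate geometric object, but the moral is the same: look at the global statistics, not the individual spikes.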

Further analysis shows that, in terms of the dynamics, there is a big difference between the way memory is encoded in this model and in previous ones. In previous models, memory is a fixed point that corresponds to a particular pattern of neural firing. In this model, memory is a shape. It could be a 2D shape on a plane, as the researchers found in their model. But the dimensionality of the shape could be much larger, allowing very complicated memories to be encoded.

In a 2D model, the neuron-firing behavior follows a limit cycle, meaning that the pattern continuously changes through a range of states that eventually repeats itself, though this is only evident during recall.

Another interesting aspect of the model is that recall has an effect on the memory. Memories recalled by a similar stimulus get weaker in some cases, while in others they are strengthened.

Where to from here?

The researchers go on to suggest that evidence for their model might be found in biological systems. It should be possible to find invariant shapes in neuronal connectivity. However, I imagine that this is not an easy search to conduct. A simpler test is that there should be asymmetry in the strength in the connections between two neurons during learning. That asymmetry should change between learning and rest.

So, yes, in principle the model is testable. But it looks like those tests will be very difficult. We may be waiting a long time to get some results one way or another.

Nature Communications, 2019. DOI: 10.1038/s41467-019-12306-2


Tesla patents custom cooling system for longer-lasting energy storage devices


A significant part of Tesla’s business relies heavily on the durability and longevity of its battery packs, and in the spirit of disruptive innovation, the Silicon Valley-based company has continued to improve its battery technology to make it more durable and more efficient. Tesla has achieved this in several ways, one of which was discussed in a recently published patent application.

It is essential for battery packs, particularly those used for energy storage, to be robust enough to last for a very long time. To accomplish this, battery packs must be able to handle multiple charge and discharge cycles on a regular basis. They must also be able to weather faults in the system, including those that may cause damage to the actual cells in the pack itself.

Such a system was outlined by Tesla in a patent simply titled “Energy Storage System.” Explaining its rationale, the Silicon Valley-based company stated that “cells and other components in a pack generate heat during operation, both during the charging process to store the energy and during the discharge process when energy is consumed.” Tesla further explains that “when the cells fail, they typically release hot gases. These gases may impact the integrity of other cells in the pack and may cause substantial damage to the functional cells which have not failed.”

An illustration of cooling elements within an energy storage system according to certain embodiments of the invention. (Credit: US Patent Office)

With this in mind, Tesla maintains that there is a need to develop an “improved energy storage system” that will be capable of reducing or removing “one or more of the issues mentioned.” Tesla’s patent describes two strategies that could improve its battery packs. One of these involves the use of a novel system that utilizes a cold plate, which could help remove heat generated by the battery pack during use. Heat pipes may also be used together with a cold plate to achieve this purpose.

“In certain embodiments, a cold plate (which provides liquid cooling) may be in thermal connection with the battery cells 100 to further remove heat generated during system use. The cold plate may be in direct thermal contact with the battery cells 100 or, alternatively, one or more layers and/or features may be between the cold plate and the battery cells 100. In certain embodiments, the battery cells 100 are in contact with one or more heat pipes to remove excess heat disposed under the battery cells. A cold plate is disposed below the heat pipe or pipes (on the side of the heat pipe away from the battery cells 100) that helps dissipate the heat contained in the heat pipe.”

“In certain embodiments, the cold plate may be in thermal contact with one side of the cells without any heat pipes disposed between the cells. The cold plate may physically consist of a single plate or multiple plates that are thermally connected to the cells and/or one another. In other embodiments, one or more heat pipes are disposed between the battery cells 100 and a cold plate is disposed below the battery cells 100. The heat pipes and the cold plate may be in thermal connection with one another.”
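To get a feel for the numbers involved in liquid cooling, here is a back-of-the-envelope estimate (all figures below are our assumptions, not values from the patent): the heat a cold plate's coolant loop can carry away is the product of the coolant mass flow rate, its specific heat capacity, and the temperature rise allowed across the plate.

```python
# Sensible-heat balance: Q = m_dot * c_p * delta_T
flow_rate_kg_s = 0.05      # assumed coolant mass flow rate (kg/s)
c_p_j_per_kg_k = 3500.0    # assumed glycol-water heat capacity (J/(kg*K))
delta_t_k = 10.0           # assumed allowed coolant temperature rise (K)

q_watts = flow_rate_kg_s * c_p_j_per_kg_k * delta_t_k
print(q_watts)  # heat removed by the loop, in watts (1750 W here)
```

Heat pipes do not change this budget; they only move heat from the cells to the cold plate more effectively, which is why the patent describes the two being used together.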

An illustration of a cold plate within an energy storage system according to certain embodiments of the invention. (Credit: US Patent Office)

Apart from the use of cold plates, Tesla also described a battery pack with regions that are designed to give way when mechanical failures happen. With such a system, the majority of the cells in a battery pack remain protected even if some cells fail.

“The top plate includes one or more weak areas above the one or more battery cell. The weak areas are regions that have less integrity and thus, where mechanical failure is more likely to occur if a battery cell releases gas. These regions may be physically weaker areas compared to the surrounding areas and may rupture when pressure builds up due to a failed cell. Alternatively, the weak areas may be chemically weaker and preferentially rupture when exposed to the caustic gases released by a failed battery cell. The weak areas may also fail due to a combination of physical and chemical weakening.”

The full text of Tesla’s Energy Storage System patent can be accessed here.

Tesla’s focus on battery integrity in its recently published patent application suggests that the Silicon Valley-based company is looking to develop packs that are capable of lasting a very long time. Such improvements have been teased before, especially in a paper released by Tesla lead battery researcher Jeff Dahn and members of the Department of Physics and Atmospheric Science at Dalhousie University. The cells described in the paper are capable of lasting over 1 million miles on the road, or 20 years if used in grid energy storage.

Looking at these initiatives, as well as the battery pack contingencies outlined in the recently released patent, it appears that Tesla is building up towards creating an ecosystem of products that are capable of lasting decades. This, of course, plays a huge part in pushing Tesla’s overall goal of accelerating the advent of sustainable energy.