https://www.iflscience.com/technology/new-brain-implant-allows-paralyzed-patients-to-surf-the-internet-using-their-thoughts/

New Brain Implant Allows Paralyzed Patients To Surf The Internet Using Their Thoughts

TWO OF THE PATIENTS IN THE TRIAL CHATTING WITH EACH OTHER. BRAINGATE COLLABORATION


Brand-new research has shown that paralyzed patients can control an off-the-shelf tablet using chip implants connected to their brains. The brain-computer interface (BCI) allowed subjects to move a cursor and click using nothing more than their thoughts.

This is an important breakthrough. The three patients suffered from tetraplegia, which made them unable to use their limbs. Two had amyotrophic lateral sclerosis (ALS) and the other had a spinal cord injury. Thanks to this particular BCI, they were able to use email, chat, music, and video-streaming apps. They were able to navigate the web and perform tasks such as online shopping with ease. They could even play a virtual piano. The findings are reported in the journal PLOS ONE.

“It was great to see our participants make their way through the tasks we asked them to perform, but the most gratifying and fun part of the study was when they just did what they wanted to do – using the apps that they liked for shopping, watching videos or just chatting with friends,” lead author Dr Paul Nuyujukian, a bioengineer at Stanford, said in a statement. “One of the participants told us at the beginning of the trial that one of the things she really wanted to do was play music again. So to see her play on a digital keyboard was fantastic.”

The work was done by the BrainGate collaboration, which has worked to make BCIs a reality for many years. The chip is the size of a small pill and is placed in the brain’s motor cortex. The sensor registers neural activity linked to intended movements. This information is then decoded and sent to external devices. The same approach by BrainGate and other groups has allowed people to move robotic limbs.

“For years, the BrainGate collaboration has been working to develop the neuroscience and neuroengineering know-how to enable people who have lost motor abilities to control external devices just by thinking about the movement of their own arm or hand,” said Dr Jaimie Henderson, a senior author of the paper and a Stanford University neurosurgeon. “In this study, we’ve harnessed that know-how to restore people’s ability to control the exact same everyday technologies they were using before the onset of their illnesses. It was wonderful to see the participants express themselves or just find a song they want to hear.”

The approach will allow paralyzed people to communicate more easily with their family and friends. It will also enable them to help their caregivers to make better decisions regarding their ongoing health issues. This technology could dramatically improve the quality of life for many people.

https://www.vox.com/the-goods/2018/11/26/18112631/cyber-monday-amazon-alexa-google-voice-assistant-war

Smart speakers are everywhere this holiday season, but they’re really a gift for big tech companies

Which voice assistant will get the warmest welcome this year?

Everyone knows that there is, each holiday season, a gift that says, “I know nothing about you, but I love you, I mean, you get it.” For a very long time, this present was an iTunes gift card; Apple is the richest company in the world, and I am pretty sure this is exclusively thanks to the fortune it amassed from iTunes gift cards purchased for nephews and hairdressers in the first decade of the millennium. Prior to iTunes gift cards, the gift was maybe a sweater.

Now, I’m sorry to say, the comparable gift is a smart speaker. We keep purchasing them for each other, buying into the fantasy that Siri or Alexa or Google can make someone’s life easier by scheduling their appointments and managing their time and telling them how to put on makeup or make a butternut squash lasagna. Though, at the moment, reports say that people mostly just use them to listen to music, check the weather, and ask “fun questions.”

As nondescript gifts, smart speakers make a lot of sense: Both Amazon and Google have options that are around $50, there is at least some novelty factor that pokes at adults’ memories of receiving toys, and they are far less rude to give than a Fitbit. Plus, for Amazon and Google in particular, with 64.5 percent and 19.6 percent shares in the category, respectively, the point isn’t really to make money off selling hardware. The point is to beat the others at integrating their services into the lives of the population.

In other words: You’re not gifting an Amazon Echo; you’re gifting a relationship with Alexa. Amazon can later sell that relationship to brands that hope Alexa users will order their products with their voice. You’re not gifting a Google Home; you’re gifting a closer entwining with Google Search and all the strange personalized add-ons to Calendar and Maps.

This expansion of the voice assistant ecosystem is crucial to almost every major tech company, far more so than getting sticker price for devices that look like high-end air fresheners, and if you don’t believe me, please peruse the ridiculously marked-down Black Friday and Cyber Monday deals they are all offering this year.

Amazon

According to predictions from the Consumer Technology Association, shoppers are set to spend $96.1 billion on tech presents this year, up 3.4 percent from 2017. In the US, 66 percent of adults will buy some sort of gadget as a gift, and the CTA expects that 22 million of these gifts will be smart speakers. Overall, 12 percent of shoppers plan to buy some kind of voice assistant-enabled smart speaker, and 6 percent plan to buy a speaker that also has a screen — like Amazon’s recently updated Echo Show or Google’s just-released Home Hub.


https://venturebeat.com/2018/11/24/before-you-launch-your-machine-learning-model-start-with-an-mvp/

Before you launch your machine learning model, start with an MVP


I’ve seen a lot of failed machine learning models in the course of my work. I’ve worked with a number of organizations to build both models and the teams and culture to support them. And in my experience, the number one reason models fail is because the team failed to create a minimum viable product (MVP).

In fact, skipping the MVP phase of product development is how one legacy corporation ended up dissolving its entire analytics team. The nascent team followed the lead of its manager and chose to use a NoSQL database, despite the fact no one on the team had NoSQL expertise. The team built a model, then attempted to scale the application. However, because it tried to scale its product using technology that was inappropriate for the use case, it never delivered a product to its customers. The company leadership never saw a return on its investment and concluded that investing in a data initiative was too risky and unpredictable.

If that data team had started with an MVP, not only could it have diagnosed the problem with its model but it could also have switched to the cheaper, more appropriate technology alternative and saved money.

In traditional software development, MVPs are a common part of the “lean” development cycle; they’re a way to explore a market and learn about the challenges related to the product. Machine learning product development, by contrast, is struggling to become a lean discipline because it’s hard to learn quickly and reliably from complex systems.

Yet, for ML teams, building an MVP remains an absolute must. If the weakness in the model originates from bad data quality, all further investments to improve the model will be doomed to failure, no matter the amount of money thrown at the project. Similarly, if the model underperforms because it was not deployed or monitored properly, then any money spent on improving data quality will be wasted. Teams can avoid these pitfalls by first developing an MVP and by learning from failed attempts.

Return on investment in machine learning

Machine learning initiatives require tremendous overhead work, such as the design of new data pipelines, data management frameworks, and data monitoring systems. That overhead work causes an ‘S’-shaped return-on-investment curve, which most tech leaders are not accustomed to. Company leaders who don’t understand that this S-shaped ROI is inherent to machine learning projects could abandon projects prematurely, judging them to be failures.

Unfortunately, this premature termination typically happens during the “building the foundations” phase of the ROI curve, and many organizations never allow their teams to progress far enough into the next phases.
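The S-shape is easy to picture with a toy logistic curve. The numbers below are invented purely for illustration (they model nothing real), but they show how cumulative returns stay flat while the foundations are being built and only ramp up later:

```python
import math

def toy_roi(spend: float, midpoint: float = 5.0, steepness: float = 1.2) -> float:
    """Illustrative logistic ROI curve: cumulative return (as a percentage
    of the eventual maximum) stays near zero during the foundations phase,
    then ramps up sharply once pipelines and models are in place."""
    return 100 / (1 + math.exp(-steepness * (spend - midpoint)))

for spend in range(0, 11, 2):
    print(f"spend={spend:>2} -> cumulative return ~ {toy_roi(spend):5.1f}% of max")
```

A leader who samples this curve early (say, at spend=2) sees almost no return and may conclude the project has failed, even though the steep part of the curve is just ahead.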

Failed models offer good lessons

Identifying the weaknesses of any product sooner rather than later can result in hundreds of thousands of dollars in savings. Spotting potential shortcomings ahead of time is even more important with data products, because the root causes for a subpar recommendation system, for instance, could be anything from technology choices to data quality and/or quantity to model performance to integration, and more. To avoid bleeding resources, early diagnosis is key.

For instance, by forgoing the MVP stage of machine learning development, one company deploying a new search algorithm missed the opportunity to identify the poor quality of its data. In the process, it lost customers to the competition and had to not only fix its data collection process but eventually redo every subsequent step, including model development. This resulted in investments in the wrong technologies and six months’ worth of work for a team of 10 engineers and data scientists. It also led to the resignation of several key members of that team, each of whom cost about $70,000 to replace.

In another example, a company leaned too heavily on A/B testing to determine the viability of its model. A/B tests are an incredible instrument for probing the market; they are a particularly relevant tool for machine learning products, as those products are often built using theoretical metrics that do not always closely relate to real-life success. However, many companies use A/B tests to identify the weaknesses in their machine learning algorithms. By using A/B tests as a quality assurance (QA) checkpoint, companies miss the opportunity to stop poorly developed models and systems in their tracks before sending a prototype to production. The typical ML prototype takes 12 to 15 engineer-weeks to turn into a real product. Based on that projection, failing to first create an MVP will typically result in a loss of over $50,000 if the final product isn’t successful.
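A quick back-of-envelope check of that figure, assuming a fully loaded cost of roughly $3,750 per engineer-week (an assumption; the article only gives the 12-to-15-week range and the roughly $50K total):

```python
ASSUMED_WEEKLY_RATE = 3_750  # fully loaded cost per engineer-week, USD (assumption)

def productionization_cost(engineer_weeks: int, weekly_rate: int = ASSUMED_WEEKLY_RATE) -> int:
    """Salary cost sunk into turning an ML prototype into a real product."""
    return engineer_weeks * weekly_rate

low, high = productionization_cost(12), productionization_cost(15)
print(f"Sunk cost if the product fails: ${low:,}-${high:,}")  # $45,000-$56,250
```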

The investment you’re protecting

Personnel costs are just one consideration. Let’s step back and discuss the wider investment in AI that you need to protect by first building an MVP.

Data collection. Data acquisition costs will vary based on the type of product you’re building and how frequently you’re gathering and updating data. If you are developing an application for an IoT device, you will have to decide which data to keep on the edge and which to store remotely in the cloud, where your team can do a lot of R&D work on it. If you are in the eCommerce business, gathering data will mean adding new front-end instrumentation to your website, which will unquestionably slow down the response time and degrade the overall user experience, potentially costing you customers.

Data pipeline building. The creation of pipelines to transfer data is fortunately a one-time initiative, but it is also a costly and time-consuming one.

Data storage. The consensus for a while now has been that data storage is being progressively commoditized. However, there are more and more indications that Moore’s Law just isn’t enough anymore to make up for the growth rate of the volumes of data we collect. If those trends prove true, storage will become increasingly expensive and will require that we stick to the bare minimum: only the data that is truly informational and actionable.

Data cleaning. With volumes always on the rise, the amount of data that is available to data scientists is becoming both an opportunity and a liability. Separating the wheat from the chaff is often difficult and time-consuming. And since these decisions typically need to be made by the data scientist in charge of developing the model, the process is all the more costly.

Data annotation. Using larger amounts of data requires more labels, and using crowds of human annotators isn’t enough anymore. Semi-automated labeling and active learning are becoming increasingly attractive to many companies, especially those with very large volumes of data. However, the licenses for those platforms can represent a substantial addition to the total price of your ML initiative, especially when your data shows important seasonal patterns and needs to be relabeled regularly.

Compute power. Just like data storage, compute power is becoming commoditized, and many companies opt for cloud-based solutions such as AWS or GCP. However, with large volumes of data and complex models, the bill can become a considerable part of the entire budget and can sometimes even require a hefty investment in an on-premise server solution.

Modeling cost. The model development phase accounts for the most unpredictable cost in your final bill because the amount of time required to build a model depends on many different factors: the skill of your ML team, problem complexity, required accuracy, data quality, time constraints, and even luck. Hyperparameter tuning for deep learning is making things even more hectic, as this phase of development benefits little from experience, and usually only a trial-and-error approach prevails. Typical models will take about six weeks of development for a mid-level data scientist, so that’s about $15K in salary alone.

Deployment cost. Depending on the organization, this phase can either be fast or slow. If the company is mature from an ML perspective and already has a standardized path to production, deploying a model will likely take about two weeks of an ML engineer’s time, so about $5K. However, more often than not, you’ll require custom work, and that can make the deployment phase the most time-consuming and expensive part of creating a live ML MVP.
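Those two line items can be sanity-checked with a few lines of arithmetic. The weekly rates below are assumptions backed out of the article’s round numbers ($15K over six weeks, $5K over two weeks); infrastructure, data, and compute costs are deliberately excluded:

```python
# phase -> (duration in weeks, assumed salary cost per week in USD)
PHASES = {
    "modeling":   (6, 2_500),  # mid-level data scientist
    "deployment": (2, 2_500),  # ML engineer, standardized path to production
}

def salary_cost(phases: dict) -> int:
    """Total salary cost across all phases, in USD."""
    return sum(weeks * rate for weeks, rate in phases.values())

print(f"Minimum salary bill for a live ML MVP: ${salary_cost(PHASES):,}")
```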

The cost of diagnosis

Recent years have seen an explosion in the number of ML projects powered by deep learning architectures. But along with the fantastic promise of deep learning comes the most frightening challenge in machine learning: lack of explainability. Deep learning models can have tens, if not hundreds of thousands, of parameters, and this makes it impossible for data scientists to use intuition when trying to diagnose problems with the system. This is likely one of the chief reasons ineffective models are taken offline rather than fixed and improved. If, after weeks waiting for the ML team to diagnose a mistake, they still can’t find the problem, it’s easiest to move on and start over.

And because most data scientists are trained as researchers rather than engineers, their core expertise as well as their interest rarely lies in improving systems but rather in exploring new ideas. Pushing your data science experts to spend most of their time “fixing” things (which could cost you 70 percent of your R&D budget) could considerably increase the churn among them. Ultimately, debugging, or even incremental improvement of an ML MVP can prove much more costly than a similarly-sized “traditional” software engineering MVP.

Yet ML MVPs remain an absolute must, because if the weakness in the model originates in the bad quality of the data, all further investments to improve the model will be doomed to failure, no matter how much money you throw at the project. Similarly, if the model underperforms because it was not deployed or monitored properly, then any money spent on improving data quality will be wasted.

How to succeed with an MVP

But there is hope. It is just a matter of time until the lean methodology that has seen huge success within the software development community proves itself useful for machine learning projects as well. For this to happen, though, we’ll have to see a shift in mindset among data scientists, a group known to value perfectionism over short time-to-market. Business leaders will also need to understand the subtle differences between an engineering and a machine learning MVP:

Data scientists need to evaluate the data and the model separately. The fact that the application is not providing the desired results might be caused by one or the other, or both, and diagnosis will never converge unless data scientists keep this fact in mind. Because data scientists now have the option of improving their data collection process, they can salvage models that would otherwise have been written off as hopeless.

Be patient with ROI. Because the ROI curve of ML is S-shaped, even MVPs require far more work than you might typically anticipate. As we have seen, ML products require many complex steps to reach completion, and this is something that needs to be communicated clearly and repeatedly to stakeholders to limit the risk of frustration and premature abandonment of a project.

Diagnosing is costly but critical. Debugging ML systems is almost always extremely time-consuming, in particular because of the lack of explainability in many modern (deep learning) models. Rebuilding from scratch is cheaper but is a worse financial bet, because humans have a natural tendency to repeat the same mistakes anyway. Obtaining the right diagnosis will ensure your ML team knows with precision what requires attention (whether it be the data, the model, or the deployment), allowing you to keep the costs of the project from exploding. Diagnosing problems also gives your team the opportunity to learn valuable lessons from their mistakes, potentially shortening future project cycles. Failed models can be a mine of information; redesigning from scratch is thus often a lost opportunity.

Make sure no single person has the keys to your project. Unfortunately, extremely short tenures are the norm among machine learning employees. When key team members leave a project, its problems are even harder to diagnose, so company leaders must ensure that “tribal” knowledge is not owned by any one single person on the team. Otherwise, even the most promising MVPs will have to be abandoned. Make sure that once your MVP is ready for the market, you start gathering data as fast as possible and that learnings from the project are shared with your entire team.

No shortcuts

No matter how long you have worked in the field, ML models are daunting, especially when the data is highly dimensional and high volume. For the highest chances of success, you need to test your model early with an MVP and invest the necessary time and money in diagnosing and fixing its weaknesses. There are no shortcuts.

Jennifer Prendki is VP of Machine Learning at Figure Eight.

https://singularityhub.com/2018/11/24/what-happens-to-the-brain-in-zero-gravity/

What Happens to the Brain in Zero Gravity?

NASA has made a commitment to send humans to Mars by the 2030s. This is an ambitious goal when you consider that a typical round trip will take anywhere between three and six months, and crews will be expected to stay on the red planet for up to two years before planetary alignment allows for the return journey home. It means that the astronauts will have to live in reduced (micro) gravity for about three years, well beyond the current record of 438 continuous days in space held by the Russian cosmonaut Valery Polyakov.

In the early days of space travel, scientists worked hard to figure out how to overcome the force of gravity so that a rocket could catapult free of Earth’s pull in order to land humans on the Moon. Today, gravity remains at the top of the science agenda, but this time we’re more interested in how reduced gravity affects the astronauts’ health, especially their brains. After all, we’ve evolved to exist within Earth’s gravity (1 g), not in the weightlessness of space (0 g) or the microgravity of Mars (0.3 g).

So exactly how does the human brain cope with microgravity? Poorly, in a nutshell—although information about this is limited. This is surprising, since we’re familiar with astronauts’ faces becoming red and bloated during weightlessness—a phenomenon affectionately known as the “Charlie Brown effect”, or “puffy head bird legs syndrome”. This is due to fluid consisting mostly of blood (cells and plasma) and cerebrospinal fluid shifting towards the head, causing them to have round, puffy faces and thinner legs.

These fluid shifts are also associated with space motion sickness, headaches, and nausea. They have also, more recently, been linked to blurred vision due to a build-up of pressure as blood flow increases and the brain floats upward inside the skull, a condition called visual impairment and intracranial pressure syndrome. Even though NASA considers this syndrome to be the top health risk for any mission to Mars, what causes it, and the even tougher question of how to prevent it, remains a mystery.

So where does my research fit into this? Well, I think that certain parts of the brain end up receiving way too much blood because nitric oxide—an invisible molecule which is usually floating around in the blood stream—builds up in the bloodstream. This makes the arteries supplying the brain with blood relax, so that they open up too much. As a result of this relentless surge in blood flow, the blood-brain barrier (the brain’s “shock absorber”) may become overwhelmed. This allows water to slowly build up (a condition called oedema), causing brain swelling and an increase in pressure that can also be made worse due to limits in its drainage capacity.

Think of it like a river overflowing its banks. The end result is that not enough oxygen gets to parts of the brain fast enough. This is a big problem which could explain why blurred vision occurs, as well as effects on other skills including astronauts’ cognitive agility (how they think, concentrate, reason, and move).

A Trip in the ‘Vomit Comet’

To work out whether my idea was right, we needed to test it. But rather than ask NASA for a trip to the moon, we escaped the bonds of Earth’s gravity by simulating weightlessness in a special aeroplane nicknamed the “vomit comet.”

By climbing and then dipping through the air, this plane performs up to 30 of these “parabolas” in a single flight to simulate the feeling of weightlessness. Each one lasts only 30 seconds and, I must admit, it’s very addictive. You really do get a puffy face!

With all of the equipment securely fastened down, we took measurements from eight volunteers who took a single flight every day for four days. We measured blood flow in different arteries that supply the brain using a portable doppler ultrasound, which works by bouncing high-frequency sound waves off circulating red blood cells. We also measured nitric oxide levels in blood samples taken from the forearm vein, as well as other invisible molecules that included free radicals and brain-specific proteins (which reflect structural damage to the brain) that could tell us if the blood-brain barrier has been forced open.

Our initial findings confirmed what we anticipated. Nitric oxide levels increased following repeated bouts of weightlessness, and this coincided with increased blood flow, particularly through arteries that supply the back of the brain. This forced the blood-brain barrier open, although there was no evidence of structural brain damage.

We’re now planning on following these studies up with more detailed assessments of blood and fluid shifts in the brain using imaging techniques such as magnetic resonance to confirm our findings. We’re also going to explore the effects that countermeasures such as rubber suction trousers—which create a negative pressure in the lower half of the body with the idea that they can help “suck” blood away from the astronaut’s brain—as well as drugs to counteract the increase in nitric oxide.

But these findings won’t just improve space travel—they can also provide valuable information as to why the “gravity” of exercise is good medicine for the brain and how it can protect against dementia and stroke in later life.

Damian Bailey, Professor of Physiology and Biochemistry, University of South Wales

This article is republished from The Conversation under a Creative Commons license. Read the original article.

https://physics.aps.org/synopsis-for/10.1103/PhysRevLett.121.213601

Synopsis: Making Mixtures of Magnetic Condensates

A condensate mixing two species of strongly magnetic atoms provides a new experimental window into many-body phenomena.

In recent years, researchers have been able to prepare condensates of ultracold atoms with large magnetic moments. In these condensates, the interactions between the atoms’ magnetic dipoles give rise to exotic phases, some of which are analogous to those found in liquid crystals and superfluids. The condensates demonstrated to date have involved single species of magnetic atoms like dysprosium (Dy) or erbium (Er) (see 21 May 2012 Viewpoint). Now, Francesca Ferlaino and colleagues at the Institute for Quantum Optics and Quantum Information, Austria, have produced condensates that mix Dy and Er. The ability to couple two distinct dipolar atomic species will provide an opportunity to explore new quantum behaviors of ultracold gases.

Starting with an atomic beam of Dy and Er, the team used a combination of lasers and magnetic fields to trap the atoms and to cool them by evaporation. Getting both atomic species to condense at the same time, however, entailed a new trick. The researchers used traps whose shape and depth could be tuned so that the more weakly trapped Er would evaporate more easily than Dy, in turn serving as a coolant for the Dy.

Working with different isotopes of Dy and Er—some fermionic, some bosonic—they produced a variety of Bose-Bose or Bose-Fermi quantum mixtures. The authors observed signatures of strong interspecies interaction: For some isotope combinations, the forces between Er and Dy turned out to be repulsive, displacing the two condensates upwards and downwards, respectively, relative to the trap in which they were created. The magnetic mixture may allow researchers to study hard-to-probe quantum phases, such as fermionic superfluids with direction-dependent properties.

This research is published in Physical Review Letters.

–Nicolas Doiron-Leyraud

Nicolas Doiron-Leyraud is a Corresponding Editor at Physics and a researcher at the University of Sherbrooke.

https://www.sciencealert.com/a-hidden-region-of-the-human-brain-was-revealed-while-making-an-atlas

Neuroscientists Have Discovered a Previously Hidden Region in The Human Brain

TESSA KOUMOUNDOUROS
22 NOV 2018

It turns out we humans may have an extra type of thinky bit that isn’t found in other primates. A previously unknown brain structure was identified while scientists carefully imaged parts of the human brain for an upcoming atlas on brain anatomy.


Neuroscientist George Paxinos and his team at Neuroscience Research Australia (NeuRA) have named their discovery the endorestiform nucleus – because it is located within (endo) the inferior cerebellar peduncle (also called the restiform body). It’s found at the base of the brain, near where the brain meets the spinal cord.

This area is involved in receiving sensory and motor information from our bodies to refine our posture, balance and movements.

“The inferior cerebellar peduncle is like a river carrying information from the spinal cord and brainstem to the cerebellum,” Paxinos told ScienceAlert.

“The endorestiform nucleus is a group of neurons, and it is like an island in this river.”

Neuroscientist Lyndsey Collins-Praino from Adelaide University, who was not involved in the study, told ScienceAlert that Paxinos’ discovery is “intriguing”.

“While one can speculate that the endorestiform nucleus may play a key role in [the functions of the inferior cerebellar peduncle], it is too early to know its true significance,” she added.

Paxinos confirmed the existence of this brain structure while using a relatively new brain staining technique he developed to make images of the brain tissues clearer (and surely also prettier!) for the latest neuroanatomy atlas he has been working on.

These stains target cell products actively being made – chemicals in the brain such as neurotransmitters, providing a map of brain tissues. This helps to differentiate the neuron groups by their function – rather than just the traditional way of separating them by how the cells look – revealing what is known as the chemoarchitecture of the brain.

“The endorestiform nucleus is all too evident by its dense staining for [the enzyme] acetylcholinesterase, all the more evident because the surrounding areas are negative,” Paxinos explained.

“It was nearly the case that the nucleus discovered me, rather than the other way around.”

In fact, Paxinos had been receiving clues that the endorestiform nucleus existed for decades. In a procedure called a therapeutic anterolateral cordotomy – a surgery to achieve relief from extreme and incurable pain by cutting spinal pathways – he and his colleagues had noticed that the long fibres from the spine seemed to end around where the endorestiform nucleus was found.

“It has been staring at me from the anterolateral cordotomies and also from the chemical stains I use in my lab,” he told ScienceAlert.

The location of this elusive brain bit leads Paxinos to suspect it may be involved in fine motor control – something also backed up by the fact that this structure has yet to be identified in other animals, including marmosets or rhesus monkeys.

“I cannot imagine a chimpanzee playing the guitar as dexterously as us, even if they liked to make music,” Paxinos pointed out.

Humans have brains at least twice as big as chimpanzees’ (1,300 grams vs 600 grams, or 2.9 lbs vs 1.3 lbs), and a larger percentage of the neuronal pathways that signal for movement make direct contact with motor neurons in humans – 20 percent, compared to 5 percent in other primates.

So, the endorestiform nucleus may be another unique feature in our nervous system, although it’s too soon to tell just yet. Paxinos is set to do some work in chimpanzees soon.

In order to discover what function the endorestiform nucleus might serve, we may have to wait for higher resolution MRIs capable of studying it in a living person.

Comparing the normal brains studied for the atlas with those from people with known abnormalities might also lead to some insights.

“Neuroanatomy is critical for serving as the foundation that we build a knowledge of both normal and abnormal function upon, but, at this time, it is simply impossible to know what implications this discovery may have for neurological or psychiatric disease,” Collins-Praino told ScienceAlert.

“Investigations into the functionality of this nucleus in the coming years will be key in answering these questions.”

Paxinos, who has 52 brain-mapping books under his belt, plans to keep using this new staining technique to thoroughly search our brains for more bits and compare them across species, to obtain a greater understanding of how they work.

This discovery is yet to be examined by peer review, but details of the new brain area can be found in Paxinos’ latest atlas, titled Human Brainstem: Cytoarchitecture, Chemoarchitecture, Myeloarchitecture.


https://medicalxpress.com/news/2018-11-brain.html

How the brain switches between different sets of rules

November 19, 2018, Massachusetts Institute of Technology

Cognitive flexibility—the brain’s ability to switch between different rules or action plans depending on the context—is key to many of our everyday activities. For example, imagine you’re driving on a highway at 65 miles per hour. When you exit onto a local street, you realize that the situation has changed and you need to slow down.

When we move between different contexts like this, our brain holds multiple sets of rules in mind so that it can switch to the appropriate one when necessary. These neural representations of task rules are maintained in the prefrontal cortex, the part of the brain responsible for planning action.

A new study from MIT has found that a region of the thalamus is key to the process of switching between the rules required for different contexts. This region, called the mediodorsal thalamus, suppresses representations that are not currently needed. That suppression also protects the representations as a short-term memory that can be reactivated when needed.

“It seems like a way to toggle between irrelevant and relevant contexts, and one advantage is that it protects the currently irrelevant representations from being overwritten,” says Michael Halassa, an assistant professor of brain and cognitive sciences and a member of MIT’s McGovern Institute for Brain Research.

Halassa is the senior author of the paper, which appears in the Nov. 19 issue of Nature Neuroscience. The paper’s first author is former MIT graduate student Rajeev Rikhye, who is now a postdoc in Halassa’s lab. Aditya Gilra, a postdoc at the University of Bonn, is also an author.

Changing the rules

Previous studies have found that the prefrontal cortex is essential for cognitive flexibility, and that a part of the thalamus called the mediodorsal thalamus also contributes to this ability. In a 2017 study published in Nature, Halassa and his colleagues showed that the mediodorsal thalamus helps the prefrontal cortex to keep a thought in mind by temporarily strengthening the neuronal connections in the prefrontal cortex that encode that particular thought.

In the new study, Halassa wanted to further investigate the relationship between the mediodorsal thalamus and the prefrontal cortex. To do that, he created a task in which mice learn to switch back and forth between two different contexts—one in which they must follow visual instructions and one in which they must follow auditory instructions.

In each trial, the mice are given both a visual target (flash of light to the right or left) and an auditory target (a tone that sweeps from high to low pitch, or vice versa). These targets offer conflicting instructions. One tells the mouse to go to the right to get a reward; the other tells it to go left. Before each trial begins, the mice are given a cue that tells them whether to follow the visual or auditory target.

“The only way for the animal to solve the task is to keep the cue in mind over the entire delay, until the targets are given,” Halassa says.

The researchers found that thalamic input is necessary for the mice to successfully switch from one context to another. When they suppressed the mediodorsal thalamus during the cuing period of a series of trials in which the context did not change, there was no effect on performance. However, if they suppressed the mediodorsal thalamus during the switch to a different context, it took the mice much longer to switch.

By recording from neurons of the prefrontal cortex, the researchers found that when the mediodorsal thalamus was suppressed, the representation of the old context in the prefrontal cortex could not be turned off, making it much harder to switch to the new context.

In addition to helping the brain switch between contexts, this process also appears to help maintain the neural representation of the context that is not currently being used, so that it doesn’t get overwritten, Halassa says. This allows it to be activated again when needed. The mice could maintain these representations over hundreds of trials, but the next day, they had to relearn the rules associated with each context.
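The gating idea described above can be sketched in a few lines of code. This is a purely illustrative toy, not the authors' model: a store holds one rule representation per context, and a gate (standing in for the mediodorsal thalamus) exposes only the active entry. Learning only overwrites the exposed representation, so the suppressed one survives and is restored on the next switch. The context names and rule strings are invented for the example.

```python
class ContextStore:
    """Toy sketch of context switching with protected representations."""

    def __init__(self):
        # One stored rule representation per context; both persist.
        self.rules = {"visual": None, "auditory": None}
        self.active = "visual"

    def switch(self, context):
        # Toggling the gate changes which entry is exposed; the
        # now-suppressed rule stays in the store, protected.
        self.active = context

    def learn(self, rule):
        # Learning can only overwrite the currently active representation.
        self.rules[self.active] = rule

    def current_rule(self):
        return self.rules[self.active]


store = ContextStore()
store.learn("follow the flash of light")      # learn the visual rule
store.switch("auditory")
store.learn("follow the tone sweep")          # learn the auditory rule
store.switch("visual")
print(store.current_rule())                   # the visual rule was protected
```

Without the gate (a single shared slot), the second `learn` call would erase the first rule; the suppression step is what makes the old context recoverable.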

Multitasking AI

The findings could help guide the development of better artificial intelligence algorithms, Halassa says. The human brain is very good at learning many different kinds of tasks—singing, walking, talking, etc. However, neural networks (a type of artificial intelligence based on interconnected nodes similar to neurons) are usually good at learning only one thing. These networks are subject to a phenomenon called “catastrophic forgetting”—when they try to learn a new task, previously learned tasks become overwritten.
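Catastrophic forgetting is easy to demonstrate in miniature. The sketch below (illustrative only, with made-up tasks and parameters) trains a single linear model with gradient descent on task A, then sequentially on a conflicting task B. Because both tasks share the same weights and nothing protects the old solution, the model's error on task A climbs back up after training on B.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_task(true_w):
    """Generate a small regression dataset y = X @ true_w."""
    X = rng.normal(size=(100, 2))
    return X, X @ true_w

def train(w, X, y, lr=0.1, steps=200):
    """Plain gradient descent on mean-squared error."""
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def mse(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

Xa, ya = make_task(np.array([1.0, -2.0]))    # task A
Xb, yb = make_task(np.array([-3.0, 0.5]))    # task B: a conflicting rule

w = np.zeros(2)
w = train(w, Xa, ya)
err_a_before = mse(w, Xa, ya)    # low: task A is learned

w = train(w, Xb, yb)             # sequential training on task B
err_a_after = mse(w, Xa, ya)     # much higher: task A was overwritten
```

A mechanism like the thalamic gating in the study would instead keep a separate, suppressed copy of the task-A solution while task B is being learned.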

Halassa and his colleagues now hope to apply their findings to improve artificial neural networks’ ability to store previously learned tasks while learning to perform new ones.


More information: Thalamic regulation of switching between cortical representations enables cognitive flexibility, Nature Neuroscience (2018). DOI: 10.1038/s41593-018-0269-z, https://www.nature.com/articles/s41593-018-0269-z