http://www.dailymail.co.uk/sciencetech/article-3423615/Practice-really-DOES-make-perfect-Study-shows-ballet-dancer-s-brains-change-learn-new-routine.html

Practice really DOES make perfect: Study shows how ballet dancers’ brains change as they learn a new routine
Study analyzed fMRI scans of 11 professional dancers ages 19-50
Performed four brain scans over 34 weeks of dance rehearsals
Learning and performance at 7 weeks led to increased activity in cortical regions during visualization of the dance, compared with the 1st week
By STACY LIBERATORE FOR DAILYMAIL.COM

PUBLISHED: 00:12 GMT, 30 January 2016 | UPDATED: 02:43 GMT, 30 January 2016
‘Practice makes perfect’ may be a cliché, but a new study confirms it is more than just a saying.

Researchers analyzed fMRI brain scans of professional ballet dancers to understand the long-term effects of learning.

They found that practicing and performing at seven weeks led to higher activity in areas of the brain when the dancers visualized the dance being learned, compared with the first week of rehearsals.

All figures show the average activity across participants (pictured). Researchers analyzed fMRI brain scans of professional ballet dancers to understand the long-term effects of learning, and found that practicing and performing at seven weeks led to higher activity in areas of the brain when the dancers visualized the dance being learned

‘Studies investigating the neuroplastic changes associated with learning and practicing motor tasks have shown that practicing such tasks results in an increase in neural activation in several specific brain regions,’ according to the study by York University published in the journal PLOS ONE.

‘However, studies comparing experts and non-experts suggest that experts employ less neuronal activation than non-experts when performing a familiar motor task.’
The experiment was created to determine the long-term changes in neural networks linked to learning a new dance in expert ballet dancers over a period of 34 weeks.

‘We wanted to study how the brain gets activated with long-term rehearsal of complex dance motor sequences,’ said Professor Joseph DeSouza, who studies and supports people with Parkinson’s disease.

WHAT WERE THE RESULTS OF THE STUDY?
The results showed that initial learning and performance at seven weeks led to an increase in activation in cortical regions during visualization of the dance being learned, compared with the first week.

However, at 34 weeks, the scans showed reduced activation in comparison to week seven.

Researchers found that in the learning process, our brain function follows an inverted ‘U’ learning pattern: a slow pace at the start, accelerating to a peak at the midpoint, before returning to the original pace once we have mastered the task

These findings do not necessarily mean that the cortical regions are no longer greatly involved.

Neurons within these regions may have become more efficient, changed their connection weights, or chunked together, increasing their efficiency

‘The study outcome will help with understanding motor learning and developing effective treatments to rehabilitate the damaged or diseased brain.’

During the study, 11 dancers (ages 19 to 50) were asked to visualize dance movements while they underwent fMRI scanning at several points over the 34 weeks.

‘Our aim was to find out the long-term impact of the cortical changes that occur as one goes from learning a motor sequence to becoming an expert at it,’ said coauthor Rachel Bar, herself a former ballet dancer, in a recent press release.

‘Our results also suggest that understanding the neural underpinnings of complex motor tasks such as learning a new dance can be an effective model to study motor learning in the real world.’

The scans measured the blood-oxygen-level-dependent (BOLD) signal in the participants' brains.

The first scan was performed after four rehearsals of the dance and the second a week later, after nine rehearsals.

The third was conducted seven weeks after the first day of practice, by which point the dancers had performed the piece on stage 16 times.

And the fourth took place during the 34th week, when the dance had been performed on stage a total of 36 times.

While the scanning was being done, dancers were given two tasks: ‘a music-visualization task cued by music and a motor localizer task cued by a visual stimulus,’ wrote researchers.

The dancers were told to listen to music and visualize themselves dancing the steps they had learned during rehearsals.

The results showed that initial learning and performance at seven weeks led to an increase in activation in cortical regions during visualization of the dance being learned, compared with the first week.

However, at 34 weeks, the scans showed reduced activation in comparison to week seven.

‘We found that in the learning process, our brain function makes an inverted ‘U’ learning pattern from a slow pace at the start, accelerating to a peak at the midpoint, before returning to the original pace, once we have mastered the task,’ says DeSouza.

‘An everyday example would be learning to drive a manual car, where you constantly have to think about shifting the gears until you master it and then do it instinctively.’

Researchers explained that these findings do not necessarily mean that the cortical regions are no longer greatly involved.

‘Neurons within these regions may have become more efficient, changed their connection weights, or chunked together, increasing their efficiency,’ the study explains.

The team believes these findings indicate that the time course of learning real-world sensorimotor tasks can be tracked in cortical and subcortical regions using fMRI in professionals performing unique routines created to flow with music.

Read more:
Practice makes perfect, York U brain study confirms | EurekAlert! Science News
PLOS ONE: Tracking Plasticity: Effects of Long-Term Rehearsal in Expert Dancers Encoding Music to Movement

http://www.castanet.net/news/Canada/157480/Count-your-friends

Count your friends

How many friends can you truly count on? According to a recent study, four.

The paper, titled “Do online social media cut through the constraints that limit the size of offline social networks?”, was published in Royal Society Open Science and takes a look at whether social media networks have expanded our real-life social networks.

Robin Dunbar, a professor of evolutionary psychology at the University of Oxford, conducted two surveys: one that sampled 2,000 social-media-using adults across the U.K., and another that included 1,375 professional adults who work nine-to-five weekday jobs.

While the average person had 150 Facebook friends, only 4.1 of those could be considered friends you can count on for support and assistance in a time of crisis.

Expanding out to a “sympathy group” – those you might consider close friends – left an average of 13.6 people.

Dunbar has also published a paper that stated humans can only handle approximately 150 meaningful relationships at once.

Dunbar also concludes that online social networking sites like Facebook don’t seem to help individuals expand their true social networks beyond what they would have if they stayed strictly offline.

http://www.autoomobile.com/news/tesla-model-3-takes-body-of-bmw-4-series-with-model-x/40024842/

NEWS

Tesla Model 3 Takes Body of BMW 4-Series With Model X

One of the most anticipated of all vehicles is the Tesla Model 3, and one of the biggest questions on everyone's mind is what the vehicle will look like.

All that we know at the moment is that the Tesla Model 3 will be a sedan, and it may be smaller than the Tesla Model S. Recently we were treated to a render of the vehicle, and it has the style of the Tesla Model X fused with that of the Model S.

We have also seen another prediction of what the 2017 Tesla Model 3 might end up looking like if it combined design elements of the Tesla Model X with the platform of a BMW 4 Series.

Davos 2016 – A World Without Work?

Published on Jan 23, 2016

Christopher Pissarides defends a universal basic income at Davos 2016.

How will rapid technological progress and the prospect of longer, healthier lives revolutionize work?

· Erik Brynjolfsson, Director, MIT Initiative on the Digital Economy, MIT Sloan School of Management, USA
· Yoshiaki Fujimori, President and Chief Executive Officer, LIXIL Group, Japan
· Dileep George, Co-Founder and Chief Technology Officer, Vicarious, USA
· Christopher Pissarides, Regius Professor of Economics, London School of Economics and Political Science, United Kingdom
· Troels Lund Poulsen, Minister for Business and Growth of Denmark

http://www.kurzweilai.net/scientists-decode-brain-signals-to-recognize-images-in-real-time

Scientists decode brain signals to recognize images in real time

May lead to helping locked-in patients (those who are paralyzed or have had a stroke) communicate, and also to real-time brain mapping
January 30, 2016

Using electrodes implanted in the temporal lobes of seven awake epilepsy patients, University of Washington scientists have decoded brain signals (representing images) at nearly the speed of perception for the first time* — enabling the scientists to predict in real time which images of faces and houses the patients were viewing and when, and with better than 95 percent accuracy.

Multi-electrode placements on the temporal lobe surface (credit: K.J. Miller et al./PLoS Comput Biol)

The research, published Jan. 28 in open-access PLOS Computational Biology, may lead to an effective way to help locked-in patients (who are paralyzed or have had a stroke) communicate, the scientists suggest.

Predicting what someone is seeing in real time

“We were trying to understand, first, how the human brain perceives objects in the temporal lobe, and second, how one could use a computer to extract and predict what someone is seeing in real time,” explained University of Washington computational neuroscientist Rajesh Rao. He is a UW professor of computer science and engineering and directs the National Science Foundation’s Center for Sensorimotor Engineering, headquartered at UW.

The study involved patients receiving care at Harborview Medical Center in Seattle. Each had been experiencing epileptic seizures not relieved by medication, so each had undergone surgery in which their brains’ temporal lobes were implanted (temporarily, for about a week) with electrodes to try to locate the seizures’ focal points.

Temporal lobes process sensory input and are a common site of epileptic seizures. Situated behind mammals’ eyes and ears, the lobes are also involved in Alzheimer’s and dementias and appear somewhat more vulnerable than other brain structures to head traumas, said UW Medicine neurosurgeon Jeff Ojemann.

Recording digital signatures of images in real time

In the experiment, signals from electrocorticographic (ECoG) electrodes at multiple temporal-lobe locations were processed by powerful computational software that extracted two characteristic properties of the brain signals: “event-related potentials” (voltages from hundreds of thousands of neurons activated by an image) and “broadband spectral changes” (power measurements across a wide range of frequencies).

Averaged broadband power at two multi-electrode locations (1 and 4) following presentation of different images; note that responses to people are stronger than to houses. (credit: K.J. Miller et al./PLoS Comput Biol)

Target image (credit: K.J. Miller et al./PLoS Comput Biol)

The subjects, watching a computer monitor, were shown a random sequence of pictures: brief (400 millisecond) flashes of images of human faces and houses, interspersed with blank gray screens. Their task was to watch for an image of an upside-down house and verbally report this target, which appeared once during each of 3 runs (3 of 300 stimuli). Patients identified the target with an error rate of less than 3 percent across all 21 experimental runs.

The computational software sampled and digitized the brain signals 1,000 times per second to extract their characteristics. The software also analyzed the data to determine which combination of electrode locations and signal types correlated best with what each subject actually saw.

By training an algorithm on the subjects’ responses to the (known) first two-thirds of the images, the researchers could examine the brain signals representing the final third of the images, whose labels were unknown to them, and predict with 96 percent accuracy whether and when the subjects were seeing a house, a face or a gray screen, with only ~20 milliseconds of timing error.

This accuracy was attained only when event-related potentials and broadband changes were combined for prediction, which suggests they carry complementary information.
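The decoding scheme described above — extract two feature types per trial, fit class templates on the labeled two-thirds of the data, then score the held-out third against those templates and combine the two scores — can be sketched in a few lines. The code below is an illustrative toy, not the authors' actual pipeline: the synthetic "ERP" and "broadband" feature vectors, the feature dimensions, and the nearest-template decision rule are all assumptions made for the sake of a runnable example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data: 300 trials, each with a 64-dim "ERP" feature vector
# and a 64-dim "broadband" feature vector whose means depend on the class shown.
n_trials, n_feat, classes = 300, 64, ("house", "face", "blank")
labels = rng.choice(classes, size=n_trials)
class_means = {c: rng.normal(0, 1, size=(2, n_feat)) for c in classes}
erp = np.stack([class_means[c][0] for c in labels]) + rng.normal(0, 1, (n_trials, n_feat))
bb = np.stack([class_means[c][1] for c in labels]) + rng.normal(0, 1, (n_trials, n_feat))

split = 2 * n_trials // 3  # train on the first two-thirds, test on the rest

def templates(feats, labs):
    """Mean feature vector per class, computed from training trials only."""
    return {c: feats[labs == c].mean(axis=0) for c in classes}

def project(feats, tmpl):
    """Projection of each trial onto each (normalized) class template."""
    return np.stack([feats @ tmpl[c] / np.linalg.norm(tmpl[c]) for c in classes], axis=1)

erp_t = templates(erp[:split], labels[:split])
bb_t = templates(bb[:split], labels[:split])

# Combine the two feature types by summing their template scores,
# then pick the best-matching class for each held-out trial.
combined = project(erp[split:], erp_t) + project(bb[split:], bb_t)
pred = np.array(classes)[combined.argmax(axis=1)]
accuracy = (pred == labels[split:]).mean()
print(f"decoding accuracy on held-out third: {accuracy:.2f}")
```

With well-separated synthetic classes the combined score decodes the held-out trials almost perfectly; the study's point is the analogous result on real ECoG data, where neither feature type alone reached the accuracy of the combination.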

Steppingstone to real-time brain mapping

“Traditionally scientists have looked at single neurons,” Rao said. “Our study gives a more global picture, at the level of very large networks of neurons, of how a person who is awake and paying attention perceives a complex visual object.”

The scientists’ technique, he said, is a steppingstone for brain mapping, in that it could be used to identify in real time which locations of the brain are sensitive to particular types of information.

“The computational tools that we developed can be applied to studies of motor function, studies of epilepsy, studies of memory. The math behind it, as applied to the biological, is fundamental to learning,” Ojemann added.

Lead author of the study is Kai Miller, a neurosurgery resident and physicist at Stanford University who obtained his M.D. and Ph.D. at the UW. Other collaborators were Dora Hermes, a Stanford postdoctoral fellow in neuroscience, and Gerwin Schalk, a neuroscientist at the Wadsworth Institute in New York.

This work was supported by the National Aeronautics and Space Administration’s Graduate Student Research Program, the National Institutes of Health, the National Science Foundation, and the U.S. Army.

* In previous studies, such as these three covered on KurzweilAI, brain images were reconstructed after they were viewed, not in real time: Study matches brain scans with topics of thoughts, Neuroscape Lab visualizes live brain functions using dramatic images, How to make movies of what the brain sees.


Abstract of Spontaneous Decoding of the Timing and Content of Human Object Perception from Cortical Surface Recordings Reveals Complementary Information in the Event-Related Potential and Broadband Spectral Change

The link between object perception and neural activity in visual cortical areas is a problem of fundamental importance in neuroscience. Here we show that electrical potentials from the ventral temporal cortical surface in humans contain sufficient information for spontaneous and near-instantaneous identification of a subject’s perceptual state. Electrocorticographic (ECoG) arrays were placed on the subtemporal cortical surface of seven epilepsy patients. Grayscale images of faces and houses were displayed rapidly in random sequence. We developed a template projection approach to decode the continuous ECoG data stream spontaneously, predicting the occurrence, timing and type of visual stimulus. In this setting, we evaluated the independent and joint use of two well-studied features of brain signals, broadband changes in the frequency power spectrum of the potential and deflections in the raw potential trace (event-related potential; ERP). Our ability to predict both the timing of stimulus onset and the type of image was best when we used a combination of both the broadband response and ERP, suggesting that they capture different and complementary aspects of the subject’s perceptual state. Specifically, we were able to predict the timing and type of 96% of all stimuli, with less than 5% false positive rate and a ~20ms error in timing.

http://health.usnews.com/health-news/articles/2016-01-30/dogs-read-faces-much-like-humans-do-study-finds

Dogs Read Faces Much Like Humans Do, Study Finds

By Robert Preidt, HealthDay Reporter

SATURDAY, Jan. 30, 2016 (HealthDay News) — While dogs read facial expressions in much the same way as people do, they consider the source of a threatening expression before deciding how to respond, a new study suggests.

The dogs paid close attention to threatening faces, likely because being able to detect and avoid threats helped dogs survive as they evolved. However, they had different responses to threatening expressions, depending on whether those expressions came from other dogs or humans, the study researchers said.

Dogs tended to look longer at threatening dog faces, but looked away from threatening human faces, according to the researchers at the University of Helsinki in Finland.

“The tolerant behavior strategy of dogs toward humans may partially explain the results. Domestication may have equipped dogs with a sensitivity to detect the threat signals of humans and respond to them with pronounced appeasement signals,” researcher Sanni Somppi said in a university news release.

In the study, the researchers used eye gaze tracking to determine how 31 dogs from 13 breeds viewed the facial expressions of other dogs and of people.

The dogs first looked at the eyes and typically lingered there longer than at the nose or mouth. Dog- or human-specific characteristics of certain facial expressions attracted their attention — such as the mouths of threatening dogs — but the dogs appeared to use the whole face to assess facial expressions.

The study, published online recently in the journal PLoS One, is the first evidence of emotion-related gaze patterns in a non-primate animal, the study authors said.