Heavy screen time appears to impact children’s brains: study

A new study advises limits on screen time for children and teenagers to help boost their well-being. (PeopleImages)

Published Monday, December 10, 2018 1:42AM EST 

Researchers have found “different patterns” in brain scans among children who record heavy smart device and video game use, according to initial data from a major ongoing U.S. study.

The first wave of data from the $300 million National Institutes of Health (NIH) study reveals that nine- and 10-year-old kids who spend more than seven hours a day on such devices show signs of premature thinning of the cortex, the brain’s outermost layer, which processes sensory information.

“We don’t know if it’s being caused by the screen time. We don’t know yet if it’s a bad thing,” said Gaya Dowling, an NIH doctor working on the project, explaining the preliminary findings in an interview with the CBS news program 60 Minutes.

“What we can say is that this is what the brains look like of kids who spend a lot of time on screens. And it’s not just one pattern,” Dowling said.

The NIH data reported on CBS also showed that kids who spend more than two hours a day on screens score worse on language and reasoning tests.

The study — which involves scanning the brains of 4,500 children — eventually aims to show whether screen time is addictive, but researchers need several years to understand such long-term outcomes.

“In many ways, the concern that investigators like I have is, that we’re sort of in the midst of a natural kind of uncontrolled experiment on the next generation of children,” Dimitri Christakis, a lead author of the American Academy of Pediatrics’ most recent guidelines on screen time, told 60 Minutes.

Initial data from the study will begin to be released in early 2019.

The academy now recommends parents “avoid digital media use — except video chatting — in children younger than 18 to 24 months.”


New building block in quantum computing demonstrated

December 4, 2018
DOE/Oak Ridge National Laboratory
Researchers have demonstrated a new level of control over photons encoded with quantum information. The team’s experimental system allows them to manipulate the frequency of photons to bring about superposition, a state that enables quantum operations and computing.
The researchers’ innovative experimental setup involved operating on photons contained within a single fiber-optic cable. This provided stability and control for operations producing entangled photons, shown separated at top and intertwined at bottom after operations performed by the processor (middle), and further demonstrated the feasibility of standard telecommunications technology for linear optical quantum information processing.
Credit: Andy Sproles/Oak Ridge National Laboratory, U.S. Department of Energy

Researchers with the Department of Energy’s Oak Ridge National Laboratory have demonstrated a new level of control over photons encoded with quantum information. Their research was published in Optica.

Joseph Lukens, Brian Williams, Nicholas Peters, and Pavel Lougovski, research scientists with ORNL’s Quantum Information Science Group, performed distinct, independent operations simultaneously on two qubits encoded on photons of different frequencies, a key capability in linear optical quantum computing. Qubits are the smallest unit of quantum information.

Quantum scientists working with frequency-encoded qubits have been able to perform a single operation on two qubits in parallel, but that falls short for quantum computing.

“To realize universal quantum computing, you need to be able to do different operations on different qubits at the same time, and that’s what we’ve done here,” Lougovski said.

According to Lougovski, the team’s experimental system — two entangled photons contained in a single strand of fiber-optic cable — is the “smallest quantum computer you can imagine. This paper marks the first demonstration of our frequency-based approach to universal quantum computing.”

“A lot of researchers are talking about quantum information processing with photons, and even using frequency,” said Lukens. “But no one had thought about sending multiple photons through the same fiber-optic strand, in the same space, and operating on them differently.”

The team’s quantum frequency processor allowed them to manipulate the frequency of photons to bring about superposition, a state that enables quantum operations and computing.

Unlike data bits encoded for classical computing, superposed qubits encoded in a photon’s frequency have a value of 0 and 1, rather than 0 or 1. This capability allows quantum computers to concurrently perform operations on larger datasets than today’s supercomputers.
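The idea of a superposed qubit can be sketched with ordinary linear algebra. The following is a generic textbook illustration, not the team’s actual frequency-bin encoding:

```python
import numpy as np

# A qubit is a 2-dimensional complex state vector: |0> = [1, 0], |1> = [0, 1].
zero = np.array([1, 0], dtype=complex)

# The Hadamard gate puts a qubit into an equal superposition of 0 and 1.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
superposed = H @ zero  # amplitude 1/sqrt(2) on both |0> and |1>

# Two superposed qubits combine via the tensor (Kronecker) product,
# giving four simultaneous amplitudes -- the source of quantum parallelism.
two_qubits = np.kron(superposed, superposed)

print(np.round(np.abs(superposed) ** 2, 2))   # probabilities [0.5 0.5]
print(np.round(np.abs(two_qubits) ** 2, 2))   # [0.25 0.25 0.25 0.25]
```

With n qubits the state vector holds 2**n amplitudes, which is why operations act on exponentially many values at once.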

Using their processor, the researchers demonstrated 97 percent interference visibility — a measure of how alike two photons are — compared with the 70 percent visibility rate returned in similar research. Their result indicated that the photons’ quantum states were virtually identical.
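One common definition of interference visibility is the fringe contrast (Imax - Imin) / (Imax + Imin). A quick sketch, with purely illustrative intensity values (the article does not report the raw intensities):

```python
def visibility(i_max, i_min):
    """Fringe contrast: 1.0 means the photons interfere perfectly
    (i.e., their quantum states are indistinguishable); 0.0 means none."""
    return (i_max - i_min) / (i_max + i_min)

# Example intensities (arbitrary units) giving roughly 97% visibility:
print(round(visibility(100.0, 1.5), 2))  # 0.97
```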

The researchers also applied a statistical method associated with machine learning to prove that the operations were done with very high fidelity and in a completely controlled fashion.

“We were able to extract more information about the quantum state of our experimental system using Bayesian inference than if we had used more common statistical methods,” Williams said.

“This work represents the first time our team’s process has returned an actual quantum outcome.”
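At its simplest, the Bayesian inference Williams refers to can be illustrated with a conjugate Beta-binomial update. This is a generic textbook example, not the team’s actual analysis:

```python
def posterior_mean(successes, trials, alpha=1.0, beta=1.0):
    """Posterior mean of a success probability under a Beta(alpha, beta)
    prior after observing `successes` in `trials` (conjugate update)."""
    return (alpha + successes) / (alpha + beta + trials)

# With few observations, the prior tempers the raw frequency 9/10 = 0.9:
print(round(posterior_mean(9, 10), 3))      # 0.833
# With more data, the estimate converges toward the observed frequency:
print(round(posterior_mean(900, 1000), 3))  # 0.899
```

The appeal over plain frequency counting is that the posterior carries uncertainty information along with the point estimate, which matters when characterizing a quantum state from limited measurements.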

Williams pointed out that their experimental setup provides stability and control. “When the photons are taking different paths in the equipment, they experience different phase changes, and that leads to instability,” he said. “When they are traveling through the same device, in this case, the fiber-optic strand, you have better control.”

Stability and control enable quantum operations that preserve information, reduce information processing time, and improve energy efficiency. The researchers compared their ongoing projects, begun in 2016, to building blocks that will link together to make large-scale quantum computing possible.

“There are steps you have to take before you take the next, more complicated step,” Peters said. “Our previous projects focused on developing fundamental capabilities and enable us to now work in the fully quantum domain with fully quantum input states.”

Lukens said the team’s results show that “we can control qubits’ quantum states, change their correlations, and modify them using standard telecommunications technology in ways that are applicable to advancing quantum computing.”

Once the building blocks of quantum computers are all in place, he added, “we can start connecting quantum devices to build the quantum internet, which is the next, exciting step.”

Much the way that information is processed differently from supercomputer to supercomputer, reflecting different developers and workflow priorities, quantum devices will function using different frequencies. This will make it challenging to connect them so they can work together the way today’s computers interact on the internet.

This work is an extension of the team’s previous demonstrations of quantum information processing capabilities on standard telecommunications technology. Furthermore, they said, leveraging existing fiber-optic network infrastructure for quantum computing is practical: billions of dollars have been invested, and quantum information processing represents a novel use.

The researchers said this “full circle” aspect of their work is highly satisfying. “We started our research together wanting to explore the use of standard telecommunications technology for quantum information processing, and we have found out that we can go back to the classical domain and improve it,” Lukens said.

Lukens, Williams, Peters, and Lougovski collaborated with Purdue University graduate student Hsuan-Hao Lu and his advisor Andrew Weiner. The research is supported by ORNL’s Laboratory Directed Research and Development program.

Story Source:

Materials provided by DOE/Oak Ridge National Laboratory. Note: Content may be edited for style and length.

Journal Reference:

  1. Hsuan-Hao Lu, Joseph M. Lukens, Nicholas A. Peters, Brian P. Williams, Andrew M. Weiner, Pavel Lougovski. Quantum interference and correlation control of frequency-bin qubits. Optica, 2018; 5 (11): 1455. DOI: 10.1364/OPTICA.5.001455

Cite This Page:

DOE/Oak Ridge National Laboratory. “New building block in quantum computing demonstrated.” ScienceDaily, 4 December 2018.

New Brain Implant Allows Paralyzed Patients To Surf The Internet Using Their Thoughts

Brand-new research has shown that paralyzed patients can control an off-the-shelf tablet using chip implants connected to their brains. The brain-computer interface (BCI) allowed subjects to move a cursor and click using nothing more than their thoughts.

This is an important breakthrough. The three patients suffered from tetraplegia, which made them unable to use their limbs. Two had amyotrophic lateral sclerosis (ALS) and the other had a spinal cord injury. Thanks to this particular BCI, they were able to use email, chat, music, and video-streaming apps. They were able to navigate the web and perform tasks such as online shopping with ease. They could even play a virtual piano. The findings are reported in the journal PLOS ONE.

“It was great to see our participants make their way through the tasks we asked them to perform, but the most gratifying and fun part of the study was when they just did what they wanted to do – using the apps that they liked for shopping, watching videos or just chatting with friends,” lead author Dr Paul Nuyujukian, a bioengineer at Stanford, said in a statement. “One of the participants told us at the beginning of the trial that one of the things she really wanted to do was play music again. So to see her play on a digital keyboard was fantastic.”

The work was done by the BrainGate collaboration, which has worked to make BCIs a reality for many years. The chip is the size of a small pill and is placed in the brain’s motor cortex. The sensor registers neural activity linked to intended movements. This information is then decoded and sent to external devices. The same approach by BrainGate and other groups has allowed people to move robotic limbs.

“For years, the BrainGate collaboration has been working to develop the neuroscience and neuroengineering know-how to enable people who have lost motor abilities to control external devices just by thinking about the movement of their own arm or hand,” said Dr Jaimie Henderson, a senior author of the paper and a Stanford University neurosurgeon. “In this study, we’ve harnessed that know-how to restore people’s ability to control the exact same everyday technologies they were using before the onset of their illnesses. It was wonderful to see the participants express themselves or just find a song they want to hear.”

The approach will allow paralyzed people to communicate more easily with their family and friends. It will also enable them to help their caregivers to make better decisions regarding their ongoing health issues. This technology could dramatically improve the quality of life for many people.

Smart speakers are everywhere this holiday season, but they’re really a gift for big tech companies

Which voice assistant will get the warmest welcome this year?

Everyone knows that there is, each holiday season, a gift that says, “I know nothing about you, but I love you, I mean, you get it.” For a very long time, this present was an iTunes gift card; Apple is the richest company in the world, and I am pretty sure this is exclusively thanks to the fortune it amassed from iTunes gift cards purchased for nephews and hairdressers in the first decade of the millennium. Prior to iTunes gift cards, the gift was maybe a sweater.

Now, I’m sorry to say, the comparable gift is a smart speaker. We keep purchasing them for each other, buying into the fantasy that Siri or Alexa or Google can make someone’s life easier by scheduling their appointments and managing their time and telling them how to put on makeup or make a butternut squash lasagna. Though, at the moment, reports say that people mostly just use them to listen to music, check the weather, and ask “fun questions.”

As nondescript gifts, smart speakers make a lot of sense: Both Amazon and Google have options that are around $50, there is at least some novelty factor that pokes at adults’ memories of receiving toys, and they are far less rude to give than a Fitbit. Plus, for Amazon and Google in particular, with 64.5 percent and 19.6 percent shares in the category, respectively, the point isn’t really to make money off selling hardware. The point is to beat the others at integrating their services into the lives of the population.

In other words: You’re not gifting an Amazon Echo; you’re gifting a relationship with Alexa. Amazon can later sell that relationship to brands that hope Alexa users will order their products with their voice. You’re not gifting a Google Home; you’re gifting a closer entwining with Google Search and all the strange personalized add-ons to Calendar and Maps.

This expansion of the voice assistant ecosystem is crucial to almost every major tech company, far more so than getting sticker price for devices that look like high-end air fresheners, and if you don’t believe me, please peruse the ridiculously marked-down Black Friday and Cyber Monday deals they are all offering this year.


According to predictions from the Consumer Technology Association, shoppers are set to spend $96.1 billion on tech presents this year, up 3.4 percent from 2017. In the US, 66 percent of adults will buy some sort of gadget as a gift, and the CTA expects that 22 million of these gifts will be smart speakers. Overall, 12 percent of shoppers plan to buy some kind of voice assistant-enabled smart speaker, and 6 percent plan to buy a speaker that also has a screen — like Amazon’s recently updated Echo Show or Google’s just-released Home Hub.

Before you launch your machine learning model, start with an MVP

Image Credit: everything possible/Shutterstock

I’ve seen a lot of failed machine learning models in the course of my work. I’ve worked with a number of organizations to build both models and the teams and culture to support them. And in my experience, the number one reason models fail is because the team failed to create a minimum viable product (MVP).

In fact, skipping the MVP phase of product development is how one legacy corporation ended up dissolving its entire analytics team. The nascent team followed the lead of its manager and chose to use a NoSQL database, despite the fact no one on the team had NoSQL expertise. The team built a model, then attempted to scale the application. However, because it tried to scale its product using technology that was inappropriate for the use case, it never delivered a product to its customers. The company leadership never saw a return on its investment and concluded that investing in a data initiative was too risky and unpredictable.

If that data team had started with an MVP, not only could it have diagnosed the problem with its model but it could also have switched to the cheaper, more appropriate technology alternative and saved money.

In traditional software development, MVPs are a common part of the “lean” development cycle; they’re a way to explore a market and learn about the challenges related to the product. Machine learning product development, by contrast, is struggling to become a lean discipline because it’s hard to learn quickly and reliably from complex systems.

Yet, for ML teams, building an MVP remains an absolute must. If the weakness in the model originates from bad data quality, all further investments to improve the model will be doomed to failure, no matter the amount of money thrown at the project. Similarly, if the model underperforms because it was not deployed or monitored properly, then any money spent on improving data quality will be wasted. Teams can avoid these pitfalls by first developing an MVP and by learning from failed attempts.

Return on investment in machine learning

Machine learning initiatives require tremendous overhead work, such as the design of new data pipelines, data management frameworks, and data monitoring systems. That overhead work causes an ‘S’-shaped return-on-investment curve, which most tech leaders are not accustomed to. Company leaders who don’t understand that this S-shaped ROI is inherent to machine learning projects could abandon projects prematurely, judging them to be failures.
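An ‘S’-shaped ROI curve is well modeled by a logistic function. A minimal sketch, with illustrative parameters chosen only to show the shape:

```python
import math

def roi(t, scale=1.0, midpoint=5.0, steepness=1.0):
    """Logistic ('S'-shaped) return-on-investment curve: slow early returns
    while foundations are built, rapid payoff in the middle, saturation late."""
    return scale / (1 + math.exp(-steepness * (t - midpoint)))

# Early periods show almost no visible return -- the phase in which
# projects are most often abandoned prematurely:
print(round(roi(1), 3))  # 0.018
print(round(roi(5), 3))  # 0.5
print(round(roi(9), 3))  # 0.982
```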

Unfortunately, prematurely terminating a project happens in the “building the foundations” phase of the ROI curve, and many organizations never allow their teams to progress far enough into the next phases.

Failed models offer good lessons

Identifying the weaknesses of any product sooner rather than later can result in hundreds of thousands of dollars in savings. Spotting potential shortcomings ahead of time is even more important with data products, because the root causes for a subpar recommendation system, for instance, could be anything from technology choices to data quality and/or quantity to model performance to integration, and more. To avoid bleeding resources, early diagnosis is key.

For instance, by foregoing the MVP stage of machine learning development, one company deploying a new search algorithm missed the opportunity to identify the poor quality of its data. In the process, it lost customers to the competition and had to not only fix its data collection process but eventually redo every subsequent step, including model development. This resulted in investments in the wrong technologies and six months’ worth of man hours for a team of 10 engineers and data scientists. It also led to the resignation of several key members on that team. Each departed employee cost $70,000 per person to replace.

In another example, a company leaned too heavily on A/B testing to determine the viability of its model. A/B tests are an incredible instrument for probing the market; they are a particularly relevant tool for machine learning products, as those products are often built using theoretical metrics that do not always closely relate to real-life success. However, many companies use A/B tests to identify the weaknesses in their machine learning algorithms. By using A/B tests as a quality assurance (QA) checkpoint, companies miss the opportunity to stop poorly developed models and systems in their tracks before sending a prototype to production. The typical ML prototype takes 12 to 15 engineer-weeks to turn into a real product. Based on that projection, failing to first create an MVP will typically result in a loss of over $50,000 if the final product isn’t successful.
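The back-of-the-envelope arithmetic behind that figure, assuming a fully loaded cost of roughly $4,000 per engineer-week (my assumption; the article states only the 12-to-15-week effort and the $50,000 total):

```python
def prototype_cost(engineer_weeks, weekly_cost):
    """Cost of turning an ML prototype into a production system."""
    return engineer_weeks * weekly_cost

# 12-15 engineer-weeks at an assumed ~$4,000/week lands in the
# $48K-$60K range, consistent with the article's "over $50,000":
print(prototype_cost(13, 4_000))  # 52000
```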

The investment you’re protecting

Personnel costs are just one consideration. Let’s step back and discuss the wider investment in AI that you need to protect by first building an MVP.

Data collection. Data acquisition costs will vary based on the type of product you’re building and how frequently you’re gathering and updating data. If you are developing an application for an IoT device, you will have to identify which data to keep on the edge vs. which data to store remotely on the cloud where your team can do a lot of R&D work on it. If you are in the e-commerce business, gathering data will mean adding new front-end instrumentation to your website, which will unquestionably slow down the response time and degrade the overall user experience, potentially costing you customers.

Data pipeline building. The creation of pipelines to transfer data is fortunately a one-time initiative, but it is also a costly and time-consuming one.

Data storage. The consensus for a while now has been that data storage is being progressively commoditized. However, there are more and more indications that Moore’s Law just isn’t enough anymore to make up for the growth rate of the volumes of data we collect. If those trends prove true, storage will become increasingly expensive and will require that we stick to the bare minimum: only the data that is truly informational and actionable.

Data cleaning. With volumes always on the rise, the amount of data that is available to data scientists is becoming both an opportunity and a liability. Separating the wheat from the chaff is often difficult and time-consuming. And since these decisions typically need to be made by the data scientist in charge of developing the model, the process is all the more costly.

Data annotation. Using larger amounts of data requires more labels, and using crowds of human annotators isn’t enough anymore. Semi-automated labeling and active learning are becoming increasingly attractive to many companies, especially those with very large volumes of data. However, the licenses for those platforms can represent a substantial addition to the total price of your ML initiative, especially when your data shows strong seasonal patterns and needs to be relabeled regularly.

Compute power. Just like data storage, compute power is becoming commoditized, and many companies opt for cloud-based solutions such as AWS or GCP. However, with large volumes of data and complex models, the bill can become a considerable part of the entire budget and can sometimes even require a hefty investment in a server solution.

Modeling cost. The model development phase accounts for the most unpredictable cost in your final bill because the amount of time required to build a model depends on many different factors: the skill of your ML team, problem complexity, required accuracy, data quality, time constraints, and even luck. Hyperparameter tuning for deep learning is making things even more hectic, as this phase of development benefits little from experience, and usually only a trial-and-error approach prevails. Typical models will take about six weeks of development for a mid-level data scientist, so that’s about $15K in salary alone.

Deployment cost. Depending on the organization, this phase can either be fast or slow. If the company is mature from an ML-perspective and already has a standardized path to production, deploying a model will likely take about two weeks of time by an ML engineer, so about $5K. However, more often than not, you’ll require custom work, and that can make the deployment phase the most time-consuming and expensive part of creating a live ML MVP.

The cost of diagnosis

Recent years have seen an explosion in the number of ML projects powered by deep learning architectures. But along with the fantastic promise of deep learning comes the most frightening challenge in machine learning: lack of explainability. Deep learning models can have tens, if not hundreds of thousands, of parameters, and this makes it impossible for data scientists to use intuition when trying to diagnose problems with the system. This is likely one of the chief reasons ineffective models are taken offline rather than fixed and improved. If, after weeks waiting for the ML team to diagnose a mistake, they still can’t find the problem, it’s easiest to move on and start over.

And because most data scientists are trained as researchers rather than engineers, their core expertise as well as their interest rarely lies in improving systems but rather in exploring new ideas. Pushing your data science experts to spend most of their time “fixing” things (which could cost you 70 percent of your R&D budget) could considerably increase the churn among them. Ultimately, debugging, or even incremental improvement of an ML MVP can prove much more costly than a similarly-sized “traditional” software engineering MVP.


How to succeed with an MVP

But there is hope. It is just a matter of time until the lean methodology that has seen huge success within the software development community proves itself useful for machine learning projects as well. For this to happen, though, we’ll have to see a shift in mindset among data scientists, a group known to value perfectionism over short time-to-market. Business leaders will also need to understand the subtle differences between an engineering and a machine learning MVP:

Data scientists need to evaluate the data and the model separately. An application that fails to deliver the desired results might be let down by one or the other, or both, and diagnosis will never converge unless data scientists keep this in mind. And because data scientists now have the option of improving the data collection process, they can rescue models that would otherwise have been written off as hopeless.

Be patient with ROI. Because the ROI curve of ML is S-shaped, even MVPs require more work than you would typically anticipate. As we have seen, ML products require many complex steps to reach completion, and this needs to be communicated clearly and repeatedly to stakeholders to limit the risk of frustration and premature abandonment of a project.

Diagnosing is costly but critical. Debugging ML systems is almost always extremely time-consuming, in particular because of the lack of explainability in many modern models (DL). Building from scratch is cheaper but is a worse financial bet because humans have a natural tendency to repeat the same mistakes anyway. Obtaining the right diagnostic will ensure your ML team knows with precision what requires attention (whether it be the data, the model, or the deployment), allowing you to prevent the costs of the project from exploding. Diagnosing problems also gives your team the opportunity to learn valuable lessons from their mistakes, potentially shortening future project cycles. Failed models can be a mine of information; redesigning from scratch is thus often a lost opportunity.

Make sure no single person has the keys to your project. Unfortunately, extremely short tenures are the norm among machine learning employees. When key team members leave a project, its problems are even harder to diagnose, so company leaders must ensure that “tribal” knowledge is not owned by any one single person on the team. Otherwise, even the most promising MVPs will have to be abandoned. Make sure that once your MVP is ready for the market, you start gathering data as fast as possible and that learnings from the project are shared with your entire team.

No shortcuts

No matter how long you have worked in the field, ML models are daunting, especially when the data is high-dimensional and high-volume. For the highest chances of success, you need to test your model early with an MVP and invest the necessary time and money in diagnosing and fixing its weaknesses. There are no shortcuts.

Jennifer Prendki is VP of Machine Learning at Figure Eight.

What Happens to the Brain in Zero Gravity?

NASA has made a commitment to send humans to Mars by the 2030s. This is an ambitious goal when you consider that a typical round trip will take anywhere between three and six months and that crews will be expected to stay on the red planet for up to two years before planetary alignment allows for the return journey home. It means that astronauts will have to live in reduced (micro) gravity for about three years, well beyond the current record of 438 continuous days in space held by the Russian cosmonaut Valery Polyakov.

In the early days of space travel, scientists worked hard to figure out how to overcome the force of gravity so that a rocket could catapult free of Earth’s pull in order to land humans on the Moon. Today, gravity remains at the top of the science agenda, but this time we’re more interested in how reduced gravity affects the astronauts’ health, especially their brains. After all, we’ve evolved to exist within Earth’s gravity (1 g), not in the weightlessness of space (0 g) or the microgravity of Mars (0.3 g).

So exactly how does the human brain cope with microgravity? Poorly, in a nutshell—although information about this is limited. That is surprising, since we’re familiar with astronauts’ faces becoming red and bloated during weightlessness—a phenomenon affectionately known as the “Charlie Brown effect”, or “puffy head bird legs syndrome”. It occurs because fluid—consisting mostly of blood (cells and plasma) and cerebrospinal fluid—shifts towards the head, giving astronauts round, puffy faces and thinner legs.

These fluid shifts are also associated with space motion sickness, headaches, and nausea. They have also, more recently, been linked to blurred vision due to a build-up of pressure as blood flow increases and the brain floats upward inside the skull—a condition called visual impairment and intracranial pressure syndrome. Even though NASA considers this syndrome the top health risk for any mission to Mars, figuring out what causes it and—an even tougher question—how to prevent it remains a mystery.

So where does my research fit into this? Well, I think that certain parts of the brain end up receiving way too much blood because nitric oxide—an invisible molecule which is usually floating around in the blood stream—builds up in the bloodstream. This makes the arteries supplying the brain with blood relax, so that they open up too much. As a result of this relentless surge in blood flow, the blood-brain barrier (the brain’s “shock absorber”) may become overwhelmed. This allows water to slowly build up (a condition called oedema), causing brain swelling and an increase in pressure that can also be made worse due to limits in its drainage capacity.

Think of it like a river overflowing its banks. The end result is that not enough oxygen gets to parts of the brain fast enough. This a big problem which could explain why blurred vision occurs, as well as effects on other skills including astronauts’ cognitive agility (how they think, concentrate, reason and move).

A Trip in the ‘Vomit Comet’

To work out whether my idea was right, we needed to test it. But rather than ask NASA for a trip to the moon, we escaped the bonds of Earth’s gravity by simulating weightlessness in a special aeroplane nicknamed the “vomit comet.”

By climbing and then dipping through the air, this plane performs up to 30 “parabolas” in a single flight to simulate the feeling of weightlessness. Each lasts only 30 seconds, and I must admit it’s very addictive—and you really do get a puffy face!

With all of the equipment securely fastened down, we took measurements from eight volunteers who took a single flight every day for four days. We measured blood flow in different arteries that supply the brain using a portable doppler ultrasound, which works by bouncing high-frequency sound waves off circulating red blood cells. We also measured nitric oxide levels in blood samples taken from the forearm vein, as well as other invisible molecules that included free radicals and brain-specific proteins (which reflect structural damage to the brain) that could tell us if the blood-brain barrier has been forced open.

Our initial findings confirmed what we anticipated. Nitric oxide levels increased following repeated bouts of weightlessness, and this coincided with increased blood flow, particularly through arteries that supply the back of the brain. This forced the blood-brain barrier open, although there was no evidence of structural brain damage.

We’re now planning on following these studies up with more detailed assessments of blood and fluid shifts in the brain using imaging techniques such as magnetic resonance to confirm our findings. We’re also going to explore the effects that countermeasures such as rubber suction trousers—which create a negative pressure in the lower half of the body with the idea that they can help “suck” blood away from the astronaut’s brain—as well as drugs to counteract the increase in nitric oxide.

But these findings won’t just improve space travel—they can also provide valuable information as to why the “gravity” of exercise is good medicine for the brain and how it can protect against dementia and stroke in later life.

Damian Bailey, Professor of Physiology and Biochemistry, University of South Wales

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Synopsis: Making Mixtures of Magnetic Condensates

A condensate mixing two species of strongly magnetic atoms provides a new experimental window into many-body phenomena.


In recent years, researchers have been able to prepare condensates of ultracold atoms with large magnetic moments. In these condensates, the interactions between the atoms’ magnetic dipoles give rise to exotic phases, some of which are analogous to those found in liquid crystals and superfluids. The condensates demonstrated to date have involved single species of magnetic atoms like dysprosium (Dy) or erbium (Er) (see 21 May 2012 Viewpoint). Now, Francesca Ferlaino and colleagues at the Institute for Quantum Optics and Quantum Information, Austria, have produced condensates that mix Dy and Er. The ability to couple two distinct dipolar atomic species will provide an opportunity to explore new quantum behaviors of ultracold gases.

Starting with an atomic beam of Dy and Er, the team used a combination of lasers and magnetic fields to trap the atoms and to cool them by evaporation. Getting both atomic species to condense at the same time, however, entailed a new trick. The researchers used traps whose shape and depth could be tuned so that the more weakly trapped Er would evaporate more easily than Dy, in turn serving as a coolant for the Dy.

Working with different isotopes of Dy and Er—some fermionic, some bosonic—they produced a variety of Bose-Bose or Bose-Fermi quantum mixtures. The authors observed signatures of strong interspecies interaction: For some isotope combinations, the forces between Er and Dy turned out to be repulsive, displacing the two condensates upwards and downwards, respectively, relative to the trap in which they were created. The magnetic mixture may allow researchers to study hard-to-probe quantum phases, such as fermionic superfluids with direction-dependent properties.

This research is published in Physical Review Letters.

–Nicolas Doiron-Leyraud

Nicolas Doiron-Leyraud is a Corresponding Editor at Physics and a researcher at the University of Sherbrooke.