Chinese researchers hail Google’s quantum computing breakthrough, call for more funds to catch up to US

  • Chinese researchers working on 50-qubit quantum computing technology are expected to achieve ‘quantum supremacy’ by the end of next year
  • While Google and Chinese scientists celebrated the breakthrough, American rivals including IBM and Intel cast doubt over the claims


A Sycamore chip mounted on the printed circuit board during the packaging process. Photo: AFP

Chinese scientists have applauded Google’s claim of a breakthrough in quantum computing despite doubts from its American rivals, calling for continuous investment so they do not fall further behind the US in a field that promises to render supercomputers obsolete.

Sycamore, Google’s 53-qubit quantum computer, performed a calculation in 200 seconds that would take the world’s fastest supercomputer, IBM’s Summit, 10,000 years to perform, according to a Google blog post accompanying a paper published in the journal Nature last Wednesday. With Sycamore, Google claims to have reached quantum supremacy, the point where a quantum computer can perform calculations that surpass anything the most advanced supercomputers today can do.

Guoping Guo, a professor at the University of Science and Technology of China and founder and chief scientist of Chinese start-up Origin Quantum, said the achievement was of “epoch-making significance”.

“Quantum supremacy is the turning point that has proven the superiority of quantum computers over classical computers,” said Guo. “If we fall behind in the next stage of general-purpose quantum computing, it would mean the difference between cold weapons and firearms.”

Google CEO Sundar Pichai with one of the company’s quantum computers. Photo: AFP

Quantum computers, which take a new approach to processing information, are theoretically capable of making calculations that are orders of magnitude faster than what the world’s most powerful supercomputers can do.

“With this breakthrough we’re now one step closer to applying quantum computing to – for example – design more efficient batteries, create fertiliser using less energy, and figure out what molecules might make effective medicines,” Google chief executive Sundar Pichai wrote in a separate post on Wednesday.

While Google celebrated its breakthrough, rivals including IBM and Intel cast doubt over the claims. IBM said Google did not tap the full power of its Summit supercomputer, which could have processed Google’s calculation in 2.5 days or faster with ideal simulation.

In a statement, Intel said quantum practicality is much further down the road.

Regardless of the different spin each company put on the achievement, Guo said the huge gap between 200 seconds and 2.5 days was sufficient for Google to claim quantum supremacy.

Other Chinese researchers in the field pointed to the significance of the new technologies Google used in the experiment, such as the adjustable coupler used to connect qubits, rather than to the claim of quantum supremacy itself.

“At this stage, the problems that quantum supremacy can solve have no practical value, but [Google] has demonstrated its ability to perform a computation on such a scale of 53 qubits,” said Huang Heliang, a researcher in superconducting quantum computing at the University of Science and Technology of China. “It is foreseeable that it could lead to breakthroughs and applications in fields such as machine learning in the near future.”

A qubit, or quantum bit, is the basic unit of quantum information, similar to the binary bit in classical computing.
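To make the comparison concrete, here is a minimal illustrative sketch of my own (not from the article): a classical bit stores 0 or 1, while a qubit’s state is described by two amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1.

```python
import math

# Illustrative sketch (not from the article): a qubit's state is a pair of
# amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1. Squaring an
# amplitude's magnitude gives the probability of measuring that value.
def measurement_probabilities(alpha, beta):
    p0, p1 = abs(alpha) ** 2, abs(beta) ** 2
    assert math.isclose(p0 + p1, 1.0), "amplitudes must be normalised"
    return p0, p1

# A definite classical-style state: always reads 0.
definite = measurement_probabilities(1, 0)

# An equal superposition: a 50/50 chance of reading 0 or 1 on measurement.
equal = measurement_probabilities(1 / math.sqrt(2), 1 / math.sqrt(2))
```

The sketch captures only the single-qubit measurement rule; the power of machines like Sycamore comes from entangling many such qubits, whose joint state space grows exponentially with their number.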

Google is among a group of US technology companies as well as Chinese universities and companies racing to develop quantum computers amid intensifying technology competition between the world’s two biggest economies.


China filed almost twice as many patents as the US in 2017 for quantum technology, a category that includes communications and cryptology devices, according to market research firm Patinformatics. The US, however, leads the world in patents relating to the most prized segment of the field – quantum computers – thanks to heavy investment by IBM, Google, Microsoft and others.

Huang, who acknowledged China was still trying to catch up to the US because of a late start, said that falling behind in the current stage may not have a significant impact since quantum technology is still in its early days of development.

“But we have to be aware that the gap could easily widen if we don’t step up support and investment,” he said.

Chinese researchers working on 50-qubit quantum computing technology are expected to achieve quantum supremacy by the end of next year, he added.

Pan Jianwei (right) and Lu Chaoyang, two leading scientists in China’s quantum computing industry. Photo: Handout

The US was pouring US$200 million a year into quantum research, according to a 2016 government report.

Guo believes China is three to five years behind the US and Europe in breakthroughs, talent acquisition and other areas. He said the gap could be widening because most scientists in the field are still in the phase of publishing papers on basic research, while the all-important applied research phase was still burning through funding with no commercialisation on the horizon.

China has been stepping up efforts for its quantum ambitions in recent years but does not reveal total investment funding. Under the country’s 13th five-year plan introduced in 2016, Beijing launched a “megaproject” for quantum communications and computing which aimed to achieve breakthroughs by 2030.

In 2017 China started building the world’s largest quantum research facility in Hefei, central China’s Anhui province, with the goal of developing a quantum computer. The National Laboratory for Quantum Information Sciences is a US$10 billion project due to open in 2020.

Chinese tech giants, including Baidu, Alibaba, Tencent – collectively known as BAT – and telecoms giant Huawei Technologies, have also recruited some of the country’s top scientists and set up labs for the development of quantum technologies.


Discover: Meet the Sudbury scientist who feeds minerals to microbes

Oct 22, 2019

Science is self-correcting. Every mistake that is made and corrected deepens our understanding of the world around us, Dr. Thomas Merritt of Laurentian University tells us.



I’m a geneticist. I study the connection between information and biology — essentially what makes a fly a fly, and a human a human. Interestingly, we’re not that different. It’s a fantastic job and I know, more or less, how lucky I am to have it.

I’ve been a professional geneticist since the early 1990s. I’m reasonably good at this, and my research group has done some really good work over the years. But one of the challenges of the job is coming to grips with the idea that much of what we think we “know” is, in fact, wrong.

Sometimes, we’re just off a little, and the whole point of a set of experiments is simply trying to do a little better, to get a little closer to the answer. At some point, though, in some aspect of what we do, it’s likely that we’re just flat out wrong. And that’s okay. The trick is being open-minded enough, hopefully, to see that someday, and then to make the change.

One of the amazing things about being a modern geneticist is that, generally speaking, people have some idea of what I do: work on DNA (deoxyribonucleic acid). When I ask a group of school kids what a gene is, the most common answer is “DNA.” And this is true, with some interesting exceptions. Genes are DNA and DNA is the information in biology.

For almost 100 years, biologists were certain that the information in biology was found in proteins and not DNA, and there were geneticists who went to the grave certain of this. How they got it wrong is an interesting story.

Genetics, microscopy (actually creating the first microscopes), and biochemistry were all developing together in the late 1800s. Not surprisingly, one of the earliest questions that fascinated biologists was how information was carried from generation to generation. Offspring look like their parents, but why? Why your second daughter looks like the postman is a question that came up later.

Early cell biologists were using the new microscopes to peer into the cell in ways that simply hadn’t been possible previously. They were finding thread-like structures in the interior of cells that passed from generation to generation, were similar within a species, but different between them. We now know these threads as chromosomes. Could these hold the information that scientists were looking for?

Advances in biochemistry paralleled those in microscopy and early geneticists determined that chromosomes were primarily made up of two types of molecules: proteins and DNA. Both are long polymers (chains) made up of repeated monomers (links in the chains). It seemed very reasonable that these chains could contain the information of biological complexity.

By analogy, think of a word as just a string of letters, a sentence as a chain of words, and a paragraph as a chain of sentences. We can think of chromosomes, then, as chapters, and all of our genetic information — what we now call our genome (all our genetic material) — as these chapters that make up a novel. The question to those early geneticists, then, was: Which string made up the novel? Was it protein or DNA?

You and I know the answer: DNA. Early geneticists, however, got it wrong and then passionately defended this wrong stance for eight decades. Why? The answer is simple. Protein is complicated. DNA is simple. Life is complicated. The alphabet of life, then, should be complicated — and protein fits that.

Proteins are made up of 20 amino acids — there are 20 different kinds of links in the protein chain. DNA is made up of only four nucleotides — there are only four different links in the DNA chain. Given the choice between a complicated alphabet and a simple one, the reasonable choice was the complicated one, namely protein. But, biology doesn’t always follow the obvious path and the genetic material was, and is, DNA.

It took decades of experiments to disprove conventional wisdom and convince most people that biological information was in DNA. For some, it took James Watson and Francis Crick, using data misappropriated from Rosalind Franklin, deciphering the structure of DNA in 1953 to drive the nail into the protein coffin. It had just seemed too obvious that protein, with all its complexity, would be the molecule that coded for complexity.

These were some of the most accomplished and thoughtful scientists of their day, but they got it wrong. And that’s okay — if we learn from their mistakes.

It is too easy to dismiss this example as the foolishness of the past. We wouldn’t make this kind of mistake today, would we? I can’t answer that, but let me give you another example that suggests we would, and I’ll argue at the end that we almost certainly are.

I’m an American, and one of the challenges of moving to Canada was having to adapt to overcooked burgers (my mother still can’t accept that she can’t get her burger “medium” when she visits). This culinary challenge is driven by a phenomenon at the heart of one of the more interesting recent cases of scientists having it wrong and refusing to see it.

In the late 1980s, cows started wasting away and, in the late stages of what was slowly recognized as a disease, acting in such a bizarre manner that their disease, bovine spongiform encephalopathy, became known as Mad Cow Disease. Strikingly, the brains of the cows were full of holes (hence “spongiform”) and the holes were caked with plaques of proteins clumped together.

Really strikingly, the proteins were ones that are found in healthy brains, but now in an unnatural shape. Proteins are long chains, but they function because they have complex 3D shapes — think origami. Proteins fold and fold into specific shapes. But, these proteins found in sick cow brains had a shape not normally seen in nature; they were misfolded.

Soon after, people started dying from the same symptoms and a connection was made between eating infected cows and contracting the disease (cows could also contract the disease, but likely through saliva or direct contact, and not cannibalism). Researchers also determined the culprit was consumption only of neural tissue, brain and spinal tissue, the very tissue that showed the physical effects of infection (and this is important).

One of the challenges of explaining the disease was the time-course from infection to disease to death; it was long and slow. Diseases, we knew, were transmitted by viruses and bacteria, but no scientist could isolate one that would explain this disease. Further, no one knew of other viruses or bacteria whose infection would take this long to lead to death. For various reasons, people leaned toward assuming a viral cause, and careers and reputations were built on finding the slow virus.

In the late 1980s, a pair of British researchers suggested that perhaps the shape, the folding, of the proteins in the plaques was key. Could the misfolding be causing the clumping that led to the plaques? This proposal was soon championed by Stanley Prusiner, a young scientist early in his career.

The idea was simple. The misfolded protein was itself both the result and the cause of the infection. Misfolded protein clumped, forming plaques that killed the brain tissue; the misfolded proteins also caused correctly folded versions of the proteins to misfold. The concept was straightforward, but completely heretical. Disease, we knew, did not work that way. Diseases are transmitted by viruses or bacteria, but the information is transmitted as DNA (and, rarely, RNA, a closely related molecule). Disease is not transmitted in protein folding (although in 1963 Kurt Vonnegut had predicted such a model for world-destroying ice formation in his amazing book Cat’s Cradle).

For holding this protein-based view of infection, Prusiner was literally and metaphorically shouted out of the room. Then he showed, experimentally and elegantly, that misfolded proteins, which he called “prions,” were the cause of these diseases, of both symptoms and infection.

For this accomplishment, he was awarded the 1997 Nobel Prize in Medicine. He, and others, were right. Science, with a big S, was wrong. And that’s okay. We now know that prions are responsible for a series of diseases in humans and other animals, including Chronic Wasting Disease, the spread of which poses a serious threat to deer and elk here in Ontario.

Circling back, the overcooked burger phenomenon is because of these proteins. If you heat the prions sufficiently, they lose their unnatural shape — all shape actually — and the beef is safe to eat. A well-done burger will guarantee no infectious prions, while a medium one will not. We don’t have this issue in the U.S. because cows south of the border are less likely to have been infected with the prions than their northern counterparts (or at least Americans are willing to pretend this is the case).

Where does this leave us? To me, the take-home message is that we need to remain skeptical, but curious. Examine the world around you with curious eyes, and be ready to challenge and question your assumptions.

Also, don’t ignore the massive things in front of your eyes simply because they don’t fit your understanding of, or wishes for, the world around you. Climate change, for example, is real and will likely make this a more difficult world for our children. I’ve spent a lot of time in my career putting together models of how the biological world works, but I know pieces of these models are wrong.

I can almost guarantee you that I have something as fundamentally wrong as those early geneticists stuck on protein as the genetic material of cells or the prion-deniers; I just don’t know what it is. Yet.

And, this situation is okay. The important thing isn’t to be right. Instead, it is to be open to seeing when you are wrong.

Dr. Thomas Merritt is the Canada Research Chair in Genomics and Bioinformatics at Laurentian University.

The Origin of Consciousness in the Brain Is About to Be Tested


Here’s something you don’t hear every day: two theories of consciousness are about to face off in the scientific fight of the century.

Backed by top neuroscience theorists of today, including Christof Koch, head of the formidable Allen Institute for Brain Science in Seattle, Washington, the fight hopes to put two rival ideas of consciousness to the test in a $20 million project. Briefly, volunteers will have their brain activity scanned while performing a series of cleverly-designed tasks targeted to suss out the brain’s physical origin of conscious thought. The first phase was launched this week at the Society for Neuroscience annual conference in Chicago, a brainy extravaganza that draws over 20,000 neuroscientists each year.

Both sides agree to make the fight as fair as possible: they’ll collaborate on the task design, pre-register their predictions on public ledgers, and if the data supports only one idea, the other acknowledges defeat.

The “outlandish” project is already raising eyebrows. While some applaud the project’s head-to-head approach, which rarely occurs in science, others question if it’s all a publicity stunt. “I don’t think [the competition] will do what it says on the tin,” said Dr. Anil Seth, a neuroscientist at the University of Sussex in Brighton, UK, explaining that the whole trial is too “philosophical.” Rather than unearthing how the brain brings outside stimuli into attention, he said, the fight focuses more on where and why consciousness emerges, with new theories multiplying every year.

Then there’s the religion angle. The project is sponsored by the Templeton World Charity Foundation (TWCF), a philanthropic foundation that tiptoes the line between science and faith. Although spirituality isn’t taboo to consciousness theorists—many embrace it—TWCF is a rather unorthodox player in the neuroscientific field.

Despite immediate controversy, the two sides aren’t deterred. “Theories are very flexible. Like vampires, they’re very difficult to slay,” said Koch. If the project can even somewhat narrow down the divergent theories of consciousness, we will be on our way to cracking one of the most enigmatic properties of the human brain.

With the rise of increasingly human-like machines, and efforts to promote communication with locked-in patients, the need to understand consciousness is especially salient. Can AI ever be conscious, and should we give it rights? What about people’s awareness during and after anesthesia? How do we reliably measure consciousness in fetuses inside their mothers’ wombs—a tricky question leveraged in abortion debates—or in animals?

Even if the project doesn’t produce a definitive solution to consciousness, it’ll drive scientists loyal to different theoretical aisles to talk and collaborate—and that in itself is already a laudable achievement.

“What we hope for is a process that reduces the number of incorrect theories,” said TWCF president Andrew Serazin. “We want to reward people who are courageous in their work, and part of having courage is having the humility to change your mind.”

Meet the Contestants

How physical systems give rise to subjective experience is dubbed the “hard problem” of consciousness. Although neuroscientists can measure the crackling of electrical activity among neurons and their networks, no one understands how consciousness emerges from individual spikes. The sense of awareness and self simply can’t be reduced to neuronal pulses, at least with our current state of understanding. What’s more, what exactly is consciousness? A broad stroke describes it as a capacity to experience something, including one’s own existence, rather than documenting it like an automaton—a vague enough picture that leaves plenty of room for theories as to how consciousness actually works.

In all, the project hopes to tackle nearly a dozen top theories of consciousness. But the first two in the boxing ring are also the most prominent: one is the Global Workspace Theory (GWT), championed by Dr. Stanislas Dehaene of the Collège de France in Paris. The other is the Integrated Information Theory (IIT), proposed by Dr. Giulio Tononi of the University of Wisconsin in Madison and backed by Koch.

The GWT describes an almost algorithmic view. Conscious behavior arises when we can integrate and segregate information from multiple input sources—for example, eyes, ears, or internal ruminations—and combine it into a piece of data in a global workspace within the brain. This mental sketchpad forms a bottleneck in conscious processing, in that only items in our attention are available to the entire brain for use—and thus for a conscious experience of it. For another to enter awareness, previous data have to leave.

In this way, the workspace itself “creates” consciousness, and acts as a sort of motivational whip to drive actions. Here’s the crux: according to Dehaene, brain imaging studies in humans suggest that the main “node” exists at the front of the brain, or the prefrontal cortex, which acts like a central processing unit in a computer. It’s algorithmic, input-output based, and—like all computers—potentially hackable.

IIT, in contrast, takes a more globalist view. Consciousness arises from the measurable, intrinsic interconnectedness of brain networks. Under the right architecture and connective features, consciousness emerges. Unlike the GWT, which begins with understanding what the brain does to create consciousness, IIT begins with the awareness of experience—even if it’s just an experience of self rather than something external. When neurons connect in the “right” way under the “right” circumstances, the theory posits, consciousness naturally emerges to create the sensation of experience.

In contrast to GWT, IIT believes this emergent process happens at the back of the brain—here, neurons connect in a grid-like structure that hypothetically should be able to support this capacity. To IIT subscribers, GWT describes a feed-forward scenario that’s similar to digital computers and zombies—entities that act conscious but don’t truly possess the experience. According to Koch, consciousness is rather “a system’s ability to be acted upon by its own state in the past and to influence its own future. The more a system has cause-and-effect power, the more conscious it is.”

The Showdown

To test the ideas, six labs across the world will run experiments with over 500 people, using three different types of brain recordings as the participants perform various consciousness-related tests. By adopting functional MRI to spot brain metabolic activity, EEG for brain waves and ECoG (a type of EEG with electrodes placed directly on the brain), the trial hopes to gather enough replicable data to satisfy even the most skeptical members of the opposing camps.

For example, one experiment will track the brain’s response as a participant becomes aware of an image: the GWT believes the prefrontal cortex will activate, whereas the IIT says to keep your eyes on the back of the brain.

According to Quanta Magazine, the showdown will get a top journal to commit to publishing the outcomes of the experiments, regardless of the result. In addition, the two main camps are required to publicly register specific predictions, based on their theories, of the results. Neither party will collect or interpret the data, to avoid potential conflicts of interest. And ultimately, if the results come back conclusively in favor of one idea, the other will acknowledge defeat.

What the trial doesn’t answer, of course, is how neural computations lead to consciousness. A recent theory, based on thermodynamics in physics, suggests that neural networks in a healthy brain naturally organize together according to energy costs into a sufficient number of connection “microstates” that lead to consciousness. Too many or too few microstates and the brain loses its adaptability, processing powers, and sometimes the ability to keep itself online.

Despite misgivings, TWCF’s Potgieter sees the project as an open, collaborative step forward in a messy domain. It’s “the first time ever that such an audacious, adversarial collaboration has been undertaken and formalized within the field of neuroscience,” he said.

Tononi, the backer of IIT, agrees. “It forces the proponents to focus and enter some common framework. I think we all stand to gain one way or another,” he said.

Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains.

Gold-DNA nanosunflowers for efficient gene silencing and controlled transformation

Scheme of self-assembled gold-DNA nanosunflowers for enhanced cellular uptake amount, tunable gene silencing efficacy, and controlled tumor inhibition effect by NIR irradiation. (A) (a) Assembly and disassembly of the large-sized nanostructure (200-nm gold-DNA nanosunflowers) from/to ultrasmall nanoparticles (2-nm Au-POY2T NPs). (b) Representative TEM image of the nanosunflowers. (c) Masterpiece: Sunflowers (Vincent van Gogh, 1889). (B) Left: In vivo tumor retention and penetration of transformable nanosunflowers. Right: Enhanced cellular uptake and controlled oncogene silencing process of the nanosunflowers in vitro. ① Large-sized nanosunflowers were taken up by an MCF-7 cell. ② The nanosunflowers standby in the cell cytoplasm. ③ Upon NIR irradiation, large-sized gold-DNA nanostructures dissociate and release small units (2-nm Au-POY2T NPs) to attack the cell nucleus. ④ The silencing sequence POY2T will bind to the P2 promoter of the c-myc oncogene and down-regulate the c-myc expression of MCF-7 cells, which can be controlled (ON/OFF) and regulated (Low/Medium/High) by the NIR irradiation. Credit: Science Advances, doi: 10.1126/sciadv.aaw6264

Developing an efficient delivery system for enhanced and controlled gene interference-based therapeutics is an ongoing challenge in molecular biology. The advancing field of nanotechnology can provide an effective, cross-disciplinary strategy to facilitate nucleic acid delivery. In a new report, Shuaidong Huo and colleagues in the interdisciplinary departments of Nanoscience, Interactive Materials, Chemistry and Polymer Research in China, Germany and the U.S. used a triplex-forming oligonucleotide sequence coupled to its complementary strand to mediate the self-assembly of ultra-small gold nanoparticles.

The resulting sunflower-like nanostructures showed strong near infrared (NIR) absorption and ability for photothermal conversion. When the scientists irradiated the structures with NIR, the larger nanostructures disassembled to generate ultra-small nanoparticles modified with the c-Myc oncogene sequence to directly target the cancer cell nucleus. Huo et al. controlled gene silencing by synergistically controlling the time of preincubating cells with nanoparticles alongside nanostructure self-assembly (in vitro and in vivo) and the time-frame of NIR irradiation. The study provided a new paradigm to construct efficient and tailored nanocarriers for applications of gene interference and therapeutic gene delivery.

Gene therapy has great potential to treat a variety of diseases and complications, including infertility, HIV and cancer. Successful gene therapy to alleviate disease symptoms depends on an efficient gene delivery vehicle, or vector. During the process, the gene carrier must cross many biological barriers and cell membranes while escaping endosomal entrapment and nuclease-based degradation. Compared with virus-based delivery strategies, non-viral gene delivery approaches face many challenges in loading and releasing DNA/RNA, targeted delivery and intracellular uptake, as well as incompatibility with immune responses in vivo.

Vigorous efforts in nanotechnology are underway to engineer stable and efficient vehicles for gene transfer to cancer cells. Due to their unique physicochemical properties, a number of nanomaterials have emerged for gene delivery. Among them, gold nanoparticles (Au NPs) with specific size and surface properties can overcome obstacles in vivo, making them one of the most studied gene carrier systems. However, these strategies have encountered a variety of shortcomings, and it therefore remains important to establish efficient delivery systems for enhanced and controlled gene therapies.

Self-assembly and testing sunflower-like nanostructures

In the present work, Huo et al. were inspired by nature’s ability to hybridize DNA, engineering DNA-mediated, self-assembled gold-DNA nanostructures (approximately 200 nm). The sunflower-like design showed strong NIR absorption and photothermal conversion properties. Upon NIR irradiation, the structures disassembled to liberate ultra-small gold nanoparticles (2 nm Au NPs) with potential for oncogene silencing, improved cell and nuclear permeability and enhanced transfection efficiency. The scientists synergistically controlled the cell-nanomaterial interactions through the time of pre-incubation in the lab, the time of circulation in vivo and the timing of irradiation. The experiments demonstrated increased cellular uptake, tunable gene silencing efficacy and controlled tumor inhibition. The transformable nanosunflowers provided an excellent model for designing nanovehicles for drug delivery, with great potential in biomedicine.

Morphology characterization of the self-assembled nanostructures (nanosunflowers). (A) TEM (200 kV) images of the nanosunflowers with enlarged structural details. (B) Bio-TEM (80 kV) images with enlarged polymer structural details. (C) High-resolution TEM (200 kV) images showing the distribution of ultrasmall NPs on the self-assembled nanostructure. (D) SEM images with enlarged surface topography of the nanosunflowers. Credit: Science Advances, doi: 10.1126/sciadv.aaw6264

Huo et al. first synthesized the two-nanometer Au NPs coated with tiopronin and modified them with thiol-oligonucleotides (SH-POY2T) using an established method of ligand exchange. The 23-nucleotide (nt) POY2T oligonucleotide bound the P2 promoter of the c-myc oncogene to form a triplex structure and downregulate oncogenic c-myc expression. In parallel, they designed and synthesized another single-stranded sequence known as CA to complementarily hybridize to the tail of the POY2T sequence and block its binding to the c-myc oncogene. On completion, the nanostructure self-assembled into sunflower-like structures. The team investigated the nanostructure (200 nm) using transmission electron microscopy (TEM). Additional imaging revealed further details of the DNA moieties of the “sunflower” structure. When the materials scientists used scanning electron microscopy (SEM) to validate the TEM results, they observed consistency between the methods.

They investigated the UV-Vis absorption spectra of the ultrasmall Au NPs before and after DNA-mediated self-assembly. While the monodispersed, individual two-nanometer Au-POY2T NPs showed little NIR absorption, the self-assembled nanostructures absorbed strongly in the NIR region and generated heat under NIR irradiation. Huo et al. credited the strong NIR absorbance to close interparticle spacing and the nonuniform spatial distribution of individual NPs within the larger nanostructure. They tested the heat response of the self-assembled nanostructures under NIR irradiation and noted that the melting point of the complementary DNA sequences (POY2T and CA) was approximately 41 degrees C, the temperature at which half of the duplex structures between the complementary sequences dissociate. Huo et al. selected 10 minutes as the optimal NIR irradiation time for the study.
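The half-dissociation at the melting temperature can be pictured with a standard two-state (van’t Hoff) model of duplex melting. The sketch below is purely illustrative: the dissociation enthalpy is an assumed round number, not a value from the paper.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def duplex_fraction_dissociated(temp_c, tm_c=41.0, delta_h=300e3):
    """Two-state van't Hoff model of DNA duplex melting.

    tm_c: melting temperature in Celsius (by definition, half of the
          duplexes are dissociated at this temperature).
    delta_h: enthalpy of duplex dissociation in J/mol (illustrative).
    Returns the fraction of duplexes dissociated at temp_c.
    """
    t = temp_c + 273.15
    tm = tm_c + 273.15
    # Above Tm the exponent is negative, driving the fraction toward 1.
    k = math.exp((delta_h / R) * (1.0 / t - 1.0 / tm))
    return 1.0 / (1.0 + k)

print(round(duplex_fraction_dissociated(41.0), 2))  # 0.5 at the melting point
```

In this picture, pushing the local temperature above 41 °C with NIR heating tips most of the POY2T/CA duplexes into the dissociated state, consistent with the irradiation window chosen in the study.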

Photothermal property and disassembly behavior study of the self-assembled nanostructures. (A) Visible absorption spectra of 2-nm core-sized NPs and 200-nm self-assembled nanostructures. a.u., absorbance unit. (B) Temperature response of self-assembled nanostructures, upon NIR irradiation, dispersed in water and cell culture medium. Mean values ± SD, n = 3. (C) Temperature rise of self-assembled nanostructures, upon NIR irradiation, dispersed in water and cell culture medium. (D) Change of maximum absorbance (767 nm) of 2-nm core-sized NPs and 200-nm self-assembled nanostructures upon NIR irradiation. (E and F) TEM observation of disassembly behavior of 200-nm self-assembled nanostructures before (top) and after (bottom) NIR irradiation (808 nm, 10 min). (G) Hydrodynamic diameter of (a) monodispersed 2-nm Au-POY2T NPs and size change of the 200-nm nanosunflowers before (b) and after (c and d) NIR irradiation for different time periods (3 and 10 min). Credit: Science Advances, doi: 10.1126/sciadv.aaw6264

Disassembly behavior of the self-assembled nanostructures and proof-of-concept

The scientists hypothesized that the self-assembled nanostructures would shrink and disassemble into individual ultrasmall Au-POY2T NPs. After 10 minutes of NIR irradiation, the maximum absorption (767 nm) of the nanostructures markedly decreased, indicating disassembly of the sunflower structure. They followed the experiments with TEM observations before and after NIR irradiation and used particle size analyzers to track the disassembly process, observing the transformation of the 200-nm nanostructures into particles roughly six nanometers in size and confirming the suitability of the 10-minute timeline.

Huo et al. applied NIR irradiation to MCF-7 cells treated with the self-assembled gold DNA nanostructures and tested their cellular uptake in vitro as proof-of-concept. They determined the cellular internalization of the Au-POY2T NPs (2 nm) across diverse incubation times and quantified uptake using inductively coupled plasma mass spectrometry (ICP-MS) and previously established methods. They noted increased internalization after six hours of incubation compared to 24-hour incubation timelines. Inhibitors of endocytosis did not affect Au-POY2T NP uptake, suggesting the involvement of an alternative pathway such as membrane fusion.

Understanding gene silencing behavior of the self-assembled nanostructures

Controlled nucleus localization and gene silencing study in vitro of the self-assembled nanostructures. (A) Schematic of the in vitro cell experimental setup for the controlled NP nucleus localization and gene regulation study. (B) Number of 2-nm Au-POY2T NPs localized in the MCF-7 cell nucleus with treatment of ① individual 2-nm Au-POY2T NPs, ② 200-nm nanosunflowers, and 200-nm nanosunflowers with NIR irradiation (10 min) after different preincubation times (③ 1, ④ 3, ⑤ 6, and ⑥ 12 hours). Mean values ± SD, n = 3. Statistical differences were determined by two-tailed Student’s t test; *P < 0.05 and **P < 0.01. (C) Confocal observation of distribution of fluorescein isothiocyanate–labeled nanosunflowers (green) before (top) and after (bottom) NIR irradiation in MCF-7 cells. Nucleus was labeled by 4′,6-diamidino-2-phenylindole (blue). (D) Bio-TEM image of the localization of large-sized nanosunflowers (top, red arrow) in the cytoplasm and distribution of released small NPs (bottom, blue arrow) in cytoplasm and nucleus after NIR irradiation in MCF-7 cells. (E) Cytotoxicity evaluation of MCF-7 cells with treatment of 200-nm nanosunflowers after NIR irradiation (after a period of preincubation time: 1, 3, 6, and 12 hours, respectively) compared to control, 2-nm Au-TIOP NPs, POY2T sequence, CA sequence, 2-nm Au-POY2T NPs, 200-nm nanosunflowers without NIR irradiation, and NIR exposure only. All the concentrations of treatments were at or equal to 1 μM in POY2T sequence and were tested after a total of 24 hours of incubation. Mean values ± SD, n = 3. Statistical differences were compared with the treatment group of ① individual 2-nm Au-POY2T NPs determined by two-tailed Student’s t test; *P < 0.05 and **P < 0.01. (F) C-myc mRNA level determined by real-time PCR after different treatments as described above. Mean values ± SD, n = 3. Statistical differences were determined by two-tailed Student’s t test; **P < 0.01 and ***P < 0.001. 
(G) C-myc protein levels determined by Western blot and (H) corresponding quantitative histogram after different treatments as described above. GAPDH, glyceraldehyde phosphate dehydrogenase. Credit: Science Advances, doi: 10.1126/sciadv.aaw6264

Having established enhanced cellular uptake of the self-assembled nanostructures in vitro, the research team investigated the distribution of nanoparticles within cell nuclei using “standby” and “attack” strategies triggered by NIR. For this, they pre-incubated cells for diverse periods (one, three, six and 12 hours), applied NIR irradiation, then extracted the cell nuclei for ICP-MS analysis. The pre-incubation period largely determined nanoparticle internalization within the cell nucleus, allowing the researchers to regulate the quantity of Au-POY2T NPs in the nucleus via the timing of pre-incubation and NIR irradiation.

Huo et al. also investigated the NIR-controlled therapeutic effects of the nanosunflowers using cell viability tests; they observed markedly increased oncogene silencing (around 80 percent) and greater cancer cell killing. The team tuned the therapeutic impact by varying the pre-incubation time prior to NIR irradiation, and the results supported a superior ability of the transformable nanosunflowers to silence the c-myc oncogene and suppress its oncoprotein.

Controlling tumor growth inhibition using self-assembled nanosunflowers

To test the controllable anti-tumor efficiency of the nanosunflowers in vivo, the scientists first confirmed their good blood biocompatibility. The team then established an MCF-7 tumor model in BALB/c nude mice, allowed tumor volumes to reach 50 mm³, randomly divided the animals into nine groups and treated them with 100 µl of varying POY2T formulations. After each injection, they irradiated the designated groups with a NIR laser for 10 minutes to reach a local temperature above 41 degrees C.

Controlled tumor growth inhibition study of the self-assembled nanostructures. (A) The MCF-7 tumor BALB/c nude mice model was established at day 0. After tumors were ready, the mice were randomly divided into nine groups and treated with 100 μl of various formulations (equivalent to 10 μM in POY2T sequence; group ① with 2-nm Au-POY2T NPs and groups ②, ③, ④, ⑤, and ⑥ with 200-nm nanosunflowers) at days 9, 12, and 15. In groups ③, ④, ⑤, and ⑥, the tumors were irradiated with a NIR laser for 10 min at 1, 3, 6, and 12 hours after each intravenous injection. Saline, NIR only, and POY2T were used as control groups. The (B) body weights and (C) tumor volumes were measured every 3 days. Scale bar, 1 cm. After the mice were sacrificed at day 24, all tumors were (D) isolated and (E) weighted, respectively. Mean values ± SD, n = 4. Statistical differences were determined by two-tailed Student’s t test; *P < 0.05, **P < 0.01, and ***P < 0.001. (Photo credit: Ningqiang Gong, National Center for Nanoscience and Technology, China.) (F) Hematoxylin and eosin staining images of organs including the heart, liver, spleen, lung, kidney, and tumor after different treatments. Scale bar, 200 μm. Credit: Science Advances, doi: 10.1126/sciadv.aaw6264

Of note, mice in the nanosunflower-treated group irradiated at 12 hours showed the most significant anti-tumor effects, indicating efficient delivery of gene-silencing units to the tumor site. After 24 days, Huo et al. sacrificed the animals, then isolated and weighed the tumors to demonstrate nanosunflower-based, NIR-controlled tumor growth inhibition in vivo. Histological studies showed that the treatment significantly reduced tumor growth without affecting the morphology of other organs, verifying the therapeutic efficiency of the nanosunflower/NIR therapy and its lack of side effects.

In this way, Shuaidong Huo and colleagues designed, developed and optimized nanoagents for effective anti-tumor therapy. They engineered self-assembled sunflower-like nanostructures to act as multiparticle carriers loaded with many ultrasmall therapeutic units. Upon NIR irradiation, the nanostructures dissociated to release swarms of small NPs that target the cell nucleus. In tumor-bearing mice, the large sunflowers passively accumulated at the tumor site, and subsequent NIR irradiation released the gene-silencing units to shrink the tumor. The research team aims to improve transfection efficiency and to provide a blueprint for controllable gene silencing at tumor sites using transformable gene-interference carriers for intricate theranostics at the single-cell level.

Explore further

Thermo-triggered release of a genome-editing machinery by modified gold nanoparticles for tumor therapy

More information: Shuaidong Huo et al. Gold-DNA nanosunflowers for efficient gene silencing with controllable transformation, Science Advances (2019). DOI: 10.1126/sciadv.aaw6264

Reinhard Waehler et al. Engineering targeted viral vectors for gene therapy, Nature Reviews Genetics (2007). DOI: 10.1038/nrg2141

N. L. Rosi. Oligonucleotide-Modified Gold Nanoparticles for Intracellular Gene Regulation, Science (2006). DOI: 10.1126/science.1125559

© 2019 Science X Network

Science just totally rewrote the story of human evolution (again)

The earliest humans could have lived in what is now northern Botswana, close to the remains of an enormous lake

Noctiluxx / Getty

In the last three decades, scientists have uncovered around half of the 20 known human ancestors. But when it comes to where the first Homo sapiens lived, things start to get a little blurry.

One group of researchers, however, claims to have homed in on the exact region. Modern humans originated around 200,000 years ago in northern Botswana, according to new research published in the scientific journal Nature. The group narrowed the spot where humans evolved down to the Makgadikgadi–Okavango palaeo-wetland, south of the Zambezi river.

Researchers collected DNA from Khoe-San people in southern Africa, who represent the earliest human maternal lineages, and from people who don’t identify as Khoe-San but who the researchers predicted also carried the lineages.

They analysed more than 1,200 mitochondrial genomes. We inherit mitochondrial DNA only from our mothers, so it does not recombine and changes little across generations. The researchers focused on L0 mitochondrial DNA, the lineage on the first branch of all modern humans’ maternal family tree.

They worked with a geologist and a climate physicist to understand what the climate, land and geology were like in this period, and found that a substantial population of L0 carriers lived around the Zambezi river 200,000 years ago, and that multiple Khoe-San sub-lineages made up the predominant human population in the world at that time.

The region was once Lake Makgadikgadi, which stretched from northern Namibia across northern Botswana into Zimbabwe. It would be the biggest lake in Africa if it existed today, the researchers say, and it survived for around 200 million years before shifting tectonic plates broke it up and a wetland formed in its place.

The breaking up of the lake – researchers think – increased humidity and opened strips of lush animal and plant life that allowed populations to migrate northeast and southwest after surviving there for 700,000 years.

However, some experts warn that any claim about the origins of humans must investigate the whole genome, as mitochondrial DNA makes up a very small percentage of our genome and represents only our direct maternal line, says Carina Schlebusch, associate professor of human evolution at Uppsala University in Sweden.

“It doesn’t represent all of our other potential ancestors we could’ve had,” she says. “So genetic variation can only be captured by the rest of our chromosomes.” The ancestors of mitochondrial lineages were not the only people living in Africa 200,000 years ago, and might not have transmitted the rest of their DNA, says Eleanor Scerri, professor and independent group leader at the Pan African Evolution Research Group at the Max Planck Institute for the Science of Human History.

“Reconstructing deep ancestry from mitochondrial DNA is like trying to reconstruct a language from a handful of words, whereas using whole genome or nuclear DNA is like trying to reconstruct a dead language after hearing it being spoken for a day,” she says. The researchers chose to look at mitochondrial genomes because, while whole-genome data is lacking, this is the most accurate way to determine timelines and to see where a lineage first appeared.
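A timeline from mitochondrial DNA ultimately rests on a molecular-clock calculation: the time back to a common maternal ancestor scales with the accumulated sequence differences divided by the substitution rate. A minimal sketch, with illustrative numbers that are not the study’s own values:

```python
def divergence_time_years(differences, sites, subs_per_site_per_year):
    """Estimate the time since two maternal lineages split.

    differences: nucleotide differences between two mitochondrial sequences
    sites: length of the compared sequence in base pairs
    subs_per_site_per_year: assumed substitution rate (illustrative)

    Mutations accrue independently on both lineages, hence the factor of 2.
    """
    pairwise_distance = differences / sites
    return pairwise_distance / (2 * subs_per_site_per_year)

# Illustrative only: the human mitochondrial genome is 16,569 bp, and
# ~1.6e-8 substitutions/site/year is a commonly cited ballpark rate.
years = divergence_time_years(differences=106, sites=16569,
                              subs_per_site_per_year=1.6e-8)
print(round(years))  # roughly 200,000 years
```

The same arithmetic is why mitochondrial dates carry wide error bars: halving the assumed rate doubles the inferred age.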

Eva Chan, one of the study’s authors and senior research officer of human comparative and prostate cancer genomics at the Garvan Institute, says the origins of our ancestors are a hotly debated topic, and with more data, the theories will change. “But all our evidence points to this palaeo-wetland as the birthplace of all humans today.”

“We could include sequences of the whole genome, but there are still limitations to computer power, and at the moment we could only compare the whole genome of a few individuals.” The paper contradicts some recent findings suggesting humans originated in other parts of Africa. For example, research analysing the male-inherited Y chromosome suggests the earliest modern humans could have emerged in west Africa, not southern Africa.

But a reliable argument for human origins would need to account for far more than genetics, says Scerri. “The paper ignores a swathe of fossil and archaeological evidence supporting an older origin for our species,” she says. James Cole, principal lecturer in archaeology at the University of Brighton, says archaeological evidence from fossils across Africa throws the study’s basic findings into question. “You might get the impression that the human evolution story started 200,000 years ago, but we know from fossil and archaeological records that Homo sapiens’ evolution starts around 300,000 years ago.”

This evidence includes partial skull and lower jaw remains, stone tools and traces of fire uncovered in Morocco, north Africa, where previously such finds had come only from south and east Africa. While the new study helps us further understand where we came from, it also highlights how complex our evolution has been, says Cole.

“Nexuses of populations spring up all over the place – this study shows a really strong one around 200,000 years ago that has genetically survived in today’s human population, but there will be others.”

“We knew human evolution was complicated from archaeology and fossil records, but we didn’t know how complicated it was until palaeontologists started to shine a torch on dark masses of complexity and highlight strands we can pull out and see where we came from,” says Cole.

The paper has reignited the argument that modern humans didn’t originate from any one place, but multiple groups shaped who we are today, and the whole African continent could be the origin of our species.

In a widely-praised paper published last year, Scerri argues that a mixture of genetic traits evolved across different regions in Africa. Jon Marks, professor of Anthropology at the University of North Carolina, says this is his “go-to idea” when teaching human origins in Africa, rather than, “Trying to pinpoint where the first person with a chin and forehead lived”.

But aside from mounting evidence to support a continent-wide origin theory, there’s another reason scientists are rejecting the theory that modern humans came from one place. The new paper relies on the assumption that the Khoe-San people have stayed in one place for hundreds of thousands of years. It mentions anatomically modern humans without having studied bones, Marks points out, and the link between mitochondrial DNA from 200,000 years ago and the emergence of anatomically modern humans at the same time is unknown. In fact, he adds, there may be no relationship between the two.

The authors have made a good case that the earliest mitochondrial DNA was in southern Africa 200,000 years ago, he says, but how do we know that the people sampled in the research haven’t moved around in the last 200,000 years?

“That’s a lot of time to be staying in the same place,” Marks says. Some researchers see the argument that any contemporary population represents the earliest modern human as problematic, especially one that may have been widespread in the past.

“Accepting these results means accepting that the Khoe-San are evolutionary relicts who have neither changed nor moved geographically for tens or even hundreds of thousands of years,” Scerri says. “Do we really still have to point out how factually incorrect and ethically problematic such a view is, in 2019?”

Is this brain cell your ‘mind’s eye’?

Credit: CC0 Public Domain

No-one knows what connects awareness—the state of consciousness—with its contents, i.e. thoughts and experiences. Now researchers propose an elegant solution: a literal, structural connection.

‘Content circuits’ within the cortex are plugged into ‘switchboard circuits’ that allocate awareness, says the theory, via cells called L5p neurons.

Writing in Frontiers in Systems Neuroscience, one group offers evidence—and caveats. Their challenge to experimentalists: if consciousness requires L5p neurons, all brain activity without them must be unconscious.

State vs. contents of consciousness

Most neuroscientists chasing the neural mechanisms of consciousness focus on its contents, measuring changes in the brain when it thinks about a particular thing—a smell, a memory, an emotion. Quite separately, others study how the brain behaves during different conscious states, like alert wakefulness, dreaming, deep sleep or anesthesia.

Most agree the two are indivisible: you can’t think or feel or experience anything without being aware, nor be ‘aware’ of nothing. But because of the divided approach, “nobody knows how and why the contents and state of consciousness are so tightly coupled,” says Dr. Jaan Aru, neuroscientist at Humboldt University, Berlin, and lead author of the new theory.

Separate circuits

The divide created between state and contents of consciousness is anatomical.

Our conscious state is thought to depend on the activity of so-called ‘thalamo-cortical’ circuits. These are connections between neurons in the cortex, and neurons in the thalamus—a thumb-sized relay center in the middle of the brain that controls information inflow from the senses (except smell). Thalamocortical circuits are thought to be the target of general anesthesia, and damage to these neurons due to tumors or stroke often results in coma.

In contrast, functional brain imaging studies locate the contents of consciousness mostly within the cortex, in ‘cortico-cortical’ circuits.

The missing link?

Aru and colleagues believe that L5p neurons are uniquely placed to bridge the divide.

“Thalamo-cortical and cortico-cortical circuits intersect via L5p neurons,” explains Aru. “Studies tracing these cells under the microscope suggest they participate in both circuits, by exchanging connections with both thalamus and cortex.”

Functional brain studies suggest these cells may indeed couple the state and contents of consciousness. Cellular-level brain imaging in mice shows that L5p neurons respond to a sensory stimulus (air puff to the leg); that this response increases when the animal is awake; and that it is strongest by far when the animal reacts to the stimulus (moves its leg).

“We can’t tell what the mouse is thinking,” concedes Aru. “But if we assume that it reacts only when it is conscious of the stimulus, then this study demonstrates the interaction between the state [wakefulness] and contents [sensory experience] of consciousness in L5p neurons.”

The assumption is consistent with a similar mouse study. This one went further, showing that directly activating the stimulus-responsive L5p neurons (e.g. with drugs) makes the animal react to a weaker sensory stimulus—and sometimes without any stimulus.

“It’s as if the mouse experiences an illusory stimulus; as if L5p stimulation creates consciousness,” Aru adds.

Testing the theory

The theory is a first iteration that needs refinement, stresses Aru.

“Our goal here is to convince others that future work on the mechanisms of consciousness should specifically target L5p neurons.”

Nevertheless, this general arrangement could account for some well-known quirks of consciousness.

For example, the processing delay of this long relay—from cortico-cortical circuit to thalamo-cortical and back again via L5p neurons—could explain why rapid changes of stimuli often escape conscious perception. (Think subliminal messages spliced into video.)

One feature of this phenomenon is ‘backward masking’: when two images are presented briefly in rapid succession (50-100 ms), only the second image is consciously perceived. In this case, posits Aru, “by the time the stimulus completes the L5p-thalamus-L5p relay, the second image has taken over early cortical representation and steals the limelight lit by the first image.”
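The masking account reduces to a timing rule: a stimulus reaches awareness only if no later stimulus overwrites its early cortical representation before the cortico-thalamo-cortical relay completes. A toy sketch of that rule (the 120 ms loop delay is an assumed value, chosen only to sit above the 50-100 ms masking window):

```python
def consciously_perceived(stimulus_onsets_ms, loop_delay_ms=120):
    """Toy model of backward masking via the L5p-thalamus-L5p relay.

    A stimulus is perceived only if no later stimulus arrives before
    its relay completes (onset + loop_delay_ms).
    Returns the indices of the perceived stimuli.
    """
    perceived = []
    for i, onset in enumerate(stimulus_onsets_ms):
        completes_at = onset + loop_delay_ms
        masked = any(later < completes_at
                     for later in stimulus_onsets_ms[i + 1:])
        if not masked:
            perceived.append(i)
    return perceived

# Two images 80 ms apart: only the second survives the relay.
print(consciously_perceived([0, 80]))   # [1]
# Spaced 200 ms apart, both are perceived.
print(consciously_perceived([0, 200]))  # [0, 1]
```

With onsets inside the masking window only the second image reaches report, matching the backward-masking pattern; widen the gap beyond the loop delay and both survive.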

The theory could also help explain why we usually have little conscious insight into some cognitive processes, like planning movement or even syntax.

“All brain activity that does not (sufficiently) involve L5p neurons remains unconscious,” predicts Aru.

Therein lies the key to testing this exciting theory.

Explore further

Waking up the visual system

More information: Jaan Aru et al, Coupling the State and Contents of Consciousness, Frontiers in Systems Neuroscience (2019). DOI: 10.3389/fnsys.2019.00043
Provided by Frontiers

Electrek has uncovered a previously unseen Tesla concept made by the lead designer of the upcoming Tesla ‘cyberpunk’ pickup truck, and it could give us some hints about the truck’s design.

We know a lot about Tesla’s plans for its electric pickup truck thanks to comments from CEO Elon Musk.

Earlier this year, he said that the Tesla Pickup truck will cost less than $50,000 and ‘be better than a Ford F150’.

The CEO revealed some planned features, like an option for 400 to 500 miles of range, Dual Motor All-wheel-drive powertrain with dynamic suspension, as well as ‘300,000 lbs of towing capacity’.

But when it comes to the design of the Tesla Pickup truck, Musk’s comments have been more confusing.

The CEO shocked some when he said that the Tesla Pickup Truck will have a ‘really futuristic-like cyberpunk Blade Runner’ design without explaining what that meant other than saying that ‘it won’t be for everyone’.

On top of the comments not being clear, Musk didn’t really help anyone when he released a very cryptic teaser image for the pickup truck during the Model Y unveiling earlier this year.

Most people didn’t even understand which part of the electric pickup truck was shown by Tesla in the teaser image.

Some amateur designers tried to interpret what it would look like based on the teaser image and Musk’s comments, but the CEO said that he hadn’t seen one render that looks like what Tesla is working on.

We now have a new render, but it’s not just any fan render.

Electrek found out that Tesla designer Sahm Jafari is behind the concept for Tesla’s “Cyberpunk Truck”.

Jafari got hired by Tesla out of the Art Center College of Design in California.

He interned at Tesla while completing his degree, and Electrek has uncovered a particularly interesting design he created for Tesla while studying at the prestigious design school.

It’s called the ‘Tesla Model Zero’.

Obviously, it’s not a pickup truck. Jafari wrote that he meant it as a car positioned under the Model 3:

“A car that slots under Model 3 with the goal of making the electric lifestyle accessible to all. The Model Zero strengthens the brand image toward the entry-level market and opens up the doors to sustainable commuting to nearly anyone looking to get into a new vehicle.”

But some of the design accents of the ‘Tesla Model Zero’ could give us some clues of Jafari’s work on Tesla’s “Cyberpunk Truck”.

For example, the front-end that runs in a straight line all the way up the windshield looks similar to the teaser image released by Tesla:

Musk confirmed that the teaser was the front-end of the Tesla pickup truck.

In order for the Tesla Pickup to achieve the long range Musk promised, it will have to either pack an enormous battery or be significantly more energy-efficient than the average truck.

An elongated front-end like that could help improve aerodynamic performance and ultimately the efficiency of a larger vehicle like a pickup.
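The battery-versus-efficiency trade-off is back-of-envelope arithmetic: required pack energy is simply range times consumption per mile. A rough sketch with illustrative consumption figures that are assumptions, not Tesla numbers:

```python
def battery_kwh_needed(range_miles, wh_per_mile):
    """Usable pack energy (kWh) needed for a given range and efficiency."""
    return range_miles * wh_per_mile / 1000.0

# Illustrative: a boxy truck at ~600 Wh/mi vs a slipperier design at ~450 Wh/mi.
print(battery_kwh_needed(500, 600))  # 300.0 kWh
print(battery_kwh_needed(500, 450))  # 225.0 kWh
```

Even the more efficient case implies a pack roughly two to three times the ~100 kWh of a 2019 Model S, which is why aerodynamic details like an elongated front-end matter so much for a long-range truck.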

The concept also looks fairly futuristic and the CEO has said several times that the Tesla Pickup looks futuristic.

However, he also said it looks ‘cyberpunk’ and “Blade Runner-esque”, and Jafari’s concept doesn’t have many “cyberpunk” design accents.

Tesla is expected to unveil its pickup truck concept next month (November 2019), which happens to be when the events of the Blade Runner film take place.

The film was also set in Los Angeles, where Tesla often launches its new vehicles.