Advanced brain organoid could model strokes, screen drugs

Functional blood brain barrier allows for discovering and testing new drugs that can cross over into the brain
May 29, 2018

These four marker proteins (top row) are involved in controlling entry of molecules into the brain via the blood brain barrier. Here, the scientists illustrate one form of damage to the blood brain barrier in ischemic stroke conditions, as revealed by changes (bottom row) in these markers. (credit: WFIRM)

Wake Forest Institute for Regenerative Medicine (WFIRM) scientists have developed a 3-D brain organoid (a tiny artificial organ) that could have potential applications in drug discovery and disease modeling.

The scientists say this is the first engineered tissue-equivalent to closely resemble normal human brain anatomy — containing all six major cell types found in normal brain tissue, including neurons and immune cells.

The advanced 3-D organoids promote the formation of a fully cell-based, natural, and functional version of the blood brain barrier (a semipermeable membrane that separates the circulating blood from the brain, protecting it from foreign substances that could cause injury).

The new artificial organ model can help improve understanding of disease mechanisms at the blood brain barrier (BBB), the passage of drugs through the barrier, and the effects of drugs once they cross the barrier.

Faster drug discovery and screening

The shortage of effective therapies and the low success rate of investigational drugs are (in part) due to the fact that we do not have human-like tissue models for testing, according to senior author Anthony Atala, M.D., director of WFIRM. “The development of tissue-engineered 3D brain tissue equivalents such as these can help advance the science toward better treatments and improve patients’ lives,” he said.

The development of the model opens the door to speedier drug discovery and screening. This applies to neurological conditions; to diseases like HIV, in which pathogens hide in the brain; and to disease modeling of neurological conditions such as Alzheimer’s disease, multiple sclerosis, and Parkinson’s disease, with the goal of better understanding their pathways and progression.

“To date, most in vitro [lab] BBB models [only] utilize endothelial cells, pericytes and astrocytes,” the researchers note in a paper. “We report a 3D spheroid model of the BBB comprising all major cell types, including neurons, microglia, and oligodendrocytes, to recapitulate more closely normal human brain tissue.”

So far, the researchers have used the brain organoids to measure how simulated strokes impair the blood brain barrier, and have successfully tested the permeability (the ability of molecules to pass through the BBB) of both large and small molecules.

Reference: Nature Scientific Reports (open access). Source: Wake Forest Institute for Regenerative Medicine.

Mozilla Firefox joins Chrome, Safari in making it easier to build sophisticated websites

You may not care about web components, but you’ll like what they can do for the web.

With Mozilla’s flip of a virtual switch, life got easier for the people who make websites and the people who use them, which is to say, everybody.

On Monday, Mozilla accepted an update for its Firefox browser that enables technology called web components. You probably won’t directly care about them unless you’re a programmer. But you’ll almost assuredly care about what they mean for intricate websites: fewer problems, faster loading and quicker improvements.

Google’s Chrome team started pushing web components more than five years ago. But browser makers only gradually embraced the two big pieces, called Shadow DOM and Custom Elements. Shadow DOM makes it possible to isolate chunks of code so they don’t disturb other parts of website software, while Custom Elements let programmers create their own custom website foundations.

Chrome was the first to support web components; Apple’s Safari followed suit in 2016 and 2017. Microsoft has pledged to add support to its Edge browser but hasn’t done so yet. Firefox supports Custom Elements, but on Monday, Shadow DOM support arrived in the Nightly test version.

Web components are overkill for basic websites, but more advanced sites can benefit, and some big ones such as YouTube already use them. If you visit such a site with a browser that doesn’t support web components, it’ll likely load more slowly or offer limited features.

“Web development got super hard,” said Mozilla Chief Product Officer Mark Mayo. “It’s now going to be a lot easier, so we should see better, faster web pages.”


Web components only work in the Nightly test version of Firefox for now, but they’re scheduled to arrive in the main version of the browser in September. They join a host of other developer-focused Firefox improvements arriving this year that Mozilla is using to try to restore its cachet with the web programmers who were instrumental to the browser’s rise a decade ago.

With web components, developers can create website building blocks and then widely reuse them without worrying they’ll cause problems that’ll stop you from actually using that website. One example: Websites often have tabs to visually represent different sections, and web components let developers more easily create that interface, reuse it on another project or even copy it from other websites that already have figured it out.

“For big companies with many teams and complex products, it’s huge,” said Alex Russell, a senior programmer at Chrome who’s worked for years to modernize the web.

Web components technology particularly helps with big libraries of pre-written software called frameworks, which are widely used in today’s web programming. Frameworks, like React from Facebook and Angular from Google, make it easier to build websites, but parts of one framework can’t be used with parts of another. As a result, programming on the web is “balkanized,” Russell said.

Mozilla’s Mayo sees it as a big step forward, too.

“It’s the basis of a safer, faster, more productive development model for the web,” Mayo said. “You don’t get all three of those being advanced at once very often.”

Environmental noise paradoxically preserves the coherence of a quantum system

May 30, 2018, RIKEN

Quantum computers promise to advance certain areas of complex computing. One of the roadblocks to their development, however, is the fact that quantum phenomena, which take place at the level of atomic particles, can be severely affected by environmental “noise” from their surroundings. In the past, scientists have tried to maintain the coherence of the systems by cooling them to very low temperatures, for example, but challenges remain. Now, in research published in Nature Communications, scientists from the RIKEN Center for Emergent Matter Science and collaborators have used dephasing to maintain quantum coherence in a three-particle system. Normally, dephasing causes decoherence in quantum systems.

Quantum phenomena are generally restricted to the atomic level, but there are cases — such as laser light and superconductivity — in which the coherence of quantum states allows them to be expressed at the macroscopic level. This is important for the development of quantum computers. However, such systems are also extremely sensitive to the environment, which destroys the coherence that makes them meaningful.

The group, led by Seigo Tarucha of the RIKEN Center for Emergent Matter Science, set up a system of three quantum dots in which electron spins could be individually controlled with an electric field. They began with two entangled electron spins in one of the end quantum dots, while keeping the center dot empty, and transferred one of these spins to the center dot. They then swapped the center dot spin with a third spin in the other end dot using electric pulses, so that the third spin was now entangled with the first. The entanglement was stronger than expected, and based on simulations, the researchers realized that the environmental noise around the system was, paradoxically, helping the entanglement to form.

According to Takashi Nakajima, the first author of the study, “We discovered that this derives from a phenomenon known as the ‘quantum Zeno paradox,’ or ‘Turing paradox,’ which means that we can slow down a quantum system by the mere act of observing it frequently. This is interesting, as it means that environmental noise, which normally makes a system incoherent, here made the system more coherent.”
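The quantum Zeno effect Nakajima describes can be illustrated with a textbook back-of-the-envelope calculation (a sketch of the general phenomenon, not the authors’ model): a two-level system that a drive would rotate out of its initial state is instead measured n times along the way, and each measurement projects it back. The survival probability, (cos²(θ/n))ⁿ, approaches 1 as the number of measurements grows — frequent observation freezes the evolution.

```python
import math

def survival_probability(n_measurements, total_angle=math.pi / 2):
    """Probability that a two-level system is still found in its initial
    state after n equally spaced projective measurements, while a drive
    tries to rotate it by total_angle overall."""
    per_step = total_angle / n_measurements
    # Each measurement collapses the state; the chance of surviving one
    # step is cos^2(step angle), and the n steps are independent.
    return (math.cos(per_step) ** 2) ** n_measurements
```

With a single measurement at the end the system has fully rotated away (probability ~0), while with 100 intermediate measurements it stays in its initial state about 98% of the time.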

Tarucha, the leader of the team, says, “This is a very exciting finding, as it could potentially help to accelerate research into scaling up semiconductor quantum computers, allowing us to solve scientific problems that are very tough on conventional computer systems.”

Nakajima says, “Another area that is very interesting to me is that a number of biological systems, such as photosynthesis, that operate within a very noisy environment take advantage of macroscopic quantum coherence, and it is interesting to ponder if a similar process may be taking place.”


More information: Takashi Nakajima et al, Coherent transfer of electron spin correlations assisted by dephasing noise, Nature Communications (2018). DOI: 10.1038/s41467-018-04544-7

Garbage In, Garbage Out: machine learning has not repealed the iron law of computer science

Pete Warden writes convincingly about computer scientists’ focus on improving machine learning algorithms, to the exclusion of improving the training data that the algorithms interpret, and how that focus has slowed the progress of machine learning.

The problem is as old as data-processing itself: garbage in, garbage out. Assembling the large, well-labeled datasets needed to train machine learning systems is a tedious job (indeed, the whole point and promise of machine learning is to teach computers to do this work, which humans are generally not good at and do not enjoy). The shortcuts we take to produce datasets come with steep costs that are not well-understood by the industry.

For example, in order to teach a model to recognize attractive travel photos, Jetpac paid low-waged Southeast Asian workers to label pictures. These workers had a very different idea of a nice holiday than the wealthy people who would use the service they were helping to create: for them, conference reception photos of people in suits drinking wine in air-conditioned international hotels were an aspirational ideal — I imagine that for some of these people, the beach and sea connoted grueling work fishing or clearing brush, rather than relaxing on a sun-lounger.

Warden says that people who are trying to improve vision systems for drones and other robots run into problems using the industry-standard ImageNet dataset, because those images were taken by humans, not drones, and humans take pictures in ways that are significantly different from the way that machines do — different lenses, framing, subjects, vantage-points, etc.

Warden’s advice is for machine learning researchers to sit with their training data: sift through it, hand-code it, review it and review it again. Do the hard, boring work of making sure that PNGs aren’t labeled as JPGs, retrieve the audio samples that were classified as “other” and listen to them to see why the classifier barfed on them.
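Warden’s “PNGs labeled as JPGs” example is the kind of check that can be mechanized while still sitting with the data; a minimal sketch (the file names and the helper are illustrative, not from his post) compares a file’s leading “magic bytes” against its extension:

```python
# Magic bytes that real files of each type start with.
MAGIC = {
    ".png": b"\x89PNG\r\n\x1a\n",
    ".jpg": b"\xff\xd8\xff",
    ".gif": b"GIF8",
}

def extension_matches_contents(path, first_bytes):
    """Return True if a file's leading bytes agree with its extension.
    `first_bytes` is the start of the file, read separately so the
    check stays easy to test."""
    for ext, magic in MAGIC.items():
        if path.lower().endswith(ext):
            return first_bytes.startswith(magic)
    return True  # unknown extension: nothing to check
```

Running a pass like this over a dataset surfaces mislabeled files before they silently degrade training.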

It’s an important lesson for product design, but even more important when considering machine learning’s increasing role in adversarial uses like predictive policing, sentencing recommendations, parole decisions, lending decisions, hiring decisions, etc. These datasets are just as noisy and faulty and unfit for purpose as the datasets Warden cites, but their garbage-out problem ruins people’s lives or gets them killed.

Here’s an example that stuck with me, from a conversation with Patrick Ball, whose NGO did a study of predictive policing. The police are more likely to discover and arrest perpetrators of domestic violence who live in row-houses, semi-detached homes and apartment buildings, because the most common way for domestic violence to come to police attention is when a neighbor phones in a complaint. Abusers who live in detached homes get away with it more than their counterparts in homes with a party wall.

Train a machine learning system with police data, and it will overpolice people in homes with shared walls (who tend to be poorer), and underpolice people in detached homes (who tend to be richer). No one benefits from that situation.
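The reporting-bias mechanism Ball describes can be made concrete with a toy simulation (all rates below are invented for illustration): both housing types offend at the same true rate, but only reported incidents reach the training data, so any model fit to the reports “learns” a difference that does not exist.

```python
import random

random.seed(0)

TRUE_OFFENSE_RATE = 0.05  # identical for both groups, by construction
# Neighbors with a party wall overhear and call more often.
REPORT_RATE = {"shared_wall": 0.6, "detached": 0.2}

def simulate_reported_counts(n_households=100_000):
    """Count reported incidents per housing type in a toy population
    where the underlying offense rate is exactly equal."""
    reports = {"shared_wall": 0, "detached": 0}
    for _ in range(n_households):
        for housing in reports:
            if random.random() < TRUE_OFFENSE_RATE:        # offense occurs
                if random.random() < REPORT_RATE[housing]:  # neighbor calls
                    reports[housing] += 1
    return reports
```

With these numbers, shared-wall households generate roughly three times as many reports despite identical behavior — exactly the artifact a model trained on police data would absorb.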

There are almost always model errors that have bigger impacts on your application’s users than the loss function captures. You should think about the worst possible outcomes ahead of time and try to engineer a backstop to the model to avoid them. This might just be a blacklist of categories you never want to predict, because the cost of a false positive is so high, or you might have a simple algorithmic set of rules to ensure that the actions taken don’t exceed some boundary parameters you’ve decided. For example, you might keep a list of swear words that you never want a text generator to output, even if they’re in the training set, because it wouldn’t be appropriate in your product.
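The backstop Warden describes can be a few lines of post-processing; this sketch (the category names and threshold are invented for illustration) drops blocked predictions and falls back to a safe default:

```python
BLOCKED_LABELS = {"swear_word", "medical_diagnosis"}  # never show these
SAFE_DEFAULT = "unknown"
MIN_CONFIDENCE = 0.4  # below this, a false positive costs more than silence

def backstop(label, confidence):
    """Apply simple rules *after* the model so the worst outputs
    never reach users, regardless of what the model predicts."""
    if label in BLOCKED_LABELS:
        return SAFE_DEFAULT
    if confidence < MIN_CONFIDENCE:
        return SAFE_DEFAULT
    return label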

It’s not always so obvious ahead of time what the bad outcomes might be though, so it’s essential to learn from your mistakes in the real world. One of the simplest ways to do this, once you have a half-decent product/market fit, is to use bug reports. When people use your application, and they get a result they don’t like from the model, make it easy for them to tell you. If possible get the full input to the model but if it’s sensitive data, just knowing what the bad output was can be helpful to guide your investigation.

These categories can be used to choose where you gather more data, and which classes you explore to understand their current label quality. Once you have a new revision of your model, have a set of inputs that previously produced bad results and run a separate evaluation on those, in addition to the normal test set. This rogues gallery works a bit like a regression test, and gives you a way to track how well you’re improving the user experience, since a single model accuracy metric will never fully capture everything that people care about.

By looking at a small number of examples that prompted a strong reaction in the past, you’ve got some independent evidence that you’re actually making things better for your users. If you can’t capture the input data to your model in these cases because it’s too sensitive, use dogfooding or internal experimentation to figure out what inputs you do have access to produce these mistakes, and substitute those in your regression set instead.
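The “rogues gallery” Warden recommends amounts to a second, separate evaluation over saved failure cases; a minimal sketch (the data structures here are illustrative, not from his post):

```python
def rogues_gallery_score(model, gallery):
    """Fraction of previously bad examples the current model now handles
    correctly. `gallery` is a list of (input, expected_label) pairs saved
    from past bug reports; `model` is any callable mapping input -> label."""
    if not gallery:
        return 1.0  # nothing has gone wrong yet
    correct = sum(1 for x, expected in gallery if model(x) == expected)
    return correct / len(gallery)
```

Run this alongside the normal test set on every model revision; a drop in the score flags a reintroduced failure even when aggregate accuracy improves.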

Why you need to improve your training data, and how to do it [Pete Warden]

Researchers Are Training a Robot Butler to Do the Chores You Hate in a Sims-Inspired Virtual House

Researchers are teaching machines to get stuff done using video simulations, a database of chores, and a virtual home reminiscent of your favorite time-wasting video game. The end goal? Teaching robots the same way you teach yourself how to install a toilet: instructional videos.

Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), the University of Toronto, McGill University, and the University of Ljubljana released a paper detailing the methods by which they taught computers how to accomplish a greater range of activities by watching instructional videos. The researchers used simulated videos with virtual human characters, along with a database of 3,000 crowdsourced tasks the program can choose from. The AI then mimics the tasks seen in the video, along with everything each task entails.


The researchers created video simulations set in a furnished home (with a living room, kitchen, dining room, bedroom, and home office), surprisingly similar to houses in The Sims. The artificial agents would watch the videos and attempt to execute the tasks demonstrated. Researchers have so far successfully executed about 1,000 of the available crowdsourced actions.

As for learning new tricks, it’s certainly possible, “as long as the task is described as a program with a series of steps that it can understand,” according to MIT CSAIL’s Adam Conner-Simons.

Turning on the TV is easy for a human to understand, but the simple command lacks the instructions a robot would deem necessary in order to execute the task. You can’t turn on the TV if you don’t hit the power button; you can’t hit the power button unless you’re in front of it; you can’t be in front of it until you walk over to it. You get the idea.
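The “turn on the TV” chain above maps naturally onto the kind of step program the researchers describe; a sketch (the step and object names are invented for illustration):

```python
# Each task is a list of primitive steps; spelling them out makes the
# implicit human knowledge (walk first, then press) explicit for an agent.
TASKS = {
    "turn_on_tv": [
        ("walk_to", "tv"),       # you can't press what you can't reach
        ("face", "tv"),
        ("press", "power_button"),
    ],
}

def expand(task_name):
    """Return the ordered primitive steps an agent must execute."""
    return TASKS[task_name]
```

A command a human finds trivial becomes an ordered program the agent can actually execute, which is the representation the crowdsourced task database provides.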

Eventually, researchers hope to teach robots how to accomplish tasks simply by showing them actual instructional videos you might find on YouTube, for example. It also means you could eventually talk to your in-home smart speaker, instructing your Google Assistant on how exactly to dim your lights, play your tunes, and set the mood for dinner without manually entering each step.

When I asked Conner-Simons about real-world applications, I suggested a robot could help someone crack open a cold one with the boys. He said that “isn’t exactly the first use case that the team had in mind,” but the ability to move household items would be a valuable skill. “We envision that a system like this could have important implications for people with limited mobility, such as the elderly or the disabled,” he said.

But what about the lazy?


This AirPods wrist holder looks goofy as heck


Here’s an idea: a wristband for AirPods. It’s simple and dumb and it basically allows you to forgo your AirPods case. You can buy this accessory, which comes from a company called Elago, for $14.99 on Amazon. This is the same company that created the retro Mac iPhone stand. It has lots of accessory ideas.

This AirPods holder also fits over a standard Apple Watch band, so you can always carry your AirPods next to your watch. That looks better than the band by itself; you shouldn’t wear the band with just AirPods, because it doesn’t look great.

I get that carrying a case around is annoying, but have some pride in yourself and don’t wear your AirPods on your wrist.