https://www.forbes.com/sites/fernandezelizabeth/2019/10/12/minibrains-grown-in-the-laboratory-produce-brainwaves–now-what/#251f43ba9ac7

Minibrains Grown In The Laboratory Produce Brainwaves. Now What?

Enter minibrains.

Minibrains are small clusters of human brain cells that can be grown in a Petri dish. Floating through the agar, these small gray lumps don’t look particularly impressive, but they are allowing scientists to study actual living human brain tissue in ways they couldn’t before.

Growing these minibrains gives scientists a chance to study a host of psychological issues and diseases, and perhaps make advancements that they would not have made previously. Minibrains will even be sent to space to study how the human brain develops in zero-G.

But then came the surprise. These lab-grown brains started producing brainwaves.


These brainwaves, equivalent to brain wave patterns in a pre-term infant, were seen by a group of researchers at the University of California San Diego. They reported in a recent paper in Cell Stem Cell that these minibrains began showing neural activity after two months, and in four to six months, they reached levels of neural activity never before seen in a lab. At ten months, they were equivalent to pre-term babies, complete with lulls and flutters of activity.

Minibrains are created from stem cells, in this case stem cells reprogrammed from human skin cells. When these cells are placed in a conducive environment, they can develop into any organ.

But minibrains are still a far cry from a full human brain. To develop into a mature brain, these minibrains would need to communicate with other areas of a larger brain and have some sort of connection with the outside world. But this might not be far off. Already, scientists have given minibrains retinal cells so they can sense light.

While some note that these minibrains are nowhere near real human brains, others begin to feel uneasy at seeing this neural activity. What does it mean? In this quickly developing field, how soon will these minibrains develop even further? There is an ethical code when dealing with animals in the lab – should this code apply to minibrains too? Could they one day feel pain, have memories, or even become self-aware?

“There is now a need for clear guidelines for research,” Dr. Nita Farahany and collaborators wrote in a 2018 letter to Nature. They point out that as research develops and these minibrains become more advanced, it is less far-fetched to believe that one day they might have some sort of sentience or feelings such as pleasure or pain. The benefits of minibrain research are promising, but they caution, “to ensure the success and social acceptance of this research long term, an ethical framework must be forged now, while brain surrogates remain in the early stages of development.”

Dr. Elizabeth Fernandez is the host of SparkDialog Podcasts, which covers the intersection of science and society.

https://www.teslarati.com/tesla-compact-battery-patent-easy-production/

Tesla patent paves way for compact battery systems that are easier to produce


Tesla’s use of batteries in its electric vehicles is crucial to their function. The company’s battery systems are the industry standard, offering more range and density than those of its competitors. But despite this lead, a recently submitted patent for an aggregated battery system could put Tesla’s batteries head and shoulders above the rest of the pack.

Tesla Pickup Truck still on track for November unveiling

AN ARTIST’S RENDER OF THE TESLA PICKUP TRUCK. (CREDIT: EMRE HUSMAN)


It appears that Tesla’s highly-anticipated Pickup Truck is still poised to be unveiled sometime this coming November. The update was shared on Twitter by CEO Elon Musk while responding to an inquiry about the upcoming vehicle’s official reveal date.

While Musk did not specify a date in his recent tweet, a previous announcement from the CEO last month estimated a November unveiling event for the Tesla Pickup Truck. Prior to this, Musk noted in late July that while the vehicle was “close,” the truck’s reveal was “maybe 2 to 3 months” away. This coming November is just a bit over this estimate.

Steve Jobs Ghost 👻@tesla_truth

Have you decided on a date for the pickup reveal? Still targeting November?


Interestingly, a November reveal for the Tesla Pickup Truck would mark around two years since the unveiling event for the company’s Semi, arguably Tesla’s most exciting reveal event to date. Tesla surprised both its enthusiasts and the auto industry as a whole during the Semi event by unveiling its next-generation Roadster, a successor to the car that started it all for the company, which boasts an insane 0-60 mph time of 1.9 seconds and a range of 620 miles per charge.

Few details are known about the upcoming Tesla Pickup Truck unveiling apart from its expected date, though considering recent developments in the electric car maker’s vehicle lineup, there is a chance Tesla could include a “One More Thing” segment in the pickup’s reveal event. With this in mind, one vehicle that might make a surprise appearance is the Model S Plaid Powertrain variant.

Thanks to the Model S’ track capabilities, as exhibited by the Plaid Powertrain variant’s performance at Laguna Seca and the Nürburgring, interest in Tesla’s flagship sedan is fairly high once more. It would thus be a good idea for Tesla to showcase some of its recent flagship-sedan projects during the pickup truck’s unveiling. Such a gesture would likely reaffirm the Model S’ place in the premium EV sedan market, especially considering the arrival of vehicles like the Porsche Taycan Turbo S, a car that was bred on the track.

Elon Musk has teased several notable aspects of Tesla’s upcoming pickup truck, with the CEO stating during the 2019 Annual Shareholder Meeting that the vehicle will feature performance that’s comparable to a base Porsche 911 while boasting a towing capacity that can match industry leaders like the Ford F-150. “If the (Ford) F-150 can tow it, the Tesla truck can do it,” Musk said.

Perhaps the most interesting aspect of the Tesla Pickup Truck would be its starting price. Musk has stated that the vehicle will be priced at “well under” $50,000. The CEO also added that at most, the vehicle should have a starting price of around $49,000. “You should be able to buy a really great truck for $49k or less,” Musk said.

https://www.teslarati.com/tesla-patent-hood-hinge-pedestrian-safety/

Tesla patents novel hood hinge that optimizes pedestrian safety during collisions

THE MODEL X IS TESLA’S LARGEST VEHICLE IN ITS CURRENT LINEUP. (CREDIT: NICK.LAUER VIA MY TESLA ADVENTURE/INSTAGRAM)


Tesla’s electric cars are known for being extremely quick, and they are also known for being extremely safe. The Model 3, the company’s most affordable car to date, for example, has aced safety ratings across the globe, earning a 5-Star rating from the NHTSA in the US, the Euro NCAP in Europe, and the ANCAP in Australia. Even the IIHS gave the Tesla Model 3 its highest rating, Top Safety Pick+.

But this is Tesla, and the electric car maker is known for being a company that refuses to stay still. Its cars already subject passengers to serious Gs when launching, yet the company remains hard at work making them even quicker and more visceral (e.g. the Model S Plaid Powertrain). In the same light, while Teslas are already safe in their current state, it is no surprise that the company remains dedicated to finding ways to make its vehicles even safer, both for passengers in the cabin and for pedestrians on the road.

One such example of this was highlighted in a recently published patent simply titled “Hinge Assembly for a Vehicle Hood.” Based on the electric car maker’s discussion, the novel hinge assembly has the potential to protect pedestrians who strike the vehicle’s hood during a collision. Similar systems are in place in vehicles today, though Tesla maintained that conventional designs leave considerable room for improvement.

A side view of Tesla’s hinge assembly. (Credit: US Patent Office)

“Modern vehicles are mandated by safety standards to protect pedestrians from head-impact injuries, including a scenario in which a pedestrian would contact the vehicle’s hood. To meet these requirements, current state of the art safety systems are active systems that typically include a sensor system to detect a collision with a pedestrian and fire (using a pyrotechnic) an actuator to lift the front hood into a protective position before pedestrian impact. However, such systems may be falsely triggered and can only be used once because the pyrotechnic is not reversible. The pyrotechnic is also expensive, adding to overall cost of the vehicle. Therefore, there is a need for a safety system that overcomes the aforementioned drawbacks.”

Tesla noted in its patent’s description that its hinge assembly includes a body member and a hood member, with the latter being “pivotally coupled with a body member through a pivot pin.” In the event of a collision, a portion of the vehicle’s hood member or body member “deforms such that the hood member or body member disengages from the pivot pin.” This allows Tesla to use the hinge as a passive pedestrian safety system that does not require any additional components such as sensors or controllers. The design outlined in Tesla’s patent is also more practical than the pyrotechnic system used in conventional pedestrian impact safety systems.

Tesla describes how its hood hinge works in a collision in the following section.

A side view from Tesla’s patent shows the hinge assembly being impacted by a pedestrian headform. (Credit: US Patent Office)

“FIG. 6 illustrates impact of a headform 602 on hinge assembly 116. Headform 602 represents the head (or portion thereof) of a pedestrian or other living being. As illustrated, when a collision occurs such that headform 602 hits a portion of hood member 108 of vehicle 100 along direction of an axis X-X′, a force is generated. When the force is great enough, the impact force causes hood member 108 to disengage from hinge assembly 116. The impact force typically causes deformation of portion 314 of hood member 108 adjacent to notch 312 such that pivot pin 202 disengages with second opening 304 of hood member 108. In embodiments, the width W of notch 312 is altered to change the impact force at which the hood member 108 disengages from hood member 108. In embodiments the impact force causes deformation of the pivot pin 202 to allow disengagement of hood member 108 from body member 110.

“In an event of collision, hood member 108 may disengage with hinge assembly 116 such that safety standards can be met. Hood member 108 may move down due to impact force and disengagement with hinge assembly 116. To allow movement of hood member 108, sufficient space may be provided by trimming away portions of hood member 108 and body member 110. Advantageously, this would lower weight of components while maintaining the safety standards for vehicle 100.”

Tesla is a carmaker that will likely never stay still. Despite its significant lead in the electric car segment thanks to its vehicles’ batteries and powertrains, Tesla is in a continuous process of improvement. The hood hinge outlined in this patent might be quite simple, but it nonetheless contributes to the overall safety of Tesla’s lineup of vehicles. Such initiatives further prove that when it comes to safety, no part is too small for innovation, and in the event of a collision, it’s these details that can make all the difference.

Tesla’s patent for its hinge assembly can be accessed below.

Tesla Hood Patent by Simon Alvarez on Scribd

https://www.sciencealert.com/babies-who-are-cuddled-more-seem-to-have-their-genetics-altered-for-years-afterwards

Babies Who Are Cuddled More Seem to Have Their Genetics Altered For Years Afterwards

DAVID NIELD
12 OCT 2019

The amount of close and comforting contact that young infants get doesn’t just keep them warm, snug, and loved.

A 2017 study says it can actually affect babies at the molecular level, and the effects can last for years.

Based on the study, babies who get less physical contact and are more distressed at a young age end up with changes in molecular processes that affect gene expression.

The team from the University of British Columbia in Canada emphasises that it’s still very early days for this research, and it’s not clear exactly what’s causing the change.

But it could give scientists some useful insights into how touching affects the epigenome – the biochemical changes that influence gene expression in the body.

During the study, parents of 94 babies were asked to keep diaries of their touching and cuddling habits from five weeks after birth, as well as logging the behaviour of the infants – sleeping, crying, and so on.

Four-and-a-half years later, DNA swabs were taken of the kids to analyse a biochemical modification called DNA methylation.

It’s an epigenetic mechanism in which some parts of the chromosome are tagged with small chemical groups made of carbon and hydrogen (methyl groups), often changing how genes function and affecting their expression.

The researchers found DNA methylation differences between “high-contact” children and “low-contact” children at five specific DNA sites, two of which were within genes: one related to the immune system, and one to the metabolic system.

DNA methylation also acts as a marker for normal biological development and the processes that go along with it, and it can be influenced by external, environmental factors as well.

Then there was the epigenetic age, the biological ageing of blood and tissue. In the kids who hadn’t had much contact as babies and had experienced more distress in their early years, this marker was lower than expected for their actual age.

“In children, we think slower epigenetic ageing could reflect less favourable developmental progress,” said one of the team, Michael Kobor.

In fact, similar findings were spotted in a study from 2013 looking at how much care and attention young rats were given from a very early age.

Gaps between epigenetic age and chronological age have been linked to health problems in the past, but again it’s too soon to draw those kinds of conclusions: the scientists readily admit they don’t yet know how this will affect the kids later in life.

We are also talking about fewer than 100 babies in the study, but it does seem that close contact and cuddles somehow change the body at a genetic level.

Of course it’s well accepted that human touch is good for us and our development in all kinds of ways, but this is the first study to look at how it might be changing the epigenetics of human babies.

It will be the job of further studies to work out why, and to investigate whether any long-term changes in health might appear as a consequence.

“We plan to follow up on whether the ‘biological immaturity’ we saw in these children carries broad implications for their health, especially their psychological development,” said one of the researchers, Sarah Moore.

“If further research confirms this initial finding, it will underscore the importance of providing physical contact, especially for distressed infants.”

The research was published in Development and Psychopathology.

A version of this article was first published in November 2017.


https://phys.org/news/2019-10-quantum-faster.html

New compiler makes quantum computers two times faster

A flow chart describing the compiling of variational algorithms to speed up quantum computations. Credit: EPiQC/University of Chicago

A new paper from researchers at the University of Chicago introduces a technique for compiling highly optimized quantum instructions that can be executed on near-term hardware. This technique is particularly well suited to a new class of variational quantum algorithms, which are promising candidates for demonstrating useful quantum speedups. The new work was enabled by uniting ideas across the stack, spanning quantum algorithms, machine learning, compilers, and device physics. The interdisciplinary research was carried out by members of the EPiQC (Enabling Practical-scale Quantum Computation) collaboration, an NSF Expedition in Computing.

Adapting to a New Paradigm for Quantum Algorithms

The original vision for quantum computing dates to the early 1980s, when physicist Richard Feynman proposed performing molecular simulations using just thousands of noise-less qubits (quantum bits), a practically impossible task for traditional computers. Other algorithms developed in the 1990s and 2000s demonstrated that thousands of noise-less qubits would also offer dramatic speedups for problems such as database search, integer factoring, and matrix algebra. However, despite recent advances in quantum hardware, these algorithms are still decades away from scalable realizations, because current hardware features noisy qubits.

To match the constraints of current and near-term quantum computers, a new paradigm for variational quantum algorithms has recently emerged. These algorithms tackle similar computational challenges as the originally envisioned quantum algorithms, but build resilience to noise by leaving certain internal program parameters unspecified. Instead, these internal parameters are learned by variation over repeated trials, guided by an optimizer. With a robust optimizer, a variational algorithm can tolerate moderate levels of noise.
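As a rough illustration of that loop, the sketch below is a toy, library-free example with invented names (it is not the EPiQC code): a one-parameter "circuit" is simulated classically, its cost is estimated from a finite number of noisy measurement shots, and a classical optimizer learns the free parameter over repeated trials.

```python
# A toy, library-free sketch of a variational loop (invented example, not the
# EPiQC code): a classical optimizer learns the free parameter of a simulated
# one-qubit "circuit" whose cost is estimated from noisy repeated trials.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def ansatz_state(theta):
    """Single-qubit parameterized 'circuit': RY(theta) applied to |0>."""
    return np.array([np.cos(theta[0] / 2), np.sin(theta[0] / 2)])

def noisy_cost(theta, shots=200):
    """Estimate <Z> from a finite number of simulated measurement shots."""
    p1 = abs(ansatz_state(theta)[1]) ** 2       # probability of measuring |1>
    samples = rng.random(shots) < p1            # shot noise from finite sampling
    return 1.0 - 2.0 * samples.mean()           # <Z> estimate; minimum (-1) at theta = pi

# The optimizer varies theta over repeated noisy trials, as described above.
result = minimize(noisy_cost, x0=[0.1], method="COBYLA")
print("learned theta:", result.x, "final cost estimate:", result.fun)
```

In a real variational workload, each call to the cost function would dispatch a compiled circuit to quantum hardware, which is exactly where compilation overhead becomes a bottleneck.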

While the noise resilience of variational algorithms is appealing, it poses a challenge for compilation, the process of translating a mathematical algorithm into the physical instructions ultimately executed by hardware.

“The trade-off between variational and traditional quantum algorithms is that while variational approaches are cheap in the number of gates, they are expensive in the number of repetitions needed,” said Fred Chong, the Seymour Goodman Professor of Computer Science at UChicago and lead PI for EPiQC. “Whereas traditional quantum algorithms are fully specified at execution time and thereby fully optimizable pre-execution, variational programs are only partially specified at execution time.”

Partial Compilation

The researchers address the issue of partially specified programs with a technique called partial compilation. Pranav Gokhale, a UChicago PhD student, explains, “Although we can’t fully compile a variational algorithm before execution, we can at least pre-compile the parts that are specified.” For typical variational algorithms, this simple heuristic alone is sufficient, delivering 2x speedups in quantum runtime relative to standard gate-based compilation techniques. Since qubits decay exponentially with time, this runtime speedup also leads to reductions in error rates.
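The caching pattern behind that heuristic can be pictured as follows. The sketch below is a hypothetical illustration (the function names are invented; this is not the EPiQC compiler's API): fully specified circuit segments are compiled once and reused, while only the small parameter-dependent segments are recompiled for each trial.

```python
# Hypothetical illustration of the partial-compilation idea (the names are
# invented; this is not the EPiQC compiler's API). Fully specified segments
# are compiled once and cached; parameter-dependent segments are recompiled
# on every variational trial.
from functools import lru_cache

@lru_cache(maxsize=None)
def compile_fixed_segment(segment_id):
    """Expensive optimization of a fully specified segment; runs once per segment."""
    print(f"compiling fixed segment {segment_id} (happens only once)")
    return f"fixed_pulses[{segment_id}]"

def compile_parameterized_segment(segment_id, theta):
    """Cheap per-trial compilation of the small part that depends on theta."""
    return f"variable_pulses[{segment_id}, theta={theta:.3f}]"

def compile_circuit(thetas):
    """Assemble the full schedule: cached fixed parts plus fresh variable parts."""
    schedule = []
    for seg, theta in enumerate(thetas):
        schedule.append(compile_fixed_segment(seg))            # reused across trials
        schedule.append(compile_parameterized_segment(seg, theta))
    return schedule

# Across repeated trials, only the parameterized pieces trigger new work.
for trial in ([0.10, 0.20], [0.15, 0.22], [0.18, 0.25]):
    compile_circuit(trial)
```

The point of precompiling is that the expensive, high-quality optimization of the fixed segments only has to run once, rather than on every trial of the variational loop.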

For more complicated algorithms, the researchers apply a second layer of optimizations that numerically characterize variations due to the unspecified parameters, through a process called hyperparameter optimization. “Spending a few minutes on hyperparameter tuning and partial compilation leads to hours of savings in execution time”, summarizes Gokhale. Professor Chong notes that this theme of realizing cost savings by shifting resources—whether between traditional and quantum computing or between compilation and execution—echoes in several other EPiQC projects.

The researchers next aim to demonstrate their work experimentally. Such experimental validation has become possible only recently, with the release of cloud-accessible quantum computers that can be controlled at the level of analog pulses. This level of control is much closer to hardware than standard gate-based control, and the researchers expect to realize greater efficiency gains from this pulse interface.

The researchers’ paper, “Partial Compilation of Variational Algorithms for Noisy Intermediate-Scale Quantum Machines” (arXiv link) will be presented at the MICRO computer architecture conference in Columbus, Ohio on October 14. Gokhale and Chong’s co-authors include Yongshan Ding, Thomas Propson, Christopher Winkler, Nelson Leung, Yunong Shi, David I. Schuster, and Henry Hoffmann, all also from the University of Chicago.




More information: Partial Compilation of Variational Algorithms for Noisy Intermediate-Scale Quantum Machines, arXiv:1909.07522 [quant-ph], https://arxiv.org/abs/1909.07522 DOI: 10.1145/3352460.3358313

https://www.technologyreview.com/f/614551/ai-computer-vision-algorithms-on-your-phone-mit-ibm/

An image of hand gestures being recognized on a mobile phone

Researchers have shrunk state-of-the-art computer vision models to run on low-power devices.

Growing pains: Visual recognition is deep learning’s strongest skill. Computer vision algorithms are analyzing medical images, enabling self-driving cars, and powering face recognition. But training models to recognize actions in videos has grown increasingly expensive. This has fueled concerns about the technology’s carbon footprint and its increasing inaccessibility in low-resource environments.

The research: Researchers at the MIT-IBM Watson AI Lab have now developed a new technique for training video recognition models on a phone or other device with very limited processing capacity. Typically, an algorithm will process video by splitting it up into image frames and running recognition algorithms on each of them. It then pieces together the actions shown in the video by seeing how the objects change over subsequent frames. The method requires the algorithm to “remember” what it has seen in each frame and the order in which it has seen it. This is unnecessarily inefficient.

In the new approach, the algorithm instead extracts basic sketches of the objects in each frame, and overlays them on top of one another. Rather than remember what happened when, the algorithm can get an impression of the passing of time by looking at how the objects shift through space in the sketches. In testing, the researchers found that the new approach trained video recognition models three times faster than the state of the art. It was also able to quickly classify hand gestures with a small computer and camera running only on enough energy to power a bike light.
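To make the contrast concrete, here is a toy numerical sketch of the overlay idea (the helper names are invented and this is not the MIT-IBM model): each frame is reduced to a coarse edge sketch, and motion is read off from how consecutive sketches differ, with no per-frame "memory" to maintain.

```python
# Toy numerical sketch of the "overlay sketches" idea (invented helper names;
# not the MIT-IBM model). Each frame is reduced to a coarse edge sketch, and
# motion is read off from how consecutive sketches differ over the clip.
import numpy as np

def frame_sketch(frame, grid=8):
    """Reduce one grayscale frame (H x W) to a coarse grid x grid map of edge energy."""
    gy, gx = np.gradient(frame.astype(float))
    edges = np.hypot(gx, gy)
    h, w = edges.shape
    cropped = edges[: h - h % grid, : w - w % grid]
    return cropped.reshape(grid, h // grid, grid, w // grid).mean(axis=(1, 3))

def motion_descriptor(frames):
    """Overlay consecutive sketches: their differences capture how objects shift."""
    sketches = np.stack([frame_sketch(f) for f in frames])
    return np.abs(np.diff(sketches, axis=0)).sum(axis=0)   # one compact map per clip

# A bright square sliding to the right yields a descriptor concentrated along its path.
clip = np.zeros((6, 64, 64))
for t in range(6):
    clip[t, 20:30, 5 + 8 * t: 15 + 8 * t] = 1.0
print(motion_descriptor(clip).round(2))
```

A real model would learn these sketches and how they are combined, but the toy version shows why overlaying lightweight per-frame summaries is cheaper than carrying full frames through a recurrent state.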

Why it matters: The new technique could help reduce lag and computation costs in existing commercial applications of computer vision. It could, for example, make self-driving cars safer by speeding up their reaction to incoming visual information. The technique could also unlock new applications that previously weren’t possible, such as by enabling phones to help diagnose patients or analyze medical images.

Distributed AI: As more and more AI research gets translated into applications, the need for tinier models will increase. The MIT-IBM paper is part of a growing trend to shrink state-of-the-art models to a more manageable size.

https://electrek.co/2019/10/11/tesla-self-driving-price-increase-1000-november-1/

In a series of tweets today, Tesla CEO Elon Musk talked about future plans for Tesla’s Full Self-Driving capability.  Notably, three weeks from now on November 1, Tesla will go through with a planned price increase for Full Self Driving software, increasing the price by $1,000.

The software currently costs $6,000 as an option on any Tesla vehicle.  This cost will rise to $7,000 at the end of this month.

Despite the name, the “Full Self Driving” package does not make any Tesla car actually capable of driving itself with no human intervention.  That capability is expected to be rolled out over the course of the coming years.

Tesla’s Full Self Driving option has received a lot of changes over the course of the last year.  Previously there was a differentiation between the “Enhanced Autopilot” and “Full Self Driving” packages, but Tesla has since unbundled some Enhanced Autopilot features to make them standard and wrapped the rest into the Full Self Driving package.  Tesla describes the differences between the features on its website here.

Elon Musk

@elonmusk

Now that Tesla V10.0 with Smart Summon is out, Full Self-Driving price will increase by $1000 on Nov 1


This price increase follows the recent release of Tesla’s “Smart Summon” feature as part of the new V10 software.  With this feature, owners can open the Tesla app and have their car come to them across a parking lot or other non-public road area, navigating at low speeds with no driver.

Tesla has committed to gradual price increases as more software capabilities get rolled out.  Earlier this year, Tesla planned to increase the price in August, then postponed that increase until after the release of smart summon.  Since smart summon is now out, Tesla is going forward with the promised increase.

In the long term, Musk has even stated that Tesla plans to stop selling cars at consumer-accessible prices once self-driving is solved, as he believes it will be more profitable for the company to run cars as taxis than to sell them to end customers.

This all relies on the implementation of the Tesla Network, Tesla’s planned self-driving robotaxi fleet which owners will be able to participate in.  Musk thinks that owners will be able to make a career out of managing a fleet of robotaxis:

Luis Ramirez@cutza7

How will your goal of making the most affordable electric car for the masses be achieved if over the long run the cost will continue increasing as FSD keeps improving?

Elon Musk

@elonmusk

When the car is FSD without supervision, ie robotaxi, you’ll be able to earn far more than monthly lease/loan cost by allowing others to use it. Managing a small fleet of robotaxis will be a career for many & much better than driving a single car.


Tesla Network is not currently implemented, and we don’t have a solid timeline on when it will be implemented (though Tesla says it wants to release the Tesla Network prior to the robotaxi rollout).  Tesla does keep moving forward on driver-assist features, but nothing the cars can do today can truly be called “self-driving.”

Musk also talked about the promised “Hardware 3.0” upgrade, which installs Tesla’s new “FSD Computer” in cars whose owners have purchased Full Self Driving.  The hardware currently inside these cars is not capable of running Tesla’s future self-driving software, but Tesla has engineered a much more capable computer to allow for eventual advances.

Tesla recently started installing these retrofits in some cars, but it will take some time to get around to every car.  Today, Musk mentioned the logistic problems involved with upgrading tens of thousands of cars without putting undue stress on Tesla’s already-overtaxed service centers:

Anner J. Bonilla🇵🇷🛩️🔋🔧@annerajb

When can we get upgrade to hw 3.0? From 2.5?

Elon Musk

@elonmusk

Working with engineering team to figure out best way to do upgrade without crushing service team. Will start doing upgrades in volume in a few months, coincident with more FSD features being released.


Given that these computers don’t yet provide a tangible benefit in current cars, the wait is no big deal.  Their enhanced computing power is not yet being used, since the Full Self Driving software that requires it hasn’t been released.  So owners will have to wait patiently, and Tesla will reach out when these computers are available.

Finally, Musk also hinted at an upcoming release.  Autopilot currently does not act on street signs and traffic lights, though we know that the software is capable of recognizing them.  Some hackers have even managed to enable a development feature that allows cars to stop at stop lights on their own.

When asked by one tweeter for word on when the car will have this capability in public release, Musk had a simple reply:

wilson lam@wilsonlam

Any word on when Navigate on Autopilot for street level (aka read traffic lights and signs) will be out?
