Google Home will let you adjust podcast playback speed

“Hey Google, play twice as fast.”

Raspberry Pi’s Raspbian OS Updated With New Kernel, Startup Wizard Improvements
Written by Michael Larabel in Operating Systems on 11 October 2018 at 09:56 AM EDT.

Raspberry Pi’s Debian-based Raspbian OS has been updated today with four months’ worth of improvements for this popular ARM SBC.

One of the most user-facing changes in the new Raspbian 2018-10-09 update is a set of startup wizard improvements. The initial-run wizard now installs more language support packages, improves keyboard-only workflows for users without a mouse, handles network connectivity better, adds an IP address indicator, and makes other changes for a better out-of-the-box experience.

This Raspbian update also moves to the Linux 4.14.71 kernel, up from 4.14.50 in the older build from this summer. There is also a slew of package updates, including a newer RealVNC server, the bundling of libav-tools, the removal of Mathematica, and other packaging changes.

Raspbian 2018-10-09 also has support for the Raspberry Pi PoE HAT, updated firmware images, hardware acceleration support for FFmpeg, support for DHCPCD with 3G network devices, and a variety of other updates.

The new Raspbian 2018-10-09 installation image is available, as always, from the Raspberry Pi downloads site.

Deep learning? Here’s how to exercise your neural networks

Getting practical with machine learning and AI

There are many ways machines can learn, but for humans nothing beats getting together with like-minded souls who’ve trodden a similar path.

So if you’re looking to sharpen up your work in machine learning, artificial intelligence or data science, you should be joining us on Monday morning when we open the doors to Minds Mastering Machines.

We’ve assembled a lineup of speakers who can take you from the fundamentals through to the practical application of key technologies and techniques including TensorFlow, Keras, Lime, GPUs, Deep Learning and Reinforcement Learning.

And they’ll be discussing how they’ve applied them in areas such as financial trading, military vehicles, architectural engineering, and just keeping the trains running.

We also have some spaces left in our brace of workshops covering developing and deploying machine learning and using the cloud, containers and DevOps to get your project into production.

This all happens at 30 Euston Square on October 15 to 17, and because this conference is brought to you by The Register and Heise, you can be sure the conversation will flow at lunch, and at our first night drinks party.

But time is running out. Head to the MCubed website today, and secure your place. See you next week.

You look familiar: Humans recognise 5,000 faces, says study

Through most of history humans lived in small groups of a hundred or so individuals, a pattern that has changed drastically in recent centuries.


From family and friends to strangers on the subway and public figures on 24-hour news cycles, humans recognise an astonishing 5,000 faces, scientists said on Wednesday in the first study of its kind. Through most of history humans lived in small groups of a hundred or so individuals, a pattern that has changed drastically in recent centuries.

A study by scientists at Britain’s University of York found that our facial recognition abilities allow us to process the thousands of faces we encounter in busy social environments, on our smartphones and our television screens every day.

“In everyday life, we are used to identifying friends, colleagues, and celebrities, and many other people by their faces,” Rob Jenkins, from York’s Department of Psychology, told AFP.

“But no one has established how many faces people actually know.”

For the study, published in the journal Proceedings of the Royal Society B, Jenkins and his team asked participants to write down as many faces as they could remember from their personal lives. The volunteers were then asked to do the same with people they recognised but did not know personally.

They were also shown thousands of images of famous people — two photos of each to ensure consistency — and asked which ones they recognised. The team found an enormous range in the number of faces each participant could recall, from roughly 1,000 to 10,000.

“We found that people know around 5,000 faces on average,” Jenkins said.

“It seems that whatever mental apparatus allows us to differentiate dozens of people also allows us to differentiate thousands of people.”

Never forget a face

The team said it believes this figure, the first ever baseline of human “facial vocabulary”, could aid the development of facial recognition software increasingly used at airports and in criminal investigations.

It may also help scientists better understand cases of mistaken identity.

“Psychological research in humans has revealed important differences between unfamiliar and familiar face recognition,” said Jenkins.

“Unfamiliar faces are often misidentified. Familiar faces are identified very reliably, but we don’t know exactly how.”

While the team said it was focused on how many faces humans actually know, they said it might be possible for some people to continue learning to recognise an unlimited number of faces, given enough practice.

They pointed out that the brain has an almost limitless capacity to memorise words and languages — the limits on these instead come from study time and motivation.

The range of faces recognised by participants went far beyond what may have been evolutionarily useful: for thousands of years, humans would likely have met only a few dozen people throughout their lives.

Jenkins said it was not clear why we developed the ability to distinguish between thousands of faces in the crowd.

“This could be another case of ‘overkill’ that is sometimes seen in nature,” he said.

Noise pollution is worse than ever – here is how you can avoid it damaging your health

Francesca Specter

Yahoo Style UK deputy editor

Noise pollution is a very real threat to your overall health – and it’s getting worse, according to a new report from the World Health Organisation.

The publication, released today, aims to tackle the serious health implications noise pollution can have for one in five of us in Europe.

“Noise pollution in our towns and cities is increasing, blighting the lives of many European citizens,” said Dr Zsuzsanna Jakab, the WHO’s regional director for Europe. “More than a nuisance, excessive noise is a health risk.”

Exposure to excessive noise can lead to a number of conditions, including cognitive impairment in children, sleep disturbance, cardiovascular disease, tinnitus and annoyance, the report explains.

Here’s how you can reduce your own exposure to noise, based on NHS guidelines for hearing:

1. Avoid loud noises

“The best way to avoid noise-induced hearing loss is to keep away from loud noise as much as you can,” the website advises.

A quick test is, if you have to raise your voice to talk to others, it’s probably too loud. Ditto if your ears hurt, or if you have ringing in your ears afterwards.

2. Take care when listening to music

“Listening to loud music through earphones and headphones is one of the biggest dangers to your hearing,” says the NHS. Try purchasing a noise-cancelling pair, or keeping the volume below 60% of maximum, the guidelines recommend.

3. Protect your hearing

Try to wear earplugs when you attend a nightclub or concert, to protect your ears from excessive noise. Alternatively, move away from loudspeakers and try to take a break from the noise every 15 minutes.

4. Take precautions at work

“Your employer is obliged to make changes to reduce your exposure to loud noise,” explains the website – so make sure you are provided with hearing protection such as ear muffs or earplugs if you need it, and be sure to wear it.

5. Get your hearing tested

If you are worried you are losing your hearing, get a test. The NHS says: “The earlier hearing loss is picked up, the earlier something can be done about it.”

MIT researchers develop a chip design to take us closer to computers that work like human brains

Scientists at MIT are developing brains-on-a-chip for neuromorphic computing.
It would allow machines to process facts, patterns and learning tasks at lightning speed, and could fast-forward the development of humanoids and autonomous driving technology.
Last year the market for chips that enable machine learning was worth approximately $4.5 billion, according to Intersect360.
From left: MIT researchers Scott H. Tan, Jeehwan Kim and Shinhyun Choi have unveiled a neuromorphic chip design that could represent the next leap for AI technology. The secret: a design that creates an artificial synapse for “brain on a chip” hardware.
While the pace of machine learning has quickened over the last decade, the underlying hardware enabling machine-learning tasks hasn’t changed much: racks of traditional processing chips, such as central processing units (CPUs) and graphics processing units (GPUs), combined in large data centers.

But on the cutting edge of processing is an area called neuromorphic computing, which seeks to make computer chips work more like the human brain — so they are able to process multiple facts, patterns and learning tasks at lightning speed. Earlier this year, researchers at the Massachusetts Institute of Technology unveiled a revolutionary neuromorphic chip design that could represent the next leap for AI technology.

The secret: a design that creates an artificial synapse for “brain on a chip” hardware. Today’s digital chips make computations based on binary, on/off signaling. Neuromorphic chips instead work in an analog fashion, exchanging bursts of electric signals at varying intensities, much like the neurons in the brain. This is a breakthrough, given that there are “more than 100 trillion synapses that mediate neuron signaling in the brain,” according to the MIT researchers.

More from Business of Design:
Soon you may be able to change your car’s interior at the touch of a button
The secret trigger that makes you reach for your favorite bottle of wine
These tiny homes offer a breathtaking retreat for nature lovers who want to escape the modern world

The MIT research, published in the journal Nature Materials in January, demonstrated a new design for a neuromorphic chip built from silicon germanium. Think of a window screen, and you have an approximation of what this chip looked like at the microscopic level. The structure made for pathways that allowed the researchers to precisely control the intensity of electric current. In one simulation, the MIT team found its chip could represent samples of human handwriting with 95 percent accuracy.
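The “window screen” crossbar arrangement lends itself to a simple numerical sketch. The following is an illustrative model only (not the MIT team’s actual silicon-germanium design), showing how a grid of programmable conductances computes a weighted sum, the core neural-network operation, in a single analog step:

```python
import numpy as np

# Illustrative crossbar model: rows carry input voltages, columns collect
# output currents, and each crossing point is a programmable conductance
# acting as a synaptic weight.
rng = np.random.default_rng(0)

n_inputs, n_outputs = 4, 3
conductances = rng.uniform(0.0, 1.0, size=(n_inputs, n_outputs))  # weights
voltages = np.array([0.2, 0.0, 0.5, 0.1])                         # inputs

# Each crossing contributes current I = V * G (Ohm's law), and each column
# wire sums its contributions (Kirchhoff's current law), so the whole
# matrix-vector product happens in one analog step.
currents = voltages @ conductances  # one output current per column
```

In a physical crossbar the multiply comes from Ohm’s law and the accumulate from Kirchhoff’s current law, which is why precisely controlling the current through each pathway matters so much.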

“Supercomputer-based artificial neural network operation is very precise and very efficient. However, it consumes a lot of power and requires a large footprint,” said lead researcher Jeehwan Kim, professor and principal investigator in MIT’s Research Laboratory of Electronics and Microsystems Technology Laboratories.

Eventually, such a chip design could lead to processors capable of carrying out machine learning tasks with dramatically lower energy demands. It could fast-forward the development of humanoids and autonomous driving technology.

Other advantages are cost savings and improved portability. It’s thought that small neuromorphic chips would consume much less power — perhaps as much as 1,000 times less — while efficiently processing millions of computations simultaneously, something currently possible only with large banks of supercomputers.

“That’s exactly what people are envisioning: a larger category of problems can be done on a single chip, and over time that migrates into something very portable,” said Addison Snell, CEO of Intersect360 Research, an industry analyst that tracks high-performance computing.

The current market for chips that enable machine learning is quite large. Last year, according to Intersect360, the market was worth approximately $4.5 billion. Neuromorphic chips represent a tiny sliver: according to Deloitte, fewer than 10,000 neuromorphic chips will probably be sold this year, whereas it expects more than 500,000 GPUs to be sold in 2018.

GPUs were developed initially by Nvidia in the 1990s for computer-based gaming. Eventually, researchers discovered they were highly effective at supporting machine-learning tasks via artificial neural networks, which are run on supercomputers and allow for the training and inference tasks that make up the main segments of any AI workflow. (If you want to build an image-recognition system that knows what is and what isn’t a tiger, you first feed the network millions of images labeled by humans as “tigers” or “not tigers,” which trains the computer algorithm. Next time the system is shown a photo of a tiger, it will be able to infer that the image is indeed a tiger.)
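The tiger example boils down to a two-phase workflow: fit weights on labeled data (training), then score unseen inputs with those fitted weights (inference). A minimal sketch, using plain logistic regression on synthetic feature vectors in place of a real neural network and a real labeled image set:

```python
import numpy as np

# Minimal sketch of the train-then-infer workflow, using logistic
# regression on synthetic feature vectors in place of a real neural
# network and a real labeled "tiger" image dataset.
rng = np.random.default_rng(42)

# "Training set": 200 feature vectors labeled 1 (tiger) or 0 (not tiger).
X = rng.normal(size=(200, 8))
true_w = rng.normal(size=8)          # hidden rule generating the labels
y = (X @ true_w > 0).astype(float)

# Training: gradient descent on the logistic log-loss.
w = np.zeros(8)
lr = 0.1
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w)))   # predicted probabilities
    w -= lr * X.T @ (p - y) / len(y)     # average gradient step

def infer(x):
    """Inference: classify a new, unlabeled feature vector."""
    return 1.0 / (1.0 + np.exp(-(x @ w))) > 0.5

# Accuracy on the labeled training set.
train_acc = np.mean((X @ w > 0) == (y == 1.0))
```

Training is the expensive, GPU-hungry phase; inference reuses the learned weights and is comparatively cheap, which is exactly the step neuromorphic hardware aims to make portable.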

The evolution of machine learning
But in recent years small start-ups and big companies alike have been modifying their chip architecture to meet the demands of new artificial intelligence workloads, including autonomous driving and speech recognition. Two years ago, according to Deloitte, almost all the machine-learning tasks that involved artificial neural networks made use of large banks of GPUs and CPUs. This year new chip designs, such as FPGAs (field programmable gate arrays) and ASICs (application-specific integrated circuits), make up a larger share of machine-learning chips in data centers.

“These new kinds of chips should increase dramatically the use of machine learning, enabling applications to consume less power and at the same time become more responsive, flexible and capable,” according to a Deloitte market analysis published this year.

Neuromorphic chips represent the next level, especially as chip architecture based on the premise of shrinking transistors has begun to slow down. Although neuromorphic computing has been around since the 1980s, it’s still considered an emerging field — albeit one that has garnered more attention from researchers and tech companies over the last decade.

“The power and performance of neuromorphic computing is far superior to any incremental solution we can expect on any platform,” said Dharmendra S. Modha, IBM chief scientist for brain-inspired computing.

A 64-chip array of IBM’s TrueNorth chips, which represents 64 million neurons.
Modha initiated IBM’s own project into neuromorphic chip design back in 2004. Funded in part by the Defense Advanced Research Projects Agency, the years-long effort by IBM researchers resulted in TrueNorth, a neuromorphic chip the size of a postage stamp that draws just 70 milliwatts of power, or the same amount required by a hearing aid.

“We don’t envision that neuromorphic computing will replace traditional computing, but I believe it will be the key enabling technology for self-driving cars and for robotics,” Modha said.

For computing at the edge — like the reams of data a self-driving car must process in real time to prevent crashing — small, portable neuromorphic chips would represent a boon. Indeed, the ultimate end game is taking a deep neural network and embedding it onto a single chip. Current neuromorphic technology is far from that, however.

The MIT research spearheaded by Kim took about three years and still continues, thanks to a $125,000 grant from the National Science Foundation.

“People have been pursuing neuromorphic computing for decades. We’re getting closer to where such chips are possible,” said Intersect360’s Snell. “But in the near term the market will be more geared toward what can be done with traditional processing elements.”

New machine learning technology to predict human blood pressure: Study

Using machine learning and the data from existing wearable devices, they developed an algorithm to predict the users’ blood pressure and show which particular health behaviours affected it most.

New York: Researchers, including one of Indian origin, have developed off-the-shelf wearable and machine learning technology that can predict an individual’s blood pressure and provide personalised recommendations to lower it. “When doctors tell their patients to make a lot of significant lifestyle changes – exercise more, sleep better, lower their salt intake, etc. – it can be overwhelming, and compliance is not very high,” said Sujit Dey, professor in the Department of Electrical and Computer Engineering at the University of California in the US, in a statement.

“What if we could pinpoint the one health behaviour that most impacts an individual’s blood pressure, and have them focus on that one goal instead?” Dey said. The study affirmed the importance of personalised data over generalised information, as the former was more effective. The team collected sleep, exercise and blood pressure data from eight patients over 90 days. Using machine learning and the data from existing wearable devices, they developed an algorithm to predict the users’ blood pressure and show which particular health behaviours affected it most.
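The study’s exact algorithm isn’t specified here, so the following is only a hedged sketch of the general approach: fit a simple regression from standardized daily-behaviour features (the feature names are invented for illustration) to blood pressure readings, then rank coefficient magnitudes to flag the most impactful behaviour:

```python
import numpy as np

# Hedged sketch only, not the study's actual model: ordinary least
# squares from standardized daily-behaviour features to blood pressure,
# with coefficient magnitudes as a crude measure of behavioural impact.
rng = np.random.default_rng(1)

features = ["sleep_hours", "exercise_minutes", "salt_intake"]
X = rng.normal(size=(90, 3))             # 90 days of standardized features
coef_true = np.array([-4.0, -2.0, 6.0])  # synthetic ground-truth effects
bp = 120 + X @ coef_true + rng.normal(scale=1.0, size=90)  # systolic BP

# Fit with an intercept column via least squares.
A = np.column_stack([np.ones(len(bp)), X])
theta, *_ = np.linalg.lstsq(A, bp, rcond=None)

# With standardized features, the largest |coefficient| flags the
# behaviour that moves the predicted blood pressure the most.
impact = dict(zip(features, np.abs(theta[1:])))
top_behaviour = max(impact, key=impact.get)
```

Comparing absolute coefficients on standardized inputs is a crude but common proxy for feature importance; the actual study could use any number of richer machine-learning models for the same purpose.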

“This research shows that using wireless wearables and other devices to collect and analyse personal data can help transition patients from reactive to continuous care,” Dey said. “Instead of saying ‘My blood pressure is high, therefore I’ll go to the doctor to get medicine’, giving patients and doctors access to this type of system can allow them to manage their symptoms on a continuous basis,” he noted.


Grape compound can help protect against lung cancer: Study
Updated Oct 06, 2018 | 18:38 IST | IANS
Researchers have found that resveratrol, a molecule found in grape skin, seeds and red wine, can help protect against lung cancer.

London: Researchers have found that a molecule — resveratrol — found in grape skin, seeds and red wine can protect against lung cancer. Lung cancer is the deadliest form of the disease in the world and 80 per cent of deaths are related to smoking. In addition to tobacco control, effective chemo-prevention strategies are therefore needed. In experiments in mice, the researchers from the University of Geneva (UNIGE) prevented lung cancer induced by a carcinogen found in cigarette smoke by using resveratrol.

“We observed a 45 per cent decrease in tumour load per mouse in the treated mice. They developed fewer tumours and of smaller size than untreated mice,” said Muriel Cuendet, associate professor at the varsity. The team conducted their 26-week study on four groups of mice. The first one — the control — received neither carcinogen nor resveratrol treatment. The second received only the carcinogen. The third received both the carcinogen and the treatment, whereas the fourth received only the treatment.

When comparing the two groups that were not exposed to a carcinogen, 63 per cent of the mice treated did not develop cancer, compared to only 12.5 per cent of the untreated mice. “Resveratrol could, therefore, play a preventive role against lung cancer,” Cuendet added. This formulation is applicable to humans, the researchers noted.

However, when ingested, resveratrol did not prevent lung cancer, as it is metabolised and eliminated within minutes and does not have time to reach the lungs. Conversely, when the molecule was administered through the nasal route, it was found to be much more effective, allowing the compound to reach the lungs.

The resveratrol concentration obtained in the lungs after nasal administration of the formulation was 22 times higher than when taken orally, the researchers said.