https://www.forbes.com/sites/greatspeculations/2018/02/26/investing-for-the-singularity/#3407d63a5c34

Investing for ‘The Singularity’ With Softbank

You do not need to be a Sci-Fi nerd to believe machine intelligence will eventually be more powerful than human intelligence, or to think futurist Ray Kurzweil might be on to something when he describes how revolutions in genetics, nanotechnology and robotics are ushering in a New Age for humanity, in what he calls ‘The Singularity.’

‘The Singularity’ is more commonly thought of as the time when machines become smarter than people. I mention it because I believe ‘The Singularity’ is related to the thesis that the bull market in stocks might end in a melt-up phase. A melt-up is when an expensive market is driven ever higher, propelled by compelling stories, not fundamentals, as investors stampede into stocks for fear of missing out. During a melt-up, gains tend to be large and transitory, as melt-ups typically precede melt-downs. Current adherents to the melt-up thesis include Jeremy Grantham of GMO and Legg Mason’s Bill Miller.

Companies that do best during a melt-up are the poster children for that bull market. They feed the narrative that things are different this time and serve as ‘proof’ that a new era is upon us, two conditions Robert Shiller identifies as necessary for a bubble in his seminal book, Irrational Exuberance.

In the late 1990s, the bubble narrative that led to the mother of all melt-ups had to do with the Internet transforming everyday life. This time around, I believe a compelling story will involve the information revolution and the implicit assumption that ‘The Singularity’ will be achieved in the not-so-distant future.

The thesis is straightforward. The Internet of Things (IoT) is advancing at lightning speed, creating an economy that is intensely connected.

This connectivity has two important components. First, connectivity is spawning an unimaginable amount of data. Second, machines are harvesting the data and, critically, are constantly learning from it (artificial intelligence, or AI). Both trends are accelerating exponentially. At the same time, Moore’s Law remains in effect (https://en.wikipedia.org/wiki/Moore%27s_law).
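To make the exponential claim concrete, here is a minimal back-of-the-envelope sketch assuming the textbook statement of Moore’s Law (transistor counts doubling roughly every two years); the 1971 Intel 4004 starting point is a commonly cited historical figure, not something from the article, and the projections are idealized:

```python
# Rough illustration of Moore's Law: transistor counts doubling about every
# two years. The starting point (~2,300 transistors on the Intel 4004, 1971)
# is a commonly cited historical figure; later values are idealized projections.
DOUBLING_PERIOD_YEARS = 2.0
START_YEAR, START_TRANSISTORS = 1971, 2_300

def projected_transistors(year: int) -> float:
    """Idealized transistor count per chip under a strict two-year doubling law."""
    return START_TRANSISTORS * 2 ** ((year - START_YEAR) / DOUBLING_PERIOD_YEARS)

if __name__ == "__main__":
    for year in (1971, 1991, 2011, 2018):
        print(f"{year}: ~{projected_transistors(year):,.0f} transistors per chip")
```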

https://9to5google.com/2018/02/26/google-pixel-visual-core-service-app/

Google updating the Pixel 2’s Visual Core algorithms through the Play Store

Earlier this month, Google enabled the Pixel Visual Core for HDR+ on third-party imaging apps like Instagram, Snapchat, and WhatsApp. Today, Pixel 2 and Pixel 2 XL devices are getting an update to the “Pixel Visual Core Service.”

Google maintains several “Service” apps for the Pixel, like “Pixel Ambient Services” for the music recognition feature and “Digital Health Services” for the battery app. More broadly, there are also Carrier Services and, of course, Google Play services.

The latest Pixel Visual Core update brings more efficient HDR+ image processing and other machine learning improvements. Google notes on the Play Store description that “this application updates image processing and machine learning algorithms on Pixel 2 phones.”

Version 1.0.185741828 (from 1.0.166778097) is rolling out to Pixel 2 devices this morning, with the update weighing in at only 109 kB. Presumably, users will see some benefit from these updates, but it is unlikely to be significantly noticeable. The Play Store’s screenshots just show basic information about software versions.

Meanwhile, the application ID (com.google.android.imaging.easel.service) for this app provides another confirmation that Google’s first custom SoC is codenamed “Easel.”


https://www.androidpolice.com/2018/02/26/google-word-coach-tests-vocabulary-knowledge-bite-sized-questions/

Google Word Coach tests your vocabulary knowledge in bite-sized questions

Google is always making improvements to its search, whether it’s through knowledge cards or little Easter Eggs. This new Word Coach addition can count a little bit as both: it’s informative, but it’s also designed like a game.

Google already has a dictionary/thesaurus that shows up in search results when you search for “word meaning” or “define word,” and it was recently updated with a search box and history. The Word Coach complements it and is starting to pop up below the dictionary card with a small 5-question test that first relates directly to the word you searched, then spreads a little to other words.

However, you can also invoke Word Coach directly in the Google app or Chrome on your phone, by searching for “Google word coach,” or simply, “word coach.” Some questions ask for synonyms, others antonyms, and some use images. There are always 2 choices to pick from and at the end of the 5 questions, you get a little score card with explanations for each of the answers. There’s also a “Next round” button to continue with another test.

If you answer a couple of rounds without making any mistakes, you might get a card to level up. I did so a few times until I started getting words like solicitous, which are at the very limit of my expanding English-as-a-third-language vocabulary. SAT flashbacks guaranteed.

The main downside to this educational game/test right now is that it doesn’t seem to remember your score. If you close your search and try it again, you go back to 0 and have to level up from the start.

https://thenextweb.com/artificial-intelligence/2018/02/26/googles-deepmind-teaches-ai-to-predict-death/

Google’s DeepMind teaches AI to predict death

DeepMind wants to solve the problem of patient deterioration in hospitals. The Google sister company fed its AI the historical medical records of about 700,000 US veterans in hopes it will learn to predict changes in patient condition that, unchecked, lead to death.

The partnership between DeepMind and the Veterans Administration (VA) brings some of the top minds in artificial intelligence research together with “world-renowned clinicians and researchers” working for the government.

Basically, the US government is turning to, arguably, the smartest computer on the planet in order to find a cure for human error. According to the laws of 1980s movies, the robots will be attacking by the time you finish reading this sentence.

All kidding aside, AI is transforming the medical field – as we’ve written before – and this is another example of that sweeping change.

Traditionally, nurses are responsible for monitoring patients. Since it isn’t feasible to place every patient under constant direct care, the vast majority of patient monitoring is done remotely through electronics and sensors like EKGs and respirators. Nurses and doctors make rounds, checking in on each patient, and listen for alarms at a central station, but there’s really no one watching most patients the majority of the time.

If DeepMind can teach AI to figure out why patients deteriorate, then machines can, theoretically, take over monitoring duties. And it’s absolutely feasible for AI to continuously watch every patient all the time — computers don’t take breaks or get tired. This might not be the instant solution to human error in the medical field – the third leading cause of death in the US – but it’s a start.
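DeepMind’s actual models and the VA data are not public, but the general idea of learning a deterioration-risk score from routinely collected measurements can be sketched with ordinary tools. The following is a toy illustration on synthetic vitals; all feature names, coefficients, and data are invented for the example, not taken from the study:

```python
# Illustrative sketch only: a generic risk model trained on synthetic vital
# signs, NOT DeepMind's actual approach or the VA data (which are not public).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical features routinely captured by bedside monitoring:
# heart rate, systolic blood pressure, oxygen saturation, serum creatinine.
X = np.column_stack([
    rng.normal(80, 15, n),    # heart rate (bpm)
    rng.normal(120, 20, n),   # systolic BP (mmHg)
    rng.normal(96, 3, n),     # SpO2 (%)
    rng.normal(1.0, 0.4, n),  # creatinine (mg/dL)
])

# Synthetic "deterioration" label: risk rises with tachycardia, hypotension,
# low oxygen saturation, and rising creatinine (invented coefficients).
logit = (0.04 * (X[:, 0] - 80) - 0.05 * (X[:, 1] - 120)
         - 0.3 * (X[:, 2] - 96) + 2.0 * (X[:, 3] - 1.0) - 2.0)
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Risk score for a new (hypothetical) patient reading.
new_patient = [[110, 95, 90, 2.1]]  # tachycardic, hypotensive, hypoxic, high creatinine
print("Predicted deterioration risk:", model.predict_proba(new_patient)[0, 1])
```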

It’s worth mentioning that all the data gleaned from service members’ records was scrubbed of personal information in order to maintain the privacy of the veterans they belonged to.

The team focused on a specific problem in order to lay the groundwork for further work in the field. According to a DeepMind blog post:

We’re focusing on Acute Kidney Injury (AKI), one of the most common conditions associated with patient deterioration, and an area where DeepMind and the VA both have expertise. This is a complex challenge, because predicting AKI is far from easy. Not only is the onset of AKI sudden and often asymptomatic, but the risk factors associated with it are commonplace throughout hospitals. AKI can also strike people of any age, and frequently occurs following routine procedures and operations like a hip replacement.

DeepMind may be the smartest team in artificial intelligence research. There’s no better place to focus its energy and effort at the cutting-edge than in saving human lives. Solving the human-error problem in hospitals could dramatically lengthen our species’ expected lifespan. It would be huge.

But, we’d better have Matthew Broderick standing by just in case.

http://www.labmanager.com/news/2018/02/using-light-and-gold-nanoparticles-for-targeted-non-invasive-drug-delivery#.WpSBa5M-cmI

Using Light and Gold Nanoparticles for Targeted, Non-Invasive Drug Delivery

Technion researchers have developed a technology that enables drugs to be delivered and released only at the diseased tissue that the drug is targeting

Image: gold nanoparticles and NIR light

Over the last century, there has been astounding progress in medical science, leading to the development of efficient, effective medications for treating cancer and a wide variety of other diseases. But the random dispersion of drugs throughout the body often lowers their effectiveness and, even worse, damages healthy tissue. A prime example of this is the use of chemotherapy drugs, which work to block cell division, causing hair loss and bowel issues in cancer patients (hair growth and waste elimination both depend on rapid cell turnover).

This has led to a global effort to develop smarter systems for drug delivery that will more effectively target the specific part of the body affected by cancer, bypassing healthy tissue. A recent issue of ACS Applied Materials & Interfaces presents groundbreaking work in the field by the Technion Faculty of Biotechnology and Food Engineering.

Doctoral candidate Alona Shagan and assistant professor Boaz Mizrahi have developed a technology that enables drugs to be delivered and released only at the diseased tissue that the drug is targeting. The new method uses a unique polymer coating that contains nanoscale gold particles, in addition to the drug itself. The drug is released only when light shines on the gold particles, causing the polymer coating to melt.

“Photo-triggered materials fulfill a vital role in a range of bio-medical applications,” said Shagan. “But despite this enormous potential, these materials are rarely used because of toxins in the polymer coating itself, and damage caused by high-energy (shortwave) light.”

The researchers designed the one-of-a-kind delivery method to release under longwave light (Near-Infrared, NIR). The light warms the gold nanoshells, melting the polymer packaging, and releasing the drug. The primary advantage of NIR light is its ability to penetrate bodily tissues without harming them.

“We’ve developed a material with varying melting points, allowing us to control it using low intensities,” explains Mizrahi. “Our system is composed of FDA-approved materials, and we are relatively close to clinical application.”

The researchers believe this new technology can be used for a variety of other applications, such as sealing of internal and external injuries, temporary holding of tissue during surgery, or as biodegradable scaffolds for growing transplant organs. It may even be possible to use the polymer as part of the self-healing process, giving it a wide range of both medical and non-medical applications.

“This article focuses on the concept and material: how we can design the material to fulfill these particular physical and mechanical requirements,” says Mizrahi. “The next step will include creating particles that include the drugs so that we can test their improved effectiveness using this delivery technology. We’ll discuss that in an upcoming article.”

https://www.genengnews.com/gen-news-highlights/parkinsons-neuronal-loss-linked-to-mitochondrial-membrane-lipid/81255525

Parkinson’s Neuronal Loss Linked to Mitochondrial Membrane Lipid

Understanding the underlying molecular mechanisms that mediate the deterioration of cellular function is critical for any disease, and given our incomplete knowledge of neurobiology, it is all the more essential. If we hope to develop improved therapeutics to slow or even stop the progression of fatal neurodegenerative disorders like Parkinson’s disease, then identifying the cellular pathways that lead to neuronal loss is crucial. Now, a new study from investigators at the University of Guelph, Ontario, has uncovered what they believe is a main factor behind nerve cell death in Parkinson’s disease. The findings were published online today in Nature Communications, in an article entitled “Cardiolipin Exposure on the Outer Mitochondrial Membrane Modulates α-Synuclein.”

The Canadian researchers found that cardiolipin—a lipid molecule found in the mitochondrial membrane—helps ensure that a protein called α-synuclein folds properly. Misfolding of this protein leads to protein deposits that are the hallmark of Parkinson’s disease. These deposits are toxic to nerve cells that control voluntary movement. When too many of these deposits accumulate, nerve cells die.

“Identifying the crucial role cardiolipin plays in keeping these proteins functional means cardiolipin may represent a new target for the development of therapies against Parkinson’s disease,” explained Scott Ryan, Ph.D., a professor in the department of molecular and cellular biology at the University of Guelph. “Currently there are no treatments that stop nerve cells from dying.”

In this new study, the investigators used stem cells collected from people with the disease. The research team studied how nerve cells try to cope with misfolded α-synuclein.

“Using human pluripotent stem cells (hPSCs) that allow comparison of cells expressing mutant SNCA (encoding α-synuclein (α-syn)) with isogenic controls or SNCA-transgenic mice, we showed that SNCA-mutant neurons display fragmented mitochondria and accumulate α-syn deposits that cluster to mitochondrial membranes in response to exposure of cardiolipin on the mitochondrial surface,” the authors wrote. “Whereas exposed cardiolipin specifically binds to and facilitates refolding of α-syn fibrils, prolonged cardiolipin exposure in SNCA-mutants initiates recruitment of LC3 to the mitochondria and mitophagy.”

“We thought if we can better understand how cells normally fold α-synuclein, we may be able to exploit that process to dissolve these aggregates and slow the spread of the disease,” Dr. Ryan added.

The study revealed that, inside cells, α-synuclein binds to mitochondria, where cardiolipin resides. Cells use mitochondria to generate energy and drive metabolism. Normally, cardiolipin in mitochondria pulls synuclein out of toxic protein deposits and refolds it into a nontoxic shape.

Interestingly, the research team found that, in people with Parkinson’s disease, this process is overwhelmed over time and mitochondria are ultimately destroyed.

“As a result, the cells slowly die,” Dr. Ryan remarked. “Based on this finding, we now have a better understanding of why nerve cells die in Parkinson’s disease and how we might be able to intervene.”

Amazingly, the authors also found that “co-culture of SNCA-mutant neurons with their isogenic controls results in the transmission of α-syn pathology coincident with mitochondrial pathology in control neurons. Transmission of pathology is effectively blocked using an anti-α-syn monoclonal antibody (mAb), consistent with cell-to-cell seeding of α-syn.”

Understanding cardiolipin’s role in protein refolding helps lay the foundation for potential new therapies to slow the progression of Parkinson’s disease.

“The hope is that we will be able to rescue locomotor deficits in an animal model. It’s a big step toward treating the cause of this disease,” Dr. Ryan concluded.

https://www.utoronto.ca/news/new-technique-developed-u-t-uses-eeg-show-how-our-brains-perceive-faces

New technique developed at U of T uses EEG to show how our brains perceive faces

A new technique developed by neuroscientists at the University of Toronto can, for the first time, reconstruct images of what people perceive based on their brain activity.

The technique developed by Dan Nemrodov, a postdoctoral fellow in Assistant Professor Adrian Nestor’s lab at U of T Scarborough, is able to digitally reconstruct images seen by test subjects based on electroencephalography (EEG) data.

“When we see something, our brain creates a mental percept, which is essentially a mental impression of that thing. We were able to capture this percept using EEG to get a direct illustration of what’s happening in the brain during this process,” says Nemrodov.

For the study, test subjects hooked up to EEG equipment were shown images of faces. Their brain activity was recorded and then used to digitally recreate the image in the subject’s mind using a technique based on machine learning algorithms.
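The paper’s exact pipeline isn’t described here, but the general recipe (learn a mapping from EEG features to a compact image representation, then invert that representation back to pixels) can be sketched as follows. Everything below uses synthetic stand-in data and a generic linear decoder, offered only as an illustration of the idea, not as the authors’ method:

```python
# Minimal, illustrative sketch of the general idea (not the authors' exact
# pipeline): learn a linear mapping from EEG features to a low-dimensional
# image representation, then invert that representation back to pixels.
# All data here are synthetic stand-ins.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(42)
n_trials, n_eeg_features, img_size = 400, 256, 32 * 32

# Synthetic "face" images and synthetic EEG responses that are (noisily)
# linearly related to the images -- purely for demonstration.
images = rng.normal(size=(n_trials, img_size))
mixing = rng.normal(size=(img_size, n_eeg_features)) / np.sqrt(img_size)
eeg = images @ mixing + 0.5 * rng.normal(size=(n_trials, n_eeg_features))

# 1) Compress images into a small number of components (here, PCA).
pca = PCA(n_components=50).fit(images[:300])
img_codes = pca.transform(images[:300])

# 2) Learn an EEG -> image-code mapping on the training trials.
decoder = Ridge(alpha=1.0).fit(eeg[:300], img_codes)

# 3) Reconstruct held-out images from EEG alone.
reconstructed = pca.inverse_transform(decoder.predict(eeg[300:]))
print("Reconstruction shape:", reconstructed.shape)  # (100, 1024)
```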

It’s not the first time researchers have been able to reconstruct images based on visual stimuli using neuroimaging techniques. The current method was pioneered by Nestor, who successfully reconstructed facial images from functional magnetic resonance imaging (fMRI) data in the past, but this is the first time EEG has been used.

And while techniques like fMRI – which measures brain activity by detecting changes in blood flow – can grab finer details of what’s going on in specific areas of the brain, EEG has greater practical potential given that it’s more common, portable, and inexpensive by comparison. EEG also has greater temporal resolution, meaning it can measure with detail how a percept develops in time right down to milliseconds, says Nemrodov.

“fMRI captures activity at the time scale of seconds, but EEG captures activity at the millisecond scale. So we can see with very fine detail how the percept of a face develops in our brain using EEG,” he says.

The researchers were able to estimate that it takes our brain about 170 milliseconds (0.17 seconds) to form a good representation of a face we see.

This research, which was published in the journal eNeuro, provides validation that EEG has potential for this type of image reconstruction, says Nemrodov – something many researchers doubted was possible given its apparent limitations. Using EEG data for image reconstruction has great theoretical and practical potential from a neurotechnological standpoint, especially since it’s relatively inexpensive and portable.

In terms of next steps, work is currently underway in Nestor’s lab to test how image reconstruction based on EEG data could be done using memory and applied to a wider range of objects beyond faces. But it could eventually have wide-ranging clinical applications as well.

Read the research in eNeuro

“It could provide a means of communication for people who are unable to verbally communicate. Not only could it produce a neural-based reconstruction of what a person is perceiving, but also of what they remember and imagine, of what they want to express,” says Nestor.

“It could also have forensic uses for law enforcement in gathering eyewitness information on potential suspects rather than relying on verbal descriptions provided to a sketch artist.”

The research was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by a Connaught New Researcher Award.

“What’s really exciting is that we’re not reconstructing squares and triangles but actual images of a person’s face, and that involves a lot of fine-grained visual detail,” says Nestor.

“The fact we can reconstruct what someone experiences visually based on their brain activity opens up a lot of possibilities. It unveils the subjective content of our mind and it provides a way to access, explore and share the content of our perception, memory and imagination.”

https://www.ctvnews.ca/health/1-in-4-teens-has-received-a-sext-new-study-finds-1.3819014

1 in 4 teens has received a sext, new study finds

Sexting is becoming a lot more common among teens, a new study finds, with more and more teens using their phones and other devices to send sexually explicit images, messages, and videos.

The new study estimates 1 in 7 youths between the ages of 12 and 17 has sent a sext, while an even larger number, 1 in 4, has received one.

The authors say the practice raises several concerns, especially for younger teens and preteens who may not have a good understanding that once their images and messages leave their devices, they are no longer in their control.

The findings, published Monday in JAMA Pediatrics, come from a systematic review of 39 recent studies on sexting that surveyed more than 110,000 youth under the age of 18, mostly from the U.S. and Europe.

The review found that the mean prevalence rates for sending and receiving sexts were 14.8 per cent and 27.4 per cent, respectively.
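For reference, those percentages are where the “1 in 7” and “1 in 4” framing in the headline comes from; a two-line check of the figures quoted above:

```python
# Converting the reported mean prevalences into the "1 in N" framing used
# in the headline (percentages as quoted from the study above).
for label, p in (("sent a sext", 0.148), ("received a sext", 0.274)):
    print(f"{p:.1%} have {label}  ->  about 1 in {round(1 / p)}")
```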

Younger teens were less likely to sext, but the prevalence rose with every year of age, the authors found.

Though sexting among teens is often seen as something that girls do under the pressure of boys, this study found there really were no gender differences between boys and girls in how often they sent or received sexts.

That was an important finding, says study author Sheri Madigan, an assistant professor in the department of psychology at the University of Calgary. But she says the studies she reviewed found that boys’ and girls’ attitudes toward sexting are still different.

“We see girls feel more pressure to sext and they also feel more of the consequences of sexting. So they might feel they’ll be treated harshly for sexting or, alternatively, be called prudes for not sexting, whereas boys seem more immune to those consequences,” Madigan told CTV News Channel.

With sexting becoming more common, the authors say there are a few public health concerns that need to be addressed. Of particular concern is the sharing of images without a teen’s consent.

The study found 12.5 per cent of youth (or 1 in 8) reported they had forwarded a sext, while 8.4 per cent reported they had had one of their sexts forwarded on to others without their consent.

The authors say non-consensual distribution of sexts can lead to embarrassment and distress, as well as “harassment by peers, cyberbullying, or blackmailing.” In extreme cases, it’s even been implicated in youth suicide.

The authors also have concerns about sexting among tweens, meaning children between the ages of 10 and 12. More and more tweens are using smartphones, the authors note, estimating the average age at which kids get their first smartphone is now 10.3 years.

Yet the authors note there still has not been much research done on how often kids under the age of 12 experiment with sexting – an area of public health concern that should be studied further, they say.

Sexting among tweens is particularly concerning, they say, because relationships among tweens are often transient, which may make them more vulnerable to sexts being forwarded without consent.

“Moreover, given their relative cognitive naivete, tweens may be particularly vulnerable to sextortion (i.e., nude images and/or videos are used as a form of threat or blackmail),” the authors write.

Madigan recommends parents have regular talks about sexting and responsible smartphone use with their kids.

“Have these conversations… early and often. These are not one-and-done talks,” she said.

An accompanying JAMA Pediatrics Patient Page offers several tips for parents. They advise:

  • talking with teens and preteens early about sexting and its risks
  • reading recent news items about sexting together and discussing what can happen when sexting goes badly
  • discouraging tweens with cell phones from sending messages or images of anyone without clothes
  • being specific with teens about what sexting is and how it can lead to serious consequences

“For all ages, remind them that once an image is sent, it is no longer in their control and they cannot get it back. What is online or sent via text can exist forever and be sent to others,” the paper advises.

Finally, they advise parents to remind their children they deserve respect and that being pressured to send a sext is never okay, nor is it a way to “prove” their love or attraction to someone.

https://www.space.com/39815-hubble-suggests-universe-expanding-faster-study.html

The Universe Is Expanding Faster Than We Thought, Hubble Data Suggests

Researchers analyzed 19 galaxies, including NGC 3972 (left) and NGC 1015 (right), which are 65 million and 118 million light-years from Earth, respectively. Both possessed pulsating stars called Cepheid variables that let researchers determine the distance to the galaxies.

Credit: A. Riess (STScI/JHU)/NASA/ESA

Recent Hubble Space Telescope findings suggest that the universe is expanding much faster than expected — and astronomers say the rules of physics may need to be rewritten in order to understand why.

Scientists use the Hubble Space Telescope to make precise measurements of the universe’s expansion rate. However, observations for a new study don’t match up with previous predictions based on the universe’s trajectory following the Big Bang, according to a statement from the Space Telescope Science Institute (STScI).

“The community is really grappling with understanding the meaning of this discrepancy,” Adam Riess, Nobel laureate and lead researcher on the study describing the new findings, said in the statement. Riess is an astronomer at STScI and a professor at Johns Hopkins University. [Our Expanding Universe: Age, History & Other Facts]

The Hubble Space Telescope measures the distance to other galaxies by examining a type of star that varies in brightness. These stars, called Cepheid variables, brighten and dim in a predictable way that lets researchers judge the distance to them. This data is then used to measure the universe’s expansion rate, known as the Hubble constant.
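As a rough illustration of how that works (not the calibration used in the study), a Cepheid’s pulsation period gives its intrinsic brightness through a period-luminosity relation, and comparing that with its apparent brightness gives a distance through the standard distance modulus m - M = 5 log10(d / 10 pc). The coefficients and the example star below are placeholders:

```python
# Illustrative only: how a Cepheid's pulsation period plus its apparent
# brightness yields a distance via the standard distance modulus
# m - M = 5 * log10(d / 10 pc). The period-luminosity coefficients below are
# a rough, commonly quoted V-band form, not the calibration used in the study.
import math

def absolute_magnitude_from_period(period_days: float) -> float:
    """Rough Cepheid period-luminosity relation (illustrative coefficients)."""
    return -2.43 * (math.log10(period_days) - 1.0) - 4.05

def distance_parsecs(apparent_mag: float, absolute_mag: float) -> float:
    """Invert the distance modulus m - M = 5 * log10(d / 10 pc)."""
    return 10 ** ((apparent_mag - absolute_mag + 5) / 5)

# Hypothetical Cepheid: 30-day period, apparent magnitude 26 (placeholder values).
M = absolute_magnitude_from_period(30.0)
d_pc = distance_parsecs(26.0, M)
print(f"Absolute magnitude ~{M:.2f}, distance ~{d_pc / 1e6:.1f} Mpc")
```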

The new findings show that eight Cepheid variables in our Milky Way galaxy are up to 10 times farther away than any previously analyzed star of this kind. Those Cepheids are more challenging to measure than others because they reside between 6,000 and 12,000 light-years from Earth. To handle that distance, the researchers developed a new scanning technique that allowed the Hubble Space Telescope to periodically measure a star’s position at a rate of 1,000 times per minute, thus improving the accuracy of the measurements of the stars’ true brightness and distance, according to the statement.

Researchers measured the universe’s expansion by calculating the distance to several very distant stars called Cepheid variables, which pulse regularly and let researchers determine the distance to them based on their brightness. The eight newly measured Cepheids are 10 times farther away than any studied previously. Then, the researchers compared the brightness of those stars to the brightness of supernovas in the same galaxies, and compared them with the brightness of supernovas that are even farther out.

Credit: A. Field (STScI)/A. Riess (STScI/JHU)/NASA/ESA

The researchers compared their findings to earlier data from the European Space Agency’s (ESA) Planck satellite. During its four-year mission, the Planck satellite mapped leftover radiation from the Big Bang, also known as the cosmic microwave background. The Planck data revealed a Hubble constant between 67 and 69 kilometers per second per megaparsec. (A megaparsec is roughly 3.26 million light-years.)

However, the Planck data gives a constant about 9 percent lower than that of the new Hubble measurements, which estimate that the universe is expanding at 73 kilometers per second per megaparsec, therefore suggesting that galaxies are moving faster than expected, according to the statement.
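A quick check of the numbers quoted above, plus the recession velocity v = H0 × d that each value implies for a galaxy at an illustrative distance (the 100-megaparsec example is illustrative, not from the article):

```python
# Quick check of the figures quoted above: the gap between the Planck and
# Hubble-based values of H0, and the recession velocity v = H0 * d this
# implies for a galaxy at an illustrative distance of 100 Mpc.
H0_PLANCK = 67.0   # km/s/Mpc (low end of the Planck range quoted above)
H0_HUBBLE = 73.0   # km/s/Mpc (new Cepheid/supernova measurement)

gap_percent = (H0_HUBBLE - H0_PLANCK) / H0_PLANCK * 100
print(f"New measurement exceeds the Planck value by ~{gap_percent:.0f}%")

distance_mpc = 100  # illustrative distance, not a figure from the article
for label, h0 in (("Planck", H0_PLANCK), ("Hubble", H0_HUBBLE)):
    print(f"{label}: v = {h0 * distance_mpc:,.0f} km/s at {distance_mpc} Mpc")
```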

“Both results have been tested multiple ways, so barring a series of unrelated mistakes, it is increasingly likely that this is not a bug but a feature of the universe,” Riess said.

One possible explanation for the discrepancy is that dark energy — the mysterious force known to be accelerating the cosmos — is driving galaxies farther apart with greater intensity. In this case, the acceleration of the universe may not have a constant value but rather may change over time.

Also, it’s possible that elusive dark matter, which accounts for 80 percent of the matter in the universe, interacts more strongly with visible matter or radiation than once thought, the researchers said.

Another possible explanation involves a new kind of subatomic particle that travels close to the speed of light and would be affected only by gravity. Such hypothetical speedy particles are known as sterile neutrinos, and collectively, particles like these are referred to as dark radiation, according to the study, which has been accepted for publication in The Astrophysical Journal.

“Any of these scenarios would change the contents of the early universe, leading to inconsistencies in theoretical models,” STScI representatives said in the statement. “These inconsistencies would result in an incorrect value for the Hubble constant, inferred from observations of the young cosmos. This value would then be at odds with the number derived from the Hubble observations.”

The team plans to use data from the Hubble Space Telescope and ESA’s Gaia space observatory to measure the precise positions and distances of stars and to further refine estimates of the universe’s expansion rate.
