Microsoft looks to ‘do for data sharing what open source did for code’

Microsoft is working to standardize data-sharing terms via pre-designed licensing agreements, the first of which are now available for preview and comment.

Beyond finding a gene: Same repeated stretch of DNA found in three neurodegenerative diseases

Four rare diseases are characterized by similar symptoms of neurodegeneration. Patients with three of the diseases — fragile X tremor/ataxia syndrome (FXTAS), neuronal intranuclear inclusion disease (NIID) and oculopharyngeal myopathy with leukoencephalopathy (OPML) — have similar MRI brain scan images. Patients with a fourth disease, oculopharyngodistal myopathy (OPDM), have normal brain scans, but their muscle tissue has a similar appearance to that of patients with OPML. Researchers suspected that the genetic mutations causing the four diseases must also be similar, even if the mutations were in different genes. After exhaustive genetic sequencing and analysis, researchers in Japan discovered that the same mutation — CGG noncoding expanded tandem repeats — in different areas of the genome causes all four diseases. T2WI: T2 weighted image; DWI: Diffusion weighted image. DOI: 10.1038/s41588-019-0458-z Credit: Hiroyuki Ishiura and Shoji Tsuji, CC-BY.

Families living with four extremely rare neurodegenerative diseases have finally learned the cause of their illnesses, thanks to a researcher’s hunch and decades of improvements in DNA sequencing technology.

Four rare neurodegenerative diseases are all caused by the same short segment of DNA repeated too many times, a mutation researchers call noncoding expanded tandem repeats. Researchers suspect variations of this type of mutation may cause other diseases that have thus far evaded genetic diagnosis.

Treatment strategies for many genetic diseases are complicated by the fact that different mutations can cause the same disease. For example, Parkinson’s disease can be caused by unique mutations in at least five genes. Cystic fibrosis can be caused by over 1,000 different mutations in the same gene.

Researchers are excited because instead of finding unique mutations in specific genes, they identified the same mutation in different areas of the genome causing different diseases.

Genetic cause of rare diseases evaded diagnosis

“Because the mutations causing the diseases are so similar, in the future, all these patients might benefit from the same treatment,” explained Dr. Hiroyuki Ishiura, M.D./Ph.D., assistant professor from the University of Tokyo Hospital and first author of the recent research paper published in Nature Genetics.

“Gene silencing techniques [which inactivate previously active genes] are a possible treatment. We cannot know the result, but we believe such strategies may help patients in the future,” said Dr. Shoji Tsuji, M.D./Ph.D., project professor from the University of Tokyo Hospital, a corresponding author of the recent research paper.

The research team focused on patients with adult-onset neurodegeneration, showing symptoms like cognitive impairment, uncontrolled movement, loss of balance, weakness in the arms and legs, or difficulty swallowing.

The genetic cause of one disease with those symptoms, fragile X tremor/ataxia syndrome, was identified in the early 2000s as three letters of the genetic code, CGG, being repeated dozens or hundreds of times on the X chromosome. Noncoding expanded tandem repeat mutations can be caused by any letters of the genetic code repeated an unusual number of times anywhere in the genome.

Researchers had a hunch that the same CGG repeat mutation might cause three other rare diseases with similar symptoms and clinical test results. But since patients with those other diseases had normal X chromosomes, researchers had no idea where in the genome the potential CGG repeat mutations might exist.

A research team in Japan analyzed the genomes of patients with similar symptoms of adult-onset neurodegeneration, but no genetic diagnosis for their diseases. The researchers discovered that the same mutation — CGG noncoding expanded tandem repeats — in different areas of the genome causes four rare diseases. DOI: 10.1038/s41588-019-0458-z Credit: Library of Science and Medical Illustrations by somersault18:24, CC-BY-NC-SA 4.0.

High-tech scavenger hunt

Previous generations of DNA sequencing technology required researchers to know where in the genome to look for a mutation. Searching for CGG repeat mutations on all 46 chromosomes in the human genome was extremely difficult and laborious.

The new approach that researchers designed relies on modern next-generation genome sequencing and clever data analysis.

Researchers sequenced patients’ and healthy people’s entire genomes in short, overlapping but broken stretches. Collaborators, including Professor Shinichi Morishita, a computational biologist from the Department of Bioinformatics and Systems Biology, developed a new computer program to sort all of those short sequences and search for ones made of just CGG over and over again.

Using a standard sequence of the entire healthy human genome, the computer program could pin down those short segments containing CGG repeat mutations to particular genetic neighborhoods. Researchers narrowed down their search to anywhere in the genome where patients had a large number of CGG repeats and healthy people had none.

With that information, researchers then knew where to sequence a long stretch of DNA to specifically identify the gene and where in the gene the patients’ CGG repeat mutations exist.
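The core of the screening step described above — sifting short sequencing reads for ones that are essentially pure CGG repeats — can be sketched in a few lines. This is a simplified illustration, not the team’s actual pipeline; the reads, the threshold, and the helper function are all hypothetical:

```python
import re

def cgg_repeat_fraction(read):
    """Fraction of a read covered by runs of two or more CGG motifs."""
    covered = sum(len(m.group()) for m in re.finditer(r"(?:CGG){2,}", read))
    return covered / len(read)

# Hypothetical short reads: one dominated by CGG repeats, one ordinary sequence.
reads = [
    "CGG" * 50,                       # read from an expanded repeat region
    "ATGGCTTACCGATCGGATCCATGAAGTCC",  # ordinary genomic sequence
]

# Flag reads that are mostly CGG repeats as candidate evidence of an expansion.
candidates = [r for r in reads if cgg_repeat_fraction(r) > 0.8]
print(len(candidates))  # 1
```

In the real analysis, candidate reads would then be mapped against the reference genome to pin the repeat expansion to a particular locus, as the article describes.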

Rare diseases share common symptoms

All four rare neurological diseases that the research team studied are caused by CGG repeat mutations in distant, seemingly unrelated areas of the genome.

One of the diseases is currently only known in a single family. “We cared for this family starting 10 years ago, so it was a long puzzle for us to find the correct diagnosis,” said Ishiura.

Efforts are ongoing to study the genomes of people, identified through the genetic analysis, who are healthy despite having the same CGG repeat mutations. Such cases are extremely rare.

The researchers hope that their studies on rare neurodegenerative diseases might lead to insights into more common diseases caused by other types of noncoding tandem repeat mutations, including amyotrophic lateral sclerosis (ALS), also known as motor neuron disease or Lou Gehrig’s disease.


More information: Noncoding CGG repeat expansions in neuronal intranuclear inclusion disease, oculopharyngodistal myopathy and an overlapping disease, Nature Genetics (2019). DOI: 10.1038/s41588-019-0458-z

Journal information: Nature Genetics
Provided by University of Tokyo

According to a report by the Economic Daily News, Apple is getting ready to launch an Apple Watch with a microLED display instead of the current OLED panels as early as 2020 (via MacRumors).

Citing sources inside Apple’s supply chain, the Chinese publication claims that Apple is already in talks with some Taiwanese display manufacturers regarding the use of microLED displays in its next-generation Apple Watches.


Earlier reports have also suggested that Apple is working with Taiwan Semiconductor Manufacturing Company (TSMC) regarding the development of microLED display panels for the Apple Watch which could go into mass production by the end of this year.

For those who don’t know, microLED displays use different light-emitting compounds than the current OLED display panels found in Apple Watches. They can help Apple in manufacturing slimmer, brighter, and less power-hungry devices in the future.

How Chrome OS Virtual Desks could change the way you work

A new feature coming to Chromebooks has tons of productivity potential — but it’s up to you to put it to use.

Here in the land o’ Chrome OS, fresh paint shows up almost constantly. And while most of it involves minor refinements or relatively small-scale improvements, one soon-to-land new Chromebook feature has the potential to shake up your work environment in a really significant way.

It’s a little somethin’ called virtual desktops — or Virtual Desks, in current Chrome OS lingo — and while we’ve talked about it in passing before, I thought it’d be worth taking a closer look at what exactly it is, how it works, and how it could be beneficial to you. And for good reason: I’ve been using Virtual Desks while it’s been under development in the Chrome OS beta channel (and simultaneously familiarizing myself with the Windows equivalent, which I’d never spent much time exploring before), and I genuinely think it holds a lot of hidden value for anyone who’s serious about productivity.

As it stands now, the Virtual Desks feature is set to make its debut in Chrome OS 76 — which should start rolling out widely in just a couple of weeks, in early August.

Getting to know Chrome OS Virtual Desks

Let’s start with the basics, shall we? The Virtual Desks system in Chrome OS is pretty similar in concept to the virtual desktops feature in Windows (and also in MacOS): When you open Chrome OS’s Overview screen — by tapping the button that looks like a box with two lines to its right or by swiping down on your trackpad with three fingers — you see an option to create a new desk at the top of the screen, above all your open apps and windows.

Chrome OS Virtual Desks - Overview (1) JR

Once you select it, you get a new row of thumbnails showing all your open desks at the top of that same screen. You can create up to four desks total, as of now.

And whenever you have multiple desks open, you can drag and drop an app or window from one desk to another — using either your mouse or your finger, if you have a touch-enabled Chromebook.

Chrome OS Virtual Desks - Overview (2) JR

The only real difference between the Chrome OS Virtual Desks setup and the virtual desktop arrangement in Windows 10 is Chrome’s lack of advanced shortcuts for managing and moving among your different desks — and the possibly related fact that on a Chromebook, hitting Alt-Tab shows you all of your open apps and windows across every desk instead of limiting you only to the processes within your current desk environment.

The first part, at least, seems poised to change: Progress in the open-source Chrome OS code site suggests Google’s working on a series of keyboard shortcuts and even a trackpad gesture for navigating your Virtual Desk environment. Only time will tell if any or all of that makes its way into the feature by the time it launches, but the fact that work is actively underway sure seems like a positive sign.

All right — so that’s what the Virtual Desks feature does and how you get around it. But why would you want to bother using it, and what could it do for you? Allow me to use my own personal setup as an example.

Putting Chrome OS Virtual Desks to work

I’ll be honest: When I first heard about the Virtual Desks feature development, I didn’t understand why or how I’d actually take advantage of it. It sounded neat in theory — and as you probably know, I’m always a sucker for advanced forms of organization — but I just couldn’t quite wrap my head around how having multiple desktops on a Chromebook would benefit me or make sense with my style of working. It kind of struck me as being a bit of overkill, even.

What changed my mind was living with the feature for a while in the real world — both in Chrome OS and also in Windows (since I use both platforms regularly, with a Windows 10 system and a Chromebox at my desk and a Pixelbook as my sole away-from-the-office computer) — and experimenting with different ways to make my workflow fit within it.

And you know what? At this point, I can’t imagine myself going back to my barbaric-seeming old system of having a single overflowing desktop for everything I’m doing. I don’t think Virtual Desks will make sense for everyone, by any means, but if you tend to keep a lot of things open during the day — and jump back and forth between different projects or focuses — splitting your work up into distinctive environments might just end up being a similarly eye-opening shift for you.

So here’s how I’m using it: My first, default desk is typically where I keep what I consider to be my core communication and organizational tools — the stuff I look at when I first start my day. That’s where I leave my Gmail inbox, a messaging interface for my phone messages, Slack for communication with colleagues, and Trello — which basically acts as my all-purpose work project organizer. When I need to glance at Twitter, I also open it in that same area (though I don’t tend to keep that app open all day, lest I get nothing done and simultaneously lose what little sanity this mildewy ol’ brain of mine has remaining).

One desk over is where I allow myself to focus on my primary project for the day. Right now, for instance, that desk contains a Chrome window with a Google Doc where I’m writing this very column along with a handful of other tabs I’m referencing as I work on it. I also have an image editor open for working on graphics related to this story. Everything in that desk relates to this one project and nothing else.

One desk over from there is where I’ve set myself up to focus on a feature story I also need to work on today, as I’ll probably bounce between this column and that document at different points in the day. So rather than have all the tabs and windows tied to each project open in the same space and cluttering up my desktop, I keep each project in its own separate area — isolated from everything else and free from all other distractions.

Last but not least, I’m hoping to squeeze in some work on upcoming improvements to my Android Intelligence newsletter on and off throughout the day, whenever I take a break from the other stuff — so I’ve got a fourth desk open right now with a bunch of tabs and windows related to that.

Obviously, your own setup won’t be identical to mine, but you get the basic idea: Chrome OS’s Virtual Desks make it easy to focus on one specific thing at a time without any superfluous clutter competing for your concentration. You can keep other items readily available and a couple keystrokes away without having ’em in your face and attracting your attention all the time. It’s kind of like having the benefit of multiple monitors within a single condensed space — which, assuming you don’t actually need to look at everything you’re doing at the same exact time, is arguably a less distracting and more effective setup for getting stuff accomplished.

For me, it’s been a huge improvement when it comes to on-the-go focus and productivity. As Chrome OS becomes ever more versatile and capable as a professional, ready-for-work platform, having this sort of power-user feature is a welcome step forward.

Microsoft wants to build artificial general intelligence: an AI better than humans at everything

Microsoft’s new billion dollar partnership with OpenAI is a big bet on the future of artificial intelligence.


A humanoid robot stands in front of a screen displaying the letters “AI.”
Microsoft is investing $1 billion in a partnership with OpenAI. The mission? Building an AI smarter than any of us.
Credit: Getty Images/iStockphoto

A lot of startups in the San Francisco Bay Area claim that they’re planning to transform the world. San-Francisco-based, Elon Musk-founded OpenAI has a stronger claim than most: It wants to build artificial general intelligence (AGI), an AI system that has, like humans, the capacity to reason across different domains and apply its skills to unfamiliar problems.

Today, it announced a billion dollar partnership with Microsoft to fund its work — the latest sign that AGI research is leaving the domain of science fiction and entering the realm of serious research.

“We believe that the creation of beneficial AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” Greg Brockman, chief technology officer of OpenAI, said in a press release today.

Existing AI systems beat humans at lots of narrow tasks — chess, Go, Starcraft, image generation — and they’re catching up to humans at others, like translation and news reporting. But an artificial general intelligence would be one system with the capacity to surpass us at all of those things. Enthusiasts argue that it would enable centuries of technological advances to arrive, effectively, all at once — transforming medicine, food production, green technologies, and everything else in sight.

Others warn that, if poorly designed, it could be a catastrophe for humans in a few different ways. A sufficiently advanced AI could pursue a goal that we hadn’t intended — a recipe for catastrophe. It could turn out unexpectedly impossible to correct once running. Or it could be maliciously used by a small group of people to harm others. Or it could just make the rich richer and leave the rest of humanity even further in the dust.

Getting AGI right may be one of the most important challenges ahead for humanity. Microsoft’s billion dollar investment has the potential to push the frontiers forward for AI development, but to get AGI right, investors have to be willing to prioritize safety concerns that might slow commercial development.

A transformative technology with enormous potential benefits — and real risks

Some analysts have compared the development of AGI to the development of electricity. It’s not just one breakthrough; it enables countless other changes in the way we live our lives.

But the announcement also nods at the ways this could go wrong. OpenAI’s team working on the safety and policy implications of AGI has been unafraid to articulate ways that AGI could be a disaster rather than a boon.

“To accomplish our mission of ensuring that AGI (whether built by us or not) benefits all of humanity,” Brockman says in the release, “we’ll need to ensure that AGI is deployed safely and securely; that society is well-prepared for its implications; and that its economic upside is widely shared.”

Those are hard problems. Current AI systems are vulnerable to adversarial examples — inputs designed to confuse them — and more advanced systems might be, too. Current systems faithfully do what we tell them to do, even if it’s not exactly what we meant them to do.

And there are some reasons to think advanced systems will have problems that current systems don’t. Some researchers have argued that an AGI system that appears to be performing well at a small scale might unexpectedly deteriorate in performance when it has more resources available to it, as the best route to achieving its goals changes. (You can imagine this by thinking about a company that follows the rules when it’s small and scrutinized, but cheats on them or lobbies to get them changed once it has enough clout to do so.)

Even AGI’s most enthusiastic proponents think there’s a lot of potential for things to go wrong — they just think the benefits of developing AGI are worth it. A success with AGI could let us address climate change, extreme poverty, pandemic diseases, and whatever new challenges are around the corner, by identifying promising new drugs, optimizing our power grid, and speeding up the rate at which we develop new technologies.

So how far away is AGI? Here, experts disagree. Some estimate that we’re only a decade away while others point out that there’s been optimism that AGI is just around the corner for a long time and it has never arrived.

The disagreements don’t fall along obvious lines. Some academics, such as MIT’s Max Tegmark, are among those predicting AGI soon, while some key figures in industry, such as Facebook’s Yann LeCun, are among those who think it’s likely fairly distant. But they do agree that it’s possible and will happen someday, and that makes it one of the big open challenges of this century.

OpenAI shifted gears this year toward raising money from investors

Until this year, OpenAI was a nonprofit. (Musk, one of its founders, left the board in 2018, citing conflicts of interest with Tesla.) Earlier this year, that changed: OpenAI announced it will operate from now on as a new kind of company called OpenAI LP (the LP stands for “limited partnership”).

Why the change, which critics interpreted as a betrayal of the nonprofit’s egalitarian mission? OpenAI’s leadership team had become convinced that they couldn’t stay on the cutting edge of the field and help shape the direction of AGI without an infusion of billions of dollars, and that’s hard for a nonprofit to get.

But taking investment money would be a slippery slope toward abandoning their mission: Once you have investors, you have obligations to maximize their profits, which is incompatible with ensuring that the benefits of AI are widely distributed.

OpenAI LP (the structure that was used to raise the Microsoft money) is meant to solve that dilemma. It’s a hybrid, OpenAI says, of a for-profit and a nonprofit: the company promises to pay shareholders a return on their investment, up to 100 times what they put in. Everything beyond that goes to the public. The OpenAI nonprofit board still oversees everything.

That sounds a bit ridiculous — after all, how much can possibly be left over after paying investors 100 times what they paid in? — but early investors in many tech companies have made far more than 100 times what they invested. Jeff Bezos reportedly invested $250,000 in Google back in 1998; if he held onto those shares, they’d be worth more than $3 billion today. If Google had adopted OpenAI LP’s cap on returns, Bezos would’ve gotten $25 million — a handsome return on his investment — and the rest would go to humankind.
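The capped-return arithmetic in that hypothetical is simple enough to check directly. This is a sketch of the general idea, not OpenAI LP’s actual legal terms, and the function name is my own:

```python
def capped_return(invested, final_value, cap_multiple=100):
    """Split an investment outcome under a capped-return structure:
    the investor's payout is limited to cap_multiple times the original
    stake, and any surplus beyond the cap goes to the mission."""
    payout = min(final_value, invested * cap_multiple)
    surplus = max(0, final_value - payout)
    return payout, surplus

# The article's hypothetical: $250,000 invested in 1998, worth ~$3 billion today.
payout, surplus = capped_return(250_000, 3_000_000_000)
print(payout)   # 25000000  -> the $25 million cap
print(surplus)  # 2975000000 -> the remainder, earmarked for the public
```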

If OpenAI makes it big, Microsoft will profit immensely — but, they say, so will the rest of us. “Both companies have very aligned missions,” Brockman wrote me today in an email: “Microsoft to empower every person and every organization on the planet to achieve more; OpenAI to ensure that artificial general intelligence benefits all of humanity.”

Whether such partnerships can drive advances that are good for humanity — or put the brakes on advances that are bad for humanity — increasingly looks like a question everyone should be very interested in answering.

7 Technologies You Need to Know for Artificial Intelligence

Artificial intelligence is actually a term that encompasses a host of technology and tools. Here’s a closer look at some of the more important ones.




Jupyter Notebook

Named for the three core programming languages supported by Project Jupyter — Julia, Python, and R — this technology is a web browser-based interactive environment for data scientists and machine learning developers that enables them to create and share documents that contain live code, equations, visualizations, and text. It can be used for data cleaning and transformation, numerical simulation, statistical modeling, data visualization, machine learning, and more.
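A typical notebook cell mixes a small data-cleaning step with an immediate summary of the result. Here’s a minimal, self-contained sketch of that workflow using only Python’s standard library; the data is made up for illustration:

```python
from statistics import mean, stdev

# Raw measurements as they might arrive from a messy source.
raw = ["3.2", "4.1", "n/a", "5.0", "", "2.9"]

# Data cleaning: drop non-numeric entries, convert the rest to floats.
values = [float(x) for x in raw if x.replace(".", "", 1).isdigit()]

# Summary statistics, the kind of output a notebook renders inline.
print(len(values))              # 4 valid measurements
print(round(mean(values), 2))   # 3.8
print(round(stdev(values), 2))  # 0.95
```

In an actual notebook, each of these steps would live in its own cell, with the outputs and any plots rendered directly beneath the code.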

Jessica Davis has spent a career covering the intersection of business and technology at titles including IDG’s Infoworld, Ziff Davis Enterprise’s eWeek and Channel Insider, and Penton Technology’s MSPmentor. She’s passionate about the practical use of business intelligence.


How low can it go? The Google Home Hub is now just $59

The nifty smart-screen, now known as the Nest Hub, continues its slow and steady march toward the dollar store.


How low can it go? Hard to say, but the Google Home Hub keeps dropping a few bucks every few weeks.


Well, here we are again. A while back I made the joke that if this trend continues, the Google Home Hub (which was recently renamed the Nest Hub) will soon be free. Because for months now, each week seems to bring another $1-2 discount.

Like this week: For a limited time, and while supplies last, Altatac via Rakuten has the Google Home Hub for $58.99 when you apply promo code ALT10 at checkout. You also need to be signed into your Rakuten account (assuming you have one; if not, you’ll have to sign up for one).

The Home Hub originally sold for $149, but Google recently lowered it to $129. Even before that, it started showing up from resellers for around $80, and every couple weeks it seems to drop another buck or two. I keep thinking it can’t possibly go any lower, but, well, happy Monday!

The Home Hub takes the Google Home smart speaker and adds a 7-inch touchscreen. That opens the door to things like guided recipes, song lyrics, appointment calendars and so on — all the same stuff your phone or tablet can do, but on something that’s a permanent fixture in, say, your kitchen.

There’s no camera, so you can’t use it for video calling the way you can an Amazon Echo Show or Facebook Portal. (Some people might find that preferable, though, in light of recent privacy concerns.) Google does offer a smart assistant with a camera, the new Nest Hub Max, but it’s $230.

And don’t forget the new Echo Show 5, which seems very Google Home Hub-like and sells for $90 — but has just a 5-inch screen.


Read CNET’s Google Home Hub review to learn more. Verdict: Top marks, even if the audio quality doesn’t quite rival some screenless smart speakers. Likewise, over at Best Buy, it has an impressive 4.6-star review average from over 3,000 buyers.

At $150, this might have seemed a little extravagant. But at $59? Awfully tempting. (Of course, I’m also tempted to see if it can drop any lower.)

Read more: The first 9 things you should do with your Google Home Hub

Originally published March 29.
Update, July 22: Another price drop.

CNET’s Cheapskate scours the web for great deals on PCs, phones, gadgets and much more. Note that CNET may get a share of revenue from the sale of the products featured on this page. Questions about the Cheapskate blog? Find the answers on our FAQ page. Find more great buys on the CNET Deals page and follow the Cheapskate on Facebook and Twitter!
