Magnetized inflow accreting to center of Milky Way galaxy
August 17, 2018 by Callie Matulonis, East Asian Observatory
Are magnetic fields an important guiding force for gas accreting onto a supermassive black hole (SMBH) like the one that our Milky Way galaxy hosts? The role of magnetic fields in gas accretion is poorly understood, and observing them has been challenging for astronomers. Researchers at the Academia Sinica Institute of Astronomy and Astrophysics (ASIAA), Taiwan, led by Dr. Pei-Ying Hsieh, have obtained such a measurement using the instruments on the James Clerk Maxwell Telescope (JCMT). Their result provides clear evidence that the orientation of the magnetic field is aligned with the molecular torus and ionized streamers rotating around Sagittarius A*—the black hole at the center of the Milky Way. The findings were published in the Astrophysical Journal in August 2018.
Sgr A*—The Best Laboratory to Study Black Hole Feeding in the Sky
Sagittarius A* (Sgr A*), the SMBH nearest to Earth, has for decades been a target of scientists seeking to understand the nature of gas accretion. Observing gas accretion onto SMBHs is critical to understanding how they release such tremendous energy.
The circumnuclear disk (CND) is a molecular torus rotating around Sgr A*; within it, ionized gas streamers called mini-spirals (also called Sgr A West) fill the molecular cavity. The mini-spiral is hypothesized to originate from the inner edge of the CND. The CND, as the closest “food reservoir” for Sgr A*, is therefore critical to understanding the feeding of Sgr A*. However, finding physical evidence connecting the CND and the mini-spiral has puzzled astronomers since the two structures were discovered 35 years ago.
Intensive measurements of the dynamical motions of material orbiting Sgr A* have been conducted in recent decades, but its magnetic field has not been widely studied. This is largely because the weak polarized signal that the magnetic field imprints on dust emission is difficult to measure. However, the magnetic field is expected to be important for material orbiting within and around the CND: the magnetic stress acting on the rotating disk can exert a torque that extracts angular momentum from the rotating gas and thus drives gas inflows. Additionally, the magnetic tension force can also pull the gas back from the black hole. Taking advantage of the excellent atmospheric conditions on Mauna Kea at 4,000 meters, and the large aperture of the JCMT (15 m in diameter), submillimeter polarization measurements of the galactic center were successfully obtained to probe the role of the magnetic field.
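The torque mechanism described here can be illustrated with a standard magnetohydrodynamics estimate (a textbook sketch, not an equation from the paper itself): the r–φ component of the Maxwell stress couples the radial and azimuthal field components,

```latex
T_{r\phi} = \frac{B_r B_\phi}{4\pi},
\qquad
\dot{L} \sim -\, r \, T_{r\phi} \, A
       = -\, \frac{r\, B_r B_\phi}{4\pi}\, A ,
```

so a ring of gas at radius r, over an area A on which the stress acts, loses angular momentum whenever the radial and azimuthal field components are positively correlated, allowing the material to drift inward.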
Tracing Magnetized Accreting Inflow
The astronomers utilized the dust polarization data obtained by the JCMT-SCUPOL instrument to image the orientation of the magnetic field. A detailed comparison with higher-resolution interferometric maps from the Submillimeter Array (SMA) reveals that the magnetic field aligns with the CND. Moreover, the innermost observed magnetic field lines also appear to trace and align with the mini-spiral coherently. This is the first attempt to reveal the footprint of inflow linking the CND and the mini-spiral since they were discovered 35 years ago. The comparison of the model and data reinforces the key idea that the CND and the mini-spiral can be treated as a coherent inflow system.
They found that the magnetic field is dynamically significant in both the CND and the mini-spiral. This finding indicates that the magnetic field is able to guide the motion of ionized particles originating in the CND and produce the observed spiral pattern of the mini-spiral. The results show that the magnetic field is critical to explaining the inflow structure and will also help researchers understand the inflow picture in other galaxies hosting black holes similar to Sgr A*.
FOR ALL THEIR differences, big tech companies agree on where we’re heading: into a future dominated by smart machines. Google, Amazon, Facebook, and Apple all say that every aspect of our lives will soon be transformed by artificial intelligence and machine learning, through innovations such as self-driving cars and facial recognition. Yet the people whose work underpins that vision don’t much resemble the society their inventions are supposed to transform. WIRED worked with Montreal startup Element AI to estimate the diversity of leading machine learning researchers, and found that only 12 percent were women.
That estimate came from tallying the numbers of men and women who had contributed work at three top machine learning conferences in 2017. It suggests the group supposedly charting society’s future is even less inclusive than the broader tech industry, which has its own well-known diversity problems.
At Google, 21 percent of technical roles are filled by women, according to company figures released in June. When WIRED reviewed Google’s AI research pages earlier this month, they listed 641 people working on “machine intelligence,” of whom only 10 percent were women. Facebook said last month that 22 percent of its technical workers are women. Pages for the company’s AI research group listed 115 people earlier this month, of whom 15 percent were women.
A Google spokesperson told WIRED that the company’s research page lists only people who have authored research papers, not everyone who implements or researches AI technology, but declined to provide more information. Facebook also declined to provide details on the diversity of its AI teams. Joelle Pineau, who leads the Montreal branch of Facebook’s AI lab, said counting the research team’s publicly listed staff was “reasonable,” but that the group is small relative to everyone at Facebook involved in AI, and growing and changing through hiring.
Pineau is part of a faction in AI research trying to improve the field’s diversity—motivated in part by fears that failing to do so increases the chance AI systems have harmful effects on the world. “We have more of a scientific responsibility to act than other fields because we’re developing technology that affects a large proportion of the population,” Pineau says.
Companies and governments are betting on AI because of its potential to let computers make decisions and take action in the world, in areas such as health care and policing. Facebook is counting on machine learning to help it fight fake news in places with very different demographics to its AI research lab, such as Myanmar, where rumors on the company’s platform led to violence. Anima Anandkumar, a professor at the California Institute of Technology who previously worked on AI at Amazon, says the risks AI systems will cause harm to certain groups are higher when research teams are homogenous. “Diverse teams are more likely to flag problems that could have negative social consequences before a product has been launched,” she says. Research has also shown diverse teams are more productive.
Corporate and academic AI teams have already—inadvertently—released data and systems biased against people poorly represented among the high priests of AI. Last year, researchers at the universities of Virginia and Washington showed that two large image collections used in machine learning research, including one backed by Microsoft and Facebook, teach algorithms a skewed view of gender. Images of people shopping and washing are mostly linked to women, for example.
Anandkumar and others also say that the AI community needs better representation of ethnic minorities. In February, researchers from MIT and Microsoft found that facial analysis services that IBM and Microsoft offered to businesses were less accurate for darker skin tones. The companies’ algorithms were near perfect at identifying the gender of men with lighter skin, but frequently erred when presented with photos of women with dark skin. IBM and Microsoft both say they have improved their services. The original, flawed, versions were on the market for more than a year.
The scarcity of women among machine learning researchers is hardly surprising. The wider field of computer science is well documented as being dominated by men. Government figures show that the proportion of women awarded bachelor’s degrees in computing in the US has slid significantly over the past thirty years, the opposite of the trend in physical and biological sciences.
Little demographic data has been gathered on the people advancing machine learning. WIRED approached Element about doing that after the company published figures on the global AI talent pool. The company compiled a list of the names and affiliations of everyone who had papers or other work accepted at three top academic machine learning conferences—NIPS, ICLR, and ICML—in 2017. The once obscure events now feature corporate parties and armies of corporate recruiters and researchers. Element’s list comprised 3,825 names, of which 17 percent were affiliated with industry. The company counted men and women by asking workers on a crowdsourcing service to research people on the list online. Each name was sent to three workers independently, for consistency. WIRED checked a sample of the data, and excluded six entries that came back incomplete.
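The tallying method described above—three independent crowdworkers per name, with incomplete entries excluded—can be sketched as a simple majority-vote aggregation. The names, labels, and exclusion rule below are illustrative, not WIRED's or Element's actual pipeline:

```python
from collections import Counter

def aggregate_labels(responses):
    """Majority-vote labels from three independent crowdworker answers.

    responses maps each researcher's name to a list of three worker
    answers; an answer may be empty/None if a worker could not find
    the person online. Returns (labels, excluded), where labels maps
    name -> majority label and excluded lists incomplete entries.
    """
    labels, excluded = {}, []
    for name, answers in responses.items():
        valid = [a for a in answers if a]
        if len(valid) < 3:          # incomplete entry: exclude it
            excluded.append(name)
            continue
        (label, _count), = Counter(valid).most_common(1)
        labels[name] = label
    return labels, excluded

responses = {
    "Researcher A": ["woman", "woman", "man"],
    "Researcher B": ["man", "man", "man"],
    "Researcher C": ["woman", None, "woman"],  # one worker found nothing
}
labels, excluded = aggregate_labels(responses)
# labels   -> {"Researcher A": "woman", "Researcher B": "man"}
# excluded -> ["Researcher C"]
```

Sending each name to three workers lets disagreements be resolved by majority rather than trusting a single annotator.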
The picture that emerged is only an estimate. Rachel Thomas, a professor at the University of San Francisco, and cofounder of AI education provider, Fast.ai, says it can still be useful. Figures on AI’s diversity problem might help motivate attempts to address it, she says. “I think it’s a fairly accurate picture of who big companies working on AI think are appropriate people to hire,” Thomas says.
AI’s lack of diversity and efforts to address it have won more attention in recent years. Thomas, Anandkumar, and Pineau have all been involved with Women in ML, or WiML, a workshop that runs alongside NIPS, currently the hottest conference in AI. The side-event provides a venue for women to present their work, and in 2017 boasted corporate sponsorship from Google, Facebook, Amazon, and Apple. Similarly, boldface tech brands sponsored a new workshop that ran alongside NIPS last year called Black In AI, which hosted technical research talks, and discussion on how to improve the field’s diversity. Fast.ai’s courses are designed to offer an alternative to the conventional grad school track into AI, and the company offers diversity scholarships.
Despite the growth of such programs, few people in AI expect the proportion of women or ethnic minorities in their field to grow very swiftly.
Diversity campaigns at companies such as Google have failed to significantly shift the predominance of white and Asian men in their technical workforces. Negar Rostamzadeh, a research scientist at Element, says AI has its own version of a problem well documented in tech companies whereby women are more likely than men to leave the field, and less likely to gain promotions. “Working to have good representation of women and minorities is positive, but we also want them to be able to advance,” Rostamzadeh says.
Women in AI research also say the field can be unwelcoming and even hostile to women.
Anandkumar and Thomas say they learned long before completing their PhDs that it’s not unusual for men in computer science or math research to subject women to inappropriate remarks or harassment. Two long-standing computer science professors at Carnegie Mellon University resigned this week, citing “sexist management.” In February, Anandkumar made online posts with the #metoo tag, describing verbal harassment by an unnamed coworker in AI.
Events at NIPS in recent years illustrate the challenge of making the field more welcoming to women—and how the new money flowing into AI can sometimes make it worse.
In 2015, the founders of a Canadian startup called Deeplearni.ng brought t-shirts to the conference with the slogan “My NIPS are NP-hard,” an anatomical math joke some men and women found inappropriate. (The conference’s full name is Neural Information Processing Systems.) Stephen Piron, founder of the startup, now called Dessa, says making the shirt “was a meat-headed move” he regrets, and that his company values inclusion.
At last year’s event, Anandkumar and some other attendees complained that a party hosted by Intel—which also sponsored the Women in ML event—where female acrobats descended from the ceiling created an unwelcoming atmosphere for women. An Intel spokesman said the company welcomes feedback on how it can better create environments where everyone feels included. The conference’s official closing party generated similar complaints, triggering investigations into the behavior of two prominent researchers.
One was University of Minnesota professor Brad Carlin, who performed at the NIPS closing party in a band called the Imposteriors made up of statistics professors. Carlin, who plays keys, made a joke about sexual harassment during the show. Tweets complaining about his remark spurred data scientist Kristian Lum to write a blog post alleging that a person involved in the incident—later confirmed to be Carlin—and another, unnamed, researcher had touched her inappropriately, on separate occasions. Carlin later retired after a University of Minnesota investigation found he had breached sexual harassment policy on multiple occasions. Bloomberg reported the second man was Steven Scott, Google’s director of statistics research. A company spokesperson confirmed Scott left the company after an internal investigation into his behavior.
The organizers of NIPS are now working on a more detailed code of conduct for the event, which takes place in Montreal this December. Last week they sent out a survey soliciting opinions on alternatives to the current name that wouldn’t have the same “distasteful connotations.” Candidates include CLIPS, NALS, and ICOLS.
Pineau of Facebook doesn’t have a preference, but is in favor of changing the name. “I have searched for the conference and ended up on some really unpleasant websites,” she says. She also cautions that renaming NIPS shouldn’t distract from AI’s larger, and less easily fixed problems. “I worry a little bit that people will think we’ve done a grand gesture and momentum on other things will slow down,” she says.
Researchers in the Department of Physical Medicine and Rehabilitation at Johns Hopkins Medicine report that, in a computerized study, 36 healthy adult volunteers asked to repeat the same movement over and over became significantly faster when asked to produce that movement on demand—a result that occurred not because they anticipated the movement, but because of an as yet unknown mechanism that prepared their brains to replicate the same action.
The findings, the researchers say, add another clue to a growing body of research on how the brain generates movement in the first place, and could eventually help scientists understand how brain-controlled motor responses go awry after neurologic disease or injuries such as strokes.
Since the early 1950s, researchers have known that repeating a movement can improve the reaction time required to generate it later, says study author Adrian Mark Haith, Ph.D., assistant professor of neurology at the Johns Hopkins University School of Medicine. This effect has long been attributed to “anticipation”—being prepared to repeat a movement by default in accordance with expectations about which movement would most likely be required.
However, other experiments using transcranial magnetic stimulation—a technique that uses magnetic pulses to stimulate the brain and record responses—show that repeating movements can actually bias the movements that occur when stimulating the brain’s motor cortex, making typically random movements more like the one that was practiced.
“These studies suggest that something other than anticipation might be happening with repetition,” Haith says.
In a study designed to clarify how repeated movements might influence motor response, Haith, along with colleagues Pablo A. Celnik, M.D., professor of physical medicine and rehabilitation, neurology, and neuroscience at the Johns Hopkins University School of Medicine; Firas Mawase, Ph.D., a former postdoctoral fellow in Celnik’s lab; and Daniel Lopez, B.S., a research assistant at the Johns Hopkins University School of Medicine, devised a set of experiments to tease out whether or not practice might affect movement through anticipation or another mechanism.
The researchers recruited 36 right-handed adult volunteers, 22 of whom were women, ranging in age from 19 to 30 years. Each of the volunteers sat at a desk in front of a large computer screen. On the desktop was a touch-responsive tablet. When a target appeared on the screen, the volunteers were asked to move a cursor to touch the target as quickly as possible using a stylus on the tablet.
In initial tests, the volunteers took about 215 milliseconds (each millisecond is 1/1000th of a second) to respond and reach the changing target, no matter what direction they moved their hands. However, after practicing moving the cursor hundreds of times in just a single direction, the volunteers became significantly faster at responding and moving the cursor toward the target in that direction, even though their reaction times stayed the same when the target appeared in other directions.
“The benefit you get is 20 to 30 milliseconds,” says Celnik. “It sounds small, but when you’re looking at performance that can make a difference in sports and other areas that require quick motor movements, that time increment might mean the divide between a winner and a loser.”
The scientists reasoned that there were two possibilities for the subjects’ decreased reaction times: One idea is that they had learned to anticipate the movement and were guessing that the target would appear in the preferential (usual) direction from force of habit. Another is that repetitive practice somehow trained their brains to select the practiced movement more quickly in the future while still allowing the subjects the same amount of flexibility as before they practiced to choose other targets.
To tease apart those possibilities, the researchers tried another experiment much like the previous ones, in which the subjects were asked to move their hand toward a target that appeared on the screen, but with a twist: they were asked to move their hand on every fourth beat of a metronome, whether the target appeared or not. When the target did appear, it showed up at various intervals just before the fourth beat, effectively imposing a reaction time on each trial.
If, as previous theories held, the subjects were anticipating movement in the practiced direction, the researchers reasoned they’d preferentially move their hand in that direction when the target failed to show up, or when the reaction time was so narrow that they wouldn’t have time to accurately hit the target. However, that wasn’t the case, says Mawase.
“The subjects did have preferred directions for moving their hands when they had to guess, but it was mostly directions comfortable for right-handed people,” he says. “They either chose up and to the right or down and to the left, rather than in the direction they’d practiced.”
Together, the researchers say, these results, published July 24, 2018, in Cell Reports, suggest that repeating a movement many times somehow primes the brain to be more efficient at making that movement in the future.
Celnik says he and his team plan to investigate what’s happening in the brain itself to better understand this effect. Gaining insight on the neural mechanisms behind the phenomenon, he adds, could lead to more effective therapies for stroke and other disorders that affect the brain’s control over body movement.
In a new study in cells, University of Illinois researchers have adapted CRISPR gene-editing technology to cause the cell’s internal machinery to skip over a small portion of a gene when transcribing it into a template for protein building. This gives researchers a way not only to eliminate a mutated gene sequence, but to influence how the gene is expressed and regulated.
Such targeted editing could one day be useful for treating genetic diseases caused by mutations in the genome, such as Duchenne’s muscular dystrophy, Huntington’s disease or some cancers.
CRISPR technologies typically turn off genes by breaking the DNA at the start of a targeted gene, inducing mutations when the DNA binds back together. This approach can cause problems, such as the DNA breaking in places other than the intended target and the broken DNA reattaching to different chromosomes.
The new CRISPR-SKIP technique, described in the journal Genome Biology, does not break the DNA strands but instead alters a single point in the targeted DNA sequence.
“Given the problems with traditional gene editing by breaking the DNA, we have to find ways of optimizing tools to accomplish gene modification. This is a good one because we can regulate a gene without breaking genomic DNA,” said Illinois bioengineering professor Pablo Perez-Pinera, who led the study with Illinois physics professor Jun Song. Both are affiliated with the Carl R. Woese Institute for Genomic Biology at the U. of I.
In mammal cells, genes are broken up into segments called exons that are interspersed with regions of DNA that don’t appear to code for anything. When the cell’s machinery transcribes a gene into RNA to be translated into a protein, there are signals in the DNA sequence indicating which portions are exons and which are not part of the gene. The cell splices together the RNA transcribed from the coding portions to get one continuous RNA template that is used to make proteins.
CRISPR-SKIP alters a single base before the beginning of an exon, causing the cell to read it as a non-coding portion.
“When the cell treats the exon as non-coding DNA, that exon is not included in mature RNA, effectively removing the corresponding amino acids from the protein,” said Michael Gapinske, a bioengineering graduate student and first author of the paper.
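The single-base idea can be captured in a toy model. Mammalian exons are typically preceded by the splice-acceptor dinucleotide "AG"; changing its G disrupts the boundary signal so the exon is left out of the mature RNA. The sketch below is a hypothetical simplification of that logic, not the actual CRISPR-SKIP base-editing machinery:

```python
def skip_exon(sequence, exon_start):
    """Model a CRISPR-SKIP-style single-base edit at a splice acceptor.

    The exon beginning at index exon_start must be preceded by the
    canonical acceptor 'AG'. Changing the G to A (one base, no
    double-strand break) models an edit that makes the spliceosome
    treat the exon as non-coding sequence.
    """
    acceptor = sequence[exon_start - 2:exon_start]
    if acceptor != "AG":
        raise ValueError("no canonical splice acceptor upstream of exon")
    return sequence[:exon_start - 1] + "A" + sequence[exon_start:]

# ...intron AG | EXON...  (exon begins at index 6)
dna = "TTTTAGGTCATG"
edited = skip_exon(dna, 6)   # -> "TTTTAAGTCATG": the acceptor G became A
```

Because only one base changes, the edit is permanent in the genome but avoids the breakage-and-repair problems of cut-based CRISPR approaches described above.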
While skipping exons results in proteins that are missing a few amino acids, the resulting truncated proteins often retain partial or full activity—which may be enough to restore function in some genetic diseases, said Perez-Pinera, who also is a professor in the Carle Illinois College of Medicine.
There are other approaches to skipping exons or eliminating amino acids, but since they don’t permanently alter the DNA, they provide only a temporary benefit and require repeated administrations over the lifetime of the patient, the researchers said.
“By editing a single base in genomic DNA using CRISPR-SKIP, we can eliminate exons permanently and, therefore, achieve a long-lasting correction of the disease with a single treatment,” said Alan Luu, a physics graduate student and co-first author of the study. “The process is also reversible if we would need to turn an exon back on.”
The researchers tested the technique in multiple cell lines from mice and humans, both healthy and cancerous.
“We tested it in three different mammalian cell lines to demonstrate that it can be applied to different types of cells. We also demonstrated it in cancer cell lines because we wanted to show that we could target oncogenes,” Song said. “We haven’t used it in vivo; that will be the next step.”
They sequenced the DNA and RNA from the treated cells and found that the CRISPR-SKIP system could target specific bases and skip exons with high efficiency, and also demonstrated that differently targeted CRISPR-SKIPs can be combined to skip multiple exons in one gene if necessary. The researchers hope to test its efficiency in live animals—the first step toward assessing its therapeutic potential.
“In Duchenne’s muscular dystrophy, for example, just correcting 5 to 10 percent of the cells is enough to achieve a therapeutic benefit. With CRISPR-SKIP, we have seen modification rates of more than 20 to 30 percent in many of the cell lines we have studied,” Perez-Pinera said.
The group built a web tool allowing other researchers to search whether an exon could be targeted with the CRISPR-SKIP technique while minimizing chances of it binding to similar sites in the genome.
Since the researchers saw some mutations at off-target sites, they are working to make CRISPR-SKIP even more efficient and specific.
“Biology is complex. The human genome is more than three billion bases. So the chance of landing at a location that’s similar to the intended region is not negligible and is something to be aware of with any gene editing technique,” Song said. “The reason we spent so much time sequencing extensively to look for off-target mutations is that it could be a major barrier to medical applications. We hope that future improvements to gene editing technologies will increase the specificity of CRISPR-SKIP so we can begin to address some of the problems that have kept gene therapy from being widely applied in the clinic.”
Nearly five years ago, NASA and Lincoln Laboratory made history when the Lunar Laser Communication Demonstration (LLCD) used a pulsed laser beam to transmit data from a satellite orbiting the moon to Earth—more than 239,000 miles—at a record-breaking download speed of 622 megabits per second.
Now, researchers at Lincoln Laboratory are aiming to once again break new ground by applying the laser beam technology used in LLCD to underwater communications.
“Both our undersea effort and LLCD take advantage of very narrow laser beams to deliver the necessary energy to the partner terminal for high-rate communication,” says Stephen Conrad, a staff member in the Control and Autonomous Systems Engineering Group, who developed the pointing, acquisition, and tracking (PAT) algorithm for LLCD. “In regard to using narrow-beam technology, there is a great deal of similarity between the undersea effort and LLCD.”
However, undersea laser communication (lasercom) presents its own set of challenges. In the ocean, laser beams are hampered by significant absorption and scattering, which restrict both the distance the beam can travel and the data signaling rate. To address these problems, the Laboratory is developing narrow-beam optical communications that use a beam from one underwater vehicle pointed precisely at the receive terminal of a second underwater vehicle.
This technique contrasts with the more common undersea communication approach that sends the transmit beam over a wide angle but reduces the achievable range and data rate. “By demonstrating that we can successfully acquire and track narrow optical beams between two mobile vehicles, we have taken an important step toward proving the feasibility of the laboratory’s approach to achieving undersea communication that is 10,000 times more efficient than other modern approaches,” says Scott Hamilton, leader of the Optical Communications Technology Group, which is directing this R&D into undersea communication.
Most above-ground autonomous systems rely on the use of GPS for positioning and timing data; however, because GPS signals do not penetrate the surface of water, submerged vehicles must find other ways to obtain these important data. “Underwater vehicles rely on large, costly inertial navigation systems, which combine accelerometer, gyroscope, and compass data, as well as other data streams when available, to calculate position,” says Thomas Howe of the research team. “The position calculation is noise sensitive and can quickly accumulate errors of hundreds of meters when a vehicle is submerged for significant periods of time.”
This positional uncertainty can make it difficult for an undersea terminal to locate and establish a link with incoming narrow optical beams. For this reason, “We implemented an acquisition scanning function that is used to quickly translate the beam over the uncertain region so that the companion terminal is able to detect the beam and actively lock on to keep it centered on the lasercom terminal’s acquisition and communications detector,” researcher Nicolas Hardy explains. Using this methodology, two vehicles can locate, track, and effectively establish a link, despite the independent movement of each vehicle underwater.
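One common way to realize such an acquisition scan, sketched here under the assumption of an Archimedean spiral pattern (the laboratory's actual scan geometry is not described), is to sweep the narrow beam outward from the best position estimate until the partner terminal reports detection:

```python
import math

def spiral_scan(uncertainty_rad, beam_rad, overlap=0.5):
    """Beam-pointing offsets along an Archimedean spiral covering a cone.

    uncertainty_rad: half-angle of the pointing-uncertainty region (rad).
    beam_rad: beam half-width (rad); spiral turns are spaced
    overlap * beam_rad apart so adjacent passes overlap with no gaps.
    Returns a list of (x, y) angular offsets for the beam to dwell at.
    """
    pitch = overlap * beam_rad           # radial spacing between turns
    a = pitch / (2 * math.pi)            # Archimedean spiral r = a * theta
    points, theta = [], 0.0
    while a * theta < uncertainty_rad:
        r = a * theta
        points.append((r * math.cos(theta), r * math.sin(theta)))
        theta += beam_rad / max(r, pitch)  # ~one beam width between dwells
    return points

def acquire(scan_points, detected):
    """Step through scan points until the partner terminal reports detection."""
    for p in scan_points:
        if detected(p):
            return p                     # lock on and hand off to tracking
    return None

points = spiral_scan(uncertainty_rad=0.1, beam_rad=0.01)
hit = acquire(points, lambda p: math.hypot(p[0], p[1]) < 0.02)
```

Starting at the estimated position and spiraling outward finds the partner quickly when the navigation error is small, while still guaranteeing full coverage of the uncertainty region.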
Once the two lasercom terminals have locked onto each other and are communicating, the relative position between the two vehicles can be determined very precisely by using wide bandwidth signaling features in the communications waveform. With this method, the relative bearing and range between vehicles can be known precisely, to within a few centimeters, explains Howe, who worked on the undersea vehicles’ controls.
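The centimeter-level claim is plausible from a back-of-envelope timing calculation: light travels at roughly c/n in seawater, so sub-nanosecond timing of waveform features pins down the range to centimeters. The numbers below are illustrative, not the team's actual signaling parameters:

```python
C_VACUUM = 299_792_458.0    # speed of light in vacuum, m/s
N_SEAWATER = 1.34           # approximate refractive index of seawater

def range_from_delay(delay_s):
    """Range between terminals from a measured one-way propagation delay."""
    return (C_VACUUM / N_SEAWATER) * delay_s

# A 100 m link corresponds to roughly 447 ns of one-way delay:
delay = 100.0 / (C_VACUUM / N_SEAWATER)
r = range_from_delay(delay)          # recovers 100.0 m

# A timing resolution of 0.1 ns maps to about 2.2 cm of range uncertainty:
sigma = range_from_delay(0.1e-9)
```

This is why wide-bandwidth signaling helps: sharper temporal features in the waveform allow finer delay estimates, and hence finer range estimates.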
To test their underwater optical communications capability, six members of the team recently completed a demonstration of precision beam pointing and fast acquisition between two moving vehicles in the Boston Sports Club pool in Lexington, Massachusetts. Their tests proved that two underwater vehicles could search for and locate each other in the pool within one second. Once linked, the vehicles could potentially use their established link to transmit hundreds of gigabytes of data in one session.
This summer, the team is traveling to regional field sites to demonstrate this new optical communications capability to U.S. Navy stakeholders. One demonstration will involve underwater communications between two vehicles in an ocean environment—similar to prior testing that the Laboratory undertook at the Naval Undersea Warfare Center in Newport, Rhode Island, in 2016. The team is planning a second exercise to demonstrate communications from above the surface of the water to an underwater vehicle—a proposition that has previously proven to be nearly impossible.
The undersea communication effort could tap into innovative work conducted by other groups at the laboratory. For example, integrated blue-green optoelectronic technologies, including gallium nitride laser arrays and silicon Geiger-mode avalanche photodiode array technologies, could lead to lower size, weight, and power terminal implementation and enhanced communication functionality.
In addition, the ability to move data at megabit- to gigabit-per-second transfer rates over distances that vary from tens of meters in turbid waters to hundreds of meters in clear ocean waters will enable undersea system applications that the laboratory is exploring.
Howe, who has done a significant amount of work with underwater vehicles, both before and after coming to the laboratory, says the team’s work could transform undersea communications and operations. “High-rate, reliable communications could completely change underwater vehicle operations and take a lot of the uncertainty and stress out of the current operation methods.”
That fancy Cortana thermostat now supports Alexa and Google Assistant
It ships on August 24th.
If you’ve been eyeing Johnson Controls’ slick, Microsoft-backed GLAS thermostat ever since it was unveiled in 2017, it’s finally close at hand — and you won’t have to rely on Cortana for voice control, either. The $319 climate controller will ship on August 24th with support for both Amazon’s Alexa and Google Assistant, helping it slip more gracefully into more smart home setups. This probably won’t wound Microsoft’s pride too much. It’s aware that Cortana has just a small slice of the voice assistant market, and it’s already getting cozy with Alexa on its own platforms.
Apart from voice control, the star of the show is undoubtedly the thermostat’s translucent OLED touchscreen. You can tweak the temperature, check air quality (inside and out) and gauge your energy savings without having to push buttons or twist knobs. You’re paying a lot even compared to high-end rivals like Nest, but it’s not often that you can get a touch-only thermostat that doubles as a conversation piece.
In this Aug. 8, 2018, file photo a mobile phone displays a user’s travels using Google Maps in New York. (AP Photo/Seth Wenig, File)
Ryan Nakashima, The Canadian Press Published Thursday, August 16, 2018 8:26PM EDT
SAN FRANCISCO — Google has revised an erroneous description on its website of how its “Location History” setting works, clarifying that it continues to track users even if they’ve disabled the setting.
The change came three days after an Associated Press investigation revealed that several Google apps and websites store user location even if users have turned off Location History. Google has not changed its location-tracking practice in that regard.
But its help page for the Location History setting now states: “This setting does not affect other location services on your device.” It also acknowledges that “some location data may be saved as part of your activity on other services, like Search and Maps.”
Previously, the page stated: “With Location History off, the places you go are no longer stored.”
The AP observed that the change occurred midday Thursday, a finding confirmed by Internet Archive snapshots taken earlier in the day.
The AP investigation found that even with Location History turned off, Google stores user location when, for instance, the Google Maps app is opened, or when users conduct Google searches that aren’t related to location. Automated searches of the local weather on some Android phones also store the phone’s whereabouts.
In a Thursday statement to the AP, Google said: “We have been updating the explanatory language about Location History to make it more consistent and clear across our platforms and help centres.”
The statement contrasted with a statement Google sent to the AP several days ago that said in part, “We provide clear descriptions of these tools.”
Jonathan Mayer, a Princeton computer scientist and former chief technologist for the Federal Communications Commission’s enforcement bureau, said the wording change was a step in the right direction. But it doesn’t fix the underlying confusion Google created by storing location information in multiple ways, he said.
“The notion of having two distinct ways in which you control how your location data is stored is inherently confusing,” he said Thursday. “I can’t think off the top of my head of any major online service that architected their location privacy settings in a similar way.”
K. Shankari, a UC Berkeley graduate researcher whose findings initially alerted the AP to the issue, said Thursday the change was a “good step forward,” but added “they can make it better.” For one thing, she said, the page still makes no mention of another setting called “Web & App Activity.” Turning that setting off would in fact stop Google from recording location data.
Huge tech companies are under increasing scrutiny over their data practices, following a series of privacy scandals at Facebook and new data-privacy rules recently adopted by the European Union. Last year, the business news site Quartz found that Google was tracking Android users by collecting the addresses of nearby cellphone towers even if all location services were off. Google changed the practice and insisted it never recorded the data anyway.
Critics say Google’s insistence on tracking its users’ locations stems from its drive to boost advertising revenue. It can charge advertisers more if they want to narrow ad delivery to people who’ve visited certain locations.
Several observers also noted that Google is still bound by a 20-year agreement it struck with the Federal Trade Commission in 2011. That consent decree requires Google not to misrepresent to consumers how they can protect their privacy.
Google agreed to that order in response to an FTC investigation of its now-defunct social networking service Google Buzz, which the agency accused of publicly revealing users’ most frequent Gmail contacts.
A year later, Google was fined $22.5 million for breaking the agreement after it served some users of Apple’s Safari browser so-called tracking cookies in violation of settings that were meant to prevent that.
The FTC has declined to say whether it had begun investigating Google for how it has described Location History.