http://www.kurzweilai.net/low-cost-eeg-can-now-be-used-to-reconstruct-images-of-what-you-see

Low-cost EEG can now be used to reconstruct images of what you see

Has promising uses for locked-in patients and forensics — no expensive fMRI machine needed
February 27, 2018

(left:) Test image displayed on computer monitor. (right:) Image captured by EEG and decoded. (credit: Dan Nemrodov et al./eNeuro)

A new technique developed by University of Toronto Scarborough neuroscientists has, for the first time, used EEG-detected brain activity to reconstruct images of what people perceive.

The new technique “could provide a means of communication for people who are unable to verbally communicate,” said Dan Nemrodov, Ph.D., a postdoctoral fellow in Assistant Professor Adrian Nestor’s lab at U of T Scarborough. “It could also have forensic uses for law enforcement in gathering eyewitness information on potential suspects, rather than relying on verbal descriptions provided to a sketch artist.”

(left:) EEG electrodes used in the study (photo credit: Ken Jones). (right in red:) The area where the images were detected, the occipital lobe, is the visual processing center of the mammalian brain, containing most of the anatomical region of the visual cortex. (credit: CC/Wikipedia)

For the study, test subjects were shown images of faces while their brain activity was detected by EEG (electroencephalogram) electrodes over the occipital lobe, the visual processing center of the brain. The researchers then processed the data using a technique based on machine-learning algorithms, allowing them to digitally recreate the image in the subject’s mind.

More practical than fMRI for reconstructing brain images

This new technique was pioneered by Nestor, who successfully reconstructed facial images from functional magnetic resonance imaging (fMRI) data in the past.

According to Nemrodov, techniques like fMRI — which measures brain activity by detecting changes in blood flow — can grab finer details of what’s going on in specific areas of the brain, but EEG has greater practical potential given that it’s more common, portable, and inexpensive by comparison.

While fMRI captures activity at the time scale of seconds, EEG captures activity at the millisecond scale, he says. “So we can see, with very fine detail, how the percept of a face develops in our brain using EEG.” The researchers found that it takes the brain about 120 milliseconds (0.12 seconds) to form a good representation of a face we see, but the important time period for recording starts around 200 milliseconds, Nemrodov says. That’s followed by machine-learning processing to decode the image.*

This study provides validation that EEG has potential for this type of image reconstruction, notes Nemrodov, something many researchers had doubted was possible given EEG’s apparent limitations.

Clinical and forensic uses

“The fact we can reconstruct what someone experiences visually based on their brain activity opens up a lot of possibilities,” says Nestor. “It unveils the subjective content of our mind and it provides a way to access, explore, and share the content of our perception, memory, and imagination.”

Work is now underway in Nestor’s lab to test how EEG could be used to reconstruct images from a wider range of objects beyond faces — even to show “what people remember or imagine, or what they want to express,” says Nestor. (A new creative tool?)

The research, which is published (open-access) in the journal eNeuro, was funded by the Natural Sciences and Engineering Research Council of Canada (NSERC) and by a Connaught New Researcher Award.

* “After we obtain event-related potentials (ERPs) [the measured brain response to a visual sensory event, in this case] — we use a support vector machine (SVM) algorithm to compute pairwise classifications of the visual image identities,” Nemrodov explained to KurzweilAI. “Based on the resulting dissimilarity matrix, we build a face space from which we estimate in a pixel-wise manner the appearance of every individual left-out (to avoid circularity) face. We do it by a linear combination of the classification images plus the origin of the face space.” The method is based on an earlier study: Nestor, A., Plaut, D. C., & Behrmann, M. (2016). Feature-based face representations and image reconstruction from behavioral and neural data. Proceedings of the National Academy of Sciences, 113(2), 416–421.
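For readers who want a concrete picture of that pipeline, here is a minimal, hypothetical sketch in Python with scikit-learn. It is not the authors' code: the simulated ERP data and toy dimensions are assumptions, MDS stands in for their face-space estimation, and the final pixel-wise reconstruction step is omitted.

```python
# Hypothetical sketch of the described pipeline (not the authors' code):
# pairwise SVM classification of simulated ERP patterns -> dissimilarity
# matrix -> low-dimensional "face space".
import numpy as np
from itertools import combinations
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score
from sklearn.manifold import MDS

rng = np.random.default_rng(0)
n_ids, n_trials, n_features = 10, 40, 200          # assumed toy dimensions
erps = {i: rng.normal(i * 0.05, 1.0, (n_trials, n_features)) for i in range(n_ids)}

# Pairwise classification: cross-validated linear-SVM accuracy for every
# pair of face identities. Higher accuracy means more discriminable ERPs,
# which we treat as greater neural dissimilarity.
dissim = np.zeros((n_ids, n_ids))
for a, b in combinations(range(n_ids), 2):
    X = np.vstack([erps[a], erps[b]])
    y = np.array([0] * n_trials + [1] * n_trials)
    acc = cross_val_score(SVC(kernel="linear"), X, y, cv=5).mean()
    dissim[a, b] = dissim[b, a] = max(acc - 0.5, 0.0)  # chance-corrected

# Embed the dissimilarity matrix into a "face space" (here via MDS, a
# stand-in for the paper's face-space estimation).
face_space = MDS(n_components=4, dissimilarity="precomputed",
                 random_state=0).fit_transform(dissim)
print(face_space.shape)  # (10, 4): one coordinate vector per identity
```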


University of Toronto Scarborough | Do you see what I see? Harnessing brain waves can help reconstruct mental images


Nature Video | Reading minds


Abstract of The Neural Dynamics of Facial Identity Processing: Insights from EEG-Based Pattern Analysis and Image Reconstruction

Uncovering the neural dynamics of facial identity processing along with its representational basis outlines a major endeavor in the study of visual processing. To this end, here we record human electroencephalography (EEG) data associated with viewing face stimuli; then, we exploit spatiotemporal EEG information to determine the neural correlates of facial identity representations and to reconstruct the appearance of the corresponding stimuli. Our findings indicate that multiple temporal intervals support: facial identity classification, face space estimation, visual feature extraction and image reconstruction. In particular, we note that both classification and reconstruction accuracy peak in the proximity of the N170 component. Further, aggregate data from a larger interval (50-650 ms after stimulus onset) support robust reconstruction results, consistent with the availability of distinct visual information over time. Thus, theoretically, our findings shed light on the time course of face processing while, methodologically, they demonstrate the feasibility of EEG-based image reconstruction.

https://www.barrons.com/articles/an-angel-on-your-shoulder-who-will-build-a-i-1519747124

An Angel on Your Shoulder: Who Will Build A.I.?

Professor Mahadev Satyanarayanan of Carnegie Mellon.

How does all the stuff in the world get connected, until humans live lives with the equivalent of “an angel on your shoulder,” an artificial intelligence that is pervasive, like your own thoughts?

And who the heck is going to build all that?

Such are the provocative questions that emerged during a Monday afternoon session on artificial intelligence at the Mobile World Congress trade show in Barcelona.

It was an absolutely packed auditorium, an already airless room becoming even more so, demonstrating there is a lot of interest in such questions.

Since this is a telecom show, the panel of entrepreneurs and academics nimbly threaded the connections between the emergent 5G networking technology, wearables, and something called “edge computing,” in a session dubbed “A.I. Everywhere.”

The panel’s moderator, Robert Marcus, general partner of Quantum Wave Capital, a Silicon Valley firm on the storied Sand Hill Road, talked of “massive” change that will come from enabling things.

Marcus’s point, as laid out in an initial slide, was that there was a burst of digital activity with Apple’s (AAPL) first iPhone, in 2007, which really took advantage of 4G networking with apps.

Now, he said, the advent of 5G will make possible edge computing, which will make possible “orders of magnitude” increases in compute, which will in turn make possible A.I. everywhere.


By way of background, it is increasingly clear 5G is more about connecting many devices, perhaps unmanned, such as factory robots, than it is about bringing greater speeds to human users of smartphones.

Sure, speeds will rise for users on Verizon Communications (VZ) or other networks. But the most novel technology enhancement that comes with 5G, something not even discussed in the past, is a reduction in “latency,” the time it takes the first bit of a transmission to reach its destination.

Marcus’s apostle for the technical details was his first speaker, Mahadev Satyanarayanan, a professor at Carnegie Mellon University. His passion is the emerging “tier” of edge computing, which sits between cloud computing, which is centralized, and the billions of devices that will be connected in the world, including smartwatches, self-driving cars, and on and on.

Satyanarayanan, who was referred to by Marcus as “Satya,” informed the audience he had been working on edge computing “since as long as there has been edge computing,” which sounded rather confusing given it seems like the term only popped up in the last two years.

In any event, Satya’s main point was that there needs to be something that’s not in the central facilities of Amazon (AMZN), Alphabet’s (GOOGL) Google, or Microsoft’s (MSFT) Azure to interface with all the connected things, and for a variety of reasons.

“The ability to process without sending to the cloud is absolutely crucial” to the future of A.I., he said.


One reason is privacy and security, a “notion of a privacy firewall that is under your [direct] control,” he said. Another is to be able to “fall back” to local compute when those central cloud resources are unavailable. He specifically mentioned security concerns as being raised by some edge devices, such as Amazon’s “Echo” home speakers.

And here’s where it ties in to A.I.: the connected things must not have to constantly poll the central brain to understand what they are doing. An audience member asked about the machine learning phase called “training,” in which a computer is shown many, many examples and learns to detect patterns. Training, said Satya, will have to move to the edge in some fashion, because a connected thing won’t always be able to rely on what it was trained for back in the lab.

“Suppose it’s trying to understand my walking,” proposed Satya, referring to some kind of activity tracker. “It knows how I normally walk. But what if I am now carrying a heavy load, or what if I stub my toe, and my gait is different,” so that the motion of the person with the sensor is unrecognizable. Then, the connected thing needs to learn on the spot, he said, and so it will need local computing to do so.
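As a rough sketch of what such on-the-spot adaptation could look like (my illustration, not anything shown in the session), consider an incremental learner updated locally on new gait data, with no cloud round-trip:

```python
# Toy illustration of learning "on the spot" at the edge: a gait classifier
# pre-trained in the lab is updated locally via scikit-learn's incremental
# partial_fit API. All data here is simulated.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(1)
classes = np.array([0, 1])            # 0 = normal walk, 1 = other activity

# "Lab" training phase on simulated accelerometer feature vectors.
model = SGDClassifier(random_state=0)
X_lab = rng.normal(0.0, 1.0, (500, 16))
y_lab = rng.integers(0, 2, 500)
model.partial_fit(X_lab, y_lab, classes=classes)

# At the edge: a burst of unfamiliar samples arrives (a limping gait, say,
# labeled locally, e.g. by user confirmation). Update the model in place.
X_new = rng.normal(0.8, 1.0, (20, 16))
y_new = np.ones(20, dtype=int)
model.partial_fit(X_new, y_new)
print(model.predict(X_new[:3]))
```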

Other examples included multiple sensors in the car. What if cars on the road were programmed to look for a missing child? Or to participate in the Amber Alert system for known offenders in a neighborhood? Or, “every car in the city looking out for your dog” should your dog get lost.

It would be, said Satya, like Google’s “Waze” app, but without a human being. “Video cameras replace people in a Waze-style application,” he suggested.

Satya proposed the idea of “cloudlets,” little cloud-like machines that will be near the activity.

A cloudlet, said Satya, is “a small data center at the edge of the Internet.” Cloudlets have the benefits of wearability, he said, but the attributes of cloud-like services.


And that’s where latency comes in, the ability to do the learning without transmitting out to the central cloud facility and all the way back. The human cognitive system is incredibly fast, he said. The challenge is akin to building that cognitive neural system across networked computers.

“If you have a human in the loop, or a machine, such as a self-driving car,” said Satya, “you not only have high bandwidth from the edge inwards, you need to send a response fast.”
Cloudlets are nice in that respect, he said, because they are “one hop away from the third tier,” and fewer hops mean lower latency since it’s a shorter trip.
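A back-of-the-envelope calculation shows why hop count matters for such a loop; the per-hop and processing delays below are purely assumed figures, not numbers from the talk:

```python
# Back-of-the-envelope latency budget for a sense-process-respond loop.
PER_HOP_MS = 5        # assumed average one-way delay added per network hop
INFERENCE_MS = 20     # assumed processing time at the cloudlet or cloud

def round_trip_ms(hops: int) -> float:
    """Out over `hops` hops, process, and respond back the same way."""
    return 2 * hops * PER_HOP_MS + INFERENCE_MS

print(f"cloudlet, 1 hop away: {round_trip_ms(1):.0f} ms")          # 30 ms
print(f"distant cloud, 10 hops away: {round_trip_ms(10):.0f} ms")  # 120 ms
```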


Satya extended the notion to wearables, specifically smart glasses, where, he said, augmented reality would meet A.I. Smart glasses, whispering information in your ear, would become “like an angel on your shoulder,” and before long, humans will live in a world where “The angel on your shoulder is indistinguishable from the voice in your head.” With his cherubic face and halo of white hair, Satya was an apt messenger for such a prospect.


Satya proposed that the end game, at least for humans, would be the arrival of futurist Ray Kurzweil’s notion of the “singularity”: “The biological limits on human intelligence will be eliminated,” he declared. “This is a path toward that vision.”

He also put up a slide of the late Mark Weiser, who postulated technology will disappear into the background. (I should note the same idea has long been propounded by Internet pioneer Leonard Kleinrock, as he articulated in my 2015 interview with him for Barron’s.)


But, who’s going to build all this?

It won’t be the cloud computer companies, he concluded. “Cloud computing is not going to do it, I’m sorry,” he said. Amazon, Google and Microsoft for the most part are too hung up on centralization. “It will be hard for them to embrace edge computing,” he said, before quickly adding, “except maybe Microsoft. Microsoft came from the world of edge computing,” the world of the traditional PC and workgroup server.

If not the cloud people, it could be the telcos. Or it could be someone else. “It’s wide open,” he assured the audience.

There will be effects for semiconductor devices, as “cloudlets can use lots of custom chips,” he said. And there is already the advent of an “A.I. processing unit,” a “new kind of chip,” he said.

At the end of the day, though, he conceded, “we don’t really know how to provision the edge, yet.”

That leaves plenty for both researchers and academics to think about.


https://www.androidcentral.com/t-mobile-and-sprint-announce-first-markets-support-their-5g-networks

T-Mobile and Sprint announce first markets to support their 5G networks

The 5G future is almost here.

The race to 5G seems to be heating up among U.S. carriers every single day, and during MWC 2018, T-Mobile and Sprint both announced their respective plans for rolling out their 5G networks over the next year.

Looking first at T-Mobile, the Un-Carrier says it’ll begin building out 5G equipment this year in a total of 30 cities, with New York, Los Angeles, Dallas, and Las Vegas being among the first that’ll be able to experience the increased speeds when 5G smartphones start coming out in 2019.

T-Mobile will be enabling more of its 600 MHz and millimeter wave spectrum to help prepare for its 5G coverage. Commenting on this news, Chief Technology Officer Neville Ray said:

Every dollar we invest in our network is a 5G dollar. All the LTE Advanced work we do is 5G work, and we’re leading the industry with the most advanced LTE network in the country. Every step we take — every innovation — builds toward a future-proof 5G network, one where our customers continue to come out on top.

As for Sprint, the carrier claims its customers in Atlanta, Chicago, Dallas, Houston, Los Angeles, and Washington D.C. will be able to “experience the future of wireless” starting this April. Sprint will offer these markets “5G-like capabilities,” including faster speeds and more capacity.

Sprint’s targeting the first half of 2019 as the launch window for its 5G network, and between now and then, it’ll start to utilize thousands of its Massive MIMO radios to prepare itself for this next evolution in mobile data.

This news comes shortly after AT&T announced similar plans, and all of this makes it clear that no carrier will stop until it’s first to market. Who will win that race? Let me know who your money is on in the comments below.

https://futurism.com/depression-caused-inflammation-brain/

Long-Term Depression Permanently Changes the Brain

Is clinical depression a degenerative illness? One new study shows that inflammation in the brain linked to depression increases over time.

DEPRESSION INFLAMMATION

New research from the Centre for Addiction and Mental Health (CAMH) in Toronto has revealed something remarkable about mental illness: years of persistent depression-caused inflammation permanently and physically alter the brain. This may dramatically affect how we understand mental illness and how it progresses over time.

In a study published in The Lancet Psychiatry, researchers found that those who had untreated depression for over a decade had significantly more inflammation in their brains than those who had untreated clinical depression for less than a decade. This work builds on senior author Jeff Meyer’s previous research, in which he found the first concrete evidence that people with clinical depression experience inflammation of the brain.

This study went even further, providing the first evidence that long-term depression can cause extensive and permanent changes in the brain. Dr. Meyer thinks the findings could be used to create treatments for different stages of depression. This matters because it is now clear that treating depression immediately after diagnosis may need to differ significantly from treatment after 10 years with the illness.

IMPROVING UNDERSTANDING

Once a doctor and patient find a treatment for depression that works for the patient, that treatment typically remains static for the rest of the patient’s life. Taking this new study into account, that may not be the most effective approach.

A PET image of a slice of human brain, showing areas of blue and red coloring; PET imaging was used to measure depression-caused inflammation in this study. (Image credit: Jens Maus)

This study examined 25 patients who had had depression for over a decade, 25 who had had the illness for less time, and 30 people without clinical depression as a control group. The researchers measured depression-caused inflammation using positron emission tomography (PET), which can pick out a protein marker, called TSPO, that the brain’s immune cells produce during inflammation. Those with long-lasting depression had about 30 percent higher levels of TSPO than those with shorter periods of depression, as well as higher levels than the control group.

Many misunderstand mental illness to be entirely separate from physical symptoms, but this study shows just how severe those symptoms can be. These findings could spark similar studies with other mental illnesses.

It is even possible that depression might now be treated as a degenerative disease, as it affects the brain progressively over time: “Greater inflammation in the brain is a common response with degenerative brain diseases as they progress, such as with Alzheimer’s disease and Parkinson’s disease,” Meyer said in a press release.

http://www.itpro.co.uk/hardware/30623/apple-imac-pro-review-the-return-of-the-king

Apple iMac Pro review: The return of the king

Apple regains its place as the big dog of enterprise workstations

From £4,083 exc VAT
Pros
The most powerful Mac ever; Stunning design; Gorgeous screen
Cons
Expensive
Verdict
The iMac Pro is Apple’s attempt to take back the workplace, and it’s come out swinging. Not only is it the most powerful Mac ever made, it’s managed to fit in all that power without sacrificing any of Apple’s world-class design aesthetic. If you can afford it, you won’t regret it.

The MacBook range has been one of the most ubiquitous business laptops around for a number of years now, beloved by everyone from executives to developers. Now Apple is seeking to re-assert its dominance in other areas of the enterprise, with an absolute monster of a machine that’s built to handle serious workloads.

Apple’s first unabashedly enterprise-grade machine since the Mac Pro, the iMac Pro has garnered some criticism from skeptics, largely over its price tag, which starts at just under £5,000 and goes up to over £12,000 for the most powerful model.

Although it’s undeniably expensive, it’s also wildly powerful – it’s Apple’s most powerful machine to date, in fact. Combine that with Apple’s existing pedigree, and you’ve got a machine that will likely be seriously tempting to businesses looking for a proper workstation that doesn’t just look like a big, black monolith.

Apple iMac Pro review: Design

There are, of course, many things that have contributed to the iMac’s ubiquity in the business market, but one of the main reasons is that Apple has firmly and consistently nailed the design of its machines. You can get machines with the same relative horsepower and display quality as an iMac (often for a lower price), but there really is no substitute for the aesthetic appeal of a bank of pristine iMacs.

Understandably, Apple hasn’t messed with its winning formula, and the iMac Pro is visually identical to the vanilla iMac. It’s got the same minimalist aluminium shell and the same clean lines. There are a few critical differences, though. For a start, the iMac Pro comes only in Space Grey, a finish available on no other member of the iMac family.

It comes with Apple’s Magic Keyboard and Magic Mouse 2, plus an optional Magic Trackpad 2, all of which are also in Space Grey. Apple doesn’t sell the Space Grey versions individually either, so if you want to get your hands on these snazzy peripherals, an iMac Pro purchase is your only option.

As you may have gathered, we’re big fans of the iMac Pro’s aesthetic. It maintains the iMac’s classic, timeless charm with the Space Grey finish adding an air of businesslike sophistication to the whole affair.

This machine’s appeal isn’t just skin-deep, either. It’s perfectly balanced, requiring the lightest touch to tilt the screen through its XX degrees of movement. It’s equally easy to rotate the machine, although the screen itself doesn’t rotate along the X axis.

On top of that, Apple has added a number of clever design features. In what we suspect will turn out to be a surprisingly useful feature for many businesses, the iMac Pro features support for VESA mounting – just detach the base, and you can attach it to a wall-mount or a third-party stand.

The only issue we have with the iMac Pro’s design is that it still lacks height adjustability, but it’s a minor gripe that pales into insignificance when the rest of the construction is this good.

Apple iMac Pro review: Display

Apple’s hardware is famous for its image quality, and the display quality of its latest iMac range is absolutely stunning. Unsurprisingly, the iMac Pro is every bit as capable as its stablemate in this regard, and the 27in screen is an absolute joy to behold.

It’s got a 5K resolution (5,120 x 2,880, if you want to be specific) and supports the wide DCI-P3 colour gamut. It was effectively flawless in our display tests, covering 98.9% of the DCI-P3 gamut and producing gorgeous, accurate colours and deep, crisp blacks with strong contrast. At 551cd/m², the maximum brightness is actually slightly blinding; we had to turn it down to about 75% to reach a comfortable level.

The iMac Pro supports a wide range of colour profiles, and it’s ideally suited to design work and editing. This is unlikely to be a surprise to anyone who already owns a Mac from the last few years – Apple’s reputation for fantastic displays is well-earned, and the iMac Pro is a demonstration of why.

Apple iMac Pro review: Performance

Apple has brought out the big guns for the iMac Pro. It’s the company’s most powerful machine to date, and intended to handle pretty much anything a business can throw at it, short of heavy-duty server operations. With an 18-core Intel Xeon W processor, 128GB of DDR4 memory and AMD’s Radeon Pro Vega 64 graphics chip in the top-spec configuration, it’s fair to say that this machine is an absolute monster.

Of course, you’ll have to fork over a truly eye-watering amount if you want that much power – the most expensive configuration costs in excess of £12,000. Paying that much money for a desktop computer may sound like utter madness, but it’s important to put that number into perspective. A comparably specced Windows workstation would cost at least £10,000 – and that’s without including the cost of a 27in 5K monitor.

With this much firepower on display, you’d hope that the iMac Pro would be capable of some seriously impressive feats, and Apple’s workhorse certainly doesn’t disappoint. We tested the standard (and least powerful) configuration, which comes with a 3.2GHz octa-core Intel Xeon W-2140B processor, an 8GB Radeon Pro Vega 56 GPU and 32GB of RAM.

Even though this is the iMac Pro’s entry-level configuration, it easily blew last year’s bottom-tier 5K iMac out of the water in our benchmarks, racking up an overall score of 283 – well over double the regular iMac’s score of 109. It’s worth noting that these improvements are almost entirely centred on multi-core operations. The iMac Pro scored similarly to the 5K iMac in our single-core image editing tests, but it was more than twice as fast at video editing, and multitasking was around three times faster.

Graphics performance is good as well. On Unigine’s Heaven benchmark – a reasonably demanding test of graphical horsepower – the iMac Pro managed 48fps at 2,560 x 1,440 resolution and medium detail. Although this is only marginally better than the 5K iMac’s result from last year, it’s nonetheless impressive. It even managed a smooth 35fps on the ultra quality setting. We should also point out that at no point during our testing did we notice any loud cooling fans, and the machine felt cool to the touch throughout. It’s one cool cucumber.

This machine is intended for enterprise workloads like rendering, 3D modelling and CAD applications, as well as creative tasks. Thankfully, it can capably hold its own here; in our workstation tests, it kept pace with full-blown workstation PCs such as the Scan 3XS WI4000, PC Specialist Apollo X02 and Chillblast Fusion Render OC Lite – all of which are hulking beasts compared to the iMac Pro.

Storage is lightning-quick, too, with the 1TB PCIe SSD delivering measured read speeds of 2.4GB/sec and write speeds of 3GB/sec – well over twice the speed of the 2017 iMac’s Fusion Drive. In fact, compared to the latter’s 130MB/sec write speed, the iMac Pro is over 2,000% faster.
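That percentage is easy to verify from the review's own figures:

```python
# Quick sanity check on the write-speed claim, using the figures above.
fusion_write = 130           # MB/s, 2017 iMac Fusion Drive write speed
imac_pro_write = 3 * 1000    # MB/s, i.e. the 3GB/sec measured here
speedup = imac_pro_write / fusion_write
print(f"{speedup:.1f}x the speed, i.e. {100 * (speedup - 1):.0f}% faster")
# -> 23.1x the speed, i.e. 2208% faster
```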

All of which is to say that this is an almost frighteningly powerful machine. It’s not as quick as the beefiest Windows workstations on the market, but it’s far and away the most capable piece of hardware that Apple has ever produced – and this is just the entry-level configuration. From 3D rendering to extreme multi-tasking and demanding media editing work, there’s not much that this sleek all-in-one won’t be able to handle – including virtual reality hardware and apps.

Apple iMac Pro review: Ports and features

Apple may have abandoned the idea of versatile connectivity options for its MacBook range, but it certainly hasn’t given up on it with its desktop devices. The iMac Pro sports a robust suite of ports, including 10Gb Ethernet, an SDXC card reader, four USB 3.0 ports and no fewer than four Thunderbolt 3 ports. These allow you to connect not only two external 5K monitors but also an external GPU, letting you bolster your iMac Pro with additional graphics processing power.

Apple iMac Pro review: Verdict

The iMac Pro is one seriously heavyweight contender. It’s fitted with some of the most powerful components ever to grace an Apple machine, and it puts them to great use. The display is as gorgeous as you’d expect and the whole thing runs like a dream.

Let’s address the elephant in the room, though. Yes, it’s expensive, but as we mentioned earlier, it’s actually not that much more expensive than Windows machines in the same category. Admittedly, you will be paying a noticeable markup compared to similarly-equipped rivals, but remember that for your initial entry price, you’ll also be getting a truly spectacular 27in 5K display, and the whole package is wrapped up in by far the sleekest and most attractive chassis of any enterprise workstation.

If the iMac Pro is Apple’s attempt to reclaim the business market, then we have to say that it’s putting up one hell of a fight. This machine takes all the raw grunt you’d expect from a heavy-duty enterprise PC and crams it into Apple’s signature all-in-one design in a way that virtually defies comprehension.

It may not be the most powerful – and it’s certainly not cheap – but as far as we’re concerned, the iMac Pro represents the pinnacle of business-grade workstation design. Go on, why not treat yourself?

Specifications
Processor 3.2GHz Intel Xeon W-2140B
RAM 32GB
Dimensions
Weight
Screen size 27in
Screen resolution 5,120 x 2,880
Graphics adaptor AMD Radeon Pro Vega 56 8GB
Total storage 1TB
Operating system macOS High Sierra

https://www.bloomberg.com/news/articles/2018-02-26/apple-is-said-to-plan-giant-high-end-iphone-lower-priced-model

Apple Plans Giant High-End iPhone, Lower-Priced Model

  • Company aims to boost sales after iPhone X missed expectations
  • All models to have facial recognition, edge-to-edge screens

Apple Inc. is preparing to release a trio of new smartphones later this year: the largest iPhone ever, an upgraded handset the same size as the current iPhone X and a less expensive model with some of the flagship phone’s key features.

With the new lineup, Apple wants to appeal to the growing number of consumers who crave the multitasking attributes of so-called phablets while also catering to those looking for a more affordable version of the iPhone X, according to people familiar with the products.

iPhone X

Photographer: Luke MacGregor/Bloomberg

Apple, which is already running production tests with suppliers, is expected to announce the new phones this fall. The plans could still change, say the people, who requested anonymity to discuss internal planning.

Despite months of breathless hype, the iPhone X hasn’t sold as well as expected since its debut last year. Apple sold 77.3 million iPhones in the final quarter of 2017, below analysts’ projections of 80.2 million units. Some consumers liked the iPhone X’s design but were turned off by its $1,000 price, while still wanting something more cutting-edge than the cheaper iPhone 8. With its next lineup, Apple is seeking to rekindle sales by offering a model for everyone.

“This is a big deal,” says Gene Munster, a co-founder of Loup Ventures and a long-time Apple watcher. “When you have a measurable upgrade in screen size, people go to update their phone in droves. We saw that with the iPhone 6, and we think this is setting up to be a similar step up in growth.”

Munster predicts a supercycle — which he defines as upgrades by 10 percent or more of Apple’s existing iPhone customers. “The market that will see the biggest jump in sales is likely Asia,” he says. “That market has many single-device consumers, and they love big phones.”

An Apple spokeswoman declined to comment. The shares gained 2.1 percent to $179.18 at 2:16 p.m. in New York.

Read more: How Samsung’s new Galaxy S9 compares to the iPhone X

With a screen close to 6.5 inches, Apple’s big new handset will be one of the largest mainstream smartphones on the market. While the body of the phone will be about the same size as the iPhone 8 Plus, the screen will be about an inch larger thanks to the edge-to-edge design used in the iPhone X. (Apple is unlikely to refer to the phone as a phablet, a term popularized by Samsung.)

The larger screen should especially appeal to business users, letting them write emails and manage spreadsheets on a screen about as big as a small tablet. Like the iPhone 8 Plus, the new handset will probably enable split-screen modes for certain apps. Still, the larger phone could cannibalize iPad sales, a category that recently started growing again.

The big phone is code named D33, a person familiar with its development says, and at least some prototypes include a screen resolution of 1242 x 2688. That would make the screen about as sharp as the one on the 5.8-inch iPhone X. Apple also plans to use OLED technology, the same, more expensive type of screen found in the regular iPhone X.
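Simple pixel-density geometry bears that out; the 6.5in diagonal below is the approximate size reported here, so the results are rough:

```python
# Approximate pixels-per-inch from resolution and nominal diagonal size.
from math import hypot

def ppi(width_px: int, height_px: int, diagonal_in: float) -> float:
    return hypot(width_px, height_px) / diagonal_in

print(f"iPhone X, 1125 x 2436 at 5.8in: {ppi(1125, 2436, 5.8):.0f} ppi")         # ~463
print(f"'D33', 1242 x 2688 at an assumed 6.5in: {ppi(1242, 2688, 6.5):.0f} ppi")  # ~456
```

Both work out to roughly 460 pixels per inch, which is why the larger screen would be "about as sharp" despite its size.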

Like the iPhone X, the larger model will include a Face ID scanner that unlocks the device and enables payments. Apple is also preparing an update to the regular-sized iPhone X that is internally dubbed D32, people familiar with the product said. Both of these phones are expected to use next-generation A12 processors and will continue to include stainless steel edges, they say, and will be Apple’s high-end smartphone offerings.

Animoji on an iPhone X

Photographer: Daniel Acker/Bloomberg

Apple is considering a gold color option for the update to the iPhone X and the larger model. The company tried to develop gold for the current X handset, but abandoned it because of production problems. All new iPhones since the 5s came in gold, including the iPhone 8. The gold option is especially appealing to consumers in Asia and may help boost sales in the region. Still, Apple may ultimately decide not to proceed with the color.

In at least some regions, Apple is considering offering a dual-SIM card option for the larger model. That would let people use their phones in countries with different carrier plans without having to swap out cards. Such a feature has been growing in importance and popularity, especially in Europe and Asia where business people routinely visit multiple countries.

Apple hasn’t made a final decision on including the feature and could choose to wait for E-SIM technology, which will connect phones to multiple networks without the need for a removable chip. Apple has wanted to offer E-SIM technology (it already exists in the iPad and Apple Watch), but some carriers are resistant to including it in iPhones, and Apple needs their support. A dual-SIM capability would provide a compromise.

The phones will have an updated operating system, probably called iOS 12 and code named Peace, which will include upgraded augmented reality capabilities, deeper integration of the Siri digital assistant, digital health monitoring and the ability to use Animojis in FaceTime.

iPhone 8 Plus

Photographer: Daniel Acker/Bloomberg

Apple’s decision to also build a cheaper phone is an acknowledgment that the current entry-level 8 models too closely resemble the iPhone 6 introduced back in 2014. With their thick bezels and lack of edge-to-edge screens, they seem dated next to the iPhone X and the latest Samsung devices. The new lower-cost model will feature the same edge-to-edge screen as the iPhone X as well as Face ID instead of a fingerprint sensor.

“It’s good that they’re rounding out the product line” with a less expensive phone, Munster says. But he doesn’t think it will have a measurable impact on demand because many consumers will want the bigger model.

To keep costs down, the cheaper phone will use LCD screen technology similar to the type employed in the iPhone 8. It will also have aluminum edges and a glass back like the iPhone 8, not the flashier stainless steel used in the iPhone X.

Apple has tried selling cheaper phones in the past with poor results. In 2013, the company debuted the iPhone 5c, which had a polycarbonate body and came in various colors. Consumers quickly discovered that for a mere $100 more they could buy a 5s, which had an aluminum body, a slow-motion video camera and a fingerprint scanner. Apple soon discontinued the 5c.

For more on the iPhone, check out the Decrypted podcast:

This time, the company is trying something different: using a cheaper body but including the features — Face ID and an edge-to-edge screen — that consumers most prize.

https://www.insidehighered.com/blogs/gradhacker/disabled-grad-school-how-out-do-i-need-be

Disabled in Grad School: How ‘Out’ Do I Need to Be?

This post is part of a (somewhat loose) series about being disabled at university, with a focus on graduate school: problems we encounter, how we deal with them, and what you can do that will make things easier for fellow graduate students with disabilities.

I’m a graduate student in neuroscience, I’m registered with disability services, and I’m pretty out about being disabled… in certain circumstances. Did you notice that it’s my first name alone on my byline? That’s intentional. At the same time, I took a class last semester about augmentative and alternative communication (AAC), as a part-time AAC user, and I’ve been known to wear a T-shirt announcing my neurotype while teaching.

The moral of that story is, disability disclosure is complicated.

We’re often taught to be ashamed of our needs, and to believe that they aren’t reasonable. Is it just that we shouldn’t be here? Whether or not the shame holds, there are times when being openly disabled just isn’t practical — proving disability discrimination can be hard, and encountering plausibly unrelated barriers as soon as we ask for accommodations is a common fear.

So, how openly disabled do I need to be to take your class?

If I need accommodations, then I need documentation, which I have to give to disability services. Then I have to make sure you get the disability services letter. You’ll know I’m disabled, but you may or may not know what my specific disability is. In practice, you’ll know what my disability is, because it can make things easier and it shouldn’t be a big deal. Also in practice, I understand why people might want to keep disclosure to a minimum, because sometimes it is a big deal. In theory, you and I could be the only people who know I’m disabled, and you might not know what disability I have.

Now let’s consider what happens when accommodations are implemented.

When the accommodation is extra time, other students might notice who’s never in the classroom for exams. I guess that’s possible? I certainly never noticed who was missing at exams. If everyone started together, and the students who both had extra time and needed it on a given exam went elsewhere when the standard time ran out, that might be noticed. How noticeable this accommodation is depends on how it’s handled at the individual university.

When the accommodation is only being called on when one’s hand is raised, I suppose other students could theoretically notice that certain people don’t get unexpectedly called on. Honestly, I’m not going to catch on if someone else has this accommodation. I’ll just notice if and when I get unexpectedly called on. Unless, of course, the professor normally refuses to call on raised hands. Then it’s pretty obvious if someone only gets called on when their hand is up. Having never taken a class with that sort of policy, I have no idea if I’ve ever had a classmate with this accommodation. I’m told it’s a common option for selective mutism, anxiety, and similar disabilities. It’s what I was initially offered when I told disability services I can’t always talk. I could have taken that option rather than use AAC, which everyone notices.

When the accommodation involves the use of a device that is allowed in the classroom, or already used in the classroom for other reasons, it can be pretty subtle. If, for example, fidget objects are already allowed, no one’s likely to notice or care that I’m using mine because I’m autistic. Or if I need to take notes on my laptop because I can’t always read my handwritten notes, and laptops are generally allowed, my classmates aren’t going to know why I’m using it. You might not know either — why turn in paperwork to protect my ability to do something everyone is already allowed to do? (One could argue that it’s a self-accommodation, or not even an accommodation at all, when the action or support is allowed by default, and we don’t need to disclose in order to make use of it.)

It’s when the accommodation involves the use of a device that is otherwise banned that I have to out myself in order to take your class. Fidget spinners are banned? I’m still going to need to fidget, so I can find a different way of meeting that need or I can out myself for an exception. Laptops are banned? I still can’t consistently read my handwriting once I’m removed from the context, so I’m going to need to go without usable notes, get a note-taker, or out myself to my classmates as well as to you.

You’ll notice that this isn’t just about technology in the classroom. It’s not just laptops. It’s a question of how the university is designed: some spaces won’t require me to request accommodation. My needs are met by the default design of most online courses, for example. In other spaces, I’ll need to turn in my disability services letter, but other students might not know about my disabilities. In yet other spaces, my accommodations will be visible to everyone in the room. Which spaces are which varies with both individual needs and the design of the space, including its rules: what’s normally accepted, and what’s not?

For students who aren’t out as disabled to their cohorts or classmates, having to out themselves is a barrier. How out do we need to be, in order to take your class? How out do we need to be, in order to make it through the door?