Scientists develop tiny tooth-mounted sensors that can track what you eat

Wireless real-time monitoring could add precision to the linkage between diet and health


MEDFORD/SOMERVILLE, Mass. (March 22, 2018) – Monitoring in real time what happens in and around our bodies can be invaluable in the context of health care or clinical studies, but not so easy to do. That could soon change thanks to new, miniaturized sensors developed by researchers at the Tufts University School of Engineering that, when mounted directly on a tooth and communicating wirelessly with a mobile device, can transmit information on glucose, salt and alcohol intake. In research to be published soon in the journal Advanced Materials, researchers note that future adaptations of these sensors could enable the detection and recording of a wide range of nutrients, chemicals and physiological states.

Previous wearable devices for monitoring dietary intake suffered from limitations such as requiring the use of a mouth guard, bulky wiring, or necessitating frequent replacement as the sensors rapidly degraded. Tufts engineers sought a more adoptable technology and developed a sensor with a mere 2mm x 2mm footprint that can flexibly conform and bond to the irregular surface of a tooth. In a similar fashion to the way a toll is collected on a highway, the sensors transmit their data wirelessly in response to an incoming radiofrequency signal.

The sensors are made up of three sandwiched layers: a central “bioresponsive” layer that absorbs the nutrient or other chemicals to be detected, and outer layers consisting of two square-shaped gold rings. Together, the three layers act like a tiny antenna, collecting and transmitting waves in the radiofrequency spectrum. As an incoming wave hits the sensor, some of it is cancelled out and the rest is transmitted back, just as a patch of blue paint absorbs redder wavelengths and reflects the blue back to our eyes.

The sensor, however, can change its “color.” For example, if the central layer takes on salt, or ethanol, its electrical properties will shift, causing the sensor to absorb and transmit a different spectrum of radiofrequency waves, with varying intensity. That is how nutrients and other analytes can be detected and measured.
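As a rough, hypothetical illustration of the physics described above (this is not the authors' model, and the numbers are made up): the resonance of a dielectric-loaded antenna scales roughly as the inverse square root of the layer's effective relative permittivity, so an analyte that raises the bioresponsive layer's permittivity shifts the resonance downward:

```python
import math

def resonant_freq_ghz(f0_ghz, eps_r):
    """Toy model: resonance of a dielectric-loaded antenna scales as
    1/sqrt(eps_r). f0_ghz is the eps_r = 1 resonance; illustrative only."""
    return f0_ghz / math.sqrt(eps_r)

# Hypothetical numbers: a sensor resonating near 10 GHz in free space,
# whose bioresponsive layer absorbs salt, raising its permittivity.
f_dry = resonant_freq_ghz(10.0, 2.0)   # layer before analyte uptake
f_salt = resonant_freq_ghz(10.0, 5.0)  # layer after absorbing salt

print(f"dry: {f_dry:.2f} GHz, with salt: {f_salt:.2f} GHz")
```

Reading back which frequencies the sensor absorbs and re-transmits, and at what intensity, is what lets the mobile device infer what the layer has taken up.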

“In theory we can modify the bioresponsive layer in these sensors to target other chemicals – we are really limited only by our creativity,” said Fiorenzo Omenetto, Ph.D., corresponding author and the Frank C. Doble Professor of Engineering at Tufts. “We have extended common RFID [radiofrequency ID] technology to a sensor package that can dynamically read and transmit information on its environment, whether it is affixed to a tooth, to skin, or any other surface.”


Other authors on the paper were: Peter Tseng, Ph.D., a post-doctoral associate in Omenetto’s laboratory, who is now assistant professor of electrical engineering and computer science at University of California, Irvine; Bradley Napier, a graduate student in the Department of Biomedical Engineering at Tufts; Logan Garbarini, an undergraduate student at the Tufts School of Engineering; and David Kaplan, Ph.D., the Stern Family Professor of Engineering, chair of the Department of Biomedical Engineering, and director of the Bioengineering and Biotechnology Center at Tufts.

The work was supported by the U.S. Army Natick Soldier Research, Development and Engineering Center; the National Institute of Biomedical Imaging and Bioengineering of the National Institutes of Health (NIH; grant F32 EB021159); and the Office of Naval Research. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH, the Army Natick Soldier Research, Development and Engineering Center, or the Office of Naval Research.

Tseng P, Napier B, Garbarini L, Kaplan DL and Omenetto F. “Functional, RF-trilayer sensors for tooth-mounted, wireless monitoring of the oral cavity and food consumption.” Advanced Materials, DOI: 10.1002/adma.201703257

About Tufts University

Tufts University, located on campuses in Boston, Medford/Somerville and Grafton, Massachusetts, and in Talloires, France, is recognized among the premier research universities in the United States. Tufts enjoys a global reputation for academic excellence and for the preparation of students as leaders in a wide range of professions. A growing number of innovative teaching and research initiatives span all Tufts campuses, and collaboration among the faculty and students in the undergraduate, graduate and professional programs across the university’s schools is widely encouraged.


Early life experiences influence DNA in the adult brain

March 22, 2018, Salk Institute
Mothering style influences the degree to which DNA is mobilized in offspring’s brains, with offspring of more attentive mothers experiencing less gene movement, and offspring of less attentive mothers experiencing more gene movement.

In the perennial question of nature versus nurture, a new study suggests an intriguing connection between the two. Salk Institute scientists report in the journal Science that the type of mothering a female mouse provides her pups actually changes their DNA. The work lends support to studies about how childhood environments affect brain development in humans and could provide insights into neuropsychiatric disorders such as depression and schizophrenia.

“We are taught that our DNA is something stable and unchanging which makes us who we are, but in reality it’s much more dynamic,” says Rusty Gage, a professor in Salk’s Laboratory of Genetics. “It turns out there are genes in your cells that are capable of copying themselves and moving around, which means that, in some ways, your DNA does change.”

For at least a decade, scientists have known that most cells in the mammalian brain undergo changes to their DNA that make each neuron, for example, slightly different from its neighbor. Some of these changes are caused by “jumping” genes—officially known as long interspersed nuclear elements (LINEs)—that move from one spot in the genome to another. In 2005, the Gage lab discovered that a jumping gene called L1, which was already known to copy and paste itself into new places in the genome, could jump in developing neuronal brain cells.

The team had hypothesized that such changes create potentially helpful diversity among brain cells, fine-tuning function, but might also contribute to neuropsychiatric conditions.

“While we’ve known for a while that cells can acquire changes to their DNA, it’s been speculated that maybe it’s not a random process,” says Tracy Bedrosian, a former Salk research associate and first author of the study. “Maybe there are factors in the brain or in the environment that cause changes to happen more or less frequently.”


Early life experiences can cause genes to move around. Credit: Salk Institute

To find out, Gage, Bedrosian and colleagues began by observing natural variations in maternal care between mice and their offspring. They then looked at DNA from the offspring’s hippocampus, which is involved in emotion, memory and some involuntary functions. The team discovered a correlation between maternal care and L1 copy number: mice with attentive mothers had fewer copies of the jumping gene L1, and those with neglectful mothers had more L1 copies, and thus more genetic diversity in their brains.

To make sure the difference wasn’t a coincidence, the team conducted a number of control experiments, including checking the DNA of both parents of each litter to make sure the offspring didn’t just inherit their numbers of L1s from a parent, as well as verifying that the extra DNA was actually genomic DNA and not stray genetic material from outside the cell nucleus. Lastly, they cross-fostered offspring, so that mice born to neglectful mothers were raised by attentive ones, and vice versa. Initial results of the correlation between L1 numbers and mothering style held: mice born to neglectful mothers but raised by attentive ones had fewer copies of L1 than mice born to attentive mothers but raised by neglectful ones.

The researchers hypothesized that offspring whose mothers were neglectful were more stressed and that somehow this was causing genes to copy and move around more frequently. Interestingly, there was no similar correlation between maternal care and the numbers of other known jumping genes, which suggested a unique role for L1. So, next, the team looked at methylation—the pattern of chemical marks on DNA that signals whether genes should or should not be copied and that can be influenced by environmental factors. In this case, methylation of the other known jumping genes was consistent for all offspring. But it was a different story with L1: mice with neglectful mothers had noticeably fewer methylated L1 genes than those with attentive mothers, suggesting that methylation is the mechanism responsible for the mobility of the L1 gene.

“This finding agrees with studies of childhood neglect that also show altered patterns of DNA methylation for other genes,” says Gage, who holds the Vi and John Adler Chair for Research on Age-Related Neurodegenerative Diseases. “That’s a hopeful thing, because once you understand a mechanism, you can begin to develop strategies for intervention.”

The researchers emphasize that at this point it’s unclear whether there are functional consequences of increased L1 elements. Future work will examine whether the mice’s performance on cognitive tests, such as remembering which path in a maze leads to a treat, can be correlated with the number of L1 genes.


More information: T.A. Bedrosian et al., “Early life experience drives structural variation of neural genomes in mice,” Science (2018). … 1126/science.aah3378


New algorithm will allow for simulating neural connections of entire brain on future exascale supercomputers

March 21, 2018

(credit: iStock)

An international team of scientists has developed an algorithm that represents a major step toward simulating neural connections in the entire human brain.

The new algorithm, described in an open-access paper published in Frontiers in Neuroinformatics, is intended to allow simulation of the human brain’s 100 billion interconnected neurons on supercomputers. The work involves researchers at the Jülich Research Centre, the Norwegian University of Life Sciences, Aachen University, RIKEN, and the KTH Royal Institute of Technology.

An open-source neural simulation tool. The algorithm was developed using NEST* (“neural simulation tool”) — open-source simulation software in widespread use by the neuroscientific community and a core simulator of the European Human Brain Project. With NEST, the behavior of each neuron in the network is represented by a small number of mathematical equations, the researchers explain in an announcement.

Since 2014, large-scale simulations of neural networks using NEST have been running on the petascale** K supercomputer at RIKEN and JUQUEEN supercomputer at the Jülich Supercomputing Centre in Germany to simulate the connections of about one percent of the neurons in the human brain, according to Markus Diesmann, PhD, Director at the Jülich Institute of Neuroscience and Medicine. Those simulations have used a previous version of the NEST algorithm.

Why supercomputers can’t model the entire brain (yet). “Before a neuronal network simulation can take place, neurons and their connections need to be created virtually,” explains senior author Susanne Kunkel of KTH Royal Institute of Technology in Stockholm.

During the simulation, a neuron’s action potentials (short electric pulses) first need to be sent to all 100,000 or so small computers, called nodes, each equipped with a number of processors doing the actual calculations. Each node then checks which of all these pulses are relevant for the virtual neurons that exist on this node.

That process requires one bit of information per processor for every neuron in the whole network. For a network of one billion neurons, a large part of the memory in each node is consumed by this single bit of information per neuron. Of course, the amount of computer memory required per processor for these extra bits per neuron increases with the size of the neuronal network. To go beyond the 1 percent and simulate the entire human brain would require the memory available to each processor to be 100 times larger than in today’s supercomputers.
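A back-of-envelope check of that argument, assuming (as described above) exactly one bit of connectivity bookkeeping per neuron in the network, per processor:

```python
# One bit per neuron per processor, as the article describes.
def bookkeeping_bytes_per_processor(n_neurons):
    """Memory spent per processor just marking which neurons exist."""
    return n_neurons / 8  # 8 bits per byte

one_billion = 1_000_000_000
mb = bookkeeping_bytes_per_processor(one_billion) / 1e6
print(f"{mb:.0f} MB per processor for a 1-billion-neuron network")
```

That is 125 MB of memory per processor consumed before any actual neural state is stored, and the figure grows linearly with network size, which is why scaling to the full brain demands a different scheme rather than simply more of the same memory.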

In future exascale** computers, such as the post-K computer planned in Kobe and JUWELS at Jülich*** in Germany, the number of processors per compute node will increase, but the memory per processor and the number of compute nodes will stay the same.

Achieving whole-brain simulation on future exascale supercomputers. That’s where the next-generation NEST algorithm comes in. At the beginning of the simulation, the new NEST algorithm will allow the nodes to exchange information about what data on neuronal activity needs to be sent, and to where. Once this knowledge is available, the exchange of data between nodes can be organized such that a given node only receives the information it actually requires. That will eliminate the need for the additional bit for each neuron in the network.
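A toy sketch of that idea (an illustration of directed communication in general, not NEST's actual implementation): nodes first declare which source neurons they need, a routing table is built once up front, and spikes are then delivered only to subscribing nodes instead of being broadcast everywhere:

```python
from collections import defaultdict

def build_routes(subscriptions):
    """subscriptions: {node_id: set of source-neuron ids that node needs}.
    Built once at setup, like the initial information exchange described."""
    routes = defaultdict(list)
    for node, needed in subscriptions.items():
        for neuron in needed:
            routes[neuron].append(node)
    return routes

def deliver(spikes, routes, n_nodes):
    """Directed delivery: each spike goes only to nodes that asked for it,
    so a node never receives (or filters) irrelevant pulses."""
    inbox = {node: [] for node in range(n_nodes)}
    for neuron in spikes:
        for node in routes.get(neuron, []):
            inbox[node].append(neuron)
    return inbox

subs = {0: {1, 2}, 1: {2}, 2: set()}  # hypothetical 3-node layout
routes = build_routes(subs)
print(deliver([2], routes, 3))  # neuron 2's spike reaches nodes 0 and 1 only
```

The memory saving comes from the fact that no node needs a per-neuron flag for the whole network; it only stores entries for the connections it actually participates in.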

Brain-simulation software, running on a current petascale supercomputer, can only represent about 1 percent of neuron connections in the cortex of a human brain (dark red area of brain on left). Only about 10 percent of neuron connections (center) would be possible on the next generation of exascale supercomputers, which will exceed the performance of today’s high-end supercomputers by 10- to 100-fold. However, a new algorithm could allow for 100 percent (whole-brain-scale simulation) on exascale supercomputers, using the same amount of computer memory as current supercomputers. (credit: Forschungszentrum Jülich, adapted by KurzweilAI)

With memory consumption under control, simulation speed will then become the main focus. For example, a large simulation of 0.52 billion neurons connected by 5.8 trillion synapses running on the supercomputer JUQUEEN in Jülich previously required 28.5 minutes to compute one second of biological time. With the improved algorithm, the time will be reduced to just 5.2 minutes, the researchers calculate.
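Restated as simple arithmetic, using only the figures quoted above, that is roughly a 5.5-fold speedup, though still far from real time:

```python
# Quoted figures: time to compute one second of biological activity
# for 0.52 billion neurons / 5.8 trillion synapses on JUQUEEN.
old_minutes = 28.5  # previous NEST algorithm
new_minutes = 5.2   # improved algorithm (projected)

speedup = old_minutes / new_minutes
# Wall-clock seconds needed per second of biological time:
slowdown_vs_realtime = new_minutes * 60 / 1.0

print(f"~{speedup:.1f}x faster; still ~{slowdown_vs_realtime:.0f}x "
      f"slower than real time")
```

Even after the improvement, one biological second still takes about five minutes of wall-clock time at this scale, which is why the authors frame the gain in terms of making minutes-long phenomena such as plasticity and learning reachable, rather than real-time simulation.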

“The combination of exascale hardware and [forthcoming NEST] software brings investigations of fundamental aspects of brain function, like plasticity and learning, unfolding over minutes of biological time, within our reach,” says Diesmann.

The new algorithm will also make simulations faster on presently available petascale supercomputers, the researchers found.

NEST simulation software update. In one of the next releases of the simulation software by the Neural Simulation Technology Initiative, the researchers will make the new open-source code freely available to the community.

For the first time, researchers will have the computer power available to simulate neuronal networks on the scale of the entire human brain.

Kenji Doya of Okinawa Institute of Science and Technology (OIST) may be among the first to try it. “We have been using NEST for simulating the complex dynamics of the basal ganglia circuits in health and Parkinson’s disease on the K computer. We are excited to hear the news about the new generation of NEST, which will allow us to run whole-brain-scale simulations on the post-K computer to clarify the neural mechanisms of motor control and mental functions,” he says.

* NEST is a simulator for spiking neural network models that focuses on the dynamics, size and structure of neural systems, rather than on the exact morphology of individual neurons. NEST is ideal for networks of spiking neurons of any size, such as models of information processing, e.g., in the visual or auditory cortex of mammals, models of network activity dynamics, e.g., laminar cortical networks or balanced random networks, and models of learning and plasticity.

** Petascale supercomputers operate at petaflop/s (quadrillions, or 10^15, floating-point operations per second). Future exascale supercomputers will operate at exaflop/s (10^18 flop/s). The fastest supercomputer at this time is the Sunway TaihuLight at the National Supercomputing Center in Wuxi, China, operating at 93 petaflop/s.

*** At Jülich, the work is supported by the Simulation Laboratory Neuroscience, a facility of the Bernstein Network Computational Neuroscience at Jülich Supercomputing Centre. Partial funding comes from the European Union Seventh Framework Programme (Human Brain Project, HBP) and the European Union’s Horizon 2020 research and innovation programme, and the Exploratory Challenge on Post-K Computer (Understanding the neural mechanisms of thoughts and its applications to AI) of the Ministry of Education, Culture, Sports, Science and Technology (MEXT) Japan. With their joint project between Japan and Europe, the researchers hope to contribute to the formation of an International Brain Initiative (IBI).

BernsteinNetwork | NEST — A brain simulator

Abstract of Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers

State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10 % of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.

Samsung launches Exynos 9610 SoC with deep learning-based vision and image processing capabilities

Samsung Electronics has launched a brand new Exynos 7 series SoC called the 9610 that will begin mass production later this year. While it has been designed to deliver slow-motion video recording, it also packs in some deep learning magic that should drastically improve the quality of photos, just like on the Google Pixel 2.

The logo of Samsung Electronics is seen at its office building in Seoul, South Korea. Image: Reuters


To be mass produced by the second half of this year, the mid-range mobile SoC will be manufactured using Samsung’s 10nm FinFET process. The chipset also packs in its own neural network engine that will enable enhanced face detection while providing vision and image processing capabilities.

According to Samsung, the advanced face detection is smart enough to recognise faces that are partially covered by objects such as hair or a hat. In fact, it is so powerful that faces don’t even need to be facing forward to be detected.

Other Pixel-like tricks include single-camera out-focusing and augmented low-light imaging. Smart depth sensing will help deliver bokeh images using just a single camera sensor.

The 9610, with its vision and image processing capabilities, can even enhance the signal-to-noise ratio (SNR) and brightness, which should drastically improve image quality in low-light shots.

For videos, the Exynos 9610 supports slow-motion recording at 480 fps, and even 4K recording at 120 fps in the HEVC (H.265) format. Yes, and this is still a mid-range SoC we are talking about.

The ISP inside the 9610 has been improved to deliver 1.6 times the performance, along with a doubled Mobile Industry Processor Interface (MIPI) speed.

Samsung Exynos 7 Series 9610 SoC. Samsung Newsroom


With the crazy image-processing capabilities out of the way, it’s time to take a look at what’s under the hood.

We have an octa-core setup with four Cortex-A73 and four Cortex-A53 cores clocked at 2.3 GHz and 1.6 GHz respectively. Add to this an ARM Mali-G72 GPU along with support for LPDDR4X RAM.

There is also a low-powered Cortex-M4-based sensor hub that will enable features like gesture recognition and context awareness, taking the load off the main processor.

Connectivity sees an LTE modem with support for Cat.12 3CA for downlink and Cat.13 2CA for uplink. Add to this Wi-Fi MIMO, Bluetooth v5.0 and support for an FM Radio (finally). There’s also the usual serving of a 4-mode Global Navigation Satellite System (GNSS).

“The Exynos 7 Series 9610 is our latest mobile processor that delivers on added power and speed,” said Ben Hur, vice president of System LSI marketing at Samsung Electronics. “But more importantly, it sets a new performance standard for high-end devices with its deep learning vision-image processing solution and slow motion video capabilities that change the way we engage with our mobile devices.”

Indeed, all of the above capabilities do make up for what appeared to be Samsung’s lack of interest in the Galaxy S9 and S9+ this year. The brand seemed like it was preparing for something big with its tenth anniversary S Series. Considering the capabilities of this mid-range chipset, we can expect a lot more from the high-end one to arrive on the Galaxy X early next year.


Opinion: What I’d like to see Apple announce at the March 27 ‘education’ event

Apple set expectations for the March 27 event with the invite: ‘creative new ideas for teachers and students’. It’s going to be themed and directed at education markets, no question.

But that doesn’t mean the event will be irrelevant to an average consumer outside of a school. Whilst software announcements will almost certainly focus on things like Apple Classroom, any new hardware revisions affect normal customers just as much as schools. Here’s what I’d like to see happen.

I want Apple to give some attention to the Mac laptop lineup. There are almost too many choices now. It’s hard to say which MacBook is the entry-level model nowadays: the product you can recommend blindly, knowing a majority of people will be happy.

I feel like Apple wants to say that the 12-inch MacBook is that device. If you look at the Apple Store website, it is the first Mac in the navigation. Compare this to the equivalent page for the iPad, where the iPad Pro is listed first.

It is a sliver of metal with incredible portability, a Retina display and the new keyboard design. The latest 2017 revision finally gave it enough CPU power to not feel dog slow. The problem is that it is priced like a higher-end machine.

It gets squeezed by the MacBook Air on price; as poor as the Air’s screen is, it is 30% cheaper and technically has a larger display. (Refurbished Airs regularly get discounted even more significantly.) Customer psychology is always driven by price. The Air is the only Mac laptop that doesn’t have a four-digit price tag.

On the other end, the MacBook bumps up against the 13-inch MacBook Pro with two Thunderbolt 3 ports. For the same $1299 price, you can get a much more powerful machine with modern I/O connectivity, albeit with half as much storage.

What I want Apple to introduce is a machine that can be priced cheaply enough to cut out the need for an Air for a regular consumer. Honestly, I think they are already close with their current hardware. If they made a 128 GB SSD configuration of the 12-inch MacBook, they could push it lower without making any revolutionary hardware changes. I don’t think it’s a pie-in-the-sky proposition to envision a 12-inch MacBook for $1099, or $999 at a stretch.

The rumor is that Apple will make the MacBook Air more affordable. This is less ideal to my mind, because the psychological folly that people ‘buy what’s cheapest, not what’s best for them’ (even if they have the budget) will still apply. If they are going to do it, make it noticeably different, like $799.

I wouldn’t mind if they kept it around but hid it in the Apple Store, targeted just for business and education, similar to how they kept around the MacBook Pro with SuperDrive for years, or how the current 2015 Pros are buried on the Apple Store. Distinguish it enough so that it is clearly an ‘old legacy’ rather than a bad entry-level.

Tying into another rumor we heard last week, Apple could be prepping a higher-end 13-inch MacBook (maybe with two ports?) that would slot in above the 12-inch but below the Pros. This product probably wouldn’t ship until June, and is unlikely to even be announced until WWDC, but the changes made next week could signal a gap in the lineup ready to be filled. For instance, the Air could shift down to $799, the 12-inch MacBook hits the magic $999 number, and then the 13-inch MacBook fills a $1199 price point in a few months’ time.

If it was me thinking just about my own needs, I’d love to cut out the Air altogether and simplify the range. However, I don’t think that’s practical for Apple’s margins or customer needs quite yet.

Regarding the iPad, I think a cheaper entry-level iPad is almost certainly going to happen. Going from $329 to $299 is a huge psychological improvement for customers.

Apple was already selling the 2017 iPad to schools with a $30 discount when buying in bulk, and I’d expect them to do the same this year bringing the cost of entry into education for iPads to around $260 (which might explain some recent reports).

In terms of spec changes, I wouldn’t expect a huge leap. Unlike the Air, the 2017 iPad is a pretty respectable product overall. The A9 is still a very capable chip. It’s just cheaper. Given that the invite looks like a pen stroke, I am inclined to think that maybe the new school iPad will support Apple Pencil though.

I’m not familiar with the technical requirements here, but I don’t think the expensive part of making an iPad work with the Pencil is in the iPad; it’s in the accessory that you pay $99 for. So, bringing Pencil support to the cheapest iPad doesn’t seem impossible. It’s also a technology that is now several years old, starting with the 12.9-inch iPad Pro in late 2015.

Of course, using Pencil with the 9.7-inch iPad wouldn’t be as nice a canvas as a True Tone ProMotion screen from the iPad Pro, but those are niceties and not requirements for a good baseline experience. After all, Apple happily debuted the 2015 iPad Pro with neither 120 FPS screen refresh nor True Tone.

I think an update to the Apple Pencil itself is possible but could come later in the year, to accompany new iPad Pro hardware.

The other thing I’d like to see is an Apple-designed keyboard accessory for the low-end model. Apple currently recommends this Logitech case for schools to use. The reality is, though, it’s really ugly. Moreover, it relies on Bluetooth to communicate, so it needs to be charged separately.

An Apple solution would be prettier, have zero-effort pairing, and ideally eliminate the need for a separate battery to charge. This last wish would require a Smart Connector in the iPad itself. Again though, I don’t think that connector represents a price premium for the overall bill of materials.

Even if Apple didn’t want to brand it as their own, a Smart Connector and partnership with Logitech would allow them to make a second-generation case that is significantly improved.

Thinking realistically, for an event that is not going to be livestreamed and not held on Apple’s campus, I think that’s about as much as you can expect from the hardware story. Small updates to the entry-level iPad and price cuts on the Mac side. Nevertheless, I hope it shows a path to deprecating a laptop that has stuck around for an embarrassingly long time. Apple, take the Air from the room.

Check out our roundup from earlier this week on all the potential software and hardware rumors that have been circulating … and stay tuned as we bring live coverage of all the announcements on Tuesday!
