http://www.businessinsider.com/airpods-design-flaw-is-really-a-strength-2018-7

The AirPods’ biggest design ‘flaw’ is actually their greatest strength — here’s why I think Apple got it right


Apple AirPods. Reuters/Beck Diefenbach
  • Apple’s AirPods are the best pair of totally wireless headphones I’ve tried, despite breaking one of the most common design rules for earbuds.
  • Instead of having gummy tips, which create a full seal around your ears to stop outside noise from leaking in, the AirPods are made entirely out of hard plastic.
  • Because they don’t create as tight a seal, the AirPods may not sound quite as good as other high-end earbuds, but in my experience, they’re the only pair of totally wireless earbuds that reliably stays in my ears.
  • The other totally wireless earbuds I’ve tried so far lose their tight seal within a couple of minutes because they’re much more sensitive to movement. Once the seal is gone, music doesn’t sound as good, and they fall out.

Apple’s AirPods aren’t the best-sounding headphones I’ve tried, or even the best looking, but they’re still the pair I reach for when I’m not working on a headphone review. At first I thought it was because they were the only totally wireless headphones I owned, but I’ve tested a couple of other pairs recently, and it made me realize the AirPods’ biggest design “flaw” is actually their biggest strength.

Most earbuds have gummy tips, which create a “seal” around the inside of your ear that’s important for two reasons. First, it creates an environment where outside sound can’t leak in, so you can listen to your music without being interrupted by outside noises, like a subway car or people walking on the street. Second, by creating that ideal environment, your music will actually sound better, especially lower bass frequencies.

The AirPods break that design rule. Instead of having gummy tips, they have a hard plastic shell, like Apple’s wired earbuds, the EarPods. They don’t create a tight seal, and sit on the inside of my ears instead of totally plugging them up. This design lets outside sound leak in, which is annoying, but it actually helps the AirPods stay in my ears.

Totally wireless earbuds rely on their seal for more than just audio quality. They actually need that tight seal to stay securely in your ears. Unfortunately, it doesn’t matter if I’m sitting, walking, or running — all of the gummy-tipped wireless earbuds I’ve tried fit into my ears nicely for a couple of minutes, then start loosening up.

Eventually, the seal the earbuds created breaks, and one or both of them fall out of my ears. Because the AirPods don’t have to maintain a seal, they’re the only totally wireless headphones I can reliably keep in my ears for more than a few minutes at a time.

Apple AirPods. Maurizio Pesce/Flickr

The AirPods’ design also solves another problem I’ve come across when testing earbuds: finding the right-sized tip for my ears. Most earbuds ship with sets of small, medium, and large eartips, so you can find the ones that fit best. AirPods are one-size-fits-all, and while they may feel like they’re going to fall out, they’re surprisingly good at staying in my ears.

As I mentioned earlier, outside noise does leak in through the AirPods because they don’t create a tight seal, but that doesn’t mean they sound bad.

In fact, they sound quite good, regardless of the genre of music I’m listening to. Competing earbuds that do create a seal do sound better, but it’s hard to concentrate on the music I’m listening to when I have to stop and rearrange them in my ears every few minutes.

It’s frustrating for my music to have to compete with outside noise during my commute, but turning up the volume a little higher than I normally would mostly solves that problem. Choosing between a pair of totally wireless earbuds that fits OK but sounds great and another that sounds good but fits great is easy, which is why I think Apple made the right call with the AirPods’ hard plastic design.

There are rumors that Apple is going to release more headphones in the future, and I have no doubt that they’ll sound even better than the AirPods you can get right now. I only hope that Apple doesn’t mess with the AirPods’ design too much, because they’ve solved a couple of the biggest problems facing totally wireless earbuds by breaking the rules.

 


https://www.zdnet.com/article/the-best-programming-language-for-data-science-and-machine-learning/

The best programming language for data science and machine learning

Hint: There is no easy answer, and no consensus either.

https://www.technologyreview.com/s/611673/google-wants-to-make-programming-quantum-computers-easier/

Google wants to make programming quantum computers easier

Its new open-source software will help developers experiment with the machines, including Google’s own super-powerful quantum processor.

Quantum computers are still in their infancy, but builders of the exotic machines want to encourage software developers to experiment with them. Programming the circuits on quantum machines is a real challenge. Instead of standard digital bits, which represent either 1 or 0, quantum computers use “qubits,” which can be in both states at once thanks to a phenomenon known as superposition. Qubits can also influence one another even if they’re not physically connected. Moreover, they stay in their delicate quantum state for no longer than the blink of an eye. Exploiting them requires completely different software, and only a small band of developers currently has the highly specialized knowledge to write such programs.

Google wants to help change that. It has just released Cirq, a software toolkit that lets developers create algorithms without needing a background in quantum physics. Cirq is an open-source initiative, which means anyone can access and modify the software. Google likens it to its popular TensorFlow open source toolkit that has made it easier to build machine-learning software. For now, developers can use Cirq to create quantum algorithms that run on simulators. But the goal is to have it help build software that will run on a wide range of real machines in the future.
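For a sense of what this looks like in practice, here is a minimal sketch of a Cirq program that builds and simulates a two-qubit circuit. The circuit is a generic textbook example (superposition plus entanglement), not anything Google-specific, and it uses only calls from Cirq’s public, documented API:

```python
import cirq

# Two qubits laid out on a line.
q0, q1 = cirq.LineQubit.range(2)

# A small circuit: put q0 into superposition, entangle it with q1, measure both.
circuit = cirq.Circuit([
    cirq.H(q0),                     # Hadamard: equal superposition of 0 and 1
    cirq.CNOT(q0, q1),              # entangles the two qubits
    cirq.measure(q0, q1, key='m'),  # read both qubits out
])

# Run on Cirq's built-in simulator, since real hardware access is limited.
simulator = cirq.Simulator()
result = simulator.run(circuit, repetitions=100)

# Expect roughly even counts of 0 (|00>) and 3 (|11>): the entangled
# qubits always agree when measured.
print(result.histogram(key='m'))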

The tech giant has also released OpenFermion-Cirq, a toolkit for creating algorithms that simulate molecules and properties of materials. Indeed, chemistry is among the applications in which quantum computers are likely to be of most use in the short term. One of the companies that worked with Google on Cirq’s development is Zapata Computing, whose early focus is on software for chemistry and materials (see “The world’s first quantum superstore—or so it hopes—is here”).

Another Google partner is Quantum Benchmark, which helps people assess the performance of different kinds of quantum hardware for various applications. “Cirq gives us an accessible platform for providing our tools to users,” says Joseph Emerson, the firm’s CEO and founder.

There are other open-source initiatives already under way that let developers build code for some existing quantum machines, but Google’s move is significant because the company has been at the forefront of developing powerful quantum processors, like its Bristlecone chip, which holds the record for number of qubits.

Researchers working in the quantum field say that sharing code openly will help foster a more vibrant developer community, just as it has in other areas of software. “We’re at such an early stage in the development of quantum computing that it’s to everyone’s advantage that things are done out in the open,” says Andrew Childs, who is co-director of the Joint Center for Quantum Information and Computer Science at the University of Maryland.

The other thing that will foster interest is greater accessibility to quantum computers themselves, many of which still reside in academic labs. Companies like IBM and Rigetti Computing have already made their machines accessible to people who want to run algorithms on them, and Google looks set to follow suit. It says it plans to make the Bristlecone processor available via the computing cloud, and that developers will be able to use Cirq to write programs for it.

https://www.nextbigfuture.com/2018/07/agni-fusion-has-innovative-hybrid-ion-beam-approach-to-commercial-fusion.html

AGNI Fusion has innovative hybrid ion beam approach to commercial fusion

Dr. James Conca covered AGNI for Forbes.

AGNI has the ability to remediate radioactive waste produced by fission reactors, including waste already in storage.

AGNI can also break down the radioactive waste produced in hospitals by the isotopes commonly used for medical imaging and radiation therapy.

The AGNI system can break down these radioactive materials through remediation: bombarding the waste in a remediation bay with energized, fast neutrons. Because fusion produces such high-energy neutrons, breaking the radioactive waste down further into stable elements suddenly becomes viable. This can be likened to rocket fuel, where a starter fuel is often needed to reach the temperatures required to ignite the main fuel. Fission reactions cannot achieve that energy, so the radioactive waste remains a toxic end product. Fusion with AGNI can break down these elements until they are safe and stable.

Breaking down waste would be an intermediate revenue generation goal that could help fund the full nuclear fusion system.

The AGNI Energy design combines the stability of magnetic containment with beam-to-target inertial fusion.

They will shoot a beam of fusing atoms onto a solid target. This will solve several physics problems and generate energy without producing a lot of neutrons.

There are many ways people are trying to get to nuclear fusion. Two of the main ways are:

– inertial confinement fusion aims to compress hot ions (plasma), heating them to conditions where fusion reactions are more likely. Specific approaches include laser fusion, beam fusion, fast ignition, and magnetized target fusion.

– magnetic confinement fusion aims to contain a hot plasma in a device with immensely strong magnetic fields. Specific approaches include the tokamak, stellarator, z pinch, and reversed field pinch.

Each method has challenges:

– inertial confinement struggles with the efficiency of the reactions relative to the energy put in, because the laser or ion beams themselves require huge amounts of energy to generate.

– magnetic confinement struggles with controlling and containing the plasma and keeping it stable long enough to sustain fusion.

AGNI Energy wants to combine the two main fusion methods for use in their device. AGNI focuses a beam of ions, which is half of the fuel, onto a solid target which is the other half of the fuel.


The AGNI fusion reactor uses both electric fields and magnetic fields, giving the nuclei a very short flight time before they hit the solid target, so the nuclei don’t need to be controlled very long before the fusion occurs.

The ion beam contains a mixture of deuterium and helium-3, with deuterium the dominant component of the beam. The target plate contains lithium-6, tritium, and boron-11. Because of pre-target fusion, more final products interact with the target plate than just deuterium and helium-3. Deuterium-helium-3 fusion produces protons that can then fuse with the boron-11 to produce three helium-4 ions.
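For reference, the reaction chain described above can be written out explicitly. These are standard published reactions and energy yields, not AGNI-specific figures:

```latex
% Primary beam-target reaction: deuterium + helium-3
D + {}^{3}\mathrm{He} \rightarrow {}^{4}\mathrm{He} + p \qquad (Q \approx 18.3\ \mathrm{MeV})

% Secondary reaction: the fast proton fuses with boron-11 (aneutronic)
p + {}^{11}\mathrm{B} \rightarrow 3\,{}^{4}\mathrm{He} \qquad (Q \approx 8.7\ \mathrm{MeV})
```

Note that neither of these two steps releases a neutron, which is the attraction of this fuel mix. The D-T reaction with the tritium in the target plate, by contrast, does produce fast neutrons (around 14 MeV), which is presumably the source of the remediation neutrons described earlier.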

AGNI fusion uses a series of five rings capable of varying several degrees of freedom: the electrostatic source diameter, the Z-axis position of each ring, and the output of both magnetic and electrostatic intensity. The containment method centers on shaping the beam dynamics toward convergence at the target plate, with the intent that the plasma generate a strong internal magnetic field, as seen in kink oscillations, increasing the likelihood of surpassing the Coulomb barrier in the target plate materials, which is necessary to fuse.

https://arstechnica.com/science/2018/07/ai-plus-a-chemistry-robot-finds-all-the-reactions-that-will-work/

AI plus a chemistry robot finds all the reactions that will work

Given a set of starting materials, it’ll figure out every reaction among them.

Simple robots have been part of chemistry for years. Credit: Greg Russ

Chemistry is a sort of applied physics, with the behavior of electrons and their orbitals dictating a set of rules for which reactions can take place and what products will remain stable. At a very rough level, the basics of these rules are simple enough that experienced chemists can keep them all in their brain and intuit how to fit together pieces in a way that ultimately produces the product they want. Unfortunately, there are some parts of the chemical landscape that we don’t have much experience with, and strange things sometimes happen when intuition meets a reaction flask. This is why some critical drugs still have to be purified from biological sources.

It’s possible to get more precise than intuition, but that generally requires full quantum-level simulations run on a cluster, and even these don’t always capture some of the quirks that come about because of things like choice of solvents and reaction temperatures or the presence of minor contaminants.

But improvements in AI have led to a number of impressive demonstrations of its use in chemistry. And it’s easy to see why this works; AIs can figure out their own rules, without the same constraints traditionally imparted by a chemistry education. Now, a team at Glasgow University has paired a machine-learning system with a robot that can run and analyze its own chemical reactions. The result is a system that can figure out every reaction that’s possible from a given set of starting materials.

Chemist in a fume hood

Lee Cronin, the researcher who organized the work, was kind enough to send along an image of the setup, which looks nothing like our typical conception of a robot (the researchers refer to it as “bespoke”). Most of its parts are dispersed through a fume hood, which ensures safe ventilation of any products that somehow escape the system. The upper right is a collection of tanks containing starting materials and pumps that send them into one of six reaction chambers, which can be operated in parallel.

The robot in question. MS = mass spectrometer; IR = infrared spectrometer. Credit: Lee Cronin

The outcomes of these reactions can then be sent on for analysis. Pumps can feed samples into an IR spectrometer, a mass spectrometer, and a compact NMR machine—the latter being the only bit of equipment that didn’t fit in the fume hood. Collectively, these can create a fingerprint of the molecules that occupy a reaction chamber. By comparing this to the fingerprint of the starting materials, it’s possible to determine whether a chemical reaction took place and infer some things about its products.
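The paper doesn’t include the comparison code, but the core idea — deciding whether the mixture’s fingerprint is still explained by the starting materials — can be sketched in a few lines of Python. Everything here (the least-squares blend, the 0.9 cutoff, the function names) is illustrative, not the Glasgow group’s implementation:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two spectra binned onto a common axis."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def reaction_occurred(mixture: np.ndarray, starting_materials: list,
                      threshold: float = 0.9) -> bool:
    """If the measured spectrum of the reaction mixture is no longer well
    explained by a best-fit blend of the starting-material spectra,
    something new was probably formed. The threshold is arbitrary."""
    basis = np.stack(starting_materials, axis=1)      # (bins, n_materials)
    coeffs, *_ = np.linalg.lstsq(basis, mixture, rcond=None)
    reconstruction = basis @ coeffs
    return cosine_similarity(mixture, reconstruction) < threshold
```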

All of that is a substitute for a chemist’s hands, but it doesn’t replace the brains that evaluate potential reactions. That’s where a machine-learning algorithm comes in. The system was given a set of 72 reactions with known products and used those to generate predictions of the outcomes of further reactions. From there, it started choosing reactions at random from the remaining list of options and determining whether they, too, produced products. By the time the algorithm had sampled 10 percent of the total possible reactions, it was able to predict the outcome of untested reactions with more than 80-percent accuracy.

And, since the earlier reactions it tested were chosen at random, the system wasn’t biased by human expectations of what reactions would or wouldn’t work.

Once it had built a model, the system was set up to evaluate which of the remaining possible reactions was most likely to produce products and prioritize testing those. The system could continue on until it reached a set number of reactions, stop after a certain number of tests no longer produced products, or simply go until it tested every possible reaction.
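In machine-learning terms, this is a classic active-learning loop: seed with random experiments, fit a model, then repeatedly test the candidate the model is most confident about. A minimal sketch under those assumptions, with a generic scikit-learn classifier standing in for the paper’s model (the feature encoding and the `run_experiment` hook are hypothetical):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def active_reaction_search(features, run_experiment, n_seed=72, n_rounds=200):
    """features: (n_reactions, n_features) array encoding each candidate reaction.
    run_experiment(i) -> bool: the robot actually runs reaction i and reports
    whether it produced a product. Assumes the random seed phase finds at
    least one reactive and one unreactive example."""
    n = len(features)
    untested = set(range(n))
    X, y = [], []

    # Seed phase: random picks, so the model isn't biased by human intuition.
    rng = np.random.default_rng(0)
    for i in rng.choice(n, size=n_seed, replace=False):
        X.append(features[i])
        y.append(run_experiment(int(i)))
        untested.discard(int(i))

    model = RandomForestClassifier(n_estimators=200)
    for _ in range(min(n_rounds, len(untested))):
        model.fit(np.array(X), np.array(y))
        # Prioritize the untested reaction most likely to produce a product.
        candidates = sorted(untested)
        probs = model.predict_proba(features[candidates])[:, 1]
        best = candidates[int(np.argmax(probs))]
        X.append(features[best])
        y.append(run_experiment(best))
        untested.discard(best)
    return model
```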

Neural networking

Not content with this degree of success, the research team went on to add a neural network that was provided with data from the research literature on the yield of a class of reactions that links two hydrocarbon chains. After training on nearly 3,500 reactions, the system had an error of only 11 percent when predicting the yield on another 1,700 reactions from the literature.
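As a rough illustration of that second model, here is what a yield regressor of this general shape might look like, with a generic multilayer perceptron. The placeholder random arrays stand in for the real featurized literature reactions, which are the hard part; nothing below reflects the paper’s actual architecture:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# Placeholder data: 3,500 literature reactions, each encoded as a 64-value
# feature vector, with a reported yield in percent. Real featurization of
# reagents, catalysts, and conditions is omitted here.
rng = np.random.default_rng(0)
X = rng.random((3500, 64))
y = rng.random(3500) * 100.0

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33,
                                                    random_state=0)
model = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500)
model.fit(X_train, y_train)
pred = model.predict(X_test)
print("mean absolute yield error (%):", np.abs(pred - y_test).mean())
```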

This system was then integrated with the existing test setup and set loose on reactions that hadn’t been reported in the literature. This allowed the system to prioritize not only by whether the reaction was likely to make a product but also how much of the product would be produced by the reaction.

All this, on its own, is pretty impressive. As the authors put it, “by realizing only 10 percent of the total number of reactions, we can predict the outcomes of the remaining 90 percent without needing to carry out the experiments.” But the system also helped them identify a few surprises—cases where the fingerprint of the reaction mix suggested that the product was something more than a simple combination of starting materials. These reactions were explored further by actual human chemists, who identified both ring-breaking and ring-forming reactions this way.

That last aspect really goes a long way toward explaining how this sort of capability will fit into future chemistry labs. People tend to think of robots as replacing humans. But in this context, the robots are simply taking some of the drudgery away from humans. No sane human would ever consider trying every possible combination of reactants to see what they’d do, and humans couldn’t perform the testing 24 hours a day without dangerous levels of caffeine anyway. The robots will also be good at identifying the rare cases where highly trained intuitions turn out to lead us astray about the utility of trying some reactions.

But for now, humans will be necessary to integrate this knowledge into useful chemistry, recognizing when a new or more efficient reaction makes a previously impractical process or unobtainable product easier to work with. There may come a time when AI helps out with that as well, but we don’t seem to be quite there yet.

Nature, 2018. DOI: 10.1038/s41586-018-0307-8  (About DOIs).

https://fstoppers.com/post-production/early-reports-suggest-macbooks-new-i9-processor-isnt-keeping-269922

Early Reports Suggest MacBook’s New i9 Processor Isn’t Keeping Up

The jury might still be out on this, but it’s not looking promising.

Dave Lee, a popular YouTuber with 1.4 million subscribers, has denounced Apple’s latest 15-inch MacBook Pros. He claims that the Intel Core i9 runs slower than the previous i7 models because the laptop underclocks the CPU when it gets hot. Unfortunately, that throttling takes it from a 2.9 GHz base speed down to 2.2 GHz — a far cry from the 4.8 GHz “Turbo Boost” speed Apple claims it can reach. Users on Reddit have been talking this over since last night, without Lee being proved wrong.

It’s worth noting that Apple doesn’t claim the chip will run at its best speeds all the time, only that it will run best when “workloads and system thermals allow.” It’s also hardly unusual for a laptop to slow things down when temperatures rise. For all we know, Lee’s office space could be a balmy 100 degrees. However, he also took issue with Dell’s XPS 15 laptop throttling the i9 processor.

One concern I have with his test is that he’s running Premiere Pro, which is generally considered to be poorly optimized for MacBooks and macOS in general. Perhaps Adobe isn’t playing well with the i9 and isn’t using every available core as it should. Nonetheless, the freezer test suggests that heat is definitely a major factor.

Zollotech did a brief test with FCPX, and the results weren’t promising, with the 2017 model beating the 2018. However, Austin Evans ran his own test a couple of days ago and found the CPU quite easily outperformed the 2016 model.

Has anybody else tested this? I haven’t been able to find many reliable firsthand accounts yet, and I don’t want to rely on Lee’s notoriety in the tech community as the be-all and end-all. I’d like to see more tests with Final Cut Pro X to see how much software plays a role in this issue.

https://phys.org/news/2018-07-closer-optical-artificial-neural-network.html

Researchers move closer to completely optical artificial neural network

July 19, 2018, Optical Society of America
Researchers have shown a neural network can be trained using an optical circuit (blue rectangle in the illustration). In the full network there would be several of these linked together. The laser inputs (green) encode information that is carried through the chip by optical waveguides (black). The chip performs operations crucial to the artificial neural network using tunable beam splitters, which are represented by the curved sections in the waveguides. These sections couple two adjacent waveguides together and are tuned by adjusting the settings of optical phase shifters (red and blue glowing objects), which act like ‘knobs’ that can be adjusted during training to perform a given task. Credit: Tyler W. Hughes, Stanford University

Researchers have shown that it is possible to train artificial neural networks directly on an optical chip. The significant breakthrough demonstrates that an optical circuit can perform a critical function of an electronics-based artificial neural network and could lead to less expensive, faster and more energy efficient ways to perform complex tasks such as speech or image recognition.

An artificial neural network is a type of artificial intelligence that uses connected units to process information in a manner similar to the way the brain processes information. Using these networks to perform a complex task, for instance voice recognition, requires the critical step of training the algorithms to categorize inputs, such as different words.

Although optical artificial neural networks were recently demonstrated experimentally, the training step was performed using a model on a traditional digital computer and the final settings were then imported into the optical circuit. In Optica, The Optical Society’s journal for high impact research, Stanford University researchers report a method for training these networks directly in the device by implementing an optical analogue of the ‘backpropagation’ algorithm, which is the standard way to train conventional neural networks.

“Using a physical device rather than a computer model for training makes the process more accurate,” said Tyler W. Hughes, first author of the paper. “Also, because the training step is a very computationally expensive part of the implementation of the neural network, performing this step optically is key to improving the computational efficiency, speed and power consumption of artificial networks.”

A light-based network

Although neural network processing is typically performed using a traditional computer, there are significant efforts to design hardware optimized specifically for neural network computing. Optics-based devices are of great interest because they can perform computations in parallel while using less energy than electronic devices.

In the new work, the researchers overcame a significant challenge to implementing an all-optical neural network by designing an optical chip that replicates the way that conventional computers train neural networks.

An artificial neural network can be thought of as a black box with a number of knobs. During the training step, these knobs are each turned a little, and then the system is tested to see if the performance of the algorithm improved.

“Our method not only helps predict which direction to turn the knobs but also how much you should turn each knob to get you closer to the desired performance,” said Hughes. “Our approach speeds up training significantly, especially for large networks, because we get information about each knob in parallel.”

On-chip training

The new training protocol operates on optical circuits with tunable beam splitters that are adjusted by changing the settings of optical phase shifters. Laser beams encoding information to be processed are fired into the optical circuit and carried by optical waveguides through the beam splitters, which are adjusted like knobs to train the neural network.

In the new training protocol, the laser is first fed through the optical circuit. Upon exiting the device, the difference from the expected outcome is calculated. This information is then used to generate a new light signal, which is sent back through the optical network in the opposite direction. By measuring the optical intensity around each beam splitter during this process, the researchers showed how to detect, in parallel, how the neural network performance will change with respect to each beam splitter’s setting. The phase shifter settings can be changed based on this information, and the process may be repeated until the neural network produces the desired outcome.
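Stripped of the optics, the loop described above has the same shape as ordinary gradient descent; the novelty is that the gradient comes from a physical measurement rather than from software differentiation. A schematic sketch, where `forward` and `measure_gradients` are hypothetical stand-ins for the chip itself:

```python
def train_photonic_network(forward, measure_gradients, phases,
                           inputs, targets, lr=0.01, epochs=100):
    """Schematic of the in-situ training protocol described above.

    forward(phases, x): send laser-encoded input x through the chip and
        read the output intensities at the detectors.
    measure_gradients(phases, x, error): inject the error signal backward
        through the chip and read, in parallel, the intensity around each
        beam splitter to estimate d(loss)/d(phase) per phase shifter.
    phases: NumPy array of current phase-shifter settings (the 'knobs').
    Both callables are hypothetical stand-ins for hardware operations.
    """
    for _ in range(epochs):
        for x, target in zip(inputs, targets):
            output = forward(phases, x)                   # forward pass
            error = output - target                       # deviation from expected outcome
            grads = measure_gradients(phases, x, error)   # optical "backward pass"
            phases = phases - lr * grads                  # nudge each knob a little
    return phases
```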

The researchers tested their training technique with optical simulations by teaching an algorithm to perform complicated functions, such as picking out complex features within a set of points. They found that the optical implementation performed similarly to a conventional computer.

“Our work demonstrates that you can use the laws of physics to implement computer science algorithms,” said Shanhui Fan, the paper’s senior author. “By training these networks in the optical domain, it shows that optical neural network systems could be built to carry out certain functionalities using optics alone.”

The researchers plan to further optimize the system and want to use it to implement a practical application of a neural network task. The general approach they designed could be used with various architectures and for other applications such as reconfigurable optics.


More information: T. W. Hughes, M. Minkov, Y. Shi, and S. Fan, “Training of photonic neural networks through in situ backpropagation and gradient measurement,” Optica 5, 864-871 (2018). DOI: 10.1364/OPTICA.5.000864