https://scitechdaily.com/breakthrough-in-search-for-safer-easier-way-to-deliver-vision-saving-gene-therapy-to-the-retina/

Breakthrough in Search For Safer, Easier Way to Deliver Vision-Saving Gene Therapy to The Retina

Vision Saving Gene Therapy

In experiments with rats, pigs and monkeys, Johns Hopkins Medicine researchers have developed a way to deliver sight-saving gene therapy to the retina. If proved safe and effective in humans, the technique could provide a new, more permanent therapeutic option for patients with common diseases such as wet age-related macular degeneration (AMD), and it could potentially replace defective genes in patients with inherited retinal disease.

The new approach, described in the August 13, 2019, issue of The Journal of Clinical Investigation, uses a small needle to inject harmless, genetically engineered viruses into the space between the white of the eye and the eye’s vascular layer, called the suprachoroidal space. From there, the virus can spread throughout the eye to deliver therapeutic genes to cells in the retina.

The gene therapy approach currently used to treat Leber congenital amaurosis, an inherited eye disorder, involves a surgical procedure to inject the gene-carrying virus under the retina. This procedure carries a high risk of patients developing cataracts, and a low but significant risk of retinal detachment and other vision-threatening complications.

Though only tested in animals up to this point, the new suprachoroidal injection technique is less invasive because it does not involve detaching the retina, and theoretically, it could be done on an outpatient basis, marking a major step toward making permanent vision-saving gene therapies safer and more accessible.

“The best time for patients with inherited retinal degeneration to receive gene therapy treatments is when they still have fairly good vision. However, at that time, they also have more to lose from complications. The ability to offer a safer, more convenient procedure would be a breakthrough,” says Peter Campochiaro, M.D., Eccles Professor of Ophthalmology and Neuroscience at the Johns Hopkins University School of Medicine and the Wilmer Eye Institute.

A subretinal injection gene therapy approach is also being tested in clinical trials for age-related macular degeneration, which is among the leading causes of irreversible and disabling vision loss in people over age 50, according to the National Eye Institute. An estimated 10 million Americans have age-related macular degeneration. In the disease’s more common “wet” form, abnormal blood vessels grow under the retina and leak vision-blocking fluids into the eye. The growth and leakage of the abnormal blood vessels is caused by excess production of a cell signal called vascular endothelial growth factor (VEGF).

Currently, eye specialists can stave off vision loss by injecting a protein into the eye that blocks VEGF, but these treatments have a limited life span, so patients must return to the clinic every four to six weeks for more injections to maintain their vision. Missed appointments can allow the abnormal blood vessels to grow, causing further vision loss.

“We find that repeated treatments, although effective, can be hard for patients to keep up with, and over time, they lose vision,” says Campochiaro.

However, a gene therapy could turn each cell in the retina into a little pharmaceutical factory that constantly produces anti-VEGF proteins, thereby continuously maintaining vision without repeated injections.

To test whether the suprachoroidal injection technique could effectively deliver gene therapies to the retina, the researchers first wanted to track whether it would allow the virus to reach the back of the eye. They injected a harmless form of adeno-associated virus, modified to carry a fluorescent marker, into the suprachoroidal space of the eyes of 10 rats. They used high-powered microscopes to track the glow across the retina and found that after a week, the virus had reached the entire retina.

Next, the researchers looked at whether this virus could deliver helpful genes. They loaded an anti-VEGF gene into their modified virus and injected it into the suprachoroidal space of 40 rats induced to develop a humanlike form of macular degeneration. For comparison, they used the conventional subretinal injection in 40 other rats.

The researchers found that the suprachoroidal injection technique performed just as well as the conventional subretinal approach, and was as effective and long lasting in delivering the vision-protecting anti-VEGF protein. By also performing these experiments in pigs and rhesus monkeys, the researchers confirmed that the suprachoroidal delivery method worked in larger animals’ eyes that are closer in size to human eyes. All yielded similar results.

While this gene therapy is promising, Campochiaro notes it may not be an option for people previously exposed to viruses similar to the one they used in these experiments, because their immune system might stop the virus before it can deliver its cargo into the retina’s cells. However, he believes that suprachoroidal injections may, one day, prove to be a viable option for a large number of patients with wet AMD and patients with inherited disorders caused by defective genes.

“Our hope is that with suprachoroidal injections, patients can just walk into a clinic and get their vision-saving treatment without worrying about many of the complications that come with subretinal injections,” says Campochiaro.

The research was supported by REGENXBIO Inc., the Alsheler-Durell Foundation, Per Bang-Jensen, Mr. and Mrs. Conrad Aschenbach and Mr. and Mrs. Andrew Marriott.

Other contributors to the research include Kun Ding, Jikui Shen, Zibran Hafiz, Sean Hackett, Raquel Lima e Silva, Mahmood Khan, Valeria Lorenc, Daiqin Chen, Rishi Chadha, Minie Zhang and Sherri Van Everen from Johns Hopkins Medicine, and Nicholas Buss, Michele Fiscella and Olivier Danos from REGENXBIO Inc.

Reference: “AAV8-vectored suprachoroidal gene transfer produces widespread ocular transgene expression” by Kun Ding, Jikui Shen, Zibran Hafiz, Sean F. Hackett, Raquel Lima e Silva, Mahmood Khan, Valeria E. Lorenc, Daiqin Chen, Rishi Chadha, Minie Zhang, Sherri Van Everen, Nicholas Buss, Michele Fiscella, Olivier Danos, and Peter A. Campochiaro, 13 August 2019, The Journal of Clinical Investigation.
DOI: 10.1172/JCI129085

https://scitechdaily.com/algorithm-uses-math-to-blend-musical-notes-seamlessly-video/

Algorithm Uses Math to Blend Musical Notes Seamlessly [Video]

Algorithm Automatically Produces a Portamento Effect

Algorithm enables one audio signal to glide into another, recreating the “portamento” effect of some musical instruments.

In music, “portamento” is a term that’s been used for hundreds of years, referring to the effect of gliding a note at one pitch into a note of a lower or higher pitch. But only instruments that can continuously vary in pitch — such as the human voice, string instruments, and trombones — can pull off the effect.

Now an MIT student has invented a novel algorithm that produces a portamento effect between any two audio signals in real-time. In experiments, the algorithm seamlessly merged various audio clips, such as a piano note gliding into a human voice, and one song blending into another. His paper describing the algorithm won the “best student paper” award at the recent International Conference on Digital Audio Effects.

The algorithm relies on “optimal transport,” a geometry-based framework that determines the most efficient ways to move objects — or data points — between multiple origin and destination configurations. Formulated in the 1700s, the framework has been applied to supply chains, fluid dynamics, image alignment, 3-D modeling, computer graphics, and more.

In work that originated in a class project, Trevor Henderson, now a graduate student in computer science, applied optimal transport to interpolating audio signals — or blending one signal into another. The algorithm first breaks the audio signals into brief segments. Then, it finds the optimal way to move the pitches in each segment to pitches in the other signal, to produce the smooth glide of the portamento effect. The algorithm also includes specialized techniques to maintain the fidelity of the audio signal as it transitions.
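As an illustration of the core idea (a minimal sketch only, not the published implementation, which also preserves phase and audio fidelity; the function name and grain count are assumptions), 1-D optimal transport between two magnitude spectra can be computed via their inverse cumulative distributions:

```python
import numpy as np

def displacement_interpolate(spec_a, spec_b, t, n_grains=4096):
    """Blend two magnitude spectra by 1-D optimal transport.

    In one dimension the optimal-transport map is the monotone
    rearrangement, so spectral "mass" is matched via the inverse CDFs
    of the two spectra and each grain of mass slides linearly from its
    source frequency bin to its destination bin as t goes from 0 to 1.
    """
    bins = np.arange(len(spec_a), dtype=float)
    pa = spec_a / spec_a.sum()
    pb = spec_b / spec_b.sum()
    cdf_a, cdf_b = np.cumsum(pa), np.cumsum(pb)
    # Sample both inverse CDFs at the same quantile levels.
    q = (np.arange(n_grains) + 0.5) / n_grains
    src = np.interp(q, cdf_a, bins)
    dst = np.interp(q, cdf_b, bins)
    # Each grain of spectral energy glides from spectrum A toward B.
    pos = (1.0 - t) * src + t * dst
    blended, _ = np.histogram(pos, bins=len(spec_a), range=(0, len(spec_a)))
    return blended / n_grains
```

At t = 0 this returns (a normalized copy of) the first spectrum, at t = 1 the second, and intermediate values of t glide each spectral peak smoothly between the two rather than cross-fading their amplitudes.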

“Optimal transport is used here to determine how to map pitches in one sound to the pitches in the other,” says Henderson, a classically trained organist who performs electronic music and has been a DJ on WMBR 88.1, MIT’s radio station. “If it’s transforming one chord into a chord with a different harmony, or with more notes, for instance, the notes will split from the first chord and find a position to seamlessly glide to in the other chord.”

According to Henderson, this is one of the first techniques to apply optimal transport to transforming audio signals. He has already used the algorithm to build equipment that seamlessly transitions between songs on his radio show. DJs could also use the equipment to transition between tracks during live performances. Other musicians might use it to blend instruments and voice on stage or in the studio.

Trevor Henderson MIT

Henderson’s co-author on the paper is Justin Solomon, an X-Consortium Career Development Assistant Professor in the Department of Electrical Engineering and Computer Science. Solomon — who also plays cello and piano — leads the Geometric Data Processing Group in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and is a member of the Center for Computational Engineering.

Henderson took Solomon’s class, 6.838 (Shape Analysis), which tasks students with applying geometric tools like optimal transport to real-world applications. Student projects usually focus on 3-D shapes from virtual reality or computer graphics. So Henderson’s project came as a surprise to Solomon. “Trevor saw an abstract connection between geometry and moving frequencies around in audio signals to create a portamento effect,” Solomon says. “He was in and out of my office all semester with DJ equipment. It wasn’t what I expected to see, but it was pretty entertaining.”

For Henderson, it wasn’t too much of a stretch. “When I see a new idea, I ask, ‘Is this applicable to music?’” he says. “So, when we talked about optimal transport, I wondered what would happen if I connected it to audio spectra.”

A good way to think of optimal transport, Henderson says, is finding “a lazy way to build a sand castle.” In that analogy, the framework is used to calculate the way to move each grain of sand from its position in a shapeless pile into a corresponding position in a sand castle, using as little work as possible. In computer graphics, for instance, optimal transport can be used to transform or morph shapes by finding the optimal movement from each point on one shape into the other.

Applying this theory to audio clips involves some additional ideas from signal processing. Musical instruments produce sound through vibrations of components that vary by instrument: violins use strings, brass instruments use air inside hollow bodies, and humans use vocal cords. These vibrations can be captured as audio signals, in which frequency corresponds to pitch and amplitude (peak height) to loudness.

Conventionally, the transition between two audio signals is done with a fade, where one signal is reduced in volume while the other rises. Henderson’s algorithm, on the other hand, smoothly slides frequency segments from one clip into another, with no fading of volume.

To do so, the algorithm splits any two audio clips into windows of about 50 milliseconds. Then, it runs a Fourier transform, which turns each window into its frequency components. The frequency components within a wind
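A minimal sketch of that windowing and Fourier-analysis step might look as follows (an illustrative example only; the 50-millisecond frame length, hop size, and Hann window are assumptions, and the published method additionally tracks phase so the blended frames can be resynthesized into audio):

```python
import numpy as np

def spectral_frames(signal, sample_rate, frame_ms=50.0):
    """Split a mono signal into ~50 ms Hann-windowed frames and return
    the magnitude spectrum of each frame."""
    frame_len = int(sample_rate * frame_ms / 1000.0)
    hop = frame_len // 2  # 50% overlap between consecutive frames
    window = np.hanning(frame_len)
    frames = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        chunk = signal[start:start + frame_len] * window
        frames.append(np.abs(np.fft.rfft(chunk)))  # frequency components
    return np.array(frames)

# Example: two seconds of a 440 Hz tone sampled at 44.1 kHz.
sr = 44100
t = np.arange(2 * sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
mags = spectral_frames(tone, sr)
print(mags.shape)  # (number of frames, frame_len // 2 + 1 frequency bins)
```

The optimal-transport interpolation from the earlier sketch would then be applied to each pair of corresponding frames from the two clips to produce the glide.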

https://hackaday.com/2019/09/29/usb-armory-mkii-a-usb-c-thumb-drive-based-linux-computer-for-pentesters/

USB ARMORY MKII: A USB-C THUMB DRIVE BASED LINUX COMPUTER FOR PENTESTERS

by: Tom Nardi | September 29, 2019

While it might look like a disrobed flash drive or RTL-SDR dongle, the USB Armory Mk II is actually a full-fledged open hardware computer built into the ubiquitous USB “stick” format. But more than just that, it’s optimized for security research and boasts a list of features that are sure to get the attention of any pentesters in the audience. Fine tuned thanks to the feedback developer [Inverse Path] received about the original version of the hardware, the Mk II promises to be the last word in secure mobile computing.

Compared to the original hardware, the most obvious change is the switch to USB-C. The previous USB Armory used traces on the PCB to plug directly into a USB Type-A port, but this time around [Inverse Path] has put a proper male connector on the front of the board. Nominally, the USB Armory is plugged into a host computer to provide it with power and a network connection, though it also has the ability to disguise itself as a storage or input device for more stealthy applications. There’s also a female USB-C port on the Mk II, which can be used to connect additional devices, a feature the previous version lacked.

The USB Armory Mk II is powered by an upgraded 900 MHz ARM Cortex-A7 processor, though it retains the same 512 MB of RAM from the previous version. Like the original, there’s a micro SD slot to hold the Linux operating system, but this time it’s supplemented with an onboard 16 GB eMMC chip. There’s even a physical switch that allows the user to choose which storage device they want to boot from. Other additions for the Mk II include Bluetooth connectivity, and a hardware true random number generator (TRNG).

We first brought you word of the original USB Armory back in 2014, and it’s always good to see an open hardware project thriving and iterating years later. While the $149 price tag arguably puts the MKII out of the tinkering budget for many of us, there’s clearly a market for niche devices like this and we can’t wait to see what [Inverse Path] comes up with next.

https://insideevs.com/news/372938/tesla-supercharger-canada-network/

Trans-Canadian Tesla Supercharger Network Is Taking Shape

As the image shows, crossing Canada will soon be easy.

It looks like Canada will be the next country equipped with its own coast-to-coast Supercharger network for Tesla motorists.

It is part of Tesla's largest Supercharger expansion effort to date, with a record 100 Supercharger stations currently under construction and another 92 stations permitted across global markets.

Nearly 25% of the expansion is happening in Canada as part of a massive effort to provide the nation with a coast-to-coast network. Some permits are still needed to complete this network, but the recent burst of expansion makes the goal clear.

The U.S. market, too, is seeing record Supercharger expansion, with 57 stations currently under construction and an additional 62 permitted, equipping the majority of the U.S. interstate system with Supercharging capacity.

Interstate travel is critical to the adoption of electric transportation and Tesla Motors is not leaving it to chance or time.

https://www.forbes.com/sites/lanceeliot/2019/09/29/teslas-ai-chips-are-rolling-out-but-they-arent-a-self-driving-panacea/#1e6455e35902

Tesla’s AI Chips Are Rolling Out, But They Aren’t A Self-Driving Panacea

According to several media reports, the new AI chips Tesla devised to achieve true self-driving car status have begun rolling out to older Tesla models that require retrofitting to replace the prior on-board processors.

Unfortunately, there has been some misleading reporting about those chips, a special type of AI computer processor that extensively supports Artificial Neural Networks (ANN), commonly referred to as Machine Learning (ML) or Deep Learning (DL).

Before I explore the over-hyped reporting, let me clarify that these custom-developed AI chips devised by Tesla engineers are certainly admirable, and the computer hardware design team deserves to be proud of what they have done. Kudos for their impressive work.

But such an acknowledgement does not imply that they have somehow achieved a singularity marvel in AI, nor does it mean they have miraculously solved the real-world problem of how to attain a true self-driving driverless car.

Not by a long shot.

And yet many in the media seem to think so, and at times have implied in a wide-eyed overzealous way that Tesla’s new computer processors have seemingly reached a nirvana of finally getting us to fully autonomous cars.

That’s just not the case.

Time to unpack the matter.

Important Context About AI Chips

First, let’s clarify what an AI chip consists of.

A conventional computer contains a core processor or chip that does the systems work when you invoke your word processor or spreadsheets or are loading and running an app of some kind.

In addition, most modern computers also have GPUs (graphics processing units), an additional set of processors or chips that aid the core processor by taking on the task of displaying the visual graphics and animation you might see on the screen of a device such as a desktop PC, a laptop, or a smartphone.

To use computers for Machine Learning or Deep Learning, it was realized that, rather than necessarily relying on a computer's normal core processors, GPUs actually tended to be better suited for ML and DL tasks.

This is because, by and large, the implementation of Artificial Neural Networks on today's computers is a massive exercise in numerical linear algebra, and GPUs are generally structured and devised for exactly that kind of number crunching.
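To make that concrete, here is a minimal, illustrative sketch (shapes and values are arbitrary) showing that a single neural-network layer is essentially one large matrix multiplication followed by a cheap elementwise nonlinearity, which is exactly the workload GPUs parallelize well:

```python
import numpy as np

# A single fully connected layer: inputs times weights, plus bias, then ReLU.
batch, n_in, n_out = 64, 1024, 512
x = np.random.randn(batch, n_in).astype(np.float32)   # a batch of inputs
W = np.random.randn(n_in, n_out).astype(np.float32)   # learned weights
b = np.zeros(n_out, dtype=np.float32)                  # learned biases

z = x @ W + b            # the bulk of the work: a dense matrix multiply
a = np.maximum(z, 0.0)   # cheap elementwise nonlinearity (ReLU)

# Stacking many such layers is mostly more matrix multiplies, which is why
# GPUs and GPU-like AI chips, with thousands of parallel multiply-accumulate
# units, handle neural-network inference so efficiently.
```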

AI developers that rely upon ML/DL computer-based neural networks fell in love with GPUs, using them for something they were not originally envisioned for but that happens to be a good marriage anyway.

Once it became apparent that souped-up GPUs would help advance today's kind of AI, chip developers realized there could be a huge market for their processors, and that it merited tweaking GPU designs to more closely fit the ML/DL task.

Tesla had initially opted to use off-the-shelf specialized GPU chips made by NVIDIA, doing so for the Tesla in-car on-board processing efforts of the Tesla version of ADAS (Advanced Driver-Assistance System), including and especially for their so-called Tesla AutoPilot (a naming that has generated controversy for being misleading about the actual driverless functionality to-date available in their so-equipped “FSD” or Full Self-Driving cars).

In April of this year, Elon Musk and his team unveiled a set of proprietary AI chips that were secretly developed in-house by Tesla (rumors about the effort had been floating for quite a while), and the idea was that the new chips would replace the use of the in-car NVIDIA processors.

The unveiling of the new AI chips was a key portion of the Investor Autonomy Day event that Tesla used as a forum to announce the future plans of their hoped-for self-driving driverless capability.

Subsequently, in late August, a presentation was made by Tesla engineers depicting additional details about their custom-designed AI chips, doing so at the annual Hot Chips conference sponsored by the IEEE that focuses on high performance computer processors.

Overall media interest about the Tesla AI chips was reinvigorated by the presentation and likewise further stoked by the roll-out that has apparently now gotten underway.

One additional important point — most people refer to these kinds of processors as “AI chips,” which I’ll do likewise for ease of discussion herein, but please do not be lulled into believing that these specialized processors are actually fulfilling the long-sought goal of being able to have full Artificial Intelligence in all of its intended facets.

At best, these chips or processors are simulating relatively shallow mathematically inspired aspects of what might be called neural networks, but it isn’t at all anything akin to a human brain. There isn’t any human-like reasoning or common-sense capability involved in these chips. They are merely computationally enhanced numeric calculating devices.

Brouhaha About Tesla’s New Chips

In quick recap, Tesla opted to replace the NVIDIA chips and did so by designing and now deploying their own Tesla-designed chips (the chips are being manufactured for Tesla by Samsung).

Let’s consider vital questions about the matter.

• Did it make sense for Tesla to go its own way and make specialized chips, or would it have been better off continuing to use someone else's off-the-shelf specialized chips?
• How do the Tesla custom chips compare with off-the-shelf specialized chips that do roughly the same thing?
• What do the AI chips achieve toward the aim of true self-driving cars?
• And so on.

Here are some key thoughts on these matters:

• Hardware-Only Focus

It is crucial to realize that discussing these AI chips is only a small part of a bigger picture, since the chips are a hardware-only focused element.

You need software, really good software, in order to arrive at a true self-driving car.

As an analogy, suppose someone comes out with a new smartphone that is incompatible with the thousands upon thousands of apps in the marketplace. Even if the smartphone is super-fast, you have the rather more daunting issue that there aren’t any apps for the new hardware.

Media salivating over the Tesla AI chips is missing the boat on asking about the software needed to arrive at driverless capabilities.

I’m not saying that having good hardware is not important, it is, but I think we all now know that hardware is only part of the battle.

The software to do true AI self-driving is the 500-pound gorilla.

There has yet to be any publicly revealed indication that the software for achieving true self-driving by Tesla has been crafted.

As I previously reported, the AI team at Tesla has been restructured and revamped, presumably in an effort to gain added traction towards the goal of having a driverless car, but so far no new indication has demonstrated that the vaunted aim is imminent.

• Force-fit Of Design

If you were going to design a new AI chip, one approach would be to sit down and come up with all of the vital things you’d like to have the chip do.

You would blue sky it, starting with a blank sheet, aiming to stretch the AI boundaries as much as feasible.

For Tesla, the hardware engineers were actually handed a circumstance that imposed a lot of severe constraints on what they could devise.

They had to keep the electrical power consumption within a boundary dictated by the prior designs of the Tesla cars, otherwise it would mean that the Teslas already in the marketplace would have to undergo a major retrofit to allow for a more power hungry set of processors. That would be costly and economically infeasible. Thus, right away the new AI chip would be hampered by how much power it could consume.

The new processors would have to fit into the physical space as already set aside on existing Tesla cars, meaning that the size and shape of the on-board system boards and computer box would have to abide by a strict “form factor.”

And so on.

This is oftentimes the downside of being a first-mover into a market.

You come out with a product when few others have something similar, it gains some success, and so you need to then try to advance the product as the marketplace evolves, yet you are also trapped by needing to be backward-compatible with what you already did.

Those that come along after your product has been underway have the latitude of not being ensnared by what came before, sometimes allowing them to outperform by having a clean slate to work with.

An example of first movers being overtaken is the rapid success of Uber and Lyft and the ridesharing phenomenon. The newer entrants ignored the existing constraints faced by taxis and cabs, allowing the brazen upstarts to eclipse those that were hampered by the past (rightly or wrongly).

Being first in something is not necessarily always the best, and sometimes those that come along later on can move in a more agile way.

Don’t misinterpret my remarks to imply that for self-driving cars you can wildly design AI chips in whatever manner you fancy. Obviously, there are going to be size, weight, power consumption, cooling, cost, and other factors that limit what sensibly can appropriately fit into a driverless car.

• Improper Comparisons

One of my biggest beefs about the media reporting has been the willingness to fall into a misleading and improper comparison of the Tesla AI chips to other chips.

Comparing the new with the old is not especially helpful, though it sounds exciting when you do so, and instead the comparison should be with what else currently exists in the marketplace.

Here’s what I mean.

Most keep saying that the Tesla AI chips are many times faster than the NVIDIA chips Tesla previously used (when they ought to be comparing them with NVIDIA's newer chips), implying that Tesla made a breathtaking breakthrough in this kind of technology, often quoting the number of trillions of operations per second, known as TOPS.

I won’t inundate you with the details herein, but suffice to say that the Tesla AI chips TOPS performance is either on par with other alternatives in the marketplace, or in some ways less so, and in selective other ways somewhat better, but it is not a hit-it-out-of-the-ballpark revelation.
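For context, a TOPS figure is just arithmetic: the number of multiply-accumulate (MAC) units, times the clock rate, times two operations per MAC. Using figures widely reported from Tesla's Autonomy Day and Hot Chips presentations (treat them here as assumptions), a back-of-the-envelope calculation looks like this:

```python
# Rough TOPS arithmetic for a neural-network accelerator.
macs_per_npu = 96 * 96       # a 96x96 multiply-accumulate array (reported figure)
clock_hz = 2.0e9             # ~2 GHz clock (reported figure)
ops_per_mac = 2              # one multiply plus one add per MAC per cycle
npus_per_chip = 2
chips_per_board = 2

tops_per_chip = macs_per_npu * clock_hz * ops_per_mac * npus_per_chip / 1e12
print(round(tops_per_chip, 1))                    # ~73.7 TOPS per chip
print(round(tops_per_chip * chips_per_board, 1))  # ~147.5 TOPS per dual-chip board
```

Comparable peak numbers exist for other accelerators on the market, which is why a raw TOPS figure alone, without utilization and software context, says little about real-world advantage.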

Bottom line: I ask that the media stop making inappropriate comparisons between the Tesla AI chips and the NVIDIA chips Tesla previously used; it just doesn't make sense, it is misleading to the public, it is unfair, and it really shows ignorance about the topic.

Another pet peeve is the tossing around of big numbers to impress the non-initiated, such as touting that the Tesla AI chips consist of 6 billion transistors.

Oh my gosh, 6 billion seems like such a large number and implies something gargantuan.

Well, there are GPUs that already have 20 billion transistors.

I’m not denigrating the 6 billion, and only trying to point out that those quoting the 6 billion do so without offering any viable context and therefore imply something that isn’t really the case.

For those readers that are hardware types, I know and you know that trying to make a comparison by the number of transistors is a rather problematic exercise anyway, since it can be an apples-to-apples or an apples-to-oranges kind of comparison, depending upon what the chip is designed to do.

• First Gen Is Dicey

Anybody that knows anything about chip design can tell you that the first generation of a newly devised chip is oftentimes a rocky road.

There can be a slew of latent errors or bugs (if you prefer, we can be gentler in our terminology and refer to those aspects as quirks or the proverbial tongue-in-cheek “hidden features”).

Like the first version of any new product, the odds are that it will take a shakeout period to ferret out what might be amiss.

In the case of chips, since the design is baked into silicon and not readily changeable, there are sometimes software patches used to deal with hardware issues, and then in later versions of the chip you might make the needed hardware alterations and improvements.

This brings up the point that by choosing to make its own AI chips, rather than using an off-the-shelf approach, Tesla puts itself in the unenviable position of having a first-generation part and needing to figure out on its own whatever gaffes those new chips might have.

Typically, an off-the-shelf commercially available chip is going to have not just the original maker looking at it, but will also have those that are buying and incorporating the processor into their systems looking at it too. The more eyes, the better.

The Tesla proprietary chips are presumably only being scrutinized and tested by Tesla alone.

• Proprietary Chip Woes

Using your own self-designed chips has a lot of other considerations worth noting.

At Tesla, a significant amount of cost and attention would have been devoted to devising the AI chips.

Was that cost worth it?

Was the diverted attention that might have gone to other matters a lost opportunity cost?

Plus, Tesla not only had to bear the original design cost, they will have to endure the ongoing cost to upgrade and improve the chips over time.

This is not a one-time only kind of matter.

It would seem unlikely and unwise for Tesla to sit on this chip and not advance it.

Advances in AI chips are moving at a lightning-like pace.

There are also labor pool considerations.

Having a proprietary chip usually means that you have to grow your own specialists to be able to develop the specialized software for it. You cannot readily find those specialists in the marketplace per se, since they won’t know your proprietary stuff, whereas when you use a commercial off-the-shelf chip, the odds are that you can find expert labor for it since there is an ecosystem surrounding the off-the-shelf processor.

I am not saying that Tesla was mistaken per se to go the proprietary route, and only time will tell whether it was a worthwhile bet.

By having their own chip, they can potentially control their own destiny, rather than being dependent upon an off-the-shelf chip made by someone else and forced down that chip maker's path. The other side of the coin is that they now find themselves squarely in the chip design and upgrade business, in addition to the car-making business.

It’s a calculated gamble and a trade-off.

From a cost perspective, it might or might not be a sensible approach, and those that keep trying to imply that the proprietary chip is a lesser cost strategy are likely not including the full set of costs involved.

Be wary of those who make such off-the-cuff cost claims.

• Redundancy Assertions

There has been media excitement about how the Tesla AI chips supposedly have a robust redundancy capability, which certainly is essential for a real-time system that involves the life-and-death aspects of driving a car.

So far, the scant details revealed seem to be that two identical AI chips run in parallel, and if one chip disagrees with the other, the current assessment of the driving situation and the planned next step are discarded, allowing the next “frame” to be captured and analyzed.
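As a minimal, hypothetical sketch of the behavior being described (purely illustrative; this is not Tesla's actual implementation), the compare-and-discard logic might look like this:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Plan:
    steer: float   # commanded steering angle
    brake: float   # commanded braking force

def lockstep_step(plan_a: Plan, plan_b: Plan, last_agreed_plan: Plan) -> Plan:
    """Compare the plans produced by two redundant chips for one frame.

    If they agree, commit the new plan; if they disagree, discard the
    frame and keep executing the last agreed plan until the next frame.
    """
    if plan_a == plan_b:
        return plan_a
    return last_agreed_plan  # disagreement: punt the decision down the field

# If disagreement recurs frame after frame, this logic keeps returning the
# stale plan and never makes the new decision, which is the concern below.
```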

On the surface, this might seem dandy to those that haven’t developed fault-tolerant real-time systems before.

There are serious and somber issues to consider.

Presumably, on the good side, if one of the chips experiences a foul hiccup, it causes the identical chip to be in disagreement, and because the two chips don’t agree, the system potentially avoids undertaking an inappropriate action.

But, realize that the ball is simply being punted further down-the-field, so to speak.

This has downsides.

Suppose the oddball quirk isn’t just a single momentary fluke, and instead recurs, over and over.

Does this mean that both chips are going to continually disagree and therefore presumably keep postponing the act of making a driving decision?

Imagine you were driving a car and kept postponing a vital driving decision, such as whether to swerve to avoid a fully loaded tanker truck stranded in the roadway ahead.

Not making a decision is not necessarily the best driving strategy.

Another design consideration is the assertion that both chips have to agree.

Well, suppose both chips make the same bad choice, which could happen; since they are identical and presumably going to agree on that wrongly selected choice, you have a reinforced foul-up, based simply on the fact that both agreed to it.

This is why sometimes you purposely create an additional redundant system that is separate and purposely not identical in its making, trying to overcome the chances of a flaw inside a merely repeated or duplicate system.

Yet another facet is the need for a kind of self-awareness capability, namely that if the two chips disagree, why did they disagree?

And, equally or perhaps more importantly, you would be best advised to find a means to learn something valuable from the disagreements that occur, improving the chips' ability to agree when they should no longer be disagreeing.

I can go on-and-on (see my article on fault tolerance for AI self-driving cars, and my piece too about arguing machines and autonomous cars).

I think you get the gist that there is a lot more to proper redundancy, and media outlets touting how Tesla has opted to do it are offering an ill-informed opinion without any proper basis for asserting that the design is robust.

It might be, and I'm not saying that it isn't; I'm only saying that the media shouldn't be alluding to something we don't yet know has strong legs.

Conclusion

Some have said that it is a gutsy move by Tesla to have gone the self-designed custom AI chip route for their self-driving car capabilities.

Was it a smart and business savvy choice?

Was it a vanity decision?

Will it turn out to be their best decision or their worst decision?

Overall, it is one of those bet-the-company kind of gambles, since their ability to achieve true self-driving driverless cars rests predominantly on that decision.

You might say they’ve placed all of their chips on these new AI chips.

https://www.nature.com/articles/d41586-019-02853-5

Microbiome chemistry gains fresh focus

The tools of chemical biology, genomics and data mining can yield insights into the metabolites of the microbiome.

Microbes, like these oral bacteria, can profoundly influence host physiology. Credit: Steve Gschmeissner/SPL

Studies of the microorganisms that live on and inside animals’ bodies have long relied on DNA sequencing, which can reveal which species abound and how these microbial communities respond to their environment. Now, the analytical methods of chemical biology, combined with genomics and computing techniques, are giving researchers insights into what these microbes are actually doing, biochemically speaking. Using mass spectrometry and a growing suite of databases and bioinformatics tools to analyse the data, some labs are focusing on substances produced as the microbes metabolize food. These ‘metabolites’ serve not only as markers for charting health and disease, but also as engines of physiological change [1].

The metabolites can influence the biology of the host, and not just where the microbial communities are resident. Some such compounds reach high levels in the blood, with concentrations that can vary by more than an order of magnitude between individuals, says Michael Fischbach, a microbiologist at Stanford University in California. “These are chemicals we should know more about, because they could underlie biological differences among people.”

Metabolomics — as the study of metabolites is known — is easier said than done, however. “In any given metabolomics run, we’ll detect thousands of metabolites,” says Erica Majumder, a biochemist at the State University of New York College of Environmental Science and Forestry in Syracuse, New York, who studies sulfur metabolism in gut microbes.

When researchers were just starting to analyse metabolites, using a technique called liquid chromatography–mass spectrometry (LC–MS), identifying these biomolecules could take months of work. “It was really an incredibly frustrating process,” says biochemist Gary Siuzdak, whose team at Scripps Research in La Jolla, California, published one of the earliest LC–MS metabolomics papers [2], in 1995.

Since then, improved instrumentation and analytical tools have shaved that time considerably. Siuzdak’s lab created METLIN, a database of tandem mass spectra — which reveal structural details of molecular fragments — on more than half a million metabolites and other molecules. The lab also developed XCMS, an online platform for processing LC–MS data.

Another tool, Global Natural Product Social Molecular Networking, was created by chemist Pieter Dorrestein and his colleagues at the University of California, San Diego. It provides crowdsourced mass-spectrometry data that researchers can use to identify metabolites when official reference standards are not available. Although much work remains to be done, Siuzdak says that such tools make it possible to identify some metabolites in seconds. In 2016, fewer than 2% of mass-spectrometry signals could be matched to known metabolites, Dorrestein says. That number has now increased two- to threefold.
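As a toy illustration of what such matching involves (not the actual METLIN or GNPS algorithms, and the reference masses below are placeholder values), an observed peak can be annotated by comparing its mass-to-charge ratio against a reference table within a parts-per-million tolerance:

```python
# Hypothetical reference m/z values; real tools match against curated
# tandem spectra and large databases, not a single mass per compound.
REFERENCE_MZ = {
    "metabolite_A": 180.0634,
    "metabolite_B": 205.0972,
    "metabolite_C": 166.0863,
}

def annotate(observed_mz, tolerance_ppm=10.0):
    """Return candidate metabolites whose reference m/z lies within a
    parts-per-million window of the observed peak."""
    hits = []
    for name, ref_mz in REFERENCE_MZ.items():
        ppm_error = abs(observed_mz - ref_mz) / ref_mz * 1e6
        if ppm_error <= tolerance_ppm:
            hits.append((name, round(ppm_error, 2)))
    return hits

print(annotate(180.0630))  # matches metabolite_A within a few ppm
print(annotate(250.0000))  # no match: an unidentified signal
```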

More ways to explore

Genomics techniques are also opening up lines of exploration. One key question, addressed in two studies by Maria Zimmermann-Kogadeeva, a computational biologist at the European Molecular Biology Laboratory in Heidelberg, Germany, is how the microbiome influences drug metabolism in its host.

In the first study [3], conducted when she was a postdoc at Yale University in New Haven, Connecticut, Zimmermann-Kogadeeva and her colleagues looked at the antiviral drug brivudine, from which gut microbes produce a toxic metabolite. Zimmermann’s team gave brivudine to wild-type mice or mice that lack microbiota, then measured the concentration of the drug and its metabolite over time. After identifying the microbial strains that metabolized the drug most rapidly, they systematically deactivated 2,350 bacterial genes to determine the enzyme responsible.

Next, the researchers recolonized ‘germ-free’ mice with bacteria lacking that enzyme. That enabled them to build a pharmacokinetic model of host–microbiome drug metabolism, an approach that could be used to estimate the microbial contribution to the digestion of foods, other drugs or endogenous metabolites.
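To illustrate the shape of such a model (a minimal sketch with arbitrary rate constants, not the study's actual pharmacokinetic parameters), one can compare how drug and metabolite levels evolve with and without microbial conversion:

```python
def simulate(k_host, k_microbe, dose=1.0, dt=0.01, hours=24.0):
    """Toy two-pool kinetics: the drug is eliminated by host processes
    (rate k_host) and, when gut bacteria are present, also converted to
    a metabolite (rate k_microbe). Simple Euler integration."""
    drug, metabolite = dose, 0.0
    trajectory = []
    for step in range(int(hours / dt)):
        converted = k_microbe * drug * dt
        eliminated = k_host * drug * dt
        drug -= converted + eliminated
        metabolite += converted
        trajectory.append((round(step * dt, 2), drug, metabolite))
    return trajectory

# Germ-free mice: no microbial conversion; colonized mice: conversion on.
germ_free = simulate(k_host=0.2, k_microbe=0.0)
colonized = simulate(k_host=0.2, k_microbe=0.4)
print(germ_free[-1])  # drug remaining, no microbial metabolite
print(colonized[-1])  # less drug remaining, metabolite accumulates
```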

Organoids can tease apart microbial influences. Credit: Prisca Liberali & Denise Serra

Zimmermann and her team have also tried to quantify the microbiome’s impact on oral pharmaceuticals more broadly. In a screen of 76 gut microbes and 271 oral drugs, they found that all microbes metabolized some of the drugs, and that 65% of the drugs studied were metabolized by at least one microbial strain [4]. The team then created libraries of bacteria, each expressing small pieces of the genomes of interest, to identify bacterial genes responsible for this metabolic activity, which they quantified using mass spectrometry.

Another question concerns the impact of microbial metabolites on the host. Indolepropionic acid (IPA), for instance, a substance that can alter the permeability of the intestinal wall, is made exclusively by gut bacteria such as Clostridium sporogenes from dietary tryptophan. The metabolic pathway involved, however, was unclear, until the steps were pinned down using bioinformatics, gene knockouts and mass spectrometry by a team led by Fischbach and Stanford colleague Justin Sonnenburg [5]. In a subsequent preprint [6], the team has described a CRISPR–Cas9-based system for toggling the production of bacterial metabolites, and used this to uncover a role for certain metabolites in host immunity.

Researchers are also addressing metabolite impact using organoids — lab-grown tissues that are akin to simplified organs. Stem-cell biologist Scott Magness and bioengineer Nancy Allbritton, both at the University of North Carolina, Chapel Hill, have developed a system for analysing 15,000 organoids grown in individual wells — all fitting within a square the size of a postage stamp [7]. The team built the platform using off-the-shelf and 3D-printed components, and set up an automated monitoring system using microscopy and computational image analysis. “You’re never going to get a grad student or postdoc to count 15,000 wells,” says Magness.

The researchers used another automated system to inject bacteria from healthy donor stool samples into the organoids, at a rate of some 90 organoids per hour (manual injection would have treated only a dozen organoids per hour). By injecting a fluorescent dye alongside the bacteria, the researchers could tell whether microbial metabolites were disrupting gut-barrier function [8].

They also demonstrated that the system could support the growth of anaerobic microbes, which predominate in the human gut. “We showed you could inject complex communities of bacteria and they would maintain a stable community over a couple days,” Magness says.

A mine of information

Such tools can help tease apart the chemical activity of the microbiome. But to exploit and understand the metabolome, researchers also need to make use of tools such as data mining. A web tool called Metabolite Annotation and Gene Integration (MAGI), for instance, uses known biochemical pathways to generate a metabolite–gene association score, helping to correlate genetic sequences with metabolomics data [9]. “Identifying metabolites is very challenging. Likewise, identifying the function of a gene in a genome is often ambivalent,” says MAGI developer Trent Northen at Lawrence Berkeley National Laboratory in California. “MAGI recognizes that metabolomic and genomic data are orthogonal, and puts those pieces of information together to help identify metabolites and identify genes.”

Such tools can also help researchers home in on what’s important in the research literature, Siuzdak says. “It’s a new technology that’s allowing us to decipher the metabolomics data more quickly.” In a paper under review, Majumder describes a strategy to mine the scientific literature for clues that predict metabolite functions in specific biological contexts. She has used this to identify metabolites that might eventually help to reverse the neurodegeneration seen in multiple sclerosis. Some papers that the tool pulled up “were ones we never would have found from traditional searching, and gave us direct evidence from the literature to interpret what we saw happening in our system”, she says.

Nature 573, 615-616 (2019)

doi: 10.1038/d41586-019-02853-5

https://interestingengineering.com/human-machine-collaboration-work-in-the-age-of-artificial-intelligence

Human + Machine Collaboration: Work in the Age of Artificial Intelligence

Human and Machine collaboration reimagines processes with AI, letting humans work more like humans and less like robots.

In this age of Artificial Intelligence (AI), we are witnessing a transformation in the way we live, work, and do business. From robots that share our environment and smart homes to supply chains that think and act in real-time, forward-thinking companies are using AI to innovate and expand their business more rapidly than ever.

Indeed, this is a time of change and change happens fast. Those able to understand that the future includes living, working, co-existing, and collaborating with AI are set to succeed in the coming years. On the other hand, those who neglect the fact that business transformation in the digital age depends on human and machine collaboration will inevitably be left behind.

Humans and machines can complement each other, resulting in increased productivity. This collaboration could increase revenue by 38 percent by 2022, according to Accenture Research. At least 61 percent of business leaders agree that the intersection of human and machine collaboration is going to help them achieve their strategic priorities faster and more efficiently.

Human and machine collaboration is paramount for organizations. Having the right mindset for AI means being at ease with the concept of human + machine, leaving the mindset of human vs. machine behind. Thanks to AI, factories now require a little more humanity, and AI is boosting the value of engineers and manufacturers.

Business transformation in the era of AI

The emergence of AI is creating brand new roles and opportunities for humans up and down the value chain. From workers in the assembly line and maintenance specialists to robot engineers and operations managers, AI is regenerating the concept and meaning of work in an industrial setting.

According to Accenture‘s Paul Daugherty, Chief Technology and Innovation Officer, and H. James Wilson, Managing Director of Information Technology and Business Research, AI is transforming business processes in five ways:

  • Flexibility: Moving from rigid manufacturing processes, with automation done in the past by dumb robots, to smart, individualized production that follows real-time customer choices brings flexibility to businesses. This is particularly visible in the automotive manufacturing industry, where customers can customize their vehicle at the dealership. They can choose everything from dashboard components to the seat leather (or vegan leather) to tire valve caps. At Stuttgart’s Mercedes-Benz assembly line, for instance, no two vehicles are the same.
  • Speed: Speed is critically important in many industries, including finance. Detecting credit card fraud on the spot ensures that a fraudulent transaction is not approved, sparing the cardholder the time and headaches that come when fraud is caught too late. According to Daugherty and Wilson, HSBC Holdings developed an AI-based solution that improves the speed and accuracy of fraud detection. The solution can monitor millions of transactions daily, seeking subtle patterns that can signal fraud (a minimal sketch of this kind of per-cardholder pattern check appears after this list). This type of solution is great for financial institutions, yet it needs human collaboration to be continually updated. Without those updates, the algorithms would soon become useless for combating fraud. Data analysts and financial fraud experts must keep an eye on the software at all times to ensure the AI solution stays at least one step ahead of criminals.
  • Scale: To accelerate its recruiting evaluation and improve diversity, Unilever adopted an AI-based hiring system that assesses candidates’ body language and personality traits. Using this solution, Unilever was able to broaden its recruiting scale; job applicants doubled to 30,000, and the average time to reach a hiring decision fell to four weeks. The process used to take up to four months before the adoption of the AI system.
  • Decision Making: It is no secret that the best decisions people make are based on vast amounts of specific, tailored information. Using machine learning and AI, huge amounts of data can quickly be put at the fingertips of workers on the factory floor, or of service technicians solving problems out in the field. Data previously collected and analyzed provides invaluable information that helps humans solve problems much faster, or even prevent them before they happen. Take the case of GE and its Predix application. The solution uses machine-learning algorithms to predict when a specific part in a specific machine might fail, and alerts workers to potential problems before they become serious. In many cases, GE could save millions of dollars thanks to this technology combined with fast human action.
  • Personalization: AI makes individually tailored, on-demand brand experiences possible at great scale. Music streaming service Pandora, for instance, applies AI algorithms to generate personalized playlists based on listeners’ preferences in songs, artists, and genres. AI can use data to personalize anything and everything, delivering a more enjoyable user experience and bringing marketing to a new level.
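As referenced in the Speed example above, here is a minimal, hypothetical sketch of a per-cardholder pattern check (a toy z-score rule; production systems such as HSBC's combine many more signals and learned models):

```python
from statistics import mean, stdev

def is_suspicious(history, new_amount, z_threshold=3.0):
    """Flag a transaction whose amount deviates strongly from the
    cardholder's own spending history."""
    if len(history) < 2:
        return False                      # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_amount != mu
    return abs(new_amount - mu) / sigma > z_threshold

past = [12.50, 48.00, 23.75, 31.20, 18.90, 27.40]
print(is_suspicious(past, 29.99))    # False: consistent with past spending
print(is_suspicious(past, 1499.00))  # True: flagged for human review
```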

AI will create new roles and opportunities 

Of course, some roles will come to an end, as has happened throughout human history every time there has been a technological revolution. However, the shift toward human and machine collaboration requires the creation of new roles and the recruiting of new talent; it is not just a matter of implementing AI technology. We also need to remember that there is no evolution without change.

Robotics and AI will replace some jobs, liberating humans for other kinds of tasks, many of which do not yet exist, just as many of today’s positions and jobs did not exist a few decades ago. Since 2000, the United States has lost five million manufacturing jobs. However, Daugherty and Wilson think that things are not as clear cut as they might seem.

In the United States alone, around 3.4 million manufacturing job openings will need to be filled. One reason for this is the wave of Baby Boomer retirements.

Re-skilling: Developing fusion skills

Re-skilling is now paramount and applies to everyone who wishes to remain relevant. Paul Daugherty recommends that enterprises help existing employees develop what he calls fusion skills.

In their book Human + Machine: Reimagining Work in the Age of AI, a must-read for business leaders looking for a practical guide on adopting AI into their organization, Paul Daugherty and H. James Wilson identify eight fusion skills for the workplace:

Rehumanizing time: People will have more time to dedicate toward more human activities, such as increasing interpersonal interactions and creativity.

Responsible normalizing: It is time to normalize the purpose and perception of human and machine interaction as it relates to individuals, businesses, and society as a whole.

Judgment integration: A machine may be uncertain about something or lack the necessary business or ethical context to make decisions. In such cases, humans must be prepared to sense where, how, and when to step in and provide input.

Intelligent interrogation: Humans simply can’t probe massively complex systems or predict interactions between complex layers of data on their own. It is imperative to have the ability to ask machines the right smart questions across multiple levels.

Bot-based empowerment: A variety of bots are available to help people be more productive and become better at their jobs. Using the power of AI agents can extend humans’ capabilities, reinvent business processes, and even boost a person’s professional career.

Holistic (physical and mental) melding: In the age of human and machine fusion, holistic melding will become increasingly important. The full reimagination of business processes only becomes possible when humans create working mental models of how machines work and learn, and when machines capture user-behavior data to update their interactions.

Reciprocal apprenticing: In the past, technological education has gone in one direction: People have learned how to use machines. But with AI, machines are learning from humans, and humans, in turn, learn again from machines. In the future, humans will perform tasks alongside AI agents to learn new skills, and will receive on-the-job training to work well within AI-enhanced processes.

Relentless reimagining: This hybrid skill is the ability to reimagine how things currently are—and to keep reimagining how AI can transform and improve work, organizational processes, business models, and even entire industries.

In Human + Machine, the authors propose a continuous circle of learning, an exchange of knowledge between humans and machines. Humans can work better and more efficiently with the help of AI. According to the authors, in the long term, companies will start rethinking their business processes, and as they do, they will create the need for new human roles in these new ways of doing business.

They believe that “before we rewrite the business processes, job descriptions, and business models, we need to answer these questions: What tasks do humans do best? And, what do machines do best?” The transfer of jobs is not simply one way. In many cases, AI is freeing up creativity and human capital, letting people work more like humans and less like robots.

Given these paramount questions and the concepts proposed by Daugherty and Wilson, giving them some thought is crucial when deciding, as a business leader, the best strategy for your organization to change and adapt in the age of AI.

The authors highlight how embracing the new rules of AI can benefit businesses as they reimagine processes around an exchange of knowledge between humans and machines.