https://www.teslarati.com/tesla-cyberpunk-pickup-truck-predictions-video/

Tesla ‘Cyberpunk’ Pickup Truck predictions: Range, towing capacity, and more


Elon Musk’s “Cyberpunk” Tesla Pickup Truck is set to be unveiled this coming November, and the electric vehicle community could not be more excited. Musk, after all, has hyped the vehicle, hinting that it will start at a reasonable price of $49,000 and be the company’s “best product ever.” Tesla has been remarkably good at keeping the truck’s specs secret, which has only encouraged the EV community to speculate about the features and specs of the highly-anticipated Tesla Pickup Truck.

Tesla owner-enthusiast Sean Mitchell recently shared his expectations for the upcoming vehicle, and while they are but speculations, they are rooted in information that the electric car maker and CEO Elon Musk have shared in the past. Other speculations are based on Tesla’s current technologies, as well as the company’s recent updates to its operations.

The Tesla Pickup Truck is meant to be a disruptor, just like the Model 3 and the Model S before it. With this in mind, there is a good chance that Tesla will put its best technologies in the vehicle. Mitchell believes that the truck will have a battery pack between 150 and 200 kWh, which should give it a range of about 400 miles or more. This is something that Musk himself has mentioned in the past, with the CEO noting that the vehicle will have 400-500 miles of range per charge.

These figures might seem optimistic, but if one were to consider the innovations offered by Maxwell Technologies to Tesla, these specs would be more than plausible. Of course, being a new vehicle, the “Cyberpunk” truck will most definitely be capable of charging at 250 kW using the Supercharger V3 Network. This should allow the upcoming pickup to take advantage of Tesla’s fastest charging solution out of the box.
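As a rough sanity check on the range and charging figures above, the arithmetic is simple. This is only a sketch: the 500 Wh/mile consumption figure is an assumed efficiency for a heavy electric truck, not anything Tesla has confirmed.

```python
# Back-of-the-envelope math for the rumored pack sizes and Supercharger V3 rate.
WH_PER_MILE = 500  # assumed consumption for a large electric pickup (not official)

def estimated_range_miles(pack_kwh: float) -> float:
    """Range implied by a given pack size at the assumed efficiency."""
    return pack_kwh * 1000 / WH_PER_MILE

def miles_added(charge_kw: float, minutes: float) -> float:
    """Miles of range added by charging at a given power for a given time."""
    kwh_added = charge_kw * minutes / 60
    return kwh_added * 1000 / WH_PER_MILE

print(estimated_range_miles(200))  # a 200 kWh pack implies 400.0 miles
print(miles_added(250, 15))        # 15 minutes at 250 kW adds 125.0 miles
```

At a more optimistic 400 Wh/mile, the same 200 kWh pack would imply 500 miles, the top of the range Musk has quoted.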

Since the Tesla Pickup Truck is meant to disrupt, the vehicle will most likely have an industry-leading towing capacity as well. Mitchell estimates that the vehicle will have a 20,000-30,000-lb towing capacity, on account of Elon Musk’s tendency to equip his electric cars with specs that far exceed those of ICE competitors. Seeing as Musk has previously joked that the vehicle could tow 300,000 lbs, a 30,000-lb towing capacity definitely seems feasible.

True to the Tesla brand, the Cybertruck will likely be very powerful as well. The Tesla owner-enthusiast noted that the Silicon Valley-based company will probably leapfrog competitors like Rivian when it comes to acceleration and horsepower; thus, it is possible for the truck to have a sub-3-second 0-60 mph time and about 800-1,000 hp. These specs exceed those of the well-received Rivian R1T all-electric pickup, which will likely beat the Tesla truck to market.

Mitchell also made an excellent point about the vehicle’s design. During the Tesla Semi’s unveiling, Musk mentioned that the electric car maker is developing a type of Armor Glass that is far more durable and far less prone to breaking. This should enable Tesla to use a generous amount of glass in the Cyberpunk truck’s design, allowing the company to equip the vehicle with a durable panoramic windshield. This does seem to be in line with Musk’s statements about the vehicle being a Blade Runner cyberpunk truck that looks a bit like an armored personnel carrier from the future.

Watch Sean Mitchell’s recent take on the Tesla Pickup Truck in the video below.

What do you think about these speculations? Are they off base or close? Sound off in the comments below.

 

 

https://phys.org/news/2019-10-visible-nanoparticle-catalysts-desirable-bioactive.html

Visible light and nanoparticle catalysts produce desirable bioactive molecules

Molecules adsorb on the surface of semiconductor nanoparticles in very specific geometries. The nanoparticles use energy from incident light to activate the molecules and fuse them together to form larger molecules in configurations useful for biological applications. Credit: Yishu Jiang, Northwestern University

Northwestern University chemists have used visible light and extremely tiny nanoparticles to quickly and simply make molecules that are of the same class as many lead compounds for drug development.

Driven by light, the nanoparticle catalysts perform reactions that yield very specific chemical products: molecules that don’t just have the right chemical formulas but also have specific arrangements of their atoms in space. And the catalyst can be reused for additional chemical reactions.

The semiconductor nanoparticles are known as quantum dots, so small that they are only a few nanometers across. But that small size is their power, giving the material attractive optical and electronic properties not possible at greater length scales.

“Quantum dots behave more like molecules than metal nanoparticles do,” said Emily A. Weiss, who led the research. “The electrons are squeezed into such a small space that their reactivity follows the rules of quantum mechanics. We can take advantage of this, along with the templating power of the nanoparticle surface.”

This work, published recently by the journal Nature Chemistry, is the first use of a nanoparticle’s surface as a template for a light-driven reaction called a cycloaddition, a simple mechanism for making very complicated, potentially bioactive compounds.

“We use our nanoparticle catalysts to access this desirable class of molecules, called tetrasubstituted cyclobutanes, through simple, one-step reactions that not only produce the molecules in high yield, but with the arrangement of atoms most relevant for drug development,” Weiss said. “These molecules are difficult to make any other way.”

Weiss is the Mark and Nancy Ratner Professor of Chemistry in the Weinberg College of Arts and Sciences. She specializes in controlling light-driven electronic processes in quantum dots and using them to perform light-driven chemistry with unprecedented selectivity.

The nanoparticle catalysts use energy from incident light to activate molecules on their surfaces and fuse them together to form larger molecules in configurations useful for biological applications. The larger molecule then detaches easily from the nanoparticle, freeing the nanoparticle to be used again in another reaction cycle.

In their study, Weiss and her team used three-nanometer nanoparticles made of the semiconductor cadmium selenide and a variety of starter molecules called alkenes in solution. Alkenes have core carbon-carbon double bonds which are needed to form the cyclobutanes.

The study is titled “Regio- and diastereoselective intermolecular [2+2] cycloadditions photocatalysed by quantum dots.”




More information: Yishu Jiang et al, Regio- and diastereoselective intermolecular [2+2] cycloadditions photocatalysed by quantum dots, Nature Chemistry (2019). DOI: 10.1038/s41557-019-0344-4

Journal information: Nature Chemistry

https://hackaday.com/2019/10/30/rpi4-now-overclocked-net-booted-and-power-sipping/

RPI4: NOW OVERCLOCKED, NET-BOOTED, AND POWER-SIPPING

It has now been a few months since the launch of the Raspberry Pi 4, and it would only be fair to describe the launch as “rocky”. While significantly faster than the Pi 3 on paper, its propensity for overheating would end up throttling down the CPU clock even with the plethora of aftermarket heatsinks and fans. The Raspberry Pi folks have been working on solutions to these teething troubles, and they have now released a bunch of updates in the form of a new bootloader that lets the Pi 4 live up to its promise. (UPDATE: Here’s the download page and release notes)

The real meat of the update comes in an implementation of a low power mode for the USB hub. It turns out that the main source of heat on the SoC wasn’t the CPU, but the USB. Fixing the USB power consumption means that you can run the processor cool at stock speeds, and it can even be overclocked now.

There is also a new tool for updating the Pi bootloader, rpi-eeprom, which allows automatic updates for Pi 4 owners. The big change is that booting the Pi 4 over the network or from an attached USB device is now a possibility, which is a must if you’re installing the Pi permanently. There are also fixes for problems with certain HATs, caused by the Pi 4’s 3.3 V line being cycled during a reboot.

With a device as complex as a Raspberry Pi, it comes as no surprise that it might ship with a few teething troubles. We’ve already covered some surrounding the USB-C power supply, for example, and the overheating. Where the Pi people consistently deliver, though, is in terms of support, both official and from the community, and we’re very pleased to see them come through in this case too.

https://www.itworldcanada.com/blog/5-trends-on-gartners-hype-cycle-for-emerging-technologies/423379

Today, companies detect insurance fraud using a combination of claim analysis, computer programs and private investigators. The FBI estimates the total cost of non-healthcare-related insurance fraud to be around $40 billion per year. But a maturing emerging technology called emotion artificial intelligence (AI) might make it possible to detect insurance fraud based on audio analysis of the caller.

In addition to catching fraud, this technology can improve customer experience by tracking happiness, more accurately directing callers, enabling better diagnostics for dementia, detecting distracted drivers, and even adapting education to a student’s current emotional state.

Though still relatively new, emotion AI is one of 21 new technologies added to the Gartner Hype Cycle for Emerging Technologies, 2019. The 2019 Hype Cycle highlights the emerging technologies with significant impact on business, society and people over the next five to 10 years. Technology innovation is the key to competitive differentiation and is transforming many industries.

This year’s emerging technologies fall into five major trends: Sensing and mobility, augmented human, postclassical compute and comms, digital ecosystems, and advanced AI and analytics.

Sensing and mobility

This trend features technologies with increasingly enabled mobility and the ability to manipulate objects around them, including 3D sensing cameras and more advanced autonomous driving. As sensors and AI evolve, autonomous robots will gain better awareness of the world around them. For example, emerging technologies such as light cargo delivery drones (both flying and wheeled) will be better able to navigate situations and manipulate objects. This technology is currently hampered by regulations, but its functionality continues to advance.

As sensing technology continues to evolve, it will aid more advanced technologies like the Internet of Things (IoT). These sensors also collect abundant data, which can lead to insights that are applicable across a range of scenarios and industries.

Other technologies in this trend include: AR cloud, autonomous driving levels 4 and 5, and flying autonomous vehicles.

Augmented human

Augmented human technologies improve both the cognitive and physical capabilities of the human body, and include technologies such as biochips and emotion AI. Some will provide “superhuman capabilities” — for example, a prosthetic arm that exceeds the strength of a human arm — while others will create robotic skin that is as sensitive to touch as human skin. These technologies will also eventually provide a more seamless experience that improves the health, intelligence and strength of humans.

Other technologies in this trend include: Personification, augmented intelligence, immersive workspace and biotech (cultured or artificial tissue).

Postclassical compute and comms

Classical or binary computing, which uses binary bits, evolved by making changes to existing, traditional architectures. These changes resulted in faster CPUs, denser memory and increasing throughput.

Postclassical compute and communications use entirely new architectures, as well as incremental advancements. This includes 5G, the next-generation cellular standard, which has a new architecture that includes core slicing and wireless edge. Meanwhile, low-earth-orbit (LEO) satellites operate at much lower altitudes, around 1,200 miles or less, than traditional geostationary systems at around 22,000 miles. The result is global broadband or narrowband voice and data network services, including in areas with little or no existing terrestrial or satcom coverage.
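The altitude difference matters mostly for latency. As a minimal sketch, here is the idealized one-way propagation delay at the two altitudes quoted above, assuming a straight-line path at the speed of light and ignoring routing and processing delays:

```python
# Light-travel time from the ground straight up to a satellite.
C_KM_PER_S = 299_792.458  # speed of light in vacuum
KM_PER_MILE = 1.609344

def one_way_delay_ms(altitude_miles: float) -> float:
    """Idealized ground-to-satellite propagation delay in milliseconds."""
    return altitude_miles * KM_PER_MILE / C_KM_PER_S * 1000

print(round(one_way_delay_ms(1_200), 1))   # LEO at ~1,200 miles: ~6.4 ms
print(round(one_way_delay_ms(22_000), 1))  # geostationary at ~22,000 miles: ~118.1 ms
```

That roughly twentyfold gap in propagation delay is what makes LEO constellations attractive for interactive broadband.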

Technologies in this trend include: Next-generation memory and nanoscale 3D printing.

Digital ecosystems

Digital ecosystems are web-like connections between actors (enterprises, people and things) sharing a digital platform. These ecosystems developed as digitalization morphed traditional value chains, enabling more seamless, dynamic connections to a variety of agents and entities across geographies and industries. In the future these will include decentralized autonomous organizations (DAOs), which operate independently of humans and rely on smart contracts. These digital ecosystems are constantly evolving and connecting, resulting in new products and opportunities.

Other technologies in this trend include: DigitalOps, knowledge graphs, synthetic data and decentralized web.

Advanced AI and analytics

Advanced analytics is the autonomous or semi-autonomous examination of data or content using sophisticated tools beyond those of traditional business insights. This is the result of new classes of algorithms and data science that are leading to new capabilities, for example transfer learning, which uses previously trained machine learning models as advanced starting points for new technology. Advanced analytics enables deeper insights, predictions and recommendations.

Other technologies in this trend include: Adaptive machine learning, edge AI, edge analytics, explainable AI, AI PaaS, generative adversarial networks and graph analytics.

 

About the Hype Cycle

The Hype Cycle for emerging technologies distills insights from more than 2,000 technologies that Gartner profiles into a succinct set of must-know emerging technologies and trends. With a focus on emerging tech, this Hype Cycle is heavily weighted toward trends appearing in the first half of the cycle. This year, Gartner refocused the Hype Cycle on technologies not highlighted in past iterations. The technologies that rotated off are still important, but some have become integral to business operations and are no longer “emerging,” while others have been featured for multiple years.


Brian Burke is a Research Vice President for Enterprise Architecture and Technology Innovation with more than 20 years of experience. Mr. Burke’s research focuses primarily on enterprise architecture, emerging technologies and innovation management. He is the chairperson for the Gartner 2019 IT Symposium/Xpo in South Africa and the author of the 2014 book “Gamify: How Gamification Motivates People to Do Extraordinary Things.”

https://www.engadget.com/2019/10/29/alexa-light-wake-up-sleep-control/

Alexa can use smart lights to wake you or lull you to sleep

You can also set lighting routines to brighten or dim your lights.

It’s getting a bit easier to fall asleep or wake up in sync with your lights — if you have an Alexa-powered device. Amazon has introduced a trio of Alexa options that can gradually adjust smart lights to suit your daily habits. Wake-up lighting brightens the bulbs grouped with your Alexa device when you tell the voice assistant to set an alarm “with lights.” You can add lights to sleep timers if you want them to gradually dim as you call it a night. And if you want Alexa to gradually change lighting as part of a larger action, you can add brightening or dimming bulbs to routines — say, a morning routine that plays the news and ramps up the lights as you struggle to get out of bed.

The features should start reaching American users this week. This kind of control isn’t unique in the smart light world — the Hue app has had features like this for a while. It’s relatively uncommon for voice assistants, though, and it’s much simpler (if not as advanced) to speak a command when you’re going to sleep.

https://www.zdnet.com/article/mind-reading-technology-the-security-privacy-and-inequality-threats-we-will-face/

Mind-reading technology: The security, privacy and inequality threats we will face

Brain computer interface technology is developing fast. But just because we can read data from others’ minds, should we?

 

Since the dawn of humanity, the only way for us to share our thoughts has been to take some kind of physical action: to speak, to move, to type out an ill-considered tweet.

Brain computer interfaces (BCIs), while still in their infancy, could offer a new way to share our thoughts and feelings directly from our minds through (and maybe with) computers. But before we go any further with this new generation of mind-reading technology, do we understand the impact it will have? And should we be worried?

Depending on who you listen to, the ethical challenges of BCIs are unprecedented, or they’re just a repeat of the risks brought about by each previous generation of technology. Due to the so-far limited use of BCIs in the real world, there’s little practical experience to show which attitude is more likely to be the right one.

The future of privacy

It’s clear that some ethical challenges that affect earlier technologies will carry across to BCIs, with privacy being the most obvious.

We already know it’s annoying to have a user name and password hacked, and worrying when it’s your bank account details that are stolen. But BCIs could mean that eventually it’s your emotional responses that would be stolen and shared by hackers, with all the embarrassments and horrors that go with that.

BCIs offer access to the most personal of personal data: inevitably they’ll be targeted by hackers and would-be blackmailers; equally clearly, security systems will attempt to keep data from BCIs as locked down as possible. And we already know the defenders never win every time.

One reason for some optimism: there will also be our own internal privacy processes to supplement security, says Rajesh Rao, professor at the University of Washington’s Paul G. Allen School of Computer Science & Engineering.

“There’s going to be multiple protective layers of security, as well as your brain’s own mechanisms for security — we have mechanisms for not revealing everything we’re feeling through language right now. Once you have these types of technologies, the brain would have its own defensive mechanisms which could come into play,” he told ZDNet.

The military mind

Another big issue: like generations of new technology before it, from the internet to GPS, some of the funding behind BCI projects has come from the military.

As well as helping soldiers paralysed by injuries in battle regain the abilities they’ve lost, it seems likely that the military’s interest in BCIs will lead to the development of systems designed to augment humans’ capabilities. For a soldier, that might mean the chance to damp down fear in the face of an enemy, or patch in a remote team to help out in the field — even connect to an AI to advise on battle tactics. In battle, having better tech than the enemy is seen as an advantage and a military priority.

There are also concerns that military involvement in BCIs could lead to brain computer interfaces being used as interrogation devices, potentially being used to intrude on the thoughts of enemy combatants captured in battle.

The one percent get smarter

If the use of BCIs in the military is controversial, the use of the technology in the civilian world is similarly problematic.

Is it fair for a BCI-equipped person with access to external computing power and memory to compete for a new job against a standard-issue person? And given the steep cost of BCIs, will they just create a new way for the privileged few to beat down the 99 percent?

These technologies are likely to throw up a whole new set of social justice issues around who gets access to devices that can allow them to learn faster or have better memories.

“You have a new set of problems in terms of haves and have nots,” says Rao.

This is far from the only issue this technology could create. While most current-generation BCIs can read thoughts but not send information back into the brain, future-generation BCIs may well be able to both send and receive data.

The effect of having computer systems wirelessly or directly transmit data to the brain isn’t known, but related technologies such as deep brain stimulation — where electrical impulses are sent into brain tissue to regulate unwanted movement in medical conditions such as dystonias and Parkinson’s disease — have been associated with changes in mood and personality in some patients (though the strength of the link is still a matter of debate).

And even if BCIs did cause personality changes, would that really be a good enough reason to withhold them from someone who needs one — a person with paraplegia who requires an assistive device, for example?

As one research paper puts it: “the debate is not so much over whether BCI will cause identity changes, but over whether those changes in personal identity are a problem that should impact technological development or access to BCI”.

Whether regular long-term use of BCIs will ultimately affect users’ moods or personalities isn’t known, but it’s hard not to imagine that technology that plugs the brain into an AI or internet-level repository of data won’t ultimately have an effect on personhood.

Historically, the bounds of a person were marked by their skin. Where does ‘me’ start with a brain that’s linked up to an artificial intelligence programme, and where do ‘I’ end when my thoughts are linked to vast swathes of processing power?

It’s not just a philosophical question, it’s a legal one too. In a world where our brains may be directly connected to an AI, what happens if I break the law, or just make a bad decision that leaves me in hospital or in debt?

The corporate brain drain

And another legal front that will open up around BCI tech could pit employees against employer.

There are already legal protections built up around how physical and intellectual property are handled when an employee works for and leaves a company. But what about if a company doesn’t want the skills and knowledge a worker built up during their employment to leave in their head when they leave the building?

Dr S Matthew Liao, professor of bioethics at New York University, points out that it’s common for a company to ask for a laptop or phone back when you leave a job. But what if you had an implant in your brain that recorded data?

“The question is now, do they own that data, and can they ask for it back? Every time you leave work, can they erase it and put it back in the next morning?”

Bosses and workers may also find themselves at odds in other ways with BCIs. In a world where companies can monitor what staff do on their work computers or put cameras across the office in the name of maximum efficiency, what might future employers do with the contents of their BCIs? Would they be tempted to tap into the readings from a BCI to see just how much time a worker really spends working? Or just to work out who keeps stealing all the pens out of the stationery cupboard?

“As these technologies get more and more pervasive and invasive, we might need to rethink our rights in the workplace,” says Liao. “Do we have a right to mental privacy?”

Privacy may be the most obvious ethical concern around BCIs, but it’s for good reason: we want our thoughts to remain private, not just for our own benefit, but for others’ as well.

Who hasn’t told a lie to spare someone’s feelings, or thought cheerfully about doing someone harm, safe in the knowledge they have no intention of ever doing so? Who wouldn’t be horrified if they knew every single thought that their partner, child, parent, teacher, boss, or friend thought?

“If we were all able to see each other’s thoughts, it would be really bad – there wouldn’t be any society left,” said Liao.

If BCIs are to spread, perhaps the most important part of using ‘mind-reading’ systems is to know when to leave others’ thoughts well alone.

https://www.scientificamerican.com/article/scientists-demonstrate-direct-brain-to-brain-communication-in-humans/

Scientists Demonstrate Direct Brain-to-Brain Communication in Humans

Work on an “Internet of brains” takes another step

By Robert Martone

The new paper addressed some of these questions by linking together the brain activity of a small network of humans. Three individuals sitting in separate rooms collaborated to correctly orient a block so that it could fill a gap between other blocks in a video game. Two individuals who acted as “senders” could see the gap and knew whether the block needed to be rotated to fit. The third individual, who served as the “receiver,” was blinded to the correct answer and needed to rely on the instructions sent by the senders.

The two senders were equipped with electroencephalographs (EEGs) that recorded their brain’s electrical activity. Senders were able to see the orientation of the block and decide whether to signal the receiver to rotate it. They focused on a light flashing at a high frequency to convey the instruction to rotate or focused on one flashing at a low frequency to signal not to do so. The differences in the flashing frequencies caused disparate brain responses in the senders, which were captured by the EEGs and sent, via computer interface, to the receiver. A magnetic pulse was delivered to the receiver using a transcranial magnetic stimulation (TMS) device if a sender signaled to rotate. That magnetic pulse caused a flash of light (a phosphene) in the receiver’s visual field as a cue to turn the block. The absence of a signal within a discrete period of time was the instruction not to turn the block.
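A sender's choice can be decoded by checking which flicker frequency dominates the EEG recording, the principle behind steady-state visually evoked potentials. Here is a minimal pure-Python sketch of that idea on a synthetic signal; the 15 Hz and 17 Hz frequencies, the 250 Hz sampling rate, and the signal model are illustrative assumptions, not the study's actual parameters.

```python
import math

def band_power(signal, fs, freq):
    """Power of `signal` (sampled at fs Hz) at one frequency, estimated by
    correlating the samples with a cosine/sine pair at that frequency."""
    re = sum(s * math.cos(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    im = sum(s * math.sin(2 * math.pi * freq * i / fs) for i, s in enumerate(signal))
    return (re * re + im * im) / len(signal)

def decode_command(signal, fs, f_rotate=15.0, f_hold=17.0):
    """Pick whichever flicker frequency dominates the recording."""
    rotate = band_power(signal, fs, f_rotate)
    hold = band_power(signal, fs, f_hold)
    return "rotate" if rotate > hold else "hold"

# A synthetic one-second 'EEG' dominated by the 15 Hz flicker response:
fs = 250  # Hz, a typical EEG sampling rate
eeg = [math.sin(2 * math.pi * 15 * i / fs) + 0.3 * math.sin(2 * math.pi * 17 * i / fs)
       for i in range(fs)]
print(decode_command(eeg, fs))  # rotate
```

A real decoder would work on noisy multi-channel data, but the decision rule — compare power at the candidate stimulus frequencies and pick the larger — is the same.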

After gathering instructions from both senders, the receiver decided whether to rotate the block. Like the senders, the receiver was equipped with an EEG, in this case to signal that choice to the computer.  Once the receiver decided on the orientation of the block, the game concluded, and the results were given to all three participants. This provided the senders with a chance to evaluate the receiver’s actions and the receiver with a chance to assess the accuracy of each sender.

The team was then given a second chance to improve its performance. Overall, five groups of individuals were tested using this network, called the “BrainNet,” and, on average, they achieved greater than 80 percent accuracy in completing the task.

In order to escalate the challenge, investigators sometimes added noise to the signal sent by one of the senders. Faced with conflicting or ambiguous directions, the receivers quickly learned to identify and follow the instructions of the more accurate sender. This process emulated some of the features of “conventional” social networks, according to the report.
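The receivers' learned preference for the more reliable sender can be mimicked with a simple reliability-weighted vote. The following toy simulation is not the study's method, and the sender accuracies are invented; it just illustrates how tracking each sender's record lets the combined decision approach the better sender's accuracy.

```python
import random

def weighted_vote(advice, weights):
    """Combine binary advice (True = rotate), weighting each sender by its
    observed track record; ties default to not rotating."""
    score = sum((1 if a else -1) * w for a, w in zip(advice, weights))
    return score > 0

def simulate(trials=1000, accuracies=(0.9, 0.6), seed=1):
    """Receiver keeps a running correctness count per sender and uses it
    as that sender's weight on the next trial."""
    random.seed(seed)
    correct_so_far = [1, 1]  # start with uniform (smoothed) trust
    receiver_correct = 0
    for t in range(1, trials + 1):
        truth = random.random() < 0.5  # does the block really need rotating?
        advice = [a if random.random() < acc else not a
                  for a, acc in zip([truth, truth], accuracies)]
        guess = weighted_vote(advice, [c / t for c in correct_so_far])
        receiver_correct += guess == truth
        for i, a in enumerate(advice):
            correct_so_far[i] += a == truth
    return receiver_correct / trials

print(simulate())  # close to the better sender's 0.9 accuracy
```

Because disagreements are resolved in favor of the historically better sender, the receiver's accuracy converges toward that sender's 0.9 rather than a blind average of the two.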

This study is a natural extension of work previously done in laboratory animals. In addition to the work linking together rat brains, Nicolelis’s laboratory is responsible for linking multiple primate brains into a “Brainet” (not to be confused with the BrainNet discussed above), in which the primates learned to cooperate in the performance of a common task via brain-computer interfaces (BCIs). This time, three primates were connected to the same computer with implanted BCIs and simultaneously tried to move a cursor to a target. The animals were not directly linked to each other in this case, and the challenge was for them to perform a feat of parallel processing, each directing its activity toward a goal while continuously compensating for the activity of the others.

Brain-to-brain interfaces also span across species, with humans using noninvasive methods similar to those in the BrainNet study to control cockroaches or rats that had surgically implanted brain interfaces. In one report, a human using a noninvasive brain interface linked, via computer, to the BCI of an anesthetized rat was able to move the animal’s tail, while in another study a human controlled a rat as a freely moving cyborg.

The investigators in the new paper point out that it is the first report in which the brains of multiple humans have been linked in a completely noninvasive manner. They claim that the number of individuals whose brains could be networked is essentially unlimited. Yet the information being conveyed is currently very simple: a yes-or-no binary instruction. Other than being a very complex way to play a Tetris-like video game, where could these efforts lead?

The authors propose that information transfer using noninvasive approaches could be improved by simultaneously imaging brain activity using functional magnetic resonance imaging (fMRI) in order to increase the information a sender could transmit. But fMRI is not a simple procedure, and it would expand the complexity of an already extraordinarily complex approach to sharing information. The researchers also propose that TMS could be delivered, in a focused manner, to specific brain regions in order to elicit awareness of particular semantic content in the receiver’s brain.

Meanwhile the tools for more invasive—and perhaps more efficient—brain interfacing are developing rapidly. Elon Musk recently announced the development of a robotically implantable BCI containing 3,000 electrodes to provide extensive interaction between computers and nerve cells in the brain. While impressive in scope and sophistication, these efforts are dwarfed by government plans. The Defense Advanced Research Projects Agency (DARPA) has been leading engineering efforts to develop an implantable neural interface capable of engaging one million nerve cells simultaneously. While these BCIs are not being developed specifically for brain–to-brain interfacing, it is not difficult to imagine that they could be recruited for such purposes.

Even though the methods used here are noninvasive and therefore appear far less ominous than if a DARPA neural interface had been used, the technology still raises ethical concerns, particularly because the associated technologies are advancing so rapidly. For example, could some future embodiment of a brain-to-brain network enable a sender to have a coercive effect on a receiver, altering the latter’s sense of agency? Could a brain recording from a sender contain information that might someday be extracted and infringe on that person’s privacy? Could these efforts, at some point, compromise an individual’s sense of personhood?

This work takes us a step closer to the future Nicolelis imagined, in which, in the words of the late Nobel Prize–winning physicist Murray Gell-Mann, “thoughts and feelings would be completely shared with none of the selectivity or deception that language permits.” In addition to being somewhat voyeuristic in this pursuit of complete openness, Nicolelis misses the point. One of the nuances of human language is that often what is not said is as important as what is. The content concealed in the privacy of one’s mind is the core of individual autonomy. Whatever we stand to gain in collaboration or computing power by directly linking brains may come at the cost of things that are far more important.
