Apple has released the fourth watchOS 4 developer beta for testing on all Apple Watch models. watchOS 4 brings new watch faces including Siri and Toy Story options, enhanced Music and Workout experiences, Apple News, person-to-person payments and Apple Pay Cash, and more.
watchOS 4 beta is currently only available to registered developers. A public beta version is not provided for Apple Watch software.
watchOS 4 beta 3 recently added new full-screen celebration effects for Activity achievements, although we’re still awaiting other features, such as the new person-to-person Apple Pay payments.
Explore the International Space Station With Google Street View
An astronaut and Google mapped the ISS for Street View with a DSLR and a lot of patience
Google Street View has taken armchair explorers to some of Earth’s most exotic locations, from the ancient ruins of Angkor and Machu Picchu to the natural wonders of the Galapagos Islands and the Grand Canyon. But its newest location is (literally) out of this world: the International Space Station. As Thuy Ong reports for The Verge, you can now explore the ISS from your own computer screen without suffering the challenges of spaceflight.
“In the six months that I spent on the International Space Station, it was difficult to find the words or take a picture that accurately describes the feeling of being in space,” French astronaut Thomas Pesquet writes in a blog post announcing the new Street View location. “Working with Google on my latest mission, I captured Street View imagery to show what the ISS looks like from the inside, and share what it’s like to look down on Earth from outer space.”
According to Pesquet, the team couldn’t use the bulky backpacks or car-mounted devices usually used to record Google Street View locations. Not only is it difficult to send new equipment to the station, it’s a pretty cramped environment. And then there’s the issue of microgravity.
“All of our Street View procedures are predicated on the existence of gravity,” Stafford Marquardt jokes in a video about the new Street View. Tripods would have to be secured wherever they were positioned. And photos taken by hand run into the issue that the photographer is constantly floating. So the team had to get creative.
The basic idea is that the astronaut would take images of the space station using a DSLR camera already on the ISS. Then the images would be stitched back together on Earth. The problem is that each image must be taken at a similar angle before being stitched, otherwise there would be seams or distortion in the final picture where the images didn’t quite line up.
After testing out various methods on Earth, the team decided that Pesquet would stretch two bungee cords into a cross inside a module of the station. He would then take images, rotating the camera around the centre point where the bungee cords crossed.
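To give a feel for why the shots had to be taken this carefully, here is a rough, illustrative calculation (all numbers are assumptions, not from the article) of how many overlapping frames a single 360-degree ring of a panorama needs, given the lens’s field of view and the overlap that stitching software typically wants between neighbouring shots:

```python
import math

def shots_per_ring(fov_deg: float, overlap: float) -> int:
    """Frames needed to cover one 360-degree ring of a panorama.

    fov_deg: the camera's horizontal field of view, in degrees.
    overlap: fraction of each frame shared with its neighbour
             (stitching software needs common features to align on).
    """
    effective = fov_deg * (1.0 - overlap)  # new coverage per frame
    return math.ceil(360.0 / effective)

# e.g. a 70-degree lens with 30% overlap between neighbouring frames
print(shots_per_ring(70.0, 0.30))
```

Rotating about a single fixed point (here, where the bungee cords crossed) keeps the camera’s position constant, so neighbouring frames differ only by rotation and can be aligned without the parallax seams the article describes.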
This isn’t the first time non-traditional equipment has been used to add to the considerable library of Google Street View. An islander on Denmark’s Faroe Islands used 360-degree cameras strapped to sheep to map the rocky archipelago, while divers in Australia recorded the Great Barrier Reef with an underwater camera submarine.
Pesquet hopes that being able to explore this collaborative project, orbiting roughly 250 miles above our planet and all of its borders, will help people gain perspective on the Earth.
“None of this would have been possible without the work of the team on the ground, my colleagues (turned roommates) on the ISS, and the countries that came together to send us up to space,” Pesquet wrote in his blog post. “Looking at Earth from above made me think about my own world a little differently, and I hope that the ISS on Street View changes your view of the world too.”
You’re watching the new episode of Game of Thrones, and suddenly you hear your children, up and about after their bedtime! Now you’ll probably miss a crucial moment of the show because you have to put them to bed again. Or you’re out to dinner with friends and longing for the sight of your sleeping small humans. What do you do? Text the babysitter to check on them? Well, luckily for you these issues could soon be things of the past, thanks to Bert Vuylsteke and his Pi-powered Sleepbuddy. This IoT-controlled social robot could fulfil all your remote babysitting needs!
A SOCIAL ROBOT?
A social robot fulfils a role normally played by a person, and interacts with humans via human language, gestures, and facial expressions. This is what Bert says about the role of the Sleepbuddy:
[For children, it] is a friend or safeguard from nightmares, but it is so much more for the babysitters or parents. The babysitters or parents connect their smartphone/tablet/PC to the Sleepbuddy. This will give them access to control all his emotions, gestures, microphone, speaker and camera. In the eye is a hidden camera to see the kids sleeping. The speaker and microphone allow communication with the kids through WiFi.
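As a purely hypothetical sketch of the kind of remote-control channel Bert describes (this is not the actual OPSORO interface; the command names and JSON protocol here are invented for illustration), the Pi could expose a tiny HTTP endpoint that a babysitter’s phone reaches over WiFi:

```python
# Hypothetical sketch only: the command names and JSON protocol below
# are invented for illustration, not the real OPSORO interface.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

VALID_COMMANDS = {"set_emotion", "wave", "speak", "camera_snapshot"}

def handle_command(payload: dict) -> dict:
    """Validate a command and build the reply the robot would send."""
    cmd = payload.get("command")
    if cmd not in VALID_COMMANDS:
        return {"ok": False, "error": "unknown command"}
    # On a real robot this branch would drive servos, the speaker,
    # or the camera; here it just acknowledges the request.
    return {"ok": True, "command": cmd, "args": payload.get("args", {})}

class RobotHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        body = json.dumps(handle_command(payload)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def run(port: int = 8080) -> None:
    """Start listening; on the Pi this would be called at boot."""
    HTTPServer(("0.0.0.0", port), RobotHandler).serve_forever()
```

A phone on the same network would then POST something like `{"command": "speak", "args": {"text": "Back to bed!"}}` to the robot’s address.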
THE ROOTS OF THE SLEEPBUDDY
As a student at Ghent University, Bert had to build a social robot using OPSORO, the university’s open-source robotics platform. The platform’s developers create social robots for research purposes, and they make all of their software, as well as their hardware design plans, available on GitHub. In addition, you will soon be able to purchase their robot kits via Kickstarter. OPSORO robots are designed around the Raspberry Pi and controlled via a web interface, which allows you to customise your robot’s behaviour using visual or text-based programming languages.
The Sleepbuddy’s components
BUILDING THE SLEEPBUDDY
Bert has provided a detailed Instructable describing the process of putting the Sleepbuddy together, complete with video walk-throughs. Note, however, that the making techniques he used include thermoforming, laser cutting, and 3D printing, so if you want to recreate this build, you may need to contact your local makerspace to find out whether they have the necessary equipment.
Assembling the Sleepbuddy
Finally, Bert added an especially cute touch by covering the Sleepbuddy in blackboard paint, so kids can draw on the robot to really make it their own!
SO MANY ROBOTS!
At Pi Towers we are partial to all kinds of robots, be they ones that test medical devices, play chess or Connect 4, or fight other robots. If they twerk, or are cute, tiny, or shoddy, we maybe even like them a tiny bit more.
Do you share our love of robots? Would you like to make your own? Then check out our resource for building a simple robot buggy. Maybe it will kick-start your career as the general of a robot army. A robot army that does good, of course! Let us know your benevolent robot overlord plans in the comments!
This image of NGC 4248 was produced by Hubble as it embarked upon compiling its first ultraviolet ‘atlas,’ for which the telescope targeted 50 nearby star-forming galaxies — a sample spanning all kinds of different morphologies, masses, and structures.
Microsoft promises AI chip for ‘on-device’ computer vision learning
No need for Azure?
Microsoft is upgrading the HoloLens with a mysterious chip to enable running power-hungry machine learning tasks for computer vision on device.
Harry Shum, EVP of Microsoft’s Artificial Intelligence and Research Group, announced yesterday at the annual CVPR computer vision conference in Honolulu, Hawaii, that the second-generation HoloLens will come with a custom AI co-processor for implementing deep neural networks.
Today, tech pros can use powerful AI-optimized chips from the likes of Intel, NVIDIA or even Google for complex machine learning applications. But a typical deep-learning task, such as segmenting out parts of a sequence of images, could consume somewhere between, say, 10 and 50 kW if run inside a remote data centre – far too much power for a consumer mobile device, which, past around 3 W, would likely run uncomfortably hot.
If you can’t run the processing directly on device, your next guess might be to use your mobile device to communicate with the cloud (Amazon will be supporting NVIDIA’s Volta V100, a power-hungry, AI-optimized NVIDIA GPU, on AWS when it comes out later this year). But you’re up the creek without a paddle if you don’t have an internet connection. And even if you do, a few seconds of communication latency could make a real-time application such as hand-tracking far too slow.
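A quick back-of-envelope check (all figures here are illustrative assumptions, not measurements from the article) shows why even modest network latency breaks a real-time task that must keep pace with the display’s frame rate:

```python
def fits_frame_budget(fps: float, round_trip_ms: float,
                      inference_ms: float) -> bool:
    """True if a network round trip plus inference fits in one frame."""
    frame_budget_ms = 1000.0 / fps  # time available per displayed frame
    return round_trip_ms + inference_ms <= frame_budget_ms

# A 60 fps display leaves ~16.7 ms per frame.
print(fits_frame_budget(60.0, 80.0, 5.0))  # cloud round trip: over budget
print(fits_frame_budget(60.0, 0.0, 5.0))   # on-device: within budget
```

Even an optimistic 80 ms round trip to a data centre blows through a 60 fps frame budget several times over, which is the case for running inference on the headset itself.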
Bloomberg, citing anonymous sources, reported this May that Apple is now testing iPhones with special chips for processing AI.
The main processor on the all-in-one, first-gen HoloLens currently processes info from the headset’s on-board sensors, such as the IMU, infrared cameras, head-tracking cameras, and depth sensor. The new co-processor will apparently allow deep neural networks to run outside the cloud.
A Microsoft spokesperson referred The Register to a blog post by the HoloLens director of science, Marc Pollefeys, about the announcement, noting “we have nothing more to share at this time”.
According to the extremely detail-light blog post, the “custom silicon” will be different from the reprogrammable field-programmable gate arrays used by the likes of Amazon, Microsoft (on Azure) and others which “primarily enhanced existing cloud computing fabrics”.
The chip – which “supports a wide variety of layer types” for deep neural nets that are “fully programmable” – will be able to run “continuously, off the HoloLens battery,” according to Pollefeys.
It’s not clear if the chip will support both the training of deep neural networks and prediction once the network is trained, or only prediction from a pre-trained network.
In Hawaii, Shum apparently demonstrated hand segmentation live.
“The blog doesn’t actually give enough detail to form any impression of how much difference the AI coprocessor will make,” writes Stephen Furber, a computer engineer at The University of Manchester who studies human-brain-inspired neuromorphic computing.
Researchers have already shown that you can do a “reasonable job” of implementing the computer vision task of simultaneous localization and mapping (aka SLAM) on mobile devices, he writes (for example, this study indicates that 3D mapping and tracking could be done on an embedded device with a 1W power budget).
More hardware assistance with deep networks could potentially allow for higher frame rates, better accuracy of tracking and better object recognition.
But he pointed out that Google’s 700 MHz Tensor Processing Unit, a large 8-bit integer matrix multiplier designed for neural network applications, is too power-hungry for mobile device use. It consumes about 40 W when running, according to a Google blog post.
“I would guess that this is similar, though smaller, less powerful and less power-hungry,” he added. “But who knows?”
Microsoft has not officially announced a release date for the HoloLens 2, but some reports say it might not be until 2019.