watchOS 4 beta 4 for Apple Watch now available

Apple has released the fourth watchOS 4 developer beta for testing on all Apple Watch models. watchOS 4 brings new watch faces including Siri and Toy Story options, enhanced Music and Workout experiences, Apple News, person-to-person payments and Apple Pay Cash, and more.


watchOS 4 beta is currently only available to registered developers. A public beta version is not provided for Apple Watch software.

watchOS 4 beta 3 recently added full-screen celebration effects for Activity achievements, although we’re still awaiting other promised features like the new person-to-person Apple Pay payments.

We’ll update with any changes discovered in watchOS 4 beta 4 below.

Explore the International Space Station With Google Street View

An astronaut and Google mapped the ISS for Street View with a DSLR and a lot of patience



Looking out at Earth from the Cupola Observation Module of the International Space Station on Google Street View (Google / YouTube)

Google Street View has taken armchair explorers to some of Earth’s most exotic locations, from the ancient ruins of Angkor and Machu Picchu to the natural wonders of the Galapagos Islands and the Grand Canyon. But its newest location is (literally) out of this world: the International Space Station. As Thuy Ong reports for The Verge, you can now explore the ISS from your own computer screen without suffering the challenges of spaceflight.

“In the six months that I spent on the International Space Station, it was difficult to find the words or take a picture that accurately describes the feeling of being in space,” French astronaut Thomas Pesquet writes in a blog post announcing the new Street View location. “Working with Google on my latest mission, I captured Street View imagery to show what the ISS looks like from the inside, and share what it’s like to look down on Earth from outer space.”

According to Pesquet, the team couldn’t use the bulky backpacks or car-mounted devices usually used to record Google Street View locations. Not only is it difficult to send new equipment to the station, it’s a pretty cramped environment. And then there’s the issue of microgravity.

“All of our Street View procedures are predicated on the existence of gravity,” Stafford Marquardt jokes in a video about the new Street View. Tripods would have to be secured wherever they were positioned. And photos taken by hand run into the issue that the photographer is constantly floating. So the team had to get creative.

The basic idea is that the astronaut would take images of the space station using a DSLR camera already on the ISS. Then the images would be stitched back together on Earth. The problem is that each image must be taken at a similar angle before being stitched, otherwise there would be seams or distortion in the final picture where the images didn’t quite line up.

After testing out various methods on Earth, they decided that Pesquet would stretch two bungee cords in a cross section of the station. Then he would take images, rotating the camera around the center point where the bungee cords cross.
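The geometry of that capture step can be sketched numerically: given the camera's field of view and a desired overlap between neighbouring frames (the overlap is what lets the stitcher line images up), you can compute how many shots a full rotation needs and at which yaw angles to take them. The function and the 70°/30% figures below are illustrative assumptions, not values from the actual ISS shoot.

```python
import math

def capture_angles(fov_deg: float, overlap: float) -> list[float]:
    """Yaw angles (degrees) for a full 360-degree panorama where each
    frame overlaps its neighbour by the given fraction of the field
    of view."""
    if not 0 <= overlap < 1:
        raise ValueError("overlap must be in [0, 1)")
    step = fov_deg * (1 - overlap)      # angular advance per shot
    shots = math.ceil(360 / step)       # frames needed to close the loop
    return [i * 360 / shots for i in range(shots)]

# e.g. a hypothetical 70-degree lens with 30% overlap between frames
angles = capture_angles(70, 0.30)
```

With those toy numbers, eight evenly spaced shots cover the full rotation around the bungee-cord centre point.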

This isn’t the first time non-traditional equipment has been used to add to the considerable library of Google Street View. An islander on Denmark’s Faroe Islands used 360-degree cameras strapped to sheep to map the rocky archipelago, while divers in Australia recorded the Great Barrier Reef with an underwater camera submarine.

Pesquet hopes that being able to explore this collaborative project, orbiting some 250 miles above our planet and all of its borders, will help people get perspective on the Earth.

“None of this would have been possible without the work of the team on the ground, my colleagues (turned roommates) on the ISS, and the countries that came together to send us up to space,” Pesquet wrote in his blog post. “Looking at Earth from above made me think about my own world a little differently, and I hope that the ISS on Street View changes your view of the world too.”

Google Wi-Fi in the home means it’s bye-bye black spots

The Google Wi-Fi router system.

Your days of Wi-Fi black spots in your home are over, says Google. Last week I was keen to find out if this was true. I installed Google Wi-Fi around my largish apartment and experienced both its strengths and weaknesses.

Google Wi-Fi takes the form of little white disks that you place around your home to get a consistent Wi-Fi signal. It’s finally been released in Australia.

Google uses a newer type of networking called “wireless mesh”, which intelligently manages how data is routed around a home. If you have, say, three of these devices — three nodes — they will collectively decide which way data will be relayed to you via the nodes. Mesh networks also decide which of the nodes you connect to around a home.
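As a rough illustration of the kind of decision a mesh makes, the sketch below picks the cheapest relay path through a set of nodes using Dijkstra's algorithm. The node names and link costs are invented, and a real mesh system weighs live signal quality and load rather than fixed numbers.

```python
import heapq

def best_route(links, src, dst):
    """Pick the relay path with the lowest total link cost, the kind of
    decision a mesh system makes continuously. `links` maps a node to
    {neighbour: cost}, where cost could reflect signal strength or load."""
    dist = {src: 0}
    prev = {}
    heap = [(0, src)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == dst:
            break
        if d > dist.get(node, float("inf")):
            continue
        for nxt, cost in links.get(node, {}).items():
            nd = d + cost
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(heap, (nd, nxt))
    # walk back from the destination to recover the chosen path
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return path[::-1]

# three hypothetical hops: relaying via the hallway node beats a weak direct link
links = {
    "modem-node": {"hall-node": 1, "laptop": 5},
    "hall-node": {"laptop": 1},
}
```

Here the two-hop route (total cost 2) wins over the weak direct link (cost 5), which is exactly the trade a mesh makes when a distant device has a poor direct signal.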

Google Wi-Fi units are sold singly or in packs of three. In theory, you could use three as separate stand-alone routers for three networks, as they are identical; they are not range extenders. But it makes sense to use them together on one network.

The units do not include any modem functionality. You need to connect your ADSL, VDSL, cable or fibre input to a modem and run its output into an ethernet port in one of the Google Wi-Fi units. If you haven’t got a modem, you can configure an older router to “bridge mode” to achieve the same.

Like so many Google products, you use a mobile app rather than a web browser to configure Google Wi-Fi, and it’s straightforward. You use the phone’s camera to scan the code on the first Wi-Fi unit, plug the unit in and head to the Google Wi-Fi app to install it.

The app takes care of the rest. Once complete, the light on the Wi-Fi unit turns from blue to solid white and you’re good to go.

You then plug in and install the other Wi-Fi units in various parts of your home, one by one. The only difference is that the second and subsequent units are not connected to the modem. In this way they mimic, but are not the same as, Wi-Fi extenders.

For optimal Wi-Fi coverage, Google says you should place these disks about two rooms apart. The app reassuringly checks the Wi-Fi signal in each spot as you go.

Like some other recent routers, Google Wi-Fi does not offer separate 2.4 and 5GHz Wi-Fi networks. It uses both bands but, to the end user, there’s only one Wi-Fi signal. The system decides which band any particular device connects to.

If you must control this, some phones and tablets let you specify which band to connect to.
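The band steering the system performs can be caricatured in a few lines: prefer the faster 5GHz band unless its signal (RSSI, in dB) has fallen well behind 2.4GHz, as happens at range. This is a simplified illustration with an invented margin, not Google's actual steering logic.

```python
def pick_band(rssi_2g: int, rssi_5g: int, margin: int = 10) -> str:
    """Very simplified band-steering rule: prefer 5GHz for its speed
    unless its signal is more than `margin` dB weaker than 2.4GHz,
    which tends to happen far from the router (5GHz range is shorter)."""
    return "5GHz" if rssi_5g >= rssi_2g - margin else "2.4GHz"

# near the router both bands are strong, so 5GHz wins;
# far away, 5GHz fades much faster, so fall back to 2.4GHz
near = pick_band(-40, -45)
far = pick_band(-60, -80)
```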

There were no black spots on a “Wi-Fi heat map” I produced. Wi-Fi speeds were at worst 63 Mbps, but mostly around 560 to 700 Mbps.

Google Wi-Fi offers features such as Guest Wi-Fi, and a family feature where you can “pause” devices. You create a label for the pause, select one or more phones, tablets and computers, and whenever you want, cut off Wi-Fi access, for example during meal times or at the kids’ bedtime. You can schedule cut-off times too.
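A pause schedule like the one described reduces to checking whether a device falls inside any labelled time window, remembering that bedtime windows wrap past midnight. The labels and device names below are made up for illustration; this is not the app's internal logic.

```python
from datetime import time

def is_paused(device: str, now: time, schedules: dict) -> bool:
    """Check whether a device falls inside any scheduled pause window.
    `schedules` maps a label to (devices, start, end); windows that
    cross midnight (e.g. bedtime) wrap around."""
    for devices, start, end in schedules.values():
        if device not in devices:
            continue
        # a window that ends before it starts spans midnight
        inside = (start <= now < end) if start <= end else (now >= start or now < end)
        if inside:
            return True
    return False

schedules = {
    "dinner":  ({"kids-tablet", "kids-phone"}, time(18, 0), time(19, 0)),
    "bedtime": ({"kids-tablet"}, time(20, 30), time(7, 0)),
}
```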

Using the app you can prioritise devices’ access to traffic, and manage your home Wi-Fi network externally.

The advanced networking features cover router functions such as UPnP, reserved IP addresses, port forwarding and NAT mode, and can enable IPv6.

But there’s no Dynamic DNS (a traditional way of accessing computers remotely), virtual private network settings nor multiple subnet support, for handling of multiple networks behind a router.

I also noticed that my Synology NAS box would not connect to this router without some reconfiguration.

I’m more concerned with cabling issues. First, each Wi-Fi unit has just two Ethernet ports, and on the first unit you install, one of those is for linking to a modem.

If you have lots of Ethernet connections, you’ll either have to swap to Wi-Fi, or use a Gigabit switch to add ports.

While I could get fast speeds, I couldn’t get the same throughput as with Ethernet.

Using Ookla’s speed test over an Ethernet connection, this PC returned a ping (latency) time of 16 milliseconds, with 69 megabits per second download and 25 Mbps upload — on a 100 Mbps plan.

Using a mesh link to the internet, and Ethernet between the local Google Wi-Fi hub and my PC, I got 35 to 50 Mbps download, about 20 Mbps less than with cable. So Ethernet cabling still reigns supreme speedwise.

If you want to keep your cabling, you can link a Google Wi-Fi system to the Ethernet output of a standard router. You get your current cable connectivity but with Google’s Wi-Fi setup. Just remember to turn off the standard router’s Wi-Fi.

Although Google prefers you to configure your network from a phone, there’s no substitute for being able to configure it on a big screen using your browser. It’s a shame browser-based configuration isn’t available.

At $199 for one and $499 for a three-pack, Google Wi-Fi is expensive, indeed more expensive than in the US, where it is priced at $US129 ($163) for one and $US299 for three.

The “Australia tax” certainly kicks in but you might be able to reduce the cost by shopping around.

Whatever the case, you can get rid of black spots for good with systems such as this.


You’re watching the new episode of Game of Thrones, and suddenly you hear your children, up and about after their bedtime! Now you’ll probably miss a crucial moment of the show because you have to put them to bed again. Or you’re out to dinner with friends and longing for the sight of your sleeping small humans. What do you do? Text the babysitter to check on them? Well, luckily for you these issues could soon be things of the past, thanks to Bert Vuylsteke and his Pi-powered Sleepbuddy. This IoT-controlled social robot could fulfil all your remote babysitting needs!


A social robot fulfils a role normally played by a person, and interacts with humans via human language, gestures, and facial expressions. This is what Bert says about the role of the Sleepbuddy:

[For children, it] is a friend or safeguard from nightmares, but it is so much more for the babysitters or parents. The babysitters or parents connect their smartphone/tablet/PC to the Sleepbuddy. This will give them access to control all his emotions, gestures, microphone, speaker and camera. In the eye is a hidden camera to see the kids sleeping. The speaker and microphone allow communication with the kids through WiFi.
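The remote-control channel Bert describes boils down to a small command dispatcher: the app sends a command, the robot updates its state (face, camera, microphone) and reports back. The command names and closure-based handler below are invented for illustration; they are not the Sleepbuddy's or OPSORO's actual API.

```python
# Hypothetical sketch of command dispatch for a Pi-based social robot;
# the command names and state fields are invented, not taken from the
# actual Sleepbuddy or OPSORO code.
def make_robot():
    state = {"emotion": "sleepy", "camera": False, "mic": False}

    def handle(command: str, value=None):
        if command == "set_emotion":
            state["emotion"] = value          # e.g. shown on the face display
        elif command == "camera":
            state["camera"] = bool(value)     # parent peeks through the eye camera
        elif command == "mic":
            state["mic"] = bool(value)        # two-way audio over Wi-Fi
        else:
            raise ValueError(f"unknown command: {command}")
        return dict(state)                    # report current state back

    return handle

robot = make_robot()
robot("set_emotion", "happy")
status = robot("camera", True)
```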


As a student at Ghent University, Bert had to build a social robot using OPSORO, the university’s open-source robotics platform. The developers of this platform create social robots for research purposes. They are also making all software, as well as hardware design plans, available on GitHub. In addition, you will soon be able to purchase their robot kits via a Kickstarter. OPSORO robots are designed around the Raspberry Pi, and controlled via a web interface. The interface allows you to customise your robot’s behaviour, using visual or text-based programming languages.

The Sleepbuddy’s components


Bert has provided a detailed Instructable describing the process of putting the Sleepbuddy together, complete with video walk-throughs. However, the making techniques he has used include thermoforming, laser cutting, and 3D printing. If you want to recreate this build, you may need to contact your local makerspace to find out whether they have the necessary equipment.

Assembling the Sleepbuddy

Finally, Bert added an especially cute touch to this project by covering the Sleepbuddy in blackboard paint, so kids can draw on the robot to really make it their own!


At Pi Towers we are partial to all kinds of robots, be they ones that test medical devices, play chess or Connect 4, or fight other robots. If they twerk, or are cute, tiny, or shoddy, we maybe even like them a tiny bit more.

Do you share our love of robots? Would you like to make your own? Then check out our resource for building a simple robot buggy. Maybe it will kick-start your career as the general of a robot army. A robot army that does good, of course! Let us know your benevolent robot overlord plans in the comments!

Hubble Observes NGC 4248

NASA has released a stunning image snapped by the NASA/ESA Hubble Space Telescope of the irregular/spiral galaxy NGC 4248.

This image, taken with the Wide Field Camera 3 on board Hubble, shows the small irregular/spiral galaxy NGC 4248. Image credit: NASA / ESA / Hubble.

NGC 4248 is a small galaxy — perhaps an irregular or a peculiar dwarf spiral.

It was discovered on February 9, 1788, by the British astronomer William Herschel.

Also known as LEDA 39461 and UGC 7335, NGC 4248 lies in the constellation Canes Venatici, approximately 23.5 million light-years away.

It is a member of the same group as the large spiral galaxy Messier 106 (NGC 4258).

This image of NGC 4248 was produced by Hubble as it embarked upon compiling its first ultraviolet ‘atlas,’ for which the telescope targeted 50 nearby star-forming galaxies — a sample spanning all kinds of different morphologies, masses, and structures.

Studying this sample can help us to piece together the star-formation history of the Universe.

By exploring how massive stars form and evolve within such galaxies, astronomers can learn more about how, when, and where star formation occurs.

They also can learn about how star clusters change over time, and how the process of forming new stars is related to the properties of both the host galaxy and the surrounding interstellar medium.

The color image of NGC 4248 was made from separate exposures taken in the visible, ultraviolet and infrared regions of the spectrum with Hubble’s Wide Field Camera 3 (WFC3).

Five filters were used to sample various wavelengths.

The color results from assigning different hues to each monochromatic image associated with an individual filter.
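That hue-assignment step can be sketched with plain Python: each monochromatic exposure gets an RGB weight triple, and the weighted images are summed and clipped to 8-bit range. The tiny 1×2 "images" and the filter-to-hue choices below are toy values, not Hubble's actual filter mapping.

```python
def compose_color(channels, hues):
    """Combine monochromatic exposures into one RGB image by assigning
    each filter image a hue (an RGB weight triple) and summing, clipped
    to the 8-bit range, the same idea behind composite images."""
    h, w = len(channels[0]), len(channels[0][0])
    out = [[[0, 0, 0] for _ in range(w)] for _ in range(h)]
    for mono, (r, g, b) in zip(channels, hues):
        for y in range(h):
            for x in range(w):
                v = mono[y][x]
                px = out[y][x]
                px[0] = min(255, px[0] + int(v * r))
                px[1] = min(255, px[1] + int(v * g))
                px[2] = min(255, px[2] + int(v * b))
    return out

# two 1x2 monochromatic "exposures": map the first to red, the second to blue
uv = [[200, 0]]
ir = [[0, 100]]
rgb = compose_color([uv, ir], [(1, 0, 0), (0, 0, 1)])
```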

Microsoft promises AI chip for ‘on-device’ computer vision learning

No need for Azure?

Microsoft is upgrading the HoloLens with a mysterious chip to enable running power-hungry machine learning tasks for computer vision on device.

Harry Shum, EVP of Microsoft’s Artificial Intelligence and Research Group, announced yesterday at the annual CVPR computer vision conference in Honolulu, Hawaii, that the second-generation HoloLens will come with a custom AI co-processor for implementing deep neural networks.

Today, tech pros can use powerful AI-optimized chips from the likes of Intel, NVIDIA or even Google for complex machine learning applications. But a typical deep-learning task, such as segmenting out parts of a sequence of images, could consume somewhere between 10 and 50 kW if run inside a remote data centre – far too much power for a consumer mobile device, which would likely become uncomfortably hot past around, say, 3 W.
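The article's power figures make the gap easy to quantify: a data-centre workload in the tens of kilowatts is thousands of times over a headset's roughly 3 W thermal budget. A back-of-the-envelope check using the numbers quoted above:

```python
def power_ratio(datacentre_kw: float, mobile_budget_w: float = 3.0) -> float:
    """How many times over a mobile thermal budget a data-centre
    power figure is (note the kW on one side, W on the other)."""
    return datacentre_kw * 1000.0 / mobile_budget_w

# the article quotes 10-50 kW in the data centre vs ~3 W on-device:
low, high = power_ratio(10), power_ratio(50)   # three to four orders of magnitude
```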

If you can’t run the processing directly on device, your next guess might be to use your mobile device to communicate with the cloud (Amazon will be supporting NVIDIA’s Volta V100, a power-hungry, AI-optimized NVIDIA GPU, on AWS when it comes out later this year). But you’re left up the creek without a paddle if you don’t have an internet connection. And even if you do, a few seconds of communication latency could make a real-time application such as hand-tracking far too slow.
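The latency objection can also be made concrete: for a real-time task, the round trip to the cloud has to fit inside one frame period. A quick sketch, assuming a 60 fps target (an assumption for illustration, not a stated HoloLens figure):

```python
def max_tolerable_latency(fps: float) -> float:
    """Per-frame time budget in milliseconds: a result that must be
    used within one frame cannot take longer than this to arrive."""
    return 1000.0 / fps

def cloud_viable(round_trip_ms: float, fps: float = 60.0) -> bool:
    """A cloud round trip only works for real-time tracking if it
    fits inside the per-frame budget."""
    return round_trip_ms <= max_tolerable_latency(fps)

# at 60 fps the budget is ~16.7 ms; a couple of seconds of latency
# means the result arrives roughly 120 frames too late
```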

Bloomberg, citing anonymous sources, reported this May that Apple is now testing iPhones with special chips for processing AI.

The main processor on the all-in-one, first-gen HoloLens currently processes info from the headset’s on-board sensors, such as the IMU, infrared cameras, head-tracking cameras and depth sensor. The new co-processor will apparently allow deep neural networks to run outside the cloud.

A Microsoft spokesperson referred The Register to a blog post by the HoloLens director of science, Marc Pollefeys, about the announcement, noting “we have nothing more to share at this time”.

According to the extremely detail-light blog post, the “custom silicon” will be different from the reprogrammable field-programmable gate arrays used by the likes of Amazon, Microsoft (on Azure) and others which “primarily enhanced existing cloud computing fabrics”.

The chip – which “supports a wide variety of layer types” for deep neural nets that are “fully programmable” – will be able to run “continuously, off the HoloLens battery,” according to Pollefeys.

It’s not clear if the chip will support both the training of deep neural networks and prediction once the network is trained, or only prediction from a pre-trained network.

In Hawaii, Shum apparently demonstrated hand segmentation live.

“The blog doesn’t actually give enough detail to form any impression of how much difference the AI coprocessor will make,” writes Stephen Furber, a computer engineer at The University of Manchester who studies human-brain-inspired neuromorphic computing.

Researchers have already shown that you can do a “reasonable job” of implementing the computer vision task of simultaneous localization and mapping (aka SLAM) on mobile devices, he writes (for example, this study indicates that 3D mapping and tracking could be done on an embedded device with a 1W power budget).

More hardware assistance with deep networks could potentially allow for higher frame rates, better accuracy of tracking and better object recognition.

But he pointed out that Google’s 700 MHz Tensor Processing Unit, a large 8-bit integer matrix multiplier designed for neural network applications, is too power-hungry for mobile device use. It consumes about 40 W when running, according to a Google blog post.

“I would guess that this is similar, though smaller, less powerful and less power-hungry,” he added. “But who knows?”

Microsoft has not officially announced a release date for the HoloLens 2, but some reports say it might not be until 2019.