11,983% Overfunded Smart Speaker Shows Google And Amazon Missed A Trick
Voice assistants are still in their infancy, and most widespread use cases remain fairly basic. While developers and companies are mapping out user journeys and testing which jobs voice can and should do (many favouring the former over the latter), few are thinking about helping people learn new languages, and those that are tend to use traditional approaches. Lily, a crowdfunded smart speaker, might be about to change all that.
The first thing to know about Lily is that it has been overfunded not by one or two times but by 11,983%. Impressive. Secondly, Lily looks like a Pixar character. Thirdly, you ask your questions in English and it answers in Chinese (for now; French and Spanish are set for 2019, with Japanese to follow). Using Chinese language teachers and machine learning, Lily (still a prototype) aims to add other languages and make money through premium content. There's no subscription needed for the app, and the basic language packs (HSK 1-6) are free. The iOS and Android app keeps track of progress, and generally it's a neat little bundle. The best part? There are exams, and thanks to its pronunciation correction Lily has a leg up on its rivals. Future features include paid packages with vocabulary specialized for manufacturing in China, licensed children's stories and other content.
Sounds smart enough, so why aren't Amazon and Google pushing language learning? There seems to be a market for it: while Lily's makers reside in San Francisco, the majority of comments on the IndieGoGo page are from non-US backers.
All in all, this product highlights an opportunity Google and Amazon seem to have ignored, choosing instead to rely on third-party developers to do the heavy lifting. The demand seems to be there, and while both Alexa and Google Assistant can answer translation questions ("Hey Google, how do you say thank you in Chinese?"), neither can teach you a new language, which, especially in Google's case, seems like a missed opportunity and a PR goldmine.
Today's voice assistants still rely on a command-based structure (activation sentences like "Alexa, open the Forbes homepage"), whereas Lily has neither a screen nor any purpose other than learning. As voice tech moves into a more conversational phase (akin to how Lily will work), possibilities open up. Could it be a new way to train a workforce? Will Lily become the interpreter of the future? As voice technology improves, so do the opportunities that come with it. Hands-free language tools are a fertile area for individuals and businesses to explore.