For a good laugh, check out YouTube user Ben Actis' recently uploaded video of his 85-year-old grandmother learning how to use her Google Home Mini. She seems confused by the tiny device until someone suggests that she try saying “OK, Google”, and then she really gets into it. We quite enjoyed watching it.
Tabitha UntiltheBotsTakeOver Goldstaub
P.S. Loving the news briefing? Share the love with a friend or two. They can sign up here or by clicking the button below.
This year, the Queen’s New Year’s Honours list includes 16 people from the world of tech. The list features everyone from an artificial intelligence pioneer to a cybersecurity expert to a pioneering investor, and includes CEO and co-founder of DeepMind Demis Hassabis, Blippar founder Jess Butcher, co-founder of Founder’s Forum Jonathan Goodwin, and president of techUK Jacqueline de Rojas.
Hassabis said he was “very proud” of his team at DeepMind.
“This is recognition of the immense contribution they have already made to the world of science and technology, and I’m excited about the potential for many more breakthroughs and societal benefit in the years ahead.”
Martin Welker (TechCrunch) has written a great piece on how, even with the support of AI frameworks like TensorFlow or OpenAI’s tools, building artificial intelligence still demands far deeper knowledge and understanding than mainstream web development.
Among other things, he discusses 4 potential scenarios for the future of AI:
the mainstream AI research train will slow significantly, or has already stopped
the mainstream train will roll along at its current clip
Teams of university students will develop a socialbot, a new Alexa skill that converses with users on popular societal topics. Participating teams will advance the state-of-the-art in natural language understanding, dialogue and context modeling, and human-like language generation and expression.
Teams will build their bots using the Alexa Skills Kit (ASK), which will enable them to receive continuous feedback on their inventions in real-world settings. The grand challenge for the 2018 Alexa Prize is to create a socialbot that can engage in a fun, high-quality conversation on popular societal topics for 20 minutes.
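At the heart of any such socialbot sits a dialogue manager that tracks conversational context across turns — remembering what topic the user is on even when a given utterance doesn’t mention it. As a loose, framework-free illustration (this is not the ASK API; the topic keywords and class here are invented for the sketch), the core idea looks something like:

```python
# Minimal sketch of a context-tracking dialogue manager, the kind of
# component an Alexa Prize socialbot is built around. Illustrative
# only: a real entry would sit behind the Alexa Skills Kit and use
# far richer natural language understanding. Topic keywords are toy data.

TOPIC_KEYWORDS = {
    "sports": {"game", "team", "score", "match"},
    "movies": {"film", "movie", "actor", "director"},
    "music":  {"song", "album", "band", "concert"},
}

class DialogueManager:
    def __init__(self):
        self.current_topic = None   # context carried across turns
        self.history = []           # full conversation so far

    def detect_topic(self, utterance):
        """Crude keyword-overlap topic detection."""
        words = set(utterance.lower().split())
        for topic, keywords in TOPIC_KEYWORDS.items():
            if words & keywords:
                return topic
        return None

    def respond(self, utterance):
        self.history.append(utterance)
        topic = self.detect_topic(utterance)
        if topic:
            self.current_topic = topic
            return f"Let's talk about {topic}! What interests you most?"
        if self.current_topic:
            # No new topic detected: stay on the remembered one.
            return f"Tell me more -- still thinking about {self.current_topic}?"
        return "What would you like to chat about? Sports, movies, music?"
```

The point of the sketch is the `current_topic` field: keeping a 20-minute conversation coherent means every turn can fall back on remembered context rather than treating each utterance in isolation.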
How do you vote? 50M Google images give a clue
What vehicle is most strongly associated with Republican voting districts? Extended-cab pickup trucks. For Democratic districts? Sedans. Those conclusions may not be particularly surprising. After all, market researchers and political analysts have studied such things for decades.
But what is surprising is how researchers working on an ambitious project based at Stanford University reached those conclusions: by analysing 50 million images and location data from Google Street View, the street-scene feature of the online giant’s mapping service. For the first time, helped by recent advances in artificial intelligence, researchers are able to analyse large quantities of images, pulling out data that can be sorted and mined to predict things like income, political leanings and buying habits. In the Stanford study, computers collected details about cars in the millions of images they processed, including makes and models.
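The Stanford pipeline pairs a car-detection model with voting records; as a rough sketch of just the final aggregation step (the detection stage is assumed to have already run, and every number and threshold below is invented for illustration, not taken from the study), the idea is to turn per-district vehicle counts into a simple lean predictor:

```python
# Toy sketch of the aggregation idea behind the Street View study:
# given per-district counts of detected vehicle types, score each
# district on a pickup-vs-sedan ratio. All counts and the 0.5
# threshold are invented for illustration.

def predicted_lean(vehicle_counts):
    """Guess a district's political lean from detected vehicle counts."""
    pickups = vehicle_counts.get("extended_cab_pickup", 0)
    sedans = vehicle_counts.get("sedan", 0)
    total = pickups + sedans
    if total == 0:
        return "unknown"
    return "republican" if pickups / total > 0.5 else "democratic"

# Hypothetical detection output for two districts.
districts = {
    "district_a": {"extended_cab_pickup": 310, "sedan": 120},
    "district_b": {"extended_cab_pickup": 90, "sedan": 480},
}

for name, counts in districts.items():
    print(name, predicted_lean(counts))
```

The heavy lifting in the real study is in the computer-vision stage that recognises makes and models at scale; once images are reduced to structured counts like these, the demographic mining itself is straightforward.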
Michael Sikorsky and Rita Gunther McGrath argue in a recent Wired article that in 2018, AI-enabled bots will provide a better customer experience than human-to-human chat exchange, following the explosion of messaging services that have changed the way companies interact with their customers.
Many organisations will fail to create the customer experience they desire because of a fundamental misunderstanding of human-to-machine interaction. In their belief that human agents give the best experience, many will develop messaging applications that stress person-to-person conversations. But companies will learn that using AI-powered bots, supported by human “escape hatches”, which seamlessly pass on the interaction to a human, will provide a vastly better experience than a standalone human-to-human exchange.
Over the Christmas holiday, the Japan Times reported that sources confirmed that facial recognition technology would be used to “streamline the entry of athletes, officials and journalists to the games venues.”
While spectators won’t be subject to the recognition treatment, the number of people from the groups that will be screened with facial recognition could total anywhere between 300,000 and 400,000, according to the Times report. The extra layer of security will come on top of ID cards that will also be handed out to those participants.
Mohseni says one of the best current examples of cognitive enhancement is the work of Professor Ted Berger, of the University of Southern California, who has been working on a memory prosthesis to replace the damaged parts of the hippocampus in patients who have lost their memory due to, for example, Alzheimer’s disease.
I think that artificial intelligence and autonomy raise probably the most questions, and that is largely because humans are not involved. So if you go back to Aquinas and to St. Augustine, they talk about things like “right intention.” Does the person who is doing the killing have right intention? Is he even authorised to do it? Are we doing things to protect the innocent? Are we doing things to prevent unnecessary suffering? And with autonomy and artificial intelligence, I don’t believe there’s anybody even in the business who can actually demonstrate that we can trust that those systems are doing what they should be doing.