Business impact of AI. AI + beauty. Apple’s new machine learning research site. http://cognitionx.com/news-briefing/.
What are the implications of artificial intelligence? We’ve seen the reports and the swathes of articles, and now the House of Lords wants to hear from you.
The Select Committee on Artificial Intelligence has been appointed to consider the economic, ethical, and social implications of advances in artificial intelligence, and is now calling for evidence from those interested in these issues.
Led by Lord Clement-Jones, the inquiry will seek “pragmatic solutions to the issues presented” and is due to report by 31 March 2018.
The submission deadline is 6 September 2017 and you can submit your evidence here.
Tabitha UntiltheBotsTakeOver Goldstaub
We’d also love to hear what you are up to and see how we can be of assistance. Check out our subscription service.
Erik Brynjolfsson (MIT) and Andrew McAfee (MIT) discuss in this great feature-length piece how, in the sphere of business, AI is poised to have a transformational impact, on the scale of earlier general-purpose technologies. Although it is already in use in thousands of companies around the world, most big opportunities have not yet been tapped. The article cuts through the noise to describe the real potential of AI, its practical implications, and the barriers to its adoption.
They argue that only the most nimble and adaptable companies and executives will thrive. Organisations that can rapidly sense and respond to opportunities will seize the advantage in the AI-enabled landscape.
So the successful strategy is to be willing to experiment and learn quickly. “If managers aren’t ramping up experiments in the area of machine learning, they aren’t doing their job. Over the next decade, AI won’t replace managers, but managers who use AI will replace those who don’t.”
What makes something beautiful? It’s an entirely subjective question. But AI now ‘thinks’ it has the answer.
Using deep learning techniques, data scientists from Warwick Business School trained a computer system on 200,000 images from the website Scenic-or-Not, where members of the public vote on how beautiful a British scene is.
The project was linked to earlier studies by the same team from Warwick’s Data Science Lab that showed a direct correlation between residing in a scenic location and good health. If the AI could recognise beauty like a human, city planning for wellbeing could potentially be automated.
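To make the set-up concrete, here is a toy sketch of the kind of supervised training described above. This is not the Warwick team's actual pipeline: the real project used a deep network on raw images, whereas this stand-in reduces each scene to a made-up three-number feature vector and fits a linear “scenicness” scorer to synthetic crowd ratings. The shape of the training loop — predict a rating, compare it with the public's vote, nudge the model — is the same idea.

```python
import random

def make_dataset(n=500, seed=1):
    # Synthetic stand-in for (image features, average Scenic-or-Not rating).
    # The feature names and weights are invented for illustration only.
    random.seed(seed)
    true_w = [2.0, -1.0, 0.5]   # hypothetical effect of [greenery, buildings, water]
    data = []
    for _ in range(n):
        x = [random.random() for _ in range(3)]
        rating = sum(w * xi for w, xi in zip(true_w, x)) + random.gauss(0, 0.1)
        data.append((x, rating))
    return data

def train(data, lr=0.1, epochs=200):
    # Plain stochastic gradient descent on squared error.
    w = [0.0, 0.0, 0.0]
    for _ in range(epochs):
        for x, y in data:
            pred = sum(wi * xi for wi, xi in zip(w, x))
            err = pred - y
            w = [wi - lr * err * xi for wi, xi in zip(w, x)]
    return w

weights = train(make_dataset())
```

In the real study the feature extraction is learned too — that is what the deep network adds — but the supervision signal (crowd-sourced beauty votes) plays exactly this role.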
Apple just launched a blog focused on sharing the company’s machine learning research. The Apple Machine Learning Journal is a bit empty right now, as the company has so far shared only one post, about turning synthetic images into realistic ones in order to train neural networks.
This move is interesting as Apple doesn’t usually talk about its research projects. Romain Dillet from TechCrunch says that it is also interesting for two further reasons: 1) the first paper posted to the site had already appeared on arXiv, and 2) he thinks the layout of the site makes it look like Apple plans to use this platform to find promising engineers in the field.
San Diego startup Brain Corp builds autonomous floor cleaning robots that can safely navigate store aisles without knocking anything over. Yesterday, it announced that it’s one step closer to building an army of robot mops thanks to an impressive $114 million later-stage funding round.
Founded in 2009, Brain Corp does not consider itself to be a robotics company but rather an “autonomy as a service” company. The startup’s main product is BrainOS, an operating system that combines computer vision and AI with off-the-shelf hardware and sensors to give existing machines the ability to navigate their surroundings and make decisions. According to Brain Corp, BrainOS powers robots the same way Android powers smartphones.
Neuroevolution is making a comeback. Prominent AI labs and researchers are experimenting with it, a string of new successes have bolstered enthusiasm, and new opportunities for impact in deep learning are emerging. Maybe you haven’t heard of neuroevolution in the midst of all the excitement over deep learning, but it’s been lurking just below the surface, the subject of study for a small, enthusiastic research community for decades. And it’s starting to gain more attention as people recognize its potential.
Put simply, neuroevolution is a subfield within AI and machine learning that consists of trying to trigger an evolutionary process similar to the one that produced our brains, except inside a computer. In other words, neuroevolution seeks to develop the means of evolving neural networks through evolutionary algorithms. This is a great article, written in plain English and very accessible.
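The core idea can be shown in a few lines. This is a deliberately minimal toy (not any lab's actual method): instead of training by backpropagation, we evolve the weights of a tiny fixed-topology network with a mutate-and-select loop, here on the classic XOR task. All the names and hyperparameters below are illustrative choices.

```python
import random
import math

INPUTS = [(0, 0), (0, 1), (1, 0), (1, 1)]
TARGETS = [0, 1, 1, 0]

def forward(weights, x):
    # 2-2-1 network; weights is a flat list of 9 numbers:
    # 4 hidden weights + 2 hidden biases + 2 output weights + 1 output bias.
    w = weights
    h0 = math.tanh(w[0] * x[0] + w[1] * x[1] + w[2])
    h1 = math.tanh(w[3] * x[0] + w[4] * x[1] + w[5])
    return math.tanh(w[6] * h0 + w[7] * h1 + w[8])

def fitness(weights):
    # Negative squared error over the four XOR cases: higher is better.
    return -sum((forward(weights, x) - t) ** 2 for x, t in zip(INPUTS, TARGETS))

def mutate(weights, sigma=0.3):
    # The only "learning" operator: small Gaussian perturbations.
    return [w + random.gauss(0, sigma) for w in weights]

def evolve(pop_size=50, generations=200, seed=0):
    random.seed(seed)
    population = [[random.uniform(-1, 1) for _ in range(9)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 5]          # truncation selection
        population = parents + [mutate(random.choice(parents))
                                for _ in range(pop_size - len(parents))]
    return max(population, key=fitness)

best = evolve()
```

Real neuroevolution systems go much further — evolving topologies as well as weights, and scaling to large networks — but the evaluate/select/mutate loop above is the evolutionary process the article describes.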
A panel of the US House Energy and Commerce Committee voted unanimously Wednesday to advance the first legislation on driverless cars. Advocates for the blind have let lawmakers know they have a special set of concerns: They want accessibility incorporated into car design and states to steer clear of laws that would prohibit the blind from one day sitting in the driver’s seat. You can check out the text of the Testing and Deployment Act of 2017 here.
In other news, the director of the California Department of Transportation said the state’s highway management system has already begun modifying roadways to accommodate the way self-driving cars navigate. He said, “They can’t follow the Botts’ Dots, so we’re actually changing our delineation standards to go away from the Botts’ Dots which we’ve been using for decades because AVs have a difficult time following those.”
Robots should be fitted with an “ethical black box” to keep track of their decisions and enable them to explain their actions when accidents happen, researchers say.
Scientists will make the case for the devices at a conference at the University of Surrey on Thursday where experts will discuss progress towards autonomous robots that can operate without human control. The proposal comes days after a K5 security robot named Steve fell down steps and plunged into a fountain while on patrol at a riverside complex in Georgetown, Washington DC. No one was hurt in the incident.
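As a rough illustration of what such a device might record — an invented design sketch, not the researchers' actual proposal — an “ethical black box” could be little more than an append-only log pairing each decision with the sensor readings and reasoning behind it, so the sequence of events can be reconstructed after an incident:

```python
import json
import time

class EthicalBlackBox:
    """Illustrative append-only decision log; all field names are hypothetical."""

    def __init__(self):
        self._records = []          # append-only; entries are never mutated

    def record(self, sensors, decision, reason):
        self._records.append({
            "t": time.time(),       # when the decision was taken
            "sensors": sensors,     # snapshot of relevant sensor readings
            "decision": decision,   # the action the robot chose
            "reason": reason,       # rule or model output behind the choice
        })

    def explain(self, last_n=5):
        # Return the most recent decisions as JSON for investigators.
        return json.dumps(self._records[-last_n:], indent=2)

box = EthicalBlackBox()
box.record({"edge_detected": True}, "continue_forward", "edge classifier score 0.1")
box.record({"edge_detected": True}, "continue_forward", "edge classifier score 0.2")
```

In the fountain incident, a log like this would show investigators what the robot sensed and why it kept moving — which is precisely the accountability the researchers are arguing for.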
Earlier this week, at the campus of MIT, TechCrunch had the chance to sit down with famed roboticist Rodney Brooks, the founding director of MIT’s Computer Science and Artificial Intelligence Lab, and the co-founder of both iRobot and Rethink Robotics.
Brooks had a lot to say about AI, including his overarching concern that many people — including renowned AI alarmist Elon Musk — get it very wrong, in his view. He also warned that despite investors’ fascination with robotics right now, many VCs may underestimate how long these companies will take to build — a potential problem for founders down the road.
As a part of its partnership with ModiFace, the Toronto-based creator of AR technology for the beauty industry, Estée Lauder’s Facebook Messenger chat bot gives its users the ability to virtually try on Estée Lauder’s Pure Colour lipsticks. Try it out here. Sephora has also recently put out an app which you can use to virtually try on makeup.
“One of the key pillars of our partnership with ModiFace is the application of AR and AI across all platforms where customers interact with our brand,” said Stephane de La Faverie, global brand president of Estée Lauder, in a statement. “Messaging applications such as Facebook Messenger are the perfect platform for consumers to search, explore, try-on, and ideally purchase Estée Lauder products.”