Happy. Sad. Excited. As humans, we take our ability to detect emotions for granted.
What about AI? Can AI detect human emotions and, if so, for what purposes? Recently, we came across a particularly interesting use case: the MIT Media Lab (in collaboration with McKinsey) is investigating how machines could identify common emotional arcs in video stories and how storytellers could use this information to predict how audiences might respond.
When Shell wanted help evaluating digital business models in the car-maintenance sector, executives plugged the project into an algorithm that scanned for available Shell staffers with the right expertise—and assigned the job with a click. Shell uses machine-learning software designed by Boston-based Catalant Inc. to match workers and projects. The program tracks and evaluates workers' activity so it can refine the next round of matches.
There is evidence computers may be better suited to some managerial tasks than people are. Humans are susceptible to cognitive traps like confirmation bias. People using intuition tend to make poor decisions but rate their performance more highly, according to a 2015 University of New England analysis of psychological studies. And in an increasingly quantitative business world, managers are asked to deliver more data-driven decisions—precisely the sort at which machines excel.
In this paper, Apple presents a novel learning system architecture that leverages local differential privacy and combines it with privacy best practices. To scale their system to millions of users and a variety of use cases, they have developed novel locally differentially private algorithms – Count Mean Sketch (CMS), Hadamard Count Mean Sketch (HCMS), and Sequence Fragment Puzzle (SFP) – for both the known and unknown dictionary settings.
In their full paper, they provide analytic expressions for the tradeoffs among various factors, including privacy, utility, server computation overhead, and device bandwidth. Their utility theorems give a principled way to choose algorithmic parameters that minimise transmission cost for users without lowering accuracy. Without such expressions, it would be difficult to evaluate the impact on accuracy of, say, reducing transmission cost, without running costly iterations.
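The local-DP idea underlying these algorithms can be illustrated with the simplest such mechanism, randomised response: each device perturbs its own data before sending anything, and the server debiases the aggregate. This is a minimal sketch for a single private bit, not Apple's CMS/HCMS/SFP implementation; the epsilon value and population counts are illustrative assumptions.

```python
import math
import random

def randomize(bit: int, epsilon: float) -> int:
    """Randomised response: keep the true bit with probability
    e^eps / (1 + e^eps), otherwise flip it before reporting."""
    p_keep = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    return bit if random.random() < p_keep else 1 - bit

def estimate_mean(reports, epsilon):
    """Debias the noisy reports to recover the population mean."""
    p = math.exp(epsilon) / (1.0 + math.exp(epsilon))
    noisy_mean = sum(reports) / len(reports)
    return (noisy_mean - (1 - p)) / (2 * p - 1)

random.seed(0)
true_bits = [1] * 300 + [0] * 700      # true population: 30% ones
reports = [randomize(b, epsilon=2.0) for b in true_bits]
est = estimate_mean(reports, epsilon=2.0)  # should land near 0.3
```

The server never sees any individual's true bit, yet the debiased aggregate stays accurate; tightening epsilon increases noise per report, which is exactly the privacy/utility tradeoff the paper's expressions quantify.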
Next year, nearly 500 elections for U.S. Senate and House seats are taking place across the United States. As part of its mission to help journalists report on the stories that matter, Google's News Lab worked with ProPublica to build the Election Databot: a tool that provides data about all of those elections, from campaign spending to Google Search Trends for candidates.
They launched the Databot for the 2016 general election, but this new version shows complete information for all the House and Senate races through November 8, 2018. It also includes this year’s special election on December 12 between Republican Roy Moore and Democrat Doug Jones, who are vying for one of Alabama’s Senate seats.
In a bid to think “client first”, Louis Vuitton has launched a chatbot on Facebook Messenger that advises shoppers on products, aiming to provide meaningful recommendations during the busy holiday period.
The bot is powered by mode.ai, which has also previously partnered with the likes of Levi’s with its artificial intelligence, visual search and machine learning technology. It aims to give clients a more “sophisticated, personalised, visual and conversational online shopping experience” via Facebook, where Louis Vuitton currently has more than 20 million followers.
Apple’s secretive autonomous car project has shifted focus over the years, but this year it seems to be picking up speed. In April, the company received a permit to test self-driving cars in California, while in June, Apple CEO Tim Cook confirmed that they were working on software that could allow cars — and maybe other things — to drive themselves. During a talk on Friday, Apple’s director of artificial intelligence research, Ruslan Salakhutdinov, spoke about some of the company’s recent advances in machine learning that would be useful for such a project.
Wired reports that Salakhutdinov spoke before a group of AI experts at the end of this year’s Neural Information Processing Systems (NIPS) conference in Long Beach, California. There, he spoke about how Apple is using machine learning to analyse data from a vehicle’s cameras. He talked about techniques used in a recently published study on the advances that the company has made in using AI to detect pedestrians and cyclists using LiDAR.
Many aspects of machine learning lag far behind human performance. In particular, humans have the extraordinary ability to constantly update their memories with the most important knowledge while overwriting information that is no longer useful.
Today that looks set to change thanks to the work of Rahaf Aljundi and pals at the University of Leuven in Belgium and at Facebook AI Research. In the brain, connections between neurons that repeatedly fire together grow stronger, and these stronger connections are harder to overwrite; this is how we learn. Aljundi and co have developed a way for artificial neural networks to behave in the same way: they measure the outputs of a neural network and monitor how sensitive those outputs are to changes in each connection within the network, so that the most important connections can be protected when the network learns something new.
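The sensitivity idea can be sketched in a few lines: estimate each weight's importance as the average gradient magnitude of the network's output norm, then penalise changes to important weights when training continues. The toy linear model, dimensions, and lambda below are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear "network": f(x) = W @ x (stand-in for a real model).
W = rng.normal(size=(3, 5))
data = rng.normal(size=(100, 5))

# Importance of each weight: average magnitude of the gradient of
# the squared output norm ||f(x)||^2 with respect to that weight.
# For a linear model, d/dW ||W x||^2 = 2 * (W x) x^T.
omega = np.zeros_like(W)
for x in data:
    out = W @ x
    omega += np.abs(2.0 * np.outer(out, x))
omega /= len(data)

def penalty(W_new, W_old, omega, lam=1.0):
    """Regulariser discouraging changes to important weights:
    lam * sum_i omega_i * (w_i_new - w_i_old)^2."""
    return lam * np.sum(omega * (W_new - W_old) ** 2)
```

During later training, this penalty is added to the new task's loss, so weights the old task's outputs were sensitive to stay put while unimportant weights remain free to change.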
We highly recommend reading this piece from the Economist. They describe how venture-capital investment in AI in the first nine months of 2017 totalled $7.6B, according to PitchBook, a data provider; that compares with full-year figures of $5.4B in 2016. In the year to date there have been $21.3B in AI-related M&A deals, around 26 times more than in 2015.
The article addresses several major questions, including whether AI will hurt competition by cementing and extending the power of a handful of giants, and whether AI will always depend on vast amounts of data.
Scientists have developed an algorithm to monitor the underwater chatter of dolphins with the help of machine learning. Using autonomous underwater sensors, researchers working in the Gulf of Mexico spent two years making recordings of dolphin echolocation clicks. The result was a data set of 52 million click noises.
To sort through this vast amount of information, the scientists employed an “unsupervised” algorithm that automatically classified the noises into categories. This let them pick out distinct click patterns among the millions of recordings, which could help them identify dolphin species in the wild.
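As a toy illustration of the unsupervised-classification step (not the researchers' actual pipeline), here is a minimal k-means sketch on synthetic two-feature "click" data; the feature choices, cluster centres, and counts are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic stand-in for click features (e.g. peak frequency in kHz,
# duration): two species-like clusters. Real features would come
# from the recorded echolocation clicks.
clicks = np.vstack([
    rng.normal([30.0, 0.2], 0.5, size=(200, 2)),   # cluster A
    rng.normal([45.0, 0.4], 0.5, size=(200, 2)),   # cluster B
])

def kmeans(X, k, iters=20):
    """Minimal k-means: assign each point to its nearest centroid,
    then recompute centroids, repeating for a fixed iteration count."""
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.array([
            X[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
    return labels, centroids

labels, centroids = kmeans(clicks, k=2)
counts = np.bincount(labels, minlength=2)  # cluster sizes, roughly balanced
```

No labels are needed: the algorithm discovers the two groupings on its own, which is the same principle that let the researchers surface species-specific click types from 52 million unlabelled recordings.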