Thanks to everyone who voted in last week’s poll. The results are in: DeepMind won as the biggest contributor in the AI space, ahead of IBM and your own suggestions.
Today we discuss the startup Numerai and its use of AI, encryption, crowdsourcing, and bitcoin to reinvent the financial industry; IBM’s research
on accountability and ethics as a path to trusting AI; and the world’s first demonstration of spintronics-based AI.
Richard Craib is a 29-year-old South African who runs a hedge fund in San Francisco. Or rather, he doesn’t run it. He leaves that to an AI system built by several thousand data scientists whose names he doesn’t know. Craib and his team have built technology that masks the fund’s trading data before sharing it with a vast community of anonymous data scientists. “We give away all our data,” says Craib. “But we convert it into this abstract form where people can build machine learning models for the data without really knowing what…”
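Numerai has not published the details of its masking scheme, but the general idea of sharing data in an “abstract form” can be illustrated with a hypothetical sketch: strip the column names and rank-normalise each feature, so modellers see structure without meaning. Everything below (function name, normalisation choice) is an assumption for illustration only.

```python
def mask_features(rows):
    """Hypothetical illustration of data masking: rank-normalise each
    feature to [0, 1] and replace column names with anonymous ids, so
    models can be trained without revealing what the data represents."""
    n_cols = len(rows[0])
    masked = [[0.0] * n_cols for _ in rows]
    for j in range(n_cols):
        # Order row indices by this column's value, then store the rank.
        order = sorted(range(len(rows)), key=lambda i: rows[i][j])
        for rank, i in enumerate(order):
            masked[i][j] = rank / (len(rows) - 1)
    names = [f"feature_{k}" for k in range(n_cols)]
    return names, masked

# Original columns might be (price, volume); after masking, only
# anonymous features in [0, 1] remain.
names, masked = mask_features([[101.2, 5e6], [99.8, 2e6], [103.5, 9e6]])
```

A real scheme would need to preserve enough statistical structure for models to be useful while resisting reverse-engineering; this sketch only shows the anonymisation step.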
With 14 million trades a day on Nasdaq and innumerable chats and emails, it’s impossible to check everything. That’s why Nasdaq has decided to use AI to help it detect market abuse. Bank clients of Nasdaq and its artificial intelligence partner, Digital Reasoning, are also starting to use the technology — UBS is one.
After almost two years working with this technology in the healthcare space, the idea that a pure AI chatbot will never be able to replace a physician or benefits expert is quickly spreading. The solution might be the “human-augmented chatbot”, a hybrid approach combining the best parts of both human and machine.
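One common way to build such a hybrid (a sketch of the general pattern, not any specific vendor’s product) is confidence-based routing: the bot answers only when its model is confident, and escalates everything else to a human expert. The classifier, threshold, and names here are all assumed for illustration.

```python
def route_message(message, classify):
    """Human-augmented chatbot router: answer automatically only when
    the model is confident, otherwise hand off to a human.
    `classify` is an assumed model returning (answer, confidence)."""
    CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off, tuned in practice
    answer, confidence = classify(message)
    if confidence >= CONFIDENCE_THRESHOLD:
        return ("bot", answer)
    return ("human", message)  # queued for a human expert

# A toy classifier that is only confident about greetings.
def toy_classify(msg):
    return ("Hello!", 0.95) if msg == "hi" else ("unsure", 0.3)

greeting = route_message("hi", toy_classify)       # handled by the bot
question = route_message("claims?", toy_classify)  # escalated to a human
```

The human’s answers can then be fed back as training data, gradually raising the share of messages the bot handles on its own.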
A new study published in the journal Suicide and Life-Threatening Behavior showed that machine learning is up to 93% accurate in identifying a suicidal person. The researchers analysed both verbal and non-verbal language from the data, then sent the information through a machine-learning algorithm that was able to determine with remarkable accuracy whether the person was suicidal, mentally ill but not suicidal, or neither.
IBM discusses algorithmic responsibility and how to start trusting AI. “Trust is built upon accountability. As such, the algorithms that underpin AI systems need to be as transparent, or at least interpretable, as possible. In other words, they need to be able to explain their behaviour in terms that humans can understand — from how they interpreted their input to why they recommended a particular output.”
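What such an explanation can look like depends on the model class. For a simple linear scorer (a hypothetical example, not IBM’s system), the path from input to output can be reported directly as per-feature contributions:

```python
def explain_prediction(weights, features):
    """For a linear model, break a score into per-feature contributions
    so the output can be traced back to its inputs. The feature names
    and weights here are illustrative, not from any real system."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    # Rank reasons by the size of their influence on the score.
    reasons = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return score, reasons

weights = {"income": 0.5, "debt": -0.8, "age": 0.1}
score, reasons = explain_prediction(weights,
                                    {"income": 4, "debt": 3, "age": 2})
# The top entry in `reasons` names the feature that drove the decision.
```

For opaque models, post-hoc techniques approximate this kind of attribution instead, which is exactly why the transparency-versus-interpretability distinction in the quote matters.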
Researchers at Tohoku University have developed, for the first time, an artificial neural network built from their recently developed spintronic devices, which comprise micro-scale magnetic material. Unlike conventional magnetic devices, the spintronic device can memorise arbitrary values between 0 and 1 in an analogue manner, and can thus perform the learning function served by synapses in the brain.
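The property being exploited is simple to state in software terms: each device stores a continuous weight in [0, 1] that can be nudged up or down during learning. The toy class below is a software analogue of that behaviour for intuition only; it does not model the physics of the actual device.

```python
class AnalogueSynapse:
    """Toy software analogue of a device storing a continuous weight
    in [0, 1] (illustrative only, not a model of the real device)."""
    def __init__(self, weight=0.5):
        self.weight = weight

    def adjust(self, delta):
        # Clamp so the stored value stays within the analogue range,
        # like a device saturating at its physical limits.
        self.weight = min(1.0, max(0.0, self.weight + delta))
        return self.weight

s = AnalogueSynapse(0.25)
s.adjust(0.5)   # weight moves to 0.75
s.adjust(0.5)   # saturates at 1.0
```

A binary magnetic device, by contrast, could only hold 0 or 1, which is why the analogue behaviour is what makes synapse-like learning possible.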
How can executives get up to speed? What degree of knowledge do they need, and how should they learn about AI, so they can find opportunities in their own companies? John Straw, currently a senior advisor to McKinsey & Co. who works with executive teams on finding new applications for AI, touches on what he sees as the biggest oversights preventing companies from finding the applications that could keep them level with competitors and the big technology players.
The one way in which we’re radically different from machines is in our ability to step into the unknown, to do things that have never been done before with paint, form, sound and the written word. Most of the rewards to creative professionals today accrue to that ability, not to skill or extensive knowledge of predecessors’ work. It is possible to teach a machine Van Gogh’s painting technique, but only because it already exists. An algorithm can write chorales like Bach because it can “study” Bach. Even a derivative work of art, though, needs to be derivative in groundbreaking ways to be appreciated.
Researchers devise test to determine whether machine learning algorithms are introducing gender or racial biases into decision-making. The test is aimed at machine learning programs, which learn to make predictions about the future by crunching through vast quantities of existing data. Since the decision-making criteria are essentially learnt by the computer, rather than being pre-programmed by humans, the exact logic behind decisions is often opaque, even to the scientists who wrote the software.
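The study’s exact test statistic isn’t given here; as a simple illustration of the general idea, one widely used check (demographic parity, a stand-in for the researchers’ test) compares an algorithm’s positive-decision rates across groups:

```python
def demographic_parity_gap(decisions):
    """Compare positive-decision rates between groups.
    `decisions` maps group -> list of 0/1 outcomes. A large gap
    suggests the model treats the groups differently.
    (Illustrative check, not the test from the study.)"""
    rates = {g: sum(d) / len(d) for g, d in decisions.items()}
    return max(rates.values()) - min(rates.values()), rates

gap, rates = demographic_parity_gap({
    "group_a": [1, 1, 1, 0],   # 75% of decisions positive
    "group_b": [1, 0, 0, 0],   # 25% of decisions positive
})
# A gap of 0.5 is a red flag worth investigating further.
```

The appeal of such black-box tests is precisely that they need no access to the model’s internal logic, which, as the article notes, is often opaque even to the software’s authors.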
UIEvolution, a Kirkland, Wash.-based company, has just announced that it is acquiring Surround.io, a Seattle-based machine learning startup, in order to let connected cars share more data with customers and manufacturers. The company has also changed its name to Xevo as part of the transaction. According to Xevo, Surround.io “has been developing an industrial-scale machine learning infrastructure with a specific focus on applying computer vision to automotive scenarios and enabling vast numbers of cars to share information and act cooperatively.”