To that end, we are excited to launch our new CognitionX research report, The Impact of AI on HR, by our Head of AI Research, Ian Bailie, featuring a foreword from Josh Bersin (founder of Bersin by Deloitte). Click the link to download it for FREE and get a free month’s trial of our Pro research service.
The European Commission has announced the names of 52 experts from across industry, academia and civil society who it has appointed to a new High-Level Group on AI, which will feed into its strategy and policymaking around artificial intelligence. In April the EU’s executive body outlined its approach to AI technology, setting out measures intended to increase public and private investment; prepare for socio-economic changes; and ensure an appropriate ethical and legal framework.
The group announced today comprises 30 men and 22 women, and includes industry representatives from AXA, Bayer, Bosch, BMW, Element AI, Google, IBM, Nokia Bell Labs, Orange, Santander, SAP, Sigfox, STMicroelectronics, Telenor and Zalando. Google is represented by Jakob Uszkoreit, an AI researcher on the Google Brain team.
Robots are designed for speed and precision — but their rigidity has often limited how they’re used. In this illuminating Ted Talk, biomedical engineer Giada Gerboni shares the latest developments in “soft robotics,” an emerging field that aims to create nimble machines that imitate nature, like a robotic octopus. Learn more about how these flexible structures could play a critical role in surgery, medicine, and our daily lives.
Artificial intelligence can do some remarkable things, from driving cars to improving breast cancer diagnosis. And now it can make memes. Abel Peirson and Meltem Tolunay, two researchers at Stanford University, recently posted a paper to the arXiv preprint server detailing a new machine learning model they created that’s capable of creating pretty convincing memes.
Peirson and Tolunay trained their machine learning algorithm with a dataset of more than 400 types of memes with multiple captions that they pulled from memegenerator.com using a Python script. To simplify the task, they focused only on “advice animal” style memes, the kind where an image of a specific character like “socially awkward penguin” is overlaid with a caption of text that represents the traits of that character, usually with a humorous observation. “This allows for relatively simple collection of datasets,” the researchers wrote. “In this paper, we specifically refer to meme generation as the task of generating a humorous caption in a manner that is relevant to the initially provided image, which can be a meme template or otherwise.”
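The dataset-construction step they describe — collecting advice-animal templates with their captions and splitting each caption into the conventional top/bottom text — can be sketched roughly as follows. This is a hypothetical illustration: the helper, the field names, and the example captions are ours, not from the paper.

```python
# Minimal sketch of turning scraped (template, caption) pairs into
# structured training examples for an advice-animal caption model.
# All data below is invented for illustration.

def build_examples(raw_pairs):
    """Split each scraped caption into top text and bottom text,
    keeping the meme template it belongs to."""
    examples = []
    for template, caption in raw_pairs:
        # Advice-animal captions conventionally read "top; bottom".
        top, _, bottom = caption.partition(";")
        examples.append({
            "template": template,
            "top": top.strip(),
            "bottom": bottom.strip(),
        })
    return examples

raw_pairs = [
    ("socially-awkward-penguin",
     "waves at someone; they were waving at person behind you"),
    ("success-kid",
     "ate spaghetti in a white shirt; no stains"),
]

examples = build_examples(raw_pairs)
```

A caption model would then be trained to generate the top/bottom pair conditioned on the template, which is what makes the “advice animal” restriction such a convenient simplification.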
Ken Weiner (GumGum) writes about how, as with other commoditised technologies, the off-the-shelf, one-size-fits-all approach to AI doesn’t work for every company or business goal. Here are the top questions he says you should be asking:
Waymo’s autonomous van program in Phoenix has been operating quietly for around a year now, with 400 riders using the service every day and tens of thousands of applications from people hoping to join the program. How has the last year gone, and what has Waymo learned during that time?
As it happens, pretty well. Waymo’s early rider program has participants ranging in age from 9 to 69, and they go all over the greater Phoenix area, whether to school, the store, the movies or a bar. One thing Waymo has been improving with these testers’ feedback is its ability to pick riders up and drop them off in the most convenient spot, rather than relying on address data alone, ensuring minimal walks in Phoenix’s notorious heat. Have a look at their Medium post for more.
Yesterday, DeepMind unveiled a new type of computer vision algorithm that can generate 3D models of a scene from 2D snapshots: the Generative Query Network (GQN).
The GQN, details of which were published in Science, can “imagine” and render scenes from any angle without any human supervision or labelling. Given just a handful of pictures of a scene — a wallpapered room with a colored sphere on the floor, for example — the algorithm can render opposite, unseen sides of objects and generate a 3D view from multiple vantage points, even accounting for things like lighting and shadows. Check out this VentureBeat article for more.
A new dataset from Stanford aims to teach AI systems to understand how to answer questions more effectively, by knowing when there isn’t enough information to provide an accurate answer. The dataset is called SQuAD 2.0, short for the Stanford Question Answering Dataset. It’s an update of an earlier, wildly popular dataset used by companies like Microsoft, Google, and Alibaba to show off how accurate their language-understanding AI systems are at answering questions.
The new dataset includes nearly 50,000 questions that are unanswerable but purposely crafted to relate loosely to the subject matter of the reference text. That’s not to say AI systems have proven very good at the dataset’s trick questions yet. The Stanford researchers’ first crack at training a question-answering algorithm on the dataset scored 66%, 20 points lower than on the previous iteration of SQuAD, because the AI keeps trying to answer the unanswerable questions. Now that the dataset is published, other researchers can train their algorithms on it and work out better and better ways to make their AI systems answer — or decline to answer — questions. (For comparison, algorithms trained on the original version of SQuAD scored only 51% when it was first released.)
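The abstention behaviour that trips models up can be illustrated with a toy exact-match scorer in the spirit of SQuAD 2.0. This is our own simplified sketch — the question IDs, answers, and scoring function are invented, and the real SQuAD 2.0 evaluation also reports an F1 metric alongside exact match.

```python
# Toy exact-match scorer: a prediction counts only if it matches the
# gold answer exactly, and for unanswerable questions the correct
# behaviour is to predict "" (i.e. abstain). Data is invented.

def exact_match_score(predictions, gold):
    correct = sum(
        1 for qid, answer in gold.items()
        if predictions.get(qid, "") == answer
    )
    return correct / len(gold)

gold = {
    "q1": "1969",   # answerable: the answer appears in the passage
    "q2": "",       # unanswerable trick question: must abstain
}

always_answers = {"q1": "1969", "q2": "1970"}   # guesses instead of abstaining
knows_limits = {"q1": "1969", "q2": ""}         # abstains when it should

exact_match_score(always_answers, gold)  # → 0.5, penalised on the trick question
exact_match_score(knows_limits, gold)    # → 1.0
```

A system that always produces some plausible-looking span, as SQuAD 1.x encouraged, gets punished on exactly the questions the new dataset was built to expose.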
During any conversation about AI these days, it’s not long before someone raises the question of ethics. Answers to that question, however, are less easy to find. It’s tough territory to navigate. But hard as it might be, it’s also one of the most pressing issues facing industry and society today.
At CogX 2018, Accenture dedicated its pavilion to the responsibility theme, showcasing a broad range of its AI innovations under the ‘Intelligence: Applied Responsibly’ banner.