China laid out a development plan yesterday to become the world leader in AI by 2030, aiming to surpass its rivals technologically and build a domestic industry worth almost $150 billion.
A great write-up from The New York Times notes that the plan is accompanied by a multibillion-dollar national investment initiative to support “moonshot” projects, startups, and academic research in AI.
The government expects its companies and research facilities to be at the same level as leading countries like the United States by 2020. By 2025, it calls for breakthroughs in select AI disciplines that will become “a key impetus for economic transformation.”
With the assistance of its human handlers, the Human Support Robot, as Toyota calls it, wheeled into Romy Camargo’s home on a mission: to support the quadriplegic veteran and in the process pave the way for truly useful care robots.
The HSR, a wheeled robot with a single arm outfitted with a gripper to snag objects like bottles and even a vacuum to suction-grasp pieces of paper, has to adapt to the chaos of a real home. To find its way around, it uses 3-D cameras and lasers, just like a self-driving car, only necessarily far more cautious with its speed. And for the time being, it has to identify objects in Camargo’s home using QR codes.
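For a sense of how QR-based object identification works in practice, here is a minimal sketch using OpenCV’s built-in QR detector. The tag-to-object mapping is hypothetical, and Toyota has not published the HSR’s actual perception code; this just conveys the general idea.

```python
import cv2

# Illustrative sketch only: identify an object by the QR code taped to it.
# The payload-to-object mapping below is hypothetical.
TAGGED_OBJECTS = {
    "bottle_01": "water bottle on the side table",
    "remote_01": "TV remote",
}

def identify_object(frame):
    """Return a human-readable object name if a known QR tag is visible."""
    detector = cv2.QRCodeDetector()
    payload, points, _ = detector.detectAndDecode(frame)
    if payload:  # an empty string means no code was decoded
        return TAGGED_OBJECTS.get(payload, f"unknown tag: {payload}")
    return None

frame = cv2.imread("camera_frame.jpg")  # one frame from the robot's camera
print(identify_object(frame))
```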
Last year, Google introduced Open Images, a collaborative release of ~9 million images annotated with labels spanning over 6000 object categories, designed to be a useful dataset for machine learning research. The initial release featured image-level labels automatically produced by a computer vision model similar to Google Cloud Vision API, for all 9M images in the training set, and a validation set of 167K images with 1.2M human-verified image-level labels.
Yesterday, they introduced an update to Open Images that adds a total of ~2M bounding boxes to the existing dataset, along with several million additional image-level labels.
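The annotations are distributed as CSV files. A minimal sketch of loading the human-verified bounding boxes might look like the following; the file and column names reflect the released CSV schema as best I recall it, so check the dataset documentation before relying on them.

```python
import csv
from collections import defaultdict

# Sketch of parsing Open Images bounding-box annotations. The file name and
# columns (ImageID, LabelName, XMin, XMax, YMin, YMax, with coordinates
# normalized to [0, 1]) follow the released schema as best I recall it;
# verify against the dataset's documentation before use.

boxes_by_image = defaultdict(list)
with open("annotations-human-bbox.csv", newline="") as f:
    for row in csv.DictReader(f):
        boxes_by_image[row["ImageID"]].append({
            "label": row["LabelName"],    # a machine ID, e.g. '/m/01g317'
            "x_min": float(row["XMin"]),  # normalized coordinates
            "x_max": float(row["XMax"]),
            "y_min": float(row["YMin"]),
            "y_max": float(row["YMax"]),
        })

print(f"{len(boxes_by_image)} images with at least one box")
```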
This unassuming little thumb drive from chipmaker Movidius packs one of the company’s machine vision processors (the same chip used by DJI for its autonomous drones) into a plug-and-play USB stick. If manufacturers want to beef up the AI capabilities of their new product, all they need to do is plug in one of these.
From a technical point of view, the new Compute Stick is the same as the old one. At its heart is a Myriad 2 Vision Processing Unit, or VPU: a low-power processor (it consumes just a single watt) that uses twelve parallel cores to run vision algorithms like object detection and facial recognition. Movidius says it delivers more than 100 gigaflops of performance and can natively run neural networks built using the Caffe framework. For more details, you can check out the full spec sheet for the Myriad 2 here.
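To give a flavor of how a product would use the stick, here is a rough sketch of running a pre-compiled network through the first-generation NCSDK Python bindings. The call names follow the v1 ‘mvnc’ module as I recall it and should be treated as approximate rather than authoritative; consult Movidius’s SDK documentation for the real interface.

```python
import numpy
from mvnc import mvncapi as mvnc  # first-generation NCSDK Python bindings

# Rough sketch of inference on the stick. Call names follow the v1 NCSDK
# ('mvnc') API as I recall it; treat them as approximate.

devices = mvnc.EnumerateDevices()
device = mvnc.Device(devices[0])
device.OpenDevice()

# 'graph' is a network pre-compiled from a Caffe model with the SDK's tools.
with open("graph", "rb") as f:
    graph = device.AllocateGraph(f.read())

image = numpy.zeros((224, 224, 3), numpy.float16)  # stand-in input tensor
graph.LoadTensor(image, "user object")
output, _ = graph.GetResult()  # e.g. class probabilities for a classifier
print(output.argmax())

graph.DeallocateGraph()
device.CloseDevice()
```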
A new paper co-authored by DeepMind’s Demis Hassabis points out that more recent advances haven’t leaned on biology as effectively as the field’s early work did, and that a general intelligence will need more human-like characteristics, such as an intuitive understanding of the real world and more efficient ways of learning. The solution, Hassabis and his colleagues argue, is a renewed “exchange of ideas between AI and neuroscience [that] can create a ‘virtuous circle’ advancing the objectives of both fields.”
Amazon acquired Graphiq, a Santa Barbara data analysis and search engine startup, in May to help improve its Alexa virtual assistant and other services, according to four sources familiar with the deal but not authorized to discuss it.
Founded in 2009 as FindTheBest, the company sought to collect and organize details about products, places, and people to simplify online research. The technology Graphiq has developed to connect the dots between billions of pieces of information could be valuable to Amazon as it tries to make Alexa smarter. Akin to Siri on the iPhone, Alexa answers queries about the weather, sports and other topics on devices such as Amazon’s Echo speaker.
Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) believe that analysing photos of food could help us learn recipes and better understand people’s eating habits. In a new paper with the Qatar Computing Research Institute (QCRI), the team trained an AI system called Pic2Recipe to look at a photo of food and be able to predict the ingredients and suggest similar recipes.
“In computer vision, food is mostly neglected because we don’t have the large-scale datasets needed to make predictions,” says Yusuf Aytar, an MIT postdoc who co-wrote a paper about the system with MIT Professor Antonio Torralba. “But seemingly useless photos on social media can actually provide valuable insight into health habits and dietary preferences.”
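The paper trains a joint embedding of photos and recipes and retrieves the recipes closest to a query image. Here is a toy sketch of just the retrieval step, with random vectors standing in for the learned embeddings; the actual Pic2Recipe model is far more involved and is trained end-to-end on image-recipe pairs.

```python
import numpy as np

# Toy sketch of the retrieval step behind a system like Pic2Recipe: map the
# query photo and all recipes into a shared embedding space, then return the
# nearest recipes by cosine similarity. The embeddings here are random
# stand-ins; the real ones come from jointly trained neural networks.

def cosine_retrieve(image_vec, recipe_matrix, recipe_names, k=3):
    """Return the k recipe names whose embeddings best match the image."""
    img = image_vec / np.linalg.norm(image_vec)
    recs = recipe_matrix / np.linalg.norm(recipe_matrix, axis=1, keepdims=True)
    scores = recs @ img
    best = np.argsort(scores)[::-1][:k]
    return [(recipe_names[i], float(scores[i])) for i in best]

rng = np.random.default_rng(0)
recipes = ["margherita pizza", "beef stew", "banana bread"]
recipe_embs = rng.normal(size=(3, 128))  # stand-ins for learned embeddings
photo_emb = recipe_embs[2] + 0.1 * rng.normal(size=128)  # photo of banana bread
print(cosine_retrieve(photo_emb, recipe_embs, recipes))
```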
After Tesla’s former Autopilot Program Director, Sterling Anderson, launched his own self-driving startup earlier this year, we now learn of a second autonomous-driving startup launched by former Tesla Autopilot engineers.
Andrew Kouri and Erik Reed, two former engineers on Tesla’s Autopilot team, along with George Tall, a computer vision engineer from iRobot, have taken their startup, lvl5, out of stealth mode today. The company has developed a computer vision algorithm that translates dashcam footage into HD maps that, according to the company, are “accurate down to a 10cm threshold.”
In two new papers, DeepMind describes a new family of approaches for imagination-based planning. They also introduce architectures which provide new ways for agents to learn and construct plans to
maximise the efficiency of a task. These architectures are efficient, robust to complex and imperfect models, and can adopt flexible strategies for exploiting their imagination.
The agents they introduce benefit from an ‘imagination encoder’: a neural network that learns to extract any information useful for the agent’s future decisions while ignoring what is not relevant. They tested the proposed architectures on multiple tasks, including the puzzle game Sokoban and a spaceship navigation game.
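To convey the flavor of imagination-based planning, here is a toy sketch in which an agent rolls an environment model forward for each candidate action and acts on a summary of the imagined trajectories. DeepMind’s actual architectures learn the model, the encoder, and the policy end-to-end; everything below is a hand-written stand-in.

```python
import numpy as np

# Toy sketch in the spirit of imagination-based planning: simulate each
# candidate action with a model, "encode" each imagined trajectory as a
# single score, and pick the best-scoring action. The model and encoder
# here are hand-written stand-ins, not learned networks.

def model_step(state, action):
    """Stand-in learned model: predicts the next state and reward."""
    next_state = 0.9 * state + action
    reward = -abs(next_state)  # goal: drive the state toward zero
    return next_state, reward

def imagine(state, action, horizon=5):
    """Roll the model forward, then summarize the imagined trajectory."""
    rewards = []
    for _ in range(horizon):
        state, r = model_step(state, action)
        rewards.append(r)
    # A real imagination encoder is a neural net; here we just sum rewards.
    return sum(rewards)

def act(state, actions=(-1.0, 0.0, 1.0)):
    """Pick the action whose imagined rollout scores best."""
    return max(actions, key=lambda a: imagine(state, a))

state = 3.0
for t in range(4):
    a = act(state)
    state, _ = model_step(state, a)
    print(f"step {t}: chose {a:+.1f}, state now {state:+.2f}")
```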