DeepMind’s at it again. Sesame Street + IBM Watson. Turn your smartphone into a medical device. https://cognitionx.com/news-briefing/
Lyft claims that its latest partnership with nuTonomy will put self-driving taxis on the roads in Boston by the end of the year. The approach differs from Uber's, which is developing its self-driving tech in-house.
Modern deep learning methods have made tremendous progress in solving problems from unstructured data, but they tend to do so without explicitly considering the relations between objects.
In two new papers (available here and here), DeepMind explored the ability of deep neural networks to perform complicated relational reasoning on unstructured data. In the first paper – A simple neural network module for relational reasoning – they describe a Relation Network (RN) and show that it can perform at superhuman levels on a challenging task. In the second paper – Visual Interaction Networks – they describe a general-purpose model that can predict the future state of a physical object based purely on visual observations.
+Fascinated by DeepMind? Come to CogX to meet DeepMind’s Dr Dominic King and hear his thoughts.
Sesame Workshop and IBM Watson announced yesterday that they are creating a vocabulary app and the Sesame Workshop Intelligent Play and Learning Platform. The new platform will be used by Sesame Workshop and IBM to create a series of cognitive apps, games, and toys to help kids learn. This is the first public action announced by the partnership, which was formed more than a year ago.
The platform will run on IBM Cloud and the partners invite the “ecosystem of software developers, researchers, educational toy companies, and educators to tap IBM Watson cognitive capabilities and Sesame Workshop’s early childhood expertise to build engaging experiences to help advance children’s education and learning,” according to a statement from IBM Watson.
+IBM will be at CogX, but Sesame Street couldn’t make it…sos.
Just a few months ago we saw pix2pix with cats. Draw something in a little box and an algorithm will try to interpret it as a cat, filling in the colors and textures according to a machine learning model trained on thousands of cat images. This was the first iteration of the pix2pix project. Now, we have human faces, and things have become fully uncanny.
The pix2pix project demonstrates something pretty profound about machine learning circa 2017: it’s awful at generating new images, or at least meaningful new images. Machine learning is better at classifying existing images, but even then, things drop off dramatically as we move beyond a handful of really robust object-recognition models. Pix2pix is based on a promising technique called generative adversarial networks (GANs). A GAN trains two models against each other: a generator produces images, while a discriminator predicts whether each image is real or generated; the generator’s loss function rewards it for producing images the discriminator mistakes for real ones.
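As a rough illustration of those two competing objectives (plain Python, not code from the pix2pix project), here is the standard binary cross-entropy discriminator loss alongside the commonly used "non-saturating" generator loss, both computed from the discriminator's raw scores (logits):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def discriminator_loss(real_logits, fake_logits):
    """Binary cross-entropy: D is trained to score real images as 1, fakes as 0."""
    real = -sum(math.log(sigmoid(z) + 1e-12) for z in real_logits) / len(real_logits)
    fake = -sum(math.log(1.0 - sigmoid(z) + 1e-12) for z in fake_logits) / len(fake_logits)
    return real + fake

def generator_loss(fake_logits):
    """Non-saturating G loss: G is rewarded when D scores its fakes as real."""
    return -sum(math.log(sigmoid(z) + 1e-12) for z in fake_logits) / len(fake_logits)
```

Training alternates between the two: one gradient step lowering `discriminator_loss`, one lowering `generator_loss`, so each network's improvement raises the bar for the other.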
Which optimiser should you use to train your neural network? TensorFlow gives you lots of options, and there are way too many acronyms. In this video, Siraj Raval goes over how the most popular ones work and, in the process, shows how gradient descent has evolved over the years.
Code from this video (with coding challenge) can be found here.
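The evolution the video traces can be sketched as a progression of update rules. Here is an illustrative plain-Python comparison (not Siraj's code; the learning rates are arbitrary) of vanilla SGD, SGD with momentum, and Adam on a toy one-dimensional quadratic:

```python
import math

def sgd(w, g, lr):
    # Vanilla gradient descent: step straight down the gradient
    return w - lr * g

def momentum(w, g, v, lr, beta=0.9):
    # Accumulate a velocity from past gradients to damp oscillations
    v = beta * v + g
    return w - lr * v, v

def adam(w, g, m, s, t, lr, b1=0.9, b2=0.999, eps=1e-8):
    m = b1 * m + (1 - b1) * g        # first moment (running mean of gradients)
    s = b2 * s + (1 - b2) * g * g    # second moment (running mean of squares)
    m_hat = m / (1 - b1 ** t)        # bias correction for zero-initialised moments
    s_hat = s / (1 - b2 ** t)
    return w - lr * m_hat / (math.sqrt(s_hat) + eps), m, s

# Minimise f(w) = w**2 (gradient 2*w) with each optimiser, starting at w = 5.0
grad = lambda w: 2.0 * w
w_sgd = 5.0
w_mom, v = 5.0, 0.0
w_adam, m, s = 5.0, 0.0, 0.0
for t in range(1, 101):
    w_sgd = sgd(w_sgd, grad(w_sgd), lr=0.1)
    w_mom, v = momentum(w_mom, grad(w_mom), v, lr=0.02)
    w_adam, m, s = adam(w_adam, grad(w_adam), m, s, t, lr=0.1)
```

Each rule refines the last: momentum smooths the raw gradient over time, and Adam additionally rescales each step by a running estimate of the gradient's magnitude.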
Check out GeekWire’s new Health Tech podcast, exploring the frontiers of digital health. In this first episode, Clare McGrane speaks to Shwetak Patel. Through his work at the UW and his new startup, Senosis Health, Patel is developing apps for smartphones that can measure health almost as well as expensive medical equipment found in hospitals.
“Our idea has been: How do we repurpose the sensors that are already on a mobile phone to do similar things that you would find in a clinician’s office or at a hospital? And so, we’re looking at how to use microphones, the camera, the flash, the accelerometer, the gyro in new ways that people never used them before,” he said. To that end, they have created apps to measure blood pressure, hemoglobin in the blood, the capacity in one’s lungs to diagnose asthma, and more.
It’s hard to catch up with all the WWDC updates from Apple. I’ve found this video to be a good summary.
Like the rest of the tech world, Apple wants to make AI on your mobile device as fast and powerful as possible. That’s why the company unveiled a new machine learning framework API for developers named Core ML.
The key benefit of Core ML will be speeding up how quickly AI tasks execute on the iPhone, iPad, and Apple Watch. This could cover everything from text analysis to face recognition, and should have an effect on a broad category of apps. It means, says Apple, that image recognition on the iPhone will be six times faster than on Google’s Pixel. Core ML will support a number of essential machine learning tools, including all sorts of neural networks (deep, recurrent, and convolutional), as well as linear models and tree ensembles.
Fujitsu has partnered with Vancouver-based 1QBit to apply quantum-inspired technology to artificial intelligence.
1QBit, which develops hardware-agnostic quantum and quantum-inspired software to solve computational challenges, will work with Fujitsu to build AI applications designed for use with quantum computers. The partnership will focus on machine learning and combinatorial optimization, the process of finding an optimal option from a finite but extremely large set of possibilities. Eventually, the two companies plan to provide a variety of services that combine 1QBit’s software and expertise in building applications which benefit from the capabilities of quantum computers.
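Combinatorial problems of this kind are commonly encoded as QUBOs (quadratic unconstrained binary optimization), the native input format for annealing-style quantum and quantum-inspired solvers. Purely as an illustrative sketch (this is not either company's actual software), here is a classical simulated-annealing solver for a toy QUBO:

```python
import math
import random

def simulated_annealing_qubo(Q, n, steps=5000, seed=0):
    """Minimise x^T Q x over binary vectors x in {0,1}^n by simulated annealing.
    Q is a dict mapping index pairs (i, j) to coefficients."""
    rng = random.Random(seed)
    x = [rng.randint(0, 1) for _ in range(n)]

    def energy(x):
        return sum(Q.get((i, j), 0) * x[i] * x[j]
                   for i in range(n) for j in range(n))

    e = energy(x)
    for t in range(steps):
        temp = max(0.01, 1.0 - t / steps)   # cool the temperature over time
        i = rng.randrange(n)
        x[i] ^= 1                           # propose flipping one bit
        e_new = energy(x)
        # Always accept improvements; accept worse moves with falling probability
        if e_new <= e or rng.random() < math.exp((e - e_new) / temp):
            e = e_new
        else:
            x[i] ^= 1                       # reject: undo the flip
    return x, e

# Toy instance: picking either item alone scores -1, picking both is penalised
Q = {(0, 0): -1, (1, 1): -1, (0, 1): 3}
best_x, best_e = simulated_annealing_qubo(Q, 2)
```

The early high-temperature phase lets the search escape poor local minima; the late low-temperature phase behaves like greedy descent. Annealing hardware pursues the same minimisation over vastly larger problem instances.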