In this episode of the McKinsey Podcast, McKinsey partners Louise Herring and Jessica Moulton
speak with McKinsey Publishing’s Monica Toriello about how grocery retailers can reinvigorate the in-store experience for customers while also making the most of the shift to online food shopping.
Herring discusses the ways in which robotics and automation are changing the face of retail in general and grocery in particular. “In stores, there’s a whole plethora of technologies becoming available as well. You have shelf-stacking robots and store-cleaning robots—the types of technologies trialed through Amazon Go. The returns are all available there, but retailers are in the process of thinking through how to plug those into their IT infrastructures and their road maps. And really, to work out, do they want to be leaders or followers in that game?”
Waymo is retiring its purpose-built “Firefly” in favour of an adapted Chrysler minivan.
By focusing on mass-produced vehicles like the Pacifica minivan, they’ll be able to bring fully self-driving technology to more people, more quickly. The minivans are equipped with their latest generation of custom-built radar, LiDAR and vision systems and an all-new AI compute platform, so they can see even farther and more sharply. They can also reach full speed (whereas the Firefly is limited to 25 mph), and the interior is equipped
with the creature comforts that passengers expect in their vehicles today — which makes the initial fleet of 600 self-driving minivans a great fit for Waymo’s early rider program.
Element AI, a Montreal-based AI startup, plans to put its money where its mouth is by building AI-powered solutions for companies in-house.
They say that initial products available there include: predictive modeling; forecasting models for small data sets; conversational AI and natural language processing; image recognition and automatic tagging of attributes based on images; ‘aggregation techniques’ based on machine learning; reinforcement learning for physics-based motion control; compression of time-series data; statistical machine learning algorithms; voice recognition; recommendation systems; fluid simulation; consumer engagement optimization; and computational advertising.
Microsoft researchers in Canada have created an AI-based system that learned how to get the maximum score on the addictive 1980s video game Ms. Pac-Man, using a divide-and-conquer method that could have broad implications for teaching AI agents to do complex tasks that augment human capabilities.
Doina Precup, an associate professor of computer science at McGill University in Montreal, said she was impressed not just with what the researchers achieved but with how they achieved it. To get the high score, the team divided the large problem of mastering Ms. Pac-Man into small pieces, which they then distributed among AI agents. For more highlights, check out the article here.
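The divide-and-conquer idea can be sketched in miniature: give each agent responsibility for one component of the game (say, one pellet or one ghost), let each keep its own value estimates, and have a top-level aggregator pick the action whose summed value is highest. The agent roles, state encoding, and numbers below are illustrative assumptions, not the researchers’ actual setup:

```python
# Toy sketch of a divide-and-conquer action aggregator: each "agent" owns
# one reward component and keeps its own Q-value table; the aggregator
# sums Q-values across agents and acts greedily. All numbers are made up.

ACTIONS = ["up", "down", "left", "right"]

def choose_action(q_tables, state):
    """Pick the action with the highest summed Q-value across all agents."""
    return max(ACTIONS, key=lambda a: sum(q[state][a] for q in q_tables))

# Two illustrative agents: one cares about a nearby pellet (prefers "left"),
# one cares about avoiding a ghost (strongly prefers "up").
pellet_agent = {"s0": {"up": 0.1, "down": 0.0, "left": 0.9, "right": 0.2}}
ghost_agent  = {"s0": {"up": 1.5, "down": -2.0, "left": 0.0, "right": -1.0}}

best = choose_action([pellet_agent, ghost_agent], "s0")
print(best)  # the ghost agent's strong preference dominates: "up"
```

Because each small agent only has to learn a simple sub-problem, the pieces can be trained far more easily than one monolithic agent chasing the full game score.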
In case you missed it, the UN hosted The Global Summit on AI for Good. Its aim was to accelerate and advance the development and democratisation of AI technologies to address specific global challenges in line with the 17 UN Sustainable Development Goals (SDGs).
Andy Townsend, Emerging and Disruptive Technology at PwC UK, has a great three-part series highlighting the key takeaways from the event (check out part two and part three). Here are a few he gleaned:
The conference highlighted that investment in projects that work from the bottom up is the most effective way to build the infrastructure that would allow larger mass-enterprise projects to take hold.
Yesterday, Google announced the release of MobileNets, a family of mobile-first computer vision models for TensorFlow, designed to effectively maximize
accuracy while being mindful of the restricted resources for an on-device or embedded application. MobileNets are small, low-latency, low-power models parameterized to meet the resource constraints of a variety of use cases. They can be built upon for classification, detection, embeddings, and segmentation similar to how other popular large-scale models, such as Inception, are used.
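Much of MobileNets’ efficiency comes from depthwise separable convolutions, which split a standard convolution into a per-channel filtering step and a 1×1 pointwise mixing step. A rough sketch of the multiply-add arithmetic (the layer sizes below are illustrative, not taken from the release):

```python
# Multiply-add cost of a standard conv layer vs. a depthwise separable one.
# d_k: kernel size, m: input channels, n: output channels, d_f: feature map size.

def standard_conv_cost(d_k, m, n, d_f):
    return d_k * d_k * m * n * d_f * d_f

def depthwise_separable_cost(d_k, m, n, d_f):
    depthwise = d_k * d_k * m * d_f * d_f   # one spatial filter per input channel
    pointwise = m * n * d_f * d_f           # 1x1 conv to mix channels
    return depthwise + pointwise

# Illustrative layer: 3x3 kernel, 256 -> 256 channels, 14x14 feature map.
std = standard_conv_cost(3, 256, 256, 14)
sep = depthwise_separable_cost(3, 256, 256, 14)
print(f"standard: {std:,}  separable: {sep:,}  ratio: {std / sep:.1f}x")  # ~8.7x cheaper
```

The same parameterization (kernel size, channel counts, feature-map resolution) is what lets the family be tuned up or down to fit a given device’s latency and power budget.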
The AI-driven service will allow designers to embed voice assistant services in just about any device they make. Snips will work with customers to help them modify the platform so it suits their design needs. At the same time, a consumer version will be available on the web that will let anyone adapt a version for use on a device powered by Raspberry Pi.
Rand Hindi, Snips CEO and co-founder, said the goal of the service was to give companies a choice so they wouldn’t have to rely on the platforms of companies like Google and Amazon that can change their terms and designs over time.