PwC found that 81% of banking CEOs are concerned about the speed of technological change, a higher share than in any other industry sector. See how our research subscription service can help you stay ahead of the game.
This robotic arm is part of a master's thesis by graduate students at the University of Antwerp who wanted to address the needs of deaf and hearing-impaired people. In classrooms, courtrooms and at home, they often need interpreters, who aren't always available.
Their solution is “Antwerp’s Sign Language Actuating Node,” or ASLAN. It’s a robotic hand and forearm that can perform sign-language letters and numbers. It was designed from scratch and built from 25 3D-printed parts, with 16 servos controlled by an Arduino board. It’s taught gestures using a special glove, and the team is looking into recognising them through a webcam as well.
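A hand like ASLAN's boils down to mapping each fingerspelled letter to a target pose for its 16 servos, then moving smoothly between poses. The sketch below is purely illustrative: the letter poses and angle values are invented for the example (the real calibration would come from the team's glove-recording process).

```python
# Illustrative sketch of fingerspelling control for a 16-servo hand.
# The poses below are invented for the example, NOT ASLAN's real calibration.

NUM_SERVOS = 16

# Hypothetical 0-180 degree targets per servo for two letters.
GESTURES = {
    "a": [170, 160, 10, 5, 10, 5, 10, 5, 10, 5, 90, 90, 90, 90, 90, 90],
    "b": [10, 20, 170, 175, 170, 175, 170, 175, 170, 175, 90, 90, 90, 90, 90, 90],
}

def pose_for(letter):
    """Return the 16 servo targets for a fingerspelled letter."""
    angles = GESTURES[letter.lower()]
    assert len(angles) == NUM_SERVOS
    return angles

def frames_between(start, end, steps):
    """Linearly interpolate between two poses so the hand moves smoothly
    instead of snapping from one letter to the next."""
    frames = []
    for i in range(1, steps + 1):
        t = i / steps
        frames.append([round(s + (e - s) * t, 1) for s, e in zip(start, end)])
    return frames
```

Each interpolated frame would then be streamed to the Arduino, which writes one angle per servo.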
Daniel Faggella takes us on a deep dive into the impact of AI on the fashion industry. He explores how chatbots, predictive analytics, and more are being used to turbocharge the business of Dior, Nike, Grabit, VF Corporation, Macy’s, and Nordstrom. Great read.
The new feature connects snaps based on their audio and stitches them together in an attempt to give a near-seamless look at a live event from multiple perspectives. It is already live within select Our Stories curated by Snapchat. Users can see different perspectives of the same footage by clicking a new button in the right corner of their mobile screen.
Because of the audio connection, which Mashable has learned is a proprietary machine learning technology built in-house by Snap’s Research team, Snapchat users can essentially change the camera angle without losing the context of what’s being shown.
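Snap's system is a proprietary learned model, but the underlying idea of matching clips by sound has a classic non-ML baseline: slide one audio signal against the other and keep the lag with the highest cross-correlation. A minimal sketch of that baseline, not Snap's method:

```python
# Minimal audio-alignment baseline via cross-correlation.
# This illustrates the general idea only; Snap's actual system is a
# proprietary machine-learning model, not this.

def xcorr_at(a, b, lag):
    """Mean product of the overlapping samples of a and b at a given lag."""
    pairs = list(zip(a[lag:], b)) if lag >= 0 else list(zip(a, b[-lag:]))
    if not pairs:
        return float("-inf")
    return sum(x * y for x, y in pairs) / len(pairs)

def best_lag(a, b, max_lag):
    """Shift of b (in samples) that best aligns it with a."""
    return max(range(-max_lag, max_lag + 1), key=lambda lag: xcorr_at(a, b, lag))
```

Once the best lag is known, the video frames can be offset by the same amount, which is what lets the camera angle change without losing the audio context.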
Microsoft has recently announced that their research team reached a 5.1% word error rate with their speech recognition system, a new industry milestone, substantially surpassing the accuracy they achieved last year. A technical report published this weekend documents the details of their system.
They reduced their error rate by about 12 percent relative to last year’s level, using a series of improvements to their neural net-based acoustic and language models. They introduced an additional CNN-BLSTM (convolutional neural network combined with bidirectional long short-term memory) model for improved acoustic modelling.
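Note that the "about 12 percent" is a relative reduction in word error rate, not an absolute drop. A quick check of the arithmetic, assuming a prior-year rate of 5.8% (that figure is not stated in this blurb; it is inferred here only to match the quoted ~12%):

```python
def relative_reduction(old_wer, new_wer):
    """Relative word-error-rate reduction, as a percentage of the old rate."""
    return 100.0 * (old_wer - new_wer) / old_wer

# Assumed prior-year rate of 5.8% (not given in the blurb above):
print(round(relative_reduction(5.8, 5.1), 1))  # prints 12.1
```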
NVIDIA GPU Ventures has joined a group of investors led by Chinese social media company Sina in investing more than $20 million in Chinese startup TuSimple. Formed in 2015, TuSimple develops technology for autonomous long-distance freight delivery.
TuSimple uses NVIDIA GPUs, NVIDIA DRIVE PX 2, Jetson TX2, CUDA, TensorRT and cuDNN to develop its autonomous driving solution. In June, the company successfully completed a 200-mile Level 4 test drive from San Diego to Yuma, Arizona, using NVIDIA GPUs and cameras as the primary sensor.
Delphi Automotive said Friday it has taken a minority stake in Israel’s Innoviz Technologies, a two-year-old developer of advanced laser-based sensors for self-driving cars. It’s the second company that specialises in LiDAR in which Delphi has made a strategic investment. In 2015, Delphi bought a stake in Quanergy Systems as part of a $90 million funding round.
“Our strategy is to make sure we are working with several companies to make sure we have access to all the technologies that might be needed for self-driving cars,” said Glen De Vos, Delphi’s senior vice president and chief technology officer.
Having conquered the Bitcoin mining industry, Bitmain is turning next to deep learning. Bitmain’s newest product, the Sophon, may or may not take over deep learning. But by giving it such a name (a sophon is a fictional proton-sized supercomputer from The Three-Body Problem that is sent by an alien civilization to halt scientific progress on Earth) Micree Zhan and his Bitmain co-founder, Jihan Wu, have signaled to the world their intentions.
The Sophon unit will include Bitmain’s first piece of bespoke silicon for a revolutionary AI technology. If things go to plan, thousands of Bitmain Sophon units soon could be training neural networks in vast data centres around the world.
Sally Eaves takes us on a tour of the tech being used in the Tour de France.
Over three pulsating weeks from Düsseldorf to the Champs-Élysées, fans of the 104th edition of the Tour de France have been treated to an immersive, real-time and increasingly co-created viewing experience, powered by IoT, machine learning, and data analytics. In terms of key numbers, this equates to 198 riders in 22 teams generating over 150 million geospatial and environmental data readings along the 3,540km route.
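A quick back-of-envelope calculation on those quoted numbers gives a feel for the data density involved:

```python
# Scale check using only the figures quoted above.
riders = 198
readings = 150_000_000
route_km = 3540

per_rider = readings / riders            # roughly 757,576 readings per rider
per_rider_per_km = per_rider / route_km  # roughly 214 readings per rider per km
print(round(per_rider), round(per_rider_per_km))
```

In other words, each rider's sensor produced on the order of a couple of hundred readings for every kilometre raced.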