Starting in 2020, Cadillac will offer autonomous driving on every Cadillac model. The Super Cruise system is already available today but is limited to the full-size CT6 sedan. General Motors is also announcing today that the Super Cruise system will begin arriving in other GM brands, such as Chevrolet, GMC, and Buick, after 2020. And by 2023, an unnamed, high-volume Cadillac crossover will have vehicle-to-everything communication, letting it talk to similarly equipped vehicles, infrastructure, and other connected devices.
Super Cruise launched earlier this year and only works on expressways in the United States. When active, Super Cruise controls the steering and speed, but again, only on an expressway. It does this with onboard sensors combined with GPS and mapping data. GM employed GeoDigital, a startup in GM Ventures' portfolio, to map 160,000 miles of expressways in the U.S. and Canada. The carmaker then used Super Cruise-equipped vehicles to test each mile.
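GM has not published how Super Cruise decides when it may engage, but the map-gated behaviour described above can be sketched in a few lines: only enable the system when the car's GPS fix lies close to a pre-mapped expressway segment. The segment coordinates and the distance tolerance below are invented for illustration.

```python
import math

# Hypothetical sketch: Super Cruise only engages on pre-mapped divided
# highways. One simplistic way to model that gate is to check whether the
# GPS fix lies within a tolerance of any mapped road segment.
# All coordinates and thresholds here are made up.

MAPPED_SEGMENTS = [
    # ((lat1, lon1), (lat2, lon2)) of mapped expressway centrelines
    ((42.33, -83.05), (42.40, -83.00)),
    ((42.40, -83.00), (42.48, -82.96)),
]

def _dist_to_segment(p, a, b):
    """Approximate distance (in degrees) from point p to segment a-b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return math.hypot(px - ax, py - ay)
    # Project p onto the segment, clamping to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    return math.hypot(px - (ax + t * dx), py - (ay + t * dy))

def super_cruise_available(gps_fix, tolerance_deg=0.01):
    """True if the fix is within tolerance of any mapped segment."""
    return any(_dist_to_segment(gps_fix, a, b) <= tolerance_deg
               for a, b in MAPPED_SEGMENTS)

print(super_cruise_available((42.36, -83.03)))  # near a mapped segment -> True
print(super_cruise_available((45.00, -90.00)))  # far from the map -> False
```

A production system would of course use proper geodesic distances, lane-level map data, and sensor confirmation rather than raw degree deltas; the point is only that availability is a function of the map, not of the road surface the cameras see.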
Engineers are planning an army of motorised cones, designed to wheel themselves off the hard shoulder without human assistance, to follow roadworks as they progress along the motorway, and even automatically replace fallen comrades when they are hit by cars or stolen by students. The prototype devices, created by a team from the British engineering company Costain, are intended to keep roads flowing faster but also to improve safety for motorway roadworkers — for whom laying out the cones is one of the most dangerous parts of their job.
Al Clarke works at Meridian, a company created by the government to promote British autonomous driving technology and bring it to market. He said that he anticipated a future where such “robocones” were part of the “driving ecosystem”. “Vehicles will be communicating with robocones,” he said. “A few years down the line when robocones are swarming on the M25, they will communicate with vehicles still, for instance, on the M6 saying, ‘Robocones are on the move at the M25, you are going to be delayed. Why not reroute?’ ”
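The vehicle-to-robocone messaging Clarke imagines does not exist yet, but the flow in his quote can be sketched: a cone deployment broadcasts an alert, and a navigation system on another motorway decides whether to reroute. The message fields and the decision rule below are entirely invented.

```python
from dataclasses import dataclass

# Purely hypothetical sketch of the V2X-style broadcast described in the
# quote: robocones on the move announce a delay, and receiving vehicles
# compare the alert against their planned route.

@dataclass
class RoboconeAlert:
    road: str                 # e.g. "M25"
    direction: str            # e.g. "clockwise"
    expected_delay_min: int   # delay the roadworks are expected to cause

def should_reroute(alert, planned_roads, delay_tolerance_min=10):
    """Reroute only if the works are on our route and the delay is too long."""
    return alert.road in planned_roads and alert.expected_delay_min > delay_tolerance_min

alert = RoboconeAlert(road="M25", direction="clockwise", expected_delay_min=25)
print(should_reroute(alert, {"M6", "M25"}))  # True: M25 is on the route
print(should_reroute(alert, {"M1", "A1"}))   # False: works are elsewhere
```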
Most of Facebook’s two billion users have little idea how much the service leans on artificial intelligence to operate at such a vast scale. Facebook products such as the News Feed, Search and Ads use machine learning, and behind the scenes it powers services such as facial recognition and tagging, language translation, speech recognition, content understanding and anomaly detection to spot fake accounts and objectionable content.
The numbers are staggering. In all, Facebook's machine learning systems handle more than 200 trillion predictions and five billion translations per day. Facebook's algorithms automatically remove millions of fake accounts every day. The company's AI ecosystem includes three major components: the infrastructure, workflow management software running on top, and core machine learning frameworks such as PyTorch.
Fighting back against this age of digital distraction, alongside Facebook and Google, is a Cambridge startup, Foci AI, whose clip-on wearable promises to help train your mind to remain more centred.
Through the power of motion sensors and machine learning, the device will track your breathing patterns and develop a picture of your cognitive state to make you more aware of your focus levels. Currently doing the rounds on Kickstarter, Foci is available from £43, with shipments expected to begin in October.
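Foci has not published how its model works, but a plausible pipeline in the spirit of the description above is to derive breath-to-breath intervals from the motion sensor and map simple statistics of those intervals to a coarse cognitive state. The labels and thresholds below are invented for illustration.

```python
import statistics

# Illustrative sketch only; Foci's actual model is not public.
# Slow, regular breathing is read as focus; rapid breathing as stress;
# irregular breathing as distraction. Thresholds are made up.

def classify_state(breath_intervals_s):
    """Label a window of breath-to-breath intervals (in seconds)."""
    mean = statistics.mean(breath_intervals_s)
    spread = statistics.pstdev(breath_intervals_s)
    if mean < 2.5:        # rapid breathing
        return "stressed"
    if spread < 0.3:      # slow and regular
        return "focused"
    return "distracted"   # slow but irregular

print(classify_state([4.0, 4.1, 3.9, 4.0]))  # regular, slow -> "focused"
print(classify_state([2.0, 2.1, 1.9, 2.2]))  # rapid -> "stressed"
print(classify_state([3.0, 5.5, 2.8, 6.0]))  # irregular -> "distracted"
```

A real product would presumably learn these boundaries per user with a trained model rather than fixed cutoffs, which is where the machine learning comes in.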
A new facial recognition system is getting ready for a more permanent installation at the US-Mexico border. In August, Customs and Border Protection will deploy a new system for scanning drivers’ faces as they leave the US, The Verge has learned. The pilot, called the Vehicle Face System (or VFS), is planned for installation at the Anzalduas border crossing at the southern tip of Texas and scheduled to remain in operation for a full year.
According to a Customs spokesperson, the purpose of the project will be “to evaluate capturing facial biometrics of travelers entering and departing the United States and compare those images to photos on file in government holdings.”
The $200 Amazon Echo Look, one of Amazon’s most niche versions of its Echo smart speaker, has had limited availability since the online retailer released it last May. You had to request an invitation to buy the Echo Look, an internet-connected camera that takes your picture and gives you fashion advice. That exclusivity ended yesterday. Amazon has made the Echo Look available for anyone to order without an invitation, the company announced Wednesday. This is similar to what Amazon did with the original Echo, which was available by invitation only for seven months before its general release in 2015.
The Echo Look’s built-in cameras take full-body selfies and short video clips to capture what clothes you’re wearing when you say, “Alexa, take a photo” or “Alexa, take a video.” You can use those pictures to keep an inventory of your wardrobe and can get bespoke fashion advice.
Earlier this year, Sotheby's made an interesting acquisition, and it wasn't a Renoir. For an undisclosed sum, the company bought the small AI startup Thread Genius as part of an unfolding long-term digital strategy. Much as Spotify suggests music users might like based on previous plays, Thread Genius deploys complex algorithms to predict what art or luxury items clients may want to purchase based on their previous purchases and searches. In short, it learns an individual's personal taste in art.
The goal is to improve recommendations to buyers and make it easier to buy and sell pieces on the platform. The company is betting on the technology both to increase its supply of inventory and to open its platform to the middle market: people who are serious about collecting art but are not among the most high-end buyers.
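Sotheby's has not disclosed Thread Genius's model, but the Spotify-style approach described above can be sketched as content-based filtering: represent each client and each artwork as a vector over style tags, then rank unseen works by similarity to the client's purchase history. Every tag, work, and score below is invented.

```python
import math

# Minimal taste-model sketch (not Thread Genius's actual algorithm):
# rank catalogue works by cosine similarity to a client's purchase profile.

STYLE_TAGS = ["impressionist", "abstract", "portrait", "landscape"]

CATALOGUE = {
    "Water Lilies study": [0.9, 0.1, 0.0, 0.8],
    "Colour Field No. 7": [0.0, 1.0, 0.0, 0.1],
    "Seated Figure":      [0.2, 0.1, 0.9, 0.0],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def recommend(purchase_history, catalogue):
    """Average the history vectors, then rank the catalogue by similarity."""
    profile = [sum(col) / len(purchase_history) for col in zip(*purchase_history)]
    return sorted(catalogue, key=lambda w: cosine(profile, catalogue[w]), reverse=True)

# A client who has bought two impressionist landscapes:
history = [[1.0, 0.0, 0.1, 0.9], [0.8, 0.1, 0.0, 1.0]]
print(recommend(history, CATALOGUE)[0])  # "Water Lilies study"
```

In practice the tag vectors would come from image and text models rather than hand labels, and searches would feed the profile alongside purchases, but the ranking step works the same way.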
Tools of the Trade
Intelligent, accurate, fast, automated…fair?
By now we know that artificial intelligence systems don't always make good decisions. Advanced algorithms and models can be biased: the results they generate can lead to people being discriminated against because of their racial background, age, gender, dialect, income, or residence.
Why? Because an AI learns from the real world. Unfortunately, this includes prevalent prejudices and stereotypes.
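One concrete way such bias is audited (an illustration, not a description of Accenture's tool) is a demographic parity check: does the model's positive-outcome rate differ across groups by more than a chosen tolerance? The loan-approval data below is fabricated for the example.

```python
# Illustrative fairness audit: compare positive-outcome rates across groups.
# The decisions and the 10% tolerance are invented for this sketch.

def positive_rate(decisions):
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_by_group):
    """Largest difference in positive-outcome rate between any two groups."""
    rates = [positive_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions (1 = approved) for two groups:
outcomes = {
    "group_a": [1, 1, 0, 1, 1, 0, 1, 1],  # 75% approved
    "group_b": [1, 0, 0, 0, 1, 0, 0, 1],  # 37.5% approved
}
gap = demographic_parity_gap(outcomes)
print(round(gap, 3))   # 0.375
print(gap <= 0.1)      # False: fails a 10% parity tolerance
```

Demographic parity is only one of several competing fairness criteria (equalised odds and calibration are others), which is part of why tooling in this area is genuinely hard.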
To fight bias in AI, Accenture's new 'Fairness Tool', prototyped with a host of esteemed researchers at The Alan Turing Institute Data Study Group, will be launched at CogX 2018, helping organisations root out ethical problems and head them off before any harm is done.