As smart speaker ownership becomes increasingly prevalent (a recent Lightspeed survey found that 33% of people in the US and UK own one), it is exciting to see Amazon allowing users to personalise their engagement and experience with their device. It will be interesting to see how Google and Apple follow suit.
Chinese tech company Huawei wants to change the way people talk to their artificially intelligent voice assistants. The firm plans to make those conversations more emotionally interactive, according to senior executives.
“We want to provide emotional interactions,” Felix Zhang, vice president of software engineering at Huawei’s
consumer business group, told CNBC at the company’s annual global analyst summit in Shenzhen, China. Huawei launched a voice assistant for the Chinese market in 2013 and said it’s looking to release emotion-based AI software in the near future. The company says its voice assistant has 110 million daily users in China.
Facebook Inc. is building a team to design its own semiconductors, adding to a trend among technology companies to supply themselves and lower their dependence on chipmakers such as Intel Corp. and Qualcomm Inc., according to job listings and people familiar with the matter.
The social media company is seeking to hire a manager to build an “end-to-end SoC/ASIC, firmware and driver development organisation,” according to a job listing on its corporate website, indicating the effort is still in its early stages.
We’re looking for leading women in AI chip design to join Azeem Azhar‘s track at CogX – the festival of AI (11-12 June). Shoot me an email if you have any leads.
Researchers at software infrastructure firm Pivotal have taught AI to locate leaked passwords in a surprising way: by looking at the code as if it were a picture. Since modern artificial intelligence is arguably better than humans at identifying minute differences in images, telling the difference between a password and normal code is, for a computer, much like distinguishing a dog from a cat.
To automate the process, the Pivotal team first turned the text of passwords and code into matrices, or lists of numbers describing each string of characters. This is the same process used when AI interprets images: similar to how the images reflected into our eyes are turned into electrical signals for the brain, images and text need to be in a simpler form for computers to process.
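The idea above can be sketched in a few lines. This is a minimal, hypothetical encoding (ASCII codes, zero-padded to a fixed width), not Pivotal's actual pipeline; the point is only that text becomes a numeric grid of the same shape an image classifier consumes.

```python
def text_to_matrix(text, width=40):
    """Encode each line of text as a row of character codes, padded to `width`.

    The result is a 2-D grid of numbers, analogous to pixel intensities,
    that a convolutional model could scan for password-like patterns.
    """
    rows = []
    for line in text.splitlines():
        codes = [ord(c) for c in line[:width]]   # one number per character
        codes += [0] * (width - len(codes))      # pad short lines with zeros
        rows.append(codes)
    return rows

# Example: a snippet containing a hard-coded credential
snippet = 'db_user = "admin"\ndb_pass = "hunter2"'
matrix = text_to_matrix(snippet)
print(len(matrix), len(matrix[0]))  # 2 rows, 40 columns
```

From here, a trained image-style classifier would look for the visual "texture" of secrets (high-entropy strings near assignment operators, for instance) rather than matching patterns in raw text.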
In a hallway of an engineering building at Stanford University, some devilish researchers have built a sprawling obstacle course. To make it through, competitors have to wind over sand, through a door, up some steps, and finally, through a forest of small pillars. Sounds like the Rube Goldbergian machinations of a grad student with too much time and Red Bull on
their hands, but no: This is a robot training ground.
See, a tracked robot might be able to navigate the sand and the steps, but good luck in the forest. A wheeled automaton could well get stuck in the sand. Amateurs, says vinebot. Air pumps into a flexible plastic tube, slowly extending it over the sand as an operator steers it through the door and over the steps and between the pillars. Obstacles bested.
Bezos’ deputy Jeff Wilke is pushing into new markets and AI frontiers while thinking through backup plans if Trump delivers on his threats against Amazon’s postage rates.
Amazon’s most significant use of AI, in its network of about 240 warehouses, is less visible. It does, however, play into anxieties that the technology could one
day replace human labor. The company is using some 130,000 Kiva robots, conveyors that look like jumbo Roombas and move whole shelves around warehouse floors, sparing workers from having to walk miles each day to retrieve items.
A new machine learning algorithm developed by computer scientists at the University of Copenhagen can identify honesty — and dishonesty — by analysing how you swipe or tap a smartphone. Dishonest interactions often take longer and involve more hand movement than honest ones, according to a research paper (PDF) to be published today.
The algorithm, called Veritaps, flashes a green check mark when truthful statements are entered into a smartphone and notes doubtful information with a red question mark. It gives recipients of questionable messages the option of asking the sender for more information. The experimental app runs on Android phones and isn’t available to the public.
In 2014, researcher Brian Pasley and colleagues used a deep-learning algorithm and brain activity, measured with electrodes, to turn a person’s thoughts into digitally synthesised speech. This was achieved by analysing a person’s brain waves while they were speaking in order to decode the link between speech and brain activity.
Jump forward a few years, and the team has now improved on that earlier research and applied their findings to music. Specifically, they were able to predict, 50 percent more accurately than in the previous study, what sounds a pianist is thinking of, based on brain activity.