At least £75 million will be invested in AI
(a total of £500 million being invested in a range of tech initiatives from AI to 5G and full-fibre broadband)
Plans for “the world’s first national advisory body for artificial intelligence”, to set standards for the use and ethics of AI and data
Looser framework for testing self-driving vehicles without a safety operator (in post-budget interviews, Hammond emphasised that the introduction of self-driving cars to UK roads will have a major impact on society, saying that over the next 20 years one million people who drive for a living will have to retrain)
A new Regulators’ Pioneer Fund, as well as a new Geospatial Data Commission, to develop a strategy for using the government’s location data to support economic growth
For a solid breakdown of the tech-related items, we’d recommend Ingrid Lunden’s article in TechCrunch. Looking for a quick guide to the different policy points and how different sectors of the population will be affected? The Guardian and The Telegraph have both put out good, digestible pieces (as has the government itself). Finally, for a quick snapshot of tech-related funding, the BBC has you covered.
Toronto’s Blue J Legal has announced Employment Foresight, a software solution that incorporates AI to help human resources professionals, in-house counsel and lawyers easily resolve employment law issues.
Blue J Legal’s software uses machine learning to identify hidden patterns in judicial rulings, which will allow users to navigate difficult areas of employment law and come to more informed decisions. The platform can aid with everything from worker classification to employee drug testing. “Employment Foresight is a game changer because it takes into account all of the important factors that courts consider, and then makes a prediction based on how those factors have interacted in court in the past,” said Ben Alarie, CEO of Blue J Legal.
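To make the idea concrete, here is a minimal sketch of predicting an outcome from case factors, in the spirit of (but not reproducing) Employment Foresight: a new case is compared against past rulings and the majority outcome among the most similar ones is returned. The factor names and example data are invented for illustration; Blue J Legal's actual features and models are not public.

```python
# Hypothetical sketch: predict an employment-law outcome by majority vote
# among the most similar past rulings (a simple k-nearest-neighbours scheme).
# Factor names and case data below are invented for illustration.

def similarity(a, b):
    # Fraction of factors on which two cases agree.
    return sum(a[k] == b[k] for k in a) / len(a)

def predict(new_case, past_cases, k=3):
    # Rank past rulings by similarity, then take a majority vote of the top k.
    ranked = sorted(past_cases,
                    key=lambda c: similarity(new_case, c["factors"]),
                    reverse=True)
    top = ranked[:k]
    votes = sum(1 for c in top if c["outcome"] == "employee")
    return "employee" if votes > k / 2 else "contractor"

past = [
    {"factors": {"sets_own_hours": False, "uses_own_tools": False, "single_client": True},
     "outcome": "employee"},
    {"factors": {"sets_own_hours": True, "uses_own_tools": True, "single_client": False},
     "outcome": "contractor"},
    {"factors": {"sets_own_hours": False, "uses_own_tools": True, "single_client": True},
     "outcome": "employee"},
]
new = {"sets_own_hours": False, "uses_own_tools": False, "single_client": True}
print(predict(new, past))  # → employee
```

A production system would weight factors by how strongly they swayed past courts, rather than treating them all equally as this toy does.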
+ Interested in the impact of AI on HR and recruitment? Check out what our research team has to offer.
Apple’s ambitions to build a self-driving car have reportedly shifted gears over the years, but we know the company is focusing on the software side of the equation. This June, CEO Tim Cook said the iPhone maker is building autonomous systems that could power a range of different vehicles (rather than, say, working on its own Apple-branded SUVs). “We sort of see it as the mother of all AI projects,” said Cook.
Now, new research from the company’s machine learning team confirms this direction, with a paper published on pre-print server arXiv describing a mapping system that could be put to a range of uses, including powering “autonomous navigation, housekeeping robots, and augmented / virtual reality.” Though, to be clear,
this is just academic research: it doesn’t indicate that Apple is working on these particular use-cases. The system in question is called VoxelNet, and it’s all about improving the data we get from the eyes of most self-driving systems: LIDAR sensors.
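The step that gives VoxelNet its name is voxelization: grouping raw LIDAR points into a regular 3D grid so a neural network can process them. Here is a minimal sketch of that step alone, with an illustrative grid size; it is not Apple's implementation.

```python
# Minimal sketch of voxelization: bucket raw LIDAR points (x, y, z) into a
# regular 3D grid of cubic cells. The 0.5 m cell size is an illustrative
# choice, not a value from the VoxelNet paper.

from collections import defaultdict

def voxelize(points, voxel_size=0.5):
    grid = defaultdict(list)
    for x, y, z in points:
        # Integer cell coordinates identify which voxel a point falls in.
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        grid[key].append((x, y, z))
    return grid

points = [(0.1, 0.2, 0.0), (0.3, 0.1, 0.1), (1.2, 0.0, 0.0)]
grid = voxelize(points)
print(len(grid))  # → 2 occupied voxels
```

In the actual paper a learned feature encoder then summarizes the points inside each occupied voxel before a 3D convolutional network consumes the grid.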
French startup Snips is now helping you build a custom voice assistant for your device. Snips doesn’t use Amazon’s Alexa Voice Service or Google Assistant SDK — the company is building its own voice assistant so that you can embed it on your devices. And the best part is that nothing gets sent to the cloud, because it works entirely offline.
If you want to understand how a voice assistant works, you can split it into multiple parts. First, it starts with a wakeword. Snips has a handful of wakewords by default, such as “Hey Snips,” but you can also pay the company to create your own wakeword. The second part is automatic speech recognition. A voice assistant transcribes your voice into a text query. Popular home assistants usually send a small audio file with your
voice and use servers to transcribe your query. Snips can transcribe your voice into text on the device itself. It works on anything that is more powerful than a Raspberry Pi.
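The pipeline described above — wakeword detection, then on-device speech-to-text, then acting on the query — can be sketched as a chain of stages. The implementations here are stand-in stubs for illustration; Snips' actual acoustic and language models are of course far more involved.

```python
# Hypothetical sketch of the voice-assistant pipeline described in the text:
# wakeword detection -> on-device transcription -> intent parsing.
# All three stages are toy stubs standing in for real models.

def detect_wakeword(utterance):
    # Stage 1: only wake up for the default "Hey Snips" wakeword.
    return utterance.lower().startswith("hey snips")

def transcribe(utterance):
    # Stage 2: stand-in for on-device ASR; a real acoustic model runs here,
    # with no audio leaving the device.
    return utterance

def parse_intent(query):
    # Stage 3: toy intent matcher; real NLU maps text to structured intents.
    if "light" in query.lower():
        return {"intent": "lights_on"}
    return {"intent": "unknown"}

def handle(utterance):
    if not detect_wakeword(utterance):
        return None  # stay asleep until the wakeword is heard
    return parse_intent(transcribe(utterance))

print(handle("Hey Snips, turn on the light"))  # → {'intent': 'lights_on'}
```

The design point worth noting is that each stage runs locally, which is what lets the whole chain work without a server round-trip.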
How can we take advantage of what AI systems have to offer, while also holding them accountable? In this work, Harvard researchers focus on one tool: explanation. Good choices about when to demand explanation can help prevent negative consequences from AI systems, while poor choices may not only fail to hold AI systems accountable but also hamper the development of much-needed beneficial AI systems.
The authors briefly review current societal, moral, and legal norms around explanation, and then focus on
the different contexts under which explanation is currently required under the law. They find that there exists great variation around when explanation is demanded, but there also exist important consistencies: when demanding explanation from humans, what we typically want to know is whether and how certain input factors affected the final decision or outcome.
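That framing — did a given input factor affect the final decision? — has a simple machine analogue: flip each input and check whether the decision changes. The toy scoring model and factor names below are illustrative assumptions, not anything from the Harvard paper.

```python
# Illustrative sketch of factor-level explanation: flip each input factor
# and report the ones that, on their own, change the outcome. The scoring
# model and factor names are invented for this example.

def decide(factors):
    # Toy black-box decision system (e.g. a loan approval).
    score = (2 * factors["income_ok"]
             + factors["long_history"]
             - 2 * factors["recent_default"])
    return score >= 2

def explain(factors):
    base = decide(factors)
    pivotal = []
    for name in factors:
        flipped = dict(factors, **{name: not factors[name]})
        if decide(flipped) != base:
            pivotal.append(name)  # this factor alone flips the decision
    return base, pivotal

decision, reasons = explain(
    {"income_ok": True, "long_history": False, "recent_default": False})
print(decision, reasons)  # → True ['income_ok', 'recent_default']
```

This mirrors the legal notion of a "but-for" factor: an explanation names the inputs without which the outcome would have differed.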
Topics of discussion in this week’s episode include Tesla, Apple, Google, and more. They close out the episode with a discussion of Boston Dynamics’ back-flipping robot, the future of robots, regulation, and warfare.
US researchers have trained deep learning algorithms to identify Chinese surface-to-air missile sites—hundreds of times faster than their human counterparts.
The deep learning algorithms proved capable of helping people with no prior imagery-analysis experience find surface-to-air missile sites scattered across nearly 90,000 square kilometers of southeastern China. The neural-network-based system matched the overall 90% accuracy of expert human imagery analysts in locating the missile sites. Perhaps even more impressively, it helped humans cut the time needed to eyeball potential missile sites from 60 hours to just 42 minutes.
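The workflow behind that speed-up can be sketched simply: slice the large satellite image into tiles, score each tile with a classifier, and have humans review only the highest-scoring ones. The scorer below is a stub standing in for the study's actual convolutional network; tile size and coordinates are illustrative.

```python
# Hedged sketch of tile-based candidate ranking over satellite imagery:
# score every tile with a classifier, then sort so human analysts review
# the most promising tiles first. The "CNN" here is a stub for illustration.

def tiles(width, height, size):
    # Yield top-left corners of non-overlapping square tiles.
    for x in range(0, width, size):
        for y in range(0, height, size):
            yield (x, y)

def rank_candidates(width, height, size, score):
    scored = [((x, y), score(x, y)) for x, y in tiles(width, height, size)]
    # Analysts work down this list instead of scanning everything by eye.
    return sorted(scored, key=lambda t: t[1], reverse=True)

# Stub scorer: pretend the model fires strongly on one region.
HOT_TILE = (200, 100)

def fake_cnn_score(x, y):
    return 1.0 if (x, y) == HOT_TILE else 0.1

top = rank_candidates(300, 300, 100, fake_cnn_score)
print(top[0][0])  # → (200, 100), the tile flagged for review first
```

The triage pattern, not the stub model, is the point: machine scoring turns an exhaustive 60-hour scan into a short review of ranked candidates.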
Scientists have developed tiny, remote-controlled “microrobots” with the ability to release cancer-targeting drugs, which they hope will one day be used to diagnose disease and administer drugs inside the human body.
“Rather than fabricate a functional microrobot from scratch using intricate laboratory techniques and processes, we set out to directly engineer smart materials in nature,” said Professor Li Zhang, an engineer at the Chinese University of Hong Kong who contributed to the Science Robotics study.
Robert C. Wolcott (Professor of Innovation, Kellogg School of Management) has written a great piece on the importance of learning to code (or the lack thereof!). He argues that though it’s certainly far better to know a computer language than not, remaining relevant will be a moving target as computer languages and programming environments arise, evolve, and in some cases die.
The need for humans to code will gradually disappear for all but the most specialised situations. Platforms will enable humans to describe in natural spoken or written language what they’d like computers to accomplish. The coding will occur behind the computational scenes. We won’t code so much as direct and request. Ultimately, coding isn’t the point. The objective is to define and communicate what we want computational systems to do.