We want to thank all of you for answering yesterday’s survey about cyborgs, where we asked: “Do you agree with Musk and think that we should be merging humans and robots to create what he referred to as ‘cyborgs’?” If you haven’t done so already, please answer here and leave a comment.
We’ve already had a great turnout from our community: 55% thought we should be merging with robots, 20% thought it depends, and 25% thought we shouldn’t. Check out the results here.
Technology can inspire people to be creative in new ways. Magenta, an open-source project Google launched last year, aims to do that by giving developers tools to explore music using neural networks.
To help show what’s possible with Magenta, they have created an interactive experiment called AI Duet, which lets you play a duet with the computer. Just play some notes, and the computer will respond to your melody.
AI Duet works by taking the notes you play on your computer’s keyboard and running them through a neural network trained on many example melodies. The neural net looks for melodic and rhythmic patterns it can identify, then generates its own organic melodies that, hopefully, sound like a direct response to yours. It was created by musician and computer scientist Yotam Mann in collaboration with the Creative Lab and Magenta, Google’s open-source computational music and art project.
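AI Duet’s actual model is a neural network, but the underlying idea, learning note-to-note patterns from example melodies and then sampling those patterns to answer the user’s input, can be sketched with a much simpler stand-in. The toy Markov chain below is purely illustrative (the example melodies and function names are ours, not Magenta’s code):

```python
import random
from collections import defaultdict

def train_transitions(melodies):
    """Count note-to-note transitions (a toy stand-in for the neural net)."""
    transitions = defaultdict(list)
    for melody in melodies:
        for prev_note, next_note in zip(melody, melody[1:]):
            transitions[prev_note].append(next_note)
    return transitions

def respond(melody, transitions, length=4, seed=0):
    """Continue the user's melody by sampling learned transitions."""
    rng = random.Random(seed)
    note = melody[-1]
    response = []
    for _ in range(length):
        choices = transitions.get(note)
        if not choices:          # unseen note: fall back to repeating it
            choices = [note]
        note = rng.choice(choices)
        response.append(note)
    return response

# "Train" on two short melodies given as MIDI note numbers.
examples = [[60, 62, 64, 65, 67], [60, 64, 67, 64, 60]]
table = train_transitions(examples)
print(respond([60, 62], table))
```

A real model like Magenta’s also captures rhythm and longer-range structure; this sketch only shows the pattern-learn-then-respond loop.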
After a few months of development, Heliograf, The Washington Post’s news-writing bot, debuted last year. It works like this: Editors create narrative templates for the stories, including key phrases that account for a variety of potential outcomes (from “Republicans retained control of the House” to “Democrats regained control of the House”), and then they hook Heliograf up to any source of structured data—in the case of the election, the data clearinghouse VoteSmart.org. The Heliograf software identifies the relevant data, matches it with the corresponding phrases in the template, merges them, and then publishes different versions across different platforms.
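The template-and-merge step described above can be sketched in a few lines. Everything here (the template string, the phrase table, and the data fields) is a hypothetical illustration, not the Post’s actual implementation:

```python
# A minimal sketch of Heliograf-style templated story generation.
TEMPLATE = "{outcome} Turnout was {turnout} percent."

# Key phrases covering the potential outcomes (hypothetical examples).
PHRASES = {
    "R": "Republicans retained control of the House.",
    "D": "Democrats regained control of the House.",
}

def write_story(data):
    """Match structured data to the corresponding key phrase and merge."""
    return TEMPLATE.format(outcome=PHRASES[data["winner"]],
                           turnout=data["turnout"])

print(write_story({"winner": "R", "turnout": 55}))
# → Republicans retained control of the House. Turnout was 55 percent.
```

The real system adds the publishing side, rendering different versions of the merged story for different platforms.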
Jeremy Gilbert, director of strategic initiatives, says the next step is to use Heliograf to keep the data in both machine- and human-written stories up to date. For instance, if someone shares a Tuesday story on Thursday and the facts have changed in the meantime, Heliograf will automatically update the story with the most recent facts. Gilbert sees Heliograf eventually functioning like a rewrite desk, in which “the reporters who gather information write more discrete chunks—here’s some facts, here’s some analysis—and let the system assemble them.”
As part of the first annual TensorFlow Developer Summit, hosted in Mountain View and livestreamed around the world, Google announced TensorFlow 1.0. It is reported to be faster, more flexible, and more production-ready.
Other highlights include: 1) higher-level API modules tf.layers, tf.metrics, and tf.losses, brought over from tf.contrib.learn after incorporating skflow and TF Slim; 2) new Android demos for object detection and localization, and camera-based image stylization; and 3) installation improvements: Python 3 Docker images have been added, and TensorFlow’s pip packages are now PyPI compliant.
If you weren’t able to catch the Summit live, check out the video below.
IBM is making machine learning technology available in the place where much of the world’s enterprise data resides: the z System mainframe.
On Wednesday, IBM announced IBM Machine Learning, a cognitive platform for creating, training and deploying a high volume of analytic models in the private cloud. The platform draws on the core machine learning technology from its Watson Machine Learning service on its Bluemix public cloud offering.
“Our mission is making data simple and accessible to clients,” says Rob Thomas, general manager, IBM Analytics. “If you look at the data landscape today, over 90 percent of the data in the world today cannot be Googled. It’s neither simple, nor accessible. Most of that data resides behind corporate firewalls in private clouds.”
MEPs ask the EU Commission to propose rules on robotics and AI, in order to fully exploit their economic potential and to guarantee a standard level of safety and security. They note that regulatory standards for robots are being planned in several countries, and point out that the EU needs to take the lead on setting these standards, so as not to be forced to follow those set by third countries.
Rapporteur Mady Delvaux (S&D, LU) said: “Although I am pleased that the plenary adopted my report on robotics, I am also disappointed that the right-wing coalition of ALDE, EPP and ECR refused to take account of possible negative consequences on the job market. They rejected an open-minded and forward-looking debate and thus disregarded the concerns of our citizens.”
Arif Kareem, CEO of stream-analytics company ExtraHop Networks, argues that robots will not take over our jobs but will instead augment our abilities and increase our productivity. On a macro level, he says, there is no doubt that machine learning will reduce the need for manual labor across many industries and roles, including the maintenance side of IT. But as machines take over basic tasks, tremendous opportunity opens up for human roles focused not on rote work but on innovation.
While IT organizations are already applying rudimentary machine learning to data sets like log files and code data, they are now turning to a new source of data that encompasses all digital interactions: the enterprise network, a data source so vast that tapping it has only recently become possible. Flowing over the network is a wealth of data about every aspect of the digital business, from application performance, to security issues like denial-of-service attacks and ransomware threats, to end-user experience. It’s real time, it’s accurate, and it’s complete. When we apply machine learning to this data set, it becomes the eyes and ears for IT, alerting them to application performance issues and potential security threats before they can affect business operations or customer experience.
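The product details are ExtraHop’s, but the general pattern, flagging network measurements that deviate sharply from a recent baseline, can be sketched with a simple z-score check. This is a toy illustration with made-up latency data, not ExtraHop’s algorithm:

```python
import statistics

def alert_on_anomalies(samples, window=20, threshold=3.0):
    """Flag samples more than `threshold` standard deviations above the
    mean of the preceding window of samples (a toy baseline model)."""
    alerts = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mean = statistics.fmean(baseline)
        stdev = statistics.pstdev(baseline)
        if stdev and samples[i] > mean + threshold * stdev:
            alerts.append(i)
    return alerts

# Steady ~100 ms response times, then one 500 ms spike at index 25.
latencies_ms = [100 + (i % 5) for i in range(25)] + [500] + [101, 102]
print(alert_on_anomalies(latencies_ms))  # → [25]
```

Production systems model many more signals (seasonality, protocol behavior, peer baselines), but the alert-before-impact idea is the same.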
In the 4th part of Daniel Jeffries’ series on “Learning AI If You Suck at Math”, he explains what tensors are (check out part 1, part 2, and part 3 as well). The lesson came out in tandem with Google’s announcement of TensorFlow 1.0. In it, Jeffries walks the reader through Python, Keras, and TensorFlow, as well as the Python library NumPy, to illustrate tensors. If that doesn’t excite you, he uses images of cats to give a concrete example of their application in machine learning.
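Jeffries’ lesson uses Keras and TensorFlow, but the basic notion of tensor rank can be shown with NumPy alone. This is a quick illustration of ours, not code from the article:

```python
import numpy as np

# A tensor is just an n-dimensional array; its rank is the number of axes.
scalar = np.array(5)                   # rank 0: a single number
vector = np.array([1, 2, 3])           # rank 1: a list of numbers
matrix = np.array([[1, 2], [3, 4]])    # rank 2: rows and columns
# A tiny "image batch": 2 grayscale images of 4x4 pixels -> rank 3.
images = np.zeros((2, 4, 4))

for t in (scalar, vector, matrix, images):
    print(t.ndim, t.shape)
```

A color-image batch would add a channel axis (rank 4), which is exactly the shape image-classification models like the cat examples consume.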
Microsoft Corp. is trying again in health care, betting its prowess in cloud services and AI can help it expand in a market that’s been notoriously hard for technology companies.
A new initiative called Healthcare NExT will combine work from existing industry players and Microsoft’s Research and AI units to help doctors reduce data entry tasks, triage sick patients more efficiently and ease outpatient care. “I want to bring our research capabilities and our hyper-scale cloud to bear so our partners can have huge success in the health-care world,” said Peter Lee, a Microsoft Research vice president who heads Healthcare NExT. Microsoft unveiled the new effort ahead of the Healthcare Information and Management Systems Society conference next week. The University of Pittsburgh Medical Center and Microsoft want to use technologies like speech and natural language recognition to replace manual data entry by doctors, Lee said.