Mobile automation solutions have become the norm, giving service organizations the ability to communicate service records digitally. Sharing work orders between the back office and the service technician is now an efficient, digitized process with near-instant transfer of information.
KJ Park (VP of Growth and Strategy, Noble Iron) argues that AI will be the next major stride toward improving technician utilization, first-time fix rates, and the customer experience. Accomplishing this will require a system that can learn and run a service department effectively.
Alibaba Group has announced the launch of a global research program and its plan to invest $15B in R&D through the program over the next three years.
The program, called Alibaba DAMO Academy, is aimed at increasing technological collaboration globally, advancing the development of “cutting-edge” technology, and making the world more inclusive by “narrowing the technology gap”, the company said in an announcement on Wednesday. The DAMO Academy, short for “Academy for Discovery, Adventure, Momentum, and Outlook”, will be led by Alibaba CTO Jeff Zhang.
The rise of the robots, coming first for our jobs, then maybe our lives, is a growing concern in today’s increasingly automated world. Just this week, the World Bank chief said the world is on a “crash course” to automate millions of jobs. But a recent report from Germany paints a less dramatic picture: Europe’s strongest economy and manufacturing powerhouse has quadrupled the number of industrial robots it has installed in the last 20 years, without causing human redundancies.
In 1994, Germany had almost two industrial robots per thousand workers, four times as many as the US. By 2014, there were 7.6 robots per thousand German workers, compared to 1.6 in the US. Between 1994 and 2014, the country’s thriving auto industry added 60–100 robots per thousand workers.
Box CEO Aaron Levie has always had a vision for the company that extended well beyond its earliest use case as a way to transfer files between machines online.
This week, at the BoxWorks customer conference, that vision took a considerable step forward with the announcement of a new artificial intelligence toolkit called Box Skills. The tools are aimed at multimedia content (video, audio, and photos) and enable customers to mix and match AI and machine learning tools from IBM, Microsoft and Google. Earlier this year, Box announced a photo-tagging tool using the Google Cloud Vision API. Box Skills takes that original idea much further and begins building artificial intelligence into the foundation of Box content management.
Regina Barzilay, a professor in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) who does research in natural language processing and machine learning, is a recipient of a 2017 MacArthur Fellowship, sometimes referred to as a “genius grant.”
Artists’ bodies of work are like icebergs: We almost never see more than the 10% sticking above the water. For Edvard Munch, that means most folks are familiar with The Scream, and maybe a dozen or so of his other 1,800+ paintings, but the rest remain unseen. How much can we really know about an artist by seeing only a handful of his works?
Artnome put the complete paintings of Edvard Munch into a searchable online database so that art lovers can get the added context of seeing all of Munch’s paintings, the entire iceberg, not just what shows above water. In this blog post, Jason Bailey shares his experience using machine learning to tag objects within the paintings, in an attempt to expand search capabilities beyond traditional metadata such as title, dimensions, and materials.
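The core idea of tag-based search is simple to sketch: invert the ML-generated tags into an index mapping each tag to the paintings that contain it. The snippet below is an illustrative Python sketch only; the painting titles are real Munch works, but the tags and the search function are hypothetical stand-ins, not Artnome's actual data or code.

```python
from collections import defaultdict

# Hypothetical ML-generated tags for a few Munch paintings
# (illustrative data, not Artnome's actual tagging output).
painting_tags = {
    "The Scream": ["figure", "bridge", "sky"],
    "The Sick Child": ["figure", "bed", "interior"],
    "Girls on the Bridge": ["figure", "bridge", "water"],
}

# Invert the tags into a search index: tag -> set of painting titles.
index = defaultdict(set)
for title, tags in painting_tags.items():
    for tag in tags:
        index[tag].add(title)

def search(*tags):
    """Return titles of paintings that carry all of the given tags."""
    results = [index.get(t, set()) for t in tags]
    return sorted(set.intersection(*results)) if results else []

print(search("bridge"))            # → ['Girls on the Bridge', 'The Scream']
print(search("figure", "bridge"))  # → ['Girls on the Bridge', 'The Scream']
```

In a real system the tags would come from an image-classification model and the index would live in a database rather than in memory, but the query pattern is the same.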
In this paper, the authors propose a framework for automatic classification of patients from multimodal genetic and brain imaging data by optimally combining them. Additive models with unadapted penalties (such as the classical group lasso penalty or ℓ1-multiple kernel learning) treat all modalities in the same manner and can result in undesirable elimination of specific modalities when their contributions are unbalanced.
To overcome this limitation, they introduce a multilevel model that combines imaging and genetics and considers joint effects between the two modalities for diagnosis prediction. Furthermore, they propose a framework for combining several penalties that takes into account the structure of the different types of data.
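To make the "unadapted penalty" issue concrete, the standard group lasso penalizes the weight vector $w$ with one term per feature group $g$ (here, one group per modality), all sharing a single regularization strength. In standard notation (not necessarily the paper's exact formulation):

```latex
\Omega_{\mathrm{GL}}(w) \;=\; \lambda \sum_{g \in \mathcal{G}} \sqrt{p_g}\,\lVert w_g \rVert_2
```

where $p_g$ is the size of group $g$. Because a single $\lambda$ governs every group, a modality with a weaker but still useful signal can be driven to zero entirely; combining several penalties with modality-specific strengths (e.g. $\lambda_{\text{imaging}}$ and $\lambda_{\text{genetics}}$), as the paper's framework allows, avoids this uniform treatment.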