Last night at our last event of the year, we brought together AI experts, executives, government officials, academics and friends for a review of AI in 2017 and some punditry on what 2018 holds in store. Check out our blog post or read the highlights:
2018 will be about moving from Lab to Live. Now is the time for companies to get from proof of concept into production.
The UK government is really focused on making the UK the best place for AI. AI was selected as one of the four Grand Challenges for the country, which, coupled with funding and a new Centre for AI and ethics, sets us rolling for some hard work in 2018.
From Hollywood to the AI industry, 2017 was the year it stopped (again) being acceptable to be sexist, giving us a better chance that in 2018 we'll code a little less bias into our machines.
Explainable AI is set to become a reality, initially in research departments and then quickly adopted by industry.
Waymo said on Tuesday it’s partnering with on-demand insurance provider Trov to cover self-driving vehicles in the Arizona program, where the commercial ride service launches in the next few months. The trip-based coverage for users includes lost items, problems related to interrupted rides and potential medical expenses.
It’s the first U.S. project for insurance broker Trov, which is backed by reinsurance giant Munich Re, and its first mobility-related undertaking. “Waymo is making personal transportation more effortless, and so it follows that all associated aspects of a journey – including insurance – should be equally painless, with passengers’ safety and peace of mind paramount,” Trov CEO Scott Walcheck said in a statement.
Article to Share With Your Less Data-Savvy Friends
Check out this piece from Daniel Tunkelang, who led machine learning projects at Endeca, Google, and LinkedIn. As someone who often finds himself explaining machine learning to non-experts, he offers the following list as a public service announcement.
DocuSign, an e-signature and digital transaction management company, said it has acquired the technology and core team members from machine learning startup Appuri. Appuri is known for its machine-learning-based audience segmentation technology that lets companies deliver targeted marketing messages and product recommendations.
DocuSign said it now plans to integrate Appuri’s machine learning and artificial intelligence features directly into the DocuSign platform, and remove Appuri’s products from the open market. The majority of Appuri’s team, including co-founders Damon Danieli and Bilal Aslam, will join DocuSign’s product development group.
According to new research published in the Journal of the American College of Radiology, software using natural language processing and machine learning algorithms can accurately audit radiologist compliance with report templates. The study’s authors audited radiology reports from October 2015 at their facility both automatically—using this new software—and manually—looking at 25 reports for each of the faculty’s 42 members.
Lead author Lane F. Donnelly, MD, of the department of radiology at Stanford University, said, “The information can be used as a quality indicator for radiology dashboards, incentive programs, or practitioner evaluation, such as Ongoing Professional Practice Evaluation. Feedback about individual faculty member compliance with use of standardised reports can be particularly helpful during the period when a department is transitioning from free dictation of reports to use of standardised templates.”
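The study's actual software is not public, but the core idea can be sketched simply. The example below is a hypothetical illustration, assuming a template is defined by a set of required section headings; the section names and function names are illustrative, not taken from the study.

```python
# Hypothetical sketch of automated template-compliance auditing.
# Assumption: a standardised report template is characterised by a fixed
# set of required section headings, and a report is "compliant" when all
# of them appear. The real software uses NLP/ML and is more sophisticated.
REQUIRED_SECTIONS = ["CLINICAL HISTORY", "COMPARISON", "FINDINGS", "IMPRESSION"]

def is_template_compliant(report: str) -> bool:
    """A report is compliant if every required heading appears in it."""
    text = report.upper()
    return all(section in text for section in REQUIRED_SECTIONS)

def compliance_rate(reports: list[str]) -> float:
    """Fraction of a practitioner's reports that follow the template."""
    if not reports:
        return 0.0
    return sum(is_template_compliant(r) for r in reports) / len(reports)

# Illustrative reports: one templated, one free-dictated.
templated = (
    "CLINICAL HISTORY: cough. COMPARISON: none. "
    "FINDINGS: clear lungs. IMPRESSION: no acute disease."
)
free_text = "Lungs are clear. No acute cardiopulmonary disease."

rate = compliance_rate([templated, free_text])  # 0.5 in this toy example
```

Aggregating `compliance_rate` per radiologist is the kind of per-practitioner quality indicator the authors describe feeding into dashboards or evaluations.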
Yesterday, Facebook launched some pretty cool facial recognition tools. Now, if you’re in a photo and are part of the audience for that post, they’ll notify you, even if you haven’t been tagged. You can make choices such as whether to tag yourself, leave yourself untagged, or reach out to the person who posted the photo if you have concerns about it.
Soon, you will begin to see a simple on/off switch instead of settings for individual features that use face recognition technology. Facebook designed it as a single on/off switch because people gave feedback that they prefer one simple control over deciding for every feature that uses face recognition.
In “NIMA: Neural Image Assessment,” the authors introduce a deep CNN that is trained to predict which images a typical user would rate as looking good (technically) or attractive (aesthetically). NIMA relies on the success of state-of-the-art deep object recognition networks, building on their ability to understand general categories of objects despite many variations.
Their proposed network can not only score images reliably and with high correlation to human perception, but is also useful for a variety of labour-intensive and subjective tasks such as intelligent photo editing, optimising visual quality for increased user engagement, or minimising perceived visual errors in an imaging pipeline.
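A distinctive part of NIMA is that the network predicts a probability distribution over rating bins (1 to 10) rather than a single score, and the quality score is the distribution's mean. The sketch below illustrates only that scoring step with made-up logits; the function names are ours, and the real model produces the logits from a pretrained CNN backbone.

```python
# Hedged sketch of NIMA-style scoring: turn a network's 10 logits into a
# rating distribution, then take its mean (and std) as the quality score.
# The logits here are invented for illustration.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    e = np.exp(logits - logits.max())  # subtract max for numerical stability
    return e / e.sum()

def nima_score(logits: np.ndarray) -> tuple[float, float]:
    """Return (mean, std) of the predicted rating distribution."""
    p = softmax(logits)
    bins = np.arange(1, len(p) + 1)               # rating bins 1..10
    mean = float((p * bins).sum())                # expected rating
    std = float(np.sqrt((p * (bins - mean) ** 2).sum()))
    return mean, std

# Logits skewed toward the high bins give a high mean score.
logits = np.array([0., 0., 0., 0., 0., 1., 2., 3., 3., 2.])
mean, std = nima_score(logits)
```

Predicting the full distribution, rather than a single number, is what lets the model also report how uncertain or divisive an image's rating is (via the standard deviation).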
Fernando Lucini (Managing Director, Accenture Analytics) discusses how companies are dabbling and experimenting in AI. Some companies are well organised, many less so. What he hears is that most companies desperately want to move on from the “experiments” phase.
Seems to me the challenge has a number of parts: Operating Model, Contracting, Architectures and Responsible AI. We need to spend time working on our operating models. The best-case scenario has companies evolving their operating models towards continuous change. The worst-case scenario has companies maintaining existing “handle” creation processes and trying to fit AI products into them: good luck with that.