AI + education. Andrew Ng’s latest project. Smart headphones from Bragi. http://cognitionx.com/news-briefing/.
In the UK, the government has dedicated institutions working on defining the scope and impact of AI, such as the All-Party Parliamentary Group on AI (APPG AI) and the Lords Artificial Intelligence Committee. In fact, Oxford Insights’ recently released Government AI Readiness Index ranked the UK 1st, reflecting its world-leading centres for AI research and strong technology industry.
In the US, a new bill drafted by Senator Maria Cantwell asks the Department of Commerce to establish a committee on artificial intelligence to advise the federal government on how AI should be implemented and regulated. Check out this article from Quartz for a good summary.
The government’s ‘Nudge Unit’ is experimenting with using machine learning algorithms to rate how well schools and doctors’ surgeries are performing.
For the last year, the Behavioural Insights Team (BIT) has been trialling machine learning models that can crunch through publicly available data to help automate some of the decisions made by bodies such as Ofsted, which inspects schools, and the Care Quality Commission, which regulates health and social care in England. Michael Sanders, head of research at the BIT, says it is working with Ofsted to put the technology into use during 2018. “We’re working with them to feed into variations on our model and to improve it using additional data that they have that isn’t public,” he says.
Andrew Ng is launching a startup called Landing.AI. Its initial aim is to help manufacturing companies use computer algorithms to cut costs, improve quality control, remove supply-chain bottlenecks, and more.
Landing.AI’s first strategic partner is Foxconn, the Taiwanese manufacturing giant that helps Apple build iPhones. The company is helping implement a system to spot defects, such as tiny particles or scratches on camera lens units. Currently, thousands of people manually inspect such parts. Ng says the AI-powered system can work 24 hours a day, seven days a week, and be more accurate than people.
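Landing.AI has not published how its inspection system works, but the general shape of automated visual inspection can be sketched with a toy anomaly check. Everything below is hypothetical: the background level, threshold, and image data are invented for illustration, and a production system would use a trained convolutional model rather than a fixed threshold.

```python
import numpy as np

def is_defective(image, background=200, threshold=60, min_pixels=3):
    """Flag a lens image as defective when enough pixels deviate
    strongly from the expected uniform background.
    All parameters here are illustrative, not Landing.AI's method."""
    deviation = np.abs(image.astype(int) - background)
    return int((deviation > threshold).sum()) >= min_pixels

rng = np.random.default_rng(0)
# A clean lens: uniform background with mild sensor noise.
clean = np.full((32, 32), 200) + rng.integers(-5, 5, (32, 32))
# A scratched lens: the same background with a dark line across it.
scratched = clean.copy()
scratched[10, 5:20] = 40
```

The input/output contract (image in, defect verdict out) is the part that matters: it is what lets such a system replace a manual inspection step and run around the clock.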
Bragi is determined to stand out in the increasingly crowded smart earbud space, and it might just have an ace up its sleeve. The startup and its partner Audio Analytic are promising to show off the first “context-aware earphones” at CES in January — that is, a Dash Pro whose AI can recognise nearby sounds and act accordingly.
While the exact capabilities are under wraps, Audio Analytic hints at recognising both conversations and ambient sounds. The earphones might pipe down when you start talking to a friend, or let in more outside sound if an ambulance is approaching with sirens blaring.
Yesterday, Microsoft announced a series of AI-driven features for its Bing search engine to make it more conversational and nuanced. The news, unveiled at an event in San Francisco, means that Bing will make better use of object recognition, so-called machine reading (for parsing text and extracting meaning), and other techniques tuned and improved using AI training methods.
Search results will now show both multiple perspectives and multiple sources, culled from a list of pre-approved news sources, to show Bing users different sides of issues ranging from the benefits and downsides of kale to the pros and cons of contentious political issues. In a new partnership with social news site Reddit, Bing will also surface information from subreddits right in search results by using algorithms to read and analyse the user-generated text across Reddit’s many communities.
Kasisto, creators of KAI, the leading conversational AI platform for finance, announced a $17M Series B funding round yesterday led by Oak HC/FT with participation from existing investors Propel Venture Partners and more.
The new capital will be used to scale the business to meet increasing demand for Kasisto’s products in existing and new markets, and to deepen partnerships with existing customers. The funding will also be invested in expanding the KAI platform with new AI-powered features and innovations that continue to deliver business results alongside customer engagement and delight.
Sandra Upson (Wired) argues that cute machine learning projects, like turning a video of a horse into a zebra, are nice stunts, but that’s not all they are. They are also a sign of the growing power of machine learning algorithms to rewrite reality.
Now the algorithms powering style transfer are gaining precision, signalling the end of the Uncanny Valley—the sense of unease that realistic computer-generated humans typically elicit. In contrast to the previous somewhat crude effects, tricks like zebrafication are starting to fill in the Valley’s lower basin.
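The zebrafication trick is usually attributed to CycleGAN-style image-to-image translation, whose key ingredient is a cycle-consistency loss: translating a horse to a zebra and back should recover the original image. The real generators are deep convolutional networks; the toy linear maps G and F below are stand-ins used only to show the loss itself.

```python
import numpy as np

def G(x):
    # Hypothetical "horse -> zebra" generator (a toy linear map).
    return 2.0 * x + 1.0

def F(y):
    # Hypothetical "zebra -> horse" generator, here the exact inverse of G.
    return (y - 1.0) / 2.0

def cycle_consistency_loss(x):
    # L1 distance between an image and its round trip F(G(x)).
    return float(np.mean(np.abs(F(G(x)) - x)))

x = np.array([0.2, 0.5, 0.9])  # a stand-in "image"
```

When F inverts G exactly, the loss is essentially zero; training pushes both networks toward this, which is what keeps a translated image anchored to the original scene rather than drifting into arbitrary imagery.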
There are a lot of opportunities to show off your machine learning skills by churning out something creative. First prize for each competition is $1K. All entries must be accompanied by a two-page description of the approach.
The deadline for entries is May 15th, 2018, by 11:59PM GMT.
PoetiX – a challenge in computer-generated sonnet writing
LimeriX – a new challenge in machine-generated limericks
LyriX – an “Open Format / Literary Metacreation” challenge
DigiKidLit – a new challenge to generate a children’s story
A new study of patients with focal brain injury makes the case that machine learning can empower physicians to make healthcare recommendations with a level of individualised precision that would have been impossible just a decade ago.
A research team from University College London (UCL) examined 1,172 stroke patients with anatomically registered brain lesions. They used MRI and CT scans, together with a machine learning algorithm, to map the brains of the patients, leveraging that data to measure the effects of hypothetical therapies. The team found that high-dimensional models were better at pinpointing therapeutic effects, particularly for therapies aimed at reducing the size of lesions. For instance, one intervention would need to reduce the size of a brain lesion by 78.4% to be picked up by traditional low-dimensional models, whereas the high-dimensional model could pick up lesion shrinkage of as little as 55%, according to the study.
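The underlying intuition, that modelling lesions region by region detects smaller effects than a single aggregate volume, can be illustrated with a hedged simulation. The patient counts, region structure, and effect size below are invented for illustration and are not the UCL study’s data or method.

```python
import numpy as np

rng = np.random.default_rng(42)
n_patients, n_regions = 200, 10

# Control lesions: mean volume 100 per region with patient-level noise.
control = rng.normal(100, 20, size=(n_patients, n_regions))
# Treated lesions: a hypothetical therapy shrinks only region 3 by 10%.
treated = rng.normal(100, 20, size=(n_patients, n_regions))
treated[:, 3] *= 0.90

def z_score(a, b):
    # Two-sample z statistic for a difference in means.
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    return (a.mean() - b.mean()) / se

# "Low-dimensional" model: total lesion volume only.
z_total = z_score(control.sum(axis=1), treated.sum(axis=1))
# "High-dimensional" model: every region tested separately.
z_by_region = [z_score(control[:, r], treated[:, r]) for r in range(n_regions)]
```

With these invented numbers the localised 10% shrinkage stands out strongly in the per-region test but is far weaker in the aggregate-volume test, because the unaffected regions dilute the signal, mirroring in spirit the 78.4%-versus-55% gap the study reports.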