The AI Now Institute at New York University, an interdisciplinary research center dedicated to understanding the social implications of AI, has just released its latest report, full of recommendations for the future of AI across areas including Rights & Liberties, Bias & Inclusion, and Labor & Automation. Recommendations include:
Before releasing an AI system, companies should run rigorous pre-release trials to ensure that the system will not amplify biases and errors due to any issues with the training data, algorithms, or other elements of system design.
More research and policymaking is needed on the use of AI systems in workplace management and monitoring, including hiring and HR.
Expand AI bias research and mitigation strategies beyond a narrowly technical approach.
SAP has introduced a new tool to help recruiters eliminate bias in the hiring process, according to a company blog. SAP SuccessFactors’ Recruiting Management solution assists employers in creating competitive, effective job descriptions that flag language reflective of gender bias.
The new job analyser function uses SAP’s Leonardo machine learning platform, which allows managers and recruiters to flag language and phrases that applicants could construe as favoring one gender over another.
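SAP has not published the internals of the job analyser, but the general idea of flagging gender-coded wording can be sketched with a simple keyword matcher. The word lists below are illustrative examples drawn from research on gendered wording in job ads, not SAP’s actual lexicon, and a production system would use a learned model rather than a fixed list:

```python
import re

# Illustrative word lists only; not SAP's actual lexicon.
MASCULINE_CODED = {"aggressive", "dominant", "ninja", "rockstar", "competitive"}
FEMININE_CODED = {"supportive", "nurturing", "collaborative", "interpersonal"}

def flag_gendered_language(job_description: str) -> dict:
    """Return any gender-coded words found in a job description."""
    words = set(re.findall(r"[a-z]+", job_description.lower()))
    return {
        "masculine_coded": sorted(words & MASCULINE_CODED),
        "feminine_coded": sorted(words & FEMININE_CODED),
    }

ad = "We need an aggressive, competitive rockstar with a collaborative spirit."
print(flag_gendered_language(ad))
```

A recruiter-facing tool would go further, suggesting neutral replacements for each flagged phrase rather than just listing them.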
+Looking to better understand how SAP, Textio, and more are using machine learning to combat bias in the recruitment process? Check out our research.
Under Benioff’s quirky tactics, Salesforce is as original a product marketer as exists in enterprise (or nonconsumer) technology. It has proved adept at predicting and embracing technology shifts, the latest being an aggressive move into AI.
Benioff explains the company’s move into AI as his realisation that “a lot was happening in AI. But I also realised it wasn’t clear what Salesforce’s role in AI was. That’s when we started acquiring quite a few artificial intelligence companies, maybe a dozen.” He was convinced that “this could be our AI brand.” Today, Einstein is the Salesforce-branded product that integrates machine learning and other AI-related technologies into the rest of the company’s offerings.
How to avoid the nightmare scenario of AI? According to Ilya Sutskever and Dario Amodei, researchers from Elon Musk’s OpenAI, the trick is teaching machines to keep our interests in mind.
We have a long way to go before powerful AI systems become a reality, which is exactly why we need to devote time and energy to AI safety now. The world today would be a much safer place if the internet had been designed with security in mind—but it wasn’t. We now have an opportunity to take a safety-first approach to a far more powerful and potentially dangerous technology. We’d be wise to take it.
Andrew Ng has just announced that he will be joining Woebot’s board of directors as its chairman. He will be assisting its CEO, Alison Darcy, and the company in its mission to build a chatbot that will help the millions of people who struggle with their mental health.
Woebot already has fantastic engineering and clinical teams. I’m excited to work with the company to continue building up its NLP team as well as its AI abilities. Just as MOOCs are making high-quality education globally accessible, Woebot will make high-quality mental health coaching globally accessible.
This paper (from Kenji Kawaguchi, Leslie Pack Kaelbling, and Yoshua Bengio) explains why deep learning can generalise well, despite large capacity and possible algorithmic instability, nonrobustness, and sharp minima, effectively addressing an open problem in the literature. Based on this theoretical insight, the paper also proposes a family of new regularisation methods.
Its simplest member was empirically shown to improve base models and achieve state-of-the-art performance on MNIST and CIFAR-10 benchmarks. Moreover, the paper presents both data-dependent and data-independent generalisation guarantees with improved convergence rates.
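The paper’s exact regularisers aren’t reproduced here; as a hedged sketch of the general pattern (adding a penalty on the network’s output magnitudes to the training loss), something like the following could be written, with the penalty form and the `lam` value being illustrative assumptions rather than the paper’s method:

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean cross-entropy loss. logits: (n, k) raw outputs; labels: (n,) ints."""
    shifted = logits - logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def regularised_loss(logits, labels, lam=0.01):
    """Cross-entropy plus an output-norm penalty: the largest per-class sum
    of absolute logits, averaged over the batch. Illustrative only, in the
    spirit of output-based regularisation; not the paper's exact formula."""
    penalty = np.abs(logits).sum(axis=0).max() / len(labels)
    return cross_entropy(logits, labels) + lam * penalty
```

In practice such a penalty would be added to the minibatch loss during training, so the optimiser trades off fit against output magnitude.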
The paper (PDF) backing this model up tries to establish some baseline equations (the setup pictured below is just one of the situations) to ensure self-driving cars behave safely. That includes situations where pedestrians can appear from behind another car, or, for example, making sure the car is not following another too closely. The point is to ensure that autonomous vehicles don't cause collisions, even if this cannot prevent all collisions, and to be able to establish blame in a crash that happens without a human driver.
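The "not following too closely" case can be sketched from the commonly cited minimum safe longitudinal distance bound in the Responsibility-Sensitive Safety literature: the rear car must leave enough room to stop even if the front car brakes as hard as possible while the rear car is still reacting. The parameter values below (reaction time, acceleration and braking limits) are illustrative assumptions, not values prescribed by the paper:

```python
def min_safe_distance(v_rear, v_front, rho=1.0,
                      a_max_accel=3.0, b_min_brake=4.0, b_max_brake=8.0):
    """Minimum gap (metres) between a rear car at v_rear m/s and a front
    car at v_front m/s. Worst case: during reaction time rho the rear car
    accelerates at up to a_max_accel, then brakes at only b_min_brake,
    while the front car brakes at up to b_max_brake."""
    v_rear_after = v_rear + rho * a_max_accel  # worst-case speed after reacting
    d = (v_rear * rho
         + 0.5 * a_max_accel * rho ** 2          # distance covered while reacting
         + v_rear_after ** 2 / (2 * b_min_brake)  # rear car's stopping distance
         - v_front ** 2 / (2 * b_max_brake))      # front car's stopping distance
    return max(d, 0.0)

# Two cars at highway speed (30 m/s) need a much larger gap than a
# stationary car behind a moving one, which needs none at all.
print(min_safe_distance(30.0, 30.0), min_safe_distance(0.0, 30.0))
```

A planner would treat any state violating this gap as unsafe and require the rear car to brake, which is what lets blame be assigned: the car that violated the bound is at fault.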