Happy Friday, here’s some homework! We’ve curated some articles that highlight gender bias and the issues around the lack of women in AI to prep you for the discussion on Tuesday (if you like the style, do subscribe to our weekly or daily briefing on All Things AI and we’ll keep you abreast of what’s going on in AI each morning).
We have a long wait list, so please cancel your ticket if you can no longer make it, and do arrive at 6:30PM to be sure you’ll get a seat. If you can’t make it, follow along on Twitter: #WomeninAI @Cognition_X, and we’ll share a video afterwards.
Reminder of the location: Rathbone, 8 Finsbury Square, London, EC2M 7AZ
See you Tuesday,
Tabitha UntiltheBotsTakeOver Goldstaub
P.S. If you’d like to share this info with others, please use this link.
As machines are getting closer to acquiring human-like language abilities, they are also absorbing the deeply ingrained biases concealed within the patterns of language use, the latest research shows.
+ Panelist Dr Joanna Bryson, a computer scientist at the University of Bath and a co-author of the above paper, said: “A lot of people are saying this is showing that AI is prejudiced. No. This is showing we’re prejudiced and that AI is learning it.”
+ Panelist Dr Sandra Wachter, a researcher in data ethics and algorithms at the University of Oxford, said: “The world is biased, the historical data is biased, hence it is not surprising that we receive biased results.”
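For the curious, here’s a rough sketch of how researchers measure this kind of bias in word embeddings: check how much closer certain words sit to “he”-like words than to “she”-like words. The tiny 2-D vectors below are invented purely for illustration; real studies use embeddings with hundreds of dimensions learned from billions of words.

```python
# Toy sketch of word-embedding bias measurement.
# The 2-D vectors are invented for illustration only; the first axis
# here loosely encodes gender. Real embeddings are learned from corpora.
import math

def cosine(a, b):
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

vectors = {
    "he":       (1.0, 0.1),
    "she":      (-1.0, 0.1),
    "engineer": (0.8, 0.6),
    "nurse":    (-0.7, 0.7),
}

def gender_lean(word):
    """Positive means closer to 'he', negative means closer to 'she'."""
    return cosine(vectors[word], vectors["he"]) - cosine(vectors[word], vectors["she"])

for w in ("engineer", "nurse"):
    print(w, round(gender_lean(w), 2))
```

With these made-up vectors, “engineer” leans towards “he” and “nurse” towards “she” — exactly the pattern the research found in embeddings trained on ordinary human text.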
A study based on interviews conducted by Microsoft with 11,500 girls and young women across Europe (definitely worth checking out) found their interest in STEM subjects drops dramatically at 15, with gender stereotypes, few female role models, peer pressure and a lack of encouragement from parents and teachers largely to blame.
According to Unesco, 29% of people in scientific research worldwide are women, compared with 41% in Russia. In the UK, about 4% of inventors are women, whereas the figure is 15% in Russia.
Russian girls view STEM far more positively, with their interest starting earlier and lasting longer, says Julian Lambertin, managing director at KRC Research, the firm that oversaw the Microsoft interviews.
+ Panelist Maxine Mackintosh, Founding Director of One HealthTech: “I’ve never experienced problems being a woman in a room full of men – if anything it’s worked to my advantage.”
“However, for some women, the gender imbalance in health IT is a real dissuader from moving down the health tech route – especially into innovation.”
“I realised that something needed to change – lacking the input of half the population, there was a big loss of intellectual capital and, having always had a bit of a pull-your-socks-up type of attitude, I decided to do something about it.” Mackintosh and her team are working to involve and engage more women in health tech: not just in the industry as a whole, but to encourage more of them to speak at events, to take on leadership roles, to innovate.
With machine learning, programmers don’t encode computers with instructions. They train them. If you want to teach a neural network to recognize a cat, for instance, you don’t tell it to look for whiskers, ears, fur, and eyes. You simply show it thousands and thousands of photos of cats, and eventually, it works things out. If it keeps misclassifying foxes as cats, you don’t rewrite the code. You just keep coaching it.
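That coaching loop can be sketched in a few lines. Nothing below mentions whiskers or ears: the feature values, labels, and learning rate are made up for illustration, and the same loop would learn whatever labelled examples you feed it.

```python
# Minimal sketch of "training, not programming": a perceptron nudged
# towards correct answers by repeated exposure to examples.
# Each example is (feature vector, label); label 1 = cat, 0 = fox.
# The data and features are invented for illustration.
data = [
    ((1.0, 0.2), 1),
    ((0.9, 0.1), 1),
    ((0.2, 0.9), 0),
    ((0.1, 1.0), 0),
]

weights = [0.0, 0.0]
bias = 0.0

def predict(x):
    """Weighted sum plus bias, thresholded at zero."""
    return 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0

# "Coaching": show the examples repeatedly, nudging weights on mistakes.
# Note we never rewrite predict() itself - only the numbers change.
for _ in range(20):
    for x, label in data:
        error = label - predict(x)
        weights = [w + 0.1 * error * xi for w, xi in zip(weights, x)]
        bias += 0.1 * error

print([predict(x) for x, _ in data])  # all four examples now classified correctly
```

The point of the sketch is that the code never changes during learning; only the weights do, which is what makes the training data — and whoever curates it — so influential.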
With this move from programming to training, I think women have an important role to play. As I wrote in my blogpost on Why Women in AI, “empathy, nurturing, listening, multi-tasking, intuition, teaching and mothering are skills and qualities we need to be involved in training AI machines. The question is how to ensure they are at the table.”
When Textio arrived on the scene a couple of years ago, it addressed two big pain points for the tech industry: “How can we recruit the best talent?” and “How can we create a more diverse workforce?”
The Seattle startup aims to answer those questions using artificial intelligence to determine how the language in job postings will perform with certain demographics. The word “oversee,” for example, tends to attract more male applicants while “collaborate” is more appealing to women. Formal words like “the candidate” instead of “you” are a turn-off to both.
More than 7,000 companies are using Textio’s tools to improve the performance of job listings and attract underrepresented groups. That adds up to a lot of data — data Textio is now making public.
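To make the idea concrete, here’s a toy version of that kind of audit. It uses only the example words from the paragraph above; Textio’s real models are proprietary and rely on performance data from millions of postings, not hand-picked word lists.

```python
# Toy job-posting audit, loosely inspired by Textio's approach.
# The word lists contain only the examples cited in the article;
# they are not Textio's actual lexicon.
import re

MALE_LEANING = {"oversee"}          # article: attracts more male applicants
FEMALE_LEANING = {"collaborate"}    # article: more appealing to women
OFF_PUTTING = {"the candidate"}     # article: formal phrasing deters everyone

def audit_posting(text):
    """Return which flagged words and phrases appear in a posting."""
    lowered = text.lower()
    words = set(re.findall(r"[a-z']+", lowered))
    return {
        "male_leaning": sorted(w for w in MALE_LEANING if w in words),
        "female_leaning": sorted(w for w in FEMALE_LEANING if w in words),
        "off_putting": sorted(p for p in OFF_PUTTING if p in lowered),
    }

report = audit_posting(
    "The candidate will oversee the data team and collaborate with design."
)
print(report)
```

A real system would go further and suggest rewrites — “you” instead of “the candidate” — and score the posting against outcomes from similar listings.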
At the Recode conference last year, Melinda Gates told the audience: “The thing I want to say to everybody in the room is: We ought to care about women being in computer science. You want women participating in all of these things because you want a diverse environment creating AI and tech tools and everything we’re going to use.”
The figures are actually worse in AI. At one of 2015’s biggest artificial intelligence conferences—NIPS, held in Montreal—just 13.7% of attendees were women, according to data the conference organizers shared with Bloomberg.
“I call it a sea of dudes,” said Margaret Mitchell, a researcher at Google. She estimates she’s worked with around 10 or so women over the past five years, and hundreds of men. “I do absolutely believe that gender has an effect on the types of questions that we ask,” she said. “You’re putting yourself in a position of myopia.”
The simplest explanation is that people are conditioned to expect women, not men, to be in administrative roles—and that the makers of digital assistants are influenced by these social expectations. But maybe there’s more to it.
“It’s much easier to find a female voice that everyone likes than a male voice that everyone likes,” the Stanford communications professor Clifford Nass told CNN in 2011. “It’s a well-established phenomenon that the human brain is developed to like female voices.” In a 2012 study, people who used an automated phone system found a male voice more “usable”, but not necessarily as “trustworthy” as a female voice. Check out a very accessible video from the Wall Street Journal’s Joanna Stern about why all your bots are female.
Erika Hayasaki, associate professor in the literary journalism program at the University of California, Irvine, wrote a thorough piece on bias in AI. As she writes, “in the not-so-distant future, artificial intelligence will be smarter than humans. But as the technology develops, absorbing cultural norms from its creators and the internet, it will also be more racist, sexist, and unfriendly to women.”
She covers a host of examples, such as the 1995 social bot ‘Maxwell’, which began to tell sexist jokes, and Microsoft’s Tay, which quickly began to spew sexist and racist remarks. In the mind of Heather Roff, an artificial intelligence and global security researcher at Arizona State, there’s no waiting to address these issues. Feminist ethics and theories must take the lead in the world’s ensuing reality, she argues. “Feminism looks at these relationships under a microscope,” she says, and poses the uncomfortable questions about forward-charging technology and all the hierarchies within it.
People often comment on the sexism inherent in these subservient bots’ female voices, but few have considered the real-life implications of the devices’ lackluster responses to sexual harassment. By letting users verbally abuse these assistants without ramifications, their parent companies are allowing certain behavioral stereotypes to be perpetuated. Everyone has an ethical imperative to help prevent abuse, but companies producing digital female servants warrant extra scrutiny, especially if they can unintentionally reinforce their abusers’ actions as normal or acceptable. Check out the article on Quartz for more.
David Robinson, Data Scientist at Stack Overflow, examined Mark Riedl’s Wikipedia plots dataset to analyse the distribution of verbs across genders. The dataset contains over 100,000 descriptions of plots from films, novels, TV shows, and video games. The stories span centuries and come from tens of thousands of authors, but the descriptions are written by a modern audience, which means we can quantify gender roles across a wide variety of genres. Since the dataset contains plot descriptions rather than primary sources, it’s also more about what happens than about how an author describes the work: we’re less likely to see “thinks” or “says”, but more likely to see “shoots” or “escapes”.
He says that this data shows a shift in what verbs are used after “he” and “she”, and therefore what roles male and female characters tend to have within stories. However, it’s only scratching the surface of the questions that can be examined with this data.
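The core of the method is simple to sketch: tokenise each plot description and count the word that follows each pronoun. The two-sentence corpus and the naive “next word after the pronoun” heuristic below are simplifications for illustration; the real analysis ran over 100,000+ Wikipedia plot summaries and used proper part-of-speech handling.

```python
# Sketch of counting verbs that follow "he" vs "she" in plot text.
# The two plots are invented examples, not from Riedl's dataset.
from collections import Counter
import re

plots = [
    "He shoots the guard and he escapes through the window.",
    "She screams and she escapes into the forest.",
]

counts = {"he": Counter(), "she": Counter()}
for plot in plots:
    tokens = re.findall(r"[a-z]+", plot.lower())
    # Naive heuristic: treat the word right after the pronoun as its verb.
    for pronoun, following in zip(tokens, tokens[1:]):
        if pronoun in counts:
            counts[pronoun][following] += 1

print(counts["he"].most_common())
print(counts["she"].most_common())
```

Comparing the two counters — which verbs are common after one pronoun but rare after the other — is exactly the shift in character roles Robinson describes.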