Although we are approaching a self-driving future, outdated and incongruous regulations in the USA are slowing progress and making it rather difficult. GM, Toyota and Lyft are urging Congress to set nationwide self-driving car standards. Do you think these governmental regulations should be set? Answer the survey simply by clicking ‘Yes’ or ‘No’ below, and feel free to leave a comment as well.
With billions of dollars committed to research and testing of vehicles driven by artificial intelligence rather than humans, the last thing automakers and tech firms want is balkanized regulations that vary from state to state or out-of-date federal rules for this fast-developing technology.
So General Motors, Toyota, Volvo and ride-hailing service Lyft had a unified message for members of the House of Representatives on Tuesday: Set a national framework for testing and deploying autonomous vehicles — and do it soon.
“One of the most significant challenges that we face today with respect to the testing of autonomous vehicle technology is the patchwork of policy initiatives at the state level,” Gill Pratt, CEO of the Silicon Valley-based Toyota Research Institute, said in testimony to the House Subcommittee on Digital Commerce and Consumer Protection.
Nuance Communications, Inc. yesterday announced from the RSA Conference that it has partnered with BioCatch, the global leader in behavioural biometrics, to deliver continuous authentication on the web and mobile as part of the Nuance Security Suite solution.
Through this partnership, Nuance is expanding its Security Suite to take fraud prevention beyond the contact centre by combining digital inputs to more accurately detect fraud across multiple channels in the FraudMiner platform.
Through the partnership, Nuance will incorporate BioCatch’s behavioural biometrics data within its voice biometrics and facial biometrics platform, adding a unique layer of protection that allows organizations to flag potentially fraudulent activity based on inconsistencies in the way a person interacts with a device or an application.
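To make the idea of flagging fraud from interaction inconsistencies concrete, here is a minimal illustrative sketch. The feature names, thresholds and numbers below are invented for illustration; this is not BioCatch’s or Nuance’s actual API, only the general pattern of comparing a live session against a user’s behavioural profile.

```python
# Illustrative sketch only: flagging a session whose interaction features
# deviate strongly from a user's historical behavioural profile.
# All feature names and values are hypothetical.
from statistics import mean, stdev

def build_profile(sessions):
    """Summarise past sessions as per-feature (mean, std dev)."""
    features = sessions[0].keys()
    return {f: (mean(s[f] for s in sessions), stdev(s[f] for s in sessions))
            for f in features}

def flag_anomaly(profile, session, z_threshold=3.0):
    """Flag the session if any feature's z-score exceeds the threshold."""
    for f, (mu, sigma) in profile.items():
        if sigma > 0 and abs(session[f] - mu) / sigma > z_threshold:
            return True
    return False

# Hypothetical interaction features: typing cadence (ms/keystroke),
# mouse speed (px/s), swipe pressure (arbitrary units).
history = [
    {"cadence": 180, "mouse_speed": 410, "pressure": 0.62},
    {"cadence": 175, "mouse_speed": 395, "pressure": 0.60},
    {"cadence": 185, "mouse_speed": 420, "pressure": 0.65},
    {"cadence": 178, "mouse_speed": 405, "pressure": 0.61},
]
profile = build_profile(history)
print(flag_anomaly(profile, {"cadence": 181, "mouse_speed": 400, "pressure": 0.63}))  # typical → False
print(flag_anomaly(profile, {"cadence": 60, "mouse_speed": 900, "pressure": 0.20}))   # very different → True
```

In a real deployment this signal would be one input among many, combined with voice and facial biometrics as the article describes.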
Quora recently released the first dataset from their platform: a set of 400,000 question pairs, with annotations indicating whether the questions request the same information. This data set is large, real, and relevant — a rare combination. In this post, Matthew Honnibal, a leading expert in AI technology, explains how to solve text-pair tasks with deep learning, using both new and established tips and technologies.
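Before reaching for deep learning, a useful baseline for the question-pair task is simple token overlap (Jaccard similarity). The two example pairs below are invented in the dataset’s (question1, question2, is_duplicate) style; this is only a sketch of the task, not Honnibal’s method.

```python
# Token-overlap (Jaccard) baseline for duplicate-question detection.
# The question pairs are invented examples in the Quora dataset's format.
import re

def jaccard(q1, q2):
    """Fraction of shared lowercase word tokens between two questions."""
    a = set(re.findall(r"[a-z]+", q1.lower()))
    b = set(re.findall(r"[a-z]+", q2.lower()))
    return len(a & b) / len(a | b)

pairs = [
    ("How do I learn Python?", "How do I learn Python programming?", 1),  # duplicate
    ("How do I learn Python?", "How do I lose weight fast?", 0),          # not duplicate
]

for q1, q2, label in pairs:
    print(f"{jaccard(q1, q2):.2f} (is_duplicate={label})")
```

Deep-learning approaches improve on this by comparing meanings rather than surface tokens, which is exactly where a baseline like this fails (paraphrases with little word overlap).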
When asked about ‘machine ethics’ he said, “‘Machine ethics’ means the ethical reflection on machines done by humans, not machines engaging in such reflection on themselves (and on us). Ethics is done by members of human society reflecting critically on the customs (Greek: ethos, Latin: mores) underlying their being-together. ‘Machine ethics’ is, from this perspective, i.e., as a critical reflection done by machines, an oxymoron. If we want machines to act ‘in an ethical way’, then we have to provide them with some kind of moral (and legal) rules of behaviour. We are responsible for their formulation, fixation and interpretation. It makes no sense to make machines ‘morally responsible’ for their actions.”
A recent study has set out to use AI to help researchers detect epilepsy in children. The research, a collaborative project between Young Epilepsy, UCL Great Ormond Street Institute of Child Health and the University of Cambridge, focused on Focal Cortical Dysplasia, a major cause of epilepsy in children. The condition arises when part of the brain fails to form normally, and because the abnormalities tend to be small, they are very difficult to pick up on MRI scans.
So, the researchers turned to machine learning to help rapidly identify these abnormalities in children. First, the subtle abnormalities in the brain were identified by a paediatric neuroradiologist; these were then transformed into a range of features, including the thickness and folding of the brain, that could be used to train the algorithm. When the algorithm was put through its paces, it correctly identified the brain abnormality in 73% of patients.
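The pipeline described above, expert-labelled regions turned into surface features that train a classifier, can be sketched as follows. The feature values and the nearest-centroid classifier are invented for illustration; the study’s actual model and features are not specified beyond thickness and folding.

```python
# Illustrative sketch: classify brain regions as normal or dysplastic from
# surface features (synthetic data). The study's real model is not shown here.
from statistics import mean

def centroid(rows):
    """Per-feature mean of a list of feature vectors."""
    return [mean(col) for col in zip(*rows)]

def predict(x, normal_c, lesion_c):
    """Nearest-centroid: label the region by the closer class centroid."""
    dist = lambda c: sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return "lesion" if dist(lesion_c) < dist(normal_c) else "normal"

# [cortical_thickness_mm, folding_index] per labelled region (synthetic)
normal = [[2.5, 0.55], [2.6, 0.52], [2.4, 0.57], [2.5, 0.54]]
lesion = [[3.4, 0.35], [3.6, 0.32], [3.3, 0.38], [3.5, 0.33]]  # thicker, abnormally folded

normal_c, lesion_c = centroid(normal), centroid(lesion)
print(predict([3.5, 0.34], normal_c, lesion_c))  # close to the lesion centroid
print(predict([2.5, 0.56], normal_c, lesion_c))  # close to the normal centroid
```

The 73% detection rate reported above reflects how subtle these lesions are in practice: real features overlap far more between classes than this toy example suggests.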
A group of investors, including co-founders of Prisma and MSQRD, has announced Ommy. The investors, who launched two of the hottest new apps of 2016, continue to push the envelope: their new app creates a set of emojis resembling a user’s own facial features from a single photo.
Unlike other apps, Ommy uses artificial intelligence to create a unique emoji from nothing more than a selfie. It removes the hassle of personalizing your emoji, instantly rendering an image of the user in many styles: funny, love, cartoon, rage and more. “My experience in neural networks and app development gave me this idea. I wanted to create personalized emoticons based on users’ selfies,” creator Dmitry Koltsov stated. Koltsov is also the developer of Love Collage, which creates photo collages that users can edit with text, stickers, photo frames and more.
In this video, a fully autonomous vehicle from drive.ai navigates the streets of Mountain View, CA on a rainy night. Without human intervention, the car handles a heavy downpour and manoeuvres down narrow streets lined with parked cars. Despite these challenges, it drives safely and makes it to its destination.
Santander customers will be able to make payments with their voice by talking to their smartphone app, in yet another sign of the technological revolution that is transforming the banking industry.
Under a new pilot scheme, the company has revamped its voice recognition technology to allow customers to make transfers to existing payees by speaking to its iPhone SmartBank app. Santander is the first high street lender to offer the service, which comes after the company launched its so-called “voice assistant banking” technology.