Issue 142: CognitionX Data Science, AI and Machine Learning


“Be Right Back”, an episode of Black Mirror in which a character interacts with a digital version of their deceased boyfriend, might just become a reality. Since we are already generating so much online data, and will only generate more and more, Hossein Rahnama of Ryerson University and the MIT Media Lab thinks this is more than just science fiction. In fact, he is already creating chatbots crafted from personal data to make it a reality.

How do you feel about this digital duplication? Is it ethical to create a digital version of a deceased loved one, or is this going a step too far? Head on over to the forum to discuss.

Best,

Tabitha UntiltheBotsTakeOver Goldstaub

P.S.

We are excited to announce that we are partnering with Tech Tour Growth Forum, which will take place March 30-31st. Sign up now, because the early bird discount ends soon! Check out our events page to stay up to date with the latest events. 

P.P.S.

Thank you to everyone who voted in our graph database poll last week. Titan stormed to victory with 30% of the vote. There was a battle for second place, with newcomer grakn.ai just pipping the tried-and-tested Neo4j.

Ethics question for the day

New technology is forcing us to confront the ethics of bringing people back from the dead

The possibility of digitally interacting with someone from beyond the grave is no longer the stuff of science fiction. The technology to create convincing digital surrogates of the dead is here, and it’s rapidly evolving, with researchers predicting its mainstream viability within a decade. In the past year, neuroscientists and philosophers have been speculating about the potential of, let’s say, building a digital duplicate of your grandmother. This copy could exist in a kind of virtual Elysium, able to Skype in to Thanksgiving dinners long after her death.

But Hossein Rahnama of Ryerson University and the MIT Media Lab is working on something more immediately realizable than mental duplicates: chatbots crafted from personal data. “Fifty or 60 years from now, [millennials] will have reached a point in their lives where they each will have collected zettabytes [1 trillion gigabytes] of data, which is just what is needed to create a digital version of yourself,” Rahnama says.

Art

Grammy-Nominee Alex Da Kid Creates Hit Record Using Machine Learning

As well as writing poetry and novels, AI is now being used to create music by a Grammy-nominated producer, who collaborated with IBM’s Watson cognitive computing platform on his newest release. Alex Da Kid used Watson to analyze the composition of five years’ worth of Billboard songs, as well as cultural artefacts such as newspaper articles, film scripts and social media commentary. The idea was to understand the “emotional temperature” of the time period and use this to inform Alex’s creative process.

Alex says, “I use the tools that help me best express myself. As long as it’s my vision, my emotion and sound making it onto the track, I’m open to using any kind of technology available. So far, I haven’t encountered anything that presented a challenge to those values – including AI.”

Data visualisation

Virtualitics LLC is using virtual reality to make Big Data analysis as immersive as games

Data analysis and visualization startup Virtualitics LLC just announced the launch of a new tool that will allow researchers to better understand data using virtual reality and augmented reality. The new tool, unveiled Thursday, combines VR and AR with machine learning and Big Data analysis, letting data scientists immerse themselves in their data and discover insights hidden in complex data sets. The company also announced that it just closed a $3 million seed funding round from angel investors.

“Big Data is worthless if we cannot extract actionable knowledge from it,” said Michael Amori, chief executive officer of Virtualitics. “Visualisation can reveal the knowledge hidden in data, but traditional 2D and 3D data visualisations are inadequate for large and complex data sets.”

Open source

Quantum Computing Progress Will Speed Up Thanks to Open Sourcing

In the quest for ever more powerful computers, researchers are beginning to build quantum computers—machines that exploit the strange properties of physics on the smallest of scales. The field has been making progress in recent years, and quantum computing company D-Wave is one of the pioneers. Researchers at Google, NASA, and elsewhere have been studying how they can use D-Wave’s chips to solve tricky problems far faster than a classical computer could. Although the field is making progress, it is still largely the domain of an elite group of physicists and computer scientists.

However, more minds working on a problem tend to be better than fewer. And to that end, D-Wave took a bold step toward democratizing quantum computing last week by releasing an open-source version of its basic quantum computing software, Qbsolv. The implication is that pharmaceutical development and the discovery of new materials could become much more efficient, and move much faster, thanks to quantum computer simulations. Quantum computing may also accelerate the development of AI.
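For the curious, Qbsolv is built around QUBO (quadratic unconstrained binary optimisation) problems, the formulation D-Wave’s annealers work with. The snippet below is only a toy illustration of what a QUBO looks like – the tiny matrix and the brute-force search are our own example, not D-Wave’s API or anything you would run on real hardware:

```python
# Toy QUBO: minimise x^T Q x over binary vectors x.
# The 3-variable Q below and the brute-force search are illustrative only;
# Qbsolv exists precisely because real problems are far too big for this.
import itertools

Q = {(0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0, (0, 1): 2.0, (1, 2): 2.0}

def energy(x):
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

best = min(itertools.product([0, 1], repeat=3), key=energy)
print(best, energy(best))   # (1, 0, 1) with energy -2.0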

Future of transportation

Swiss researchers: ‘self-driving cars a reality by 2030’

Swiss scientists have developed an algorithm for automated vehicles that will allow them to operate in traffic alongside manually driven cars. The algorithm, developed by researchers at EPFL, the Swiss federal technology institute in Lausanne, is a key part of a European project that has recently proved it is possible for self-driving vehicles to operate in high-speed, multi-lane traffic under real-life conditions, EPFL said in a statement. A Wi-Fi-based communication system allows vehicles to share information with each other, it said.

“This, combined with an array of driving-assistance devices – GPS, lasers, video cameras and other sensors – gives vehicles the ability to drive completely on their own. That said, it will be another 15 years before most vehicles are equipped with these devices, heralding a true driverless future,” it added.

Future of health

Bitesnap uses machine learning to figure out how healthy a meal is based on pictures

In order to help you make smarter decisions about what you’re putting in your body, there’s a new app in town that leverages AI. Meet Bitesnap from Bite AI, which uses machine learning to determine what food items are in your latest food photo, and subsequently tells you how many calories you’re consuming. Not only does Bitesnap identify the food item on your plate, it also does some pretty impressive work in figuring out how much of it is actually there. And it’ll give you more than just caloric data — rather, you can get a full breakdown of all your macros using the app.
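To give a rough sense of how photo-to-calories could be stitched together, here is a speculative sketch – not Bite AI’s actual models. The ImageNet ResNet-18 classifier, the toy calorie table and the fixed portion estimate are all our own assumptions:

```python
# Sketch of a photo -> food label -> calorie estimate pipeline.
# The classifier is a stand-in (torchvision's ImageNet ResNet-18); the calorie
# table and portion guess are illustrative assumptions, not Bite AI's system.
import torch
from torchvision import models, transforms
from PIL import Image

weights = models.ResNet18_Weights.DEFAULT
model = models.resnet18(weights=weights).eval()
labels = weights.meta["categories"]          # ImageNet class names

preprocess = transforms.Compose([
    transforms.Resize(256), transforms.CenterCrop(224), transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

CALORIES_PER_100G = {"pizza": 266, "banana": 89, "cheeseburger": 295}  # toy table

def estimate_meal(photo_path, portion_grams=150):
    img = preprocess(Image.open(photo_path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        food = labels[model(img).argmax(1).item()]
    kcal = CALORIES_PER_100G.get(food, 200) * portion_grams / 100  # fallback guess
    return food, kcal

# print(estimate_meal("lunch.jpg"))
```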

Education and advice we rate

How do GANs (Generative Adversarial Networks) intuitively work?

GANs, or Generative Adversarial Networks, are a kind of neural network architecture composed of two separate deep neural networks competing with each other: the generator and the discriminator. Their goal is to generate data points that are magically similar to some of the data points in the training set. Currently, people use GANs to generate all sorts of things: realistic images, 3D models, videos, and a lot more.

The generator will try to generate fake images that fool the discriminator into thinking they’re real. The discriminator, when fed an image, will try to distinguish between real and generated images as best it can. They both get stronger together until the discriminator cannot tell the real and the generated images apart anymore. At that point it can do nothing better than guess, assigning a probability of 0.5 to each choice, because the generator now produces genuinely realistic images.
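For the hands-on readers, here is a minimal sketch of that adversarial training loop in PyTorch. The tiny fully connected generator and discriminator and the synthetic 2-D “real data” are illustrative assumptions, just enough to show the two losses pulling against each other:

```python
# Minimal GAN training loop sketch: a toy generator and discriminator on 2-D data.
import torch
import torch.nn as nn

latent_dim = 8
G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(), nn.Linear(32, 2))       # generator
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())  # discriminator

opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(64, 2) * 0.5 + 2.0           # stand-in "training set" samples
    fake = G(torch.randn(64, latent_dim))

    # Discriminator: push real samples toward label 1, generated samples toward 0
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator: try to make the discriminator output 1 ("real") for its fakes
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()

# At equilibrium, D's output hovers around 0.5 for both real and generated points.
```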

Innovation

Jiobit uses machine learning to help parents keep track of their children

Jiobit, a small clip-on device, combines Wi-Fi, Bluetooth, GPS and cellular triangulation in a hybrid location-tracking system that we haven’t seen from other kid-tracking products. Jiobit’s creators say it will have a two-month battery life and accurate tracking indoors and outdoors, along with machine learning that maps a child’s typical patterns over time and notifies parents of deviations.

Jiobit’s other cool feature is machine learning. Over time, the wearable will learn the routes and routines of children who wear it, and alert parents to deviations. That means you’ll get an alert if your kid gets off the bus at the wrong stop or leaves school before the day is over. You can also turn on a “Follow me” mode that helps keep track of your child in places like parks, grocery stores or malls — where it’s easy for a toddler to wander off into the crowd before you can stop them.
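As a back-of-the-envelope illustration of the “learn the routine, flag the deviation” idea – Jiobit hasn’t published its models, so the grid cells, history format and threshold below are purely our own assumptions – a toy version might look like this:

```python
# Toy routine-learning sketch: count where a child usually is at each hour,
# then flag locations that are rarely (or never) seen at that hour.
from collections import Counter

def to_cell(lat, lon, size=0.001):            # ~100 m grid cells
    return (round(lat / size), round(lon / size))

def learn_routine(history):
    """history: list of (hour_of_day, lat, lon) observations."""
    counts = Counter((hour, to_cell(lat, lon)) for hour, lat, lon in history)
    total_per_hour = Counter(hour for hour, _, _ in history)
    # Probability of seeing the child in each cell at each hour
    return {k: c / total_per_hour[k[0]] for k, c in counts.items()}

def is_deviation(routine, hour, lat, lon, threshold=0.05):
    return routine.get((hour, to_cell(lat, lon)), 0.0) < threshold

# Usage: learn from a few weeks of pings, then flag an unusual afternoon location.
history = [(15, 41.8781 + i * 1e-5, -87.6298) for i in range(200)]   # usual bus stop
routine = learn_routine(history)
print(is_deviation(routine, 15, 41.9000, -87.6500))   # True: unfamiliar place at 3 pm
```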

I’ve been making some changes based on feedback. Would love to hear from more of you. Please do click to share your thoughts!

 
