You Look Like a Thing and I Love You

The researchers also tried putting the AI in a 3-D maze. Sure enough, it learned to navigate the maze so it could see interesting new sections it hadn’t explored yet. Then they put a TV on one of the maze walls, a TV that showed random unpredictable images. As soon as the AI found the TV, it was transfixed. It stopped exploring the maze and focused on the super-interesting TV.

This book is fascinating and laugh-out-loud funny, with anecdotes like the one above.

The researchers had neatly demonstrated a well-known glitch of curiosity-driven AI known as the noisy TV problem. The way they had designed it, the AI was chaos-seeking rather than truly curious. It would be just as mesmerized by random static as by movies. So one way of combating the noisy TV problem is to reward the AI not just for being surprised but also for actually learning something.
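That fix is easy to sketch in a few lines. The toy code below is my own illustration, not from the book: it compares a "surprise" bonus (raw prediction error) with a "learning progress" bonus (the drop in average prediction error over time) for a simple predictor watching random static versus a boring but learnable wall.

```python
import random

def prediction_errors(observations, alpha=0.05):
    """Prediction errors of a simple exponential-moving-average predictor."""
    errors, pred = [], 0.0
    for obs in observations:
        errors.append(abs(obs - pred))   # how surprised the predictor was
        pred += alpha * (obs - pred)     # slowly update the prediction
    return errors

def surprise_reward(errors):
    # "Surprise" bonus: the raw prediction error at each step.
    return errors

def progress_reward(errors, window=50):
    # "Learning progress" bonus: how much the *average* error fell
    # from one window of steps to the next.
    means = [sum(errors[i:i + window]) / window
             for i in range(0, len(errors) - window + 1, window)]
    return [max(a - b, 0.0) for a, b in zip(means, means[1:])]

random.seed(0)
noisy_tv = [random.random() for _ in range(1000)]  # unpredictable static
plain_wall = [0.5] * 1000                          # boring but learnable

static_errors = prediction_errors(noisy_tv)
wall_errors = prediction_errors(plain_wall)

# Surprise never dries up in front of the TV, so a surprise-driven agent
# stays glued to it; learning progress does dry up, so a progress-driven
# agent eventually wanders off to explore the rest of the maze.
print(sum(surprise_reward(static_errors)[-100:]))  # still large
print(sum(progress_reward(static_errors)))         # near zero
```

The predictor here is deliberately trivial; the point is only that random static keeps the error high forever, while the error can only *drop* when there is actually something to learn.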

The good news is AIs are not going to take over the world anytime soon. Their neural nets are far less complex than our brains, and they basically don’t have any long-term memory. Then again, maybe we’re all just AIs scrolling endlessly through social media (we created our own noisy TV problem).

Researchers are working on designing AIs that can master a topic with fewer examples (an ability called one-shot learning), but for now, if you want to solve a problem with AI, you’ll need tons and tons of training data. The popular ImageNet set of training data for image generation or image recognition currently has 14,197,122 images in only one thousand different categories.

Looking at how the size of the world’s largest neural networks has increased over time, a leading researcher estimated in 2016 that artificial neural networks might be able to approach the number of neurons in the human brain by around 2050.1 Will this mean that AI will approach the intelligence of a human then? Probably not even close. Each neuron in the human brain is much more complex than the neurons in an artificial neural network—so complex that each human neuron is more like a complete many-layered neural network all by itself. So rather than being a neural network made of eighty-six billion neurons, the human brain is a neural network made of eighty-six billion neural networks. And there are far more complexities to our brains than there are to ANNs, including many we don’t fully understand.

This quirk of neural networks is known as catastrophic forgetting. A typical neural network has no way of protecting its long-term memory.
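You can see what that means with a minimal toy (again, my own sketch, not from the book): a one-weight logistic "network" trained on task A, then trained only on task B. Because nothing protects the old weights, task B simply overwrites task A.

```python
import math

def sigmoid(t):
    return 1.0 / (1.0 + math.exp(-t))

def train(w, b, data, epochs=200, lr=0.5):
    """Plain gradient-descent training of a one-weight logistic 'network'."""
    for _ in range(epochs):
        for x, y in data:
            p = sigmoid(w * x + b)
            w += lr * (y - p) * x
            b += lr * (y - p)
    return w, b

def accuracy(w, b, data):
    return sum((sigmoid(w * x + b) > 0.5) == (y == 1) for x, y in data) / len(data)

task_a = [(-2, 0), (-1, 0), (1, 1), (2, 1)]   # task A: positives on the right
task_b = [(x, 1 - y) for x, y in task_a]      # task B: same inputs, labels flipped

w, b = train(0.0, 0.0, task_a)
print(accuracy(w, b, task_a))   # 1.0: task A learned

w, b = train(w, b, task_b)      # now train only on task B...
print(accuracy(w, b, task_a))   # 0.0: task A catastrophically forgotten
```

The same weights that encoded task A are the only place task B can be stored, so learning the new task destroys the old one.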

I’ve been fascinated with GANs since reading The Artist in the Machine, even recently learning Python to start experimenting (I have a long way to go).

The GAN is, in a way, using its generator and discriminator to perform a Turing test in which it is both judge and contestant.
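That judge-and-contestant loop can be written out in miniature. Below is a deliberately tiny hand-rolled GAN (my own illustration, not from the book): the generator is just a line a*z + b, the discriminator a one-input logistic unit, and the two take turns updating against each other until the fakes drift toward the real data.

```python
import math
import random

def sigmoid(t):
    # Guard against overflow for very negative inputs.
    return 1.0 / (1.0 + math.exp(-t)) if t > -60 else 0.0

random.seed(1)

# Real data: the distribution the generator must learn to imitate.
def real_sample():
    return random.gauss(4.0, 0.5)

a, b = 1.0, 0.0   # generator: fake sample g(z) = a*z + b
w, c = 0.0, 0.0   # discriminator (the judge): D(x) = sigmoid(w*x + c)

lr = 0.02
for step in range(4000):
    # --- judge's turn: push D(real) up and D(fake) down ---
    x_real = real_sample()
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * ((1 - d_real) - d_fake)

    # --- contestant's turn: adjust a, b so D(fake) goes up (fool the judge) ---
    z = random.gauss(0.0, 1.0)
    x_fake = a * z + b
    d_fake = sigmoid(w * x_fake + c)
    grad = (1 - d_fake) * w          # gradient of log D(fake) w.r.t. b
    a += lr * grad * z               # gradient w.r.t. a picks up a factor of z
    b += lr * grad

fakes = [a * random.gauss(0, 1) + b for _ in range(1000)]
print(sum(fakes) / len(fakes))   # the fakes' mean, which should drift toward 4
```

Each player only ever sees the other's current behavior, which is exactly why real GAN training is so famously unstable; this sketch works on a one-dimensional problem where there isn't much room to go wrong.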

Sometimes people will design GANs that don’t try to match the input dataset exactly but instead try to make something “similar but different.” For example, some researchers designed a GAN to produce abstract art, but they wanted art that wasn’t a boring knockoff of the art in the training data. They set up the discriminator to judge whether the art was like the training data yet not identifiable as belonging to any particular category. With these two somewhat contradictory goals, the GAN managed to straddle the line between conformity and innovation.11 And consequently, its images were popular—human judges even rated the GAN’s images more highly than human-painted images.

And lastly…

“For a while, when you typed ‘I’m going to my Grandma’s,’ GBoard would actually suggest ‘funeral.’ It’s not wrong, per se.”

The author, Janelle Shane, also writes an AI humor blog called AI Weirdness. I highly recommend both the book and the blog.