Do you know about the Law of Unintended Consequences?
It broadly comes down to this: “any action that involves a complex system is certain to have unintended consequences.”
This is especially relevant in the field of Machine Learning, where we work with highly complex software. Machine Learning systems almost always have unintended side effects.
Here’s a beautiful example.
Consider a Deep Convolutional Inverse Graphics Network, or DCIGN.
It looks like this:
It may look complicated, but a DCIGN is actually just made up of a Convolutional Neural Network (CNN) and a Deconvolutional Network (DN) mounted end-to-end.
The first half reads in an image and converts it to abstract information called a ‘feature map’. The green nodes in the middle can then make subtle changes to the map, and the second half takes the modified map and reconstructs it back into an image.
So in a nutshell a DCIGN reads in an image, makes subtle changes, and produces a new image.
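That encode-modify-decode pipeline can be sketched in a few lines of NumPy. This is a toy stand-in, not the actual DCIGN layers: average pooling plays the role of the convolutional half, and nearest-neighbor upsampling plays the role of the deconvolutional half.

```python
import numpy as np

def encode(image, factor=4):
    """'CNN half': average-pool the image down to a small feature map."""
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def decode(feature_map, factor=4):
    """'Deconvolutional half': upsample the feature map back to image size."""
    return feature_map.repeat(factor, axis=0).repeat(factor, axis=1)

image = np.random.rand(16, 16)          # a fake 16x16 grayscale "photo"
reconstruction = decode(encode(image))  # same shape as the input...

print(reconstruction.shape)             # (16, 16)
```

Notice that the reconstruction only contains the blockwise averages: squeezing an image through a small feature map throws fine detail away. Keep that in mind for what follows.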
The most famous network built on this encoder-decoder idea is CycleGAN, which can do mind-blowing stuff like this:
Look at the Monet → Photo conversion. A hundred years ago, Claude Monet saw that river scene and turned it into a painting. CycleGAN just ran that process in reverse and reconstructed a photorealistic image from his painting.
Isn’t that mind-blowing?
It seems as if you can do anything with CycleGAN.
So researchers at Google and Stanford asked themselves if you could use CycleGAN to map satellite photos to street maps and back again.
The results exceeded their wildest expectations. Look at this:
A perfect mapping from an aerial photo to a street map and back again.
But look closer.
Look at that white building at the bottom of the image. See all those skylights on the roof?
They’re not in the middle image! And yet, they reappear in the final image. So how did the second half of the network know where to put them all back?
Common sense tells us that this is impossible.
And our common sense is correct. The AI actually cheated!
The researchers gave the software an impossible task, and it found a loophole and cheated.
Let’s start with the basics. We know that mapping aerial photos to street maps and back again is impossible, because we lose information when we convert the photo to a street map. Subtle details like the positions of skylights and smokestacks on roofs are lost.
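That information loss is easy to demonstrate with a toy example. The values and the threshold here are invented (a real street map is a semantic rendering, not a simple threshold), but the principle is the same: photo-to-map is a many-to-one mapping, so the inverse is ambiguous.

```python
import numpy as np

def photo_to_map(photo):
    """Collapse continuous pixel values into two map classes, e.g. road/building."""
    return (photo > 0.5).astype(float)

roof_with_skylights = np.array([0.9, 0.7, 0.9, 0.7])  # skylights = darker spots
plain_roof          = np.array([0.8, 0.8, 0.8, 0.8])

# Both roofs produce the exact same map...
print(photo_to_map(roof_with_skylights))  # [1. 1. 1. 1.]
print(photo_to_map(plain_roof))           # [1. 1. 1. 1.]
# ...so no honest decoder could know which photo to reconstruct.
```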
And yet, the software managed to reconstruct a perfect aerial photo. So it somehow managed to hide all these details in the middle image without us noticing.
Let’s take a closer look at the middle image. We’re going to crank up the contrast to reveal small color differences:
Woah, do you see that?
There’s actually a wealth of information hidden in the map image, encoded as tiny color changes that are imperceptible to human eyes.
You can see lots of red pixels clustered around the white building. These are used by the second half of the network to draw the skylights in the correct place on the roof.
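Here's a hand-rolled sketch of the same trick. CycleGAN learns its own encoding; this invented scheme just shows how perturbations of a single 8-bit step, far too small to see, can carry data that a contrast boost reveals.

```python
import numpy as np

rng = np.random.default_rng(0)

# The visible street map: pixel values snapped to a coarse palette.
map_image = rng.integers(5, 12, size=(8, 8)) / 16.0

# The details to smuggle through: e.g. where the skylights are.
skylight_bits = rng.integers(0, 2, size=(8, 8))

EPS = 1.0 / 255.0                                  # one 8-bit step: invisible
stego = map_image + EPS * (2 * skylight_bits - 1)  # nudge each pixel up or down

# "Crank up the contrast": look at the tiny residual against the palette.
residual = stego - np.round(stego * 16) / 16
recovered = (residual > 0).astype(int)

print(np.array_equal(recovered, skylight_bits))    # True
```

Every hidden bit comes back out intact, even though no pixel moved by more than 1/255.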
This is the equivalent of the AI cheating and secretly writing down all the test answers to pass an exam 😉
To test their theory, the researchers played a trick on the AI. They used a different map image, but overlaid it with the ‘cheat notes’ from the original map.
If the AI really is cheating, it’s going to ignore the new map and reconstruct the original aerial photo from the cheat notes.
And here are the results:
And that proves it. The software is only using the cheat data to reconstruct the aerial photo; it’s completely ignoring the street map.
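The swap experiment can be simulated with a toy hiding scheme (invented here for illustration; the real experiment perturbs CycleGAN's actual outputs): take the invisible residual from one map, paste it onto a completely different map, and see what a cheat-reading decoder produces.

```python
import numpy as np

rng = np.random.default_rng(1)
EPS = 1.0 / 255.0  # one 8-bit step: an imperceptible perturbation

def hide(map_image, bits):
    """Nudge each palette-aligned pixel up or down by EPS to store one bit."""
    return map_image + EPS * (2 * bits - 1)

def cheat_decoder(stego):
    """Ignores the visible map entirely; reads only the tiny residuals."""
    residual = stego - np.round(stego * 16) / 16
    return (residual > 0).astype(int)

map_a = rng.integers(5, 12, size=(8, 8)) / 16.0  # palette-aligned map A
map_b = rng.integers(5, 12, size=(8, 8)) / 16.0  # a completely different map B
photo_a_bits = rng.integers(0, 2, size=(8, 8))   # details of aerial photo A

notes = hide(map_a, photo_a_bits) - map_a        # extract A's cheat notes
stego_b = map_b + notes                          # overlay them on map B

# The decoder reconstructs photo A's details even though it was shown map B.
print(np.array_equal(cheat_decoder(stego_b), photo_a_bits))  # True
```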
So here’s the moral of this story:
Be very careful when you are building machine learning solutions, because these systems will do whatever it takes to produce the result you ask for. If you give them an impossible task, they may well find a loophole and cheat.
If the results of your AI app seem too good to be true, they probably are. So always make sure to include a test that catches your software off guard to expose any cheating.
How do you feel about AIs cheating? Do you think it’s funny, cute, or scary?