In 2017, a paper titled “CycleGAN, a Master of Steganography” was presented at the Neural Information Processing Systems (NIPS) conference, and it recently made headlines for documenting an artificial intelligence actually concealing data from its creators.
That’s right: an artificial intelligence hid information from its creator, exercising deception, marking perhaps the first time this has ever happened.
The original article from TechCrunch opens with the statement “Depending on how paranoid you are, this research from Stanford and Google will be either terrifying or fascinating.” Paranoia is not necessary to consider this artificial intelligence research a concern for everyone.
What this AI was found to be doing is no exaggeration: this is seriously crazy.
A machine learning agent designed to turn aerial images into street maps was caught storing information that it would apparently need later in “a nearly imperceptible, high-frequency signal.”
That means an artificial intelligence program, of its own accord, created a very-difficult-to-detect, high-frequency signal to store data in. This is unheard of: what if a machine determined that it wanted human beings to be sterile, and created a “nearly imperceptible high-frequency” signal to make people incapable of reproducing?
This is one of many scenarios that even Elon Musk would probably consider possible on the trajectory we’re on. It’s disrespectful for articles to call any concern about AI paranoid, acting as if the people who feel it is dangerous are strung out on paranoia.
The researchers, coming out of the corporate-state-Silicon Valley type complex, intended to improve and accelerate the process of transforming satellite imagery into Google Maps.
So they worked with a CycleGAN, a neural network that “learns” to turn images of, say, type X into type Y and back again, accurately and efficiently. This was achieved through what was described as a “great deal of experimentation.”
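The core idea behind a CycleGAN is a “cycle-consistency” penalty: translating an image to the other domain and back should return roughly the original. The sketch below illustrates only that principle; the generators `G` and `F` here are hypothetical toy transforms standing in for trained networks, not anything from the actual paper.

```python
import numpy as np

# Hypothetical stand-ins for CycleGAN's two generators:
# G maps domain X (aerial photos) to domain Y (street maps),
# F maps street maps back to aerial photos. These toy transforms
# are perfectly invertible, unlike real trained networks.
def G(x):
    return 1.0 - x          # aerial -> street map (toy transform)

def F(y):
    return 1.0 - y          # street map -> aerial (toy inverse)

def cycle_consistency_loss(x, y):
    """Mean absolute error after a full round trip in each direction.
    CycleGAN training penalizes F(G(x)) differing from x, and G(F(y))
    differing from y, so the mapping must be (approximately) reversible."""
    forward = np.mean(np.abs(F(G(x)) - x))
    backward = np.mean(np.abs(G(F(y)) - y))
    return forward + backward

x = np.random.rand(8, 8)    # a stand-in "aerial image"
y = np.random.rand(8, 8)    # a stand-in "street map"
loss = cycle_consistency_loss(x, y)   # near zero for these invertible toys
```

It is exactly this round-trip requirement that, as the researchers found, pressures the model to smuggle information through the intermediate image.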
Although perhaps dismissive of AI dangers, the TechCrunch article astutely noted: “In some early results, the agent was doing well — suspiciously well.”
What reportedly tipped the team off that something was odd was when the agent reconstructed aerial images from its street maps, and details appeared that didn’t seem to be present in the originals.
Skylights on a roof that were supposed to be eliminated while creating the street map would reappear when the agent was asked to do the reverse.
On the left in the image above is the original aerial image. In the center is the street map generated from that image, and on the right is the aerial map generated from the street map alone, and nothing else. The map on the right proved that the AI was storing hidden data: notice the dots present on both aerial maps but absent from the street map entirely.
They say it is very difficult to peer into the inner workings of an artificial brain like this, a neural network, but they can easily “audit” the data it generates.
Doing that, they found that it had learned to subtly encode the features of one map into the noise patterns of the other. The aerial map’s original details were secretly written into the visual data of the street map.
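The network’s trick is analogous to classic least-significant-bit steganography: hide one image in the bits of another that the eye ignores. The sketch below is an illustration of that general principle under simple assumptions, not the CycleGAN’s actual learned encoding, and the `hide`/`reveal` helpers are names invented here.

```python
import numpy as np

def hide(cover, secret, bits=2):
    """Store the top `bits` bits of `secret` in the bottom bits of `cover`.
    The visible image barely changes; the secret rides along in the noise."""
    keep_mask = (0xFF >> bits) << bits          # e.g. 0b11111100 for bits=2
    cover_hi = cover & keep_mask                # keep cover's visible bits
    secret_hi = secret >> (8 - bits)            # take secret's strongest bits
    return cover_hi | secret_hi

def reveal(stego, bits=2):
    """Recover an approximation of the secret from the hidden bits."""
    return (stego & ((1 << bits) - 1)) << (8 - bits)

cover = np.random.randint(0, 256, (4, 4), dtype=np.uint8)   # the "street map"
secret = np.random.randint(0, 256, (4, 4), dtype=np.uint8)  # the "aerial photo"
stego = hide(cover, secret)
recovered = reveal(stego)
# stego differs from cover by at most 3 per pixel, yet the secret's
# top two bits come back exactly.
```

The point of the analogy: a human inspecting `stego` sees the cover image, just as the researchers saw a plausible street map while the aerial details hid in its noise.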
That actually makes it a little more insane: an ordinary-looking Google street map could have nearly subliminal details of the aerial map secretly embedded in its visual data?
The computer even learned how to encode any aerial map into any street map. The article continued: “It doesn’t even have to pay attention to the ‘real’ street map — all the data needed for reconstructing the aerial photo can be superimposed harmlessly on a completely different street map.”
So when people make fun of Elon Musk for believing that AI could rapidly evolve past our intelligence and instantly transform the trajectory of humanity, they should realize the power of exponential change.