Fooling Deep Neural Networks

A video summary of the paper: Nguyen, Anh, Jason Yosinski, and Jeff Clune. “Deep Neural Networks are Easily Fooled: High Confidence Predictions for Unrecognizable Images.” arXiv preprint arXiv:1412.1897 (2014).

From MIT Technology Review: “A technique called deep learning has enabled Google and other companies to make breakthroughs in getting computers to understand the content of photos. Now researchers at Cornell University and the University of Wyoming have shown how to make images that fool such software into seeing things that aren’t there. The researchers can create images that appear to a human as scrambled nonsense or simple geometric patterns, but are identified by the software as an everyday object such as a school bus. The trick images offer new insight into the differences between how real brains and the simple simulated neurons used in deep learning process images” (December 24, 2014).
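The core mechanism behind such fooling images can be sketched with a toy example. This is my own illustration, not the paper’s code: the paper attacks real deep convolutional networks (using evolved images and gradient ascent); here a fixed random linear-softmax “classifier” over a 64-dimensional input stands in for the network, and we run gradient ascent from noise to maximize the confidence of an arbitrary target class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained classifier: a fixed random linear layer + softmax.
# (The paper fools real convnets; this only illustrates the mechanism.)
W = rng.normal(size=(10, 64))  # 10 classes, 64-dimensional "image"

def softmax(z):
    z = z - z.max()           # numerical stability
    e = np.exp(z)
    return e / e.sum()

def fool(target, steps=500, lr=0.5):
    """Gradient ascent on the target class's log-confidence, from noise."""
    x = rng.normal(scale=0.01, size=64)
    for _ in range(steps):
        p = softmax(W @ x)
        # d/dx log p[target] = W[target] - sum_k p[k] * W[k]
        grad = W[target] - p @ W
        x += lr * grad
    return x, softmax(W @ x)[target]

x, conf = fool(target=3)
print(f"confidence in class 3: {conf:.4f}")
```

The point of the sketch: the final `x` is still unstructured noise to a human eye, yet the classifier assigns it near-certain confidence in the chosen class, because the optimization only has to push the input along the class’s decision direction, not make it look like anything.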

Connections: Learning everything about anything. Also: Google researchers have developed software that can match complex images with simple sentences describing whole scenes, rather than just objects, e.g. “a group of young people playing a game of frisbee.”  

How does virtual reality affect the brain?

Nature Neuroscience, advance online publication: 24 November 2014.
UCLA Newsroom, 24 November 2014.

“Put rats in an IMAX-like surround virtual world limited to vision only, and the neurons in their hippocampi seem to fire completely randomly — and more than half of those neurons shut down — as if the neurons had no idea where the rat was, UCLA neurophysicists found in a recent experiment. Put another group of rats in a real room (with sounds and odors) designed to look like the virtual room, and they were just fine.” Kurzweil Accelerating Intelligence, November 25, 2014.

This raises many interesting questions: What happens when humans hear or read spatial descriptions or look at maps? Are their hippocampi building maps? Partial maps? No maps at all? How does this relate to the results reported in Benjamin Bergen’s book? How does the brain distinguish reality and fiction?

Radhika Nagpal: One of 10 people who mattered this year in science


Source: Reflection Films

From Nature, volume 516, issue 7531, December 17, 2014.
“When Radhika Nagpal was a high-school student in India, she hated biology: it was the subject that girls were supposed to study so that they could become doctors. Never being one to follow tradition, Nagpal was determined to become an engineer. Now she is — leading an engineering research team at Harvard University in Cambridge, Massachusetts. But she also has a new appreciation for the subject she once disliked. This year, her group garnered great acclaim for passing a milestone in biology-inspired robotics. Taking their cue from the way in which ants, bees and termites build complex nests and other structures with no central direction, Nagpal’s group devised a swarm of 1,024 very simple ‘Kilobots’. Each Kilobot was just a few centimetres wide and tall, moved by shuffling about on three spindly legs and communicated with its immediate neighbours using infrared light. But the team showed that when the Kilobots worked together, they could organize themselves into stars and other two-dimensional shapes.”
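One published ingredient of the Kilobot shape-formation algorithm is “gradient formation”: each robot learns its hop distance from a seed robot using only messages from neighbours within infrared range. A minimal sketch of the idea (a centralized breadth-first flood standing in for the actual distributed message passing; the positions and communication radius here are illustrative, not the real hardware’s):

```python
from collections import deque

def gradient_formation(positions, seed, comm_radius=1.5):
    """Flood hop counts through the neighbour graph: each robot's hop
    value is one more than the smallest value heard from a neighbour."""
    hops = {seed: 0}
    queue = deque([seed])
    while queue:
        cur = queue.popleft()
        cx, cy = positions[cur]
        for other, (ox, oy) in positions.items():
            if other in hops:
                continue  # already has a hop value
            if (cx - ox) ** 2 + (cy - oy) ** 2 <= comm_radius ** 2:
                hops[other] = hops[cur] + 1  # one hop beyond the neighbour
                queue.append(other)
    return hops

# Five robots in a line, one unit apart; robot 0 is the seed.
positions = {i: (float(i), 0.0) for i in range(5)}
hops = gradient_formation(positions, seed=0)
print(hops)  # {0: 0, 1: 1, 2: 2, 3: 3, 4: 4}
```

In the real swarm each robot runs this update locally and continuously, which is what lets a thousand very simple machines agree on their rough position in the growing shape without any central direction.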
Earlier entry: Inferring simple rules from complex structures.