Computer Computer Interaction

Pareidoloop: sample output

Last year the Cornell Creative Machine Lab created a system where two chatbots, machines designed to emulate the conversational abilities of humans, engaged in a “conversation” with each other. The result, visualized as an animation with avatars, was a fantastic, if vaguely absurd, example of two computers interacting with each other.

But what does it even mean for a computer to interact with another computer? Sure, there are basic ways in which systems build on one another. But when the nature of the interaction is a bit more subjective, the conversation quickly turns into one about AI.

So it was cool to recently discover Pareidoloop. Instead of being about language (like the chatbot experiment), this one is about visuals. It’s about what a computer can “draw” that’s recognizable to us.

Pareidoloop scanning images for faces

Pareidoloop starts by generating random collections of polygons and feeding them into a face-detector application. Over time it learns which “drawings” the detector rates as more face-like, and uses that feedback to create collections that are increasingly face-like.

Pareidoloop
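
Pareidoloop itself is written in JavaScript, but the core loop is simple enough to sketch. Here’s a rough Python approximation of the idea (not the author’s actual code), using OpenCV’s stock frontal-face Haar cascade and, as a crude stand-in for a detector confidence score, the area of the largest face it finds:

import random
import numpy as np
import cv2

SIZE = 128        # canvas size in pixels
N_POLYGONS = 40   # how many translucent polygons make up one "drawing"

# OpenCV's bundled frontal-face Haar cascade stands in for Pareidoloop's detector.
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def random_polygon():
    # A random grey triangle: (points, intensity, opacity).
    pts = [(random.randint(0, SIZE), random.randint(0, SIZE)) for _ in range(3)]
    return [pts, random.randint(0, 255), random.uniform(0.1, 0.6)]

def render(polygons):
    # Composite the semi-transparent polygons onto a mid-grey canvas.
    canvas = np.full((SIZE, SIZE), 128, dtype=np.uint8)
    for pts, shade, alpha in polygons:
        layer = canvas.copy()
        cv2.fillPoly(layer, [np.array(pts, dtype=np.int32)], shade)
        canvas = cv2.addWeighted(layer, alpha, canvas, 1 - alpha, 0)
    return canvas

def fitness(image):
    # Crude "face-likeness" score: area of the largest detection, 0 if none.
    faces = detector.detectMultiScale(image, scaleFactor=1.1, minNeighbors=1)
    return max((w * h for (_, _, w, h) in faces), default=0)

def mutate(polygons):
    # Copy the drawing and replace one polygon at random.
    child = [list(p) for p in polygons]
    child[random.randrange(len(child))] = random_polygon()
    return child

# Hill climbing: keep a mutation only if the detector likes it more.
best = [random_polygon() for _ in range(N_POLYGONS)]
best_score = fitness(render(best))
for step in range(20000):
    candidate = mutate(best)
    score = fitness(render(candidate))
    if score > best_score:
        best, best_score = candidate, score

cv2.imwrite("face_like.png", render(best))

Left running long enough, the loop drifts toward clusters of polygons that the detector is increasingly happy to call a face, which is essentially what the images above show.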

As I ventured down the black hole of related links, I found some interesting examples of image-recognition, facial-recognition, and machine-learning software being used in other, unusual ways.

The first is Roger Alsing’s Genetic Programming: Evolution of Mona Lisa, in which he got a program to learn how to recreate the Mona Lisa using only 50 semi-transparent polygons.

Mona Lisa evolution
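
The mechanics are much the same as Pareidoloop’s, only the fitness function changes: instead of asking a face detector, Alsing’s program asks how closely the rendered polygons match the target painting. In the sketch above you would swap fitness() for something along these lines (the file path is just a placeholder):

import numpy as np
import cv2

# Load the target painting in greyscale.
target = cv2.imread("mona_lisa.jpg", cv2.IMREAD_GRAYSCALE).astype(np.float64)

def fitness(image):
    # Negative sum of squared pixel differences: higher means closer to the target.
    return -np.sum((image.astype(np.float64) - target) ** 2)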

The second is Greg Borenstein’s project using facial-recognition software to find faces in everyday objects. Seeing faces in things is something we humans do all the time; it’s called pareidolia. And it’s fascinating to see when computers see faces where we see them, and when they find them somewhere completely unexpected.

Finding a face in a cookie
Not at all where I saw a face
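
If you want to go hunting for pareidolia yourself, the same Haar cascade from earlier will happily scan your own photos. This little sketch (the file names are placeholders) draws a box around anything it thinks is a face, and lowering minNeighbors makes it more eager to see faces that aren’t really there:

import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("cookie.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Lower minNeighbors to make the detector more trigger-happy (more pareidolia).
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=2)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("cookie_faces.jpg", img)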