2004. The image on each disk is created by a software process in which the images 'emerge from the results of thousands of local interactions between autonomous elements.'
2001. Users can interact with algorithms that generate animations and sound.
2007. Generative Audiovisual Performance. Images and sound generated by computer algorithms and performance.
2004. A projected grid of colors is programmed to endlessly portray abstract battles in which colors conquer their neighbors.
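A grid in which colors "conquer" their neighbors can be read as a cellular-automaton-style update rule. The sketch below is a minimal illustration of that idea, not the artist's actual program: each cell may adopt the color of a randomly chosen neighbor on every step, so regions of color expand and retreat over time. All names and parameters here are illustrative assumptions.

```python
import random

def step(grid):
    """One update: each cell may be 'conquered' by a random neighbor's color."""
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            nr, nc = random.choice([(r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)])
            if 0 <= nr < rows and 0 <= nc < cols:
                new[r][c] = grid[nr][nc]  # the neighbor's color takes this cell
    return new

random.seed(1)
# Start from a random field of 4 colors (coded 0..3) on an 8x8 grid.
grid = [[random.randrange(4) for _ in range(8)] for _ in range(8)]
for _ in range(5):
    grid = step(grid)
```

Run indefinitely, such a rule never settles: territories keep shifting as colors overrun one another, which matches the caption's "endless abstract battles."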
CodedocII. 2003. Array of algorithmically generated animations responding to the curator's challenge to generate software art that connects and moves three points in space.
Software art that animates three spheres, whose movements explore 'the realm of the attraction/repulsion/indifference relationships.' Image shows a detail of the screen with one configuration out of an extremely large universe of possibilities.
2006. Viewers can breed swimbots whose offspring eventually gain swimming skills that help with mating and feeding.
2006. Evolutionary art program that evolves the next generation of images from viewers' votes on which images in a set should serve as parents. Image shows a parent image on top and a set of sibling child images evolved from that parent. The system uses spare computer cycles of volunteer computers linked over the Internet.
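The voting mechanism described here is a form of interactive evolution. As a hedged sketch of the general technique only (the actual system's genome and breeding rules are not given in the caption), each image can be encoded as a parameter vector; the image with the most viewer votes becomes the parent, and its "sibling children" are mutated copies:

```python
import random

def mutate(genome, rate=0.1):
    """A child is the parent with small random perturbations (a 'sibling')."""
    return [g + random.gauss(0, rate) for g in genome]

def next_generation(population, votes, n_children=6):
    """Viewers vote per image; the winner parents the next set of images."""
    parent = population[votes.index(max(votes))]
    # Keep the parent itself, plus mutated siblings derived from it.
    return [parent] + [mutate(parent) for _ in range(n_children - 1)]

random.seed(0)
# Six candidate images, each encoded as 4 illustrative parameters.
population = [[random.random() for _ in range(4)] for _ in range(6)]
votes = [2, 5, 1, 0, 3, 1]  # simulated viewer votes, one count per image
population = next_generation(population, votes)
```

Repeating this vote-and-breed cycle is what lets the imagery drift toward viewers' preferences without any explicit fitness function.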
2006. An evolutionary art system breeds new digital organisms taking into account viewer votes. Workers in a research center, where it was installed, indicate their choices on several touch screens.
2006. Billboard showing artificial variations of native Australian plant species resulting from algorithms simulating the effects of evolution and the environment.
2006. Three species of artificial life from a fictional world called Ludea compete for territory in a game-like urban environment. Image shows three a-life forms moving about the city. The signs show artificial language related to their culture.
2006. Text typed on a typewriter transforms into animated a-life creatures projected onto the paper.
Quorum Sensing. 2002. Projected A-life creatures swarm at the feet of viewers and respond to their movements.
2005. A-life creature with a virtual physiology, generated with the help of a qualitative physics environment and immersive VR techniques. Part of research on models and behavioral studies in Artificial Life and Hypozoologie.
2006. Hundreds of artificial life forms produce coordinated light and sound behavior. The proximity sensors on the bions enable individuals to react to visitors. These reactions can then spread throughout the group.
2005-08. Research project to study the possibility of evolving robots capable of exhibiting autonomous creative drawing behavior.
2005. AI program responds in real time to underlying structures of the dancers' motion by generating graphic agents on a foreground scrim to accompany the performance.
2007. Surveillance system uses image-analysis software to track pedestrians and convert the mundane video into special-effects treatments.
2007. Using a city-wide network of surveillance cameras, AI-based system attempts to recognize and capture cinematic moments from people's everyday lives and recombine them into feature films.
2003. Installation uses a facial expression recognition system to sound alerts when actresses' forced smiles deviate from acceptable levels.
2003. Active surveillance camera extracts snippets of people's actions and then creates a composite display of those snippets.
2003. Two decision-making machines debate their understandings of the viewer's behavior and indicate their reasoning via text displays and sound.
2005. An AI-driven virtual world lets the viewer dynamically interact with characters in the midst of marital problems.
2002. Web-based chatbot engages in dialogue with web visitors. Image shows some of Ruby's facial expressions.
2006. Hemispheric, robotic sculpture that expresses its emotions non-verbally by sweating at places it gets touched.
2005. AI program generates plots and narratives. For example, Inspiration Space generates hidden relationships and contextual emergence between words such as 'ant' and 'war'. Context-5W1H generates context from 'monkey' and 'study'. Kanji-plot generates relational English from a hieroglyphic symbol. Sentence Inspiration generates transformational sentences, from 'love is blind' to 'to be or not to be', for future narratives.
2006. Robots use AI, speech synthesis, and recognition to get into arguments and swearing matches with each other about what they read on the Salon.com website. They rise out of their boxes when ready for a discussion/argument.
2006. System attempts to read the emotional expression of the computer user's face on dimensions of pleasure and arousal. Image shows original expression, a graph of how the system interprets the expression, and a picture of a dragon automatically selected to indicate the mood.