Visualizing Machine Learning
An experience to break down and visualize a machine learning algorithm, specifically a convolutional neural network, in virtual reality. A user can see how her input, a hand-drawn number, is processed and features are extracted by the algorithm to identify the number.
Tandem
Tandem takes a human sketch or painting as input and lets a neural network 'imagine' on it. The human also communicates some aspects of personality (with a relaxed definition of personality) to this computer collaborator: moods like happy, sad, or dark, and even painting styles such as Cubism. The output from the neural network that imagines on the input is then used as an input to a style transfer implementation, and the result is presented to the human on top of their original input. The human can then tweak their input and continue this conversation.
TransProse
TransProse is a program that generates music from novels. It is an experiment in whether it is possible to programmatically translate abstract data (emotions) across mediums. TransProse works by identifying emotions throughout a novel, and using that underlying emotional structure to create musical pieces with the same emotional tone.
The Samples Never End
A live-coding musical performance on top of a large collection of audio samples organized and visualized with t-SNE.
TV comment bot
TVCommentBot is a computer program that watches live broadcast TV over the airwaves in real time and uses image analysis algorithms to improve it with new dialogue for you to enjoy. (Originally created for Art Hack Day: Deluge by David Lublin with Blair Neal and David Newbury.) Tune into the bot's live Twitter feed.
K-Means Equilibria
A visualization of k-means and the landscape of the local minima and maxima the algorithm traverses. K-means is a hill-climbing algorithm that, while guaranteed to converge, is not necessarily guaranteed to converge to the same place each time. The visualization shows many runs of k-means, each with the same input data and parameters but with different starting conditions. What is illustrated is a landscape of the hills and valleys which k-means traverses when trying to converge onto a set of clusters. This visualization clearly shows the fixed points (stable and unstable points) which k-means converges to and away from. Furthermore, at times it is easy to see where the algorithm fails to converge to appropriate solutions. When run using different parameters, this visualization results in a quite diverse set of landscapes.
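The behaviour the piece visualizes can be reproduced in a few lines. The sketch below (an illustrative reconstruction, not the artist's code, assuming only NumPy) runs plain Lloyd's-algorithm k-means many times on the same data with different random starting centroids and collects the final within-cluster cost of each run; multiple distinct costs reveal multiple basins of attraction.

```python
import numpy as np

def kmeans(X, k, seed, iters=50):
    """Plain Lloyd's algorithm: returns final centroids and the total
    within-cluster squared distance (the quantity k-means descends on)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # assign each point to its nearest centroid
        d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        # move each centroid to the mean of its assigned points
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = X[labels == j].mean(axis=0)
    d = np.linalg.norm(X[:, None] - centroids[None], axis=2)
    cost = (d.min(axis=1) ** 2).sum()
    return centroids, cost

# Three blobs but only k=2 centroids: different starting conditions
# can leave k-means in different valleys of the cost landscape.
rng = np.random.default_rng(0)
X = np.concatenate([
    rng.normal([0, 0], 0.3, (50, 2)),
    rng.normal([5, 0], 0.3, (50, 2)),
    rng.normal([2.5, 4], 0.3, (50, 2)),
])

costs = {round(kmeans(X, k=2, seed=s)[1], 1) for s in range(20)}
print(sorted(costs))  # more than one distinct cost => multiple fixed points
```

Each distinct cost value corresponds to one of the stable fixed points the visualization draws as a valley floor.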
Body Language
A research work-in-progress that asks the question: how else might code be performed (if not with fingers and keyboard)? If coding lived somewhere between sport, hip-hop, and sign language, passed on the street and in clubs like popular dance, who then could access and perform code, and what would its products be? Body Language challenges the preconceptions of the technology complex by inserting the body as input device into the increasingly disembodied system. It is a speculative project about the future of the body in a digital world and a working system for capturing and translating physical input into digital output via an artificial neural network.
Animal Parade
A deepdream-style zoom but with bilateral filtering and stepping through the classes using the Inception network.
Doppelcam
A tool for decontextualizing your surroundings. A window into the uncanny valley. A machine for experiencing digital deja vu. Doppelcam is a visually similar camera. The user takes a picture with the app on their phone and receives an image pulled from the internet that is similar, pixel by pixel, to the one they took.
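The core retrieval step — finding the image whose raw pixel values are closest to the query — can be sketched as a nearest-neighbour search. This is a toy illustration with random thumbnails standing in for an internet-scale corpus, not Doppelcam's actual pipeline:

```python
import numpy as np

def most_similar(query, corpus):
    """Return the index of the corpus image closest to `query` in raw
    pixel space (Euclidean distance over flattened pixel values)."""
    q = query.reshape(-1).astype(float)
    flat = corpus.reshape(len(corpus), -1).astype(float)
    dists = np.linalg.norm(flat - q, axis=1)
    return int(dists.argmin())

# Toy "internet": 100 random 8x8 grayscale thumbnails.
rng = np.random.default_rng(1)
corpus = rng.integers(0, 256, size=(100, 8, 8))
query = corpus[42] + rng.integers(-5, 6, size=(8, 8))  # a near-duplicate
print(most_similar(query, corpus))  # → 42
```

At real scale, an exhaustive scan like this is replaced by an approximate nearest-neighbour index, but the distance being minimized is the same.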
Typeface // 字体
A meaningless font created by a deep convolutional generative adversarial network (DCGAN) trained on a calligraphy collection.
Piano Die Hard
A piano which plays itself every time there's an explosion in the film 'Die Hard'.
Screeners
Glasses that screen screens, using a convolutional neural network and Wekinator trained to recognize them.
10,000 grocery store items
A t-SNE grid of 10,000 items photographed from a supermarket.
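Laying a t-SNE embedding out as a clean grid requires a second step: snapping the scattered 2D points to unique grid cells. The sketch below (an assumption about the general technique, not this project's code) does a greedy nearest-first assignment in pure NumPy; production tools typically solve it as a linear assignment problem instead.

```python
import numpy as np

def snap_to_grid(points, side):
    """Greedily assign each 2D embedding point to a unique cell of a
    side x side grid, closest pairs first. (A globally optimal layout
    would solve this as a linear assignment problem.)"""
    # normalize the embedding into the unit square
    p = (points - points.min(0)) / (np.ptp(points, axis=0) + 1e-9)
    cells = np.array([(i, j) for i in range(side)
                             for j in range(side)]) / (side - 1)
    assignment = np.full(len(p), -1)
    # pairwise distances between points and grid cells
    d = np.linalg.norm(p[:, None] - cells[None], axis=2)
    for _ in range(len(p)):
        pi, ci = np.unravel_index(d.argmin(), d.shape)
        assignment[pi] = ci
        d[pi, :] = np.inf   # point is placed
        d[:, ci] = np.inf   # cell is taken
    return cells[assignment] * (side - 1)  # integer grid coordinates

rng = np.random.default_rng(2)
emb = rng.normal(size=(16, 2))        # stand-in for a t-SNE embedding
grid = snap_to_grid(emb, side=4)
print(len(set(map(tuple, grid))))     # → 16, one distinct cell per item
```

For 10,000 items the same idea runs on a 100x100 grid, with each cell then rendered as the corresponding photograph.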
The Puddle and the Rock
A thought experiment about design for AI
Terrapattern
An open-source system to help journalists, citizen scientists, humanitarian workers and other curious people to detect “patterns of interest” in satellite imagery. The project was developed at the Frank-Ratchye STUDIO for Creative Inquiry at CMU, with assistance from Irene Alvarado, Aman Tiwari and Manzil Zaheer, and made possible through funding from the Knight Foundation.
AANN: Artificial Analog Neural Network
An interactive, handmade electronic sculpture that responds to environmental stimuli in a display of light and sound. AANN's structure is a skeletal point-to-point soldered network of analog electronic components designed to approximate biological neural network behavior. The sculpture is a 45 neuron network whose form was influenced in part by multi-layered network models used in neural computing, and by the Fibonacci based branching of natural systems. As guests speak or cast shadows on AANN the abrupt changes in sound and light cause the network to react by producing a series of swoops and chirps, and by illuminating LEDs on active neurons.
Neural Slime Volleyball
Recurrent neural networks trained with genetic algorithms to play slime volleyball. The agents learned to play the game entirely through self play, and were not programmed with any prior knowledge about the rules of this game.
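Evolving network weights with a genetic algorithm, rather than backpropagation, can be shown at toy scale. The sketch below is a minimal illustration of the idea, not the project's training code: a tiny feedforward net (standing in for the recurrent agents) is evolved by elitist selection and Gaussian mutation to fit XOR.

```python
import numpy as np

def forward(w, x):
    """Tiny 2-2-1 network; `w` is a flat vector of 9 parameters."""
    W1, b1 = w[:4].reshape(2, 2), w[4:6]
    W2, b2 = w[6:8], w[8]
    h = np.tanh(x @ W1 + b1)
    return h @ W2 + b2

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0, 1, 1, 0], dtype=float)  # XOR truth table

def fitness(w):
    return -((forward(w, X) - y) ** 2).sum()  # higher is better

rng = np.random.default_rng(0)
pop = rng.normal(size=(50, 9))            # random initial population
init_best = max(fitness(w) for w in pop)

for gen in range(300):
    scores = np.array([fitness(w) for w in pop])
    parents = pop[scores.argsort()[-10:]]             # keep the fittest
    children = parents[rng.integers(0, 10, size=40)]  # clone parents
    children = children + rng.normal(scale=0.1, size=children.shape)
    pop = np.concatenate([parents, children])         # elitism + mutation

best = pop[np.argmax([fitness(w) for w in pop])]
print(round(fitness(best), 3))  # approaches 0 as the net masters XOR
```

The agents in Neural Slime Volleyball are evolved the same way, except the fitness signal is the score of self-play matches rather than a fixed target.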
Creatures Avoiding Planks
Creatures Avoiding Planks is a web toy demonstrating natural selection. Wee blobby creatures wander around avoiding floating planks, which kill on touch. If one lives long enough, it reproduces, passing on slight variations of its own movement behaviour to the offspring.
Sebastian Zimmerhackl, Julie Peters, Julius Voigt
This work is an experimental approach towards an alternative form of collaboration. When two artists are working on the same canvas, be it digital or analogue, many problems arise. The compromises that emerge due to the limitations of the medium have here been mediated with the help of machine learning techniques. This generative video combines structure with color. The two aspects, form and style, were created independently of each other and are interwoven at the DNA level.
Neural Recycle
A camera which detects recyclable items, trained with a convolutional neural network and Wekinator.
Instagram mosaics
A series of photo mosaics automatically generated from recent Instagram photos of key hashtags.
Assisted Visions
Last year, researchers published an algorithm that allows the style of one image to be superimposed onto the content of another. Some believe that the ability to mix and match preexisting styles and genres will be an invaluable tool to help creators find their own voice, though some, often in popular debate, argue that technologies that can imitate style threaten to replace rather than assist artists. Assisted Visions is an attempt to methodically use style transfer technology for my personal stylistic development. In doing so, the generated images explore the notion that the process of developing a distinctive personal style could be quantified, modified and accelerated.
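The algorithm referenced here represents "style" via Gram matrices of CNN feature maps: channel-correlation statistics that capture texture while discarding spatial layout. A minimal NumPy sketch of that representation (an illustration of the published idea, not this project's implementation, using random arrays in place of real CNN features):

```python
import numpy as np

def gram(features):
    """Gram matrix of a feature map: channel-by-channel correlations,
    discarding spatial layout. `features` has shape (channels, h, w)."""
    c, h, w = features.shape
    F = features.reshape(c, h * w)
    return F @ F.T / (h * w)

def style_loss(feat_a, feat_b):
    """Mean squared difference between Gram matrices: small when two
    images share texture statistics, regardless of content."""
    return ((gram(feat_a) - gram(feat_b)) ** 2).mean()

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 16, 16))   # stand-in for one CNN layer's output
# rearrange spatial positions identically across all channels
perm = rng.permutation(16 * 16)
shuffled = feat.reshape(8, -1)[:, perm].reshape(8, 16, 16)
# spatial rearrangement leaves the Gram matrix unchanged
print(np.allclose(gram(feat), gram(shuffled)))  # → True
```

That invariance is exactly why minimizing a style loss transfers brushwork and palette without copying the source image's composition.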
Cubist Mirror
A near-real-time style transfer mirror based on 'Perceptual losses for real-time style transfer' from Johnson et al., trained on an unidentified Cubist painting.
Inverted Pendulum Balancing Experiment
Neural network evolved to balance a double pendulum. The system is inherently chaotic, and very sensitive to the initial state. The neural network controller learns how to control the speed and direction of the wheel in order to stabilise the pendulum.
Keeper of our collective consciousness
A collaboration with Google. Not people working at Google, but actual Google, the search engine. And it’s actually more a collection of prayers.
Zuck's Minions
Portrait of Mark Zuckerberg, drawn by a neural net, in the style of a Facebook shitpost about Minions