This series of data art prints represents the source code structure and work history, from the first commit to the present, of three different open source software projects. The intention was to convey the sheer volume of work put into open source software development, usually carried out by a small number of people. It was commissioned by goto10 and presented at the make art 2009 festival, “what the fork”.

All the data was harvested via svn log. What comes out of it are paths, file names and author names, but also the commit comments and timestamps.
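As a rough illustration of the harvesting step, here is a minimal Processing sketch that pulls revision numbers and author names out of a plain svn log dump. The file name is an assumption; the header line layout (rNNN | author | date | n lines) is standard svn log output.

// Quick and dirty parse of a saved "svn log" dump:
// print one line per commit with its author and date.
// Assumes the log was saved as data/svnlog.txt.
void setup() {
  String[] lines = loadStrings("svnlog.txt");
  for (int i = 0; i < lines.length; i++) {
    // revision headers look like: r123 | oli | 2009-01-01 10:00:00 +0000 (...) | 1 line
    String[] parts = split(lines[i], " | ");
    if (parts.length >= 4 && parts[0].startsWith("r")) {
      println(parts[0] + " by " + parts[1] + " on " + parts[2]);
    }
  }
}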

Folders are represented by black branches. Their thickness represents the frequency at which the developers worked on the folder. Finally, their length is proportional to the number of sub-folders.
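In Processing terms the encoding boils down to two proportional mappings, something like this little sketch (the counts and scale factors are invented for illustration):

// Illustration only: hypothetical per-folder stats driving the branch look.
size(200, 200);
translate(20, height / 2);

int commitCount = 42;    // how often developers touched this folder
int subFolderCount = 7;  // number of direct sub-folders

strokeWeight(map(commitCount, 0, 100, 0.5, 8));  // frequent work = thicker branch
line(0, 0, subFolderCount * 12, 0);              // more sub-folders = longer branch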

Below, the representation of the Processing svn repository:

Along the branches, the number of thin curved lines equals the number of people who have modified that folder. Best seen on a 100% detail of the Processing folder and file structure:

Representation of the svn log of Pure Data:

And finally, SuperCollider:

At the end of the branches are fruits. Fruits represent files by their extensions. Every dot of a fruit represents a letter of the file extension and is placed according to its position in the alphabet. Two ‘a’s would overlap while, on the contrary, an ‘a’ and a ‘z’ would be as far away from each other as possible.
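To make the letter-to-position idea concrete, here is one possible reading of it as a small sketch (the actual geometry of the prints is more elaborate; the radii and sizes are made up, and extensions are assumed lowercase a-z):

// Each letter of an extension becomes a dot whose offset is
// proportional to the letter's place in the alphabet.
void setup() {
  size(200, 200);
  background(255);
  drawFruit("pde", width / 2, height / 2);
}

void drawFruit(String ext, float cx, float cy) {
  for (int i = 0; i < ext.length(); i++) {
    float t = (ext.charAt(i) - 'a') / 25.0;  // 'a' -> 0.0, 'z' -> 1.0
    float r = lerp(5, 40, t);                // alphabet position sets the offset
    ellipse(cx + r, cy, 6, 6);               // two identical letters overlap exactly
  }
}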

The red stamps hold the names of the authors, following the same principle as the file extensions: ‘a’ translates to a square on the far left of the horizontal line and ‘z’ to a square on the far right. The vertical position of the name in the stamp is proportional to when the person first entered the project, the stamp height (minus margins) thus representing the total duration of the project.

The squares sticking out at the bottom of the stamps represent the number of years since the project started; in this case, 7 years for SuperCollider.

The Processing source code of this project is available via svn at https://devel.goto10.org/svn/oli/whatthefork


Second version of our “Rephlexion” interactive installation. The team (Phillipe, Mag, Roy) got back together for a week-long residency aiming at finalising a project started a year ago. Probably the only way to be really efficient, considering we each live in a different geographical location.


We ended up with a fully spatialised audiovisual response to the users’ presence and movements. Four speakers on each side of the interactive area allow us 3d sound manipulation. One side of the tunnel is used for rear projection and the other is lit from above to optimise the computer vision background subtraction.


The space is divided into four zones. Each zone triggers a generative graphic and sound response, itself influenced by the body position (arms, head and hands). Once a second person enters an interactive area, symbolic links are displayed and the composition becomes collaborative. It is then up to the public to use it as they wish.

Below is a clumsy video of the display projected on one side of the interactive area.

The absence of a physical interface makes it a very spontaneous experience. It was pleasing to see that people of different ages and backgrounds would immerse themselves with such ease in our own little creative galaxy.

We also found out we can use this space as an instrument, using custom video or light input to generate audiovisual outputs regardless of people’s positions, or even presence.

We’re now hoping to show it to a larger public very soon.

On the technical side, we used four pieces of software running on three computers.
EyesWeb, on computer A, takes care of the computer vision, mapping points along the person’s limbs.


EyesWeb would then send the coordinates via OSC messages to Pure Data on computer B. Pure Data’s job is to allocate the points to new custom OSC events and send them to either Processing (running on the same computer) or Reaktor (on computer C).
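On the Processing end, receiving such routed events can look like this minimal sketch using the oscP5 library. The port number and the /limb address pattern are placeholders, not our actual message layout:

import oscP5.*;
import netP5.*;

OscP5 osc;
float x, y;  // last received limb position

void setup() {
  size(320, 240);
  osc = new OscP5(this, 9000);  // listen on the port Pure Data sends to (assumed)
}

// called by oscP5 whenever an OSC message arrives
void oscEvent(OscMessage msg) {
  if (msg.checkAddrPattern("/limb")) {  // hypothetical address pattern
    x = msg.get(0).floatValue();
    y = msg.get(1).floatValue();
  }
}

void draw() {
  background(0);
  ellipse(x, y, 10, 10);
}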

Pure Data and Processing are happy to run together on the same computer because they are super cool:

Reaktor isn’t that cool so we left it on its own. It tends to develop an allergy to OSC, causing it to randomly crash.

Splitting complex camera vision into simple OSC nodes allowed us to get the system ready before physically meeting up. We then had time to fine-tune the system and, most importantly, make it sound and look like we wanted.

It has proven to be a strong and flexible architecture, allowing us to map new sounds and graphics onto it. So if you have any ideas or suggestions, be my guest.



a – 20 patches

My colleague Julien showed me last week a 3d format i’d never heard of: RenderMan.

RenderMan was created by Pixar Animation Studios as an interface specification and a photorealistic renderer. It is mostly designed for high end animation, but stays relatively accessible for anyone who wants to generate and render a still image with simple shaders. Which just happened to be my case.

For various reasons i love working on Linux, and i was pleasantly surprised to see how many 3d applications are penguin friendly. I guess the stability of the OS and the possible customisation make it an ideal candidate for heavy duty computational tasks.

Since it’s not like i was going to render A Bug’s Life 4 or The Matrix 5, i needed to find a simple, accessible and free renderer. After a few tests i adopted aqsis.

So the pipeline is almost complete. The last thing needed is a programmatic tool that would help me create .rib files (that’s the RenderMan interface format). There is a C library that seems to be the way to go, but i have no prior knowledge of writing and compiling C, and didn’t have enough time left to learn what seemed to be yet another project, so here comes trusty Processing again.

A few functions later, a rib was born. Then everything was ready for the long, never-ending series of trial-and-error happy accidents that is the visual creative process.
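For illustration, a stripped-down version of that idea could look like the sketch below, writing a minimal one-sphere scene to a .rib file. The scene content is invented for the example; the resulting file can then be fed to the renderer (with aqsis, something like: aqsis scene.rib).

// Write a minimal RenderMan .rib scene from Processing.
void setup() {
  PrintWriter rib = createWriter("scene.rib");
  rib.println("Display \"scene.tiff\" \"file\" \"rgb\"");
  rib.println("Projection \"perspective\" \"fov\" [45]");
  rib.println("WorldBegin");
  rib.println("  Translate 0 0 5");      // push the sphere away from the camera
  rib.println("  Surface \"plastic\"");  // one of the standard simple shaders
  rib.println("  Sphere 1 -1 1 360");    // radius, zmin, zmax, sweep angle
  rib.println("WorldEnd");
  rib.flush();
  rib.close();
  exit();
}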


b – 100 patches

I think one of the most exciting things in this project, and in trying 3d rendering, is the ability to work with light in such a quasi-realistic way. It almost took me back to the days when i was experimenting in photography studios, where every stop needs to be carefully calibrated to reveal the nature of the subject.


c – 100 patches 2

Links:
Pixar RenderMan web site
RenderMan on wikipedia
aqsis
The book to get
The other book to get


There was some interest from people in how to visualise wifi nodes.
This is the second step, which aims to demonstrate how to parse and simply visualise in Processing data harvested using kismet.

Here is a small Processing sketch to demonstrate a way of doing so. There is a lot of room for improvement in the code, but it gives an idea of the process, as a starting base if you use kismet-generated xml data. The result is very basic looking, but good enough to get an idea of the dataset:
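In spirit, the parsing side boils down to something like this, using Processing’s native XMLElement class (renamed XML in later versions). The element name wireless-network matches the kismet dumps i had; double-check it against your own file, and the drawing here is deliberately crude:

// Parse a kismet xml dump and plot one dot per detected network.
void setup() {
  size(400, 400);
  background(255);
  noStroke();
  fill(0, 60);
  XMLElement run = new XMLElement(this, "Kismet-dump.xml");  // assumed file name
  int count = 0;
  for (int i = 0; i < run.getChildCount(); i++) {
    XMLElement child = run.getChild(i);
    if (child.getName().equals("wireless-network")) {
      count++;
      // no positions yet: just scatter the nodes on a grid for a first look
      float x = (count % 20) * 20 + 10;
      float y = (count / 20) * 20 + 10;
      ellipse(x, y, 8, 8);
    }
  }
  println(count + " networks found");
}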



A bit of a technical note showing, step by step, what i did to get kismet working on linux with a gps device (a Garmin Venture HC) and output the harvested data as an xml file.

——————————————————————————————————————————————

I – Installing kismet


1 – getting kismet.
$sudo apt-get install kismet


2 – configure kismet.
For me, kismet.conf was in:
/etc/kismet/kismet.conf

This file can be installed in different places depending on your distro (i run Ubuntu gutsy). If you’re unsure where it might be, do:

$sudo updatedb
$locate kismet.conf

Open the configuration file as super user:
$sudo gedit /etc/kismet/kismet.conf

You should see something like this:

What i changed was:
source=ipw2200,eth1,kismet
suiduser=[myUser]

This was on a Thinkpad T43 with this network card:
Intel Corporation PRO/Wireless 2915ABG Network Connection (rev 05)

If you’re unsure about which card you have, you can list the pci devices with:
$lspci


3 – run kismet:
In a terminal, start kismet as super user:
$sudo kismet

The only issue i’ve found is that the network doesn’t restart automatically, as the card doesn’t like exiting monitor mode. To get it back to work i just unselected the wireless card in the network manager, ticked it back again and pressed apply. Sorted. There must be a command line way to do this tho.

——————————————————————————————————————————————

II – GPS configuration


1 – get a gps :) in my case i got a Garmin Venture HC


2 – install gpsd.
$sudo apt-get install gpsd


3 – start gpsd.
In my case it was connected to a usb port:
$sudo gpsd -p /dev/ttyUSB0

Make sure to start gpsd before kismet and it should be all good from there. Kismet will pick it up and start parsing the gps points along with the wireless activity in range.

——————————————————————————————————————————————


Invisible Journeys is my first try at data visualisation. I have seen a few wifi geographical mappings, but they looked a bit too technical for my taste. Here, i aimed at a semi-abstract visualisation while keeping the ability to extract sense out of the graphics. Below is one of the visualisations, showing 4 different journeys. Bigger to smaller rings: London / Vescemont / Belfort / Barcelona.

Each circular item represents the recording of wireless networks along one journey.

The time dimension starts reading from the right, then goes clockwise along the main thick black line.

Each successful node recording influences the timeline thickness and adds a “pin” onto it. Red pins represent non-encrypted networks; the other networks are the smaller black pins.
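The underlying mapping is simple: a timestamp becomes an angle, starting at the right and increasing clockwise. A minimal sketch of the idea, with fake evenly spaced data:

// Map recording times to positions on a circular timeline.
void setup() {
  size(300, 300);
  background(255);
  for (int i = 0; i < 10; i++) {
    float angle = map(i, 0, 10, 0, TWO_PI);   // time -> angle, 0 at the right
    float x = width / 2 + 100 * cos(angle);   // in Processing the y axis points
    float y = height / 2 + 100 * sin(angle);  // down, so this reads clockwise
    ellipse(x, y, 5, 5);                      // one "pin" per recording
  }
}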

Technically, i have been using kismet for recording the wifi nodes. Unlike NetStumbler (windows only), kismet dumps a nicely formatted xml file for each session. The only limitation i’ve found is that you can’t use it with a laptop that has a pcmcia wifi card: those cards can’t go into rfmon mode (to constantly scan the network).

Once the xml file was created, it was then easy to get all the data into Processing using the native xml library.

Next step is to pair wifi recording with gps. The good news is that kismet seems to be able to handle both at the same time.

More images of the ongoing process can be found in my flickr set.

more later.


Processing sketch made on the train journeys between London and Newcastle, and Newcastle and London. This is an attempt at recording a journey, or at least part of it.

On the way out, the program was grabbing a webcam input pointed at the landscape, then squeezing the camera frame to 1 pixel wide and rotating it through time. On the way back, it grabbed only the first vertical line of pixels and applied the same transformations as previously. This was to get rid of a moiré effect that happened in the first version. nice!
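The way-back variant boils down to copying a single pixel column per frame. A minimal reconstruction (not the original sketch; the camera size and strip layout are arbitrary, and it draws a straight strip rather than rotating it):

// Each frame: take the first pixel column of the camera image and
// draw it at an advancing x position, building up a time strip.
import processing.video.*;

Capture cam;
int x = 0;

void setup() {
  size(800, 240);
  cam = new Capture(this, 320, 240);
  cam.start();  // needed in recent versions of the video library
  background(0);
}

void draw() {
  if (cam.available()) {
    cam.read();
    PImage column = cam.get(0, 0, 1, cam.height);  // first vertical line of pixels
    image(column, x, 0, 1, height);
    x = (x + 1) % width;  // wrap around when the strip is full
  }
}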

Well, it was fun and entertaining, coding on the train and watching the people around trying to figure out what the hell i was doing pointing that camera at the window all the time.

In the end it looked like i wanted, a bit like a truck’s speed recording graph. Truck speed recording graphs are nice looking. A bit like seismographs, but circular. I like truck speed recording graphs.

Train slowiiinnnnggggg dooowwnn at the station.

source code




Research for the illustrations for make art, the international festival dedicated to the integration of free/libre and open source software (FLOSS) in digital arts.

I stupidly didn’t take any pictures of the final printed result.

All the illustrations were exported to pdf, then i did a few colour adjustments in Illustrator to meet printing standards. Under linux, i unfortunately couldn’t find a vector application that could open and edit the pdf files that Processing exports.

The only app i found that could open the pdfs was Xara Xtreme, but from there it was impossible to break the illustration apart. Still, it was a good occasion to see how fast the vector rendering is on Xara. Thousands of shapes and no slowing down.

I’m not sure what Processing uses to export pdfs. There might be something to modify at core level so it exports to a more linux friendly format.

… to be continued


mozart

A heavily modified version of the pointillism example by Daniel Shiffman.


Glass is one of cracktux’s audiovisual pieces. Chun used Pure Data for the sound synthesis. It always amazes me to hear what he comes up with without any prerecorded material.


Better quality and extra information on the Internet Archive page.

