Example Output
Still Images
Videos
These images and videos were generated by running the Python script using frames from a video of me rotating very slowly on a chair as input.
Introduction
Slitscanning is a technique for constructing synthetic still images from a moving subject. Analogue slitscanning has been carried out since the early days of photography by sequentially exposing thin slices of film over time to create a single image. A digital alternative can be constructed using a video camera, replacing complicated moving apertures with digital extraction of strips (typically one pixel wide) from video frames. Although slitscanning can represent motion in a single frame without motion blur, the synthetic images generated are often difficult (though not impossible) to interpret visually and look positively weird. The struggle to interpret the resulting images is the main appeal of this technique.
Recently, while attending molecular biology conferences, I have noticed many presentations which include so-called kymographs: visualisations designed to summarise observations of dynamics captured by video or time-lapse microscopy (usually observing fluorescently labelled proteins). Kymographs are constructed by synthesising a single image from strips extracted from digital video frames. As far as I can tell, their main function is to represent some of the three-dimensional data contained in a video (colour or intensity in the x and y spatial dimensions, and how that varies in the time dimension z) in a two-dimensional image suitable for publication in a journal article. This is precisely a digital implementation of slitscanning.
This piece of Python code builds slitscans by taking slices through stacks of digital images. It works by loading the pixels of each frame of a video into a stack (a multi-dimensional array) and constructing images by taking slices through the stack. Straight horizontal or vertical slices through the stack give regular slitscans (equivalent to kymographs), but any other diagonal slicing through the stack is also possible. The connection between the dynamics captured in the video and the resulting image is slightly different in each case.
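As a minimal sketch of the core idea (assuming the frames have already been loaded into a NumPy array called stack, indexed by time, height, width and colour; the dimensions here are small placeholders rather than real HD values):

    import numpy as np

    # Placeholder stack of T frames, each H pixels tall and W pixels wide (RGB).
    # In the real script these pixels come from the video frames.
    T, H, W = 300, 180, 320
    stack = np.zeros((T, H, W, 3), dtype=np.uint8)

    # Vertical slitscan (kymograph): the same one-pixel column is taken from
    # every frame, so the horizontal axis of the result becomes time.
    x = W // 2
    vertical = stack[:, :, x, :].transpose(1, 0, 2)    # shape (H, T, 3)

    # Horizontal slitscan: one row per frame, so the vertical axis becomes time.
    y = H // 2
    horizontal = stack[:, y, :, :]                     # shape (T, W, 3)

    # Diagonal slice: the sampled column drifts across the frame over time,
    # mixing the spatial and time dimensions in the resulting image.
    cols = np.linspace(0, W - 1, T).astype(int)
    diagonal = stack[np.arange(T), :, cols, :].transpose(1, 0, 2)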
Hardware Requirements
To create good quality slitscans, the source video should really be of high resolution (i.e. at least 720p HDV); extracting frames from Blu-ray video, for example, would give 1080p. To create a relatively "square" stack of images, we will need at least as many frames as the largest dimension of a single video frame. For example, for 1080p video, we will need at least 1920 frames, which is 64 seconds worth of footage at 30 FPS. We will need to store these frames in RAM, uncompressed, to be able to perform quick slicing. For HD projects this requires quite a lot of RAM (~12 GB) and 64-bit computing.
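That figure follows from simple arithmetic, assuming uncompressed 8-bit RGB pixels:

    # Rough memory estimate for an uncompressed 8-bit RGB frame stack.
    width, height = 1920, 1080    # 1080p frame size
    frames = 1920                 # enough frames for a roughly "square" stack
    bytes_per_pixel = 3           # one byte each for R, G and B

    total_bytes = width * height * bytes_per_pixel * frames
    print("%.1f GB" % (total_bytes / 1e9))    # ~11.9 GB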
Software Requirements
This script obviously requires Python, but it also requires the Python Imaging Library (PIL) for opening and saving frames and images, as well as the NumPy package, which extends Python with multi-dimensional arrays and a suite of fast functions for operating on them. Currently I split the source video into individual .png frames outside of Python, storing them temporarily on my hard-drive. Incorporating video reading into the script, serving up frames one at a time, would improve speed and greatly reduce the requirement for hard-drive space. The pyFFMPEG project initially looked promising for extracting frames from video files; however, it doesn't seem to be compatible with the latest version of FFMPEG and is not currently being developed, so I am reluctant to use it. I recommend that users split video into individual .png images using FFMPEG directly, or, under Windows, using the amazing VirtualDub.
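For illustration, the splitting step could be scripted from Python by shelling out to the FFMPEG command line; "input.mp4" and the "frames" directory are placeholder names:

    import os
    import subprocess

    # Split a video into sequentially numbered, zero-padded .png frames by
    # calling the FFMPEG command-line tool (which must be on the PATH).
    if not os.path.exists("frames"):
        os.makedirs("frames")
    subprocess.check_call([
        "ffmpeg", "-i", "input.mp4",
        os.path.join("frames", "Frame%06d.png"),
    ])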
To access the large amounts of RAM we will require, we need a 64-bit operating system as well as 64-bit builds of Python and the required Python packages. This is relatively straightforward under Linux: if you have a 64-bit machine, the default packages for installation (in Ubuntu, for example) are all compiled for 64-bit computing. Under Windows, you need to be a little more careful. Choose the 64-bit version from the Python download page. Sadly, the PIL and NumPy developers are not so Windows-friendly; however, you can download 64-bit binaries for these (and many other) science-related packages, generated by Christophe Gohlke, here.
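A quick way to confirm that the interpreter you are running is actually a 64-bit build:

    import struct

    # A 64-bit Python build uses 8-byte pointers; a 32-bit build uses 4 bytes.
    print("%d-bit Python" % (struct.calcsize("P") * 8))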
Running the Script
Make sure that all of your individual .png frames are numbered sequentially, and that the numbers are sensibly padded (e.g. Frame000001.png instead of Frame1.png). Place all frames into a single directory and edit the inroot string to point to a frame in that directory. Similarly, choose an output directory and replace the outdir string with the path to that directory. The script will find all available frames, calculate the amount of RAM currently available on your machine, load as many frames as it can into memory, and then generate a series of slices through the image stack. These slices can then be combined (again using FFMPEG or VirtualDub) to create a new video, for example as sketched below.
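As a rough, simplified sketch of that flow (it skips the available-RAM check and loads every frame it finds; the inroot and outdir values are placeholders):

    import glob
    import os
    import numpy as np
    from PIL import Image

    inroot = "frames/Frame000001.png"    # any frame in the input directory
    outdir = "slices"                    # where the output images will go

    # Find all frames sitting alongside inroot; zero-padded names sort correctly.
    frame_paths = sorted(glob.glob(os.path.join(os.path.dirname(inroot), "*.png")))

    # Load the frames into a single (time, height, width, colour) stack.
    stack = np.array([np.asarray(Image.open(p).convert("RGB"))
                      for p in frame_paths])

    # Generate one vertical slice per column; the resulting image series can
    # then be reassembled into a video with FFMPEG or VirtualDub.
    if not os.path.exists(outdir):
        os.makedirs(outdir)
    for x in range(stack.shape[2]):
        sliced = np.ascontiguousarray(stack[:, :, x, :].transpose(1, 0, 2))
        Image.fromarray(sliced).save(os.path.join(outdir, "Slice%06d.png" % x))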