DeformIt is an online tool that generates a large dataset of images and their ground-truth segmentations from a single image and its ground-truth segmentation. Machine learning algorithms in medical image analysis require large datasets of images with corresponding ground-truth segmentations, and although many publicly available datasets exist, they are often not sufficient, largely because of the legal issues surrounding the sharing of such sensitive data. Many tools circumvent this problem by creating artificial images and ground-truth segmentations; in fact, I have done this myself with VascuSynth. Instead of synthesizing images from scratch, Ghassan and I decided to create a large training set by deforming an existing image and its ground-truth segmentation.
Fig. 1 An MR image with an overlaid grid that has been deformed by the vibrational and variational method.
In a nutshell, we take an image and its segmentation, warp both in the same way, apply degradations such as intensity non-uniformities and noise, and then save the resulting image. The images are deformed by generating a set of displacement vectors at control points across the image space. These control points can optionally be specified; otherwise, they are placed on a uniformly spaced grid in the image space. Using the displacement vectors at the control points, we interpolate the resulting image and ground-truth segmentation.
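For the curious, the warping step can be sketched in a few lines of Python (the real implementation is in MATLAB; the field shapes and interpolation orders here are my illustrative choices, not the tool's exact ones). The key point is that the same displacement field is applied to the image and to its segmentation, with nearest-neighbour interpolation for the labels so they stay integral:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, dy, dx, order):
    """Resample an image under per-pixel displacement fields dy, dx."""
    ys, xs = np.meshgrid(np.arange(image.shape[0]),
                         np.arange(image.shape[1]), indexing="ij")
    return map_coordinates(image, [ys + dy, xs + dx],
                           order=order, mode="nearest")

img = np.random.rand(64, 64)
seg = (img > 0.5).astype(np.uint8)        # toy ground-truth segmentation

# A toy smooth displacement field (vertical sinusoidal push):
dy = 2.0 * np.sin(np.linspace(0, np.pi, 64))[:, None] * np.ones((64, 64))
dx = np.zeros((64, 64))

warped_img = warp(img, dy, dx, order=1)   # bilinear for intensities
warped_seg = warp(seg, dy, dx, order=0)   # nearest keeps labels intact
```

Using the same field for both keeps the warped segmentation aligned with the warped image, which is the whole point of the exercise.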
There are two methods for generating the displacement vectors: random deformations, and vibrational and variational deformations. For random deformations, the components of the displacement vectors are drawn from a uniform random distribution. The vibrational and variational deformations combine a Finite Element Method (FEM) with a Point Distribution Model (PDM).
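The random-deformation case is easy to sketch: draw uniform random displacements at the control points, then interpolate them up to a smooth per-pixel field (here with a cubic `zoom`, standing in for whatever spline interpolation the tool actually uses; the grid size and displacement bound are illustrative):

```python
import numpy as np
from scipy.ndimage import zoom

def random_field(shape, grid=(5, 5), max_disp=3.0, seed=None):
    """Uniform random displacements at a coarse control-point grid,
    interpolated (cubic) to a dense per-pixel displacement field."""
    rng = np.random.default_rng(seed)
    ctrl = rng.uniform(-max_disp, max_disp, grid)
    return zoom(ctrl, (shape[0] / grid[0], shape[1] / grid[1]), order=3)

dy = random_field((64, 64), seed=0)
dx = random_field((64, 64), seed=1)
```

One field per component (here `dy` and `dx`) gives a full 2D deformation; the same idea extends component-wise to 3D volumes.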
The vibrational portion of the deformation treats the image as a flexible material in which each control point is connected to every other control point by a spring of equal stiffness; "pulling" on this system yields a new deformed image. The variational method is a statistical model that creates displacement vectors based on the history of previous displacement vectors. The first few sets of displacement vectors are generated strictly by vibrational deformation; as more sets are generated, the statistical portion begins to dominate until the vibrational deformation no longer contributes.
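To make the hand-off concrete, here is a rough sketch of how such a blend might look. Everything here is my illustration, not the actual method: the linear ramp schedule, the PCA-style statistical sampling, and the function names are all assumptions; in the real tool the vibrational sample comes from the FEM spring model described above:

```python
import numpy as np

def next_displacements(history, vibrational_sample, k=10, seed=None):
    """Blend a vibrational (FEM-style) sample with a statistical (PDM-style)
    sample drawn from the history of previous displacement sets.
    The statistical weight ramps from 0 to 1 over the first k sets
    (an assumed schedule), after which the vibrational term vanishes."""
    rng = np.random.default_rng(seed)
    if len(history) < 2:
        return vibrational_sample          # not enough data for statistics yet
    w = min(len(history) / k, 1.0)         # weight on the statistical part
    X = np.stack(history)                  # rows: flattened displacement sets
    mean = X.mean(axis=0)
    # Principal modes of variation of the history (a toy PDM):
    _, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    b = rng.standard_normal(len(S)) * S / np.sqrt(len(X) - 1)
    statistical_sample = mean + Vt.T @ b
    return (1 - w) * vibrational_sample + w * statistical_sample
```

Early on the output is pure vibrational deformation; once the history is long enough, it is drawn entirely from the statistical model.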
In addition to the deformations, the image can be degraded with Gaussian, Poisson, salt-and-pepper, and speckle noise. Intensity non-uniformities can also be added to mimic magnetic resonance image acquisition.
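These degradations are all standard noise models; a minimal sketch (noise levels, the Poisson photon-count scale, and the Gaussian-bump bias field are my illustrative assumptions, not the tool's parameters) might look like:

```python
import numpy as np

def degrade(img, mode, seed=None):
    """Apply one of the noise models above to an image in [0, 1]."""
    rng = np.random.default_rng(seed)
    if mode == "gaussian":
        out = img + rng.normal(0.0, 0.05, img.shape)       # additive
    elif mode == "poisson":
        scale = 255.0                                      # assumed counts
        out = rng.poisson(img * scale) / scale             # signal-dependent
    elif mode == "salt_pepper":
        out = img.copy()
        mask = rng.random(img.shape)
        out[mask < 0.025] = 0.0                            # pepper
        out[mask > 0.975] = 1.0                            # salt
    elif mode == "speckle":
        out = img * (1 + rng.normal(0.0, 0.1, img.shape))  # multiplicative
    else:
        raise ValueError(mode)
    return np.clip(out, 0.0, 1.0)

# A smooth multiplicative bias field mimics MR intensity non-uniformity:
ys, xs = np.mgrid[0:64, 0:64] / 64.0
bias = 0.8 + 0.4 * np.exp(-((ys - 0.5) ** 2 + (xs - 0.5) ** 2) / 0.5)
img = np.clip(np.random.rand(64, 64) * bias, 0.0, 1.0)
```

Note that the degradations are applied only to the image, never to the ground-truth segmentation.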
The deformation code is written in MATLAB. The website uses the typical web technologies: CSS, PHP, XHTML, DHTML, and AJAX. YUI's image uploader (which uses Flash) handles image uploads. PHP writes to an XML queue, which is parsed by a Java daemon that executes the compiled MATLAB code.
The paper was accepted to MICCAI in 2008. I attended the conference in New York and presented a poster. Check out images from the conference here. I had a blast and learned a lot! Read the paper and check out the working website :).