
Digital image processing started to be used for marine particles as early as the 1970s, as computer technology became powerful enough and available to scientists. An approach to automatically recognize diatoms for pollution monitoring was first developed in the early 1970s (Cairns et al., 1972); it was refined to use spatially matched optical filters, video, and computer technology to determine particle shape (Almeida and Eu, 1976), and later simplified to use rotating spatial filters, i.e. holograms (Fujii et al., 1980). Automated recognition of phytoplankton cell types by digital image analysis was also demonstrated in the 1970s (Uhlmann et al., 1978), and an automated system for pattern recognition of cells of the toxic alga Prorocentrum in Japan has been described (Tsuji and Nishikawa, 1984). Similarly, Jeffries et al. (1984) described an image recognition system for zooplankton that achieved classification to major taxonomic groups. Zooplankton (Rolke and Lenz, 1984) and detrital (Lenz, 1972) size spectra have also been measured by digital image analysis with optical microscopy.

In situ camera systems have become common and widespread on remotely operated vehicles (ROVs). In the past decade the Video Plankton Recorder (VPR) has been developed for the automated mapping and analysis of zooplankton (Davis et al., 1992) and of larger phytoplankton colonies such as Chaetoceros, Phaeocystis, and Rhizosolenia mats. In situ holographic cameras are being developed and tested to image volumes of water and reveal spatial patterns at small scales (Craig et al., 2000; Katz et al., 1999). Jaffe and Franks (1996) are developing an in situ fluorescence imaging system for studying small-scale patterns of phytoplankton distributions.

Automated techniques for optical fluorescence microscopy have also been developed for marine bacteria (Sieracki et al., 1985) and protists (Sieracki and Viles, 1990). This work includes the evaluation of threshold methods for segmenting images of fluorescing cells (Sieracki et al., 1989a; Viles and Sieracki, 1992), an algorithm for calculating cell biovolume from 2-D images (Sieracki et al., 1989b), and the accurate counting and sizing of cells from images (Sieracki and Viles, 1998). The largest remaining challenge in the analysis of fluorescence microscopy images is the recognition and classification of particle types, especially in nanoplankton samples, which can contain large amounts of detrital particles that can be confused with cells.
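The segmentation and sizing steps described above can be illustrated with a short sketch. This is not the published Sieracki et al. algorithm; it is only a minimal example of global thresholding of a fluorescence image followed by an equivalent-sphere biovolume estimate, with the function name and the microns-per-pixel calibration assumed for illustration.

```python
# Minimal sketch: segment fluorescing cells with an automatic global
# threshold, then estimate each cell's biovolume as a sphere whose
# diameter matches the region's projected area (an assumed simplification,
# not the cited biovolume algorithm).
import numpy as np
from skimage.filters import threshold_otsu
from skimage.measure import label, regionprops

def cell_biovolumes(gray_image, microns_per_pixel=0.1):
    """Return estimated biovolumes (um^3) of segmented cells."""
    # Threshold chosen automatically from the image histogram.
    mask = gray_image > threshold_otsu(gray_image)
    volumes = []
    for region in regionprops(label(mask)):
        area_um2 = region.area * microns_per_pixel ** 2
        d = 2.0 * np.sqrt(area_um2 / np.pi)   # equivalent circular diameter
        volumes.append(np.pi / 6.0 * d ** 3)  # volume of a sphere of that diameter
    return volumes
```

In practice the threshold method and the 2-D-to-3-D shape assumption are exactly the points the cited papers evaluate, so both would need calibration against known cells.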

More recently, work on automated pattern recognition of phytoplankton has been done by European researchers (Culverhouse et al., 1996). This study compared classification methods and human experts on a set of images of 23 dinoflagellate species from 4 genera. The best algorithm was a Radial Basis Function (RBF) neural network classifier that performed as well as the experts (84% accurate), and the self-learning neural network methods outperformed the classical multivariate statistical approaches. This classification system has also been used for loricate marine ciliates (Culverhouse et al., 1994). The software, termed DiCANN (Dinoflagellate Categorisation by Artificial Neural Network; Culverhouse et al., 2002), is under development for commercialization but is not yet available. If it becomes available during this project and is affordable, we will purchase it and test it with our images.
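To make the classifier type concrete, the sketch below shows a basic RBF network of the kind compared in the Culverhouse et al. study; it is not DiCANN, and the feature vectors (e.g. per-cell shape and texture measurements) and integer class labels are assumed inputs.

```python
# Minimal sketch of a Radial Basis Function (RBF) network classifier:
# Gaussian units centered on training points, with linear output weights
# solved by least squares against one-hot class labels.
import numpy as np

def train_rbf(X, y, n_classes, width=1.0):
    """Fit an RBF network on features X (n_samples x n_features), labels y."""
    centers = X.copy()
    # Squared distances between samples and centers -> Gaussian activations.
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * width ** 2))
    onehot = np.eye(n_classes)[y]
    W, *_ = np.linalg.lstsq(phi, onehot, rcond=None)
    return centers, W

def predict_rbf(X, centers, W, width=1.0):
    """Classify new feature vectors by the largest output unit."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    phi = np.exp(-d2 / (2.0 * width ** 2))
    return phi.dot(W).argmax(axis=1)
```

A practical system would use far fewer basis centers (e.g. chosen by clustering) and tune the kernel width by cross-validation rather than centering a unit on every training image.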

These results show that neural network algorithms can approach human experts in categorizing dinoflagellates from field samples, but much work remains to be done. The test set was rather limited, and the images were digitized from film, causing artifacts (Culverhouse et al., 1996). One limitation of neural network classification solutions is that, although they are powerful and often perform well, the complexity of the resulting network prevents a full understanding of how the input features are weighted and used. Also, the features chosen for extraction from the images may not be the best ones, and they have not been systematically tested.
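One straightforward way to test candidate feature sets systematically is cross-validated comparison. The sketch below assumes scikit-learn and a simple nearest-neighbor classifier as a stand-in; it is an illustrative protocol, not one used in the cited studies.

```python
# Minimal sketch: compare candidate image-feature sets by cross-validated
# classification accuracy. X is (n_samples, n_features); feature_sets maps
# a name to the column indices of one candidate set; y holds class labels.
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

def compare_feature_sets(X, y, feature_sets, cv=5):
    """Return mean cross-validated accuracy for each candidate feature set."""
    scores = {}
    for name, cols in feature_sets.items():
        clf = KNeighborsClassifier(n_neighbors=3)  # simple stand-in classifier
        scores[name] = cross_val_score(clf, X[:, cols], y, cv=cv).mean()
    return scores
```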

Computer Vision Laboratory University of Massachusetts