Static and Dynamic Texture Analysis and their Applications in Biology and Nanotechnology

Members

Wesley Nunes Gonçalves, Odemir Martinez Bruno

Introduction

Texture analysis is an important research area in computer vision with many potential applications, including the analysis of satellite images, industrial inspection, medical image analysis, biology, and geology, among others. Research on texture images can be divided into two classes: static [2,3,6,8,11] and dynamic textures [4,5,7,12]. A static texture is easily understood by the human visual system as a pattern that repeats either exactly or with minor variations. In the psychology of perception, texture has been described by properties such as roughness, coarseness, and contrast. Dynamic textures, on the other hand, have emerged as a new field of investigation that extends the concept of self-similarity of static textures to the spatio-temporal domain. This project aims at developing methods for both classes of texture based on three approaches: deterministic partially self-avoiding walks [2,3], complex networks [9,10], and fractal dimension. The proposed methods will be evaluated through comparison with state-of-the-art methods and through applications in biology and nanotechnology [1]. Figure 1 shows texture images of (a) titanium dioxide nanostructures and (b) plant leaves that will be used to evaluate the static texture methods. Examples of mosaics of dynamic textures from the Synthetic Video Textures database are presented in Figures 1(c) and 1(d).


Figure 1. Examples of texture images. (a) Titanium dioxide nanostructures; (b) plant leaves; (c), (d) mosaics of dynamic textures.


Static Texture Analysis using Fractal Dimension of Walks

Texture plays an important role in image feature extraction and classification. In this work, we present a novel texture modeling method based on deterministic partially self-avoiding walks and fractal dimension theory. After the attractors are found by deterministic partially self-avoiding walks [2,3], they are dilated toward the whole image using the relevance of each pixel. Figure 4 illustrates the dilation of attractors for different dilation values. As the dilation value increases, more pixels of the image are merged into the attractors. It is important to note that the dilation follows the properties of the image, thus preserving the characteristics of each texture.

Figure 4. Dilation of attractors: (a) attractors found by the walkers; the same attractors dilated with values (b) 5, (c) 10, (d) 15.
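The two building blocks described above can be illustrated with a short sketch. This is an illustrative simplification, not the published implementation: function names, the 8-connected neighborhood, the minimum-difference walk rule, and the square structuring element are all assumptions. The walker moves between neighboring pixels while avoiding its last few visited positions, and the fractal dimension of a point set (such as an attractor) is estimated from how its area grows under dilation.

```python
import math
from collections import deque

def tourist_walk(img, start, memory=2, max_steps=200):
    """Deterministic partially self-avoiding walk on a grayscale image.

    The walker moves to the 8-connected neighbor whose intensity is
    closest to the current pixel's, never revisiting its last `memory`
    positions. Returns the trajectory (transient followed by attractor).
    """
    h, w = len(img), len(img[0])
    y, x = start
    recent = deque([start], maxlen=memory)  # partially self-avoiding memory
    path = [start]
    for _ in range(max_steps):
        # candidate moves: in-bounds 8-neighbors not in the recent memory
        cand = [(ny, nx)
                for ny in range(max(0, y - 1), min(h, y + 2))
                for nx in range(max(0, x - 1), min(w, x + 2))
                if (ny, nx) != (y, x) and (ny, nx) not in recent]
        if not cand:  # walker is trapped
            break
        # minimum-difference rule: the closest intensity wins
        y, x = min(cand, key=lambda p: abs(img[p[0]][p[1]] - img[y][x]))
        path.append((y, x))
        recent.append((y, x))
    return path

def dilation_area(points, r):
    """Pixel count of `points` dilated by a (2r+1)x(2r+1) square element."""
    return len({(y + dy, x + dx) for (y, x) in points
                for dy in range(-r, r + 1) for dx in range(-r, r + 1)})

def fractal_dimension(points, radii=(1, 2, 4, 8)):
    """Minkowski-style estimate: D = 2 - slope of log(area) vs log(radius)."""
    xs = [math.log(r) for r in radii]
    ys = [math.log(dilation_area(points, r)) for r in radii]
    n = len(xs)
    slope = ((n * sum(a * b for a, b in zip(xs, ys)) - sum(xs) * sum(ys))
             / (n * sum(a * a for a in xs) - sum(xs) ** 2))
    return 2 - slope
```

A straight line of pixels, for example, yields a dimension close to 1, while a filled region approaches 2, which is what lets the dilated attractors discriminate between textures.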

Results

Table 1 presents the experimental results obtained on the Brodatz dataset. The proposed method achieves a success rate of 97.30%, which clearly demonstrates the descriptive power of the dilation of attractors. The experimental results indicate that the proposed method improves recognition performance from 89.37% to 97.30% over the original deterministic tourist walk method and from 89.11% to 97.30% over the multiscale fractal dimension method.

Table 1. Experimental results for different texture methods on the Brodatz dataset. Success rates are given in % as mean (standard deviation).
Method Success rate (%)
Fourier descriptors 70.67(3.39)
Co-occurrence matrices 86.49(2.69)
Gabor filters 88.99(2.03)
Original Deterministic Tourist Walk 89.37(1.96)
Multiscale Fractal Dimension 89.11(2.80)
Proposed method 97.30(1.36)


Deterministic Partially Self-avoiding Walks for Dynamic Texture Analysis

Recently, a promising method for static textures based on deterministic partially self-avoiding walks was published [2,3]. The method consists of an agent walking along the pixels according to a walk rule and a memory. Each trajectory is composed of two parts: the transient and the attractor. By analyzing the distribution of transients and attractors, it is possible to quantify and compare texture images. The proposed method [4] is an extension of these walks that models both motion and appearance features of a dynamic texture. To this end, walks are performed on three orthogonal planes of the image sequence: the XY, XT, and YT planes. The first plane models appearance features, while the other two model motion features. The method is summarized in Figure 2.

Figure 2. Summarization of the proposed method. Step 1: the sequence of images is modeled in three planes. Step 2: for each plane, deterministic partially self-avoiding walks with different memory sizes are performed. Step 3: a feature vector for each joint distribution is calculated. Step 4: the feature vectors of the planes are concatenated to compose the final feature vector.
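Step 1 above, slicing the video volume into XY, XT, and YT planes, can be sketched as follows. For brevity only the central slice of each orientation is taken, whereas the actual method performs walks over all planes; the function name and the `video[t][y][x]` indexing convention are assumptions for illustration.

```python
def orthogonal_planes(video):
    """Central XY, XT, and YT slices of a video volume indexed as video[t][y][x].

    The XY plane is a single frame (appearance); the XT and YT planes
    mix one spatial axis with time (motion).
    """
    t_len, h, w = len(video), len(video[0]), len(video[0][0])
    xy = video[t_len // 2]                                         # mid-time frame
    xt = [[video[t][h // 2][x] for x in range(w)] for t in range(t_len)]
    yt = [[video[t][y][w // 2] for y in range(h)] for t in range(t_len)]
    return xy, xt, yt
```

Walks with several memory sizes would then be run on each plane (Step 2), and the per-plane feature vectors concatenated into the final descriptor (Step 4).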

Results

In this section, we compare the proposed method with existing methods by classifying videos from the DynTex database (Table 2). The table presents the success rate as mean and standard deviation, along with the number of features extracted by each method. The last column presents the success rate averaged over the k-nearest neighbor (KNN) and Support Vector Machine (SVM) classifiers. The experimental results indicate that the proposed method achieved the highest average success rate, 97.60%, followed by Local Binary Patterns from Three Orthogonal Planes (LBP-TOP) with 96.34% and Rotation Invariant Volumetric Local Binary Patterns (RI-VLBP) with 96.14%.

Table 2. Comparison with existing methods on the DynTex database. Success rates are given in % as mean (standard deviation).
Method N. of features KNN SVM Average
RI-VLBP 4115 97.64(1.32) 94.63(2.68) 96.14
LBP-TOP 768 99.02(0.77) 93.66(2.29) 96.34
Proposed Method 75 97.56(1.20) 97.64(1.34) 97.60
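The KNN column in Table 2 can be read as nearest-neighbor classification over the extracted feature vectors. A minimal 1-NN sketch, assuming Euclidean distance and hypothetical names, is:

```python
def classify_1nn(train_feats, train_labels, query):
    """Assign the label of the closest training feature vector (1-NN)."""
    # squared Euclidean distance; monotone in the true distance, so the
    # argmin is the same and the square root can be skipped
    sq_dist = lambda a, b: sum((u - v) ** 2 for u, v in zip(a, b))
    best = min(zip(train_feats, train_labels),
               key=lambda fv: sq_dist(fv[0], query))
    return best[1]
```

With only 75 features per video (versus 768 for LBP-TOP and 4115 for RI-VLBP), such distance computations are correspondingly cheap for the proposed method.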

We also tested our method by segmenting natural dynamic textures [5] from the DynTex database. Figure 3 presents the segmentation results. The original frames are shown in the first column, and the results of our method in the second column. For comparison, the third column shows the results obtained by Fazekas et al. [12]. These results show that our method agrees with the perceptual categorization of the dynamic textures even in a challenging natural context.

Figure 3. Segmentation results for natural dynamic textures. The first column presents the original frame of the video, the second column the results of our method, and the third column the results obtained by Fazekas et al. [12].

References

[1] ZIMER, A. M.; RIOS, E. C.; MENDES, P. C. D. M.; GONÇALVES, W. N.; BRUNO, O. M.; PEREIRA, E. C.; MASCARO, L. H. Investigation of AISI 1040 steel corrosion in H2S solution containing chloride ions by digital image processing coupled with electrochemical techniques. Corrosion Science, v. 53, p. 3193-3201, 2011.

[2] BACKES, A. R.; GONÇALVES, W. N.; MARTINEZ, A. S.; BRUNO, O. M. Texture analysis and classification using deterministic tourist walk. Pattern Recognition, v. 43, p. 685-694, 2010.

[3] GONÇALVES, W. N.; BACKES, A. R.; MARTINEZ, A. S.; BRUNO, O. M. Texture descriptor based on partially self-avoiding deterministic walker on networks. Expert Systems with Applications, 2012.

[4] GONÇALVES, W. N.; BRUNO, O. M. Dynamic texture analysis and classification using deterministic partially self-avoiding walks. In: Advanced Concepts for Intelligent Vision Systems, 2011, Ghent. Advanced Concepts for Intelligent Vision Systems, 2011. v. 6915. p. 349-359.

[5] GONÇALVES, W. N.; MACHADO, B. B.; BRUNO, O. M. Segmentação de texturas dinâmicas: um novo método baseado em caminhadas determinísticas [Dynamic texture segmentation: a new method based on deterministic walks]. In: Workshop de Visão Computacional, 2011, Curitiba. VII Workshop de Visão Computacional, 2011. v. 1. p. 73-78.

[6] MACHADO, B. B.; GONÇALVES, W. N.; BRUNO, O. M. Image decomposition with anisotropic diffusion applied to leaf-texture analysis. In: Workshop de Visão Computacional, 2011, Curitiba. VII Workshop de Visão Computacional, 2011. v. 1. p. 155-160.

[7] GONÇALVES, W. N.; MACHADO, B. B.; BRUNO, O. M. Spatiotemporal Gabor filters: a new method for dynamic texture recognition. In: Workshop de Visão Computacional, 2011, Curitiba. VII Workshop de Visão Computacional, 2011. v. 1. p. 184-189.

[8] MACHADO, B. B.; GONÇALVES, W. N.; BRUNO, O. M. Enhancing the texture attribute with partial differential equations: a case of study with Gabor filters. In: Advanced Concepts for Intelligent Vision Systems, 2011, Ghent. Advanced Concepts for Intelligent Vision Systems, 2011. v. 6915. p. 1-1.

[9] GONÇALVES, W. N.; SILVA, J. A.; BRUNO, O. M. A rotation invariant face recognition method based on complex network. In: Iberoamerican Congress on Pattern Recognition, 2010, São Paulo. Lecture Notes in Computer Science, 2010. v. 6419. p. 426-433.

[10] GONÇALVES, W. N.; MARTINEZ, A. S.; BRUNO, O. M. Complex network classification using partially self-avoiding deterministic walks. Chaos, 2012.

[11] FABBRI, R.; GONÇALVES, W. N.; LOPES, F. J.; BRUNO, O. M. Multi-q analysis of image patterns. Physica A, 2012.

[12] FAZEKAS, S.; AMIAZ, T.; CHETVERIKOV, D.; KIRYATI, N. Dynamic texture detection based on motion analysis. International Journal of Computer Vision, v. 82, p. 48-63, 2009.