
Maderas. Ciencia y tecnología

Online version ISSN 0718-221X

Maderas, Cienc. tecnol. vol.23  Concepción  2021  Epub 10-Sep-2021

http://dx.doi.org/10.4067/s0718-221x2021000100465 

ARTICLE

Automatic identification of charcoal origin based on deep learning

Ricardo Rodrigues de Oliveira Neto1 

Larissa Ferreira Rodrigues2 

João Fernando Mari2 

Murilo Coelho Naldi3 

Emerson Gomes Milagres1 

Benedito Rocha Vital1 

Angélica de Cássia Oliveira Carneiro1 

Daniel Henrique Breda Binoti4 

Pablo Falco Lopes4 

Helio Garcia Leite1 

1Federal University of Viçosa, Department of Forest Engineering, Viçosa, MG, Brazil.

2Federal University of Viçosa, Institute of Exact and Technological Sciences, Rio Paranaíba, MG, Brazil.

3Federal University of São Carlos, Department of Computer Science, São Carlos, SP, Brazil.

4DAP Florestal, Centro Empresarial da Serra. Parque Res. de Laranjeiras, Serra, ES, Brazil.

Abstract:

The differentiation between charcoal produced from Eucalyptus plantations and from native forests is essential for the control, commercialization, and supervision of charcoal production in Brazil. The main contribution of this study is to identify the charcoal origin using macroscopic images and deep learning. We applied a Convolutional Neural Network (CNN) with the VGG-16 architecture and fine-tuning, evaluating preprocessing strategies based on contrast enhancement and data augmentation by rotation of the training set images, over 360 macroscopic charcoal images from plantation and native forests. The results point out that our method provides new perspectives for identifying the charcoal origin, achieving over 95 % mean accuracy in classifying charcoal from native forests for all compared preprocessing strategies.

Keywords: Charcoal; classification; deep learning; native wood; preprocessing.

Introduction

Brazil is one of the largest charcoal producers, with production reaching 5.3 million tons in 2019 (Ministry of Mines and Energy 2020). Besides being a world producer, Brazil is also one of the largest consumers of charcoal. Most of this production is destined for the internal market, mainly for the pig-iron and steel sectors and, to a lesser extent, for the ferroalloy sector and residential consumption (ABRAF 2013). However, this demand is not fully supplied by charcoal from planted forests, which makes the illegal exploitation of native forests attractive.

In order to prevent this illegal production, the Ministry of the Environment, through Ordinance No. 253/2006, established the Forest Origin Document (DOF), an obligatory license for the transportation and storage of forest products and by-products that includes information about the origin of those products. This license expires when the transported product does not correspond to the species authorized in the DOF. In this context, forensic identification is used to analyze the wood preserved in charcoal and determine its origin (Gonçalves et al. 2012, Nisgoski et al. 2014), i.e., to distinguish charcoal produced from native forests from that produced in planted forests, mainly composed of species of Eucalyptus (Davrieux et al. 2010). The principal clones used to produce charcoal are Eucalyptus urophylla, E. grandis, and the hybrids E. urophylla x grandis, E. urophylla x camaldulensis, and E. grandis x camaldulensis (Santos 2010, Pereira et al. 2012).

Usually, the anatomical analysis of charcoal can be done through a macroscopic or microscopic approach. Microscopic identification observes features of the tissues and the constituent cells of the wood (Zenid and Ceccantini 2012), while macroscopic analysis relies only on anatomical features visible to the naked eye or with a magnifying glass, such as vessel arrangement and grouping, the arrangement and abundance of axial parenchyma, and ray width (Wheeler and Baas 1998). Both analyses can be used to distinguish Eucalyptus from other genera.

Microscopic analysis has been widely studied, as reported by Gonçalves et al. (2012), Albuquerque (2012), and Muñiz et al. (2012); despite its higher cost and limited logistics, it can identify charcoal at the species level with trustworthy results, although this is not always necessary for supervision purposes. On the other hand, only a few studies have proposed macroscopic analysis to distinguish the origin of charcoal, although it offers agility and practicality. The genus Eucalyptus presents a homogeneous anatomical constitution among its species at the morphological level, a factor that hinders separation based only on the composition and structural arrangement of wood constituents (Tomazello-Filho 1985, Oliveira 1997). This similarity, however, can help distinguish this genus from others.

Digital image processing and machine learning techniques are essential to this task because they allow visual features to be extracted for automatic classification. Some studies have proposed classifying charcoal images with a non-automated, user-based process. Khalid et al. (2008) proposed a method based on the analysis of anatomical images of the transverse plane in order to differentiate charcoal of the genus Eucalyptus sp. from charcoal of native species. Andrade et al. (2019) proposed a system for classifying the origin of charcoal using texture analysis of digital images of the cross-section plane. For this, a database was produced containing 900 images of 18 species, 12 native and 6 of the genus Eucalyptus sp. Then, texture features were extracted from each image using Gray-Level Co-occurrence Matrices (GLCM) (Haralick et al. 1973), which were used to train and evaluate statistical classifiers that identified the origin of the charcoal correctly in about 97 % of the attempts.

However, the previously cited works do not add much to the identification of the origin of charcoal in the field, due to the subjectivity, cost, and logistic limitations imposed by the use of microscopes and the preparation of the material. Advances in computational resources have allowed deep learning approaches to outperform techniques based on handcrafted feature extraction in several fields, such as computer-aided medical diagnosis systems (Litjens et al. 2017, Rodrigues et al. 2020), remote sensing (Nogueira et al. 2017, Zhu et al. 2017), forest species recognition (Hafemann et al. 2014), identification of ecosystems (Morales et al. 2018, Bayr and Puschmann 2019), agriculture (Kamilaris and Prenafeta-Boldú 2018, Knoll et al. 2018), and other applications (Gu et al. 2018).

Recently, Maruyama et al. (2018) proposed a method for the automatic classification of native charcoal species based on deep learning, using the Inception-V3 architecture (Szegedy et al. 2016) as a feature extractor. However, that study considered microscopy images, and its experiments used a simple hold-out validation technique (Devijver and Kittler 1982), which can randomly create biased sets, causing the CNNs to fit non-representative (abnormal) samples and produce misleading accuracies. Differently, we considered the VGG-16 architecture (Simonyan and Zisserman 2014) instead of Inception-V3. The VGG-16 network was chosen due to its simplicity and robustness. Moreover, it was the first architecture to replace large filters, which require more computational power, with long sequences of 3x3 convolutional filters.

In this work, we study an efficient method for the automatic identification of charcoal origin based on deep learning and k-fold cross-validation using macroscopic images. This is the first work to automatically distinguish Eucalyptus from native species using the VGG-16 architecture. Also, preprocessing strategies based on contrast enhancement, data centralization, and data augmentation by rotation of the training set images were tested to increase the performance of the CNN with fine-tuning.

Material and methods

The experiments were performed on a machine with an Intel i5 3.00 GHz processor, 16 GB of RAM, and an NVIDIA GeForce GTX 1050 Ti GPU with 4 GB of memory. All experiments were programmed using Python 3.6 and the PyTorch 1.7 deep learning framework (Paszke et al. 2019) under CUDA version 10.1 and cuDNN 7.6. The operating system was Ubuntu 18.04.5 LTS.

Image acquisition

The dataset of macroscopic charcoal images was acquired from the Wood Panel and Energy Laboratory (LAPEM) at the Federal University of Viçosa (UFV), Brazil. The material is composed of samples of carbonized wood of Eucalyptus and of native species typical of the Zona da Mata region, Minas Gerais. Native species were chosen based on their anatomical similarity to the genus Eucalyptus as well as their attractiveness for the illegal production of charcoal. Eucalyptus species were chosen from those predominantly used for the production of charcoal, as defined by Pereira et al. (2012).

In this dataset, each species or hybrid is represented by a sample coming from a single tree, without information on age or position in the trunk. The samples were charred in a muffle-type electric furnace, starting at an initial temperature of 150 ºC and increasing 50 ºC per hour up to a final temperature of 450 ºC, totaling 7 hours of carbonization. The condensable gases were collected in a condenser coupled to the muffle door. The species and hybrids used in this study and the number of samples for each species are presented in Table 1.

Table 1: Species and hybrids used. 

The images were acquired using equipment with LED illumination and a cell phone support, generating 12-megapixel images with 20x optical zoom. As the charcoal pieces were broken rather than cut, there was a large number of non-flat surfaces. With this zoom, it was possible to analyze larger areas free of irregular breaks on the charcoal surface, which would otherwise hinder the analysis of the distribution of cellular components.

The dataset is composed of 360 charcoal images, of which 135 are of Eucalyptus species and 225 of native species. An expert in wood anatomy analyzed the charcoal images and classified them as Eucalyptus or native. Table 2 shows the name, quantity, and one sample image for each class. All images were categorized into two properly labeled classes: eucalyptus (135 images) and native (225 images). Then, all images of the charcoal dataset were randomly sampled and partitioned into five stratified sets (folds).

Table 2: Information about each class in the dataset. 

Image preprocessing

All images were resized to 224 x 224 pixels, the input size required by the CNN architecture used in this work. Then one of the preprocessing methods was applied, and the resulting images were used to train and test the VGG-16 architecture.

Figure 1 shows samples of charcoal images for each preprocessing strategy evaluated. The original image from the dataset is defined as strategy (a) (i.e., no preprocessing). In (b), there is an example of the contrast stretching strategy.

Figure 1: Contrast improvement applied in charcoal image: (a) original image (i.e. without preprocessing); (b) contrast stretching. Image instances from the charcoal dataset showing Eucalyptus (top) and native (bottom) classes. 
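The exact contrast stretching procedure is not detailed here; a minimal sketch, assuming a simple percentile-based linear stretch applied after resizing to the 224 x 224 network input (the 2nd/98th percentile cut-offs and the file name are illustrative assumptions), could look as follows:

    import numpy as np
    from PIL import Image

    def contrast_stretch(img, low_pct=2, high_pct=98):
        """Linearly rescale intensities so the chosen percentiles map to [0, 255]."""
        arr = np.asarray(img).astype(np.float32)
        lo, hi = np.percentile(arr, (low_pct, high_pct))
        out = np.clip((arr - lo) / max(hi - lo, 1e-6) * 255.0, 0, 255)
        return Image.fromarray(out.astype(np.uint8))

    img = Image.open("charcoal_sample.jpg").resize((224, 224))  # hypothetical file
    img_stretched = contrast_stretch(img)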

Data augmentation

Data augmentation is a strategy that consists of artificially enlarging the training data through label-preserving transformations, without collecting new samples (Krizhevsky et al. 2012). In this study, we applied data augmentation based on rotations of the images at angles between 0º and 360º in steps of 45º, increasing the training set eightfold.
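A minimal sketch of this rotation scheme, using torchvision's functional transforms (the dataset wiring is omitted and the file name is illustrative):

    from PIL import Image
    import torchvision.transforms.functional as TF

    def rotation_augment(img):
        """Return the 8 fixed rotations of an image: 0º, 45º, ..., 315º."""
        return [TF.rotate(img, angle) for angle in range(0, 360, 45)]

    augmented = rotation_augment(Image.open("charcoal_sample.jpg"))  # 8 images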

Convolutional neural networks

The main concepts of the deep learning paradigm come from neural networks, which aim to develop computer programs capable of solving problems that are difficult to solve through formal rules (Goodfellow et al. 2016). The main characteristic of a Convolutional Neural Network (CNN) is that it is composed mainly of convolutional layers, and its main application is the processing of visual information (Ponti et al. 2017). A CNN consists of three types of neural layers, described below (Guo et al. 2016).

Convolutional

The convolutional layer is generated by applying a set of filters over an input image. Each filter is responsible for detecting a specific type of feature. Figure 2 illustrates the basic structure of the convolutional layer C^l, composed of K filters with spatial extent F (a hyper-parameter), applied over the input volume X. Finally, the convolution result is added to the bias b, generating K 2D feature maps stacked in an output volume M^l, defined by Equation 1 (Rodrigues et al. 2020):

M^l_k = W_k * X + b_k ,   k = 1, ..., K   (1)

where W_k denotes the weights of the k-th filter and * the convolution operation.

Figure 2: Illustration of the convolutional layer. 
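As an illustration of Equation 1, a single PyTorch convolutional layer with K = 64 filters of spatial extent 3 x 3 applied to a 224 x 224 RGB input (the filter count is an arbitrary example, not this paper's configuration):

    import torch
    import torch.nn as nn

    conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=3, padding=1)
    x = torch.randn(1, 3, 224, 224)   # a batch with one RGB image
    m = conv(x)                       # K feature maps stacked in the volume M^l
    print(m.shape)                    # torch.Size([1, 64, 224, 224])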

Pooling

The pooling layer reduces the size of the feature maps by taking the maximum or the average over local regions. The CNN architecture considered in this paper applies maximum pooling because this criterion results in better generalization and faster convergence (Scherer et al. 2010). Figure 3 illustrates maximum and average pooling for a pooling layer of size 2 x 2.

Figure 3: Illustration of the pooling layer and the computations to maximum and average pooling. 
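A small sketch reproducing the two pooling computations of Figure 3 in PyTorch:

    import torch
    import torch.nn as nn

    x = torch.tensor([[[[1., 3.],
                        [2., 4.]]]])       # a single 2 x 2 feature map
    print(nn.MaxPool2d(2)(x).item())       # 4.0 -> maximum pooling
    print(nn.AvgPool2d(2)(x).item())       # 2.5 -> average pooling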

Fully connected

The fully connected layers are the last layers of the network and convert the two-dimensional feature maps into a one-dimensional feature vector. The final layer applies a softmax over as many neurons as there are classes in the dataset. Figure 4 illustrates the fully-connected layers after the convolutional and pooling layers, followed by the softmax layer.

Figure 4: Illustration of the structure of fully-connected layers and softmax layer. 

Training based on fine-tuning

The training strategy based on fine-tuning is a practical and common approach for training deep learning architectures (Goodfellow et al. 2016). The network is first trained for a classification task on a very large dataset (Deng et al. 2009). The parameter values (weights) learned for the initial layers of the network are kept (frozen), while the top layers, which are intended to learn the more complex structures of the data, are trained on the dataset of interest.

VGG-16 architecture

The VGG-16 network, which is composed of 13 convolutional layers, five pooling layers, and three fully-connected layers (including the softmax layer) (Simonyan and Zisserman 2014), was chosen due to its simplicity and robustness. In this study, we evaluated VGG-16 improved with batch normalization. This strategy keeps the mean output of each layer close to 0 and its standard deviation close to 1, increasing stability across the network and allowing faster training (Ioffe and Szegedy 2015).

We keep all blocks of convolutional layers fixed to maintain the parameters learned from training on the ImageNet dataset, while the top layers have their parameters adjusted using a small learning rate. Figure 5 illustrates the VGG-16; the blue box indicates the fixed layers.

Figure 5: VGG-16 architecture. Blue box indicates the blocks of convolutional layers fixed during training based on fine-tuning. 
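A minimal sketch of this freezing scheme with torchvision, assuming the ImageNet-pre-trained VGG-16 with batch normalization and a new two-neuron output layer for the eucalyptus/native classes (which layers beyond the output remain trainable is not fully specified here):

    import torch.nn as nn
    from torchvision import models

    model = models.vgg16_bn(pretrained=True)    # ImageNet weights
    for param in model.features.parameters():
        param.requires_grad = False             # freeze the convolutional blocks
    model.classifier[6] = nn.Linear(4096, 2)    # replace the 1000-class output
                                                # with a eucalyptus/native layer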

The training of VGG-16 is defined as an optimization problem to improve the quality of prediction. In this study, we considered the loss function as the objective function. The loss function used was the binary cross-entropy, commonly used for binary classification problems. We minimize this function using the Stochastic Gradient Descent (SGD) optimizer (Lecun et al. 1998), a popular algorithm for parameter optimization in machine learning and deep learning models. It is based on a gradient descent approximation that uses batches of randomly selected samples instead of computing the gradient over every object in the dataset. Thus, the SGD optimizer iteratively finds the parameter values that minimize the loss function (cross-entropy) (Goodfellow et al. 2016).

VGG-16 was trained with a learning rate of 0.001, weight decay of 1e-6, Nesterov momentum of 0.9, mini-batch size of 32, the Rectified Linear Unit (ReLU) activation function, and 100 training epochs.
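With these hyper-parameters, a training-loop sketch could look as follows; model is the fine-tuned VGG-16 from the previous sketch, train_loader is a hypothetical DataLoader yielding mini-batches of 32, and nn.CrossEntropyLoss over the two softmax neurons stands in for the binary cross-entropy described above:

    import torch
    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(
        filter(lambda p: p.requires_grad, model.parameters()),
        lr=0.001, momentum=0.9, nesterov=True, weight_decay=1e-6)

    for epoch in range(100):                    # 100 training epochs
        for images, labels in train_loader:     # mini-batches of size 32
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()                     # backpropagation
            optimizer.step()                    # SGD parameter update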

Validation

The validation of the classification is performed using the k-fold cross-validation statistical method (Kohavi 1995), which partitions the data into k folds used for training and testing. All images were sampled and partitioned into five stratified sets, i.e., the folds are built preserving (approximately) the proportion of examples of each class in the original set. We repeated the cross-validation five times; in each iteration, one of the folds is chosen for validation and the others for training.
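A sketch of this stratified partitioning with scikit-learn (the class counts follow Table 2; the random seed is arbitrary):

    import numpy as np
    from sklearn.model_selection import StratifiedKFold

    labels = np.array([0] * 135 + [1] * 225)    # 0 = eucalyptus, 1 = native
    skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    for fold, (train_idx, test_idx) in enumerate(skf.split(labels, labels)):
        # each fold approximately preserves the 135:225 class proportion
        print(f"fold {fold}: {len(train_idx)} train / {len(test_idx)} test")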

Additionally, the mean value of accuracy (Equation 2) is used to quantify the quality of the results. The accuracy index is based on the number of true positives (TP), true negatives (TN), false positives (FP), and false negatives (FN), computed from the confusion matrix, which allows verifying the number of correct classifications as opposed to the classifications predicted for each class (Duda et al. 2000):

Accuracy = (TP + TN) / (TP + TN + FP + FN)   (2)

Also, to visualize the True Positive Rate (TPR) against the False Positive Rate (FPR) at various decision thresholds, we considered the Receiver Operating Characteristic (ROC) curve. The Area Under the ROC curve (AUC) is used as a reliable classification performance measure across all possible classification thresholds (Figure 6) (Fawcett 2006).
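A sketch of how these measures can be computed with scikit-learn, where y_true, y_pred, and y_score are hypothetical arrays of ground-truth labels, predicted classes, and softmax scores for the positive class:

    from sklearn.metrics import confusion_matrix, roc_auc_score

    tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
    accuracy = (tp + tn) / (tp + tn + fp + fn)   # Equation 2
    auc = roc_auc_score(y_true, y_score)         # area under the ROC curve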

Figure 6: Proposed approach. 

Results and discussion

We trained the VGG-16 architecture considering each contrast improvement strategy and average subtraction. Figure 7 shows the evolution of the loss and accuracy values, averaged over all k-fold iterations, for each preprocessing strategy evaluated. This behavior suggests that the training did not overfit the data, maintaining the generalization property of the CNN.

Figure 7: Evolution of accuracy values and loss values for each fold and each strategy evaluated  

In order to assess the values of the True Positive Rate (TPR) against the False Positive Rate (FPR), we analyzed the ROC curve and its AUC for each iteration of the k-fold. The evolution of these values is shown graphically in Figure 8. It is important to note that an AUC above 80 % for most of the folds results in an average AUC of 84 % and 81.6 % for the original and contrast-stretched images, respectively. This result also suggests that our approach is a promising method.

Figure 8: ROC curves for each fold. 

The mean accuracy obtained with VGG-16 is presented in Table 3 for each preprocessing strategy evaluated. Using the original images is the best choice, resulting in a mean accuracy of 85.8 %. The data centralization performed by average-image subtraction has a positive impact, independently of the preprocessing.

Table 3: Average test accuracy for each preprocessing strategy  

The confusion matrices (Table 4) allow observing some aspects of the classification problem investigated in this work. The presented values were obtained by training with the whole training set and predicting over the validation set (the 3rd fold). It is worth noticing that charcoal from native wood is rarely misclassified as eucalyptus, which serves the main objective of this research, i.e., to provide a computational method capable of preventing the exploitation of native wood. Although the best overall result was obtained with the original images without preprocessing, contrast stretching allowed the identification of 97.78 % of native woods when fold 3 is considered.

Figure 9 shows samples of native images classified as Eucalyptus for each strategy tested. Although the goal is to perform a binary classification, we found that native species with few samples in the database, such as Cydonia oblonga Mill, Inga edulis, Prosopis juliflora, and Sclerolobium paniculatum, may be classified as Eucalyptus. The small number of samples of these species results in a lack of learned visual patterns. We also observed that the other misclassified native species present visual patterns similar to Eucalyptus, such as an increase in the thickness and distribution of the vessels in the center-bark direction (de Jesus and Silva 2020).

Table 4: Confusion Matrix of the best result for each preprocessing strategy. 

Figure 9: Examples of native images classified as Eucalyptus for each strategy evaluated.  

Conclusions

The results allow concluding that, for the classification of charcoal images, the VGG-16 architecture obtained its best results when the augmented dataset is analyzed with average subtraction as the preprocessing strategy (an accuracy of 85.8 %). Also, after learning the relevant features, the VGG-16 architecture resulting from the proposed method was able to classify charcoal from native forests with at least 95 % mean accuracy using the original images, i.e., without any preprocessing strategy, considering the 5-fold cross-validation procedure.

The presented results open new opportunities towards better exploiting deep learning for the automatic classification of charcoal produced from planted wood (Eucalyptus) versus charcoal originating from native forests. As future work, other data augmentation strategies may be tested, together with other normalization strategies and different types of convolutional neural networks.

Acknowledgments

We gratefully acknowledge the support of NVIDIA Corporation, FAPEMIG for financial support, LAPEM (Panels and Wood Energy Laboratory) for the charcoal materials, and Institute of Exact and Technological Sciences (IEP UFV-CRP) for providing the resources for the acquisition of the GeForce GTX 1050Ti GPU used in this research. This study was financed in part by the Coordenação de Aperfeiçoamento de Pessoal de Nível Superior - Brasil (CAPES) - Finance Code 001.

References:

ABRAF. 2013. ABRAF Statistical Yearbook 2013 - Base Year 2012, Brasília, Brazil. [ Links ]

Albuquerque, Á.R. 2012. Anatomia comparada do lenho e do carvão aplicada na identificação de 76 espécies da floresta amazônica, no estado do Pará, Brasil. Master's Dissertation, University of São Paulo, Piracicaba, Brazil. https://dx.doi.org/10.11606/D.11.2012.tde-20092012-093146 [ Links ]

Andrade, B.G.D.; Vital, B.R.; Carneiro, A.D.C.O.; Basso, V.M.; Pinto, F.D.A.D.C. 2019. Potential of Texture Analysis for Charcoal Classification. FLORAM 26(3): 1-10. http://dx.doi.org/10.1590/2179-8087.124117 [ Links ]

Bayr, U.; Puschmann, O. 2019. Automatic detection of woody vegetation in repeat landscape photographs using a convolutional neural network. Ecol Inform 50:220-233. https://doi.org/10.1016/j.ecoinf.2019.01.012 [ Links ]

Davrieux, F.; Rousset, P.L.A.; Pastore, T.C.M.; Macedo, L.A. de; Quirino, W.F. 2010. Discrimination of native wood charcoal by infrared spectroscopy. Quim Nova 33(5): 1093-1097. http://dx.doi.org/10.1590/S0100-40422010000500016 [ Links ]

Deng, J.; Dong, W.; Socher, R.; Li-Jia, L.; Fei-Fei, L. 2009. ImageNet: A large-scale hierarchical image database; In: IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, pp. 248-255. https://doi.org/10.1109/CVPR.2009.5206848 [ Links ]

Devijver, P.A.; Kittler, J. 1982. Pattern recognition: A statistical approach. Prentice-Hall: London, UK. [ Links ]

Duda, R.O.; Hart, P.E.; Stork, D.G. 2000. Pattern Classification (2nd Edition). New York, NY, USA: Wiley-Interscience. [ Links ]

Fawcett, T. 2006. An introduction to ROC analysis. Pattern Recogn Lett 27: 861-874. https://doi.org/10.1016/j.patrec.2005.10.010 [ Links ]

Gonçalves, T.A.P.; Marcati, C.R.; Scheel-Ybert, R. 2012. The effect of carbonization on wood structure of Dalbergia violacea, Stryphnodendron polyphyllum, Tapirira guianensis, Vochysia tucanorum, and Pouteria torta from the brazilian cerrado. Iawa J 33(1): 73-90. https://doi.org/10.1163/22941932-90000081 [ Links ]

Goodfellow, I.; Bengio, Y.; Courville, A. 2016. Deep learning. MIT Press: USA. http://www.deeplearningbook.org [ Links ]

Gu, J.; Wang, Z.; Kuen, J.; Ma, Lianyang.; Shahroudy, A.; Shuai, B.; Liu, T.; Wang, X.; Wang, L.; Wang, G.; Cai, J.; Chen, T. 2018. Recent advances in convolutional neural networks. Pattern Recogn 77: 354-377. https://doi.org/10.1016/j.patcog.2017.10.013 [ Links ]

Guo, Y.; Liu, Y.; Oerlemans, A.; Lao, S.; Wu, S.; Lew, M.S. 2016. Deep learning for visual understanding: A review. Neurocomputing 187: 27-48. https://doi.org/10.1016/j.neucom.2015.09.116 [ Links ]

Hafemann, L.G.; Oliveira, L.S.; Cavalin, P. 2014. Forest species recognition using deep convolutional neural networks. In: 22nd International Conference on Pattern Recognition, Stockholm, Sweden, pp. 1103-1107. https://doi.org/10.1109/ICPR.2014.199 [ Links ]

Haralick, R.M.; Shanmugam, K.; Dinstein, I. 1973. Textural features for image classification. IEEE Transactions on Systems, Man, and Cybernetics 6: 610-621. https://doi.org/10.1109/TSMC.1973.4309314 [ Links ]

Ioffe, S.; Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. Proceedings of the International Conference on Machine Learning 37: 448-456. https://arxiv.org/pdf/1502.03167.pdf [ Links ]

de Jesus, D.S.; Silva, J.S. 2020. Variação radial de propriedades anatômicas e físicas da madeira de eucalipto. Cadernos de Ciência & Tecnologia 37(1): 26476. http://dx.doi.org/10.35977/0104-1096.cct2020.v37.26476 [ Links ]

Kamilaris, A.; Prenafeta-Boldú, F.X. 2018. Deep learning in agriculture: A survey. Comput and Electron in Agr 147: 70-90. https://doi.org/10.1016/j.compag.2018.02.016 [ Links ]

Khalid, M.; Lee, E.L.Y.; Yusof, R.; Nadaraj, M. 2008. Design of an intelligent wood species recognition system. IJSSST 9(3): 9-19. https://ijssst.info/Vol-09/No-3/paper2.pdf [ Links ]

Knoll, F.J.; Czymmek, V.; Poczihoski, S.; Holtorf, T.; Hussmann, S. 2018. Improving efficiency of organic farming by using a deep learning classification approach. Comput Electron Agric 153: 347-356. https://doi.org/10.1016/j.compag.2018.08.032 [ Links ]

Kohavi, R. 1995. A study of cross-validation and bootstrap for accuracy estimation and model selection. In Proceedings of the 14th International Joint Conference on Artificial Intelligence - IJCAI’95, Volume 2: 1137-1143. Morgan Kaufmann Publishers Inc.: San Francisco, CA, USA. [ Links ]

Krizhevsky, A.; Sutskever, I.; Hinton, G.E. 2012. ImageNet classification with deep convolutional neural networks. In Proceedings of the 25th International Conference on Neural Information Processing Systems - NIPS’12 Volume 1: 1097-1105. Curran Associates Inc.: Red Hook, NY, USA, [ Links ]

Lecun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE 86(11): 2278-2324. https://doi.org/10.1109/5.726791 [ Links ]

Litjens, G.; Kooi, T.; Ehteshami-Bejnordi, B.; Setio, A.; Ciompi, F.; Ghafoorian, M.; van der Laak, J.; van Ginneken, B.; Sánchez, C. 2017. A survey on deep learning in medical image analysis. Med Image Anal 42: 60-88. https://doi.org/10.1016/j.media.2017.07.005 [ Links ]

Maruyama, T.M.; Oliveira, L.S.; Britto, A.S.; Nisgoski, S. 2018. Automatic classification of native wood charcoal. Ecol Inform 46: 1-7. https://doi.org/10.1016/j.ecoinf.2018.05.008 [ Links ]

Ministry of Mines and Energy. 2020. Brazilian Energy Balance - year 2019. Ministry of Mines and Energy, Rio de Janeiro, Brazil. http://biblioteca.olade.org/opac-tmpl/Documentos/cg00828.pdf [ Links ]

Morales, G.; Kemper, G.; Sevillano, G.; Arteaga, D.; Ortega, I.; Telles, J. 2018. Automatic segmentation of Mauritia flexuosa in unmanned aerial vehicle (uav) imagery using deep learning. Forests 9(12):736. https://doi.org/10.3390/f9120736 [ Links ]

Muñiz, G.I.B.; Nisgoski, S.; Shardosin, F.Z.; França, R.F. 2012. Anatomia do carvão de espécies florestais. Cerne 18(3): 471-477. http://dx.doi.org/10.1590/S0104-77602012000300015 [ Links ]

Nisgoski, S.; Magalhães, W.L.E.; Batista, F.R.R.; França, R.F.; de Muñiz, G.I.B. 2014. Características anatômicas e energéticas do carvão de cinco espécies. Acta Amazon 44(3): 367-372. https://dx.doi.org/10.1590/1809-4392201304572 [ Links ]

Nogueira, K.; Penatti, O.A.B.; dos Santos, J.A. 2017. Towards better exploiting convolutional neural networks for remote sensing scene classification. Pattern Recogn 61: 539-556. https://doi.org/10.1016/j.patcog.2016.07.001 [ Links ]

Oliveira, J.T. 1997. Caracterização da madeira de eucalipto para a construção civil. Ph.D. Thesis, Universidade de São Paulo, São Paulo, Brazil. [ Links ]

Paszke, A.; Gross, S.; Massa, F.; Lerer, A.; Bradbury, J.; Chanan, G.; Killeen, T.; Lin, Z.; Gimelshein, N.; Antiga, L.; Desmaison, A.; Kopf, A.; Yang, E.; DeVito, Z.; Raison, M.; Tejani, A.; Chilamkurthy, S.; Steiner, B.; Fang, L.; Bai, J.; Chintala, S. 2019. PyTorch: An imperative style, high-performance deep learning library. Advances in Neural Information Processing Systems 32: 8026-8037. [ Links ]

Pereira, B.L.C.; Oliveira, A.C.; Carvalho, A.M.M.L.; Carneiro, A. de C.O.; Santos, L.C.; Vital, B.R. 2012. Quality of wood and charcoal from eucalyptus clones for ironmaster use. Int J For Res Article ID 523025. https://doi.org/10.1155/2012/523025 [ Links ]

Ponti, M.A.; Ribeiro, L.S.F.; Nazare, T.S.; Bui, T.; Collomosse, J. 2017. Everything you wanted to know about deep learning for computer vision but were afraid to ask. In 30th SIBGRAPI conference on graphics, patterns and images tutorials (SIBGRAPI-T). Niterói, Brazil. 17-41. https://doi.org/10.1109/SIBGRAPI-T.2017.12 [ Links ]

Rodrigues, L.F.; Naldi, M.C.; Mari, J.F. 2020. Comparing convolutional neural networks and preprocessing techniques for HEp-2 cell classification in immunofluorescence images. Comput Biol Med 116: 103542. https://doi.org/10.1016/j.compbiomed.2019.103542 [ Links ]

Santos, R.C. 2010. Parâmetros de qualidade da madeira e do carvão vegetal de clones de eucalipto. Ph.D. Thesis, Universidade Federal de Lavras, Lavras, Brazil. http://repositorio.ufla.br/jspui/handle/1/2775 [ Links ]

Scherer, D.; Müller, A.; Behnke, S. 2010. Evaluation of pooling operations in convolutional architectures for object recognition. In: International Conference on Artificial Neural Networks. Springer, Berlin, Heidelberg. p. 92-101. https://doi.org/10.1007/978-3-642-15825-4_10 [ Links ]

Simonyan, K.; Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556. https://arxiv.org/pdf/1409.1556.pdfLinks ]

Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; Wojna, Z. 2016. Rethinking the inception architecture for computer vision. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA. 2818-2826. https://ieeexplore.ieee.org/document/7780677 [ Links ]

Tomazello-Filho, M. 1985. Estrutura anatômica da madeira de oito espécies de eucalipto cultivadas no Brasil. IPEF 29: 25-36. https://www.ipef.br/publicacoes/scientia/nr29/cap03.pdf [ Links ]

Zenid, G. J.; Ceccantini, G.C. 2012. Identificação macroscópica de madeiras. IPT: São Paulo, Brazil. [ Links ]

Zhu, X.X.; Tuia, D.; Mou, L.; Xia, G.S.; Zhang, L.; Xu, F.; Fraundorfer, F. 2017. Deep learning in remote sensing: A comprehensive review and list of resources. IEEE Geoscience and Remote Sensing Magazine 5(4): 8-36. https://doi.org/10.1109/MGRS.2017.2762307 [ Links ]

Wheeler, E.A.; Baas, P. 1998. Wood identification-a review. IAWA J 19(3): 241-264. https://doi.org/10.1163/22941932-90001528 [ Links ]

Received: February 20, 2020; Accepted: August 04, 2021

Corresponding author: ricardo.rodrigues@ufv.br

Creative Commons License This is an open-access article distributed under the terms of the Creative Commons Attribution License