A new discovery on visual information dynamic changes from retina to V2

Abstract: The information processing mechanisms of the visual nervous system remain unsolved scientific issues in neuroscience, owing to the lack of a unified and widely accepted explanatory theory. It has been well documented that, although approximately 80% of the rich and complicated perceptual information from the real world is transmitted toward the visual cortex, only a small fraction of that visual information reaches V1. This, nevertheless, does not affect our visual perception. Furthermore, how neurons in V2 encode such a small amount of visual information has yet to be addressed. To this end, the current paper establishes a visual network model for retina-LGN-V1-V2 and quantitatively accounts for the scarcity of visual information and the encoding rules, based on the principle of neural mapping from V1 to V2. The results demonstrate that the visual information undergoes a small degree of dynamic degradation when it is mapped from V1 to V2, during which a convolution calculation occurs. Therefore, visual information dynamic degradation mainly manifests itself along the pathway from the retina to V1, rather than from V1 to V2. The slight changes in the visual information are attributable to the fact that the receptive fields (RFs) of V2 cannot further extract the image features. Meanwhile, despite the scarcity of visual information mapped from the retina, the RFs of V2 can still accurately respond to and encode "corner" information, owing to the effects of synaptic plasticity, a function that does not exist in V1. This is a new discovery that has not been noticed before. To sum up, the coding of the "contour" feature (edge and corner) is achieved in the pathway of retina-LGN-V1-V2.


The "Brain Projects" have been widely implemented throughout the world in recent years, such as those 4 in China (Poo et al., 2016), the U.S.A (Bargmann & Newsome, 2014), Europe (Amunts et al., 2016), and 5 Japan (Okano, Miyawaki, & Kasai, 2015). Such phenomenon has accordingly contributed to the burgeoning 6 of research on visual information processing mechanisms in academic fields such as cognitive neuroscience 7 and computer vision (CV) ( Yu et al., 2020). Considered as the perfect image information processing system, visual system of 9 human beings can quickly recognize such objective information as position, size, shape, color, orientation 10 with substantial advantages in stability, robustness, efficiency and simplicity (Zhao,  mechanisms has kept accelerating as biological techniques continue to evolve over the past few decades. (Xu 16 et al., 2017). In 1962, Wiesel and Hubel's experimental research findings on cat's V1 illustrated the 17 correlation between the RFs of the lateral geniculate nucleus (LGN) and RFs of V1, which significantly 18 advanced research in the field of biological vision (Hubel & Wiesel, 1962). In 1971, Dubner and Zeki studied 19 the characteristics of the orientation selectivity of cells in V5, initially revealing that the MT area belongs to 20 the central region of motion perception (Dubner & Zeki, 1971). In 1994, Ungerleider and Haxby proposed 21 the theory of the dorsal pathway and ventral pathway, providing a physiological basis for the visual system 22 to process motion and static information (Ungerleider & Haxby, 1994). In 2002, Riesenhuber and Poggio 23 discovered mutual projection and interaction between the dorsal and ventral pathways. Building on this 24 synergistic effect, the researchers further investigated perceptions under the influence of visual stimuli 25 (Riesenhuber & Poggio, 2002). In the same year, Yifeng Zhou and Tiande Shou (2002) revealed that the 26 orientation sensitivity of LGN cells could experience changes due to visual cortex feedback. In 2010, Bin 27 Zhu and Tiande Shou reported that V4 has a positive correlation effect on the orientation selectivity of V1 28 (Shou, 2010b). Further, Jianbo Xiao and Xin Huang discovered the characteristics of MT cells for 29 distinguishing complex orientations, indicating their great significance for the extraction of multiple 30 movement directions (Xiao & Huang, 2015). Altogether, these experimental results have contributed 31 considerably to understanding the basic principles of visual information processing (Li, 2014;Marr, 2010;32 Yin, Li, & Wu, 2018). 33 In parallel with neurobiological experiments, a number of neural computational models for the visual 34 system have been likewise put forward. As early as in 1982, Marr firstly introduced a relatively 35 comprehensive theory of visual computing informed by research grounded in neurobiology (Marr, 1982). He 36 argued that visual cognition obtains "what" and "where" information through the "seeing" behavior and that 37 the brain follows the hierarchical processing of visual information and the bottom-up principle. Such findings 38 3 are deemed to lay the groundwork for research in subsequent years. In 1999, Riesenhuber and Poggio 1 proposed a model named "HMAX" based on V1 cells, mimicking the neural mapping from simple cells to 2 complex cells in V1 (Riesenhuber & Poggio, 1999). 
In 2001, computational models of visual attention came to the fore, encompassing environmental stimulus saliency, the saliency map, inhibition of return, attention and eye movements, scene understanding, and object recognition; these models enlarged the knowledge base concerning the neurobiological mechanisms of visual attention (Itti & Koch, 2001). In 2003, Li Zhaoping explored the segmentation and contour enhancement of V1 cells from the perspective of a computational model (Li, 2003). In 2006, Schölkopf … . Based on data from four continuous-reproduction experiments on working memory for color and direction, Oberauer and Lin (2017) concluded that continuous and discrete visual information share the same mechanisms of cue-based retrieval and interference, paving the way for a unified theory of working memory for verbal, spatial, and visual information.
As the above review suggests, exploration of visual information processing mechanisms has gone through a long developmental period, giving rise to a substantial amount of scientific achievement both in neurobiological experiments and in computational neuroscience. Nevertheless, there is still no well-established theory to elucidate the significant phenomenon of visual information dynamic degradation in the visual nervous system.
Clearly, visual information dynamic degradation occurs in the visual system. According to the experimental data provided by Anderson et al. (2005), …; the edge features can also be extracted efficiently according to the principle of energy minimization of brain activity. Moreover, the computational model proposed in accordance with such findings provides quantitative methods for understanding the neural mechanisms of the dynamic degradation in the mapping from the retina to V1, and it can produce results that match the experimental data noted above.
As we mentioned earlier, however, the mechanism of visual information mapping from V1 to V2 remains unclear with respect to the existence (or not) of degradation during the mapping process and the way in which such visual information can be quantitatively analyzed (Semedo, Zandvakili, Machens, Byron, & Kohn, 2019; Zhaoping, 2019). These questions are vital to understanding visual information processing in higher-order cortices.
Due to the lack of available models to address these questions, we established a computational model in the current paper to quantitatively predict and analyze the visual information dynamic degradation based on the mapping from V1 to V2. The study was informed by the convolution calculation approach for the retina-LGN-V1 pathway. The novelty of this study consists mainly in the following three respects.
First, our earlier model of the retina-LGN-V1 pathway (Zhong & Wang, 2020) produced results that were consistent with experimental data, which proved its feasibility. Therefore, we extend that model based on the anatomical architecture between V1 and V2, which is of great value to research on visual information processing from a new theoretical perspective.
Second, the computational model proposed in the current paper includes 6 layers simulating photoreceptors, ganglion cells, LGN, the simple and complex cells of V1, and V2, thus mimicking the visual information processing. The results indicate that convolution calculation still exists and that a slight degree of dynamic degradation occurs from V1 to V2.
Specifically, the visual information of V2 is 0.18 times that of V1, which offers a precise understanding of the visual information mapping mechanism from V1 to V2. In addition, the computational results help make up for the lack of experimental data on V1-V2.
Lastly, the results demonstrate that although the RFs of V2 respond strongly to the "corners" of the visual image (Hosoya & Hyvärinen, 2015), they do not extract the feature information any further. Therefore, it can be concluded that the significant dynamic degradation occurs along the pathway from the retina to V1. In other words, the novel visual information from the real world is entirely processed in the early visual areas and primarily in retina-LGN-V1. On the other hand, following the principle of synaptic plasticity, the RFs of V2 can accurately respond to and encode the scarce "corner" information about the real world. The contour detection (edge and corner detection) of visual perception in natural scenes uses only lower-order areas' visual information.

The Visual Information Changes from Retina to V1
The visual system grants animals the capability to perceive the real world (Gazzaniga et al., 2019). In the ventral pathway of the visual cortex, form perception is gradually refined along the cortical hierarchy from low-order to high-order areas (Hatori, Mashita, & Sakai, 2016). The RFs of V1, V2, and V4 are selective for orientations, angles, and curvatures, respectively. Light reaches the retina and is then mapped to the LGN and V1; this sequence of visual information processing follows Figure 1, as shown below (Zhong & Wang, 2020).

[Figure 1: the channel of edge detection of the visual system, from image input through photoreceptors to the RFs of complex cells and data output, with convolution computation.]
Fig. 1 The visual information processing from retina to V1.

The pathway from the retina to V1 is a one-to-one neural mapping (Zhao et al., 2010). The photoreceptors convert external light signals into bioelectrical signals and deliver them to ganglion cells, from which they are finally transmitted to V1 through the LGN. In this system, about 10^10 bits/sec are deposited in the retina, yet only about 10^4 bits/sec reach V1. Obviously, the change in visual information from the retina to V1 is a dynamic degradation.
The ganglion cell modeled here is of the On-center type, as shown in Figure 2.
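The On-center ganglion cell is typically modeled with a difference of Gaussians (DOG), the same RF model the paper later uses for the LGN. The following is a minimal NumPy sketch of such a kernel and its response to a toy light/dark boundary; the kernel size and sigma values are our own illustrative choices, not parameters taken from the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def dog_kernel(size=15, sigma_center=1.0, sigma_surround=3.0):
    """On-center difference-of-Gaussians (DOG) receptive field.

    A narrow excitatory center Gaussian minus a wider inhibitory surround
    Gaussian gives the center-surround antagonism of ganglion/LGN cells.
    The sigma values here are illustrative, not taken from the paper.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    center = np.exp(-(x**2 + y**2) / (2 * sigma_center**2)) / (2 * np.pi * sigma_center**2)
    surround = np.exp(-(x**2 + y**2) / (2 * sigma_surround**2)) / (2 * np.pi * sigma_surround**2)
    return center - surround

# A toy light/dark boundary: the DOG response is strongest near the edge,
# which illustrates why this stage detects edge features.
image = np.zeros((32, 32))
image[:, 16:] = 1.0
edge_response = convolve2d(image, dog_kernel(), mode="same")
```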

Section 2.1 briefly introduced and analyzed the reason for visual information dynamic degradation from the retina to V1 in the visual nervous system. Nevertheless, it remains unknown how visual information changes in transmission from V1 to V2, and how it changes along the ventral pathway as the cortical order increases.
Sparse coding theory is a critical approach in visual information processing. Due to the restriction of energy metabolism during brain information processing and signal transmission, the number of neurons that process large amounts of visual information is very small (Hosoya & Hyvärinen, 2015). To some extent, the activity of simple cells in V1 can be summarized as a linear function of RFs over a small spatial region, and the Gabor function can be used to represent the two-dimensional mapping characteristics of simple cells (Olshausen & Field, 1996). According to Livingstone and Hubel (1987), shape, color, motion, and stereopsis are separated in V1 and V2 during the information processing of retina-LGN-V1-V2. Since we focus on the changes of visual information, we pay greater attention to shape. Hosoya and Hyvärinen have proposed a model based on a 3-layer network consisting of simple cells, complex cells, and V2 cells (Hosoya & Hyvärinen, 2015). Accordingly, we contend that the visual information processing of retina-LGN-V1-V2 in the visual system, shown in Figure 4, can be represented by the structural schematic diagram shown in Figure 5.
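For readers who want a concrete form of the Gabor representation mentioned above, the sketch below implements the standard two-dimensional Gabor function (a Gaussian envelope times a cosine grating) with the usual parameters (wavelength λ, orientation θ, phase ψ, aspect ratio γ, envelope width σ); the numeric values are illustrative assumptions rather than the paper's fitted parameters.

```python
import numpy as np

def gabor_rf(size=21, lam=8.0, theta=0.0, psi=0.0, gamma=0.5, sigma=4.0):
    """2-D Gabor receptive field: a Gaussian envelope times a cosine grating.

    lam   : wavelength of the cosine factor
    theta : preferred orientation in radians
    psi   : phase offset
    gamma : spatial aspect ratio (vertical to horizontal)
    sigma : standard deviation of the Gaussian envelope
    All numeric defaults are illustrative choices, not the paper's values.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
    carrier = np.cos(2 * np.pi * x_t / lam + psi)
    return envelope * carrier

# Simple-cell RFs at the three preferred orientations used later in the experiments.
simple_cell_rfs = {deg: gabor_rf(theta=np.deg2rad(deg)) for deg in (60, 120, 180)}
```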

The Analysis of Visual Information Changes from V1 to V2
The change in visual information from V1 to V2 has long puzzled neuroscientists. In other words, there is no available method to quantitatively analyze the visual information changes from V1 to V2 from the perspective of either neurobiological experiments or computational models, which hinders a fuller grasp of the mechanisms of visual information processing. The literature suggests that the edge detection channel of the visual system, i.e., the functional channel of retina-LGN-V1, has the characteristic of one-to-one neural mapping (Zhong & Wang, 2020). The mapping mechanism from the retina to V1 is closely related to the convolution calculation, which partly causes the significant dynamic degradation. The EDMRV1 model was established based on the pathway of photoreceptor-ganglion cell-LGN-V1; its simulation results fit well with the experimental data provided by Anderson et al. (2005) and clearly explained the dynamic degradation phenomenon, as shown in Figure 6.

Since visual information is processed by the RFs of simple cells and complex cells in V1, the processed information is directly output to V2 cells. According to the neural mapping from V1 to V2, it can be argued that one RF of V2 is weighted by two RFs of V1 with the same or different preferred orientations (Hosoya & Hyvärinen, 2015). Hence, the visual information is in fact transmitted through this connection. Given this connection, we established a visual information detection model based on V2, which can predict and calculate the visual information changes in V2 through a quantitative analysis. As existing research has uncovered (Zhong & Wang, 2020), in the pathway of photoreceptor-ganglion cell-LGN-V1, the hierarchical transmission of visual information from low-order to high-order visual cortex follows the convolution calculation, which is also the main reason for the visual information changes from the retina to LGN to V1. The RFs of V2 are constructed by combining RFs of V1 (Minami & Naokazu, 2011), which accounts for the existence of convolution calculation in the neural mapping from V1 to V2. Therefore, it is reasonable and feasible to use the photoreceptor-ganglion cell-LGN-V1-V2 model to predict and calculate the visual information changes.
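As a rough illustration of this combination rule, the sketch below forms a candidate V2 RF as the weighted sum of two oriented V1 (Gabor) RFs, reusing the gabor_rf helper from the previous sketch; the equal weighting is our own simplifying assumption, not the weighting used in the paper's equations.

```python
import numpy as np

def v2_angle_rf(angle_size_deg, angle_orientation_deg, weight=0.5):
    """A V2 RF sketched as the weighted sum of two V1 (Gabor) RFs.

    One side of the angle is a Gabor RF at the base orientation, the other is
    a Gabor RF rotated by angle_size_deg. The equal 0.5/0.5 weighting is an
    assumption for illustration only. Uses gabor_rf() from the earlier sketch.
    """
    theta1 = np.deg2rad(angle_orientation_deg)
    theta2 = np.deg2rad(angle_orientation_deg + angle_size_deg)
    return weight * gabor_rf(theta=theta1) + (1.0 - weight) * gabor_rf(theta=theta2)
```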
where depression(i, j) represents the image edge decoding information after the LTD effect of STDP. Concerning Equations (1) and (2), … .
To this end, considering Figure 5, and in order to advance research on the changes of visual information from V1 to V2 and the mechanism of visual information processing in V2, we designed a 6-layer feedforward network model, which serves as a predictive model for visual information changes in V2 (PMVICV2).
In our proposed model, Layer 1 represents the photoreceptors of the retina. Information about the real world is transmitted to the photoreceptors after being refracted by the lens and is then converted into a bioelectric signal. At this point, the size of the entire image on the retina is denoted as A, which depends on the specific experimental subjects. It is assumed that A can be divided into M×N patches, and the image information on the photoreceptors is defined as I(i, j) (i = 1, 2, 3, ..., M; j = 1, 2, 3, ..., N).
In Layer 3, the visual information processed by the ganglion cells of Layer 2 is transmitted to the LGN. The RF of the LGN is divided into two antagonistic areas, whose structure and function are very similar to those of the ganglion cell (Gazzaniga et al., 2019). We therefore still use the DOG model for its representation. The visual information on the RFs of the LGN in Layer 3, mapped from Layer 2, is denoted I3(i, j), as given in Equation (9).
The simple-cell RFs of V1 are represented by the Gabor function of Equation (8), which is the product of a Gaussian function and a cosine function: λ is the wavelength, which directly affects the scale of the filter; θ is the direction of the filter; ψ is the phase shift of the tuning function; γ is the ratio of spatial vertical to horizontal; and σ is the variance of the Gaussian filter.
In Layer 6, the visual information on the RFs in V2 is the neural mapping from Layer 5, recorded as I6(i, j). Each RF in V2 is composed of two RFs in V1, whose preferred orientations may be the same or different, and is selective for the angle profile. When the visual information reaches Layer 5, that is, in the neural mapping from V1 to V2, I5(i, j) is equivalent to a series of stimuli acting on the different RFs in V2. Therefore, the RFs in V2 extract the corresponding features according to the different strengths of the stimuli. To this end, the model of Layer 6 is composed of pairs of V1 RFs with preferred directions. Such a combination forms an angle whose size ranges over [0, 360] degrees; each angle also has a direction, whose value likewise ranges over [0, 360] degrees.
The RFs in V2 have different preferred angles according to varying sizes and directions. In the current study, every 30° is used as a sample angle, giving 12 different angle sizes and directions. According to the first row of Figure 8, each angle has two sides: one is fixed and the other is rotated, and together they form angles of different sizes. In the first column, the size of the angle is fixed, and rotation forms angles with different directions. Each side of the angle is an RF in V1 with a specific orientation preference. Among the RFs in V2, the angles in the first column are 0°, as shown in (a) of Figure 8; the RFs in the second column have angles of 30°, as shown in (b) of Figure 8; the angles in the third column are 60°, as shown in (c) of Figure 8; and the fourth column shown has angles of 330°, as shown in (d) of Figure 8.
The angles of different sizes and directions in Layer 6 are defined as follows: 1) the angle size, anglesize, is given by Equation (12); 2) the angle orientation, angleorientation, is given by Equation (13). Under the effect of the visual information stimuli from Layer 5, Layer 6 performs convolution calculations with the RFs of varying angles and directions in V2 and finally obtains the RF responses in V2.
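A schematic version of this Layer-6 computation, sampling angle sizes and directions every 30° as in Figure 8 and convolving each sampled RF with the Layer-5 output, might look as follows; the winner-take-all readout across the 12×12 sampled RFs is our own simplification, not a rule stated in the paper.

```python
import numpy as np
from scipy.signal import convolve2d

def layer6_response(layer5_output, sample_step=30):
    """Convolve the Layer-5 output with every sampled V2 angle RF and keep the
    strongest response at each location (a simple winner-take-all readout,
    assumed here for illustration). Uses v2_angle_rf() from the sketch above.
    """
    responses = []
    for angle_size in range(0, 360, sample_step):        # 12 sampled angle sizes
        for angle_orient in range(0, 360, sample_step):  # 12 sampled directions
            rf = v2_angle_rf(angle_size, angle_orient)
            responses.append(convolve2d(layer5_output, rf, mode="same"))
    return np.max(np.stack(responses), axis=0)
```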

As shown in (a) of Figure 9, the picture of Lena (original image), whose resolution was 512×512, was used as the experimental object. The optical signal reached the photoreceptors, carrying visual information of 2.10×10^6 bits. Subsequently, the visual information was processed by cones and rods and then reached the ganglion cells for processing. Since the RFs of ganglion cells have antagonistic properties that are highly sensitive to changes between light and dark, the edge feature information of the image could be detected. Dynamic degradation occurred after the visual information was transmitted to the LGN for processing. Simple cells with different preferred orientations, defined as θ = 60°, 120°, 180°, actively responded to the image information and recognized the edge feature information at the specific orientations. The visual information in V1 was 1.34×10^3 bits, 1.07×10^3 bits, and 1.14×10^4 bits, respectively, which was about 6.39×10^-4 times, 5.11×10^-4 times, and 5.43×10^-3 times that of the photoreceptors. The visual information in V1 was obtained through the processing of the RFs of ganglion cells and those of the LGN. Finally, the visual information was transmitted from V1 to V2. The RFs in V2 respond strongly to the corresponding angles and directions, denoted as anglesize in Equation (14), and can thereby identify the image feature information. The processed image is shown in (a) of Figure 9, and the visual information in V2 is shown in Table 1. The comparison of visual information between the photoreceptors and V2 is shown as follows.
From the above analysis, taking the Lena image as the experimental object, we have indicated the changes of visual information from the retina to V1 and V2, as shown in (b) of Figure 9. It can be seen that the average visual information of the photoreceptors was 2.10×10^6 bits, the average value of V1 was 4.60×10^3 bits, and the average value of V2 was 4.26×10^2 bits. These values demonstrate that the visual information degrades significantly from the photoreceptors to V1: the visual information of V1 was 2.20×10^-3 times that of the photoreceptors. Nevertheless, during the processing from V1 to V2, dynamic degradation still existed but was slight: the visual information of V2 was 9.25×10^-2 times that of V1. For more details, the figure has been zoomed in for clarity.
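The paper reports the visual information at each stage in bits, with the measure defined by the model equations that are not reproduced in this section. As a simplified stand-in for readers who want something runnable, one could estimate a stage's information content from the Shannon entropy of its activity histogram, as in the sketch below; this proxy is our own assumption and is not the paper's measure.

```python
import numpy as np

def information_bits(stage_output, n_bins=256):
    """A simplified proxy for the 'visual information' of a stage's output:
    Shannon entropy (bits per active pixel) of the activity histogram,
    multiplied by the number of non-zero pixels. This is an illustrative
    assumption, not the measure defined in the paper's model equations.
    """
    active = stage_output[stage_output != 0]
    if active.size == 0:
        return 0.0
    hist, _ = np.histogram(active, bins=n_bins)
    p = hist[hist > 0] / active.size
    entropy_per_pixel = -np.sum(p * np.log2(p))
    return float(entropy_per_pixel * active.size)
```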

As shown in (a) of Figure 10, the Manhattan image (original image), whose resolution was 1023×674, was used as the experimental object. The photoreceptors received the optical signal, carrying visual information of 5.52×10^6 bits. Afterward, the visual information was processed by cones and rods and then passed to the ganglion cells for processing. As the RFs of ganglion cells have antagonistic properties, the edge features of the image could be detected. Dynamic degradation occurred after the visual information was transmitted to the LGN for processing. Simple cells with different preferred orientations, defined as θ = 60°, 120°, 180°, actively responded to the image information and recognized the edge feature information at the specific orientations. The visual information in V1 was 3.52×10^3 bits, 3.36×10^3 bits, and 6.62×10^4 bits, respectively, which was about 6.39×10^-4 times, 6.09×10^-4 times, and 1.20×10^-2 times that of the photoreceptors. The visual information in V1 was obtained through the processing of the RFs of ganglion cells and those of the LGN. Finally, the visual information was transmitted from V1 to V2. The RFs in V2 responded strongly to the corresponding angles and directions, denoted as anglesize in Equation (14), and could thereby identify the image feature information. The processed image is shown in (a) of Figure 10, and the visual information in V2 is shown in Table 3. The comparison of visual information between the photoreceptors and V2 is shown as follows.
From the above analysis, taking the image of the island of Manhattan as the experimental object, we have indicated the visual information changes from the retina to V1 and V2, as shown in (b) of Figure 10. It can be seen that the average visual information of the photoreceptors was 5.52×10^6 bits, the average value of V1 was 2.43×10^4 bits, and the average value of V2 was 1.65×10^3 bits. These values demonstrate that the visual information degraded significantly from the photoreceptors to V1: the visual information in V1 was 4.41×10^-3 times that in the photoreceptors. Nevertheless, during the processing from V1 to V2, dynamic degradation still existed but was slight: the visual information in V2 was 6.77×10^-2 times that in V1.
A third experimental image was processed in the same way. … The RFs in V2 could identify the image feature information, as shown in Equation (14), and the processed image is shown in (a) of Figure 11. The visual information in V2 is shown in Table 5, and the comparison of visual information between the photoreceptors and V2 is shown as follows. The visual information degrades significantly from the photoreceptors to V1: the visual information in V1 was 9.17×10^-4 times that in the photoreceptors. Nevertheless, during the processing from V1 to V2, the dynamic degradation was slight: the visual information in V2 was 3.74×10^-1 times that of V1. For more details, the figure has been zoomed in for clarity.

As shown in (a) of Figure 12, the Mount Fuji image (original image), whose resolution was 3840×2160, was used as the experimental object. The photoreceptors received the optical signal, carrying visual information of 6.64×10^7 bits. Subsequently, the visual information was processed by cones and rods and then passed to the ganglion cells, where the edge features of the image could be detected. Dynamic degradation occurred after the visual information was transmitted to the LGN. Simple cells with different preferred orientations were again defined as θ = 60°, 120°, 180°. The visual information in V1 was 22.4 bits, 26.9 bits, and 5.83×10^2 bits, respectively, which was about 3.38×10^-7 times, 4.05×10^-7 times, and 8.79×10^-6 times that of the photoreceptors. The visual information in V1 was obtained through the processing of the RFs of ganglion cells and those of the LGN. Finally, the visual information was transmitted from V1 to V2. The RFs in V2 responded strongly to the corresponding angles and directions, denoted as anglesize in Equation (14), and could thereby identify the image feature information. The processed image is displayed in (a) of Figure 12, the visual information in V2 is shown in Table 7, and the comparison of visual information between the photoreceptors and V2 is shown in Table 8.
From the above analysis, drawing on the image of Mount Fuji, we have indicated the visual information changes from the retina to V1 and V2, as shown in (b) of Figure 12. It can be seen that the average visual information of the photoreceptors was 6.64×10^7 bits, the average value of V1 was 2.11×10^2 bits, and the average value of V2 was 7.51×10^2 bits. These values demonstrate that the visual information degraded significantly from the photoreceptors to V1: the visual information in V1 was 3.18×10^-6 times that in the photoreceptors. Nevertheless, during the processing from V1 to V2, the visual information did not degrade; the visual information in V2 was 3.57 times that of V1.

Results and Analyses
Based on the above experimental images, the visual information of the photoreceptors in the PMVICV2 model was 2.10×10^6 bits, 5.52×10^6 bits, 1.24×10^7 bits, and 6.64×10^7 bits, respectively, with an average value of 2.16×10^7 bits, as shown in (a) of Figure 13.
After the visual information had been transmitted to the RFs of the ganglion cells, the LGN, and V1, the data were calculated as 4.60×10^3 bits, 2.43×10^4 bits, 1.14×10^4 bits, and 2.11×10^2 bits, respectively, with an average value of 1.01×10^4 bits, as shown in (b) of Figure 13.
Ultimately, the processed visual information was transmitted from V1 to V2, where its values were 4.26×10^2 bits, 1.65×10^3 bits, 4.26×10^3 bits, and 7.51×10^2 bits, respectively, with an average value of 1.77×10^3 bits. From Figure 13, the visual information changes of the PMVICV2 model in these four scenarios can be obtained, as shown in (d) of Figure 13 and Table 9. The visual information transmitted to V2 was 8.19×10^-5 times that of the photoreceptors and 0.18 times that of V1. Despite the different test images, there were no significant differences across the experimental results. It can be concluded that the significant dynamic degradation exists from the photoreceptors to V1 along the pathway of photoreceptor-ganglion cell-LGN-V1-V2; in the subsequent transmission from V1 to V2, there is only a slight dynamic degradation. Taking the analyses together, the significant dynamic degradation exists in the pathway of photoreceptor-ganglion cell-LGN-V1, in which the substantial differences between light and dark are retained by the convolution calculation, yielding the edge signal of the image. In the visual information processing of the V1-V2 pathway, although the RFs in V2 respond strongly to corners, they do not further extract the image features, which accounts in part for the small dynamic degradation.
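The averages and ratios quoted above follow directly from the per-image values reported in the text; a short arithmetic check is given below.

```python
import numpy as np

# Visual information (bits) reported in the text for the four test images.
photoreceptors = np.array([2.10e6, 5.52e6, 1.24e7, 6.64e7])
v1 = np.array([4.60e3, 2.43e4, 1.14e4, 2.11e2])
v2 = np.array([4.26e2, 1.65e3, 4.26e3, 7.51e2])

print(photoreceptors.mean())              # ~2.16e7 bits
print(v1.mean())                          # ~1.01e4 bits
print(v2.mean())                          # ~1.77e3 bits
print(v2.mean() / photoreceptors.mean())  # ~8.2e-5 (reported as 8.19e-5)
print(v2.mean() / v1.mean())              # ~0.18
```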

Taking energy metabolism into account, the brain's capacity to transmit visual information fully into the visual cortex is limited, leading inevitably to visual information degradation. How, then, can the brain perceive the environment efficiently? Chumbley and Friston contend that surprise, captured by prediction error (defined as the difference between observed and expected quantities), drives learning (Chumbley et al., 2014; Friston, 2010). Our previous research shows that one reason for degradation, which is related to prediction error, is that retina-LGN-V1 performs a convolution calculation, which acts to extract the pivotal visual information and ignore the unnecessary, thereby effectively saving brain power consumption. These findings serve as a further elaboration of the "prediction error" proposed by Friston. Building on this discovery, we were driven to further explore the visual information degradation or changes in V1-V2. As a result, in undertaking this study, we sought to shed light on the mechanism by which visual information is mapped from V1 to V2. By establishing an original PMVICV2 model and conducting a quantitative analysis, the study reaches four major conclusions, stated as follows:

1) A quantitative description of visual information degradation in V1-V2
According to the results of the PMVICV2 model, we obtain Table 9, which shows that the visual information in V2 is 8.19×10^-5 times that of the photoreceptors and 0.18 times that of V1. This yields an exact quantitative interpretation of the visual information dynamic degradation in V2 by developing and experimenting with a new computational model. In doing so, it complements previous research in which the neuroscientific study of dynamic degradation focused chiefly on V1, and it promotes a more accurate and specific understanding of the way visual information is encoded and managed in V2.

2) Strong response to "corner" information, but only slight degradation

While moving from low-order to high-order visual signal processing, the visual information degrades significantly along the pathway of photoreceptor-ganglion cell-LGN-V1 (Raichle, 2010; Zhong & Wang, 2020). However, according to (d) of Figure 13 and Table 9, dynamic degradation is scarcely observed during the mapping from V1 to V2. Whereas the RFs in V2 exhibit a strong response to "corner" information (Hosoya & Hyvärinen, 2015), they do not further extract the image feature information. This demonstrates that a significant amount of dynamic degradation occurs only along the pathway of photoreceptor-ganglion cell-LGN-V1, leaving limited visual information in V1 for the RFs in V2 to encode. This is a new discovery that has not been noticed before.

3) Convolution calculation in V1-V2
During visual information processing (LeCun et al., 2015), convolution calculation is found along the pathway of photoreceptor-ganglion cell-LGN-V1 (Zhong & Wang, 2020). Moreover, in the anatomical architecture between V1 and V2, one RF of V2 is weighted by two RFs of V1 (Hosoya & Hyvärinen, 2015), which suggests that convolution calculation also exists in V1-V2.

4) The STDP rule enables a more effective response to "corner" information

As mentioned in connection with Fig. 7, the STDP rule intensifies the edges of the image and moderates the non-edge regions. Therefore, the RFs of V2 can effectively respond to and encode "corner" information about the real world, coping with the scarcity of visual information mapped from V1.

Despite the quantitative calculation and interpretation of the visual information changes in V1-V2, the study also has limitations. Structurally, we did not take all the details of retina-LGN-V1-V2 into account, because the human visual system is complicated (see Figure 4) and its visual information processing mechanisms have not been clearly uncovered (Raichle, 2010; Zhong & Wang, 2020). Therefore, we concentrated on the basic contour features, such as edges and corners, which are considered highly relevant to the degradation. Furthermore, we did not consider top-down predictions, since the novel visual information of the real world mapped from the retina to V2, which involves degradation, is a bottom-up transmission. According to Chumbley and Friston (Chumbley et al., 2014; Friston, 2010), bottom-up inputs generate prediction errors, which originate from the novel visual information and are linked to degradation.