
Top-Down Feedback in an HMAX-Like Cortical Model of Object Perception Based on Hierarchical Bayesian Networks and Belief Propagation

Figure 6

Prototype weight matrices and CPTs between C1 and S2 nodes in the toy example.

Let $P(C1_i = c \mid S2 = s)$ be the CPT entry between C1 node $C1_i$ and the S2 node, for C1 state $c$ and S2 state $s$. The left column shows the HMAX-like prototype weights, where an individual table is learned for each of the S2 states (prototypes) $s$ as a function of the afferent C1 nodes $C1_i$ and the C1 states $c$. However, the CPTs of a Bayesian network are defined as a function of the child states $c$ and the parent states $s$, for each of the child nodes $C1_i$. Therefore, once the weight matrices are generated for each S2 state, they must be converted to the corresponding CPTs of each C1 node (right column). To conform to the rules of probability, each column of the CPT, i.e., the distribution over the child-node states, is sum-normalized to one (empty columns are converted to equiprobable distributions). Importantly, S2 prototypes are actually learned as a function of C1 groups, such that the weights of C1 states belonging to the same group are equal, which helps to approximate the invariance operation (see text for details). The feature/prototype symbols associated with each state are included in the figure.
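The weight-to-CPT conversion described above can be sketched in a few lines of NumPy. The array layout here (one weight matrix per S2 prototype, indexed by C1 node and C1 state) is an assumption for illustration, not the paper's actual data structure:

```python
import numpy as np

def weights_to_cpts(weights):
    """Convert per-prototype weight matrices into per-child-node CPTs.

    weights: array of shape (n_s2_states, n_c1_nodes, n_c1_states),
             one HMAX-like weight matrix per S2 prototype
             (hypothetical layout, for illustration only).
    Returns: array of shape (n_c1_nodes, n_c1_states, n_s2_states):
             for each C1 child node, a table indexed by
             (child state, parent state), with each column
             (distribution over child states) summing to one.
    """
    n_s2, n_nodes, n_c1 = weights.shape
    # Reorder axes: group tables by child node instead of by prototype.
    cpts = np.transpose(weights, (1, 2, 0)).astype(float)
    for i in range(n_nodes):
        # Sum over child states for each parent state (per column).
        col_sums = cpts[i].sum(axis=0)
        empty = col_sums == 0
        # Empty columns become equiprobable distributions.
        cpts[i][:, empty] = 1.0 / n_c1
        # Remaining columns are sum-normalized to one.
        cpts[i][:, ~empty] /= col_sums[~empty]
    return cpts
```

For a single C1 node with three states and two prototypes, a column of weights `[2, 1, 1]` would normalize to `[0.5, 0.25, 0.25]`, while an all-zero column would become the uniform distribution `[1/3, 1/3, 1/3]`.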


doi: https://doi.org/10.1371/journal.pone.0048216.g006