Flexible connectivity under physiological constraints

Physiological constraints define allowed configurations of synaptic weights for neural circuits. How this affects circuit function remains little understood. We examine the hypothesis that neural circuits may be structured to make constraints flexible: to allow many configurations of synaptic weights. The size of these allowed weight spaces depends on the number of inputs to a neuron: its connectivity degree. We predict degree distributions that optimize simple constraints on a neuron’s total synaptic weight. We also find the degrees of connectivity that maximize the number of allowed synaptic weight configurations. To test these predictions, we examine reconstructions of the mushroom bodies from the first instar larva and the adult Drosophila melanogaster. Overall, flexibility under a homeostatically fixed total synaptic weight describes Kenyon cell connectivity better than other models, suggesting a principle shaping the apparently random structure of Kenyon cell wiring.


Introduction
Learning in neural networks is the process of finding connection strengths that minimize a cost function. In biological neural networks, that cost function is often unknown. Neuronal connectivity, however, is also homeostatically regulated and constrained by physiological limits. The total strength of synaptic connections between two neurons is limited by the amount of receptor and neurotransmitter available and by the size of the synapse (Kasai, Matsuzaki, Noguchi, Yasumatsu, & Nakahara, 2003). Pyramidal neurons of the cortex and hippocampus undergo synaptic scaling, regulating their total synaptic input strengths (Turrigiano, 2008).
Constraints define spaces of allowed synaptic weight configurations. If a constraint is tight, its solution space is small (Fig. 1a); if it is loose, its solution space is large. A small solution space has less potential overlap with other subspaces of synaptic weights, potentially restricting that neuron's ability to learn a computation. Loosening the constraint, however, requires using more physiological resources. We examine how circuits can be structured to make a constraint's solution space large without changing the available resources.
We begin by defining the different constraints on a neuron's total synaptic weight that we consider, and exploring the geometry of their solution spaces. In each case, we characterize the maximally flexible connectivity configurations: those that maximize the size of the constraint's solution space. We then leverage these to define maximum entropy models for neural connectivity, and derive the Laplace approximations for these models' posterior likelihoods. Finally, we apply these models to recently characterized connectomes of a learning and memory center of Drosophila melanogaster (Eichler et al., 2017; Takemura et al., 2017), asking which constraints best explain the degree distributions of neurons at different developmental stages.

Measuring constraint flexibility
We begin with a simple example where a neuron has N units of synaptic weight, of size ∆J, available. These could correspond, for example, to individual receptors or vesicles. It can assign these synaptic weight units to its K partners (presynaptic partners as in Fig. 1b for receptors, or postsynaptic partners for vesicles). We can count how many ways this constraint can be satisfied. If not all synaptic weight units have to be assigned, we can add one "partner" corresponding to the unused synaptic weight pool. For N = 4 and two connections, each receiving at least one unit, there are six possible configurations; with three connections, there are four. Thus under the constraint of N = 4, two connections are more flexible than three, since there are more ways to satisfy the constraint. Since the constraint treats all synaptic partners symmetrically, the number of possible configurations is given by the binomial coefficient "N choose K". For different numbers of synaptic weight units N, there are different maximally flexible configurations K (Fig. 1c).
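This counting argument can be checked by brute-force enumeration. The sketch below assumes, as in the example above, that each of the K connections receives at least one of the N discrete units and that leftover units sit in the unused pool (the function name is ours):

```python
from itertools import product
from math import comb

def count_configurations(N, K):
    """Count assignments of N discrete weight units to K connections,
    each connection receiving >= 1 unit, with leftover units going to
    an unused pool."""
    return sum(
        1
        for units in product(range(1, N + 1), repeat=K)
        if sum(units) <= N  # the remaining N - sum(units) units are unused
    )

# With N = 4 units: two connections admit six configurations,
# three connections only four, so K = 2 is the more flexible degree.
print(count_configurations(4, 2))  # 6
print(count_configurations(4, 3))  # 4

# The count matches the binomial coefficient "N choose K".
assert all(
    count_configurations(N, K) == comb(N, K)
    for N in range(1, 7)
    for K in range(1, N + 1)
)
```

The equality with "N choose K" follows from a stars-and-bars argument: summing compositions of s into K positive parts over s = K, …, N telescopes to C(N, K).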

Geometry of constraint spaces
We consider a simple model of synaptic interactions where a neuron has K synaptic partners, either all presynaptic or all postsynaptic; K is the synaptic degree. The total strength of projection i is J_i. Synaptic weights can be made up of many small units of strength, corresponding to (for example) individual receptors or vesicles, so it is reasonable to model individual synaptic weights as continuous variables. A particular configuration of synaptic weights then occupies a point in the K-dimensional synaptic weight space. A constraint specifies a space of allowed synaptic weights. We will examine the hypothesis that synaptic connectivity is structured to make these spaces large: to make the constraint flexible.

Flexibility under bounded net synaptic weight
We begin by considering an upper bound on the net synaptic weight, so that

∑_{i=1}^{K} J_i ≤ J̄ K^p,  with J_i ≥ 0.  (1)

This bound could be interpreted in multiple ways: for example, as a presynaptic limit due to the number of vesicles currently available before more are manufactured, or a postsynaptic limit due to the amount of dendritic tree available for synaptic inputs. The scaling of the summed synaptic weight as K^p corresponds to scaling the individual synaptic weights as K^{p−1}. If every synaptic weight has an order-1/K strength, the sum of the synaptic weights is order 1 and p = 0. If every synaptic weight has an order-1 strength, the summed weight is order K and p = 1. If synaptic weights have balanced (1/√K) scaling, the summed weight has p = 1/2. With K synaptic partners, the constraint (1) defines a volume in K dimensions. For two synaptic partners, this is the portion of the plane bounded by the axes and a line that stretches between them (Fig. 1d). In general, for K synaptic partners the synaptic weights live in the volume under a (K − 1)-dimensional simplex in K-dimensional synaptic weight space.
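The volume under the (K − 1)-dimensional simplex with summed weight J̄ K^p is (J̄ K^p)^K / K!, which can be evaluated directly. A minimal numeric sketch (the value J̄ = 10.5 is illustrative, not from the data):

```python
from math import lgamma, log

def log_volume(K, Jbar, p=0.0):
    """Log of the volume under the simplex sum_i J_i <= Jbar * K**p:
    V_K = (Jbar * K**p)**K / K!  (log taken for numerical stability)."""
    return K * log(Jbar * K**p) - lgamma(K + 1)

# With p = 0 the bound is independent of K, yet the volume still peaks at
# an intermediate degree: Jbar**K grows while 1/K! shrinks with K.
Jbar = 10.5
vols = {K: log_volume(K, Jbar) for K in range(1, 40)}
K_star = max(vols, key=vols.get)
print(K_star)  # the optimal degree under the bounded net weight
```

The discrete optimum satisfies V_{K+1}/V_K = J̄/(K + 1), so the volume grows until K + 1 exceeds J̄, consistent with an optimal degree roughly linear in J̄.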
We call the synaptic degree that maximizes the volume under the simplex the optimal degree, K* (Fig. 1e). We computed this optimal degree; to leading order, it is approximately linearly related to the total synaptic weight:

K* ≈ e^p J̄ (K*)^p,  i.e.,  K* = (e^p J̄)^{1/(1−p)} for p < 1.  (2)

We can see from Eq. 2 that if p = 1, we obtain the condition J̄ = 1/e (to leading order). So if p = 1 and J̄ = 1/e, the volume is approximately independent of K. If p = 1 and J̄ < 1/e, the volume decreases monotonically with K, and vice versa.
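The leading-order condition for the optimal degree follows in a few lines from Stirling's approximation applied to the simplex volume $V_K = (\bar{J} K^p)^K / K!$ (a sketch of the reasoning; subleading terms are dropped):

```latex
\ln V_K = K \ln\!\left(\bar{J} K^{p}\right) - \ln K!
        \approx K \ln \bar{J} + p K \ln K - K \ln K + K ,
\qquad
\frac{\partial \ln V_K}{\partial K}
  = \ln \bar{J} + p + (p - 1) \ln K = 0
\;\Longrightarrow\;
K^{*} = \left(e^{p}\,\bar{J}\right)^{1/(1-p)} .
```

For $p = 1$ the $\ln K$ terms cancel and the stationarity condition reduces to $\bar{J} = 1/e$, matching the condition stated above.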

Flexibility under fixed net synaptic weights
We next consider a simple model of homeostatic synaptic scaling, where the net synaptic weight is fixed at J̄ K^p:

∑_{i=1}^{K} J_i = J̄ K^p,  with J_i ≥ 0.  (3)

The fixed net weight constraint defines the same simplices as the bounded net weight, but requires synaptic weights to live on their surfaces instead of in the volumes under them. The size of the space of allowed weights under Eq. 3 is given by the surface area of the (K − 1)-simplex, A. The surface area of the simplex increases with the net synaptic weight, but for J̄ ≥ 1 it has a maximum at positive K (Fig. 1h). The optimal degrees obey

K* ≈ e^p J̄ (K*)^p,  (4)

revealing an approximately linear relationship, similar to that under the constraint on the maximum possible synaptic weight (Eq. 2). As for the bounded net weight, we can see from Eq. 4 that if p = 1, we obtain the condition J̄ = 1/e (to leading order). So if p = 1 and J̄ = 1/e, the surface area is approximately independent of K. If p = 1 and J̄ < 1/e, the area decreases monotonically with K, and vice versa.
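The surface area of the simplex {J ≥ 0, ∑_i J_i = J̄ K^p} is A_K = √K (J̄ K^p)^{K−1} / (K − 1)!, so the optimal degree under the fixed net weight can be found numerically and compared with the bounded-weight case. A sketch with the same illustrative J̄ = 10.5 as before (not a fitted value):

```python
from math import lgamma, log

def log_area(K, Jbar, p=0.0):
    """Log surface area of the (K-1)-simplex sum_i J_i = Jbar * K**p:
    A_K = sqrt(K) * (Jbar * K**p)**(K - 1) / (K - 1)!."""
    W = Jbar * K**p
    return 0.5 * log(K) + (K - 1) * log(W) - lgamma(K)

Jbar = 10.5
areas = {K: log_area(K, Jbar) for K in range(1, 40)}
K_star = max(areas, key=areas.get)
print(K_star)  # optimal degree under the fixed-net-weight constraint
```

Here A_{K+1}/A_K = √((K+1)/K) · J̄ / K, so the optimum again sits near K ≈ J̄, one degree above the bounded-net-weight optimum for these parameters, in line with the similarity of Eqs. 2 and 4.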

Flexible connectivity in Drosophila mushroom bodies
Testing these predictions requires joint measurements of neurons' total synaptic weight and number of synaptic partners. For this purpose we turned to dense electron microscopy (EM) reconstructions with synaptic resolution. Dense EM reconstructions have been published for the mushroom bodies of the first instar larval D. melanogaster (Eichler et al., 2017) and the alpha lobe of the adult (Takemura et al., 2017). The reconstruction of the first instar larva includes all pre- and postsynaptic partners of the Kenyon cells (KCs). The adult's alpha lobe is defined by axons of KCs; that reconstruction does not include inputs onto the alpha lobe KCs' dendrites.
Since the fly EM data include synapse counts but not synaptic weights, we assumed that the number of synapses, S, is proportional to the physiologically constrained synaptic weight:

J̄ K^p = α S,

where α is the constant relating the synapse count to the physiological weight. This assumes that the scaling with K^p arises from something other than the number of synapses; if it arose from the number of synapses directly, this would correspond to taking p = 0.
We began by examining neurons' input connectivity. Across all postsynaptic neurons in the first instar larval mushroom body, we saw an approximately linear relation between K and S (r(K,S) = 0.82). We next examined whether this relationship depended on the postsynaptic cell type. The larval KCs had r(K,S) = 0.92 (Fig. 2a), while the remaining postsynaptic cell types showed a weaker but still positive correlation (Fig. 2c, green; r(K,S) = 0.74). We found similar results for KC output connectivity (Fig. 2c, d).

Degree distributions under connectivity constraints
We next examined the hypothesis that neural degree distributions under constraints reflect the size of those constraints' solution spaces. These are, in a certain sense, maximum entropy degree distributions. For given K, the maximum entropy distribution on the synaptic weight configurations J under a constraint is the uniform distribution over its solution space, S_K. For the fixed net weight (Eq. 3), for example, S_K is the surface of the (K − 1) regular simplex with vertices at J̄ K^p on each coordinate axis (Fig. 1g). For K from 1 to some finite maximum K_max, the maximum entropy distribution for synaptic weight configurations J is likewise the uniform distribution over the union of the solution spaces, so that the probability of a neuron having K inputs is proportional to the size of the solution space at that K. These provide predictions for neural degree distributions, conditioned on each neuron's total synaptic weight:

p(K) = |S_K| / ∑_{K′=1}^{K_max} |S_{K′}|.

We computed the Laplace approximation for the posterior odds of the model corresponding to each of the constraints discussed above, marginalizing out the scale factor α relating the anatomical measurements to the net synaptic weights. We also examined a simple random wiring model where p_B(K | K > 0) ∼ Binomial(N, q). As for the other models, we found the Laplace approximation for this zero-truncated binomial model, computed the posterior odds by marginalizing out the connection probability q, and used anatomical measurements of the number of potential partners, N.
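As an illustration, the degree distribution implied by the fixed net weight can be tabulated by normalizing the simplex surface areas, alongside the zero-truncated binomial alternative. This is only a sketch with made-up parameters (J̄ = 10.5, N = 30, q = 0.2); the actual analysis marginalizes α and q via Laplace approximations rather than fixing them:

```python
from math import comb, exp, lgamma, log

def log_area(K, Jbar, p=0.0):
    """Log surface area of the simplex sum_i J_i = Jbar * K**p."""
    return 0.5 * log(K) + (K - 1) * log(Jbar * K**p) - lgamma(K)

def simplex_degree_pmf(Jbar, K_max, p=0.0):
    """p(K) proportional to the solution-space size |S_K|, K = 1..K_max."""
    w = [exp(log_area(K, Jbar, p)) for K in range(1, K_max + 1)]
    Z = sum(w)
    return [wi / Z for wi in w]

def zt_binomial_pmf(N, q, K_max):
    """Zero-truncated binomial: p(K | K > 0) for K = 1..K_max."""
    Z = 1.0 - (1.0 - q) ** N
    return [comb(N, K) * q**K * (1 - q) ** (N - K) / Z
            for K in range(1, K_max + 1)]

pmf_flex = simplex_degree_pmf(Jbar=10.5, K_max=30)
pmf_binom = zt_binomial_pmf(N=30, q=0.2, K_max=30)
print(1 + pmf_flex.index(max(pmf_flex)))   # modal degree, flexibility model
print(1 + pmf_binom.index(max(pmf_binom)))  # modal degree, binomial model
```

Both models can then be scored against observed degrees by their (marginal) likelihoods, which is the comparison carried out below.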
In the first instar larval Drosophila, the model of a fixed net weight predicted the connectivity of immature (young and multi-claw) KCs best, while single-claw KCs were better described by the binomial model (Fig. 3b, c). In the adult, all KC types were best described by the fixed net weight model (Fig. 3d-f). We found similar results for KC outputs. Together, these results suggest that immature KC connectivity is governed by a homeostatically constrained net synaptic weight, while mature KC degree distributions are better described by binomial wiring.

Discussion
In simple neural network models, constraints define a space of possible synaptic weights (the "constraint space"; Fig. 1a, gray regions), while the synaptic weights that perform a particular computation define another space (the "computation space"; Fig. 1a, colored regions). The intersection of the constraint and computation spaces defines the allowed synaptic weights (the "solution space"). The size of the solution space is thus bounded by the sizes of the constraint space and the computation space. A large solution space could be beneficial for maintaining performance when synaptic weights fluctuate, or for enabling fast learning. While the computation space is circuit-dependent and may change with experience and environment, the constraint space reflects the underlying neurophysiology. Maximizing its size may prevent the constraints from interfering with task-specific learning. We examined how flexibility under constraints can shape connectivity. We focused on constraints on summed synaptic weights, inspired by homeostatic regulation of total synaptic weights (Turrigiano, 2008) and resource limits (Kasai et al., 2003). In the models discussed here, metabolic constraints or wiring constraints could be thought of as fixing the total available synaptic weight (the parameters J̄ and W̄). We elucidated the optimally flexible numbers of synaptic partners for a neuron, showed how these depend on the scaling of synaptic weights, and derived the degree distributions that maximize the entropy of synaptic weight configurations under these constraints.
We tested these predictions in larval Drosophila mushroom bodies, using a complete wiring diagram for the larval Kenyon cells (KCs) (Eichler et al., 2017) and a reconstruction of axonal connectivity in the adult mushroom body alpha lobe (Takemura et al., 2017). Random wiring of KCs has been discussed as a mechanism to facilitate associative learning in mushroom bodies, based on the apparent lack of structure in glomerular projections to KCs. This notion of random wiring raises the question: what principle governs the distribution of that random process? We examined one hypothesis for that principle: flexibility under constraints. We found that immature KCs' degree distributions are best predicted by a homeostatically fixed total synaptic weight, while the most mature KCs best match a binomial wiring model.