On a previously unpublished work with Ralph Kenna

This is part of an unpublished work in collaboration with Ralph Kenna. It was probably not mature enough when it was submitted more than ten years ago and was rejected by the editors, but some of its ideas were later partially published in subsequent works. I believe that this "draft" reveals a lot about Ralph's enthusiasm and audacity and deserves to be published now, maybe as a part of his legacy.

way of working, I believe. I sometimes held him back in his speculations and, if I may allow myself a wink, even when he mixed Star Trek with our work! The cover letter had been written by Ralph. His style can be recognized in his enthusiasm, with phrases like "fundamental issues of statistical physics in high dimensions", "Our new theory incorporates or subsumes existing theories" or "shift in the paradigm":

The material in our paper connects with fundamental issues of statistical physics in high dimensions. We identify subtle, hidden flaws, even at the level of mean-field theory, which we believe have profound consequences. Because of the subtle nature of the issues we address, we offer here a very brief contextualization. (. . . ) The statistical mechanics of condensed-matter, high-dimensional physics have been puzzling for a long time. It has been summed up by Kurt Binder et al. as "a rather disappointing state of affairs": "the existing theories are not so good". In addition, systems with free boundary conditions have been described by Peter Young et al. as particularly "poorly understood". Since they are experimentally accessible, the understanding of such systems impacts our understanding of finite-size materials with surfaces, comprising particles with long-range interactions. Our new theory incorporates or subsumes the existing theories and is compatible with a vast amount of analytical and numerical evidence. However, our theory goes beyond this and introduces a powerful new principle that predicts and explains important features missed by current theories. For free boundaries, we show why 40 years of literature on the subject is based on an incorrect assumption.
We believe there is no current empirical or analytic evidence pointing against our new theory, and it represents a shift in the paradigm of finite-size scaling and Landau mean-field theory in high dimensions. For these reasons, we would be grateful if you would consider the paper.
It is well known that standard finite-size scaling (FSS) is valid below the upper critical dimension d_uc, where hyperscaling holds and where the correlation length is comparable to the linear extent of a system exhibiting a continuous phase transition [1]. Above d_uc, standard hyperscaling breaks down, and the bulk critical behaviour there is described by mean-field exponents [2]. FSS was analyzed for d ⩾ d_uc in Euclidean φ^4 theory and the Ising model with periodic boundary conditions (PBC) [3-22], following large-n analytical studies by Brézin [2]. The breakdown of standard FSS and hyperscaling is attributed to Fisher's dangerous irrelevant variables [23] in the renormalization-group (RG) framework [3-6, 10, 11]. To repair FSS above d_uc, Binder introduced another length scale that emerges from the RG treatment, dubbed the thermodynamic length [3, 8, 16]. Below d_uc, it coincides with the correlation length, while above d_uc, it scales as a power of the system size. Extensive comparisons with numerical simulations have been performed, and FSS above the upper critical dimension with PBCs is now considered to be well understood [12-15, 24, 25]. It is therefore perhaps surprising that, although the role of dangerous irrelevant variables in the breakdown of hyperscaling in high dimensions is well developed [3-22], FSS above the upper critical dimension was summarized by Binder et al.
as "a rather disappointing state of affairs - although for the φ^4 theory in d = 5 dimensions, all exponents are known, including those of the corrections to scaling, and in principle very complete analytical calculations are possible, the existing theories clearly are not so good" [17, 25]. In contrast to the PBC case, there have been relatively few studies of high-dimensional systems with free boundary conditions (FBC) [7, 22], which are complicated by additional scaling fields associated with boundaries in the RG picture [24, 26]. The situation with FBCs was recently described in reference [21] as "poorly understood". Here, we present an alternative, corroborative theory and show that high-dimensional Ginzburg-Landau-Wilson physics is less "trivial" than hitherto realized. Although delivering some correct scaling and FSS behaviour above d_uc, it is indeed lacking in several respects, most seriously for FBCs at the pseudocritical point. A comprehensive picture emerges by simply separating notions of underlying space and emergent space. The corresponding two notions of the correlation function, one of which has a stretched-exponential form, are then associated with two separate anomalous dimensions and two associated fluctuation-response relations, only one of which is captured by mean-field theory. At the critical dimension, there are analogous pairs of logarithmic terms and scaling relations [27, 28]. We demonstrate that existing analytic and numerical-based understandings of FBCs at pseudocriticality are unfounded, and we postulate that FSS there is more similar to the PBC case than hitherto realized. Hyperscaling may then be extended beyond the upper critical dimension universally. After presenting numerical evidence supportive of our claims, we propose an information-entropic foundation which lies behind, and greatly simplifies, the dangerous-irrelevant-variables picture and delivers a new prediction for logarithmic corrections at d = d_uc.
Having separated the notions of emergent length, volume, and dimensionality from those of the original system in equation (1.3), it is sensible to distinguish the associated spaces. We refer to the original d-dimensional system as Q-space, the Q-lattice or the Q-continuum, and the emergent, d_uc-dimensional one as P-space, hence the notation ℓ_P.
Fisher's fluctuation-response relation is associated with the correlation function G(x), which is also dimension dependent and needs reexamination to account for whether the distance is measured in the d-dimensional Q-space or the emergent d_uc-dimensional P-space. When these are not distinguished (below the upper critical dimension), the standard derivation is to integrate G over space, giving equation (1.7). Replacing G(x) by unity in equation (1.7) gives the volume of space, ∫_0^L x^{d−1} dx ∼ L^d. This is correct below the upper critical dimension, where ξ_L ∼ L. Above d = d_uc, however, bounding the integral by ℓ_P would erroneously give the volume of space to be ℓ_P^d ∼ L^{d²/d_uc}. The problem is due to the failure to separate the notions of distance in Q-space and P-space. If the integral is bound by ℓ_P, one must integrate over the d_uc dimensions of P-space. Alternatively, if the integral is d-dimensional, it must be bound by ℓ_Q ≡ ℓ_P^{1/ϙ} ∼ L. With p referring to P-space distance, the former approach gives equations (1.10) and (1.11), where G_P(p) is the P-space correlation function (1.9), away from criticality and at it. This identifies the usual Fisher law as a P-space relation. With γ/ν = 2, we see that mean-field theory only captures the anomalous dimension of emergent P-space: η_P = 0. The P-distance p is related to the displacement x in Q-space via p ∼ x^ϙ. In terms of this underlying scale, the counterparts of equations (1.9) and (1.10) are given by the stretched-exponential form (1.12) and its critical-point analogue, respectively. Integrating this function over the d dimensions of Q-space yields the correct FSS formula (1.4). We identify the anomalous dimension in fundamental Q-space through equation (1.13), which is the fluctuation-response relation there.
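The change of variable can be made concrete. Below is a minimal numerical sketch (my own, with illustrative values of ϙ and of the correlation length; the variable names are not from the manuscript) showing that an ordinary exponential decay in P-space, read through the map p = x^ϙ, is exactly a stretched exponential in Q-space:

```python
import numpy as np

# Illustrative check (not the paper's data): with the map p = x**koppa, an
# ordinary exponential decay exp(-p/xi_P) in emergent P-space is exactly a
# stretched exponential exp(-(x/xi_Q)**koppa) in underlying Q-space, with
# xi_Q = xi_P**(1/koppa).
koppa = 5.0 / 4.0              # koppa = d/d_uc for the 5D Ising model
xi_P = 10.0                    # illustrative P-space correlation length
xi_Q = xi_P ** (1.0 / koppa)   # corresponding Q-space length

x = np.linspace(0.1, 50.0, 500)            # Q-space distances
p = x ** koppa                             # their P-space counterparts

decay_P = np.exp(-p / xi_P)                # exponential decay in P-space
decay_Q = np.exp(-(x / xi_Q) ** koppa)     # stretched exponential in Q-space

assert np.allclose(decay_P, decay_Q)       # the two descriptions coincide
```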
Since γ and ν in the extended version of hyperscaling (1.6) are universal, ϙ may be expected to be a new critical exponent. However, conventional wisdom has it that ϙ = 1 for FBCs in particular, and that χ_L cannot diverge more rapidly than L^{γ/ν} = L^2 above d_uc [4, 7, 10, 11, 21, 30]. In this sense, convention holds that FSS is not quite universal after all. Indeed, the failure to properly separate ℓ_P from ℓ_Q ∼ L in equation (1.7), and in conventional FSS, would instead lead to the Gaussian form χ_L ∼ L^2.
To test for universality, we simulated the Ising model in 5D using lattices with both PBCs and FBCs. In figure 1(a), the FSS of the susceptibility peak is plotted against L in 5D, and the form (1.4) is verified, in agreement with references [2-5, 8-22, 24, 25] in the PBC case.
For FBCs, the proportion of sites in the bulk of a size-L lattice is (1 − 2/L)^d, the remaining ones being in lower-dimensional manifolds. Thus, the L = 4 to L = 20, 5D lattices of the recent numerical work [22] have only between 3% and 59% of sites in the bulk and do not represent five-dimensionality. The resulting conclusion that χ_L ∼ L^2 is therefore not a 5D one. On the theoretical side, it is reported in reference [7] that equation (1.4) "cannot hold for free boundary conditions because it lies above a strict upper bound" (namely L^{γ/ν} = L^2) established in reference [30] (see also reference [31]). However, that upper bound was determined at T_c, not at T_L, and using the fluctuation-response relation (1.11) instead of (1.13). Moreover, the Fourier analysis of reference [7], which yielded the same conclusions, neglects the quartic part of the Ginzburg-Landau-Wilson φ^4 action because of an expectation that the Gaussian result should "apply as an exact leading order result in more than four dimensions". It was shown in references [12-14] that, for PBC, the anomalous FSS behaviour (1.4) is obtained from precisely this term. These observations, together with the conclusion that ϙ = 1 delivers leading logarithms in d = 6 from equation (1.5), indicate that the FSS paradigm drawn from numerical and analytical conclusions in over 40 years of literature on the susceptibility of FBC lattices above d_uc is unsupported, at least at pseudocriticality [7, 22, 30, 31]. We simulated 5D FBC Ising lattices up to L = 40. To probe the d-dimensionality of the system, we remove the contributions of sites close to the surfaces, resulting in lattice cores of size L/2. The plot demonstrates that this procedure changes the apparent effective critical exponent from 2 (coming from the 4D surface sites and erroneously hinting at Gaussian behaviour) to 5/2. (In fact, in figure 1(a), the susceptibility peaks for FBC lattice cores are multiplied by 5 to bring them within the range of the plot, but this does not affect the scaling.) This supports the universality of ϙ and modified FSS at pseudocriticality.
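The bulk-site count is elementary to reproduce. The following sketch (mine; the function name is hypothetical) evaluates the quoted proportion (1 − 2/L)^d for the 5D lattices in question:

```python
# The bulk-site count quoted above: with free boundaries, a site of an L**d
# lattice is in the bulk if none of its d coordinates touches a boundary,
# a proportion (1 - 2/L)**d of all sites.
d = 5

def bulk_fraction(L, d=d):
    """Proportion of bulk sites of a d-dimensional lattice of linear size L."""
    return (1.0 - 2.0 / L) ** d

print(f"L = 4 : {bulk_fraction(4):.1%}")    # 3.1%, the lower figure quoted
print(f"L = 20: {bulk_fraction(20):.1%}")   # 59.0%, the upper figure quoted
print(f"L = 40: {bulk_fraction(40):.1%}")   # 77.4% for our largest lattices
```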
To numerically extract the correlation length, we use the second moment of the correlation function. The FSS of the correlation-length peak and the pseudocritical point are given in figure 1(b) for PBCs and verify the 5D scaling forms ξ_L ∼ L^ϙ ∼ L^{5/4} and t_L ∼ L^{−5/2} over the Gaussian forms ξ_L ∼ L and t_L ∼ L^{−2}. The validity of modified FSS above d_uc = 4 is further confirmed for the Lee-Yang edge, which scales as h_1(L) ∼ L^{−15/4} instead of h_1(L) ∼ L^{−3}. The pseudocritical field scales as h_L ∼ L^{−15/4} instead of as L^{−3}. (The Lee-Yang zeros are multiplied by 10^4 and the pseudocritical field by 200 to bring them within the range of the plot.) The form G(x) ∼ x^{−5/2} from equation (1.12) is also verified, and the Gaussian form G(x) ∼ x^{−2} is dispelled (see also references [12-14]). This represents a fundamental change in our understanding of the behaviour of the correlation function in high dimensions: the hitherto widely accepted mean-field value of zero for the anomalous dimension is an effective one, holding when the distance is measured in emergent P-space only. In terms of the more fundamental distance scale of Q-space, the anomalous dimension is negative and given by equation (1.13).
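For the reader who wants the 5D numbers at a glance, the following sketch (my own bookkeeping, assuming the Landau mean-field exponents ν = 1/2, γ = 1, Δ = β + γ = 3/2 and the Q-FSS rule of replacing L by L^ϙ) reproduces the exponents quoted above:

```python
from fractions import Fraction as F

# My own bookkeeping of the exponents quoted above: start from the Landau
# (mean-field) exponents and apply the Q-FSS rule L -> L**koppa, koppa = d/d_uc.
d, d_uc = 5, 4
nu, gamma, Delta = F(1, 2), F(1), F(3, 2)   # Delta = beta + gamma = 3/2

koppa = F(d, d_uc)                 # = 5/4

xi_exp = koppa                     # xi_L ~ L**(5/4)
chi_exp = koppa * gamma / nu       # chi_L ~ L**(5/2) = L**(d/2)
shift_exp = koppa / nu             # t_L ~ L**(-5/2)
lee_yang_exp = koppa * Delta / nu  # h_1(L) ~ L**(-15/4)

print(xi_exp, chi_exp, shift_exp, lee_yang_exp)   # 5/4 5/2 5/2 15/4
# The Gaussian alternatives follow from koppa -> 1: L, L**2, L**-2, L**-3.
```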
To summarize so far, above the upper critical dimension, a second notion of distance, ℓ_P ∼ L^ϙ, emerges alongside ℓ_Q ∼ L [2, 3, 5, 8, 21, 24]. Each length scale has an associated dimensionality: d for the fundamental Q-space and d_uc for the emergent P-space. Correlation decay is governed by the stretched exponential (1.12) in Q-space and the more usual form (1.9) in P-space. Defining ν_P = ϙν_Q = ν restores the new hyperscaling and fluctuation-response relations (1.6) and (1.13) to the standard forms νd = 2 − α and ν(2 − η) = γ, provided (ν, d, η) = (ν_Q, d, η_Q) in Q-space and (ν, d, η) = (ν_P, d_uc, η_P) in P-space. The thermodynamic limit is then characterized by ξ_∞ ∼ |t|^{−ν_Q} in fundamental space and ξ_∞ ∼ |t|^{−ν_P} in emergent space, and only the latter is captured by mean-field theory. The thermodynamic functions, such as the free energy, magnetization and specific heat, and the associated exponents α, β, γ, δ are the same in Q- and P-space since, rather than notions of length or dimensionality, they involve sums over the lattice or integrals over the continuum.
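These identifications can be checked mechanically. The sketch below (my own, under the reading ν_Q = ν/ϙ, ν_P = ν and η_P = 0 for the 5D Ising model) verifies that the standard forms of hyperscaling and Fisher's relation then hold in both spaces, with the negative Q-space anomalous dimension η_Q = −1/2:

```python
from fractions import Fraction as F

# Mechanical check (mine) of the two-space bookkeeping for the 5D Ising model,
# under the identifications nu_Q = nu/koppa, nu_P = nu and eta_P = 0; eta_Q
# then follows from Fisher's law gamma = nu_Q*(2 - eta_Q).
d, d_uc = 5, 4
alpha, gamma, nu = F(0), F(1), F(1, 2)   # Landau exponents
koppa = F(d, d_uc)

nu_Q, nu_P = nu / koppa, nu              # 2/5 and 1/2
eta_P = F(0)
eta_Q = 2 - gamma / nu_Q                 # = -1/2 in d = 5

assert nu_Q * d == 2 - alpha             # standard hyperscaling in Q-space
assert nu_P * d_uc == 2 - alpha          # standard hyperscaling in P-space
assert nu_P * (2 - eta_P) == gamma       # standard Fisher law in P-space
print(eta_Q)                             # -1/2
```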
We now turn to logarithmic corrections at the upper critical dimension itself and propose a deeper reason behind the heuristic arguments for equation (1.2) given earlier. FSS at the upper critical dimension exhibits multiplicative logarithmic corrections of the form ξ_L ∼ L (ln L)^ϙ̂ (1.14) [2, 21, 27, 28], which is not captured by the heuristic argument associated with equation (1.3). An empirical observation, which to our knowledge has gone unnoticed in the literature, is that the relationship ϙ̂ = 1/d_uc (1.15) appears to hold for models at their upper critical dimensions: d_uc = 4 for the Ising and O(n) models for all values of n; d_uc = 6 for m-component spin glasses for all m, as well as for percolation and the Yang-Lee edge problem; and d_uc = 2σ for models with long-range interactions characterised by σ [28, 29]. Models below the upper critical dimension which exhibit logarithmic corrections, on the other hand, have ϙ̂ = 0 (e.g., the 4-state Potts and Ising models in d = 2, the random-bond or random-site Ising model in d = 2, and the n-color Ashkin-Teller model) [28, 29].
A transformation of the form p = x^ϙ from Q-space to P-space is not bijective and is therefore associated with a loss of information, which should be taken into account. According to the Landauer principle, any such logically irreversible transfer of information must be accompanied by an entropy increase [37]. Recent experiments verify that information is indeed physical and that the conversion of information to energy is possible [38, 39]. In statistical mechanics, information is measured through the Shannon or Hartley entropy, which is an extensive concept provided the system under consideration has short-range interactions. The Hartley information content of Q-space is S = ln Ω, where Ω = (L^d)! is the number of ways to place L^d spins on the Q-lattice. Assuming the information loss is proportional to the amount of information available, Landauer's theory predicts that the energy gain in mapping from Q- to P-space is U_Q − U_P ∝ ln((L^d)!) ≈ L^d ln L^d, by Stirling's approximation. Here, U_Q and U_P are the internal energies in Q- and P-space, respectively, and, dominated by the regular part, are constant to leading order. Above the upper critical dimension, we have seen that we must account for long-range correlations, since ϙ > 1 and η_Q < 0 in Q-space. We therefore promote the Hartley information entropy to Tsallis's q-logarithmic form, so that equation (1.16) holds for two constants. The q-logarithm, defined as ln_q x = (x^{q−1} − 1)/(q − 1), becomes the usual logarithm in the limit q → 1. The identification q = 1/ϙ yields ξ_L ∼ L^ϙ (1 + c L^{−(d−d_uc)}), recovering equation (1.3) as the dominant behaviour when d > d_uc, together with the same corrections as those from the RG treatment of the susceptibility in references [12-14]. If d < d_uc, equation (1.16) delivers ξ_L ∼ L with corrections which are swamped by the Wegner irrelevant-field terms [41-43]. In addition, the ϙ = 1 limit delivers equation (1.15), explaining the observations made earlier for various models. This also explains why ϙ̂ = 0 in models away from their upper critical dimensions [28, 29].
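Since the q-logarithm does the work in this argument, a small sketch (mine) of its definition and of its q → 1 limit may help:

```python
import math

# The q-logarithm used above, ln_q(x) = (x**(q-1) - 1)/(q - 1), reduces to
# the ordinary logarithm as q -> 1.
def ln_q(x, q):
    if q == 1.0:
        return math.log(x)
    return (x ** (q - 1.0) - 1.0) / (q - 1.0)

x = 50.0
for q in (1.1, 1.01, 1.001):
    print(q, ln_q(x, q))                 # approaches ln 50 = 3.912...

assert abs(ln_q(x, 1.0 + 1e-6) - math.log(x)) < 1e-4
```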

The problem of "Q-entropy"
More seriously, both Referees A and C were puzzled by our derivation of equation (1.16) and our usage of the term "Q-entropy", and Referee B was concerned that our paper lacked a "solid foundation in terms of a viable theory". Also, Referee C asked: "Surely if I live in a seven-dimensional world, I would be able to look out in all seven dimensions, and not think that I was living in only four dimensions?"
As we can see, the major criticism concerns the very subject of the article, and although more details were available in the Supplemental Material that we had provided in support of our submission, this did not convince the Referees. We gave up and did not submit the same paper anywhere else, since we eventually agreed that our hypothesis was not solid enough.
Nevertheless, I think it is useful now to provide an answer, using other insights from unpublished material. We based the central hypothesis on a nonextensive entropic ansatz. We were aware that nonextensive entropy, while enjoying the support of some famous names (Gell-Mann [49]), is not universally supported (Nauenberg [50]). Nonextensive entropy is a generalization of the traditional Boltzmann-Gibbs entropy used in statistical mechanics. It has found applications in various fields, including complex systems, systems with long-range interactions which depart from extensivity, and non-equilibrium statistical mechanics (for example, the volume from the Santa Fe Institute series on the Sciences of Complexity edited by Gell-Mann and Tsallis [49] is devoted to applications in complex systems). On the other hand, nonextensive entropy has faced several critiques. One of the main critiques revolves around its theoretical foundations and its departure from the standard principles of statistical mechanics.
Despite these caveats, since this single hypothesis is capable of explaining (i) the coincidence of the finite-size correlation length with the system size below the upper critical dimension, (ii) the power-law scaling p = x^ϙ above d_uc, and (iii) the multiplicative logarithmic corrections to the FSS of ξ in a multitude of models at d_uc and their absence in models away from d_uc, we felt it constituted progress.
Let us look at the problem with the elaboration of equation (1.16); but before that, a few comments are in order to clarify the meaning of the different lengths introduced in the original submission. The quantity called the correlation length, ξ(t) ∼ |t|^{−ν}, is, as usual, the characteristic length appearing in the correlation functions, e.g., that which measures the typical exponential decay of the correlations when approaching criticality. What we denote as ℓ_Q ∼ L and call the characteristic length is just the physical length associated with the lattice (Q-space), and ℓ_P ∼ L^ϙ, called emergent, is the finite-size critical correlation length (in P-space). The thermodynamic length was introduced in reference [3] to account for the fact that ξ does not govern FSS in P-space. In terms of our notations, it is ℓ(t) ∼ |t|^{−ν/ϙ}, and its FSS counterpart was called the coherence length in reference [24].
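As a consistency check (my own, under this reading of the exponents), one can verify that the thermodynamic length just defined reaches the system size precisely at the shifted pseudocritical point, as FSS requires:

```python
from fractions import Fraction as F

# Consistency check (mine, for the 5D Ising model) of the lengths defined
# above: at the shifted point t_L ~ L**(-koppa/nu), the thermodynamic length
# l(t) ~ |t|**(-nu/koppa) reaches the system size L, as FSS requires.
d, d_uc = 5, 4
nu = F(1, 2)
koppa = F(d, d_uc)

shift_exp = koppa / nu               # t_L ~ L**(-5/2)
thermo_exp = nu / koppa              # l(t) ~ |t|**(-2/5)
assert shift_exp * thermo_exp == 1   # hence l(t_L) ~ L
```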
In terms of the finite-size correlation length, the volume of the system in emergent space is V_P = (ξ_L/ξ_0)^{d_uc}, while that of the physical space is V_Q = (L/a_0)^d. Here, ξ_0 and a_0 are lattice units.
We will show that the hypothesis is valid to a first approximation (i.e., to leading order) for d_uc = 4, where the symbol "∼" indicates asymptotic proportionality (similar scaling behaviour, at least to leading order). Equation (2.17) does not constitute a full description of correlation-volume scaling because it does not capture the logarithmic corrections in equation (1.14) which are present when d = 4. Moreover, the map associated with equation (2.17) is not bijective. In particular, if ϙ > 1, a given set of coordinates in some reference frame of the P-space is not sufficient to reconstruct a corresponding event in the Q-space unambiguously. The loss of information in going from Q-space to P-space is associated with the fact that long-range correlations are introduced in Q-space automatically. The reason is that even short-range correlations between contiguous sites in P-space translate to interactions between several non-adjacent sites in Q-space. This is consistent with the notion that P-space emerges from dangerous irrelevant variables (DIVs) and the renormalization group (RG) (see reference [51]), but not the other way around: the RG is a semi-group, not a group. These arguments mean that the information-cardinality of Q-space is greater than that of P-space: a Q-microstate contains more information than a P-microstate.
Therefore, the RG DIV transformation from Q-space to P-space is logically irreversible in the sense that the former cannot be uniquely determined from the latter. According to the Landauer principle, information is physical, the deletion of information is a dissipative process accompanied by an entropy increase, and the conversion of information to energy is possible [52-54]. The information content is measured through the Shannon entropy, S = −Σ_i p_i ln p_i, where p_i is the probability of the particular message i from the "message space", which contains N messages [55]. If the microstates have equal probability, the information entropy becomes the Hartley entropy [56], S = ln N (2.19). In the case of thermodynamic probabilities, the connection between the information entropy and the thermodynamic Gibbs entropy comes by identifying the (reduced) Gibbs entropy with the amount of Shannon information needed to uniquely determine the microscopic state of the system from its macroscopic description.
Here, we write the total energy content of Q- and P-space as E_Q = u_Q V_Q and E_P = u_P V_P in terms of the energy densities u_Q and u_P in each space, and these energies are expected to differ owing to the difference in their information content. In the discrete case, N = V_Q!/(V_Q − V_P)! distinct pieces of information (messages) are required to reconstruct Q-space from P-space. The corresponding Hartley entropy is S_{Q→P} = ln[V_Q!/(V_Q − V_P)!]. The Landauer energy gain associated with this loss of information in going from Q-space to P-space is k_B T S_{Q→P}. Conservation of energy/information then demands that E_Q − E_P = k_B T S_{Q→P}. To deal with the d > 4 case, we have to take the character of the interactions in the two spaces into account. We started with a model in Q-space with only nearest-neighbour interactions. Long-range correlations emerge in Q-space (η_Q < 0 and ϙ > 1) but not in P-space (η_P = 0). Therefore, the long-range correlations are a property of the Q-lattice, not a property of the spin interactions, which remain short-range. To take this into account, we promote the measure of information content to Tsallis's q-entropy [49]. A minimalist way to do this is to promote the entropic logarithm in equation (2.25) to a q-logarithm, ln_q x = (x^{q−1} − 1)/(q − 1) (2.28), which delivers the ordinary logarithm when q → 1. Equation (2.27) is then equation (1.16). The value of q is chosen to recover equation (1.2) to leading order, as well as the main corrections to scaling calculated by Luijten and Blöte [57]. In the case of the susceptibility, they obtained the leading behaviour χ ∼ L^{d/2} together with corrections to scaling in the reduced temperature t. This is exactly what we get using q = 1/ϙ.
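The entropy count can be illustrated numerically. The sketch below (my own; the volumes are hypothetical, and the leading estimate V_P ln V_Q is my simplification of the Stirling argument, valid for V_P ≪ V_Q) compares the exact log-Gamma evaluation of ln N with that estimate:

```python
from math import lgamma, log

# Illustrative numbers (mine): the Hartley entropy of the message count
# N = V_Q!/(V_Q - V_P)!, evaluated exactly through the log-Gamma function,
# against the leading Stirling-type estimate V_P*ln(V_Q), valid for V_P << V_Q.
V_Q, V_P = 10**5, 10**3          # hypothetical Q- and P-space volumes

S_exact = lgamma(V_Q + 1) - lgamma(V_Q - V_P + 1)   # ln N
S_leading = V_P * log(V_Q)

print(S_exact, S_leading)        # agree to better than 0.1 per cent here
```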

The problem with FBCs
Referee B did not feel "that a coherent and viable FSS theory has been developed for free boundary conditions". Referee C suggested checking the rounding exponent alongside the other exponents. We had done this: for PBCs, we confirmed the Referee's expectation that both the rounding and shift exponents are d/2, and for FBCs, we confirmed the Referee's expectation that the shift exponent is 2 and the rounding exponent is d/2. The Referee's motivation here was that, if the shift is bigger than the rounding, T_c will be too far from T_L to "feel" the peak, which will be outside the FSS zone. This would explain why, even if χ(T_L) (at the pseudocritical point) scales as L^{d/2} for FBCs, χ(T_c) (at the critical point) may scale differently, like L^2, and may rescue Gaussian FSS at T_c for FBCs.
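Referee C's argument can be put in numbers. The sketch below (mine, with illustrative sizes) shows how many rounding widths separate the pseudocritical peak from T_c for FBCs in 5D:

```python
# Referee C's shift-versus-rounding argument in numbers (sizes are
# illustrative): with FBCs in d = 5 the shift scales as L**-2 while the
# rounding window scales as L**-(d/2) = L**-2.5, so the peak recedes from
# T_c by ever more rounding widths as L grows.
d = 5
for L in (10, 20, 40):
    shift = L ** -2.0                  # distance of T_L from T_c
    rounding = L ** -(d / 2.0)         # width of the FSS zone
    print(L, shift / rounding)         # grows like L**(d/2 - 2) = L**0.5
```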
We had checked this explicitly too, and we found that in 5D, χ(T_c) (at the critical point with FBCs) scales as L^{1.71(2)} using all sites of the Q-lattices, or as L^{1.92(2)} using only sites at the core of the lattices. This was close to, but not quite, the Gaussian behaviour expected by the Referee. Therefore, we had not been able to confirm the Referee's expectation that it is Gaussian at criticality. Specifically, we were able to claim in our revised version that: (i) Q-space systems with PBCs and d > d_uc are not Gaussian either at criticality or at pseudocriticality; instead, ϙ = d/d_uc governs modified FSS. This was in agreement with Luijten and Blöte's numerics in the PBC case at criticality [58]. (ii) Q-space d > d_uc systems with FBCs are not Gaussian at the pseudocritical point; instead, ϙ = d/d_uc governs modified FSS there. Thus, pseudocriticality with FBCs is essentially the same as pseudocriticality with PBCs and obeys modified FSS. (iii) Q-space d > d_uc systems with FBCs may or may not be Gaussian at the critical point, but with the currently available lattices, our numerics were not supportive of Gaussian behaviour there. The question went on attracting the attention of the community later [59, 60], and even recently, Ralph was still involved in trying to solve this difficult question [61]. (iv) All d > d_uc systems in P-space are Gaussian, as per Landau theory.

Since 2012
Since 2012, a few papers that I consider important in the field have been published, clarifying or completing some of the perspectives presented in the present paper.
As far as I know, Wittmann and Young [62] were the first to study the FSS of Fourier modes in high-dimensional Ising models. They have shown that the modified FSS that allows for the violation of hyperscaling due to a dangerous irrelevant variable applies only to the k = 0 fluctuations, and that "standard" FSS applies to the k ≠ 0 fluctuations. Nevertheless, the denomination "standard" was referring to Landau scaling, while an elucidation in references [60, 63] has shown that this should be understood as Gaussian-fixed-point scaling.
The universality class of the percolation problem above its upper critical dimension d_uc = 6 was studied from the perspective of the role of the dangerous irrelevant variable in systems with PBCs and FBCs in reference [64].
After the spectacular work of Luijten [58] in 1997, the case of the logarithmic corrections of the Ising model exactly at d_uc = 4 was recently revisited very convincingly by Lv et al. [65].
The work of Langheld et al. [66] has extended Q-FSS to quantum systems in a remarkable manner. The study was performed for the finite d-dimensional transverse-field Ising model (TFIM), a quantum system of Pauli spin operators with nearest-neighbour interactions ∼ J σ_i^z σ_j^z (the long-range interacting case was also considered), with an additional transverse-field term ∼ h σ_i^x. The quantum time evolution plays the role of an additional space dimension for the classical analogue L^d × ∞ of dimension D = d + 1. For the d-dimensional quantum system, the upper critical dimension is thus d_uc = 3 = D_uc − 1, and the exponent ϙ takes the value ϙ = d/3 = (D − 1)/(D_uc − 1), which differs from the value for a D-dimensional classical analogue, for which one would have ϙ_cl = D/4. The hyperscaling relation also needs to be rewritten: d-dimensional classical systems below their upper critical dimension have 2 − α = νd; d-dimensional quantum systems have 2 − α = ν(d + z) in terms of the anisotropy exponent z (which distinguishes the time direction from the spatial ones); and above their upper critical dimension, quantum systems have 2 − α = ν(d/ϙ + z). Random-field Ising models above their upper critical dimension (d_uc = 6), in d = 7, were studied very recently by Fytas et al. in reference [67], where the anomalous FSS of the correlation length ξ_L ∼ L^{7/6} was confirmed for the first time in a disordered system.
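The quantum bookkeeping can be checked in the same spirit. The sketch below (my own, assuming the mean-field values ν = 1/2, α = 0 and anisotropy exponent z = 1) verifies the rewritten hyperscaling relation and contrasts ϙ for the quantum model with its classical analogue:

```python
from fractions import Fraction as F

# Checking the quantum bookkeeping (my own, assuming mean-field nu = 1/2,
# alpha = 0 and anisotropy exponent z = 1): the d-dimensional TFIM maps to a
# (d+1)-dimensional classical system, so d_uc = 3 for the quantum model and
# koppa = d/3 rather than the classical d/4.
d = 5                        # spatial dimension of the quantum TFIM
z = 1                        # anisotropy exponent
alpha, nu = F(0), F(1, 2)    # mean-field exponents above d_uc

D = d + 1                    # dimension of the classical analogue
koppa_quantum = F(d, 3)      # = (D - 1)/(D_uc - 1) with D_uc = 4
koppa_classical = F(D, 4)    # what a D-dimensional classical system gives

# Hyperscaling above the upper critical dimension, 2 - alpha = nu*(d/koppa + z):
assert nu * (d / koppa_quantum + z) == 2 - alpha
print(koppa_quantum, koppa_classical)   # 5/3 versus 3/2
```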

Some personal thoughts
A scientist leaves a mark through his scientific production and his articles, but also through the imprint he leaves on the people he has encountered, his students, and his colleagues. Ralph is certainly one of those people whose memory will stay with us for a long time. I think he would have been happy to have this article published, and I don't think I'm betraying his wishes by submitting it to Condensed Matter Physics.