First results from the LHC heavy ion program

The Large Hadron Collider (LHC) at CERN outside Geneva, Switzerland provides Pb + Pb beams at a nucleon-nucleon center-of-mass energy of 2.76 TeV, nearly 14 times higher than the energy available at the Relativistic Heavy Ion Collider (RHIC) at Brookhaven (200 GeV). The first LHC heavy ion run ended in December 2010, and several results are already available which give some indication of future directions in heavy ion physics. Results have been released both for bulk observables, the charged particle multiplicity and elliptic flow, and for 'hard probes' such as jets and J/ψ. The charged particle multiplicity near mid-rapidity shows no anomalous rise relative to lower energies, although an extrapolation to full phase space using extended longitudinal scaling may reveal the first known violation of the Landau-Fermi multiplicity formula. Elliptic flow is found to agree surprisingly well with lower-energy data when measured as a function of transverse momentum, in agreement with viscous hydrodynamic calculations that treat the matter at the LHC similarly to that found at RHIC. Despite these similarities to lower energies, the higher center-of-mass energy provides much higher rates of high-p_T 'hard' probes with which to study the medium microscopically. It makes it possible for the first time to study ultrahigh-energy jets, which show dramatic event-by-event asymmetries in their energies, possibly reflecting strong energy loss in the hot, dense medium. Measurements of the spectra of charged tracks within jets indicate a softening of the fragmentation of the lower-energy jet. J/ψ rates have also been measured, and are found to be suppressed at a similar level to lower-energy results. In total, the medium formed at the LHC appears to be qualitatively similar to that measured at lower energies. 
However, future measurements will certainly take even more advantage of the particular strengths of the LHC program, in particular the higher multiplicities and higher rates of large-momentum processes, so it is too early to draw strong conclusions.


The Large Hadron Collider heavy ion program and its experiments
The CERN Large Hadron Collider (LHC) has always planned to run part of the year with heavy ion collisions instead of protons. While the machine is limited to relatively low luminosities at present, due to the need for upgrades to the machine protection systems, it provides beam energies more than an order of magnitude above the previous leader in the field, the Relativistic Heavy Ion Collider (RHIC) at Brookhaven National Laboratory. The current center-of-mass energy is 2.76 TeV per nucleon-nucleon collision, which will eventually be increased to the maximum design energy of √s_NN = 5.5 TeV.
In November 2010, the machine adopted without modification the magnet lattice of the very successful 7 TeV proton-proton run, which had been running since March 2010. This led to a rapid setup time, with the first collisions occurring after only 3 days and collisions for physics after only 4 days. After a week of rapid luminosity development, with the number of bunches in the machine doubling and quadrupling daily, the machine began to reliably provide more than 120 colliding bunches in each ring, leading to a typical instantaneous luminosity of 2-3 × 10^25 cm^-2 s^-1 and ultimately integrated luminosities of more than 9 µb^-1 per experiment.
The three LHC experiments taking heavy ion data represent a major increase in capabilities over the existing experimental program at RHIC, which was itself a similar major advance over previous programs when it came online in 2000. The major differences between the RHIC and LHC programs are the increased rates of high-p_T processes, the increased longitudinal phase space and the much larger angular acceptance of the experiments at the LHC. Of course the higher energy should also lead to a higher initial energy density, and thus a higher final multiplicity, which characterizes the final state entropy (thought to be linearly related to the initial entropy produced by the thermalization process). The ALICE detector [1], shown in figure 1(a), is based around a 5 m diameter time projection chamber (TPC) which is optimized for tracking in an environment characterized by a high track density, particularly for measuring essentially all of the charged particle tracks produced down to very low transverse momentum. It is also optimized for particle identification in limited angular regions. The ATLAS detector [2], shown in figure 1(b), is a general purpose detector primarily intended for high-rate proton-proton measurements, but which is excellent for heavy ion measurements. In particular, it provides for jet measurements with hermetic, longitudinally segmented calorimetry out to |η| < 4.9, as well as precise tracking for charged particles and muons out to |η| < 2.5. The CMS detector [3], shown in figure 1(c), focuses on high-precision muon measurements, and has excellent jet capabilities, similar to ATLAS although with different technologies.

Charged-particle multiplicities
The charged-particle multiplicity is typically the first observable measured at new machines. The multiplicity of final state particles reflects all of the various degrees of freedom available to a system during its dynamical evolution. Thus, the functional dependence of multiplicity on other observables (e.g. temperature) was long expected to be sensitive to the phase structure of quantum chromodynamics (QCD), e.g. if it were to go through a first-order phase transition [4]. However, at present it is more generally believed that the multiplicity scales linearly with the number of partons liberated in the initial condition, which is preserved even through hydrodynamic evolution provided the shear viscosity to entropy ratio is not too large [5]. The ALICE result for the charged particle density at mid-rapidity was released during the first LHC heavy ion run, and reported a particle density of dN_ch/dη = 1584 ± 4(stat.) ± 76(syst.) for the 5% most central Pb + Pb events [6]. This corresponds to a particle density per participant pair of ρ_0 = dN_ch/dη/(N_part/2) = 8.4 ± 0.4. By now, all three experiments have released data on this quantity, and figure 2 (from [7]) shows these as well as a compilation of lower-energy data for context. The total multiplicity at mid-rapidity had been predicted by a wide variety of dynamical models and empirical scaling rules. Here several representative examples are discussed. Busza [8] advocated extrapolating the observed logarithmic rise in the charged-particle multiplicity (per participant pair), seen in heavy ion collisions already starting at Alternating Gradient Synchrotron (AGS) energies, all the way to LHC energies. This model is seen to fail by about 30%. A wide class of models related to high-density QCD, in particular the color glass condensate (e.g. [9]), predict a power-law dependence, which is broadly consistent with the data. 
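As a quick numerical check of the quoted particle density per participant pair, the arithmetic can be sketched as follows; the ⟨N_part⟩ value below is an assumed Glauber-model number for the 0-5% centrality bin, not one quoted in this text:

```python
# Mid-rapidity charged-particle density per participant pair,
# rho_0 = (dN_ch/deta) / (N_part / 2), for the 0-5% most central events.
DNDETA = 1584.0   # dN_ch/deta at mid-rapidity (ALICE, 0-5% central)
N_PART = 381.0    # assumed <N_part> for 0-5% Pb+Pb from a Glauber model

rho0 = DNDETA / (N_PART / 2.0)
print(f"rho_0 = {rho0:.1f}")   # consistent with the quoted 8.4 +- 0.4
```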
Finally, the Landau-Fermi model, which assumes that the system thermalizes rapidly in the Lorentz-contracted overlap volume of the two nuclei, predicts a characteristic s^{1/4}/√(log s) dependence on beam energy [10,11], which is found to be higher than the existing measurements by about 30%. While a power law of s^{0.15} is found to adequately describe the data starting at about √s_NN = 20 GeV, it systematically exceeds the lower-energy data, most likely reflecting the need to account for the net baryon number at mid-rapidity, which can be shown to suppress the overall multiplicity [12].

Figure 3. (top) Mid-rapidity multiplicity density from ALICE as a function of N_part [14], with RHIC data scaled up by a factor of 2.1 to match the data in the most central events. (bottom) Ratios of PHOBOS multiplicity data for Au + Au and Cu + Cu at different √s_NN as a function of N_part [15], showing that each ratio is essentially constant as a function of centrality.

In a subsequent paper, ALICE measured the centrality dependence of the mid-rapidity particle density [14] and compared it to a range of existing models. The dependence of the multiplicity near η = 0 on centrality, shown in the top panel of figure 3, shows large variations between different models (not shown). ALICE found good agreement only with a solution of the Balitsky-Kovchegov equations with running coupling [16], which are an implementation of nonlinear QCD evolution at high gluon density. However, the most surprising outcome of the ALICE measurement is the comparison of the centrality dependence data with the average of the RHIC Au + Au data at 200 GeV, shown in the top panel of figure 3. Just as was found with PHOBOS data [15,17] (shown in the bottom panel of the same figure) comparing the highest RHIC energy (200 GeV) with the lowest available at the time (19.6 GeV), the relative centrality dependence appears to have little dependence on beam energy: a 'factorization' of beam energy and centrality [15]. This is surprising behavior in the context of the 'two-component' model. In this model, the multiplicity scales as dN_ch/dη = n_pp ((1 − x) N_part/2 + x N_coll), with n_pp being the multiplicity in proton-proton collisions, N_part the number of participants (which controls soft particle production), N_coll the number of binary collisions (which induce hard processes) and x the parameter (ranging from 0 to 1) which interpolates between the two. If the N_coll term truly reflects hard or semi-hard processes, it should make a larger contribution to the charged particle production as the energy increases, i.e. x should increase with collision energy. 
That the data do not imply this to be the case, even with the large jump in beam energy, should raise serious doubts about the model's applicability. That said, there is still no widely accepted explanation for the factorization of the centrality dependence with energy. Some color glass condensate calculations (e.g. [9]) seem to incorporate such factorization naturally, but many calculations do not (e.g. [16]), with its appearance apparently arising fortuitously or by tuning to existing data.
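The two-component parameterization discussed above is simple enough to sketch directly; all input numbers here are illustrative placeholders, not fitted values:

```python
# Two-component model for the charged multiplicity (a standard
# parameterization, sketched here with illustrative numbers):
#   dN/deta = n_pp * ((1 - x) * N_part / 2 + x * N_coll)
def dndeta_two_component(n_pp, n_part, n_coll, x):
    """Soft production scales with participant pairs, hard with N_coll."""
    return n_pp * ((1.0 - x) * n_part / 2.0 + x * n_coll)

# Illustrative central Pb+Pb inputs (assumed, not measured or fitted):
print(dndeta_two_component(n_pp=4.4, n_part=381, n_coll=1500, x=0.1))
```

At x = 0 the multiplicity follows pure participant scaling, and at x = 1 pure binary-collision scaling; the factorization puzzle above is that the data show no sign of x growing with beam energy.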
One approach for understanding this phenomenon was outlined in a 2005 PHOBOS publication, presenting what at the time was the most comprehensive compilation of the energy dependence of pseudorapidity distributions [13]. In this paper, the phenomenon of 'extended longitudinal scaling' was observed for charged particle distributions from 19.6 to 200 GeV, i.e. the energy independence of dN_ch/dη′, where η′ = η − y_beam. It was also noted that the energy-independent shape was in fact centrality dependent, and varied systematically so as to maintain an approximately constant value of the integrated multiplicity per participant pair, N_ch/(N_part/2), as a function of centrality. This suggests that instead of asking why the centrality dependence of the particle density at mid-rapidity is energy independent, one should rather ask why the multiplicity per participant pair integrated over all rapidities is constant and why the shape of dN_ch/dη varies with centrality in a similar way at all energies.
It is also interesting to consider data from a smaller system, such as Cu + Cu. The shape of dN_ch/dη/(N_part/2) is found to resemble the Au + Au data much better when comparing at the same centrality (the same fraction of the total cross section) than when matching at the same value of N_part [19]. It is not obvious why this occurs, and it is especially not a natural conclusion from the two-component model, which has no predictive power at different rapidities. A proposal for this can be constructed using observations made in [20]. This work demonstrates that the pseudorapidity distribution in deuteron-gold collisions as a function of centrality can be constructed using the standard proton-proton rapidity distribution shifted from y = 0 by a quantity reflecting the center-of-mass rapidity of the colliding nucleons. This quantity is found to be simply y_cm = (1/2) ln(N_d/N_Au) (with the deuteron taken as the forward-going projectile), where N_Au and N_d are the number of participants in a particular centrality class from the gold and deuteron, respectively. Generalizing this to Au + Au and Pb + Pb requires a well-defined procedure to partition the nucleus-nucleus collision into 'tubes' of nucleons, with a radius assumed to be similar to that of a nucleon, in order to define similar longitudinal forward-backward asymmetries, as was done with d + Au. These would be prominent in non-central heavy ion collisions. Such a prescription might even be used in association with well-known Glauber models to explain the 'tilted source' found in heavy ion collisions at all energies, even at mid-rapidity [21]. However, such a procedure does not yet exist.
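The center-of-mass rapidity of the participating nucleons can be sketched as below; the sign convention (deuteron moving toward +z) and the example participant counts are illustrative assumptions:

```python
import math

# Effective center-of-mass rapidity of the participating nucleons in an
# asymmetric collision, with the lighter projectile moving toward +z.
# This is a sketch of the shift described above, not the analysis of [20].
def y_cm(n_forward, n_backward):
    """Rapidity of the c.m. of n_forward + n_backward participants."""
    return 0.5 * math.log(n_forward / n_backward)

# Example: a d+Au class with 2 deuteron and 14 Au participants (assumed)
print(y_cm(2, 14))   # negative: shifted toward the Au-going direction
```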
To avoid the complications of the centrality dependence of dN_ch/dη, one can also integrate the multiplicity over the full solid angle to estimate the total charged particle multiplicity per participant pair, N_ch/(N_part/2). As shown in [5], this quantity is found to be constant with centrality at RHIC energies, despite the observed centrality dependence of the pseudorapidity distribution. So far, the total charged particle multiplicity has been measured as a function of beam energy for (p)p + p, e+ + e−, p + A, d + Au, Pb + Pb, Au + Au and Cu + Cu. What has been striking is that all of the data follow the same curve for √s > 20 GeV, with the only nontrivial scaling being a factor of two in beam energy that has to be applied to proton-proton and deuteron-gold collisions in order to account for the leading particle effect. This is shown by a large set of pre-LHC data for N_ch and N_ch/(N_part/2) from a variety of colliding systems, compiled in the right panel of figure 4, adapted from [18].

Figure 4. The proton-proton data are also plotted at √s/2 to account for the leading particle effect (from [18], except for the ALICE interpolation from (c)). (b) The same data divided by a pQCD formula used to describe the e+ + e− data. (c) Interpolation of ALICE data at η = 0 to the PHOBOS data at lower energies, using a simple Woods-Saxon-type functional form, with a Jacobian near mid-rapidity.
It will require more experimental work to obtain a reliable estimate of N_ch integrated over the full solid angle from the LHC data. In particular, measurements at large η are needed. However, as an exercise one can use trends found in older data to try to interpolate into the currently unmeasured regions. The approach used here is based on 'extended longitudinal scaling', the invariance of charged particle yields observed in the rest frame of one of the projectiles (i.e. as a function of η′ = η − y_beam). This scaling has been found to apply in all hadronic systems at all energies, including p + p, e+ + e− and A + A (see e.g. [22]). In [18], it was found that both a double Gaussian and a Woods-Saxon distribution (modified by a simple Jacobian to convert from rapidity to pseudorapidity) fit the data equally well, particularly when the lower-energy data were used to extend into the far forward region. A similar combined Woods-Saxon function, adjusted to fit the ALICE and PHOBOS data, with the lower-energy data providing 'forward' points, gives a reasonable description of the existing data, as shown in the right panel of figure 4. The function was required to have a plateau out to |η| ∼ 2 and then to meet the PHOBOS data near η′ ∼ 0. It is not yet firmly established how far the pseudorapidity 'plateau' will actually extend at the LHC, and some have even predicted violations of extended longitudinal scaling [23]. However, taking this as a simple estimate and integrating this distribution over all pseudorapidities gives a first estimate of the total multiplicity of N_ch/(N_part/2) = 86. While it is difficult to assign a meaningful systematic uncertainty without data at high η, an overall uncertainty of 15-20% does not seem unreasonable given the similarity to distributions at lower energies. This value is shown in the context of older heavy ion data and elementary systems in figure 4. 
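The interpolation exercise described above can be sketched numerically: a Woods-Saxon (Fermi-function) shape in |η| with an assumed plateau height and edge, integrated over pseudorapidity. The parameters below are illustrative choices consistent with the numbers quoted in the text, not the actual fitted values:

```python
import math

# Sketch of the extrapolation: a Woods-Saxon-shaped dN/deta per
# participant pair, integrated over all eta to estimate the total
# multiplicity.  Plateau height and edge parameters are illustrative.
RHO0 = 8.4    # plateau height, dN/deta / (N_part/2) near eta = 0
ETA0 = 5.1    # half-width of the distribution (assumed)
A    = 0.6    # diffuseness of the falling edge (assumed)

def dndeta_per_pair(eta):
    """Woods-Saxon (Fermi-function) shape in |eta|."""
    return RHO0 / (1.0 + math.exp((abs(eta) - ETA0) / A))

# Trapezoidal integration over a range wide enough to contain the tails
steps, lo, hi = 4000, -12.0, 12.0
h = (hi - lo) / steps
total = sum(0.5 * h * (dndeta_per_pair(lo + i * h) + dndeta_per_pair(lo + (i + 1) * h))
            for i in range(steps))
print(f"estimated N_ch/(N_part/2) ~ {total:.0f}")
```

With these assumed parameters the integral lands in the mid-80s, in the neighborhood of the estimate quoted in the text.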
It falls well below the expectation from the old Landau-Fermi calculation, which would be √(2760/200) ≈ 3.7 times the RHIC value of N_ch/(N_part/2) = 29.4 ± 1.9, or approximately 109: about 30% above the estimate here, and the first time this expression has been wrong by such a large factor. However, the extracted total multiplicity is much closer to the value predicted by the formula used to fit multiplicities in e+ + e− annihilations to hadrons [18]. There is no obvious interpretation of this, even if the interpolation should prove robust as more data are added at higher pseudorapidities. The latter should appear shortly, as ATLAS and CMS have tracking coverage out to |η| = 2.5 and ALICE has multiplicity coverage out to η = 5.
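The naive Landau-Fermi extrapolation quoted above amounts to a one-line calculation:

```python
import math

# Naive Landau-Fermi extrapolation from RHIC to the LHC, as quoted above:
# total multiplicity per participant pair scaling like s^(1/4),
# i.e. like the square root of the beam energy sqrt(s_NN).
SQRT_S_LHC, SQRT_S_RHIC = 2760.0, 200.0   # GeV
N_RHIC = 29.4                              # N_ch/(N_part/2) at 200 GeV

factor = math.sqrt(SQRT_S_LHC / SQRT_S_RHIC)   # ~3.7
prediction = factor * N_RHIC                   # ~109
print(f"factor = {factor:.2f}, predicted N_ch/(N_part/2) = {prediction:.0f}")
```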

Elliptic flow
The current status of 'elliptic flow' is described in detail in the work by Snellings [25]. The effect of collective expansion is quantitatively characterized by the variable v_2. This gives the amplitude of the cos(2φ) modulation of the event-wise azimuthal distribution relative to an 'event plane' extracted by an analysis of the outgoing transverse momentum vectors. The variation of this quantity as a function of collision energy, centrality, pseudorapidity and transverse momentum is typically used to test whether the system achieves rapid thermalization and subsequent hydrodynamic expansion as a perfect (or near-perfect) fluid. This is because v_2 is sensitive to the pressure gradients that build up due to the anisotropic shape of the initial overlap region. Elliptic flow has been measured in a wide variety of nuclear collisions over a wide range of collision energies, compiled in the left panel of figure 5. While there is a large, rapid change in the mean value of v_2 as a function of √s_NN at lower energies, the evolution from AGS energies through RHIC energies is nearly logarithmic. It has recently been a topic of some debate exactly what drives the energy dependence of v_2, but the recently released ALICE data have started to help resolve this issue. Although the excitation function shows a nearly 50% increase of the p_T-averaged v_2, the p_T-dependent differential flow v_2(p_T) is found to be essentially identical at RHIC and the LHC for the same centrality bins (which essentially select the same initial geometric shape). The reliability of this result is increased when care is taken to compare the same observable (v_2{4}, the fourth-order cumulant) from a similar detector (ALICE versus STAR) in a similar phase space (|η| < 1) [26]. 
The striking conclusion reached by several authors is that these similarities are consistent with viscous hydrodynamic calculations initialized to agree with the measured multiplicity near mid-rapidity [27,28]. The shear viscosity to entropy density ratio in those calculations is set at the conjectured viscosity bound (η/s = 1/4π). The fact that both calculations describe the data, without further tuning, suggests that the systems at RHIC and the LHC have similar microscopic properties. Notable at the LHC is the fact that the multiplicities are much larger on average, making it possible to measure these event-wise fluctuations much more precisely than was possible at RHIC. Even more notable is the prospect of precision measurements of higher harmonics, beyond v_2. It has been proposed that 'triangularity' [29] may explain several of the most interesting features in the two-particle correlation function data (e.g. the 'ridge' and 'cone' [30,31]), and this has already been tested in recent data analyses (e.g. [32,33]).
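To make the v_2 observable concrete, here is a small toy simulation, not any experiment's analysis chain: particles are sampled with a cos(2φ) modulation about a known event plane, and v_2 is recovered as the mean of cos(2(φ − Ψ)). All numbers are illustrative:

```python
import numpy as np

# Toy illustration of elliptic flow: sample azimuthal angles with a
# cos(2*phi) modulation about a known plane Psi, then recover v2 as
# the mean of cos(2*(phi - Psi)).
rng = np.random.default_rng(7)
V2_TRUE, PSI, N = 0.07, 0.4, 200_000

# Accept-reject sampling from dN/dphi ~ 1 + 2*v2*cos(2*(phi - Psi))
phi = rng.uniform(0.0, 2.0 * np.pi, size=4 * N)
pdf = 1.0 + 2.0 * V2_TRUE * np.cos(2.0 * (phi - PSI))
keep = rng.uniform(0.0, 1.0 + 2.0 * V2_TRUE, size=phi.size) < pdf
phi = phi[keep][:N]

v2_est = np.mean(np.cos(2.0 * (phi - PSI)))
print(f"recovered v2 = {v2_est:.3f}")   # close to the input 0.07
```

In a real analysis the event plane Ψ is itself estimated from the particles (or via cumulants such as v_2{4}), which introduces resolution corrections omitted here.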

Jet quenching
One of the most striking sets of phenomena from the RHIC program is the suppression of high-p_T hadrons and non-photonic electrons (which come primarily from semileptonic decays of heavy-flavor mesons), relative to the expectation, based on QCD factorization, that their production should scale linearly with the number of binary collisions. The level of suppression is typically characterized by the variable R_AA, the ratio of the yield measured in heavy ion collisions to that measured in proton-proton collisions, scaled by the number of binary collisions. If the proton-proton reference is not available, a similar variable R_CP is defined that uses the yield measured in more peripheral collisions as a proxy for proton-proton. Alternatively, some experiments construct a proton-proton reference by interpolating data from nearby center-of-mass energies. The first measurements of R_AA were released by the ALICE experiment in December 2010, using an interpolation of 900 GeV and 7 TeV data as a proxy for p + p data at the appropriate center-of-mass energy. They were presented in two centrality bins, 70-80% and 0-5%, to indicate the extremes of the geometric range sampled by the experiment. However, whereas at RHIC the R_AA was found to be approximately flat with p_T out to quite high p_T and for all centralities, the LHC data shown in figure 6 show a notable qualitative difference in the more central events. The peripheral data are found to have approximately the same spectrum as that found in proton-proton data, i.e. the measured R_AA is flat above 3 GeV. The central events show the previously observed peak structure in R_AA at moderate p_T of a few GeV, a minimum at about 7 GeV and then a steady rise in R_AA at higher p_T. The rise shows no sign of abating at 20 GeV, and it is a major question of the LHC program whether it will saturate at high enough p_T. Interestingly, perturbative calculations (e.g. 
the one shown in figure 6 from [35]) are able to describe the RHIC data, but predict too much suppression when extrapolated to LHC energies, based on the measured inclusive multiplicities (which the calculations assume to scale proportionally to the initial gluon density).

Figure 6. ALICE data on the suppression of inclusive charged hadrons, from [34], compared with pQCD predictions of R_AA for the LHC from [35], based on RHIC data.
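The R_AA variable defined above reduces to a one-line ratio; the numbers in the example are illustrative, not measured values:

```python
# Nuclear modification factor, as defined above (a sketch with
# illustrative numbers, not measured values):
#   R_AA = (yield in A+A per event) / (N_coll * yield in p+p per event)
def r_aa(yield_aa, yield_pp, n_coll):
    return yield_aa / (n_coll * yield_pp)

# If a central Pb+Pb event yields 30 hadrons in some p_T bin where
# binary scaling of an assumed p+p yield would predict 150:
print(r_aa(yield_aa=30.0, yield_pp=0.1, n_coll=1500))   # 0.2
```

R_AA = 1 corresponds to binary-collision scaling; values well below 1, as in the central data above, indicate suppression.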
While these first measurements of R_AA are important for comparisons with the previous RHIC results, the enormous jump in beam energy (nearly 14 times that available at RHIC) provides a large rate of very-high-p_T jets, many of which visibly emerge from the uncorrelated background even in the most central events. This can be clearly seen in figure 7, which shows an event with a single clear high-energy jet, with no obvious recoil. Events like these were noticed early in the run, and led to a first study of dijet events in the first 1.7 µb^-1 of the heavy ion run by ATLAS [36], submitted for publication during the first heavy ion run.
Using its large-acceptance tracker and calorimeter, the ATLAS experiment performed the first published measurements of full jets reconstructed in the busy heavy ion backgrounds. These were performed with the anti-k_t algorithm in a wide phase space (|η| < 4.5), restricting the reconstructed jets to E_T > 25 GeV and |η| < 2.8. An energy asymmetry was calculated for all pairs of jets with E_T1 > 100 GeV, E_T2 > 25 GeV and Δφ > π/2, the latter condition requiring the pairs to be in opposite hemispheres. The evolution from peripheral to central events is shown in figure 8, illustrating two basic features. The first is that the asymmetry distribution widens substantially from peripheral to central events, suggesting that the lower-energy jet has been modified. The second is that the angle between the two jets remains back-to-back, which both limits the possible contribution from fakes and argues against the suppression arising from single hard radiations.
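The asymmetry variable can be sketched as below, using the standard dijet definition A_J = (E_T1 − E_T2)/(E_T1 + E_T2) together with the selection cuts quoted above:

```python
import math

# Dijet energy asymmetry for jet pairs passing the selection described
# above.  The cut values are those quoted in the text; the example
# jet energies are illustrative.
def dijet_asymmetry(et1, et2, dphi):
    """Return A_J = (E_T1 - E_T2) / (E_T1 + E_T2), or None if the pair
    fails the selection (E_T1 > 100 GeV, E_T2 > 25 GeV, dphi > pi/2)."""
    if et1 > 100.0 and et2 > 25.0 and dphi > math.pi / 2.0:
        return (et1 - et2) / (et1 + et2)
    return None

print(dijet_asymmetry(150.0, 50.0, 3.0))   # 0.5: a strongly asymmetric pair
print(dijet_asymmetry(150.0, 140.0, 3.0))  # ~0.034: a balanced pair
```

A_J = 0 for perfectly balanced pairs; the broadening of the A_J distribution in central events is the signature discussed above.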
In early February 2011, the CMS collaboration released their first study of jet properties in Pb + Pb collisions at the LHC [37]. While using higher statistics (sampling four times the integrated luminosity) and slightly different selection cuts on the jets, they confirm the observations made by ATLAS. They study modified fragmentation via the transverse momentum balance projected along the direction of the leading jet. The left panel of figure 9 shows that the overall momentum balance is recovered in both peripheral and central samples, and for jets with varying dijet asymmetry. However, when comparing tracks inside the jet cone (ΔR < 0.8) and outside the jet cone (ΔR > 0.8), they find that the in-cone energy is carried by high-momentum tracks while the 'lost' energy is carried by lower-energy tracks at large angles relative to the nominal recoil jet.

The suppression of jets has turned out to be a clear indication of jet quenching at the LHC, visible at the event level and not just in the modification of inclusive jet rates. A recent paper by Cacciari et al [38] has argued that some part of the asymmetry broadening could be explained by the presence of larger background fluctuations than expected, provided the fluctuations are correlated on the scale of the jet size. The measured fluctuations, on scales of Δη × Δφ = 0.1 × 0.1, are currently in excellent agreement with the ATLAS simulations (surprisingly so!). While it remains to be demonstrated whether similar agreement persists as the window size is increased, there is currently no evidence in the heavy ion literature for large-amplitude background fluctuations, particularly negative ones, except those induced by jets. The largest known background correlations occur as 'clusters' of 2-3 low-p_T charged particles at a time, over angular scales characterized by a Gaussian in η with σ ≈ 1 [39]. It is not clear how such small fluctuations could create large, correlated fluctuations that could effectively 'extinguish' jets.

J/ψ suppression
Dilepton measurements have long been considered to be a powerful tool in heavy ion collisions, particularly since they allow experimental access to electroweak processes and heavy quarkonia. In particular, the measurement of quarkonia production rates as a function of centrality is thought to be a direct probe of deconfinement [40]. J/ψ mesons are expected to be formed out of charm-anticharm pairs produced in hard processes in the initial collisions of the nucleons, with a survival probability that depends on the presence of a quark-gluon plasma. The hot plasma prevents the formation of bound states via the QCD analogue of Debye screening. However, at high energies low-momentum J/ψ are also expected to be regenerated from the recombination of charm produced at thermal rates in the medium [41].
The ATLAS experiment published the first results for J/ψ production in Pb + Pb collisions at the LHC, along with a first look at the production rate of Z bosons [42]. Z bosons do not interact strongly and thus should be unaffected by the presence of the plasma. However, they are sensitive to the initial quark and gluon distributions, and thus probe the nuclear PDFs at high Q^2, although at a relatively large x = 2M_Z/√s ∼ 0.06, where nuclear shadowing effects are not expected to be large [43,44].
The ATLAS mass peaks for both J/ψ and Z are shown in figure 10. The J/ψ resolution is approximately 60 MeV, comparable to that found in proton-proton collisions. The similar performance is primarily because the multiplicity of tracks in the ATLAS muon spectrometer is quite low, since it is located outside the ATLAS calorimeter system, which absorbs most of the hadronic activity. The Z peak also shows a resolution similar to that found in proton-proton collisions, and agrees well with a simulated distribution, shown as a gray histogram.
To quantify the suppression phenomenon observed in previous experiments (e.g. at the CERN SPS and RHIC), ATLAS constructed the double ratio

R_c = (Y_c / N_coll^c) / (Y_40-80 / N_coll^40-80),

where Y_c is the efficiency- and acceptance-corrected yield per event for a given centrality class c (of which four were defined: 0-10, 10-20, 20-40 and 40-80%) and N_coll^c is the corresponding number of binary collisions. These ratios are computed for both the J/ψ and Z yields and presented in figure 11. It should be noted that no attempt is made to propagate the statistical error of the 40-80% bin into the more central bins. Instead, only the mean value is used to rescale all of the other bins to look for a violation of N_coll scaling in the production of either particle.
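The double ratio described above can be sketched as a small helper; all yields and N_coll values below are illustrative:

```python
# The centrality double ratio sketched above: yields per event in each
# centrality class, rescaled so that N_coll scaling would give unity.
# All inputs below are illustrative, not the measured ATLAS values.
def double_ratio(y_c, n_coll_c, y_ref, n_coll_ref):
    """(Y_c / N_coll^c) / (Y_ref / N_coll^ref); ref = 40-80% class."""
    return (y_c / n_coll_c) / (y_ref / n_coll_ref)

# Perfect N_coll scaling gives 1 by construction:
print(double_ratio(y_c=200.0, n_coll_c=1000.0, y_ref=20.0, n_coll_ref=100.0))  # 1.0
# A suppressed central yield gives a value below 1:
print(double_ratio(y_c=100.0, n_coll_c=1000.0, y_ref=20.0, n_coll_ref=100.0))  # 0.5
```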
To compare the ATLAS results to the older PHENIX data, it is necessary to calculate a full R_CP from the ATLAS data, propagating all statistical uncertainties as well as the systematic uncertainties that depend on centrality. One also has to rebin the available PHENIX data into comparable centrality bins, again propagating centrality-dependent errors where necessary. Using the publicly available ATLAS and PHENIX data [45], the author and J Jia have created a first version of this plot, shown in figure 12. Despite the large uncertainties, it is still striking that the two data sets agree despite many differences between them: (i) the large difference in √s_NN, (ii) the factor of 2 in particle density, which should reflect different energy densities and thus different suppression levels, (iii) the different p_T ranges measured (p_T < 4 GeV for PHENIX, p_T > 6.5 GeV for ATLAS) and (iv) the absence of a correction for non-prompt (i.e. B meson decay) contributions, which have been estimated to be about 25% in p + p [46] at the top LHC energy and 1-4% at the top RHIC energy [47]. The scaling also seems intriguingly similar to that observed for the inclusive charged particles shown by ALICE (discussed above in figure 3), in that the centrality dependence of the J/ψ yields is independent of the beam energy.

Figure 12. ATLAS J/ψ R_CP at √s_NN = 2.76 TeV compared to PHENIX data from Au + Au at 200 GeV. The PHENIX systematic uncertainties combine the uncorrelated uncertainties as well as the Glauber uncertainties on the N_coll ratios, assumed to be the same as in ATLAS. The ATLAS systematic and statistical errors were propagated by J Jia, and are also assumed to be uncorrelated.
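Propagating uncorrelated uncertainties through such a ratio follows the standard quadrature rule, sketched here with illustrative inputs:

```python
import math

# Standard quadrature propagation of uncorrelated relative uncertainties
# through a ratio r = a / b, as needed when building an R_CP-style ratio.
def ratio_with_error(a, da, b, db):
    r = a / b
    dr = r * math.sqrt((da / a) ** 2 + (db / b) ** 2)
    return r, dr

# Illustrative (not measured) per-N_coll yields for central and
# peripheral classes, each with a 10% uncertainty:
r, dr = ratio_with_error(0.40, 0.04, 0.80, 0.08)
print(f"R_CP = {r:.2f} +- {dr:.2f}")
```

This rule applies only when the two uncertainties are uncorrelated, which is exactly the assumption noted for the comparison above.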

Summary and outlook
The LHC heavy ion program is under way and exciting results have emerged quite quickly. Results have already been released both for bulk observables, the charged particle multiplicity and elliptic flow, and for 'hard probes' such as jets and J/ψ. The charged particle multiplicity near mid-rapidity shows no anomalous rise relative to lower energies, although a simple extrapolation to full phase space using extended longitudinal scaling seems to show the first apparent violation of the Landau-Fermi formula at high energies. Elliptic flow is found to agree surprisingly well with lower-energy data when measured as a function of transverse momentum, in agreement with viscous hydrodynamic calculations that treat the matter at the LHC similarly to that found at RHIC. Despite the similar features found at lower energies, the higher center-of-mass energies provide much higher rates of high-p_T 'hard' probes, with which to study the medium microscopically. This makes it possible for the first time to study ultrahigh-energy jets, which are found to show dramatic event-by-event asymmetries in their energies, possibly reflecting strong energy loss in the hot, dense medium. J/ψ rates have also been measured, and were found to be suppressed, but at the same level as lower-energy results. In total, the medium formed at the LHC appears to be qualitatively similar to that measured at lower energies. However, it will take the availability of measurements reflecting the higher rates of high-p_T processes and the higher event-by-event multiplicities before more quantitative statements can be made about the microscopic properties of the hot, dense matter.