Thermodynamics of discrete quantum processes

We define thermodynamic configurations and identify two primitives of discrete quantum processes between configurations for which heat and work can be defined in a natural way. This allows us to uncover a general second law for any discrete trajectory that consists of a sequence of these primitives, linking both equilibrium and non-equilibrium configurations. Moreover, in the limit of a discrete trajectory that passes through an infinite number of configurations, i.e. in the reversible limit, we recover the saturation of the second law. Finally, we show that for a discrete Carnot cycle operating between four configurations one recovers Carnot's thermal efficiency.


I. INTRODUCTION
The intuitive meaning of heat and work in thermodynamics is that of two types of energetic resources, one fully controllable and useful, the other uncontrolled and wasteful. An impressive effort has been devoted to providing a consistent mathematical characterisation of these notions within a quantum mechanical description of physics [1-7]. This is a challenge since, in contrast to other thermodynamic quantities such as internal energy and entropy, heat and work are not properties of individual states of a system. They are defined for continuous processes connecting different states [2,8,9], implying that their statistical fluctuations cannot be described in terms of a single system observable. Two-point correlation functions characterising the correlations along process paths are required, a problematic territory for quantum mechanics where definite trajectories cannot be fixed unless the system is continuously measured. Resolving these issues has been the topic of a number of publications that have formulated quantum trajectory approaches [2,4,6,8-11].
In contrast, here we focus on the mean values of heat and work, where the analysis simplifies but still requires careful thought. We will adopt the identification of the system's internal energy with U(ρ) = tr[ρ H], where ρ is the density matrix describing the state of the system at a given time and H is its instantaneous Hamiltonian. Clearly, a proper definition of this Hamiltonian is in general problematic! If the system is coupled to an environment, the non-equilibrium behaviour of a general open system makes the definition of the system's Hamiltonian ambiguous [10,12-15], both mathematically and experimentally. Ultimately the choice of the Hamiltonian one assigns to the system must rely on the set of operations and observables one can access experimentally. In many situations of physical interest H can be identified with the bare system Hamiltonian or an effective system Hamiltonian that incorporates the effect of the environment.
While the environment degrees of freedom are in principle uncontrolled, full control can be exerted over the temporal variation of the system Hamiltonian. For instance, the size of a container in which steam is pumped can be freely chosen, and a piston can be attached to the container that can push the wheels of a train. A formal definition of mean heat and work is then obtained by considering an infinitesimal change of the internal energy associated with the time evolution of the system, which brings its density matrix from ρ to ρ + dρ while the Hamiltonian varies from H to H + dH,

dU = tr[ρ dH] + tr[dρ H].   (1)

The origin of the change of ρ may here be due to both the variation of H induced by the experimenter and the dynamics due to the coupling with the environment. The possibility of externally controlling H suggests identifying the first term on the r.h.s. of Eq. (1) with the average work done by the experimenter during the evolution. The second term describes the internal energy change due to a reconfiguration of the system, i.e. a variation of the system's density matrix. This is an energy contribution over which the experimenter has no direct control, which is why it is associated with heat. The infinitesimal average heat absorbed by the system and the infinitesimal average work done on the system are thus

δQ := tr[dρ H]  and  δW := tr[ρ dH],

[12,16-23], with the symbol δ indicating that heat and work are in general not exact differentials, i.e. they do not correspond to observables.
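The split of an infinitesimal internal-energy change into a heat part tr[dρ H] and a work part tr[ρ dH] can be checked numerically. The sketch below is a minimal illustration in plain numpy; the qubit state, Hamiltonian, and increments are arbitrary choices, not taken from the text:

```python
import numpy as np

# arbitrary qubit configuration (illustrative choice)
rho = np.array([[0.7, 0.1], [0.1, 0.3]])       # density matrix: Hermitian, trace 1
H = np.array([[0.5, 0.0], [0.0, -0.5]])        # Hamiltonian

# small increments: drho must be Hermitian and traceless, dH Hermitian
drho = 1e-4 * np.array([[1.0, 0.5], [0.5, -1.0]])
dH = 1e-4 * np.array([[0.0, 1.0], [1.0, 0.0]])

dU = np.trace((rho + drho) @ (H + dH)) - np.trace(rho @ H)
dQ = np.trace(drho @ H)    # heat: energy change due to reconfiguration of the state
dW = np.trace(rho @ dH)    # work: energy change due to the driven Hamiltonian
# first law: dU = dQ + dW, up to the second-order term tr[drho dH]
```

The residual dU − (dQ + dW) equals tr[dρ dH] and is second order in the increments, which is why the split is exact in the infinitesimal limit.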
While the first law of thermodynamics states that the sum of the two average energy types is the average internal energy change, the split into these two types of energy is crucial as it allows the formulation of the second law of thermodynamics. A fundamental law of physics, it sets limits on the work extraction of heat engines and establishes the notion of irreversibility in physics. The second law can be phrased in the form of Clausius' inequality, stating that the change in a system's entropy must be at least the average heat absorbed by the system divided by the temperature at which the heat is exchanged. While the first law of thermodynamics is fundamental for any process, the second law was originally stated for processes that start and end in equilibrium. Recently, the non-equilibrium work relation due to Jarzynski has been used to argue that the second law should also hold for any process starting from equilibrium, at temperature T, but ending in an arbitrary non-equilibrium state [24]. However, no conclusive argument has yet established the most general set of dynamical processes that obey the Clausius inequality [25]. Extending the infinitesimal scenario to finite, continuous processes in which the temporal evolution of ρ(t) and H(t) in time t is known, the mean heat and work can be found by integrating over the trajectory taken from ρ(0) and H(0) to ρ(τ) and H(τ), i.e.

Q = ∫_0^τ tr[dρ(t) H(t)],   (5)

W = ∫_0^τ tr[ρ(t) dH(t)],   (6)
while the first law becomes

U(ρ(τ)) − U(ρ(0)) = Q + W.

The mathematical consistency of the above expressions and their compatibility with the predictions of thermodynamics have been verified for many models, for example for processes that are induced by Markovian master equations [18]. There are two paradigmatic examples of all-work and all-heat processes that we introduce here and which will become important in the later part of the manuscript. The first process is a unitary process, which we will also refer to as closed, where the (non-equilibrium) evolution of the state is given by the Schrödinger equation,

dρ(t)/dt = −(i/ℏ) [H(t), ρ(t)].

Mean heat and work are then

Q = 0  and  W = ∆U,

consistent with the physical intuition that no heat has been provided to the system during the evolution. The second example is a system that evolves through the action of a dissipative, i.e. open, Markov process via a master equation [26],

dρ(t)/dt = −(i/ℏ) [H, ρ(t)] + L(ρ(t)),   (10)

with L being the dissipative Lindblad term. Under the assumption that the typical time scales associated with the time-independent H are much shorter than those associated with L, we can treat the system as almost isolated and use Eq. (1) to compute its internal energy. In this limit Eq. (6) is valid with the Hamiltonian just being the time-independent H, giving

W = 0  and  Q = ∆U,   (11)

which is in full agreement with the physical intuition that no work has been performed on the system.
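The two limiting cases are easy to verify numerically. In the sketch below (an illustrative qubit, not from the text) the heat rate tr[(dρ/dt) H] vanishes identically for closed Schrödinger dynamics, since tr[[H, ρ] H] = 0, while for a dissipative process with a time-independent Hamiltonian the work rate tr[ρ dH/dt] vanishes trivially:

```python
import numpy as np

rho = np.array([[0.8, 0.2], [0.2, 0.2]], dtype=complex)  # illustrative qubit state
H = np.array([[1.0, 0.3], [0.3, -1.0]], dtype=complex)   # time-independent Hamiltonian

# closed (unitary) evolution: drho/dt = -i [H, rho]  (hbar = 1)
rho_dot = -1j * (H @ rho - rho @ H)
heat_rate = np.trace(rho_dot @ H)          # tr[[H, rho] H] = 0, so no heat

# open evolution with constant H: dH/dt = 0, so the work rate is zero
work_rate = np.trace(rho @ np.zeros_like(H))
```

The cyclic property of the trace makes heat_rate vanish exactly, not just to numerical precision in the step size.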
While these examples constitute special cases of continuous processes, the heat and work in a general process depend intimately on the exact details of the process. However, the caveat with this viewpoint is that in most real-life applications one knows neither the dynamics of the state of the system nor the appropriate local Hamiltonian at all times. Importantly, this is not just due to our ignorance of what happens at the quantum level. Quantum physics places strong fundamental limitations on what we can know without choosing a measurement apparatus, measuring the system and interpreting the data. Moreover, if the system is indeed measured then the experimenter's choice of which degrees of freedom she actually measures will affect what the measured heat and work will be. In other words, we propose that there is no single average heat and work for a particular process; there are different sensible answers to this question, and the answer depends on the choice of system Hamiltonians in time, H(t), that corresponds to specific measurement choices.
The aim of this paper is to show that it is possible to formulate a general second law independently of these choices. To achieve this we will depart entirely from the traditional continuous trajectory approach and propose a rather drastic but pragmatic change of perspective. We develop a consistent framework of mean heat and work for discrete thermodynamic processes. The rationale for this approach is that while the true process is continuous, the observations we make on the system are almost always discrete. (We neglect here the possibility of monitoring through continuous weak measurements.) For discrete snapshots of the dynamics, we find that by decomposing the transition into possible sequences of two fundamental primitives, it is possible to define heat and work for the discrete process in a way that is experimentally and mathematically clear. This allows us to establish a general second law for discrete processes between equilibrium and non-equilibrium states and to analyse a discrete Carnot cycle, for which we recover the usual Carnot efficiency.
The paper is structured as follows. In Sec. II we review the traditional perspective on the second law and the definition of entropy. In Sec. III we define the dynamical configuration space of a system, which allows us to formulate the notion of two primitives for discrete processes in Sec. IV, the discrete unitary and discrete thermalising transformations (DUTs and DTTs). Sec. V contains the main results of the paper. First we show that entropic inequalities, when applied to discrete trajectories formed by concatenating DUTs and DTTs, yield the second law of thermodynamics in the Clausius formulation. We then derive two consequences: we find the minimum and maximum heat for a single DUT+DTT sequence and prove the existence of a discrete trajectory, formed by sequences of DUTs and DTTs, that connects two given thermodynamic configurations while asymptotically saturating the Clausius inequality. Finally we identify a discrete trajectory that connects the same initial and final configurations as a given continuous trajectory through a sequence of DUTs and DTTs, and which approximates the continuous heat. In Sec. VI we derive the thermal efficiency of a discrete cycle, the Carnot efficiency, and conclude in Sec. VII.

II. ENTROPY AND THE SECOND LAW
In 1865 Clausius established that the overall heat flow in any cyclic, reversible process vanishes, implying that the integral over any non-cyclic reversible process must be path independent. This led him to define the state function entropy, S, and the entropy change, ∆S, between the final and initial point of a reversible process,

∆S = ∫_i^f δQ_rev / T.   (12)

Clausius also showed that any cyclic process, reversible or irreversible, obeys

∮ δQ / T ≤ 0.   (13)

This relation is the basis for a formulation of the second law of thermodynamics,

∆S ≥ ∫_i^f δQ / T,   (14)

known as the Clausius inequality. It is a statement for all thermodynamic processes, not just cyclic ones, that start from equilibrium at temperature T, and it simplifies to Q ≤ T ∆S when the system interacts with a bath at constant temperature T. In this form Clausius' inequality establishes the existence of an upper bound on the heat received by the system. Clearly, Clausius' goal was to characterise different forms of energy and their interconversion. However, by formulating the second law of thermodynamics he defined a new quantity: entropy. In contrast, in modern information theory the focus is on the state of a system. Entropy is here used as the central physical quantity to measure the amount of information in a state, while heat and work, and energy in general, have no well-defined purpose in the interpretation of information processing. This opens the possibility of turning Clausius' original argument around! It allows us to use the entropy change in discrete quantum processes to define the average heat and work. Before we proceed, let us first highlight that non-trivial entropy bounds exist for any process between two states.
A state ρ describes an amount of information, quantified by the von Neumann entropy, S(ρ) = −tr[ρ ln ρ]. The evolution of a quantum system from an initial state ρ_i to a final state ρ_f through an arbitrary process, or quantum channel, has a meaningful associated entropy change,

∆S = S(ρ_f) − S(ρ_i),   (15)

which quantifies the change of the encoded amount of information. The entropy change is non-trivially bounded from above and below by virtue of the positivity of the relative entropy (classically the Kullback-Leibler divergence [27]). The relative entropy, S(ρ_1‖ρ_2), between two states ρ_1 and ρ_2 characterises the number of additional bits required to encode ρ_1 when using the diagonal basis of ρ_2 rather than the diagonal basis of ρ_1. It is defined as [28]

S(ρ_1‖ρ_2) := tr[ρ_1 ln ρ_1] − tr[ρ_1 ln ρ_2]

and is a positive quantity, S(ρ_1‖ρ_2) ≥ 0, with equality iff ρ_1 = ρ_2. Intuitively, the relative entropy is similar to a distance measure; however, it is important to keep in mind that it is asymmetric, S(ρ_1‖ρ_2) ≠ S(ρ_2‖ρ_1). Rewriting the entropy change, Eq. (15), in two ways, a lower and an upper bound on the entropy change emerge,

−tr[∆ρ ln ρ_f] ≤ ∆S ≤ −tr[∆ρ ln ρ_i],   (20)

with ∆ρ = ρ_f − ρ_i. From the information theory point of view, bounds on the entropy change are important in their own right as they characterise how much information is lost or gained.
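The two-sided entropy bound can be checked numerically for random mixed states. The helpers below are a small numpy sketch (dimension, seed, and the sampling of density matrices are illustrative choices); the bounds follow from the positivity of the relative entropy:

```python
import numpy as np

def logm_h(A):
    """Matrix logarithm of a positive-definite Hermitian matrix via eigendecomposition."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.conj().T

def entropy(rho):
    """von Neumann entropy S(rho) = -tr[rho ln rho]."""
    w = np.linalg.eigvalsh(rho)
    return float(-np.sum(w * np.log(w)))

def rand_rho(d, rng):
    """Random full-rank density matrix (illustrative sampling choice)."""
    A = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    r = A @ A.conj().T + 1e-3 * np.eye(d)   # regularise to keep it full rank
    return r / np.trace(r).real

rng = np.random.default_rng(0)
ok = True
for _ in range(20):
    rho_i, rho_f = rand_rho(3, rng), rand_rho(3, rng)
    dS = entropy(rho_f) - entropy(rho_i)
    drho = rho_f - rho_i
    lower = -np.trace(drho @ logm_h(rho_f)).real   # equals dS - S(rho_i || rho_f)
    upper = -np.trace(drho @ logm_h(rho_i)).real   # equals dS + S(rho_f || rho_i)
    ok = ok and (lower <= dS + 1e-9) and (dS <= upper + 1e-9)
```

The inline comments record why the bounds hold: the gap to the lower (upper) bound is exactly a relative entropy, which is non-negative.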
If we now assume the special case that ρ_f is a thermal state for the Hamiltonian H_f at an inverse temperature β_f, then the lower bound becomes

∆S ≥ β_f tr[∆ρ H_f].   (21)

Interpreting tr[∆ρ H_f] as the heat of the discrete process, the above expression would constitute exactly the second law of thermodynamics. This is exactly what we will pursue in Sec. IV, e.g. in Eq. (28). Interestingly, from (20) it is apparent that an upper bound on the entropy change also exists, one that is rarely discussed in the literature. This maximum value of the entropy change is enforced to ensure that any reverse process, from ρ_f to ρ_i, also obeys the second law [32].

III. DYNAMICAL CONFIGURATION SPACE
To assist our discussion of discrete quantum processes we introduce the concept of configuration space, following the spirit of [16,17,29], and propose a graphical representation for that space, see Fig. 1.
Definition 1 Let S be the quantum system under investigation, with H_S its Hilbert space, L(H_S) the set of linear operators on H_S, and S(H_S) ⊂ L(H_S) the set of density matrices on H_S. We define the dynamical configuration space C(H_S) of S as the set formed by the pairs c = (ρ, H), with ρ ∈ S(H_S) a density matrix [33] and H ∈ L(H_S) a Hermitian operator on H_S whose spectrum is bounded from below. Points c in the dynamical configuration space are called "configurations" to distinguish them from "states", ρ.
The evolution of the system is described by discrete trajectories in C(H S ): Definition 2 A discrete trajectory T is defined as an ordered list of elements of C(H S ) that describes the succession of configurations, with each element (ρ, H) containing both the density matrix ρ of S and the local Hamiltonian H of S at that specific instance of the evolution.
We stress that both ρ and H of a configuration point c ∈ C(H_S) have a clear experimental meaning. ρ is the density matrix that one would reconstruct by state tomography, i.e. the preparation of many copies of the same state ρ and the full tomographic measurement of its properties. The Hamiltonian of the system, H, is determined by the set of projective measurements {M_j}_j the experimenter performs on the system to "measure the energy", together with the interpretation of the corresponding energy eigenvalues, E_j, so that H = Σ_j E_j M_j. (The choice of the measurement and its interpretation can be motivated by process tomography on the Hamiltonian at any point in time. For this the system needs to be decoupled from the rest of the universe at that instant, and a complete set of input states evolved for a short time interval τ through the action of H. By measuring the final states of the evolution, the unitary e^{−iHτ}, and hence H, can be recovered.) It is then straightforward to establish the internal energy and the entropy for each point in dynamical configuration space.
Definition 3 For each configuration c = (ρ, H) ∈ C(H_S) we define the internal energy as the mean energy of the state,

U(c) := tr[ρ H],   (22)

and the entropy as the von Neumann entropy S(ρ) of ρ,

S(c) := S(ρ) = −tr[ρ ln ρ].   (23)

A central notion in thermodynamics is the canonical Gibbs state, often also referred to as the thermal state or equilibrium state. Since a thermal state, ρ, at temperature T is well-defined only with respect to a certain Hamiltonian, H, it is actually the configuration c = (ρ, H) that is thermal.
Definition 4 An element (ρ, H) ∈ C(H_S) describes a thermal equilibrium configuration (or briefly thermal configuration) if ρ is a Gibbs state of the Hamiltonian, H, for some finite inverse temperature β > 0, i.e.

ρ = e^{−βH} / Z(β),

with Z(β) = tr[e^{−βH}] being the associated partition function [34]. In the following, thermal configurations will be indicated by c(β) := (ρ, H)_β, with the subscript β specifying the configuration's temperature.
Thermal configurations (ρ, H)_β are very special. Firstly, for a given Hamiltonian, H, among all states with a fixed value of the mean internal energy, U = tr[ρH], the thermal state maximises the entropy S(ρ) = −tr[ρ ln ρ]. In other words, c(β) = (ρ, H)_β is the most unbiased configuration one can assign to the system given only the knowledge of U [30]. Another insightful characterisation of thermal configurations, in terms of a property called complete passivity, was achieved by Lenard [17], building on ideas of Pusz and Woronowicz [16]. Complete passivity captures the intuitive notion of thermal equilibrium. A configuration (ρ, H) is said to be passive if no work can be extracted from the system by any unitary process, i.e. W ≥ 0, cf. Eq. (6). It is completely passive if this remains true for an arbitrary number of copies of the configuration, even when one is allowed to introduce any sort of interactions between the various copies of ρ. It turns out that while all c = (ρ, H) with commuting ρ and H whose populations do not increase with energy are passive configurations, only thermal configurations c(β) and the ground state are completely passive [16,17]. To stress the special role of thermal configurations graphically, they are denoted as red circles, while all configurations that are not thermal will be called non-equilibrium configurations and are depicted as blue squares, see Fig. 1.
A note on gauge. Given a generic full-rank state ρ ∈ S(H_S), there always exists a Hermitian operator H ∈ L(H_S) and a β > 0 such that (ρ, H)_β is thermal [35] at the inverse temperature β. In fact the problem admits infinitely many solutions, since there are two gauge freedoms in the choice of H and β. Firstly, the zero-point of the energy scale can be shifted by an arbitrary constant a. The second gauge, b, rescales the temperature and with it the spacing of the energy scale. The pair {H, β} is equivalent to {b(H + a), β/b} in that both define the same set of thermal configurations. In particular, the Gibbs state itself, and with it the entropy (23), does not depend on the values of a and b, while the internal energy (22) is merely shifted and rescaled accordingly. In the following we will assume that both gauges have been fixed.
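The gauge equivalence is easy to verify numerically. The sketch below (the `gibbs` helper and the three-level Hamiltonian are illustrative choices, not from the text) checks that the pairs {H, β} and {b(H + a), β/b} yield the same Gibbs state:

```python
import numpy as np

def gibbs(H, beta):
    """Gibbs state exp(-beta H)/Z(beta), built from the eigendecomposition of H."""
    w, V = np.linalg.eigh(H)
    p = np.exp(-beta * (w - w.min()))   # shift by w.min() for numerical stability
    p /= p.sum()
    return (V * p) @ V.conj().T

# illustrative three-level Hamiltonian and gauge parameters
H = np.diag([0.0, 1.0, 2.5])
beta, a, b = 0.7, 3.0, 2.0

rho = gibbs(H, beta)
rho_gauged = gibbs(b * (H + a * np.eye(3)), beta / b)   # gauge-transformed pair {b(H+a), beta/b}
# both pairs define the same thermal state
```

The constant shift a is absorbed by the partition function and the rescaling b cancels between Hamiltonian and inverse temperature, so the two density matrices coincide exactly.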

IV. DISCRETE TRANSFORMATIONS IN DYNAMICAL CONFIGURATION SPACE
Among all possible discrete transformations in dynamical configuration space C(H S ) we identify two classes that admit a clear analysis of the energetic balance and can be used as primitives for general discrete dynamical evolutions.

A. Discrete Unitary Transformations (DUTs)
These transformations map an initial configuration c_i = (ρ_i, H_i) into a final configuration c_f = (V ρ_i V†, H_f), where the unitary V is generated by some time-dependent Hamiltonian H(t) interpolating between H_i and H_f; no assumption is made on the time duration τ nor on the specific form of H(t) which realises the unitary V. In analogy to the continuous situation, we define the work done on the system due to a DUT as identical to the total variation of the internal energy, ∆U, i.e.

W := ∆U = tr[V ρ_i V† H_f] − tr[ρ_i H_i],

while no heat is associated with DUTs, i.e. Q := 0.
A special class of DUTs are the Discrete Unitary Quenches (DUQs). Experimentally, a quench is an abrupt, effectively instantaneous change of the system Hamiltonian which leaves the system density matrix unchanged, i.e.

(ρ_i, H_i) → (ρ_i, H_f).

We also note that for full-rank states ρ_i a DUQ can be found that brings (ρ_i, H_i) to a final configuration that is thermal, c_f(β) = (ρ_i, H̄_f)_β, with the Hamiltonian defined as H̄_f = −(1/β)(ln ρ_i + ln Z). DUTs will be denoted as blue arrows in the graphical representation of the configuration space, see Fig. 1. DUTs can be concatenated to produce another DUT, and any DUT has an inverse that is also a DUT.
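A short numerical sketch of these two constructions follows; the Hamiltonians, the Haar-like unitary from a QR decomposition, and the gauge choice ln Z = 0 are illustrative assumptions, not from the text:

```python
import numpy as np

def logm_h(A):
    """Matrix logarithm of a positive-definite Hermitian matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.conj().T

def gibbs(H, beta):
    """Gibbs state exp(-beta H)/Z(beta)."""
    w, V = np.linalg.eigh(H)
    p = np.exp(-beta * (w - w.min()))
    p /= p.sum()
    return (V * p) @ V.conj().T

# work of a DUT (rho_i, H_i) -> (V rho_i V^dag, H_f); all choices illustrative
rng = np.random.default_rng(2)
H_i, H_f = np.diag([0.0, 1.0, 2.0]), np.diag([0.0, 0.5, 3.0])
rho_i = gibbs(H_i, 1.0)
V, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
rho_f = V @ rho_i @ V.conj().T
W = np.trace(rho_f @ H_f).real - np.trace(rho_i @ H_i).real   # W = Delta U, Q = 0

# DUQ to a thermal configuration: H_bar = -(1/beta)(ln rho_i + ln Z), with gauge ln Z = 0
beta = 1.0
H_bar = -logm_h(rho_i) / beta
# (rho_i, H_bar) is thermal at inverse temperature beta by construction
```

The last assertion of the construction, gibbs(H_bar, beta) == rho_i, holds because exponentiating −β H̄_f simply undoes the logarithm of ρ_i.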

B. Discrete Thermalising Transformations (DTTs)
DTTs are defined as those transformations which take a generic c_i = (ρ_i, H) into a thermal configuration at some inverse temperature β, c_f(β) = (e^{−βH}/Z(β), H)_β, without modifying the system Hamiltonian H. The prototypical example of a DTT is a thermalisation process in which the system is put into weak thermal contact with a reservoir at inverse temperature β and left until its state becomes time-independent. Physically this is realised by the system weakly interacting with a large external environment; the requirement of a small coupling ensures a clear definition of a local system Hamiltonian.
For example, the dissipative evolution (ρ(t), H) defined in Eq. (10), with the additional assumption that the Lindblad term L commutes with H, will for t → ∞ converge to the Gibbs state e^{−βH}/Z(β). In analogy with this continuous process, we assume that the internal energy change due to a DTT results solely from the heat absorbed by the system, while the work of a DTT vanishes,

Q := tr[H ∆ρ] = ∆U  and  W := 0.   (28)

This non-trivial expression for the heat is exactly of the form that we expected from the bounds on the entropy in Eq. (21), and it will be the basis for deriving a general second law for discrete quantum trajectories in the next section.
DTTs will be denoted as horizontal red arrows in the graphical representation, see Fig. 1. DTTs can be concatenated to produce another DTT.

The inverse of a DTT is in general not a DTT: only if the initial configuration c_i was already thermal can the action of a DTT be reversed by another DTT.

V. HEAT AND CLAUSIUS INEQUALITY FOR DISCRETE THERMODYNAMIC PROCESSES
Having identified two fundamental process primitives in configuration space, we now focus on more complex discrete trajectories. These can start from equilibrium or non-equilibrium configurations; however, we restrict ourselves to discrete trajectories that can be obtained by concatenating DUTs and DTTs. Within this scenario we will be able to formulate a general second law for discrete quantum processes that does not require detailed knowledge of the continuous state and local Hamiltonian evolution.

A. Single DUT+DTT transformations
Let us begin with the simplest non-trivial discrete transformation, which can be used to connect two equilibrium configurations.

Equilibrium to equilibrium processes
We consider a trajectory that starts from a thermal configuration c_i(β_i) = (ρ_i, H_i)_{β_i} and ends at a final thermal configuration c_f(β_f) = (ρ_f, H_f)_{β_f} via the action of a single DUT followed by a DTT. The heat of the discrete process can then be determined as the sum of the heats of each component, for both of which the heat is a well-defined quantity. The DUT first unitarily rotates the input density matrix to ρ_1 = V ρ_i V† while the Hamiltonian changes from H_i to H_f, ending in an intermediate (not necessarily thermal) configuration c_1 = (ρ_1, H_f). A DTT follows that brings c_1 to c_f(β_f), resulting in the discrete overall trajectory

c_i(β_i) —DUT→ c_1 —DTT→ c_f(β_f),   (30)

shown in Fig. 3. Heat is exchanged only during the DTT,

Q = tr[H_f (ρ_f − V ρ_i V†)],

so the value of the heat depends on the choice of the DUT's unitary V. The maximum and minimum heat are obtained for the unitaries that, respectively, anti-align and align the eigenbasis of ρ_i with that of H_f, where {H_f(k)}_k and {H_i(k)}_k are the eigenvalues of H_f and H_i in decreasing order and Z_{i,f} are the partition functions of the initial and final configurations. Applying the lower bound of Eq. (20) to the step c_1 → c_f(β_f), and noting that the unitary step leaves the entropy unchanged, S(ρ_1) = S(ρ_i), we find

β_f Q ≤ ∆S.   (34)

Thus the process (30) obeys a Clausius-type inequality, cf. (14), which states that the heat absorbed by the system is bounded from above by T_f ∆S.
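The inequality β_f Q ≤ ∆S can be probed numerically by sampling many unitaries V for a fixed pair of thermal endpoints. The parameters and sampling below are illustrative choices, not from the text:

```python
import numpy as np

def gibbs(H, beta):
    """Gibbs state exp(-beta H)/Z(beta)."""
    w, V = np.linalg.eigh(H)
    p = np.exp(-beta * (w - w.min()))
    p /= p.sum()
    return (V * p) @ V.conj().T

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-15]
    return float(-np.sum(w * np.log(w)))

# illustrative thermal endpoints
H_i, H_f = np.diag([0.0, 1.0, 2.0]), np.diag([0.0, 1.5, 2.2])
beta_i, beta_f = 1.2, 0.8
rho_i, rho_f = gibbs(H_i, beta_i), gibbs(H_f, beta_f)
dS = entropy(rho_f) - entropy(rho_i)

rng = np.random.default_rng(3)
holds = True
for _ in range(100):
    V, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
    rho_1 = V @ rho_i @ V.conj().T                  # intermediate state after the DUT
    Q = np.trace(H_f @ (rho_f - rho_1)).real        # heat of the DTT  c_1 -> c_f
    holds = holds and (beta_f * Q <= dS + 1e-9)
```

Whatever V is drawn, the gap ∆S − β_f Q equals the relative entropy S(ρ_1‖ρ_f), which is why the bound never fails.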

Non-equilibrium to non-equilibrium processes
We now turn to discrete non-equilibrium processes, for which establishing the Clausius inequality in the continuous case has only recently been addressed [25]. In our approach this can be done by observing that, given two generic configurations c_i = (ρ_i, H_i) and c_f = (ρ_f, H_f) in C(H_S) with ρ_f full rank, it is always possible to connect them via a discrete trajectory composed of three primitive steps, which differs from the one given in Eq. (30) only by a final DUQ transformation. Specifically we can write

T : c_i —DUT→ c_1 —DTT→ c_2(β_2) —DUQ→ c_f,   (35)

where c_2(β_2) = (ρ_f, H̄)_{β_2}, with H̄ a Hamiltonian for which ρ_f is thermal at some inverse temperature β_2 (cf. the gauge note in Sec. III). This allows us to identify the heat associated with T with the quantity

Q = tr[H̄ (ρ_f − V ρ_i V†)].

The lower bound of Eq. (20) can then be used to establish a Clausius-type inequality for the discrete transformation (35), i.e.

β_2 Q ≤ ∆S,

where β_2 is the inverse temperature of the intermediate configuration c_2(β_2).
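A minimal numerical sketch of this construction follows; the two non-equilibrium states, the gauge choice β_2 = 1, and the random unitary are illustrative assumptions:

```python
import numpy as np

def logm_h(A):
    """Matrix logarithm of a positive-definite Hermitian matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.log(w)) @ V.conj().T

def entropy(rho):
    w = np.linalg.eigvalsh(rho)
    return float(-np.sum(w * np.log(w)))

# two generic full-rank states (illustrative; non-thermal for the given Hamiltonians)
rho_i = np.diag([0.5, 0.3, 0.2])
rho_f = np.diag([0.2, 0.45, 0.35])

# gauge beta_2 = 1: H_bar makes (rho_f, H_bar) a thermal configuration
beta_2 = 1.0
H_bar = -logm_h(rho_f) / beta_2

rng = np.random.default_rng(4)
V, _ = np.linalg.qr(rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3)))
rho_1 = V @ rho_i @ V.conj().T                   # state after the initial DUT
Q = np.trace(H_bar @ (rho_f - rho_1)).real       # heat of the DTT  c_1 -> c_2(beta_2)
dS = entropy(rho_f) - entropy(rho_i)
# discrete Clausius inequality: beta_2 * Q <= Delta S
```

The final DUQ changes only the Hamiltonian label of the configuration, so it contributes neither heat nor entropy change.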

B. Sequences of DUT+DTTs
The trajectories defined in Eq. (30) and Eq. (35) are just specific choices of discrete trajectories connecting two configurations c_i and c_f. We will now show that a Clausius inequality, i.e. inequalities of the type (34), holds for general discrete processes as long as they can be decomposed into a sequence of DUT+DTT steps. To show this, we first consider the discrete trajectory γ pictured in panel a of Fig. 4, which passes through one intermediate thermal configuration c_m(β_m) on its way from c_i(β_i) to c_f(β_f). In this scenario the following inequality for the entropy holds,

∆S ≥ β_m Q_1 + β_f Q_2,   (39)

where Q_1 and Q_2 are the heats of the two DUT+DTT steps, and where Eq. (34) was used for each of the two DUT+DTT transformations.
Inequality (39) can immediately be generalised to an arbitrary number of intermediate DUT+DTT steps connecting c_i(β_i) to c_f(β_f). Specifically, consider a generic discrete trajectory T composed of N consecutive DUT+DTT steps that pass through intermediate thermal configurations c_k(β_k), see Fig. 4. Then, by expressing the total entropy increment ∆S(ρ_i, ρ_f) as a sum of terms ∆S(ρ_k, ρ_{k+1}) associated with the various steps of T and applying Eq. (34) to each one of them, the Clausius inequality becomes

∆S(ρ_i, ρ_f) ≥ Σ_{k=0}^{N−1} β_{k+1} Q(c_k → c_{k+1}).   (40)

The generality of this derivation implies that sequences of discrete unitary and discrete thermalising transformations always fulfil a Clausius-type inequality.
To formulate this as a lemma, we introduce a useful discrete process quantity, Λ, for a DUT+DTT sequence,

Λ := Σ_{k=0}^{N−1} β_{k+1} Q(c_k → c_{k+1}),

which by Eq. (40) obeys

∆S(ρ_i, ρ_f) ≥ Λ.   (43)

The quantity Λ is the discrete analogue of the continuous expression ∫ δQ/T appearing in the Clausius inequality (14). The change of Λ upon adding an intermediate thermal configuration, see Fig. 5, is then strictly positive, provided the state of the inserted configuration does not coincide with that of its neighbours; this follows from Eq. (44) and the joint convexity of the relative entropy [28], with equality iff the inserted state coincides with a neighbouring one. We summarise this result in the following Lemma:

Lemma 2 Adding intermediate thermal configurations to a DUT+DTT sequence does not decrease Λ, and generically increases it strictly.

Having confirmed that for any given discrete trajectory it is possible to introduce intermediate steps such that the r.h.s. of Eq. (43) increases, the task is now to show that the entropy bound can be saturated by reiterating the procedure. The proof relies on lower bounding Λ and showing that the bound converges to the upper bound on Λ, Eq. (43), in the limit of infinitely many steps. The detailed derivation is given in Appendix B, proving the following theorem:

Theorem 1 Let T be a discrete trajectory connecting the initial Gibbs configuration c_i(β_i) = (ρ_i, H_i)_{β_i} to the final Gibbs configuration c_f(β_f) = (ρ_f, H_f)_{β_f}, see Fig. 4. Then a sequence of trajectories T_n exists, obtained from T by adding n intermediate thermal steps, which saturates the Clausius bound (40) in the asymptotic limit, i.e.

lim_{n→∞} Λ(T_n) = ∆S(ρ_i, ρ_f).
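The saturation can be illustrated numerically for a commuting (diagonal) qubit example, where every step is a DUQ followed by a DTT through interpolated thermal configurations. The endpoints, interpolation, and step counts below are illustrative assumptions:

```python
import numpy as np

def gibbs_p(E, beta):
    """Populations of a diagonal Gibbs state for energies E."""
    p = np.exp(-beta * (E - E.min()))
    return p / p.sum()

def entropy_p(p):
    return float(-np.sum(p * np.log(p)))

# endpoints: two thermal configurations of a qubit (illustrative parameters)
E0, E1 = np.array([0.0, 1.0]), np.array([0.0, 2.0])
b0, b1 = 1.5, 0.5

def discrete_Lambda(N):
    """Lambda = sum_k beta_{k+1} Q_k for N DUQ+DTT steps through interpolated thermal configs."""
    s = np.linspace(0.0, 1.0, N + 1)
    lam = 0.0
    p_k = gibbs_p(E0, b0)
    for k in range(N):
        E_k1 = (1 - s[k + 1]) * E0 + s[k + 1] * E1
        b_k1 = (1 - s[k + 1]) * b0 + s[k + 1] * b1
        p_k1 = gibbs_p(E_k1, b_k1)
        lam += b_k1 * float(np.dot(E_k1, p_k1 - p_k))   # beta_{k+1} * Q_k
        p_k = p_k1
    return lam

dS = entropy_p(gibbs_p(E1, b1)) - entropy_p(gibbs_p(E0, b0))
gap_coarse = dS - discrete_Lambda(2)     # positive: Clausius bound not saturated
gap_fine = dS - discrete_Lambda(256)     # shrinks as steps are added
```

The gap ∆S − Λ is a sum of per-step relative entropies of order 1/N², so it decays like 1/N as the trajectory is refined, in line with Theorem 1.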

D. Approximation of continuous processes by discrete processes
In the introduction we saw that for continuous processes, where consistent definitions of ρ(t) and of the local Hamiltonian H(t) can be assigned for all t, Eq. (5) defines the heat absorbed by the system. We have already discussed the difficulties of knowing ρ(t) and of identifying a proper local Hamiltonian H(t) for the system. However, in what follows we will assume that some "valid" continuous trajectory c(t) = (ρ(t), H(t)) ∈ C(H_S) is given for which Eqs. (5) and (6) apply. We now wish to identify a discrete trajectory that connects the same initial and final configurations as the continuous trajectory through a sequence of DUTs and DTTs, and which approximates the continuous heat. The analysis leads to the following theorem:

Theorem 2 For a continuous process between two configurations c_0 and c_τ that obeys the Clausius inequality, a discrete trajectory Γ exists that connects the same configurations and has exactly the same heat as the continuous process.
Proof: Consider an infinitesimal heat increment along the continuous trajectory,

δQ(t) = tr[H(t) (ρ(t) − ρ(t − dt))],   (48)

with ρ(t) and ρ(t − dt) being the density matrices of two infinitesimally separated configurations on the trajectory. Define the initial and final configurations for a discrete trajectory to be c_i = (ρ(t − dt), H(t − dt)) =: (ρ_1, H_1) and c_f = (ρ(t), H(t)) =: (ρ_2, H_2). To compare the continuous heat (48) with a discrete heat we need to identify a discrete trajectory, γ, connecting the same initial and final configurations as the continuous trajectory. One example is the sequence γ shown as a solid line in Fig. 6. Heat Q_1 is exchanged when passing from c_i to c̄_2(1) via a DUQ+DTT through the intermediate configuration c_1, see Fig. 6, and heat Q_2 is exchanged when passing from ĉ_2 to c_2(1). Therefore

Q(γ) = Q_1 + Q_2.   (52)

On the other hand, the continuous heat increment δQ(t) can be decomposed into two heat contributions,

δQ(t) = Q_1 − Q_f,   (53)

where Q_f would be the heat absorbed by the system if it passed from c_f to c̄_2(1) via a DTT.
To compare the continuous heat (53) with the discrete heat (52), we introduce a second discrete trajectory ω. This is a closed-loop sequence of DUQ and DTT transformations, see Fig. 6. In trajectory ω heat is exchanged from ĉ_2 → c_2(1) and from c_f → c̄_2(1), so Q(ω) = Q_2 + Q_f. As discussed in the previous sections, the discrete heat always obeys the Clausius inequality (34); with β = 1 for both steps and a vanishing entropy change around the closed loop, this implies

Q(ω) = Q_2 + Q_f ≤ 0

for trajectory ω. Using this in Eq. (52) we find that the heat associated with the discrete trajectory γ from c_i to c_f is a lower bound on the infinitesimal continuous heat for the same initial and final configurations, (48), i.e.

Q(γ) = Q_1 + Q_2 ≤ Q_1 − Q_f = δQ(t).
This can immediately be extended to the full continuous process: for any arbitrary continuous process between c_0 and c_τ there is always a discrete trajectory Γ between the same two configurations that has a lower heat than the continuous heat, Eq. (5). Moreover, by adding intermediate steps to the discrete trajectory Γ, resulting in a trajectory that passes through an infinite sequence of points c(t), it is possible to increase the associated heat, as shown in Sec. V B. From Theorem 1 it follows that if the continuous trajectory fulfils the Clausius inequality, then a discrete trajectory connecting the same initial and final configurations can be found that has the same heat as the continuous trajectory.

VI. THERMAL EFFICIENCY
The last piece in our analysis of the energy balance in discrete quantum processes is to determine the efficiency of a discrete cyclic process, such as the one depicted in Fig. 7,

c_1(β_1) —DUQ→ c_3 —DTT→ c_2(β_2) —DUQ→ c_4 —DTT→ c_1(β_1),

where c_1(β_1) = (ρ_1, H_1)_{β_1} and c_2(β_2) = (ρ_2, H_2)_{β_2} are equilibrium configurations, while c_3 = (ρ_1, H_2) and c_4 = (ρ_2, H_1) are not. This results in the following Lemma:

Lemma 3 The thermal efficiency of the discrete cycle depicted in Fig. 7 is bounded by the Carnot efficiency, and the optimal efficiency is achievable.
Proof: For the overall loop the entropy change is zero. Applying the Clausius inequality (34) to the two heat-exchanging DTT processes, c_3 → c_2(β_2) and c_4 → c_1(β_1), and summing the two contributions, the vanishing total entropy change implies

β_2 Q(c_3 → c_2) + β_1 Q(c_4 → c_1) ≤ 0.

Hence at least one of the two heats must be negative. Let us assume for instance that the heat exchanged with the thermal reservoir at temperature T_2 = 1/(k_B β_2) is positive, Q(c_3 → c_2) > 0, while the other heat is negative, Q(c_4 → c_1) < 0 (other scenarios can be treated analogously, see below). The total heat absorbed per cycle is

Q(c_1 → c_1) = Q(c_3 → c_2) + Q(c_4 → c_1),   (60)

with the energy balance implying that the overall absorbed heat must be equal to the negative of the work done on the system during the cycle, W(c_1 → c_1) = −Q(c_1 → c_1). The thermal efficiency is defined as the ratio between the work performed by the system and the heat absorbed, leading to

η := −W(c_1 → c_1)/Q(c_3 → c_2) = 1 + Q(c_4 → c_1)/Q(c_3 → c_2) ≤ 1 − T_1/T_2,   (63)

where we used Eq. (60) and the inequality above. If T_1 < T_2, the system absorbs heat from the higher-temperature bath and gives heat to the lower-temperature bath. The efficiency η is then positive and smaller than unity, with the optimal efficiency reproducing the Carnot efficiency. The optimal efficiency can be reached by augmenting the discrete trajectory to saturate the equality in the Clausius inequality, see Theorem 1.
Remark: If instead T_2 < T_1, i.e. heat is absorbed from a lower-temperature bath and given to one at a higher temperature, the system operates as a refrigerator. In this case the total work absorbed by the system is positive, W(c_1 → c_1) ≥ Q(c_3 → c_2)(T_1/T_2 − 1) > 0. The efficiency of the process can be measured by the coefficient of performance, COP_cooling, defined as the ratio between the heat absorbed from the cold reservoir T_2 (i.e. Q(c_3 → c_2)) and the total work done on the system,

COP_cooling = Q(c_3 → c_2)/W(c_1 → c_1) ≤ T_2/(T_1 − T_2). (64)

Finally, if the signs of the heats Q(c_3 → c_2) and Q(c_4 → c_1) are interchanged, the above argument still holds, with Eqs. (63) and (64) replaced by the inequalities η ≤ 1 − T_2/T_1 and COP_cooling ≤ T_1/(T_2 − T_1), respectively.
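A companion sketch for the refrigerator regime, again with illustrative qubit parameters chosen by us (bath 2 is now the cold bath, T_2 < T_1), checks the coefficient of performance against the ideal value T_2/(T_1 − T_2):

```python
import numpy as np

def thermal_state(H, beta):
    """Gibbs state exp(-beta*H)/Z (k_B = 1)."""
    w, v = np.linalg.eigh(H)
    p = np.exp(-beta * w)
    p /= p.sum()
    return (v * p) @ v.conj().T

# Illustrative parameters (our choice): bath 2 is the *cold* bath.
H1 = np.diag([0.0, 2.0])
H2 = np.diag([0.0, 0.5])
T1, T2 = 2.0, 1.0
rho1 = thermal_state(H1, 1.0 / T1)
rho2 = thermal_state(H2, 1.0 / T2)

Q_cold = np.trace(H2 @ (rho2 - rho1)).real   # heat drawn from cold bath, c3 -> c2
Q_hot  = np.trace(H1 @ (rho1 - rho2)).real   # heat dumped into hot bath, c4 -> c1
W_in = -(Q_cold + Q_hot)                     # work done on the system per cycle

cop = Q_cold / W_in
cop_carnot = T2 / (T1 - T2)
print(f"W_in={W_in:.4f} > 0, COP={cop:.4f} <= {cop_carnot}")
```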

VII. CONCLUSIONS
The early development of thermodynamics, culminating in the formulation of the second law, also gave birth to a new quantity, the entropy, whose physical meaning was at first opaque. Only later was its meaning elucidated by the works of Boltzmann and others. In this paper we proposed to turn the original argument around and use the well-established notion of entropy, which characterises the information content in a (quantum) state, to motivate the definition of a notion of heat for discrete quantum processes. The approach circumvents the large cluster of problems surrounding the idea of a unique definition of heat and work in processes where (i) the Hamiltonian of the system is not well-defined due to the open nature of the system, and (ii) the notion of a trajectory is fundamentally limited, with full knowledge available only at the discrete points in time at which a measurement with a specific Hamiltonian occurred.
By introducing thermodynamic configurations, identifying two primitives for discrete processes, DUTs and DTTs, and defining heat to pertain only to DTTs, we were able to uncover a general second law valid for any discrete process consisting of sequences of DUT+DTTs between both equilibrium and non-equilibrium configurations. Moreover, we showed that an infinite sequence of DUT+DTT processes exists that saturates the Clausius inequality. In other words, saturation occurs when a discrete trajectory is mapped out into a continuous one by a sequence of measurements that are infinitely close together. This provides a link between reversibility -here the reversibility of a discrete process -and the equality in the second law for discrete processes, reminiscent of Clausius' statement of equality for thermodynamically reversible continuous processes, Eq. (12). On the other hand, we also showed that for any continuous process between two configurations that obeys the Clausius inequality, there exists a discrete process between the same configurations with the same heat. Finally, we showed that for the discrete version of a thermodynamic cycle, formed by a discrete trajectory passing through four configurations, Carnot's efficiency is recovered.
The strength of our approach is to give meaning to heat and work, reversibility and efficiency following from just a few sensible and simple definitions. In some respects this is analogous to the axiomatic approach to thermodynamics first developed by Carathéodory [31]. We hope that the presented analysis will inspire discussions and future work on characterising heat and work in quantum processes. Of course many open questions remain. One direction of particular relevance is clearly the identification of a proper metric in configuration space, which would allow one to quantify, in a precise and (hopefully) operationally well-defined way, how distant two generic discrete trajectories are.
In other words, equality in Eq. (30) requires that no DTT enters the process, so that ∆S(ρ_i, ρ_f) = 0. For any non-trivial DTT a finite gap between the l.h.s. and the r.h.s. of Eq. (34) exists. (This is not true, however, for sequences of DUT+DTT transformations as considered in Sec. V C.) To determine the minimum/maximum gap we require the maximum/minimum heat, with respect to all possible DUTs, of a DUT+DTT process connecting two thermal configurations c_i(β_i) = (ρ_i, H_i)_{β_i} and c_f(β_f) = (ρ_f, H_f)_{β_f}, see Fig. 8, i.e.

Q_max/min(c_i → c_f) = tr[ρ_f H_f] − min/max_V tr[V ρ_i V† H_f], (A1)
where the maximisation/minimisation is taken over all unitary transformations V . This task is solved with the following Lemma.
Lemma 4 Let A = Σ_j α_j |α_j⟩⟨α_j| and B = Σ_j β_j |β_j⟩⟨β_j| be Hermitian operators on a d-dimensional Hilbert space H, with eigenvalues ordered in decreasing order (i.e. α_j ≥ α_{j+1}, β_j ≥ β_{j+1}). Then the minimum of tr[A V B V†] over the set of unitary transformations is achieved by the unitary V_min which maps the eigenvectors {|β_j⟩} of B into the eigenvectors {|α_j⟩} of A in such a way that

V_min |β_j⟩ = |α_{d+1−j}⟩, (A2)

i.e. the eigenvector of B with the largest eigenvalue is mapped into the eigenvector of A with the smallest eigenvalue. As a consequence the minimum expectation value is

tr[A V_min B V_min†] = Σ_j α_{d+1−j} β_j. (A3)

Consequently, the maximum expectation value, achieved by the unitary V_max with

V_max |β_j⟩ = |α_j⟩, (A4)

is

tr[A V_max B V_max†] = Σ_j α_j β_j. (A5)

Proof: These minimum and maximum expectation values are a trivial consequence of Theorem 2 of Ref. [17].
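Lemma 4 is easy to probe numerically. The sketch below (our own construction, with a fixed random seed) builds V_min explicitly from the eigenbases of two random Hermitian matrices and checks that tr[A V B V†] stays between Σ_j α_j β_{d+1−j} and Σ_j α_j β_j for random unitaries:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 4

def rand_herm(d):
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    return (X + X.conj().T) / 2

def rand_unitary(d):
    # QR of a complex Gaussian matrix, column phases fixed.
    X = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    Q, R = np.linalg.qr(X)
    ph = np.diagonal(R) / np.abs(np.diagonal(R))
    return Q * ph

A, B = rand_herm(d), rand_herm(d)
a, Ua = np.linalg.eigh(A)          # eigh returns ascending eigenvalues
b, Ub = np.linalg.eigh(B)
a, Ua = a[::-1], Ua[:, ::-1]       # reorder to decreasing, as in Lemma 4
b, Ub = b[::-1], Ub[:, ::-1]

lower = np.sum(a[::-1] * b)        # sum_j alpha_{d+1-j} beta_j, Eq. (A3)
upper = np.sum(a * b)              # sum_j alpha_j beta_j, Eq. (A5)

# V_min maps |beta_j> to |alpha_{d+1-j}> (largest beta -> smallest alpha).
V_min = Ua[:, ::-1] @ Ub.conj().T
assert np.isclose(np.trace(A @ V_min @ B @ V_min.conj().T).real, lower)

for _ in range(200):
    V = rand_unitary(d)
    val = np.trace(A @ V @ B @ V.conj().T).real
    assert lower - 1e-9 <= val <= upper + 1e-9
print("bounds verified:", round(lower, 4), "<= tr[A V B V+] <=", round(upper, 4))
```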
Application of Eqs. (A3) and (A5) gives the minimum and maximum heat for the DUT+DTT process (30) in terms of the eigenvalues {H_f(k)}_k and {H_i(k)}_k of H_f and H_i, ordered in decreasing order.
∆S(ρ_k, ρ_{k+1}) = lim_{n→∞} Λ(T_{k;n}). (B7)

In other words, by augmenting the intermediate points of the trajectory T_{k;n}, which connects c_k(β_k) and c_{k+1}(β_{k+1}), we can saturate the associated Clausius inequality for the k-th step of the trajectory T. By repeating the same procedure for each of the steps of T, a new trajectory emerges as the union of the individual sequences T_{k;n_k}, where n = (n_1, n_2, ..., n_{N−1}) is the multidimensional variable. Lemma 1 then follows from the additivity of Λ (42) and by taking the limit of each n_k → ∞ in (B7). However, it is still possible to find Gibbs configurations whose density matrices are arbitrarily close to ρ.
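The refinement limit (B7) can be illustrated numerically. In the sketch below (our own illustration; the qubit parameters and the linear interpolation of β and the level splitting ε are arbitrary choices) a path of thermal configurations is discretised into n DUT+DTT steps, Λ = Σ_k β_{k+1} Q_k is evaluated with Q_k = tr[H_{k+1}(ρ_{k+1} − ρ_k)], and Λ is seen to approach ∆S from below as n grows (k_B = 1):

```python
import numpy as np

def thermal_probs(beta, eps):
    """Occupations of a qubit with levels (0, eps) at inverse temperature beta."""
    z = 1.0 + np.exp(-beta * eps)
    return np.array([1.0 / z, np.exp(-beta * eps) / z])

def entropy(p):
    return -np.sum(p * np.log(p))

def Lambda(n, b0=1.0, b1=0.5, e0=1.0, e1=1.5):
    """Sum_k beta_{k+1} Q_k along an n-step discretised path (k_B = 1)."""
    t = np.linspace(0.0, 1.0, n + 1)
    betas = b0 + (b1 - b0) * t
    eps = e0 + (e1 - e0) * t
    total = 0.0
    for k in range(n):
        p_now = thermal_probs(betas[k], eps[k])
        p_next = thermal_probs(betas[k + 1], eps[k + 1])
        Q = eps[k + 1] * (p_next[1] - p_now[1])   # tr[H_{k+1}(rho_{k+1}-rho_k)]
        total += betas[k + 1] * Q
    return total

dS = entropy(thermal_probs(0.5, 1.5)) - entropy(thermal_probs(1.0, 1.0))
for n in (4, 16, 64):
    print(n, dS - Lambda(n))   # Clausius gap: positive, shrinks with refinement
```

Each step obeys β_{k+1} Q_k = ∆S_k − S(ρ_k‖ρ_{k+1}) ≤ ∆S_k, so the gap ∆S − Λ is the accumulated relative entropy, which vanishes as the partition is refined.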
[36] c_m(β_m) is a well-defined thermal configuration as its state is by construction full rank.
[37] A proper definition of c_2(1) requires ρ_f to be full rank. If this is not the case one can still define a trajectory (49) with c_f replaced by a full-rank configuration which can be chosen to be arbitrarily close to c_f.
[38] The symmetric sum of relative entropies S(ρ_{k+1}‖ρ_k) + S(ρ_k‖ρ_{k+1}) diverges if and only if the kernel of ρ_k or ρ_{k+1} has a non-trivial overlap with the support of the other state [28]. Since ρ_k and ρ_{k+1} are full rank, neither condition can be fulfilled and the symmetric sum of relative entropies is always finite.
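The claim of footnote [38] is simple to verify for commuting qubit states (our own illustration): for full-rank states the symmetric sum is finite, while a rank-deficient state makes one of the two terms diverge.

```python
import numpy as np

def rel_entropy(p, q):
    """Classical relative entropy S(p||q) for diagonal (commuting) states.
    Returns inf when supp(p) overlaps ker(q); 0*log(0/q) is taken as 0."""
    s = 0.0
    for pi, qi in zip(p, q):
        if pi == 0.0:
            continue
        if qi == 0.0:
            return np.inf
        s += pi * np.log(pi / qi)
    return s

p = np.array([0.7, 0.3])      # full rank
q = np.array([0.4, 0.6])      # full rank
sym = rel_entropy(p, q) + rel_entropy(q, p)
print("full rank, symmetric sum:", sym)      # finite

r = np.array([1.0, 0.0])      # rank deficient: ker(r) overlaps supp(q)
print("S(q||r) =", rel_entropy(q, r))        # diverges
```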