# Second Law of Thermodynamics

Computational Thermodynamics

# Abstract

We present a deterministic continuum mechanics foundation of thermodynamics for slightly viscous fluids or gases based on a 1st Law in the form of the Euler equations expressing conservation of mass, momentum and energy, and a 2nd Law formulated in terms of kinetic energy, internal (heat) energy, work and shock/turbulent dissipation, without reference to entropy.

Heat, a quantity which functions to animate, derives from an internal fire located in the left ventricle. (Hippocrates, 460 B.C.)

## Thermodynamics

Thermodynamics is fundamental in a wide range of phenomena from macroscopic to microscopic scales. It essentially concerns the interplay in a fluid or gas between kinetic energy and heat energy, also referred to as internal energy. Kinetic energy, or mechanical energy, may generate heat/internal energy by compression or turbulent dissipation. Heat energy may generate kinetic energy by expansion, but not through a reverse process of turbulent dissipation.

Newcomen's engine from 1712 for converting heat energy to mechanical energy.

## Turbulent compressible flow

Thermodynamics concerns primarily compressible slightly viscous flow. Slightly viscous flow is in general turbulent and therefore

• thermodynamics concerns the interplay between kinetic energy and heat energy in turbulent flow.

Turbulence is considered the main unsolved problem of mechanics. Since thermodynamics is closely coupled to turbulence, thermodynamics too can be seen as a main unsolved problem of mechanics. But there is hope:

### Computational thermodynamics

Computational thermodynamics offers a new foundation of thermodynamics based on simulation of turbulent fluid flow in the form of

• computational solution of the Euler and Navier-Stokes equations

• by stabilized finite element methods with a posteriori error estimation.

Basic facts are:

• heat energy is small-scale microscopic kinetic energy
• turbulence is irreversible conversion of large-scale macroscopic kinetic energy into small-scale kinetic energy in the form of heat energy.

## Joule’s 1845 experiment

Basic aspects of thermodynamics can be illustrated in Joule's experiment from 1845 with a gas compressed to high pressure (20 atmospheres) in a chamber R connected to another empty chamber L through a tube with an initially closed valve, both containers being submerged in a larger container filled with water. At the initial time the valve was opened and the gas was allowed to expand into the double volume R+L, while the temperature change in the water was carefully measured by Joule. The experiment can be followed in movies from a computational simulation of density and momentum, with snapshots below of density and temperature in a model with cubical chambers.

Density at two time instants; temperature at two time instants.

We see

• gas flowing through the valve from R into L
• the gas is put into motion by the high pressure in R and thus picks up kinetic energy
• since the total energy (sum of kinetic energy and heat energy) is constant, the temperature drops in R
• as the gas expands into L turbulence develops and shocks bounce back and forth in R
• eventually the gas comes to rest in the double volume R+L at initial temperature
• the turbulent motion is converted into heat as the gas comes to rest.

We can follow the initial drop of temperature in R as the gas is put into motion, picking up macroscopic kinetic energy, which eventually is converted to heat in turbulent dissipation as the gas comes to rest in R+L with the temperature back to its initial value.

Temperature in R (short time); total kinetic energy; kinetic energy in L.

We thus see an interplay between heat energy (microscopic kinetic energy) and macroscopic kinetic energy, where first heat energy is converted to macroscopic kinetic energy, which in turbulent dissipation is converted back to heat energy. In short, the gas expands from rest to double volume, where it comes to rest at the initial temperature.

### Irreversible expansion

A basic problem of thermodynamics is to show that the expansion to double volume is irreversible: the gas can expand itself from R to R+L, but cannot compress itself from R+L back to R. We shall show below that computational thermodynamics offers a solution based on a combination of
• finite precision computation
• instability of slightly viscous flow generating turbulence
which shows that
• macroscopic kinetic energy can generate microscopic kinetic energy by turbulent dissipation
• microscopic kinetic energy can only generate macroscopic kinetic energy in expansion
• an inverse process of turbulent dissipation is impossible because infinite precision would be required to coordinate microscopic motion into macroscopic motion.

The basic principle can be stated:

• breaking into pieces is possible with low precision
• the reverse process of putting pieces together requires infinite precision.

It follows that compression from rest is impossible: because of finite precision, a gas at rest can only be put into motion by expansion (in an isolated system without external forces).

## The laws of thermodynamics

### 1st Law of thermodynamics

The 1st Law states (for an isolated system) conservation of total energy, with the total energy being equal to the sum of kinetic and heat/internal energy.

### 2nd Law of thermodynamics

The 2nd Law has the form of an inequality dS ≥ 0 for a scalar quantity named entropy, denoted by S, with dS denoting change thereof, supposedly expressing a basic feature of real thermodynamic processes. The classical 2nd Law states that the entropy cannot decrease; it may stay constant or it may increase, but it can never decrease (for an isolated system).

The role of the 2nd Law is to give a scientific basis to the many observations of irreversible processes, that is, processes which cannot be reversed in time, like running a movie backwards. Time reversal of a process with strictly increasing entropy would correspond to a process with strictly decreasing entropy, which would violate the 2nd Law and therefore could not occur. A perpetuum mobile would represent a reversible process, and so the role of the 2nd Law is in particular to explain why it is impossible to construct a perpetuum mobile, and why time moves forward in the direction of an arrow of time, as expressed by Max Planck:

• Were it not for the existence of irreversible processes, the entire edifice of the 2nd Law would crumble.

## The enigma

Those who have talked of chance are the inheritors of antique superstition and ignorance…whose minds have never been illuminated by a ray of scientific thought. (T. H. Huxley)

While the equality of the 1st Law can be viewed as a definition of heat energy, the nature of the inequality of the 2nd Law posed a main challenge to the scientists of the late 19th century with the following basic questions:
• If the 2nd Law is a new independent law of Nature, how can it be justified?
• What is the physical significance of that quantity named entropy, which Nature can only get more of and never get rid of, like a steadily accumulating heap of waste? What mechanism prevents Nature from recycling entropy?

### Statistical mechanics

After much struggle, agony and debate, the physics community has come to agree that statistical mechanics, developed by Boltzmann, offers a rationalization of the classical 2nd Law in the form of a tendency of (isolated) many-particle systems to move from less probable towards more probable states, or from more ordered to less ordered states. Statistical mechanics is based on an assumption of molecular chaos for a particle model of a dilute gas of elastically colliding molecules: that molecules have statistically independent velocities before collision. From the statistical particle model Boltzmann derived a deterministic continuum model in the form of Boltzmann's equation, and then formally proved that solutions satisfy an entropy inequality, referred to as the H-theorem, as a consequence of a positivity property of the collision term resulting from the assumption of molecular chaos. Increasing entropy would then represent increasing disorder, and the H-theorem would reflect the eternal pessimist's idea that things always get more messy, and that there is really no limit to this, except when everything is as messy as it can ever get. Of course, experience could give (some) support to this idea, but the trouble is that it prevents things from ever becoming less messy or more structured, and thus may seem a bit too pessimistic.

No doubt, it would seem to contradict the many observations of emergence of ordered non-organic structures (like crystals, waves and cyclones) and organic structures (like DNA and human beings), seemingly out of disordered chaos, as evidenced by the physics Nobel Laureate Robert Laughlin. Most trained thermodynamicists would here say that emergence of order out of chaos in fact does not contradict the classical 2nd Law, because it concerns “non-isolated systems”. But they would probably insist that the Universe as a whole (an isolated system) would steadily evolve towards a “heat-death” with maximal entropy/disorder (and no life), thus fulfilling the pessimist's expectation. The question of where the initial order came from would, however, be left open.

The basic objective of statistical mechanics as the basis of classical thermodynamics is to (i) give entropy a physical meaning, and (ii) motivate its tendency to (usually) increase. Before statistical mechanics, the 2nd Law was viewed as an experimental fact which could not be rationalized theoretically. The classical view on the 2nd Law is thus either as a statistical law of large numbers or as an experimental fact, both without a rational deterministic mechanistic theoretical foundation.

### Explaining Joule's experiment by statistical mechanics

In statistical mechanics the dynamics of the expansion process would be dismissed and only the initial and final state would be subject to analysis. The final state would then be viewed as being “less ordered” or “more probable” or having “higher entropy”, because the gas would occupy a larger volume, and the reverse process, with the gas contracting back to the initial small volume, if not completely impossible, would be “improbable”. But to say that a probable state is more probable than an improbable state is more mystifying than informative. In short, statistical mechanics offers a very confusing analysis of Joule's basic experiment.

### Does anybody understand thermodynamics?

The problem with thermodynamics based on statistical mechanics in this form is that it is understood by very few:

• Every mathematician knows it is impossible to understand an elementary course in thermodynamics. (V. Arnold)
• …no one knows what entropy is, so if you in a debate use this concept, you will always have an advantage. (von Neumann to Shannon)
• As anyone who has taken a course in thermodynamics is well aware, the mathematics used in proving Clausius' theorem (the 2nd Law) is of a very special kind, having only the most tenuous relation to that known to mathematicians. (S. Brush)
• Where does irreversibility come from? It does not come from Newton's laws. Obviously there must be some law, some obscure but fundamental equation, perhaps in electricity, maybe in neutrino physics, in which it does matter which way time goes. (Feynman)
• For three hundred years science has been dominated by a Newtonian paradigm presenting the World either as a sterile mechanical clock or in a state of degeneration and increasing disorder…It has always seemed paradoxical that a theory based on Newtonian mechanics can lead to chaos just because the number of particles is large, and it is subjectively decided that their precise motion cannot be observed by humans… In the Newtonian world of necessity, there is no arrow of time. Boltzmann found an arrow hidden in Nature's molecular game of roulette. (Paul Davies)
• The goal of deriving the law of entropy increase from statistical mechanics has so far eluded the deepest thinkers. (Lieb)
• There are great physicists who have not understood it. (Einstein about Boltzmann's statistical mechanics)
• Yet, when we went with the odds and imagined that everything popped into existence by a statistical fluke, we found ourselves in a quagmire: that route called into question the laws of physics themselves. And so we are inclined to buck the bookies and go with a low-entropy big bang as the explanation of the arrow of time. The puzzle then is to explain how the universe began in such an unlikely, highly ordered configuration. That is the question to which the arrow of time points. It all comes down to cosmology. (Greene)

The standard presentation of thermodynamics based on the 1st and 2nd Laws thus involves a mixture of formal axiomatics, statistical particle models and deterministic continuum mechanics models (Boltzmann's equation with the H-theorem), which admittedly makes it difficult to learn, teach and apply, despite its great importance. The confusion is witnessed in the 2007 symposium Meeting the Entropy Challenge.

One reason is that the question of why necessarily dS ≥ 0 and never dS < 0 is not given a convincing, understandable answer. In fact, statistical mechanics allows dS < 0, although this is claimed to be very unlikely.

Another reason is that Boltzmann's equation is very difficult to use for predictions, because analytical solutions are lacking and computational solution is very costly, since not only position (and time) but also the particle velocities serve as independent variables. Further, the focus is on equilibrium states, leaving out dynamics.

Statistical mechanics also suffers from inconsistencies, such as Gibbs' paradox and Loschmidt's paradox, which have never been properly resolved. And to suggest that the origin of order is the Big Bang is just the bang of empty barrels.

## Euler and Navier-Stokes equations

Averaging over velocities in Boltzmann’s equation one can formally derive a deterministic fluid mechanics model of thermodynamics in the form of the Navier-Stokes equations for a compressible gas expressing conservation of mass, momentum and total energy as a system of partial differential equations with position and time as independent variables, together with constitutive laws defining pressure, viscous forces and diffusive heat fluxes in terms of mass density, velocity and internal energy, combined with initial and boundary conditions. In a perfect gas the pressure is proportional to the internal energy. With vanishing viscosity and heat conductivity the Navier-Stokes equations reduce to the Euler equations.

Solutions of the Navier-Stokes equations for a perfect gas formally satisfy an inequality of the form dS > 0 for a specific entropy S defined in terms of mass density and internal energy, as a consequence of positive viscous dissipation. Conversely, dS > 0 requires positive viscous dissipation. Thus a 2nd Law with dS > 0 and positive viscous dissipation can be viewed as equivalent, and the basic open problem is to motivate one or the other without ultimate resort to statistical mechanics. Euler solutions formally satisfy dS = 0 as a consequence of zero viscous dissipation and thus do not seem to contain a 2nd Law dS > 0.
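The formal mechanism behind dS > 0 can be sketched as follows (a standard identity for compressible Navier-Stokes flow, written here with heat conduction omitted for simplicity): the specific entropy S satisfies a transport equation driven by the non-negative viscous dissipation Φ,

```latex
% Formal entropy production for compressible Navier-Stokes flow
% (heat conduction omitted), with density \rho, temperature T > 0
% and viscous dissipation \Phi \ge 0:
\rho\,T\,\frac{DS}{Dt} = \Phi
\quad\Longrightarrow\quad
\frac{DS}{Dt} = \frac{\Phi}{\rho T} \ge 0 ,
```

so dS > 0 exactly when Φ > 0, in line with the equivalence stated above.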

A further open problem is the existence and uniqueness of solutions to the Euler or Navier-Stokes equations, which is one of the seven Clay Mathematics Institute Millennium Problems.

Altogether, thermodynamics seems to lack a constructive deterministic mathematical basis, principally because both the origin of viscosity in Boltzmann/Euler/Navier-Stokes equations and the existence/uniqueness of solutions to these equations represent open problems. Simply assuming the presence of viscous effects and solutions to exist, is not scientifically satisfactory.

## Computational thermodynamics: EG2

In the spirit of Dijkstra, we thus view EG2 as a constructive mathematical model of thermodynamics in the form of the Euler equations together with a computational solution procedure, to be distinguished from a formal model consisting of the Navier-Stokes equations without a constructive solution procedure, simply assuming positive viscous dissipation and solutions to exist. The Euler equations are combined with slip boundary conditions at solid walls, allowing fluid particles to slide along the boundary without friction (or more generally a friction boundary condition with small friction), motivated by the computationally and experimentally verified fact that the skin friction of a turbulent boundary layer tends to zero with viscosity and thus is small for slightly viscous flow. You find more information on slides from a lecture.

The least squares stabilization in EG2 penalizes large Euler residuals by viscosity acting in the streamline direction as a form of bulk viscosity of order h, and is complemented by a residual dependent isotropic shear viscosity of order h^2, where h is the mesh size.

Using EG2 as a model of thermodynamics changes the questions and answers and opens new possibilities of progress, together with new challenges to mathematical analysis and computation. The basic new feature is that EG2 solutions are constructed/computed and thus are available for inspection. This means that the analysis of solutions shifts from a priori to a posteriori: after solutions have been computed, they can be inspected and their qualities can be evaluated.

We discover that EG2 solutions are turbulent and have shocks, both phenomena being identified by pointwise large (but weakly small) Euler residuals, reflecting non-existence of pointwise solutions to the Euler equations. We observe turbulence in EG2 solutions with slip boundary conditions, and conclude that turbulence does not necessarily originate from viscous boundary layers with no-slip boundary conditions, contrary to state-of-the-art boundary layer theory by Prandtl and Schlichting.

### Wellposedness

EG2 solutions exist because they are constructed. Uniqueness relates to wellposedness in the sense of Hadamard, which concerns what aspects or outputs of EG2 turbulent/shock solutions are stable under perturbations in the sense that small perturbations have small output effects. In particular, a wellposed output converges under decreasing mesh size. We show that wellposedness of EG2 solutions can be tested a posteriori by computationally solving a dual linearized problem, through which the output sensitivity of non-zero Euler residuals can be estimated. We find that mean-value outputs such as drag and lift and total turbulent dissipation are wellposed, while point-values of turbulent flow are illposed.

We can thus a posteriori assess the quality of EG2 solutions as solutions of the Euler equations and identify which outputs are wellposed and converge with decreasing mesh size. We prove that EG2 solutions satisfy a basic form of the 2nd Law formulated without the concept of entropy, and we discover a connection between the 2nd Law and wellposedness, with satisfaction of the 2nd Law being necessary for wellposedness of some outputs.

### EG2 as LES with automatic turbulence model

EG2 can be viewed as a form of Large Eddy Simulation (LES) for slightly viscous flow, with an automatic computational turbulence model given by the least squares stabilization for interior turbulence and the slip boundary condition for turbulent boundary layers. The stabilization amounts to a substantial loss of kinetic energy in turbulent EG2 solutions with pointwise large Euler residuals.

### From probable to necessary

The stabilization is necessary because without stabilization EG2 solutions cease to exist, while physical flows continue to exist even after transition to turbulence. The positivity of viscous dissipation thus comes from the necessity to avoid break-down or blowup, and is thus forced upon the model. The necessity comes from the fact that solutions necessarily become turbulent, as a result of an inherent instability of slightly viscous flow, as we shall see.

We find that EG2 is a computationally affordable, useful model for slightly viscous (compressible and incompressible) flow, because outputs of interest such as drag and lift can be computed using the automatic turbulence model, without resolving boundary layers and interior turbulence to physical scales. For example, EG2 allows accurate prediction of the drag/lift of a car or airplane with millions of mesh points, instead of the trillions required according to state-of-the-art, which is way beyond the capacity of any foreseeable computer.

Altogether, EG2 is a model of the thermodynamics of slightly viscous flow, for which the basic questions of the (i) origin of positive viscous dissipation and (ii) existence/wellposedness, have constructive answers.

## The new 2nd Law

EG2 satisfies a 2nd Law formulated without the concept of entropy, in terms of the basic physical quantities of kinetic energy K, heat energy E, rate of work W and shock/turbulent dissipation D > 0. We refer to this law as the new 2nd Law, which reads

(1)               dK/dt = W − D,    dE/dt = −W + D.

Here K is the total kinetic energy as the integral in space of the pointwise kinetic energy, with E, W and D defined similarly. Slightly viscous flow always develops turbulence/shocks with D > 0, and the 2nd Law thus expresses an irreversible transfer of kinetic energy into heat energy, while the total energy E + K remains constant because

(2)               dE/dt + dK/dt = 0.

With the 2nd Law in the form (1), we avoid the (difficult) main task of statistical mechanics of specifying the physical significance of entropy and motivating its tendency to increase by probabilistic considerations based on (tricky) combinatorics.
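As a minimal numerical sketch of (1) and (2) (a toy model with made-up rate functions, not an EG2 computation), one can integrate the two energy equations and observe the Joule-type scenario: heat generates kinetic energy through work, turbulent dissipation returns it to heat, and the total energy never changes.

```python
import math

# Toy integration of the new 2nd Law (1):
#   dK/dt = W - D,  dE/dt = -W + D,
# with an illustrative decaying work rate W(t) and a dissipation
# D = c*K proportional to the available kinetic energy (both made up).
def W(t):
    return math.exp(-t)       # rate of work, driving motion early on

c = 2.0                        # illustrative dissipation coefficient
K, E = 0.0, 1.0                # gas starts at rest: all energy is heat
t, T, dt = 0.0, 5.0, 1.0e-4
while t < T:
    D = c * K                  # shock/turbulent dissipation, D >= 0
    dK = (W(t) - D) * dt
    K, E = K + dK, E - dK      # (2): dE/dt + dK/dt = 0 exactly
    t += dt

# K rises from 0, peaks, and decays back toward 0, while E + K = 1 throughout
```

The gas starts and ends (nearly) at rest with its heat energy restored, the turbulent dissipation having returned the kinetic energy to heat, mirroring the Joule scenario above.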

### Compression and expansion

The work W is positive in expansion and negative in compression, since

W = p∇ · u,

where p is the pressure, u is the velocity and ∇ · u is the divergence of u, which is positive in expansion and negative in compression. It follows, as stated above, that the kinetic energy can only increase in expansion (without forcing). The new 2nd Law thus shows that the expansion in Joule's experiment is irreversible.
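The argument can be recorded as a one-line consequence of (1) (our restatement): at a state of rest K = 0, and since K ≥ 0 by definition, motion can only start if dK/dt > 0, which forces net expansion:

```latex
% With K \ge 0, K = 0 at rest, and D \ge 0, the new 2nd Law (1) gives:
\left.\frac{dK}{dt}\right|_{K=0} > 0
\;\Longrightarrow\; W > D \ge 0
\;\Longrightarrow\; W = \int_\Omega p\,\nabla\cdot u\,dx > 0 ,
```

that is, setting a gas in motion from rest requires W > 0, so compression from rest (W < 0) is impossible without external forcing.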

### Ockham’s razor

Using Ockham's razor, we rationalize a scientific theory of major importance, making it both more understandable and more useful. The new 2nd Law is closer to classical Newtonian mechanics than the 2nd Law of statistical mechanics, and thus can be viewed as more fundamental.

The new 2nd Law is a consequence of the 1st Law in the form of the Euler equations, combined with EG2 finite precision computation effectively introducing viscosity and viscous dissipation. These effects appear as a consequence of the non-existence of pointwise solutions to the Euler equations, reflecting instabilities leading to the development of shocks and turbulence, in which large-scale kinetic energy is transferred to small-scale kinetic energy in the form of heat energy.

The viscous dissipation can be interpreted as a penalty on pointwise large Euler residuals arising in shocks/turbulence and resulting in a loss of kinetic energy, with the penalty being directly coupled to the violation, following a principle of criminal law.

The positivity of the turbulent dissipation reflects that the penalty is positive, which is a necessity for the existence of a solution. In short, the show must go on, which is possible by paying a penalty on pointwise large Euler residuals, while with zero (or negative) penalty, solutions cease to exist in blowup to infinity of gradients, which is physically inadmissible. EG2 thus explains the 2nd Law as a consequence of the non-existence of pointwise solutions with small Euler residuals, and not of an ad hoc assumption that “there is always some small viscosity of some sort or the other”, which is often put forward. This offers an understanding of the emergence of irreversible solutions of the formally reversible Euler equations. If pointwise solutions had existed, they would have been reversible without dissipation, but they do not exist, and the existing computational solutions suffer from turbulent dissipation and thus are irreversible.

### Emergence

The classical 2nd Law is often described as a general tendency of heat to spread or of temperature gradients to decrease, in a steady march to a heat death with uniform temperature, or more generally a tendency in physical processes of decreasing differences with increasing time. In this form the classical 2nd Law seems to contradict all forms of emergence of ordered structures in the form of crystals, waves and life, characterized by increasing difference. The new 2nd Law does not contradict emergence of ordered structures; it only states that increasing difference and creating order cannot be done quickly.

### Comparison with Classical Thermodynamics

Classical thermodynamics is based on the relation

TdS = dT + pdV,

where dS represents the change of entropy S per unit mass, dV the change of volume, p is the pressure and dT denotes the change of temperature T per unit mass, combined with a classical 2nd Law in the form dS ≥ 0. The new 2nd Law in the form dE/dt + W = D ≥ 0 takes the symbolic form

dT + pdV ≥ 0,

effectively expressing that TdS ≥ 0, which is the same as dS ≥ 0 since T > 0. The new 2nd Law thus effectively expresses the same inequality as the classical 2nd Law, without reference to entropy.

Integrating the classical 2nd Law for a perfect gas with p = (γ − 1)ρT, where 1 < γ < 2 is a gas constant, and dV = d(1/ρ) = −dρ/ρ^2, we get dS = dT/T + (1 − γ)dρ/ρ and conclude that, up to a constant,

S = log(Tρ^(1−γ)).

The entropy S for a perfect gas is thus a function of the physical quantities ρ and T, i.e. a state function, suggesting that S might have a physical significance, because ρ and T have.
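As a concrete check (in normalized units of our choosing), this state function reproduces the classical verdict on Joule's experiment: the gas ends at its initial temperature but at half its density, so S increases by (γ − 1) log 2.

```python
import math

# Classical entropy change in Joule's free expansion, evaluated with
# the state function S = log(T * rho^(1 - gamma)) derived above.
# Units and initial values are normalized (illustrative).
gamma = 5.0 / 3.0              # monoatomic perfect gas
T0, rho0 = 1.0, 1.0            # initial state: gas confined to R
T1, rho1 = T0, rho0 / 2.0      # final state in R+L: same T, half density

def S(T, rho):
    return math.log(T * rho ** (1.0 - gamma))

dS = S(T1, rho1) - S(T0, rho0)
print(round(dS, 6))            # (gamma - 1) * log(2), positive
```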

But this is the great illusion and mistake of classical thermodynamics with the following questions without answer:

• What is the physical significance of S?
• Why is dS ≥ 0?

Boltzmann tried to give an answer by statistical mechanics, but failed since no answers were given, just more questions.

### Viscosity solutions

An EG2 solution can be viewed as a particular viscosity solution of the Euler equations, that is, a solution of regularized Euler equations augmented by additive terms modeling viscosity effects with small viscosity coefficients. The effective viscosity in an EG2 solution is typically comparable to the mesh size.

For incompressible flow the existence of viscosity solutions, with suitable solution dependent viscosity coefficients, can be proved a priori using standard techniques of analytical mathematics. Viscosity solutions are pointwise solutions of the regularized equations. But already the most basic problem with constant viscosity, the incompressible Navier-Stokes equations for a Newtonian fluid, presents technical difficulties, and is one of the open Clay Millennium Problems.

For compressible flow the technical complications are even more severe, and it is not clear which viscosities would be required for an analytical proof of the existence of viscosity solutions to the Euler equations. Furthermore, the question of wellposedness is typically left out, as in the formulation of the Navier-Stokes Millennium Problem, with the motivation that first the existence problem has to be settled. Altogether, analytical mathematics seems to have little to offer a priori concerning the existence and wellposedness of solutions of the compressible Euler equations.

In contrast, EG2 computational solutions of the Euler equations seem to offer a wealth of information a posteriori, in particular concerning wellposedness by duality. An EG2 solution thus can be viewed as a specific viscosity solution with a specific regularization from the least squares stabilization, in particular of the momentum equation, which is necessary because pointwise momentum balance is impossible to achieve in the presence of shocks/turbulence. The EG2 viscosity can be viewed as the minimal viscosity required to handle the contradiction behind the non-existence of pointwise solutions. For a shock, EG2 could then be directly interpreted as a certain physical mechanism preventing a shock wave from turning over, and for turbulence as a form of automatic computational turbulence model. EG2 thermodynamics can be viewed as a form of deterministic chaos, where the mechanism is open to inspection and can be used for prediction. On the other hand, the mechanism of statistical mechanics is not open to inspection and can only be based on ad hoc assumption, as noted by e.g. Einstein. If Boltzmann's assumption of molecular chaos cannot be justified, and is not needed, why consider it at all?

## The Euler equations

We consider the Euler equations for an inviscid perfect gas enclosed in a volume Ω in three-dimensional space with boundary Γ with outward unit normal n, over a time interval I = (0, 1], expressing conservation of mass density ρ, momentum m and total energy e: Find U = (ρ, m, e) depending on (x, t) ∈ Q ≡ Ω × I such that

Rρ(U) ≡ dρ/dt + ∇ · (ρu) = 0 in Q,
Rm(U) ≡ dm/dt + ∇ · (mu) + ∇p = f in Q,
Re(U) ≡ de/dt + ∇ · (eu + pu) = g in Q,
u · n = 0 on Γ × I,
u(·, 0) = u0 in Ω,

where u = m/ρ is the velocity, p = (γ − 1)e with γ > 1 a gas constant is the pressure, f is a given volume force, g a heat source/sink and u0 a given initial state, and we impose slip boundary conditions corresponding to an impenetrable boundary with zero tangential friction.

Because of the appearance of shocks/turbulence, the Euler equations lack pointwise solutions, except possibly for short time, and regularization is therefore necessary. For a monoatomic gas γ = 5/3 and the model then is parameter-free, the ideal form of a mathematical model according to Einstein. In EG2 the meshsize enters as a parameter, and we identify aspects of solutions which are independent of the meshsize.

### EG2

EG2 solves the Euler equations with a stabilized finite element method with the basic property that the residual R(U) = (Rρ(U), Rm(U), Re(U)) is small in a mean-value sense and not too large in a pointwise sense. Turbulence is signified by a pointwise large residual, reflecting substantial turbulent dissipation with D > 0.

### Output uniqueness and stability

Defining a mean-value output M(U) in the form of a space-time integral of U defined by a smooth weight function, it follows by duality-based a posteriori error estimation that

|((U, ψ)) − ((W, ψ))| ≤ S(||hR(U)|| + ||hR(W)||),

where U and W are two EG2 solutions on meshes of meshsize h, ||·|| is an L2 space-time norm, and S = S(U, W) is a stability factor defined as the H1-norm in space-time of a solution to a linearized dual problem with coefficients defined by U and W and the smooth weight function as data. In the case of shocks/turbulence ||R(U)|| will be large, but ||hR(U)|| small, and S is of moderate size, showing that mean-value outputs of turbulent solutions are wellposed. On the other hand, smooth potential solutions will be illposed with S very large, and thus represent fictional mathematical solutions without physical significance.

### Compressible flow around a sphere

We show below compressible flow around a sphere with a bow shock and turbulent wake, both generating substantial turbulent dissipation. The flow is irreversible: changing the sign of the velocity of the incoming flow will shift the position of both shock and wake, showing irreversibility.