From Conservapedia
This is the current revision of Thermodynamics as edited by SamHB (Talk | contribs) at 17:35, February 14, 2017. This URL is a permanent link to this version of this page.


Thermodynamics (from Greek: θερμός "thermos", hot; δύναμις "dynamis", power) is the branch of the physical sciences concerned with the effects of work, heat, and energy on a system. The study of thermodynamics is central to physics and chemistry, and is also important to processes within biology and geology.

Thermodynamics includes several sub-disciplines within its framework:

  • Classical thermodynamics, which concerns the transfer of energy and work in normal systems with no consideration of the interactions of particles at the microscopic level. Classical thermodynamics makes no explicit reference to the constituent particles of a system. It consists of a number of "empirical" laws, which are derived purely from observations on thermodynamical systems, such as vessels of gas, or steam engines.
  • Statistical thermodynamics, which concerns the interactions and energy relationships of particles at the microscopic level.
  • Quantum thermodynamics, which is thermodynamics extended to quantum systems.
  • Chemical thermodynamics, which is heat and energy transfers involving chemical reactions within chemical systems.

Thermodynamics emphasizes the initial state of a system and its final state, the system being all of the interacting components along this energy path. Measurements in thermodynamics are usually reported on the Kelvin scale.[1]


It has been known since antiquity that, when hot and cold objects are placed in contact, the warmer one cools off while the cooler one warms up, until they reach the same temperature. That is, heat only flows "downhill", never "uphill". It would take much scientific inquiry to figure out why this is so.

This didn't stop people from constructing heat engines. In the first century A.D., Heron of Alexandria created the first reaction turbine: a copper sphere with two bent nozzles mounted opposite each other, placed above a fire that heated water within the sphere, so that steam escaping from the nozzles made the sphere rotate rapidly. Simply a curious novelty at the time, Heron's sphere prompted speculation on the nature of heat and heat transfer, and sparked some investigation into using heat transfer to accomplish meaningful work.

In 1789 Antoine Lavoisier demonstrated the law of conservation of mass. Observing that heat flowed from a warm body to a cold one, he proposed that heat was an element (which he called caloric) and speculated that it was a type of fluid surrounding each atom; the theory seemed to be confirmed when he removed oxygen from mercuric oxide.

Prior to that, the conversion of heat to work was already being accomplished on an industrial level. By the end of the 17th century, Thomas Savery had invented the first practical steam-operated machine, a pump used to draw up well water. This in turn led to the first piston engine, invented by Thomas Newcomen in 1712 as a refinement of Savery's work, which was further improved by James Watt by the end of the 18th century.

The downfall of Lavoisier's caloric theory happened at the arsenal in Munich, Germany. The Bavarian minister of war, Sir Benjamin Thompson (Count Rumford), a British expatriate, observed that work was being converted into heat during the boring of a cannon. If caloric theory were correct, he reasoned, no more heat would be produced once all of the caloric had been removed from the cannon; yet his observations of this procedure, including a cannon bored while under water, demonstrated that work can be converted into heat, just as the steam engines of his time converted heat into work.

In 1849 James Joule made a precise determination of the mechanical equivalent of heat. His stirring of water in a pot (work input from a mechanical stirring rod, driven by a 1-kg weight falling 42.4 cm) caused a temperature increase (heat output); his homemade, yet very precise, thermometers recorded the conversion factor. The unit of energy is now called the joule. About 4.2 joules of energy raise the temperature of one gram of water by 1 degree Celsius, an amount called one calorie. (The large-C "Calorie" used in nutritional measure is 1000 small-c calories.)
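Joule's arithmetic can be checked directly. The following Python sketch uses the modern standard value of gravitational acceleration (not a figure from Joule's own notes):

```python
# Joule's falling-weight experiment, sketched with modern constants.
g = 9.81          # gravitational acceleration, m/s^2 (modern standard value)
mass = 1.0        # kg, the weight quoted above
height = 0.424    # m, the 42.4 cm drop quoted above

work = mass * g * height               # mechanical work released, in joules
print(f"Work released: {work:.2f} J")  # 4.16 J

# One calorie (heat to warm 1 g of water by 1 degree C) is about 4.2 J,
# so the falling weight delivers roughly one calorie of heat.
calorie = 4.184
print(f"Equivalent heat: {work / calorie:.3f} cal")  # about one calorie
```

The near-match between the mechanical work and one calorie is exactly the conversion factor Joule's thermometers recorded.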

Zeroth Law of Thermodynamics

The zeroth law was formulated after the first and second laws, but is more fundamental, so it was named "zeroth" so as to avoid renumbering the other laws. It can be stated in terms of three systems, A, B and C: if A is in thermal equilibrium with B, and B is in thermal equilibrium with C, then A must also be in thermal equilibrium with C. This allows us to define temperature. By bringing one body (our thermometer) into contact with another body, eventually they will reach thermal equilibrium and be at the same temperature. If we can measure a physical property of our thermometer, such as its length, and know how that property varies with temperature, we can calculate the temperature of the body in question.
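The zeroth law is what makes such a thermometer meaningful. Here is a minimal Python sketch of calibrating one against two known temperatures; all of the numbers are illustrative:

```python
# Calibrate a thermometer by measuring one of its physical properties
# (here, the length of a liquid column, in mm) at two known temperatures,
# then read an unknown temperature by linear interpolation.

def calibrate(length_at_freezing, length_at_boiling):
    """Return a function mapping column length (mm) to temperature (deg C)."""
    def to_temperature(length):
        fraction = (length - length_at_freezing) / (length_at_boiling - length_at_freezing)
        return fraction * 100.0   # Celsius scale runs 0 to 100 between the fixed points
    return to_temperature

thermometer = calibrate(length_at_freezing=10.0, length_at_boiling=30.0)

# After the thermometer reaches equilibrium with a body (the zeroth law
# guarantees this is a consistent notion), its length gives the temperature:
print(thermometer(15.0))   # 25.0
```

The interpolation assumes the property varies linearly with temperature, which is itself a calibration choice, not a law of nature.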

First Law of Thermodynamics

Joule and others continued with other experiments, noting the pressure changes caused by electrically heated gases, and achieved similar results. This led a number of other scientists to research in this area, among them the German physicists Rudolf Clausius and Hermann von Helmholtz in the 1840s, and it led to the general acceptance of Conservation of Energy as a clear and precise principle. Clausius stated in 1850:

"In any process, energy can be changed from one form to another, but it is never created or destroyed."

This can be expressed mathematically as

dU = δQ − δW

where dU is the infinitesimal change in internal energy, δQ is the heat that flows into the system, and δW is the work the system performs.
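As an illustration, the first-law bookkeeping can be sketched numerically; the heat and work figures below are hypothetical:

```python
# First law of thermodynamics: the change in internal energy of a system
# equals the heat flowing in minus the work the system performs.
heat_in = 500.0    # joules of heat added to the system (hypothetical)
work_out = 200.0   # joules of work done by the system (hypothetical)

delta_U = heat_in - work_out
print(f"Change in internal energy: {delta_U} J")   # 300.0 J
```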

This principle notably included heat as a form of energy; its recognition was the first law of thermodynamics. Hot objects contain energy in the form of their heat, and all the usual rules of transformation between forms of energy apply. When an electric current is passed through a resistor, the electrical energy is converted to heat energy, and the resistor gets hotter. Everything seemed to work out accurately.

Of course, in all experiments that attempt to tally energy (or other properties) accurately, one must be careful to avoid outside interference. Newton's second law requires that no unaccounted-for forces act on the object. Similarly, in thermodynamics, one must take into account any possible flows of heat into or out of the entity under test. This is often described by saying that the laws of thermodynamics apply only to "isolated", or "closed", systems. If a system can interact with some external entity, that entity's properties must be taken into account.

Scientists had also established an essentially linear relationship between heat energy added and temperature rise. The proportionality constant is called the specific heat, measured in joules per degree of temperature rise per gram of substance. For example, the specific heat of water is 4.2 joules per gram per degree Celsius of temperature rise.

Heat engines (that is, steam engines) were being used on an industrial scale by then, but scientists still didn't know what temperature really meant. And they didn't know why it only flows "downhill". Also, while other forms of energy (as in running electricity through a resistor) could be converted to heat with essentially perfect efficiency, converting the other way (as in a steam engine) was very inefficient. No one knew why.

The clues that unravelled this mystery came from the study of gases, which had been going on for some time before. Boyle's Law, formulated in the 1660s, stated that, for a given sample of gas at a fixed temperature, the pressure was inversely proportional to the volume. That is,

PV = constant

where the constant depends on the amount and type of the gas sample.

Charles' Law, formulated in the 1780s, stated that, for a given sample of gas at a fixed pressure, the volume was directly proportional to the "absolute" temperature. That is,

V = constant × T

where the constant depends on the amount and type of the gas sample. This required that the temperature scale be modified. The necessary scale was known as absolute temperature, now known as the Kelvin scale. All thermodynamic measurements are in kelvins.

Putting these together, we get, for a given sample of gas:

PV = XT

where X is some constant that is characteristic of the gas sample. It's easy to see that X is proportional to the amount of gas (2 grams of gas will have twice the volume of 1 gram at the same pressure and temperature). So X is actually the amount of gas, measured in some convenient units (grams, moles, molecules), times some number that is characteristic of the gas.

In the 1810s, Gay-Lussac and Avogadro made an amazing discovery: the mysterious constant is the same for every gas, if the amount of gas is measured in the right unit. The right unit is the mole, the mass, in grams, that matches the molecular weight of the gas. (By then atomic weights and molecular weights were beginning to be understood.) So, for example, a mole of Chlorine is 71 grams, because a Chlorine molecule has two atoms, each of atomic weight 35.5. This led to the Universal Gas Law or Ideal Gas Law:

PV = nRT

where n is the amount of gas, measured in moles, and R is the Universal Gas Constant of 8.314 joules per mole per kelvin. A mole has to be defined as that amount, in grams, equal to the molecular weight of the gas. This required that the molecular weight of diatomic gases, like Hydrogen, Nitrogen, Oxygen, and Chlorine, be twice the atomic weight, because the molecules have two covalently bound atoms. Inert gases, like Helium and Neon, have only one atom per molecule. For something like Ammonia vapor (NH3), the molecular weight is 17, the sum of the atomic weights of the atoms.
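The ideal gas law lends itself to a quick numerical check; the vessel volume and temperature below are chosen to approximate standard conditions:

```python
# Ideal gas law PV = nRT, solved for pressure. One mole of any (ideal) gas
# in a 22.4-litre vessel at 273.15 K should be near one atmosphere.
R = 8.314   # universal gas constant, J/(mol K)

def pressure(n_moles, volume_m3, temperature_K):
    return n_moles * R * temperature_K / volume_m3

p = pressure(1.0, 0.0224, 273.15)   # 22.4 litres = 0.0224 cubic metres
print(f"{p:.0f} Pa")   # roughly 101,000 Pa, about one atmosphere
```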

There are a number of ways of stating this. A mole is Avogadro's number (6.022×10²³) of molecules. The gas law can be restated in terms of the number of molecules:

PV = NkT

where N is the number of molecules and k is Boltzmann's constant (1.38×10⁻²³ joules per kelvin). It is just the universal gas constant divided by Avogadro's number. Since it makes no reference to artificial units like grams, physicists consider it to be more theoretically significant than the gas constant, and it shows up in many physical formulas (in quantum mechanics and statistical mechanics, for example) that are not related to gas behavior.
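The relationship between the two constants can be verified in a line of Python:

```python
# Boltzmann's constant is the gas constant per molecule: k = R / N_A.
R = 8.314        # universal gas constant, J/(mol K)
N_A = 6.022e23   # Avogadro's number, molecules per mole

k = R / N_A
print(f"{k:.3e} J/K")   # 1.381e-23 J/K, matching the quoted value
```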

Scientists were now fairly close to figuring out thermodynamics. They just needed the kinetic theory and statistical mechanics. They still didn't know why heat only flows "downhill"—it was still just an observed fact. And they didn't know why "heat engines", that is, things that turn heat (e.g. steam) into mechanical energy, aren't very efficient.

Second Law of Thermodynamics

James Watt's steam engine drew heat from a specific source and converted some of it to useful work; the remainder of the heat was transferred to a cooler reservoir. In 1824 a French engineer, N. L. Sadi Carnot, proposed the Carnot cycle,[2] consisting of two isothermal processes (at constant temperature) and two adiabatic processes (no heat gained or lost). The result was, in theory, the most efficient heat-engine cycle of any kind, one whose processes must be reversible and involve no change in entropy. What was discovered in practice was the second law of thermodynamics, which states (in one of its various formulations) that entropy in an isolated system cannot decrease, and that irreversible processes can only make it increase.[3] An equivalent formulation states that heat cannot spontaneously flow from a cooler body to a hotter body. Clausius expressed this in what is known as the Clausius statement:[4]

No process is possible whose sole result is the transfer of heat from a colder to a hotter body.

Kelvin proposed another statement, which can be shown to be equivalent to that above:

No process is possible whose sole result is the complete conversion of heat into work.

What this meant was that heat flows toward cooler temperatures, and not the other way around. But this was still just a carefully worked out summary of experimental observation. While such summaries are important for the progress of science, people still didn't know why this was true. Nevertheless, Carnot's theory used that observation to explain why heat engines have limited efficiency.

Making progress on the theory required the development of the concept of entropy, and of statistical mechanics. To see what entropy is about, consider its thermodynamical definition as a differential. Entropy is generally symbolized with a capital S, and heat energy with a capital Q. While defining entropy in terms of its derivative rather than by an absolute definition may seem to leave something to be desired (it leaves a "constant of integration" unspecified), that generally doesn't matter. This standard definition is used:

dS = dQ / T

The change in entropy of some object is the change in heat energy divided by the temperature at which the change takes place. The unit of thermodynamic entropy is the joule per kelvin. This is an "extensive" measure: to get the entropy of a given substance, independently of the size of the sample, it has to be divided by the size of the sample. So the unit of entropy for a substance (e.g. ice) is joules per kelvin per mole, or per gram, or per atom, or per liter.

Why is this useful? It captures the fact that heat flows downhill. Suppose there is a hot cup of coffee in a cooler room. Heat will flow from the coffee to the room. By conservation of energy, that is, the first law of thermodynamics, the amount of heat energy flowing out of the coffee is equal to the heat energy flowing into the room.


dQ_room = −dQ_coffee

(These would actually be instantaneous time derivatives.) Energy is conserved. But now consider the equation for entropy. If the coffee is at 310 kelvins and the room at 290 kelvins, and 50 joules of heat flows from the coffee to the room:

ΔS = −50/310 + 50/290 ≈ −0.1613 + 0.1724 = 0.0111 joules per kelvin

Because of the temperature difference, the entropy of both combined went up by 0.0111. The fact that heat flows downhill may be captured by saying that
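The coffee-and-room arithmetic can be sketched in Python; the 50-joule transfer is an illustrative figure:

```python
# Entropy bookkeeping for heat flowing from hot coffee to a cooler room.
Q = 50.0          # joules transferred (illustrative)
T_coffee = 310.0  # kelvins
T_room = 290.0    # kelvins

dS_coffee = -Q / T_coffee   # coffee loses heat, so its entropy falls
dS_room = Q / T_room        # room gains heat, so its entropy rises
dS_total = dS_coffee + dS_room
print(f"{dS_total:.4f} J/K")   # 0.0111 J/K: the combined entropy went up
```

Reversing the flow (heat from the cool room into the hot coffee) would flip both signs and make the total negative, which is exactly what the second law forbids.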

Entropy always increases, or stays the same. It does not decrease.

This is still just a nicely formulated statement of observations. To see why this is the correct definition of entropy, and why this law is correct, statistical mechanics must be developed. This was done by Maxwell, Boltzmann, Clausius, and others during the 19th century.

For a continuation of this, see the Second Law of Thermodynamics article.

Third Law of Thermodynamics

The third law, also known as Nernst's law, states that it is not possible to bring any system to the absolute zero of temperature in a finite number of Carnot cycles. It can also be stated as follows: the entropy of a perfect crystal at absolute zero is zero.

These laws tell us to what constraints any system is subject. For example, it allows us to calculate the maximum possible efficiency of an engine once we know the temperature at which it operates.
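That maximum efficiency is the Carnot bound, 1 − T_cold/T_hot. A short sketch, with illustrative reservoir temperatures:

```python
# Maximum (Carnot) efficiency of a heat engine running between a hot
# source and a cold sink. Temperatures must be in kelvins.

def carnot_efficiency(T_hot, T_cold):
    return 1.0 - T_cold / T_hot

# An engine with steam at 450 K exhausting to air at 300 K:
print(f"{carnot_efficiency(450.0, 300.0):.2%}")   # 33.33%
```

No real engine reaches this bound, since the bound assumes fully reversible processes; real processes are irreversible and generate entropy.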

The particular properties of a specific system cannot be calculated from these laws alone. More information is required: so-called thermodynamic equations of state tell us how a particular system will behave under thermodynamic processes. A simple example of such an equation is the Ideal Gas Law that applies to dilute gases.

References

  1. David Halliday, Fundamentals of Physics Extended, John Wiley & Sons, New York, 1997
  2. Carnot Engine
  3. Gregory H. Wannier, Statistical Physics, John Wiley & Sons, New York, 1966
  4. S. J. Blundell & K. M. Blundell, Concepts in Thermal Physics, Oxford University Press, New York, 2016