SUMMARY
Chapter 22
Heat Engines, Entropy, and the Second Law of Thermodynamics
Part 2
by:
Reizsky Reynaldy
3415115829
FAKULTAS
MATEMATIKA dan ILMU PENGETAHUAN ALAM
PENDIDIKAN BIOLOGI BILINGUAL
UNIVERSITAS NEGERI JAKARTA
2011
Chapter 22
Heat Engines, Entropy, and the Second Law of Thermodynamics
Part 2
- Gasoline and Diesel Engines
- Entropy
- Entropy Changes in Irreversible Processes
- Entropy on a Microscopic Scale
Gasoline and Diesel Engines
Gasoline Engine
A petrol engine (known as a gasoline engine) is an internal combustion engine with spark-ignition, designed to run on petrol (gasoline) and similar volatile fuels.
It differs from a diesel engine in the method of mixing the fuel and air, and in using spark plugs
to initiate the combustion process. In a diesel engine, only air is
compressed (and therefore heated), and the fuel is injected into very
hot air at the end of the compression stroke, and self-ignites. In a
petrol engine, the fuel and air are usually pre-mixed before compression
(although some modern petrol engines now use cylinder-direct petrol
injection).
The pre-mixing was formerly done in a carburetor, but now (except in the smallest engines) it is done by electronically controlled fuel injection.
Petrol engines run at higher speeds than diesels, partially due to
their lighter pistons, con rods and crankshaft (as a result of lower
compression ratios) and due to petrol burning faster than diesel.
However, the lower compression ratio of a petrol engine gives it a lower
efficiency than a diesel engine.
Working cycles
Petrol engines may run on the four-stroke cycle or the two-stroke cycle.
Cylinder arrangement
Common cylinder arrangements are from 1 to 6
cylinders in-line or from 2 to 16 cylinders in V-formation.
Flat engines
– like a V design flattened out – are common in small airplanes and motorcycles
and were a hallmark of Volkswagen automobiles into the 1990s. Flat 6s are still used in many modern Porsches,
as well as Subarus.
Many flat engines are air-cooled. Less common, but notable in vehicles designed
for high speeds is the W formation, similar to having 2 V engines side by side.
Alternatives include rotary and radial engines;
the latter typically have 7 or 9 cylinders in a single ring, or 10 or 14
cylinders in two rings.
Cooling
Petrol engines may be air-cooled,
with fins (to increase the surface area on the cylinders and cylinder head);
or liquid-cooled, by a water jacket
and radiator. The coolant
was formerly water, but is now usually a mixture of water and either ethylene glycol
or propylene glycol. These mixtures have lower freezing points and
higher boiling points than pure water and also prevent corrosion, with modern
antifreezes also containing lubricants and other additives to protect water pump
seals and bearings. The cooling system is usually slightly pressurized to
further raise the boiling point of the coolant.
Compression ratio
The compression ratio is the ratio of the total volume of the cylinder and
combustion chamber at the beginning of the compression stroke to that at its
end. Broadly speaking, the higher the compression ratio, the higher the
efficiency of the engine. However, the compression ratio has to be limited to
avoid pre-ignition of the fuel-air mixture, which would cause engine knocking
and damage to the engine. Modern motor-car engines typically have overall
compression ratios of between 9:1 and 10:1, but this can rise to 11:1 or 12:1
for high-performance engines that run on higher-octane fuel.
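For orientation (an illustrative sketch added here, not part of the original text): in the ideal air-standard Otto cycle the efficiency depends only on the compression ratio r and the ratio of specific heats γ, via e = 1 - r^(1-γ). A minimal Python sketch, assuming γ = 1.4 for air:

```python
# Ideal air-standard Otto-cycle efficiency: e = 1 - r**(1 - gamma).
# gamma = 1.4 is assumed (diatomic air); real engines fall well below
# this ideal because of friction, heat loss, and incomplete combustion.
GAMMA = 1.4

def otto_efficiency(r, gamma=GAMMA):
    """Ideal thermal efficiency for compression ratio r."""
    return 1.0 - r ** (1.0 - gamma)

for r in (9, 10, 11, 12):
    print(f"r = {r}:1  ->  ideal efficiency = {otto_efficiency(r):.1%}")
```

Raising r from 9 to 12 lifts the ideal efficiency by only a few percentage points, which is why knock-limited compression ratios are tolerable in practice.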
Ignition
Petrol engines use spark ignition; the high-voltage current for the spark may be provided by a magneto or an ignition coil.
In modern car engines the ignition timing
is managed by an electronic Engine Control Unit.
Diesel Engine
The diesel engine has the highest thermal efficiency of any regular internal or external combustion engine, due to its very high compression ratio. Low-speed diesel engines (as used in ships and other applications where overall engine weight is relatively unimportant) often have a thermal efficiency exceeding 50 percent. Diesel engines are manufactured in two-stroke and four-stroke versions. They were originally used as a more efficient replacement for stationary steam engines. Since the 1910s they have been used in submarines and ships. Use in locomotives, trucks, heavy equipment and electric generating plants followed later. In the 1930s, they slowly began to be used in a few automobiles. Since the 1970s, the use of diesel engines in larger on-road and off-road vehicles in the USA has increased. As of 2007, about 50 percent of all new car sales in Europe were diesel.
During the intake stroke of a diesel engine, only air is drawn into the
cylinder. The air is then compressed adiabatically to a temperature high
enough that fuel sprayed into the cylinder at the end of the compression
stroke ignites on its own, with no spark required. Combustion is not as rapid
as in an engine that uses gasoline as fuel.
The first part of the power stroke takes place at roughly constant pressure; the remainder is an adiabatic expansion. The cycle is then closed by the exhaust stroke.
Figure 1.5 shows the idealized closed air cycle of a diesel engine. Starting
at point 1, the air is compressed adiabatically to point 2, then heated at
constant pressure to point 3. Next, it expands adiabatically to point 4 and is
cooled at constant volume back to point 1.
Because there is no fuel in the cylinder during the compression stroke,
premature ignition cannot occur, and the compression ratio V1/V2 can be much
larger than in a gasoline engine.
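As a rough numerical companion to this cycle (a sketch under stated assumptions, added here): the ideal air-standard Diesel cycle built from these four steps has efficiency e = 1 - (rc^γ - 1) / (γ (rc - 1) r^(γ-1)), where r = V1/V2 is the compression ratio and rc = V3/V2 the cutoff ratio.

```python
# Ideal air-standard Diesel cycle (points 1-4 as in Figure 1.5):
# 1->2 adiabatic compression, 2->3 constant-pressure heating,
# 3->4 adiabatic expansion, 4->1 constant-volume cooling.
GAMMA = 1.4  # assumed ratio of specific heats for air

def diesel_efficiency(r, rc, gamma=GAMMA):
    """r = V1/V2 (compression ratio), rc = V3/V2 (cutoff ratio)."""
    return 1.0 - (rc**gamma - 1.0) / (gamma * (rc - 1.0) * r**(gamma - 1.0))

# Example values: the high compression ratio a diesel can afford.
print(f"{diesel_efficiency(r=20, rc=2):.1%}")   # about 64.7%
```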
Entropy
In classical thermodynamics, the concept of entropy is defined phenomenologically by the second law of thermodynamics, which states that the entropy of an isolated system always increases or remains constant. Thus, entropy is also a measure of the tendency of a process, such as a chemical reaction, to be entropically favored, or to proceed in a particular direction. It determines that thermal energy always flows spontaneously from regions of higher temperature to regions of lower temperature, in the form of heat. These processes reduce the state of order of the initial systems, and therefore entropy is an expression of disorder or randomness. This picture is the basis of the modern microscopic interpretation of entropy in statistical mechanics, where entropy is defined as the amount of additional information needed to specify the exact physical state of a system, given its thermodynamic specification. The second law is then a consequence of this definition and the fundamental postulate of statistical mechanics.
Thermodynamic entropy has the dimension of energy divided by temperature, and a unit of joules per kelvin (J/K) in the International System of Units.
The term entropy was coined in 1865 by Rudolf Clausius based on the Greek εντροπία [entropía], a turning toward, from εν- [en-] (in) and τροπή [tropē] (turn, conversion).
Answer: (b).
Because the process is reversible and adiabatic, Qr = 0; therefore, ΔS = 0.
The first law of thermodynamics is related to the concept of internal energy
U. The second law of thermodynamics is concerned with a thermodynamic variable
called the entropy S. From the discussion of the Carnot cycle we obtained the
relationship between temperature and heat flow in the cycle:

Qh / Qc = Th / Tc

In this case, Qh is the amount of heat flowing into the cycle from the hot
reservoir at Th, and Qc is the amount of heat flowing out of the system to the
cold reservoir at Tc; the two therefore have opposite signs. So for the Carnot
cycle (which is a reversible process) we can write:

(Qh / Th) = -(Qc / Tc), or (Q1 / T1) + (Q2 / T2) = 0

Thus, the algebraic sum of the quantities Q/T around a Carnot cycle is zero.
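A quick numerical check (an illustrative sketch added here, with example reservoir temperatures): for a Carnot engine the heat rejected satisfies Qc = Qh Tc/Th, so the signed sum of Q/T vanishes.

```python
# Carnot cycle: heat Qh absorbed at Th, heat Qc rejected at Tc,
# with Qc = Qh * Tc / Th. The signed sum of Q/T is therefore zero.
Th, Tc = 500.0, 300.0   # K (example reservoir temperatures)
Qh = 1000.0             # J absorbed from the hot reservoir
Qc = Qh * Tc / Th       # J rejected to the cold reservoir

print(Qh / Th - Qc / Tc)   # 0.0: entropy is unchanged around the cycle
```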
Now consider an arbitrary reversible cyclic process, represented by the closed curve in the following figure. Such a cycle can be approximated by a large number of small Carnot cycles, and for each small cycle:
(ΔQ1 / T1) + (ΔQ2 / T2) = 0
Applying this relationship to all the small cycles and summing, we obtain:

Σ (ΔQ / T) = 0
When the cycles are made much smaller, so that the temperature difference
between two successive isotherms is very small, the above equation can be
rewritten as

∮ (dQ / T) = 0
So, if the heat d'Q flowing into the system at each point is divided by the
temperature T of the system at that point, and the results are summed around
the whole cycle, the total is zero. If the integral of a quantity around a
closed path is zero, then that quantity is a state variable; in this case we
call it the entropy S, and we obtain the equation:

dS = d'Q / T

Then, for any cyclic process:

∮ dS = 0
Based on the equation ∮ (dQ / T) = 0: although d'Q is not an exact
differential, dS is an exact differential. Another property of an exact
differential is that its integral between any two equilibrium states is the
same for all paths between those states. Therefore, for any path between
states a and b:

a∫b dS = Sb - Sa
This equation gives the entropy change between two equilibrium states a and b.
The SI unit of entropy is the joule per kelvin (J K-1); the calorie per kelvin
(cal K-1) is also used. We define the specific entropy as the entropy per mole
or per unit mass:

s = S / n or s = S / m
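To make the defining relation dS = d'Q/T concrete, here is a minimal Python sketch (an added illustration, not part of the original notes): for a body of constant specific heat heated reversibly, integrating dS = mc dT/T gives ΔS = mc ln(Tf/Ti).

```python
import math

# dS = dQr / T with dQr = m * c * dT integrates to m * c * ln(Tf / Ti)
# when the specific heat c is constant over the temperature range.
m = 1.0        # kg of water (example value)
c = 4186.0     # J/(kg*K), specific heat of water
Ti, Tf = 300.0, 350.0   # K, initial and final temperatures

delta_S = m * c * math.log(Tf / Ti)
print(f"Delta S = {delta_S:.1f} J/K")   # about +645 J/K
```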
Entropy Changes in
Irreversible Processes
By definition, a calculation of the change in entropy for a system
requires information about a reversible path connecting the initial and final
equilibrium states. To calculate changes in entropy for real (irreversible)
processes, we must remember that entropy (like internal energy) depends only on
the state of the system. That is, entropy is a state variable. Hence,
the change in entropy when a system moves between any two equilibrium states
depends only on the initial and final states. We can calculate the entropy
change in some irreversible process between two equilibrium states by devising
a reversible process (or series of reversible processes) between the same two
states and computing ΔS for the reversible process. In irreversible
processes, it is critically important that we distinguish between Q, the
actual energy transfer in the process, and Qr , the energy that would
have been transferred by heat along a reversible path. Only Qr is the
correct value to be used in calculating the entropy change. As we show in the
following examples, the change in entropy for a system and its surroundings is
always positive for an irreversible process. In general, the total entropy and
therefore the disorder always increases in an irreversible process. Keeping
these considerations in mind, we can state the second law of thermodynamics as
follows :
The total entropy of an isolated system that undergoes
a change cannot decrease.
Furthermore, if the process is irreversible, then the
total entropy of an isolated system always increases. In a reversible process,
the total entropy of an isolated system remains constant. When dealing
with a system that is not isolated from its surroundings, remember that the
increase in entropy described in the second law is that of the system and its
surroundings. When a system and its surroundings interact in an irreversible
process, the increase in entropy of one is greater than the decrease in entropy
of the other. Hence, we conclude that the change in entropy of the Universe must
be greater than zero for an irreversible process and equal to zero for a
reversible process. Ultimately, the entropy of the Universe should
reach a maximum value. At this value, the Universe will be in a state of
uniform temperature and density. All physical, chemical, and biological
processes will cease because a state of perfect disorder implies that no energy
is available for doing work. This gloomy state of affairs is sometimes referred
to as the heat death of the Universe.
Answer: False.
The determining factor for the entropy change is Qr , not Q. If
the adiabatic process is not reversible, the entropy change is not necessarily
zero because a reversible path between the same initial and final states may involve
energy transfer by heat.
Entropy Change in Thermal Conduction
Let us now consider a system consisting of a hot reservoir and a cold
reservoir that are in thermal contact with each other and isolated from the
rest of the Universe. A process occurs during which energy Q is
transferred by heat from the hot reservoir at temperature Th to the cold
reservoir at temperature Tc . The process as described is irreversible,
and so we must find an equivalent reversible process. Let us assume that the
objects are connected by a poor thermal conductor whose temperature spans the
range from Tc to Th. This conductor transfers energy slowly, and
its state does not change during the process. Under this assumption, the energy
transfer to or from each object is reversible, and we may set Q = Qr .
Because the cold reservoir absorbs energy Q, its entropy increases by Q/Tc
. At the same time, the hot reservoir loses energy Q, and so its
entropy change is -Q/Th. Because Th > Tc, the
increase in entropy of the cold reservoir is greater than the decrease in
entropy of the hot reservoir. Therefore, the change in entropy of the system
(and of the Universe) is greater than zero:

ΔS = Q/Tc - Q/Th > 0
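A small numerical illustration (added here; the values of Q, Th and Tc are arbitrary examples): the entropy gained by the cold reservoir exceeds the entropy lost by the hot one.

```python
# Entropy change of the universe when heat Q flows from Th to Tc:
# Delta S = Q/Tc - Q/Th, positive whenever Th > Tc.
Q = 1000.0              # J transferred by heat (example value)
Th, Tc = 400.0, 300.0   # K, reservoir temperatures (example values)

delta_S = Q / Tc - Q / Th
print(f"Delta S = {delta_S:+.3f} J/K")   # +0.833 J/K > 0
```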
Entropy Change in a Free Expansion
Let us again consider the adiabatic free expansion of a gas occupying
an initial volume Vi (Fig. 22.16). In this situation, a membrane
separating the gas from an evacuated region is broken, and the gas expands
(irreversibly) to a volume Vf . What are the changes in entropy of the
gas and of the Universe during this process? The process is neither reversible
nor quasi-static. The work done by the gas against the vacuum is zero, and
because the walls are insulating, no energy is transferred by heat during the
expansion. That is, W = 0 and Q = 0. Using the first law, we see
that the change in internal energy is zero. Because the gas is ideal, Eint
depends on temperature only, and we conclude that ΔT = 0 or Ti = Tf
. To apply Equation 22.9, we cannot use Q = 0, the value for the
irreversible process, but must instead find Qr; that is, we must find
an equivalent reversible path that shares the same initial and final states. A
simple choice is an isothermal, reversible expansion in which the gas pushes
slowly against a piston while energy enters the gas by heat from a reservoir to
hold the temperature constant. Because T is constant in this process,
Equation 22.9 gives

ΔS = ∫ dQr / T = Qr / T
For an isothermal process, the first law of thermodynamics
specifies that Qr is equal to the negative of the work done on
the gas during the expansion from Vi to Vf, which is given by
Equation 20.13. Using this result, we find that the entropy change for the gas
is

ΔS = nR ln(Vf / Vi)
Because Vf > Vi , we conclude that ΔS is
positive. This positive result indicates that
both
the entropy and the disorder of the gas increase as a result of the
irreversible, adiabatic expansion. It is easy to see that the gas is more
disordered after the expansion. Instead of being concentrated in a relatively
small space, the molecules are scattered over a larger region. Because the free
expansion takes place in an insulated container, no energy is transferred by
heat from the surroundings. (Remember that the isothermal, reversible expansion
is only a replacement process that we use to calculate the entropy
change for the gas; it is not the actual process.) Thus, the free
expansion has no effect on the surroundings, and the entropy change of the
surroundings is zero. Thus, the entropy change for the Universe is positive;
this is consistent with the second law.
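A short sketch of this result (an added illustration; n, Vi and Vf are example values): the entropy of the gas rises by nR ln(Vf/Vi) while the surroundings are unchanged.

```python
import math

# Free expansion of an ideal gas: Delta S computed along the
# replacement reversible isothermal path, Delta S = n * R * ln(Vf/Vi).
R = 8.314               # J/(mol*K)
n = 1.0                 # mol (example value)
Vi, Vf = 1.0, 2.0       # only the ratio Vf/Vi matters

dS_gas = n * R * math.log(Vf / Vi)
dS_surroundings = 0.0   # no heat crosses the insulating walls
print(f"Delta S (universe) = {dS_gas + dS_surroundings:+.3f} J/K")  # +5.763
```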
Entropy Change in Calorimetric Processes
A substance of mass m1, specific heat c1, and initial
temperature Tc is placed in thermal contact with a second substance of
mass m2, specific heat c2, and initial temperature Th >
Tc . The two substances are contained in a calorimeter so that no energy
is lost to the surroundings. The system of the two substances is allowed to
reach thermal equilibrium. What is the total entropy change for the system?
First, let us calculate the final equilibrium temperature Tf. Using the
techniques of Section 20.2, namely Equation 20.5 (Qcold = -Qhot) and
Equation 20.4 (Q = mc ΔT), we obtain

m1c1(Tf - Tc) = -m2c2(Tf - Th)

Solving for Tf, we have

Tf = (m1c1Tc + m2c2Th) / (m1c1 + m2c2)     (22.14)
The process is irreversible because the system goes through a series of
nonequilibrium states. During such a transformation, the temperature of the
system at any time is not well defined because different parts of the system
have different temperatures. However, we can imagine that the hot substance at
the initial temperature Th is slowly cooled to the temperature Tf as
it comes into contact with a series of reservoirs differing infinitesimally in
temperature, the first reservoir being at Th and the last being at Tf
. Such a series of very small changes in temperature would approximate a
reversible process. We imagine doing the same thing for the cold substance.
Applying Equation 22.9 and noting that dQ = mc dT for an
infinitesimal change, we have

ΔS = m1c1 Tc∫Tf dT/T + m2c2 Th∫Tf dT/T

where we have assumed that the specific heats remain constant.
Integrating, we find that

ΔS = m1c1 ln(Tf / Tc) + m2c2 ln(Tf / Th)     (22.15)
where Tf is given by Equation 22.14. If Equation 22.14 is
substituted into Equation 22.15, we can show that one of the terms in Equation
22.15 is always positive and the other is always negative. (You may want to
verify this for yourself.) The positive term is always greater than the
negative term, and this results in a positive value for ΔS. Thus, we
conclude that the entropy of the Universe increases in this irreversible process.
Finally, you should note that Equation 22.15 is valid only when no mixing of
different substances occurs, because a further entropy increase is associated
with the increase in disorder during the mixing. If the substances are liquids
or gases and mixing occurs, the result applies only if the two fluids are
identical, as in the following example.
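Equations 22.14 and 22.15 are easy to evaluate numerically; here is a minimal sketch (added illustration, with example masses and temperatures; the function name is ours):

```python
import math

# Eq. 22.14: Tf = (m1*c1*Tc + m2*c2*Th) / (m1*c1 + m2*c2)
# Eq. 22.15: Delta S = m1*c1*ln(Tf/Tc) + m2*c2*ln(Tf/Th)
def calorimeter(m1, c1, Tc, m2, c2, Th):
    Tf = (m1 * c1 * Tc + m2 * c2 * Th) / (m1 * c1 + m2 * c2)
    dS = m1 * c1 * math.log(Tf / Tc) + m2 * c2 * math.log(Tf / Th)
    return Tf, dS

# Example: equal masses of water at 280 K and 360 K.
Tf, dS = calorimeter(1.0, 4186.0, 280.0, 1.0, 4186.0, 360.0)
print(f"Tf = {Tf:.1f} K, Delta S = {dS:+.1f} J/K")  # 320.0 K, about +65.9 J/K
```

As the text argues, the positive (cold-substance) term always outweighs the negative (hot-substance) term, so ΔS comes out positive.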
Entropy on a Microscopic Scale
As we have seen, we can approach entropy by relying on macroscopic
concepts. We can also treat entropy from a microscopic viewpoint through
statistical analysis of molecular motions. We now use a microscopic model to
investigate once again the free expansion of an ideal gas, which was discussed
from a macroscopic point of view in the preceding section. In the kinetic
theory of gases, gas molecules are represented as particles moving randomly.
Let us suppose that the gas is initially confined to a volume Vi , as
shown in Figure 22.17a. When the partition separating Vi from a larger
container is removed, the molecules eventually are distributed throughout the
greater volume Vf (Fig. 22.17b). For a given uniform distribution of gas
in the volume, there are a large number of equivalent microstates, and we can
relate the entropy of the gas to the number of microstates corresponding to a
given macrostate. We count the number of microstates by considering the variety
of molecular locations involved in the free expansion. The instant after the
partition is removed (and before the molecules have had a chance to rush into
the other half of the container), all the molecules are in the initial volume.
We assume that each molecule occupies some microscopic volume Vm. The
total number of possible locations of a single molecule in a macroscopic
initial volume Vi is the ratio wi = Vi/Vm, which is
a huge number. We use wi here to represent the number of ways that
the molecule can be placed in the volume, or the number of microstates, which
is equivalent to the number of available locations. We assume that the
probabilities of a molecule occupying any of these locations are equal. As more
molecules are added to the system, the number of possible ways that the
molecules can be positioned in the volume multiplies. For example, if we
consider two molecules, for every possible placement of the first, all possible
placements of the second are available. Thus, there are w1 ways of
locating the first molecule, and for each of these, there are w2 ways of
locating the second molecule. The total number of ways
of
locating the two molecules is w1w2 .
Neglecting the very small probability of having two molecules occupy
the same location, each molecule may go into any of the Vi/Vm locations,
and so the number of ways of locating N molecules in the volume becomes
Wi = wi^N = (Vi/Vm)^N. (Wi is not to be confused with work.)
Similarly, when the volume is increased to Vf, the number of ways of
locating N molecules increases to Wf = wf^N = (Vf/Vm)^N. The ratio of the
number of ways of placing the molecules in the volume for the initial and
final configurations is

Wf / Wi = (Vf / Vi)^N

If we now take the natural logarithm of this equation and multiply by
Boltzmann's constant, we find that

kB ln(Wf / Wi) = N kB ln(Vf / Vi) = nNAkB ln(Vf / Vi)

where we have used the equality N = nNA. We know from
Equation 19.11 that NAkB is the universal gas constant R;
thus, we can write this equation as

kB ln Wf - kB ln Wi = nR ln(Vf / Vi)     (22.16)
From Equation 22.13 we know that when n mol of a gas undergoes a
free expansion from Vi to Vf, the change in entropy is

Sf - Si = nR ln(Vf / Vi)     (22.17)

Note that the right-hand sides of Equations 22.16 and 22.17 are
identical. Thus, from the left-hand sides, we make the following important
connection between entropy and the number of microstates for a given
macrostate:

S = kB ln W     (22.18)
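A quick numerical consistency check of Equation 22.18 (added illustration; working with ln W directly avoids overflowing W = (V/Vm)^N):

```python
import math

# Check that the microscopic count W = (V/Vm)**N reproduces the
# macroscopic free-expansion result Delta S = n * R * ln(Vf/Vi).
kB = 1.380649e-23       # J/K, Boltzmann's constant
NA = 6.02214076e23      # 1/mol, Avogadro's number
R = kB * NA             # universal gas constant

n = 1.0                 # mol (example value)
N = n * NA              # number of molecules
Vi, Vf = 1.0, 2.0

dS_micro = kB * N * math.log(Vf / Vi)   # kB * ln(Wf/Wi) = kB * N * ln(Vf/Vi)
dS_macro = n * R * math.log(Vf / Vi)
print(f"{dS_micro:.6f} J/K vs {dS_macro:.6f} J/K")   # identical
```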
The more microstates there are that correspond to a given macrostate,
the greater is the entropy of that macrostate. As we have discussed previously,
there are many more microstates associated with disordered macrostates than
with ordered macrostates. Thus, Equation 22.18 indicates mathematically that
entropy is a measure of disorder. Although in our discussion we used the
specific example of the free expansion of an ideal gas, a more rigorous development
of the statistical interpretation of entropy would lead us to the same
conclusion. We have stated that individual microstates are equally probable.
However, because
there
are far more microstates associated with a disordered macrostate than with an
ordered macrostate, a disordered macrostate is much more probable than an ordered one.
Figure 22.18 shows a real-world example of this concept. There are two possible
macrostates for the carnival game—winning a goldfish and winning a black fish.
Because only one jar in the array of jars contains a black fish, only one
possible microstate corresponds to the macrostate of winning a black fish. A
large number of microstates are described by the coin’s falling into a jar
containing a goldfish. Thus, for the macrostate of winning a goldfish, there
are many equivalent microstates. As a result, the probability of winning a
goldfish is much greater than the probability of winning a black fish. If there
are 24 goldfish and 1 black fish, the probability of winning the black fish is
1 in 25. This assumes that all microstates have the same probability, a
situation
that may not be quite true for the situation shown in Figure 22.18. For
example, if you are an accurate coin tosser and you are aiming for the edge of
the array of jars, then the probability of the coin’s landing in a jar near the
edge is likely to be greater than the probability of its landing in a jar near
the center. Let us consider a similar type of probability problem for 100
molecules in a container. At any given moment, the probability of one molecule
being in the left part of the container shown in Figure 22.19a as a result of
random motion is 1/2. If there are two molecules, as shown in
Figure 22.19b, the probability of both being in the left part is (1/2)^2, or
1 in 4. If there are three molecules (Fig. 22.19c), the probability of all of
them being in the left portion at the same moment is (1/2)^3, or 1 in 8. For
100 independently moving molecules, the probability that the 50 fastest ones
will be found in the left part at any moment is (1/2)^50. Likewise, the
probability that the remaining 50 slower molecules will be found in the right
part at any moment is (1/2)^50. Therefore, the probability of finding this
fast-slow separation as a result of random motion is the product
(1/2)^50 × (1/2)^50 = (1/2)^100, which corresponds to about 1 in 10^30. When
this calculation is extrapolated from 100 molecules to the number in 1 mol of
gas (6.02 × 10^23), the ordered arrangement is found to be extremely
improbable!
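The same arithmetic in a few lines of Python (added illustration): the probability of an ordered arrangement of N independent molecules falls off as (1/2)^N.

```python
import math

# Probability of an ordered arrangement of N independent molecules,
# each equally likely to be in either half of the container: (1/2)**N.
for N in (1, 2, 3, 100):
    log10_p = -N * math.log10(2.0)
    print(f"N = {N:>3}: probability ~ 10^{log10_p:.1f}")

# N = 100 gives about 10^-30.1, i.e. roughly 1 in 10^30, matching the
# fast-slow separation estimate in the text; for 1 mol the exponent is
# about -1.8e23, utterly negligible.
```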