Saturday, March 15, 2014

Critique of the Universe


CRITIQUE

of

THE UNIVERSE

 

The Friedmann Equations, FLRW Metric,

Hubble’s Law, Acceleration and Dark Energy

 

Note that using the “Standard Model of Cosmology”, with its necessary assumption of a Hubble constant, Ho, to compute an accelerating universe expansion rate (a result under which Ho would not in fact be constant) is self-contradictory.

The Friedmann equations (FE) and the Friedmann/Lemaitre/Robertson/Walker (FLRW) metric upon which the FE are based have been described as a “first approximation”, but this is misleading. Both the FE and the FLRW metric, as they are always presented, contain so many layered, unstated assumptions as well as some acknowledged “guesstimates” that the result is not merely a “first approximation” but a blatant mistake. The hidden postulates and unstated assumptions are far more important than the explicit ones. 

The Introduction and Solutions sections of Wikipedia’s articles on the FLRW metric and the FE describe the substrate matrix and substance of the universe together as comprising a fluid. Due to the required properties (such as frictionlessness) of any such substrate matrix for the whole universe, what they mean by a “fluid” is really an “ideal” or “perfect” fluid. Both matter/energy and the spacetime continuum under general relativity comprise this fluid1, which can only be described as an ideal gas2, simply because no other metaphor is appropriate. 

The universe is described as comprising a continuum, an entity which is really unlike any solid, liquid or gas. Therefore, to refer to the universe and the continuum together as expanding without limit, or as having a pressure or density per se, is pure expedience. Einstein described the space-time continuum as a matrix of infinitesimal (infinitely small, infinitely closely spaced and infinitely numerous) massless particles having no mutual affinity3 (so it must have the nature of an infinitely deep fractal chaos, completely unlike a gas). Yet the universe appears to be indefinitely compressible and, as is demonstrated by its extreme expansion4, it is observed to be indefinitely decompressible as well. So, any reference by the FE to a “ground” having mass density, ρ or rho, and a pressure, p5, must indeed refer to a system comprised of a putative ideal gas, because this sub-floor would then meet the definition. This is especially so in the absence of any qualifier. Inadequate as it may be, there simply is no alternate interpretation. 

Semantic arguments will not fix the flaws inherent in the fully explicit, complete definitions of the FE/FLRW. The ideal gas model also presumes that this system is bounded and that it is at equilibrium. It presumes it is bounded because, if it is not, it cannot be at equilibrium. The ideal gas law and its corollaries simply do not apply to any system that is not “stationary”. So, the equilibrium condition is a prerequisite. Then, indeed, the universe must also be timeless. 

As mentioned above, this ideal gas must be frictionless: it must be an explicitly defined superfluid6. Altogether, on the surface, the ideal gas model seems not to be such a terribly bad approximation since the vast majority of the volume of the universe is filled with hydrogen with a little helium thrown in7, all embedded in superfluid spacetime. Assuming we can mix the spacetime continuum and its “content” this way, as if the content is separate and apart, this mixture exists at an extremely low pressure, which would facilitate the ideal gas approximation. The average density of the universe is thought to be only about 6 atoms or molecules per cubic meter8. 
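
As a rough illustration of just how low that pressure would be, here is a minimal sketch applying the ideal gas law in the form p = n·kB·T. The inputs are assumptions chosen only for illustration: the roughly-6-particles-per-cubic-meter figure quoted above and, for want of a better choice, the 2.7 K temperature of the cosmic microwave background.

    # Minimal sketch: ideal-gas pressure of the universe's dilute matter content.
    # Assumptions (for illustration only): n = 6 particles per cubic meter,
    # T = 2.7 K (the CMB blackbody temperature), and that p = n * k_B * T applies.

    k_B = 1.380649e-23   # Boltzmann constant, J/K
    n = 6.0              # assumed number density, particles per cubic meter
    T = 2.7              # assumed temperature, kelvin

    p = n * k_B * T      # ideal gas law in the form p = n k_B T
    print(f"p ~ {p:.2e} Pa")
    # Roughly 2e-22 Pa, about 27 orders of magnitude below sea-level
    # atmospheric pressure -- dilute enough to tempt an ideal gas treatment.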

In an expanding universe, stars, galaxies, nebulae, planets and people may be considered to be just along for the ride because according to the approximations made - strictly for the sake of computational tractability - they are not supposed to comprise a significant new condensed phase9.
 
That condensed or more highly compressed phases are supposed to constitute an insignificant contribution to the nature and properties of the universe is summarized by the so-called “Cosmological Principle” (CP). The total density of the universe may be considered identical with an average density, as if the condensed matter is all smeared out evenly over the whole volume. In other words, the CP approximation says that there really are no such things as stars, galaxies, nebulae, planets or people. And, in effect, this approximation imputes mass to the continuum. This might have useful implications, but it is not part of the general theory of relativity, having no experimental basis.
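
As a toy illustration of this smearing-out step (the numbers are arbitrary, chosen only to show the bookkeeping), one dense clump sitting in an otherwise almost-empty volume is replaced by a single uniform average density:

    # Toy sketch of the CP "smearing" approximation: the mass of one dense
    # clump plus a dilute background is spread evenly over the whole volume.
    # All numbers are arbitrary illustrative values.

    volume_total = 1.0e9          # arbitrary volume units
    clump_mass = 5.0e3            # mass of the single dense "galaxy" clump
    background_density = 1.0e-6   # mass per unit volume of the dilute gas

    mass_total = clump_mass + background_density * volume_total
    average_density = mass_total / volume_total
    print(f"average (smeared) density = {average_density:.3e}")
    # The FE/FLRW model then works with this single number, as if the
    # clump -- the star, galaxy or person -- were not there at all.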

Moreover, the CP says that everywhere one looks, in any direction, from any location in the universe, in this homogeneous, rarefied soup of mainly hydrogen and helium embedded in a relativistic spacetime matrix, the view is exactly the same. In reality, the universe may be very lumpy and heterogeneous, but this detail is handled separately by the “new standard model of precision cosmology” as an ugly ad hoc add-on. In other words, a lumpy, bumpy model is superimposed upon the smooth and creamy model to make a sort of stack of pancakes - with butter and maple syrup too, no doubt. 

“In a strictly FLRW model, there are no clusters of galaxies, stars or people, since these are objects much denser than a typical part of the universe. Nonetheless, the FE-FLRW model is used as a first approximation for the evolution of the real, lumpy universe because it is simple to calculate, and models which calculate the lumpiness in the universe are added onto the FLRW model as extensions.” from Wikipedia, The FLRW Metric10

Needless to say, there are a few problems with these approximations and assumptions. The most severe problem is that the “consensus” of cosmologists (and Wikipedia editors too) does not recognize any of the problems. Conventional wisdom among astrophysicists and amateurs alike does not treat the FE-FLRW “standard model” with nearly enough skepticism. 

The Trouble with All “Models”  

Models are always designed to simplify and sometimes even to oversimplify. Otherwise, they would be called “ab initio” exact descriptions. At best, this model is indeed a “first approximation”. The math is unsolvable if the lumpiness of the real universe is included, so this “detail” is left out when computing numerical properties of the universe like density and pressure, and even when interpreting redshifts. So, nobody should be surprised when queer, circular conclusions are reached, such as an accelerating expansion rate and “dark energy”. 

One trouble is that the condensed or relatively compressed or compact matter in the universe constitutes new phases which, over the billions of years that the universe has existed, must be expected to behave as if they had a vapor pressure. They are not inert. 

They are not degenerate either. So, a mixture containing a gas and these other more compact phases cannot be treated like a pure virgin gas. Any amount of an active compact or condensed phase, no matter how small, a physical chemist will say, will upset pressure and density calculations for any kind of a putative gas11.
 
Plus, any physicist will aver that a dispersed suspension within any kind of fluid containing more than one phase is expected to have altered bulk properties like the way it transmits light and other energy. This will be true even if these other phases are indeed actually otherwise inert. In the universe, the percentage of additional phases may be very small, but the effects that are to be detected are also very small. This should be of concern to “modern precision cosmologists”. 

Using the ideal gas law does indeed require one to presume that the system is at a stationary timeless condition of equilibrium. But, the universe is demonstrably not at equilibrium. This fact not only upsets the direct mathematical approximations, but it seriously upsets any indirect theoretical physics of all such oversimplified models that presuppose the ideal gas law, like the Friedmann equations. 

Equilibrium
 
The FE/FLRW model indeed requires one to presume that the fluid system, regardless of type, must be truly at equilibrium, including uniform constant temperature, because it refers the system to a “state” variable, w. Therefore, since the universe must always be in some particular state, a value for an equilibrium constant, K(eq), may be defined. 

Repeating, the universe is demonstrably not at equilibrium. Also, if it is insisted, for the sake of argument, that the universe is really held to be at equilibrium at all times, strictly speaking, then its processes must be held to be thermodynamically reversible. This is huge: the real, practical and eschatological thermodynamic implications of this are stunning. If it is held, for the sake of argument, that these auxiliary implications simply do not apply, then this constitutes another colossal set of assumptions. 

Equilibrium also means that the magnitude of the equilibrium constant points to whether the system proceeds toward completion of the implied process or “reaction”, that is, toward the final yield of “products”, or else stays close to the initial stage composed mainly of beginning substances, phases or “reactants”. 

Thus

 

M0(s) ↔ M0(g) .

 

In other words, according to Alan Guth’s inflationary scenario, the “inflaton”, a potentially huge parcel or domain of greatly excited false vacuum, an infinitely dense solid clot of pure energy that must arise probabilistically, call it M0(s), must ultimately decay to its final ground state, a true vacuum, labeled M0(g). At the finish, the ground state comprises nothing else but the simple vacuum except, perhaps, a stray degenerate photon.  

The equilibrium constant would then be written 

K(eq) = [M0(g)] / [M0(s)] ,
 
where the brackets denote concentration, partial pressure or partial density. 

With the bracketed quantities in natural units, a vast volume containing a unit “quantity” of the true vacuum denotes the final product “density” or concentration, [M0(g)]. This is divided by a unit quantity of matter, [M0(s)], the clot of pure energy from Guth’s super-dense, ultra-intense solid energy “inflaton” point particle. This is expressed as  

K(eq) = 1/1 = 1 . 

From its intermediate magnitude alone, a process physicist or physical chemist would know that this nonzero, modest value neatly admits that the evolution of the universe should obviously still be continuing. The implication is that it will take a long time to reach equilibrium. So, both the stated and hidden FE/FLRW assumptions will actually be met only at such time as this equilibrium condition becomes a reality, perhaps more than 2 trillion years hence. By their own definitions, the hidden assumptions pertain neither to our real universe at this moment nor to any time in the near future. 
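
The arithmetic just described can be sketched in a couple of lines. The unit “concentrations” below are the hypothetical natural-unit quantities from the paragraph above, not measured values:

    # Sketch of the equilibrium-constant argument, using the hypothetical
    # unit quantities (in natural units) described above.

    conc_true_vacuum = 1.0   # [M0(g)], the final "product"
    conc_inflaton = 1.0      # [M0(s)], the initial "reactant"

    K_eq = conc_true_vacuum / conc_inflaton
    print(f"K(eq) = {K_eq}")
    # An intermediate magnitude (~1) means neither side of M0(s) <-> M0(g)
    # dominates, so the implied process would still be far from completion.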

Eternal Homogeneity

Universe homogeneity is part of the so-called “Cosmological Principle” (CP). The CP approximation says that there really are no such things as stars, galaxies or people; there are no relative concentrations of matter. Another unstated presumption is that this condition must have persisted throughout time - since the beginning - or else the other affected assumptions will never have had a chance to initialize. That is to say, the crucial characters most typical of the deep past cannot possibly obey the implications of the CP because the expansion process itself must interfere. 

That is, CP homogeneity maintains that such characters should be considered to typify, help or epitomize the universe’s evolution, but cannot have evolved themselves. Yet, the crucial features of the universe really do evolve. This is what cosmology is all about! So, as another contradiction in terms, eternal homogeneity is impossible. 

The so-called “perfect”, timeless CP is not different from the true and complete definition of the putatively less stringent CP, because whenever we look at distant celestial objects we look far back in time. Any statement of the CP must assume this as a sort of “timelessness”. We deal with spacetime, after all, when we make astronomical observations. The ideal gas model assumes the same sort of timelessness and so does not constitute a dynamic model at all. Far from it. It treats a dynamic process as if it were static, another hidden postulate. 

So, homogeneity, as here discussed, is not a property of the universe. The universe simply is not homogeneous. This assumption is totally bogus. Why should it be relevant that the universe could be considered to be homogeneous on scales of more than 100 megaparsecs? What has scale got to do with it? Whatever the answer, we must “Prove it.” 

Prove it, consistent with the scientific method, with explicit falsifiable premises and without a trace of circular logic. This cannot be done. The universe is either homogeneous or it is not; there is no middle ground. 

Plus, there are structures in the universe that are larger than 100 megaparsecs, such as several “great walls” or “sheets” of galaxy clusters and superclusters12. Furthermore, every spiral galaxy has a supermassive black-hole in its core that possesses a hyperbolic gravitational potential around it. This is due to the relativistic nature of black-holes, which are supposed to exist as gravitational or spacetime singularities. 

Because of this detail of its relativistic geometrical nature, a gravitational singularity must possess a hyperbolic gravity field. Hyperbolic gravitational potentials (proportional to 1/r) do not fall off toward zero nearly as rapidly as the normal parabolic potentials of Newton’s law (proportional to 1/r²). Hyperbolic supermassive black-hole potentials extend to infinity, or at least a lot farther than 100 megaparsecs. They could actually explain “dark matter” (DM) because the presence of the galactic disc means that the potential falls off very slowly indeed (as 1/r + 1/r²). These debate points mean that the universe is actually as heterogeneous as Swiss cheese. 
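
To make the fall-off comparison above concrete, here is a minimal sketch that tabulates a 1/r term, a 1/r² term, and their sum at a few radii. The radii and unit strengths are arbitrary illustrative choices, not fitted to any real galaxy:

    # Sketch comparing the fall-off of the two potential shapes discussed above:
    # a "hyperbolic" 1/r term, an inverse-square 1/r**2 term, and their sum.
    # The radii and unit strengths are arbitrary illustrative choices.

    radii = [1.0, 10.0, 100.0, 1000.0]   # arbitrary units

    print(f"{'r':>8} {'1/r':>12} {'1/r^2':>12} {'1/r + 1/r^2':>14}")
    for r in radii:
        hyperbolic = 1.0 / r          # falls off slowly with distance
        inverse_sq = 1.0 / r**2       # falls off much faster
        combined = hyperbolic + inverse_sq
        print(f"{r:8.0f} {hyperbolic:12.6f} {inverse_sq:12.6f} {combined:14.6f}")

    # At r = 1000 the 1/r term is still 0.001 while the 1/r**2 term has
    # dropped to 0.000001 -- the slow fall-off the text is pointing at.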

Furthermore, if it is seen that DM is a result of this hyperbolic field, it would confirm that black-holes really are relativistic singularities. Some of the competitors to Einstein’s theory might thus be eliminated. So, we would not need “M theory” or “Supersymmetry”. It would be so sad if there were no such thing as an ultra-massive Higgs boson or a DM particle. Astrophysicists may be so willing to uncritically accept the FE/FLRW model, dark energy (DE) and DM because so many of their colleagues’ whole careers depend on them. 

Isotropism 

And, we cannot assume that the universe is isotropic either. George Ellis has repeatedly maintained and others have also pointed out that, since we cannot observe the universe in the direction of the plane of our galaxy, we cannot be sure that it is isotropic. Even if we could observe in that direction, our light horizon may not extend far enough to confirm that we are not inside a huge cosmic void. Interpolation to fill in the blocked zone must use all sorts of assumptions that subvert what we would be trying to define. If we are in a void, redshift measurements at very large distances will be skewed to appear as if the universe’s expansion rate is accelerating13. 

Saul Perlmutter and Adam Riess 

Saul Perlmutter and Adam Riess each claim to head up independent teams of researchers that have both uncovered evidence for accelerating expansion and dark energy. But their efforts were not independent; they were a concerted collaboration14. 

But this observation of acceleration, made by Saul Perlmutter15 et al. and Adam Riess16 et al., is the result of assuming yet another item. They presume that the Hubble constant is, ever will be and surely always has been truly constant. Because they insist on this unstated premise, they multiplied the distance-modulus versus redshift data for nearer, not-so-old type Ia supernovae by a fudge factor to bring this data into line with data obtained for much more distant and older supernovae. This produces a nice straight line and irons out the kink that embarrassingly shows between segments of a simultaneous plot of the two data sets. Then the slope of the straight line is artificially made constant, as data for a well-behaved Hubble diagram should be. It does not matter to either of them that this kink in the straight line between the linked data sets could just as well denote an unresolved systematic error. 

In an exercise of pure faith, this fudge factor alone, all by itself, is said to indicate acceleration17 because its sign is positive. 

Yet, one could just as well apply a fudge factor to the more “ancient” supernova data and bring this segment of the curve into alignment with that of the “younger” SNe Ia data. Then, the fudge factor would say that the universe is decelerating, not accelerating. That the whole argument for acceleration depends on the arbitrary application of a manufactured fudge factor is very disturbing. 
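
Purely as an illustration of this debate point, here is a minimal sketch with made-up numbers (not real SNe Ia measurements): two segments of a distance-modulus-like quantity that meet at a kink, and the same additive “fudge” applied either to the near segment or to the far one. Both choices iron out the kink equally well; only the choice of segment fixes the sign of the correction.

    # Sketch of the "kink" argument using made-up numbers (not real SNe Ia data).
    # Two segments of a distance-modulus-like quantity, with an offset (the
    # "kink") where the near and far data sets join.

    near_y = [1.0, 2.0, 3.0, 4.0, 5.0]   # nearer, younger supernovae (arbitrary units)
    far_y = [5.3, 6.3, 7.3, 8.3, 9.3]    # farther, older supernovae, offset by +0.3

    kink = far_y[0] - near_y[-1]         # mismatch at the join: +0.3

    # Option A: shift the near segment up to meet the far one.
    near_adjusted = [y + kink for y in near_y]
    # Option B: shift the far segment down to meet the near one.
    far_adjusted = [y - kink for y in far_y]

    print(f"kink at the join: {kink:+.2f}")
    print(f"option A join: near {near_adjusted[-1]:.2f} vs far {far_y[0]:.2f}")
    print(f"option B join: near {near_y[-1]:.2f} vs far {far_adjusted[0]:.2f}")
    # Either adjustment removes the kink; the sign that gets interpreted
    # depends entirely on which segment one chooses to "correct".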

It is another case of intellectual recklessness within the debate. And the media, including Wikipedia editors, go for this bait. 

With amazing hubris, both Perlmutter and Riess claim that theirs is the only good Hubble constant data that has ever been obtained18. But their distance-modulus versus redshift data for SNe Ia were calibrated using data obtained from analysis of Cepheid variable stars. Earlier modern Hubble constant determinations were also obtained using Cepheid variable stars, so the earlier data should be every bit as good. But all previous determinations show that Ho is not constant and is decreasing with time, not increasing19. 

Dark Energy 

Now, the FE/FLRW model implicitly refers to the whole universe as if it were a homogeneous laboratory subsystem. The FE must allow that energy or thermodynamic work, w, can be done upon the universe and that it can do work on its parent system. This presumption that the universe must do work is used to conclude that a perceived acceleration in the rate of expansion means that there is such a thing as “dark energy” (DE). But this system/subsystem implication is never mentioned by anybody. It is yet another hidden presumption. 

DE purportedly follows from the idea that there must be a real, discrete value for w. It must actually be less than zero because the above-mentioned pressure, p, is putatively decreasing and has actually become negative in the recent epoch due to “acceleration”.
 

Negative pressure always implies external suction. Blatant, explicit reference to an external influence must be avoided, however, since even amateur cosmologists will hesitate to infer that there is, in fact, an “outside” to the universe. So, “-p” must be internalized by characterizing it as a result of some kind of DE, not really externally applied w. 

That is, the so-called “equation of state” (the stationary-condition summary equation for this ideal gas) of the universe is derived from a form of pv = nRT (the ideal gas law) and basic thermodynamics. Simplifying the development of the idea, pv = w (remember, w is work, a kind of energy, and pv is the work that is done on the gas, or could be done by the gas, if it were compressed or allowed to expand to or from nearest to zero pressure and nearest to infinite volume), and v = 1, the current and ongoing value for the putative volume of the universe in natural units.

That v = 1 at all times is a gross distortion too. If v = 1 today, it cannot have been equal to 1 six or eight billion years ago, but this is what such a simple timeless model must assume. Some have called the FE/FLRW model a “dynamic” model. This must be some kind of joke, for the FE are anything but dynamic in nature. 

Time enters the FE/FLRW model only as a so-called scale factor, “a” or “R”. The “scale factor”, in natural units, is used only to compute a Hubble parameter, H, which is equal to 1 only when time is equal to 1, meaning “the present”. And what, exactly, is H then used for? So, time dependence is introduced despite the contrary implication of timelessness in all the other hidden postulates, unstated assumptions and given definitions. “Self-consistent” is not an adjective that can be used to describe the FE/FLRW model.
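
For concreteness, here is a minimal sketch of what the scale factor does to H, assuming (purely for illustration) the textbook matter-dominated form a(t) = t^(2/3), normalized so that t = 1 is “the present” and H is quoted relative to its present value:

    # Sketch of the scale factor "a" and the Hubble parameter H = (da/dt)/a
    # in natural units. The form a(t) = t**(2/3) is an illustrative assumption
    # (the textbook matter-dominated case), not a claim about the real universe.

    def a(t):
        return t ** (2.0 / 3.0)

    def hubble(t, dt=1e-6):
        """H = (da/dt)/a, with the derivative taken numerically."""
        a_dot = (a(t + dt) - a(t - dt)) / (2.0 * dt)
        return a_dot / a(t)

    H_now = hubble(1.0)                      # value at t = 1, "the present"
    for t in (0.25, 0.5, 1.0, 2.0):
        print(f"t = {t:4.2f}   a = {a(t):.3f}   H/H(now) = {hubble(t) / H_now:.3f}")
    # H/H(now) equals 1 only at t = 1 and falls steadily with time for this
    # choice of a(t): the Hubble "constant" is constant across space, not
    # across time.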


The scale factor concept is said to follow from the approximations of homogeneity and isotropism inherent in the CP. Nobody ever mentions exactly how it follows logically. They do not dare because it certainly does not so follow without still more hidden postulates and unstated assumptions. If more layers of approximations and assumptions are not to be added, then it is really just another ad hoc add-on.


Now, if the expansion rate is supposed to be accelerating, Ho is not constant and then 

w = -p , 

as if suction is being applied to the universe. 

But, since we must deem this to be impossible, we cannot allow any contribution to p to be negative; then

 

w ≤ 0 , 

which, by its sign and the definition of w, says that this work energy (if there is any) must come from within the universe. Whence does it issue?20

To satisfy the conservation law, there must always have been an untapped and invisible reservoir of such “dark” energy. So, it must also have an impalpable, unmeasurable mass-equivalent. Except for the stack of assumptions that have been accumulated, this is said to solve “the missing mass problem”. Being unmeasurable is considered to be only a technicality. 

But, technicalities are what science is all about. To sidestep this problem, it has been seriously suggested that the scientific method should literally be dumped21. 

However, though it is not really this simple, and purely as a debate point, one might insist that by telescopically observing phenomena in the universe as far back in time as 8 to 10 billion years, well over its half-life as given by Hubble’s Law, this current, latter phase is seen to have taken quite a bit longer than the former. Yet things are shown not to have really changed that much since then: the whole process has clearly progressed only marginally in over 8 billion years. Up until more than 10 billion years ago, by contrast, the changes must have been dramatic, and those earlier spectacular changes took place in less than 4 billion years. 

So, the whole process, including the expansion rate, must actually be decelerating. This is pure logic, an inescapable mathematical certainty. But, many cosmologists somehow insist that the expansion rate is indeed currently accelerating due to DE.  

Some of these points may be contradicted by a good debater. But any one of them will demolish the FE/FLRW. All these critical points would have to be refuted simultaneously in defense of the FE/FLRW and DE. This cannot be done without self-contradiction. 

It will be very bad for science if we have to backtrack again on such matters as accelerating Hubble expansion and dark energy. Scientists should be absolutely triple certain before they make even preliminary pronouncements to the Press. 

But, as Lev Landau, the Nobel laureate physicist, said: “Cosmologists are always wrong, but never in doubt.”22 

BIBLIOGRAPHY 

1  The composition of the Friedmann Fluid is a mixture of ideal gases that includes the spacetime continuum. See Wikipedia, Friedmann Equations – Mixtures; also http://en.wikipedia.org/wiki/Equation_of_state_%28cosmology%29 and references therein.

 

2  Ideal gas – see any good college or high school chemistry text, or Wikipedia.

3  Albert Einstein, The Collected Papers, vol. 6, “The Foundation of the General Theory of Relativity”, Doc. 30, Princeton University Press, 1997.

4  Hubble expansion – search “Hubble’s Law” or “Edwin Hubble”, or see Wikipedia; otherwise, this is common knowledge, or see http://astrosun2.astro.cornell.edu/academics/courses//astro201/hubbles_law.htm

5  Friedmann equations, variables p and rho – see any detailed treatment of the FE, or see Wikipedia.

6  Spacetime as a superfluid – see ref. 3, and T. Y. Thomas, “The Perfect Relativistic Gas”, Proceedings of the National Academy of Sciences of the United States of America, Vol. 51, No. 3 (Mar. 15, 1964), pp. 363-367. Published by the National Academy of Sciences.

There are literally thousands of internet search hits when a search is done on spacetime superfluid, spacetime ideal fluid, spacetime perfect fluid, spacetime ideal gas, etc.  

7  The majority of sensible matter (by volume) in the universe is hydrogen, H or H2, with a little He – search “composition of the universe”, e.g. http://map.gsfc.nasa.gov/universe/uni_matter.html

8  Average matter density of the universe – search on the phrase.

9  The Cosmological Principle as an approximation – almost any search on “critique of the cosmological principle” will reveal statements referencing this.

10  Wikipedia, “FLRW Metric”; see also ref. 1.

11  “Gas laws” do not work in the presence of “condensed phases” – search on these phrases; an explicit statement is very difficult to find. However, a search on “inhomogeneous or heterogeneous equilibrium” may give better results. One can write an equilibrium constant expression for the spacetime continuum and matter/energy phases in the universe. Clearly, the ideal gas model makes no room for such an addendum to its mathematical description of the universe. But such an equilibrium expression would itself be a huge approximation because the universe is definitely not at equilibrium.



14  http://arxiv.org/abs/astro-ph/9804065    Snapshot Distances to Type Ia Supernovae -- All in “One” Night's Work    Adam G. Riess, Peter Nugent, Alexei V. Filippenko, Robert P. Kirshner, Saul Perlmutter  

15  http://arxiv.org/abs/astro-ph/9901052   Constraining dark energy with SNe Ia and large-scale structure   Saul Perlmutter (LBNL), Michael S. Turner (Chicago/FNAL), Martin White (UIUC)

16  http://arxiv.org/abs/astro-ph/9807008   Results from the High-Z Supernova Search Team   Alexei V. Filippenko, Adam G. Riess

17  Fudge factor or “adjustment” – http://arxiv.org/abs/astro-ph/0201034   Michael Rowan-Robinson

18  Perlmutter and Riess, “The best Hubble constant data to date.” (quote)

19  http://www.lonetree-pictures.com/    The data for the diagram showing the linear drop in the universe’s expansion rate with time are obtained from numerous observations of the Hubble constant at various distances from Earth. The data are converted to natural units. The original data can easily be obtained by searching on the term “Hubble diagram”.

20  Self-contradiction is the hallmark of the FE/FLRW model. On one hand, the ideal gas model requires that the universe be bounded, with variable volume, and at equilibrium at all times. If there are to be changes, they must occur in infinitesimally and immeasurably small increments. The equivalent statement is that there must be no measurable turbulence. The universe changes only in measurably large increments, and it is notoriously turbulent.  

On the other hand, for w to be negative while leaving p as a positive quantity implies that the universe is unbounded, with a heretofore unrecognized internal or potential energy, contrary to the model itself. So, if contradictions can be overlooked, this really means that we must still be in a Guthian “false vacuum” state. Then the whole universe must be describable by reference to a time-dependent Schroedinger-like equation. 

Physics will never accept that we have been and continue to be part of a purely quantum object. Nor will it accept that there are two discrete, equally valid sides to the coin of reality: quantum and relativity theories. The Holy Grail of a grand unified theory or a theory of everything must exist. It was Einstein’s dream, after all. “M Theory” must be real, say theoretical physicists.  

Physics will never just “let it be”. Theoretical physicists will try to force relativity and quantum theories to meld into a single great façade simply because they can, not because it may afford us any higher degree of truth. It is mindless seeking for seeking’s sake. Theories that are so much harder to understand and to use really constitute no improvement. They simplify nothing while complicating everything. This is not what science should be about. It is not what art is about either. The essence of art is to know when to stop. 

21  "The scientific method should be dumped" - quote 

22  Lev Landau quote.

 
