Saturday, March 15, 2014

Dark Energy, Schmark Energy


Extraordinary claims require extraordinary proof 


There is no excuse for the current mad rush to a messy consensus and the cockeyed conclusion that some weird stuff some call dark energy makes up the bulk of the universe. Dark energy is truly miraculous and qualifies as a supernatural explanation of deep space observations! 

Since there is no experiment that could confirm or deny the existence of dark energy, it cannot be considered a contender. Extraordinary proof is out of the question, yet it is among the most extraordinary claims ever made. Its disqualification as a viable hypothesis under the rubric of the scientific method means that we must start from scratch.

Those with the ability must recheck the main assumption that underlies the dark energy hypothesis. Let the theoretical chips tumble where they may and the intellectual trees drop where they will.

The dark energy hypothesis is designed to explain just one observation, currently looked upon from the so-called "weird science" perspective. This particular peek at reality rests on just one interpretation of the real distances to Type Ia supernovae: they are now interpreted to be farther away than their redshifts would indicate. But the estimate of the most serious interference with this interpretation, namely that there may be much more or much less "grey" dust in the early universe than was initially thought, must be in error.

The current calculation, both observational and theoretical, is that there was not enough extra dust to cause the observed dimming. But the effect itself rests on an interpretation that may also be wrong.

This apparent "phenomenon" is that since these exceedingly distant supernovae are farther away than the distances indicated by their redshifts, then the universe "must" have begun expanding at an accelerating rate in recent epochs. The only thing that we seem able to think of that might explain this strange interpretation is this unscientific postulate of dark energy.
 

For various reasons, all subsequent observational interpretations may well have been biased to favor the assumption of dark energy to begin with, so it may not be too surprising that they do indeed tend to prove it. 

There has been no effort to go back and reinterpret "grey dust" observations, nor to recalculate the theoretical prevalence of "metals" in the earliest phases of the universe's development. If the mathematical assumptions are changed sufficiently, the concentration of heavier nuclei could easily rise to the point that more dust formed than is now thought. Presto! There was much more dust present to dim the images, and the supernovae in question are as nearby as their redshifts indicate. The manner in which the assumptions must be jiggered may well lead to a bigger paradigm shift, a more profound revolution, than dark energy itself. If weird science is what governments will pay for, a bonanza is waiting to be won.

Furthermore, gravitational redshifts have been totally neglected. Relativistic redshift is distinct and is applied to the data. But the young universe was very much denser, and its far more intense gravitational field would have influenced any light traveling on a trajectory that would tend to have it escape, that is, end up in our eyes.

Why there is not much more skepticism about the contradiction between luminosity distances and redshift distances in the first place completely escapes me! 

Still, some critics of dark energy do not seem to have a clue. For instance, some confuse Doppler redshifts with relativistic redshifts. It makes a difference! 

Regarding those extremely distant Type Ia supernova results, there is indeed much more to relativity than relativistic redshifts. The gravitational field in the early universe must have been VERY much greater than it is now. So an additional Einsteinian gravitational redshift should be added to all light traveling on an "outward" or escape trajectory, that is, toward our eyes in space and time. But this is not done. If this correction were carefully applied, then the redshift data would be translated to the same coordinates as the dim supernova results and the discrepancy would disappear.
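To make the bookkeeping concrete, here is a minimal sketch of how such a correction would enter the arithmetic. The first relation is the standard gravitational redshift for light escaping from radius r in the field of a mass M; the second is the standard rule that independent redshift contributions combine multiplicatively. (Applying this to a dense early universe, as urged above, is a conjecture, not an established procedure.)

$$ 1 + z_{\mathrm{grav}} = \frac{1}{\sqrt{1 - \dfrac{2GM}{rc^{2}}}}, \qquad 1 + z_{\mathrm{obs}} = (1 + z_{\mathrm{cosmo}})(1 + z_{\mathrm{grav}}) $$

If z_grav were greater than zero, part of each observed redshift would be gravitational rather than cosmological, and the inferred redshift distances would shrink accordingly.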

Regardless, dark energy is not necessary. How then do we account for a very nearly flat universe if dark energy does not add to the total mass/energy inventory? 

Easy. 

The observable universe would be situated within a polydimensional surface or "brane" of an extraordinarily large black hole, or something much like a black hole, that may be in motion. Its total dimensionality could well be significantly higher than superstring/M-theory implies. There may be evidence of 11 dimensions in our universe because our very existence means that this, our cosmos, samples the superdimensional volume of the interior of this theoretically invisible "Dark Pit" or "Dim Seed".

Clearly, it would not be black. Still, no matter falls out of its event horizon, the surface where events occur, where things happen. No matter disappears into it, so it does not generate a whole spectrum of radiation on the way in. The Dark Pit would be completely beneath visibility. But we may deduce its presence and, perhaps, devise tests for the hypothesis of its existence.

If we cannot do this, then it is a nonstarter and the idea must be rejected, in strictest accordance with the scientific method. 

The mass of the Dim Seed or Dark Pit would be many times the total estimated mass of the observable universe, whatever is required to make up the total energy/mass deficit. The curvature of its surface would be very nearly flat, as theory and observation require, but not exactly flat. In other words, the tiniest variations from flatness, though they may hide within the error tolerances of measurements, could be real.
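As a back-of-the-envelope check that the scales involved are not absurd (illustrative arithmetic only, using a rough figure of 10^53 kg for the ordinary matter of the observable universe):

$$ r_s = \frac{2GM}{c^{2}} \approx \frac{2 \times 6.67\times 10^{-11} \times 10^{53}}{(3\times 10^{8})^{2}}\ \mathrm{m} \approx 1.5\times 10^{26}\ \mathrm{m} \approx 16\ \text{billion light-years}, $$

which is of the same order as the Hubble radius of about 14 billion light-years. A body many times more massive would have a correspondingly larger, and locally flatter, horizon.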

In order to make its surface flat enough without exceeding the mass limit implied by the matter/energy shortfall, it would be spinning very fast. The residual anisotropy that has in fact been discovered, the part not accounted for by our laboratory's proper motion through the cosmos, could be due to this rotation. This means that our universe would be spinning too, of course, though not necessarily at the very same rate or in the same direction. By analogy, the spin rates of the sun are nonuniform. Precise measurements of the residual cosmic anisotropy (RCA) could nail down the probable spin rates. This idea may also have implications pointing to the real nature of mass and of matter itself.

This may imply that there was no Big Bang either. If the Dim Seed's dimensionality includes several orthogonal dimensions of time, its shape might still be like a flattened sphere or nearly flat pancake, but what is regarded as expansion of our universe, even acceleration of that expansion, may just be due to its turbulent rotation in time. With a correct, more detailed model, the properties of the Cosmic Microwave Background might be accurately predicted. Then the cyclic model of cosmogony may still make more sense.

The idea of multiple dimensions of time is not such an outlandish proposal. All spatial dimensions may just as well be described as time dimensions because of the limit on the speed of light. The word "spacetime" means equivalence between space and time just as surely as E=mc^2 means equivalence between matter and energy. In cases like this, it may be more enlightening to think of certain dimensions in terms of a time reference instead of a distance or length definition.

The Wiz's Dark "Horse Matter" is another even deeper and weightier mass of a different kind (and of a different color, of course). We all seem to like Alan Guth's inflationary universe scenario but we also seem to be unwilling to carry the implication of his ideas to their logical end. He postulates that the universe began as a quantum virtual particle that sprang into existence in an extremely high energy state largely because the whole field of spacetime was excited. It was a "false vacuum".  This quantum spacetime just naturally existed in such a very high energy excited state because, statistically, this is the most probable initial condition.

We cannot ignore the statistical nature of quantum mechanics here either. Analytically, not only must there be an antiparticle "out there" somewhere, the particle/antiparticle pair must generate interference particles, +/- and -/+. Statistically, they must be just as real as terra firma itself. For, as we all know, repeated observation, even of the surface of the Earth, gives only a statistical result. I can interpret what I see only because I have seen images like it several times before. I can regard a blue streak in an image as a river only because I have seen rivers up close and have even swum in them. If I compare my perceptions with those reported by others, the picture gains another layer of statistical importance. We gain a consensus only statistically.

Furthermore, since we imply by this line of thinking that there is a reality outside this universe, there must be particles naught- and naught+,  -naught and +naught, representing the zeroth, or "nonexistent" states. Such entities may be nonexistent only from the perspective of a certain aspect of the meta-cosmos of which ours may be merely a subuniverse.  

They would all have mass. They should still be "close" enough for their gravitational influence to be felt within our particular version of this ultraverse. If a count is made of the possible quantum states that this scenario implies, the inferred energy/mass inventory, as a percentage, comes out just right. 

All this can be easily expressed algebraically. I have a discourse on the quantum operator algebra of this concept that shows that it is rigorously logical. I will furnish a copy on request.

Or else (to say the same thing, actually) the Many Worlds "multiverse" hypothesis may be true. But the number of interference and mixed existence states may be much larger than what I imply above. The gravitational influence of Many Worlds would be felt within galaxies, which rotate faster than we think they should. Either of these two approaches leads to the conclusion that we observe only a small fraction of the matter that is really there. We cannot tell which, unless we do careful rotation rate studies on as many galaxies as we can.

I am sure this has all been thought of before, but it bears repeating, especially because it solves problems that we never anticipated that it would. For instance, it answers the question of why there is not any antimatter in our universe. Well, there is, except that all we can see of it is its contribution to gravitational effects on the rotation rates of most spiral galaxies.

And it gives us a solid mechanism to explain those amazing galactic rotation rate profiles explicit in the observations. These fit the concise mathematical treatment of the MOND effect (MOdified Newtonian Dynamics). Moreover, this mechanism does the job while preserving relativity.
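For readers who want to see the "concise mathematical treatment" referred to above, here is a minimal sketch of standard MOND phenomenology (Milgrom's "simple" interpolating function; this is textbook MOND, not the quantum-statistical mechanism proposed here). It compares the Newtonian circular velocity around a point mass with the MOND prediction, which flattens out at large radii. The toy mass M is an assumption chosen only for illustration.

```python
import numpy as np

G  = 6.674e-11    # m^3 kg^-1 s^-2, Newton's constant
A0 = 1.2e-10      # m s^-2, Milgrom's acceleration scale
M  = 1.0e41       # kg (~5e10 solar masses), toy galaxy treated as a point mass

def v_newton(r):
    """Keplerian circular velocity: v^2 = G M / r."""
    return np.sqrt(G * M / r)

def v_mond(r):
    """MOND circular velocity with the 'simple' interpolating function
    mu(x) = x / (1 + x), x = g / a0.  Solving g * mu(g/a0) = g_N for g
    gives a quadratic with root g = (g_N + sqrt(g_N^2 + 4 g_N a0)) / 2."""
    g_n = G * M / r**2                                 # Newtonian acceleration
    g = 0.5 * (g_n + np.sqrt(g_n**2 + 4.0 * g_n * A0)) # MOND acceleration
    return np.sqrt(g * r)

radii_kpc = np.array([1.0, 5.0, 10.0, 30.0, 100.0])
radii_m = radii_kpc * 3.086e19                         # kpc -> meters

for r_kpc, r in zip(radii_kpc, radii_m):
    print(f"r = {r_kpc:6.1f} kpc:  v_N = {v_newton(r)/1e3:6.1f} km/s,  "
          f"v_MOND = {v_mond(r)/1e3:6.1f} km/s")
```

In the deep-MOND regime the predicted velocity tends to the constant (G M a0)^(1/4), about 170 km/s for the toy mass used here: the flat rotation curve the text alludes to.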

If deep space observations constitute quantum statistical effects, such observations must include an aspect of Heisenberg uncertainty. The unseen multi-quantum or Many Worlds phantom galaxies would not all rotate on the same axis, nor in exactly the same positions or orientations. Surely, a simple, straightforward statistical distribution of directions, rates and orientations could be found that precisely duplicates the MOND phenomenon. A distribution of such distributions would enable hypotheses that explain exactly why some galaxies fail to show the effect. So, the exceptions would prove the rule, perhaps.

Remember, this discussion refers to both dark energy and dark matter. Reference is made to a hypothetical Dark Pit and to a quantum multiverse. They must be aspects of the same entity. But I have finally run out of words to describe it! Surely, if these ideas have any merit, superstring theory will have something to say about it. Can superstring theory handle a mere brane with 11 dimensions? Is this theory scalable? Am I talking about the ineffable M-theory?

Returning to reality, why neglect easy solutions like this in favor of complex and difficult ad hoc postulates that raise more questions than they solve? 

In future posts I will explore this and present my most favored model for the inflationary expansion of the universe. It shows how the data should properly be interpreted. 

The main thing, though, is that dark energy is unnecessary. 

 

Critique of the Universe



 

The Friedmann Equations, the FLRW Metric, Hubble's Law, Acceleration and Dark Energy

 

Note that using the "Standard Model of Cosmology", with its necessary assumption of a constant Hubble parameter, Ho, to compute an accelerating expansion rate, under which Ho cannot be constant, is self-contradictory.

The Friedmann equations (FE) and the Friedmann/Lemaitre/Robertson/Walker (FLRW) metric upon which the FE are based have been described as a "first approximation", but this is misleading. Both the FE and the FLRW metric, as they are always presented, contain so many layered unstated assumptions, as well as some acknowledged "guesstimates", that the result is not merely a "first approximation" but a blatant mistake. The hidden postulates and unstated assumptions are far more important than the explicit ones.
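For reference, here is the form in which the FE and the FLRW metric are usually presented (standard textbook notation: a the scale factor, rho the density, p the pressure, k the curvature index, Lambda the cosmological constant):

$$ \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\rho - \frac{kc^{2}}{a^{2}} + \frac{\Lambda c^{2}}{3}, \qquad \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) + \frac{\Lambda c^{2}}{3}, $$

$$ ds^{2} = -c^{2}dt^{2} + a(t)^{2}\left[\frac{dr^{2}}{1-kr^{2}} + r^{2}\left(d\theta^{2} + \sin^{2}\theta\, d\phi^{2}\right)\right]. $$

The assumptions criticized below (perfect fluid, homogeneity, isotropy) are exactly what license writing a single rho and a single p for the entire universe.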

The Introduction and Solutions sections of Wikipedia's articles on the FLRW metric and the FE describe the substrate matrix and the substance of the universe together as comprising a fluid. Given the required properties (such as frictionlessness) of any such substrate matrix for the whole universe, what they mean by a "fluid" is really an "ideal" or "perfect" fluid. Both matter/energy and the spacetime continuum under general relativity comprise this fluid[1], which can only be described as an ideal gas[2], simply because no other metaphor is appropriate.

The universe is described as comprising a continuum, an entity which is really unlike any solid, liquid or gas. Therefore, to refer to the universe and the continuum together as expanding without limit, or as having a pressure or density per se, is pure expedience. Einstein described the space-time continuum as a matrix of infinitesimal (infinitely small, infinitely closely spaced and infinitely numerous) massless particles having no mutual affinity[3] (so it must have the nature of an infinitely deep fractal chaos, completely unlike a gas). Yet the universe appears to be indefinitely decompressible, as its extreme expansion[4] demonstrates. So, any reference by the FE to a "ground" having mass density, rho, and a pressure, p[5], must indeed refer to a system comprised of a putative ideal gas, because this sub-floor would then meet the definition. This is especially so in the absence of any qualifier. Inadequate as it may be, there simply is no alternate interpretation.

Semantic arguments will not fix the flaws in the inherent, fully explicit, complete definitions of the FE/FLRW. The ideal gas model also presumes that the system is bounded and at equilibrium. It must be bounded because, if it is not, it cannot be at equilibrium. The ideal gas law and its corollaries simply do not apply to any system that is not "stationary". So, the equilibrium condition is prerequisite. Then, indeed, the universe must also be timeless.

As mentioned above, this ideal gas must be frictionless: it must be an explicitly defined superfluid[6]. Altogether, on the surface, the ideal gas model seems not to be such a terribly bad approximation, since the vast majority of the volume of the universe is filled with hydrogen with a little helium thrown in[7], all embedded in superfluid spacetime. Assuming we can mix the spacetime continuum and its "content" this way, as if the content is separate and apart, this mixture exists at an extremely low pressure, which would facilitate the ideal gas approximation. The average density of the universe is thought to be only about 6 atoms or molecules per cubic meter[8].
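That figure is consistent with the standard critical density, which any reader can check directly (an editorial illustration, assuming H0 of about 70 km/s/Mpc, i.e. 2.27e-18 per second):

$$ \rho_c = \frac{3H_0^{2}}{8\pi G} \approx \frac{3\,(2.27\times 10^{-18}\,\mathrm{s^{-1}})^{2}}{8\pi \times 6.674\times 10^{-11}} \approx 9\times 10^{-27}\ \mathrm{kg\,m^{-3}}, $$

which is roughly 5 to 6 proton masses per cubic meter.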

In an expanding universe, stars, galaxies, nebulae, planets and people may be considered to be just along for the ride, because according to the approximations made, strictly for the sake of computational tractability, they are not supposed to comprise a significant new condensed phase[9].
 
That condensed or more highly compressed phases are supposed to constitute an insignificant contribution to the nature and properties of the universe is summarized by the so-called "Cosmological Principle" (CP). The total density of the universe may be considered identical with an average density, as if the condensed matter were all smeared out evenly over the whole volume. In other words, the CP approximation says that there really are no such things as stars, galaxies, nebulae, planets or people. And, in effect, this approximation imputes mass to the continuum. This might have useful implications, but it is not part of the general theory of relativity, having no experimental basis.

Moreover, the CP says that everywhere one looks, in any direction, from any location in the universe, in this homogeneous rarefied soup of mainly hydrogen and helium embedded in a relativistic spacetime matrix, the view is exactly the same. In reality, the universe may be very lumpy and heterogeneous, but this detail is handled separately by the "new standard model of precision cosmology" as an ugly ad hoc add-on. In other words, a lumpy bumpy model is superimposed upon the smooth and creamy model to make a sort of stack of pancakes, with butter and maple syrup too, no doubt.

"In a strictly FLRW model, there are no clusters of galaxies, stars or people, since these are objects much denser than a typical part of the universe. Nonetheless, the FE-FLRW model is used as a first approximation for the evolution of the real, lumpy universe because it is simple to calculate, and models which calculate the lumpiness in the universe are added onto the FLRW model as extensions." (from Wikipedia, "The FLRW Metric"[10])

Needless to say, there are a few problems with these approximations and assumptions. The most severe problem is that the "consensus" of cosmologists (and Wikipedia editors too) does not recognize any of them. Conventional wisdom among astrophysicists and amateurs alike does not treat the FE-FLRW "standard model" with nearly enough skepticism.

The Trouble with All “Models”  

Models are always designed to simplify, and sometimes even to oversimplify. Otherwise, they would be called "ab initio" exact descriptions. At best, this model is indeed a "first approximation". The math is unsolvable if the lumpiness of the real universe is included, so this "detail" is left out when computing numerical properties of the universe like density and pressure, and even when interpreting redshifts. So nobody should be surprised when queer circular conclusions are reached, such as an accelerating expansion rate and "dark energy".

One trouble is that the condensed, or relatively compressed or compact, matter in the universe constitutes new phases which, over the billions of years that the universe has existed, must be expected to behave as if they had a vapor pressure. They are not inert.

They are not degenerate either. So a mixture containing a gas and these other more compact phases cannot be treated like a pure virgin gas. Any amount of an active compact or condensed phase, no matter how small, a physical chemist will say, will upset pressure and density calculations for any kind of putative gas[11].
 
Plus, any physicist will aver that a dispersed suspension within any kind of fluid containing more than one phase is expected to have altered bulk properties, such as the way it transmits light and other energy. This will be true even if these other phases are otherwise actually inert. In the universe, the percentage of additional phases may be very small, but the effects to be detected are also very small. This should be of concern to "modern precision cosmologists".

Using the ideal gas law does indeed require one to presume that the system is at a stationary, timeless condition of equilibrium. But the universe is demonstrably not at equilibrium. This fact not only upsets the direct mathematical approximations, it seriously upsets the indirect theoretical physics of all such oversimplified models that presuppose the ideal gas law, like the Friedmann equations.

Equilibrium
 
The FE/FLRW model indeed requires one to presume that the fluid system, regardless of type, is truly at equilibrium, including a uniform, constant temperature, because it refers the system to a "state" variable, w. Therefore, since the universe must always be in some particular state, a value for an equilibrium constant, K(eq), may be defined.

To repeat: the universe is demonstrably not at equilibrium. Also, if it is insisted, for the sake of argument, that the universe really is at equilibrium at all times, then, strictly speaking, its processes must be thermodynamically reversible. This is huge: the real, practical and eschatological thermodynamic implications are stunning. If it is held, for the sake of argument, that these auxiliary implications simply do not apply, then this constitutes another colossal set of assumptions.

Equilibrium also means that the magnitude of the equilibrium constant points to whether the system proceeds toward completion of the implied process or "reaction", that is, toward the final yield of "products", or else stays close to the initial stage composed mainly of beginning substances, phases or "reactants".

Thus

 

M0(s) ↔ M0(g) .

 

In other words, according to Alan Guth's inflationary scenario, the "inflaton", a potentially huge parcel or domain of greatly excited false vacuum, an infinitely dense solid clot of pure energy that must arise probabilistically, call it M0(s), must ultimately decay to its final ground state, a true vacuum, labeled M0(g). At the finish, the ground state comprises nothing but the simple vacuum except, perhaps, a stray degenerate photon.

The equilibrium constant would then be written 

K(eq) = [M0(g)] / [ M0(s)] ,
 
where the brackets denote concentration, partial pressure or partial density. 

With the bracketed quantities in natural units, a vast volume containing a unit "quantity" of the true vacuum denotes the final product "density" or concentration, [M0(g)]. This is divided by a unit quantity of matter, [M0(s)], the clot of pure energy from Guth's super dense, ultra intense solid energy "inflaton" point particle. This is expressed as

K(eq) = 1/1 = 1 . 

A process physicist or physical chemist would know from this intermediate magnitude alone that the evolution of the universe should obviously still be continuing: the implication is that it will take a long time to reach equilibrium. So both the stated and hidden FE/FLRW assumptions will actually be met only when this equilibrium condition becomes a reality, perhaps more than 2 trillion years hence. By their own definitions, the hidden assumptions can pertain neither to our real universe at this moment nor at any time in the near future.
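For readers unfamiliar with equilibrium constants, the standard thermodynamic reading of an intermediate value (an editorial gloss, not part of the derivation above) is

$$ \Delta G^{\circ} = -RT \ln K_{eq}, \qquad K_{eq} = 1 \;\Rightarrow\; \Delta G^{\circ} = 0, $$

meaning neither "reactants" nor "products" are favored, so a system prepared far from the balanced composition would still be evolving toward it.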

Eternal Homogeneity

Universal homogeneity is part of the so-called "Cosmological Principle" (CP). The CP approximation says that there really are no such things as stars, galaxies or people; there are no relative concentrations of matter. Another unstated presumption is that this condition must have persisted throughout time, since the beginning, or else the other affected assumptions will never have had a chance to initialize. That is to say, crucial features most typical of the deep past cannot possibly obey the implications of the CP, because the expansion process itself must interfere.

That is, CP homogeneity maintains that such features should be considered to typify or epitomize the universe's evolution, but cannot have evolved themselves. Yet the crucial features of the universe really do evolve. This is what cosmology is all about! So, as another contradiction in terms, eternal homogeneity is impossible.

The so-called "perfect" timeless CP is no different from the true and complete definition of the putatively less stringent CP, because whenever we look at distant celestial objects, we look far back in time. Any statement of the CP must assume this as a sort of "timelessness". We deal with spacetime, after all, when we make astronomical observations. The ideal gas model assumes the same sort of timelessness and so does not constitute a dynamic model at all. Far from it: it treats a dynamic process as if it were static, another hidden postulate.

So, homogeneity, as here discussed, is not a property of the universe. The universe simply is not homogeneous. This assumption is totally bogus. Why should it be relevant that the universe could be considered to be homogeneous on scales of more than 100 megaparsecs? What has scale got to do with it? Whatever the answer, we must “Prove it.” 

Prove it, consistent with the scientific method, with explicit falsifiable premises and without a trace of circular logic. This cannot be done. The universe is either homogeneous or it is not, there is no middle ground. 

Plus, there are structures in the universe that are larger than 100 megaparsecs, such as several "great walls" or "sheets" of galaxy clusters and superclusters[12]. Furthermore, every spiral galaxy has a supermassive black hole in its core that possesses a hyperbolic gravitational potential around it. This is due to the relativistic nature of black holes, which are supposed to exist as gravitational or spacetime singularities.

Because of this detail of its relativistic geometrical nature, a gravitational singularity must possess a hyperbolic gravity field. Hyperbolic gravitational potentials (proportional to 1/r) do not fall off toward zero nearly as rapidly as the inverse-square forces of Newton's law (proportional to 1/r^2). Hyperbolic supermassive black hole potentials extend to infinity, or at least a lot farther than 100 megaparsecs. They could actually explain "dark matter" (DM), because the presence of the galactic disc means that the potential falls off very slowly indeed (as 1/r + 1/r^2). These debate points mean that the universe is actually as heterogeneous as Swiss cheese.
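Any such claim about potential falloffs can be tested against rotation curves with one standard relation (an editorial note; the specific 1/r + 1/r^2 form above is this essay's suggestion, not an established result):

$$ v_c^{2}(r) = r\,\frac{d\Phi}{dr}, \qquad \text{e.g.}\quad \Phi(r) = -\frac{GM}{r} \;\Rightarrow\; v_c(r) = \sqrt{\frac{GM}{r}}, $$

so a flat rotation curve (constant v_c) corresponds to a potential growing as ln r, and any proposed potential can be checked against the observed profiles in the same way.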

Furthermore, if it were shown that DM is a result of this hyperbolic field, it would confirm that black holes really are relativistic singularities. Some of the competitors to Einstein's theory might thus be eliminated. So we would not need "M theory" or "Supersymmetry". It would be so sad if there were no such thing as an ultra-massive Higgs boson or a DM particle. Astrophysicists may be so willing to uncritically accept the FE/FLRW model and DE with DM because so many of their colleagues' whole careers depend on them.

Isotropism 

And we cannot assume that the universe is isotropic either. George Ellis has repeatedly maintained, and others have also pointed out, that since we cannot observe the universe in the direction of the plane of our galaxy, we cannot be sure that it is isotropic. Even if we could observe in that direction, our light horizon may not extend far enough to confirm that we are not inside a huge cosmic void. Interpolation to fill in the blocked zone must use all sorts of assumptions that subvert what we would be trying to define. If we are in a void, redshift measurements at very large distances will be skewed to appear as if the universe's expansion rate is accelerating[13].

Saul Perlmutter and Adam Riess 

Saul Perlmutter and Adam Riess each claim to head independent teams of researchers that have both uncovered evidence for accelerating expansion and dark energy. But their efforts were not independent; they were a concerted collaboration[14].

This observation of acceleration, made by Saul Perlmutter[15] et al. and Adam Riess[16] et al., is the result of assuming yet another item. They presume that the Hubble constant is, ever will be and surely always has been truly constant. Because they insist on this unstated premise, they multiplied the distance modulus versus redshift data for nearer, not-so-old Type Ia supernovae by a fudge factor to bring that data into line with the data obtained for much more distant and older supernovae. This produces a nice straight line and irons out the kink that embarrassingly shows between segments of a simultaneous plot of the two data sets. The slope of the straight line is thus artificially made constant, as the data for a well behaved Hubble diagram should be. It does not seem to matter to either of them that this kink between the linked data sets could just as well denote an unresolved systematic error.

In an exercise of pure faith, this fudge factor alone, all by itself, is said to indicate acceleration[17] because its sign is positive.

Yet one could just as well apply a fudge factor to the more "ancient" supernova data and bring that segment of the curve into alignment with that of the "younger" SNe Ia data. Then the fudge factor would say that the universe is decelerating, not accelerating. That the whole argument for acceleration depends on the arbitrary application of a manufactured fudge factor is very disturbing.
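To show the size of the effect being argued over, here is a minimal sketch (an editorial illustration using two textbook toy models, not the teams' actual likelihood fits) comparing distance moduli in a coasting, empty (Milne) universe and a decelerating, matter-only (Einstein-de Sitter) universe. The assumed H0 is arbitrary; it cancels out of the difference.

```python
import numpy as np

C = 299792.458          # speed of light, km/s
H0 = 70.0               # assumed Hubble constant, km/s/Mpc
D_H = C / H0            # Hubble distance in Mpc

def mu(d_l_mpc):
    """Distance modulus from a luminosity distance in Mpc."""
    return 5.0 * np.log10(d_l_mpc) + 25.0

def d_l_milne(z):
    """Luminosity distance in an empty, coasting (Milne) universe."""
    return D_H * z * (1.0 + z / 2.0)

def d_l_eds(z):
    """Luminosity distance in a matter-only Einstein-de Sitter universe."""
    return 2.0 * D_H * (1.0 + z) * (1.0 - 1.0 / np.sqrt(1.0 + z))

for z in (0.1, 0.3, 0.5, 0.8, 1.0):
    dmu = mu(d_l_milne(z)) - mu(d_l_eds(z))
    print(f"z = {z:4.1f}:  Delta(mu) = {dmu:5.3f} mag")
```

The difference is only a few tenths of a magnitude at z of about 0.5, which is also roughly the size of the reported "acceleration" signal, so a dispute over a fudge factor of that size is a dispute over the entire signal.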

It is another case of intellectual recklessness within the debate. And the media, including Wikipedia editors, go for this bait. 

With amazing hubris, both Perlmutter and Riess claim that theirs is the only good Hubble constant data ever obtained[18]. But their distance modulus versus redshift data for SNe Ia were calibrated using data got from analysis of Cepheid variable stars. Earlier modern Hubble constant determinations were also got by using Cepheid variable stars, so that data should be every bit as good. Yet all previous determinations show that Ho is not constant and is decreasing with time, not increasing[19].

Dark Energy 

Now, the FE/FLRW model implicitly refers to the whole universe as if it were a homogeneous laboratory subsystem. The FE must allow that energy, or thermodynamic work, w, can be done upon the universe and that it can do work on its parent system. This presumption that the universe must do work is used to conclude that a perceived acceleration in the rate of expansion means there is such a thing as "dark energy" (DE). But this system/subsystem implication is never mentioned by anybody. It is yet another hidden presumption.

DE purportedly follows from the idea that there must be a real, discrete value for w. It must actually be less than zero, because the above mentioned pressure, p, is putatively decreasing and has actually become negative in the recent epoch due to "acceleration".
 

Negative pressure always implies external suction. Blatant, explicit referral to an external influence must be avoided, however, since even amateur cosmologists will hesitate to infer that there is, in fact, an "outside" to the universe. So "-p" must be internalized by characterizing it as the result of some kind of DE, not as externally applied w.

That is, the so-called "equation of state" (the stationary condition summary equation for this ideal gas) of the universe is derived from a form of pv = nRT (the ideal gas law) and basic thermodynamics. Simplifying the development of the idea: pv = w (remember, w is work, a kind of energy, and pv is the work that is done on the gas, or could be done by the gas, if it were compressed or allowed to expand between nearly zero pressure and nearly infinite volume), and v = 1, the current and ongoing value for the putative volume of the universe in natural units.
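For comparison, the standard cosmological usage (an editorial note: in the literature, w denotes the dimensionless equation-of-state parameter, not thermodynamic work) is

$$ w \equiv \frac{p}{\rho c^{2}}, \qquad \frac{\ddot a}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^{2}}\right) > 0 \;\Longleftrightarrow\; w < -\tfrac{1}{3}, $$

with a cosmological constant corresponding to w = -1. The argument in this essay runs on the thermodynamic-work reading of the same symbol.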

That v = 1 at all times is a gross distortion too. If v = 1 today, it cannot have been equal to 1 six or eight billion years ago, but this is what such a simple, timeless model must assume. Some have called the FE/FLRW model a "dynamic" model. This must be some kind of joke, for the FE are anything but dynamic in nature.

Time enters the FE/FLRW model only through a so-called scale factor, "a" or "R". The scale factor, in natural units, is used only to compute a Hubble parameter, H (the fractional rate of change of a), which is equal to 1 only when time is equal to 1, meaning "the present". So time dependence is introduced despite the contrary implication of timelessness in all the other hidden postulates, unstated assumptions and given definitions. "Self-consistent" is not an adjective that can be used to describe the FE/FLRW model.
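The time dependence the model does admit is easy to exhibit. Here is a minimal sketch in the matter-dominated textbook solution, where a grows as t^(2/3); it illustrates that a "Hubble constant" defined this way declines with cosmic time, the point made above about earlier Ho determinations.

```python
# Matter-dominated FRW toy model in units where the present time t0 = 1
# and a(t0) = 1.  Then a(t) = t**(2/3) and H(t) = (da/dt)/a = 2/(3t),
# so H falls off as 1/t rather than staying constant.

def a(t):
    """Scale factor, normalized to 1 at the present, t = 1."""
    return t ** (2.0 / 3.0)

def H(t):
    """Hubble parameter H = (da/dt)/a for a ~ t^(2/3)."""
    return 2.0 / (3.0 * t)

for t in (0.25, 0.5, 1.0, 2.0):
    print(f"t = {t:4.2f}:  a = {a(t):5.3f},  H = {H(t):5.3f}  (units of 1/t0)")
```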


The scale factor concept is said to follow from the approximations of homogeneity and isotropy inherent in the CP. Nobody ever mentions exactly how it follows logically. They do not dare, because it certainly does not follow without still more hidden postulates and unstated assumptions. If more layers of approximations and assumptions are not to be added, then it is really just another ad hoc add-on.


Now, if the expansion rate is supposed to be accelerating, Ho is not constant, and then

w = -p , 

as if suction is being applied to the universe. 

But since we must deem this to be impossible, and cannot allow any contribution to p to be negative, then

 

w ≤ 0 , 

which, by its sign and the definition of w, says that this work energy (if there is any) must come from within the universe. Whence does it issue?[20]

To satisfy the conservation law, there must always have been an untapped and invisible reservoir of such "dark" energy. So it must also have an impalpable, unmeasurable mass-equivalent. Except for the stack of assumptions that has been accumulated, this is said to solve "the missing mass problem". Being unmeasurable is considered to be only a technicality.

But technicalities are what science is all about. To sidestep this problem, it has been seriously suggested that the scientific method should literally be dumped[21].

However, though it is not really this simple, purely as a debate point one might insist that by telescopically observing phenomena as far back in time as 8 to 10 billion years, well over the universe's half-life by Hubble's Law, this current, latter phase is seen to have taken quite a bit longer than the former. Things have demonstrably not changed that much since then: the whole process has clearly progressed only marginally in over 8 billion years. Yet, up until more than 10 billion years ago, the changes must have been dramatic. Those earlier, spectacular changes took place in less than 4 billion years.

So the whole process, including the expansion rate, must actually be decelerating. This is pure logic, an inescapable mathematical certainty. But many cosmologists somehow insist that the expansion rate is currently accelerating due to DE.

Some of these points may be contradicted by a good debater. But any one of them will demolish the FE/FLRW. All these critical points would have to be refuted simultaneously in defense of the FE/FLRW and DE. This cannot be done without self-contradiction.

It will be very bad for science if we have to backtrack again on such matters as accelerating Hubble expansion and dark energy. Scientists should be absolutely triple certain before they make even preliminary pronouncements to the Press. 

But, as Lev Landau, the Nobel laureate physicist, said: "Cosmologists are always wrong, but never in doubt."[22]

BIBLIOGRAPHY 

[1] The composition of the Friedmann fluid is a mixture of ideal gases that includes the spacetime continuum. See Wikipedia, "Friedmann Equations – Mixtures"; also http://en.wikipedia.org/wiki/Equation_of_state_%28cosmology%29 and references therein.

 

[2] Ideal gas: see any good college or high school chemistry text, or Wikipedia.

[3] Albert Einstein, The Collected Papers, vol. 6, Princeton University Press, 1997; "The Foundation of the General Theory of Relativity", Doc. 30.

[4] Hubble expansion: search "Hubble's Law" or "Edwin Hubble", or see Wikipedia; otherwise this is common knowledge, or see http://astrosun2.astro.cornell.edu/academics/courses//astro201/hubbles_law.htm

[5] Friedmann equations, variables p and rho: see any detailed treatment of the FE, or see Wikipedia.

[6] Spacetime as a superfluid: see ref. 3; also T. Y. Thomas, "The Perfect Relativistic Gas", Proceedings of the National Academy of Sciences of the United States of America, Vol. 51, No. 3 (Mar. 15, 1964), pp. 363-367. Published by the National Academy of Sciences.

There are literally thousands of internet search hits for "spacetime superfluid", "spacetime ideal fluid", "spacetime perfect fluid", "spacetime ideal gas", etc.

[7] The majority of sensible matter (by volume) in the universe is hydrogen, H or H2, with a little He: search "composition of the universe", e.g. http://map.gsfc.nasa.gov/universe/uni_matter.html

[8] Average matter density of the universe: search the phrase.

[9] The Cosmological Principle as an approximation: almost any search on "critique of the cosmological principle" will reveal statements referencing this.

[10] Ibid.

[11] "Gas laws" do not work in the presence of "condensed phases": search on these phrases (it is very difficult to find an explicit statement; a search on "inhomogeneous or heterogeneous equilibrium" may get better results). One can write an equilibrium constant expression for the spacetime continuum and matter/energy phases in the universe. Clearly, the ideal gas model makes no room for such an addendum to its mathematical description of the universe. But such an equilibrium expression would itself be a huge approximation, because the universe is definitely not at equilibrium.



[14] Adam G. Riess, Peter Nugent, Alexei V. Filippenko, Robert P. Kirshner, Saul Perlmutter, "Snapshot Distances to Type Ia Supernovae -- All in 'One' Night's Work", http://arxiv.org/abs/astro-ph/9804065

[15] Saul Perlmutter (LBNL), Michael S. Turner (Chicago/FNAL), Martin White (UIUC), "Constraining dark energy with SNe Ia and large-scale structure", http://arxiv.org/abs/astro-ph/9901052

[16] Alexei V. Filippenko, Adam G. Riess, "Results from the High-Z Supernova Search Team", http://arxiv.org/abs/astro-ph/9807008

[17] Fudge factor or "adjustment": Michael Rowan-Robinson, http://arxiv.org/abs/astro-ph/0201034

[18] Perlmutter and Riess, "The best Hubble constant data to date." (quote)

[19] http://www.lonetree-pictures.com/ The data for the diagram showing a linear drop in the universe's expansion rate with time is got from numerous observations of the Hubble constant obtained for various distances from Earth. The data is converted to natural units. The original data can easily be got by searching on the term "Hubble diagram".

[20] Self-contradiction is the hallmark of the FE/FLRW model. On one hand, the ideal gas model requires that the universe be bounded, with variable volume, and at equilibrium at all times. If there are to be changes, they must occur in infinitesimally and immeasurably small increments. The equivalent statement is that there must be no measurable turbulence. The universe is changing only in measurably large increments and is notoriously turbulent.

On the other hand, for w to be negative while leaving p a positive quantity implies that the universe is unbounded, with a heretofore unrecognized internal or potential energy, contrary to the model itself. So, if contradictions can be overlooked, this really means that we must still be in a Guthian "false vacuum" state. Then the whole universe must be describable by reference to a time dependent Schroedinger-like equation.

Physics will never accept that we have been and continue to be part of a purely quantum object. Nor will it accept that there are two discrete, equally valid sides to the coin of reality: quantum and relativity theories. The Holy Grail of a grand unified theory or a theory of everything must exist. It was Einstein's dream, after all. "M Theory" must be real, say theoretical physicists.

Physics will never just “let it be”. Theoretical physicists will try to force relativity and quantum theories to meld into a single great façade simply because they can, not because it may afford us any higher degree of truth. It is mindless seeking for seeking’s sake. Theories that are so much harder to understand and to use really constitute no improvement. They simplify nothing while complicating everything. This is not what science should be about. It is not what art is about either. The essence of art is to know when to stop. 

21  "The scientific method should be dumped" - quote 

[22] Lev Landau quote.