Articles posted in 2020, 2019, 2018, 2017, 2016, 2015, 2014, 2013, 2012, 2011, 2010, 2009, 2008, and 2007
Preface
The purpose of this blog is to elaborate on the theoretical ideas contained in its companion book “The Reality of Four Spatial Dimensions”.
In Thomas S. Kuhn’s book “The Structure of Scientific Revolutions,” he documents the doubts that precipitate a paradigm change in scientific thought.
For example, even though one could still make accurate predictions of planetary motions using the 15th-century geocentric models, it became increasingly difficult to integrate that concept with the more accurate observational data provided by the new technologies of that day. This led some scientists to question their validity.
He suggests the doubt generated by their persistent inability to explain new data led many scientists of that period to adopt the simpler rules of the revolutionary heliocentric model.
Modern physics appears to be on the verge of a similar revolution because the discoveries of dark matter and dark energy are extremely difficult to integrate into its current theoretical models.
As Thomas S. Kuhn points out, the failure of an existing paradigm is a prelude to the search for a new one.
It continues the search, begun in its companion book “The Reality of Four Spatial Dimensions,” to not only explain how one can seamlessly integrate the observations of dark matter and dark energy into a theoretical model based on the existence of four *spatial* dimensions but also to provide a unifying mechanism responsible for the four forces of nature (gravity, electromagnetism, and the weak and strong forces) governing the interactions of matter, energy, space, and time.
Each article covers one aspect of the search for the “reality” it defines. For example, the article “What is dark energy” defines it causally in terms of an interaction of three-dimensional space with a fourth, while others derive the quantum mechanical properties of energy/mass in terms of a resonant system formed by a matter wave on a “surface” of a three-dimensional space manifold with respect to a fourth *spatial* dimension.
It is not meant to verify the many answers found in the book “The Reality of Four Spatial Dimensions”. Instead, it is meant to give the scientific community the specific information and experimental techniques required to either verify or falsify its contents. It relies less on mathematics and more on conceptual logic and thought experiments (much as Albert Einstein did) to show how one can explain and predict all modern observations by extrapolating the rules defining classical three-dimensional space to a fourth *spatial* dimension.
Copyright Jeffrey O’Callaghan 2020
“The universe’s most powerful enabling tool is not knowledge or understanding but imagination because it extends the reality of one’s environment.”
Topic | Date posted | Category | Author
 | May 1, 2021 | | Jeff
 | Apr. 15, 2021 | | Jeff
 | Apr. 1, 2021 | | Jeff
Merging the collapse of the wave function with Einstein’s Theories of Relativity | Mar. 12, 2021 | | Jeff
 | Mar. 6, 2021 | | Jeff
Could Black holes be responsible for the expansion period in our universe’s history? | Jan. 1, 2021 | | Jeff
The Road to Unifying
Cosmologists have not yet been able to determine whether the universe will keep on expanding or enter a contraction phase, but if it contracts it will generate a great deal of heat. I wonder what effect that heat would have on a black hole. My intuition is that the smaller ones may simply dissolve, returning the ions they are made of back to space, but the larger ones MAY, I repeat MAY, explode like a kernel of popcorn. The heat created by this expulsion would cascade to other black holes, causing them to explode in rapid succession. The energy released by a single one would result in only a small increase in the rate of the universe’s expansion. However, the explosions of large numbers over a short period of time COULD result in a very rapid expansion that might approach that of the inflationary model based on the expansion of a singularity.
Some will probably say that it is crazy to assume that a black hole can explode; however, I think it is crazier to assume that the explosion of a single one-dimensional point called a singularity can result in the observable properties of our universe.
One advantage of basing an inflationary model on the explosions of black holes is that it defines a mechanism for the start of the inflationary period in terms of the observable properties of our universe. Additionally, one can, through observations, estimate the total energy content of all of the black holes in the universe AT THE TIME OF ITS COLLAPSE based on how many presently exist. This would allow one to estimate the rate of the universe’s inflationary expansion caused by a rapid release of their energy.
To determine if this IDEA is a viable solution, one would first have to determine whether heat can cause a black hole to explode. If it can, one could use their observable properties to mathematically quantify the temperature required for that to occur. We can also estimate the maximum temperature the complete collapse of the universe would attain. If that value is greater than the temperature required to cause a black hole to explode, it would add credibility to the above IDEA. After that, it should be possible to determine the rate at which the energy of the explosion of a single black hole would ripple through the rest and cause them to explode. Since, as was mentioned earlier, we can estimate, based on observations, the total energy of all of the black holes in our observable universe AT THE TIME OF ITS COLLAPSE, we can mathematically determine the rate at which energy is released and therefore the rate of the universe’s expansion at each point in its evolution.
In other words, it allows us to define an inflationary period in our universe’s evolution based on the mathematical analysis of the observable properties of our environment instead of the unobservable properties of a quantum singularity.
Copyright Jeffrey O’Callaghan Jan. 2021
Dark Matter is a form of matter thought to account for approximately 85% of the matter in the universe, with the remainder made up of visible or baryonic matter. Its presence is implied in a variety of astrophysical observations, including the gravitational effects it has on the orbits of stars in galaxies, which cannot be explained by accepted theories of gravity unless more matter is present than can be seen. It is called dark because it does not appear to interact with the electromagnetic field: it does not absorb, reflect, or emit electromagnetic radiation, which is why it is difficult to detect.
However, we disagree that it cannot be explained by accepted theories of gravity, because Einstein defined gravity in terms of the “depth” of a gravity well, or distortion in the “surface” of spacetime, caused by the energy density of an environment and NOT by the existence of visible or baryonic matter. This means the energy of electromagnetic fields, photons, and all other forms of energy, along with that associated with visible matter, must be taken into consideration when determining the energy density of space and therefore ITS gravitational potential.
This suggests the reason it does not appear to interact with the electromagnetic field is that a large part of it is an electromagnetic field.
However, the observation that electromagnetic energy prevents the gravitational collapse of the visible matter in stars suggests that its gravitational potential is oppositely directed with respect to that of visible matter.
Some might say that if that were true it should have the same effect on the orbits of planets as it does on the stars in galaxies. The reason it DOES NOT is because, as was just mentioned, its gravitational potential opposes that of visible matter, which prevents the visible matter from sinking to the bottom of a star’s gravity well.
One can understand why by using the analogy of a jar containing water and oil, where the water represents electromagnetic energy while the oil represents visible matter. The water prevents the oil from sinking to the bottom because the oil’s directional energy is opposite, or it is more buoyant than the water. This is analogous to how the heat associated with electromagnetic energy prevents the visible matter in stars from sinking to the bottom of their gravity well. In other words, the energy density of electromagnetic energy offsets that of the visible matter.
However, as was mentioned earlier, Einstein defined gravity in terms of the “depth” of a gravity well, or distortion in the “surface” of spacetime, caused by the energy density of an environment, NOT by the existence of visible or baryonic matter.
Therefore, to determine the total gravitational potential, or depth of the gravity well, of a solar system, one must add the energy density associated with its electromagnetic energy to that of its visible matter.
However, to define the gravitational potential acting on objects which are gravitationally bound to a star, one would have to use only the visible matter because, as mentioned earlier, the electromagnetic energy offsets that of the visible matter. Therefore, any object gravitationally bound to a star would only experience the gravitational potential of the visible matter, because the gravity well of the entire solar system is offset by the electromagnetic energy.
However, one can also use the example of the jar mentioned earlier to understand why stars orbiting in galaxies are affected by the energy density of both electromagnetic energy and visible matter. Someone outside the jar would add the height of the oil to that of the water to get its total height, while from the inside one would measure it from the oil-water line. Similarly, if one views the gravity well from an object orbiting within a solar system, one would have to use only the energy contributed by the visible matter. However, if one viewed it from an object that was NOT gravitationally bound to it, one would have to measure the contribution provided by both the visible matter and the electromagnetic energy.
This also tells us that any form of energy that counteracts that of visible matter must also be considered a component of Dark Matter. For example, the orbital energy of the stars in a galaxy would have to be included because it also adds to the energy density of the space they occupy. In other words, not only do you have to add the energy density contributed by electromagnetic energy to that of the visible matter in stars, but you must also add the orbital energy of both the visible matter and its electromagnetic component to determine its content in galaxies. Additionally, the fact that galaxies are gravitationally bound in galactic clusters means you must also consider the energy density contributed by their rotational energy to determine the universe’s total Dark Matter component.
However, the OBSERVATION that electromagnetic energy offsets the gravitational potential of the visible matter in stars tells us it must contribute AT LEAST an equal amount to the universe’s total gravitational potential. The remaining Dark Matter could be provided by the energy density contributed by dust, helium atoms, and black holes, along with their orbital energy.
It should be remembered that Einstein defined the depth of a gravity well in space in terms of the absolute value of its energy density. Therefore, to determine the total gravitational potential of both Dark and visible matter, one must include all forms of energy, including visible matter, in determining their value.
There are two ways science attempts to explain and define the behavior of our universe. The first is quantum mechanics, the branch of physics that defines its evolution in terms of the probabilities associated with the wave function. The other is the deterministic universe of Einstein, which defines its evolution in terms of a physical interaction between space and time. Specifically, Einstein determines the position of particles in terms of where a distortion in spacetime, caused by an increase in the energy density of the space it occupies, is located.
Quantum mechanics, on the other hand, uses the mathematical interpretation of the wave function to define the most probable position of a particle when observed.
Since we all live in the same world, you would expect the probabilistic approach of quantum mechanics to be compatible with the deterministic one of Einstein. Unfortunately, they define two different worlds which appear to be incompatible. One defines existence in terms of probabilities while the other defines it in terms of the deterministic properties of space and time.
However, even though those probabilities appear to be incompatible with Relativity’s determinism, they can be explained in terms of a physical interaction between space and time. For example, when we roll dice in a casino, most of us realize the probability of a six appearing is related to, or caused by, the dice’s physical interaction with the properties of the table where they are rolled. Putting it another way, what defines the fact that a six appears is NOT the probability of getting one but the interaction of the dice with the table and the casino it occupies.
Therefore, to integrate the probabilistic interpretation of the wave function with the deterministic properties of spacetime, one must show how and why an interaction between them is responsible for the position of a particle when observed.
One way of doing this is to use the fact that evolution in both quantum and spacetime environments is controlled by a wave. For example, Relativity defines the evolution of spacetime in terms of the energy propagated by an electromagnetic wave, while quantum mechanics defines it in terms of the mathematical evolution of the wave function. (Einstein provided a mechanism for the propagation of an electromagnetic wave through spacetime when he defined gravitational energy in terms of a curvature in it.) This means we may be able to connect the probabilistic environment of quantum mechanics to the deterministic one of Einstein if we can show how a physical interaction between space and time is responsible for those probabilities.
This suggests the wave function that governs the evolution of a quantum environment may be a mathematical representation of an electromagnetic wave that governs evolution in the world of Relativity. This means we should be able to derive the probabilistic environment of quantum mechanics in terms of the deterministic properties of an electromagnetic wave in spacetime.
One can accomplish this by using the science of wave mechanics and the concepts of Einstein’s theories.
For example, the science of wave mechanics, along with Relativity, tells us wave energy moves continuously through spacetime unless it is prevented from doing so by someone observing it or something interacting with it. This would result in its energy being confined to three-dimensional space. The science of wave mechanics also tells us the three-dimensional “walls” of this confinement will result in its energy being reflected back on itself, thereby creating a resonant or standing wave in three-dimensional space. This would cause its wave energy to COLLAPSE and concentrate at the point in space where a particle would be found. Additionally, wave mechanics tells us the energy of a resonant system, such as a standing wave, can only take on the discrete or quantized values associated with its fundamental frequency or a harmonic of it. However, the particle created when someone observes an electromagnetic wave would occupy an extended volume of space defined by the wavelength of its standing wave. Putting it another way, where a particle appears is NOT determined by the probabilities associated with the wave function but by the deterministic interaction of an electromagnetic wave with an observer.
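The quantization appealed to above can be sketched numerically. This is a minimal illustration, not the article’s own derivation: it assumes the textbook relations f_n = n·c/2L for the allowed frequencies of a wave confined to a region of size L, and E = h·f for the energy each mode carries; the 1-nanometer confinement in the example is purely illustrative.

```python
# Minimal sketch of the quantized energies of a confined (standing)
# electromagnetic wave, assuming the textbook relations
#   f_n = n * c / (2 * L)   (allowed mode frequencies)
#   E_n = h * f_n           (energy carried by each mode)
h = 6.62607015e-34  # Planck constant, J*s
c = 2.99792458e8    # speed of light, m/s

def allowed_energies(L, n_max):
    """Energies (joules) of the first n_max standing-wave modes of an
    electromagnetic wave confined to a region of size L (meters)."""
    return [h * n * c / (2 * L) for n in range(1, n_max + 1)]

# Illustrative example: confinement to 1 nanometer.
for n, E in enumerate(allowed_energies(1e-9, 3), start=1):
    print(f"mode {n}: {E:.3e} J")
```

Each mode’s energy is an exact integer multiple of the fundamental’s, which is the discreteness the paragraph attributes to a resonant system.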
However, the probabilistic interpretation of the wave function is necessary because it defines the position of a particle in terms of a mathematical point in space which it randomly places with respect to the center of a particle. Therefore, the randomness of where that point lies with respect to a particle’s center will result in its position, when observed, being randomly distributed in space. This means one must define where it appears in terms of probabilities to average out the deviations caused by the random placement of that point.
The reason Relativity is deterministic is that those deviations are averaged out by the large number of particles in objects like the moon and planets. This shows it is possible to derive the probabilistic world of quantum mechanics in terms of the determinism of spacetime by assuming the wave function is a mathematical representation of an electromagnetic wave in it.
Copyright Jeffrey O’Callaghan 2020
The Big Bang Theory is the leading explanation about how the universe began. At its simplest, it says the universe as we know it started with a small singularity, then inflated over the next 13.8 billion years to the cosmos that we know today.
Because current instruments don’t allow astronomers to peer back at the universe’s birth, much of what we understand about the Big Bang Theory comes from mathematical formulas and models. Astronomers can, however, see the “echo” of the expansion through a phenomenon known as the cosmic microwave background.
The idea that the universe was smaller in the beginning was supported by Edwin Hubble’s 1929 observation that it is expanding.
Later, a few physicists led by George Gamow, a proponent of the big bang model, showed that an expanding universe meant it might have had its beginning in a very hot, infinitely dense environment, which then expanded to generate the one we live in today.
They were able to show that only radiation emitted approximately 300,000 years after the beginning of the expansion should be visible today, because before that time the universe was so hot that protons and electrons existed only as free ions, making the universe opaque to radiation. This period is referred to as the age of “recombination”.
Additionally, they predicted that this Cosmic Background Radiation, what was left over from the age of recombination, would have cooled from several thousand degrees Kelvin when it was generated to 2.7 degrees today due to the expansion of the universe. Many thought its discovery in 1965 by Penzias and Wilson provided its verification.
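The cooling described above follows the standard redshift-temperature relation T_emitted = T_today · (1 + z). A minimal sketch, assuming the commonly quoted values z ≈ 1100 for recombination and 2.725 K for today’s temperature (neither figure comes from this article):

```python
# Sketch of the standard redshift-temperature relation for the CMB:
#   T_emitted = T_today * (1 + z)
# z ~ 1100 for recombination and T_today = 2.725 K are commonly quoted
# values assumed here for illustration.
T_today = 2.725          # CMB temperature today, kelvin
z_recombination = 1100   # approximate redshift of recombination

T_at_recombination = T_today * (1 + z_recombination)
print(f"{T_at_recombination:.0f} K")  # roughly 3000 K
```

This recovers the “several thousand degrees Kelvin” at emission cooling to a few kelvin today as the text states.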
However, there was a problem with assuming the universe began as an expansion of an infinitely dense hot environment, because one would expect it and the Cosmic Background Radiation to be homogeneous, since an infinitely dense environment must, by definition, have been homogeneous. Therefore, if the universe was homogeneous when it began, it should still be.
But the existence of galactic clusters and the variations in the intensity of the cosmic background radiation discovered by NASA’s WMAP satellite show the universe is not homogeneous now and was not at the time the Cosmic Background Radiation was emitted.
Many proponents of the big bang model assume that these “anisotropies” in the universe are caused by quantum fluctuations in the energy density of space. They define a quantum fluctuation as a temporary change in the energy of space caused by the uncertainty principle.
However, there is an error in the math used to predict the effects that both the expansion of a singularity at the universe’s origin and quantum fluctuations in the energy density of space would have on its evolution.
Einstein’s mathematics tells us time slows as the gravitational or energy density increases and will eventually stop if it becomes great enough. Observations of black holes provide verification of his math, because it is observed that time does slow to a stop when a critical energy density is reached at the event horizon. Additionally, Schwarzschild was able to use Einstein’s math to calculate the radius of a black hole at which the energy density would be great enough to stop time.
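The radius Schwarzschild calculated is given by the well-known formula r_s = 2GM/c². A minimal sketch (the solar mass used in the example is an illustrative value, not a figure from the article):

```python
# Schwarzschild radius r_s = 2 * G * M / c**2: the radius at which a
# mass M is dense enough to form an event horizon.
G = 6.67430e-11   # gravitational constant, m^3 kg^-1 s^-2
c = 2.99792458e8  # speed of light, m/s

def schwarzschild_radius(mass_kg):
    """Event-horizon radius (meters) for a mass given in kilograms."""
    return 2 * G * mass_kg / c**2

# Illustrative example: one solar mass (~1.989e30 kg) gives roughly 3 km.
print(f"{schwarzschild_radius(1.989e30) / 1000:.2f} km")
```

Because r_s grows linearly with M, the same formula yields a minimum radius for any total energy content, which is the step the argument below relies on.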
This means the math used by the proponents of the big bang is INCORRECT if they did not include the effect the energy density around a singularity or quantum fluctuation would have on its evolution.
This is because the observationally verified math of Schwarzschild tells us there is a minimum radius the total energy content of the universe can occupy for time to move forward. Since evolution cannot occur in an environment where time has stopped, that is also the MINIMUM RADIUS from which the universe could expand, and it IS larger than a singularity.
In other words, if they had included the effect energy density has on time, they would have realized that the universe could not have originated from a singularity.
Some may say that the energy density of an expanding universe would not affect the rate at which time passes, but they would be wrong, because Einstein tells us it is related only to differential energy density. In other words, he tells us the rate at which time slows, and the point where it would stop and prevent further expansion, would be determined by the differential energy density between the center of its expansion and its outer edge. This point would define the minimum volume it would have to have before its expansion could take place.
However, there is a similar error in the math behind the assumption that quantum fluctuations are responsible for the “anisotropies” in the Cosmic Background Radiation, because energy could not expand from one, since the energy density surrounding it would cause time to stop. Therefore, quantum fluctuations could not affect the evolution of the universe or be responsible for the “anisotropies” in the Cosmic Background Radiation because, as was just mentioned, evolution cannot occur in an environment where time has stopped.
Some might disagree, saying the energy in a singularity and that contained in a quantum fluctuation would be powerful enough to overcome the stopping of time predicted by Einstein’s mathematics. However, they would be wrong, because the mathematics of Einstein tells us that when the energy density reaches a certain level, time will stop. It does not say that an increase beyond that point will allow time to move again.
As was mentioned earlier, current instruments don’t allow astronomers to peer back at the universe’s birth, so much of what we understand about the Big Bang Theory comes from mathematical formulas and models.
However, we may be able to define the origin of the present universe in terms of its observable properties.
We still have not been able to determine whether the universe will continue to expand indefinitely or will eventually collapse in on itself. However, if one assumes it collapses, we could develop a mathematical model which would tell us when the heat generated by its collapse would be enough to cause it to re-expand. Additionally, one could determine whether that heat exceeded the amount required to free protons and electrons from each other, thereby allowing another age of “recombination” when it started to re-expand.
This would also give mathematicians the ability to more precisely determine the age of the universe, because we can observe when the age of “recombination” occurred and project back from that point in time to when the additional heat generated by its continued collapse was great enough to cause it to re-expand.
In other words, we have the ability to define the origin of the present universe and the “anisotropies” in the Cosmic Background Radiation in terms of a mathematical model based on real-time observations of the present universe.
The science of astrophysics is based almost exclusively on observations. Therefore, the question astrophysicists must ask themselves is: “If we have two models for the origin of the universe that predict the same outcome, which one should we assume is correct? The one that makes its predictions based on the observable properties of our present universe, or the one that defines its origins in terms of the unobservable properties of a singularity?”
Copyright Jeffrey O’Callaghan Nov. 2020