History has shown that science cannot save a theoretical model that fails to reflect the "reality" of current observations simply by adding new parameters at random.

For example, when the geocentric model of planetary motion was first proposed it was a good fit to the observational data available at the time.  However, it became necessary to modify its theoretical structure to keep it in agreement with the new data provided by advancements in observational technologies.

There is absolutely nothing wrong with this if those modifications provide a deeper understanding of the processes and mechanisms they expose.

However, there is something very wrong with adding something in an ad hoc manner just to make a model fit the data.

For example, components called epicycles were randomly added to the geocentric model of planetary motion to allow it to conform to the more accurate observational data that became available in the 1500s.  These add-ons were made on an individual basis and did nothing to help understand the mechanisms responsible for the planets' motion.

In other words, the scientific community of the 1500s was unable or unwilling to consider the possibility that their planetary models might be wrong, even though the models required continual modification on an individual basis to conform to new data.  This is true even though many Greek, Indian, and Muslim savants had published heliocentric hypotheses centuries before that did not need these continued modifications and gave a more encompassing and consistent explanation of planetary motion.

This denial of a fundamental flaw in its structure delayed the advancement of the science of planetary motion in the European community for several centuries because, as was just mentioned, they were, or should have been, aware that a more encompassing theory was available.

However, this lesson seems to have been lost on many of today's scientists.

For example, Alan Guth proposed the cosmological inflation model, which assumes that early in the universe's evolution it underwent a period of extremely rapid (exponential) expansion.

It was developed around 1980 to explain several inconsistencies with the standard Big Bang theory, in which the universe expands relatively gradually throughout its history.

Inflationary cosmology on trial

The Big Bang theory postulates that the universe emerged from what is called a singularity and is presently expanding from the tremendously hot, dense environment associated with it.  Additionally, it assumes the momentum generated, in part by the heat of that environment, is sustaining the expansion.

However, as the National Aeronautics and Space Administration points out on its website, there are several observational inconsistencies with it.

Notably:

The Flatness Problem:

WMAP has determined the geometry of the universe to be nearly flat. However, under Big Bang cosmology, curvature grows with time. A universe as flat as we see it today would require an extreme fine-tuning of conditions in the past, which would be an unbelievable coincidence.
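
This instability can be stated quantitatively with the Friedmann equation (a standard relation added here for clarity; it is not part of the quoted NASA text).  Writing Omega for the ratio of the universe's density to the critical density,

\[
|\Omega(t) - 1| = \frac{|k|}{a^2 H^2},
\]

and since the product aH decreases with time in a matter- or radiation-dominated universe, any initial departure from Omega = 1 grows as the universe expands; flatness today therefore requires extreme fine-tuning at early times.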

The Horizon Problem:

Distant regions of space in opposite directions of the sky are so far apart that, assuming standard Big Bang expansion, they could never have been in causal contact with each other. This is because the light travel time between them exceeds the age of the universe. Yet the uniformity of the cosmic microwave background temperature tells us that these regions must have been in contact with each other in the past.

Inflation attempts to resolve these inconsistencies by assuming that the universe underwent an exponential expansion early in its evolution.

For example, one can understand why the universe appears to be flat if it underwent an exponential expansion early in its history by imagining you are living on the surface of a soccer ball. It might be obvious to you that this surface was curved.  However, if that ball expanded to the size of the Earth, it would appear flat to you, even though it is still a sphere on larger scales. Now imagine increasing the size of that ball to astronomical scales. To you, it would appear to be flat as far as you could see, even though it might have been curved to start with. Similarly, an exponential expansion of our universe would stretch any initial curvature of the three-dimensional universe to near flatness.
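
In the same notation as before (again, a standard result supplied for clarity, not from the original text), inflation reverses the instability: with a proportional to e^{Ht} and H nearly constant, aH grows rapidly, so

\[
|\Omega(t) - 1| \propto e^{-2Ht},
\]

and any initial curvature is driven exponentially toward flatness, just as the analogy of the expanding ball suggests.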

Inflation also appears to solve the Horizon Problem because it assumes the early universe experienced a burst of exponential expansion.  It follows that distant regions were actually much closer together prior to inflation than they would have been with only standard Big Bang expansion. Thus, such regions could have been in causal contact prior to inflation and could have attained a uniform temperature.
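
The same point can be stated in terms of the comoving Hubble radius 1/(aH), which sets the size of causally connected regions (this framing is a standard one, added here for clarity).  During inflation,

\[
\frac{1}{aH} \propto e^{-Ht},
\]

so a region small enough to have reached a uniform temperature before inflation is stretched to encompass everything we now observe, giving widely separated patches of the sky a common causal past.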

However, the inflationary period appears to have been randomly added to the Big Bang theory simply to allow it to conform to more accurate observational data, much as epicycles were randomly added to the geocentric model of planetary motion to allow it to conform to the more accurate observational data of the 1500s.

The randomness of this add-on is made apparent by the fact that, as of yet, there is absolutely no observational evidence to support its existence other than that it allows the Big Bang theory to conform to current observations.

Another major problem with the inflationary model is that it violates one of the most sacred and well-tested laws of physics: the law of conservation of energy/mass.

The reason this presents a problem is that the law of conservation of energy/mass says that in a closed system energy/mass cannot be created or destroyed.  Since our universe is by definition a closed system, energy/mass cannot be created or destroyed within it.

Therefore, one has to wonder where the energy required to fuel this rapid inflationary expansion came from.

Granted, some clever scientists have come up with a mathematical model of what could be responsible for it, but it has no basis in observations.

For example, some will try to convince you that a mathematical construct called an inflaton field is responsible.  However, this seems a bit contrived because, even though an inflaton field may be responsible for the universe's expansion, there is absolutely no observational evidence supporting its existence. What is even more damaging to its validity is that, as was mentioned earlier, it goes against one of the most revered laws of physics, the law of conservation of energy/mass, because it does not define how or where the energy fueling the inflaton field originated.  In other words, it assumes that energy just appeared out of nothing, which is a violation of that law.

Even more disturbing is that the proponents of the inflationary model must fine-tune many of its parameters to force its theoretical predictions to agree with observations.   In other words, they must arbitrarily modify specific parameters on an individual basis to make the model conform to observations, similar to the way the scientific community of the 1500s had to make modifications on an individual basis to the geocentric model to force it to conform to the world around them.

As mentioned earlier, this denial of a fundamental flaw in the theoretical structure of their evolutionary model may be delaying the advancement of modern cosmology because, as is shown below, there is another model that does not violate any of the accepted laws of physics and does not require fine-tuning.  This is because all of its parameters are quantifiable using the fundamental laws that govern our universe and the currently accepted physical parameters such as Planck's and Newton's gravitational constants.

We know from observations that the equation E=mc^2 defines the equivalence between mass and energy in all environments, and since mass is associated with the attractive properties of gravity, this equivalence also tells us that the kinetic energy associated with the universe's expansion possesses those attractive properties.  However, the law of conservation of energy/mass tells us that in a closed system such as our universe the creation of kinetic energy cannot exceed the gravitational energy associated with its total energy/mass.
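
One Newtonian way to make this bound concrete (an illustrative sketch of my own, not a calculation from the article): for a uniform sphere of mass M and radius R expanding at the Hubble rate H, the kinetic energy of expansion and the gravitational binding energy are

\[
E_{\text{kin}} = \tfrac{3}{10} M H^2 R^2, \qquad
E_{\text{grav}} = \tfrac{3}{5} \frac{G M^2}{R},
\]

and the condition that the kinetic energy not exceed the gravitational energy reduces to the familiar critical-density condition \( \rho \ge 3H^2/8\pi G \) for a gravitationally bound (closed) universe.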

However, not all of the energy associated with the universe's components is directed towards its expansion, because of the random motion of its energy/mass components.  For example, observations indicate that some stars and galaxies are moving towards us, not away from us.  Therefore, not all of the energy present at the time of the universe's origin is directed towards its expansion.

As mentioned earlier, the law of conservation of energy/mass tells us that the kinetic energy of the universe's energy/mass cannot exceed its gravitational contractive properties.  However, because some of the kinetic energy of its components is not directed towards its expansion, the total gravitational contractive properties of its energy/mass must exceed the kinetic energy of its expansive components.   Therefore, at some point in time the gravitational contractive potential of its energy/mass must exceed the kinetic energy of its expansion, and at that point the universe will have to enter a contractive phase.

(Many physicists would disagree because recent observations suggest that a force called dark energy is causing the expansion of the universe to accelerate.  Therefore, they believe that its expansion will continue forever.  However, as was shown in the article "Dark Energy and the evolution of the universe", if one assumes the law of conservation of mass/energy is valid, as we have done here, then the gravitational contractive properties of its mass equivalent will eventually have to exceed its expansive energy, and therefore the universe must at some time in the future enter a contractive phase.  We must discard that law to assume otherwise. There are no other options.)

We know from observations that heat is generated when we compress a gas and that this heat creates pressure that opposes further contraction.

Similarly, the contraction of the universe will create heat, which will oppose further contraction.

Therefore, the velocity of the contraction will increase until the momentum of the galaxies, planets, and other components of the universe equals the radiation pressure generated by the heat of the contraction.

At this point the total kinetic energy of the collapsing universe would be equal and oppositely directed with respect to the radiation pressure associated with the heat of its collapse.  From this point on, the velocity of the contraction will slow due to the radiation pressure and be maintained by the momentum associated with the remaining mass component of the universe.

However, after a certain point the heat and radiation pressure generated by the contraction will become great enough to ionize the remaining mass and cause the universe to re-expand, because the expansive forces associated with the radiation pressure of its collapse will exceed the contractive forces associated with its gravitational mass.

This will result in the universe entering an expansive phase and going through another age of recombination, when the cosmic background radiation was emitted. The reason it will experience an age of recombination as it passes through each cycle is that the heat of its collapse would be great enough to completely ionize all forms of matter.

However, at some point the contraction phase will begin again because, as mentioned earlier, its kinetic energy cannot exceed the gravitational energy associated with the total mass/energy in the universe.

Since the universe is a closed system, the amplitude of the expansions and contractions will remain constant, because the law of conservation of mass/energy dictates that the total mass and energy in a closed system remains constant.

This results in the universe experiencing a never-ending cycle of expansions and contractions of equal magnitude.

Many cosmologists do not accept this cyclical scenario of expansions and contractions because they believe a collapsing universe would end in the formation of a singularity similar to the ones found in black holes and therefore could not re-expand.

However, according to the first law of thermodynamics the universe would have to begin expanding before it reached a singularity, because that law states that energy in an isolated system can neither be created nor destroyed.

Therefore, because the universe is by definition an isolated system, the energy generated by its gravitational collapse cannot be radiated to another volume but must remain within it. This means the radiation pressure exerted by its collapse must eventually exceed the momentum of its contraction, and the universe would have to enter an expansion phase because its momentum will carry it beyond the equilibrium point where the radiation pressure is greater than the momentum of its mass. This will cause the mass/energy of our three-dimensional universe to oscillate around a point in the fourth *spatial* dimension.

This would be analogous to how the momentum of a mass on a spring causes the spring to stretch beyond its equilibrium point, resulting in its oscillating around that point.

There can be no other interpretation if one assumes the validity of the first law of thermodynamics, which tells us that the total energy of our three-dimensional universe is defined by its mass and the momentum of its components. Therefore, when one decreases the other must increase, and therefore the universe must oscillate around a point in four dimensions.

The reason a singularity can form in a black hole is that it is not an isolated system: the thermal radiation associated with its collapse can be radiated into the surrounding space. Therefore, its collapse can continue, because the momentum of its mass can exceed the radiation pressure caused by its collapse in the volume surrounding the black hole.

If this theoretical model is valid, the heat generated by the collapse of the universe must raise the temperature to a point where protons and neutrons become dissociated into their component parts and electrons are stripped off all matter, thereby making the universe opaque to radiation.  It would remain that way until it entered the expansion phase and cooled enough to allow matter to recapture and hold on to them.  This Age of Recombination, as cosmologists like to call it, is when the Cosmic Background Radiation was emitted.

One could quantify this scenario by using the first law of thermodynamics to calculate the temperature of the universe when the radiation pressure generated by its gravitational collapse exceeds the momentum of that collapse, and see if it is great enough to cause the complete dissociation of protons and neutrons into their quark components, as it must to account for their observed properties and those of the Cosmic Background Radiation.
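
As a rough illustration of the temperature scales such a calculation would need to reach, the sketch below applies the naive estimate kT ~ binding energy to each transition mentioned above (the threshold energies are standard textbook values; the collapse dynamics themselves are not modeled here):

```python
# Order-of-magnitude temperature scales for the transitions discussed above,
# using the naive estimate k_B * T ~ binding energy of the bound state.
k_B = 8.617e-5  # Boltzmann constant in eV per kelvin

thresholds_eV = {
    "strip electrons from hydrogen": 13.6,       # ionization energy of hydrogen
    "dissociate nuclei into nucleons": 2.2e6,    # deuteron binding energy (~2.2 MeV)
    "dissociate nucleons into quarks": 1.5e8,    # QCD deconfinement scale (~150 MeV)
}

for label, energy in thresholds_eV.items():
    print(f"{label}: T ~ {energy / k_B:.1e} K")

# Note: the naive estimate overstates the actual recombination temperature
# (~3000 K) because of the universe's enormous photon-to-baryon ratio.
```

The last threshold, around 10^12 K, is the scale a collapsing universe would have to exceed for protons and neutrons to dissociate into quarks.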

The above theoretical model does not require any ad hoc add-on like an inflaton field to explain where the energy fueling our universe's current expansion came from, because it is based solely on the currently accepted and observable laws of nature.

(Many would attempt to discredit it by pointing to the work of Richard C. Tolman, who in 1934 showed that, because the Second Law of Thermodynamics dictates that entropy can only increase, the period between cycles would become longer and longer and eventually the cycles would stop.

However, if, as we are suggesting above, the universe's energy/mass forms a resonant system in space, similar to the one the article "Why is energy/mass quantized" Oct. 10, 2007 showed was responsible for the stability of the energy/mass of the atom, one can understand how the universe's expansions and contractions would result in the formation of a resonant system that would maintain their stability.)

Yet what makes this theoretical model different from all others is that one can also define a solution to the horizon and flatness problems using the same logic and currently accepted laws of nature that were used above to derive the current expansion of our universe.

*****

The Horizon Problem

The resolution of the horizon problem can be found in the fact that the repeated cycles mentioned above would allow different regions of the universe to mix and equalize thereby explaining why their temperature and other physical properties are almost identical.

This would be analogous to mixing the contents of two cans of paint by pouring one into the other.  The evenness of the mixture would increase in proportion to the number of times one poured one can into the other.

Similarly, the evenness of the temperature distribution and physical properties of the universe would increase and level off after a specific, calculable number of cycles.
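
A toy simulation makes the mixing argument concrete (this is a generic mixing model of my own devising, not anything derived from the article): each "cycle" lets randomly chosen pairs of cells equalize their temperatures, and the overall spread shrinks cycle by cycle.

```python
import random

random.seed(1)
# an initially uneven "temperature" field of 1000 cells
cells = [random.uniform(0.0, 1.0) for _ in range(1000)]

def mix_once(values):
    """One mixing cycle: pair each cell with a random partner and equalize."""
    partners = values[:]
    random.shuffle(partners)
    return [(a + b) / 2 for a, b in zip(values, partners)]

def spread(values):
    """Standard deviation: how uneven the field still is."""
    mean = sum(values) / len(values)
    return (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5

for cycle in range(6):
    print(f"cycle {cycle}: spread = {spread(cells):.4f}")
    cells = mix_once(cells)
```

The spread falls by roughly a factor of the square root of two per cycle but never reaches exactly zero, which anticipates the residual irregularities discussed below.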

However it also explains why there are small temperature and other physical irregularities in the large-scale structure of the universe.

One cannot completely mix two different colors of paint no matter how many times one pours one can into another, because the random motion of the different colored paint molecules means that some regions will have more of one color than the other.

Similarly, the quantum fluctuations associated with the baryonic component of the universe mean that some regions will have more matter, or be denser, than others no matter how many cycles of expansion or contraction it has undergone.  These areas would be where large-scale structures such as galaxies and galactic clusters should exist. This gives yet another way of quantifying the above theoretical model with observations, because the number and distribution of these large-scale structures would depend on their thermodynamic properties.

*****

The Flatness Problem

Unfortunately, why our universe appears to be flat cannot easily be understood in terms of the space-time concepts of relativity, because flatness involves the spatial, not the time, properties of our universe.

However, Einstein gave us the ability to convert the time properties of a space-time universe to their spatial counterpart when he used the equation E=mc^2 and the constant velocity of light in that equation to define the balance between energy and mass, because it provided a method of converting a unit of space-time he associated with energy to a unit of space he associated with mass.   Additionally, because the velocity of light is constant, he also defined a one-to-one quantitative correspondence between his space-time universe and one made up of four *spatial* dimensions.

In other words by defining the geometric properties of a space-time universe in terms of mass/energy and the constant velocity of light he provided a quantitative and qualitative means of redefining his space-time universe in terms of the geometry of four *spatial* dimensions.

Observations of our environment tell us that all forms of mass have a spatial component or volume, and because of the equivalence defined by Einstein one must also assume that energy has spatial properties.

This, and the fact that one can use the equation E=mc^2 to quantitatively derive the spatial properties of energy in a space-time universe in terms of four *spatial* dimensions, is one of the bases for assuming, as was done in the article “Defining energy” Nov 27, 2007, that all forms of energy can be derived in terms of a spatial displacement in a “surface” of a three-dimensional space manifold with respect to a fourth *spatial* dimension.

One of the advantages to this approach is that it allows one to theoretically derive the energy of the universe’s momentum in terms (as was done in that article) of oppositely directed displacements in a “surface” of a three-dimensional space manifold with respect to the energy density of its matter component.  This means that the “flatness” of our universe would be an intrinsic property of its existence and would not require the fine-tuning of any of its components to explain it.

For example, observations of the three-dimensional environment occupied by a piece of paper show us that if one crumples a piece that was originally flat and views its entire surface, the overall magnitude of the displacement caused by that crumpling would be zero, because the height of each wrinkle above the original surface would be offset by an oppositely directed one below it.  Therefore, if one views its overall surface only with respect to its height, its curvature would appear to be flat.

Similarly, if the energy density associated with the momentum of the universe's expansion is a result of a displacement in a “surface” of a three-dimensional space manifold oppositely directed to that associated with its matter component, their overall density would appear to be flat because, similar to a crumpled piece of paper, the “depth” of the displacement below its “surface” caused by matter would be offset by the “height” of the displacement caused by its momentum.

Many proponents of the Big Bang model assume it began from the expansion of mass and energy around a dimensionless point.  However, if we are correct in assuming that the densities of the mass and energy components of our universe are a result of oppositely directed curvatures in a “surface” of a three-dimensional space manifold, the universe must have been flat with respect to their density at the time of the Big Bang.  This is because a dimensionless point would have no “vertical” component with respect to a fourth *spatial* dimension, and therefore the “surface” of three-dimensional space originating from it would be “flat”.

However, if the universe was flat with respect to the density of energy/mass in the beginning, its overall geometry would remain flat throughout its entire expansive history, because its expansion would result in a proportional reduction of the displacements above and below its three-dimensional “surface”.

This would be analogous to why the overall flatness of a crumpled piece of paper does not change if one smooths or stretches it, because that would result in a proportional decrease in the height of the wrinkles above and below its original surface.

It is not possible to define the mechanism responsible for the flatness of our universe if one defines it in terms of four-dimensional space-time, because time moves in only one direction, forward, and therefore cannot support the bi-directional movement required to define the apparent flatness of our universe in terms of its geometry.  This is why it was necessary, as was done earlier, to redefine Einstein's space-time concept in terms of its four *spatial* dimension equivalent.

The above theoretical model has an advantage over an inflationary one because it allows one to quantify the universe's early history, when particle formation took place, in terms of the first law of thermodynamics.  For example, one could use that law to calculate when the radiation pressure generated by its gravitational collapse would exceed the momentum of that collapse, thereby determining when the expansion began and what the conditions were at that time. This means that scientists would not have to fine-tune any of its parameters to make it conform to observations, because those parameters would be determined by Planck's constant, the gravitational constant, and the laws that govern their interaction with the energy/mass of our universe.

One purpose for studying history is to learn from our mistakes and hopefully eliminate, or at least minimize, the possibility of repeating them.

Unfortunately, modern scientists seem to have ignored the lesson taught to us by their brothers of the 1500s, in that they do not realize that the denial of a fundamental flaw in their understanding of the evolutionary structure of our universe may be delaying the advancement of their science.

Later Jeff

Copyright Jeffrey O’Callaghan 2014


 

*****

Quantum entanglement is the name given to the way particles can share information and interact with each other regardless of how far apart they are.

For example, the electrons in certain atoms, after being excited, will spontaneously decay by emitting pairs of polarized photons such that one is aligned horizontally and the other vertically.  According to quantum mechanics these photons are entangled, and the act of observing one instantly affects the other no matter how far apart they are.

Lecture 1 of Leonard Susskind’s course concentrating on Quantum Entanglements

This instantaneous communication between the entangled photons is at the heart of quantum entanglement.  It is the "spooky action at a distance" Einstein believed was theoretically implausible because, according to relativistic theories, information cannot be propagated instantaneously but only at the speed of light.

To demonstrate this, in 1935 Einstein co-authored a paper with Podolsky and Rosen which was intended to show that Quantum Mechanics could not be a complete theory of nature.  The first thing to notice is that Einstein was not trying to disprove Quantum Mechanics in any way.  In fact, he was well aware of its power to predict the outcomes of various experiments.  What he was trying to show was that there must be a "hidden variable" that would allow Quantum Mechanics to become a complete theory of nature.

The argument begins by assuming that there are two systems, A and B (which might be two free particles), whose wave functions are known.  Then, if A and B interact for a short period of time, one can determine the wave function which results after this interaction via the Schrödinger equation or some other Quantum Mechanical equation of state.  Now, let us assume that A and B move far apart, so far apart that they can no longer interact in any fashion.  In other words, A and B have moved outside of each other’s light cones and therefore are spacelike separated.

With this situation in mind, Einstein asked the question: what happens if one makes a measurement on system A?  Say, for example, one measures the momentum value for system A.  Then, using the conservation of momentum and our knowledge of the system before the interaction, one can infer the momentum of system B.  Thus, by making a momentum measurement of A, one can also measure the momentum of B.  Recall now that A and B are "spacelike" separated, and thus they cannot communicate in any way.  This separation means that B must have had the inferred value of momentum not only in the instant after one makes a measurement at A, but also in the few moments before the measurement was made.  If, on the other hand, it were the case that the measurement at A had somehow caused B to enter into a particular momentum state, then there would need to be a way for A to signal B and tell it that a measurement took place.  However, the two systems cannot communicate in any way!
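
The inference in this argument is nothing more than momentum conservation, stated here as a formula for clarity (my paraphrase, not a quotation):

\[
p_A + p_B = P_{\text{total}} = \text{constant} \quad \Longrightarrow \quad p_B = P_{\text{total}} - p_A ,
\]

so a measurement of p_A determines p_B without any signal having to travel from A to B.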

If one examines the wave function at the moment just before the measurement at A is made, one finds that there is no certainty as to the momentum of B because the combined system is in a superposition of multiple momentum eigenstates of A and B.  So, even though system B must be in a definite state before the measurement at A takes place, the wave function description of this system cannot tell us what that momentum is!  Therefore, since system B has a definite momentum and since Quantum Mechanics cannot predict this momentum, Quantum Mechanics must be incomplete.

In response to Einstein’s argument about incompleteness of Quantum Mechanics, John Bell derived a mathematical formula that quantified what you would get if you made measurements of the superposition of the multiple momentum eigenstates of two particles.  If local realism was correct, the correlation between measurements made on one of the pair and those made on its partner could not exceed a certain amount, because of each particle’s limited influence.

In other words, he showed there must exist inequalities in the measurements made on pairs of particles that cannot be violated in any world that includes both their physical reality and their separability, because of the limited influence they can have on each other when they are "spacelike" separated.
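
Bell's bound is most often quoted in the CHSH form: for any local hidden-variable theory a particular combination S of four correlations satisfies |S| <= 2, while quantum mechanics predicts values up to 2*sqrt(2).  The sketch below (an illustration of my own, using the standard quantum correlation for polarization-entangled photons) evaluates S at the analyzer angles that maximize the violation:

```python
import math

def E(a, b):
    # Quantum-mechanical correlation for polarization-entangled photon pairs
    # with analyzers at angles a and b (in radians): E(a, b) = cos 2(a - b).
    return math.cos(2 * (a - b))

deg = math.pi / 180
a1, a2 = 0 * deg, 45 * deg       # the two analyzer settings on one side
b1, b2 = 22.5 * deg, 67.5 * deg  # the two analyzer settings on the other

S = E(a1, b1) - E(a1, b2) + E(a2, b1) + E(a2, b2)
print(f"CHSH value S = {S:.3f} (local realism requires |S| <= 2)")
# prints S = 2.828, i.e. 2*sqrt(2), violating the local-realist bound
```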

When Bell published his theorem in 1964 the technology to verify or reject it did not exist.  However, in the early 1980s Alain Aspect performed an experiment with polarized photons that showed that the inequalities it contained were violated.

Many believed this provided experimental verification of the concept of quantum entanglement.  Additionally, it meant that science had to accept that either the reality of our physical world or the concept of separability does not exist.

However, this may not be true, because in the article “The *reality* of quantum probabilities” Mar. 31, 2011 it was shown that the probability functions quantum mechanics associates with the wave function can be understood by assuming they are physically the result of a matter wave moving on a "surface" of a three-dimensional space manifold with respect to a fourth *spatial* dimension.

Very briefly, the article "Why is energy/mass quantized?" Oct. 4, 2007 showed that one can derive the quantum mechanical properties of energy/mass by extrapolating the laws of classical resonance to a matter wave moving on a "surface" of a three-dimensional space manifold with respect to a fourth *spatial* dimension.

(Louis de Broglie was the first to predict the existence of a continuous form of energy/mass when he theorized that all particles have a wave component.  His theory was confirmed by Davisson and Germer's discovery of electron diffraction by crystals in 1927.)
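
For a concrete number (a back-of-the-envelope check of my own, using the de Broglie relation lambda = h/p): at the 54 eV electron energy used by Davisson and Germer, the wavelength comes out comparable to the atomic spacing in their nickel crystal, which is why diffraction was observable.

```python
import math

h = 6.626e-34    # Planck's constant, J*s
m_e = 9.109e-31  # electron mass, kg
eV = 1.602e-19   # joules per electron-volt

E_kin = 54 * eV                 # electron energy in the Davisson-Germer experiment
p = math.sqrt(2 * m_e * E_kin)  # non-relativistic momentum
wavelength = h / p              # de Broglie relation: lambda = h / p
print(f"lambda = {wavelength * 1e9:.3f} nm")  # ~0.167 nm, on the order of atomic spacings
```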

That article showed that the four conditions required for resonance to occur in a classical environment (an object or substance with a natural frequency, a forcing function at the same frequency as the natural frequency, the lack of damping, and the ability of the substance to oscillate spatially) would be met in an environment consisting of a continuous non-quantized field of energy/mass and four *spatial* dimensions.

The existence of four *spatial* dimensions would give a matter wave the ability to oscillate spatially on a "surface" between the third and fourth *spatial* dimensions, thereby fulfilling one of the requirements for classical resonance to occur.

These oscillations would be caused by an event such as the decay of a subatomic particle or the shifting of an electron in an atomic orbital.  This would force space (the substance) to oscillate with the frequency associated with the energy of that event.

The oscillations caused by such an event would serve as a forcing function, allowing a resonant system or "structure" to be established in space.

Observations of a three-dimensional environment show that the energy associated with a resonant system can only take on the incremental or discrete values associated with the fundamental or a harmonic of the fundamental frequency of its environment.

Similarly, the energy associated with resonant systems in four *spatial* dimensions could only take on the incremental or discrete values associated with the fundamental or a harmonic of the fundamental frequency of their environment.
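
A minimal classical example of such discrete resonant values (a standard standing-wave calculation, not anything specific to the four-dimensional model): a string of length L with wave speed v can only resonate at the frequencies f_n = n * v / (2L).

```python
# Allowed standing-wave (resonant) frequencies of a string fixed at both ends:
# f_n = n * v / (2 * L); only these discrete values can resonate.
v = 343.0  # wave speed in m/s (illustrative value)
L = 0.5    # string length in m (illustrative value)

fundamental = v / (2 * L)
for n in range(1, 5):
    print(f"mode {n}: f = {n * fundamental:.1f} Hz")
```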

Therefore, this defines a physical mechanism responsible for why energy/mass is quantized, in terms of a matter wave moving on a "surface" of a three-dimensional space manifold with respect to a fourth *spatial* dimension.

In the earlier article "Embedded dimensions" Oct. 4, 2007 it was shown that one can derive all forms of energy, including that of quantum systems, in terms of a displacement in a *surface* of a three-dimensional space manifold with respect to a fourth *spatial* dimension.

However, assuming its energy is a result of a displacement in four *spatial* dimensions allows one to derive the probability distribution associated with the wave function of individual particles by extrapolating the laws of a three-dimensional environment to a fourth *spatial* dimension.

Classical mechanics tells us that, because of the continuous properties of space, the oscillations the article “Why is energy/mass quantized?” associates with a quantum system would be distributed throughout the entire "surface" of a three-dimensional space manifold with respect to a fourth *spatial* dimension.

This would be analogous to what happens when one vibrates a rod on a continuous rubber diaphragm.  The oscillations caused by the vibrations would be felt over its entire surface, while their magnitude would be greatest at the point of contact and decrease as one moves away from it.

However, this means that if one extrapolates the mechanics of the rubber diaphragm to a "surface" of a three-dimensional space manifold, one must assume the oscillations associated with each individual quantum system exist everywhere in three-dimensional space.  This also means there would be a non-zero probability they could be found anywhere in our three-dimensional environment.

As mentioned earlier, the article “Why is energy/mass quantized?” showed that a quantum mechanical system is the result of a resonant structure formed on the "surface" of a three-dimensional space manifold with respect to a fourth *spatial* dimension.

Yet Classical Wave Mechanics tells us that resonance would most probably occur on the surface of the rubber diaphragm where the magnitude of the vibrations is greatest, and would diminish as one moves away from that point.

Similarly, a quantum system would most probably be found where the magnitude of the vibrations in a "surface" of a three-dimensional space manifold is greatest, and would diminish as one moves away from that point.

However, this also means each individual particle in a quantum system has its own wave and probability function, and therefore the total probability of a quantum system being in a given configuration when observed would be equal to the sum of the individual probability functions of each particle in that system.

As mentioned earlier, Alain Aspect verified that Bell's inequalities were violated by the quantum mechanical measurements made on pairs of polarized photons that were spacelike separated, or in different local realities.

Yet, as just mentioned, the wave or probability function of a quantum system is a summation of the probability functions of all of the particles it contains.  Therefore, two particles which originated in the same quantum system and were moving in opposite directions would have identical wave or probability functions even if they were not physically connected.

The measurements Alain Aspect made on the polarized photons that verified that Bell's inequality was violated involved finding a correlation between the probabilities of each particle being in a given configuration based on the concepts of quantum mechanics.  When this correlation was found, many assumed that the particles must somehow be entangled or physically connected even though they were in different local realities.  In other words, the Newtonian concept of separability does not apply to quantum environments.

However, this may not be true.

According to quantum mechanics, the act of measuring the state of one of a pair of entangled photons instantly affects the measurement of the other no matter how far apart they are.  Yet if it is true, as mentioned earlier, that each particle has a separate but identical wave or probability function as it moves through space, the measurement of the state of one particle would be reflected in the measurement of the other, because those measured states will have the same probability of occurring in each particle.

In other words, the reason Bell's inequality is violated in quantum systems is not because the particles are physically entangled or connected at the time of measurement, but because their individual wave or probability functions were "entangled", or identical, at the time of their separation and remained that way as they moved apart.  Therefore, even though they are not physically connected, measurements based on their quantum mechanical probability functions would be correlated.

Additionally, quantum entanglement is defined in terms of probability. Therefore, there would be a non-zero probability that Bell's inequality will be violated when measuring the influence of one particle on another, because those measurements are based on probabilities. Therefore, one could mathematically quantify the scenario proposed above, because the probability of this occurring should mirror the individual quantum mechanical probability function of each individual particle.

But to say that the correlation between measurements of the quantum characteristics of two particles means they are entangled or physically connected is like saying the correlation between the hair color of identical twins means they have been physically connected throughout their entire lives.

This shows how, by extrapolating the classical laws governing a three-dimensional environment to a fourth *spatial* dimension, one can define a mechanism responsible for the correlation of the quantum mechanical measurements of particles that exist in non-local environments while maintaining the classical concepts of reality and separability.

Later Jeff

Copyright Jeffrey O’Callaghan 2011


 
