Tuesday, December 07, 2010

Physical Cosmology

Physical cosmology, as a branch of astronomy, is the study of the largest-scale structures and dynamics of the universe and is concerned with fundamental questions about its formation and evolution.[1] For most of human history, it was a branch of metaphysics and religion. Cosmology as a science originated with the Copernican principle, which implies that celestial bodies obey identical physical laws to those on Earth, and Newtonian mechanics, which first allowed us to understand those laws.

Physical cosmology, as it is now understood, began with the twentieth century development of Albert Einstein's general theory of relativity and better astronomical observations of extremely distant objects. These advances made it possible to speculate about the origin of the universe, and allowed scientists to establish the Big Bang theory as the leading cosmological model. Some researchers still advocate a handful of alternative cosmologies; however, cosmologists generally agree that the Big Bang theory best explains observations.

Cosmology draws heavily on the work of many disparate areas of research in physics. Areas relevant to cosmology include particle physics experiments and theory, including string theory, astrophysics, general relativity, and plasma physics. Thus, cosmology unites the physics of the largest structures in the universe with the physics of the smallest structures in the universe.


History of physical cosmology

Modern cosmology developed along tandem observational and theoretical tracks. In 1915, Albert Einstein developed his theory of general relativity. At the time, physicists believed in a perfectly static universe without beginning or end. Einstein added a cosmological constant to his theory to try to force it to allow for a static universe with matter in it. The so-called Einstein universe is, however, unstable. It is bound to eventually start expanding or contracting. The cosmological solutions of general relativity were found by Alexander Friedmann, whose equations describe the Friedmann-Lemaître-Robertson-Walker universe, which may expand or contract.

In the 1910s, Vesto Slipher (and later Carl Wilhelm Wirtz) interpreted the red shift of spiral nebulae as a Doppler shift that indicated they were receding from Earth. However, it is difficult to determine the distance to astronomical objects. One way is to compare the physical size of an object to its angular size, but a physical size must be assumed to do this. Another method is to measure the brightness of an object and assume an intrinsic luminosity, from which the distance may be determined using the inverse square law. Due to the difficulty of using these methods, they did not realize that the nebulae were actually galaxies outside our own Milky Way, nor did they speculate about the cosmological implications. In 1927, the Belgian Roman Catholic priest Georges Lemaître independently derived the Friedmann-Lemaître-Robertson-Walker equations and proposed, on the basis of the recession of spiral nebulae, that the universe began with the "explosion" of a "primeval atom"—which was later called the Big Bang. In 1929, Edwin Hubble provided an observational basis for Lemaître's theory. Hubble showed that the spiral nebulae were galaxies by determining their distances using measurements of the brightness of Cepheid variable stars. He discovered a relationship between the redshift of a galaxy and its distance. He interpreted this as evidence that the galaxies are receding from Earth in every direction at speeds directly proportional to their distance. This fact is now known as Hubble's law, though the numerical factor Hubble found relating recessional velocity and distance was off by a factor of ten, due to not knowing at the time about different types of Cepheid variables.
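To put the two distance methods and Hubble's relation in symbols (standard notation, added here for reference): an object of intrinsic luminosity L observed with flux F lies at a distance given by the inverse square law, and Hubble's law relates recession velocity v to that distance,

\[ F = \frac{L}{4\pi d^{2}} \;\Longrightarrow\; d = \sqrt{\frac{L}{4\pi F}}, \qquad v = H_{0}\, d , \]

where H₀ is the present-day Hubble constant; the numerical error mentioned above was an error in the value Hubble obtained for this constant.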
Given the cosmological principle, Hubble's law suggested that the universe was expanding. There were two primary explanations put forth for the expansion of the universe. One was Lemaître's Big Bang theory, advocated and developed by George Gamow. The other possibility was Fred Hoyle's steady state model in which new matter would be created as the galaxies moved away from each other. In this model, the universe is roughly the same at any point in time.

For a number of years the support for these theories was evenly divided. However, the observational evidence began to support the idea that the universe evolved from a hot dense state. The discovery of the cosmic microwave background in 1965 lent strong support to the Big Bang model, and since the precise measurements of the cosmic microwave background by the Cosmic Background Explorer in the early 1990s, few cosmologists have seriously proposed other theories of the origin and evolution of the cosmos. One consequence of this is that in standard general relativity, the universe began with a singularity, as demonstrated by Stephen Hawking and Roger Penrose in the 1960s.

History of the Universe

The history of the universe is a central issue in cosmology. The history of the universe is divided into different periods called epochs, according to the dominant forces and processes in each period. The standard cosmological model is known as the ΛCDM model.

Equations of motion

The equations of motion governing the universe as a whole are derived from general relativity with a small, positive cosmological constant. The solution is an expanding universe; due to this expansion the radiation and matter in the universe are cooled down and become diluted. At first, the expansion is slowed down by gravitation due to the radiation and matter content of the universe. However, as these become diluted, the cosmological constant becomes more dominant and the expansion of the universe starts to accelerate rather than decelerate. In our universe this has already happened, billions of years ago.
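Written out in standard notation (not given in the text above), the governing relation is the Friedmann equation for the scale factor a(t) with a cosmological constant Λ:

\[ H^{2} \equiv \left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\left(\rho_{m} + \rho_{r}\right) + \frac{\Lambda}{3} - \frac{k}{a^{2}}, \qquad \rho_{m}\propto a^{-3},\quad \rho_{r}\propto a^{-4}. \]

Because the matter and radiation densities dilute as the universe expands while the Λ/3 term stays constant, the expansion first decelerates and then, once Λ dominates, accelerates, as described above.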

Particle physics in cosmology

Particle physics is important to the behavior of the early universe, since the early universe was so hot that the average energy density was very high. Because of this, scattering processes and decay of unstable particles are important in cosmology.

As a rule of thumb, a scattering or a decay process is cosmologically important in a certain cosmological epoch if the time scale describing that process is smaller than or comparable to the time scale of the expansion of the universe, which is 1 / H, with H being the Hubble constant at that time. This is roughly equal to the age of the universe at that time.
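Stated symbolically (a standard criterion, using notation not in the text above): a process with interaction rate Γ, i.e. time scale 1/Γ, is cosmologically important in a given epoch when

\[ \Gamma \gtrsim H , \]

and it "freezes out" once Γ drops below H. In the radiation-dominated early universe H ~ T²/M_Pl (up to a factor counting the relativistic species), so hotter epochs expand faster and only the fastest reactions stay in equilibrium.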

Timeline of the Big Bang

Observations suggest that the universe began around 13.7 billion years ago. Since then, the evolution of the universe has passed through three phases. The very early universe, which is still poorly understood, was the split second in which the universe was so hot that particles had energies higher than those currently accessible in particle accelerators on Earth. Therefore, while the basic features of this epoch have been worked out in the Big Bang theory, the details are largely based on educated guesses. Following this, in the early universe, the evolution of the universe proceeded according to known high energy physics. This is when the first protons, electrons and neutrons formed, then nuclei and finally atoms. With the formation of neutral hydrogen, the cosmic microwave background was emitted. Finally, the epoch of structure formation began, when matter started to aggregate into the first stars and quasars, and ultimately galaxies, clusters of galaxies and superclusters formed. The future of the universe is not yet firmly known, but according to the ΛCDM model it will continue expanding forever.

Areas of study

Below, some of the most active areas of inquiry in cosmology are described, in roughly chronological order. This does not include all of the Big Bang cosmology, which is presented in Timeline of the Big Bang.

The very early universe

While the early, hot universe appears to be well explained by the Big Bang from roughly 10⁻³³ seconds onwards, there are several problems. One is that there is no compelling reason, using current particle physics, to expect the universe to be flat, homogeneous and isotropic (see the cosmological principle). Moreover, grand unified theories of particle physics suggest that there should be magnetic monopoles in the universe, which have not been found. These problems are resolved by a brief period of cosmic inflation, which drives the universe to flatness, smooths out anisotropies and inhomogeneities to the observed level, and exponentially dilutes the monopoles. The physical model behind cosmic inflation is extremely simple; however, it has not yet been confirmed by particle physics, and there are difficult problems reconciling inflation and quantum field theory. Some cosmologists think that string theory and brane cosmology will provide an alternative to inflation.

Another major problem in cosmology is what caused the universe to contain more particles than antiparticles. Cosmologists can observationally deduce that the universe is not split into regions of matter and antimatter. If it were, there would be X-rays and gamma rays produced as a result of annihilation, but this is not observed. This problem is called the baryon asymmetry, and the theory to describe the resolution is called baryogenesis. The theory of baryogenesis was worked out by Andrei Sakharov in 1967, and requires a violation of the particle physics symmetry, called CP-symmetry, between matter and antimatter. Particle accelerators, however, measure too small a violation of CP-symmetry to account for the baryon asymmetry. Cosmologists and particle physicists are trying to find additional violations of the CP-symmetry in the early universe that might account for the baryon asymmetry.

Both the problems of baryogenesis and cosmic inflation are very closely related to particle physics, and their resolution might come from high energy theory and experiment, rather than through observations of the universe.

Big Bang nucleosynthesis

Big Bang Nucleosynthesis is the theory of the formation of the elements in the early universe. It finished when the universe was about three minutes old and its temperature dropped below that at which nuclear fusion could occur. Big Bang nucleosynthesis had a brief period during which it could operate, so only the very lightest elements were produced. Starting from hydrogen ions (protons), it principally produced deuterium, helium-4 and lithium. Other elements were produced in only trace abundances. The basic theory of nucleosynthesis was developed in 1948 by George Gamow, Ralph Asher Alpher and Robert Herman. It was used for many years as a probe of physics at the time of the Big Bang, as the theory of Big Bang nucleosynthesis connects the abundances of primordial light elements with the features of the early universe. Specifically, it can be used to test the equivalence principle, to probe dark matter, and test neutrino physics. Some cosmologists have proposed that Big Bang nucleosynthesis suggests there is a fourth "sterile" species of neutrino.
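As a rough worked example of that connection (a standard back-of-the-envelope estimate, not part of the text above): when nucleosynthesis begins, the neutron-to-proton ratio has frozen out and decayed to roughly n/p ≈ 1/7, and essentially all surviving neutrons end up bound in helium-4, giving a primordial helium mass fraction

\[ Y_{p} \;\approx\; \frac{2\,(n/p)}{1 + n/p} \;\approx\; \frac{2/7}{8/7} \;=\; 0.25 , \]

close to what is observed. Because n/p depends on the expansion rate and the weak-interaction physics at that epoch, the measured abundances constrain the baryon density, neutrino properties, and possible new physics.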

Cosmic microwave background

The cosmic microwave background is radiation left over from decoupling after the epoch of recombination when neutral atoms first formed. At this point, radiation produced in the Big Bang stopped Thomson scattering from charged ions. The radiation, first observed in 1965 by Arno Penzias and Robert Woodrow Wilson, has a perfect thermal black-body spectrum. It has a temperature of 2.7 kelvins today and is isotropic to one part in 10⁵. Cosmological perturbation theory, which describes the evolution of slight inhomogeneities in the early universe, has allowed cosmologists to precisely calculate the angular power spectrum of the radiation, and it has been measured by the recent satellite experiments (COBE and WMAP) and many ground and balloon-based experiments (such as Degree Angular Scale Interferometer, Cosmic Background Imager, and Boomerang). One of the goals of these efforts is to measure the basic parameters of the Lambda-CDM model with increasing accuracy, as well as to test the predictions of the Big Bang model and look for new physics. The recent measurements made by WMAP, for example, have placed limits on the neutrino masses.

Newer experiments, such as QUIET and the Atacama Cosmology Telescope, are trying to measure the polarization of the cosmic microwave background. These measurements are expected to provide further confirmation of the theory as well as information about cosmic inflation, and the so-called secondary anisotropies, such as the Sunyaev-Zel'dovich effect and Sachs-Wolfe effect, which are caused by interaction between galaxies and clusters with the cosmic microwave background.

Formation and evolution of large-scale structure

Understanding the formation and evolution of the largest and earliest structures (i.e., quasars, galaxies, clusters and superclusters) is one of the largest efforts in cosmology. Cosmologists study a model of hierarchical structure formation in which structures form from the bottom up, with smaller objects forming first, while the largest objects, such as superclusters, are still assembling. One way to study structure in the universe is to survey the visible galaxies, in order to construct a three-dimensional picture of the galaxies in the universe and measure the matter power spectrum. This is the approach of the Sloan Digital Sky Survey and the 2dF Galaxy Redshift Survey.

Another tool for understanding structure formation is simulations, which cosmologists use to study the gravitational aggregation of matter in the universe, as it clusters into filaments, superclusters and voids. Most simulations contain only non-baryonic cold dark matter, which should suffice to understand the universe on the largest scales, as there is much more dark matter in the universe than visible, baryonic matter. More advanced simulations are starting to include baryons and study the formation of individual galaxies. Cosmologists study these simulations to see if they agree with the galaxy surveys, and to understand any discrepancy.

Other, complementary observations to measure the distribution of matter in the distant universe and to probe reionization include:
  • The Lyman alpha forest, which allows cosmologists to measure the distribution of neutral atomic hydrogen gas in the early universe, by measuring the absorption of light from distant quasars by the gas.
  • The 21 centimeter absorption line of neutral atomic hydrogen also provides a sensitive test of cosmology.
  • Weak lensing, the distortion of a distant image by gravitational lensing due to dark matter.
These will help cosmologists settle the question of when and how structure formed in the universe.

Dark matter

Evidence from Big Bang nucleosynthesis, the cosmic microwave background and structure formation suggests that about 23% of the mass of the universe consists of non-baryonic dark matter, whereas only 4% consists of visible, baryonic matter. The gravitational effects of dark matter are well understood, as it behaves like a cold, non-radiative fluid that forms haloes around galaxies. Dark matter has never been detected in the laboratory, and the particle physics nature of dark matter remains completely unknown. Without observational constraints, there are a number of candidates, such as a stable supersymmetric particle, a weakly interacting massive particle, an axion, and a massive compact halo object. Alternatives to the dark matter hypothesis include a modification of gravity at small accelerations (MOND) or an effect from brane cosmology.

Dark energy

If the universe is flat, there must be an additional component making up 73% (in addition to the 23% dark matter and 4% baryons) of the energy density of the universe. This is called dark energy. In order not to interfere with Big Bang nucleosynthesis and the cosmic microwave background, it must not cluster in haloes like baryons and dark matter. There is strong observational evidence for dark energy, as the total energy density of the universe is known through constraints on the flatness of the universe, but the amount of clustering matter is tightly measured, and is much less than this. The case for dark energy was strengthened in 1999, when measurements demonstrated that the expansion of the universe has begun to gradually accelerate.
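The bookkeeping behind those percentages is simply the flatness condition that the density parameters sum to unity (numbers as quoted above):

\[ \Omega_{\mathrm{total}} = \Omega_{b} + \Omega_{\mathrm{dm}} + \Omega_{\Lambda} \approx 0.04 + 0.23 + 0.73 = 1.00 , \]

so once the baryonic and dark-matter contributions are pinned down by nucleosynthesis, the cosmic microwave background and structure formation, the remaining roughly 73% must be the smooth, non-clustering dark-energy component.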

Apart from its density and its clustering properties, nothing is known about dark energy. Quantum field theory predicts a cosmological constant much like dark energy, but 120 orders of magnitude larger than that observed. Steven Weinberg and a number of string theorists (see string landscape) have used this as evidence for the anthropic principle, which suggests that the cosmological constant is so small because life (and thus physicists, to make observations) cannot exist in a universe with a large cosmological constant, but many people find this an unsatisfying explanation. Other possible explanations for dark energy include quintessence or a modification of gravity on the largest scales. The effect on cosmology of the dark energy that these models describe is given by the dark energy's equation of state, which varies depending upon the theory. The nature of dark energy is one of the most challenging problems in cosmology.

A better understanding of dark energy is likely to solve the problem of the ultimate fate of the universe. In the current cosmological epoch, the accelerated expansion due to dark energy is preventing structures larger than superclusters from forming. It is not known whether the acceleration will continue indefinitely, perhaps even increasing until a big rip, or whether it will eventually reverse.

Other areas of inquiry

Cosmologists also study:


References

  1. ^ For an overview, see George FR Ellis (2006). "Issues in the Philosophy of Cosmology". In Jeremy Butterfield & John Earman. Philosophy of Physics (Handbook of the Philosophy of Science) 3 volume set. North Holland. pp. 1183ff. ISBN 0444515607. http://arxiv.org/abs/astro-ph/0602280v2. 

Further reading


Textbooks

  • Cheng, Ta-Pei (2005). Relativity, Gravitation and Cosmology: a Basic Introduction. Oxford and New York: Oxford University Press. ISBN 0-19-852957-0.  Introductory cosmology and general relativity without the full tensor apparatus, deferred until the last part of the book.
  • Dodelson, Scott (2003). Modern Cosmology. Academic Press. ISBN 0-12-219141-2.  An introductory text, released slightly before the WMAP results.
  • Grøn, Øyvind; Hervik, Sigbjørn (2007). Einstein's General Theory of Relativity with Modern Applications in Cosmology. New York: Springer. ISBN 978-0-387-69199-2. 
  • Harrison, Edward (2000). Cosmology: the science of the universe. Cambridge University Press. ISBN 0-521-66148-X.  For undergraduates; mathematically gentle with a strong historical focus.
  • Kutner, Marc (2003). Astronomy: A Physical Perspective. Cambridge University Press. ISBN 0-521-52927-1.  An introductory astronomy text.
  • Kolb, Edward; Michael Turner (1988). The Early Universe. Addison-Wesley. ISBN 0-201-11604-9.  The classic reference for researchers.
  • Liddle, Andrew (2003). An Introduction to Modern Cosmology. John Wiley. ISBN 0-470-84835-9.  Cosmology without general relativity.
  • Liddle, Andrew; David Lyth (2000). Cosmological Inflation and Large-Scale Structure. Cambridge. ISBN 0-521-57598-2.  An introduction to cosmology with a thorough discussion of inflation.
  • Mukhanov, Viatcheslav (2005). Physical Foundations of Cosmology. Cambridge University Press. ISBN 0-521-56398-4. 
  • Padmanabhan, T. (1993). Structure formation in the universe. Cambridge University Press. ISBN 0-521-42486-0.  Discusses the formation of large-scale structures in detail.
  • Peacock, John (1998). Cosmological Physics. Cambridge University Press. ISBN 0-521-42270-1.  An introduction including more on general relativity and quantum field theory than most.
  • Peebles, P. J. E. (1993). Principles of Physical Cosmology. Princeton University Press. ISBN 0-691-01933-9.  Strong historical focus.
  • Peebles, P. J. E. (1980). The Large-Scale Structure of the Universe. Princeton University Press. ISBN 0-691-08240-5.  The classic work on large scale structure and correlation functions.
  • Rees, Martin (2002). New Perspectives in Astrophysical Cosmology. Cambridge University Press. ISBN 0-521-64544-1. 
  • Weinberg, Steven (1971). Gravitation and Cosmology. John Wiley. ISBN 0-471-92567-5.  A standard reference for the mathematical formalism.
  • Weinberg, Steven (2008). Cosmology. Oxford University Press. ISBN 0198526822. 
  • Gal-Or, Benjamin (1981, 1983, 1987). Cosmology, Physics and Philosophy. Springer Verlag. ISBN 0-387-90581-2, ISBN 0-387-96526-2.


Saturday, December 04, 2010

Thinking Outside the Box, People Like Veneziano, Turok and Penrose

Credit: V.G.Gurzadyan and R.Penrose


Dark circles indicate regions in space where the cosmic microwave background has temperature variations that are lower than average. The features hint that the universe was born long before the Big Bang 13.7 billion years ago and had undergone myriad cycles of birth and death before that time. See: Cosmic rebirth
***

Concentric circles in WMAP data may provide evidence of violent pre-Big-Bang activity

Abstract: Conformal cyclic cosmology (CCC) posits the existence of an aeon preceding our Big Bang 'B', whose conformal infinity 'I' is identified, conformally, with 'B', now regarded as a spacelike 3-surface. Black-hole encounters, within bound galactic clusters in that previous aeon, would have the observable effect, in our CMB sky, of families of concentric circles over which the temperature variance is anomalously low, the centre of each such family representing the point of 'I' at which the cluster converges. These centres appear as fairly randomly distributed fixed points in our CMB sky. The analysis of Wilkinson Microwave Background Probe's (WMAP) cosmic microwave background 7-year maps does indeed reveal such concentric circles, of up to 6σ significance. This is confirmed when the same analysis is applied to BOOMERanG98 data, eliminating the possibility of an instrumental cause for the effects. These observational predictions of CCC would not be easily explained within standard inflationary cosmology.
Update: Penrose’s Cyclic Cosmology by Sean Carroll

In response to...

More on the low variance circles in CMB sky

Abstract: Two groups [3,4] have confirmed the results of our paper concerning the actual existence of low variance circles in the cosmic microwave background (CMB) sky. They also point out that the effect does not contradict the LCDM model - a matter which is not in dispute. We point out two discrepancies between their treatment and ours, however, one technical, the other having to do with the very understanding of what constitutes a Gaussian random signal. Both groups simulate maps using the CMB power spectrum for LCDM, while we simulate a pure Gaussian sky plus the WMAP's noise, which points out the contradiction with a common statement [3] that "CMB signal is random noise of Gaussian nature". For as it was shown in [5], the random component is a minor one in the CMB signal, namely, about 0.2. Accordingly, the circles we saw are a real structure of the CMB sky and they are not of a random Gaussian nature. Although the structures studied certainly cannot contradict the power spectrum, which is well fitted by LCDM model, we particularly emphasize that the low variance circles occur in concentric families, and this key fact cannot be explained as a purely random effect. It is, however a clear prediction of conformal cyclic cosmology.


Wednesday, December 01, 2010

Holometer

Holometer Revised


This plot shows the sensitivity of various experiments to fluctuations in space and time. Horizontal axis is the log of apparatus size (or duration times the speed of light), in meters; vertical axis is the log of the rms fluctuation amplitude in the same units. The lower left corner represents the Planck length or time. In these units, the size of the observable universe is about 26. Various physical systems and experiments are plotted. The "holographic noise" line represents the rms transverse holographic fluctuation amplitude on a given scale. The most sensitive experiments are Michelson interferometers.

The Fermilab Holometer in Illinois is currently under construction and will be the world's most sensitive laser interferometer when complete, surpassing the sensitivity of the GEO600 and LIGO systems, and theoretically able to detect holographic fluctuations in spacetime.[1][2][3]

The Holometer may be capable of meeting or exceeding the sensitivity required to detect the smallest units in the universe called Planck units.[1] Fermilab states, "Everyone is familiar these days with the blurry and pixelated images, or noisy sound transmission, associated with poor internet bandwidth. The Holometer seeks to detect the equivalent blurriness or noise in reality itself, associated with the ultimate frequency limit imposed by nature."[2]
Craig Hogan, a particle astrophysicist at Fermilab, states about the experiment, "What we’re looking for is when the lasers lose step with each other. We’re trying to detect the smallest unit in the universe. This is really great fun, a sort of old-fashioned physics experiment where you don’t know what the result will be."

Experimental physicist Hartmut Grote of the Max Planck Institute in Germany, states that although he is skeptical that the apparatus will successfully detect the holographic fluctuations, if the experiment is successful "it would be a very strong impact to one of the most open questions in fundamental physics. It would be the first proof that space-time, the fabric of the universe, is quantized."[1]

References

  1. ^ a b c Mosher, David (2010-10-28). "World’s Most Precise Clocks Could Reveal Universe Is a Hologram". Wired. http://www.wired.com/wiredscience/2010/10/holometer-universe-resolution/. 
  2. ^ a b "The Fermilab Holometer". Fermi National Accelerator Laboratory. http://holometer.fnal.gov/. Retrieved 2010-11-01. 
  3. ^ Dillow, Clay (2010-10-21). "Fermilab is Building a 'Holometer' to Determine Once and For All Whether Reality Is Just an Illusion". Popular Science. http://www.popsci.com/science/article/2010-10/fermilab-building-holometer-determine-if-universe-just-hologram.

***
Fermilab Holometer
About a hundred years ago, the German physicist Max Planck introduced the idea of a fundamental, natural length or time, derived from fundamental constants. We now call these the Planck length, l_p = √(hG/2πc³) = 1.6 × 10⁻³⁵ meters. Light travels one Planck length in the Planck time, t_p = √(hG/2πc⁵) = 5.4 × 10⁻⁴⁴ seconds.
The physics of space and time is expected to change radically on such small scales. For example, a particle confined to a Planck volume automatically collapses to a black hole. 
See: Fermilab Holometer
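A quick way to see why the Planck scale marks where quantum mechanics and gravity collide (a standard estimate, not part of the quoted Fermilab text): a particle of mass m has Compton wavelength ħ/(mc) and Schwarzschild radius of order Gm/c²; the two coincide when

\[ \frac{\hbar}{mc} \sim \frac{Gm}{c^{2}} \quad\Longrightarrow\quad m \sim \sqrt{\frac{\hbar c}{G}} \approx 2\times10^{-8}\ \mathrm{kg} , \]

the Planck mass, whose Compton wavelength is just the Planck length quoted above. Confining that much energy to a Planck-sized region is what "automatically collapses to a black hole."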

***

A Conceptual Drawing of the 'Holometer' via Symmetry

“The shaking of spacetime occurs at a million times per second, a thousand times what your ear can hear,” said Fermilab experimental physicist Aaron Chou, whose lab is developing prototypes for the holometer. “Matter doesn’t like to shake at that speed. You could listen to gravitational frequencies with headphones.”
The whole trick, Chou says, is to prove that the vibrations don’t come from the instrument. Using technology similar to that in noise-cancelling headphones, sensors outside the instrument detect vibrations and shake the mirror at the same frequency to cancel them. Any remaining shakiness at high frequency, the researchers propose, will be evidence of blurriness in spacetime
“With the holometer’s long arms, we’re magnifying spacetime’s uncertainty,” Chou said.
See: Hogan’s holometer: Testing the hypothesis of a holographic universe

***

Conclusion:


Tuesday, November 23, 2010

The Synapse of the Wondering Mind

Click here for Penrose's Seminar

While trying to organize my thoughts about the title of this blog entry, it becomes apparent to me that the potential of neurological transposition of electrical pulses is part of the function of the physical system in order to operate, while I am thinking something much different.

It is the idea of our being receptive to something more than a signal transfer within the physical system of pathways established through repetitive use, but also the finding of that location, to receive. It is one where we can accept something into ourselves as information from another, as accepting information from around us. Information is energy?

***


Structure of a typical chemical synapse
In the nervous system, a synapse is a junction that permits a neuron to pass an electrical or chemical signal to another cell. The word "synapse" comes from "synaptein", which Sir Charles Scott Sherrington and colleagues coined from the Greek "syn-" ("together") and "haptein" ("to clasp").

Synapses are essential to neuronal function: neurons are cells that are specialized to pass signals to individual target cells, and synapses are the means by which they do so. At a synapse, the plasma membrane of the signal-passing neuron (the presynaptic neuron) comes into close apposition with the membrane of the target (postsynaptic) cell. Both the presynaptic and postsynaptic sites contain extensive arrays of molecular machinery that link the two membranes together and carry out the signaling process. In many synapses, the presynaptic part is located on an axon, but some presynaptic sites are located on a dendrite or soma.
There are two fundamentally different types of synapse:
  • In a chemical synapse, the presynaptic neuron releases a chemical called a neurotransmitter that binds to receptors located in the postsynaptic cell, usually embedded in the plasma membrane. Binding of the neurotransmitter to a receptor can affect the postsynaptic cell in a wide variety of ways.
  • In an electrical synapse, the presynaptic and postsynaptic cell membranes are connected by channels that are capable of passing electrical current, causing voltage changes in the presynaptic cell to induce voltage changes in the postsynaptic cell.

***

The Einstein-Podolsky-Rosen Argument in Quantum Theory

First published Mon May 10, 2004; substantive revision Wed Aug 5, 2009

In the May 15, 1935 issue of Physical Review Albert Einstein co-authored a paper with his two postdoctoral research associates at the Institute for Advanced Study, Boris Podolsky and Nathan Rosen. The article was entitled “Can Quantum Mechanical Description of Physical Reality Be Considered Complete?” (Einstein et al. 1935). Generally referred to as “EPR”, this paper quickly became a centerpiece in the debate over the interpretation of the quantum theory, a debate that continues today. The paper features a striking case where two quantum systems interact in such a way as to link both their spatial coordinates in a certain direction and also their linear momenta (in the same direction). As a result of this “entanglement”, determining either position or momentum for one system would fix (respectively) the position or the momentum of the other. EPR use this case to argue that one cannot maintain both an intuitive condition of local action and the completeness of the quantum description by means of the wave function. This entry describes the argument of that 1935 paper, considers several different versions and reactions, and explores the ongoing significance of the issues they raise. See Also:Historical Figures Lead Us to the Topic of Entanglement
When looking at Penrose's seminar, having clicked on the image, the idea presented itself to me that if one were to seek "a method by determination," I might express color of gravity as an exchange in principle, as if spooky action at a distance, as a representative example of colorimetric expressions.

Science and TA by Chris Boyd
Do we selectively ignore other models from artificial intelligence such as Zadeh's Fuzzy Logic? This is a logic used to model perception and used in newly designed "smart" cameras. Where standard logic must give a true or false value to every proposition, fuzzy logic assigns a certainty value between zero and one to each of the propositions, so that we say a statement is .7 true and .3 false. Is this theory selectively ignored to support our theories?

Here fuzzy logic and TA had served in principle to show orders between "0 and 1" as potentials of connection between the source of exchange between those two individuals. I see "cryptography" as an example of this determination, as a defined state of reductionism through that exchange.

Stuart Kauffman raises his own philosophical ideas about such things in "Beyond Einstein and Schrodinger? The Quantum Mechanics of Closed Quantum Systems," which lead to further ideas on his topic, but he has blocked my comments there, so I see no use in further participating and offering ideas for his efforts toward "data mining" with regard to his biological methods of determination.

I can say it has sparked further interest in my own assessment of "seeking to understand color of gravity" as a method of determination, as a state of deduction orientation, that we might get from a self-evidential result from exchange, as a "cause of determination" as to our futures.

While I have listed this here between two individuals, these thoughts also act as "an antenna" toward a universal question of "what one asks shall in some form be answered."

Not just a "blank slate," but one with something written on it. What design then predates physical expression, as if one could now define the human spirit and character, as the soul in constant expression through materiality? An "evolution of spirit" then making manifest our progressions, as leading from one position to another.


***
See Also:

The Synapse is a Portal of the Thinking Mind

Thursday, November 18, 2010

QGP Research Advances

“We can say that the system definitely flows like a liquid,” says Harris.


One of the first lead-ion collisions in the LHC as recorded by the ATLAS experiment on November 8, 2010. Image courtesy CERN.

***
Scientists from the ALICE experiment at CERN’s Large Hadron Collider have publicly revealed the first measurements from the world’s highest energy heavy-ion collisions. In two papers posted today to the arXiv.org website, the collaboration describes two characteristics of the collisions: the number of particles produced from the most head-on collisions; and, for more glancing blows, the flow of the system of two colliding nuclei.
Both measurements serve to rule out some theories about how the universe behaves at its most fundamental, despite being based on a relatively small number of collisions collected in the first few days of LHC running with lead-ion beams.
In the first measurement, scientists counted the charged particles that were produced from a few thousand of the most central lead-ion collisions—those where the lead nuclei hit each other head-on. The result showed that about 18,000 particles are produced from collisions of lead ions, which is about 2.2 times more particles than produced in similar collisions of gold ions at Brookhaven National Laboratory’s Relativistic Heavy Ion Collider.
See: ALICE experiment announces first results from LHC’s lead-ion collisions

Wednesday, November 17, 2010

Entanglement is a key feature of the way complexity....

LET’S CALL IT PLECTICS Murray Gell-Mann
It is appropriate that plectics refers to entanglement or the lack thereof, since entanglement is a key feature of the way complexity arises out of simplicity, making our subject worth studying. For example, all of us human beings and all the objects with which we deal are essentially bundles of simple quarks and electrons. If each of those particles had to be in its own independent state, we could not exist and neither could the other objects. It is the entanglement of the states of the particles that is responsible for matter as we know it.
http://tuvalu.santafe.edu/~mgm/Site/Publications_files/MGM%20118.pdf

I have wanted to refer to this article in previous entries. As of today, those blog entries should have this new link as a reference; I will be correcting blog entries as they appear.

Sunday, November 14, 2010

Gravimetry

For the chemical analysis technique, see Gravimetric analysis.


Gravity map of the Southern Ocean around the Antarctic continent
Author-Hannes Grobe, AWI

This gravity field was computed from sea-surface height measurements collected by the US Navy GEOSAT altimeter between March, 1985, and January, 1990. The high density GEOSAT Geodetic Mission data that lie south of 30 deg. S were declassified by the Navy in May of 1992 and contribute most of the fine-scale gravity information.

The Antarctic continent itself is shaded in blue depending on the thickness of the ice sheet (blue shades in steps of 1000 m); light blue is shelf ice; gray lines are the major ice divides; pink spots are parts of the continent which are not covered by ice; gray areas have no data.
Gravimetry is the measurement of the strength of a gravitational field. Gravimetry may be used when either the magnitude of gravitational field or the properties of matter responsible for its creation are of interest. The term gravimetry or gravimetric is also used in chemistry to define a class of analytical procedures, called gravimetric analysis relying upon weighing a sample of material.


Units of measurement

Gravity is usually measured in units of acceleration. In the SI system of units, the standard unit of acceleration is 1 metre per second squared (abbreviated as m/s²). Other units include the gal (sometimes known as a galileo, in either case with symbol Gal), which equals 1 centimetre per second squared, and the g (g_n), equal to 9.80665 m/s². The value of g_n approximately equals the acceleration due to gravity at the Earth's surface (although the actual acceleration g varies fractionally from place to place).
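In those units the conversions are simple arithmetic (added here for reference):

\[ 1\ \mathrm{Gal} = 1\ \mathrm{cm/s^{2}} = 0.01\ \mathrm{m/s^{2}}, \qquad 1\ \mathrm{mGal} = 10^{-5}\ \mathrm{m/s^{2}}, \qquad g_{n} = 9.80665\ \mathrm{m/s^{2}} = 980.665\ \mathrm{Gal} , \]

so the microgal-level precision discussed below corresponds to about one part in 10⁹ of the gravitational acceleration at the Earth's surface.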

How gravity is measured

An instrument used to measure gravity is known as a gravimeter, or gravitometer. Since general relativity regards the effects of gravity as indistinguishable from the effects of acceleration, gravimeters may be regarded as special purpose accelerometers. Many weighing scales may be regarded as simple gravimeters. In one common form, a spring is used to counteract the force of gravity pulling on an object. The change in length of the spring may be calibrated to the force required to balance the gravitational pull. The resulting measurement may be made in units of force (such as the newton), but is more commonly made in units of gals.
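A minimal sketch of that spring-scale calibration (standard notation, not in the text above): a test mass m hanging at rest stretches the spring until the Hooke's-law restoring force balances its weight,

\[ k\,\Delta x = m\,g \quad\Longrightarrow\quad g = \frac{k\,\Delta x}{m} , \]

so with the spring constant k and the mass m known, the measured extension Δx reads off the local gravitational acceleration g.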

More sophisticated gravimeters are used when precise measurements are needed. When measuring the Earth's gravitational field, measurements are made to the precision of microgals to find density variations in the rocks making up the Earth. Several types of gravimeters exist for making these measurements, including some that are essentially refined versions of the spring scale described above. These measurements are used to define gravity anomalies.

Besides precision, stability is also an important property of a gravimeter, as it allows the monitoring of gravity changes. These changes can be the result of mass displacements inside the Earth, or of vertical movements of the Earth's crust on which measurements are being made: remember that gravity decreases by about 0.3 mGal for every metre of height. The study of gravity changes belongs to geodynamics.
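That 0.3 mGal per metre figure (the free-air gradient) follows from differentiating the inverse square law at the Earth's surface:

\[ g = \frac{GM}{r^{2}} \quad\Longrightarrow\quad \frac{dg}{dr} = -\frac{2g}{r} \approx -\frac{2\times 9.81\ \mathrm{m/s^{2}}}{6.37\times10^{6}\ \mathrm{m}} \approx -3.1\times10^{-6}\ \mathrm{m/s^{2}}\ \text{per metre} \approx -0.31\ \mathrm{mGal\ per\ metre}. \]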

The majority of modern gravimeters use specially designed quartz zero-length springs to support the test mass. Zero-length springs do not follow Hooke's Law; instead, they have a force proportional to their length. The special property of these springs is that the natural resonant period of oscillation of the spring-mass system can be made very long - approaching a thousand seconds. This detunes the test mass from most local vibration and mechanical noise, increasing the sensitivity and utility of the gravimeter. The springs are quartz so that magnetic and electric fields do not affect measurements. The test mass is sealed in an air-tight container so that tiny changes of barometric pressure from blowing wind and other weather do not change the buoyancy of the test mass in air.

Spring gravimeters are, in practice, relative instruments which measure the difference in gravity between different locations. A relative instrument also requires calibration by comparing instrument readings taken at locations with known complete or absolute values of gravity. Absolute gravimeters provide such measurements by determining the gravitational acceleration of a test mass in vacuum. A test mass is allowed to fall freely inside a vacuum chamber and its position is measured with a laser interferometer and timed with an atomic clock. The laser wavelength is known to ±0.025 ppb and the clock is stable to ±0.03 ppb as well. Great care must be taken to minimize the effects of perturbing forces such as residual air resistance (even in vacuum) and magnetic forces. Such instruments are capable of an accuracy of a few parts per billion or 0.002 mGal and reference their measurement to atomic standards of length and time. Their primary use is for calibrating relative instruments, monitoring crustal deformation, and in geophysical studies requiring high accuracy and stability. However, absolute instruments are somewhat larger and significantly more expensive than relative spring gravimeters, and are thus relatively rare.
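A minimal sketch of how the free-fall measurement yields g (standard kinematics, not spelled out in the text): the interferometer records the position of the falling test mass at a series of times, and g is obtained by fitting those position-time pairs to

\[ x(t) = x_{0} + v_{0}\,t + \tfrac{1}{2}\,g\,t^{2} , \]

with the laser wavelength fixing the length scale of x and the atomic clock fixing the time scale of t, which is why the measurement is referenced directly to atomic standards of length and time.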

Gravimeters have been designed to mount in vehicles, including aircraft, ships and submarines. These special gravimeters isolate acceleration from the movement of the vehicle, and subtract it from measurements. The acceleration of the vehicles is often hundreds or thousands of times stronger than the changes being measured. A gravimeter (the Lunar Surface Gravimeter) was also deployed on the surface of the moon during the Apollo 17 mission, but did not work due to a design error. A second device (the Traverse Gravimeter Experiment) functioned as anticipated.

Microgravimetry

Microgravimetry is a rising and important branch developed on the foundation of classical gravimetry.

Microgravity investigations are carried out in order to solve various problems of engineering geology, mainly the location of voids and their monitoring. Very detailed measurements of high accuracy can indicate voids of any origin, provided the size and depth are large enough to produce a gravity effect stronger than the level of confidence of the relevant gravity signal.

History

The modern gravimeter was developed by Lucien LaCoste and Arnold Romberg in 1936.

They also invented most subsequent refinements, including the ship-mounted gravimeter, in 1965, temperature-resistant instruments for deep boreholes, and lightweight hand-carried instruments. Most of their designs remain in use (2005) with refinements in data collection and data processing.


The Lunar Far Side: The Side Never Seen from Earth

Mass concentration (astronomy)

This figure shows the topography (top) and corresponding gravity (bottom) signal of Mare Smythii at the Moon. It nicely illustrates the term "mascon". Author Martin Pauer

While the article is from Tuesday, June 22, 2010, 9:00 PM, it still amazes me how we see the moon in the context of its coloring.
Topography, when seen in the context of landscape, and how we measure aspects of the gravitational field, supplies us with a more realistic interpretation of the globe, an accurate picture of how that sphere (in isostatic equilibrium) looks.


Image Credit: NASA/Goddard
Ten Cool Things Seen in the First Year of LRO

Tidal forces between the moon and the Earth have slowed the moon's rotation so that one side of the moon always faces toward our planet. Though sometimes improperly referred to as the "dark side of the moon," it should correctly be referred to as the "far side of the moon" since it receives just as much sunlight as the side that faces us. The dark side of the moon should refer to whatever hemisphere isn't lit at a given time. Though several spacecraft have since imaged the far side of the moon, LRO is providing new details about the entire half of the moon that is obscured from Earth. The lunar far side is rougher and has many more craters than the near side, so quite a few of the most fascinating lunar features are located there, including one of the largest known impact craters in the solar system, the South Pole-Aitken Basin. The image highlighted here shows the moon's topography from LRO's LOLA instruments with the highest elevations up above 20,000 feet in red and the lowest areas down below -20,000 feet in blue.

Learn More About Far side of the Moon

***
 Credit: NASA/Goddard/MIT/Brown

Figure 4: A lunar topographic map showing the Moon from the vantage point of the eastern limb. On the left side of the Moon seen in this view is part of the familiar part of the Moon observed from Earth (the eastern part of the nearside). In the middle left-most part of the globe is Mare Tranquillitatis (light blue) the site of the Apollo 11 landing, and above this an oval-appearing region (Mare Serenitatis; dark blue) the site of the Apollo 17 landing. Most of the dark blue areas are lunar maria, low lying regions composed of volcanic lava flows that formed after the heavily cratered lunar highlands (and are thus much less cratered). The topography is derived from over 2.4 billion shots made by the Lunar Orbiter Laser Altimeter (LOLA) instrument on board the NASA Lunar Reconnaissance Orbiter. The large near-circular basins show the effects of the early impacts on early planetary crusts in the inner solar system, including the Earth. 

***
 Author and Image Credit: Mark A. Wieczorek
Radial gravitational anomaly at the surface of the Moon as determined from the gravity model LP150Q. The contribution due to the rotational flattening has been removed for clarity, and positive anomalies correspond to an increase in magnitude of the gravitational acceleration. Data are presented in two Lambert azimuthal equal area projections.
The major characteristic of the Moon's gravitational field is the presence of mascons, which are large positive gravity anomalies associated with some of the giant impact basins. These anomalies greatly influence the orbit of spacecraft about the Moon, and an accurate gravitational model is necessary in the planning of both manned and unmanned missions. They were initially discovered by the analysis of Lunar Orbiter tracking data,[2] since navigation tests prior to the Apollo program experienced positioning errors much larger than mission specifications.

Wednesday, November 10, 2010

It's Neither World, not Nether

Netherworld is often used as a synonym for Underworld.

Okay, this may seem like a strange title, but believe me when I say how fascinating it is that such dynamics in meeting "each other" will allow something to "pop" right out of existence.

Underworld is a region in some religions and in mythologies which is thought to be under the surface of the earth.[1] It could be a place where the souls of the recently departed go, and, in some traditions, it is identified with Hell or the realm of death. In other traditions, however, such as animistic traditions, it could be seen as the place where life appears to have originated from (such as plant life, water, etc.) and a place to which life must return at life's end, with no negative undertones.

I mean I am not quite sure how this post must materialize, to conclude "non-existence," until it is clear that such dynamics will allow such a thing to happen, that one could say, indeed, they have completed their journey.

Now, can I say that this is the process of the universe? I can't be sure. I know that in the "mediation process" for concluding the experience, such an experience has to come undone. Again, this is such a strange thing in my mind that I had to say that "I was the experience" until such a time that, going along with other things in sameness of dynamics, it was hard at first to see these dynamics in play as being apart from it. I could actually only say enough of this experience to conclude the realization of coming undone. Hmm...

To solidify this until understanding, I relived these things until I saw the last of the tension ebb away to allow  "a tension" to become undone. As if such tension "had to exist" until the very bubble that harbored and allowed all of the world of our expediency no longer supported such a viable option as that bubble.

I know this is not such a cute analogy, but to get to the essence of the story it has to be understood that underneath "this experience" is a dynamical revelation of sorts that hides the equation of such an experience?

You should know then that I see this very schematic of the world as having this nature to it: that we may describe reality as something closer to the definition of its very existence, and that such an attempt at describing nature was to get to the very end of what begins? Imagine arriving at the juxtaposition of such a point?

How Are We to Contain Experience?

In mathematics, the Klein bottle ([klaɪ̯n]) is a non-orientable surface, informally, a surface (a two-dimensional manifold) with no identifiable "inner" and "outer" sides. Other related non-orientable objects include the Möbius strip and the real projective plane. Whereas a Möbius strip is a two-dimensional surface with boundary, a Klein bottle has no boundary. (For comparison, a sphere is an orientable surface with no boundary.)
By adding a fourth dimension to the three dimensional space, the self-intersection can be eliminated. Gently push a piece of the tube containing the intersection out of the original three dimensional space. A useful analogy is to consider a self-intersecting curve on the plane; self-intersections can be eliminated by lifting one strand off the plane.
This immersion is useful for visualizing many properties of the Klein bottle. For example, the Klein bottle has no boundary, where the surface stops abruptly, and it is non-orientable, as reflected in the one-sidedness of the immersion.
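One concrete way to build the Klein bottle, added here for illustration (a standard construction), is as the unit square [0,1] × [0,1] with its edges identified by

\[ (x,\,0) \sim (x,\,1), \qquad (0,\,y) \sim (1,\,1-y) , \]

that is, the top and bottom edges are glued straight across, while the left and right edges are glued with a flip. Crossing the flipped edge reverses orientation, which is exactly the non-orientability and one-sidedness described above, and since every edge point is glued to another, the result has no boundary.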

The geometry was revealing as I tried to encapsulate this point, so as to see where such a description fell away from all that we can contain of the world. That we can truly say we had indeed let go. To imagine then that one's grip on things became ever tighter, while wishing to let the strength of this while becoming ever stronger to fall away.

"While Gassner was watching television, the natural motion of the Earth must have carried him through a small non-orientable pocket of the universe," said Boris Harkov, a mathematician at the Massachusetts Institute of Technology in Cambridge. "That's the only reasonable explanation."

One way to test the orientation of the universe is to hurl a right-handed glove into the air and see if it falls back to Earth as a left-handed glove--if it does, the universe must be non-orientable. Since Gassner's announcement, physicists have been carrying out such experiments, both outdoors and in Gassner's TV room, but so far all tests have come back negative. Still, many researchers are optimistic. "I'm confident that the glove will flip soon," said Chen Xiang, an experimental physicist at Brookhaven National Laboratory in New York. The Klarreich Occasionally


The ultimate realization is that what is negative is a positive toward completion. That is how one might define the whole perspective of validation of no longer being negative?

As if one would realize that such a tension revealed in the Tao no longer existed in the picture, as a demonstration of the Tao now gone.
Now, such an object seemed part of the experience, as to the unfolding, yet in my inadequate understanding how could such a thing be taken down to such a point as to say it no longer existed? How can I say such a geometry was part of that process while I struggle to define such an action as falling away, or reducing it to such a point of nothing?

It's enough then that one sees "around that point" that the ultimate quest envisions such an "undoing" that we see where the relevance of such a tension can and should no longer exist?

The Experience Most Fitting Then?

As I relayed earlier, I experienced many things until I understood this undoing, that such reason then to awareness of "what should be" was capsulized in only one example. How shall I say it then, that I understood all that befell me to dissolution, to show that such a demonstration was complete? I would still be here? That such an equation of resistance could have been imparted not only in the equation, but in the telling of the experience too?

While I show such an example by experience, it should be taken that in this example I have changed the name of the person in order to protect our association. Shall I be so forthcoming that the "object of relation" shall be the only thing identifiable, so as to know that this association is very real to me, and only to me, by that person's identification as an experience that is real? Aw... well anyway, "more than one" for sure, as to the way in which I use that experience to demonstrate.

It all began as I noticed a tension in his voice, as he slipped into the realization of something that had happened to him earlier that day. I was taken to a "good observation point" so that I might admit to seeing what he was seeing. As hard as I looked at first, I could not tell what he was so upset about; I tried ever harder to see, and slowly I understood what he was pointing at, why such a tension could exist in him and his voice, and why such a rectification and adjustment was needed in order to make this right.

As I relay this situation, it was apparent at the time of such a demonstration, as an example that this situation popped up, that to make it right had to be the undoing of what made it wrong, you see. To drive the point home for realization was to demonstrate that such undoing had to rectify the situation where it began, so of course, all actions were taken to get it fixed. Could it have ever been undone?

Well, as if I understood why such an experience came frothing to the surface of awareness, I thought to conclude this example by what I saw: it took me by realization that "in turning" to back up, a hand imprint in oil was left on the back of the seat in order for the person to complete the job. A "new point of tension," by not washing their hands, or not covering pristine upholstery that was just purchased, was created.

All of this has to be undone in order for one to say that this experience has popped out of existence you see?

That was how such a demonstration was shown to be reasonable in my mind: such an equation manifested such a description of that experience that I could say it was reasonable to me that I had understood.

Whether it was a good example rests on you, to be sure.

***
Physically, the effect can be interpreted as an object moving from the "false vacuum" (where φ = 0) to the more stable "true vacuum" (where φ = v). Gravitationally, it is similar to the more familiar case of moving from the hilltop to the valley. In the case of the Higgs field, the transformation is accompanied by a "phase change", which endows mass to some of the particles.

"Quantum Field Theory

Quantum Vacuum:

In classical physics, empty space is called the vacuum. The classical vacuum is utterly featureless. However, in quantum theory, the vacuum is a much more complex entity. The uncertainty principle allows virtual particles (each corresponding to a quantum field) to continually materialize out of the vacuum, propagate for a short time and then vanish. " http://universe-review.ca/R15-12-QFT.htm#vacuum

"The idea behind the Coleman-De Luccia instanton, discovered in 1987, is that the matter in the early universe is initially in a state known as a false vacuum. A false vacuum is a classically stable excited state which is quantum mechanically unstable." http://www.damtp.cam.ac.uk/research/gr/public/qg_qc.html