
Saturday, September 15, 2012

Noncommutative standard model




In theoretical particle physics, the non-commutative Standard Model, mainly due to the French mathematician Alain Connes, uses his noncommutative geometry to devise an extension of the Standard Model to include a modified form of general relativity. This unification implies a few constraints on the parameters of the Standard Model. Under an additional assumption, known as the "big desert" hypothesis, one of these constraints determines the mass of the Higgs boson to be around 170 GeV, comfortably within the range of the Large Hadron Collider. Recent Tevatron experiments exclude a Higgs mass of 158 to 175 GeV at the 95% confidence level.[1] However, the previously computed Higgs mass was found to have an error, and more recent calculations are in line with the measured Higgs mass.[2]

 


 

Background


Current physical theory features four elementary forces: the gravitational force, the electromagnetic force, the weak force, and the strong force. Gravity has an elegant and experimentally precise theory: Einstein's general relativity. It is based on Riemannian geometry and interprets the gravitational force as curvature of space-time. Its Lagrangian formulation requires only two empirical parameters, the gravitational constant and the cosmological constant.

The other three forces also have a Lagrangian theory, called the Standard Model. Its underlying idea is that they are mediated by the exchange of spin-1 particles, the so-called gauge bosons. The one responsible for electromagnetism is the photon. The weak force is mediated by the W and Z bosons; the strong force, by gluons. The gauge Lagrangian is much more complicated than the gravitational one: at present, it involves some 30 real parameters, a number that could increase. What is more, the gauge Lagrangian must also contain a spin 0 particle, the Higgs boson, to give mass to the spin 1/2 and spin 1 particles. This particle has yet to be observed, and if it is not detected at the Large Hadron Collider in Geneva, the consistency of the Standard Model is in doubt.

Alain Connes has generalized Bernhard Riemann's geometry to noncommutative geometry. It describes spaces with curvature and uncertainty. Historically, the first example of such a geometry is quantum mechanics, which introduced Heisenberg's uncertainty relation by turning the classical observables of position and momentum into noncommuting operators. Noncommutative geometry is still sufficiently similar to Riemannian geometry that Connes was able to rederive general relativity. In doing so, he obtained the gauge Lagrangian as a companion of the gravitational one, a truly geometric unification of all four fundamental interactions. Connes has thus devised a fully geometric formulation of the Standard Model, where all the parameters are geometric invariants of a noncommutative space. A result is that parameters like the electron mass are now analogous to purely mathematical constants like pi. In 1929 Weyl wrote to Einstein that any unified theory would need to include the metric tensor, a gauge field, and a matter field. Einstein considered the Einstein-Maxwell-Dirac system by 1930. He probably did not develop it because he was unable to geometrize it. It can now be geometrized as a noncommutative geometry.

Notes

 

  1. The TEVNPH Working Group (combined CDF and D0 Standard Model Higgs searches).
  2. A. H. Chamseddine and A. Connes, "Resilience of the Spectral Standard Model".

 




Wednesday, December 14, 2011

Explanation on Quantum Gravity in a Nutshell

Although Aristotle in general had a more empirical and experimental attitude than Plato, modern science did not come into its own until Plato's Pythagorean confidence in the mathematical nature of the world returned with Kepler, Galileo, and Newton. For instance, Aristotle, relying on a theory of opposites that is now only of historical interest, rejected Plato's attempt to match the Platonic Solids with the elements -- while Plato's expectations are realized in mineralogy and crystallography, where the Platonic Solids occur naturally. (Plato and Aristotle, Up and Down, Kelley L. Ross, Ph.D.)



The goal of string theory is to explain the "?" in the above diagram.


I enjoyed the Livescribe demonstration by Clifford of Asymptotia, along with the explanation for quantum gravity. The two pillars were, for me, very emblematic with regard to the "pillars of science." This, as well as the arch, is very fitting to me of what becomes self-evident. Under such an examination, the two areas Clifford is talking about, quantum mechanics and general relativity, are then the subjects of the attempts at unification.

 
The Yorck Project: 10.000 Meisterwerke der Malerei. DVD-ROM, 2002. ISBN 3936122202. Distributed by DIRECTMEDIA Publishing GmbH.


That question mark, demonstrated above, shows where the location in Clifford's diagrams is related to the Aristotelian arch, in my view?


Thursday, November 24, 2011

Relativistic Mechanical Quantities

A number of ordinary mechanical quantities take on a different form as the speed approaches the speed of light.


Relativistic Mechanical Quantities (link)
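As a small illustration of how these quantities change (my own Python sketch, not taken from the linked page), the Lorentz factor, relativistic momentum and relativistic kinetic energy of an electron can be computed directly; the 0.9c input is just an arbitrary example speed.

import math

C = 2.998e8          # speed of light, m/s
M_E = 9.109e-31      # electron rest mass, kg

def gamma(v):
    """Lorentz factor for speed v (m/s)."""
    return 1.0 / math.sqrt(1.0 - (v / C) ** 2)

def momentum(m, v):
    """Relativistic momentum p = gamma * m * v."""
    return gamma(v) * m * v

def kinetic_energy(m, v):
    """Relativistic kinetic energy T = (gamma - 1) * m * c^2."""
    return (gamma(v) - 1.0) * m * C ** 2

v = 0.9 * C
print("gamma             :", gamma(v))                 # ~2.29 at 0.9c
print("momentum (kg m/s) :", momentum(M_E, v))
print("kinetic energy (J):", kinetic_energy(M_E, v))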
***


Kinematic Time Shift Calculation 

Hafele and Keating Experiment

Usefulness of the Quantity pc

Calorimeters for High Energy Physics experiments – part 1

April 6, 2008 by Dorigo

 

 ***


first tau-neutrino “appearing” out of several billions of billions of muon neutrinos

Also See:


Lepton


Lepton
Leptons are involved in several processes such as beta decay.
Composition: Elementary particle
Statistics: Fermionic
Generation: 1st, 2nd, 3rd
Interactions: Electromagnetism, Gravitation, Weak
Symbol: ℓ
Antiparticle: Antilepton (ℓ)
Types: 6 (electron, electron neutrino, muon, muon neutrino, tau, tau neutrino)
Electric charge: +1 e, 0 e, −1 e
Color charge: No
Spin: 1/2
A lepton is an elementary particle and a fundamental constituent of matter.[1] The best known of all leptons is the electron which governs nearly all of chemistry as it is found in atoms and is directly tied to all chemical properties. Two main classes of leptons exist: charged leptons (also known as the electron-like leptons), and neutral leptons (better known as neutrinos). Charged leptons can combine with other particles to form various composite particles such as atoms and positronium, while neutrinos rarely interact with anything, and are consequently rarely observed.
There are six types of leptons, known as flavours, forming three generations.[2] The first generation is the electronic leptons, comprising the electron (e) and electron neutrino (ν_e); the second is the muonic leptons, comprising the muon (μ) and muon neutrino (ν_μ); and the third is the tauonic leptons, comprising the tau (τ) and the tau neutrino (ν_τ). Electrons have the least mass of all the charged leptons. The heavier muons and taus will rapidly change into electrons through a process of particle decay: the transformation from a higher mass state to a lower mass state. Thus electrons are stable and the most common charged lepton in the universe, whereas muons and taus can only be produced in high energy collisions (such as those involving cosmic rays and those carried out in particle accelerators).

Leptons have various intrinsic properties, including electric charge, spin, and mass. Unlike quarks however, leptons are not subject to the strong interaction, but they are subject to the other three fundamental interactions: gravitation, electromagnetism (excluding neutrinos, which are electrically neutral), and the weak interaction. For every lepton flavor there is a corresponding type of antiparticle, known as antilepton, that differs from the lepton only in that some of its properties have equal magnitude but opposite sign. However, according to certain theories, neutrinos may be their own antiparticle, but it is not currently known whether this is the case or not.

The first charged lepton, the electron, was theorized in the mid-19th century by several scientists[3][4][5] and was discovered in 1897 by J. J. Thomson.[6] The next lepton to be observed was the muon, discovered by Carl D. Anderson in 1936, but it was erroneously classified as a meson at the time.[7] After investigation, it was realized that the muon did not have the expected properties of a meson, but rather behaved like an electron, only with higher mass. It took until 1947 for the concept of "leptons" as a family of particles to be proposed.[8] The first neutrino, the electron neutrino, was proposed by Wolfgang Pauli in 1930 to explain certain characteristics of beta decay.[8] It was first observed in the Cowan–Reines neutrino experiment conducted by Clyde Cowan and Frederick Reines in 1956.[8][9] The muon neutrino was discovered in 1962 by Leon M. Lederman, Melvin Schwartz and Jack Steinberger,[10] and the tau was discovered between 1974 and 1977 by Martin Lewis Perl and his colleagues from the Stanford Linear Accelerator Center and Lawrence Berkeley National Laboratory.[11] The tau neutrino remained elusive until July 2000, when the DONUT collaboration from Fermilab announced its discovery.[12][13]

Leptons are an important part of the Standard Model. Electrons are one of the components of atoms, alongside protons and neutrons. Exotic atoms with muons and taus instead of electrons can also be synthesized, as well as lepton–antilepton particles such as positronium.

2011 Review of Particle Physics: K. Nakamura et al. (Particle Data Group), J. Phys. G 37, 075021 (2010), and 2011 partial update for the 2012 edition.






Sunday, November 20, 2011

Energy Boost From Shock Front

Main Components of CNGS
A 400 GeV/c proton beam is extracted from the SPS in 10.5-microsecond short pulses of 2.4×10^13 protons per pulse. The proton beam is transported through the transfer line TT41 to the CNGS target T40. The target consists of a series of graphite rods, which are cooled by a recirculated helium flow. Secondary pions and kaons of positive charge produced in the target are focused into a parallel beam by a system of two pulsed magnetic lenses, called horn and reflector. A 1 km long evacuated decay pipe allows the pions and kaons to decay into their daughter particles - of interest here is mainly the decay into muon-neutrinos and muons. The remaining hadrons (protons, pions, kaons) are absorbed in an iron beam dump with a graphite core. The muons are monitored in two sets of detectors downstream of the dump. Further downstream, the muons are absorbed in the rock while the neutrinos continue their travel towards Gran Sasso.
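As a rough sanity check of the figures quoted above (my own back-of-the-envelope arithmetic, not part of the CNGS description), the energy carried by one extraction and the mean power during the 10.5-microsecond spill can be estimated in a few lines of Python:

# Energy delivered per CNGS extraction, using only the figures quoted above.
E_PROTON_GEV = 400.0          # proton energy, GeV
PROTONS_PER_PULSE = 2.4e13    # protons per extraction
PULSE_LENGTH_S = 10.5e-6      # spill length, s
GEV_TO_JOULE = 1.602e-10      # 1 GeV in joules

energy_per_pulse_J = E_PROTON_GEV * PROTONS_PER_PULSE * GEV_TO_JOULE
print("energy per pulse : %.2f MJ" % (energy_per_pulse_J / 1e6))                  # ~1.5 MJ
print("power during spill: %.0f GW" % (energy_per_pulse_J / PULSE_LENGTH_S / 1e9))  # ~150 GW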
For me it has been an interesting journey in trying to understand the full context of an event in space sending information throughout the cosmos in ways that are not limited to the matter configurations that would affect signals of those events.

In astrophysics, the most widely discussed mechanism of particle acceleration is the first-order Fermi process operating at collisionless shocks. It is based on the idea that particles undergo stochastic elastic scatterings both upstream and downstream of the shock front. This causes particles to wander across the shock repeatedly. On each crossing, they receive an energy boost as a result of the relative motion of the upstream and downstream plasmas. At non-relativistic shocks, scattering causes particles to diffuse in space, and the mechanism, termed "diffusive shock acceleration," is widely thought to be responsible for the acceleration of cosmic rays in supernova remnants. At relativistic shocks, the transport process is not spatial diffusion, but the first-order Fermi mechanism operates nevertheless (for reviews, see Kirk & Duffy 1999; Hillas 2005). In fact, the first ab initio demonstrations of this process using particle-in-cell (PIC) simulations have recently been presented for the relativistic case (Spitkovsky 2008b; Martins et al. 2009; Sironi & Spitkovsky 2009).
Several factors, such as the lifetime of the shock front or its spatial extent, can limit the energy to which particles can be accelerated in this process. However, even in the absence of these, acceleration will ultimately cease when the radiative energy losses that are inevitably associated with the scattering process overwhelm the energy gains obtained upon crossing the shock. Exactly when this happens depends on the details of the scattering process. See: RADIATIVE SIGNATURES OF RELATIVISTIC SHOCKS
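To make the "energy boost on each crossing" idea concrete, here is a toy Monte Carlo sketch of first-order Fermi acceleration (my own illustration with assumed gain and escape probabilities, not code from the quoted paper): each particle gains a fixed fraction of its energy per shock crossing and has a fixed chance of escaping downstream, and the surviving population builds up the characteristic power-law spectrum.

import random

GAIN = 0.2        # fractional energy gain per shock crossing (assumed)
P_ESCAPE = 0.1    # probability of escaping downstream after a crossing (assumed)
N_PARTICLES = 100_000

def accelerate(e0=1.0):
    """Follow one particle until it escapes; return its final energy."""
    e = e0
    while random.random() > P_ESCAPE:   # particle re-crosses the shock
        e *= (1.0 + GAIN)               # energy boost on each crossing
    return e

energies = [accelerate() for _ in range(N_PARTICLES)]

# The integral spectrum N(>E) falls roughly as E**(-s) with
# s = -ln(1 - P_ESCAPE) / ln(1 + GAIN)  (about 0.58 for these toy numbers).
for threshold in (1.0, 2.0, 4.0, 8.0, 16.0):
    frac = sum(e >= threshold for e in energies) / N_PARTICLES
    print("fraction with E >= %5.1f : %.4f" % (threshold, frac))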

While trying to find an example of soliton expression here in the blog, the familiar animation of the boat traveling down the channel does not quite offer itself; for me, though, this was the idea behind the experimental processes unfolding at the LHC. Does the collision point create shock waves and particle sprays as jets?


Soliton


Solitary wave in a laboratory wave channel.
In mathematics and physics, a soliton is a self-reinforcing solitary wave (a wave packet or pulse) that maintains its shape while it travels at constant speed. Solitons are caused by a cancellation of nonlinear and dispersive effects in the medium. (The term "dispersive effects" refers to a property of certain systems where the speed of the waves varies according to frequency.) Solitons arise as the solutions of a widespread class of weakly nonlinear dispersive partial differential equations describing physical systems. The soliton phenomenon was first described by John Scott Russell (1808–1882) who observed a solitary wave in the Union Canal in Scotland. He reproduced the phenomenon in a wave tank and named it the "Wave of Translation".
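For a concrete picture of a solitary wave, the single-soliton solution of the Korteweg-de Vries equation (the standard model for Russell's "Wave of Translation") can be sketched in a few lines of Python; the wave speed c used here is just an example value, and the code is my own illustration rather than anything from the quoted article.

import numpy as np

def kdv_soliton(x, t, c=1.0, x0=0.0):
    """Single-soliton solution u(x, t) of u_t + 6 u u_x + u_xxx = 0.

    The speed c sets both the height (c/2) and the width (~1/sqrt(c)):
    taller solitons are narrower and travel faster.
    """
    arg = 0.5 * np.sqrt(c) * (x - c * t - x0)
    return 0.5 * c / np.cosh(arg) ** 2

x = np.linspace(-20.0, 20.0, 401)
for t in (0.0, 5.0, 10.0):
    u = kdv_soliton(x, t, c=1.5)
    peak_position = x[np.argmax(u)]
    print("t = %4.1f  peak at x = %6.2f  height = %.3f" % (t, peak_position, u.max()))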

So, in a sense, the shock front (the horn), with respect to Gran Sasso, is the idea that such a front becomes a dispersive element in the medium of the Earth: the densities within the Earth give us a means of measuring relativistic interpretations assigned to density determinations in the Earth. Yet there are things not held to this distinction; they move on past such targets, showing that cosmological considerations are just as relevant today as the experimental avenues we set up toward identifying this relationship here on Earth.

 For more than a decade, scientists have seen evidence that the three known types of neutrinos can morph into each other. Experiments have found that muon neutrinos disappear, with some of the best measurements provided by the MINOS experiment. Scientists think that a large fraction of these muon neutrinos transform into tau neutrinos, which so far have been very hard to detect, and they suspect that a tiny fraction transform into electron neutrinos. See: Fermilab experiment weighs in on neutrino mystery

When looking out at the universe, do such perspectives not hold relevance for those not looking past the real toward the abstract? To understand the distance measure of the binary star of Taylor and Hulse, such signals need to be understood in relation to what is transmitted out into the cosmos. How are we measuring that distance? Some who are even more abstractly gifted may see the waves generated in gravitational expression. So this becomes a means with which to ask: if the binary stars are getting closer, then how is this distance measured? You see?


Measurement of the neutrino velocity with the OPERA detector in the CNGS beam





Monday, October 12, 2009

Universality Can Lead to Isostatic Adjustment


Pressure and heat melt protons and neutrons into a new state of matter, the quark-gluon plasma.


Now you must know that this entry holds a philosophical perspective; it is the mandate of the Night Light Mining Company to explore the potential of planetary and geological data gained from scientific analysis, to help the society of Earth move farther out into space, and to colonize.

Why are Planets Round?

It is always interesting to see water in space.

Image: NASA/JPL-
Planets are round because their gravitational field acts as though it originates from the center of the body and pulls everything toward it. With its large body and internal heating from radioactive elements, a planet behaves like a fluid, and over long periods of time succumbs to the gravitational pull from its center of gravity. The only way to get all the mass as close to planet's center of gravity as possible is to form a sphere. The technical name for this process is "isostatic adjustment."

With much smaller bodies, such as the 20-kilometer asteroids we have seen in recent spacecraft images, the gravitational pull is too weak to overcome the asteroid's mechanical strength. As a result, these bodies do not form spheres. Rather they maintain irregular, fragmentary shapes.
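The quoted argument (self-gravity versus mechanical strength) can be put into rough numbers. The sketch below is my own estimate, with an assumed rock density and crushing strength: it compares the central pressure of a uniform body with the strength of rock, which suggests that bodies below a few hundred kilometres in radius stay lumpy while larger ones are pulled round.

import math

G = 6.674e-11        # gravitational constant, SI
RHO = 3000.0         # assumed rock density, kg/m^3
STRENGTH = 1.0e8     # assumed crushing strength of rock, Pa (~100 MPa)

def central_pressure(radius_m):
    """Central pressure of a uniform, self-gravitating sphere."""
    return (2.0 / 3.0) * math.pi * G * RHO ** 2 * radius_m ** 2

# Radius at which the self-gravity pressure matches the material strength.
r_round = math.sqrt(3.0 * STRENGTH / (2.0 * math.pi * G * RHO ** 2))

print("central pressure of a 10 km body : %.1e Pa" % central_pressure(1.0e4))   # far below STRENGTH
print("central pressure of a 500 km body: %.1e Pa" % central_pressure(5.0e5))   # above STRENGTH
print("rough 'roundness' radius         : %.0f km" % (r_round / 1e3))           # a few hundred km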



I wanted to explore the philosophical bent first, as it sets the tone for analysis not only of the potential of planets but of what we can gain from understanding the place of values we set around ourselves.


Two-dimensional analogy of space–time distortion. Matter changes the geometry of spacetime, this (curved) geometry being interpreted as gravity. White lines do not represent the curvature of space but instead represent the coordinate system imposed on the curved spacetime, which would be rectilinear in a flat spacetime. See: Spacetime


Be it known, then, that such universality can exist in principle around this "central core": such equatorial measures are distinctive and related to the equatorial possibility of the inverse-square law. As a mathematical principle, this is brought to bear on how we solidify the substance of the elemental table, so that we can say, indeed, that such values can be assigned in "refractive light" to bodies which are built to become "round in planetary constitution."



The life cycle of a lunar impact and associated time and spatial scales. The LCROSS measurement methods are "layered" in response to the rapidly evolving impact environment. See: Impact: Lunar CRater Observation Satellite (LCROSS)



It becomes an evolutionary discourse, then, about how what began from universality "in principle" can become such a state as is evident in the framework of elemental consideration; one might say indeed that it is "this constitution" that will signify the relevance to the spacetime fabric and its settled orbit.

***


See Also:

Isostatic Adjustment is Why Planets are Round?

Centroids

Thursday, January 29, 2009

Formation of Gravity

Wegener proposed that the continents floated somewhat like icebergs in water. Wegener also noted that the continents move up and down to maintain equilibrium in a process called isostasy. (Alfred Wegener)


Just thought I would add this for consideration. The GRACE satellites do a wonderful job of discerning this feature? Amalgamating differing perspectives allows one to encapsulate a larger view on the reality of Earth. More than the sphere. More than what Joseph Campbell describes:

The Power of Myth with Bill Moyers, by Joseph Campbell; from the Introduction, Bill Moyers writes:

"Campbell was no pessimist. He believed there is a "point of wisdom beyond the conflicts of illusion and truth by which lives can be put back together again." Finding it is the "prime question of the time." In his final years he was striving for a new synthesis of science and spirit. "The shift from a geocentric to a heliocentric world view," he wrote after the astronauts touched the moon, "seemed to have removed man from the center-and the center seemed so important...


While one can indeed approximate according to the spherical cow, in terms of events in the cosmos, I was being more specific when it comes to demonstrating a geometrical feature of the sphere in terms of the geometry of the Centroid. This feature is embedded in the validation of the sphere in regard to gravity?

Image: NASA/JPL-
Planets are round because their gravitational field acts as though it originates from the center of the body and pulls everything toward it. With its large body and internal heating from radioactive elements, a planet behaves like a fluid, and over long periods of time succumbs to the gravitational pull from its center of gravity. The only way to get all the mass as close to planet's center of gravity as possible is to form a sphere. The technical name for this process is "isostatic adjustment."

With much smaller bodies, such as the 20-kilometer asteroids we have seen in recent spacecraft images, the gravitational pull is too weak to overcome the asteroid's mechanical strength. As a result, these bodies do not form spheres. Rather they maintain irregular, fragmentary shapes.


***


It was important to see how such planets form, given their masses and densities, which I thought to show so that such a valuation could be seen in relation to the variance of gravity, and so be understood.

Isostasy (Greek isos = "equal", stásis = "standstill") is a term used in geology to refer to the state of gravitational equilibrium between the earth's lithosphere and asthenosphere such that the tectonic plates "float" at an elevation which depends on their thickness and density. This concept is invoked to explain how different topographic heights can exist at the Earth's surface. When a certain area of lithosphere reaches the state of isostasy, it is said to be in isostatic equilibrium. Isostasy is not a process that upsets equilibrium, but rather one which restores it (a negative feedback). It is generally accepted that the earth is a dynamic system that responds to loads in many different ways, however isostasy provides an important 'view' of the processes that are actually happening. Nevertheless, certain areas (such as the Himalayas) are not in isostatic equilibrium, which has forced researchers to identify other reasons to explain their topographic heights (in the case of the Himalayas, by proposing that their elevation is being "propped-up" by the force of the impacting Indian plate).

In the simplest example, isostasy is the principle of buoyancy observed by Archimedes in his bath, where he saw that when an object was immersed, an amount of water equal in volume to that of the object was displaced. On a geological scale, isostasy can be observed where the Earth's strong lithosphere exerts stress on the weaker asthenosphere which, over geological time flows laterally such that the load of the lithosphere is accommodated by height adjustments.
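A minimal worked example of that buoyancy picture is Airy-type isostasy, in which a mountain of height h on crust of density rho_c floating on mantle of density rho_m is compensated by a root of depth r = h * rho_c / (rho_m - rho_c). The densities below are my own illustrative values, not anything from the quoted text.

# Airy isostasy: a topographic load of height h is compensated by a
# crustal root of depth r so that all columns carry equal mass:
#     r = h * rho_crust / (rho_mantle - rho_crust)
RHO_CRUST = 2800.0    # kg/m^3 (assumed)
RHO_MANTLE = 3300.0   # kg/m^3 (assumed)

def root_depth(height_m):
    return height_m * RHO_CRUST / (RHO_MANTLE - RHO_CRUST)

for h in (1000.0, 3000.0, 5000.0):    # example mountain heights, m
    print("height %4.0f m  ->  compensating root %5.1f km" % (h, root_depth(h) / 1e3))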


***


Such variances in strength can be attributed to the height at which this measure is taken (time clocks and such), and validation in terms of the inverse-square law helps to identify this strength and weakness, according to the nature of the mass and density of the planet.



As one of the fields which obey the general inverse square law, the gravity field can be put in the form shown below, showing that the acceleration of gravity, g, is an expression of the intensity of the gravity field.
See: Hyperphysics-Inverse Square Law-Gravity
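The form referred to above is g = GM/r^2. A quick sketch of how the field intensity falls off with distance (standard constants; the example altitudes are my own choices, not from the HyperPhysics page):

G = 6.674e-11        # m^3 kg^-1 s^-2
M_EARTH = 5.972e24   # kg
R_EARTH = 6.371e6    # m

def g_at(r_m):
    """Acceleration of gravity at distance r from Earth's centre: g = G*M/r**2."""
    return G * M_EARTH / r_m ** 2

print("g at the surface        : %.3f m/s^2" % g_at(R_EARTH))            # ~9.82
print("g at 400 km altitude    : %.3f m/s^2" % g_at(R_EARTH + 4.0e5))    # ~8.7
print("g at one Earth radius up: %.3f m/s^2" % g_at(2 * R_EARTH))        # 1/4 of the surface value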

***


It is important, then, that the measure of the energy needed to overcome the pull of the Earth is assigned its energy value, so that such calculations are validated in the escape velocity. There are other ways to measure spots in space when holding a bulk view of reality in regard to gravity concentrations and their locations.

See: Hyperphysics-Gravity-Escape Velocity
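For completeness, the escape velocity follows from setting the kinetic energy equal to the gravitational potential energy, v_esc = sqrt(2GM/r). A quick check with standard values (the example bodies are my own choices, not from the HyperPhysics page):

import math

G = 6.674e-11   # m^3 kg^-1 s^-2

def escape_velocity(mass_kg, radius_m):
    """Speed needed to just overcome a body's gravitational pull: sqrt(2GM/r)."""
    return math.sqrt(2.0 * G * mass_kg / radius_m)

print("Earth: %.2f km/s" % (escape_velocity(5.972e24, 6.371e6) / 1e3))   # ~11.2
print("Moon : %.2f km/s" % (escape_velocity(7.35e22, 1.737e6) / 1e3))    # ~2.4
print("Sun  : %.0f km/s" % (escape_velocity(1.989e30, 6.957e8) / 1e3))   # ~618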

***


See Also:
  • Isostatic Adjustment is Why Planets are Round?
  • Concepts of the Fifth Dimension
  • Dealing With a 5D World
  • Wednesday, January 23, 2008

    Ueber die Hypothesen, welche der Geometrie zu Grunde liegen.

    As I ponder the very basis of my thoughts about geometry, based on the very fabric of our thinking minds, it has always been a reductionist one in my mind: that the truth of reality would be a geometrical one.



    The emergence of Maxwell's equations had to be included in the development of GR? Any Gaussian interpretation was necessary, so that the UV coordinates were well understood from that perspective as well. This would be inclusive in the approach to the developments of GR. As a hobbyist myself of the history of science, along with the developments of today, I might seem less than adequate in the adventure, but I persevere.




    On the Hypotheses which lie at the Bases of Geometry.
    Bernhard Riemann
    Translated by William Kingdon Clifford

    [Nature, Vol. VIII. Nos. 183, 184, pp. 14--17, 36, 37.]

    It is known that geometry assumes, as things given, both the notion of space and the first principles of constructions in space. She gives definitions of them which are merely nominal, while the true determinations appear in the form of axioms. The relation of these assumptions remains consequently in darkness; we neither perceive whether and how far their connection is necessary, nor a priori, whether it is possible.

    From Euclid to Legendre (to name the most famous of modern reforming geometers) this darkness was cleared up neither by mathematicians nor by such philosophers as concerned themselves with it. The reason of this is doubtless that the general notion of multiply extended magnitudes (in which space-magnitudes are included) remained entirely unworked. I have in the first place, therefore, set myself the task of constructing the notion of a multiply extended magnitude out of general notions of magnitude. It will follow from this that a multiply extended magnitude is capable of different measure-relations, and consequently that space is only a particular case of a triply extended magnitude. But hence flows as a necessary consequence that the propositions of geometry cannot be derived from general notions of magnitude, but that the properties which distinguish space from other conceivable triply extended magnitudes are only to be deduced from experience. Thus arises the problem, to discover the simplest matters of fact from which the measure-relations of space may be determined; a problem which from the nature of the case is not completely determinate, since there may be several systems of matters of fact which suffice to determine the measure-relations of space - the most important system for our present purpose being that which Euclid has laid down as a foundation. These matters of fact are - like all matters of fact - not necessary, but only of empirical certainty; they are hypotheses. We may therefore investigate their probability, which within the limits of observation is of course very great, and inquire about the justice of their extension beyond the limits of observation, on the side both of the infinitely great and of the infinitely small.



    For me the education comes, when I myself am lured by interest into a history spoken to by Stefan and Bee of Backreaction. The "way of thought" that preceded the advent of General Relativity.


    Einstein urged astronomers to measure the effect of gravity on starlight, as in this 1913 letter to the American G.E. Hale. They could not respond until the First World War ended.

    Translation of a letter from Einstein to the American G.E. Hale, by Stefan of BACKREACTION

    Zurich, 14 October 1913

    Highly esteemed colleague,

    a simple theoretical consideration makes it plausible to assume that light rays will experience a deviation in a gravitational field.

    [Grav. field] [Light ray]

    At the rim of the Sun, this deflection should amount to 0.84" and decrease as 1/R (R = Sonnenradius [struck out] distance from the centre of the Sun).

    [Earth] [Sun]

    Thus, it would be of utter interest to know up to which proximity to the Sun bright fixed stars can be seen using the strongest magnification in plain daylight (without eclipse).
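    The 0.84" in the letter is the early, pre-general-relativity prediction, delta = 2GM/(c^2 R) (the slight difference from the modern number comes from the solar constants Einstein used at the time); the completed 1915 theory doubles it to about 1.75" at the solar limb. A small sketch of both values with standard solar constants (my own check, not part of the letter):

import math

G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
M_SUN = 1.989e30     # kg
R_SUN = 6.957e8      # m
RAD_TO_ARCSEC = 180.0 / math.pi * 3600.0

# Deflection of a light ray grazing the Sun's rim.
pre_gr  = 2.0 * G * M_SUN / (C ** 2 * R_SUN)    # 1911-style value, as in the letter
full_gr = 4.0 * G * M_SUN / (C ** 2 * R_SUN)    # full general-relativity value (1915)

print("pre-GR deflection : %.2f arcsec" % (pre_gr * RAD_TO_ARCSEC))    # ~0.88
print("full GR deflection: %.2f arcsec" % (full_gr * RAD_TO_ARCSEC))   # ~1.75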


    Fast Forward to an Effect

    Bending light around a massive object from a distant source. The orange arrows show the apparent position of the background source. The white arrows show the path of the light from the true position of the source.

    The fact that this does not happen when gravitational lensing applies is due to the distinction between the straight lines imagined by Euclidean intuition and the geodesics of space-time. In fact, just as distances and lengths in special relativity can be defined in terms of the motion of electromagnetic radiation in a vacuum, so can the notion of a straight geodesic in general relativity.



    To me, gravitational lensing is a cumulative affair: such a geometry, borne in mind, could have passed beyond the postulates of Euclid and found its way to leaving an "indelible impression" that the resources of the mind, in a simple system, intuit.

    Einstein, in the paragraph below makes this clear as he ponders his relationship with Newton and the move to thinking about Poincaré.

    The move to non-Euclidean geometries assumes that where Euclid leaves off, the basis of spacetime begins. So a statement such as "where there is no gravitational field, the spacetime is flat" should be followed by a Euclidean, physical constant of a straight line = c?

    Einstein:

    I attach special importance to the view of geometry which I have just set forth, because without it I should have been unable to formulate the theory of relativity. ... In a system of reference rotating relatively to an inert system, the laws of disposition of rigid bodies do not correspond to the rules of Euclidean geometry on account of the Lorentz contraction; thus if we admit non-inert systems we must abandon Euclidean geometry. ... If we deny the relation between the body of axiomatic Euclidean geometry and the practically-rigid body of reality, we readily arrive at the following view, which was entertained by that acute and profound thinker, H. Poincare:--Euclidean geometry is distinguished above all other imaginable axiomatic geometries by its simplicity. Now since axiomatic geometry by itself contains no assertions as to the reality which can be experienced, but can do so only in combination with physical laws, it should be possible and reasonable ... to retain Euclidean geometry. For if contradictions between theory and experience manifest themselves, we should rather decide to change physical laws than to change axiomatic Euclidean geometry. If we deny the relation between the practically-rigid body and geometry, we shall indeed not easily free ourselves from the convention that Euclidean geometry is to be retained as the simplest. (33-4)


    It is never easy for me to see how I could have moved from what was Euclid's postulates, to have graduated to my "sense of things" to have adopted this, "new way of seeing" that is also accumulative to the inclusion of gravity as a concept relevant to all aspects of the way in which one can see reality.

    See:

  • On the Hypotheses at the Foundations of Geometry

  • Gravity and Electromagnetism?

  • "The Confrontation between General Relativity and Experiment" by Clifford M. Will
  • Tuesday, December 11, 2007

    The Other Side of the Coin

    Susan Holmes- Statistician Persi Diaconis' mechanical coin flipper.

    In football's inaugural kickoff coin toss, the coin is not caught but allowed to bounce on the ground. That introduces an extra complication, one mathematicians have yet to sort out.




    Persi Diaconis See here.

    The Ground State

    There is always an "inverse order to gravity" that helps one see in ways we are not accustomed to. The methods of "prospective measurements" in science have taken a radical turn? Satellites, as a measure, have focused our views.



    While one may now look at the "sun in a different way" it had to first display itself across the "neutrino Sudbury screen" before we knew to picture the sun now in the way we do. It was progressive, in the way the sun now forms a picture of what we now know in measure.

    So you try and bring it all together under this "new way of seeing" and hopefully your account of "the way reality is," is shared by others who now understand what the heck I am doing?

    To get a simple physical understanding of what the acoustic oscillations are, it may be helpful to change the perspective. Normally, the common way of presenting the phenomenon has been in terms of standing waves where the analysis is done in Fourier space. But the baryon-photon fluid really is just carrying sound waves, and the dispersion relation is even pretty linear. So let’s instead think of things in terms of traveling waves in real space. (Source: http://72.14.253.104/search?q=cache:xLcnPGO6BDQJ:cmb.as.arizona.edu/~eisenste/acousticpeak/spherical_acoustic.ps+Fourier+space+when+I%27m+thinking+about+sound.&hl=en&ct=clnk&cd=1&gl=ca - Steward Observatory, University of Arizona, c. 2005)


    "Uncertainty" has this way of rearing it's head once we reduce our perspective to the microscopic principals(sand), yet, on the other side of the coin, how is it that only 5% of mass determination allows us to see the universe mapped in the way it has in regards to the CMB?

    There is this "entropic valuation" and with it, temperature. Some do not like the porridge "to hot or to cold," with regards to "living in a place" within the universe.

    So I'll repeat the blog comment entry here in this blog so one can gather some of what I mean.

    At 2:56 AM, December 11, 2007, Plato said...
    As a lay person with regards to the complexity of the language(sound)and universe, it is sometimes reduced to "seeing in ways that are much easier to deal with," although of course, it may not be the same for everyone?:)

    :)Something good science people "do not want to hear?"

    Good link in html.

    The launching of the sound waves is very similar to dropping a rock in a pond and seeing the circular wave come off (obviously that a gravity wave, not a compressional wave, but I’m focusing on the geometry). The difference here is that the area where the “rock” entered is still the most likely region to form galaxies; the spherical shell that it produced is only carrying 5% of the mass.

    Hopefully, this demystifies the effect: we’re seeing the imprint of spherical sound waves launched from the sites of dark matter overdensities in the early universe. But also I hope it makes it more clear as to why this effect is so robust: the propagation of sound in the baryon-photon plasma is very simple, and all we’re doing is measuring how far it got.
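    To attach a number to "how far the sound got," here is a rough sketch of the comoving sound horizon at recombination. This is my own simplified calculation with assumed, Planck-like parameters (the quoted page does it properly): the photon-baryon sound speed is c_s = c/sqrt(3(1+R)) with R = 3 rho_b / (4 rho_gamma), and the horizon is the integral of c_s/H from recombination outward, which lands near the ~150 Mpc scale seen in the galaxy correlation function.

import numpy as np
from scipy.integrate import quad

C_KM_S = 2.998e5          # km/s
H0 = 67.7                 # km/s/Mpc (assumed)
OMEGA_M = 0.31            # matter density (assumed)
OMEGA_B = 0.049           # baryon density (assumed)
OMEGA_G = 5.4e-5          # photon density (assumed)
OMEGA_R = 9.1e-5          # photons + neutrinos (assumed)
Z_REC = 1090.0            # redshift of recombination

def hubble(z):
    """H(z) in km/s/Mpc for a flat universe."""
    omega_l = 1.0 - OMEGA_M - OMEGA_R
    return H0 * np.sqrt(OMEGA_R * (1 + z) ** 4 + OMEGA_M * (1 + z) ** 3 + omega_l)

def sound_speed(z):
    """Baryon-photon sound speed in km/s: c / sqrt(3 * (1 + R))."""
    R = 0.75 * (OMEGA_B / OMEGA_G) / (1.0 + z)
    return C_KM_S / np.sqrt(3.0 * (1.0 + R))

# Comoving sound horizon: integral of c_s(z)/H(z) from recombination to infinity.
r_s, _ = quad(lambda z: sound_speed(z) / hubble(z), Z_REC, np.inf, limit=200)
print("comoving sound horizon ~ %.0f Mpc" % r_s)    # roughly 145 Mpc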


    "Mapping," had to begin somewhere. Whatever that may mean,one may think of Mendeleeev or Newlands.

    Generally Grouping Order increases the density of objects within a frame of reference, resulting in a more pronounced single object.


    "Sand with pebbles" on a beach? It had to arise from someplace?

    The other side of the Coin is?

    This recording was produced by converting into audible sounds some of the radar echoes received by Huygens during the last few kilometres of its descent onto Titan. As the probe approaches the ground, both the pitch and intensity increase. Scientists will use intensity of the echoes to speculate about the nature of the surface.


    and not to be undone.

    Mass results in an increase in the gravitational force exerted by an object. Density fluctuations on the surface of the Earth and in the underlying mantle are thus reflected in variations in the gravity field. As the twin GRACE satellites orbit the Earth together, these gravity field variations cause infinitesimal changes in the distance between the two. These changes will be measured with unprecedented accuracy by the instruments aboard GRACE, leading to a more precise rendering of the gravitational field than has ever been possible to date.


    Layman pondering.


    So now you have this "comprehensive view" I have gained on the way I am seeing the universe. You can "now see" how diverse the application of sound in analogy is. It is helping me to develop the "Colour of Gravity" as an artistic endeavour. I refrain from calling it "scientific" lest I be labelled a crackpot.

    A Synesthetic View on Life.

    Who knows how I can put these things together and come up with what I do. Yet it had not gone unnoticed that such concepts could merge into one another and come out with some tangible result as an "artistic effort." Some may be used to the paintings of Kandinsky (abstract), yet the plethora of imaging that unfolds in the conceptual framework might have been self-evident, even from such a chaotic mess as the layman's view here?

    Tuesday, December 04, 2007

    Descriptive geometry

    At this point in the development, although geometry provided a common framework for all the forces, there was still no way to complete the unification by combining quantum theory and general relativity. Since quantum theory deals with the very small and general relativity with the very large, many physicists feel that, for all practical purposes, there is no need to attempt such an ultimate unification. Others however disagree, arguing that physicists should never give up on this ultimate search, and for these the hunt for this final unification is the ‘holy grail’. Michael Atiyah


    The search for this "cup that overflow" is at the heart of all who venture for the lifeblood of the mystery of life. While Atiyah speaks to a unification of Quantum theory and Relativity, it is not without a understanding on Einstein's part that having gained from Marcel Grossmann, that such a descriptive geometry could be leading Einstein to discover the very basis of General relativity?

    Marcel Grossmann was a mathematician, and a friend and classmate of Albert Einstein. He became a Professor of Mathematics at the Federal Polytechnic Institute in Zurich, today the ETH Zurich, specialising in descriptive geometry.


    So what use "this history" in face of the unification of the very large with the very small? How far back should one go to know that the steps previous were helping to shape perspective for the future. Allow for perspective to be changed, so that new avenues of research could spring forth

    Gaspard Monge, Comte de Péluse. Portrait by Naigeon in the Musée de Beaune. Born 9 May 1746 in Beaune, Bourgogne, France; died 28 July 1818 in Paris, France. He was a French mathematician and the inventor of descriptive geometry.


    Monge contributed (1770–1790) to the Memoirs of the Academy of Turin, the Mémoires des savantes étrangers of the Academy of Paris, the Mémoires of the same Academy, and the Annales de chimie, various mathematical and physical papers. Among these may be noticed the memoir "Sur la théorie des déblais et des remblais" (Mém. de l’acad. de Paris, 1781), which, while giving a remarkably elegant investigation in regard to the problem of earth-work referred to in the title, establishes in connection with it his capital discovery of the curves of curvature of a surface. Leonhard Euler, in his paper on curvature in the Berlin Memoirs for 1760, had considered, not the normals of the surface, but the normals of the plane sections through a particular normal, so that the question of the intersection of successive normals of the surface had never presented itself to him. Monge's memoir just referred to gives the ordinary differential equation of the curves of curvature, and establishes the general theory in a very satisfactory manner; but the application to the interesting particular case of the ellipsoid was first made by him in a later paper in 1795. (Monge's 1781 memoir is also the earliest known anticipation of Linear Programming type of problems, in particular of the transportation problem. Related to that, the Monge soil-transport problem leads to a weak-topology definition of a distance between distributions rediscovered many times since by such as L. V. Kantorovich, P. Levy, L. N. Wasserstein, and a number of others; and bearing their names in various combinations in various contexts.) A memoir in the volume for 1783 relates to the production of water by the combustion of hydrogen; but Monge's results had been anticipated by Henry Cavendish.


    Descriptive geometry

    Example of four different 2D representations of the same 3D object

    Descriptive geometry is the branch of geometry which allows the representation of three-dimensional objects in two dimensions, by using a specific set of procedures. The resulting techniques are important for engineering, architecture, design and in art. [1] The theoretical basis for descriptive geometry is provided by planar geometric projections. Gaspard Monge is usually considered the "father of descriptive geometry". He first developed his techniques to solve geometric problems in 1765 while working as a draftsman for military fortifications, and later published his findings. [2]

    Monge's protocols allow an imaginary object to be drawn in such a way that it may be 3-D modeled. All geometric aspects of the imaginary object are accounted for in true size/to-scale and shape, and can be imaged as seen from any position in space. All images are represented on a two-dimensional drawing surface.

    Descriptive geometry uses the image-creating technique of imaginary, parallel projectors emanating from an imaginary object and intersecting an imaginary plane of projection at right angles. The cumulative points of intersections create the desired image.
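    Monge's parallel projectors meeting the plane of projection at right angles are simply orthographic projections. As a tiny illustration (my own example object, not from the article), here are the three standard views of a set of 3D points, obtained by dropping one coordinate at a time:

# Orthographic (Monge-style) projections of a 3D object onto three
# mutually perpendicular planes: top (plan), front and side views.
points = [                      # assumed example object: corners of a small wedge
    (0, 0, 0), (4, 0, 0), (4, 2, 0), (0, 2, 0),
    (0, 0, 1), (4, 0, 3),
]

def project(p, drop_axis):
    """Orthographic projection: drop one coordinate (a parallel projector at right angles)."""
    return tuple(c for i, c in enumerate(p) if i != drop_axis)

views = {"top (drop z)": 2, "front (drop y)": 1, "side (drop x)": 0}
for name, axis in views.items():
    print(name.ljust(15), [project(p, axis) for p in points])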


    So, given the tools, we learnt to see how objects within a referenced space, given such coordinates, have been defined in that same space. Where is this point within that reference frame?

    What is born within that point, such that through it there is an emergent product? It becomes a thing of expression from nothing? Its design and all, manifested as an entropic valuation of the cooling period? Crystalline shapes born by design, and by element, from whence its motivation comes? An arrow of time?

    Monday, November 12, 2007

    Where Spacetime is flat?

    ......A Condensative Result exists. Where "energy concentrates" and expresses outward.

    I mean if I were to put on my eyeglasses, and these glasses were given to a way of seeing this universe, why not look at the whole universe bathed in such spacetime fabric?

    This is an opportunity to get "two birds" with one stone?

    I was thinking of Garrett's E8 Theory article and Stefan's here.

    On March 31, 2006 the high-resolution gravity field model EIGEN-GL04C has been released. This model is a combination of GRACE and LAGEOS mission plus 0.5 x 0.5 degrees gravimetry and altimetry surface data and is complete to degree and order 360 in terms of spherical harmonic coefficients.

    High-resolution combination gravity models are essential for all applications where a precise knowledge of the static gravity potential and its gradients is needed in the medium and short wavelength spectrum. Typical examples are precise orbit determination of geodetic and altimeter satellites or the study of the Earth's crust and mantle mass distribution.

    But, various geodetic and altimeter applications request also a pure satellite-only gravity model. As an example, the ocean dynamic topography and the derived geostrophic surface currents, both derived from altimeter measurements and an oceanic geoid, would be strongly correlated with the mean sea surface height model used to derive terrestrial gravity data for the combination model.

    Therefore, the satellite-only part of EIGEN-GL04C is provided here as EIGEN-GL04S1. The contributing GRACE and Lageos data are already described in the EIGEN-GL04C description. The satellite-only model has been derived from EIGEN-GL04C by reduction of the terrestrial normal equation system and is complete up to degree and order 150.
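    To give a feel for what the "spherical harmonic coefficients" of a gravity model mean, here is a sketch that keeps only the largest one, J2 (Earth's oblateness term), and shows how gravity differs between pole and equator; the J2 value is the standard one, everything else is my own simplified illustration (a model such as EIGEN-GL04C carries coefficients to degree and order 360).

import math

GM = 3.986e14          # Earth's GM, m^3/s^2
A = 6.3781e6           # equatorial radius, m
R_POLE = 6.3568e6      # polar radius, m
J2 = 1.0826e-3         # degree-2 zonal coefficient, the dominant harmonic
OMEGA = 7.292e-5       # Earth's rotation rate, rad/s

def g_radial(r, sin_lat):
    """Radial gravity from the monopole plus the J2 term of the geopotential."""
    p2 = 0.5 * (3.0 * sin_lat ** 2 - 1.0)               # Legendre P2(sin latitude)
    return GM / r ** 2 * (1.0 - 3.0 * J2 * (A / r) ** 2 * p2)

g_pole = g_radial(R_POLE, 1.0)
g_equator = g_radial(A, 0.0) - OMEGA ** 2 * A           # subtract the centrifugal term
print("gravity at the pole   : %.3f m/s^2" % g_pole)       # ~9.83
print("gravity at the equator: %.3f m/s^2" % g_equator)    # ~9.78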


    How many really understand/see the production of gravitational waves in regards to Taylor and Hulse?

    To see Stefan's correlation in terms of "wave production" is a dynamical quality to what is still being experimentally looked for by LIGO?

    As scientists, do you know this?

    6:41 AM, November 11, 2007
    See here

    Thus the binary pulsar PSR1913+16 provides a powerful test of the predictions of the behavior of time perceived by a distant observer according to Einstein's Theory of Relativity.


    Since we know the theory of Relativity is about Gravity, then how is it the applications can be extended to the way we see "anew" in our world?

    A sphere, our earth, not so round anymore.

    Uncle has tried to correct me on "isostatic adjustment."

    Derek Sears, professor of cosmochemistry at the University of Arkansas, explains. See here

    Planets are round because their gravitational field acts as though it originates from the center of the body and pulls everything toward it. With its large body and internal heating from radioactive elements, a planet behaves like a fluid, and over long periods of time succumbs to the gravitational pull from its center of gravity. The only way to get all the mass as close to planet's center of gravity as possible is to form a sphere. The technical name for this process is "isostatic adjustment."

    With much smaller bodies, such as the 20-kilometer asteroids we have seen in recent spacecraft images, the gravitational pull is too weak to overcome the asteroid's mechanical strength. As a result, these bodies do not form spheres. Rather they maintain irregular, fragmentary shapes. K. Shumacker, Scientific American


    Do not have time to follow up at this moment.

    7:02 AM, November 11, 2007
    .....and here.


    In context of the post and differences, I may not have pointed to the substance of the post, yet I would have dealt with my problem in seeing.

    In general terms, gravitational waves are radiated by objects whose motion involves acceleration, provided that the motion is not perfectly spherically symmetric (like a spinning, expanding or contracting sphere) or cylindrically symmetric (like a spinning disk).

    A simple example is the spinning dumbbell. Set upon one end, so that one side of the dumbbell is on the ground and the other end is pointing up, the dumbbell will not radiate when it spins around its vertical axis but will radiate if it tumbles end-over-end. The heavier the dumbbell, and the faster it tumbles, the greater is the gravitational radiation it will give off. If we imagine an extreme case in which the two weights of the dumbbell are massive stars like neutron stars or black holes, orbiting each other quickly, then significant amounts of gravitational radiation would be given off.
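    A rough sketch of "the heavier and the faster, the greater the radiation" (my own numbers, using the standard quadrupole-formula result for two equal point masses in a circular mutual orbit; the tumbling dumbbell obeys the same scaling): the radiated power grows with the square of the masses and the sixth power of the rotation rate.

import math

G = 6.674e-11            # m^3 kg^-1 s^-2
C = 2.998e8              # m/s
M_SUN = 1.989e30         # kg

def quadrupole_power(m_each, separation):
    """Gravitational-wave power from two equal masses in a circular orbit.

    Standard quadrupole-formula result P = (32/5) G mu^2 a^4 w^6 / c^5,
    with the orbital frequency w fixed here by Kepler's law.
    """
    mu = m_each / 2.0                                        # reduced mass
    omega = math.sqrt(G * 2.0 * m_each / separation ** 3)    # orbital angular frequency
    return 32.0 / 5.0 * G * mu ** 2 * separation ** 4 * omega ** 6 / C ** 5

# Assumed example: two 1.4-solar-mass neutron stars, 100,000 km apart.
print("radiated power: %.1e W" % quadrupole_power(1.4 * M_SUN, 1.0e8))   # ~2e30 W
# Doubling the rotation rate alone would raise the power by 2**6 = 64.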


    Given the context of the "whole universe" what is actually pervading, if one did not include gravity?



    So singularities are pointing to the beginning, yet we do not know if we should just say "the Big Bang," because one would have had to calculate the energy used and where it came from "previous" to manifesting?

    So some will have this philosophical position about "nothing(?)," and "everything as already existing."

    Wherever there are no gravitational waves, the spacetime is flat. One would have to define these two variances: one from understanding the relation to "radiation," and the other from "perfectly spherically symmetric."