
Monday, May 21, 2012

Digital Physics

In physics and cosmology, digital physics is a collection of theoretical perspectives based on the premise that the universe is, at heart, describable by information and is therefore computable. It follows that the universe can be conceived of either as the output of a computer program or as a vast digital computation device (or at least as mathematically isomorphic to such a device).

Digital physics is grounded in one or more of the following hypotheses, listed in order of increasing strength. The universe, or reality:

  • is essentially informational (although not every informational ontology need be digital);
  • is essentially computable;
  • can be described digitally;
  • is in essence digital;
  • is itself a computer;
  • is the output of a simulated-reality exercise.

History

 

Every computer must be compatible with the principles of information theory, statistical thermodynamics, and quantum mechanics. A fundamental link among these fields was proposed by Edwin Jaynes in two seminal 1957 papers.[1] Moreover, Jaynes elaborated an interpretation of probability theory as generalized Aristotelian logic, a view very convenient for linking fundamental physics with digital computers, because these are designed to implement the operations of classical logic and, equivalently, of Boolean algebra.[2]
The hypothesis that the universe is a digital computer was pioneered by Konrad Zuse in his book Rechnender Raum (translated into English as Calculating Space). The term digital physics was first employed by Edward Fredkin, who later came to prefer the term digital philosophy.[3] Others who have modeled the universe as a giant computer include Stephen Wolfram,[4] Jürgen Schmidhuber,[5] and Nobel laureate Gerard 't Hooft.[6] These authors hold that the apparently probabilistic nature of quantum physics is not necessarily incompatible with the notion of computability. Quantum versions of digital physics have recently been proposed by Seth Lloyd,[7] David Deutsch, and Paola Zizzi.[8]

Related ideas include Carl Friedrich von Weizsäcker's binary theory of ur-alternatives, pancomputationalism, computational universe theory, John Archibald Wheeler's "It from bit", and Max Tegmark's ultimate ensemble.

Overview

 

Digital physics suggests that there exists, at least in principle, a program for a universal computer which computes the evolution of the universe. The computer could be, for example, a huge cellular automaton (Zuse 1967[9]), or a universal Turing machine, as suggested by Schmidhuber (1997), who pointed out that there exists a very short program that can compute all possible computable universes in an asymptotically optimal way.

Some try to identify single physical particles with simple bits. For example, if one particle, such as an electron, switches from one quantum state to another, this may be treated as a bit changing from one value (say, 0) to the other (1). A single bit suffices to describe a single quantum switch of a given particle. As the universe appears to be composed of elementary particles whose behavior can be completely described by the quantum switches they undergo, it follows that the universe as a whole can be described by bits. Every state is information, and every change of state is a change in information (requiring the manipulation of one or more bits). Setting aside dark matter and dark energy, which are poorly understood at present, the known universe consists of about 10^80 protons and the same number of electrons. Hence, the universe could be simulated by a computer capable of storing and manipulating about 10^90 bits. If such a simulation is indeed the case, then hypercomputation would be impossible.
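As a sanity check on this arithmetic, here is a minimal back-of-the-envelope sketch in Python; the particle counts and the bit budget are simply the order-of-magnitude assumptions quoted above, and they imply a budget of roughly 5 × 10^9 bits per particle.

```python
# Back-of-the-envelope version of the storage estimate in the text.
# All quantities are order-of-magnitude assumptions, not measurements.

protons = 10**80           # rough count of protons in the known universe
electrons = 10**80         # roughly the same number of electrons
particles = protons + electrons

total_bits = 10**90        # bit budget suggested in the text

print(f"particles:         {particles:.1e}")
print(f"bits per particle: {total_bits // particles:.1e}")   # ~5.0e+09
```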

Loop quantum gravity could lend support to digital physics, in that it assumes space-time is quantized. Paola Zizzi has formulated a realization of this concept in what has come to be called "computational loop quantum gravity", or CLQG.[10][11] Other theories that combine aspects of digital physics with loop quantum gravity are those of Marzuoli and Rasetti[12][13] and Girelli and Livine.[14]

Weizsäcker's ur-alternatives

 

Physicist Carl Friedrich von Weizsäcker's theory of ur-alternatives (archetypal objects), first publicized in his book The Unity of Nature (1980),[15] further developed through the 1990s,[16][17] is a kind of digital physics as it axiomatically constructs quantum physics from the distinction between empirically observable, binary alternatives. Weizsäcker used his theory to derive the 3-dimensionality of space and to estimate the entropy of a proton falling into a black hole.

Pancomputationalism or the computational universe theory

 

Pancomputationalism (also known as pan-computationalism, naturalist computationalism) is a view that the universe is a huge computational machine, or rather a network of computational processes which, following fundamental physical laws, computes (dynamically develops) its own next state from the current one.[18]
A computational universe is proposed by Jürgen Schmidhuber in a paper based on Konrad Zuse's assumption (1967) that the history of the universe is computable. He pointed out that the simplest explanation of the universe would be a very simple Turing machine programmed to systematically execute all possible programs computing all possible histories for all types of computable physical laws. He also pointed out that there is an optimally efficient way of computing all computable universes based on Leonid Levin's universal search algorithm (1973). In 2000 he expanded this work by combining Ray Solomonoff's theory of inductive inference with the assumption that quickly computable universes are more likely than others. This work on digital physics also led to limit-computable generalizations of algorithmic information or Kolmogorov complexity and the concept of Super Omegas, which are limit-computable numbers that are even more random (in a certain sense) than Gregory Chaitin's number of wisdom Omega.
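The "systematically execute all possible programs" scheme can be illustrated with a toy dovetailer: in round k it spawns program k and then advances every program spawned so far by one step, so each program receives unboundedly many steps even though infinitely many programs are pending. This is only a sketch of the scheduling idea, not Schmidhuber's actual construction; Collatz trajectories stand in for arbitrary programs because, like real programs, they run for unpredictably many steps.

```python
# Toy dovetailer over an infinite family of "programs".

def program(k):
    """Program k: the Collatz trajectory starting at k + 1,
    yielding one value per step until it reaches 1 ("halts")."""
    n = k + 1
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        yield n

def dovetail(rounds):
    live = []
    for k in range(rounds):
        live.append(program(k))          # spawn program k this round
        for p in live[:]:                # one step for every live program
            if next(p, None) is None:
                live.remove(p)           # program halted; retire it

dovetail(100)                            # interleaves the first 100 programs
```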

Wheeler's "it from bit"

 

Following Jaynes and Weizsäcker, the physicist John Archibald Wheeler wrote the following:

[...] it is not unreasonable to imagine that information sits at the core of physics, just as it sits at the core of a computer. (John Archibald Wheeler 1998: 340)

It from bit. Otherwise put, every 'it'—every particle, every field of force, even the space-time continuum itself—derives its function, its meaning, its very existence entirely—even if in some contexts indirectly—from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. 'It from bit' symbolizes the idea that every item of the physical world has at bottom—a very deep bottom, in most instances—an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes–no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe. (John Archibald Wheeler 1990: 5)

David Chalmers of the Australian National University summarised Wheeler's views as follows:

Wheeler (1990) has suggested that information is fundamental to the physics of the universe. According to this 'it from bit' doctrine, the laws of physics can be cast in terms of information, postulating different states that give rise to different effects without actually saying what those states are. It is only their position in an information space that counts. If so, then information is a natural candidate to also play a role in a fundamental theory of consciousness. We are led to a conception of the world on which information is truly fundamental, and on which it has two basic aspects, corresponding to the physical and the phenomenal features of the world.[19]

Chris Langan also builds upon Wheeler's views in his epistemological metatheory:

The Future of Reality Theory According to John Wheeler: In 1979, the celebrated physicist John Wheeler, having coined the phrase “black hole”, put it to good philosophical use in the title of an exploratory paper, Beyond the Black Hole, in which he describes the universe as a self-excited circuit. The paper includes an illustration in which one side of an uppercase U, ostensibly standing for Universe, is endowed with a large and rather intelligent-looking eye intently regarding the other side, which it ostensibly acquires through observation as sensory information. By dint of placement, the eye stands for the sensory or cognitive aspect of reality, perhaps even a human spectator within the universe, while the eye’s perceptual target represents the informational aspect of reality. By virtue of these complementary aspects, it seems that the universe can in some sense, but not necessarily that of common usage, be described as “conscious” and “introspective”…perhaps even “infocognitive”.[20]

The first formal presentation of the idea that information might be the fundamental quantity at the core of physics seems to be due to Frederick W. Kantor (a physicist from Columbia University). Kantor's book Information Mechanics (Wiley-Interscience, 1977) developed this idea in detail, but without mathematical rigor.

The toughest nut to crack in Wheeler's research program of a digital dissolution of physical being in a unified physics, Wheeler himself says, is time. In a 1986 eulogy to the mathematician Hermann Weyl, he proclaimed: "Time, among all concepts in the world of physics, puts up the greatest resistance to being dethroned from ideal continuum to the world of the discrete, of information, of bits. ... Of all obstacles to a thoroughly penetrating account of existence, none looms up more dismayingly than 'time.' Explain time? Not without explaining existence. Explain existence? Not without explaining time. To uncover the deep and hidden connection between time and existence ... is a task for the future."[21] The Australian phenomenologist Michael Eldred comments:

The antinomy of the continuum, time, in connection with the question of being ... is said by Wheeler to be a cause for dismay which challenges future quantum physics, fired as it is by a will to power over moving reality, to "achieve four victories" (ibid.)... And so we return to the challenge to "[u]nderstand the quantum as based on an utterly simple and—when we see it—completely obvious idea" (ibid.) from which the continuum of time could be derived. Only thus could the will to mathematically calculable power over the dynamics, i.e. the movement in time, of beings as a whole be satisfied.[22][23]

Digital vs. informational physics

 

Not every informational approach to physics (or ontology) is necessarily digital. According to Luciano Floridi,[24] "informational structural realism" is a variant of structural realism that supports an ontological commitment to a world consisting of the totality of informational objects dynamically interacting with each other. Such informational objects are to be understood as constraining affordances.

Digital ontology and pancomputationalism are also independent positions. In particular, John Wheeler advocated the former but was silent about the latter; see the quote in the preceding section.
On the other hand, pancomputationalists like Lloyd (2006), who models the universe as a quantum computer, can still maintain an analogue or hybrid ontology; and informational ontologists like Sayre and Floridi embrace neither a digital ontology nor a pancomputationalist position.[25]

Computational foundations

 

Turing machines

 

Theoretical computer science is founded on the Turing machine, an imaginary computing machine first described by Alan Turing in 1936. While the machine is mechanically simple, the Church–Turing thesis implies that it can solve any "reasonable" problem. (In theoretical computer science, a problem is considered "solvable" if it can be solved in principle, namely in finite time, which is not necessarily a finite time that is of any value to humans.) A Turing machine therefore sets the practical "upper bound" on computational power, apart from the possibilities afforded by hypothetical hypercomputers.
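For concreteness, a Turing machine is small enough to sketch in a few lines. The simulator below is a minimal illustration, not drawn from any source cited here; the example transition table encodes the classic 2-state busy beaver, which writes four 1s and halts after six steps.

```python
# Minimal Turing machine: sparse tape as a dict, blank symbol 0.
# delta maps (state, symbol) -> (symbol_to_write, move, next_state).

def run(delta, state="A", halt="H", max_steps=10_000):
    tape, head = {}, 0
    for step in range(max_steps):
        if state == halt:
            return tape, step
        write, move, state = delta[(state, tape.get(head, 0))]
        tape[head] = write
        head += 1 if move == "R" else -1
    raise RuntimeError("no halt within step budget")

# Example program: the 2-state busy beaver.
delta = {
    ("A", 0): (1, "R", "B"), ("A", 1): (1, "L", "B"),
    ("B", 0): (1, "L", "A"), ("B", 1): (1, "R", "H"),
}
tape, steps = run(delta)
print(sum(tape.values()), "ones written;", steps, "steps")   # 4 ones; 6 steps
```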

Wolfram's principle of computational equivalence powerfully motivates the digital approach. This principle, if correct, means that everything can be computed by one essentially simple machine, the realization of a cellular automaton. This is one way of fulfilling a traditional goal of physics: finding simple laws and mechanisms for all of nature.
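The "essentially simple machine" can be startlingly small. Rule 110, sketched below, is a one-dimensional, two-state, nearest-neighbour cellular automaton that has been proved Turing-complete; the width and step count here are arbitrary demo values.

```python
# Rule 110: the new value of a cell is the bit of 110 indexed by the
# 3-bit neighbourhood (left, centre, right), on a circular row.

RULE, WIDTH, STEPS = 110, 64, 32

cells = [0] * WIDTH
cells[-1] = 1                                  # single live cell as seed
for _ in range(STEPS):
    print("".join("#" if c else "." for c in cells))
    cells = [
        (RULE >> (4 * cells[i - 1] + 2 * cells[i]
                  + cells[(i + 1) % WIDTH])) & 1
        for i in range(WIDTH)
    ]
```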

Digital physics is falsifiable in that a less powerful class of computers cannot simulate a more powerful class. Therefore, if our universe is a gigantic simulation, that simulation is being run on a computer at least as powerful as a Turing machine. If humans succeed in building a hypercomputer, then a Turing machine cannot have the power required to simulate the universe.

The Church–Turing (Deutsch) thesis

 

The classic Church–Turing thesis claims that any computer as powerful as a Turing machine can, in principle, calculate anything that a human can calculate, given enough time. A stronger version, not attributable to Church or Turing,[26] claims that a universal Turing machine can compute anything any other Turing machine can compute, i.e., that it is a generalizable Turing machine. But the limits of practical computation are set by physics, not by theoretical computer science:

"Turing did not show that his machines can solve any problem that can be solved 'by instructions, explicitly stated rules, or procedures', nor did he prove that the universal Turing machine 'can compute any function that any computer, with any architecture, can compute'. He proved that his universal machine can compute any function that any Turing machine can compute; and he put forward, and advanced philosophical arguments in support of, the thesis here called Turing's thesis. But a thesis concerning the extent of effective methods—which is to say, concerning the extent of procedures of a certain sort that a human being unaided by machinery is capable of carrying out—carries no implication concerning the extent of the procedures that machines are capable of carrying out, even machines acting in accordance with 'explicitly stated rules.' For among a machine's repertoire of atomic operations there may be those that no human being unaided by machinery can perform." [27]

On the other hand, if two further conjectures are made, along the lines that:

  • hypercomputation always involves actual infinities;
  • there are no actual infinities in physics,

the resulting compound principle does bring practical computation within Turing's limits.
As David Deutsch puts it:

"I can now state the physical version of the Church-Turing principle: 'Every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means.' This formulation is both better defined and more physical than Turing's own way of expressing it."[28] (Emphasis added)

This compound conjecture is sometimes called the "strong Church-Turing thesis" or the Church–Turing–Deutsch principle.

Criticism

 

The critics of digital physics—including physicists[citation needed] who work in quantum mechanics—object to it on several grounds.

Physical symmetries are continuous

 

One objection is that extant models of digital physics are incompatible[citation needed] with the existence of several continuous characters of physical symmetries, e.g., rotational symmetry, translational symmetry, Lorentz symmetry, and electroweak symmetry, all central to current physical theory.

Proponents of digital physics claim that such continuous symmetries are only convenient (and very good) approximations of a discrete reality. For example, the reasoning leading to systems of natural units and the conclusion that the Planck length is a minimum meaningful unit of distance suggests that at some level space itself is quantized.[29]
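The natural-units reasoning referred to combines ħ, G, and c into a unique length scale; the standard relations, stated for reference:

```latex
\ell_P = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.616 \times 10^{-35}\,\mathrm{m},
\qquad
t_P = \frac{\ell_P}{c} = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.39 \times 10^{-44}\,\mathrm{s}.
```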

Locality

 

Some argue[citation needed] that extant models of digital physics violate various postulates of quantum physics. For example, if these models are not grounded in Hilbert spaces and probabilities, they belong to the class of theories with local hidden variables that some deem ruled out experimentally using Bell's theorem. This criticism has two possible answers. First, any notion of locality in the digital model does not necessarily have to correspond to locality formulated in the usual way in the emergent spacetime. A concrete example of this case was recently given by Lee Smolin.[30] Another possibility is a well-known loophole in Bell's theorem known as superdeterminism (sometimes referred to as predeterminism).[31] In a completely deterministic model, the experimenter's decision to measure certain components of the spins is predetermined. Thus, the assumption that the experimenter could have decided to measure different components of the spins than he actually did is, strictly speaking, not true.

Physical theory requires the continuum

 

It has been argued[weasel words] that digital physics, grounded in the theory of finite state machines and hence discrete mathematics, cannot do justice to a physical theory whose mathematics requires the real numbers, which is the case for all physical theories having any credibility.

But computers can manipulate and solve formulas describing real numbers using symbolic computation, thus avoiding the need to approximate real numbers with infinitely many digits.
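As a small illustration of what manipulating real numbers symbolically looks like in practice, here is a sketch using the SymPy library (the choice of library is ours, for illustration): sqrt(2) is handled exactly, as a symbol satisfying x² = 2, with no digit expansion anywhere.

```python
import sympy as sp

x = sp.symbols("x")

# Solving x**2 = 2 yields exact symbolic roots, not decimal approximations.
print(sp.solve(x**2 - 2, x))             # [-sqrt(2), sqrt(2)]

# Arithmetic with the symbolic root is exact; no rounding is involved.
r = sp.sqrt(2)
print(sp.simplify(r * r))                # 2
print(sp.simplify((1 + r) * (1 - r)))    # -1
```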

Before symbolic computation, a number—in particular a real number, one with an infinite number of digits—was said to be computable if a Turing machine could continue to output its digits endlessly. In other words, there is no "last digit". But this sits uncomfortably with any proposal that the universe is the output of a virtual-reality exercise carried out in real time (or any plausible kind of time). Known physical laws (including quantum mechanics and its continuous spectra) are very much infused with real numbers and the mathematics of the continuum.
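In that classical sense a computable real is one whose digit stream can be produced on demand, with no last digit. A minimal sketch for sqrt(2), using only exact integer arithmetic so that every digit emitted is correct:

```python
from math import isqrt

def sqrt2_digits():
    """Yield the decimal digits of sqrt(2) = 1.41421356... forever.
    floor(sqrt(2) * 10**k) is computed exactly via integer isqrt."""
    prev, k = 0, 0
    while True:
        cur = isqrt(2 * 10 ** (2 * k))   # floor(sqrt(2) * 10**k)
        yield cur - 10 * prev
        prev, k = cur, k + 1

g = sqrt2_digits()
print([next(g) for _ in range(10)])      # [1, 4, 1, 4, 2, 1, 3, 5, 6, 2]
```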

"So ordinary computational descriptions do not have a cardinality of states and state space trajectories that is sufficient for them to map onto ordinary mathematical descriptions of natural systems. Thus, from the point of view of strict mathematical description, the thesis that everything is a computing system in this second sense cannot be supported".[32]

For his part, David Deutsch generally takes a "multiverse" view of the question of continuous vs. discrete. In short, he thinks that "within each universe all observable quantities are discrete, but the multiverse as a whole is a continuum. When the equations of quantum theory describe a continuous but not-directly-observable transition between two values of a discrete quantity, what they are telling us is that the transition does not take place entirely within one universe. So perhaps the price of continuous motion is not an infinity of consecutive actions, but an infinity of concurrent actions taking place across the multiverse." ("The Discrete and the Continuous," January 2001; an abridged version appeared in The Times Higher Education Supplement.)


References

 

  1. ^ Jaynes, E. T., 1957, "Information Theory and Statistical Mechanics," Phys. Rev. 106: 620.
    Jaynes, E. T., 1957, "Information Theory and Statistical Mechanics II," Phys. Rev. 108: 171.
  2. ^ Jaynes, E. T., 1990, "Probability Theory as Logic," in Fougere, P.F., ed., Maximum-Entropy and Bayesian Methods. Boston: Kluwer.
  3. ^ See Fredkin's Digital Philosophy web site.
  4. ^ A New Kind of Science website. Reviews of ANKS.
  5. ^ Schmidhuber, J., "Computer Universes and an Algorithmic Theory of Everything."
  6. ^ G. 't Hooft, 1999, "Quantum Gravity as a Dissipative Deterministic System," Class. Quant. Grav. 16: 3263-79.
  7. ^ Lloyd, S., "The Computational Universe: Quantum gravity from quantum computation."
  8. ^ Zizzi, Paola, "Spacetime at the Planck Scale: The Quantum Computer View."
  9. ^ Zuse, Konrad, 1967, Elektronische Datenverarbeitung, vol. 8, pp. 336-344.
  10. ^ Zizzi, Paola, "A Minimal Model for Quantum Gravity."
  11. ^ Zizzi, Paola, "Computability at the Planck Scale."
  12. ^ Marzuoli, A. and Rasetti, M., 2002, "Spin Network Quantum Simulator," Phys. Lett. A306, 79-87.
  13. ^ Marzuoli, A. and Rasetti, M., 2005, "Computing Spin Networks," Annals of Physics 318: 345-407.
  14. ^ Girelli, F. and Livine, E. R., 2005, Class. Quant. Grav. 22: 3295-3314.
  15. ^ von Weizsäcker, Carl Friedrich (1980). The Unity of Nature. New York: Farrar, Straus, and Giroux.
  16. ^ von Weizsäcker, Carl Friedrich (1985) (in German). Aufbau der Physik [The Structure of Physics]. Munich. ISBN 3-446-14142-1.
  17. ^ von Weizsäcker, Carl Friedrich (1992) (in German). Zeit und Wissen.
  18. ^ Papers on pancomputationalism
  19. ^ Chalmers, David. J., 1995, "Facing up to the Hard Problem of Consciousness," Journal of Consciousness Studies 2(3): 200-19. This paper cites John A. Wheeler, 1990, "Information, physics, quantum: The search for links" in W. Zurek (ed.) Complexity, Entropy, and the Physics of Information. Redwood City, CA: Addison-Wesley. Also see Chalmers, D., 1996. The Conscious Mind. Oxford Univ. Press.
  20. ^ Langan, Christopher M., 2002, "The Cognitive-Theoretic Model of the Universe: A New Kind of Reality Theory, pg. 7" Progress in Complexity, Information and Design
  21. ^ Wheeler, John Archibald, 1986, "Hermann Weyl and the Unity of Knowledge"
  22. ^ Eldred, Michael, 2009, 'Postscript 2: On quantum physics' assault on time'
  23. ^ Eldred, Michael, 2009, The Digital Cast of Being: Metaphysics, Mathematics, Cartesianism, Cybernetics, Capitalism, Communication, ontos, Frankfurt, 137 pp. ISBN 978-3-86838-045-3.
  24. ^ Floridi, L., 2004, "Informational Realism," in Weckert, J., and Al-Saggaf, Y, eds., Computing and Philosophy Conference, vol. 37."
  25. ^ See Floridi talk on Informational Nature of Reality, abstract at the E-CAP conference 2006.
  26. ^ B. Jack Copeland, Computation in Luciano Floridi (ed.), The Blackwell guide to the philosophy of computing and information, Wiley-Blackwell, 2004, ISBN 0-631-22919-1, pp. 10-15
  27. ^ Stanford Encyclopedia of Philosophy: "The Church-Turing Thesis," by B. Jack Copeland.
  28. ^ David Deutsch, "Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer."
  29. ^ John A. Wheeler, 1990, "Information, physics, quantum: The search for links" in W. Zurek (ed.) Complexity, Entropy, and the Physics of Information. Redwood City, CA: Addison-Wesley.
  30. ^ L. Smolin, "Matrix models as non-local hidden variables theories."
  31. ^ J. S. Bell, 1981, "Bertlmann's socks and the nature of reality," Journal de Physique 42 C2: 41-61.
  32. ^ Piccinini, Gualtiero, 2007, "Computational Modelling vs. Computational Explanation: Is Everything a Turing Machine, and Does It Matter to the Philosophy of Mind?" Australasian Journal of Philosophy 85(1): 93-115.

 

Wednesday, October 12, 2011

Seeing Underlying Structures

There is a gap between: "Proton collision → decay to muons and muon neutrinos → tau neutrino → [gap] → the tau lepton may travel some tens of microns before decaying back into a neutrino and charged tracks." Use the case of relativistic muons?


An analysis of four Fermi-detected gamma-ray bursts (GRBs) is given that sets upper limits on the energy dependence of the speed and dispersion of light across the universe. The analysis focuses on photons recorded above 1 GeV for the Fermi-detected GRB 080916C, GRB 090510A, GRB 090902B, and GRB 090926A. Upper limits on time scales for statistically significant bunching of photon arrival times were found and cataloged. In particular, the most stringent limit was found for GRB 090510A at redshift z ≈ 0.897, for which Δt < 0.00136 s, a limit driven by three separate photon bunchings. These photons occurred among the first seven super-GeV photons recorded for GRB 090510A and contain one pair with an energy difference of ΔE ≈ 23.5 GeV. The next most limiting burst was GRB 090902B at a redshift of z ≈ 1.822, for which Δt < 0.161 s, a limit driven by several groups of photons, one pair of which had an energy difference ΔE ≈ 1.56 GeV. Resulting limits on the differential speed of light and Lorentz invariance were found for all of these GRBs independently. The strongest limit was for GRB 090510A, with Δc/c < 6.09 × 10^-21. Given generic dispersion relations across the universe where the time delay is proportional to the photon energy to the first or second power, the most stringent limits on the dispersion strengths were k1 < 1.38 × 10^-5 s Gpc^-1 GeV^-1 and k2 < 3.04 × 10^-7 s Gpc^-1 GeV^-2, respectively. Such upper limits result in upper bounds on dispersive effects created, for example, by dark energy, dark matter, or the spacetime foam of quantum gravity. Relating these dispersion constraints to loop quantum gravity energy scales specifically results in limits of M1 c^2 > 7.43 × 10^21 GeV and M2 c^2 > 7.13 × 10^11 GeV, respectively. See: Limiting properties of light and the universe with high energy photons from Fermi-detected Gamma Ray Bursts
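The generic dispersion relations mentioned in the abstract can be written out explicitly. A common parameterization (a sketch; D is the light-travel distance and n = 1, 2), with the normalization that reproduces the quoted mass-scale bounds from the quoted k_n limits, is:

```latex
\Delta t \;=\; k_n\,E^{\,n} D
\;=\; \frac{1+n}{2}\left(\frac{E}{M_n c^{2}}\right)^{\!n}\frac{D}{c},
\qquad
M_n c^{2} \;=\; \left(\frac{1+n}{2\,c\,k_n}\right)^{1/n}.
```

With c ≈ 9.72 × 10^-18 Gpc/s, the limits k1 < 1.38 × 10^-5 s Gpc^-1 GeV^-1 and k2 < 3.04 × 10^-7 s Gpc^-1 GeV^-2 indeed give M1 c^2 > 7.4 × 10^21 GeV and M2 c^2 > 7.1 × 10^11 GeV, consistent with the numbers quoted above.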


The point here is that the energy-dependent flight times and the Fermi calorimetry results point toward the GRB emission mechanism, and a direct determination of GRB emission points to the potential of an underlying structure in the W and the electron-neutrino fields?

Fig. 3: An electron, as it travels, may become a more complex combination of disturbances in two or more fields. It occasionally is a mixture of disturbances in the photon and electron fields; more rarely it is a disturbance in the W and the electron-neutrino fields. See: Another Speed Bump for Superluminal Neutrinos, posted on October 11, 2011, at "Of Particular Significance"
***
What I find interesting is that Tamburini and Laveder do not stop at discussing the theoretical interpretation of the alleged superluminal motion, but put their hypothesis to the test by comparing known measurements of neutrino velocity on a graph, where the imaginary mass is computed from the momentum of neutrinos and the distance traveled in a dense medium. The data show a very linear behaviour, which may constitute an explanation of the OPERA effect. See: Tamburini: Neutrinos Are Majorana Particles, Relativity Is OK



Sunday, April 17, 2011

Space

Space is the boundless, three-dimensional extent in which objects and events occur and have relative position and direction.[1] Physical space is often conceived in three linear dimensions, although modern physicists usually consider it, with time, to be part of the boundless four-dimensional continuum known as spacetime. In mathematics one examines 'spaces' with different numbers of dimensions and with different underlying structures. The concept of space is considered to be of fundamental importance to an understanding of the physical universe although disagreement continues between philosophers over whether it is itself an entity, a relationship between entities, or part of a conceptual framework.

Debates concerning the nature, essence and the mode of existence of space date back to antiquity; namely, to treatises like the Timaeus of Plato, in his reflections on what the Greeks called: chora / Khora (i.e. 'space'), or in the Physics of Aristotle (Book IV, Delta) in the definition of topos (i.e. place), or even in the later 'geometrical conception of place' as 'space qua extension' in the Discourse on Place (Qawl fi al-makan) of the 11th century Arab polymath Ibn al-Haytham (Alhazen).[2] Many of these classical philosophical questions were discussed in the Renaissance and then reformulated in the 17th century, particularly during the early development of classical mechanics. In Isaac Newton's view, space was absolute, in the sense that it existed permanently and independently of whether there was any matter in it.[3]

Other natural philosophers, notably Gottfried Leibniz, thought instead that space was a collection of relations between objects, given by their distance and direction from one another. In the 18th century, the philosopher and theologian George Berkeley attempted to refute the 'visibility of spatial depth' in his Essay Towards a New Theory of Vision. Later, the great metaphysician Immanuel Kant described space and time as elements of a systematic framework that humans use to structure their experience; he referred to 'space' in his Critique of Pure Reason as being: a subjective 'pure a priori form of intuition', hence that its existence depends on our human faculties.

In the 19th and 20th centuries mathematicians began to examine non-Euclidean geometries, in which space can be said to be curved, rather than flat. According to Albert Einstein's theory of general relativity, space around gravitational fields deviates from Euclidean space.[4] Experimental tests of general relativity have confirmed that non-Euclidean space provides a better model for the shape of space.

Philosophy of space

Leibniz and Newton

In the seventeenth century, the philosophy of space and time emerged as a central issue in epistemology and metaphysics. At its heart, Gottfried Leibniz, the German philosopher-mathematician, and Isaac Newton, the English physicist-mathematician, set out two opposing theories of what space is. Rather than being an entity that independently exists over and above other matter, Leibniz held that space is no more than the collection of spatial relations between objects in the world: "space is that which results from places taken together".[5] Unoccupied regions are those that could have objects in them, and thus spatial relations with other places. For Leibniz, then, space was an idealised abstraction from the relations between individual entities or their possible locations and therefore could not be continuous but must be discrete.[6] Space could be thought of in a similar way to the relations between family members. Although people in the family are related to one another, the relations do not exist independently of the people.[7] Leibniz argued that space could not exist independently of objects in the world because that implies a difference between two universes exactly alike except for the location of the material world in each universe. But since there would be no observational way of telling these universes apart then, according to the identity of indiscernibles, there would be no real difference between them. According to the principle of sufficient reason, any theory of space that implied that there could be these two possible universes, must therefore be wrong.[8]


Newton took space to be more than relations between material objects and based his position on observation and experimentation. For a relationist there can be no real difference between inertial motion, in which the object travels with constant velocity, and non-inertial motion, in which the velocity changes with time, since all spatial measurements are relative to other objects and their motions. But Newton argued that since non-inertial motion generates forces, it must be absolute.[9] He used the example of water in a spinning bucket to demonstrate his argument. Water in a bucket hung from a rope and set to spin starts with a flat surface. After a while, as the bucket continues to spin, the surface of the water becomes concave. If the bucket's spinning is stopped then the surface of the water remains concave as it continues to spin. The concave surface is therefore apparently not the result of relative motion between the bucket and the water.[10] Instead, Newton argued, it must be a result of non-inertial motion relative to space itself. For several centuries the bucket argument was decisive in showing that space must exist independently of matter.

Kant

In the eighteenth century the German philosopher Immanuel Kant developed a theory of knowledge in which knowledge about space can be both a priori and synthetic.[11] According to Kant, knowledge about space is synthetic, in that statements about space are not simply true by virtue of the meaning of the words in the statement. In his work, Kant rejected the view that space must be either a substance or relation. Instead he came to the conclusion that space and time are not discovered by humans to be objective features of the world, but are part of an unavoidable systematic framework for organizing our experiences.[12]

Non-Euclidean geometry

Figure: Spherical geometry is similar to elliptical geometry. On the surface of a sphere there are no parallel lines.
 
Euclid's Elements contained five postulates that form the basis for Euclidean geometry. One of these, the parallel postulate, has been the subject of debate among mathematicians for many centuries. It states that on any plane on which there is a straight line L1 and a point P not on L1, there is only one straight line L2 on the plane that passes through the point P and is parallel to the straight line L1. Until the 19th century, few doubted the truth of the postulate; instead debate centered over whether it was necessary as an axiom, or whether it was a theory that could be derived from the other axioms.[13] Around 1830, though, the Hungarian János Bolyai and the Russian Nikolai Ivanovich Lobachevsky separately published treatises on a type of geometry that does not include the parallel postulate, called hyperbolic geometry. In this geometry, an infinite number of parallel lines pass through the point P. Consequently, the sum of the angles in a triangle is less than 180° and the ratio of a circle's circumference to its diameter is greater than π. In the 1850s, Bernhard Riemann developed an equivalent theory of elliptical geometry, in which no parallel lines pass through P. In this geometry, the angles of a triangle sum to more than 180° and circles have a ratio of circumference to diameter that is less than π. The three cases are summarized in the table below, and a quantitative version of the angle-sum column is sketched after it.
Type of geometry   Number of parallels   Sum of angles in a triangle   Ratio of circumference to diameter   Measure of curvature
Hyperbolic         Infinite              < 180°                        > π                                  < 0
Euclidean          1                     180°                          π                                    0
Elliptical         0                     > 180°                        < π                                  > 0
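For a geodesic triangle of area A on a surface of constant curvature K, the Gauss–Bonnet theorem gives the angle sum directly:

```latex
\alpha + \beta + \gamma \;=\; \pi + K A .
```

So K > 0 (elliptical) gives a sum above 180°, K = 0 (Euclidean) gives exactly 180°, and K < 0 (hyperbolic) gives less, in agreement with the table.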

Gauss and Poincaré

Although there was a prevailing Kantian consensus at the time, once non-Euclidean geometries had been formalised, some began to wonder whether or not physical space is curved. Carl Friedrich Gauss, a German mathematician, was the first to consider an empirical investigation of the geometrical structure of space. He thought of making a test of the sum of the angles of an enormous stellar triangle and there are reports he actually carried out a test, on a small scale, by triangulating mountain tops in Germany.[14]

Henri Poincaré, a French mathematician and physicist of the late 19th century, introduced an important insight in which he attempted to demonstrate the futility of any attempt to discover which geometry applies to space by experiment.[15] He considered the predicament that would face scientists if they were confined to the surface of an imaginary large sphere with particular properties, known as a sphere-world. In this world, the temperature is taken to vary in such a way that all objects expand and contract in similar proportions in different places on the sphere. With a suitable falloff in temperature, if the scientists try to use measuring rods to determine the sum of the angles in a triangle, they can be deceived into thinking that they inhabit a plane, rather than a spherical surface.[16] In fact, the scientists cannot in principle determine whether they inhabit a plane or sphere and, Poincaré argued, the same is true for the debate over whether real space is Euclidean or not. For him, which geometry was used to describe space was a matter of convention.[17] Since Euclidean geometry is simpler than non-Euclidean geometry, he assumed the former would always be used to describe the 'true' geometry of the world.[18]

Einstein

In 1905, Albert Einstein published a paper on a special theory of relativity, in which he proposed that space and time be combined into a single construct known as spacetime. In this theory, the speed of light in a vacuum is the same for all observers—which has the result that two events that appear simultaneous to one particular observer will not be simultaneous to another observer if the observers are moving with respect to one another. Moreover, an observer will measure a moving clock to tick more slowly than one that is stationary with respect to them; and objects are measured to be shortened in the direction that they are moving with respect to the observer.
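Both effects are governed by the Lorentz factor; the standard relations, stated for reference, are:

```latex
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}},
\qquad
\Delta t' = \gamma\,\Delta t \quad (\text{moving clocks tick more slowly}),
\qquad
L' = \frac{L}{\gamma} \quad (\text{moving lengths contract}).
```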

Over the following ten years Einstein worked on a general theory of relativity, which is a theory of how gravity interacts with spacetime. Instead of viewing gravity as a force field acting in spacetime, Einstein suggested that it modifies the geometric structure of spacetime itself.[19] According to the general theory, time goes more slowly at places with lower gravitational potentials, and rays of light bend in the presence of a gravitational field. Scientists have studied the behaviour of binary pulsars, confirming the predictions of Einstein's theories, and non-Euclidean geometry is usually used to describe spacetime.

Mathematics

In modern mathematics spaces are defined as sets with some added structure. They are frequently described as different types of manifolds: spaces that locally approximate Euclidean space and whose properties are defined largely by the local connectedness of points that lie on the manifold. There are, however, many diverse mathematical objects that are called spaces. For example, vector spaces such as function spaces may have infinitely many independent dimensions and a notion of distance very different from that of Euclidean space, and topological spaces replace the concept of distance with a more abstract idea of nearness.

Physics

Classical mechanics

Space is one of the few fundamental quantities in physics, meaning that it cannot be defined via other quantities because nothing more fundamental is known at present. On the other hand, it can be related to other fundamental quantities. Thus, similar to other fundamental quantities (like time and mass), space can be explored via measurement and experiment.

Astronomy

Astronomy is the science involved with the observation, explanation and measuring of objects in outer space.

Relativity

Before Einstein's work on relativistic physics, time and space were viewed as independent dimensions. Einstein's discoveries showed that due to relativity of motion our space and time can be mathematically combined into one object — spacetime. It turns out that distances in space or in time separately are not invariant with respect to Lorentz coordinate transformations, but distances in Minkowski space-time along space-time intervals are—which justifies the name.
In addition, time and space dimensions should not be viewed as exactly equivalent in Minkowski space-time. One can freely move in space but not in time. Thus, time and space coordinates are treated differently both in special relativity (where time is sometimes considered an imaginary coordinate) and in general relativity (where different signs are assigned to time and space components of spacetime metric).
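The invariance can be checked directly. For a standard Lorentz boost with velocity v along x, the interval is unchanged (a textbook computation; y and z are untouched by the boost):

```latex
t' = \gamma\!\left(t - \frac{v x}{c^{2}}\right), \quad
x' = \gamma\,(x - v t), \quad
\gamma = \frac{1}{\sqrt{1 - v^{2}/c^{2}}}
\;\Longrightarrow\;
-c^{2} t'^{2} + x'^{2}
= \gamma^{2}\!\left(x^{2} - c^{2} t^{2}\right)\!\left(1 - \frac{v^{2}}{c^{2}}\right)
= -c^{2} t^{2} + x^{2}.
```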

Furthermore, in Einstein's general theory of relativity, it is postulated that space-time is geometrically distorted (curved) near gravitationally significant masses.[20]

Experiments are ongoing to attempt to directly measure gravitational waves, which are essentially solutions to the equations of general relativity that describe moving ripples of spacetime. Indirect evidence for them has been found in the motions of the Hulse-Taylor binary system.

Cosmology

Relativity theory leads to the cosmological question of what shape the universe is, and where space came from. It appears that space was created in the Big Bang, 13.7 billion years ago, and has been expanding ever since. The overall shape of space is not known, but space is known to be expanding very rapidly due to cosmic inflation.

Spatial measurement

The measurement of physical space has long been important. Although earlier societies had developed measuring systems, the International System of Units (SI) is now the most common system of units used in the measuring of space, and is almost universally used.

Currently, the standard space interval, called a standard meter or simply meter, is defined as the distance traveled by light in a vacuum during a time interval of exactly 1/299,792,458 of a second. This definition, coupled with the present definition of the second, is based on the special theory of relativity, in which the speed of light plays the role of a fundamental constant of nature.
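Since this definition fixes c exactly, converting a light travel time into a distance is a single exact multiplication; a small sketch (the lunar light-time below is a rough illustrative value):

```python
C = 299_792_458   # speed of light in m/s, exact by definition of the meter

def light_time_to_distance(seconds):
    """Distance in meters covered by light in vacuum in `seconds`."""
    return C * seconds

print(light_time_to_distance(1.0))    # 299792458.0 m: one light-second
print(light_time_to_distance(1.28))   # ~3.84e8 m, roughly the Earth-Moon distance
```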

Geographical space

Geography is the branch of science concerned with identifying and describing the Earth, utilizing spatial awareness to try to understand why things exist in specific locations. Cartography is the mapping of spaces to allow better navigation, for visualization purposes, and to act as a locational device. Geostatistics applies statistical concepts to collected spatial data to create an estimate for unobserved phenomena.

Geographical space is often considered as land, and can have a relation to ownership usage (in which space is seen as property or territory). While some cultures assert the rights of the individual in terms of ownership, other cultures will identify with a communal approach to land ownership, while still other cultures such as Australian Aboriginals, rather than asserting ownership rights to land, invert the relationship and consider that they are in fact owned by the land. Spatial planning is a method of regulating the use of space at land-level, with decisions made at regional, national and international levels. Space can also impact on human and cultural behavior, being an important factor in architecture, where it will impact on the design of buildings and structures, and on farming.

Ownership of space is not restricted to land. Ownership of airspace and of waters is decided internationally. Other forms of ownership have been recently asserted to other spaces — for example to the radio bands of the electromagnetic spectrum or to cyberspace.

Public space is a term used to define areas of land as collectively owned by the community and managed in their name by delegated bodies; such spaces are open to all. Private property, by contrast, is land culturally owned by an individual or company for their own use and pleasure.

Abstract space is a term used in geography to refer to a hypothetical space characterized by complete homogeneity. When modeling activity or behavior, it is a conceptual tool used to limit extraneous variables such as terrain.

In psychology

Psychologists first began to study the way space is perceived in the middle of the 19th century. Those now concerned with such studies regard it as a distinct branch of psychology. Psychologists analyzing the perception of space are concerned with how the physical appearance of an object and its interactions with its surroundings are perceived.

Other, more specialized topics studied include amodal perception and object permanence. The perception of surroundings is important due to its necessary relevance to survival, especially with regards to hunting and self preservation as well as simply one's idea of personal space.

Several space-related phobias have been identified, including agoraphobia (the fear of open spaces), astrophobia (the fear of celestial space) and claustrophobia (the fear of enclosed spaces).

References

  1. ^ Britannica Online Encyclopedia: Space
  2. ^ Refer to Plato's Timaeus in the Loeb Classical Library, Harvard University, and to his reflections on: Chora / Khora. See also Aristotle's Physics, Book IV, Chapter 5, on the definition of topos. Concerning Ibn al-Haytham's 11th century conception of 'geometrical place' as 'spatial extension', which is akin to Descartes' and Leibniz's 17th century notions of extensio and analysis situs, and his own mathematical refutation of Aristotle's definition of topos in natural philosophy, refer to: Nader El-Bizri, 'In Defence of the Sovereignty of Philosophy: al-Baghdadi's Critique of Ibn al-Haytham's Geometrisation of Place', Arabic Sciences and Philosophy: A Historical Journal (Cambridge University Press), Vol.17 (2007), pp. 57-80.
  3. ^ French and Ebison, Classical Mechanics, p. 1
  4. ^ Carnap, R. An introduction to the Philosophy of Science
  5. ^ Leibniz, Fifth letter to Samuel Clarke
  6. ^ Vailati, E, Leibniz & Clarke: A Study of Their Correspondence p. 115
  7. ^ Sklar, L, Philosophy of Physics, p. 20
  8. ^ Sklar, L, Philosophy of Physics, p. 21
  9. ^ Sklar, L, Philosophy of Physics, p. 22
  10. ^ Newton's bucket
  11. ^ Carnap, R, An introduction to the philosophy of science, p. 177-178
  12. ^ Lucas, John Randolph. Space, Time and Causality. p. 149. ISBN 0198750579.
  13. ^ Carnap, R, An introduction to the philosophy of science, p. 126
  14. ^ Carnap, R, An introduction to the philosophy of science, p. 134-136
  15. ^ Jammer, M, Concepts of Space, p. 165
  16. ^ A medium with a variable index of refraction could also be used to bend the path of light and again deceive the scientists if they attempt to use light to map out their geometry
  17. ^ Carnap, R, An introduction to the philosophy of science, p. 148
  18. ^ Sklar, L, Philosophy of Physics, p. 57
  19. ^ Sklar, L, Philosophy of Physics, p. 43
  20. ^ Chapters 8 and 9 of John A. Wheeler, A Journey Into Gravity and Spacetime, Scientific American. ISBN 0-7167-6034-7