Monday, May 21, 2012

Digital Physics

In physics and cosmology, digital physics is a collection of theoretical perspectives based on the premise that the universe is, at heart, describable by information and is therefore computable. On this view, the universe can be conceived of either as the output of a computer program or as a vast digital computation device (or, at least, as mathematically isomorphic to such a device).

Digital physics is grounded in one or more of the following hypotheses, listed in order of increasing strength. The universe, or reality:

  • is essentially informational (although not every informational ontology needs to be digital);
  • is essentially computable (the pancomputationalist position);
  • can be described digitally;
  • is in essence digital;
  • is itself a computer;
  • is the output of a simulated reality exercise.

History


Every computer must be compatible with the principles of information theory, statistical thermodynamics, and quantum mechanics. A fundamental link among these fields was proposed by Edwin Jaynes in two seminal 1957 papers.[1] Moreover, Jaynes elaborated an interpretation of probability theory as generalized Aristotelian logic, a view very convenient for linking fundamental physics with digital computers, because these are designed to implement the operations of classical logic and, equivalently, of Boolean algebra.[2]
The hypothesis that the universe is a digital computer was pioneered by Konrad Zuse in his book Rechnender Raum (translated into English as Calculating Space). The term digital physics was first employed by Edward Fredkin, who later came to prefer the term digital philosophy.[3] Others who have modeled the universe as a giant computer include Stephen Wolfram,[4] Jürgen Schmidhuber,[5] and Nobel laureate Gerard 't Hooft.[6] These authors hold that the apparently probabilistic nature of quantum physics is not necessarily incompatible with the notion of computability. Quantum versions of digital physics have recently been proposed by Seth Lloyd,[7] David Deutsch, and Paola Zizzi.[8]

Related ideas include Carl Friedrich von Weizsäcker's binary theory of ur-alternatives, pancomputationalism, computational universe theory, John Archibald Wheeler's "It from bit", and Max Tegmark's ultimate ensemble.


Overview


Digital physics suggests that there exists, at least in principle, a program for a universal computer which computes the evolution of the universe. The computer could be, for example, a huge cellular automaton (Zuse 1967[9]), or a universal Turing machine, as suggested by Schmidhuber (1997), who pointed out that there exists a very short program that can compute all possible computable universes in an asymptotically optimal way.
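Schmidhuber's scheme can be made concrete. The sketch below is an illustrative toy in Python, not his actual program: it only prints the schedule by which each candidate binary "program" would receive execution steps, mirroring the phase structure of his asymptotically optimal enumeration (in phase i, every program p of length at most i is granted 2^(i - len(p)) steps); the interpreter that would actually run the programs is omitted.

    from itertools import product

    def all_programs(max_len):
        """Enumerate all bit strings of length 1..max_len, shortest first."""
        for length in range(1, max_len + 1):
            for bits in product("01", repeat=length):
                yield "".join(bits)

    def dovetail(phases):
        """In phase i, grant program p a budget of 2**(i - len(p)) steps,
        so every program eventually receives unbounded total runtime."""
        for i in range(1, phases + 1):
            for p in all_programs(i):
                yield p, 2 ** (i - len(p))

    for program, steps in dovetail(2):
        print(f"run {program!r} for {steps} step(s)")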

Some try to identify single physical particles with simple bits. For example, if one particle, such as an electron, switches from one quantum state to another, it may be the same as if a bit is changed from one value (0, say) to the other (1). A single bit suffices to describe a single quantum switch of a given particle. As the universe appears to be composed of elementary particles whose behavior can be completely described by the quantum switches they undergo, it follows that the universe as a whole can be described by bits. Every state is information, and every change of state is a change in information (requiring the manipulation of one or more bits). Setting aside dark matter and dark energy, which are poorly understood at present, the known universe consists of about 10^80 protons and the same number of electrons. Hence, the universe could be simulated by a computer capable of storing and manipulating about 10^90 bits. If such a simulation is indeed the case, then hypercomputation would be impossible.

Loop quantum gravity could lend support to digital physics, in that it assumes space-time is quantized. Paola Zizzi has formulated a realization of this concept in what has come to be called "computational loop quantum gravity", or CLQG.[10][11] Other theories that combine aspects of digital physics with loop quantum gravity are those of Marzuoli and Rasetti[12][13] and Girelli and Livine.[14]

Weizsäcker's ur-alternatives


Physicist Carl Friedrich von Weizsäcker's theory of ur-alternatives (archetypal objects), first publicized in his book The Unity of Nature (1980),[15] further developed through the 1990s,[16][17] is a kind of digital physics as it axiomatically constructs quantum physics from the distinction between empirically observable, binary alternatives. Weizsäcker used his theory to derive the 3-dimensionality of space and to estimate the entropy of a proton falling into a black hole.

Pancomputationalism or the computational universe theory


Pancomputationalism (also known as pan-computationalism or naturalist computationalism) is the view that the universe is a huge computational machine, or rather a network of computational processes which, following fundamental physical laws, computes (dynamically develops) its own next state from the current one.[18]
A computational universe is proposed by Jürgen Schmidhuber in a paper based on Konrad Zuse's assumption (1967) that the history of the universe is computable. He pointed out that the simplest explanation of the universe would be a very simple Turing machine programmed to systematically execute all possible programs computing all possible histories for all types of computable physical laws. He also pointed out that there is an optimally efficient way of computing all computable universes based on Leonid Levin's universal search algorithm (1973). In 2000 he expanded this work by combining Ray Solomonoff's theory of inductive inference with the assumption that quickly computable universes are more likely than others. This work on digital physics also led to limit-computable generalizations of algorithmic information or Kolmogorov complexity and the concept of Super Omegas, which are limit-computable numbers that are even more random (in a certain sense) than Gregory Chaitin's number of wisdom Omega.

Wheeler's "it from bit"


Following Jaynes and Weizsäcker, the physicist John Archibald Wheeler wrote the following:

[...] it is not unreasonable to imagine that information sits at the core of physics, just as it sits at the core of a computer. (John Archibald Wheeler 1998: 340)

It from bit. Otherwise put, every 'it'—every particle, every field of force, even the space-time continuum itself—derives its function, its meaning, its very existence entirely—even if in some contexts indirectly—from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. 'It from bit' symbolizes the idea that every item of the physical world has at bottom—a very deep bottom, in most instances—an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes–no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe. (John Archibald Wheeler 1990: 5)

David Chalmers of the Australian National University summarised Wheeler's views as follows:

Wheeler (1990) has suggested that information is fundamental to the physics of the universe. According to this 'it from bit' doctrine, the laws of physics can be cast in terms of information, postulating different states that give rise to different effects without actually saying what those states are. It is only their position in an information space that counts. If so, then information is a natural candidate to also play a role in a fundamental theory of consciousness. We are led to a conception of the world on which information is truly fundamental, and on which it has two basic aspects, corresponding to the physical and the phenomenal features of the world.[19]

Chris Langan also builds upon Wheeler's views in his epistemological metatheory:

The Future of Reality Theory According to John Wheeler: In 1979, the celebrated physicist John Wheeler, having coined the phrase “black hole”, put it to good philosophical use in the title of an exploratory paper, Beyond the Black Hole, in which he describes the universe as a self-excited circuit. The paper includes an illustration in which one side of an uppercase U, ostensibly standing for Universe, is endowed with a large and rather intelligent-looking eye intently regarding the other side, which it ostensibly acquires through observation as sensory information. By dint of placement, the eye stands for the sensory or cognitive aspect of reality, perhaps even a human spectator within the universe, while the eye’s perceptual target represents the informational aspect of reality. By virtue of these complementary aspects, it seems that the universe can in some sense, but not necessarily that of common usage, be described as “conscious” and “introspective”…perhaps even “infocognitive”.[20]

The first formal presentation of the idea that information might be the fundamental quantity at the core of physics seems to be due to Frederick W. Kantor (a physicist from Columbia University). Kantor's book Information Mechanics (Wiley-Interscience, 1977) developed this idea in detail, but without mathematical rigor.

The toughest nut to crack in Wheeler's research program of a digital dissolution of physical being in a unified physics, Wheeler himself says, is time. In a 1986 eulogy to the mathematician Hermann Weyl, he proclaimed: "Time, among all concepts in the world of physics, puts up the greatest resistance to being dethroned from ideal continuum to the world of the discrete, of information, of bits. ... Of all obstacles to a thoroughly penetrating account of existence, none looms up more dismayingly than 'time.' Explain time? Not without explaining existence. Explain existence? Not without explaining time. To uncover the deep and hidden connection between time and existence ... is a task for the future."[21] The Australian phenomenologist Michael Eldred comments:

The antinomy of the continuum, time, in connection with the question of being ... is said by Wheeler to be a cause for dismay which challenges future quantum physics, fired as it is by a will to power over moving reality, to "achieve four victories" (ibid.)... And so we return to the challenge to "[u]nderstand the quantum as based on an utterly simple and—when we see it—completely obvious idea" (ibid.) from which the continuum of time could be derived. Only thus could the will to mathematically calculable power over the dynamics, i.e. the movement in time, of beings as a whole be satisfied.[22][23]

Digital vs. informational physics


Not every informational approach to physics (or ontology) is necessarily digital. According to Luciano Floridi,[24] "informational structural realism" is a variant of structural realism that supports an ontological commitment to a world consisting of the totality of informational objects dynamically interacting with each other. Such informational objects are to be understood as constraining affordances.

Digital ontology and pancomputationalism are also independent positions. In particular, John Wheeler advocated the former but was silent about the latter; see the quote in the preceding section.
On the other hand, pancomputationalists like Lloyd (2006), who models the universe as a quantum computer, can still maintain an analogue or hybrid ontology; and informational ontologists like Sayre and Floridi embrace neither a digital ontology nor a pancomputationalist position.[25]

Computational foundations


Turing machines


Theoretical computer science is founded on the Turing machine, an imaginary computing machine first described by Alan Turing in 1936. Though the machine is mechanically simple, the Church–Turing thesis implies that it can solve any "reasonable" problem. (In theoretical computer science, a problem is considered "solvable" if it can be solved in principle, namely in finite time, though not necessarily a finite time of any value to humans.) A Turing machine therefore sets the "upper bound" on computational power, apart from the possibilities afforded by hypothetical hypercomputers.
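To make the model concrete, here is a minimal one-tape Turing machine simulator in Python. It is a sketch of the standard textbook model, not anything specific to this article; the example transition table, which inverts a binary string and halts, is invented for illustration.

    def run_tm(tape, transitions, state="start", blank="_", max_steps=1000):
        """Run a one-tape Turing machine and return the final tape contents."""
        cells = dict(enumerate(tape))  # sparse tape: position -> symbol
        head = 0
        for _ in range(max_steps):
            if state == "halt":
                break
            symbol = cells.get(head, blank)
            write, move, state = transitions[(state, symbol)]
            cells[head] = write
            head += 1 if move == "R" else -1
        return "".join(cells[i] for i in sorted(cells))

    # Example machine: invert each bit, halt at the first blank.
    invert = {
        ("start", "0"): ("1", "R", "start"),
        ("start", "1"): ("0", "R", "start"),
        ("start", "_"): ("_", "R", "halt"),
    }

    print(run_tm("1011", invert))  # -> 0100_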

Wolfram's principle of computational equivalence powerfully motivates the digital approach. This principle, if correct, means that everything can be computed by one essentially simple machine, realized as a cellular automaton. This is one way of fulfilling a traditional goal of physics: finding simple laws and mechanisms for all of nature.
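As a sketch of the kind of "essentially simple machine" Wolfram has in mind, here is the elementary cellular automaton Rule 110 (which Wolfram argues is computationally universal) in a few lines of Python; the grid width, boundary handling, and initial condition are arbitrary choices for display.

    def step(cells, rule=110):
        """One synchronous update of an elementary CA with fixed 0 boundaries."""
        padded = [0] + cells + [0]
        # The neighborhood (left, center, right) selects one bit of the rule number.
        return [(rule >> (4 * padded[i - 1] + 2 * padded[i] + padded[i + 1])) & 1
                for i in range(1, len(padded) - 1)]

    cells = [0] * 31 + [1]  # a single live cell at the right edge
    for _ in range(16):
        print("".join("#" if c else "." for c in cells))
        cells = step(cells)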

Digital physics is falsifiable in that a less powerful class of computers cannot simulate a more powerful class. Therefore, if our universe is a gigantic simulation, that simulation is being run on a computer at least as powerful as a Turing machine. If humans succeed in building a hypercomputer, then a Turing machine cannot have the power required to simulate the universe.

The Church–Turing (Deutsch) thesis


The classic Church–Turing thesis claims that any computer as powerful as a Turing machine can, in principle, calculate anything that a human can calculate, given enough time. A stronger version, not attributable to Church or Turing,[26] claims that a universal Turing machine can compute anything any other computer can compute: that it is a generalizable computer. But the limits of practical computation are set by physics, not by theoretical computer science:

"Turing did not show that his machines can solve any problem that can be solved 'by instructions, explicitly stated rules, or procedures', nor did he prove that the universal Turing machine 'can compute any function that any computer, with any architecture, can compute'. He proved that his universal machine can compute any function that any Turing machine can compute; and he put forward, and advanced philosophical arguments in support of, the thesis here called Turing's thesis. But a thesis concerning the extent of effective methods—which is to say, concerning the extent of procedures of a certain sort that a human being unaided by machinery is capable of carrying out—carries no implication concerning the extent of the procedures that machines are capable of carrying out, even machines acting in accordance with 'explicitly stated rules.' For among a machine's repertoire of atomic operations there may be those that no human being unaided by machinery can perform." [27]

On the other hand, if two further conjectures are made, along the lines that:

  • hypercomputation always involves actual infinities;
  • there are no actual infinities in physics,

the resulting compound principle does bring practical computation within Turing's limits.
As David Deutsch puts it:

"I can now state the physical version of the Church-Turing principle: 'Every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means.' This formulation is both better defined and more physical than Turing's own way of expressing it."[28] (Emphasis added)

This compound conjecture is sometimes called the "strong Church-Turing thesis" or the Church–Turing–Deutsch principle.

Criticism


The critics of digital physics—including physicists[citation needed] who work in quantum mechanics—object to it on several grounds.

Physical symmetries are continuous


One objection is that extant models of digital physics are incompatible[citation needed] with the continuous character of several physical symmetries, e.g., rotational symmetry, translational symmetry, Lorentz symmetry, and electroweak symmetry, all central to current physical theory.

Proponents of digital physics claim that such continuous symmetries are only convenient (and very good) approximations of a discrete reality. For example, the reasoning leading to systems of natural units and the conclusion that the Planck length is a minimum meaningful unit of distance suggests that at some level space itself is quantized.[29]

Locality


Some argue[citation needed] that extant models of digital physics violate various postulates of quantum physics. For example, if these models are not grounded in Hilbert spaces and probabilities, they belong to the class of theories with local hidden variables that some deem ruled out experimentally using Bell's theorem. This criticism has two possible answers. First, any notion of locality in the digital model does not necessarily have to correspond to locality formulated in the usual way in the emergent spacetime. A concrete example of this case was recently given by Lee Smolin.[30] Another possibility is a well-known loophole in Bell's theorem known as superdeterminism (sometimes referred to as predeterminism).[31] In a completely deterministic model, the experimenter's decision to measure certain components of the spins is predetermined. Thus, the assumption that the experimenter could have decided to measure different components of the spins than he actually did is, strictly speaking, not true.

Physical theory requires the continuum


It has been argued[weasel words] that digital physics, grounded in the theory of finite state machines and hence discrete mathematics, cannot do justice to a physical theory whose mathematics requires the real numbers, which is the case for all physical theories having any credibility.

But computers can manipulate and solve formulas describing real numbers using symbolic computation, thus avoiding the need to represent a real number by its infinite expansion of digits.
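For instance, a symbolic system treats sqrt(2) as an exact object satisfying (sqrt(2))^2 = 2, never as a truncated decimal. A minimal sketch, assuming the SymPy library is installed:

    from sympy import Rational, simplify, sqrt

    x = sqrt(2)                              # an exact symbolic object
    print(simplify(x**2 - 2))                # 0, exactly, with no rounding
    print(Rational(1, 3) + Rational(1, 6))   # 1/2, exact rational arithmetic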

Before symbolic computation, a number—in particular a real number, one with an infinite number of digits—was said to be computable if a Turing machine could continue to produce its digits endlessly. In other words, there is no "last digit". But this sits uncomfortably with any proposal that the universe is the output of a virtual-reality exercise carried out in real time (or any plausible kind of time). Known physical laws (including quantum mechanics and its continuous spectra) are very much infused with real numbers and the mathematics of the continuum.
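Turing's notion is easy to make concrete: a computable real is one whose digits can be produced on demand, forever. A sketch (the choice of 1/7 and of decimal long division is purely illustrative):

    from itertools import islice

    def fraction_digits(num, den):
        """Yield the decimal digits of num/den (0 < num < den) endlessly."""
        while True:
            num *= 10
            yield num // den
            num %= den

    # Twelve digits of 1/7 = 0.142857142857...; there is no last digit.
    print(list(islice(fraction_digits(1, 7), 12)))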

"So ordinary computational descriptions do not have a cardinality of states and state space trajectories that is sufficient for them to map onto ordinary mathematical descriptions of natural systems. Thus, from the point of view of strict mathematical description, the thesis that everything is a computing system in this second sense cannot be supported".[32]

For his part, David Deutsch generally takes a "multiverse" view of the question of continuous vs. discrete. In short, he thinks that “within each universe all observable quantities are discrete, but the multiverse as a whole is a continuum. When the equations of quantum theory describe a continuous but not-directly-observable transition between two values of a discrete quantity, what they are telling us is that the transition does not take place entirely within one universe. So perhaps the price of continuous motion is not an infinity of consecutive actions, but an infinity of concurrent actions taking place across the multiverse.” (From "The Discrete and the Continuous", January 2001; an abridged version appeared in The Times Higher Education Supplement.)


References


  1. ^ Jaynes, E. T., 1957, "Information Theory and Statistical Mechanics," Phys. Rev. 106: 620.
    Jaynes, E. T., 1957, "Information Theory and Statistical Mechanics II," Phys. Rev. 108: 171.
  2. ^ Jaynes, E. T., 1990, "Probability Theory as Logic," in Fougere, P.F., ed., Maximum-Entropy and Bayesian Methods. Boston: Kluwer.
  3. ^ See Fredkin's Digital Philosophy web site.
  4. ^ A New Kind of Science website. Reviews of ANKS.
  5. ^ Schmidhuber, J., "Computer Universes and an Algorithmic Theory of Everything."
  6. ^ G. 't Hooft, 1999, "Quantum Gravity as a Dissipative Deterministic System," Class. Quant. Grav. 16: 3263-79.
  7. ^ Lloyd, S., "The Computational Universe: Quantum gravity from quantum computation."
  8. ^ Zizzi, Paola, "Spacetime at the Planck Scale: The Quantum Computer View."
  9. ^ Zuse, Konrad, 1967, Elektronische Datenverarbeitung vol 8., pages 336-344
  10. ^ Zizzi, Paola, "A Minimal Model for Quantum Gravity."
  11. ^ Zizzi, Paola, "Computability at the Planck Scale."
  12. ^ Marzuoli, A. and Rasetti, M., 2002, "Spin Network Quantum Simulator," Phys. Lett. A306, 79-87.
  13. ^ Marzuoli, A. and Rasetti, M., 2005, "Computing Spin Networks," Annals of Physics 318: 345-407.
  14. ^ Girelli, F.; Livine, E. R., 2005, "Reconstructing Quantum Geometry from Quantum Information: Spin Networks as Harmonic Oscillators," Class. Quant. Grav. 22: 3295-3314.
  15. ^ von Weizsäcker, Carl Friedrich (1980). The Unity of Nature. New York: Farrar, Straus, and Giroux.
  16. ^ von Weizsäcker, Carl Friedrich (1985) (in German). Aufbau der Physik [The Structure of Physics]. Munich. ISBN 3-446-14142-1.
  17. ^ von Weizsäcker, Carl Friedrich (1992) (in German). Zeit und Wissen.
  18. ^ Papers on pancomputationalism
  19. ^ Chalmers, David. J., 1995, "Facing up to the Hard Problem of Consciousness," Journal of Consciousness Studies 2(3): 200-19. This paper cites John A. Wheeler, 1990, "Information, physics, quantum: The search for links" in W. Zurek (ed.) Complexity, Entropy, and the Physics of Information. Redwood City, CA: Addison-Wesley. Also see Chalmers, D., 1996. The Conscious Mind. Oxford Univ. Press.
  20. ^ Langan, Christopher M., 2002, "The Cognitive-Theoretic Model of the Universe: A New Kind of Reality Theory, pg. 7" Progress in Complexity, Information and Design
  21. ^ Wheeler, John Archibald, 1986, "Hermann Weyl and the Unity of Knowledge"
  22. ^ Eldred, Michael, 2009, 'Postscript 2: On quantum physics' assault on time'
  23. ^ Eldred, Michael, 2009, The Digital Cast of Being: Metaphysics, Mathematics, Cartesianism, Cybernetics, Capitalism, Communication ontos, Frankfurt 2009 137 pp. ISBN 978-3-86838-045-3
  24. ^ Floridi, L., 2004, "Informational Realism," in Weckert, J., and Al-Saggaf, Y, eds., Computing and Philosophy Conference, vol. 37."
  25. ^ See Floridi talk on Informational Nature of Reality, abstract at the E-CAP conference 2006.
  26. ^ B. Jack Copeland, Computation in Luciano Floridi (ed.), The Blackwell guide to the philosophy of computing and information, Wiley-Blackwell, 2004, ISBN 0-631-22919-1, pp. 10-15
  27. ^ Stanford Encyclopedia of Philosophy: "The Church-Turing thesis" -- by B. Jack Copeland.
  28. ^ David Deutsch, "Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer."
  29. ^ John A. Wheeler, 1990, "Information, physics, quantum: The search for links" in W. Zurek (ed.) Complexity, Entropy, and the Physics of Information. Redwood City, CA: Addison-Wesley.
  30. ^ L. Smolin, "Matrix models as non-local hidden variables theories."
  31. ^ J. S. Bell, 1981, "Bertlmann's socks and the nature of reality," Journal de Physique 42 C2: 41-61.
  32. ^ Piccinini, Gualtiero, 2007, "Computational Modelling vs. Computational Explanation: Is Everything a Turing Machine, and Does It Matter to the Philosophy of Mind?" Australasian Journal of Philosophy 85(1): 93-115.



First Alcibiades

Papyrus fragment of Alcibiades I, section 131.c-e.

The First Alcibiades or Alcibiades I (Ancient Greek: Ἀλκιβιάδης αʹ) is a dialogue featuring Alcibiades in conversation with Socrates. It is ascribed to Plato, although scholars are divided on the question of its authenticity.

Content

In the preface Alcibiades is described as an ambitious young man who is eager to enter public life. He is extremely proud of his good looks, noble birth, many friends, possessions and his connection to Pericles, the leader of the Athenian state. Alcibiades has many admirers but they have all run away, afraid of his coldness. Socrates was the first of his admirers but he has not spoken to him for many years. Now the older man tries to help the youth with his questions before Alcibiades presents himself in front of the Athenian assembly. For the rest of the dialogue Socrates explains the many reasons why Alcibiades needs him. By the end of Alcibiades I, the youth is much persuaded by Socrates' reasoning, and accepts him as his mentor.

The first topic they take up is the essence of politics – war and peace. Socrates claims that people should fight on just grounds, but he doubts that Alcibiades has any knowledge of justice. Prodded by Socrates’ questioning, Alcibiades admits that he has never learned the nature of justice from a master, nor discovered it by himself.

Alcibiades suggests that politics is not about justice but expediency, and that the two principles could be opposed. Socrates persuades him that he is mistaken and that there is no expediency without justice. The humiliated youth concedes that he knows nothing about politics.

Later Alcibiades says that he is not concerned about his ignorance, because all the other Athenian politicians are ignorant too. Socrates reminds him that his true rivals are the kings of Sparta and Persia, and delivers a long lecture about the careful education, glorious might and unparalleled wealth of these foreign rulers. Alcibiades gets cold feet, which was exactly the purpose of Socrates’ speech.

After this interlude the dialogue proceeds with further questioning about the rules of society. Socrates points to the many contradictions in Alcibiades’ thoughts. Later they agree that man has to follow the advice of the famous Delphic phrase gnōthi seautón: know thyself. They discuss that the "ruling principle" of man is not the body but the soul. Somebody's true lover loves his soul, while the lover of the body flees as soon as youth fades. With this Socrates proves that he is the only true lover of Alcibiades. "From this day forward, I must and will follow you as you have followed me; I will be the disciple, and you shall be my master", proclaims the youth. Together they will work to improve Alcibiades' character, because only the virtuous have the right to govern. Tyrannical power should not be the aim of individuals; rather, people should accept being commanded by a superior.

In the last sentence Socrates expresses his hope that Alcibiades will persist but he has fears because the power of the state "may be too much" for both of them.

Authenticity


In antiquity Alcibiades I was regarded as the best text to introduce one to Platonic philosophy, which may be why it has continued to be included in the Platonic corpus since then. The authenticity of the dialogue was never doubted in antiquity. It was not until 1836 that the German scholar Friedrich Schleiermacher argued against the ascription to Plato.[1] Subsequently its popularity declined. However, stylometrical research supports Plato's authorship,[2] and some scholars have recently defended its authenticity.[3]

Dating


Traditionally, the First Alcibiades has been considered an early dialogue. Gerard Ledger's stylometric analysis supported this tradition, dating the work to the 390s BC.[4] Julia Annas, in supporting the authenticity of Rival Lovers, saw both dialogues as laying the foundation for ideas Plato would later develop in Charmides.

A later dating has also been defended. Nicholas Denyer suggests that it was written in the 350s BC, when Plato, back in Athens, could reflect on the similarities between Dionysius II of Syracuse (as we know him from the Seventh Letter) and Alcibiades—two young men interested in philosophy but compromised by their ambition and faulty early education.[5] This hypothesis requires skepticism about what is usually regarded as the only fairly certain result of Platonic stylometry, Plato's marked tendency to avoid hiatus in the six dialogues widely believed to have been composed in the period to which Denyer assigns First Alcibiades (Timaeus, Critias, Sophist, Statesman, Philebus, and Laws).[6]


R.S. Bluck, although unimpressed by previous arguments against the dialogue's authenticity, tentatively suggests a date after the end of Plato's life, approximately 343/2 BC, based especially on "a striking parallelism between the Alcibiades and early works of Aristotle, as well as certain other compositions that probably belong to the same period as the latter."[8]

References


  1. ^ Denyer (2001): 15.
  2. ^ Young (1998): 35-36.
  3. ^ Denyer (2001): 14-26.
  4. ^ Young (1998)
  5. ^ Denyer (2001): 11-14. Cf. 20-24
  6. ^ Denyer (2001): 23 n. 19
  7. ^ Pamela M. Clark, "The Greater Alcibiades," Classical Quarterly N.S. 5 (1955), pp. 231-240
  8. ^ R.S. Bluck, "The Origin of the Greater Alcibiades," Classical Quarterly N.S. 3 (1953), pp. 46-52


Bibliography


  • Denyer, Nicholas, "introduction", in Plato, Alcibiades, Nicholas Denyer (ed.) (Cambridge: Cambridge University Press, 2001): 1-26.
  • Foucault, Michel, The Hermeneutics of the Subject: Lectures at the Collège de France, 1981–1982 (New York: Picador, 2005).
  • Young, Charles M., "Plato and Computer Dating", in Nicholas D. Smith (ed.), Plato: Critical Assessments volume 1: General Issues of Interpretation (London: Routledge, 1998): 29-49.



Thursday, May 17, 2012

Particle Dark Matter

Looks interesting.


Dark matter is among the most important open problems in modern physics. Aimed at graduate students and researchers, this book describes the theoretical and experimental aspects of the dark matter problem in particle physics, astrophysics and cosmology. Featuring contributions from 48 leading theorists and experimentalists, it presents many aspects, from astrophysical observations to particle physics candidates, and from the prospects for detection at colliders to direct and indirect searches. The book introduces observational evidence for dark matter along with a detailed discussion of the state-of-the-art of numerical simulations and alternative explanations in terms of modified gravity. It then moves on to the candidates arising from theories beyond the Standard Model of particle physics, and to the prospects for detection at accelerators. It concludes by looking at direct and indirect dark matter searches, and the prospects for detecting the particle nature of dark matter with astrophysical experiments. See: Cambridge University Press

NASA's Fermi Spots 'Superflares' in the Crab Nebula


The famous Crab Nebula supernova remnant has erupted in an enormous flare five times more powerful than any previously seen from the object. The outburst was first detected by NASA's Fermi Gamma-ray Space Telescope on April 12 and lasted six days.
The nebula, which is the wreckage of an exploded star whose light reached Earth in 1054, is one of the most studied objects in the sky. At the heart of an expanding gas cloud lies what's left of the original star's core, a superdense neutron star that spins 30 times a second. With each rotation, the star swings intense beams of radiation toward Earth, creating the pulsed emission characteristic of spinning neutron stars (also known as pulsars).
Apart from these pulses, astrophysicists had regarded the Crab Nebula as a virtually constant source of high-energy radiation. But in January, scientists associated with several orbiting observatories -- including NASA's Fermi, Swift and Rossi X-ray Timing Explorer -- reported long-term brightness changes at X-ray energies.
Scientists think that the flares occur as the intense magnetic field near the pulsar undergoes sudden restructuring. Such changes can accelerate particles like electrons to velocities near the speed of light. As these high-speed electrons interact with the magnetic field, they emit gamma rays in a process known as synchrotron emission.
To account for the observed emission, scientists say that the electrons must have energies 100 times greater than can be achieved in any particle accelerator on Earth. This makes them the highest-energy electrons known to be associated with any cosmic source.
Based on the rise and fall of gamma rays during the April outbursts, scientists estimate that the emitting region must be comparable in size to the solar system. If circular, the region must be smaller than roughly twice Pluto's average distance from the sun. See: NASA's Fermi Spots 'Superflares' in the Crab Nebula

Like a July 4 fireworks display, a young, glittering collection of stars looks like an aerial burst. The cluster is surrounded by clouds of interstellar gas and dust - the raw material for new star formation. The nebula, located 20,000 light-years away in the constellation Carina, contains a central cluster of huge, hot stars called NGC 3603.

This environment is not as peaceful as it looks. Ultraviolet radiation and violent stellar winds have blown out an enormous cavity in the gas and dust enveloping the cluster, providing an unobstructed view of it.

Most of the stars in the cluster were born around the same time but differ in size, mass, temperature, and color. The course of a star's life is determined by its mass, so a cluster of a given age will contain stars in various stages of their lives, giving an opportunity for detailed analyses of stellar life cycles. NGC 3603 also contains some of the most massive stars known. These huge stars live fast and die young, burning through their hydrogen fuel quickly and ultimately ending their lives in supernova explosions.

Star clusters like NGC 3603 provide important clues to understanding the origin of massive star formation in the early, distant universe. Astronomers also use massive clusters to study distant starbursts that occur when galaxies collide, igniting a flurry of star formation. The proximity of NGC 3603 makes it an excellent lab for studying such distant and momentous events.

This Hubble Space Telescope image was captured in August 2009 and December 2009 with the Wide Field Camera 3 in both visible and infrared light, which trace the glow of sulfur, hydrogen, and iron.

The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA’s Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI) conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc. in Washington, D.C. See: Starburst Cluster Shows Celestial Fireworks

Antimatter


See: Antimatter 101

Wednesday, May 16, 2012

Euler Diagram



An Euler diagram illustrating that the set of "animals with four legs" is a subset of "animals", but the set of "minerals" is disjoint (has no members in common) with "animals".
An Euler diagram is a diagrammatic means of representing sets and their relationships. The first use of "Eulerian circles" is commonly attributed to Swiss mathematician Leonhard Euler (1707–1783). They are closely related to Venn diagrams.

Venn and Euler diagrams were incorporated into instruction in set theory as part of the new math movement in the 1960s. Since then, they have also been adopted by other curriculum fields, such as reading.[1]

Overview

Euler diagrams consist of simple closed curves (usually circles) in the plane that depict sets. The sizes or shapes of the curves are not important: the significance of the diagram is in how they overlap. The spatial relationships between the regions bounded by each curve (overlap, containment or neither) correspond to set-theoretic relationships (intersection, subset and disjointness).
Each Euler curve divides the plane into two regions or "zones": the interior, which symbolically represents the elements of the set, and the exterior, which represents all elements that are not members of the set. Curves whose interior zones do not intersect represent disjoint sets. Two curves whose interior zones intersect represent sets that have common elements; the zone inside both curves represents the set of elements common to both sets (the intersection of the sets). A curve that is contained completely within the interior zone of another represents a subset of it.
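These spatial relationships correspond to checks one can run directly on sets. A minimal sketch in Python, using the article's Animal/Mineral/"four legs" example with invented member elements:

    animals = {"dog", "cat", "horse", "snake"}
    four_legged = {"dog", "cat", "horse"}
    minerals = {"quartz", "feldspar"}

    print(four_legged <= animals)        # True: one curve contained in another
    print(animals.isdisjoint(minerals))  # True: curves with no overlap
    print(animals & four_legged)         # the zone lying inside both curves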


Examples of small Venn diagrams (on left) with shaded regions representing empty sets, showing how they can be easily transformed into equivalent Euler diagrams (right).
Venn diagrams are a more restrictive form of Euler diagrams. A Venn diagram must contain all the possible zones of overlap between its curves, representing all combinations of inclusion/exclusion of its constituent sets, but in an Euler diagram some zones might be missing. When the number of sets grows beyond three (or even with three sets, if more than two curves are allowed to pass through a single point), multiple mathematically distinct Venn diagrams appear. A Venn diagram of n sets must show all 2^n zones; an Euler diagram may omit some of them. (An example is given below in the History section; in the top-right illustration the O and I diagrams are merely rotated; Venn stated that this difficulty in part led him to develop his diagrams.)

In a logical setting, one can use model theoretic semantics to interpret Euler diagrams, within a universe of discourse. In the examples above, the Euler diagram depicts that the sets Animal and Mineral are disjoint since the corresponding curves are disjoint, and also that the set Four Legs is a subset of the set of Animals. The Venn diagram, which uses the same categories of Animal, Mineral, and Four Legs, does not encapsulate these relationships. Traditionally the emptiness of a set in Venn diagrams is depicted by shading in the region. Euler diagrams represent emptiness either by shading or by the use of a missing region.
Often a set of well-formedness conditions is imposed; these are topological or geometric constraints on the structure of the diagram. For example, connectedness of zones might be enforced, or concurrency of curves or multiple points might be banned, as might tangential intersection of curves. In the diagram to the right, examples of small Venn diagrams are transformed into Euler diagrams by sequences of transformations; some of the intermediate diagrams have concurrency of curves. However, this sort of transformation of a Venn diagram with shading into an Euler diagram without shading is not always possible. There are examples of Euler diagrams with 9 sets that are not drawable using simple closed curves without the creation of unwanted zones, since they would have to have non-planar dual graphs.

History



Photo of a page from Hamilton's 1860 Lectures, page 180. The symbols A, E, I, and O refer to the four forms of the syllogism. The small text to the left says: "The first employment of circular diagrams in logic improperly ascribed to Euler. To be found in Christian Weise."

On the right is a photo of page 74 from Couturat 1914, wherein he labels the 8 regions of the Venn diagram. The modern name for these "regions" is minterms. These are shown on the left with the variables x, y and z per Venn's drawing. The symbolism is as follows: logical AND ( & ) is represented by arithmetic multiplication, and logical NOT ( ~ ) is represented by " ' " after the variable; e.g. the region x'y'z is read as "NOT x AND NOT y AND z", i.e. ~x & ~y & z.

Both the Veitch and Karnaugh diagrams show all the minterms, but the Veitch is not particularly useful for reduction of formulas. Observe the strong resemblance between the Venn and Karnaugh diagrams; the colors and the variables x, y, and z are per Venn's example.
As shown in the illustration to the right, Sir William Hamilton in his posthumously published Lectures on Metaphysics and Logic (1858–60) asserts that the original use of circles to "sensualize ... the abstractions of Logic" (p. 180) was not Leonhard Paul Euler (1707–1783) but rather Christian Weise (?–1708), in his Nucleus Logicae Weisianae, which appeared posthumously in 1712. He references Euler's Letters to a German Princess on Different Matters of Physics and Philosophy [Partie ii., Lettre XXXV., ed. Cournot. – ED.][2]
In Hamilton's illustration the four forms of the syllogism as symbolized by the drawings A, E, I and O are:[3]
  • A: The Universal Affirmative, Example: "All metals are elements".
  • E: The Universal Negative, Example: "No metals are compound substances".
  • I: The Particular Affirmative, Example: "Some metals are brittle".
  • O: The Particular Negative, Example: "Some metals are not brittle".
In his 1881 Symbolic Logic Chapter V "Diagrammatic Representation", John Venn (1834–1923) comments on the remarkable prevalence of the Euler diagram:
"...of the first sixty logical treatises, published during the last century or so, which were consulted for this purpose:-somewhat at random, as they happened to be most accessible :-it appeared that thirty four appealed to the aid of diagrams, nearly all of these making use of the Eulerian Scheme." (Footnote 1 page 100)

Composite of two pages 115–116 from Venn 1881 showing his example of how to convert a syllogism of three parts into his type of diagram. Venn calls the circles "Eulerian circles" (cf Sandifer 2003, Venn 1881:114 etc) in the "Eulerian scheme" (Venn 1881:100) of "old-fashioned Eulerian diagrams" (Venn 1881:113).
Nevertheless, he noted "the inapplicability of this scheme for the purposes of a really general Logic" (page 100) and in a footnote observed that "it fits in but badly even with the four propositions of the common Logic [the four forms of the syllogism] to which it is normally applied" (page 101). Venn ends his chapter with an observation illustrated by the examples below – that the diagrams' use is based on practice and intuition, not on a strict algorithmic procedure:
“In fact ... those diagrams not only do not fit in with the ordinary scheme of propositions which they are employed to illustrate, but do not seem to have any recognized scheme of propositions to which they could be consistently affiliated.” (pp. 124–125)
Finally, in his Chapter XX HISTORIC NOTES Venn gets to a crucial criticism (italicized in the quote below); observe in Hamilton's illustration that the O (Particular Negative) and I (Particular Affirmative) are simply rotated:
"We now come to Euler's well-known circles which were first described in his Lettres a une Princesse d'Allemagne (Letters 102–105). The weak point about these consists in the fact that they only illustrate in strictness the actual relations of classes to one another, rather than the imperfect knowledge of these relations which we may possess, or wish to convey, by means of the proposition. Accordingly they will not fit in with the propositions of common logic, but demand the constitution of a new group of appropriate elementary propositions.... This defect must have been noticed from the first in the case of the particular affirmative and negative, for the same diagram is commonly employed to stand for them both, which it does indifferently well". (italics added: page 424)
(Sandifer 2003 reports that Euler makes such observations too; Euler reports that his figure 45 (a simple intersection of two circles) has 4 different interpretations). Whatever the case, armed with these observations and criticisms, Venn then demonstrates (pp. 100–125) how he derived what has become known as his Venn diagrams from the "old-fashioned Euler diagrams". In particular he gives an example, shown on the left.
By 1914 Louis Couturat (1868–1914) had labeled the terms as shown on the drawing on the right. Moreover, he had labeled the exterior region (shown as a'b'c') as well. He succinctly explains how to use the diagram – one must strike out the regions that are to vanish:
"VENN'S method is translated in geometrical diagrams which represent all the constituents, so that, in order to obtain the result, we need only strike out (by shading) those which are made to vanish by the data of the problem." (italics added p. 73)
Given Venn's assignments, then, the unshaded areas inside the circles can be summed to yield the following equation for Venn's example:
"No Y is Z and ALL X is Y: therefore No X is Z" has the equation x'yz' + xyz' + x'y'z for the unshaded area inside the circles (but note that this is not entirely correct; see the next paragraph).
In Venn the 0th term, x'y'z', i.e. the background surrounding the circles, does not appear. Nowhere is it discussed or labeled, but Couturat corrects this in his drawing. The correct equation must include this unshaded area shown in boldface:
"No Y is Z and ALL X is Y: therefore No X is Z" has the equation x'yz' + xyz' + x'y'z + x'y'z' .
In modern usage the Venn diagram includes a "box" that surrounds all the circles; this is called the universe of discourse or the domain of discourse.
Couturat now observes that the diagram does not show, in a direct algorithmic (formal, systematic) manner, how to derive reduced Boolean equations, nor how to arrive at the conclusion "No X is Z". Couturat concluded that the process "has ... serious inconveniences as a method for solving logical problems":
"It does not show how the data are exhibited by canceling certain constituents, nor does it show how to combine the remaining constituents so as to obtain the consequences sought. In short, it serves only to exhibit one single step in the argument, namely the equation of the problem; it dispenses neither with the previous steps, i. e., "throwing of the problem into an equation" and the transformation of the premises, nor with the subsequent steps, i. e., the combinations that lead to the various consequences. Hence it is of very little use, inasmuch as the constituents can be represented by algebraic symbols quite as well as by plane regions, and are much easier to deal with in this form."(p. 75)
Thus the matter would rest until 1952, when Maurice Karnaugh (1924– ) would adapt and expand a method proposed by Edward W. Veitch; this work would rely on the truth-table method precisely defined in Emil Post's 1921 PhD thesis "Introduction to a general theory of elementary propositions" and the application of propositional logic to switching logic by (among others) Claude Shannon, George Stibitz, and Alan Turing.[4] For example, in the chapter "Boolean Algebra", Hill and Peterson (1968, 1974) present sections 4.5ff, "Set Theory as an Example of Boolean Algebra", and in it they present the Venn diagram with shading and all. They give examples of Venn diagrams to solve example switching-circuit problems, but end up with this statement:
"For more than three variables, the basic illustrative form of the Venn diagram is inadequate. Extensions are possible, however, the most convenient of which is the Karnaugh map, to be discussed in Chapter 6." (p. 64)
In Chapter 6, section 6.4 "Karnaugh Map Representation of Boolean Functions" they begin with:
"The Karnaugh map1 [1Karnaugh 1953] is one of the most powerful tools in the repertory of the logic designer. ... A Karnaugh map may be regarded either as a pictorial form of a truth table or as an extension of the Venn diagram." (pp. 103–104)
The history of Karnaugh's development of his "chart" or "map" method is obscure. Karnaugh in his 1953 paper referenced Veitch 1952, Veitch referenced Claude E. Shannon 1938 (essentially Shannon's Master's thesis at M.I.T.), and Shannon in turn referenced, among other authors of logic texts, Couturat 1914. In Veitch's method the variables are arranged in a rectangle or square; as described under Karnaugh map, Karnaugh in his method changed the order of the variables to correspond to what has become known as (the vertices of) a hypercube.

Example: Euler- to Venn-diagram and Karnaugh map

This example shows the Euler and Venn diagrams and Karnaugh map deriving and verifying the deduction "No X's are Z's". In the illustration and table the following logical symbols are used:
  • 1 can be read as "true", 0 as "false";
  • ~ for NOT, abbreviated to ' when illustrating the minterms, e.g. x' =defined NOT x;
  • + for Boolean OR (from Boolean algebra: 0+0=0, 0+1 = 1+0 = 1, 1+1=1);
  • & (logical AND) between propositions; in the minterms AND is omitted in a manner similar to arithmetic multiplication, e.g. x'y'z =defined ~x & ~y & z (from Boolean algebra: 0*0=0, 0*1 = 1*0 = 0, 1*1 = 1, where * is shown for clarity);
  • → (logical IMPLICATION): read as IF ... THEN ..., or "IMPLIES"; P → Q =defined NOT P OR Q.

Before it can be presented in a Venn diagram or Karnaugh Map, the Euler diagram's syllogism "No Y is Z, All X is Y" must first be reworded into the more formal language of the propositional calculus: " 'It is not the case that: Y AND Z' AND 'If an X then a Y' ". Once the propositions are reduced to symbols and a propositional formula ( ~(y & z) & (x → y) ), one can construct the formula's truth table; from this table the Venn and/or the Karnaugh map are readily produced. By use of the adjacency of "1"s in the Karnaugh map (indicated by the grey ovals around terms 0 and 1 and around terms 2 and 6) one can "reduce" the example's Boolean equation i.e. (x'y'z' + x'y'z) + (x'yz' + xyz') to just two terms: x'y' + yz'. But the means for deducing the notion that "No X is Z", and just how the reduction relates to this deduction, is not forthcoming from this example.
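The reduction claimed above can be verified by brute force over all eight valuations; a short sketch:

    from itertools import product

    for x, y, z in product((0, 1), repeat=3):
        four = ((not x and not y and not z) or (not x and not y and z) or
                (not x and y and not z) or (x and y and not z))
        two = (not x and not y) or (y and not z)  # x'y' + yz'
        assert bool(four) == bool(two)
    print("x'y'z' + x'y'z + x'yz' + xyz'  ==  x'y' + yz'")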
Given a proposed conclusion such as "No X is a Z", one can test whether or not it is a correct deduction by use of a truth table. The easiest method is to put the starting formula on the left (abbreviate it as "P"), put the (possible) deduction on the right (abbreviate it as "Q"), and connect the two with logical implication, i.e. P → Q, read as IF P THEN Q. If the evaluation of the truth table produces all 1's under the implication-sign (→, the so-called major connective), then P → Q is a tautology. Given this fact, one can "detach" the formula on the right (abbreviated as "Q") in the manner described below the truth table.
Given the example above, the formula for the Euler and Venn diagrams is:
"No Y's are Z's" and "All X's are Y's": ( ~(y & z) & (x → y) ) =defined P
And the proposed deduction is:
"No X's are Z's": ( ~ (x & z) ) =defined Q
So now the formula to be evaluated can be abbreviated to:
( ~(y & z) & (x → y) ) → ( ~ (x & z) ): P → Q
IF ( "No Y's are Z's" and "All X's are Y's" ) THEN ( "No X's are Z's" )
The truth table below demonstrates that the formula ( ~(y & z) & (x → y) ) → ( ~ (x & z) ) is a tautology: the column under its major connective, P → Q, contains only 1's.
Square #   Region (minterm)   x y z   ~(y & z)   x → y   P = ~(y & z) & (x → y)   Q = ~(x & z)   P → Q
0          x'y'z'             0 0 0      1         1                1                    1          1
1          x'y'z              0 0 1      1         1                1                    1          1
2          x'yz'              0 1 0      1         1                1                    1          1
3          x'yz               0 1 1      0         1                0                    1          1
4          xy'z'              1 0 0      1         0                0                    1          1
5          xy'z               1 0 1      1         0                0                    0          1
6          xyz'               1 1 0      1         1                1                    1          1
7          xyz                1 1 1      0         1                0                    0          1
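The same table and the tautology test can be generated mechanically; a minimal sketch in Python:

    from itertools import product

    def P(x, y, z):
        """Premises: ~(y & z) & (x -> y)."""
        return bool(not (y and z) and ((not x) or y))

    def Q(x, y, z):
        """Proposed conclusion: ~(x & z)."""
        return bool(not (x and z))

    rows = list(product((0, 1), repeat=3))
    print(all((not P(*r)) or Q(*r) for r in rows))   # True: P -> Q is a tautology
    print([i for i, r in enumerate(rows) if P(*r)])  # [0, 1, 2, 6]: rows where P holds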
At this point the above implication P → Q, i.e. ( ~(y & z) & (x → y) ) → ( ~(x & z) ), is still a formula, and the deduction – the "detachment" of Q out of P → Q – has not occurred. But given the demonstration that P → Q is a tautology, the stage is now set for the use of the procedure of modus ponens to "detach" Q: "No X's are Z's" and dispense with the terms on the left.[5]
Modus ponens (or "the fundamental rule of inference"[6]) is often written as follows: The two terms on the left, "P → Q" and "P", are called premises (by convention linked by a comma), the symbol ⊢ means "yields" (in the sense of logical deduction), and the term on the right is called the conclusion:
P → Q, P ⊢ Q
For the modus ponens to succeed, both premises P → Q and P must be true. Because, as demonstrated above, the premise P → Q is a tautology, it is true no matter how x, y and z are valued; but P is true only in those circumstances when P evaluates as "true" (e.g. rows 0 OR 1 OR 2 OR 6: x'y'z' + x'y'z + x'yz' + xyz' = x'y' + yz').[7]
P → Q , P ⊢ Q
i.e.: ( ~(y & z) & (x → y) ) → ( ~ (x & z) ) , ( ~(y & z) & (x → y) ) ⊢ ( ~ (x & z) )
i.e.: IF "No Y's are Z's" and "All X's are Y's" THEN "No X's are Z's", "No Y's are Z's" and "All X's are Y's" ⊢ "No X's are Z's"
One is now free to "detach" the conclusion "No X's are Z's", perhaps to use it in a subsequent deduction (or as a topic of conversation).
The use of tautological implication means that other possible deductions exist besides "No X's are Z's"; the criterion for a successful deduction is that the 1's under the sub-major connective on the right include all the 1's under the sub-major connective on the left (the major connective being the implication that results in the tautology). For example, in the truth table the column for Q = ~(x & z) contains all the 1's that appear in the column for P = ~(y & z) & (x → y) (rows 0, 1, 2 and 6), plus two more (rows 3 and 4).

Footnotes

  1. ^ Strategies for Reading Comprehension Venn Diagrams
  2. ^ By the time these lectures of Hamilton were published, Hamilton too had died. His editors (symbolized by ED.), responsible for most of the footnoting, were the logicians Henry Longueville Mansel and John Veitch.
  3. ^ Hamilton 1860:179. The examples are from Jevons 1881:71ff.
  4. ^ See footnote at George Stibitz.
  5. ^ This is a sophisticated concept. Russell and Whitehead (2nd edition 1927) in their Principia Mathematica describe it this way: "The trust in inference is the belief that if the two former assertions [the premises P, P→Q ] are not in error, the final assertion is not in error . . . An inference is the dropping of a true premiss [sic]; it is the dissolution of an implication" (p. 9). Further discussion of this appears in "Primitive Ideas and Propositions" as the first of their "primitive propositions" (axioms): *1.1 Anything implied by a true elementary proposition is true" (p. 94). In a footnote the authors refer the reader back to Russell's 1903 Principles of Mathematics §38.
  6. ^ cf Reichenbach 1947:64
  7. ^ Reichenbach discusses the fact that the implication P → Q need not be a tautology (a so-called "tautological implication"). Even "simple" implication (connective or adjunctive) will work, but only for those rows of the truth table that evaluate as true, cf Reichenbach 1947:64–66.

References

By date of publishing:
  • Sir William Hamilton 1860 Lectures on Metaphysics and Logic edited by Henry Longueville Mansel and John Veitch, William Blackwood and Sons, Edinburgh and London.
  • W. Stanley Jevons 1880 Elementary Lessons in Logic: Deductive and Inductive. With Copious Questions and Examples, and a Vocabulary of Logical Terms, MacMillan and Co., London and New York.
  • John Venn 1881 Symbolic Logic, MacMillan and Co., London.
  • Alfred North Whitehead and Bertrand Russell 1913 1st edition, 1927 2nd edition Principia Mathematica to *56 Cambridge At The University Press (1962 edition), UK, no ISBN.
  • Louis Couturat 1914 The Algebra of Logic: Authorized English Translation by Lydia Gillingham Robinson with a Preface by Philip E. B. Jourdain, The Open Court Publishing Company, Chicago and London.
  • Emil Post 1921 "Introduction to a general theory of elementary propositions" reprinted with commentary by Jean van Heijenoort in Jean van Heijenoort, editor 1967 From Frege to Gödel: A Sourcebook of Mathematical Logic, 1879–1931, Harvard University Press, Cambridge, MA, ISBN 0-674-42449-8 (pbk.)
  • Claude E. Shannon 1938 "A Symbolic Analysis of Relay and Switching Circuits", Transactions American Institute of Electrical Engineers vol 57, pp. 471–495. Derived from Claude Elwood Shannon: Collected Papers edited by N.J.A. Sloane and Aaron D. Wyner, IEEE Press, New York.
  • Hans Reichenbach 1947 Elements of Symbolic Logic republished 1980 by Dover Publications, Inc., NY, ISBN 0-486-24004-5.
  • Edward W. Veitch 1952 "A Chart Method for Simplifying Truth Functions", Transactions of the 1952 ACM Annual Meeting, ACM Annual Conference/Annual Meeting "Pittsburgh", ACM, NY, pp. 127–133.
  • Maurice Karnaugh November 1953 The Map Method for Synthesis of Combinational Logic Circuits, AIEE Committee on Technical Operations for presentation at the AIEE summer General Meeting, Atlantic City, N. J., June 15–19, 1953, pp. 593–599.
  • Frederich J. Hill and Gerald R. Peterson 1968, 1974 Introduction to Switching Theory and Logical Design, John Wiley & Sons NY, ISBN 0-471-39882-9.
  • Ed Sandifer 2003 How Euler Did It, http://www.maa.org/editorial/euler/How%20Euler%20Did%20It%2003%20Venn%20Diagrams.pdf

Tuesday, May 15, 2012

Where is LHC Headed?

The speakers are: Michael Peskin (author of the famous QFT textbook), Nima Arkani-Hamed, Riccardo Rattazzi, Gavin Salam, Matt Strassler and Raman Sundrum (of Randall–Sundrum fame).