Wednesday, May 23, 2012

Hypercomputation

Hypercomputation or super-Turing computation refers to models of computation that go beyond, or are incomparable to, Turing computability. This includes various hypothetical methods for the computation of non-Turing-computable functions, following super-recursive algorithms (see also supertask). The term "super-Turing computation" appeared in a 1995 Science paper by Hava Siegelmann. The term "hypercomputation" was introduced in 1999 by Jack Copeland and Diane Proudfoot.[1]

The terms are not quite synonymous: "super-Turing computation" usually implies that the proposed model is supposed to be physically realizable, while "hypercomputation" does not.

Technical arguments against the physical realizability of hypercomputations have been presented.

History

 

A computational model going beyond Turing machines was introduced by Alan Turing in his 1938 PhD dissertation Systems of Logic Based on Ordinals.[2] This paper investigated mathematical systems in which an oracle was available, which could compute a single arbitrary (non-recursive) function from naturals to naturals. He used this device to prove that even in those more powerful systems, undecidability is still present. Turing's oracle machines are strictly mathematical abstractions, and are not physically realizable.[3]

Hypercomputation and the Church–Turing thesis

 

The Church–Turing thesis states that any function that is algorithmically computable can be computed by a Turing machine. Hypercomputers compute functions that a Turing machine cannot, and which are therefore not computable in the Church–Turing sense.
An example of a problem a Turing machine cannot solve is the halting problem. A Turing machine cannot decide if an arbitrary program halts or runs forever. Some proposed hypercomputers can simulate the program for an infinite number of steps and tell the user whether or not the program halted.
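To see why no ordinary Turing machine can do this, recall Turing's diagonal argument. The fragment below is a minimal sketch of it in Python; the function halts is hypothetical (assumed only to derive a contradiction), and all names are illustrative rather than part of any real library.

    # Hypothetical decider: returns True iff computing f(x) halts.
    # Assumed for the sake of contradiction; no such total program exists.
    def halts(f, x):
        raise NotImplementedError("assumed, not implementable")

    def diagonal(f):
        # Do the opposite of whatever the decider predicts f does
        # when given its own text as input.
        if halts(f, f):
            while True:   # loop forever if f(f) is predicted to halt
                pass
        else:
            return        # halt at once if f(f) is predicted to loop

    # Applying diagonal to itself is contradictory: halts(diagonal, diagonal)
    # can return neither True nor False correctly, so no Turing-computable
    # halts can exist.

A hypercomputer that could genuinely complete infinitely many simulation steps would escape this argument, since it is not required to answer by finite means.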

Hypercomputer proposals

 

  • A Turing machine that can complete infinitely many steps. Simply being able to run for an unbounded number of steps does not suffice. One mathematical model is the Zeno machine (inspired by Zeno's paradox). The Zeno machine performs its first computation step in (say) 1 minute, the second step in ½ minute, the third step in ¼ minute, etc. By summing 1+½+¼+... (a geometric series) we see that the machine performs infinitely many steps in a total of 2 minutes. However, some people claim that, following the reasoning from Zeno's paradox, Zeno machines are not just physically impossible, but logically impossible.[4]
  • Turing's original oracle machines, defined by Turing in 1939.
  • In the mid-1960s, E. Mark Gold and Hilary Putnam independently proposed models of inductive inference (the "limiting recursive functionals"[5] and "trial-and-error predicates",[6] respectively). These models enable some nonrecursive sets of numbers or languages (including all recursively enumerable sets of languages) to be "learned in the limit" (a small sketch of this idea follows this list); whereas, by definition, only recursive sets of numbers or languages could be identified by a Turing machine. While the machine will stabilize to the correct answer on any learnable set in some finite time, it can identify the answer as correct only if the set is recursive; otherwise, the correctness is established only by running the machine forever and noting that it never revises its answer. Putnam identified this new interpretation as the class of "empirical" predicates, stating: "if we always 'posit' that the most recently generated answer is correct, we will make a finite number of mistakes, but we will eventually get the correct answer. (Note, however, that even if we have gotten to the correct answer (the end of the finite sequence) we are never sure that we have the correct answer.)"[6] L. K. Schubert's 1974 paper "Iterated Limiting Recursion and the Program Minimization Problem"[7] studied the effects of iterating the limiting procedure; this allows any arithmetic predicate to be computed. Schubert wrote, "Intuitively, iterated limiting identification might be regarded as higher-order inductive inference performed collectively by an ever-growing community of lower order inductive inference machines."
  • A real computer (a sort of idealized analog computer) can perform hypercomputation[8] if physics admits general real variables (not just computable reals), and these are in some way "harnessable" for computation. This might require quite bizarre laws of physics (for example, a measurable physical constant with an oracular value, such as Chaitin's constant), and would at minimum require the ability to measure a real-valued physical value to arbitrary precision despite thermal noise and quantum effects.
  • A proposed technique known as fair nondeterminism or unbounded nondeterminism may allow the computation of noncomputable functions.[9] There is dispute in the literature over whether this technique is coherent, and whether it actually allows noncomputable functions to be "computed".
  • It seems natural that the possibility of time travel (existence of closed timelike curves (CTCs)) would make hypercomputation possible by itself. However, this is not so, since a CTC does not (by itself) provide the unbounded amount of storage that an infinite computation would require. Nevertheless, there are spacetimes in which the CTC region can be used for relativistic hypercomputation.[10] Access to a CTC may allow the rapid solution of PSPACE-complete problems, a complexity class which, while Turing-decidable, is generally considered computationally intractable.[11][12]
  • According to a 1992 paper,[13] a computer operating in a Malament-Hogarth spacetime or in orbit around a rotating black hole[14] could theoretically perform non-Turing computations.[15][16]
  • In 1994, Hava Siegelmann proved that her new (1991) computational model, the Artificial Recurrent Neural Network (ARNN), could perform hypercomputation (using infinite precision real weights for the synapses). It is based on evolving an artificial neural network through a discrete, infinite succession of states.[17]
  • The infinite time Turing machine is a generalization of the Zeno machine, that can perform infinitely long computations whose steps are enumerated by potentially transfinite ordinal numbers. It models an otherwise-ordinary Turing machine for which non-halting computations are completed by entering a special state reserved for reaching a limit ordinal and to which the results of the preceding infinite computation are available.[18]
  • Jan van Leeuwen and Jiří Wiedermann wrote a 2000 paper[19] suggesting that the Internet should be modeled as a nonuniform computing system equipped with an advice function representing the ability of computers to be upgraded.
  • A symbol sequence is computable in the limit if there is a finite, possibly non-halting program on a universal Turing machine that incrementally outputs every symbol of the sequence. This includes the dyadic expansion of π and of every other computable real, but still excludes all noncomputable reals. Traditional Turing machines cannot edit their previous outputs; generalized Turing machines, as defined by Jürgen Schmidhuber, can. He defines the constructively describable symbol sequences as those that have a finite, non-halting program running on a generalized Turing machine, such that any output symbol eventually converges, that is, it does not change any more after some finite initial time interval (see the sketch following this list). Due to limitations first exhibited by Kurt Gödel (1931), it may be impossible to predict the convergence time itself by a halting program, otherwise the halting problem could be solved. Schmidhuber ([20][21]) uses this approach to define the set of formally describable or constructively computable universes or constructive theories of everything. Generalized Turing machines can solve the halting problem by evaluating a Specker sequence.
  • A quantum mechanical system which somehow uses an infinite superposition of states to compute a non-computable function.[22] This is not possible using the standard qubit-model quantum computer, because it is proven that a regular quantum computer is PSPACE-reducible (a quantum computer running in polynomial time can be simulated by a classical computer running in polynomial space).[23]
  • In 1970, E.S. Santos defined a class of fuzzy logic-based "fuzzy algorithms" and "fuzzy Turing machines".[24] Subsequently, L. Biacino and G. Gerla showed that such a definition would allow the computation of nonrecursive languages; they suggested an alternative set of definitions without this difficulty.[25] Jiří Wiedermann analyzed the capabilities of Santos' original proposal in 2004.[26]
  • Dmytro Taranovsky has proposed a finitistic model of traditionally non-finitistic branches of analysis, built around a Turing machine equipped with a rapidly increasing function as its oracle. By this and more complicated models he was able to give an interpretation of second-order arithmetic.[27]
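To make the limiting/trial-and-error idea concrete (see the items on Gold, Putnam, and Schmidhuber above), here is a minimal Python sketch that "decides" halting in the limit: it always reports a current guess, revises it at most once, and stabilizes on the correct answer, yet no finite observer can certify that the current guess is final. The encoding of a computation as a step predicate is our illustrative assumption, not part of any of the cited models.

    import itertools

    def halts_in_the_limit(halted_by):
        # halted_by(n) is True iff the simulated computation has
        # halted within its first n steps.
        guess = False                      # initial posit: "never halts"
        for n in itertools.count():
            if halted_by(n):
                guess = True               # revise the guess (at most once)
            yield guess

    # Example: a computation that halts at step 1000.
    stream = halts_in_the_limit(lambda n: n >= 1000)
    guesses = [next(stream) for _ in range(2000)]
    print(guesses[500], guesses[1500])     # False True

In Putnam's terms, positing the most recent guess produces only finitely many mistakes (here, exactly the first thousand), but at no finite time is the answer known to be final.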

 

Analysis of capabilities


Many hypercomputation proposals amount to alternative ways to read an oracle or advice function embedded into an otherwise classical machine. Others allow access to some higher level of the arithmetic hierarchy. For example, supertasking Turing machines, under the usual assumptions, would be able to compute any predicate in the truth-table degree containing Σ⁰₁ or Π⁰₁. Limiting recursion, by contrast, can compute any predicate or function in the corresponding Turing degree, which is known to be Δ⁰₂. Gold further showed that limiting partial recursion would allow the computation of precisely the Σ⁰₂ predicates.
The known capabilities of these models (class of computable predicates, with notes and references) can be summarized as follows:
  • Supertasking: tt(Σ⁰₁, Π⁰₁); dependent on outside observer [28]
  • Limiting/trial-and-error: Δ⁰₂ [5]
  • Iterated limiting (k times): Δ⁰ₖ₊₁ [7]
  • Blum–Shub–Smale machine: incomparable with traditional computable real functions [29]
  • Malament–Hogarth spacetime: HYP; dependent on spacetime structure [30]
  • Analog recurrent neural network: Δ⁰₁[f], where f is an advice function giving connection weights; size is bounded by runtime [31][32]
  • Infinite time Turing machine: ≥ T(Σ¹₁) [33]
  • Classical fuzzy Turing machine: Σ⁰₁ ∪ Π⁰₁, for any computable t-norm [34]
  • Increasing function oracle: Δ¹₁ for the one-sequence model; Π¹₁ sets are r.e. [27]

 

Taxonomy of "super-recursive" computation methodologies


Burgin has collected a list of what he calls "super-recursive algorithms" (from Burgin 2005: 132):
  • limiting recursive functions and limiting partial recursive functions (E. M. Gold[5])
  • trial and error predicates (Hilary Putnam[6])
  • inductive inference machines (Carl Herbert Smith)
  • inductive Turing machines (one of Burgin's own models)
  • limit Turing machines (another of Burgin's models)
  • trial-and-error machines (Ja. Hintikka and A. Mutanen [35])
  • general Turing machines (J. Schmidhuber[21])
  • Internet machines (van Leeuwen, J. and Wiedermann, J.[19])
  • evolutionary computers, which use DNA to produce the value of a function (Darko Roglic[36])
  • fuzzy computation (Jiří Wiedermann[26])
  • evolutionary Turing machines (Eugene Eberbach[37])
In the same book, he also presents a list of "algorithmic schemes".

 

Criticism


Martin Davis, in his writings on hypercomputation,[39][40] refers to this subject as "a myth" and offers counter-arguments to the physical realizability of hypercomputation. As for its theory, he argues against the claim that this is a new field founded in the 1990s. This point of view relies on the history of computability theory (degrees of unsolvability, computability over functions, real numbers and ordinals), as also mentioned above.
Andrew Hodges wrote a critical commentary[41] on Copeland and Proudfoot's article[1].

 


References

  1. ^ a b Copeland and Proudfoot, Alan Turing's forgotten ideas in computer science. Scientific American, April 1999
  2. ^ Alan Turing, 1939, "Systems of Logic Based on Ordinals", Proceedings of the London Mathematical Society, ser. 2, vol. 45, issue 1, pp. 161–228.[1]
  3. ^ "Let us suppose that we are supplied with some unspecified means of solving number-theoretic problems; a kind of oracle as it were. We shall not go any further into the nature of this oracle apart from saying that it cannot be a machine" (Undecidable p. 167, a reprint of Turing's paper Systems of Logic Based On Ordinals)
  4. ^ These models have been independently developed by many different authors, including Hermann Weyl (1927). Philosophie der Mathematik und Naturwissenschaft.; the model is discussed in Shagrir, O. (June 2004). "Super-tasks, accelerating Turing machines and uncomputability". Theor. Comput. Sci. 317, 1-3 317: 105–114. doi:10.1016/j.tcs.2003.12.007. and in Petrus H. Potgieter (July 2006). "Zeno machines and hypercomputation". Theoretical Computer Science 358 (1): 23–33. doi:10.1016/j.tcs.2005.11.040.
  5. ^ a b c E. M. Gold (1965). "Limiting Recursion". Journal of Symbolic Logic 30 (1): 28–48. doi:10.2307/2270580. JSTOR 2270580., E. Mark Gold (1967). "Language identification in the limit". Information and Control 10 (5): 447–474. doi:10.1016/S0019-9958(67)91165-5.
  6. ^ a b c Hilary Putnam (1965). "Trial and Error Predicates and the Solution to a Problem of Mostowski". Journal of Symbolic Logic 30 (1): 49–57. doi:10.2307/2270581. JSTOR 2270581.
  7. ^ a b L. K. Schubert (July 1974). "Iterated Limiting Recursion and the Program Minimization Problem". Journal of the ACM 21 (3): 436–445. doi:10.1145/321832.321841.
  8. ^ Arnold Schönhage, "On the power of random access machines", in Proc. Intl. Colloquium on Automata, Languages, and Programming (ICALP), pages 520-529, 1979. Source of citation: Scott Aaronson, "NP-complete Problems and Physical Reality"[2] p. 12
  9. ^ Edith Spaan, Leen Torenvliet and Peter van Emde Boas (1989). "Nondeterminism, Fairness and a Fundamental Analogy". EATCS bulletin 37: 186–193.
  10. ^ Hajnal Andréka, István Németi and Gergely Székely, Closed Timelike Curves in Relativistic Computation, 2011.[3]
  11. ^ Todd A. Brun, Computers with closed timelike curves can solve hard problems, Found.Phys.Lett. 16 (2003) 245-253.[4]
  12. ^ S. Aaronson and J. Watrous. Closed Timelike Curves Make Quantum and Classical Computing Equivalent [5]
  13. ^ Hogarth, M., 1992, ‘Does General Relativity Allow an Observer to View an Eternity in a Finite Time?’, Foundations of Physics Letters, 5, 173–181.
  14. ^ István Németi; Hajnal Andréka (2006). "Can General Relativistic Computers Break the Turing Barrier?". Logical Approaches to Computational Barriers, Second Conference on Computability in Europe, CiE 2006, Swansea, UK, June 30-July 5, 2006. Proceedings. Lecture Notes in Computer Science. 3988. Springer. doi:10.1007/11780342.
  15. ^ Etesi, G., and Nemeti, I., 2002 'Non-Turing computations via Malament-Hogarth space-times', Int.J.Theor.Phys. 41 (2002) 341–370, Non-Turing Computations via Malament-Hogarth Space-Times:.
  16. ^ Earman, J. and Norton, J., 1993, ‘Forever is a Day: Supertasks in Pitowsky and Malament-Hogarth Spacetimes’, Philosophy of Science, 5, 22–42.
  17. ^ Verifying Properties of Neural Networks p.6
  18. ^ Joel David Hamkins and Andy Lewis, Infinite time Turing machines, Journal of Symbolic Logic, 65(2):567-604, 2000.[6]
  19. ^ a b Jan van Leeuwen; Jiří Wiedermann (September 2000). "On Algorithms and Interaction". MFCS '00: Proceedings of the 25th International Symposium on Mathematical Foundations of Computer Science. Springer-Verlag.
  20. ^ Jürgen Schmidhuber (2000). "Algorithmic Theories of Everything". arXiv:quant-ph/0011122. Sections appeared in: "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit", International Journal of Foundations of Computer Science 13 (4): 587–612 (2002); and in: "The Speed Prior: A New Simplicity Measure Yielding Near-Optimal Computable Predictions", in J. Kivinen and R. H. Sloan, editors, Proceedings of the 15th Annual Conference on Computational Learning Theory (COLT 2002), Sydney, Australia, Lecture Notes in Artificial Intelligence, pp. 216–228. Springer, 2002.
  21. ^ a b J. Schmidhuber (2002). "Hierarchies of generalized Kolmogorov complexities and nonenumerable universal measures computable in the limit". International Journal of Foundations of Computer Science 13 (4): 587–612. doi:10.1142/S0129054102001291.
  22. ^ There have been some claims to this effect; see Tien Kieu (2003). "Quantum Algorithm for the Hilbert's Tenth Problem". Int. J. Theor. Phys. 42 (7): 1461–1478. arXiv:quant-ph/0110136. doi:10.1023/A:1025780028846.. & the ensuing literature. Errors have been pointed out in Kieu's approach by Warren D. Smith in Three counterexamples refuting Kieu’s plan for “quantum adiabatic hypercomputation”; and some uncomputable quantum mechanical tasks
  23. ^ Bernstein and Vazirani, Quantum complexity theory, SIAM Journal on Computing, 26(5):1411-1473, 1997. [7]
  24. ^ Santos, Eugene S. (1970). "Fuzzy Algorithms". Information and Control 17 (4): 326–339. doi:10.1016/S0019-9958(70)80032-8.
  25. ^ Biacino, L.; Gerla, G. (2002). "Fuzzy logic, continuity and effectiveness". Archive for Mathematical Logic 41 (7): 643–667. doi:10.1007/s001530100128. ISSN 0933-5846.
  26. ^ a b Wiedermann, Jiří (2004). "Characterizing the super-Turing computing power and efficiency of classical fuzzy Turing machines". Theor. Comput. Sci. 317 (1–3): 61–69. doi:10.1016/j.tcs.2003.12.004.
  27. ^ a b Dmytro Taranovsky (July 17, 2005). "Finitism and Hypercomputation". Retrieved Apr 26, 2011.
  28. ^ Petrus H. Potgieter (July 2006). "Zeno machines and hypercomputation". Theoretical Computer Science 358 (1): 23–33. doi:10.1016/j.tcs.2005.11.040.
  29. ^ Lenore Blum, Felipe Cucker, Michael Shub, and Stephen Smale. Complexity and Real Computation. ISBN 0-387-98281-7.
  30. ^ P. D. Welch (10-Sept-2006). The extent of computation in Malament-Hogarth spacetimes. arXiv:gr-qc/0609035.
  31. ^ Hava Siegelmann (April 1995). "Computation Beyond the Turing Limit". Science 268 (5210): 545–548. doi:10.1126/science.268.5210.545. PMID 17756722.
  32. ^ Hava Siegelmann; Eduardo Sontag (1994). "Analog Computation via Neural Networks". Theoretical Computer Science 131 (2): 331–360. doi:10.1016/0304-3975(94)90178-3.
  33. ^ Joel David Hamkins; Andy Lewis (2000). "Infinite Time Turing machines". Journal of Symbolic Logic 65 (2): 567–604.
  34. ^ Jiří Wiedermann (June 4, 2004). "Characterizing the super-Turing computing power and efficiency of classical fuzzy Turing machines". Theoretical Computer Science (Elsevier Science Publishers Ltd. Essex, UK) 317 (1–3).
  35. ^ Hintikka, Ja; Mutanen, A. (1998). "An Alternative Concept of Computability". Language, Truth, and Logic in Mathematics. Dordrecht. pp. 174–188.
  36. ^ Darko Roglic (24–Jul–2007). "The universal evolutionary computer based on super-recursive algorithms of evolvability". arXiv:0708.2686 [cs.NE].
  37. ^ Eugene Eberbach (2002). "On expressiveness of evolutionary computation: is EC algorithmic?". Computational Intelligence, WCCI 1: 564–569. doi:10.1109/CEC.2002.1006988.
  38. ^ Borodyanskii, Yu M; Burgin, M. S. (1994). "Operations and compositions in transrecursive operators". Cybernetics and Systems Analysis 30 (4): 473–478. doi:10.1007/BF02366556.
  39. ^ Davis, Martin, Why there is no such discipline as hypercomputation, Applied Mathematics and Computation, Volume 178, Issue 1, 1 July 2006, Pages 4–7, Special Issue on Hypercomputation
  40. ^ Davis, Martin (2004). "The Myth of Hypercomputation". Alan Turing: Life and Legacy of a Great Thinker. Springer.
  41. ^ Andrew Hodges (retrieved 23 September 2011). "The Professors and the Brainstorms". The Alan Turing Home Page.

 


Monday, May 21, 2012

Digital Physics

In physics and cosmology, digital physics is a collection of theoretical perspectives based on the premise that the universe is, at heart, describable by information and is therefore computable. On this view, the universe can be conceived of either as the output of a computer program or as a vast digital computation device (or at least as mathematically isomorphic to such a device).

Digital physics is grounded in one or more of the following hypotheses, listed in order of increasing strength. The universe, or reality:
  • is essentially informational (although not every informational ontology needs to be digital)
  • is essentially computable (the pancomputationalist position)
  • can be described digitally
  • is in essence digital
  • is itself a computer (pancomputationalism)
  • is the output of a simulated reality exercise

 


History

 

Every computer must be compatible with the principles of information theory, statistical thermodynamics, and quantum mechanics. A fundamental link among these fields was proposed by Edwin Jaynes in two seminal 1957 papers.[1] Moreover, Jaynes elaborated an interpretation of probability theory as generalized Aristotelian logic, a view very convenient for linking fundamental physics with digital computers, because these are designed to implement the operations of classical logic and, equivalently, of Boolean algebra.[2]
The hypothesis that the universe is a digital computer was pioneered by Konrad Zuse in his book Rechnender Raum (translated into English as Calculating Space). The term digital physics was first employed by Edward Fredkin, who later came to prefer the term digital philosophy.[3] Others who have modeled the universe as a giant computer include Stephen Wolfram,[4] Juergen Schmidhuber,[5] and Nobel laureate Gerard 't Hooft.[6] These authors hold that the apparently probabilistic nature of quantum physics is not necessarily incompatible with the notion of computability. Quantum versions of digital physics have recently been proposed by Seth Lloyd,[7] David Deutsch, and Paola Zizzi.[8]

Related ideas include Carl Friedrich von Weizsäcker's binary theory of ur-alternatives, pancomputationalism, computational universe theory, John Archibald Wheeler's "It from bit", and Max Tegmark's ultimate ensemble.

Digital physics

 

Overview

 

Digital physics suggests that there exists, at least in principle, a program for a universal computer which computes the evolution of the universe. The computer could be, for example, a huge cellular automaton (Zuse 1967[9]), or a universal Turing machine, as suggested by Schmidhuber (1997), who pointed out that there exists a very short program that can compute all possible computable universes in an asymptotically optimal way.
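The key trick behind such a short universal program is dovetailing: interleave the execution of all programs so that each one eventually receives unboundedly many steps. The scheduling skeleton below is a minimal Python sketch (the program enumeration and run_step are assumed placeholders; Schmidhuber's actual construction additionally allocates each program a fixed fraction of runtime to achieve asymptotic optimality):

    def dovetail(run_step, phases):
        # Phase k gives one step to each of programs 0..k-1, so program i
        # accumulates (phases - i) steps in total: unbounded as phases grows.
        for k in range(1, phases + 1):
            for i in range(k):
                run_step(i)   # resume the i-th program for one step

    # run_step(i) would resume the i-th program, under some fixed
    # enumeration of all programs, for a single step.

Run without bound, every computable universe in the enumeration gets computed to any desired depth.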

Some try to identify single physical particles with simple bits. For example, if one particle, such as an electron, is switching from one quantum state to another, it may be the same as if a bit is changed from one value (0, say) to the other (1). A single bit suffices to describe a single quantum switch of a given particle. As the universe appears to be composed of elementary particles whose behavior can be completely described by the quantum switches they undergo, that implies that the universe as a whole can be described by bits. Every state is information, and every change of state is a change in information (requiring the manipulation of one or more bits). Setting aside dark matter and dark energy, which are poorly understood at present, the known universe consists of about 10⁸⁰ protons and the same number of electrons. Hence, the universe could be simulated by a computer capable of storing and manipulating about 10⁹⁰ bits. If such a simulation is indeed the case, then hypercomputation would be impossible.

Loop quantum gravity could lend support to digital physics, in that it assumes space-time is quantized. Paola Zizzi has formulated a realization of this concept in what has come to be called "computational loop quantum gravity", or CLQG.[10][11] Other theories that combine aspects of digital physics with loop quantum gravity are those of Marzuoli and Rasetti[12][13] and Girelli and Livine.[14]

Weizsäcker's ur-alternatives

 

Physicist Carl Friedrich von Weizsäcker's theory of ur-alternatives (archetypal objects), first publicized in his book The Unity of Nature (1980),[15] further developed through the 1990s,[16][17] is a kind of digital physics as it axiomatically constructs quantum physics from the distinction between empirically observable, binary alternatives. Weizsäcker used his theory to derive the 3-dimensionality of space and to estimate the entropy of a proton falling into a black hole.

Pancomputationalism or the computational universe theory

 

Pancomputationalism (also known as pan-computationalism or naturalist computationalism) is the view that the universe is a huge computational machine, or rather a network of computational processes which, following fundamental physical laws, computes (dynamically develops) its own next state from the current one.[18]
A computational universe is proposed by Jürgen Schmidhuber in a paper based on Konrad Zuse's assumption (1967) that the history of the universe is computable. He pointed out that the simplest explanation of the universe would be a very simple Turing machine programmed to systematically execute all possible programs computing all possible histories for all types of computable physical laws. He also pointed out that there is an optimally efficient way of computing all computable universes based on Leonid Levin's universal search algorithm (1973). In 2000 he expanded this work by combining Ray Solomonoff's theory of inductive inference with the assumption that quickly computable universes are more likely than others. This work on digital physics also led to limit-computable generalizations of algorithmic information or Kolmogorov complexity and the concept of Super Omegas, which are limit-computable numbers that are even more random (in a certain sense) than Gregory Chaitin's number of wisdom Omega.

Wheeler's "it from bit"

 

Following Jaynes and Weizsäcker, the physicist John Archibald Wheeler wrote the following:

[...] it is not unreasonable to imagine that information sits at the core of physics, just as it sits at the core of a computer. (John Archibald Wheeler 1998: 340)

It from bit. Otherwise put, every 'it'—every particle, every field of force, even the space-time continuum itself—derives its function, its meaning, its very existence entirely—even if in some contexts indirectly—from the apparatus-elicited answers to yes-or-no questions, binary choices, bits. 'It from bit' symbolizes the idea that every item of the physical world has at bottom—a very deep bottom, in most instances—an immaterial source and explanation; that which we call reality arises in the last analysis from the posing of yes–no questions and the registering of equipment-evoked responses; in short, that all things physical are information-theoretic in origin and that this is a participatory universe. (John Archibald Wheeler 1990: 5)

David Chalmers of the Australian National University summarised Wheeler's views as follows:

Wheeler (1990) has suggested that information is fundamental to the physics of the universe. According to this 'it from bit' doctrine, the laws of physics can be cast in terms of information, postulating different states that give rise to different effects without actually saying what those states are. It is only their position in an information space that counts. If so, then information is a natural candidate to also play a role in a fundamental theory of consciousness. We are led to a conception of the world on which information is truly fundamental, and on which it has two basic aspects, corresponding to the physical and the phenomenal features of the world.[19]

Chris Langan also builds upon Wheeler's views in his epistemological metatheory:

The Future of Reality Theory According to John Wheeler: In 1979, the celebrated physicist John Wheeler, having coined the phrase “black hole”, put it to good philosophical use in the title of an exploratory paper, Beyond the Black Hole, in which he describes the universe as a self-excited circuit. The paper includes an illustration in which one side of an uppercase U, ostensibly standing for Universe, is endowed with a large and rather intelligent-looking eye intently regarding the other side, which it ostensibly acquires through observation as sensory information. By dint of placement, the eye stands for the sensory or cognitive aspect of reality, perhaps even a human spectator within the universe, while the eye’s perceptual target represents the informational aspect of reality. By virtue of these complementary aspects, it seems that the universe can in some sense, but not necessarily that of common usage, be described as “conscious” and “introspective”…perhaps even “infocognitive”.[20]

The first formal presentation of the idea that information might be the fundamental quantity at the core of physics seems to be due to Frederick W. Kantor (a physicist from Columbia University). Kantor's book Information Mechanics (Wiley-Interscience, 1977) developed this idea in detail, but without mathematical rigor.

The toughest nut to crack in Wheeler's research program of a digital dissolution of physical being in a unified physics, Wheeler himself says, is time. In a 1986 eulogy to the mathematician Hermann Weyl, he proclaimed: "Time, among all concepts in the world of physics, puts up the greatest resistance to being dethroned from ideal continuum to the world of the discrete, of information, of bits. ... Of all obstacles to a thoroughly penetrating account of existence, none looms up more dismayingly than 'time.' Explain time? Not without explaining existence. Explain existence? Not without explaining time. To uncover the deep and hidden connection between time and existence ... is a task for the future."[21] The Australian phenomenologist Michael Eldred comments:

The antinomy of the continuum, time, in connection with the question of being ... is said by Wheeler to be a cause for dismay which challenges future quantum physics, fired as it is by a will to power over moving reality, to "achieve four victories" (ibid.)... And so we return to the challenge to "[u]nderstand the quantum as based on an utterly simple and—when we see it—completely obvious idea" (ibid.) from which the continuum of time could be derived. Only thus could the will to mathematically calculable power over the dynamics, i.e. the movement in time, of beings as a whole be satisfied.[22][23]

Digital vs. informational physics

 

Not every informational approach to physics (or ontology) is necessarily digital. According to Luciano Floridi,[24] "informational structural realism" is a variant of structural realism that supports an ontological commitment to a world consisting of the totality of informational objects dynamically interacting with each other. Such informational objects are to be understood as constraining affordances.

Digital ontology and pancomputationalism are also independent positions. In particular, John Wheeler advocated the former but was silent about the latter; see the quote in the preceding section.
On the other hand, pancomputationalists like Lloyd (2006), who models the universe as a quantum computer, can still maintain an analogue or hybrid ontology; and informational ontologists like Sayre and Floridi embrace neither a digital ontology nor a pancomputationalist position.[25]

Computational foundations

 

Turing machines

 

Theoretical computer science is founded on the Turing machine, an imaginary computing machine first described by Alan Turing in 1936. While the machine is mechanically simple, the Church–Turing thesis implies that it can solve any "reasonable" problem. (In theoretical computer science, a problem is considered "solvable" if it can be solved in principle, namely in finite time, which is not necessarily a finite time that is of any value to humans.) A Turing machine therefore sets the practical "upper bound" on computational power, apart from the possibilities afforded by hypothetical hypercomputers.
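A Turing machine is simple enough to simulate in a few lines. The following is an illustrative Python sketch, not any canonical formulation; the transition-table format and the example machine are our assumptions.

    def run_tm(delta, tape, state="start", pos=0, max_steps=10_000):
        # delta maps (state, symbol) -> (new_state, written_symbol, move),
        # with move in {-1, +1}; "_" is the blank symbol.
        cells = dict(enumerate(tape))
        for _ in range(max_steps):
            if state == "halt":
                return "".join(cells[i] for i in sorted(cells))
            state, cells[pos], move = delta[(state, cells.get(pos, "_"))]
            pos += move
        raise RuntimeError("no halt within the step bound")

    # Example: flip every bit, halting at the first blank cell.
    flip = {
        ("start", "0"): ("start", "1", +1),
        ("start", "1"): ("start", "0", +1),
        ("start", "_"): ("halt", "_", +1),
    }
    print(run_tm(flip, "0110"))   # -> "1001_"

The max_steps bound exists only so the sketch always returns; a real Turing machine is under no such obligation, which is precisely where the halting problem comes from.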

Wolfram's principle of computational equivalence powerfully motivates the digital approach. This principle, if correct, means that everything can be computed by one essentially simple machine, the realization of a cellular automaton. This is one way of fulfilling a traditional goal of physics: finding simple laws and mechanisms for all of nature.

Digital physics is falsifiable in that a less powerful class of computers cannot simulate a more powerful class. Therefore, if our universe is a gigantic simulation, that simulation is being run on a computer at least as powerful as a Turing machine. If humans succeed in building a hypercomputer, then a Turing machine cannot have the power required to simulate the universe.

The Church–Turing (Deutsch) thesis

 

The classic Church–Turing thesis claims that any computer as powerful as a Turing machine can, in principle, calculate anything that a human can calculate, given enough time. A stronger version, not attributable to Church or Turing,[26] claims that a universal Turing machine can compute anything any other Turing machine can compute; that is, it is a generalizable Turing machine. But the limits of practical computation are set by physics, not by theoretical computer science:

"Turing did not show that his machines can solve any problem that can be solved 'by instructions, explicitly stated rules, or procedures', nor did he prove that the universal Turing machine 'can compute any function that any computer, with any architecture, can compute'. He proved that his universal machine can compute any function that any Turing machine can compute; and he put forward, and advanced philosophical arguments in support of, the thesis here called Turing's thesis. But a thesis concerning the extent of effective methods—which is to say, concerning the extent of procedures of a certain sort that a human being unaided by machinery is capable of carrying out—carries no implication concerning the extent of the procedures that machines are capable of carrying out, even machines acting in accordance with 'explicitly stated rules.' For among a machine's repertoire of atomic operations there may be those that no human being unaided by machinery can perform." [27]

On the other hand, if two further conjectures are made, along the lines that:

  • hypercomputation always involves actual infinities;
  • there are no actual infinities in physics,

the resulting compound principle does bring practical computation within Turing's limits.
As David Deutsch puts it:

"I can now state the physical version of the Church-Turing principle: 'Every finitely realizable physical system can be perfectly simulated by a universal model computing machine operating by finite means.' This formulation is both better defined and more physical than Turing's own way of expressing it."[28] (Emphasis added)

This compound conjecture is sometimes called the "strong Church-Turing thesis" or the Church–Turing–Deutsch principle.

Criticism

 

The critics of digital physics—including physicists—who work in quantum mechanics—object to it on several grounds.

Physical symmetries are continuous

 

One objection is that extant models of digital physics are incompatible with the continuous character of several physical symmetries, e.g., rotational symmetry, translational symmetry, Lorentz symmetry, and electroweak symmetry, all central to current physical theory.

Proponents of digital physics claim that such continuous symmetries are only convenient (and very good) approximations of a discrete reality. For example, the reasoning leading to systems of natural units and the conclusion that the Planck length is a minimum meaningful unit of distance suggests that at some level space itself is quantized.[29]

Locality

 

Some argue that extant models of digital physics violate various postulates of quantum physics. For example, if these models are not grounded in Hilbert spaces and probabilities, they belong to the class of theories with local hidden variables that some deem ruled out experimentally using Bell's theorem. This criticism has two possible answers. First, any notion of locality in the digital model does not necessarily have to correspond to locality formulated in the usual way in the emergent spacetime. A concrete example of this case was recently given by Lee Smolin.[30] Another possibility is a well-known loophole in Bell's theorem known as superdeterminism (sometimes referred to as predeterminism).[31] In a completely deterministic model, the experimenter's decision to measure certain components of the spins is predetermined. Thus, the assumption that the experimenter could have decided to measure different components of the spins than he actually did is, strictly speaking, not true.

Physical theory requires the continuum

 

It has been argued that digital physics, grounded in the theory of finite state machines and hence discrete mathematics, cannot do justice to a physical theory whose mathematics requires the real numbers, which is the case for all physical theories having any credibility.

But computers can manipulate and solve formulas describing real numbers using symbolic computation, thus avoiding the need to approximate real numbers by using an infinite number of digits.
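For instance, a computer algebra system manipulates √2 as an exact object rather than a truncated digit string. A small sketch using the sympy library (the choice of library is ours):

    import sympy

    x = sympy.sqrt(2)                # an exact real, no digits stored
    print(sympy.simplify(x**2 - 2))  # -> 0, exactly, not approximately
    print(sympy.expand((x + 1)**2))  # -> 2*sqrt(2) + 3, still exact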

Before symbolic computation, a number (in particular a real number, one with an infinite number of digits) was said to be computable if a Turing machine could be made to spit out its digits endlessly; in other words, there is no "last digit". But this sits uncomfortably with any proposal that the universe is the output of a virtual-reality exercise carried out in real time (or any plausible kind of time). Known physical laws (including quantum mechanics and its continuous spectra) are very much infused with real numbers and the mathematics of the continuum.
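In that older sense, a real is computable when some finite program emits its digits one after another without ever finishing. A minimal sketch for rational numbers, by long division (the function name is illustrative):

    from itertools import islice

    def decimal_digits(p, q):
        # Yield the decimal digits of p/q (0 <= p < q) forever.
        while True:
            p *= 10
            yield p // q
            p %= q

    print(list(islice(decimal_digits(1, 7), 12)))
    # -> [1, 4, 2, 8, 5, 7, 1, 4, 2, 8, 5, 7]   i.e. 1/7 = 0.142857...

Only countably many such programs exist, while the reals are uncountable; this is the cardinality mismatch the following quotation turns on.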

"So ordinary computational descriptions do not have a cardinality of states and state space trajectories that is sufficient for them to map onto ordinary mathematical descriptions of natural systems. Thus, from the point of view of strict mathematical description, the thesis that everything is a computing system in this second sense cannot be supported".[32]

For his part, David Deutsch generally takes a "multiverse" view of the question of continuous vs. discrete. In short, he thinks that “within each universe all observable quantities are discrete, but the multiverse as a whole is a continuum. When the equations of quantum theory describe a continuous but not-directly-observable transition between two values of a discrete quantity, what they are telling us is that the transition does not take place entirely within one universe. So perhaps the price of continuous motion is not an infinity of consecutive actions, but an infinity of concurrent actions taking place across the multiverse.” (From "The Discrete and the Continuous", January 2001; an abridged version appeared in The Times Higher Education Supplement.)


References

 

  1. ^ Jaynes, E. T., 1957, "Information Theory and Statistical Mechanics," Phys. Rev 106: 620.
    Jaynes, E. T., 1957, "Information Theory and Statistical Mechanics II," Phys. Rev. 108: 171.
  2. ^ Jaynes, E. T., 1990, "Probability Theory as Logic," in Fougere, P.F., ed., Maximum-Entropy and Bayesian Methods. Boston: Kluwer.
  3. ^ See Fredkin's Digital Philosophy web site.
  4. ^ A New Kind of Science website. Reviews of ANKS.
  5. ^ Schmidhuber, J., "Computer Universes and an Algorithmic Theory of Everything."
  6. ^ G. 't Hooft, 1999, "Quantum Gravity as a Dissipative Deterministic System," Class. Quant. Grav. 16: 3263-79.
  7. ^ Lloyd, S., "The Computational Universe: Quantum gravity from quantum computation."
  8. ^ Zizzi, Paola, "Spacetime at the Planck Scale: The Quantum Computer View."
  9. ^ Zuse, Konrad, 1967, Elektronische Datenverarbeitung vol 8., pages 336-344
  10. ^ Zizzi, Paola, "A Minimal Model for Quantum Gravity."
  11. ^ Zizzi, Paola, "Computability at the Planck Scale."
  12. ^ Marzuoli, A. and Rasetti, M., 2002, "Spin Network Quantum Simulator," Phys. Lett. A306, 79-87.
  13. ^ Marzuoli, A. and Rasetti, M., 2005, "Computing Spin Networks," Annals of Physics 318: 345-407.
  14. ^ Girelli, F.; Livine, E. R., 2005, "[1]" Class. Quant. Grav. 22: 3295-3314.
  15. ^ von Weizsäcker, Carl Friedrich (1980). The Unity of Nature. New York: Farrar, Straus, and Giroux.
  16. ^ von Weizsäcker, Carl Friedrich (1985) (in German). Aufbau der Physik [The Structure of Physics]. Munich. ISBN 3-446-14142-1.
  17. ^ von Weizsäcker, Carl Friedrich (1992) (in German). Zeit und Wissen.
  18. ^ Papers on pancomputationalism
  19. ^ Chalmers, David. J., 1995, "Facing up to the Hard Problem of Consciousness," Journal of Consciousness Studies 2(3): 200-19. This paper cites John A. Wheeler, 1990, "Information, physics, quantum: The search for links" in W. Zurek (ed.) Complexity, Entropy, and the Physics of Information. Redwood City, CA: Addison-Wesley. Also see Chalmers, D., 1996. The Conscious Mind. Oxford Univ. Press.
  20. ^ Langan, Christopher M., 2002, "The Cognitive-Theoretic Model of the Universe: A New Kind of Reality Theory, pg. 7" Progress in Complexity, Information and Design
  21. ^ Wheeler, John Archibald, 1986, "Hermann Weyl and the Unity of Knowledge"
  22. ^ Eldred, Michael, 2009, 'Postscript 2: On quantum physics' assault on time'
  23. ^ Eldred, Michael, 2009, The Digital Cast of Being: Metaphysics, Mathematics, Cartesianism, Cybernetics, Capitalism, Communication ontos, Frankfurt 2009 137 pp. ISBN 978-3-86838-045-3
  24. ^ Floridi, L., 2004, "Informational Realism," in Weckert, J., and Al-Saggaf, Y, eds., Computing and Philosophy Conference, vol. 37."
  25. ^ See Floridi talk on Informational Nature of Reality, abstract at the E-CAP conference 2006.
  26. ^ B. Jack Copeland, Computation in Luciano Floridi (ed.), The Blackwell guide to the philosophy of computing and information, Wiley-Blackwell, 2004, ISBN 0-631-22919-1, pp. 10-15
  27. ^ Stanford Encyclopedia of Philosophy: "The Church-Turing thesis" -- by B. Jack Copeland.
  28. ^ David Deutsch, "Quantum Theory, the Church-Turing Principle and the Universal Quantum Computer."
  29. ^ John A. Wheeler, 1990, "Information, physics, quantum: The search for links" in W. Zurek (ed.) Complexity, Entropy, and the Physics of Information. Redwood City, CA: Addison-Wesley.
  30. ^ L. Smolin, "Matrix models as non-local hidden variables theories."
  31. ^ J. S. Bell, 1981, "Bertlmann's socks and the nature of reality," Journal de Physique 42 C2: 41-61.
  32. ^ Piccinini, Gualtiero, 2007, "Computational Modelling vs. Computational Explanation: Is Everything a Turing Machine, and Does It Matter to the Philosophy of Mind?" Australasian Journal of Philosophy 85(1): 93-115.

 


First Alcibiades

Papyrus fragment of Alcibiades I, section 131.c-e.

The First Alcibiades or Alcibiades I (Ancient Greek: Ἀλκιβιάδης αʹ) is a dialogue featuring Alcibiades in conversation with Socrates. It is ascribed to Plato, although scholars are divided on the question of its authenticity.

Content



In the preface Alcibiades is described as an ambitious young man who is eager to enter public life. He is extremely proud of his good looks, noble birth, many friends, possessions and his connection to Pericles, the leader of the Athenian state. Alcibiades has many admirers but they have all run away, afraid of his coldness. Socrates was the first of his admirers but he has not spoken to him for many years. Now the older man tries to help the youth with his questions before Alcibiades presents himself in front of the Athenian assembly. For the rest of the dialogue Socrates explains the many reasons why Alcibiades needs him. By the end of Alcibiades I, the youth is much persuaded by Socrates' reasoning, and accepts him as his mentor.

The first topic they take up is the essence of politics: war and peace. Socrates claims that people should fight on just grounds, but he doubts that Alcibiades has any knowledge of justice. Prodded by Socrates' questioning, Alcibiades admits that he has never learned the nature of justice from a master, nor discovered it by himself.

Alcibiades suggests that politics is not about justice but expediency and the two principles could be opposed. Socrates persuades him that he was mistaken, and there is no expediency without justice. The humiliated youth concedes that he knows nothing about politics.

Later Alcibiades says that he is not concerned about his ignorance, because all the other Athenian politicians are ignorant. Socrates reminds him that his true rivals are the kings of Sparta and Persia. He delivers a long lecture about the careful education, glorious might and unparalleled wealth of these foreign rulers. Alcibiades gets cold feet, which was exactly the purpose of Socrates' speech.

After this interlude the dialogue proceeds with further questioning about the rules of society. Socrates points to the many contradictions in Alcibiades' thoughts. Later they agree that man has to follow the advice of the famous Delphic phrase gnōthi seautón: know thyself. They discuss that the "ruling principle" of man is not the body but the soul. Somebody's true lover loves his soul, while the lover of the body flees as soon as the youth fades. With this Socrates proves that he is the only true lover of Alcibiades. "From this day forward, I must and will follow you as you have followed me; I will be the disciple, and you shall be my master", proclaims the youth. Together they will work to improve Alcibiades' character, because only the virtuous have the right to govern. Tyrannical power should not be the aim of individuals; rather, people should accept being commanded by a superior.

In the last sentence Socrates expresses his hope that Alcibiades will persist but he has fears because the power of the state "may be too much" for both of them.
 

Authenticity

 

In antiquity Alcibiades I was regarded as the best text to introduce one to Platonic philosophy, which may be why it has continued to be included in the Platonic corpus since then. The authenticity of the dialogue was never doubted in antiquity. It was not until 1836 that the German scholar Friedrich Schleiermacher argued against the ascription to Plato.[1] Subsequently its popularity declined. However, stylometrical research supports Plato's authorship,[2] and some scholars have recently defended its authenticity.[3]

Dating

 

Traditionally, the First Alcibiades has been considered an early dialogue. Gerard Ledger's stylometric analysis supported this tradition, dating the work to the 390s.[4] Julia Annas, in supporting the authenticity of Rival Lovers, saw both dialogues as laying the foundation for ideas Plato would later develop in Charmides.

A later dating has also been defended. Nicholas Denyer suggests that it was written in the 350s BC, when Plato, back in Athens, could reflect on the similarities between Dionysius II of Syracuse (as we know him from the Seventh Letter) and Alcibiades—two young men interested in philosophy but compromised by their ambition and faulty early education.[5] This hypothesis requires skepticism about what is usually regarded as the only fairly certain result of Platonic stylometry, Plato's marked tendency to avoid hiatus in the six dialogues widely believed to have been composed in the period to which Denyer assigns First Alcibiades (Timaeus, Critias, Sophist, Statesman, Philebus, and Laws).[6]


R.S. Bluck, although unimpressed by previous arguments against the dialogue's authenticity, tentatively suggests a date after the end of Plato's life, approximately 343/2 BC, based especially on "a striking parallelism between the Alcibiades and early works of Aristotle, as well as certain other compositions that probably belong to the same period as the latter."[8]

References

 

  1. ^ Denyer (2001): 15.
  2. ^ Young (1998): 35-36.
  3. ^ Denyer (2001): 14-26.
  4. ^ Young (1998)
  5. ^ Denyer (2001): 11-14. Cf. 20-24
  6. ^ Denyer (2001): 23 n. 19
  7. ^ Pamela M. Clark, "The Greater Alcibiades," Classical Quarterly N.S. 5 (1955), pp. 231-240
  8. ^ R.S. Bluck, "The Origin of the Greater Alcibiades," Classical Quarterly N.S. 3 (1953), pp. 46-52

 

Bibliography

 

  • Denyer, Nicholas, "introduction", in Plato, Alcibiades, Nicholas Denyer (ed.) (Cambridge: Cambridge University Press, 2001): 1-26.
  • Foucault, Michel, The Hermeneutics of the Subject: Lectures at the Collège de France, 1981–1982 (New York: Picador, 2005).
  • Young, Charles M., "Plato and Computer Dating", in Nicholas D. Smith (ed.), Plato: Critical Assessments volume 1: General Issues of Interpretation (London: Routledge, 1998): 29-49.

 

Thursday, May 17, 2012

Particle Dark Matter

Looks interesting.


Dark matter is among the most important open problems in modern physics. Aimed at graduate students and researchers, this book describes the theoretical and experimental aspects of the dark matter problem in particle physics, astrophysics and cosmology. Featuring contributions from 48 leading theorists and experimentalists, it presents many aspects, from astrophysical observations to particle physics candidates, and from the prospects for detection at colliders to direct and indirect searches. The book introduces observational evidence for dark matter along with a detailed discussion of the state-of-the-art of numerical simulations and alternative explanations in terms of modified gravity. It then moves on to the candidates arising from theories beyond the Standard Model of particle physics, and to the prospects for detection at accelerators. It concludes by looking at direct and indirect dark matter searches, and the prospects for detecting the particle nature of dark matter with astrophysical experiments. See:Cambridge University Press

NASA's Fermi Spots 'Superflares' in the Crab Nebula


The famous Crab Nebula supernova remnant has erupted in an enormous flare five times more powerful than any previously seen from the object. The outburst was first detected by NASA's Fermi Gamma-ray Space Telescope on April 12 and lasted six days.
The nebula, which is the wreckage of an exploded star whose light reached Earth in 1054, is one of the most studied objects in the sky. At the heart of an expanding gas cloud lies what's left of the original star's core, a superdense neutron star that spins 30 times a second. With each rotation, the star swings intense beams of radiation toward Earth, creating the pulsed emission characteristic of spinning neutron stars (also known as pulsars).
Apart from these pulses, astrophysicists had regarded the Crab Nebula as a virtually constant source of high-energy radiation. But in January, scientists associated with several orbiting observatories -- including NASA's Fermi, Swift and Rossi X-ray Timing Explorer -- reported long-term brightness changes at X-ray energies.
Scientists think that the flares occur as the intense magnetic field near the pulsar undergoes sudden restructuring. Such changes can accelerate particles like electrons to velocities near the speed of light. As these high-speed electrons interact with the magnetic field, they emit gamma rays in a process known as synchrotron emission.
To account for the observed emission, scientists say that the electrons must have energies 100 times greater than can be achieved in any particle accelerator on Earth. This makes them the highest-energy electrons known to be associated with any cosmic source.
Based on the rise and fall of gamma rays during the April outbursts, scientists estimate that the emitting region must be comparable in size to the solar system. If circular, the region must be smaller than roughly twice Pluto's average distance from the sun. See: NASA's Fermi Spots 'Superflares' in the Crab Nebula






Like a July 4 fireworks display, a young, glittering collection of stars looks like an aerial burst. The cluster is surrounded by clouds of interstellar gas and dust - the raw material for new star formation. The nebula, located 20,000 light-years away in the constellation Carina, contains a central cluster of huge, hot stars, called NGC 3603.

This environment is not as peaceful as it looks. Ultraviolet radiation and violent stellar winds have blown out an enormous cavity in the gas and dust enveloping the cluster, providing an unobstructed view of the cluster.

Most of the stars in the cluster were born around the same time but differ in size, mass, temperature, and color. The course of a star's life is determined by its mass, so a cluster of a given age will contain stars in various stages of their lives, giving an opportunity for detailed analyses of stellar life cycles. NGC 3603 also contains some of the most massive stars known. These huge stars live fast and die young, burning through their hydrogen fuel quickly and ultimately ending their lives in supernova explosions.

Star clusters like NGC 3603 provide important clues to understanding the origin of massive star formation in the early, distant universe. Astronomers also use massive clusters to study distant starbursts that occur when galaxies collide, igniting a flurry of star formation. The proximity of NGC 3603 makes it an excellent lab for studying such distant and momentous events.

This Hubble Space Telescope image was captured in August 2009 and December 2009 with the Wide Field Camera 3 in both visible and infrared light, which trace the glow of sulfur, hydrogen, and iron.

The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency. NASA’s Goddard Space Flight Center manages the telescope. The Space Telescope Science Institute (STScI) conducts Hubble science operations. STScI is operated for NASA by the Association of Universities for Research in Astronomy, Inc. in Washington, D.C. See: Starburst Cluster Shows Celestial Fireworks