Showing posts with label Computers. Show all posts

Monday, May 30, 2011

Do Cellphones Cause Brain Cancer?

Bruce Gilden/Magnum, for The New York Times
On Jan. 21, 1993, the television talk-show host Larry King featured an unexpected guest on his program. It was the evening after Inauguration Day in Washington, and the television audience tuned in expecting political commentary. But King turned, instead, to a young man from Florida, David Reynard, who had filed a tort claim against the cellphone manufacturer NEC and the carrier GTE Mobilnet, claiming that radiation from their phones caused or accelerated the growth of a brain tumor in his wife. See: Do Cellphones Cause Brain Cancer?

***
Fidel wrote:
I'm not sure what it has to do with CERN. I believe there are possibilities for quantum and/or computing in general that may emerge as a result of the science produced from CERN.


Quality researchers such as Stafford Beer would have been aware of other work in progress so as to fine-tune their own processes. It was off the cuff and might not mean anything for sure; it's just something that popped into mind. Mind you, he might not have been as connected as we are today to research and evidence brought forth.





Cybersyn control room

A "central location" sensitive to the nervous system (a control center)? Yes, in relation to that. I believe one of the objectives of the Santiago experiment was to prove that something about socialism is possible, which its free-market-friendly critics still question today. The biological association is very real, and part of the question is "whether the soul is an active component of the reality of computerized development as an effective decision maker without the emotive connection we have as an empathic quality of being." Is that ever the strive toward future development of that DNA computerized structure? Memory inducement? Long-term "smell" associations? Learning and education?

How are cell phone frequencies affecting the DNA structure? What conclusive proof is there, such that we push for greater speeds from fiber optics while reducing wireless for its effect on sperm? Has a design been implemented to structure the computer as an architectural structure beneficial to humanity's goal of society progressing, or are we retrograding with the creation of a Frankenstein? Intellectual drones... so many bit constructs, as manufactured by an advanced society? Economically, why would anyone have to think about it, if it can take care of itself?


Towards quantum chemistry on a quantum computer

B. P. Lanyon1,2, J. D. Whitfield4, G. G. Gillett1,2, M. E. Goggin1,5, M. P. Almeida1,2, I. Kassal4, J. D. Biamonte4,6, M. Mohseni4,6, B. J. Powell1,3, M. Barbieri1,2,6, A. Aspuru-Guzik4 & A. G. White1,2

Abstract

Exact first-principles calculations of molecular properties are currently intractable because their computational cost grows exponentially with both the number of atoms and basis set size. A solution is to move to a radically different model of computing by building a quantum computer, which is a device that uses quantum systems themselves to store and process data. Here we report the application of the latest photonic quantum computer technology to calculate properties of the smallest molecular system: the hydrogen molecule in a minimal basis. We calculate the complete energy spectrum to 20 bits of precision and discuss how the technique can be expanded to solve large-scale chemical problems that lie beyond the reach of modern supercomputers. These results represent an early practical step toward a powerful tool with a broad range of quantum-chemical applications.

Quantum Chlorophyll as a dissipative messenger toward construction of the "emotive system" as a centralized endocrine association of messengers...to activate the real human values of caring?



Photos By: Illustration by Megan Gundrum, fifth-year DAAP student
For decades, farmers have been trying to find ways to get more energy out of the sun.

In natural photosynthesis, plants take in solar energy and carbon dioxide and then convert it to oxygen and sugars. The oxygen is released to the air and the sugars are dispersed throughout the plant — like that sweet corn we look for in the summer. Unfortunately, the allocation of light energy into products we use is not as efficient as we would like. Now engineering researchers at the University of Cincinnati are doing something about that. See: Frogs, Foam and Fuel: UC Researchers Convert Solar Energy to Sugars

Fidel wrote:
How does a socialist economic system provide feedback to central planners concerning demand for goods and services?
As I've grown older and watched society in progress, it has been of increasing concern to me that we have lost something in our caring for the whole system, in favor of a part of that system. Profit orientation has done that when it has come to what we think should happen in regards to privatization and the loss of public accountability with regard to the cost of living. Monopolies, and how we don't recognize them or their effect on the way we live.

I am not well educated, although I have watched and been a part of the evolution of the internet as it has come forward in expression, so I have learned to use its language to help display the things I have learned. Just put it out there. So it is important that what is presented is accurate. Hence the push for educational facilities to open themselves up to the general public, to cater not only to their students but to allow the populace to access the same information.

The Manhattan Economic question? What is the best and fairest way in which to design an economic system that takes care of the imbalances that seem to thrive in the present capitalistic system?

The potential for me is to recognize that the same population has very bright minds (young and upcoming, and the aged :) ) who are not just part of the educational system but reside quietly in the populace, unrecognized as a potential resource, and who, while innovative, are just happy to share some of their ideas.

Researching amongst reputable scientists has allowed me to access the process of accountability in the evolution of ideas. Recognizing the creativity that resides in a society, once that society has been taken care of, allows art and science to excel. Secure livelihoods can provide an environment conducive to the further evolution of our societies.

***

We might want to shine a light on it all. :)


All of this is a recognition of what must take place not only within our societies as to the questions of being, as to what we want built in the substructure (underlying, as an unconscious direction of our reality's movement in the production of being), as a conscious movement toward the development of those same societies. This sense of being, personally and culturally, "projects outward."

If you are not aware of what is the undercurrent of the being as a person, what troubles it as it sleeps, then what say you about the direction this subconscious mind takes as it displays its warning for you? This dreaming reality, predictor of what is to come. Not to take heed of the warning of our culture and the deep-seated want for a fairer and more just culture? The being, as to progress the soul's desire for meaning and expressionism, as to learn and evolve?

Friday, February 12, 2010

The Last Question by Isaac Asimov

The problem of heat can be a frustrating one if one must contend with the computer chips, and how this may have resulted in a reboot of the machine (or its death) into a better state of existence than what was previously used in working model form.

So the perfection is the very defining model of a super race that is devoid of all the trappings of human form: one that can be ruled by the mistakes of combining body parts, in the Frankenstein sense, up to what the new terminator models have taken over... but they are not human?

Multivac is an advanced computer that solves many of the world’s problems. The story opens on May 14, 2061 when Multivac has built a space station to harness the power of the sun – effectively giving humans access to a nearly unlimited source of power. Ah – and that’s the key, it is nearly unlimited. In fact two of Multivac’s technicians argue about this very idea – how long will humankind be able to glean energy from the universe? They decide to ask Multivac for the answer, and all it can say is “INSUFFICIENT DATA FOR MEANINGFUL ANSWER.” Oh well, it was a good idea, and through several smaller stories we see that many more people ask Multivac the same question. Multivac has a difficult time answering – it is a hard question after all! But when do we (and Multivac) finally learn the answer? As you’ve probably guessed – not until the very end of the story.
“You ask Multivac. I dare you. Five dollars says it can’t be done.”
“Adell was just drunk enough to try, just sober enough to be able to phrase the necessary symbols and operations into a question which, in words, might have corresponded to this: Will mankind one day without the net expenditure of energy be able to restore the sun to its full youthfulness even after it had died of old age?
Or maybe it could be put more simply like this: How can the net amount of entropy of the universe be massively decreased?
Multivac fell dead and silent. The slow flashing of lights ceased, the distant sounds of clicking relays ended.
Then, just as the frightened technicians felt they could hold their breath no longer, there was a sudden springing to life of the teletype attached to that portion of Multivac. Five words were printed: INSUFFICIENT DATA FOR MEANINGFUL ANSWER.

***

Timeframe for heat death

From the Big Bang through the present day and well into the future, matter and dark matter in the universe are concentrated in stars, galaxies, and galaxy clusters. Therefore, the universe is not in thermodynamic equilibrium and objects can do physical work.[11], §VID. The decay time of a roughly galaxy-mass (10^11 solar masses) supermassive black hole due to Hawking radiation is on the order of 10^100 years,[12] so entropy can be produced until at least that time. After that time, the universe enters the so-called dark era and is expected to consist chiefly of a dilute gas of photons and leptons.[11], §VIA. With only very diffuse matter remaining, activity in the universe will have tailed off dramatically, with very low energy levels and very large time scales. Speculatively, it is possible that the Universe may enter a second inflationary epoch, or, assuming that the current vacuum state is a false vacuum, the vacuum may decay into a lower-energy state.[11], §VE. It is also possible that entropy production will cease and the universe will achieve heat death.[11], §VID.
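The 10^100-year figure can be sanity-checked with the standard Hawking evaporation estimate t ≈ 5120·π·G²·M³/(ħc⁴). A quick back-of-envelope sketch (my own check, not from the quoted text; constants in SI units):

```python
import math

# Hawking evaporation time t = 5120 * pi * G^2 * M^3 / (hbar * c^4),
# evaluated for a galaxy-mass (1e11 solar masses) black hole.
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
hbar = 1.055e-34     # reduced Planck constant, J s
c = 2.998e8          # speed of light, m/s
M_SUN = 1.989e30     # solar mass, kg
SECONDS_PER_YEAR = 3.156e7

def evaporation_time_years(mass_kg):
    t_seconds = 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)
    return t_seconds / SECONDS_PER_YEAR

t = evaporation_time_years(1e11 * M_SUN)
print(f"{t:.1e} years")  # on the order of 1e100 years, matching the text
```

Note the M³ scaling: a black hole twice as massive takes eight times as long to evaporate, which is why galaxy-mass holes dominate the timescale.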

*** 


Saturday, October 03, 2009

Creating the Perfect Human Being or Maybe.....

..... a Frankenstein?:)




Seriously, there are defined differences between the human being and AI intelligence. I think people have a tendency to blur the lines on machinery. This of course required some reading, and the wiki quotes herein help to orient.

Of course the pictures in fiction development are closely related to the approach to development, while in some respects they represent more the development of the perfect human being.


It seems there is a quest "to develop" human beings, not just robots.


Artificial Intelligence (AI) is the intelligence of machines and the branch of computer science which aims to create it. Textbooks define the field as "the study and design of intelligent agents,"[1] where an intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success.[2] John McCarthy, who coined the term in 1956,[3][4] defines it as "the science and engineering of making intelligent machines."
The field was founded on the claim that a central property of humans, intelligence—the sapience of Homo sapiens—can be so precisely described that it can be simulated by a machine.[5] This raises philosophical issues about the nature of the mind and limits of scientific hubris, issues which have been addressed by myth, fiction and philosophy since antiquity.[6] Artificial intelligence has been the subject of breathtaking optimism,[7] has suffered stunning setbacks[8][9] and, today, has become an essential part of the technology industry, providing the heavy lifting for many of the most difficult problems in computer science.
AI research is highly technical and specialized, deeply divided into subfields that often fail to communicate with each other.[10] Subfields have grown up around particular institutions, the work of individual researchers, the solution of specific problems, longstanding differences of opinion about how AI should be done and the application of widely differing tools. The central problems of AI include such traits as reasoning, knowledge, planning, learning, communication, perception and the ability to move and manipulate objects.[11] General intelligence (or "strong AI") is still a long-term goal of (some) research.[12]



Rusty the Tin man

Lacking a heart.....

Knowledge representation

Knowledge representation[43] and knowledge engineering[44] are central to AI research. Many of the problems machines are expected to solve will require extensive knowledge about the world. Among the things that AI needs to represent are: objects, properties, categories and relations between objects;[45] situations, events, states and time;[46] causes and effects;[47][48] knowledge about knowledge (what we know about what other people know); and many other, less well researched domains. A complete representation of "what exists" is an ontology[49] (borrowing a word from traditional philosophy), of which the most general are called upper ontologies.
Among the most difficult problems in knowledge representation are:
Default reasoning and the qualification problem
Many of the things people know take the form of "working assumptions." For example, if a bird comes up in conversation, people typically picture an animal that is fist sized, sings, and flies. None of these things are true about all birds. John McCarthy identified this problem in 1969[50] as the qualification problem: for any commonsense rule that AI researchers care to represent, there tend to be a huge number of exceptions. Almost nothing is simply true or false in the way that abstract logic requires. AI research has explored a number of solutions to this problem.[51]
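The "working assumption plus exceptions" pattern described above can be illustrated with a toy default-reasoning sketch (the rules and bird names are invented for illustration):

```python
# Default rule: birds fly. Known exceptions override the default.
DEFAULT_FLIES = True
EXCEPTIONS = {"penguin": False, "ostrich": False, "kiwi": False}

def flies(kind_of_bird):
    """Apply the commonsense default unless a known exception applies."""
    return EXCEPTIONS.get(kind_of_bird, DEFAULT_FLIES)

print(flies("sparrow"))  # True  (default applies)
print(flies("penguin"))  # False (exception overrides)
```

The qualification problem is exactly that the `EXCEPTIONS` table never ends: each commonsense rule accumulates an open-ended list of cases the default gets wrong.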
The breadth of commonsense knowledge
The number of atomic facts that the average person knows is astronomical. Research projects that attempt to build a complete knowledge base of commonsense knowledge (e.g., Cyc) require enormous amounts of laborious ontological engineering — they must be built, by hand, one complicated concept at a time.[52] A major goal is to have the computer understand enough concepts to be able to learn by reading from sources like the internet, and thus be able to add to its own ontology.
The subsymbolic form of some commonsense knowledge
Much of what people know is not represented as "facts" or "statements" that they could actually say out loud. For example, a chess master will avoid a particular chess position because it "feels too exposed"[53] or an art critic can take one look at a statue and instantly realize that it is a fake.[54] These are intuitions or tendencies that are represented in the brain non-consciously and sub-symbolically.[55] Knowledge like this informs, supports and provides a context for symbolic, conscious knowledge. As with the related problem of sub-symbolic reasoning, it is hoped that situated AI or computational intelligence will provide ways to represent this kind of knowledge.[55]


Bicentennial man

....they wanted to embed robotic feature with emotive functions...

Social intelligence


Kismet, a robot with rudimentary social skills
Emotion and social skills[73] play two roles for an intelligent agent. First, it must be able to predict the actions of others, by understanding their motives and emotional states. (This involves elements of game theory, decision theory, as well as the ability to model human emotions and the perceptual skills to detect emotions.) Also, for good human-computer interaction, an intelligent machine also needs to display emotions. At the very least it must appear polite and sensitive to the humans it interacts with. At best, it should have normal emotions itself.



....finally, having the ability to dream:)

Integrating the approaches

Intelligent agent paradigm
An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. The simplest intelligent agents are programs that solve specific problems. The most complicated intelligent agents are rational, thinking humans.[92] The paradigm gives researchers license to study isolated problems and find solutions that are both verifiable and useful, without agreeing on one single approach. An agent that solves a specific problem can use any approach that works — some agents are symbolic and logical, some are sub-symbolic neural networks and others may use new approaches. The paradigm also gives researchers a common language to communicate with other fields—such as decision theory and economics—that also use concepts of abstract agents. The intelligent agent paradigm became widely accepted during the 1990s.[93]
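A minimal sketch of the perceive-then-act abstraction (the thermostat example and its thresholds are my own, not from the quoted text): an agent maps each percept to the action that best advances its goal.

```python
class ThermostatAgent:
    """Toy reflex agent: perceives a temperature, acts to hold a setpoint."""

    def __init__(self, setpoint, deadband=1.0):
        self.setpoint = setpoint
        self.deadband = deadband  # tolerance band around the setpoint

    def act(self, percept_temp):
        # Choose the action that moves the environment toward "success".
        if percept_temp < self.setpoint - self.deadband:
            return "heat"
        if percept_temp > self.setpoint + self.deadband:
            return "cool"
        return "idle"

agent = ThermostatAgent(setpoint=20.0)
print(agent.act(15.0), agent.act(25.0), agent.act(20.5))  # heat cool idle
```

The same interface (percept in, action out) covers everything from this trivial reflex rule up to planning agents; that shared shape is what makes the paradigm a common language across fields.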

Agent architectures and cognitive architectures

Researchers have designed systems to build intelligent systems out of interacting intelligent agents in a multi-agent system.[94] A system with both symbolic and sub-symbolic components is a hybrid intelligent system, and the study of such systems is artificial intelligence systems integration. A hierarchical control system provides a bridge between sub-symbolic AI at its lowest, reactive levels and traditional symbolic AI at its highest levels, where relaxed time constraints permit planning and world modelling.[95] Rodney Brooks' subsumption architecture was an early proposal for such a hierarchical system.
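Brooks' idea can be sketched as prioritized behavior layers, where a higher (here: safety) layer subsumes the output of lower ones. The two behaviors below are invented for illustration:

```python
def avoid_obstacles(percept):
    # Reactive layer: fires only when an obstacle is sensed.
    return "turn" if percept.get("obstacle") else None

def wander(percept):
    # Lowest layer: a default behavior that always proposes something.
    return "forward"

LAYERS = [avoid_obstacles, wander]  # highest priority first

def act(percept):
    for layer in LAYERS:
        action = layer(percept)
        if action is not None:
            return action  # a higher layer subsumes the ones below it

print(act({"obstacle": True}), act({}))  # turn forward
```

In a fuller hierarchical system the upper layers would be slower, deliberative planners rather than reflexes, but the arbitration rule stays the same.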
So to me there is an understanding that needs to remain consistent in our views as one moves forward here: what is created is not really the human being that we are, but a manifestation of it. I think people tend to "lose perspective" on human intelligence versus A.I. So the issue then is to note these differences? This distinction to me rests in "what outcomes are possible in the diversity of the human population matched to a purpose for personal development toward an ideal." No match can be found in terms of this creative attachment, which can arise distinctly in each person's probable outcome. The difference here is that "if" all knowledge already existed, and "if" we were to have access to this "collective unconscious, per se," then how is it that such thinking cannot point toward new paradigms for personal development within society? New science? AI already has all these knowledge factors included, so it can give outcomes according to a "quantum leap??" :) No, it needs human intervention; or can AI already give us that new science? You see? There would be "no need" for an Einstein?


***

Thursday, September 24, 2009

DNA Computing

DNA computing is a form of computing which uses DNA, biochemistry and molecular biology, instead of the traditional silicon-based computer technologies. DNA computing, or, more generally, molecular computing, is a fast developing interdisciplinary area. Research and development in this area concerns theory, experiments and applications of DNA computing. See: DNA computing
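Adleman's original 1994 experiment solved a seven-node Hamiltonian-path instance by generating candidate paths massively in parallel as DNA strands and then chemically filtering out the invalid ones. The same generate-and-filter idea, run in silico on a small invented graph:

```python
from itertools import permutations

# Hypothetical 4-node directed graph (edge set invented for illustration).
edges = {(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)}
nodes = range(4)

# "Generate every candidate path, then filter out invalid ones" -- the
# brute force that DNA chemistry performs in parallel, done sequentially here.
hamiltonian_paths = [
    p for p in permutations(nodes)
    if all((a, b) in edges for a, b in zip(p, p[1:]))
]
print(hamiltonian_paths)  # [(0, 1, 2, 3)]
```

The point of the DNA version is not the algorithm (it is exponential either way) but the medium: a test tube holds astronomically many strands, so all candidates are generated at once.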

***


Clifford of Asymptotia is hosting a guest post by Len Adleman: Quantum Mechanics and Mathematical Logic.


Today I’m pleased to announce that we have a guest post from a very distinguished colleague of mine, Len Adleman. Len is best known as the “A” in RSA and the inventor of DNA-computing. He is a Turing Award laureate. However, he considers himself “a rank amateur” (his words!) as a physicist.

Len Adleman-For a long time, physicists have struggled with perplexing “meta-questions” (my phrase): Does God play dice with the universe? Does a theory of everything exist? Do parallel universes exist? As the physics community is acutely aware, these are extremely difficult questions and one may despair of ever finding meaningful answers. The mathematical community has had its own meta-questions that are no less daunting: What is “truth”? Do infinitesimals exist? Is there a single set of axioms from which all of mathematics can be derived? In what many consider to be on the short list of great intellectual achievements, Frege, Russell, Tarski, Turing, Godel, and other logicians were able to clear away the fog and sort these questions out. The framework they created, mathematical logic, has put a foundation under mathematics, provided great insights and profound results. After many years of consideration, I have come to believe that mathematical logic, suitably extended and modified (perhaps to include complexity theoretic ideas), has the potential to provide the same benefits to physics. In the following remarks, I will explore this possibility.

*** 
 


See Also:
  • Riemann Hypothesis: A Pure Love of Math

  • Ideas on Quantum Interrogation

  • Mersenne Prime: One < the Power of two

  • Lingua Cosmica
    Friday, February 23, 2007

    Light and Matter United

    Murray Gell-Mann: On Plectics

    It is appropriate that plectics refers to entanglement or the lack thereof, since entanglement is a key feature of the way complexity arises out of simplicity, making our subject worth studying.


    I was talking about some things in terms of computerization that had gone to another level in perspective (gravitational entanglement). It was about how "technologies can change" while we had been held to a certain kind of thinking. The limitations held us to "not knowing what computerization may bring."

    In regard to the Landscape

    It was important to understand why there would be such divergences in perspective and how these would be lined up. Some of course did not want to take the time, but it was important to me to understand the "philosophical position" taken.



    One could just as well venture to the condensed matter theorist and ask, what building blocks shall we use? One should not think of the "history of Platonism" without some "other influences" to consider. Lest you assign it to "another particular subject" in its present incarnation? An Oscillatory String Universe?

    So the evolution here is much more than the "circumspection of the biological function," but may possibly include other things that have not been considered?

    Physiologically, the "biological function" had some other relation? So abstract that I assigned the photon? So I said "feelings," while Einstein might assign them to a "short or long time," considering his state of mind? :)

    More thought of course here on the "fictional presentation" submitted previous. As a layman I have a problem in that regard. :)


    Who would have ever thought that such "concepts that moved from my mind had some relation to the world at large in our emotive consequences?" That from such thinking, I would adopt fictional perspective to help me deal with what the advancements may be in the computerized world?

    I already had to know of the developments in terms of entanglement issues, to move it one step further not only in my thinking, but in understanding that as you adopt model apprehensions, how shall these lead you to conclude that the physics in association had some relevance?

    The cosmologist may say that indeed the gravitational consequences and waves have yet to be considered, but, if held to the photon in its effect in such a gravitational field, then what would we want from such a graviton condensation?

    It's colouring?

    So of course it is appropriate that I show you the comment I made above, and then you judge for yourself the thought I held in fiction, in relation to this article below, as I make that statement.

    Lene Hau explains how she stops light in one place then retrieves and speeds it up in a completely separate place.
    Staff photo Justin Ide/Harvard News Office
    Atoms at room temperature move in a random, chaotic way. But when chilled in a vacuum to about 460 degrees below zero Fahrenheit, under certain conditions millions of atoms lock together and behave as a single mass. When a laser beam enters such a condensate, the light leaves an imprint on a portion of the atoms. That imprint moves like a wave through the cloud and exits at a speed of about 700 feet per hour. This wave of matter will keep going and enter another nearby ultracold condensate. That's how light moves darkly from one cloud to another in Hau's laboratory.

    This invisible wave of matter keeps going unless it's stopped in the second cloud with another laser beam, after which it can be revived as light again.

    Atoms in matter waves exist in slightly different energy levels and states than atoms in the clouds they move through. These energy states match the shape and phase of the original light pulse. To make a long story short, information in this form can be made absolutely tamper proof. Personal information would be perfectly safe.

    Saturday, January 06, 2007

    Mersenne Prime: One < the Power of two


    It looks as though primes tend to concentrate in certain curves that swoop away to the northwest and southwest, like the curve marked by the blue arrow. (The numbers on that curve are of the form x(x+1) + 41, the famous prime-generating formula discovered by Euler in 1774.)
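The caption's formula is easy to verify directly: x(x+1) + 41 (equivalently x² + x + 41) yields primes for x = 0 through 39, and first fails at x = 40, where it factors as 41 × 41. A quick check:

```python
def is_prime(k):
    """Trial-division primality test; fine for numbers this small."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

# Euler's polynomial is prime for x = 0..39 ...
values = [x * (x + 1) + 41 for x in range(40)]
print(all(is_prime(v) for v in values))  # True

# ... but fails at x = 40, where x*(x+1) + 41 = 41*41 = 1681:
print(40 * 41 + 41, is_prime(40 * 41 + 41))  # 1681 False
```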


    This is part of the education of my learning to understand the implications of the work of Riemann in the context of the Riemann Hypothesis. Part of understanding what this application can do, in terms of helping us to see what has developed "from abstractions of mathematics," to have us now engaged in the "real world" of computation.

    In mathematics, a power of two is any of the nonnegative integer powers of the number two; in other words, two multiplied by itself a certain number of times. Note that one is a power (the zeroth power) of two. Written in binary, a power of two always has the form 10000...0, just like a power of ten in the decimal system.

    Because two is the base of the binary system, powers of two are important to computer science. Specifically, two to the power of n is the number of ways the bits in a binary integer of length n can be arranged, and thus numbers that are one less than a power of two denote the upper bounds of integers in binary computers (one less because 0, not 1, is used as the lower bound). As a consequence, numbers of this form show up frequently in computer software. As an example, a video game running on an 8-bit system might limit the score or the number of items the player can hold to 255 — the result of a byte, which is 8 bits long, being used to store the number, giving a maximum value of 2^8 − 1 = 255.
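These two facts (a power of two is a 1 followed by zeros in binary, and 2^n − 1 is the n-bit maximum) can be seen directly:

```python
# A power of two in binary is a 1 followed by n zeros; 2**n - 1 is n ones,
# the largest value an n-bit unsigned integer can hold.
for n in (1, 4, 8, 16):
    print(n, bin(2**n), bin(2**n - 1), 2**n - 1)

# The classic 8-bit score cap from the text:
assert 2**8 - 1 == 255
```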


    I look forward to the help in terms of learning to understand this "ability of the mind" to envision the dynamical nature of the abstract. To help us develop, "the models of physics" in our thinking. To learn, about what is natural in our world, and the "mathematical patterns" that lie underneath them.

    What use the mind's attempt to see mathematics in such models?

    "Brane world thinking" that has a basis in Ramanujan modular forms, as a depiction of those brane surface workings? That such a diversion would "force the mind" into other "abstract realms" to ask, "what curvatures could do" in terms of a "negative expressive" state in that abstract world.

    Are our minds forced to cope with the "quantum dynamical world of cosmology" while we think about what was plain in Einstein's world of GR, as we witness the large-scale "curvature parameters" being demonstrated for us on such a gravitational look at the cosmological scale?

    Mersenne Prime


    Marin Mersenne, 1588 - 1648


    In mathematics, a Mersenne number is a number that is one less than a power of two.

    M_n = 2^n − 1.
    A Mersenne prime is a Mersenne number that is a prime number. It is necessary for n to be prime for 2^n − 1 to be prime, but the converse is not true. Many mathematicians prefer the definition that n has to be a prime number.

    For example, 31 = 2^5 − 1, and 5 is a prime number, so 31 is a Mersenne number; and 31 is also a Mersenne prime because it is a prime number. But the Mersenne number 2047 = 2^11 − 1 is not a prime because it is divisible by 23 and 89. And 2^4 − 1 = 15 can be shown to be composite because 4 is not prime.

    Throughout modern times, the largest known prime number has very often been a Mersenne prime. Most sources restrict the term Mersenne number to where n is prime, as all Mersenne primes must be of this form as seen below.

    Mersenne primes have a close connection to perfect numbers, which are numbers equal to the sum of their proper divisors. Historically, the study of Mersenne primes was motivated by this connection; in the 4th century BC Euclid demonstrated that if M is a Mersenne prime then M(M+1)/2 is a perfect number. In the 18th century, Leonhard Euler proved that all even perfect numbers have this form. No odd perfect numbers are known, and it is suspected that none exist (any that do have to belong to a significant number of special forms).

    It is currently unknown whether there is an infinite number of Mersenne primes.

    The binary representation of 2^n − 1 is n repetitions of the digit 1, making it a base-2 repunit. For example, 2^5 − 1 = 11111 in binary.
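The definitions above are small enough to check by hand in code: scan small n, flag which Mersenne numbers are prime, show the counterexample 2047, and confirm Euclid's link to perfect numbers M(M+1)/2.

```python
def is_prime(k):
    """Trial-division primality test; adequate for small Mersenne numbers."""
    if k < 2:
        return False
    i = 2
    while i * i <= k:
        if k % i == 0:
            return False
        i += 1
    return True

for n in range(2, 14):
    m = 2**n - 1                    # the Mersenne number M_n
    if is_prime(m):
        perfect = m * (m + 1) // 2  # Euclid: M*(M+1)/2 is a perfect number
        print(f"n={n}: {m} is a Mersenne prime -> perfect number {perfect}")
    elif is_prime(n):
        print(f"n={n}: {m} is composite even though n is prime")
```

Running this shows the primes 3, 7, 31, 127, 8191 (with perfect numbers 6, 28, 496, 8128, 33550336) and flags n = 11, since 2047 = 23 × 89.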


    So while we have learnt from Ulam's Spiral that the discussion could lead to a greater comprehension. It is by dialogue that one can move forward, and a lack of direction seems to hold one's world to limits, not seen and known beyond the safety and security of home.

    Monday, December 18, 2006

    Gottfried Wilhelm von Leibniz

    This is a historical reference, as well as leading to a conclusion. I won't state it for you; I just present the idea, the "written word," and then you decide what that message is. You might have thought it disjointed, but it's really not, as you move through it.


    Internet Philosophy - Gottfried Wilhelm Leibniz (1646-1716) Metaphysics


    There are reasons why this article is being put up, and again, developing a little history to the "line up Lee Smolin prepared" is an important step in discerning why he may have gone down a certain route for comparative relations in terms of "against symmetry."


    Click on link Against symmetry (Paris, June 06)

    I have no one telling me this, just that any argument has to have its "foundational logic of approach," and learning to interpret why someone did something is sometimes just as important as the science they currently pursue, or adopted in light of other models and methods. It does not necessarily make them right. Just that they are delving in model apprehension, and devising the reasons why the model they choose to use "is" the desired one, from their current philosophical development and understanding.

    So they have to present their logic.

    The Identity of Indiscernibles

    The Identity of Indiscernibles (hereafter called the Principle) is usually formulated as follows: if, for every property F, object x has F if and only if object y has F, then x is identical to y. Or in the notation of symbolic logic:

    ∀F(Fx ↔ Fy) → x=y

    This formulation of the Principle is equivalent to the Dissimilarity of the Diverse as McTaggart called it, namely: if x and y are distinct then there is at least one property that x has and y does not, or vice versa.

    The converse of the Principle, x=y → ∀F(Fx ↔ Fy), is called the Indiscernibility of Identicals. Sometimes the conjunction of both principles, rather than the Principle by itself, is known as Leibniz's Law.
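    As a toy illustration (mine, not Leibniz's), one can model "properties" as a finite list of predicates and test the Principle's antecedent, whether x and y agree on every property F:

```python
# Each "property" F is a predicate; indiscernibility means x and y
# agree on every listed F (the antecedent of Leibniz's Principle).
def indiscernible(x, y, properties):
    return all(F(x) == F(y) for F in properties)

properties = [
    lambda n: n % 2 == 0,   # "is even"
    lambda n: n > 10,       # "is greater than ten"
]

print(indiscernible(4, 8, properties))   # True: they agree on both predicates
print(indiscernible(4, 11, properties))  # False: they differ on both
```

    Note the limitation: with only a finite stock of properties, 4 and 8 come out indiscernible yet are not identical. Leibniz's Principle quantifies over every property, which is what gives it its metaphysical force.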


    It is almost as if, for the computerized world to be developed further, "this logic" had to be based on some philosophical approach? It had to be derived from some developmental model, beyond the scope of "the approach to quantum gravity," with its basis designed in an area of research a university could itself be exploiting?


    In 1671 Gottfried Wilhelm von Leibniz (1646-1716) invented a calculating machine which was a major advance in mechanical calculating. The Leibniz calculator incorporated a new mechanical feature, the stepped drum — a cylinder bearing nine teeth of different lengths which increase in equal amounts around the drum. Although the Leibniz calculator was not developed for commercial production, the stepped drum principle survived for 300 years and was used in many later calculating systems.


    This is not to say the developmental program disavows current research in all areas to be considered. Just that its approach is based on "some method" that is not easily discernible, even to the vast array of scientists currently working in so many research fields.

    Why Quantum Computers?

    On the atomic scale matter obeys the rules of quantum mechanics, which are quite different from the classical rules that determine the properties of conventional logic gates. So if computers are to become smaller in the future, new, quantum technology must replace or supplement what we have now. The point is, however, that quantum technology can offer much more than cramming more and more bits onto silicon and multiplying the clock-speed of microprocessors. It can support an entirely new kind of computation with qualitatively new algorithms based on quantum principles!
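    To make the quoted idea of "qualitatively new" computation concrete, here is a toy sketch of my own (not from the article): a single qubit as a pair of amplitudes, with a Hadamard gate putting the classical bit 0 into an equal superposition.

```python
import math

# A qubit state (alpha, beta): amplitudes for |0> and |1>.
def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)              # the classical bit 0 as |0>
plus = hadamard(zero)          # equal superposition of |0> and |1>
probs = tuple(x * x for x in plus)
print(probs)  # roughly (0.5, 0.5): either outcome with equal probability
```

    A classical bit is always 0 or 1; the superposed qubit is, until measured, both at once, which is the resource quantum algorithms exploit.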


    Increasing complexity makes it very hard to describe complex systems. Imagine, if you were going from the top down, what constituent descriptors of reality we would have to manufacture if we wanted to speak about all those forms and the complexity that makes them up?

    Moore's Law

    Moore's law is the empirical observation that the complexity of integrated circuits, with respect to minimum component cost, doubles every 24 months [1].
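    The 24-month doubling compounds quickly; a small sketch makes the point (the starting count of 2300 transistors is my illustrative choice, not from the quote):

```python
# Moore's law as stated above: complexity doubles every 24 months.
def transistor_count(initial, months, doubling_period=24.0):
    return initial * 2 ** (months / doubling_period)

print(transistor_count(2300, 120))  # ten years = five doublings: 73600.0
```

    Ten years of 24-month doublings multiplies the count by 32; forty years multiplies it by about a million.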

    Tuesday, November 14, 2006

    Discovering the Quantum Universe


    Credit: Jean-Francois Colonna
    Superstrings: A computer's graphical representation of multi-dimensional spacetime


    -----------------------------------------------------

    Right now is a time of radical change in particle physics. Recent experimental evidence demands a revolutionary new vision of the universe. Discoveries are at hand that will stretch the imagination with new forms of matter, new forces of nature, new dimensions of space and time. Breakthroughs will come from the next generation of particle accelerators — the Large Hadron Collider, now under construction in Europe, and the proposed International Linear Collider. Experiments at these accelerators will revolutionize your concept of the universe.


    -------------------------------------------------------------

    Finding a heavenly key to climate change

    Researchers at the European Organization for Nuclear Research (Cern) in Geneva are looking at how radiation from outer space could be affecting our environment.

    A new cutting-edge experiment aims to discover how exactly cosmic rays and the Sun may influence the formation of low-level clouds, and possibly climate change.

    More than two centuries ago, the British Astronomer Royal William Herschel noted a correlation between sunspots, an indicator of solar activity, and the price of wheat in England. He suggested that when there were few sunspots, prices rose.

    However, up until recently, there was little to back up this hypothesis. Today, inside an unassuming (some would say decrepit-looking) building at Cern, the Cloud (Cosmics Leaving OUtdoor Droplets) experiment might help explain how the Sun affects the climate.


    ----------------------------------------------------------------

    NASA Schedules Dark Energy Discovery Media Teleconference

    NASA will host a media teleconference with Hubble Space Telescope astronomers at 1 p.m. EST Thursday, Nov. 16, to announce the discovery that dark energy has been an ever-present constituent of space for most of the universe's history.


    See here for my comments here.

    Wednesday, September 06, 2006

    Undercut Philosophical Basis, "What Have You?"

    In "Beyond the dance of the sun" I show what we take for granted from an "observational standpoint," and try to increase perception based on the quantum views.

    Is mathematics Invented or Discovered?

    "Philosophy In The Flesh"

    LAKOFF:
    When you start to study the brain and body scientifically, you inevitably wind up using metaphors. Metaphors for the mind, as you say, have evolved over time -- from machines to switchboards to computers. There's no avoiding metaphor in science. In our lab, we use the Neural Circuitry metaphor ubiquitous throughout neuroscience. If you're studying neural computation, that metaphor is necessary. In the day to day research on the details of neural computation, the biological brain moves into the background while the Neural Circuitry introduced by the metaphor is what one works with. But no matter how ubiquitous a metaphor may be, it is important to keep track of what it hides and what it introduces. If you don't, the body does disappear. We're careful about our metaphors, as most scientists should be.


    So I asked myself a question.

    What if the condensation of the human brain was the reverse of Damasio's First Law? I mean, we can train the neuron pathways to be reconstructed by re-establishing the movements previously damaged by stroke. What is the evolution of the human brain, if "mind" is not leading its shape? A newly discovered ability called "Toposense," perhaps?

    Okay now, what came first, "chicken" or "egg?"

    If one had never read Kuhn, how would one know to respond in kind to the philosophical basis a David Corfield might in sharing perspective about abstractness in mathematical models?



    The thesis of 'Proofs and Refutations' is that the development of mathematics does not consist (as conventional philosophy of mathematics tells us it does) in the steady accumulation of eternal truths. Mathematics develops, according to Lakatos, in a much more dramatic and exciting way - by a process of conjecture, followed by attempts to 'prove' the conjecture (i.e. to reduce it to other conjectures) followed by criticism via attempts to produce counter-examples both to the conjectured theorem and to the various steps in the proof.

    J Worrall and E G Zahar (eds.), I Lakatos: Proofs and Refutations: The Logic of Mathematical Discovery


    Okay, so you understand: as a layman I like to see what is going on out there in the blogger world of mathematicians, so I thought I would listen here and do some research. I had to start out with the presumption that one may encounter, and be moved from, any position.

    I had to philosophically understand it first.

    I have enjoyed both participating in a mathematical dialogue and, as a philosopher, thinking about what such participation has to do with a theory of enquiry. The obvious comparison for me is with the fictional dialogue Proofs and Refutations written by the philosopher Imre Lakatos in the early 1960s. The clearest difference between these two dialogues is that Lakatos takes the engine of conceptual development to be a process of

    conjectured result (perhaps imprecisely worded) - proposed (sketched) proof - suggested counterexample - analysis of proof for hidden assumptions - revised definitions, conjecture, and improved proof,

    whereas John, I and other contributors look largely to other considerations to get the concepts ‘right’. For instance, it is clear that one cannot get very far without a heavy dose of analogical reasoning, something Lakatos ought to have learned more about from Polya, both in person and through his books.




    I was quick to point out what I say about "Observation pays off," and it quickly received the trash box. That's not dialogue. :)

    Albrecht Dürer(self portrait at 28)

    It's about paying careful attention to what is created for us in images and paintings. Noticing the "anomalistic behavior" that might be brought forth for our human consumption. It required a metamorphosis for change.

    Prof.dr R.H. Dijkgraaf

    In that case I pointed out the work Melencolia II
    [frontispiece of thesis, after Dürer 1514] by Prof.dr R.H. Dijkgraaf,
    showing Albrecht Dürer's images repainted to suit his thesis. So I delved deeper into the image portrayed.

    On the surface this information is about what we see, yet below it, it is about seeing in ways that we are not accustomed to. So, the journey here was to show the nuances that invade perception, and then show what leads further into the understanding of what happens out there in the physics world, in regards to the summation of Prof.dr R.H. Dijkgraaf's picture of the original.

    Thursday, August 31, 2006

    Now, here is a SuperNova for Real

    The Crab Nebula from VLT Credit: FORS Team, 8.2-meter VLT, ESO



    Now the "ultimate proof" is to hold in our hands the matters defined by objects. This is the culmination of all dimensional perspectives, being "condensed to the moment" we hold the stardust samples in our hands. In that case, it may be of a meteorite/comet in passing?

    Now we are going back to our computers for a moment here.

    Now we know what can be done in terms of computer programming, and what simulations of events can do for us, but what happens, when we look out into space and watch events unfold as they do in our models?

    Interaction with matter
    In passing through matter, gamma radiation ionizes via three main processes: the photoelectric effect, Compton scattering, and pair production.


    Photoelectric Effect: This describes the case in which a gamma photon interacts with and transfers its energy to an atomic electron, ejecting that electron from the atom. The kinetic energy of the resulting photoelectron is equal to the energy of the incident gamma photon minus the binding energy of the electron. The photoelectric effect is the dominant energy transfer mechanism for x-ray and gamma ray photons with energies below 50 keV (thousand electron volts), but it is much less important at higher energies.
    Compton Scattering: This is an interaction in which an incident gamma photon loses enough energy to an atomic electron to cause its ejection, with the remainder of the original photon's energy being emitted as a new, lower energy gamma photon with an emission direction different from that of the incident gamma photon. The probability of Compton scatter decreases with increasing photon energy. Compton scattering is thought to be the principal absorption mechanism for gamma rays in the intermediate energy range 100 keV to 10 MeV (megaelectronvolts), an energy spectrum which includes most gamma radiation present in a nuclear explosion. Compton scattering is relatively independent of the atomic number of the absorbing material.
    Pair Production: By interaction via the Coulomb force, in the vicinity of the nucleus, the energy of the incident photon is spontaneously converted into the mass of an electron-positron pair. A positron is the anti-matter equivalent of an electron; it has the same mass as an electron, but it has a positive charge equal in strength to the negative charge of an electron. Energy in excess of the equivalent rest mass of the two particles (1.02 MeV) appears as the kinetic energy of the pair and the recoil nucleus. The positron has a very short lifetime (about 10^-8 seconds). At the end of its range, it combines with a free electron. The entire mass of these two particles is then converted into two gamma photons of 0.51 MeV energy each.
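    The three processes quoted above split roughly by photon energy. As a sketch only (the real crossover points depend on the absorbing material, and the quoted bands even leave a gap between 50 and 100 keV), a classifier might look like:

```python
# Rough classifier using the energy bands quoted above; treat the
# cut points as illustrative, not material-accurate.
def dominant_interaction(energy_kev):
    if energy_kev < 50:
        return "photoelectric effect"      # dominant below ~50 keV
    elif energy_kev <= 10_000:
        return "Compton scattering"        # ~100 keV to 10 MeV band
    else:
        return "pair production"           # opens at 1022 keV, wins at high E

print(dominant_interaction(30))      # photoelectric effect
print(dominant_interaction(1000))    # Compton scattering
print(dominant_interaction(50_000))  # pair production
```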


    I wanted to include this information about gamma rays first so you understand what happens in space as we get this information. I want to show you that there are faster ways that we recognize these events, and this includes recognition of what the spacetime fabric tells us from one place in the universe to another.

    Does it look the same? Check out, "Going SuperNova 3Dgif by Quasar9"

    Now, take a look at this below.

    Four hundred years ago, sky watchers, including the famous astronomer Johannes Kepler, were startled by the sudden appearance of a "new star" in the western sky, rivaling the brilliance of the nearby planets. Now, astronomers using NASA's three Great Observatories are unraveling the mysteries of the expanding remains of Kepler's supernova, the last such object seen to explode in our Milky Way galaxy.


    What can we learn about our modelling capabilities, and what can we learn about the events in space that need to be further "mapped?" How shall we do this?

    Gamma ray indicators prepared us for something that was happening. Now with this "advance notice" we look back, and watch it unfold?

    A new image taken with NASA's Hubble Space Telescope provides a detailed look at the tattered remains of a supernova explosion known as Cassiopeia A (Cas A). It is the youngest known remnant from a supernova explosion in the Milky Way. The new Hubble image shows the complex and intricate structure of the star's shattered fragments. The image is a composite made from 18 separate images taken in December 2004 using Hubble's Advanced Camera for Surveys (ACS).


    If advance indications are possible besides gamma ray detection, then what form would this take? Could we map the events as we learn of what happened in LIGO or LISA operations, and how the "speed of light" is affected in a vacuum?

    Now this comes to the second part, and question of indications of information released to the "bulk perspective" as the event unfolds as this SuperNova is.

    Bulk:
    Note that in the type IIA and type IIB string theories closed strings are allowed to move everywhere throughout the ten-dimensional space-time (called the bulk), while open strings have their ends attached to D-branes, which are membranes of lower dimensionality (their dimension is odd - 1,3,5,7 or 9 - in type IIA and even - 0,2,4,6 or 8 - in type IIB, including the time direction).


    Now advancement in model assumption pushes perspective where it did not exist before.

    You had to understand the nature of "GR" in pushing perspective, in the way this post is unfolding. Gamma ray indicators are events that are "tied to the brane," and in this sense, information is held to the brane. The "fermion principle" and identification of Type IIA and IIB is necessary, as part of the move to M theory?

    Thus when we look at gamma rays they are not "separate from the event," while the bulk perspective allows geometrics to invade the "new world" beyond the confines of non-Euclidean geometries.

    As I pointed out, the succession of Maxwell and all the equations (let there be light) are still developed from the center outwards, and in this perspective gravitational waves wrap the event. Thus the "outermost covering" is of a much higher vision and dynamical nature than what we assume as "ripples in space."

    Bulk perspective is a necessary revision/addition to how we think of and include gravitational waves, by incorporating the "gravitonic perception" as a force carrier and extension of the Standard Model.

    While I have thought to include the "Tachyon question," as a faster-than-light entity, the thought is still of some puzzlement that this information precedes the gamma ray detection, and hence serves to elucidate the understanding of our perceptions of the early events as they unfold, as a more "sounding" reason to how we look at these early events?

    For those whose views have been entertaining space travel, as I have exemplified in previous posts, it was of some importance that model enhancement would serve to help the future of space travel in all its outcomes, as we are now engaged, as ISCAP is engaging.

    See:

  • Einstein@Home


  • LIGO

    Sunday, August 27, 2006

    Numerical Relativity and Math Transference

    Part of the advantage of looking at computer animations is knowing that the basis of this vision that is being created, is based on computerized methods and codes, devised, to help us see what Einstein's equations imply.

    Now that's part of the effort, isn't it, when we see that the structure of math may have also imbued a Dirac to see in ways that only a good imagination may have, one tied to the abstractions of the math, and allows us to enter into "their portal" of the mind.

    NASA scientists have reached a breakthrough in computer modeling that allows them to simulate what gravitational waves from merging black holes look like. The three-dimensional simulations, the largest astrophysical calculations ever performed on a NASA supercomputer, provide the foundation to explore the universe in an entirely new way.

    According to Einstein's math, when two massive black holes merge, all of space jiggles like a bowl of Jell-O as gravitational waves race out from the collision at light speed.

    Previous simulations had been plagued by computer crashes. The necessary equations, based on Einstein's theory of general relativity, were far too complex. But scientists at NASA's Goddard Space Flight Center in Greenbelt, Md., have found a method to translate Einstein's math in a way that computers can understand.


    Already having this basis of knowledge available, it was important to see what present day research has done for us, as we look at these images and allow them to take us into deep space, as we construct measures to the basis of what GR has done for us in our assumptions of the events in the cosmos.

    But it is more than this for me, as I asked the question on the basis of math. I have enough links here to show the diversity of experience created from mathematical structures, to have one wonder how indeed the finite idealization of imagination can be an endless resource? You can think about livers if you like, or look at the fractalization of the beginning of anything, and wonder, I am sure.

    That has been the question of mine in regards to a condensed matter theorist who tells us that the building blocks of matter can be anything. Well, in this case we are using "computer codes" to simulate GR from a mathematical experience.

    So you see now, don't you? :)

    Is Math Invented or Discovered?

    The question here was one of some consideration, as I wondered how anyone could have delved into the nature of things and come out with some mathematical model? Taking us along with the predecessors of endowment thinking (imagination), to develop new roads. They didn't have to be 6 or 7 roads, Lubos, just a summation. Sort of like taking stock of things.

    So I may ask, "what are the schematics of nature" and the build up starts from some place. Way back, before the computer modeling and such. A means, by which we will give imagination the tools to carry on.

    So the journey began way back, and the way in which such models lead our perspectives is the "overlay" of what began here in the postulates and moved on into otherworldly abstractions?

    This first postulate says that given any two points such as A and B, there is a line AB which has them as endpoints. This is one of the constructions that may be done with a straightedge (the other being described in the next postulate).

    Although it doesn't explicitly say so, there is a unique line between the two points. Since Euclid uses this postulate as if it includes the uniqueness as part of it, he really ought to have stated the uniqueness explicitly.

    The last three books of the Elements cover solid geometry, and for those, the two points mentioned in the postulate may be any two points in space. Proposition XI.1 claims that if part of a line is contained in a plane, then the whole line is. In the books on plane geometry, it is implicitly assumed that the line AB joining A to B lies in the plane of discussion.


    One would have to know that the history had been followed here to what it is today.

    Where non-Euclidean geometry began, and who were the instigators of imaginative spaces, now that these were to become very dynamic in the xyzt direction.

    All those who have written histories bring to this point their account of the development of this science. Not long after these men came Euclid, who brought together the Elements, systematizing many of the theorems of Eudoxus, perfecting many of those of Theatetus, and putting in irrefutable demonstrable form propositions that had been rather loosely established by his predecessors. He lived in the time of Ptolemy the First, for Archimedes, who lived after the time of the first Ptolemy, mentions Euclid. It is also reported that Ptolemy once asked Euclid if there was not a shorter road to geometry than through the Elements, and Euclid replied that there was no royal road to geometry. He was therefore later than Plato's group but earlier than Eratosthenes and Archimedes, for these two men were contemporaries, as Eratosthenes somewhere says. Euclid belonged to the persuasion of Plato and was at home in this philosophy; and this is why he thought the goal of the Elements as a whole to be the construction of the so-called Platonic figures. (Proclus, ed. Friedlein, p. 68, tr. Morrow)




    The picture above belongs to a much larger picture housed in the Raphael rooms in Rome. This particular picture many are familiar with, as I use part of it as my profile picture. It is called the "Room of the Segnatura."



    The point is, that if you did not know of the "whole picture," you would have never recognized its parts?

    Saturday, August 26, 2006

    Beyond Spacetime?

    As well as bringing the accelerator's counter-rotating beams together, LHC insertion magnets also have to separate them after collision. This is the job of dedicated separators, and the US Brookhaven Laboratory is developing superconducting magnets for this purpose. Brookhaven is drawing on its experience of building the Relativistic Heavy Ion Collider (RHIC), which like the LHC is a superconducting machine. Consequently, these magnets will bear a close resemblance to RHIC's main dipoles. Following a prototyping phase, full-scale manufacture has started at Brookhaven and delivery of the first superconducting separator magnets to CERN is foreseen before the end of the year.





    Now some people do not like "alternate views" when looking at Sean's picture. But if you look at it, then look at the picture below, what sameness could have affected such thinking?

    Lisa Randall:
    "You think gravity is what you see. We're always just looking at the tail of things."





    So we look for computerized versions to help enlighten. To "see" how the wave front actually imbues circumstances and transfers gravitonic perception into other situations.



    Was this possible without understanding the context of the pictures shared? What complexity and variables allow us to construct such modelling in computers?



    Okay, so you know now that Lisa Randall's picture was thrown in here to hopefully help you see what I am saying about gravitonic consideration.

    Anything beyond the spacetime we know, exists in dimensional perspectives, and the resulting "condensative feature" of this realization is "3d+1time." The gravitonic perception is "out there?" :)

    Attributes of the Superfluids

    Now it is with some understanding that the "greater energy needed" with which to impart our views on let's say "reductionism" has pointed us in the direction of the early universe.

    So we say "QGP" and might say, "hey, is there such a way to measure such perspectives?" So I am using the graph, to point you in the right direction.



    So we talk about where these beginnings are, and the "idea of blackholes" makes its way into our view because of the reductionistic standpoint we encountered in our philosophical ramblings, to include now "conditions" that were conducive to microstate blackhole creation.

    The energy here is beyond the "collider aspects" we encounter, yet we have safely moved our perceptions forward to the QGP? We have encountered certain results. You have to quantum-dynamically understand it, in a macro way? See, we still talk about the universe, yet from a microscopic perception.

    Let's move on here, as I have.

    If you feel it too uncomfortable and the "expanse of space quantumly not stimulating" it's okay to hold on to the railings like I do, as I walked close to the "edge of the grand canyon."

    So here we are.

    I gave some ideas as to the "attributes of the superfluids" and the history in the opening paragraph, to help perspective deal with where that "extra energy has gone" and how? So you look for new physics "beyond" the current understanding of the standard model.

    So, it was appropriate to include the graviton as a force carrier? Oui! Non?

    Thursday, August 10, 2006

    The Game of Life

    if experimenters have free will, then so do elementary particles.-John Horton Conway




    What prompted this article here is the one JoAnne of Cosmic Variance writes here about the poker hand of Binger.

    Of course these things attract my mind because of how I see what may have caused "first principle" to ever be endowed in some algorithm, which could explode and appear as a chaotic pattern, yet could now be garnered in some predictability?

    It was Ulam who first invented the "Monte Carlo Method" to study the chaos of a nuclear explosion.
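    The flavor of Ulam's method is easy to show in miniature: sample randomly, count, and let statistics recover a deterministic quantity. A standard toy version (my sketch, not Ulam's original application) estimates pi:

```python
import random

# Monte Carlo: throw random points at the unit square and count
# how many land inside the quarter circle of radius 1.
def estimate_pi(samples, seed=0):
    rng = random.Random(seed)          # fixed seed keeps the run repeatable
    inside = sum(
        1 for _ in range(samples)
        if rng.random() ** 2 + rng.random() ** 2 <= 1.0
    )
    return 4 * inside / samples

print(estimate_pi(100_000))  # close to 3.14
```

    The error shrinks only as the square root of the sample count, but the method scales to problems, like neutron transport in a bomb core, that defeat direct calculation.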

    But hey, let's go way back for a minute here and try to digest some of what began historically, and grew into some idea of what the particle world now holds with its regard for coming into "being," serving the complete rotation until it disappears again?


    Martin Gardner, "Mathematical Games: The fantastic combinations of John Conway's new solitaire game `life',", Scientific American, October, 1970, pp. 120-123.

    Martin Gardner:
    Conway conjectures that no pattern can grow without limit. Put another way, any configuration with a finite number of counters cannot grow beyond a finite upper limit to the number of counters on the field. This is probably the deepest and most difficult question posed by the game. Conway has offered a prize of $50 to the first person who can prove or disprove the conjecture before the end of the year. One way to disprove it would be to discover patterns that keep adding counters to the field: a "gun" (a configuration that repeatedly shoots out moving objects such as the "glider"), or a "puffer train" (a configuration that moves about, leaving behind a trail of "smoke").
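    The glider Gardner mentions is small enough to simulate in a few lines. A minimal sketch of Life's rule (born with 3 neighbors, survives with 2 or 3) on a sparse set of live cells shows the glider copying itself one cell diagonally every four generations:

```python
from collections import Counter

# Life rule B3/S23 on a set of (x, y) live cells.
def step(live):
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1) for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
g = glider
for _ in range(4):
    g = step(g)
print(g == {(x + 1, y + 1) for (x, y) in glider})  # True: shifted one cell diagonally
```

    Conway's conjecture was in fact disproved: Bill Gosper's "glider gun" emits a fresh glider every 30 generations, so the counter count grows without bound.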


    A lot of times, if you cannot direct the mind to perceive supersymmetrical idealizations, then what use is there in ever looking for the places that would speak to the nature of the universe? God of the Gaps?



    "Every space" is reducible until? The realization then exists for me that such outward movements had always been "geometrically viable" when seen in a continuing cycle of some sort? How would you have explained it?

    Geometrodynamically, this has been talked about here in this blog many times. I needed ways in which to see this analogy "extend the vision of" what could have happened with our own universe. How did this universe come to be?

    Yes be careful with the analogies I know.