In layman's terms, a stochastic process has the Markov property if "the future depends only on the present, not on the past"; that is, if the probability distribution of future states of the process depends only upon the current state and is conditionally independent of the past states (the path of the process) given the present state. A process with the Markov property is usually called a Markov process, and may be described as Markovian.
Mathematically, the Markov property states that, if X(t), t > 0, is a stochastic process, then

Pr[X(t+h) = y | X(s) = x(s), for all s <= t] = Pr[X(t+h) = y | X(t) = x(t)]   for all h > 0.
Markov processes are typically termed (time-) homogeneous if

Pr[X(t+h) = y | X(t) = x] = Pr[X(h) = y | X(0) = x]   for all t, h > 0,
and otherwise are termed (time-) inhomogeneous. Homogeneous Markov processes, usually being simpler than inhomogeneous ones, form the most important class of Markov processes.
In some cases, apparently non-Markovian processes may still have Markovian representations, constructed by expanding the concept of the 'current' and 'future' states to each include the states over an interval of time: for example, by defining a new process Y whose state at time t is the segment of X's path over a preceding window, Y(t) = {X(s) : s in [t - tau, t]}.
An example of a non-Markovian process with a Markovian representation is a moving average time series (sketched in code below).
The most famous Markov processes are Markov chains, but many other processes, including Brownian motion, are Markovian.
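To make the moving-average example concrete, here is a minimal NumPy sketch (the MA(1) coefficient 0.5 is our own illustrative choice): the observation alone is not a Markov state, but pairing it with the latest noise term is.

```python
import numpy as np

rng = np.random.default_rng(0)

# MA(1) moving-average process: X_t = e_t + 0.5 * e_{t-1}.
# X_t alone is not Markov: predicting X_{t+1} requires the shock e_t,
# which cannot be recovered from X_t by itself.
n = 1000
e = rng.standard_normal(n + 1)
X = e[1:] + 0.5 * e[:-1]

# Markovian representation: augment the state to Y_t = (X_t, e_t).
# Given Y_t, the distribution of Y_{t+1} no longer depends on the past.
Y = np.column_stack([X, e[1:]])
print(Y[:5])
```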
Wednesday, September 08, 2004
Exponentially Quantum
It's really more confusing than you think.
It explained the superposition of quantum states in classical computer terms of numbers in registers--every qubit added exponentially increases the number of registers the algorithm operates on.
Actually, every qubit only adds a qubit, but the size of the classical registers that would be needed to simulate the quantum register grows exponentially. This is not the same as saying the quantum register is equivalent to the huge classical register that would be needed to simulate it. The huge classical register would actually be much more useful if it could be made.
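To make the register-size point concrete, here is a minimal sketch (the complex128, 16-bytes-per-amplitude representation is our own assumption) of how the classical memory needed to store an n-qubit state vector grows:

```python
def classical_cost(n_qubits):
    """Memory needed to store the full state vector of n qubits
    as complex128 amplitudes (16 bytes each)."""
    amplitudes = 2 ** n_qubits
    return amplitudes, amplitudes * 16  # count, bytes

for n in (10, 20, 30, 40, 50):
    amps, b = classical_cost(n)
    print(f"{n:2d} qubits -> {amps:>16,} amplitudes ({b:,} bytes)")
```

Adding one qubit to the quantum register doubles the classical storage required to simulate it; by 50 qubits the simulation already needs petabytes.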
The calculation runs and ends in a group of results. Only one result of all those calculations will be right.
You are referring to a peculiarity of the Shor factoring algorithm, which is still the only useful truly quantum algorithm known. As I said, ideally a quantum algorithm would give only a single answer, i.e. at the end of the calculation the quantum state of the register would be one particular answer with unit probability and all other answers would have zero probability. However, the Shor algorithm does not do this; there is some probability of a wrong answer, and you have to run the calculation a few times to see what answers you get and then check them to see which one is right. This is not necessarily a universal characteristic of all quantum algorithms, but it's hard to say because so far the Shor algorithm is just about all we have.
Any problem that grows exponentially harder to solve as you add terms (a non-polynomial problem) is tractable to a quantum computer, because you simply tack on another qubit for each new term added and then run the calculation in polynomial time.
This was an early hope for QC but it is now known not to be true. Actually, there is only one algorithm known that can take advantage of the power of QC to solve what is believed to be an exponentially hard problem, and other exponentially hard problems are known not to be soluble in linear time by QC.
There is another very important exponentially hard problem which is known to be soluble by QC, although little work has been done on algorithms for it. That is the simulation of quantum systems themselves. In the long run, this may be the most important application of quantum computing, if it ever becomes practical.
-- By MarkGubrud (gubrud@squid.umd.edu)
Tuesday, September 07, 2004
Quantumized Brainy
For those who have been seriously inspired or irritated by Roger Penrose’s hypotheses on the possible basis of consciousness in quantum effects occurring inside neurons in the brain, a trio of researchers has published a speculative proposal suggesting that biological microtubules may act as quantum electrodynamic cavities with the potential for quantum entanglement, teleportation and computation. The authors suggest that this mechanism may be responsible for how the brain works, or might at least provide biological building blocks for creating quantum computers. A preprint of their research paper is available online on the arXiv preprint server at http://www.arxiv.org/abs/quant-ph/0204021
Wednesday, September 01, 2004
Quantum Computing Gets Five Photons Closer
Quantum computers, which use attributes of quantum particles like atoms and photons to represent data, promise to solve certain very large problems many orders of magnitude faster than is possible using today's computers. The challenge is being able to manipulate particles well enough to carry out computing.
A key step is being able to entangle five particles, which would make it possible to check computations for errors and teleport quantum information within and between computers.
Researchers from the University of Science and Technology of China, the University of Innsbruck in Austria, and the University of Heidelberg in Germany have entangled five photons.
Error correction uses mathematical codes to detect when a bit has been accidentally flipped, and is widely used in classical computing because electronic and magnetic bits occasionally switch accidentally from a 1 to a 0 or vice versa. Quantum bits are more delicate still, and a workable error correction method is a prerequisite for practical quantum computing.
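The classical idea is easy to show in a few lines. The sketch below uses a 3-bit repetition code (our illustrative choice, not the quantum scheme used in the experiment): redundancy lets a majority vote detect and undo a single flipped bit.

```python
def encode(bit):
    # Triple-repetition code: protect one bit by sending three copies.
    return [bit] * 3

def correct(received):
    # Majority vote detects and corrects any single flipped bit.
    return 1 if sum(received) >= 2 else 0

codeword = encode(1)      # [1, 1, 1]
codeword[0] ^= 1          # channel flips one bit -> [0, 1, 1]
print(correct(codeword))  # 1: the original bit is recovered
```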
The researchers used the five-photon entanglement process to carry out open-destination teleportation, which makes it possible to transmit information to any one of several processors within a quantum computer or nodes in a quantum network. Quantum teleportation is akin to faxing a document and in the process destroying the original.
It will be more than a decade before the technology is practical, according to the researchers. The work appeared in the July 1, 2004 issue of Nature.
Technology Research News (MIT FEED)
Tuesday, August 31, 2004
Quantumization
Imagine a computer whose memory is exponentially larger than its apparent physical size; a computer that can manipulate an exponential set of inputs simultaneously; a computer that computes in the twilight zone of Hilbert space. You would be thinking of a quantum computer. Relatively few and simple concepts from quantum mechanics are needed to make quantum computers a possibility. The subtlety has been in learning to manipulate these concepts. Is such a computer an inevitability or will it be too difficult to build?
The challenge taken up here is solving an exponentially difficult problem for a conventional computer: that of factoring a large number. These ideas are then applied first to classical, dissipationless computers and then to quantum computers. A schematic model of a quantum computer is described, as well as some of the subtleties in its programming. The Shor algorithm [1,2] for efficiently factoring numbers on a quantum computer is presented in two parts: the quantum procedure within the algorithm and the classical algorithm that calls the quantum procedure. The mathematical structure in factoring which makes the Shor algorithm possible is discussed. We conclude with an outlook on the feasibility and prospects for quantum computation in the coming years.
Let us start by describing the problem at hand: factoring a number N into its prime factors (e.g., the number 51688 may be decomposed as 2^3 x 7 x 13 x 71). A convenient way to quantify how quickly a particular algorithm may solve a problem is to ask how the number of steps to complete the algorithm scales with the size of the "input" the algorithm is fed. For the factoring problem, this input is just the number N we wish to factor; hence the length of the input is log N. (The base of the logarithm is determined by our numbering system: a base of 2 gives the length in binary, a base of 10 in decimal.) 'Reasonable' algorithms are ones which scale as some small-degree polynomial in the input size (with a degree of perhaps 2 or 3).
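As a quick check of these definitions (trial division here is our own illustrative stand-in, not a serious factoring algorithm):

```python
import math

def trial_factor(n):
    """Factor n by trial division (fine for small n, hopeless for RSA-size)."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

N = 51688
print(trial_factor(N))           # [2, 2, 2, 7, 13, 71]
print(math.ceil(math.log2(N)))   # input length in bits (16)
print(len(str(N)))               # input length in decimal digits (5)
```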
On conventional computers the best known factoring algorithm runs in exp[c (ln N)^(1/3) (ln ln N)^(2/3)] steps [3]. This algorithm, therefore, scales exponentially with the input size ln N. For instance, in 1994 a 129-digit number (known as RSA129 [3']) was successfully factored using this algorithm on approximately 1600 workstations scattered around the world; the entire factorization took eight months [4]. Using this to estimate the prefactor of the above exponential scaling, we find that it would take roughly 800,000 years to factor a 250-digit number with the same computer power; similarly, a 1000-digit number would require about 10^25 years (significantly longer than the age of the universe). The difficulty of factoring large numbers is crucial for public-key cryptosystems, such as ones used by banks. There, such codes rely on the difficulty of factoring numbers with around 250 digits.
Recently, an algorithm was developed for factoring numbers on a quantum computer which runs in O((ln N)^(2+ε)) steps, where ε is small [1]. This is roughly quadratic in the input size, so factoring a 1000-digit number with such an algorithm would require only a few million steps. The implication is that public-key cryptosystems based on factoring may be breakable.
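A rough sketch of the gap (the constant c ~ 1.9 and treating the quantum cost as simply (ln N)^2 are illustrative assumptions; only the trend matters):

```python
import math

def classical_steps(digits, c=1.9):
    # exp[c (ln N)^(1/3) (ln ln N)^(2/3)]: sub-exponential scaling of the
    # best classical method (c ~ 1.9 is an approximate constant).
    lnN = digits * math.log(10)
    return math.exp(c * lnN ** (1 / 3) * math.log(lnN) ** (2 / 3))

def quantum_steps(digits):
    # Roughly quadratic: ~ (ln N)^2, ignoring constants and log-log factors.
    lnN = digits * math.log(10)
    return lnN ** 2

for d in (129, 250, 1000):
    print(f"{d:4d} digits: classical ~ {classical_steps(d):.2e} steps, "
          f"quantum ~ {quantum_steps(d):.2e} steps")
```

For 1000 digits the quantum estimate comes out at a few million steps, matching the claim above, while the classical estimate explodes past 10^40.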
To give you an idea of how this exponential improvement might be possible, we review an elementary quantum mechanical experiment that demonstrates where such power may lie hidden [5]. The two-slit experiment is prototypical for observing quantum mechanical behavior: a source emits photons, electrons or other particles that arrive at a pair of slits. These particles undergo unitary evolution and finally measurement. We see an interference pattern with both slits open, which wholly vanishes if either slit is covered. In some sense, the particles pass through both slits in parallel. If such unitary evolution were to represent a calculation (or an operation within a calculation), then the quantum system would be performing computations in parallel. Quantum parallelism comes for free. The output of this system would be given by the constructive interference among the parallel computations.
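A one-qubit sketch of the same interference at work in a computation (our own minimal example): a Hadamard gate creates an equal superposition of both "paths", and a second Hadamard makes the paths interfere so that the amplitudes for |1> cancel.

```python
import numpy as np

H = np.array([[1, 1],
              [1, -1]]) / np.sqrt(2)   # Hadamard gate (unitary)

state = np.array([1.0, 0.0])           # start in |0>

state = H @ state                       # "both slits open": equal superposition
print(state)                            # [0.707, 0.707]

state = H @ state                       # second pass: the paths interfere
print(state)                            # [1, 0] up to rounding: |1> cancels out
```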
Sunday, August 22, 2004
Computer Clusters
The idea behind Beowulf is to provide COTS (Commodity Off The Shelf) base systems to satisfy specific computational requirements.
The Center of Excellence in Space Data and Information Sciences (CESDIS) was located at the Goddard Space Flight Center in Greenbelt, Maryland. CESDIS was funded in part by the NASA Earth and Space Sciences (ESS) project.
Part of ESS's mission was to determine how applicable parallel computers would be to Earth and Space Science problems. Thus, the first Beowulf was made to handle the large data sets which are frequently used in ESS applications.
Give two current applications for Beowulf clusters (http://www.cacr.caltech.edu/beowulf)
The ASCI Cluster facilitates advancement in virtual test facilities. It deals primarily with small scaling runs, long production runs, and validating code's robustness and/or correctness.
The VizCluster serves to increase the size of images which can be rendered. It does so by combining results from PCs into a single composite image using Ethernet and a novel pixel bus transfer system.
What type of computers/processors were used in the first Beowulf cluster? What operating system? How were they connected? (http://www.beowulf.org)
The first Beowulf cluster was built with DX4 processors connected by 10 Mbit/s Ethernet, and the machines ran Linux. Because the processors were too fast for a single Ethernet, and Ethernet switches were still very expensive, a "channel bonded" Ethernet was created for the operating system, whereby network traffic was spread across multiple Ethernets.
Where is the Brahma Cluster located? What department is the sponsor for this cluster?
The Brahma cluster is located at Duke University in Durham, North Carolina. It is sponsored by the Physics department.
Saturday, August 21, 2004
Exploring exokernel
An exokernel allocates and deallocates hardware on the most basic level; it does not provide abstractions for applications to use. From these basic hardware primitives, applications can implement the traditional operating system abstractions, but can do so in a highly specialized manner to meet specific speed and functionality goals.
By abstracting physical hardware resources, traditional operating systems limit the performance, flexibility and functionality of applications.
Exokernel architecture removes these limitations by allowing untrusted software to implement traditional operating system abstractions entirely at application level.
The key point of all exokernels is that the protection and management schemes are separate. Thus, the library operating systems are isolated from each other. This allows application libraries to be as powerful as a traditional privileged operating system.
Friday, August 20, 2004
exokernel
The exokernel, a novel operating system architecture, has served as the framework for much parallel and distributed OS research. An exokernel eliminates the high-level abstractions most of today's operating systems are built on, instead concentrating on securely and efficiently multiplexing the raw hardware. A more conventional operating system interface is provided by application-level libraries. This allows untrusted applications to safely override default OS functionality for extensibility, flexibility, or performance.
Thursday, August 19, 2004
TinyDB
TinyDB is a query processing system for extracting information from a network of TinyOS sensors.
Unlike existing solutions for data processing in TinyOS, TinyDB does not require you to write embedded C code for sensors. Instead, TinyDB provides a simple, SQL-like interface to specify the data you want to extract, along with additional parameters, like the rate at which data should be refreshed -- much as you would pose queries against a traditional database.
Given a query specifying your data interests, TinyDB collects that data from motes in the environment, filters it, aggregates it together, and routes it out to a PC. TinyDB does this via power-efficient in-network processing algorithms.
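For flavor, a query in TinyDB's SQL-like language might look roughly like the following (syntax modeled on the style of the TinyDB documentation; the Python wrapper below is purely hypothetical, since TinyDB's real client tooling is Java-based):

```python
# Hypothetical client sketch -- TinyDB's actual client tools are Java;
# this only illustrates the shape of an acquisitional query.
query = """
    SELECT nodeid, light, temp
    FROM sensors
    WHERE light > 400
    SAMPLE PERIOD 2048       -- refresh the readings every 2048 ms
"""

def run_query(q):
    # In a real deployment this would inject the query into the mote
    # network and stream matching tuples back to the PC.
    print("would run:", " ".join(q.split()))

run_query(query)
```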
Wednesday, August 18, 2004
Quantum Computing History
When quantum computers were first proposed in the 1970s and 1980s (by theorists such as the late Richard Feynman of California Institute of Technology, Pasadena, Calif.; Paul Benioff of Argonne National Laboratory in Illinois; David Deutsch of Oxford U. in England., and Charles Bennett of IBM's T.J. Watson Research Center, Yorktown Heights, N.Y.), many scientists doubted that they could ever be made practical. But in 1994, Peter Shor of AT&T Research described a specific quantum algorithm for factoring large numbers exponentially faster than conventional computers -- fast enough to defeat the security of many public-key cryptosystems. The potential of Shor's algorithm stimulated many scientists to work toward realizing the quantum computers' potential. Significant progress has been made in recent years by numerous research groups around the world.
While at IBM, Chuang extended his reputation as one of the world's leading quantum computing experimentalists. He led the team that demonstrated the world's first 2-qubit quantum computer (in 1998 at the University of California, Berkeley). At IBM-Almaden, Chuang and colleagues were the first to demonstrate important quantum computing algorithms -- Grover's database-search algorithm in 1999 with a 3-qubit quantum computer and order finding last year (August 2000) with a 5-qubit quantum computer. The factorization using Shor's algorithm announced today is the most complex algorithm yet to be demonstrated by a quantum computer.
In addition to its ambitious experimental program, IBM Research is also noted for its many theoretical contributions to the emerging field of quantum information. IBM scientists pioneered quantum cryptography, quantum communications (including the concept of quantum teleportation) and efficient methods of error correction. David DiVincenzo, research staff member at IBM's Watson lab, has promulgated the five criteria necessary for a practical quantum computer:
1. a scalable physical system with well-characterized qubits;
2. the ability to initialize the qubit state;
3. decoherence times much longer than the quantum gate operation time;
4. a universal set of quantum gates; and
5. the ability to measure specific qubits.
Friday, August 13, 2004
Power of Quantum Computing
In a traditional computer, information is encoded in a series of bits, and these bits are manipulated via Boolean logic gates arranged in succession to produce an end result. Similarly, a quantum computer manipulates qubits by executing a series of quantum gates, each a unitary transformation acting on a single qubit or pair of qubits. In applying these gates in succession, a quantum computer can perform a complicated unitary transformation on a set of qubits in some initial state. The qubits can then be measured, with this measurement serving as the final computational result. This similarity in calculation between a classical and quantum computer means that, in theory, a classical computer can accurately simulate a quantum computer. In other words, a classical computer would be able to do anything a quantum computer can. So why bother with quantum computers? Although a classical computer can theoretically simulate a quantum computer, it is incredibly inefficient, so much so that a classical computer is effectively incapable of performing many tasks that a quantum computer could perform with ease. The simulation of a quantum computer on a classical one is a computationally hard problem because the correlations among quantum bits are qualitatively different from correlations among classical bits, as first explained by John Bell. Take, for example, a system of only a few hundred qubits: it exists in a Hilbert space of dimension ~10^90 that in simulation would require a classical computer to work with exponentially large matrices (to perform calculations on each individual state, which is also represented as a matrix), meaning the simulation would take exponentially longer than the computation on even a primitive quantum computer.
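The ~10^90 figure is easy to check directly (a quick sketch; taking "a few hundred" as 300 qubits and 16 bytes per amplitude are our own assumptions):

```python
import math

n = 300                      # "a few hundred qubits"
dim = 2 ** n                 # Hilbert-space dimension
print(math.log10(dim))       # ~90.3, i.e. dim ~ 10^90

# Storing one complex128 amplitude per basis state would take ~16 * 2^300
# bytes -- vastly more than the ~10^80 atoms in the observable universe.
print(f"~10^{math.log10(dim) + math.log10(16):.0f} bytes")
```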
Richard Feynman was among the first to recognize the potential in quantum superposition for solving such problems much, much faster. For example, a system of 500 qubits, which is impossible to simulate classically, represents a quantum superposition of as many as 2^500 states. Each state would be classically equivalent to a single list of 500 1's and 0's. Any quantum operation on that system -- a particular pulse of radio waves, for instance, whose action might be to execute a controlled-NOT operation on the 100th and 101st qubits -- would simultaneously operate on all 2^500 states. Hence with one fell swoop, one tick of the computer clock, a quantum operation could compute not just on one machine state, as serial computers do, but on 2^500 machine states at once! Eventually, however, observing the system would cause it to collapse into a single quantum state corresponding to a single answer, a single list of 500 1's and 0's, as dictated by the measurement axiom of quantum mechanics. This is an exciting result because this answer, derived from the massive quantum parallelism achieved through superposition, is the equivalent of performing the same operation on a classical supercomputer with ~10^150 separate processors (which is of course impossible)!
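On a scale small enough to actually run, here is a sketch of the idea that one gate updates the whole superposition at once: a CNOT applied to a 3-qubit state vector touches all 2^3 amplitudes in a single pass (simulating the gate as an amplitude permutation is our own illustrative choice):

```python
import numpy as np

n = 3
state = np.arange(2.0 ** n)          # 8 amplitudes, one per basis state
state /= np.linalg.norm(state)       # normalize

def cnot(state, control, target):
    """Apply a CNOT by permuting amplitudes: wherever the control bit of a
    basis-state index is 1, its amplitude moves to the index with the
    target bit flipped -- one pass over all 2^n entries."""
    new = state.copy()
    for i in range(len(state)):
        if (i >> control) & 1:
            new[i ^ (1 << target)] = state[i]
    return new

# One gate application visits every one of the 2^3 amplitudes.
print(cnot(state, control=2, target=0))
```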
Early investigators in this field were naturally excited by the potential of such immense computing power, and soon after realizing its potential, the hunt was on to find something interesting for a quantum computer to do. Peter Shor, a research and computer scientist at AT&T's Bell Laboratories in New Jersey, provided such an application by devising the first quantum computer algorithm. Shor's algorithm harnesses the power of quantum superposition to rapidly factor very large numbers (on the order of 10^200 and greater) in a matter of seconds. The premier application of a quantum computer capable of implementing this algorithm lies in the field of encryption, where one common (and very strong) encryption scheme, known as RSA, relies heavily on the difficulty of factoring very large composite numbers into their primes. A computer which can do this easily is naturally of great interest to the numerous government agencies that use RSA -- previously considered to be "uncrackable" -- and to anyone interested in electronic and financial privacy.
Encryption, however, is only one application of a quantum computer. In addition, Shor has put together a toolbox of mathematical operations that can only be performed on a quantum computer, many of which he used in his factorization algorithm. Furthermore, Feynman asserted that a quantum computer could function as a kind of simulator for quantum physics, potentially opening the doors to many discoveries in the field. Currently the power and capability of a quantum computer is primarily theoretical speculation; the advent of the first fully functional quantum computer will undoubtedly bring many new and exciting applications.
Thursday, August 12, 2004
Quantum Computer - An Introduction
What is a Quantum Computer?
Behold your computer. Your computer represents the culmination of years of technological advancement, beginning with the early ideas of Charles Babbage (1791-1871) and the eventual creation of the first computer by German engineer Konrad Zuse in 1941. Surprisingly, however, the high-speed modern computer sitting in front of you is fundamentally no different from its gargantuan 30-ton ancestors, which were equipped with some 18000 vacuum tubes and 500 miles of wiring! Although computers have become more compact and considerably faster in performing their task, the task remains the same: to manipulate and interpret an encoding of binary bits into a useful computational result. A bit is a fundamental unit of information, classically represented as a 0 or 1 in your digital computer. Each classical bit is physically realized through a macroscopic physical system, such as the magnetization on a hard disk or the charge on a capacitor. A document, for example, composed of n characters stored on the hard drive of a typical computer is accordingly described by a string of 8n zeros and ones.

Herein lies a key difference between your classical computer and a quantum computer. Where a classical computer obeys the well-understood laws of classical physics, a quantum computer is a device that harnesses physical phenomena unique to quantum mechanics (especially quantum interference) to realize a fundamentally new mode of information processing. In a quantum computer, the fundamental unit of information (called a quantum bit or qubit) is not binary but rather more quaternary in nature. This qubit property arises as a direct consequence of its adherence to the laws of quantum mechanics, which differ radically from the laws of classical physics. A qubit can exist not only in a state corresponding to the logical state 0 or 1, as in a classical bit, but also in states corresponding to a blend or superposition of these classical states. In other words, a qubit can exist as a zero, a one, or simultaneously as both 0 and 1, with a numerical coefficient representing the probability for each state. This may seem counterintuitive because everyday phenomena are governed by classical physics, not quantum mechanics, which takes over at the atomic level. This rather difficult concept is perhaps best explained through an experiment.
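A minimal numerical sketch of that description (the 0.8/0.2 split is our own illustrative choice): a qubit is a pair of amplitudes whose squared magnitudes give the probabilities of measuring 0 or 1.

```python
import numpy as np

rng = np.random.default_rng(1)

# A qubit: amplitudes (a, b) with |a|^2 + |b|^2 = 1.
# Here an unequal superposition: P(0) = 0.8, P(1) = 0.2.
qubit = np.array([np.sqrt(0.8), np.sqrt(0.2)])

# Measurement collapses the superposition to 0 or 1 with those probabilities.
probs = np.abs(qubit) ** 2
samples = rng.choice([0, 1], size=10_000, p=probs)
print(probs)                  # [0.8 0.2]
print(np.bincount(samples))   # roughly [8000, 2000]
```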
Wednesday, August 11, 2004
Captcha
A CAPTCHA (Completely Automated Public Turing Test to tell Computers and Humans Apart) is a test, used with challenge-response systems, designed to differentiate humans from automated senders. A CAPTCHA sets some task that should be fairly easy for a human to perform but impossible for an e-mail spammer's program. Typically, when a challenge-response system receives an e-mail message, it sends a reply with a URL linking the user to a Web page with a CAPTCHA. A number of different types of test are used; Carnegie Mellon University's CAPTCHA Project Web site has demo versions of several. "Gimpy," for example, is a program that randomly selects seven words from a dictionary and then presents them, somewhat distorted, in an image. The user is asked to type three of the words that appear in the image. Another test, "Pix," presents the user with six images of a single subject, such as babies or horses, and asks the user to identify the subject of the pictures. Once senders have satisfied the CAPTCHA test, their addresses can be added to the recipient's whitelist of permitted senders, so they won't have to prove themselves each time they send a message.
Wednesday, August 04, 2004
e21 - Oxygen MIT
We wish to invert this, to have the power of computation come into the human world. As one example, collaborative work should not mean having to use a computer; it should instead mean people collaborating as they always have, with face-to-face interaction and the richness of human communication. The computation ought to be embedded in that familiar environment of human interaction -- meeting rooms alive with speech, gesture, drawing -- rather than embedding (a limited part of) that interaction in the environment of computation, i.e., people typing, clicking and dragging. Our notion of human-centered computation thus means computation focused on human social interaction in its familiar context.
We aim to make the computation transparent in several senses. It ought to be transparent first in the sense that it feels natural: we aim for new forms of human-computer interaction that make interaction with software feel as familiar and natural as interaction with people. Second, it ought to be transparent because it is embedded, i.e., an invisible part of our everyday environments.
We aim to make the computation intelligent because we believe this is crucial to enabling natural interaction. People find it (relatively!) easy to communicate with each other because of shared knowledge and intelligence. We must give our programs some part of that knowledge and intelligence if interaction with them is to be as easy and natural.
A significant part of the intelligence will be embedded in a variety of interfaces. We are building perceptual interfaces -- eyes and ears -- so that our environments can watch, listen, and talk to us. We are building sketching and gestural interfaces so that they can understand when we draw, point, wave, frown, and look interested.
Our work is organized around a number of specific projects. Work on the intelligent room is focused on equipping spaces with the appropriate hardware (e.g., cameras, projectors, microphone arrays, live-boards) and on developing a major body of infrastructure software -- MetaGlue and the 100 or so computational agents that run the space. Work on perceptual interfaces is focused on building the "eyes and ears" for smart interfaces and environments, endowing the computer with a visual awareness of human users, so that it can pay attention to their location, gesture, and expression. These visual interfaces are being integrated with spoken language understanding systems, enabling seamless use of speech and "body language" to communicate with computational agents.
Monday, August 02, 2004
Oxygen's CORE Project - MIT
The Communication Oriented Routing Environment (CORE) is a device interconnect system for pervasive environments. It is designed to manage and simplify interaction between pervasive computing devices. CORE operates as a platform-independent, application-level overlay network able to use various communication protocols to allow devices to communicate with each other. Devices require little or no modification to join the CORE network, and once joined, can take advantage of an array of tools, such as monitoring, configuring, and restoring the state of the networked environment in which they operate. CORE provides these tools by functioning as both an application-level router and a debugging tool capable of changing its behavior upon request from an attached device.
Current work focuses on the debugging aspect of the system. The goal is to develop a system that lets users explore how changes in a device's behavior can be observed by analyzing its ongoing interactions with CORE at the time the behavior changed.
Sunday, August 01, 2004
Data Motion - Stanford
The goal of the project is to build a new infrastructure, called DataMotion, for managing and analyzing large volumes of dynamic and diverse data. Today, most information systems -- even those handling multiple, distributed, heterogeneous data sources -- are based on stored and relatively static data sets. However, much of the most critical information today is highly dynamic, coming instead in the form of multiple, rapid, time-varying data streams. For example, streams may contain information on banking transactions, telephone calls, observed symptoms in emergency rooms, sensors in scientific experiments, logins at computer servers, or changes to Web pages across the world.

The volume is so high that it is difficult to store all the information in conventional databases. And even if it is possible to store a particular stream in a database, it is then difficult to perform the competing analyses required by diverse and geographically distributed users on this centralized database. A much better approach is to route the stream to the interested users, while in the process filtering according to users' interests, combining the stream with other relevant streams, and performing real-time analysis of the data whenever possible. DataMotion enables such distributed, real-time processing.

Our project consists of four inter-related research thrusts, each addressing an important class of problems: (a) how to perform traditional database and data mining operations on streams; (b) how to generate streams, and how to present streams to users; (c) how to route streams to distributed users with differing information needs; and (d) how to ensure the security and privacy of streams. In addition, we will implement an experimental testbed, using four applications that drive our development: (a) business streams, (b) detection of epidemics, (c) scientific data streams, and (d) monitoring Web changes.
Saturday, July 31, 2004
Agent-based Intelligent Reactive Environments
aire is dedicated to examining how to design pervasive computing systems and applications for people. To study this, aire designs and constructs Intelligent Environments (IEs), which are spaces augmented with basic perceptual sensing, speech recognition, and distributed agent logic.
aire's IEs have encompassed a large range of form factors and sizes, from a pocket-sized computer up to networks of conference rooms. Each of these serves as an individual platform, or airespace, on which pervasive computing applications can be layered. Examples of aire applications currently under development include a meeting manager and capture application, contextual and natural language information retrieval, and a sketch interpretation system.
Further reading: www.ai.mit.edu
Friday, July 30, 2004
Stanford Stream Data Manager
In applications such as network monitoring, telecommunications data management, web personalization, manufacturing, sensor networks, and others, data takes the form of continuous data streams rather than finite stored data sets, and clients require long-running continuous queries as opposed to one-time queries. Traditional database systems and data processing algorithms are ill-equipped to handle complex and numerous continuous queries over data streams, and many aspects of data management and processing need to be reconsidered in their presence. In the STREAM project, we are reinvestigating data management and query processing in the presence of multiple, continuous, rapid, time-varying data streams. We are attacking problems ranging from basic theory results to algorithms to implementing a comprehensive prototype data stream management system.
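A toy sketch of the contrast (a Python generator standing in for a continuous query; the sliding-window average is our own illustrative choice): instead of storing the stream and querying it later, the query runs for as long as data keeps arriving.

```python
from collections import deque

def sliding_avg(stream, window=3):
    """A long-running 'continuous query': emit the average of the
    last `window` items every time a new item arrives."""
    buf = deque(maxlen=window)
    for item in stream:
        buf.append(item)
        yield sum(buf) / len(buf)

# The stream could be unbounded (e.g., sensor readings); the generator
# processes items as they arrive instead of storing the whole stream.
readings = [10, 12, 11, 50, 13, 12]
print(list(sliding_avg(readings)))   # [10.0, 11.0, 11.0, 24.33, 24.67, 25.0]
```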
Peer-to-Peer
Peer-to-peer (P2P) systems have become a popular medium to share huge amounts of data. P2P systems distribute the main costs of sharing data - disk space for storing files and bandwidth for transferring them - across the peers in the network, thus enabling applications to scale without the need for powerful, expensive servers. Their ability to build a resource-rich system by aggregating resources enables them to dwarf the capabilities of many centralized systems for little cost.
There are, however, important challenges that must be overcome before the full potential of P2P systems can be realized. For example, the scale of the network and the autonomy of nodes make it difficult to identify, model and distribute resources that are available. Furthermore, some nodes may be malicious which makes it difficult to provide peers with authentic information or prevent denial-of-service attacks. These issues, and others, have motivated our research on understanding and improving P2P systems.
Thursday, July 29, 2004
Intelligent Crawling
An efficient crawler should have knowledge bases, a crawling algorithm, and an analysis of the crawler's learning ability. The knowledge bases of the crawler are incrementally built from the logs of previous crawls. For efficient results in the next crawl, we present three knowledge bases: starting URLs, topic keywords, and URL prediction. Good starting URLs help the crawler collect as many relevant web pages as possible. Good topic keywords help the crawler recognize appropriate keywords matching the required topic. Good URL prediction helps the crawler predict the relevancy of the content of unvisited URLs. The crawling algorithm is separated into two parts: crawling with no knowledge bases and crawling with knowledge bases. Crawling with no knowledge bases is used in the first crawl, for Internet exploration. The information gathered from the first crawl is accumulated as the experience of the crawler, i.e., the knowledge bases, during consecutive crawls. Crawling with knowledge bases should be used in subsequent crawls for more efficient results and better network bandwidth utilization. - Weblink Team
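A sketch of how a topic-keyword knowledge base might drive URL prediction (the URLs, keyword set, and scoring rule here are all our own illustrative inventions, not the Weblink Team's method):

```python
import heapq

TOPIC_KEYWORDS = {"quantum", "qubit", "algorithm"}   # illustrative topic

def url_score(url, anchor_text):
    """Predict the relevance of an unvisited URL from its text --
    a stand-in for the 'URL prediction' knowledge base."""
    words = set(anchor_text.lower().split()) | set(url.lower().split("/"))
    return len(words & TOPIC_KEYWORDS)

frontier = []   # max-heap via negated scores: best-predicted URLs first
for url, text in [("http://a.example/quantum", "quantum algorithm notes"),
                  ("http://b.example/cooking", "pasta recipes"),
                  ("http://c.example/qubit", "qubit hardware")]:
    heapq.heappush(frontier, (-url_score(url, text), url))

while frontier:
    score, url = heapq.heappop(frontier)
    print(-score, url)   # crawl highest-scoring pages first
```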
Wednesday, July 28, 2004
What is PageRank (Explained)
In short, PageRank is a “vote”, by all the other pages on the Web, about how important a page is. A link to a page counts as a vote of support; if there’s no link, there’s no support (but it’s an abstention from voting rather than a vote against the page).

Quoting from the original Google paper, PageRank is defined like this:

We assume page A has pages T1...Tn which point to it (i.e., are citations). The parameter d is a damping factor which can be set between 0 and 1. We usually set d to 0.85. There are more details about d in the next section. Also C(A) is defined as the number of links going out of page A. The PageRank of a page A is given as follows:

PR(A) = (1-d) + d (PR(T1)/C(T1) + ... + PR(Tn)/C(Tn))

Note that the PageRanks form a probability distribution over web pages, so the sum of all web pages' PageRanks will be one. PageRank or PR(A) can be calculated using a simple iterative algorithm, and corresponds to the principal eigenvector of the normalized link matrix of the web.

But that’s not too helpful, so let’s break it down into sections.

1. PR(Tn) - Each page has a notion of its own self-importance. That’s “PR(T1)” for the first page in the web all the way up to “PR(Tn)” for the last page.

2. C(Tn) - Each page spreads its vote out evenly amongst all of its outgoing links. The count, or number, of outgoing links for page 1 is “C(T1)”, “C(Tn)” for page n, and so on for all pages.

3. PR(Tn)/C(Tn) - So if our page (page A) has a backlink from page “n”, the share of the vote page A will get is “PR(Tn)/C(Tn)”.

4. d(...) - All these fractions of votes are added together but, to stop the other pages having too much influence, this total vote is “damped down” by multiplying it by 0.85 (the factor “d”).

5. (1 - d) - The (1 - d) bit at the beginning is a bit of probability math magic so that the “sum of all web pages' PageRanks will be one”: it adds in the bit lost by the d(...). It also means that if a page has no links to it (no backlinks), it will still get a small PR of 0.15 (i.e., 1 - 0.85). (Aside: the Google paper says “the sum of all pages”, but they mean the “normalised sum” - otherwise known as “the average” to you and me.)
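The formula above translates directly into a short iterative implementation (a sketch: the three-page web and the iteration count are our own illustrative choices):

```python
def pagerank(links, d=0.85, iterations=50):
    """Iterative PageRank using the formula quoted above:
    PR(A) = (1 - d) + d * sum(PR(T) / C(T) for each page T linking to A)."""
    pages = list(links)
    pr = {p: 1.0 for p in pages}            # start every page at PR = 1
    for _ in range(iterations):
        new = {}
        for page in pages:
            backlinks = [t for t in pages if page in links[t]]
            new[page] = (1 - d) + d * sum(pr[t] / len(links[t])
                                          for t in backlinks)
        pr = new
    return pr

# Tiny three-page web: A <-> B, and both link to C.
links = {"A": ["B", "C"], "B": ["A", "C"], "C": []}
print(pagerank(links))
```

With d = 0.85 this converges to roughly PR(A) = PR(B) = 0.26 and PR(C) = 0.37. Note the values do not sum to one; that is exactly the normalization wrinkle the aside above points out about the paper's formula.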
Further reading: The PageRank Vector - Deepak Thukral and Ashish Juneja