Imagine a computer whose memory is exponentially larger than its apparent physical size; a computer that can manipulate an exponential set of inputs simultaneously; a computer that computes in the twilight zone of Hilbert space. You would be thinking of a quantum computer. Relatively few and simple concepts from quantum mechanics are needed to make quantum computers a possibility. The subtlety has been in learning to manipulate these concepts. Is such a computer an inevitability or will it be too difficult to build?
In this tutorial we describe how quantum mechanics can be used to improve computation; our challenge is an exponentially difficult problem for a conventional computer---that of factoring a large number. These ideas are then applied first to classical, dissipationless computers and then to quantum computers. A schematic model of a quantum computer is described, as well as some of the subtleties in its programming. The Shor algorithm [1,2] for efficiently factoring numbers on a quantum computer is presented in two parts: the quantum procedure within the algorithm and the classical algorithm that calls the quantum procedure. The mathematical structure in factoring which makes the Shor algorithm possible is discussed. We conclude with an outlook on the feasibility and prospects for quantum computation in the coming years.
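To preview how the classical part of the algorithm wraps the quantum procedure, here is a minimal sketch in Python; the order-finding step is done by brute force as a stand-in for the quantum subroutine a real implementation would call, and N is assumed to be odd, composite, and not a prime power.

import math
import random

def find_order(a, N):
    # Stand-in for the quantum order-finding subroutine:
    # the smallest r > 0 with a**r = 1 (mod N).
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def factor_via_order_finding(N):
    # Classical reduction: turn order finding into factoring.
    while True:
        a = random.randrange(2, N)
        g = math.gcd(a, N)
        if g > 1:
            return g                    # lucky guess: a already shares a factor
        r = find_order(a, N)            # this is the quantum computer's job
        if r % 2:
            continue                    # odd order: pick another a
        x = pow(a, r // 2, N)
        if x == N - 1:
            continue                    # unusable a: pick another
        return math.gcd(x - 1, N)       # a guaranteed nontrivial factor

print(factor_via_order_finding(15))     # prints 3 or 5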
Let us start by describing the problem at hand: factoring a number N into its prime factors (e.g., the number 51688 may be decomposed as 2^3 x 7 x 13 x 71). A convenient way to quantify how quickly a particular algorithm may solve a problem is to ask how the number of steps to complete the algorithm scales with the size of the "input" the algorithm is fed. For the factoring problem, this input is just the number N we wish to factor; hence the length of the input is log N. (The base of the logarithm is determined by our numbering system. Thus a base of 2 gives the length in binary; a base of 10, in decimal.) "Reasonable" algorithms are ones which scale as some small-degree polynomial in the input size (with a degree of perhaps 2 or 3).
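As a purely classical warm-up, the short Python sketch below factors the example number by trial division and reports the length of the input in bits and in decimal digits; trial division needs of order sqrt(N) steps, which is already exponential in the input length.

def trial_division(N):
    # The most naive factoring method: try every divisor in turn.
    factors, d = [], 2
    while d * d <= N:
        while N % d == 0:
            factors.append(d)
            N //= d
        d += 1
    if N > 1:
        factors.append(N)
    return factors

N = 51688
print(trial_division(N))   # [2, 2, 2, 7, 13, 71]
print(N.bit_length())      # input length in binary: 16 bits
print(len(str(N)))         # input length in decimal: 5 digits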
On conventional computers the best known factoring algorithm runs in roughly exp[(64/9)^(1/3) (ln N)^(1/3) (ln ln N)^(2/3)] steps [3]. This algorithm, therefore, scales exponentially with the input size log N. For instance, in 1994 a 129 digit number (known as RSA129 [3']) was successfully factored using this algorithm on approximately 1600 workstations scattered around the world; the entire factorization took eight months [4]. Using this to estimate the prefactor of the above exponential scaling, we find that it would take roughly 800,000 years to factor a 250 digit number with the same computer power; similarly, a 1000 digit number would require about 10^25 years (significantly longer than the age of the universe). The difficulty of factoring large numbers is crucial for public-key cryptosystems, such as those used by banks. Such codes rely on the difficulty of factoring numbers with around 250 digits.
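To see how quickly this cost grows, one can calibrate the scaling formula above against the eight months spent on RSA129 and extrapolate. The sketch below reproduces the rough figures quoted in the text; it is only an order-of-magnitude estimate, since constant factors and algorithmic details are swept into the single calibration point.

import math

def work(digits):
    # Relative work factor exp[(64/9)^(1/3) (ln N)^(1/3) (ln ln N)^(2/3)]
    # for a number N with the given count of decimal digits.
    lnN = digits * math.log(10)
    return math.exp((64 / 9) ** (1 / 3) * lnN ** (1 / 3) * math.log(lnN) ** (2 / 3))

months_per_unit = 8 / work(129)          # calibrate on RSA129: eight months
for d in (129, 250, 1000):
    years = months_per_unit * work(d) / 12
    print(d, "digits:", f"{years:.2g}", "years")
# 129 digits: ~0.67 years; 250 digits: ~8e5 years; 1000 digits: ~1e25 years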
Recently, an algorithm was developed for factoring numbers on a quantum computer which runs in O((log N)^(2+epsilon)) steps, where epsilon is small [1]. This is roughly quadratic in the input size, so factoring a 1000 digit number with such an algorithm would require only a few million steps. The implication is that public-key cryptosystems based on factoring may be breakable.
To give you an idea of how this exponential improvement might be possible, we review an elementary quantum mechanical experiment that demonstrates where such power may lie hidden [5]. The two-slit experiment is prototypical for observing quantum mechanical behavior: a source emits photons, electrons or other particles that arrive at a pair of slits. These particles undergo unitary evolution and finally measurement. We see an interference pattern, with both slits open, which wholly vanishes if either slit is covered. In some sense, the particles pass through both slits in parallel. If such unitary evolution were to represent a calculation (or an operation within a calculation), then the quantum system would be performing computations in parallel. Quantum parallelism comes for free. The output of this system would be given by the constructive interference among the parallel computations.
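A toy calculation with complex amplitudes makes the role of interference explicit: the relative intensity at a point on the screen is the squared modulus of the sum of the two path amplitudes, not the sum of the two single-slit intensities.

import cmath

def relative_intensity(phase):
    # Amplitudes for reaching the same detector point via slit 1 and slit 2;
    # the relative phase encodes the path-length difference.
    a1 = 1 / 2 ** 0.5
    a2 = cmath.exp(1j * phase) / 2 ** 0.5
    return abs(a1 + a2) ** 2             # both slits open: amplitudes add first

print(relative_intensity(0.0))           # 2.0 -> constructive interference
print(relative_intensity(cmath.pi))      # ~0  -> destructive interference
print(2 * abs(1 / 2 ** 0.5) ** 2)        # 1.0 -> one slit at a time: no fringes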
Tuesday, August 31, 2004
Sunday, August 22, 2004
Computer Clusters
The idea behind Beowulf is to provide COTS (Commodity Off The Shelf) base systems to satisfy specific computational requirements.
The Center of Excellence in Space Data and Information Sciences (CESDIS) was located at the Goddard Space Flight Center in Greenbelt, Maryland. CESDIS was funded in part by the NASA Earth and Space Sciences (ESS) project.
Part of ESS's mission was to determine how well parallel computers could be applied to Earth and Space Science problems. Thus, the first Beowulf was built to handle the large data sets which are frequently used in ESS applications.
Give two current applications for Beowulf clusters (http://www.cacr.caltech.edu/beowulf)
The ASCI Cluster facilitates advancement in virtual test facilities. It deals primarily with small scaling runs, long production runs, and validating code's robustness and/or correctness.
The VizCluster serves to increase the size of images which can be rendered. It does so by combining results from PCs into a single composite image using Ethernet and a novel pixel bus transfer system.
What type of computers/processors were used in the first Beowulf cluster? What operating system? How were they connected? (http://www.beowulf.org)
The first Beowulf cluster was built from DX4 processors connected by 10 Mbit/s Ethernet, and the machines ran Linux. Because the processors would have been limited by a single Ethernet, and Ethernet switches were very expensive at the time, a "channel bonded" Ethernet was created for the operating system, whereby network traffic was spread across multiple Ethernets.
Where is the Brahma Cluster located? What department is the sponsor for this cluster?
The Brahma cluster is located at Duke University in Durham, North Carolina. It is sponsored by the Physics department.
Saturday, August 21, 2004
Exploring the exokernel
An exokernel allocates and deallocates hardware at the most basic level; it doesn't provide abstractions for applications to use. From these basic hardware primitives, applications can implement the traditional operating system abstractions, but can do so in a highly specialized manner to meet specific speed and functionality goals.
By abstracting physical hardware resources, traditional operating systems limit the performance, flexibility, and functionality of applications.
The exokernel architecture removes these limitations by allowing untrusted software to implement traditional operating system abstractions entirely at application level.
The key point of all exokernels is that protection and management are kept separate. Thus, the library operating systems are isolated from each other, which allows application libraries to be as powerful as a traditional privileged operating system.
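Very loosely -- and only as a toy Python sketch, not a description of any real exokernel's interface -- the separation of protection from management can be pictured like this: the kernel hands out and reclaims raw resources and remembers who owns what, while each library operating system builds its own abstractions on top.

class Exokernel:
    # Toy exokernel: securely allocates raw pages and nothing more.
    # It tracks ownership (protection) but imposes no file or process abstractions.
    def __init__(self, num_pages):
        self.free_pages = list(range(num_pages))
        self.owner = {}

    def alloc_page(self, lib_os):
        page = self.free_pages.pop()
        self.owner[page] = lib_os          # protection: remember who owns what
        return page

    def free_page(self, lib_os, page):
        assert self.owner[page] is lib_os  # only the owner may release a page
        del self.owner[page]
        self.free_pages.append(page)

class LibraryOS:
    # Toy library OS: implements a "file" abstraction entirely at application
    # level (management), using whatever raw pages it requests from the kernel.
    def __init__(self, kernel):
        self.kernel, self.files = kernel, {}

    def write_file(self, name, data):
        self.files[name] = (self.kernel.alloc_page(self), data)

    def read_file(self, name):
        return self.files[name][1]

kernel = Exokernel(num_pages=8)
os_a, os_b = LibraryOS(kernel), LibraryOS(kernel)   # isolated from each other
os_a.write_file("notes.txt", "hello")
print(os_a.read_file("notes.txt"))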
Friday, August 20, 2004
exokernel
The exokernel, a novel operating system architecture, has been the framework for much parallel and distributed OS research. An exokernel eliminates the high-level abstractions most of today's operating systems are built on, instead concentrating on securely and efficiently multiplexing the raw hardware. A more conventional operating system interface is provided by application-level libraries. This allows untrusted applications to safely override default OS functionality for extensibility, flexibility, or performance.
Thursday, August 19, 2004
TinyDB
TinyDB is a query processing system for extracting information from a network of TinyOS sensors.
Unlike existing solutions for data processing in TinyOS, TinyDB does not require you to write embedded C code for sensors. Instead, TinyDB provides a simple, SQL-like interface to specify the data you want to extract, along with additional parameters, like the rate at which data should be refreshed -- much as you would pose queries against a traditional database.
Given a query specifying your data interests, TinyDB collects that data from motes in the environment, filters it, aggregates it together, and routes it out to a PC. TinyDB does this via power-efficient in-network processing algorithms.
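For a feel of the interface, the sketch below shows the kind of SQL-like query TinyDB accepts, posed from a small host-side Python stand-in. The query syntax is schematic and run_tinydb_query is a hypothetical placeholder, not the real TinyDB host tooling.

# The SAMPLE PERIOD controls how often each mote reports (here roughly once a second).
QUERY = """
    SELECT nodeid, light, temp
    FROM sensors
    SAMPLE PERIOD 1024
"""

def run_tinydb_query(query):
    # Hypothetical placeholder: hand the query to the TinyDB host tools and
    # yield (nodeid, light, temp) tuples as results arrive from the motes.
    raise NotImplementedError("stand-in for the real TinyDB host interface")

# for nodeid, light, temp in run_tinydb_query(QUERY):
#     print(nodeid, light, temp)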
Wednesday, August 18, 2004
Quantum Computing History
When quantum computers were first proposed in the 1970s and 1980s (by theorists such as the late Richard Feynman of the California Institute of Technology, Pasadena, Calif.; Paul Benioff of Argonne National Laboratory in Illinois; David Deutsch of Oxford University in England; and Charles Bennett of IBM's T.J. Watson Research Center, Yorktown Heights, N.Y.), many scientists doubted that they could ever be made practical. But in 1994, Peter Shor of AT&T Research described a specific quantum algorithm for factoring large numbers exponentially faster than conventional computers -- fast enough to defeat the security of many public-key cryptosystems. Shor's algorithm spurred many scientists to work toward realizing a practical quantum computer, and significant progress has been made in recent years by numerous research groups around the world.
While at IBM, Chuang extended his reputation as one of the world's leading quantum computing experimentalists. He led the team that demonstrated the world's first 2-qubit quantum computer (in 1998 at the University of California, Berkeley). At IBM-Almaden, Chuang and colleagues were the first to demonstrate important quantum computing algorithms -- Grover's database-search algorithm in 1999 with a 3-qubit quantum computer and order finding in August 2000 with a 5-qubit quantum computer. The factorization using Shor's algorithm, announced in late 2001, is the most complex algorithm yet to be demonstrated by a quantum computer.
In addition to its ambitious experimental program, IBM Research is also noted for its many theoretical contributions to the emerging field of quantum information. IBM scientists pioneered quantum cryptography, quantum communications (including the concept of quantum teleportation) and efficient methods of error correction. David DiVincenzo, research staff member at IBM's Watson lab, has promulgated the five criteria necessary for a practical quantum computer:
1. a scalable physical system with well-characterized qubits;
2. the ability to initialize the qubit state;
3. decoherence times much longer than the quantum gate operation time;
4. a universal set of quantum gates; and
5. the ability to measure specific qubits.
Friday, August 13, 2004
Power of Quantum Computing
In a traditional computer, information is encoded in a series of bits, and these bits are manipulated via Boolean logic gates arranged in succession to produce an end result. Similarly, a quantum computer manipulates qubits by executing a series of quantum gates, each a unitary transformation acting on a single qubit or pair of qubits. In applying these gates in succession, a quantum computer can perform a complicated unitary transformation on a set of qubits in some initial state. The qubits can then be measured, with this measurement serving as the final computational result. This similarity in calculation between a classical and a quantum computer means that, in theory, a classical computer can accurately simulate a quantum computer; in other words, a classical computer would be able to do anything a quantum computer can. So why bother with quantum computers? Although a classical computer can theoretically simulate a quantum computer, it is incredibly inefficient, so much so that a classical computer is effectively incapable of performing many tasks that a quantum computer could perform with ease. The simulation of a quantum computer on a classical one is a computationally hard problem because the correlations among quantum bits are qualitatively different from correlations among classical bits, as first explained by John Bell. Take, for example, a system of only a few hundred qubits: it exists in a Hilbert space of dimension ~10^90, and simulating it would require a classical computer to work with exponentially large matrices (to perform calculations on each individual state, which is also represented as a matrix), meaning it would take an exponentially longer time than even a primitive quantum computer.
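The exponential cost of classical simulation is easy to see in a direct state-vector simulation: n qubits need a vector of 2^n complex amplitudes, and every gate acts on that whole vector. A minimal NumPy sketch for two qubits:

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # controlled-NOT on two qubits

state = np.zeros(4)                             # 2 qubits -> 2**2 = 4 amplitudes
state[0] = 1.0                                  # start in |00>
state = np.kron(H, I) @ state                   # Hadamard on the first qubit
state = CNOT @ state                            # entangle the pair
print(state)                                    # [0.707 0 0 0.707]: a Bell state
print(np.abs(state) ** 2)                       # measurement probabilities

# For n qubits the vector has 2**n entries, so a few hundred qubits is hopeless:
print(2.0 ** 50, "amplitudes already needed for just 50 qubits")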
Richard Feynman was among the first to recognize the potential of quantum superposition for solving such problems much, much faster. For example, a system of 500 qubits, which is impossible to simulate classically, represents a quantum superposition of as many as 2^500 states. Each state would be classically equivalent to a single list of 500 1's and 0's. Any quantum operation on that system -- a particular pulse of radio waves, for instance, whose action might be to execute a controlled-NOT operation on the 100th and 101st qubits -- would simultaneously operate on all 2^500 states. Hence with one fell swoop, one tick of the computer clock, a quantum operation could compute not just on one machine state, as serial computers do, but on 2^500 machine states at once! Eventually, however, observing the system would cause it to collapse into a single quantum state corresponding to a single answer, a single list of 500 1's and 0's, as dictated by the measurement axiom of quantum mechanics. This is an exciting result because the answer, derived from the massive quantum parallelism achieved through superposition, is the equivalent of performing the same operation on a classical supercomputer with ~10^150 separate processors (which is of course impossible)!
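The magnitudes quoted above are easy to check with Python's arbitrary-precision integers:

import math

print(len(str(2 ** 500)))     # 151 decimal digits, i.e. 2**500 is about 10**150
print(math.log10(2 ** 300))   # ~90: a few hundred qubits already live in a
                              # Hilbert space of dimension around 10**90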
Early investigators in this field were naturally excited by the potential of such immense computing power, and soon after realizing its potential, the hunt was on to find something interesting for a quantum computer to do. Peter Shor, a research and computer scientist at AT&T's Bell Laboratories in New Jersey, provided such an application by devising the first quantum computer algorithm. Shor's algorithm harnesses the power of quantum superposition to rapidly factor very large numbers (on the order of 10^200 and greater) in a matter of seconds. The premier application of a quantum computer capable of implementing this algorithm lies in the field of encryption, where one common (and highly effective) encryption scheme, known as RSA, relies heavily on the difficulty of factoring very large composite numbers into their primes. A computer which can do this easily is naturally of great interest to the numerous government agencies that use RSA -- previously considered to be "uncrackable" -- and to anyone interested in electronic and financial privacy.
Encryption, however, is only one application of a quantum computer. In addition, Shor has put together a toolbox of mathematical operations that can only be performed on a quantum computer, many of which he used in his factorization algorithm. Furthermore, Feynman asserted that a quantum computer could function as a kind of simulator for quantum physics, potentially opening the doors to many discoveries in the field. Currently the power and capability of a quantum computer is primarily theoretical speculation; the advent of the first fully functional quantum computer will undoubtedly bring many new and exciting applications.
Thursday, August 12, 2004
Quantum Computer - An Introduction
What is a Quantum Computer?
Behold your computer. Your computer represents the culmination of years of technological advancement, beginning with the early ideas of Charles Babbage (1791-1871) and the eventual creation of the first computer by German engineer Konrad Zuse in 1941. Surprisingly, however, the high speed modern computer sitting in front of you is fundamentally no different from its gargantuan 30 ton ancestors, which were equipped with some 18000 vacuum tubes and 500 miles of wiring! Although computers have become more compact and considerably faster in performing their task, the task remains the same: to manipulate and interpret an encoding of binary bits into a useful computational result. A bit is a fundamental unit of information, classically represented as a 0 or 1 in your digital computer. Each classical bit is physically realized through a macroscopic physical system, such as the magnetization on a hard disk or the charge on a capacitor. A document, for example, comprising n characters stored on the hard drive of a typical computer is accordingly described by a string of 8n zeros and ones. Herein lies a key difference between your classical computer and a quantum computer. Whereas a classical computer obeys the well understood laws of classical physics, a quantum computer is a device that harnesses physical phenomena unique to quantum mechanics (especially quantum interference) to realize a fundamentally new mode of information processing. In a quantum computer, the fundamental unit of information (called a quantum bit or qubit) is not binary but rather more quaternary in nature. This property arises as a direct consequence of the qubit's adherence to the laws of quantum mechanics, which differ radically from the laws of classical physics. A qubit can exist not only in a state corresponding to the logical state 0 or 1, as in a classical bit, but also in states corresponding to a blend or superposition of these classical states. In other words, a qubit can exist as a zero, a one, or simultaneously as both 0 and 1, with a numerical coefficient representing the probability for each state. This may seem counterintuitive because everyday phenomena are governed by classical physics, not quantum mechanics -- which takes over at the atomic level. This rather difficult concept is perhaps best explained through an experiment.
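To make the "numerical coefficient representing the probability" concrete: a qubit's state is a pair of complex amplitudes (alpha, beta) with |alpha|^2 + |beta|^2 = 1, and measuring the qubit gives 0 with probability |alpha|^2 and 1 with probability |beta|^2. A minimal sketch:

import numpy as np

alpha, beta = np.sqrt(0.25), np.sqrt(0.75)   # the state 0.5|0> + 0.866|1>
assert np.isclose(abs(alpha) ** 2 + abs(beta) ** 2, 1.0)

# Simulate many measurements: roughly 25% zeros and 75% ones.
rng = np.random.default_rng(0)
outcomes = rng.choice([0, 1], size=10000, p=[abs(alpha) ** 2, abs(beta) ** 2])
print(outcomes.mean())                       # close to 0.75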
Wednesday, August 11, 2004
Captcha
A CAPTCHA (Completely Automated Public Turing Test to tell Computers and Humans Apart) is a test, used with challenge-response systems, that's designed to differentiate humans from automated senders. A CAPTCHA sets some task that should be fairly easy for a human to perform, but impossible for an e-mail spammer's program. Typically, when a challenge-response system receives an e-mail message, it sends a reply with a URL linking the user to a Web page with a CAPTCHA. There are a number of different types of test used. Carnegie Mellon University's CAPTCHA Project Web site has demo versions of several tests. "Gimpy," for example, is a program that randomly selects seven words from a dictionary and then presents them, somewhat distorted, in an image; the user is asked to type three of the words that appear in the image. Another test, "Pix," presents the user with six images of a single subject, such as babies or horses, and asks the user to identify the common subject of the pictures. Once senders have satisfied the CAPTCHA test, their addresses can be added to the recipient's whitelist of permitted senders, so they won't have to prove themselves each time they send a message.
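As a rough illustration of the idea only (not Carnegie Mellon's actual Gimpy code), a toy distorted-text challenge can be produced with the Python Imaging Library by drawing a word and overlaying noise, so that a simple program has a much harder time reading it back than a person does.

import random
from PIL import Image, ImageDraw

def make_captcha(word, size=(200, 60)):
    # Toy CAPTCHA: draw the word with jittered letters, then add noise.
    img = Image.new("RGB", size, "white")
    draw = ImageDraw.Draw(img)
    x = 10
    for ch in word:
        draw.text((x, 20 + random.randint(-8, 8)), ch, fill="black")
        x += 18
    for _ in range(8):                           # obscuring lines
        draw.line([(random.randint(0, size[0]), random.randint(0, size[1])),
                   (random.randint(0, size[0]), random.randint(0, size[1]))],
                  fill="gray")
    for _ in range(300):                         # speckle noise
        draw.point((random.randint(0, size[0]), random.randint(0, size[1])),
                   fill="gray")
    return img

make_captcha("quantum").save("captcha.png")
# The server stores the word and compares it with whatever the sender types back.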
Wednesday, August 04, 2004
e21 - Oxygen MIT
E21 wishes to invert this, to have the power of computation come into the human world. As one example, collaborative work should not mean having to use a computer; it should instead mean people collaborating as they always have, with face-to-face interaction and the richness of human communication and interaction. The computation ought to be embedded in that familiar environment of human interaction -- meeting rooms alive with speech, gesture, drawing -- rather than embedding (a limited part of) that interaction in the environment of computation, i.e., people typing, clicking and dragging. This notion of human-centered computation thus means computation focused on human social interaction in its familiar context.
E21 aims to make the computation transparent in several senses. It ought to be transparent first in the sense that it feels natural: the group is pursuing new forms of human-computer interaction that make interaction with software feel as familiar and natural as interaction with people. Second, it ought to be transparent because it is embedded, i.e., an invisible part of our everyday environments.
E21 also aims to make the computation intelligent, because the group believes this is crucial to enabling natural interaction. People find it (relatively!) easy to communicate with each other because of shared knowledge and intelligence; programs must be given some part of that knowledge and intelligence if interaction with them is to be as easy and natural.
A significant part of the intelligence will be embedded in a variety of interfaces. The group is building perceptual interfaces -- eyes and ears -- so that its environments can watch, listen, and talk to us, as well as sketching and gestural interfaces so that they can understand when we draw, point, wave, frown, and look interested.
The work is organized around a number of specific projects. Work on the intelligent room is focused on equipping spaces with the appropriate hardware (e.g., cameras, projectors, microphone arrays, live-boards) and on developing a major body of infrastructure software -- MetaGlue and the 100 or so computational agents that run the space. Work on perceptual interfaces is focused on building the "eyes and ears" for smart interfaces and environments, endowing the computer with a visual awareness of human users, so that it can pay attention to their location, gesture, and expression. These visual interfaces are being integrated with spoken language understanding systems, enabling seamless use of speech and "body language" to communicate with computational agents.
Monday, August 02, 2004
Oxygen's CORE Project - MIT
The Communication Oriented Routing Environment (CORE) is a device interconnect system for pervasive environments. It is designed to manage and simplify interaction between pervasive computing devices. CORE operates as a platform-independent, application-level overlay network able to use various communication protocols to allow devices to communicate with each other. Devices require little or no modification to join the CORE network, and once joined, can take advantage of an array of tools, such as monitoring, configuring, and restoring the state of the networked environment in which they operate. CORE provides these tools by functioning as both an application-level router and a debugging tool capable of changing its behavior upon request from an attached device.
Current work focuses on the debugging aspect of the system. The goal is to develop a system that lets users explore how changes in a device's behavior can be observed by analyzing the device's ongoing interactions with CORE at the time its behavior changed.
Sunday, August 01, 2004
Data Motion - Stanford
The goal of the project is to build a new infrastructure, called DataMotion, for managing and analyzing large volumes of dynamic and diverse data. Today, most information systems---even those handling multiple, distributed, heterogeneous data sources---are based on stored and relatively static data sets. However, much of the most critical information today is highly dynamic, coming instead in the form of multiple, rapid, time-varying data streams. For example, streams may contain information on banking transactions, telephone calls, observed symptoms in emergency rooms, sensors in scientific experiments, logins at computer servers, or changes to Web pages across the world. The volume is so high that it is difficult to store all the information in conventional databases. And even if it is possible to store a particular stream in a database, it is then difficult to perform the competing analyses required by diverse and geographically distributed users on this centralized database. A much better approach is to route the stream to the interested users, while in the process filtering according to users' interests, combining the stream with other relevant streams, and performing real-time analysis of the data whenever possible. DataMotion enables such distributed, real-time processing.

The project consists of four inter-related research thrusts, each addressing an important class of problems: (a) how to perform traditional database and data mining operations on streams; (b) how to generate streams, and how to present streams to users; (c) how to route streams to distributed users with differing information needs; and (d) how to ensure the security and privacy of streams. In addition, the group will implement an experimental testbed, using four applications that drive its development: (a) business streams, (b) detection of epidemics, (c) scientific data streams, and (d) monitoring Web changes.
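The core idea -- routing a stream past filters and lightweight analyses on its way to interested users, rather than parking everything in a central database first -- can be sketched with Python generators. This is purely illustrative: DataMotion itself is a distributed infrastructure, and the event source below is made up.

import itertools
import random

def transaction_stream():
    # Pretend source: an unbounded stream of (account, amount) events.
    while True:
        yield {"account": random.randint(1, 5),
               "amount": random.uniform(-500, 500)}

def filter_stream(stream, predicate):
    # Forward only the events a particular user has asked for.
    return (event for event in stream if predicate(event))

def running_average(stream, key, window):
    # Simple real-time analysis performed on the moving data, not on a database.
    total = 0.0
    for n, event in enumerate(itertools.islice(stream, window), start=1):
        total += event[key]
        yield total / n

# One user subscribes only to large withdrawals from account 3.
alerts = filter_stream(transaction_stream(),
                       lambda e: e["account"] == 3 and e["amount"] < -400)
print(next(alerts))

# Another user watches a short running average over the same kind of source.
for avg in running_average(transaction_stream(), "amount", window=5):
    print(avg)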