1998 — 2004
Jamin, Sugih
PECASE: An Empirical Investigation of the Dynamics of Measurement-Based Admission Control Algorithms Within An Economic Framework @ University of Michigan Ann Arbor
On today's Internet, packets are delivered in a best-effort manner. The Internet Engineering Task Force (IETF) is standardizing support for realtime packet delivery on the Internet. The viability of relaxed, as opposed to hard-guarantee, realtime services depends fundamentally on the ability of measurement-based admission control algorithms to determine which connections can be admitted to the network without violating service specifications. Previous research has demonstrated the viability of measurement-based admission control algorithms and relaxed realtime services. However, two important questions remain. First, independent of any specific algorithm, what traffic characteristics can serve as primary indicators to guide the setting of an algorithm's parameters? Second, can global inefficiencies resulting from the local nature of independent admission decisions be minimized? This research proposes to use the equivalent token-bucket filter of aggregate traffic to guide the setting of the admission control algorithm's parameters, and to apply a market-based approach to global resource allocation. While there is a sizeable body of literature on the application of market-based control to the allocation of computing resources, no previous work has been reported on distributed resource allocation that involves multiple resources and takes into account transaction costs, i.e. the cost to negotiate and execute a transaction. Further information on this project is available from http://irl.eecs.umich.edu/jamin/research/CAREER.html.
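The core idea can be illustrated with a minimal sketch: fit a token bucket (rate, depth) that would have passed the measured aggregate, then test a new flow against link capacity and a delay bound. The parameter names and the admission rule below are illustrative assumptions, not the specific algorithms under study in the proposal.

# Sketch: admission control guided by an equivalent token-bucket filter
# fitted to measured aggregate traffic (illustrative, not the proposal's
# actual algorithm).
from dataclasses import dataclass

@dataclass
class TokenBucket:
    rate: float   # sustained rate, bits per second
    depth: float  # maximum burst size, bits

def equivalent_bucket(samples: list[float], interval: float) -> TokenBucket:
    """Fit a token bucket that would have passed the measured samples.

    `samples` are bits observed per measurement `interval` (seconds).
    The fitted rate is the mean arrival rate; the depth is the largest
    backlog a bucket drained at that rate would have accumulated.
    """
    rate = sum(samples) / (len(samples) * interval)
    backlog, depth = 0.0, 0.0
    for bits in samples:
        backlog = max(0.0, backlog + bits - rate * interval)
        depth = max(depth, backlog)
    return TokenBucket(rate, depth)

def admit(measured: TokenBucket, new_flow: TokenBucket,
          capacity: float, delay_bound: float) -> bool:
    """Admit the new flow if rates fit and the combined burst can be
    drained within the delay bound at link capacity (illustrative test)."""
    total_rate = measured.rate + new_flow.rate
    total_burst = measured.depth + new_flow.depth
    return total_rate < capacity and total_burst / capacity <= delay_bound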
1999 — 2002
Jamin, Sugih
An Architecture For a Global Internet Host Distance Estimation Service @ University of Michigan Ann Arbor
It is increasingly the case that a given Internet interaction could be satisfied by one of a number of Internet hosts. Examples range from short-lived interactions such as a single web page access to any one of multiple equal-content web servers to a long-term peering relationship between two news (NNTP) servers.
In any such interaction, all other things being equal, it is advantageous to access the "nearest" choice. By near we mean in terms of Internet performance metrics, such as low latency or high bandwidth. Even when all other things are not equal, such as the case where different web servers have different response times, it is still useful to include distance to each candidate host as one of several criteria for making a selection.
One approach to obtaining this distance information is for the initiating host to measure it itself, using either unicast (ping, traceroute) or multicast (expanding ring search) tools. While these tools have a wide range of uses, their utility is generally limited by their overhead. For instance, the cost of running a single traceroute can exceed the cost of the web page access itself. More important still, a large number of hosts making independent and frequent measurements could have a severe impact on performance overall. Ideally, measurements made by one system (host or router) should be made available, at low cost, to other hosts.
A useful general service for the Internet would be one whereby a host could quickly and efficiently learn the distance between any two hosts. To be widely useful, such a service should provide an answer with a delay and overhead smaller than the gains achieved by using the service. A simple protocol for such a service (SONAR) was discussed in the IETF (Internet Engineering Task Force) as early as February 1996, and in April 1997 as a more general service called HOPS (Host Proximity Service). Both of these efforts proposed lightweight client/server query/reply protocols along the lines of a DNS (Domain Name System) query/reply. Both also required that the server be able to produce an answer in a very short time, preferably, though not necessarily, by using information already stored locally.
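For illustration, such a DNS-like lookup might look like the following sketch. The wire format ("DIST a b" answered with the pair and an estimate) and the server address are invented here; neither the SONAR nor the HOPS discussion fixed a concrete protocol in this abstract.

# Sketch: a SONAR/HOPS-style query/reply over UDP, analogous to a DNS
# lookup. The message format is hypothetical.
import socket

def query_distance(server: tuple[str, int], host_a: str, host_b: str,
                   timeout: float = 0.5) -> float:
    """Ask a distance server for the estimated distance between two hosts."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)  # the reply must be cheap and fast to be useful
        sock.sendto(f"DIST {host_a} {host_b}".encode(), server)
        reply, _ = sock.recvfrom(512)
        # Reply echoes the pair and appends the estimate, e.g. "a b 42.0"
        return float(reply.decode().split()[-1])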
This proposal is concerned with the problem of how servers in such a SONAR/HOPS service can obtain the distance information needed to answer queries. Specifically, we explore the following questions:
- Which systems originally produce the distance information, and how is it produced?
- How does the distance information get from these producing systems to the servers?
- What form does the distance information take, and how is it used to produce answers for specific pairs of Internet hosts?
After discussing basic aspects of the questions outlined above, this proposal presents a general architecture for an underlying service that provides the basic information used by a SONAR/HOPS service. This underlying service is called IDMaps, for Internet Distance Map Service.
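One plausible way such a service could compose pairwise answers from a small set of measurements is sketched below, under the assumption that distances are kept between hosts and nearby measurement boxes ("tracers") and among the tracers themselves; the proposal itself leaves the architecture as an open question, and the data structures here are hypothetical.

# Sketch: composing a host-to-host estimate from host-to-tracer and
# tracer-to-tracer segments (an assumed scheme, for illustration only).
def estimate(host_a: str, host_b: str,
             host_to_tracer: dict[str, tuple[str, float]],
             tracer_to_tracer: dict[tuple[str, str], float]) -> float:
    """Estimate the distance between two hosts via their nearest tracers."""
    tracer_a, d_a = host_to_tracer[host_a]  # nearest tracer and distance to it
    tracer_b, d_b = host_to_tracer[host_b]
    if tracer_a == tracer_b:
        return d_a + d_b
    key = (min(tracer_a, tracer_b), max(tracer_a, tracer_b))
    return d_a + tracer_to_tracer[key] + d_b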
This work is being proposed in parallel by the PI, Sugih Jamin, and the co-PI, Lixia Zhang. Sugih Jamin's prior research has been in Internet traffic characterization and measurement-based admission control. Lixia Zhang has been an active participant in research on scalable reliable multicast. The PIs will work closely with Paul Francis of NTT Software Labs and Vern Paxson of LBNL. Paul Francis has been active on research in distributed hierarchy construction algorithms. Vern Paxson is active in Internet performance measurement and is the PI of the NSF-funded National Internet Measurement Infrastructure (NIMI) effort. IDMaps can be built on the NIMI substrate. Letters from Paul Francis and Vern Paxson confirming these collaborations are attached to this proposal.
2000 — 2004
Willinger, Walter; Jamin, Sugih
How to Generate Random Topologies With Internet-Like Characteristics @ University of Michigan Ann Arbor
The exponential growth of the number of Internet hosts has been well documented in the trade press. The research community, however, has not seen many systematic empirical studies of how the Internet topology evolves over time and in space. Most recently, the authors of [FFF99] reported several power-law relationships observed in Autonomous Systems' (AS) connectivity degrees, degree frequencies, and the neighborhood size within any given hop count from an AS. This pioneering work represents a first important step toward a better understanding of the dynamic nature of the actual Internet topology. The need for realistic random topologies in simulations has long been recognized by researchers working on routing and multicast protocols, e.g. [BE90, ZGLA91, WE94]; more recently, the need for realistic random topologies has also been voiced by researchers studying traffic dynamics and protocol behavior [MS94, FGHW99, F+00]. In recognition of this, several topology generators have been proposed in the literature. The most recent one, proposed in [J+00] and called Inet, takes advantage of the power-law relationships reported in [FFF99] in its construction of random topologies. The preliminary work conducted as part of this research and reported in this proposal shows exponential growth over time in the frequency of every outdegree, the outdegree of every rank, and the neighborhood size within any given hop count from an AS. The preliminary results also show that only the random topologies generated by the Inet model have power-law relationships similar to those of the Internet. Unfortunately, these random topologies do not exhibit the exponential growth observed in the Internet. A new Inet topology generator, called New Inet, was constructed to generate topologies exhibiting both the power-law relationships and the exponential growth rates over time.
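For concreteness, the degree-assignment step of an Inet-style generator can be sketched as follows, assuming the rank-outdegree power law of [FFF99] with an illustrative exponent and constant; the random stub matching below ignores the frequency and hop-plot laws a real generator must also satisfy.

# Sketch: assign power-law outdegrees by rank, then wire stubs randomly
# (exponent, constant, and matching scheme are illustrative assumptions).
import random

def power_law_degrees(n: int, exponent: float = -0.8, c: float = 50.0) -> list[int]:
    """Outdegree of the node ranked i (1 = best connected): max(1, C * i**exponent)."""
    return [max(1, round(c * rank ** exponent)) for rank in range(1, n + 1)]

def wire(degrees: list[int]) -> set[tuple[int, int]]:
    """Randomly pair up degree stubs, discarding self-loops and duplicates."""
    stubs = [node for node, d in enumerate(degrees) for _ in range(d)]
    random.shuffle(stubs)
    edges = set()
    for a, b in zip(stubs[::2], stubs[1::2]):
        if a != b:
            edges.add((min(a, b), max(a, b)))
    return edges

degrees = power_law_degrees(3000)
print(len(wire(degrees)), "edges for 3000 ASs")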
The research proposed here consists of three parts:
1. To investigate whether or not proposed topology models are in fact capturing the essence of certain underlying network design mechanisms or engineering constraints that result in random topologies that perforce exhibit many of the empirically observed phenomena. To this end the PIs focus on two particular approaches concerning the emergence of scaling phenomena associated with Internet-like graph structures, in the context of the models of Barabasi and Albert [BAJ99, BA99] and of Carlson and Doyle [CD99a, CD99b].
2. An in-depth analysis of the properties of connectivity graphs at the intra-AS level. In particular, the PIs are interested in large ASs spanning wide geographic areas.
3. To determine if trees constructed from a given graph can serve as a "fingerprint" of the graph from which various properties of the graph can be derived, and to model policy routing on the Internet as such trees. In particular, the PIs want to check whether these tree structures exhibit scaling laws that have been found ubiquitous in the context of river networks, such as the Horton-Strahler laws [Hor45, Str57] (a sketch of the Horton-Strahler computation follows below).
The PIs propose to continue studying AS-level connectivity data made available by the National Laboratory for Applied Network Research (NLANR) for a better understanding of how ASs connect to each other, how this connectivity changes over time, and how each individual AS can be modeled in long-running simulations. Additionally, the PIs also propose to model router-level connectivity within ASs. For this, the PIs have access to intra-AS connectivity information from the AT&T WorldNet backbone. Funding for two graduate student research assistants was requested in this proposal. Both students will spend the academic year at the University of Michigan under the supervision of the PI, Sugih Jamin. They will help the PIs carry out research on AS-level connectivity, intra-AS connectivity, and source-rooted trees constructed from traceroute data, and improve the New Inet random topology generator. During summers, one or both of the students will visit AT&T Labs-Research in Florham Park under the supervision of the co-PI, Walter Willinger, to collect and study data on the AT&T WorldNet backbone. To improve the collaboration, the PI and co-PI may also visit each other's site during the project.
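The Horton-Strahler order referred to in part 3 is computed bottom-up on a rooted tree: a leaf has order 1, and a node whose children's maximum order k occurs more than once has order k + 1, otherwise order k. A minimal sketch, with a hypothetical traceroute tree:

# Sketch: Horton-Strahler order of a source-rooted tree (children given
# as adjacency lists). The example tree is hypothetical.
def strahler(tree: dict[str, list[str]], node: str) -> int:
    children = tree.get(node, [])
    if not children:
        return 1
    orders = [strahler(tree, c) for c in children]
    top = max(orders)
    return top + 1 if orders.count(top) > 1 else top

t = {"src": ["r1", "r2"], "r1": ["a", "b"], "r2": ["c"]}
print(strahler(t, "src"))  # -> 2: r1 has order 2, r2 has order 1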
2003 — 2004
Laird, John; Jamin, Sugih
Development of a Pedagogical Computer Game Engine Library in Support of Computer Science Education @ University of Michigan Ann Arbor
This is a proof-of-concept project in which software modules that support the development of a variety of computer game genres are developed. These software modules can be used to teach undergraduates computer game design and development. The software can also serve as a shell for undergraduate exploration of many computer science disciplines: students can replace one of the modules with their own, allowing them to concentrate on a specific discipline, ranging from basic data structures and algorithms to database management, concurrent programming, network protocol design, distributed systems and security, graphics programming, user interface design, and artificial intelligence algorithms, while still being able to create a complete computer game. The computer gaming context provides a very strong motivational construct to which most computer science students have a natural affinity. Placing these disciplines within the construct of a computer game helps students concretize and apply the concepts studied. The software is developed by University of Michigan undergraduate and graduate students. The results and software of this project are disseminated to the broader computer games and computer science community via appropriate conferences and the Internet. At the University of Michigan, the PI has been offering a Computer Games course since 1997 and the co-PI has been teaching a Computer Networks course. Both PIs have developed pedagogical software packages in their respective fields that have been used by faculty at other universities nationwide.
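The module-swapping idea can be sketched as follows: the engine programs against a small interface, so a student can substitute one subsystem (here, an AI module) without touching the rest. The class and method names are invented for illustration and do not reflect the project's actual module boundaries.

# Sketch: a pluggable engine module behind a small interface
# (hypothetical names, for illustration only).
from abc import ABC, abstractmethod

class AIModule(ABC):
    @abstractmethod
    def choose_action(self, game_state: dict) -> str:
        """Return the next action for a computer-controlled player."""

class ChaseAI(AIModule):
    """Default module shipped with the engine (hypothetical)."""
    def choose_action(self, game_state: dict) -> str:
        return "move_toward_player"

class StudentAI(AIModule):
    """A student's drop-in replacement exercising search or planning."""
    def choose_action(self, game_state: dict) -> str:
        return "flank_player"

def game_tick(ai: AIModule, game_state: dict) -> str:
    # The engine only sees the interface, so either module works unchanged.
    return ai.choose_action(game_state)

print(game_tick(ChaseAI(), {}), "/", game_tick(StudentAI(), {}))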