Projects

OpenFlow

OpenFlow is an open standard designed to greatly enhance networks’ flexibility. We are using OpenFlow to explore ways to safely conduct networking research in production networks, develop mobility managers for today’s increasingly mobile life, and even save power in energy-hungry data center networks.

For more information: go to the OpenFlow Homepage


POMI

POMI seeks to break down barriers to a truly programmable and open mobile Internet of tomorrow. POMI is a multi-group project, and our contribution is a flexible networking substrate.

For more information: go to the POMI Homepage


NetFPGA

The NetFPGA enables researchers and students to build working prototypes of high-speed, hardware-accelerated networking systems. The NetFPGA has been used in the classroom to teach students how to build Gigabit Ethernet (GigE) switches and Internet Protocol (IP) routers that use hardware rather than software to forward packets. Researchers can use the NetFPGA hardware to prototype new clean-slate services for next-generation networks. The NetFPGA supports multiple usage models, allowing users to build a hardware-accelerated router with as little or as much customization as they desire.

People: John W. Lockwood, Adam Covington, Glen Gibb, Jad Naous, David Erickson, Paul Hartke, James Hongyi Zeng

For more information: go to the NetFPGA Homepage


Buffer Sizing

It is well known that routers need buffers, yet there is little understanding of why they need them and how large they should be. Buffers that are too small hurt link utilization and increase packet loss, while buffers that are too large add to the cost, heat, and power consumption of the router and increase end-to-end latency. Practitioners today typically size router buffers as the product of the router's capacity and the round-trip time (RTT) of the average flow through the router. We have developed an analytical model suggesting that the buffers of core routers could be decreased by several orders of magnitude.
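
As a rough numerical illustration of that gap, the sketch below compares the bandwidth-delay-product rule of thumb with the small-buffer rule B = C × RTT / √N that this work argues for when N long-lived flows share a link. The link speed, RTT, and flow count are made-up numbers, and the model's assumptions are spelled out on the project page and in the papers.

    # Hypothetical numbers; illustrates only the scale of the reduction.
    from math import sqrt

    C   = 10e9     # link capacity: 10 Gb/s (assumed)
    RTT = 0.25     # average round-trip time: 250 ms (assumed)
    N   = 10_000   # long-lived flows sharing the link (assumed)

    rule_of_thumb = C * RTT              # bandwidth-delay product
    small_buffer  = C * RTT / sqrt(N)    # divide by sqrt(N)

    print(f"rule of thumb: {rule_of_thumb / 8e6:.0f} MB")   # ~312 MB
    print(f"small buffer : {small_buffer / 8e6:.1f} MB")    # ~3.1 MB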

People: Neda Beheshti, Yashar Ganjali, Guido Appenzeller

For more information: go to the Buffer Sizing Homepage


ICING

Network-layer access control, covering both network resources (connectivity and QoS) and end-host security, underlies most topics in network architecture. We are working on ICING, a network layer that allows all stakeholders (senders, receivers, and providers) to deploy new network defenses unilaterally, with enough precision to avoid collateral damage, and without further hardware modification. ICING captures many prior network-layer defenses within a coherent framework: for a packet to flow from sender to receiver, every entity along the path must have consented to the entire path. To enforce this property, ICING's data plane must address a key challenge: how can mutually distrustful realms, which cannot rely on per-packet or per-flow public-key cryptography, ensure that packets follow their purported paths?
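
One way to picture the data-plane challenge is as a symmetric-key substitute for signatures: if each pair of realms on an approved path can share (or derive) a symmetric key, upstream realms can stamp inexpensive MACs that downstream realms verify, attesting both to consent and to the fact that the packet really traversed them. The sketch below only illustrates that general idea; the key setup, packet format, and the actual proofs of consent and provenance used by ICING are described on the project page and in the papers.

    # Illustrative sketch only: pairwise symmetric keys are assumed to exist;
    # ICING's real key derivation and packet layout differ.
    import hmac, hashlib

    def mac(key: bytes, msg: bytes) -> bytes:
        return hmac.new(key, msg, hashlib.sha256).digest()

    def stamp(packet: bytes, path: list, keys: dict, me: str) -> dict:
        # Node `me` adds one MAC per downstream node over (packet, approved path).
        msg = packet + ",".join(path).encode()
        return {nxt: mac(keys[(me, nxt)], msg)
                for nxt in path[path.index(me) + 1:]}

    def verify(packet: bytes, path: list, keys: dict, me: str, stamps: dict) -> bool:
        # `stamps` maps each upstream node to the dict it produced with stamp().
        msg = packet + ",".join(path).encode()
        return all(hmac.compare_digest(stamps[prev][me], mac(keys[(prev, me)], msg))
                   for prev in path[:path.index(me)])

    # Demo with throwaway keys: A and B stamp, C verifies both stamps.
    keys = {(a, b): f"{a}-{b}".encode() for a in "ABC" for b in "ABC"}
    path, pkt = ["A", "B", "C"], b"payload"
    stamps = {"A": stamp(pkt, path, keys, "A"), "B": stamp(pkt, path, keys, "B")}
    print(verify(pkt, path, keys, "C", stamps))   # True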

People: Jad Naous, Michael Walfish (University of Texas, Austin), David Mazieres

For more information: go to the ICING Homepage


Past Projects

Valiant Load-balancing for Backbone Networks

Designing a backbone network is hard. On one hand, users expect the network to have very high availability, little or no congestion, and hence little or no queueing delay. On the other hand, traffic conditions are always changing. Over time, usage patterns evolve, customers come and go, new applications are deployed, and the traffic matrices of one year are quite different from the next. Yet the network operator must design for low congestion over the multiple years that the network is in operation. Harder still, the network must be designed to work well under a variety of link and router failures. It is not surprising that most networks today are enormously over-provisioned, with typical utilizations around 10%.

We propose that backbone networks use Valiant Load-balancing over a fully-connected logical mesh. It leads to a surprisingly simple architecture, with predictable and guaranteed performance, even when traffic matrices change and when links and routers fail. It is provably the lowest-capacity network with these characteristics. In addition, it provides fast convergence after failure, making it possible to support real-time applications.
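
The core mechanism is simple enough to sketch: every node spreads its incoming traffic uniformly over all N nodes of the logical mesh, and each intermediate node then forwards that traffic to its destination, so all traffic takes two hops. The sketch below uses a made-up node count and only shows the two phases; the capacity analysis and failure handling are in the papers.

    # Minimal sketch of two-phase Valiant Load-balancing over a fully
    # connected logical mesh of N backbone nodes (numbers are illustrative).
    import random

    N = 8                          # backbone nodes in the logical mesh (assumed)
    NODES = list(range(N))

    def route(src: int, dst: int) -> list:
        """Pick the two-hop path for one unit of traffic."""
        mid = random.choice(NODES)     # phase 1: uniform random intermediate
        return [src, mid, dst]         # phase 2: intermediate -> destination

    # Because each node relays only about 1/N of the total traffic in each
    # phase, every logical link carries a predictable load no matter how the
    # traffic matrix shifts, which is the intuition behind the claims above.
    print(route(0, 5))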

People: Rui Zhang-Shen

Congestion Control Algorithm for Fast Flow-Completion Times

TCP’s congestion control mechanisms are inefficient: slow-start and AIMD force flows to last many round-trip times in order to find their fair-share equilibrium rate, yet most flows finish before they ever reach that rate. This inefficiency grows as link rates increase, since more and more flows could potentially finish in just a few round-trip times.

We designed a new and practical algorithm, the Rate Control Protocol (RCP), that finishes flows an order of magnitude faster than today’s mechanisms.
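
The essence of RCP is that each router computes a single fair rate for all flows crossing a link and stamps it into packet headers, so flows can transmit at that rate right away instead of probing for it over many round trips. The sketch below illustrates that control loop; the constants alpha and beta and the exact update formula here are illustrative assumptions, not the published algorithm (see the RCP page for that).

    # Sketch of an RCP-style per-link rate update, run once per control
    # interval of length d (roughly the average RTT). Constants are assumed.
    def update_rate(R, C, y, q, d, alpha=0.5, beta=0.25):
        """R: currently advertised rate, C: link capacity,
        y: measured aggregate input rate, q: queue size (all in consistent units)."""
        spare = alpha * (C - y)        # hand out unused capacity
        drain = beta * q / d           # drain any standing queue over ~1 RTT
        R_new = R * (1 + (spare - drain) / C)
        return min(max(R_new, 0.01 * C), C)   # keep the advertised rate sane

    # Each packet carries the smallest advertised rate seen along its path;
    # the receiver echoes it back and the sender transmits at that rate.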

People: Nandita Dukkipati, Rui Zhang-Shen

For more information: go to the RCP home page

Virtual Network System

Routers are integral parts of the Internet, and they have been the focus of much study and research in the networking community. However, routers are expensive, and networks are difficult to set up. As a result, students often learn only the theory, without the opportunity to do practical projects on routers. For researchers, new ideas and designs for routers take longer to implement and test, because the core of a router lives in the kernel, and kernel implementation is a non-trivial task. Our project addresses this problem by providing a platform that brings the core functionality out of the kernel and into user space. This approach allows users to develop and run multiple routers at the same time on one machine. Thus, students can implement and test their own routers concurrently, and researchers can use the platform as a tool for proofs of concept.

People: Martin Casado

For more information: go to the Virtual Network System home page

Ethane

Ethane is a backwards-compatible NAC (Network Access Control) architecture for enterprise networks. In the Ethane approach, simple-to-define access policies are maintained in one place and implemented consistently along the network datapath, and no user, switch, or end host has more information than it absolutely needs. We believe it is imperative for private networks to control access using a centrally administered and easily specified security policy that is strictly enforced by the network datapath. The goal of Ethane is to increase the trust we place in the network by eliminating the propagation of worms and other compromises.
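
To make the "policy in one place" idea concrete, the toy sketch below models a central controller that is consulted on the first packet of every new flow and answers from a single policy table, so switches and hosts never hold policy themselves. The policy format, group names, and default-deny choice are illustrative assumptions, not Ethane's actual policy language.

    # Centrally maintained policy: (sender group, receiver group) -> allow?
    POLICY = {
        ("students", "web-servers"):  True,
        ("guests",   "web-servers"):  True,
        ("guests",   "file-servers"): False,
    }

    def group_of(host: str) -> str:
        """Host/user-to-group bindings, also kept only at the controller."""
        return {"alice-laptop": "students", "visitor-phone": "guests"}.get(host, "guests")

    def check_flow(src_host: str, dst_group: str) -> bool:
        """Controller decision for the first packet of a flow (default deny)."""
        return POLICY.get((group_of(src_host), dst_group), False)

    # A switch installs a forwarding entry only if check_flow(...) is True,
    # so no switch or end host ever holds more policy than it needs.
    print(check_flow("visitor-phone", "file-servers"))   # False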

People: Martin Casado, Gregory Watson

For more information: go to the Ethane home page