Load Balancing and Parallelism for the Internet


Thesis: This thesis is concerned with three aspects of routers in the Internet: (1) making them faster, (2) making them safe from performance attacks, and (3) enabling them to provide deterministic performance guarantees.

Dissertation Roadmap: I tried to make this thesis accessible to a broad set of readers, including students, engineers, mathematicians, and the casual reader. The load balancing results in Chapters 2, 3, 4, 5, 6, and 10 use only combinatorial arguments and can be understood with a high-school knowledge of mathematics. Router architects and networking engineers should benefit from the widely used caching algorithms presented in Chapters 7, 8, and 9; their analysis is more sophisticated, and their proofs can be skipped by the casual reader.

Switch designers should look into the practical caching technique described in Chapter 5, which eliminates the need for complex (preferred and forced) stable marriage algorithms. Chapter 10 describes a load balancing algorithm that should interest any systems engineer building high-performance systems that need memory acceleration. The proofs in Appendices D, H, and I are more involved; they use adversarial analysis, Lyapunov functions, and fluid model theory, and are meant for advanced graduate students in networking.
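The common theme running through these chapters is building a fast memory subsystem out of several slower memory banks by load balancing accesses across them. Below is a minimal toy sketch of that general idea, written for this page rather than taken from the thesis: it assumes a simple round-robin write policy, and the names (MemoryBank, InterleavedBuffer) and parameters are hypothetical. The actual algorithms, caching hierarchies, and their guarantees are developed in Chapters 2 through 10.

```python
# Toy sketch (not the thesis's algorithm): emulating a line-rate packet buffer
# with k slower memory banks by spreading writes across them round-robin.
# All names and the policy here are illustrative assumptions.

from collections import deque

class MemoryBank:
    """A slow bank that can serve at most one access every 'period' time slots."""
    def __init__(self, period):
        self.period = period      # access time, measured in line-rate slots
        self.busy_until = 0       # earliest slot at which the bank is free again
        self.cells = deque()      # packets stored in this bank

    def can_access(self, now):
        return now >= self.busy_until

    def write(self, now, packet):
        assert self.can_access(now)
        self.cells.append(packet)
        self.busy_until = now + self.period

class InterleavedBuffer:
    """Round-robin load balancer over k slow banks, fed one packet per slot."""
    def __init__(self, k, bank_period):
        self.banks = [MemoryBank(bank_period) for _ in range(k)]
        self.next_bank = 0

    def enqueue(self, now, packet):
        # Try banks in round-robin order; with k >= bank_period some bank
        # is always free, so writes at line rate never stall.
        for _ in range(len(self.banks)):
            bank = self.banks[self.next_bank]
            self.next_bank = (self.next_bank + 1) % len(self.banks)
            if bank.can_access(now):
                bank.write(now, packet)
                return True
        return False  # every bank busy (cannot happen when k >= bank_period)

if __name__ == "__main__":
    # 4 banks, each 4x slower than the line rate, still keep up with the line.
    buf = InterleavedBuffer(k=4, bank_period=4)
    dropped = sum(not buf.enqueue(t, f"pkt{t}") for t in range(20))
    print("packets dropped:", dropped)  # prints 0
```

The sketch only handles writes and a benign arrival pattern; the interesting part of the thesis is precisely what happens with reads, adversarial arrivals, and the caching needed to hide the banks' latency.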

Writing: If you have ever read a patent or a legal document, you will realize how ridiculously frustrating the style of writing can be. After a lot of guidance on writing, and after reading some beautifully written books over the past few years (see Bill Bryson's A Short History of Nearly Everything), I was motivated to make the writing style precise, easy to read, and conversational. I also attempted a bit of humor in places where I thought it was relevant, but I have tried to ensure that it doesn't detract from the main flow of the thesis. Apologies if I haven't reached that goal. I hope you enjoy the reading!

Two-sided format (436 pages, ~4 MB): sundar_thesis_2sided.pdf
One-sided format (418 pages, ~4 MB): sundar_thesis_1sided.pdf
Orals: PPT, HTML, PDF
List of Typos: Available here

Table of Contents:
Dedication dedication.pdf
Acknowledgements acknowledgements.pdf
Abstract abstract.pdf , abstract.html
Preface preface.pdf , preface.html
Introduction introduction.pdf
 
Part I: Load Balancing and Caching for Router Architectures
Analyzing Routers with Parallel Slower Memories chap2.pdf
Analyzing Routers with Distributed Slower Memories chap3.pdf
Analyzing CIOQ Routers with Localized Memories chap4.pdf
Analyzing Buffered CIOQ Routers with Localized Memories chap5.pdf
Analyzing Parallel Routers with Slower Memories chap6.pdf
 
Part II: Load Balancing and Caching for Router Line Cards
Note to Reader note2.pdf
Designing Packet Buffers from Slower Memories chap7.pdf
Designing Packet Schedulers from Slower Memories chap8.pdf
Designing Statistics Counters from Slower Memories chap9.pdf
Maintaining State with Slower Memories chap10.pdf
Conclusion conclusion.pdf
Epilogue epilogue.pdf
 
Part III: Appendices, Lists, Bibliography, et al.
Memory Terminology appA.pdf
Definitions and Traffic Models appB.pdf
Proofs for Chapter 3 appC.pdf
Proofs for Chapter 5 appD.pdf
A Modified Buffered Crossbar appE.pdf
Proofs for Chapter 6 appF.pdf
Centralized Parallel Packet Switch Algorithm appG.pdf
Proofs for Chapter 7 appH.pdf
Proofs for Chapter 9 appI.pdf
Parallel Packet Copies for Multicast appJ.pdf
Lists of Figures, Tables, Theorems, Algorithms and Examples lists.pdf
References references.pdf
End Notes endnotes.pdf
Abbreviations abbreviations.pdf
Index index.pdf

Last modified: Tuesday, Jul 29th 19:45:50 PDT 2008
Sundar Iyer
sundaes@cs.stanford.edu