ADGA is a one-day workshop that focuses on the theoretical foundations of distributed graph algorithms. The programme consists of several invited talks, which are aimed at the general DISC audience.

ADGA 2018 will be held on **Monday, 15 October 2018** in **New Orleans, USA**, and will be co-located with **DISC 2018**. The workshop will be chaired by Merav Parter.

For information on past and future ADGA workshops, see the website of the ADGA workshop series. For information about registration, see the DISC 2018 website.

# Schedule

ADGA programme:

| Time | Session | Speaker and talk title |
| --- | --- | --- |
| 9:00 am | talk 1 | Calvin Newport: The Unreasonable Effectiveness of Decay-Based Broadcasting in Radio Networks |
| 10:00 am | coffee break | |
| 10:30 am | talk 2 | Andrew McGregor: Graph Sketching and Streaming: New Approaches for Analyzing Massive Graphs |
| 11:30 am | talk 3 | Valerie King: Sublinear Communication in a Message-Passing Model |
| 12:30 pm | lunch break | |
| 2:00 pm | talk 4 | Seth Pettie: Automatically Speeding Up LOCAL Graph Algorithms |
| 3:00 pm | talk 5 | Sebastian Forster: Single-Source Shortest Paths: Towards Optimality |
| 4:00 pm | coffee break | |
| 4:30 pm | talk 6 | Rati Gelashvili: Recent advances in population protocols |
| 5:30 pm | workshop ends | |

# Invited Speakers

## Calvin Newport (Georgetown University)

Calvin Newport is a Provost’s Distinguished Associate Professor of Computer Science at Georgetown University. He studies the theory of distributed systems with a particular focus on what can and cannot be solved in extreme settings. He earned his PhD in 2009 at MIT in Nancy Lynch’s Theory of Distributed Systems group, and worked as a postdoc at MIT in Hari Balakrishnan’s Networks and Mobile Systems group.

**The Unreasonable Effectiveness of Decay-Based Broadcasting in Radio Networks**

A fundamental problem in distributed algorithm theory is reducing contention among an unknown set of processes attempting to communicate on a shared channel. In the 1980s, Bar-Yehuda, Goldreich and Itai (BGI) introduced a surprisingly simple randomized distributed algorithm for broadcasting a single message to all processes on a multihop shared channel. Their solution solved the problem faster (with high probability) than the best known centralized scheduling algorithm at the time. It was later proved to be optimal among all possible distributed algorithms.

In this talk, I will discuss the backstory and impact of the BGI broadcast algorithm. I will then detail our recent efforts to explore the behavior of this basic strategy in more realistic (i.e., less well-behaved) variations of the radio network model — efforts that culminated in establishing that the BGI algorithm retains strong guarantees even in very noisy and unpredictable radio networks.

My goal is to underscore the surprising and elegant effectiveness of this seemingly simple strategy.
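To make the decay idea concrete, here is a minimal simulation sketch, not the exact BGI algorithm: in each phase every informed node transmits with a probability that halves from round to round, and a node receives the message in a round only if exactly one of its neighbors transmits (collisions are lost). The function name, the phase budget, and the per-phase round count are illustrative choices, and other radio-model subtleties (e.g., that a transmitter cannot listen) are ignored.

```python
import random

def decay_broadcast(adj, source, phases=300, n_rounds=None):
    """Simulate a decay-style broadcast on a multihop radio network.

    adj: dict mapping each node to a list of its neighbors.
    In round i of each phase, every informed node transmits with
    probability 2**-i (so round 0: everyone, round 1: half, ...).
    A node hears the message in a round iff exactly one of its
    neighbors transmits in that round; collisions are lost.
    Returns the number of phases until all nodes are informed,
    or None if the phase budget runs out.
    """
    n = len(adj)
    if n_rounds is None:
        n_rounds = max(1, n.bit_length())  # roughly log2(n) rounds per phase
    informed = {source}
    for phase in range(1, phases + 1):
        for i in range(n_rounds):
            # transmitters are fixed at the start of the round
            transmitters = {v for v in informed if random.random() < 2.0 ** (-i)}
            for v in adj:
                if v in informed:
                    continue
                if sum(1 for u in adj[v] if u in transmitters) == 1:
                    informed.add(v)
        if len(informed) == n:
            return phase
    return None
```

The probability-halving schedule is the heart of the approach: whatever the unknown number of informed neighbors around a receiver, some round uses a transmission probability close to its inverse, which makes a collision-free delivery likely in that round.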

## Andrew McGregor (University of Massachusetts, Amherst)

Andrew McGregor is an Associate Professor at the University of Massachusetts, Amherst. He received a B.A. degree and the Certificate of Advanced Study in Mathematics from the University of Cambridge and a Ph.D. from the University of Pennsylvania. He also spent a couple of years as a postdoc at UC San Diego and Microsoft Research SVC. He is interested in many areas of theoretical computer science and specializes in data stream algorithms and linear sketching. He received the NSF CAREER Award in 2010 and the College of Information and Computer Sciences' Outstanding Teacher Award in 2016.

**Graph Sketching and Streaming: New Approaches for Analyzing Massive Graphs**

In this talk, we will survey recent work on designing algorithms for analyzing massive graphs via data streams and linear sketches. We will consider a range of problems including connectivity and sparsification; matchings and vertex cover; and correlation clustering and clustering coefficients.
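A toy illustration of one idea behind connectivity sketching (the cancellation identity underlying the Ahn–Guha–McGregor sketches): give each edge an id, let each node's sketch be the XOR of the ids of its incident edges, and XOR the sketches over a vertex set S. Edges internal to S appear twice and cancel, so only edges crossing the cut survive; if exactly one edge crosses, its id is recovered directly. The helper names and the id encoding below are illustrative, and the real sketches additionally compress these incidence vectors with L0-samplers, which this sketch omits.

```python
from functools import reduce

def edge_id(u, v):
    # canonical encoding of the undirected edge {u, v} as one integer;
    # packs the sorted endpoints (assumes node ids below 2**16)
    a, b = sorted((u, v))
    return (a << 16) | b

def node_sketch(v, edges):
    """XOR of the ids of all edges incident to node v."""
    s = 0
    for (u, w) in edges:
        if v in (u, w):
            s ^= edge_id(u, w)
    return s

def cut_sketch(S, edges):
    """XOR of node sketches over S: ids of edges with both endpoints
    in S appear twice and cancel, so the result is the XOR of the ids
    of the edges crossing the cut between S and the rest."""
    return reduce(lambda a, b: a ^ b, (node_sketch(v, edges) for v in S), 0)

edges = [(0, 1), (1, 2), (2, 3), (0, 2)]
# exactly one edge, (2, 3), leaves the set {0, 1, 2},
# so its id pops out of the combined sketch
assert cut_sketch({0, 1, 2}, edges) == edge_id(2, 3)
```

Because XOR is linear, these sketches can be maintained under edge insertions and deletions in a stream and merged across machines, which is what makes the approach attractive for massive graphs.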

## Valerie King (University of Victoria)

Valerie King is a professor at the University of Victoria. Her research is in randomized algorithms with applications to dynamic graph problems and Byzantine fault tolerant distributed computing. She received her PhD from the University of California and has held visiting professor positions at Hebrew University, University of Copenhagen, University of Toronto and École Normale Supérieure in Paris and industrial research positions at Microsoft Research (Silicon Valley), HP and Compaq Systems Research Lab, and NECI in Princeton. She served as program chair of STOC 2017 and is an ACM fellow.

**Sublinear Communication in a Message-Passing Model**

This talk is about the amount of communication needed to construct a spanning tree (and minimum spanning tree) of a distributed network where each node has only local information about the network and communication is by message-passing. While this problem has been studied for thirty years or more, it is still not fully understood. I’ll survey the known techniques for upper and lower bounds and the open problems in this area.
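As a point of reference for why communication is the interesting resource here, consider a Borůvka-style construction of a minimum spanning tree: in each phase every component announces its minimum-weight outgoing edge, and the announced edges are merged in. The sketch below is a centralized simulation, not a message-passing algorithm; it only counts one "announcement" per component per phase as a crude proxy for communication, and the function name and counting convention are illustrative.

```python
def boruvka_mst(n, edges):
    """Borůvka-style MST on nodes 0..n-1.

    edges: list of (weight, u, v) with distinct weights.
    Each phase, every component picks its minimum-weight outgoing
    edge (counted as one announcement) and the picked edges are
    merged. There are O(log n) phases, each with at most n
    announcements. Returns (mst_edges, announcement_count).
    """
    parent = list(range(n))

    def find(x):
        # find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst, messages = [], 0
    while True:
        best = {}  # component root -> its minimum-weight outgoing edge
        for w, u, v in edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                continue
            for r in (ru, rv):
                if r not in best or w < best[r][0]:
                    best[r] = (w, u, v)
        if not best:
            break
        messages += len(best)  # one announcement per component
        for w, u, v in best.values():
            ru, rv = find(u), find(v)
            if ru != rv:
                parent[ru] = rv
                mst.append((w, u, v))
    return mst, messages
```

The number of components at least halves each phase, which is the source of the O(log n)-phase bound; the communication lower and upper bounds surveyed in the talk concern how far below this kind of edge-by-edge accounting one can go.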

## Seth Pettie (University of Michigan)

Seth Pettie is a Professor of Computer Science and Engineering at the University of Michigan, Ann Arbor. He received his Ph.D. from the University of Texas at Austin in 2003, and was an Alexander von Humboldt Postdoctoral Fellow at the Max Planck Institute for Informatics from 2003 to 2006. His research focuses on combinatorics, data structures, and algorithm design.

**Automatically Speeding Up LOCAL Graph Algorithms**

One strange aspect of the LOCAL model is that there are many “unnatural” ranges of computational complexities for locally checkable graph labeling problems. These complexity gaps are proved via *automatic-speedup theorems*. In this talk I will survey how 5 of the automatic speedup theorems work. In particular, on constant-degree graphs:

- Any *o*(log log\* *n*)-time algorithm can be mechanically converted to an *O*(1)-time algorithm. The argument makes use of hypergraph Ramsey numbers.
- Any *o*(log *n*)-time deterministic algorithm and any *o*(log log *n*)-time randomized algorithm can be converted to an *O*(log\* *n*)-time deterministic algorithm. The proof uses a derandomization technique reminiscent of the classic proof that BPP ⊆ P/poly.
- Any *o*(log *n*)-time randomized algorithm can be sped up to run in the complexity of the distributed Lovász Local Lemma (LLL) problem.
- Any algorithm that runs in *n*^{*o*(1)} time on trees can be sped up to run in *O*(log *n*) time on trees. The proof is based on the pumping lemma for regular languages.

Automatic speedup is one of the principal techniques used to develop fast, sublogarithmic distributed LLL algorithms.

This talk is based on work by Naor and Stockmeyer (SICOMP 1995), Chang, Kopelowitz, and Pettie (FOCS 2016), and Chang and Pettie (FOCS 2017).

## Sebastian Forster (University of Salzburg)

Sebastian Forster is an assistant professor at the University of Salzburg. His main research interest is the design of efficient graph algorithms, in particular for dynamic and distributed models. Sebastian completed his PhD at the University of Vienna in 2015, where he was supervised by Monika Henzinger. He worked as a postdoc at the Simons Institute for the Theory of Computing, and the Max Planck Institute for Informatics. Previously, he published under the name Sebastian Krinninger.

**Single-Source Shortest Paths: Towards Optimality**

Despite being one of the major classic graph problems, there is no truly satisfying algorithm for computing single-source shortest paths in distributed and parallel models of computation. In the CONGEST model, this problem has received considerable attention in recent years leading to nearly optimal approximation algorithms. As a consequence, researchers have now started to tackle the exact version of the problem. In this talk, I will review my recent contributions to this line of research.
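For context, the classic exact baseline in the CONGEST model is synchronous Bellman-Ford: in every round each node sends its current distance estimate to its neighbors, and each node keeps the best offer plus the edge weight, converging in at most n - 1 rounds. The sketch below simulates those rounds centrally; the function name and the round-snapshot convention are illustrative, and the nearly optimal algorithms mentioned in the abstract improve substantially on this round count.

```python
def sync_bellman_ford(adj, source, rounds=None):
    """Round-by-round simulation of synchronous Bellman-Ford SSSP.

    adj: dict mapping each node to a list of (neighbor, weight) pairs
         (non-negative weights, symmetric entries for undirected graphs).
    Each round, every node 'sends' its current distance estimate; all
    messages of a round are based on the same start-of-round snapshot,
    mimicking a synchronous message-passing execution.
    Returns the dict of final distance estimates.
    """
    INF = float('inf')
    dist = {v: INF for v in adj}
    dist[source] = 0
    n = len(adj)
    for _ in range(rounds if rounds is not None else n - 1):
        offers = {v: dist[v] for v in adj}  # snapshot sent this round
        changed = False
        for v in adj:
            for u, w in adj[v]:
                if offers[u] + w < dist[v]:
                    dist[v] = offers[u] + w
                    changed = True
        if not changed:
            break  # estimates have stabilized early
    return dist
```

Each distance estimate only ever decreases, and after round k every node at hop distance k from the source holds a correct estimate, which is why n - 1 rounds always suffice.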

## Rati Gelashvili (University of Toronto)

Rati Gelashvili obtained his MSc in 2014 and his PhD in 2017 from MIT under the supervision of Prof. Nir Shavit. At MIT, he was an Akamai presidential fellow and his PhD thesis won the ACM-EATCS Principles of Distributed Computing doctoral dissertation award. Afterwards, Rati was a postdoctoral fellow at the University of Toronto, working with Prof. Faith Ellen. His research is centered around asynchrony in distributed computing.

**Recent advances in population protocols**

Population protocols are a popular model of distributed computing, in which agents with limited local state interact randomly in pairs, according to an underlying communication graph, and cooperate to collectively compute global predicates. Introduced to model animal populations equipped with sensors, they have proved a useful abstraction for settings ranging from wireless sensor networks to gene regulatory networks and chemical reaction networks. From a theoretical perspective, population protocols, with their restricted communication and computational power, are probably the simplest distributed model. Yet, perhaps surprisingly, solutions to classical distributed tasks are still possible. Moreover, these solutions often rely on neat algorithmic ideas for design and interesting combinatorial techniques for analysis.

Inspired by recent developments in DNA programming, an extensive series of papers, across different communities, has examined the computability and complexity characteristics of this model. I will review some of these recent developments, as well as open problems and directions.
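To give a feel for the model, here is a toy simulation of the textbook single-leader protocol on the complete interaction graph: every agent starts as a leader, and when the uniform random scheduler picks two leaders to interact, one of them becomes a follower. The function name and the two-state encoding are illustrative choices, not from the talk.

```python
import random

def leader_election(n, seed=0):
    """Simulate the classic two-state leader-election population protocol.

    All n agents start in state 'L' (leader). Each step, a uniformly
    random ordered pair of agents interacts; if both are leaders, the
    responder becomes a follower ('F'). All other interactions are
    no-ops. Returns the number of interactions until exactly one
    leader remains.
    """
    rng = random.Random(seed)
    state = ['L'] * n
    leaders = n
    interactions = 0
    while leaders > 1:
        a, b = rng.sample(range(n), 2)  # scheduler picks a random pair
        interactions += 1
        if state[a] == 'L' and state[b] == 'L':
            state[b] = 'F'  # the responder drops out
            leaders -= 1
    return interactions
```

Reducing n leaders to one requires n - 1 leader-leader meetings, and the last such meeting takes about n² expected interactions to occur, which is exactly the kind of slow convergence that the fast modern protocols surveyed in the talk are designed to beat.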