The 18th International Conference on Principles of Distributed Systems
16-19 December 2014,
Cortina d'Ampezzo, Italy
Program overview (Tue 16th - Fri 19th):
- 8:45 - Welcome from the chairs
- 9:00 - Second Workshop on Distributed Computing: Computability and Complexity
- 10:40 - Coffee break
- 11:20 - Second Workshop on Distributed Computing: Computability and Complexity (continued)
- 12:25 - Lunch break / Concluding remarks
- 17:00 - Coffee break
- Ski break (*)
- Technical sessions: Shared Memory I, II, and III; Fault Tolerance I and II; Session 3
- Invited talks and tutorials:
  - Salt: Combining ACID and BASE in a Distributed Database - Lorenzo Alvisi
  - Integrity, Consistency, and Verification of Remote Computation - Christian Cachin
  - Distributed Large-Scale Data Stream Analysis - Yann Busnel
Salt: Combining ACID and BASE in a Distributed Database - Lorenzo Alvisi, The University of Texas at Austin, USA

Second Workshop on Distributed Computing: Computability and Complexity
- Carole Delporte, Université Paris Diderot, France
- Hugues Fauconnier, Université Paris Diderot, France
- David Ilcinkas, LaBRI, Université de Bordeaux, France
- Michel Raynal, IRISA, Université de Rennes, France
Visit the workshop website for further details.

Integrity, Consistency, and Verification of Remote Computation - Christian Cachin, IBM Research Zurich, Switzerland
With the advent of cloud computing, many clients have outsourced computation and data storage to remote servers. This has led to prominent concerns about the privacy of data and computation placed outside the control of the clients. The integrity of the responses from remote servers, on the other hand, has been addressed in depth only recently. Violations of correctness are potentially more dangerous, however, because clients rely on the responses and the safety of the service is at stake. Incidental computation errors as well as deliberate, sophisticated manipulations on the server side are nearly impossible to discover with today's technology.

Over the last few years, there has been rising interest in technology to verify the results of a remote computation and to check the consistency of responses from a cloud service. These advances rely on recently introduced cryptographic techniques, including authenticated data types (ADTs), probabilistically checkable proofs (PCPs), fully homomorphic encryption (FHE), quadratic programs (QPs), and more. With multiple clients accessing the remote service, a further dimension is added to the problem: clients isolated from each other need a guarantee that their verification operations relate to the same "version" of the server's computation state.

This tutorial surveys recent work in this area and provides a broad introduction to some of the key concepts underlying verifiable computation, for single and multiple verifiers. The aim is to give a systematic overview of techniques in the realm of verifiable computation, remote data integrity, authenticated queries, and consistency verification.
The approaches rely on methods from cryptography and from distributed computing. The presentation will introduce the necessary background techniques from both fields, describe key results, and illustrate how they ensure integrity in selected cases.
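To make the idea of verifying a remote computation concrete, here is a minimal sketch of Freivalds' classic probabilistic check, by which a client validates an outsourced matrix product in O(n^2) time per round instead of redoing the O(n^3) multiplication. It is illustrative only, not taken from the tutorial, and all names in it are hypothetical.

    import random

    def freivalds_verify(A, B, C, rounds=20):
        """Illustrative only: probabilistically check that C == A*B.
        Each round costs O(n^2) and catches a wrong C with
        probability at least 1/2."""
        n = len(A)
        for _ in range(rounds):
            # Random 0/1 vector r; compare A(Br) against Cr.
            r = [random.randint(0, 1) for _ in range(n)]
            Br = [sum(B[i][j] * r[j] for j in range(n)) for i in range(n)]
            ABr = [sum(A[i][j] * Br[j] for j in range(n)) for i in range(n)]
            Cr = [sum(C[i][j] * r[j] for j in range(n)) for i in range(n)]
            if ABr != Cr:
                return False  # the server's answer is wrong
        return True  # a wrong C survives with probability <= 2**-rounds

    # The client spot-checks a product claimed by an untrusted server.
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(freivalds_verify(A, B, [[19, 22], [43, 50]]))  # True
    print(freivalds_verify(A, B, [[19, 22], [43, 51]]))  # False (almost surely)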
The tutorial consists of three parts:
- Verifiable computation;
- Authenticated data types (see the sketch after this list);
- Distributed consistency enforcement.
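As a taste of the second part, the following is a minimal sketch of one classic authenticated data type, a Merkle hash tree: the client keeps only a short root digest, and the server proves membership of any stored block with logarithmically many sibling hashes. This is a generic textbook construction, not code from the tutorial, and the names are hypothetical.

    import hashlib

    def h(data: bytes) -> bytes:
        return hashlib.sha256(data).digest()

    def merkle_root(leaves):
        """The short digest the client retains; the server stores the data."""
        level = [h(x) for x in leaves]
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # duplicate the last node on odd levels
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    def merkle_proof(leaves, index):
        """Sibling hashes from leaf to root: the server's membership proof."""
        level, proof = [h(x) for x in leaves], []
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])
            proof.append((level[index ^ 1], index % 2))  # (sibling, node-is-right-child)
            level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
            index //= 2
        return proof

    def verify(root, leaf, proof):
        """Client-side check: recompute the root from the leaf and the proof."""
        node = h(leaf)
        for sibling, node_is_right in proof:
            node = h(sibling + node) if node_is_right else h(node + sibling)
        return node == root

    blocks = [b"block0", b"block1", b"block2", b"block3"]
    root = merkle_root(blocks)             # client retains only this digest
    proof = merkle_proof(blocks, 2)        # server proves block 2 is present
    print(verify(root, b"block2", proof))    # True
    print(verify(root, b"tampered", proof))  # False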
Distributed Large-Scale Data Stream Analysis - Yann Busnel, Crest (Ensai, Rennes) & LINA (University of Nantes), France
This tutorial surveys existing algorithms that process huge amounts of data online, efficiently in terms of space and time complexity. Estimating metrics or identifying specific patterns across several distributed data streams is important in data-intensive applications. Many domains are concerned with such analyses, including machine learning, data mining, databases, information retrieval, and network monitoring. In all these applications, a huge amount of data must be processed quickly and precisely. For instance, in IP network management, analysing the input streams makes it possible to rapidly detect anomalies or intrusions when communication patterns change.

The problem of extracting pertinent information from a data stream is similar to the problem of identifying patterns that do not conform to the expected behaviour, which has been an active area of research for many decades. Depending on the specificities of the domain and the type of outliers considered, different methods have been designed: classification-based, clustering-based, nearest-neighbour-based, statistical, spectral, and information-theoretic. This tutorial proposes a comprehensive survey of these techniques, together with their advantages and drawbacks. A common feature of these techniques is their low space complexity and computational cost, as they rely on small-space approximation algorithms for analysing the data.
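The Count-Min sketch is one well-known representative of the small-space approximation algorithms mentioned above: it estimates item frequencies in a stream using a fixed-size table of counters. The sketch below is a minimal, illustrative implementation, not code from the tutorial, and its names and parameters are hypothetical.

    import random

    class CountMinSketch:
        """Illustrative only: fixed-size frequency estimator for a stream,
        using d rows of w counters. Estimates never undercount; they
        overshoot by at most eps*N with probability 1 - delta, for
        w ~ e/eps and d ~ ln(1/delta), where N is the stream length."""

        def __init__(self, width=272, depth=5, seed=1):
            rng = random.Random(seed)
            self.width = width
            self.salts = [rng.getrandbits(64) for _ in range(depth)]  # one hash per row
            self.rows = [[0] * width for _ in range(depth)]

        def _cols(self, item):
            return [hash((salt, item)) % self.width for salt in self.salts]

        def add(self, item, count=1):
            for row, col in zip(self.rows, self._cols(item)):
                row[col] += count

        def estimate(self, item):
            # The minimum over rows limits the damage from hash collisions.
            return min(row[col] for row, col in zip(self.rows, self._cols(item)))

    cms = CountMinSketch()
    stream = ["a"] * 1000 + ["b"] * 10 + ["c"] * 5
    for x in stream:
        cms.add(x)
    print(cms.estimate("a"))  # ~1000; never an underestimate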