First, the Internet was designed to be decentralised, with no single point of failure. Then the web came along, and we all decided to rely on centralised servers and databases in datacenters. Now we’re realising that maybe so much centralisation is not such a good idea: a server is vulnerable to disruption by network outages, denial-of-service attacks, censorship, and hardware failures. Moreover, since servers store data from many people in one place, they are juicy targets for hackers, and arguably great facilitators of mass surveillance.

Many of us would love to build more applications with end-to-end encryption and peer-to-peer communication, but it is simply too hard. It’s quick and easy to throw together a centralised web app with something like Rails or Django, yet building anything decentralised requires deep knowledge of technologies ranging from distributed algorithms and network protocols to cryptography.

This is really a problem of programming models: we have not yet found the right abstractions for programming decentralised systems. Writing software that runs across a network of intermittently connected, unreliable, untrusted mobile devices requires a huge mental shift compared to writing programs for a single computer. Solving this problem will require both academia’s deep knowledge and industry’s focus on practicality. In this session, we will discuss ways of thinking about programming decentralised systems, and hypotheses for approaches that might work.
Martin Kleppmann is a researcher in distributed systems and security at the University of Cambridge, and author of Designing Data-Intensive Applications (O’Reilly Media, 2017). Previously he was a software engineer and entrepreneur at Internet companies including LinkedIn and Rapportive, where he worked on large-scale data infrastructure. He is now working on TRVE DATA, a project that aims to bring end-to-end encryption and decentralisation to a wide range of applications.