Internet policy and privacy are increasingly important in the eyes of both regulators and the public. As more of our life becomes digital, where is the line drawn between the availability of information and the freedom to use it?
Recently, Professor Shafi Goldwasser, a co-founder of Duality Technologies, a Turing Award winner, and a two-time Gödel Prize winner, interviewed Professor Daniel J. Weitzner to discuss Internet privacy and public policy. Weitzner is the Director of the MIT Internet Policy Research Initiative, a principal research scientist at the Computer Science and Artificial Intelligence Laboratory (CSAIL), and a professor of Internet public policy in MIT’s Computer Science Department.
The following is a transcript of the beginning of Goldwasser and Weitzner’s interview. For the full interview, press “play” on the video embedded at the end of this blog.
Shafi: Tell us a little bit about your background.
Weitzner: You could say that I have a checkered past. I studied philosophy in college; that was not very marketable. So I then spent four years doing technical work, working with a combination of small, rich investment banks and poor nonprofits in New York City. I also had the distinction of installing what I think was the first token ring network in sub-Saharan Africa, in connection with a World Bank project.
So I have a technical background, but I always felt like there were these critical questions in computing and policy that were not being answered. So I went to law school – and I didn’t really know where that would take me. In the end, where it took me was to the very beginning of what I think of as the Internet revolution.
When I arrived in Washington, DC in 1992, I was the first lawyer for the Electronic Frontier Foundation, and we worked on a fascinating range of issues: the First Amendment, free speech on the Internet, surveillance, and other pressing legal questions. That’s also where I met leaders in the cryptography community, many of whom became wonderful colleagues. We had to figure out answers very quickly to a set of questions around surveillance and encryption, because the FBI and NSA were very worried about what might happen if there was (what we now call) end-to-end encryption. They wanted to introduce key escrow, where someone else would hold the [encryption] keys.
This was the “honeymoon period” for Internet policy; it was all very exciting. The Internet was growing really rapidly. It created extraordinary opportunities for people to have access to information and to speak. There also were a lot of exciting privacy opportunities. We had these very strong privacy technologies that appeared to be able to do a great job of protecting people’s private communication against all kinds of threats.
One thing led to another; I founded the Center for Democracy and Technology. But by around 1998, I had this naive feeling that we had done everything that was interesting about Internet policy.
I came up to MIT to help Tim Berners-Lee set up the World Wide Web Consortium and worked on a number of policy-related technical design questions on the Web: privacy and security standards, and a bunch of other things. I also had a little stint working in the White House. [In 2011,] I was Deputy CTO for Internet Policy, where I mostly worked on consumer privacy issues. There were a lot of difficult questions about the global Internet, which had by that time gone from an exciting thing in a Silicon Valley garage to something the whole world depended on. But governments hadn’t really figured out how to be serious about the Internet and Internet policy. After that, I came back to MIT and started the Internet Policy Research Initiative.
All I’ll say is this: looking back at what has changed in Internet policy, technology, and computer science, I think that particularly in security and cryptography there was always a sense that what mattered was what could be provably secure, what properties you could guarantee with mathematical certainty. That view has since run headlong into a more challenging, nuanced set of questions about how to actually evaluate and manage the risk of using information technology.
We want to be able to manage risk in those systems because we depend on them as a society, but we’re not going to get perfection, and neither do we want to just settle for best effort. To me, the really interesting challenge as these technologies mature is that we now have to treat them as essential while accepting that they carry some risk. The goal at the intersection of technology and policy, in many ways, is to characterize and manage those risk levels so that we can do the things we need to do with the information we have without incurring risks we consider unacceptable.
To watch the entire interview, click below.