I spoke with Greg Papadopoulos, former CTO of Sun Microsystems, to discuss the origins and meaning of The Network is the Computer®, as well as Cloudflare’s role in the evolution of the phrase. During our conversation, we considered the inevitability of latency, the slowness of the speed of light, and the future of Cloudflare’s newly acquired trademark. Listen to our conversation and read the full transcript below.
[00:00:08]
John Graham-Cumming: Thank you so much for taking the time to chat with me. I've got Greg Papadopoulos who was CTO of Sun and is currently a venture capitalist. Tell us about “The Network is the Computer.”
[00:00:22]
Greg Papadopoulos: Well, from certainly a Sun perspective, the very first Sun-1 was connected via Internet protocols and at that time there was a big war about what should win from a networking point of view. And there was a dedication there that everything that we made was going to interoperate on the network over open standards, and from day one in the company, it was always that thought. It's really about the collection of these machines and how they interact with one another, and of course that puts the network in the middle of it. And then it becomes hard to, you know, where's the line? But it is one of those things that I think even if you ask most people at Sun, you go, “Okay explain to me ‘The Network is the Computer.’” It would get rather meta. People would see that phrase and sort of react to it in their own way. But it would always come back to something similar to what I had said I think in the earlier days.
[00:01:37]
Graham-Cumming: I remember it very well because it was obviously plastered everywhere in Silicon Valley for a while. And it sounded incredibly cool but I was never quite sure what it meant. It sounded like it was one of those things that was super deep but I couldn't dig deep enough. But it sort of seems like this whole vision has come true because if you dial back to I think it's 2006, you wrote a blog post about how the world was only going to need five or seven or some small number of computers. And that was also linked to this as well, wasn't it?
[00:02:05]
Papadopoulos: Yeah. As things began to evolve into what we would call cloud computing today, you could put substantial resources on the other side of the network, and from the end user’s perspective those could be as effective or more effective than something you'd have in front of you. And so there was this idea that you really could provide these larger scale computing services — in the early days, you know, grid was the term used before cloud — if you followed that logic and watched what was happening to the improvements of the network. Dave Patterson at Cal was very fond of saying in that era and in the 90s, networks are getting to the place where the disk connected to another machine is transparent to you. I mean, it could be your own or somebody else's; in fact, somebody else's memory may be closer to you than your own disk. And that's a pretty interesting thought. And so where we ended up going was really a complete realization that these things we would call servers were actually just components of this network computer. And so it was very mysterious, “The Network is the Computer,” and it actually grew into itself in this way. And I'll say looking at Cloudflare, you see this next level of scale happening. It's not just, what are those things that you build inside a data center, how do you connect to it, but in fact, it's the network that is the computer that is the network.
[00:04:26]
Graham-Cumming: It's interesting, though, that there have been these waves of centralization, and then pushing the computing power out to the edge, to the PCs, at some point, and then Larry Ellison came along and he was going to have this network computer thing, and it sort of seems to swing back and forth. So where do you think we are in this swinging?
[00:04:44]
Papadopoulos: You know, I don't think of it so much as swinging. I think it's a spiral upwards: we come to a place, we look down, and it looks familiar. You know, where you'll say, oh I see, here's a 3270 connected to a mainframe. Well, that looks like a browser connected to a web server. And you know, here's the device, it’s connected to the web service. They look similar, but there are some very important differences as we're traversing this helix of sorts. If you look back at, for example, the 3270, it was inextricably bound to a single server that hosted it. Now our devices really have the ability to connect to any other computer on the network. So while I think we're seeing something that looks like a pendulum there, it’s really a refactoring question: what software belongs where, and how hard is it to maintain where it is? And the Internet protocol is clearly a peer-to-peer protocol, so it doesn't take sides on this. Whether we end up in one state, with more on the client or less on the client, really has to do with how well we've figured out distributed computing and how well we can deliver code in a management-free way. And that's a longer conversation.
[00:06:35]
Graham-Cumming: Well, it's an interesting conversation. One thing is, you talked about Sun Grid, and then we ended up with Amazon Web Services and things like that; there was the device, be it your handheld or your laptop, talking to some cloud computing. And then what Cloudflare has done with this Workers product is to say, well, actually I think there are three places where code could exist: there's also something you can put inside the network.
[00:07:02]
Papadopoulos: Yes. And by extension that could grow to another layer too. And it goes back to, I think it's Dave Clark who I first remember saying you can get all the bandwidth you want, that's money, but you can't reduce latency. That's God, right. And so I think there are certainly things and as I see the Workers architecture, there are two things going on. There's clearly something to be said about latency there, and having distributed points of presence and getting closer to the clients. And there’s IBM with interaction there too, but it is also something that is around management of software and how we should be thinking in delivery of applications, which ultimately I believe, in the limit, become more distributed-looking than they are now. It's just that it's really hard to write distributed applications in kind of the general way we think about it.
[00:08:18]
Graham-Cumming: Yes, it's one of those things, isn't it? It is exceedingly hard to actually write these things, which is why I think we're going through a bit of a transition right now where people are trying to figure out where that code should actually execute and what should execute where.
[00:08:31]
Papadopoulos: Yeah. You had graciously pointed out this blog from a dozen years ago saying, hey, it's inevitable that we're going to have this concentration of computing, for a lot of economic reasons as much as anything else. But it's both a hammer and a nail. You know, cloud stuff in some ways is unnatural: why should we expect computing to get concentrated like it is? If you really look into it more deeply, I think it has to do with management and control and capital cycles, really things that are kind of on the economic and the administrative side of things, and not about what's truth and beauty and the destination for where applications should be.
[00:09:27]
Graham-Cumming: And I think you also see some companies are now starting to wrestle with the economics of the cloud where they realize that they are kind of locked into their cloud provider and are paying rent kind of thing; it becomes entirely economic at that point.
[00:09:41]
Papadopoulos: Well it does, and you know, this was also something I was pretty vocal about, although I got misinterpreted for a while there as being, you know, anti-cloud or something which I'm not, I think I'm pragmatic about it. One of the dangers is certainly as people yield particularly to SaaS products, that in fact, your data in many ways, unless you have explicit contracts and abilities to disgorge that data from that service, that data becomes more and more captive. And that's the part that I think is actually the real question here, which is like, what's the switching cost from one service to another, from one cloud to another.
[00:10:35]
Graham-Cumming: Yes, absolutely. That's one of the things that we faced, and one of the reasons why we worked on this thing called the Bandwidth Alliance: one of the ways in which stuff gets locked into clouds is that the egress fees are so large that you don't want to get your data out.
[00:10:50]
Papadopoulos: Exactly. And then there is always the, you know, well we have these particular features in our particular cloud that are very seductive to developers and you write to them and it's kind of hard to undo, you know, just the physics of moving things around. So what you all have been doing there is I think necessary and quite progressive. But we can do more.
[00:11:17]
Graham-Cumming: Yes definitely. Just to go back to the thought about latency and bandwidth, I have a jokey pair of slides where I show the average broadband network you can buy over time and it going up, and then the change in the speed of light over the same period, which of course is entirely flat: zero progress in the speed of light. Looking back through your biography, you worked at Thinking Machines, and I assume that fighting latency at a much shorter distance of cabling must have been interesting in those machines because of the speeds at which they were operating.
[00:11:54]
Papadopoulos: Yes, it surprises most people when you say it, but you know, computer architects complain that the speed of light is really slow. And you know, Grace Hopper, who was really one of the founders, one of the pioneers of modern programming languages and COBOL (I think she was a rear admiral), would walk around with a wire that was a foot long and say, “this is a nanosecond”. And that seemed pretty short for a while, but you know, a nanosecond is an eternity these days.
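Hopper's prop checks out arithmetically: in one nanosecond, light in a vacuum covers just about a foot. A quick back-of-the-envelope sketch (the figures below are standard constants, not from the conversation):

```python
# How far does light travel in one nanosecond?
C = 299_792_458        # speed of light in vacuum, m/s
NANOSECOND = 1e-9      # seconds

distance_m = C * NANOSECOND
distance_ft = distance_m / 0.3048  # metres to feet

print(f"{distance_m * 100:.1f} cm, or {distance_ft:.2f} ft")
# just under 30 cm -- roughly Hopper's foot of wire
```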
[00:12:40]
Graham-Cumming: Yes, it's an eternity. People don't quite appreciate it if they're not thinking about it, how long it is. I had someone who was new to the computing world learning about it, come to me with a book which was talking about fiber optics, and in the book it said there is a laser that flashes on and off a billion times a second to send data down the fiber optic. And he came to me and said, “This can't possibly be true; it's just too fast.”
[00:13:09]
Papadopoulos: No, it's too slow!
[00:13:12]
Graham-Cumming: Right? And I thought, well that’s slow. And then I stepped back and thought, you know, to the average person, that is a ridiculous statement, that somehow we humans have managed to control time at this ridiculously small level. And then we keep pushing and pushing and pushing it and people don't appreciate how fast and actually how slow the light is, really.
[00:13:33]
Papadopoulos: Yeah. And I think, if it actually comes down to it, a very pure reckoning of this is that latency is the only thing that matters. And one can look at bandwidth as a component of latency, so you can see bandwidth as a serialization delay, and that kind of goes back to the Clark thing: you know, yeah, I can buy that, but I can't bribe God on the other side, so I'm fundamentally left with this problem that we have. Thank you, Albert Einstein, right? It's kind of hopeless to think about sending information faster than that.
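This framing, bandwidth as the serialization component of latency, can be written down directly: one-way delay is propagation (fixed by distance and the speed of light in the medium) plus serialization (message size over bandwidth). A minimal sketch; the link and distance numbers are made up for illustration:

```python
# One-way delay = propagation delay + serialization delay.
# More bandwidth shrinks the serialization term; nothing shrinks
# the propagation term (the "can't bribe God" part).
C_FIBER = 2e8  # m/s, roughly c / 1.5 for light in optical fiber

def one_way_delay_ms(distance_m, size_bits, bandwidth_bps):
    propagation = distance_m / C_FIBER
    serialization = size_bits / bandwidth_bps
    return (propagation + serialization) * 1_000

# Hypothetical 1 MB message over ~5,600 km (about New York to London):
slow = one_way_delay_ms(5_600_000, 8e6, 100e6)  # 100 Mb/s link
fast = one_way_delay_ms(5_600_000, 8e6, 10e9)   # 10 Gb/s link
print(f"{slow:.1f} ms vs {fast:.1f} ms")  # the 28 ms propagation floor remains
```

Buying a hundred times the bandwidth collapses the 80 ms serialization term to under a millisecond, but the propagation term never moves.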
[00:14:09]
Graham-Cumming: Yeah exactly. There are information limits, which is driving why we have such powerful phones, because in fact the latency to the human is very low if you have it in your hand.
[00:14:23]
Papadopoulos: Yes, absolutely. This is where the edge architecture and the Workers structure that you guys are working on come in, and I think that's where it becomes really interesting too, because it gives me — you talked about earlier, well, we're now introducing this new tier — but it gives me a much closer place from a latency point of view to have some intimate relationship with a device, and at the same time be well-connected to the network.
[00:14:55]
Graham-Cumming: Right. And I think the other thing that is interesting about that is that your device fundamentally is an insecure thing, so you know if you put code on that thing, you can't put secrets in it, like cryptographic secrets, because the end user has access to them. Normally you would keep those on the server somewhere, but then the other funny thing is if you have this intermediary tier which is both secure and low latency to the end user, you suddenly have a different world in which you can put secrets, you can put code that is privileged, but it can interact with the user very very rapidly because of the low latency.
[00:15:30]
Papadopoulos: Yeah. And that essence of, where’s my trust domain? Now I've seen all kinds of, like, oh my gosh, I cannot believe somebody is doing it, like putting their S3 credentials down on a device and having it hold, you know, the login for a database or something. You must be kidding. I mean that trust proxy point at low latency is a really key thing.
[00:16:02]
Graham-Cumming: Yes, I think it's just people need to start thinking about that architecture. Is there a sort of parallel with things that were going on with very high-performance computing with sort of the massively parallel stuff and what's happening today? What lessons can we take from work done in the 70s and 80s and apply it to the Internet of today?
[00:16:24]
Papadopoulos: Well, we talked about this sort of, there are a couple of fundamental issues here. And one we've been speaking about is latency. The other one is synchronization, and this comes up in a bunch of different ways. You know, whether it's when one looks at the CAP theorem kinds of things that Eric Brewer has been famous for: can I get consistency and availability and survive partitionability, all at the same time? And so you end up in this kind of place — goes back to maybe Einstein a bit — but you know, knowing when things have happened and when state has been actually changed or committed is a pretty profound problem.
[00:17:15]
Graham-Cumming: It is, and what order things have happened.
[00:17:18]
Papadopoulos: Yes. And that order is going to be relative to an observer here as well. And so if you're insisting on some total ordering then you're insisting on slowing things down as well. And that really is fundamental. We were pushing into that in the massively parallel stuff and you'll see that at Internet scale.

You know, there's another thing, if I could. This is one of my greatest “aha”s about networks, and it's due to a fellow at Sun, Rob Gingell, who actually ended up being chief engineer at Sun and was one of the real pioneers of the software development framework that brought Solaris forward. Rob would talk about this thing that I label as network entropy: basically, what happens when you connect systems to networks? What do networks do to those systems? And this is a little bit of a philosophical question; it’s not a physical one. Rob observed that over time networks have this property of wanting to decompose things into constituent parts, have those parts get specialized, and then reintegrate them. So let me make that less abstract. In the early days of connecting systems to networks, one of the natural observations was, well, why don't we take the storage out of those desktop systems or server systems and put it on the other side of at least a local network, into a file server or storage server. And so you could see that computer get pulled apart between its computing and its storage pieces. And then that storage piece, in Rob’s next step, would go on and get specialized, so we had whole companies start, like Network Appliance, Pure Storage, EMC, and big pieces of industry formed. Or look at the original routers: they were, you know, routing daemons running on workstations, and Cisco took that and made it into something. And so you now see this effect happen at the next scale.
One of the things that really got me excited when I first saw Cloudflare a decade ago was, wow okay in those early days, well we can take a component like a network firewall and that can get pulled away and created as its own network entity and specialized. And I think one of the things, at least from my history of Cloudflare, one of the most profound things was, particularly as you guys went in and separated off these functions early on, the fear of people was this is going to introduce latency, and in fact things got faster. Figure that.
[00:20:51]
Graham-Cumming: Part of that of course is caching and then there's dealing with the speed of light by being close to people. But also if you say your company makes things faster and you do all these different things including security, you are forced to optimize the whole thing to live up to the claim. Whereas if you try and chain things together, nobody's really responsible for that overall latency budget. It becomes natural that you have to do it.
[00:21:18]
Papadopoulos: Yes. And you all have done it brilliantly, you know, to sort of Gingell’s view. Okay so this piece got decomposed and now specialized, meaning optimized like heck, because that's what you do. And so you can see that over and over again and you see it in terms of even Twilio or something. You know, here's a messaging service. I’m just pulling my applications apart, letting people specialize. But the final piece, and this is really the punchline. The final piece is, Rob will talk about it, the value is in the reintegration of it. And so you know what are those unifying forces that are creating, if you will, the operating system for “The Network is the Computer.” You were asking about the massively parallel scale. Well, we had an operating system we wrote for this. As you get up to the higher scale, you get into these more distributed circumstances where the complexity goes up by some important number of orders of magnitude, and now what's that reintegration? And so I come back and I look at what Cloudflare is doing here. You're entering into that phase now of actually being that re-integrator, almost that operating system for the computer that is the network.
[00:23:06]
Graham-Cumming: I think that's right. We often talk about actually being an operating system on the Internet, so very similar kind of thoughts.
[00:23:14]
Papadopoulos: Yes. And you know, as we were talking about earlier, with how developers make sense of this pendulum or cycle or whatever it is, having this idea of an operating system, or of a place where I can have ground truths and trust and sort of fixed points in all this, is terribly important.
[00:23:44]
Graham-Cumming: Absolutely. So do you have any final thoughts? It must be 30 years on from when “The Network is the Computer” was a Sun trademark; now it's a Cloudflare trademark. What's the future of that slogan going to look like, and who's going to trademark it in 30 years' time?
[00:24:03]
Papadopoulos: Well, it could be interplanetary at that point.
[00:24:13]
Graham-Cumming: Well, if you talk about the latency problems of going interplanetary, we definitely have to solve the latency.
[00:24:18]
Papadopoulos: Yeah. People do understand that. They go, wow, it’s like seven minutes between here and Mars at close approach.
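That figure is easy to sanity-check: at close approach Mars is roughly 54.6 million km away (an approximate textbook value, not from the conversation), so a one-way signal takes about three minutes and a round trip about six:

```python
C = 299_792_458                  # speed of light, m/s
MARS_CLOSE_APPROACH_M = 54.6e9   # ~54.6 million km, approximate

one_way_s = MARS_CLOSE_APPROACH_M / C
round_trip_min = 2 * one_way_s / 60
print(f"one way: {one_way_s:.0f} s, round trip: {round_trip_min:.1f} min")
# about 182 s one way, about 6 minutes round trip
```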
[00:24:28]
Graham-Cumming: The earthly equivalent of that is New Zealand. If you speak to people from New Zealand and they come on holiday to Europe or they move to the US, they suddenly say that the Internet works so much better here. And it’s just that it's closer. Now the Australians have figured this out, because Australia is actually drifting northwards, so they're actually going to get closer. That's going to fix it for them, but New Zealand is stuck.
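The New Zealand effect is the same arithmetic at terrestrial scale. Assuming a round-number 18,000 km fiber path between New Zealand and Europe (a hypothetical figure for illustration), the speed of light in glass alone puts a floor under the round trip:

```python
C_FIBER = 2e8                 # m/s, light in fiber (~c / 1.5)
NZ_TO_EUROPE_M = 18_000_000   # assumed ~18,000 km path, for illustration

rtt_ms = 2 * NZ_TO_EUROPE_M / C_FIBER * 1_000
print(f"minimum RTT: {rtt_ms:.0f} ms")  # before any queuing or server time
```

That is why being physically close to users matters: no amount of bandwidth buys back those milliseconds.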
[00:24:56]
Papadopoulos: I do ask my physicist friends for one of two things. You know, either give me a faster speed of light — so far they have not delivered — or another dimension I can cut through. Maybe we'll keep working on the latter.
[00:25:16]
Graham-Cumming: All right. Well listen Greg, thank you for the conversation. Thank you for thinking about this stuff many many years ago. I think we're getting there slowly on some of this work. And yeah, good talking to you.
[00:25:27]
Papadopoulos: Well, you too. And thank you for carrying the torch forward. I think everyone from Sun who listens to this, and John, and everybody should feel really proud about what part they played in the evolution of this great invention.
[00:25:48]
Graham-Cumming: It's certainly the case that a tremendous amount of work was done at Sun that was really fundamental and, you know, perhaps some of that was ahead of its time but here we are.
[00:25:57]
Papadopoulos: Thank you.
[00:25:58]
Graham-Cumming: Thank you very much.
[00:25:59]
Papadopoulos: Cheers.
Interested in hearing more? Listen to my conversations with John Gage and Ray Rothrock of Sun Microsystems:
To learn more about Cloudflare Workers, check out the use cases below:
Optimizely - Optimizely chose Workers when updating their experimentation platform to provide faster responses from the edge and support more experiments for their customers.
Cordial - Cordial used a “stable of Workers” to do custom Black Friday load shedding as well as using it as a serverless platform for building scalable customer-facing services.
AO.com - AO.com used Workers to avoid significant code changes to their underlying platform when migrating from a legacy provider to a modern cloud backend.
Pwned Passwords - Troy Hunt’s popular “Have I Been Pwned” project benefits from cache hit ratios of 94% on its Pwned Passwords API due to Workers.
Timely - Using Workers and Workers KV, Timely was able to safely migrate application endpoints using simple value updates to a distributed key-value store.
Quintype - Quintype was an eager adopter of Workers to cache content they previously considered un-cacheable and improve the user experience of their publishing platform.