Jeff Barr (The amazing story of Amazon Web Services)
- 00:00:35 How Jeff got started with Amazon Web Services in the first place, and how innovation has been moving along briskly at AWS.
- 00:06:15 How did it feel to promise an Internet file system and promise not to lose any data? How does AWS deal internally with failure and downtime?
- 00:11:24 How did Amazon add so much transparency to their IT department?
- 00:15:46 How did EC2 get so good so quickly between 2006 – 2011?
- 00:19:35 Why does AWS work without any free support for customers? Why does support cost extra?
- 00:22:25 Where is the biggest growth and excitement with AWS right now?
- 00:24:06 Why are traffic and CPU cores so much more expensive with AWS (up to 1,000 times more)?
- 00:29:59 How does AWS Activate work and how does AWS support start-ups with free credits (up to one million dollars!)?
- 00:41:26 What is Jeff’s prediction for how the cloud will develop in the next 10 years?
You may watch this episode on YouTube – Jeff Barr (The amazing story of Amazon Web Services).
Jeff Barr is the Chief Evangelist for Amazon Web Services (AWS) and has been its chief promoter since 2004.
Big Thanks to our Sponsors!
ExpressVPN – Claim back your Internet privacy for less than $10 a month!
Torsten Jacobi: Hey, absolutely. You know, you’ve been with Amazon Web Services, that part, that’s the cloud part of Amazon. You’ve been with them seemingly since day one. Maybe you can tell us a little bit more about your story and how you came to join Amazon in the first place.
Jeff Barr: Sure. So, back before I joined Amazon, I had already been working with very early web services. I had spent some time at Microsoft: I was part of the Visual Basic team, then I was part of the Visual Studio .NET team, and I got to work with some really early web services. I was working with things like UDDI and SOAP and XML and XSLT, all very interesting but very complex technologies. And the challenge back then was, as technologists, you could look at these web services and say, this is really cool, because it’s computer-to-computer connection across the internet. As technologists, we look at that and say, yeah, that’s super awesome, we love that. But then you show it to a business person and they’re like, I don’t understand what the excitement is about; nothing amazing is happening here. And I left Microsoft to do some web services consulting, and still, with the companies I worked with, this was way back in 2000 or so, this lack of actual tangible applications for web services was just so obvious. And then one lucky day, I saw the very, very first service that Amazon published. This was actually a beta, and it was a service that had a couple of names over the years, but at that point it gave developers access to the Amazon product catalog. I happened to find that service minutes after it was launched in beta. I saw it, thought this is pretty cool, downloaded it, started using it, sent Amazon some feedback, and through a lot of really just lucky breaks, I ended up working as part of the team that actually built that service.
Torsten Jacobi: Well, you’ve absolutely been successful, right? The Amazon cloud has really taken off. And I discovered it in 2006, really early on as well. There were initial services like S3, which is kind of a distributed file system in the cloud, and then there was the way you could spin up instances; I thought this was a VMware product initially and then Amazon put some layers on top of it. When you look back at the early days, my personal impression is there was a lot of innovation, but nobody really knew about it. Nobody really wanted to use it. There were only a few startups who didn’t have the time and didn’t have the money to afford their own servers, so they looked at Amazon. And then the picture changed, I feel, especially over the last five years: innovation has slowed down while way more customers have signed up, and we notice some massive customers now on the Amazon cloud. What is your personal impression of the last 15 years?
Jeff Barr: Well, it’s been an incredible, unique part of my career, and nothing I would have ever dreamed of, that I get to participate in something so amazing and so life-changing, personally but even more for the industry and for the customers. I would totally disagree that innovation has actually slowed down. I think that we’re just creating new services and new instance types, and the pace of launches is, I think, faster than ever. On the hardware side, we’re going in at a very, very fundamental level. We’ve done things like build the Nitro System, where we’ve got our own chips and our own hardware that control security and the boot-up process, that control low-level networking and disk IO. We’ve got the Graviton chips, we’re now at the Graviton2 level, we’ve got chips for machine learning training and machine learning inferencing, we’re working with 5G with Wavelength Zones, lots of really amazing things with machine learning. So I think the innovation is still there and still happening really quickly.
Torsten Jacobi: Yeah, well, that’s wonderful to hear. But one thing I’ve learned from my time with the cloud is that whenever you connect to the cloud, you seem to end up in Virginia, right? You’re in Ashburn or you’re in Reston; most of the core cloud experience is really within a 20 mile radius of Washington DC. Why is that?
Jeff Barr: Well, that was the original AWS region. The first region that we opened up was what’s now called US East 1 in Northern Virginia, but we very quickly followed that up with additional regions. I think the next one we launched was in Dublin, and we then went to Singapore and Japan. I don’t have the order precise here because it happened so quickly, but we now have 25 regions all around the world, and customers get to pick and choose. So one really important aspect of AWS has always been that customers choose the region: they decide exactly where their data is stored and where it’s processed, and we’ve always been, I think, very, very clear that once you choose a region, that’s where the action happens, and data only moves between regions if you, as the customer, decide to actually initiate that yourself.
Torsten Jacobi: Yeah. I thought it amazing that when you opened up, you had the audacity to put customer data in a file system like S3. You had a couple of years of experience using that internally for Amazon products, starting in the late 90s, I guess, but it takes some guts to tell everyone out there: why don’t you put your files here, and we promise you we probably won’t lose them. I understand there’s a bit of a service guarantee, so there’s a legal way out, but still it would have been, let’s put it this way, extremely problematic if you had lost even 1% of the files. How did that feel in the early days?
Jeff Barr: Well, it was something really that we knew: if you advertise something as simply as storage for the internet, it has to live up to that promise. You have to make something that has that incredible degree of availability and durability. I know that from the very, very beginning, we actually shared that S3 was designed to deliver what we say is 11 nines of durability, so 99.999999999%. This isn’t just kind of a hope or a wish. This is supported by a detailed understanding of the system, of the operational characteristics, of the failure rates and failure modes of all the components, the ways that S3 internally checks data for integrity, the way it is continuously replicating and re-replicating, and by literally doing the math, we were able to offer a system with that level of durability.
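To put that 11-nines figure in perspective, here is a rough back-of-the-envelope calculation. This is purely illustrative arithmetic, not Amazon's internal durability model, which accounts for failure modes and replication in far more detail:

```python
# Illustrative only: what 99.999999999% (11 nines) annual durability
# implies for expected object loss, treating losses as independent.

annual_durability = 0.99999999999                # 11 nines
annual_loss_probability = 1 - annual_durability  # ~1e-11 per object per year

objects_stored = 10_000_000                      # suppose you store 10 million objects

# Expected number of objects lost per year at that durability level.
expected_losses_per_year = objects_stored * annual_loss_probability
print(expected_losses_per_year)                  # ~0.0001
```

In other words, storing ten million objects at that durability level, you would expect to lose a single object roughly once every 10,000 years.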
Torsten Jacobi: Yeah, I find that stunning. It’s something that Google for the longest time hadn’t done. You guys opened up in 2005, 2006, maybe a little earlier, but I think the Google Cloud only came along much later, and the Google File System, which was the original springboard for a lot of their services, like the Google main search index, was by about 2015, I feel, still not fully opened up for external usage. So you guys were at least 10 years ahead of the rest of the industry.
Jeff Barr: We got a nice head start, and I think we listened to customers very, very carefully. When we put out the service, it was very aptly named the Simple Storage Service. We picked the absolute minimum set of internal functionality and visible APIs, offered that to developers and said, here it is, here’s what it is and here’s what it does, and we invite you to use it. Developers looked at that very quickly and said, I get this, I see why this is valuable to me. Even better, they saw that we had things like command line tools and SDKs, and then other tools quickly built by third parties, that all started adding value to this whole S3 ecosystem, and the data started just flooding in. I don’t remember the exact dates when we started announcing the numbers of billions of objects, but we did that for quite some time, saying okay, we’re at 10, 50, 100 billion objects, and we actually stopped doing it for a bit because the numbers got beyond astronomical. Earlier this year, as part of Pi Week, we actually said we were at well over 100 trillion objects, which is a number that, how do you put that into real-world terms? It’s so unimaginable, that number.
Torsten Jacobi: Well, we kind of know how big it is when it goes down. It’s rare, but it happens; I think about two years ago it went down for a day and 50% of the internet was down. S3 is just one service out of everything you guys do, and everything stopped, because most people serve their images or static files out of S3 directly to the end user, so we realized most websites were partly broken.
Jeff Barr: So that was somewhat more than two years ago. I don’t remember the details; I know we brought things back to life very, very quickly, but again, we do have S3 running out of multiple regions, there’s a lot of redundancy built into S3, and yes, it did indicate that there are a lot of different ways that people are putting S3 to use. Now, one of the things that we’re relentless about at AWS is: we do build for this high degree of availability and durability, but we do know that things are sometimes going to break in a new way. If that ever happens, we have this internal model where first we bring things back online, we get our customers back up and running, but then we take apart the entire system. We look at log files, we look at the entire sequence of events that led to this actual customer-visible issue, and we always say, let’s make sure that we fully understand this cascading series of events and issues, and then let’s improve the system, or whatever it is, so that it can never, ever, ever happen again. And the iterated effect of this process, over years and years, is that the system just continues to get better and better.
Torsten Jacobi: Yeah, I feel you’re really onto something there. It’s this transparency of engineering that you really pioneered; I think a lot of other people later on copied it and said, we have a dashboard and you have the status, but you guys pioneered it, at least on a big scale. That kind of culture is rare for engineering organizations. Most engineering organizations are kind of hidden away, and I understand you obviously are different, but even compared to the rest of Amazon they’re hidden away and they really don’t want to talk much to outside departments. How did you pull this off? Is this a Werner Vogels thing, or a Jeff Bezos thing? How did it happen that it’s so transparent?
Jeff Barr: You know, it’s been part of the culture for as long as I’ve been part of the organization. Internally we have this model we call correction of errors, and every time things break in any way, we create a document called a COE document. The interesting thing is that these documents become legend within the engineering community over time, to the point where each of these documents is numbered, and if you are ever in a deep tech discussion with our senior and our principal engineers, they will recite some of the legendary COEs by number. They’ll say, hey, do you remember 257? And they’re like, oh yeah, that was one amazing consequence of this aspect of design. Or they might go into a design review, reviewing a design to make sure it’s scalable and durable and available, and one of the principal engineers might say, you need to go look at COE 623, because we learned some really important lessons about your particular architecture. So we have this incredible repertoire of, I want to say, almost how not to do things, that’s been built up over 20-plus years.
Torsten Jacobi: Yeah, well, when I look at, say, the core of American engineering, so to speak, it doesn’t have to be an American who works there, but it’s this value of big projects done with pioneering spirit. I think of the Golden Gate Bridge, for instance; maybe it’s not a perfect example, but it’s something that was daring, that was out there, it was big, it was a monument, and it had never been done before. We have, to an extent, a good amount of pioneering spirit in America, but we don’t have a lot of engineers who are allowed, or capable, to put this pioneering spirit into practice. And I feel this is what happened with the cloud, and especially with AWS: there was this amazing idea of really difficult engineering and you guys pulled it off. You were the first to deliver a cloud service that anyone could use, and you could use it for a few dollars a month. It was very equal access; everyone on the planet could access it, and there were no barriers to entry.
Jeff Barr: And you brought up the very beginning of EC2. So EC2 is almost 15 years old; I think it’ll be August when we are officially 15 years old. You mentioned earlier that you thought it was VMware, but it actually was not; we used a virtualization environment called Xen to get EC2 up and running. And I do remember presenting about EC2 in the early days, and even though there was a lot of amazing engineering inside, as you picked up on, a lot of people looked at it and really dismissed it. They’d say, you know, this just sounds like a couple racks of hardware, or, this is nothing really special, you probably had some extra hardware left over from the last holiday season, so you’re just giving customers access to it. And we didn’t really talk about the special magic inside of EC2 for a really, really long time. We simply said: customers care about the fact that they can make an API call or click a button and launch instances on demand, whether they need one, ten, a hundred, or a thousand (we’ve got customers running hundreds of thousands of instances), and we’re going to build all the infrastructure to make that easy and reliable and cost effective. That’s where we really put all the energy.
Torsten Jacobi: Yeah, and well, you succeeded pretty quickly, I felt, in just a few years, and I saw it from a user’s perspective. First it was just the instances, and it kind of seemed like a normal server that you would get somewhere else. But then you had the backup system, you had the machine images, you had the EBS storages and the backups of the EBS storages. So what happened is you could scale out, because you could just package whatever you have and launch a hundred servers in a heartbeat; anywhere else you would have to set it up from the ground up and reinstall software. So you really went from a solution that looked just slightly better than any other server I could buy from any other host into, within just a few years, a solution that for the longest time nobody was able to replicate. And I still feel, given the majority of tools that you have, even others like IBM and Google, and even Google surprisingly, have trouble replicating that.
Jeff Barr: Well, you have to really strive for simplicity, and I think one of the interesting advantages that we had at Amazon was that one of our first leadership principles, actually the very first leadership principle that we have, is customer obsession. That means that whenever we build something, we’re not simply inventing really awesome things in our R&D lab and then handing them off to marketing to try to sell. The methodology is somewhat famous at this point; we call it start from the customer and work backward. We always begin by writing a document called a PR FAQ, a combination of a press release and a frequently asked questions list. We try really hard to summarize what might be a very complex, very rich service in the page and a half of text you get in a press release, with the thinking that the first thing a customer might see is this press release. And even if they don’t literally see a press release, what we find is that this activity, of having an idea for something that might be very sophisticated and very complex, and doing the work to refine it to the point where you can describe it with simplicity and clarity and accuracy in a press release, is where a lot of that initial energy goes. We put an unbelievably large amount of energy into doing that. When you hear, oh, you write a press release, that doesn’t mean you just set aside an afternoon and whip one out on your laptop. The press release is the distilled thinking of how we best solve a set of customer problems, and getting to that point of clarity, where we understand the problem really well and can describe what we’d like to do to solve it, is where a lot of that initial energy goes.

Torsten Jacobi: I think it’s a really good strategy, and I’ve been trying that for one of my startups, and it kind of worked well; I mean, on a much smaller scale, you guys have
one very different level of experience. One thing I feel where you need that approach, and that’s very specific, but that’s my personal experience, is certain APIs. The API language is incredibly technical, and I think for no reason. API language is often more like SQL, right? Very simple and almost human-like. But for AWS it gets incredibly complicated, with queries that nobody really understands besides a few developers. Really highly technical, highly abstract; they make a lot of sense, but it’s really difficult to get to that level and understand it.

Jeff Barr: It truly is. But I do think that that really focused, and continued, focus on customer obsession is why we succeeded. When customers looked at this, maybe it was surprising, but then they looked a little bit deeper and they said, well, this is Linux inside, and I know how to administer a Linux system, I know how to SSH to a Linux system, I know how to install packages. There were a lot of familiar aspects that resonated with customers and potential customers, where they said, okay, yes, I’m stepping into a new world, but I’m able to take a lot of skills into that new world that I learned in the prior world.

Torsten Jacobi: Yeah, one thing that I always find mesmerizing, and I don’t know if that’s still true, but when you get to use AWS services as an initial user, you don’t really have access to human support, and it’s not easy to scale into that level of complexity. I mean, if you have experience it gets easier, but you guys keep building more and more stuff, so it’s not easy. There’s good documentation, excellent documentation, but there is no free support; there are a couple of forums and you can post something, but basically you’re on your own. You can pay for support, right? And that’s fair enough, but I always felt like this is a very interesting approach that very few companies take. Others out there do have a
level of support, but again they often get overwhelmed and then just don’t deliver anything, right? These emails never get answered. Your approach almost felt a little arrogant to me, and not necessarily in a bad way, but you guys felt like: okay, we’ve laid it all out, and if you don’t get it, you have to pay for it, a hundred bucks a month or whatever the support packages cost.

Jeff Barr: So the term that we like to use internally is self-service platforms, where the way to scale isn’t always by adding more people to your organization; it’s by making the systems as obvious and as usable as possible, by making the documentation accessible and readable, making sure that you have good sample code and a community around you for support. In the early days of AWS, one of the places that I would hang out a lot was the AWS forums, and the forums have been a little bit superseded by more modern communities; I like to hang out on the AWS subreddit, where there are a lot of really knowledgeable, really helpful folks. And the interesting thing is, the premise of support, I think, is sometimes that the vendor knows everything about the product. But a very interesting phenomenon happens: the customers, in a sense, know more than the vendor. Yes, we engineer it from the ground up, we build it and we run it, but the customers are the ones that are actively making use of all the different functions, calling the APIs, and often using different services in conjunction with each other. If you were to look at one of our teams, you’ll find that within the S3 team there are a lot of sub-teams; within the EC2 team it’s the same way. But each of these teams is really focused on their individual mission. If you ask any member of the S3 team what they do, they will tell you how their
work contributes to the success of S3. If you ask them about other services that we offer, they might have some peripheral knowledge of what it is and what it’s about, but they probably can’t say, well, if you want to combine S3 and Redshift, or S3 and, I don’t know, Lambda, let’s say, those actually work together nicely. Maybe not a great example, but in general the teams know as much as possible about their own offering, and they just don’t necessarily have the vantage point to look at how their work plays out in the bigger picture.

Torsten Jacobi: Yeah, that makes sense, and obviously if you use self-service it scales way better; I fully understand that. When you look at the growth right now, where do you see the biggest growth spurts? You mentioned earlier there is still a lot of innovation, maybe not as visible to myself. I feel like there are a lot of higher-value-added services, like Redis clusters for instance, or obviously Lambda; the serverless computing part seems to scale a lot. Where is the biggest growth, in terms of usage but also in terms of excitement, right now?

Jeff Barr: So I don’t keep track of the individual business units and their growth rates, but the fundamentals are still there. The use of S3 and EC2 continues to grow, and the use cases continue to expand. In addition to simply launching new regions all over the world, and we’ve been adding to that collection very steadily, we’ve been doing things like adding this new model we call Local Zones. With Local Zones we want to get compute power closer and closer to our users; you can really think of those as extensions of existing regions, and we announced just yesterday that we’ve got Local Zones in three additional cities. We’re also working with 5G providers in the US and in other parts of the world to put compute and storage in the telecom centers of the 5G providers, so
going back to the fundamentals: just getting compute, just getting storage, closer and closer to the users, to enable new and different kinds of applications.

Torsten Jacobi: Yeah, one thing that I always feel has gotten a little out of touch is some of the pricing model for Amazon. What I love is obviously that you pay by the pure instance hour, or you just pay for usage generally; that’s just awesome, right? This is something people love, and I think nobody will ever say no to that, and there are no minimums ever. But on the other hand, when I see traffic charges, which are like a thousand times more expensive than in lots of other places, or I look at compute, how much can I do with my CPU, which is 10 times more expensive than if I go to another host: is that something where you guys feel you have to be so expensive because you can, or is it built into the model, into the cake, of you only pay for what you use? Why is something so extremely expensive when other things within AWS are very competitive?

Jeff Barr: Well, I actually would disagree that we’re more expensive than other options, although I will say that I personally don’t spend any time looking at other ways to obtain these resources. The model that we use internally is that we try really hard to make sure that the prices in each dimension of usage are a direct proportion of costs; we call it a cost-following model. And we do this so that regardless of the way that our customers choose to use the service, they’re paying their fair share of the consumption. This actually was one of the most interesting decision points before we even launched S3: there was no pay-as-you-go example that we could look to, and there were questions like, do you charge by the month for a fixed amount of storage, do you have large increments, do you have
various kinds of plans at various scales? And after a lot of discussion with customers, after a lot of analysis of the business, we said we’re going to break it down into several different dimensions. We effectively take our costs to do storage, to do data transfer, to respond to API calls, really break those fundamental costs out, add a bit of a margin on top, and then set each of those pricing dimensions accordingly for our customers. We do tend to reduce prices; we’ve had somewhere close to 100 price reductions over the years. Those represent things like Moore’s law, where semiconductors continue to get more powerful and less expensive, they represent economies of scale on our side, and also we tend to learn how to run the services more efficiently over time.

Torsten Jacobi: Yeah, I think in general you’re correct, and the model is appealing, but think about traffic charges. With a lot of bare-hardware servers you get 20 terabytes included, right? Getting 20 terabytes out of S3 or EC2 is, I think, what, $1,800? That’s an incredible amount, and it’s just for one particular server. And I think you see competitors now, like Wasabi, who basically do an S3 clone and say, oh, you don’t have any traffic charges. Yes, there are certain usage models that they exclude, but basically traffic is included, so your bill drops from almost $2,000, if you have that much traffic, to zero. And that’s pretty crazy, right? That’s why I feel this way. I mean, there are certainly other use cases where the difference is not as stark, but if you have a high-traffic website you pay dearly at Amazon, and other hosts give it to you for free. That’s quite stunning, right? Maybe it’s because they seem to have no costs with it.

Jeff Barr: So I don’t understand the business models of
the other providers, but I would have to imagine and speculate a bit that if every one of their customers was to max out their service in that particular dimension, their model wouldn’t actually play out to their benefit. There are averages and expectations in a model like that, where if you say you can have up to this much bandwidth, well, that’s wonderful, but the expectation is that the average customer is going to consume considerably less. And what I’ve heard from our customers is that they do understand that there is a cost for bandwidth at scale, but they need the predictability; they need to know that if they were to actually consume all the available bandwidth, they don’t face a cap, they don’t suddenly enter into a different charging level. Now, I don’t understand at the deep technical level how you might configure connectivity, but there are different kinds of peering arrangements, there are certainly different quality levels that you can aspire to as you do this, and we’re doing this at the quality level that our customers expect to run their most business-critical applications.

Torsten Jacobi: Yeah, I think we all remember the first peering agreement from Google, right? You would expect, because they downloaded all the websites, that there would be a huge cost factor involved, and basically they got all that traffic for like a few thousand dollars. It was an incredible amount of traffic, but they downloaded everything to the data center, which is the direction of traffic you give away for free as well, right? That direction is not used much, so those pipes are basically empty and you get it really cheap, while the other direction is charged. But, you know, every cloud has their own idea about how to charge for this, based on their own peering agreements.

Jeff Barr: Yeah, and one thing I’d say is we do always listen to our customers, and as we get feedback from customers about,
okay, well, this part of AWS we’re happy with the pricing, this part we sure wish you could do better, we listen to that and we do our best to respond accordingly.

Torsten Jacobi: Yeah, one thing that you guys really excel at is enabling startups, and that’s an amazing boost that you deliver to the startup economy, especially in Silicon Valley. You now even have this particular program that gives out credits; I didn’t even know about it until like two years ago, and I don’t know how old it is. Maybe you can tell us a little more about the role of startups in the ecosystem.

Jeff Barr: Sure. So I have a lot of affection for startups, having worked in multiple different startups, and one thing I remember from my pre-Amazon experience: I used to consult for different startups, and what would often happen is that I would get in pretty early on, often before they had been fully funded, when they were kind of limping along on just a couple of servers and some really bare-bones infrastructure. At a certain point they would get funded, and the first thing that they would do after getting funded would be to call their Sun Microsystems salesperson and say, we’ve got all this money and we need to give a third of it to you, so we can have a whole bunch of Sun servers in our data center. This was around the 2000, 2001 era, when there were tons of startups and all this demand for Sun servers, and then it would take months for the hardware to show up. Sometimes the startup would be constrained in their growth because they couldn’t get their servers fast enough; sometimes there’d be more servers than customers and they’d have a surplus. So this idea of being able to use EC2, to get exactly as much compute power as you need, when you need it, and if you get this incredibly awesome burst of traffic, because, well, back in the old days we’d say you were slashdotted, now you’re
Reddit or Hacker News or CNN. When you get that huge burst of traffic, that might be your one big opportunity to get in front of a gigantic audience and get yourself to the next level of success. So: architect your system properly so it can auto scale, and if you get that massive traffic surge, you're all set to handle it; when it goes away, you can auto scale back down. Startups get that. They always aspire to greatness, but they have to be practical and run on infrastructure they can afford. So startups were really one of the first targets, and I just loved seeing those really early startups succeed: going way back, companies like GigaVox Media and SmugMug and Animoto, and lots of developers of Facebook apps and add-ins in the early days. They understood all too well, going back ten, twelve, almost fifteen years at this point, this idea of going viral: you put something out in front of the public, it catches people's eye, and before you know it your need for compute power is going to far exceed anything you could plan for, or pay for, up front. That appeal was right there from the very beginning.

Torsten Jacobi: We all watch the TV show Silicon Valley, and it quips every other episode about how much money goes to Jeff Bezos, because that's the biggest bill. I think this is true of all startups, right? The biggest bill, besides the salaries and the lawyers for the VCs, is generally Amazon. We all have that problem when we want to build something.

Jeff Barr: I don't always hear that from our customers. I see a lot of customers doing serverless applications, and a lot of serverless customers tell me that their serverless bill is about equal to their coffee bill.
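The "architect for auto scale" pattern Jeff describes, growing the fleet for a traffic surge and shrinking it afterwards, can be sketched as a tiny target-tracking calculation. The function name, thresholds, and instance limits here are illustrative, not an AWS API:

```python
# Sketch of the target-tracking idea behind auto scaling: pick an
# instance count that brings average per-instance CPU back to a target.
# All names and numbers are illustrative.
import math

def desired_capacity(current_instances: int, avg_cpu_percent: float,
                     target_cpu_percent: float = 50.0,
                     min_instances: int = 1, max_instances: int = 100) -> int:
    # Total load is roughly current_instances * avg_cpu_percent; divide by
    # the target to find how many instances would absorb it comfortably.
    needed = math.ceil(current_instances * avg_cpu_percent / target_cpu_percent)
    return max(min_instances, min(max_instances, needed))

surge = desired_capacity(4, 95.0)   # traffic burst: scale 4 -> 8
quiet = desired_capacity(8, 10.0)   # traffic goes away: scale 8 -> 2
```

In a real deployment a target-tracking scaling policy would do this calculation against CloudWatch metrics rather than application code hand-rolling it.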
Torsten Jacobi: Oh yeah, serverless is awesome, and I wanted to get to serverless in a second. But what I wanted to get at first: what you do now, and hopefully you can tell us how this actually works, is something called AWS Activate, which gives startups a lot of credits depending on their funding level. I do like that initial fix, right? Up to a million dollars in AWS credits?

Jeff Barr: There are several different plans for startups, with varying levels of credits they can get. We also like to work through various kinds of accelerator programs, because, again, it's in the interest of scale. Whenever we architect something new at Amazon, whether it's a service or a customer-facing program, we always think about scale. When we think about how to reach millions of startups around the world, we realize we can't possibly staff up to have direct connections to all of them. So we work through accelerators: they know their local markets, they know their customers, they can create and nurture relationships with local startups, and we give them the ability to make the tough decisions and decide which of those startups should receive AWS credits to grow.

Torsten Jacobi: And how does this work? From what I understand, there's an application process even after you get the funding. So you're funded, you're part of an accelerator or you're VC-funded, then there's an application process through AWS Activate, and then you get to use those credits over twelve months, or over the lifetime of your startup. How does that work?

Jeff Barr: You know, I'm not as familiar as I
should be with the various programs, but in general terms, that is the way it works. We do want to make it easy, but we also want to do our due diligence, to make sure we're supporting the most promising startups.

Torsten Jacobi: You just mentioned serverless computing, and I've thought that's really interesting for the longest time. A lot of computing, and people don't really realize this, still works like the 80s: you log in to your Linux shell, you run a couple of batch scripts, the scripts do something, and to scale up in the cloud you run this on clones of hundreds of different servers, or just on one, depending on what you want to do. There are some more complicated stacks on top that make it easier to manage all that complexity, but it's still fundamentally the same thing. SSH is very much 70s and 80s technology; it hasn't really changed much. The computers are bigger, but we still have the same problems of scalability, of management, of server administration. Serverless seems to be the way to get rid of all this. It's a new paradigm, but it seems incredibly limited in what you can do with it. Do you see that changing, with more developers figuring out how to use serverless properly and more options on the AWS side?

Jeff Barr: Well, it's great that you put that point in time at the 80s, because I actually remember the 80s, and I was running Unix servers way back then. Serverless is brand new: the entire concept dates to 2015 at the earliest. I don't remember the exact year we launched Lambda, but it was somewhere around 2014 or 2015, so we're still in the very early days, and customers are getting great value from serverless. When I wrote that very first blog post about Lambda, the title was something like "Run Your Code in the Cloud," I knew for sure that startups would look at Lambda really quickly and say, "This is great, because it takes us out of the business of running our own infrastructure." And that happened really quickly. They saw the model and said, "This will really let us focus on building our business and our customer base, and not on doing what we often call the undifferentiated heavy lifting." So startups jumped on the serverless bandwagon very quickly. As soon as I started to see enterprises going for serverless, I said: okay, this happened far more quickly than I would have expected. The enterprises said, "This is so cool," because a lot of enterprises have a lot of servers, most of which are apparently just sitting idle for long stretches of time, eager to actually do something useful. So serverless tends to be a really great match for enterprise applications that go from effectively idle for long stretches to very, very busy for a couple of minutes or a couple of hours every day.

Torsten Jacobi: I think it's kind of like what S3 did for storage. We used to have local file systems, and there were ideas for combining them on the server side; S3 became the data store for the whole internet. I think that's what Lambda, or any serverless approach, can do for compute: all the CPU power that used to sit on individual instances finally goes into one big, centrally managed supercomputer. That's the global idea of a more intelligent cloud, and I think it has that potential, but you have to redo all the code. The whole coding approach needs to be set up from scratch to fit the serverless paradigm.

Jeff Barr: It really depends, but what I've seen with our customers is that they've often already done some work to separate out the business logic from the infrastructure code.
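The separation Jeff describes is exactly what a Lambda-style function forces: the handler carries only business logic, and the platform owns provisioning and scaling. A minimal sketch of that programming model, with an event shape invented purely for illustration:

```python
# Minimal sketch of the Lambda programming model: the platform invokes the
# handler once per event; no server or scaling code appears anywhere.
# The event fields used here are hypothetical.
import json

def handler(event: dict, context=None) -> dict:
    # Pure business logic: greet whatever name the event carries.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Everything else, the fleet, the scaling, the idle capacity Jeff mentions enterprises paying for, disappears from the developer's view.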
Jeff Barr: When they've done that work, or do it as part of moving to serverless, they end up with a really nice, clean separation: nice modules they can use in a lot of different situations, a little more pick-and-choose than the last-generation monoliths they've unfortunately built and often need to spend time untangling.

Torsten Jacobi: Do you know if Docker runs in Lambda? Is that something you can combine?

Jeff Barr: It actually does. There's container support on Lambda, so you can create containers and then upload them, or reference them, in Lambda.

Torsten Jacobi: That's a perfect hybrid technology, right? Docker basically gives you the view of an instance, you can even have multiple containers on one instance, and you just move them over. That's awesome.

Jeff Barr: Exactly. Now, when we talk about evolution, it's fascinating to go back to the beginnings of Lambda, which are still not that long ago. We supported a single runtime language, and it was all on-demand scaling. Over the last couple of years we've added support for a whole bunch of different languages and multiple runtimes, and we've added something we call provisioned concurrency: if you know you're always going to have a certain level of Lambda activity, you can pre-provision that amount of concurrency, so you don't have to worry about scaling up so quickly. We continue to add features and options, and again, this isn't just dreaming things up. It's because we talk to customers, listen to them, learn from them, and do our best to meet their needs.
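The container support Jeff confirms typically means packaging the function as an image built on a Lambda base image. A sketch of what that looks like; the image tag and file names are illustrative, not taken from the conversation:

```dockerfile
# Start from an AWS-provided Lambda base image (tag is illustrative).
FROM public.ecr.aws/lambda/python:3.12

# Copy the function code to where the runtime expects to find it.
COPY app.py ${LAMBDA_TASK_ROOT}

# Name the handler to invoke: module "app", function "handler".
CMD ["app.handler"]
```

The image is then pushed to a registry and referenced when the function is created, which is the "upload those or reference those" step Jeff mentions.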
Torsten Jacobi: Talking about the future of the cloud: what do you think is going to happen in the next 30 years? On one side we have the part of Silicon Valley that's all about Ray Kurzweil's singularity, where we all become cyborgs, and we obviously have Elon Musk already working on the Neuralink model. When you think about the cloud, what's going to happen to it in the next 30 years? Will it become a sentient being? That's one of the things people put forward: somewhere in there, a sentient being is going to emerge and talk to us, maybe in a couple of years.

Jeff Barr: Wow, let's see. Well, I've got enough science fiction on the bookshelf behind my green screen that talks about computers becoming sentient, and I don't think we're anywhere close to that at this point. Part of sentience, if we really want to go abstract and sci-fi, is effectively self-awareness. I read an article a week or two ago about how our consciousness emerged, and it made a nice logical argument that consciousness is just the next step up from a kind of bodily self-awareness, the touching and feeling and pain kind: consciousness is being able to do that inside your brain. That kind of made sense to me. It wasn't a full explanation, but it made me think of a system that uses, for example, CloudWatch metrics. CloudWatch metrics are that kind of sensory system inside your cloud infrastructure. You could think of it this way: if your CPU load is getting too high, or your network traffic is too high, that's kind of like feeling pain, and effectively you're saying, "If we feel the pain of being too busy, we auto scale up; if we feel, let's say, the pleasure of idle instances, we scale back down." So maybe that's like a nervous system with some autonomous responses. A really primitive way to think about it, maybe.

Torsten Jacobi: Yeah, it seems a lot like that. And when you take it a little further: there isn't a lot of decision making that the cloud can do
for itself right now; it seems to be predisposed. But so are we, right? We're predisposed too, and most of what goes on in us is unconscious: the heart is beating, we're breathing, there's so much happening 24/7 in our unconscious that we have no control over. It's kind of like the cloud. We're basically machines too; we just think we're not, because we're flesh and the others are metal. Most of our conscious decisions are actually determined by our unconscious first, and then a tiny sliver gets moved to the conscious part. We have no control over what gets moved; maybe we can prime ourselves a little, we can take some drugs, but in the end we're basically living inside a machine that someone else built. A cloud being would have the same problem: someone else built it, it doesn't have control, it can't just build itself a new machine. It needs humans for that, or whatever ends up building those machines, if it's robots at some point. There's this vast amount of unconscious processing, and maybe some consciousness, some layer of self-control, will eventually emerge. Nobody has seen it yet, but then we also don't fully know how we work; there are all these other animals that work like machines but seemingly don't have consciousness, depending on how we define it.

Jeff Barr: Exactly, and there are a lot of really interesting analogies we can draw. I certainly don't have a blog post in draft form that says "the cloud has now achieved sentience, everyone step back." We don't know where this is going. Interestingly, we think in terms of machine learning and AI, and I suspect the casual observer assumes that's where the sentience might come from, but maybe we'll get surprised. On the other hand, as we talk about this ability to be somewhat self-aware and respond to events in the environment: we do have this idea of event-driven programming. It's more digital than analog, but you do sometimes get emergent behavior. When you take multiple very complex systems and connect them together, there are delays, timeouts, resonances, buffer overflows. Each of the components is wonderful on its own, but when you put them together, they can surprise you a bit. Now, did the organism as a whole have the self-awareness to say, "I'm now this big complicated thing, greater than the sum of my parts"? Definitely not. But it exhibits behavior as if it did.

Torsten Jacobi: Mark Burgess, a physicist who wrote CFEngine, was just telling me that the cloud is really where we're going to see quantum-like effects. Not necessarily in the way physicists mean it, but unpredictable phenomena where things talk to each other in ways you never intended. It's like the famous part of quantum mechanics where one particle on one end of the universe can determine the fate of its partner on the other end, which seems like it should be impossible because it happens instantaneously. He said we'll see effects like this in the cloud: things work together, and something emergent comes out of it that seems like it cannot happen, but it does, and it's somewhat predictable; not perfectly, but somewhat. He felt that if anything is going to be the seed of a super-AI, of this emergence, it will be somewhere in the data center. That's where it will happen. He said you can bet your money
on it that it will happen; when and how, nobody knows. If we knew that, we'd have it already. But it's going to be in one of those cloud superstructures.

Jeff Barr: Sure. Well, the compute power is certainly there. It is very, very safely in well-intentioned human hands right now, but I'll certainly keep my eyes peeled for any emerging sentient behavior.

Torsten Jacobi: I worry a little bit. Think about it: we have, I don't know, 20, maybe 100 data centers, all in Virginia. Let's assume in a couple of years we have a few thousand there, all in the same spot, and suddenly there's an emergent behavior and they all go dark. That would be a big problem: that part of the internet goes dark. Obviously you have more availability zones, and so do others, but a massive part of internet capacity would just go offline, and nobody would really know how to fix it, because it was emergent. It's not "okay, we found the bug, we fix it." It's a decision the system made that we couldn't control, or maybe wouldn't even want to control.

Jeff Barr: Yeah, we're firmly in the realm of advanced sci-fi speculation. Interestingly enough, though, we do run an exercise within our data centers and regions that we call game days. A game day basically says: let's make sure we understand how these systems behave at the edges of their design limits. With a game day, we identify some part of the infrastructure and say, "We believe we've architected the system so that if this part were to slow down, or fail, or be unplugged, the hypothesis is that the rest of the system should just continue to run." We form this hypothesis, we put monitoring in place, and then we actually go and perform the action: we unplug the server, or we introduce errors into the APIs so that they're slow or return failing results.
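The hypothesis-then-inject loop of a game day can be illustrated in miniature: declare that the system survives a dependency failure, inject the failure, and check. Everything here is a toy stand-in, not AWS tooling:

```python
# Toy game-day harness: the hypothesis is that product_page() stays usable
# even when its catalog dependency is "unplugged". All names are invented.

def fetch_catalog(unplugged: bool) -> list:
    # Stand-in dependency; unplugged=True simulates the failed server.
    if unplugged:
        raise ConnectionError("catalog unreachable")
    return ["book", "lamp"]

def product_page(unplugged: bool = False) -> dict:
    # System under test: should degrade gracefully rather than crash.
    try:
        return {"status": "ok", "items": fetch_catalog(unplugged)}
    except ConnectionError:
        return {"status": "degraded", "items": []}  # fallback path

# Game day: perform the action under controlled conditions, check the hypothesis.
assert product_page(unplugged=True)["status"] == "degraded"
assert product_page(unplugged=False)["status"] == "ok"
```

The value is in running the experiment for real, with monitoring in place, rather than trusting that the fallback path works on paper.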
Jeff Barr: Under controlled conditions, we ask: what if we break this, what if this is really slow, what if this doesn't happen? We try to build an understanding of whether we've actually built that level of durability and availability and resilience into the systems.

Torsten Jacobi: That's a wonderful way to be prepared for a disaster. David Urban told me that if, in 2040, we have access, and this is based on Moore's law, so it's still speculation, to the raw computing power of a million brains for a thousand dollars... It doesn't mean it's sentient or that anything special happens; it's just raw computing power. When you look at this, do you know how many servers Amazon manages? Hardware servers, not virtual instances. Do you have any idea? Is it in the millions, is it in the billions? How many individual things in a rack actually exist?

Jeff Barr: I don't know that to within three orders of magnitude; I honestly don't. I'm sure it's a mind-blowing number, and it's not something I think we've ever shared. You can certainly look at the growth of AWS over the years and at the growth rates, and probably not do any computation on that that would be meaningful, but you can see that AWS grows 30, 40, sometimes 50 percent per year, and you can imagine the compounding effect of that over the last 15 years. That's one way you might be able to get into the right ballpark.

Torsten Jacobi: I wanted to run my own numbers; that's why I wanted to pick your brain. If we do this for 2040, what kind of virtual instance would it be? What would we map a brain to, an EC2 or whatever instance with the computing power we believe the brain has? We have a rough idea of what it can compute, even if we don't know how it works or how efficient it is. Then we can just see: will it be a million, will it be a hundred thousand? It makes a big difference.
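Jeff's compounding point can be made concrete: 30 to 50 percent annual growth sustained for 15 years multiplies the starting size enormously. A quick check of the arithmetic:

```python
# Compound growth: r per year for n years scales the base by (1 + r) ** n.
def growth_multiple(annual_rate: float, years: int) -> float:
    return (1 + annual_rate) ** years

# 15 years at the rates Jeff cites:
at_30 = growth_multiple(0.30, 15)  # roughly 51x
at_40 = growth_multiple(0.40, 15)  # roughly 156x
at_50 = growth_multiple(0.50, 15)  # roughly 438x
```

So even the low end of Jeff's range implies a fleet some fifty times larger than 15 years earlier, which is why the absolute server count is so hard to guess.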
Torsten Jacobi: On each instance you could literally run an AI that crunches numbers and gives you some guidance on how to solve a problem. I think the biggest computing resource being added to AWS is really algorithms: they're very expensive to create, but once you have them, they're not very computationally intensive to run.

Jeff Barr: So, there are customers that fairly routinely run compute jobs, various kinds of analytics or simulation jobs, that are well above a hundred thousand cores simultaneously. We passed that several years ago, not as an upper bound but as a reasonable thing for customers to do. Customers that need to launch multi-hundred-thousand-core jobs, once they're technically ready to go and have given us a bit of a heads-up so we can make sure their account is prepared for that level of access, run jobs of that scale effectively routinely.

Torsten Jacobi: And by cores, is that GPUs at the hardware level, or is that a virtual core?

Jeff Barr: It depends on the hardware you're running on. Generally, if we're running on a machine that's hyperthreaded, the unit is a vCPU, a virtual CPU, so you'll get two vCPUs per physical core. If you're on something like a Graviton2 processor, one vCPU is effectively one physical core on the machine.

Torsten Jacobi: That's massive. Do you have any idea how much the bill would be for renting a hundred thousand cores for an hour?

Jeff Barr: You know, I have to admit I've written blog posts with those numbers in them, but the number gets lower and lower over time, because the effective cost of a core-hour continues to go down. And we have this model called the Spot Instance model, where instead of paying the list price, the price is basically determined by availability, so you can save up to 90 percent off the list price by using Spot instances.

Torsten Jacobi: Well, if we just assumed 10 cents, I was doing this in my head, it'd be somewhere around 10,000 dollars for an hour, more or less.

Jeff Barr: Ten cents might even be a little high. There are lots of instances that give you far below 10 cents per core per hour.

Torsten Jacobi: We're definitely going to get there. It really depends what CPU speed we associate with the brain; it's probably a bit more than a current core, but in the 20 years from now we're going to see a lot of doublings.

Jeff Barr: The question might even be: is compute power the actual limiting factor, or is this a memory-bound problem, or is it algorithms? Ultimately it's probably that with all this compute power and all this memory available, you need code to actually take advantage of it.

Torsten Jacobi: For sure, and I think you definitely need the serverless approach for it; it won't work with SSH, I think we already know that. Maybe someone comes up with another layer, some kind of middleware; maybe that's what's going to happen. And here's what's quite stunning to me: laptops seemed to lag behind Moore's law for quite some time. I'm looking at MacBooks especially; they got warmer and hotter, but they also got slower, though obviously the software got more resource-intensive over the years. Then suddenly Apple came up with the M1, which is something like 10 times faster than the comparably priced Intel chip from just a year ago, and you feel like: whoa, Moore's law, you can't do much about it. It's almost like a natural law. You give up on it for a while because you think we can't go
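The back-of-envelope from the pricing exchange above, written out. The 10-cents-per-core-hour figure is Torsten's assumption in the conversation, not a quoted AWS price, and Jeff notes real per-core prices are often lower:

```python
# Hourly cost of a large core count at an assumed per-core-hour price,
# with an optional Spot-style discount applied.
def hourly_bill(cores: int, price_per_core_hour: float,
                spot_discount: float = 0.0) -> float:
    return cores * price_per_core_hour * (1 - spot_discount)

on_demand = hourly_bill(100_000, 0.10)                      # about $10,000/hour
with_spot = hourly_bill(100_000, 0.10, spot_discount=0.90)  # about $1,000/hour at "up to 90% off"
```

Which is why the Spot model matters at this scale: the same hundred-thousand-core hour shifts from five figures to four.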
smaller and the technology doesn't really work, but then we come up with virtual cores and multi-cores, and suddenly it's back with a vengeance, right?

Jeff Barr: And the interesting thing is, we've heard of the end of Moore's law before. In my career I've probably heard it ten times in the last 20 years: that we're now at the finest possible resolution on our chips, at whatever number of nanometers we can now draw the lines. And I just saw last week that IBM is now working with two-nanometer chips, where the state of the art before that was, I think, seven or ten nanometers. There's always a new step.

Torsten Jacobi: Well, that's a positive note, and it's good for you guys. What do you feel is the biggest challenge for AWS in the next 20 years? I hope you're going to stay on for quite some time.

Jeff Barr: Wow, let's see. In 20 years I'll be 80-plus years old at that point.

Torsten Jacobi: You can make it happen.

Jeff Barr: I'll have to try. You know, I really enjoy what I do. It's been just the privilege of a lifetime to be present at the birth of this industry, to write all this content and share it with the audience, and to see people pick it up. Not just to see it change their business, though that's neat, but the amount of positive energy, of good karma, generated by people reading something I've done, learning from it, putting it to use, maybe improving their job, getting a better job, making their life a bit better, making their family's life a bit better. That's a cool and amazing thing to be able to do. Where are we in 20 years? Impossible to predict, just impossible. I'll keep doing this as long as I think I'm relevant and useful.

Torsten Jacobi: Well, I think there's another amazing success story coming. I think this cloud is way,
way further to run. You're still only seeing a very small part of that footprint. You guys are at 50 billion dollars a year now, I think?

Jeff Barr: That's about right. It sounds odd, but I tend not to commit the specific numbers to memory, because they change every quarter, and if I remember one of them, I'll just be stuck on a previous quarter. So I don't keep those at hand to recite, but it's certainly been exciting. One thing I remember is being in my 20s, really early in my tech career, realizing it was going to be a really fun, really exciting career, but also seeing people who seemed really old at the time, they were probably 45 or 50, and thinking: man, those folks are way past their prime, they're stuck in a past generation of technology. I remember being barely 21, seeing those folks, and saying: I'm never going to get stuck on some generation of technology. I'm always going to be tracking the future, tracking the new technology, doing my best to understand it and be competent with it. And now I'm probably at least a decade older than those people who seemed so old at the time, and hopefully I've been able to do just that: stay educated and stay relevant.

Torsten Jacobi: Well, you definitely are living it; you're doing it right now. Jeff, thank you so much for coming on the podcast. We appreciate it, and thanks for sharing all your insights.

Jeff Barr: It's been my pleasure. Thanks so much for having me.

Torsten Jacobi: Absolutely. Take it easy, Jeff. Talk soon. Bye-bye.