
Linode Has The Fastest NVMe-Based Block Storage Among All Cloud Providers


Guest: Mike Jung (LinkedIn)
Company: Cloud Spectator (LinkedIn, Twitter)
Show: Let’s Talk

Cloud Spectator is one of the most reputable independent cloud benchmarking and consulting firms focused on the performance of IaaS (Infrastructure as a Service) and applications in the cloud. The company recently conducted a Linode-commissioned study on NVMe-based block storage that looked at major and alternative cloud providers, including Linode, AWS, Azure, GCP, Vultr, and DigitalOcean. The study found that Linode continues to offer better performance and value for money, beating the biggest player in the industry, AWS.

“Linode’s new NVMe-based block storage is extremely fast,” said Mike Jung, CEO, Cloud Spectator. “It offered the highest read performance, random read performance, as well as random write performance across the board. We also found that with some of the larger providers, like Amazon, the size of the provisioned volume will impact performance, but there’s a limit of around 3,000 IOPS, at least with their EBS volumes.”

In this episode of Let’s Talk, Jung shared the findings of this study, and we also discussed the topics listed below:

  • What role does NVMe-based block storage play in cloud-centric workloads and is it really better than SSD-based storage?
  • Are there any challenges while embracing NVMe-based block storage as compared to SSDs?
  • What are the things that the company was looking for when it conducted this survey?
  • Jung shares with us some of the key survey findings.
  • We have noticed that Linode often stays ahead of competitors. Can you share some insights? What secret sauce do they have?
  • Jung talks about specific use cases where NVMe-based block storage is a better solution over SSDs.


Swapnil Bhartiya: Hi, this is your host Swapnil Bhartiya, and welcome to another episode of Let’s Talk. And today we have with us, once again, Mike Jung, CEO of Cloud Spectator. Mike it’s great to have you back on the show.

Mike Jung: Thanks for having me again.

Swapnil Bhartiya: This time you have conducted a study on NVMe-based block storage. Before we talk about this study in detail, I want to know from you, if you can share your insights: what role is NVMe-based block storage playing in cloud-centric workloads, and is it really better than SSD-based storage?

Mike Jung: Yeah, so we’re finding that NVMe is kind of the next-generation interface for storage, right? I mean, when you look at the evolution of storage, we’ve gone from hard drives, spinning disks, and then we’ve gone to SSDs, but those were all still using the SATA protocol. NVMe is kind of getting around that. Storage is really no longer limited by that older protocol.

It’s going to be the future of storage performance, and we’re already seeing many manufacturers making devices that take advantage of that higher-bandwidth PCIe interface. If you compare a really fast SATA SSD against NVMe at the consumer level, they may offer similar performance, but we’re finding that in the cloud, NVMe performance really seems to excel and shine. Think of it as the next generation of your standard SSD storage.

Swapnil Bhartiya: Excellent. Do you see any kind of roadblocks or challenges in terms of adoption of NVMe-based storage? You talked about the interface: the SATA interface is very popular, and it’s on all the motherboards and even the expansion slots. Is there any kind of hurdle that you see in the adoption?

Mike Jung: I mean, it is kind of a shift in, I guess, the way you’re building a server. A lot of people are used to using the typical SATA ports or RAID cards. NVMe is a little bit different; I believe they do make some RAID cards for NVMe devices. It’s just a different format. Otherwise, price-wise, you’ll often find that NVMe SSDs can cost just about the same as normal SATA-based SSDs, so I don’t think price is a concern there. It’s just a different format, and I don’t think that’s a huge barrier. A lot of cloud providers are easily getting around it.

Swapnil Bhartiya: Excellent. Now let’s talk about the study. Tell us, what were you looking for when you conducted this study? And what was the criteria for the companies that you looked at?

Mike Jung:

Yeah. Basically, the companies we looked at were AWS, Azure, Linode, of course, DigitalOcean, and Vultr; those were the ones we covered in the study. And the criteria was to get a feel for, “Okay, of all these providers, let’s look at their fastest storage tier, whatever that might be, NVMe or not, and let’s see how that compares to Linode’s recently released NVMe block storage.”

We were really just looking to pit each provider’s fastest block storage against each other, and then see where NVMe storage, or Linode’s storage, comes into play. That was the criteria. We tested out the storage on one-CPU, smaller, shared-environment VMs.

And then we focused on a larger four-CPU, eight-gig-RAM group that is generally geared more toward compute-optimized, dedicated kinds of workloads. We tested two different storage sizes for both of the VM sizes, to see whether the size at which you provision the block storage volume impacts performance. Meaning, if I have a hundred-gig drive attached versus a 500-gig one, do we see any changes in performance, or is it exactly the same across the board? This recent report was really focused mainly on block storage.
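The test matrix Jung describes (two VM shapes, two volume sizes, random read and random write patterns) could be sketched with fio, the standard storage benchmarking tool. The sketch below just generates fio command lines for that matrix; the specific flags, queue depth, and volume sizes are illustrative assumptions, not Cloud Spectator’s actual test configuration.

```python
# Illustrative sketch of a benchmark matrix like the one described above.
# The fio parameters and volume sizes are assumptions, NOT the actual
# configuration used in the Cloud Spectator study.

def fio_command(pattern: str, volume_size_gb: int, device: str = "/dev/sdc") -> str:
    """Build an fio command line for one cell of the test matrix."""
    return (
        f"fio --name={pattern}-{volume_size_gb}g "
        f"--filename={device} --rw={pattern} "
        "--bs=4k --iodepth=32 --direct=1 "
        "--runtime=60 --time_based --output-format=json"
    )

# Two volume sizes, random read and random write, run on each VM shape.
matrix = [
    fio_command(pattern, size)
    for size in (100, 500)            # e.g. 100 GB vs 500 GB volumes
    for pattern in ("randread", "randwrite")
]

for cmd in matrix:
    print(cmd)
```

Running each cell against the same VM shape on every provider, and comparing the reported IOPS across the two volume sizes, is what reveals whether provisioned size affects performance.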

Swapnil Bhartiya: And what were some of your key findings? Can you share those with us?

Mike Jung: Yeah. The main takeaways from the report and from the study were that Linode’s new NVMe block storage is extremely fast. It offered the highest read performance, random read performance, as well as random write performance across the board. We also found that with some of the larger providers, like Amazon, the size of the provisioned volume will impact performance, but there’s a limit of around 3,000 IOPS, at least with their EBS volumes.

With Linode, and even with some of the other smaller providers, the block volume size doesn’t really seem to have an impact on the performance you get, which is great: if you don’t need a ton of space but just need a really fast device, you’re going to get the best bang for your buck. Linode not only did the best in terms of performance, but when you factor in the performance you get per dollar spent, especially with block storage, Linode really excels; they offer the most read and write IOPS per dollar spent.
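The “IOPS per dollar” value metric Jung refers to is straightforward to compute from a measured IOPS figure and a volume’s monthly price. The numbers below are hypothetical placeholders for illustration only, not figures from the Cloud Spectator report.

```python
# Value metric from the study: IOPS delivered per dollar of monthly spend.
# The IOPS and price figures below are hypothetical placeholders,
# NOT numbers taken from the Cloud Spectator report.

def iops_per_dollar(measured_iops: float, monthly_price_usd: float) -> float:
    """Higher is better: how many IOPS each dollar of spend buys."""
    return measured_iops / monthly_price_usd

# Hypothetical comparison of two same-priced volumes, one of which
# caps out at a fixed IOPS limit regardless of volume size.
example = {
    "uncapped_volume": iops_per_dollar(measured_iops=50_000, monthly_price_usd=10.0),
    "capped_volume": iops_per_dollar(measured_iops=3_000, monthly_price_usd=10.0),
}
print(example)
```

At equal prices, the metric reduces to raw IOPS, which is why a fixed IOPS cap on a provider’s volumes hurts its value score so much.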

So those are the main takeaways from this recent study.

Swapnil Bhartiya: We have found, and the fact remains, that Linode stays ahead of competitors. Can you share some insights? What do they do? Do they optimize things, or is their service just that great? Because they have beaten other players in previous studies as well. What is the secret sauce there?

Mike Jung: Even before I started working with them at Cloud Spectator, I noticed Linode has always tended to offer pretty solid performance at a low price. And since I started working with them, we’ve done more than a few reports and studies now, and they are constantly assessing the industry as a whole. They want to know, “How do we stack up compared to what everyone else is offering?” They’re also constantly looking at the latest and greatest hardware, whether that’s the new AMD CPUs or moving over to NVMe block storage. They’re constantly assessing, testing, seeing where they stack up, and then making improvements.

That’s, I think, why we constantly see Linode do extremely well in a lot of these studies: they’re always trying to be the best, especially when it comes to high performance at a low cost, and Linode is able to do that. They are a smaller provider, but they’re not so large that they have to cater to everything. They’re really able to specialize and focus on performance and value, and that’s why we tend to see them do extremely well time and time again.

Swapnil Bhartiya: If I ask you from just a block storage perspective: what are the use cases or workloads where you would advise users on Linode to move towards NVMe? Because it may not make sense in every case; it may make sense only in certain cases. When should a user lean towards NVMe-based storage versus SSD-based?

Mike Jung: Yeah. In the case of Linode, NVMe is currently in two of their data centers; I know they’re rolling it out, and everywhere else is just normal SSD. If you have any kind of workload that requires really high performance, that’s doing a lot of reads and writes that aren’t sequential, for example, a database: you need persistent storage, right? You need something where, “When I save this data, it’s going to be there if I shut off the VM, but I need it to be higher performance.” That’s a perfect example of where you’d want to use the fastest type of storage.

If your workload is not constantly doing a bunch of small, random reads and writes, for example, if you’re streaming video or something like that, NVMe storage will still work, but that’s where you get the advantage of using, say, object storage. It isn’t necessarily going to give you the highest performance, but for those workloads you don’t necessarily need it. For databases, or any application that’s doing a lot of reading and writing to permanent storage, NVMe storage is really the best way to go.

Swapnil Bhartiya: Mike, thank you so much for taking the time out today to talk about this study that you folks did. The key insight for me was about the use cases where you should use each type of storage, because sometimes we tend to use one thing for everything, even when it’s not optimized. Thanks for sharing those insights as well. And as usual, I would love to have you back on the show. Thank you.

Mike Jung: Yeah. Thanks for having me.
