
Section: Enterprise-Grade Platform Without The Cost And Hassle


Guest: Dan Bartholomew (LinkedIn)
Company: Section (Twitter)

Section is a sophisticated, distributed, general-purpose workload system that provides dynamic orchestration to ensure applications run at the right place and time. In this episode of TFiR: Let’s Talk, Swapnil Bhartiya sits down with co-founder and CTO Dan Bartholomew to talk about the company’s backstory, focus areas, and the benefits it provides to companies of all sizes.

Key highlights of this video interview:

  • How Section came to be: When Bartholomew and co-founder Stewart McGrath were building large-scale ecommerce websites, they identified a disconnect between the cloud and content delivery networks (CDNs). They set out to build a system with the networking properties of a CDN, in that it is massively distributed, while letting people choose what software actually runs inside each point of presence (PoP). They ended up with a large, distributed compute platform that lets people bring whatever software they like in the form of a Docker container and have it run across all of the PoPs inside Section’s network. Today, Section is a distributed, general-purpose workload system.
  • Last month, Section made two big announcements: 1) it enhanced its global platform to make it easier to deploy and scale a Mastodon server, and 2) it added persistent volume (PV) storage support to enable distributed deployment of Mastodon and other complex workloads.
  • One of Section’s strengths is a core technology it calls the adaptive edge engine, which dynamically moves Docker containers around the network depending on where the users are. Instead of deploying containers in one, two, or ten data centers and keeping them always on, Section continuously monitors where users are coming from and then finds the places inside its network that best suit those users. This means lower costs for customers, because they don’t need to run containers in locations with a sub-optimal cost-benefit ratio (see the sketch after this list).
  • Bartholomew says a key motivator for designing this technology is the growth they see in the edge computing market, with many, many more locations coming online. What is needed is a computational approach to deciding where and when to run the software, and that is exactly the problem the adaptive edge engine tackles.
  • The market segment Section is currently focused on is operating distributed compute networks for customers that build platforms as a service. Trend: companies that create GraphQL APIs are looking to improve end-user performance by becoming distributed, to overcome some of the speed-of-light problems. Pain points: they don’t have the operations capability to run a large number of clusters; some of the advanced networking capabilities needed to build this kind of system are hard to obtain; and they would have to take on a big innovation burden, solving many problems around networking and cost management. Solution: Section gives them the benefits of all of this distribution without having to do the operations work, so customers can innovate to stay ahead without the burden of training a team for it.
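
To make the placement idea behind the adaptive edge engine more concrete, here is a minimal, hypothetical sketch of location-aware workload placement. The PoP names, latency figures, cost model, and greedy coverage heuristic below are all assumptions made for illustration; the interview does not describe Section’s actual algorithm.

```python
# Hypothetical sketch of location-aware workload placement, loosely in the spirit
# of the "adaptive edge engine" described above. PoP names, latencies, costs, and
# the greedy heuristic are invented for illustration only.

from dataclasses import dataclass


@dataclass
class PoP:
    name: str
    monthly_cost: float   # assumed cost of keeping a container running here
    latency_ms: dict      # estimated latency from each user region to this PoP


def place_workload(pops, traffic, latency_budget_ms):
    """Greedily pick PoPs until every region with traffic is served within the
    latency budget, preferring cheap PoPs that cover a lot of demand."""
    uncovered = {region for region, requests in traffic.items() if requests > 0}
    chosen = []
    while uncovered:
        def score(pop):
            # Requests this PoP could serve within budget, per unit of cost.
            covered = sum(
                traffic[r] for r in uncovered
                if pop.latency_ms.get(r, float("inf")) <= latency_budget_ms
            )
            return covered / pop.monthly_cost if covered else 0.0

        best = max(pops, key=score)
        if score(best) == 0.0:
            break  # no remaining PoP can serve the leftover regions within budget
        chosen.append(best.name)
        uncovered = {
            r for r in uncovered
            if best.latency_ms.get(r, float("inf")) > latency_budget_ms
        }
    return chosen


if __name__ == "__main__":
    pops = [
        PoP("us-east", 100.0, {"us": 25, "eu": 90, "apac": 180}),
        PoP("eu-west", 120.0, {"us": 95, "eu": 20, "apac": 150}),
        PoP("ap-southeast", 140.0, {"us": 170, "eu": 160, "apac": 30}),
    ]
    # Requests per region observed over the last monitoring window (made up).
    traffic = {"us": 50_000, "eu": 30_000, "apac": 4_000}
    print(place_workload(pops, traffic, latency_budget_ms=60))
    # -> ['us-east', 'eu-west', 'ap-southeast']
```

Re-running a placement pass like this as traffic shifts is one plausible way to keep containers only in locations that currently earn their cost, which is the cost benefit the interview attributes to the adaptive edge engine.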

This summary was written by Camille Gregory.