Sparrow: Scalable Scheduling for Sub-Second Parallel Jobs


Background

In the previous posts on datacenter scheduling, most of the focus has been on long-running services or batch jobs that run from minutes to days. Sparrow targets a different use case: scheduling jobs that run interactive (millisecond-scale) workloads. For such jobs it's unacceptable for a scheduler to spend seconds on placement, which would effectively double or triple the runtime. Centralized schedulers can't maintain low latency overhead at millions of tasks, mostly because they rely on central state and every decision goes through a single node that becomes the bottleneck (remember Omega?).


Sparrow 

Sparrow is a stateless, completely decentralized scheduler: each individual scheduler can make placement decisions without coordination. Compare this to Omega, which decentralized the scheduling decision logic but still has a central node holding the source of truth. A fully decentralized scheduler can scale linearly, at the cost of possibly making suboptimal scheduling decisions. Completely decentralized scheduling has already been used for web server load balancing, where placement is typically based on power-of-two-choices sampling (reference): randomly pick two servers and choose the less loaded one.

However, as the authors point out, naive power-of-two placement suffers from race conditions and performs poorly as the number of tasks in a parallel job increases (since the job has to wait for its last task to finish). Sparrow therefore employs two techniques to improve upon power of two: batch sampling and virtual reservations.

Batch sampling

The simple way to implement power-of-two sampling is, for each task, to randomly select two workers, send an RPC probe to each to get its current load, and choose the less loaded one. The downside of per-task sampling is that it can get unlucky: both probes may land on heavily loaded machines while lightly loaded machines sit idle. This still results in high tail latency, as it becomes difficult to find free machines. Batch sampling improves over per-task sampling by sharing probe information across all tasks in a job: the scheduler probes workers for the job as a whole and places its tasks on the least loaded workers among all probes. Results from the paper show 10%-50% response time improvements. The tradeoff is extra load on the scheduler, up to 50% when the cluster is 90% loaded. Batch sampling also can't be used when per-task constraints are specified.
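To make the difference concrete, here is a minimal sketch of the two sampling strategies, assuming a `queue_len(worker)` helper that returns a probed worker's queue length (the helper and the worker representation are my own simplification, not Sparrow's code):

```python
import random

def per_task_sampling(workers, num_tasks, queue_len, d=2):
    """Probe d workers independently per task; pick the shorter queue each time."""
    placements = []
    for _ in range(num_tasks):
        probes = random.sample(workers, d)
        placements.append(min(probes, key=queue_len))
    return placements

def batch_sampling(workers, num_tasks, queue_len, d=2):
    """Probe d * m workers once for a job of m tasks, then place the m tasks
    on the least loaded workers among all probes."""
    probes = random.sample(workers, min(d * num_tasks, len(workers)))
    return sorted(probes, key=queue_len)[:num_tasks]

# Toy usage: 100 workers with random queue lengths, placing a 4-task job.
loads = {w: random.randint(0, 5) for w in range(100)}
print("per-task:", per_task_sampling(list(loads), 4, loads.get))
print("batch   :", batch_sampling(list(loads), 4, loads.get))
```

Batch sampling helps because one unlucky pair of probes no longer dooms a task; the job only needs enough lightly loaded workers among all of its probes.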

Virtual Reservation

There are a few problems with sampling that virtual reservations aim to solve. One is that load is usually estimated from queue length, but task runtimes vary, so queue length can be a poor predictor of how long a new task will actually wait. Another is that two independent schedulers might concurrently decide to place work on the same idle node, a race that leads to worse placement than either intended.

Therefore, instead of returning load information for a probe, each worker enqueues a reservation in its task queue. When the reservation reaches the front of the queue, the worker sends an RPC to the scheduler requesting a task. The scheduler assigns tasks to the first workers that reply and tells the remaining workers that all tasks have already been placed. This addresses the problems above (assuming the network is stable), since tasks run on whichever sampled workers become available soonest. The paper claims this improves response time by 35% and brings placement within 14% of optimal.
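A rough sketch of how this late binding changes the protocol, with in-process queues standing in for the probe and callback RPCs (the `Scheduler` and `Worker` classes below are my own illustration, not Sparrow's implementation):

```python
from collections import deque

class Scheduler:
    """Hands tasks to the first workers whose reservations reach the front of
    their queues; later callbacks are told everything is already placed."""
    def __init__(self, tasks):
        self.pending = deque(tasks)

    def get_task(self):
        return self.pending.popleft() if self.pending else None  # None = "all placed"

class Worker:
    def __init__(self, name):
        self.name = name
        self.queue = deque()  # FIFO of reservations

    def enqueue_reservation(self, scheduler):
        self.queue.append(scheduler)

    def run_next(self):
        scheduler = self.queue.popleft()
        task = scheduler.get_task()      # the "RPC" back to the scheduler
        if task is None:
            return None                  # stale reservation, nothing to run
        print(f"{self.name} runs {task}")
        return task

# Probe 4 workers for a 2-task job; only the first 2 to free up get tasks.
sched = Scheduler(["task-0", "task-1"])
workers = [Worker(f"w{i}") for i in range(4)]
for w in workers:
    w.enqueue_reservation(sched)
for w in workers:  # the order in which workers reach their reservations
    w.run_next()
```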

The tradeoff is that a worker sits idle between sending the RPC to the scheduler and receiving the response, which the paper quantifies as roughly a 2% efficiency loss. The paper also notes that the network overhead of a virtualized environment amplifies this, since reservations leave workers waiting longer.

Notes

Exploring fully decentralized schedulers is not just relevant for interactive queries; it also matters for lambdas, where tasks may run for only milliseconds. The approach becomes a challenge, however, when certain tasks require more centralized knowledge (interference awareness, etc.). In the future I'll be covering hybrid scheduler designs (centralized + decentralized), and it will be interesting to see how their tradeoffs compare.

Omega: flexible, scalable schedulers for large compute clusters


This post is part of the datacenter scheduling series. Here I'll be covering Omega, a paper published by Google back in 2013 about their work to improve their internal container orchestrator.

Background

Google runs mixed workloads in production for better utilization and efficiency, and it is the production job scheduler's responsibility to decide where and when jobs and tasks get launched. However, the scheduler has become more complicated and harder to rewrite as clusters and workloads keep growing. Omega is a new design that aims to provide a scheduling model that lets the scheduling implementation scale much more simply.


There are two common types of scheduler architecture today. One is monolithic: a single centralized scheduler holds the full state of the cluster and makes all scheduling decisions. The other is two-level, where a resource manager hands out partitions of the cluster's resources to separate scheduler "frameworks", and each framework makes local decisions over its own resources. The problem with a monolithic scheduler is that the central scheduler can become a bottleneck and grow complex to maintain as resource demands and scheduling policies scale; the problem with a two-level scheduler is that a job desiring optimal placement can't get it, since no single framework has visibility into the entire cluster.

As workloads and compute become more heterogeneous, both the quality and the latency of placement decisions matter. For example, if a short interactive job takes more than a few seconds to schedule, scheduling has already cost more time than the job itself. The ability to add and maintain such policies over the long term is also important.

Omega

Omega borrows from the database community and introduces a scheduler architecture that is neither two-level nor monolithic: a shared-state architecture. There are multiple schedulers, just like in two-level scheduling, but there is no central allocator, and each scheduler has access to the full state of the cluster instead of a partition of it. The full state is frequently synced to each scheduler; a scheduler makes a local placement decision and attempts to commit the change back to the shared copy with an atomic commit. If there are conflicting updates, the scheduler's transaction is aborted and it must retry against a fresh view of the cluster state. The central component's only responsibilities are to persist state, apply commits, and validate requests. There is no explicit fairness mechanism; Omega relies on each scheduler having local limits and on after-the-fact monitoring.
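A minimal sketch of that optimistic commit loop, assuming a single shared cell state guarded by a version counter (this is my illustration of the idea, not Omega's code, and real Omega also supports finer-grained, per-machine conflict detection):

```python
import copy

class CellState:
    """Shared source of truth: free resources per machine plus a version counter."""
    def __init__(self, free):
        self.free = dict(free)
        self.version = 0

    def snapshot(self):
        return self.version, copy.deepcopy(self.free)

    def try_commit(self, base_version, machine, demand):
        # Coarse-grained conflict check: abort if anyone committed since our
        # snapshot, or if the chosen machine no longer has room.
        if base_version != self.version or self.free[machine] < demand:
            return False
        self.free[machine] -= demand
        self.version += 1
        return True

def schedule(cell, demand, max_retries=10):
    """Each scheduler plans against its own snapshot and optimistically commits."""
    for _ in range(max_retries):
        version, view = cell.snapshot()
        candidates = [m for m, free in view.items() if free >= demand]
        if not candidates:
            return None
        choice = max(candidates, key=view.get)  # local policy: most free resources
        if cell.try_commit(version, choice, demand):
            return choice
        # Conflict with another scheduler: refresh the snapshot and retry.
    return None

cell = CellState({"m1": 8, "m2": 4})
print(schedule(cell, demand=6))  # -> m1
print(schedule(cell, demand=4))  # -> m2
```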

The interesting aspect of this design is that if a particular scheduler becomes a bottleneck, multiple instances of it can be launched with load balanced between them. This works until conflicts and the cost of synchronizing cluster state to all schedulers become the bottleneck, which can happen with hundreds of schedulers. For the purpose of scaling to tens of schedulers, the paper's validation through simulation seems sufficient.

Notes

Interestingly, a few years after Omega was published, the Borg paper stated that Borg's monolithic, single-scheduler design hadn't been a problem in terms of scalability, and Omega was never really adopted within Google.

We do see from the Borg paper that parts of Omega's design made it into Borg's single scheduler (separate threads working in parallel on shared state), though not the fully distributed form the paper described. The Omega paper also influenced other schedulers when it comes to optimistically scheduling workloads (Mesos, K8s, Nomad, Amazon ECS).

I believe there are more lessons from databases that apply to schedulers, and I hope to see more experiments like Omega soon.


Tetrisched: Space-Time Scheduling for Heterogeneous Datacenters


In this post I'll be covering Tetrisched, a scheduler that builds on alsched. To recap, alsched is a scheduler that lets users express soft constraints as utility functions. I'll skip the background, motivation, and alsched details since the previous post covers them.

Tetrisched is similar to alsched in that it also tries to maximize utility based on user-supplied utility functions. It differs by considering not just space (in other words, the amount of resources) but also time (when those resources are consumed).


Utility operators in alsched described utility purely in terms of resources; Tetrisched also incorporates the periods of time that influence utility. For example, a utility function could include a deadline and express a choice between 2 GPU-enabled servers that can complete the job in 2 time units, or running anywhere with a slower runtime of 3 units. The scheduler then combines all the expressions and turns each scheduling decision into a Mixed Integer Linear Program (MILP), with the solver configured to stop within 10% of the optimal solution (beyond which the returns diminish relative to the extra work).
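To make the space-time idea concrete, here is a toy rendering of that example (my own simplification; Tetrisched actually compiles a whole algebra of such expressions into a MILP rather than evaluating options one by one):

```python
from dataclasses import dataclass

@dataclass
class Option:
    """One way to run the job: what kind of machines and how long it takes."""
    machine_kind: str   # e.g. "gpu" or "any"
    num_machines: int
    duration: int       # time units to completion on this option

def utility(option, start_time, deadline, value=10):
    """Utility is earned only if the option finishes before the deadline,
    so the same allocation is worth different amounts at different times."""
    return value if start_time + option.duration <= deadline else 0

# 2 GPU servers finish in 2 units; running anywhere takes 3 units; deadline at t=3.
gpu_option = Option("gpu", num_machines=2, duration=2)
any_option = Option("any", num_machines=2, duration=3)

for opt in (gpu_option, any_option):
    for start in range(4):
        print(opt.machine_kind, "start at", start, "-> utility",
              utility(opt, start, deadline=3))
```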

Another important aspect of Tetrisched is plan-ahead: it considers multiple jobs' soft constraints and future placements, and decides whether a job should wait for its preferred resources or take alternative ones. This can be computationally intensive, so how far ahead it plans has to be limited, but according to the paper it can lead to 3x improvements. Without plan-ahead (as in alsched), soft constraints can actually perform worse than hard constraints at high load.


Tetrisched also introduces a wizard tool that helps translate user requirements into the utility functions fed into Tetrisched. The wizard supports various job types and knows how to compose a utility function for each type. For example, for an HDFS-type job it will automatically produce a utility function that prefers scheduling on HDFS storage nodes over non-HDFS nodes, gaining partial benefit even when only some tasks land there. Each job also specifies a budget (which can be based on priority or other values), its sensitivity to delay, desired times, and an optional penalty for dropping the job.

With plan-ahead and the wizard, Tetrisched brings better usability and better scheduling, especially under bursty and heavy load.

Notes

The most interesting aspect of this paper is adding the time dimension and the ability to express how it affects overall utility, so the scheduler can take it into account. By understanding deadlines in addition to the amount of resources batch jobs need, more utilization can be unlocked, especially in a shared cluster. This is still a largely unexplored space in container schedulers in the wild.

It will also be interesting to see a scheduler that considers deadlines for batch jobs alongside long-running jobs and can make tradeoffs between them, doing better than simply killing all batch jobs whenever long-running jobs need resources.

alsched: Algebraic Scheduling of Mixed Workloads in Heterogeneous Clouds


This paper is from SoCC 2012 and comes out of CMU.

Background

As compute resources (cloud or on-prem) become more heterogeneous, applications' resource and scheduling needs are also increasingly diverse. For example, deep learning with TensorFlow most likely runs best on GPU instances, and Spark jobs would like to be scheduled next to where their data resides. Most schedulers address this by allowing users to specify hard constraints. However, as in the examples above, these desires are often not mandatory; failing to meet them just results in less efficient execution. Treating preferred-but-not-required scheduling needs (soft constraints) as hard constraints can lead to low utilization (idle compute goes unused), but ignoring them also reduces efficiency.

Alsched

alsched provides the ability to express soft constraints when scheduling a job. This is a challenge in itself, as soft constraints can be quite complex once allocation tradeoffs and fallbacks enter the picture. For example, a user might prefer to colocate tasks in the same rack, but if that's not possible, schedule them anywhere else, and so on. To solve this, alsched uses composable utility functions, where each utility function maps a resource allocation decision to a utility value using different utility primitives. One example primitive is `linear n Choose k`, where the utility value grows linearly up to a maximum of k resources (e.g., preference for GPU instances grows linearly up to 4 instances, and giving more than 4 adds no further utility).

Users can then compose utility functions with operators like Min, Max, and Sum. As the sketch below illustrates, this is quite powerful: users can express both simple and more sophisticated needs.
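A toy sketch of what this composition might look like (my own simplified rendering, not alsched's API): each primitive maps a candidate allocation to a utility value, and operators combine them.

```python
def linear_n_choose_k(preferred, k, weight=1.0):
    """Utility grows linearly with the number of preferred resources granted,
    capped at k (getting more than k preferred resources adds nothing)."""
    def u(allocation):
        hits = len(set(allocation) & set(preferred))
        return weight * min(hits, k) / k
    return u

def sum_of(*fns):
    return lambda allocation: sum(f(allocation) for f in fns)

def max_of(*fns):
    return lambda allocation: max(f(allocation) for f in fns)

# Prefer up to 4 GPU nodes, but landing in rack "r1" still has some value.
gpu_nodes = ["g1", "g2", "g3", "g4", "g5"]
rack1_nodes = ["r1-a", "r1-b", "r1-c"]
utility = max_of(
    linear_n_choose_k(gpu_nodes, k=4, weight=10),   # best case: GPU nodes
    linear_n_choose_k(rack1_nodes, k=4, weight=4),  # fallback: same rack
)

print(utility(["g1", "g2", "r1-a"]))  # 5.0 (2 of 4 preferred GPUs granted)
print(utility(["r1-a", "r1-b"]))      # 2.0 (fallback path)
```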

The alsched scheduler takes the composed utility function as input along with the job, and every time it needs to allocate resources it scores the candidate allocations either optimally or greedily.

In their evaluation they compared hard constraints only, no constraints, and soft constraints, using jobs that either perform much better when colocated or can simply run anywhere in parallel. The results show that hard constraints yield the slowest runtimes when the locality speedup doesn't matter much, since jobs must wait for scarce resources to become available. In this setup, where opportunities for optimal placement are scarce, soft constraints weigh both resource availability and potential speedup; the biggest difference in their experiments is a 10x reduction in runtime.

For future work, the authors note that it's hard to expect an average user to construct a good composite utility function tree; they would like to see trees generated automatically from the application, while still letting power users define custom ones.

Notes

Constraints, as this paper shows, are an essential way for schedulers to balance resource availability and efficiency. However, I've also seen that in practice it's hard for users to choose the right constraints, especially as applications and resources change. Generating declarative soft constraints (with tradeoffs) and allowing multiple soft constraints to work together would be an interesting exercise.

Mesos allows users to codify this logic in their frameworks, and perhaps this could also become shared framework/scheduler logic that simplifies frameworks.


Jockey: Guaranteed Job Latency in Data Parallel Clusters

In the next post of the datacenter scheduling series, I'll be covering the paper "Jockey: Guaranteed Job Latency in Data Parallel Clusters", a joint work between Microsoft Research and Brown University.

Background

Big data frameworks (MapReduce, Dryad, Spark) running in large organizations are often shared among multiple groups and users, and jobs often have strict deadlines (e.g., finish within 1 hour). Ensuring these deadlines are met is challenging because jobs have complex operations, varying data patterns, and failures at multiple levels (tasks, nodes, and the network). To improve cluster utilization, jobs are also multiplexed within the cluster, which creates even more variability. All of this leads to highly variable job performance. Dedicated clusters for jobs with strict SLOs provide the best isolation, but utilization then becomes very low.

Jockey

Previous approaches to this problem introduce either priorities or weights among jobs as inputs to the scheduler. However, priorities are not expressive enough: many jobs end up simply high or low priority, and always steering resources to high-priority jobs can harm the jobs with looser SLOs. Weights are also difficult, since users have a hard time deciding how to weigh their jobs, especially when the cluster is noisy.

Jockey instead lets users provide a utility curve, expressed as a piecewise linear function, which the system maps to weights automatically. The system's goal is then to maximize utility while minimizing resources, by dynamically adjusting each job's resource allocation.
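For illustration, a piecewise linear utility curve might look like the sketch below; the breakpoints here are made up, not taken from the paper.

```python
def piecewise_linear(points):
    """Build a utility curve from (completion_time, utility) breakpoints,
    interpolating linearly between them and clamping outside the range."""
    points = sorted(points)
    def utility(t):
        if t <= points[0][0]:
            return points[0][1]
        if t >= points[-1][0]:
            return points[-1][1]
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            if x0 <= t <= x1:
                return y0 + (y1 - y0) * (t - x0) / (x1 - x0)
    return utility

# Hypothetical job: full value up to its 60-minute deadline, decaying to a
# penalty of -20 if completion drags past 90 minutes.
u = piecewise_linear([(60, 100), (90, -20)])
print(u(45), u(75), u(120))  # 100, 40.0, -20
```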

For Jockey to allocate resources correctly over time, it needs to estimate how much is required to meet a job's deadline at each stage. Jockey uses performance events and data from prior executions to build a resource model of each job's stages and requirements. Using this model, Jockey runs thousands of offline simulations per job to build a resource allocation table that captures the relationship between allocated resources and job completion time.

At runtime Jockey runs a control loop that periodically examines the job's current progress using a built-in progress indicator and determines whether the job is progressing as expected. If the job is running slower than expected (likely due to noisy neighbors or failures), Jockey uses the model to allocate it more resources; if it is running faster, Jockey can reduce the allocation. When the model mispredicts the required resources, possibly because of a significant change in input size, Jockey will attempt to rerun the simulator at runtime or fall back to plain weighted sharing.
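A sketch of what such a control loop might look like, assuming a precomputed allocation table keyed by coarse (progress, time-remaining) buckets and a `job` object exposing a progress indicator; these names and the interface are hypothetical, not Jockey's actual API:

```python
import time

def control_loop(job, alloc_table, deadline, interval_s=30, fallback_allocation=10):
    """Periodically compare observed progress against the offline model and
    adjust the job's slot allocation; fall back to static weighted sharing
    when the table has no entry for the observed state."""
    while not job.finished():
        progress = job.progress_fraction()          # built-in progress indicator
        minutes_left = int((deadline - time.time()) // 60)
        allocation = alloc_table.get((round(progress, 1), minutes_left))
        if allocation is None:
            # Model misprediction (e.g. input size changed): re-simulate or fall back.
            allocation = fallback_allocation
        job.set_allocation(allocation)
        time.sleep(interval_s)
```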


According to the paper, across 800 executions of 21 production jobs, Jockey missed only one deadline, and only by about 3%. It also reduced the gap between a job's shortest and longest relative runtime from a 4.3x spread down to 1.4x.

There are more details on how the simulator works, the progress indicator, and the evaluation in the paper.

Notes

With container orchestrators like Mesos and K8s, users are launching increasingly mixed workloads (batch plus long-running, etc.) in the same cluster. The problems described in this paper, such as noisy neighbors and variance in job execution, remain relevant in the Docker era. Interestingly, if we can similarly learn from prior executions of predictable batch jobs (e.g., Spark, Hadoop) to build a resource allocation model, that information could feed the framework controlling resource allocation to meet deadlines while maximizing utilization. However, coming up with a good simulator for these systems is pretty challenging. The paper suggests a machine learning approach could be possible, which will hopefully be explored further.

Delay Scheduling: A Simple Technique for Achieving Locality and Fairness in Cluster Scheduling

As part of the datacenter scheduling series, the next paper I am covering is "Delay Scheduling: A Simple Technique for Achieving Locality and Fairness in Cluster Scheduling".

This is a joint paper by AMPLab and Facebook. The background is again Hadoop/Dryad-like systems running MapReduce-style workloads with a mix of long batch jobs and shorter interactive queries. While Facebook saw the benefits of consolidating workloads and users onto the same cluster, consolidation also started to hurt query performance, so the Hadoop Fair Scheduler was designed and deployed to balance fairness and performance. As discussed in previous papers, there is a conflict between data locality and fairness: maximizing one can hurt the other. When new jobs are submitted and fairness must be enforced, a scheduler has two options: preempt existing tasks or wait for running tasks to finish. Preempting tasks enforces fairness right away, but it wastes work already performed (modulo the effect of checkpointing).

This paper proposes delay scheduling: when the scheduler is choosing a node for the head-of-line job and the available node doesn't have that job's data locally, it lets the job wait and considers subsequent jobs instead. If a job has been skipped long enough, the scheduler relaxes its locality preference from node level to rack level, and eventually allows non-local tasks when rack locality isn't possible. The two key characteristics of the Yahoo/Facebook workloads the authors tested are that most jobs are small and that data locality improves job performance quite noticeably (2x faster in their workloads).
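In rough pseudocode, the core loop looks like the sketch below (a simplification of the paper's algorithm; the job methods and thresholds are illustrative, not taken from Hadoop's implementation):

```python
def assign_task(free_node, jobs, skip_count, node_threshold=3, rack_threshold=6):
    """When free_node frees up, walk jobs in fair-share order and pick a task,
    relaxing locality (node -> rack -> any) the longer a job has been skipped."""
    for job in jobs:  # jobs sorted by fairness deficit, head-of-line first
        skipped = skip_count.get(job.id, 0)
        if job.has_node_local_task(free_node):
            skip_count[job.id] = 0
            return job.pop_task(level="node", node=free_node)
        if skipped >= node_threshold and job.has_rack_local_task(free_node):
            skip_count[job.id] = 0
            return job.pop_task(level="rack", node=free_node)
        if skipped >= rack_threshold:
            skip_count[job.id] = 0
            return job.pop_task(level="any", node=free_node)
        skip_count[job.id] = skipped + 1  # let this job wait; consider the next one
    return None  # leave the slot idle for now
```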

The other improvement, aimed at fairness among users who submit a variety of jobs, is hierarchical resource pools: different users can choose different scheduling policies within their pool (FIFO, fair, etc.) while each pool gets its own fair share. The paper also offers suggestions for removing node hotspots (increasing replication for hot data) and more.

With these two improvements, the paper evaluates existing Yahoo/Facebook workloads and shows varying degrees of improvement across different task lengths and workload types.

Delay scheduling works well in this environment because most tasks are short compared to their jobs and, thanks to data replication, there are multiple candidate locations to run any given task.

Notes

Delay scheduling is a very simple concept, but it can be an effective scheduling technique under the conditions described in this paper. It can also be one of the scheduling actions a generic scheduling framework can take (e.g., Quincy).

In fact this is a fairly common approach in Mesos schedulers, where a framework often opts to wait a while for better offers when a more ideal placement matters for a particular task. An interesting way to help frameworks decide whether to "delay" scheduling would be to expose offer rates and an estimate of the likelihood of receiving an offer matching the locality preference before they choose to wait or run.



Quincy: Fair scheduling for Distributed Computing Clusters

For the next paper in the datacenter scheduling series, we'll cover Quincy, published by Microsoft at SOSP 2009. This is also the paper that Firmament (http://firmament.io/) is primarily based on.

http://www.sigops.org/sosp/sosp09/papers/isard-sosp09.pdf

There is a growing number of data-intensive jobs (and applications), and individual jobs can take anywhere from minutes to days. This creates a demand for some notion of fairness in the cluster, so that a single job cannot monopolize it; it's also important that ensuring low latency for short jobs doesn't hurt the overall throughput of the system. Another design consideration stated in the paper is that maintaining high network bandwidth becomes expensive as the cluster grows, and jobs can generate heavy cross-cluster traffic, so reducing network traffic by preferring data locality becomes a goal. However, fairness and locality often conflict: obtaining the best locality for a job often requires delaying its placement, which can sacrifice fairness.

Quincy is a generic scheduling framework that aims to provide a flexible and powerful way to schedule multiple concurrent jobs; in the paper, Dryad is the data processing system generating those jobs. Since scheduling involves competing dimensions (locality vs. fairness, etc.), Quincy provides a way to express the cost of each choice so that a global cost function can determine the best action.

Quincy is a flow-based scheduling framework: the underlying data structure that captures all the decisions is a graph encoding the structure of the network (racks, nodes, switches) as well as the set of waiting tasks and their locality data. If we can quantify the cost of each scheduling decision (the cost of preempting a task, the cost of worse data locality, etc.), then we can build a graph representing these choices and use a solver to find the best assignment.

[Figure: example Quincy flow graph]

In the flow graph above, the boxes on the left are job roots and worker tasks submitted to the scheduler, while every action (leaving a task unscheduled, preemption) and assignment (cluster, rack, computer) is encoded on the right. Edge costs are updated on every salient event (task assigned, worker added) or on a timer (e.g., the cost of keeping a task waiting in the queue). Other costs can also be computed and configured, such as the cost of transferring data across different levels of the topology.
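To give a feel for the structure, here is a tiny, heavily simplified version of such a graph (two tasks, two machines, one cluster aggregate, and an "unscheduled" node) solved with networkx's min-cost flow; the costs are made up, and real Quincy graphs are far richer:

```python
import networkx as nx

G = nx.DiGraph()

# Each task is a source of one unit of flow; the sink absorbs all of it.
tasks, machines = ["t0", "t1"], ["m0", "m1"]
for t in tasks:
    G.add_node(t, demand=-1)
G.add_node("sink", demand=len(tasks))

# Cheap edges to the machine holding each task's data; pricier edges via the
# cluster aggregate ("run anywhere") or the unscheduled node ("keep waiting").
G.add_edge("t0", "m0", capacity=1, weight=1)
G.add_edge("t1", "m1", capacity=1, weight=1)
for t in tasks:
    G.add_edge(t, "cluster", capacity=1, weight=5)
    G.add_edge(t, "unsched", capacity=1, weight=20)
for m in machines:
    G.add_edge("cluster", m, capacity=1, weight=0)
    G.add_edge(m, "sink", capacity=1, weight=0)
G.add_edge("unsched", "sink", capacity=len(tasks), weight=0)

flow = nx.min_cost_flow(G)
for t in tasks:
    print(t, "->", [dst for dst, f in flow[t].items() if f > 0])
# Both tasks take their data-local machine here; changing the weights
# (e.g. making locality expensive) changes the assignment the solver picks.
```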

(The Firmament website has a great visualization that you can play around with: http://firmament.io/#howitworks)

The paper reports that solving the min-cost flow for 250 machines took about 7 ms on average and at most 60 ms; for 2,500 machines running 100 concurrent jobs, the highest scheduling time was about 1 second. Further improvements could be made by solving the flow incrementally.

In the evaluation, the paper used 250 machines and tried four fairness policies (with or without preemption, with or without fair sharing), measuring the scheduler's effectiveness via fairness, job completion time, and data transfer. For their workloads and setup, turning on both fair sharing and preemption proved most effective.

Final thoughts

While this paper dates back to 2009 and emphasizes batch workloads and data locality, modeling scheduling as a min-cost flow problem is very powerful; as Firmament shows, the flexible cost model lets different workloads and configurations produce different scheduling decisions.

With work already ongoing to integrate Firmament into K8s and Mesos, it's still unclear to me how Firmament's costs and configuration are best tuned across different cluster environments. But this is also an opportunity: a flexible scheduling model like Quincy's flow graph provides a powerful abstraction for future improvements.