alsched: Algebraic Scheduling of Mixed Workloads in Heterogeneous Clouds


This paper was published at SoCC 2012 by researchers at CMU.


As compute resources (cloud or on-prem) become more heterogeneous, applications' resource and scheduling needs are also becoming more diverse. For example, deep learning with TensorFlow most likely runs best on GPU instances, and Spark jobs would like to be scheduled next to where their data resides. Most schedulers address this by letting users specify hard constraints. However, as these examples illustrate, such desires are often not mandatory; not meeting them just results in a less efficient execution. Treating preferred-but-not-required scheduling needs (soft constraints) as hard constraints can lead to low utilization (idle compute cannot be used), but ignoring them entirely also leads to lower efficiency.


Alsched provides the ability to express soft constraints when scheduling a job. This is a challenge in itself, as soft constraints can become quite complex once different allocation trade-offs and fallbacks are involved. For example, a user might prefer colocating tasks in the same rack, but if that's not possible, scheduling anywhere else is acceptable. To solve this, alsched uses composable utility functions, where each utility function maps a resource allocation decision to a utility value using different utility primitives. One example primitive is `linear n-Choose-k`, where the utility value grows linearly up to a maximum of k resources (e.g., the utility of scheduling on GPU instances grows linearly up to 4 instances, and giving more than 4 adds no further benefit).
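A minimal Python sketch of such a primitive (the function name, signature, and the simple normalization are my own illustration, not the paper's API):

```python
def linear_n_choose_k(allocated, k, max_utility=1.0):
    """Utility grows linearly with the number of allocated resources,
    capped at k: resources beyond k add no additional utility."""
    return max_utility * min(allocated, k) / k

# e.g., preferring up to 4 GPU instances:
linear_n_choose_k(2, k=4)  # -> 0.5
linear_n_choose_k(6, k=4)  # -> 1.0 (no extra credit beyond 4)
```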

Users can then compose utility functions with operators like min, max, and sum. This is quite powerful: users can express both simple and quite sophisticated needs.

The alsched scheduler takes the composed utility function as input along with a job; whenever the scheduler needs to allocate resources, it can score the different candidate allocations either optimally or greedily.
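As an illustration, composition and greedy evaluation might look like the following sketch (operator and leaf-utility names are my own; the paper's operators include min, max, and sum):

```python
# Composition operators over utility functions: each function maps a
# candidate allocation to a score; the scheduler greedily picks the
# highest-scoring candidate.

def Sum(*fns):
    return lambda alloc: sum(f(alloc) for f in fns)

def Max(*fns):
    return lambda alloc: max(f(alloc) for f in fns)

def Scale(weight, fn):
    return lambda alloc: weight * fn(alloc)

# Toy leaf utilities over an allocation described as a dict:
same_rack = lambda alloc: 1.0 if alloc.get("rack") == "rack-1" else 0.0
any_node = lambda alloc: 0.1  # small utility for simply being scheduled

# Strongly prefer rack-1, but fall back to accepting anything:
utility = Max(Scale(10.0, same_rack), any_node)

candidates = [{"rack": "rack-2"}, {"rack": "rack-1"}]
best = max(candidates, key=utility)  # greedy pick -> {"rack": "rack-1"}
```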

In their evaluation they tested hard constraints only, no constraints, and soft constraints, with jobs that either perform much better colocated or can simply run anywhere in parallel. Their results show that hard constraints yield the slowest runtimes when the speedup from locality doesn't matter much, since jobs must wait for scarce resources to become available. In this setup, where the opportunity for optimal placement is scarce, soft constraints weigh both resource availability and potential speedup; the biggest difference in their experiments is a 10x reduction in runtime.

For future work, the authors note that it's hard to expect an average user to construct a good composed utility function tree; they would like to see trees generated automatically from the application, while giving power users the ability to define custom ones.


Constraints, as discussed in this paper, are an essential way for schedulers to balance resource availability and efficiency. However, I've also seen that in practice it is hard for users to choose the right constraints, especially as applications and resources change. The ability to generate a declarative soft constraint (with trade-offs) and to allow multiple soft constraints to work together would be an interesting exercise.

Mesos allows users to codify this logic in their frameworks, and perhaps this could also be a way to create shared framework/scheduler logic that simplifies frameworks too.


Jockey: Guaranteed Job Latency in Data Parallel Clusters

Next in the datacenter scheduling series, I'll be covering the paper "Jockey: Guaranteed Job Latency in Data Parallel Clusters", which is joint work between Microsoft Research and Brown University.


Big Data frameworks (MapReduce, Dryad, Spark) running in large organizations are often shared among multiple groups and users, and jobs often have strict deadlines (e.g., finish within 1 hour). Ensuring these deadlines are met is challenging because these jobs often have complex operations, varied data patterns, and failures at different levels, including tasks, nodes, and the network. Also, to improve cluster utilization, jobs need to be multiplexed within the cluster, which creates even more variability. All of this leads to variable job performance. Having dedicated clusters for strict-SLO jobs provides the best isolation, but utilization then becomes very low.


Previous approaches to this problem introduce either priorities or weights among jobs as inputs to the scheduler. However, priorities are not expressive enough, as many jobs can be high or low priority, and always giving resources to high-priority jobs can harm jobs with looser SLOs. Weights are also difficult, as users have a hard time understanding how to weigh their jobs, especially when the cluster is noisy.

Jockey aims to solve this problem by allowing users to provide a utility curve expressed as a piecewise linear function, which the system maps to weights automatically. The system's goal is then to maximize utility while minimizing resources, by dynamically adjusting each job's resource allocation.
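For illustration, a small piecewise linear utility curve over completion time could be sketched like this (the breakpoints and helper names are my own, not Jockey's API):

```python
import bisect

def piecewise_linear(points):
    """Build a utility curve from (completion_time, utility) breakpoints,
    linearly interpolating between them and flat beyond the ends."""
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    def curve(t):
        if t <= xs[0]:
            return ys[0]
        if t >= xs[-1]:
            return ys[-1]
        i = bisect.bisect_right(xs, t)
        frac = (t - xs[i - 1]) / (xs[i] - xs[i - 1])
        return ys[i - 1] + frac * (ys[i] - ys[i - 1])
    return curve

# Full utility if the job finishes within 60 minutes, decaying to zero by 120:
u = piecewise_linear([(60, 1.0), (120, 0.0)])
u(30)  # -> 1.0
u(90)  # -> 0.5
```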

For Jockey to allocate resources correctly over time, it needs to estimate the amount of resources required to meet a job's deadline at each stage. Jockey uses performance events and data from prior executions to build a resource model of each job's stages and requirements. Using this model, Jockey runs thousands of simulations per job offline to build a resource allocation table that models the correlation between the amount of resources allocated and the job's completion time.

At runtime, Jockey runs a control loop that periodically examines the job's current progress using its built-in progress indicator and determines whether the job is progressing as expected. If the job is performing slower than expected (perhaps due to noisy neighbors or failures), Jockey uses the model to allocate more resources to it; if the job is running faster, it can also reduce the allocation. When the model mispredicts the required resources, possibly due to a significant change in job input size, Jockey attempts to rerun the simulator at runtime, or falls back to the previous weighted sharing.
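A minimal sketch of one control-loop step, assuming a toy precomputed allocation table (all table values, the bucketing, and the function names are invented for illustration; Jockey's real table comes from thousands of offline simulations):

```python
# Hypothetical allocation table from offline simulation:
# (progress_bucket, minutes_remaining_bucket) -> workers predicted to
# be needed to still meet the deadline.
ALLOC_TABLE = {
    (0.25, 45): 20,
    (0.25, 30): 40,
    (0.50, 30): 20,
    (0.50, 15): 50,
}

def bucket(x, buckets):
    """Snap a value down to the nearest known bucket."""
    candidates = [b for b in buckets if b <= x]
    return max(candidates) if candidates else min(buckets)

def control_loop_step(progress, minutes_remaining, current_workers):
    """One iteration: look up how many workers the model says are needed;
    scale up if behind schedule, down if ahead."""
    p = bucket(progress, sorted({k[0] for k in ALLOC_TABLE}))
    t = bucket(minutes_remaining, sorted({k[1] for k in ALLOC_TABLE}))
    return ALLOC_TABLE.get((p, t), current_workers)

# Behind schedule: only 25% done with 30 minutes left -> grow to 40 workers.
control_loop_step(0.25, 30, current_workers=20)  # -> 40
```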


According to the paper, across 800 executions of 21 production jobs, Jockey missed only one deadline, and only by about 3%. It also reduced the spread between a job's longest and shortest relative runtimes from 4.3x down to 1.4x.

More details on how the simulator works, the progress indicator, and the evaluation can be found in the paper.


With container orchestrators, users are launching more mixed workloads (batch + long-running, etc.) in the same cluster with Mesos/K8s. The problems described in this paper, such as noisy neighbors and variance in job execution, are still relevant with Docker. Interestingly, if we could learn from prior executions of predictable batch jobs (e.g., Spark, Hadoop) to build a similar resource allocation model, we could use this information in the framework that controls resource allocation to meet deadlines while maximizing utilization. However, coming up with a good simulator for these systems is quite challenging. The paper suggests a machine learning approach could be possible, which I hope gets explored more.

Delay Scheduling: A Simple Technique for Achieving Locality and Fairness in Cluster Scheduling

As part of the datacenter scheduling series, the next paper I am covering is "Delay Scheduling: A Simple Technique for Achieving Locality and Fairness in Cluster Scheduling".

This is a joint paper by AMPLab and Facebook, where the background is again Hadoop/Dryad-like systems running MapReduce-style workloads with a mix of long batch jobs and shorter interactive queries. While Facebook saw the benefits of consolidating workloads and users into the same cluster, consolidation also started to hurt query performance. The Hadoop fair scheduler was therefore designed and employed to help maximize fairness and performance. As discussed in previous papers, there is a conflict between ensuring data locality and fairness, where maximizing one can hurt the other. When fairness needs to be enforced as new jobs are submitted, a scheduler has two options: either preempt existing tasks or wait for running tasks to finish. Although preempting tasks allows fairness to be enforced right away, it also wastes work that has already been performed (also consider the effect of checkpointing).

This paper proposes the idea of delay scheduling: when the scheduler is choosing a node to run the head-of-line job and the available nodes don't have local data for it, the job waits and subsequent jobs are evaluated instead. If a job has been skipped long enough, the scheduler starts to relax its locality preference from node level to rack level, and eventually to non-local placement when rack locality isn't possible. The two key characteristics of the Yahoo/Facebook workloads the authors tested are that most jobs are small, and that data locality improves job performance quite noticeably (2x faster in their workloads).
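The waiting logic can be sketched roughly as follows (the class shape, queue representation, and skip limits are simplified assumptions; the paper expresses the thresholds as skip counts/wait times per locality level):

```python
class Job:
    """Toy job with pending tasks queued by locality level."""
    def __init__(self, node_local, rack_local, other):
        self.node_local = node_local  # tasks with data on the free node
        self.rack_local = rack_local  # tasks with data in the same rack
        self.other = other            # tasks with no locality here
        self.skipcount = 0            # scheduling opportunities skipped

def delay_schedule(jobs, node_skip_limit=3, rack_skip_limit=6):
    """When a slot frees up, walk jobs in fairness order; a job without
    node-local work waits, relaxing to rack-local and then any task only
    after it has been skipped enough times."""
    for job in jobs:
        if job.node_local:
            job.skipcount = 0
            return job.node_local.pop(0)
        if job.skipcount >= node_skip_limit and job.rack_local:
            return job.rack_local.pop(0)
        if job.skipcount >= rack_skip_limit and job.other:
            return job.other.pop(0)
        job.skipcount += 1  # let a later job use this slot instead

# j1 has no local data on the free node, so it waits; j2 runs locally.
j1 = Job(node_local=[], rack_local=["t1-rack"], other=[])
j2 = Job(node_local=["t2-local"], rack_local=[], other=[])
picked = delay_schedule([j1, j2])  # -> "t2-local"
```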

The paper's other scheduling improvement, to ensure fairness among users submitting a variety of jobs, is hierarchical resource pools, which allow different users to specify different scheduling policies (FIFO, fair, etc.) while each pool keeps its own fair share. The paper also offers some suggestions about removing node hotspots (increasing data replication for hotter data) and more.

With these two improvements, the paper evaluates the existing Yahoo/FB workloads and shows varying amounts of improvement across different task lengths and workload types.

Delay scheduling works well in this environment because most tasks are short compared to their jobs, and because data replication means there are multiple locations where a particular task can run.


Delay scheduling is a very simple concept, but it can be an effective scheduling technique under the constraints mentioned in this paper. It can also be a possible scheduling action for a generic scheduler framework to take (e.g., Quincy).

In fact, this is a somewhat common approach in Mesos schedulers, where a framework usually opts to wait some time for better offers when a more ideal placement matters for a particular task. An interesting way to help frameworks decide whether to "delay" scheduling would be to expose offer rates and an estimate of the chance of receiving an offer matching the locality preference, before the framework chooses to wait or run.



Quincy: Fair scheduling for Distributed Computing Clusters

Next in the datacenter scheduling paper series, we'll cover Quincy, a paper published by Microsoft at SOSP 2009. This is also the paper that Firmament is primarily based on.

There is a growing number of data-intensive jobs (and applications), and jobs can take anywhere from minutes to days. It therefore becomes important to ensure some notion of fairness in the cluster, so that a single job cannot monopolize it. It is also important that ensuring low latency for a short job doesn't hurt the overall throughput of the system. Another design consideration stated in the paper is the assumption that maintaining high network bandwidth as the cluster grows becomes quite expensive, and jobs often generate high cross-cluster traffic; it therefore becomes a goal to reduce network traffic by preferring data locality. However, striving for both fairness and locality is often a conflicting goal, because getting the best locality placement for a job often requires delaying its placement, which can sacrifice fairness.

Quincy is a generic scheduling framework that aims to provide a flexible and powerful way of scheduling multiple concurrent jobs; in the paper, Dryad is the data processing system generating these jobs. As multiple competing dimensions (locality vs. fairness, etc.) are involved in scheduling, Quincy provides a way to express the cost of these choices so that a global cost function can determine the best action.

Quincy is a flow-based scheduling framework: the underlying data structure that stores all the decisions is a graph that encodes the structure of the network (racks, nodes, switches) along with the set of waiting tasks and their locality data. If we can quantify the cost of each scheduling decision (the cost of preempting a task, the cost of less-ideal data locality, etc.), then we can build a graph that represents these choices and use a solver to find the best flow, which corresponds to a schedule.

[Figure: example Quincy flow graph from the paper]

In the graph above we can see an example flow graph, where each box on the left is a root or worker task submitted to the scheduler, and every action (unscheduled, preemption) and assignment (cluster, rack, computer) is encoded on the right. The costs of the edges are updated on every salient event (task assigned, worker added) or on a timer (e.g., the cost of having a task waiting in the queue). Other costs can also be computed and configured, such as the cost of transferring data across the various levels.
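To make the min-cost flow idea concrete, here is a toy sketch using networkx (this is not the paper's implementation; the single-task topology, node names, and costs are invented for illustration):

```python
import networkx as nx

# Toy Quincy-style flow network: one task can run on its data-local machine
# (cheap edge), on a remote machine (expensive edge), or stay unscheduled
# (very expensive edge). One unit of flow = one task placement.
G = nx.DiGraph()
G.add_node("task", demand=-1)  # source: the task supplies one unit of flow
G.add_node("sink", demand=1)   # sink: one placement must be absorbed
G.add_edge("task", "m_local", weight=1, capacity=1)    # data-local machine
G.add_edge("task", "m_remote", weight=10, capacity=1)  # remote machine
G.add_edge("task", "unsched", weight=100, capacity=1)  # leave unscheduled
for placement in ("m_local", "m_remote", "unsched"):
    G.add_edge(placement, "sink", weight=0, capacity=1)

# The solver routes the flow through the cheapest placement:
flow = nx.min_cost_flow(G)
flow["task"]["m_local"]  # -> 1 (task assigned to the data-local machine)
```

With real cost functions on the edges (preemption cost, queue-wait cost, data-transfer cost), the same solve yields a globally cost-optimal assignment for all tasks at once.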

(The Firmament website has a great visualization that you can play around with.)

The paper reports that the measured overhead of solving the min-cost flow for 250 machines was about 7ms on average and at most 60ms; for 2,500 machines running 100 concurrent jobs, the highest scheduling time was about one second. Improvements could still be made by solving the flow incrementally instead.

In the evaluation, the paper uses 250 machines and tries four fairness policies (with or without preemption, with or without fair sharing), measuring the scheduler's effectiveness in terms of fairness, job time, and data transfers. In their workloads and setup, turning on both fair sharing and preemption proved most effective.

Final thoughts

While this paper is from 2009 and places a lot of emphasis on batch workloads and data locality, modeling scheduling as a min-cost flow problem is very powerful; as seen in Firmament, the flexible cost model allows different workloads and configurations to produce different scheduling decisions.

As there is already work ongoing to integrate Firmament into K8s and Mesos, it's still unclear to me how Firmament's costs and configuration can best be tuned for all kinds of cluster environments. However, this is also an opportunity, as I think a flexible scheduling model like the Quincy graph provides a powerful abstraction for future improvements.

Improving MapReduce Performance in Heterogeneous Environments

"Improving MapReduce Performance in Heterogeneous Environments" is the first paper in the collection of scheduling papers I'd like to cover. If you'd like to learn the motivation behind this series, you can find it here. As mentioned, I will also add my personal comments on this paper from a Mesos perspective. This is the very first paper summary post I've ever done, so I'd definitely appreciate feedback, as I'm sure there is lots of room for improvement!

This paper was written around 2008, after Google released the MapReduce paper and while Hadoop, inspired by that paper, was being developed at Yahoo. Hadoop is a general-purpose data processing framework that divides a job into map and reduce tasks, with multiple phases belonging to each task. Each Hadoop machine has a fixed number of slots based on its available resources, and Hadoop implements its own fair scheduler that places tasks into the available slots in the cluster. In the MapReduce paper, Google noted that one common cause of jobs taking longer to complete is 'stragglers': machines that take an unusually long time to complete a task, for reasons such as bad hardware. To work around this, they suggested running 'backup tasks' (aka speculative execution), i.e., running the same task in parallel on other machines. Without this mechanism, jobs can take 44% longer according to the MapReduce paper.

The authors observe that the Hadoop fair scheduler makes a number of assumptions and uses heuristics that can cause severe performance degradation in heterogeneous environments such as Amazon EC2. This paper specifically targets how the Hadoop scheduler can improve speculative execution of tasks.

To understand the improvements, we first need to cover how the Hadoop scheduler decides to speculatively execute tasks, and the key assumptions it makes.

Hadoop scheduler

Each Hadoop task's progress is measured between 0 and 1; for a reduce task, each of its three phases (copy, sort, reduce) counts for 1/3 of the score. The original Hadoop scheduler marks a task as a 'straggler' if its progress score is less than the average for its category minus 0.2 and it has been running for over a minute; stragglers are then ranked by data locality and speculatively executed.
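A sketch of this detection rule (the data representation and function name are assumed; the 0.2 threshold and one-minute minimum come from the description above):

```python
def find_stragglers(tasks, threshold=0.2, min_runtime_s=60):
    """Hadoop-style straggler detection: a task is a straggler if its
    progress score is more than `threshold` below its category's average
    and it has been running for at least `min_runtime_s` seconds.
    `tasks` is a list of dicts with 'category', 'progress', 'runtime_s'."""
    by_cat = {}
    for t in tasks:
        by_cat.setdefault(t["category"], []).append(t["progress"])
    stragglers = []
    for t in tasks:
        scores = by_cat[t["category"]]
        avg = sum(scores) / len(scores)
        if t["runtime_s"] >= min_runtime_s and t["progress"] < avg - threshold:
            stragglers.append(t)
    return stragglers

tasks = [
    {"category": "map", "progress": 0.9, "runtime_s": 120},
    {"category": "map", "progress": 0.8, "runtime_s": 120},
    {"category": "map", "progress": 0.1, "runtime_s": 120},  # straggler
]
[t["progress"] for t in find_stragglers(tasks)]  # -> [0.1]
```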

The original Hadoop scheduler made these assumptions:

1. Nodes can perform work at roughly the same rate.
2. Tasks progress at a constant rate throughout time.
3. There is no cost to launching a speculative task on a node that would otherwise have an idle slot.
4. A task’s progress score is representative of fraction of its total work that it has done.
5. Tasks tend to finish in waves, so a task with a low progress score is likely a straggler.
6. Tasks in the same category (map or reduce) require roughly the same amount of work.

The paper then describes how each assumption fails in a heterogeneous environment. Hadoop assumes any detectably slow node is a straggler. However, the authors note that a public cloud environment like EC2 can be slow because of co-location, where 'noisy neighbors' cause contention on network or disk I/O, since these resources are not perfectly isolated. The measured performance difference cited was about 2.5x. This also breaks assumption 3, since 'idle' nodes can be sharing resources. Moreover, without a limit, too many speculative tasks can run and take resources away from actually useful tasks, which is more likely to happen in a heterogeneous environment.

Another insight is that not all phases progress at the same rate, as some phases require expensive copying of data over the network, such as the copy phase of the reduce task. The authors mention other insights that can be found in the paper.

LATE scheduler

The LATE scheduler is a new speculative task scheduler proposed by the authors to address these broken assumptions and behave better in a heterogeneous environment. The first difference is that LATE speculatively executes the tasks it estimates will finish farthest into the future, which is what LATE stands for (Longest Approximate Time to End). It uses a simple heuristic to estimate the finish time: (1 - ProgressScore) / ProgressRate, where ProgressRate is simply the ProgressScore divided by the elapsed running time. This change means it only re-executes tasks whose re-execution will improve the job's response time.
The second difference is that it only launches backup tasks on nodes that aren't themselves stragglers, identified as nodes whose total work performed is above a SlowNodeThreshold.
The third difference is that it introduces a limit on how many speculative tasks can be launched, to avoid wasting resources and limit contention.
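The finish-time heuristic and candidate selection can be sketched as follows (this ignores the slow-node check and the speculation cap for brevity; function names and the task representation are assumptions):

```python
def time_to_end(progress_score, elapsed_s):
    """LATE's estimate: ProgressRate = ProgressScore / elapsed time;
    remaining work (1 - ProgressScore) divided by that rate."""
    progress_rate = progress_score / elapsed_s
    return (1.0 - progress_score) / progress_rate

def pick_speculation_candidate(tasks):
    """Speculatively re-execute the task estimated to finish farthest in
    the future (Longest Approximate Time to End)."""
    return max(tasks, key=lambda t: time_to_end(t["progress"], t["elapsed_s"]))

tasks = [
    {"name": "fast", "progress": 0.8, "elapsed_s": 100},  # ~25s to go
    {"name": "slow", "progress": 0.2, "elapsed_s": 100},  # ~400s to go
]
pick_speculation_candidate(tasks)["name"]  # -> "slow"
```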

The combination of these changes allows the new scheduler to perform better in a heterogeneous environment: in their evaluation it performs 58% better on average. Note that the authors also arrived at default values for the various configuration parameters by trying different values in their test environment.

Final random thoughts

Simply applying different scheduling heuristics yields a quite noticeable performance improvement. However, the assumptions about the workload and the cluster environment matter a lot, both for the heuristics and for the chosen parameter values. It would be interesting if users could run these evaluations on their own clusters with their typical workloads and derive different values; I think that could be a useful framework to write on Mesos.

One of the most interesting aspects of this paper is the expectation and detection of stragglers in the cluster, with contention as a big cause. In the Mesos world, when multiple frameworks run various tasks in the same cluster, contention can occur even on a private cluster, since containers only isolate CPU/memory (to a certain degree), while network, I/O, and even caches and DRAM bandwidth are not currently isolated by any container runtime. If we could understand the performance characteristics of all the tasks running across all Mesos frameworks, I think avoiding co-location and detecting stragglers could become a lot more efficient, as straggler information could be shared among frameworks that ran similar workloads. Another interesting thought: if Mesos knew which tasks were being speculatively executed, it could also apply better priorities when it comes to oversubscription.

A survey of datacenter scheduling papers

Scheduling workloads in a cluster is not a new topic and has years of research behind it. But scheduling has recently become popular again because of the rise of containers (Docker) and of scheduling frameworks that can run more than just MapReduce and OpenMP jobs (Mesos and Kubernetes).

Having the opportunity to work on Mesos at Mesosphere and becoming a PMC member of the project has made me much more aware of what's going on in this space. However, as I see areas where we can improve Mesos or Spark-on-Mesos scheduling, and the problems we are facing, I'd like to go back through the scheduling literature to gain a better understanding before deciding how to improve our schedulers.

Motivated by these reasons, I've decided to cover recent datacenter scheduling papers in chronological order; inspired by Adrian Colyer's `The Morning Paper`, I'll post a short summary of each scheduling paper on this blog. I will also add my personal comments and thoughts on each paper from a Mesos perspective.

I'm also very thankful to Malte Schwarzkopf (MIT, co-author of Google Omega and Firmament) for providing a list of papers that I should cover.

I'll update the sections below as I add more blog posts. Stay tuned if you'd also like to learn more about datacenter scheduling!

Papers covered