Delay Scheduling: A Simple Technique for Achieving Locality and Fairness in Cluster Scheduling

As part of the datacenter scheduling series, the next paper I am covering is “Delay Scheduling: A Simple Technique for Achieving Locality and Fairness in Cluster Scheduling”.

This is a joint paper by AMPLab and Facebook, and the background is again Hadoop/Dryad-like systems running MapReduce-style workloads with a mix of long batch jobs and shorter interactive queries. While Facebook saw the benefits of consolidating workloads and users onto the same cluster, consolidation also started to hurt query performance, so the Hadoop Fair Scheduler was designed and deployed to balance fairness and performance. As discussed in previous papers, there is a conflict between ensuring data locality and fairness, where maximizing one can hurt the other. When new jobs are submitted and fairness needs to be enforced, a scheduler has two options: either preempt existing tasks or wait for running tasks to finish. Preempting tasks allows fairness to be enforced right away, but it also wastes the work that has already been performed (unless checkpointing mitigates the loss).

This paper proposes delay scheduling: when the scheduler is choosing a node to run the head-of-line job and the available node doesn’t have local data for that job, it skips the job and evaluates subsequent jobs instead. If a job has been skipped long enough, the scheduler relaxes its locality preference from node level to rack level, and eventually allows non-local tasks when rack locality isn’t possible. The two key characteristics of the Yahoo/Facebook workloads the authors tested are that most jobs are small, and that data locality improves job performance quite noticeably (2x faster in their workloads).
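
The core loop is simple enough to sketch. The following is a minimal illustration of the idea (not the actual Hadoop Fair Scheduler code); the job/node methods, the per-job skip counter, and the two thresholds D1/D2 are assumed names for this sketch.

```python
# Minimal delay-scheduling sketch: called whenever `node` has a free slot,
# with jobs sorted by how far below their fair share they are.
def assign_task(node, sorted_jobs, D1, D2):
    for job in sorted_jobs:
        if job.has_local_task(node):
            job.skip_count = 0
            return job.launch_local_task(node)        # node-local: best case
        elif job.skip_count >= D1 and job.has_rack_local_task(node):
            return job.launch_rack_local_task(node)   # relax to rack locality
        elif job.skip_count >= D2:
            return job.launch_any_task(node)          # give up on locality entirely
        else:
            job.skip_count += 1                       # skip this job and wait for a better node
    return None                                       # leave the slot idle for now
```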

The other scheduling improvement, aimed at ensuring fairness among users who submit a variety of jobs, is hierarchical resource pools: each pool gets its own fair share of the cluster, and different users can specify different scheduling policies (FIFO, fair, etc.) within their pool. The paper also offers some suggestions about removing node hotspots (increasing the replication factor for hot data) and more.
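
A rough sketch of the hierarchical pool idea (illustrative only; the class and field names are my own assumptions, not the Hadoop Fair Scheduler’s implementation):

```python
# Two-level scheduling: fair sharing across pools, per-pool policy within a pool.
class Pool:
    def __init__(self, name, policy="fair", share=1.0):
        self.name, self.policy, self.share = name, policy, share
        self.jobs = []

    def head_of_line_job(self):
        if not self.jobs:
            return None
        if self.policy == "fifo":
            return min(self.jobs, key=lambda j: j.submit_time)
        # "fair": pick the job furthest below its fair share within the pool.
        return min(self.jobs, key=lambda j: j.running_tasks / max(j.fair_share, 1e-9))

def pick_next_job(pools):
    candidates = [p for p in pools if p.jobs]
    if not candidates:
        return None
    # Inter-pool fairness: pick the pool with the fewest running tasks relative
    # to its share, then let that pool's own policy pick the job.
    pool = min(candidates,
               key=lambda p: sum(j.running_tasks for j in p.jobs) / max(p.share, 1e-9))
    return pool.head_of_line_job()
```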

With these two improvements, the paper evaluates the existing Yahoo/Facebook workloads and shows varying amounts of improvement across different task lengths and types of workloads.

Delay scheduling works well in this environment because most tasks are short compared to their jobs, and because data replication means there are multiple candidate locations to run any particular task.

Notes

Delay scheduling is a very simple concept, but it can be an effective scheduling technique under the constraints mentioned in this paper. It can also be one possible scheduling action that a generic scheduler framework can take (e.g., Quincy).

In fact, this is a somewhat common approach among Mesos schedulers, where a framework often opts to wait for some time for better offers when a more ideal placement matters for a particular task. An interesting way to help frameworks decide whether to “delay” scheduling would be to expose offer rates and an estimate of the likelihood of receiving an offer that satisfies the locality preference, before the framework chooses to wait or run.

Quincy: Fair Scheduling for Distributed Computing Clusters

For the next paper in the datacenter scheduling series, we’ll cover Quincy, a paper published by Microsoft at SOSP 2009. This is also the paper that Firmament (http://firmament.io/) is primarily based on.

http://www.sigops.org/sosp/sosp09/papers/isard-sosp09.pdf

There is a growing number of data-intensive jobs (and applications), and jobs can take anywhere from minutes to days. It therefore becomes important to ensure some notion of fairness in the cluster, so that a single job cannot monopolize the whole cluster, and to ensure that providing low latency for a short job doesn’t hurt the overall throughput of the system. Another design consideration stated in the paper is that maintaining high network bandwidth becomes quite expensive as the cluster grows, and jobs can have heavy cross-cluster traffic, so reducing network traffic by preferring data locality becomes a goal. However, fairness and locality are often conflicting goals: getting the best locality placement for a job often requires delaying the placement, which can sacrifice fairness.

Quincy is a generic scheduling framework that aims to provide a flexible and powerful way of scheduling multiple concurrent jobs; in the paper, Dryad is the data processing system generating these jobs. Since there are competing dimensions in scheduling (locality vs. fairness, etc.), Quincy provides a way to express the cost of each choice so that a global cost function can determine the best action.

Quincy is a flow-based scheduling framework: the underlying data structure that stores all the decisions is a graph that encodes the structure of the network (racks, nodes, switches) as well as the set of waiting tasks with their locality data. If we can quantify the cost of each scheduling decision (the cost of preempting a task, the cost of less ideal data locality, etc.), then we can build a graph that represents these choices and use a min-cost flow solver to find the best assignment.

[Figure: an example Quincy flow graph]

In the example flow graph above, each box on the left is a job root or worker task submitted to the scheduler, and every action (unscheduled, preemption) and assignment (cluster, rack, computer) is encoded on the right. The costs of the edges are updated on every salient event (a task is assigned, a worker is added) or on timer events (e.g., the cost of leaving a task waiting in the queue grows over time). Other costs can be computed and configured as well, such as the cost of transferring data across the various levels of the network.
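
The graph structure lends itself to off-the-shelf min-cost flow solvers. Below is a toy sketch of the idea (my own illustration, not the paper’s solver or its real cost terms): one task’s choices, a data-local machine, a rack aggregator, a cluster aggregator, or staying unscheduled, are encoded as edges with assumed costs and solved with networkx.

```python
# Toy Quincy-style flow network: the task pushes one unit of flow to the sink,
# either through a placement node (scheduled) or through "unscheduled".
import networkx as nx

G = nx.DiGraph()
G.add_node("task0", demand=-1)   # one unit of flow originates at the task
G.add_node("sink", demand=1)

# Illustrative costs: lower weight = more preferred placement.
G.add_edge("task0", "machine_with_data", weight=1,  capacity=1)  # node-local
G.add_edge("task0", "rack_agg",          weight=5,  capacity=1)  # rack-level
G.add_edge("task0", "cluster_agg",       weight=10, capacity=1)  # anywhere
G.add_edge("task0", "unscheduled",       weight=20, capacity=1)  # keep waiting

for n in ["machine_with_data", "rack_agg", "cluster_agg", "unscheduled"]:
    G.add_edge(n, "sink", weight=0, capacity=1)

flow = nx.min_cost_flow(G)   # solver picks the cheapest overall assignment
print(flow["task0"])         # e.g. {'machine_with_data': 1, 'rack_agg': 0, ...}
```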

(The firmament website has a great visualization that you can play around http://firmament.io/#howitworks)

The paper reports that the measured overhead of solving the min-cost flow for 250 machines was about 7 ms on average and at most about 60 ms; when they tried 2,500 machines running 100 concurrent jobs, the highest scheduling time was about 1 second. Improvements can still be made by solving the flow incrementally instead of from scratch.

In the evaluation, the paper used 250 machines and tried four fairness policies (with or without preemption, with or without fair sharing), measuring fairness, job completion time, and data transfer to assess the scheduler’s effectiveness. For their workloads and setup, they showed that turning on both fair sharing and preemption gives the most effective outcome.

Final thoughts

While this paper is from 2009 and puts heavy emphasis on batch workloads and data locality, modeling scheduling as a min-cost flow problem is very powerful: as Firmament shows, a flexible cost model allows different workloads and configurations to produce different scheduling decisions.

As there is work already ongoing to integrate Firmament into Kubernetes and Mesos, it’s still unclear to me how Firmament’s costs and configuration can best be tuned across all kinds of cluster environments. However, this is also an opportunity, as I think a flexible scheduling model like the Quincy graph provides a powerful abstraction for future improvements.

Improving MapReduce Performance in Heterogeneous Environments

“Improving MapReduce Performance in Heterogeneous Environments” is the first paper in the collection of scheduling papers I’d like to cover. If you’d like to learn the motivation behind this series, you can find it here. As mentioned, I will also add my personal comments about each paper from a Mesos perspective. This is the very first paper summary post I’ve done, so I’d definitely appreciate feedback, as I’m sure there is lots of room for improvement!

This paper was written around 2008, after Google released the MapReduce paper and while Hadoop was being developed at Yahoo, inspired by that paper. Hadoop is a general-purpose data processing framework that divides a job into map and reduce tasks, each of which has multiple phases. Each Hadoop machine has a fixed number of slots based on its available resources, and Hadoop implements its own scheduler that places tasks into the available slots in the cluster. In the MapReduce paper, Google noted that one of the common causes of a job taking longer to complete is “stragglers”: machines that take an unusually long time to complete a task, for reasons such as bad hardware. To work around this, they suggested running “backup tasks” (aka speculative execution), i.e., running the same task in parallel on other machines. Without this mechanism jobs can take 44% longer, according to the MapReduce paper.

The authors observe that the Hadoop scheduler makes a number of assumptions and uses heuristics that can cause severe performance degradation in heterogeneous environments such as Amazon EC2. This paper specifically targets how the Hadoop scheduler can improve its speculative execution of tasks.

To understand the improvements, we first need to cover how the Hadoop scheduler decides to speculatively execute tasks, and the key assumptions it makes.

Hadoop scheduler

Each Hadoop task’s progress is measured by a score between 0 and 1; for a reduce task, each of its three phases (copy, sort, reduce) counts for 1/3 of the score. The original Hadoop scheduler looks at tasks that have been running for over a minute and whose progress score is more than 0.2 below the average for their category; such tasks are marked as “stragglers” and then ranked by data locality to start speculative execution.
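
As a rough sketch of that heuristic (the field names and helper here are my own assumptions, not Hadoop code):

```python
# Original Hadoop speculative-execution check, as described above.
def is_straggler(task, category_avg_progress):
    """Mark a task as a straggler if it has been running for at least a minute
    and its progress score is more than 0.2 below its category's average."""
    return (task.runtime_s >= 60
            and task.progress_score < category_avg_progress - 0.2)
```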

The original Hadoop scheduler made these assumptions:

1. Nodes can perform work at roughly the same rate.
2. Tasks progress at a constant rate throughout time.
3. There is no cost to launching a speculative task on a node that would otherwise have an idle slot.
4. A task’s progress score is representative of the fraction of its total work that it has done.
5. Tasks tend to finish in waves, so a task with a low progress score is likely a straggler.
6. Tasks in the same category (map or reduce) require roughly the same amount of work.

The paper then describes how each assumption breaks down in a heterogeneous environment. Hadoop assumes any detectably slow node is a straggler. However, the authors point out that in a public cloud environment like EC2, a node can be slow because of co-location: “noisy neighbors” can cause contention on network or disk I/O, since these resources are not perfectly isolated. The measured performance difference cited was about 2.5x. This also invalidates assumption 3, since an “idle” node may be sharing resources with a busy neighbor. In addition, without a limit, too many speculative tasks can end up running and take resources away from useful work, which is more likely to happen in a heterogeneous environment.

Another insight is that not all phases progress at the same rate; some phases, such as the copy phase of a reduce task, require expensive copying of data over the network. The authors mention other insights that can be found in the paper.

LATE scheduler

The LATE scheduler is a new speculative task scheduler proposed by the authors to address these broken assumptions and to behave better in a heterogeneous environment. The first difference is that the LATE scheduler speculatively executes the task it estimates will finish farthest into the future, which is what LATE stands for (Longest Approximate Time to End). It uses a simple heuristic to estimate the finish time: (1 – ProgressScore) / ProgressRate, where ProgressRate is simply the ProgressScore divided by the elapsed running time. With this change it only re-executes tasks whose re-execution will actually improve the job’s response time.
The second difference is that it only launches backup tasks on nodes that aren’t slow, determined by whether the total work a node has performed is above a SlowNodeThreshold.
The third difference is that it introduces a cap on how many speculative tasks can run at once, to avoid wasting resources and to limit contention (see the sketch below).
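
Here is a rough sketch of the selection logic described above; the threshold names (SpeculativeCap, SlowNodeThreshold, SlowTaskThreshold) come from the paper, but the data structures and field names are assumptions for illustration.

```python
# LATE selection sketch: estimate time-to-end and only back up slow tasks,
# on fast nodes, under a speculative cap.
def progress_rate(task):
    return task.progress_score / max(task.elapsed_s, 1e-6)

def estimated_time_left(task):
    # LATE heuristic: (1 - ProgressScore) / ProgressRate
    return (1.0 - task.progress_score) / max(progress_rate(task), 1e-6)

def pick_speculative_task(node, running_tasks, speculative_running, cfg):
    if speculative_running >= cfg.speculative_cap:
        return None                                   # cap on concurrent backups
    if node.total_work_performed < cfg.slow_node_threshold:
        return None                                   # never back up onto a slow node
    candidates = [t for t in running_tasks
                  if not t.is_speculative
                  and progress_rate(t) < cfg.slow_task_threshold]
    # Re-execute the task estimated to finish farthest into the future.
    return max(candidates, key=estimated_time_left, default=None)
```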

The combination of these changes allows the new scheduler to perform better in a heterogeneous environment; in their evaluation it performs 58% better on average. Note that the authors also chose default values for the various thresholds by trying different values in their test environment.

Final random thoughts

Simply applying different heuristics in scheduling yields a quite noticeable performance improvement. However, the assumptions about the workload and the cluster environment matter a lot, both in the heuristics and in the threshold values chosen. It would be interesting if users could run these evaluations on their own clusters with their typical workloads and derive their own values; I think that could be a useful framework to write on Mesos.

One of the most interesting aspects of this paper is the expectation and detection of stragglers in the cluster, with contention being the big cause. In the Mesos world, when multiple frameworks run various tasks in the same cluster, contention can occur even on a private cluster, since containers only isolate CPU and memory (to a certain degree), while network, I/O, and even caches and DRAM bandwidth are not currently isolated by any container runtime. If we could understand the performance characteristics of all the tasks running across all Mesos frameworks, I think avoiding co-location and detecting stragglers could become a lot more efficient, since straggler information could be shared among frameworks that have run similar workloads. Another interesting thought: if Mesos knew which tasks are speculatively executed, we could also apply better priorities when it comes to oversubscription.

A survey of datacenter scheduling papers

Scheduling workloads on a cluster is not a new topic and has years of research behind it. But scheduling has recently gained renewed attention because of the popularity of containers (Docker) and the rise of scheduling frameworks that can run more than just MapReduce and OpenMP jobs (Mesos and Kubernetes).

Having the opportunity to work on Mesos at Mesosphere and becoming a PMC member of the project has made me much more aware of what’s going on in this space. However, as I see areas where we can improve Mesos or Spark-on-Mesos scheduling, and the problems we are facing, I’d like to go back and understand the scheduling literature so I have a better grounding when considering how to improve our schedulers.

Motivated by these reasons, I’ve decided to cover recent datacenter scheduling papers in chronological order, and, inspired by Adrian Colyer’s `The Morning Paper`, to post a short summary of each scheduling paper on this blog. I will also add my personal comments and thoughts from a Mesos perspective after reading each paper.

I’m also very thankful to Malte Schwarzkopf (MIT, co-author of Google Omega and Firmament) for providing me with a list of papers to cover.

I’ll update the sections below as I add more blog posts, so stay tuned if you’d also like to learn more about datacenter scheduling!

Papers covered

Docker on Mesos 0.20

In our recent MesosCon survey of existing Mesos users, one of the biggest feature asks was Docker integration in Mesos. Although users can already launch Docker images with Mesos thanks to the external containerizer work with Deimos, that approach requires an external component to be installed on each slave. We also believe that integrating Docker directly into Mesos provides a longer-term roadmap for how Docker can bring future features to Mesos.

What’s been added?

At the API level we added a new ContainerInfo message that serves as the base proto message for all future container types, and a DockerInfo message that provides Docker-specific options. We also added ContainerInfo to TaskInfo and ExecutorInfo so that users can launch a task with Docker, or launch an executor with Docker.
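
As a rough idea of what this looks like from a framework’s point of view, here is a hedged sketch using the generated protobuf Python bindings; the exact import path varies by Mesos version, and the other required TaskInfo fields (task_id, slave_id, resources) are elided.

```python
import mesos_pb2  # generated protobuf bindings; import path varies by version

task = mesos_pb2.TaskInfo()
task.name = "docker-task"

# New in 0.20: ContainerInfo with a DockerInfo payload.
task.container.type = mesos_pb2.ContainerInfo.DOCKER
task.container.docker.image = "busybox"

# CommandInfo: value is now optional, and the shell flag controls whether the
# command is wrapped in /bin/sh (see the ENTRYPOINT discussion below).
task.command.shell = True
task.command.value = "echo hello | wc -c"
```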

Internally we created a Docker containerizer that encapsulates Docker as a containerizer for Mesos, and it is going to be released with 0.20.

The Docker containerizer takes the specified Docker image and launches, waits on, and removes the Docker container, following the life cycle of the containerizer itself. It also redirects the Docker container’s logs into your sandbox’s stdout/stderr log files for you.

We also added a Docker abstraction that currently maps containerizer operations to Docker CLI commands issued on the slave.

For more information about the Docker changes, please take a look at the documentation in the Mesos repo (https://github.com/apache/mesos/blob/master/docs/docker-containerizer.md).

Challenges

The first big challenge in integrating Docker into Mesos was finding a way to fit Docker into the Mesos slave’s containerizer model while keeping it as simple as possible.

Since we decided to integrate with the Docker CLI, we got to really learn what the Docker CLI provides and how to map starting a container (docker run), waiting on it (docker wait), and destroying it (docker kill, docker rm -f) to Mesos.
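
To make the mapping concrete, here is an illustrative sketch (in Python, not the actual C++ Docker containerizer) of how the launch/wait/destroy life cycle could map onto CLI invocations; the container naming and resource flags shown are simplified assumptions.

```python
import subprocess

def launch(container_name, image, command, cpus, mem_mb):
    # Start the container detached, with rough resource limits.
    subprocess.check_call([
        "docker", "run", "-d", "--name", container_name,
        "--cpu-shares", str(int(cpus * 1024)),
        "--memory", "%dm" % mem_mb,
        image, "sh", "-c", command])

def wait(container_name):
    # Blocks until the container exits; prints the container's exit code.
    out = subprocess.check_output(["docker", "wait", container_name])
    return int(out.strip())

def destroy(container_name):
    subprocess.call(["docker", "kill", container_name])
    subprocess.call(["docker", "rm", "-f", container_name])
```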

Although Docker’s run command provides options to specify the CPU and memory resources allocated to a container, the first gap we identified was that it does not provide an interface to update those resources later, and part of the containerizer interface is to provide a way to update the resources of a running container. Luckily Mesos already has utilities that deal with cgroups as part of the Mesos containerizer, so we decided to re-use that code to update the Docker container’s cgroup values underneath Docker.
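
The idea is roughly the following (a heavily simplified sketch; the cgroup mount point and the docker/&lt;id&gt; hierarchy layout vary by host and Docker setup, so treat the paths as assumptions):

```python
# Update a running Docker container's resources by writing its cgroup files directly.
def update_resources(container_id, cpus, mem_mb):
    cpu_path = "/sys/fs/cgroup/cpu/docker/%s/cpu.shares" % container_id
    mem_path = "/sys/fs/cgroup/memory/docker/%s/memory.limit_in_bytes" % container_id
    with open(cpu_path, "w") as f:
        f.write(str(int(cpus * 1024)))          # cpu shares scale with requested cpus
    with open(mem_path, "w") as f:
        f.write(str(mem_mb * 1024 * 1024))      # memory limit in bytes
```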

One of the biggest concerns for a Mesos slave is being able to recover Docker tasks after the slave recovers from a crash, and making sure we don’t leak Docker containers or other resources when the slave crashes. We decided to name every container that Mesos creates with a prefix plus the container id, and use the container name during recovery to know what’s still running and what should be destroyed because it isn’t part of the slave’s checkpointed state.
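
A sketch of that recovery idea (the “mesos-” prefix and the docker ps parsing here are illustrative, not the actual slave code):

```python
import subprocess

def recover(checkpointed_container_ids):
    # List all containers and reconcile the Mesos-named ones against the checkpoint.
    out = subprocess.check_output(["docker", "ps", "-a"]).decode()
    for line in out.splitlines()[1:]:            # skip the header row
        name = line.split()[-1]                  # last column is the container name
        if not name.startswith("mesos-"):
            continue                             # not launched by Mesos, leave it alone
        container_id = name[len("mesos-"):]
        if container_id not in checkpointed_container_ids:
            subprocess.call(["docker", "rm", "-f", name])   # orphan: clean it up
```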

After mapping all the CLI commands and seeing things work with a simple docker run, we started to realize that Docker images have various ways of affecting which command actually runs, such as ENTRYPOINT and CMD in the image itself. It became obvious that we didn’t have enough flexibility in the Mesos API when the only way to specify a command was a required string field. We needed to make the command value optional so users can use the image’s default command. We also used to wrap the command in /bin/sh so that commands containing pipes or other shell operators execute inside the Docker image and not on the host. However, when an image has an ENTRYPOINT, /bin/sh becomes a parameter to the ENTRYPOINT and causes bad behavior. We have therefore added a shell flag to the command and made the command value an optional field in Mesos.
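
A small sketch of the resulting command construction (my own illustration of the shell-flag semantics, not the actual containerizer code):

```python
def docker_run_argv(image, shell, value=None, arguments=()):
    if shell and value:
        # Wrap in /bin/sh -c so pipes and shell operators run inside the image,
        # not on the host.
        return ["docker", "run", image, "/bin/sh", "-c", value]
    if value is not None:
        # Non-shell mode: value/arguments are passed straight through (and end up
        # as arguments to the image's ENTRYPOINT, if it has one).
        return ["docker", "run", image, value] + list(arguments)
    # No command at all: let the image's default ENTRYPOINT/CMD run.
    return ["docker", "run", image]
```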

The last and one of the biggest challenges was making sure we handle timeouts at each stage of the Docker containerizer launch. Part of the Mesos containerizer life cycle is to trigger a destroy when a launch exceeds a certain timeout, but it is up to the containerizer to properly destroy the container and log what is going on. We went through each stage and made sure that when the containerizer is fetching large files or Docker is pulling a large image, we show a sensible error message and can clean up correctly.

We also ran into a couple of Docker bugs, which are logged on GitHub. However, I found the Docker community to be really responsive, and the velocity of the project is definitely fast.

Further Work

Currently we default to host networking when launching Docker containers, as it is the simplest way to get a Mesos executor running in a Docker image so it can talk back to the slave with its advertised PID. More work is needed to support more networking options.

There are also a lot more features to consider, especially around allowing Docker containers to be linked and to communicate with each other.

There is a lot more that I won’t list in this post, but I’m happy with the shape of the integration and looking forward to community feedback.

Credits

I’d really like to thank Ben Hindman for the overall guidance and for working many late nights resolving Docker-on-Mesos issues, and Yifan Gu, who worked on many patches in the beginning as well!