

Overview
Large-scale graph-structured computation is usually iterative and convergence-oriented: the input data is computed repeatedly until a convergence condition is reached. These features have led to two different computation modes for graph-structured programs, namely the synchronous and asynchronous modes. Unfortunately, there has been no in-depth study of their execution properties, so programmers must choose a mode manually, which either requires a deep understanding of the underlying graph engines or results in suboptimal performance.
The PowerSwitch project presents a comprehensive characterization of the performance of the two modes on a set of typical graph-parallel applications. Our study shows that the performance of the two modes varies significantly with different graph programs, partitioning methods, execution stages, input graphs, and cluster scales, and that no single mode consistently outperforms the other. This project proposes Hsync, a hybrid graph computation mode that adaptively switches a graph-parallel program between the two modes for optimal performance. Hsync constantly collects execution statistics on the fly and leverages a set of heuristics to predict future performance and determine when a mode switch would be profitable. We have built both an online sampling approach and an offline neural-network approach to allow accurate prediction of future execution performance. PowerSwitch is built on PowerGraph, a state-of-the-art graph-parallel system, to support Hsync execution of graph applications. On a 48-node EC2-like cluster, PowerSwitch consistently outperforms the better of the two modes, with a speedup ranging from 9% to 73% thanks to accurate switching between them.


Publications
If you use PowerSwitch in your work or research, please let us know about it. We also encourage you to reference our papers.


Software
The PowerSwitch source code is distributed under the Apache License 2.0. The copyright is held by Shanghai Jiao Tong University.
You can check out the newest code from the GitHub mirror, which is based on GraphLab 2.2 (Jul. 8, 2013, commit fc3d6c6).
To use the full functionality, we strongly suggest checking out the source code from the GitHub mirror. However, if you are only interested in evaluating or working with the PowerSwitch hybrid engine, you can also obtain a snapshot (~14MB) from Dec. 16, 2013, which is based on the same GraphLab 2.2 commit (Jul. 8, 2013, commit fc3d6c6).
Once you have downloaded PowerSwitch, you can find instructions for installing and building it on GraphLab's documentation page.
For the Sampling-Switch mode, you only need to select the Hsync engine via the configuration on the command line.
For the neural-network-based switch mode, besides selecting the Hsync engine in the configuration, please follow the instructions in the Neural Network Training part.


Quick Start

Usage#1: Running PageRank algorithm on LiveJournal graph using PowerSwitch engine. (Online Sampling Strategy)

  $mpiexec -f ~/mpd.hosts -n 48 ./pagerank --graph /path/to/livejornal_ --format tsv --engine msync 
  --engine_opts s_auto=true,a_tasknum=5,t_sample=1000,rate_AvsS=0.75
	
   NOTE:
     --engine engine : set execution engine (msync: mix-adaptive engine of PowerSwitch)
     --engine_opts   : set engine options
          s_auto     : use the sampling-based estimation and adaptive switch strategy.
          a_tasknum  : number of tasks (x1000) executed on each machine during sampling in async mode,
                       which should be set according to the input size.
                       (suggested to be < 20; default is 5, which means 5x1000=5000 tasks)
          t_sample   : time interval of the monitor in asynchronous scheduling; default is 1000 (1000ms).
          rate_AvsS  : proportion between the task counts of the synchronous and asynchronous schedulings.
                       (range: 0.5 to 1; default is 0.75)
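To illustrate the role of these options, here is a minimal, hypothetical sketch of a sampling-based switch decision; it is not PowerSwitch's actual code, and the function name is invented. Throughput is sampled in each mode, and the async estimate is discounted by rate_AvsS before comparison, since a task may correspond to different amounts of work in the two modes.

```cpp
// Decide whether to switch from sync to async execution, given throughputs
// (tasks/sec) sampled in each mode. rate_AvsS discounts the async task
// count (range 0.5..1, default 0.75), since one async task may correspond
// to a different amount of work than one sync task.
bool should_switch_to_async(double sync_thro, double async_thro,
                            double rate_AvsS = 0.75) {
  // Switch only when the discounted async throughput beats the sync one.
  return async_thro * rate_AvsS > sync_thro;
}

// e.g., should_switch_to_async(8000.0, 12000.0) is true  (12000*0.75 = 9000 > 8000)
//       should_switch_to_async(8000.0, 10000.0) is false (10000*0.75 = 7500 < 8000)
```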

Usage#2: Running PageRank algorithm on LiveJournal graph using PowerSwitch engine. (Neural Network Strategy)

  $mpiexec -f ~/mpd.hosts -n 48 ./pagerank --graph /path/to/livejornal_ --format tsv --engine msync 
  --engine_opts auto=true,t_sample=1000,rate_AvsS=0.75

   NOTE:
     --engine_opts    : set engine options
          auto        : use the neural-network estimation and adaptive switch strategy.
                        This should be set together with a call to "graph.set_async_thro(double thro)" after
                        loading the graph, where the throughput is computed from the two inputs "|E|/|V|" and "R"
                        (|E|/|V| = graph.num_edges/graph.num_vertices and R = graph.num_replicas/graph.num_vertices).
   
TIPS: You can follow the Neural Network Training part to generate the C++ function that predicts the throughput from the two inputs.
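As a concrete illustration, the snippet below sketches how the two inputs could be computed and passed on. predict_async_thro is a hypothetical stand-in for the generated function (here a placeholder linear model), and in a real application the counts come from the loaded distributed graph rather than being passed in directly.

```cpp
#include <cstddef>

// Hypothetical stand-in for the generated predictor: a placeholder linear
// model mapping |E|/|V| and R to an asynchronous throughput.
double predict_async_thro(double ev_ratio, double rep_factor) {
  return 1000.0 + 50.0 * ev_ratio - 200.0 * rep_factor;
}

// Compute the two inputs from the graph's counts and return the throughput
// to pass to graph.set_async_thro(). In a real application the counts come
// from graph.num_edges(), graph.num_vertices(), and graph.num_replicas().
double estimate_async_thro(size_t num_edges, size_t num_vertices,
                           size_t num_replicas) {
  double ev_ratio   = double(num_edges) / num_vertices;     // |E|/|V|
  double rep_factor = double(num_replicas) / num_vertices;  // R
  return predict_async_thro(ev_ratio, rep_factor);
}

// usage sketch (after loading the graph):
//   graph.set_async_thro(estimate_async_thro(graph.num_edges(),
//       graph.num_vertices(), graph.num_replicas()));
```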

Usage#3: Running Loopy BP Algorithm on synthetic image data using PowerSwitch engine.

  $mpiexec -f ~/mpd.hosts -n 48 ./lbp_structured_prediction --prior /path/to/synth_vdata_ --graph /path/to/synth_edata_ 
  --output /path/to/output --graph_opts ingress=grid --engine msync --engine_opts s_auto=true,a_tasknum=5,t_sample=1000

   TIPS: The input data can be produced by following GraphLab's documentation page on the Loopy BP algorithm.

Usage#4: Running SSSP on directed weighted graph using PowerSwitch engine.
  $mpiexec -f ~/mpd.hosts -n 12 ./sssp_x --graph_opts ingress=grid --graph /path/to/weight-ca-road-dataset 
  --engine msync --engine_opts s_auto=true,start_async=true,rate_AvsS=0.5

   NOTE:
     --engine_opts:
          start_async  : whether to start in asynchronous mode. The sssp_x program is implemented the same as
                         the original sssp program, but loads a directed weighted graph as input
                         (line format: source target weight).


Neural Network Training
You should insert the following code into your application to load the training graphs produced by the PowerSwitch system (a sample is provided in the pagerank_x.cpp application under the toolkits/graph_analytics directory of the source code). Alternatively, you can provide your own training graphs, e.g., subgraphs of your real input data.

  /** NOTE:
   *  The default vertex number of training graphs is 200,000, 
   *  which may be adjusted to ensure that "piped-thread x #machine > vertex number".
   *  (piped-thread in PowerSwitch is 3000 currently.)
   *  You can pass the vertex number as the second parameter when loading,
   *  e.g., graph.load_synthetic_powerlaw(powerlaw_alpha, vertex_number);
   */

  double powerlaw_alpha = 0;
  clopts.attach_option("powerlaw_alpha", powerlaw_alpha,
                       "Alpha for generating a synthetic powerlaw out-degree graph.");
  size_t randomdegree = 0;
  clopts.attach_option("random", randomdegree,
                       "Generate a random graph with the parameter as the average degree.");
					 
  // make a synthetic graph
  if(powerlaw_alpha > 0) { 
    dc.cout() << "Loading synthetic Powerlaw graph. alpha " << powerlaw_alpha << std::endl;
    graph.load_synthetic_powerlaw(powerlaw_alpha);
  }
  // make a random graph
  else if(randomdegree > 0) {
    dc.cout() << "Loading Random graph. #e/#v " << randomdegree << "." << std::endl;
    graph.load_random_graph(randomdegree);
  }


To produce the training data for the neural network, you can refer to the training script as an example. Note that the training computation should be executed in a stable network environment.
We suggest using the Neural Network Toolbox in MATLAB to build the neural network.
You can run the following command in MATLAB to start the toolbox with a friendly UI:
	>>nnstart
Then choose the "Fitting Tool" and follow the guide. You can generate a C++ interface in MATLAB from the script or MATLAB object produced by the Neural Network Toolbox. The generated function receives two parameters as input and returns a throughput as output.
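For illustration only, such a generated function has roughly the shape sketched below: a small feedforward network with one tanh hidden layer mapping |E|/|V| and R to a predicted throughput. The weights, sizes, and function name here are placeholders, not the output of any actual training run; the code MATLAB generates will differ in detail.

```cpp
#include <cmath>

// Illustrative shape of a generated throughput function: a 2-2-1
// feedforward network with a tanh hidden layer and a linear output layer.
// All weights below are made-up placeholders.
double predict_throughput(double ev_ratio, double rep_factor) {
  const double w1[2][2] = {{0.8, -0.3}, {-0.5, 0.6}};  // input -> hidden
  const double b1[2]    = {0.1, -0.2};                 // hidden biases
  const double w2[2]    = {1200.0, 900.0};             // hidden -> output
  const double b2       = 1500.0;                      // output bias

  double out = b2;
  for (int i = 0; i < 2; ++i) {
    double h = std::tanh(w1[i][0] * ev_ratio + w1[i][1] * rep_factor + b1[i]);
    out += w2[i] * h;  // linear output layer
  }
  return out;  // predicted asynchronous throughput
}
```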


People


Contact Information
Please contact Chenning Xie for further information. Bug reports and suggested improvements are welcome, though we may not be able to provide timely technical support. If you make a correction or improvement to PowerSwitch, we would appreciate receiving a patch that we can include in the next release.


Related Projects
PowerLyra: differentiated parallel graph computation and partitioning on skewed graphs.
Polymer: NUMA-aware graph-structured analytics framework.


External Link


