Monthly Archive for July, 2014

Distributed Code Review

As I’ve written about previously, one of the compelling reasons to look at Git as an enterprise SCM system is the great workflow innovation in the Git community. Workflows like Git Flow have taken best practices like short-lived task branches and made them not only palatable but downright convenient. Likewise, the role of workflow tools like Gerrit should not be discounted. They’ve turned mandatory code review from an annoyance into a feature that developers can’t live without (although we call it social coding now).

But as any tool skeptic will tell you, you should hesitate before building your development process too heavily around these tools. You risk locking yourself into the way a tool works, and the extra data stored in these tools is not very portable.

The data stored in Git is very portable, of course. A developer can clone a repository, maintain a fork, and still reasonably exchange data with other developers. Git has truly broken the bond between code and a central SCM service.

As fans of social coding will tell you, however, the conversation is often just as important as the code. The code review data holds a rich history of why a change was rejected, accepted, or resubmitted. In addition, these tools often serve as the gatekeeper’s tools: if your pull request is rejected, your code isn’t merged.

Consider what happens if you decide you need to switch from one code review tool to another. All of your code review metadata is likely stored in a custom schema in a relational database. Moving, say, from Gerrit to GitLab would be a significant data migration effort – or you just accept the fact that you’ll lose all of the code review information you’ve stored in Gerrit.

For this reason, I was really happy to hear about the distributed code review system now offered in SmartGit. Essentially SmartGit is using Git to store all of the code review metadata, making it as portable as the code itself. When you clone the repository, you get all of the code review information too. They charge a very modest fee for the GUI tools they’ve layered on top, but you can always take the code review metadata with you, and they’ve published the schema so you can make sense of it. Although I’ve only used it lightly myself, this system breaks the chain between my Git repo and the particular tool that my company uses for repository management and access control.

I know distributed bug trackers fizzled out a couple of years ago, but I’m very happy to see Syntevo keep the social coding conversation in the same place as the code.

Git MultiSite Cluster Performance

A common misconception about Git is that having a distributed version control system automatically immunizes you from performance problems. The reality isn’t quite so rosy. As you’ll hear quite often if you read about tools like Gerrit, busy development sites make a heavy investment to cope with the concurrent demands on a Git server posed by developers and build automation.

Here’s where Git MultiSite comes into the picture. Git MultiSite is known for providing a seamless HA/DR solution and excellent performance at remote sites, but it’s also a great way to increase elastic scalability within a single data center by adding more Git MultiSite nodes to cope with increased load. Since read operations (clones and pulls) are local to a single node and write operations (pushes) are coordinated, with the bulk of the data transfer happening asynchronously, Git MultiSite lets you scale out horizontally. You don’t have to invest in extremely high-end hardware or worry about managing and securing Git mirrors.

So how much does Git MultiSite help? Ultimately that depends on your particular environment and usage patterns, but I ran a little test to illustrate some of the benefits even when running in a fairly undemanding environment.

I set up two test environments in Amazon EC2. Both environments used a single instance to run the Git client operations. The first environment used a regular Git server with a new empty repository accessed over SSH. The second environment instead used three Git MultiSite nodes.  All servers were m1.large instances.

The test ran a series of concurrent clone, pull, and push operations for an hour. The split between read and write operations was roughly 7:1, a fairly typical ratio in an environment where developers pull regularly and push periodically, and automated processes clone and pull frequently. I used both small (1 KB) and large (10 MB) commits while pushing.
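A load schedule with that read:write split can be sketched as follows. This is an illustrative Python snippet, not the actual test harness; the function name and weights are my own:

```python
import random

def make_operation_mix(total_ops, read_weight=7, write_weight=1, seed=42):
    """Generate a shuffled schedule of Git operations with a ~7:1
    read:write ratio, mirroring the test's mix of clones/pulls vs pushes."""
    rng = random.Random(seed)
    ops = []
    for _ in range(total_ops):
        # Reads (clone/pull) are 7x as likely as writes (push).
        if rng.random() < read_weight / (read_weight + write_weight):
            ops.append(rng.choice(["clone", "pull"]))
        else:
            # The payload size (1 KB vs 10 MB commits) would be chosen
            # by the caller when executing each push.
            ops.append("push")
    return ops

mix = make_operation_mix(8000)
reads = sum(op in ("clone", "pull") for op in mix)
writes = mix.count("push")
print(reads / writes)  # roughly 7
```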

What did I find?

Git MultiSite gives you more throughput

Git MultiSite processed more operations in an hour. There were no dropped operations, so the servers were not under unusual stress.

[Chart: throughput]

Better Performance

Git MultiSite provided significantly better performance, particularly for reads. That makes a big difference for developer productivity.

[Chart: performance]

More Consistent Performance

Git MultiSite provides a more consistent processing rate.

[Chart: processing rate]

You won’t hit any performance cliffs as the load increases.

[Chart: pull performance under load]

Try it yourself

We perform regular performance testing during evaluations of Git MultiSite. How much speed do you need?

Big Data ETL Across Multiple Data Centers

Scientific applications, weather forecasting, click-stream analysis, web crawling, and social networking applications often have several distributed data sources, i.e., big data is collected in separate data center locations or even across the Internet.

In these cases, the most efficient architecture for running extract, transform, load (ETL) jobs over the entire data set becomes nontrivial.

Hadoop provides the Hadoop Distributed File System (HDFS) for storage and, in Hadoop 2.0, YARN (Yet Another Resource Negotiator) for resource management and job scheduling. ETL jobs use the MapReduce programming model, which runs as an application on the YARN framework.

Though these are adequate for a single data center, there is a clear need to enhance them for multi-data center environments. In these instances, it is important to provide active-active redundancy for YARN and HDFS across data centers. Here’s why:

1. Bringing compute to data

Hadoop’s architectural advantage lies in bringing compute to data. Providing active-active (global) YARN accomplishes that on top of global HDFS across data centers.

2. Minimizing traffic on a WAN link

There are three types of data analytics schemes:

a) High-throughput analytics where the output data of a MapReduce job is small compared to the input.

Examples include weblogs, word count, etc.

b) Zero-throughput analytics where the output data of a MapReduce job is equal to the input. A sort operation is a good example of a job of this type.

c) Balloon-throughput analytics where the output is much larger than the input.

For high-throughput analytics, local YARN can crunch the data in each data center and use global HDFS to redistribute the much smaller results. Keep in mind that this might require another MapReduce job to run on the combined output, however, which can add traffic to the WAN link. Global YARN mitigates this further by distributing the computational load itself across data centers.
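Word count is the canonical high-throughput case: the output (one count per distinct word) is far smaller than the input. A minimal single-process sketch of the map and reduce phases (plain Python, not the Hadoop MapReduce API):

```python
from collections import Counter
from itertools import chain

def map_phase(document):
    # Map: emit a (word, 1) pair for each word in the input split.
    return [(w.lower(), 1) for w in document.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts for each word.
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Three "input splits" standing in for blocks of a large input file.
splits = ["the quick brown fox", "the lazy dog", "the fox again"]
pairs = list(chain.from_iterable(map_phase(s) for s in splits))
result = reduce_phase(pairs)
print(result["the"])  # 3
```

The output dictionary has one entry per distinct word, so on real web logs it would be a tiny fraction of the input size, which is exactly why this class of job tolerates WAN redistribution well.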

Last but not least, fault tolerance is required at the server, rack, and data center levels. Passive redundancy solutions can mean days of downtime before processing resumes. Active-active redundant YARN and HDFS provide a zero-downtime solution for both MapReduce jobs and data.

To summarize, it is imperative for mission-critical applications to have active-active redundancy for HDFS and YARN. Not only does this protect data and prevent downtime, it also allows big data to be processed at an accelerated rate by taking advantage of the aggregated CPU, network, and storage of all servers across data centers.

– Gurumurthy Yeleswarapu, Director of Engineering, WANdisco

More efficient cluster utilization with Non-Stop Hadoop

Perhaps the most overlooked capability of WANdisco’s Non-Stop Hadoop is its efficient cluster utilization in secondary data centers. These secondary clusters are often used only for backup purposes, which is a waste of valuable computing resources. Non-Stop Hadoop allows you to take full advantage of the CPU and storage resources that you’ve paid for.

Of course, anyone who adopts Hadoop needs a backup strategy, and the typical solution is to put a backup cluster in a remote data center. The distcp tool, part of the core Hadoop distribution, is used to periodically transfer data from the primary cluster to the backup cluster. You can also run some read-only jobs on the backup cluster, as long as you don’t need immediate access to the latest data.

Still, that backup cluster is a big investment that isn’t being used fully. What if you could treat that backup cluster as a part of your unified Hadoop environment, and use it fully for any processing?  That would give you a better return on that backup cluster investment, and let you shift some load off of the primary cluster, perhaps reducing the need for additional primary cluster capacity.

That’s exactly what Non-Stop Hadoop provides: you can treat several Hadoop clusters as part of a single unified Hadoop file system. All of the important data is replicated efficiently by Non-Stop Hadoop, including the NameNode metadata and the actual data blocks. You can write data into any of the clusters, knowing that the metadata will be kept in sync by Non-Stop Hadoop and that the actual data will be transferred seamlessly (and much faster than with a tool like distcp).

As a simple example, recently I was ingesting two streams of data into a Hadoop cluster. Each ingest job handled roughly the same amount of data. The two jobs combined took up about 28 seconds of cluster CPU time during each run and consumed roughly 500MB of cluster RAM during operation.

Then I decided to run each job separately on two clusters that are part of a single Non-Stop Hadoop deployment. In this case, again running both jobs at the same time, I took up 15 seconds on the first cluster and 18 seconds on the second cluster, using about 250MB of RAM on each.

The exact numbers will vary depending on the job and what else is running on the cluster, but in this simple example I’ve accomplished three very useful things:

  • I’ve gotten useful work out of a second cluster that would otherwise be idle.
  • I’ve shifted half of the processing burden off of the first cluster. (It also helps to have six NameNodes instead of two to handle the concurrent writes.)
  • I don’t have to run the distcp job to transfer this data to a backup site – it’s already on both clusters. Not only am I getting more useful work out of my second cluster, I’m avoiding unnecessary overhead work.
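The load-sharing arithmetic behind those points can be laid out in a few lines, using the measured numbers from the runs above (treat them as illustrative, not benchmarks):

```python
# Combined run: both ingest jobs on a single cluster.
combined_cpu_seconds = 28
combined_ram_mb = 500

# Split run: one job per cluster in the Non-Stop Hadoop deployment.
cluster_cpu_seconds = [15, 18]
cluster_ram_mb = [250, 250]

# The busiest cluster now carries about 64% of the original CPU work,
# and each cluster needs only half the RAM.
busiest_share = max(cluster_cpu_seconds) / combined_cpu_seconds
ram_share = max(cluster_ram_mb) / combined_ram_mb
print(round(busiest_share, 2), ram_share)  # 0.64 0.5
```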

So there you have it – Non-Stop Hadoop is the perfect way to get more bang for your backup cluster buck. Want to know more? We’ll be happy to discuss in more detail.

Talking R in an Excel World

I just finished reading a good summary of 10 powerful and free or low-cost analytics tools. Of the 10 items mentioned, two are probably common in corporate environments (Excel and Tableau) while the other eight are more specialized. I wonder how successful anyone is at introducing a new specialized analytics tool into an environment where Excel is the lingua franca?

I’ve personally run into this situation a few times. Recently I ran a Monte Carlo simulation in R to generate a simple P&L forecast for a business proposal. I followed good practice by putting my code into R Markdown and generating a PDF with the code, assumptions, variables, and output. I then had to share some of the conclusions with a colleague who was building pricing models in a spreadsheet, however, and knowing that introducing R on short notice wouldn’t go over well, I just copied some of the key results into Excel and sent it off.
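The simulation itself is simple to sketch. The original model was in R; the version below is a minimal Python analogue, and every distribution and parameter in it is hypothetical, chosen only to show the shape of a Monte Carlo P&L forecast:

```python
import random
import statistics

def simulate_pnl(n_trials=10_000, seed=7):
    """Monte Carlo P&L sketch: draw revenue and cost from assumed
    distributions and return the simulated profit samples.
    All parameters are hypothetical, not from the original model."""
    rng = random.Random(seed)
    profits = []
    for _ in range(n_trials):
        revenue = rng.gauss(mu=120_000, sigma=15_000)  # assumed revenue distribution
        cost = rng.gauss(mu=90_000, sigma=8_000)       # assumed cost distribution
        profits.append(revenue - cost)
    return profits

profits = simulate_pnl()
print(round(statistics.mean(profits)))         # near the expected 30,000 margin
print(statistics.quantiles(profits, n=20)[0])  # ~5th percentile, a downside estimate
```

In R Markdown the equivalent few lines, plus a histogram, make a tidy self-documenting report; the pain point is only at the hand-off to Excel.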

Similarly, a few months ago I was working on a group project to analyze some public data.  I used R to understand the data and perform the analysis, but then had to reproduce the final analysis in Excel to share it with the team.  That seems wasteful, but I couldn’t see another way to do it.  It would have taken me quite a long time to do all the exploratory work in Excel (I’ve not yet figured out how to create several charts in a loop in Excel) and Excel just doesn’t have the same type of tools (like principal components for dimension reduction).

R does have some capabilities to talk to Excel, but these don’t seem particularly easy to use, and the advice is typically to use CSV as an interchange format, which has obvious limitations, including the loss of formulas and formatting.

So I’m stumped.  As long as Excel remains a standard interchange format I guess I’ll just have to do some manual data translation.  Has anyone solved this problem in a more elegant way?