Monthly Archive for August, 2014

Sample datasets for Big Data experimentation

Another week, another gem from the Data Science Association. If you’re trying to prototype a data analysis algorithm, benchmark performance on a new platform like Spark, or just play around with a new tool, you’re going to need reliable sample data.

As anyone familiar with testing knows, good data can be tough to find. Although there’s plenty of data in the public domain, most of it is not ready to use. A few months ago, I downloaded some data sets from a US government site and it took a few hours of cleaning before I had the data in shape for analysis.

Behold: https://github.com/datasets. Here the Frictionless Data Project has compiled a set of easily accessible, well-documented data sets. The specific data may not be of much interest, but these are great for trials and experimentation. For example, if you want to analyze time series financial data, there’s a CSV file with updated S&P 500 data.
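To give a feel for how quickly you can get going with one of these files, here’s a small Python sketch that parses a CSV time series and computes a summary statistic. The column names (Date, SP500) and the inline sample rows are illustrative assumptions; check the dataset’s own README for the real schema.

```python
# Parse a small CSV time series and compute a summary statistic.
# The column names (Date, SP500) and sample values are assumptions,
# not taken from the actual dataset.
import csv
import io

sample = """Date,SP500
2014-05-01,1883.95
2014-06-01,1923.57
2014-07-01,1960.23
2014-08-01,1961.53
"""

rows = list(csv.DictReader(io.StringIO(sample)))
values = [float(r["SP500"]) for r in rows]

# Mean closing value over the sample period.
mean_close = sum(values) / len(values)
print(round(mean_close, 2))
```

Swapping the inline sample for the downloaded file is a one-line change with csv.DictReader, which is exactly the point: no hours of cleanup first.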

Well worth a look!

SmartSVN 8.6 Available Now

We’re pleased to announce the release of SmartSVN 8.6, the popular graphical Subversion (SVN) client for Mac, Windows, and Linux. SmartSVN 8.6 is available immediately for download from our website.

New Features include:

  • Bug reports can now optionally be uploaded directly to WANdisco from within SmartSVN
  • Improved handling of svn:global-ignores inherited property
  • Windows SASL authentication support added and required DLLs provided

Fixes include:

  • Internal error when selecting a file in the file table
  • Possible internal error in repository browser related to file externals
  • Potentially incorrect rendering of directory tree in Linux

For a full list of all improvements and features, please see the changelog.

Contribute to further enhancements

Many issues resolved in this release were raised via our dedicated SmartSVN forum, so if you’ve got an issue or a request for a new feature, head there and let us know.

Get Started

Haven’t yet started using SmartSVN? Get a free trial of SmartSVN Professional now.

If you have SmartSVN and need to update to SmartSVN 8, you can update directly within the application. Read the Knowledgebase article for further information.

Experiences with R and Big Data

The next releases of Subversion MultiSite Plus and Git MultiSite will embed Apache Flume for audit event collection and transmission. We’re taking an incremental approach to audit event collection and analysis, as the throughput at a busy site could generate a lot of data.

In the meantime, I’ve been experimenting with some more advanced and customized analysis. I’ve got a test system instrumented with a custom Flume configuration that pipes data into HBase instead of our Access Control Plus product. The question then is how to pull useful answers out of HBase for questions like: what’s the distribution of SCM activity between the nodes in the system?

It’s actually not too bad to get that information directly from an HBase scan, but I also wanted to see some pretty charts. Naturally I turned to R, which led me again to the topic of how to use R to analyze Big Data.
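To make that concrete, the per-node aggregation itself is straightforward once rows come out of a scan. Here’s a hedged Python sketch; the row structure and column names (event:node, event:result) are assumptions for illustration, not our actual HBase schema.

```python
# Count SCM authorization events per node from HBase-style scan output.
# Each row is assumed to be a (row_key, {column: value}) pair with an
# "event:node" column; this layout is illustrative, not a real schema.
from collections import Counter

def activity_by_node(rows):
    """Aggregate event counts per node from (key, columns) pairs."""
    counts = Counter()
    for _key, cols in rows:
        counts[cols["event:node"]] += 1
    return dict(counts)

sample = [
    ("evt1", {"event:node": "node-a", "event:result": "success"}),
    ("evt2", {"event:node": "node-b", "event:result": "failure"}),
    ("evt3", {"event:node": "node-a", "event:result": "success"}),
]
print(activity_by_node(sample))  # {'node-a': 2, 'node-b': 1}
```

Against a real table you’d feed this the output of a scan (via a Thrift client, say) rather than an inline sample; the charting is where R earns its keep.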

A quick survey showed three possible approaches:

  • The RHadoop packages provided by Revolution Analytics, which include RHBase and Rmr (R MapReduce)
  • The SparkR package
  • The Pivotal package that lets you analyze data in Hawq

I’m not using Pivotal’s distribution and I didn’t want to invest time in looking at a MapReduce-style analysis, so that left me with RHBase and SparkR.

Both packages were reasonably easy to install as these things go, and RHBase let me directly perform a table scan and crunch the output data set. I was a bit worried about what would happen once a table scan started returning millions of rows instead of thousands, so I wanted to try SparkR as well.

SparkR let me define a data source (in this case an export from HBase) and then run a functional reduce on it. In the first step I would produce some metric of interest (AuthZ success/failure for some combination of repository and node location) for each input line, and then reduce by key to get aggregate statistics. Nothing fancy, but Spark can handle a lot more data than R on a single workstation. The Spark programming paradigm fits nicely into R; it didn’t feel nearly as foreign as writing MapReduce or HBase scans. Of course, Spark is also considerably faster than normal MapReduce.

Here’s a small code snippet for illustration:

lines <- textFile(sc, "/home/vagrant/hbase-out/authz_event.csv")

# Derive a (key, metric) pair from each CSV line. The field layout here
# (repository, node, result) is illustrative.
mlines <- lapply(lines, function(line) {
    fields <- strsplit(line, ",")[[1]]
    key <- paste(fields[1], fields[2], sep = ":")
    metric <- as.numeric(fields[3])
    list(key, metric)
})

parts <- reduceByKey(mlines, "+", 2L)
reduced <- collect(parts)


In reality, I might use SparkR in a lambda architecture as part of my serving layer and RHBase as part of the speed layer.

It already feels like these extra packages are making Big Data very accessible to the tools that data scientists use, and given that data analysis is driving a lot of the business use cases for Hadoop, I’m sure we’ll see more innovation in this area soon.

Subversion Vulnerability in Serf

The Apache Subversion team have recently published details of two vulnerabilities in the Serf RA layer.

Firstly, vulnerable versions of the Serf RA layer accept certificates that do not match the hostname the client is using to make the request. This is deemed a Medium risk vulnerability.

Additionally, affected versions of the Serf RA layer do not properly handle certificates with embedded NUL bytes in their Common Names or Subject Alternative Names. This is deemed a Low risk vulnerability.

Either of these issues, or a combination of both, could lead to a man-in-the-middle attack and allow viewing of encrypted data and unauthorised repository access.

A further vulnerability has also been identified in the way that Subversion indexes cached authentication credentials. An MD5 hash collision can be engineered such that cached credentials are leaked to a third party. This is deemed a Low risk vulnerability.

For more information on these issues please see the following links:
http://subversion.apache.org/security/CVE-2014-3522-advisory.txt
http://subversion.apache.org/security/CVE-2014-3528-advisory.txt
https://groups.google.com/forum/#!msg/serf-dev/NvgPoK6sFsc/_TR7Buxtba0J

The ra_serf vulnerability affects Subversion versions 1.4.0-1.7.17 and 1.8.0-1.8.9. The Serf library vulnerability affects Serf versions 0.2.0 through 1.3.6 inclusive. Finally, the credentials vulnerability affects Subversion versions 1.0.0-1.7.17 and 1.8.0-1.8.9.

If you are using any of the vulnerable versions mentioned above, we urge you to upgrade to the latest release, either 1.8.10 or 1.7.18. Both are available on our website at https://www.wandisco.com/subversion/download

Spark and Hadoop infrastructure

I just read another article about how Spark stands a good chance of supplanting MapReduce for many use cases. As an in-memory platform, Spark provides answers much faster than MapReduce, which must perform an awful lot of disk I/O to process data.

Yet MapReduce isn’t going away. Setting aside all of the legacy applications built on it, MapReduce can still handle much larger data sets: Spark’s practical limit today is in the terabytes, while MapReduce can handle petabytes.

There’s one interesting question I haven’t seen discussed, however. How do you manage the different hardware profiles for Spark and other execution engines? A general-purpose MapReduce cluster will likely balance I/O, CPU, and RAM, while a cluster tailored for Spark will emphasize RAM and CPU much more heavily than I/O throughput. (As one simple example, switching a Spark MLlib job to a cluster that allowed allocation of 12GB of RAM per executor cut the run time from 370 seconds to 14 seconds.)

From what I’ve heard, YARN’s resource manager doesn’t handle hybrid hardware profiles in the same cluster very well yet. It will tend to pick the ‘best’ data node available for a task, but that means it will tend to pick your big-memory, big-CPU nodes for everything, not just Spark jobs.

So what’s the answer? One possibility is to set up multiple clusters, each tailored for a different type of processing. They’ll have to share data, of course, which is where it gets complicated. The usual techniques for moving data between clusters (i.e. the tools built on distcp) are meant for scheduled synchronization – in other words, backups. Unless you’re willing to accept a delay before data is available to both clusters, and unless you’re extremely careful about which parts of the namespace each cluster effectively ‘owns’, you’re out of luck…unless you use Non-Stop Hadoop, that is.

Non-Stop Hadoop lets you treat two or more clusters as a unified HDFS namespace, even when the clusters are separated by a WAN. Each cluster can read and write simultaneously, using WANdisco’s active-active replication technology to keep the HDFS metadata in sync. In addition, Non-Stop Hadoop’s efficient block-level replication between clusters means data transfers much more quickly.

This means you can set up two clusters with different hardware profiles, running Spark jobs on one and traditional MapReduce jobs on the other, without any additional administrative overhead. Same data, different jobs, better results.

Interested? We’ve got a great demo ready and waiting.


SmartSVN 8.6 RC1 Available Now

We’re proud to announce the release of SmartSVN 8.6 RC1. SmartSVN is the cross-platform graphical client for Apache Subversion.

New features include:

  • File protocol authentication implemented, using system login as default
  • “Manage as Project” menu item added for unmanaged working copies
  • Navigation buttons added to notifications to show previous/next notification

Fixes include:

  • Opening a missing project could end up in an endless loop
  • Linux: Refresh might not have picked up changes if inotify limit was reached
  • Merge dialog “Merge From” label was cut off
  • Illegal character error while editing svn:mergeinfo
  • Context menu wasn’t available in commit message text area
  • Progress window for explorer integration context menu actions sometimes appeared behind shell window

For a full list of all improvements and features, please see the changelog.

Have your feedback included in a future version of SmartSVN

Many issues resolved in this release were raised via our dedicated SmartSVN forum, so if you’ve got an issue or a request for a new feature, head over there and let us know.

You can download Release Candidate 1 for SmartSVN 8.6 from our website.

Haven’t yet started with SmartSVN? Claim your free trial of SmartSVN Professional here.

Health care and Big Data

What’s the general impression of the public sector and health care? Stodgy. Buried in paperwork. Dull. Bureaucrats in cubicles and harried nurses walking around with reams of paper charts.

Behind the scenes, however, the health care industry and their counterparts in the public sector have been quietly launching a wave of technological innovation and gaining valuable insight along the way. Look no further than some of their ‘lessons learned’ articles, including this summary of a recent study in Health Affairs. The summary is well worth a read, as the lessons are broadly applicable no matter what industry you’re in:

  • Focus on acquiring the right data
  • Answer questions of general interest, not questions that show off the technology
  • Understand the data
  • Let the people who understand the data have access to it in any form

Accomplishing these goals, they found, required a broader and more sophisticated Hadoop infrastructure than they had anticipated.

Of course, this realization isn’t too much of a surprise here at WANdisco. One of the early adopters of Non-Stop Hadoop was UC Irvine Health, a leading research and clinical center in California. UC Irvine Health has been recognized as an innovation center, and is currently exploring new ways to use Big Data to improve the quality and indeed the entire philosophy of its care.

You may be thinking you’ve still got time to figure out a Big Data strategy. Real time analytics on Hadoop? Wiring multiple data centers into a logical data lake? Not quite ready for prime time? Before you prognosticate any further, give us a call. Even ‘stodgy’ industries are seeing Big Data’s disruption up close and personal.