The inspiration for WANdisco Fusion

Roughly two years ago, we sat down to start work on a project that finally came to fruition this week.

At that meeting, we had set ourselves the challenge of redefining the storage landscape. We wanted to map out a world where there was complete shared storage, but where the landscape remained entirely heterogeneous.

Why? Because we’d witnessed the beginnings of a trend that has only grown more pronounced with the passage of time.

From the moment we started engaging with customers, we were struck by the extreme diversity of their storage environments. Regardless of whether we were dealing with a bank, a hospital or utility provider, different types of storage had been introduced across every organization for a variety of use cases.

In time, however, these same companies wanted to start integrating their different silos of data, whether to run real-time analytics or to gain a full 360-degree view of performance. Yet preserving diversity across the data center was critical, given that each storage type has its own strengths.

They didn’t care about uniformity. They cared about performance, and that meant being able to have the best of both worlds. Delivering this became the Holy Grail – at least in the world of data centers.

This isn’t quite the Gordian Knot, but it is certainly a difficult, complex problem – possibly one that could only be solved with DConE, our core patented IP.

Then we had a breakthrough.

Months later, I’m proud to formally release WANdisco Fusion (WD Fusion), the only product that enables active-active synchronization of different storage systems at WAN scale.

What does this mean in practice? It means you can use Hadoop distributions like Hortonworks, Cloudera or Pivotal for general compute, Oracle BDA for fast compute, and EMC Isilon for dense storage. You could even use a complete variety of Hadoop distros and versions. Whatever your set-up, with WD Fusion you can leverage new and existing storage assets immediately.

With it, Hadoop is transformed from something that runs within a single data center into an elastic platform that runs across multiple data centers throughout the world. WD Fusion allows you to update your storage infrastructure one data center at a time, without impacting application availability or having to copy vast swathes of data once the update is done.

When we were developing WD Fusion we agreed upon two things. First, we couldn’t produce anything that made changes to the underlying storage system – this had to behave like a client application. Second, anything we created had to enable a complete single global name-space across an entire storage infrastructure.

With WD Fusion, we allow businesses to bring together different storage systems by leveraging our existing intellectual property – the same Paxos-powered algorithm behind Non-Stop Hadoop, Subversion Multisite and Git Multisite – without making any changes to the platform you’re using.

Another way of putting it is we’ve managed to spread our secret sauce even further.

We have some of the best computer scientists in the world working at WANdisco, but I’m confident that this is the most revolutionary project any of us have ever worked on.

I’m delighted to be unveiling WD Fusion. It’s a testament to the talent and character of our firm, the result of looking at an impossible scenario and saying: “Challenge accepted.”

Why Data Driven Companies Rely on WANdisco Fusion

Hadoop is now clearly gaining momentum. We are seeing more and more customers attempting to deploy enterprise-grade applications. Data protection, governance, performance and availability are top concerns. WANdisco Fusion’s level of resiliency is enabling customers to move out of the lab and into production much faster.

As companies start to scale these platforms and begin the journey to becoming data driven, they are completely focused on business value and return on investment. WANdisco’s ability to optimize resource utilization by eliminating the need for standby servers resonates well with our partners and customers. These companies are not Google or Facebook. They don’t have an endless supply of hardware and their core business isn’t delivering technology.

As these companies add data from more sources to Hadoop, they are implementing backup and disaster recovery plans and deploying multiple clusters for redundancy. One of our customers, a large bank, is beginning to utilize the cloud for DR.

I’ve met 11 new customers in the past eight days. Five of them have architected cloud into their data lake strategy and are evaluating the players. They are looking to run large data sets in the cloud for efficiency as well as backup and DR.

One of those customers, a leader in IT security, tells me they plan to move their entire infrastructure to the cloud within the next 12 months. They already have 200 nodes in production today, which they expect to double in a year.

Many of our partners are interested in how they can make it easy to onboard data from behind the firewall to the cloud while delivering the best performance. They recognize this is fundamental to a successful cloud strategy.

Companies are already embarking on migrations from one Hadoop platform to another. We’re working with customers on migration from MapR to HDP, CDH to HDP, CDH to Oracle BDA, and because we are HCFS compatible, GPFS to IOP. Some of these are petabyte scale.

For many of these companies, WANdisco Fusion’s ability to eliminate downtime, data loss and business disruption is a prerequisite to making that transition. Migration has never been undertaken lightly. I’ve spoken to partners who are unable to migrate their customers due to the required amount of downtime and risk involved.

One customer I met recently completed a large migration to HDP and just last week acquired a company that has a large cluster on Cloudera. We’re talking to them about how we can easily provide a single consistent view of the data. This will allow them to get immediate value from the data they have just acquired. If they choose to migrate completely, they are in control of the timing.

Customers measure their success by time to value. We’re working closely with our strategic partners to ensure our customers don’t have to worry about the nuts and bolts – irrespective of distribution, and whether the environment is on-prem, cloud, or hybrid – so they can concentrate on the business outcome.

Please reach out to me if these use cases resonate and you would like to learn more.

Peter Scott
SVP Business Development

What’s New in Subversion 1.9

This is a summary of our webinar titled What’s New in Subversion 1.9 (click to replay).

New options have been added to svn auth, copy, merge, blame, cleanup, and info. Compatibility with 1.8 working copies is guaranteed, so a 1.9 client can be used alongside 1.8.
The release also addresses a problem where Subversion did not scale once the number of locks (used to keep others from committing while a commit is in progress) grew into the hundreds. One fix applies the HTTP pipelining already used for GET requests to lock operations; since this is a client-only change, upgrading the client to 1.9 is enough to see the benefit. To eliminate redundant server-side write overhead when many locks are taken, a multi-lock feature has been added to FSFS. In addition, the lock hooks (post-lock, post-unlock) can now act on multiple paths.

The latest Subversion 1.9 is now available for download


Register for our webinar, “What’s New in Subversion 1.9.”
Binaries of Subversion 1.9, tested by WANdisco, are available for download below.

DevOps is eating the world

You know a technology trend has become fully mainstream when you see it written up in the Wall Street Journal.  So it goes with DevOps, as this recent article shows.

DevOps and continuous delivery have been important trends in many firms for several years.  It’s all about building higher quality software products and delivering them more quickly.  For SaaS companies it’s an obvious fit as they sometimes push out minor changes many times a day.  But even companies with more traditional products can benefit.  And internal IT departments can use DevOps principles to start saying “yes” to business users more often.

For example, let’s say that your business analytics team asks for a small Hadoop cluster to try out some of the latest machine learning algorithms on Spark.  Saying “yes” to that request should only take hours, not weeks.  If you have a private cloud and the right level of automation, you can spin up a new Spark cluster in minutes.  Then you can work with the analysts to automate the deployment of their algorithms.  If they’re wildly successful and they need to move their new project to a production cluster it’s just a matter of deploying somewhere with more resources.

Of course, none of this comes easily.  On the operations side you’ll need to invest in the right configuration management and private cloud infrastructure.  Tools like Puppet, Ansible, and Docker can capture the configuration of servers and applications as code.

But equally important is the development infrastructure.  Companies like Google practice mainline development: all of their work is done from the trunk or mainline, supported by a massive continuous build and test infrastructure.  And Gerrit, a tool that Google sponsors, is perhaps the best code review tool for continuous delivery.

If you look at potential bottlenecks in a continuous delivery pipeline, you need to consider how code gets to the mainline, and then how it gets deployed.  With Gerrit there are only two steps to the mainline:

  • Commit the code.  Gerrit makes a de facto review branch on the fly and initiates a code review.
  • Approve the merge request.  Gerrit handles the merge automatically unless there’s a conflict.

With this system you don’t even need to ask a developer to open a pull request or create a private branch.  Gerrit just automates all of that.  And Gerrit will also invoke any continuous build and test automation to make sure that code is passing those tests before a human reviewer even looks at it.
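
A rough sketch of that two-step flow is below. The repository and branch names are illustrative, and a plain bare Git repository stands in for the Gerrit server; against a real Gerrit instance, the push to refs/for/master opens a review and kicks off CI instead of landing directly.

```shell
# Step 1: commit locally. Step 2: push to Gerrit's "magic" review namespace.
git init -q --bare gerrit.git                 # stand-in for the Gerrit remote
git clone -q "$PWD/gerrit.git" work
git -C work -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "Change under review"
# On a real Gerrit server this creates a review and triggers any CI checks:
git -C work push -q origin HEAD:refs/for/master
```

Because the review ref is created server-side on the fly, developers never have to manage private branches or open pull requests by hand.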

Once it’s on the mainline the rest of the automation kicks in, and those operational tools become important to help you rapidly spin up more realistic test environments.

As you can imagine, this type of infrastructure can put a heavy load on your development systems.  That’s why WANdisco has put the muscle of Git MultiSite behind Gerrit, giving you a horizontally scalable Gerrit infrastructure.

Latest Git binaries available for download

As part of our participation in the open source SCM community, WANdisco provides up-to-date binary downloads for Git and Subversion for all major platforms.  We now have the latest Git binaries available for download on our Git downloads site.

One interesting new feature is git push --atomic.  When you’re pushing several refs (e.g. branches) at once, this feature makes sure that either all the refs are accepted or none are.  That’s useful if you’re making related changes on several branches at once.  Those who merge patches onto several releases at once are often in this position.
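
A minimal local sketch of the behavior (assumes Git 2.4 or later; the bare repository and the release-fix branch name are illustrative stand-ins for a real remote and release branch):

```shell
git init -q --bare remote.git                 # stand-in for the shared remote
git clone -q "$PWD/remote.git" demo
git -C demo -c user.name=dev -c user.email=dev@example.com \
    commit -q --allow-empty -m "base change"
git -C demo branch release-fix                # second ref pointing at the same fix
# With --atomic, either both refs are accepted or neither is:
git -C demo push -q --atomic origin HEAD release-fix
```

Without --atomic, a failure on one ref (say, a rejected non-fast-forward) would still let the other ref through, leaving the remote in a mixed state.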

The Git community has done a great job of ensuring a stable upgrade process, so there’s generally little concern about upgrading.  It’s always a good idea to review the release notes of course.

Big Data Tech Infrastructure Market Share

The Data Science Association just published this infographic showing market share for a variety of different tools and technologies that form part of the Big Data ecosystem.  The data would’ve been more useful if it were grouped into categories, but here are a few observations:

  • Amazon is dominating the field for cloud infrastructure.  It’d be interesting to see how much of that is used for test and development versus serious production deployments.
  • Cloudera has more market share than vanilla Apache Hadoop, Hortonworks, or MapR.  It’ll be interesting to see how this picture evolves over time with the advent of the Open Data Platform.
  • Mesos has a surprising share of 14%.  At a recent Big Data event in Denver an audience survey showed that only one person out of 50 was even experimenting with Mesos.  Perhaps this survey is oriented more towards early adopters.

It’s always interesting to see these types of surveys as a complement to the analyst surveys from 451, Wikibon, and the like.

The 100 Day Progress Report on the ODP

This blog by Cheryle Custer, Director of Strategic Alliance Marketing at Hortonworks, has been republished with the author’s permission.

It was just a little over 100 days ago that 15 industry leaders in the Big Data space announced the formation of the Open Data Platform (ODP) initiative. We’d like to let you know what has been going on in that time, to bring you a preview of what you can expect in the next few months and let you know how you can become involved.

Some Background

What is the Open Data Platform Initiative?
The Open Data Platform Initiative (ODP) is a shared industry effort focused on simplifying adoption, promoting use, and advancing the state of Apache Hadoop® and Big Data technologies for the enterprise. It is a non-profit organization created by people who helped create Apache, Eclipse, Linux, OpenStack, OpenDaylight, Open Networking Foundation, OSGi, WS-I (Web Services Interoperability), UDDI, OASIS, Cloud Foundry Foundation and many others.

The organization relies on the governance of the Apache Software Foundation community to innovate and deliver the Apache project technologies included in the ODP core while using a ‘one member one vote’ philosophy where every member decides what’s on the roadmap. Over the next few weeks, we will be posting a number of blogs to describe in more detail how the organization is governed and how everyone can participate.

What is the Core?
The ODP Core provides a common set of open source technologies that currently includes: Apache Hadoop® (inclusive of HDFS, YARN, and MapReduce) and Apache® Ambari. ODP relies on the governance of the Apache Software Foundation community to innovate and deliver the Apache project technologies included in the ODP core. Once the ODP members and processes are well established, the scope of the ODP Core will expand to include other open source projects.

Benefits of the ODP Core
The ODP core is a set of open source Hadoop technologies designed to provide a standardized core that big data solution providers and software and hardware developers can use to deliver compatible solutions rooted in open source that unlock customer choice.

By delivering on a vision of “verify once, run anywhere”, everyone benefits:

  • For Apache Hadoop® technology vendors, reduced R&D costs that come from a shared qualification effort
  • For Big Data application solution providers, reduced R&D costs that come from more predictable and better qualified releases
  • Improved interoperability within the platform and simplified integration with existing systems in support of a broad set of use cases
  • Less friction and confusion for Enterprise customers and vendors
  • Ability to redirect resources towards higher value efforts

100 Day Progress Report

In the 100 days since the announcement, we’ve made some great progress:

Four Platforms Shipping
At Hadoop Summit in Brussels in April, we announced the availability of four Hadoop platforms, all based on a vision of a common ODP core: Infosys Information Platform, IBM Open Platform, Hortonworks Data Platform and Pivotal HD. The commercial delivery of ODP-based distributions across multiple industry-leading vendors immediately after the launch of the initiative demonstrates the momentum behind the ODP in accelerating the delivery of compatible Hadoop distributions and simplifying the ecosystem around an industry-standard core.

New Members and New Participation Levels
In addition to revealing that Telstra is one of the founding Platinum members of the ODP, we’ve added nine new members, including BMC, DataTorrent, PLDT, Squid Solutions, Syncsort, Unifi, zData and Zettaset. We welcome these new members and are looking forward to their participation and their announcements. We also announced a new membership level to provide an easy entrée for any company to participate in the ODP. The Silver level of membership allows companies to have a direct voice in the future of big data and to contribute people, tests, and code to accelerate executing on the vision.

Community Collaboration at the Bug Bash
ODP member Altiscale led the efforts on a Hadoop Community Bug Bash. This unique event for the Apache Hadoop community, along with co-sponsors Hortonworks, Huawei, Infosys, and Pivotal, drew over 150 participants from eight countries and nine time zones to strengthen Hadoop and honor the work of the community by reviewing and resolving software patches. Read more about the Bug Bash, where 186 issues were resolved, either with closure or with patches committed to code. Nice job everyone!  You can participate in upcoming bug bashes, so stay tuned.

Technical Working Group and the ASF
Senior engineers and architects from the ODP member companies have come together as a Technical Working Group (TWG). The goal of the TWG is to jump-start the work required to produce ODP core deliverables and to seed the technical community overseeing the future evolution of the ODP core. Delivering on the promise of “verify once, run anywhere,” the TWG is building certification guidelines for “compatibility” (for software running on top of ODP) and “compliance” (for ODP platforms). We have scheduled a second TWG face-to-face meeting at Hadoop Summit, where committers, PMC members and ASF members will meet to continue these discussions.

What’s Next?

Many of the member companies will be at Hadoop Summit in San Jose.

While you’re at Hadoop Summit, you can attend the IBM Meet Up and hear more about the ODP. Stay tuned to this blog as well – we’ll use this as a platform to inform you of new developments and provide you insight on how the ODP works.

Want to know more about the ODP? Here are a few reference documents.

Enterprise Hadoop Adoption: Half Empty or Half Full?

This blog by Shaun Connolly, Hortonworks VP of Corporate Strategy, has been republished with the author’s permission.

As we approach Hadoop Summit in San Jose next week, the debate continues over where Hadoop really is on its adoption curve. George Leopold from Datanami was one of the first to beat the hornet’s nest with his article entitled Gartner: Hadoop Adoption ‘Fairly Anemic’. Matt Asay from TechRepublic and Virginia Backaitis from CMSWire volleyed back with Hadoop Numbers Suggest the Best is Yet to Come and Gartner’s Dismal Predictions for Hadoop Could Be Wrong, respectively.

At the center of the controversy is the report published by Merv Adrian and Nick Heudecker from Gartner: Survey Analysis: Hadoop Adoption Drivers and Challenges. Specifically, the Gartner survey shows that 26% of respondents are deployed, piloting or experimenting; 11% plan to invest within 12 months; and an additional 7% plan to invest within 24 months.

Glass Half Empty or Half Full?

I believe the root of the controversy comes not in the data points stated above, but in the phrasing of one of the key findings statements: “Despite substantial hype and reported successes for early adopters, over half of respondents (54%) report no plans to invest at this time. Additionally, only 18% have plans to invest in Hadoop over the next two years.”

The statement is phrased in the negative sense, from a lack-of-adoption perspective. While not wrong, it represents a half-empty perspective that is more appropriate for analyzing mature markets such as the RDBMS market, which is hundreds of billions of dollars in size and decades into its adoption curve. Comparing today’s Hadoop market size and adoption to today’s RDBMS market is not particularly useful. However, comparing the RDBMS market at the point when it was five years into its adoption cycle might be an interesting exercise.

When talking about adoption for newer markets like Enterprise Hadoop, I prefer to frame my view using the classic technology adoption lifecycle, which models adoption across five categories with corresponding market share percentages: Innovators (2.5%), Early Adopters (13.5%), Early Majority (34%), Late Majority (34%), and Laggards (16%).

Putting the Gartner data into this context shows Hadoop in the Early Majority of the market at the classic inflection point of its adoption curve.
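
That placement can be sanity-checked with nothing more than the percentages quoted above (an illustrative back-of-the-envelope calculation, not part of the original analysis):

```shell
# Where does 26% adoption fall on the classic lifecycle?
# Innovators (2.5%) + Early Adopters (13.5%) end at 16% cumulative;
# the Early Majority band then runs from 16% to 50%.
awk 'BEGIN {
  early_adopter_end = 2.5 + 13.5                 # 16% cumulative
  early_majority_end = early_adopter_end + 34    # 50% cumulative
  adoption = 26   # Gartner: deployed, piloting or experimenting
  printf "early majority band: %.0f%%-%.0f%%\n", early_adopter_end, early_majority_end
  if (adoption > early_adopter_end && adoption <= early_majority_end)
    print "26% sits inside the Early Majority band"
}'
```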


As a publicly traded enterprise open source company, not only is Hortonworks’ code open, but our corporate performance and financials are open too. Earlier this month, we released Hortonworks’ first-quarter earnings. In Q4-2014 and Q1-2015, we added 99 and 105 new subscription customers respectively, which means we added over 46% of our 437 subscription customers in the past six months. If we look at the Fortune 100, 40% are Hortonworks subscribers, including 71% of F100 retailers, 75% of F100 telcos, and 43% of F100 banks.
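
Those figures are easy to verify from the numbers in the paragraph itself (illustrative arithmetic only):

```shell
# 99 (Q4-2014) + 105 (Q1-2015) new subscribers against 437 total customers.
awk 'BEGIN {
  added = 99 + 105
  total = 437
  printf "added %d of %d subscription customers = %.1f%%\n", added, total, 100 * added / total
}'
# Prints: added 204 of 437 subscription customers = 46.7%
```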


We see these statistics as clear indicators of the building momentum of Open Enterprise Hadoop and the powerful Hortonworks model for extending Hadoop adoption across all industries. I won’t hide the fact that I am guilty of having a Half Full approach to life. As a matter of fact, I proudly wear the t-shirt every chance I get. The Half Full mindset serves us well at Hortonworks, because we see the glass filling quickly. The numbers for the last two quarters show that momentum.

Come Feel the Momentum at Hadoop Summit on June 9th in San Jose!

If you’d like to see the Hadoop momentum for yourself, then come join us at Hadoop Summit in San Jose starting June 9th.

Geoffrey Moore, author of Crossing the Chasm, will be a repeat keynote presenter this year. At Hadoop Summit 2012, he laid out a technology adoption roadmap for Big Data from the point of view of technology providers. Join Geoff as he updates that roadmap with a specific focus on business customers and the buying decisions they face in 2015.

Mike Gualtieri, Principal Analyst at Forrester Research, will also be presenting. Join Mike for his keynote entitled Adoption is the Only Option—Five Ways Hadoop is Changing the World and Two Ways It Will Change Yours.

In addition to keynote speakers, Summit will host more than 160 sessions being delivered by end user organizations, such as Aetna, Ernst & Young, Facebook, Google, LinkedIn, Mercy, Microsoft, Noble Energy, Verizon, Walt Disney, and Yahoo!, so you can get the story directly from the elephant’s mouth.

San Jose Summit 2015 promises to be an informational, innovative and entertaining experience for everyone.

Come join us. Experience the momentum for yourself.

Configuring multiple zones in Hadoop

Hortonworks, a WANdisco partner and another member of the Open Data Platform, recently published a list of best practices for Hadoop infrastructure management.  One of the top recommendations is configuring multiple zones in Hadoop.  Having development, test, and production environments gives you a safe way to test upgrades and new applications without disturbing a production system.

One of the challenges with creating multiple similar zones is sharing data between them.  Whether you’re testing backup procedures and application functionality, or prototyping a new data analysis algorithm, you need to see similar data in all the zones.  Otherwise you’re not really testing in a production-like environment.

But in a large cluster, transferring terabytes of data between zones can be time-consuming, and it’s tough to tell how stale the data really is.  That’s where WANdisco Fusion becomes an essential part of your operational toolkit.  WANdisco Fusion provides active-active data replication between Hadoop clusters.  You can use it to effectively share part of your Hadoop data between dev/test/prod zones in real time.  All of the zones can make full use of the data, although you can of course use your normal access control system to prevent updates from certain zones.

DevOps principles are coming to Hadoop, so contact one of our solutions architects today to see how WANdisco Fusion can help you maintain multiple zones in your Hadoop deployment.

Different views on Big Data momentum

I was struck recently by two different perspectives on Big Data momentum.  Computing Research just published their 2015 Big Data Review in which they found continued momentum for Big Data projects.  A significantly higher number of their survey respondents in 2015 are using Big Data projects for operational results.  In a contrasting view, Gartner found that only 26% of the respondents were running or even experimenting with Hadoop.

If you dig a little deeper into the Computing study, you’ll see that it’s speaking about a wider range of Big Data options than just Hadoop.  The study mentions that 29% of the respondents are at least considering using Hadoop specifically, up from 15% last year.  So the two studies are closer than they look at first glance, yet the tone is strikingly different.

One possible explanation is that the Big Data movement is much bigger than Hadoop and it’s easier to be optimistic about a movement than a technology.  But even so, I’d tend towards the optimistic view of Hadoop.  If you look at the other technologies being considered for Big Data, analytics tools and databases (including NoSQL databases) are driving tremendous interest, with over 40% of the Computing Research participants evaluating new options.  And the Hadoop community has done a tremendous amount of work to turn Hadoop into a general purpose Big Data platform.

You don’t have to look very far for examples.  Apache Spark is now bundled in mainstream distributions to provide fast in-memory processing, while Pivotal (a member of the Open Data Platform along with WANdisco) has contributed Greenplum and HAWQ to the open source effort.

To sum up, the need for ‘Big Data’ is not in dispute, but the technology platforms that underpin Big Data are evolving rapidly.  Hadoop’s open nature and evolution from a processing framework to a platform are points in its favor.