Monthly Archive for February, 2013

WANdisco Non-Stop NameNode Removes Hadoop’s Single Point of Failure

We’re pleased to announce the release of the WANdisco Non-Stop NameNode, the only 100% uptime solution for Apache Hadoop. Built on our patented Non-Stop technology, it removes Hadoop’s NameNode as a single point of failure, delivering immediate and automatic failover and recovery whenever a server goes offline, without any downtime or data loss.

“This announcement demonstrates our commitment to enterprises looking to deploy Hadoop in their production environments today,” said David Richards, President and CEO of WANdisco. “If the NameNode is unavailable, the Hadoop cluster goes down. With other solutions, a single NameNode server actively supports client requests and complex procedures are required if a failure occurs. The Non-Stop NameNode eliminates those issues and also allows for planned maintenance without downtime. WANdisco provides 100% uptime with unmatched scalability and performance.”

Additional benefits of Non-Stop NameNode include:

  • Every NameNode server is active and supports simultaneous read and write requests.
  • All servers are continuously synchronized.
  • Automatic continuous hot backup.
  • Immediate and automatic recovery after planned or unplanned outages, without the need for administrator intervention.
  • Protection from “split-brain,” where the backup server becomes active before the active server is completely offline, which can result in data corruption.
  • Full support for HBase.
  • Works with Apache Hadoop 2.0 and CDH 4.1.

“Hadoop was not originally developed to support real-time, mission critical applications, and thus its inherent single point of failure was not a major issue of concern,” said Jeff Kelly, Big Data Analyst at Wikibon. “But as Hadoop gains mainstream adoption, traditional enterprises rightly are looking to Hadoop to support both batch analytics and mission critical apps. With WANdisco’s unique Non-Stop NameNode approach, enterprises can feel confident that mission critical applications running on Hadoop, and specifically HBase, are not at risk of data loss due to a NameNode failure because, in fact, there is no single NameNode. This is a major step forward for Hadoop.”

You can learn more about the Non-Stop NameNode at the product page, where you can also claim your free trial.

If you’d like to get first-hand experience of the Non-Stop NameNode and are attending the Strata Conference in Santa Clara this week, you can find us at booth 317, where members of the WANdisco team will be doing live demos of Non-Stop NameNode throughout the event.

WANdisco Announces Non-Stop Hadoop Alliance Partner Program

We’re pleased to announce the launch of our Non-Stop Alliance Partner Program, designed to provide Industry, Technology and Strategic Partners with the competitive advantage required to win in the multi-billion dollar Big Data market.

There are three partner categories:

  • For Industry Partners, which include consultants, system integrators and VARs, the program provides access to customers who are ready to deploy and the competitive advantage necessary to grow business through referral and resale tracks.
  • For Technology and Strategic Partners, including software and hardware vendors, the program accelerates time-to-market through Non-Stop certification and reference-integrated solutions.
  • For Strategic Partners, the program offers access to WANdisco’s non-stop technology for integrated Hadoop solutions (OEM and MSP).

Founding Partners participating in the Non-Stop Alliance Partner Program include Hyve Solutions and SUSE.

“Hyve Solutions is excited to be a founding member of WANdisco’s Non-Stop Alliance Partner Program,” said Steve Ichinaga, Senior Vice President and General Manager of Hyve Solutions. “The integration of WANdisco and SUSE’s technology with Hyve Solutions storage and server platforms gives enterprise companies an ideal way to deploy Big Data environments with non-stop uptime quickly and effectively into their datacenters.”

“Linux is the undisputed operating system of choice for high performance computing. For two decades, SUSE has provided reliable, interoperable Linux and cloud infrastructure solutions to help top global organizations achieve maximum performance and scalability,” said Michael Miller, vice president of global alliances and marketing, SUSE.  “We’re delighted to be a Non-Stop Strategic Technology Founding Partner to deliver highly available Hadoop solutions to organizations looking to solve business challenges with emerging data technologies.”

Find out more about joining the WANdisco Non-Stop Alliance Partner Program or view our full list of partners.

Scaling Subversion for the Enterprise

Apache Subversion is one of the world’s most popular open source version control solutions. It’s also seeing growing adoption within the enterprise, with plenty to offer enterprise users, including:

  • Established professional support options
  • A commercial-friendly Apache license
  • Atomic commits that allow enterprise users to track and audit changes
  • Plenty of free training resources, such as webinars, refcards and online tutorials

However, large Subversion deployments have limitations that can negatively affect your business. If you are using multiple Subversion repositories across globally distributed teams, you’re likely facing challenges around performance and productivity, repository sync, WAN latency and connectivity, access control or the need for HADR (high availability and disaster recovery).

In our new, free-to-attend ‘Office Hours’ sessions, our Solution Architect Patrick Burma will conduct live demos showcasing how our Subversion MultiSite technology can help you overcome the limitations and risks related to globally distributed SVN deployments. Over the course of the hour, he will cover these issues and the accompanying solutions from the administrative, business and IT perspectives, and will be available to answer all of your business-specific questions.

You can register for all of this week’s sessions now.

All sessions will take place at 10:00am PST (1:00pm EST) and are free to attend.

 

Subversion Tip of the Week

SVN Import

There are two main options when you need to add new file(s) to your Apache Subversion project: the ‘SVN Add’ command and ‘SVN Import.’ The advantages of performing an ‘SVN Import’ are:

  • ‘SVN Import’ communicates directly with the repository, so no working copy or checkout is required.
  • Your files are immediately committed to the repository, and are therefore available to the rest of the team.
  • Intermediate directories that don’t already exist in the repository are automatically created without the need for additional switches.

‘SVN Import’ is typically used when you have a local file tree that’s being added to your Subversion project. Run the following to add a file/file tree to your repository:

svn import -m "log message" (local file/file tree path) (repository-URL)

In this example, we’re adding the contents of the “Release2” folder to the repository, in an existing ‘branches’ directory.

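As a sketch, assuming the ‘Release2’ folder sits in your home directory and the repository lives at the hypothetical URL https://svn.example.com/repos/project, the command would look something like this:

svn import -m "Importing Release2" ~/Release2 https://svn.example.com/repos/project/branches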

As already mentioned, intermediate directories do not need to exist prior to running the ‘SVN Import’ command. In this example, we’re again importing the contents of ‘Release2,’ but this time we’re simultaneously creating a ‘Release2’ directory to contain the files.

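Using the same hypothetical paths, simply append the new directory name to the repository URL and it will be created as part of the import:

svn import -m "Importing Release2" ~/Release2 https://svn.example.com/repos/project/branches/Release2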

If you check the repository, you’ll see a new ‘Release2’ directory has been created. The contents of your ‘Release2’ file tree are located inside.


Want more advice on your Apache Subversion installation? We have a full series of SVN refcards for free download, covering hot topics such as branching and merging, and best practices. You can find out more at www.wandisco.com/svnref

Adding and Deleting Files from the Command Line

When working with files under Apache Subversion’s version control, eventually you will need to start adding and removing files from your project. This week’s tip explains how to add a file to a project at the working copy level or, alternatively, commit it straight to the central repository. It will also highlight how to delete a file, either by scheduling it for deletion via the working copy or deleting it straight from the central repository.

Adding Files

Files can be added to a project via the working copy. After you’ve added the file to your working copy, it’ll be sent to the central repository and shared with the rest of your team the next time you perform an ‘svn commit.’

To add a file to your working copy (and schedule it for addition the next time you perform a commit) run:

svn add (working-copy-location)/file-to-be-added

In this example we’re adding a file called ‘executable’ to the trunk directory of the ‘NewRepo’ working copy.

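Assuming the ‘NewRepo’ working copy lives in your home directory (a purely hypothetical location), the command would look something like this:

svn add ~/NewRepo/trunk/executable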

You’ll need to perform a commit to send this item to the repository and share it with the rest of your team.

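Continuing with the same hypothetical paths, that commit might look like:

svn commit -m "Adding executable" ~/NewRepo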

Deleting Files 

Once you start adding files to your working copy, sooner or later you’ll need to remove files. When files are deleted in the working copy, they’re scheduled for deletion in the repository the next time you perform a commit, in exactly the same way as the ‘svn add’ command.

Schedule files for deletion in the working copy by running:

svn delete (working-copy-location)/file-to-be-deleted

In this example, we’re scheduling ‘executable.png’ for deletion.

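Assuming the same hypothetical ‘NewRepo’ working copy location as before, the command would look something like this:

svn delete ~/NewRepo/trunk/executable.png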

Alternatively, you can delete files from the repository immediately. Note, this operation creates a new revision and therefore requires a log message.

svn delete -m "log message" (repository-URL)/file-to-be-deleted

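For example, if the repository lived at the hypothetical URL https://svn.example.com/repos/NewRepo, an immediate deletion would look something like this:

svn delete -m "Removing executable.png" https://svn.example.com/repos/NewRepo/trunk/executable.png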

Looking for an easy-to-use cross platform Subversion client? Claim your free 30 day trial of SmartSVN Professional by visiting: www.smartsvn.com/download

Fetching Previous Revisions in Subversion

One of the fundamental features of Apache Subversion is that it remembers every change committed to the central repository, allowing users to easily recover previous versions of their project.

There are several methods available to users who wish to roll back to an earlier revision:

1) Perform a Checkout

By default, Subversion checks out the head revision, but you can instruct it to check out a previous revision by adding a revision number to your command:

svn checkout -r(revision-number) (repository-URL)

In this example, we’re creating a working copy from the repository data in revision 5.

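Assuming the repository lives at the hypothetical URL https://svn.example.com/repos/project, the command would look something like this:

svn checkout -r5 https://svn.example.com/repos/project Project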

2) ‘Update’ to Previous Revision

If you already have a working copy, you can ‘update’ it to a previous revision by using ‘svn update’ and specifying the revision number:

svn update -r(revision-number) (working-copy-location)

In this example, we’re updating the ‘Project’ working copy to revision 5.

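Assuming the ‘Project’ working copy sits in your home directory (again, a hypothetical location), the command would look something like this:

svn update -r5 ~/Project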

3) Perform a Reverse Merge

Alternatively, you can perform a reverse merge on your working copy. Usually, a reverse merge is followed by an ‘svn commit,’ which creates a new revision that undoes the unwanted changes. This effectively rolls the project back to an earlier state and is useful if recent commits contain errors or features you need to remove.

To perform a reverse merge, run:

svn merge -r(revision-to-be-merged):(target-revision) (working-copy-path)

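For example, to undo the changes committed in revision 8 and roll your project back to its state at revision 7, you could run something like this from the root of your working copy, and then commit the result:

svn merge -r8:7 .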

Looking for an easy-to-use cross platform Subversion client? Claim your free 30 day trial of SmartSVN Professional by visiting: www.smartsvn.com/download

Hadoop Console: Simplified Hadoop for the Enterprise

We are pleased to announce the latest release in our string of Big Data announcements: the WANdisco Hadoop Console (WHC). WHC is a plug-and-play solution that makes it easy for enterprises to deploy, monitor and manage their Hadoop implementations, without the need for expert HBase or HDFS knowledge.

This innovative Big Data solution offers enterprise users:

  • An S3-enabled HDFS option for securely migrating from Amazon’s public cloud to a private in-house cloud
  • An intuitive UI that makes it easy to install, monitor and manage Hadoop clusters
  • Full support for Amazon S3 features (metadata tagging, data object versioning, snapshots, etc.)
  • The option to implement WHC in either a virtual or physical server environment
  • Improved server efficiency
  • Full support for HBase

“WANdisco is addressing important issues with this product including the need to simplify Hadoop implementation and management as well as public to private cloud migration,” said John Webster, senior partner at storage research firm Evaluator Group. “Enterprises that may have been on the fence about bringing their cloud applications private can now do so in a way that addresses concerns about both data security and costs.”

More information about WHC is available from the WANdisco Hadoop Console product page. Interested parties can also download our Big Data whitepapers and datasheets, or request a free trial of WHC. Professional support for our Big Data solutions is also available.

This latest Big Data announcement follows the launch of our WANdisco Distro, the world’s first production-ready version of Apache Hadoop 2.

Subversion Tip of the Week

Getting Help With Your Subversion Working Copy

When it comes to getting some extra help with your Apache Subversion installation, you will find plenty of documentation online and even a dedicated forum where SVN users can post their questions and answer those of other users. However, Subversion also comes with some handy built-in commands that can show you specific information about your working copy, files, directories, and all of Subversion’s subcommands and switches. This post explains how to access all of this information from the command line.

1) SVN Help

One of the most useful features of command line Subversion is the instant access to its built-in documentation through the ‘svn help’ command. To review all of the details about a particular subcommand, run:

svn help (subcommand)

In the example below, we’ve requested information on the ‘unlock’ subcommand. The printout includes all the additional switches that can be used in conjunction with ‘svn unlock.’

svn help unlock

Alternatively, if you need to see a list of all the available subcommands, simply run ‘svn help.’

svn help

2) SVN Info

If you need more information about the paths in a particular working copy, run the ‘svn info’ command. This will display:

  • Path
  • Repository URL
  • Repository Root
  • Repository UUID
  • Current revision number
  • Node Kind
  • Schedule
  • Information on the last change that occurred (author, revision number, date)

svn info
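You can also point ‘svn info’ at a specific path. For instance, assuming a hypothetical working copy at ~/NewRepo:

svn info ~/NewRepo/trunk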

3) SVN Status

This command prints the status of your files and directories in your local working copy:

svn status (working-copy-path)

svn status

Want more advice on your Apache Subversion installation? We have a full series of SVN refcards for free download, covering hot topics such as branching and merging, and best practices. You can find out more at www.wandisco.com/svnref

 

WANdisco Launches Apache Hadoop Forum

Last week, we launched WANdisco Distro (WDD), a fully tested, production-ready version of Apache Hadoop 2 that undergoes the same rigorous quality assurance process as our enterprise software solutions. To support the needs of WDD users and the wider Apache Hadoop community, we’ve also launched a dedicated Apache Hadoop forum.

In addition to sections on the enterprise Hadoop products WDD, Non-Stop NameNode and WANdisco Console for Hadoop, forum users can connect with other users and get advice on their Hadoop installations – especially installing and configuring Hadoop, and running Hadoop on Amazon’s Simple Storage Service.

The Hadoop forum is also the place to connect with WANdisco’s core Hadoop developers. These include Dr. Konstantin V. Shvachko, a veteran Hadoop developer, member of the team that created the Hadoop Distributed File System (HDFS) and current member of the Apache Hadoop PMC; Jagane Sundar, who has extensive big data, cloud, virtualization, and networking experience and was formerly Director of Hadoop Performance and Operability at Yahoo!; and Dr. Konstantin Boudnik, one of the original developers of Hadoop and founder of Apache BigTop.

This forum is intended to be a useful resource for the Apache Hadoop community, so we’d love to hear your feedback on the Hadoop Forum. If there’s a section or functionality you would like to suggest we add to improve your forum experience, please let us know. You can leave a post on the forum, comment on this blog, or contact us directly.

We look forward to hearing from you!

Running the SLive Test on WANdisco Distro

The SLive test is a stress test designed to simulate distributed operations and load on the NameNode by utilizing the MapReduce paradigm. It was designed by Konstantin Shvachko and introduced into the Apache Hadoop project in 2010 by him and others, and it is now one of the many stress tests we run here at WANdisco when testing our distribution, WANdisco Distro (WDD).

You can read the original paper about how this test works here:
https://issues.apache.org/jira/secure/attachment/12448004/SLiveTest.pdf
You can view the associated Apache JIRA for the introduction of this test here:
https://issues.apache.org/jira/browse/HDFS-708

This blog post provides a short tutorial on how to run the SLive test on your own Hadoop 2 cluster with YARN/MapReduce. Before we begin, please make sure you are logged in as the ‘hdfs’ user:

su - hdfs

The first order of business is to become familiar with the parameters supported by the stress test.

The percentage of operation distribution parameters:
-create <num> -delete <num> -rename <num> -read <num>  -append <num> -ls <num> -mkdir <num>

Stress test property parameters:
-blockSize <min,max> -readSize <min,max> -writeSize <min,max> -files <total>

The first set of parameters controls how many of each kind of operation you want, expressed as percentages. For example, if you want to simulate just a create and delete scenario, with no reads or writes, you would run the test with -create 50 -delete 50 (or any other percentages that add up to 100) and set the others in that first set to 0, or simply leave them unspecified and the test will set them to 0 automatically.
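As a concrete sketch of that scenario, a create-and-delete-only run would look something like this (the operation types left unspecified default to 0):

hadoop org.apache.hadoop.fs.slive.SliveTest -create 50 -delete 50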

The second set of parameters controls properties that apply across the entire test: how many files to create, and the minimum and maximum size of each block in a file. These can mostly be left at their defaults, with the exception of “-blockSize”. Using the default block size of 64 megabytes may cause your run of the SLive test to take longer, so to keep this tutorial speedy we will use small block sizes. Please note that block sizes must be multiples of 512 bytes; we will use 4096 bytes in this tutorial.

There are other parameters available, but they are not necessary for a basic understanding and run of this stress test. You can refer to the document linked at the top of this entry if your curiosity about the other parameters gets the better of you, or you can run:

hadoop org.apache.hadoop.fs.slive.SliveTest --help

The second step is to understand how to run the test. Although it is advised NOT to do this just yet, you can run the test with its default parameters by executing the following command:

hadoop org.apache.hadoop.fs.slive.SliveTest

However, since we have no initial data within the cluster, most, if not all, of the operations in the report would be failures. Instead, run the following to initialize the cluster with 10,000 files, all with a tiny 4096-byte block size, in order to achieve a quick run of the SLive test:

hadoop org.apache.hadoop.fs.slive.SliveTest -create 100 -delete 0 -rename 0 -read 0 -append 0 -ls 0 -mkdir 0 -blockSize 4096,4096 -files 10000

On a cluster with 1 NameNode and 3 DataNodes, running this command should take no longer than about 3-4 minutes. If it is taking too long, you can try re-running with a lower “-files” value and/or a smaller “-blockSize”.

After you have initialized the cluster with data, you will need to delete the output directory that your previous SLive test run had created:

hadoop fs -rmr /test/slive/slive/output

You will need to do this after every SLive test run; otherwise your next run will fail, telling you that the output directory for your requested run already exists.

You can now run the default test, which performs an equal distribution of creates, deletes, reads, and other operations across the cluster:

hadoop org.apache.hadoop.fs.slive.SliveTest

Or you can specify the parameters of your own choosing and customize your own load to stress test with! That is the purpose of the test, after all. Enjoy!

Here are the results obtained from our own in-house run of the SLive test, so you can compare them with your own. I ran the following command after initialization:

hadoop org.apache.hadoop.fs.slive.SliveTest -blockSize 4096,4096 -files 10000

And I got the following results:

13/02/11 11:00:36 INFO slive.SliveTest: Reporting on job:
13/02/11 11:00:36 INFO slive.SliveTest: Writing report using contents of /test/slive/slive/output
13/02/11 11:00:36 INFO slive.SliveTest: Report results being placed to logging output and to file /home/hdfs/part-0000
13/02/11 11:00:36 INFO slive.ReportWriter: Basic report for operation type AppendOp
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “bytes_written” = 4317184
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “failures” = 1
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “files_not_found” = 365
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “milliseconds_taken” = 59813
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “op_count” = 1420
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “successes” = 1054
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “bytes_written” = 0.067 MB/sec
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “op_count” = 23.741 operations/sec
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “successes” = 17.622 successes/sec
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Basic report for operation type CreateOp
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “bytes_written” = 1490944
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “failures” = 1056
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “milliseconds_taken” = 19029
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “op_count” = 1420
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “successes” = 364
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “bytes_written” = 0.053 MB/sec
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “op_count” = 74.623 operations/sec
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “successes” = 19.129 successes/sec
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Basic report for operation type DeleteOp
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “failures” = 365
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “milliseconds_taken” = 4905
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “op_count” = 1420
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “successes” = 1055
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “op_count” = 289.501 operations/sec
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “successes” = 215.087 successes/sec
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Basic report for operation type ListOp
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “dir_entries” = 1167
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “files_not_found” = 1145
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “milliseconds_taken” = 536
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “op_count” = 1420
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “successes” = 275
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “dir_entries” = 2177.239 directory entries/sec
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “op_count” = 2649.254 operations/sec
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “successes” = 513.06 successes/sec
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Basic report for operation type MkdirOp
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “milliseconds_taken” = 5631
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “op_count” = 1420
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “successes” = 1420
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “op_count” = 252.175 operations/sec
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “successes” = 252.175 successes/sec
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Basic report for operation type ReadOp
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “bad_files” = 1
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “bytes_read” = 25437917184
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “chunks_unverified” = 0
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “chunks_verified” = 3188125200
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “files_not_found” = 342
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “milliseconds_taken” = 268754
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “op_count” = 1420
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “successes” = 1077
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “bytes_read” = 90.265 MB/sec
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “op_count” = 5.284 operations/sec
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “successes” = 4.007 successes/sec
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Basic report for operation type RenameOp
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “failures” = 1165
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “milliseconds_taken” = 1130
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “op_count” = 1420
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “successes” = 255
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “op_count” = 1256.637 operations/sec
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “successes” = 225.664 successes/sec
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Basic report for operation type SliveMapper
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “milliseconds_taken” = 765862
13/02/11 11:00:36 INFO slive.ReportWriter: Measurement “op_count” = 9940
13/02/11 11:00:36 INFO slive.ReportWriter: Rate for measurement “op_count” = 12.979 operations/sec
13/02/11 11:00:36 INFO slive.ReportWriter: ————-
