Software sizing isn’t easy

I’m going to quote pretty much the entirety of an introduction I wrote to an article just posted at the jazz.net Deployment wiki on CLM Sizing (https://jazz.net/wiki/bin/view/Deployment/CLMSizingStrategy):

Whether they are new users or seasoned experts, customers using IBM Jazz products all want the same thing: they want to use the Jazz products without worrying that their deployment will slow them down, and with confidence that it will keep up with them as they add users and grow. A frequent question we hear, whether from a new administrator setting up Collaborative Lifecycle Management (CLM) for the first time or from an experienced administrator tuning their Systems and Software Engineering (SSE) toolset, is “How many users will my environment support?”

Back when Rational Team Concert (RTC) was in its infancy, we built a comprehensive performance test environment based on what we thought was a representative workload. It was, in fact, based upon the workload the RTC and Jazz teams themselves used to develop the product. We published what we learned in our first Sizing Guide. Later sizing guides include the Collaborative Lifecycle Management 2011 Sizing Guide and the Collaborative Lifecycle Management 2012 Sizing Report (Standard Topology E1). As features were added and the product grew, we started to hear about what folks were doing in the field. The Jazz products, RTC especially, are so flexible that customers were using them with wonderfully different workloads than we had anticipated.

Consequently, we stepped back from proclaiming a one-size-fits-all approach and moved to presenting case studies and specific test reports about the user workload simulations and the loads we tested. We have published these reports on the jazz.net Deployment wiki at Performance datasheets. We have tried to make a distinction between performance reports and sizing guides: performance reports document a specific test with defined hardware, data shape, and workload, whereas sizing guides suggest patterns or categories of hardware, data shape, and workload. Sizing guides are not specific; they are general descriptions of topologies and estimates of the workloads those topologies may support.

Throughout the many 4.0.x release cycles, we were still asked “How many users will my environment support?” Our reluctance to answer this apparently straightforward question frustrated customers new and old. Everyone thinks that as the Jazz experts we should know how to size our products. Finally, after some analysis and messing up countless whiteboards, we would like to present some sizing strategies and advice for the front-end applications in the Jazz platform: Rational Team Concert (RTC), Rational Requirements Composer (RRC)/Rational DOORS Next Generation (DNG) and Rational Quality Manager (RQM). These recommendations are based upon our product testing and analysis of customer deployments.


The article talks about how complex estimating software sizing can be. Besides the obligatory disclaimer, there’s a pointer to the CLM and SSE recommended topologies and a discussion of basic definitions. There’s also a table listing many of the non-product (or non-functional) factors which can wreak havoc with the ideal performance of a software deployment.

Most importantly, the article provides some user sizing basics for Rational Team Concert (RTC), Rational Requirements Composer (RRC)/Rational DOORS Next Generation (DNG), and Rational Quality Manager (RQM). Eventually we’ll talk a bit more about the strategies and concepts needed to determine whether you may need two CCMs or multiple application servers in your environment.
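To make that kind of reasoning a little more concrete, here is a minimal back-of-envelope sketch. Every number in it (the 25% concurrency ratio, the 2,000 registered users, the 400-concurrent-users-per-CCM capacity) is a hypothetical placeholder rather than a figure from the sizing article; the point is only the shape of the arithmetic: registered users times a concurrency ratio gives concurrent users, which you then compare against an assumed per-server capacity.

```python
# Hypothetical back-of-envelope sizing estimate (illustrative only).
# The concurrency ratio and per-server capacity below are made-up
# placeholders, not numbers from the CLM Sizing Strategy article.

def estimate_concurrent_users(registered_users, concurrency_ratio=0.25):
    """Rough count of users likely to be active at the same time."""
    return int(registered_users * concurrency_ratio)

def servers_needed(concurrent_users, users_per_server):
    """How many application servers a given load might require."""
    # Round up: even a small remainder needs its own server.
    return -(-concurrent_users // users_per_server)

if __name__ == "__main__":
    registered = 2000          # assumed registered RTC users
    ccm_capacity = 400         # assumed concurrent users per CCM server
    concurrent = estimate_concurrent_users(registered)
    print(f"~{concurrent} concurrent users, "
          f"~{servers_needed(concurrent, ccm_capacity)} CCM server(s)")
```

The real answer depends heavily on workload shape, data shape, and the non-functional factors the article lists, which is exactly why the article presents strategies rather than a single formula.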

For now, I hope we’re taking a good step towards answering the perennial question, “How many users will my environment support?”, and towards explaining why it’s so hard to answer that question accurately.

As always, comments and questions are appreciated.

Be even smarter with virtualization

It took a bit of unplanned procrastination, but we finally got to the second and third parts of our in-depth investigation of virtualization as it relates to IBM Rational products.


Part two is now published here as Be smart with virtualization: Part 2. Best practices with IBM Rational Software.

Part three lives on the deployment wiki here as Troubleshooting problems in virtualized environments.

Part two presents two further case studies and a recap of the principles explored in Part 1. We took a stab at presenting the tradeoffs between different virtualization configurations. Virtualization is becoming more prevalent because it’s a powerful way to manage resources and squeeze efficiency out of hardware. Of course there are tradeoffs to balance, and Part two goes a bit deeper into them.

Part three moves to the deployment wiki and offers some specific situations we’ve solved in our labs and with customers. There are also screenshots from one of the major vendors’ tools, which can guide you in identifying your own settings.

Virtualization demystified

Read about Rational’s perspective on virtualization over at IBM developerWorks

For the IBM Innovate 2011 conference, the Rational Performance Engineering team presented some of its research on virtualization. We had an accompanying slide deck too, and called it the Rational Virtualization Handbook.

It’s taken a bit of time, but we have finally fleshed out the slides and written a proper article.

Actually, the article has stretched into two parts, the first of which lives at Be smart with virtualization. Part 1. Best practices with IBM Rational software. Part 2 is in progress and will contain further examples and some troubleshooting suggestions. We have a topic lined up that could become a third part, but I can’t say for sure: there’s a lot of work ahead.

I’m tempted to repost large excerpts because I’m proud of the work the team did. Converting slideware into a real article took longer than expected, and the result reflects a lot of work. I won’t give away the secrets here; you’ll have to check out IBM developerWorks yourself. However, let me kickstart things with a history sidebar:

A brief history of virtualization

Despite its emergence as a compelling, necessary technology in the past few years, server virtualization has actually been around for quite some time. In the 1970s, IBM introduced hypervisor technology in the System z and System i product lines. Logical partitions (LPARs) became possible on System p in 2000. Virtual machines on System x and Intel-based x86 hardware appeared as early as 1999. In just the last few years, virtualization has become essential and inevitable in Microsoft Windows and Linux environments.

What products are supported in virtualized environments?

Very often we explain that asking whether a particular Rational product is supported with virtualization isn’t actually the right question. Yes, we’ve run on Power hardware and LPARs for several years now. Admittedly, KVM and VMware are newer to the scene. Some may recall how clock drift could really mess things up, but those problems seem to be behind us.

The question isn’t whether Rational products are supported on a particular flavor of virtualization: If we support a particular Windows OS or Linux OS, then we support that OS whether it’s physical or virtualized.
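As an aside on the clock-drift issue mentioned above: if you ever want a quick spot check of how far a guest’s clock has wandered, something like the following sketch works. It assumes the third-party ntplib Python package (pip install ntplib) and reachable public NTP servers, and the two-second threshold is an arbitrary illustration, not a number from our testing.

```python
# Hypothetical spot-check of a VM guest's clock against an NTP reference.
# Assumes the third-party "ntplib" package and network access to pool.ntp.org;
# the threshold below is an arbitrary illustration.
import ntplib

DRIFT_THRESHOLD_SECONDS = 2.0

def check_clock_drift(server="pool.ntp.org"):
    """Report the offset between the local clock and an NTP reference."""
    response = ntplib.NTPClient().request(server, version=3)
    offset = response.offset  # offset in seconds between local clock and reference
    if abs(offset) > DRIFT_THRESHOLD_SECONDS:
        print(f"Clock offset {offset:+.3f}s exceeds threshold; "
              "check the hypervisor's time-sync settings.")
    else:
        print(f"Clock offset {offset:+.3f}s looks fine.")

if __name__ == "__main__":
    check_clock_drift()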

Virtualization is everywhere

Starting in 2010 at Innovate and other events, we routinely asked folks in the audience whether they were aware their organizations were using virtualization (the platform didn’t matter). In 2010 and 2011 we got a few hands, maybe two or three in a room of 20. Folks were asking us if virtualization was supported. Was it safe? Could they use it? What were our suggestions?

Two years later, in 2012, that ratio was reversed: Nearly every hand in the audience shot up. We got knowing looks from folks who had disaster stories of badly managed VMs. There were quite a few people who had figured out how to manage virtualization successfully. There were questions from folks looking for evidence and our suggestions to take back to their IT folks.

Well, finally, we have something a bit more detailed in print.