Wet Paint: Behind the Jazz.net Deployment wiki

The Deployment wiki

As previously alluded to, we’re trying to change the model for how we communicate CLM and Rational product information specific to deployment. We could debate what we mean by deployment, but the jazz.net Deployment wiki says it best:

“… a single repository for Rational deployment guidance and best practices for customers, partners, and Rational staff beyond the basic product and support documentation. The scope of the wiki is the full range of Rational products and solutions, but it focuses on advice for those that are most commonly deployed. The wiki assembles deployment information from many locations into a single community-developed and community-managed knowledge base.”

But really, what’s it for?

Having spent some time behind the curtain helping to get the wiki in order, I can speak to some of the behaviors we’re trying to improve:

  1. Bring accurate, relevant information to customers and practitioners
  2. Create a community where qualified experts can share their knowledge

We’re trying to consolidate the many different ways customers can find Rational product information, improve the shelf life of that information, and find a way to incorporate practitioner experience.

What’s wrong with the old model? (Well, nothing really…)

Back when, you’d open the box with the product diskettes and get a thick tome or two explaining how to install and use the software. Printed documentation seems to have gone the way of cassettes and eight-track tapes; online documentation is the current trend. Yes, trees are saved and cycle time reduced. I’m not completely sure that wikis represent the state of the art for documentation and knowledge management, but this isn’t the place for that discussion.

Going about this the right way

No surprise that those of us working on the wiki are some of the most articulate of the bunch. However, we are a very self-conscious crowd, even as we try to impose our view of the world. We’re constantly asking each other to review what we write and collate. We want to get the facts right. We want to help our customers.

As we keep our goals in mind, there are a few things we worry about:

  1. Frequent, spurious change
  2. Readers misunderstanding or misusing what we say
  3. Making statements which are just plain wrong

Readers arrive at wikis well aware that the content may be in flux. Still, we want to minimize trivial changes. At the Deployment wiki there’s a wrapper of legal statements and terms of use which protects all of us. Much of what we publish is driven by common sense. We’re also constantly keeping an eye on what each of us does, not to police, but to be sure we’re doing our best work everywhere we can.

Humble beginnings

My work on the wiki started with the Troubleshooting Performance section. We assembled an excellent international team straddling development and customer support. At the end of February 2013, the Performance Troubleshooting Team locked itself in a room for one week. That we’d effectively take a clean sheet of paper and embark on a conceptually difficult topic was perhaps over-reaching, but we got an immense amount of work done. Supplied with generous servings of coffee, donuts and bagels, the team defined the wiki structure, authored dozens of pages and figured out several behaviors useful for the wiki’s more straightforward pages.

You might have thought we’d start on more direct material. That is, begin with “How to make a peanut butter and jelly sandwich,” rather than jump right into “Let’s figure out why you’re having trouble making a sandwich.”

Very often, the best instructions are sequential:

  1. Desire a specific result
  2. Assemble ingredients
  3. Follow sequential steps
  4. Arrive at result

Inventing theory

Troubleshooting presumes something, anything, may have gone wrong, and requires a diagnostic vocabulary. We had to write pages on how to solve problems before we even picked the problems we wanted to solve. We had to agree on how to frame issues so that readers could recognize their own concerns in our pages and turn to them for help.

  1. Desire a specific result
  2. Assemble ingredients
    1. How do you know if you don’t have all the ingredients?
    2. Might some of the ingredients be faulty?
  3. Follow sequential steps
    1. How do you know if you missed a step?
  4. Arrive at result
    1. How might you determine you haven’t achieved the result?

We settled upon situations as the name for the things we were trying to help solve and write about. We tried to come up with reasonably narrow situations, and applied some of our Agile techniques to frame situations:

“As a <user> trying to do <X action>, I encounter <Y performance problem>.”

We discovered that situations fell into two categories: situations which administrators were most likely to encounter (not necessarily product specific, more likely to be environmental) and situations which users were most likely to encounter (more likely to be product specific, not necessarily environmental).

Given that performance problems often appear to end users as the culmination of several problems, we decided that the situations we would discuss had to be discrete and specific: “How to troubleshoot performance of <feature>,” or “How to diagnose performance problems in <non-product component within an environment>.”

Because the pages are on an easily discoverable public wiki, the material had to be reasonably self-contained with clearly defined scope. We also had to add hints about where readers should go next if our pages don’t actually prove helpful.

We thought a lot about who our audience might be: The administrator situations and user situations seem obvious now, but they weren’t our original design.

Admittedly, maybe I worried more about theory than most. Or at least, the team let me worry about it while they were doing lots of research and writing.

Actual practice

Many of the situations required researching and writing background pages to explain key concepts even before troubleshooting started. A page on troubleshooting ETL performance required explaining how to read ETL log files. A page on slow performance in web browsers required material about how to measure browser performance independently of CLM. A discussion of troubleshooting virtualized configurations required an introduction to virtualization concepts.

Each situation proposes an initial assessment to identify symptoms, impact/scope, timing/frequency and any other environmental changes. These are the basics of troubleshooting. Then there might be suggestions about how to collect data, use tools, or complete an analysis. Finally there are the paragraphs readers expect, where possible causes and solutions are itemized. Not every cause and solution applies to everyone. We didn’t want to create a greater burden for support by suggesting things that were wrong, irrelevant or confusing.

As mentioned above, we also wanted to make sure readers could find their way to support, other IBM resources and generally available material. As much as we worried about a top-down structure (entry pages, secondary pages, etc.), we are well aware that folks might arrive at a page from an Internet search.

For consistency, we created a template for writing our pages. At the end of our troubleshooting template are Ground Rules. They look simple, but they are easily internalized and effective.

Ready right now, and preparing for the future

We’re desperate for feedback. Each page has a comments section which we monitor. We’re considering updating the wiki platform with some “like” capability, as well as tying it more closely to jazz.net’s forum.

We created a page where in-progress articles can be found, and suggestions for potential troubleshooting topics can be made. So please visit the wiki and let us know what you think.

 

Woz will be at Innovate 2013!
And so will the Jazz Jumpstart team

OMG, Woz is coming to Innovate!

No, I never made a fish tank out of my Mac SE. I did own a Mac clone (Yay, Power Computing!), and have a broken System 7 watch someplace from that operating system’s launch. (I also have a Windows 95 key chain someplace, but that’s another story…) When MacWorld was in Boston, I’d take a day from work to wander the show floor, check out the new gadgets and collect tee-shirts. These days, my work-provided Lenovo ThinkPad stands out in a house full of Macs and iPods.

But Woz is going to speak at Innovate??!! I’m not going to miss that.

Rational’s two seasons

At Rational there are basically two times of the year: pre-Innovate and post-Innovate. Stuff needs to be wrapped up in May for the annual conference, or it has to wait until after. Sure, sometimes there are other deadlines, but for some of us, the first week in June is how we measure the working world. I know some of our customers look at it that way too, asking for defects to be fixed before June, or using the conference to get some fresh ideas and work started for the rest of the year.

Innovate is where we join our partners to showcase products all across the Rational portfolio. Customers present on the cool things they’re doing, and some of us head down to Orlando, Florida (June 2 to 6 this year) to give presentations, run workshops and take part in interactive discussions. Conferences have adjusted to the times over the years. I think my biggest MacWorld tee-shirt haul neared 14. At Innovate, there’s always some swag, but given I have to fly there and back, I’ll collect pens instead.

Jazz Jumpstart is all over Innovate

So maybe Woz gets a keynote under the Klieg lights, but the Jazz Jumpstart Team is all over Innovate, helping customers, meeting colleagues, saying hi to new friends, but most importantly, letting the world know just how awesome the Jazz products are and what wonderful things you can do with them. The best team in Rational is a big part of the Innovate experience. Here are some highlights:

  • Sudhakar “Freddy” Frederick reveals how to best use RTC for Android development (Solving the Android Platform Development puzzle with RTC, MDEV-1120B, Wednesday, June 5, in Asia 3 from 1:45pm to 2:45pm)
  • Ralph Schoon pulls back the curtain on All You Need to Know About Customizing RTC (RDA-1051, Thursday, June 6, Swan 9 from 11:00am to noon)
  • Rosa Naranjo provides Strategies for Planning and Completing a Successful CLM Upgrade (RDA-1481, Tuesday, June 4, Swan 9 from 4:15pm to 5:45pm)
  • Jorge Díaz explains Building Mainframe Applications with RTC Enterprise Extensions (SZ-1203, Thursday, June 6, Northern E2 from 9:45am to 10:45am)
  • Jim Ruehlin holds down the big CLM Process Enactment Workshop (WKS-1116, Tuesday, June 4, Swan 8 from 3:00pm to 6:00pm) and gets help from Jorge and Ralph too. (I get to count chairs, I think…)
  • Our team’s fearless manager Dan “Tox” Toczala and I discuss Maximizing your Jazz Environment and Performance (RDA-1327, Wednesday, June 5, Swan 9 from 4:15pm to 5:45pm). We have an interesting “gimmick” planned for this year, so don’t miss it.
  • You can also catch me co-presenting Jazz High-Availability and Disaster Recovery Levels and Best Practices (RDA-2485, Tuesday, June 4, Swan 9 from 1:45pm to 2:45pm)

These are just highlights: Check our individual teams’ blogs for what else we’re up to. Titles, days, locations and times can change, so keep an eye on the conference schedule and the TV monitors posted throughout the site.


Our changing audience

Really, who are you?
Not yet introducing the new jazz deployment wiki

In our corner of the world, some of us in the Jazz Jumpstart universe are wondering who will spill the beans and mention the new jazz Deployment wiki first. I don’t think it will be me.

We’re all working on a new way for the Jazz ecosystem to present information, specifically deployment information. Not just “Insert tab A into slot B” types of material, but the more opinionated, specific stuff you’ve told us you want to hear. We have folks working on Monitoring, Integrating, Install and Upgrade, and other deployment topics. I own the Performance Troubleshooting section.

When the wiki actually rolls out (and I can talk about it), I’ll describe some of the structure and design questions we wrestled with. For now I want to talk about one of the reasons why we’re presenting information differently, and that’s because we think our audience has changed.

IT used to be simple

OK, so maybe IT was never actually that simple, but it was certainly a lot easier to figure out what to do. One of IBM Rational’s strengths is that we’ve built strong relationships with our customers over the years. Personally, a lot of the customers I know (and who I think know me) started out as ClearCase or ClearQuest admins and over time have evolved into Jazz/CLM admins. Back when, there was pretty much a direct relationship with our product admins, who in turn knew their end users and had ownership of their hardware environments.

These pictures show what I’m talking about (they’re from a slide deck we built in 2011 to talk about virtualization, some of which lives on elsewhere, but these pics are too good to abandon):

aud_olditmod

The relationship between Rational Support / Development and our customers remains strong and direct. Over the years it’s the context of our customers’ product administrators that has shifted in many cases:

aud_newsysmod

Consolidation, regulation, governance, compliance, etc., have all created additional IT domains which are often outside the customers’ product administration. There are cases where our relationship with our customers’ product administrators remains strong but we’ve lost sight of their context.

Here’s another way to look at the old model, specifically around hardware ownership:

aud_oldmod2

Back in the day, our customers’ product administrators would request hardware, say a Solaris box (yes, I am talking about many years ago…), the hardware would arrive and the Rational product admin would get root privileges and start the installation. Nowadays, the hardware might be a VM, and there might be all sorts of settings the admin can’t control, such as security, databases, or, as is pertinent to this example, the VMs themselves.

aud_newmod2

 

This is a long-winded way to say that we’re well aware we have multiple audiences, and need to remember that product administrators and IT administrators may no longer be the same people. Loving a product and managing how it’s used isn’t quite the same as it used to be. We’re trying to get better at getting useful information out there, which is one of the reasons for the new deployment wiki.

 

Virtualization demystified

Read about Rational’s perspective on virtualization over at IBM developerWorks

For the IBM Innovate 2011 conference, the Rational Performance Engineering team presented some of its research on virtualization. We had an accompanying slide deck too, and called it the Rational Virtualization Handbook.

It’s taken a bit of time, but we have finally fleshed out the slides and written a proper article.

Actually, the article has stretched into two parts, the first of which lives at Be smart with virtualization. Part 1. Best practices with IBM Rational software. Part 2 is in progress and will contain further examples and some troubleshooting suggestions. I can’t say for sure, but we have a topic lined up which would make a third part, but there’s a lot of work ahead.

I’m tempted to repost large excerpts because I’m proud of the work the team did. It took a bit longer than expected to convert slideware into a real article, so the article represents a lot of work. I won’t give away the secrets here…. You’ll have to check out IBM developerWorks yourself. However, let me kickstart things with a history sidebar:

A brief history of virtualization

Despite its emergence as a compelling, necessary technology in the past few years, server virtualization has actually been around for quite some time. In the 1970s, IBM introduced hypervisor technology in the System z and System i® product lines. Logical partitions (LPARs) became possible on System p® in 2000. The advent of virtual machines on System x and Intel-based x86 hardware was possible as early as 1999. In just the last few years, virtualization has become essential and inevitable in Microsoft Windows and Linux environments.

What products are supported in virtualized environments?

Very often we explain that asking whether a particular Rational product is supported with virtualization isn’t actually the right question. Yes, we’ve run on Power hardware and LPARs for several years now. Admittedly KVM and VMware are newer to the scene. Some may recall how clock drift could really mess things up, but those problems seem to be behind us.

The question isn’t whether Rational products are supported on a particular flavor of virtualization: If we support a particular Windows OS or Linux OS, then we support that OS whether it’s physical or virtualized.

Virtualization is everywhere

Starting in 2010 at Innovate and other events, we routinely asked folks in the audience whether they were aware their organizations were using virtualization (the platform didn’t matter). In 2010 and 2011 we got a few hands, maybe two or three in a room of 20. Folks were asking us if virtualization was supported. Was it safe? Could they use it? What were our suggestions?

Two years later, in 2012, that ratio was reversed: Nearly every hand in the audience shot up. We got knowing looks from folks who had disaster stories of badly managed VMs. There were quite a few people who had figured out how to manage virtualization successfully. There were questions from folks looking for evidence and our suggestions to take back to their IT folks.

Well, finally, we have something a bit more detailed in print.

 

Field notes: Measuring capacity, users and rates

Picture2

Small, Medium and Large

A frequent question concerns how we might characterize a company’s CLM deployment. Small, medium and large are great for tee-shirts, but not for enterprise software deployments. I admit we tried to use such sizing buckets at one point, but everyone said they were extra-large.

Sometimes we characterize a deployment’s size by the number of users that might interact with it. We try to distinguish between named and concurrent users.

Named users are all the folks permitted to use a product. These are registered users or users with licenses. This might be everyone in a division or all the folks on a project.

Concurrent users are the folks who actually are logged in and working at any given time. These are the folks who aren’t on vacation, but actually doing work like modifying requirements, checking in code, or toying with a report. Concurrent users are a subset of named users.

Generally we’ve seen the number of concurrent users hover around 15 to 25% of the named users. The percentage is closer to 15% in a global company whose users span time zones, and closer to 25% in a company where everyone is in one time zone.
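If you like, that rule of thumb reduces to simple arithmetic. Here’s a back-of-the-envelope sketch in Python; the function and its 15%/25% ratios just encode the observation above and are purely illustrative, not part of any Rational sizing tool:

    # Back-of-the-envelope concurrency estimate from a named-user count.
    # The 15%/25% ratios are the rule of thumb from the text above;
    # the function itself is illustrative, not a real sizing tool.
    def estimate_concurrent(named_users, spans_time_zones):
        """Estimate concurrent users from named users."""
        ratio = 0.15 if spans_time_zones else 0.25  # global vs. one time zone
        return round(named_users * ratio)

    print(estimate_concurrent(3000, spans_time_zones=True))   # 450
    print(estimate_concurrent(3000, spans_time_zones=False))  # 750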

As important as it is to know how many users might interact with a system, user numbers aren’t always an accurate way to measure a system’s capacity over time. “My deployment supports 3000 users” feels like a useful characterization, but it can be misleading because no two users are the same.

Because it invites simple answers, I cringe whenever someone asks me about a particular application or deployment, “How many users does it support?” I know there’s often no easy way to characterize systems, and confess I often ask countless questions and end up providing simple numbers. (I’ve tried answering “It supports one user at a time,” but that didn’t go over so well.)

Magic Numbers

“How many users does it support?” is a Magic Number Question, encouraged by Product Managers and abetted by Marketing, because it’s simple and easy to print on the side of the box. Magic Numbers are especially frequent in web application and website development. I’ve been on death marches (excuse me, mature software development projects) where the goal was simply to hit a Magic Number so we could beat a competing product’s support statement and affix gold starbursts to the packaging proclaiming “We support 500 users.” (It didn’t matter if performance sucked, it was only important to cram all those sardines into the can.)

Let’s look at the Magic Number’s flaws. No two users do the same amount of work. The “500 users” is a misleading aggregate, as one company may have 500 lazy employees while another might have caffeinated power-users running automated scripts and batch jobs. And even within the lazy company, no two users work at the same rate.

Ideally we’d use a unit of measure to describe work. A product might complete “500 Romitellis” in an hour, and we could translate end-user actions to this unit of measure. Creating a new defect might count as 1 Romitelli, executing a small query might be 3 Romitellis, a large query could be 10 Romitellis. But even this model is flawed, as one user might enter a defect with the least amount of data, whereas another user might offer a novel-length description. The difference in data size might trigger extra server work. This method also doesn’t account for varied business logic, which might require more database activity for one project and less for another.
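To make the idea concrete, here’s a toy sketch of such a work-unit model. The per-action weights are the invented examples from the paragraph above; nothing here corresponds to a real metering system:

    # Toy work-unit ("Romitelli") model. The weights come from the
    # invented examples above; real costs would vary with data size
    # and business logic, which is exactly the model's flaw.
    ROMITELLIS_PER_ACTION = {
        "create_defect": 1,
        "small_query": 3,
        "large_query": 10,
    }

    def workload(actions):
        """Total work units for a list of (action, count) pairs."""
        return sum(ROMITELLIS_PER_ACTION[a] * n for a, n in actions)

    # Two users doing nominally similar sessions produce very different loads:
    print(workload([("create_defect", 2), ("small_query", 1)]))   # 5
    print(workload([("create_defect", 2), ("large_query", 20)]))  # 202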

Rates

Just as a set number of users is interesting but insufficient, a simple counter to denote usage doesn’t helpfully describe a system’s capacity. We need a combination of the two, or a rate. Rate is a two-part measurement describing how much of an action occurred in how much time. But as essential as rates are, it’s important to realize how rates can be misunderstood.

Consider this statement:

“I drank six glasses of wine.”

On its own, that may not seem so remarkable. But you may get the wrong impression of me. Suppose I say:

“I drank six glasses of wine in one night.”

Now you might have reason to worry. My wife would definitely be upset. And suppose I say:

“I drank six glasses of wine in the month of August.”

That would give a completely different impression. This is where rate comes into play. The first rate is 6-per-day; the second is 6-per-month. The first relative to the second is roughly 30 times greater. One rate is more likely to lead to disease and family disharmony than the other.
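The comparison works because both rates get normalized to the same time unit. A minimal sketch of that normalization, using rough day-count equivalences for the wine example (the numbers are mine, chosen to match the prose):

    # A rate is only comparable to another rate in the same time unit.
    # Rough day-count equivalences, invented for the wine example above.
    DAYS = {"night": 1, "day": 1, "month": 30}

    def per_day(quantity, unit):
        """Normalize a 'quantity per unit' rate to quantity per day."""
        return quantity / DAYS[unit]

    print(per_day(6, "night"))                        # 6.0 per day
    print(per_day(6, "month"))                        # 0.2 per day
    print(per_day(6, "night") / per_day(6, "month"))  # 30.0 times greater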

Let’s move this discussion to product sizing. Consider this statement:

“The bank completes 200 transactions.”

It’s just as unhelpful as the side-of-the-box legend, “The bank supports 200 users.” For this statement to be valuable it needs to be stated in a rate:

“The bank completes 200 transactions in a day.”

This seems reasonable at first glance, as it suggests a certain level of capability. But now we can offer another:

“The bank completes 200 transactions in a second.”

And now the first rate comes into perspective. Realize that rates can have some degree of meaningful variance. When a system is usable and working, it rarely functions at a consistent rate. Daily peaks and valleys are normal. Some months are busier than others. We may have an average rate, but we probably also need to specify an extreme rate:

“The bank completes 200 transactions in an hour on payday just before closing.”

Or even a less-than-average rate:

“The bank completes 20 transactions in an hour on a rainy Tuesday morning in August.”

Imagine we’re testing an Internet website designed to sell a popular gift. The transaction rate in the middle of May should be different than the transaction rate in the weeks between Thanksgiving and Christmas. Therefore we should try to articulate capacity at both average transaction rate and peak transaction rate.

TPH

There’s no perfect solution for measuring user capacity. What we like to do is to describe work in units of transactions-per-hour-per-user. This somewhat gets beyond differences in users by characterizing work in terms of transactions, and also averages different types of users and data sizes to create a basic unit. Of course we could discuss all day what a transaction means. For now, take it to mean a self-contained action in a product (login, create an artifact, run a query, etc.). For many of our tests, we determined an average rate was 15 transactions-per-hour-per-user. (I abbreviate it 15 tph/u.) A peak rate was three times as much, or 45 tph/u.

15 tph/u may not seem like very much activity. It’s approximately one transaction every four minutes. But we looked at customer logs, we looked at our logs, and this is the number we came up with as an average transaction rate for some of our products. Imagine that within an hour you complete 15 transactions; in practice you might complete more at the top of the hour, get distracted, and complete fewer as the hour wears on. Also, 15 is a convenient number for multiplying and relating to an hour. (Thank the Babylonians for putting 60 minutes into an hour.) Increasing the 15 tph/u rate fourfold means something happens once every minute.

Returning to the “How many users” question, it’s usually better to answer it in terms of transactions-per-hour: “We support 100 users working at an average rate of 15 transactions-per-hour-per-user and 75 transactions-per-hour-per-user at peak periods.” Such statements can make Marketing glaze over, but they’re generally more accurate. Smart companies can look at that statement and say, “In our super-busy company, that’s only 50 users working at 30 transactions-per-hour-per-user,” and so forth.
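The conversion that smart company is doing is just multiplication and division. A minimal sketch, using the numbers from the statement above:

    # Capacity expressed as a rate rather than a raw user count.
    # The numbers are the ones quoted above (100 users at 15 tph/u).
    def total_tph(users, tph_per_user):
        """System-wide transactions per hour."""
        return users * tph_per_user

    def equivalent_users(total, tph_per_user):
        """How many users a given hourly rate represents at another pace."""
        return total / tph_per_user

    average = total_tph(100, 15)           # 1500 tph system-wide
    print(average)
    print(equivalent_users(average, 30))   # 50.0 busier users at 30 tph/u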

When might you see us talking in these terms with such attention to rates? We’re working on it. We want to get it right.

 

Why browser performance doesn’t matter and why maybe you shouldn’t care

Thinking (and obsessing) about software performance, Part 2

When I first blogged about performance over at jazz.net, I received feedback on one hot topic in particular, namely browser performance. There’s no doubt the “Browser Wars” are big business for the competing players. Here in the U.S. during the 2012 Summer Olympics’ televised coverage we saw one vendor heavily tout its newest browser. Of course they claimed it was faster than ever before.

Earlier this year the Rational Performance Engineering team planned to address this topic with an article showing how Rational Team Concert behaves in the major browsers. We also wanted to include details about our testing method so folks could measure their own browsers’ response time and compare with ours. The raw material lives here.

The intent was to produce a graph something like this:

why_image001
Hypothetical Response Times for the Same Transaction in Three Browsers

However, the data collected using three browsers available at that time suggested that the battle for browser supremacy may be nearing its end. The gaps are closing, and some browsers appear to be better for different use cases.

why_image002
Six Common WebUI Use-Cases on Jazz.net (4.0.1 M3)

Further experimentation revealed that the same task executed in the same browser may not complete consistently in the same amount of time. In fact, it’s possible that this variability could blur the distinction between browsers.
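You can see this variability for yourself without any special tooling. Here’s a minimal Python sketch that times repeated fetches of the same page; it measures network and server time only, not full browser rendering, but the spread it exposes is the same phenomenon:

    # Minimal sketch: time repeated fetches of the same page to see how
    # much the "same transaction" varies from run to run. This measures
    # network/server time only, not full browser rendering.
    import statistics
    import time
    import urllib.request

    URL = "https://jazz.net"  # any page you want to sample

    samples = []
    for _ in range(10):
        start = time.perf_counter()
        with urllib.request.urlopen(URL) as response:
            response.read()
        samples.append(time.perf_counter() - start)

    print(f"mean  : {statistics.mean(samples):.3f} s")
    print(f"stdev : {statistics.stdev(samples):.3f} s")
    print(f"range : {min(samples):.3f} to {max(samples):.3f} s")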

The more we tested and investigated, the more we realized that generalizing about Transaction Z’s performance in Browser B was nearly pointless because everyone’s browser experience is different. A more accurate representation of the first picture would actually be:

why_image004
Hypothetical Response Times for Three Browsers for Three Users

Admittedly, these are imaginary numbers selected for dramatic effect. However, we will get to some facts later on. I’m not saying the CLM team has given up trying to optimize web application performance. A top task for the team is to investigate and deliver improvements in RTC plan loading. What I want to say is that obsessing over browser performance can be deceptive, if not fruitless, because as the major browsers’ performance converges, individual and corporate policies appear to affect browser behavior more.

Let’s look at some things that can slow down your browser.

There have been documented reports that some hotel Wi-Fi services add JavaScript to your browser which may infiltrate and change every page you load (search for “hotel wifi javascript injection”). At the time of this writing, this alteration is presumed to be benign, and may now be less of an issue. But it’s still conceivable that another organization offering Wi-Fi might do something similar. Setting aside the security implications, this injected JavaScript can slow you down. Imagine having to touch a particular flower pot on your neighbor’s porch each and every time you enter your own house.

The hotel Wi-Fi example is a known example of what others might label spyware, which is more pervasive than any of us realize. Some corporations spy on their employees for all sorts of good and maybe not so good reasons. Some sites we visit may expose us to malware and spyware. These logging and caching systems can slow down not just your browser but your entire machine.

Staying connected with friends, family and even our employers in the Internet age may require being logged into many sites and applications such as Google and Facebook. An open session with Gmail while you’re filling out that online status report might slow you down a tiny fraction, as your browser requests are noted and tracked. Elsewhere, while you’re surfing, open sessions with Yahoo and Bing follow your trail and leave cookies for others. Why else do you suddenly see coupons for that TV you priced last week or hotel ads for places you researched flight times?

Our online crumbs create a profile which advertisers and others are keen to eavesdrop upon. Consequently, if we’re checking friends’ Facebook pages or simply surfing, our search results may be customized. Here are two Google searches run from the same machine and IP, one using Chrome’s “incognito” feature, the other from my heavily customized Firefox:

why_image005
My search results for “quest clear” in incognito Chrome

why_image006
My search results for “quest clear” in customized Firefox

They’re pretty similar except for the order of results. Firefox’s Google results correctly deduce I might be more interested in ClearQuest than World of Warcraft. What’s also interesting is that Google suggests Chrome’s search was 0.03 seconds faster. Is it the browser that’s faster, or is it that Google needs those hundredths-of-a-second to filter its results based on what it knows about me?

Of course corporate applications handle cookies differently, if they use them at all. The fact that the same search from the same desktop can yield different results shows that the intervening browser might keep track of what you do and silently dispatch data to places that aren’t directly related to your immediate tasks. Any additional traffic, whether customized or not, can slow you down.

If you use an HTTP sniffer tool to log your HTTP activity (for example, Fiddler, HTTP Sniffer, HTTP Scoop, or even Firebug or HttpWatch) then you might be able to note the excess traffic that comes in and out of your system. My fully-loaded browser is constantly polling for stock updates, news feeds and weather reports. Less frequent traffic alerts me to changed web pages, new antivirus definitions, fresh shared calendar entries and application updates. Those special toolbars offer to make searching slightly easier, but they are probably keeping an eye on your every move.

Indeed if you visit commercial sites with such a tool, you may see all the embedded JavaScript which tracks users, firing requests to various sites which do nothing but data collection. A modest amount of Internet traffic has to be related to tracking. In fact, some websites are slow simply because tracking scripts and data are set to load before the real human-consumable content.

Having fresh data on demand comes at a price, perhaps small, but still measurable. If you can’t run your browser bare-bones, then we recommend having a sterile alternate browser handy. As indicated above, one of my browsers is fully loaded; the other knows very little about me and keeps no cookies, bookmarks or history.

Because browsers are customizable, and can be tampered with by sites we visit or by the services we use, some corporations take a stronger stance. Common to financial, consulting or any organization that must adhere to government or industry standards, browser and desktop settings may be completely locked down and not changeable at the desktop. A corporation may use policies in the operating system to limit the number of open connections and other TCP/IP settings, and others may put hard limits on the expiration of cookies and cached content.

Some companies route all Internet and Intranet traffic to a proxy server for optimization, security and logging. Some organizations are looking to protect customer information that passes through their hands and/or comply with EU privacy laws. We have seen situations where poor RTC performance was due to the extra trip to a corporate proxy server. Some organizations permit rerouting traffic, others don’t.

We have heard from customers who can’t follow our browser tuning recommendations because their organizations won’t let them. If this surprises you, know that it’s actually a fairly common practice. Years ago I worked as a consultant for a major petrochemical company. Every night corporate bots would scan desktops and uninstall, delete and reset any application or product customization that wasn’t permitted by corporate IT. Thank goodness the developers had software configuration management and checked in their work at the end of each day.

Browsers allow us to explore the Internet and interact with applications. They can take us anywhere in cyberspace, and we can pretty much do anything we want once we get there. They are an infinitely customizable vehicle from which to explore the World Wide Web. But customization comes at an expense, and sometimes, there are settings and behaviors we cannot change. As a software development organization, we’re becoming more aware and sensitive to these concerns and are working towards adjusting our web interfaces to work better in controlled environments.

There’s one lurking topic I’ve not addressed head-on. Given how quickly new browsers hit the market, it’s very difficult for software vendors to promptly validate their products against each and every new release. Believe me, this is a headache for software vendors not easily solved.

At the desktop level, there are some things all of us can do right now to improve performance. Here’s a short list, and you’re probably already doing a few of them.

In your OS:

  • Keep your OS up-to-date and patched
  • Run and update your security software on a schedule

In your browser(s):

  • Keep your browser up-to-date and patched
  • Make sure any plug-ins or extensions are up-to-date and patched
  • Remove any plug-ins or extensions you do not use
  • Think twice before customizing your browser
  • Run as lean as possible

In your alternate, lean browser:

  • Keep your browser up-to-date and patched
  • Delete your cache after each session
  • Prevent cookies wherever possible
  • Avoid any customizations

Thanks,

Grant.

 

Thinking (and obsessing) about software performance [Reprint]

So maybe it’s cheating to include this jazz.net post, but I had meant to start a series over there, and now appear to be picking it up over here.

Here’s how the series kicks off…

When we interact with software, we require that it does what we want and that it responds to our wishes quickly. The domain of software performance is all about understanding where the time goes when we ask software to do something. Another way to look at software performance is to think of it as a scientific attempt to identify and remove anything that might slow software down.

For more, please go to https://jazz.net/blog/index.php/2012/07/16/performance-part-1/.

Thanks,

Grant.