The Uptime Institute

Monday, October 21, 2002

Jonathan G. Katz, Secretary
Securities and Exchange Commission
450 5th Street, NW
Washington, DC 20549-0609

RE: File No. S7-32-02

The Uptime Institute maintains the nation's largest database of how data centers physically fail. Since 1994, we have tracked statistics on 3,000,000 square feet of electrically active raised floor for our membership of mostly Fortune 100 companies (see 2002 membership roster attached). We estimate our members represent 60% or more (based on assets) of the total mission-critical raised-floor space of the nation's financial institutions.

Briefly, this is our data:

  1. In eight years, none of our 48 members has experienced a physical failure that would require activation of its backup site. We believe there are two reasons for this:

    1. Members are in Tier 3 (1985) and Tier 4 (1994) facilities (our tier ranking system is defined on our website at http://www.upsite.com/TUIpages/whitepapers/tuitiers.html).

    2. Members are very serious about the staffing and maintenance resources they devote to assuring equipment and facility uptime.

  2. Over the past eight years, the collective facility failure rate of our members has dropped from an average of once every eight months to once every three years. We have every reason to believe that the failure rate for non-members has remained at once every six months, and that companies doing processing in commercial collocation or web hosting facilities will be subject to outages of increasing frequency and duration. (Many of the collocation sites built in the last 36 months are Tier 1, which means they cannot perform necessary maintenance without a processing interruption. Because electrical maintenance has not been performed, these sites are approaching the sharp rise in the bathtub-curve failure rate.)

  3. There is no perfect place to build a data center. Every location within the continental US has an element of natural-disaster risk (earthquake, snow, ice, tornado, hurricane, wind, flooding, etc., as shown in the Natural Risk Locations Map available through The Uptime Institute). In addition, there are man-made risks of being near an interstate highway, railroad, or chemical plant. There are also risks in where the data center is located within the building. The classic example is being located under the cafeteria, which exposes the data center to water leaks and to evacuations due to cooking fires on the floor above (these risks are further explained as part of the risk map).

  4. 95% of disaster recovery hot site activations are not the result of an act of God but of management decisions to take risk (decisions often made at a junior level without senior management understanding the full business consequences), whether by building in an inadequate location, designing to Tier 1 (1965) or Tier 2 (1975) standards, or not maintaining electrical equipment.

It is from this perspective that we offer the following suggestions:

  1. Specify the minimum regional separation between primary and backup sites. Unfortunately, fiber distance limitations are still too restrictive to deal with a regional event like the ice storm several years ago that affected most of Maine and parts of New Hampshire and Vermont. From a facility perspective, we suggest that the primary and secondary sites be served by different parts of the electric transmission grid, by separate water grids (because large data centers require large amounts of water for evaporative cooling), by diverse data communications routes, and by different transportation networks, and that they be located in different weather regions. We have also seen repeated problems with depending on cell phones for emergency communications, because provider capacity is insufficient when a regional event occurs.

  2. The fault tolerance and maintainability of the primary and backup site infrastructure need to be specified. We would suggest at least Tier 3.

  3. The competence of the facility staff running the infrastructure needs to be specified, especially to assure that the backup site is being maintained and will really work in an emergency. The last publicly reported test of engine generators actually working in an emergency occurred in the early nineties, when the South Street Substation outage left Wall Street without power for several weeks. Twenty-five companies declared disasters because their engine generator equipment failed immediately, within the first hour, or within the first 24 hours. Unfortunately, the painful lessons learned then about the need to test performance regularly under real load, as opposed to merely exercising whether the engine will start, have mostly been forgotten in the facility department cost reductions of the last ten years.

Respectfully yours,

Kenneth G. Brill
Executive Director
The Uptime Institute
1347 Tano Ridge Road
Santa Fe, NM 87506
Tel: (505) 986-3900
Fax: (505) 982-8484


Members of the Site Uptime® Network
(2002 Roster)

A.G. Edwards

Alltel

American Express

BellSouth

BP North America

Boeing

Capital Group

Caterpillar

ChevronTexaco

Conagra Foods

Confidential (2)

Consonus

Deere & Company

Defense Information Systems Agency

Depository Trust

DST Systems

E*Trade

Exodus Communications

Fidelity Investments

Hewitt Associates

Hewlett-Packard

Household International

Inovant (Visa)

(i)Structure

Johnson & Johnson

JP Morgan Chase

Lexis-Nexis

Lowe's

MasterCard

Microsoft

Nationwide

Northwest Airlines

Orange

Philadelphia Stock Exchange

Procter & Gamble

Salomon Smith Barney

SIAC

Social Security Admin

Sprint

Sun Microsystems

SunGard Recovery Systems

Target

United Airlines

United Parcel Service

USAA

Verizon Communications

Wachovia Bank