Back in the dot-com bubble, there was a rush to build medium-sized data centers to accommodate the surge in servers that needed secure hosting. Many of these centers followed what is known as the “Telephone Design.” They were less expensive to build, more manageable, and quicker to bring to market.

The bubble burst, and a decade of Data Center glut followed. Slowly, as more and more companies looked for both redundancy and consolidation, the glut was absorbed. Then the age of the “cloud” was born. The cloud can mean many things to many people. To Data Center owners, it means only one thing: revenue. And so the large Data Centers came into being. Then, suddenly, there was a glut of Data Centers again, which resulted in the consolidation of both owners and facilities.

The latest trend is to build smaller Data Centers with all the same infrastructure as the large ones, but with more flexibility in delivery and pricing. Colocation is an important piece of any company’s IT solution, and we design our services specifically to your needs. Colocation is simply a data center facility that allows customers to rent space for servers and/or other computing hardware. Thin-nology’s Austin Colocation Data Center is a brand-new, state-of-the-art facility. Even if a fire were to break out, our Austin Colocation Services facility would still provide uninterrupted, continuous operation. It has all the security you will find in any Tier III facility (see Business Continuity Planning and Disaster Recovery), plus one other thing: we are stealth. To the outside world, we do not look like a data center (see below for more). We modeled our cooling systems using Fluent simulation technology to ensure a constant temperature throughout the facility no matter the heat load. No hot air ever gets into the facility. We rent by the U or by the rack, with 120V single-phase and 208V three-phase power options.

A large Data Center often carries a high fixed cost. Racks in a 240,000 sq. ft. data center must be cooled just the same as if they were in a 5,000 sq. ft. Data Center. So the fixed cost of cooling 240,000 sq. ft. that is only filled to 50% capacity means the operator must recover those costs from the current customer base.

A 5,000 sq. ft. facility can operate at a much lower fixed cost, and those savings can then be passed along to the customer base. We calculate the cooling cost as the amount of power required to run the cooling, multiplied by the per-kWh cost of the electricity. Now, a larger Data Center has more bargaining power with suppliers than the smaller facilities do. But even if it can negotiate 2-3 cents per kWh versus the 4-5 cents per kWh the smaller facilities will pay, its outlay is still huge.
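To make that math concrete, here is a rough sketch in Python. The cooling loads and electricity rates below are illustrative assumptions chosen only to show the scale of the difference, not actual figures for any facility.

# Rough, illustrative cooling-cost comparison (all figures are assumptions):
# monthly cost = cooling power drawn (kW) x hours x electricity rate ($/kWh).

HOURS_PER_MONTH = 24 * 30  # roughly 720 hours

def monthly_cooling_cost(cooling_kw, rate_per_kwh):
    """Cooling power (kW) times hours times electricity rate ($/kWh)."""
    return cooling_kw * HOURS_PER_MONTH * rate_per_kwh

# Hypothetical 240,000 sq. ft. facility: large cooling plant, negotiated rate.
large = monthly_cooling_cost(cooling_kw=2000, rate_per_kwh=0.03)

# Hypothetical 5,000 sq. ft. facility: small plant, higher retail rate.
small = monthly_cooling_cost(cooling_kw=60, rate_per_kwh=0.05)

print(f"Large facility: ${large:,.0f}/month")   # about $43,200
print(f"Small facility: ${small:,.0f}/month")   # about $2,160

Even with a better negotiated rate, the sheer cooling load of the larger building dominates the monthly bill, which is the point of the comparison above.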

For the “Telephone Design” Data Centers still in business, the design that made them fast and easy to build has led to high operational costs because of the way they are cooled. They use a hot aisle/cold aisle design, where the hot air in the hot aisle is drawn up into a plenum and cold air is dumped into the cold aisle. That cold air is produced by the air conditioning systems pulling from the hot-air plenum. Many of these facilities brag that they have a 15- or 20-foot plenum to draw from and cool. But this same design leads to much higher operating costs and insufficient cooling, especially in hot climates such as Houston, Dallas, and Austin, where daytime temperatures reach over 100 degrees. The amount of hot air is enormous, and trying to cool it means fighting a losing battle. This is where small Data Centers, designed properly, have a massive advantage in both cost and cooling.

A small Data Center of 5,000-8,000 sq. ft. can use a hot aisle/cold aisle design in which zero hot air is dumped into the Data Center to be drawn back out. This design requires a plenum sized to accommodate only the hot air coming from the racks. The racks themselves have chimneys, so the air conditioners pull the hot air into the plenum, cool it, and return it to the Data Center. This means the only hot air being cooled comes from within the Data Center, and the temperature throughout the Data Center stays constant and manageable. It also means the air conditioners can be “teamed” to operate only when required to maintain the desired temperature, as sketched below. Because we are only cooling hot air generated inside the Data Center, costs stay contained as occupancy fluctuates, and the customer always has balanced cooling for his or her equipment. The two things that shorten the life of physical equipment are power fluctuation and heat. In a smaller Data Center, both are kept under tight control.
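One simple way to picture that “teaming” is staged control: run only as many units as the current rack heat load needs. The unit counts and capacities in this Python sketch are made-up illustrative values, not our actual control system.

# Minimal sketch of "teamed" air conditioning: stage in just enough units
# to cover the heat load from the racks. Unit count and per-unit capacity
# are illustrative assumptions, not actual facility values.

TOTAL_UNITS = 4
UNIT_CAPACITY_KW = 30      # assumed cooling capacity per unit

def units_required(heat_load_kw):
    """Return how many units to run for the current rack heat load."""
    needed = -(-heat_load_kw // UNIT_CAPACITY_KW)  # ceiling division
    return int(min(max(needed, 1), TOTAL_UNITS))

# As occupancy grows, the heat load grows, and units stage in one by one.
for load in (20, 55, 95, 118):
    print(f"{load} kW of rack heat -> run {units_required(load)} of {TOTAL_UNITS} units")

The payoff is that a half-full Data Center only pays to run the one or two units its heat load actually requires, rather than conditioning the whole building all the time.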

The other aspect of smaller Data Centers that some see as an issue is that they typically don’t look like a Data Center and, as such, don’t have the external physical barriers found in the large modern Data Center. Nor can they accommodate the large entities that need 200-300 racks. Both points are true, but the smaller Data Centers cater to an audience with much smaller needs: half a rack, one rack, three racks, and so on. These customers want the same internal security features offered by the large Data Centers, but at an affordable price. External physical security, metal doors, and cages at entry are sufficient to alleviate any fear of having their systems stolen. In fact, an external physical threat is such a rare occurrence, especially as most Data Centers are staffed 24/7/365, that it is rarely a deciding factor when choosing a Data Center.

In conclusion, we will see a phasing out of the “Telephone Design” Data Centers, more consolidation of owners and facilities, and more of the smaller-design Data Centers built to accommodate small to medium businesses.