
Data centre checklist


  1. Test the data centre: perform an emergency shutdown test and a power-up from a complete lights-out state.
  2. Consolidate servers using virtualisation.
  3. Specify high-efficiency power supplies for servers.
  4. Use networked storage, which can also reduce energy costs.
  5. Check for airflow blockages under the floor.
  6. Check for leaks in the racks, which drive up the need for airflow. Insert blanking plates, and make sure those you already have are in the right place.
  7. Consider raising the temperature a few degrees. Where the weather outside is cold, design air-conditioning systems that can take advantage of external air (see the free-cooling sketch after this list).
  8. Use variable-speed fans. Most air-conditioning fans have one speed and run at 100 percent duty cycle, but a variable-speed fan can use temperature sensors to increase and decrease its speed as needed (see the fan-control sketch after this list). Also review UPS sizing: UPSs are often over-sized, and older models may not be designed to run efficiently at low utilisation rates.
  9. Store data on tape or offline wherever possible.
  10. Install motion detectors to turn off lighting when nobody is working, and recycled-water collection systems for backup cooling.
  11. Perform a health check before embarking upon expensive upgrades. Before spending on upgrades to the data centre to deal with cooling problems, carry out checks to identify potential flaws in the cooling infrastructure. These checks determine the health of the data centre, helping to avoid temperature-related IT equipment failure, and can also be used to evaluate whether adequate cooling capacity is available for the future. The current status should be reported and a baseline established so that subsequent corrective actions can be shown to result in improvements (see the health-check sketch after this list). A cooling system check-up should include the following items: maximum cooling capacity; CRAC (computer room air conditioning) units; chilled water/condenser loop; room temperatures; rack temperatures; tile air velocity; condition of sub-floors; airflow within racks; and aisle and floor tile arrangement.
  12. Initiate a cooling system maintenance schedule. Regular servicing and preventive maintenance are essential to keeping the data centre operating at peak performance. If the system has not been serviced for some time, servicing should begin immediately, and a regular maintenance regime should be implemented to meet the guidelines recommended by the manufacturers of the cooling components.
  13. Install blanking panels and implement a cable maintenance schedule. Unused vertical space in rack enclosures lets the hot exhaust from equipment take a “shortcut” back to the equipment’s intake, and this unrestricted recycling of hot air means that equipment heats up unnecessarily. Installing blanking panels prevents cooled air from bypassing the server intakes and stops hot air from recycling. Airflow within the rack is also affected by unstructured cabling, which can restrict the exhaust air from IT equipment. Unnecessary or unused cabling should be removed, data cables should be cut to the right length, and patch panels used where appropriate. Power to the equipment should be fed from rack-mounted PDUs with cords cut to the proper length.
  14. Remove under-floor obstructions and seal the floor. In data centres with a raised floor, the sub-floor is used as a plenum, or duct, to provide a path for cool air to travel from the CRAC units to the vented floor tiles (perforated tiles or floor grilles) located at the front of the racks. This sub-floor often also carries other services such as power, cooling pipes, network cabling and, in some cases, water and/or fire detection and extinguishing systems. During the design phase, engineers specify a floor depth sufficient to deliver air to the vented tiles at the required flow rate, but the subsequent addition of racks and servers brings more power and network cabling, and when servers and racks are moved or replaced the old cabling is often abandoned beneath the floor. Air distribution enhancement devices can alleviate the problem of restricted airflow, and overhead cabling can prevent the problem from arising at all. If cabling is run beneath the floor, sufficient space must be provided for the airflow required for proper cooling; ideally, sub-floor cable trays should be run at an upper level beneath the floor, keeping the lower space free to act as the cooling plenum. Missing floor tiles should be replaced and existing tiles reseated to remove any gaps. Cable cut-outs in the floor cause the majority of unwanted air leakage and should be sealed around the cables; tiles with unused cut-outs should be replaced with full tiles, as should tiles adjacent to empty or missing racks.
  15. Separate high-density racks. When high-density racks are clustered together, most cooling systems become ineffective; distributing these racks across the entire floor area alleviates the problem. Spreading out high-density loads works because an isolated high-power rack can effectively “borrow” under-utilised cooling capacity from neighbouring racks, an effect that fails if the neighbouring racks are already using all the capacity available to them (see the capacity arithmetic after this list).
  16. Implement a hot-aisle/cold-aisle layout, where cold aisles contain the vented floor tiles and racks are arranged so that all server fronts (intakes) face a cold aisle. Hot air exhausts into the hot aisle, which contains no vented floor tiles.
  17. Align air handling units with hot aisles to optimise cooling efficiency. With a raised-floor cooling system it is more important to align CRAC units with the air return path (hot aisles) than with the sub-floor air supply path (cold aisles).
  18. Manage floor vents. Rack airflow and rack layout are key elements in maximising cooling performance, but improper location of floor vents can cause cooling air to mix with hot exhaust air before reaching the load equipment, giving rise to a cascade of performance problems and costs. Poorly located delivery or return vents are very common and can negate nearly all the benefits of a hot-aisle/cold-aisle design. The key is to place air delivery vents as close as possible to equipment intakes, which keeps as much cool air as possible in the cold aisles.
  19. Install inflow-assisting devices. Where the overall average cooling capacity is adequate but high-density racks have created hot spots, cooling within those racks can be improved by retrofitting fan-assisted devices that improve airflow and can raise cooling capacity to between 3 kW and 8 kW per rack (see the capacity arithmetic after this list).
  20. Install self-contained high-density cooling devices. As power and cooling requirements within a rack rise above 8 kW, it becomes increasingly difficult to deliver a consistent stream of cool air to the intakes of all the servers when relying on airflow from vented floor tiles. In extreme high-density situations (greater than 8 kW per rack), cool air needs to be supplied directly to all levels of the rack, not just from the top or the bottom, to ensure an even temperature at all levels. Self-contained, high-density cooling systems that accomplish this are designed to be installed in a data centre without affecting other racks or existing cooling systems. Such systems are thermally “room neutral”: they either take cool air from the room and discharge air back into the room at the same temperature, or use their own airflow within a sealed cabinet.
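
The free-cooling decision in item 7 reduces to a simple comparison. Below is a minimal Python sketch under the simplifying assumption that outside air is usable whenever it sits a few degrees below the supply-air setpoint; real economiser controls also weigh humidity and air quality, and the setpoint and margin values here are illustrative assumptions.

```python
# Minimal free-cooling (economiser) decision, assuming temperature is
# the only criterion. Real controls also consider humidity and air
# quality. The setpoint and margin below are illustrative.

def use_outside_air(outside_c: float, supply_setpoint_c: float,
                    margin_c: float = 3.0) -> bool:
    """Switch to outside air when it is comfortably colder than the
    supply air the chillers would otherwise have to produce."""
    return outside_c <= supply_setpoint_c - margin_c

for outside in (5.0, 18.0, 25.0):
    mode = "free cooling" if use_outside_air(outside, 22.0) else "mechanical"
    print(f"outside {outside:.0f} C -> {mode}")
```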
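
Item 8's variable-speed fan behaviour amounts to a small control loop. The sketch below shows a hypothetical proportional controller in Python; real CRAC units implement this in firmware and expose it through their BMS, Modbus, or SNMP interfaces, and the setpoint, minimum speed, and gain are assumed values for illustration. Because fan power scales roughly with the cube of speed, even modest speed reductions yield large energy savings.

```python
# Hypothetical proportional fan-speed controller: idle at a minimum
# speed at or below the setpoint, ramp up as the return-air temperature
# rises above it. All constants are illustrative assumptions.

def fan_speed_percent(return_temp_c: float,
                      setpoint_c: float = 24.0,
                      min_speed: float = 40.0,
                      gain: float = 12.0) -> float:
    """Percent fan speed for a given return-air temperature."""
    error = max(0.0, return_temp_c - setpoint_c)
    return min(min_speed + error * gain, 100.0)

# At 24 C the fan idles at 40%; at 27 C it runs at 76%; from roughly
# 29 C upward it is pinned at 100%.
for temp in (22.0, 24.0, 27.0, 30.0):
    print(f"{temp:.0f} C -> {fan_speed_percent(temp):.0f}% fan speed")
```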
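
The baseline recommended in item 11 can be kept as simply as a file of readings. In the Python sketch below, how the rack inlet temperatures are gathered (SNMP, BMS export, hand-held probe) is left as site-specific, so they arrive as a plain dictionary; the 27 °C threshold reflects the upper end of the ASHRAE recommended inlet range.

```python
# Record rack inlet temperatures once as a baseline, then flag drift
# and hot spots on later runs. The readings dict stands in for whatever
# site-specific collection method is used.
import json

RECOMMENDED_MAX_C = 27.0  # upper end of the ASHRAE recommended inlet range

def report(readings: dict[str, float], baseline: dict[str, float]) -> None:
    for rack, temp in sorted(readings.items()):
        delta = temp - baseline.get(rack, temp)
        flag = "  <- HOT" if temp > RECOMMENDED_MAX_C else ""
        print(f"{rack}: {temp:.1f} C ({delta:+.1f} C vs baseline){flag}")

readings = {"rack-a1": 23.5, "rack-a2": 28.2, "rack-b1": 24.9}

# First run: persist the current readings as the baseline.
with open("cooling-baseline.json", "w") as f:
    json.dump(readings, f)

# Later runs: load the baseline and compare.
with open("cooling-baseline.json") as f:
    baseline = json.load(f)

report(readings, baseline)
```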
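
The “borrowing” argument in items 15 and 19 is easiest to see with numbers. The figures in this sketch (CRAC capacity, rack counts and loads) are illustrative assumptions, not measurements from any particular site.

```python
# Why clustering high-density racks fails even when the room-level
# average looks fine. All figures are illustrative assumptions.

crac_capacity_kw = 120.0                        # usable cooling for the row
racks = 20
per_rack_cooling_kw = crac_capacity_kw / racks  # 6 kW available per rack

standard_load_kw = 4.0                          # typical rack
high_density_load_kw = 12.0                     # hot-spot rack

# Spread out: a lone 12 kW rack among 4 kW neighbours can "borrow"
# their unused 2 kW each, so local demand is met (6 + 4*2 = 14 kW).
neighbours = 4
reachable = per_rack_cooling_kw + neighbours * (per_rack_cooling_kw
                                                - standard_load_kw)
print(f"Spread out: {reachable:.0f} kW reachable for a "
      f"{high_density_load_kw:.0f} kW rack")

# Clustered: four 12 kW racks side by side need 48 kW locally but can
# only reach 4 * 6 = 24 kW, so the cluster overheats despite spare
# capacity elsewhere in the room.
cluster = 4
print(f"Clustered: need {cluster * high_density_load_kw:.0f} kW, "
      f"local supply {cluster * per_rack_cooling_kw:.0f} kW")
```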
https://www.linkedin.com/pulse/my-top-10-posts-pulse-ronald-bartels/
