
5 practices that need to be taken into account when migrating to a more modern data centre


I have previously written about Prestik, Scotch tape and barbed wire data centres. These problematic locations are usually legacy environments. Data centres have become highly consolidated and demand both technical and operational efficiency. Legacy practices therefore need to be corrected, and this article details a few of them.

  1. Legacy IT equipment in a data centre potentially does not have dual power supplies. Modern data centres supply resilient power feeds to maintain uptime, and equipment with only one power supply undermines this resilience. Short of replacing the IT equipment, a temporary solution is to install a rack-mounted automatic transfer switch (ATS).
  2. Non-standard cabinets are not optimal for cooling. Cooling optimisation is achieved using hot aisle containment. In this configuration, racks are typically installed in pods where all the racks are of a similar form factor. Non-standard racks are difficult to install in these pods. It is possible to use butcher curtains or similar panelling to achieve a degree of containment until the racks are migrated or reconfigured.
  3. In a legacy environment the network cabling is typically wired using copper to a central row or even, in some cases, directly to the main core data centre switches themselves. As a result, legacy data centres typically have a disproportionately large quantity of copper cabling. Newer network architectures use fibre interconnections between racks and to the core data centre switches, with copper cabling usually confined to within the racks themselves. Some chassis-based platforms even offload this connectivity to a backplane within the chassis, further eliminating cabling.
  4. Legacy data centres typically rely on manual asset management and manual logs for controlling access. This does not scale, and fully automated digital asset management systems need to be introduced. Access control also needs to be based on a digital system, and this includes visitors. A modern data centre that is not end-to-end digital across all operations is not feasible.
  5. There are new technologies available that introduce wireless Internet of Things (IoT) sensors for monitoring and validating data centre operations. The modern data centre gains dramatically more analytics from these sensors. At a minimum, these sensors increase the footprint of analytics such as temperature, power, presence and access by a factor of well over ten.
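To make the ATS recommendation in item 1 concrete, here is a minimal sketch of the feed-selection decision an automatic transfer switch performs for single-PSU equipment. The function name and interface are hypothetical, purely for illustration:

```python
def select_feed(feed_a_ok: bool, feed_b_ok: bool, preferred: str = "A") -> str:
    """Decide which power feed to pass through to single-PSU equipment.

    Prefer the designated primary feed; fall back to the other feed if the
    primary is lost; report "NONE" only when both feeds are down. This is
    the behaviour that lets single-corded kit survive the loss of one feed.
    """
    order = ["A", "B"] if preferred == "A" else ["B", "A"]
    status = {"A": feed_a_ok, "B": feed_b_ok}
    for feed in order:
        if status[feed]:
            return feed
    return "NONE"

# With both feeds healthy the ATS stays on the preferred feed; losing the
# preferred feed triggers a transfer to the alternate.
print(select_feed(True, True))    # preferred feed
print(select_feed(False, True))   # failover to the surviving feed
```

The point is that the transfer happens downstream of the IT equipment's single power supply, so resilience is restored without touching the server itself.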
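The digital asset management and access logging described in item 4 can be sketched as a small data model. The class and field names below are illustrative assumptions, not taken from any specific DCIM product:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Asset:
    # One record per piece of IT equipment, keyed by asset tag.
    asset_tag: str
    rack: str
    rack_unit: int
    description: str

@dataclass
class AccessLog:
    # Every physical interaction, including by visitors, is logged digitally.
    entries: list = field(default_factory=list)

    def record(self, person: str, asset_tag: str, action: str) -> None:
        # UTC timestamps keep the audit trail unambiguous across shifts.
        self.entries.append({
            "time": datetime.now(timezone.utc).isoformat(),
            "person": person,
            "asset": asset_tag,
            "action": action,
        })

# Build the register as a lookup by asset tag.
register = {a.asset_tag: a for a in [
    Asset("SRV-0042", "Rack A3", 12, "1U application server"),
]}
log = AccessLog()
log.record("visitor-7", "SRV-0042", "inspect")
```

Even this toy version shows the difference from a paper log: the register is queryable, and every access entry is timestamped and attributable.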
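The analytics gain from the IoT sensors in item 5 can be illustrated with a simple aggregation: many per-rack readings roll up into per-rack averages and alerts. The sensor data below is invented for the example, and the threshold is an assumption:

```python
from collections import defaultdict
from statistics import mean

# Hypothetical readings streamed from wireless sensors: (rack, metric, value).
readings = [
    ("A1", "temp_c", 22.5), ("A1", "temp_c", 23.1),
    ("A2", "temp_c", 27.8), ("A2", "temp_c", 28.4),
    ("A1", "power_kw", 3.2), ("A2", "power_kw", 4.9),
]

def summarise(readings, hot_threshold_c=27.0):
    """Average each metric per rack and flag racks running hot."""
    buckets = defaultdict(list)
    for rack, metric, value in readings:
        buckets[(rack, metric)].append(value)
    summary = {key: round(mean(vals), 2) for key, vals in buckets.items()}
    hot_racks = sorted({rack for (rack, metric), avg in summary.items()
                        if metric == "temp_c" and avg > hot_threshold_c})
    return summary, hot_racks

summary, hot_racks = summarise(readings)
```

With dense sensor coverage the same pattern scales from a handful of readings to thousands, which is where the tenfold-plus increase in analytics footprint comes from.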

Bonus: Read the 15 best practices as recommended by a leading IT Consultancy here.

What other practices do you think are worth a mention on this list? Please comment below.

This article was originally published over at LinkedIn: 5 practices that need to be taken into account when migrating to a more modern data centre
