
Tuesday, 13 January 2015

DataCenter Procurement – When to Buy New?

By now, we have seen the building blocks of a datacenter like Racks, Cables, Power, Cooling, etc.; we have also learnt about its different aspects, including how to manage your datacenter. It's time for us to look at other components of its life cycle. In this article, we will go through a different view on planning and procuring datacenter components.

Before we start... welcome back! I have been busy with the holiday season, friends and family for a while, so I couldn't contribute to my own blog; well, it's never too late. Okay, let's get back to business.

Overview
Most organizations with an in-house datacenter follow a common mission statement: “To facilitate hardware availability for datacenter operations and efficient utilization, for the purpose of business continuity with less TCO and more ROI.” I assume most of you know about DataCenter Life Cycle Management and the activities involved in it. I am not an expert, but I have tried to give my view of a DataCenter Life Cycle and Management Framework below:


If we focus on the innermost ring of this framework, we find the core activities of the datacenter life cycle: Analyze, Design, Plan, Procure and Deploy. Analysis happens at the very initial stage, where we collect information; we hear the Voice of the Customer and convert that into requirements. Once we have all the requirements, we need to size the environment accordingly and create a high-level design of how things will be deployed and how components will integrate or talk to each other. Then comes the most hectic part of this life cycle: Upper Management, of course, mostly cares about the money. As we have traditionally heard, our aim for IT management is:

  • Reduce the Total Cost of Ownership (TCO)
  • Increase the Return on Investment (ROI)
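Since the whole article revolves around these two metrics, here is a minimal sketch of how they are typically computed; the dollar figures are hypothetical, purely for illustration:

```python
# Rough sketch of the two metrics with made-up numbers - not a formal
# financial model, just an illustration of what we are trying to optimize.

def tco(capex, annual_opex, years):
    """Total Cost of Ownership: purchase cost plus operating cost over the lifetime."""
    return capex + annual_opex * years

def roi(total_gain, total_cost):
    """Return on Investment, as a percentage of cost."""
    return (total_gain - total_cost) / total_cost * 100

# Hypothetical server: $10,000 to buy, $3,000/year to power, cool and support.
cost = tco(capex=10_000, annual_opex=3_000, years=5)   # 25,000
print(f"5-year TCO : ${cost:,}")
print(f"ROI        : {roi(total_gain=40_000, total_cost=cost):.0f}%")  # 60%
```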
The challenge is how to achieve it. I have done multiple IT cost optimization projects in my past experience; I will be sharing one of them in this article.

Applying Lean Six Sigma Methodology
In Lean Six Sigma, I learnt about problem solving and improvement via the DMAIC process, i.e. Define, Measure, Analyze, Improve and Control. Going back to the problem we all face with high cost, I used the Fishbone technique to give an overview of how we end up with high cost in datacenter IT operations. Below is my very basic analysis of high cost; of course, there are multiple branches, pros and cons for every consideration, but like I said, this is just an overview to help you understand where I am coming from.

If you notice, under Money, Annual Maintenance Cost (support) and Procurement are the two major contributors to operational expenses in every environment. Now, I am sure you might have heard how virtualization makes your life easy with features like centralized management, lower support cost, etc., so I am not touching on that. However, let's talk about how we can reduce cost during procurement itself, considering virtualization and other techniques.

Legendary Procurement Process “As Is”
Below is the “as is” procurement workflow followed in almost every organization.
Now, as you can see, there are multiple flaws with this process:
  • There is no check on optimized usage of existing hardware before new procurement.
  • There is no check on the usage timelines of the new hardware to be procured.
  • There are no guidelines available to choose the right hardware for a project.
  • There is no check whether virtualization can avoid the procurement.
  • A methodology to track the current role, usage and future plans of datacenter hardware is missing.
  • Unplanned procurement leads to wastage of CAPEX and OPEX budget.

Now, to make it better, we first need to put in our thoughts and identify the need – what do you really want?
  • Do I need more hardware in my environment?
  • Does the present solution meet my needs?
  • Do I have areas of improvement?
  • Do I have 100% utilization of existing resources?
  • Am I buying hardware as per the roadmap?
  • Do I need to invest to manage existing resources?
  • Do I need business continuity or data availability?
  • Do I have enough hardware to meet dynamic needs?
  • Do I track the usage of existing hardware?


New Approach
Definitely, before we proceed with procurement, we need to identify what we have and analyze what we need. Now, let me introduce you to a new way of procurement:

You can call it a process or a decision tree for “When to Buy”, whichever way is easier for you to integrate into your own environment. I understand it's not easy to bring change to an organization, but when people have a common motive, it is at least easier to drive. As you can see, it highlights the checkpoints where you can ask – Can I Avoid Procurement? (I have also sketched these checkpoints as code after the list below.)
  • Check if free hardware is available.
  • Check if the hardware requires immediate deployment.
  • Check if free hardware will be available during the required timeline.
  • Check if free hardware is available in another section, Biz-IT or with partners.
  • Check if the server required can be a virtual server; if not, why? Verify with TC.
  • Check if storage connectivity would be required.
  • Check if a dedicated path is required for the server:
    • If no, Raw Device Mapping with a Virtual Machine can be used.
    • If yes, VMDirectPath can be used (a VMware technology; a similar option is available for Microsoft Hyper-V as well).
  • Check if simulators can be used.
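A minimal sketch of those checkpoints as code. All the request flags (free_hardware_available, can_be_virtualized, and so on) are hypothetical names; in a real shop each check would query your inventory/CMDB and capacity-management tools instead of a dict:

```python
# Sketch of the "When to Buy" decision tree above, under the assumption that
# each flag has already been answered by your inventory and capacity tools.

def when_to_buy(req: dict) -> str:
    if req.get("free_hardware_available"):
        return "Redeploy existing free hardware - procurement avoided"
    if not req.get("needs_immediate_deployment") and req.get("hardware_freed_in_timeline"):
        return "Wait for hardware freed within the required timeline"
    if req.get("free_hardware_in_other_section"):  # other section, Biz-IT or partners
        return "Borrow from another section, Biz-IT or partners"
    if req.get("can_be_virtualized"):
        if req.get("needs_dedicated_storage_path"):
            return "Deploy as a VM using VMDirectPath (or the Hyper-V equivalent)"
        return "Deploy as a VM (Raw Device Mapping if storage connectivity is needed)"
    if req.get("simulator_is_enough"):
        return "Use a simulator - procurement avoided"
    return "Procure new hardware - and document why virtualization was ruled out"

print(when_to_buy({"can_be_virtualized": True, "needs_dedicated_storage_path": False}))
```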


To understand it better, let's break the changes into sections:


1.   Asset Management – Inventory and Audits: Collect your existing hardware inventory and perform audits of its usage and health. This will help in making projections and planning well in advance.
2.   Usage Tracking: With the available reports, find out if there is any free hardware or capacity available for deployment. Usually, DataCenter Life Cycle Managers have this information.
3.   Shared Resources: At times resources can be shared, or can be released in the case of Test & Dev environments. Another example is public cloud hosting.
4.   Virtualize: Check the possibility of virtualizing it. I consider that most environments already have a virtual infrastructure, which may or may not have capacity for hosting new VMs. In case you don't have capacity, rather than investing in a new physical server, invest in increasing the capacity of the existing virtual infrastructure and share its resources (see the sketch after this list). You can also go for hosting services offered by many vendors these days.
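As a simple illustration of the "Virtualize" check in point 4, here is a minimal sketch (with made-up cluster numbers) that compares a new workload against free capacity in the existing virtual infrastructure before concluding that a purchase is needed:

```python
# Hypothetical capacity check behind point 4: only buy when the existing
# virtual infrastructure cannot absorb the new workload. Numbers are made up.

cluster = {"cpu_ghz_free": 40.0, "ram_gb_free": 256, "storage_tb_free": 4.0}
new_vm  = {"cpu_ghz": 8.0, "ram_gb": 64, "storage_tb": 1.0}

def fits(cluster, vm, headroom=0.8):
    """True if the VM fits while keeping 20% spare for failover and growth."""
    return (vm["cpu_ghz"]    <= cluster["cpu_ghz_free"]    * headroom and
            vm["ram_gb"]     <= cluster["ram_gb_free"]     * headroom and
            vm["storage_tb"] <= cluster["storage_tb_free"] * headroom)

if fits(cluster, new_vm):
    print("Host on existing virtual infra - procurement avoided")
else:
    print("Expand virtual infra capacity (or consider hosting) before buying a physical server")
```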

Well, that was just an overview, but there are many more aspects to saving cost. To start with, however, this approach alone might save some of the initial investment. Why optimize later when you can cut cost at the initial stage, right?

Any more questions? Please write back or comment here. There are more things to share...



Request you to join my group "DataCenterPro" on Facebook & LinkedIn to get regular updates. I am also available on Twitter as @_anubhavjain. Shortly, I am going to launch my own YouTube channel with free training videos on different technologies as well.

Happy Learning!!

Thursday, 27 November 2014

Best Practices: Data Backup Strategy

While I was working as a Pre-sales Consultant with a backup software company, the most common questions I used to come across were "How should I back up my data?", "How frequently should I back up my data?" and "What strategy should I use to back up my data?"

Many companies these days define data backup policies with a retention period or the type of backup to run during a certain period. Many backup strategies are defined by users, say "Tower of Hanoi" or "Grandfather-Father-Son (GFS)", along with options like a one-time full or immediate incremental backup, which are usually manual or run on a defined schedule.

However, I feel custom options are for those who really want to use this software as efficiently as possible and get complete ROI out of the application. There are many dependencies, or I can say a "checklist", before defining a true backup plan. Space and retention period play the main role while defining the backup plan. While considering these parameters, we should also not forget what your software can do for you, like its compression rate – around 55%, which is the best I have seen compared to other players in the market. Apart from these, we should consider: am I looking for data availability or business continuity? I should also consider the RTO (Recovery Time Objective) for the backup; the more backups we have in a chain, the more time it will take to recover, which is standard for any application. These days, some backup and recovery products provide speeds of around 1.3 GB per minute.
To clarify, let's think of a scenario where I have 200 GB of data to be backed up into 2 TB of space across SAN. Now, as per company policy, I have a retention period of 6 months. I am considering that the data comes from a single server and we are doing image backups, not files or folders. Now, the moment I do one full backup a month, with 4 weekly differentials and daily incrementals, I don't think we can achieve our target of keeping backups for 6 months, as differential backups are usually half the size of full backups. This doesn't mean the GFS option is bad; it's just not the right choice in every situation. Backups are critical and part of every datacenter's compliance. They should be well defined, tested and implemented, considering the needs of the user. Backup software is just an application which can work wonders, but how to make it work wonders is up to us.

Coming back to the scenario: if the user had selected custom backups, with 1 full backup and daily incremental backups, it would have been enough. It would also provide business continuity, as I would have fewer backups to recover, and I could still extract any single file, hence data availability as well.
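Here is a back-of-the-envelope sketch of the space math in this scenario; the compression, differential and incremental ratios are assumptions purely for illustration, so rerun it with your own numbers:

```python
# Space math for the 200 GB / 2 TB / 6-month scenario. Assumed ratios:
# compression keeps ~45% of the data (a 55% rate), a differential is ~50%
# of a full, and an incremental ~10% of a full.

FULL_GB = 200 * 0.45          # compressed full image, ~90 GB
DIFF_GB = FULL_GB * 0.50      # differential, ~half of a full
INCR_GB = FULL_GB * 0.10      # incremental, assumed ~10% of a full
MONTHS  = 6

# GFS-style: monthly full + 4 weekly differentials + ~26 daily incrementals
gfs = MONTHS * (FULL_GB + 4 * DIFF_GB + 26 * INCR_GB)

# Custom: one full, then daily incrementals for ~180 days
custom = FULL_GB + 180 * INCR_GB

print(f"GFS    : {gfs:7.0f} GB of the 2000 GB available")    # ~3024 GB, overflows
print(f"Custom : {custom:7.0f} GB of the 2000 GB available") # ~1710 GB, fits
```

Under these made-up ratios, GFS overflows the 2 TB target while one full plus daily incrementals fits, which is exactly the point of the scenario.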
Please note: though sometimes the speed of backup also matters to us, I am ignoring that fact here since we are currently discussing backup strategy, and speed is already pretty good – around 1.3 GB per minute as observed. However, we should consider the speed of the backup application and the data-transfer speed of the connected storage when we have multiple backups running simultaneously. At that stage, we have to consider options like bandwidth allocation and de-duplication as well.
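To put those speeds in perspective, a quick arithmetic sketch of the windows involved (a simplification that ignores de-duplication, caching and bandwidth throttling):

```python
# Window estimates at the speeds mentioned above, assuming a constant
# single-stream rate and parallel jobs sharing the same pipe equally.

size_gb = 200
speed_gb_per_min = 1.3

print(f"Single job     : {size_gb / speed_gb_per_min:.0f} minutes")            # ~154
jobs = 4
print(f"4 parallel jobs: {size_gb / (speed_gb_per_min / jobs):.0f} min each")  # ~615
```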
Other best practices I can think of are as follows (please note, these are purely based on my experience):
1. Do not set passwords on the backups; wrong password attempts may corrupt the backup or can interrupt the backup operation when running on a schedule.
2. If the number of machines is above 20, it's better to create individual plans for groups of machines. You may also consider creating different folders for backups in the same location. In any case, a backup plan waits for the previous plan to complete, so, like I said, it's up to you how you want your software to work.
3. Tapes are prehistoric :) (slow), while these days backup applications are new generation (very fast). I don't prefer tape if the duration of backup recovery matters to you. Even though backup speed can reach up to 2 GB per minute, since the target's speed is low, the data backup will be slow as well. However, tapes are still required, but as an offsite copy; they should not be used as primary.
4. Don't back up a single machine to a de-duplication-enabled location, since it will check for data redundancy in the same location and will take longer to complete the backup.
5. Make sure you are using the option to validate the "Full Archive".
6. I always suggest taking backups to two different locations if possible, with no financial constraints on investing in data availability under a Level 5 disaster.
7. I suggest taking full partition backups rather than file or folder backups, since even though the backup is of the drive, we can still do file & folder recovery. It is also much faster, as it backs up sector by sector and automatically includes any new file created in the same folder location, which in fact does not happen if we back up files & folders.
8. Test recovery drills should be done often, at least once every 3 months.
9. Bootable CDs should be kept ready with the latest version of the kernel. Problems usually don't knock on the door before coming; we should be armed to fight them.
10. Notifications play an important role. Make sure you are getting notifications via email or SNMP. If it's SNMP, make sure your trap catcher is listening for alerts (see the sketch after this list).
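On point 10: SNMP traps arrive on UDP port 162, so a bare-bones listener like the sketch below (plain sockets, no SNMP decoding – purely a reachability test, not a real trap catcher) can at least confirm that alerts are reaching the trap catcher's host:

```python
# Bare-bones check that traps reach this host: binds UDP 162 and logs any
# datagram. It does NOT decode the SNMP PDU. It needs privileges to bind
# port 162, and the real trap catcher must not be holding the port.

import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 162))
print("Listening for traps on UDP 162 ...")
while True:
    data, (src, port) = sock.recvfrom(4096)
    print(f"Trap-like datagram from {src}:{port}, {len(data)} bytes")
```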
Feedback and questions will be appreciated.
Note: This article is re-posted; it was earlier published on the vendor's site, where it is still a hot thread in their community. FYI: https://forum.acronis.com/forum/17555

Any more questions? Please write back or comment here. There are more things to share...

Request you to join my group "DataCenterPro" on Facebook & LinkedIn to get regular updates. I am also available on Twitter as @_anubhavjain. Shortly, I am going to launch my own YouTube channel with free training videos on different technologies as well.

Happy Learning!!