In IT, as in many fields, you eventually end up cramped in the existing system; perhaps a little faster in IT than elsewhere. A choice then always arises: better manage the existing, or invest in a new solution / upgrade.
Better manage existing
This choice is more courageous than the second, but also riskier. It means you believe you can do better than what has been done so far. It typically costs time, for a gain that is hard to estimate in advance. I still think it should be tried first, because:
- It can generate savings, even if they turn out to be insufficient and we still have to invest.
- It gives a better understanding of the need by reviewing actual usage, and thus helps justify a potential investment.
- It shows that you do not just “invest”.
The main thing is to set a target in terms of time and effort to produce a result; that effort should never exceed a certain percentage of what the investment itself would cost.
Demand for computing resources rises inexorably in business. The most strategic projects are generally subject to “capacity planning” to ensure the solution will last the famous 3 or 4 years of depreciation. A few poor relations, however, rarely benefit from this treatment:
- Storage of office files,
- Mail storage,
- Network usage (between sites, and to the Internet).
Asking people to clean up their office files is like rowing in the desert. Everyone claims to have better things to do, but nobody wants to pay what it actually costs (central storage €MC / N€tApp, backup…). To stop this bleeding, miracle solutions have emerged (3-tier archiving, deduplication, SharePoint…). The last one brings indexing, which is almost worse: how do you find anything in a shambles without tidying the room first? Not only do users not want to delete old files, they do not want to classify them either…
Fortunately, we can turn the virus into a vaccine: search the shares for words like “salaries” and “bonuses”. Guaranteed results!
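A minimal sketch of that “vaccine” in Python; the keywords and the share path are placeholders to adapt to your own environment:

```python
import os

# Placeholder keywords and path: adapt them to your shares and language.
KEYWORDS = ("salary", "salaries", "bonus")
SHARE = r"\\fileserver\share"

def flag_sensitive_names(root):
    """Yield files whose names contain one of the sensitive keywords."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            if any(word in name.lower() for word in KEYWORDS):
                yield os.path.join(dirpath, name)

for path in flag_sensitive_names(SHARE):
    print(path)
```

Showing owners that such files sit in the open usually triggers the cleanup that years of polite requests never did.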
The network is one of those heavy investments that proceed in stages; storage and backup fall into the same category. Solutions have existed for quite some time, because it was the first point of contention:
- QoS: manage the pipe, guaranteeing some flows while restricting others,
- Compression: Riverbed & co. bet that the data are redundant and do the equivalent of a “zip” on network streams (see the sketch below).
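To see why this only pays off when the data are redundant, here is a minimal Python sketch (with made-up payloads) comparing compression ratios:

```python
import os
import zlib

# Redundant payload: repeated text, typical of chatty protocols and documents.
redundant = b"GET /api/v1/users HTTP/1.1\r\nHost: intranet.example\r\n" * 200

# Incompressible payload: stands in for encrypted or already-compressed traffic.
random_data = os.urandom(len(redundant))

for name, payload in [("redundant", redundant), ("random", random_data)]:
    compressed = zlib.compress(payload)
    print(f"{name}: {len(payload)} -> {len(compressed)} bytes "
          f"({len(compressed) / len(payload):.0%})")
```

On the redundant stream the gain is massive; on the random one there is none, which is exactly what WAN optimizers face with encrypted traffic.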
In response to this, I propose two approaches in parallel:
- Ensure that the “minimum” best practices are applied,
- Equip IT to be able to do chargeback.
Some proven good practices:
- http / https flows are compressed by the web servers and proxies (a quick check is sketched after this list),
- Replications (DFS, SQL…) between sites run outside peak hours or with built-in bandwidth management,
- Favor sending deltas rather than full copies,
- Regularly hunt for the largest files (see the sketch after this list),
- Block from the start rather than a posteriori (media files…),
- Set quotas to manage unplanned growth, even if hard blocking is not really feasible,
- Record any “temporary” solution, noting the requester, the reason and the planned removal date,
- Set safety thresholds (warning / blocking) below the values that actually block,
- After going into production, revalidate the initial capacity planning.
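For the first item, a minimal check using only the Python standard library (the URL is a placeholder): it requests gzip and reports what the server actually sends back.

```python
import urllib.request

def check_gzip(url):
    """Ask for a compressed response and report what the server actually sent."""
    request = urllib.request.Request(url, headers={"Accept-Encoding": "gzip"})
    with urllib.request.urlopen(request) as response:
        encoding = response.headers.get("Content-Encoding", "none")
        print(f"{url}: Content-Encoding = {encoding}")

# Placeholder URL: point it at your own web servers and proxies.
check_gzip("http://intranet.example/")
```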
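And for the largest-files hunt, a minimal sketch that walks a share (the path is a placeholder) and prints the top offenders:

```python
import heapq
import os

def largest_files(root, top=20):
    """Walk a directory tree and return the `top` largest files as (size, path)."""
    sizes = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                sizes.append((os.path.getsize(path), path))
            except OSError:
                pass  # skip files that vanish or cannot be read
    return heapq.nlargest(top, sizes)

# Placeholder path: point it at the office file share.
for size, path in largest_files(r"\\fileserver\share"):
    print(f"{size / 1024**2:8.1f} MB  {path}")
```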
When the need comes from a specific project, it is often easy to identify who bears the cost. It is harder for shared resources such as Internet access or storage. Chargeback tools measure consumption; even if no actual billing is done, they clearly identify the consumers and make it possible to apportion the cost of the next upgrade.
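A minimal sketch of the idea in Python; the usage records, department names and unit cost are all made up:

```python
from collections import defaultdict

# Made-up measurements: (department, GB consumed on shared storage).
usage = [("sales", 120), ("hr", 30), ("engineering", 450), ("sales", 80)]
COST_PER_GB = 0.50  # assumed fully loaded cost per GB, in euros

totals = defaultdict(int)
for department, gb in usage:
    totals[department] += gb

grand_total = sum(totals.values())
for department, gb in sorted(totals.items(), key=lambda kv: -kv[1]):
    print(f"{department:12} {gb:5} GB  {gb / grand_total:6.1%}  "
          f"{gb * COST_PER_GB:8.2f} EUR")
```

Even without sending an invoice, a table like this is usually enough to steer the next upgrade discussion.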
Invest
This option is, almost by definition, an immediate response to a problem or a need. On some topics, such as office files, it can hardly attract the wrath of users, especially since they do not hesitate to compare against the price of a 1 TB disk from the shop around the corner. There are, however, cases where this choice does not bring the expected benefits. This is particularly true of performance issues, where buying a second server does not necessarily mean going twice as fast.
Investment is often favored because it also brings the resources to carry out the work. If you want to optimize your virtual infrastructure, you may struggle to get a budget for anything more than an audit; run a project with new servers and an upgrade instead, and you will be given the budget along with the resources that go with it. This comes from the difficulty of demonstrating the gains before the optimization has been done.
Conclusion
I recommend the following actions for the IT++ “label”:
- Have key saturation indicators, with enough lead time to run an optimization phase (a minimal extrapolation sketch follows this list). Otherwise we end up with our backs to the wall, and investment becomes the systematic answer.
- Make quantifying resource consumption a standard exercise in projects, and use upgrades as the occasion to request it for existing ones. Afterwards, check the difference between expected and actual. The figures are as interesting as the awareness people gain of their project's impact.
- When a solution to a consumption problem is identified in one project (e.g. enabling http compression), make it the default for all new projects, and ask existing ones to check whether it applies to them too.
- Implement chargeback tools on shared resources, where the consumer is not clearly identifiable.
- Verify that consumption graphs are actually available for the key elements of the architecture: storage, network, processor, memory. The moment of saturation is not the time to start building them.
- Reduce the price per GB of central storage, so that it is more readily accepted when applications request it. Ditto for the network.
- Challenge the architecture choices made in the past when renewing. Those choices were based on:
  - the context,
  - the state of the technology (maturity, cost, knowledge),
  - the budget.
Sometimes those choices were even made by others on the current architecture. It is less demanding to “just renew”, but it indirectly locks you into limited choices for the future.
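On the first point, a minimal sketch of such a lead-time indicator in Python 3.10+ (the monthly storage figures and the capacity are made up): a linear extrapolation of when the resource saturates.

```python
from statistics import linear_regression

# Made-up measurements: TB used at the end of each of the last 8 months.
months = list(range(8))
used_tb = [10.2, 10.9, 11.5, 12.4, 13.0, 13.8, 14.5, 15.3]
capacity_tb = 20.0

# Fit a linear trend (statistics.linear_regression requires Python 3.10+).
slope, intercept = linear_regression(months, used_tb)
months_to_full = (capacity_tb - used_tb[-1]) / slope

print(f"Growth: {slope:.2f} TB/month, ~{months_to_full:.1f} months to saturation")
if months_to_full < 6:  # assumed lead time needed for an optimization phase
    print("Alert: not enough lead time left for an optimization phase")
```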