
Effective IT cost management strategies improve workload efficiency, as well as reduce the amount spent per workload in server costs.

Some cost-savings ideas, like eliminating a costly process, do not require enterprises to spend any money. Others, such as investing in more efficient equipment or simplifying management via predictive analytics, require upfront expenditures to implement, but offer big opportunities for longer-term IT cost management goals.

Networking, storage and server costs

The name of the game in saving costs on information systems is efficiency. The goal should be to feed your servers a steady diet of data to improve workload throughput and maximize system utilization. Improving system efficiency -- across compute, storage and networking -- leads to higher utilization.

Higher utilization reduces total server costs in several ways. The data center team purchases fewer servers and associated peripherals. Balanced utilization reduces energy consumption, which helps lower cooling costs. Software costs for these servers also decrease, since software licensing is usually priced by the number of microprocessor cores used to execute a given workload.
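The utilization arithmetic above can be sketched with purely hypothetical numbers -- the server capacity, core count and per-core license fee below are illustrative assumptions, not vendor pricing:

```python
import math

def servers_needed(total_load, per_server_capacity, utilization):
    """Servers required to run total_load at a given average utilization."""
    effective_capacity = per_server_capacity * utilization
    return math.ceil(total_load / effective_capacity)

CORES_PER_SERVER = 32          # hypothetical server configuration
LICENSE_PER_CORE = 2_000       # hypothetical per-core annual license fee, USD

for util in (0.30, 0.60, 0.90):
    n = servers_needed(10_000, 1_000, util)
    license_cost = n * CORES_PER_SERVER * LICENSE_PER_CORE
    print(f"utilization {util:.0%}: {n} servers, ${license_cost:,} in licenses")
```

With these assumed figures, pushing average utilization from 30% to 90% shrinks the fleet from 34 servers to 12, and per-core licensing falls proportionally -- the same workload, fewer boxes, fewer licensed cores.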

Microprocessor choices

Another IT cost management strategy is to run workloads on the system best suited to execute them. IT executives who have followed systems designs over the years know that microprocessors have reached physical design limitations at roughly 5 GHz, so vendors are taking other steps to speed up server performance. Some servers use redesigned I/O buses or direct-to-the-CPU buses, such as IBM's CAPI interface. Look for new servers built with multiple types of processors: traditional CPUs, graphics processing units for accelerating parallel processing workloads, and field-programmable gate arrays for moving data at line speed. These accelerated multiprocessor systems can process some workloads orders of magnitude faster than a traditional system design. Handling workloads more efficiently again means potentially using fewer systems: 100 times faster server processing for a given workload means 100 times lower server costs for the same task. Efficiency saves money.
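The fewer-systems arithmetic is simple enough to sketch directly; the throughput figures here are hypothetical, chosen only to illustrate the 100x claim:

```python
import math

def servers_for(jobs_per_hour, per_server_throughput):
    """How many servers a workload needs at a given per-server throughput."""
    return math.ceil(jobs_per_hour / per_server_throughput)

BASE_THROUGHPUT = 100   # hypothetical jobs/hour on a traditional server
SPEEDUP = 100           # assumed acceleration from GPUs/FPGAs on this workload

traditional = servers_for(10_000, BASE_THROUGHPUT)            # 100 servers
accelerated = servers_for(10_000, BASE_THROUGHPUT * SPEEDUP)  # 1 server
print(traditional, accelerated)
```

A 100x speedup collapses a 100-server fleet to a single accelerated system for this workload -- assuming, of course, the workload actually parallelizes onto the accelerators.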

Data center operational costs


In some localities worldwide, up to 50% of data center costs come from managing systems, peripherals, applications and databases. The way to rein in IT operational costs is to simplify: change broken or outdated processes, and purchase analytics-driven management software to speed diagnosis, fault isolation and repair.

In many data centers, mainframes process very high volumes of transactional data, so much of the most useful data resides within the mainframe environment. However, many enterprises consider the mainframe a high-volume batch processor rather than a strong analytics server, and offload mainframe data to other data warehouse servers for processing. To process that data elsewhere, it must be extracted from the mainframe database, transformed into a usable format for the data warehouse server and loaded into separate storage -- the extract, transform and load (ETL) process. Frequently, two or more copies of that data go through ETL for backup and restore purposes.
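The three ETL stages can be shown schematically. This is an illustrative toy, not real mainframe offload tooling, and the record shapes are invented for the example:

```python
def extract(source_rows):
    """Pull rows out of the source (mainframe) database; drop bad records."""
    return [row for row in source_rows if row is not None]

def transform(rows):
    """Reshape rows into the warehouse's expected record format."""
    return [{"id": r[0], "value": r[1]} for r in rows]

def load(rows, warehouse):
    """Write transformed rows into separate warehouse storage."""
    warehouse.extend(rows)
    return warehouse

warehouse = []
source = [(1, "txn-a"), None, (2, "txn-b")]   # None stands in for a bad record
load(transform(extract(source)), warehouse)
print(warehouse)
```

Each stage consumes compute and storage on its own tier, which is exactly why the process gets expensive when it runs repeatedly over multiple copies of the data.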

Data centers can reduce server and IT management costs by processing the data where it resides at the mainframe level. Current-generation mainframe servers process analytics workloads in near real time. Fixing broken ETL processes can save millions of dollars per year.

Contrary to popular belief, moving data is not free. Moving data from mainframes to other servers incurs costs for burning mainframe MIPS to extract and send the data; for receiving, loading and managing data on data warehouse systems, including the purchase of those systems and associated storage and networks; for network equipment and transmission; and for administrative time spent managing the servers and storage.
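Those cost components can be rolled into a rough monthly model. Every rate below is a hypothetical placeholder -- the point is the shape of the calculation, not the figures:

```python
def etl_offload_cost(gb_moved, copies=2,
                     mips_cost_per_gb=0.50,       # mainframe cycles to extract/send
                     network_cost_per_gb=0.05,    # transmission
                     warehouse_cost_per_gb=0.10,  # receive, load, store downstream
                     admin_hours=40, admin_rate=60.0):
    """Rough monthly cost of offloading mainframe data. All rates hypothetical."""
    total_gb = gb_moved * copies   # ETL runs again for each backup/restore copy
    per_gb = mips_cost_per_gb + network_cost_per_gb + warehouse_cost_per_gb
    return total_gb * per_gb + admin_hours * admin_rate

monthly = etl_offload_cost(10_000)   # 10 TB moved per month, two copies
print(f"${monthly:,.2f} per month")
```

Note how the per-gigabyte terms scale with the number of copies, while the administrative term is fixed overhead -- which is why eliminating redundant ETL passes pays off fastest at high data volumes.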

The cost of human insights

Human labor is one of the biggest operational costs in the data center. Depending on skill sets and geographical location, annual salaries for systems, storage, network, database and application administrators range from $75,000 to well over $125,000 (USD). If the enterprise needs fewer data center technicians to manage, tune and troubleshoot systems, it stands to save measurable money.

The systems management market is on the cusp of a major new development: cognitive analytics. Systems learn to analyze themselves and take corrective actions automatically, or notify a human when a failure is occurring or imminent (forecasting future failures is known as predictive analytics). IBM has taken the lead in the data center space with its Watson cognitive environment combined with several analytics products, including a log analysis offering and predictive analytics. Using IBM's Operational Analytics, systems managers search big data log files for anomalies; the system then takes corrective action via an automated script or simply notifies a systems manager of the problem. Expect other data center vendors to introduce their own cognitive analytics and management tool sets, or to partner with IBM to use Watson with proprietary analytics extensions. Systems are learning to identify problems through cognitive analytics software combined with management software -- the more self-diagnosis and automated management in place, the less human IT management you'll need.
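The scan-then-act loop described above can be sketched minimally. The patterns and script names are invented for illustration -- they are not drawn from IBM's Operational Analytics or any other product:

```python
import re

# Hypothetical map from anomaly patterns to automated remediation scripts.
REMEDIATIONS = {
    r"filesystem .* full": "cleanup_tmp.sh",
    r"OutOfMemoryError":   "restart_service.sh",
}

def triage(log_lines):
    """Scan logs; queue a known automated fix, else notify an operator."""
    actions = []
    for line in log_lines:
        for pattern, script in REMEDIATIONS.items():
            if re.search(pattern, line):
                actions.append(("run", script))
                break
        else:
            if "ERROR" in line:
                actions.append(("notify", "ops-team"))
    return actions

print(triage(["ERROR filesystem /var full on host01", "ERROR unexplained fault"]))
```

Known problems get an automated script; unrecognized errors fall through to a human -- the same division of labor the cognitive tools aim to shift further toward automation over time.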

For managing the mainframe, IBM's zAware monitors the environment, recording how a mainframe behaves when running optimally. If a problem occurs, zAware isolates changes to clearly show where the problem resides. A mainframe essentially monitors and diagnoses itself, and makes the mainframe manager aware of exactly where a problem lies. Expect this same type of software to surface in the distributed server world soon to optimize IT usage.
