A data center or computer centre (also datacenter) is a facility used to house computer systems and associated components, such as telecommunications and storage systems. It generally includes redundant or backup power supplies, redundant data communications connections, environmental controls (e.g., air conditioning, fire suppression) and security devices. Large data centers are industrial scale operations using as much electricity as a small town and sometimes are a significant source of air pollution in the form of diesel exhaust.
A data center (sometimes spelled datacenter) is a centralized repository, either physical or virtual, for the storage, management, and dissemination of data and information organized around a particular body of knowledge or pertaining to a particular business.
The National Climatic Data Center (NCDC), for example, is a public data center that maintains the world's largest archive of weather information. A private data center may exist within an organization's facilities or may be maintained as a specialized facility. Every organization has a data center, although it might be referred to as a server room or even a computer closet.
In that sense, data center may be synonymous with network operations center (NOC), a restricted access area containing automated systems that constantly monitor server activity, Web traffic, and network performance.
The Uptime Institute's tiered classification system is a four-tier system that provides a simple and effective means of identifying different data center site infrastructure design topologies. It is an industry-standard approach to describing site infrastructure functionality and addresses the need for a common benchmarking standard. The four tiers, as classified by the Uptime Institute, are the following:
Tier I: Basic Capacity
Tier II: Redundant Capacity Components
Tier III: Concurrently Maintainable
Tier IV: Fault Tolerant
[Image: racks of telecommunications equipment in part of a data center]
One of the challenges in earlier implementations of virtualized environments was the task of workload balancing. As multiple virtual servers were deployed on a single physical machine, it was difficult to tell whether the various servers were competing for resources. The question of which workloads were where, and how many of them could comfortably coexist, was more a matter of intuition than information.
With increasing maturity in the technology and management tools, that reality has changed. Workload balancing is now able to keep up with shifting business requirements, and the intelligence incorporated into these consoles provides memory management, automated resource optimization, and policy control to keep critical processes from being starved of resources. The value of being able to easily move a virtualized server from one physical machine to another, without losing track of the resources, is best realized by using automated virtualization management tools.
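The placement decision behind automated workload balancing can be sketched as a simple best-fit heuristic: pick the host with the most remaining headroom that can still satisfy the new virtual machine's demand. This is an illustrative sketch, not any vendor's actual algorithm; the `Host` class and `place_vm` function are hypothetical names.

```python
from dataclasses import dataclass, field

@dataclass
class Host:
    name: str
    cpu_capacity: float   # total vCPUs on the physical machine
    mem_capacity: float   # total memory in GB
    vms: list = field(default_factory=list)

    def headroom(self):
        """Return (spare vCPUs, spare GB) after current VM placements."""
        used_cpu = sum(vm["cpu"] for vm in self.vms)
        used_mem = sum(vm["mem"] for vm in self.vms)
        return self.cpu_capacity - used_cpu, self.mem_capacity - used_mem

def place_vm(hosts, vm):
    """Greedy best-fit: place the VM on the host with the most total
    headroom that can still satisfy its CPU and memory demand.
    Returns the chosen host name, or None if no host fits."""
    candidates = [h for h in hosts
                  if h.headroom()[0] >= vm["cpu"] and h.headroom()[1] >= vm["mem"]]
    if not candidates:
        return None  # placing this workload anywhere would cause contention
    best = max(candidates, key=lambda h: sum(h.headroom()))
    best.vms.append(vm)
    return best.name
```

Real balancers add live migration, affinity rules, and policy controls on top of a placement core like this, and rebalance continuously rather than only at deployment time.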
Workload management enables you to manage workload distributions to provide optimal performance for users and applications. Workload management comprises the following:
Connection Load Balancing: balances incoming connections across all of the instances that provide the requested service.
High Availability Framework: enables the database to keep its components in a running state at all times.
Fast Connection Failover: the ability of clients to fail connections over rapidly by subscribing to network events.
Runtime Connection Load Balancing: the ability of clients to allocate connections from the connection pool intelligently, based on the current service level reported by the database instances, when applications request a connection to complete some work.
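The last two capabilities can be illustrated with a small sketch: instances advertise a "goodness" score (e.g., spare capacity), connection borrows are weighted toward better-performing instances, and a down event zeroes out the failed instance so new borrows avoid it. The function names and the score-based protocol here are illustrative assumptions, not the actual API of any database client.

```python
import random

def pick_instance(metrics, rng=random.random):
    """Runtime connection load balancing sketch: choose a database
    instance with probability proportional to its advertised goodness
    score. `metrics` maps instance name -> non-negative score; an
    instance marked down (score 0) is never chosen."""
    total = sum(metrics.values())
    if total == 0:
        raise RuntimeError("no instance available for this service")
    threshold = rng() * total
    cumulative = 0.0
    for instance, score in metrics.items():
        cumulative += score
        if threshold < cumulative:
            return instance
    return instance  # guard against floating-point edge cases

def failover(metrics, down_instance):
    """Fast connection failover sketch: on a 'node down' event,
    zero out the failed instance so subsequent borrows avoid it."""
    metrics[down_instance] = 0
    return metrics
```

For example, with scores `{"inst1": 75, "inst2": 25}`, roughly three quarters of new connections go to `inst1`; after `failover(metrics, "inst1")`, all new connections go to `inst2`.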
Workload is the amount of work assigned to, or done by, a client, workgroup, server, or internetwork in a given time period. For example, in a manufacturing organization, a workload can be a combination of:
As data storage grows into the petabytes, and the associated processing grows with it, the biggest challenge enterprises will face in the near future is how to build an optimal computing environment for these workloads: one that completes them within the business-requested latency while keeping the computing power needed both efficient and dynamically scalable to handle increased demand.
Workload Optimization and Challenges
The biggest challenge most enterprises face today is, first, how to measure their workload size. Unlike sizing parameters such as Function Points (which measure the size of an application) or LOC (lines of code, the size of the raw computing code), there are few good industry-standard measures that indicate the size of a workload.
In today's world, the complexity of an IT organization is described by the dollar value of its IT budget ("ours is a $5 billion IT shop") rather than by statements like "we process XXXX workloads in a month." MIPS (millions of instructions per second) is one measure used to characterize the workload of an enterprise.
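As a concrete illustration of MIPS as a workload measure, the arithmetic is simply instructions executed divided by elapsed time in millions; the figures below are made up for the example.

```python
def mips(instructions_executed, seconds):
    """MIPS = instructions executed / (elapsed seconds * 10**6)."""
    return instructions_executed / (seconds * 1_000_000)

# A hypothetical batch run executing 3.6 trillion instructions
# over one hour (3600 s) averages 1000 MIPS.
rate = mips(3.6e12, 3600)
```

Mainframe capacity has historically been sized and billed in MIPS, which is why it is one of the few workload measures with wide industry recognition.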
The other issues in today's enterprise workload processing are:
Due to these application characteristics, most organizations are unable to optimize their workloads, because the workloads contend for the same resources, resulting in deadlock among them. Operational and capital expenses also remain the same even when the workloads are moved to a dynamic infrastructure environment such as the cloud.
Best Practices from Batch Jobs of the Legacy Era
Contrast the above characteristics of the older batch jobs with the monolithic stored procedures or business components that perform most of their processing in a single thread: even when a dynamic computing facility is available, they will not scale up much.
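The scaling difference can be sketched as follows: a monolithic loop cannot use extra capacity, while a batch split into independent chunks (as legacy batch jobs often were) can spread across whatever workers a dynamic environment provides. The per-record `process` function is a stand-in for real business logic.

```python
from concurrent.futures import ThreadPoolExecutor

def process(record):
    # placeholder for per-record business logic
    return record * 2

def run_monolithic(records):
    """Single-threaded pass: adding cores or hosts cannot speed this up."""
    return [process(r) for r in records]

def run_chunked(records, workers=4):
    """Split the batch into independent chunks that run in parallel,
    so extra capacity in a dynamic environment is actually used."""
    size = max(1, len(records) // workers)
    chunks = [records[i:i + size] for i in range(0, len(records), size)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = pool.map(run_monolithic, chunks)
    return [r for chunk in results for r in chunk]
```

Both paths produce the same results; the point is that the chunked form has a natural unit of parallelism to hand to a dynamic infrastructure, while the monolithic form does not. (In CPython, CPU-bound chunks would use a process pool rather than threads; the structure is the same.)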
There are many tools for technical management of virtual infrastructures: hypervisor configuration, VM performance monitoring, memory management, and so on. Without doubt, such low-level infrastructure management is important, but ultimately IT needs to support business-critical application services, not just infrastructure. In fact, Enterprise Management Associates (EMA) research shows 74% of enterprises with virtualization are using their virtual infrastructure to support production applications.
Managing a vendor- or platform-specific virtual infrastructure is just one part of managing business-critical applications. EMA data shows that most enterprises actually have multiple virtualization vendors: not just VMware, but also Microsoft, Citrix, Red Hat, IBM, Oracle/Sun, etc. Most enterprises also have multiple platforms: not just Windows, but also Linux, UNIX, i5/OS, z/OS, etc. In fact, the average enterprise has four different vendors, and four different platforms, in its virtualization environment alone. Moreover, the average enterprise also has a significant traditional or physical infrastructure; EMA research shows that in most cases, only 25-30% of the server environment is actually virtualized.
While virtualization vendors like VMware have great tools to support virtual infrastructure, they do not provide sophisticated tools for broad, multi-platform, multi-vendor, physical and virtual business workload management.
Workload Management (WM) is an emerging paradigm for IT systems management arising from the intersection of dynamic infrastructure, virtualization, identity management, and the discipline of software appliance development. WM enables the management and optimization of computing resources in a secure and compliant manner across physical, virtual and cloud environments to deliver business services for end customers.
The WM paradigm builds on the traditional concept of workload management, in which processing resources are dynamically assigned to tasks, or "workloads," based on criteria such as business-process priorities (for example, balancing business intelligence queries against online transaction processing), resource availability, security protocols, or event scheduling. WM extends this concept into the structure of the individual workloads themselves.