A service-level agreement (SLA) is a part of a service contract where the level of service is formally defined. In practice, the term SLA is sometimes used to refer to the contracted delivery time (of the service or performance). As an example, internet service providers will commonly include service level agreements within the terms of their contracts with customers to define the level(s) of service being sold in plain language terms. In this case the SLA will typically have a technical definition in terms of mean time between failures (MTBF), mean time to repair or mean time to recovery (MTTR); various data rates; throughput; jitter; or similar measurable details.
A service-level agreement is a negotiated agreement between two parties, where one is the customer and the other is the service provider. This can be a legally binding formal contract or an informal "contract" (for example, between internal departments). Contracts between the service provider and other third parties are often (incorrectly) called SLAs – because the level of service has been set by the (principal) customer, there can be no "agreement" between third parties; these agreements are simply "contracts." Operational-level agreements (OLAs), however, may be used by internal groups to support SLAs.
The SLA records a common understanding about services, priorities, responsibilities, guarantees, and warranties. Each area of service scope should have the "level of service" defined. The SLA may specify the levels of availability, serviceability, performance, operation, or other attributes of the service, such as billing. The "level of service" can also be specified as "target" and "minimum," which allows customers to be informed what to expect (the minimum), while providing a measurable (average) target value that shows the level of organization performance. In some contracts, penalties may be agreed upon in the case of non-compliance with the SLA (but see "internal" customers below). It is important to note that the "agreement" relates to the services the customer receives, and not how the service provider delivers that service.
Service level agreements can contain numerous service performance metrics with corresponding service level objectives. A common case in IT service management is a call center or service desk.
In the days of operational transaction-processing systems, a service level agreement (SLA) was the agreement between the users and the IT department governing the expectations of the online environment. SLAs were created as a means of measuring and managing the service levels delivered to the end users for that environment. This agreement was a formal statement regarding proper service levels. Although it was somewhat like a contract, it was never a legally enforceable instrument.
SLAs covered two aspects of the online environment: response time and availability. Some SLAs were elaborate; some were very simple. A typical SLA might address the following:
1. When must data stores be loaded (by what time)?
- What will happen if a persistent problem occurs (“swat” teams)?
- Who is responsible for fixing process chains and who pays?
- Do you get a discount for each DataStore that is not loaded in time?
2. How should software fixes be applied?
- When will service packs, SAP Notes, and fixes be applied?
- Who pays for it?
- Who is responsible for testing them?
3. When will the system be upgraded?
- When will upgrades, service packs, and fix packs occur, and how is the pricing determined?
- Who pays for it and who is responsible for testing?
- How long can the system be off-line?
4. Minimum uptime and target uptime
- What is uptime defined as (data store loaded vs. queries available vs. security fixes applied vs. portal uptime vs. third-party reporting tool uptime vs. network uptime, etc.)?
- What are the penalties (money) for missing the uptime requirements?
5. Issues log
- What issues must be logged?
- Who owns the log? Do you have access?
- Can entries be updated, or must an audit trail be preserved?
6. Backup and disaster recovery
- What is included in the backup and when is it taken?
- When will restore abilities be tested?
- How fast must restore occur, and what data stores and users will first have access (priority list)?
7. Who owns the data?
- If you switch vendors, who owns the data?
- How will you get access to the data? Do you get full access to all of it?
- Which of the vendor’s employees get access to your data? Can they share it with your competitors?
8. Service tickets
- When will service tickets be monitored?
- What are the categories and who will resolve them?
- What is the resolution process, and what are the timelines?
- How are customer and support satisfaction measured?
9. Escalation process
- What will happen if an issue cannot be resolved by the internal IT department/vendor and your business SLA manager?
- What are the steps needed to terminate the SLA contract, and are there any payments/fault payments or budget recourse (i.e., moving money from cost centers)? The more detail you put into the contract up front, the easier the SLA will be to measure and the more likely you are to have a successful relationship.
10. Be reasonable. Not everything will run 100% correctly all the time.
- Some examples of reasonable performance include:
a) 90% of all queries run under 20 seconds
b) System is available 98% of the time
c) Data loads are available at 8am — 99% of the time
d) User support tickets are answered within 30 minutes (first response)
e) User support tickets are closed within 48 hours — 95% of the time.
f) System is never unavailable for more than 72 hrs — including upgrades, service packs, and disaster recovery
g) Delta backups are taken every 24-hour cycle and full system backups are taken every weekend
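Targets like those above are only useful if someone actually checks measured results against them. The following is a minimal sketch of such a check; the metric names and measured values are hypothetical illustrations echoing the list, not part of any real SLA tooling.

```python
# Illustrative check of measured service metrics against SLA targets.
# Metric names and thresholds are hypothetical, mirroring the examples above.

SLA_TARGETS = {
    "queries_under_20s_pct": 90.0,   # a) 90% of queries run under 20 seconds
    "availability_pct": 98.0,        # b) system available 98% of the time
    "data_load_on_time_pct": 99.0,   # c) loads ready by 8am, 99% of the time
    "ticket_closed_48h_pct": 95.0,   # e) tickets closed within 48h, 95% of the time
}

def evaluate_sla(measured: dict) -> dict:
    """Return {metric: True/False}; True means the target was met."""
    return {name: measured.get(name, 0.0) >= target
            for name, target in SLA_TARGETS.items()}

# One hypothetical reporting period: availability misses its 98% target.
measured = {
    "queries_under_20s_pct": 93.5,
    "availability_pct": 97.2,
    "data_load_on_time_pct": 99.4,
    "ticket_closed_48h_pct": 96.0,
}

results = evaluate_sla(measured)
breaches = [name for name, ok in results.items() if not ok]
print(breaches)  # ['availability_pct']
```

A report like this, produced each period, also gives both parties an agreed basis for the penalty and escalation discussions in items 4 and 9.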
If, by default, all misclassifications had equal weights, target values (class labels) that appear less frequently would not be given any special weight. You might then obtain a model that misclassifies these less frequent target values while still achieving a very low overall error rate. To improve classification decision trees and to get better models from such skewed data, the Tree heuristic automatically generates an appropriate cost matrix to balance the distribution of class labels when a decision tree is trained. You can also adjust the cost matrix manually.
A cost matrix (error matrix) is also useful when specific classification errors are more severe than others. The Classification mining function tries to avoid classification errors with a high error weight. The trade-off of avoiding 'expensive' classification errors is an increased number of 'cheap' classification errors. Thus, the number of errors increases while the cost of the errors decreases in comparison with the same classification without a cost matrix. Weights specified must be greater than or equal to zero. The default weight is 1. The cost matrix diagonal must be zero.
Your input data might contain information about customers. 99% of these customers are satisfied, and 1% of these customers are not satisfied. You might want to build a model that predicts whether a customer is satisfied by using only a small training set of data. If you use only a small set of training data, you might obtain a degenerate pruned tree. This tree might consist of only one node that predicts that all of the customers are satisfied. This model seems to be of high quality because the error rate is very low (1%). However, it cannot reveal which attribute values describe a customer who is not satisfied; for that, different model behavior is required.
You might want to enforce that the misclassification of a customer who is not satisfied is considered ten times as expensive as the misclassification of a customer who is satisfied.
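The effect of such a cost matrix can be sketched with a minimal expected-cost decision rule in plain Python: instead of predicting the most probable class, predict the class with the lowest expected cost. This is an illustration of the general technique, not the Tree heuristic itself, and the probabilities and 10:1 weights below are the hypothetical values from the example.

```python
# Cost-sensitive prediction: choose the class that minimizes expected cost.
# Rows of the cost matrix index the true class, columns the predicted class;
# the diagonal is zero, as required above.

CLASSES = ["satisfied", "not_satisfied"]

# COST[true][predicted]: misclassifying a dissatisfied customer as
# satisfied is ten times as expensive as the reverse error.
COST = {
    "satisfied":     {"satisfied": 0.0,  "not_satisfied": 1.0},
    "not_satisfied": {"satisfied": 10.0, "not_satisfied": 0.0},
}

def predict_min_cost(probs: dict) -> str:
    """probs maps each class to its estimated probability of being true."""
    expected = {
        predicted: sum(probs[true] * COST[true][predicted] for true in CLASSES)
        for predicted in CLASSES
    }
    return min(expected, key=expected.get)

# At 8% estimated dissatisfaction, "satisfied" is still the cheaper call
# (expected cost 0.08*10 = 0.8 versus 0.92*1 = 0.92) ...
print(predict_min_cost({"satisfied": 0.92, "not_satisfied": 0.08}))  # satisfied
# ... but at 15% the 10x penalty flips the decision.
print(predict_min_cost({"satisfied": 0.85, "not_satisfied": 0.15}))  # not_satisfied
```

This is exactly the trade-off described above: the rule accepts more "cheap" errors (flagging some satisfied customers) to avoid the "expensive" one (missing a dissatisfied customer).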
Return on investment (ROI) is one way of considering profits in relation to capital invested. Return on assets (ROA), return on net assets (RONA), return on capital (ROC) and return on invested capital (ROIC) are similar measures with variations on how 'investment' is defined.
Marketing not only influences net profits but can also affect investment levels. New plants and equipment, inventories, and accounts receivable are three of the main categories of investment that can be affected by marketing decisions.
The purpose of the "return on investment" metric is to measure per-period rates of return on dollars invested in an economic entity. ROI and related metrics (ROA, ROC, RONA and ROIC) provide a snapshot of profitability adjusted for the size of the investment assets tied up in the enterprise. Marketing decisions have an obvious potential connection to the numerator of ROI (profits), but these same decisions often influence asset usage and capital requirements (for example, receivables and inventories). Marketers should understand the position of their company and the returns expected. ROI is often compared to expected (or required) rates of return on dollars invested.
For a single-period review, simply divide the return (net profit) by the resources that were committed (the investment):
return on investment (%) = Net profit ($) / Investment ($) × 100
return on investment = (gain from investment - cost of investment) / cost of investment