To make rational decisions about the structure of a reverse supply chain, it’s best to divide the chain into its five key components and analyze options, costs, and benefits for each:
Product Acquisition. Our research suggests that this task—retrieving the used product—is key to creating a profitable chain. The quality, quantity, and timing of product returns need to be carefully managed. Otherwise, companies may find themselves flooded with returned products of such variable quality that efficient remanufacturing is impossible. Companies often will need to work closely with retailers and other distributors to coordinate collection.
Reverse Logistics. Once collected, products need to be transported to facilities for inspection, sorting, and disposition. There is no one “best” design for a reverse logistics network; each has to be tailored to the products involved and the economics of their reuse.
Inspection and Disposition. The testing, sorting, and grading of returned products are labor-intensive and time-consuming tasks.
Reconditioning. Companies may capture value from returned products by extracting and reconditioning components for reuse or by completely remanufacturing the products for resale.
Distribution and Sales. If a company plans to sell a recycled product, it first needs to determine whether there is demand for it or whether a new market must be created. If it’s the latter, the company should expect to make heavy investments in consumer education and other marketing efforts.
Return procedures often confuse customers because of the terms and conditions applied, reducing customer satisfaction. However, organisations are increasingly trying to simplify procedures and enhance customer service to gain an edge.
Reverse Logistics and Analytics
Analytics provides a means of using data from tens of thousands of points, along with internal and external factors, to understand what is happening. Analytics can also be applied to anticipate what is likely to happen, what needs to happen to avoid poor outcomes, and why certain events may lead to a given outcome. It sounds complicated, but cloud computing technology has enabled a new generation of analytics that can be applied to reverse logistics.
To use analytics in reverse logistics in the warehouse, the Warehouse Manager should follow these steps:
- Integrate systems, allowing inbound and outbound systems, as well as business-to-business and consumer-facing systems, to communicate.
- Monitor inventory from a single location.
- Simplify all planning and fulfillment tools by automating the process.
- Use Internet-enabled devices, connected through the Internet of Things (IoT), to obtain real-time visibility and full unit lifecycle tracking.
- Set realistic key performance indicators and metrics for insights gained from analytics in reverse logistics.
Reverse logistics analytics can help companies derive insight from vast volumes of streaming data for truly real-time inventory optimization, route optimization, and transport analytics. By leveraging reverse logistics analytics solutions, companies can gain deeper, faster, more actionable insights into operations, customers, and markets, gaining an edge over competitors.
Reverse Logistics Metrics
Typical business metrics used to measure reverse logistics address cost, service, and supplier order-fulfillment characteristics, and may include:
- Financial key performance indicators (KPIs) including returns cost as a percentage of sales, returns processing costs by category/channel/supplier, shipping costs, inventory levels and carrying costs, write-offs
- Responsiveness including return process cycle time (days)
- Customer feedback and experience
- Returns rationale including the percentage of returns due to damage, faults, preference changes and accuracy of delivery.
- Cycle time: Cycle time is an important metric for measuring the performance of your reverse logistics system. If return processes are standardized and streamlined, the return cycle should be short and “hassle-free”.
- Amount of product reclaimed and resold: For a better reverse logistics system, it is important for companies to track the percentage of the reclaimed and resold product. Also, this metric helps in estimating the total recaptured value.
- Percentage of material recycled: This metric in reverse logistics management helps in tracking the percentage of recycled product in the stream of reverse logistics. Furthermore, this helps in inventory optimization and better supply chain management.
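As a minimal sketch, the percentage-based metrics above can be computed from simple counts; all figures below are invented for illustration.

```python
# Hypothetical monthly figures for a returns operation (illustrative only).
units_sold = 12_000
units_returned = 540
units_resold = 380        # reclaimed and resold after inspection/reconditioning
units_recycled = 110      # routed to material recycling
revenue = 600_000.0
returns_processing_cost = 16_200.0

return_rate = 100 * units_returned / units_sold
resold_pct = 100 * units_resold / units_returned        # recaptured-value proxy
recycled_pct = 100 * units_recycled / units_returned
returns_cost_pct_of_sales = 100 * returns_processing_cost / revenue

print(f"Return rate:            {return_rate:.1f}%")
print(f"Reclaimed and resold:   {resold_pct:.1f}%")
print(f"Material recycled:      {recycled_pct:.1f}%")
print(f"Returns cost vs. sales: {returns_cost_pct_of_sales:.2f}%")
```

Tracked month over month, these ratios make it easy to spot a rising return rate or a falling recapture percentage early.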
Decision Making in Reverse Logistics
Several decision-making models have been proposed for reverse logistics: a pricing-decisions model for a fuzzy closed-loop supply chain with retail competition in the marketplace; the Delphi method to differentiate the criteria for evaluating traditional suppliers and green suppliers; fuzzy AHP (Analytic Hierarchy Process) in a reverse supply chain to select the most economical product to be reprocessed and the potential recovery facilities; and a hybrid fuzzy multi-criteria decision-making model for evaluating green suppliers.
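To illustrate the AHP step, the sketch below computes priority weights for three hypothetical green-supplier criteria using classical (crisp) AHP with the geometric-mean approximation; fuzzy AHP replaces these crisp pairwise judgments with fuzzy numbers, so this is a simplified stand-in.

```python
import math

# Illustrative pairwise-comparison matrix for three green-supplier criteria
# (cost, quality, environmental compliance); the judgments are hypothetical.
A = [
    [1.0, 3.0, 0.5],    # cost vs. (cost, quality, compliance)
    [1 / 3, 1.0, 0.25],
    [2.0, 4.0, 1.0],
]

# Geometric-mean (approximate principal-eigenvector) method for the weights.
geo = [math.prod(row) ** (1 / len(row)) for row in A]
weights = [g / sum(geo) for g in geo]

for name, w in zip(["cost", "quality", "compliance"], weights):
    print(f"{name}: {w:.3f}")
```

With these judgments, environmental compliance receives the highest weight; in practice a consistency ratio would also be checked before using the weights.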
In reverse logistics, product flow consists of three stages:
- collection (Stage A)
- sorting and testing (Stage B)
- processing (Stage C)
Processing is either complete processing of a single product stream, such as carpet recycling, or processing of split streams, in which multiple products are handled, such as splitting recovered construction sand into clean, half-clean, and polluted sand. Processing produces four main types of output: finished products, such as half-clean construction sand; refurbished machines, such as copiers or lease-return computers, and recovered spare parts; reprocessed raw material, such as nylon from carpet; and waste for disposal.
Optimizing Reverse Logistics
Companies should aim to cut down on returns while boosting sales, which likewise helps to give a strategic advantage over the competition. One possibility is to increase product discounts, run promotions more often, and keep stock up to date with new, eye-catching deals so that consumers do not need to return products and are fully satisfied with their purchases.
Another option consists of establishing a short trial period during which the customer can return the product if it is not to their liking. That said, once this time is up, returns would not be accepted.
To lower reverse logistics costs, companies must focus on:
- Product life management. Items go through different phases (introduction, growth, maturity and decline) and each one of these requires an individual approach to management.
- Information technology systems. These provide real-time tracking of the products. Automated data collection about reverse product flows and processing these data are pivotal in the development of efficient management of this chain.
Obsolescence management takes into account the life span of all the moving pieces of the complex reverse logistics system, with a plan to replace obsolete parts as they age, before the problem becomes a crisis. This is not a simple process. Challenges include parts availability, diminishing materials, counterfeit avoidance, and knowing where to look to find what you need.
Obsolescence can present itself in two ways: the item in question is no longer suitable for current demands, or it is no longer available from the original manufacturer.
Obsolescence is when a part, service, or resource is no longer available even though it is still needed. Every industry that depends on technology runs into this critical issue. If not managed effectively, obsolescence will have a negative impact on reverse logistics business.
Technology is advancing at a pace that roughly doubles capability every two years, and this rapidly changing landscape often renders needed parts difficult to find or unavailable. This is a critical issue for applications with long lifecycles.
Obsolescence management must be anticipated. It is neither cost-effective nor necessary to replace a whole machine simply because one part or a piece of a system is no longer manufactured.
Factors leading to Obsolescence
It is essential to understand how obsolescence can occur and the types of obsolescence that exist.
Technological Evolution – A new generation of technology effectively makes its predecessor obsolete. An example of this would be faster microprocessors making slower ones obsolete. Typically, the new generation technology has improved performance and functionality, often at a lower cost than its predecessors.
Technological Revolutions – In a technological revolution, a new technology supersedes (displaces) its predecessor. Examples include the CD-ROM, which offered greater storage capacity and speed than the floppy disk, and DVD/Blu-ray discs, which offer better quality and more multimedia functions than VHS.
Market Forces – Obsolescence due to market forces occurs when the demand for a component or technology falls, and the manufacturer considers it uneconomical to continue production. This is an increasing problem, as low-volume markets no longer have the purchasing power necessary to persuade manufacturers to continue production. Part manufacturers and distributors may not be willing to manufacture or stock parts that have a small market. The cost of managing the distribution of low-volume parts while providing affordable prices is a challenge; hence, the few distributors that do provide low-volume parts charge high fees.
Environmental Policies and Restrictions – Obsolescence can be caused by directives, rules, and other legislation imposed by governments. For example, EC directives are regulations of the European Community requiring all member states to reach specific goals associated with the usage and waste of specific materials, such as the directive on the Restriction of Hazardous Substances (RoHS), which bans specific substances in products sold in the EU that could end up in the waste stream.
Allocation – Allocation obsolescence is caused by long product lead time, resulting in temporary obsolescence usually categorized as a short-term supply chain disruption. For example, during the worldwide recession in 2008–2009, many manufacturers reduced production and inventory in order to stabilize their businesses. As customers for parts recover and the demand for parts grows, temporary unavailability of parts can result. In addition, in some cases it appears that chip manufacturers may be delaying capital expenditure while enjoying the higher prices.
Planned Obsolescence – Planned obsolescence refers to an assortment of techniques used to artificially limit the durability of manufactured goods in order to stimulate repetitive consumption.
Planned obsolescence, also referred to as built-in obsolescence, is a method of stimulating consumer demand by designing products that wear out or become out-of-date after limited use. Manufacturers increase profits by forcing the customer to buy the next generation of the product after a fixed (planned) useful or functional product life cycle. If the manufacturer has a monopoly, or at least an oligopoly, planned obsolescence or built-in obsolescence may be part of their business strategy.
Reactive Obsolescence Management
The reactive management approach can be employed to mitigate obsolescence. These tactics and strategies may be short-term solutions adopted for a particular case to bridge to a future design refresh, or longer-term efforts employed until the end of support of the system. Depending on factors such as the system's support life, the period and volume of production, the type of part, and plans for future design refreshes, the equipment manufacturer must decide which strategy best handles part obsolescence issues when they occur.
The reactive obsolescence management approach consists of determining an appropriate resolution to the unavailability of a component that is already obsolete or the impending unavailability of a component that will become obsolete in the near future, executing the resolution process, and documenting and tracking the actions taken.
Most organizations practicing reactive obsolescence management develop detailed plans for how obsolescence events will be resolved when they occur.
Component manufacturing companies issue a product discontinuance notice (PDN) when they decide to cease production of a part or replace it with a new part that is sufficiently different to warrant a new part number. If component manufacturing companies change their products, they issue a product change notice (PCN) to inform their customers about the actual changes to parts.
Obsolescence Mitigation Tactics – There are a number of possible solutions to obsolescence problems after they occur. These range in complexity from a simple part substitution to a major redesign of a product. The selection of the most appropriate solution is a complex task involving a large number of factors, including the time available, the expected future production and support lifetime, and the expected occurrence of future product developments.
- Negotiation with the manufacturer
- Part substitution
- Lifetime Buys/bridge buys
Proactive Obsolescence Management
Proactive obsolescence management tracks life-cycle information on selected parts in order to prevent obsolescence-driven risks such as production stops and expensive redesigns. It means that the availability of parts is monitored and actions to manage obsolescence are taken prior to a part's actual discontinuance. Most organizations will not have enough resources to proactively manage all the parts in their systems, so parts need to be ranked in order of importance. The criteria for proactive management are generally as follows:
- There has to actually be some demand for the part (if you are never going to need the part again, proactive management is not necessary).
- The part has to be at some risk of going obsolete.
- There has to be some risk of running out of the part if it does go obsolete (if five times as many parts as will ever be needed are currently in stock, proactive management is not needed).
- The part has to be in the critical path and pose some level of difficulty to manage if it goes obsolete.
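The four screening criteria above can be sketched as a simple filter; the part records, field names, and thresholds below are all hypothetical.

```python
# Hypothetical part records; fields mirror the four screening criteria above.
parts = [
    {"id": "PSU-04", "future_demand": 120, "obsolescence_risk": 0.7,
     "stock": 60,   "critical_path": True},
    {"id": "CAP-19", "future_demand": 0,   "obsolescence_risk": 0.9,
     "stock": 10,   "critical_path": True},   # no demand -> skip
    {"id": "RAM-02", "future_demand": 500,  "obsolescence_risk": 0.8,
     "stock": 5000, "critical_path": True},   # heavily overstocked -> skip
]

def needs_proactive_management(p, risk_threshold=0.5, overstock_factor=5):
    # All four criteria must hold: demand, obsolescence risk, run-out risk,
    # and criticality.
    return (p["future_demand"] > 0
            and p["obsolescence_risk"] >= risk_threshold
            and p["stock"] < overstock_factor * p["future_demand"]
            and p["critical_path"])

watch_list = [p["id"] for p in parts if needs_proactive_management(p)]
print(watch_list)  # only PSU-04 passes all four screens
```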
Solutions for Obsolescence Management
Obsolescence management consists of proactively managing the redesign of a system based on forecasted obsolescence dates, production and support plans, and employing effective mitigation strategies. There are a variety of strategies to plan for obsolescence mitigation and ensure that applications and processes remain operable. Livingston describes a variety of approaches that can aid in minimizing future obsolescence issues:
- System Architecture Approaches – A modular, open, integrated systems architecture gives developers and designers flexibility when dealing with rapid technology changes and allows components to be replaced more easily.
- Technology Independence – Technology independence also allows devices to be substituted or replaced by newer, next-generation products without affecting existing products and modules, which is important given the rapid turnover of commercial products.
- Software Portability – Software portability refers to software that is compiled independently of a target hardware device, so that the software can be executed on a replacement device without modification.
Using Analytics for Obsolescence Management
The main objective of regression analysis is to develop relationships between two or more variables in order to gain insight into one of them from the values of the others. There are several types of regression models, including simple linear regression, multiple regression, and non-linear regression. The variables in a regression model are considered to be random, which suits obsolescence forecasting, since the factors driving obsolescence are highly variable. The simple regression model relates one independent variable to a dependent variable.
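A minimal sketch of simple linear regression, fitting one independent variable to a dependent variable by least squares; the data are invented for illustration.

```python
# Minimal simple linear regression (one predictor) via least squares,
# using illustrative data: years in production vs. number of active sources.
xs = [0, 1, 2, 3, 4, 5]          # years since introduction
ys = [9, 8, 8, 6, 5, 4]          # remaining manufacturers (made up)

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
sxx = sum((x - mean_x) ** 2 for x in xs)

slope = sxy / sxx                # change in source count per year
intercept = mean_y - slope * mean_x

print(f"y = {intercept:.2f} + {slope:.2f} * x")
```

A negative slope here would quantify how quickly sources for a part are disappearing, which feeds directly into obsolescence risk estimates.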
Obsolescence forecasting is a proactive management approach that aids systems developers and manufacturers in identifying component obsolescence and discontinuances before they occur. Systems developers and manufacturers can better plan, design, and sustain their products and systems by understanding the life cycles of the products they utilize.
The process is a seven-step approach to forecasting obsolescence, listed below.
Step 1: Identify part/technology group – The primary purpose of this step is to identify the technology group of the part. Consider the part technology group to be a family of parts with the same technology and functional characteristics, irrespective of the company that produces the part.
Step 2: Identify part primary and secondary attributes – The primary attribute and secondary attributes are characteristics of the technology.
Table: Primary and secondary attributes for IC part classes.
Step 3: Determine number of sources – In this step the number of manufacturers and suppliers are determined for the part. If the part is already obsolete, no sources will be found. Also, if the part is new and not introduced into the market yet, no sources will be found.
Step 4: Obtain sales data of primary attribute – This step calls for data mining of sales data, which is used to identify life-cycle curves. Market research companies readily compile sales data. Life cycle curves are computed with sales in number of units shipped, but if this data is not available, sales dollars or market share could be used.
Step 5: Construct profile and determine parameters – Use the sales data to construct the life-cycle curve of the part; a Gaussian distribution is used to fit a curve to the data. The obsolescence zone is defined by Sandborn using an ordered pair.
Step 6: Determine the zone of obsolescence – The zone of obsolescence is a time-interval estimate in which parts become obsolete; Sandborn splits the life cycle into ordered pairs.
Step 7: Modify the zone of obsolescence – In this step, determine modifications to the life-cycle intervals based on the secondary attributes. If the years to obsolescence for any of the secondary attributes fall within the life span (±3σ years) of the primary attribute, the years to obsolescence for the generic part are modified.
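Steps 4 through 6 can be sketched as follows. The yearly sales figures are invented, and the (2.5σ, 3.5σ) window after the sales peak is an assumption standing in for Sandborn's exact ordered-pair definition, which is not reproduced here.

```python
import math

# Hypothetical yearly unit shipments for a part's primary attribute (step 4).
sales = {2010: 5, 2011: 18, 2012: 45, 2013: 60, 2014: 48, 2015: 20, 2016: 6}

# Step 5: fit a Gaussian life-cycle curve via the sales-weighted mean/std.
total = sum(sales.values())
mu = sum(year * units for year, units in sales.items()) / total
var = sum(units * (year - mu) ** 2 for year, units in sales.items()) / total
sigma = math.sqrt(var)

# Step 6: take the obsolescence zone as an interval after the sales peak;
# the (2.5 sigma, 3.5 sigma) window below is an illustrative assumption.
zone = (mu + 2.5 * sigma, mu + 3.5 * sigma)
print(f"peak ~ {mu:.1f}, sigma ~ {sigma:.2f}")
print(f"obsolescence zone ~ {zone[0]:.1f} to {zone[1]:.1f}")
```

Step 7 would then shift or shrink this interval if any secondary attribute reaches obsolescence within the primary attribute's ±3σ life span.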
The life cycle of a typical electronic part usually advances through six stages: introduction, growth, maturity, decline, phase-out, and discontinuance. Certain parts, however, do not conform to this life cycle as a result of market dynamics.
Predictive analytics is the use of advanced analytic techniques that leverage historical data to uncover real-time insights and to predict future events. The use of predictive analytics is a key milestone on your analytics journey — a point of confluence where classical statistical analysis meets the new world of artificial intelligence (AI).
Predictive analytics uses historical data to predict future events.
Predictive analytics helps teams in industries as diverse as finance, healthcare, pharmaceuticals, automotive, aerospace, and manufacturing.
- Automotive – Breaking new ground with autonomous vehicles. Companies developing driver assistance technology and new autonomous vehicles use predictive analytics to analyze sensor data from connected vehicles and to build driver assistance algorithms.
- Aerospace – Monitoring aircraft engine health. To improve aircraft up-time and reduce maintenance costs, an engine manufacturer created a real-time analytics application to predict subsystem performance for oil, fuel, liftoff, mechanical health, and controls.
- Energy Production – Forecasting electricity price and demand. Sophisticated forecasting apps use models that monitor plant availability, historical trends, seasonality, and weather.
- Financial Services – Developing credit risk models. Financial institutions use machine learning techniques and quantitative tools to predict credit risk.
- Industrial Automation and Machinery – Predicting machine failures. A plastic and thin film producer saves 50,000 Euros monthly using a health monitoring and predictive maintenance application that reduces downtime and minimizes waste.
- Medical Devices – Using pattern-detection algorithms to spot asthma and COPD. An asthma management device records and analyzes patients’ breathing sounds and provides instant feedback via a smart phone app to help patients manage asthma and COPD.
Predictive Analytics Workflow
Typically, the workflow for a predictive analytics application follows these basic steps:
- Import data from varied sources, such as web archives, databases, and spreadsheets, in CSV and other formats.
- Clean the data by removing outliers and combining data sources. Identify data spikes, missing data, or anomalous points to remove. Then aggregate the different data sources, for example into a single table.
- Develop an accurate predictive model based on the aggregated data using statistics, curve fitting tools, or machine learning. Forecasting is a complex process with many variables, so you might choose to use neural networks to build and train a predictive model. Iterate through your training data set to try different approaches. When the training is complete, you can try the model against new data to see how well it performs.
- Integrate the model into a load forecasting system in a production environment. Once you find a model that accurately forecasts the load, you can move it into your production system, making the analytics available to software programs or devices, including web apps, servers, or mobile devices.
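The cleaning and aggregation steps above can be sketched as follows; the two data sources, the median/MAD spike rule, and the table layout are all illustrative assumptions rather than a prescribed method.

```python
import statistics

# Illustrative "clean" step: drop spikes more than 3 MADs from the median,
# then aggregate two hypothetical monthly sources into one table.
source_a = {"2023-01": 410.0, "2023-02": 395.0, "2023-03": 9_999.0}  # spike
source_b = {"2023-01": 21.0, "2023-02": 24.0, "2023-03": 22.0}

def drop_spikes(series, k=3.0):
    """Remove points more than k median-absolute-deviations from the median."""
    med = statistics.median(series.values())
    mad = statistics.median(abs(v - med) for v in series.values()) or 1.0
    return {t: v for t, v in series.items() if abs(v - med) <= k * mad}

clean_a = drop_spikes(source_a)
# Aggregate both sources into one table keyed by period.
table = {t: (clean_a.get(t), source_b.get(t))
         for t in sorted(set(clean_a) | set(source_b))}
print(table)
```

The spike in source_a is dropped and shows up as a gap (None) in the merged table, which a later modeling step can impute or ignore.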
Types of Predictive Analytics
Generally, the term predictive analytics is used to mean predictive modeling, “scoring” data with predictive models, and forecasting. However, people are increasingly using the term to refer to related analytical disciplines, such as descriptive modeling and decision modeling or optimization. These disciplines also involve rigorous data analysis, and are widely used in business for segmentation and decision making, but have different purposes and the statistical techniques underlying them vary.
Predictive models – Predictive modeling uses predictive models to analyze the relationship between the specific performance of a unit in a sample and one or more known attributes or features of the unit. The objective of the model is to assess the likelihood that a similar unit in a different sample will exhibit the specific performance.
The available sample units with known attributes and known performances are referred to as the “training sample”. The units in other samples, with known attributes but unknown performances, are referred to as “out of [training] sample” units. The out of sample units do not necessarily bear a chronological relation to the training sample units.
Descriptive models – Descriptive models quantify relationships in data in a way that is often used to classify customers or prospects into groups. Unlike predictive models that focus on predicting a single customer behavior (such as credit risk), descriptive models identify many different relationships between customers or products.
Decision models – Decision models describe the relationship between all the elements of a decision—the known data (including results of predictive models), the decision, and the forecast results of the decision—in order to predict the results of decisions involving many variables.
Analytical Techniques Used
The approaches and techniques used to conduct predictive analytics can broadly be grouped into regression techniques and machine learning techniques.
Regression techniques –
Regression models are the mainstay of predictive analytics. The focus lies on establishing a mathematical equation as a model to represent the interactions between the different variables in consideration. Depending on the situation, there are a wide variety of models that can be applied while performing predictive analytics.
Linear regression model – The linear regression model analyzes the relationship between the response or dependent variable and a set of independent or predictor variables. This relationship is expressed as an equation that predicts the response variable as a linear function of the parameters.
The goal of regression is to select the parameters of the model so as to minimize the sum of the squared residuals. This is referred to as ordinary least squares (OLS) estimation and results in best linear unbiased estimates (BLUE) of the parameters if and only if the Gauss-Markov assumptions are satisfied.
Once the model has been estimated, we want to know whether the predictor variables belong in the model, that is, whether the estimate of each variable's contribution is reliable. To do this we check the statistical significance of the model's coefficients, which can be measured using the t-statistic.
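A minimal sketch of OLS estimation and the slope's t-statistic, on invented data.

```python
import math

# Illustrative data: test whether the predictor x contributes reliably to y.
xs = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
ys = [2.1, 2.9, 4.2, 4.8, 6.1, 6.9]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs)
b1 = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx   # OLS slope
b0 = my - b1 * mx                                             # OLS intercept

# Residual variance with n - 2 degrees of freedom, then the slope's
# standard error and t-statistic (compare against a t-table with n - 2 df).
sse = sum((y - (b0 + b1 * x)) ** 2 for x, y in zip(xs, ys))
se_b1 = math.sqrt(sse / (n - 2) / sxx)
t_stat = b1 / se_b1
print(f"slope = {b1:.3f}, t = {t_stat:.1f}")
```

A large t-statistic relative to the critical value for n − 2 degrees of freedom indicates the slope estimate is reliably different from zero.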
Time series models – Time series models are used for predicting or forecasting the future behavior of variables. These models account for the fact that data points taken over time may have an internal structure (such as auto correlation, trend or seasonal variation) that should be accounted for.
Time series models estimate difference equations containing stochastic components. Two commonly used forms of these models are autoregressive models (AR) and moving average (MA) models. The Box–Jenkins methodology (1976) developed by George Box and G.M. Jenkins combines the AR and MA models to produce the ARMA (autoregressive moving average) model, which is the cornerstone of stationary time series analysis.
Box and Jenkins proposed a three-stage methodology involving model identification, estimation and validation. The identification stage involves identifying if the series is stationary or not and the presence of seasonality by examining plots of the series, autocorrelation and partial autocorrelation functions.
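As a hand-rolled sketch of the simplest autoregressive model, the AR(1) below is estimated by least squares on an invented series; this stands in for, and is much simpler than, the full Box–Jenkins identification/estimation/validation procedure.

```python
# Minimal AR(1) sketch: y_t = c + phi * y_{t-1} + e_t, with phi estimated
# by least squares on (lagged value, value) pairs; the series is illustrative.
series = [10.0, 10.8, 11.9, 12.5, 13.8, 14.2, 15.4, 15.9, 17.1]

pairs = list(zip(series[:-1], series[1:]))       # (y_{t-1}, y_t)
n = len(pairs)
mx = sum(x for x, _ in pairs) / n
my = sum(y for _, y in pairs) / n
phi = (sum((x - mx) * (y - my) for x, y in pairs)
       / sum((x - mx) ** 2 for x, _ in pairs))
c = my - phi * mx

one_step_forecast = c + phi * series[-1]
print(f"phi = {phi:.3f}, next value ~ {one_step_forecast:.2f}")
```

In practice a trending series like this one would first be differenced to make it stationary before fitting an ARMA model.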
Survival or duration analysis – Survival analysis is another name for time-to-event analysis. These techniques were primarily developed in the medical and biological sciences, but they are also widely used in the social sciences like economics, as well as in engineering (reliability and failure time analysis).
Classification and regression trees (CART) – Globally-optimal classification tree analysis is a generalization of optimal discriminant analysis that may be used to identify the statistical model that has maximum accuracy for predicting the value of a categorical dependent variable for a data set consisting of categorical and continuous variables. The output of hierarchical optimal discriminant analysis (HODA) is a non-orthogonal tree that combines categorical variables and cut points for continuous variables that yields maximum predictive accuracy, an assessment of the exact Type I error rate, and an evaluation of potential cross-generalizability of the statistical model.
CART is a non-parametric decision-tree learning technique that produces either classification or regression trees, depending on whether the dependent variable is categorical or numeric, respectively.
Multivariate adaptive regression splines – Multivariate adaptive regression splines (MARS) is a non-parametric technique that builds flexible models by fitting piecewise linear regressions.
Machine learning techniques –
Machine learning, a branch of artificial intelligence, was originally employed to develop techniques to enable computers to learn. Today, since it includes a number of advanced statistical methods for regression and classification, it finds application in a wide variety of fields including medical diagnostics, credit card fraud detection, face and speech recognition and analysis of the stock market.
Neural networks – Neural networks are nonlinear sophisticated modeling techniques that are able to model complex functions. They can be applied to problems of prediction, classification or control in a wide spectrum of fields such as finance, cognitive psychology/neuroscience, medicine, engineering, and physics.
Multilayer perceptron (MLP) – The multilayer perceptron (MLP) consists of an input and an output layer with one or more hidden layers of non-linearly-activating (e.g., sigmoid) nodes. The network's output is determined by its weight vector, so the weights must be adjusted. Backpropagation employs gradient descent to minimize the squared error between the network's output values and the desired values for those outputs. The weights are adjusted through an iterative process in which training examples are presented repeatedly; making small weight changes to approach the desired outputs is called training the network, and is driven by the training set and a learning rule.
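The smallest possible sketch of this training loop: a single sigmoid neuron trained by gradient descent to learn logical OR. A real MLP adds hidden layers and propagates the error gradient back through them; the architecture and data here are purely illustrative.

```python
import math
import random

# One sigmoid neuron learning logical OR by gradient descent on squared error.
random.seed(0)
w = [random.uniform(-0.5, 0.5) for _ in range(2)]
b = 0.0
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

lr = 0.5
for _ in range(5000):                # repeated presentation of the examples
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        delta = (out - target) * out * (1 - out)   # gradient of squared error
        w[0] -= lr * delta * x1
        w[1] -= lr * delta * x2
        b -= lr * delta

preds = [round(sigmoid(w[0] * x1 + w[1] * x2 + b)) for (x1, x2), _ in data]
print(preds)
```

After training, the rounded outputs match the OR targets [0, 1, 1, 1]; OR is linearly separable, so one neuron suffices, whereas XOR would require a hidden layer.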
Radial basis functions – A radial basis function (RBF) is a function which has built into it a distance criterion with respect to a center. Such functions can be used very efficiently for interpolation and for smoothing of data.
Support vector machines – Support vector machines (SVM) are used to detect and exploit complex patterns in data by clustering, classifying and ranking the data. They are learning machines that are used to perform binary classifications and regression estimations.
Naïve Bayes – Naïve Bayes, based on Bayes' conditional probability rule, is used for performing classification tasks. It assumes the predictors are statistically independent, which makes it an effective and easy-to-interpret classification tool. It is best employed when faced with the “curse of dimensionality” problem, i.e. when the number of predictors is very high.
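A toy Naïve Bayes classifier with add-one smoothing; the return-reason labels and binary features below are invented for illustration.

```python
from collections import Counter, defaultdict

# Toy training set: binary return features -> return-reason label (invented).
train = [
    ({"damaged": 1, "late": 0}, "fault"),
    ({"damaged": 1, "late": 1}, "fault"),
    ({"damaged": 0, "late": 0}, "preference"),
    ({"damaged": 0, "late": 1}, "preference"),
    ({"damaged": 0, "late": 0}, "preference"),
]

classes = Counter(label for _, label in train)
counts = defaultdict(Counter)            # counts[label][(feature, value)]
for feats, label in train:
    for f, v in feats.items():
        counts[label][(f, v)] += 1

def predict(feats):
    def score(label):
        p = classes[label] / len(train)                  # class prior
        for f, v in feats.items():                       # independence assumption
            p *= (counts[label][(f, v)] + 1) / (classes[label] + 2)  # +1 smoothing
        return p
    return max(classes, key=score)

print(predict({"damaged": 1, "late": 0}))
```

The product of per-feature likelihoods is exactly where the independence assumption enters; smoothing keeps unseen feature values from zeroing out a class.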
k-nearest neighbours – The k-nearest neighbours algorithm (k-NN) belongs to the class of pattern-recognition statistical methods. It does not impose a priori assumptions about the distribution from which the modeling sample is drawn. It involves a training set with both positive and negative values.
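A tiny k-NN sketch on invented two-dimensional points, labeled with hypothetical disposition classes.

```python
import math
from collections import Counter

# Invented 2-D feature points (e.g., wear score, age) with disposition labels.
train = [((1.0, 1.0), "resell"), ((1.5, 2.0), "resell"),
         ((3.0, 4.0), "recycle"), ((5.0, 7.0), "recycle"),
         ((3.5, 4.5), "recycle"), ((2.0, 1.5), "resell")]

def knn(query, k=3):
    # Sort training points by Euclidean distance and take a majority vote.
    nearest = sorted(train, key=lambda item: math.dist(query, item[0]))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

print(knn((1.2, 1.3)))   # nearest neighbours are all "resell"
```

Because k-NN stores the whole training set and makes no distributional assumptions, prediction cost grows with the training set size; k is usually chosen odd to avoid tied votes.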