The current pandemic outbreak, unlike other types of events, has impacted many firms' supply and demand with unprecedented consequences. The scope of these effects depends greatly on the characteristics of the industry. This research evaluates the performance of a specific implementation of vendor-managed inventory (VMI) in a case study from a semiconductor company. A multi-period, multi-echelon serial supply chain consisting of the customer VMI warehouse (facing the end demand), the supplier distribution center, and the supplier manufacturing plant is studied with agent-based and discrete-event simulation. The results suggest that the severity of the demand reduction plays a major role in the replenishment process of the VMI, creating a bullwhip effect that reduces the speed of recovery. The behavior of the customer, in terms of the quality of the forecast and whether or not it has been inflated, allows the supplier to plan better when dealing with limited capacity.
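To make the bullwhip mechanism concrete, the following is a minimal sketch of such a three-echelon serial chain under an order-up-to policy with exponential-smoothing forecasts and a fixed delivery lead time. All parameter values, the policy, and the pandemic-style demand shock are illustrative assumptions, not the simulation model from the case study.

```python
import random

# Three-echelon serial chain: customer VMI warehouse -> supplier distribution
# center -> supplier manufacturing plant, each following an order-up-to policy.

PERIODS, LEAD_TIME, ALPHA = 60, 2, 0.3          # horizon, delivery lag, smoothing
BASE, SHOCK_START, SHOCK_END, SHOCK = 100.0, 20, 28, 0.4

class Echelon:
    def __init__(self, name):
        self.name = name
        self.forecast = BASE
        self.inventory = BASE                    # negative values = backorders
        self.pipeline = [BASE] * LEAD_TIME       # orders already in transit
        self.orders = []

    def step(self, demand):
        self.inventory += self.pipeline.pop(0) - demand    # receive, then ship
        self.forecast += ALPHA * (demand - self.forecast)  # update forecast
        target = self.forecast * (LEAD_TIME + 1)           # order-up-to level
        position = self.inventory + sum(self.pipeline)
        order = max(0.0, target - position)
        self.pipeline.append(order)
        self.orders.append(order)
        return order   # becomes the demand seen by the next echelon upstream

chain = [Echelon("VMI warehouse"), Echelon("Distribution center"), Echelon("Plant")]

random.seed(1)
for t in range(PERIODS):
    demand = BASE * (SHOCK if SHOCK_START <= t < SHOCK_END else 1.0)
    demand += random.gauss(0, 5)
    for echelon in chain:
        demand = echelon.step(demand)

for e in chain:   # order variance grows upstream: the bullwhip signature
    m = sum(e.orders) / PERIODS
    var = sum((o - m) ** 2 for o in e.orders) / PERIODS
    print(f"{e.name}: order variance {var:9.1f}")
```

Even in this toy setting, order variance increases from the warehouse toward the plant, which is the amplification pattern that slows recovery after the demand shock.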
Vendor-managed inventory (VMI) is a mainstream supply chain collaboration model. Measurement approaches that define minimum and maximum inventory levels to avoid product shortages and over-stocking are abundant. However, no existing approach addresses the question of responsibility for inventory level status, especially in the semiconductor industry, which is confronted with short product life cycles, long process times, and volatile demand patterns. In this work, a root-cause-enabled VMI performance measurement approach is developed to assign responsibility for poor performance. Additionally, a solution methodology based on reinforcement learning is proposed for determining an optimal replenishment policy in a VMI setting. Using a simulation model, different demand scenarios are generated based on real data from Infineon Technologies AG and compared on the basis of key performance indicators. The results show that the proposed method outperforms the company's current replenishment decisions.
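As an illustration of the reinforcement learning idea, below is a minimal tabular Q-learning sketch for a VMI replenishment decision: the agent chooses a replenishment quantity so as to keep inventory between min/max bounds under stochastic demand. The state/action discretization, cost parameters, and demand distribution are illustrative assumptions and do not reproduce the paper's formulation or the Infineon data.

```python
import random

MIN_LVL, MAX_LVL = 20, 100            # contractual min/max inventory levels
ACTIONS = [0, 20, 40, 60]             # candidate replenishment quantities
ALPHA, GAMMA, EPS, EPISODES = 0.1, 0.95, 0.1, 5000

Q = {}                                # Q[(inventory_bucket, action_idx)] -> value

def bucket(inv):
    return min(12, inv // 10)         # discretize on-hand inventory

def reward(inv):
    if inv < MIN_LVL:
        return -5.0 * (MIN_LVL - inv)     # shortage penalty
    if inv > MAX_LVL:
        return -1.0 * (inv - MAX_LVL)     # over-stocking penalty
    return -0.1 * inv                     # holding cost inside the band

random.seed(0)
for _ in range(EPISODES):
    inv = random.randint(0, 120)
    for _ in range(30):                   # 30-period episode
        s = bucket(inv)
        if random.random() < EPS:         # epsilon-greedy exploration
            a = random.randrange(len(ACTIONS))
        else:
            a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
        demand = max(0, int(random.gauss(30, 10)))
        inv = max(0, inv + ACTIONS[a] - demand)
        best_next = max(Q.get((bucket(inv), i), 0.0) for i in range(len(ACTIONS)))
        old = Q.get((s, a), 0.0)
        Q[(s, a)] = old + ALPHA * (reward(inv) + GAMMA * best_next - old)

for s in range(13):                       # learned greedy policy per bucket
    a = max(range(len(ACTIONS)), key=lambda i: Q.get((s, i), 0.0))
    print(f"inventory ~{s * 10:3d}: replenish {ACTIONS[a]}")
```

The learned policy typically replenishes aggressively in low-inventory buckets and not at all near the maximum level, mirroring the min/max logic of the VMI contract.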
The semiconductor industry is one of the most challenging and investment-intensive high-tech industries. To compete successfully in the market, all manufacturers worldwide invest a great deal of effort to reduce production costs and improve efficiencies to world-class levels. The approach chosen at Infineon Technologies to reach and maintain cost leadership and best-in-class performance was the implementation of a central top-down planning tool for production lines: the reference fab, a comprehensive cost model initially developed in the Memory Products Group and rolled out to all production sites. Covering all segments of a site's value-adding chain, the model applies systematic target setting to a limited set of key performance indicators (KPIs). The target-finding process is based on a combination of benchmarking results, target cost models, and best-practice comparisons across all sites. An efficient reporting and controlling system that monitors the same KPIs as the reference fab has been implemented to establish continuous improvement and best-in-class performance over the long run. Using this approach, the Memory Products Division succeeded in reducing production costs by more than 25% compared to the initial plan scenarios.
The data is produced by an algorithm that measures Infineon's customer order lead times. Due to frequent changes in, e.g., volumes or confirmed delivery dates, an algorithm is necessary to calculate and thereby measure the correct order lead times, since the raw SAP data is misleading here. As the lead-time data is confidential, qualified synthetic data will be created that shares the same distribution and characteristics but does not depict sensitive customer information. The distribution of products and customers will also be taken into account, but in encoded form, for both security and confidentiality reasons. The final data table will include product line, business month, order entry date, requested and confirmed order lead time, encoded customer name, encoded product name, order number, and order volume.
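A minimal sketch of how such qualified synthetic records could be produced follows: lead times are resampled from the empirical distribution of the (confidential) real data, and customer and product names are one-way encoded. The hashing scheme, field values, and the small confirmed-lead-time offset are assumptions for illustration, not Infineon's actual procedure.

```python
import hashlib
import random

def encode(name: str, salt: str = "demo-salt") -> str:
    # One-way encoding: a stable pseudonym that cannot be reversed to the raw name.
    return hashlib.sha256((salt + name).encode()).hexdigest()[:10]

def synthetic_records(real_lead_times, customers, products, n, seed=0):
    rng = random.Random(seed)
    rows = []
    for i in range(n):
        requested = rng.choice(real_lead_times)           # empirical resampling
        confirmed = requested + rng.choice([0, 0, 1, 2])  # assumed small offset
        rows.append({
            "Product Line": rng.choice(["PL-A", "PL-B"]),
            "Business Month": f"2023-{rng.randint(1, 12):02d}",
            "Order Entry Date": f"2023-{rng.randint(1, 12):02d}-{rng.randint(1, 28):02d}",
            "Requested Lead Time": requested,
            "Confirmed Lead Time": confirmed,
            "Customer (encoded)": encode(rng.choice(customers)),
            "Product (encoded)": encode(rng.choice(products)),
            "Order Number": f"SYN-{i:06d}",
            "Order Volume": rng.randint(100, 10_000),
        })
    return rows

# Example usage with toy inputs standing in for the confidential source data:
for row in synthetic_records([10, 12, 15, 15, 20, 30], ["CustA", "CustB"], ["P1", "P2"], n=3):
    print(row)
```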
Semantic Web technologies provide the possibility of a common framework for sharing knowledge across supply chain networks. We explore Semantic Web technologies to evaluate processes' demand predictability through a use case. First, we create an ontology describing the relevant domain concepts and data and define competency questions based on the needs of the use case. Then, we map the data to the ontology in a knowledge graph. We design a chain of SPARQL queries to retrieve and insert information from the knowledge graph to answer the competency questions. We calculate the underlying demand for supply chain processes using aggregations and the created semantic description. We successfully computed a pre-defined metric for demand predictability, the mean of the yearly coefficient of variation, for different time scopes and process groups. Using this approach, one could evaluate predictability for relevant indicators of end-to-end supply chains by integrating further data into the semantic framework.
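The following is a sketch of such a query-and-aggregate chain using rdflib: a SPARQL query aggregates demand per process group, year, and month, and the mean of the yearly coefficient of variation is then computed in Python. The vocabulary (ex:Process, ex:processGroup, ex:demand, ex:year, ex:month) and the file name are hypothetical placeholders for the use case's actual ontology.

```python
from rdflib import Graph
from statistics import mean, stdev

g = Graph()
g.parse("supply_chain_kg.ttl")            # assumed knowledge-graph export

QUERY = """
PREFIX ex: <http://example.org/supplychain#>
SELECT ?group ?year ?month (SUM(?qty) AS ?demand)
WHERE {
  ?p a ex:Process ;
     ex:processGroup ?group ;
     ex:demand ?qty ;
     ex:year ?year ;
     ex:month ?month .
}
GROUP BY ?group ?year ?month
"""

per_year = {}                             # (group, year) -> monthly demand totals
for group, year, month, demand in g.query(QUERY):
    per_year.setdefault((str(group), int(year)), []).append(float(demand))

cvs = {}                                  # group -> list of yearly CVs
for (group, year), months in per_year.items():
    if len(months) > 1 and mean(months) > 0:
        cvs.setdefault(group, []).append(stdev(months) / mean(months))

# The predictability metric: mean of the yearly coefficient of variation
for group, values in cvs.items():
    print(f"{group}: mean yearly CV = {mean(values):.2f}")
```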
The industrial landscape is swiftly progressing toward Industry 5.0, a fifth industrial revolution characterized by the integration of sustainable practices and digital sovereignty. This article advocates the adoption, expansion, and implementation of artificial intelligence (AI)-enabled hardware, tools, methods, and semiconductor technologies on the journey toward Industry 5.0. Beyond the initial proposal, the article explores primary research areas and the diverse challenges inherent in this transition. Notably, significant accomplishments in pivotal industrial use cases are presented as validation evidence. This comprehensive approach aims to bridge academic advances with practical industrial application, fostering a symbiotic relationship between humans and machines for increased efficiency, innovation, and adaptability.
Horizontal and vertical integration rely on flexible interaction, i.e., communication and cooperation, of heterogeneous systems and subsystems across several abstraction layers. A large ontology, the Digital Reference, which describes semiconductor supply chains and supply chains containing semiconductors, is needed to leverage the highly distributed and decentralized data sources. The Digital Reference Platform Application (DRPA) is a tool designed to explore the power of the Digital Reference ontology, offering features for ontology exploration and data creation along with Arrowhead Framework integration. With this, the DRPA offers interoperability between arbitrary software applications and industrial cyber-physical systems integrated into Arrowhead Local Clouds. Applying this approach to a voting wristband prototype demonstrates the potential of the DRPA. The final target is to extend the potential of the DRPA beyond ontology exploration.
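A minimal sketch of the kind of ontology exploration the DRPA builds on is shown below: loading the Digital Reference and listing its named classes with labels via rdflib. The file name and serialization format are placeholder assumptions; the DRPA itself adds data creation and Arrowhead Local Cloud integration on top of such exploration.

```python
from rdflib import Graph, RDF, RDFS, OWL

g = Graph()
g.parse("digital_reference.owl", format="xml")   # placeholder path, RDF/XML assumed

# Enumerate the named OWL classes and print any human-readable labels.
for cls in g.subjects(RDF.type, OWL.Class):
    label = g.value(cls, RDFS.label)
    if label:
        print(f"{cls} -> {label}")
```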
Demand planning in the semiconductor industry is typically divided into different planning horizons, mid-term and short-term. Accurate demand forecasting is crucial because of long capacity installation times, long lead times, short product life cycles, and constant technological advances. As demand forecasts for short- and mid-term horizons are often made on different product and time granularities using different planning tools, demand fluctuations may appear (on the same granularity) within individual horizons and at the intersections of different granularities. This paper discusses the stability of demand forecasts depending on time and product granularity and introduces definitions of good and bad stability, using the Symmetric Mean Absolute Percentage Error (SMAPE) as a measure of stability. We show that time and product granularities have a significant effect on the intra-horizon stability of a demand plan and that planning on different granularities can lead to artificial demand fluctuations at the intersections of planning horizons.
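The following is a sketch of measuring plan stability with SMAPE between two successive demand-plan snapshots on the same granularity. The snapshot values are illustrative; the paper's exact aggregation across products and horizons may differ.

```python
def smape(a, b):
    """Symmetric Mean Absolute Percentage Error between two forecast vectors."""
    terms = [abs(x - y) / ((abs(x) + abs(y)) / 2) for x, y in zip(a, b) if x or y]
    return 100.0 * sum(terms) / len(terms)

plan_week_10 = [120, 80, 200, 150]   # forecast per product/period, snapshot t
plan_week_11 = [110, 95, 200, 170]   # same positions, snapshot t+1

# Low SMAPE between revisions indicates good stability; high SMAPE means plan churn.
print(f"Revision SMAPE: {smape(plan_week_10, plan_week_11):.1f}%")
```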