Analysis of System Losses
Capacity Study at Volvo Cars Torslanda
Master of Science Thesis
JESSICA ANDERSSON
MOA DANIELSSON
Department of Product and Production Development
Division of Production Systems
CHALMERS UNIVERSITY OF TECHNOLOGY
Gothenburg, Sweden, 2013

© JESSICA ANDERSSON & MOA DANIELSSON, 2013.
Department of Product and Production Development
Chalmers University of Technology
SE-412 96 Gothenburg
Sweden
Telephone +46 (0)31-772 1000

Cover: The cover picture illustrates one of the stations in the body shop at Volvo Cars Torslanda. © Volvo Cars Corporation, 2013.
Reproservice, Chalmers
Gothenburg, Sweden, 2013

ABSTRACT
This report covers a master thesis work performed at Volvo Cars Torslanda in cooperation with Chalmers University of Technology. The aim of the project was to investigate the different causes of the low capacity and high variability in the production output in a limited part of the body shop. The results of the thesis point out where improvements should be made, and conclude that a possible increase in production output of 9% could be reached. Tools such as process mapping and flow simulation have been used with the purpose of finding the largest constraints in the production system. In order to locate the constraints of the production system, initial static analyses were performed, including a process mapping and analyses of historical data. However, the initial static analyses were not sufficient for finding the bottlenecks, so the project carried on with simulation modelling. During the project several bottleneck detection methods have been applied and evaluated.
The shifting bottleneck detection method was proven to be more successful than both the modified static waiting time method and the utilisation method in this particular case. The shifting bottleneck method resulted in the discovery of a primary bottleneck station as well as secondary bottlenecks. Further experiments, performed in a simulation model, concluded that a synergy could be achieved by combining improvements of a secondary bottleneck, the pallet transporting the products, with improvements of the identified primary bottleneck station.

Keywords: discrete event simulation, process mapping, bottleneck detection.

ACKNOWLEDGEMENT
The authors would like to thank Chalmers University of Technology and Volvo Cars Torslanda for enabling such an interesting master thesis work. The experiences gained by applying theoretical knowledge on a real world production are very valuable. Many thanks to all people who have been involved in making this master thesis successful, and especially to those who have spent time explaining and discussing today's production system during the project. The authors would also like to thank Jon Andersson, who has supervised the master thesis, and the examiner Anders Skoogh at the Department of Product and Production Development at Chalmers University of Technology for all of their guidance and support. Finally, special thanks to Jacob Ruda at Volvo Cars Torslanda, who has been a great asset and exceptionally helpful in supervising the project. Without his excellent supervision the project would not have reached such good results.

TABLE OF CONTENTS
1. INTRODUCTION
1.1 Background
1.2 Aim
1.3 Problem formulation
1.4 Delimitations
2. THEORY
2.1 Lean production
2.2 Conceptual modelling and process mapping
2.3 Data collection
2.4 Theory of constraints
2.5 Simulation
3. METHODOLOGY
3.1 Project definition and project plan
3.2 Establishing of theoretical foundation
3.3 Conceptual modelling and process mapping
3.4 Data collection
3.5 Interviews
3.6 Static analysis
3.7 Prioritization of problem areas
3.8 Simulation
3.9 Methodology evaluation
4. FACTORY DESCRIPTION
4.1 The body shop
5. PROCESS MAPPING AND PRIORITIZATION OF PROBLEM AREAS
5.1 Problem definition and project plan
5.2 Establishing of theoretical foundation
5.3 Conceptual modelling and process mapping
5.4 Modularisation of the production system
5.5 Initial static analysis
5.6 Interviews
5.7 Prioritization list
6. ANALYSIS OF SYSTEM LOSSES
6.1 Static analysis of module five and six
6.2 Simulation results
7. DISCUSSION
8. CONCLUSION AND RECOMMENDATIONS
9. REFERENCES
APPENDIX A

1. INTRODUCTION
This master thesis work was performed at Volvo Cars Torslanda, hereafter referred to as "VCT", in cooperation with Chalmers University of Technology. The goal of the project is to study the capacity problems at VCT, as a step towards optimising the production of Volvo cars. VCT produces six different car bodies on the same flexible line in the body shop. In a specific area of the body shop, where the main flow merges with three sub flows, VCT experiences a lower capacity than expected due to system losses. A description of VCT's production plant is given in chapter 4, Factory description.
This master thesis, hereafter referred to as "the thesis", aims at identifying possible causes of the system losses by using tools such as process mapping and flow simulation of a chosen area in the body shop. In this introductory chapter the background to the thesis, section 1.1 Background, will be described together with the aim of the project, section 1.2 Aim, the problem formulation, section 1.3 Problem formulation, as well as the delimitations in section 1.4 Delimitations.

1.1 Background
VCT produces the Volvo car models V60, V60 hybrid, S60, S80, V70, XC70 and XC90. These, with the exception of the XC90, are produced on the same flexible line in the body shop, which makes the process complex. Today the maximum theoretical capacity is never reached and the body shop faces high variability, with an average output of about 38 bodies per hour. VCT experiences a lower capacity than expected in the production flow between the under body buffer, which has a utilisation of 80-100% nearly half of the time, and the K10 buffer, which has a utilisation of 0-20% nearly half of the time, see figure 1. With a takt time of 60 seconds a maximum theoretical capacity of around 54-56 jobs per hour (JPH) should be reachable, especially since the area is not limited by the preceding and subsequent production: the large initial buffer is well stocked while the final buffer appears to be starved. The buffer situation does, however, indicate a problem within the area between these buffers.

Figure 1, the preceding and subsequent buffers indicate a problem area within the body shop.

The geographical area between the two buffers, which is marked in figure 2, reaches from the large under body buffer downstream to the second buffer, named K10. In the initial under body buffer approximately 80 under bodies are stored, and in the K10 buffer approximately 20 car bodies are stored, awaiting further assembly.
With regard to these two buffers, the system can be considered more or less independent within the marked area in figure 2.

Figure 2, view over VCT's body shop. The project's geographical area is marked.

The introduction of the S60 model in the body shop in 2011 is one of many reconstructions of the body shop over the years. Continuing reconstructions due to new or renewed car models have aggravated the system losses and rendered previous studies unusable. Therefore, this thesis is needed in order to analyse the comprehensive system losses in today's car body production.

1.2 Aim
The thesis aims at investigating the different causes of the low capacity and high variability in the production output in a limited part of the body shop. Today the capacity is just about sufficient, but night shifts and weekend production sometimes need to be run when the products cannot be delivered from the body shop. The thesis will therefore study the system losses as a step towards a more efficient production. Once the thesis has found the constraints in the system, improving them should lead to a higher maximum capacity in VCT's body shop and thereby a more flexible manufacturing, without the need for extra shifts to manage the daily demand. Knowledge about production constraints is crucial when developing production systems, hence the thesis is an important first step towards a more efficient production.

1.3 Problem formulation
Below the two main research questions are stated. The first question (RQ1) aims at locating the system constraints and the second (RQ2) evaluates how the constraints affect the throughput in the area described in figure 2 above.

RQ1 Which factors contribute to the system losses in the body shop?
RQ2 How much do the contributing factors affect the throughput in the delimited area of the body shop?
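The capacity and loss figures quoted in section 1.1 can be checked with simple arithmetic. The short Python sketch below reproduces them; the takt time, target range and observed output are taken from the text, while the 55 JPH target midpoint is an assumption made here purely for illustration.

```python
# Back-of-the-envelope check of the capacity figures in section 1.1.
takt_seconds = 60
theoretical_jph = 3600 / takt_seconds      # 60 jobs per hour at a 60 s takt

target_jph = 55     # assumed midpoint of the 54-56 JPH quoted as reachable
observed_jph = 38   # average output reported for the body shop

# Share of the reachable target capacity currently lost to system losses.
loss_vs_target = 1 - observed_jph / target_jph   # roughly 31%
```

The gap of roughly 31% between the observed 38 JPH and the quoted reachable target is what motivates studying the system losses in the delimited area.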
1.4 Delimitations
The time frame of the project in combination with wishes from VCT has resulted in the following limitations for the thesis:
- Geographical limitations range from the under body buffer to the K10 buffer, as illustrated in figure 2.
- The thesis will not include an evaluation of suitable simulation software, but will use the software Plant Simulation, since it is a recognized discrete event simulation tool that is in compliance with VCT.
- Change management and implementation of possible improvements to the production flow are not included in this thesis.
- Cost analyses of suggested improvements will not be included within the thesis.

2. THEORY
This chapter presents the theoretical background of the thesis and describes the different theories that were used in order to execute a successful project. Popular philosophies such as Lean production, section 2.1 Lean production, and Theory of constraints, section 2.4 Theory of constraints, have been considered. Furthermore, conceptual modelling and process mapping, section 2.2 Conceptual modelling and process mapping, including Value Stream Mapping, section 2.2.1 Value stream mapping, data collection, section 2.3 Data collection, different bottleneck detection methods, section 2.4.1 Bottleneck detection methods, and theories regarding computerized simulation, section 2.5 Simulation, have been studied.

2.1 Lean production
In the 1990s Toyota caught the world's attention with their successful engineering and manufacturing of cars. They managed to design cars faster, more reliably, with higher quality and still at a competitive cost. Liker (2004) studied Toyota for 20 years and found 14 principles that together constitute the Toyota Way. The philosophy is called Lean production, originally the Toyota Production System (TPS), and has a very high customer focus. Lean strives to eliminate non-value adding processes ("muda" in Japanese).
The core of Lean production is to lower or eliminate waste, which is every activity that does not add value to the product or service from the customer's perspective. Lean manufacturing is characterized by an effective organization of the manufacturing system, with faster throughput time for work in progress (WIP), smaller batch sizes, shorter set-up and change-over times, greater up time, greater schedule stability and lower rework and rectification costs. However, Lean is only directly applicable to very few manufacturers; most organizations have to adapt the tools and philosophies to fit their circumstances (Jina et al., 1997).

The pyramid in figure 3 below shows the four groupings of the Toyota Way's 14 principles: Philosophy, Process, People/Partners and Problem Solving. Principle 1 lies at the bottom as a foundation, which is the Philosophy and long-term thinking. The second group addresses the Process and eliminating waste, with the guideline "The Right Process Will Produce the Right Results". The third section adds value to the organization by developing People and Partners, and the fourth section focuses on Problem Solving (Liker, 2004).

Figure 3, the 14 principles of the Toyota Way (Liker, 2004).

The process related principles are of extra interest, due to the nature of this thesis. The thesis does not aim to influence the company's philosophy or to develop the people within the organization, but to investigate the system losses in the production process and look at factors that contribute to the problem. From a Lean perspective any process could be improved by using principles 2 to 8: bring problems to the surface by creating a continuous flow, use "pull" systems, level out the workload, stop for quality problems, standardize tasks, use visual control and use only reliable technology. The mentioned principles are all theories for improvement that are worthy of consideration.
According to Liker's 12th principle, first-hand information is critical to ensure understanding of the system. It is of great importance to take nothing for granted, and to know that data are not facts but only contribute to the facts. The facts remain in the process, whilst the data are one step away from the process (Liker, 2004). This is not far from the popular philosophy of learning by doing. Deep knowledge of the system can only be gained from the system itself, not from data (Liker, 2004).

Another relevant principle is the 2nd principle, which emphasises the need to create a continuous flow in the production. Flow does not only increase the speed of the production; it interconnects people and processes and helps surface problems (Liker, 2004). Levelled production with increased flow has proven beneficial not only in mass production but also in low-volume production. The benefit is the same in all systems: the capacity of existing equipment can be increased without expensive investments (Djassemi and Olsen, 2010).

2.2 Conceptual modelling and process mapping
Process mapping aims at identifying, developing and improving the whole production flow and not only single processes. It is an efficient way to reduce operational waste and raise business performance (Dickens, 2007). The mapping is a key initial step to understanding the current process; Robinson et al. (2010) even state that the process mapping is probably the most important aspect of a simulation project, since the developed model has an impact on the data requirements and the validity of the results from the simulation. When developing the process map, or a conceptual model, it is important to find the right level of detail for the project; simplifications of reality are often needed in order to ease the mapping process and make the model easier to understand (Robinson et al., 2010). However, the level of detail depends entirely on the aim of the project.
Banks (1998) suggests that the conceptual model should be developed by focusing on the project's key research questions, abstracted from the objectives, in order to find the right level of detail. To further narrow down the model, the output measures necessary to answer the research questions should be defined (Banks, 1998).

When the process itself has been identified it is important to define the process owner. The process owner is the one responsible for decisions regarding the process, and should be considered the client for any improvement project performed (Conger, 2011). Therefore, Conger (2011) emphasizes that interviews with affected parties should be held before the process mapping begins. Customers, suppliers and operators in the process have different views of the process, which should be taken into consideration. Dickens (2007) has a similar view on process mapping, stating that a team of key internal stakeholders should be put together in order to gather all their knowledge, define the problem and come up with possible solutions.

Robinson (2010) states that the outcome of a process map is a rather vague concept that could result in various different models. An example of this is Robinson's (2010) figure, illustrated in figure 4, where the conceptual model's area is defined within the ellipse. According to the figure the conceptual model could be an outcome of any of the processes, like an improved understanding of the real world or a computer model. However, the models are often illustrated either in flow charts or in tabular form (Linton, 2007). Figure 4 also illustrates that the conceptual model is dependent on and consists of four main elements: the objectives of the project, the output of the project that answers the research questions, the project's inputs, and the model's scope and level of detail.
Robinson (2010) further states that the design of a conceptual model is an iterative process that continually needs to be revised throughout the project.

Figure 4, illustration of conceptual model elements (Robinson, 2010).

Damelio (1996) speaks of three main tools to visualise work: relationship maps, cross-functional process maps and flowcharts. The first tool, relationship maps, shows the interrelations between the suppliers, the organization and the customer. These three components have to be identified. "The organization" could be the entire company, a factory or even a single department, and the suppliers and customers would then be defined based on that first selection (Damelio, 1996).

The second tool is cross-functional process mapping, which illustrates workflows following their distinct path as they use activities to transform resources into outputs. The process touches several functions or departments in an organization; hence it is a "cross-functional" process. A swim lane diagram is one type of cross-functional map that illustrates the different functions as horizontal bands, the workflow as interrelated activities, and supplier-customer relationships when a work item crosses over into another function, or swim lane (Damelio, 2011). A swim lane diagram should have one start symbol and one end symbol; in rare cases a process can have two start symbols, however then it is most likely that the process is in fact two processes (Conger, 2011).

The third tool is flow charting, which is used to graphically visualize the sequence of work activities that make up a process. Flowcharts focus on what actually happens and help to make the types of waste in a non-value-adding activity visible. Intelligence can be included in the chart by using the appropriate symbol for the right action (Damelio, 2011). One of the most used and recognized tools for flow charting is the Toyota Production System's standard Value Stream Mapping, which is further described below.
2.2.1 Value stream mapping
Value Stream Mapping (VSM) is a flowcharting method that originates from Toyota's Lean production philosophy. It aims to give a better understanding of the system as well as showing where improvements can be made by reducing or removing wastes. A VSM visually expresses the current production process as a simplified snapshot of the system, showing all steps of the process, both value-adding and non-value-adding. The icons used in the VSM are standardized and some of them are presented in figure 5 below.

Figure 5, example of VSM icons (Rother & Shook, 2003).

VSM is manufacturing oriented, though similar approaches exist in other application areas. By creating a VSM it is possible to visualize the entire manufacturing and information flow and see how operations currently communicate with production control and with each other (Tapping et al., 2002). All actions needed to create a product or service and bring it to a customer, in-house or external, are visible in the map. It helps focusing on improving the whole system and not only optimizing specific parts (Rother & Shook, 2003). Tapping et al. (2002) emphasize the importance of going to the factory floor in person, rather than relying on past reports and already collected data. Stressing the accuracy of the data is completely in line with the Toyota Way's 12th principle, mentioned under section 2.1 Lean production, which stands for "Go see for yourself to thoroughly understand the situation" (Liker, 2004). Liker (2004) means that this is of highest importance since no problem solving can be made without fully understanding and deeply analysing the current situation.

The creation of a value stream map
According to Rother and Shook (2003) the VSM approach follows four main steps:
1. Select one product family as the target for improvement.
2. Construct a current state map for the product value stream, using information gathered from the actual production process.
3. Map the future state.
4.
Create a plan for the realisation of the future state.

One could say that the four steps represent Plenerth's (2007) four phases: Preparation, Current state map, Future state map and an Improvement plan. The first two steps aim at deciding the scope of the mapping and gaining knowledge of the system, whilst steps 3 and 4 look into the future and try to create an improved future state. Therefore, only the first two steps are of high importance to the thesis, since the thesis does not aim at offering a solution to identified problems. Hence a VSM would serve as a conceptual model of the production system, not as a solution or plan for a future state.

When the level of detail and product family have been chosen, the drawing of the current state map can start. Tapping et al. (2002) recommend the following approach when drawing the map:
1. Start by drawing rough sketches of the main production operations.
2. Choose 7 to 10 key attributes to focus on when collecting data. Go to the floor and start at the most downstream operation. Work upstream through the chosen flow, collecting the process data.
3. Discuss and analyse the gathered data. Check that all necessary data are collected.
4. Repeat step two if necessary.

When these steps have been finished the actual drawing of the map can start, beginning with the supplier to the left and finishing with the customer to the right. The manufacturing operations are drawn at the bottom of the map, indicating each process by a process icon. The process attributes are added in data boxes below the process icons. Thereafter all inventories are illustrated in the flow by triangular icons with work-in-progress quantities. The material and information flows are added to the map, as well as pull and push arrows. Lastly, a timeline is drawn under the processes in order to show the production lead-time (Rother and Shook, 2003; Tapping et al., 2002). An example of a VSM is illustrated below in figure 6.
Figure 6, example of a Value Stream Map.

2.3 Data collection
A wide range of data is needed to perform process mapping and flow simulations; it stretches from production orders and production batches to machine failure frequencies and repair times. The resulting process map and flow simulation are highly dependent on the quality of the data, which makes data collection a time-consuming but very important task (Banks, 1998). According to Skoogh and Johansson (2008) the data collection often represents one third of a project's total time.

The first step when collecting data is to identify the parameters that are of importance for the study. Depending on the system complexity and the nature of the task, an appropriate level of detail needs to be selected. Gathering the right parameters at the right level of detail is of high importance due to the time consumption (Skoogh and Johansson, 2008).

The required data could be collected in many different ways. In order to ease the collection of the needed data, Robinson and Bhatia (1995) recommend categorizing the data into three categories; this approach is also much like Banks' (1998) model for data collection. Category A covers all available data, category B covers the data that are not available but collectible, and the third, category C, covers the data that are neither available nor collectible. The available data in category A could be collected through the company's databases, such as corporate business systems, or through previous studies. All these data need to be questioned: what is the source, what was collected, when was it collected and how was it collected? Do the data fit the scope of the project? Is the level of detail relevant? (Banks, 1998). The data have to be investigated, defined and validated so that they thoroughly fit the project. Category B data need to be collected by hand, for example by timing the process, preferably while the process mapping is performed.
The data in category C are not accessible and are therefore only based on estimates and approximations. In order to obtain these data, discussions with experts within the relevant area can be held, historical data from similar processes and systems can be investigated, or the customer can be asked for an educated guess in order to get an approximation of the values (Robinson and Bhatia, 1995).

Banks (1998) suggests that the data available should be used in the beginning and that more data should be requested when needed. With this approach data collection and model conceptualization can be performed in parallel. Assumptions should be made when necessary; the project should not come to a halt due to missing information when an assumption can lead the project forward. Afterwards the sensitivity of the model should be evaluated to see whether it is sensitive to the assumption or not, though re-evaluations of assumptions have to be encouraged since knowledge of the system is gathered continually. All assumptions have to be validated before presenting the results (Banks, 1998).

While collecting data it is of great importance to use a system for the storage of data so that it is easily accessible and understandable. Data should be stored in one place; a spreadsheet could be used for smaller projects, but for bigger projects a database is preferable. The collected data also need to be thoroughly investigated and validated in order to make the analysis of the process mapping and simulation model as smooth and accurate as possible (Robinson and Bhatia, 1995).

2.4 Theory of constraints
Theory of Constraints, TOC, is a management philosophy developed by Eliyahu Goldratt. TOC gradually emerged from Goldratt's theory Optimized Production Timetables, OPT, which is described in his best-selling book The Goal from 1984. The OPT theory is, according to Rahman (1998), based on nine rules; these are summarized in the following statements.
It is highly important to balance the flow, not the capacity. The utilisation and activation of a resource are not synonymous. One hour of lost production at a bottleneck is one hour lost for the total system, and all constraints should be considered before establishing schedules.

The essence of the TOC philosophy is that all systems have constraints, for example a bottleneck, which influence the entire system's performance. The existence of a constraint gives the possibility of improvement, and since a system will always contain at least one constraint, TOC is a continuous process. Continuous improvement is the last of the five focusing steps in the TOC approach to reduce or remove constraints; according to Goldratt and Cox (2004) these are as follows:
1. Identify the system's constraint(s).
2. Decide how to exploit the system's constraint(s).
3. Subordinate everything else to the above decision.
4. Elevate the system's constraint(s).
5. If in the previous steps a constraint has been broken, go back to step 1, but do not allow inertia to cause a system's constraint.

The TOC method's application on logistical systems should be controlled by the drum-buffer-rope, DBR, methodology and managed by time buffers, according to Rahman (1998). The DBR methodology was presented by Goldratt and Cox in The Goal's sequel The Race (1986). It coordinates material utilisation with the system's resources and reduces the system's complexity by focusing on critical resources. Firstly, the drum, the pace of the constraint, sets the pace for all stations in the system. Strategically placed buffers make sure that the constraint never lacks material and prevent variations in the system's output. Finally, the rope handles the communication and makes sure that the products are pulled into the constraint at the right pace (Gardiner et al., 1993; Rahman, 1998).
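The drum-buffer-rope mechanics described above can be illustrated with a minimal Python sketch. The rope here is deliberately simplified to a replenishment rule (release material only when the constraint buffer has room); the one-minute time step, takt and buffer size are made-up illustration values, not VCT figures.

```python
# Minimal drum-buffer-rope sketch. The "drum" is the constraint's takt;
# the "rope" only releases new material when the constraint buffer has
# room, so WIP in front of the bottleneck stays bounded by the target.

def simulate_dbr(minutes, drum_takt, buffer_target):
    """Run one-minute time steps; return (units completed, final buffer level)."""
    buffer_level = buffer_target   # start with a full constraint buffer
    completed = 0
    since_last_beat = 0
    for _ in range(minutes):
        # Rope: release raw material only if the constraint buffer has room.
        if buffer_level < buffer_target:
            buffer_level += 1
        # Drum: the constraint consumes one unit every drum_takt minutes.
        since_last_beat += 1
        if since_last_beat >= drum_takt and buffer_level > 0:
            buffer_level -= 1
            completed += 1
            since_last_beat = 0
    return completed, buffer_level

# One 8-hour shift with a 2-minute constraint takt and a 5-unit buffer.
done, wip = simulate_dbr(minutes=480, drum_takt=2, buffer_target=5)
```

With these values the line completes one unit every two minutes, exactly the drum pace, while the buffer never grows beyond its target level, which is the point of tying releases to the constraint.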
Time buffers prevent disruptions at non-constraint resources and can be divided into three categories: constraint buffers, assembly buffers and shipping buffers. Constraint buffers are placed in front of bottlenecks in order to make sure that the constraint can follow the schedule. Assembly buffers contain products that will later be merged with products from the bottleneck. Shipping buffers ensure that delivery dates are held (Rahman, 1998). Below, the first step of TOC's philosophy, identifying the system's constraint(s), is studied further by describing different methods to identify bottlenecks in a system.

2.4.1 Bottleneck detection methods
The classification of a bottleneck may vary. Normally the definitions include, but are not necessarily restricted to: physical constraints, economical characteristics, output limitations, capacity utilisation, work-in-process limitations and capacity in relation to demand (Lawrence and Buss, 1995). Within this thesis a bottleneck is simply defined as the production system stage that has the largest influence on limiting the throughput of the system. All manufacturing systems have constraints, which control their performance. Hence, to improve performance it is important to find and elevate the constraints, as mentioned earlier in section 2.4 Theory of Constraints. There are several methods used to find constraints, or bottlenecks, and their success rate varies depending on the type of system. Constraints are not easily identified using conventional methods (Lima et al., 2008). Since manufacturing systems are dynamic, their state can vary with seasons and even during a single workday; the constraint can therefore move, i.e. be time dependent (Roser et al., 2003). In the literature there are many different methods for bottleneck detection. The three main methods can be identified as the utilisation method, the waiting time method and the shifting bottleneck method, all of which are described below.
Utilisation method
When using the utilisation method for bottleneck detection, the machine with the largest percentage of active time is defined as the bottleneck. The active time includes more than just machining time, since for instance breakdown times also influence the utilisation. With this method a primary bottleneck can be identified, but no secondary or non-bottlenecks can be detected (Roser et al., 2003). Another limitation is that the method is only suited for steady-state systems, and with its need for data from a long time period it can only determine average bottlenecks, not momentary ones (Roser et al., 2002). On the other hand, with a long time perspective in mind, an average bottleneck is found quickly using the utilisation method (Lawrence and Buss, 1995).

Waiting time method
The waiting time method defines the bottleneck as the machine where parts have to wait the longest, measured either by the queue length or by the waiting time. This method can find both momentary and average bottlenecks, by comparing the queue lengths or the waiting times (Roser et al., 2002). However, the method is limited to systems where the buffers have unlimited capacity; in addition, the supply must be smaller than the system capacity in order not to fill the queues permanently, which would render the analysis impossible (Roser et al., 2003).

The shifting bottleneck method
The shifting bottleneck method defines the machine with the longest uninterrupted active period as the bottleneck. The method can detect both momentary bottlenecks and average bottlenecks, depending on the chosen timeframe. Furthermore, it does not only detect primary bottlenecks, but also secondary and non-bottlenecks, making it possible to improve system throughput using a two-pronged approach: both reducing the cycle times of the primary bottleneck and ensuring a steady supply to the primary bottleneck by improving the secondary bottlenecks (Roser et al., 2002).
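As a minimal sketch of the difference between the two active-time-based rules (the machine names and active periods below are hypothetical, not data from the thesis), both can be stated over a log of active periods per machine — and they can disagree:

```python
def utilisation_bottleneck(active_periods):
    """Utilisation method: the machine with the largest total active time."""
    return max(active_periods,
               key=lambda m: sum(e - s for s, e in active_periods[m]))

def shifting_bottleneck(active_periods):
    """Shifting bottleneck method (momentary form): the machine with the
    longest single uninterrupted active period."""
    return max(active_periods,
               key=lambda m: max(e - s for s, e in active_periods[m]))

# Hypothetical active periods (start, end) in minutes:
periods = {
    "M1": [(0, 40), (50, 90)],   # 80 min active in total, but in two bursts
    "M2": [(0, 70)],             # 70 min active, in one uninterrupted period
}
utilisation_bottleneck(periods)   # "M1" — highest utilisation
shifting_bottleneck(periods)      # "M2" — longest uninterrupted active period
```

The example illustrates why the two methods need not agree: a machine with many short busy bursts can have the highest utilisation while another machine constrains the system for the longest uninterrupted stretch.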
The shifting bottleneck method determines whether a machine at any given time is a sole bottleneck, a shifting bottleneck or a non-bottleneck. To turn this momentary approach into a search for an average bottleneck over a time period, the percentage of time during which a machine is a sole or a shifting bottleneck is measured. The detection methods based on utilisation and on waiting time do not take time variations into consideration, but the shifting bottleneck method does (Roser et al., 2002).

Comparison of bottleneck detection methods
Roser et al. (2003) compared the three bottleneck detection methods by applying them to a system containing AGVs (Automatic Guided Vehicles). A brief review of their results is given below. The utilisation method searches for the most active machine and defines it as the bottleneck. The method lacks the ability to say which machines are secondary bottlenecks, and the analysis is often not dynamic enough. The waiting time method looks for the machine for which the parts have to wait the longest, defining that machine as the bottleneck. In order to use the waiting time method, buffers should have infinite capacity and the system capacity should be larger than the supply, otherwise the queues will be permanently full. The limitations of the waiting time method are significant, since the method cannot account for the AGVs themselves even though they are a kind of queue (Roser et al., 2003). The shifting bottleneck method determines the bottlenecks by using the active times, similarly to the utilisation method. Note that active time includes more than just machining, for example AGV transportation and breakdowns. The difference, however, is that the shifting bottleneck method does not use percentages of active time, but instead the uninterrupted duration that a machine is active.
This approach can find the system constraint at any given time, by identifying the machine with the longest active period. Furthermore, Roser et al. (2003) conclude that even though interconnected stations have the same utilisation degree, they can prove to constrain the system unequally according to the shifting bottleneck method. In a series of interconnected stations, the stations in the beginning are more often the bottleneck than later stations, and the risk of starving later stations increases with the number of stations in the series (Roser et al., 2003). Furthermore, the method distinguishes between shifting bottlenecks and sole bottlenecks, the difference being whether the bottleneck serves as a constraint by itself or overlaps with other bottlenecks. This distinction is of utmost importance when it comes to a bottleneck probability analysis (Roser et al., 2003). It is also notable that the shifting bottleneck method shows a much clearer distinction between the more probable and less probable bottlenecks (Lima et al., 2008).

Selection of bottleneck detection method
Lima et al. (2008) propose a methodology for selecting the best suitable bottleneck detection method. The selection should be based on the degree of fluctuation in the system, which can be objectively decided using the following equation:

elf = station / frequency

where elf is the fluctuation factor, which indicates a high fluctuation if below 0.5 and a low fluctuation if larger than 0.5. The numerator "station" is the total number of production stations involved, and the denominator "frequency" describes the "Sole Bottleneck Frequency" per station, which is easily determined using a simulation. With the fluctuation factor determined, table 1 can be consulted for recommended methods.

Table 1, recommended bottleneck detection method based on situation (Lima et al. 2008).

2.5 Simulation
Simulation is a powerful tool for both existing and future systems.
It makes otherwise impossible analyses possible and works excellently as a tool for decision-making. Furthermore, it saves money, since it is much more costly to experiment with a real production system than with a simulation (Shannon, 1998). Banks (1998) defines simulation as an imitation of a real-world process over time, which is used to analyse the behaviour of a system, ask what-if questions about the real system and aid in the design of real systems. Shannon (1998) defines simulation as the process of conducting experiments with a model of a real system, with the purpose of gaining knowledge of the system and/or evaluating strategies for the operation of the system. John S. Carson II (2005) has created a list of situations in which simulation is most useful. The points which fit the thesis have been rewritten and are presented below:

- There is no simple static model or calculation accurate enough to analyse the situation.
- The real system is regular and interactions can be defined.
- The real system has a level of complexity that is difficult to grasp in its entirety. It is difficult or impossible to predict the effect of proposed changes.
- A tool is needed that can bring all members of a team to a more common understanding.
- Simulation is an excellent educational device. In some cases the simulation animation may be the only way to visualize how parts of the system influence each other and contribute to the overall system.

Banks (2005) states many advantages of simulation; they include, but are not limited to, independent testing of changes (without disturbing the production), the opportunity to compress and expand time, and the creation of a better understanding of an otherwise too complex system. There are of course also disadvantages with simulation.
For example, special training is required to build and interpret a simulation model, simulation can be used inefficiently when there are easier solutions, and simulation modelling can be costly (Banks, 2005).

2.5.1 Types of simulation
When determining the type of simulation model needed, the nature of the real system decides which approach to choose. The two central approaches are discrete event simulation and continuous simulation. In most simulations time is an important variable, on which other variables depend. In a discrete event model the dependent variables change at specific points in time, called event times. In continuous modelling, the dependent variables are continuous functions of time (Banks, 1998). Most discrete-event models include a statistical distribution; hence they are stochastic and have random variations (Carson, 2005). A model can also be deterministic, which means that it has no random variations and therefore always produces the same output given the same input (Banks, 2010). The text will from here on focus on discrete event models with stochastic variations, since this approach is most applicable to the production system being modelled.

2.5.2 Discrete event simulation
Discrete event simulation (DES) models depend on time, that is, they are dynamic (Banks, 2005). DES models are based on the concepts of state, events, activities and processes. The states change instantaneously, but only at discrete times called event times. An event can trigger new events, activities and processes. All events can be categorized as primary or secondary events, where primary events are triggered by data, whilst secondary events occur automatically due to model logic (Carson II, 2005). An efficient model has sufficient complexity to answer the stated questions, but is at the same time not too complex. Since a model imitates a real system, its boundaries must be set with caution (Banks, 1998).
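The event mechanics described above — state changing only at event times, with events triggering further events — can be sketched with a minimal event loop (an illustrative toy, not the thesis's model; the 4.0-minute cycle time and the event names are assumptions):

```python
import heapq

def run_des(initial_events, horizon):
    """Minimal discrete-event loop: pop the earliest event from the future
    event list, advance the clock to its time, and let handling it schedule
    new events. The state changes only at these discrete event times."""
    state = {"parts_done": 0}
    fel = list(initial_events)          # future event list of (time, kind)
    heapq.heapify(fel)
    while fel:
        clock, kind = heapq.heappop(fel)
        if clock > horizon:
            break
        if kind == "arrival":
            # a primary event (an arrival, driven by data) triggers a
            # secondary "finish" event one assumed 4.0-minute cycle later
            heapq.heappush(fel, (clock + 4.0, "finish"))
        elif kind == "finish":
            state["parts_done"] += 1
    return state

# Three arrivals during an 8-hour (480-minute) shift:
run_des([(0.0, "arrival"), (5.0, "arrival"), (10.0, "arrival")], horizon=480)
```

Commercial tools such as Plant Simulation hide this loop behind their modelling objects, but the future-event-list mechanism underneath is the same.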
All relevant system components must be described in the simulation model, but their level of detail can vary with their relevance for the study.

2.5.3 The steps in a simulation study
Banks (1998) provides a model builder's guide for a well-performed simulation study; this guide is visualized in figure 7 and each step is further described below.

Figure 7, steps in a simulation study (Banks, 1998).

1. Problem formulation
The simulation modeller and the client, who is the owner of the problem, need to make sure that they have a common understanding of the problem. Either the client can formulate the problem, which the simulation modeller has to fully understand, or the simulation modeller can prepare a problem statement, which then has to be agreed upon by the client (Banks, 1998).

2. Setting of objectives and overall project plan
The objectives should stand in relation to the questions stated in the problem formulation. The overall project plan specifies the resources needed in terms of, for example, time, personnel, hardware and software. The plan can also include the different steps in the project, the output at each stage and the cost of the study (Banks, 1998).

3. Model conceptualization
As a starting point a simplified model should be created, which can gradually grow into a complexity appropriate for answering the questions of the study. The involvement of the client should continue throughout the model building. This enhances both the quality of the model and the client's confidence in it (Banks, 1998).

4. Data collection
A wide range of data is needed to perform flow simulations, often stretching from production orders and production batches to machine failure frequencies and repair times. Depending on the system complexity and the nature of the task, an appropriate level of detail needs to be selected.
The resulting simulation is highly dependent on the quality of the data, which makes data collection a time-consuming but very important task (Skoogh and Johansson, 2008; Banks, 1998). The data collection needs to be thorough, and the source as well as the purpose of the collected data need to be questioned in order to prove their usefulness. This topic is further discussed in section 2.3 Data collection.

5. Model translation
This step consists of the translation of the conceptual model from step 3 into a computer-recognizable form, using the input from the data collection (Banks, 1998).

6. Verified?
The computerized model is verified in order to ensure that the model behaves in accordance with the conceptual model and that the code is correctly written. The verification of the model should be carried out before the validation of the model and continuously throughout the modelling, since the code has to be correct and in accordance with the conceptual model before actions are taken to ensure that the model reflects reality at a satisfactory level. These actions, however, presuppose that the conceptual model has been thoroughly validated against the reality of the system (Sargent, 1984; Banks, 1998).

7. Validated?
When the model has been verified and works as intended according to the conceptual model, it is time to make sure that it is a representation of the real system. The validation process covers several phases of the modelling process and can, according to Sargent (1984), be described with the help of figure 8 below.

Figure 8, the process of a simulation model validation (Sargent, 1984).

The Problem Entity represents the real system, or the system to be investigated; the Conceptual Model is developed for the particular study and is the logical representation of the problem entity; and the Computerized Model is the conceptual model transferred to a computer program.
These three stages of the modelling are connected to each other by three different phases. The conceptual model is constructed in an analysis and modelling phase, and the computerized model in a computer programming and implementation phase. Finally, results and conclusions are produced by performing different experiments on the computerized model and comparing them to the problem entity in the experimental phase. Verification and validation should be performed continuously across these simplified steps of a simulation model. The validity of the conceptual model should be confirmed by tests of the theory and assumptions that the conceptual model is built upon. The conceptual model's logic should also represent the problem entity at a satisfactory level. The computerized model should be verified in order to represent the conceptual model, as described under the previous step, 6. Verified? Operational validation is performed in order to ensure that the computerized model is a good representation of reality. This can preferably be done by experimentally comparing the system's and the model's outputs for different conditions. However, a perfect replica of the real system is not necessarily the best thing to aim for when validating the computerized model, since it may be too time-consuming or too costly to perform all the experiments needed to test every condition for the model. Also, all conditions may not be relevant to test for the problem entity. Further validation should be performed by asking knowledgeable people whether the computerized model seems to fit its purpose and whether it is a good representation of the real system. Finally, when building the model it is of high importance that the data used in the model are validated and correct (Sargent, 1984; Banks, 1998).

8.
Experimental design
When deciding which experiments to run, it is important to define the required length of the simulation runs and the number of runs for each experiment (Banks, 1998).

9. Production runs and analysis
In order to estimate performance measures for the experiments, the simulation model is run and analysed (Banks, 1998). Data recorded during the warm-up phase of the simulation run can be disregarded, since the system has not yet reached its steady state. During the warm-up phase the model might not behave in accordance with the real production, since the model is empty in the beginning (Grassmann, 2008).

10. More runs?
When step 9 has been performed and analysed, the modeller can determine whether more runs or additional scenarios are needed (Banks, 1998).

11. Documentation and reporting
Documentation needs to be carried out throughout the simulation project. To assure that the right decisions were made and that they were well founded, it is important to know which alternatives were considered and why.

12. Implementation
If the experiments are successful and the client is satisfied with the execution of the project, there is a chance that the results of the study will be implemented in the real system (Banks, 1998).

2.5.4 The software: Plant Simulation
The simulation modelling has been carried out using the Siemens software Plant Simulation, which is a discrete event simulation software focused on logistic systems, for example in production. Plant Simulation is designed for modelling and simulation of systems, to enable optimization of material flow, use of resources and logistics for global production facilities as well as specific production lines. The software comes with advanced analysis tools, such as bottleneck analysis, statistics and graphs, which help in the evaluation of different scenarios in a present or future system (Siemens Industry Software AB, 2013).

3.
METHODOLOGY
This chapter describes the methods that are used throughout the thesis. The general framework for the execution of the thesis is visualised in figure 9 below. Each step in the figure is further described in its respective section.

Figure 9, methodology map.

As described by figure 9, the thesis starts by defining the project and creating a project plan, see section 3.1 Problem definition and Project plan. Thereafter a theoretical foundation is established and a process mapping is performed in order to gather the knowledge necessary for the execution of the thesis and to thoroughly understand the production system analysed. These steps are included in section 3.2 Establishing of theoretical foundation and section 3.3 Conceptual Modelling and Process Mapping. In parallel with the process mapping, and continuously throughout the thesis, data are collected; this process is described in section 3.4 Data collection. When the process has been mapped and validated, interviews are performed with knowledgeable people in order to further validate the process mapping and to collect information so that a comprehensive static analysis can be performed. This is described in sections 3.5 Interviews and 3.6 Static analysis. If no results are extracted from the static analysis, the problem is further scrutinized in a dynamic analysis by creating a simulation model. Since the studied area is geographically rather large, it is desirable that a prioritization is performed first so that the creation of the simulation model can be simplified. These steps are described in sections 3.7 Prioritization of problem areas and 3.8 Simulation. If the dynamic analysis results in an identified problem, the methodology is evaluated and questioned, and finally the results are presented, as further described in sections 3.9 Methodology evaluation and 3.10 Presentation of results.
3.1 Project definition and project plan
A problem definition is set in cooperation with tutors at Volvo Cars Torslanda (VCT) and Chalmers University of Technology. A number of research questions are formulated in order to keep the focus of the project. When the problem definition is finished, a project planning report, including a project plan, is written and approved by VCT as well as Chalmers University of Technology before the project starts.

3.2 Establishing of theoretical foundation
Relevant theories and methods are examined in preparation for the study of the capacity problem. The literature research covers the subjects of data collection, process and value stream mapping, bottleneck detection methods and discrete event simulation. Basic knowledge about production systems is already gathered from courses taken at Chalmers University of Technology, such as Lean Production, Production Management and various project courses. Deeper knowledge within the subjects is gathered through researching books, scientific articles and PhD theses.

3.3 Conceptual Modelling and Process Mapping
The process mapping follows the construction of a Value Stream Map (VSM), since VSM is a standardized and well-known tool. According to Tapping et al. (2002) it is preferable to draw rough sketches of the production flow before starting to map the processes. Therefore, maps of the body shop are studied before a site visit in order to get a first view of the plant and provide guidance in the following process mapping. Before the site visit a number of main data parameters are defined so that a focus is set when performing the mapping. Careful preparations before the site visit are of extra importance when studying a large geographical area, as in this thesis. During the site visit the mapping is performed by walking upstream from the last station to the first station of the production flow.
The product group to be included in the VSM is pre-decided by VCT, since the thesis aims at investigating a certain part of the body shop. When deciding upon the level of detail, aspects such as the output needed for the analysis and the possibilities to extract data are considered.

3.4 Data collection
Large amounts of data are gathered during the project. Some data are downloadable from VCT's internal databases, but in every case the data need to be reworked in order to suit their purpose. Some data are collected manually by timing the events on the shop floor; this is especially necessary when it comes to validation of the downloaded data. The collected data are classified according to the three categories mentioned in section 2.3 Data collection: Category A - Available, Category B - Not available but collectible and Category C - Neither available nor collectible. The data which are collected include cycle times, transport times, buffer levels, throughput times and production disturbances. These variables all affect the system capacity and are therefore necessary for the study of system losses. The cycle times are collected for each station, with regard to the five different car models. This is important in order to contribute to the analysis of the process mapping and to see whether or not the stations have somewhat balanced cycle times. Transport times, which are wastes, are analysed in search of deviations from reasonable times. The buffer levels play a crucial role in pinpointing where the system has difficulties keeping up with the takt, as do the throughput times. Production disturbances stand in immediate relation to the capacity and are analysed with regard to bottleneck analyses to find the most severe constraint in the production system.

3.5 Interviews
Interviews are held in order to collect thoughts from relevant people and to validate, or get new perspectives on, the static analysis.
In order to get fair results with a breadth of knowledge, the interviewees are from different departments and handle different work tasks. It is also preferable if interviews are held with people on all levels of the organisation, from shop floor to managers. In order to make the interviews as comparable as possible, the same questionnaire is used during all interviews. It is preferable if the interviews start with broad and comprehensive questions in order not to influence the interviewee.

3.6 Static analysis
The static analysis is initially performed on a rather comprehensive level, and thereafter again on a more detailed level in accordance with the prioritisation, see section 3.7 Prioritization of problem areas below. The analyses are performed with the help of the process mapping and the collected data. To identify problem areas, a static bottleneck detection is performed.

3.7 Prioritization of problem areas
In a complex system such as a production plant there are many different problems at any given moment. To find the most pressing problems, both interviews and data are analysed, see section 3.4 Data collection and section 3.5 Interviews for further description of the selected methods. Data are highly objective in comparison with interviews, although data collection can be biased depending on the viewer's perspective. This makes interviews a good supplement to data collection, since the aim of the data collection can be directed based on the interviews. All points of interest, based on both interviews and the static analysis, are collected, considered and analysed. When selecting what to move forward with, a qualitative evaluation is performed, based on the categories that have the largest influence on the throughput of the system. Categories which do not fit within the thesis limitations are shelved.
3.8 Simulation
If the static analysis shows that further research is needed in order to locate the factors causing the body shop's system losses, a dynamic analysis is performed. The dynamic analysis is performed in a simulation model, most likely a discrete-event simulation model, since such models are, according to Carson (2005), stochastic and have random variations and therefore best represent VCT's production system of today. When building a discrete-event simulation model there are, according to Banks (1998), a number of steps that should be considered; these are further discussed in section 2.5.3 The steps in a simulation study. Since a static analysis is done before the simulation model is constructed, large parts of the first four steps are already completed. For example, the problem formulation and the setting of objectives and overall project plan are already executed. For the model conceptualization the VSM constructed during the process mapping is used, and a large amount of data has already been gathered for the static analysis, which hopefully also contributes to the simulation model. However, since the simulation model highly depends on the quality of the data, it is important to make sure that the data are correctly used in the simulation model and, if necessary, that new data are gathered. When these steps are finished, the conceptual model is translated into a simulation model. Since VCT uses the program Plant Simulation, it is pre-determined that the model is built in this program. After the model is built, it is verified to make sure that it behaves in accordance with the conceptual model, and validated to make sure that it is a representation of VCT's production system. Once a representative model is constructed, experiments are designed to dynamically locate system losses and the dynamic analysis is performed.
3.9 Methodology evaluation
Once the problem is identified and the results are reached, the methodology has to be examined and questioned to assure the validity and correctness of the results. Pros and cons of the methodology, as well as its strengths and weaknesses, are preferably stated and discussed.

4. FACTORY DESCRIPTION
This chapter presents the Volvo Cars Torslanda (VCT) production plant where the thesis has been carried out. VCT is a complete car manufacturer that delivers to customers all around the world; 155 411 cars were produced here in 2012. The factory is divided into three main parts: the body shop, the paint shop and the final assembly, as illustrated in figure 10 below.

Figure 10, an overview of Volvo Cars Torslanda.

The thesis's geographical limitations, see figure 2 on page 2, are within the body shop, where the complete body in white (completed car body) is welded together before it is sent to the paint shop; see section 4.1 The body shop below for a more thorough description of the production steps in the body shop. After the paint shop, the painted body is assembled into a finished car in the final assembly factory. The product range includes the car models V70, XC70, XC90, V60, V60-hybrid, S60 and S80. All product variants are more or less produced on the same flexible line through the whole production plant.

4.1 The body shop
The production of a car body starts with the under body, which is the flooring structure. The under body is manufactured in three different parts, the front, middle and rear, which are thereafter welded together into the complete under body and sent to an under body buffer. When the very first front part of the under body is manufactured, the product receives an identity and specifications for further manufacturing. When the under body leaves the under body buffer it enters the area on which the thesis is focused.
The order queue is already set, so the loading station after the under body buffer selects the right product and loads it on a fitting pallet type. Numerous operations of respot welding, assembly of the left and right sides, roof beams and a roof, as well as several quality checks, lead the product to the K10 buffer. When the product enters the K10 buffer it leaves the thesis's area of interest. The XC90 model has an almost entirely independent flow through the body shop, and it does not cross the thesis's delimited area between the under body buffer and K10. After the K10 buffer the products go through further assembly lines, where components such as fenders and doors are added, before they are sent off to the paint shop. Once the products have been painted they continue into the final assembly, where they receive all their interior and exterior details before they are quality tested.

5. PROCESS MAPPING AND PRIORITIZATION OF PROBLEM AREAS
This chapter is divided into seven main sections. Firstly, the problem definition is described in section 5.1 Problem definition and Project plan; thereafter the studied literature is presented in section 5.2 Establishing of theoretical foundation. The results from the process mapping are presented in section 5.3 Conceptual Modelling and Process Mapping, followed by the geographical modularisation in section 5.4 Modularisation of the production system. The execution and results of the static analysis on module level can be found in section 5.5 Initial static analysis, whilst the performed interviews are described in section 5.6 Interviews. A prioritization of certain points of interest is stated in section 5.7 Prioritization list. After the prioritization was set, the analysis continued with focus on the areas of interest, see chapter 6. Analysis of System Losses. The thesis has followed the outline displayed in figure 11, which gives an overview of the execution of the thesis.
As seen in the figure, the thesis has followed an iterative methodology where conclusions have been drawn after each step, which has guided the direction of the following steps. Figure 11, map of the execution of the thesis.

5.1 Problem definition and project plan

The problem definition was made in cooperation with tutors at Volvo Cars Torslanda (VCT) and Chalmers University of Technology. Discussions led to the research questions in section 1.4 Problem formulation. As a result of the problem definition a project planning report was written and thereafter approved by VCT and Chalmers University of Technology. The project planning report included a problem context, the aim of the thesis, limitations, the research questions and a time plan in the form of a Gantt chart.

5.2 Establishing of theoretical foundation

In order to scrutinize the given main problem and come up with significant theories and methods, literature within relevant subjects was studied in an initial phase of the thesis. The studied literature was mainly within the subjects of data collection, process and value stream mapping, bottleneck detection methods and discrete event simulation. Basic knowledge about production systems was gathered from courses taken at Chalmers University of Technology, such as Lean Production, Production Management and various project courses which include simulation. Deeper knowledge within the subjects was gathered through research in books, scientific articles and PhD theses. Inspiration and information were also collected from previous master's theses within the subject. All the theories used in this project are described in chapter 2. Theory.

5.3 Conceptual modelling and process mapping

As explained in section 4.3 Conceptual Modelling and Process Mapping, the process mapping was chosen to follow the construction of a Value Stream Map (VSM).
The product group was set to include the car models that are produced within the limited area of the body shop: S60, V60, V70, XC70 and S80. The level of detail on the VSM was chosen to be at station level within the thesis's geographical limitations, see figure 2 on page 2. This decision was based on the fact that most data were collected at station level. In addition to the VSM over the limited geographical area, a VSM over the whole body shop at line level was constructed in order to get an overall picture of the flow, and especially the information flow. When product group and level of detail had been chosen, rough sketches of the body shop's flow were drawn from existing maps in order to get a first view of the plant. These maps worked as first conceptual models and guidance in the following process mapping. Before the site visit a number of main data parameters were defined:

- Model-specific cycle times
- Transport times
- Number of pallets for each pallet type
- Available production time
- Planned downtime
- Downtime
- Change-over time

Focus was on these data parameters when the mapping on the floor was performed. The mapping was performed by walking upstream from the last station to the first station, starting at line level through the whole flow in the body shop and then at station level within the limited geographical area. Some data were collected during the mapping in order to get an estimation of the different times, not to create a statistical foundation. In addition to the site visits, short interviews were held with a number of experienced people in order to get a clearer picture of the information and material flow and the overall manufacturing process. The VSMs were continuously reconstructed after several site visits and interviews until all relevant information was included and the map was thoroughly validated.
The two different VSMs, on line level and on station level, were created in Microsoft Visio with standardized VSM icons and data gathered from VCT's databases. It was concluded that a classical VSM snapshot would not provide a sufficient level of detail for the analysis of the process, due to the large variations in the flow. Larger amounts of data were required; therefore historical data were collected from VCT's databases rather than during the mapping. The collected data were weighted in relation to the product mix in order to get a representative average for each station, to ease the analysis and create a fair picture of the production. When interviews, observations and data had been gathered, analysed and visualised in the two VSMs, the conceptual model was validated through interviews with knowledgeable people. The resulting two Value Stream Maps are presented below in figure 12 and figure 13. Figure 12 shows the entire body shop on a production line level, whilst figure 13 illustrates the VSM on a station level within the area of interest specified in figure 2 at page 2.

Figure 12, VSM over the entire body shop.

Figure 13, Value Stream Map over the area within the thesis limitations.
As figure 12 illustrates, the information flow for the body shop is very complex. It can also be noted in both figure 12 and figure 13 that there are many buffers in the system; in fact there are even more buffer places than displayed in the figures, due to the many transports between stations and production lines.

5.4 Modularisation of the production system

When the process mapping was finished, all of the stations were divided into six different modules.
The modules were set based on the complexity of the stations within each module and how they affect the overall system. All the chosen modules have some buffer capacity between each other, and some have internal buffer places. Module one was decided to consist of the two first production lines, 11-41 and 11-42, since they mainly consist of respot robots and are relatively stable. Module two consists of the third production line, 11-43, which is a measurement line. The measurements are not done the same way on all products, which generates differences in cycle times on this production line. Modules three and four consist of the left body side (11-51 and 11-57) and the right body side (11-52 and 11-56), which are divided into different modules since the production is affected unequally by the two sides. The fifth module consists of the production of the roof beams and ringframe (11-54), and the sixth module handles the line from the framing of the different lines downstream to the K10 buffer (11-54, 11-55). The different modules are visualized in figure 14 below. Figure 14, modularization of the chosen area in the body shop.

5.5 Initial static analysis

The static analysis was performed in two steps: one initial, in order to come up with a prioritization of the problem areas given the rather large number of stations within the system, and a second at a higher level of detail for the prioritized areas, further explained in section 5.7 Prioritization list. The static analysis was based on the conceptual models, observations, data and interviews. In the initial phase, analyses based on cycle times and throughput rate were combined with disruption analyses in order to find the bottleneck or the problem area. These analyses were mainly performed on module level, see section 5.4 Modularisation of the production system for a description of the modules.
However, all stations' inherent cycle times within the system were collected and analysed, since they are of interest in the bottleneck detection analysis. In the following sections the analysed data are presented and classified according to the three categories mentioned in section 2.3 Data collection: Category A - Available, Category B - Not available but collectible, and Category C - Neither available nor collectible. Several databases with different ways of logging the data have been used in order to get as correct data as possible and in order to verify the data properly. Before the data were collected, the level of detail was considered. As mentioned above, some data needed to be collected on station level and some on module level. Times which were not available in databases had to be collected manually by timing the missing events; these data therefore belong to Category B - Not available but collectible. Fortunately these times were mostly transport times, elevators and turntables, which could all be assumed not to depend on the car body model. However, these times were noted to vary for different pallets, so approximately 10 events were timed in order to calculate a reliable average time (with extreme values excluded).

5.5.1 Inherent cycle times

Inherent cycle times, or machining times, for the different stations in the production flow could be categorized as A - Available. At any given moment the durations of the approximately 15 000 latest historical cycles, corresponding to approximately 40 days of production, could be extracted from a logging system called CTView and imported into Microsoft Excel. Thereafter the cycle times were separated by car model and a weighted average for each station and model was calculated. When calculating the average cycle times the extreme values were removed, since they could be assumed to be incorrect.
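The averaging procedure described above can be sketched in a few lines. This is an illustrative reconstruction, not VCT's actual tooling: the (model, seconds) input layout and the median-based trimming rule are assumptions, since the thesis does not specify exactly how extreme values were defined.

```python
from statistics import mean, median

def trimmed_model_averages(cycles, lo=0.5, hi=2.0):
    """Average cycle time per car model, excluding extreme values.

    `cycles` is a list of (model, seconds) pairs, e.g. exported from a
    cycle-time log. Values outside [lo, hi] times the model median are
    treated as logging errors and dropped before averaging.
    """
    by_model = {}
    for model, t in cycles:
        by_model.setdefault(model, []).append(t)
    averages = {}
    for model, times in by_model.items():
        med = median(times)
        kept = [t for t in times if lo * med <= t <= hi * med]
        averages[model] = mean(kept)
    return averages

def weighted_station_average(model_avgs, mix):
    """Weight the per-model averages by the product mix (shares sum to 1)."""
    return sum(model_avgs[m] * share for m, share in mix.items())
```

With a hypothetical mix such as `{"V70": 0.5, "S80": 0.5}`, the station average is simply the mix-weighted mean of the per-model averages.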
A problem with the CTView logging system is that machines tended by operators do not log the full inherent cycle times, only machining times, which could yield a misleading cycle time analysis. As an example, one station's cycle time indicated that the station worked well under the cycle times of the preceding and following stations, but one look at the disruption log instantly revealed that the cycle times for the station were often exceeded. These stations' cycle times needed to be validated by timing the processes manually. Furthermore, for every observed station there were several car bodies of unknown model with registered cycle times. These have however been considered negligible due to their low frequency and have therefore been excluded. The results of the cycle time analysis are presented below in figure 15. Figure 15, the stations with the top inherent cycle times. Figure 15 presents the stations with the highest inherent cycle times in the system. VCC's takt time is 60 seconds, and as seen in figure 15 all stations' inherent cycle times are below 60 seconds, which makes them acceptable. The problem did not seem to be the inherent cycle times.

5.5.2 Throughput rate

Throughput times were extracted from the production control database (TAP), and thus categorised as A - Available. Data for the first and last station of all modules were extracted for three representative days. The number of car bodies that had passed the stations between 07:00 and 17:00 was counted, and the throughput time in seconds per unit was calculated for the first and last stations within each module and graphically visualized in figure 16 below. The lighter bars illustrate the three connecting flows, the left body side 11-51-100 (module three), the right body side 11-52-100 (module four) and the roof beams 11-54-010 (module five), whilst the darker bars all indicate the under body flow (modules one, two and six). Figure 16, throughput rate.
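The seconds-per-unit figure follows directly from a count of passings in the studied window. A minimal sketch, under the assumption that the passing log can be read as a list of timestamps:

```python
from datetime import datetime, time

def throughput_seconds_per_unit(passings, start=time(7, 0), end=time(17, 0)):
    """Seconds per produced unit at a station: length of the studied
    window divided by the number of bodies that passed within it.

    `passings` is a list of datetime objects, one per passing body.
    """
    count = sum(1 for ts in passings if start <= ts.time() < end)
    window_s = (end.hour - start.hour) * 3600 + (end.minute - start.minute) * 60
    return window_s / count if count else float("inf")
```

For example, 360 bodies passing within the 07:00-17:00 window (36 000 seconds) gives 100 seconds per unit.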
In a system the throughput should be roughly the same for all connected stations, but due to the fact that all stations are not always occupied, some stations register a faster throughput simply because they had a better starting point with a larger work in process. This effect is enlarged since only three days have been studied, but overall the throughput time for the production flow is accurate. According to figure 16, station 54-010, in module five, seems to limit the framing station 54-020, which is in the beginning of module six. Noteworthy is that this limitation of the flow does not extend to limit station 55-190, which is located at the end of module six, as it should when the stations are connected to each other. However, this could, as discussed above, be a result of the short period of time that is considered. Furthermore, it is remarkable that the throughput rate is at approximately 95 seconds when the inherent cycle times for the stations are well below 60. This indicates that the problem is not the cycle times but downtimes, transport times or something else.

5.5.3 Production disturbances

The analysis continued with a study of the disturbances, primarily to see whether the high throughput times were a result of the disturbances, but also in order to visualise the system constraints. Data on downtime, starvations and blockings for each station could be extracted from VCT's database for disturbances (SUSA), hence categorised as A - Available. When searching for the system constraints the common utilisation method, see section 2.4.1 Bottleneck detection methods, could not be used since the active time could not be specified accurately. Unfortunately only disturbances are logged and there is no way to know for certain whether a station has been active or not. Therefore a modified version of the waiting time method, see section 2.4.1 Bottleneck detection methods, was developed based on the available data.
Instead of studying where products have to wait the longest, an analysis of whether each station is starved or blocked the most could be executed. The database (SUSA) was unfortunately not optimal for this purpose. The database records the lengths of disruptions in seconds, and the first alarm owns the recording at the specific station, hiding all simultaneous underlying disturbances. When the first disturbance settles, an underlying disturbance can take ownership, but the recorded time for the second disturbance is not the true duration, since its beginning was hidden by the first disturbance. This would not be a problem if all disturbances led to a halt in the process, but at the time being only some do, which leads to inaccuracies in the calculations. Another issue with the database is the times displayed for short stoppages. The database receives its data from a virtual device which checks the Programmable Logic Controller (PLC) for a status change only every 25th second. This means that a shorter disturbance which starts and resolves within the 25 seconds is not recorded, whilst some short disruptions which take place during a status check are recorded. This leads to a very random recording of small disruptions. Yet another quite unexpected problem with the database is that only some interruptions are actually logged in the system. The system (SUSA) looks at predefined codes and selects which status changes to log, leaving others behind. There are standardisations for this but they have not been followed, which led to the discovery of some interruptions that were logged in the short memory of a PLC but not in the database. The effects of this problem are very hard to grasp, but it can only be assumed that the database is relatively close to the truth, since it is the only system VCT uses to log and follow up disturbances.
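The "random recording" of short stoppages can be quantified: if a stoppage starts at a uniformly random phase relative to the polling cycle, a stoppage of d seconds overlaps a status check with probability min(d/25, 1). A small Monte Carlo sketch of this sampling effect (an illustration of the principle, not of the actual virtual device):

```python
import random

def fraction_recorded(duration, poll_interval=25.0, trials=100_000, seed=1):
    """Estimate the probability that a stoppage of `duration` seconds
    overlaps a status check, when checks occur every `poll_interval`
    seconds and the stoppage starts at a uniformly random phase."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        phase = rng.uniform(0.0, poll_interval)  # stoppage start within a poll cycle
        # the next status check occurs at t = poll_interval
        if phase + duration >= poll_interval:
            hits += 1
    return hits / trials
```

A 10-second stoppage is thus seen only about 40% of the time, while any stoppage longer than 25 seconds is always seen, which matches the erratic short-stoppage statistics described above.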
Furthermore, data for the first and the last stations could not always be collected, since not all stations record information in the system. In modules three and four, nearby stations needed to be chosen instead of the last ones. This problem was considered negligible due to the fact that the chosen stations were nearby and directly linked to the last station in the module. Furthermore, in modules three and four the first stations were excluded from the initial study since they do not affect the main flow directly; there are several starting points in modules three and four where the first material is mounted. It could be assumed that the last stations would indicate problems within these modules. The disturbance times were verified in a couple of strategically selected stations by comparing them to the production control database (TAP). An average inherent cycle time, weighted by car model, was multiplied by the number of produced car bodies in order to generate the process time. Thereafter the process time was summed with the calculated disturbance time and compared to the overall production time for the time period. The added process time and disturbance time normally reached only 70-80% of the overall production time, which could be explained by transport times and product handling in between the processes. Data from each module, as well as from station 11-73-680, where the under body is loaded onto the pallet, were extracted for the last 32 days of production in order to evaluate the modules, see figure 17 below. A clarifying flowchart of the studied stations is displayed in figure 18. To assure representative data the activities during the night shift were removed. The production pace is much lower during the night shift, which creates many starvations and blockings due to the simple fact that fewer products are in process.
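The verification step described above is a simple coverage calculation; a sketch with made-up illustrative numbers (not the actual VCT figures):

```python
def coverage_ratio(avg_cycle_s, produced_bodies, disturbance_s, production_s):
    """Share of the production period accounted for by processing plus
    logged disturbances; the remainder is attributed to transports and
    product handling between the processes."""
    process_s = avg_cycle_s * produced_bodies  # weighted cycle time x output
    return (process_s + disturbance_s) / production_s
```

For example, a 10-hour period (36 000 s) with 500 bodies at a 45 s weighted cycle time and 4 500 s of logged disturbances is covered to (22 500 + 4 500) / 36 000 = 75%, in line with the 70-80% reported above.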
All data recorded during breaks were also removed, since the manual stations generate starvations and blockings in the production system during break times. Figure 17, disturbance times on module level. Figure 17 illustrates that modules one and two (with end stations 11-41-130 and 11-43-090) are generally more often blocked than starved, which indicates that the bottleneck lies downstream in the flow. However, in modules five and six (with starting stations 11-54-010 and 11-54-020) the starvation times are higher than the blocking times, which raises a suspicion that the constraint in the system is near the framing station 11-54-020, or somewhere downstream from that point. It was also noted that the framing station had the largest rate of "Other" disturbances, which are inherent disturbances and not waiting times. When it comes to modules three and four (11-51-100 and 11-52-100), the left and right body sides, it seemed as if their constraints lie upstream in their respective flows. Figure 18, flow chart of the studied modules. It is notable that the starvation times as well as the blocking times are closely related to the cycle times. For example, stations 11-73-680 and 11-55-190 in figure 17 above are both known to have a much lower than average cycle time. This means that when these fast stations have finished their processes and are ready to start new ones, they log starvation due to slower preceding stations. However, the relatively fast stations should also log blockings just as quickly, which means that a comparison between starvation times and blocking times within the fast stations should point in the direction of a bottleneck. Another interesting aspect is that the disturbance "manual loading time exceeded" appeared frequently in the roof beam station 11-54-010 (module five), which needed further investigation. The same analysis was performed for the framing station 11-54-020, where the four flows merge into one in the beginning of module six.
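The reading of figure 17 follows a simple rule of thumb: a station that is blocked much more than it is starved points at a constraint downstream, and vice versa. A sketch of the comparison (the 20% margin is a hypothetical threshold, not a value taken from the thesis):

```python
def bottleneck_direction(starved_s, blocked_s, margin=1.2):
    """Modified waiting-time heuristic applied to a station's
    accumulated starvation and blocking seconds."""
    if blocked_s > margin * starved_s:
        return "downstream"   # station mostly waits to release parts
    if starved_s > margin * blocked_s:
        return "upstream"     # station mostly waits to receive parts
    return "inconclusive"
```

Applied per module, the same comparison reproduces the observations above: modules one and two lean "downstream", modules five and six lean "upstream" of their starting stations.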
This station's starvation and blocking disturbances indicate which of the four flows the framing station is waiting for or blocked by. The results are illustrated in figure 19 and figure 20 below. Figure 19, disturbance times for the framing station. Figure 20, distribution of disturbances in the framing station. Figures 19 and 20 clearly state that the framing station is mainly starved waiting for roof beams ("Roof Beams", which is module five) and blocked by the downstream under body flow ("Under Body", which is module six). Note that when looking at the number of disturbances, the roof beam station has a clear majority of nearly 39%. This supports figure 17, where the roof beam station 11-54-010 logged many disturbances under the title "manual loading time exceeded". It became clear that the roof beam station, which is module five, needed further research, and so did the downstream under body flow, module six, with the framing station and its relatively large bar of "Other" disturbances.

5.5.4 Transport times

In order to make sure that the transport times do not constrain the production system, they were extracted from two different databases and also gathered manually, hence the classification of the data as both A - Available and B - Not available but collectible. The transport times for stations that had a predecessor on the same production line were extracted from the same database as the cycle times (CTView), hence based on 15 000 cycles with extreme values excluded. Transport times between lines and for stations without logged cycle times had to be extracted from another system (TAP) that logs passing bodies. By comparing time stamps for individual bodies the missing transport times were calculated. These calculations gave average transport times independent of car model and were based on over 5 000 car bodies with extreme values excluded.
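The time-stamp comparison reduces to per-body differences averaged with extremes excluded. A sketch assuming the passing log can be read as body-id to timestamp mappings per station (the trimming rule is an assumption, as the thesis does not define "extreme values" precisely):

```python
from datetime import datetime
from statistics import mean, median

def average_transport_time(passed_a, passed_b, hi=2.0):
    """Average seconds between station A and station B, from dicts
    mapping body id -> passing datetime at each station. Differences
    above `hi` times the median are excluded as extreme values."""
    diffs = [(passed_b[bid] - passed_a[bid]).total_seconds()
             for bid in passed_a if bid in passed_b]
    med = median(diffs)
    return mean(d for d in diffs if d <= hi * med)
```

Bodies seen at only one of the two stations are simply skipped, which mirrors the fact that not every passing is logged at every measuring point.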
The system that logs cycle and transport times (CTView) is not thoroughly standardized, and the placement of recording equipment as well as the information system itself could differ between stations. The fact that two different databases have been used to calculate transport times is another source of uncertainty, since the two systems collect data differently. Further uncertainty lies in whether the whole transport time is included or not, and in how the transport time is defined in the system. However, the data could be validated by timing a number of strategically chosen stations by hand and comparing those times with the two databases' recordings, both by validating the mean value and by validating the logged data. Stations without recorded data were also timed by hand; in these cases the timing was based on approximately 10 passing bodies, which was considered sufficient due to the low variations in transport times. The transport times were also verified using the production control database (TAP) and calculating lead times, which are the sums of the previously calculated cycle times and the transport times. The transport times were checked for large variations in speed or distance. All times gathered were found to be below the takt time of 60 seconds, and should therefore not constrain the production system. However, together with the cycle times the takt time could be exceeded, so the transport times are still of importance.

5.6 Interviews

Interviews were held with selected people in order to identify problem areas and to get a comprehensive knowledge of the different problems in the factory. They were also used to validate and discuss the problem areas identified in the static analysis. The persons who were chosen for the interviews came from different departments and had different work tasks; they were for example mechanics, engineers and managers. This gave breadth, comprehensive knowledge and more trustworthy results.
The same questionnaire, which can be found in appendix A, was used during all interviews in order to be able to compare and analyse the different answers. The interviews started with wide and comprehensive questions in order not to influence the interviewee, and then became more and more specific in order to validate the information that was extracted from the static analysis. More specific questions within the interviewee's work area were also discussed at the end of each interview. During the interviews, maps were used to more easily illustrate and discuss geographical areas and pinpoint problem areas. Graphs from the static analysis and the process maps were also presented to the interviewee in order to get new perspectives, a relevant discussion on the static analysis and a validation of both the analysis and the process maps. All ideas of different problem areas that came up during the interviews were gathered and analysed continuously as the interviews proceeded. It soon appeared that some ideas were shared throughout the group of interviewees, some were shared within departments and some ideas were unique. Ideas which supported the analysed data were of extra interest, but frequent ideas were also considered a resource and later on compared with further data analysis. The ideas that were unique and poorly founded were shelved. The ideas from the interviews were stored in an idea database. This database also contained different ideas that were collected throughout the project, both from different persons and ideas resulting from the different static analyses. The ideas that were considered solid and well founded followed a pattern and could therefore be categorized into eleven different categories. The results from the interviews