Prediction of the endurance limit utilizing the weakest link concept
A comparison of two statistical methods
Master's Thesis in Mechanical Engineering
ANDREAS ANDERSON

Department of Applied Mechanics
Division of Material and Computational Mechanics
CHALMERS UNIVERSITY OF TECHNOLOGY
Göteborg, Sweden 2012
Master's Thesis 2012:07

© ANDREAS ANDERSON, 2012
Master's Thesis 2012:07
ISSN 1652-8557
Department of Applied Mechanics
Division of Material and Computational Mechanics
Chalmers University of Technology
SE-412 96 Göteborg, Sweden
Telephone: +46 (0)31-772 1000
Chalmers Reproservice, Göteborg, Sweden 2012

ABSTRACT
Several theoretical methods for predicting the endurance limit exist. The major benefit of these methods is the possibility to reduce both time and cost by performing simulations in a computer. Using this approach there is no need to manufacture prototypes exclusively for time consuming and costly tests. This thesis describes how two theoretical models based on the weakest link concept are implemented as a post-processing tool in the Abaqus 6.7 CAE software. Based on a pre-determined loading case for a given geometry and material, this tool returns the probability of failure for that loading case. The models are also evaluated from a statistical point of view: not only are the calculated endurance limits compared between the models, but also the sensitivity of each model. Further work evaluating the accuracy of the predicted endurance limit has been carried out, and a method has been developed to compute the uncertainty of the presented results. This result is presented as the distribution of the endurance limit within a 95% confidence interval. The results show that the accuracy of the statistical models is acceptable with regard to the actual spread of results obtained from laboratory tests. The simulation approach must therefore be considered a powerful tool in the product development process.
Key words: Endurance limit, Fatigue limit, Weakest link, Weibull, GPD, Inclusion, Post-processing.

CONTENTS
ABSTRACT
CONTENTS
PREFACE
1 INTRODUCTION
1.1 Background
1.2 Objective
1.3 Limitations
1.4 Outline
2 THEORY
2.1 Traditional approach
2.2 Statistical models
2.2.1 The Weibull method
2.2.2 The Generalized Pareto Distribution (GPD) method
3 IMPLEMENTING THE MODELS USING FEM SOFTWARE
3.1 Assumptions
3.2 Defining geometry and constraints
3.3 Mesh convergence study
3.4 Obtaining material parameters
3.5 Introducing the statistical models
3.5.1 The Weibull method
3.5.2 The GPD method
3.5.3 Statistical methods, regional approach
3.6 Workflow
4 EVALUATION OF THE STATISTICAL METHODS
4.1 Precision of parameters obtained from the NRIM tests
4.2 Finding the endurance load – manual method
4.3 Finding the endurance load – automated method
4.4 Finding the endurance load with the traditional approach
4.5 Comparison of the statistical models
4.6 Uncertainty of the statistical models
4.6.1 Finding the parameters with the method of maximum likelihood
4.6.2 Determination of the uncertainty by using Monte Carlo simulation
5 RESULTS
5.1 Comparison of the models
5.2 Uncertainty of the model itself
6 CONCLUDING REMARKS AND RECOMMENDATIONS FOR FUTURE WORK
6.1 Summary of results
6.2 Comparison project
6.3 Develop a tool for calculating the endurance limit
6.4 Further investigation and improvement of the regional approach
7 REFERENCES

Preface
This master's thesis is the final part of the Master Programme in Mechanical Engineering at Chalmers University of Technology, Gothenburg. The project has been carried out with Ph.D. Magnus Ekh as supervisor representing Chalmers and with Ph.D. Erland Johnson and Ph.D. Thomas Svensson representing SP Technical Research Institute of Sweden. The work has mainly been carried out at SP Technical Research Institute of Sweden. My co-worker Sara Lorén is highly appreciated for her help with the introduction to the initial statistical parts of the project. I would also like to thank Jan Granlund at Femtech AB for his support regarding the Abaqus software. Finally, special thanks are due to Thomas Svensson for his support and valuable discussions throughout the work.
Borås, March 2012
Andreas Anderson

1 Introduction
Fatigue is a common cause of failure in mechanical components subjected to cyclic loading. Due to factors such as economy, performance, maintenance and safety, more and more of these components are manufactured from high strength steel or cast iron. In order to predict a component's ability to withstand a certain loading condition, it is favourable to do so by computational methods rather than by a trial-and-error testing approach. This thesis presents and compares two computational methods implemented in the commercial Abaqus software.

1.1 Background
The fatigue limit is an interaction between the stress distribution and the local material properties. In order to estimate the actual fatigue limit it is therefore necessary to include both of these in the analysis.
The maximum stress in "simple" components subjected to cyclic loading is relatively easy to estimate. The stress distribution is affected by geometry changes, and design handbooks for engineers present "cases" for typical geometries. In these tables, stress concentration factors (in the literature often denoted Kt) are given for several simplified geometries in order to find the maximum stress. There are two main reasons limiting the usefulness of these models for deriving stresses for fatigue calculations. Firstly, these "cases" give no detailed information about the stress distribution in the component's actual volume. Secondly, as the geometry gets more complex there is likely no corresponding "case". It is therefore necessary to use an analytical tool that resolves the global loading of the component into local stresses. The most common and effective method today is the finite element method.

As mentioned initially, the stress distribution alone is not sufficient to determine the fatigue limit; we also need to describe the local material properties. In materials such as cast iron, the fatigue properties depend on the size and distribution of defects, which can be located on the surface as well as inside the material. From defects like these, cracks may be initiated. The fatigue limit can be described as the maximum stress amplitude which does not allow the most critical defect to increase in size. It is therefore desirable to develop methods allowing both the stress distribution and the local material properties in the component to be included in a fatigue limit analysis.

1.2 Objective
The purpose of this work is to develop a tool that can predict the fatigue strength based on established computational methods. These methods should be implemented as scripts written in the Python programming language, to be used together with the Abaqus software. The methods should be statistically compared and the uncertainty of each method should be evaluated. This evaluation should indicate whether it is possible to verify the validity and accuracy of each model by mechanical testing. The calculated fatigue limit, together with the corresponding level of uncertainty, will indicate the number of tests that need to be made in order to tell which of the models best describes the actual material behaviour.

1.3 Limitations
There are several methods for computational prediction of the fatigue limit. In this thesis two models were chosen and implemented. Both are statistical approaches: the first one formulated by Bomas et al. (1999), here denoted the Weibull method, and the other formulated by Yates et al. (2002), denoted the Generalized Pareto Distribution (GPD) method.

1.4 Outline
The theory behind the two implemented computational models is presented, followed by the work of implementing them in the FEM software. A method to evaluate the total accuracy of each implemented model, based on the accuracy of each individual material parameter, is presented. The results are thereafter discussed and visualized as colour fringe plots of the fatigue strength of the modelled component. Finally, conclusions and recommendations for future work are presented.

2 Theory
It is well known that a component subjected to a cyclic load can fracture at load levels lower than the ultimate load under static conditions.
This behaviour can be described in a Wöhler or SN diagram as presented in figure 2.1. The diagram describes the expected number of cycles to failure for a component subjected to a constant amplitude stress. The name Wöhler derives from the German railroad engineer who, during the mid-1800s, carried out rotating bending fatigue tests of railroad axles and was one of the first to describe the fatigue phenomenon. The letters in the name SN diagram stand for "stress" and "number".

Figure 2.1 The relation between stress range and component life, known as the Wöhler diagram.

Imperfections in the material are one of the main parameters contributing to variation in the test results. Variations in cycles to failure are normally described by a log-normal (log-Gaussian) distribution. In the initial, sloping part of figure 2.1, the curve shows the expected cycles to failure, i.e. the median value of the distribution: half of the tested specimens are expected to break before reaching the expected number of cycles, while the other half are expected to exceed it. The curve accordingly indicates the probability of failure P(The component fails) = 0.5. When the stress amplitude is lowered below a certain level, some materials show a behaviour of "infinite life". This is illustrated in figure 2.1 by the constant value of the stress limit above 5·10^6 cycles. At stress amplitudes lower than this, no cracks will propagate through the material.

The same basic Wöhler diagram is presented in figure 2.2, but here the statistical influence is more clearly visualized with regard to how we expect the results to fluctuate. We know that the expected life for a component subjected to a constant amplitude load is not exactly determined and will vary between apparently identical test specimens. We also know that the actual endurance limit will vary between corresponding specimens. In figure 2.2 this behaviour is visualized as confidence intervals of Gaussian distributions in log scale.

Figure 2.2 The Wöhler diagram with regard to expected variance.

The fatigue process in a component is a complex interaction between defects, local stresses and the cyclic deformation behaviour of the material. During the fatigue process one or several cracks will grow until the component fails. The failure is due to the fastest growing crack. This crack may have been initiated from a large, but not necessarily the largest, flaw or inclusion, depending on the local stress level, Yates et al. (2002). Fatigue cracks in hard materials are initiated at volume imperfections at the surface, Bomas et al. (1999). Fatigue crack propagation in these materials is very fast and it is usually not possible to stop a test in the crack propagation phase. Post mortem analyses normally show that the crack originated from a single defect, a consequence of the rapid crack propagation. This indicates that there is a most dangerous defect which, together with the local stresses, causes crack initiation. This way of reasoning is the basis of the weakest link concept.

2.1 Traditional approach
Probably the most intuitive way of calculating the critical load for a structure is simply to look at the stated endurance limit, expressed as a stress, for the actual material. Thereafter we choose the region in the actual geometry that is exposed to the maximum stress.
By assuming that this maximum stress equals the endurance stress, and then calculating "backwards", we obtain a value for the critical load. This approach indirectly presumes that the defect intensity is homogeneous throughout the geometry.

2.2 Statistical models
One of the benefits of a statistical approach is that it considers the defect intensity. Both of the methods, Weibull and GPD, predict the probability of failure of a component subjected to a certain loading condition when the statistical behaviour of the material is known. In order to apply the models to studies of components it is necessary to have access to material parameters. These parameters are calculated from data obtained from physical material testing. For the studies in this report the parameters were obtained from material data presented in NRIM (1978).

Both models are consistent with the principles of the weakest link concept. This means that, when calculating the probability of failure, the interest lies in the subvolumes of a structure, how each of these subvolumes is loaded, and how they together form the global probability of failure. The weakest link concept, originally formulated by Weibull (1959), is based on three assumptions:
- Defects are assumed to be randomly distributed.
- There is no interaction between the defects.
- The defects can be considered to be located along a chain of links in such a way that when one defect breaks, this weakest link leads to global failure.

Obviously these assumptions restrict the types of materials for which the statistical models can be considered applicable. At the same time the models do not consider crack propagation and are therefore only suitable for materials where propagation is of less importance. The models predict the fatigue limit and not the actual fatigue life.

2.2.1 The Weibull method
As mentioned, the Weibull method is consistent with the principles of the weakest link concept. The method works under the assumption that the endurance limit is Weibull distributed. From a statistical point of view, consider $V_0$ to be a reference volume. Assume that the maximum defect size in this volume is a random variable denoted $A_0$. The distribution function of $A_0$ is then $F_{A_0}(a) = P(A_0 \le a)$. The statistical size effect is described by determining the maximum defect size $A$ in an arbitrary volume $V$. The distribution function of $A$ is then:

$F_A(a) = \left[ F_{A_0}(a) \right]^{V/V_0}$  (2.1)

Taking a finite element model as a starting point, we assume that the maximum defect sizes are independently distributed among the element volumes. We assume that under a homogeneous, symmetric tension-compression load there is a stress threshold for every defect; if this threshold is exceeded, a crack is initiated from the defect. This threshold is denoted $\sigma_a$ and is a function of the defect size $a$. It is suggested that this relation can be described by fracture mechanics, and the following relation is proposed, Bomas (1999):

$\sigma_a = \frac{Q}{a^n}$  (2.2)

where $Q$ and $n$ are material parameters. The stress amplitude $\sigma_a$ can be interpreted as the endurance limit for the actual defect size $a$. Bomas then introduces the two random variables, $E_{L0}$ for the reference volume and $E_L$ for the arbitrary volume:

$E_{L0} = \frac{Q}{A_0^{\,n}}, \qquad E_L = \frac{Q}{A^{\,n}}$  (2.3)

Two distribution functions for these variables were proposed, the Weibull distribution and the Gumbel distribution.
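The connection between equations (2.1)-(2.3) and the Weibull form used below can be sketched as follows; the intermediate steps are not spelled out in the cited references and are only one way of making the link explicit. Since $\sigma_a = Q/a^n$ decreases with defect size, the event $\{E_L \le \sigma_a\}$ coincides with $\{A \ge (Q/\sigma_a)^{1/n}\}$, so that

$F_{E_L}(\sigma_a) = P(E_L \le \sigma_a) = 1 - F_A\!\left(\left(\tfrac{Q}{\sigma_a}\right)^{1/n}\right) = 1 - \left[ F_{A_0}\!\left(\left(\tfrac{Q}{\sigma_a}\right)^{1/n}\right) \right]^{V/V_0}$

If the parent distribution $F_{A_0}$ is such that $-\ln F_{A_0}\!\left((Q/\sigma_a)^{1/n}\right) = (\sigma_a/\sigma_{V_0})^m$, this reduces directly to the Weibull form of equation (2.4) below.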
The following work is founded on the Weibull distribution. The distribution functions for $E_{L0}$ and $E_L$ may be interpreted as the fracture probability in the volumes $V_0$ and $V$ respectively. Bomas suggests that the distribution for an arbitrary volume, $E_L$, can be expressed as:

$F_{E_L}(\sigma_a) = 1 - e^{-\frac{V}{V_0}\left(\frac{\sigma_a}{\sigma_{V_0}}\right)^m}$  (2.4)

where
$\sigma_{V_0}$ = endurance limit of the reference volume
$m$ = Weibull exponent

In the case of an inhomogeneous stress distribution we need to apply equation (2.4) to small volume elements. Assume that we have a stress distribution described by $\sigma_a = \sigma_a(u)$, where $u$ is the space coordinate. To each coordinate $u$ we associate an infinitesimally small subvolume $dv$. We can then express the endurance probability for a subvolume $dv$ as:

$P_{dv}(E_L > \sigma_a) = e^{-\frac{dv}{V_0}\left(\frac{\sigma_a}{\sigma_{V_0}}\right)^m}$  (2.5)

Multiplication of the probabilities of endurance for all subvolumes gives the endurance probability for the total volume. This is simply done by integrating the exponent:

$P_V(E_L > \sigma_a) = e^{-\frac{1}{V_0}\int_V \left(\frac{\sigma_a(u)}{\sigma_{V_0}}\right)^m dv}$  (2.6)

The probability for a certain component to fail when subjected to a certain loading condition is then expressed as:

$P(\text{The component fails}) = 1 - \exp\left[-\frac{1}{V_0}\int_V \left(\frac{\sigma(v)}{\sigma_{V_0}}\right)^m dv\right]$  (2.7)

where
$\sigma(v)$ = stress in volume $v$
$V_0$ = reference volume
$\sigma_{V_0}$ = characteristic fatigue strength
$m$ = parameter related to material scatter

The material constants $\sigma_{V_0}$ and $m$ used for the work presented in this report were calculated, by the method of profile likelihood described in section 4.6, from the NRIM (1978) material data.

2.2.2 The Generalized Pareto Distribution (GPD) method
The GPD method belongs to the family of statistical distributions used for modelling data over a threshold. Consider a component with a volume $V$ and let $u$ denote the coordinates of points within that volume. Then, let $\sigma(u)$ denote the stress at $u$. For each location $u$ we assume that failure is initiated at $u$ if the stress is above a certain critical value and there is, at the same time, an inclusion larger than a critical size at $u$. Failure will not occur until the combination of these parameters (stress and inclusion size) exceeds a critical level. It is assumed that the component survives if and only if, for each inclusion, the local stress is less than a critical value, Yates (2002). The defects are assumed to follow an extreme value distribution. A relation between the endurance limit and the defect size is the "area-max" model expressed as:

$\sigma_e = \frac{C\,(HV + 120)}{\left(\sqrt{area}_{\max}\right)^{1/6}}\left(\frac{1-R}{2}\right)^{\alpha} = \frac{q}{\left(\sqrt{area}_{\max}\right)^{1/6}}$  (2.8)

where
$\sigma_e$ = endurance limit
$C, \alpha$ = constants
$HV$ = material hardness, Vickers
$R$ = stress ratio, $\sigma_{\min}/\sigma_{\max}$
$\sqrt{area}_{\max}$ = maximum inclusion size

In equation (2.8) we have, for convenience, made the substitution $q = C\,(HV + 120)\left(\frac{1-R}{2}\right)^{\alpha}$. Hence, for each level of stress $\sigma$ there is a corresponding critical inclusion size expressed as $d_c = \left(\frac{q}{\sigma}\right)^6$. Smaller inclusions are not considered to initiate crack growth; a numerical illustration is given after the list below. The assumptions for the inclusions are:
- Inclusions larger than the critical size $d_c$ appear as a Poisson process with intensity $\lambda_0$.
- The size of the inclusions is random.
- The inclusions are independent of each other regarding size and location.
- The inclusion size above the critical size $d_c$ is exponentially distributed with parameter $\delta_0$.
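As a numerical illustration of the critical-size relation, using the value of $q$ reported in table 4.1 and an arbitrarily chosen stress level (the stress value is only an example, not a result from the thesis), an element stressed at $\sigma = 460$ MPa only "feels" inclusions larger than roughly

$d_c = \left(\frac{q}{\sigma}\right)^6 = \left(\frac{544.83}{460}\right)^6 \approx 2.8\ \mu\text{m}$

so lowering the local stress rapidly increases the inclusion size required to initiate a crack.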
For further convenience we let $\lambda = \lambda_0\, e^{d_0/\delta_0}$. The unit of $\lambda$ is mm⁻³ and the units of $\delta_0$ and $d_0$ are µm. This expression can be interpreted as the intensity of inclusions of any size. Finally, the probability for a certain component, subjected to a certain loading condition, to fail can be expressed as in equation (2.9):

$P(\text{The component fails}) = 1 - \exp\left[-\lambda \int_V \exp\left(-\frac{q^6}{\delta_0\,\sigma(u)^6}\right) du\right]$  (2.9)

where
$\sigma(u)$ = stress at point $u$
$\lambda$ = intensity of inclusions (of any size)
$q$ = parameter estimated from material hardness and stress ratio during testing
$\delta_0$ = parameter describing the inclusion size distribution
$V$ = component volume

(The material constants $\lambda$, $q$ and $\delta_0$ used for the work with the GPD method presented in this report were calculated by Sara Lorén at FCC from the NRIM (1978) material data.)

3 Implementing the models using FEM software
In order to implement the statistical models, Abaqus CAE 6.7-1, a commercial software for finite element analysis (FEA), was chosen. A crucial feature of the chosen software was the ability to execute externally written scripts. For Abaqus the scripts are written in the Python programming language; Python version 2.4.3 was installed and integrated into the Abaqus software. In this configuration Abaqus was used both as a pre-processor, for calculating structural stresses, and as a post-processor, for visualizing the new data fields created by running the scripts.

3.1 Assumptions
In order to use FEA to calculate the fatigue behaviour, the following assumptions were made:
- The volume of the defects is significantly smaller than the volume of the elements. Abaqus is not used for modelling any defects; the defects are introduced indirectly through the statistical models.
- A defect is located inside a single element only. It cannot be shared by two neighbouring elements.
- The stress distribution is considered homogeneous within each element.
- The material is considered linear, isotropic and elastic (E and ν are constant).
- E = 205 GPa and ν = 0.5.
- Von Mises stress is used for the calculations.

3.2 Defining geometry and constraints
When choosing a suitable geometry as a basis for the experiments, it is preferable that the geometry is easy to model and at the same time keeps the required data processing time low. To achieve this, a simple axisymmetric hour-glass geometry similar to the test specimens used for fatigue testing was chosen. The test geometry was designed with a waist, localizing the maximum stresses to the surface at the smallest diameter. The test specimen had a major diameter of 20 mm, with a radius of 5 mm cut into the waist giving a smallest diameter of 10 mm. In order to benefit from symmetries in the geometry, only a quarter of the test specimen was modelled, which further reduces the data processing time. The chosen geometry is presented in figure 3.1. To constrain the geometry in figure 3.1, the left boundary forms the axis of symmetry, the lower boundary was constrained as a symmetry plane, and the tensile load was applied as a pressure on the upper boundary.

Figure 3.1 The axisymmetric geometry chosen for the FEA calculations (left). Half of the actual test geometry is shown to the right.
3.3 Mesh convergence study
As the accuracy of the calculated stress distribution in the component increases with a finer mesh, it is necessary to use sufficiently small elements to reach reliable results. At the same time, a finer mesh increases the data processing time. There is also a conflict with the assumptions in section 3.1 if we choose an element size smaller than the critical defect size. As the level of convergence is a balance between accuracy and processing time, a compromise is necessary. Several meshes with different element layouts were tested with respect to convergence and processing time in a so-called convergence study. By allowing the mesh density to increase towards the notch, the accuracy could be maintained while the processing time was kept limited. The different types of evaluated meshes are illustrated in figure 3.2.

Figure 3.2 The six different meshes used for the convergence study in the FEA software. The meshes are entitled coarse, normal, fine, very fine, hybrid and hybrid fine, respectively.

In figure 3.2 the first mesh can be considered coarse, and the mesh gets finer for each step to the right, up to the mesh "very fine". For the meshes "hybrid" and "hybrid fine" one can clearly see that the mesh density is not evenly distributed over the geometry. Instead, the density increases near the radius at the bottom of the geometry, where we expect a high gradient in the stress distribution.

Table 3.1 Convergence/processing time study based on mesh density.

Mesh        | Total number of elements | Elements in radius | Normalized max Mises stress in geometry¹ | Normalized max displacement in geometry² | CPU time (s)
Coarse      | 256   | 7  | 0.809 | 0.999 | 0.2
Normal      | 1592  | 16 | 0.937 | 1.000 | 0.5
Fine        | 6019  | 31 | 0.994 | 1.000 | 2.1
Very fine   | 9634  | 50 | 1.000 | 1.000 | 3.1
Hybrid      | 398   | 25 | 0.975 | 1.000 | 0.4
Hybrid fine | 1337  | 30 | 1.004 | 1.000 | 0.6

¹ Stress obtained from the integration point with the maximum stress in the geometry.
² Max displacement obtained from node coordinates.

The convergence study presented in table 3.1 showed that acceptable convergence was reached somewhere between the meshes "fine" and "very fine". Further, the results indicated that the mesh "hybrid fine" could give good results with the benefit of reduced processing time, without sacrificing the accuracy required in the region of interest around the radius. The data from the convergence study is also presented graphically in figure 3.3.

Figure 3.3 Convergence/processing time study based on mesh density.

When performing an FEA, not only the mesh is of importance; the type of element assigned to the mesh also influences the accuracy. Different element types have different abilities to reproduce loading conditions such as bending. In the Abaqus software two candidates emerged among the element types: the "four node bilinear axisymmetric quadrilateral element with reduced integration", CAX4R, and the corresponding eight node element, CAX8R, were considered suitable for our loading conditions. In order to find a good compromise between mesh density, element type and processing time, further analysis was necessary. As the earlier studies showed no drawback of using the so-called hybrid mesh, the following convergence study ignored the evenly distributed mesh geometries.
In order to get a quantitative analysis, the number of elements in the radius was chosen as a measure of the mesh density. As mentioned earlier, the type of element used also affects the processing time. The results of this convergence study are presented in table 3.2 and graphically in figure 3.4.

Table 3.2 Convergence/processing time study based on mesh density and element type.

Element type | Elements in radius | Normalized max Mises stress in radius¹ | CPU time (s)
CAX4R | 12 | 0.823 | 1.0
CAX4R | 20 | 0.869 | 1.4
CAX4R | 40 | 0.951 | 5.5
CAX4R | 80 | 0.979 | 7.2
CAX8R | 6  | 0.935 | 0.4
CAX8R | 12 | 0.994 | 0.5
CAX8R | 25 | 1.000 | 2.6

¹ Stress obtained from the integration point with the maximum stress in the radius.

Figure 3.4 Convergence/processing time study based on mesh density and the type of element used.

The result of the convergence study showed convergence for the CAX8R elements at a coarser mesh density than for the CAX4R elements. In terms of processing time, the CAX8R element consumed about the same CPU time as the CAX4R element at the same density. The compromise chosen for the further work was a CAX8R-based model with a density of 25 elements in the radius. The mesh used together with the statistical methods is visualized in figure 3.5.

Figure 3.5 The chosen mesh for implementing the statistical models.

The mesh in figure 3.5 was used in the analyses for both of the statistical models, since using two different meshes could affect the comparison. In figure 3.6 the stress distribution from an FEA in Abaqus is visualized. As expected, the stress is largest in the region where the load-carrying area is smallest.

Figure 3.6 The stress distribution obtained from FEA in Abaqus. Red corresponds to high levels of stress and blue to low levels. The left picture shows the modelled geometry and the right picture shows a fringe plot of the actual test specimen.

3.4 Obtaining material parameters
Material data was collected from test results presented in NRIM/FDS/No.4 (1978). This data was processed by Sara Lorén at the Fraunhofer-Chalmers Centre, FCC, who supplied the material parameters presented in section 4.1, used for the studies and the implementation in this thesis. The steel was a so-called S55C steel with a carbon content of 0.55 percent.

Table 3.5 Chemical composition of the S55C steel samples.

Element        | C    | Si   | Mn   | P     | S     | Ni   | Cr   | Ca
Contents (wt%) | 0.52 | 0.29 | 0.83 | 0.016 | 0.025 | 0.04 | 0.04 | 0.02

The samples were selected from eleven batches of hot rolled steel rods of diameter ≈22 mm, originating from steel ingots of typically 4 tonnes. The steel was normalized at 825°C for 30 minutes and thereafter air cooled. It was then again heated to 825°C for 30 minutes and water quenched. Finally, the steel was tempered in three groups at 550, 600 and 650°C respectively for 60 minutes and thereafter water cooled. The steel's pureness was also examined with regard to non-metallic inclusions; the results are indicated in table 3.6.

Table 3.6 Results from the metallurgical examination.

Type of inclusion | Contents (%)
Inclusions deformed by plastic work (sulphide, silicate etc.) | 0.12
Inclusions occurring in discontinuous arrays (aluminum etc.) | 0.02
Isolated inclusions (granular oxides etc.) | 0.00
From this material NRIM prepared test specimens for static and fatigue tests. Fatigue tests have been performed for rotating bending, torsion and uniaxial loading conditions. The test results used in this thesis for calculating the material parameters were the results for rotating bending of specimens tempered at 600°C. The static properties of the steel are presented in table 3.7. The fatigue results are compiled in table 3.8 and visualized in the Wöhler diagram in figure 3.7.

Table 3.7 Mechanical properties of S55C steel tempered at 600°C.

Upper yield stress (N/mm²) | 0.2 proof stress (N/mm²) | Tensile strength (N/mm²) | True fracture stress (N/mm²) | Elongation (%) | Reduction of area (%) | Charpy impact value (J/cm²) | Vickers hardness (HV 196N)
635 | 639 | 823 | 1512 | 24 | 64 | 130 | 261

Table 3.8 Results from NRIM used as input for calculating the material parameters.

Stress amplitude (N/mm²) | Cycles to failure
520 | 5.08 × 10⁴
520 | 6.34 × 10⁴
500 | 1.45 × 10⁵
500 | 1.52 × 10⁵
480 | 2.42 × 10⁵
480 | 3.10 × 10⁵
470 | 1.47 × 10⁵
460 | 3.12 × 10⁵
460 | 4.34 × 10⁵
450 | 4.01 × 10⁵
450 | 4.82 × 10⁵
450 | >2.12 × 10⁷
440 | 1.21 × 10⁷
440 | >2.12 × 10⁷
440 | >2.12 × 10⁷
430 | 1.25 × 10⁷
430 | >2.12 × 10⁷
430 | >2.12 × 10⁷
420 | 3.41 × 10⁷
420 | >2.12 × 10⁷
420 | >2.12 × 10⁷
410 | 1.79 × 10⁷
410 | >2.11 × 10⁷
410 | >2.11 × 10⁷

Figure 3.7 A Wöhler diagram made from the NRIM test results in table 3.8. Specimens that did not fracture are indicated with an arrow.

The endurance limit for the steel in rotating bending is stated as 444 N/mm² in the NRIM data sheet.

3.5 Introducing the statistical models
To be able to use numerical tools for solving equations (2.7) and (2.9), the integral expression in each equation, which represents a continuously distributed volume consisting of infinitely small part volumes du, was replaced. Instead, a discretely distributed volume consisting of the finite part volumes, the elements obtained from the Abaqus analysis, was used. Abaqus stores values such as element volume and stress in data fields, structured element-wise and node-wise respectively. These values were used to compute, in the various approaches described below, a numerical summation of the probability of failure for the test specimen.

3.5.1 The Weibull method
The original method formulated by Bomas (1999) was implemented according to equation (3.1) below. The similarity with equation (2.7) is obvious: the integral sign has been replaced by a summation sign. The summation uses the values (stress and volume) for each element. In order to sum over the finite elements, a simple for-loop was used in the Python script. For each iteration of the for-loop, the values associated with a single element, $\sigma_{Mises,i}$ and $V_i$, are combined with the material characteristics $\sigma_{V_0}$ and $m$ and added to the sum in equation (3.1). $V_0$ is calculated from the test specimen geometry. We can thereby express the probability of component failure:

$P(\text{The component fails}) = 1 - \exp\left[-\frac{1}{V_0}\sum_{i=1}^{n} V_i \left(\frac{\sigma_{Mises,i}}{\sigma_{V_0}}\right)^m\right]$  (3.1)

3.5.2 The GPD method
By the same way of reasoning, we apply this to the GPD expression in equation (2.9):

$P(\text{The component fails}) = 1 - \exp\left[-\lambda \sum_{i=1}^{n} V_i \exp\left(-\frac{q^6}{\delta_0\,\sigma_{Mises,i}^6}\right)\right]$  (3.2)
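To make the connection between the loop in the post-processing script and equations (3.1) and (3.2) visible, a minimal sketch is given below. It assumes that the element volumes and element von Mises stresses have already been extracted from the Abaqus output database into plain Python lists; the variable names, the example element data and the isolated function form are only placeholders and not the thesis implementation itself.

    import math

    def failure_probability_weibull(volumes, mises, V0, sigma_V0, m):
        # Element-wise summation in the exponent of equation (3.1).
        s = 0.0
        for Vi, sig in zip(volumes, mises):
            s += Vi * (sig / sigma_V0) ** m
        return 1.0 - math.exp(-s / V0)

    def failure_probability_gpd(volumes, mises, lam, delta0, q):
        # Element-wise summation in the exponent of equation (3.2).
        s = 0.0
        for Vi, sig in zip(volumes, mises):
            s += Vi * math.exp(-(q ** 6) / (delta0 * sig ** 6))
        return 1.0 - math.exp(-lam * s)

    # Hypothetical usage with two elements (volumes in mm^3, stresses in MPa),
    # using the parameter values listed in table 4.1:
    volumes = [0.8, 1.2]
    mises = [430.0, 455.0]
    print(failure_probability_weibull(volumes, mises, 1005.3, 394.18, 20.0))
    print(failure_probability_gpd(volumes, mises, 0.1075, 1.4738, 544.83))

In the actual implementation the corresponding loop runs inside the Abaqus Python environment and writes the element-wise contributions back as a new field output; here the loop is isolated so that the link to the two equations can be read directly.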
3.5.3 Statistical methods, regional approach
An alternative way to use the statistical methods is to let them describe the probability of failure for a certain region. When implementing this approach, the elements were grouped into ten different regions based on the element stresses. The result is preferably presented as a banded fringe plot in which each band corresponds to the probability of failure of that region. Equation (3.1) is then applied for each region r of the whole structure:

$P(\text{Region } r \text{ fails}) = 1 - \exp\left[-\frac{1}{V_0}\sum_{i \in r} V_i \left(\frac{\sigma_{Mises,i}}{\sigma_{V_0}}\right)^m\right]$  (3.3)

and by the same way of reasoning the GPD method gives:

$P(\text{Region } r \text{ fails}) = 1 - \exp\left[-\lambda \sum_{i \in r} V_i \exp\left(-\frac{q^6}{\delta_0\,\sigma_{Mises,i}^6}\right)\right]$  (3.4)

This regional approach was not further investigated in this thesis.

3.6 Workflow
The workflow when calculating the probability of failure is visualized by the block scheme in figure 3.8. In this thesis the same parameters were used for all calculations. Abaqus performs a conventional analysis based on the given conditions. Then the Python script is executed; it uses the information stored in the data fields together with the given parameters and returns a new data field, and from that the calculated probability of failure. If we instead are interested in finding the endurance limit, we need to find the load, or stress, for which P = 0.5. By the definitions given in section 2, this load or stress corresponds to the fatigue limit. This is done by giving Abaqus a starting load, from which Abaqus, via the Python script, returns the calculated probability of failure. Depending on the result, the process is repeated with a new load, from which a new probability of failure is calculated. The process continues until an answer satisfactorily close to P = 0.5 is obtained; in the block scheme this is represented by the "external loop".

Figure 3.8 The principle of calculating the probability of failure.

4 Evaluation of the statistical methods

4.1 Precision of parameters obtained from the NRIM tests
As the actual values of the parameters used for modelling originate from a limited number of real-life tests, they are obviously associated with a certain degree of uncertainty. The statistical information about the test uncertainty in table 4.1 was supplied by Sara Lorén. For all of the calculations performed in section 3, the declared mean values of the parameters were used. The values for $V_0$ and $q$ were calculated from the test specimen geometry and were regarded as constants for the experiments.

Table 4.1 Parameters used for the calculations, with associated confidence intervals. The covariance is between the two estimated parameters of each method.

Method  | Parameter      | Mean value | 95% confidence interval | Variance | Covariance
Weibull | m              | 20.0078    | [6.18; 43.5]            | 88       | 119
Weibull | $\sigma_{V_0}$ | 394.1848   | [348.5; 415.1]          | 210      |
Weibull | $V_0$          | 1005.3     |                         |          |
GPD     | $\lambda$      | 0.1075     | [0.0064; 6.1]           | 0.033    | -0.14
GPD     | $\delta_0$     | 1.4738     | [0.6166; 6.8340]        | 0.62     |
GPD     | q              | 544.83     |                         |          |

4.2 Finding the endurance load – manual method
The method of finding the actual endurance limit (as described in section 3.6) was initially a manual, iterative method. By guessing an input load we get a probability of failure in return.
Depending on whether this value is larger or smaller than 0.5, we decrease or increase the load for the next iteration. As one can understand, this is a rather primitive method with no structured way of finding the next guess. In figure 4.1 the typical relationship between probability and load for the statistical models is shown.

Figure 4.1 The typical relation between applied load and the resulting probability of failure. Each cross represents one calculation.

It is obvious that this relation cannot be described by a linear or an exponential function. We can see that P ≈ 0 for small loads, up to a point where, for a limited increase in load, P increases to ≈1 and remains at ≈1 for higher loads. As we are unable to use a simple mathematical function to predict the endurance limit, a more refined method is needed.

4.3 Finding the endurance load – automated method
Searching for the endurance limit manually is a very time consuming process, and a tool is needed to replace the guess-based process. A suitable numerical method for finding the load for which P = 0.5 is the bisection method, illustrated in figure 4.2.

Figure 4.2 An illustration of the fundamentals of the bisection method.

The bisection method can be explained in a few steps. First we choose two points defining an interval in which we know that P = 0.5 is located. The first iteration consists of calculating the function value (in our case the probability) at the midpoint of the interval. This point divides the interval into two equally sized sub-intervals. Depending on whether the function value at this point is larger or smaller than 0.5, we keep the sub-interval that contains the load value where P = 0.5, as illustrated in figure 4.2. This step is repeated until the desired level of accuracy is reached. The method could obviously be implemented in a Python script interacting directly with Abaqus. As the possibility of running Abaqus was limited, an alternative approach was chosen. When Abaqus has run the calculation it stores the result as field data in a file format called ".odb" (output database). By writing a script that converts the field data to text files, the information can be read by other software without access to Abaqus. For convenience the continued evaluation was performed in Matlab. In order to run the continued analysis in Matlab we need the data fields for element volume and stress from an initial analysis. It is important to emphasize that this way of reasoning is restricted to linear elastic behaviour. At the same time we can benefit from the fact that every element stress is proportional to the applied load: if we double the load, each element stress is also doubled. This means that we do not need to run a complete analysis in order to extract the data field for the stresses; instead we can scale the initial analysis, as long as we know the corresponding applied load. A Matlab script was developed that returns the applied load for a desired level of probability of failure.
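The combination of linear scaling and bisection can be sketched as follows. In the thesis this step was done in Matlab; the sketch below uses Python and the GPD expression purely as an example, and the stress and volume lists are again placeholders for data exported from the .odb file.

    import math

    def failure_probability(load_scale, volumes, mises_ref, lam, delta0, q):
        # Linear elasticity: stresses from the reference analysis are simply
        # scaled by the ratio between the trial load and the reference load.
        s = 0.0
        for Vi, sig in zip(volumes, mises_ref):
            s += Vi * math.exp(-(q ** 6) / (delta0 * (load_scale * sig) ** 6))
        return 1.0 - math.exp(-lam * s)

    def endurance_load(load_low, load_high, ref_load, volumes, mises_ref,
                       lam, delta0, q, tol=0.01):
        # Bisection: shrink [load_low, load_high] until it brackets the load
        # giving P = 0.5 within the tolerance tol (in the same unit as the load).
        while load_high - load_low > tol:
            mid = 0.5 * (load_low + load_high)
            P = failure_probability(mid / ref_load, volumes, mises_ref,
                                    lam, delta0, q)
            if P < 0.5:
                load_low = mid    # probability too low, move the lower bound up
            else:
                load_high = mid   # probability too high, move the upper bound down
        return 0.5 * (load_low + load_high)

The initial interval must be chosen so that the probability is below 0.5 at the lower bound and above 0.5 at the upper bound, exactly as in the manual search described above.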
4.4 Finding the endurance load with the traditional approach
As mentioned in section 2.1, the intuitive way of finding the endurance load is to set the maximum stress in the structure equal to the endurance stress and count "backwards". As we have a non-complex geometry, as described in figure 3.1, we can easily use tables of stress concentration factors in order to get an estimate of the endurance load. The following figure was extracted from a technical handbook, Dahlberg (2001).

Figure 4.3 Stress concentration in tension, Dahlberg (2001).

Applying the dimensions of our geometry,
D = 20 mm
d = 10 mm
r = 5 mm
the stress concentration factor Kt is estimated from the graph in figure 4.3 to 1.4. In section 3.4 the endurance stress from the experiments is stated to 444 N/mm². By rewriting the equation in figure 4.3 we can calculate the endurance load for our test specimen:

$P_{Endurance} = \frac{444}{K_t}\cdot\frac{\pi d^2}{4} = \frac{444}{1.4}\cdot\frac{\pi\cdot 10^2}{4} \approx 24.9\ \text{kN}$  (4.1)

This calculation is made as a reference, to be compared with the results from the statistical models.

4.5 Comparison of the statistical models
In figure 4.4 the probability of failure for a certain load is presented for each of the models. A second horizontal axis at the top of the figure shows the corresponding maximum von Mises stress in the structure. The resulting interval when altering the parameters by ±1% is shown as black lines. Example: for the Weibull method we have two parameters, m and $\sigma_{V_0}$. If we let each of the two parameters adopt three values (the stated value, the stated value -1% and the stated value +1%), we get 9 possible combinations for calculating the load for a certain probability. Each of these calculations is presented as a black curve in figure 4.4, and the curve corresponding to the stated values of the parameters is coloured.

Figure 4.4 Visualization of the probability density functions with the resulting intervals when altering the parameters by ±1%.

Here we can see obvious distinctions between the models. While the Weibull method appears to give a lower endurance limit than the GPD method, the GPD method appears to be less sensitive to a variation of the parameters. The Weibull method also generates a much narrower load interval, illustrated by the steeper inclination of the curve, which corresponds to a smaller estimated variance of the fatigue limit.

4.6 Uncertainty of the statistical models
The altering of the parameters in figure 4.4 only describes the sensitivity to a 1% change of the parameters, not the actual uncertainty. In order to calculate the uncertainty we need a better approach. The confidence interval and variance of each parameter, together with the corresponding covariance between the parameters of each model, are given in table 4.1. In order to calculate the uncertainty of each method, the intuitive choice would be the Gauss approximation formula:

$\sigma_F^2 = \sum_{i=1}^{n}\left(\frac{\partial f}{\partial \theta_i}\right)^2 \sigma_{\theta_i}^2 + \sum_{i \ne j}\frac{\partial f}{\partial \theta_i}\frac{\partial f}{\partial \theta_j}\,\mathrm{Cov}(\theta_i,\theta_j)$  (4.1)

where the partial derivatives can be approximated numerically, e.g. by central differences:

$\frac{\partial f}{\partial \theta_i} \approx \frac{f(\theta_1,\ldots,\theta_i+\Delta\theta_i,\ldots,\theta_n) - f(\theta_1,\ldots,\theta_i-\Delta\theta_i,\ldots,\theta_n)}{2\,\Delta\theta_i}$  (4.2)

Calculations with this approach for the GPD method indicate an uncertainty where $\sigma_F$ is about 60% of the mean value of the distribution. In section 4.6.2 it is seen, by a Monte Carlo simulation, that this value is highly exaggerated.
Unfortunately, this method requires that each of the θ-parameters is symmetrically distributed around its "expected value", which is not the case for the parameters used here. From table 4.1 one can see that the Weibull parameters deviate noticeably from a symmetrical distribution, whilst the GPD parameters are extremely asymmetrically distributed. The Gauss approximation can therefore not be used accurately for this problem.

4.6.1 Finding the parameters with the method of maximum likelihood
In order to evaluate the uncertainty, the procedure used by Sara Lorén to calculate the material parameters needed to be reproduced. The work was performed in Matlab, and a script was written capable of calculating the material parameters for the GPD method according to Lorén (2004). This approach utilizes the method of maximizing the likelihood in order to find the most likely parameter values. The likelihood function is expressed as:

$L(\lambda,\delta_0) = \prod_{i \in B}\left[1 - \exp\left(-\lambda\,2\pi R^2 L \int_0^1 \exp\left(-\frac{q^6}{\delta_0\, s_i^6\, \xi^6}\right)\xi\, d\xi\right)\right] \cdot \prod_{j \in U}\exp\left(-\lambda\,2\pi R^2 L \int_0^1 \exp\left(-\frac{q^6}{\delta_0\, s_j^6\, \xi^6}\right)\xi\, d\xi\right)$  (4.3)

where
$\lambda, \delta_0$ = material parameters to be determined
$R, L, \xi$ = geometrical constants of the test specimen and the normalized radial coordinate
$s_{i,j}$ = nominal stress level during the test
$q$ = parameter estimated from material hardness and stress ratio during testing
$B, U$ = broken, unbroken specimens

The method computes both of the parameters at the same time and also handles the dependency between them. Combined, the parameters give the so-called likelihood. The relationship can be visualized as a 3-D diagram, figure 4.5.

Figure 4.5 Visualization of the likelihood as a function of the parameters λ and δ0. Note that the z-axis is expressed as log-likelihood.

In order to further clarify how each of the parameters relates to the likelihood, the profile likelihood method can be visualized. Each of the parameters for the GPD method, λ and δ0, is visualized in a 2-D diagram in figure 4.6 and figure 4.7 respectively. The profile likelihood can be explained as the function of one parameter which maximizes the likelihood over the values of the other parameters. The method used to determine the parameters in this thesis is illustrated by the following steps:
- Determine a suitable range for one of the parameters (the actual range might need to be re-defined in order to cover the wanted confidence interval).
- For a given value of parameter 1, maximize the likelihood over parameter 2. Register the maximized value of the log-likelihood function (equation 4.3).
- Repeat this throughout the interval for parameter 1.

The result is presented in figure 4.6. This can be regarded as the marginal distribution for the statistical uncertainty of parameter 1. In our case, the 100% distribution for δ0 reaches from a value of approximately 0.5 to infinity. In order to avoid infinity in our computations we use a smaller interval. This method was then used to make a Monte Carlo simulation, taking the dependency into account.

Figure 4.6 Visualization of the profile likelihood for parameter δ0.

Figure 4.7 Visualization of the profile likelihood for parameter λ.
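A compact sketch of the profile likelihood loop, written here in Python with SciPy rather than the Matlab used in the thesis, could look as follows. The fatigue data (stress levels and broken/unbroken flags), the specimen dimensions R and L and the value of q are placeholders that would have to be taken from the NRIM data and the actual specimen geometry; the log-likelihood follows the reconstruction of equation (4.3) above.

    import numpy as np
    from scipy.integrate import quad
    from scipy.optimize import minimize_scalar

    R, L, q = 5.0, 20.0, 544.83          # placeholder specimen radius/length (mm) and q

    def stressed_volume_term(s, delta0):
        # 2*pi*R^2*L * integral_0^1 exp(-q^6/(delta0*s^6*xi^6)) * xi dxi
        integrand = lambda xi: np.exp(-(q / (s * xi)) ** 6 / delta0) * xi
        return 2.0 * np.pi * R ** 2 * L * quad(integrand, 0.0, 1.0)[0]

    def log_likelihood(lam, delta0, stresses, broken):
        ll = 0.0
        for s, b in zip(stresses, broken):
            m = lam * stressed_volume_term(s, delta0)
            ll += np.log(1.0 - np.exp(-m)) if b else -m
        return ll

    def profile_loglik(delta0, stresses, broken):
        # For a fixed delta0, maximize the log-likelihood over lambda.
        res = minimize_scalar(
            lambda lam: -log_likelihood(lam, delta0, stresses, broken),
            bounds=(1e-6, 10.0), method='bounded')
        return -res.fun

    # Example: sweep delta0 over a grid and record the profile log-likelihood.
    stresses = [450.0, 440.0, 430.0]     # placeholder stress amplitudes (MPa)
    broken = [True, False, False]        # placeholder failure indicators
    grid = np.linspace(0.6, 6.8, 20)
    profile = [profile_loglik(d, stresses, broken) for d in grid]

Plotting the recorded values against the grid gives curves of the type shown in figures 4.6 and 4.7.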
4.6.2 Determination of the uncertainty by using Monte Carlo simulation
This method can be summarized in the following steps:
- Randomly choose a set of values (parameters) from a known distribution.
- Use these values to make deterministic calculations.
- Create a distribution function from the result.

In our case the parameters consist of λ and δ0. As mentioned in section 4.6, the parameters are unfortunately asymmetrically distributed as well as dependent. The benefit of the Monte Carlo method is that if we can find a set of parameters fulfilling these demands, we will, after a large number of computations, be able to create a representative portrait of the true distribution function. On the downside, as one may expect, this is a very costly method in terms of computational effort. Each calculation, from parameters to resulting value, needs to be repeated typically 10³ times in order to form an acceptably shaped distribution function. As an experiment, the Gaussian distribution function was created using the Monte Carlo method in order to estimate how many calculations would be suitable, figure 4.8.

Figure 4.8 Influence of the number of calculations, n, when creating a Gaussian distribution (panels for n = 100, n = 1000 and n = 10000).

A suitable number of calculations for the further work was hereby determined to 10 000. In order to implement the Monte Carlo method with two dependent and asymmetrically distributed parameters, the process described in figure 4.9 was applied.

Figure 4.9 Monte Carlo process for creating the distribution function.

In the work earlier performed by Lorén (2004), the 95% confidence interval for the parameters is also stated. We will use this to create the corresponding 95% distribution for $F_{P=0.5}$. The parameter δ0 was chosen as our main parameter and is presented together with its 90, 95 and 99% confidence intervals in figure 4.10. In order to turn the profile likelihood result into a marginal statistical distribution, it must be normalized so that the total probability equals unity. This demands a very large effort since the distribution has a long right tail. The problem is solved here by assuming that the essential part of the distribution is contained up to 95% probability. We can then construct the truncated cumulative distribution function presented in figure 4.11.

Figure 4.10 Parameter δ0 with corresponding confidence intervals.

The Monte Carlo method was implemented in Matlab. Initially we randomize a set of 10⁴ uniformly distributed values in the range 0 to 0.95 (due to the chosen confidence interval). These values are then translated into 10⁴ new values, utilizing the relation visualized in figure 4.11, to form the distribution of δ0.

Figure 4.11 The relation between a uniformly distributed parameter and δ0.

The next step is to find the corresponding values for λ. Here we use the maximum likelihood theory to calculate the profile likelihood for λ for each of the randomized values of δ0. From this distribution for λ we randomize a value, by the same principle as for δ0, in order to satisfy the demand of dependency. We then get 10⁴ parameter pairs, visualized in figure 4.12 where each * marker represents a pair of sampled parameters.

Figure 4.12 The set of 10⁴ parameter pairs visualized in a 2-D diagram.

The dependency between the parameters is clearly visible.
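The translation from uniformly distributed numbers to δ0 values (the relation in figure 4.11) is an inverse-CDF lookup, which can be sketched as below. The tabulated CDF is only a placeholder standing in for the real normalized, truncated profile likelihood, and the same construction is then repeated for λ given each sampled δ0.

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder tabulated, truncated CDF for delta0 (grid and cumulative probabilities).
    delta0_grid = np.linspace(0.6, 6.8, 50)
    cdf_values = np.linspace(0.0, 0.95, 50)   # stand-in for the real normalized CDF

    # Inverse-transform sampling: uniform numbers in [0, 0.95] mapped through the CDF.
    u = rng.uniform(0.0, 0.95, size=10000)
    delta0_samples = np.interp(u, cdf_values, delta0_grid)
    # For each delta0 sample, a lambda value is drawn in the same way from the
    # conditional profile-likelihood distribution, giving the parameter pairs of figure 4.12.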
The final step is to run the script for each pair of parameters and thereby build the resulting distribution for the endurance limit. The result is visualized as a frequency function in figure 4.13 and as a distribution function in figure 4.14. One can see that the shape of the resulting function resembles a Gaussian distribution but is distinctly asymmetric.

Figure 4.13 Endurance limit ($F_{P=0.5}$) frequency distribution.

Figure 4.14 Endurance limit ($F_{P=0.5}$) distribution function.

The parameters obtained from the method of maximum likelihood gave an endurance limit of 33.4 kN. The corresponding 95% confidence interval is [30, 48] kN. Note that the endurance limit in this study differs from the earlier experiments. This is due to a slightly different analysis setup, where it was necessary to use parameters from the rotating bending test in order to reach convergence in the computer analysis; one of the parameters for the uniaxial tests reached towards infinity at the upper 95% limit. The confidence interval may then be better presented by means of a relative measure, for example 100% (+44%, -7%). Finally, the calculated endurance limit is visualized in a 3-D plot as a function of the parameters λ and δ0, figure 4.15. The value calculated by the method of maximum likelihood is represented by the red marker.

Figure 4.15 The relation between the endurance limit and the parameters λ and δ0.

5 Results

5.1 Comparison of the models
The statistical models, Weibull in equation (2.7) and GPD in equation (2.9), each describe the probability of failure at a certain load. As can be seen from figure 4.4, the models' predictions of failure deviate from each other. According to the Weibull method the endurance load is smaller than according to the GPD method, and it also has a smaller variance. The Weibull method generates an endurance load level of approximately 75% of that of the GPD method. There is also a difference in how the models predict the variance of failure for a variation of the load level. This sensitivity can be interpreted as the slope of the curves rising from 0 to 1 in figure 4.1. In order to quantify this sensitivity, the load interval over which the probability increases from 0.1 to 0.9 was chosen as a measure. The result is presented in table 5.1 and is also expressed as a percentage relative to the endurance load. For comparison, the fatigue load originating from the "traditional approach", based on the stress concentration factor and the endurance stress, is also presented. This load level turned out to be distinctly lower than those from the statistically based methods.

Table 5.1 Predicted endurance load with corresponding sensitivity, uniaxial loading.

Method      | Endurance load P=0.5, kN (MPa) | Load P=0.1, kN (MPa) | Load P=0.9, kN (MPa)
Weibull     | 30.1 (476)                     | 28.3 (448)           | 31.3 (495)
GPD         | 31.0 (490)                     | 28.3 (448)           | 33.7 (533)
Traditional | 24.9 (444, 394)                | -                    | -

In terms of data processing effort the statistical models are very similar, and there is no obvious benefit of choosing one over the other.

5.2 Uncertainty of the model itself
Expressed in terms of percentages, the endurance limit for the evaluated GPD method in our study was 100% (+44%, -7%).
It must be considered valuable to know not only the calculated endurance limit but also its corresponding uncertainty. When facing a dimensioning problem as an engineer, awareness of the accuracy of the chosen method makes it possible to establish well-founded safety factors, already in the design stage of the product development cycle.

6 Concluding remarks and recommendations for future work

6.1 Summary of results
The two statistical methods (Weibull and GPD) have been implemented in commercial software (Abaqus) by means of Python scripting. Each method is fully usable for finding the probability of failure for an arbitrary structure. One should keep in mind the limitations of these models: they are only suitable for high cycle fatigue of linear elastic materials. When both models are applied to the same test specimen under identical conditions, apart from the models themselves, the calculated results differ. The results differ both in the calculated probability and in the size of the load range over which the distribution function rises from 0 to 1. Further experiments indicate that, with regard to the uncertainty originating from the material parameters, the difference between the models is not significant. A structured method for finding the endurance limit, P = 0.5, for a structure has been developed.

6.2 Comparison project
Obviously it would be very interesting to perform further verifying tests in order to distinguish between the models, not only to compare the actual mean value of the load at rupture but also the actual distribution of the results. Would the theoretical approach also be able to describe the spread in the results, as this is of importance for the use of these models in the field? How will the Python code implemented in Abaqus work for truly complex structures? If the computational effort is assessed to be too extensive in these cases, perhaps an alternative to the Python scripting approach needs to be considered.

6.3 Develop a tool for calculating the endurance limit
As the validity of these models is assumed to be restricted to linear elastic models, we can benefit from this behaviour. Linear elasticity means that the stress in each element is directly proportional to the total applied load: if we double the load, we double the stress in each element. This means that we do not need to rerun the Abaqus analysis for each iteration. As Abaqus gives us data fields with the stress of each element, we can simply multiply these values by the factor of change in global load, thereby creating a new data field to be used for a new iteration. This simple operation takes significantly less effort than a complete rerun in Abaqus. In this thesis the work to find the endurance limit utilized a simple bisection method. This method converges linearly and is thereby quite slow. Other numerical methods, like Newton-Raphson, converge quadratically and might speed up the process. However, by observing the relation between the probability of failure and the applied load described in figure 4.1, one can see obvious problems in using Newton-Raphson for the initial iterations. As the Newton-Raphson method uses the tangent of the function to make the estimate for the next iteration, we will encounter problems outside the P = 0.5 region.
Outside this region, where P ≈ 0 or P ≈ 1, the derivative is close to zero and the estimate for the next iteration will be close to positive or negative infinity. This means that the two numerical methods should be combined in order to compose an effective tool for calculating the endurance limit.

6.4 Further investigation and improvement of the regional approach
For more complex structures than a test rod it might be interesting to further investigate how the regional approach described in section 3.5.3 could be used. When a component is optimized with regard to weight and material usage, the consequence is often that the maximum local stresses in several regions are of the same magnitude. This means that there are several candidate regions where failure could occur. A large region might be subjected to a large stress, but not the maximum stress in the structure, and can still be the most likely region to cause a failure, because the larger volume increases the probability of containing an inclusion from which a crack can grow and cause failure. For these instances it would be interesting to divide the structure into sections, each with its corresponding probability of failure for a certain loading condition. This might, for some analyses, result in a better understanding of the actual function of the component. Developing this method could also generate a tool making it possible to map different failure scenarios with their related probability of occurrence.

7 References
National Research Institute for Metals, NRIM (1978): Data sheets on fatigue properties of S55C (0.55C) steel for machine structural use, No. 4.
Bomas H., Mayr P. and Linkewitz T. (1999): Inclusion size distribution and endurance limit of a hard steel. ASCE Journal of Structural Engineering, Vol. 126, No. 2, July 1999, pp. 149-164.
Yates J. R., Shi G., Atkinson H. V., Sellars C. M. and Anderson C. W. (2002): Fatigue tolerant design of steel components based on the size of large inclusions. Fatigue and Fracture of Engineering Materials & Structures, Vol. 25, 2002, pp. 667-676.
Weibull W. (1959): Zur Abhängigkeit der Festigkeit von der Probengrösse. Ingenieur-Archiv 28, pp. 360-362.
Dahlberg T. (2001): Teknisk Hållfasthetslära. Studentlitteratur, Lund, Sweden. ISBN 91-44-01920-3.
Norberg S. (2006): Prediction of the Fatigue Limit – Accuracy and Post-Processing Methods. Lic. Thesis, Department of Solid Mechanics, KTH – Royal Institute of Technology, Publication 97:2006, Stockholm, Sweden.
Lorén S. (2004): Estimating fatigue limit distributions under inhomogeneous stress conditions. International Journal of Fatigue, No. 26, 2004, pp. 1197-1205.