Uncertainties
Construction projects are complex technical systems that face significant uncertainties throughout their execution and use. While some uncertainties can never be completely eliminated, they must be considered during the design and verification phases. The impact of these uncertainties varies with the nature of the structure, the environmental conditions, and the applied actions, and certain types can become critical. Identifying these uncertainties is essential for successful construction management. The most significant of them include:
– natural randomness of actions, material properties and geometric data;
– statistical uncertainties due to limited available data;
– uncertainties of theoretical models owing to the simplification of actual conditions;
– vagueness due to inaccurate definitions of performance requirements;
– gross errors in design, execution and operation of the structure;
– lack of knowledge of the behaviour of new materials in real conditions.
The uncertainties presented are arranged in order of diminishing knowledge and the theoretical tools available for their analysis and consideration in design.
The inherent randomness and statistical uncertainties can be effectively addressed using probability theory and mathematical statistics, as outlined in the Eurocodes and International Standards. However, the lack of reliable experimental data, particularly concerning new materials and environmental influences, poses significant challenges. Additionally, the available data are often inconsistent and derived from varying conditions, such as material properties and internal dimensions of reinforced concrete cross-sections, making analysis and application in design difficult, if not impossible.
The uncertainties inherent in theoretical models can be evaluated through both theoretical and experimental research, with guidance provided by established standards. The ambiguity arising from imprecise definitions, particularly regarding serviceability and other performance requirements, can be partially addressed using the theory of fuzzy sets. However, these methods have so far had limited practical application, owing to the scarcity of suitable experimental data. Nevertheless, advancements in theoretical tools and experimental research are expected to enhance our understanding of the behaviour of new materials and structures over time.
The absence of adequate theoretical tools is most evident for gross errors and knowledge gaps, which frequently lead to structural failures. To mitigate these human-induced gross errors, a quality management system employing statistical inspection and control methods can be highly effective.
Various design methods and operational techniques have been developed worldwide to mitigate the negative effects of uncertainties throughout a structure's lifespan. Concurrently, the theory of structural reliability has emerged to analyse these uncertainties rationally and to incorporate them into the design and verification of structural performance. This theory was largely driven by the need to address observed deficiencies and failures of structures due to various causes. However, the term "reliability" is frequently applied in a broad context, warranting further clarification.
Definition of reliability
Reliability is often misunderstood and oversimplified, typically viewed in absolute terms where a structure is either deemed reliable or not. This perspective incorrectly suggests that a reliable structure will never fail, neglecting the reality that failures can still occur even in reliable systems. A more accurate interpretation acknowledges the possibility of failures, focusing on their probability or frequency rather than on an absolute certainty of reliability. The simplified view implies that there exists a defined set of structural conditions creating a zone of "absolute reliability", beyond which failures may be expected.
Such a simplified view of structural reliability is misleading, as "absolute reliability" is unattainable for most structures. It is crucial to recognize, especially during the design phase, that there is always a small probability of failure within the structure's intended lifespan. This acknowledgment is essential for the effective design of civil structures. Consequently, understanding the term "reliability" and the phrase "the structure is safe" requires a nuanced interpretation that accepts the inherent uncertainties in engineering.
In structural design a number of similar definitions of the term reliability, or their interpretations, are used in the literature and in national and international documents. ISO 2394 [2] defines reliability as the ability of a structure to meet specified requirements under designated conditions throughout its intended lifespan, a definition aligned with both national and international standards.
The Eurocode [1] does not provide an explicit definition of reliability, but emphasizes that it encompasses the load-bearing capacity, serviceability, and durability of structures. According to the fundamental requirements, a structure must be designed and constructed to ensure appropriate reliability throughout its intended lifespan while also being economically viable; in particular, it shall:
– remain fit for the use for which it is required; and
– sustain all actions and influences likely to occur during execution and use.
Different levels of reliability for load-bearing capacity and serviceability can be accepted, as outlined in document [1], which relates the probability of failure p_f and the reliability index β to the consequences of failure.
Note that the above definition of reliability includes four important elements:
- given (performance) requirements – the definition of structural failure;
- time period – the assessment of the required service-life T;
- reliability level – the assessment of the probability of failure p_f; and
- conditions of use – limiting the input uncertainties.
When establishing conditions of use for a structure, it is essential to limit input uncertainties, particularly regarding stability and potential collapse. While defining these terms is relatively straightforward, addressing occupant comfort and environmental characteristics often introduces vagueness and inaccuracy. Translating occupant requirements into precise technical criteria can be challenging, resulting in ambiguous conditions that complicate the design process.
In this article, the term "failure" refers broadly to any unfavorable condition of a structure, such as collapse or excessive deformation, that is clearly defined by specific structural criteria.
Historical development of design methods
During their development, design methods addressing uncertainties and structural reliability have been closely tied to empirical, experimental, and theoretical knowledge of mechanics and the theory of probability. In the twentieth century, various empirical methods for structural design became established, leading to three widely used approaches that continue to influence current structural design standards. To streamline computational procedures, these methods are occasionally modified or updated. It is therefore useful to outline these three fundamental design methods and to highlight the specific measures that affect the probability of failure and enhance structural reliability.
The first universally accepted design method for civil structures is the method of permissible stresses. It is based on the condition

σ_max ≤ σ_per, where σ_per = σ_crit / k (1.1)

in which the coefficient k accounts for the uncertainties in the local load effect σ_max and in the structural resistance, thereby providing structural reliability. The method's primary limitation lies in its restriction to local reliability verification within the elastic range and in its inability to treat separately the uncertainties of the basic quantities and of the computational models used to assess action effects and structural resistance. Consequently, the probability of failure is governed solely by the coefficient k.
The global safety factor method is a widely accepted structural design approach requiring that the calculated safety factor s exceed a specified value s_0:

s = X_resist / X_act > s_0 (1.2)

This method aims to represent the behaviour of structural elements and their cross-sections more accurately by evaluating the overall structural resistance X_resist against the action effect X_act. However, like the permissible stresses method, it cannot account separately for the uncertainties of the basic quantities and theoretical models. Consequently, the probability of failure is controlled solely by the global safety factor s.
The partial factor format, often imprecisely called the limit states method, is the most advanced approach in structural design today. This method verifies the condition

E_d ≤ R_d (1.3)

where the design action effect E_d and the design structural resistance R_d are evaluated from the design values of the basic quantities: actions F_d, material properties f_d, dimensions a_d, and model uncertainties θ_d. These design values are derived from the characteristic values (F_k, f_k, a_k, θ_k) using partial safety factors, combination factors, and other reliability measures. Consequently, the whole system of partial factors and reliability elements manages the probability of structural failure.
The partial factor method provides the best opportunity for harmonizing the structural reliability of various structures made from different materials. However, none of the existing methods works directly with the probability of failure. Notably, the recent ISO standard is the first to incorporate probabilistic methods into structural design.
Probabilistic design methods [2] are based on the condition that the probability of failure p_f does not exceed a specified target value p_t during the service life T of the structure:

p_f ≤ p_t (1.4)
The probability of failure p_f can be assessed using a computational model that incorporates the basic variables, including actions, mechanical properties, and geometric data. The limit state of a structure is characterized by the limit state function g(X), which indicates the structure's performance. A non-negative limit state function, g(X) ≥ 0, signifies a safe condition, while a negative value, g(X) < 0, indicates structural failure. For a more comprehensive treatment, refer to Chapter 5.
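As a brief illustration of condition (1.4) and the limit state function g(X) — not part of any code text — the probability of failure can be estimated by crude Monte Carlo sampling. The two-variable limit state g = R − E and the normal distribution parameters below are assumptions chosen purely for demonstration:

```python
import numpy as np
from statistics import NormalDist

# Illustrative two-variable limit state g = R - E with assumed normal
# distributions (all parameters chosen purely for demonstration).
rng = np.random.default_rng(0)
n = 1_000_000
R = rng.normal(loc=100.0, scale=10.0, size=n)  # structural resistance
E = rng.normal(loc=50.0, scale=10.0, size=n)   # action effect

g = R - E                  # limit state function g(X)
pf_mc = np.mean(g < 0)     # Monte Carlo estimate of P{g(X) < 0}

# For two independent normal variables the exact result follows from
# the reliability index beta = mu_g / sigma_g:
beta = (100.0 - 50.0) / (10.0**2 + 10.0**2) ** 0.5
pf_exact = NormalDist().cdf(-beta)
print(f"beta = {beta:.2f}, pf (MC) = {pf_mc:.1e}, pf (exact) = {pf_exact:.1e}")
```

For two independent normal variables the sampling estimate can be checked against the exact result Φ(−β), which the sketch also computes.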
Basic quantities often exhibit time-dependent (stochastic) behaviour; however, it is usually adequate to represent them by time-independent models derived from analysing the extreme values (maximum or minimum) of the relevant quantity (action or resistance) during the specified design life T.
For most ultimate limit states and serviceability limit states the probability of failure can be expressed by the equation

p_f = P{g(X) < 0} (1.6)
When dealing with time-dependent quantities in structural reliability analysis, more complex procedures are required; theoretical models for time-dependent actions are discussed in Chapter 6. Nevertheless, many problems can be reduced to a time-independent format by evaluating the minimum of the function g(X) over the time period T, as in equation (1.6).
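The reduction of a time-dependent problem to a time-independent one, as described above, can be sketched as follows; the Gumbel model of annual maxima, the 50-year design life, and all parameter values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical time-dependent action effect: annual maxima over a
# 50-year design life (Gumbel-distributed, assumed parameters).
T = 50
annual_max_E = rng.gumbel(loc=50.0, scale=5.0, size=(100_000, T))
R = 100.0  # resistance, taken here as time-invariant

# g(X, t) = R - E(t); taking the minimum of g over T is equivalent to
# checking R against the lifetime maximum of E.
g_min = R - annual_max_E.max(axis=1)
pf = np.mean(g_min < 0)
print(f"estimated pf over {T} years: {pf:.1e}")
```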
The evaluation of reliability measures, including characteristic values and partial factors, in the new structural design standards is influenced by both probabilistic considerations and historical experience. In Eurocode 1, the selection of these measures aims to simplify calculations for practical design, but this can sometimes result in oversimplification, ultimately leading to increased material consumption.
The challenge of harmonizing new design codes for various structures lies in balancing general reliability principles with the current trend of simplifying computational procedures, which may undermine the benefits of the partial factors method. While material consumption is a key evaluation criterion, it is not the sole focus; factors such as design and construction complexity, maintenance, service life, insurance, material recycling, and adaptability to changing occupancy must also be considered. A comprehensive analysis of these criteria will likely drive future investigations and optimization studies.
At present, enhancing the current methods hinges on calibration procedures, optimization techniques, and rational approaches that incorporate probability theory, mathematical statistics, and reliability theory. Central to these processes is the probability of failure p_f, which, despite its limited informative capacity, serves as a fundamental measure of structural reliability. Reliability theory offers essential tools for the continuous improvement and harmonization of design across various structures and materials, while also enabling the application of general methodologies to new structures and materials.
Annex A of this chapter presents a straightforward example of a reinforced concrete slab designed using the aforementioned techniques, which will also serve as a reference in subsequent chapters to demonstrate the implementation of more advanced probabilistic methods.
Design working life and design situation
The design working life refers to the duration a structure or its components are expected to function for their intended purpose with regular maintenance, avoiding the need for major repairs. Table 1.1, taken from EN 1990 [1], outlines various categories of construction works along with their anticipated design working lives.
Table 1.1 Indicative design working life

Category | Notional design working life (years) | Examples
1 | 10 | Temporary structures
2 | 10–25 | Replaceable structural parts, e.g. gantry girders, bearings (see appropriate standards)
3 | 15–30 | Agricultural and similar structures (e.g. buildings for animals where people do not normally enter)
4 | 50 | Building structures and other common structures (e.g. hospitals, schools)
5 | 100 | Monumental building structures, bridges and other civil engineering structures (e.g. churches)
Current knowledge is inadequate for accurately predicting the lifespan of structures, as the behaviour of materials over time can only be estimated. However, it is possible to assess expected maintenance periods and replacement timelines for various structural components. Key tasks include considering material deterioration, such as fatigue and creep, during reliability verification; comparing different design solutions and materials to balance initial costs with life-cycle costs; and developing management procedures and strategies for the systematic maintenance and renovation of structures.
When designing a structure, it is essential to account for the variations in actions, environmental influences, and structural properties that occur over its lifespan. This involves selecting specific design situations that represent different time intervals and the associated hazards.
Four design situations are classified in EN 1990 [1] as follows:
(a) Persistent situations encompass the typical conditions under which a structure is used, usually linked to its design working life. They may involve extreme loading by wind, snow, and other imposed loads that the structure must withstand.
(b) Transient situations are temporary conditions affecting a structure's use or exposure, particularly during construction or repair. They typically involve a time frame significantly shorter than the structure's design working life, often taken as about one year.
(c) Accidental situations involve exceptional conditions affecting a structure or its exposure, such as fire, explosion, impact, or local failure. These events are typically of short duration, although they can pose a lasting risk when local failures go unnoticed.
(d) Seismic situations refer to exceptional conditions applicable to the structure when subjected to seismic events
When selecting design situations, it is essential to account for all reasonably foreseeable conditions that may arise during the structure's execution and use. For instance, if a structure experiences an accidental design situation, such as fire or impact, it may require repairs within a short timeframe of approximately one year, necessitating the consideration of transient design situations. During this period, a lower reliability level and reduced partial factors compared with those used for persistent design situations may be appropriate. Nonetheless, it is crucial that the repair design incorporates all other foreseeable design situations.
Limit states
Limit states are critical classifications that determine the performance of a structure, distinguishing between satisfactory (safe and serviceable) and unsatisfactory (failed and unserviceable) conditions. These states define the thresholds beyond which a structure no longer meets its design criteria, with each limit state linked to specific performance requirements. However, these requirements are frequently not articulated clearly, making it challenging to establish precise definitions for the corresponding limit states.
Expressing performance requirements qualitatively and defining limit states can be challenging, especially for structures made of ductile materials. This complexity is particularly evident in determining ultimate limit states and serviceability limit states, which are crucial for users, as illustrated in Figure 1.1. Understanding these uncertainties is essential for effectively applying the limit state concept in structural design.
The traditional sharp concept of limit states posits that a structure is satisfactory up to a specific value of the load effect, E0, beyond which it is unsatisfactory. However, accurately defining this threshold can be difficult, rendering the simplistic approach inadequate. Instead, a transition region, within which a structure gradually loses its satisfactory performance, offers a more realistic representation. This vague concept of limit states introduces uncertainties that can only be addressed through reliability analyses using advanced mathematical techniques, which are not included in the current Eurocodes.
Figure 1.1 Sharp and vague definition of a limit state
In order to simplify the design procedure two fundamentally different types of limit states are generally recognised:
Ultimate limit states relate to structural failure, while serviceability limit states pertain to normal usage conditions such as deflections, vibrations, and cracks. Effective design must address both safety and serviceability, ensuring durability in each case. The fundamental differences between ultimate and serviceability limit states are crucial for accurate reliability verification, highlighting the importance of distinguishing between these two categories.
Vague infringements of serviceability limit states typically do not result in severe consequences for the structure, allowing for its continued use once the actions causing the infringement are addressed.
The ultimate limit states criteria focus solely on structural parameters and relevant actions, whereas serviceability limit states criteria also consider client and user requirements, which can be subjective, as well as the characteristics of installed equipment or non-structural elements.
The distinction between ultimate limit states and serviceability limit states leads to separate reliability conditions and varying reliability levels in their verification. However, if adequate information confirms that the requirements of one limit state are satisfied by the verification of another, the former verification may be waived. For instance, in reinforced concrete beams designed for ultimate limit states, deflection verification can be omitted if the span-to-effective-depth ratio is less than 18 for highly stressed concrete or less than 25 for lightly stressed concrete.
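The deflection waiver quoted above can be expressed as a short check; the function name and slab data are illustrative, and the limits 18 and 25 are those stated in the text:

```python
def deflection_check_waived(span_m, eff_depth_m, highly_stressed):
    """True if explicit deflection verification may be omitted, using the
    span-to-effective-depth limits quoted in the text (18 for highly
    stressed, 25 for lightly stressed concrete)."""
    limit = 18.0 if highly_stressed else 25.0
    return span_m / eff_depth_m < limit

# Illustrative slab: span 6 m, effective depth 0.17 m
ratio = 6.0 / 0.17  # about 35.3 -- exceeds both limits
print(ratio, deflection_check_waived(6.0, 0.17, highly_stressed=False))
```

For a 6 m span with 0.17 m effective depth the ratio is about 35, so the waiver does not apply and deflection must be verified explicitly.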
In structural design, it is essential to account for variations in actions, environmental influences, and structural properties throughout the life of the structure. This involves selecting distinct situations — persistent, transient, accidental, and seismic — representing specific time intervals and the associated hazards. Both ultimate and serviceability limit states must be considered in these design situations, so as to cover all reasonably foreseeable conditions during the execution and use of the structure. Additionally, various realistic arrangements within each load case should be assumed, in order to determine the envelope of action effects for effective design.
When designing structures, it is essential to consider time-variant effects, which are influenced by both action and resistance variables. The reliability verification of a structure must be related to its design working life. It is also important to acknowledge that many time-dependent effects, such as fatigue, have a cumulative nature that needs to be addressed in the design process.
Ultimate limit states
Ultimate limit states relate to structural collapse and failure, directly impacting both structural safety and public safety. In certain instances, these limit states also address the protection of sensitive contents, such as chemicals or nuclear waste materials.
In structural engineering, the first occurrence of a limit state often signifies failure, particularly where excessive deformations play a critical role. For simplification, states prior to structural collapse can themselves be regarded as ultimate limit states. It is essential to consider these factors when determining the reliability parameters for structural design and quality assurance. For instance, in the design of foundations for rotating machinery in power plants, excessive deformation is a key consideration that dictates the overall design approach.
The following list provides the most typical ultimate limit states that may require consideration in the design:
(a) loss of equilibrium of the structure or any part of it, considered as a rigid body;
(b) failure of the structure or part of it due to rupture, fatigue or excessive deformation;
(c) instability of the structure or one of its parts;
(d) transformation of the structure or any part of it into a mechanism.
Deterioration diminishes the structure's strength and may itself trigger ultimate limit states. It is essential to differentiate between two types of structures: damage-tolerant structures, which are robust, and damage-intolerant structures, which are sensitive to minor disturbances or construction imperfections. When assessing the impact of various deterioration mechanisms on ultimate limit states, the specific type of structure involved must be considered.
Ensuring an adequate reliability level for damage-tolerant structures requires an effective quality control programme. In such structures, fatigue damage may be considered a serviceability limit state, highlighting the importance of monitoring performance. It is essential to recognize that different partial factors may apply to the various ultimate limit states, emphasizing the need for tailored approaches in structural assessment.
Serviceability limit states
Serviceability limit states relate to the normal usage conditions of a structure, focusing on its operational functionality, user comfort, and overall aesthetic appeal.
Taking into account the time-dependency of load effects it is useful to distinguish two types of serviceability limit states which are illustrated in Figure 1.2
Irreversible serviceability limit states refer to conditions that remain permanently exceeded, even after the actions causing the infringement have been removed. Examples include permanent local damage and unacceptable deformations that cannot be rectified.
Reversible serviceability limit states are conditions that are rectified once the actions causing the infringement are removed. Examples include cracks in prestressed components, temporary deflections, and excessive vibrations, all of which return within acceptable limits when the triggering factors are addressed.
Figure 1.2 Irreversible and reversible limit states
Irreversible limit states share design criteria with ultimate limit states, the first passage being decisive for assessment. This aspect must be considered when establishing serviceability requirements in contracts or design documents. In contrast, for reversible limit states the first infringement does not equate to failure or loss of serviceability. Serviceability requirements can vary based on the acceptance of infringements, their frequency, and their duration. Typically, three types of serviceability limit states are recognized:
(a) no infringement is accepted;
(b) specified duration and frequency of infringements are accepted; and
(c) specified long-term infringement is accepted
Serviceability criteria are linked to the characteristic, frequent, and quasi-permanent values of variable actions. Typically, the verification of serviceability limit states for the various design situations involves the following combinations of actions, corresponding to the three types of limit states:
(a) the rare (characteristic) combination if no infringement is accepted;
(b) the frequent combination if the specified time period and frequency of infringements are accepted; and
(c) the quasi-permanent combination if the specified long-term infringement is accepted.
The serviceability limit states that influence the appearance and effective use of the structure, and which must be carefully evaluated, include:
(a) excessive deformation, displacement, sagging and inclination, which can affect the appearance of the structure, the comfort of users and the functioning of the structure, and can cause damage to finishes and non-structural members;
(b) excessive vibration, which can cause discomfort to people or limit the functional effectiveness of the structure; and
(c) damage that is likely to adversely affect the appearance (local damage and cracking), durability or functioning of the structure
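The three combinations of actions listed earlier differ only in the representative value assigned to each variable action. A minimal sketch, assuming the EN 1990 recommended ψ factors for imposed loads in office areas (category B) — an assumption to be checked against the applicable National Annex:

```python
# Representative values of a variable action under the three combinations
# (characteristic, frequent, quasi-permanent). The psi factors below are
# the EN 1990 recommended values for imposed loads in office areas
# (category B); they are an assumption for illustration only.
Q_k = 3.0                      # characteristic imposed load, kN/m2
psi_0, psi_1, psi_2 = 0.7, 0.5, 0.3

characteristic = Q_k           # rare (characteristic) combination
frequent = psi_1 * Q_k         # frequent combination
quasi_permanent = psi_2 * Q_k  # quasi-permanent combination

print(f"characteristic = {characteristic:.1f} kN/m2, "
      f"frequent = {frequent:.1f} kN/m2, "
      f"quasi-permanent = {quasi_permanent:.1f} kN/m2")
```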
Different types of structures may have specific serviceability limit state requirements outlined in material-oriented codes. For instance, in concrete structures, structural deformation can even lead to ultimate limit states.
Reliability differentiation
For the purpose of reliability differentiation, EN 1990 [1] establishes reliability classes RC (also called consequence classes CC). Three classes are defined according to the consequences of failure or malfunction of the structure, as follows:
RC3 – high consequence for loss of human life, or very great economic, social or environmental consequences;
RC2 – medium consequence for loss of human life, or considerable economic, social or environmental consequences;
RC1 – low consequence for loss of human life, and small or negligible economic, social or environmental consequences.
Table 1.2 gives the recommended minimum values of the reliability index β associated with the reliability classes (RC) for ultimate limit states, for fatigue, and for serviceability limit states, as indicated in EN 1990 [1].
Table 1.2 Reliability classes and recommended minimum values of the reliability index β (50-year reference period)

Reliability class | Ultimate limit states | Fatigue | Serviceability (irreversible)
RC3 | 4.3 | – | –
RC2 | 3.8 | 1.5 to 3.8 | 1.5
RC1 | 3.3 | – | –
Reliability class RC2 is generally applied to residential and office buildings, with a β value of 3.8 for a 50-year reference period. However, designs based on the Eurocodes with the recommended partial factors may result in a β value different from those specified in Table 1.2. It is important to understand that the probability of failure and the associated index β are merely notional and do not accurately reflect actual failure rates, which are primarily influenced by human error. These values serve as operational benchmarks for calibrating codes and comparing the reliability levels of various structures.
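The correspondence between the notional reliability index β and the notional probability of failure is p_f = Φ(−β), where Φ is the standard normal distribution function. A minimal sketch, applied to β values quoted in this section:

```python
from statistics import NormalDist

def beta_to_pf(beta):
    """Notional probability of failure corresponding to a reliability
    index beta, via p_f = Phi(-beta)."""
    return NormalDist().cdf(-beta)

# Beta values quoted in this section and their notional failure probabilities
for beta in (3.1, 3.8, 4.3):
    print(f"beta = {beta}: pf = {beta_to_pf(beta):.1e}")
```

For β = 3.8 this gives a notional p_f of about 7.2×10⁻⁵.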
Note that a slightly different Table 1.3 is provided in ISO 2394 [2].
Table 1.3 relates the target reliability index β to the relative costs of safety measures and to the consequences of failure (small, some, moderate, great).
It is further suggested to use:
A: for serviceability limit states, β = 0 for reversible and β = 1.5 for irreversible states;
B: for fatigue limit states, β = 2.3 to β = 3.1, depending on the possibility of inspection;
C: for ultimate limit states, the safety classes β = 3.1, β = 3.8 and β = 4.3.
Thus ISO 2394 [2] recommends similar target β values for the ultimate limit states (the shaded part of Table 1.3) as those indicated in EN 1990 [1].
The values presented in Table 1.3 are based on the assumption of log-normal or Weibull distributions for resistance, a normal distribution for permanent loads, and a Gumbel distribution for variable loads. It is important to note that these theoretical models, or similar ones, should be used in any probabilistic analysis.
The historical design methods can be effectively demonstrated through the example of a simple reinforced concrete slab in an office building. This illustration highlights the benefits of the reliability-based partial factor method and underscores the importance of reliability theory in structural design when compared with the alternative design approaches.
A simply supported slab with a span of 6 meters is subjected to a permanent load, including its self-weight and other fixed components of the building, estimated at a characteristic value of 7 kN/m² according to EN 1990 standards.
In an office area, the characteristic imposed load is typically valued at q k = 3 kN/m², although the average load is significantly lower, approximately 0.8 kN/m².
Further, concrete C20/25 with the characteristic strength f_ck = 20 MPa (mean 30 MPa) and reinforcement bars with the characteristic strength f_yk = 500 MPa (mean 560 MPa) are to be used. Based on previous experience, the height of the slab is specified as 0.2 m. Using these preliminary data, an estimate of the required reinforcement area of the slab is to be made.
To streamline the computational procedures, it is assumed that the stress distribution in the slab, as depicted in Figure A.1, is the same for all the design methods considered. While design methods using permissible stresses in fact assume a linear stress distribution in the compression zone of the slab, the widely accepted rectangular stress distribution shown in Figure A.1 is an acceptable approximation for illustrating the key features of these design methods.
The key variables in this context include: d, representing the effective depth; x, indicating the depth of the compression zone; b, the slab width, assumed to be 1 meter; A s, the area of reinforcement; f c, the strength of the concrete; and f y, the yield strength of the reinforcement.
Here M = (g + q)L²/8 denotes the bending moment due to the permanent and imposed loads g and q. Using conditions (A.1) and (A.2) the engineer derived the following formula for the area A_s:

A_s = (b d f_c / f_y) [1 − √(1 − 2M / (b d² f_c))] (A.3)
Until now, all variables have been treated as deterministic, overlooking the uncertainties that could influence their actual values. The engineer recognized that the basic variables in equation (A.3) could exhibit significant variation but struggled to recall how to incorporate this uncertainty. Consequently, for a preliminary estimate of the area A_s he took the mean (average) values of all the basic variables involved. The results of this attempt, together with the outcomes of the various code methods, are summarised in Table A.1. Note that equation (A.3) may be used for any design method indicated in Table A.1.
Table A.1 A reinforced concrete slab designed using the various methods: span 6 m, height 0.2 m (effective depth 0.17 m); permanent load 7 kN/m²; variable load 3 kN/m² (mean 0.8 kN/m²); concrete C20/25 with f_ck = 20 MPa (mean 30 MPa); reinforcement with f_yk = 500 MPa (mean 560 MPa).
The partial factor method (CEN) | 628 | 0.000933 | 841 | 4.82 | 0.70×10⁻⁶
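The A_s values in Table A.1 for the mean-value estimate and for the partial factor method can be reproduced with a short sketch of equation (A.3). The partial factors used here (1.35 and 1.5 on the permanent and variable loads, 1.5 and 1.15 on concrete and steel strengths) are the values recommended in the Eurocodes:

```python
from math import sqrt

def As_required(g, q, L, b, d, fc, fy):
    """Reinforcement area from equation (A.3); loads in kN/m2 (per metre
    width), strengths in kPa, dimensions in m."""
    M = (g + q) * L**2 / 8.0  # bending moment, kNm
    return (b * d * fc / fy) * (1.0 - sqrt(1.0 - 2.0 * M / (b * d**2 * fc)))

L, b, d = 6.0, 1.0, 0.17

# "Mean values method": mean loads and mean strengths
As_mean = As_required(g=7.0, q=0.8, L=L, b=b, d=d, fc=30e3, fy=560e3)

# Partial factor method: design loads 1.35*g_k and 1.5*q_k, and design
# strengths f_ck/1.5 and f_yk/1.15
As_design = As_required(g=1.35 * 7.0, q=1.5 * 3.0, L=L, b=b, d=d,
                        fc=20e3 / 1.5, fy=500e3 / 1.15)

print(f"{As_mean:.6f} m2, {As_design:.6f} m2")
```

The first result reproduces the inadequate mean-value estimate A_s = 0.000376 m², the second the partial factor design A_s = 0.000933 m² quoted in Table A.1.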
Table A.2 Design values of the loads and material strengths

Basic variable | The mean | Permissible stresses
Table A.1 demonstrates that using the "mean values method" for an initial estimate is inadequate, resulting in an unsatisfactory reinforcement area of A_s = 0.000376 m² and a reinforcement ratio of only 0.0022, which corresponds to an unacceptably high failure probability of 0.5. It is widely acknowledged that design calculations should utilize safer values for the basic variables instead of the mean values (values that are offered later in the book). Table A.2 indicates the design values of the loads and material strengths.
The permissible stresses method (CP 114) and the global safety factor approach yield conservative and potentially uneconomical results, while the partial safety factors method, endorsed in recent EN documents, offers a more effective design framework. Notably, the reliability index β = 4.8 achieved through this method is close to the target value of β = 3.8, indicating its suitability for structural design.
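The reliability index β and the failure probability are linked by P_f = Φ(−β), with Φ the standard normal distribution function; a short sketch using only the Python standard library:

```python
import math

def failure_probability(beta: float) -> float:
    """P_f = Phi(-beta), the standard normal tail, via math.erfc."""
    return 0.5 * math.erfc(beta / math.sqrt(2))

# The target reliability index beta = 3.8 and the achieved beta = 4.8:
for beta in (3.8, 4.8):
    print(f"beta = {beta}: P_f ≈ {failure_probability(beta):.2e}")
```

For β = 3.8 this gives P_f ≈ 7.2×10⁻⁵, and for β ≈ 4.8 roughly 0.8×10⁻⁶, consistent with the partial-factors row of Table A.1.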
The primary benefit of the partial safety factors method lies in its ability to account for the uncertainty of individual basic variables through the calibration of the relevant partial factors and other reliability elements. This book aims to elucidate the fundamental principles of this method and demonstrate how to determine suitable design values for basic variables to ensure an adequate level of reliability.
Experiment, random event, sample space
This chapter provides a clear overview of essential probability theory concepts and terms commonly applied in the reliability analysis of civil engineering structures and systems. The various concepts and laws are presented intuitively, without in-depth mathematical proofs. For comprehensive explanations of the concepts, theorems, and rules discussed, readers are encouraged to refer to specialized literature.
The most significant fundamental concepts of the theory of probability applied in structural reliability include:
– experiment;
– random event;
– sample space (space of events).
These terms are used by classical probability theory, but they are also applicable in the contemporary theory of probability based on the theory of sets.
In probability theory, an experiment is defined as the realization of a specific set of conditions, denoted S. The classical approach assumes that these experiments can be repeated indefinitely, such as rolling a die or testing a concrete cube. Each trial yields clear outcomes that allow for a definitive determination of whether a specific event has occurred, like rolling a predetermined number or assessing whether a concrete cube meets a specified strength threshold.
In practical applications of probability theory, the assumption of arbitrarily repeatable experiments yielding clear outcomes is often unrealistic, particularly in fields like construction, where only a limited number of experiments can be conducted. Today, the theory of probability embraces broader concepts, linking terms such as experiment, event, and sequence of events to the general theory of sets.
The concept of an experiment is universally relevant, but defining the specific conditions is crucial, regardless of the feasibility of conducting the experiment. In certain instances, experiments may only be executed in a hypothetical context.
Accurate and comprehensive specification of the conditions S is crucial for any experiment, as the results and their interpretations must be directly linked to these conditions. Comparing experiments conducted under varying conditions can result in significant errors and misconceptions. Therefore, detailing the relevant conditions and verifying them is essential for all probabilistic analyses.
Probability theory focuses on the outcomes of experiments that cannot be precisely determined due to uncertain conditions. These experiments, known as random experiments, yield results described by events that may occur when specific conditions are met, but are not guaranteed. Such events, referred to as random events, are typically represented by capital letters, such as A.
In probability theory, an event that is guaranteed to happen whenever specific conditions are met is referred to as a certain event, commonly denoted by U. Conversely, an event that cannot occur under any circumstances is known as an impossible event, typically represented by V.
The sample space of a random experiment encompasses all possible outcomes that can result from a specific set of conditions S. This space can be finite, such as the outcomes of tossing a die, or infinite, like testing a concrete cube in a testing machine. In certain scenarios, a system of elementary events can be identified, these being indivisible outcomes, such as the numbers 1 to 6 when rolling a die. However, in other cases, such as testing a cube, the system of elementary events may not be clear or may not exist.
This summary clarifies essential concepts such as experiments, sets of conditions S, events, and sample spaces through three straightforward examples. In addition to defining these key terms, the examples offer valuable insights into the general principles and mathematical tools that describe real-world scenarios and the accepted simplifying assumptions associated with them.
Rolling a die serves as a classic and, from an educational perspective, valuable example of a random experiment. In this scenario, the set of conditions is straightforward: the die is balanced and symmetrical, and the method of tossing does not influence the outcome.
The certain event U denotes the event where any of the numbers 1, 2, 3, 4, 5 or 6 occurs.
The impossible event V denotes the event when other numbers appear (e.g. 0, 7, 9, etc.).
Elementary events, denoted E_i for i = 1 to 6, are indivisible occurrences. When the set of conditions S is met, each elementary event has an equal probability of occurring, indicating a system of equally likely elementary events.
Random events can be represented as A_i, where A_1 denotes the appearance of the number 1, A_2 the appearance of an even number (E_2, E_4, E_6), A_3 a number divisible by three (E_3, E_6), and A_4 a number divisible by two or three (E_2, E_3, E_4, E_6). In this scenario, the sample space, which encompasses all possible outcomes of a toss, is clearly finite.
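The die events above can be modelled directly as Python sets, which makes the unions and intersections explicit:

```python
# Sample space and the events A1..A4 of the die example as Python sets.
U = {1, 2, 3, 4, 5, 6}             # certain event: some number appears
A1 = {1}                           # the number 1 appears
A2 = {2, 4, 6}                     # an even number appears
A3 = {3, 6}                        # a number divisible by three appears
A4 = A2 | A3                       # divisible by two or three: the union

assert A4 == {2, 3, 4, 6}
assert A2 & A3 == {6}              # intersection: divisible by six
assert U - A2 == {1, 3, 5}         # complement of A2 within U
print(A4, A2 & A3)
```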
The cylinder strength of concrete is evaluated through a random experiment involving the loading of a test cylinder in a testing machine. The conditions affecting this experiment include the concrete's composition, treatment, age, the cylinder dimensions, and the loading speed. The key random event under investigation is the failure of the concrete cylinder at a specific loading level. While sufficiently high loading levels guarantee failure, sufficiently low levels ensure it never occurs. At the loading level representing the characteristic cylinder strength, failure may happen in approximately 5% of instances, indicating variability in performance under nominally identical conditions.
Elementary events can be defined by dividing a specific loading range into equal-width intervals. For instance, concrete grade C20, with a characteristic cylinder strength of 20 MPa, has a loading range from 10 to 50 MPa, which can be segmented into 4 MPa intervals. Each elementary event represents the failure of a cylinder within one of these intervals. Experimental results show varying failure rates: two cylinders failed between 18 and 22 MPa, nine between 22 and 26 MPa, and seventeen between 26 and 30 MPa, indicating that the events are not equally probable. The sample space is infinite, comprising one-sided or two-sided intervals.
Figure 2.1 The number of failed cylinders versus the loading level
Figure 2.1 shows a frequently used graphical representation of experimental results, referred to as a histogram, which is commonly used for the development of probabilistic models describing basic variables.
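A histogram such as Figure 2.1 can be built by sorting measured strengths into the 4 MPa classes described above; the strength values below are hypothetical, chosen only to illustrate the binning:

```python
from collections import Counter

# Hypothetical cylinder strengths in MPa (illustrative data only); the
# 4 MPa class width and the 10 MPa lower bound follow the text.
strengths = [19.5, 21.0, 23.2, 24.8, 25.5, 26.1, 27.4, 28.0, 28.9, 29.3]
WIDTH = 4.0
LO = 10.0

def interval(x: float) -> tuple[float, float]:
    """Return the 4 MPa class (lower, upper) containing strength x."""
    k = int((x - LO) // WIDTH)
    return (LO + k * WIDTH, LO + (k + 1) * WIDTH)

hist = Counter(interval(x) for x in strengths)
for (a, b), n in sorted(hist.items()):
    print(f"{a:4.0f}-{b:4.0f} MPa: {'#' * n} ({n})")
```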
When throwing a dart at a board, as illustrated in Figure 2.2, each throw serves as a realization of a random experiment. The conditions influencing the outcome, the set S, encompass factors such as the distance from the throwing point, the board's size, the type of dart used, and various other throwing conditions.
Figure 2.2 An example of throwing a dart onto a board – Venn diagram
It is assumed that every point of the board can be hit with equal likelihood and that the board is always hit (these are, undoubtedly, questionable assumptions).
The hitting of the whole board is therefore a certain event U.
An impossible event V is a throw that misses the board.
Relations between random events
Figure 2.2 illustrates a common representation of random events using Venn diagrams, where the entire rectangle denotes the certain event U, while the ovals represent random events A and B. This visual format helps to explore fundamental relationships between events A and B, leading to the definition of key terms and the derivation of general relationships among random events. Various diagrams similar to Figure 2.2 can depict these relationships and combinations. For a comprehensive understanding, including formal mathematical proofs, refer to specialized literature [11, 12].
If an event B occurs every time the conditions S are realized as a result of which an event A occurs, we say that event A implies event B, which is usually symbolically denoted as A ⊂ B.
In probability theory, when events A and B happen simultaneously under conditions S, we refer to this as the intersection of the two events, denoted A ∩ B. Conversely, if at least one of the events A or B occurs whenever conditions S are met, this is known as the union of the events, represented as A ∪ B. Additionally, if event A occurs while event B does not, we describe this as the difference A − B. Event A and its complement Ā are known as complementary events, meaning that A ∪ Ā = U and A ∩ Ā = V. The following fundamental rules (the commutative, associative and distributive laws) hold for the intersection and union of random events:

A ∩ B = B ∩ A,  A ∪ B = B ∪ A
(A ∩ B) ∩ C = A ∩ (B ∩ C),  (A ∪ B) ∪ C = A ∪ (B ∪ C)
A ∩ (B ∪ C) = (A ∩ B) ∪ (A ∩ C),  A ∪ (B ∩ C) = (A ∪ B) ∩ (A ∪ C)
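These laws are easy to verify on a small finite model, with events represented as Python sets (the example events are arbitrary):

```python
# Commutative, associative and distributive laws checked on Python sets,
# a convenient finite model of random events.
U = set(range(1, 7))
A, B, C = {1, 2, 3}, {2, 4, 6}, {3, 6}

assert A & B == B & A and A | B == B | A           # commutative
assert (A & B) & C == A & (B & C)                  # associative
assert (A | B) | C == A | (B | C)
assert A & (B | C) == (A & B) | (A & C)            # distributive
assert A | (B & C) == (A | B) & (A | C)
print("laws verified on the example events")
```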
These basic rules lead to the definition of more complicated relations, the intersection ∩_i A_i and the union ∪_i A_i of a system of events A_i.
The following rules (the so-called de Morgan rules), whose validity follows from the above relations, are sometimes effectively applied in practical computations of the probabilities of complex events: the complement of the union ∪_i A_i is the intersection ∩_i Ā_i of the complements (2.5), and the complement of the intersection ∩_i A_i is the union ∪_i Ā_i of the complements (2.6).
The use of these rules is evident from the two following examples.
A basic serial system subjected to forces P includes two components, as illustrated in Figure 2.3. The system fails (event F) if either element 1 fails (event F1) or element 2 fails (event F2), i.e. F = F1 ∪ F2. The complementary event F̄ (no failure) is, according to relation (2.5), described by F̄ = F̄1 ∩ F̄2.
Town C receives its water supply from two sources, A and B, through a pipeline comprising three independent branches: 1, 2, and 3 (Figure 2.3). The failures of these branches are denoted F1, F2, and F3, respectively. If both sources A and B have adequate capacity to meet the town's needs, a water shortage occurs in town C only when branch 3 fails or branches 1 and 2 fail simultaneously, represented by the event F = (F1 ∩ F2) ∪ F3.
Since the failures of branches 1 and 2 enter this event jointly, it is beneficial to analyze the complementary event, which describes the sufficiency of water in town C. According to de Morgan's rules, the complementary event is F̄ = (F̄1 ∪ F̄2) ∩ F̄3, where the event F̄1 ∪ F̄2 represents sufficient water in the join of branches 1 and 2, which is at the same time the beginning of branch 3.
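Because there are only 2³ = 8 combinations of branch failures, the water-supply example can be checked exhaustively; a small sketch:

```python
from itertools import product

# Water network of Figure 2.3: branches 1 and 2 run from sources A and B
# to a joint; branch 3 links the joint to town C.  A shortage F occurs if
# branch 3 fails or branches 1 and 2 both fail: F = (F1 ∩ F2) ∪ F3.
def shortage(f1: bool, f2: bool, f3: bool) -> bool:
    return (f1 and f2) or f3

def sufficient(f1: bool, f2: bool, f3: bool) -> bool:
    # de Morgan: sufficient water = (not F1 or not F2) and not F3
    return ((not f1) or (not f2)) and (not f3)

# The two descriptions agree for every one of the 8 failure states.
assert all(
    sufficient(*s) == (not shortage(*s))
    for s in product([False, True], repeat=3)
)
print("complementary-event identity verified for all 8 branch states")
```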
Figure 2.3 Water supply system of a town C from sources A and B.
In a statically determinate truss structure comprising seven members and subjected to forces P, we define event F as the occurrence of structural failure. Let F_i represent the failure of the individual member i, where i ranges from 1 to 7.
Figure 2.4 Statically determinate truss structure
The failure of the whole structure (event F) occurs if at least one of the members fails. Therefore it holds that F = F1 ∪ F2 ∪ … ∪ F7.
Owing to the manufacturing conditions of the individual members, the events F_i may be mutually dependent, and they are not exclusive. Therefore, when calculating the probability of failure, it may be beneficial to consider the complementary event F̄, which by de Morgan's rules is F̄ = F̄1 ∩ F̄2 ∩ … ∩ F̄7.
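If, purely for illustration, the member failures were assumed independent with known probabilities p_i (the values below are hypothetical), de Morgan's rules would give P(F) = 1 − Π(1 − p_i); a minimal sketch under that assumption:

```python
import math

# Assumed (hypothetical) member failure probabilities p_i; independence
# is an illustrative assumption here, not a claim about the real truss.
p = [1e-4] * 7

# de Morgan: F_bar = intersection of the F_i_bar, so under independence
# P(F_bar) = prod(1 - p_i) and P(F) = 1 - P(F_bar).
P_no_failure = math.prod(1 - pi for pi in p)
P_F = 1 - P_no_failure
print(f"P(F) ≈ {P_F:.3e}")      # close to sum(p_i) = 7e-4 for small p_i
```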
Similar relationships may be effectively used when analysing complex technical systems.
A complete system of events A_i is characterized by the union of these events equaling the certain event U, ensuring that at least one event A_i always occurs. In the analysis of complex events, the term "complete system of mutually exclusive events" is also relevant; in such a system, only one event A_i occurs at any given time.
Probability measures the likelihood of random events occurring, and its definition involves complex mathematical concepts; the historical evolution of probability reflects significant advancements in both theory and practical applications. Traditionally, probability is defined using a complete system of elementary events. For an event A, which consists of m out of n equally likely elementary events, the probability is calculated as the ratio P(A) = m/n, where the n elementary events form a complete system of mutually exclusive events.
For probability defined in this way it obviously holds that 0 ≤ P(A) ≤ 1, with P(U) = 1 (2.7) and P(V) = 0 (2.8).
It can also be shown for a system of mutually exclusive events A_i that the probability of the union of these events is given by the relation P(∪_i A_i) = Σ_i P(A_i) (2.9).
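The classical definition P(A) = m/n and the additivity rule (2.9) can be illustrated on the die using exact fractions:

```python
from fractions import Fraction

# Classical definition on the die: P(A) = m/n with n = 6 equally likely
# elementary events.
n = 6

def P(event: set) -> Fraction:
    return Fraction(len(event), n)

A2, A3 = {2, 4, 6}, {3, 6}           # even numbers; divisible by three
assert P({1}) + P({3}) == P({1, 3})  # mutually exclusive: P adds up
assert P(A2) + P(A3) != P(A2 | A3)   # not exclusive: P does not add up
print(P(A2), P(A2 | A3))             # 1/2 and 2/3
```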
The classical definition of probability works well for simple cases like dice tossing, but it falters when dealing with, for example, non-symmetrical dice. Additionally, the examples from civil engineering demonstrate that a finite set of elementary events is inadequate for addressing key practical challenges. As a result, alternative definitions of probability have been developed to overcome these limitations.
The geometrical definition of probability is related to the throwing of a dart onto a board, as in Figure 2.2. The probability of an event A is defined as the ratio of the area corresponding to event A, area(A), to the area of the certain event U, area(U):

P(A) = area(A) / area(U)   (2.10)
The geometric definition seeks to address a limitation of the classical definition by moving beyond a finite number of elementary events. However, it still fails to recognize that not all points on the board (event U) need have equal chances of occurring. Clearly, using "surface area" as a measure of occurrence is then inadequate, and this challenge remains unresolved.
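Under the equal-likelihood assumption of the dart example, the ratio area(A)/area(U) can be estimated by uniform random sampling; the board and the region A below are hypothetical choices for illustration:

```python
import random

# Geometric definition checked by sampling: board U = unit square, event
# A = disc of radius 0.3 centred at (0.5, 0.5), so area(A)/area(U) = pi*0.09.
random.seed(1)
N = 200_000
hits = sum(
    (random.random() - 0.5) ** 2 + (random.random() - 0.5) ** 2 <= 0.3 ** 2
    for _ in range(N)
)
print(f"relative frequency {hits / N:.4f} vs area ratio {3.14159 * 0.09:.4f}")
```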
Probability, in statistical terms, is defined through the outcomes of an experiment conducted multiple times. When an experiment is repeated n times and a specific event A occurs m(A) times, the relative frequency of event A, m(A)/n, tends to stabilize as n increases. This stability of relative frequencies is known as statistical stability. The limit that the relative frequency m(A)/n approaches as the number of trials n grows is recognized as the probability P(A) of event A, serving as an objective measure of its occurrence:

P(A) = lim_{n→∞} m(A)/n   (2.11)
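The statistical stability described above can be observed in a simple simulation of die rolls, where the relative frequency of the event "a six is rolled" settles near 1/6 as n grows:

```python
import random

# Relative frequency m(A)/n of rolling a six, for increasing numbers of
# trials n; the values stabilise around 1/6 ≈ 0.1667.
random.seed(42)
for n in (100, 10_000, 100_000):
    m = sum(random.randint(1, 6) == 6 for _ in range(n))
    print(f"n = {n:>7}: m(A)/n = {m / n:.4f}")
```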
However, the assumption of statistical stability and the convergence indicated in equation (2.11) (i.e. the existence of the limit of a quantity derived from the results of experiments) causes some mathematical difficulties.
Classical, geometrical, and statistical definitions of probability aim to establish a clear understanding of probability and offer a method for its calculation; however, this endeavor proves to be exceptionally challenging, if not outright impossible.
The global consensus on the axiomatic system marks a significant milestone in defining the foundational concepts of probability theory. This system articulates the definition of probability and its essential properties, yet it does not offer practical guidance for its calculation.
Note that equations (2.7) to (2.9) characterize the common properties of the classical, geometrical as well as statistical definition of probability:
1 the probability of a certain event is equal to 1;
2 the probability of an impossible event is equal to 0; and
3 if an event A is a union of partial and mutually exclusive events A_1, A_2, …, A_n, then the probability of event A is equal to the sum of the probabilities of the partial events.
The axiomatic definition of probability introduces these general properties as axioms.
Probability P is a real function, defined on a sample space Ω over the certain event U, with these properties:
1 For every event A it holds that P(A) ≥ 0.
2 For the certain event U it holds that P(U) = 1.
3 If A_i ∈ Ω, i = 1, 2, …, and if for arbitrary i and j, A_i ∩ A_j = V, then P(∪_i A_i) = Σ_i P(A_i).
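The three axioms can be checked directly on a finite probability measure built from non-negative weights on the elementary events; a minimal sketch with exact fractions (the fair-die weights are just an example):

```python
from fractions import Fraction

# A finite probability measure from non-negative weights on elementary
# events; with these the three axioms can be verified directly.
weights = {1: 1, 2: 1, 3: 1, 4: 1, 5: 1, 6: 1}   # a fair die, for example
total = sum(weights.values())

def P(event: set) -> Fraction:
    return Fraction(sum(weights[e] for e in event), total)

U = set(weights)
A, B = {1, 2}, {5}                   # A and B are mutually exclusive
assert P(A) >= 0                     # axiom 1: non-negativity
assert P(U) == 1                     # axiom 2: the certain event
assert P(A | B) == P(A) + P(B)       # axiom 3: additivity, A ∩ B = V
print(P(A), P(A | B))
```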