Safety and Reliability

For more details on this topic, see Inherent safety.

Probabilistic risk assessment has created a close relationship between safety and reliability. Component reliability, generally defined in terms of component failure rate, and external event probability are both used in quantitative safety assessment methods such as fault tree analysis (FTA). Related probabilistic methods are used to determine system Mean Time Between Failures (MTBF), system availability, or probability of mission success or failure. Reliability analysis has a broader scope than safety analysis, in that non-critical failures are considered. On the other hand, higher failure rates are considered acceptable for non-critical systems.
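The basic relationships behind these figures are simple. The following Python sketch (illustrative values only, assuming a constant failure rate; not taken from any particular standard) shows how failure rate, mission reliability, and steady-state availability follow from MTBF and mean time to repair (MTTR):

import math

def failure_rate(mtbf_hours: float) -> float:
    """Constant (exponential) failure rate implied by an MTBF."""
    return 1.0 / mtbf_hours

def mission_reliability(mtbf_hours: float, mission_hours: float) -> float:
    """Probability of surviving the mission, R(t) = exp(-t / MTBF)."""
    return math.exp(-mission_hours / mtbf_hours)

def availability(mtbf_hours: float, mttr_hours: float) -> float:
    """Steady-state availability = MTBF / (MTBF + MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Example: a unit with an assumed 50,000 h MTBF and 8 h mean time to repair
print(failure_rate(50_000))             # 2e-05 failures per hour
print(mission_reliability(50_000, 10))  # ~0.9998 for a 10-hour mission
print(availability(50_000, 8))          # ~0.99984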

Safety generally cannot be achieved through component reliability alone. Catastrophic failure probabilities of 10⁻⁹ per hour correspond to the failure rates of very simple components such as resistors or capacitors. A complex system containing hundreds or thousands of components might achieve an MTBF of 10,000 to 100,000 hours, meaning it would fail at 10⁻⁴ or 10⁻⁵ per hour. If a system failure is catastrophic, usually the only practical way to achieve a 10⁻⁹ per hour failure rate is through redundancy. Two redundant systems with independent failure modes, each having an MTBF of 100,000 hours, could achieve a failure rate on the order of 10⁻¹⁰ per hour because of the multiplication rule for independent events.
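A minimal sketch of that multiplication rule, using the figures above and assuming fully independent channels and a one-hour exposure interval (a real analysis would use a fault tree and account for repair and exposure times):

# Two independent, redundant channels, each with an MTBF of 100,000 h
# (failure rate 1e-5 per hour). The chance that both fail within the same
# one-hour interval is roughly the product of the per-hour probabilities.

channel_failure_rate = 1e-5   # failures per hour (MTBF = 100,000 h)
exposure_hours = 1.0          # interval over which a dual failure is assumed undetected

p_single = channel_failure_rate * exposure_hours
p_both = p_single * p_single  # independence assumed; no common-cause failures

print(p_both)  # 1e-10, i.e. on the order of 10^-10 per hour as stated above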

When adding equipment is impractical (usually because of expense), the least expensive design approach is often an "inherently fail-safe" one: the system is designed so that its failure modes are not catastrophic. Inherent fail-safes are common in medical equipment, traffic and railway signals, communications equipment, and safety equipment.

The typical approach is to arrange the system so that ordinary single failures cause the mechanism to shut down in a safe way (for nuclear power plants, this is termed a passively safe design, although more than ordinary failures are covered). Alternatively, if the system contains a hazard source such as a battery or rotor, then it may be possible to remove the hazard from the system so that its failure modes cannot be catastrophic. The U.S. Department of Defense Standard Practice for System Safety (MIL-STD-882) places the highest priority on the elimination of hazards through design selection.
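As an illustration of the fail-safe idea (a hypothetical controller sketch, not an example from MIL-STD-882), the code below drives the system to a safe shutdown state on any single detected fault rather than attempting to continue operating:

from enum import Enum, auto
from typing import Optional

class State(Enum):
    RUNNING = auto()
    SAFE_SHUTDOWN = auto()

class FailSafeController:
    """Hypothetical controller: any single detected fault drives the system
    to a safe state instead of continuing operation."""

    def __init__(self) -> None:
        self.state = State.RUNNING

    def on_sensor_reading(self, value: Optional[float], low: float, high: float) -> None:
        # Treat a missing or out-of-range reading as a fault.
        if value is None or not (low <= value <= high):
            self.trip("sensor reading missing or out of range")

    def trip(self, reason: str) -> None:
        # The safe state should not depend on the failed component:
        # e.g. de-energize outputs, close spring-return valves, apply brakes.
        self.state = State.SAFE_SHUTDOWN
        print(f"tripped to safe state: {reason}")

controller = FailSafeController()
controller.on_sensor_reading(150.0, low=0.0, high=100.0)  # out of range -> trip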

One of the most common fail-safe systems is the overflow tube in baths and kitchen sinks. If the valve sticks open, the excess water drains into the overflow rather than spilling and causing damage. Another common example is the elevator: the cable supporting the car holds spring-loaded brakes open, so if the cable breaks, the brakes grab the rails and the cabin does not fall.

Some systems can never be made fail-safe, because continuous availability is needed. For example, loss of engine thrust in flight is dangerous. Redundancy, fault tolerance, or recovery procedures are used for these situations (e.g., multiple independent, separately controlled and separately fueled engines). Redundancy also makes the system less sensitive to reliability prediction errors or quality-induced uncertainty in the individual items. On the other hand, failure detection and correction, and the avoidance of common-cause failures, become increasingly important for ensuring system-level reliability.
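One common way to account for common-cause failures is a beta-factor model (the numbers below are assumed, illustrative values); even a small common-cause fraction quickly dominates the independent dual-failure contribution of a redundant pair, which is why avoiding common-cause failures matters so much:

channel_rate = 1e-5   # per-hour failure rate of each redundant channel (MTBF = 100,000 h)
exposure = 1.0        # hours over which an undetected dual failure is considered
beta = 0.02           # assumed fraction of failures that strike both channels at once

# Both channels fail independently within the exposure interval.
independent_dual = ((1 - beta) * channel_rate * exposure) ** 2

# A single common-cause event disables both channels.
common_cause = beta * channel_rate

print(f"independent dual-failure contribution ~ {independent_dual:.1e} per hour")
print(f"common-cause contribution             ~ {common_cause:.1e} per hour")
# With beta = 2%, the common-cause term (~2e-7/h) dominates the ~1e-10/h
# independent term, eroding most of the benefit of redundancy.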
