Reliability analysis of the compressed air supplying system in underground mines - Nature


This paper analyzes the reliability of the compressed air unit of the Qaleh-Zari Copper Mine, Iran, using the Markov modeling approach. In this section, the reliability analysis process based on the Markov chain approach is presented. To this end, the possible states of all main and reserve compressors are defined for the Markov chain model. The failure and repair rates of all compressors are calculated to obtain the probability that the system occupies each state. The stationary distribution of the Markov chain is then computed to obtain a confidence interval for the system availability. Moreover, the probability of failure in any time period is evaluated to study the reliability behavior.

Markov chain modelling and estimation of the transition probabilities

A Markov chain is a stochastic process used for the mathematical modeling of phenomena that depend on random variables. In this approach, probability concepts are used to describe how a system changes from one state to other states26. The technique assumes the system is memoryless; that is, the future state of the system depends only on the current state, not on the sequence of states that preceded it. Furthermore, the behavior of the system must be the same at all points of time, regardless of the point of time considered. Under this condition, the probability of transition from one state to another is constant over time; in other words, the process is stationary or homogeneous27.

In Markov modeling, a system can be in only one state at a time, and from time to time it makes a transition from one state to another. Two types of models can be considered: discrete-time and continuous-time chains. In a discrete-time Markov chain, transitions occur only at fixed unit time intervals, with a transition required at each interval, while in a continuous-time chain, transitions are permitted at any real-valued time. The discrete case is generally known as a Markov chain, while the continuous case is generally known as a Markov process13.

In the first step of the Markovian approach, all states of the system are determined and the transfer rates (failure and repair rates) between states are obtained. The Markov transition diagram is then used to describe the relationships between the states of the system. Figure 1 shows a basic Markov chain model with two states i and j, where λ and μ are the constant failure and repair (transfer) rates, respectively. Note that a chain in which every state can be reached from every other state, either directly or indirectly through intermediate states, is known as ergodic. In ergodic Markov chains, the limiting values of the state probabilities are independent of the initial conditions27.

Figure 1

Basic Markov model.

Considering the transition probability from each state i to state j, the probability that the system will be in state j after n time intervals, given that it started in state i, can be calculated. The stochastic process {Xn}, n = 0, 1, 2, …, is a discrete-time Markov chain if, for all states i0, …, i, j, it satisfies the Markov property28,29,30:

$$P(X_{n + 1} = j{|}X_{n} = i, X_{n - 1} = i_{n - 1} , \ldots , X_{0} = i_{0} ) = P(X_{n + 1} = j{|}X_{n} = i) = P_{ij}$$

(1)

where Pij is the probability that the chain, whenever in state i, moves next (one unit of time later) into state j, \(i \ne j\); it is referred to as a one-step transition probability. The quantities \(P_{ij}^{\left( n \right)}\), called the n-step transition probabilities, are defined as:

$$P_{ij}^{\left( n \right)} = P(X_{n} = j{|}X_{0} = i)$$

(2)

Here, \(P_{ij}^{(n)}\) is the n-step transition probability, i.e., the probability that a process in state i will be in state j after n additional transitions. If the n-step transition probabilities are collected into matrix form as \(P^{(n)} = \left\{ {p_{ij}^{(n)} } \right\}\), then, according to the Chapman-Kolmogorov equation, \(P^{\left( n \right)}\) equals \(P^{n}\) for stationary Markov chains.
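The Chapman-Kolmogorov relation \(P^{(n)} = P^{n}\) can be sketched numerically as follows. The two-state transition matrix here is purely illustrative (it is not taken from the Qaleh-Zari compressor data):

```python
import numpy as np

# Hypothetical 2-state transition matrix (state 0 = working, state 1 = failed);
# the entries are illustrative only, not the mine's estimated rates.
P = np.array([[0.95, 0.05],   # working -> working, working -> failed
              [0.60, 0.40]])  # failed  -> working, failed  -> failed

# For a stationary (homogeneous) chain, the n-step matrix P^(n) is simply P^n.
n = 4
P_n = np.linalg.matrix_power(P, n)

# Each row of P^n remains a probability distribution (rows sum to 1).
print(P_n)
print(P_n.sum(axis=1))
```

Each entry \(P_{ij}^{(n)}\) of the result is the probability of being in state j after n steps, starting from state i.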

The matrix P is subject to the following stochastic constraints:

$$p_{ij} \ge 0,\quad \mathop \sum \limits_{j = 1}^{J} p_{ij} = 1,\quad i,j = 1, \ldots ,J$$

(3)

It is worth noting that, by the principles of the Markov chain process, the transition events are independent of one another; therefore, the transition probabilities (\(p_{ij}\)) can be estimated from the binomial distribution.
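In practice, the \(p_{ij}\) entries are estimated as row-wise proportions of observed transition counts, which is the maximum-likelihood estimate when each row of transitions is treated as binomial trials. A minimal sketch, using a hypothetical observed state sequence (not actual compressor records):

```python
import numpy as np

# Hypothetical state sequence for one compressor, sampled at fixed intervals
# (0 = working, 1 = failed); the data are illustrative only.
states = [0, 0, 0, 1, 0, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0]

n_states = 2
counts = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1  # tally each observed one-step transition

# Row-wise proportions give the estimated transition probabilities p_ij,
# consistent with treating each row as binomial trials.
P_hat = counts / counts.sum(axis=1, keepdims=True)
print(P_hat)
```

Each row of `P_hat` sums to 1, satisfying the stochastic constraints of Eq. (3).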

Estimating overall transition matrix

As mentioned, \(P_{ij}^{n}\) is the probability that the chain goes from state i to state j in n steps, and these \(P_{ij}^{n}\) values are arranged as the entries of a matrix, called the n-step transition probability matrix or the overall transition matrix (\(P^{n}\)). The matrix \(P^{n}\) is estimated through matrix multiplication; that is, the n-step transition matrix can be obtained by multiplying the matrix P by itself n times. To do this, the eigenvalue and eigenvector approach can be used. In this approach, the matrix P can be expanded as30:

$$P = U\Lambda U^{ - 1}$$

(4)

where Λ is the diagonal matrix of eigenvalues and U is the matrix whose columns are the corresponding eigenvectors. Then, the overall transition matrix \(P^{n}\) can be estimated from the following equation29,30:

$$P^{n} = U\Lambda^{n} U^{ - 1}$$

(5)
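The eigendecomposition of Eqs. (4) and (5) can be checked against direct matrix multiplication; the transition matrix below is illustrative only:

```python
import numpy as np

# Illustrative 2-state transition matrix (not the mine's actual rates).
P = np.array([[0.95, 0.05],
              [0.60, 0.40]])

# Eigendecomposition P = U Lambda U^{-1} (Eq. 4).
eigvals, U = np.linalg.eig(P)
Lam = np.diag(eigvals)

# Overall transition matrix via Eq. (5): P^n = U Lambda^n U^{-1}.
n = 10
P_n_eig = U @ np.linalg.matrix_power(Lam, n) @ np.linalg.inv(U)

# Cross-check against multiplying P by itself n times.
P_n_direct = np.linalg.matrix_power(P, n)
print(P_n_eig)
```

Raising the diagonal Λ to the n-th power only requires powering the scalar eigenvalues, which is why this route is convenient for large n.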

The remainder of this subsection is devoted to studying the long-term behavior of Markov chains. It should be noted that if the initial probability distribution of the Markov chain is known, then the probability distribution at some time instant n, i.e., \(P_{ij}^{n}\), can be evaluated. For an ergodic Markov chain, the matrix multiplication technique can be applied to obtain the steady-state, or limiting, probabilities. The sequence of n-step transition matrices \(P^{n}\) approaches a matrix whose rows are all identical; that is, \(\lim_{n \to \infty } P_{ij}^{n} = \pi_{j}\). This indicates that \(P_{ij}^{n}\) converges to a value that is the same for all i. In other words, a limiting probability exists that the process will be in state j after a large number of transitions, and this value is independent of the initial state i29,30. Here, \(\pi_{j}\) is the steady-state distribution of the Markov chain, which can be obtained from the following equations:

$$\pi_{j} = \mathop \sum \limits_{i = 1}^{\infty } \pi_{i} P_{ij } ,\quad \mathop \sum \limits_{j = 1}^{\infty } \pi_{j} = 1,\quad j \ge 1$$

(6)
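The steady-state equations (6) form a linear system that can be solved directly; the sketch below uses the same illustrative two-state matrix as before and replaces one balance equation with the normalization condition:

```python
import numpy as np

# Illustrative transition matrix (state 0 = working, state 1 = failed).
P = np.array([[0.95, 0.05],
              [0.60, 0.40]])

# Solve pi = pi P together with sum(pi) = 1 (Eq. 6): rewrite as
# (P^T - I) pi = 0 and replace the last row with the normalization row.
n = P.shape[0]
A = np.vstack([(P.T - np.eye(n))[:-1], np.ones(n)])
b = np.zeros(n)
b[-1] = 1.0
pi = np.linalg.solve(A, b)
print(pi)
```

The result matches the identical rows that \(P^{n}\) approaches for large n, confirming that the limiting probabilities are independent of the initial state.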

Consider D = {Sd} as the set of desirable states and U = {Su} as the set of undesirable states. The mean times the system spends in the desirable states (\(\overline{D}\)) and in the undesirable states (\(\overline{U}\)) can be obtained from Eqs. (7) and (8), respectively31:

$$\overline{D} = \frac{{\mathop \sum \nolimits_{j \in D} \pi_{j} }}{{\mathop \sum \nolimits_{j \in U} \left( {\mathop \sum \nolimits_{i \in D} \pi_{i} .p_{ij} } \right)}}$$

(7)

$$\overline{U} = \frac{{\mathop \sum \nolimits_{j \in U} \pi_{j} }}{{\mathop \sum \nolimits_{j \in U} \left( {\mathop \sum \nolimits_{i \in D} \pi_{i} .p_{ij} } \right)}}$$

(8)

Moreover, the transition probability from desirable states to undesirable ones (\(\overline{P}\)) is obtained as:

$$\overline{P} = \frac{1}{{\overline{D} + \overline{U}}}$$

(9)
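Equations (7)-(9) can be sketched for the illustrative two-state chain used above, where state 0 is taken as the single desirable (working) state and state 1 as the single undesirable (failed) state; the stationary distribution below is that of the illustrative matrix, not of the mine's system:

```python
import numpy as np

# Illustrative transition matrix and its stationary distribution.
P = np.array([[0.95, 0.05],
              [0.60, 0.40]])
pi = np.array([12 / 13, 1 / 13])  # stationary distribution of this P

D, U = [0], [1]  # index sets of desirable / undesirable states

# Probability flux from desirable to undesirable states: the common
# denominator of Eqs. (7) and (8).
flux = sum(pi[i] * P[i, j] for i in D for j in U)

D_bar = pi[D].sum() / flux     # mean time in desirable states (Eq. 7)
U_bar = pi[U].sum() / flux     # mean time in undesirable states (Eq. 8)
P_bar = 1.0 / (D_bar + U_bar)  # D -> U transition probability (Eq. 9)
print(D_bar, U_bar, P_bar)
```

At steady state the probability flux from D to U equals the flux from U to D, which is why the same denominator serves both mean times.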

Reliability and remaining lifetime estimation

The probability that the system, having started in state i, makes its first transition to state j at step m (i.e., between the m-1 and m time steps) is obtained from the transition probability matrix as follows30,31:

$$f_{ij}^{\left( m \right)} = P(X_{n + m} = j, X_{n + m - 1} = i, \ldots , X_{n + 1} = i{|}X_{n} = i) = P_{ii}^{m - 1} P_{ij}$$

(10)

The same approach can be applied for different time steps to obtain the cumulative probability function. Therefore, the reliability function at time τ, i.e., the probability that the system, having started in state i, has not yet made its first transition to state j, is formulated as:

$$R(\tau ) = 1 - F_{ij} (\tau ) = 1 - \mathop \sum \limits_{m = 1}^{\tau } P_{ii}^{m - 1} P_{ij} ,\quad \tau = 1,2, \ldots$$

(11)

where \(F_{ij} (\tau )\) is the cumulative transition probability.
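The reliability function of Eq. (11) can be sketched for a single working state i and failed state j with illustrative one-step probabilities; when \(p_{ij} = 1 - p_{ii}\) the sum telescopes to a geometric survival function:

```python
# First-passage reliability of Eq. (11) for an illustrative two-state chain:
# the system starts in working state i and fails into state j.
p_ii, p_ij = 0.95, 0.05  # illustrative one-step probabilities

def reliability(tau):
    # R(tau) = 1 - sum_{m=1}^{tau} p_ii^(m-1) * p_ij  (Eq. 11)
    F = sum(p_ii ** (m - 1) * p_ij for m in range(1, tau + 1))
    return 1.0 - F

for tau in [1, 5, 10]:
    print(tau, reliability(tau))
```

With \(p_{ij} = 1 - p_{ii}\), the series gives \(R(\tau) = p_{ii}^{\tau}\), i.e., the probability of surviving τ consecutive steps in the working state.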

With this approach, the expected lifetime for transitioning from state i to state j can also be calculated. The expected value of the first-passage time from i to j is formed as:

$$E_{ij} (\tau ) = \mathop \sum \limits_{m = 1}^{\infty } mf_{ij}^{(m)}$$

(12)

Then, substituting \(f_{ij}^{(m)} = p_{ii}^{m - 1} p_{ij}\) and noting that \(p_{ij} = 1 - p_{ii}\) when j is the only state reachable from i, the expected time to reach state j is calculated as:

$$E_{ij} (\tau ) = \mathop \sum \limits_{m = 1}^{\infty } m\,p_{ii}^{m - 1} p_{ij} = \frac{1}{{1 - p_{ii} }}$$

(13)
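The closed form of Eq. (13) can be checked by evaluating the series numerically, again with an illustrative one-step probability:

```python
# Expected first-passage time of Eq. (13) for an illustrative two-state chain.
# With f_ij^(m) = p_ii^(m-1) * p_ij and p_ij = 1 - p_ii, the series
# sum_m m * f_ij^(m) collapses to the geometric mean time 1 / (1 - p_ii).
p_ii = 0.95  # illustrative probability of staying in state i
p_ij = 1.0 - p_ii

# Truncated numerical evaluation of the series (the tail is negligible).
E_series = sum(m * p_ii ** (m - 1) * p_ij for m in range(1, 2000))

E_closed = 1.0 / (1.0 - p_ii)  # closed form from Eq. (13)
print(E_series, E_closed)
```

For \(p_{ii} = 0.95\), both routes give an expected first-passage time of about 20 time steps, i.e., the mean number of intervals the system spends in the working state before its first failure.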