Biography
Qingpei Hu received his Ph.D. from the National University of Singapore in 2007 and joined the Laboratory of Statistical Science, Institute of Systems Science, Academy of Mathematics and Systems Science (AMSS), Chinese Academy of Sciences, in 2008, where he has worked ever since. His main research area is industrial statistics, with emphasis on theory and methods for system reliability and software reliability, oriented toward the practical quality and reliability needs of China's civil space program. His main research has been published in leading journals in industrial statistics and quality and reliability, including IEEE Transactions on Reliability, IIE Transactions, Journal of Quality Technology, and Reliability Engineering and System Safety. Dr. Qingpei Hu currently serves as a council member of the Chinese Association for Applied Statistics, a standing council member of its Reliability Branch, and a standing council member of the Reliability Branch of the Operations Research Society of China.
Research Interests
Statistics
Industrial Statistics
Reliability
Academic Papers
-
Residual life prediction for complex systems with multi-phase degradation by ARMA-filtered hidden Markov model
The performance of certain critical complex systems, such as the power output of ground photovoltaic (PV) modules or spacecraft solar arrays, exhibits a multi-phase degradation pattern due to their redundant structure. This pattern shows a degradation trend with multiple jump points, the mixed effect of two failure modes: a soft mode of continuous smooth degradation and a hard mode of abrupt failure. Both modes must be modeled jointly to predict the system residual life. In this paper, an autoregressive moving average model-filtered hidden Markov model is proposed to fit multi-phase degradation data with an unknown number of jump points, together with an iterative algorithm for parameter estimation. The algorithm combines a non-linear least-squares method, a recursive extended least-squares method, and an expectation–maximization algorithm to handle the different parts of the model. The proposed methodology is applied to a specific PV module system with simulated performance measurements for reliability evaluation and residual life prediction. Comprehensive studies show better performance than competing models and, more importantly, all jump points in the simulated data are identified. The algorithm also converges quickly with satisfactory parameter estimation accuracy, regardless of the number of jump points.
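As a rough illustration of the data pattern this model targets, the following sketch (not the paper's code; all parameter values are hypothetical) simulates a multi-phase degradation signal: a smooth soft-mode trend, abrupt hard-mode jumps from failures of redundant strings, and ARMA(1,1) measurement noise.

```python
# Illustrative simulation of a multi-phase degradation signal:
# soft mode (smooth loss) + hard mode (abrupt jumps) + ARMA(1,1) noise.
import numpy as np

rng = np.random.default_rng(0)
T = 200                                # number of inspection epochs
t = np.arange(T)

# Soft mode: slow smooth power-output loss (e.g., PV cell degradation)
soft = 100.0 * np.exp(-0.001 * t)

# Hard mode: abrupt failures of redundant strings at random epochs,
# each removing a fixed share of output (a hidden regime change)
jump_times = rng.choice(T, size=3, replace=False)
hard = np.zeros(T)
for tau in jump_times:
    hard[tau:] -= 5.0                  # each string failure drops output

# ARMA(1,1) noise: e_k = phi*e_{k-1} + w_k + theta*w_{k-1}
phi, theta = 0.6, 0.3
w = rng.normal(0, 0.5, T)
e = np.zeros(T)
for k in range(1, T):
    e[k] = phi * e[k - 1] + w[k] + theta * w[k - 1]

y = soft + hard + e                    # observed multi-phase signal
print("jump points at epochs:", sorted(jump_times))
```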
-
A Reliability Assessment Approach for Systems with Heterogeneous Component Information
Reliability assessment of complex systems is an important yet difficult task. The difficulty arises largely from heterogeneous component-level data, e.g., lifetime data, degradation data, component-level assessment results, and prior information. This paper develops a reliability assessment method for systems with both degradation and lifetime data. Our framework divides component-level information into two types, which are then combined using a systematic approach. The method is described along with application examples demonstrating that it overcomes several difficulties associated with conventional reliability assessment approaches.
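A minimal sketch of one common way to pool the two information types, assuming linear degradation paths extrapolated to a failure threshold; the paper's own combination scheme is more systematic, and all data below are simulated.

```python
# Pool observed lifetimes with pseudo-failure times obtained by
# extrapolating linear degradation paths to a threshold, then fit
# a single Weibull model to the pooled sample. Values are hypothetical.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)

# Type 1: directly observed lifetimes (hours)
lifetimes = rng.weibull(2.0, size=20) * 1000.0

# Type 2: linear degradation paths y = a + b*t; failure when y >= D
D = 10.0
slopes = rng.normal(0.01, 0.002, size=15)
intercepts = rng.normal(0.0, 0.5, size=15)
pseudo_lifetimes = (D - intercepts) / slopes   # first-passage extrapolation

pooled = np.concatenate([lifetimes, pseudo_lifetimes])
shape, loc, scale = weibull_min.fit(pooled, floc=0)
print(f"pooled Weibull fit: shape={shape:.2f}, scale={scale:.1f}")
print("R(500 h) =", weibull_min.sf(500.0, shape, scale=scale))
```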
-
Strategic Allocation of Test Units in an Accelerated Degradation Test Plan
Degradation is often defined in terms of the change of a key performance characteristic over time. It is common that the initial performance of the test units varies and is strongly correlated with the degradation rate.
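The setting described above can be sketched with a random-effects path model in which each unit's initial performance and degradation rate are drawn from a correlated bivariate normal distribution; all values below are hypothetical.

```python
# Random-effects degradation paths y_i(t) = a_i + b_i * t with
# correlated (a_i, b_i): initial performance and degradation rate.
import numpy as np

rng = np.random.default_rng(2)
mean = [100.0, -0.05]            # mean initial performance, mean rate
rho = -0.8                       # strong correlation, as noted above
sd_a, sd_b = 2.0, 0.01
cov = [[sd_a**2, rho * sd_a * sd_b],
       [rho * sd_a * sd_b, sd_b**2]]
ab = rng.multivariate_normal(mean, cov, size=50)

t = np.linspace(0, 500, 11)
paths = ab[:, [0]] + ab[:, [1]] * t   # 50 units x 11 inspection times
print("path matrix shape:", paths.shape)
print("sample corr(a, b) =", np.corrcoef(ab[:, 0], ab[:, 1])[0, 1])
```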
-
Reliability demonstration test for load-sharing systems with exponential and Weibull components
-
Degradation Modeling, Analysis, and Applications on Lifetime Prediction
Degradation signals provide more information about product life status than failure data when a specific degradation mechanism can be identified. Modeling and analyzing the degradation signal helps extrapolate product lifetime predictions. This chapter comprehensively reviews different kinds of modeling and analysis approaches, together with the corresponding lifetime prediction results, and discusses related issues such as product initial performance.
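As a minimal example of the extrapolation idea reviewed in the chapter, the sketch below fits a linear degradation path to one unit's signal and predicts lifetime as the first passage of a failure threshold; the data and threshold are hypothetical.

```python
# Fit a linear degradation path and extrapolate to a failure threshold
# to predict lifetime and remaining life for one unit.
import numpy as np

t_obs = np.array([0, 50, 100, 150, 200.0])
y_obs = np.array([0.0, 0.9, 2.1, 2.9, 4.2])   # observed degradation signal
D = 10.0                                       # failure threshold

b, a = np.polyfit(t_obs, y_obs, 1)             # linear path y = a + b*t
t_fail = (D - a) / b                           # first passage of threshold
print(f"predicted lifetime: {t_fail:.0f} h; "
      f"remaining life: {t_fail - 200:.0f} h")
```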
-
An Approach for Reliability Demonstration Test Based on Power-Law Growth Model
Reliability demonstration testing (RDT) is a critical and necessary step before the acceptance of an industrial system. Generally, an RDT focuses on designing a test plan through which one can judge whether the system reliability indices meet specific requirements. Many RDT plans have been established, but few incorporate the reliability growth aspects of the corresponding products. In this paper, we examine a comprehensive test plan that incorporates information from the reliability growth stage. An approach for RDT under the assumption of the power-law model is proposed; it combines data from the growth stage with those from the test stage of the product to reduce the cost of the test. Through simulation studies and numerical examples, we illustrate the characteristics of the test plan and the significant reduction in test cost achieved by our approach.
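Under the power-law (Crow-AMSAA) assumption named above, the expected cumulative number of failures is E[N(t)] = lam * t**beta, so growth-stage exposure directly reduces the failures expected during the demonstration window. The sketch below illustrates this with a simplified accept-if-at-most-c rule; it is not the paper's actual test plan, and all numbers are hypothetical.

```python
# Power-law NHPP: E[N(t)] = lam * t**beta; the demonstration window
# inherits the intensity reached at the end of the growth stage.
from scipy.stats import poisson

lam, beta = 0.05, 0.7     # hypothetical parameters (beta < 1: growth)
t_growth = 1000.0         # hours accumulated during reliability growth
t_test = 200.0            # additional demonstration-test hours

# Expected failures during the demonstration window, given prior growth
mu = lam * ((t_growth + t_test) ** beta - t_growth ** beta)

# Accept if observed failures <= c; the acceptance probability follows
# from the Poisson distribution of the failure count
c = 2
print(f"E[failures in test] = {mu:.2f}")
print(f"P(accept | model) = {poisson.cdf(c, mu):.3f}")
```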
-
Design and Risk Evaluation of Reliability Demonstration Test for Hierarchical Systems with Multilevel Information Aggregation
As reliability requirements become increasingly demanding for many engineering systems, conventional system reliability demonstration testing (SRDT) based on the number of failures requires a large sample of system units. However, for many safety-critical systems, such as missiles, it is prohibitive to perform such testing with large samples. To reduce the sample size, existing SRDT methods utilize test data from either the system level or the component level. In this paper, an aggregation-based SRDT methodology is proposed for hierarchical systems, utilizing multilevel reliability information from components, subsystems, and the overall system. Analytical conditions are identified under which the proposed method achieves lower consumer risk. The performances of different SRDT design strategies are evaluated and compared according to their consumer risks. A numerical case study illustrates the proposed methodology and demonstrates its validity and effectiveness.
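The consumer-risk notion used above can be illustrated with a deliberately simplified series-system example (not the paper's aggregation methodology): system reliability is the product of independent component reliabilities, and the consumer risk of a zero-failure binomial test is the probability that all units pass even though the true system reliability falls below the requirement. All numbers are hypothetical.

```python
# Consumer risk of a zero-failure system-level binomial SRDT
# for a series system of independent components.
import numpy as np

R_comp = np.array([0.99, 0.98, 0.995])   # component reliabilities (series)
R_sys = R_comp.prod()
print(f"series-system reliability: {R_sys:.4f}")

R_req = 0.98                              # demonstrated requirement
n = 50                                    # system units on test
# Zero-failure test: accept iff no unit fails. Since R_sys < R_req here,
# accepting is the consumer's risk.
consumer_risk = R_sys ** n
print(f"consumer risk with n={n}: {consumer_risk:.3f}")
```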
-
Software reliability growth modeling and analysis with dual fault detection and correction processes
Computer software is widely applied in safety-critical systems. The ever-increasing complexity of software systems makes it extremely difficult to ensure software reliability, and this problem has drawn considerable attention from both industry and academia. Most software reliability models are built on the common assumption that detected faults are immediately corrected, so the fault detection and correction processes can be regarded as the same process. In this article, a comprehensive study is conducted to analyze the time dependencies between the fault detection and correction processes. The model parameters are estimated using the Maximum Likelihood Estimation (MLE) method, based on an explicit likelihood function combining both the fault detection and correction processes. Numerical case studies are conducted under the proposed modeling framework. The results demonstrate that the proposed MLE method applies to more general situations and provides more accurate results. Furthermore, the predictive capability of the MLE method is compared with that of the Least Squares Estimation (LSE) method. The prediction results indicate that the proposed MLE method performs better than the LSE method when the data set is small or is collected in the early phase of software testing.
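A minimal sketch of the paired-process MLE idea, assuming a Goel-Okumoto detection process and a constant correction lag (one common simplification, not necessarily the paper's exact formulation); the weekly counts below are hypothetical.

```python
# Joint Poisson likelihood of per-interval detected and corrected
# fault counts under m_d(t) = a*(1 - exp(-b*t)) and m_c(t) = m_d(t - lag).
import numpy as np
from scipy.optimize import minimize

t = np.arange(1, 13.0)                         # weeks
detected  = np.array([12, 10, 9, 8, 6, 6, 5, 4, 3, 3, 2, 2])
corrected = np.array([ 8,  9, 9, 8, 7, 6, 5, 5, 4, 3, 3, 2])

def m(t, a, b):                                # GO mean-value function
    return a * (1.0 - np.exp(-b * t))

def negloglik(theta):
    a, b, lag = theta
    md = m(t, a, b)
    mc = m(np.clip(t - lag, 0, None), a, b)
    inc_d = np.diff(np.concatenate([[0.0], md]))   # interval increments
    inc_c = np.diff(np.concatenate([[0.0], mc]))
    inc = np.clip(np.concatenate([inc_d, inc_c]), 1e-9, None)
    obs = np.concatenate([detected, corrected])
    return -(obs * np.log(inc) - inc).sum()       # Poisson log-likelihood

fit = minimize(negloglik, x0=[80.0, 0.2, 1.0],
               bounds=[(1, 1000), (1e-4, 5), (0, 5)], method="L-BFGS-B")
a, b, lag = fit.x
print(f"a={a:.1f}, b={b:.3f}, lag={lag:.2f} weeks")
```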
-
A general modeling and analysis framework for software fault detection and correction process
Software reliability growth modeling plays an important role in software reliability evaluation. To incorporate more information and provide more accurate analysis, modeling software fault detection and correction processes has attracted widespread research attention recently. In modeling software correction processes, the assumption on fault correction time has been relaxed from a constant delay to a random delay. However, a stochastic distribution of fault correction time brings more difficulty to modeling and the corresponding parameter estimation. In this paper, a framework of software reliability models incorporating information from both the fault detection process and the correction process is studied. Different from previous extensions of software reliability growth modeling, the proposed approach is based on a Markov model rather than a nonhomogeneous Poisson process model. Parameter estimation is carried out with a weighted least-squares method, which emphasizes the influence of later data on the prediction. The proposed framework is applied to two data sets from practical software development projects and shows satisfactory performance.
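The weighted least-squares idea, with weights growing in time so later data dominate the fit, can be sketched as follows; a GO-type mean function stands in for the paper's Markov-based model, and the data are hypothetical.

```python
# Weighted least-squares fit in which later observations carry more
# weight (curve_fit interprets smaller sigma as larger weight).
import numpy as np
from scipy.optimize import curve_fit

t = np.arange(1, 11.0)
cum_detected = np.array([10, 18, 25, 31, 36, 40, 43, 45, 47, 48.0])

def mean_fn(t, a, b):
    return a * (1.0 - np.exp(-b * t))

w = t / t.sum()                      # weight grows with time
sigma = 1.0 / np.sqrt(w)
popt, _ = curve_fit(mean_fn, t, cum_detected, p0=[60, 0.2], sigma=sigma)
print(f"WLS fit: a={popt[0]:.1f}, b={popt[1]:.3f}")
```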
-
Proportional hazard modeling for hierarchical systems with multi-level information aggregation
Reliability modeling of hierarchical systems is crucial for their health management in many mission-critical industries. Conventional statistical modeling methodologies are constrained by the limited availability of reliability test data, especially when system-level reliability tests are expensive and/or time-consuming. This article presents a semi-parametric approach to modeling system-level reliability by systematically and explicitly aggregating lower-level information on system elements, i.e., components and/or subsystems. An innovative Bayesian inference framework is proposed to implement information aggregation based on the known multi-level structure of hierarchical systems and the interaction relationships among their composing elements. Numerical case study results demonstrate the effectiveness of the proposed method.
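A hedged sketch of the proportional-hazards building block: each element's hazard takes the form h_i(t) = h0(t) * exp(x_i . beta), and for a series structure the system survival is the product of element survivals. The Weibull baseline, covariates, and beta below are hypothetical, and the Bayesian aggregation step is omitted.

```python
# PH survival per element from a Weibull cumulative baseline hazard,
# aggregated multiplicatively for a series-structured system.
import numpy as np

def weibull_cum_hazard(t, shape=1.5, scale=1000.0):
    """Cumulative baseline hazard H0(t) of a Weibull."""
    return (t / scale) ** shape

x = np.array([[1.0, 0.2],        # element covariates (e.g., stress, vintage)
              [0.5, 0.8],
              [0.0, 1.0]])
beta = np.array([0.4, -0.3])

t = 500.0
H0 = weibull_cum_hazard(t)
S_elems = np.exp(-H0 * np.exp(x @ beta))   # PH survival per element
print("element survivals:", np.round(S_elems, 4))
print("series-system survival:", S_elems.prod())
```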
-
Lower confidence limit for reliability based on grouped data using a quantile-filling algorithm
The aim of this paper is to propose an approach to constructing lower confidence limits for a reliability function and to investigate the effect of the sampling scheme on the performance of the proposed approach. This is accomplished using a data-completion algorithm and Monte Carlo methods. The data-completion algorithm fills in censored observations with pseudo-complete data, while the Monte Carlo methods simulate observations of complicated pivotal quantities. The Birnbaum–Saunders, lognormal, and Weibull distributions are employed for illustrative purposes. Three data-analysis cases are presented to validate the applicability and effectiveness of the proposed methods: the first uses simulated data, and the last two use real data sets.
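In the same Monte Carlo spirit (though not the paper's pivotal-quantity construction), a parametric-bootstrap lower confidence limit for Weibull reliability from complete data can be sketched as follows; the quantile-filling step for censored observations is omitted, and the data are simulated.

```python
# Parametric-bootstrap lower confidence limit for R(t0) under a
# Weibull model fitted to complete data.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(3)
data = weibull_min.rvs(2.0, scale=1000, size=30, random_state=rng)

t0, B = 500.0, 1000
c_hat, _, s_hat = weibull_min.fit(data, floc=0)

R_boot = np.empty(B)
for b in range(B):
    resample = weibull_min.rvs(c_hat, scale=s_hat, size=len(data),
                               random_state=rng)
    c_b, _, s_b = weibull_min.fit(resample, floc=0)
    R_boot[b] = weibull_min.sf(t0, c_b, scale=s_b)

print("point estimate R(t0):", weibull_min.sf(t0, c_hat, scale=s_hat))
print("90% lower confidence limit:", np.quantile(R_boot, 0.10))
```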
-
Study of an imputation algorithm for the analysis of interval-censored data
In this article, an iterative single-point imputation (SPI) algorithm, called the quantile-filling algorithm, is studied for the analysis of interval-censored data. The approach combines the simplicity of SPI with the iterative idea of multiple imputation. Virtual complete data are imputed by conditional quantiles on the intervals, and convergence of the algorithm rests on the convergence of the moment estimates computed from the virtual complete data. Simulation studies are carried out for interval-censored data generated from the Weibull distribution, for which the complete procedure of the algorithm is given in closed form; the algorithm also applies to parameter inference with other distributions. The simulation studies show that the algorithm is feasible and stable, with satisfactory estimation accuracy.
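The quantile-filling iteration itself can be sketched for interval-censored Weibull data: fill each interval with the conditional quantile (here the conditional median) of the current fitted distribution, re-estimate the parameters from the resulting virtual complete sample, and repeat until the estimates stabilize. The inspection scheme and starting values below are hypothetical.

```python
# Quantile-filling iteration for interval-censored Weibull data.
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(4)
true = weibull_min.rvs(1.8, scale=100, size=40, random_state=rng)
lower = np.floor(true / 20) * 20           # inspection every 20 hours
upper = lower + 20                         # -> interval-censored data

c, s, q = 1.0, np.mean(upper), 0.5         # crude starting values
for it in range(50):
    Fl = weibull_min.cdf(lower, c, scale=s)
    Fu = weibull_min.cdf(upper, c, scale=s)
    # Conditional quantile of the current fit on each interval
    filled = weibull_min.ppf(Fl + q * (Fu - Fl), c, scale=s)
    c_new, _, s_new = weibull_min.fit(filled, floc=0)
    if abs(c_new - c) + abs(s_new - s) < 1e-6:
        break
    c, s = c_new, s_new

print(f"converged after {it + 1} iterations: shape={c:.2f}, scale={s:.1f}")
```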
-
Robust recurrent neural network modeling for software fault detection and correction prediction
Software fault detection and correction processes are related although different, and they should be studied together. A practical approach is to apply software reliability growth models to the fault detection process and treat fault correction as a delayed process. Artificial neural network models, by contrast, are data-driven and can model the two processes jointly without such assumptions; in particular, feedforward backpropagation networks have shown advantages over analytical models in fault number prediction. In this paper, recurrent neural networks are applied to model the two processes together. Within this framework, a systematic network configuration approach is developed using a genetic algorithm guided by prediction performance. To provide robust predictions, an extra factor characterizing the dispersion of repeated predictions is incorporated into the performance function. Comparisons with feedforward neural networks and analytical models are conducted on a real data set.
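A minimal recurrent-network sketch of the joint modeling idea, assuming PyTorch (not the paper's implementation): the input at each week is the pair of cumulative (detected, corrected) counts, and the network predicts the next week's pair. The genetic-algorithm configuration search and the dispersion term in the performance function are omitted, and the weekly counts are hypothetical.

```python
# One-step-ahead prediction of (detected, corrected) counts with a
# small recurrent network trained on a single normalized sequence.
import torch
import torch.nn as nn

torch.manual_seed(0)
counts = torch.tensor([[12.,  8.], [22., 17.], [31., 26.], [39., 34.],
                       [45., 41.], [51., 47.], [56., 52.], [60., 57.],
                       [63., 61.], [66., 64.], [68., 67.], [70., 69.]])
x = counts[:-1].unsqueeze(0) / 70.0    # weeks 1..11 as input sequence
y = counts[1:].unsqueeze(0) / 70.0     # weeks 2..12 as targets

rnn = nn.RNN(input_size=2, hidden_size=8, batch_first=True)
head = nn.Linear(8, 2)
opt = torch.optim.Adam(list(rnn.parameters()) + list(head.parameters()),
                       lr=0.01)
for epoch in range(500):
    out, _ = rnn(x)                    # hidden states for each week
    pred = head(out)                   # map hidden state to count pair
    loss = nn.functional.mse_loss(pred, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final training MSE:", float(loss))
```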
Contact
Room 518, Siyuan Building, No. 55 Zhongguancun East Road, Haidian District, Beijing
qingpeihu@amss.ac.cn