Most accessed

  • article
    MA Jian
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.04.006

    Variable selection is of significant importance for classification and regression tasks in machine learning and statistical applications where both predictability and explainability are needed. In this paper, a Copula Entropy (CE) based method for variable selection is proposed, which uses CE-based ranks to select variables. The method is both model-free and tuning-free. Comparison experiments between the proposed method and traditional variable selection methods, such as distance correlation, the Hilbert-Schmidt independence criterion, stepwise selection, regularized generalized linear models and adaptive LASSO, were conducted on the UCI heart disease data. Experimental results show that the CE-based method selects the `right' variables more effectively and derives more interpretable results than traditional methods without sacrificing accuracy. It is believed that CE-based variable selection can help to build more explainable models.
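
    A rough illustrative sketch (not the paper's code) of the ranking idea: for continuous variables, copula entropy equals negative mutual information, so ranking by estimated mutual information with the target orders variables the same way. The dataset (a scikit-learn stand-in for the UCI heart data), the estimator and the top-k cutoff are all assumptions here.

      # Rank variables by estimated mutual information with the target; for
      # continuous data this ordering matches a copula-entropy-based ranking.
      import numpy as np
      from sklearn.datasets import load_breast_cancer      # stand-in for the UCI heart data
      from sklearn.feature_selection import mutual_info_classif

      X, y = load_breast_cancer(return_X_y=True)
      mi = mutual_info_classif(X, y, random_state=0)        # MI(X_j; y), j = 1..p
      order = np.argsort(mi)[::-1]                          # most informative first
      selected = order[:8]                                  # keep an (assumed) top-k subset
      print("selected feature indices:", selected)
      print("estimated MI of selected:", np.round(mi[selected], 3))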

  • article
    NIU Yong; LI Huapeng; LIU Yanghui; XIONG Shifeng; YU Zhou; ZHANG Riquan
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.01.007

    With the improvement of data collection and storage capacity, ultra-high dimensional data [9], that is, data whose dimensionality grows exponentially with the sample size, appear in many scientific fields. In this setting, penalized variable selection methods generally face three challenges, computational expediency, statistical accuracy and algorithmic stability, which limit their use for ultra-high dimensional problems. Fan and Lv [9] proposed ultra-high dimensional feature screening, and the past ten years have produced a wealth of results that have made it one of the most active research fields in statistics. This paper reviews work on ultra-high dimensional screening methods from four aspects: screening under parametric, non-parametric and semi-parametric model assumptions, model-free screening, and screening methods for special data. Finally, we briefly discuss the open problems of ultra-high dimensional screening methods and some future directions.
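
    A minimal sketch of the prototypical screening step in the spirit of Fan and Lv's sure independence screening: rank predictors by absolute marginal correlation with the response and keep the top d = n/log(n). The simulated data and sizes are illustrative assumptions, not from any paper in this list.

      import numpy as np

      rng = np.random.default_rng(0)
      n, p = 200, 5000                               # n samples, ultra-high dimension p
      X = rng.standard_normal((n, p))
      beta = np.zeros(p); beta[:5] = 2.0             # only the first 5 predictors are active
      y = X @ beta + rng.standard_normal(n)

      Xc = (X - X.mean(0)) / X.std(0)
      yc = (y - y.mean()) / y.std()
      marg_corr = np.abs(Xc.T @ yc) / n              # |marginal correlation| of each predictor
      d = int(n / np.log(n))                         # screening size suggested by Fan and Lv
      keep = np.argsort(marg_corr)[::-1][:d]
      print("active predictors retained:", sorted(set(range(5)) & set(keep.tolist())))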

  • article
    SUN Jiajing;MCCABE Brendan;CUI Wenquan;LI Guoxing
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2020.00.001

    The traditional PAR process (Poisson autoregressive process) assumes that the arrival process is an equi-dispersed Poisson process, with its mean equal to its variance, whereas the arrival process in the real DGP (data generating process) could be either over-dispersed, with variance greater than the mean, or under-dispersed, with variance less than the mean. This paper proposes using the Katz family of distributions to model the arrival process in the INAR process (integer-valued autoregressive process), yielding the INAR-Katz model, and deploys Monte Carlo simulations to examine the performance of the maximum likelihood (ML) and method of moments (MM) estimators of the INAR-Katz model. Finally, we use the INAR-Katz process to model count data on hospital emergency room visits for respiratory disease. The results show that the INAR-Katz model outperforms the Poisson and PAR(1) models and has great potential in empirical applications.
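
    A simulation sketch of the underlying INAR(1) mechanism, X_t = alpha o X_{t-1} + eps_t with binomial thinning "o". Poisson innovations are used here purely for illustration; the paper's point is precisely to replace them with Katz-family arrivals, which is not shown.

      import numpy as np

      rng = np.random.default_rng(1)
      alpha, lam, T = 0.6, 2.0, 500                  # thinning prob., innovation mean, length
      x = np.zeros(T, dtype=int)
      x[0] = rng.poisson(lam / (1 - alpha))          # start near the stationary mean
      for t in range(1, T):
          survivors = rng.binomial(x[t - 1], alpha)  # binomial thinning of previous count
          x[t] = survivors + rng.poisson(lam)        # add new (Poisson) arrivals
      print("sample mean:", x.mean(), " sample variance:", x.var())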

  • article
    QI Kai; YANG Hu
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.03.001

    Index tracking focuses on replicating or tracking the performance of a financial index and is a popular passive portfolio management strategy. Classical methods often consider full replication, consisting of all assets of an index. However, full replication often suffers from small and illiquid positions and high costs as the number of assets increases, so investors tend to prefer sparse portfolios. In stock markets, moreover, there are apparent group effects among stocks. This paper proposes the nonnegative sparse group lasso method for model selection and estimation with grouped, non-overlapping variables. We provide almost necessary and sufficient conditions for the variable selection and estimation consistency of the method in finite-dimensional group cases. To obtain the solutions of the model, we derive a computational method based on the coordinate descent algorithm. For index tracking, the nonnegative sparse group lasso outperforms other current methods with group effects, such as the nonnegative elastic net, in terms of tracking error.
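
    A minimal sketch of coordinate descent for the ungrouped special case, a nonnegative lasso min_{b>=0} 0.5*||y - Xb||^2 + lam*sum(b). The group penalty of the paper is not reproduced; the data, lam and iteration count are illustrative assumptions.

      import numpy as np

      def nonneg_lasso_cd(X, y, lam, n_iter=200):
          n, p = X.shape
          b = np.zeros(p)
          col_sq = (X ** 2).sum(axis=0)                  # x_j' x_j
          r = y - X @ b                                  # current residual
          for _ in range(n_iter):
              for j in range(p):
                  r += X[:, j] * b[j]                    # remove j's contribution
                  rho = X[:, j] @ r                      # partial correlation
                  b[j] = max(0.0, rho - lam) / col_sq[j] # soft-threshold, projected onto b >= 0
                  r -= X[:, j] * b[j]                    # restore residual
          return b

      rng = np.random.default_rng(2)
      X = rng.standard_normal((100, 20))
      w_true = np.zeros(20); w_true[[0, 3, 7]] = [1.0, 0.5, 2.0]
      y = X @ w_true + 0.1 * rng.standard_normal(100)
      print(np.round(nonneg_lasso_cd(X, y, lam=5.0), 2))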

  • article
    CHEN Bin;CHEN Mu-Fa;XIE Yingchao;YANG Ting;ZHOU Qin
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2022.04.001

    As the continuation and deepening of [1], this paper centers on economic equilibrium and uses mathematics as a tool to explore two themes in the economy: first, the ``pillar'' and ``bottleneck'' industries and the ``top'' and ``weak'' products in the economic system, that is, the ranking and stability analysis of products; second, forecasting and adjustment, and the optimal design and debugging of the economic structure.

  • article
    CAO Xuefei; LI Jihong; WANG Ruibo; NIU Qian; WANG Yu
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2022.03.001

    The bi-directional long short-term memory neural network model is widely used in natural language processing, but hyperparameter tuning of the model is difficult in practice. In this paper, we take the semantic role recognition task as an example, treat four candidate features (word, part of speech, target word and position) and two hyperparameters (the number of network layers and whether a CRF classifier is used) as factors in a robust design, and select the optimal combination of features and hyperparameters by setting levels for each factor and performing experiments. In particular, we perform 3x2 cross-validation on a small dataset to select the optimal configuration of the model based on the signal-to-noise ratio (SNR) of the robust design. We then analyze the influence of each factor on model performance through quantitative analysis so that the model gains a certain degree of interpretability. Moreover, to verify the superiority of our tuning method, we use the standard natural language processing split on a big dataset, adopt the traditional greedy strategy to select the optimal configuration, and compare it with our method on the test set. The results show that our method outperforms the traditional tuning method.
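
    A small sketch of the selection step only: repeated cross-validation accuracies for each factor combination are turned into a larger-the-better SNR, -10*log10(mean(1/y^2)), and the combination with the highest SNR is kept. The combinations and scores below are made-up placeholders, not the paper's experiments.

      import numpy as np

      def snr_larger_is_better(scores):
          scores = np.asarray(scores, dtype=float)
          return -10.0 * np.log10(np.mean(1.0 / scores ** 2))

      # hypothetical accuracies of two factor combinations over 6 CV folds
      combo_scores = {
          "word+pos, 2 layers, CRF":    [0.842, 0.851, 0.848, 0.839, 0.845, 0.850],
          "word only, 1 layer, no CRF": [0.812, 0.835, 0.798, 0.829, 0.805, 0.820],
      }
      for name, scores in combo_scores.items():
          print(f"{name:30s}  SNR = {snr_larger_is_better(scores):.2f} dB")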

  • article
    WANG Dan; PI Lin
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.02.001

    This paper establishes an empirical likelihood method to detect a change-point in the mean of a heavy-tailed sequence. Firstly, the empirical likelihood functions for the heavy-tailed observations are obtained under the null and alternative hypotheses. Secondly, the empirical likelihood ratio statistic is constructed from these functions, and its asymptotic distribution under the null hypothesis is given. Finally, Monte Carlo simulation is carried out to verify the correctness of the method. The simulation results show that our method performs well in detecting mean changes in heavy-tailed sequences.

  • article
    ZHU Jiaqing; ZHAO Shengli
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.01.001

    Order-of-addition designs with conditions are widely used in experiments, but references on this subject are rather limited. This paper gives the definition of the conditional main effect of a pair-wise ordering factor, studies the orthogonality of conditional main effects of pair-wise ordering factors, and proposes a model for order-of-addition designs with conditions. Finally, it illustrates methods for data analysis through two examples.

  • article
    BIAN Huabin; TONG Xinle; YAO Dingjun
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2022.01.002

    In the context of an aging population, longevity risk will place great economic pressure on the national endowment security system. How to measure and manage longevity risk has become a focus of research in recent years. Based on Chinese population mortality data and the Lee-Carter model, we introduce the DEJD (double exponential jump diffusion) model to describe the jump asymmetry of the time-series factor, and show that the DEJD model is more effective than the Lee-Carter model in fitting the time-series factor. In addition, we use the population mortality predicted by the DEJD model to price SM bonds in the Chinese market, providing an important reference for the promotion of SM bonds in China.
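
    A minimal Lee-Carter baseline fitted by SVD, log m_{x,t} = a_x + b_x * k_t, the stage that precedes modelling k_t (the paper replaces the usual random walk for k_t with a DEJD process, which is not shown). The mortality matrix below is simulated; real Chinese life-table data are assumed in the paper.

      import numpy as np

      rng = np.random.default_rng(3)
      ages, years = 20, 30
      true_a = np.linspace(-7.0, -2.0, ages)                 # log mortality rises with age
      true_k = -0.05 * np.arange(years)                      # improving mortality over time
      logm = true_a[:, None] + 0.8 * true_k[None, :] + 0.02 * rng.standard_normal((ages, years))

      a_x = logm.mean(axis=1)                                # age pattern
      U, S, Vt = np.linalg.svd(logm - a_x[:, None], full_matrices=False)
      b_x = U[:, 0] / U[:, 0].sum()                          # normalised so sum(b_x) = 1
      k_t = S[0] * Vt[0, :] * U[:, 0].sum()                  # mortality time index
      print("drift of k_t:", np.diff(k_t).mean())            # input to the time-series stage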

  • article
    LIU Weiqiang; ZHAN Mengya
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2020.06.006

    The paper considers the optimal dividend and capital injection strategies for the compound Poisson risk process in a random interest rate environment. In the model, the surplus process is ordinary while the interest rates are governed by an exogenous Markov chain. The problem is solved in two steps. First, we find the form of capital injection that the optimal strategy should follow. Then we look for the optimal solution in the restricted set of strategies with that particular capital injection form. We discuss the ``restricted'' and ``unrestricted'' cases and provide a possible solution for the ``unrestricted'' case when the claim distribution is exponential.

  • article
    LING Xiaoliang; GAO Yu; LI Ping
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.05.001

    In reliability engineering, the components of a coherent system may be dependent since they operate in a common random environment. This paper uses a multivariate distortion function to describe the dependence among component lifetimes and the structure function of the coherent system. Some sufficient conditions are given to compare two coherent systems under different random environments in the sense of the usual stochastic order, the failure rate order, the reversed failure rate order and the likelihood ratio order.

  • article
    SONG Yanan; ZHAO Xuejing
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.02.003

    The requirements of model accuracy and robustness make outlier detection and robust estimation increasingly important in model construction. In this paper, we first use the high-dimensional influence measure (HIM) based on marginal correlation and the high-dimensional discriminant method based on distance correlation (HDC) to detect outliers in the data set, dividing the points into two parts: normal points and abnormal points. Based on the initial normal point set, we construct a recovery method for points misclassified into the normal set, using a robust coefficient estimation method and the concept of a hyper-ellipsoid contour in the residual space. Thereafter, the outlier probability of each point in the abnormal point set is calculated to recover normal points misclassified into the abnormal set and thus detect the true outliers, further improving the accuracy of outlier detection. The performance of the proposed method is illustrated through simulations of three types of anomalous data under two predictor data structures, as well as three real examples.

  • article
    ZHANG Chi; TIAN Guoliang; LIU Yin
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.03.006

    In sociology, psychology, ecology, insurance, medicine and epidemiology, count data are often collected for specific studies. Since count data without a zero category or with excess zeros arise quite frequently, a series of zero-truncated and zero-inflated models have been developed to analyze such data, such as the zero-truncated/inflated Poisson distribution and the zero-truncated/inflated negative binomial distribution. It is necessary to make statistical inferences on unknown parameters when fitting data with these models, yet existing studies merely focus on one of them. In this paper, based on the stochastic representations of zero-truncated and zero-inflated distributions proposed in recent years, we construct a general method to obtain the maximum likelihood estimates of parameters under a unified framework, and review familiar discrete distributions. Moreover, zero-adjusted models are further proposed to extend the applications, aiming to provide researchers with appropriate and convenient methods for count data analysis. All methods are demonstrated by simulation studies and two real data analyses.

  • article
    LI Qi; ZHANG Jiujun
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.03.002

    In this paper, we propose distribution-free mixed exponentially weighted moving average-cumulative sum (EWMA-CUSUM) and mixed cumulative sum-exponentially weighted moving average (CUSUM-EWMA) control charts based on the Ansari-Bradley test for detecting changes in process scale without any distributional assumption on the underlying quality process. The performance of the proposed charts is measured in terms of the average run length and some other performance indexes. The effect of the phase I and phase II sample sizes on the phase II performance of the proposed charts is also investigated. The application of the new charts is illustrated by real data examples.
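
    A rough sketch of one distribution-free ingredient only: each incoming subgroup is compared with a phase I reference sample through the Ansari-Bradley statistic, and the statistics are smoothed by a plain EWMA. The in-control moments and the limit width L are crudely simulated here and are illustrative choices, not the paper's calibrated mixed EWMA-CUSUM charts.

      import numpy as np
      from scipy.stats import ansari

      rng = np.random.default_rng(4)
      reference = rng.standard_normal(100)                      # phase I sample
      m, n = 2000, 10                                           # calibration runs, subgroup size

      in_control = [ansari(rng.standard_normal(n), reference).statistic for _ in range(m)]
      mu0, sd0 = np.mean(in_control), np.std(in_control)

      lam, L = 0.2, 2.7                                         # EWMA weight, limit width (assumed)
      sigma_ewma = sd0 * np.sqrt(lam / (2 - lam))
      z, ucl, lcl = mu0, mu0 + L * sigma_ewma, mu0 - L * sigma_ewma

      for t in range(1, 31):
          scale = 1.0 if t <= 15 else 1.8                       # scale shift after sample 15
          ab = ansari(scale * rng.standard_normal(n), reference).statistic
          z = lam * ab + (1 - lam) * z
          if not (lcl <= z <= ucl):
              print(f"signal at sample {t}: EWMA = {z:.2f}")
              break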

  • article
    XIA Xiaoyu; YAN Litan
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.02.002

    Let $B^H=\{B_t^H,\,0\leq t\leq T\}$ be a fractional Brownian motion with Hurst index $H\in(0,1/2)\cup(1/2,1)$ and let $b$ be a Borel measurable function such that $|b(t,x)|\leq(1+|x|)f(t)$ for $x\in\mathbb{R}$ and $0<t<T$, where $f$ is a non-negative Borel function. In this note, we consider the existence of a weak solution for the stochastic differential equation of the form \[X_t=x+B_t^H+\int_0^t b(s,X_s)\,\mathrm{d}s.\] It is important to note that $f$ can be unbounded, such as $f(t)=(T-t)^{-\beta}$ and $f(t)=t^{-\alpha}$ for some $0<\alpha,\beta<1$. This question is not trivial for stochastic differential equations driven by fractional Brownian motion.
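
    Numerical illustration only, not part of the existence argument: an Euler scheme for X_t = x + B_t^H + int_0^t b(s, X_s) ds, with the fBm simulated from its covariance by Cholesky factorisation. The drift b(s, x) = (1 + |x|)(T - s)^(-1/2) is one admissible choice with an unbounded f and is assumed here.

      import numpy as np

      H, T, N, x0 = 0.7, 1.0, 500, 0.0
      dt = T / N
      t = dt * np.arange(1, N + 1)                               # grid points t_1, ..., t_N = T
      cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                   - np.abs(t[:, None] - t[None, :]) ** (2 * H)) # Cov(B^H_s, B^H_t)
      B = np.linalg.cholesky(cov) @ np.random.default_rng(5).standard_normal(N)
      dB = np.diff(np.concatenate(([0.0], B)))                   # fBm increments

      def b(s, x):
          return (1.0 + abs(x)) * (T - s) ** (-0.5)              # |b(s,x)| = (1+|x|) f(s), f unbounded

      X = np.empty(N + 1); X[0] = x0
      for i in range(N):
          s = i * dt                                             # left endpoint, s < T
          X[i + 1] = X[i] + b(s, X[i]) * dt + dB[i]
      print("X_T approx", X[-1])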

  • article
    YANG Xin; LI Bingyue; TIAN Ping
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.06.001

    In this paper, we consider the ultrahigh dimensional partially linear model, in which the dimension of the parametric vector is of exponential order in the sample size. Based on profile least squares and the regularization-after-retention method, we propose a new method to perform variable selection for the ultrahigh dimensional partially linear model. Under certain regularity conditions, the estimator is proved to achieve sign consistency. Numerical simulations and a real data analysis show that, compared with Lasso, SIS-Lasso and adaptive Lasso, the proposed method recovers the coefficient signs of the linear part better.

  • article
    OU Hui; XIE Zhendong; LI Junxiong; WANG Qiuling
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2020.06.004

    Taking flood catastrophe risk in China as the research background and aiming at the ``low frequency, high loss'' characteristics of flood losses, Bayesian inference is used to fit the loss distribution and to obtain the loss frequency distribution and loss amount distribution of floods in China. On this basis, the Monte Carlo simulation method is used to calculate the probability distribution of annual flood losses in China under different trigger conditions, and then the CAPM is used to study the pricing of flood catastrophe bonds in China. It is found that, under different trigger conditions, the probability that the corresponding trigger is activated changes as the trigger value gradually increases. Comparing the three types of bonds, it can be found that the price of a bond decreases as the principal guarantee ratio decreases and the principal loss ratio increases, that is, investment risk is directly proportional to return, which provides a reference for issuing flood catastrophe bonds in China.

  • article
    HU Siyi
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2022.04.002

    This paper studies an iterative method for estimating the parameters of the Gamma distribution, based on maximum likelihood estimation and an EM algorithm improved by a gradient-free spectral residual method, for grouped data, Type-I interval-censored data and Type-II interval-censored data, and proves the strong consistency of the algorithm. Simulation results show that the proposed iterative method greatly shortens the running time while maintaining accuracy, and that the mean square error of the estimates tends to zero as the sample size increases.

  • article
    ZHANG Xueling; LU Qiujun
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2020.06.003

    In many real-world problems, observations are described by approximate values due to fuzzy uncertainty, unlike probabilistic uncertainty, which has nothing to do with experimentation. The combination of statistical models and fuzzy set theory helps to improve the identification and analysis of complex systems. As an extension of statistical techniques, this study investigates the relationship between fuzzy multiple explanatory variables and a fuzzy response with numeric coefficients and a fuzzy random error term. We describe a parameter estimation procedure that carries out the least-squares method in a complete metric space of fuzzy numbers to determine the coefficients based on the extension principle. We show that the fuzzy least-squares estimators possess large-sample statistical properties, including asymptotic normality, strong consistency and confidence regions. The estimators are also examined via asymptotic relative efficiency with respect to the traditional least-squares estimators. Unlike the construction of the error term in Kim et al. [21], the proposed model is more reasonable since it avoids the problems of inconsistency in referring to fuzzy variables and of producing negative spreads. The experimental study verifies that the proposed fuzzy least-squares estimators are consistent with the theoretical identification for large-sample data sets and achieve better generalization for the single-variable model.

  • article
    ZOU Yuye; FAN Guoliang
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.02.008

    Wavelet estimation has long been a hot and difficult topic in statistics, with wide applications in data compression, turbulence analysis, image and signal processing, seismic exploration, etc. This paper reviews the application of wavelet estimation methods in mathematical statistics, focusing on the basic theory of wavelet estimation, the types of thresholds, and research achievements on wavelet estimation under complete data, incomplete data and longitudinal data. Owing to the complexity and incompleteness of the data, traditional research methods are no longer applicable; one needs to combine the characteristics of left-truncated data, right-censored data, missing data and longitudinal data with plug-in, calibration regression, imputation and inverse probability weighting methods. Nonlinear wavelet estimators of the estimated functions are constructed, the asymptotic expansion of the mean integrated squared error (MISE) of the nonlinear wavelet estimators is studied, and the asymptotic normality of the estimators is proved. The asymptotic expansions of the MISE remain valid for estimated functions with finitely many discontinuities, and the uniform convergence rate of the nonlinear wavelet estimators is verified in Besov spaces, which contain unsmooth functions; the wavelet method is also used to study the consistency and convergence rates of the parametric and nonparametric parts of semi-parametric regression models. Finally, potential development directions of the wavelet method are briefly discussed.
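
    A small sketch of the basic building block discussed in this survey: a nonlinear wavelet estimator obtained by soft-thresholding empirical wavelet coefficients with the universal threshold sigma*sqrt(2 log n), for complete equispaced data (the censored/missing-data refinements reviewed in the paper are not shown). The test function, wavelet and level are assumptions.

      import numpy as np
      import pywt

      rng = np.random.default_rng(6)
      n = 1024
      x = np.linspace(0, 1, n)
      f = np.piecewise(x, [x < 0.4, x >= 0.4], [lambda u: np.sin(4 * np.pi * u), 1.0])
      y = f + 0.2 * rng.standard_normal(n)                        # noisy observations

      coeffs = pywt.wavedec(y, "db4", level=5)
      sigma = np.median(np.abs(coeffs[-1])) / 0.6745              # robust noise estimate
      thr = sigma * np.sqrt(2 * np.log(n))                        # universal threshold
      denoised = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
      f_hat = pywt.waverec(denoised, "db4")[:n]
      print("empirical MISE:", np.mean((f_hat - f) ** 2))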

  • article
    HE Lei; LIN Lin
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.01.003

    A reasonable mortality model is the key to accurately measuring longevity risk. This paper considers the dependence of mortality among different age groups and the autocorrelation and heteroscedastic structure of mortality within each age group. Multivariate copulas and AR($n$)-LSV models are used to construct the mortality model, and VaR, TVaR and GlueVaR are used to measure longevity risk. The results show that the Copula-AR($n$)-LSV model characterizes mortality trends and fluctuations better than the Lee-Carter model; as mortality in China gradually declines, insurance companies will face increasing longevity risk in the future.

  • article
    ZHOU Maoyuan;CUI Ning; WANG Xiuli; JI Yonggang
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.02.007

    The novel coronavirus originated in Wuhan, Hubei and rapidly spread to 31 provinces and cities in China and many countries around the world, putting tremendous pressure on world development. When the COVID-19 epidemic will end is the issue that draws the most attention. This paper applies a weighted least squares regression model to study the development characteristics and trends of the number of people under medical observation in Hubei. We modify the predictions for the end time of the epidemic by combining the predicted results of the regression model with the actual situation, and measure the accuracy of the model predictions. Applying the above method to the epidemic data released by the Health Commission of Hubei Province, the following conclusions are reached: the day-on-day change rate of the number of people under medical observation showed a linear trend that lasted for 46 days; the number of people under medical observation in Hubei dropped to 0 on April 16; after revising the prediction results, the end time of the epidemic in Hubei Province should be no earlier than April 5 and no later than April 7. In addition, we find that the weighted linear regression prediction method is simple and has high prediction accuracy.
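
    A sketch of the weighted least-squares trend fit in the abstract: a straight line is fitted to the day-on-day change rate, with larger weights on more recent days, and the fitted line is extrapolated to zero. The simulated series and the weighting scheme are illustrative assumptions, not the released Hubei data.

      import numpy as np

      rng = np.random.default_rng(7)
      days = np.arange(1, 47)                                     # 46 days of observations
      rate = -0.002 * days + 0.05 + 0.01 * rng.standard_normal(days.size)

      w = days / days.sum()                                       # recent days weighted more
      A = np.column_stack([np.ones_like(days, dtype=float), days])
      sw = np.sqrt(w)
      coef, *_ = np.linalg.lstsq(A * sw[:, None], rate * sw, rcond=None)
      intercept, slope = coef
      print(f"fitted change rate = {intercept:.4f} + {slope:.5f} * day")
      print("predicted day the rate reaches 0:", -intercept / slope)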

  • article
    ZHAO Haiqing; PAN Lijun
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.02.006

    In this paper, the $p$th moment exponential stability of stochastic systems with Markov switching and several delayed impulses is investigated. It is assumed that the state variables at the impulse times may depend on the time-varying delays. Using stochastic analysis and impulsive techniques, several new stability criteria are derived. An example is provided to demonstrate the effectiveness of the obtained results.

  • article
    ZHANG Xiaoyue; ZHANG Meijuan
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.01.006

    We consider a branching random walk with bounded steps in random environments, where the particles are produced as a branching process with a random environment in time, and move independently as a random walk with bounded steps on $\mathbb{Z}$ with a random environment in location. We study the speed of the rightmost particle, conditionally on the survival of the branching process.

  • article
    DU Mingyue; SUN Jianguo
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.06.006

    Interval-censored failure time data are a general type of failure time or time-to-event data in which the failure time of interest is known or observed only to lie in an interval or window instead of being observed exactly. They occur in many fields, including demographic studies, epidemiological studies, medical or public health research and social science, and in different forms. A common and general set-up that naturally yields interval-censored data is a study with longitudinal or periodic follow-up, such as many clinical trials or observational studies. In this paper, after a brief discussion of the background and some commonly used models, we review recent advances, mainly from the last five years, on several important topics related to regression analysis, as well as some issues that need more research in the analysis of interval-censored data.

  • article
    ZHENG Mingliang
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.06.005

    The traditional accelerated life test scheme requires rough values of some model parameters to be given in advance, but fluctuations in these values affect the stability of the test scheme. Based on prior life test information, this paper aims to minimize the mean and variance of the asymptotic variance of the $p$-quantile life estimate under the normal stress level. Using maximum likelihood estimation theory and Nelson's cumulative exposure principle, an optimal robust design model for step-stress accelerated life test schemes with uncertain parameters under the Weibull distribution is established. The optimal robust design of a step-stress accelerated life test scheme for electrical connectors shows that, compared with the optimal design of the step-stress test scheme in the literature, the optimal robust design scheme is not sensitive to the uncertainty of the model parameters when the asymptotic variance of the median life estimate is basically the same; compared with the optimal design of the constant-stress accelerated life test scheme, when the statistical accuracy of the test data is basically the same, the number of samples required can be reduced by 1/5 and the test time can be reduced by about 1/4.

  • article
    SHI Wanlin
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.01.004

    We study the moderate deviation probability of the position of the rightmost particle in a branching Brownian motion and obtain its moderate deviation function. Chauvin and Rouault studied the large deviation probability for the rightmost position in a branching Brownian motion, and recently Derrida and Shi considered the lower deviation for the same model. By contrast, our main result is more extensive.

  • article
    ZHANG Hongbo
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2020.01.002

    In this paper we study a Geo/Geo/1 queue with T-IPH vacations, where T-IPH denotes the discrete-time phase-type distribution defined on a birth-and-death process with countably many states. Both multiple and single vacation strategies are considered. For each case, based on the system of stationary equations and using complex analysis methods, we first give the probability generating functions (PGFs) of the stationary distributions of the queue length and the sojourn time. Moreover, by analyzing the PGFs, recursive and asymptotic formulas for the additional queue length and the additional delay are also given. Finally, we give some numerical examples to show the effectiveness of the method.

  • article
    BAI Mingyan; PENG Jiangyan; JING Haojie
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2020.06.002

    We consider a discrete-time risk model with dependence structures, where the claim-sizes $\{X_n\}_{n\geq1}$ follow a one-sided linear process with independent and identically distributed (i.i.d.) innovations $\{\varepsilon_n\}_{n\geq1}$, and the innovations and financial risks form a sequence of independent and identically distributed copies of a random pair $(\varepsilon,Y)$ with dependent components. When the product $\varepsilon Y$ has a heavy-tailed distribution, we establish some asymptotic estimates of the ruin probabilities in this discrete-time risk model. Finally, we use a Crude Monte Carlo (CMC) simulation to verify our results.
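
    A crude Monte Carlo sketch of a finite-horizon ruin probability in a much simplified discrete-time risk model: with initial capital x, ruin occurs if the discounted aggregate net loss max_n sum_{i<=n} X_i * prod_{j<=i} Y_j exceeds x. Claims here are i.i.d. Pareto and independent of the discount factors; the linear-process and (epsilon, Y) dependence structures of the paper are not reproduced, and all parameters are assumptions.

      import numpy as np

      rng = np.random.default_rng(8)
      M, horizon, x = 100_000, 20, 30.0                 # simulation runs, periods, initial capital
      alpha = 1.5                                       # Pareto tail index (heavy tail)

      claims = (rng.pareto(alpha, size=(M, horizon)) + 1.0) - 3.5   # claim minus premium per period
      gross_return = np.exp(0.02 + 0.05 * rng.standard_normal((M, horizon)))
      disc_factor = np.cumprod(1.0 / gross_return, axis=1)          # prod_{j<=i} Y_j
      discounted_loss = np.cumsum(claims * disc_factor, axis=1)
      ruin_prob = np.mean(discounted_loss.max(axis=1) > x)
      print(f"estimated ruin probability: {ruin_prob:.4f}")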

  • article
    ZHANG Junjian; LI Zhihang
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.01.005

    Mean change-point detection is foundational in statistics. The trimmed empirical Euclidean likelihood ratio function is constructed based on the features of the change-point-in-mean model, and its explicit expression is derived. The null limit distribution of the test statistic is shown to be an extreme value distribution, and change-point detection is carried out accordingly. If a change-point exists, its location and the consistency of its estimator are discussed. Simulations and a real analysis of the Nile River data show that our proposed method is practical and effective.

  • article
    QIN Yongsong; ZHANG Ping
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.03.003

    We apply the empirical likelihood technique to propose a new class of estimators of the error variance in linear models. It is shown that the proposed estimators are asymptotically normally distributed with asymptotic variances not greater than those of the usual estimators of the error variance, and the closed forms of the asymptotic variances of the estimators are presented.

  • article
    CHEN Mu-Fa
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2022.02.001

    The paper consists of three parts. The first goes from the ergodic theorem for Markov chains to L.K. Hua's fundamental theorem on the optimization of economics. The second covers Hua's revised version and the author's modification of Hua's theorem. The third presents computational algorithms for the maximal eigenpair of the structure matrix in the economic system. Some examples are illustrated.

  • article
    YANG Zhaoqiang; TIAN Yougong
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2022.01.001

    This paper constructs the asset portfolio of a levered corporation by the structural approach. Because of the irreversibility and uncertainty of corporate bankruptcy, bankruptcy is treated as equivalent to a default on the bonds. Using the parabolic stochastic partial differential equation (SPDE) satisfied by the lookback option, the asset portfolio pricing model of the levered corporation is derived in a mixed jump-diffusion fractional Brownian motion (MJD-fBm) environment. When the levered corporation is in financial crisis, shareholders use capital injections to make up for operating losses and debt servicing; the probability of no default before bond maturity and the conditional distribution of the levered corporation's assets are then obtained, and the pricing formula for the lookback option is derived. In the end, a numerical example is given to illustrate the influence of different Hurst parameters, risk coefficients and stock asset weights on the default probability of the levered corporation.

  • article
    YU Xiaohang; HE Hua
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.02.004

    In this paper, we study the optimal insurance and consumption/investment strategies for a wage earner with an uncertain lifetime under partial information. The goal is to maximize the expected utility of a wage earner's consumption, bequest and terminal wealth. We obtain the optimal value functions and the corresponding optimal strategies under the power utility and logarithmic utility. Finally, we give a numerical example and derive the corresponding results.

  • article
    JING Ying;YANG Weiguo
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.02.005

    In this paper, we prove a strong limit theorem for Markov chains in a bi-infinite random environment. As a corollary, we obtain the strong law of large numbers for nonhomogeneous Markov chains. Finally, we derive the strong limit theorem for the harmonic mean of stochastic transition probabilities for Markov chains in a bi-infinite random environment.

  • article
    BO Lijun; ZHANG Tingting
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.03.004

    Based on an empirical analysis of open COVID-19 data in the United States, this paper proposes a stochastic dynamic infection model for US regions during the pandemic period of COVID-19. To decide when to ``open'' or ``restrict'' economic and social activities, we construct a multi-regional optimal prevention and control switching Nash equilibrium strategy based on maximizing the expected utility with mean-field interactions. We then consider the infection population model for a representative region and solve the corresponding optimal prevention and control switching strategy in the limit of infinitely many regions. Meanwhile, we prove that this strategy is an $\epsilon$-Nash equilibrium for finitely many regions as the number of regions tends to infinity. By comparing and analyzing the optimal switching boundaries under different process states, we give specific suggestions on when and how to adjust the prevention efforts.

  • article
    TIAN Yuzhu; TIAN Maozai
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2021.04.005

    Regression models are traditionally estimated using the least squares estimation (LSE) method, which may yield non-robust parameter estimates when the data exhibit non-normal features or outliers. Compared with the LSE approach, composite quantile regression (CQR) provides more robust estimates even in the presence of non-normal errors or outliers. Based on a composite asymmetric Laplace distribution (CALD), weighted composite quantile regression (WCQR) can be treated in the Bayesian framework. Regularization methods have been shown to be very effective for high-dimensional sparse regression models because they can simultaneously conduct variable selection and parameter estimation. In this paper, we combine Bayesian LASSO regularization with WCQR to fit linear regression models. Bayesian LASSO-regularized hierarchical models of WCQR are constructed and the conditional posterior distributions of all unknown parameters are derived for statistical inference. Finally, the developed methods are illustrated by Monte Carlo simulations and a real data analysis.
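
    A frequentist sketch of plain composite quantile regression for orientation: a single slope vector is shared across K quantile levels, each level keeping its own intercept, and the summed check losses are minimised numerically. The Bayesian, LASSO-regularised machinery of the paper (CALD likelihood, Gibbs sampling) is not reproduced; the data, K and the optimiser are assumptions.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(9)
      n, p, K = 200, 3, 5
      taus = np.arange(1, K + 1) / (K + 1)                        # quantile levels 1/6, ..., 5/6
      X = rng.standard_normal((n, p))
      beta_true = np.array([1.5, 0.0, -2.0])
      y = X @ beta_true + rng.standard_t(df=3, size=n)            # heavy-tailed errors

      def cqr_loss(theta):
          b, beta = theta[:K], theta[K:]
          resid = y[None, :] - b[:, None] - (X @ beta)[None, :]   # K x n residual matrix
          return np.sum(np.where(resid >= 0, taus[:, None] * resid,
                                 (taus[:, None] - 1) * resid))    # summed check losses

      res = minimize(cqr_loss, np.zeros(K + p), method="Powell")
      print("estimated shared slope:", np.round(res.x[K:], 2))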

  • article
    WANG Hao; CHENG Xiaoqiang; GONG Xiaojie
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2022.02.007

    This article considers the optimal dividend policy with delayed capital injections, assuming that the capital injection delay follows an exponential distribution. We aim to find the optimal dividend and capital injection strategies that maximize the utility of dividends and capital. Since the surplus process of the insurance company involves a mixed Poisson process, we use a stochastic differential equation to characterize the surplus process by adopting diffusion approximation techniques, and then obtain the value function under the utility criterion. When the value function is smooth, a quasi-variational inequality is obtained by using the dynamic programming principle. We consider the value function in three different regions (the dividend region, the continuation region and the capital injection region); through the boundary conditions, we derive the expression of the value function in each region and present the verification theorem. A numerical example is presented to illustrate the effects of the capital injection delay under different parameters.

  • article
    LIN Na;LIU Yuanyuan
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2022.04.005

    In this paper, we investigate algebraic and exponential transience for continuous-time Markov chains (CTMCs). Equivalence relations for these transience properties are revealed between CTMCs and their jump chains and dual processes. The results are further applied to derive criteria for these transience properties for general CTMCs, generalized Markov branching processes and birth-death processes.

  • article
    ZENG Weijia; ZHANG Riquan
    CHINESE JOURNAL OF APPLIED PROBABILITY AND STATIST. https://doi.org/10.3969/j.issn.1001-4268.2022.01.007

    Lasso is a variable selection method commonly used in machine learning and is suitable for regression problems with sparsity. Distributed computing is an important way to reduce computing time and improve efficiency when large sample sizes or massive amounts of data are stored on different agents. Based on an equivalent optimization model of the Lasso and the idea of alternating stepwise iteration, this paper constructs a distributed algorithm for Lasso variable selection and proves its convergence. Finally, the distributed algorithm constructed in this paper is compared with the cyclic coordinate descent and ADMM algorithms through numerical experiments. For sparse regression problems with large sample sets, the proposed algorithm has advantages in computing time and precision.
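
    A sketch of the consensus-ADMM baseline mentioned in the abstract (the comparison method, not the paper's alternating stepwise algorithm): the data are split across agents, each agent solves a local ridge-like subproblem, and a central soft-thresholding step enforces a shared sparse coefficient vector. The values of rho, lam and the iteration count are illustrative choices.

      import numpy as np

      def soft(v, k):
          return np.sign(v) * np.maximum(np.abs(v) - k, 0.0)

      rng = np.random.default_rng(10)
      n_agents, n_local, p = 5, 200, 30
      beta_true = np.zeros(p); beta_true[:4] = [2.0, -1.5, 1.0, 0.5]
      Xs = [rng.standard_normal((n_local, p)) for _ in range(n_agents)]
      ys = [X @ beta_true + 0.5 * rng.standard_normal(n_local) for X in Xs]

      rho, lam, iters = 10.0, 20.0, 300
      x = np.zeros((n_agents, p)); u = np.zeros((n_agents, p)); z = np.zeros(p)
      chol = [np.linalg.cholesky(X.T @ X + rho * np.eye(p)) for X in Xs]   # local factorisations
      Xty = [X.T @ y for X, y in zip(Xs, ys)]

      for _ in range(iters):
          for i in range(n_agents):                               # local (parallelisable) updates
              rhs = Xty[i] + rho * (z - u[i])
              x[i] = np.linalg.solve(chol[i].T, np.linalg.solve(chol[i], rhs))
          z = soft(x.mean(0) + u.mean(0), lam / (rho * n_agents)) # global soft-threshold step
          u += x - z                                              # dual ascent
      print("estimated coefficients (first 6):", np.round(z[:6], 2))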