Hausdorff Dimension of Range and Graph for General Markov Processes

CHEN Zhi-He

CHEN Zhi-He. Hausdorff Dimension of Range and Graph for General Markov Processes [J]. Chinese Journal of Applied Probability and Statistics, 2024, 40(6): 942-956. DOI: 10.12460/j.issn.1001-4268.aps.2024.2022133


Funds: Leshan Normal University Scientific Research Start-up Project for Introducing High-level Talents (RC2024001)


Chinese Library Classification (CLC): O211.6

Abstract: We establish the Hausdorff dimension of the graph of general Markov processes on $\mathbb{R}^d$, based on probability estimates of the processes staying in or leaving small balls in small time. In particular, our results indicate that, for symmetric diffusion processes (with $\alpha=2$) or symmetric $\alpha$-stable-like processes (with $\alpha\in (0, 2)$) on $\mathbb{R}^d$, it holds almost surely that $\dim_{\mathcal{H}}\mathrm{Gr}X([0, 1])= \mathbb 1_{\{\alpha<1\}}+(2-1/\alpha) \mathbb 1_{\{\alpha\ge1, d=1\}}+(d\wedge \alpha) \mathbb 1_{\{\alpha\ge1, d\ge2\}}.$ We also systematically prove the corresponding results for the Hausdorff dimension of the range of the processes.
1 Introduction

Random fractals are an active subject in modern probability theory. In particular, the fractal properties of sample paths of stochastic processes play an important role in random fractal theory, which can be traced back to Lévy's research [1] on Brownian motion in the 1940s. Since then, stable processes and other Lévy processes, with Brownian motion as a special case, have been widely studied; see [2-5] and the survey paper [6]. The study of fractal properties of sample paths of Lévy processes has been fruitful; see [7-12] as well as the book [13]. Among them, there are numerous significant works on the Hausdorff dimension of the range and the graph of Lévy processes. For example, the Hausdorff dimension of the range of the symmetric $ \alpha $-stable process was studied in [14-16], and the corresponding results for the graph of the symmetric stable process were obtained in [2]. Recently, the uniform Hausdorff and packing dimensions of the range of a large family of Markov processes were established in [17], and the Hausdorff dimension of the range and the graph of stable-like processes was considered in [18]. However, the Hausdorff dimension of the graph of general Markov processes seems to be unavailable in the literature. One purpose of the present paper is to fill this gap. In the following we first describe the assumptions and the setting of the paper, and then present our main result.

    Consider a strong Markov process $ X:=\{(X_t)_{t\ge 0}, ( \mathsf{P}^x)_{x\in \mathbb R^d}\} $ with state space $ { \mathbb R^d} $, which is defined on some probability space $ (\Omega, \; \mathcal F, \; \mathsf{P}) $. Denote the range of $ X $ by $ X([0, 1]):=\{x\in { \mathbb R^d}:x=X_t\; \mathrm{for\; some}\; t\in[0, 1]\}, $ and the graph of $ X $ by $ \mathrm{Gr} X([0, 1]):=\{(t, X_t)\in [0, 1]\times { \mathbb R^d}:\; t\in[0, 1]\}. $ We will investigate Hausdorff dimensions of the range and graph (random) sets above of the process $ X $ under the following assumption. For any $ x\in \mathbb R^d $ and $ r>0 $, let $ B(x, r):=\{y\in \mathbb R^d:|x-y|< r\} $.

    Assumption (A)

    (ⅰ) There exist constants $ c_1, \; \alpha_1>0 $ such that for all $ x\in \mathbb R^d $, $ t\in[0, 1] $ and $ r\in(0, 1) $,

    $$ \begin{equation} \mathsf{P}^x(X_t\in B(x, r)^c)\le c_1\frac{t}{r^{\alpha_1}}. \end{equation} $$ (1)

    (ⅱ) There exist constants $ c_2, \; \alpha_2>0 $ such that for all $ x\in \mathbb R^d $, $ t\in(0, 2] $ and $ r\in(0, 1) $,

    $$ \begin{equation} \mathsf{P}^x(X_t\in B(x, r))\le c_2 \left(\frac{r}{t^{1/\alpha_2}}\right)^d. \end{equation} $$ (2)

    Before stating our main result, we provide some comments on the assumption above.

Remark 1 (ⅰ) Without loss of generality, we can assume that $ \alpha_2\le \alpha_1 $. Note that $ \mathsf{P}^x(X_t\in B(x, r)^c)\le \mathsf{P}^x(\tau_{B(x, r)}\le t), $ where $ \tau_{B(x, r)}=\inf\{t>0:X_t\notin B(x, r)\} $. Assumption (A)(ⅰ) has been verified in a number of settings; see, e.g., [19; Chapter 5] for a large class of Feller processes on $ \mathbb R^d $.

(ⅱ) Suppose that the process $ X $ has a transition density (i.e., a heat kernel). Then, (1) is a probability estimate for the process $ X $ exiting the ball $ B(x, r) $, which is related to off-diagonal estimates of the heat kernel, while (2) is a probability estimate for the process $ X $ being in the ball $ B(x, r) $ at time $ t $, which is related to on-diagonal estimates of the heat kernel.

    The main result of the paper is as follows.

    Theorem 1 (ⅰ) Suppose that Assumption (A)(ⅰ) holds. Then, $ \mathsf{P} $-a.s., the Hausdorff dimension of the range for the strong Markov process $ X $ satisfies

    $$ \dim_{\mathcal{H}}X([0, 1])\le d\wedge \alpha_1, $$

    and the Hausdorff dimension of the graph for the strong Markov process $ X $ satisfies

    $$ \dim_{\mathcal{H}}\mathrm{Gr}X([0, 1])\le \mathbb 1_{\{\alpha_1<1\}}+(2-1/\alpha_1) \mathbb 1_{\{\alpha_1\ge1, \; d=1\}}+(d\wedge \alpha_1) \mathbb 1_{\{\alpha_1\ge1, \; d\ge2\}}. $$

    (ⅱ) Suppose that Assumption (A)(ⅱ) holds. Then, $ \mathsf{P} $-a.s., the Hausdorff dimension of the range of strong Markov process $ X $ satisfies

    $$ \dim_{\mathcal{H}}X([0, 1])\ge d\wedge \alpha_2, $$

    and the Hausdorff dimension of the graph for the strong Markov process $ X $ satisfies

    $$ \dim_{\mathcal{H}}\mathrm{Gr}X([0, 1])\ge \mathbb 1_{\{\alpha_2<1\}}+(2-1/\alpha_2) \mathbb 1_{\{\alpha_2\ge1, \; d=1\}}+(d\wedge \alpha_2) \mathbb 1_{\{\alpha_2\ge1, \; d\ge2\}}. $$
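For orientation, when the two exponents match ($\alpha_1=\alpha_2=\alpha$) the upper and lower bounds in Theorem 1 combine into the closed-form dimensions quoted in the abstract. The small helper below merely evaluates those closed forms; it is an illustration of the statement, not part of the proof.

```python
def range_dim(alpha, d):
    """Hausdorff dimension of X([0,1]) when alpha_1 = alpha_2 = alpha (Theorem 1)."""
    return min(d, alpha)

def graph_dim(alpha, d):
    """Hausdorff dimension of Gr X([0,1]) when alpha_1 = alpha_2 = alpha (Theorem 1)."""
    if alpha < 1:
        return 1.0
    if d == 1:
        return 2.0 - 1.0 / alpha
    return min(d, alpha)

# Brownian motion on R (alpha = 2, d = 1): range has dimension 1,
# graph has the classical dimension 3/2.
print(range_dim(2, 1), graph_dim(2, 1))
```

For instance, a subordinator-like regime $\alpha<1$ always gives graph dimension 1, while for $d\ge2$ the graph and the range have the same dimension $d\wedge\alpha$.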

The approach to Theorem 1 is partly motivated by arguments in the literature; see [5-6, 18] for details. To prove the upper bound of the Hausdorff dimension of the range of Markov processes, we make full use of the finite $p$-variation of the sample paths, while we adopt the density theorem via the sojourn time of the process to obtain the corresponding lower bound. In particular, according to Remark 1(ⅱ) one can see that assertion (ⅱ) for the range covers [20; Theorem 1.4], where the corresponding result for Feller processes is proved under on-diagonal heat kernel estimates.

To study the Hausdorff dimension of the graph of the process, we consider the space-time process $ (\mathcal{G}(t))_{t\ge 0}:=(t, X_t)_{t\ge 0} $ in $ \mathbb R^{d+1} $, so that the graph of $ X $ can be viewed as the range of $ (\mathcal{G}(t))_{t\ge 0} $. Since projecting the range onto the time axis or the space axis does not increase the Hausdorff dimension, we can use bounds for the Hausdorff dimension of the range to obtain lower bounds for the graph, and we further refine the lower bound when $ d=1 $ and $ \alpha_2>1 $ by applying the density theorem. For the upper bound of the Hausdorff dimension of the graph, we not only use the finite $p$-variation of the sample paths, but also apply the upper box-counting dimension when $ d=1 $ and $ \alpha_1>1 $. It should be emphasized that the statement of Theorem 1 is more delicate and more general than the known results in [2-3, 18].

The remainder of the paper is organized as follows. In Section 2, we provide some preliminaries concerning the Hausdorff dimension and related tools. Section 3 is devoted to the proof of Theorem 1. In the final section, we present two examples to illustrate the power of Theorem 1.

2 Preliminaries

In this section, we review the definition of the Hausdorff dimension and related tools, which will be used to prove Theorem 1. For more details, one can refer to [6, 21-22].

    For any $ \delta>0 $, let $ \Phi $ be the class of functions $ \varphi:(0, \delta)\rightarrow (0, \infty) $ that are right continuous, monotone increasing with $ \varphi(0+)=0 $, and satisfy that there exists a finite constant $ K>0 $ such that

    $$ \frac{\varphi(2s)}{\varphi(s)}\le K, \quad 0<s<\delta/2. $$

    Definition 1 For any $ \varphi\in\Phi $, the $ \varphi $-Hausdorff measure of $ E\subseteq { \mathbb R^d} $ is defined by

    $$ \begin{equation} \varphi -m(E)=\lim\limits_{\varepsilon\rightarrow0}\inf\left\{\sum\limits_{i=1}^\infty\varphi(2r_i): E\subseteq\bigcup\limits_{i=1}^{\infty}B(x_i, r_i), \, r_i<\varepsilon\right\}, \end{equation} $$ (3)

    and the Hausdorff dimension of $ E $ is defined by

    $$ \begin{equation} \dim_{\mathcal{H}}E=\inf\{\alpha>0:\; s^\alpha -m(E)=0\}. \end{equation} $$ (4)

Next, we introduce the definition of the box-counting dimension of a Borel set, which is often used to bound its Hausdorff dimension from above. For any $ \varepsilon>0 $ and any Borel set $ E\subseteq { \mathbb R^d} $, let $ N(E, \varepsilon) $ denote any one of the following quantities:

    (ⅰ) The smallest number of balls with radius $ \varepsilon $ that can cover $ E $;

    (ⅱ) The largest number of disjoint balls with radius $ \varepsilon $ and centers in $ E $;

    (ⅲ) The smallest number of $ d $-dimensional intervals with side length $ \varepsilon $ that cover $ E $;

(ⅳ) The number of binary $ d $-dimensional intervals with side length $ \varepsilon=2^{-n} $ that intersect $ E $;

    (ⅴ) The smallest number of balls with diameter less than $ 2\varepsilon $ that can cover $ E $.

    Definition 2 The upper and lower box-counting dimension of $ E\subseteq { \mathbb R^d} $ are defined by

    $$ \bar{\dim}_B\, E=\limsup\limits_{\varepsilon\rightarrow0}\frac{\log N(E, \varepsilon)}{-\log\varepsilon} $$

    and

    $$ \underline{\dim}_B\, E=\liminf\limits_{\varepsilon\rightarrow0}\frac{\log N(E, \varepsilon)}{-\log\varepsilon}, $$

    respectively. If $ \bar{\dim}_B\, E=\underline{\dim}_B\, E $, the common value is called the box-counting dimension of $ E $.

It is easy to verify that the upper and lower box-counting dimensions are the same no matter which of the five quantities above is taken for $ N(E, \varepsilon) $. The following lemma shows the relationship between the upper box-counting dimension and the Hausdorff dimension; see [21; Theorem 4.6] for the proof.

    Lemma 2 For any Borel set $ E\subseteq { \mathbb R^d} $, we have

    $$ \dim_{ \mathcal H}E\le \bar{\dim}_B\, E. $$
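Definition 2 and Lemma 2 can be illustrated numerically on a set whose dimension is known in closed form. The sketch below (not taken from the paper) counts binary intervals of side $2^{-n}$ meeting the middle-thirds Cantor set, whose box-counting and Hausdorff dimensions both equal $\log 2/\log 3\approx 0.6309$; exact rational arithmetic is used so that the intersection test is rigorous.

```python
from fractions import Fraction
from math import log2

def meets_cantor(lo, hi):
    """Does the closed interval [lo, hi] (Fractions) intersect the Cantor set?"""
    if hi < 0 or lo > 1:
        return False
    if lo <= 0 <= hi or lo <= 1 <= hi:
        return True  # 0 and 1 are Cantor points
    # C = C/3 ∪ (2/3 + C/3): rescale both branches; the interval length
    # triples at each step, so the recursion terminates.
    return meets_cantor(3 * lo, 3 * hi) or meets_cantor(3 * lo - 2, 3 * hi - 2)

def count_boxes(n, lo=Fraction(0), hi=Fraction(1)):
    """Number of binary intervals of side 2^-n meeting the Cantor set (item (iv))."""
    if not meets_cantor(lo, hi):
        return 0
    if hi - lo == Fraction(1, 2 ** n):
        return 1
    mid = (lo + hi) / 2
    return count_boxes(n, lo, mid) + count_boxes(n, mid, hi)

# slope of log2 N(2^-n) over several octaves approximates log 2 / log 3
est = (log2(count_boxes(16)) - log2(count_boxes(6))) / 10
```

The estimate `est` lands near $0.63$; the fit over ten octaves absorbs the bounded multiplicative oscillation of $N(E,2^{-n})$, in line with the fact that the box-counting limit ignores constant factors.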

    As indicated by Lemma 2, the upper bound of $ \dim_{ \mathcal H}E $ can be derived from the upper bound of $ \bar{\dim}_B\, E $. Another approach to bounding the Hausdorff dimension of the range for a stochastic process relies on its $ p $-variation.

    Definition 3 Let $ f:[0, 1]\rightarrow \mathbb R^d $ be a càdlàg function. For any $ p>0 $, the $ p $-variation of $ f $ is defined by

    $$ \begin{equation} V_p(f, [0, 1]):=\sup\sum\limits_{j=0}^{m-1}|f(t_{j+1})-f(t_j)|^p, \end{equation} $$ (5)

    where the supremum in (5) is taken over all finite partitions $ 0=t_0<t_1<\cdots<t_{m-1}<t_m=1 $, with $ m\ge1 $, of the interval $ [0, 1] $.
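For a path observed at finitely many times, the supremum in (5) restricted to partitions drawn from the sample points can be computed by a simple dynamic program (for $p\ge1$, merging adjacent same-sign increments can only increase the sum, and the recursion discovers the optimal merging automatically). This sketch is purely illustrative and is not used in the proofs.

```python
def p_variation(xs, p):
    """V_p of the path through the real samples xs, over all partitions
    drawn from the sample indices (both endpoints included); assumes p >= 1."""
    n = len(xs)
    best = [0.0] * n  # best[j]: optimal sum for the prefix path ending at index j
    for j in range(1, n):
        best[j] = max(best[i] + abs(xs[j] - xs[i]) ** p for i in range(j))
    return best[-1]

# zig-zag path 0 -> 1 -> 0 -> 1: each unit move contributes 1^2, so V_2 = 3
print(p_variation([0, 1, 0, 1], 2))
```

By Lemma 3 below, any $p$ with $V_p(f,[0,1])<\infty$ bounds $\dim_{\mathcal H}f([0,1])$ by $p\wedge d$, which is exactly how the $p$-variation enters the proof of Theorem 1.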

The following lemma goes back to [4; (3.3)]; see also [20; Remark 1.3].

    Lemma 3 If $ f:[0, 1]\rightarrow \mathbb R^d $ is a càdlàg function with finite $ p $-variation, then

    $$ \dim_{\mathcal{H}}f([0, 1])\le p\wedge d. $$

    The following lemma, known as the density theorem in the literature, was first introduced in [23]. It is highly effective for obtaining the lower bound of the Hausdorff dimension; see [6] and references therein for further details. For any Borel measure $ \mu $ on $ \mathbb R^d $ and $ \varphi\in\Phi $, the upper $ \varphi $-density of $ \mu $ at $ x\in \mathbb R^d $ is defined by

    $$ \bar{D}^{\varphi}_{\mu}(x):=\limsup\limits_{r\rightarrow 0}\frac{\mu(B(x, r))}{\varphi(2r)}. $$

    Lemma 4 Given $ \varphi\in\Phi $, there exists a positive constant $ K $ such that for any nonnegative Borel measure $ \mu $ on $ \mathbb R^d $ with $ 0<\|\mu\|:=\mu( \mathbb R^d)<\infty $ and every Borel set $ E\subseteq \mathbb R^d $,

    $$ \begin{equation} K^{-1}\mu(E)\inf\limits_{x\in E}\{\bar{D}^{\varphi}_{\mu}(x)\}^{-1}\le \varphi -m(E)\le K\|\mu\|\sup\limits_{x\in E}\{\bar{D}^{\varphi}_{\mu}(x)\}^{-1}. \end{equation} $$ (6)

3 Proof of Theorem 1

3.1 Hausdorff dimension of the range

The proof of the assertions on the Hausdorff dimension of the range of the process $ X $ stated in Theorem 1 is split into two parts. Roughly speaking, we prove the upper bound by the finite $p$-variation of the sample paths, and verify the lower bound by applying the density theorem. (Upper bound) First, we prove the upper bound of the Hausdorff dimension of the range of $ X $. By Assumption (A)(ⅰ) and [19; Theorem 5.19], for all $ p>\alpha_1 $ with $ p\ge1 $,

    $$ V_p(X, [0, 1])<\infty, \quad {\rm a.s., } $$

    where $ V_p(X, [0, 1]) $ is the $ p $-variation of the process $ X $ on $ [0, 1] $ given in (5).

Furthermore, since the sample paths of $ X $ are càdlàg, Lemma 3 yields that $ \dim_{\mathcal{H}}X([0, 1])\le p\wedge d $ a.s. Letting $ p\rightarrow \alpha_1 $ along a sequence gives

    $$ \dim_{\mathcal{H}}X([0, 1])\le \alpha_1\wedge d, \quad {\rm a.s.} $$

    The proof is complete.

    (Lower bound) Next, we prove the lower bound of the Hausdorff dimension of the range of $ X $. For any $ t_0\in[0, 1) $ and $ r\in(0, 1) $, define

    $$ T(t_0, r):=\int_{t_0}^{t_0+1} \mathbb 1_{\{|X_t-X_{t_0}|\le r\}}\, {{{\rm{d}}}} t. $$

    For simplicity, we write $ T(0, r) $ as $ T(r) $. By Assumption (A)(ⅱ), for all $ x\in \mathbb R^d $ and $ 0<r<1 $,

$$ \begin{equation*} \begin{split} \mathsf{E}^x[T(r)] & = \mathsf{E}^x\left[\int_{0}^{1} \mathbb 1_{\{|X_t-x|\le r\}}\, {{{\rm{d}}}} t\right]=\int_{0}^{1} \mathsf{P}^x\left(X_t\in B(x, r)\right)\, {{{\rm{d}}}} t\\ & \le \int_{0}^{r^{\alpha_2}}1\, {{{\rm{d}}}} t+c_2\int_{r^{\alpha_2}}^{1}\left(\frac{r}{t^{1/\alpha_2}}\right)^d\, {{{\rm{d}}}} t \le c_2r^{\alpha_2\wedge d}(1+\log r^{-1}), \end{split} \end{equation*} $$

where the first inequality uses (2) and in the last step the constant $ c_2 $ is enlarged if necessary.

Combining this with Fubini's theorem and the Markov property implies that for all $ n\ge 2 $,

    $$ \begin{align*} \mathsf{E}^x[T(r)^n] & = \mathsf{E}^x\left[\int_{0}^{1}\cdots\int_{0}^{1}\prod\limits_{j=1}^{n} \mathbb 1_{\{|X_{s_j}-x|\le r\}}\, {{{\rm{d}}}} s_1\cdots{{{\rm{d}}}} s_n\right]\\ &\le n!\int_{0\le s_1\le \cdots\le s_n\le1} \mathsf{E}^x\left[\prod\limits_{j=1}^{n-1} \mathbb 1_{\{|X_{s_j}-x|\le r\}} \mathbb 1_{\{|X_{s_n}-X_{s_{n-1}}|\le 2r\}}\right]{{{\rm{d}}}} s_1\cdots{{{\rm{d}}}} s_n\\ &=n!\int_{0\le s_1\le \cdots\le s_n\le1} \mathsf{E}^x\left\{ \mathbb 1_{\bigcap\nolimits_{j=1}^{n-1}\left\{|X_{s_j}-x|\le r\right\}} \mathsf{E}^{X_{s_{n-1}}}\left[ \mathbb 1_{\{|X_{s_n}-X_{s_{n-1}}|\le 2r\}}\right]\right\}{{{\rm{d}}}} s_1\cdots{{{\rm{d}}}} s_n\\ &\le n \mathsf{E}^x[T(r)^{n-1}]\sup\limits_{x\in \mathbb R^d} \mathsf{E}^x[T(2r)]\le n!\left(\sup\limits_{x\in \mathbb R^d} \mathsf{E}^x[T(2r)]\right)^n\\ &\le c_3^n n!r^{n(\alpha_2\wedge d)}(1+\log r^{-1})^n \end{align*} $$

    with $ c_3\ge c_2 $. Thus, for any $ u>0 $,

    $$ \begin{equation*} \begin{split} \mathsf{E}^x\left[ \text{e}^{uT(r)}\right]& =1+\sum\limits_{n=1}^{+\infty}\frac{u^n}{n!} \mathsf{E}^x\left[T(r)^n\right] \le 1+\sum\limits_{n=1}^{+\infty}u^nc_3^n r^{n(\alpha_2\wedge d)}(1+\log r^{-1})^n. \end{split} \end{equation*} $$

In particular, letting $ u=\frac{1}{2c_3 r^{\alpha_2\wedge d}(1+\log r^{-1})} $, the series above is dominated by $ \sum_{n\ge1}2^{-n} $, so $ \mathsf{E}^x\left[ \text{e}^{uT(r)}\right] $ is bounded by 2. This along with the Markov inequality gives us that for any $ \lambda>0 $ and $ 0<r<1 $,

    $$ \begin{align*} \mathsf{P}^x\left(T(r)\ge \lambda c_3r^{\alpha_2\wedge d}(1+\log r^{-1})\right) &= \mathsf{P}^x\left( \text{e}^{uT(r)}\ge \text{e}^{u\lambda c_3r^{\alpha_2\wedge d}(1+\log r^{-1})}\right)\\ &\le \text{e}^{-u\lambda c_3 r^{\alpha_2\wedge d}(1+\log r^{-1})} \mathsf{E}^x\left[ \text{e}^{uT(r)}\right] \le 2 \text{e}^{-\lambda/2}. \end{align*} $$

In particular, taking $ \lambda=m $ and $ r=2^{-m} $,

    $$ \sum\limits_{m=1}^{\infty} \mathsf{P}^x\left(T(2^{-m})\ge c_3 m 2^{-m(\alpha_2\wedge d)}(1+\log 2^m)\right) \le2\sum\limits_{m=1}^{\infty} \text{e}^{-m/2}<\infty. $$

    By the Borel-Cantelli lemma, almost surely there exists a random variable $ m_0:=m_0(\omega)\ge1 $ so that for any $ m\ge m_0 $,

$$ T(2^{-m})< c_3 m 2^{-m(\alpha_2\wedge d)}(1+\log 2^m). $$

    For all $ r $ small enough, let $ m $ be the unique integer such that $ 2^{-m-1}\le r<2^{-m} $. Then, for any $ \varepsilon>0 $, almost surely

$$ \begin{align*} \frac{T(r)}{r^{(\alpha_2\wedge d)-\varepsilon}\log r^{-1}} &\le\frac{T(2^{-m})}{m(\log2) 2^{(-m-1)[(\alpha_2\wedge d)-\varepsilon]}}\\ &=\frac{T(2^{-m})}{c_3 m 2^{-m(\alpha_2\wedge d)}(1+\log 2^m)}\frac{c_3 2^{-m(\alpha_2\wedge d)}(1+\log 2^m)}{(\log2) 2^{(-m-1)[(\alpha_2\wedge d)-\varepsilon]}}\\ &< \frac{c_3 2^{\alpha_2\wedge d}(1+\log 2^m)2^{-(m+1)\varepsilon}}{\log2}\le c_4:=c_4(\varepsilon), \end{align*} $$

    where $ c_4(\varepsilon) $ is independent of $ m $. Following the arguments above, we can obtain that there is a constant $ c_5>0 $ so that for any $ t_0\in[0, 1) $

    $$ \limsup\limits_{r\rightarrow0}\frac{T(t_0, r)}{r^{(\alpha_2\wedge d)-\varepsilon}\log r^{-1}}\le c_5, \quad {\rm a.s.} $$

By Lemma 4, applied to the occupation measure $ \mu(A):=\int_0^1 \mathbb 1_{\{X_t\in A\}}\, {\rm{d}} t $ of $ X $ on $ [0, 1] $ (whose upper $ \varphi_\varepsilon $-density at each point $ X_{t_0}\in X([0, 1]) $ is controlled by the sojourn times $ T(t_0, r) $ above), we obtain

    $$ \varphi_\varepsilon -m(X([0, 1]))\ge c_6\quad {\rm a.s.}, $$

where $ \varphi_\varepsilon(r)=r^{(\alpha_2\wedge d)-\varepsilon}\log r^{-1} $. In particular, the Hausdorff dimension of the range of $ X $ on $ [0, 1] $ satisfies

    $$ \dim_{\mathcal{H}}X([0, 1])\ge (\alpha_2\wedge d)-\varepsilon, \quad {\rm a.s.} $$

    Letting $ \varepsilon\rightarrow 0 $, we can prove the desired assertion.
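The elementary integral splitting used at the start of the lower-bound argument can be checked in closed form when $\alpha_2=2$ (Brownian scaling). The sketch below evaluates $\int_0^{r^{\alpha_2}}1\,{\rm d}t+\int_{r^{\alpha_2}}^{1}(r/t^{1/\alpha_2})^d\,{\rm d}t$ exactly for $d=1$ and $d=3$ and verifies the bound $c\,r^{\alpha_2\wedge d}(1+\log r^{-1})$; the constant $c=3$ is illustrative, not the paper's.

```python
import math

def sojourn_bound(r, d):
    """Exact value of the split integral for alpha_2 = 2:
    int_0^{r^2} 1 dt + int_{r^2}^1 (r / t^{1/2})^d dt, for d = 1 or d = 3."""
    if d == 1:
        return r**2 + 2 * r * (1 - r)        # ∫ r t^{-1/2} dt = 2 r sqrt(t)
    if d == 3:
        return r**2 + 2 * r**2 - 2 * r**3    # ∫ r^3 t^{-3/2} dt = -2 r^3 t^{-1/2}
    raise ValueError("only d = 1 and d = 3 are implemented")

# verify the bound c * r^{alpha_2 ∧ d} * (1 + log 1/r) with illustrative c = 3
for r in (0.5, 0.1, 0.01, 1e-4):
    for d in (1, 3):
        cap = 3 * r ** min(2, d) * (1 + math.log(1 / r))
        assert sojourn_bound(r, d) <= cap
```

The two cases show the dichotomy behind $r^{\alpha_2\wedge d}$: for $d<\alpha_2$ the tail integral is of order $r^d$, while for $d>\alpha_2$ the contribution near $t=r^{\alpha_2}$ dominates and gives $r^{\alpha_2}$.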

3.2 Hausdorff dimension of the graph

The proof of the assertions for the graph stated in Theorem 1 is similar to that for the range, but it requires significantly more effort in the one-dimensional case. We first prove upper bounds by applying the $p$-variation again together with the upper box-counting dimension, and then obtain lower bounds by the projection approach and the density theorem.

(Upper bound) (1) Let $ p>\alpha_1 $ with $ p\ge1 $. Consider the $ p $-variation of the space-time process $ (\mathcal{G}(t))_{t\ge 0}:=(t, X_t)_{t\ge 0} $ in $ \mathbb R^{d+1} $. Then, almost surely

    $$ \begin{align*} V_p(\mathcal{G}, [0, 1]) & :=\sup\limits_{0=t_0\le t_1\le\cdots\le t_{n}=1}\sum\limits_{i=0}^{n-1}|(t_{i+1}, X_{t_{i+1}})-(t_{i}, X_{t_{i}})|^p\\ &\le 2^{p-1}\sup\limits_{0=t_0\le t_1\le\cdots\le t_{n}=1}\sum\limits_{i=0}^{n-1}\left(|t_{i+1}-t_{i}|^p+|X_{t_{i+1}}-X_{t_{i}}|^p\right)\\ & \le c_1\left(\sup\limits_{0=t_0\le t_1\le\cdots\le t_{n}=1}\sum\limits_{i=0}^{n-1}|t_{i+1}-t_{i}|^p+\sup\limits_{0=t_0\le t_1\le\cdots\le t_{n}=1}\sum\limits_{i=0}^{n-1}|X_{t_{i+1}}-X_{t_{i}}|^p\right)\\ &\le c_1\left(1+V_p(X, [0, 1])\right)<\infty, \end{align*} $$

where the first inequality uses $ \sqrt{a^2+b^2}\le a+b $ together with the fact that $ (a+b)^q\le 2^{q-1}(a^q+b^q) $ for all $ q\ge 1 $ and $ a, b\ge 0 $, the second inequality follows by taking the suprema of the two sums separately, and in the last inequality we used Assumption (A)(ⅰ) and [19; Theorem 5.19] (see the arguments in the previous subsection). Then, by Lemma 3, applied to the $ \mathbb R^{d+1} $-valued path $ \mathcal{G} $, we have

$$ \dim_{\mathcal{H}}\mathcal{G}([0, 1])\le p\wedge (d+1), \quad {\rm a.s.} $$

    Thus, if $ 0<\alpha_1<1 $, then, letting $ p=1 $,

    $$ \dim_{\mathcal{H}} \mathrm{Gr} X([0, 1])\le1, \quad {\rm a.s.} $$

due to $ d\ge1 $. If $ \alpha_1\ge1 $, then, letting $ p\rightarrow \alpha_1 $, almost surely

$$ \dim_{\mathcal{H}} \mathrm{Gr} X([0, 1])\le\alpha_1\wedge (d+1)=\begin{cases}\alpha_1\wedge 2, &d=1;\\ \alpha_1\wedge (d+1), &d\ge2.\end{cases} $$

When $ d\ge2 $ and $ \alpha_1\le d $ (as in all the examples below, where $ \alpha_1\le 2 $), the right-hand side equals $ d\wedge\alpha_1 $; when $ d=1 $ and $ \alpha_1>1 $, the bound $ \alpha_1\wedge 2 $ is refined in part (2) below.

    (2) Next, we will refine the upper bounds for the case that $ d=1 $ and $ \alpha_1>1 $.

Let $ \{\lambda(j):j\ge0\} $ be the sequence of dyadic partitions $ \lambda(j)=\{\frac{k}{2^j}:k=0, \cdots, 2^j\} $ of $ [0, 1] $ into the subintervals $ I_{j, k}:=[\frac{k}{2^j}, \frac{k+1}{2^j}] $, $ k=0, \cdots, 2^{j}-1 $, all having the same length $ 2^{-j} $. Denote the oscillation of the process $ X $ on the dyadic interval $ I_{j, k} $ by

    $$ \mathrm{Osc}(X, I_{j, k}):=\sup\{|X_t-X_s|:s, t\in I_{j, k}\}=\sup\limits_{t\in I_{j, k}}X_t-\inf\limits_{s\in I_{j, k}}X_s. $$

Fix $ j\ge0 $. For each $ 0\le k\le 2^j-1 $, $ \mathrm{Gr} X(I_{j, k}) $ can be covered by at most $ 2^j \mathrm{Osc}(X, I_{j, k})+2 $ squares of side length $ 2^{-j} $. Let $ N(E, \delta) $ denote the smallest number of sets with diameter at most $ \delta $ needed to cover $ E $. Thus, for all $ p>\alpha_1 $, almost surely

$$ \begin{align*} N( \mathrm{Gr} X([0, 1]), 2^{-j}) & \le \sum\limits_{k=0}^{2^j-1}\left(2^j \mathrm{Osc}(X, I_{j, k})+2\right)\le 2^j\sum\limits_{k=0}^{2^j-1}V_p(X, I_{j, k})^{1/p}+2\cdot2^j\\ &\le 2^j\left(\sum\limits_{k=0}^{2^j-1}V_p(X, I_{j, k})\right)^{1/p}(2^j)^{1-1/p}+2\cdot2^j\\ &\le 2^{j(2-1/p)}V_p(X, [0, 1])^{1/p}+2\cdot2^j, \end{align*} $$

where we used $ \mathrm{Osc}(X, I_{j, k})\le V_p(X, I_{j, k})^{1/p} $ (by the definition of the $ p $-variation) and then the Hölder inequality. Therefore, by $ p>1 $,

$$ \dim_{\mathcal{H}} \mathrm{Gr} X([0, 1])\le\bar{\dim}_B\, \mathrm{Gr} X([0, 1])=\limsup\limits_{\delta\downarrow0}\frac{\log N( \mathrm{Gr} X([0, 1]), \delta)}{-\log\delta}\le 2-\frac{1}{p}, \quad {\rm a.s.}, $$

    where the first inequality follows from Lemma 2. Letting $ p\rightarrow \alpha_1 $, we obtain

    $$ \dim_{\mathcal{H}} \mathrm{Gr} X([0, 1])\le 2-\frac{1}{\alpha_1}, \quad {\rm a.s.} $$
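The oscillation-counting argument above can be illustrated numerically. The sketch below simulates a random-walk approximation of Brownian motion (so $\alpha_1=2$, $d=1$, predicted graph exponent $2-1/2=3/2$), forms the per-interval counts $2^j\,\mathrm{Osc}(X, I_{j,k})+2$, and fits the growth exponent over a few dyadic scales. It is a Monte Carlo illustration with deliberately loose tolerances, not a proof device.

```python
import math, random

def graph_box_exponent(n_pow=16, j_lo=5, j_hi=10, seed=0):
    """Estimate the box-counting exponent of the Brownian graph from the
    per-interval covering counts 2^j * Osc(X, I_{j,k}) + 2."""
    random.seed(seed)
    n = 2 ** n_pow
    x, path = 0.0, [0.0]
    step = math.sqrt(1.0 / n)          # Brownian scaling on [0, 1]
    for _ in range(n):
        x += random.gauss(0.0, step)
        path.append(x)

    def boxes(j):
        m = 2 ** (n_pow - j)           # samples per dyadic interval I_{j,k}
        total = 0
        for k in range(2 ** j):
            seg = path[k * m:(k + 1) * m + 1]
            total += int(2 ** j * (max(seg) - min(seg))) + 2
        return total

    return (math.log2(boxes(j_hi)) - math.log2(boxes(j_lo))) / (j_hi - j_lo)
```

With the default parameters the estimate lands near $3/2$, visibly below the trivial planar bound 2, matching the refined exponent $2-1/\alpha_1$.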

(Lower bound) (3) For any $ t>0 $, let $ \mathcal{G}(t):=(t, X_t) $. Then, the graph of $ X $ can be viewed as the range of $ \mathcal{G} $. Note that projecting the range of $ \mathcal{G} $ onto the time axis or the space axis does not increase the Hausdorff dimension. Thus, the lower bound for the dimension of the graph satisfies

    $$ \dim_{\mathcal{H}}\mathcal{G}([0, 1])\ge \dim_{\mathcal{H}}(X([0, 1]))\vee \dim_{\mathcal{H}}([0, 1])\ge\max\{d\wedge\alpha_2, 1\}, \quad {\rm a.s.}, $$

    where the last inequality follows from the lower bound of the Hausdorff dimension of the range. Thus, when $ 0<\alpha_2<1 $, we have

    $$ \dim_{\mathcal{H}} \mathrm{Gr} X([0, 1])\ge1, \quad {\rm a.s.}; $$

    when $ \alpha_2\ge1 $, almost surely

    $$ \dim_{\mathcal{H}} \mathrm{Gr} X([0, 1])\ge d\wedge\alpha_2=\begin{cases}1, &d=1;\\d\wedge\alpha_2, &d\ge2.\end{cases} $$

(4) In this part, we improve the lower bound in the case $ d=1 $ and $ \alpha_2>1 $. The idea of the proof is similar to that in Section 3.1, and the main difference is that we use a space-time sojourn time.

    For any $ t_0\in[0, 1) $, $ s\in[0, 1] $ and $ a>0 $, define

$$ \widetilde{T}_{t_0}(a, s):=\int_{t_0}^{t_0+s} \mathbb 1_{\{|X_t-X_{t_0}|\le a\}}\, {{{\rm{d}}}} t. $$

For simplicity, we write $ \widetilde{T}(a, s):=\widetilde{T}_{0}(a, s)=\int_{0}^{s} \mathbb 1_{\{|X_t-X_0|\le a\}}\, {{{\rm{d}}}} t $. In particular, by Assumption (A)(ⅱ) (recall that $ d=1 $ and $ \alpha_2>1 $ in this case), for any $ x\in \mathbb R $ and $ a, s>0 $,

$$ \begin{equation*} \begin{split} \mathsf{E}^x[\widetilde{T}(a, s)] & = \mathsf{E}^x\left[\int_{0}^{s} \mathbb 1_{\{|X_t-x|\le a\}}\, {{{\rm{d}}}} t\right] =\int_{0}^{s} \mathsf{P}^x(|X_t-x|\le a)\, {{{\rm{d}}}} t\\ &\le c_2\int_{0}^{s}at^{-1/\alpha_2}\, {{{\rm{d}}}} t \le c_3as^{1-1/\alpha_2}, \end{split} \end{equation*} $$

where the first inequality uses (2) with $ d=1 $, and the second uses $ \alpha_2>1 $.

For any $ n\ge2 $, applying Fubini's theorem and the Markov property,

$$ \begin{align*} \mathsf{E}^x[\widetilde{T}(a, s)^n] & = \mathsf{E}^x\left[\left( \int_{0}^{s} \mathbb 1_{\{|X_t-x|\le a\}}{{{\rm{d}}}} t\right)^n\right] =\int_{0}^{s}\cdots\int_{0}^{s} \mathsf{E}^x\left[\prod\limits_{i=1}^{n} \mathbb 1_{\{|X_{t_{i}}-x|\le a\}}\right]{{{\rm{d}}}} t_1\cdots{{{\rm{d}}}} t_n\\ &=n!\int_{0\le t_1\le\cdots\le t_n\le s} \mathsf{E}^x\left[\prod\limits_{i=1}^{n} \mathbb 1_{\{|X_{t_{i}}-x|\le a\}}\right]{{{\rm{d}}}} t_1\cdots{{{\rm{d}}}} t_n\\ & \le n!\int_{0\le t_1\le\cdots\le t_n\le s} \mathsf{E}^x\left[\prod\limits_{i=1}^{n-1} \mathbb 1_{\{|X_{t_{i}}-x|\le a\}} \mathbb 1_{\{|X_{t_n}-X_{t_{n-1}}|\le 2a\}}\right]{{{\rm{d}}}} t_1\cdots{{{\rm{d}}}} t_n\\ &=n!\int_{0\le t_1\le\cdots\le t_{n-1}\le s} \mathsf{E}^x\bigg\{ \mathbb 1_{\bigcap\nolimits_{i=1}^{n-1}\left\{|X_{t_i}-x|\le a\right\}}\\ & \qquad\qquad\qquad\qquad\qquad\cdot \int_{t_{n-1}}^{s} \mathsf{E}^{X_{t_{n-1}}}\left[ \mathbb 1_{\{|X_{t_n}-X_{t_{n-1}}|\le 2a\}}\right]{{{\rm{d}}}} t_n\bigg\}{{{\rm{d}}}} t_1\cdots{{{\rm{d}}}} t_{n-1}\\ & =(n-1)!\int_{0\le t_1\le\cdots\le t_{n-1}\le s} \mathsf{E}^x\bigg\{ \mathbb 1_{\bigcap\nolimits_{i=1}^{n-1}\left\{|X_{t_i}-x|\le a\right\}}\\ &\qquad\qquad\qquad\qquad\qquad\cdot n\cdot \mathsf{E}^{X_{t_{n-1}}}\left[\widetilde{T}(2a, s-t_{n-1})\right]\bigg\}\, {{{\rm{d}}}} t_1\cdots{{{\rm{d}}}} t_{n-1}\\ &\le n \mathsf{E}^{x}\left[\widetilde{T}(a, s)^{n-1}\right]\left(\sup\limits_{x\in \mathbb R^d} \mathsf{E}^x[\widetilde{T}(2a, s)]\right) \le n!\left(\sup\limits_{x\in \mathbb R^d} \mathsf{E}^x[\widetilde{T}(2a, s)]\right)^n\\ &\le c_3^n n! a^n s^{n(1-1/\alpha_2)} \end{align*} $$

with $ c_3 $ enlarged if necessary. Thus, for all $ u>0 $,

    $$ \begin{align*} \mathsf{E}^x[ \text{e}^{u\widetilde{T}(a, s)}] & =1+ \mathsf{E}^x\left[\sum\limits_{n=1}^{\infty}\frac{u^n\widetilde{T}(a, s)^n}{n!}\right] =1+\sum\limits_{n=1}^{\infty}\frac{u^n}{n!} \mathsf{E}^x[\widetilde{T}(a, s)^n]\\ &\le 1+\sum\limits_{n=1}^{\infty}c_3^n u^n a^n s^{n(1-1/\alpha_2)}. \end{align*} $$

    Letting $ u=\frac{1}{2c_3as^{1-1/\alpha_2}} $, $ \mathsf{E}^x[ \text{e}^{u\widetilde{T}(a, s)}] $ is bounded by 2. Applying this and the Markov inequality, we find that for any $ \lambda>0 $,

    $$ \begin{align*} \mathsf{P}^x(\widetilde{T}(a, s)\ge \lambda c_3as^{1-1/\alpha_2})&= \mathsf{P}^x( \text{e}^{u\widetilde{T}(a, s)}\ge \text{e}^{u\lambda c_3as^{1-1/\alpha_2}}) \le \text{e}^{-u\lambda c_3as^{1-1/\alpha_2}} \mathsf{E}^x[ \text{e}^{u\widetilde{T}(a, s)}]\\ &\le 2 \text{e}^{-\lambda/2}. \end{align*} $$

In particular, taking $ \lambda=m $ and $ a=s=2^{-m} $,

$$ \sum\limits_{m=1}^{\infty} \mathsf{P}^x(\widetilde{T}(2^{-m}, 2^{-m})\ge c_3m2^{-m(2-1/\alpha_2)})\le 2\sum\limits_{m=1}^{\infty} \text{e}^{-m/2}<\infty. $$

    Therefore, by the Borel-Cantelli lemma, almost surely there exists $ m_0:=m_0(\omega)\ge1 $ so that for all $ m\ge m_0 $,

$$ \widetilde{T}(2^{-m}, 2^{-m})< c_3m2^{-m(2-1/\alpha_2)}. $$

    For all $ a $ small enough, let $ m $ be the unique integer such that $ 2^{-m-1}\le a<2^{-m} $. Then,

    $$ \frac{\widetilde{T}(a, a)}{(\log a^{-1})a^{2-1/\alpha_2}}\le \frac{\widetilde{T}(2^{-m}, 2^{-m})}{c_3m2^{-m(2-1/\alpha_2)}} \cdot\frac{c_3 m 2^{-m(2-1/\alpha_2)}}{(\log2)m 2^{-(m+1)(2-1/\alpha_2)}}\le c_4, \quad {\rm a.s.}, $$

    where $ c_4 $ is independent of $ m $. Similarly, it can be proved that there exists a constant $ c_5>0 $ such that for any $ t_0\in[0, 1) $,

    $$ \limsup\limits_{a\rightarrow0}\frac{\widetilde{T}_{t_0}(a, a)}{(\log a^{-1})a^{2-1/\alpha_2}}\le c_5, \quad {\rm a.s.} $$

Hence, letting $ \mu $ denote the occupation measure of the space-time process $ (t, X_t)_{t\in[0, 1]} $, that is, $ \mu(A):=\int_0^1 \mathbb 1_{\{(t, X_t)\in A\}}\, {\rm{d}} t $ for Borel sets $ A\subseteq \mathbb R^{2} $, we obtain

$$ \limsup\limits_{a\rightarrow0}\frac{\mu([t_0, t_0+a]\times[X_{t_0}-a, X_{t_0}+a])}{\varphi(a)}\le c_5, \quad {\rm a.s., } $$

where $ \varphi(r)=r^{2-1/\alpha_2}\log r^{-1} $. Thus, by Lemma 4,

    $$ \varphi -m ( \mathrm{Gr} X([0, 1]))\ge c_6, \quad {\rm a.s.}, $$

    which implies that for all $ \alpha_2>1 $ and $ d=1 $,

    $$ \dim_{\mathcal{H}} \mathrm{Gr} X([0, 1])\ge 2-1/\alpha_2, \quad {\rm a.s.} $$

    The proof is complete.

4 Examples

In this section, we present two examples for which Assumption (A) holds, so that Theorem 1 applies. For functions $ f $ and $ g $, the notation $ f\asymp g $ means that there exist constants $ c_1, c_2, c_3, c_4>0 $ such that $ c_1f(c_2r)\le g(r)\le c_3f(c_4r) $, and the notation $ f\simeq g $ means that there exist constants $ c_1, c_2>0 $ such that $ c_1f(r)\le g(r)\le c_2f(r) $.

    Example 1 Consider a symmetric $ \alpha $-stable-like process $ X:=(X_t)_{t\ge0} $ on $ \mathbb R^d $ with the infinitesimal generator as follows:

    $$ \mathcal L u(x)=\lim\limits_{\varepsilon\rightarrow 0} \int_{\{y\in { \mathbb R^d}:|x-y|>\varepsilon\}}(u(x)-u(y))\frac{c(x, y)}{|x-y|^{d+\alpha}}\; {{{\rm{d}}}} y, $$

    where $ \alpha\in(0, 2) $, $ 0<c_{1}\le c(x, y)=c(y, x)\le c_{2}<\infty $. According to [24; Theorem 1.1], the transition density function $ p(t, x, y) $ of the process $ X $ satisfies

    $$ p(t, x, y)\simeq t^{-d/\alpha}\wedge \frac{t}{|x-y|^{d+\alpha}}, \quad x, y\in { \mathbb R^d}, t>0. $$

Note that for all $ x\in { \mathbb R^d} $, $ t>0 $ and $ r>0 $,

    $$ \begin{align*} \mathsf{P}^x(X_t\in B(x, r)^c) &=\int_{B(x, r)^c} p(t, x, y)\, {{{\rm{d}}}} y \le c_3\int_{B(x, r)^c} \frac{t}{|x-y|^{d+\alpha}}\, {{{\rm{d}}}} y\\ &= c_3\sum\limits_{i=1}^{\infty}\int_{B(x, 2^ir)\backslash B(x, 2^{i-1}r)} \frac{t}{|x-y|^{d+\alpha}}\, {{{\rm{d}}}} y\\ &\le c_3\sum\limits_{i=1}^{\infty}\int_{B(x, 2^ir)\backslash B(x, 2^{i-1}r)} \frac{t}{(2^{i-1}r)^{d+\alpha}}\, {{{\rm{d}}}} y\\ &\le c_3\sum\limits_{i=1}^{\infty}\frac{2^{id}t}{2^{(i-1)(d+\alpha)}r^{\alpha}}\le c_4 \frac{t}{r^{\alpha}}, \end{align*} $$

    which means that Assumption (A)(ⅰ) holds with $ \alpha_1=\alpha $, and

    $$ \begin{align*} \mathsf{P}^x(X_t\in B(x, r)) =\int_{B(x, r)} p(t, x, y)\, {{{\rm{d}}}} y \le c_5\int_{B(x, r)} t^{-d/\alpha}\, {{{\rm{d}}}} y \le c_6\frac{r^d}{t^{d/\alpha}} \end{align*} $$

shows that Assumption (A)(ⅱ) is satisfied with $ \alpha_2=\alpha $.

    Example 2 Consider a symmetric diffusion process $ X $ on $ { \mathbb R^d} $ with the infinitesimal generator as follows:

    $$ \mathcal L u(x)=\frac{1}{2}\sum\limits_{i, j=1}^{d}\frac{\partial}{\partial x_i}\left(a_{ij}(x)\frac{\partial u(x)}{\partial x_j}\right), $$

where $ A(x):=(a_{ij}(x))_{1\leq i, j\leq d} $ is a measurable $ d\times d $ matrix-valued function on $ { \mathbb R^d} $ that is uniformly elliptic and bounded, in the sense that there exists a constant $ c\ge 1 $ such that

    $$ c^{-1}\sum\limits_{i=1}^{d}\xi^{2}_{i}\le \sum\limits_{i, j=1}^{d}a_{ij}(x)\xi_{i}\xi_{j}\le c\sum\limits_{i=1}^{d}\xi^{2}_{i} $$

for any $ x $, $ \xi=(\xi_{1}, \cdots, \xi_{d})\in \mathbb R^{d} $. It is well known that the process $ X $ has a jointly Hölder continuous transition density function $ p(t, x, y) $, which enjoys the celebrated Aronson estimates (see [25]):

    $$ p(t, x, y)\asymp t^{-d/2}\exp\left\{-\frac{|x-y|^2}{t}\right\} $$

    for $ t>0 $ and $ x, y\in { \mathbb R^d} $. Clearly, for any $ x\in \mathbb R^d $ and $ t, r>0 $,

    $$ \begin{align*} \mathsf{P}^x(X_t\in B(x, r)) &\le c_1\int_{B(x, r)} t^{-d/2}\, {{{\rm{d}}}} y\le c_2\frac{r^d}{t^{d/2}}, \end{align*} $$

    which yields that Assumption (A)(ⅱ) holds with $ \alpha_2=2 $. On the other hand, for any $ x\in \mathbb R^d $ and $ t, r>0 $,

    $$ \begin{align*} \mathsf{P}^x(X_t\in B(x, r)^c) &\le c_1\int_{B(x, r)^c} t^{-d/2}\exp\left\{-\frac{c_2|x-y|^2}{t}\right\}\, {{{\rm{d}}}} y\\ &\le c_1\sum\limits_{n=1}^{\infty}\int_{B(x, 2^{n}r)\backslash B(x, 2^{n-1}r)} t^{-d/2}\exp\left\{-\frac{c_2|x-y|^2}{t}\right\}\, {{{\rm{d}}}} y\\ &\le c_1\sum\limits_{n=1}^{\infty} t^{-d/2}\exp\left\{-\frac{c_2(2^{n-1}r)^2}{t}\right\}(2^{n}r)^d\le c_3\exp\left\{-\frac{c_4 r^2}{t}\right\}\le \frac{c_5 t}{r^2}, \end{align*} $$

    and so Assumption (A)(ⅰ) holds with $ \alpha_1=2 $.
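In the model case of standard one-dimensional Brownian motion ($a(x)$ the identity, $d=1$), the exit probability is available in closed form, $ \mathsf{P}^x(|X_t-x|>r)=\operatorname{erfc}(r/\sqrt{2t}) $, and Chebyshev's inequality dominates it by $t/r^2$, exactly the shape of Assumption (A)(ⅰ) with $\alpha_1=2$. The snippet below is only a numerical sanity check of this one special case.

```python
import math

# exact 1-d Gaussian exit probability vs the Chebyshev bound t / r^2:
# P(|N(0, t)| > r) = erfc(r / sqrt(2 t)) <= Var / r^2 = t / r^2
for t in (0.01, 0.1, 1.0):
    for r in (0.05, 0.2, 0.9):
        exact = math.erfc(r / math.sqrt(2 * t))
        assert exact <= t / r**2  # so c_1 = 1 works in Assumption (A)(i)
```

Of course the Gaussian tail is far smaller than $t/r^2$ for $r^2\gg t$; the polynomial bound is all that Assumption (A)(ⅰ) requires.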

Remark 2 It can be observed that the arguments above verifying Assumption (A) rely solely on heat kernel estimates at small times and small spatial scales. It therefore follows from [26; Example 1.1] that Assumption (A) holds with $ \alpha_1=\alpha_2=\alpha $ for a large class of symmetric jump processes on $ \mathbb R^d $ with jumping kernel

    $$ J(x, y)\simeq \frac{1}{|x-y|^{d+\alpha}} \mathbb 1_{\{|x-y|\le 1\}} + \frac{1}{|x-y|^{d+\beta}} \mathbb 1_{\{|x-y|>1\}}, $$

    where $ \alpha\in (0, 2) $ and $ \beta\in (0, \infty) $. Therefore, Theorem 1 holds for these processes with $ \alpha_1=\alpha_2=\alpha $.
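    As an informal numerical illustration (not part of the proof), one can simulate a symmetric $ \alpha $-stable process on $ \mathbb R $ and estimate the box-counting dimension of its graph, which for these processes agrees with the Hausdorff dimension $ 2-1/\alpha $ when $ \alpha\ge 1 $. The sketch below is a rough Monte Carlo check under stated assumptions: increments are generated by the Chambers–Mallows–Stuck method, and the finite-resolution box count only loosely approximates the true dimension.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def stable_increments(alpha, n, dt):
        """Symmetric alpha-stable increments via the Chambers-Mallows-Stuck method."""
        U = rng.uniform(-np.pi / 2, np.pi / 2, n)
        W = rng.exponential(1.0, n)
        S = (np.sin(alpha * U) / np.cos(U) ** (1 / alpha)
             * (np.cos((1 - alpha) * U) / W) ** ((1 - alpha) / alpha))
        # self-similarity: increments over a step dt scale like dt^{1/alpha}
        return dt ** (1 / alpha) * S

    def graph_box_dimension(path, eps_exponents):
        """Estimate the box-counting dimension of the sampled graph {(t, X_t)}."""
        n = len(path)
        t = np.linspace(0.0, 1.0, n)
        logN, logInv = [], []
        for k in eps_exponents:
            eps = 2.0 ** (-k)
            # count dyadic boxes of side eps visited by the sampled graph
            boxes = set(zip((t / eps).astype(int), np.floor(path / eps).astype(int)))
            logN.append(np.log(len(boxes)))
            logInv.append(k * np.log(2.0))
        slope, _ = np.polyfit(logInv, logN, 1)
        return slope

    alpha = 1.5
    n = 2 ** 15
    path = np.cumsum(stable_increments(alpha, n, 1.0 / n))
    est = graph_box_dimension(path, range(3, 8))
    print(round(est, 2))  # theory predicts a dimension of 2 - 1/alpha for alpha >= 1
    ```

    The estimate is crude (it is biased by the finite sample size and the chosen range of scales), but it should land well away from both the trivial value 1 and the ambient value 2.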

    Similarly, by [27; Theorem 1.4], Assumption (A), and hence Theorem 1, also holds with $ \alpha_1=\alpha_2=2 $ for a large class of symmetric diffusions with jumps.
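    For reference, the dimension formula of Theorem 1 for the graph, $ \text{dim}_{\mathcal{H}}\mathrm{Gr}X([0, 1])= \mathbb 1_{\{\alpha<1\}}+(2-1/\alpha)\mathbb 1_{\{\alpha\ge1, d=1\}}+(d\wedge \alpha)\mathbb 1_{\{\alpha\ge1, d\ge2\}} $, can be tabulated with a small helper (the function name `graph_dim` is ours, purely for illustration):

    ```python
    def graph_dim(alpha, d):
        """Hausdorff dimension of Gr X([0,1]) given by Theorem 1, for symmetric
        diffusions (alpha = 2) or symmetric alpha-stable-like processes on R^d."""
        if alpha < 1:
            return 1.0          # indicator 1_{alpha < 1}
        if d == 1:
            return 2.0 - 1.0 / alpha   # case alpha >= 1, d = 1
        return float(min(d, alpha))    # case alpha >= 1, d >= 2

    # Brownian motion on R gives 3/2; alpha < 1 always gives 1.
    print(graph_dim(2, 1), graph_dim(0.5, 3), graph_dim(1.5, 2))  # 1.5 1.0 1.5
    ```

    In particular, for $ d=1 $ the dimension is continuous in $ \alpha $ at $ \alpha=1 $, where both expressions equal 1.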

    [1] LÉVY P. Processus Stochastiques et Mouvements Browniens [M]. Paris: Gauthier-Villars, 1948.
    [2] BLUMENTHAL R M, GETOOR R. The dimension of the set of zeros and the graph of a symmetric stable process [J]. Illinois J Math, 1962, 6(2): 308–316.
    [3] JAIN N, PRUITT W E. The correct measure function for the graph of a transient stable process [J]. Z Wahrscheinlichkeitstheorie verw Geb, 1968, 9(2): 131–138. doi: 10.1007/BF01851003
    [4] MCKEAN H P. Sample functions of stable processes [J]. Ann Math, 1955, 61(3): 564–579. doi: 10.2307/1969814
    [5] PRUITT W E, TAYLOR S J. Sample path properties of processes with stable components [J]. Z Wahrscheinlichkeitstheorie verw Geb, 1969, 12(4): 267–289. doi: 10.1007/BF00538749
    [6] XIAO Y M. Random fractals and Markov processes [C] // LAPIDUS M L, van FRANKENHUIJSEN M. Fractal Geometry and Applications: A Jubilee of Benoit Mandelbrot. American Mathematical Society, 2004: 261–338.
    [7] FRISTEDT B E. Sample function behavior of increasing processes with stationary, independent increments [J]. Pac J Math, 1967, 21(1): 21–33. doi: 10.2140/pjm.1967.21.21
    [8] MEERSCHAERT M M, XIAO Y M. Dimension results for sample paths of operator stable Lévy processes [J]. Stochastic Process Appl, 2005, 115(1): 55–75. doi: 10.1016/j.spa.2004.08.004
    [9] PRUITT W E. Some dimension results for processes with independent increments [C] // Stochastic Processes and Related Topics (Proc. Summer Res. Inst. on Statist. Inference for Stochastic Processes). Academic Press, 1975: 133–165.
    [10] TAYLOR S J. Sample path properties of processes with stationary independent increments [C] // Stochastic Analysis (a Tribute to the Memory of Rollo Davidson). Wiley, 1973: 387–414.
    [11] TAYLOR S J. The measure theory of random fractals [J]. Math Proc Cambridge Philos Soc, 1986, 100(3): 383–406.
    [12] WOLF J. Random fractals determined by Lévy processes [J]. J Theor Probab, 2010, 23: 1182–1203. doi: 10.1007/s10959-009-0224-8
    [13] SATO K I. Lévy Processes and Infinitely Divisible Distributions [M]. Cambridge: Cambridge University Press, 1999.
    [14] BLUMENTHAL R M, GETOOR R. Some theorems on stable processes [J]. Trans Amer Math Soc, 1960, 95(2): 263–273. doi: 10.1090/S0002-9947-1960-0119247-6
    [15] BLUMENTHAL R M, GETOOR R. A dimension theorem for sample functions of stable processes [J]. Illinois J Math, 1960, 4(3): 370–375.
    [16] BLUMENTHAL R M, GETOOR R. Sample functions of stochastic processes with stationary independent increments [J]. J Math Mech, 1961, 10(3): 493–516.
    [17] SUN X B, XIAO Y M, XU L H, et al. Uniform dimension results for a family of Markov processes [J]. Bernoulli, 2018, 24(4B): 3924–3951.
    [18] YANG X C. Hausdorff dimension of the range and the graph of stable-like processes [J]. J Theor Probab, 2018, 31: 2412–2431. doi: 10.1007/s10959-017-0784-y
    [19] BÖTTCHER B, SCHILLING R L, WANG J. Lévy Matters III. Lévy-Type Processes: Construction, Approximation and Sample Path Properties [M]. Cham: Springer, 2013.
    [20] KNOPOVA V P, SCHILLING R L, WANG J. Lower bounds of the Hausdorff dimension for the images of Feller processes [J]. Stat Probabil Lett, 2015, 97: 222–228. doi: 10.1016/j.spl.2014.11.027
    [21] HU D H, LIU L Q, HU X Y, et al. Introduction to Random Fractals (in Chinese) [M]. Wuhan: Wuhan University Press, 1995.
    [22] MATTILA P. Geometry of Sets and Measures in Euclidean Spaces: Fractals and Rectifiability [M]. Cambridge: Cambridge University Press, 1995.
    [23] ROGERS C A, TAYLOR S J. Functions continuous and singular with respect to a Hausdorff measure [J]. Mathematika, 1961, 8(1): 1–31. doi: 10.1112/S0025579300002084
    [24] CHEN Z Q, KUMAGAI T. Heat kernel estimates for stable-like processes on d-sets [J]. Stochastic Process Appl, 2003, 108(1): 27–62. doi: 10.1016/S0304-4149(03)00105-4
    [25] ARONSON D G. Non-negative solutions of linear parabolic equations [J]. Ann Scuola Norm Sup Pisa Cl Sci, 1968, 22(4): 607–694.
    [26] CHEN Z Q, KUMAGAI T, WANG J. Heat kernel estimates for general symmetric pure jump Dirichlet forms [J]. Ann Scuola Norm Sup Pisa Cl Sci, 2022, 23(3): 1091–1140.
    [27] CHEN Z Q, KUMAGAI T. A priori Hölder estimate, parabolic Harnack principle and heat kernel estimates for diffusions with jumps [J]. Rev Mat Iberoam, 2010, 26(2): 551–589.

Publication history
  • Received: 2022-09-28
  • Revised: 2023-02-21
  • Accepted: 2023-04-30
  • Available online: 2024-09-05
  • Published: 2024-12-29
