1 Introduction

The Strauss process is a point process that has a penalized density with respect to an underlying Poisson point process.

Given a point space \(S \subset \mathbb{R}^d\) of finite Lebesgue measure, say that \(X \subset S\) is a point process if it contains a finite number of points. Write \(X = \{x_1, \ldots, x_n\}\) if it contains \(n\) points.

A random point process \(X\) is a *Poisson point process of rate \(\lambda\)* if the number of points in \(X\) has a Poisson distribution with mean equal to \(\lambda\) times the Lebesgue measure of \(S\), and given the number of points in \(X\), each point is independently uniformly distributed over \(S\).

For a point process \(x \subset S\) and positive constant \(r\), let \(c_r(x)\) be the number of pairs of distinct points in \(x\) that are at most distance \(r\) apart. For \(\gamma \in [0, 1]\), let \[ f_{\gamma, r}(x) = \gamma^{c_r(x)}. \]

A point process whose unnormalized density with respect to the distribution of a Poisson point process of rate \(\lambda\) over \(S\) is \(f_{\gamma, r}\) is called a Strauss process [11].

Because \(\gamma \leq 1\), this density penalizes configurations that have many points within distance \(r\) of each other. The density can also be written as a product: \[ f(x) = \prod_{\{x_i, x_j\} \subseteq x} [\gamma \cdot \text{ind}(\text{dist}(x_i, x_j) \leq r) + 1 \cdot \text{ind}(\text{dist}(x_i, x_j) > r)]. \] Here \(\text{ind}\) is the usual indicator function that evaluates to 1 if its argument is true and to 0 otherwise.

Say that a density is a penalty factor density if it consists of factors each of which is at most 1. For such a density, the acceptance-rejection (AR) method can be used to generate samples exactly from the target distribution.

\(\texttt{AR-Strauss}(S, \gamma, r, \lambda)\)

  1. Draw \(X\) as a Poisson point process of rate \(\lambda\) over \(S\). Say \(X = \{x_1, \ldots, x_n\}\).

  2. For each \(1 \leq i < j \leq n\), generate \(U_{i, j}\) uniform over \([0, 1]\).

  3. If for all \(1 \leq i < j \leq n\), \[ U_{i, j} \leq [\gamma \cdot \text{ind}(\text{dist}(x_i, x_j) \leq r) + 1 \cdot \text{ind}(\text{dist}(x_i, x_j) > r)], \] then return \(X\).

  4. Else, let \(Y\) be the output of a call to \(\texttt{AR-Strauss}(S, \gamma, r, \lambda)\). Return \(Y\).
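To make the procedure concrete, here is a minimal Python sketch of \(\texttt{AR-Strauss}\) for \(S = [0, 1] \times [0, 1]\). It replaces the per-pair uniforms \(U_{i,j}\) with a single acceptance test, which is equivalent: the configuration is accepted exactly when every close pair independently survives, an event of probability \(\gamma^{c_r(X)}\). The function name is illustrative, not from any library.

```python
import numpy as np

def ar_strauss(lam, gamma, r, rng=None):
    """Acceptance-rejection sampler for the Strauss process on [0,1]^2."""
    rng = rng or np.random.default_rng()
    while True:
        # Step 1: draw a Poisson point process of rate lam over the unit square.
        n = rng.poisson(lam)
        x = rng.random((n, 2))
        # Count pairs of distinct points at most distance r apart.
        c = sum(np.hypot(*(x[i] - x[j])) <= r
                for i in range(n) for j in range(i + 1, n))
        # Steps 2-3: accept the whole configuration with probability gamma^c.
        if rng.random() <= gamma ** c:
            return x
```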

This method was the first perfect simulation method for generating exactly from the Strauss process; AR itself goes back to von Neumann [8]. More recently, further perfect simulation methods for this process have been developed. These include dominated coupling from the past (DCFTP) [5,6,7], spatial birth-death-swap (BDS) chains [1], and partial rejection sampling (PRS) [4].

See [2] and [3] for more detail and the theory underlying these methods. In particular, DCFTP, BDS, and PRS all rely on the process being locally stable, and so these will be referred to as local methods.

The running time of AR tends to be exponential in \(\lambda\) and the size of the point space \(S\). The running time of local methods tends to be polynomial in the size of \(S\) when \(\lambda\) lies below a certain threshold (the critical value), and exponential above that threshold. This makes generating from the Strauss process difficult for high values of \(\lambda\).

In this work, a new method for generating from the Strauss process is presented. Like AR, it has a running time that is exponential in \(\lambda\) and the size of the point space \(S\), but the rate of the exponential is much lower than that of AR, and lower than that of the local methods past their critical value.

The result is an algorithm that allows generation of Strauss processes over \((\lambda, S)\) pairs that were computationally infeasible before.

Strauss process over \(S = [0, 1] \times [0, 1]\) with \(\lambda = 200\)

The rest of the paper is organized as follows. Section 2 presents the new stitching algorithm, together with results on correctness and running time. Section 3 shows how stitching works in practice for the Strauss process and the Ising model. Section 4 gives numerical results on the running time. Section 5 concludes.

2 Acceptance rejection and stitching

Given an unnormalized penalty density \(h \leq 1\) with underlying measure \(\mu\), consider the general \(\texttt{AR}\) algorithm.

\(\texttt{AR}(h, \mu)\)

  1. Draw \(X\) from \(\mu\).

  2. Draw \(U\) uniformly from \([0, 1]\).

  3. If \(U \leq h(X)\), then return \(X\).

  4. Else, let \(Y\) be the output of a recursive call to \(\texttt{AR}(h, \mu)\). Return \(Y\).
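In code, the general routine is only a few lines. The sketch below is hedged: `sample_mu` (a draw from \(\mu\)) and `h` (the penalty density) are placeholders supplied by the caller.

```python
import random

def ar(h, sample_mu):
    """Generic acceptance-rejection: returns a draw with density h w.r.t. mu."""
    while True:
        x = sample_mu()              # draw X from mu
        if random.random() <= h(x):  # accept with probability h(X)
            return x                 # otherwise loop, i.e., recurse
```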

Let \(Z_h\) be the integral of \(h\) with respect to \(\mu\). If \(Z_h > 0\), then the output of \(\texttt{AR}(h, \mu)\) has density \(h\) with respect to measure \(\mu\). The number of times \(X\) is generated is geometrically distributed with mean \(1 / Z_h\).

The proof uses the Fundamental Theorem of Perfect Simulation (FTPS) [3], which gives two conditions under which the output of a probabilistic recursive algorithm \(\mathcal{A}\) comes from a target distribution.

The first condition (which is necessary) is that \(\mathcal{A}\) must terminate with probability 1.

Now consider an algorithm \(\mathcal{A'}\) where the recursive calls are replaced with oracles that generate from the correct distribution. If \(\mathcal{A'}\) has output that provably comes from the correct distribution, say that \(\mathcal{A'}\) is locally correct. Then the second condition is that \(\mathcal{A'}\) is locally correct.

First consider the probability that the algorithm accepts.
\[\begin{align*} \mathbb{P}(U \leq h(X)) &= \mathbb{E}[\text{ind}(U \leq h(X))] \\ &= \mathbb{E}[\mathbb{E}[\text{ind}(U \leq h(X)) | X]] \\ &= \mathbb{E}[h(X)] \\ &= \int h(x) \ d\mu(x) = Z_h. \end{align*}\] By assumption this integral value \(Z_h > 0\). Hence the number of times the algorithm generates \(X\) is a geometric random variable with a positive parameter, and so is finite with probability 1.

Now consider the algorithm \(\mathcal{A}'\), where in the last line the recursive call for \(Y\) is replaced by an oracle. Let \(W\) be the output of \(\mathcal{A}'\). Then for any measurable set \(A\), \[\begin{align*} \mathbb{P}(W \in A) &= \mathbb{P}(X \in A, U \leq h(X)) + \mathbb{P}(U > h(X), Y \in A) \\ &= \int_{x \in A} h(x) \ d\mu(x) + (1 - Z_h) \int_{y \in A} \frac{h(y)}{Z_h} \ d\mu(y) \\ &= \int_{y \in A} \frac{h(y)}{Z_h} \ d\mu(y) = \mathbb{P}(Y \in A). \end{align*}\] Therefore \(\mathcal{A}'\) has the correct output distribution, and so by the FTPS so does \(\texttt{AR}\).

Now suppose that our target density can be factored into three pieces: \[ h_S(x) = h_{S_1}(x \cap S_1) h_{S_2}(x \cap S_2) h_{S_1, S_2}(x), \] where \(h_{S_1, S_2}(x)\) is also a penalty density. Then it is possible to use the partition to create a faster algorithm.

\(\texttt{AR-split}(h, \mu, S)\)

  1. Partition \(S\) into \((S_1, S_2)\).

  2. Draw \(X_1\) using \(\texttt{AR-split}(h, \mu, S_1)\), draw \(X_2\) using \(\texttt{AR-split}(h, \mu, S_2)\).

  3. Draw \(U\) uniformly from \([0, 1]\).

  4. If \(U \leq h_{S_1, S_2}(X_1 \cup X_2)\) then return \(X_1 \cup X_2\).

  5. Else, let \(Y\) be drawn from \(\texttt{AR-split}(h, \mu, S)\). Return \(Y\).

Let \(Z_h\) be the integral of \(h\) with respect to \(\mu\) over \(S\). If \(Z_h > 0\), then the output of \(\texttt{AR-split}(h, \mu, S)\) has density \(h\) with respect to measure \(\mu\) over \(S\).

Let \(X_1\) be a draw from \(\mu\) over \(S_1\) and \(X_2\) a draw from \(\mu\) over \(S_2\). For \(U_1\) and \(U_2\) uniform over \([0, 1]\) and independent of \(U\), let \[\begin{align*} p_1 &= \mathbb{P}(U_1 \leq h_{S_1}(X_1)) \\ p_2 &= \mathbb{P}(U_2 \leq h_{S_2}(X_2)) \\ p_3 &= \mathbb{P}(U \leq h_{S_1, S_2}(X_1, X_2) \mid X_1 \sim h_{S_1}, X_2 \sim h_{S_2}). \end{align*}\]

Then the chance of accepting \(X_1 \cup X_2\) as a draw from \(h\) in line 4 is \(p_1 p_2 p_3 = Z_h > 0\). Hence \(p_1\), \(p_2\), and \(p_3\) are all positive, which means that the algorithm terminates with probability 1 in finite time.

Now consider \(\mathcal{A}'\), where the recursive calls are replaced with oracles. Hence in \(\mathcal{A}'\), \((X_1, X_2)\) has density \(h_{S_1}(x_1)h_{S_2}(x_2)\), and \(Y\) from line 5 has density \(h(x)\). Let \(W\) be the output of the algorithm. Then for any measurable set \(A\),

Let \[ p = \mathbb{P}(U \leq h_{S_1,S_2}(X_1, X_2) \mid X_1 \sim h_{S_1}, X_2 \sim h_{S_2}) = \int h_{S_1}(x_1)h_{S_2}(x_2)h_{S_1, S_2}(x_1, x_2) \ d[\mu(x_1) \times \mu(x_2)]. \] Then \[\begin{align*} \mathbb{P}(W \in A) &= \mathbb{P}(X_1 \cup X_2 \in A, U \leq h_{S_1, S_2}(X_1 \cup X_2)) + \mathbb{P}(U > h_{S_1, S_2}(X_1 \cup X_2), Y \in A) \\ &= \int_{x_1 \cup x_2 \in A} h_{S_1}(x_1)h_{S_2}(x_2) h_{S_1, S_2}(x_1, x_2) \ d[\mu(x_1) \times \mu(x_2)] + (1 - p)\mathbb{P}(Y \in A) \\ &= p\,\mathbb{P}(Y \in A) + (1 - p)\mathbb{P}(Y \in A) = \mathbb{P}(Y \in A). \end{align*}\]

Therefore \(\mathcal{A}'\) has the correct output distribution, and so by the FTPS so does \(\texttt{AR-split}.\)

Note that the expected number of times \(X\) is sampled in \(\texttt{AR}\) is \[ \frac{1}{p_1} \cdot \frac{1}{p_2} \cdot \frac{1}{p_3}. \] By using the recursion, the expected number of times \(X\) is sampled in \(\texttt{AR-split}\) is \[ \left[\frac{1}{p_1} + \frac{1}{p_2} \right] \frac{1}{p_3}. \]

Comparing the two, splitting should be used instead of plain AR whenever \(1/p_1 + 1/p_2 < 1/(p_1 p_2)\), that is, whenever \(p_1 + p_2 < 1\). This gives rise to the stitching algorithm, which applies splitting recursively as much as possible.
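For example, with \(p_1 = p_2 = 2/5\) and \(p_3 = 1/2\) (so \(p_1 + p_2 = 4/5 < 1\)), plain AR requires \[ \frac{1}{p_1 p_2 p_3} = \frac{1}{(2/5)(2/5)(1/2)} = 12.5 \] draws of \(X\) on average, while splitting requires only \[ \left[\frac{1}{p_1} + \frac{1}{p_2} \right] \frac{1}{p_3} = [2.5 + 2.5] \cdot 2 = 10. \]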

\(\texttt{AR-stitch}(h, \mu, S)\)

  1. Draw \(Z\) from \(\mu\), and independently \(U_1\) uniform over \([0,1]\). If \(U_1 \leq h_S(Z)\), then return \(Z\).

  2. Partition \(S\) into \((S_1, S_2)\).

  3. Draw \(X_1\) using \(\texttt{AR-stitch}(h, \mu, S_1)\), draw \(X_2\) using \(\texttt{AR-stitch}(h, \mu, S_2)\).

  4. Draw \(U_2\) uniformly from \([0, 1]\).

  5. If \(U_2 \leq h_{S_1, S_2}(X_1 \cup X_2)\) then return \(X_1 \cup X_2\).

  6. Else, let \(Y\) be drawn from \(\texttt{AR-stitch}(h, \mu, S)\). Return \(Y\).
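The recursion translates directly into code. The following is a generic, hedged sketch: `sample_mu(S)`, `h(x, S)`, `h_stitch(x1, x2)` (playing the role of \(h_{S_1, S_2}\)), and `split(S)` are all caller-supplied placeholders, and point processes are represented as Python sets so that union is `|`.

```python
import random

def ar_stitch(h, h_stitch, sample_mu, split, S):
    """AR with stitching: returns a draw with density h w.r.t. mu over S."""
    while True:
        # Line 1: try plain acceptance-rejection over all of S.
        z = sample_mu(S)
        if random.random() <= h(z, S):
            return z
        # Lines 2-3: partition S and draw each half recursively.
        S1, S2 = split(S)
        x1 = ar_stitch(h, h_stitch, sample_mu, split, S1)
        x2 = ar_stitch(h, h_stitch, sample_mu, split, S2)
        # Lines 4-5: stitch the halves, accepting with probability h_{S1,S2}.
        if random.random() <= h_stitch(x1, x2):
            return x1 | x2
        # Line 6: rejection; the while loop restarts the draw over S.
```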

The first result is that this procedure terminates in finite time with probability 1.

Let \(Z_h\) be the integral of \(h\) with respect to \(\mu\) restricted to \(S\). If \(Z_h > 0\), then \(\texttt{AR-stitch}(h, \mu, S)\) terminates in finite time with probability 1.

Let \(r(p)\) be the supremum of the expected number of draws from \(\mu\) over all choices of \(S_1, S_2, \lambda, r, \gamma\) for which the probability that \(Z\) is accepted in line 1 is \(p\).

Then let \(p_1\) be the probability that the draw is accepted in the recursive call over \(S_1\), \(p_2\) the acceptance probability over \(S_2\), and \(p_3\) the probability that \(X_1 \cup X_2\) is accepted in line 5.

Note \(p = p_1 p_2 p_3\). There is always at least one draw from \(\mu\) in any call. Then there is a \(1 - p\) chance of two recursive calls, followed by a \(1 - p_3\) chance of a third recursive call. Hence \[ r(p) \leq 1 + (1 - p)[r(p_1) + r(p_2) + (1 - p_3)r(p)]. \]

This holds for all \(p' \geq p\), so letting \(w = \sup_{p' \in [p, 1]} r(p')\) gives \[ w \leq 1 + (1 - p)[3w]. \]

Hence for \(p \geq 3 / 4\), \(w \leq 4\).

Similarly, if both \(p_1\) and \(p_2\) are at least \(3 / 4\), then \(r(p_1) + r(p_2) \leq 8\), so \[ r(p) \leq 1 + (1 - p)[8 + (1 - p_3) r(p)], \] and \(r(p) \leq [1 + 8(1 - p)] / p < \infty\).

Finally, if both \(p_1\) and \(p_2\) are at most \(3 / 4\), then \(p_1 \geq (4/ 3)p\) and \(p_2 \geq (4 / 3)p\). Hence \[ r(p) \leq 1 + (1 - p)[r(p_1) + r(p_2) + r(p)], \] and \[ r(p) \leq [1 + r(p_1) + r(p_2)] / p. \]

Each of \(r(p_1)\) and \(r(p_2)\) can then be expanded in the same way. This process is repeated until the probabilities rise above \(3 / 4\), which happens after splitting at most \(t = \lceil \log_{4 / 3}(1 / p) \rceil\) times. At that point, \(r(p)\) is bounded by a sum of at most \(3^t\) terms, each at most 4, and each carrying a coefficient of at most \((1 / p)^t\).
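For instance, if \(p = 1/2\), then \(t = \lceil \log_{4/3} 2 \rceil = 3\), and the resulting (crude) bound is \[ r(1/2) \leq 3^3 \cdot 4 \cdot 2^3 = 864. \]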

Therefore, the expected number of draws from \(\mu\) is finite, making the number of draws finite with probability 1.

Next is the correctness of the algorithm.

Let \(Z_h\) be the integral of \(h\) with respect to \(\mu\) on point space \(S\). If \(Z_h > 0\), then \(\texttt{AR-stitch}(h, \mu, S)\) terminates in finite time with probability 1, with output distributed as unnormalized density \(h\) with respect to \(\mu\) over \(S\).

The algorithm terminates with probability 1 by the previous fact. Hence by the FTPS, it is only necessary to show that the algorithm is locally correct.

Let \(\mathcal{A}'\) be the algorithm where lines 3 and 6 are replaced with oracles drawing from the correct distributions. In particular, \(Y\) is a draw from the target density \(h\) with respect to \(\mu\) over point space \(S\). For any measurable \(B\), let \[ p(B) = \int_{B} h(x) \ d\mu(x). \] If \(\Omega\) is the exponential space of all point processes [9], then \[ \mathbb{P}(Y \in B) = \frac{p(B)}{p(\Omega)}. \]

Fix a measurable set \(A\), and let \(W\) be the output of \(\mathcal{A}'\). Then the chance the output is in \(A\) can be broken down into the probabilities of three events \(e_1\), \(e_2\), and \(e_3\): \[\begin{align*} \mathbb{P}(W \in A) &= \mathbb{P}(e_1) + \mathbb{P}(e_2) + \mathbb{P}(e_3) \\ e_1 &= \{Z \in A, U_1 \leq h_S(Z)\} \\ e_2 &= \{U_1 > h_S(Z), X_1 \cup X_2 \in A, U_2 \leq h_{S_1, S_2}(X_1 \cup X_2)\} \\ e_3 &= \{U_1 > h_S(Z), U_2 > h_{S_1, S_2}(X_1 \cup X_2), Y \in A\}. \end{align*}\]

As in the earlier proof of the correctness of acceptance rejection, \[ \mathbb{P}(e_1) = \int_{x \in A} h_S(x) \ d\mu = p(A). \]

The chance that \(Z\) is not accepted is \[\begin{align*} \mathbb{P}(U_1 > h_S(Z)) &= 1 - \mathbb{P}(U_1 \leq h_S(Z)) \\ &= 1 - \int_{x \in \Omega} h_S(x) \ d\mu \\ & = 1 - p(\Omega). \end{align*}\]

Hence, using independence, \[\begin{align*} \mathbb{P}(e_2) &= \mathbb{P}(U_1 > h_S(Z)) \mathbb{P}(X_1 \cup X_2 \in A, U_2 \leq h_{S_1, S_2}(X_1 \cup X_2)) \\ &= (1 - p(\Omega))\int_{x_1 \cup x_2 \in A} h_{S_1}(x_1) h_{S_2}(x_2) h_{S_1, S_2}(x_1, x_2) \ d[\mu(x_1) \times \mu(x_2)] \\ &= (1 - p(\Omega))\int_{x \in A} h_{S}(x) \ d\mu \\ &= (1 - p(\Omega))p(A). \end{align*}\]

Similarly, using independence on the last term gives \[ \mathbb{P}(e_3) = (1 - p(\Omega))(1 - p(\Omega))\mathbb{P}(Y \in A). \]

Putting these terms together, and using \(p(A) = p(\Omega)\mathbb{P}(Y \in A)\), gives \[\begin{align*} \mathbb{P}(W \in A) &= p(A)[1 + (1 - p(\Omega))] + \mathbb{P}(Y \in A)(1 - p(\Omega))^2 \\ &= \mathbb{P}(Y \in A)[p(\Omega) + p(\Omega)(1 - p(\Omega)) + (1 - p(\Omega))^2] \\ &= \mathbb{P}(Y \in A), \end{align*}\] since the bracketed factor equals 1. This completes the proof of correctness.

In some cases, it is known in advance when \((h, S)\) is easy to sample from using AR; in that case, basic AR can be substituted for line 1.

\(\texttt{AR-stitch-base}(h, \mu, S)\)

  1. If \((h, S)\) is easy, draw \(Z\) using \(\texttt{AR}(h, \mu)\) over \(S\), and return \(Z\).

  2. Partition \(S\) into \(S_1\) and \(S_2\).

  3. Draw \(X_1\) using \(\texttt{AR-stitch-base}(h, \mu, S_1)\), draw \(X_2\) using \(\texttt{AR-stitch-base}(h, \mu, S_2)\).

  4. Draw \(U_2\) uniformly from \([0, 1]\).

  5. If \(U_2 \leq h_{S_1, S_2}(X_1 \cup X_2)\) then return \(X_1 \cup X_2\).

  6. Else, let \(Y\) be drawn from \(\texttt{AR-stitch-base}(h, \mu, S)\). Return \(Y\).

Let \(Z_h\) be the integral of \(h\) with respect to \(\mu\) on point space \(S\). If \(Z_h > 0\), then \(\texttt{AR-stitch-base}(h, \mu, S)\) terminates in finite time with probability 1 with output distributed as unnormalized density \(h\) with respect to \(\mu\) over \(S\).

The proof follows the same outline as for \(\texttt{AR-stitch}(h, \mu, S)\).

3 Stitching in practice

To illustrate how this works in practice, consider \(\texttt{AR-stitch-base}(h, \mu, S)\) for two important models.

3.1 The Strauss process

The density \(h\) and the reference measure \(\mu\) are both determined by the parameters \(\lambda\), \(r\), and \(\gamma\).

Given a process \(X_1\) over \(S_1\) and \(X_2\) over \(S_2\), \(h_{S_1, S_2}(X_1, X_2)\) is \(\gamma\) raised to the number of pairs of points \((x_1, x_2) \in X_1 \times X_2\) that are within distance \(r\) of each other. This gives the following algorithm.

\(\texttt{Strauss-AR-stitch-base}(\lambda, r, \gamma, S)\)

  1. If \(\lambda\) times the Lebesgue measure of \(S\) is at most 5, draw \(Z\) using \(\texttt{AR-Strauss}(S, \gamma, r, \lambda)\), and return \(Z\).

  2. Partition \(S\) into \(S_1\) and \(S_2\).

  3. Draw \(X_1\) using \(\texttt{Strauss-AR-stitch-base}(\lambda, r, \gamma, S_1)\), draw \(X_2\) using \(\texttt{Strauss-AR-stitch-base}(\lambda, r, \gamma, S_2)\).

  4. Draw \(U_2\) uniformly from \([0, 1]\). Let \(c\) be the number of pairs \((a, b) \in X_1 \times X_2\) such that \(\text{dist}(a, b) \leq r\).

  5. If \(U_2 \leq \gamma^c\) then return \(X_1 \cup X_2\).

  6. Else return a draw from \(\texttt{Strauss-AR-stitch-base}(\lambda, r, \gamma, S)\).
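A runnable Python sketch of this procedure follows, under the assumption that \(S\) is an axis-aligned rectangle, stored as a tuple `(x0, x1, y0, y1)` and split across its longer side; all names here are illustrative, not from any standard library.

```python
import numpy as np

def dist_le_r(a, b, r):
    """True if points a and b are at most distance r apart."""
    return np.hypot(a[0] - b[0], a[1] - b[1]) <= r

def strauss_stitch(lam, r, gamma, S, rng=None):
    """Draw from the Strauss process over the rectangle S = (x0, x1, y0, y1)."""
    rng = rng or np.random.default_rng()
    x0, x1, y0, y1 = S
    area = (x1 - x0) * (y1 - y0)
    while True:
        # Step 1: base case, plain AR when the expected number of points <= 5.
        if lam * area <= 5:
            n = rng.poisson(lam * area)
            pts = [(rng.uniform(x0, x1), rng.uniform(y0, y1)) for _ in range(n)]
            c = sum(dist_le_r(pts[i], pts[j], r)
                    for i in range(n) for j in range(i + 1, n))
            if rng.random() <= gamma ** c:
                return pts
            continue
        # Step 2: partition S across its longer side.
        if x1 - x0 >= y1 - y0:
            m = (x0 + x1) / 2
            S1, S2 = (x0, m, y0, y1), (m, x1, y0, y1)
        else:
            m = (y0 + y1) / 2
            S1, S2 = (x0, x1, y0, m), (x0, x1, m, y1)
        # Step 3: recursive draws over the two halves.
        X1 = strauss_stitch(lam, r, gamma, S1, rng)
        X2 = strauss_stitch(lam, r, gamma, S2, rng)
        # Steps 4-5: count close cross pairs, stitch with probability gamma^c.
        c = sum(dist_le_r(a, b, r) for a in X1 for b in X2)
        if rng.random() <= gamma ** c:
            return X1 + X2
        # Step 6: rejection; the while loop restarts over all of S.
```

For example, `strauss_stitch(200, 0.05, 0.5, (0, 1, 0, 1))` generates a draw over the unit square with \(\lambda = 200\) (the values of \(r\) and \(\gamma\) here are arbitrary).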

3.2 The Ising model

In the Ising model (and its extension, the Potts model), each vertex of a graph is given a label from a color set. In the ferromagnetic model, an edge of the graph is penalized by a factor of \(\exp(-2\beta)\) (for \(\beta > 0\) a constant) when its two endpoints are colored differently. The reference measure is just uniform over all colorings of the vertices.

In other words, the density for a graph with edge set \(E\) is \[ f(x) = \prod_{\{i, j\} \in E} [\exp(-2\beta)\text{ind}(x(i) \neq x(j)) + \text{ind}(x(i) = x(j))] \] with respect to counting measure over all colorings of the vertices of the graph.
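As a sanity check of this definition, the unnormalized density is straightforward to compute directly. The following hedged Python helper (the name is mine) represents a coloring as a dictionary from vertex to color:

```python
import math

def ising_density(x, edges, beta):
    """Unnormalized Ising density: a factor exp(-2*beta) per disagreeing edge."""
    disagree = sum(1 for (i, j) in edges if x[i] != x[j])
    return math.exp(-2 * beta * disagree)
```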

A partition of the vertex set of a graph is called a cut. The stitching step needs only check edges that cross the cut, meaning edges whose endpoints lie in different halves of the cut.

The density (and reference measure) are determined by \(\beta\), the edge set \(E\), and the vertex set \(V\).

\(\texttt{Ising-AR-stitch-base}(\beta, E, V)\)

  1. If \(V = \{v\}\), then choose \(X(v)\) uniformly from the set of colors and return \(X\).

  2. Partition \(V\) into \(V_1\) and \(V_2\).

  3. Draw \(X_1\) using \(\texttt{Ising-AR-stitch-base}(\beta, E, V_1)\), draw \(X_2\) using \(\texttt{Ising-AR-stitch-base}(\beta, E, V_2)\).

  4. Draw \(U_2\) uniformly from \([0, 1]\). Let \(c\) be the number of edges \(\{i, j\} \in E\) with \(i \in V_1\), \(j \in V_2\), and \(X_1(i) \neq X_2(j)\).

  5. If \(U_2 \leq \exp(-2\beta c)\) then return \((X_1, X_2)\).

  6. Else return a draw from \(\texttt{Ising-AR-stitch-base}(\beta, E, V)\).
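Here is a hedged Python sketch for the two-color case: the graph is an edge list, the vertex set is split by simple halving, and the names are illustrative rather than standard.

```python
import math
import random

def ising_stitch(beta, edges, vertices):
    """Draw a two-color Ising configuration on `vertices` by AR with stitching."""
    while True:
        # Step 1: base case of a single free spin.
        if len(vertices) == 1:
            return {vertices[0]: random.choice([0, 1])}
        # Step 2: cut the vertex set into two halves.
        half = len(vertices) // 2
        V1, V2 = vertices[:half], vertices[half:]
        # Step 3: recursive draws on each side of the cut.
        X1 = ising_stitch(beta, edges, V1)
        X2 = ising_stitch(beta, edges, V2)
        X = {**X1, **X2}
        # Step 4: count disagreeing edges that cross the cut.
        c = sum(1 for (i, j) in edges
                if i in X and j in X
                and (i in X1) != (j in X1)   # edge crosses the cut
                and X[i] != X[j])            # endpoints disagree
        # Steps 5-6: accept the stitched coloring with probability exp(-2*beta*c),
        # otherwise the while loop restarts the draw on all of `vertices`.
        if random.random() <= math.exp(-2 * beta * c):
            return X
```

For example, `ising_stitch(0.3, [(0, 1), (1, 2), (2, 3), (3, 0)], [0, 1, 2, 3])` draws a coloring of a 4-cycle.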

4 Numerical results

For the Ising model, there are already algorithms designed to sample perfectly for \(\beta\) both above and below the critical temperature [10]; therefore, only the Strauss process is considered here.

In order to evaluate the running time behavior of various algorithms for generating from the Strauss process, timings were run on \(S = [0, 1] \times [0, 1]\) for basic AR, the PRS method of [4], and stitching. AR is always exponential in \(\lambda\), while PRS stays polynomial in \(\lambda\) before moving to exponential in \(\lambda\) past a certain threshold. Stitching (represented as ARS in the figure) is also exponential in \(\lambda\), but at a much smaller rate.

Timings of Acceptance Rejection, Partial Rejection Sampling, and Acceptance Rejection with Stitching for varying \(\lambda\) over \(S = [0, 1] \times [0, 1]\).

A plot of the log of the timings shows the exponential nature of the growth.

Log of the timings of Acceptance Rejection, Partial Rejection Sampling, and Acceptance Rejection with Stitching for varying \(\lambda\) over \(S = [0, 1] \times [0, 1]\).

5 Conclusion

Stitching is a simple-to-implement algorithm whose running time, while exponential, grows at a rate far lower than that of either acceptance rejection or the various local methods. This enables generation from the Strauss process over parameter values and point spaces that were previously computationally infeasible.

References

[1] Huber, M. 2012. Spatial birth-death swap chains. Bernoulli. 18, 3 (2012), 1031–1041.

[2] Huber, M. 2011. Spatial point processes. Handbook of MCMC. S. Brooks et al., eds. Chapman & Hall/CRC Press. 227–252.

[3] Huber, M.L. 2015. Perfect Simulation. CRC Press.

[4] Guo, H. and Jerrum, M. 2019. Perfect simulation of the hard disks model by partial rejection sampling. Annales de l’Institut Henri Poincaré D (AIHPD). (2019).

[5] Kendall, W.S. 1995. Perfect simulation for the area-interaction point process. Proceedings of the Symposium on Probability Towards the Year 2000 (1995).

[6] Kendall, W.S. and Møller, J. 2000. Perfect simulation using dominating processes on ordered spaces, with application to locally stable point processes. Adv. Appl. Prob. 32, (2000), 844–865.

[7] Kendall, W.S. and Thönnes, E. 1999. Perfect simulation in stochastic geometry. Pattern recognition. 32, (1999), 1569–1586.

[8] von Neumann, J. 1951. Various techniques used in connection with random digits. Monte Carlo Method (Washington, D.C., 1951).

[9] Preston, C.J. 1977. Spatial birth-and-death processes. Bull. Inst. Int. Stat. 46, 2 (1977), 371–391.

[10] Propp, J.G. and Wilson, D.B. 1996. Exact sampling with coupled Markov chains and applications to statistical mechanics. Random Structures Algorithms. 9, 1–2 (1996), 223–252.

[11] Strauss, D.J. 1975. A model for clustering. Biometrika. 63, (1975), 467–475.