Table 11.1: 25 hands, and whether each contains exactly one pair

Hand | Card 1 | Card 2 | Card 3 | Card 4 | Card 5 | One pair? |
---|---|---|---|---|---|---|
1 | King ♢ | King ♠ | Queen ♠ | 10 ♢ | 6 ♠ | Yes |
2 | 8 ♢ | Ace ♢ | 4 ♠ | 10 ♢ | 3 ♣ | No |
3 | 4 ♢ | 5 ♣ | Ace ♢ | Queen ♡ | 10 ♠ | No |
4 | 3 ♡ | Ace ♡ | 5 ♣ | 3 ♢ | Jack ♢ | Yes |
5 | 6 ♠ | King ♣ | 6 ♢ | 3 ♣ | 3 ♡ | No |
6 | Queen ♣ | 7 ♢ | Jack ♠ | 5 ♡ | 8 ♡ | No |
7 | 9 ♣ | 4 ♣ | 9 ♠ | Jack ♣ | 5 ♠ | Yes |
8 | 3 ♠ | 3 ♣ | 3 ♡ | 5 ♠ | 5 ♢ | Yes |
9 | Queen ♢ | 4 ♠ | Queen ♣ | 6 ♡ | 4 ♢ | No |
10 | Queen ♠ | 3 ♣ | 7 ♠ | 7 ♡ | 8 ♢ | Yes |
11 | 8 ♡ | 9 ♠ | 7 ♢ | 8 ♠ | Ace ♡ | Yes |
12 | Ace ♠ | 9 ♡ | 4 ♣ | 2 ♠ | Ace ♢ | Yes |
13 | 4 ♡ | 3 ♣ | Ace ♢ | 9 ♡ | 5 ♡ | No |
14 | 10 ♣ | 7 ♠ | 8 ♣ | King ♣ | 4 ♢ | No |
15 | Queen ♣ | 8 ♠ | Queen ♠ | 8 ♣ | 5 ♣ | No |
16 | King ♡ | 10 ♣ | Jack ♠ | 10 ♢ | 10 ♡ | No |
17 | Queen ♠ | Queen ♡ | Ace ♡ | King ♢ | 7 ♡ | Yes |
18 | 5 ♢ | 6 ♡ | Ace ♡ | 4 ♡ | 6 ♢ | Yes |
19 | 3 ♠ | 5 ♡ | 2 ♢ | King ♣ | 9 ♡ | No |
20 | 8 ♠ | Jack ♢ | 7 ♣ | 10 ♡ | 3 ♡ | No |
21 | 5 ♢ | 4 ♠ | Jack ♡ | 2 ♠ | King ♠ | No |
22 | 5 ♢ | 4 ♢ | Jack ♣ | King ♢ | 2 ♠ | No |
23 | King ♡ | King ♠ | 6 ♡ | 2 ♠ | 5 ♣ | Yes |
24 | 8 ♠ | 9 ♠ | 6 ♣ | Ace ♣ | 5 ♢ | No |
25 | Ace ♢ | 7 ♠ | 4 ♡ | 9 ♢ | 9 ♠ | Yes |
% Yes | 44% |
11 Probability Theory, Part 2: Compound Probability
11.1 Introduction
In this chapter we will deal with what are usually called “probability problems” rather than the “statistical inference problems” discussed in later chapters. The difference is that for probability problems we begin with a knowledge of the properties of the universe with which we are working. (See Section 8.9 on the definition of resampling.)
We start with some basic problems in probability. To make sure we do know the properties of the universe we are working with, we start with poker, and a pack of cards. Working with some poker problems, we rediscover the fundamental distinction between sampling with and without replacement.
11.2 Introducing a poker problem: one pair (two of a kind)
What is the chance that the first five cards chosen from a deck of 52 (bridge/poker) cards will contain two (and only two) cards of the same denomination (two 3’s for example)? (Please forgive the rather sterile unrealistic problems in this and the other chapters on probability. They reflect the literature in the field for 300 years. We’ll get more realistic in the statistics chapters.)
We shall estimate the odds the way that gamblers have estimated gambling odds for thousands of years. First, check that the deck is a standard deck and is not missing any cards. (Overlooking such small but crucial matters often leads to errors in science.) Shuffle thoroughly until you are satisfied that the cards are randomly distributed. (It is surprisingly hard to shuffle well.) Then deal five cards, and mark down whether the hand does or does not contain a pair of the same denomination.
At this point, we must decide whether three of a kind, four of a kind or two pairs meet our criterion for a pair. Since our criterion is “two and only two,” we decide not to count them.
Then replace the five cards in the deck, shuffle, and deal again. Again mark down whether the hand contains one pair of the same denomination. Do this many times. Then count the number of hands with one pair, and figure the proportion (as a percentage) of all hands.
Table 11.1 has the results of 25 hands of this procedure.
In this series of 25 experiments, 44 percent of the hands contained one pair, and therefore 0.44 is our estimate (for the time being) of the probability that one pair will turn up in a poker hand. But we must notice that this estimate is based on only 25 hands, and therefore might well be fairly far off the mark (as we shall soon see).
This experimental “resampling” estimation does not require a deck of cards. For example, one might create a 52-sided die, one side for each card in the deck, and roll it five times to get a “hand.” But note one important part of the procedure: No single “card” is allowed to come up twice in the same set of five spins, just as no single card can turn up twice or more in the same hand. If the same “card” did turn up twice or more in a dice experiment, one could pretend that the roll had never taken place; this procedure is necessary to make the dice experiment analogous to the actual card-dealing situation under investigation. Otherwise, the results will be slightly in error. This type of sampling is “sampling without replacement,” because each card is not replaced in the deck prior to dealing the next card (that is, prior to the end of the hand).
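The die procedure is easy to sketch in code. Here is an illustrative Python sketch (the chapter's own code is in R; the function name `deal_hand_by_die` is invented for this illustration): each face of the hypothetical 52-sided die stands for one distinct card, and any roll that repeats a card already in the hand is treated as if it never took place, which makes the procedure equivalent to dealing without replacement.

```python
import random

# Each face of the hypothetical 52-sided die stands for one distinct card.
die_faces = list(range(52))

def deal_hand_by_die():
    # Roll the "die" until we have five distinct "cards"; if a card has
    # already come up, pretend that roll never happened.
    hand = []
    while len(hand) < 5:
        roll = random.choice(die_faces)
        if roll not in hand:
            hand.append(roll)
    return hand

hand = deal_hand_by_die()
```

Because repeated rolls are discarded, no "card" can appear twice in the same hand, just as in the physical deal.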
11.3 A first approach to the one-pair problem with code
We could also approach this problem using random numbers from the computer to simulate the values.
Let us first make some numbers from which to sample. We want to simulate a deck of playing cards analogous to the real cards we used previously. We don’t need to simulate all the features of a deck, but only the features that matter for the problem at hand. In our case, the feature that matters is the face value. We require a deck with four “1”s, four “2”s, etc., up to four “13”s, where 1 is an Ace, and 13 is a King. The suits don’t matter for our present purposes.
We first make a vector to represent the face values in one suit.
one_suit <- 1:13
one_suit
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13
We have the face values for one suit, but we need the face values for a whole deck of cards — four suits. We do this by making a new vector that consists of four repeats of one_suit:
# Repeat the one_suit vector four times
deck <- rep(one_suit, 4)
deck
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 1 2 3 4 5 6 7 8 9 10 11 12
[26] 13 1 2 3 4 5 6 7 8 9 10 11 12 13 1 2 3 4 5 6 7 8 9 10 11
[51] 12 13
11.4 Shuffling the deck with R
At this point we have a complete deck in the variable deck. But that "deck" is in the same order as a new deck of cards. If we do not shuffle the deck, the results will be predictable. Therefore, we would like to select five of these 52 "cards" at random. There are two ways of doing this. The first is to use the sample function in the familiar way, to choose 5 values at random from this strictly ordered deck. We want to draw these cards without replacement (of which more later). Without replacement means that once we have drawn a particular value, we cannot draw that value a second time — just as you cannot get the same card twice in a hand when the dealer deals you a hand of five cards.
As you saw in Section 8.14, the default behavior of sample is to sample without replacement, so simply omit the replace=TRUE argument to sample to get sampling without replacement:
# One hand, sampling from the deck without replacement.
hand <- sample(deck, size=5)
hand
[1] 6 10 12 11 12
The above is one way to get a random hand of five cards from the deck. Another way is to use sample to shuffle the whole deck of 52 "cards" into a random order, just as a dealer would shuffle the deck before dealing. Then we could take — for example — the first five cards from the shuffled deck to give a random hand. See Section 8.14 for more on this use of sample.
# Shuffle the whole 52 card deck.
shuffled <- sample(deck)
# The "cards" are now in random order.
shuffled
shuffled
[1] 8 13 5 4 12 9 5 7 11 2 13 2 6 8 8 6 10 9 12 9 11 7 13 11 12
[26] 7 10 4 2 4 7 1 3 5 1 9 2 4 6 1 8 10 3 13 5 11 12 3 1 10
[51] 6 3
Now we can get our hand by taking the first five "cards" from the shuffled deck:
# Select the first five "cards" from the shuffled deck.
hand <- shuffled[1:5]
hand
[1] 8 13 5 4 12
You have seen that we can use one of two procedures to get a random sample of five cards from deck, drawn without replacement:
- Using sample with size=5 to take the random sample directly from deck, or
- shuffling the entire deck and then taking the first five "cards" from the result of the shuffle.
Either is a valid way of getting five cards at random from the deck. It's up to us which to choose — we slightly prefer to shuffle and take the first five, because it is more like the physical procedure of shuffling the deck and dealing, but which you prefer is up to you.
11.4.1 A first-pass computer solution to the one-pair problem
Choosing the shuffle-and-deal way, the chunk to generate one hand is:
shuffled <- sample(deck)
hand <- shuffled[1:5]
hand
[1] 6 9 6 2 1
Without doing anything further, we could run this chunk many times, and each time, we could note down whether the particular hand had exactly one pair or not.
Table 11.2 has the result of running that procedure 25 times:
Hand | Card 1 | Card 2 | Card 3 | Card 4 | Card 5 | One pair? |
---|---|---|---|---|---|---|
1 | 9 | 4 | 11 | 9 | 13 | Yes |
2 | 8 | 7 | 6 | 11 | 1 | No |
3 | 1 | 1 | 10 | 9 | 9 | No |
4 | 4 | 2 | 2 | 1 | 1 | No |
5 | 8 | 11 | 13 | 10 | 3 | No |
6 | 13 | 7 | 11 | 10 | 6 | No |
7 | 8 | 1 | 10 | 11 | 12 | No |
8 | 12 | 6 | 1 | 1 | 9 | Yes |
9 | 4 | 12 | 13 | 12 | 10 | Yes |
10 | 9 | 12 | 12 | 8 | 7 | Yes |
11 | 5 | 2 | 4 | 11 | 13 | No |
12 | 3 | 4 | 11 | 8 | 5 | No |
13 | 2 | 4 | 2 | 13 | 1 | Yes |
14 | 1 | 1 | 3 | 5 | 12 | Yes |
15 | 4 | 6 | 11 | 13 | 11 | Yes |
16 | 10 | 4 | 8 | 9 | 12 | No |
17 | 7 | 11 | 4 | 3 | 4 | Yes |
18 | 12 | 6 | 11 | 12 | 13 | Yes |
19 | 5 | 3 | 8 | 6 | 9 | No |
20 | 11 | 6 | 8 | 9 | 6 | Yes |
21 | 13 | 11 | 5 | 8 | 2 | No |
22 | 11 | 8 | 10 | 1 | 13 | No |
23 | 10 | 5 | 8 | 1 | 3 | No |
24 | 1 | 8 | 13 | 9 | 9 | Yes |
25 | 5 | 13 | 2 | 4 | 11 | No |
% Yes | 44% |
11.5 Finding exactly one pair using code
Thus far we have had to look at the set of cards, or at the numbers, ourselves, and decide if there was exactly one pair. We would like the computer to do this for us. Let us stay with the numbers we generated above by dealing the random hand from the deck of numbers. To find pairs, we will go through the following procedure:
- For each possible value (1 through 13), count the number of times that value has occurred in hand. Call the result of this calculation repeat_nos.
- Select the repeat_nos values equal to 2.
- Count the number of "2" values in repeat_nos. This is the number of pairs, and excludes three of a kind or four of a kind.
- If the number of pairs is exactly one, label the hand as "Yes", otherwise label it as "No".
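The steps above can be collected into a single check. Here is a compact sketch in Python (the chapter itself develops the same logic step by step in R; the function name `has_exactly_one_pair` is invented for this illustration):

```python
def has_exactly_one_pair(hand):
    # Count how many times each value 1 through 13 occurs in the hand.
    repeat_nos = [hand.count(value) for value in range(1, 14)]
    # A pair is a value that occurs exactly twice; three of a kind and
    # four of a kind do not count.
    n_pairs = repeat_nos.count(2)
    # The hand qualifies only if there is exactly one such pair.
    return n_pairs == 1

# Hand 1 of Table 11.1: King, King, Queen, 10, 6.
has_exactly_one_pair([13, 13, 12, 10, 6])
```

Note that, by this criterion, a hand with two pairs or with three of a kind is labeled "No", matching the tables above.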
11.6 Finding number of repeats using tabulate
Consider the following 5-card “hand” of values:
hand <- c(5, 7, 5, 4, 7)
This hand represents a pair of 5s and a pair of 7s.
We want to detect the number of repeats for each possible card value, 1 through 13. Let's say we are looking for 5s. We can detect which of the values are equal to 5 by making a Boolean vector, where there is TRUE for a value equal to 5, and FALSE otherwise:
is_5 <- (hand == 5)
We can then count the number of 5s with:
sum(is_5)
[1] 2
In one chunk:
number_of_5s <- sum(hand == 5)
number_of_5s
[1] 2
We could do this laborious task for every possible card value (1 through 13):
number_of_1s <- sum(hand == 1)  # Number of aces in hand
number_of_2s <- sum(hand == 2)  # Number of 2s in hand
number_of_3s <- sum(hand == 3)
number_of_4s <- sum(hand == 4)
number_of_5s <- sum(hand == 5)
number_of_6s <- sum(hand == 6)
number_of_7s <- sum(hand == 7)
number_of_8s <- sum(hand == 8)
number_of_9s <- sum(hand == 9)
number_of_10s <- sum(hand == 10)
number_of_11s <- sum(hand == 11)
number_of_12s <- sum(hand == 12)
number_of_13s <- sum(hand == 13)  # Number of Kings in hand.
Above, we store the result for each card in a separate variable; this is inconvenient, because we would have to go through each variable checking for a pair (a value of 2). It would be more convenient to store these results in a vector. One way to do that would be to store the result for card value 1 at position (index) 1, the result for value 2 at position 2, and so on, like this:
# Make vector length 13, with one element for each card value.
repeat_nos <- numeric(13)
repeat_nos[1] <- sum(hand == 1)  # Number of aces in hand
repeat_nos[2] <- sum(hand == 2)  # Number of 2s in hand
repeat_nos[3] <- sum(hand == 3)
repeat_nos[4] <- sum(hand == 4)
repeat_nos[5] <- sum(hand == 5)
repeat_nos[6] <- sum(hand == 6)
repeat_nos[7] <- sum(hand == 7)
repeat_nos[8] <- sum(hand == 8)
repeat_nos[9] <- sum(hand == 9)
repeat_nos[10] <- sum(hand == 10)
repeat_nos[11] <- sum(hand == 11)
repeat_nos[12] <- sum(hand == 12)
repeat_nos[13] <- sum(hand == 13)  # Number of Kings in hand.
# Show the result
repeat_nos
[1] 0 0 0 1 2 0 2 0 0 0 0 0 0
You may recognize all this repetitive typing as a good sign we could use a for loop to do the work — er — for us.
repeat_nos <- numeric(13)
for (i in 1:13) {  # Set i to be first 1, then 2, ... through 13.
  repeat_nos[i] <- sum(hand == i)
}
# Show the result
repeat_nos
[1] 0 0 0 1 2 0 2 0 0 0 0 0 0
In our particular hand, after we have done the count for 7s, we will always get 0 for card values 8, 9 … 13, because 7 was the highest card (maximum value) for our particular hand. As you might expect, there is an R function max that will quickly tell us the maximum value in the hand:
max(hand)
[1] 7
We can use max to make our loop more efficient, by stopping our checks when we've reached the maximum value, like this:
max_value <- max(hand)
# Only make a vector large enough to house counts for the max value.
repeat_nos <- numeric(max_value)
for (i in 1:max_value) {  # Set i to 1, then 2, ... through max_value.
  repeat_nos[i] <- sum(hand == i)
}
# Show the result
repeat_nos
[1] 0 0 0 1 2 0 2
In fact, this is exactly what the function tabulate does, so we can use that function instead of our loop, to do the same job:
repeat_nos <- tabulate(hand)
repeat_nos
[1] 0 0 0 1 2 0 2
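For readers more comfortable in Python, a minimal equivalent of this counting behavior might look like the sketch below (an illustration only; it deliberately reuses the name tabulate, and handles only positive whole-number values as we have here):

```python
def tabulate(values):
    # Count occurrences of 1, 2, ..., max(values), as R's tabulate() does
    # for a vector of positive integers.
    counts = [0] * max(values)
    for value in values:
        counts[value - 1] += 1
    return counts

tabulate([5, 7, 5, 4, 7])  # [0, 0, 0, 1, 2, 0, 2]
```

As with R's tabulate, the result only extends up to the maximum value seen, so values above the maximum simply have no entry.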
11.7 Looking for hands with exactly one pair
Now we have repeat_nos, we can proceed with the rest of the steps above.
We can count the number of cards that have exactly two repeats:
(repeat_nos == 2)
[1] FALSE FALSE FALSE FALSE TRUE FALSE TRUE
n_pairs <- sum(repeat_nos == 2)
# Show the result
n_pairs
[1] 2
The hand is of interest to us only if the number of pairs is exactly 1:
# Check whether there is exactly one pair in this hand.
n_pairs == 1
[1] FALSE
We now have the machinery to use R for all the logic in simulating multiple hands, and checking for exactly one pair.
Let’s do that, and use R to do the full job of dealing many hands and finding pairs in each one. We repeat the procedure above using a for loop. The for loop commands the program to do ten thousand repeats of the statements in the “loop” between the start { and end } curly braces.
In the body of the loop (the part that gets repeated for each trial) we:
- Shuffle the deck.
- Deal ourselves a new hand.
- Calculate the repeat_nos for this new hand.
- Calculate the number of pairs from repeat_nos; store this as n_pairs.
- Put n_pairs for this repetition into the correct place in the scoring vector z.
With that we end a single trial, and go back to the beginning, until we have done this 10000 times.
When those 10000 repetitions are over, the computer moves on to count (sum) the number of “1”s in the score-keeping vector z, each “1” indicating a hand with exactly one pair. We store this count at location k. We divide k by 10000 to get the proportion of hands that had one pair, and we use message to display the result on the screen.
# Create a bucket (vector) called deck with four "1"s, four "2"s, four "3"s,
# etc., to represent a deck of cards.
one_suit <- 1:13
one_suit
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13
# Repeat values for one suit four times to make a 52 card deck of values.
deck <- rep(one_suit, 4)
deck
[1] 1 2 3 4 5 6 7 8 9 10 11 12 13 1 2 3 4 5 6 7 8 9 10 11 12
[26] 13 1 2 3 4 5 6 7 8 9 10 11 12 13 1 2 3 4 5 6 7 8 9 10 11
[51] 12 13
# Vector to store result of each trial.
z <- numeric(10000)

# Repeat the following steps 10000 times
for (i in 1:10000) {
  # Shuffle the deck
  shuffled <- sample(deck)

  # Take the first five cards to make a hand.
  hand <- shuffled[1:5]

  # How many pairs?
  # Counts for each card rank.
  repeat_nos <- tabulate(hand)
  n_pairs <- sum(repeat_nos == 2)

  # Keep score of # of pairs
  z[i] <- n_pairs

  # End loop, go back and repeat
}

# How often was there 1 pair?
k <- sum(z == 1)

# Convert to proportion.
kk <- k / 10000

# Show the result.
message(kk)
0.4285
The one_pair notebook starts at Note 11.1.
In one run of the program, the result in kk was 0.4285, so our estimate would be that the probability of a single pair is about 0.43.
How accurate are these resampling estimates? The accuracy depends on the number of hands we deal — the more hands, the greater the accuracy. If we were to examine millions of hands, 42 percent would contain a pair each; that is, the chance of getting a pair in the long run is 42 percent. It turns out the estimate of 44 percent based on 25 hands in Table 11.1 is fairly close to the long-run estimate, though whether or not it is close enough depends on one’s needs of course. If you need great accuracy, deal many more hands.
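For comparison, the full simulation can also be sketched in Python (an illustration only, not the chapter's own code; the variable names follow the R program above):

```python
import random

deck = list(range(1, 14)) * 4  # 52 card values, four suits
n_trials = 10_000
z = []  # scores: the number of pairs found in each hand

for _ in range(n_trials):
    # Shuffle a copy of the deck and take the first five "cards".
    shuffled = deck.copy()
    random.shuffle(shuffled)
    hand = shuffled[:5]
    # Count the values that occur exactly twice in the hand.
    repeat_nos = [hand.count(v) for v in range(1, 14)]
    z.append(repeat_nos.count(2))

# Proportion of hands with exactly one pair; about 0.42 in the long run.
kk = sum(1 for n_pairs in z if n_pairs == 1) / n_trials
print(kk)
```

Because each run reshuffles at random, the printed proportion will wobble a little from run to run, just as the card and R experiments did.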
A note on deck, hand, repeat_nos, and the other such names in the above program: these “variables” are called “vectors” in R. A vector is an array (sequence) of elements that gets filled with numbers as R conducts its operations.
To help keep things straight (though the program does not require it), we often use z to name the vector that collects all the trial results, and k to denote our overall summary results. Or you could call the results vector something like scoreboard — it’s up to you.
How many trials (hands) should be made for the estimate? There is no easy answer.1 One useful device is to run several (perhaps ten) equal sized sets of trials, and then examine whether the proportion of pairs found in the entire group of trials is very different from the proportions found in the various subgroup sets. If the proportions of pairs in the various subgroups differ greatly from one another or from the overall proportion, then keep running additional larger subgroups of trials until the variation from one subgroup to another is sufficiently small for your purposes. While such a procedure would be impractical using a deck of cards or any other physical means, it requires little effort with the computer and R.
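The subgroup check described above is easy to automate. Here is an illustrative Python sketch (the names are invented for this illustration): deal ten equal-sized subgroups of 1,000 hands each, and compare the subgroup proportions of one-pair hands.

```python
import random

deck = list(range(1, 14)) * 4

def proportion_one_pair(n_hands):
    # Proportion of hands with exactly one pair, over n_hands deals.
    count = 0
    for _ in range(n_hands):
        hand = random.sample(deck, 5)  # deal without replacement
        repeat_nos = [hand.count(v) for v in range(1, 14)]
        if repeat_nos.count(2) == 1:
            count += 1
    return count / n_hands

# Ten equal-sized subgroups of trials.
proportions = [proportion_one_pair(1000) for _ in range(10)]
# If the subgroups disagree badly, run more trials per subgroup.
spread = max(proportions) - min(proportions)
```

If `spread` is too large for your purposes, increase the number of hands per subgroup and repeat the check.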
11.8 Two more introductory poker problems
Which is more likely, a poker hand with two pairs, or a hand with three of a kind? This is a comparison problem, rather than a problem in absolute estimation as was the previous example.
In a series of 100 “hands” that were “dealt” using random numbers, four hands contained two pairs, and two hands contained three of a kind. Is it safe to say, on the basis of these 100 hands, that hands with two pairs are more frequent than hands with three of a kind? To check, we deal another 300 hands. Among them we see fifteen hands with two pairs (3.75 percent) and eight hands with three of a kind (2 percent), for a total of nineteen to ten. Although the difference is not enormous, it is reasonably clear-cut. Another 400 hands might be advisable, but we shall not bother.
Earlier I obtained forty-four hands with one pair each out of 100 hands, which makes it quite plain that one pair is more frequent than either two pairs or three-of-a-kind. Obviously, we need more hands to compare the odds in favor of two pairs with the odds in favor of three-of-a-kind than to compare those for one pair with those for either two pairs or three-of-a-kind. Why? Because the difference in odds between one pair, and either two pairs or three-of-a-kind, is much greater than the difference in odds between two pairs and three-of-a-kind. This observation leads to a general rule: The closer the odds between two events, the more trials are needed to determine which has the higher odds.
Again it is interesting to compare the odds with the formulaic mathematical computations, which are 1 in 21 (4.75 percent) for a hand containing two pairs and 1 in 47 (2.1 percent) for a hand containing three-of-a-kind — not too far from the estimates of .0375 and .02 derived from simulation.
To handle the problem with the aid of the computer, we simply need to estimate the proportion of hands having triplicates and the proportion of hands with two pairs, and compare those estimates.
To estimate the hands with three-of-a-kind, we can use a notebook just like “One Pair” earlier, except using repeat_nos == 3 to search for triplicates instead of duplicates. The program, then, is:
one_suit <- 1:13
deck <- rep(one_suit, 4)

triples_per_trial <- numeric(10000)

# Repeat the following steps 10000 times
for (i in 1:10000) {
  # Shuffle the deck
  shuffled <- sample(deck)

  # Take the first five cards.
  hand <- shuffled[1:5]

  # How many triples?
  repeat_nos <- tabulate(hand)
  n_triples <- sum(repeat_nos == 3)

  # Keep score of # of triples
  triples_per_trial[i] <- n_triples

  # End loop, go back and repeat
}

# How often was there a three-of-a-kind?
n_triples <- sum(triples_per_trial == 1)

# Convert to proportion
message(n_triples / 10000)
0.0251
The three_of_a_kind notebook starts at Note 11.2.
To estimate the probability of getting a two-pair hand, we revert to the original program (counting pairs), except that we examine all the results in the score-keeping vector for hands in which we had two pairs, instead of one.
deck <- rep(1:13, 4)

pairs_per_trial <- numeric(10000)

# Repeat the following steps 10000 times
for (i in 1:10000) {
  # Shuffle the deck
  shuffled <- sample(deck)

  # Take the first five cards.
  hand <- shuffled[1:5]

  # How many pairs?
  # Counts for each card rank.
  repeat_nos <- tabulate(hand)
  n_pairs <- sum(repeat_nos == 2)

  # Keep score of # of pairs
  pairs_per_trial[i] <- n_pairs

  # End loop, go back and repeat
}

# How often were there 2 pairs?
n_two_pairs <- sum(pairs_per_trial == 2)

# Convert to proportion
print(n_two_pairs / 10000)
[1] 0.0465
For efficiency (though efficiency really is not important here because the computer performs its operations so cheaply) we could develop both estimates in a single program by simply generating 10000 hands, and counting the number with three-of-a-kind and the number with two pairs.
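That combined program can be sketched as follows (an illustrative Python sketch; in the chapter's R you would simply add a second counter to the loop):

```python
import random

deck = list(range(1, 14)) * 4
n_trials = 10_000
n_triples = 0
n_two_pairs = 0

for _ in range(n_trials):
    hand = random.sample(deck, 5)
    repeat_nos = [hand.count(v) for v in range(1, 14)]
    # Score both outcomes from the same set of hands.
    if repeat_nos.count(3) == 1:
        n_triples += 1
    if repeat_nos.count(2) == 2:
        n_two_pairs += 1

print(n_triples / n_trials)    # proportion of hands with a triple
print(n_two_pairs / n_trials)  # proportion of hands with two pairs
```

Both estimates then come from the very same 10000 hands, so the comparison costs no extra dealing.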
Before we leave the poker problems, we note a difficulty with Monte Carlo simulation. The probability of a royal flush is so low (about one in half a million) that it would take much computer time to compute. On the other hand, considerable inaccuracy is of little matter. Should one care whether the probability of a royal flush is 1/100,000 or 1/500,000?
11.9 The concepts of replacement and non-replacement
In the poker example above, we did not replace the first card we drew. If we were to replace the card, it would leave the probability the same before the second pick as before the first pick. That is, the conditional probability remains the same. If we replace, conditions do not change. But if we do not replace the item drawn, the probability changes from one moment to the next. (Perhaps refresh your mind with the examples in the discussion of conditional probability, including Section 9.1.1.)
If we sample with replacement, the sample drawings remain independent of each other — a topic addressed in Section 9.1.
In many cases, a key decision in modeling the situation in which we are interested is whether to sample with or without replacement. The choice must depend on the characteristics of the situation.
There is a close connection between the lack of finiteness of the concept of universe in a given situation, and sampling with replacement. That is, when the universe (population) we have in mind is not small, or has no conceptual bounds at all, then the probability of each successive observation remains the same, and this is modeled by sampling with replacement. (“Not finite” is a less expansive term than “infinite,” though one might regard them as synonymous.)
Chapter 12 discusses problems whose appropriate concept of a universe is finite, whereas Chapter 13 discusses problems whose appropriate concept of a universe is not finite. This general procedure will be discussed several times, with examples included.
One simple rule-of-thumb is to quadruple the original number. The reason for quadrupling is that four times as many iterations (trials) of this resampling procedure give twice as much accuracy (as measured by the standard deviation, the most frequent measurement of accuracy). That is, the error decreases with the square root of the number of iterations. If you see that you need much more accuracy, then immediately increase the number of iterations even more than four times — perhaps ten or a hundred times.↩︎
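The square-root rule in this footnote is easy to verify numerically. A sketch, using the standard formula for the standard deviation of an estimated proportion (p = 0.42 is the rough one-pair probability from this chapter):

```python
import math

p = 0.42  # rough long-run probability of exactly one pair

def standard_error(n_trials):
    # Standard deviation of a proportion estimated from n_trials trials.
    return math.sqrt(p * (1 - p) / n_trials)

# Quadrupling the number of trials halves the standard error.
ratio = standard_error(1000) / standard_error(4000)
print(ratio)
```

The ratio is 2 regardless of the value of p, since p cancels: only the square root of the trial counts matters.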