This blog assumes the reader is familiar with the basic concept of rolling hashes. There are some math-heavy parts, but one can get most of the ideas without understanding every detail.

The main focus of this blog is on how to choose the rolling-hash parameters to avoid getting hacked and on how to hack codes with poorly chosen parameters.

Designing hard-to-hack rolling hashes

Recap on rolling hashes and collisions

Recall that a rolling hash has two parameters \((p,a)\), where \(p\) is the modulus and \(0\le a < p\) is the base. (We'll see that \(p\) should be a big prime and \(a\) larger than the size of the alphabet.) The hash value of a string \(S = s_0 \cdots s_{n-1}\) is given by:

\[ h(S) = \left(\sum^{n-1}_{i=0} a^{n-i-1} s_i\right) \mod p \]

For now, let's consider the following simple problem: given two strings \(S,T\) of equal length, decide whether they are equal by comparing their hash values \(h(S), h(T)\). Our algorithm declares \(S\) and \(T\) to be equal if \(h(S) = h(T)\). Most rolling-hash solutions are built on multiple calls to this subproblem or rely on the correctness of such calls.
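As a concrete reference, here is a minimal C++ sketch of this subproblem (the names poly_hash and probably_equal are mine; the overflow reasoning assumes \(p < 2^{32}\)):

#include <cstdint>
#include <string>

// h(S) = (sum of a^(n-i-1) * s_i) mod p, computed with Horner's rule.
uint64_t poly_hash(const std::string &s, uint64_t p, uint64_t a) {
    uint64_t h = 0;
    for (char c : s)
        h = (h * a + (uint64_t)c) % p;  // h*a < 2^64 since h, a < p < 2^32
    return h;
}

// Declare S and T equal iff their hash values match.
bool probably_equal(const std::string &s, const std::string &t,
                    uint64_t p, uint64_t a) {
    return poly_hash(s, p, a) == poly_hash(t, p, a);
}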

Let's call two strings \(S, T\) of equal length with \(S \neq T\) and \(h(S) = h(T)\) an equal-length collision. We want to avoid equal-length collisions, as they cause our algorithm to incorrectly assess \(S\) and \(T\) as equal. (Note that our algorithm never incorrectly assesses strings as different.) For fixed parameters and reasonably small length, there are many more strings than possible hash values, so equal-length collisions always exist. Hence you might think that, for any rolling hash, there are inputs on which it is guaranteed to fail.

Luckily, randomization comes to the rescue. Our algorithm does not have to fix \((p, a)\); it can pick them at random according to some scheme instead. A scheme is reliable if we can prove that for any two strings \(S, T\) with \(S\neq T\), the scheme picks \((p, a)\) such that \(h(S) \neq h(T)\) with high probability. Note that the probability space only includes the random choices made inside the scheme; the input \((S, T)\) is arbitrary, fixed and not necessarily random. (If you think of the input as coming from a hack, this means that no matter what the input is, our solution will, with high probability, not fail.)

I'll show you two reliable schemes. (Note that just because a scheme is reliable does not mean that your implementation is good. Some care has to be taken with the random number generator that is used.)

Randomizing base

This part is based on a blog by rng_58. His post covers a more general hashing problem and is worth checking out.

This scheme uses a fixed prime \(p\) (e.g. \(10^9 + 7\) or \(4\cdot 10^9 + 7\)) and picks \(a\) uniformly at random from \([0,p-1]\). Let \(A\) be the random variable for the choice of \(a\).

To prove that this scheme is good, consider two strings \((S, T)\) of equal length and do some calculations:

\[ \begin{aligned} h(S)&=h(T)\\ (\sum_{i=0}^{n-1} A^{n-i-1} S_i) \mod p & = (\sum_{i=0}^{n-1} A^{n-i-1} T_i) \mod p \end{aligned} \]

Therefore, we have:

\[ \sum_{i=0}^{n-1} A^{n-i-1} S_i \equiv \sum_{i=0}^{n-1} A^{n-i-1} T_i \pmod p\\ \]

Let \(P(A)\) denote the following polynomial in \(A\) of degree \(\le n-1\):

\[ P(A) = \sum_{i=0}^{n-1} A^{n-i-1}(S_i-T_i) \]

The condition above is then equivalent to \(P(A) \equiv 0 \pmod p\).

\(P\) is non-zero as \(S \neq T\). The calculations show that \(h(S) = h(T)\) if and only if \(A\) is a root of \(P(A)\).

As \(p\) is prime and we are doing computations \(\mod p\), we are working in a field. Over a field, any non-zero polynomial of degree \(\le n - 1\) has at most \(n-1\) roots. Hence there are at most \(n - 1\) choices of \(a\) that lead to \(h(S) = h(T)\). Therefore:

\[ Pr[h(S)=h(T)]=Pr[P(A)=0] \le \frac{n-1} {p} \]

So for any two strings \(S, T\) of equal length, the probability that they form an equal-length collision is at most \(\frac{n-1}{p}\). This is around \(10^{-4}\) for \(n = 10^5\), \(p = 10^9 + 7\). Picking larger primes such as \(2^{31} - 1\) or \(4\cdot 10^9 + 7\) can improve this a bit, but one needs more care with overflows.

Tightness of bound

For now, this part only applies to primes with smooth \(p - 1\), so it doesn't work for \(p = 10^9 + 7\) for example. It would be interesting to find a construction that is computable and works in the general case.

The bound \(\frac{n-1} {p}\) for this scheme is actually tight if \(n-1 \mid p-1\). Consider \(S=ba\ldots a\) and \(T=aa\ldots b\), for which \(P(A)=A^{n-1}-1\).

As \(p\) is prime, \((\mathbb{Z}/p\mathbb{Z})^*\) is cyclic of order \(p - 1\), hence there is a subgroup \(G \subseteq (\mathbb{Z}/p\mathbb{Z})^*\) of order \(n - 1\). Any \(g\in G\) then satisfies \(g^{n - 1} = 1\), so \(P(A)\) has \(n - 1\) distinct roots.
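As a tiny worked example (my numbers, not from the original): take \(p = 7\) and strings of length \(n = 4\), so \(n-1 = 3\) divides \(p-1 = 6\). Then

\[ P(A) = A^{3} - 1 \equiv (A-1)(A-2)(A-4) \pmod 7, \]

so \(S=baaa\) and \(T=aaab\) collide exactly for \(a \in \{1,2,4\}\), i.e. with probability \(\frac{n-1}{p} = \frac{3}{7}\) over the choice of the base.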

Randomizing modulo

This scheme fixes a base \(a \ge |\Sigma|\) (where \(\Sigma\) is the alphabet) and a bound \(N > a\), and picks a prime \(p\) uniformly at random from \([N, 2N - 1]\).
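A minimal sketch of how such a \(p\) could be picked (my code; trial division is assumed, which is fast enough for \(N\) around \(10^9\)):

#include <cstdint>
#include <random>

// Trial division; fast enough for p around 10^9 (about 2^16 divisions).
bool is_prime(uint64_t n) {
    if (n < 2) return false;
    for (uint64_t d = 2; d * d <= n; d++)
        if (n % d == 0) return false;
    return true;
}

// Pick p uniformly among the primes in [N, 2N-1] by rejection sampling;
// by the prime density theorem, ~ln(N) tries are needed on average.
uint64_t random_prime(uint64_t N, std::mt19937_64 &rng) {
    std::uniform_int_distribution<uint64_t> dist(N, 2 * N - 1);
    uint64_t p;
    do { p = dist(rng); } while (!is_prime(p));
    return p;
}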

To prove that this scheme is good, again, consider two strings \((S, T)\) of equal length and do some calculations:

\[ \begin{aligned} h(S)&=h(T)\\ (\sum_{i=0}^{n-1} a^{n-i-1} S_i) \mod p & = (\sum_{i=0}^{n-1} a^{n-i-1} T_i) \mod p \end{aligned} \]

Therefore, we have:

\[ \sum_{i=0}^{n-1} a^{n-i-1} S_i \equiv \sum_{i=0}^{n-1} a^{n-i-1} T_i \pmod p\\ \]

So:

\[ X = \sum_{i=0}^{n-1}a^{n-i-1}(S_i-T_i) \equiv 0 \pmod p \]

As \(X \equiv 0 \pmod p\), we have \(p \mid X\). As we chose \(a\) large enough, \(X \neq 0\). Moreover, \(|X| < a^n\). An upper bound for the number of distinct prime divisors of \(X\) in \([N, 2N - 1]\) is given by \(\log_N (|X|) = \frac{n\ln (a)} {\ln N}\). By the prime density theorem, there are around \(\frac{N} {\ln{N} }\) primes in \([N, 2N - 1]\). Therefore:

\[ Pr[h(S)=h(T)] = Pr[p\mid X] \lesssim \frac{n\ln(a)} {N} \]

Note that this bound is slightly worse than the one for randomizing the base. It is around \(3\cdot 10^{-4}\) for \(n = 10^5\), \(a = 26\), \(N = 10^9\).

How to randomize properly

The following are good ways of initializing your random number generator.

  • high precision time.

    chrono::duration_cast<chrono::nanoseconds>(chrono::high_resolution_clock::now().time_since_epoch()).count();
    chrono::duration_cast<chrono::nanoseconds>(chrono::steady_clock::now().time_since_epoch()).count();

    Either of the two should be fine. (In theory, high_resolution_clock should be better, but it somehow has lower precision than steady_clock on codeforces??)

  • processor cycle counter

    __builtin_ia32_rdtsc();

  • some heap address converted to an integer

    (uintptr_t) make_unique<char>().get();

  • processor randomness (needs either pragma or asm) (Thanks halyavin for suggesting this.)

    // pragma version
    #pragma GCC target ("rdrnd")
    uint32_t rdrand32(){
        uint32_t ret;
        assert(__builtin_ia32_rdrand32_step (&ret));
        return ret;
    }

    // asm version
    uint32_t rd() {
        uint32_t ret;
        asm volatile("rdrand %0" :"=a"(ret) ::"cc");
        return ret;
    }

If you use a C++11-style rng (you should), you can use a combination of the above:

seed_seq seq{
    (uint64_t) chrono::duration_cast<chrono::nanoseconds>(chrono::high_resolution_clock::now().time_since_epoch()).count(),
    (uint64_t) __builtin_ia32_rdtsc(),
    (uint64_t) (uintptr_t) make_unique<char>().get()
};
mt19937 rng(seq);
int base = uniform_int_distribution<int>(0, p-1)(rng);

Note that seed_seq internally discards the upper 32 bits of its arguments, and that this doesn't really matter, as the lower bits are harder to predict (especially in the first case with chrono).

See the section on 'Abusing bad randomization' for some bad examples.

Extension to multiple hashes

We can use multiple hashes (even with the same scheme and the same fixed parameters), and the hashes are independent as long as the random samples are independent. If the single hashes fail with probability at most \(a_1,\cdots, a_k\) respectively, the probability that all hashes fail is at most \(\prod_{i=1}^k a_i\).

For example, if we use two hashes with \(p = 10^9 + 7\) and randomized bases, the probability of a collision is at most \(10^{ - 8}\); for four hashes it is at most \(10 ^{- 16}\). Here the constants from slightly larger primes are more significant: for \(p = 2^{31} - 1\) the probabilities are around \(2.1\cdot 10^{ - 9}\) and \(4.7\cdot 10^{ - 18}\).
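For illustration, a minimal double-hash sketch (my code; a1 and a2 are assumed to be drawn independently at random, e.g. with the rng from the section above):

#include <cstdint>
#include <string>
#include <utility>

const uint64_t P = 1000000007;  // same fixed prime for both hashes

// Two hashes with independently random bases; compare as a pair,
// so a collision requires both coordinates to collide.
std::pair<uint64_t, uint64_t> double_hash(const std::string &s,
                                          uint64_t a1, uint64_t a2) {
    uint64_t h1 = 0, h2 = 0;
    for (char c : s) {
        h1 = (h1 * a1 + (uint64_t)c) % P;
        h2 = (h2 * a2 + (uint64_t)c) % P;
    }
    return {h1, h2};
}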

Larger modulo

Using larger (e.g. 60-bit) primes would make collisions less likely and would not suffer from the accumulated factors of \(n\) in the error bounds. However, the computation of the rolling hash gets slower and more difficult, as there is no __int128 on codeforces.

One exception to this is the Mersenne prime \(p = 2^{61} - 1\); we can reduce \(\mod p\) by using bitshifts instead. (Thanks dmkz for suggesting this.) The following code computes \(a\cdot b \mod p\) without __int128 and is only around \(5\%\) slower than a \(30\) bit hash with modulo.

The arguments should satisfy \(0\le a,b < mod\); the return value then also satisfies this.

constexpr uint64_t mod = (1ull<<61) - 1;
uint64_t modmul(uint64_t a, uint64_t b){
    uint64_t l1 = (uint32_t)a, h1 = a>>32, l2 = (uint32_t)b, h2 = b>>32;
    uint64_t l = l1*l2, m = l1*h2 + l2*h1, h = h1*h2;
    uint64_t ret = (l&mod) + (l>>61) + (h << 3) + (m >> 29) + (m << 35 >> 3) + 1;
    ret = (ret & mod) + (ret>>61);
    ret = (ret & mod) + (ret>>61);
    return ret-1;
}
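For illustration, a minimal rolling hash built on modmul (my code, not part of the original; needs <string>):

uint64_t poly_hash61(const std::string &s, uint64_t base) {
    uint64_t h = 0;
    for (char c : s)
        h = (modmul(h, base) + (uint64_t)c) % mod;  // stays in [0, mod)
    return h;
}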

A smaller factor can be gained by using unsigned types and \(p=4\cdot 10^9 + 7\).

Note that \(p=2^{64}\)(overflow of unsigned long long) is not prime and can be hacked regardless of randomization (see below).

Extension to multiple comparisons

Usually, rolling hashes are used in more than a single comparison. If we rely on \(m\) comparisons and the probability that a single comparison fails is \(p\), then the probability that any of them fails is at most \(m\cdot p\) by a union bound. Note that when \(m = 10^5\), we need at least two or three hashes for this to be small.

One has to be quite careful when estimating the number of comparisons that need to succeed. If we sort the hashes or put them into a set, we need them to be pair-wise distinct, so for \(n\) strings a total of \(\binom{n} {2}\) comparisons have to succeed. If \(n = 3\cdot 10^5\), then \(m \approx 4.5\cdot 10^{10}\), so we need three or four hashes (or only two if we use \(p = 2^{61} - 1\)).

Extension to strings of different length

If we deal with strings of different lengths, we can avoid comparing them by storing the length along with the hash. This is not necessary, however, if we assume that no character hashes to \(0\). In that case, we can simply imagine prepending the shorter string with null-bytes to get strings of equal length without changing the hash values. Then the theory above applies just fine. (If some character (e.g. a) hashes to \(0\), the prepending process might produce strings that look the same but aren't (e.g. a and aa).)
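A minimal sketch of such a character mapping (my code; it assumes a lowercase alphabet):

#include <cstdint>
#include <string>

// Map 'a'..'z' to 1..26, so no character hashes to 0; (conceptually)
// prepended null-bytes then leave the hash value unchanged.
uint64_t hash_nonzero(const std::string &s, uint64_t p, uint64_t a) {
    uint64_t h = 0;
    for (char c : s)
        h = (h * a + (uint64_t)(c - 'a' + 1)) % p;
    return h;
}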

Computing anti-hash tests

This section covers some techniques that take advantage of common mistakes in rolling-hash implementations; they can mainly be used for hacking other solutions. Here's a table with a short summary of the methods.

| Name | Use case | Run time | String length | Notes |
| --- | --- | --- | --- | --- |
| Thue–Morse | Hash with overflow | \(O(1)\) | \(2^{10}\) | Works for all bases simultaneously. |
| Birthday | Small modulo | \(O(\sqrt{p}\log p)\) | \(\approx 2 \log_{|\Sigma|} p\) | Can find multiple collisions. |
| Tree | Large modulo | \(O(2^{\sqrt{2\lg p}+1})\) | \(2^{\sqrt{2\lg p}+1}\) | Faster; longer strings. |
| Multi-tree | Large modulo | \(O((2^{\sqrt{2\lg_m p}}+\log_{|\Sigma|} m)\cdot m\log m)\) | \(\approx 2^{\sqrt{2\lg_m p}+\log_{|\Sigma|} m}\) | Slower; shorter strings. |
| Lattice reduction | Medium-large alphabet, multiple hashes | \(O(\mathrm{length}^3)\) | \(\approx \sum_{i=1}^{n}\log_{|\Sigma|}(p_i)\) | Great results in practice. |
| Composition | Multiple hashes | Sum of single run times | Product of single string lengths | Combines two attacks. |

Single hashes

Thue–Morse sequence: Hashing with unsigned overflow (\(p = 2^{64}\), \(a\) arbitrary)

One anti-hash test that works for any base is the Thue–Morse sequence, generated by the following code.

const int Q = 10;
const int N = 1 << Q;
std::string S(N, ' '), T(N, ' ');

for (int i = 0; i < N; i++)
    S[i] = 'A' + __builtin_popcount(i) % 2;
for (int i = 0; i < N; i++)
    T[i] = 'B' - __builtin_popcount(i) % 2;

\(S\), \(T\) form an equal-length collision, regardless of the chosen base.

See this blog for a detailed discussion. Note that the bound on the linked blog can be improved slightly, as \(X^2 - 1\) is always divisible by \(8\) for odd \(X\). (So we can use \(Q = 10\) instead of \(Q = 11\).)
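To sketch why \(Q = 10\) suffices (my summary of the argument, for odd bases; even bases are weak mod \(2^{64}\) for separate reasons): the hash difference of \(S\) and \(T\) is divisible by \(\prod_{i=0}^{Q-1}(a^{2^i}-1)\), and by the lifting-the-exponent lemma, \(v_2(a^{2^i}-1) = v_2(a^2-1) + i - 1 \ge i + 2\) for \(i \ge 1\). Hence

\[ v_2\left(\prod_{i=0}^{9}(a^{2^i}-1)\right) \ge 1 + \sum_{i=1}^{9}(i+2) = 64, \]

so the difference already vanishes mod \(2^{64}\) for \(Q = 10\).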

Birthday-attack: Hashing with 32-bit prime and fixed base (\(p < 2^{32}\) fixed, \(a\) fixed)

Hashes with a single small prime can be attacked via the birthday paradox. Fix a length \(l\) and an alphabet of size \(d\), and pick \(k\) strings of length \(l\) uniformly at random. If \(l\) is not too small, the resulting hash values will be approximately uniformly distributed in \([0, p-1]\). By the birthday paradox, if we pick \(k \approx \sqrt{(2\ln 2)p}\) strings, so that \(\frac{k(k-1)}{2} > p\ln 2\), the probability that all of our picked strings hash to different values is:

\[ \prod_{i=0}^{k-1}\left(1-\frac{i} {p}\right) < \prod_{i=0}^{k-1} e^{-\frac{i} {p}} = e^{-\frac{k(k-1)} {2p}} < e^{-\ln 2} = \frac{1} {2} \]

Hence with probability \(> \frac{1} {2}\) we find two strings hashing to the same value. By repeating this, we can find an equal-length collision with high probability in \(O(\sqrt{p} )\). In practice, the resulting strings can be quite short (\(l \approx 6\) for \(d=62\) and \(p = 10^9 + 7\); I'm not sure how to upper-bound this).

More generally, we can compute \(m\) strings with equal hash value in \(O(m\cdot p^{1-\frac{1} {m} } )\) using the same technique with \(k = m \cdot p^{1-\frac{1} {m} }\).
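A minimal sketch of the basic attack (my code; it fixes \(d = 26\) lowercase letters and assumes \(26^l\) is much larger than \(\sqrt{p}\)):

#include <bits/stdc++.h>
using namespace std;

// Draw random strings of length len until two distinct ones share a hash
// value; by the birthday paradox this takes ~sqrt((2 ln 2) p) samples.
pair<string, string> birthday_attack(uint64_t p, uint64_t a,
                                     int len, mt19937_64 &rng) {
    uniform_int_distribution<int> ch('a', 'z');
    unordered_map<uint64_t, string> seen;
    for (;;) {
        string s(len, ' ');
        for (char &c : s) c = (char)ch(rng);
        uint64_t h = 0;
        for (char c : s) h = (h * a + (uint64_t)c) % p;
        auto it = seen.find(h);
        if (it != seen.end() && it->second != s)
            return {it->second, s};  // equal-length collision
        seen[h] = s;
    }
}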

Tree-attack: Hashing with larger prime and fixed base (\(p\) fixed, \(a\) fixed)

Thanks Kaban-5 and pavel.savchenkov for the link to some Russian comments describing this idea.

For large primes, the birthday-attack is too slow. Recall that for two strings \((S, T)\) of equal length:

\[ h(S)=h(T) \]

\[ \sum_{i=0}^{n-1}a^{n-i-1}(S_i-T_i) \equiv 0 \pmod{p} \]

Setting \(\alpha_i = S_i - T_i\), we have \(-|\Sigma| \le \alpha_i \le |\Sigma|\). The tree-attack tries to find \(\alpha_i \in\{-1,0,1\}\), not all zero, such that:

\[ \sum_{i=0}^{n-1} (a^{n-i-1} \bmod p) \cdot \alpha_i = 0 \]

The attack maintains clusters \(C_1, ..., C_k\) of coefficients. The sum \(S(C)\) of a cluster \(C\) is given by:

\[ S(C) = \sum_{i\in C} (a^{n-i-1} \bmod p) \cdot \alpha_i \]

We can merge two clusters \(C_1\) and \(C_2\) into a cluster \(C_3\) with sum \(S(C_1) - S(C_2)\) by multiplying all the \(\alpha_i\) from \(C_2\) by \(-1\) and joining the sets of coefficients of \(C_1\) and \(C_2\). This operation can be implemented in constant time by storing the clusters as binary trees in which each node stores its sum; the merge operation then adds a new node for \(C_3\) with children \(C_1\) and \(C_2\) and sum \(S(C_1) - S(C_2)\). To ensure that \(S(C_3) \ge 0\), swap \(C_1\) and \(C_2\) if necessary. The values of the \(\alpha_i\) are not explicitly stored, but they can be recovered at the end by traversing the tree.

Initially, we start with \(n = 2^k\) coefficients, each \(\alpha_i=1\) in its own cluster. In a phase, we first sort the clusters by their sums and then merge adjacent pairs of clusters. If we encounter a cluster of sum \(0\) at any point, we finish by setting all \(\alpha_j\) not in that cluster to \(0\). If we haven't finished after \(k\) phases, we try again with a bigger value of \(k\).

For which values of \(k\) can we expect this to work? If we assume that the sums are initially uniformly distributed in \([0,p-1]\), the maximum sum should decrease by a factor of \(\sim 2^{k-i}\) in phase \(i\). After \(k\) phases, the maximum sum is around \(\frac{p} {2^{\binom{k} {2} } }\), so \(k\approx \sqrt{2\lg p}+1\) works. This produces strings of length \(n=2^{\sqrt{2\lg p}+1}\) in \(O(n)\) time. (A more formal analysis can be found in the paper 'Solving Medium-Density Subset Sum Problems in Expected Polynomial Time', section 2.2. The problem and algorithms in the paper are slightly different, but the approach is similar.)
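The following is a minimal sketch of the attack as described above (my implementation, not dacin21's exact code; __int128 is fine here since anti-tests are generated locally, not on the judge):

#include <bits/stdc++.h>
using namespace std;

// Find alpha_i in {-1, 0, 1}, not all zero, with
//   sum_i alpha_i * (a^(n-i-1) mod p) = 0   over the integers,
// and turn them into a collision (S, T) with S_i - T_i = alpha_i.
struct Node { uint64_t sum; int left, right; };  // leaf: left = string index

pair<string, string> tree_attack(uint64_t p, uint64_t a, int k) {
    int n = 1 << k;
    vector<Node> tree;
    vector<int> cur(n);
    uint64_t pw = 1;  // a^(n-1-i) mod p, filled from i = n-1 down to 0
    for (int i = n - 1; i >= 0; i--) {
        tree.push_back({pw, i, -1});
        cur[i] = (int)tree.size() - 1;
        pw = (__uint128_t)pw * a % p;
    }
    int zero = -1;
    for (int phase = 0; phase < k && zero < 0; phase++) {
        sort(cur.begin(), cur.end(),
             [&](int x, int y) { return tree[x].sum < tree[y].sum; });
        vector<int> nxt;
        for (int i = 0; i + 1 < (int)cur.size(); i += 2) {
            // merge adjacent clusters; child order keeps the new sum >= 0
            tree.push_back({tree[cur[i + 1]].sum - tree[cur[i]].sum,
                            cur[i + 1], cur[i]});
            nxt.push_back((int)tree.size() - 1);
        }
        cur = nxt;
        for (int c : cur) if (tree[c].sum == 0) zero = c;
    }
    if (zero < 0) return {"", ""};  // no luck; retry with a larger k
    // Recover the signs: the left child keeps its sign, the right is flipped.
    vector<int> alpha(n, 0);  // alpha_j = 0 outside the zero-sum cluster
    function<void(int, int)> rec = [&](int v, int sign) {
        if (tree[v].right < 0) { alpha[tree[v].left] = sign; return; }
        rec(tree[v].left, sign);
        rec(tree[v].right, -sign);
    };
    rec(zero, 1);
    string S(n, 'b'), T(n, 'b');
    for (int i = 0; i < n; i++) {  // realize S_i - T_i = alpha_i
        if (alpha[i] == 1) T[i] = 'a';
        if (alpha[i] == -1) S[i] = 'a';
    }
    return {S, T};
}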

Multi-tree-attack

While the tree-attack runs really fast, the resulting strings can get a little long (\(n = 2048\) for \(p = 2^{61} - 1\)). We can spend more runtime to search for a shorter collision by storing the smallest \(m\) sums we can get in each cluster. (The single-tree-attack just uses \(m = 1\).) Merging two clusters can be done in \(O(m\log m)\) with a min-heap and a \(2m\)-pointer walk. In order to reach \(m\) sums per cluster as quickly as possible, we allow all values \(\alpha_i\in[-|\Sigma|,|\Sigma|]\) and exclude the trivial case where all \(\alpha_i\) are zero.

Analysing the expected value of \(k\) for this to work is quite difficult. Under the optimistic assumptions that we reach \(m\) sums per cluster after \(\log_{|\Sigma|} m\) phases, that the sums decrease as in the single-tree attack, and that we can expect a collision once they get smaller than \(m^2\) by the birthday paradox, we get \(k = \sqrt{2\frac{\lg p} {\lg m} }+\log_{|\Sigma|} m\). (A more realistic bound would be \(k=\frac{\lg p} {\lg m}+\log_{|\Sigma|}m\), which might be obtained by accounting for the birthday paradox in the bound proven in the paper 'On Random High Density Subset Sums', Theorem 3.1.)

In practice, we can use \(m \approx 10^5\) to find a collision of length \(128\) for \(|\Sigma|=2\), \(p = 2^{61} - 1\) in around 0.4 seconds.

Lattice-reduction attack: Single or multiple hashes over a not-too-small alphabet

Thanks to hellman_ for mentioning this, check out his write-up on this topic here. There's also a write-up by someone else here.

As in the tree attack, we're looking for \(\alpha_i\in[-|\Sigma|,|\Sigma|]\), not all zero, such that:

\[ \sum_{i=0}^{n-1} (a^{n-i-1} \bmod p) \cdot \alpha_i = 0 \]

The set:

\[ \left\{(\alpha_0,\cdots,\alpha_{n-1},\beta) \,\middle|\, \beta \equiv \sum_{i=0}^{n-1} (a^{n-i-1} \bmod p)\cdot \alpha_i \pmod p \right\} \]

forms a lattice (a free \(\mathbb{Z}\)-module embedded in a subspace of \(\mathbb{R}^{n+1}\)). We're looking for an element of the lattice with \(\beta = 0\) and \(|\alpha_i| \le |\Sigma|\). We can penalize non-zero values of \(\beta\) by considering:

\[ \beta = 10^5\cdot\left(\left(\sum_{i=0}^{n-1} (a^{n-i-1} \bmod p)\cdot \alpha_i\right) \bmod p\right) \]

instead; then we seek to minimize \(\max\{|\alpha_0|,\cdots,|\alpha_{n-1}|,|\beta|\}\). Unfortunately, this optimization problem is quite hard, so we try to minimize:

\[ \alpha_0^2 + \cdots + \alpha_{n-1}^2 + \beta^2 \]

instead. The resulting problem is still hard, possibly NP-complete, but there are some good approximation algorithms available.

Similar to a vector space, we can define a basis of a lattice. In our case, a basis is given by:

\[ \{e_{\alpha_i}+10^5(a^{n-i-1} \bmod p)\, e_{\beta} \mid 0\le i < n\} \cup \{10^5 p\cdot e_\beta\} \]

A lattice reduction algorithm takes this basis and transforms it (by invertible matrices with determinant \(\pm 1\)) into another basis with approximately shortest vectors. Implementing these algorithms is quite hard (and suffers from precision errors or bignum slowdown), so I decided to use the built-in implementation in Sage.

"""Anti-rolling-hash test generation by lattice reduction
original code by Hellman_, modified by dacin21
"""

from sage.all import *

def anti_hash(PAs, string_length, sigma, block_size = 0, MULTIPLIER = 100000, sigma_base = ord('a')):
n = len(PAs)
N = string_length
As = [a for p, a in PAs]
Ps = [p for p, a in PAs]

def h(s):
"""polynomial hash modulo <n> primes"""
v = [0] * n
for c in s:
v = [(x * q + ord(c))%p for x, q, p in zip(v, As, Ps)]
return tuple(v % p for v, p in zip(v, Ps))

mv = matrix(ZZ, N, N)
for y in xrange(N):
for x, q, p in zip(range(n), As, Ps):
mv[y,x] = pow(q, N-y-1, p);

m = matrix(ZZ, N + n, N + n)
# submatrix with terms
m.set_block(0, 0, MULTIPLIER * mv)
# modulo reductions
m.set_block(N, 0, MULTIPLIER * diagonal_matrix(Ps))
# term coefficients
m.set_block(0, n, identity_matrix(N))
# 4th submatrix is zero

m_reduced = m.LLL()
if block_size > 0:
m_reduced = m_reduced.BKZ(block_size = block_size)

for row in m_reduced:
print row[:n], min(row[n:]), "~", max(row[n:])
delta = max(abs(v) for v in row[n:])
if set(row[:n]) == {0} and delta < sigma:
print "Found collision!"
s = [None] * N
t = [None] * N
for i, v in enumerate(row[n:]):
a = sigma_base
b = a + abs(v)
if v > 0:
a, b = b, a
s[i] = a
t[i] = b
s = "".join(map(chr, s))
t = "".join(map(chr, t))
print s + " " + t
# print h(s)
# print h(t)
assert h(s) == h(t)
break
else:
print "Failed to find collision, try a larger string_length"
print "For lengths > 30, setting block_size to 10 or 15 is recommended"

if __name__ == '__main__':
with open("hash.in", 'r') as infile:
n, k, sigma = map(int, infile.readline().strip().split())
PAs = [tuple(map(int, infile.readline().strip().split())) for _ in xrange(n)]
anti_hash(PAs, k, sigma)

Input is taken from a file named hash.in, in the format below. To use the BKZ algorithm, set the block_size argument.

Input format:

n length sigma
p_1 a_1
...
p_n a_n

Sage offers two algorithms, LLL and BKZ; the former is faster but produces worse approximations, especially for longer strings. Analyzing them is difficult, so I experimented a bit by fixing \(|\Sigma|=26\) and \(p = 2^{61} - 1\), picking \(a_1,\cdots ,a_n\) at random, and searching for a short anti-hash test with both algorithms. The results turned out really well.

LLL algorithm:

| \(n\) | minimal length | time spent (seconds) |
| --- | --- | --- |
| 1 | 12 | 0.02 |
| 2 | 23 | 0.04 |
| 3 | 34 | 0.08 |
| 4 | 48 | 0.18 |
| 5 | 60 | 0.34 |
| 6 | 77 | 0.75 |
| 7 | 301 | 5.14 |
| 8 | \(> 1500\) | \(> 990\) |

For \(n = 8\), the algorithm couldn't find a collision of length \(\le 1500\).

BKZ algorithm with block_size = 10:

| \(n\) | minimal length | time spent (seconds) |
| --- | --- | --- |
| 1 | 12 | 0.02 |
| 2 | 23 | 0.04 |
| 3 | 33 | 0.09 |
| 4 | 45 | 0.21 |
| 5 | 57 | 0.58 |
| 6 | 72 | 1.12 |
| 7 | 88 | 2.59 |
| 8 | 108 | 5.33 |

BKZ algorithm with block_size = 15:

| \(n\) | minimal length | time spent (seconds) |
| --- | --- | --- |
| 1 | 12 | 0.02 |
| 2 | 23 | 0.04 |
| 3 | 33 | 0.09 |
| 4 | 45 | 0.21 |
| 5 | 57 | 0.52 |
| 6 | 70 | 1.24 |
| 7 | 85 | 2.41 |
| 8 | 102 | 5.20 |

BKZ algorithm with block_size = 25:

| \(n\) | minimal length | time spent (seconds) |
| --- | --- | --- |
| 1 | 12 | 0.02 |
| 2 | 23 | 0.04 |
| 3 | 33 | 0.10 |
| 4 | 46 | 0.58 |
| 5 | 57 | 1.14 |
| 6 | 70 | 3.67 |
| 7 | 84 | 6.96 |
| 8 | 96 | 26.70 |

Note that this attack does not work well for small (e.g. binary) alphabets when \(n > 1\), and that the characters have to hash to consecutive values, so this has to be the first attack if used in a composition attack.

Composition-attack: Multiple hashes

Credit for this part goes to ifsmirnov, I found this technique in his jngen library.

Using two or more hashes is usually sufficient to protect against a direct birthday-attack. For two primes, there are \(N = p_1\cdot p_2\) possible hash values. The birthday-attack runs in \(O(\sqrt{N})\), which is \(\approx 10^{10}\) for primes around \(10^9\). Moreover, the memory usage is more than \(\sqrt{ (2 \ln 2)N}\cdot 8\) bytes (if you only store the hashes and the rng seed), which is around 9.5 GB.

The key idea used to break multiple hashes is to break them one-by-one.

  • First find an equal-length collision (by a birthday-attack) for the first hash \(h_1\), i.e. two strings \(S, T\) of equal length with \(S \neq T\) and \(h_1(S) = h_1(T)\). Note that strings of equal length built over the alphabet \(\{S, T\}\) (i.e. by concatenating some copies of \(S\) with some copies of \(T\) and vice-versa) will now hash to the same value under \(h_1\).
  • Then use \(S\) and \(T\) as the alphabet when searching for an equal-length collision (by a birthday-attack again) for the second hash \(h_2\). The result will automatically be a collision for \(h_1\) as well, as we used \(\{S, T\}\) as the alphabet.

This reduces the runtime to \(O(\sqrt{p_1}+\sqrt{p_2} )\); a sketch is given below. Note that this also works for combinations of a 30-bit prime hash and a hash mod \(2^{64}\) if we use the Thue–Morse sequence in place of the second birthday-attack. Similarly, we can use tree-attacks instead of birthday-attacks for larger moduli.
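A minimal sketch of the two-hash case (my code; it reuses poly_hash and birthday_attack from the sketches above, and the block count 40 is an assumption that leaves far more than \(\sqrt{p_2}\) distinct block sequences):

#include <bits/stdc++.h>
using namespace std;

// Step 1 collides h1; step 2 runs a birthday attack on h2 over the
// two-letter alphabet {S, T}, so every candidate already collides under h1.
pair<string, string> composition_attack(uint64_t p1, uint64_t a1,
                                        uint64_t p2, uint64_t a2,
                                        mt19937_64 &rng) {
    auto [S, T] = birthday_attack(p1, a1, /*len=*/8, rng);
    int blocks = 40;  // 2^40 possible sequences >> sqrt(p2) samples needed
    uniform_int_distribution<int> bit(0, 1);
    unordered_map<uint64_t, string> seen;
    for (;;) {
        string s;
        for (int i = 0; i < blocks; i++) s += bit(rng) ? S : T;
        uint64_t h = poly_hash(s, p2, a2);
        auto it = seen.find(h);
        if (it != seen.end() && it->second != s)
            return {it->second, s};  // collides under both h1 and h2
        seen[h] = s;
    }
}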

Another thing to note is that the string length grows rapidly in the number of hashes. (It is around \(2\log_{|\Sigma|}(\sqrt{ (2\ln 2)p_1} )\cdot\log_2 (\sqrt{ (2\ln 2)p_2} )\cdots \log_2(\sqrt{ (2\ln 2)p_k} )\), as the alphabet size is reduced to \(2\) after the first birthday-attack; the first factor is \(2\) in practice.) If we search for more than 2 strings with equal hash value in the intermediate steps, the alphabet size will be bigger, leading to shorter strings, but the birthday-attacks get slower (\(O(p^{\frac{2} {3} } )\) for \(3\) strings, for example).

Abusing bad randomization

On codeforces, quite a lot of people randomize their hashes. (Un-)fortunately, many of them do it in a suboptimal way. This section covers some of the ways people screw up their hash randomization, and ways to hack their code.

This section applies more generally to any type of randomized algorithm in an environment where other participants can hack your solutions.

Fixed seed

If the seed of the rng is fixed, it always produces the same sequence of random numbers. You can just run the code to see which numbers get randomly generated and then find an anti-hash test for those numbers.

Picking from a small pool of bases (rand() % 100)

Note that rand() % 100 produces at most 100 distinct values \([0,99]\). We can just find a separate anti-hash test for each of them and then combine the tests into a single one. (The way you combine tests is problem-specific, but it works for most problems.)

More issues with rand()

On codeforces, rand() produces only \(15\)-bit values, so at most \(2^{15}\) different values. While it may take a while to run \(2^{15}\) birthday-attacks (estimated \(111\) minutes for \(p = 10^9 + 7\) using a single thread on my laptop), this can cause some big issues with some other randomized algorithms.

Edit: This type of hack might be feasible if we use multi-tree-attacks. For \(p = 10^9+7\), \(|\Sigma| = 26\), running \(2^{15}\) multi-tree attacks with \(m = 10^4\) takes around \(2\) minutes and produces an output of \(5.2\cdot 10^5\) characters. This is still slightly too large for most problems, but it could be split up into multiple hacks in an open hacking phase, for example.

In C++11 you can use mt19937 and uniform_int_distribution instead of rand().

Low-precision time (time(NULL))

time(NULL) only changes once per second. This can be exploited as follows:

  1. Pick a timespan \(\Delta T\).
  2. Find an upper bound \(T\) for the time you'll need to generate your tests.
  3. Figure out the current value \(T_0\) of time(NULL) via custom invocation.
  4. For \(t = 0,\cdots,(\Delta T) - 1\), replace time(NULL) with \(T_0 + T + t\) and generate an anti-test for this fixed seed.
  5. Submit the hack at time \(T_0 + T\).

If your hack gets executed within the next \(\Delta T\) seconds, time(NULL) will be a value for which you generated an anti-test, so the solution will fail.

Random device on MinGW (std::random_device)

Note that on codeforces specifically, std::random_device is deterministic and will always produce the same sequence of numbers. Solutions using it can be hacked just like fixed-seed solutions.
