A PyTorch Bug Caused by the Misused C++ Keyword __restrict__

When fuzzing the compilation stack in PyTorch 2.0 with NeuRI, we found some interesting bugs. Here we reveal one of them, caused by the misuse of the C++ keyword __restrict__.

The __restrict__ Keyword in C++

First, let’s take a look at the effect of the __restrict__ keyword in C++ with the simple example below.

// test.cpp
void f1(int* a, int* b, int* x) {
    *a += *x;
    *b += *x;
}

void f2(int* __restrict__ a, int* __restrict__ b, int* __restrict__ x) {
    *a += *x;
    *b += *x;
}

The difference between the two functions is that all the pointer arguments of f2 are decorated with __restrict__, while those of f1 are not. __restrict__ tells the compiler that these pointers do not alias, i.e., they never refer to the same memory address, which frees the compiler to perform certain optimizations. (Strictly speaking, restrict is a C99 keyword; standard C++ has no such keyword, and __restrict__ is a GCC/Clang extension providing the same semantics.) Let’s see how the compiler optimizes f2 through the assembly code.

We get the assembly code below by running:

clang-14 -c -g -O1 test.cpp -o test.o
llvm-objdump-14 -S test.o > test.S

; test.S
Disassembly of section .text:

0000000000000000 <_Z2f1PiS_S_>:
; *a += *x;
0: 8b 02 movl (%rdx), %eax
2: 01 07 addl %eax, (%rdi)
; *b += *x;
4: 8b 02 movl (%rdx), %eax
6: 01 06 addl %eax, (%rsi)
; }
8: c3 retq
9: 0f 1f 80 00 00 00 00 nopl (%rax)

0000000000000010 <_Z2f2PiS_S_>:
; *a += *x;
10: 8b 02 movl (%rdx), %eax
12: 01 07 addl %eax, (%rdi)
; *b += *x;
14: 01 06 addl %eax, (%rsi)
; }
16: c3 retq

The first half of the assembly code corresponds to f1, and the second half to f2. The only difference is that in the assembly for f2, the second movl (%rdx), %eax instruction is omitted.

Normally, *b += *x compiles into two x86 instructions, as in the assembly for f1. First, it loads *x from memory ((%rdx)) into a register (%eax); then it adds the value in that register to the data stored in memory at *b ((%rsi)). You may notice that *x was already loaded by the first instruction of *a += *x;, yet it must be loaded again for *b += *x; in case the data pointed to by x has changed in between, which is exactly what happens when a == x (a and x point to the same memory address).

However, decorating a and x with __restrict__ tells the compiler that they are different. As a result, the compiler assumes *x cannot change inside f2, so it loads it only once.
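
This optimization is only sound as long as the promise holds. As a sanity check of what happens when it is broken, here is a minimal driver (hypothetical; not part of the original experiment) that calls both functions with a == x. Passing aliasing pointers to f2 violates the __restrict__ contract and is undefined behavior, so its result may silently diverge from f1’s.

// demo.cpp (hypothetical driver; compile with, e.g., clang++ -O1 demo.cpp)
#include <cstdio>

void f1(int* a, int* b, int* x) {
    *a += *x;
    *b += *x;
}

void f2(int* __restrict__ a, int* __restrict__ b, int* __restrict__ x) {
    *a += *x; // with aliasing arguments, this is undefined behavior
    *b += *x;
}

int main() {
    int u = 1, v = 10;
    f1(&u, &v, &u);                  // a == x: *a becomes 2, then *b += 2
    printf("f1: u=%d v=%d\n", u, v); // prints f1: u=2 v=12

    int s = 1, t = 10;
    f2(&s, &t, &s);                  // violates __restrict__: UB
    printf("f2: s=%d t=%d\n", s, t); // may print f2: s=2 t=11 (stale *x reused)
    return 0;
}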

The PyTorch Bug

By running the fuzzer NeuRI, we found a bug that can be triggered by the code below.

import torch

p0 = torch.tensor([[4.9334, 5.5571]]) # (1, 2)

def fn():
    v7 = torch.cat([p0, p0], dim=0) # v7: (2, 2)
    v1 = torch.mul(v7, v7)          # v1: (2, 2)
    return v7, v1

ret_eager = fn()
compiled = torch.compile(fn)
ret_compiled = compiled()

assert torch.allclose(ret_eager[0], ret_compiled[0])
# ^^^ no error
assert torch.allclose(ret_eager[1], ret_compiled[1])
''' ^^^ WRONG!
AssertionError:
ret_eager[1] = tensor([[24.3384, 30.8814],
                       [24.3384, 30.8814]])
ret_compiled[1] = tensor([[0., 0.],
                          [0., 0.]])
'''

As you can see, fn is composed of two tensor operations. After compilation, it gives wrong results for the second return value v1: all values in v1 are zeros, which is incorrect.

What torch.compile does is generate a C++ kernel function to compute fn. We add some comments to help understand how the C++ function implements the Python function.

extern "C" void kernel(const float* __restrict__ in_ptr0, // p0
const float* __restrict__ in_ptr1, // p0
const float* __restrict__ in_ptr2, // v7
float* __restrict__ out_ptr0, // first half of v7
float* __restrict__ out_ptr1, // last half of v7
float* __restrict__ out_ptr2) // v1
{
{ // 1st part of cat operation: copy values in p0 to the first half of v7
#pragma GCC ivdep
for(long i0=0; i0<2; i0+=1)
{
auto tmp0 = in_ptr0[i0];
out_ptr0[i0] = tmp0;
}
}
{ // 2nd part of cat operation: copy values in p0 to the last half of v7
#pragma GCC ivdep
for(long i0=0; i0<2; i0+=1)
{
auto tmp0 = in_ptr1[i0];
out_ptr1[i0] = tmp0;
}
}
{ // mul operation: v1 <- element-wise multiplication of v7 and v7
#pragma GCC ivdep
for(long i0=0; i0<4; i0+=1)
{
auto tmp0 = in_ptr2[i0];
auto tmp1 = tmp0 * tmp0;
out_ptr2[i0] = tmp1;
}
}
}

As you can see, it uses __restrict__ for all pointer arguments, which claims that they never alias. But actually, they do. in_ptr2 points to the underlying buffer of tensor v7, while out_ptr0 points to the first half of that buffer and out_ptr1 to the last half. They overlap.

The values of v7 are changed by writing through out_ptr0 and out_ptr1 in the first two for loops. So when the last for loop reads v7 through in_ptr2, it must load the values after the writes through out_ptr0 and out_ptr1; if the compiler moves those loads earlier, it has to reload the data to ensure correctness. Otherwise, the old values stored in v7's buffer are used for the multiplication. I guess that is why the compiled function gives zeros.
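
To make this hazard concrete outside PyTorch, below is a minimal standalone sketch (hypothetical names; a reduction of the aliasing pattern, not the generated kernel) with the same shape: one buffer written through one __restrict__ pointer and then read back through another. Calling it with overlapping pointers is undefined behavior, and a compiler that trusts __restrict__ is free to reorder or cache the loads and hand back stale values.

// overlap.cpp (hypothetical reduction of the kernel's aliasing pattern)
#include <cstdio>

void kernel_like(const float* __restrict__ in_ptr0, // source data (plays the role of p0)
                 float* __restrict__ out_ptr0,      // writes into the buffer
                 const float* __restrict__ in_ptr2, // reads the buffer back: aliases out_ptr0!
                 float* __restrict__ out_ptr2)      // result (plays the role of v1)
{
    for (long i0 = 0; i0 < 2; i0 += 1)
        out_ptr0[i0] = in_ptr0[i0];               // fill the buffer
    for (long i0 = 0; i0 < 2; i0 += 1)
        out_ptr2[i0] = in_ptr2[i0] * in_ptr2[i0]; // square what was just written
}

int main() {
    float p0[2]  = {4.9334f, 5.5571f};
    float buf[2] = {0.0f, 0.0f}; // stands in for v7's stale storage
    float v1[2]  = {0.0f, 0.0f};
    kernel_like(p0, buf, buf, v1); // out_ptr0 and in_ptr2 overlap: UB
    // The correct answer is 24.3384 30.8814; a compiler exploiting
    // __restrict__ may hoist the loads before the writes and print 0 0.
    printf("%f %f\n", v1[0], v1[1]);
    return 0;
}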

Finally, the developers fixed this bug by removing the __restrict__ keyword from the code generation.
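
After that fix, the generated signature looks roughly like the sketch below (not the exact generated code):

extern "C" void kernel(const float* in_ptr0,
                       const float* in_ptr1,
                       const float* in_ptr2,
                       float* out_ptr0,
                       float* out_ptr1,
                       float* out_ptr2)
{
    // ... same loop bodies as before; without __restrict__, the compiler
    // must assume the pointers may alias and keep the reads of v7 ordered
    // after the writes through out_ptr0 and out_ptr1.
}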

However, I could not reproduce the wrong behavior caused by __restrict__ at the assembly level. I compiled the C++ function above with clang-14 -c -g -O3 k.cpp -o k.o && llvm-objdump-14 -S k.o and got the assembly code below.

0000000000000000 <kernel>:
; out_ptr0[i0] = tmp0;
0: 48 8b 07 movq (%rdi), %rax
3: 48 89 01 movq %rax, (%rcx)
; out_ptr1[i0] = tmp0;
6: 48 8b 06 movq (%rsi), %rax
9: 49 89 00 movq %rax, (%r8)
c: 31 c0 xorl %eax, %eax
e: 66 90 nop
; auto tmp0 = in_ptr2[i0];
10: f3 0f 10 04 82 movss (%rdx,%rax,4), %xmm0 # xmm0 = mem[0],zero,zero,zero
; auto tmp1 = tmp0 * tmp0;
15: f3 0f 59 c0 mulss %xmm0, %xmm0
; out_ptr2[i0] = tmp1;
19: f3 41 0f 11 04 81 movss %xmm0, (%r9,%rax,4)
; for(long i0=0; i0<4; i0+=1)
1f: 48 83 c0 01 addq $1, %rax
23: 48 83 f8 04 cmpq $4, %rax
27: 75 e7 jne 0x10 <kernel+0x10>
; }
29: c3 retq

I think this assembly works correctly, since it loads the operands for the multiplication with movss (%rdx,%rax,4), %xmm0, i.e., it reads the values from memory inside the loop instead of reusing data loaded before the writes. So I do not know exactly why the compiled function in PyTorch gives wrong results; we can only say that, by its definition, __restrict__ should not have been used there.