# Division by constant signed integers

The code accompanying this article can be found in a GitHub repository.

Division is a relatively slow operation. When the divisor is constant, the division can be optimized significantly. In [1] I explored how this can be done for unsigned integers. In this follow-up article, I cover how we can optimize division by constant signed integers. This article should be read as a continuation of [1]. This article essentially provides the same information as [2].

I assume that, as in most programming languages, the result of signed division is rounded toward zero. This presents some challenges which differ from those of optimizing unsigned division. We are only dealing with numbers with a magnitude of at most $2^{N-1}$, and we will see that this means we can always use the round-up method described in [1]. The challenge consists of efficiently rounding up the quotient when $n$ is negative.

## Mathematical background

### Preliminaries

I will assume that we are working on an $N$-bit machine which can efficiently compute the full $2N$-bit product of two $N$-bit signed integers. I will use the notation $U_{N}$ for the set of unsigned integers that can be represented with $N$ bits:

$$U_N = \{ 0, 1, ..., 2^N - 1 \}$$

Likewise, I will use the notation $S_N$ for the set of signed integers that can be represented with $N$ bits:

$$S_N = \{ -2^{N-1}, -2^{N-1} + 1, ..., 2^{N-1} - 1 \}$$

When $A$ and $B$ are sets, the set $A \setminus B$ denotes the set of elements of $A$ that are not in $B$.

For some real number $x \in \mathbb{R}$, I will denote the absolute value of $x$ by $|x|$. That is:

$$|x| = \begin{cases} x & \text{when } x \geq 0 \\ -x & \text{otherwise} \end{cases}$$

I will use the notation $\lfloor x \rfloor$ for the biggest integer smaller than or equal to $x$, and $\lceil x \rceil$ for the smallest integer bigger than or equal to $x$. I will use $[x]$ to denote the value of $x$ rounded toward zero. That is:

$$[x] = \begin{cases} \lceil x \rceil & \text{when } x < 0 \\ \lfloor x \rfloor & \text{when } x \geq 0 \end{cases}$$

I will use the notation $\text{sgn}(x)$ for the *sign function*:

$$\text{sgn}(x) = \begin{cases} 1 & \text{when } x > 0 \\ 0 & \text{when } x = 0 \\ -1 & \text{when } x < 0 \end{cases}$$

Finally, I will use the notation $1_P$, where $P$ is a predicate, to denote the *characteristic function*:

$$1_P = \begin{cases} 1 & \text{when } P \text{ holds} \\ 0 & \text{otherwise} \end{cases}$$

### Signed division

Supposing that $d$ is positive, we want to evaluate

$$\left[ \frac{n}{d} \right] = \begin{cases} \lceil \frac{n}{d} \rceil & \text{when } n < 0 \\ \lfloor \frac{n}{d} \rfloor & \text{when } n \geq 0 \end{cases}$$

So, it is natural to try to find two expressions, one which equals $\lfloor \frac{n}{d} \rfloor$ and one which equals $\lceil \frac{n}{d} \rceil$.

In [1], we saw that for every positive integer $d$ we can always find an $(N+1)$-bit magic number $m$ such that $\lfloor \frac{m \cdot n}{2^{N+\ell}} \rfloor = \lfloor \frac{n}{d} \rfloor$ for every $n \in U_N$. For unsigned division this was a problem, since we want $m$ to fit in $N$ bits. For nonnegative $n \in S_N$ we have $n \in U_{N-1}$, so in this case $m$ will always fit in $N$ bits.

**Corollary 11**: *Let $d, N, \ell \in \mathbb{N}_0$ with $N, d > 0$. If there exists an $m \in \mathbb{N}_+$ with*

$$2^{N-1+\ell} \leq m \cdot d \leq 2^{N-1+\ell} + 2^{\ell}$$

*then $\lfloor \frac{m \cdot n}{2^{N-1+\ell}} \rfloor = \lfloor \frac{n}{d} \rfloor$ for all nonnegative $n \in S_N$*.

**Proof**: Note that the set of nonnegative $n$ in $S_{N}$ is exactly $U_{N−1}$. So this is just theorem 2 from [1] with $N$ replaced by $N−1$.

$□$

We will now prove a similar result for negative $n∈S_{N}$. The following lemma will come in handy.

**Lemma 12**: *Let $n \in \mathbb{Z}$, $d \in \mathbb{N}_+$, and $x \in \mathbb{R}$. When*

$$\frac{n-1}{d} \leq x < \frac{n}{d}$$

*then*

$$\lfloor x \rfloor = \left\lceil \frac{n}{d} \right\rceil - 1$$

**Proof**: Set $\frac{n-1}{d} = a + \frac{b}{d}$ with $a \in \mathbb{Z}$, $b \in \{0, 1, ..., d-1\}$. Then $\frac{n}{d} = a + \frac{b+1}{d}$. Now $\lfloor \frac{n-1}{d} \rfloor = a$ and $\lceil \frac{n}{d} \rceil = a + 1$, so that $\lceil \frac{n}{d} \rceil = \lfloor \frac{n-1}{d} \rfloor + 1$. The condition on $x$ gives $\lfloor \frac{n-1}{d} \rfloor \leq \lfloor x \rfloor < \lfloor \frac{n-1}{d} \rfloor + 1$. Since $\lfloor x \rfloor$ is an integer, it follows that $\lfloor x \rfloor = \lfloor \frac{n-1}{d} \rfloor = \lceil \frac{n}{d} \rceil - 1$.

$□$

The following lemma gives us a way to calculate $\lceil \frac{n}{d} \rceil$ from the same expression $\lfloor \frac{m \cdot n}{2^{N-1+\ell}} \rfloor$ when $n$ is negative.

**Lemma 13**: *Let $d, m, N \in \mathbb{N}_+$ and $\ell \in \mathbb{N}_0$. If*

$$2^{N-1+\ell} < m \cdot d \leq 2^{N-1+\ell} + 2^{\ell}$$

*then*

$$\left\lceil \frac{n}{d} \right\rceil = \left\lfloor \frac{m \cdot n}{2^{N-1+\ell}} \right\rfloor + 1$$

*for all negative $n \in S_N$.*

**Proof**: Multiply $2^{N-1+\ell} < m \cdot d \leq 2^{N-1+\ell} + 2^{\ell}$ by $\frac{n}{d \cdot 2^{N-1+\ell}}$. Remember that $n$ is negative, so the inequality ‘flips’ and we get $\frac{n}{d} + \frac{n}{2^{N-1}} \cdot \frac{1}{d} \leq \frac{m \cdot n}{2^{N-1+\ell}} < \frac{n}{d}$. Now, using that $-2^{N-1} \leq n$, we see that $-1 \leq \frac{n}{2^{N-1}}$, so we have $\frac{n}{d} - \frac{1}{d} \leq \frac{m \cdot n}{2^{N-1+\ell}} < \frac{n}{d}$. The result now follows from lemma 12.

$□$

So, if we find an $m$ such that $2^{N-1+\ell} < m \cdot d \leq 2^{N-1+\ell} + 2^{\ell}$, we have $[\frac{n}{d}] = \lfloor \frac{m \cdot n}{2^{N-1+\ell}} \rfloor + 1_{n<0}$. The following theorem extends this result to the case where $d$ is negative and shows us how to pick $m$.

**Theorem 14**: *Let $d$ and $N$ be integers with $N > 0$ and $d \neq 0$, and define $\ell = \lceil \log_2(|d|) \rceil$ and $m = \lfloor \frac{2^{N-1+\ell}}{|d|} \rfloor + 1$. Then*

$$\left[ \frac{n}{d} \right] = \text{sgn}(d) \cdot \left( \left\lfloor \frac{m \cdot n}{2^{N-1+\ell}} \right\rfloor + 1_{n<0} \right)$$

*for all $n \in S_N$.*

**Proof**: First, observe that $m \cdot |d|$ is simply the first multiple of $|d|$ larger than $2^{N-1+\ell}$. Since there are $2^{\ell} = 2^{\lceil \log_2(|d|) \rceil} \geq |d|$ integers in the range $(2^{N-1+\ell}, 2^{N-1+\ell} + 2^{\ell}]$, there must be at least one multiple of $|d|$ in this range. So we have $2^{N-1+\ell} < m \cdot |d| \leq 2^{N-1+\ell} + 2^{\ell}$. Using corollary 11, we see that $\lfloor \frac{m \cdot n}{2^{N-1+\ell}} \rfloor + 1_{n<0} = \lfloor \frac{m \cdot n}{2^{N-1+\ell}} \rfloor = \lfloor \frac{n}{|d|} \rfloor$ for nonnegative $n \in S_N$. Using lemma 13, we see that $\lfloor \frac{m \cdot n}{2^{N-1+\ell}} \rfloor + 1_{n<0} = \lfloor \frac{m \cdot n}{2^{N-1+\ell}} \rfloor + 1 = \lceil \frac{n}{|d|} \rceil$ for negative $n \in S_N$. So $[\frac{n}{|d|}] = \lfloor \frac{m \cdot n}{2^{N-1+\ell}} \rfloor + 1_{n<0}$ for all $n \in S_N$. Using $[\frac{n}{d}] = \text{sgn}(d) \cdot [\frac{n}{|d|}]$, the result follows.

$□$

Theorem 14 gives us the magic number $m = \lfloor \frac{2^{N-1+\lceil \log_2(|d|) \rceil}}{|d|} \rfloor + 1$, which is a positive number that fits in $N$ bits. In the following section on implementation, we will see that multiplying an $N$-bit signed number $n \in S_N$ by an $N$-bit unsigned number $m \in U_N \setminus U_{N-1}$ is slightly less efficient than multiplying an $N$-bit signed number by an $(N-1)$-bit unsigned number. For some divisors, we can use the following result to reduce the number of bits that we need to represent $m$.

**Corollary 16**: *Let $d \in S_N$ be a positive integer that is not a power of two, let $m \in U_N$, and let $0 < \ell \leq \lfloor \log_2(d) \rfloor$, such that $m, \ell$ satisfy the condition of corollary 11 and lemma 13:*

$$2^{N-1+\ell} < m \cdot d \leq 2^{N-1+\ell} + 2^{\ell}$$

*If $m$ is even, then this condition also holds for $m' = \frac{m}{2}$, $\ell' = \ell - 1$, so we have $2^{N-1+\ell'} < m' \cdot d \leq 2^{N-1+\ell'} + 2^{\ell'}$. If $m$ is odd, then it is the smallest integer for which this condition holds.*

**Proof**: This follows directly from theorem 9 from [1].

$□$

## Implementation

In this section, I use the `uint` and `sint` datatypes, which are an $N$-bit unsigned integer and an $N$-bit signed integer, respectively. I try to provide a general strategy that should work well on most instruction set architectures. Variations in the implementation might give a more efficient result. In general, you should always benchmark your implementation if performance is critical.
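For concreteness, with $N = 32$ on a typical platform, these datatypes could be defined as follows (an illustrative assumption; the article keeps them abstract):

```c
#include <stdint.h>

#define N 32                 // word size in bits

typedef uint32_t uint;       // N-bit unsigned integer
typedef int32_t  sint;       // N-bit signed integer
typedef uint64_t big_uint;   // 2N-bit unsigned integer
typedef int64_t  big_sint;   // 2N-bit signed integer
```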

While theorem 14 in the previous section seems to provide a straightforward method to compute the quotient $[\frac{n}{d}]$ for any $n, d \in S_N$, there is one subtlety we glossed over. In theorem 14, we use the $2N$-bit expression $m \cdot n$, where $m \in U_N$ is an unsigned value and $n \in S_N$ is a signed value. While most processors have instructions to compute the full $2N$-bit product of two $N$-bit unsigned integers or two $N$-bit signed integers, most processors do not provide an instruction to compute the $2N$-bit product of an $N$-bit unsigned integer and an $N$-bit signed integer.

While it is also possible to compute the product $m⋅n$ by first extending $m$ and $n$ to $2N$-bit signed values and computing the product of those extended values, this is less efficient.

### Computing the product of an unsigned and a signed value

In this section, consider $m$ and $n$ to be $N$-bit bit strings. So these variables no longer represent a number, but purely a series of bits, which can hold a zero or a one. So we write $m=m_{N−1}m_{N−2}...m_{1}m_{0}$, where $m_{N−1},...,m_{0}∈{0,1}$ are the individual bits.

Now, we can provide a string $m$ with a value when we interpret it as either an unsigned value $(m)_{u}$, or as a signed value $(m)_{s}$. These interpretations are defined as

$$(m)_u = \sum_{k=0}^{N-1} 2^k m_k$$

and

$$(m)_s = -2^{N-1} m_{N-1} + \sum_{k=0}^{N-2} 2^k m_k$$

We see that $(m)_u = (m)_s$ when $m_{N-1} = 0$, and $(m)_u = (m)_s + 2^N$ when $m_{N-1} = 1$. So when $m_{N-1} = 0$ we have

$$(m)_u \cdot (n)_s = (m)_s \cdot (n)_s$$

So in this case we can just use signed multiplication. When $m_{N-1} = 1$ we have

$$(m)_u \cdot (n)_s = ((m)_s + 2^N) \cdot (n)_s = (m)_s \cdot (n)_s + 2^N \cdot (n)_s$$

So the upper $N$ bits of the product $(m)_u \cdot (n)_s$ equal $\lfloor \frac{(m)_s \cdot (n)_s}{2^N} \rfloor + (n)_s$. This expression can be evaluated by multiplying $m$ and $n$ as if they were signed numbers, taking the upper $N$ bits, and adding $n$ to this.

### Runtime optimization

Consider the following code:

```
sint d = read_divisor();
for (int i = 0; i < size; i++) {
    quotient[i] = dividend[i] / d;
}
```

The value of the divisor `d` is not known at compile time, but once it is read at runtime, it does not change. As such, we consider `d` to be a runtime constant, and we can optimize this code in the following way:

```
sint d = read_divisor();
divdata_t divisor_data = precompute(d);
for (int i = 0; i < size; i++) {
    quotient[i] = fast_divide(dividend[i], divisor_data);
}
```

Now, the `divdata_t` datatype needs to hold $m$, the number of bits to shift, and some field to indicate that we should negate the result $[\frac{n}{|d|}]$ when $d$ is negative:

```
typedef struct {
    uint mul;
    uint shift;
    bool negative;
} divdata_t;
```

Now, we compute $\ell$ such that $m$ always has the most significant bit set. This way, we can always compute the upper $N$ bits of the product $m \cdot n$ in `fast_divide` by taking the signed product, taking the upper $N$ bits, and adding $n$ to this.

The precomputation is now a relatively straightforward implementation of theorem 14:

```
divdata_t precompute(sint d) {
    divdata_t divdata;
    uint d_abs = abs(d);
    // Compute l = ceil(log2(d_abs))
    uint l = floor_log2(d_abs);
    if (((uint)1 << l) < d_abs) l++;
    // Handle case |d| = 1
    if (d_abs == 1) l = 1;
    // Compute m = floor(2^(N - 1 + l) / |d|) + 1
    uint m = (((big_uint)1) << (N - 1 + l)) / d_abs + 1;
    divdata.mul = m;
    divdata.negative = d < 0;
    divdata.shift = l - 1;
    return divdata;
}
```

It should be noted that in the `fast_divide` function, the right shift by $N - 1 + \ell$ is implemented by taking the upper $N$ bits of the product $m \cdot n$ and shifting this right by $\ell - 1$ bits. If $\ell = 0$ this is not possible, since in this case we would need to shift right by $N - 1$ bits, but taking the $N$ upper bits is already equivalent to a right shift of $N$ bits. We can fix this by simply setting $\ell = 1$ when $|d| = 1$. In this case the expression for $m$ becomes $2^N + 1$, which will overflow to simply $1$. Now, while theorem 14 doesn't hold anymore for $m = 1$, the calculation of the product $m \cdot n$ in `fast_divide` assumes that the most significant bit of $m$ is set, so we will end up with the correct value.

The `fast_divide` function has a lot of steps, but every step should be understandable.

```
sint fast_divide(sint n, divdata_t dd) {
    big_sint full_signed_product = ((big_sint)n) * (sint)dd.mul;
    sint high_word_of_signed_product = full_signed_product >> N;
    // Correct to the high word of the unsigned-times-signed product m * n
    sint high_word_of_unsigned_product = high_word_of_signed_product + n;
    sint rounded_down_quotient = high_word_of_unsigned_product >> dd.shift;
    // Add one when n is negative to round toward zero
    sint quotient_rounded_toward_zero = rounded_down_quotient - (n >> (N - 1));
    if (dd.negative) {
        quotient_rounded_toward_zero = -quotient_rounded_toward_zero;
    }
    return quotient_rounded_toward_zero;
}
```

### Compile-time optimization

In this section, I will consider how to generate optimized code for division by compile-time constant signed integers.

Most of the tricks that are applicable to efficiently calculate a quotient of unsigned integers also apply to signed integers, although we might have to do some work to handle negative integers. Of course, a division by one can be ignored and a division by minus one is equivalent to a negation. For some instruction-set architectures, it might be beneficial to implement a special case for big divisors with an absolute value of more than $2^{N-2}$. In this case, the value of the quotient $[\frac{n}{d}]$ is $\text{sgn}(n) \cdot \text{sgn}(d)$ when $|n| \geq |d|$ and zero otherwise.

```
expression_t div_by_const_sint(const sint d, expression_t n) {
    if (d == 1) return n;
    if (d == -1) return neg(n);
    uint d_abs = abs(d);
    if (is_power_of_two(d_abs)) return div_by_const_signed_power_of_two(n, d);
    return div_fixpoint(d, n);
}
```

Let us first consider the case where $|d| = 2^{\ell}$ is a power of two. If we do an arithmetic right shift by $\ell$ bits, the result will be correct when $n$ is positive. However, this will round down the quotient when $n$ is negative. In this case we can add $2^{\ell} - 1$ to $n$ in order to round up.

So, we would like to have an expression which equals $2^{\ell} - 1$ when $n$ is negative and $0$ otherwise, so that we can simply add this to $n$. The value $2^{\ell} - 1$ consists of $\ell$ consecutive ones in the binary representation. It can be created by doing an arithmetic right shift by $\ell - 1$ bits on $n$, and shifting the result right by $N - \ell$ bits (with a logical right shift). The first shift leaves $\ell$ ones in the $\ell$ most significant bits when $n$ is negative (the sign bit plus $\ell - 1$ copies of it); these bits are zero otherwise. The second shift puts them in the least significant bit positions.

```
expression_t div_by_const_signed_power_of_two(expression_t n, sint d) {
    uint d_abs = abs(d);
    int l = floor_log2(d_abs);
    // addme equals 2^l - 1 when n is negative and 0 otherwise.
    // We need to add this to n to round towards zero.
    expression_t addme = shr(sar(n, constant(l - 1)), constant(N - l));
    expression_t result = sar(add(n, addme), constant(l));
    if (d < 0) result = neg(result);
    return result;
}
```

## References

[1] Division by constant unsigned integers, Ruben van Nieuwpoort, https://rubenvannieuwpoort.nl/posts/division-by-constant-unsigned-integers

[2] Division by Invariant Integers using Multiplication, Torbjörn Granlund and Peter L. Montgomery, 1994.