Representation of complex numbers in redundant numeration systems | Introduction
Positional numeration systems have a long history. Over centuries, civilizations have slowly accepted them as the most practical way to denote numbers. Several different numbers have been used as the base, including twenty, sixty, twelve and even six, but we have mostly settled on ten — the number of fingers on both our hands.
Only recently, with the advent of computers, have matters changed yet again. While we still use the decimal system, virtually all modern electronic devices use base two, the binary system. There is a good reason for this: it is much easier to design electronic circuits that handle only two states, off and on, representing the digits 0 and 1, than ten different states. Decimal computers have been built, but they are now long lost in history.
All the aforementioned positional numeration systems have one thing in common: they use the non-negative integers strictly smaller than the base as the digits. However, this is far from the only possible choice of digit set. A few computers built in the Soviet Union [ternary-computers] [ternary-computers-museum] used balanced ternary: a system with base three whose digits are -1, 0 and 1. This has a number of advantages, the biggest one being that negative numbers require no special treatment and negation is trivial: one simply flips the sign of every digit.
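As an illustration, balanced-ternary conversion and digit-wise negation can be sketched as follows (the function names and the Python rendering are ours, not taken from the cited works):

```python
def to_balanced_ternary(n):
    """Balanced-ternary digits of n, least significant first (empty = 0)."""
    digits = []
    while n != 0:
        r = n % 3              # remainder in {0, 1, 2}
        if r == 2:
            r = -1             # rewrite 2 as -1, carrying 1 upward
        digits.append(r)
        n = (n - r) // 3
    return digits

def from_digits(digits, base=3):
    """Evaluate a digit list, least significant digit first."""
    return sum(d * base**i for i, d in enumerate(digits))

# 11 = 9 + 3 - 1, and negation just flips the sign of every digit.
d = to_balanced_ternary(11)
assert d == [-1, 1, 1]
assert from_digits([-x for x in d]) == -11
```

Note that the same digit flip works for every number, positive or negative, which is exactly the "no special treatment" property mentioned above.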
Multiplication in both binary and balanced ternary uses the same "long multiplication" algorithm taught in elementary school, with the notable advantage that the product of two digits is always a single digit. This completely eliminates overflows during the multiplication phase and allows it to be parallelized easily. However, it does not help with the addition phase: because of carries, any digit of the result may depend on every less significant digit of the addends.
It turns out that if we use the digits -1, 0 and 1 of balanced ternary, but in base two, there is a way to perform addition in parallel. In this signed binary system, numbers have multiple representations, which allows a certain degree of choice. As shown in [binary-arithmetic], the performance of addition and multiplication is optimal if we minimize the number of non-zero digits, i.e. the Hamming weight. This is achieved by representations in the non-adjacent form (NAF), where each pair of adjacent digits contains at most one non-zero digit. Such a representation exists and is unique for every number.
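The classical right-to-left NAF recoding of an integer can be sketched like this (a standard algorithm, shown here only to illustrate the definition):

```python
def naf(n):
    """Non-adjacent form of n: digits -1, 0, 1 (LSD first) with no two
    adjacent non-zero digits."""
    digits = []
    while n != 0:
        if n % 2 == 0:
            d = 0
        else:
            d = 2 - (n % 4)    # 1 if n = 1 (mod 4), -1 if n = 3 (mod 4)
            n -= d
        digits.append(d)
        n //= 2
    return digits

# 7 is 111 in binary (weight 3), but its NAF is 8 - 1 (weight 2).
assert naf(7) == [-1, 0, 0, 1]
```

Choosing the odd digit so that the remaining number is divisible by 4 is what forces the next digit to be zero, hence the non-adjacency.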
In [optimality-width-w], this idea is generalized to other numeration systems, with emphasis on the case of integers in the complex plane. The NAF condition is generalized to the w-NAF condition, which requires every w adjacent digits to contain at most one non-zero digit. It is shown that if a digit set satisfies a certain condition (being a minimal norm representatives digit set), then each number has a unique w-NAF representation, and if the digit set is moreover w-subadditive, then these representations are optimal, i.e. have the minimal Hamming weight.
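For ordinary rational integers the width-w recoding admits a similar sketch; the complex-plane setting of [optimality-width-w] replaces the digit choice below by minimal norm representatives, which we do not attempt here:

```python
def wnaf(n, w):
    """Width-w NAF of n (LSD first): every non-zero digit is odd with
    |d| < 2**(w-1), and any w consecutive digits contain at most one
    non-zero digit."""
    digits = []
    while n != 0:
        if n % 2:
            d = n % 2**w           # residue in [0, 2**w)
            if d >= 2**(w - 1):
                d -= 2**w          # shift into (-2**(w-1), 2**(w-1))
            n -= d                 # now n is divisible by 2**w
        else:
            d = 0
        digits.append(d)
        n //= 2
    return digits

# 23 = 3 * 8 - 1, a weight-2 representation for w = 3.
assert wnaf(23, 3) == [-1, 0, 0, 3]
```

With w = 2 this reduces to the ordinary NAF; larger w trades a bigger digit set for sparser representations.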
Aside from parallel arithmetic, w-NAF representations are also useful in elliptic curve cryptography [optimality-width-w]. For this application, it is important to know how many optimal representations (those of minimal Hamming weight) each number has and, in particular, what the upper bound is for a given weight. In [num-binary-signed-reprs], it is shown that for numbers in the signed binary system, the number of optimal representations is bounded in terms of the Fibonacci sequence.
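The counting question can be explored by brute force for small numbers; the sketch below enumerates all signed binary digit strings of a fixed length and keeps those of minimal Hamming weight (the fixed length and the exhaustive search are our simplifications, not the method of [num-binary-signed-reprs]):

```python
from itertools import product

def optimal_reprs(n, length):
    """All signed binary digit strings (digits -1, 0, 1, LSD first) of
    the given length that represent n with minimal Hamming weight."""
    best, found = None, []
    for digits in product((-1, 0, 1), repeat=length):
        if sum(d * 2**i for i, d in enumerate(digits)) != n:
            continue
        weight = sum(d != 0 for d in digits)
        if best is None or weight < best:
            best, found = weight, [digits]
        elif weight == best:
            found.append(digits)
    return found

# 3 has exactly two optimal representations of length 4, both of
# Hamming weight 2: 1 + 2 and 4 - 1.
assert sorted(optimal_reprs(3, 4)) == [(-1, 0, 1, 0), (1, 1, 0, 0)]
```

Counting such families for growing weight is what produces the Fibonacci-type bounds studied in [num-binary-signed-reprs].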
In this project, we focus on w-NAF representations of the Gaussian and Eisenstein integers, whose minimal norm representatives digit sets are closed under multiplication. We establish a bound on the number of equivalent optimal representations of a given Hamming weight, analogous to the result of [num-binary-signed-reprs].