Imagine you are given a communication channel $latex C$, which takes in a message $latex x = x_1, \ldots, x_n$ of length $latex n$, with each $latex x_i$ taking values in an alphabet of size $latex k$. $latex C$ is noiseless, except for a malfunction: it deletes a set of indices $latex D \subset \{1, \ldots, n\}$ of size $latex d$, so the output is $latex C(x) = (x_{I_1}, \ldots, x_{I_{n-d}})$, where $latex I = \{1, \ldots, n\}\setminus D$ are the indices which are not deleted by $latex C$. If you can figure out what the set $latex D$ is, then you can again use $latex C$ as a noiseless channel by sending the actual message only in the indices $latex I$ and putting arbitrary symbols in $latex D$. The situation is diagrammed below in Figure 1 with $latex n = 8$, $latex k = 2$, and $latex D = \{4, 7\}$.
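As a concrete illustration, here is a minimal sketch of such a deletion channel in Python. The `channel` helper and the example message are our own choices for illustration, not part of the original setup:

```python
# A toy model of the deletion channel described above. The function name
# `channel` is our own choice for illustration, not from the original setup.

def channel(x, D):
    """Pass message x through a channel that deletes the (1-indexed) positions in D."""
    return [xi for i, xi in enumerate(x, start=1) if i not in D]

# The setting of Figure 1: n = 8, binary alphabet (k = 2), D = {4, 7},
# with an arbitrary example message.
x = [0, 1, 1, 0, 1, 0, 1, 1]
print(channel(x, {4, 7}))  # the 4th and 7th symbols are dropped: [0, 1, 1, 1, 0, 1]
```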

This is essentially the question we posed to Nixon Elementary School students on Information Theory Night. The implementation had a few kid-friendly changes. First, instead of abstract symbols, the messages were sequences of colored cards, which the kids got to choose. The channel itself was a cardboard box with holes on both sides for the messages to go through. To increase the mysteriousness, we draped a big black blanket over the apparatus, so that the whole thing had the appearance of a Houdini-esque magic trick.

To send messages, children would insert their flashcards one by one into the box, and one of us would sit behind it and choose which cards would get through to the other side. Another one of us would then lay out the cards that did get through cleanly on the other side, so that the kids could compare it to what they put in.

Our project had a few key strengths. The first was that it was interactive. We wanted the kids to really get a sense for the problem by playing with it with their own hands (literally). Instead of lecturing them, we tried to give them a chance to learn what worked and what didn’t through trial and error. This not only kept them engaged, but also gave them a rare opportunity to grapple with a problem that was completely new, one they didn’t know had a ‘right’ or ‘wrong’ answer. Instead of being told what to do, they got to experiment, which is always the first step when approaching any problem.

The next strength was that we could scale the difficulty of the problem, either by making them use fewer colors (a smaller alphabet) or by increasing the number of deletions. After solving the one-deletion case, many of them found that two deletions was quite a bit more difficult. This introduced them to the idea of generalizability, and showed how a problem which is trivial in one parameter regime can be much more nuanced in another.

Overall, the outreach project was a great experience. It was exciting to see children interact with a problem we had been working on. I also feel that the challenge of communicating the problem to them gave us a better understanding of what the important aspects of the problem really were.

Moving on from the outreach project, we will now formalize the problem at hand. We will mainly focus on the binary alphabet case, $latex k=2$. Our first observation is that, given a set of codewords to send through the channel, we can model the problem of identifying the deletion indices as solving a linear system.

Let $latex x_1, \ldots, x_m \in \mathbb{F}_2^n$ be the $latex m$ codewords we will send through the channel. We then let $latex X$ be the matrix whose columns are these codewords, that is,

$latex X = (x_1 \mid x_2 \mid \cdots \mid x_m).$

We then formulate the “deletion matrix” $latex P$, which, when multiplied by a codeword $latex x\in\mathbb{F}_2^n$, yields the result of passing $latex x$ through the channel, $latex y\in\mathbb{F}_2^{n-d}$. We do this by letting $latex P$ be a submatrix of the $latex n\times n$ identity matrix. In particular, if $latex D$ is the set of indices being deleted and we let $latex e_i$ be the one-hot vector in the $latex i$-th coordinate, let $latex P$ be the matrix whose rows are $latex \{e_i^\top\}_{i\not\in D}$, in ascending order of $latex i$. For example, if $latex n=4$ and $latex D=\{3\}$, then

$latex P = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}.$

Note that knowing $latex P$ tells us exactly which indices are being deleted.

We now observe that if we write $latex PX=Y$, with our newly defined matrices $latex P$ and $latex X$, then $latex Y$ is the matrix whose columns are the results of passing the columns of $latex X$, $latex x_1, \ldots, x_m \in \mathbb{F}_2^n$, through the channel. Thus, we now see that given a set of codewords, i.e. given $latex X$, identifying the deletion indices is equivalent to solving the linear system $latex PX=Y$ for $latex P$.
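This linear-system view is easy to check numerically. Below is a small sketch, assuming NumPy is available; the `deletion_matrix` helper and the example codewords are our own choices for illustration:

```python
# A sketch of the linear-system view, assuming NumPy is available.
# `deletion_matrix` is our own helper name for building P from D.
import numpy as np

def deletion_matrix(n, D):
    """The (n-d) x n submatrix of the identity keeping rows not indexed by D (1-indexed)."""
    keep = [i for i in range(n) if i + 1 not in D]
    return np.eye(n, dtype=int)[keep]

n, D = 4, {3}
P = deletion_matrix(n, D)          # the n = 4, D = {3} example from the text
X = np.array([[1, 0],
              [0, 1],
              [1, 1],
              [0, 0]])             # m = 2 arbitrary codewords as columns
Y = (P @ X) % 2                    # channel outputs, arithmetic over F_2
print(P)  # the identity with its 3rd row removed
print(Y)  # rows 1, 2, 4 of X survive
```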

The question is then how to design the matrix $latex X$ such that the system $latex PX=Y$ determines $latex P$ uniquely for every possible $latex P$, while minimizing the number $latex m$ of columns of $latex X$. In particular, we are interested in the minimal $latex m$, and the corresponding $latex X$.

We now note an easy upper bound of $latex \log_2(n)$ on the minimal such $latex m$ (for simplicity, assume $latex n$ is a power of two). In particular, we let $latex X$ be the $latex n\times \log_2(n)$ matrix whose rows are all the binary strings of length $latex \log_2(n)$. Upon observing the matrix $latex Y$, it suffices to check which of the length-$latex \log_2(n)$ binary strings are missing from the rows of $latex Y$.

Again, this is easiest to understand by way of example. Suppose we have a channel that deletes the indices $latex D=\{2, 4, 5\}$ from binary strings of length eight. Then, the matrix $latex P$ is given by

$latex P = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 & 1 \end{pmatrix}.$

We then let $latex X$ be the matrix

$latex X = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \\ 0 & 1 & 1 \\ 1 & 0 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}.$

We then have that $latex Y=PX$ is

$latex Y = \begin{pmatrix} 0 & 0 & 0 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \\ 1 & 1 & 0 \\ 1 & 1 & 1 \end{pmatrix}.$

We can see that $latex Y$ is missing the rows $latex (0~0~1), (0~1~1), (1~0~0)$, which tells us precisely that $latex D=\{2, 4, 5\}$.
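The decoding step can be sketched in a few lines of Python. This is a sketch assuming NumPy; the function `identify_deletions` and the variable names are our own, not from the post:

```python
# A sketch (assuming NumPy) of decoding D in the log2(n) scheme: the rows of X
# are the binary strings of length log2(n), so a deleted index shows up as a
# row of X that is missing from Y. Names here are our own, not from the post.
import numpy as np

def identify_deletions(X, Y):
    """Match Y's rows against X's rows in order; unmatched rows of X were deleted."""
    D, j = set(), 0
    for i, row in enumerate(X, start=1):
        if j < len(Y) and np.array_equal(row, Y[j]):
            j += 1
        else:
            D.add(i)
    return D

n = 8
X = np.array([[(i >> b) & 1 for b in (2, 1, 0)] for i in range(n)])  # rows 000..111
P = np.eye(n, dtype=int)[[i for i in range(n) if i + 1 not in {2, 4, 5}]]
Y = P @ X
print(identify_deletions(X, Y))  # recovers D = {2, 4, 5}
```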

Though this scheme works well when $latex d$ grows as a fraction of $latex n$, it is overkill when $latex d$ is a constant. For those who wish to think about this problem further, it might be interesting to consider this parameter regime, and also to think about how the algorithm could better take advantage of the fact that you can observe the output of the previous messages before choosing the next one to send. Our current solution ignores this fact, choosing all the messages at once, beforehand (the distinction is referred to as “adaptive” vs. “non-adaptive” algorithms).

There are interesting generalizations of deletion identification, to which this particular problem could be reduced, such as recovering arbitrary projection matrices (over finite fields). We are interested in a closed form for how many columns are needed. By reduction arguments, the case of deletions may give interesting bounds on these generalizations.

Finally, we would like to thank Prof. Weissman and the teaching staff of EE376A for their advice on this project as well as their hard work in organizing the outreach event.