Authors:
Lauren Kim, Fernando Lopez, Byron Xu, and Matthew Chun
Abstract:
Consider a function $f\colon \{0,1\}^n \to \{0,1\}$, that is, a function that maps each of the $2^n$ vertices of the $n$-dimensional hypercube to $0$ or $1$. Suppose exactly half of the vertices map to $0$, while the other half map to $1$. There are therefore $\binom{2^n}{2^{n-1}}$ distinct such functions. Let
$$A(f) = \frac{1}{2^n} \sum_{x \in \{0,1\}^n} \sqrt{\sum_{i=1}^{n} \left(f(x) - f(\sigma_i(x))\right)^2},$$
where $x \in \{0,1\}^n$ and $\sigma_i(x)$ denotes the neighbor of $x$ in the $i$-th coordinate. In this study, we aim to find information about the minimum value of $A(f)$ as $f$ varies. This lower bound has implications in both probability theory and information theory: the input of $f$ is a string of binary characters mapping to a Boolean $0$ or $1$ output, and the stability of such a Boolean function under single-coordinate changes is governed by lower bounds on averages of the kind shown above. We proved the described lower bound for dimensions 3 and 4, and also found the lower bounds for other, non-half-half coloring distributions in those dimensions. Via an approximation, we obtained results in dimensions 5 and 6 as well. We then compare our results to the previously known, and smaller, lower bounds.
1. Methods
In this paper, we will colloquially refer to the 0-1 mappings $f$ described above on the discrete hypercube as "colorings." In order to represent lower bounds graphically, we define
$$L(p, n) = \min_{f} A(f),$$
where $p$ is the fraction of all vertices mapping to $0$, $n$ is the dimension of the hypercube, and the minimum is taken over all colorings of the $n$-dimensional hypercube described by $p$ and $n$. We'll also refer to $L(p, n)$ in any arbitrary dimension as simply $L(p)$.
In order to generate our curves for $L(p, n)$ computationally, we designed an algorithm that, given a dimension, cycles through every coloring and calculates $A(f)$ on each one. We used a binary representation to characterize each distinct vertex coordinate of the hypercube, placing each one in an array.
This algorithm produces an array containing the coordinates of all of the hypercube's vertices. For example, with dimension $n = 3$, the array would be $[000, 001, 010, 011, 100, 101, 110, 111]$. Of course, there are $2^3 = 8$ members of this list (because an $n$-dimensional hypercube has $2^n$ vertices), each of which could map to either $0$ or $1$. That means we can use, for dimension 3, an eight-character binary string to represent the color of each item in the array. But in order to generate all the colorings from $00000000$ to $11111111$, we need the binary counter to loop up to $2^{2^3} = 256$.
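A minimal Python sketch of this enumeration (the function names here are illustrative, not our exact code):

```python
def vertex_labels(n):
    """Return the 2**n vertex coordinates of the n-cube as binary strings."""
    return [format(v, f"0{n}b") for v in range(2 ** n)]

def all_colorings(n):
    """Yield every coloring as a binary string of length 2**n (one character
    per vertex), counting a binary number from 0 up to 2**(2**n) - 1."""
    N = 2 ** n
    for mask in range(2 ** N):
        yield format(mask, f"0{N}b")

print(vertex_labels(3))  # ['000', '001', '010', '011', '100', '101', '110', '111']
```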
We can then construct a process to calculate $A(f)$ for all of those colorings. After calculating $A(f)$ for each coloring, it checks whether the value is smaller than the current lowest value and, if so, records it as the new minimum; the overall minimum is $L(p, n)$.
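A minimal sketch of this minimization, matching the definition of $A(f)$ above (again, names are illustrative):

```python
from math import sqrt

def A(coloring, n):
    """Average over all vertices x of the square root of the sum of squared
    differences between f(x) and f at each of its n coordinate neighbors."""
    N = 2 ** n
    total = 0.0
    for v in range(N):
        # v ^ (1 << i) flips the i-th bit of v, giving the i-th neighbor of v
        s = sum((coloring[v] - coloring[v ^ (1 << i)]) ** 2 for i in range(n))
        total += sqrt(s)
    return total / N

def L(zeros, n):
    """Minimum of A(f) over all colorings with exactly `zeros` vertices
    mapped to 0, i.e., L(p, n) with p = zeros / 2**n."""
    N = 2 ** n
    best = float("inf")
    for mask in range(2 ** N):
        coloring = [(mask >> v) & 1 for v in range(N)]
        if coloring.count(0) == zeros:
            best = min(best, A(coloring, n))
    return best

print(L(4, 3))  # L(1/2, 3): minimum over the 70 half-half colorings of the 3-cube
```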
For dimensions 5 and 6, we did not have enough computing power to cycle through every coloring in order to find the lower bound of the average $A(f)$. However, we used a symmetry argument to obtain an approximation of the curve in those dimensions. Namely, in every dimension there exist many colorings with equivalent values of $A(f)$ because they are symmetric images of one another on the hypercube. We searched through several million of the colorings and found that nearly all of the lowest values in that sample were located within the first few thousand. Because the number of vertices increases exponentially with dimension, so does the redundancy caused by symmetry in the cube. We therefore used this smaller search window to approximate the true curve. Because our algorithm identifies the lowest values within its search parameters, the approximation curve may be higher than the actual curve, but never lower at any point. So, while our results for dimensions 5 and 6 are not rigorous, they still help build visual and graphical intuition for the behavior of $L(p)$.
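A sketch of the truncated search, reusing `A` from the sketch above. The exact order in which our counter visits colorings is a tuning choice; this version, with an illustrative `window` parameter, truncates a lexicographic enumeration of the placements of the 0-colored vertices:

```python
from itertools import combinations, islice

def L_approx(zeros, n, window=10_000):
    """Approximate L(p, n) by scanning only the first `window` colorings with
    the requested number of zeros. A minimum taken over a subset can
    overestimate, but never underestimate, the true L."""
    N = 2 ** n
    best = float("inf")
    for zero_set in islice(combinations(range(N), zeros), window):
        zs = set(zero_set)
        coloring = [0 if v in zs else 1 for v in range(N)]
        best = min(best, A(coloring, n))
    return best
```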
2. Results
[Figure: our computed curve for $L(p)$ (blue) compared with Bobkov's lower bound (red).]
The blue lines represent our curve for $L(p)$, while the red represents Bobkov's. This graph differs from the previous model in that it suggests the lower bound does not reach an absolute maximum at $p = 1/2$, as theorized before, but rather has a relative minimum at $p = 1/2$. It has not yet been proven whether or not $p = 1/2$ is a relative minimum, but it appears to be the case.
[Figure: the approximated curve for $L(p)$ in dimension 5.]
The result of this graph, using the approximation of the lower bound of the neighbor function in dimension 5, further supports the speculation that the lower bound does not reach an absolute maximum at $p = 1/2$, but instead meets a relative minimum there. By increasing the dimension of the hypercube, and thereby doubling the number of vertex inputs, the graph more accurately depicts the predicted continuity of the limiting curve. An interesting feature to note is that $L(1/2)$ on this graph is less than $L(1/2)$ on the former graph.
[Figure: the approximated curve for $L(p)$ in dimension 6.]
This graph of the approximation of the lower bound of the neighbor function in dimension 6 reaffirms the hypothesis that $L(p)$, the plotted lower bound of the neighbor function, is continuous. The point $p = 1/2$ appears to be a relative minimum in all of the tested dimensional cases of $L(p)$.
3. Conclusion
There were several particularly interesting findings in our final data set. Though we could not computationally search every dimension, and occasionally had to resort to approximations due to limited computing power, several findings strongly suggest certain conclusions. Note that these conjectures are speculative, but they may serve as the basis for rigorous mathematical proofs in the future.
Through developing discrete plots of the function $L(p, n)$ for different dimensions $n$ using Python, this investigation has found that the behavior of the lower bound of the neighbor function disagrees with the hypothesis that the function reaches an absolute maximum at $p = 1/2$, which is modeled in Bobkov's study "An Isoperimetric Inequality on the Discrete Cube." Instead, the plots suggest a local minimum for the function $L(p)$ at the point $p = 1/2$.
To improve the precision of the results, the dimension of the hypercube could be increased, with each additional dimension doubling the amount of vertex data on the discrete plot and thus allowing the graph to support more accurate conclusions. However, higher dimensions would require considerably greater computing power than the team's computers could provide. Therefore, further studies on this topic ought to improve upon the current results by employing greater computational capacity.
There seems to be a clear shape to $L(p)$. The overall shape does not change as the dimension increases, which suggests (with one caveat) that the curve does not converge downward as the dimension increases. If $L(p)$ were a single lower-bound curve that holds regardless of the hypercube's dimension, we should see equivalent fractional inputs $p$ produce the same output $L(p)$ across dimensions; for example, $L(p, 3)$ should equal $L(p, 4)$ for every fraction $p$ attainable in both dimensions. There was one discrepancy with this observation, however: the value of $L(1/2)$ did not agree across dimensions, and was in fact strictly smaller in the higher dimension. This is the one point in the data set which might compromise the premise that $L(p)$ is independent of dimension. It is worth noting that there are multiple colorings, in multiple dimensions, that all produce the same value of $L(1/2)$. In contrast, in the discrepant dimension, only a single coloring produced the better (smaller) lower bound; we regard the same set with 0's and 1's swapped as a trivial case. Intuitively, there are two main conclusions this might suggest: either $L(p)$ depends on dimension and converges downward as the dimension increases, or the discrepant coloring was a special case owing to its lack of symmetry relative to other colorings of that dimension. If it is the latter, that suggests there may exist more "special cases" which break the shape of the overall curve of $L(p)$ as the dimension increases.
4. References
Bobkov, S. (1997). An Isoperimetric Inequality on the Discrete Cube, and an Elementary Proof of the Isoperimetric Inequality in Gauss Space. The Annals of Probability, 25(1), p. 206. Available at: https://projecteuclid.org/download/imagefirstpage_1/euclid.aop/1024404285 [Accessed 7 Jul. 2019].
Gayen, A. (2019). SciPy: Curve Fitting. [online] GeeksforGeeks. Available at: https://www.geeksforgeeks.org/scipy-curve-fitting/ [Accessed 25 Jul. 2019].
Modeling Data and Curve Fitting — Non-Linear Least-Squares (lmfit documentation). [online] Available at: http://cars9.uchicago.edu/software/python/lmfit/model.html [Accessed 25 Jul. 2019].