Welcome to the first issue of Ciphertext Compression, meant to be a quick digest of what happened on ePrint last week. If you are wondering why I am doing this, I am wondering that too; see the last post for what I was thinking.
Range considered: new ePrints posted between Sun, 03 Oct 2021 12:26 AM GMT and Sun, 10 Oct 2021 12:26 AM GMT.1
Efficient Functional Commitments: How to Commit to Private Functions — in the long crypto tradition of reclaiming technical terms to mean something totally different from what they previously meant (making it harder for me to use Google Scholar), this paper redefines what was previously called a “function commitment” as an “input-hiding function commitment” and then reclaims the term “function commitment” for “function-hiding function commitment”. Now, is the “function-hiding function commitment” notion older? Yes, we’ve known the general flow since the primordial verifiable random functions. Is it more useful than the other definition? Yes. But am I happy that they are reclaiming a term? Fuck no. Anyway, the paper, building on top of a preprocessing zk-SNARK, constructs fast “function-hiding function commitments” for arbitrary arithmetic circuits.
Updatable Private Set Intersection — I love PSI: it lets you and me compute the intersection of our sets without leaking the sets themselves. For instance, you can think of using PSI for password breach alerting, where your set would be your password hashes and my set would be the set of all password hashes leaked in breaches. However, this might not work in practice: there have been a lot of breached passwords, and PSI needs communication at least linear in the size of the sets (and this is easy to prove2). This paper evades this lower bound in the real-world setting where our sets are not fresh random sets each week but just the previous sets with a few additions, and gives a protocol whose communication is linear in the number of additions and logarithmic in the size of the set.
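To get a feel for plain (non-updatable) PSI, here is a toy Diffie–Hellman-style sketch — my own illustration, not this paper's protocol, and all names are hypothetical: each party blinds hashed elements with a secret exponent, and since exponentiation commutes, doubly-blinded values match exactly when the underlying elements match.

```python
import hashlib
import secrets

# Toy DH-style PSI sketch (illustrative only; real protocols use proper
# groups and hash-to-group, and this leaks set sizes by construction).
P = 2**255 - 19  # a convenient prime modulus for the demo


def h(x: str) -> int:
    """Hash an element into the group (toy hash-to-group)."""
    return int.from_bytes(hashlib.sha256(x.encode()).digest(), "big") % P


def blind(values, key):
    """Raise every value to a secret exponent."""
    return {pow(v, key, P) for v in values}


alice_set = {"hunter2", "correcthorse"}
bob_set = {"hunter2", "tr0ub4dor"}
a_key = secrets.randbelow(P - 2) + 1
b_key = secrets.randbelow(P - 2) + 1

# Each party blinds its own hashed elements, sends them over, and the
# other party blinds them a second time: v^(a*b) == v^(b*a).
alice_double = blind(blind((h(x) for x in alice_set), a_key), b_key)
bob_double = blind(blind((h(x) for x in bob_set), b_key), a_key)

print(len(alice_double & bob_double))  # → 1 (the shared "hunter2")
```

Note that both parties send one group element per set element — exactly the linear communication the lower bound says you cannot avoid without extra structure like updates.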
Generalized Proof of Liabilities — I think proofs of liabilities are a cool idea. Let’s say you want to be convinced that the total number of reported covid-positive cases is correct: the reporter can provide a proof of liabilities, and you can verify that your own contribution is counted correctly (that is, if you are positive you increment the total, and if you are negative you do not affect it). And assuming that a large enough fraction verify, anyone can check that the reported total is correct against the provided proof. This idea emerged out of auditing cryptofinance exchanges (I can’t see why they would need this primitive 🤔), and this paper generalizes it so it can be used in other systems.
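For intuition, the classic construction behind this — a Merkle sum tree, not this paper's generalized scheme, and all names below are mine — can be sketched in a few lines: each leaf commits to one user's value, each internal node commits to the sum of its children, and a user checks an inclusion path against the published root.

```python
import hashlib

# Minimal Merkle sum tree sketch (assumes a power-of-two number of users
# for brevity; a real scheme also needs value blinding and range proofs).


def H(*parts) -> bytes:
    m = hashlib.sha256()
    for p in parts:
        m.update(str(p).encode())
    return m.digest()


def build(leaves):
    """leaves: list of (user_id, value). Returns levels, bottom-up."""
    level = [(H(uid, v), v) for uid, v in leaves]
    levels = [level]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level), 2):
            (lh, lv), (rh, rv) = level[i], level[i + 1]
            nxt.append((H(lh, lv, rh, rv), lv + rv))  # node sums children
        level = nxt
        levels.append(level)
    return levels


def prove(levels, idx):
    """Sibling (hash, value) pairs along the path to the root."""
    path = []
    for level in levels[:-1]:
        path.append((idx % 2, level[idx ^ 1]))
        idx //= 2
    return path


def verify(uid, value, path, root):
    node = (H(uid, value), value)
    for is_right, sib in path:
        l, r = (sib, node) if is_right else (node, sib)
        node = (H(l[0], l[1], r[0], r[1]), l[1] + r[1])
    return node == root


users = [("alice", 3), ("bob", 1), ("carol", 0), ("dave", 2)]
levels = build(users)
root = levels[-1][0]
print(root[1])                                    # published total: 6
print(verify("bob", 1, prove(levels, 1), root))   # → True
```

If enough users run `verify` and the tree is consistent, the reporter cannot shrink anyone's contribution without some verifying user noticing.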
New Attacks on LowMC instances with a Single Plaintext/Ciphertext pair — LowMC is 1000x more broken now, and this paper has amazing figures (even I, with no prior experience, could kinda understand what they were saying!)
A Thorough Treatment of Highly-Efficient NTRU Instantiations — If, like me, you thought NTRU had definitively lost the efficiency battle to Ring/Module-LWE, this paper proves you wrong, giving a construction with a 15x speedup over the NIST candidate NTRU-HRSS-701 for the keygen-encap-decap routine, bringing it back on par.
Faster Lattice-Based KEMs via a Generic Fujisaki-Okamoto Transform Using Prefix Hashing — this paper, to me, is yet another reminder that post-quantum crypto is weird. Okay, so, Kyber and Saber redefine the hash function in the Fujisaki-Okamoto transform to prefix its input with the public key to achieve “multi-user” security. With me so far, cool. This paper shows that you don’t really need to hash in the full public key; a random 32-byte portion suffices. Still with me, cool. Now, here is where it gets weird: the public keys of Kyber and Saber are around a thousand bytes, not bits (Ed25519 has 256-bit public keys), bytes, so this change gives a large speedup. 🤯
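The shape of the change (a toy sketch, not the schemes' actual hash layout — the exact portion of the key that must be hashed is the paper's business) is just this:

```python
import hashlib
import os

# Stand-in values: Kyber768 public keys are 1184 bytes.
pk = os.urandom(1184)
m = os.urandom(32)

# Original multi-user-secure FO: feed the ENTIRE public key into the hash
# on every encapsulation.
full = hashlib.sha3_256(pk + m).digest()

# Prefix-hashing variant: a 32-byte portion of pk suffices, so the hash
# input shrinks from ~1.2 KB to 64 bytes per encapsulation.
prefix = hashlib.sha3_256(pk[:32] + m).digest()

print(len(pk) + len(m), "->", 32 + len(m))  # bytes hashed per encapsulation
```

Hashing is a real chunk of a lattice KEM's runtime, which is why shaving a kilobyte off the hash input per operation is measurable.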
Large-Precision Homomorphic Sign Evaluation using FHEW/TFHE Bootstrapping — comparing two plaintext integers is super easy (well, most of the time3), but what if they are encrypted floats? That turns out to be super hard, and this paper pushes the FHE state of the art from ~5 bits of precision to ~20 bits of precision.
Paradoxical Compression with Verifiable Delay Functions — in yet another proof that crypto is ✨magic✨, this paper reveals to mortals like you and me how to evade4 the information-theoretic limits on lossless compression.
This was a lot more work than I thought 😄. Thanks for sticking till the end. 💜
Obvious Disclaimer. Views are mine and not serious. All mistakes are mine. If you’d like to point them out, shoot me an email. If I get more than 10 emails, I’ll create a discord and post the invite link here.
There was a time zone bug in my prettified ePrint feed, now fixed, which may have caused me to select papers slightly older than the range; sorry about that, it will not be an issue going forward. ↩︎
My definition of easy is that you can explain it in a few blog-post lines: first notice that computing the intersection is at least as hard as telling whether the sets are disjoint; now observe that computing disjointness necessarily requires $\Omega(N)$ communication where $N$ is the size of the sets, because if it took less communication than that, there is an element in one of the sets that is never compared, and I can tweak that element and make the algorithm output the wrong answer. The analysis for the bounded-error, randomized case is similar but requires a few more techniques. ↩︎
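The adversarial step of that argument can be illustrated with a toy example (my own names, and a demo rather than a proof): a “protocol” that skips even one element can be fooled by tweaking exactly that element, flipping disjointness without changing anything the protocol sees.

```python
def disjoint(a, b):
    """Ground truth: are the two sets disjoint?"""
    return not set(a) & set(b)


def lazy_protocol(a, b):
    """Hypothetical shortcut that never looks at a's last element."""
    return not set(a[:-1]) & set(b)


a, b = [1, 2, 3], [4, 5, 6]
a_tweaked = [1, 2, 6]  # change ONLY the element the protocol ignores

# The protocol's answer is unchanged (it never saw the tweak)...
print(lazy_protocol(a, b) == lazy_protocol(a_tweaked, b))  # → True
# ...but the true answer flipped, so the protocol is wrong on one input.
print(disjoint(a, b) == disjoint(a_tweaked, b))            # → False
```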
Working with huge 1000+ bit integers is not super easy, I think, easy maybe but not super easy. ↩︎
Of course, it doesn’t actually evade the information-theoretic limits; just as stream ciphers with keys shorter than the plaintext don’t evade information-theoretic limits on secrecy, it just makes it really hard to find a counterexample whose compression is longer than the input. ↩︎