The original version of this story appeared in Quanta Magazine.
Sometime in the fall of 2021, Andrew Krapivin, an undergraduate at Rutgers University, encountered a paper that would change his life. At the time, Krapivin didn't give it much thought. But two years later, when he finally set aside time to go through the paper ("just for fun," as he put it), his efforts would lead to a rethinking of a widely used tool in computer science.
The paper's title, "Tiny Pointers," referred to arrowlike entities that can direct you to a piece of information, or element, in a computer's memory. Krapivin soon came up with a potential way to further miniaturize the pointers so they consumed less memory. However, to achieve that, he needed a better way of organizing the data that the pointers would point to.
He turned to a common approach for storing data known as a hash table. But in the midst of his tinkering, Krapivin realized that he had invented a new kind of hash table, one that worked faster than expected, taking less time and fewer steps to find specific elements.
Martín Farach-Colton, a coauthor of the "Tiny Pointers" paper and Krapivin's former professor at Rutgers, was initially skeptical of Krapivin's new design. Hash tables are among the most thoroughly studied data structures in all of computer science; the advance sounded too good to be true. But just to be sure, he asked a frequent collaborator (and a "Tiny Pointers" coauthor), William Kuszmaul of Carnegie Mellon University, to check out his student's invention. Kuszmaul had a different reaction. "You didn't just come up with a cool hash table," he remembers telling Krapivin. "You've actually completely wiped out a 40-year-old conjecture!"
Together, Krapivin (now a graduate student at the University of Cambridge), Farach-Colton (now at New York University), and Kuszmaul demonstrated in a January 2025 paper that this new hash table can indeed find elements faster than was considered possible. In so doing, they disproved a conjecture long held to be true.
"It's an important paper," said Alex Conway of Cornell Tech in New York City. "Hash tables are among the oldest data structures we have. And they're still one of the most efficient ways to store data." Yet open questions remain about how they work, he said. "This paper answers a couple of them in surprising ways."
Hash tables have become ubiquitous in computing, partly because of their simplicity and ease of use. They're designed to allow users to do exactly three things: "query" (search for) an element, delete an element, or insert one into an empty slot. The first hash tables date back to the early 1950s, and computer scientists have studied and used them ever since. Among other things, researchers wanted to figure out the speed limits for some of these operations. How fast, for example, could a new search or insertion possibly be?
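To make those three operations concrete, here is a minimal sketch of an open-addressing hash table in Python. The class name, the linear-probing strategy, and the use of Python's built-in hash are illustrative assumptions; this is not the construction from the new paper.

```python
class SimpleHashTable:
    """Illustrative open-addressing hash table with the three
    classic operations: query, insert, and delete."""

    EMPTY = object()    # sentinel: slot never used
    DELETED = object()  # sentinel: tombstone left by a deletion

    def __init__(self, capacity=8):
        self.slots = [self.EMPTY] * capacity

    def _probe(self, key):
        # Visit slots in linear-probing order, starting at the key's hash.
        start = hash(key) % len(self.slots)
        for i in range(len(self.slots)):
            yield (start + i) % len(self.slots)

    def insert(self, key):
        for i in self._probe(key):
            if self.slots[i] is self.EMPTY or self.slots[i] is self.DELETED:
                self.slots[i] = key
                return
        raise RuntimeError("hash table is full")

    def query(self, key):
        for i in self._probe(key):
            if self.slots[i] is self.EMPTY:
                return False  # reached a never-used slot: key is absent
            if self.slots[i] == key:
                return True
        return False

    def delete(self, key):
        for i in self._probe(key):
            if self.slots[i] is self.EMPTY:
                return  # key was never inserted
            if self.slots[i] == key:
                self.slots[i] = self.DELETED  # tombstone so probes continue
                return

table = SimpleHashTable()
table.insert("alice")
print(table.query("alice"))  # True
table.delete("alice")
print(table.query("alice"))  # False
```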
The answer generally depends on the amount of time it takes to find an empty spot in a hash table. This, in turn, typically depends on how full the hash table is. Fullness can be described in terms of an overall percentage (this table is 50 percent full, that one's 90 percent), but researchers often deal with much fuller tables. So instead, they may use a whole number, denoted by x, to specify how close the hash table is to 100 percent full. If x is 100, then the table is 99 percent full. If x is 1,000, the table is 99.9 percent full. This measure of fullness offers a convenient way to evaluate how long it should take to perform actions like queries or insertions.
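Put differently, x is 1 divided by the table's empty fraction. A quick check of the article's examples, assuming that standard relationship:

```python
def fullness_x(load):
    # x = 1 / (empty fraction): a table that is 99 percent full
    # has a 1 percent empty fraction, so x = 100.
    return 1 / (1 - load)

print(round(fullness_x(0.99)))   # 100  -> 99 percent full
print(round(fullness_x(0.999)))  # 1000 -> 99.9 percent full
```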
Researchers have long known that for certain common hash tables, the expected time required to make the worst possible insertion (putting an item into, say, the last remaining open spot) is proportional to x. "If your hash table is 99 percent full," Kuszmaul said, "it makes sense that you would have to look at around 100 different positions to find a free slot."
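A short simulation can illustrate that intuition. Under the idealized model where each probe independently lands on a random slot (an assumption made here for illustration, not the paper's setup), each probe finds an empty slot with probability 1/x, so about x probes are needed on average:

```python
import random

def average_probes(load, trials=10_000):
    # Idealized random probing: each probe independently lands on an
    # occupied slot with probability `load`; keep probing until we
    # hit an empty one, and average the probe counts over many trials.
    total = 0
    for _ in range(trials):
        probes = 1
        while random.random() < load:
            probes += 1
        total += probes
    return total / trials

for load, x in [(0.99, 100), (0.999, 1000)]:
    print(f"x = {x:>4}: about {average_probes(load):.0f} probes on average")
```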
In a 1985 paper, the computer scientist Andrew Yao, who would go on to win the A.M. Turing Award, asserted that among hash tables with a specific set of properties, the best way to find an individual element or an empty spot is to just go through potential spots randomly, an approach known as uniform probing. He also stated that, in the worst-case scenario, where you're searching for the last remaining open spot, you can never do better than x. For 40 years, most computer scientists assumed that Yao's conjecture was true.
Krapivin was not held back by the conventional wisdom for the simple reason that he was unaware of it. "I did this without knowing about Yao's conjecture," he said. His explorations with tiny pointers led to a new kind of hash table, one that did not rely on uniform probing. And for this new hash table, the time required for worst-case queries and insertions is proportional to (log x)², far faster than x. This result directly contradicted Yao's conjecture. Farach-Colton and Kuszmaul helped Krapivin show that (log x)² is the optimal, unbeatable bound for the popular class of hash tables Yao had written about.
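The gap between the two bounds grows dramatically as tables fill up. A rough comparison, using log base 2 purely for illustration (the article doesn't specify the log's base or any constant factors):

```python
import math

# Yao's conjectured bound (proportional to x) versus the
# paper's (log x)^2 bound, at a few levels of fullness.
for x in [100, 1_000, 1_000_000]:
    conjectured = x
    new_bound = math.log2(x) ** 2
    print(f"x = {x:>9,}: {conjectured:>9,} steps vs about {new_bound:.0f}")
```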
"This result is beautiful in that it addresses and solves such a classic problem," said Guy Blelloch of Carnegie Mellon.
"It's not just that they disproved [Yao's conjecture], they also found the best possible answer to his question," said Sepehr Assadi of the University of Waterloo. "We could have gone another 40 years before we knew the right answer."
Krapivin on the King's College Bridge at the University of Cambridge. His new hash table can find and store data faster than researchers ever thought possible.
Photograph: Phillip Ammon for Quanta Magazine
In addition to refuting Yao's conjecture, the new paper also contains what many consider an even more astonishing result. It pertains to a related, though slightly different, situation: In 1985, Yao looked not only at the worst-case times for queries, but also at the average time taken across all possible queries. He proved that hash tables with certain properties, including those that are labeled "greedy," which means that new elements must be placed in the first available spot, could never achieve an average time better than log x.
Farach-Colton, Krapivin, and Kuszmaul wanted to see if that same limit also applied to non-greedy hash tables. They showed that it did not by providing a counterexample, a non-greedy hash table with an average query time that's much, much better than log x. In fact, it doesn't depend on x at all. "You get a number," Farach-Colton said, "something that is just a constant and doesn't depend on how full the hash table is." The fact that you can achieve a constant average query time, regardless of the hash table's fullness, was wholly unexpected, even to the authors themselves.
The team's results may not lead to any immediate applications, but that's not all that matters, Conway said. "It's important to understand these kinds of data structures better. You don't know when a result like this will unlock something that lets you do better in practice."
Original story reprinted with permission from Quanta Magazine, an editorially independent publication of the Simons Foundation whose mission is to enhance public understanding of science by covering research developments and trends in mathematics and the physical and life sciences.