
Implementation of Computation Group


Area-Efficient Near-Associative Memories on FPGAs

Udit Dhawan and André DeHon
ACM Transactions on Reconfigurable Technology and Systems (TRETS), Volume 7, Number 4, DOI: 10.1145/2629471, January 2015.




Associative memories can map sparsely used keys to values with low latency but can incur heavy area overheads. The lack of customized hardware for associative memories in today's mainstream FPGAs exacerbates the cost of building these memories out of fixed-address-match BRAMs. In this article, we develop a new, FPGA-friendly memory system architecture based on a multiple-hash scheme that achieves near-associative performance without the area-delay overheads of a fully associative memory on FPGAs. At the same time, we develop a novel memory management algorithm that allows us to statistically mimic an associative memory. Using the proposed architecture as a 64KB L1 data cache, we show that it achieves near-associative miss rates on a set of benchmark programs from the SPEC CPU2006 suite while consuming 3--13× fewer FPGA memory resources than fully associative memories generated by the Xilinx Coregen tool. Benefits for our architecture increase with key width, allowing area reductions of up to 100×. Mapping delay is also reduced to 3.7ns for a 1,024-entry flat version, or 6.1ns for an area-efficient version, compared to 17.6ns for a fully associative memory with a 64-bit key on a Xilinx Virtex 6 device.
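To illustrate the general idea of a multiple-hash near-associative memory, the following is a minimal software sketch, not the paper's actual design: the bank count, bank size, splitmix64-style hash, and eviction rule below are illustrative assumptions, whereas the paper uses FPGA BRAM banks, its own hash functions, and a management algorithm that relocates entries to statistically mimic full associativity.

/* Minimal software sketch of a multi-hash (d-bank) near-associative store.
 * Assumptions (not from the paper): NUM_BANKS, BANK_SIZE, mix64(), and the
 * eviction rule are placeholders for illustration only. */
#include <stdint.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_BANKS 4      /* independent hash banks (BRAMs in hardware) */
#define BANK_SIZE 256    /* entries per bank */

typedef struct {
    bool     valid;
    uint64_t key;
    uint64_t value;
} entry_t;

static entry_t banks[NUM_BANKS][BANK_SIZE];

/* 64-bit mixer (splitmix64 finalizer) used as a stand-in hash. */
static uint64_t mix64(uint64_t x) {
    x ^= x >> 30; x *= 0xbf58476d1ce4e5b9ULL;
    x ^= x >> 27; x *= 0x94d049bb133111ebULL;
    x ^= x >> 31;
    return x;
}

/* Each bank gets its own hash by salting the key with the bank index. */
static unsigned bank_index(unsigned b, uint64_t key) {
    return (unsigned)(mix64(key ^ (0x9e3779b97f4a7c15ULL * (b + 1))) % BANK_SIZE);
}

/* Lookup probes one slot per bank; in hardware the banks are read in
 * parallel and the matching entry (if any) is selected. */
static bool lookup(uint64_t key, uint64_t *value) {
    for (unsigned b = 0; b < NUM_BANKS; b++) {
        entry_t *e = &banks[b][bank_index(b, key)];
        if (e->valid && e->key == key) { *value = e->value; return true; }
    }
    return false;
}

/* Insert into the first free candidate slot; otherwise overwrite the
 * bank-0 candidate. (The paper's management algorithm is smarter about
 * relocation; this eviction rule is a deliberate simplification.) */
static void insert(uint64_t key, uint64_t value) {
    for (unsigned b = 0; b < NUM_BANKS; b++) {
        entry_t *e = &banks[b][bank_index(b, key)];
        if (!e->valid) { e->valid = true; e->key = key; e->value = value; return; }
    }
    entry_t *e = &banks[0][bank_index(0, key)];
    e->key = key; e->value = value; e->valid = true;
}

int main(void) {
    uint64_t v = 0;
    insert(0xdeadbeefULL, 42);
    bool hit = lookup(0xdeadbeefULL, &v);
    printf("hit=%d value=%llu\n", (int)hit, (unsigned long long)v);
    return 0;
}

Because each key has only NUM_BANKS candidate slots, a lookup touches a fixed, small number of memories instead of comparing against every stored key, which is what lets the multiple-hash organization approach associative hit rates at a fraction of the area.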

Copyright Dhawan and DeHon 2015. This is the author's version of the work. It is posted here for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Reconfigurable Technology and Systems (TRETS), http://dx.doi.org/10.1145/2629471

Room 315, Electrical and Systems Engineering Department, University of Pennsylvania, 200 South 33rd Street, Philadelphia, PA 19104.