Article ID: 463551
Journal: Optical Switching and Networking
Published Year: 2016
Pages: 15 Pages
File Type: PDF
Abstract

• We present an optical-bus CMP architecture where an optical shared cache is used.
• The optical cache resides in a separate chip and no on-chip cache is required.
• The CPU-DRAM communication is realized completely in the optical domain.
• Significant L1 miss rate reduction of up to 96% for certain cases is attained.
• Average speed-up of 19.4% or capacity requirements reduction of ~63% is attained.

We present an optical bus-based chip-multiprocessor architecture in which the processing cores share an optical single-level cache implemented in a separate chip next to the Central-Processing-Unit (CPU) die. The interconnection system is realized through Wavelength-Division-Multiplexed optical interfaces connecting the shared cache with the cores and the Main-Memory via spatially multiplexed waveguides. To evaluate the proposed approach, we perform system-level simulations of a wide range of parallel workloads using gem5. The optical cache architecture is compared against a conventional one that uses dedicated on-chip Level-1 electronic caches and a shared Level-2 cache. Results show a significant Level-1 miss rate reduction of up to 96% for certain cases; on average, a performance speed-up of 19.4% or a cache capacity requirements reduction of ~63% is attained. Combined with high-bandwidth CPU-Dynamic Random Access Memory (DRAM) bus solutions based on optical interconnects, the proposed design is a promising architecture for bridging the gap between high-speed optically connected CPU-DRAM schemes and high-speed optical memory technologies.
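To make the relationship between an L1 miss rate reduction and an overall speed-up concrete, the following minimal Python sketch applies a first-order average-memory-access-time (AMAT) model. It is not the paper's evaluation methodology (which relies on full gem5 system-level simulation), and all hit times, miss rates, and miss penalties below are hypothetical placeholders chosen only for illustration.

```python
# First-order AMAT model: average memory access time for a single cache level.
# All numbers below are hypothetical placeholders, not results from the paper.

def amat(hit_time: float, miss_rate: float, miss_penalty: float) -> float:
    """Average memory access time in cycles for one cache level."""
    return hit_time + miss_rate * miss_penalty

# Hypothetical baseline: private electronic L1, with the shared L2 and DRAM
# folded into a single effective miss penalty for simplicity.
baseline = amat(hit_time=1.0, miss_rate=0.05, miss_penalty=80.0)

# Hypothetical shared optical cache: same hit time assumed for illustration,
# miss rate reduced by 96% (the paper's best-case L1 miss rate reduction),
# misses served over an optical CPU-DRAM bus with the same assumed penalty.
optical = amat(hit_time=1.0, miss_rate=0.05 * (1 - 0.96), miss_penalty=80.0)

print(f"baseline AMAT: {baseline:.2f} cycles")
print(f"optical  AMAT: {optical:.2f} cycles")
print(f"memory-access speed-up: {baseline / optical:.2f}x")
```

Under these placeholder numbers the model only shows how a lower miss rate shortens average memory access time; the paper's reported 19.4% average application speed-up comes from full-system simulation, where off-chip access latency, bandwidth, and workload behavior all factor in.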

Related Topics
Physical Sciences and Engineering > Computer Science > Computer Networks and Communications
Authors