Hardware Pipelining of Repetitive Patterns in Processor Instruction Traces

Authors

  • J. Bispo
  • J. Cardoso
  • J. Monteiro

DOI:

https://doi.org/10.29292/jics.v8i1.373

Keywords:

Processor Instruction

Abstract

Dynamic partitioning is a promising technique in which computations are transparently moved from a General Purpose Processor (GPP) to a coprocessor during application execution. To be effective, the mapping of computations to the coprocessor needs to consider aggressive optimizations. One such mapping optimization is loop pipelining, a technique that has been extensively studied and is known to enable substantial performance improvements. This paper describes a technique for pipelining Megablocks, a type of runtime loop developed for dynamic partitioning. The technique transforms the body of Megablocks into an acyclic dataflow graph which can be fully pipelined, and is based on the atomic execution of loop iterations. For a set of 9 benchmarks without memory operations, we generated pipelined hardware versions of the loops and estimate that the presented loop pipelining technique increases the average speedup of non-pipelined coprocessor-accelerated designs from 1.6× to 2.2×. For a larger set of 61 benchmarks that include memory operations, we estimate through simulation a speedup increase from 2.5× to 5.6× with this technique.
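As a rough illustration only (not taken from the paper), the sketch below shows the kind of loop whose dynamic instruction trace repeats the same pattern every iteration and whose body, having no memory operations, maps to an acyclic dataflow graph in which iterations execute atomically and can be overlapped in a pipeline. The function name and constants are hypothetical.

```c
/*
 * Illustrative sketch, not from the paper: a loop whose trace repeats the
 * same straight-line instruction pattern each iteration (a Megablock-like
 * candidate). The body has no loads/stores and no control flow other than
 * the back edge, so one iteration forms an acyclic dataflow graph
 * (shift -> xor -> add). Because each iteration executes atomically and only
 * depends on the register values produced by the previous one (acc, i), a
 * coprocessor could start iteration i+1 once the first pipeline stage of
 * iteration i has completed, overlapping successive iterations.
 */
#include <stdio.h>

static unsigned checksum(unsigned seed, unsigned n) {
    unsigned acc = seed;
    for (unsigned i = 0; i < n; i++) {
        unsigned t = acc << 1; /* dataflow node 1: shift   */
        t ^= i;                /* dataflow node 2: xor     */
        acc = t + 3u;          /* dataflow node 3: add     */
    }
    return acc;
}

int main(void) {
    printf("%u\n", checksum(1u, 100u));
    return 0;
}
```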

Published

2020-12-27