Show simple item record

dc.contributor.author	Kyriacou, Costas	en
dc.contributor.author	Evripidou, Paraskevas	en
dc.contributor.author	Trancoso, Pedro	en
dc.creator	Kyriacou, Costas	en
dc.creator	Evripidou, Paraskevas	en
dc.creator	Trancoso, Pedro	en
dc.date.accessioned	2019-11-13T10:40:49Z
dc.date.available	2019-11-13T10:40:49Z
dc.date.issued	2006
dc.identifier.uri	http://gnosis.library.ucy.ac.cy/handle/7/54315
dc.description.abstract	Data-Driven Multithreading is a non-blocking multithreading model of execution that provides effective latency tolerance by allowing the computation processor to do useful work while a long-latency event is in progress. With the Data-Driven Multithreading model, a thread is scheduled for execution only if all of its inputs have been produced and placed in the processor's local memory. Data-driven sequencing leads to irregular memory access patterns that could negatively affect cache performance. Nevertheless, it enables the implementation of short-term optimal cache management policies. This paper presents the implementation of CacheFlow, an optimized cache management policy which eliminates the side effects of the loss of locality caused by data-driven sequencing and further reduces cache misses. CacheFlow employs thread-based prefetching to preload the data blocks of threads deemed executable. Simulation results for nine scientific applications on a 32-node Data-Driven Multithreaded machine show an average speedup improvement from 19.8 to 22.6. Two techniques to further improve the performance of CacheFlow, conflict avoidance and thread reordering, are proposed and tested. Simulation experiments have shown speedup improvements of 24% and 32%, respectively. The average speedup for all applications on a 32-node machine with both optimizations is 26.1. © World Scientific Publishing Company.	en
dc.source	Parallel Processing Letters	en
dc.source.uri	https://www.scopus.com/inward/record.uri?eid=2-s2.0-33746241978&doi=10.1142%2fS0129626406002599&partnerID=40&md5=6516c01be17c0e7978115018bf457ffe
dc.subject	Mathematical models	en
dc.subject	Computer simulation	en
dc.subject	Optimization	en
dc.subject	Cache memory	en
dc.subject	Multiprocessing systems	en
dc.subject	Scheduling	en
dc.subject	Data flow analysis	en
dc.subject	Data-Driven Multithreading	en
dc.subject	Prefetching	en
dc.subject	Cache Management	en
dc.subject	Local memory	en
dc.title	CacheFlow: Cache optimizations for data driven multithreading	en
dc.type	info:eu-repo/semantics/article
dc.identifier.doi	10.1142/S0129626406002599
dc.description.volume	16
dc.description.issue	2
dc.description.startingpage	229
dc.description.endingpage	244
dc.author.faculty	002 Σχολή Θετικών και Εφαρμοσμένων Επιστημών / Faculty of Pure and Applied Sciences
dc.author.department	Τμήμα Πληροφορικής / Department of Computer Science
dc.type.uhtype	Article	en
dc.description.notes	Cited By: 2	en
dc.source.abbreviation	Parallel Process Lett	en
dc.contributor.orcid	Trancoso, Pedro [0000-0002-2776-9253]
dc.contributor.orcid	Evripidou, Paraskevas [0000-0002-2335-9505]
dc.gnosis.orcid	0000-0002-2776-9253
dc.gnosis.orcid	0000-0002-2335-9505
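The abstract's core scheduling idea can be illustrated with a minimal sketch: a thread is scheduled only once all of its inputs have been produced, and a CacheFlow-style policy prefetches the data blocks of threads deemed executable before they run. The Python below is an illustrative assumption of that behavior, not the paper's implementation; all names (`Thread`, `run`, the block labels) are hypothetical.

```python
from collections import deque

class Thread:
    """A data-driven thread: runnable only when all inputs have arrived."""
    def __init__(self, name, inputs, data_blocks):
        self.name = name
        self.pending = set(inputs)      # input tokens not yet produced
        self.data_blocks = data_blocks  # blocks to preload when ready

def run(threads, initial_inputs):
    ready = deque()   # threads whose inputs are all present
    cache = set()     # stands in for the node's local cache
    order = []        # execution order actually observed

    def deliver(token):
        # A produced value may enable consumer threads.
        for t in threads:
            if token in t.pending:
                t.pending.discard(token)
                if not t.pending:       # all inputs present -> schedule
                    ready.append(t)

    for tok in initial_inputs:
        deliver(tok)

    while ready:
        # CacheFlow-style step: prefetch the data blocks of every
        # thread currently deemed executable.
        for t in ready:
            cache.update(t.data_blocks)
        t = ready.popleft()
        order.append(t.name)
        deliver(t.name)  # executing t produces its output token
    return order, cache

# Example dependency graph: C consumes the outputs of A and B.
ts = [Thread("A", ["in"], {"a0"}), Thread("B", ["in"], {"b0"}),
      Thread("C", ["A", "B"], {"c0"})]
order, cache = run(ts, ["in"])
print(order)  # ['A', 'B', 'C'] — C fires only after both inputs arrive
```

Note how prefetching is driven by readiness rather than by the access pattern of the running thread, which is what lets the policy work despite the irregular, data-driven memory access order.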


Files in this item


There are no files associated with this item.

This item appears in the following Collection(s)
