Journal ArticleDOI
MinneSPEC: A New SPEC Benchmark Workload for Simulation-Based Computer Architecture Research
A. J. KleinOsowski, David J. Lilja +1 more
TLDR
The MinneSPEC input set for the SPEC CPU 2000 benchmark suite is developed to facilitate efficient simulations with a range of benchmark programs, and it is found that for some programs the MinneSPEC profiles match the SPEC reference dataset program behavior very closely; for other programs, however, the MinneSPEC inputs produce significantly different program behavior.
Abstract:
Computer architects must determine how to most effectively use finite computational resources when running simulations to evaluate new architectural ideas. To facilitate efficient simulations with a range of benchmark programs, we have developed the MinneSPEC input set for the SPEC CPU 2000 benchmark suite. This new workload allows computer architects to obtain simulation results in a reasonable time using existing simulators. While the MinneSPEC workload is derived from the standard SPEC CPU 2000 workload, it is a valid benchmark suite in and of itself for simulation-based research. MinneSPEC also may be used to run large numbers of simulations to find "sweet spots" in the evaluation parameter space. This small number of promising design points subsequently may be investigated in more detail with the full SPEC reference workload. In the process of developing the MinneSPEC datasets, we quantify its differences in terms of function-level execution patterns, instruction mixes, and memory behaviors compared to the SPEC programs when executed with the reference inputs. We find that for some programs, the MinneSPEC profiles match the SPEC reference dataset program behavior very closely. For other programs, however, the MinneSPEC inputs produce significantly different program behavior. The MinneSPEC workload has been recognized by SPEC and is distributed with Version 1.2 and higher of the SPEC CPU 2000 benchmark suite.
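The abstract describes comparing instruction mixes between the reduced MinneSPEC inputs and the SPEC reference inputs. A minimal sketch of one way such a comparison could be made is a chi-squared-style distance between the two instruction-mix histograms; the categories and fractions below are hypothetical, for illustration only, and this is not the paper's exact statistical procedure.

```python
# Sketch: comparing the instruction mix of a reduced input set against the
# reference input with a chi-squared-style statistic. All category names and
# counts here are hypothetical.

def chi_squared(reference, reduced):
    """Chi-squared statistic between two instruction-mix histograms.

    Both arguments map instruction category -> fraction of dynamic
    instructions; a category missing from one mix counts as 0.
    """
    categories = set(reference) | set(reduced)
    stat = 0.0
    for cat in categories:
        expected = reference.get(cat, 0.0)  # reference-input fraction
        observed = reduced.get(cat, 0.0)    # reduced-input fraction
        if expected > 0:
            stat += (observed - expected) ** 2 / expected
    return stat

# Hypothetical instruction mixes (fractions of dynamic instructions).
ref_mix   = {"load": 0.25, "store": 0.12, "int_alu": 0.45, "branch": 0.18}
small_mix = {"load": 0.27, "store": 0.11, "int_alu": 0.43, "branch": 0.19}

print(f"chi-squared distance: {chi_squared(ref_mix, small_mix):.4f}")
```

A small statistic suggests the reduced input exercises the processor similarly to the reference input; a large one flags the "significantly different program behavior" case the abstract warns about.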
Citations
Benchmarking modern multiprocessors
Kai Li, Christian Bienia +1 more
TL;DR: A methodology for designing effective benchmark suites is developed, and its effectiveness is demonstrated by developing and deploying PARSEC, a benchmark suite for evaluating multiprocessors that has been adopted by many architecture groups in both research and industry.
Proceedings ArticleDOI
Graphite: A distributed parallel simulator for multicores
Jason E. Miller, Harshad Kasture, George Kurian, Charles Gruenwald, Nathan Beckmann, Christopher Celio, Jonathan Eastep, Anant Agarwal +7 more
TL;DR: This paper introduces the Graphite open-source distributed parallel multicore simulator infrastructure and demonstrates that Graphite can simulate target architectures containing over 1000 cores on ten 8-core servers with near linear speedup.
Journal ArticleDOI
Evaluation of the Raw Microprocessor: An Exposed-Wire-Delay Architecture for ILP and Streams
Michael Taylor, Walter Lee, Jason E. Miller, David Wentzlaff, Ian Rudolf Bratt, Ben Greenwald, Henry Hoffmann, Paul Johnson, Jason Kim, James Psota, Arvind Saraf, Nathan Shnidman, Volker Strumpen, Matthew I. Frank, Saman Amarasinghe, Anant Agarwal +15 more
TL;DR: The evaluation attempts to determine the extent to which Raw succeeds in meeting its goal of serving as a more versatile, general-purpose processor; the authors propose a new versatility metric and use it to discuss the generality of Raw.
Journal ArticleDOI
Cooperative Caching for Chip Multiprocessors
Jichuan Chang, Gurindar S. Sohi +1 more
TL;DR: This paper presents CMP cooperative caching, a unified framework to manage a CMP's aggregate on-chip cache resources by forming an aggregate "shared" cache through cooperation among private caches that performs robustly over a range of system/cache sizes and memory latencies.
Proceedings ArticleDOI
Efficiently exploring architectural design spaces via predictive modeling
TL;DR: This work builds accurate, confident predictive design-space models that produce highly accurate performance estimates for other points in the space, can be queried to predict performance impacts of architectural changes, and are very fast compared to simulation, enabling efficient discovery of tradeoffs among parameters in different regions.
References
Journal ArticleDOI
Simics: A full system simulation platform
Peter S. Magnusson, M. Christensson, J. Eskilson, D. Forsgren, G. Hallberg, J. Hogberg, Fredrik Larsson, A. Moestedt, Bengt Werner +8 more
TL;DR: Simics is a platform for full system simulation that can run actual firmware and completely unmodified kernel and driver code, and it provides both functional accuracy for running commercial workloads and sufficient timing accuracy to interface to detailed hardware models.
Journal ArticleDOI
SimpleScalar: an infrastructure for computer system modeling
TL;DR: The SimpleScalar tool set provides an infrastructure for simulation and architectural modeling that can model a variety of platforms ranging from simple unpipelined processors to detailed dynamically scheduled microarchitectures with multiple-level memory hierarchies.
Proceedings ArticleDOI
Gprof: A call graph execution profiler
TL;DR: The gprof profiler accounts for the running time of called routines in the running time of the routines that call them; the design and use of this profiler are described.
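The accounting described above, attributing a callee's time to its callers, can be sketched in a few lines: a routine's total time is its own (self) time plus a share of each callee's total time, split among callers in proportion to call counts. The call graph and times below are hypothetical, the sketch assumes an acyclic call graph, and it omits the cycle handling real gprof performs.

```python
# Sketch of gprof-style call-graph accounting on a hypothetical, acyclic
# call graph. self_time holds each routine's own measured time; calls maps
# each callee to {caller: number_of_calls}.

self_time = {"main": 1.0, "parse": 3.0, "lex": 6.0}   # seconds, hypothetical
calls = {"parse": {"main": 1}, "lex": {"parse": 90, "main": 10}}

def total_time(routine):
    """Self time plus this routine's share of every callee's total time."""
    t = self_time[routine]
    for callee, callers in calls.items():
        if routine in callers:
            # Split the callee's time among callers by call-count fraction.
            share = callers[routine] / sum(callers.values())
            t += share * total_time(callee)
    return t

print({r: round(total_time(r), 2) for r in self_time})
```

Note that the total attributed to `main` equals the sum of all self times, as it must: attribution only moves time up the call graph, it never creates or destroys it.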
Journal ArticleDOI
SPEC CPU2000: measuring CPU performance in the New Millennium
TL;DR: CPU2000 is a new CPU benchmark suite with 19 applications that have never before appeared in a SPEC CPU suite, drawn from areas including high-performance numeric computing, Web servers, and graphical subsystems.
MonographDOI
Measuring computer performance : A practitioner's guide
TL;DR: Measuring Computer Performance describes the fundamental techniques used in analyzing and understanding the performance of computer systems. It provides a detailed explanation of the key statistical tools needed to interpret measured performance data, describes the general "design of experiments" technique, and shows how the maximum amount of information can be obtained for the minimum effort.
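One of the statistical tools the book covers is putting a confidence interval around repeated timing measurements rather than reporting a single number. A minimal sketch, using a normal approximation and hypothetical measurements:

```python
# Sketch: a 95% confidence interval for repeated run-time measurements,
# using a normal approximation. The measurements are hypothetical; for
# small samples a t-distribution would be the more careful choice.
from statistics import NormalDist, mean, stdev

times = [12.1, 11.8, 12.4, 12.0, 11.9, 12.3, 12.2, 11.7]  # seconds

n = len(times)
m = mean(times)
s = stdev(times)                      # sample standard deviation
z = NormalDist().inv_cdf(0.975)       # ~1.96 for a two-sided 95% interval
half_width = z * s / n ** 0.5         # standard error scaled by z

print(f"mean = {m:.2f} s, 95% CI = ({m - half_width:.2f}, {m + half_width:.2f}) s")
```

Reporting the interval instead of the bare mean makes it visible whether two measured configurations actually differ or whether their intervals overlap, which is the kind of interpretation question the book addresses.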