Workload Discovery and Benchmark Synthesis from Public Code Repositories
Title language:
English
Original abstract:
Researchers often rely on benchmarks to demonstrate the feasibility or efficiency of their contributions. However, finding the right benchmark suite can be a daunting task: existing benchmark suites may be outdated, known to be flawed, or simply irrelevant to the proposed approach. Creating a proper benchmark suite is challenging, extremely time-consuming, and, unless the suite becomes widely popular, a thankless endeavor. This talk introduces AutoBench, a novel approach that helps researchers find relevant workloads for their experimental evaluation needs. AutoBench relies on the huge number of open-source projects available in public repositories, and on the fact that unit testing has become best practice in software development. Using a repository crawler that employs pluggable static and dynamic analyses for filtering and workload characterization, AutoBench allows users to automatically find projects with relevant workloads. In this talk, we illustrate AutoBench's approach to finding, filtering, and characterizing real-world workloads from public open-source repositories, and we show several motivating scenarios. We also present preliminary results towards the automatic generation of benchmark suites, arguing that unit tests can provide a viable source of workloads, and that combining static and dynamic analysis improves the ability to identify relevant workloads that can serve as the basis for custom benchmark suites.
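The abstract does not specify AutoBench's API. As a rough illustration of the pluggable-analysis idea it describes, the following Java sketch shows how static filters might be chained over crawled candidate projects; all names (Project, WorkloadFilter, MinTestCountFilter, AutoBenchSketch) are hypothetical and not taken from the actual tool.

```java
// Minimal sketch of a pluggable-filter pipeline over crawled projects.
// All types and names here are illustrative assumptions, not AutoBench's API.
import java.util.List;
import java.util.stream.Collectors;

/** A candidate project found by the repository crawler (hypothetical type). */
record Project(String name, String repoUrl, List<String> testClasses) {}

/** A pluggable analysis deciding whether a project's workloads are relevant. */
interface WorkloadFilter {
    boolean accept(Project project);
}

/** Example static filter: keep projects with a minimum number of test classes. */
class MinTestCountFilter implements WorkloadFilter {
    private final int minTests;
    MinTestCountFilter(int minTests) { this.minTests = minTests; }
    @Override public boolean accept(Project p) {
        return p.testClasses().size() >= minTests;
    }
}

public class AutoBenchSketch {
    /** Applies all configured filters to the crawled projects. */
    static List<Project> filter(List<Project> crawled, List<WorkloadFilter> filters) {
        return crawled.stream()
                .filter(p -> filters.stream().allMatch(f -> f.accept(p)))
                .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Project> crawled = List.of(
                new Project("libA", "https://example.org/a.git", List.of("FooTest", "BarTest")),
                new Project("libB", "https://example.org/b.git", List.of()));
        // Only libA passes the (hypothetical) minimum-test-count filter.
        filter(crawled, List.of(new MinTestCountFilter(1)))
                .forEach(p -> System.out.println(p.name()));
    }
}
```

In this reading, dynamic analyses (e.g., measuring test execution behavior) would plug in through the same filter interface, which is what would let static pre-filtering and dynamic characterization be combined as the abstract suggests.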