Please use this identifier to cite or link to this item:
https://nccur.lib.nccu.edu.tw/handle/140.119/66906
Title: | Benchmarking intelligent information integration – A generic construct-based model |
Authors: | 諶家蘭 Seng, Jia-Lang; Lin, M.I.
Contributors: | Department of Accounting (會計系)
Keywords: | XML;Ontology;Intelligent information integration;Generic construct;Benchmark;Workload model;Performance measurement and evaluation |
Date: | 2010.06 |
Issue Date: | 2014-06-25 10:23:30 (UTC+8) |
Abstract: | Benchmarks are vital tools in the performance measurement and evaluation of computer hardware and software systems. Standard benchmarks such as TREC, TPC, SPEC, SAP, Oracle, Microsoft, IBM, Wisconsin, AS3AP, OO1, OO7, and XOO7 have been used to assess system performance. These benchmarks are domain-specific: they model typical applications and are tied to a particular problem domain. Test results from these benchmarks are estimates of possible system performance for certain pre-determined problem types. When the user's domain differs from the standard problem domain, or when the application workload diverges from the standard workload, these benchmarks do not provide an accurate way to measure system performance in the user's problem domain. System performance in the actual problem domain, in terms of data and transactions, may vary significantly from the standard benchmarks. In this research, we address the issues of domain boundness and workload boundness, which result in unrepresentative and irreproducible performance readings. We tackle these issues by proposing a domain-independent and workload-independent benchmark method developed from the perspective of user requirements. We present a user-driven workload model that develops a benchmark through a process of workload requirements representation, transformation, and generation. We aim to create a more generalized and precise evaluation method that derives test suites from the actual user domain and application. The benchmark method comprises three main components: a high-level workload specification scheme, a translator for the scheme, and a set of generators that produce the test database and the test suite. The specification scheme formalizes the workload requirements, the translator transforms the specification, and the generators produce the test database and the test workload. In web search, generic constructs are the main common carriers we adopt to capture and compose the workload requirements. We determine these requirements through literature analysis. In this study, we conducted ten baseline experiments to validate the feasibility and validity of the benchmark method, and built an experimental prototype to execute them. Experimental results demonstrate that the method is capable of modeling the standard benchmarks as well as more general benchmark requirements.
Relation: | Expert Systems with Applications, 37(6), 4242-4255 |
Data Type: | article |
DOI Link: | http://dx.doi.org/10.1016/j.eswa.2009.11.078
DOI: | 10.1016/j.eswa.2009.11.078 |
Appears in Collections: | [Department of Accounting] Journal Articles
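To make the abstract's three-component pipeline (specification scheme → translator → generators) concrete, here is a minimal Python sketch. It is an illustration under stated assumptions, not the authors' implementation: all names (WorkloadSpec, translate, generate_database, generate_suite) and the data shapes are hypothetical.

```python
# Hypothetical sketch of the benchmark pipeline described in the abstract:
# a high-level workload specification, a translator, and generators for
# the test database and the test suite.
import random
from dataclasses import dataclass, field


@dataclass
class WorkloadSpec:
    """High-level specification built from generic constructs:
    entity cardinalities plus abstract query templates."""
    entities: dict                         # entity name -> target row count
    queries: list = field(default_factory=list)


def translate(spec):
    """Translator: turn the declarative specification into concrete
    generation plans for the database and the query suite."""
    tables = [{"name": name, "rows": count} for name, count in spec.entities.items()]
    suite = [{"id": i, "template": q} for i, q in enumerate(spec.queries)]
    return {"tables": tables, "suite": suite}


def generate_database(plan, seed=42):
    """Generator: produce a synthetic test database matching the plan."""
    rng = random.Random(seed)  # fixed seed keeps test runs reproducible
    return {
        t["name"]: [{"id": i, "value": rng.random()} for i in range(t["rows"])]
        for t in plan["tables"]
    }


def generate_suite(plan):
    """Generator: instantiate the test workload from the query templates."""
    return [q["template"] for q in plan["suite"]]


if __name__ == "__main__":
    spec = WorkloadSpec(
        entities={"document": 1000, "term": 200},
        queries=["lookup document by id", "scan all terms"],
    )
    plan = translate(spec)
    db = generate_database(plan)
    suite = generate_suite(plan)
    print(len(db["document"]), len(db["term"]), suite)
```

Because the specification is declarative and domain-neutral, the same translator and generators can reproduce a standard workload or a user-specific one by swapping in a different WorkloadSpec, which is the domain- and workload-independence the abstract claims.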
Files in This Item:
File | Description | Size | Format
4242-4255.pdf | | 748Kb | Adobe PDF
All items in 政大典藏 (the NCCU Institutional Repository) are protected by copyright, with all rights reserved.