OntologySummit2014_Hackathon - Project: (4A27)
Optimized SPARQL performance management via native API (4A2A)
Project roster page: OntologySummit2014_Hackathon_ReferenceDataForSPARQLPeformanceBenchmarking (this page). (4A25)
Team lead: VictorChernov (MSK, UTC+4) vchernov at nitrosbase.com (4A29)
The event starts on the 29th of March 2014 at 14:00 MSK / 10:00 UTC / 03:00 PST and is open to participants worldwide via mikogo.com (the session # will come later) (4A57)
The goals of the project are (4A2C)
To study the kinds of queries that reveal the advantages of one RDF database over another. This implies: (4A2D)
- Selecting a subset of SPARQL queries from SP2Bench (an illustrative query follows this list) (4A2I)
- Forming a dataset and loading it into all triple stores. (4A2E)
- Implementing measurement aids and running the tests (4A2F)
- Measuring time accurately and computing min, max, average and median times. (4A2G)
- Reflecting on the results: the advantages and disadvantages of each triplestore on each selected query. (4A2H)
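To make the first item concrete, here is a minimal example of the kind of query we have in mind, written in the spirit of the SP2Bench document-lookup queries. It is only an illustration: the prefixes and the title literal are assumptions to be checked against the SP2Bench distribution. It is kept as a Java string constant so it can be passed directly to the Java APIs listed further down.

    // Illustration only: an SP2Bench-style lookup query kept as a Java constant.
    // The bench:/dc:/dcterms: prefixes and the title value are assumptions to be
    // verified against the actual SP2Bench query set.
    public class SampleQuery {
        static final String JOURNAL_YEAR =
            "PREFIX rdf:     <http://www.w3.org/1999/02/22-rdf-syntax-ns#>\n" +
            "PREFIX bench:   <http://localhost/vocabulary/bench/>\n" +
            "PREFIX dc:      <http://purl.org/dc/elements/1.1/>\n" +
            "PREFIX dcterms: <http://purl.org/dc/terms/>\n" +
            "SELECT ?yr WHERE {\n" +
            "  ?journal rdf:type bench:Journal .\n" +
            "  ?journal dc:title ?title .\n" +
            "  ?journal dcterms:issued ?yr .\n" +
            "  FILTER (?title = \"Journal 1 (1940)\")\n" +
            "}";
    }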
The following triplestores will be compared: (4A2J)
- Virtuoso
- Stardog
- NitrosBase
The triplestores have the following important advantages: (4A2N)
- Very high performance, as demonstrated on the SP2Bench benchmark (4A2O)
- Linux and Windows versions (4A2P)
- Native API for fast query processing (4A2Q)
It is important to use a native API for fast query execution. All three tools provide one (a minimal usage sketch follows this list): (4A2S)
- Virtuoso (4A2T)
    - Jena, Sesame and Virtuoso ODBC RDF Extensions for SPASQL (4A2U)
- Stardog (4A2V)
    - the core SNARL (Stardog Native API for the RDF Language) classes and interfaces (4A2W)
- NitrosBase (4A2X)
    - C++ and .NET native API (4A2R)
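As a point of reference, below is a minimal sketch of query execution through the Jena API, which both Virtuoso and Stardog expose bindings for according to the list above. It is written against Jena 2.x (the com.hp.hpl.jena packages current in 2014) and, for brevity, connects to a generic SPARQL endpoint; in the actual benchmark this call would be replaced by each store's vendor-specific native connection (Virtuoso's Jena provider, Stardog's SNARL, NitrosBase's C++/.NET API). The endpoint URL is an assumption.

    // Minimal sketch (Jena 2.x): run a SELECT query and iterate the result set.
    // The generic sparqlService() call stands in for the vendor-specific native
    // connections that the benchmark itself would use.
    import com.hp.hpl.jena.query.Query;
    import com.hp.hpl.jena.query.QueryExecution;
    import com.hp.hpl.jena.query.QueryExecutionFactory;
    import com.hp.hpl.jena.query.QueryFactory;
    import com.hp.hpl.jena.query.ResultSet;

    public class RunQuery {
        public static void main(String[] args) {
            String endpoint = "http://localhost:8890/sparql";   // assumed endpoint URL
            Query query = QueryFactory.create(
                    "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10");
            QueryExecution qexec = QueryExecutionFactory.sparqlService(endpoint, query);
            try {
                ResultSet rs = qexec.execSelect();
                while (rs.hasNext()) {
                    System.out.println(rs.next());     // print each solution row
                }
            } finally {
                qexec.close();                         // always release the execution
            }
        }
    }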
We expect to write additional code needed for accurate testing (a sketch of these helpers follows the list): (4A2Y)
- Accurate time measurement; (4A2Z)
- Functions for getting min, max, average and median times; (4A30)
- Functions for measuring the time to scan through the whole query result; (4A31)
- Functions for measuring the time to retrieve the first several records (for example, the first page of a web grid); (4A32)
- Etc. (4A33)
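A sketch of what these helpers could look like (the class, method and interface names below are ours, not taken from any of the stores): a small per-store adapter executes the query, and the harness measures full-scan time and first-rows time over repeated runs, then reports min, max, average and median.

    // Sketch of the measurement helpers: repeat a query, record the wall-clock
    // time to scan the whole result and the time to fetch the first K rows,
    // then compute min / max / average / median over the recorded times.
    import java.util.*;

    public class Timing {
        /** Time (ms) to execute the query and iterate over the complete result. */
        static long timeFullScan(QueryRunner runner, String sparql) {
            long start = System.nanoTime();
            runner.runAndConsumeAll(sparql);
            return (System.nanoTime() - start) / 1_000_000;
        }

        /** Time (ms) to execute the query and fetch only the first k rows (a "web page"). */
        static long timeFirstRows(QueryRunner runner, String sparql, int k) {
            long start = System.nanoTime();
            runner.runAndConsume(sparql, k);
            return (System.nanoTime() - start) / 1_000_000;
        }

        /** min, max, average and median of a list of per-run times. */
        static Map<String, Double> stats(List<Long> times) {
            List<Long> sorted = new ArrayList<>(times);
            Collections.sort(sorted);
            double sum = 0;
            for (long t : sorted) sum += t;
            int n = sorted.size();
            double median = (n % 2 == 1)
                    ? sorted.get(n / 2)
                    : (sorted.get(n / 2 - 1) + sorted.get(n / 2)) / 2.0;
            Map<String, Double> r = new LinkedHashMap<>();
            r.put("min", (double) sorted.get(0));
            r.put("max", (double) sorted.get(n - 1));
            r.put("avg", sum / n);
            r.put("median", median);
            return r;
        }

        public static void main(String[] args) {
            // Example: statistics over five hypothetical run times (ms).
            System.out.println(stats(Arrays.asList(120L, 95L, 101L, 340L, 99L)));
        }
    }

    /** Hypothetical adapter, implemented once per store (Virtuoso, Stardog, NitrosBase). */
    interface QueryRunner {
        void runAndConsumeAll(String sparql);           // iterate the whole result set
        void runAndConsume(String sparql, int maxRows); // stop after maxRows rows
    }

Each store would get its own small QueryRunner implementation on top of its native API, so the same harness can be reused across all three systems.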
The following steps are needed for loading the test dataset: (4A34)
Note: The data are considered loaded as soon as the system is ready to perform the simplest search query. This is done to eliminate the influence of background processes (e.g. indexing). (4A36)
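A sketch of how that readiness check could be automated, again using the generic Jena endpoint call as a stand-in for the native connections (the endpoint URL and poll interval are assumptions): keep issuing the simplest possible query until the store answers it, and take that moment as the end of loading.

    // Poll the store with a trivial query; the load is considered finished
    // once the query succeeds.
    import com.hp.hpl.jena.query.Query;
    import com.hp.hpl.jena.query.QueryExecution;
    import com.hp.hpl.jena.query.QueryExecutionFactory;
    import com.hp.hpl.jena.query.QueryFactory;

    public class WaitUntilLoaded {
        public static void main(String[] args) throws InterruptedException {
            String endpoint = "http://localhost:8890/sparql";       // assumed endpoint URL
            Query probe = QueryFactory.create("ASK { ?s ?p ?o }");  // simplest search query
            long start = System.currentTimeMillis();
            while (true) {
                QueryExecution qexec = QueryExecutionFactory.sparqlService(endpoint, probe);
                try {
                    if (qexec.execAsk()) break;   // true once at least one triple is queryable
                } catch (Exception e) {
                    // not ready yet (connection refused, still loading, ...)
                } finally {
                    qexec.close();
                }
                Thread.sleep(1000);               // assumed poll interval
            }
            System.out.println("Load finished after " +
                    (System.currentTimeMillis() - start) + " ms");
        }
    }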
We are going to explore the query execution performance of the databases under consideration (Virtuoso, Stardog, NitrosBase). (4A38)
The queries should be fairly simple and cover different techniques, for example (illustrative queries follow this list): (4A39)
- Searching a small range of values (4A3A)
- Searching a big range of values (4A3B)
- Sorting (4A3C)
- Aggregation (4A3D)
- Several different join queries (4A3E)
- Retrieving part of the result (4A3F)
- Retrieving the whole result (4A3G)
- etc. (4A3H)
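For illustration, here are schema-agnostic sketches of such queries. The example.org predicates are placeholders (not SP2Bench terms); the queries are kept as Java constants so the same strings can be fed to each store's API and to the timing helpers sketched earlier.

    // Illustrative queries covering a few of the techniques listed above.
    public class ExampleQueries {
        // Range search with a filter (small vs. big range differ only in the bounds).
        static final String RANGE =
            "SELECT ?s ?v WHERE { ?s <http://example.org/value> ?v . " +
            "FILTER (?v >= 100 && ?v < 200) }";

        // Sorting.
        static final String SORTED =
            "SELECT ?s ?v WHERE { ?s <http://example.org/value> ?v } ORDER BY DESC(?v)";

        // Aggregation (SPARQL 1.1).
        static final String AGGREGATE =
            "SELECT ?s (COUNT(?o) AS ?cnt) WHERE { ?s ?p ?o } GROUP BY ?s";

        // A simple join over two triple patterns.
        static final String JOIN =
            "SELECT ?a ?c WHERE { ?a <http://example.org/knows> ?b . " +
            "?b <http://example.org/knows> ?c }";

        // Retrieving only part of the result (e.g. the first page).
        static final String FIRST_PAGE = RANGE + " LIMIT 50";
    }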
Note: During testing, each database may allocate a lot of resources, which can affect the performance of the other databases. That is why each test should be started from a fresh system reboot. (4A3J)