Reproducibility material for arXiv paper 1207.2615

This website lets you run the main parts of the quality evaluation from this publication live and play around with our user interface.
You can reach the user interface by running one of the query sets below and clicking on "Show in UI". This displays the corresponding query in the user interface, where you can easily modify it or create new queries.

You can also download our ground truths for the SemSearch benchmark and the Wikipedia lists benchmark.

Step 1: Choose a query set

Choose a query set from the following selection. If you want, you can modify the queries and/or the ground truth, or write or paste your own queries. The syntax is explained in the paper; if you have seen SPARQL before, it is also quite evident from the provided query sets.
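For readers who have not seen SPARQL before, here is what a standard SPARQL query looks like (plain SPARQL with made-up entity and relation names, shown for illustration only; the exact syntax accepted here is the one described in the paper and visible in the provided query sets):

  PREFIX : <http://example.org/>
  SELECT ?person WHERE {
    ?person :is-a :Astronaut .
    ?person :birthplace :Germany .
  }

This asks for all entities ?person that are astronauts and were born in Germany.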



Step 2: Evaluate the queries

The queries selected or pasted in Step 1 will now be processed one after the other. The results and processing times (comparable to those in Table 1 of Section 6 of our submission) appear line by line in the table below, followed by a summary at the end.
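To make this step concrete, here is a minimal Python sketch of the evaluation loop, under the assumption that each query yields a ranked list of entity IDs. Note that evaluate_query is a hypothetical stand-in for our backend (not its actual API), and ranked_list_measures is sketched after the column explanations below:

  import time

  def run_benchmark(queries, ground_truths):
      # Process the queries one after the other, as described above.
      # queries:       list of query strings
      # ground_truths: one set of relevant entity IDs per query
      rows = []
      for query, relevant in zip(queries, ground_truths):
          start = time.time()
          result = evaluate_query(query)   # hypothetical backend call
          elapsed = time.time() - start    # the reported processing time
          row = {"query": query, "time_s": elapsed}
          row.update(ranked_list_measures(result, relevant))
          rows.append(row)
      # Summary row: each measure averaged over all queries; the mean of
      # the per-query average precisions is the MAP in the "(M)AP" column.
      measures = ("prec", "recall", "f", "p_at_10", "p_at_r", "ap")
      summary = {m: sum(r[m] for r in rows) / len(rows) for m in measures}
      return rows, summary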


#  Query  Ground truth¹  Result²  Prec  Recall  F  P@10  P@R  (M)AP

¹ Click on an entry to see the set of relevant entities (manually determined by human assessors).

² Visualize the result in our interactive UI (opens in a new tab). Feel free to play around with the UI from there; it is a fully functional version. If you entered your own queries in Step 1, you might get an error here (if a query is malformed).

The entries in the last columns are: precision, recall, F-measure, precision at 10 (P@10), precision at the number of relevant entities (P@R), average precision (the summary row at the end shows the mean average precision, hence "(M)AP"), and [number of result entities / number of relevant entities].
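For reference, here is a minimal Python sketch of how these measures can be computed for a single query, using the standard IR definitions (this illustrates the textbook formulas, not our actual evaluation code). Here result is the ranked list of returned entity IDs, best hit first, and relevant is the ground-truth set:

  def ranked_list_measures(result, relevant):
      # Standard IR measures for one query.
      hits = sum(1 for e in result if e in relevant)
      prec = hits / len(result) if result else 0.0
      recall = hits / len(relevant) if relevant else 0.0
      f = 2 * prec * recall / (prec + recall) if prec + recall else 0.0

      def prec_at(k):
          # Precision over the top k ranks; missing ranks count as misses.
          return sum(1 for e in result[:k] if e in relevant) / k if k else 0.0

      p_at_10 = prec_at(10)
      p_at_r = prec_at(len(relevant))
      # Average precision: the precision at the rank of each relevant hit,
      # summed and normalized by the total number of relevant entities.
      ap = sum(prec_at(i + 1) for i, e in enumerate(result) if e in relevant)
      ap = ap / len(relevant) if relevant else 0.0
      return {"prec": prec, "recall": recall, "f": f,
              "p_at_10": p_at_10, "p_at_r": p_at_r, "ap": ap}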