The evaluation protocol of the SSGCI competition is twofold: it evaluates both the graph retrieval capability of the participants' methods and the quality of the node correspondences they propose.

**1. Evaluating graph retrieval capabilities**

A first evaluation of the participants' methods is performed using the classical precision and recall measures, which assess their graph retrieval capabilities.

The precision and recall measures, as defined for the SSGCI competition, are given below for a single query graph:

**Precision = |Relevant graphs ∩ Retrieved graphs| / Number of retrieved graphs**

**Recall = |Relevant graphs ∩ Retrieved graphs| / Number of relevant graphs**

The average precision and average recall over the set of query graphs in the test dataset will be used to evaluate the retrieval capability of the participants' methods.
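The averaging described above can be sketched as follows, assuming each query's ground truth and each method's output are available as sets of graph identifiers (the function and variable names here are illustrative, not part of the protocol):

```python
def precision_recall(relevant: set, retrieved: set) -> tuple:
    """Precision and recall for a single query graph."""
    hits = len(relevant & retrieved)  # |relevant ∩ retrieved|
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall


def evaluate_retrieval(queries: dict) -> tuple:
    """Average precision and recall over all query graphs.

    `queries` maps a query-graph id to a pair
    (set of relevant graph ids, set of retrieved graph ids).
    """
    pairs = [precision_recall(rel, ret) for rel, ret in queries.values()]
    avg_precision = sum(p for p, _ in pairs) / len(pairs)
    avg_recall = sum(r for _, r in pairs) / len(pairs)
    return avg_precision, avg_recall
```

For example, a query that retrieves two graphs of which one is among its two relevant graphs scores precision 0.5 and recall 0.5.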

**2. Evaluating the subgraph-spotting capabilities**

The subgraph-spotting capabilities of the participants' methods will be evaluated by measuring the quality of the exact and/or inexact matching they provide.

a) If the participant’s methods provide a node-to-node correspondence between the query graph and the result graph (exact matching), a score will be calculated from the node-to-node correspondences provided:

**Score = (Num of correct node correspondences / Total num of nodes in query graph) – Penalty**

*where the “Penalty” is computed from the incorrect and/or missing node correspondences, using the following formula:*

**Penalty = Num of incorrect node correspondences / Total num of nodes in query graph**
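The exact-matching score above can be sketched as follows, assuming both the ground truth and a participant's submission are dictionaries mapping query-graph nodes to result-graph nodes (the function name and data layout are illustrative assumptions, not part of the protocol):

```python
def score_exact_matching(ground_truth: dict, predicted: dict) -> float:
    """Score = correct / total - penalty, with penalty = incorrect / total.

    `ground_truth` and `predicted` map query-graph node ids to
    result-graph node ids. Missing correspondences simply contribute
    nothing to the `correct` count, as in the formula above.
    """
    total = len(ground_truth)  # total number of nodes in the query graph
    correct = sum(
        1 for q_node, r_node in predicted.items()
        if ground_truth.get(q_node) == r_node
    )
    incorrect = len(predicted) - correct
    penalty = incorrect / total
    return correct / total - penalty
```

For instance, with a four-node query graph, two correct correspondences and one incorrect one yield 2/4 − 1/4 = 0.25.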

b) If the participant’s methods provide only a list of nodes of the result graph where the query graph is spotted (inexact matching), a score will be calculated based on the overlap between this list of nodes and the ground truth.
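The protocol does not specify the exact overlap formula for this case. One common overlap measure, shown here purely as an illustrative assumption, is the Jaccard index between the submitted node set and the ground-truth node set:

```python
def overlap_score(predicted_nodes: set, ground_truth_nodes: set) -> float:
    """Jaccard overlap: |predicted ∩ ground truth| / |predicted ∪ ground truth|.

    This is one plausible instantiation of the "overlap" criterion,
    not the competition's official formula.
    """
    union = predicted_nodes | ground_truth_nodes
    if not union:
        return 0.0
    return len(predicted_nodes & ground_truth_nodes) / len(union)
```

Under this measure, a submission covering two of three ground-truth nodes plus one spurious node scores 2/4 = 0.5.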