Review Comment:
The authors present ConvKB, a novel knowledge graph embedding model based on convolutional neural networks. ConvKB advances state-of-the-art models by applying a CNN over full triples, allowing it to capture global relationships and transitional characteristics between entities and relations in knowledge bases. In experiments, ConvKB obtains better link prediction and triple classification results than previous state-of-the-art models on the benchmark datasets WN18RR, FB15k-237, WN11, and FB13. The authors also apply the model to a search personalization task using query logs and obtain similarly strong performance.
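(To make the summary above concrete, here is a minimal NumPy sketch of a ConvKB-style scoring function: the embeddings of head, relation, and tail are stacked into a k x 3 matrix, 1 x 3 convolution filters are slid over its rows, and the concatenated feature maps are combined with a shared weight vector. This is my own illustration, not the authors' released code; the function name convkb_score, the toy dimensions, and the random inputs are assumptions made purely for exposition.)

```python
import numpy as np

def convkb_score(v_h, v_r, v_t, filters, w):
    """ConvKB-style score for a triple (h, r, t).

    v_h, v_r, v_t : embedding vectors of shape (k,)
    filters       : array of shape (tau, 3), one 1x3 convolution filter per row
    w             : shared weight vector of shape (k * tau,)
    """
    A = np.stack([v_h, v_r, v_t], axis=1)            # (k, 3) matrix [v_h; v_r; v_t]
    feature_maps = [np.maximum(A @ omega, 0.0)        # ReLU of each 1x3 filter response, shape (k,)
                    for omega in filters]
    v = np.concatenate(feature_maps)                  # concatenated feature maps, shape (k * tau,)
    return float(v @ w)                               # scalar score used to rank candidate triples

# Toy usage with random embeddings and filters.
k, tau = 4, 3                                         # embedding size, number of filters
rng = np.random.default_rng(0)
v_h, v_r, v_t = rng.normal(size=(3, k))
filters = rng.normal(size=(tau, 3))
w = rng.normal(size=k * tau)
print(convkb_score(v_h, v_r, v_t, filters, w))
```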
Overall, I believe this is a strong paper and should be accepted. If I had to voice a concern, it would be that I do not see a very direct connection to the Semantic Web, since the authors' approach falls into a more machine learning/representation learning setting. Thus, scope could be a problem, but this is for the editors to decide. I do think that adding more related work or Semantic Web context would help this paper reach a wider audience in the SW community.
Strengths of the paper are as follows:
(1) The paper is well written and relatively easy to follow. The authors use terminology and symbols judiciously, and the method is fairly well explained.
(2) The technical contribution is novel enough for this special issue. While the authors correctly point out that CNNs have been applied to the KG embedding/completion problem before, I believe the shortcomings of that previous work are well motivated, laying the groundwork for this contribution.
(3) The experimental results are well described, with good descriptions of parameters and implementations. The performance is also convincing.
(4) Most importantly, the code has been released publicly, which is crucial for a new KGE method to have any impact.
(5) I also like the authors' efforts in trying a new dataset beyond WordNet and Freebase in the search personalization task. Although benchmarks are important for validating against existing algorithms, new datasets and tasks are sorely needed for the KG embedding problem, given how long WordNet and Freebase have now been in use.
Some weaknesses:
(1) On the link prediction task, some explanation of why only TransE, and not other Trans* algorithms (e.g., TransR), was used as a baseline would be appreciated. TransE is quite old at this point, although it is fast and effective. If speed was the reason for excluding the other Trans* algorithms, the authors should say so.
(2) Some of the bar-graph figures are difficult to make out when printed in black and white. Hopefully, the authors can rectify this in the camera-ready version.