A dual approach to ShEx visualization with complexity management

Tracking #: 3150-4364

Jorge Alvarez Fidalgo
Jose Emilio Labra-Gayo

Responsible editor: 
Guest Editors Interactive SW 2022

Submission type: 
Full Paper
Abstract:
Shape Expressions (ShEx) are used in various fields of knowledge to define RDF graph structures. ShEx visualizations enable all kinds of users to better comprehend the underlying schemas and perceive their properties. Nevertheless, the only antecedent (RDFShape) suffers from limited scalability, which impairs comprehension of large schemas. In this work, a visual notation for ShEx is defined, built upon operationalized principles for cognitively efficient design. Furthermore, two approaches to said notation with complexity management mechanisms are implemented: a 2D diagram (Shumlex) and a 3D graph (3DShEx). A comparative user evaluation between both approaches and RDFShape was performed. Results show that Shumlex users were significantly faster than 3DShEx users in large schemas. Even though no significant differences were observed for success rates and precision, only Shumlex achieved a perfect score in both. Moreover, while users' ratings were mostly positive for all tools, their feedback was mostly favourable towards Shumlex. By contrast, the scalability of RDFShape and 3DShEx was widely criticised. Given those results, it is concluded that Shumlex may have potential as a cognitively efficient visualization of ShEx. In contrast, the more intricate interaction with a 3D environment appears to hinder 3DShEx users.


Solicited Reviews:
Review #1
Anonymous submitted on 02/Aug/2022
Major Revision
Review Comment:

The paper presents a visual notation for ShEx (RDF Shape Expressions) visualization, together with two of its implementations, Shumlex and 3DShEx, which are based on different graph-positioning algorithms.

The paper deals with the important topic of Semantic Web artifact (in this case, ShEx shape) visualization, thus enriching the options for involving domain experts with semantic technologies.
The described tools clearly enhance the understanding of the possibilities of ShEx shape visualization.

The presented visual notation is claimed to be cognitively efficient and, indeed, an analysis of it in light of the well-known Physics of Notations (PoN) principles is carried out, explaining the correspondence of the designed notation with the PoN principles, as well as introducing other design concerns that are taken into account.

The implementations and user interface of Shumlex and 3DShEx are described in a reasonably compact way, as well. The links to the tool implementation prototypes are provided and are working.

The complementary resources for the paper, including the code, are on GitHub, allowing for the reproducibility of the results.

The paper also provides a comparative evaluation with potential end users of the Shumlex and 3DShEx tools, as well as the RDFShape tool, which can be considered the baseline. The tools are compared on user performance in terms of elapsed time, success rate and precision.

For the paper to move beyond a somewhat "tool paper" impression, it would be important for the evaluation design and results to be more clearly linked to the paper's main claim of "cognitive efficiency" of the created tools. This might be a simple textual explanation (e.g., a more detailed description of what the authors actually mean by "cognitive efficiency", together with a link between the evaluation design and results and this description), or it might involve some larger re-shaping; in any case, such a well-exposed link between the principal claim and the evaluation would greatly improve the paper.

The evaluation results in the paper do not indicate a dramatic improvement over the state of the art (i.e., RDFShape) by the new tools (in fact, 3DShEx seems to lag somewhat behind the baseline, although few of the differences were found to be statistically significant). Still, the obtained results are interesting, as they shed light on the difficulties associated with, e.g., 3D presentation of information, so they are worth publishing.

The related work analysis is appreciated, both with respect to cognitive implications and Semantic Web visualization aspects. Since the paper relies extensively on the PoN principles and related metrics of similarity between visual objects, it might benefit the paper's self-containment if the main design rationales behind each of these principles and metrics were briefly re-stated.

Regarding the visualizations in the Semantic Web, the following survey article might point to some more visualization approaches that are mainly in the context of OWL ontologies, but nevertheless, some of them might deserve a mention also here:
Dudáš, M., Lohmann, S., Svátek, V., Pavlov, D.: Ontology visualization methods and tools: a survey of the state of the art. The Knowledge Engineering Review, 33, (2018)

In particular, the ontology editor OWLGrEd (owlgred.lumii.lv) provides a UML style notation for OWL 2, together with an option to visualize a given OWL ontology (a very similar approach to what has been done in this paper for ShEx).

It might be asked why the chosen visualization algorithms and libraries (Mermaid underlying Shumlex and 3DFG underlying 3DShEx) can be expected to ensure the best user experience with the visual graph notation, among the wide variety of available graph-positioning algorithms. Such a study of different algorithms may well be beyond the scope of this paper; however, some motivation for the choice of positioning algorithm (possibly including historical, simplicity or availability reasons) would benefit the paper.

From the technical viewpoint:
- the figures in the paper (especially Figs. 2 and 3, but to some extent others as well) are not legible at all when printed black and white on A4 sheets.
- Table 1 would benefit from a visual representation corresponding to the examples in column 4; in the current presentation, the mapping between a feature and its representation is not clear.
- the text is composed of generally well-organized sentences, yet in some places too much context is omitted (e.g., lines 39 and 43 on p.1, line 5 on p.3, line 34 on p.6, line 37 on p.9); the paper needs to be proof-read.

Review #2
Anonymous submitted on 05/Aug/2022
Review Comment:

The authors presented two new tools for visualizing ShEx constraints. The two tools take slightly different approaches to tackle some of the issues observed in prior work. The tools were assessed in a user study whose participants were following a semantic web technology module. The user study is based on a set of tasks and questionnaires, the latter of which were developed for this study.

I believe the topic of this paper is important and, given its niche, original. Moreover, there is a potential to have a more significant impact by allowing, in the longer run, for knowledge engineers to edit ShEx files via such visualizations.

The contribution of the two tools is significant. Still, the results obtained via the study are less so, mainly due to how the authors set the experiment (types of participants, number of participants, the surveys, etc.) up. I understand that recruiting participants for a user study is challenging, so I appreciate the evaluation.

I understand that the authors have made the artifacts available on GitHub at this stage of the reviewing process, but I hope the authors will also make those files public on other platforms (e.g., Zenodo). The authors have not indicated in the article that they will do that. It was annoying that the running example the authors referred to did not seem to work in either tool. I had to use the model from the experiment to get the tools to work. The error messages did not help me identify the issues. URLs to the GitHub repos of both tools would have been appreciated.

Problems that need to be addressed in this paper are:
- Clarity and structure: the authors use a lot of jargon that has not been described and defined in the paper. This made the article very difficult to read.
- The SOTA is inadequate. A few (obvious) references are mentioned, but there is no overview. Either the SOTA of "visualization on the Semantic Web" needs to be much more substantial, or the authors need to provide a list of criteria for including/excluding related work in their survey. The authors did not mention literature relevant to this paper, such as Huang, Weidong, Peter Eades, and Seok-Hee Hong. "Measuring effectiveness of graph visualizations: A cognitive load perspective." Information Visualization 8.3 (2009): 139-152. There are even instruments to measure mental workload, which may inform us about cognitive load.
- Evaluation: the authors did not avail of existing instruments for their surveys (e.g., SUS and PSSUQ). The authors need to motivate the development of their survey. I also found that their surveys had many limitations (many aspects were open to interpretation). However, the most significant issue with the evaluation was the precision metric that depends on speed.
- Language: the article needs to be thoroughly proofread. I have added some comments, but this is a problem that is easily addressed.

While reconducting the experiment is likely very difficult, the authors can significantly improve the paper by addressing the clarity, structure, and SOTA. A reanalysis of the existing data (considering time and accuracy as distinct, but possibly correlated, variables) is also necessary.

More specific questions and comments:

The authors state that only one SOTA, RDFShape, exists. As this is a bold claim, it is always prudent to add that this is "to the best of their knowledge." The authors could also briefly describe how they looked for ShEx visualization tools.

Throughout the paper, especially at the beginning, the authors introduce and use terms that have not been defined or described. This makes the paper difficult to comprehend on its own. I would advise the authors to define those ("semantic transparency" and "complexity management" are examples).

The authors' overview of the SOTA on visualization is not compelling; the aim was to provide a SOTA on "visualization on the semantic web", but the authors only referred to some works. There is a lot of work that has been missed. Examples include:
- a graph-based RMLEditor by Heyvaert et al. 2016;
- Ontodia, a graph-based approach combined with faceted browsing by Mouromtsev et al., 2015. Ontodia places data properties inside a rectangle and represents object properties as arcs between entities. It is, in that sense, also "UML-like."
- RelFinder by Heim et al., 2009 to explore Linked Data datasets.
- ...
The SOTA presented by the authors lacks a goal and a scope; with these, it could be more exhaustive, and an analysis of it could better identify the problem(s).

The authors also missed some related work on the evaluation of visualization tools. Longo and Crotti Junior researched mental workload and cognitive load and have applied their research to mapping languages.

I have tried using both tools, and I sometimes get results. However, the running example in the footnotes (genewiki.shex) leads to errors. The error messages state that a base directive is missing, even if I include the directive and missing namespaces. If the error is due to me, then the tools lack documentation. The file from the experiment, available in a separate GitHub repo, does work. This leads me to question the robustness of the tools.

When using examples from the spec, some valid examples:
- lead to obscure errors in a JavaScript window;
- only some of the numerical facets are shown (MinInclusive is shown, but MaxInclusive is not); the tool seems to accept the input but does not display the MaxInclusive constraint;
- ...

Given the following example:
ex:c xsd:integer MinInclusive 10 MaxInclusive 20
ex:Foo {
  ex:a @ex:c ;
  ex:b xsd:integer MinInclusive 1 MaxInclusive 5 ;
  ex:d IRI {1,2}
}
This input yields a diagram with two "UML classes" and an arrow from ex:Foo to ex:c. The problem, however, is that all the interesting information about the use of the predicate ex:a is lost (i.e., the permissible values being integers between 10 and 20). The example and the grammar outlined in Table 1 lead me to believe that only a subset of ShEx is supported. I have not found motivation for supporting only a "subset" of ShEx.

3DShEx was entertaining, but I found the interface not intuitive. I sometimes managed to make all arrows around a shape light up, but I found it difficult to replicate. Again, some documentation on how to use these tools would be welcome. One quickly gets "lost", especially when dealing with large graphs. It would have been nice to have a feature that allows one to store a particular position or state.

As for the evaluation:
- Why did the authors combine time and precision? Is this based on prior studies? One can easily create a situation where a very swift person with poor results "outperforms" a slower yet more precise participant.
- In the discussion, it seems that the authors do not recognize that precision depends on success rate: "only one member of the Shumlex group achieved a perfect score both in success rate and precision." One can only have a "perfect" score for P if they not only have the perfect score for S but are also the fastest.
- The authors allude to a potential conflict of interest on line 43 of page 12. It may be that students did not want to critique the work conducted by a research group. The authors should elaborate on this in Section 6.1.4.
- The number of participants is minimal and, frankly, too small to compare the three tools. For usability studies, 5 participants per tool may suffice, though a more extensive pool is needed if a tool is assessed only once. Other types of analysis require more participants to ensure that the observations are not merely due to chance.
- Minor: The authors deemed the threshold for significance to be < 0.05. It might be worthwhile to indicate that in the article. Is that a choice the authors made or is this based on similar studies?
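The objection about entangling speed and accuracy can be illustrated with a small numeric sketch. The formula below is hypothetical (the paper's actual definition of precision is not reproduced here); it merely shows how any metric that divides correctness by elapsed time lets speed dominate accuracy:

```python
# Hypothetical metric: "precision" as success rate divided by elapsed time.
# (Illustration only; the paper's actual formula may differ.)
def precision(success_rate: float, elapsed_minutes: float) -> float:
    return success_rate / elapsed_minutes

swift = precision(0.6, 2.0)    # fast but error-prone: roughly 0.30
careful = precision(0.9, 5.0)  # slow but accurate: roughly 0.18

# The swift participant "outperforms" the careful one under this metric,
# which is exactly the reviewer's objection.
assert swift > careful
```

Treating time and accuracy as separate (possibly correlated) variables, as suggested above, avoids this artifact.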

I appreciate their analysis, but I had a problem with the user evaluation. The authors created their own survey and tied one question to each dimension. There have been well-established surveys available that the authors can use. E.g., SUS for usability and PSSUQ for satisfaction and error handling. Why did the authors not consider these instruments? Or, to put it differently, what was the motivation to build this particular survey, and what measures were undertaken to ensure the questions adequately assessed the dimensions?

Many of the questions in the survey are subjective and vague. "The experience with the tool was satisfactory", for instance, is not targeted enough. Is it the experience of using the tool in a browser, the experience of solving a specific task, the tool's usability, ...? As it is open to interpretation, the input from users is difficult to compare. Existing instruments ask users several questions to better hone in on a dimension.

Questions such as "The meaning of the symbols can be inferred from their appearance.", while useful, could have been assessed indirectly via the survey, e.g., "What is the meaning of this symbol in diagram X?"

- Did the authors mean "knowledge engineering" instead of "knowledge"?
Section 1
- What do the authors mean by "RDF brings together users from various branches of human knowledge"? This span can be interpreted differently and does not provide any added value. The authors could have just stated that RDF is applied in many application domains, requiring the skills and competencies of domain experts, knowledge engineers, and users, amongst others.
- Line 1.39: the implication is unclear as arguments or elaboration are missing. I.e., it's unclear how the authors all suddenly mention "textual programming languages."
- "sheer amounts of data" contains two plurals. Using "sheer" in this context implies a tremendous amount. I wonder whether that is intentional.
- The authors introduce some terminology that needs to be defined. Examples include symbol overload, semantic transparency, ...
- The authors refer to a "Semantic Web ecosystem." What is this ecosystem?
- The authors refer to an "aforementioned scalability issue" but never describe that issue in detail. It is vaguely implied in the preceding paragraph and needs to be fleshed out, as the authors did not mention the words "scalable" or "scalability" before. This problem is also known as "complexity management", but the authors have not provided a reference. The statement that complexity management is rarely addressed in general, as the sentence on line 2.3 suggests, also lacks a reference.
- Omit "Thus" from Line 2.6 as this sentence does not flow from the other paragraphs. The whole sentence/paragraph needs to be rephrased.
Section 2
- What does DOT stand for?
- What are the common scalability issues we can observe in Fig. 1? And is that based on literature? The authors mention more than one issue, but only one is described in the following sentence.
- The authors "jump to a conclusion" in the last sentence of the second paragraph. How is DOT a suitable testing ground for complexity management mechanisms?
- The authors have again introduced terminology that has not been defined or described: "complexity management mechanisms" and "cognitive efficiency."
- The authors should consider converting the SVG into PDF for the paper.
Section 3
- Something is wrong with the sentence starting on line 3.7. Are there words between two sentences missing? Also, what is "high element interactivity"?
- Section 3.1 does not provide a SOTA but some background knowledge.
- I would suggest that the authors also briefly describe the principles outlined by PoN in this section, as they are used in the paper. Note that Section 4.1 does not systematically explain or describe the dimensions. Semiotic clarity, for instance, is not described.
- What's the reference for NV3D?
Section 4
- In Section 4.1.1, do the authors have a reference for UML being "widely recognized"? I would also argue that it is recognized by people with a computer science or software engineering background.
- How was the threshold mentioned in Section 4.1.2 set? Is that set by the authors or a best practice mentioned in literature?
- In Section 4.1.9, the technical details of what? RDF and ShEx, or UML? Probably the former, but this must be made explicit to avoid confusion.
- Wrong use of "on the contrary."
- Why did the authors not choose to adopt some UML 2 notation such as {XOR} for the OneOf constraint? Wouldn't people familiar with UML prefer to see as much reuse as possible?
Section 5
- "JavaScript" instead of "Javascript"
- "won't" is informal; use "will not" instead.
- The quality of figures 2 and 3 is unacceptable. Consider using vector-based images instead.
- The relations in Fig. 4 cannot be read.
Section 6
- "based on" instead of "based in" (several times)
- "hasn't" -> "has not" (informal speech)
- I appreciate that the authors dedicated a section to the limitations of their experiment. Some aspects need to be elaborated on, however. Participants were drawn from a course: does that not entail a conflict of interest, and how was that mitigated? Participants were students, but how would people from industry who are familiar with semantic technologies or ShEx react to the tools?

There are mistakes in some references, e.g., the name of Ben de Meester in [16], [18] is incomplete, and there is use of HTML entities instead of LaTeX ('amp;' in [25]), ...

Review #3
By Martin Necasky submitted on 04/Nov/2022
Review Comment:

The submitted paper presents a method for visualizing RDF shape constraints expressed in Shape Expressions (ShEx). The goal is to enable users who are not familiar with the textual language to comprehend the shape constraints. The presented work builds upon an existing approach to visualizing ShEx called RDFShape. RDFShape generates a UML class diagram. As stated by the authors of the submitted paper, RDFShape suffers from some limitations. The authors demonstrate the limitations using a motivating example of the Wikidata GeneWiki project. Based on these limitations, the authors propose their own approach to ShEx expressions visualization and two working implementations, Shumlex (2D visualization) and 3DShEx (3D visualization). Shumlex uses visual constructs of UML class diagrams to visualize ShEx constructs. 3DShEx is a generic graph visualization in a 3D space.
(1) originality
I do not consider the presented approach very original. It just tries to fix some limitations of RDFShape. The limitations of RDFShape (scalability and degree of symbol overload) are mentioned, but without any systematic analysis. Based on this vague identification of limitations, the proposal lists several ShEx constructs and maps them to some UML constructs. However, the result is just another UML class diagram. Even though there are some differences compared to RDFShape, both approaches are very similar from the perception point of view. Moreover, the paper does not analyze other related work, e.g., in the field of (XML, JSON, ...) data schema visualization. It is also too focused on UML-style visualization. There are different approaches, e.g., ShExAuthor, another project of one of the authors. I understand that ShExAuthor is more focused on ShEx constraint creation, but it also presents ShEx visually. Isn't this kind of presentation better? Therefore, the originality of the approach compared to RDFShape is not clear.
(2) significance of the results
Both implementations are described with few details, and their demos are available. The authors used the implementations to evaluate the approach. For the evaluation, the authors worked with university students with some background knowledge of RDF technologies. However, the evaluation results are not very convincing. 3DShEx has been shown to be perceptually ineffective for users. Shumlex is comparable with RDFShape. Even the authors conclude that there is no significant improvement. There is some improvement, probably caused by a simple feature that enables one to choose a UML node in a diagram so that only its neighbors are highlighted while the rest is shaded. This feature is missing in RDFShape. However, it is only an insignificant feature from the research point of view. Therefore, the result presented in the paper is not very convincing. With respect to the comments above, the paper's contributions are also not very significant from the theoretical point of view.
(3) quality of writing.
The proposal contains a serious inconsistency. The authors state that the two proposed ShEx visual notations are UML-like. However, this is not true: 3DShEx is a pure graph visualization in 3D space that has nothing to do with UML. In more detail, in Section 5, 3DShEx is presented as the implementation of the approach proposed in Section 4, which is apparently not true. Therefore, the paper's structure is very confusing in its main message.
The 2D notation, Shumlex, can be considered UML-like. However, it is too inaccurate from a formal point of view. I even think it is a dangerous inaccuracy that should be avoided in scientific papers. The problem is that UML (class diagrams) has precisely defined semantics. A UML (class) diagram is not only a figure that documents a structure vaguely. Its background semantics also provides semantics to the documented structure. Unfortunately, the submitted paper seems to consider UML only as a vague visualization without the background semantics. Therefore, it is not clear from the paper how to correctly interpret a given UML class diagram documenting RDF shape constraints. There is even a risk of misinterpretation, and that is why I think that such reuse of UML for visualizations is dangerous.
In summary, I think that the paper presents only the first step in a potentially promising research direction that could lead to a visual approach improving the understandability of complex RDF shape constraints. However, this first step is not sufficient for a full scientific publication.
Long-term stable URL for resources
(A) The data file is well organized and contains a README file which makes it easy to assess the data.
(B) The provided resources are complete for the replication of experiments. The paper describes the methodology of the evaluation.
(C) The chosen repository is GitHub which is appropriate for long-term repository discoverability.
(D) The provided data artifacts seem to be complete. The source codes are complemented with ShEx constraints for the evaluation. The repository also contains anonymized results of the evaluation presented in the paper.