Ontological Foundation of Hazards and Risks in STAMP

Tracking #: 2105-3318

Jana Ahmad
Bogdan Kostov
Andrej Lališ
Petr Křemen

Responsible editor: Krzysztof Janowicz

Submission type: Ontology Description

Abstract: In recent years, there has been a growing interest in smart data-driven safety management systems comparing to the traditional ones. The demand for such upgrade comes from the frequent changes in our daily life and technological innovation which introduce new causes and factors of accidents. However, the increasing amount and heterogeneity of safety-related data introduces a new demand for their proper knowledge management to use them for detecting safety-related problems and predicting them. In this paper, we discuss the ontological foundations of hazards and risks which are represented by such data. We consider their representation in safety systems, specifically in the domain of aviation safety using the STAMP model. As a result, we propose a STAMP hazard risk ontology that could help in analyzing accidents and modeling control loop failures according to the theory of STAMP. For evaluation, we tested our ontology on realistic examples in the aviation safety domain as a use-case.
Major Revision

Solicited Reviews:
Review #1
By Michael.Uschold submitted on 21/Feb/2019
Major Revision
Review Comment:

OVERALL: The authors have tackled a large and significant problem, given it a lot of thought and have done quite a lot of work. Many of the ideas are right on, including the justification for using an upper ontology, building on existing ontologies, the use of a principled approach to ontology engineering, including competency questions and using ≥ 1 examples for validation, as well as using OWL as a standard language to increase potential usefulness in practical settings in industry and government. They have also put an (albeit very preliminary) version of the ontology online. This is all very refreshing to see and reflects the evolution of ontology engineering and application work in recent years. From a content perspective, the authors have identified the central notions around risk and hazard, which in simple terms include the bad thing that might happen, how bad it might be, how likely it is to happen, and putting measures in place to reduce the likelihood and/or severity of the potential bad event being realized.
Unfortunately, there are a number of problems with this paper. Chief among them is the lack of clear and consistent definitions for the key concepts. There is an attempt to do so, with several numbered definitions, but there are numerous problems and inconsistencies which shall be described below. The worst example of this is the term ‘hazard’ which is used and described in a wide variety of ways.
Another key problem is the weakness of the validation attempt. The competency questions are far too vague and quite different from what is described in the literature. Competency questions are supposed to be expressed eventually as queries to a knowledge base that has both ABox and TBox data in it. The competency question ‘answers’ in this paper are more like documentation of the ontology itself, and quite far from what could be posed as a query to a triple store.
Finally, although there is a link to the ontology online, there is not much there to look at. There is little evidence that the ontology described in the paper exists as an OWL ontology.
One stated objective is to be able to represent knowledge about the subject area. One aspect of this is to build an ontology and use it to help people understand the subject. Fine, but much more useful would be a database of hazards and accidents and requirements and constraints and factors that can be used to perform data analytics to help do risk assessments and identify better controls. Not only is there no detailed ontology to examine, there are no example triples showing how the ontology might be used as the basis for representing hazard and accident data. There is a step in that direction in figure 9.

SPECIFIC: The big ticket item is the term ‘hazard’ and its numerous and inconsistent definitions and descriptions.
1. Definition 1 says it is a set of conditions that can lead to a bad event. This seems like a situation, but Hazard is a subclass of Factor. Unfortunately, there is no attempt anywhere in the paper to clarify what a factor might be. And neither Hazard nor Factor are connected to Situation in any way.
2. The term ‘hazardous situation’ is used to refer to birds flying near planes taking off and landing. Is the reader to take ‘hazardous situation’ to have the exact same meaning as ‘hazard’ from definition 1?
3. On p9, it says: “According to definition 1 hazard concept describes any factor that causes or contributes to an unplanned and undesired loss event.” BUT definition 1 does not say this, rather it says that a hazard is a set of conditions (my emphasis). There is an implicit hint at what you mean by factor here that should be explicitly called out.
4. p12 says: “According to the experience profile of CRVO there are three event categories which we may choose from to model hazards. Namely, threats, enablers and losses.” This is inconsistent with definition 1 which clearly states a hazard is a set of conditions, not events.
5. p13 says “According to the CRVO there should be a disposition (the hazard) which is manifested by the instances of the ‘Inadequate Operation’ event type.” Here it says that the hazard is a disposition, but that is different from a set of conditions.
NOTE: I just looked more carefully at figure 5 and discovered that Hazard is indeed a subclass of Disposition, so some of the points above are more about lack of consistency and clarity in explanation, than about logical inconsistency of meaning, which is what appeared to be the case on initial reading. However, greater consistency and clarity in how a hazard is defined and described would be a great help.

The term ‘incident’ is commonly used in industry, and you use it often in the paper. Where does it show up in the ontology as a concept, if not as a term?

On p5 you say goals are key to risk, but I say only tangentially. Risk is about a bad thing happening, period, independent of one’s particular goals. Sure, what makes it bad somehow impacts on your goals, but those goals may or may not be significant per se in risk assessment. If your house burns down, your goal of having a roof over your head has been shattered. But there would be little point in representing that goal explicitly. The key is to characterize the nature and severity of the bad event, and to understand its impacts. This is only tangentially related to goals.

Speaking of things inhering in other things can be useful, but it did not clarify much meaning for me. I found it confusing. I have no idea what is meant by “Risk qualities inhering in particular relations” (p5). And what is a ‘risk quality’ anyway? I did not understand the ‘inheres in’ relationship connecting to the lead pilot in fig 9.

The Common Ontology of Value and Risk is ok in some respects, but is odd in others. For example, the idea of describing “value as experience” is bizarre. Things that have value are my house, my car, the $100 bill in my desk drawer and the first ever issue of the Superman comic book series in someone’s safe. These things have market value that has at best only tangential relevance to anyone’s experience.

Requirements elicitation: R3 is very vague, and anyway there is no evidence of it being met. In fact, all of R1-R5 are a bit vague or too general to be very useful.

A Risk Enabler causes accidents, but it is always an Object. Do you mean to imply that only objects can cause accidents? That does not seem correct to me. Events can cause accidents, e.g. earthquake, or market crash.

Axiom A2 says that every hazard results in a problem, but that is not so. Some hazards, commonly referred to as ‘accidents waiting to happen’, in fact never result in an accident, for whatever reason. High likelihood does not mean certainty.

p10 refers to ‘unsafe concept’, ‘Vulnerability’ and ‘unwanted event’ but I do not see these anywhere in the ontology.

p10 says “In this section, we describe a Risk as a future event, i.e. risk involving uncertainty about whether or not such a loss event will happen in the future.” This is problematic, as the term ‘Risk’ is highly ambiguous. Attaching likelihood and severity to it suggests that you mean “Risk” to be the possible future bad event. But that seems to be exactly what is meant by the class, Risk Event (in blue in fig 5).
For this ontology to hold up and nicely glue together these tricky concepts, you must relate what you are calling Risk to the other things in fig 5, most importantly: RiskEvent, Hazard and Accident. On the other hand, you say that Risk is a quality. I’m not sure exactly how that fits here. According to my (perhaps imperfect) understanding, a ‘quality’ involves things like color of something being red – in other words an attribute or characteristic of something that has a value. This view says that the qualities are likelihood and severity, which are characteristics of a possible future bad event. The possible future bad event itself is not a quality.

p12 says “we model arrows as objects”. This is very confusing, what does it mean? Arrows typically represent what are called properties in OWL or roles in DL.

P13 mentions ResultingSituation, which is also in fig 5. What about an initial situation? Is that not important?

p13 has an excellent summary of the essence of risk and hazard: “Hence, UFO Events existentially depend on the objects that participate in them and an event is a manifestation of a disposition of an object, then a risk event occurs due to the dispositions of its participants, which are in STAMP model the Hazards (i.e. the dispositions). Therefore, we consider Hazard as dispositions in SHRO conceptual model.” I recommend having something like this much earlier in the paper and using it as foundation for describing the overall model/ontology. Also, maybe have a separate, much simpler diagram that shows just this essence.

Section 8: Validation. It’s great to take a significant stab at validating the ontology; however, the way the authors have gone about it is not very illuminating. The chief problem is that you are using the idea of a competency question (CQ) in a very different way than originally intended in the work by Gruninger and Fox. In their work, and indeed in every work where I have seen CQs discussed, a CQ is a question that must eventually be encoded as a formal query that can be posed to a database, with a query engine providing the answer. Originally the query was posed in formal logic and an inference engine gave the answers. These days, CQs are more commonly put to a triple store that has data populating an ontology. The initial form of a CQ will often be in informal English, but as the ontology evolves, it will become more and more specific and use the vocabulary of the ontology (chiefly classes and properties). Ultimately it will be a SPARQL query. The notion of competency is that the ontology has the right concepts in it so that when populated with ABox data, real answers can be provided, which can be the basis of useful data analytics, which is especially important for hazards and accident data.
The authors of this paper have not attempted this activity. Rather, the answers they posed to their CQs are mainly English documentation of the ontology itself.
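For illustration only (the class and property names below, e.g. ex:Hazard, ex:leadsTo, ex:severity, are hypothetical placeholders, not terms confirmed to be in SHRO), a CQ such as “Which hazards can lead to a risk event, and with what severity?” would eventually be rendered as a SPARQL query of roughly this shape:

```sparql
# CQ: Which hazards can lead to a risk event, and with what severity?
# All names in the ex: namespace are hypothetical placeholders.
PREFIX ex: <http://example.org/shro#>
SELECT ?hazard ?event ?severity
WHERE {
  ?hazard a ex:Hazard ;
          ex:leadsTo ?event .
  ?event a ex:RiskEvent ;
         ex:severity ?severity .
}
```

Posed against a triple store populated with ABox data, such a query returns concrete hazard/event pairs rather than English documentation of the ontology.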

What is needed are some sample triples representing a realistic example of the key concepts from the ontology. This will help convince the reader that you are on the right track. To really validate the ontology requires the loading of a significant body of data to make sure the ontology can support the nuances of real world situations. During the process, shortcomings in the ontology are identified and corrected in an iterative manner.
The authors have taken a step in this direction, which is depicted in figure 9 with the helicopter example. However, this example needs more work. The diagramming notation is not clear. Is a dotted line representing rdf:type? The orange boxes relate to the example, but some seem to be depicting instances, and others classes. This is confusing. Also, it is not clear that the links are correct and useful. The helicopters being shot down is correctly modeled as an Accident, but so also is the fact that people died. The latter would seem to be better modeled as a ResultingSituation that the accident brings about (per fig 5).
I’m also a bit unclear about the meaning of the ‘inheres in’ link connecting the hazard to the lead pilot. Also, why is the hazard class called “STAMP hazards” in fig 9 but merely “Hazard” in fig 5? The latter is the more normal singular form. Also, why bother specifying that is a STAMP hazard? What is different about that compared to a non-STAMP hazard?
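As one possible rendering (all identifiers below are hypothetical, chosen to illustrate the instance/class distinction the figure leaves ambiguous, not taken from the actual SHRO vocabulary), the helicopter example might be expressed in Turtle as:

```turtle
@prefix ex:  <http://example.org/shro#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

# Instances (ABox) -- hypothetical identifiers for the fig. 9 example.
ex:helicopterShootdown  rdf:type  ex:Accident .
ex:peopleDied           rdf:type  ex:ResultingSituation .
ex:helicopterShootdown  ex:bringsAbout  ex:peopleDied .

ex:misidentificationHazard  rdf:type  ex:Hazard ;
                            ex:inheresIn  ex:leadPilot .
ex:leadPilot                rdf:type  ex:Pilot .
```

Making the rdf:type links explicit like this removes the ambiguity between instances and classes that the dotted lines in the figure leave open.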

The authors show awareness of this being part of the overall job here, when they say on p14 that: “As a result, we proposed STAMP hazard risk ontology SHRO which its implementation could help with creating semantic analyses of safety systems accidents and hazards.” [my emphasis]. The claim that SHRO itself can provide this help is not well-substantiated.

A central problem with this paper is that I don’t fully understand the ontology, due in large part to various inconsistencies in how the key ideas (especially ‘hazard’) are defined and described. But even more, the purported OWL ontology is not available to look at. Lack of understanding means I cannot tell whether it is likely to be able to do its intended job of representing knowledge and data about hazards and accidents.

TYPOS etc:
P4, Column 1, Line44: Reference [20] should be [28]
P5, Column 1, Line 16: the subclass relationship is backwards.
p9, C1, L39 ‘who’ should be ‘that’
Figure 8: the black text on dark colored shapes is hard to read. Use lighter text, or lighter box colors.
p12, C1, L44: There is a double question mark, “??”.
p13, C1, L25 ‘pouting’ → ‘putting’

The term ‘causing safety’ is strange.
Some of the references do not have titles for the main article, e.g.:
[46] C.L.B. Azevedo, M. Iacob, J.P.A. Almeida, M. van Sinderen,
L.F. Pires and G. Guizzardi, in: 2013 17th IEEE International
Enterprise Distributed Object Computing Conference, 2013,
pp. 39–48, ISSN 1541-7719. doi:10.1109/EDOC.2013.14.

MINOR: while the English is very good most of the time, and the intended meaning is almost always easy to understand, the paper would benefit from a review by a native English speaker (or equivalent). There are numerous cases where wording is awkward or ungrammatical. In a few cases it is not clear what is meant.

This is a valiant effort to address a challenging problem, and in general terms the authors are doing just what they should be doing. However, in specific terms, there is a lot more to do. Specific recommendations:
1. Improve the consistency and clarity of the descriptions of the key concepts, especially hazard.
2. Make sure that a reasonably complete version of the ontology and any imported ontologies is easy to download and load into an ontology tool such as Protégé.
3. Use more conventional and more useful competency questions, show some sample SPARQL queries at least, and ideally load up some sample data and run the queries against a triple store.
4. Expand on what has been done in figure 9, and represent it as actual triples in say Turtle and include them in an appendix.
5. Another part of validation is to have subject matter experts vet the ontology. Do this and describe the results.
6. Tighten up the requirements R1-R5 and give better evidence that they are met by the ontology
7. Suggestion: Refactor the paper a bit by giving the concise summary of the essence of risk and hazard on p13 much earlier in the paper, and expanding out from it in an iterative manner.
8. Good luck!

Note from the reviewer: I have been modeling risk in OWL for many years, mainly in finance but also in manufacturing and shipping of consumer products. My colleagues and I have populated risk ontologies with substantial numbers of triples converted from relational databases and validated the ontology and the data using expert vetters and competency questions represented as SPARQL queries. Unfortunately for the academic community, this work has taken place behind commercial firewalls.

Review #2
By Ewen Denney submitted on 18/Mar/2019
Major Revision
Review Comment:

Motivated by the "heterogeneity of safety-related data" the authors propose the use of ontologies to place the STAMP safety analysis technique on a sound and consistent conceptual foundation. They describe two ontologies - STAMP hazard and risk, and STAMP control loop hazard profile. A small example amenable to STAMP analysis is described.

The ontology appears to have been developed using a rigorous engineering process, by "ontology engineers", with experience in foundational ontologies, and subsequently systematically validated in a process the authors refer to as verification and validation.

The paper could be better motivated. Although you mention various challenges with safety analysis in the introduction, I don't see how any of these are actually addressed by using ontologies. Moreover, the significance of many of the design decisions in the ontologies is unclear. It would be beneficial to flesh out the details of how to represent the running example using your ontologies.

It was not clear to me what's really going on with the V&V process you described, and in what sense it's significantly exercising the ontologies.

What did you gain from using the foundational ontologies? And what would querying buy us?

I strongly suggest that the authors have a fluent speaker of English edit the paper, since there are many places where the unidiomatic language hampers comprehension. One recurring problem is the misuse of articles.


such upgrade -> such an upgrade

throughout years -> over the years

systemic causation models - systemic as opposed to what? Safety is inherently a system level property


needs both extensive amount of data and significant expertise - but neither of these are addressed by your ontological approach so far as I can see

that is intended for the use case of hazard analysis - isn't that the main point, rather than just a use case?

improved support for risk estimation - has that been empirically validated? give a citation

citation 2 - when citing a book, it would be preferable to indicate where in the book specifically


so as other systemic models - don't understand this

depicted in Fig 1 - move the figure closer

separates data from their interpretation - how so?

descriptive statistics - like what?

Def 1 - this seems fairly standard to me. What's specific to STAMP about it?

Def 2 - to be a "function" of something, without specifying the function, is not defining anything. And what does "hazard level" mean?

hazardous situation - not defined

This hazardous event - also not defined; is it a synonym?

both hazard and risk are not associated -> neither hazard nor risk are ..

this knowledge is always extracted from concrete events .. Risk is based on the loss .. hazards are based on .. incidents - this is not very clear; illustrate using your example.

investigation and similar - what does similar refer to?


so as the reuse of the -> in order to reuse the (?) - I'm not sure if this is what you meant; "so as" is used throughout the paper, usually ungrammatically and confusingly

perdurants - please remind the non-ontologically inclined reader what perdurants and endurants are


tropes or moments - meaning?

existentially dependent entities that are realizable through the occurrence of an Event - this (and much of the rest of this paragraph) is verging on ontological mumbo jumbo. Can you write it with non-ontology experts in mind? ie. experts in safety.

whose instances are individuals - sets of individuals?

sec 2.3.2 - can you clarify what value means here?


solve the safety related problem - what does that mean?

Example of hazard - this should come earlier

This situation cannot be represented - why not?

as well as their severity and likelihood - this is the first mention of severity, so please explain this term

there is a subtle connection between the models - but what is the difference?


control structure above the pilots - what does that mean?

are only hierarchical - unclear

formal representation of the model in modal logic - really? this is the first mention of it

The last requirements - which one? R3?


competency questions - what does this mean? these questions do not relate to competence in the usual sense

SHRO .. for increasing the awareness of analytic methods - really? in what sense does it achieve that?


such that the safety control structure does not account for occurs - ungrammatical; and why is this not violation of a constraint?

denoted in Fig 5 as STAMP hazards - do you just mean hazards? and is a STAMP failure just a failure?


As can be seen from Fig 5 - the text mentions various concepts that do not appear in the figure; eg. failure, mitigates


Another possibility for future progress is not considered - no idea what that means

control control loop


referred in STAMP ?? - broken link

We model arrows as objects - why not relations?

model hazards using threats instead of enablers - why wlog? is it because every enabler is a threat?


what is the ... - sentence missing its end

STAMP hazard model does not mention the disposition of the sensors - what is the significance of that?

pouting -> putting

sec 7 - it would be useful to show the concrete ontological representation for an example

regrading - do you mean regarding?

last sentence of sec 7 - no idea what that means


sec 8.1 - can you explain in what sense this is a verification? And what is the difference from your notion of validation in sec 8.2?

sec 8.2 - I don't see how this exercise is really telling us anything. What could possibly go wrong?

reduce conceptual interoperability - for example? can you motivate this?

allows creating SPARQL queries - if this is important, can you give us some examples?


Table 2 - it's unclear to me how this constitutes a verification


Ref 19 - some bad characters
Ref 29 - capitalize European
Ref 46 - title?