Transformer-Based Architectures versus Large Language Models in Semantic Event Extraction: Evaluating Strengths and Limitations

Tracking #: 3807-5021

Authors: 
Tin Kuculo
Sara Abdollahi
Simon Gottschalk

Responsible editor: 
Guest Editors KG Gen from Text 2023

Submission type: 
Full Paper
Abstract: 
[RESUBMIT of 3673-4887] Dear Reviewers, thank you for your constructive feedback, which helped us to improve our article. Please see below for our detailed replies to your suggestions. In summary, these are the major changes to our revised article:
- We have clarified definitions in Section 2 "Problem Statement".
- We have added the new Section 2.1 "Assumptions", which states the assumptions made to address our problem definitions with our methodologies, including a statement about the dataset characteristics required for training and evaluation.
- We have updated Fig. 2 and Fig. 5 to clarify the flow of information in our models.
- We have extended Sections 4.1 and 4.2 to describe our prompting strategies.
- We have extended Section 5.1.1 to compare our selected event ontologies to other event ontologies and to justify our selected threshold for class and property selection.
- We have added a new experiment in Section "5.7 Consistency Analysis", where we demonstrate the robustness of our LLM-based approach when prompting the same LLMs multiple times with the same event classification/relation extraction prompts.
- We have expanded the Related Work section to incorporate recent studies published since the initial submission, ensuring that our analysis reflects the latest advancements and contextualizes our findings within the current research landscape.

Best regards,
The Authors
Tags: 
Reviewed

Decision/Status: 
Accept

Solicited Reviews:
Review #1
Anonymous submitted on 02/Apr/2025
Suggestion:
Accept
Review Comment:

The authors addressed our previous comments, and most of those from the other reviewers, significantly improving the manuscript.

Review #2
Anonymous submitted on 06/Apr/2025
Suggestion:
Accept
Review Comment:

I agree with the revisions made to the paper, particularly the inclusion of the observation regarding the variability of Large Language Models (LLMs) in experimental studies in Section "5.7 Consistency Analysis".

Review #3
By Daniel Hernandez submitted on 20/Apr/2025
Suggestion:
Accept
Review Comment:

This manuscript has improved compared to the previous submission. I recommend accepting it this time.