UFRM: A User Feedback Reference Model for Managing Feedback in Dynamic Software Scenarios
A Systematic Approach to Developing and Evaluating the UFRM in Dynamic Scenarios

Master's Thesis in Computer Science and Engineering

Hoda Sheikholeslamigarvandani
Haoru Sui

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
CHALMERS UNIVERSITY OF TECHNOLOGY
Gothenburg, Sweden 2025
www.chalmers.se

Master's Thesis 2025

UFRM: A User Feedback Reference Model for Managing Feedback in Dynamic Software Scenarios
A Systematic Approach to Developing and Evaluating UFRM in Dynamic Scenarios

Hoda Sheikholeslamigarvandani
Haoru Sui

DEPARTMENT OF COMPUTER SCIENCE AND ENGINEERING
Chalmers University of Technology
Gothenburg, Sweden 2025

UFRM: A User Feedback Reference Model for Managing Feedback in Dynamic Software Scenarios
A Systematic Approach to Developing and Evaluating UFRM in Dynamic Scenarios
Hoda Sheikholeslamigarvandani and Haoru Sui

© Hoda Sheikholeslamigarvandani, 2025.
© Haoru Sui, 2025.

Supervisor: Farnaz Fotrousi, Department of Computer Science and Engineering
Examiner: Gregory Gay, Department of Computer Science and Engineering

Master's Thesis 2025
Department of Computer Science and Engineering
Chalmers University of Technology
SE-412 96 Gothenburg
Sweden
Telephone +46 31 772 1000

Typeset in LaTeX, template by Kyriaki Antoniadou-Plytaria
Gothenburg, Sweden 2025

UFRM: A User Feedback Reference Model for Managing Feedback in Dynamic Software Scenarios
A Systematic Approach to Developing and Evaluating UFRM in Dynamic Scenarios
Hoda Sheikholeslamigarvandani and Haoru Sui
Department of Computer Science and Engineering
Chalmers University of Technology

Abstract

User feedback is essential for software improvement, shaping usability, functionality, and overall user experience. However, significant challenges arise in Dynamic Scenarios, where users try to provide feedback under high cognitive load, stress, or mental pressure due to factors such as time sensitivity, environmental uncertainty, or task-focused workflows. These situations make it difficult for both feedback senders and receivers to effectively manage the feedback process. As a result, users may delay or skip giving feedback altogether, making it harder for receivers to collect and process valuable input. This study develops a User Feedback Reference Model (UFRM) to improve feedback management under such conditions. We conducted 10 semi-structured interviews with feedback receivers (such as developers and product managers) and 30 semi-structured interviews with feedback senders (end users) to understand their current issues in collecting and processing feedback in dynamic scenarios. The results were analyzed using Thematic Analysis to identify key preferences and challenges on both sides and to suggest solutions. These findings were then structured into a four-layer conceptual model for managing real-time feedback in dynamic scenarios, covering scenario detection, feedback collection, processing, and response. UFRM was validated using three real-world-inspired use cases, covering smart navigation systems, autonomous driving autopilot, and digital healthcare platforms, to ensure its effectiveness. These use cases were derived from real feedback contexts shared by users during interviews and were applied step by step to test the model's adaptability and performance under realistic dynamic constraints.
The findings provide insights into optimizing feedback mechanisms in dynamic scenarios, balancing the preferences of both feedback senders and receivers, and supporting better software adaptation and user experience.

Keywords: User Feedback, Dynamic Scenarios, Feedback Management, Feedback Collection, Feedback Processing, Feedback Sender, Feedback Receiver, Thematic Analysis, Use Case-Based Evaluation, UFRM.

Acknowledgements

We sincerely thank Farnaz Fotrousi, our software engineering supervisor, for her dedication and invaluable guidance throughout this thesis. Her support and encouragement have been instrumental in shaping our research. With patience and expertise, she helped us navigate the complexities of the user feedback field, providing clarity and direction at every stage. We also extend our gratitude to all interviewees who responded to us, for their valuable insights and enthusiasm that enriched our study. Finally, we are deeply grateful to our families for their constant support throughout this research journey.

Hoda Sheikh & Haoru Sui, Gothenburg, May 2025

Glossary

User Feedback: Information or comments provided by users about their experience after interacting with a software product, often used to improve the product.

User: Refers only to human feedback senders in our study. The term user consistently refers to a human actor providing software feedback.

Feedback Sender: An entity that provides feedback. This can be a human user or a system component (e.g., a logging tool or sensor), depending on the scenario context.

Feedback Receiver: An entity responsible for processing and acting on received feedback. This includes people such as developers and support agents, as well as automated feedback analysis tools.

Explicit Feedback: Feedback provided actively by users through direct channels (e.g., a survey form or in-app pop-up).

Implicit Feedback: Feedback gathered automatically during user interactions, often inferred from behavior (e.g., sensor data, system logs).

Dynamic Feedback Scenario: A high-pressure situation where users attempt to give feedback under time pressure, environmental constraints, system instability, or high cognitive load.

Static (Traditional) Feedback Scenario: A calm and stable situation where users can provide detailed feedback without time pressure or system disruption, typically after task completion.

Dynamic Feedback: Feedback provided under our defined dynamic feedback scenarios. Often limited in quality or completeness due to the user's situation.

Static (Traditional) Feedback: Common feedback provided in static feedback scenarios (e.g., post-use surveys). Usually more complete and structured.

User Feedback Reference Model (UFRM): A four-layer conceptual model developed in this thesis to improve feedback management in dynamic scenarios by supporting detection, collection, processing, and response.

Design-Time: The stage where feedback mechanisms are defined and configured before real-time use, based on expected dynamic conditions.

Run-Time: The stage during which the system operates in real conditions and dynamically selects and activates suitable feedback mechanisms.

Feedback Mechanism: A predefined way to collect feedback, with different properties or attributes, such as a voice message, text form, or automatic log, chosen based on user context and scenario.

Active Feedback Mechanism: A feedback mechanism that has been selected and activated during run-time to match the user's current condition.
Feedback Data Package: A structured group of feedback entries, including user input and contextual metadata, grouped by a certain sender and aggregated together for processing.

Feedback Processor: The entity in UFRM responsible for cleaning, categorizing, prioritizing, and preparing feedback data for action.

Closed Feedback Loop: A complete cycle in which user feedback is collected, processed, and followed by a visible response or action, ensuring users see the impact of their input.

Processing Feedback: A systematic approach for receivers to handle user feedback, including cleaning, categorizing, prioritizing, analyzing, and acting on the feedback to improve a software system.

Contents

List of Figures
List of Tables
1 Introduction
  1.1 Feedback Challenges in Dynamic Scenarios
  1.2 Research Objectives and Questions
  1.3 Research Methods
  1.4 The User Feedback Reference Model (UFRM)
    1.4.1 Design Goals of UFRM
    1.4.2 Core Structure: Four-Layer Architecture
2 Background and Related Works
  2.1 Introduction To User Feedback For Software Products Under Dynamic Scenarios
  2.2 Understanding The Participants In The Feedback Loop
    2.2.1 Feedback Receivers
    2.2.2 Feedback Senders
      2.2.2.1 The Impact of Submission Cost and Mechanism Complexity On Engagement
      2.2.2.2 The Impact Of Psychological Factors and Technical Level on Feedback Quality
      2.2.2.3 The Impact of Feedback Channels on Feedback Management
  2.3 Existing User Feedback Management Approaches
    2.3.1 Traditional Text-Based Feedback Management
    2.3.2 Automated Feedback Processing
    2.3.3 Data-Driven Feedback Management
  2.4 Main Challenges of The Existing User Feedback Management Systems
    2.4.1 Variety in Quality of Feedback Data
    2.4.2 Difficulty in Categorizing and Prioritizing Feedback in Dynamic Scenarios
    2.4.3 Inaccessibility of Feedback During Dynamic Conditions
    2.4.4 Lack of Closed Feedback Loop and User Trust Mechanisms
  2.5 Related Research Methods
    2.5.1 Semi-Structured Interviews
    2.5.2 Thematic Analysis and Model Conceptualization
    2.5.3 Use Case-Based Demonstration and Evaluation
3 Research Methods
  3.1 Data Collection
    3.1.1 Selection Strategy
      3.1.1.1 Feedback Receivers
      3.1.1.2 Feedback Senders
    3.1.2 Design Interview Questions
      3.1.2.1 Feedback Receivers
      3.1.2.2 Feedback Senders
    3.1.3 Conduct Interviews
      3.1.3.1 Feedback Receivers
      3.1.3.2 Feedback Senders
  3.2 Data Analysis
    3.2.1 Step 1 - Transcription, Familiarization With the Data, and Selection of Quotations
    3.2.2 Step 2 - Selection of Keywords
    3.2.3 Step 3 - Coding
    3.2.4 Step 4 - Theme Development and Analysis
    3.2.5 Step 5 - Conceptualization of Core Layers Based on Themes
    3.2.6 Step 6 - Development of Conceptual UFRM
  3.3 Model Demonstration and Evaluation - Use Case
4 Results
  4.1 From Transcripts to Codes
  4.2 Thematic Analysis with Interview Results
    4.2.1 Theme 1: Dynamic Scenarios Characteristics
    4.2.2 Theme 2: Feedback Collection in Dynamic Scenarios
    4.2.3 Theme 3: Feedback Motivation in Dynamic Scenarios
    4.2.4 Theme 4: Feedback Internal Processing and Workflow in Dynamic Scenarios
    4.2.5 Theme 5: Feedback Quality Limitations in Dynamic Scenarios
    4.2.6 Theme 6: Feedback Follow-up and Response in Dynamic Scenarios
  4.3 Insights From Thematic Analysis Results
    4.3.1 Identifying and Handling Feedback Barriers in Dynamic Situations
    4.3.2 Making Feedback Sending Easy and Timely for Feedback Senders
    4.3.3 Improving Feedback Clarity and Streamlining Internal Processing
    4.3.4 Building Trust Through Meaningful and Timely Responses
  4.4 UFRM Design
    4.4.1 Process Flow in UFRM
    4.4.2 Entity Classes in UFRM
    4.4.3 Class Calls in UFRM Layers
  4.5 UFRM Validation
    4.5.1 Demonstrate Use Case 1: Lost Navigation During a Trip
    4.5.2 Demonstrate Use Case 2: Auto-Pilot Failure on Highway
    4.5.3 Demonstrate Use Case 3: Interrupted Medical Video Call
    4.5.4 Evaluate UFRM Design Goal
5 Discussion
  5.1 Key Differences Between UFRM and Traditional Feedback Management Systems
    5.1.1 Real-Time Handling Across Feedback Management Stages
    5.1.2 Automatic and Context-Driven Decision Making
    5.1.3 Sensitivity to Dynamic Scenarios
    5.1.4 Cross-Layer Integration and Adaptive Flow
  5.2 Answers to the Research Questions
  5.3 Threats to Validity
  5.4 Informed Consent
  5.5 Usage of Generative AI in This Thesis
  5.6 Future Works
  5.7 Conclusion
Bibliography
A Interviews with feedback receivers
  A.1 Demographic Information
  A.2 User Feedback Collection
  A.3 Feedback Processing & Analysis
  A.4 Dynamic Scenarios
  A.5 Closing Questions
B Interviews with feedback senders
  B.1 General Experience with Giving Feedback
  B.2 Feedback in Dynamic Scenarios
  B.3 About You (Demographic Information)
C Entity Classes and Their Associations in UFRM

List of Figures
3.1 Overview of The Whole Research Process
3.2 Demographic Information of Interviewed Feedback Receivers
3.3 Demographic Information of Interviewed Feedback Senders
3.4 Six Steps Conceptual Framework Development Process
3.5 An Example of Six-Steps Thematic Analysis to Develop a Conceptual Model
4.1 Word Cloud from Receivers Interview Results
4.2 Word Cloud from Senders Interview Results
4.3 From Keywords and Codes to Themes
4.4 Thematic Analysis Framework: Tree of Themes and Sub-Themes
4.5 UFRM - Process Flow Diagram
4.6 UFRM - UML Class Diagram
4.7 UFRM - Class Calls in Four Layers
4.8 UML Class Diagram Demonstration for Use Case 1
4.9 UML Class Diagram Demonstration for Use Case 2
4.10 UML Class Diagram Demonstration for Use Case 3
C.1 Entity Classes in UFRM
C.2 Associated Classes and Lists with Entity Classes

List of Tables
1.1 Research Questions and Corresponding Objectives
2.1 Comparison of Existing User Feedback Management Approaches

1 Introduction

User Feedback plays an important role in software engineering, offering direct insight into how software is perceived and experienced by its users. It enables software development teams to discover and resolve software defects, enhance usability, and guide the continuous evolution of systems [1] [2]. As software systems grow in complexity and user needs evolve rapidly, it becomes increasingly important to develop effective mechanisms for collecting, processing, and utilizing User Feedback [3].
In calm and stable Static Feedback Scenarios, users can provide clear and detailed Static Feedback, such as bug reports or feature requests, through formal and explicit channels such as surveys or support forms [4]. However, these traditional methods often fail in Dynamic Feedback Scenarios, where users face time pressure, system instability, or environmental constraints. Such scenarios commonly occur in domains like self-driving vehicles, real-time logistics systems, or emergency healthcare platforms [5]. In such high-pressure contexts, users, as Feedback Senders, often lack the time, face system stability constraints, or lack the cognitive capacity to submit timely and meaningful input [6]. For example, during a sudden navigation failure in an autonomous driving system, the user may have only a few seconds to respond, which makes formal reporting impractical. As a result, valuable feedback is frequently lost or submitted too late to be actionable.

To overcome these limitations, recent studies emphasize the importance of adaptive models that can support real-time and context-sensitive feedback collection and processing in unstable environments [6]. In response, this study proposes the User Feedback Reference Model (UFRM), a structured four-layer conceptual model for managing User Feedback in Dynamic Feedback Scenarios. UFRM is designed to enable context-aware detection, efficient collection, internal processing, and meaningful feedback responses, even when users are under time pressure or experiencing cognitive load.

1.1 Feedback Challenges in Dynamic Scenarios

Within our scope, User Feedback can generally be categorized into two types: Static Feedback and Dynamic Feedback. Static Feedback is typically provided in Static Feedback Scenarios, i.e., the calm, post-task contexts where users have the time and mental capacity to generate detailed and structured input [7]. In contrast, Dynamic Feedback emerges in Dynamic Feedback Scenarios, where, as we found in our interviews, Feedback Senders usually face conditions such as Time Pressure, Environmental Constraints, System Instability, or High Cognitive Load. These scenarios introduce significant challenges for feedback submission, internal handling, and system response. We define the following four core characteristics of Dynamic Feedback Scenarios (see the sketch after this list):

• Time Pressure: Users face time pressure to submit feedback, such as reporting an issue or giving a quick rating, because the value of feedback decreases rapidly over time. Feedback must be submitted and processed as fast as possible to remain effective. For example, when an error occurs in a navigation app while driving, the user has only a brief moment to report the problem before it becomes irrelevant.

• Environment Constraints: Environmental factors such as unstable network signals and physical mobility limitations hinder feedback submission. It is difficult for users to provide immediate feedback in areas with poor network conditions or in high-mobility situations such as driving.

• System Instability: System instability issues, such as crashes and technical failures, can be a major challenge in dynamic feedback scenarios. Instability reduces user trust and makes users feel that submitting feedback, such as bug reports or incident descriptions, to an unreliable system is pointless, reducing their willingness to submit it. In addition, system failures can interrupt the submission process, preventing users from completing their input.

• Task-Focused Workflows: Users often experience high cognitive load and mental pressure while engaged in complex tasks, multitasking, or an uninterruptible process, making it difficult for them to submit feedback actively. In situations like driving, surgery, or customer service, they may delay or skip feedback, leading to missing or late responses.
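To make these characteristics concrete, the following minimal sketch (ours, not part of UFRM itself; all names, signals, and thresholds are hypothetical) shows how the four characteristics could be represented as flags derived from runtime context:

```python
from dataclasses import dataclass
from enum import Enum, auto

class DynamicFactor(Enum):
    """The four characteristics of dynamic feedback scenarios."""
    TIME_PRESSURE = auto()
    ENVIRONMENT_CONSTRAINTS = auto()
    SYSTEM_INSTABILITY = auto()
    TASK_FOCUSED_WORKFLOW = auto()

@dataclass
class SenderContext:
    """Hypothetical runtime signals available when feedback is about to be given."""
    seconds_available: float   # estimated time before the moment is lost
    network_quality: float     # 0.0 (offline) .. 1.0 (stable)
    recent_crash_count: int    # crashes/failures in the current session
    active_task_load: float    # 0.0 (idle) .. 1.0 (fully task-focused, e.g., driving)

def detect_dynamic_factors(ctx: SenderContext) -> set[DynamicFactor]:
    """Map raw context signals to scenario characteristics.
    Thresholds are illustrative placeholders, not values from this thesis."""
    factors: set[DynamicFactor] = set()
    if ctx.seconds_available < 10:
        factors.add(DynamicFactor.TIME_PRESSURE)
    if ctx.network_quality < 0.3:
        factors.add(DynamicFactor.ENVIRONMENT_CONSTRAINTS)
    if ctx.recent_crash_count > 0:
        factors.add(DynamicFactor.SYSTEM_INSTABILITY)
    if ctx.active_task_load > 0.7:
        factors.add(DynamicFactor.TASK_FOCUSED_WORKFLOW)
    return factors

# A navigation error while driving: little time, busy user, flaky network.
print(detect_dynamic_factors(SenderContext(5.0, 0.2, 0, 0.9)))
```

In the example call, one incident triggers time pressure, environment constraints, and a task-focused workflow at once, illustrating how these characteristics typically compound in practice.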
These characteristics collectively result in Dynamic Feedback that is often vague, incomplete, or absent, thereby weakening the Closed Feedback Loop between sender and receiver. To address these challenges, we propose the User Feedback Reference Model (UFRM), which is structured around six key design goals: (1) context awareness, (2) low sender effort, (3) improved feedback quality, (4) efficient organizational processing, (5) a closed feedback loop, and (6) support for both feedback senders and receivers. These goals were derived from our thematic analysis of interviews and aim to address the practical limitations of existing feedback handling methods in dynamic scenarios (see Section 1.4.1 for full details).

1.2 Research Objectives and Questions

Based on the challenges identified above and the limitations of existing feedback management methods, this research aims to design and validate the User Feedback Reference Model (UFRM) to better manage feedback in Dynamic Feedback Scenarios. The study is guided by the following objectives and research questions:

Table 1.1: Research Questions and Corresponding Objectives

OBJ1: Develop a User Feedback Reference Model (UFRM) for dynamic scenarios.
  RQ1: What are the key components and characteristics of the User Feedback Reference Model (UFRM) for collecting and processing user feedback in dynamic scenarios? (Section 4.4)

OBJ1.1: Understand the main challenges and contextual characteristics influencing feedback in dynamic scenarios.
  RQ1.1: What are the characteristics of dynamic scenarios? (Section 4.2)

OBJ1.2: Identify the feedback senders' preferences and challenges to ensure the model supports their needs.
  RQ1.2: What are the preferences and challenges of feedback senders for providing feedback in dynamic scenarios? (Sections 4.2, 4.3)

OBJ1.3: Understand feedback receivers' needs for internal processing and decision-making.
  RQ1.3: What are the preferences and challenges of feedback receivers for processing user feedback in dynamic scenarios? (Sections 4.2, 4.3)

OBJ2: Validate UFRM in real-world-inspired scenarios.
  RQ2: To what extent is the proposed User Feedback Reference Model (UFRM) applicable in real-world dynamic scenarios? (Section 4.5)

1.3 Research Methods

This study followed a qualitative research approach to explore and address the challenges of user feedback in dynamic scenarios. We combined three research methods to guide the development and validation of the User Feedback Reference Model (UFRM).

First, we conducted two sets of semi-structured interviews [8], one with feedback receivers (e.g., developers, product managers) and one with feedback senders (end users). These interviews provided in-depth information about real feedback experiences, challenges, and preferences in dynamic environments.

Second, we applied thematic analysis methodologies [9] and the approach designed by M. Naeem [10] to analyze the interview data and extract key themes. These themes formed the conceptual foundation for the UFRM, which we designed as a four-layer model to support real-time, context-aware feedback handling in dynamic situations.
Third, we applied a validation strategy derived from the Design Science Research Methodology (DSRM) [11], using use case-based demonstration and evaluation. We tested the UFRM through three real-world-inspired use cases to simulate how the model functions across its four layers. This helped us assess whether UFRM achieves its intended design goals and effectively supports feedback management in dynamic scenarios.

1.4 The User Feedback Reference Model (UFRM)

The User Feedback Reference Model (UFRM) is a structured conceptual model designed to help software systems manage user feedback more effectively in dynamic scenarios. The model is flexible, supports real-time feedback handling, and works with both explicit (manual) and implicit (automatic) feedback sources. In addition to improving the feedback experience for feedback senders, UFRM also acts as a guideline for feedback receivers to better organize, analyze, and respond to incoming feedback, even in unstable or fast-changing environments. By following UFRM, teams can make more informed design and development decisions, leading to better software adaptation and satisfaction on both sides.

1.4.1 Design Goals of UFRM

UFRM was designed with several key goals in mind, based on insights from our thematic analysis of interviews with feedback senders and receivers:

• Context awareness: The model should detect a dynamic situation and adapt feedback handling to the context of the feedback sender.

• Low sender effort: It should reduce the cognitive load required of the feedback sender to submit input, especially during stressful or time-sensitive tasks.

• Improved feedback quality: The model aims to reduce vague or delayed input from senders by capturing contextual metadata and supporting structured feedback.

• Efficient organizational processing: UFRM supports feedback receivers by enabling prioritization, classification, and internal routing to process input more effectively.

• Closed feedback loop: The model ensures that feedback senders receive meaningful and timely responses, which helps feedback receivers build trust and maintain user engagement.

• Support for both senders and receivers: UFRM is explicitly designed to support both feedback senders (those who provide input) and feedback receivers (those who process it), ensuring utility and usability for both roles.

These goals align with the research objectives presented in Section 1.2 and form the foundation for the model structure and its validation in later chapters.

1.4.2 Core Structure: Four-Layer Architecture

UFRM is built around a four-layer architecture that reflects the full lifecycle of user feedback in dynamic scenarios. Each layer corresponds to key themes identified in our analysis and operates within the system through specific entities and processes (see Section 4.4). A minimal sketch of the four layers follows the list:

1. Dynamic Scenario Identification Layer: Detects when feedback is likely to be submitted under dynamic conditions such as time pressure or system instability. It activates the appropriate mechanism based on scenario type and the context of the sender.

2. Feedback Collection Layer: Enables submission from the sender using context-aware mechanisms such as voice input, one-tap actions, or passive data triggers.

3. Feedback Processing Layer: Assists receivers in classifying, prioritizing, and routing the feedback to ensure it is actionable and clear.

4. Feedback Response Layer: Provides appropriate responses from the receiver back to the sender, maintaining a closed loop and supporting continuous improvement.
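To illustrate how the four layers fit together, the following sketch wires one feedback event through identification, collection, processing, and response. It is illustrative only: UFRM is defined in this thesis as a conceptual model, and all class names, rules, and priority values here are our hypothetical simplifications, not part of the model itself.

```python
from dataclasses import dataclass, field
from typing import Any

@dataclass
class Feedback:
    """A single feedback entry plus the contextual metadata attached to it."""
    content: str
    mechanism: str                      # e.g., "voice", "one_tap", "text_form"
    context: dict[str, Any] = field(default_factory=dict)
    priority: int = 0

class ScenarioIdentificationLayer:
    def is_dynamic(self, context: dict[str, Any]) -> bool:
        # Placeholder rule: any flagged pressure signal marks the scenario dynamic.
        return any(context.get(k) for k in ("time_pressure", "instability", "task_load"))

class CollectionLayer:
    def collect(self, content: str, dynamic: bool, context: dict[str, Any]) -> Feedback:
        # In a dynamic scenario, fall back to a low-effort mechanism.
        mechanism = "one_tap" if dynamic else "text_form"
        return Feedback(content, mechanism, context)

class ProcessingLayer:
    def process(self, fb: Feedback) -> Feedback:
        # Toy prioritization: instability-related feedback is ranked highest.
        fb.priority = 2 if fb.context.get("instability") else 1
        return fb

class ResponseLayer:
    def respond(self, fb: Feedback) -> str:
        # Closing the loop: the sender always receives an acknowledgment.
        return f"Thanks! Your report (priority {fb.priority}) was routed for review."

# Wiring the four layers together for one feedback event.
ctx = {"time_pressure": True, "instability": True}
dynamic = ScenarioIdentificationLayer().is_dynamic(ctx)
fb = CollectionLayer().collect("App froze during reroute", dynamic, ctx)
fb = ProcessingLayer().process(fb)
print(ResponseLayer().respond(fb))
```

The point of the sketch is the flow, not the rules: each layer consumes what the previous one produced, and the sender always receives a response, closing the loop.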
2 Background and Related Works

This chapter explores the meaning of user feedback, user feedback management approaches, and their challenges through a literature review. It provides the theoretical foundation and background support for the proposed User Feedback Reference Model (UFRM) by analyzing and summarizing existing research findings.

2.1 Introduction To User Feedback For Software Products Under Dynamic Scenarios

User feedback refers to the opinions and experience information that end users convey to developers through comments, ratings, or problem reports when using software products [12]. This feedback not only includes specific evaluations of software functions but may also involve difficulties encountered by Senders during use or suggestions for future functions [13]. In the software development process, user feedback serves as an important bridge between users and developers, and empirical studies indicate it significantly contributes to requirements gathering and software quality improvements [14]. As software systems grow in complexity, the importance of user feedback in guiding the continuous evolution of products becomes even more pronounced.

Research further shows that user feedback profoundly impacts the optimization of user experience. By systematically analyzing a large volume of user feedback, Receivers can uncover unmet Sender needs and even anticipate emerging requirements [15]. For example, an online education platform may find through user feedback that video loading speed has become a problem, and thus optimize server performance to improve the course experience.

The importance of user feedback is also reflected in its guiding role in Receivers' decision-making [3]. User feedback offers real-world insights from the Senders' perspective, helping Receivers adjust their technical choices and prioritize features accordingly [16]. For example, in a social media application, frequent user complaints may prompt the Receiver to re-evaluate the rationality of the push algorithm. This user-centric design concept has become the basis of many successful software products.

In addition, user feedback is also considered an important support for software market competitiveness. Prior studies indicate that effectively using user feedback helps Receivers rapidly adapt to market needs and improve product success [14]. For example, an e-commerce platform optimized its payment process through feedback and significantly improved its user conversion rate. This dynamic process from feedback to optimization reflects feedback's core position in the software life cycle.

2.2 Understanding The Participants In The Feedback Loop

2.2.1 Feedback Receivers

For feedback receivers, such as developers, product managers, and data analysts, managing user feedback is a complex and time-consuming engineering effort [17] that requires filtering out irrelevant, duplicated, noisy, and otherwise useless feedback from massive information flows to extract the points that can guide optimizing product features, fixing technical defects, and improving the overall user experience. Currently, manual triage of the usability and performance of features in developed software is still a common approach adopted by many receivers [18].
Developers rely on years of experience and intuition to read comments or reports submitted by Senders one by one and determine which feedback deserves priority attention [3]. However, this method is inadequate when faced with a large user base and diverse feedback. Not only is the processing efficiency low, but it is also easily affected by personal subjective bias [19]. For example, previous studies suggest that developers focusing on backend issues may prioritize crash-related feedback and sometimes ignore interface-related suggestions, which can lead to uneven resource allocation [13] [20]. Especially in mobile applications, user feedback is often fragmented and lacks contextual information, making manual screening less effective for supporting fast development cycles [17].

To overcome the shortcomings of manual triage, automated text analysis has gradually become a widely adopted management method. By introducing Natural Language Processing (NLP) technology, a system can classify user feedback, analyze its sentiment, and model its topics, thereby significantly improving processing efficiency [5]. For example, the makers of video editing software may use automated tools to identify that "long rendering time" is the most common Sender complaint, and then optimize the algorithm to shorten the processing time. This method can also discover hidden thematic patterns and provide new inspiration for product design [15]. For example, through thematic analysis, feedback Receivers may find that feedback Senders have an increasing demand for multilingual support, thereby promoting the development of internationalization features. This technological advancement is particularly good at processing unstructured text (such as emotional comments or short sentences), reducing the manual burden and providing a feasible solution for large-scale feedback management.

At the same time, implicit usage data can effectively reveal Sender behavior patterns and potential UX issues. Tracking user clicks, browsing behavior, and page dwell time can help developers infer usability issues and interface intuitiveness, enabling proactive UI optimizations that better meet Sender expectations [21]. For example, a fitness app team may observe that Senders spend too little time on a certain function, infer that its UI design is not intuitive enough, and fix the UI accordingly. This combination of multi-dimensional data not only helps developers identify surface problems but also exposes deeper Sender behavior patterns, providing data support for product strategy adjustments.
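As a toy illustration of the NLP-based triage described above (not the tooling used in any cited study; the data and labels are invented), a few lines of scikit-learn suffice to route feedback into bug reports versus feature requests:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: feedback text labeled as bug report or feature request.
texts = [
    "app crashes when I open settings", "rendering takes forever and then freezes",
    "login fails every time on wifi",   "screen goes black after the update",
    "please add multilingual support",  "would love a dark mode option",
    "add export to PDF",                "support offline maps please",
]
labels = ["bug", "bug", "bug", "bug", "feature", "feature", "feature", "feature"]

# TF-IDF features feeding a linear classifier: a standard baseline for text triage.
classifier = make_pipeline(TfidfVectorizer(), LogisticRegression())
classifier.fit(texts, labels)

# New, unseen feedback is routed automatically instead of being read one by one.
print(classifier.predict(["the app freezes during video rendering",
                          "could you add voice feedback support"]))
```

A production system would train on far more data and add sentiment and topic models, but the routing principle is the same.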
2.2.2 Feedback Senders

Feedback senders, i.e., end users, can be constrained by multiple factors when submitting feedback in dynamic software scenarios, and their motivations differ significantly from those in traditional static environments [22]. Studies have shown that Senders are more likely to provide feedback when they face major functional failures or highly emotional experiences, while routine or minor issues are often overlooked [23]. For example, in a real-time navigation application, if positioning errors cause Senders to get lost, they may immediately file a complaint. Yet if the interface buttons are slightly inconvenient, Senders often choose to adapt rather than provide feedback. This selective behavior is especially prominent in contexts such as smart home systems, where Senders may manually adjust settings after device response delays instead of reporting them to the developer. The dynamic feedback scenarios defined in Chapter 1, including time sensitivity, environmental uncertainty, and task-focused workflows, can further exacerbate this trend. Therefore, it is important to understand these dynamic motivations to optimize feedback collection mechanisms.

2.2.2.1 The Impact of Submission Cost and Mechanism Complexity On Engagement

In highly dynamic feedback scenarios, Senders often find it difficult to devote much energy or time to writing detailed feedback reports. Cumbersome reporting processes, such as requiring manual entry of detailed information, therefore reduce Senders' willingness to submit feedback. For example, if mobile Senders are required to log in, upload screenshots, and provide system logs manually, they may skip submitting feedback in urgent situations. Prior studies have shown that complex feedback submission processes significantly reduce Senders' willingness to report issues [24]. In contrast, offering a one-click feedback entry or pre-filled options and automatically capturing device context can greatly shorten submission time, enabling Senders to provide feedback with minimal effort. In this way, the Receiver can learn about real faults and improvement requirements in dynamic scenarios in a more timely manner, without missing critical usage data.

At the same time, some researchers have suggested combining a simple UI with an automated capture mechanism to improve the depth of feedback. For example, when faults such as crashes or freezes occur, modern crash-reporting tools can automatically generate detailed bug reports, including crash logs, system state, and steps to reproduce the issue. Senders only need minimal interaction to verify and submit these auto-generated reports, significantly reducing the manual effort involved [25]. In addition, if the operation log is automatically collected in the background, it not only reduces the Sender's operating pressure in extreme situations but also provides more complete context information for subsequent analysis. After adopting such mechanisms, Senders' willingness to submit feedback improves significantly even in time-sensitive dynamic scenarios (such as using software while working or during outdoor activities). In short, submission cost and mechanism complexity strongly affect Senders' willingness to participate in dynamic scenarios, and combining simplified processes with automatic recording is necessary to achieve more efficient feedback collection.
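The following sketch illustrates the one-tap-plus-automatic-capture idea in its simplest form. All field names are hypothetical; a real crash reporter would capture far richer state, such as crash logs and app version:

```python
import json
import platform
import time

def capture_context() -> dict:
    """Automatically gather context the Sender would otherwise have to type in."""
    return {
        "timestamp": time.time(),
        "os": platform.system(),
        "python": platform.python_version(),
    }

def one_tap_report(issue_label: str) -> str:
    """A one-tap submission: the Sender picks a label; everything else is auto-filled."""
    report = {"issue": issue_label, "context": capture_context(), "source": "one_tap"}
    return json.dumps(report)

# The Sender taps "app froze"; no form to fill, no manual log upload.
print(one_tap_report("app froze"))
```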
2.2.2.2 The Impact Of Psychological Factors and Technical Level on Feedback Quality

The feedback behavior of Senders in dynamic situations is not only limited by time and environment but is also closely related to Senders' psychological factors and technical level [5]. When Senders have negative expectations of the Receivers, they may choose not to provide feedback even if they encounter serious problems, thinking that "no one will deal with it anyway." Similarly, some novice or ordinary Senders can only provide vague and scattered descriptions due to their limited understanding of the system structure, making it difficult to accurately locate a fault or explain the key points of a requirement. In contrast, Senders with more professional backgrounds can provide more complete technical information or reproduction steps, which significantly improves the efficiency of subsequent debugging and improvement. Therefore, when collecting Sender opinions, we need to fully consider the differences in cognition and skills across groups, and provide corresponding guidance and templates so that technically disadvantaged groups are not ignored and the deep insights offered by professional users are not missed.

In addition, the emotions and motivations of the feedback giver also affect the depth and accuracy of the feedback [24]. If the Sender becomes emotional during an emergency, the content they submit may be overly negative and lack objective detail. When there are no major pain points or benefit drivers, Senders tend to ignore smaller bugs or areas for improvement, and improvement opportunities are lost. At the same time, when the system raises privacy or security concerns, some Senders are unwilling to provide real account information or operation logs, resulting in a lack of key diagnostic data.

2.2.2.3 The Impact of Feedback Channels on Feedback Management

In dynamic scenarios, the design and accessibility of feedback channels play a key role in shaping Senders' willingness and ability to provide feedback. The effectiveness of feedback collection in dynamic scenarios depends largely on the usability, timing, and integration of the feedback channel into the Sender experience. The literature highlights that Senders prefer channels that are simple, fast, and embedded directly into their ongoing workflow. For instance, one-tap ratings, brief comment boxes, screenshots, or voice inputs allow Senders to quickly express their opinions without interrupting their tasks. These low-effort mechanisms are particularly important in time-sensitive or high-cognitive-load environments, such as navigating, multitasking, or interacting with critical systems. When feedback submission involves complex steps or long forms, Senders often choose to skip it, especially when under pressure or focused on other goals [26] [27].

Studies also emphasize that context-aware and well-integrated feedback channels, such as in-app pop-ups triggered by system events or errors, can improve engagement by offering relevant moments for input. Additionally, multi-modal options, such as voice messages or annotated screenshots, are more suitable for mobile and dynamic use cases than traditional typing. The PORTNEUF framework, for example, stresses the importance of proactive and continuous feedback collection, tailored to Sender experience contexts [28]. Moreover, feedback channels that provide visible outcomes, such as update logs, acknowledgments, or responses, enhance Sender motivation by reinforcing the idea that their input matters.
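A minimal sketch of such an event-triggered channel follows; the event names and prompt texts are invented for illustration. The prompt appears only at moments when input is relevant, rather than interrupting the Sender at an arbitrary time:

```python
from typing import Callable

# Map of system events to lightweight, context-appropriate prompts.
PROMPTS: dict[str, str] = {
    "reroute_failed": "Did the reroute work for you? [thumbs up/down]",
    "video_call_dropped": "Call dropped. Tap to send a quick voice note about it.",
}

def on_system_event(event: str, show_prompt: Callable[[str], None]) -> None:
    """Trigger a feedback prompt only for events worth asking about."""
    if event in PROMPTS:
        show_prompt(PROMPTS[event])

on_system_event("reroute_failed", print)      # prompts the Sender
on_system_event("route_recalculated", print)  # silently ignored
```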
2.3 Existing User Feedback Management Approaches

To better explain why we propose the UFRM, it is important to first look at existing user feedback management methods and understand what they can and cannot do. Over the years, different approaches have been used to collect and handle feedback from users, including manual text reading, automated text analysis, and more recent data-driven methods that combine user input with behavior data.

Table 2.1 compares these three common approaches. It shows their main features, how they work, and what challenges they face, especially in dynamic software environments. This comparison helps to show why current methods are not enough for handling real-time, fast-changing feedback situations. These gaps form the starting point for this study, which aims to build a better model for feedback collection and response in dynamic scenarios.

Table 2.1: Comparison of Existing User Feedback Management Approaches

Feedback Source
  - Traditional Text-Based: Explicit text (e.g., emails, app reviews)
  - Automated Feedback Processing: Mostly explicit text
  - Data-Driven Feedback Management: Explicit + implicit (e.g., logs, sensor data)

Processing Method
  - Traditional Text-Based: Manual reading, tagging, and triage
  - Automated Feedback Processing: NLP-based classification, sentiment, and topic modeling
  - Data-Driven Feedback Management: Multi-source fusion combining user input with behavioral analytics

Timeliness
  - Traditional Text-Based: Often delayed and post-task
  - Automated Feedback Processing: Faster than manual, usually offline
  - Data-Driven Feedback Management: Supports near real-time insight with streaming data

Scalability
  - Traditional Text-Based: Low (manual effort limits coverage)
  - Automated Feedback Processing: Medium to high, depending on model performance
  - Data-Driven Feedback Management: High when infrastructure is in place

Accuracy
  - Traditional Text-Based: Varies based on reviewer expertise
  - Automated Feedback Processing: Affected by dataset bias, lack of context, language variation
  - Data-Driven Feedback Management: High potential with richer context, but vulnerable to noisy signals

Limitations
  - Traditional Text-Based: Time-consuming, subjective bias, fragmented across platforms
  - Automated Feedback Processing: Sensitive to training data, lacks real-time capability, poor at handling ambiguity
  - Data-Driven Feedback Management: Requires large infrastructure, privacy concerns, difficult filtering of irrelevant data

Typical Use Scenarios
  - Traditional Text-Based: Post-task bug reporting, support forms
  - Automated Feedback Processing: Sentiment flagging, issue trend detection
  - Data-Driven Feedback Management: Real-time usage monitoring, predictive issue detection

2.3.1 Traditional Text-Based Feedback Management

Early Receivers mainly relied on text-based user feedback management methods, such as app store reviews, email support, user forums, and issue-tracking systems. These methods provide developers with direct channels to collect Senders' real experiences and improve software products accordingly [29]. Since this approach is intuitive and easy to implement, and Senders can describe their problems and needs in free text, it has long been the core of user feedback management. However, with the evolution of software development environments, especially in dynamic software scenarios, traditional text-based feedback methods have gradually exposed multiple limitations.

First, feedback lag is a key issue. Senders usually do not submit feedback at the moment they encounter a problem, but review the problem and fill out a report after a long time interval [30]. This delay makes it difficult for Receivers to accurately reconstruct the context of the problem, reducing the efficiency of problem localization [15]. Second, the unstructured nature of text feedback increases the difficulty of information processing. Developers often need to manually screen, classify, and analyze text submitted by Senders, which not only consumes significant human resources but is also easily affected by subjective judgment, resulting in inconsistent feedback handling [5]. Moreover, user feedback is often scattered across multiple platforms (e.g., app stores, social media, forums), forcing Receivers to integrate and compare information from diverse sources. This fragmentation greatly increases the complexity of feedback management, especially in dynamic scenarios where software changes rapidly. In such cases, the fast pace of evolution demands timely processing of feedback, which traditional text-based methods struggle to achieve, highlighting the need for shorter feedback loops in practice [31].
2.3.2 Automated Feedback Processing

With the development of machine learning and Natural Language Processing technology, automated feedback processing has gradually become an important means of improving the efficiency of user feedback management [5] [32]. These methods mainly rely on techniques such as text classification, sentiment analysis, and topic modeling to process and analyze large amounts of user feedback automatically [33]. In dynamic software scenarios, automated analysis of user feedback (e.g., app reviews) can identify defect reports, feature requests, and usability issues, enabling developers to detect problems and improve the software more quickly [34]. In addition, sentiment analysis techniques can automatically flag negative feedback, allowing Receivers to address the most critical Sender complaints first [13]. Overall, automated feedback processing reduces manual effort, improves handling efficiency, and enables Receivers to respond to Sender needs more quickly [33].

Although automated feedback processing methods have made significant progress compared to traditional methods, they still have limitations in dynamic software environments. First, these methods are highly dependent on the quality of the training dataset, while real-world user feedback often contains typos, slang, and ambiguities that can reduce automated analysis accuracy [33]. Second, textual user feedback often lacks sufficient context for accurate NLP interpretation. Empirical evidence from app store analyses shows that user feedback is frequently brief, incomplete, or lacking clear context, making automated classification and analysis challenging [35]. In addition, many automated systems still rely on offline processing and cannot meet the real-time feedback management needs of dynamic software environments. For example, in highly interactive software systems such as smart homes and autonomous driving, developers need to be able to identify user experience problems quickly. However, existing automated techniques still struggle to support real-time feedback needs in highly interactive, time-critical systems, due to constraints like processing latency and context availability [36].

2.3.3 Data-Driven Feedback Management

Data-driven user feedback management methods have received increasing attention in recent years. Unlike traditional approaches or purely automated text analysis, data-driven feedback methods combine explicit feedback (actively submitted by Senders) with implicit feedback (usage behavior data) to provide richer insights [30]. Data-driven feedback loops, such as continuously collecting usage analytics alongside explicit Sender input, can generate more accurate findings. Recent continuous software engineering roadmaps highlight data-driven user feedback integration as a key area for research and practice [37]. For instance, in autonomous driving systems, vehicle sensor data can reflect driver behavior patterns, allowing for proactive optimization by combining sensor data with user feedback [30]. Moreover, multimodal feedback fusion (e.g., combining textual reports with usage logs and screenshots) helps anticipate issues and enables developers to act proactively [30]. These methods offer richer and more precise feedback, which better supports product improvement in dynamic environments.
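As a toy example of this kind of fusion (the data is invented; real systems would join far richer streams), an explicit text report can be enriched with the implicit log events recorded around the time it was submitted:

```python
from bisect import bisect_left

# Invented data: implicit usage events (timestamp, event) and one explicit report.
usage_log = [(10.0, "route_started"), (42.5, "gps_lost"), (43.1, "reroute_failed"),
             (60.0, "route_resumed")]
explicit_report = {"t": 45.0, "text": "navigation suddenly stopped working"}

def events_near(log, t, window=5.0):
    """Return the implicit events recorded within `window` seconds of time t."""
    times = [ts for ts, _ in log]
    i = bisect_left(times, t - window)
    return [e for ts, e in log[i:] if ts <= t + window]

# The vague text report is enriched with the sensor/log events around it.
explicit_report["context_events"] = events_near(usage_log, explicit_report["t"])
print(explicit_report)  # includes gps_lost and reroute_failed as context
```

Here the otherwise vague report "navigation suddenly stopped working" arrives with the GPS loss and failed reroute attached, which is exactly the extra context that distinguishes data-driven pipelines from purely textual ones.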
However, data-driven methods also face several challenges. First, privacy protection and data compliance are major obstacles: during behavior data collection, regulations like the GDPR must be followed to ensure data security and legality [38]. Second, as implicit feedback often includes large amounts of irrelevant data, effective screening and extraction remain core research issues [39]. In addition, processing massive feedback data in real time requires substantial computing resources, which can limit deployment, especially on mobile or low-power IoT platforms [36]. Therefore, although data-driven methods offer promising directions, further research is needed in privacy, filtering, and resource optimization.

In summary, user feedback management has gradually evolved from traditional manual text-based feedback collection (e.g., app reviews, support forms), to automated NLP-based feedback processing (e.g., classification, sentiment analysis) [20], and now to more advanced data-driven approaches that integrate multiple sources such as behavioral logs, sensor data, and explicit reports [39]. Despite these improvements, each method still has limitations in dynamic software environments [40]. Traditional methods lack real-time responsiveness, automated methods often suffer from contextual ambiguity and data bias, and data-driven methods demand high computation and strong privacy guarantees. Therefore, the UFRM proposed in this study aims to support feedback management across all key stages: collection, processing, and response. It does so by integrating multiple complementary mechanisms to overcome existing limitations and improve support for dynamic feedback scenarios.

2.4 Main Challenges of The Existing User Feedback Management Systems

2.4.1 Variety in Quality of Feedback Data

In dynamic scenarios, when Senders submit feedback in an emergency, distracted, or time-sensitive situation, the integrity and standardization of the feedback content are seriously challenged. Some Senders leave only a brief and vague description in the interface, such as "stuck" or "cannot be turned on", lacking key context such as reproduction steps, device information, or network environment, making it difficult for developers to locate the root cause of the fault [41].

Emotional or colloquial expressions are also quite common, with added spelling errors, internet slang, or incoherent phrases, which reduces the accuracy of automated parsing and clustering. This is a particular problem for Receivers that intend to build machine learning models to identify problem types: without text cleaning and noise filtering planned at an early stage, they face a large amount of noisy feedback in the dataset, making it difficult for clustering results or sentiment analysis to truly reflect Sender pain points, and to some extent hindering overall product evolution.

In addition, multi-channel parallel collection further increases the inconsistency of feedback quality. Feedback from different channels varies in format and content. For example, social media posts may be long but irrelevant, app store comments are often short and emotional, and chat-based reports are usually informal and fragmented [42]. This variation makes it hard to understand the full context of a problem and increases the effort needed to merge and compare data from multiple sources.
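A minimal sketch of cross-channel duplicate flagging follows. It is illustrative only: production pipelines typically use semantic embeddings rather than the plain string similarity used here.

```python
from difflib import SequenceMatcher

def normalize(text: str) -> str:
    """Lowercase and collapse whitespace so formatting noise does not hide matches."""
    return " ".join(text.lower().split())

def likely_duplicates(reports: list[str], threshold: float = 0.8) -> list[tuple[str, str]]:
    """Flag report pairs whose normalized text is highly similar."""
    pairs = []
    for i, a in enumerate(reports):
        for b in reports[i + 1:]:
            if SequenceMatcher(None, normalize(a), normalize(b)).ratio() >= threshold:
                pairs.append((a, b))
    return pairs

# Invented reports from different channels describing the same freeze.
reports = ["App is stuck on loading screen",
           "app is STUCK on the loading screen",
           "Cannot log in with my account"]
print(likely_duplicates(reports))  # flags the first two as one issue
```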
In dynamic software environments, these issues become more serious. Users may report the same issue through different platforms using inconsistent descriptions, making it difficult to identify duplicates or maintain consistency. Studies show that duplicate, incomplete, or low-quality feedback is common when feedback is collected from diverse sources without standardization [43]. Without proper processing, the same issue might be recorded multiple times, wasting developer resources. Therefore, improving feedback quality requires better collection methods on the front end and more reliable data cleaning and merging techniques on the back end.

2.4.2 Difficulty in Categorizing and Prioritizing Feedback in Dynamic Scenarios

As mentioned in Section 2.3.1, in dynamic scenarios Receivers often receive a large amount of feedback in a variety of formats. This information often appears simultaneously in multiple channels such as app stores, forums, and chat tools [31]. Because the sources are mixed, there is much duplicate and conflicting content, and manual verification and noise elimination become particularly laborious. Relying only on the subjective judgment of developers risks overlooking seemingly rare but serious problems while wasting energy on large numbers of similar but low-value improvement requests. Much feedback consists of simple complaints or emotional comments, and the high-quality reports that contain reproduction steps or device information get buried, making subsequent analysis more time-consuming.

In addition, Receivers often lack automated classification and merging strategies, making it difficult to aggregate similar faults in a timely manner [28]. Even where scripts can identify keywords and perform preliminary grouping, many misses and misjudgments still occur due to semantic ambiguity or spelling errors, so core bugs or requirements do not receive due attention. At the same time, developers need to balance the competing demands of basic defect repair and new feature launches. Even when some bugs are found to have a large impact, they may be temporarily shelved due to resource shortages, leaving Senders frustrated and questioning the efficiency of the Receiver.

2.4.3 Inaccessibility of Feedback During Dynamic Conditions

In dynamic environments, feedback senders may encounter technical issues, such as unstable systems, or environmental constraints, such as poor network connectivity or physical mobility limits. These factors may discourage them from submitting feedback even if they are motivated to do so [44]. In these cases, traditional feedback systems are often not applicable because they are usually designed for stable desktop environments and lack support for timely or context-aware input [45]. When feedback tools do not integrate smoothly into the sender's workflow, the receiver may lose the opportunity to receive meaningful reports. If feedback cannot be submitted quickly and with minimal disruption, many senders abandon the intention of providing feedback [46].
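One common way to bridge this accessibility gap, sketched below purely for illustration (the class and its behavior are our invention, not a mechanism prescribed by the cited studies), is a store-and-forward queue that buffers feedback locally until connectivity returns:

```python
import queue

class OfflineFeedbackQueue:
    """Store-and-forward: feedback is buffered locally while connectivity is poor
    and flushed once the network returns, so the Sender never has to retry."""
    def __init__(self) -> None:
        self._pending: queue.Queue[str] = queue.Queue()

    def submit(self, feedback: str, online: bool) -> None:
        if online:
            self._send(feedback)
        else:
            self._pending.put(feedback)  # keep it instead of losing it

    def on_reconnect(self) -> None:
        while not self._pending.empty():
            self._send(self._pending.get())

    def _send(self, feedback: str) -> None:
        print(f"sent: {feedback}")  # stand-in for a real upload call

q = OfflineFeedbackQueue()
q.submit("GPS lost in tunnel", online=False)  # buffered locally
q.on_reconnect()                              # delivered once back online
```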
2.4.4 Lack of Closed Feedback Loop and User Trust Mechanisms

Some feedback systems do not offer a clear feedback loop that informs Senders how their input is processed or whether it led to any action. Research shows that visible and timely responses, such as status messages, public updates, or simple acknowledgments, help maintain user engagement and increase trust in the system [45]. However, many systems still lack these features or rely only on automatic email replies, which are often overlooked. In dynamic environments, where feedback is usually submitted under urgent needs, users expect faster confirmation and clearer outcomes. Without confirmation or follow-up, feedback senders may feel uncertain about the value of their contribution, which can reduce trust and lower their willingness to provide feedback again [6].

2.5 Related Research Methods

In this section, we review three core research methods that formed the basis of our study: semi-structured interviews, thematic analysis, and use case-based demonstration and evaluation. These methods were selected after a review of key literature that supports their use in qualitative, exploratory, and design-oriented research.

2.5.1 Semi-Structured Interviews

Semi-structured interviews are a widely used qualitative data collection method that combines the consistency of structured interviews with the flexibility of open conversations. According to the SAGE Encyclopedia of Social Science Research Methods [8], this method involves pre-determined guiding questions while still allowing space for follow-up questions and exploration of emerging topics. This balance makes semi-structured interviews particularly effective for studying complex human behaviors, attitudes, and perceptions. A key strength of this method lies in its adaptability across disciplines such as education, healthcare, and information systems. It enables researchers to maintain a consistent framework while still capturing diverse and unanticipated insights from participants. Unlike structured interviews, which may limit the depth of responses, semi-structured interviews invite elaboration and storytelling, which often reveal nuanced motivations or problems that researchers did not initially anticipate.

Palinkas et al. [47] emphasize the importance of purposeful selection when conducting semi-structured interviews in applied research contexts. This selection strategy focuses on information-rich cases: individuals who are especially knowledgeable about the topic. The goal is not statistical generalizability but rather in-depth understanding. In addition, Fusch and Ness [48] introduce the concept of data saturation, a critical benchmark in qualitative interviewing. Data saturation occurs when no new themes or insights emerge from continued interviews. Achieving saturation ensures data adequacy and supports the trustworthiness of qualitative findings.

2.5.2 Thematic Analysis and Model Conceptualization

Thematic analysis is a foundational technique in qualitative research, used for identifying, analyzing, and interpreting patterns or "themes" within qualitative data. Braun and Clarke [9] define it as a flexible yet structured approach that allows researchers to move from unorganized text to meaningful insights. This method is not bound to any particular theoretical framework, which makes it accessible for researchers from different backgrounds and disciplines.

In our study, thematic analysis was used to analyze interview data from both feedback senders and receivers. We followed the first four steps in the six-step model proposed by Naeem et al. [10] for conducting thematic analysis: (1) transcription of data, (2) keyword identification, (3) systematic coding, and (4) grouping of codes into themes. These steps helped us identify patterns in the data and organize them into coherent themes.
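As a toy illustration of steps 3 and 4 (the quotations, codes, and groupings below are invented for illustration, not our actual interview data), coded quotations can be rolled up into candidate themes as follows:

```python
from collections import defaultdict

# Invented fragments of coded interview data: (quotation, code) pairs.
coded_quotes = [
    ("I just tapped one star, no time to type", "low-effort input"),
    ("the form was too long while I was driving", "submission cost"),
    ("nobody ever replied to my report", "missing follow-up"),
    ("a quick thanks would be enough", "acknowledgment need"),
]

# Codes grouped into candidate themes (step 4 of the six-step process).
code_to_theme = {
    "low-effort input": "Feedback Collection in Dynamic Scenarios",
    "submission cost": "Feedback Collection in Dynamic Scenarios",
    "missing follow-up": "Feedback Follow-up and Response",
    "acknowledgment need": "Feedback Follow-up and Response",
}

# Roll the coded quotations up under their themes, keeping the evidence trail.
themes: dict[str, list[str]] = defaultdict(list)
for quote, code in coded_quotes:
    themes[code_to_theme[code]].append(f"[{code}] {quote}")

for theme, evidence in themes.items():
    print(theme, evidence, sep="\n  ")
```

The real analysis, of course, is interpretive rather than mechanical; the sketch only shows the code-to-theme bookkeeping.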
This combined approach supports both inductive and deductive reasoning [9]. In the inductive mode, themes emerge directly from the data without a predefined coding structure. In contrast, the deductive approach uses a coding frame informed by theory or existing literature. Steps 5 and 6 in Naeem et al.'s model go beyond traditional thematic analysis and focus on conceptual model development. In our study, we used these additional steps to interpret themes in relation to the research questions and to construct the UFRM framework. Both sources emphasize that thematic analysis is especially suitable for exploring how people make sense of their experiences, particularly in under-researched or evolving domains. It transforms qualitative narratives into structured insights while retaining the richness of the original data.

2.5.3 Use Case-Based Demonstration and Evaluation

The Design Science Research Methodology (DSRM) developed by Peffers et al. [11] provides a structured approach to designing and validating artifacts in information systems research. One of the core principles of DSRM is that any proposed solution must be evaluated for its effectiveness, utility, and applicability. The authors outline a six-step process: problem identification, objective definition, design and development, demonstration, evaluation, and communication. The demonstration and evaluation phases are particularly relevant for conceptual model validation.

During the demonstration phase, the artifact, such as a framework, process, or diagram, is applied to a practical use case or scenario to show how it functions in context. This approach does not require real-world deployment or user testing but instead uses realistic, well-defined examples to simulate application. Peffers et al. refer to these as use case applications, which are ideal for demonstrating the potential of an artifact in a safe, controlled environment. The subsequent evaluation phase involves assessing whether the artifact meets its intended design goals. This can include measures such as usefulness, accuracy, completeness, and alignment with stakeholder needs. Evaluation methods can vary, ranging from interviews and observations to comparisons against benchmarks, but the goal is always to assess how well the solution solves the original problem.

3 Research Methods

To answer the RQs listed in Section 1.2, this chapter presents the research methods employed to create and validate the User Feedback Reference Model (UFRM) for managing user feedback in dynamic scenarios. It is structured into three key sections:
• 3.1 Data Collection: details the interview process.
• 3.2 Data Analysis: covers thematic analysis and model construction.
• 3.3 Model Demonstration and Evaluation: presents the validation of UFRM.
An overview of the research steps and their relation to the research objectives and questions is illustrated in Figure 3.1.

Figure 3.1: Overview of The Whole Research Process

3.1 Data Collection

This section describes in detail how data collection was carried out in our study. We aim to explore user feedback management in dynamic scenarios through semi-structured interviews [8] to provide an empirical basis for the development of the UFRM. Our data collection focuses on two key participant groups: Feedback Receivers (such as developers and product managers) and Feedback Senders (i.e., end users), representing the providing and receiving sides of feedback management.
We adopted a purposive sampling technique [47] and recruited 10 Feedback Receivers and 30 Feedback Senders. Considering the difficulty of recruitment and interviews, we interviewed Receivers first, which also provided preliminary insights for designing and implementing the questions for interviewing Senders.

3.1.1 Selection Strategy

This section explains how we selected participants using purposive sampling strategies for two groups: Feedback Receivers and Feedback Senders. Our goal was to ensure relevance to dynamic software scenarios, participant diversity, and thematic saturation.

3.1.1.1 Feedback Receivers

Feedback Receivers were selected from professionals with experience in managing user feedback systems in time-sensitive and complex environments. We contacted over 100 candidates through professional networks (e.g., LinkedIn, forums) and enterprise partnerships, and conducted interviews with 10 qualified participants between February 5 and March 5, 2025.

The participants represented five industry sectors: 50% worked in automotive and autonomous driving, while the remainder came from logistics, navigation, telecom, and medical software. Their roles included product managers (5), developers (4), and a data analyst (1). Most had more than 10 years of experience and were highly experienced in processing feedback (Figure 3.2).
• Feedback Experience: Selection prioritized professionals familiar with fast-changing systems such as real-time navigation and emergency coordination. This aligns with the study's goal to understand feedback in dynamic environments (OBJ1).
• Data Saturation: Following qualitative research standards [48], new insights began to repeat after the 8th interview, and saturation was confirmed at the 10th.
• Anonymity: Participants' recordings were anonymized using codes (e.g., R01–R10) and securely stored for research use only.
Recruitment was challenged by data privacy concerns and limited availability. Flexible scheduling and secure interview formats (e.g., Zoom meetings) were used to address this.

Figure 3.2: Demographic Information of Interviewed Feedback Receivers

3.1.1.2 Feedback Senders

We recruited 30 participants between March 10 and March 29, 2025, using open calls via social media, university mailing lists, and personal networks. Senders were selected based on experience using feedback features in mobile or web apps under dynamic conditions such as navigation, e-commerce, or emergency services.

As shown in Figure 3.3, the participants varied across application contexts: 43% shared e-commerce experience, while others came from education, social media, transport, and medical apps. Most were aged 30-50, with a majority being female. Their digital proficiency ranged from beginner to highly proficient.
• Feedback Experience: Participants who had previously given feedback under time pressure or unstable conditions were prioritized. These conditions reflect the typical dynamic scenarios the study aims to support (OBJ1).
• Data Saturation: After 30 interviews, no major new insights emerged, confirming that thematic saturation was reached [48].
• Anonymity: All responses were anonymized using sender IDs (S01–S30), with audio stored securely for transcription and coding.
Although recruitment was easier than for Receivers, we still addressed concerns about data use and scheduling by offering flexible online formats and informed consent procedures.
Figure 3.3: Demographic Information of Interviewed Feedback Senders

3.1.2 Design Interview Questions

3.1.2.1 Feedback Receivers

To meet all our goals under OBJ1, we designed 24 questions for Feedback Receivers, divided into five parts (see Appendix A), focusing on the challenges Receivers face in feedback collection, processing, and analysis, their special needs, their preferences in dynamic scenarios, and their improvement suggestions. We designed the questions to be as simple as possible and without overlaps, so that interviewees of all proficiency levels could understand them without frustration.

3.1.2.2 Feedback Senders

Aiming to answer all research questions under RQ1 and achieve our research objective OBJ1, to understand the challenges and preferences of feedback Senders in dynamic scenarios, we designed 14 questions for Feedback Senders, divided into three parts (see Appendix B), covering general feedback experience, motivations and barriers, behavior in dynamic scenarios, and improvement suggestions. We designed non-redundant questions with minimum complexity so that they would be understandable for all of our diverse interviewees.

3.1.3 Conduct Interviews

Before the formal interviews, we conducted pilot studies with our supervisor and two volunteer friends with relevant experience, to evaluate the clarity, relevance, and structural soundness of the interview questions. This is a key step in qualitative research: it ensures that the questions are well designed, easy to understand, and effective at eliciting good responses. Through the pilot studies, we aimed to identify and revise questions that might cause ambiguity or appear redundant, to avoid confusing participants. We also aimed to verify that the questions effectively covered the research objectives and that the interview content was consistent with the research direction. In addition, we paid attention to how the number and complexity of questions affected participants' cognitive burden, to prevent fatigue during the interview. We confirmed that all questions could be understood and answered smoothly by the interviewees without obvious obstacles.

The pilot interviews showed that some questions needed adjustment, for example by simplifying the wording and reordering questions to improve overall clarity and flow. After these modifications, the interview guide was more concise, clearly structured, and easier to answer in the subsequent formal interviews. All interviews were recorded, transcribed using Microsoft Word 365's automatic transcription tool, manually verified, and stored anonymously.

3.1.3.1 Feedback Receivers

We interviewed 10 Feedback Receivers, with each session lasting 60-80 minutes, conducted online (via Zoom) or in person (at Chalmers University). Before starting, we explained the study's purpose, obtained informed consent, and assured anonymity. One researcher led the interview, while the other recorded audio and took notes. We used guiding questions but encouraged free expression to capture detailed insights.

3.1.3.2 Feedback Senders

We interviewed 30 Feedback Senders, with each session lasting 5-15 minutes. Because the questions were simpler, we used both formal interviews (semi-structured questions, full recordings) and informal interviews (flexible questions, brief notes).
Interviews were conducted via Zoom, phone calls, or in person to fit participants' schedules. To reduce bias from leading questions, we increased the number of interviews to gather more data and richer insights.

3.2 Data Analysis

We used Thematic Analysis as the main method to analyze our interview results from the Receivers and Senders. This approach is widely used for identifying, analyzing, and reporting patterns (themes) in qualitative data [9]. It offers a flexible but structured way to interpret rich Sender input and is especially suitable for understanding complex topics such as user feedback management in dynamic software scenarios. We followed the process for developing a conceptual model through thematic analysis described by Naeem et al. [10]. This process helps turn interview data into clear themes and insights. Figure 3.4 shows the steps we used to organize and analyze the data, which later helped us build the UFRM framework.

Figure 3.4: Six-Step Conceptual Framework Development Process

We applied Steps 1-3 of the thematic analysis process separately to the interview data collected from Feedback Senders and Feedback Receivers. This allowed us to capture distinct perspectives from each group during transcription, keyword selection, and coding. We then combined the results from both groups to perform Steps 4-6, integrating the insights to develop themes, conceptualize relationships, and build the UFRM framework. We first present a detailed textual explanation of the six steps below; Figure 3.5, presented later, shows how we conducted the six-step thematic analysis in our research process.

3.2.1 Step 1 - Transcription, Familiarization With the Data, and Selection of Quotations

In the first step, we transcribed the interview recordings into text format using the automatic transcription tool in Microsoft Word 365, chosen for its high accuracy and seamless integration with our text processing workflow, which reduced post-editing time; it was also easily accessible through our university account. We then made manual corrections to ensure accuracy. To protect participants' privacy and comply with ethical standards, we removed personal identifiers (e.g., names, company details) and eliminated off-topic or distracting content. We also manually normalized the data by extracting key concepts, removing duplicates, and standardizing expressions to ensure consistency.

Next, we read each transcript several times to become familiar with the content and context. Through this repeated reading, we noted emerging patterns, such as recurring phrases related to time pressure or task urgency in dynamic environments. We highlighted segments that contained meaningful insights or illustrated challenges relevant to feedback under pressure. During this process, we selected representative quotes that aligned with our research objectives. These quotations were chosen based on their clarity, relevance to dynamic feedback scenarios, and ability to reflect key issues expressed by participants. For example, quotes describing in-task interruptions, stressful use conditions, or urgent decision-making were prioritized to ensure the themes were grounded in concrete user experience.
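To make this cleanup step concrete, the following minimal Python sketch illustrates how identifier redaction and text normalization of this kind could be automated. It is only an illustration under assumed conventions: the regular expressions, helper names, and sample line are hypothetical, and the anonymization in this study was performed manually as described above.

```python
import re

# Hypothetical identifier patterns; the manual redaction in this study was
# broader, also covering names, company details, and off-topic content.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\+?\d[\d\s-]{7,}\d"),
}

def redact(transcript: str) -> str:
    """Replace obvious personal identifiers with placeholder tags."""
    for label, pattern in PATTERNS.items():
        transcript = pattern.sub(f"[{label.upper()} REMOVED]", transcript)
    return transcript

def normalize(transcript: str) -> str:
    """Collapse whitespace and standardize quotation marks."""
    transcript = re.sub(r"\s+", " ", transcript).strip()
    return transcript.replace("\u201c", '"').replace("\u201d", '"')

if __name__ == "__main__":
    raw = "S05:  reach me at +46 70 123 4567 or s05@example.com  about the crash."
    print(normalize(redact(raw)))
    # -> S05: reach me at [PHONE REMOVED] or [EMAIL REMOVED] about the crash.
```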
3.2.2 Step 2 - Selection of Keywords

In the second step, we manually extracted keywords from the transcripts into Excel to capture the core concepts and experiences of participants in dynamic scenarios. We reviewed the data to identify recurring terms and meaningful patterns, such as "time pressure," "urgency," and "fast," which feedback senders frequently mentioned because of their situational importance [10]. Keywords were selected based on their frequency, relevance to the research objectives, and ability to reflect participants' real experiences, ensuring that they were derived directly from the data. This process prepared the data for the coding stage.

3.2.3 Step 3 - Coding

In the third step, we coded the data by assigning concise labels that summarize related keywords, transforming them into actionable units for analysis. Coding was performed manually in Excel, with iterative refinement between the two researchers until consensus was reached. Using a combination of inductive (data-driven) and deductive (research-driven) methods, we created codes such as time constraint and environmental limitation to capture key feedback management issues in dynamic scenarios. These codes were derived from the keywords identified earlier, ensuring alignment with the research objectives. As shown in Figure 3.5, keywords such as "time pressure" and "bad signal" were grouped into codes like "Time Constraint" and "Network Instability," which later contributed to themes like "Dynamic Scenarios Characteristics." We then clustered related codes to explore connections, preparing for theme development in the next step. For Steps 1-3, simple examples of how we derived keywords from word clouds, and how keywords and codes became themes, can be seen in Chapter 4 (Section 4.1).

3.2.4 Step 4 - Theme Development and Analysis

The fourth step is to integrate the codes into themes and identify patterns and relations in the data to answer the research questions. Themes are higher-level summaries of the codes; they not only reflect the elements extracted from the data but also constitute meaningful explanatory concepts. After completing the keyword extraction and coding, we merged codes with similar meanings and derived representative themes and sub-themes through repeated comparison and verification.

The theme identification process was guided by the research questions and by the patterns, views, and situations that recurred in the participant interviews. For example, several interviewees mentioned that it was difficult to provide effective feedback when "time was tight" and the "environment was complex". We classified this type of code under the theme "Contextual Challenges" and further refined it into sub-themes such as "Time Pressure" and "Environment and System Instability". The development process was primarily inductive, supplemented by some theoretical guidance to ensure that the themes were both grounded in the data and theoretically meaningful [9]. In total, we identified 6 main themes, which cover the key aspects of user feedback behavior and reflect the main challenges and preferences of feedback senders and receivers in dynamic scenarios. The detailed results of Step 4 can be seen in Chapter 4 (Section 4.2).
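As a simplified illustration of Steps 2-4, the Python sketch below counts keyword occurrences and rolls them up into codes and themes. The mapping entries mirror the examples above ("time pressure" and "bad signal" feeding the codes "Time Constraint" and "Network Instability" under the theme "Dynamic Scenarios Characteristics"); all other names, including the function and sample sentence, are hypothetical, and the actual coding was a manual, iterative process in Excel.

```python
from collections import Counter, defaultdict

# Keyword -> code and code -> theme lookups. Only the entries mirroring the
# examples in the text are grounded; a real codebook would be far larger.
KEYWORD_TO_CODE = {
    "time pressure": "Time Constraint",
    "urgency": "Time Constraint",
    "bad signal": "Network Instability",
    "no connection": "Network Instability",
}
CODE_TO_THEME = {
    "Time Constraint": "Dynamic Scenarios Characteristics",
    "Network Instability": "Dynamic Scenarios Characteristics",
}

def code_transcript(transcript: str) -> dict:
    """Count keyword hits, then aggregate them into codes grouped by theme."""
    text = transcript.lower()
    code_counts = Counter()
    for keyword, code in KEYWORD_TO_CODE.items():
        hits = text.count(keyword)
        if hits:
            code_counts[code] += hits
    themes = defaultdict(Counter)
    for code, count in code_counts.items():
        themes[CODE_TO_THEME[code]][code] = count
    return themes

if __name__ == "__main__":
    sample = ("I skip feedback under time pressure, and with a bad signal "
              "I cannot send it anyway.")
    for theme, codes in code_transcript(sample).items():
        print(theme, dict(codes))
    # -> Dynamic Scenarios Characteristics {'Time Constraint': 1, 'Network Instability': 1}
```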
Figure 3.5: An Example of the Six-Step Thematic Analysis to Develop a Conceptual Model

3.2.5 Step 5 - Conceptualization of Core Layers Based on Themes

After completing the thematic analysis, we proceeded to the fifth step: conceptualization. This step transforms the six identified themes and their sub-themes into a higher-level structure that reflects the user feedback lifecycle in dynamic scenarios. By comparing and interpreting the themes, we found that they naturally correspond to four functional stages of feedback processing. These four stages formed the conceptual backbone of the UFRM.

Theme 1 (Dynamic Scenario Characteristics) became the Dynamic Scenario Identification Layer, which captures the external and psychological conditions under which feedback is generated. Themes 2 and 3 (Feedback Motivation and Collection) were grouped into the Feedback Collection Layer, highlighting Sender drivers and methods for submitting feedback. Themes 4 and 5 (Internal Processing and Feedback Limitations) informed the Feedback Processing Layer, focusing on how feedback is classified and handled internally. Theme 6 (Feedback Response) shaped the Feedback Response Layer, ensuring user feedback is acknowledged and the loop is closed.

The results of Step 5, discussed in Section 4.3, represent the output of this conceptualization step. They show how user experiences and challenges were translated into design ideas, which directly support the construction of the User Feedback Reference Model.

3.2.6 Step 6 - Development of Conceptual UFRM

With the conceptual structure established, we moved to the sixth step: building the UFRM itself. The UFRM integrates the findings from Step 5 into a complete design artifact. To operationalize the UFRM, we developed two key components:
• A Process Flow Diagram that illustrates how feedback flows across the four layers (see Section 4.4.1).
• A UML Class Diagram that defines the data structure, entities, and their relationships (see Section 4.4.2).
These two parts turn our themes into a clear and practical model. The process diagram shows how feedback moves from detection to response, while the class model ensures each entity is explicitly defined. The detailed results of Step 6 can be seen in Chapter 4 (Section 4.4).

3.3 Model Demonstration and Evaluation - Use Case

To evaluate the validity and applicability of the proposed model (UFRM), we adopted one of the recommended evaluation approaches from the Design Science Research Methodology (DSRM) proposed by Peffers et al. [11]. According to their framework, validating design artifacts can be carried out in two complementary stages:
• Demonstration: showing how the model functions in realistic situations by applying it to practical use cases.
• Evaluation: assessing whether the model effectively addresses the original problem and meets its intended design goals.
Following this approach, we validated our model using three real-world-inspired use cases: smart navigation, autonomous driving, and digital healthcare. These use cases represent typical dynamic situations where Senders need to give feedback quickly but often find it hard to do so. For the demonstration stage, we applied our two core model artifacts, the UML Class Diagram and the Process Flow Diagram (see Sections 4.4.1 and 4.4.2), to each use case. We mapped each scenario step by step to the four layers of UFRM: (L1) detecting the feedback situation, (L2) collecting feedback, (L3) processing it, and (L4) generating responses.
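To give a flavor of this step-by-step mapping, the Python sketch below traces a hypothetical smart-navigation event through the four layers. The layer names follow the model, while the UseCaseStep class and the individual actions are illustrative assumptions rather than the thesis artifacts, which are defined by the diagrams in Sections 4.4.1 and 4.4.2.

```python
from dataclasses import dataclass
from enum import Enum

class Layer(Enum):
    """The four UFRM layers used in the use-case walkthroughs."""
    SCENARIO_IDENTIFICATION = "L1"
    FEEDBACK_COLLECTION = "L2"
    FEEDBACK_PROCESSING = "L3"
    FEEDBACK_RESPONSE = "L4"

@dataclass
class UseCaseStep:
    layer: Layer
    action: str

# A hypothetical trace for the smart-navigation use case.
navigation_trace = [
    UseCaseStep(Layer.SCENARIO_IDENTIFICATION, "detect rerouting under time pressure"),
    UseCaseStep(Layer.FEEDBACK_COLLECTION, "offer one-tap voice feedback"),
    UseCaseStep(Layer.FEEDBACK_PROCESSING, "classify the report as a routing bug"),
    UseCaseStep(Layer.FEEDBACK_RESPONSE, "acknowledge receipt to the driver"),
]

for step in navigation_trace:
    print(f"{step.layer.value} {step.layer.name}: {step.action}")
```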
This mapping helped us visualize how the model works in action under time pressure or system instability. In the evaluation stage, we assessed whether the model achieved its design objectives, such as enabling quick, easy, and adaptive feedback interactions in dynamic conditions. These evaluation points were based on our research goals defined in Section 1.4. The results of this validation are discussed further in Section 4.4 and reflected on in Chapter 5.

4 Results

4.1 From Transcripts to Codes

This section describes how the interview data were prepared and structured for analysis. Following the six-step thematic analysis and conceptual model design approach [10], we first worked through Steps 1-3 described in Section 3.2, treating transcript familiarization, open coding, and initial theme generation as an integrated process. This phase provided the analytical foundation for the thematic structure of our study and laid the groundwork for the conceptualization of the UFRM.

We began by transcribing all interview recordings from both feedback senders and receivers. The transcripts were carefully read and annotated, allowing us to identify recurring ideas and expressions relevant to feedback in dynamic software scenarios. We then manually extracted and labeled meaningful content segments based on frequently mentioned terms, participant phrases, and topic relevance. Each code was iteratively refined by grouping semantically similar ideas together, combining participant terms and researcher-generated concepts. Sections 3.2.1 to 3.2.3 explain in detail how we manually derived codes from the transcripts.

To support the open coding process, we used word cloud visualizations as an auxiliary tool for identifying frequently mentioned terms in the interview transcripts. The visualized patterns provided a preliminary overview of common expressions used by feedback Senders and Receivers. These results helped guide the manual coding process by highlighting candidate terms and recurring linguistic patterns. Figure 4.1 and Figure 4.2 show the word clouds for each group.

Figure 4.1: Word Cloud from Receivers Interview Results

Figure 4.2: Word Cloud from Senders Interview Results

After coding, we organized the codes into larger themes based on meaning. We reviewed and refined the themes to make sure they were clear, consistent, and relevant to the study. These themes later shaped the structure of the UFRM model. Figure 4.3 shows how we moved from raw keywords and codes to final themes.

Figure 4.3: From Keywords and Codes to Themes

Related codes were then grouped into larger thematic patterns based on their semantic meaning. These initial themes were refined to ensure coherence and coverage, forming the basis for our model design.
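The word clouds in Figures 4.1 and 4.2 are, in essence, visualizations of term frequency. The following standard-library Python sketch shows how the frequencies underlying such a cloud can be computed; the stop-word list and sample text are hypothetical, and this sketch is not the tool used in the study.

```python
import re
from collections import Counter

# A tiny illustrative stop-word list; a real one would be much longer.
STOP_WORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "it", "is", "i"}

def term_frequencies(text: str, top_n: int = 10) -> list[tuple[str, int]]:
    """Tokenize, drop stop words, and count terms; a word cloud scales
    each word's font size by counts like these."""
    tokens = re.findall(r"[a-z']+", text.lower())
    counts = Counter(t for t in tokens if t not in STOP_WORDS)
    return counts.most_common(top_n)

sample = ("Feedback under time pressure is hard; time pressure and bad signal "
          "make feedback slow.")
print(term_frequencies(sample))
# -> [('feedback', 2), ('time', 2), ('pressure', 2), ...]
```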
4.2 Thematic Analysis with Interview Results

To identify and structure the core insights from our studies, in Step 4 (Theme Development and Analysis) we clustered the codes into sub-themes and major themes that reflected the challenges and preferences that emerged when handling feedback in dynamic scenarios. Through this process we refined our findings into 6 major themes and 24 sub-themes, then identified 4 layers based on these themes to describe the feedback processing stages, as shown in Figure 4.4.

Figure 4.4: Thematic Analysis Framework: Tree of Themes and Sub-Themes

Layer 1: Dynamic Scenario Identification
This foundational layer originates from Theme 1 (Dynamic Scenarios Characteristics), which emphasizes external and internal conditions like time pressure, system instability, and cognitive load that impact feedback behavior. This layer supports the system's ability to recognize when feedback is happening under dynamic constraints.

Layer 2: Feedback Collection
Derived from Themes 2 and 3 (Feedback Collection Methods and Feedback Motivation), this layer explains how feedback is initiated, including considerations of trigger timing, data formats, user factors, etc. The mechanism for capturing Sender input is adapted to Senders' states and environmental factors, combining both explicit and implicit feedback collection methods.

Layer 3: Feedback Processing
Themes 4 and 5 (Internal Processing and Quality Limitations) informed the third layer, which handles user feedback through classification, prioritization, routing, etc. It ensures that Receivers establish efficient workflows that balance automation and manual review of dynamic cases.

Layer 4: Feedback Response
Theme 6 (Feedback Follow-up and Response) defines the last layer, where the system closes the loop by responding to Senders. This includes personalized responses, transparency about feedback impact, and continued engagement, all of which are crucial in dynamic contexts to maintain trust and motivation for Senders.

In the following subsections, we present real examples of original quotes from interviews with Senders (e.g., S1, S2) and Receivers (e.g., R1, R2) to illustrate how we extracted sub-themes and themes from their statements and identified the key challenges faced by both sides. The interview questions were carefully designed to place participants within dynamic scenarios, allowing their responses to reflect the specific difficulties that arise in such contexts. The exact questions corresponding to these quoted responses can be found in Appendix A (for Receivers) and Appendix B (for Senders).

4.2.1 Theme 1: Dynamic Scenarios Characteristics

From what we learned in our interviews, dynamic scenarios are complicated by Time Pressure, Environmental Constraints, System Instability, and High Cognitive Loads, which hinder Senders from submitting feedback. Time pressure causes Senders to postpone feedback, while environmental constraints, such as network problems, increase submission difficulty. System instability reduces trust, and a high cognitive load makes the primary task take precedence over feedback, requiring the design of fast and non-intrusive systems.

Sub-theme 1.1: Time Pressure
S2: "In such high-stress, time-sensitive moments, I usually postpone feedback or ignore it."
S4: "I won't give feedback right then but if it's frequently happening, I will email them afterward."
R7: "Users may forget details or provide incomplete reports due to time pressure and urgent tasks."

Time pressure in dynamic scenarios significantly hinders feedback submission. Senders prioritize urgent primary activities (such as navigation or medical emergencies) over feedback, resulting in delayed or ignored feedback and reduced quality. Developers recognize that this not only affects the timeliness of feedback but may also result in incomplete content, making it more difficult to identify problems. Designs need to allow for delayed feedback to ensure that Senders provide input when it does not affect the task.
By simplifying the feedback process and providing immediate options, the system can reduce the negative impact of time pressure. Developers also recommend proactively prompting for feedback after the task is completed, to capture insights that might otherwise be forgotten.

Sub-theme 1.2: Environmental Constraints
S27: "If I'm in a place with a bad signal, I can't send feedback right away."
R6: "Dynamic scenarios like GPS errors require systems to handle feedback without user effort."
R8: "Dynamic scenarios like driving pose challenges as real-time user feedback is limited. Automated logs and system alerts are used instead."

Environmental constraints are also a significant barrier to feedback collection in dynamic scenarios. Unstable network signals, physical mobility restrictions, or other external factors often prevent Senders from submitting feedback instantly. For example, in remote areas or environments with weak signals, Senders may not be able to send detailed reports, or even simple notifications. In high-mobility scenarios (such as walking or driving), it may be impractical or unsafe to stop and provide feedback, affecting the timeliness and relevance of the feedback. Developers recommend systems that support offline storage and automatic logging to ensure that data is still captured when external conditions are limited. By allowing Senders' feedback to be uploaded automatically once the connection is restored, or by recording key events through background processes, the system can improve the robustness and completeness of data collection. These strategies ensure that feedback is not lost due to environmental constraints and enhance the system's adaptability.

Sub-theme 1.3: System Instability
S8: "If the system crashes, I don't bother giving feedback because it feels pointless."
R5: "End-user feedback is sometimes lost due to system glitches."

System instability undermines Sender trust and willingness to provide feedback in dynamic scenarios. Crashes or technical failures make Senders worry that feedback will not be processed, especially in emergencies. For example, application failures may interrupt feedback submission, which increases Sender frustration and reduces the likelihood of future feedback. Developers emphasize the importance of system stability and reliability, and recommend implementing strong error handling mechanisms and clear feedback receipt confirmations (such as "Your feedback has been received"). By separating the feedback collection function from the main application functions, failures in the main application can be prevented from affecting the submission of feedback. Stable system design is key to maintaining Sender engagement in dynamic scenarios.
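The offline-storage and receipt-confirmation ideas above can be combined in a very small mechanism: feedback is queued locally when an upload fails and re-sent once connectivity returns, with an explicit acknowledgment on success. The Python sketch below is only an illustration under assumed conventions; the names (submit, flush_queue, the stubbed send_to_server) are hypothetical and not part of UFRM.

```python
import json
import time
from pathlib import Path

QUEUE_FILE = Path("pending_feedback.jsonl")  # local offline store

def send_to_server(entry: dict) -> bool:
    """Stand-in for a real upload call; returns False when offline."""
    return False  # pretend the network is currently unavailable

def submit(text: str) -> str:
    """Try to send feedback; queue it locally if the upload fails."""
    entry = {"text": text, "timestamp": time.time()}
    if send_to_server(entry):
        return "Your feedback has been received."
    with QUEUE_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")
    return "Saved offline; it will be uploaded when the connection returns."

def flush_queue() -> None:
    """Re-send queued entries once connectivity is restored."""
    if not QUEUE_FILE.exists():
        return
    remaining = []
    for line in QUEUE_FILE.read_text(encoding="utf-8").splitlines():
        if not send_to_server(json.loads(line)):
            remaining.append(line)
    QUEUE_FILE.write_text("\n".join(remaining) + ("\n" if remaining else ""),
                          encoding="utf-8")

print(submit("Route recalculation froze near the tunnel."))
```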
Sub-theme 1.4: High Cognitive Loads
S10: "Lack of time, high mental load, and not wanting extra distraction..."
S25: "When I'm focused on a critical task, feedback is the last thing on my mind."
R7: "Users may forget details or provide incomplete reports due to time pressure and urgent tasks."
R9: "Feedback collection should not add to the user's cognitive load; it needs to be seamless."

High cognitive loads significantly affect feedback submission in dynamic scenarios. Senders perceive feedback as a distraction during complex tasks, such as driving or surgery, and prioritize primary activities. For example, doctors find it difficult to fill out feedback forms during emergency surgery, which leads to delayed or forgotten feedback and reduced data quality. Developers recommend non-intrusive methods, such as submission options after task completion or automated data capture, to reduce the cognitive and operational burden. By embedding feedback into the workflow, the system ensures that the feedback process is aligned with Sender priorities. Developers also recommend using context-aware prompts that trigger feedback requests based on the Sender's task status, to maximize the amount of information collected. The challenges of high cognitive load need to be addressed by designing feedback mechanisms that fit the Sender's workflow and require minimal effort or attention to complete. This includes approaches such as automated data capture, post-task feedback prompts, or context-aware feedback options that do not disrupt Senders' main task during critical activities.

4.2.2 Theme 2: Feedback Collection in Dynamic Scenarios

In dynamic scenarios, feedback collection needs to be embedded in the Sender workflow to minimize interference, and collection strategies must be seamlessly integrated. Common feedback topics include usability issues and feature requests. Multimodal formats such as voice feedback meet the needs of different Senders. Multiple feedback channels, such as in-app forms, improve convenience, and the resulting diverse inputs need to be processed efficiently.

Sub-theme 2.1: Feedback Collection Strategies
S22: "In time limitation and stress I prefer One-tap/voice input within the application"
R2: "For collecting real-time data of users I suggest Automated data logging: Instead of relying on manual input, logs automatically capture system behavior during critical events."
R3: "Using predefined short messages or topics for issues, or simplified reporting buttons which can help in collecting structured feedback and better categorizing and routing can be helpful for Senders when they are in dynamic scenarios."

In dynamic scenarios, both explicit and implicit feedback collection strategies are adopted to obtain Sender input effectively. Explicit strategies include direct Sender input, such as clicking a button or filling out a short form, while implicit strategies capture data through automatic logging or behavioral analysis. For example, a navigation app can automatically record path deviations or error message interactions, providing developers with the necessary insights without requiring additional Sender action. Combining the two improves the quality and timeliness of feedback and suits fast-paced environments. Simplified feedback methods not only increase the amount of data but also ensure contextual relevance. By optimizing these strategies, developers can enhance the efficiency of feedback management, support rapid problem identification and resolution, and improve the overall user experience.

Sub-theme 2.2: Typical Feedback Topics
R1: "In unstable situations of environment such as unstable network connection, the feedback mainly includes system errors (bugs) and feature requests. Senders usually state, 'This part of the system is not working.'"
R4: "While using autonomous systems for piloting a car as an on-the-go and unstable situation, the most common types of feedback are system bugs related to autonomous driving features. Issues such as emergency function failures, braking problems, and unexpected system behaviors are commonly reported."
R10: "Bug reports, performance complaints, desires for other features, usability issues and comparison with similar products."
S10: "I usually report bugs or inappropriate content. I provide feedback to help improve the user experience." S16: "Mostly, my feedback is about making the app or website easier to use. Since people from different backgrounds use the application, I believe it’s much better if it is user-friendly and simple." In dynamic software scenarios, user feedback focuses on several key areas that directly 32 4. Results affect system performance and user satisfaction. First, Senders frequently report system errors or bugs, especially in safety-critical applications such as autonomous driv- ing. Problems such as emergency functional failures, braking issues, or unexpected system behaviors can lead to severe consequences, underscoring the importance of prioritizing safety. Second, Senders often make feature requests or suggestions for improve- ments to existing features, reflecting their expectations for system functional enhance- ments to improve practicality. Third, usability issues are an important concern, and Senders emphasize that the interface must be intuitive and easy to use, especially to provide a simple and friendly experience for Sender groups from different backgrounds. Fourth, performance-related complaints, such as slow response time or system insta- bility, are prevalent, indicating that optimizing system efficiency is critical. In addition, Senders point out system deficiencies by comparing with similar products, providing valuable insights for competitive positioning and improvement. These feedback themes highlight the importance of balancing both functional and non-functional requirements in dynamic environments to ensure system reliability and user satisfaction. Sub-theme 2.3: Multimodal Feedback Formats R4: "Feedback is mainly received in text format via logs and structured reports. In some cases, screenshots and images are uploaded when Senders report an issue, when they go to a workshop or service point and a technical person in that point will add screenshots to the text report or logs." R9: "Feedback is received in various formats, including text, audio, screenshots, and logs. The specific format depends on the context of the feedback. For instance, sensor data such as steering angles, acceleration, and jerk are used to analyze driver reactions... using eye-tracking cameras and accelerometers to gather more detailed user behavior data." R10: "We invite the users to the truck, we make videos collecting the feedback/ From other areas like forums we just pick the text. Sometimes we pick YouTube reviews as well." S1:"Text + pictures, also rating is easy for me to give." S16: "I prefer to submit my report in writing by sending a text or email, along with screenshots and photos, to document my feedback properly." S28: "In-app rating or emojis, send voice or call with support center." In dynamic software scenarios, the multimodal formats of user feedback significantly en- hance the depth and breadth of data collection and provide diverse perspectives for system improvement. Textual feedback, such as logs, structured reports, and emails, is the most common form, supporting detailed problem description and analysis. Images and screenshots are particularly suitable for visual problem descriptions by intuitively displaying interface problems or error scenarios, reducing barriers to developer under- standing. 
Audio feedback, including voice messages and support center calls, provides a convenient way of expression in scenarios where Senders' hands are busy or they are on the move, and is suitable for quick feedback. Video feedback is particularly suitable for diagnosing complex dynamic problems, such as those in autonomous driving, by recording event sequences. Ratings and emojis are quick emotional feedback tools that allow Senders to express satisfaction or problem severity when time is tight. In safety-critical applications, sensor data (such as steering angle, acceleration, and eye tracking) provides objective behavior and system performance indicators that supplement the shortcomings of subjective feedback. The integration of these multimodal formats requires UFRM to provide flexible processing mechanisms that adapt to inputs in different formats, ensure the comprehensiveness and accuracy of feedback, and support system optimization and user experience improvement in dynamic environments.

Sub-theme 2.4: Variety of Feedback Channels
R3: "Feedback is collected mainly through support tickets, emails, live chat, and internal reporting systems. Some feedback also comes from app store reviews and website surveys."
R8: "Users provide feedback via various channels, including in-app reporting, customer service calls, and dealership reports."
R10: "Live interviews, customer service, forums."
S7: "I usually prefer to give feedback through direct customer support channels like live chat or email, especially when I need a quick resolution."
S15: "I prefer text, In-app forms, chatbots."

In dynamic software scenarios, the diversity of feedback channels ensures that Senders can submit feedback conveniently in different situations. Support tickets and emails are suitable for addressing complex issues, as they provide detailed records for easy tracking an