Chalmers Open Digital Repository
Welcome to the Chalmers open digital repository!
Here you will find:
- Student theses published at the university, both bachelor's projects and degree projects at undergraduate and master's level
- Digital special collections, such as the Chalmers modellkammare (model chamber)
- Selected project reports
Research publications, reports, and doctoral theses can be found at research.chalmers.se
Units in Chalmers ODR
Select a unit to see all of its collections.
Recently added
Designing and Developing DirectorAI: An AI Assistant for Generating Vehicle Simulation Scenarios
(2025) Telles, Adam; Raaholt Larsson, Hannes
This thesis investigates the use and potential benefits of AI in automating the generation of vehicle simulation scenarios. Focused on enhancing the usability of Director, a scripting tool for Volvo Cars’ Product Simulator software, this project involved designing and developing DirectorAI, an AI assistant featuring a chatbot interface. Using large language models, the research explored how to create effective and reliable scenarios without domain-specific model training. Moreover, the project identified limitations that emerge when deploying general-purpose language models in complex, domain-specific environments, alongside design patterns that enable these models to function effectively as assistants. The primary research question addressed was: "What design choices or patterns enable general-purpose language models to function as effective assistants in complex, domain-specific software environments without domain-specific training?" The findings demonstrated that the success of general-purpose LLMs in domain-specific environments relies less on model modification and more on how the system is designed to supply the model with relevant information. By iteratively crafting system prompts that embed domain context, constraints, and examples, DirectorAI was able to perform effectively without the need for custom training or fine-tuning. Through prototyping and user evaluation, several key design patterns were identified that enabled the assistant to support complex workflows within the existing simulation software. This research emphasized the importance of interaction design in shaping the utility and usability of AI-assisted systems. By identifying and analyzing the design choices and patterns that facilitate the effective use of general-purpose LLMs in domain-specific environments, this thesis contributes to the understanding of how AI-assisted tools can be developed for complex simulation scenarios, offering valuable insights for future applications.
Ultimately, this study demonstrated the potential of AI to significantly improve the efficiency and reliability of vehicle simulation scenarios, with implications for the automotive industry and beyond.
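The prompt-design pattern the abstract describes, embedding domain context, constraints, and examples into the system prompt rather than fine-tuning the model, can be sketched as below. All names and the Director-style scenario snippets are hypothetical illustrations, not taken from the thesis or from Volvo Cars' tooling.

```python
# Hypothetical sketch of the pattern described above: the system prompt
# carries domain context, constraints, and few-shot examples so a
# general-purpose LLM needs no domain-specific training.

DOMAIN_CONTEXT = (
    "You assist with a scripting tool for vehicle simulation scenarios. "
    "A scenario is a list of timed actor actions."
)

CONSTRAINTS = [
    "Only emit actions from the documented action set.",
    "Every scenario must start with a 'spawn' action.",
]

# (user request, expected scenario) pairs; purely illustrative syntax.
EXAMPLES = [
    ("Make a car brake after 5 seconds",
     "spawn(car1); wait(5.0); brake(car1, force=0.8)"),
]

def build_system_prompt() -> str:
    """Assemble context, constraints, and examples into one system prompt."""
    parts = [DOMAIN_CONTEXT, "Constraints:"]
    parts += [f"- {c}" for c in CONSTRAINTS]
    parts.append("Examples:")
    parts += [f"User: {q}\nScenario: {a}" for q, a in EXAMPLES]
    return "\n".join(parts)

print(build_system_prompt())
```

The assembled prompt would then be sent as the system message of a chat-completion request, with the user's free-text request as the user message.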
Using Neural Tangent Kernel metrics to measure Intrinsic Motivation in Reinforcement Learning
(2025) Jansson, Vilgot
We investigate whether the Neural Tangent Kernel (NTK) is a useful perspective for explaining why intrinsic motivation works, in an effort to apply theories from supervised learning to the reinforcement learning domain. We find inconclusive evidence suggesting that an intrinsic motivation method, the Intrinsic Curiosity Module (ICM), does in fact increase the NTK trace as a mechanism to improve performance, and we show that the NTK trace and other NTK-based metrics can be used to artificially select better training sets that decrease test loss.
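For readers unfamiliar with the metric: the trace of the empirical NTK over a batch is the sum of the squared norms of the per-example parameter gradients, tr(K) = Σᵢ ‖∇_θ f(xᵢ)‖². The sketch below, which is illustrative and not the thesis code, uses a linear model f(x) = w·x, where the gradient with respect to w is x itself, so the NTK trace reduces to the sum of squared input norms.

```python
# NTK trace sketch for the simplest possible model, f(x) = w · x.
# Here ∇_w f(x) = x, so tr(K) over a batch is sum_i ||x_i||^2.
# A real network would need autodiff; this keeps the idea visible.

def ntk_trace_linear(xs):
    """NTK trace of a linear model over a batch: sum of squared norms."""
    return sum(sum(xi * xi for xi in x) for x in xs)

batch = [(1.0, 2.0), (0.5, -0.5)]
print(ntk_trace_linear(batch))  # prints 5.5
```

Under this view, "selecting a training set that increases the NTK trace" amounts to preferring batches whose gradients have larger norm, i.e. examples the model can learn more from per step.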
Securing 5G URLLC
(2025) Sehic, Dennis; Ahmad, Talha
Fifth-generation (5G) wireless networks implement the 5G telecommunication standard. They feature a service called Ultra-Reliable Low-Latency Communication (URLLC), which requires messages to arrive in under 1 millisecond. Resource-constrained devices may not be able to implement the security algorithms commonly suggested for URLLC and would otherwise have to communicate insecurely. As it stands, the integrity of URLLC is most often protected with a Cyclic Redundancy Check, which is malleable and can be generated for any arbitrary message by an adversary. In this thesis, a number of established security algorithms are benchmarked and evaluated for use in URLLC. Results show that Ascon and ChaCha20-Poly1305 are viable authenticated encryption algorithms, SipHash-2-4 is a viable integrity algorithm, and AES-GCM can be used for authenticated encryption only if hardware acceleration is available. The algorithms were tested against one set of standard 5G encryption and integrity algorithms.
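The benchmarking pattern behind such an evaluation is simple to sketch. Ascon, ChaCha20-Poly1305, and SipHash-2-4 are not in the Python standard library, so HMAC-SHA-256 stands in here as the measured integrity algorithm; the per-message timing harness is the point, not the absolute numbers, and the 32-byte payload size is only an assumption about short URLLC messages.

```python
# Minimal per-message timing harness, with HMAC-SHA-256 as a stand-in
# for the integrity algorithms evaluated in the thesis.
import hashlib
import hmac
import time

def bench(fn, payload: bytes, iters: int = 10_000) -> float:
    """Return the mean per-message time of fn(payload) in microseconds."""
    start = time.perf_counter()
    for _ in range(iters):
        fn(payload)
    return (time.perf_counter() - start) / iters * 1e6

key = b"\x00" * 16  # fixed demo key, never do this in production

def tag(msg: bytes) -> bytes:
    """Compute an integrity tag over one message."""
    return hmac.new(key, msg, hashlib.sha256).digest()

payload = b"\x00" * 32  # assumed short URLLC-sized message
print(f"HMAC-SHA-256: {bench(tag, payload):.2f} us/msg")
```

For sub-millisecond deadlines, the per-message cost measured this way must leave headroom for transmission and processing, which is why hardware acceleration decides whether an algorithm such as AES-GCM qualifies.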
Balancing Performance and Memory
(2025) Liu, Yi
Stream processing has become a cornerstone of real-time data analytics, particularly in edge-to-cloud computing environments where timely insights are crucial. Stream Aggregates, as fundamental stateful operators, maintain windowed state to compute summaries over continuous data streams. However, in resource-constrained edge deployments, managing memory efficiently while maintaining high performance remains a significant challenge. Recent work has shown that compressing infrequently accessed window instances can reduce memory usage, but existing approaches rely on fixed, manually configured thresholds, parameters that are difficult to tune without prior knowledge of data characteristics such as reporting frequency and access patterns. Moreover, these methods often overlook the impact of different compression libraries on system behavior. To address these limitations, this thesis presents a study on adaptive memory compression for stream aggregates. We first evaluate the performance trade-offs of three widely used compression libraries (Snappy, Zstandard (Zstd), and JZlib) within a stream processing context. Our results show that each library offers a distinct balance between compression efficiency and processing overhead: Snappy provides superior throughput and latency at the cost of lower memory savings, while Zstd achieves better compression at the expense of higher CPU cost. Building on these findings, we propose a dynamic, self-tuning mechanism that automatically adjusts the compression threshold D based on runtime feedback. Instead of requiring analysts to specify a fixed D, our approach allows them to define a target range for a performance metric, such as the non-compressed/compressed (n/c) ratio, and the system adapts D online using simple adjustment rules. This enables robust and predictable compression behavior under varying workloads, without requiring expert knowledge. We implement and evaluate our approach using the Liebre stream processing engine and the Linear Road benchmark.
Experimental results demonstrate that our adaptive mechanism effectively stabilizes the n/c ratio within user-defined bounds, with negligible performance overhead compared to an optimally tuned fixed-D configuration. The dynamic strategy proves resilient to workload fluctuations and configuration resets, making it suitable for real-world, unpredictable environments.
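The self-tuning rule the abstract describes can be sketched as a simple feedback controller. The function name, the step size, and the direction of adjustment (raising D is assumed to leave more windows uncompressed and thus raise the n/c ratio) are all illustrative assumptions, not the thesis's actual adjustment rules.

```python
# Sketch of a self-tuning compression threshold: the analyst supplies
# a target range [lo, hi] for the non-compressed/compressed (n/c)
# ratio, and D is nudged until the observed ratio lands inside it.

def adapt_threshold(d: float, nc_ratio: float,
                    lo: float, hi: float, step: float = 0.1) -> float:
    """One adjustment step, assuming a larger D compresses fewer windows."""
    if nc_ratio < lo:              # too much is compressed: compress less
        return d + step
    if nc_ratio > hi:              # too little is compressed: compress more
        return max(step, d - step) # keep D positive
    return d                       # ratio inside the target range: hold D

# Example: an observed ratio of 0.4 sits below the target range
# [0.5, 0.7], so the threshold is raised.
print(adapt_threshold(1.0, 0.4, lo=0.5, hi=0.7))  # prints 1.1
```

Calling such a rule periodically from runtime statistics is what removes the need for an analyst to know a good fixed D in advance.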
Pisa - Facilitating conjecture generation in Lean
(2025) Führ, Nor; Nygren, Erik
As time has passed, the rigour needed for mathematical proofs has increased. Interactive theorem provers were introduced to help with proof verification during proof development. This project aims to aid users of one particular interactive theorem prover, Lean 4, by facilitating conjecture generation for the language. Conjecture generation can help users understand the domain they are working in, or suggest which auxiliary proofs might be needed for a more intricate proof. To facilitate conjecture generation in Lean, a tool called Pisa was introduced. It utilizes tooling from Haskell to generate conjectures. To make that Haskell tooling usable, a translation from Lean to Haskell was created that supports data type translation and includes an interpreter for functions. The translation works for enumerable, recursive, and polymorphic data types. With the translation in place, conjectures are generated and then translated back into Lean code for the user to prove. All of this is tied into a macro callable from the user's editor, so Pisa can be used while adding and modifying definitions.
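To make the workflow concrete, a generated conjecture handed back to the user would take roughly the following shape in Lean 4. The statement and name below are illustrative, not Pisa's actual output; the `sorry` placeholder marks it as left for the user to prove.

```lean
-- Illustrative sketch: a conjecture emitted back into Lean
-- for the user to prove (hence the `sorry` placeholder).
theorem append_nil_conjecture (xs : List α) : xs ++ [] = xs := by
  sorry
```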
