Chalmers Open Digital Repository

Welcome to Chalmers' open digital repository!

Here you will find:

  • Student theses published at the university, including bachelor's projects as well as degree projects at undergraduate and master's level
  • Digital special collections, such as the Chalmers modellkammare (model chamber)
  • Selected project reports
 

Units in Chalmers ODR

Select a unit to see all collections.

Recently added

Item
Progress-Based Distributed Queues - Exploring the effects of a novel heuristic with partial queues using fetch-and-add
(2024) Hermansson, Sebastian; Johansson, Elias; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Chalmers University of Technology / Department of Computer Science and Engineering; Seger, Carl-Johan; Tsigas, Philippas; von Geijer, Kåre
With the increasing use of multi-threaded processors comes the challenge of scaling to a higher number of threads. To that end, the semantics of a data structure can be relaxed to lessen contention and achieve better performance. The data structure of interest in this thesis is a relaxed first-in-first-out queue called d-RA, which is made up of partial queues and uses a length-based heuristic to choose which partial queue to perform an operation on. However, it employs relatively slow lock-free partial queues and a heuristic that is not definitively proven to be optimal. Furthermore, relaxed first-in-first-out queues lack real-world applications, as they cannot be used where the order of operations is strict. This thesis improves on the d-RA algorithm by using faster partial queues and a new progress-based heuristic, which generally increases the data structure's throughput, makes it scale to more threads, and lowers the level of relaxation. Additionally, we use relaxed queues in an unordered breadth-first search to calculate shortest paths in graphs, where they are shown to outperform concurrent queues.
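To make the idea concrete, the sketch below shows a simplified relaxed FIFO in the same spirit: several partial queues, with each operation routed to one of them by a length-based, two-choice heuristic. It illustrates the general technique only; it is not the thesis's d-RA, which uses lock-free, fetch-and-add-based partial queues and a progress-based heuristic rather than the mutex-protected deques and length comparison used here for brevity.

#include <cstddef>
#include <deque>
#include <mutex>
#include <optional>
#include <random>
#include <vector>

template <typename T>
class RelaxedQueue {
public:
    explicit RelaxedQueue(std::size_t partials) : queues_(partials) {}

    void enqueue(const T& value) {
        // Length-based heuristic: of two sampled partial queues, push to the
        // shorter one. The length reads are only hints, so staleness is fine.
        std::size_t a = pick(), b = pick();
        std::size_t target = length(a) <= length(b) ? a : b;
        std::lock_guard<std::mutex> lock(queues_[target].mtx);
        queues_[target].items.push_back(value);
    }

    std::optional<T> dequeue() {
        // Pop from the longer of two sampled partial queues to keep them balanced.
        std::size_t a = pick(), b = pick();
        std::size_t target = length(a) >= length(b) ? a : b;
        std::lock_guard<std::mutex> lock(queues_[target].mtx);
        if (queues_[target].items.empty()) return std::nullopt;
        T value = queues_[target].items.front();
        queues_[target].items.pop_front();
        return value;  // May violate strict FIFO order: this is the relaxation.
    }

private:
    struct Partial {
        std::mutex mtx;
        std::deque<T> items;
    };

    std::size_t pick() {
        thread_local std::mt19937 rng{std::random_device{}()};
        std::uniform_int_distribution<std::size_t> dist(0, queues_.size() - 1);
        return dist(rng);
    }

    std::size_t length(std::size_t i) {
        std::lock_guard<std::mutex> lock(queues_[i].mtx);
        return queues_[i].items.size();
    }

    std::vector<Partial> queues_;
};

Because threads mostly touch different sub-queues, contention drops as the number of partial queues grows, at the cost of items occasionally being dequeued out of strict arrival order.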
Item
Long-term Agile Success for Startups and Entrepreneurship
(2024) LUROTIMI-ODU, DAMILOLA ADEBOLA; NISAN, PETRA; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Chalmers University of Technology / Department of Computer Science and Engineering; Penzenstadler, Birgit; Hanif, Muhammad
The principles of continuous improvement, adaptability, innovation and customer focus are crucial for the long-term success of many new companies. The implementation and adoption of agile methodologies can help to eliminate complex organizational issues and ambiguity in many industries. This study investigates the adaptability, adoption and applicability of agile methodologies in relation to the long-term agile success of startups and entrepreneurship. These agile principles are studied in the context of startup companies. Using startups as case studies, combined with qualitative interviews, the research presents insight into the challenges that arise within startups and entrepreneurship and how agile methodology can be adapted and applied to overcome them. Our findings reveal the need for startups and entrepreneurs to adopt agile methodologies as a way of achieving long-term success. The study provides guidelines, validated with specific recommendations for startups. This paper offers practical guidance for achieving long-term agile success in startups and entrepreneurship, contributes to the understanding of agile practices, and emphasizes the importance of agile methods for startup companies.
Item
Accelerating Embedded Code of Simulink with Pipeline-Friendly Code Synthesis and Parallel Computing
(2024) Zhang, Yeben; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Chalmers University of Technology / Department of Computer Science and Engineering; Massimiliano Gulisano, Vincenzo; Massimiliano Gulisano, Vincenzo
The model-based design (MBD) software development method is becoming more common in the automotive engineering industry due to the growing importance of software in vehicle development. With this approach, automotive engineers create control models visually, and a code generator in the MBD tool then automatically produces code from the visual model. However, code generators must consider code compatibility across different platforms, so the performance of the generated code cannot be guaranteed to be optimal. In order to optimize performance, this thesis first examines existing performance optimization schemes and then proposes two optimization schemes based on the existing methods.

Modern processors are structured around pipelines consisting of multiple stages, and every instruction must pass through each stage. Ideally, each stage can process an instruction in a single CPU cycle. When a stage takes longer than one cycle, it causes a pipeline stall, typically due to data hazards: the execution of the current instruction relies on a data value produced by the previous instruction. The first method analyzes the pipeline state to identify the cause of a pipeline stall when the CPU has a single pipeline. In a pipeline, if the next instruction does not depend on the result of the previous instruction, it does not have to wait for the previous instruction to execute; executing such an instruction first is referred to as out-of-order execution. A solution was found to reduce pipeline stalls by achieving a similar effect through reordering. Based on this approach, an academic method called Mercury proposes an algorithm that minimizes data hazards by reordering code statements at the source level. However, the evaluation reveals flaws in this method: it adversely impacts the performance of complex models that heavily use the cache for storing intermediate variables. The thesis investigates the causes of this negative optimization: when the data size of these variables exceeds the cache, performance becomes slower than that of the original Simulink-generated code. This thesis improves the Mercury algorithm so that it can also optimize code running on embedded devices with limited cache resources. The second approach builds on the multi-core architecture of modern processors. It decouples a task system into several small tasks and assigns them to multiple cores at a fine granularity based on the data-dependency relationships and the execution times. This minimizes barrier wait times and improves overall execution time.

In the experiments, the new methods were tested on hardware and compared with existing ones. The hardware used is an Infineon development board, commonly used in the automotive industry to control specific functions in vehicles; it is equipped with three TriCore processors, each with a 6-stage pipeline for processing software instructions. The first method aims to improve the performance of a single task running on a single processor by addressing pipeline stalls. The task used in this thesis is a control program with two proportional-integral-derivative (PID) controllers, a type of program widely used in industrial systems. The program has two inputs: the expected temperature of the controlled object and the temperature measured by the sensor. The output of the controller is the voltage of the air-conditioning compressor; according to the control parameters, the program outputs different voltages to stabilize the temperature of the controlled object at the expected temperature. Compared to Mercury, the thesis method achieves the same optimization when the cache has sufficient capacity, and about a 20% improvement when cache resources are insufficient. The second approach increases the parallelism of multiple tasks and reduces processing time when multiple tasks are scheduled onto the three processors. Compared to the existing coarse-grained scheduling method, the fine-grained method of this thesis achieves about a 4% improvement.
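The source-level reordering described above can be illustrated with a small, hypothetical example (not taken from the thesis, from Mercury, or from Simulink-generated code; all names and values are invented): a dependency chain in which every statement waits on the previous one, and an interleaved variant in which two independent chains give the pipeline unrelated work to issue between dependent instructions.

// Illustrative only: the kind of statement-level reordering the abstract
// describes, interleaving independent work so a dependent use does not
// immediately follow the instruction that produces its operand.

// Before: each statement consumes the result of the previous one, so a
// simple in-order pipeline may stall on every read-after-write hazard.
static inline float filter_dependent(float x, float g1, float g2, float g3) {
    float a = x * g1;
    float b = a * g2;   // waits on a
    float c = b * g3;   // waits on b
    return c;
}

// After: two independent channels are interleaved, giving the pipeline
// unrelated instructions to issue while each dependency chain completes.
static inline void filter_interleaved(float x0, float x1,
                                      float g1, float g2, float g3,
                                      float* y0, float* y1) {
    float a0 = x0 * g1;
    float a1 = x1 * g1;   // independent of a0
    float b0 = a0 * g2;
    float b1 = a1 * g2;   // independent of b0
    *y0 = b0 * g3;
    *y1 = b1 * g3;
}

The interleaving pays off only while the extra intermediate variables fit in cache; with too many live values, spills can outweigh the saved stalls, which is the regression the thesis's cache-aware refinement of Mercury targets.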
Item
Joint Venture: Designing a Digital Solution for Monitoring Arthroplasty Patients’ Movement Patterns
(2024) Ingheden, Agaton; Sjöberg, Sofia; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Chalmers University of Technology / Department of Computer Science and Engineering; Fjeld, Morten; Weilenmann, Alexandra
This Master’s thesis explores the design of a mobile application for enhancing post-arthroplasty care through self-tracking. The primary goal is to provide a user-friendly interface for caregivers to monitor and support patient rehabilitation after arthroplasty. Current rehabilitation practices within arthroplasty often lack digital solutions that can monitor and encourage patient mobility and compliance. To fill this gap, the proposed application uses self-tracking data from the patient’s iPhone to visualise and analyse movement data, offering caregivers real-time insights into patients’ progress. This approach aims to improve surgical outcomes and enhance communication between caregivers and patients by providing a platform for continuous feedback. Through user studies, the needs of caregivers working with arthroplasty patients have been explored, identifying a desire to incorporate objective evaluation measures to improve current practices. Beyond user studies, the research also reviews related literature on the impact of digital solutions within healthcare, particularly the utility of mobile health applications in promoting physical activity. The application’s design process, grounded in the Design Thinking framework, emphasises usability and empathy towards the target group to ensure the application is practical for caregivers. Key findings from user testing suggest that the application can contribute to a more personalised and responsive care process. Through iterative design and testing, the application has been refined to meet the specific needs of its users, with particular attention to intuitive design and ease of use. In conclusion, this thesis presents a step towards integrating digital health solutions in post-arthroplasty care, potentially leading to better patient engagement and improved health outcomes. It calls for further research into the long-term impacts of such applications and their integration into standard care practices, suggesting a paradigm shift towards more technologically integrated healthcare solutions for ageing populations.
Item
Balancing strict performance requirements and trade-offs for efficient data handling in unbounded flows - Design considerations for a proof-of-concept stream processing pipeline for vehicular data validation
(2024) Josefsson, Måns; Wall, Carl-Magnus; Chalmers tekniska högskola / Institutionen för data och informationsteknik; Chalmers University of Technology / Department of Computer Science and Engineering; Massimiliano Gulisano, Vincenzo; Papatriantafilou, Marina
In the current era of ever-growing networks of sophisticated sensors and smart devices, vast volumes of data are created at every moment. Big Data environments such as these necessitate scalable and efficient data pipelines in order to provide near real-time analytics and monitoring. However, with especially high, unbounded data rates, traditional (store-then-process) database procedures and batch-based processing methods struggle to remain performant. To this end, processing streams of data continuously has become an increasingly appealing approach, targeting low latency, high scalability and real-time data processing. Stream processing tools enable the execution of both stateless and stateful computations on data as it is being transferred through the pipeline, providing opportunities to increase the efficiency of processing procedures such as data validation. However, stream processing pipelines can be complex, especially in multi-tenant settings, and it is not always clear how to approach their implementation efficiently. A process such as data validation might be subject to a multitude of requirements that all affect pipeline design and considerations. To investigate such requirements and give detailed insight into how to approach the design of a stream processing pipeline for efficient data validation on unbounded flows of data, a proof-of-concept pipeline is developed and tested in a case study at Volvo Trucks. The case study involves multi-tenant automotive testing, where the proposed pipeline enables near real-time validation of data, for purposes such as monitoring vehicle sensor behavior. The pipeline comprises Apache Kafka for persistent event storage, Apache Flink for continuous stateful analysis, and Apache Druid for data serving. Evaluation of the pipeline is performed from the perspective of a set of metrics, namely data completeness, sustainable throughput, latency, scalability and fault tolerance. In order to harmonize the requirements of the pipeline and discern how trade-offs affect performance, various tool-tuning experiments and stress tests are performed. Performance evaluation of the pipeline reveals that in a controlled environment, with limited resources, the minimum throughput requirement of the use case can be sustained, while still achieving sub-second latencies and offering a degree of fault tolerance. The pipeline also shows promise of adapting well to different levels of scale, providing enough headroom for a tenfold increase in data volumes over current demands.
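As a rough illustration of the validation step only (not the actual Kafka/Flink/Druid implementation from the case study; record fields, names and thresholds are invented for the example), the sketch below shows the kind of record-at-a-time checks such a pipeline performs: a stateless range check plus a stateful per-sensor check that flags out-of-order timestamps.

// Conceptual sketch of continuous data validation on an unbounded stream.
#include <cstdint>
#include <string>
#include <unordered_map>

struct Reading {
    std::string sensor_id;
    int64_t timestamp_ms;
    double value;
};

enum class Verdict { Valid, OutOfRange, OutOfOrder };

class StreamValidator {
public:
    StreamValidator(double min_value, double max_value)
        : min_value_(min_value), max_value_(max_value) {}

    // Called once per incoming record as it flows through the pipeline.
    Verdict validate(const Reading& r) {
        // Stateless check: value must lie within the configured range.
        if (r.value < min_value_ || r.value > max_value_)
            return Verdict::OutOfRange;

        // Stateful check: compare against the last timestamp seen per sensor.
        auto it = last_seen_.find(r.sensor_id);
        if (it != last_seen_.end() && r.timestamp_ms < it->second)
            return Verdict::OutOfOrder;

        last_seen_[r.sensor_id] = r.timestamp_ms;
        return Verdict::Valid;
    }

private:
    double min_value_;
    double max_value_;
    std::unordered_map<std::string, int64_t> last_seen_;  // per-sensor operator state
};

In the thesis's pipeline, checks of this kind would presumably run inside Apache Flink as stateful operators, with Kafka supplying the unbounded input and Druid serving the validated results for monitoring.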