Master Theses
Browsing Master Theses by Program "Complex adaptive systems (MPCAS), MSc"
Showing 1 - 20 of 25
- A genetic programming approach to finding discrepancies in log files (2021). Gulliksson, Martin; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Raum, Martin; Helgegren, Elin.
  An evolutionary algorithm is used to find discrepancies in log files. From a set of error-free reference logs, a set of regular-expression patterns describing the logs' general structure is generated using genetic programming. The patterns can then be checked against logs containing errors, with the goal of detecting added, removed, and reordered lines. A regex-oriented approach allows lines to be grouped together even when their contents are not identical in every instance. The approach works well as long as the provided log files do not contain too much noise.
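As a minimal illustration of the pattern-checking step described above (not the thesis' evolved patterns; the log format and regexes below are hypothetical), matching each line against a pattern set and reporting unmatched lines and unused patterns might look like:

```python
import re

def check_log(patterns, lines):
    """Match each log line against a list of regex patterns.

    Returns the indices of lines no pattern matches (candidate added or
    garbled lines) and the indices of patterns that never matched
    (candidate removed lines)."""
    unmatched_lines = []
    used = [False] * len(patterns)
    for i, line in enumerate(lines):
        for j, pat in enumerate(patterns):
            if re.fullmatch(pat, line):
                used[j] = True
                break
        else:
            unmatched_lines.append(i)
    missing_patterns = [j for j, u in enumerate(used) if not u]
    return unmatched_lines, missing_patterns

# Hypothetical patterns and logs, only to exercise the checker.
patterns = [r"START job=\w+", r"step \d+ ok", r"END job=\w+ status=\d"]
good = ["START job=alpha", "step 1 ok", "step 2 ok", "END job=alpha status=0"]
bad = ["START job=beta", "step 1 ok", "segfault at 0xdead", "END job=beta status=1"]
```

Grouping by regex rather than exact string comparison is what lets `step 1 ok` and `step 2 ok` count as the same kind of line.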
- Adaptive Driver Modelling for Forward Collision Warning Systems (2021). Forsman, Oscar; Warnqvist, Johanna; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Lundh, Torbjörn; Irekvist, Daniel; Runhäll, Andreas.
  In this work, driving behaviour is analysed with the purpose of finding connections between a driver's routine driving and their behaviour in collision and near-collision situations. The ambition is to improve the Forward Collision Warning (FCW) system in Volvo cars by taking the current driver's previous driving situations into account when determining the best timing for issuing a collision warning. The analysis is performed by means of feature extraction on multivariate time series data containing measurements from various sensors. Using principal component analysis (PCA) and clustering methods such as k-means and DBSCAN, no connections relevant to the formulated aim could be found. The conclusion drawn is that a more thorough evaluation of the available data is required. Removing parts of drive sequences that are not of interest, or categorising the sequences into different scenarios, could make the information more comparable and hence yield a better result. More careful cleaning of the available time series could also lead to an improvement.
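The PCA step used above can be sketched with a numpy-only projection; the "drive feature vectors" here are simulated stand-ins, not Volvo data:

```python
import numpy as np

def pca_project(X, k):
    """Project rows of X onto the first k principal components.

    Centre the data, take the SVD, and keep the k directions of largest
    variance; the rows of Vt are the principal directions."""
    Xc = X - X.mean(axis=0)
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))   # 100 simulated drives, 6 extracted features
X[:, 0] *= 10.0                 # one dominant direction of variance
Z = pca_project(X, 2)           # 2-D embedding for clustering/plotting
```

The projected coordinates `Z` are what a clustering method such as k-means or DBSCAN would then be run on.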
- Adaptive Radar Illuminations with Deep Reinforcement Learning: Illumination Scheduling for Long Range Surveillance Radar with the use of Proximal Policy Optimization (2023). Sandelius, Samuel; Ekelund Karlsson, Albin; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Helgesson, Peter; Andersson, Adam.
  A modern radar antenna can direct its energy electronically, without inertia or the need for mechanical steering. This opens up several degrees of freedom, such as transmission direction and illumination time, and thus the potential to optimise operation in real time. Long-range surveillance radars must balance the trade-off between searching for new targets and tracking known targets, an optimisation that is often rule-based. In recent years, reinforcement learning (RL) algorithms have been able to efficiently solve increasingly difficult tasks, such as mastering game strategies or solving complex control problems. In this thesis we show that reinforcement learning can outperform such rule-based approaches for a simulated radar.
- Application Classification - Feature Selection and Classification of Vehicle Applications from Usage Data (2016). Wireklint, Nils; Herder, Björn; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Chalmers University of Technology / Department of Mathematical Sciences.
- Bayesian Reinforcement Learning on Semi-Markov Decision Processes (2020). Hilding Södergren, Marcus; Vrede, Samuel; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Axelson-Fisk, Marina; Ehn, Gustaf.
  Automated decision making is a highly relevant research topic in the context of supply chains, as the usage of automated vehicles is increasing. Utilising historical data and domain knowledge to increase performance and ensure robust behaviour of the decision-making agent is also valuable from an industry point of view. The dynamic vehicle routing problem (DVRP) is a routing problem in which the costs of stochastically and dynamically appearing tasks in a system are minimised. The semi-Markov decision process (SMDP) is an extension of the Markov decision process (MDP), commonly used in reinforcement learning, that allows for modelling systems with stochastic state and time transition dynamics. In Bayesian reinforcement learning, ideas from Bayesian statistics are incorporated into reinforcement learning as a method to obtain better models based on historical data. In this thesis, we study how SMDPs can be applied, within the context of Bayesian reinforcement learning, to the DVRP, while considering risk aversion. We develop an SMDP model of the DVRP and a Bayesian reinforcement learning solver for SMDPs, and show that our solver is able to outperform a naive routing strategy such as first in, first out (FIFO). Our results show that the, to our knowledge, novel idea of applying SMDPs in a Bayesian reinforcement learning context to the DVRP is promising, though further work is needed. In addition, while we have incorporated risk aversion into our solver, we believe that the topic of risk aversion needs further study. Based on the results, and on the development of the research fields, we believe that the ideas covered in this thesis deserve, and will get, more research attention.
  More autonomous decision making in global supply chains is interesting from multiple perspectives. Improved decision making may result in faster supply chains while improving efficiency and thus reducing resource consumption. Furthermore, incorporating risk aversion into the decision making may lead to less fragile supply chains, potentially reducing the impact of unexpected events and disturbances.
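To make the FIFO baseline mentioned above concrete, one can compare serving tasks in arrival order against a simple greedy dispatcher on simulated task locations. This is only a toy sketch of the routing setting (the task model and distances are invented here), not the thesis' SMDP solver:

```python
import math
import random

def travel(route, start=(0.0, 0.0)):
    """Total Euclidean distance travelled when visiting tasks in order."""
    dist, pos = 0.0, start
    for p in route:
        dist += math.dist(pos, p)
        pos = p
    return dist

def nearest_first(tasks, start=(0.0, 0.0)):
    """Greedy baseline: always serve the closest pending task next."""
    pending, pos, route = list(tasks), start, []
    while pending:
        nxt = min(pending, key=lambda p: math.dist(pos, p))
        pending.remove(nxt)
        route.append(nxt)
        pos = nxt
    return route

random.seed(1)
tasks = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(30)]
fifo_cost = travel(tasks)                  # FIFO: serve in arrival order
greedy_cost = travel(nearest_first(tasks)) # greedy: nearest pending task
```

Even this myopic policy beats FIFO on travel distance, which is why FIFO is a natural naive baseline for a learned routing policy to improve on.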
- Cross-tissue variance analysis of gene sets (2023). Thune, Oskar; Kööhler, Mauritz; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Kristiansson, Erik; Angermann, Bastian.
  Gene set enrichment analysis is used to investigate differences in gene expression for genetic pathways in transcriptomic data. Gene set scoring methods such as GSVA and singscore are used to assess the enrichment of genes of interest, called gene sets. GSVA and singscore produce a score of how expressed a gene set is relative to a reference expression, a reference that is not always accessible. In this work we apply variance decomposition to investigate the use of singscore and GSVA to create a baseline for RNA-seq data that lacks control samples, and apply a variational autoencoder (VAE) for prediction of gene set scores across tissues. To this end, variance decomposition was performed on GTEx to assess the dataset's use as a baseline, and a VAE was trained on GTEx with the aim of predicting gene set scores across tissues. Our results show that using a reference dataset as a baseline for RNA-seq data is of limited use: the results are not conclusive enough to warrant usage in applications requiring the precision needed in pharmaceutical research. The VAE-based prediction shows lacklustre results in predicting expression across tissues, and other machine learning methods should be investigated for this application.
- Decentralized Deep Learning under Distributed Concept Drift: A Novel Approach to Dealing with Changes in Data Distributions Over Clients and Over Time (2023). Klefbom, Emilie; Örtenberg Toftås, Marcus; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Modin, Klas; Dubhashi, Devdatt.
  In decentralized deep learning, clients train local models in a peer-to-peer fashion by sharing model parameters rather than data. This allows collective model training in cases where data may be sensitive or otherwise unable to be transferred. In this setting, variations in data distributions across clients have been extensively studied; variations over time, however, have received no attention. This project proposes a solution for decentralized learning where the data distributions vary both across clients and over time. We propose a novel algorithm that can adapt to the evolving concepts in the network without any prior knowledge or estimation of the number of concepts. The algorithm is evaluated on standard benchmarks adapted to the temporal setting, where it outperforms previous methods for decentralized learning.
- Deep learning for robust road object detection (2017). Diego Solana, Lucia; Scanlan, Dónal; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Chalmers University of Technology / Department of Mathematical Sciences.
- Dynamic Beta Estimation in Volatile Markets (2016). Vikström, Ludvig; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Chalmers University of Technology / Department of Mathematical Sciences.
- Estimation of Lidar Point Clouds Based on Ultrasonic Sensors Using Deep Learning (2023). Hjalmarsson, Carl; Jäghagen, Jesper; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Andersson, Adam; Abramsson, Andreas.
  Safety is a key area of research in the automotive industry. Car companies dedicate a great amount of time and resources to making their cars safer by developing sensory systems such as Low Speed Perception (LSP). In this thesis, we explore the possibility of enhancing the contribution of ultrasonic sensors to LSP by leveraging deep learning to mimic data from the more expensive Lidar sensors. To do this we draw inspiration from three different deep learning approaches: image denoising, image segmentation and image-to-image translation, resulting in five different models: DAE(bin), DAE(mse), UNET(bin), UNET(ce) and an I2I cGAN. We train these models on three datasets, each containing paired ultrasonic and Lidar representations of a two-dimensional environment around the car. To measure the performance of these models, we develop an evaluation framework in which we assess the ability of the models to map ultrasonic object detections to corresponding Lidar detections. We find that the performance of all networks is highly dependent on the data representation. When using a basic representation, consisting only of point detections and free space, all models fail to improve upon the baseline ultrasonic sensor score. When using a sparse representation consisting of detection arcs, however, UNET(bin) succeeds in outperforming the baseline and mimicking the more accurate Lidar representation (see cover). Finally, when adding unknown areas of the environment to the sparse representation, both UNET(ce) and the cGAN manage to outperform the baseline in most aspects, and we see a convergence towards the more realistic Lidar representation.
  The results show that it is indeed possible to enhance ultrasonic sensor perception using deep learning and Lidar reference data, and while there is still much room for improvement, we have shown that there is potential in further research on this task.
- Evolutionary dynamics of spatial games (2020). Ekesryd, Olle; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Gerlee, Philip; Gerlee, Philip.
  Game theory is a subject that can be applied to many fields of study, for instance biology and economics. In this report, we look at two well-known games: the Prisoner's dilemma and Hawk-Dove. In contrast to previous research on these two games, every agent is located in two-dimensional space on the unit square. This spatial setting changes the dynamics of the games. For example, we found that cooperating is a superior strategy in the Prisoner's dilemma for some combinations of parameters; for reference, in the standard Prisoner's dilemma, the strategy defect dominates the strategy cooperate. It is also found that the initial state of the simulations has a big impact on the results obtained. The simulations are computationally demanding, so an attempt was made to construct approximations that could replace the spatial model. The results show that the approximations are not good enough to replace the spatial model, but steps have been taken that can help future researchers construct an approximation that works well.
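A minimal lattice sketch of a spatial Prisoner's dilemma gives a feel for how space changes the dynamics. This uses a discrete grid with imitate-the-best updating, not the thesis' continuous unit-square model, and the payoff values are illustrative:

```python
import numpy as np

# Prisoner's dilemma payoffs with T > R > P > S (illustrative values).
R, S, T, P = 3.0, 0.0, 5.0, 1.0

def step(grid):
    """One synchronous update on a grid of strategies (1 = cooperate,
    0 = defect): each cell plays its 4 neighbours (periodic boundary),
    then copies the strategy of its best-scoring neighbour, itself
    included (ties keep the current strategy)."""
    n = grid.shape[0]
    payoff = np.zeros_like(grid, dtype=float)
    shifts = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    for dx, dy in shifts:
        nb = np.roll(np.roll(grid, dx, 0), dy, 1)
        payoff += np.where(grid == 1,
                           np.where(nb == 1, R, S),   # cooperator's payoff
                           np.where(nb == 1, T, P))   # defector's payoff
    new = grid.copy()
    for i in range(n):
        for j in range(n):
            best_i, best_j, best = i, j, payoff[i, j]
            for dx, dy in shifts:
                a, b = (i + dx) % n, (j + dy) % n
                if payoff[a, b] > best:
                    best, best_i, best_j = payoff[a, b], a, b
            new[i, j] = grid[best_i, best_j]
    return new
```

In such spatial models, clusters of cooperators can persist for some payoff parameters, in contrast to the well-mixed game where defection dominates.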
- Fast Fourier Transform using compiler auto-vectorization (2019). Göc, Dundar; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Raum, Martin; Raum, Martin.
  The purpose of this thesis is to develop a fast Fourier transform algorithm, written in C with the use of GCC (GNU Compiler Collection) auto-vectorization, in the multiplicative group of integers modulo n, where n is a word-sized odd integer. The specific Fourier transform algorithm employed is the cache-friendly version of the truncated Fourier transform. The algorithm was implemented by modifying zn_poly, an existing C library for modular arithmetic. The majority of the thesis work consisted of changing the code to make integration into FLINT possible; FLINT is a versatile C library aimed at fast arithmetic of different kinds. The results show that auto-vectorization is possible, with a potential speedup factor of 3. The performance increase is, however, entirely dependent on the task at hand and the nature of the computation being made.
- Improving Lower Bound for Aircraft Routing Optimization: An Application of Lagrange Relaxation (2023). Hult, Karin; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Strömberg, Ann-Brith; Westerlund, Andreas.
  Aircraft routing is a complicated optimization problem which can be modelled as a form of resource-constrained integer multi-commodity network flow problem. When optimizing this problem, a good lower bound is useful for assessing the quality of feasible solutions and for determining convergence of the solution algorithm. The technique currently used to calculate lower bounds is very fast, but applies several relaxations on top of each other, which can yield a quite weak lower bound. This raises the question of whether an efficient method to strengthen the lower bound could be developed. This master's thesis examines an alternative method of calculating a lower bound by performing Lagrange relaxation with respect to only one constraint group, which theoretically leads to a stronger lower bound. Subgradient optimization with Polyak step size is used to solve the Lagrangian dual problem. Tests are performed, both with realistic implementations and with idealized parameters, to investigate the potential of Lagrange relaxation and of solving the Lagrangian dual problem with subgradient optimization. Lastly, the results are compared with a linear-programming-based relaxation of the original problem. The Lagrange relaxation shows promise, as the idealized tests resulted in a significant improvement of the lower bound for most of the studied test cases. However, the realistic implementations of subgradient optimization did not perform consistently well for all test cases; this indicates that improvements of the implementations are required for this method of solving the Lagrangian dual problem to be usable in practice.
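The Polyak step rule mentioned above can be sketched on a toy convex problem. Here minimisation stands in for the dual maximisation of the thesis (the sign is simply flipped for maximisation), and the objective is invented for illustration; note that the Polyak step requires knowing the optimal value:

```python
import numpy as np

def polyak_subgradient(f, subgrad, x0, f_star, iters=50):
    """Minimise a convex, possibly non-smooth f by subgradient steps
    with the Polyak step length (f(x) - f*) / ||g||^2, where f* is the
    known optimal value and g a subgradient at the current point."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        g = subgrad(x)
        gap = f(x) - f_star
        if gap <= 0 or not np.any(g):
            break                       # optimal (or zero subgradient)
        x = x - gap / np.dot(g, g) * g  # Polyak step along -g
    return x

c = np.array([3.0, -1.0])
f = lambda x: np.abs(x - c).sum()       # f* = 0, attained at x = c
subgrad = lambda x: np.sign(x - c)      # a subgradient of the L1 distance
x = polyak_subgradient(f, subgrad, [10.0, 10.0], f_star=0.0)
```

In the Lagrangian-dual setting the optimal value is unknown, which is why practical implementations estimate it, and why the realistic runs can behave less consistently than the idealized ones.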
- Item batching for picker routing in warehouses: Applying column generation with an exact, novel subproblem algorithm (2023). Löfving, Joseph; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Strömberg, Ann-Brith; Strömberg, Ann-Brith; Härdmark, Johan.
  The item batching problem concerns finding an optimal partitioning of the items waiting to be picked in a warehouse into batches, such that these batches can be picked with minimal total walking distance. In this project, the item batching problem is solved through column generation, an efficient method for solving linear optimization problems with an enormous number of variables (which, in this case, represent possible batches). To achieve optimality through column generation, one needs to be able to solve a problem-specific subproblem to optimality. For the item batching problem, this subproblem is a version of the computationally intensive prize-collecting travelling salesperson problem. To solve this subproblem to optimality, a novel dynamic programming algorithm with a heuristic guide is developed, which solves the subproblem optimally in times ranging from milliseconds to seconds, depending on the details of the problem instance. Overall, the implemented column generation algorithm finds optimal solutions in a matter of seconds per generated batch for all test cases, but sometimes requires hours to verify that these solutions are optimal. The optimal solutions provide great improvements over current batching strategies, often more than halving the total distance that the warehouse workers need to walk while picking. However, simple alternative algorithms provide similar improvements in a matter of milliseconds, making column generation time-consuming, computationally intensive, and unwieldy for very little gain in comparison.
  The developed subproblem algorithm is easily generalizable and could be used for more complicated variants of the item batching problem, such as the order batching problem, where the simple alternative algorithms are less likely to perform well.
- Monte Carlo simulations for brachytherapy (2021). Skarin, Miriam; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Lundh, Torbjörn; Engwall, Erik; Niessen, Tom.
  Brachytherapy is an internal radiation treatment in which a small, encapsulated radioactive source is placed at different dwell positions close to or within the tumor. The standard method for treatment planning in brachytherapy is to follow the TG43 formalism, where the full patient geometry is treated as water. This approximation has limitations in the modelling of skin-air interfaces and tissue variations, and does not take shielding effects into consideration. To overcome the limitations of this analytical framework, Monte Carlo (MC) simulations can be used. To reduce the computational time, which is often a limiting factor in MC computations, pre-generated information about the radiation can be utilized. In this work, independent Monte Carlo simulations performed with the software egs_brachy are set up to characterize the radiation from a high-dose-rate brachytherapy source. The output is a phase space with information about the radiation properties. The phase space is analyzed, resulting in a 4D histogram which can be used as input to the brachy Monte Carlo dose engine prototype in the treatment planning system RayStation®. In addition to constructing the input data, dose distributions for two different treatment plans are scored with egs_brachy in both water and tissue. The same treatment plans are used to compute dose with TG43 and the brachy Monte Carlo prototype, and the resulting dose distributions are compared with those obtained with egs_brachy. Local gamma tests were performed with gamma criteria (1%/1 mm) and (3%/2 mm), yielding low fail rates for both RayStation dose engines. The favorable gamma results show that both TG43 and the brachy Monte Carlo dose engine can reproduce the simulated dose distributions from egs_brachy well.
  In conclusion, the produced phase space is suitable as input for the brachy MC dose engine, and the simulated dose distributions can be utilized in a validation framework.
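A simplified one-dimensional global gamma analysis in the spirit of the (1%/1 mm)-style criteria above can be sketched as follows; this is only an illustration on a synthetic dose curve, not the egs_brachy or RayStation implementation (which use local criteria and 3D dose grids):

```python
import numpy as np

def gamma_pass_rate(ref, evaluated, coords, dose_tol, dist_tol):
    """Global 1D gamma analysis: for each reference point, take the
    minimum over all evaluated points of
        sqrt((dose diff / dose tolerance)^2 + (distance / dist_tol)^2);
    the point passes when that minimum is <= 1.  The dose tolerance is
    given as a fraction of the maximum reference dose."""
    ref, evaluated, coords = map(np.asarray, (ref, evaluated, coords))
    d_norm = dose_tol * ref.max()
    passed = 0
    for x, d in zip(coords, ref):
        dd = (evaluated - d) / d_norm       # normalised dose differences
        dx = (coords - x) / dist_tol        # normalised distances
        if np.min(np.hypot(dd, dx)) <= 1.0:
            passed += 1
    return passed / ref.size

x = np.linspace(0.0, 10.0, 101)             # positions in mm
ref = np.exp(-0.3 * x)                      # toy dose fall-off curve
rate_same = gamma_pass_rate(ref, ref, x, dose_tol=0.01, dist_tol=1.0)
```

An identical distribution passes everywhere, while a grossly scaled one fails; a low fail rate between two dose engines indicates close agreement.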
- Multivariate Time Series Forecasting of Earnings Before Interests and Taxes (2022). Moberg, Frida; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Picchini, Umberto; Zetterqvist, Olof.
  Financial forecasting is an important tool for companies when planning their operations and structuring their organization. The Packaging Solutions division at Stora Enso works with three different forecasts for its corrugated business, one of which is a 15-month rolling forecast. This forecast is updated each month, which is a time-consuming process; it is desired to reduce that time to free up more time for analysing the forecast results and planning actions. One of the most important outcomes is the forecast for Earnings Before Interest and Taxes (EBIT). This project therefore aims to create an automatic model that can make the forecasting process for EBIT more efficient and more accurate. The input data consist of 16 different features, such as raw material costs, end-product price and maintenance costs, recorded monthly between 2014 and 2021 for seven different countries and three regions. Since the amount of data is relatively small and multivariate, the vector autoregressive moving average (VARMA) model was selected. During model training, five different combinations of features were tested for all countries and regions. The results showed that the accuracy increased compared to the company model for four countries and one region. The countries with more stable EBIT data worked well with the VARMA model, while those with sudden increases or decreases were, as expected, more difficult to model. To conclude, the VARMA model is a good option for making the forecasting process more efficient, but it would benefit from some fine adjustments before it can be implemented in the daily work at Stora Enso.
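The autoregressive core of the VARMA family can be sketched as a least-squares VAR(1) fit on simulated data; the moving-average terms and the thesis' actual 16 features are omitted, and everything below is illustrative:

```python
import numpy as np

def fit_var1(Y):
    """Least-squares fit of a VAR(1) model Y_t = c + A @ Y_{t-1} + e_t.

    Stacks [1, Y_{t-1}] as regressors and solves for the intercept c
    and coefficient matrix A in one least-squares problem."""
    X = np.hstack([np.ones((len(Y) - 1, 1)), Y[:-1]])
    B, *_ = np.linalg.lstsq(X, Y[1:], rcond=None)
    return B[0], B[1:].T        # c (intercept), A (coefficient matrix)

def forecast(c, A, y_last, steps):
    """Iterate the fitted recursion to produce a multi-step forecast."""
    out, y = [], y_last
    for _ in range(steps):
        y = c + A @ y
        out.append(y)
    return np.array(out)

# Simulate a stable 2-variable VAR(1) process and recover its dynamics.
rng = np.random.default_rng(0)
A_true = np.array([[0.6, 0.1], [0.0, 0.5]])
Y = np.zeros((2000, 2))
for t in range(1, 2000):
    Y[t] = A_true @ Y[t - 1] + rng.normal(scale=0.1, size=2)
c, A = fit_var1(Y)
```

A rolling forecast like the 15-month one described above would call `forecast(c, A, Y[-1], 15)` each month after refitting on the latest data.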
- Optimal Restart Games: General Theory and Near-Optimal Strategies for Rivest's Coin Game (2019). Älgmyr, Anton; Martinsson, Björn; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Steif, Jeffrey; Steif, Jeffrey.
  The purpose of this thesis was to analyze Rivest's coin game. To do this, we formally introduced a type of game which we call restart games, and built a theoretical framework for the analysis of such games. Most of this theory was derived by connecting restart games to quitting games (optimal stopping problems), with ideas based on the paper "On Playing Golf With Two Balls". Through this framework we have developed two strategies for playing Rivest's coin game which we have shown to be within a constant factor of optimal. These strategies seem to generalize well to other similar games; in particular, we have shown them to be within a constant factor of optimal in another related coin game. The theoretical framework also provides the means to analyze Rivest's coin game numerically, and even to construct an optimal strategy for any particular instance of the game. The constructed optimal strategy, along with some other simple strategies, was analyzed; the analysis reaffirms the optimality results while also highlighting some important differences between our strategies and an optimal strategy.
- Predicting Antibiotic Resistance with a Language AI Model (2022). Örtenberg Toftås, Mathias; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Kristiansson, Erik; Kristiansson, Erik.
  The effectiveness of using a BERT language model to predict antibiotic resistance within beta-lactamase/transpeptidase-like proteins is tested. The performance of the model is compared to a traditional hidden Markov model (HMM) to verify its capabilities. To further ensure the model's capabilities, several tests, such as cross-validation and robustness to sequence read errors, are performed. We find that the BERT model outperforms the HMM on amino acid sequences of length around 40-50 and shorter; these are the sequence lengths we expect to encounter in our use case. For longer sequences, the HMM is preferable as it requires less computation time. We also find that the BERT model and the HMM have learned different aspects of the data, allowing the results of both methods to be combined to achieve even better results.
- Simple stochastic populations in habitats with bounded and varying carrying capacities (2016). Korveh, Edward; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Chalmers University of Technology / Department of Mathematical Sciences.
  A population consisting of a single type of individual is considered, where reproduction is seasonal and occurs by means of asexual binary splitting with a probability that depends on the carrying capacity K of the habitat and on the present population. Current models for such binary-splitting populations do not explicitly capture the concepts of early and late extinctions. A new parameter v, called the 'scaling parameter', is introduced to scale down the splitting probabilities in the first season, and also in subsequent generations, in order to properly observe and record early and late extinctions. The modified model is used to estimate the probabilities of early and late extinctions, and the expected time to extinction, in two main cases. The first case is for fixed and large K, where a new and more general upper bound for the expected time to extinction is proposed to be e^(vK); the risk of such populations going extinct is found to be of the order O(1/sqrt((2v-1)K)). The second case considers a scenario where the carrying capacity of the habitat varies in each season between two values, L (low) and H (high), randomly chosen with equal probabilities to represent a good or a bad season, respectively. Both cases yielded similar results, with the probability of early extinction tending to zero as v increases, and the probability of late extinction tending to one as v increases.
  Keywords: habitat; carrying capacity; branching processes; supercritical; subcritical; extinction time; binary-splitting.
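A small simulation in the spirit of the model above illustrates sub- versus supercritical behaviour. The specific splitting probability p = vK/(K + Z_t) is an assumed stand-in for illustration, not the thesis' exact form:

```python
import numpy as np

def simulate_population(K, v, z0=10, max_gens=500, seed=0):
    """Seasonal binary-splitting population: each of the Z_t individuals
    splits into two with probability p, else leaves no offspring, so
    Z_{t+1} = 2 * Binomial(Z_t, p).  Here p = v*K/(K + Z_t) is a
    hypothetical choice that decreases toward the carrying capacity K,
    with v the scaling parameter.  Returns (went_extinct, generations)."""
    rng = np.random.default_rng(seed)
    z = z0
    for t in range(1, max_gens + 1):
        p = min(1.0, v * K / (K + z))
        z = 2 * rng.binomial(z, p)   # survivors split into two
        if z == 0:
            return True, t
    return False, max_gens

extinct_low_v, t_low = simulate_population(K=1000, v=0.3)    # mean offspring < 1
extinct_high_v, _ = simulate_population(K=1000, v=1.0)       # grows toward K
```

With small v the mean offspring number 2p stays below one and the population dies out quickly (early extinction); with v near one it grows to fluctuate around the carrying capacity, and extinction, while still certain eventually, takes far longer.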
- Simulation and analysis of spinodal decomposition using the Allen-Cahn equation (2023). West, William; Chalmers tekniska högskola / Institutionen för matematiska vetenskaper; Gebäck, Tobias; Gebäck, Tobias.
  Spinodal decomposition is a type of phase separation that can occur for certain compositions of mixtures. If one of the phases is then dissolved, the result is a labyrinthine structure that can advantageously be used to transport material, such as medicine through tablets. Simulations of spinodal decomposition based on the Allen-Cahn equation were run with a variety of parameters to gather time-based data. Methods were then developed to analyse characteristics of the data, both in 2D and 3D. The results were compared to those from previous physical experiments, with reasonable agreement. Finally, the effects of boundary conditions on spatial variations and phase patterns were studied.
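An explicit finite-difference step of the Allen-Cahn equation on a periodic 2D grid shows the phase-separation mechanism; the parameter values and grid are illustrative, not those used in the thesis:

```python
import numpy as np

def allen_cahn_step(phi, dt=0.05, eps2=0.5):
    """One explicit Euler step of the Allen-Cahn equation
        dphi/dt = eps2 * laplacian(phi) + phi - phi**3
    on a periodic 2D grid with unit spacing.  The reaction term drives
    phi toward the pure phases +1 and -1; the diffusion term smooths
    the interfaces between them."""
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0)
           + np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4 * phi)
    return phi + dt * (eps2 * lap + phi - phi ** 3)

# Start from small noise around the unstable mixed state phi = 0
# and let domains of the two phases form and coarsen.
rng = np.random.default_rng(0)
phi = 0.01 * rng.standard_normal((64, 64))
for _ in range(400):
    phi = allen_cahn_step(phi)
```

After a few hundred steps the field has left the mixed state and consists of regions near +1 and -1 separated by thin interfaces, the labyrinthine patterns described above.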