Examensarbeten för masterexamen // Master Theses
Browsing Examensarbeten för masterexamen // Master Theses by Program "Computer systems and networks (MPCSN), MSc"
Showing 1 - 2 of 2
- Detection and classification of marine vehicles (2021) Rofalis, Athanasios; Chalmers tekniska högskola / Institutionen för mekanik och maritima vetenskaper; Benderius, Ola. One of the most common tasks within the computer vision field is the detection and classification of different objects. This thesis aims to deliver software that can be deployed in real-world scenarios and manage to detect and classify marine vehicles accurately. Using the pre-defined deep neural network model You Only Look Once (YOLO), we achieved high performance on the detection and classification task. The model was trained on a specific dataset of grayscale images, yielding a model that can classify objects with an accuracy of 68% and predict their positions with a mean average precision (mAP) of 0.77. Moreover, the model was tested under different weather conditions and achieved an accuracy of 0.85% and an mAP of 0.068. In general, the YOLO model appears to be a robust detector that can be trained and deployed to detect objects efficiently with high performance.
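The mAP figure quoted in the abstract is built on box-overlap checks: a predicted box counts as a correct detection when its intersection over union (IoU) with a ground-truth box exceeds a threshold. As a rough, hypothetical illustration (not the thesis code), the IoU computation that underlies mAP evaluation can be sketched in Python:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes
    given as (x1, y1, x2, y2) corner coordinates."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    # Clamp to zero when the boxes do not overlap.
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# A detection is typically counted as a true positive when
# IoU with the ground truth exceeds 0.5, a common mAP threshold.
score = iou((0, 0, 10, 10), (5, 5, 15, 15))
```

Averaging precision over recall levels for such thresholded matches, per class, gives the mAP reported above.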
- Using pre-trained language models for extractive text summarisation of academic papers (2020) Hermansson, Erik; Boddien, Charlottte; Chalmers tekniska högskola / Institutionen för mekanik och maritima vetenskaper; Selpi, Selpi. Given the overwhelming amount of textual information on the internet and elsewhere, the automatic creation of high-quality summaries is becoming increasingly important. With the development of neural networks and pre-trained models within natural language processing in recent years, such as BERT and its derivatives, the field of automatic text summarisation has seen substantial progress. These models are pre-trained on large amounts of data for general knowledge and are then fine-tuned for specific tasks. Datasets are a limiting factor for training summarisation models, as they require a large number of manually created summaries. Most current summarisation models have been trained and evaluated on textual data from the news domain. However, pre-trained models fine-tuned on news data could potentially generalise and perform well on other data too. The main objective of this thesis is to investigate the suitability of several pre-trained language models for automatic text summarisation. The chosen models were fine-tuned on readily available news data and evaluated on a very different dataset of academic texts to determine their ability to generalise. There were only slight differences between the models on the news data, but, more interestingly, the results on the academic texts showed significant differences between them. The results indicate that the more robustly pre-trained models are able to generalise better and, according to the metrics, perform quite well. However, human evaluation puts this into question, showing that even the high-scoring summaries did not necessarily read well. This highlights the need for better evaluation methods and metrics.
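The automatic metrics contrasted with human evaluation above are commonly ROUGE scores. As a simplified, hypothetical illustration (not the thesis code), ROUGE-1 recall, the fraction of reference-summary unigrams that also appear in the candidate summary, can be computed as:

```python
from collections import Counter

def rouge1_recall(candidate, reference):
    """ROUGE-1 recall: clipped unigram overlap between a candidate
    summary and a reference summary, divided by reference length."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    # Clip each word's count by its count in the candidate, so a
    # repeated word in the candidate is not credited more than once
    # per reference occurrence.
    overlap = sum(min(cand[word], n) for word, n in ref.items())
    return overlap / sum(ref.values()) if ref else 0.0

score = rouge1_recall("the model summarises the paper",
                      "the model summarises academic papers")
```

Because ROUGE only counts surface overlap, a summary can score well while reading poorly, which is consistent with the thesis finding that high-scoring summaries did not necessarily read well.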