DSpace Collection:
https://hdl.handle.net/20.500.12380/44
2020-09-15T14:28:24Z

Evolutionary dynamics of spatial games
https://hdl.handle.net/20.500.12380/301719
Title: Evolutionary dynamics of spatial games
Authors: Ekesryd, Olle
Abstract: Game theory is a subject that can be applied to many fields of study, for
instance biology and economics. In this report, we look at two well-known games in
game theory, Prisoner’s dilemma and Hawk-Dove. In contrast to previous research on
these two games, every agent is located in two-dimensional space on the unit square.
This spatial setting changes the dynamics of the games. For example, we found that
cooperating is a superior strategy in Prisoner’s dilemma for some combinations of
parameters. For reference, in the standard Prisoner’s dilemma, the strategy defect
dominates the strategy cooperate. It is also found that the initial state of the
simulations has a large impact on the results obtained. The simulations are
computationally demanding, so an attempt was made to construct approximations that
could replace the spatial model. The results show that the approximations are not
good enough to replace the spatial model. However, steps have been taken that can
help future researchers develop an approximation that works well.
2020-01-01T00:00:00Z

Where to stand when playing darts? Questions in Probability Theory inspired by dart throwing
https://hdl.handle.net/20.500.12380/301701
Title: Where to stand when playing darts? Questions in Probability Theory inspired by dart throwing
Authors: Franzén, Björn
Abstract: This thesis investigates questions in probability theory inspired by a model of dart throwing. The deviation from where you aim is modelled as the distance to the dart board times a random vector, and in this thesis we refer to such a random vector as a dart. Points are then assigned by some bounded payoff function, and we study how the choice of the payoff function as well as the distribution of the dart affect the properties of the expected score when aiming in an optimal way.
Interestingly, it turns out that sometimes it can be better to move further away from the dartboard before throwing, and the main focus of this thesis is to characterise under what circumstances this is or isn't the case. For a dart X and payoff function f we call the pair (X,f) reasonable if it is always better to stand closer to the dartboard, and a dart X is called reasonable if (X,f) is reasonable for all payoff functions f.
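As an illustrative numerical experiment (not taken from the thesis, and reduced to one dimension): for a symmetric dart X and the cosine payoff f(x) = cos(x), the best expected score at distance r, optimised over the aiming point, reduces to |φ_X(r)|, the modulus of the characteristic function of X, which can be estimated by Monte Carlo. A minimal sketch, assuming a standard Gaussian dart:

```python
import numpy as np

rng = np.random.default_rng(0)

def optimal_expected_cosine_score(samples, r):
    """For payoff f(x) = cos(x), the best achievable expected score at
    distance r is sup_a E[cos(a + r X)] = |E[exp(i r X)]|, i.e. the
    modulus of the characteristic function of the dart X evaluated at
    r (estimated here by Monte Carlo over `samples` of X)."""
    return np.abs(np.mean(np.exp(1j * r * samples)))

# A standard Gaussian dart has |phi(r)| = exp(-r^2 / 2), which is
# strictly decreasing in r: standing closer is always better, so this
# dart behaves reasonably with respect to the cosine payoff.
x = rng.standard_normal(200_000)
scores = [optimal_expected_cosine_score(x, r) for r in (0.5, 1.0, 2.0)]
print(scores)  # decreasing, close to [0.8825, 0.6065, 0.1353]
```

For a non-reasonable pair (X, f), the same estimator would show the score increasing over some range of r.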
We have found a large class of reasonable darts, namely those with so-called selfdecomposable distributions; this class includes many well-known distributions, such as the exponential distribution, the logistic distribution, and all stable distributions. Whether there exist reasonable darts that are not selfdecomposable remains an open question.
It turns out that when the payoff function is cosine, reasonableness can be characterised in terms of the characteristic function of the dart, from which many different results follow. Furthermore, by studying functions of the form e^{cx} cos(ωx), we have found that no dart with compact support is reasonable.
We have also found several sufficient conditions for a dart to be non-reasonable with respect to some continuous payoff function, one of which is the following. If a dart has a point mass, but is not a constant, then it is non-reasonable with respect to some continuous function.
Finally, we have investigated under what types of operations a set of reasonable darts may be closed, and have found two results of this nature. Firstly, any independent sum of reasonable darts is reasonable. Secondly, for any f in C_0(R^n), the set of darts which are reasonable with respect to f is closed with respect to convergence in distribution.
2020-01-01T00:00:00Z

Deep-learning-accelerated Bayesian inference for state-space models
https://hdl.handle.net/20.500.12380/301661
Title: Deep-learning-accelerated Bayesian inference for state-space models
Authors: Hölén Hannouch, Elias; Holmstedt, Oskar
Abstract: Bayesian inference is an important statistical tool for estimating uncertainties in
model parameters from data. One very important method is the Metropolis-Hastings
algorithm, which allows for parameter inference when analytical solutions are intractable.
The only requirement is that the likelihood function can be evaluated.
However, it is a computationally expensive algorithm, as it is usually run for several
thousand iterations. This is especially true for inference in state-space models, where
the likelihood is computed via Bayesian filtering, which is a costly operation in and
of itself. We propose a new method for doing Bayesian inference via Metropolis-
Hastings for state-space models by replacing the standard likelihood computation
with a neural network. The network is trained on data generated by a much shorter
earlier run of Metropolis-Hastings.
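As a schematic illustration of the idea (not the thesis code), here is a random-walk Metropolis-Hastings sampler with a pluggable log-likelihood. In the proposed method, the expensive filtering-based log-likelihood would be swapped for a trained neural-network surrogate after the short initial run; in this sketch a cheap Gaussian log-likelihood on synthetic data, with a flat prior, stands in for both:

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis_hastings(log_likelihood, theta0, n_iter=5000, step=0.2):
    """Random-walk Metropolis-Hastings with a pluggable log-likelihood.
    In the scheme sketched above, `log_likelihood` would first be the
    costly Bayesian-filtering estimate (short run), and then a neural
    network trained on (theta, log-likelihood) pairs from that run."""
    theta = theta0
    ll = log_likelihood(theta)
    chain = np.empty(n_iter)
    for i in range(n_iter):
        prop = theta + step * rng.standard_normal()
        ll_prop = log_likelihood(prop)
        # Accept with probability min(1, exp(ll_prop - ll)).
        if np.log(rng.uniform()) < ll_prop - ll:
            theta, ll = prop, ll_prop
        chain[i] = theta
    return chain

# Stand-in target: Gaussian log-likelihood of synthetic observations.
data = rng.normal(2.0, 1.0, size=100)
cheap_ll = lambda th: -0.5 * np.sum((data - th) ** 2)
chain = metropolis_hastings(cheap_ll, theta0=0.0)
print(chain[1000:].mean())  # close to the sample mean of `data`
```

The point of the substitution is that a forward pass through the surrogate is far cheaper than one Bayesian-filtering pass, so long chains become affordable.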
We show, both qualitatively and quantitatively, that our method produces comparable
results to the traditional method for several models. Moreover, our results
indicate that the performance of our method is consistent as the dimensionality of
the state-space model increases.
Finally, we show that our method is much more computationally efficient than
the traditional method for large runs. We investigate at what point our method
becomes the preferable alternative and find that the threshold occurs at quite small
runs, both in terms of computational time and desired output size.
2020-01-01T00:00:00Z

Adversarial Representation Learning for Synthetic Replacement of Sensitive Speech Data
https://hdl.handle.net/20.500.12380/301651
Title: Adversarial Representation Learning for Synthetic Replacement of Sensitive Speech Data
Authors: Östberg, Adam; Ericsson, David
Abstract: As more data is collected in various settings across organizations, companies, and
countries, there has been an increase in the demand for user privacy. Developing
privacy preserving methods for data analytics is thus an important area of research.
In this work we present a model based on generative adversarial networks (GANs)
that learns to obfuscate specific sensitive attributes in speech data. We train a
model that learns to hide sensitive information in the data, while preserving the
meaning in the utterance. The model is trained in two steps: first to filter sensitive
information in the spectrogram domain, and then to generate new and private information
independent of the filtered one. The model is based on a CNN that takes
mel-spectrograms as input. A MelGAN is used to invert the spectrograms back
to raw audio waveforms. We show that it is possible to hide sensitive information
such as gender by generating new data, trained adversarially to maintain utility and
realism.
2020-01-01T00:00:00Z
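As a hypothetical sketch of the mel-spectrogram front end such a model assumes (the filter count, FFT size, and sample rate below are arbitrary choices, not taken from the thesis):

```python
import numpy as np

def mel_filterbank(n_mels=40, n_fft=512, sr=16_000):
    """Triangular filters, linearly spaced on the mel scale, mapping a
    (n_fft//2 + 1)-bin power spectrum onto n_mels mel bands."""
    hz_to_mel = lambda f: 2595.0 * np.log10(1.0 + f / 700.0)
    mel_to_hz = lambda m: 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(0.0, hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        l, c, r = bins[i], bins[i + 1], bins[i + 2]
        fb[i, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising edge
        fb[i, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling edge
    return fb

def mel_spectrogram(power_spec, fb):
    """Apply the filterbank and compress to log scale -- the usual input
    representation for a spectrogram-domain CNN."""
    return np.log(fb @ power_spec + 1e-10)

fb = mel_filterbank()
frame = np.random.default_rng(2).standard_normal(512)  # one audio frame
spec = np.abs(np.fft.rfft(frame)) ** 2
m = mel_spectrogram(spec, fb)
print(m.shape)  # (40,)
```

Going back from generated mel-spectrograms to audio is the lossy step; that is the role the MelGAN vocoder plays in the pipeline described above.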