Privacy Risks in Time Series Models: Membership Inference in Deep Learning-Based Time Series Forecasting Models
| dc.contributor.author | Johansson, Nicolas | |
| dc.contributor.author | Olsson, Tobias | |
| dc.contributor.department | Chalmers tekniska högskola / Institutionen för data och informationsteknik | sv |
| dc.contributor.department | Chalmers University of Technology / Department of Computer Science and Engineering | en |
| dc.contributor.examiner | Axelson-Fisk, Marina | |
| dc.contributor.supervisor | Sarao Mannelli, Stefano | |
| dc.date.accessioned | 2025-09-23T12:30:05Z | |
| dc.date.issued | 2025 | |
| dc.date.submitted | ||
| dc.description.abstract | As machine learning models become increasingly prevalent in time series forecasting applications—such as healthcare or energy systems—their potential to leak sensitive data, for example through membership inference attacks (MIAs), raises serious privacy concerns. While MIAs are well studied in domains like image classification, their implications for time series models remain underexplored. This thesis investigates the vulnerability of deep learning-based time series forecasting models to MIAs, focusing on univariate and multivariate prediction tasks using LSTM and N-HiTS architectures. We conduct systematic experiments on two real-world datasets—EEG signals and electricity load usage—from distinct domains, characterized by varying temporal granularity and privacy sensitivity. Several MIA techniques are evaluated, including LiRA and RMIA—originally state-of-the-art in other data modalities and adapted here for time series—as well as an ensemble-based attack previously shown to be effective for time series models. In addition, we introduce two new attacks: (i) Multi-Signal LiRA, a multi-signal variant of LiRA that can leverage a diverse set of time series-specific signals such as trend, seasonality, and TS2Vec embeddings; and (ii) the Deep Time Series (DTS) attack, a novel deep learning-based MIA tailored specifically to time series forecasting. Performance is assessed across both sample-level and user-level inference scenarios. Our findings reveal key patterns in MIA performance relative to model type, dataset characteristics, and attack signals, providing insight into how various attack settings affect privacy leakage in time series domains. These results underscore the need for robust privacy auditing tools and the integration of privacy-preserving techniques when working with sensitive temporal data, while also providing guidance for future research on privacy risks in time series machine learning models. | |
| dc.identifier.coursecode | DATX05 | |
| dc.identifier.uri | http://hdl.handle.net/20.500.12380/310510 | |
| dc.language.iso | eng | |
| dc.setspec.uppsok | Technology | |
| dc.subject | machine learning | |
| dc.subject | deep learning | |
| dc.subject | membership inference attacks | |
| dc.subject | time series forecasting | |
| dc.subject | data privacy | |
| dc.subject | EEG | |
| dc.subject | electricity load | |
| dc.subject | privacy auditing | |
| dc.title | Privacy Risks in Time Series Models: Membership Inference in Deep Learning-Based Time Series Forecasting Models | |
| dc.type.degree | Examensarbete för masterexamen | sv |
| dc.type.degree | Master's Thesis | en |
| dc.type.uppsok | H | |
| local.programme | Data science and AI (MPDSC), MSc |
