Privacy Risks in Time Series Models: Membership Inference in Deep Learning-Based Time Series Forecasting Models

dc.contributor.author: Johansson, Nicolas
dc.contributor.author: Olsson, Tobias
dc.contributor.department (sv): Chalmers tekniska högskola / Institutionen för data och informationsteknik
dc.contributor.department (en): Chalmers University of Technology / Department of Computer Science and Engineering
dc.contributor.examiner: Axelson-Fisk, Marina
dc.contributor.supervisor: Sarao Mannelli, Stefano
dc.date.accessioned: 2025-09-23T12:30:05Z
dc.date.issued: 2025
dc.date.submitted:
dc.description.abstract: As machine learning models become increasingly prevalent in time series forecasting applications—such as healthcare or energy systems—their potential to leak sensitive data, for example through membership inference attacks (MIAs), raises serious privacy concerns. While MIAs are well-studied in domains like image classification, their implications for time series models remain underexplored. This thesis investigates the vulnerability of deep learning-based time series forecasting models to MIAs, focusing on univariate and multivariate prediction tasks using LSTM and N-HiTS architectures. We conduct systematic experiments on two real-world datasets—EEG signals and electricity load usage—from distinct domains, characterized by varying temporal granularity and privacy sensitivity. Several MIA techniques are evaluated, including LiRA and RMIA—originally state-of-the-art in other data modalities and adapted here for time series—as well as an ensemble-based attack previously shown to be effective for time series models. In addition, we introduce two new attacks: (i) Multi-Signal LiRA, a multi-signal variant of LiRA that can leverage a diverse set of time series-specific signals such as trend, seasonality, and TS2Vec embeddings; and (ii) the Deep Time Series (DTS) attack, a novel deep learning-based MIA tailored specifically for time series forecasting. Performance is assessed across both sample-level and user-level inference scenarios. Our findings reveal key patterns in MIA performance relative to model type, dataset characteristics, and attack signals, providing insight into how various attack settings affect privacy leakage in time series domains. These findings underscore the need for robust privacy auditing tools and the integration of privacy-preserving techniques when working with sensitive temporal data, while also providing guidance for future research on privacy risks in time series machine learning models.
dc.identifier.coursecode: DATX05
dc.identifier.uri: http://hdl.handle.net/20.500.12380/310510
dc.language.iso: eng
dc.setspec.uppsok: Technology
dc.subject: machine learning
dc.subject: deep learning
dc.subject: membership inference attacks
dc.subject: time series forecasting
dc.subject: data privacy
dc.subject: EEG
dc.subject: electricity load
dc.subject: privacy auditing
dc.title: Privacy Risks in Time Series Models: Membership Inference in Deep Learning-Based Time Series Forecasting Models
dc.type.degree (sv): Examensarbete för masterexamen
dc.type.degree (en): Master's Thesis
dc.type.uppsok: H
local.programme: Data science and AI (MPDSC), MSc

Download

Original bundle
Name: CSE 25-50 NJ TO.pdf
Size: 6.12 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.35 KB
License: Item-specific license agreed upon at submission