2025 (English) Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]
Self-supervised Representation Learning (SSL) has emerged as a powerful paradigm for mitigating the dependency on labeled data by leveraging intrinsic structures within the data. While SSL has demonstrated remarkable progress, it remains constrained by two fundamental challenges: its vulnerability to distribution shifts and adversarial attacks, and its limited adaptability to domain-specific visual concepts. These challenges stem from the very nature of SSL, in which models learn representations through invariance, yet the current formulation of invariance does not explicitly account for environmental distortions or for structured variations unique to different domains. As a consequence, SSL models exhibit limited generalization in real-world scenarios, where unforeseen environmental and human-induced factors introduce variations that are not encountered during training, leading to suboptimal representation learning.
This thesis addresses these limitations by introducing a novel conceptual framework that enhances SSL robustness and domain-awareness in a modular and plug-and-play manner. The foundation of this approach lies in the realization that view generation—a core component of Joint Embedding Architecture and Method (JEAM)-based SSL—offers a natural intervention point for achieving both robustness and domain-awareness without disrupting the underlying loss objective. By systematically improving how invariance is enforced within a common conceptual JEAM-SSL framework, this work ensures that robustness against real-world perturbations and adaptability to diverse domains can be achieved as complementary aspects of the same formulation.
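To make the intervention point concrete, the sketch below (a minimal illustration, not the thesis implementation) shows a generic JEAM-style SSL training step in PyTorch in which the view generator is a pluggable callable; the encoder and the InfoNCE-style objective stay untouched when the generator is swapped. The function names, augmentations, and hyperparameters are assumptions made for illustration.

```python
# Minimal sketch of a JEAM-style SSL step with a pluggable view generator.
# Illustrative only: names, augmentations, and hyperparameters are assumptions.
import torch
import torch.nn.functional as F
from torchvision import transforms

# Any callable mapping an image batch to an augmented batch can serve as the
# view generator; robustness and domain-awareness enter through this module.
standard_views = transforms.Compose([
    transforms.RandomResizedCrop(96),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
])

def info_nce(z1, z2, temperature=0.5):
    """Symmetric InfoNCE loss over a batch of paired embeddings."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                   # (N, N) similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on diagonal
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))

def ssl_step(images, encoder, optimizer, view_generator=standard_views):
    """One SSL update; the loss objective is independent of the view generator."""
    v1, v2 = view_generator(images), view_generator(images)
    loss = info_nce(encoder(v1), encoder(v2))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```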
One of the critical challenges in real-world representation learning is perspective distortion (PD), an inevitable artifact that alters the geometric relationships within images. Since explicit correction through camera parameters is often impractical, this thesis introduces mathematically grounded transformations—Möbius Transform and Log Conformal Maps—which model the fundamental properties of PD such as non-linearity and conformality. By incorporating these transformations into the view generation process, SSL models achieve improved robustness against perspective-induced variations while retaining the flexibility of standard SSL objectives.
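As an illustration of how such transformations can enter view generation, the sketch below warps normalized image coordinates, treated as complex numbers, with a Möbius map w = (az + b)/(cz + d); the specific parameter values are assumptions rather than values from the thesis, and a log conformal map (w = log z) could be plugged in at the same point.

```python
# Illustrative Möbius-transform view generator (assumed parameterization).
# Coordinates are treated as complex numbers; the warp is conformal and
# non-linear, mimicking key properties of perspective distortion.
import torch
import torch.nn.functional as F

def mobius_view(images, a=1 + 0j, b=0 + 0.05j, c=0.05 + 0j, d=1 + 0j):
    """Warp a float image batch (N, C, H, W) with w = (a*z + b) / (c*z + d)."""
    n, _, h, w = images.shape
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, h),
                            torch.linspace(-1, 1, w), indexing="ij")
    z = torch.complex(xs, ys)                      # sampling grid as complex plane
    wz = (a * z + b) / (c * z + d)                 # Möbius map of the grid
    grid = torch.stack((wz.real, wz.imag), dim=-1).clamp(-1, 1)
    grid = grid.unsqueeze(0).repeat(n, 1, 1, 1)    # same warp for the whole batch
    return F.grid_sample(images, grid, align_corners=True)
```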
While environmental factors impact representation learning at the sensory level, SSL is also challenged by human-driven manipulations, particularly in the form of adversarial perturbations. Traditional adversarial training in self-supervised settings relies on brute-force perturbation strategies that fail to adapt dynamically to the model’s evolving representations. To overcome this, the thesis proposes a learnable adversarial attack policy, embedded within the same modular SSL framework, that allows models to refine adversarial perturbations just-in-time. By aligning the adversarial training process with the way invariance is learned, SSL models gain resilience to adversarial manipulations while maintaining their generalization capabilities.
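The single-step sketch below illustrates the general idea of adversarial view generation inside the same modular loop: one view is perturbed in the direction that increases the SSL loss before the encoder is updated for invariance. It is a simplified stand-in rather than the thesis's learnable, just-in-time attack policy; the fixed step size `eps` and the [0, 1] pixel range are assumptions.

```python
# Simplified adversarial view generation for SSL (not the thesis's learnable
# policy): a one-step, fixed-budget perturbation that maximizes the SSL loss.
import torch

def adversarial_view(encoder, ssl_loss_fn, v1, v2, eps=2.0 / 255):
    """Return v2 perturbed to increase the loss; assumes pixels lie in [0, 1]."""
    v2 = v2.clone().detach().requires_grad_(True)
    loss = ssl_loss_fn(encoder(v1), encoder(v2))
    grad, = torch.autograd.grad(loss, v2)          # gradient w.r.t. the view only
    return (v2 + eps * grad.sign()).clamp(0, 1).detach()

def robust_ssl_step(images, view_generator, encoder, ssl_loss_fn, optimizer):
    v1, v2 = view_generator(images), view_generator(images)
    v2_adv = adversarial_view(encoder, ssl_loss_fn, v1, v2)  # worst-case view
    loss = ssl_loss_fn(encoder(v1), encoder(v2_adv))         # enforce invariance
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```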
While robustness ensures that models retain meaningful representations across transformations, domain-awareness is essential for learning representations tailored to specialized datasets. The conventional augmentation schemes used in SSL are optimized for natural images and do not incorporate the domain-specific information needed to capture meaningful features in specialized settings such as medical imaging and industrial inspection. This thesis integrates domain-specific information directly into view generation, incorporating magnification factors in histopathology images and depth cues in industrial materials to guide SSL models toward more meaningful representations. By maintaining SSL’s plug-and-play modularity, domain-awareness is integrated seamlessly into the learning process without requiring extensive changes to the underlying framework.
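One possible reading of magnification-aware view generation is sketched below for illustration: positive pairs are built from the same histopathology patch seen at two simulated magnification factors, so invariance is learned along a domain-specific axis rather than only across generic photometric changes. The cropping-and-resizing scheme and the factor values are assumptions, not the thesis's implementation.

```python
# Illustrative magnification-aware view generation (assumed scheme): simulate
# two magnification factors of one histopathology patch by center-cropping a
# fraction of the field of view and resizing back to a common resolution.
import torch
import torch.nn.functional as F

def magnification_views(patch, factors=(1.0, 0.5), out_size=224):
    """Return two views of one (C, H, W) patch at different simulated scales."""
    views = []
    _, h, w = patch.shape
    for f in factors:
        ch, cw = int(h * f), int(w * f)
        top, left = (h - ch) // 2, (w - cw) // 2
        crop = patch[:, top:top + ch, left:left + cw]
        views.append(F.interpolate(crop.unsqueeze(0), size=(out_size, out_size),
                                   mode="bilinear", align_corners=False)[0])
    return views[0], views[1]
```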
Beyond robustness and domain-awareness, SSL’s capacity to generalize under limited data availability is another crucial aspect of its practical utility. While the contrastive loss formulation is inherently domain-agnostic, its effectiveness often depends on large-scale data. To address this, the thesis explores functional knowledge transfer, in which self-supervised and supervised learning are jointly optimized rather than treated as sequential tasks. This joint optimization enables SSL representations to adapt dynamically to supervised objectives, improving generalization in data-scarce regimes while preserving the advantages of label-free pre-training.
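A minimal sketch of joint optimization, assuming a shared encoder with a supervised classification head, is given below; the loss weighting `alpha` and the cross-entropy head are illustrative assumptions rather than the thesis's exact formulation.

```python
# Minimal sketch of functional knowledge transfer as joint optimization: the
# self-supervised and supervised objectives share one encoder and are minimized
# together in each step instead of pretrain-then-finetune stages.
import torch
import torch.nn.functional as F

def joint_step(images, labels, view_generator, encoder, classifier,
               ssl_loss_fn, optimizer, alpha=0.5):
    v1, v2 = view_generator(images), view_generator(images)
    ssl_loss = ssl_loss_fn(encoder(v1), encoder(v2))         # label-free term
    sup_loss = F.cross_entropy(classifier(encoder(images)), labels)
    loss = alpha * ssl_loss + (1 - alpha) * sup_loss         # single backward pass
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```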
By advancing view generation within a unified SSL framework, this thesis establishes a structured and scalable foundation for making self-supervised learning both robust and domain-aware. The proposed methodologies significantly enhance SSL’s ability to operate in real-world scenarios, where distribution shifts, adversarial threats, and domain complexities are inevitable. In doing so, this work lays the groundwork for future advancements in adaptive, generalizable, and structured self-supervised representation learning.
Place, publisher, year, edition, pages
Luleå tekniska universitet, 2025
Series
Doctoral thesis / Luleå University of Technology, ISSN 1402-1544
Keywords
Self-supervised Representation Learning, Representation Learning, Robustness, Domain-aware, Perspective Distortion, Adversarial Attacks, Medical Imaging, Computer Vision
National Category
Computer Vision and Learning Systems; Artificial Intelligence
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111571 (URN); 978-91-8048-761-0 (ISBN); 978-91-8048-762-7 (ISBN)
Public defence
2025-04-08, A-117, Luleå University of Technology, Luleå, 09:00 (English)
Opponent
Supervisors
2025-02-07 Bibliographically approved