Chhipa, Prakash Chandra (ORCID: orcid.org/0000-0002-6903-7552)
Publications (10 of 21)
Chhipa, P. C., Vashishtha, G., Anantha sai Settur, J., Saini, R., Shah, M. & Liwicki, M. (2025). ASTrA: Adversarial Self-supervised Training with Adaptive-Attacks. Paper presented at the International Conference on Learning Representations (ICLR) 2025.
ASTrA: Adversarial Self-supervised Training with Adaptive-Attacks
2025 (English). In: ASTrA: Adversarial Self-supervised Training with Adaptive-Attacks, 2025. Conference paper, Published paper (Refereed)
Abstract [en]

Existing self-supervised adversarial training (self-AT) methods rely on hand-crafted adversarial attack strategies for PGD attacks, which fail to adapt to the evolving learning dynamics of the model and do not account for instance-specific characteristics of images. This results in sub-optimal adversarial robustness and limits the alignment between clean and adversarial data distributions. To address this, we propose ASTrA (Adversarial Self-supervised Training with Adaptive-Attacks), a novel framework introducing a learnable, self-supervised attack strategy network that autonomously discovers optimal attack parameters through exploration-exploitation in a single training episode. ASTrA leverages a reward mechanism based on contrastive loss, optimized with REINFORCE, enabling adaptive attack strategies without labeled data or additional hyperparameters. We further introduce a mixed contrastive objective to align the distribution of clean and adversarial examples in representation space. ASTrA achieves state-of-the-art results on CIFAR10, CIFAR100, and STL10 while integrating seamlessly as a plug-and-play module for other self-AT methods. ASTrA shows scalability to larger datasets, demonstrates strong semi-supervised performance, and is resilient to robust overfitting, backed by explainability analysis on optimal attack strategies. Project page for source code and other details at https://prakashchhipa.github.io/projects/ASTrA.
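As a rough illustration of the adaptive-attack idea described above (not the paper's implementation), the sketch below keeps softmax logits over a few candidate PGD budgets and updates them with the expectation form of the REINFORCE gradient so the toy run is deterministic. The per-budget rewards stand in for the contrastive loss of the adversarial views and are illustrative assumptions.

```python
import numpy as np

# Rough sketch (not the paper's implementation) of a learnable attack
# strategy: softmax logits over candidate PGD budgets, updated with the
# expectation form of the REINFORCE gradient. The per-budget rewards
# stand in for the contrastive loss and are illustrative assumptions.

candidate_eps = [2 / 255, 4 / 255, 8 / 255]   # candidate attack budgets
reward = np.array([0.1, 0.5, 0.3])            # stub rewards per budget
theta = np.zeros(3)                           # strategy logits

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

for _ in range(500):
    p = softmax(theta)
    # E[r * grad log pi] for a softmax policy: p*r - (p.r)*p
    grad = p * reward - (p @ reward) * p
    theta += 1.0 * grad                       # ascend the expected reward

best = int(np.argmax(theta))                  # strategy concentrates on
print(candidate_eps[best])                    # the best-rewarded budget
```

In the full method this loop would be coupled to training: the reward is recomputed from the SSL loss on adversarial views at each episode, so the preferred budget can shift as the model's learning dynamics evolve.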

National Category
Computer Vision and Learning Systems; Artificial Intelligence
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111564 (URN)
Conference
International Conference on Learning Representations (ICLR) 2025
Available from: 2025-02-07 Created: 2025-02-07 Last updated: 2025-04-11
Chippa, M. S., Chhipa, P. C., De, K., Liwicki, M. & Saini, R. (2025). LCM: Log Conformal Maps for Robust Representation Learning to Mitigate Perspective Distortion. In: Minsu Cho; Ivan Laptev; Du Tran; Angela Yao; Hongbin Zha (Ed.), Computer Vision – ACCV 2024: 17th Asian Conference on Computer Vision, Hanoi, Vietnam, December 8–12, 2024, Proceedings, Part VIII. Paper presented at 17th Asian Conference on Computer Vision (ACCV 2024), Hanoi, Vietnam, December 8-12, 2024 (pp. 175-191). Springer Nature
LCM: Log Conformal Maps for Robust Representation Learning to Mitigate Perspective Distortion
2025 (English). In: Computer Vision – ACCV 2024: 17th Asian Conference on Computer Vision, Hanoi, Vietnam, December 8–12, 2024, Proceedings, Part VIII / [ed] Minsu Cho; Ivan Laptev; Du Tran; Angela Yao; Hongbin Zha, Springer Nature, 2025, p. 175-191. Conference paper, Published paper (Refereed)
Abstract [en]

Perspective distortion (PD) leads to substantial alterations in the shape, size, orientation, angles, and spatial relationships of visual elements in images. Accurately determining camera intrinsic and extrinsic parameters is challenging, making it hard to synthesize perspective distortion effectively. Current distortion correction methods first remove distortion and then learn the vision task, making it a multi-step process that often compromises performance. Recent work leverages the Möbius transform for mitigating perspective distortion (MPD) to synthesize perspective distortions without estimating camera parameters. However, the Möbius transform requires tuning multiple interdependent and interrelated parameters and involves complex arithmetic operations, leading to substantial computational complexity. To address these challenges, we propose Log Conformal Maps (LCM), a method leveraging the logarithmic function to approximate perspective distortions with fewer parameters and reduced computational complexity. We provide a detailed foundation, complemented with experiments, to demonstrate that LCM approximates MPD with fewer parameters. We show that LCM integrates well with supervised and self-supervised representation learning, outperforms standard models, and matches state-of-the-art performance in mitigating perspective distortion over multiple benchmarks, namely ImageNet-PD, ImageNet-E, and ImageNet-X. Further, LCM demonstrates seamless integration with person re-identification and improves its performance. Source code is made publicly available at https://github.com/meenakshi23/Log-Conformal-Maps.
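The core idea of using the logarithm as a conformal map can be sketched in a few lines. This is illustrative only and does not reproduce the paper's exact LCM parameterization (offsets, scaling, or its coupling to view generation):

```python
import numpy as np

# Illustrative only: pixel coordinates treated as complex numbers and
# warped by the complex logarithm, a conformal (angle-preserving) map.
# This conveys the idea behind LCM but not the paper's parameterization.

h, w = 4, 4
ys, xs = np.mgrid[1:h + 1, 1:w + 1]       # start at 1: log(0) is singular
z = xs + 1j * ys                          # grid as complex coordinates
wz = np.log(z)                            # log conformal map of the grid

# Conformality check: the map has a complex derivative, d(log z)/dz = 1/z.
dz = 1e-6
num_deriv = (np.log(z + dz) - np.log(z)) / dz
print(np.allclose(num_deriv, 1 / z, atol=1e-4))
```

Because a single complex logarithm replaces the four interdependent Möbius coefficients, the warp needs fewer tunable parameters, which is the efficiency argument the abstract makes.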

Place, publisher, year, edition, pages
Springer Nature, 2025
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 15479
Keywords
Perspective Distortion, Robust Representation Learning, Self-supervised Learning
National Category
Computer graphics and computer vision
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111235 (URN); 10.1007/978-981-96-0966-6_11 (DOI); 2-s2.0-85212922792 (Scopus ID)
Conference
17th Asian Conference on Computer Vision (ACCV 2024), Hanoi, Vietnam, December 8-12, 2024
Funder
Knut and Alice Wallenberg Foundation
Note

ISBN for host publication: 978-981-96-0965-9

Available from: 2025-01-08 Created: 2025-01-08 Last updated: 2025-02-07. Bibliographically approved
Chhipa, P. C., Chippa, M. S., De, K., Saini, R., Liwicki, M. & Shah, M. (2025). Möbius Transform for Mitigating Perspective Distortions in Representation Learning. In: Aleš Leonardis; Elisa Ricci; Stefan Roth; Olga Russakovsky; Torsten Sattler; Gül Varol (Ed.), Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part LXXIII. Paper presented at 18th European Conference on Computer Vision (ECCV 2024), Milano, Italy, September 29 - October 4, 2024 (pp. 345-363). Springer Science and Business Media Deutschland GmbH
Möbius Transform for Mitigating Perspective Distortions in Representation Learning
2025 (English). In: Computer Vision – ECCV 2024: 18th European Conference, Milan, Italy, September 29–October 4, 2024, Proceedings, Part LXXIII / [ed] Aleš Leonardis; Elisa Ricci; Stefan Roth; Olga Russakovsky; Torsten Sattler; Gül Varol, Springer Science and Business Media Deutschland GmbH, 2025, p. 345-363. Conference paper, Published paper (Refereed)
Abstract [en]

Perspective distortion (PD) causes unprecedented changes in shape, size, orientation, angles, and other spatial relationships of visual concepts in images. Precisely estimating camera intrinsic and extrinsic parameters is challenging, which prevents synthesizing perspective distortion directly, and the non-availability of dedicated training data poses a critical barrier to developing robust computer vision methods. Additionally, distortion correction methods turn other computer vision tasks into multi-step approaches and compromise performance. In this work, we propose mitigating perspective distortion (MPD) by employing fine-grained parameter control on a specific family of Möbius transforms to model real-world distortion without estimating camera intrinsic and extrinsic parameters and without the need for actual distorted data. We also present a dedicated perspectively distorted benchmark dataset, ImageNet-PD, to benchmark the robustness of deep learning models against perspective distortion. The proposed method improves performance on the existing benchmarks ImageNet-E and ImageNet-X, significantly improves performance on ImageNet-PD, and performs consistently on the standard data distribution. Notably, our method shows improved performance on three PD-affected real-world applications—crowd counting, fisheye image recognition, and person re-identification—and one PD-affected challenging CV task: object detection. The source code, dataset, and models are available on the project webpage at https://prakashchhipa.github.io/projects/mpd.
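For intuition, the Möbius family the abstract refers to can be sketched as a complex-valued warp of an image coordinate grid. The coefficients below are arbitrary illustrative choices, not the paper's fine-grained parameter control scheme:

```python
import numpy as np

# Illustrative only: a Möbius transform f(z) = (a z + b) / (c z + d)
# applied to a coordinate grid, the family of maps MPD draws
# perspective-like distortions from. Coefficients are arbitrary here.

a, b, c, d = 1.0, 0.0, 0.1j, 1.0          # valid: a*d - b*c != 0
assert a * d - b * c != 0

def mobius(z):
    return (a * z + b) / (c * z + d)

ys, xs = np.mgrid[0:4, 0:4]
z = xs + 1j * ys                           # grid as complex coordinates
wz = mobius(z)                             # warped grid

# Möbius maps are invertible: f_inv(w) = (d*w - b) / (-c*w + a).
z_back = (d * wz - b) / (-c * wz + a)
print(np.allclose(z_back, z))
```

The closed-form inverse is what makes the family convenient for synthesizing distorted views: warped coordinates can always be mapped back, so no camera parameters or real distorted data are needed.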

Place, publisher, year, edition, pages
Springer Science and Business Media Deutschland GmbH, 2025
Series
Lecture Notes in Computer Science (LNCS), ISSN 0302-9743, E-ISSN 1611-3349 ; 15131
Keywords
Perspective Distortion, Self-supervised Learning, Robust Representation Learning
National Category
Computer graphics and computer vision
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111233 (URN); 10.1007/978-3-031-73464-9_21 (DOI); 2-s2.0-85212279211 (Scopus ID)
Conference
18th European Conference on Computer Vision (ECCV 2024), Milano, Italy, September 29 - October 4, 2024
Funder
Knut and Alice Wallenberg Foundation
Note

ISBN for host publication: 978-3-031-73463-2, 978-3-031-73464-9

Available from: 2025-01-08 Created: 2025-01-08 Last updated: 2025-02-07. Bibliographically approved
Chhipa, P. C. (2025). Towards Robust and Domain-aware Self-supervised Representation Learning. (Doctoral dissertation). Luleå tekniska universitet
Towards Robust and Domain-aware Self-supervised Representation Learning
2025 (English). Doctoral thesis, comprehensive summary (Other academic)
Abstract [en]

Self-supervised representation learning (SSL) has emerged as a fundamental paradigm in representation learning, enabling models to learn meaningful representations without requiring labeled data. Despite its success, SSL remains constrained by two core challenges: (i) lack of robustness against real-world distribution shifts and adversarial perturbations, and (ii) lack of domain-awareness, limiting its usability beyond natural scenes. These limitations arise from the generic invariance assumptions in SSL, which rely on predefined augmentations to learn representations but fail to generalize when exposed to unseen environmental distortions, adversarial attacks, and domain-specific nuances. Existing SSL approaches—whether contrastive learning, knowledge distillation, or information maximization—do not explicitly account for these factors, making them vulnerable in real-world applications and suboptimal in specialized domains.

This thesis aims to enhance both robustness and domain-awareness in a modular, plug-and-play manner, ensuring that the advancements are applicable across different joint embedding architecture and method (JEAM)-based SSL approaches and adaptable to future developments in SSL. To achieve this, the thesis follows a guiding principle: leveraging invariant representations to improve robustness and domain-awareness without altering fundamental SSL objectives. This principle ensures that improvements can be seamlessly integrated into existing and future SSL approaches.

To systematically address the above-stated core challenges, this thesis begins with a foundational study of SSL approaches, identifying the common schema that underlies them. This unification provides a conceptual view of SSL methods, allowing us to isolate the domain-sensitive and domain-agnostic components across approaches. This conceptual outcome sets the stage for establishing precisely where improvements are needed to enhance robustness and domain-awareness across methods, given that current SSL methods fail under real-world challenges.

Next, the thesis conducts a large-scale empirical evaluation of existing SSL methods against relevant robustness benchmarks, uncovering their failures under distribution shifts caused by real-world environmental challenges. This evaluation reveals a significant decline in the robustness performance of existing SSL methods across different SSL approaches. It establishes the fundamental research gap and motivates the advancements introduced in this thesis.

The first advancement focuses on robustness against distribution shifts, particularly geometric distortions such as perspective distortion (PD), which are prevalent in real-world environments but not addressed by existing SSL methods. Since PD introduces nonlinear spatial transformations, standard affine augmentations fail to model these effects, leading to degraded representation stability. To address this, the thesis introduces Möbius-based mitigating perspective distortion (MPD) and log conformal maps (LCM), mathematically grounded transformations that enable robustness without requiring perspective-distorted training data or estimation of camera parameters. These methods are additionally adapted to multiple real-world computer vision applications—including crowd counting, object detection, person re-identification, and fisheye view recognition—showcasing their effectiveness. Further, addressing the non-availability of a dedicated perspectively distorted benchmark, the ImageNet-PD robustness benchmark is developed to fill the gap.

Beyond environmental challenges, another critical real-world challenge is adversarial attacks. SSL methods are highly susceptible to adversarial attacks, as the learned representations lack perturbation-invariant constraints. Existing adversarial training approaches in SSL rely on brute-force attack strategies, which fail to adapt dynamically. To address this, this thesis introduces adversarial self-supervised training with adaptive-attacks (ASTrA), where attack strategies evolve dynamically based on the model’s learning dynamics and establish a correspondence between attack parameters and training examples, optimizing adversarial perturbations in a learnable manner. Unlike conventional adversarial training, ASTrA ensures robustness while maintaining SSL’s efficiency and scalability.

While robustness, in this thesis, concerns real-world challenges in natural scenes, domain-awareness concerns specialized visual domains beyond them. Standard SSL augmentations are designed for variations in natural scenes, making them ill-suited for specialized fields such as medical imaging and industrial mining material inspection. This thesis introduces domain-awareness in SSL by incorporating domain-specific information into SSL's view generation process. In particular, (i) magnification prior contrastive similarity (MPCS) makes learned representations invariant to magnification for histopathology images by inducing varying magnifications in the view generation process, improving breast cancer recognition; (ii) depth contrast explicitly enforces modality alignment between material images and the attained height of materials on a conveyor belt, ensuring that the learned representations become aware of physical properties and thereby improving material classification.

Beyond robustness and domain-awareness, SSL’s ability to generalize with limited data is advantageous for its practicality. While the loss objective in SSL is generally domain-agnostic, its effectiveness relies on large-scale data. In this direction, this thesis explores functional knowledge transfer (FKT), where self-supervised and supervised learning objectives are jointly optimized, enabling SSL representations to adapt dynamically to supervised tasks. This approach enhances generalization in low-data regimes.
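The joint optimization described above can be caricatured with a single shared parameter receiving gradients from both objectives in the same update. The data points, losses, and weighting factor lam below are illustrative assumptions, not details from the thesis:

```python
# Toy illustration (not from the thesis): one shared "encoder" weight w
# receives gradients from a supervised squared-error loss and a
# self-supervised view-consistency loss in the same update.

w = 0.0                       # shared encoder parameter
x, y = 2.0, 3.0               # one labeled example
x_v1, x_v2 = 1.9, 2.1         # two augmented views of an unlabeled example
lam, lr = 0.5, 0.05           # SSL weight and learning rate

for _ in range(200):
    sup_grad = 2 * (w * x - y) * x                        # d/dw (w*x - y)^2
    ssl_grad = 2 * (w * x_v1 - w * x_v2) * (x_v1 - x_v2)  # view consistency
    w -= lr * (sup_grad + lam * ssl_grad)                 # joint update

print(round(w, 2))            # → 1.49
```

The point of the sketch is that the SSL term acts on the same parameters at the same time as the supervised term, rather than in a separate pretraining phase, which is what lets the representation adapt to the supervised task under limited labels.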

In conclusion, this thesis provides a foundation for robust and domain-aware self-supervised representation learning in a modular manner, highlighting its applicability to existing and future JEAM-based SSL approaches, which can inherit these advancements and adapt to emerging challenges.

Place, publisher, year, edition, pages
Luleå tekniska universitet, 2025
Series
Doctoral thesis / Luleå University of Technology, ISSN 1402-1544
Keywords
Self-supervised Representation Learning, Representation Learning, Robustness, Domain-aware, Perspective Distortion, Adversarial Attacks, Medical Imaging, Computer Vision
National Category
Computer Vision and Learning Systems; Artificial Intelligence
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111571 (URN); 978-91-8048-761-0 (ISBN); 978-91-8048-762-7 (ISBN)
Public defence
2025-04-08, C305, Luleå University of Technology, Luleå, 09:00 (English)
Available from: 2025-02-07 Created: 2025-02-07 Last updated: 2025-03-13. Bibliographically approved
Saini, R., Upadhyay, R., Gupta, V., Chhipa, P. C., Rakesh, S., Mokayed, H., . . . Das Chakladar, D. (2024). An EEG Analysis Framework for Brain Disorder Classification Using Convolved Connectivity Features. In: 2024 9th International Conference on Frontiers of Signal Processing (ICFSP 2024). Paper presented at 9th International Conference on Frontiers of Signal Processing (ICFSP 2024), Paris, France, September 12-14, 2024 (pp. 158-162). Institute of Electrical and Electronics Engineers Inc.
An EEG Analysis Framework for Brain Disorder Classification Using Convolved Connectivity Features
2024 (English). In: 2024 9th International Conference on Frontiers of Signal Processing (ICFSP 2024), Institute of Electrical and Electronics Engineers Inc., 2024, p. 158-162. Conference paper, Published paper (Refereed)
Abstract [en]

Electroencephalography (EEG) is a fundamental tool in the non-invasive evaluation of brain activity, providing insights into the intricate dynamics at play within neurodegenerative disorders. Conventional methodologies often fail to effectively capture the temporal as well as intricate intra- and inter-channel dynamics, leading to diminished predictive accuracy. To address this problem, we present an innovative framework that effectively captures temporal along with intra- and inter-channel dynamics for EEG analysis aimed at predicting neurodegenerative disorders, explicitly targeting Alzheimer's and dementia. The proposed method involves constructing aggregated recurrence matrices from EEG channels followed by kernel formation and convolution operations, effectively encapsulating intra- and inter-channel spatiotemporal patterns, thereby achieving a more comprehensive representation of neural dynamics. The proposed approach was validated using public datasets, revealing competitive performance. Implementation details and code will be accessible on GitHub.
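A minimal stub of the pipeline the abstract describes might look as follows: per-channel binary recurrence matrices from thresholded pairwise distances, aggregated over channels, then convolved with a small kernel. The threshold, kernel, and random data are illustrative stand-ins, not the paper's settings:

```python
import numpy as np

# Illustrative stub: per-channel recurrence matrices (thresholded
# pairwise distances), aggregated over channels, then convolved with a
# small kernel to expose local spatiotemporal structure. Threshold,
# kernel, and data are arbitrary choices, not the paper's settings.

rng = np.random.default_rng(42)
channels = rng.standard_normal((3, 50))    # 3 toy EEG channels, 50 samples

def recurrence(sig, eps=0.5):
    d = np.abs(sig[:, None] - sig[None, :])   # pairwise sample distances
    return (d < eps).astype(float)            # binary recurrence matrix

agg = sum(recurrence(ch) for ch in channels) / len(channels)

kernel = np.ones((3, 3)) / 9.0             # simple averaging kernel
out = np.zeros((48, 48))
for i in range(48):                        # "valid" 2D convolution
    for j in range(48):
        out[i, j] = (agg[i:i + 3, j:j + 3] * kernel).sum()

print(out.shape)
```

The convolved map `out` is the kind of feature a downstream classifier would consume; in practice the aggregation and kernel formation would follow the paper's released implementation.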

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2024
Keywords
Alzheimer’s, Dementia, Electroencephalography (EEG), Brain signals, Convolution, Machine Learning
National Category
Neurosciences; Computer Sciences
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-111522 (URN); 10.1109/ICFSP62546.2024.10785421 (DOI); 2-s2.0-85215675831 (Scopus ID)
Conference
9th International Conference on Frontiers of Signal Processing (ICFSP 2024), Paris, France, September 12-14, 2024
Funder
Promobilia foundation
Note

ISBN for host publication: 979-8-3503-5323-5

Available from: 2025-02-04 Created: 2025-02-04 Last updated: 2025-03-12. Bibliographically approved
Mishra, A. R., Kumar, R., Gupta, V., Prabhu, S., Upadhyay, R., Chhipa, P. C., . . . Saini, R. (2024). SignEEG v1.0: Multimodal Dataset with Electroencephalography and Hand-written Signature for Biometric Systems. Scientific Data, 11, Article ID 718.
SignEEG v1.0: Multimodal Dataset with Electroencephalography and Hand-written Signature for Biometric Systems
2024 (English). In: Scientific Data, E-ISSN 2052-4463, Vol. 11, article id 718. Article in journal (Refereed), Published
Abstract [en]

Handwritten signatures in biometric authentication leverage unique individual characteristics for identification, offering high specificity through dynamic and static properties. However, this modality faces significant challenges from sophisticated forgery attempts, underscoring the need for enhanced security measures in common applications. To address forgery in signature-based biometric systems, integrating a forgery-resistant modality, namely noninvasive electroencephalography (EEG), which captures unique brain activity patterns, can significantly enhance system robustness. By combining EEG, a physiological modality, with handwritten signatures, a behavioral modality, our approach capitalizes on the strengths of both, significantly fortifying the robustness of biometric systems through multimodal integration. In addition, EEG's resistance to replication offers a high security level, making it a robust addition to user identification and verification. This study presents a new multimodal SignEEG v1.0 dataset based on EEG and hand-drawn signatures from 70 subjects. EEG signals and hand-drawn signatures have been collected with Emotiv Insight and Wacom One sensors, respectively. The multimodal data consists of three paradigms based on mental and motor imagery and physical execution: (i) thinking of the signature's image, (ii) drawing the signature mentally, and (iii) drawing the signature physically. Extensive experiments have been conducted to establish a baseline with machine learning classifiers. The results demonstrate that multimodality in biometric systems significantly enhances robustness, achieving high reliability even with limited sample sizes. We release the raw and pre-processed data along with easy-to-follow implementation details.

Place, publisher, year, edition, pages
Nature Research, 2024
National Category
Computer Sciences; Signal Processing
Research subject
Operation and Maintenance Engineering; Machine Learning
Identifiers
urn:nbn:se:ltu:diva-108479 (URN); 10.1038/s41597-024-03546-z (DOI); 001261561300002 (); 38956046 (PubMedID); 2-s2.0-85197457964 (Scopus ID)
Note

Validated; 2024; Level 2; 2024-08-07 (hanlid);

Full text license: CC BY

Available from: 2024-08-07 Created: 2024-08-07 Last updated: 2025-02-05. Bibliographically approved
Chhipa, P. C., Rodahl Holmgren, J., De, K., Saini, R. & Liwicki, M. (2023). Can Self-Supervised Representation Learning Methods Withstand Distribution Shifts and Corruptions? In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023). Paper presented at IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023), Paris, France, October 2-6, 2023 (pp. 4469-4478). Institute of Electrical and Electronics Engineers Inc.
Can Self-Supervised Representation Learning Methods Withstand Distribution Shifts and Corruptions?
2023 (English). In: 2023 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023), Institute of Electrical and Electronics Engineers Inc., 2023, p. 4469-4478. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2023
National Category
Computer graphics and computer vision; Computer Sciences
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-103984 (URN); 10.1109/ICCVW60793.2023.00481 (DOI); 001156680304060 (); 2-s2.0-85182928560 (Scopus ID)
Conference
IEEE/CVF International Conference on Computer Vision Workshops (ICCVW 2023), Paris, France, October 2-6, 2023
Note

ISBN for host publication: 979-8-3503-0745-0

Available from: 2024-01-29 Created: 2024-01-29 Last updated: 2025-02-07
Chopra, M., Chhipa, P. C., Mengi, G., Gupta, V. & Liwicki, M. (2023). Domain Adaptable Self-supervised Representation Learning on Remote Sensing Satellite Imagery. In: IJCNN 2023 - International Joint Conference on Neural Networks, Conference Proceedings. Paper presented at 2023 International Joint Conference on Neural Networks, IJCNN 2023, Gold Coast, Australia, June 18-23, 2023. Institute of Electrical and Electronics Engineers Inc.
Domain Adaptable Self-supervised Representation Learning on Remote Sensing Satellite Imagery
2023 (English). In: IJCNN 2023 - International Joint Conference on Neural Networks, Conference Proceedings, Institute of Electrical and Electronics Engineers Inc., 2023. Conference paper, Published paper (Refereed)
Abstract [en]

This work presents a novel domain adaptation paradigm for studying contrastive self-supervised representation learning and knowledge transfer using remote sensing satellite data. Major state-of-the-art efforts in the remote sensing visual domain primarily focus on fully supervised learning approaches that rely entirely on human annotations. On the other hand, human annotations in remote sensing satellite imagery are always limited in quantity due to high costs and the domain expertise required, making transfer learning a viable alternative. The proposed approach investigates in depth the knowledge transfer of self-supervised representations across distinct source and target data distributions in the remote sensing domain. In this arrangement, self-supervised contrastive pretraining is performed on the source dataset, and downstream tasks are performed on the target datasets in a round-robin fashion. Experiments are conducted on three publicly available datasets, UC Merced Landuse (UCMD), SIRI-WHU, and MLRSNet, for different downstream classification tasks versus label efficiency. In self-supervised knowledge transfer, the proposed approach achieves state-of-the-art performance with high label efficiency and outperforms a fully supervised setting. A more in-depth qualitative examination reveals consistent evidence for explainable representation learning. The source code and trained models are published on GitHub.
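A minimal NT-Xent sketch illustrates the kind of contrastive objective such pretraining relies on. The embeddings below are random stand-ins and the temperature is an arbitrary choice, not values from the paper:

```python
import numpy as np

# Minimal NT-Xent (normalized-temperature cross entropy) sketch, the
# kind of contrastive objective used in SSL pretraining. Embeddings are
# random stand-ins; temperature tau is an arbitrary choice.

rng = np.random.default_rng(0)

def nt_xent(z1, z2, tau=0.5):
    z = np.concatenate([z1, z2])                        # 2N embeddings
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize
    sim = z @ z.T / tau                                 # scaled cosine sims
    np.fill_diagonal(sim, -np.inf)                      # drop self-pairs
    n = len(z1)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logprob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -logprob[np.arange(2 * n), pos].mean()       # pull views together

z1 = rng.standard_normal((8, 16))                       # view-1 embeddings
z2 = rng.standard_normal((8, 16))                       # view-2 embeddings
loss = nt_xent(z1, z2)
print(loss > 0)
```

In the transfer setup described above, an encoder minimizing this loss on the source dataset would then be evaluated on the target datasets' classification tasks.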

Place, publisher, year, edition, pages
Institute of Electrical and Electronics Engineers Inc., 2023
Series
Proceedings of the International Joint Conference on Neural Networks, ISSN 2161-4393, E-ISSN 2161-4407
Keywords
contrastive learning, domain adaptation, remote sensing, representation learning, satellite image, self-supervised learning
National Category
Computer Sciences; Signal Processing
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-101307 (URN); 10.1109/IJCNN54540.2023.10191249 (DOI); 001046198701085 (); 2-s2.0-85169612572 (Scopus ID); 978-1-6654-8868-6 (ISBN); 978-1-6654-8867-9 (ISBN)
Conference
2023 International Joint Conference on Neural Networks, IJCNN 2023, Gold Coast, Australia, June 18-23, 2023
Available from: 2023-09-12 Created: 2023-09-12 Last updated: 2024-03-07. Bibliographically approved
Rakesh, S., Liwicki, F., Mokayed, H., Upadhyay, R., Chhipa, P. C., Gupta, V., . . . Saini, R. (2023). Emotions Classification Using EEG in Health Care. In: Tistarelli, Massimo; Dubey, Shiv Ram; Singh, Satish Kumar; Jiang, Xiaoyi (Ed.), Computer Vision and Machine Intelligence: Proceedings of CVMI 2022. Paper presented at International Conference on Computer Vision & Machine Intelligence (CVMI), Allahabad, Prayagraj, India, August 12-13, 2022 (pp. 37-49). Springer Nature
Emotions Classification Using EEG in Health Care
2023 (English). In: Computer Vision and Machine Intelligence: Proceedings of CVMI 2022 / [ed] Tistarelli, Massimo; Dubey, Shiv Ram; Singh, Satish Kumar; Jiang, Xiaoyi, Springer Nature, 2023, p. 37-49. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
Springer Nature, 2023
Series
Lecture Notes in Networks and Systems (LNNS) ; 586
National Category
Computer Sciences
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-98587 (URN); 10.1007/978-981-19-7867-8_4 (DOI); 2-s2.0-85161601282 (Scopus ID)
Conference
International Conference on Computer Vision & Machine Intelligence (CVMI), Allahabad, Prayagraj, India, August 12-13, 2022
Note

ISBN for host publication: 978-981-19-7866-1, 978-981-19-7867-8

Available from: 2023-06-19 Created: 2023-06-19 Last updated: 2025-02-05. Bibliographically approved
Chhipa, P. C., Chopra, M., Mengi, G., Gupta, V., Upadhyay, R., Chippa, M. S., . . . Liwicki, M. (2023). Functional Knowledge Transfer with Self-supervised Representation Learning. In: 2023 IEEE International Conference on Image Processing: Proceedings. Paper presented at 30th IEEE International Conference on Image Processing, ICIP 2023, October 8-11, 2023, Kuala Lumpur, Malaysia (pp. 3339-3343). IEEE
Functional Knowledge Transfer with Self-supervised Representation Learning
2023 (English). In: 2023 IEEE International Conference on Image Processing: Proceedings, IEEE, 2023, p. 3339-3343. Conference paper, Published paper (Refereed)
Place, publisher, year, edition, pages
IEEE, 2023
Series
Proceedings - International Conference on Image Processing, ISSN 1522-4880
National Category
Computer graphics and computer vision; Computer Sciences
Research subject
Machine Learning
Identifiers
urn:nbn:se:ltu:diva-103659 (URN); 10.1109/ICIP49359.2023.10222142 (DOI); 001106821003077 (); 2-s2.0-85180766253 (Scopus ID); 978-1-7281-9835-4 (ISBN); 978-1-7281-9836-1 (ISBN)
Conference
30th IEEE International Conference on Image Processing, ICIP 2023, October 8-11, 2023, Kuala Lumpur, Malaysia
Available from: 2024-01-15 Created: 2024-01-15 Last updated: 2025-02-07. Bibliographically approved