Integrating Neural Networks and Particle Image Velocimetry for Advanced Digital Twins in Experimental Fluid Mechanics
Luleå University of Technology, Department of Engineering Sciences and Mathematics, Fluid and Experimental Mechanics. ORCID iD: 0009-0005-5670-2022
2024 (English) Licentiate thesis, comprehensive summary (Other academic)
Abstract [en]

Digital twins are revolutionizing industries by leveraging machine learning and data-driven models to create dynamic synchronized representations of physical systems. These virtual counterparts operate in real time, bridging the gap between the physical and digital worlds to simulate, predict, optimize, and control system behaviors, thereby enhancing the performance and efficiency of their physical analogs. Their transformative potential is particularly evident in manufacturing, where they contribute to precision metrology and quality assurance. This research explores the implementation of digital twins in experimental fluid mechanics, focusing on real-time data integration from in-line coherent imaging setups. By incorporating real-time sensor data or validated datasets, the goal is to develop dynamic models that accurately represent the behavior of the system under varying operating conditions.

The thesis emphasizes the use of advanced deep learning algorithms, including artificial neural networks (ANNs) and Vision Transformers (ViTs), to create an end-to-end model for analyzing Particle Image Velocimetry (PIV) data. Convolutional neural network (CNN) blocks, based on optical flow estimation techniques, are used to extract flow patterns by learning spatial features and correlations from PIV images. To refine predictions by capturing temporal dependencies and transient behaviors, iterative recurrent CNN blocks are integrated. However, these deep learning models, typically trained on synthetic datasets with reference results derived from analytical equations or high-fidelity numerical models, face challenges in robustness and generalization when applied to real-world industrial scenarios. To address this, experimental PIV datasets of flow past a circular cylinder were generated to evaluate the performance of RAFT-PIV (Recurrent All-Pairs Field Transforms), the state-of-the-art CNN model for optical flow estimation (Paper A). These datasets included variations in key parameters such as particle size, seeding density, and displacement range. The study also investigated the influence of image pre-processing on RAFT-PIV performance. The results showed that the root mean squared errors between the RAFT-PIV predictions and the reference data ranged from 0.5 to 2 pixels. Errors decreased with increasing particle density or decreasing maximum particle displacement, while image pre-processing had minimal impact on model performance.
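The pixel-level error metric quoted above can be sketched in a few lines. This is only an illustration of the reported metric, not code from the thesis; the field shapes and toy values are invented for the example.

```python
import numpy as np

def rmse_px(pred, ref):
    """Root mean squared error over all displacement components, in pixels."""
    pred = np.asarray(pred, dtype=float)
    ref = np.asarray(ref, dtype=float)
    return float(np.sqrt(np.mean((pred - ref) ** 2)))

# Toy 2-component (u, v) displacement fields on a 4x4 grid.
ref = np.zeros((4, 4, 2))
pred = np.ones((4, 4, 2))   # every component off by exactly 1 px
print(rmse_px(pred, ref))   # -> 1.0
```

An RMSE of 0.5–2 px, as reported for RAFT-PIV against the experimental reference data, would correspond to a typical per-component deviation of that magnitude at every vector of the field.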

Given the complexity of fluid flow phenomena, characterized by intricate structures across multiple scales, traditional ANNs using convolutional operations may capture local features but often miss long-range dependencies. To overcome this limitation, this research introduces Twins-PIVNet, a novel framework that replaces traditional convolution-based feature extractors with attention-based vision transformers (Paper B). This approach improves the model's capacity to capture both global and local contexts, selectively focusing on relevant features. Twins-PIVNet outperforms most well-established PIV methods on both synthetic and experimental datasets, achieving a significant reduction in computational cost and inference time compared to other self-attention-based models for PIV. The model also excels in generalization and robustness, performing effectively on experimental data not included in the training set. These neural networks are capable of handling the nonlinear dynamics characteristic of fluid systems, significantly enhancing the predictive capabilities of digital twins. This advancement facilitates real-time analysis, predictive maintenance, and optimization, highlighting the critical role of digital twins in the advancement of flow analysis, smart manufacturing, and industrial optimization.

Place, publisher, year, edition, pages
Luleå: Luleå University of Technology, 2024.
Series
Licentiate thesis / Luleå University of Technology, ISSN 1402-1757
Keywords [en]
digital twins, particle image velocimetry, deep learning, experimental fluid mechanics
National Category
Fluid Mechanics
Research subject
Experimental Mechanics
Identifiers
URN: urn:nbn:se:ltu:diva-110252
ISBN: 978-91-8048-656-9 (print)
ISBN: 978-91-8048-657-6 (electronic)
OAI: oai:DiVA.org:ltu-110252
DiVA, id: diva2:1903568
Presentation
2024-12-05, E632, Luleå tekniska universitet, Luleå, 09:00 (English)
Available from: 2024-10-07 Created: 2024-10-04 Last updated: 2025-02-09. Bibliographically approved
List of papers
1. Experimental dataset investigation of deep recurrent optical flow learning for particle image velocimetry: flow past a circular cylinder
2024 (English) In: Measurement Science and Technology, ISSN 0957-0233, E-ISSN 1361-6501, Vol. 35, no. 8, article id 085402. Article in journal (Refereed) Published
Abstract [en]

Current optical flow-based neural networks for particle image velocimetry (PIV) are largely trained on synthetic datasets emulating real-world scenarios. While synthetic datasets provide greater control and variation than can be achieved with experimental datasets for supervised learning, using them requires a deeper understanding of which factors dictate the learning behaviors of deep neural networks for PIV. In this study, we investigate the performance of the recurrent all-pairs field transforms-PIV (RAFT-PIV) network, the current state-of-the-art deep learning architecture for PIV, by testing it on unseen experimentally generated datasets. The results from RAFT-PIV are compared with a conventional cross-correlation-based method, Adaptive PIV. The experimental PIV datasets were generated for a typical scenario of flow past a circular cylinder in a rectangular channel. These test datasets encompassed variations in particle diameters, particle seeding densities, and flow speeds, all falling within the parameter range used for training RAFT-PIV. We also explore how different image pre-processing techniques can impact and potentially enhance the performance of RAFT-PIV on real-world datasets. Thorough testing with real-world experimental PIV datasets reveals the resilience of the optical flow-based method to variations in PIV hyperparameters, in contrast to the conventional PIV technique. The ensemble-averaged root mean squared errors between the RAFT-PIV and Adaptive PIV estimations generally range between 0.5 and 2 px and show a slight reduction as particle densities increase or Reynolds numbers decrease. Furthermore, findings indicate that employing image pre-processing techniques to enhance input particle image quality does not improve RAFT-PIV predictions; instead, it incurs higher computational costs and impacts estimations of small-scale structures.
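The conventional cross-correlation baseline mentioned above estimates, for each interrogation window, the displacement at which the two exposures correlate most strongly. The following is a minimal single-window sketch of that principle, not the Adaptive PIV implementation used in the paper; the toy frames and window size are invented for illustration.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel displacement of win_b relative to win_a via the
    FFT-based circular cross-correlation peak."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    n = np.array(corr.shape)
    # Wrap the peak indices into signed shifts (dy, dx).
    shift = (np.array(peak) + n // 2) % n - n // 2
    return tuple(int(s) for s in shift)

# Toy example: a single bright "particle" shifted by (2, 3) px between frames.
frame_a = np.zeros((32, 32)); frame_a[10, 10] = 1.0
frame_b = np.zeros((32, 32)); frame_b[12, 13] = 1.0
print(window_displacement(frame_a, frame_b))  # -> (2, 3)
```

Real correlation-based PIV adds sub-pixel peak fitting, window overlap, and adaptive window sizing, which is where methods such as Adaptive PIV refine this basic idea.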

Place, publisher, year, edition, pages
Institute of Physics (IOP), 2024
Keywords
particle image velocimetry, experimental dataset, deep learning, optical flow
National Category
Fluid Mechanics; Other Engineering and Technologies
Research subject
Experimental Mechanics
Identifiers
urn:nbn:se:ltu:diva-105449 (URN)
10.1088/1361-6501/ad4387 (DOI)
001215214500001 ()
2-s2.0-85192673315 (Scopus ID)
Conference
20th International Symposium on Flow Visualization (ISFV20), Delft, Netherlands, July 10-13, 2023
Note

Validated; 2024; Level 2; 2024-08-12 (hanlid);

Full text license: CC BY 4.0; 

Part of special issue: The 20th International Symposium on Flow Visualization (ISFV20)

Available from: 2024-05-13 Created: 2024-05-13 Last updated: 2025-02-10. Bibliographically approved
2. Twins-PIVNet: Spatial attention-based deep learning framework for particle image velocimetry using Vision Transformer
2025 (English) In: Ocean Engineering, ISSN 0029-8018, E-ISSN 1873-5258, article id 120205. Article in journal (Refereed) Published
Abstract [en]

Particle Image Velocimetry (PIV) for flow visualization has advanced with the integration of deep learning algorithms. These methods enable end-to-end processing, extracting dense flow fields directly from raw particle images. However, conventional deep learning-based PIV models, which predominantly rely on convolutional architectures, are limited in their ability to utilize contextual information and capture dependencies between pixels across sequential images, impacting prediction accuracy. We introduce Twins-PIVNet, a deep learning framework for PIV optical flow estimation that leverages a spatial attention-based vision transformer architecture. Its self-attention mechanism captures multi-scale features of particle motion, significantly improving dense flow field estimation. Trained on synthetic PIV datasets covering a wide range of flow conditions, Twins-PIVNet has been evaluated on both synthetic and experimental datasets, demonstrating superior accuracy and performance. In comparative studies, Twins-PIVNet outperforms existing optical flow and conventional methods, achieving accuracy improvements of 51% for backstep flow, 42% for DNS-turbulence, and 33% for surface quasi-geostrophic flow. It also exhibits strong generalization on experimental PIV data, demonstrating robustness in handling real-world PIV uncertainties. Despite its attention mechanism, Twins-PIVNet maintains faster inference and training times compared to other PIV models, offering an optimal balance between complexity, efficiency, and performance.
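The self-attention mechanism at the core of such vision-transformer feature extractors lets every image patch attend to every other patch, which is how long-range dependencies are captured. The sketch below shows only the generic scaled dot-product attention operation; the shapes and random weights are arbitrary, and this is not the Twins-PIVNet code.

```python
import numpy as np

def self_attention(x, wq, wk, wv):
    """Single-head scaled dot-product self-attention over a set of tokens."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])         # pairwise token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over all tokens
    return weights @ v                              # each token mixes in every other

rng = np.random.default_rng(0)
tokens = rng.normal(size=(16, 8))    # e.g. 16 image patches, 8-dim embeddings
wq, wk, wv = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(tokens, wq, wk, wv)
print(out.shape)  # -> (16, 8)
```

Because the attention weights couple all patch pairs, the receptive field is global in a single layer, in contrast to a convolution, whose receptive field grows only with depth.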

Place, publisher, year, edition, pages
Elsevier, 2025
Keywords
particle image velocimetry, deep learning, vision transformer, self-attention, optical flow estimation
National Category
Fluid Mechanics
Research subject
Experimental Mechanics; Fluid Mechanics
Identifiers
urn:nbn:se:ltu:diva-110244 (URN)
10.1016/j.oceaneng.2024.120205 (DOI)
2-s2.0-85212978937 (Scopus ID)
Note

Validated; 2025; Level 2; 2025-01-02 (signyg);

Fulltext license: CC BY;

This article has previously appeared as a manuscript in a thesis

Available from: 2024-10-04 Created: 2024-10-04 Last updated: 2025-02-09. Bibliographically approved

Open Access in DiVA

fulltext: FULLTEXT01.pdf, 2733 kB, application/pdf (163 downloads)

Authority records

Anjaneya Reddy, Yuvarajendra
