Publications (10 of 89)
Kulahci, M. (2019). Discussion on "Søren Bisgaard's contributions to Quality Engineering: Design of experiments". Quality Engineering, 31(1), 149-153
2019 (English). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 31, no. 1, p. 149-153. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
Milwaukee: Taylor & Francis, 2019
National Category
Reliability and Maintenance
Research subject
Quality Technology & Management
Identifiers
urn:nbn:se:ltu:diva-73649 (URN)
10.1080/08982112.2018.1537446 (DOI)
000467061400025 (ISI)
2-s2.0-85064277851 (Scopus ID)
Frumosu, F. D. & Kulahci, M. (2019). Outliers detection using an iterative strategy for semi‐supervised learning. Quality and Reliability Engineering International
2019 (English). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638. Article in journal (Refereed). Epub ahead of print.
Abstract [en]

As a direct consequence of production systems' digitalization, high-frequency and high-dimensional data have become more easily available. In terms of data analysis, latent structures-based methods are often employed when analyzing multivariate and complex data. However, these methods are designed for supervised learning problems in which sufficient labeled data are available. Particularly for fast production rates, quality characteristics data tend to be scarcer than the process data generated through multiple sensors and automated data collection schemes. One way to overcome the problem of scarce outputs is to employ semi-supervised learning methods, which use both labeled and unlabeled data. It has been shown that a semi-supervised approach is advantageous when the labeled and unlabeled data come from the same distribution. In real applications, there is a chance that the unlabeled data contain outliers or even a drift in the process, which will affect the performance of the semi-supervised methods. The research question addressed in this work is how to detect outliers in the unlabeled data set using the scarce labeled data set. An iterative strategy is proposed using combined Hotelling's T2 and Q statistics, and it is applied within a semi-supervised principal component regression (SS-PCR) approach on both simulated and real data sets.
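The screening step at the heart of this strategy can be illustrated compactly. The sketch below shows the general idea rather than the authors' exact algorithm: a PCA model is fitted on the scarce labeled data, and unlabeled rows are flagged with Hotelling's T2 and the Q (squared prediction error) statistic. The simulated data, the component count and the percentile-based control limits are illustrative assumptions.

```python
# Minimal sketch: screen unlabeled rows with T2 and Q statistics from a PCA
# model fitted on scarce labeled data (illustrative, not the paper's exact method).
import numpy as np
from sklearn.decomposition import PCA

def t2_q_statistics(X, pca):
    """Hotelling's T2 and Q (squared prediction error) for each row of X."""
    scores = pca.transform(X)
    t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
    X_hat = pca.inverse_transform(scores)      # reconstruction from the PC subspace
    q = np.sum((X - X_hat)**2, axis=1)
    return t2, q

rng = np.random.default_rng(0)
X_labeled = rng.normal(size=(30, 10))          # scarce labeled process data
X_unlabeled = rng.normal(size=(200, 10))
X_unlabeled[:5] += 6.0                         # planted outliers

pca = PCA(n_components=3).fit(X_labeled)
t2, q = t2_q_statistics(X_unlabeled, pca)

# Empirical 99th-percentile limits from the labeled data (an illustrative
# assumption standing in for properly derived control limits).
t2_ref, q_ref = t2_q_statistics(X_labeled, pca)
keep = (t2 < np.percentile(t2_ref, 99)) & (q < np.percentile(q_ref, 99))
print(f"retained {keep.sum()} of {len(keep)} unlabeled rows")
```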

Place, publisher, year, edition, pages
John Wiley & Sons, 2019
Keywords
Industry 4.0, iterative strategy, latent structures methods, production statistics, semi‐supervised learning
National Category
Reliability and Maintenance
Research subject
Quality technology and logistics
Identifiers
urn:nbn:se:ltu:diva-75583 (URN)
10.1002/qre.2522 (DOI)
000477441600001 (ISI)
Capaci, F., Vanhatalo, E., Kulahci, M. & Bergquist, B. (2019). The Revised Tennessee Eastman Process Simulator as Testbed for SPC and DoE Methods. Quality Engineering, 31(2), 212-229
2019 (English). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 31, no. 2, p. 212-229. Article in journal (Refereed). Published.
Abstract [en]

Engineering process control and high-dimensional, time-dependent data present great methodological challenges when applying statistical process control (SPC) and design of experiments (DoE) in continuous industrial processes. Process simulators able to mimic these challenges are instrumental in research and education. This article focuses on the revised Tennessee Eastman process simulator, providing guidelines for its use as a testbed for SPC and DoE methods. We provide flowcharts that help new users get started in the Simulink/Matlab framework, and we illustrate how to run stochastic simulations for SPC and DoE applications using the Tennessee Eastman process.
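As a rough illustration of the kind of stochastic experimentation the article describes, the sketch below generates a replicated, randomized two-level factorial and evaluates it against a hypothetical stand-in for the simulator. The real testbed runs in Simulink/Matlab, so the factor names and the response function here are purely illustrative assumptions.

```python
# Minimal sketch: replicated, randomized 2^2 factorial runs against a
# hypothetical stand-in for a stochastic process simulator.
import itertools
import numpy as np

rng = np.random.default_rng(42)

def simulate_process(x1, x2, seed):
    """Hypothetical stand-in: one noisy simulation run at coded settings."""
    noise = np.random.default_rng(seed).normal(scale=0.5)
    return 50 + 2.0 * x1 - 1.5 * x2 + noise

design = list(itertools.product([-1, +1], repeat=2))  # full 2^2 factorial
replicates = 3

runs = []
for rep in range(replicates):
    for i in rng.permutation(len(design)):     # randomize run order per replicate
        x1, x2 = design[i]
        y = simulate_process(x1, x2, seed=rng.integers(1_000_000))
        runs.append((rep, x1, x2, y))

for rep, x1, x2, y in runs[:4]:
    print(rep, x1, x2, round(y, 2))
```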

Place, publisher, year, edition, pages
Taylor & Francis, 2019
Keywords
Simulation, Tutorial, Statistical process control, Design of experiments, Engineering process control, Closed-loop
National Category
Reliability and Maintenance
Research subject
Quality technology and logistics
Identifiers
urn:nbn:se:ltu:diva-66255 (URN)
10.1080/08982112.2018.1461905 (DOI)
000468617000002 (ISI)
2-s2.0-85066129240 (Scopus ID)
Projects
Statistical Methods for Improving Continuous Production
Note

Validated; 2019; Level 2; 2019-06-11 (johcin)

Frumosu, F. D. & Kulahci, M. (2018). Big data analytics using semi‐supervised learning methods. Quality and Reliability Engineering International, 34(7), 1413-1423
2018 (English). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 34, no. 7, p. 1413-1423. Article in journal (Refereed). Published.
Abstract [en]

The expanding availability of complex data structures requires the development of new analysis methods for process understanding and monitoring. In manufacturing, this is primarily due to high-frequency and high-dimensional data available through automated data collection schemes and sensors. However, particularly in fast production rate situations, data on the quality characteristics of the process output tend to be scarcer than the available process data. There has been a considerable effort to incorporate latent structure-based methods in the context of complex data. The research question addressed in this paper is how to make use of latent structure-based methods in the pursuit of better predictions using all available data, including the process data for which there are no corresponding output measurements, i.e., unlabeled data. Inspiration for the research question comes from an industrial setting where there is a need for prediction with extremely low tolerances. A semi-supervised principal component regression method is compared against benchmark latent structure-based methods, principal components regression and partial least squares, on simulated and experimental data. In the analysis, we show the circumstances in which it becomes more advantageous to use semi-supervised principal component regression over these competing methods.
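The semi-supervised PCR idea can be sketched in a few lines: estimate the latent subspace from all process data, labeled and unlabeled, then fit the regression step on the labeled rows only. The data, the component count and the use of scikit-learn below are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of semi-supervised PCR: PCA on all X, regression on labeled rows.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
X_labeled = rng.normal(size=(20, 15))          # scarce rows with measured output
y_labeled = X_labeled[:, 0] + 0.1 * rng.normal(size=20)
X_unlabeled = rng.normal(size=(500, 15))       # abundant process data, no outputs

# Supervised PCR would fit the PCA on X_labeled alone; the semi-supervised
# variant estimates the latent subspace from everything.
pca = PCA(n_components=5).fit(np.vstack([X_labeled, X_unlabeled]))
reg = LinearRegression().fit(pca.transform(X_labeled), y_labeled)

X_new = rng.normal(size=(5, 15))
print(reg.predict(pca.transform(X_new)))
```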

Place, publisher, year, edition, pages
John Wiley & Sons, 2018
National Category
Reliability and Maintenance
Research subject
Quality Technology & Management
Identifiers
urn:nbn:se:ltu:diva-69525 (URN)
10.1002/qre.2338 (DOI)
000445334700011 (ISI)
2-s2.0-85053643774 (Scopus ID)
Note

Validated; 2018; Level 2; 2018-09-25 (svasva)

Spooner, M., Kold, D. & Kulahci, M. (2018). Harvest time prediction for batch processes. Computers and Chemical Engineering, 117, 32-41
2018 (English). In: Computers and Chemical Engineering, ISSN 0098-1354, E-ISSN 1873-4375, Vol. 117, p. 32-41. Article in journal (Refereed). Published.
Abstract [en]

Batch processes usually exhibit variation in the time at which individual batches are stopped (referred to as the harvest time). The harvest time is based on the occurrence of some criterion, and there may be great uncertainty as to when this criterion will be satisfied. This uncertainty increases the difficulty of scheduling downstream operations and results in fewer completed batches per day. A real case study of a bacteria fermentation process is presented. We consider the problem of predicting the harvest time of a batch in advance to reduce variation and improve batch quality. Lasso regression is used to obtain an interpretable model for predicting the harvest time at an early stage in the batch. A novel method for updating the harvest time predictions as a batch progresses is presented, based on information obtained from online alignment using dynamic time warping.
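The lasso step lends itself to a short sketch: predict a batch's harvest time from measurements available early in the batch, with the L1 penalty selecting an interpretable subset of predictors. The simulated data and the penalty value below are illustrative assumptions.

```python
# Minimal sketch: lasso regression of harvest time on early-batch measurements.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(2)
n_batches, n_features = 60, 40
X_early = rng.normal(size=(n_batches, n_features))   # early-batch measurements
harvest_time = 30 + 4 * X_early[:, 3] - 2 * X_early[:, 7] + rng.normal(size=n_batches)

model = Lasso(alpha=0.1).fit(X_early, harvest_time)
selected = np.flatnonzero(model.coef_)               # predictors the lasso retains
print("predictors retained by the lasso:", selected)
print("predicted harvest time of a new batch:",
      model.predict(X_early[:1]).round(1))
```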

Place, publisher, year, edition, pages
Elsevier, 2018
National Category
Reliability and Maintenance
Research subject
Quality Technology & Management
Identifiers
urn:nbn:se:ltu:diva-68892 (URN)
10.1016/j.compchemeng.2018.05.019 (DOI)
000441891600004 (ISI)
2-s2.0-85048458709 (Scopus ID)
Note

Validated; 2018; Level 2; 2018-06-25 (andbra)

Spooner, M. & Kulahci, M. (2018). Monitoring batch processes with dynamic time warping and k-nearest neighbours. Chemometrics and Intelligent Laboratory Systems, 183, 102-112
2018 (English). In: Chemometrics and Intelligent Laboratory Systems, ISSN 0169-7439, E-ISSN 1873-3239, Vol. 183, p. 102-112. Article in journal (Refereed). Published.
Abstract [en]

A novel data-driven approach to batch process monitoring is presented, which combines the k-nearest neighbour rule with the dynamic time warping (DTW) distance. This online method (DTW-NN) calculates the DTW distance between an ongoing batch and each batch in a reference database of batches produced under normal operating conditions (NOC). The sum of the k smallest DTW distances is monitored. If a fault occurs in the ongoing batch, this distance increases and an alarm is generated. The monitoring statistic is easy to interpret, being a direct measure of the similarity of the ongoing batch to its nearest NOC predecessors, and the method makes no distributional assumptions regarding normal operating conditions. DTW-NN is applied to four extensive datasets from simulated batch production of penicillin and tested on a wide variety of fault types, magnitudes and onset times. The performance of DTW-NN is contrasted with a benchmark multiway PCA approach, and DTW-NN is shown to perform particularly well when there is clustering of batches under NOC.
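The monitoring statistic is simple enough to sketch directly: compute the DTW distance from the ongoing trajectory to each NOC reference batch and sum the k smallest. The univariate trajectories and the choice of k below are illustrative assumptions; the paper works with multivariate batches and online alignment.

```python
# Minimal sketch of the DTW-NN statistic: sum of the k smallest DTW distances
# from an ongoing batch to a database of NOC reference batches.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic time warping distance."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(3)
noc_batches = [np.sin(np.linspace(0, 6, 100)) + 0.05 * rng.normal(size=100)
               for _ in range(25)]             # reference NOC trajectories
ongoing = np.sin(np.linspace(0, 6, 90)) + 0.05 * rng.normal(size=90)

k = 3
dists = sorted(dtw_distance(ongoing, ref) for ref in noc_batches)
statistic = sum(dists[:k])                     # alarm if this exceeds a control limit
print(f"sum of {k} smallest DTW distances: {statistic:.2f}")
```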

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Batch process, Dynamic time warping, Nearest neighbours, Pensim
National Category
Reliability and Maintenance
Research subject
Quality Technology & Management
Identifiers
urn:nbn:se:ltu:diva-71487 (URN)
10.1016/j.chemolab.2018.10.011 (DOI)
000453490700011 (ISI)
2-s2.0-85056005015 (Scopus ID)
Note

Validated; 2018; Level 2; 2018-11-07 (johcin)

Kulahci, M. (2018). Rare-events classification: An approach based on genetic algorithm and Voronoi tessellation. Paper presented at the 22nd Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2018, Melbourne, Australia, 3 June 2018.
2018 (English). Conference paper (Refereed).
Identifiers
urn:nbn:se:ltu:diva-72858 (URN)
2-s2.0-85059055751 (Scopus ID)
Conference
22nd Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2018, Melbourne, Australia, 3 June 2018
Gajjar, S., Kulahci, M. & Palazoglu, A. (2018). Real-time fault detection and diagnosis using sparse principal component analysis. Journal of Process Control, 67, 112-128
2018 (English). In: Journal of Process Control, ISSN 0959-1524, E-ISSN 1873-2771, Vol. 67, p. 112-128. Article in journal (Refereed). Published.
Abstract [en]

With the emergence of smart factories, large volumes of process data are collected and stored at high sampling rates for improved energy efficiency, process monitoring and sustainability. The data collected in the course of enterprise-wide operations consist of information from broadly deployed sensors and other control equipment. Interpreting such large volumes of data with a limited workforce is becoming an increasingly common challenge. Principal component analysis (PCA) is a widely accepted procedure for summarizing data while minimizing information loss. It does so by finding new variables, the principal components (PCs), which are linear combinations of the original variables in the dataset. However, interpreting PCs obtained from the many variables in a large dataset is often challenging, especially in the context of fault detection and diagnosis studies. Sparse principal component analysis (SPCA) is a relatively recent technique for producing PCs with sparse loadings via a variance-sparsity trade-off. Using SPCA, some of the loadings on the PCs can be restricted to zero. In this paper, we introduce a method to select the number of non-zero loadings in each PC when using SPCA. The proposed approach considerably improves the interpretability of the PCs while minimizing the loss of total variance explained. Furthermore, we compare the performance of PCA- and SPCA-based techniques for fault detection and fault diagnosis. The key features of the methodology are assessed through a synthetic example and a comparative study of the benchmark Tennessee Eastman process.
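The PCA-versus-SPCA contrast can be seen in a short sketch. Note that scikit-learn's SparsePCA controls sparsity through an alpha penalty rather than a direct count of non-zero loadings, so the alpha value below is an illustrative assumption standing in for the paper's selection method.

```python
# Minimal sketch: compare how many non-zero loadings PCA and sparse PCA produce.
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 20))
X[:, 0:3] += rng.normal(size=(300, 1))         # make a few variables correlated

pca = PCA(n_components=2).fit(X)
spca = SparsePCA(n_components=2, alpha=2.0, random_state=0).fit(X)

# PCA loadings are dense; SPCA drives many loadings to exactly zero,
# which is what makes the components easier to interpret.
print("non-zero loadings per PC (PCA): ",
      (np.abs(pca.components_) > 1e-10).sum(axis=1))
print("non-zero loadings per PC (SPCA):",
      (np.abs(spca.components_) > 1e-10).sum(axis=1))
```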

Place, publisher, year, edition, pages
Elsevier, 2018
National Category
Reliability and Maintenance
Research subject
Quality Technology & Management
Identifiers
urn:nbn:se:ltu:diva-63207 (URN)
10.1016/j.jprocont.2017.03.005 (DOI)
000436649900011 (ISI)
2-s2.0-85018342458 (Scopus ID)
Note

Validated; 2018; Level 2; 2018-06-11 (rokbeg)

Rauf Khan, A., Schioler, H. & Kulahci, M. (2018). Selection of objective function for imbalanced classification: an industrial case study. In: IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). Paper presented at the 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Limassol, Cyprus, September 12-15, 2017. Piscataway, NJ: IEEE.
2018 (English). In: IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). Piscataway, NJ: IEEE, 2018. Conference paper, Published paper (Refereed).
Abstract [en]

In modern factories, each manufacturing step produces a wealth of valuable and highly precise information. This provides a great opportunity for understanding the hidden statistical dependencies in the process. Systematic analysis and the use of advanced analytical methods can lead to more informed decisions. In this article we discuss some of the challenges related to big data analysis in manufacturing and relevant solutions to some of these challenges.
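The issue the title points to, that the choice of objective matters under class imbalance, can be sketched briefly. The sketch below uses scikit-learn's class_weight option as a simple stand-in for the paper's objective-function selection; the simulated data and weighting scheme are illustrative assumptions.

```python
# Minimal sketch: plain accuracy rewards predicting the majority class, while
# class weighting changes the fitted decision rule on imbalanced data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score

rng = np.random.default_rng(5)
n_major, n_minor = 980, 20                     # heavily imbalanced classes
X = np.vstack([rng.normal(0, 1, size=(n_major, 2)),
               rng.normal(2, 1, size=(n_minor, 2))])
y = np.array([0] * n_major + [1] * n_minor)

plain = LogisticRegression().fit(X, y)
weighted = LogisticRegression(class_weight="balanced").fit(X, y)

for name, model in [("plain", plain), ("balanced", weighted)]:
    pred = model.predict(X)
    print(name, "accuracy:", round(accuracy_score(y, pred), 3),
          "F1:", round(f1_score(y, pred), 3))
```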

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2018
Series
IEEE International Conference on Emerging Technologies and Factory Automation, ISSN 1946-0759
National Category
Reliability and Maintenance
Research subject
Quality Technology & Management
Identifiers
urn:nbn:se:ltu:diva-72519 (URN)
10.1109/ETFA.2017.8396223 (DOI)
978-1-5090-6505-9 (ISBN)
978-1-5090-6506-6 (ISBN)
Conference
22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Limassol, Cyprus, September 12-15, 2017
Capaci, F., Bergquist, B., Kulahci, M. & Vanhatalo, E. (2017). Exploring the Use of Design of Experiments in Industrial Processes Operating Under Closed-Loop Control. Quality and Reliability Engineering International, 33(7), 1601-1614
2017 (English). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 33, no. 7, p. 1601-1614. Article in journal (Refereed). Published.
Abstract [en]

Industrial manufacturing processes often operate under closed-loop control, where automation aims to keep important process variables at their set-points. In process industries such as pulp, paper, chemical and steel plants, it is often hard to find production processes operating in open loop. Instead, closed-loop control systems actively attempt to minimize the impact of process disturbances. However, we argue that an implicit assumption in most experimental investigations is that the studied system is open loop, allowing the experimental factors to freely affect the important system responses. This scenario is typically not found in process industries. The purpose of this article is therefore to explore issues of experimental design and analysis in processes operating under closed-loop control and to illustrate how Design of Experiments can help in improving and optimizing such processes. The Tennessee Eastman challenge process simulator is used as a testbed to highlight two experimental scenarios. The first scenario explores the impact of experimental factors that may be considered disturbances in the closed-loop system. The second scenario exemplifies a screening design using the set-points of controllers as experimental factors. We provide examples of how to analyze the two scenarios.
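The second scenario reduces to building a two-level screening design whose factors are controller set-points rather than the manipulated variables themselves. The sketch below generates such a run sheet; the factor names are illustrative assumptions, not the simulator's actual set-point interface.

```python
# Minimal sketch: a 2^3 screening design over controller set-points (coded levels).
import itertools
import numpy as np

factors = ["reactor_level_sp", "reactor_pressure_sp", "product_rate_sp"]
design = np.array(list(itertools.product([-1, +1], repeat=len(factors))))

# Each design row would be sent to the closed-loop simulator as a set of
# controller set-points; here we just print the coded run sheet.
for run, row in enumerate(design, start=1):
    settings = dict(zip(factors, row.tolist()))
    print(f"run {run}: {settings}")
```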

Place, publisher, year, edition, pages
John Wiley & Sons, 2017
National Category
Reliability and Maintenance
Research subject
Quality Technology and Management
Identifiers
urn:nbn:se:ltu:diva-61872 (URN)
10.1002/qre.2128 (DOI)
000413906100024 (ISI)
2-s2.0-85012952363 (Scopus ID)
Note

Validated; 2017; Level 2; 2017-11-03 (andbra)

Identifiers
ORCID iD: orcid.org/0000-0003-4222-9631
