Publications (10 of 87)
Kulahci, M. (2019). Discussion on "Søren Bisgaard's contributions to Quality Engineering: Design of experiments". Quality Engineering, 31(1), 149-153
2019 (English). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 31, no. 1, p. 149-153. Article in journal (Refereed). Published.
Place, publisher, year, edition, pages
Milwaukee: Taylor & Francis, 2019
National Category
Reliability and Maintenance
Research subject
Quality Technology & Management
Identifiers
urn:nbn:se:ltu:diva-73649 (URN); 10.1080/08982112.2018.1537446 (DOI)
Available from: 2019-04-15 Created: 2019-04-15 Last updated: 2019-04-15
Frumosu, F. D. & Kulahci, M. (2018). Big data analytics using semi‐supervised learning methods. Quality and Reliability Engineering International, 34(7), 1413-1423
2018 (English). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 34, no. 7, p. 1413-1423. Article in journal (Refereed). Published.
Abstract [en]

The expanding availability of complex data structures requires development of new analysis methods for process understanding and monitoring. In manufacturing, this is primarily due to high-frequency and high-dimensional data available through automated data collection schemes and sensors. However, particularly for fast production rate situations, data on the quality characteristics of the process output tend to be scarcer than the available process data. There has been a considerable effort in incorporating latent structure-based methods in the context of complex data. The research question addressed in this paper is how to make use of latent structure-based methods in the pursuit of better predictions using all available data, including the process data for which there are no corresponding output measurements, i.e., unlabeled data. Inspiration for the research question comes from an industrial setting where there is a need for prediction with extremely low tolerances. A semi-supervised principal component regression method is compared against benchmark latent structure-based methods, principal component regression and partial least squares, on simulated and experimental data. In the analysis, we show the circumstances in which it becomes more advantageous to use the semi-supervised principal component regression over these competing methods.
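The core idea can be illustrated with a minimal sketch, not the authors' implementation: the principal component loadings are estimated from all process data, labeled and unlabeled, while the regression step uses only the rows that have quality measurements. The function name, toy data, and component count below are illustrative assumptions.

```python
import numpy as np

def semi_supervised_pcr(X_labeled, y, X_unlabeled, n_components=2):
    """Illustrative semi-supervised PCR: PCA on all data, regression on labeled scores."""
    # Estimate the latent structure from ALL process data (labeled + unlabeled).
    X_all = np.vstack([X_labeled, X_unlabeled])
    mu = X_all.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_all - mu, full_matrices=False)
    P = Vt[:n_components].T                              # loadings

    # Regress the scarce quality measurements on the scores of the labeled rows only.
    T = (X_labeled - mu) @ P
    T1 = np.column_stack([np.ones(len(T)), T])
    beta, *_ = np.linalg.lstsq(T1, y, rcond=None)

    def predict(X_new):
        T_new = (X_new - mu) @ P
        return np.column_stack([np.ones(len(T_new)), T_new]) @ beta
    return predict

# Toy usage: few labeled rows, many unlabeled process records.
rng = np.random.default_rng(0)
X_lab, X_unlab = rng.normal(size=(30, 10)), rng.normal(size=(500, 10))
y = X_lab[:, 0] + 0.1 * rng.normal(size=30)
predict = semi_supervised_pcr(X_lab, y, X_unlab, n_components=3)
print(predict(rng.normal(size=(5, 10))))
```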

Place, publisher, year, edition, pages
John Wiley & Sons, 2018
National Category
Reliability and Maintenance
Research subject
Quality Technology & Management
Identifiers
urn:nbn:se:ltu:diva-69525 (URN); 10.1002/qre.2338 (DOI); 000445334700011 (); 2-s2.0-85053643774 (Scopus ID)
Note

Validated; 2018; Level 2; 2018-09-25 (svasva)

Available from: 2018-06-14 Created: 2018-06-14 Last updated: 2018-11-22. Bibliographically approved.
Spooner, M., Kold, D. & Kulahci, M. (2018). Harvest time prediction for batch processes. Computers and Chemical Engineering, 117, 32-41
2018 (English). In: Computers and Chemical Engineering, ISSN 0098-1354, E-ISSN 1873-4375, Vol. 117, p. 32-41. Article in journal (Refereed). Published.
Abstract [en]

Batch processes usually exhibit variation in the time at which individual batches are stopped (referred to as the harvest time). The harvest time is based on the occurrence of some criterion, and there may be great uncertainty as to when this criterion will be satisfied. This uncertainty increases the difficulty of scheduling downstream operations and results in fewer completed batches per day. A real case study of a bacteria fermentation process is presented. We consider the problem of predicting the harvest time of a batch in advance in order to reduce variation and improve batch quality. Lasso regression is used to obtain an interpretable model for predicting the harvest time at an early stage in the batch. A novel method for updating the harvest time predictions as a batch progresses is also presented, based on information obtained from online alignment using dynamic time warping.
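As a rough sketch of the first ingredient only, a cross-validated lasso model can map features computed from the early phase of each historical batch to its eventual harvest time; the online, DTW-based updating of the prediction described in the abstract is not reproduced here. The toy data, feature layout, and use of scikit-learn are assumptions, not the paper's setup.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Assumed setup: one row per historical batch, columns are features computed
# from the early part of the batch trajectory; target is the observed harvest time.
rng = np.random.default_rng(1)
X_early = rng.normal(size=(80, 25))
harvest_time = 40 + 5 * X_early[:, 3] - 2 * X_early[:, 7] + rng.normal(size=80)

# LassoCV picks the penalty by cross-validation and zeroes out uninformative
# features, which is what makes the resulting model easy to interpret.
model = LassoCV(cv=5).fit(X_early, harvest_time)
print("non-zero coefficients:", np.flatnonzero(model.coef_))
print("predicted harvest time for a new batch:", model.predict(X_early[:1]))
```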

Place, publisher, year, edition, pages
Elsevier, 2018
National Category
Reliability and Maintenance
Research subject
Quality Technology & Management
Identifiers
urn:nbn:se:ltu:diva-68892 (URN); 10.1016/j.compchemeng.2018.05.019 (DOI); 000441891600004 (); 2-s2.0-85048458709 (Scopus ID)
Note

Validated; 2018; Level 2; 2018-06-25 (andbra)

Available from: 2018-05-24 Created: 2018-05-24 Last updated: 2019-03-27. Bibliographically approved.
Spooner, M. & Kulahci, M. (2018). Monitoring batch processes with dynamic time warping and k-nearest neighbours. Chemometrics and Intelligent Laboratory Systems, 183, 102-112
2018 (English). In: Chemometrics and Intelligent Laboratory Systems, ISSN 0169-7439, E-ISSN 1873-3239, Vol. 183, p. 102-112. Article in journal (Refereed). Published.
Abstract [en]

A novel data-driven approach to batch process monitoring is presented, which combines the k-nearest neighbour rule with the dynamic time warping (DTW) distance. This online method (DTW-NN) calculates the DTW distance between an ongoing batch and each batch in a reference database of batches produced under normal operating conditions (NOC). The sum of the k smallest DTW distances is monitored. If a fault occurs in the ongoing batch, this distance increases and an alarm is generated. The monitoring statistic is easy to interpret, being a direct measure of the similarity of the ongoing batch to its nearest NOC predecessors, and the method makes no distributional assumptions regarding normal operating conditions. DTW-NN is applied to four extensive datasets from simulated batch production of penicillin and tested on a wide variety of fault types, magnitudes and onset times. Its performance is contrasted with a benchmark multiway PCA approach, and DTW-NN is shown to perform particularly well when there is clustering of batches under NOC.
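A minimal sketch of the monitoring statistic, assuming univariate batch trajectories stored as NumPy arrays; the published DTW-NN method also involves online alignment of the ongoing batch and control limits estimated from NOC data, which are omitted here, so treat this as illustrative only.

```python
import numpy as np

def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two 1-D trajectories."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_nn_statistic(ongoing, noc_batches, k=3):
    """Sum of the k smallest DTW distances from the ongoing batch to the NOC batches."""
    d = sorted(dtw_distance(ongoing, ref) for ref in noc_batches)
    return sum(d[:k])

# Toy usage: a faulty batch drifts away from the NOC reference set and the
# statistic grows, which would trigger an alarm against a suitable control limit.
rng = np.random.default_rng(2)
noc = [np.sin(np.linspace(0, 6, 100)) + 0.05 * rng.normal(size=100) for _ in range(20)]
ok_batch = np.sin(np.linspace(0, 6, 95)) + 0.05 * rng.normal(size=95)
faulty = ok_batch + np.linspace(0, 1.5, 95)        # simulated drift fault
print(dtw_nn_statistic(ok_batch, noc), dtw_nn_statistic(faulty, noc))
```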

Place, publisher, year, edition, pages
Elsevier, 2018
Keywords
Batch process, Dynamic time warping, Nearest neighbours, Pensim
National Category
Reliability and Maintenance
Research subject
Quality Technology & Management
Identifiers
urn:nbn:se:ltu:diva-71487 (URN); 10.1016/j.chemolab.2018.10.011 (DOI); 000453490700011 (); 2-s2.0-85056005015 (Scopus ID)
Note

Validated; 2018; Level 2; 2018-11-07 (johcin)

Available from: 2018-11-07 Created: 2018-11-07 Last updated: 2019-01-30. Bibliographically approved.
Kulahci, M. (2018). Rare-events classification: An approach based on genetic algorithm and Voronoi tessellation. Paper presented at 22nd Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2018; Melbourne; Australia; 3 June 2018.
2018 (English). Conference paper (Refereed).
Identifiers
urn:nbn:se:ltu:diva-72858 (URN); 2-s2.0-85059055751 (Scopus ID)
Conference
22nd Pacific-Asia Conference on Knowledge Discovery and Data Mining, PAKDD 2018; Melbourne; Australia; 3 June 2018
Available from: 2019-02-12 Created: 2019-02-12 Last updated: 2019-02-12
Gajjar, S., Kulahci, M. & Palazoglu, A. (2018). Real-time fault detection and diagnosis using sparse principal component analysis. Journal of Process Control, 67, 112-128
2018 (English). In: Journal of Process Control, ISSN 0959-1524, E-ISSN 1873-2771, Vol. 67, p. 112-128. Article in journal (Refereed). Published.
Abstract [en]

With the emergence of smart factories, large volumes of process data are collected and stored at high sampling rates for improved energy efficiency, process monitoring and sustainability. The data collected in the course of enterprise-wide operations consist of information from broadly deployed sensors and other control equipment. Interpreting such large volumes of data with a limited workforce is becoming an increasingly common challenge. Principal component analysis (PCA) is a widely accepted procedure for summarizing data while minimizing information loss. It does so by finding new variables, the principal components (PCs), that are linear combinations of the original variables in the dataset. However, interpreting PCs obtained from many variables in a large dataset is often challenging, especially in the context of fault detection and diagnosis studies. Sparse principal component analysis (SPCA) is a relatively recent technique proposed for producing PCs with sparse loadings via a variance-sparsity trade-off. Using SPCA, some of the loadings on PCs can be restricted to zero. In this paper, we introduce a method to select the number of non-zero loadings in each PC when using SPCA. The proposed approach considerably improves the interpretability of PCs while minimizing the loss of total explained variance. Furthermore, we compare the performance of PCA- and SPCA-based techniques for fault detection and fault diagnosis. The key features of the methodology are assessed through a synthetic example and a comparative study of the benchmark Tennessee Eastman process.
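A hedged sketch of the dense-versus-sparse loading contrast, not the authors' selection procedure: scikit-learn's SparsePCA controls sparsity through an l1 penalty (alpha) rather than by directly fixing the number of non-zero loadings per component, so the selection method proposed in the paper is not reproduced here. The toy data and penalty value are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA, SparsePCA

rng = np.random.default_rng(3)
X = rng.normal(size=(500, 20))
X[:, :3] += rng.normal(size=(500, 1))     # a common latent factor on a few variables only

pca = PCA(n_components=3).fit(X)
spca = SparsePCA(n_components=3, alpha=1.0, random_state=0).fit(X)

# Sparse loadings set many entries exactly to zero, so each component can be
# traced back to a small set of original process variables during fault diagnosis.
print("dense PC1 loadings :", np.round(pca.components_[0], 2))
print("sparse PC1 loadings:", np.round(spca.components_[0], 2))
```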

Place, publisher, year, edition, pages
Elsevier, 2018
National Category
Reliability and Maintenance
Research subject
Quality Technology & Management
Identifiers
urn:nbn:se:ltu:diva-63207 (URN); 10.1016/j.jprocont.2017.03.005 (DOI); 000436649900011 (); 2-s2.0-85018342458 (Scopus ID)
Note

Validated; 2018; Level 2; 2018-06-11 (rokbeg)

Available from: 2017-05-02 Created: 2017-05-02 Last updated: 2018-09-13. Bibliographically approved.
Rauf Khan, A., Schioler, H. & Kulahci, M. (2018). Selection of objective function for imbalanced classification: an industrial case study. In: IEEE International Conference on Emerging Technologies and Factory Automation (ETFA). Paper presented at 22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Limassol, Cyprus, September 12-15, 2017. Piscataway, NJ: IEEE
2018 (English). In: IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Piscataway, NJ: IEEE, 2018. Conference paper, Published paper (Refereed).
Abstract [en]

Today, in modern factories, each manufacturing step produces a wealth of valuable and highly precise information. This provides a great opportunity for understanding the hidden statistical dependencies in the process. Systematic analysis and the utilization of advanced analytical methods can lead to more informed decisions. In this article, we discuss some of the challenges related to big data analysis in manufacturing and relevant solutions to some of these challenges.

Place, publisher, year, edition, pages
Piscataway, NJ: IEEE, 2018
Series
IEEE International Conference on Emerging Technologies and Factory Automation, ISSN 1946-0759
National Category
Reliability and Maintenance
Research subject
Quality Technology & Management
Identifiers
urn:nbn:se:ltu:diva-72519 (URN); 10.1109/ETFA.2017.8396223 (DOI); 978-1-5090-6505-9 (ISBN); 978-1-5090-6506-6 (ISBN)
Conference
22nd IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Limassol, Cyprus, September 12-15, 2017
Available from: 2019-01-11 Created: 2019-01-11 Last updated: 2019-01-11. Bibliographically approved.
Capaci, F., Bergquist, B., Kulahci, M. & Vanhatalo, E. (2017). Exploring the Use of Design of Experiments in Industrial Processes Operating Under Closed-Loop Control. Quality and Reliability Engineering International, 33(7), 1601-1614
2017 (English). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 33, no. 7, p. 1601-1614. Article in journal (Refereed). Published.
Abstract [en]

Industrial manufacturing processes often operate under closed-loop control, where automation aims to keep important process variables at their set-points. In process industries such as pulp, paper, chemical and steel plants, it is often hard to find production processes operating in open loop. Instead, closed-loop control systems will actively attempt to minimize the impact of process disturbances. However, we argue that an implicit assumption in most experimental investigations is that the studied system is open loop, allowing the experimental factors to freely affect the important system responses. This scenario is typically not found in process industries. The purpose of this article is therefore to explore issues of experimental design and analysis in processes operating under closed-loop control and to illustrate how Design of Experiments can help in improving and optimizing such processes. The Tennessee Eastman challenge process simulator is used as a test-bed to highlight two experimental scenarios. The first scenario explores the impact of experimental factors that may be considered disturbances in the closed-loop system. The second scenario exemplifies a screening design using the set-points of controllers as experimental factors. We provide examples of how to analyze the two scenarios.
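A minimal sketch of the logic of the second scenario, under stated assumptions: the controller set-point names are invented for illustration, and the Tennessee Eastman simulator is replaced by a stand-in response. It only shows how a two-level factorial design over set-points is generated and how main effects are estimated.

```python
import itertools
import numpy as np

# Hypothetical controller set-points used as two-level experimental factors
# (coded -1 for the low level and +1 for the high level).
factors = ["reactor_temp_sp", "reactor_level_sp", "product_rate_sp"]
design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

# Stand-in for running the closed-loop process at each set-point combination.
rng = np.random.default_rng(4)
response = 10 + 2.0 * design[:, 0] - 1.5 * design[:, 2] + rng.normal(scale=0.5, size=len(design))

# Main effect of a two-level factor: mean response at +1 minus mean response at -1.
for j, name in enumerate(factors):
    effect = response[design[:, j] == 1].mean() - response[design[:, j] == -1].mean()
    print(f"{name}: estimated main effect = {effect:.2f}")
```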

Place, publisher, year, edition, pages
John Wiley & Sons, 2017
National Category
Reliability and Maintenance
Research subject
Quality Technology and Management
Identifiers
urn:nbn:se:ltu:diva-61872 (URN); 10.1002/qre.2128 (DOI); 000413906100024 (); 2-s2.0-85012952363 (Scopus ID)
Note

Validated; 2017; Level 2; 2017-11-03 (andbra)

Available from: 2017-02-08 Created: 2017-02-08 Last updated: 2018-03-26. Bibliographically approved.
Capaci, F., Vanhatalo, E., Bergquist, B. & Kulahci, M. (2017). Managerial implications for improving continuous production processes. Paper presented at 24th EurOMA Conference, Edinburgh, July 1-5, 2017.
2017 (English). Conference paper, Published paper (Refereed).
Abstract [en]

Data analytics remains essential for process improvement and optimization. Statistical process control (SPC) and design of experiments (DoE) are among the most powerful process and product improvement methods available. However, continuous process environments challenge the application of these methods. In this article, we highlight SPC and DoE implementation challenges described in the literature for managers, researchers and practitioners interested in continuous production process improvement. The results may help managers support the implementation of these methods and make researchers and practitioners aware of methodological challenges in continuous process environments.

Keywords
Productivity, Statistical tools, Continuous processes
National Category
Engineering and Technology; Reliability and Maintenance
Research subject
Quality Technology and Management
Identifiers
urn:nbn:se:ltu:diva-65568 (URN)
Conference
24th EurOMA Conference, Edinburgh, July 1-5, 2017
Projects
Statistical Methods for Improving Continuous Production
Funder
Swedish Research Council, 4731241
Available from: 2017-09-11 Created: 2017-09-11 Last updated: 2018-03-26. Bibliographically approved.
Vanhatalo, E., Kulahci, M. & Bergquist, B. (2017). On the structure of dynamic principal component analysis used in statistical process monitoring. Chemometrics and Intelligent Laboratory Systems, 167, 1-11
2017 (English). In: Chemometrics and Intelligent Laboratory Systems, ISSN 0169-7439, E-ISSN 1873-3239, Vol. 167, p. 1-11. Article in journal (Refereed). Published.
Abstract [en]

When principal component analysis (PCA) is used for statistical process monitoring, it relies on the assumption that data are time independent. However, industrial data will often exhibit serial correlation. Dynamic PCA (DPCA) has been suggested as a remedy for high-dimensional and time-dependent data. In DPCA the input matrix is augmented by adding time-lagged values of the variables. In building a DPCA model, the analyst needs to decide on (1) the number of lags to add, and (2) given a specific lag structure, how many principal components to retain. In this article we propose a new analyst-driven method to determine the maximum number of lags in DPCA, with a foundation in multivariate time series analysis. The method is based on the behavior of the eigenvalues of the lagged autocorrelation and partial autocorrelation matrices. Given a specific lag structure, we also propose a method for determining the number of principal components to retain. The number of retained principal components is determined by visual inspection of the serial correlation in the squared prediction error (SPE, or Q) statistic, together with the cumulative explained variance of the model. The methods are illustrated using simulated vector autoregressive and moving average data, and tested on Tennessee Eastman process data.
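A short sketch of the input augmentation DPCA relies on: the data matrix is extended with time-lagged copies of every variable before ordinary PCA is applied. The eigenvalue-based rules proposed in the article for choosing the maximum lag and the number of retained components are not reproduced; the lag and the toy AR(1) data below are only illustrative choices.

```python
import numpy as np

def lagged_matrix(X, n_lags):
    """Augment X (time x variables) with n_lags time-lagged copies of every column."""
    T = X.shape[0]
    # Row i of the result corresponds to time n_lags + i and holds
    # [x_t, x_{t-1}, ..., x_{t-n_lags}] stacked column-wise.
    blocks = [X[n_lags - lag:T - lag] for lag in range(n_lags + 1)]
    return np.hstack(blocks)

rng = np.random.default_rng(5)
X = np.zeros((300, 4))                      # a simple serially correlated (AR(1)) process
for t in range(1, 300):
    X[t] = 0.8 * X[t - 1] + rng.normal(size=4)

X_aug = lagged_matrix(X, n_lags=2)          # illustrative lag choice
X_aug = X_aug - X_aug.mean(axis=0)
eigvals = np.linalg.svd(X_aug, compute_uv=False) ** 2 / (len(X_aug) - 1)
print("share of variance in leading components:", np.round(eigvals[:5] / eigvals.sum(), 2))
```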

Place, publisher, year, edition, pages
Elsevier, 2017
Keywords
Dynamic principal component analysis, Vector autoregressive process, Vector moving average process, Autocorrelation, Simulation, Tennessee Eastman process simulator
National Category
Reliability and Maintenance
Research subject
Quality Technology and Management
Identifiers
urn:nbn:se:ltu:diva-63377 (URN); 10.1016/j.chemolab.2017.05.016 (DOI); 000408790200001 (); 2-s2.0-85019887093 (Scopus ID)
Funder
Swedish Research Council, 340-2013-5108
Note

Validated; 2017; Level 2; 2017-06-02 (rokbeg)

Available from: 2017-05-16 Created: 2017-05-16 Last updated: 2018-07-10. Bibliographically approved.
Identifiers
ORCID iD: orcid.org/0000-0003-4222-9631
