1 - 50 of 91 hits
  • 1.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi. Technical University of Denmark, Lyngby, Denmark.
    Discussion on "Søren Bisgaard's contributions to Quality Engineering: Design of experiments". 2019. In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 31, no. 1, pp. 149-153. Article in journal (Refereed)
  • 2.
    Frumosu, Flavia D.
    et al.
    Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Outliers detection using an iterative strategy for semi‐supervised learning. 2019. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 35, no. 5, pp. 1408-1423. Article in journal (Refereed)
    Abstract [en]

    As a direct consequence of production systems' digitalization, high‐frequency and high‐dimensional data have become more easily available. In terms of data analysis, latent structures‐based methods are often employed when analyzing multivariate and complex data. However, these methods are designed for supervised learning problems when sufficient labeled data are available. Particularly for fast production rates, quality characteristics data tend to be scarcer than available process data generated through multiple sensors and automated data collection schemes. One way to overcome the problem of scarce outputs is to employ semi‐supervised learning methods, which use both labeled and unlabeled data. It has been shown that a semi‐supervised approach is advantageous when the labeled and unlabeled data come from the same distribution. In real applications, there is a chance that unlabeled data contain outliers or even a drift in the process, which will affect the performance of the semi‐supervised methods. The research question addressed in this work is how to detect outliers in the unlabeled data set using the scarce labeled data set. An iterative strategy based on combined Hotelling's T2 and Q statistics is proposed and applied with a semi‐supervised principal component regression (SS‐PCR) approach on both simulated and real data sets.
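
The Hotelling's T2 and Q statistics mentioned above are standard PCA monitoring quantities and can be sketched compactly. This is a minimal illustration of the two statistics only, not the authors' iterative SS-PCR strategy; the function name and interface are invented for the example.

```python
import numpy as np

def pca_t2_q(X_train, X_new, n_components=2):
    """Hotelling's T2 and Q (squared prediction error) statistics for new
    samples, against a PCA model fitted on standardized training data."""
    mu = X_train.mean(axis=0)
    sd = X_train.std(axis=0, ddof=1)
    Z = (X_train - mu) / sd
    # PCA via SVD of the centered and scaled training data
    U, S, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T                       # loadings (p x a)
    lam = (S[:n_components] ** 2) / (len(Z) - 1)  # component variances
    Zn = (X_new - mu) / sd
    T = Zn @ P                                    # scores of new samples
    t2 = np.sum((T ** 2) / lam, axis=1)           # Hotelling's T2
    resid = Zn - T @ P.T                          # part outside the model
    q = np.sum(resid ** 2, axis=1)                # Q / SPE statistic
    return t2, q
```

Large T2 flags unusual variation inside the model plane; large Q flags samples the PCA model cannot reconstruct, which is why the two are used in combination.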

  • 3.
    Capaci, Francesca
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Vanhatalo, Erik
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi. Technical University of Denmark.
    Bergquist, Bjarne
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    The Revised Tennessee Eastman Process Simulator as Testbed for SPC and DoE Methods. 2019. In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 31, no. 2, pp. 212-229. Article in journal (Refereed)
    Abstract [en]

    Engineering process control and high-dimensional, time-dependent data present great methodological challenges when applying statistical process control (SPC) and design of experiments (DoE) in continuous industrial processes. Process simulators able to mimic these challenges are instrumental in research and education. This article focuses on the revised Tennessee Eastman process simulator, providing guidelines for its use as a testbed for SPC and DoE methods. We provide flowcharts that can help new users get started in the Simulink/Matlab framework, and illustrate how to run stochastic simulations for SPC and DoE applications using the Tennessee Eastman process.

  • 4.
    Rauf Khan, Abdul
    et al.
    Department of Electronic Systems, Aalborg University, Denmark.
    Schioler, Henrik
    Department of Electronic Systems, Aalborg University, Denmark.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi. Department of Applied Mathematics and Computer Science, Technical University of Denmark.
    Knudsen, Torben
    Department of Electronic Systems, Aalborg University, Denmark.
    Big data analytics for industrial process control. 2018. In: IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Piscataway, NJ: IEEE, 2018, Vol. Part F134116. Conference paper (Refereed)
    Abstract [en]

    Today, in modern factories, each step in manufacturing produces a wealth of valuable and highly precise information. This provides a great opportunity for understanding the hidden statistical dependencies in the process. Systematic analysis and the use of advanced analytical methods can lead to more informed decisions. In this article we discuss some of the challenges related to big data analysis in manufacturing, and relevant solutions to some of these challenges.

  • 5.
    Frumosu, Flavia D.
    et al.
    Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi. Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark.
    Big data analytics using semi‐supervised learning methods. 2018. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 34, no. 7, pp. 1413-1423. Article in journal (Refereed)
    Abstract [en]

    The expanding availability of complex data structures requires development of new analysis methods for process understanding and monitoring. In manufacturing, this is primarily due to high‐frequency and high‐dimensional data available through automated data collection schemes and sensors. However, particularly for fast production rate situations, data on the quality characteristics of the process output tend to be scarcer than the available process data. There has been a considerable effort in incorporating latent structure–based methods in the context of complex data. The research question addressed in this paper is to make use of latent structure–based methods in the pursuit of better predictions using all available data, including the process data for which there are no corresponding output measurements, i.e., unlabeled data. Inspiration for the research question comes from an industrial setting where there is a need for prediction with extremely low tolerances. A semi‐supervised principal component regression method is compared against benchmark latent structure–based methods, principal components regression, and partial least squares, on simulated and experimental data. In the analysis, we show the circumstances in which it becomes more advantageous to use the semi‐supervised principal component regression over these competing methods.
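
One plausible reading of semi-supervised principal component regression is sketched below: the PCA loadings are estimated from all available process data, labeled and unlabeled alike, while the regression from scores to the response uses only the labeled subset. The function name and details are assumptions for illustration, not necessarily the paper's exact algorithm.

```python
import numpy as np

def ss_pcr_fit_predict(X_lab, y_lab, X_unlab, X_new, n_components=2):
    """Semi-supervised PCR sketch: loadings from labeled + unlabeled X,
    score-to-response regression from the labeled subset only."""
    X_all = np.vstack([X_lab, X_unlab])
    mu = X_all.mean(axis=0)
    _, _, Vt = np.linalg.svd(X_all - mu, full_matrices=False)
    P = Vt[:n_components].T            # loadings from all available X
    T_lab = (X_lab - mu) @ P           # scores of the labeled samples
    # least-squares fit of y on scores, with an intercept column
    A = np.column_stack([np.ones(len(T_lab)), T_lab])
    beta, *_ = np.linalg.lstsq(A, y_lab, rcond=None)
    T_new = (X_new - mu) @ P
    return np.column_stack([np.ones(len(T_new)), T_new]) @ beta
```

The point of the construction is that unlabeled rows still improve the estimate of the latent subspace even though they never enter the regression step.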

  • 6.
    Spooner, Max
    et al.
    DTU Compute, Technical University of Denmark, Asmussens Alle 322, 2800 Kgs. Lyngby, Denmark.
    Kold, David
    Chr. Hansen A/S, Hvidovre.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi. DTU Compute, Technical University of Denmark, Kgs. Lyngby, Denmark.
    Harvest time prediction for batch processes. 2018. In: Computers and Chemical Engineering, ISSN 0098-1354, E-ISSN 1873-4375, Vol. 117, pp. 32-41. Article in journal (Refereed)
    Abstract [en]

    Batch processes usually exhibit variation in the time at which individual batches are stopped (referred to as the harvest time). Harvest time is based on the occurrence of some criterion, and there may be great uncertainty as to when this criterion will be satisfied. This uncertainty increases the difficulty of scheduling downstream operations and results in fewer completed batches per day. A real case study of a bacteria fermentation process is presented. We consider the problem of predicting the harvest time of a batch in advance to reduce variation and improve batch quality. Lasso regression is used to obtain an interpretable model for predicting the harvest time at an early stage in the batch. A novel method for updating the harvest time predictions as a batch progresses is presented, based on information obtained from online alignment using dynamic time warping.
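
A generic coordinate-descent lasso shows how a sparse, interpretable model like the harvest-time predictor could be fitted; this is a textbook sketch, not the authors' exact fitting procedure, and the function name is ours.

```python
import numpy as np

def lasso_cd(X, y, alpha, n_sweeps=200):
    """Coordinate-descent lasso: minimize
    (1/2)||y - X b||^2 + n*alpha*||b||_1 by soft-thresholding updates."""
    n, p = X.shape
    beta = np.zeros(p)
    col_sq = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            # partial residual excluding variable j
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r
            # soft-thresholding keeps small coefficients at exactly zero
            beta[j] = np.sign(rho) * max(abs(rho) - n * alpha, 0.0) / col_sq[j]
    return beta
```

The L1 penalty zeroes out weak predictors entirely, which is what makes the resulting model easy to interpret at an early stage of the batch.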

  • 7.
    Spooner, Max
    et al.
    DTU Compute, Technical University of Denmark, Kgs. Lyngby, Denmark.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi. DTU Compute, Technical University of Denmark, Kgs. Lyngby, Denmark.
    Monitoring batch processes with dynamic time warping and k-nearest neighbours. 2018. In: Chemometrics and Intelligent Laboratory Systems, ISSN 0169-7439, E-ISSN 1873-3239, Vol. 183, pp. 102-112. Article in journal (Refereed)
    Abstract [en]

    A novel data driven approach to batch process monitoring is presented, which combines the k-Nearest Neighbour rule with the dynamic time warping (DTW) distance. This online method (DTW-NN) calculates the DTW distance between an ongoing batch and each batch in a reference database of batches produced under normal operating conditions (NOC). The sum of the k smallest DTW distances is monitored. If a fault occurs in the ongoing batch, then this distance increases and an alarm is generated. The monitoring statistic is easy to interpret, being a direct measure of similarity of the ongoing batch to its nearest NOC predecessors, and the method makes no distributional assumptions regarding normal operating conditions. DTW-NN is applied to four extensive datasets from simulated batch production of penicillin, and tested on a wide variety of fault types, magnitudes and onset times. Performance of DTW-NN is contrasted with a benchmark multiway PCA approach, and DTW-NN is shown to perform particularly well when there is clustering of batches under NOC.

  • 8.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Rare-events classification: An approach based on genetic algorithm and Voronoi tessellation. 2018. Conference paper (Refereed)
  • 9.
    Gajjar, Shriram
    et al.
    Department of Chemical Engineering, University of California, Davis.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi. Department of Informatics and Mathematical Modeling, Technical University of Denmark.
    Palazoglu, Ahmet
    Department of Chemical Engineering, University of California, Davis.
    Real-time fault detection and diagnosis using sparse principal component analysis. 2018. In: Journal of Process Control, ISSN 0959-1524, E-ISSN 1873-2771, Vol. 67, pp. 112-128. Article in journal (Refereed)
    Abstract [en]

    With the emergence of smart factories, large volumes of process data are collected and stored at high sampling rates for improved energy efficiency, process monitoring and sustainability. The data collected in the course of enterprise-wide operations consists of information from broadly deployed sensors and other control equipment. Interpreting such large volumes of data with a limited workforce is becoming an increasingly common challenge. Principal component analysis (PCA) is a widely accepted procedure for summarizing data while minimizing information loss. It does so by finding new variables, the principal components (PCs), that are linear combinations of the original variables in the dataset. However, interpreting PCs obtained from many variables from a large dataset is often challenging, especially in the context of fault detection and diagnosis studies. Sparse principal component analysis (SPCA) is a relatively recent technique proposed for producing PCs with sparse loadings via a variance-sparsity trade-off. Using SPCA, some of the loadings on PCs can be restricted to zero. In this paper, we introduce a method to select the number of non-zero loadings in each PC while using SPCA. The proposed approach considerably improves the interpretability of PCs while minimizing the loss of total variance explained. Furthermore, we compare the performance of PCA- and SPCA-based techniques for fault detection and fault diagnosis. The key features of the methodology are assessed through a synthetic example and a comparative study of the benchmark Tennessee Eastman process.
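
The effect of restricting loadings to zero can be mimicked crudely by hard-thresholding ordinary PCA loadings. Note this is only a stand-in to make the idea concrete: SPCA as used in the paper instead solves a penalized variance-sparsity trade-off, and the function name is invented.

```python
import numpy as np

def sparse_loadings(X, n_components=2, nnzl=3):
    """Crude SPCA stand-in: ordinary PCA loadings, keep only the nnzl
    largest-magnitude entries per component, zero the rest, renormalize."""
    Z = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:n_components].T.copy()
    for a in range(n_components):
        keep = np.argsort(np.abs(P[:, a]))[-nnzl:]
        mask = np.zeros(P.shape[0], dtype=bool)
        mask[keep] = True
        P[~mask, a] = 0.0                      # restrict loadings to zero
        P[:, a] /= np.linalg.norm(P[:, a])     # renormalize the component
    return P
```

With only nnzl variables contributing to each component, the components become directly interpretable, at the cost of some explained variance.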

  • 10.
    Rauf Khan, Abdul
    et al.
    Department of Electronic Systems, Aalborg University, Denmark.
    Schioler, Henrik
    Department of Electronic Systems, Aalborg University, Denmark.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi. Department of Applied Mathematics and Computer Science, Technical University of Denmark.
    Selection of objective function for imbalanced classification: an industrial case study. 2018. In: IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Piscataway, NJ: IEEE, 2018. Conference paper (Refereed)
    Abstract [en]

    Today, in modern factories, each step in manufacturing produces a wealth of valuable and highly precise information. This provides a great opportunity for understanding the hidden statistical dependencies in the process. Systematic analysis and the use of advanced analytical methods can lead to more informed decisions. In this article we discuss some of the challenges related to big data analysis in manufacturing, and relevant solutions to some of these challenges.

  • 11.
    Löwe, Roland
    et al.
    Section of Urban Water Systems, Department of Environmental Engineering, Technical University of Denmark (DTU Environment).
    Urich, Christian
    Department of Civil Engineering, Faculty of Engineering, Monash University.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi. Department of Applied Mathematics and Computer Science, Technical University of Denmark (DTU Compute).
    Radhakrishnan, Mohanasundar
    IHE Delft, Institute for Water Education.
    Deletic, Ana
    Department of Civil Engineering, Faculty of Engineering, Monash University.
    Arnbjerg-Nielsen, Karsten
    Section of Urban Water Systems, Department of Environmental Engineering, Technical University of Denmark (DTU Environment).
    Simulating flood risk under non-stationary climate and urban development conditions: Experimental setup for multiple hazards and a variety of scenarios. 2018. In: Environmental Modelling & Software, ISSN 1364-8152, E-ISSN 1873-6726, Vol. 102, pp. 155-171. Article in journal (Refereed)
    Abstract [en]

    A framework for assessing economic flood damage for a large number of climate and urban development scenarios with limited computational effort is presented. Response surfaces are applied to characterize flood damage based on physical variables describing climate-driven hazards and changing vulnerability resulting from urban growth. The framework is embedded in an experimental setup where flood damage obtained from combined hydraulic-urban development simulations is approximated using kriging-metamodels. Space-filling, sequential and stratified sequential sampling strategies are tested. Reliable approximations of economic damage are obtained in a theoretical case study involving pluvial and coastal hazards, and the stratified sequential sampling strategy is most robust to irregular surface shapes. The setup is currently limited to considering only planned urban development patterns and flood adaptation options implemented over short time horizons. However, the number of simulations is reduced by up to one order of magnitude compared to scenario-based methods, highlighting the potential of the approach.

  • 12.
    Capaci, Francesca
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Bergquist, Bjarne
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Vanhatalo, Erik
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Exploring the Use of Design of Experiments in Industrial Processes Operating Under Closed-Loop Control. 2017. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 33, no. 7, pp. 1601-1614. Article in journal (Refereed)
    Abstract [en]

    Industrial manufacturing processes often operate under closed-loop control, where automation aims to keep important process variables at their set-points. In process industries such as pulp, paper, chemical and steel plants, it is often hard to find production processes operating in open loop. Instead, closed-loop control systems will actively attempt to minimize the impact of process disturbances. However, we argue that an implicit assumption in most experimental investigations is that the studied system is open loop, allowing the experimental factors to freely affect the important system responses. This scenario is typically not found in process industries. The purpose of this article is therefore to explore issues of experimental design and analysis in processes operating under closed-loop control and to illustrate how Design of Experiments can help in improving and optimizing such processes. The Tennessee Eastman challenge process simulator is used as a test-bed to highlight two experimental scenarios. The first scenario explores the impact of experimental factors that may be considered as disturbances in the closed-loop system. The second scenario exemplifies a screening design using the set-points of controllers as experimental factors. We provide examples of how to analyze the two scenarios.

  • 13.
    Capaci, Francesca
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Vanhatalo, Erik
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Bergquist, Bjarne
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Managerial implications for improving continuous production processes. 2017. Conference paper (Refereed)
    Abstract [en]

    Data analytics remains essential for process improvement and optimization. Statistical process control and design of experiments are among the most powerful process and product improvement methods available. However, continuous process environments challenge the application of these methods. In this article, we highlight SPC and DoE implementation challenges described in the literature for managers, researchers and practitioners interested in continuous production process improvement. The results may help managers support the implementation of these methods and make researchers and practitioners aware of methodological challenges in continuous process environments.

  • 14.
    Vanhatalo, Erik
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi. Technical University of Denmark.
    Bergquist, Bjarne
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    On the structure of dynamic principal component analysis used in statistical process monitoring. 2017. In: Chemometrics and Intelligent Laboratory Systems, ISSN 0169-7439, E-ISSN 1873-3239, Vol. 167, pp. 1-11. Article in journal (Refereed)
    Abstract [en]

    When principal component analysis (PCA) is used for statistical process monitoring it relies on the assumption that data are time independent. However, industrial data will often exhibit serial correlation. Dynamic PCA (DPCA) has been suggested as a remedy for high-dimensional and time-dependent data. In DPCA the input matrix is augmented by adding time-lagged values of the variables. In building a DPCA model the analyst needs to decide on (1) the number of lags to add, and (2) given a specific lag structure, how many principal components to retain. In this article we propose a new analyst-driven method to determine the maximum number of lags in DPCA with a foundation in multivariate time series analysis. The method is based on the behavior of the eigenvalues of the lagged autocorrelation and partial autocorrelation matrices. Given a specific lag structure we also propose a method for determining the number of principal components to retain. The number of retained principal components is determined by visual inspection of the serial correlation in the squared prediction error statistic, Q (SPE), together with the cumulative explained variance of the model. The methods are illustrated using simulated vector autoregressive and moving average data, and tested on Tennessee Eastman process data.
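
The two building blocks above, the lag-augmented input matrix and the lagged autocorrelation matrices whose eigenvalue behavior guides the choice of maximum lag, can be sketched as follows. Function names are ours, for illustration only.

```python
import numpy as np

def augment_lags(X, n_lags):
    """Build the DPCA input matrix by appending time-lagged copies of each
    variable: row t becomes [x_t, x_{t-1}, ..., x_{t-n_lags}]."""
    n, p = X.shape
    cols = [X[n_lags - l : n - l] for l in range(n_lags + 1)]
    return np.hstack(cols)

def lagged_autocorr_eigvals(X, lag):
    """Eigenvalues of the lag-k autocorrelation matrix of standardized data;
    their behavior over k is what the proposed method inspects."""
    Z = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    n = len(Z)
    if lag > 0:
        C = Z[:-lag].T @ Z[lag:] / (n - lag - 1)
    else:
        C = Z.T @ Z / (n - 1)
    # cross-lag matrices are not symmetric, so eigenvalues may be complex
    return np.linalg.eigvals(C)
```

Running PCA on the augmented matrix converts the dynamic (cross-lag) correlation into ordinary static correlation between columns.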

  • 15.
    Spooner, Max
    et al.
    DTU Compute, Technical University of Denmark, Kgs. Lyngby.
    Kold, David
    Chr. Hansen A/S, Hvidovre.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi. DTU Compute, Technical University of Denmark.
    Selecting local constraint for alignment of batch process data with dynamic time warping. 2017. In: Chemometrics and Intelligent Laboratory Systems, ISSN 0169-7439, E-ISSN 1873-3239, Vol. 167, pp. 161-170. Article in journal (Refereed)
    Abstract [en]

    There are two key reasons for aligning batch process data. The first is to obtain same-length batches so that standard methods of analysis may be applied, whilst the second reason is to synchronise events that take place during each batch so that the same event is associated with the same observation number for every batch. Dynamic time warping has been shown to be an effective method for meeting these objectives. This is based on a dynamic programming algorithm that aligns a batch to a reference batch, by stretching and compressing its local time dimension. The resulting "warping function" may be interpreted as a progress signature of the batch which may be appended to the aligned data for further analysis. For the warping function to be a realistic reflection of the progress of a batch, it is necessary to impose some constraints on the dynamic time warping algorithm, to avoid an alignment which is too aggressive and which contains pathological warping. Previous work has focused on addressing this issue using global constraints. In this work, we investigate the use of local constraints in dynamic time warping and define criteria for evaluating the degree of time distortion and variable synchronisation obtained. A local constraint scheme is extended to include constraints not previously considered, and a novel method for selecting the optimal local constraint with respect to the two criteria is proposed. For illustration, the method is applied to real data from an industrial bacteria fermentation process.

  • 16.
    Gajjar, Shriram
    et al.
    Department of Chemical Engineering, University of California, Davis, CA.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Palazoglu, Ahmet
    Department of Chemical Engineering, University of California, Davis, CA.
    Selection of Non-zero Loadings in Sparse Principal Component Analysis. 2017. In: Chemometrics and Intelligent Laboratory Systems, ISSN 0169-7439, E-ISSN 1873-3239, Vol. 162, pp. 160-171. Article in journal (Refereed)
    Abstract [en]

    Principal component analysis (PCA) is a widely accepted procedure for summarizing data through dimensional reduction. In PCA, the selection of the appropriate number of components and the interpretation of those components have been the key challenging features. Sparse principal component analysis (SPCA) is a relatively recent technique proposed for producing principal components with sparse loadings via the variance-sparsity trade-off. Although several techniques for deriving sparse loadings have been offered, no detailed guidelines for choosing the penalty parameters to obtain a desired level of sparsity are provided. In this paper, we propose the use of a genetic algorithm (GA) to select the number of non-zero loadings (NNZL) in each principal component while using SPCA. The proposed approach considerably improves the interpretability of principal components and addresses the difficulty in the selection of NNZL in SPCA. Furthermore, we compare the performance of PCA and SPCA in uncovering the underlying latent structure of the data. The key features of the methodology are assessed through a synthetic example, pitprops data and a comparative study of the benchmark Tennessee Eastman process.

  • 17.
    Kulahci, Murat
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Tyssedal, John Sølve
    Department of Mathematical Sciences, The Norwegian University of Science and Technology, Trondheim.
    Split-plot designs for multistage experimentation. 2017. In: Journal of Applied Statistics, ISSN 0266-4763, E-ISSN 1360-0532, Vol. 44, no. 3, pp. 493-510. Article in journal (Refereed)
    Abstract [en]

    Most of today's complex systems and processes involve several stages through which the input or raw material must pass before the final product is obtained. Moreover, in many cases factors at different stages interact. Therefore, a holistic approach to experimentation that considers all stages at the same time is more efficient. However, there have been only a few attempts in the literature to provide an adequate and easy-to-use approach for this problem. In this paper, we present a novel methodology for constructing two-level split-plot and multistage experiments. The methodology is based on the Kronecker product representation of orthogonal designs and can be used for any number of stages, for various numbers of subplots and for different numbers of subplots for each stage. The procedure is demonstrated on both regular and nonregular designs and provides the maximum number of factors that can be accommodated in each stage. Furthermore, split-plot designs for multistage experiments with good projective properties are also provided.
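
As a toy illustration of the Kronecker product representation (not the paper's full construction), two 2^2 factorial stage designs can be combined into a 16-run two-stage layout whose columns remain mutually orthogonal:

```python
import numpy as np

# A 2^2 full factorial in +/-1 coding, used at each of two stages
d_stage1 = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]])
d_stage2 = d_stage1.copy()

# Kronecker-product layout: each stage-1 (whole-plot) setting is replicated
# across all stage-2 (subplot) runs, giving 4 x 4 = 16 runs in total
whole = np.kron(d_stage1, np.ones((4, 1)))   # stage-1 factors, repeated
sub = np.kron(np.ones((4, 1)), d_stage2)     # stage-2 factors, cycled
D = np.hstack([whole, sub])
```

Because the stage designs are orthogonal and balanced, all four columns of D are mutually orthogonal (D.T @ D equals 16 times the identity), which is the property the Kronecker representation preserves across stages.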

  • 18.
    Capaci, Francesca
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Vanhatalo, Erik
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Bergquist, Bjarne
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    A two-step procedure for fault detection in the Tennessee Eastman Process simulator. 2016. Conference paper (Refereed)
    Abstract [en]

    Complex, high-technology production processes, together with the high availability and sampling frequency of data in large-scale industrial processes, require the concurrent development of appropriate statistical control tools and monitoring techniques. Multivariate control charts based on latent variables are therefore essential tools to detect and isolate process faults. Several statistical process control (SPC) charts have been developed for multivariate and megavariate data, such as the Hotelling T2, MCUSUM and MEWMA control charts, as well as charts based on principal component analysis (PCA) and dynamic PCA (DPCA). The ability of SPC procedures based on PCA (Kourti, MacGregor 1995) or DPCA (Ku et al. 1995) to detect and isolate process disturbances for a large number of highly correlated (and, in the case of DPCA, time-dependent) variables has been demonstrated in the literature. However, we argue that the fault isolation capability and the fault detection rate can be improved further for processes operating under feedback control loops (in closed loop).

    The purpose of this presentation is to illustrate a two-step method where [1] the variables are pre-classified prior to the analysis and [2] a monitoring scheme based on latent variables is implemented. Step 1 involves a structured qualitative classification of the variables to guide the choice of which variables to monitor in Step 2. We argue that the proposed method will be useful for many practitioners of latent variable-based SPC in processes operating in closed loop, allowing clearer fault isolation and detection and easier implementation of corrective actions. A case study based on data from the Tennessee Eastman process simulator under feedback control loops (Matlab) will be presented. The results from the proposed method are compared with currently available methods through simulations in the R statistics software.

  • 19.
    Vanhatalo, Erik
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Impact of Autocorrelation on Principal Components and Their Use in Statistical Process Control. 2016. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 32, no. 4, pp. 1483-1500. Article in journal (Refereed)
    Abstract [en]

    A basic assumption when using principal component analysis (PCA) for inferential purposes, such as in statistical process control (SPC), is that the data are independent in time. In many industrial processes, frequent sampling and process dynamics make this assumption unrealistic, rendering sampled data autocorrelated (serially dependent). PCA can be used to reduce data dimensionality and to simplify multivariate SPC. Although there have been some attempts in the literature to deal with autocorrelated data in PCA, we argue that the impact of autocorrelation on PCA and PCA-based SPC is neither well understood nor properly documented. This article illustrates through simulations the impact of autocorrelation on the descriptive ability of PCA and on the monitoring performance of PCA-based SPC when autocorrelation is ignored. In the simulations, cross- and autocorrelated data are generated using a stationary first-order vector autoregressive model. The results show that the descriptive ability of PCA may be seriously affected by autocorrelation, causing a need to incorporate additional principal components to maintain the model's explanatory ability. When all variables have the same autocorrelation coefficients the descriptive ability is intact, while a significant impact occurs when the variables have different degrees of autocorrelation. We also illustrate that autocorrelation may impact PCA-based SPC and cause lower false alarm rates and delayed shift detection, especially for negative autocorrelation. However, for larger shifts the impact of autocorrelation seems rather small.
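
The simulation setup described, a stationary first-order vector autoregressive model generating cross- and autocorrelated data, can be reproduced generically; the function name and parameter values below are arbitrary examples, not the article's settings.

```python
import numpy as np

def simulate_var1(Phi, Sigma, n, burn=200, seed=0):
    """Simulate a stationary VAR(1) process x_t = Phi x_{t-1} + e_t with
    e_t ~ N(0, Sigma); a burn-in period removes start-up transients."""
    rng = np.random.default_rng(seed)
    p = Phi.shape[0]
    L = np.linalg.cholesky(Sigma)        # correlated innovations
    x = np.zeros(p)
    out = np.empty((n, p))
    for t in range(-burn, n):
        x = Phi @ x + L @ rng.normal(size=p)
        if t >= 0:
            out[t] = x
    return out
```

With a diagonal Phi each series has lag-1 autocorrelation close to its diagonal entry, so different degrees of autocorrelation per variable (the critical case in the article) are obtained simply by varying the diagonal.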

  • 20.
    Vanhatalo, Erik
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Bergquist, Bjarne
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Capaci, Francesca
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Lag Structure in Dynamic Principal Component Analysis (2016). Conference paper (Refereed)
    Abstract [en]

    Purpose of this Presentation: Automatic data collection schemes and the abundant availability of multivariate data increase the need for latent variable methods in statistical process control (SPC), such as SPC based on principal component analysis (PCA). However, process dynamics combined with high-frequency sampling will often cause successive observations to be autocorrelated, which can have a negative impact on PCA-based SPC; see Vanhatalo and Kulahci (2015). Dynamic PCA (DPCA), proposed by Ku et al. (1995), has been suggested as the remedy, 'converting' dynamic correlation into static correlation by adding the time-lagged variables to the original data before performing PCA. Hence an important issue in DPCA is deciding on the number of time-lagged variables to add in augmenting the data matrix, an issue addressed by Ku et al. (1995) and Rato and Reis (2013). However, we argue that the available methods are rather complicated and lack intuitive appeal. The purpose of this presentation is to illustrate a new and simple method to determine the maximum number of lags to add in DPCA based on the structure of the original data.

    Findings: We illustrate how the maximum number of lags can be determined from time trends in the eigenvalues of the estimated lagged autocorrelation matrices of the original data. We also show the impact of the system dynamics on the number of lags to be considered through vector autoregressive (VAR) and vector moving average (VMA) processes. The proposed method is compared with currently available methods using simulated data.

    Research Limitations / Implications: The method assumes that the same number of lags is added for all variables. Future research will focus on adapting our proposed method to accommodate the identification of individual time lags for each variable.

    Practical Implications: The visualization possibilities of the proposed method will be useful for DPCA practitioners.

    Originality/Value of Presentation: The proposed method provides a tool to determine the number of lags in DPCA that works in a manner similar to the autocorrelation function (ACF) in the identification of univariate time series models and does not require several rounds of PCA.

    Design/Methodology/Approach: The results are based on Monte Carlo simulations in the R statistics software and in the Tennessee Eastman process simulator (Matlab).
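    The quantity the method inspects can be illustrated as follows (a hypothetical numpy sketch of eigenvalues of estimated lag-k autocorrelation matrices, not the authors' implementation; plotted against k, these behave much like a univariate ACF):

```python
import numpy as np

def lagged_autocorr_eigvals(x, max_lag):
    """Eigenvalue magnitudes (descending) of the estimated lag-k
    autocorrelation matrices of a multivariate series, k = 0..max_lag."""
    x = (x - x.mean(axis=0)) / x.std(axis=0)     # standardize each variable
    n = len(x)
    out = []
    for k in range(max_lag + 1):
        r = x[:n - k].T @ x[k:] / n              # estimated lag-k correlation matrix
        out.append(np.sort(np.abs(np.linalg.eigvals(r)))[::-1])
    return np.array(out)

rng = np.random.default_rng(0)
x = rng.standard_normal((2000, 3))               # white noise: no lag structure
ev = lagged_autocorr_eigvals(x, max_lag=5)
print(ev.shape)
```

    For white noise the eigenvalue magnitudes drop toward zero for k > 0; for a series with dynamics they decay slowly, suggesting how many lags to include.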

  • 21.
    Gronskyte, Ruta
    et al.
    DTU Compute, Technical University of Denmark.
    Clemmensen, Line Harder
    DTU Compute, Technical University of Denmark.
    Hviid, Marchen Sonja
    Danish Meat Research Institute, Taastrup.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Monitoring pig movement at the slaughterhouse using optical flow and modified angular histograms (2016). In: Biosystems Engineering, ISSN 1537-5110, E-ISSN 1537-5129, Vol. 141, pp. 19-30. Article in journal (Refereed)
    Abstract [en]

    We analyse the movement of pig herds through video recordings at a slaughterhouse using statistical analysis of optical flow (OF) patterns. Unlike previous attempts to analyse pig movement, no markers, trackers, or identification of individual pigs are needed. Our method handles the analysis of unconstrained areas where pigs are constantly entering and leaving. The goal is to improve animal welfare by real-time prediction of abnormal behaviour, allowing proper interventions. The aim of this study is to identify any stationary pig, which can be an indicator of an injury or an obstacle. In this study, we use the OF vectors to describe points of movement on all pigs and thereby analyse the herd movement. Subsequently, the OF vectors are used to identify abnormal movements of individual pigs. The OF vectors obtained from the pigs point in multiple directions rather than in one movement direction. To accommodate the multiple directions of the OF vectors, we propose to quantify OF by summing the vectors into bins according to their angles, which we call modified angular histograms. Sequential feature selection is used to select the angle ranges that identify pigs moving abnormally in the herd. The vector lengths from the selected angle ranges are compared to the corresponding median, 25th, and 75th percentiles from a training set containing only normally moving pigs. We show that the method is capable of locating stationary pigs in the recordings regardless of the number of pigs in the frame.
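    The "modified angular histogram" can be sketched in a few lines (our own minimal numpy illustration; the bin count and the toy flow vectors are assumptions, not values from the paper):

```python
import numpy as np

def modified_angular_histogram(dx, dy, n_bins=8):
    """Bin optical-flow vectors by angle and sum their LENGTHS per bin
    (rather than counting vectors, as an ordinary angular histogram would)."""
    angles = np.arctan2(dy, dx)                       # in (-pi, pi]
    lengths = np.hypot(dx, dy)
    bins = np.linspace(-np.pi, np.pi, n_bins + 1)
    idx = np.clip(np.digitize(angles, bins) - 1, 0, n_bins - 1)
    hist = np.zeros(n_bins)
    np.add.at(hist, idx, lengths)                     # accumulate lengths per bin
    return hist

dx = np.array([1.0, -1.0, 0.0, 2.0])                  # toy flow field
dy = np.array([0.0,  0.0, 1.0, 0.0])
h = modified_angular_histogram(dx, dy)
print(h.sum())                                        # total flow magnitude
```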

  • 22.
    Gao, Huihui
    et al.
    School of Information Science and Technology, Beijing University of Chemical Technology, Beijing.
    Gajjar, Shriram
    Department of Chemical Engineering, University of California, Davis.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Zhu, Qunxiong
    School of Information Science and Technology, Beijing University of Chemical Technology, Beijing.
    Palazoglu, Ahmet
    Department of Chemical Engineering, University of California, Davis.
    Process Knowledge Discovery Using Sparse Principal Component Analysis (2016). In: Industrial & Engineering Chemistry Research, ISSN 0888-5885, E-ISSN 1520-5045, Vol. 55, no. 46, pp. 12046-12059. Article in journal (Refereed)
    Abstract [en]

    As the goals of ensuring process safety and energy efficiency become ever more challenging, engineers increasingly rely on data collected from such processes for informed decision making. During recent decades, extracting and interpreting valuable process information from large historical data sets has been an active area of research. Among the methods used, principal component analysis (PCA) is a well-established technique that allows for dimensionality reduction for large data sets by finding new uncorrelated variables, namely principal components (PCs). However, it is difficult to interpret the derived PCs, as each PC is a linear combination of all of the original variables and the loadings are typically nonzero. Sparse principal component analysis (SPCA) is a relatively recent technique proposed for producing PCs with sparse loadings via the variance–sparsity trade-off. We propose a forward SPCA approach that helps uncover the underlying process knowledge regarding variable relations. This approach systematically determines the optimal sparse loadings for each sparse PC while improving interpretability and minimizing information loss. The salient features of the proposed approach are demonstrated through the Tennessee Eastman process simulation. The results indicate how knowledge and process insight can be discovered through a systematic analysis of sparse loadings.
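    The variance–sparsity trade-off can be illustrated with a crude numpy-only stand-in (hard-thresholding of ordinary PCA loadings; this is NOT the paper's forward SPCA algorithm, and the threshold is an arbitrary assumption):

```python
import numpy as np

# Data with one strongly correlated pair of variables among five.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 5))
X[:, 1] += 0.9 * X[:, 0]

S = np.cov(X, rowvar=False)
vals, vecs = np.linalg.eigh(S)
pc1 = vecs[:, -1]                             # leading loading vector (dense)

# Hard-threshold small loadings to zero, then renormalize: the sparse vector
# explains slightly less variance but is far easier to interpret.
sparse = np.where(np.abs(pc1) < 0.3, 0.0, pc1)
sparse /= np.linalg.norm(sparse)
print(np.count_nonzero(sparse), "of", sparse.size, "loadings kept")
```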

  • 23.
    Vining, Geoff
    et al.
    Virginia Polytechnic Institute and State University, Blacksburg, VA.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Pedersen, Søren
    Technical University of Denmark, Lyngby.
    Recent Advances and Future Directions for Quality Engineering (2016). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 32, no. 3, pp. 863-875. Article in journal (Refereed)
    Abstract [en]

    The origins of quality engineering are in manufacturing, where quality engineers apply basic statistical methodologies to improve the quality and productivity of products and processes. In the past decade, people have discovered that these methodologies are effective for improving almost any type of system or process, such as financial, health care, and supply chain systems. This paper begins with a review of key advances and trends within quality engineering over the past decade. The second part uses the first part as a foundation to outline new application areas for the field. It also discusses how quality engineering needs to evolve in order to make significant contributions to these new areas.

  • 24.
    Capaci, Francesca
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Vanhatalo, Erik
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Bergquist, Bjarne
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Simulating Experiments in Closed-Loop Control Systems (2016). In: ENBIS-16 in Sheffield, 2016. Conference paper (Refereed)
    Abstract [en]

    Design of Experiments (DoE) literature extensively discusses how to properly plan, conduct and analyze experiments for process and product improvement. However, it is typically assumed that the experiments are run on processes operating in open-loop: the changes in experimental factors are directly visible in process responses and are not hidden by (automatic) feedback control. Under this assumption, DoE methods have been successfully applied in process industries such as chemical, pharmaceutical and biological industries.

    However, increasing instrumentation, automation, and interconnectedness are changing how processes are run. Processes often involve engineering process control, as in closed-loop systems. The closed-loop environment adds complexity to experimentation and analysis, since the experimenter must account for control actions that may aim to keep a response variable at its set-point value. The common approach to experimental design and analysis will likely need adjustments in the presence of closed-loop controls. Careful consideration is needed, for instance, when the experimental factors are chosen. Moreover, the impact of the experimental factors may not be directly visible as changes in the response variables (Hild, Sanders, & Cooper, 2001). Instead, other variables may need to be used as proxies for the intended response variable(s).

    The purpose of this presentation is to illustrate how experiments in closed-loop systems can be planned and analyzed. A case study based on the Tennessee Eastman process simulator run with a decentralized feedback control strategy in Matlab (Lawrence Ricker, 1996) is discussed and presented.

  • 25.
    Kulahci, Murat
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Menon, Anil
    Celgene Corporation, 556 Morris Ave, Summit, NJ.
    Trellis Plots as Visual Aids for Analyzing Split Plot Experiments (2016). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 29, no. 2, pp. 211-225. Article in journal (Refereed)
    Abstract [en]

    The analysis of split plot experiments can be challenging due to a complicated error structure resulting from restrictions on complete randomization. Similarly, standard visualization methods do not provide the insight practitioners desire to understand the data, think of explanations, generate hypotheses, build models, or decide on next steps. This article demonstrates the effective use of trellis plots in the preliminary data analysis for split plot experiments to address this problem. Trellis displays help to visualize multivariate data by allowing for conditioning in a general way. They can also be used after the statistical analysis for verification, clarification, and communication.

  • 26.
    Gajjar, Shriram
    et al.
    University of California, Davis.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Palazoglu, Ahmet
    Department of Chemical Engineering, University of California, Davis.
    Use of Sparse Principal Component Analysis (SPCA) for Fault Detection (2016). In: IFAC-PapersOnLine, ISSN 2405-8963, Vol. 49, no. 7, pp. 693-698. Article in journal (Refereed)
    Abstract [en]

    Principal component analysis (PCA) has been widely used for data dimension reduction and process fault detection. However, interpreting the principal components and the outcomes of PCA-based monitoring techniques is a challenging task, since each principal component is a linear combination of the original variables, which can be numerous in most modern applications. To address this challenge, we first propose the use of sparse principal component analysis (SPCA), where the loadings of some variables in the principal components are restricted to zero. This paper then describes a technique to determine the number of non-zero loadings in each principal component. Furthermore, we compare the performance of PCA and SPCA in fault detection. The validity and potential of SPCA are demonstrated through simulated data and a comparative study with the benchmark Tennessee Eastman process.

  • 27.
    Tyssedal, John Sølve
    et al.
    Department of Mathematical Sciences, The Norwegian University of Science and Technology, Trondheim.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Experiments for multi-stage processes (2015). In: Quality Technology & Quantitative Management, ISSN 1684-3703, E-ISSN 1811-4857, Vol. 12, no. 1, pp. 13-28. Article in journal (Refereed)
    Abstract [en]

    Multi-stage processes are very common in both process and manufacturing industries. In this article we present a methodology for designing experiments for multi-stage processes. Typically, in these situations, the design is expected to involve many factors from different stages. To minimize the required number of experimental runs, we suggest using mirror image pairs of experiments at each stage following the first. As the design criterion, we consider their projectivity and mainly focus on projectivity P ≥ 3 designs. We provide the methodology for generating these designs for processes with any number of stages and also show how to identify and estimate the effects. Both regular and non-regular designs are considered as base designs in generating the overall design.
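    The "mirror image pair" construction can be sketched as follows (a fold-over of a base two-level design; the full 2^3 factorial used as the base here is an illustrative assumption, not one of the paper's multi-stage designs):

```python
import numpy as np
from itertools import product

# Base two-level design: a full 2^3 factorial in coded units.
base = np.array(list(product([-1, 1], repeat=3)))

# Mirror runs: every factor sign is reversed for each base run.
mirror = -base
paired = np.vstack([base, mirror])
print(paired.shape)
```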

  • 28.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Guest editorial (2015). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 27, no. 1, p. 1. Article in journal (Other academic)
  • 29.
    Guyonvarch, Estelle
    et al.
    Department of Environmental Engineering (DTU Environment), Technical University of Denmark.
    Ramin, Elham
    Department of Environmental Engineering (DTU Environment), Technical University of Denmark.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Plósz, Benedek Gy
    Department of Environmental Engineering (DTU Environment), Technical University of Denmark.
    iCFD: Interpreted computational fluid dynamics – Degeneration of CFD to one-dimensional advection-dispersion models using statistical experimental design – The secondary clarifier (2015). In: Water Research, ISSN 0043-1354, E-ISSN 1879-2448, Vol. 83, pp. 396-411. Article in journal (Refereed)
    Abstract [en]

    The present study aims at using statistically designed computational fluid dynamics (CFD) simulations as numerical experiments for the identification of one-dimensional (1-D) advection-dispersion models – computationally light tools, used e.g., as sub-models in systems analysis. The objective is to develop a new 1-D framework, referred to as interpreted CFD (iCFD) models, in which statistical meta-models are used to calculate the pseudo-dispersion coefficient (D) as a function of design and flow boundary conditions. The method – presented in a straightforward and transparent way – is illustrated using the example of a circular secondary settling tank (SST). First, the significant design and flow factors are screened out by applying the statistical method of two-level fractional factorial design of experiments. Second, based on the number of significant factors identified through the factor screening study and system understanding, 50 different sets of design and flow conditions are selected using Latin Hypercube Sampling (LHS). The boundary condition sets are imposed on a 2-D axi-symmetrical CFD simulation model of the SST. In the framework, to degenerate the 2-D model structure, CFD model outputs are approximated by the 1-D model through the calibration of three different model structures for D. Correlation equations for the D parameter then are identified as a function of the selected design and flow boundary conditions (meta-models), and their accuracy is evaluated against D values estimated in each numerical experiment. The evaluation and validation of the iCFD model structure is carried out using scenario simulation results obtained with parameters sampled from the corners of the LHS experimental region. 
For the studied SST, additional iCFD model development was carried out in terms of (i) assessing different density current sub-models; (ii) implementing a combined flocculation, hindered, transient, and compression settling velocity function; and (iii) assessing the modelling of the onset of transient and compression settling. Furthermore, the optimal level of model discretization in both 2-D and 1-D was determined. Results suggest that the iCFD model developed for the SST through the proposed methodology is able to predict the solids distribution with high accuracy – at a reasonable computational effort – when compared to multi-dimensional numerical experiments, under a wide range of flow and design conditions. iCFD tools could play a crucial role in reliably predicting systems' performance under normal and shock events.

  • 30.
    Gronskyte, Ruta
    et al.
    DTU Compute, Technical University of Denmark.
    Clemmensen, Line Harder
    DTU Compute, Technical University of Denmark.
    Hviid, Marchen Sonja
    Danish Meat Research Institute, Taastrup.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Pig herd monitoring and undesirable tripping and stepping prevention (2015). In: Computers and Electronics in Agriculture, ISSN 0168-1699, E-ISSN 1872-7107, Vol. 119, pp. 51-60. Article in journal (Refereed)
    Abstract [en]

    Humane handling and slaughter of livestock are of major concern in modern societies. Monitoring animal wellbeing in slaughterhouses is critical in preventing unnecessary stress and physical damage to livestock, which can also affect the meat quality. The goal of this study is to monitor pig herds at the slaughterhouse and identify undesirable events such as pigs tripping or stepping on each other. In this paper, we monitor pig behavior in color videos recorded during unloading from transportation trucks. We monitor the movement of a pig herd where the pigs enter and leave a surveyed area. The method is based on optical flow, which is not well explored for monitoring all types of animals but is the method of choice for human crowd monitoring. We recommend using modified angular histograms to summarize the optical flow vectors. We show that the classification rate based on support vector machines is 93% of all frames. The sensitivity of the model is 93.5%, with 90% specificity and a 6.5% false alarm rate. The radial lens distortion and the camera position required for convenient surveillance make the recordings highly distorted. Therefore, we also propose a new approach to correct lens and foreshortening distortions by using moving reference points. The method can be applied in real time during the actual unloading of pigs. In addition, we present a method for identification of the causes leading to undesirable events, which currently only runs off-line. The comparative analysis of three drivers, who performed the unloading of the pigs from the trucks in the available datasets, indicates that the drivers perform significantly differently. Driver 1 has 2.95 times higher odds of pigs tripping and stepping on each other than the other two, and Driver 2 has 1.11 times higher odds than Driver 3.

  • 31.
    Kulahci, Murat
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Bergquist, Bjarne
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Vanhatalo, Erik
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Capaci, Francesca
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Project: Statistical methods for the improvement of continuous manufacturing processes (2015). Other (popular science, debate, etc.)
  • 32.
    Capaci, Francesca
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Bergquist, Bjarne
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Vanhatalo, Erik
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Simulating and Analyzing Experiments in the Tennessee Eastman Process Simulator (2015). In: ENBIS-15, 2015. Conference paper (Refereed)
    Abstract [en]

    In many of today's continuous processes, data collection is usually performed automatically, yielding exorbitant amounts of data on various quality characteristics and inputs to the system. Moreover, such data are usually collected at high frequency, introducing significant serial dependence in time. This violates the independence assumption of many industrial statistics methods used in process improvement studies. These studies often involve controlled experiments to unearth the causal relationships to be used for robustness and optimization purposes.

    However, real production processes are not suitable for studying new experimental methodologies, partly because unknown disturbances or experimental settings may lead to erroneous conclusions. Moreover, large-scale experimentation in production processes is frowned upon due to the consequent disturbances and production delays. Hence, realistic simulation of such processes offers an excellent opportunity for experimentation and methodological development.

    One commonly used process simulator is the Tennessee Eastman (TE) challenge chemical process simulator (Downs & Vogel, 1993). The process produces two products from four reactants and involves 41 measured variables and 12 manipulated variables. In addition to the process description, the problem statement defines process constraints, 20 types of process disturbances, and six operating modes corresponding to different production rates and mass ratios in the product stream.

    The purpose of this paper is to illustrate the use of the TE process with an appropriate feedback control as a test-bed for the methodological developments of new experimental design and analysis techniques.

    The paper illustrates how two-level experimental designs can be used to identify how the input factors affect the outputs in a chemical process.

    Simulations using Matlab/Simulink software are used to study the impact of e.g. process disturbances, closed loop control and autocorrelated data on different experimental arrangements.

    The experiments are analysed using a time series analysis approach to identify input-output relationships in a process operating in closed-loop with multivariate responses. The dynamics of the process are explored and the necessary run lengths for stable effect estimates are discussed.

  • 33.
    Vanhatalo, Erik
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi. Technical University of Denmark, Department of Applied Mathematics and Computer Science.
    The Effect of Autocorrelation on the Hotelling T2 Control Chart (2015). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 31, no. 8, pp. 1779-1796. Article in journal (Refereed)
    Abstract [en]

    One of the basic assumptions for traditional univariate and multivariate control charts is that the data are independent in time. For the latter, the data are in many cases serially dependent (autocorrelated) and cross-correlated due to, for example, frequent sampling and process dynamics. It is well known that autocorrelation affects the false alarm rate and the shift detection ability of traditional univariate control charts. However, how the false alarm rate and the shift detection ability of the Hotelling T2 control chart are affected by various auto- and cross-correlation structures for different magnitudes of shifts in the process mean is not fully explored in the literature. In this article, the performance of the Hotelling T2 control chart for different shift sizes and various auto- and cross-correlation structures is compared based on the average run length (ARL) using simulated data. Three different approaches to constructing the Hotelling T2 chart are studied for two different estimates of the covariance matrix: (1) ignoring the autocorrelation and using the raw data with theoretical upper control limits; (2) ignoring the autocorrelation and using the raw data with adjusted control limits calculated through Monte Carlo simulations; and (3) constructing the control chart for the residuals from a multivariate time series model fitted to the raw data. To limit the complexity, we use a first-order vector autoregressive process, VAR(1), and focus mainly on bivariate data.
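    Approach (1) in the abstract can be sketched as follows (our own minimal illustration, not the paper's simulation code; the VAR(1) coefficient matrix is an assumed example, and for two dimensions the chi-square quantile has a closed form, so no statistical library is needed):

```python
import math
import numpy as np

# Simulate bivariate VAR(1) data, then chart the Hotelling T2 statistic of the
# raw observations against the theoretical chi-square(2) control limit.
rng = np.random.default_rng(7)
phi = 0.8 * np.eye(2)                            # assumed positive autocorrelation
n = 3000
x = np.zeros((n, 2))
for t in range(1, n):
    x[t] = phi @ x[t - 1] + rng.standard_normal(2)

mu = x.mean(axis=0)
s_inv = np.linalg.inv(np.cov(x, rowvar=False))
d = x - mu
t2 = np.einsum('ij,jk,ik->i', d, s_inv, d)       # T2_i = (x_i - mu)' S^-1 (x_i - mu)
ucl = -2.0 * math.log(1 - 0.9973)                # chi2(2) quantile: F(x) = 1 - exp(-x/2)
print("observed alarm rate:", (t2 > ucl).mean())
```

    Comparing the observed alarm rate to the nominal 0.27% for different `phi` matrices reproduces the kind of distortion the article quantifies.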

  • 34.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Discussion of “The Statistical Evaluation of Categorical Measurements: Simple Scales, but Treacherous Complexity Underneath” (2014). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 26, no. 1, pp. 40-43. Article in journal (Refereed)
  • 35.
    Li, Jing
    et al.
    Industrial Engineering, School of Computing, Informatics, and Decision Systems Engineering, Arizona State University.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Editorial: a Special Issue on Data Mining (2014). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 30, no. 6, p. 813. Article in journal (Other academic)
  • 36.
    Hansen, Merete Kjær
    et al.
    Technical University of Denmark, Department of Applied Mathematics and Computer Science.
    Sharma, Anoop Kumar
    Technical University of Denmark, National Food Institute.
    Dybdahl, Marianne
    Technical University of Denmark, National Food Institute.
    Boberg, Julie
    Technical University of Denmark, National Food Institute.
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    In vivo Comet assay: statistical analysis and power calculations of mice testicular cells (2014). In: Mutation Research. Genetic Toxicology and Environmental Mutagenesis, ISSN 1383-5718, E-ISSN 1879-3592, Vol. 774, pp. 29-40. Article in journal (Refereed)
    Abstract [en]

    The in vivo Comet assay is a sensitive method for evaluating DNA damage. A recurrent concern is how to analyze the data appropriately and efficiently. A popular approach is to summarize the raw data into a summary statistic prior to the statistical analysis. However, consensus on which summary statistic to use has yet to be reached. Another important consideration concerns the assessment of proper sample sizes in the design of Comet assay studies. This study aims to identify a statistic suitably summarizing the % tail DNA of mice testicular samples in Comet assay studies. A second aim is to provide curves for this statistic outlining the number of animals and gels to use. The current study was based on 11 compounds administered via oral gavage in three doses to male mice: CAS no. 110-26-9, CAS no. 512-56-1, CAS no. 111873-33-7, CAS no. 79-94-7, CAS no. 115-96-8, CAS no. 598-55-0, CAS no. 636-97-5, CAS no. 85-28-9, CAS no. 13674-87-8, CAS no. 43100-38-5 and CAS no. 60965-26-6. Testicular cells were examined using the alkaline version of the Comet assay and the DNA damage was quantified as % tail DNA using a fully automatic scoring system. From the raw data 23 summary statistics were examined. A linear mixed-effects model was fitted to the summarized data and the estimated variance components were used to generate power curves as a function of sample size. The statistic that most appropriately summarized the within-sample distributions was the median of the log-transformed data, as it most consistently conformed to the assumptions of the statistical model. Power curves for 1.5-, 2-, and 2.5-fold changes of the highest dose group compared to the control group when 50 and 100 cells were scored per gel are provided to aid in the design of future Comet assay studies on testicular cells.

  • 37.
    Li, Jing
    et al.
    School of Computing, Informatics, and Decision Systems Engineering, Arizona State University.
    Kulahci, Murat
    Technical University of Denmark, Department of Applied Mathematics and Computer Science.
    Data Mining: a Special Issue of Quality and Reliability Engineering International (QREI) (2013). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 29, no. 3, p. 437. Article in journal (Other academic)
  • 38.
    Lundkvist, Peder
    et al.
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    Vännman, Kerstin
    Kulahci, Murat
    Luleå tekniska universitet, Institutionen för ekonomi, teknik och samhälle, Industriell Ekonomi.
    A Comparison of Decision Methods for Cpk When Data are Autocorrelated (2012). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 24, no. 4, pp. 460-472. Article in journal (Refereed)
    Abstract [en]

    In many industrial applications, autocorrelated data are becoming increasingly common due to, for example, on-line data collection systems with high-frequency sampling. Therefore, the basic assumption of independent observations in process capability analysis is not valid. The purpose of this article is to compare decision methods using the process capability index Cpk when data are autocorrelated. This is done through a case study followed by a simulation study. In the simulation study, the actual significance level and power of the decision methods are investigated. The outcome is that two of the methods appeared to be better than the others.
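    For reference, the index under discussion can be computed as below (the estimator assumes independent observations; the paper's point is that this assumption fails for autocorrelated data. The specification limits and data are illustrative):

```python
import numpy as np

def cpk(x, lsl, usl):
    """Estimated Cpk: distance from the mean to the nearer specification
    limit, in units of three (sample) standard deviations."""
    m, s = x.mean(), x.std(ddof=1)
    return min(usl - m, m - lsl) / (3.0 * s)

rng = np.random.default_rng(3)
x = rng.normal(loc=10.0, scale=1.0, size=500)    # independent, in-spec process
print(round(cpk(x, lsl=6.0, usl=14.0), 2))
```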

  • 39.
    Capehart, Shay R.
    et al.
    Department of Mathematics at the Air Force Institute of Technology.
    Keha, Ahmet
    Industrial Engineering at Arizona State University.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Montgomery, Douglas C.
    Division of Mathematical and Natural Sciences, Arizona State University.
    Designing fractional factorial split-plot experiments using integer programming, 2011. In: International Journal of Experimental Design and Process Optimisation, ISSN 2040-2252, E-ISSN 2040-2260, Vol. 2, no. 1, pp. 34-57. Article in journal (Refereed)
    Abstract [en]

    Split-plot designs are commonly used in industrial experiments when there are hard-to-change and easy-to-change factors. Due to the number of factors and resource limitations, it is more practical to run a fractional factorial split-plot (FFSP) design. These designs are variations of the fractional factorial (FF) design, with the restricted randomisation structure to account for the whole plots and subplots. We discuss the formulation of FFSP designs using integer programming (IP) to achieve various design criteria. We specifically look at the maximum number of clear two-factor interactions and variations on this criterion.

  • 40.
    Dehlendorff, Christian
    et al.
    Section for Statistics, Department of Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby.
    Kulahci, Murat
    Technical University of Denmark, Lyngby.
    Andersen, Klaus Kaae
    Technical University of Denmark, Lyngby.
    Designing simulation experiments with controllable and uncontrollable factors for applications in healthcare, 2011. In: Journal of the Royal Statistical Society, Series C: Applied Statistics, ISSN 0035-9254, E-ISSN 1467-9876, Vol. 60, no. 1, pp. 31-49. Article in journal (Refereed)
    Abstract [en]

    We propose a new methodology for designing computer experiments that was inspired by the split-plot designs that are often used in physical experimentation. The methodology has been developed for a simulation model of a surgical unit in a Danish hospital. We classify the factors as controllable and uncontrollable on the basis of their characteristics in the physical system. The experiments are designed so that, for a given setting of the controllable factors, the various settings of the uncontrollable factors cover the design space uniformly. Moreover, the methodology allows for overall uniform coverage in the combined design when all settings of the uncontrollable factors are considered at once.
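
    The crossing idea can be sketched by pairing each controllable-factor setting with its own small Latin hypercube over the uncontrollable factors. The names and the per-setting LHS construction are assumptions of this sketch, not the authors' algorithm:

```python
import numpy as np

def crossed_design(controllable_settings, n_unc, d_unc, seed=1):
    """Cross every controllable-factor setting with a small Latin
    hypercube over the d_unc uncontrollable factors, so that for each
    controllable setting the uncontrollable settings cover [0, 1)^d_unc
    uniformly. Returns (controllable_setting, uncontrollable_point) runs."""
    rng = np.random.default_rng(seed)
    runs = []
    for c in controllable_settings:
        # stratified (Latin hypercube) sample: one point per stratum
        perms = np.argsort(rng.random((d_unc, n_unc)), axis=1).T
        lhs = (perms + rng.random((n_unc, d_unc))) / n_unc
        runs.extend((c, tuple(pt)) for pt in lhs)
    return runs
```

    The article's combined-design refinement, where the union over all controllable settings is also uniform, would impose further structure on how the individual hypercubes are chosen.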

  • 41.
    Tyssedal, John Sølve
    et al.
    Department of Mathematical Sciences, The Norwegian University of Science and Technology, Trondheim.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Bisgaard, Søren
    Isenberg School of Management, University of Massachusetts Amherst.
    Split-plot designs with mirror image pairs as sub-plots, 2011. In: Journal of Statistical Planning and Inference, ISSN 0378-3758, E-ISSN 1873-1171, Vol. 141, no. 12, pp. 3686-3696. Article in journal (Refereed)
    Abstract [en]

    In this article we investigate two-level split-plot designs where the sub-plots consist of only two mirror image trials. Assuming third and higher order interactions negligible, we show that these designs divide the estimated effects into two orthogonal sub-spaces, separating sub-plot main effects and sub-plot by whole-plot interactions from the rest. Further, we show how to construct split-plot designs of projectivity P ≥ 3. We also introduce a new class of split-plot designs with mirror image pairs constructed from non-geometric Plackett-Burman designs. The design properties of such designs are very appealing, with the effects of major interest free from full aliasing, assuming that third and higher order interactions are negligible.
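
    The mirror image structure of the sub-plots can be sketched in a few lines; the helper name is an assumption of this sketch:

```python
def mirror_image_pairs(sub_plot_runs):
    """Pair each two-level sub-plot run (a tuple of +/-1 factor levels)
    with its mirror image, the same run with every sign flipped, as in
    designs whose sub-plots consist of exactly one such pair."""
    return [(run, tuple(-x for x in run)) for run in sub_plot_runs]

# each whole plot would then contain one (run, mirror) pair
pairs = mirror_image_pairs([(1, -1, 1), (-1, -1, 1)])
```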

  • 42.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Time Series Analysis and Forecasting by Example, 2011. Book (Refereed)
  • 43.
    Dehlendorff, Christian
    et al.
    Section for Statistics, Department of Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Andersen, Klaus Kaae
    Department of Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby.
    Analysis of computer experiments with multiple noise sources, 2010. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 26, no. 2, pp. 137-146. Article in journal (Refereed)
    Abstract [en]

    In this paper we present a modeling framework for analyzing computer models with two types of variation. The paper is based on a case study of an orthopedic surgical unit, which has both controllable and uncontrollable factors. Our results show that this structure of variation can be modeled effectively with linear mixed effects models and generalized additive models.

  • 44.
    Gupta, Shilpa D.
    et al.
    Division of Mathematical and Natural Sciences, Arizona State University.
    Kulahci, Murat
    Division of Mathematical and Natural Sciences, Arizona State University.
    Montgomery, Douglas C.
    Division of Mathematical and Natural Sciences, Arizona State University.
    Borror, Connie M.
    Division of Mathematical and Natural Sciences, Arizona State University.
    Analysis of signal-response systems using generalized linear mixed models, 2010. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 26, no. 4, pp. 375-385. Article in journal (Refereed)
    Abstract [en]

    Robust parameter design is one of the important tools used in Design for Six Sigma. In this article, we present an application of the generalized linear mixed model (GLMM) approach to robust design and analysis of signal-response systems. We propose a split-plot approach to the signal-response system characterized by two variance components: within-profile variance and between-profile variance. We demonstrate that explicit modeling of the variance components using GLMMs leads to more precise point estimates of important model coefficients, with shorter confidence intervals.

  • 45.
    Dehlendorff, Christian
    et al.
    Section for Statistics, Department of Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby.
    Kulahci, Murat
    Informatics and Mathematical Modelling, Section for Statistics, Lyngby, Technical University of Denmark.
    Merser, Sören
    Frederiksberg University Hospital, Clinic of Orthopaedic Surgery.
    Andersen, Klaus Kaae
    Technical University of Denmark, Lyngby.
    Conditional Value at Risk as a Measure for Waiting Time in Simulations of Hospital Units, 2010. In: Quality Technology & Quantitative Management, ISSN 1684-3703, E-ISSN 1811-4857, Vol. 7, no. 3, pp. 321-336. Article in journal (Refereed)
    Abstract [en]

    The utility of conditional value at risk (CVaR) of a sample of waiting times as a measure for reducing long waiting times is evaluated, with special focus on patient waiting times in a hospital. CVaR is the average of the longest waiting times, i.e., a measure at the tail of the waiting time distribution. The presented results are based on a discrete event simulation (DES) model of an orthopedic surgical unit at a university hospital in Denmark. Our analysis shows that CVaR offers a highly reliable performance measure. The measure targets the longest waiting times, which are generally accepted to be the most problematic from the point of view of both the patients and the management. Moreover, CVaR can be seen as a compromise between two well-known measures: the average waiting time and the maximum waiting time.
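
    As defined above, CVaR is the average of the worst waiting times. A minimal sketch; the tail-selection convention (mean of the worst 1 - alpha fraction of the sample) is an assumption of this sketch:

```python
import numpy as np

def cvar(waiting_times, alpha=0.95):
    """Conditional value at risk of a sample: the average of the worst
    (1 - alpha) fraction of waiting times, i.e. the mean of the tail
    beyond the empirical alpha-quantile."""
    w = np.sort(np.asarray(waiting_times, dtype=float))
    k = int(np.ceil(alpha * len(w)))  # index where the tail starts
    tail = w[k:] if k < len(w) else w[-1:]
    return float(tail.mean())
```

    With alpha = 0.95 this averages the longest 5% of waiting times, sitting between the overall mean (alpha = 0) and the maximum (alpha close to 1).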

  • 46.
    McClary, Daniel W.
    et al.
    School of Computing, Informatics and Decision Systems Engineering, Arizona State University.
    Syrotiuk, Violet R.
    School of Computing, Informatics and Decision Systems Engineering, Arizona State University.
    Kulahci, Murat
    DTU Informatics, Richard Petersens Plads, building 321, DK-2800 Lyngby.
    Profile-driven regression for modeling and runtime optimization of mobile networks, 2010. In: ACM Transactions on Modeling and Computer Simulation, ISSN 1049-3301, E-ISSN 1558-1195, Vol. 20, no. 3, article id 17. Article in journal (Refereed)
    Abstract [en]

    Computer networks often display nonlinear behavior when examined over a wide range of operating conditions. There are few strategies available for modeling such behavior and optimizing such systems as they run. Profile-driven regression is developed and applied to modeling and runtime optimization of throughput in a mobile ad hoc network, a self-organizing collection of mobile wireless nodes without any fixed infrastructure. The intermediate models generated in profile-driven regression are used to fit an overall model of throughput, and are also used to optimize controllable factors at runtime. Unlike earlier models, the throughput model accounts for node speed. The resulting optimization is very effective; locally optimizing the network factors at runtime results in throughput as much as six times higher than that achieved with the factors at their default levels.

  • 47.
    Kulahci, Murat
    et al.
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Holcomb, Don
    Xie, Min
    Special Issue: Design for Six Sigma, 2010. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 26, no. 4 (Special Issue), p. 315. Article in journal (Other academic)
  • 48.
    McClary, Daniel W.
    et al.
    School of Computing, Informatics and Decision Systems Engineering, Arizona State University.
    Syrotiuk, Violet R.
    School of Computing, Informatics and Decision Systems Engineering, Arizona State University.
    Kulahci, Murat
    Technical University of Denmark, Lyngby.
    Steepest-ascent constrained simultaneous perturbation for multiobjective optimization, 2010. In: ACM Transactions on Modeling and Computer Simulation, ISSN 1049-3301, E-ISSN 1558-1195, Vol. 21, no. 1, article id 2. Article in journal (Refereed)
    Abstract [en]

    The simultaneous optimization of multiple responses in a dynamic system is challenging. When a response has a known gradient, it is often easily improved along the path of steepest ascent. On the contrary, a stochastic approximation technique may be used when the gradient is unknown or costly to obtain. We consider the problem of optimizing multiple responses in which the gradient is known for only one response. We propose a hybrid approach for this problem, called simultaneous perturbation stochastic approximation steepest ascent, SPSA-SA or SP(SA)2 for short. SP(SA)2 is an SPSA technique that leverages information about the known gradient to constrain the perturbations used to approximate the others. We apply SP(SA)2 to the cross-layer optimization of throughput, packet loss, and end-to-end delay in a mobile ad hoc network (MANET), a self-organizing wireless network. The results show that SP(SA)2 achieves higher throughput and lower packet loss and end-to-end delay than the steepest ascent, SPSA, and Nelder-Mead stochastic approximation approaches. It also reduces the number of iterations needed to perform the optimization.
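
    The plain SPSA building block that SP(SA)2 constrains can be sketched as follows; the function and parameter names are illustrative, not from the paper:

```python
import numpy as np

def spsa_gradient(f, theta, c=0.1, rng=None):
    """One simultaneous-perturbation gradient estimate: perturb every
    coordinate at once with a random +/-1 (Rademacher) vector and use
    just two evaluations of f, regardless of dimension."""
    rng = rng or np.random.default_rng(0)
    theta = np.asarray(theta, dtype=float)
    delta = rng.choice([-1.0, 1.0], size=theta.size)
    y_plus = f(theta + c * delta)
    y_minus = f(theta - c * delta)
    # elementwise division: estimate_i = (y+ - y-) / (2 c delta_i)
    return (y_plus - y_minus) / (2.0 * c * delta)
```

    SP(SA)2 then restricts which perturbation directions are admissible so that the response with the known gradient keeps improving along its steepest-ascent path.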

  • 49.
    Almimi, Ashraf A.
    et al.
    NASA Langley Research Center, Hampton.
    Kulahci, Murat
    Informatics and Mathematical Modeling, Technical University of Denmark, Lyngby.
    Montgomery, Douglas C.
    Department of Industrial, Systems and Operations Engineering, Arizona State University.
    Checking the adequacy of fit of models from split-plot designs, 2009. In: Journal of Quality Technology, ISSN 0022-4065, Vol. 41, no. 3, pp. 272-284. Article in journal (Refereed)
    Abstract [en]

    One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental error: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose computing two each of the R2, adjusted R2, prediction error sum of squares (PRESS), and R2-prediction statistics to measure the adequacy of fit of the WP and SP submodels in a split-plot design. This is complemented with graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful, as they reveal whether the correct WP and SP effects have been included in the model and describe the predictive performance of each group of effects.
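
    For ordinary least squares, PRESS can be computed from the ordinary residuals and the hat-matrix leverages without refitting n times. A single-model sketch (the article computes the statistics separately for the WP and SP submodels, which this sketch does not do):

```python
import numpy as np

def press(X, y):
    """PRESS for an OLS fit: sum of squared leave-one-out prediction
    errors, obtained via the identity e_(i) = e_i / (1 - h_ii), where
    h_ii are the diagonal entries of the hat matrix."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    H = X @ np.linalg.pinv(X.T @ X) @ X.T  # hat matrix
    h = np.diag(H)
    return float(np.sum((resid / (1.0 - h)) ** 2))
```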

  • 50.
    Bisgaard, Søren
    et al.
    Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Quality quandaries: Time series model selection and parsimony, 2009. In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 21, no. 3, pp. 341-353. Article in journal (Refereed)
    Abstract [en]

    Choosing an adequate model for a given set of data is considered one of the more difficult tasks in time series analysis; even experienced analysts can find model selection hard. A popular approach is therefore to rely on numerical information criteria, which provide useful input to the decision-making process. However, using such criteria alone is not advisable: they should be combined with judgment about the context of the problem and about what is actually being modeled. In particular, a parsimonious mixed autoregressive moving average (ARMA) model is often preferable.
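
    A common numerical criterion of the kind discussed is AIC. A minimal Gaussian version, stated up to an additive constant; this is an illustrative sketch, not the column's own formulation:

```python
import numpy as np

def aic(n, rss, k):
    """Gaussian AIC up to an additive constant: n*ln(RSS/n) + 2k.
    Lower is better; the 2k penalty on the number of parameters is
    the formal side of the parsimony principle."""
    return float(n * np.log(rss / n) + 2 * k)

# comparing two hypothetical ARMA fits on n = 100 observations:
# a small RSS improvement must outweigh the penalty for extra terms
score_small = aic(100, 100.0, 2)
score_large = aic(100, 99.0, 5)
```

    As the column argues, such scores are one input among several: two models within a point or two of each other are better separated by judgment and context than by the criterion alone.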
