Search result list (1 - 50 of 121)
  • 1.
    Aakjær, Mia
    et al.
    Department of Drug Design and Pharmacology, Pharmacovigilance Research Center, University of Copenhagen, Copenhagen, Denmark.
    De Bruin, Marie Louise
    Department of Pharmacy, Copenhagen Centre for Regulatory Science (CORS), University of Copenhagen, Copenhagen, Denmark; Division of Pharmacoepidemiology and Clinical Pharmacology, Utrecht Institute for Pharmaceutical Sciences (UIPS), Utrecht University, Utrecht, The Netherlands.
    Kulahci, Murat
    Luleå University of Technology, Department of Social Sciences, Technology and Arts, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark.
    Andersen, Morten
    Department of Drug Design and Pharmacology, Pharmacovigilance Research Center, University of Copenhagen, Copenhagen, Denmark.
    Surveillance of Antidepressant Safety (SADS): Active Signal Detection of Serious Medical Events Following SSRI and SNRI Initiation Using Big Healthcare Data (2021). In: Drug Safety, ISSN 0114-5916, E-ISSN 1179-1942, Vol. 44, p. 1215-1230. Article in journal (Refereed)
    Abstract [en]

    Introduction The current process for generating evidence in pharmacovigilance has several limitations, which often lead to delays in the evaluation of drug-associated risks.

    Objectives In this study, we proposed and tested a near real-time epidemiological surveillance system using sequential, cumulative analyses focusing on the detection and preliminary risk quantification of potential safety signals following initiation of selective serotonin reuptake inhibitors (SSRIs) and serotonin-norepinephrine reuptake inhibitors (SNRIs).

    Methods We emulated an active surveillance system in an historical setting by conducting repeated annual cohort studies using nationwide Danish healthcare data (1996–2016). Outcomes were selected from the European Medicines Agency's Designated Medical Event list, summaries of product characteristics, and the literature. We followed patients for a maximum of 6 months from treatment initiation to the event of interest or censoring. We performed Cox regression analyses adjusted for standard sets of covariates. Potential safety signals were visualized using heat maps and cumulative hazard ratio (HR) plots over time.

    Results In the total study population, 969,667 new users were included and followed for 461,506 person-years. We detected potential safety signals with incidence rates as low as 0.9 per 10,000 person-years. With eight different exposure drugs and 51 medical events, we identified 31 unique combinations of potential safety signals with a positive association to the event of interest in the exposed group. We propose that such signals be designated for further evaluation once they appear in a prospective setting. In total, 21 (67.7%) of these signals were not present in the current summaries of product characteristics.

    Conclusion The study demonstrated the feasibility of performing epidemiological surveillance using sequential, cumulative analyses. Larger populations are needed to evaluate rare events and infrequently used antidepressants.
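    A minimal sketch of one step of such a sequential, cumulative analysis, assuming a pandas DataFrame with hypothetical columns duration (follow-up time), event (0/1) and exposed (0/1) plus covariates; the lifelines package stands in for whatever survival software the authors used:

        import pandas as pd
        from lifelines import CoxPHFitter

        def cumulative_hr(df, year):
            # Cumulative analysis: use all cohort entries accrued up to `year`
            cohort = df[df["index_year"] <= year]
            cph = CoxPHFitter()
            # Adjust for a standard covariate set (age and sex assumed here)
            cph.fit(cohort[["duration", "event", "exposed", "age", "sex"]],
                    duration_col="duration", event_col="event")
            return cph.hazard_ratios_["exposed"]

        # Repeating annually traces the cumulative hazard ratio plot over time:
        # hrs = {y: cumulative_hr(df, y) for y in range(1996, 2017)}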

  • 2.
    Almimi, Ashraf A.
    et al.
    NASA Langley Research Center, Hampton.
    Kulahci, Murat
    Informatics and Mathematical Modeling, Technical University of Denmark, Lyngby.
    Montgomery, Douglas C.
    Division of Mathematical and Natural Sciences, Arizona State University; Department of Industrial, Systems and Operations Engineering, Arizona State University.
    Checking the adequacy of fit of models from split-plot designs (2009). In: Journal of Quality Technology, ISSN 0022-4065, Vol. 41, no 3, p. 272-284. Article in journal (Refereed)
    Abstract [en]

    One of the main features that distinguish split-plot experiments from other experiments is that they involve two types of experimental errors: the whole-plot (WP) error and the subplot (SP) error. Taking this into consideration is very important when computing measures of adequacy of fit for split-plot models. In this article, we propose computing two each of the R², adjusted R², prediction error sum of squares (PRESS), and R²-prediction statistics to measure the adequacy of fit of the WP and SP submodels in a split-plot design. This is complemented with a graphical analysis of the two types of errors to check for any violation of the underlying assumptions and the adequacy of fit of split-plot models. Using examples, we show how computing two measures of model adequacy of fit for each split-plot design model is appropriate and useful, as they reveal whether the correct WP and SP effects have been included in the model and describe the predictive performance of each group of effects.

  • 3.
    Almimi, Ashraf A.
    et al.
    NASA Langley Research Center, Hampton.
    Kulahci, Murat
    Arizona State University, Tempe.
    Montgomery, Douglas C.
    Division of Mathematical and Natural Sciences, Arizona State University, Tempe.
    Follow-up designs to resolve confounding in split-plot experiments (2008). In: Journal of Quality Technology, ISSN 0022-4065, Vol. 40, no 2, p. 154-166. Article in journal (Refereed)
    Abstract [en]

    Split-plot designs are effective in industry due to time and/or cost constraints, restrictions on randomization of the treatment combinations of the hard-to-change factors, and different sizes of experimental units. Some of the results of fractional factorial split-plot experiments can be ambiguous, and a need may arise to conduct follow-up experiments to separate effects of potential interest by breaking their alias links with others. For completely randomized fractional factorial experiments, methods have been developed to construct follow-up experiments. In this article, we extend the foldover technique to break the alias chains of split-plot experiments. Because it is impractical or not economically feasible to fold over the whole-plot factors, as their levels are often hard or expensive to change, the focus of this article is on folding over only one or more subplot factors in order to de-alias certain effects. Six rules are provided to develop foldovers for minimum aberration resolution III and resolution IV fractional factorial split-plot designs.
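    The core construction step, folding over one or more subplot factors, amounts to appending a mirror copy of the design with those columns sign-reversed; a minimal numpy sketch (the six rules for choosing what to fold are the subject of the article):

        import numpy as np

        def foldover_subplot(design, sp_cols):
            """Append mirror-image runs with the chosen subplot factor
            columns sign-reversed; whole-plot columns stay untouched."""
            folded = design.copy()
            folded[:, sp_cols] *= -1
            return np.vstack([design, folded])

        # e.g. fold over the subplot factor in column 2 of a +/-1-coded design:
        # combined = foldover_subplot(base_design, sp_cols=[2])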

  • 4.
    Andersen, Emil B.
    et al.
    Technical University of Denmark, Department of Chemical and Biochemical Engineering, Process and Systems Engineering Center (PROSYS), Building 229, 2800 Kongens Lyngby, Denmark.
    Udugama, Isuru A.
    Technical University of Denmark, Department of Chemical and Biochemical Engineering, Process and Systems Engineering Center (PROSYS), Building 229, 2800 Kongens Lyngby, Denmark.
    Gernaey, Krist V.
    Technical University of Denmark, Department of Chemical and Biochemical Engineering, Process and Systems Engineering Center (PROSYS), Building 229, 2800 Kongens Lyngby, Denmark.
    Bayer, Christoph
    TH Nurnberg, Department of Process Engineering, Wassertorstraße 10, 90489 Nurnberg, Germany.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. Technical University of Denmark, DTU Compute, Richard Petersens Plads 324, 2800 Kongens Lyngby, Denmark.
    Big Data Generation for Time Dependent Processes: The Tennessee Eastman Process for Generating Large Quantities of Process Data (2020). In: 30th European Symposium on Computer Aided Process Engineering: Part A / [ed] Sauro Pierucci; Flavio Manenti; Giulia Luisa Bozzano; Davide Manca, Elsevier, 2020, p. 1309-1314. Conference paper (Refereed)
    Abstract [en]

    The concept of applying data-driven process monitoring and control techniques to industrial chemical processes is well established. With concepts such as Industry 4.0, Big Data and the Internet of Things receiving attention in industrial chemical production, there is a renewed focus on data-driven process monitoring and control in chemical production applications. However, significant barriers must be overcome in obtaining sufficiently large and reliable plant and process data from industrial chemical processes: developing and testing data-driven process monitoring and control tools requires data that can be obtained without investing significant effort in acquiring, treating and interpreting them. In this manuscript a big data generation tool is presented that is based on the Tennessee Eastman Process (TEP) simulation benchmark and has been specifically designed to generate massive amounts of process data without significant setup effort. The tool can be configured to carry out a large number of data generation runs, both through a graphical user interface (GUI) and through a .CSV file. The output from the tool is a file containing process data for all runs as well as the process faults (deviations) that have been activated. The tool enables users to generate massive amounts of data for testing the applicability of big data concepts in the realm of process control for continuously operating, time-dependent processes, and it is available to all researchers and other interested parties.

  • 5.
    Andersen, Emil B.
    et al.
    Process and Systems Engineering Center (PROSYS), Department of Chemical and Biochemical Engineering, Technical University of Denmark, Lyngby, Denmark.
    Udugama, Isuru A.
    Process and Systems Engineering Center (PROSYS), Department of Chemical and Biochemical Engineering, Technical University of Denmark, Lyngby, Denmark.
    Gernaey, Krist V.
    Process and Systems Engineering Center (PROSYS), Department of Chemical and Biochemical Engineering, Technical University of Denmark, Lyngby, Denmark.
    Khan, Abdul R.
    Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark.
    Bayer, Christoph
    Department of Process Engineering, TH Nuernberg, Nuernberg, Germany.
    Kulahci, Murat
    Luleå University of Technology, Department of Social Sciences, Technology and Arts, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark.
    An easy to use GUI for simulating big data using Tennessee Eastman process (2022). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 38, no 1, p. 264-282. Article in journal (Refereed)
    Abstract [en]

    Data-driven process monitoring and control techniques and their application to industrial chemical processes are gaining popularity due to the current focus on Industry 4.0, digitalization and the Internet of Things. However, for the development of such techniques, there are significant barriers that must be overcome in obtaining sufficiently large and reliable datasets. As a result, the use of real plant and process data in developing and testing data-driven process monitoring and control tools can be difficult without investing significant efforts in acquiring, treating, and interpreting the data. Therefore, researchers need a tool that effortlessly generates large amounts of realistic and reliable process data without the requirement for additional data treatment or interpretation. In this work, we propose a data generation platform based on the Tennessee Eastman Process simulation benchmark. A graphical user interface (GUI) developed in MATLAB Simulink is presented that enables users to generate massive amounts of data for testing the applicability of big data concepts in the realm of process control for continuous time-dependent processes. An R-Shiny app that interacts with the data generation tool is also presented for illustration purposes. The app can visualize the results generated by the Tennessee Eastman Process and can carry out standard fault detection and diagnosis studies based on principal component analysis (PCA). The data generator GUI is available free of charge for research purposes at https://github.com/dtuprodana/TEP.
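    A sketch of the kind of PCA-based fault detection study the R-Shiny app performs, assuming CSV files of generated process data with one column per measured variable (file names here are hypothetical; see the GitHub repository for the actual output format):

        import numpy as np
        import pandas as pd
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import StandardScaler

        normal = pd.read_csv("tep_normal.csv")   # fault-free training runs
        test = pd.read_csv("tep_faulty.csv")     # runs with activated faults

        scaler = StandardScaler().fit(normal)
        pca = PCA(n_components=0.9).fit(scaler.transform(normal))  # keep ~90% variance

        # Hotelling T2 in the retained principal component subspace
        scores = pca.transform(scaler.transform(test))
        t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
        # Flag samples whose T2 exceeds a control limit estimated from normal data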

  • 6.
    Bach Andersen, Peter
    et al.
    Department of Electrical Engineering, Technical University of Denmark, Lyngby.
    Sousa, Tiago
    Department of Electrical Engineering, Technical University of Denmark, Lyngby.
    Thingvad, Andreas
    Department of Electrical Engineering, Technical University of Denmark, Lyngby.
    Sass Berthou, Lea
    Department of Electrical Engineering, Technical University of Denmark, Lyngby.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby.
    Added Value of Individual Flexibility Profiles of Electric Vehicle Users For Ancillary Services (2018). In: 2018 IEEE International Conference on Communications, Control, and Computing Technologies for Smart Grids (SmartGridComm), IEEE, 2018. Conference paper (Refereed)
    Abstract [en]

    Vehicle-Grid Integration (VGI) research may serve to limit the self-induced adverse effects of electric vehicles (EVs) in terms of additional grid loading, but also to make the EV an active asset in supporting a stable, economic power system based on renewable energy. Any use of the vehicle for grid services requires an accurate understanding of the user's driving needs. This paper proposes the introduction of a user profile describing the energy requirements for driving in terms of an energy deadline, target and minimum. To explore the use of such a profile, the paper analyses data from a Danish pilot project where the driving patterns of ten electric Nissan e-NV200 vans are investigated in terms of leave times and energy consumption. It is shown that the data can be fitted with a log-normal distribution, which can be used to establish a per-user profile that provides a certain statistical probability of fulfilling the driving needs while allowing an aggregator to optimize earnings. Initially, aggregators may apply similar driving assumptions across an entire fleet. Considering that the driving needs of individual EV owners differ, statistical representations of individual behaviour may yield more flexibility, and thereby time, for providing grid services. The paper quantifies the value of such added flexibility based on the Danish market for frequency containment reserves.
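    A minimal sketch of the per-user profile idea, assuming an array of a user's observed daily energy needs (values are hypothetical); scipy's log-normal fit yields the quantile that covers the driving need with a chosen probability:

        import numpy as np
        from scipy import stats

        energy = np.array([6.2, 7.5, 5.9, 8.1, 6.8, 7.0, 9.3])  # kWh per day, example data

        shape, loc, scale = stats.lognorm.fit(energy, floc=0)   # fit a log-normal
        # Energy target that satisfies this user's driving need on 95% of days
        target = stats.lognorm.ppf(0.95, shape, loc=loc, scale=scale)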

  • 7.
    Bekki, Jennifer M.
    et al.
    Arizona State University, Polytechnic Campus, Mesa, AZ.
    Fowler, John W.
    Arizona State University, Tempe.
    Mackulak, Gerald T.
    Arizona State University, Tempe.
    Kulahci, Murat
    Technical University of Denmark, Lyngby.
    Simulation-based cycle-time quantile estimation in manufacturing settings employing non-FIFO dispatching policies (2009). In: Journal of Simulation, ISSN 1747-7778, Vol. 3, no 2, p. 69-83. Article in journal (Refereed)
    Abstract [en]

    Previous work shows that a combination of the Cornish-Fisher Expansion (CFE) with discrete-event simulation produces accurate and precise estimates of cycle-time quantiles with very little data storage, provided all workstations in the model are operating under the first-in-first-out (FIFO) dispatching rule. The accuracy of the approach degrades, however, when non-FIFO dispatching policies are employed in at least one workstation. This paper proposes the use of a power transformation in combination with the CFE to combat these accuracy problems. The suggested approach is detailed, and three methods for selecting the λ parameter of the power transformation are given. A thorough empirical evaluation of the three approaches is reported, and the advantages and drawbacks of each are discussed. Results show that the combination of the CFE with a power transformation generates cycle-time quantile estimates with high accuracy even for non-FIFO systems.
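    The CFE needs only the first four cycle-time moments, which a simulation can accumulate with negligible storage; a minimal sketch of the quantile estimate itself (before any power transformation), with exkurt denoting excess kurtosis:

        from scipy import stats

        def cf_quantile(mean, std, skew, exkurt, p):
            """Cornish-Fisher estimate of the p-quantile from four moments."""
            z = stats.norm.ppf(p)
            w = (z
                 + (z**2 - 1) * skew / 6
                 + (z**3 - 3*z) * exkurt / 24
                 - (2*z**3 - 5*z) * skew**2 / 36)
            return mean + std * w

        # e.g. 95th-percentile cycle time: cf_quantile(42.0, 8.0, 1.2, 2.5, 0.95)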

  • 8.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University, Tempe.
    Checking process stability with the variogram (2005). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 17, no 2, p. 323-327. Article in journal (Refereed)
    Abstract [en]

    Modern quality control methods are increasingly being used to monitor complex industrial processes. A key requirement for such methods is the availability of long data records. Once such records are obtained, the variogram becomes a simple and useful exploratory tool that quality professionals can use to investigate whether a process is stationary.
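    A minimal sketch of the standardized variogram G(m) = Var(x[t+m] - x[t]) / Var(x[t+1] - x[t]): for a stationary process the ratio levels off as the lag m grows, whereas a steadily increasing ratio suggests nonstationarity:

        import numpy as np

        def variogram(x, max_lag=20):
            x = np.asarray(x, dtype=float)
            v = [np.var(x[m:] - x[:-m]) for m in range(1, max_lag + 1)]
            return np.array(v) / v[0]   # standardized by the lag-1 variance

        # g = variogram(long_process_record); plot g against the lag m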

  • 9.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst; Institute for Technology Management, University of St. Gallen.
    Kulahci, Murat
    Institute for Technology Management, University of St. Gallen.
    Finding assignable causes (2000). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 12, no 4, p. 633-640. Article in journal (Refereed)
  • 10.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst; University of Amsterdam.
    Kulahci, Murat
    University of Wisconsin-Madison.
    Improving and controlling business processes (2002). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 14, no 2, p. 341-344. Article in journal (Refereed)
  • 11.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University, Tempe.
    Quality quandaries: Beware of autocorrelation in regression (2007). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 19, no 2, p. 143-148. Article in journal (Refereed)
    Abstract [en]

    In the quality engineering context, the problem of assessing whether there exists a relationship between several inputs and an output of a process is often encountered. The main reason for spurious relationships between time series is that two unrelated time series that are internally autocorrelated can sometimes, by chance, produce very large cross correlations. Perhaps the safest approach to assessing the relationship between the input and the output of a process when the data are autocorrelated is to use prewhitening.
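    A sketch of prewhitening with statsmodels, assuming two equal-length series x (input) and y (output): fit an AR model to x, pass both series through the same filter, and only then inspect the cross-correlations:

        import numpy as np
        from statsmodels.tsa.ar_model import AutoReg
        from statsmodels.tsa.stattools import ccf

        def prewhitened_ccf(x, y, p=5, nlags=20):
            ar = AutoReg(x, lags=p, trend="c").fit()
            c, phi = ar.params[0], ar.params[1:]
            def filt(z):   # apply the AR filter fitted on x to any series
                z = np.asarray(z, dtype=float)
                return np.array([z[t] - c - phi @ z[t-p:t][::-1]
                                 for t in range(p, len(z))])
            # large spikes that survive prewhitening indicate a real relationship
            return ccf(filt(x), filt(y))[:nlags]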

  • 12.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Quality quandaries: Box-Cox transformations and time series modeling - Part I (2008). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 20, no 3, p. 376-388. Article in journal (Refereed)
    Abstract [en]

    A demonstration of determining a Box-Cox transformation in the context of seasonal time series modeling is provided. The first step is postulating a general class of statistical models and transformations; one then identifies a transformation and a model to be tentatively entertained, estimates the parameters of the tentatively entertained model, checks the fitted model and the transformation, and so on. To start this iterative process, a number of graphical methods are typically applied. Graphical determination of an appropriate transformation may include the log transformation and the use of a range-mean chart. The next step is the identification of an appropriate ARIMA time series model. The Box-Cox family of transformations is continuous in λ and contains the log transformation as a special case. It does require repeated model fittings, but these can be done relatively quickly with standard time series software.
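    The transform-then-model cycle can be sketched in a few lines: scipy estimates the Box-Cox λ by maximum likelihood (λ near 0 points to the log transformation), after which a tentative seasonal ARIMA model is fitted and its residuals checked:

        from scipy import stats
        from statsmodels.tsa.arima.model import ARIMA

        yt, lam = stats.boxcox(y)                        # y must be positive
        res = ARIMA(yt, order=(0, 1, 1),
                    seasonal_order=(0, 1, 1, 12)).fit()  # tentative seasonal model
        # Inspect the residual ACF; revisit lambda and the model order as needed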

  • 13.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Quality quandaries: Box-Cox transformations and time series modeling - Part II (2008). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 20, no 4, p. 516-523. Article in journal (Refereed)
    Abstract [en]

    The sales data from the engineering firm called Company X present a "Big Q" type of problem involving issues of time series analysis and the use of data transformations. According to Chatfield and Prothero (CP), the forecasts produced using a seasonal ARIMA model fitted to the log of the sales data were unrealistic. The application of Box-Cox transformations to Company X's sales data provided a "useful" transformation of these data. CP tried to find "useful" models that characterize the dynamics in the particular data appropriately and thus produce sensible forecasts. The forecasting model proposed by CP and the alternative model proposed by Box and Jenkins are analyzed. Both types of models proved to provide quite reasonable forecasts.

  • 14.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Quality quandaries: Forecasting with seasonal time series models (2008). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 20, no 2, p. 250-260. Article in journal (Refereed)
    Abstract [en]

    Forecasting is increasingly part of the standard toolkit of quality engineers, specifically in the domain of Six Sigma and quality engineering that deals with operational problems in manufacturing and service organizations. One of the most versatile approaches is the so-called Box-Jenkins approach using regular and seasonal autoregressive integrated moving average (ARIMA) models. The international airline data are used with a seasonal ARIMA time series model to demonstrate how such models can capture cyclic data and be used for short-term forecasting.
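    A sketch of the classic airline model, a seasonal ARIMA(0,1,1)x(0,1,1)12 on the log scale, producing twelve-month-ahead forecasts with intervals (statsmodels assumed; y is a positive monthly series):

        import numpy as np
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        res = SARIMAX(np.log(y), order=(0, 1, 1),
                      seasonal_order=(0, 1, 1, 12)).fit(disp=False)
        fc = res.get_forecast(steps=12)
        point = np.exp(fc.predicted_mean)      # back-transform to the original scale
        lo, hi = np.exp(fc.conf_int()).T       # approximate 95% forecast interval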

  • 15.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University, Tempe.
    Quality quandaries: Interpretation of time series models (2005). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 17, no 4, p. 653-658. Article in journal (Refereed)
  • 16.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University, Tempe.
    Quality quandaries: Practical time series modeling (2007). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 19, no 3, p. 253-262. Article in journal (Refereed)
    Abstract [en]

    Time series analysis is important in modern quality monitoring and control. The analysis offers no single precise method and no one true, final answer. There are three general classes of stationary time series models: autoregressive (AR), moving average (MA), and mixed autoregressive moving average (ARMA) models. If the data are nonstationary, differencing is necessary before an ARMA model is fitted. The formulations for AR(p), MA(q) and ARMA(p,q) have zero intercept, which is attained by subtracting the average from the stationary data before modeling the process. If the data are nonstationary and need to be differenced once or twice, adding a nonzero intercept term to the model implies an underlying deterministic first- or second-order polynomial trend in the data. In practice, the type of model and the order necessary to adequately describe a given process are not known. Hence, the model that best fits the data must be determined by examining the autocorrelation function (ACF) and the partial autocorrelation function (PACF). Since time series modeling requires judgment and experience, an iterative approach is suggested. Once the model is fitted, diagnostic checks are conducted using the ACF and PACF of the residuals. Series C, consisting of 226 observations of the temperature of a chemical pilot plant, is used as an example.
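    The identification step leans on the sample ACF and PACF: an AR(p) process shows a PACF that cuts off after lag p, an MA(q) process an ACF that cuts off after lag q. A minimal sketch, with `series` standing in for the data:

        import numpy as np
        from statsmodels.graphics.tsaplots import plot_acf, plot_pacf

        z = np.asarray(series, dtype=float)
        # difference once (or twice) first if a time plot suggests nonstationarity:
        # z = np.diff(z)
        z = z - z.mean()        # zero-intercept formulation for stationary data
        plot_acf(z, lags=25)    # MA order: lag where the ACF cuts off
        plot_pacf(z, lags=25)   # AR order: lag where the PACF cuts off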

  • 17.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Industrial Engineering, Arizona State University, Tempe.
    Quality Quandaries: Practical Time Series Modeling II (2007). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 19, no 4, p. 393-400. Article in journal (Refereed)
  • 18.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University, Tempe.
    Quality quandaries: Process regime changes (2007). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 19, no 1, p. 83-87. Article in journal (Refereed)
    Abstract [en]

    Gaining an understanding of process behavior and exploring the relationships between process variables are important prerequisites for quality improvement. In any diagnosis of a process, the quality engineer needs to understand and interpret relationships between inputs and outputs as well as between intermediate variables. Regime changes occasionally occur in the process engineering context. The telltale sign of a regime change is most easily seen in scatter plots. Graphical analysis has proven useful in the early diagnostic phase of analyzing processes suspected of having undergone regime changes.

  • 19.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University.
    Quality quandaries: Studying input-output relationships, part I (2006). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 18, no 2, p. 273-281. Article in journal (Refereed)
  • 20.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University, Tempe.
    Quality quandaries: Studying input-output relationships, part II (2006). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 18, no 3, p. 405-410. Article in journal (Refereed)
    Abstract [en]

    When analyzing process data to see if there exists an input variable that can be used to control an output variable, one should be aware of the possibility of spurious relationships. One way to check for this possibility is to carefully analyze the residuals. If they show signs of autocorrelation, the apparent relationship may be spurious. An effective method for checking such relationships is that of William S. Gosset.
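    A sketch of that residual check with statsmodels: regress the output on the input, then test the residuals for autocorrelation; a Durbin-Watson statistic far from 2, or a significant Ljung-Box test, warns that the apparent relationship may be spurious:

        import statsmodels.api as sm
        from statsmodels.stats.stattools import durbin_watson
        from statsmodels.stats.diagnostic import acorr_ljungbox

        ols = sm.OLS(y, sm.add_constant(x)).fit()
        print(durbin_watson(ols.resid))              # values near 2: little autocorrelation
        print(acorr_ljungbox(ols.resid, lags=[10]))  # small p-value: autocorrelated residuals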

  • 21.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University, Tempe.
    Quality quandaries: The application of principal component analysis for process monitoring (2006). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 18, no 1, p. 95-103. Article in journal (Refereed)
    Abstract [en]

    An overview of graphical techniques that are useful in process monitoring is given, with a focus on contemporaneous correlation. Specifically, principal component analysis (PCA), a method akin to Pareto analysis, is demonstrated, and the geometry of PCA is described to enhance intuition.
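    The Pareto flavor of PCA comes from ranking components by explained variance; a minimal scikit-learn sketch, with X an (observations x process variables) matrix:

        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.preprocessing import scale

        pca = PCA().fit(scale(X))                 # standardize, then decompose
        print(np.cumsum(pca.explained_variance_ratio_))  # Pareto: few PCs dominate
        scores = pca.transform(scale(X))[:, :2]   # first two PCs for a monitoring plot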

  • 22.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University, Tempe.
    Quality Quandaries: The Effect of Autocorrelation on Statistical Process Control Procedures (2005). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 17, no 3, p. 481-489. Article in journal (Refereed)
  • 23.
    Bisgaard, Søren
    et al.
    Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Quality quandaries: Time series model selection and parsimony (2009). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 21, no 3, p. 341-353. Article in journal (Refereed)
    Abstract [en]

    Choosing an adequate model for a given set of data is considered one of the more difficult tasks in time series analysis; even experienced analysts can have a hard time selecting an appropriate model. One popular approach is therefore discussed: the use of numerical information criteria, which provide useful input to the decision-making process. However, relying on such criteria alone is not advisable; model choice should combine judgment with the information criteria. In particular, a parsimonious mixed autoregressive moving average (ARMA) model is favored, as it respects the context of the model and clarifies what is being modeled and which model should be used.
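    Numerical criteria such as AIC can be tabulated over candidate orders and then weighed together with judgment; a sketch that scans low-order ARMA models for a stationary series z, where a parsimonious mixed model often wins:

        import itertools
        from statsmodels.tsa.arima.model import ARIMA

        aic = {}
        for p, q in itertools.product(range(3), range(3)):
            try:
                aic[(p, q)] = ARIMA(z, order=(p, 0, q)).fit().aic
            except Exception:
                continue     # some orders may fail to converge
        print(sorted(aic.items(), key=lambda kv: kv[1])[:3])  # shortlist, then judge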

  • 24.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Quality quandaries: Using a time series model for process adjustment and control (2008). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 20, no 1, p. 134-141. Article in journal (Refereed)
    Abstract [en]

    The behavior of a chemical manufacturing process can be characterized with a time series model, and such a model can be used to control and adjust the process. Time series control of a process involves predicting whether the process will deviate excessively from the target in the next time period and using the predicted difference to make a compensatory adjustment in the opposite direction. A detailed example is provided of how a nonstationary time series model can be used to develop two types of charts: one for periodic adjustments of the process counteracting the naturally occurring common-cause variability, and one more traditional control chart based on the residuals to look for special causes. The nonstationary time series model requires acknowledging that processes are inherently nonstationary and working from that more realistic assumption rather than from the traditional Shewhart model of a fixed error distribution around a constant mean. Once the process drifts too far from a given target, it is adjusted by bringing its level back to target, while the data coming from the process continue to conform to the assumed nonstationary time series model.
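    For an IMA(1,1) disturbance the minimum mean square error one-step forecast is an EWMA, so the adjustment chart reduces to forecasting the next deviation from target and compensating when it grows too large; a minimal sketch under that assumption:

        import numpy as np

        def adjustment_chart(y, target, lam=0.3, limit=2.0):
            """EWMA-forecast-based periodic adjustment (IMA(1,1) assumption)."""
            adjustments, fcast = [], y[0]
            for t in range(1, len(y)):
                fcast = lam * y[t] + (1 - lam) * fcast   # one-step-ahead forecast
                dev = fcast - target
                adjustments.append(-dev if abs(dev) > limit else 0.0)
            return np.array(adjustments)   # compensatory moves, opposite in sign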

  • 25.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst; University of St. Gallen.
    Kulahci, Murat
    University of St. Gallen.
    Robust product design: Saving trials with split-plot confounding (2001). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 13, no 3, p. 525-530. Article in journal (Refereed)
    Abstract [en]

    Robust production experimentation is described as an important quality engineering activity. A cake mix experiment is used to illustrate split-plot confounding, which eliminates the low resolution of standard inner and outer array designs. Switching between fractions allows the split-plot design to furnish the necessary information. The robustness of the product is improved by identifying the interactions between environmental and design factors.

  • 26.
    Bisgaard, Søren
    et al.
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Time Series Analysis and Forecasting by Example (2011). Book (Refereed)
  • 27.
    Box, George E.P.
    et al.
    Center for Quality and Productivity, University of Wisconsin-Madison.
    Bisgaard, Søren
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Graves, Spencer B.
    PDF Solutions, Inc., San Jose, CA.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University, Tempe.
    Marko, Kenneth A.
    ETAS Group, Ann Arbor, MI , Ford Scientific Research, Dearborn, MI.
    James, John V.
    Ford Research Labs., Dearborn, MI.
    van Gilder, John F.
    General Motors Proving Ground, Milford, MI.
    Ting, Tom
    General Motors Research, Development and Planning, Warren, MI.
    Zatorski, Hal
    DaimlerChrysler, Auburn Hills, MI.
    Wu, Cuiping
    DaimlerChrysler Proving Grounds, Chelsea, MI.
    Performance Evaluation of Dynamic Monitoring Systems: The Waterfall Chart (2003). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 16, no 2, p. 183-191. Article in journal (Refereed)
    Abstract [en]

    Computers are increasingly employed to monitor the performance of complex systems. An important issue is how to evaluate the performance of such monitors. In this article we introduce a three-dimensional representation that we call a "waterfall chart" of the probability of an alarm as a function of time and the condition of the system. It combines and shows the conceptual relationship between the cumulative distribution function of the run length and the power function. The value of this tool is illustrated with an application to Page's one-sided Cusum algorithm. However, it can be applied in general for any monitoring system.
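    A sketch of how the chart's surface can be estimated by simulation: for each system condition (here a mean shift delta), run Page's one-sided CUSUM many times and record the empirical probability of an alarm by each time t; a grid of delta values then gives the waterfall surface:

        import numpy as np

        def alarm_cdf(delta, k=0.5, h=5.0, horizon=50, reps=2000, rng=None):
            if rng is None:
                rng = np.random.default_rng(0)
            alarmed = np.zeros(horizon)
            for _ in range(reps):
                s = 0.0
                for t in range(horizon):
                    s = max(0.0, s + rng.normal(delta) - k)  # one-sided CUSUM
                    if s > h:
                        alarmed[t:] += 1     # alarm at t counts for t and later
                        break
            return alarmed / reps            # P(alarm by time t) for this condition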

  • 28.
    Cacciarelli, Davide
    et al.
    Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark; Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway.
    Kulahci, Murat
    Luleå University of Technology, Department of Social Sciences, Technology and Arts, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark.
    A novel fault detection and diagnosis approach based on orthogonal autoencoders (2022). In: Computers and Chemical Engineering, ISSN 0098-1354, E-ISSN 1873-4375, Vol. 163, article id 107853. Article in journal (Refereed)
    Abstract [en]

    In recent years, there have been studies focusing on the use of different types of autoencoders (AEs) for monitoring complex nonlinear data coming from industrial and chemical processes. However, in many cases the focus was placed on detection. As a result, practitioners are encountering problems in trying to interpret such complex models and obtaining candidate variables for root cause analysis once an alarm is raised. This paper proposes a novel statistical process control (SPC) framework based on orthogonal autoencoders (OAEs). OAEs regularize the loss function to ensure no correlation among the features of the latent variables. This is extremely beneficial in SPC tasks, as it allows for the invertibility of the covariance matrix when computing the Hotelling T2 statistic, significantly improving detection and diagnosis performance when the process variables are highly correlated. To support the fault diagnosis and identification analysis, we propose an adaptation of the integrated gradients (IG) method. Numerical simulations and the benchmark Tennessee Eastman Process are used to evaluate the performance of the proposed approach by comparing it to traditional approaches such as principal component analysis (PCA) and kernel PCA (KPCA). In the analysis, we explore how the information useful for fault detection and diagnosis is stored in the intermediate layers of the encoder network. We also investigate how the correlation structure of the data affects the detection and diagnosis of faulty variables. The results show how the combination of OAEs and IG represents a compelling and ready-to-use solution, offering improved detection and diagnosis performance over the traditional methods.
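    A compressed PyTorch sketch of the central idea (layer sizes arbitrary, not the authors' architecture): an autoencoder whose loss penalizes off-diagonal latent covariance, so that the Hotelling T2 statistic computed on the latent features rests on a well-conditioned, invertible covariance matrix:

        import torch
        import torch.nn as nn

        class OAE(nn.Module):
            def __init__(self, n_in, n_latent):
                super().__init__()
                self.enc = nn.Sequential(nn.Linear(n_in, 32), nn.ReLU(),
                                         nn.Linear(32, n_latent))
                self.dec = nn.Sequential(nn.Linear(n_latent, 32), nn.ReLU(),
                                         nn.Linear(32, n_in))
            def forward(self, x):
                z = self.enc(x)
                return z, self.dec(z)

        def oae_loss(x, z, x_hat, weight=1.0):
            mse = ((x - x_hat) ** 2).mean()          # reconstruction error
            zc = z - z.mean(dim=0)                   # centered latent features
            cov = zc.T @ zc / (len(z) - 1)
            off_diag = cov - torch.diag(torch.diag(cov))
            return mse + weight * (off_diag ** 2).sum()  # decorrelation penalty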

  • 29.
    Cacciarelli, Davide
    et al.
    Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark; Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway.
    Kulahci, Murat
    Luleå University of Technology, Department of Social Sciences, Technology and Arts, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark.
    Active learning for data streams: a survey (2023). In: Machine Learning, ISSN 0885-6125, E-ISSN 1573-0565. Article, review/survey (Refereed)
    Abstract [en]

    Online active learning is a paradigm in machine learning that aims to select the most informative data points to label from a data stream. The problem of minimizing the cost associated with collecting labeled observations has gained a lot of attention in recent years, particularly in real-world applications where data is only available in an unlabeled form. Annotating each observation can be time-consuming and costly, making it difficult to obtain large amounts of labeled data. To overcome this issue, many active learning strategies have been proposed in the last decades, aiming to select the most informative observations for labeling in order to improve the performance of machine learning models. These approaches can be broadly divided into two categories: static pool-based and stream-based active learning. Pool-based active learning involves selecting a subset of observations from a closed pool of unlabeled data, and it has been the focus of many surveys and literature reviews. However, the growing availability of data streams has led to an increase in the number of approaches that focus on online active learning, which involves continuously selecting and labeling observations as they arrive in a stream. This work aims to provide an overview of the most recently proposed approaches for selecting the most informative observations from data streams in real time. We review the various techniques that have been proposed and discuss their strengths and limitations, as well as the challenges and opportunities that exist in this area of research.

  • 30.
    Cacciarelli, Davide
    et al.
    Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby, Denmark; Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway.
    Kulahci, Murat
    Luleå University of Technology, Department of Social Sciences, Technology and Arts, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby, Denmark.
    Hidden dimensions of the data: PCA vs autoencoders (2023). In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 35, no 4, p. 741-750. Article in journal (Refereed)
  • 31.
    Cacciarelli, Davide
    et al.
    Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark; Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway.
    Kulahci, Murat
    Luleå University of Technology, Department of Social Sciences, Technology and Arts, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark.
    Tyssedal, John Sølve
    Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway.
    Robust online active learning (2024). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 40, no 1, p. 277-296. Article in journal (Refereed)
    Abstract [en]

    In many industrial applications, obtaining labeled observations is not straightforward as it often requires the intervention of human experts or the use of expensive testing equipment. In these circumstances, active learning can be highly beneficial in suggesting the most informative data points to be used when fitting a model. Reducing the number of observations needed for model development alleviates both the computational burden required for training and the operational expenses related to labeling. Online active learning, in particular, is useful in high-volume production processes where the decision about the acquisition of the label for a data point needs to be taken within an extremely short time frame. However, despite the recent efforts to develop online active learning strategies, the behavior of these methods in the presence of outliers has not been thoroughly examined. In this work, we investigate the performance of online active linear regression in contaminated data streams. Our study shows that the currently available query strategies are prone to sample outliers, whose inclusion in the training set eventually degrades the predictive performance of the models. To address this issue, we propose a solution that bounds the search area of a conditional D-optimal algorithm and uses a robust estimator. Our approach strikes a balance between exploring unseen regions of the input space and protecting against outliers. Through numerical simulations, we show that the proposed method is effective in improving the performance of online active learning in the presence of outliers, thus expanding the potential applications of this powerful tool.

  • 32.
    Cacciarelli, Davide
    et al.
    Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark; Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway.
    Kulahci, Murat
    Luleå University of Technology, Department of Social Sciences, Technology and Arts, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark.
    Tyssedal, John Sølve
    Department of Mathematical Sciences, Norwegian University of Science and Technology, Trondheim, Norway.
    Stream-based active learning with linear models (2022). In: Knowledge-Based Systems, ISSN 0950-7051, E-ISSN 1872-7409, Vol. 254, article id 109664. Article in journal (Refereed)
    Abstract [en]

    The proliferation of automated data collection schemes and the advances in sensorics are increasing the amount of data we are able to monitor in real-time. However, given the high annotation costs and the time required by quality inspections, data is often available in an unlabeled form. This is fostering the use of active learning for the development of soft sensors and predictive models. In production, instead of performing random inspections to obtain product information, labels are collected by evaluating the information content of the unlabeled data. Several query strategy frameworks for regression have been proposed in the literature but most of the focus has been dedicated to the static pool-based scenario. In this work, we propose a new strategy for the stream-based scenario, where instances are sequentially offered to the learner, which must instantaneously decide whether to perform the quality check to obtain the label or discard the instance. The approach is inspired by the optimal experimental design theory and the iterative aspect of the decision-making process is tackled by setting a threshold on the informativeness of the unlabeled data points. The proposed approach is evaluated using numerical simulations and the Tennessee Eastman Process simulator. The results confirm that selecting the examples suggested by the proposed algorithm allows for a faster reduction in the prediction error.
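    A sketch of the thresholded, optimal-design-inspired decision rule at the heart of such strategies: for each arriving point, compute its leverage under the current linear model, x'(X'X)^(-1)x, and query the label only when that informativeness exceeds a threshold:

        import numpy as np

        def query_decision(X_labeled, x_new, threshold):
            """Stream-based query rule inspired by D-optimal experimental design."""
            info_inv = np.linalg.inv(X_labeled.T @ X_labeled)  # assumes full rank
            informativeness = float(x_new @ info_inv @ x_new)  # prediction-variance term
            return informativeness > threshold   # True -> perform the quality check

        # After each acquired label: refit the model and row-append x_new to X_labeled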

  • 33.
    Capaci, Francesca
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Bergquist, Bjarne
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Vanhatalo, Erik
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Exploring the Use of Design of Experiments in Industrial Processes Operating Under Closed-Loop Control (2017). In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 33, no 7, p. 1601-1614. Article in journal (Refereed)
    Abstract [en]

    Industrial manufacturing processes often operate under closed-loop control, where automation aims to keep important process variables at their set-points. In process industries such as pulp, paper, chemical and steel plants, it is often hard to find production processes operating in open loop. Instead, closed-loop control systems will actively attempt to minimize the impact of process disturbances. However, we argue that an implicit assumption in most experimental investigations is that the studied system is open loop, allowing the experimental factors to freely affect the important system responses. This scenario is typically not found in process industries. The purpose of this article is therefore to explore issues of experimental design and analysis in processes operating under closed-loop control and to illustrate how Design of Experiments can help in improving and optimizing such processes. The Tennessee Eastman challenge process simulator is used as a test-bed to highlight two experimental scenarios. The first scenario explores the impact of experimental factors that may be considered as disturbances in the closed-loop system. The second scenario exemplifies a screening design using the set-points of controllers as experimental factors. We provide examples of how to analyze the two scenarios.
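    A minimal sketch of the second scenario: build a two-level screening design in the controllers' set-points (factor names and the simulator interface are hypothetical) and run one simulation per design row:

        import itertools
        import numpy as np

        factors = ["reactor_temp_sp", "reactor_level_sp", "recycle_rate_sp"]
        design = np.array(list(itertools.product([-1, 1], repeat=len(factors))))

        for run in design:                       # 2^3 = 8 runs in +/-1 coding
            settings = dict(zip(factors, run))
            # scale the +/-1 codes to physical set-points, then run the simulator:
            # response = run_te_simulation(**settings)   # hypothetical interface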

  • 34.
    Capaci, Francesca
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Bergquist, Bjarne
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Vanhatalo, Erik
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Simulating and Analyzing Experiments in the Tennessee Eastman Process Simulator (2015). In: ENBIS-15, 2015. Conference paper (Refereed)
    Abstract [en]

    In many of today's continuous processes, data collection is usually performed automatically, yielding exorbitant amounts of data on various quality characteristics and inputs to the system. Moreover, such data are usually collected at high frequency, introducing significant serial dependence in time. This violates the independent-data assumption of many industrial statistics methods used in process improvement studies. These studies often involve controlled experiments to unearth the causal relationships to be used for robustness and optimization purposes.

    However, real production processes are not suitable for studying new experimental methodologies, partly because unknown disturbances or experimental settings may lead to erroneous conclusions. Moreover, large-scale experimentation in production processes is frowned upon due to the consequent disturbances and production delays. Hence, realistic simulation of such processes offers an excellent opportunity for experimentation and methodological development.

    One commonly used process simulator is the Tennessee Eastman (TE) challenge chemical process simulator (Downs & Vogel, 1993)[1]. The process produces two products from four reactants, containing 41 measured variables and 12 manipulated variables. In addition to the process description, the problem statement defines process constraints, 20 types of process disturbances, and six operating modes corresponding to different production rates and mass ratios in the product stream.

    The purpose of this paper is to illustrate the use of the TE process with an appropriate feedback control as a test-bed for the methodological developments of new experimental design and analysis techniques.

    The paper illustrates how two-level experimental designs can be used to identify how the input factors affect the outputs in a chemical process.

    Simulations using Matlab/Simulink software are used to study the impact of e.g. process disturbances, closed loop control and autocorrelated data on different experimental arrangements.

    The experiments are analysed using a time series analysis approach to identify input-output relationships in a process operating in closed-loop with multivariate responses. The dynamics of the process are explored and the necessary run lengths for stable effect estimates are discussed.

  • 35.
    Capaci, Francesca
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Vanhatalo, Erik
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Bergquist, Bjarne
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    A two-step procedure for fault detection in the Tennessee Eastman Process simulator (2016). Conference paper (Refereed)
    Abstract [en]

    Highly technological and complex production processes, together with the high availability and sampling frequencies of data in large-scale industrial processes, require the concurrent development of appropriate statistical control tools and monitoring techniques. Multivariate control charts based on latent variables are therefore essential tools to detect and isolate process faults. Several statistical process control (SPC) charts have been developed for multivariate and megavariate data, such as the Hotelling T2, MCUSUM and MEWMA control charts as well as charts based on principal component analysis (PCA) and dynamic PCA (DPCA). The ability of SPC procedures based on PCA (Kourti, MacGregor 1995) or DPCA (Ku et al. 1995) to detect and isolate process disturbances for a large number of highly correlated (and, in the case of DPCA, time-dependent) variables has been demonstrated in the literature. However, we argue that the fault isolation capability and the fault detection rate can be improved further for processes operating under feedback control loops (in closed loop). The purpose of this presentation is to illustrate a two-step method in which [1] the variables are pre-classified prior to the analysis and [2] a monitoring scheme based on latent variables is implemented. Step 1 involves a structured qualitative classification of the variables to guide the choice of which variables to monitor in Step 2. We argue that the proposed method will be useful for many practitioners of SPC based on latent variable techniques in processes operating in closed loop, as it allows clearer fault isolation and detection and easier implementation of corrective actions. A case study based on data from the Tennessee Eastman Process simulator under feedback control loops (Matlab) is presented. The results of the proposed method are compared with currently available methods through simulations in the R statistics software.

  • 36.
    Capaci, Francesca
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Vanhatalo, Erik
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Bergquist, Bjarne
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Simulating Experiments in Closed-Loop Control Systems (2016). In: ENBIS-16 in Sheffield, 2016. Conference paper (Refereed)
    Abstract [en]

    Design of Experiments (DoE) literature extensively discusses how to properly plan, conduct and analyze experiments for process and product improvement. However, it is typically assumed that the experiments are run on processes operating in open-loop: the changes in experimental factors are directly visible in process responses and are not hidden by (automatic) feedback control. Under this assumption, DoE methods have been successfully applied in process industries such as chemical, pharmaceutical and biological industries.

    However, the increasing instrumentation, automation and interconnectedness are changing how the processes are run. Processes often involve engineering process control as in the case of closed-loop systems. The closed-loop environment adds complexity to experimentation and analysis since the experimenter must account for the control actions that may aim to keep a response variable at its set-point value.  The common approach to experimental design and analysis will likely need adjustments in the presence of closed-loop controls. Careful consideration is for instance needed when the experimental factors are chosen. Moreover, the impact of the experimental factors may not be directly visible as changes in the response variables (Hild, Sanders, & Cooper, 2001). Instead other variables may need to be used as proxies for the intended response variable(s).

    The purpose of this presentation is to illustrate how experiments in closed-loop systems can be planned and analyzed. A case study based on the Tennessee Eastman process simulator run with a decentralized feedback control strategy (Matlab) (Lawrence Ricker, 1996) is presented and discussed.
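
    The closed-loop complication is easy to demonstrate with a toy simulation. In the sketch below (first-order dynamics and PI tuning are assumed; this is not the Tennessee Eastman simulator), an experimental factor acting as a load disturbance leaves the controlled response at its set-point, so its effect is visible only in the manipulated variable, the kind of proxy response the abstract mentions.

    ```python
    # Why closed-loop control hides factor effects in the response: a
    # first-order process under PI control with an experimental factor that
    # shifts the process load. All gains and dynamics are invented for
    # illustration.
    import numpy as np

    def run_experiment(factor, n_steps=2000, dt=0.1, setpoint=1.0):
        Kp, Ki, tau = 2.0, 1.0, 0.5        # assumed PI tuning, time constant
        y = u = integral = 0.0
        for _ in range(n_steps):
            error = setpoint - y
            integral += error * dt
            u = Kp * error + Ki * integral           # controller action
            # first-order process; the factor acts as a load disturbance
            y += (-y + 0.5 * u - 0.3 * factor) * dt / tau
        return y, u   # controlled and manipulated variables at steady state

    for level in (-1, +1):
        y_ss, u_ss = run_experiment(level)
        print(f"factor={level:+d}: response y={y_ss:.3f} (held at set-point), "
              f"manipulated u={u_ss:.3f} (carries the factor effect)")
    ```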

  • 37.
    Capaci, Francesca
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Vanhatalo, Erik
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Bergquist, Bjarne
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Managerial implications for improving continuous production processes2017Conference paper (Refereed)
    Abstract [en]

    Data analytics remains essential for process improvement and optimization. Statistical process control and design of experiments are among the most powerful process and product improvement methods available. However, continuous process environments challenge the application of these methods. In this article, we highlight SPC and DoE implementation challenges described in the literature for managers, researchers and practitioners interested in continuous production process improvement. The results may help managers support the implementation of these methods and make researchers and practitioners aware of methodological challenges in continuous process environments.

  • 38.
    Capaci, Francesca
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Vanhatalo, Erik
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. Technical University of Denmark.
    Bergquist, Bjarne
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    The Revised Tennessee Eastman Process Simulator as Testbed for SPC and DoE Methods2019In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 31, no 2, p. 212-229Article in journal (Refereed)
    Abstract [en]

    Engineering process control and high-dimensional, time-dependent data present great methodological challenges when applying statistical process control (SPC) and design of experiments (DoE) in continuous industrial processes. Process simulators with an ability to mimic these challenges are instrumental in research and education. This article focuses on the revised Tennessee Eastman process simulator, providing guidelines for its use as a testbed for SPC and DoE methods. We provide flowcharts that can help new users get started in the Simulink/Matlab framework, and we illustrate how to run stochastic simulations for SPC and DoE applications using the Tennessee Eastman process.

  • 39.
    Capaci, Francesca
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Vanhatalo, Erik
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Palazoglu, Ahmet
    Department of Chemical Engineering, University of California, Davis, California, USA.
    Bergquist, Bjarne
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby, Denmark.
    On Monitoring Industrial Processes under Feedback Control2020In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 36, no 8, p. 2720-2737Article in journal (Refereed)
    Abstract [en]

    The concurrent use of statistical process control and engineering process control involves monitoring manipulated and controlled variables. One multivariate control chart may handle the statistical monitoring of all variables, but observing the manipulated and controlled variables in separate control charts may improve understanding of how disturbances and the controller performance affect the process. In this article, we illustrate how step and ramp disturbances manifest themselves in a single-input-single-output system by studying their resulting signatures in the controlled and manipulated variables. The system is controlled by variations of the widely used proportional-integral-derivative (PID) control scheme. Implications for applying control charts for these scenarios are discussed.
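
    The step and ramp signatures can be reproduced in a few lines. The sketch below simulates a first-order process under PI control (the tuning and time constant are our assumptions, and the derivative term is omitted): a step disturbance produces a transient in the controlled variable that dies out while the manipulated variable settles at a new level, whereas a ramp forces the manipulated variable to ramp and can leave a small constant offset in the controlled variable under PI control.

    ```python
    # Disturbance signatures in a SISO loop under PI control. A simplified
    # stand-in for the PID variations studied in the article.
    import numpy as np

    def simulate(disturbance, n=3000, dt=0.05, sp=0.0):
        Kp, Ki, tau = 1.5, 0.8, 1.0        # assumed tuning and time constant
        y = u = integral = 0.0
        ys, us = [], []
        for t in range(n):
            e = sp - y
            integral += e * dt
            u = Kp * e + Ki * integral
            y += (-y + u + disturbance(t * dt)) * dt / tau
            ys.append(y)
            us.append(u)
        return np.array(ys), np.array(us)

    step = lambda t: 1.0 if t >= 20 else 0.0       # step at t = 20
    ramp = lambda t: 0.05 * max(t - 20, 0.0)       # ramp starting at t = 20

    for name, dist in [("step", step), ("ramp", ramp)]:
        ys, us = simulate(dist)
        print(f"{name}: |y| at end = {abs(ys[-1]):.3f}, "
              f"u at end = {us[-1]:.3f}, "
              f"max |y| transient = {np.max(np.abs(ys)):.3f}")
    ```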

  • 40.
    Capehart, Shay R.
    et al.
    Department of Mathematics at the Air Force Institute of Technology.
    Keha, Ahmet
    Industrial Engineering at Arizona State University.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Montgomery, Douglas C.
    Division of Mathematical and Natural Sciences, Arizona State University.
    Designing fractional factorial split-plot experiments using integer programming2011In: International Journal of Experimental Design and Process Optimisation, ISSN 2040-2252, E-ISSN 2040-2260, Vol. 2, no 1, p. 34-57Article in journal (Refereed)
    Abstract [en]

    Split-plot designs are commonly used in industrial experiments when there are hard-to-change and easy-to-change factors. Due to the number of factors and resource limitations, it is more practical to run a fractional factorial split-plot (FFSP) design. These designs are variations of the fractional factorial (FF) design, with the restricted randomisation structure to account for the whole plots and subplots. We discuss the formulation of FFSP designs using integer programming (IP) to achieve various design criteria. We specifically look at the maximum number of clear two-factor interactions and variations on this criterion.
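
    The criterion mentioned last is simple to evaluate for a given candidate design. The sketch below, which is not the paper's integer-programming formulation and ignores the whole-plot/subplot split, counts clear two-factor interactions for an assumed 2^(7-2) design directly from its defining contrast subgroup.

    ```python
    # Brute-force check of "clear" two-factor interactions (2fis) in an
    # assumed 2^(7-2) design with generators F = ABCD and G = ABDE.
    # Illustrates the criterion only, not the paper's IP formulation.
    from itertools import combinations

    factors = "ABCDEFG"
    generators = [frozenset("ABCDF"), frozenset("ABDEG")]   # defining words

    # Defining contrast subgroup: all products (symmetric differences) of words
    words = {frozenset()}
    for g in generators:
        words |= {w ^ g for w in words}
    words.discard(frozenset())

    def aliases(effect):
        e = frozenset(effect)
        return {w ^ e for w in words}

    # A 2fi is "clear" if it is aliased with no main effect and no other 2fi,
    # i.e. every alias has length > 2. Here the pairs entirely within
    # {C, E, F, G} are not clear (they sit inside the word CEFG).
    two_fis = [frozenset(c) for c in combinations(factors, 2)]
    clear = [fi for fi in two_fis if all(len(a) > 2 for a in aliases(fi))]

    print("defining words:", sorted("".join(sorted(w)) for w in words))
    print(len(clear), "clear 2fis:", sorted("".join(sorted(fi)) for fi in clear))
    ```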

  • 41.
    Conseil-Gudla, Helene
    et al.
    Department of Civil and Mechanical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark.
    Spooner, Max
    Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby, Denmark.
    Kulahci, Murat
    Luleå University of Technology, Department of Social Sciences, Technology and Arts, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby, Denmark.
    Ambat, Rajan
    Department of Civil and Mechanical Engineering, Technical University of Denmark, Kongens Lyngby, Denmark.
    Transient risk of water layer formation on PCBAs in different climates: Climate data analysis and experimental study2022In: Microelectronics and reliability, ISSN 0026-2714, E-ISSN 1872-941X, Vol. 136, article id 114655Article in journal (Refereed)
    Abstract [en]

    The reliability of electronic devices depends on the environmental loads to which they are exposed. Climatic conditions vary greatly from one geographical location to another (from hot and humid to cold and dry areas), and temperature and humidity vary from season to season and from day to day. High levels of temperature and relative humidity mean high water content in the air, but saturated conditions (i.e. 100 % RH) can also be reached at low temperatures. This paper analyses how temperature, dew point temperature, their difference (here called ΔT), and the occurrence and duration of periods when the dew point approaches the temperature relate to transient condensation effects on electronics.

    This paper has two parts: (i) data analysis of typical climate profiles within the different zones of the Köppen-Geiger classification to identify conditions where ΔT is very low (for example ≤0.4 °C); various summary statistics of these events are calculated in order to assess the temperature at which they happen, their durations and their frequency; and (ii) empirical investigation of the effect of ΔT ≤ 0.4 °C on the reliability of electronics by mimicking an electronic device, for which the duration of the ΔT event is varied in one set of experiments and the ambient temperature in the other. The effect of the packaging of the electronics is also studied in this part.

    The statistical study of the climate profiles shows that the transient events (ΔT ≤ 0.4 °C) occur in almost every location, at different temperature levels, with a duration of at least one observation (observations were hourly in the database). The experimental results show that the presence of an enclosure, cleanliness and a bigger pitch size reduce the levels of leakage current, while similarly high levels of leakage current are observed across the different durations of the transient events, indicating that these climatic transient conditions can have a large impact on electronics reliability.
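
    The climate screening in part (i) amounts to comparing temperature and dew point hour by hour and grouping consecutive low-ΔT hours into events. A pandas sketch of that step follows; the synthetic hourly series and column names are placeholders for the actual Köppen-Geiger profiles used in the paper.

    ```python
    # Find hourly events where temperature is within 0.4 degC of the dew
    # point and summarize their level, duration, and frequency. The data
    # below are a synthetic stand-in for a real climate record.
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(2)
    idx = pd.date_range("2020-01-01", periods=24 * 365, freq="h")
    temp = 10 + 10 * np.sin(2 * np.pi * idx.dayofyear / 365) \
           + rng.normal(0, 2, len(idx))
    dew = temp - np.abs(rng.normal(2, 1.5, len(idx)))   # dew point <= temp
    df = pd.DataFrame({"timestamp": idx, "temperature": temp, "dew_point": dew})

    df["delta_t"] = df["temperature"] - df["dew_point"]
    df["near_cond"] = df["delta_t"] <= 0.4              # transient-event criterion

    # group consecutive near-condensation hours into events
    event_id = (df["near_cond"] != df["near_cond"].shift()).cumsum()
    events = (df[df["near_cond"]]
              .groupby(event_id)
              .agg(start=("timestamp", "first"),
                   duration_h=("timestamp", "size"),
                   mean_temp=("temperature", "mean")))

    print(len(events), "events")
    print(events[["duration_h", "mean_temp"]].describe())
    ```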

  • 42.
    Dehlendorff, Christian
    et al.
    Department of Informatics and Mathematical Modelling, Section for Statistics, Technical University of Denmark, Lyngby.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Andersen, Klaus Kaae
    Department of Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby.
    Analysis of computer experiments with multiple noise sources2010In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 26, no 2, p. 137-146Article in journal (Refereed)
    Abstract [en]

    In this paper we present a modeling framework for analyzing computer models with two types of variation. The paper is based on a case study of an orthopedic surgical unit, which has both controllable and uncontrollable factors. Our results show that this structure of variation can be modeled effectively with linear mixed effects models and generalized additive models.
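
    The variation structure described above maps naturally onto a linear mixed effects model: the controllable factors enter as fixed effects and each uncontrollable configuration as a random effect. The sketch below fits such a model with statsmodels on simulated stand-in data; the surgical-unit data and the paper's GAM component are not reproduced.

    ```python
    # Linear mixed effects sketch: fixed effect for a controllable factor,
    # random intercept per uncontrollable configuration. Simulated data.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(7)
    n_groups, n_per = 20, 10   # uncontrollable configs x runs per config

    data = pd.DataFrame({
        "x": np.tile(rng.uniform(-1, 1, n_per), n_groups),   # controllable
        "group": np.repeat(np.arange(n_groups), n_per),      # uncontrollable
    })
    group_effect = rng.normal(0.0, 1.0, n_groups)[data["group"]]
    data["y"] = 2.0 + 1.5 * data["x"] + group_effect \
                + rng.normal(0.0, 0.5, len(data))

    result = smf.mixedlm("y ~ x", data, groups=data["group"]).fit()
    print(result.summary())
    ```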

  • 43.
    Dehlendorff, Christian
    et al.
    Department of Informatics and Mathematical Modelling, Section for Statistics, Technical University of Denmark, Lyngby.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Andersen, Klaus Kaae
    Department of Informatics and Mathematical Modelling, Technical University of Denmark, Lyngby.
    Designing simulation experiments with controllable and uncontrollable factors2008In: 2008 Winter Simulation Conference: (WSC 2008); Miami, Florida, USA, 7 - 10 December 2008; [incorporate ... the MASM (Modeling and Analysis for Semiconductor Manufacturing) Conference] / [ed] Scott J. Mason, Piscataway, NJ: IEEE Communications Society, 2008, p. 2909-2915Conference paper (Refereed)
    Abstract [en]

    In this study we propose a new method for designing computer experiments inspired by the split-plot designs used in physical experimentation. The basic layout is that each set of controllable factor settings corresponds to a whole plot, for which a number of subplots, each corresponding to one combination of settings of the uncontrollable factors, is employed. A key requirement is that the subplots within each whole plot cover the design space uniformly. A further requirement is that the combined design, in which all experimental runs are considered at once, should also guarantee uniform coverage of the design space. Our proposed method allows for a large number of uncontrollable and controllable settings to be run in a limited number of runs while uniformly covering the design space for the uncontrollable factors.
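
    A rough sketch of the layout: each whole plot (one setting of the controllable factors) receives its own Latin hypercube over the uncontrollable factors. scipy's qmc module is used here as an assumed stand-in; the paper's method additionally guarantees uniformity of the combined design, which this sketch only measures after the fact.

    ```python
    # Split-plot-inspired computer experiment: one Latin hypercube of
    # uncontrollable settings per controllable whole-plot setting.
    import numpy as np
    from scipy.stats import qmc

    n_sub, n_unc = 8, 3   # subplots per whole plot, uncontrollable dimensions
    controllable = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # whole plots

    design = []
    for w, c in enumerate(controllable):
        # space-filling uncontrollable settings for this whole plot
        sub = qmc.LatinHypercube(d=n_unc, seed=w).random(n_sub)
        for s in sub:
            design.append(np.concatenate([c, s]))
    design = np.array(design)

    print(design.shape)   # (32, 5): 4 whole plots x 8 subplots, 2 + 3 factors
    # uniformity of the uncontrollable part of the combined design
    print("discrepancy:", qmc.discrepancy(design[:, 2:]))
    ```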

  • 44.
    Dehlendorff, Christian
    et al.
    Department of Informatics and Mathematical Modelling, Section for Statistics, Technical University of Denmark, Lyngby.
    Kulahci, Murat
    Technical University of Denmark, Lyngby.
    Andersen, Klaus Kaae
    Technical University of Denmark, Lyngby.
    Designing simulation experiments with controllable and uncontrollable factors for applications in healthcare2011In: The Journal of the Royal Statistical Society, Series C: Applied Statistics, ISSN 0035-9254, E-ISSN 1467-9876, Vol. 60, no 1, p. 31-49Article in journal (Refereed)
    Abstract [en]

    We propose a new methodology for designing computer experiments that was inspired by the split-plot designs often used in physical experimentation. The methodology has been developed for a simulation model of a surgical unit in a Danish hospital. We classify the factors as controllable and uncontrollable on the basis of their characteristics in the physical system. The experiments are designed so that, for a given setting of the controllable factors, the various settings of the uncontrollable factors cover the design space uniformly. Moreover, the methodology allows for overall uniform coverage in the combined design when all settings of the uncontrollable factors are considered at once.

  • 45.
    Dehlendorff, Christian
    et al.
    Department of Informatics and Mathematical Modelling, Section for Statistics, Technical University of Denmark, Lyngby.
    Kulahci, Murat
    Informatics and Mathematical Modelling, Section for Statistics, Lyngby, Technical University of Denmark.
    Merser, Sören
    Frederiksberg University Hospital, Clinic of Orthopaedic Surgery.
    Andersen, Klaus Kaae
    Technical University of Denmark, Lyngby.
    Conditional Value at Risk as a Measure for Waiting Time in Simulations of Hospital Units2010In: Quality Technology & Quantitative Management, ISSN 1684-3703, E-ISSN 1811-4857, Vol. 7, no 3, p. 321-336Article in journal (Refereed)
    Abstract [en]

    The utility of the conditional value at risk (CVaR) of a sample of waiting times as a measure for reducing long waiting times is evaluated, with special focus on patient waiting times in a hospital. CVaR is the average of the longest waiting times, i.e., a measure at the tail of the waiting time distribution. The presented results are based on a discrete event simulation (DES) model of an orthopedic surgical unit at a university hospital in Denmark. Our analysis shows that CVaR offers a highly reliable performance measure. The measure targets the longest waiting times, and these are generally accepted to be the most problematic from the points of view of both the patients and the management. Moreover, CVaR can be seen as a compromise between two well-known measures: the average waiting time and the maximum waiting time.
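
    CVaR itself is a one-line computation: the mean of the waits beyond a chosen quantile. The sketch below uses simulated exponential waiting times as a stand-in for the discrete event simulation model.

    ```python
    # CVaR of waiting times: the average of the worst (1 - alpha) share.
    import numpy as np

    def cvar(waits, alpha=0.95):
        """Mean of the waiting times at or above the alpha-quantile."""
        threshold = np.quantile(waits, alpha)
        return waits[waits >= threshold].mean()

    rng = np.random.default_rng(0)
    waits = rng.exponential(scale=30.0, size=10_000)   # minutes, assumed

    print(f"mean wait:  {waits.mean():6.1f} min")
    print(f"max wait:   {waits.max():6.1f} min")
    print(f"CVaR(0.95): {cvar(waits):6.1f} min  (average of worst 5%)")
    ```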

  • 46.
    Elias, Russel J.
    et al.
    Department of Industrial Engineering, Arizona State University.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University.
    Montgomery, Douglas C.
    Division of Mathematical and Natural Sciences, Arizona State University.
    An overview of short-term statistical forecasting methods2006In: International Journal of Management Science and Engineering Management, ISSN 1750-9653, Vol. 1, no 1, p. 17-36Article in journal (Refereed)
    Abstract [en]

    An overview of statistical forecasting methodology is given, focusing on techniques appropriate to short- and medium-term forecasts. Topics include basic definitions and terminology, smoothing methods, ARIMA models, regression methods, dynamic regression models, and transfer functions. Techniques for evaluating and monitoring forecast performance are also summarized.
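
    As a minimal instance of the methods such overviews cover, the sketch below applies simple exponential smoothing to an invented demand series and monitors the one-step-ahead forecasts with the MAD and the tracking signal, a standard forecast-monitoring measure.

    ```python
    # Simple exponential smoothing with a forecast-monitoring check.
    import numpy as np

    def ses(y, alpha=0.3):
        """Simple exponential smoothing; returns one-step-ahead forecasts."""
        f = np.empty(len(y))
        f[0] = y[0]
        for t in range(1, len(y)):
            f[t] = alpha * y[t - 1] + (1 - alpha) * f[t - 1]
        return f

    rng = np.random.default_rng(3)
    y = 100 + np.cumsum(rng.normal(0, 2, 60))   # a drifting demand series

    forecasts = ses(y)
    errors = y[1:] - forecasts[1:]
    mad = np.mean(np.abs(errors))
    print("MAD:", round(mad, 2))
    print("tracking signal:", round(errors.sum() / mad, 2))
    ```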

  • 47.
    Elias, Russel J.
    et al.
    Arizona State University, Tempe.
    Montgomery, Douglas C.
    Division of Mathematical and Natural Sciences, Arizona State University, Arizona State University, Tempe.
    Low, Stuart
    Arizona State University, Tempe.
    Kulahci, Murat
    Arizona State University, Tempe.
    Demand signal modelling: A short-range panel forecasting algorithm for semiconductor firm device-level demand2008In: European Journal of Industrial Engineering, ISSN 1751-5254, E-ISSN 1751-5262, Vol. 2, no 3, p. 253-278Article in journal (Refereed)
    Abstract [en]

    A model-based approach to the forecasting of short-range product demand within the semiconductor industry is presented. Device-level forecast models are developed via a novel two-stage stochastic algorithm that permits leading indicators to be optimally blended with smoothed estimates of unit-level demand. Leading indicators include backlog, bookings, delinquencies, inventory positions, and distributor resales. Group-level forecasts are easily obtained through upwards aggregation of the device-level forecasts. The forecasting algorithm is demonstrated at two major US-based semiconductor manufacturers. The first application involves a product family consisting of 254 individual devices with a 26-month training dataset and an eight-month ex situ validation dataset. A subsequent demonstration refines the approach and is demonstrated across a panel of six high-volume devices with a 29-month training dataset and a 13-month ex situ validation dataset. In both implementations, significant improvement is realised versus legacy forecasting systems.

  • 48.
    Fink Andersen, Jesper
    et al.
    Technical University of Denmark, Department of Applied Mathematics and Computer Science, Anker Engelunds Vej 1, 2800 Kgs. Lyngby, Denmark.
    Andersen, Anders Reenberg
    Technical University of Denmark, Department of Applied Mathematics and Computer Science, Anker Engelunds Vej 1, 2800 Kgs. Lyngby, Denmark.
    Kulahci, Murat
    Luleå University of Technology, Department of Social Sciences, Technology and Arts, Business Administration and Industrial Engineering. Technical University of Denmark, Department of Applied Mathematics and Computer Science, Anker Engelunds Vej 1, 2800 Kgs. Lyngby, Denmark.
    Nielsen, Bo Friis
    Technical University of Denmark, Department of Applied Mathematics and Computer Science, Anker Engelunds Vej 1, 2800 Kgs. Lyngby, Denmark.
    A numerical study of Markov decision process algorithms for multi-component replacement problems2022In: European Journal of Operational Research, ISSN 0377-2217, E-ISSN 1872-6860, Vol. 299, no 3, p. 898-909Article in journal (Refereed)
    Abstract [en]

    We present a unified modeling framework for Time-Based Maintenance (TBM) and Condition-Based Maintenance (CBM) for optimization of replacements in multi-component systems. The considered system has a K-out-of-N reliability structure, and components deteriorate according to a multivariate gamma process with Lévy copula dependence. The TBM and CBM models are formulated as Markov Decision Processes (MDPs), and optimal policies are found using dynamic programming. Solving the CBM model requires that the continuous deterioration process is discretized. We therefore investigate the discretization level required for obtaining a near-optimal policy. Our results show that a coarser discretization level than previously suggested in the literature is adequate, indicating that dynamic programming is a feasible approach for optimization in multi-component systems. We further demonstrate this through empirical results for the size limit of the MDP models when solved with an optimized implementation of modified policy iteration. The TBM model can generally be solved with more components than the CBM model, since the former has a sparser state transition structure. In the special case of independent component deterioration, transition probabilities can be calculated efficiently at runtime. This reduces the memory requirements substantially. For this case, we also achieved a tenfold speedup when using ten processors in a parallel implementation of the algorithm. Altogether, our results show that the computational requirements for systems with independent component deterioration increase at a slower rate than for systems with stochastic dependence.
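
    As a greatly reduced illustration of the MDP machinery, the sketch below solves a single-component replacement problem by plain value iteration. Every ingredient (the deterioration chain, costs, discount factor) is an assumption of ours; the paper's models use modified policy iteration over multi-component, gamma-process state spaces.

    ```python
    # Toy replacement MDP solved by value iteration. A drastically
    # simplified stand-in for the multi-component models in the paper.
    import numpy as np

    n_states = 10                 # discretized deterioration levels; assumed
    replace_cost, failure_cost = 5.0, 50.0
    gamma = 0.99                  # discount factor

    # Assumed deterioration: from state s, stay or move to s+1 with equal
    # probability; the last state is "failed" and absorbing.
    P_keep = np.zeros((n_states, n_states))
    for s in range(n_states - 1):
        P_keep[s, s] = P_keep[s, s + 1] = 0.5
    P_keep[-1, -1] = 1.0

    cost_keep = np.zeros(n_states)
    cost_keep[-1] = failure_cost  # staying failed costs every period

    V = np.zeros(n_states)
    for _ in range(5000):         # value iteration to a fixed point
        q_keep = cost_keep + gamma * P_keep @ V
        q_replace = replace_cost + gamma * V[0]   # renewal to state 0
        V_new = np.minimum(q_keep, q_replace)
        if np.max(np.abs(V_new - V)) < 1e-8:
            break
        V = V_new

    policy = np.where(cost_keep + gamma * P_keep @ V
                      <= replace_cost + gamma * V[0], "keep", "replace")
    print(dict(enumerate(policy)))   # threshold-type policy expected
    ```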

  • 49.
    Frumosu, Flavia D.
    et al.
    Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark.
    Big data analytics using semi‐supervised learning methods2018In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 34, no 7, p. 1413-1423Article in journal (Refereed)
    Abstract [en]

    The expanding availability of complex data structures requires the development of new analysis methods for process understanding and monitoring. In manufacturing, this is primarily due to high-frequency and high-dimensional data available through automated data collection schemes and sensors. However, particularly in fast production rate situations, data on the quality characteristics of the process output tend to be scarcer than the available process data. There has been a considerable effort in incorporating latent structure-based methods in the context of complex data. The research question addressed in this paper is how to make use of latent structure-based methods in the pursuit of better predictions using all available data, including the process data for which there are no corresponding output measurements, i.e., unlabeled data. Inspiration for the research question comes from an industrial setting where there is a need for prediction with extremely low tolerances. A semi-supervised principal component regression method is compared against benchmark latent structure-based methods, principal component regression and partial least squares, on simulated and experimental data. In the analysis, we show the circumstances in which it becomes more advantageous to use semi-supervised principal component regression over these competing methods.
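
    The core idea of semi-supervised PCR fits in a few lines: estimate the latent subspace from all process data, labeled or not, and fit the regression on the labeled scores only. The simulation below is our own stand-in for the industrial data; with scarce labels and noisy measurements, the subspace estimated from all rows tends to predict better than one estimated from the labeled rows alone.

    ```python
    # Semi-supervised PCR versus labeled-only PCR on simulated latent data.
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.linear_model import LinearRegression

    rng = np.random.default_rng(11)
    n_lab, n_unlab, n_test, p, k = 30, 500, 200, 20, 3
    n = n_lab + n_unlab + n_test

    latent = rng.normal(size=(n, k))
    X = latent @ rng.normal(size=(k, p)) + 0.5 * rng.normal(size=(n, p))
    y = latent @ np.array([2.0, -1.0, 0.5]) + 0.1 * rng.normal(size=n)

    X_lab, y_lab = X[:n_lab], y[:n_lab]
    X_unlab = X[n_lab:n_lab + n_unlab]      # process data without outputs
    X_test, y_test = X[-n_test:], y[-n_test:]

    # Semi-supervised: subspace estimated from labeled + unlabeled rows
    pca_ss = PCA(n_components=k).fit(np.vstack([X_lab, X_unlab]))
    reg_ss = LinearRegression().fit(pca_ss.transform(X_lab), y_lab)

    # Benchmark: subspace from the scarce labeled rows only
    pca_b = PCA(n_components=k).fit(X_lab)
    reg_b = LinearRegression().fit(pca_b.transform(X_lab), y_lab)

    print("test R2, semi-supervised PCR:",
          reg_ss.score(pca_ss.transform(X_test), y_test))
    print("test R2, labeled-only PCR:   ",
          reg_b.score(pca_b.transform(X_test), y_test))
    ```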

  • 50.
    Frumosu, Flavia D.
    et al.
    Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kgs. Lyngby, Denmark.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Outliers detection using an iterative strategy for semi‐supervised learning2019In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 35, no 5, p. 1408-1423Article in journal (Refereed)
    Abstract [en]

    As a direct consequence of production systems' digitalization, high-frequency and high-dimensional data have become more easily available. In terms of data analysis, latent structure-based methods are often employed when analyzing multivariate and complex data. However, these methods are designed for supervised learning problems in which sufficient labeled data are available. Particularly for fast production rates, quality characteristics data tend to be scarcer than the available process data generated through multiple sensors and automated data collection schemes. One way to overcome the problem of scarce outputs is to employ semi-supervised learning methods, which use both labeled and unlabeled data. It has been shown that a semi-supervised approach is advantageous when the labeled and unlabeled data come from the same distribution. In real applications, there is a chance that the unlabeled data contain outliers or even a drift in the process, which will affect the performance of the semi-supervised methods. The research question addressed in this work is how to detect outliers in the unlabeled data set using the scarce labeled data set. An iterative strategy is proposed using combined Hotelling's T2 and Q statistics and is applied using a semi-supervised principal component regression (SS-PCR) approach on both simulated and real data sets.
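
    A single pass of the screening idea looks like the sketch below: a PCA model fitted on the trusted labeled rows supplies Hotelling's T2 and Q statistics, and unlabeled rows exceeding labeled-data limits are flagged. The simulated data, the empirical limits, and the one-pass simplification are ours; the article's strategy iterates, refitting after each removal.

    ```python
    # One screening pass: flag unlabeled rows whose T2 or Q statistic
    # exceeds limits derived from the labeled data.
    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)
    k, p = 2, 8
    B = rng.normal(size=(k, p))
    X_lab = rng.normal(size=(100, k)) @ B + 0.1 * rng.normal(size=(100, p))
    X_unlab = rng.normal(size=(300, k)) @ B + 0.1 * rng.normal(size=(300, p))
    X_unlab[:15] += 5.0                  # planted outliers / drifted rows

    pca = PCA(n_components=k).fit(X_lab)  # model from trusted labeled rows

    def t2_q(X):
        scores = pca.transform(X)
        t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
        q = np.sum((X - pca.inverse_transform(scores))**2, axis=1)
        return t2, q

    # empirical 99th-percentile limits (F/chi-square limits are also common)
    t2_lim, q_lim = (np.quantile(s, 0.99) for s in t2_q(X_lab))
    t2_u, q_u = t2_q(X_unlab)
    flagged = (t2_u > t2_lim) | (q_u > q_lim)
    print("flagged unlabeled rows:", np.flatnonzero(flagged))
    ```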
