51 - 91 of 91
  • 51.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University, Tempe.
    Blocking two-level factorial experiments. 2007. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 23, no 3, p. 283-289. Article in journal (Refereed)
    Abstract [en]

    Blocking is commonly used in experimental design to eliminate unwanted variation by creating more homogeneous conditions for experimental treatments within each block. While it has been a standard practice in experimental design, blocking fractional factorials still presents many challenges due to differences between treatment and blocking variables. Lately, new design criteria such as the total number of clear effects and fractional resolution have been proposed to design blocked two-level fractional factorial experiments. This article presents a flexible matrix representation for two-level fractional factorials that will allow experimenters and software developers to block such experiments based on any design criterion that is suitable for the experimental conditions.
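    The blocking idea described above can be sketched with a toy example (not taken from the article itself): a 2^3 full factorial split into two blocks via the defining contrast ABC, so that the block effect is confounded only with the three-factor interaction.

```python
import itertools

# Hypothetical illustration (not from the article): block a 2^3 full
# factorial into two blocks of four runs using the defining contrast ABC.
# Runs with ABC = +1 go into one block and ABC = -1 into the other, so the
# block effect is confounded with the ABC interaction only.
runs = list(itertools.product([-1, 1], repeat=3))  # all 8 runs of a 2^3 design
blocks = {1: [], -1: []}
for a, b, c in runs:
    blocks[a * b * c].append((a, b, c))

for sign, block in blocks.items():
    print(f"Block (ABC = {sign:+d}): {block}")
```

    Any other defining contrast (AB, AC, ...) could be used instead; the choice determines which effect is sacrificed to the block difference.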

  • 52.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Discussion of “The Statistical Evaluation of Categorical Measurements: Simple Scales, but Treacherous Complexity Underneath”. 2014. In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 26, no 1, p. 40-43. Article in journal (Refereed)
  • 53.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. Technical University of Denmark, Lyngby, Denmark.
    Discussion on "Søren Bisgaard's contributions to Quality Engineering: Design of experiments". 2019. In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 31, no 1, p. 149-153. Article in journal (Refereed)
  • 54.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Guest editorial. 2015. In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 27, no 1, p. 1. Article in journal (Other academic)
  • 55.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Rare-events classification: An approach based on genetic algorithm and Voronoi tessellation. 2018. Conference paper (Refereed)
  • 56.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Split-Plot Experiments with Unusual Numbers of Subplot Runs. 2007. In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 19, no 4, p. 363-371. Article in journal (Refereed)
    Abstract [en]

    In many experimental situations, it may not be feasible or even possible to run experiments in a completely randomized fashion as usually recommended. Under these circumstances, split-plot experiments in which certain factors are changed less frequently than the others are often used. Most of the literature on split-plot designs is based on 2-level factorials. For those designs, the number of subplots is a power of 2. There may, however, be some situations where, for cost purposes or physical constraints, we may need an unusual number of subplots such as 3, 5, 6, etc. In this article, we explore this issue and provide some examples based on the Plackett and Burman designs. Algorithmically constructed D-optimal split-plot designs are also compared to those based on Plackett and Burman designs.
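    The Plackett and Burman designs mentioned above follow a standard cyclic construction; a minimal sketch of the 12-run case (the generator row is the published Plackett-Burman one, not something specific to this article):

```python
import numpy as np

# Standard construction (a sketch, not the article's method): build the
# 12-run Plackett-Burman design by cyclically shifting the generator row
# eleven times and appending a final row of all -1s. The 11 factor columns
# come out mutually orthogonal.
generator = np.array([1, 1, -1, 1, 1, 1, -1, -1, -1, 1, -1])
rows = [np.roll(generator, shift) for shift in range(11)]
design = np.vstack(rows + [-np.ones(11, dtype=int)])  # 12 runs x 11 factors

# Orthogonality check: X^T X should equal 12 * I for the factor columns.
print(design.T @ design)
```

    The same circulant recipe yields the other small Plackett-Burman designs (20, 24, ... runs) from their respective generator rows.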

  • 57.
    Kulahci, Murat
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Bergquist, Bjarne
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Vanhatalo, Erik
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Capaci, Francesca
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Project: Statistical methods for improvement of continuous manufacturing processes. 2015. Other (Other (popular science, discussion, etc.))
  • 58.
    Kulahci, Murat
    et al.
    Department of Industrial Engineering, Arizona State University, Tempe.
    Bisgaard, Søren
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    A generalization of the alias matrix. 2006. In: Journal of Applied Statistics, ISSN 0266-4763, E-ISSN 1360-0532, Vol. 33, no 4, p. 387-395. Article in journal (Refereed)
    Abstract [en]

    The investigation of aliases or biases is important for the interpretation of the results from factorial experiments. For two-level fractional factorials this can be facilitated through their group structure. For more general arrays the alias matrix can be used. This tool is traditionally based on the assumption that the error structure is that associated with ordinary least squares. For situations where that is not the case, we provide in this article a generalization of the alias matrix applicable under the generalized least squares assumptions. We also show that for the special case of split plot error structure, the generalized alias matrix simplifies to the ordinary alias matrix.

  • 59.
    Kulahci, Murat
    et al.
    Technical University of Denmark.
    Bisgaard, Søren
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    Challenges in multivariate control charts with autocorrelated data. 2006. In: 2006 Proceedings: Twelfth ISSAT International Conference on Reliability and Quality in Design, August 3-5, 2006, Chicago, Illinois, U.S.A. / [ed] Hoang Pham, New Brunswick, NJ: International Society of Science and Applied Technologies, 2006, p. 215-219. Conference paper (Refereed)
    Abstract [en]

    With the recent proliferation of computers, automatic sensor technology, communications networks and sophisticated software, quality control applications have fundamentally changed. The data used for quality control increasingly are multivariate and sampled individually in time at a high sampling rate. Hence the data will typically be cross-correlated and autocorrelated especially if sampled quickly relative to the dynamics of the system being monitored. Indeed the data stream used as input for process monitoring algorithms will often be high-dimensional discrete time vector time series. Unfortunately traditional approaches of statistical process control often fall short in delivering effective answers under such circumstances. In this paper we discuss some of the challenges this type of data bring forth in statistical quality control applications. We specifically discuss the robustness or lack of robustness of certain standard estimators.

  • 60.
    Kulahci, Murat
    et al.
    Arizona State University, Tempe.
    Bisgaard, Søren
    Isenberg School of Management, University of Massachusetts, Amherst, MA.
    Partial confounding and projective properties of Plackett-Burman designs. 2007. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 23, no 7, p. 791-800. Article in journal (Refereed)
    Abstract [en]

    Screening experiments are typically used when attempting to identify a few active factors in a larger pool of potentially significant factors. In general, two-level regular factorial designs are used, but Plackett-Burman (PB) designs provide a useful alternative. Although PB designs are run-efficient, they confound the main effects with fractions of strings of two-factor interactions, making the analysis difficult. However, recent discoveries regarding the projective properties of PB designs suggest that if only a few factors are active, the original design can be reduced to a full factorial, with additional trials frequently forming attractive patterns. In this paper, we show that there is a close relationship between the partial confounding in certain PB designs and their projective properties. With the aid of examples, we demonstrate how this relationship may help experimenters better appreciate the use of PB designs.

  • 61.
    Kulahci, Murat
    et al.
    University of Wisconsin.
    Bisgaard, Søren
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst; University of Amsterdam.
    Switching-one-column follow-up experiments for Plackett-Burman designs. 2001. In: Journal of Applied Statistics, ISSN 0266-4763, E-ISSN 1360-0532, Vol. 28, no 8, p. 943-949. Article in journal (Refereed)
    Abstract [en]

    Industrial experiments are frequently performed sequentially using two-level fractional factorial designs. In this context, a common strategy for the design of follow-up experiments is to switch the signs in one column. It is well known that this strategy, when applied to two-level fractional factorial resolution III designs, will clear the main effect, for which the switch was performed, from any confounding with any other two-factor interactions and will also clear all the two-factor interactions between that factor and the other main effects from any confounding with other two-factor interactions. In this article, we extend this result and show that this strategy applies to any orthogonal two-level resolution III design and therefore specifically to any two-level Plackett-Burman design.
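    The sign-switching strategy can be illustrated on the smallest resolution III case; a hypothetical sketch (the article treats general orthogonal designs, here just a 2^(3-1) fraction with I = ABC):

```python
import numpy as np

# Hypothetical sketch of the follow-up strategy: rerun the design with the
# signs of a single column switched. In the base 2^(3-1) fraction (C = AB),
# the main effect A is aliased with the BC interaction; switching column A
# in the follow-up de-aliases A from BC in the combined 8-run experiment.
base = np.array([[-1, -1,  1],
                 [ 1, -1, -1],
                 [-1,  1, -1],
                 [ 1,  1,  1]])          # 2^(3-1), columns A, B, C with C = AB
followup = base.copy()
followup[:, 0] *= -1                     # switch the signs in column A
combined = np.vstack([base, followup])   # 8 runs total

# In the combined design, A is orthogonal to BC (dot product is zero):
bc = combined[:, 1] * combined[:, 2]
print(combined[:, 0] @ bc)               # 0
```

    The same check applied to the base design alone gives a dot product of 4, i.e., full aliasing of A with BC.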

  • 62.
    Kulahci, Murat
    et al.
    Department of Industrial Engineering, Arizona State University, Tempe.
    Bisgaard, Søren
    Eugene M. Isenberg School of Management, University of Massachusetts Amherst.
    The use of Plackett-Burman designs to construct split-plot designs. 2005. In: Technometrics, ISSN 0040-1706, E-ISSN 1537-2723, Vol. 47, no 4, p. 495-501. Article in journal (Refereed)
    Abstract [en]

    When some factors are hard to change and others are relatively easier, split-plot experiments are often an economic alternative to fully randomized designs. Split-plot experiments, with their structure of subplot arrays imbedded within whole-plot arrays, have a tendency to become large, particularly in screening situations when many factors are considered. To alleviate this problem, we explore, for the case of two-level designs, various ways to use orthogonal arrays of the Plackett-Burman type to reduce the number of individual tests. General construction principles are outlined, and the resulting alias structure is derived and discussed.

  • 63.
    Kulahci, Murat
    et al.
    Department of Industrial Engineering, Arizona State University, Tempe.
    Box, George E. P.
    Center for Quality and Productivity, University of Wisconsin-Madison.
    Catalysis of discovery and development in engineering and industry. 2003. In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 15, no 3, p. 513-517. Article in journal (Refereed)
    Abstract [en]

    A study was performed on the catalysis of discovery and development in engineering and industry. It was found that the standard ideas of statistical design and analysis had less success in engineering because the courses offered by statistics departments depended on the one-shot paradigm, which was inappropriate for most engineering experimentation. Although industry has long recognized the need for engineers possessing appropriate statistical skills, engineering departments in universities had been slow to appreciate that.

  • 64.
    Kulahci, Murat
    et al.
    Department of Industrial Engineering, Arizona State University, Tempe.
    Box, George E. P.
    Center for Quality and Productivity, University of Wisconsin-Madison.
    Erratum: Catalysis of discovery and development in engineering and industry (Quality Quandaries) (Quality Engineering 15:3 (513-517)). 2003. In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 16, no 1, p. 165. Article in journal (Refereed)
    Abstract [en]

    A study was performed on the catalysis of discovery and development in engineering and industry. It was found that the standard ideas of statistical design and analysis had less success in engineering because the courses offered by statistics departments depended on the one-shot paradigm, which was inappropriate for most engineering experimentation. Although industry has long recognized the need for engineers possessing appropriate statistical skills, engineering departments in universities had been slow to appreciate that.

  • 65.
    Kulahci, Murat
    et al.
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Holcomb, Don
    Xie, Min
    Special Issue: Design for Six Sigma. 2010. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 26, no 4 Spec Issue, p. 315. Article in journal (Other academic)
  • 66.
    Kulahci, Murat
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Menon, Anil
    Celgene Corporation, 556 Morris Ave, Summit, NJ.
    Trellis Plots as Visual Aids for Analyzing Split Plot Experiments. 2016. In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 29, no 2, p. 211-225. Article in journal (Refereed)
    Abstract [en]

    The analysis of split plot experiments can be challenging due to a complicated error structure resulting from restrictions on complete randomization. Similarly, standard visualization methods do not provide the insight practitioners desire to understand the data, think of explanations, generate hypotheses, build models, or decide on next steps. This article demonstrates the effective use of trellis plots in the preliminary data analysis for split plot experiments to address this problem. Trellis displays help to visualize multivariate data by allowing for conditioning in a general way. They can also be used after the statistical analysis for verification, clarification, and communication.

  • 67.
    Kulahci, Murat
    et al.
    Department of Industrial Engineering, Arizona State University, Tempe.
    Montgomery, Douglas C.
    Division of Mathematical and Natural Sciences, Arizona State University, Department of Industrial Engineering, Arizona State University, Tempe.
    Feng, Jack
    Department of Industrial and Manufacturing Engineering and Technology, Bradley University, Peoria, IL.
    Editorial. 2007. In: International Journal of Production Research, ISSN 0020-7543, E-ISSN 1366-588X, Vol. 45, no 23, p. 5453-5454. Article in journal (Other academic)
  • 68.
    Kulahci, Murat
    et al.
    Arizona State University, Tempe.
    Ramirez, José
    W. L. Gore & Associates, Inc.
    Tobias, Randy
    SAS Institute Inc.
    Split-plot fractional designs: Is minimum aberration enough? 2006. In: Journal of Quality Technology, ISSN 0022-4065, Vol. 38, no 1, p. 56-64. Article in journal (Refereed)
    Abstract [en]

    Split-plot experiments are commonly used in industry for product and process improvement. Recent articles on designing split-plot experiments concentrate on minimum aberration as the design criterion. Minimum aberration has been criticized as a design criterion for completely randomized fractional factorial designs, and alternative criteria, such as the maximum number of clear two-factor interactions, are suggested (Wu and Hamada (2000)). The need for alternatives to minimum aberration is even more acute for split-plot designs. In a standard split-plot design, there are several types of two-factor interactions, not all of them equally interesting. However, minimum aberration is not designed to distinguish among the different types of two-factor interactions. It should be noted that this criticism is valid not only for minimum aberration but also for any other design criteria originally proposed for completely randomized designs. Consequently, we provide a modified version of the maximum number of clear two-factor interactions design criterion to be used for split-plot designs.

  • 69.
    Kulahci, Murat
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Tyssedal, John Sølve
    Department of Mathematical Sciences, The Norwegian University of Science and Technology, Trondheim.
    Split-plot designs for multistage experimentation. 2017. In: Journal of Applied Statistics, ISSN 0266-4763, E-ISSN 1360-0532, Vol. 44, no 3, p. 493-510. Article in journal (Refereed)
    Abstract [en]

    Most of today’s complex systems and processes involve several stages through which input or the raw material has to go before the final product is obtained. Also, in many cases factors at different stages interact. Therefore, a holistic approach for experimentation that considers all stages at the same time will be more efficient. However, there have been only a few attempts in the literature to provide an adequate and easy-to-use approach for this problem. In this paper, we present a novel methodology for constructing two-level split-plot and multistage experiments. The methodology is based on the Kronecker product representation of orthogonal designs and can be used for any number of stages, for various numbers of subplots, and for a different number of subplots for each stage. The procedure is demonstrated on both regular and nonregular designs and provides the maximum number of factors that can be accommodated in each stage. Furthermore, split-plot designs for multistage experiments with good projective properties are also provided.

  • 70.
    Li, Jing
    et al.
    School of Computing, Informatics, and Decision Systems Engineering, Arizona State University.
    Kulahci, Murat
    Technical University of Denmark, Department of Applied Mathematics and Computer Science.
    Data Mining: a Special Issue of Quality and Reliability Engineering International (QREI). 2013. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 29, no 3, p. 437. Article in journal (Other academic)
  • 71.
    Li, Jing
    et al.
    Industrial Engineering, School of Computing, Informatics, and Decision Systems Engineering Arizona State University.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Editorial: a Special Issue on Data Mining. 2014. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 30, no 6, p. 813. Article in journal (Other academic)
  • 72.
    Lundkvist, Peder
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Vännman, Kerstin
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    A Comparison of Decision Methods for Cpk When Data are Autocorrelated. 2012. In: Quality Engineering, ISSN 0898-2112, E-ISSN 1532-4222, Vol. 24, no 4, p. 460-472. Article in journal (Refereed)
    Abstract [en]

    In many industrial applications, autocorrelated data are becoming increasingly common due to, for example, on-line data collection systems with high-frequency sampling. Therefore, the basic assumption of independent observations for process capability analysis is not valid. The purpose of this article is to compare decision methods using the process capability index Cpk, when data are autocorrelated. This is done through a case study followed by a simulation study. In the simulation study the actual significance level and power of the decision methods are investigated. The outcome of the article is that two methods appeared to be better than the others.
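    For reference, the index itself is computed as below; a minimal sketch with made-up specification limits and simulated independent data. The article's point is precisely that this naive estimate can mislead when the observations are autocorrelated.

```python
import numpy as np

# Naive capability index (assumes independent observations):
# Cpk = min(USL - mean, mean - LSL) / (3 * sigma).
# The specification limits and data below are made up for illustration.
def cpk(x, lsl, usl):
    mu, sigma = np.mean(x), np.std(x, ddof=1)
    return min(usl - mu, mu - lsl) / (3 * sigma)

rng = np.random.default_rng(1)
x = rng.normal(loc=10.0, scale=0.5, size=200)  # simulated process data
print(f"Cpk = {cpk(x, lsl=8.0, usl=12.0):.2f}")
```

    With autocorrelated data, sigma estimated this way no longer reflects the process variation correctly, which is what motivates the decision methods compared in the article.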

  • 73.
    Löwe, Roland
    et al.
    Section of Urban Water Systems, Department of Environmental Engineering, Technical University of Denmark (DTU Environment).
    Urich, Christian
    Department of Civil Engineering, Faculty of Engineering, Monash University.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark (DTU Compute).
    Radhakrishnan, Mohanasundar
    IHE Delft, Institute for Water Education.
    Deletic, Ana
    Department of Civil Engineering, Faculty of Engineering, Monash University.
    Arnbjerg-Nielsen, Karsten
    Section of Urban Water Systems, Department of Environmental Engineering, Technical University of Denmark (DTU Environment).
    Simulating flood risk under non-stationary climate and urban development conditions: Experimental setup for multiple hazards and a variety of scenarios. 2018. In: Environmental Modelling & Software, ISSN 1364-8152, E-ISSN 1873-6726, Vol. 102, p. 155-171. Article in journal (Refereed)
    Abstract [en]

    A framework for assessing economic flood damage for a large number of climate and urban development scenarios with limited computational effort is presented. Response surfaces are applied to characterize flood damage based on physical variables describing climate-driven hazards and changing vulnerability resulting from urban growth. The framework is embedded in an experimental setup where flood damage obtained from combined hydraulic-urban development simulations is approximated using kriging-metamodels. Space-filling, sequential and stratified sequential sampling strategies are tested. Reliable approximations of economic damage are obtained in a theoretical case study involving pluvial and coastal hazards, and the stratified sequential sampling strategy is most robust to irregular surface shapes. The setup is currently limited to considering only planned urban development patterns and flood adaptation options implemented over short time horizons. However, the number of simulations is reduced by up to one order of magnitude compared to scenario-based methods, highlighting the potential of the approach.

  • 74.
    McClary, Daniel W.
    et al.
    School of Computing, Informatics and Decision Systems Engineering, Arizona State University, Computer Science and Engineering, Arizona State University, Tempe.
    Syrotiuk, Violet R.
    School of Computing, Informatics and Decision Systems Engineering, Arizona State University, Computer Science and Engineering, Arizona State University, Tempe.
    Kulahci, Murat
    Informatics and Mathematical Modelling, Technical University of Denmark.
    A framework for reactive optimization in mobile ad hoc networks. 2008. In: Proceedings of the 2008 1st International Conference on Information Technology: 19-21 May 2008, Poland, Gdansk University of Technology, Faculty of Electronics, Telecommunications and Informatics / [ed] Andrzej Stepnowski, Piscataway, NJ: IEEE Communications Society, 2008, article id 4621578. Conference paper (Refereed)
    Abstract [en]

    We present a framework to optimize the performance of a mobile ad hoc network over a wide range of operating conditions. It includes screening experiments to quantify the parameters and interactions among parameters influential to throughput. Profile-driven regression is applied to obtain a model of the non-linear behaviour of throughput. The intermediate models obtained in this modelling effort are used to adapt the parameters as the network conditions change, in order to maximize throughput. The improvements in throughput range from 10 to 26 times that achieved with the default parameter settings. The predictive accuracy of the model is monitored and used to update the model dynamically. The results indicate the framework may be useful for the optimization of dynamic systems of high dimension.

  • 75.
    McClary, Daniel W.
    et al.
    School of Computing, Informatics and Decision Systems Engineering, Arizona State University.
    Syrotiuk, Violet R.
    School of Computing, Informatics and Decision Systems Engineering, Arizona State University.
    Kulahci, Murat
    DTU Informatics, Richard Petersens Plads, building 321, DK-2800 Lyngby.
    Profile-driven regression for modeling and runtime optimization of mobile networks. 2010. In: ACM Transactions on Modeling and Computer Simulation, ISSN 1049-3301, E-ISSN 1558-1195, Vol. 20, no 3, article id 17. Article in journal (Refereed)
    Abstract [en]

    Computer networks often display nonlinear behavior when examined over a wide range of operating conditions. There are few strategies available for modeling such behavior and optimizing such systems as they run. Profile-driven regression is developed and applied to modeling and runtime optimization of throughput in a mobile ad hoc network, a self-organizing collection of mobile wireless nodes without any fixed infrastructure. The intermediate models generated in profile-driven regression are used to fit an overall model of throughput, and are also used to optimize controllable factors at runtime. Unlike others, the throughput model accounts for node speed. The resulting optimization is very effective; locally optimizing the network factors at runtime results in throughput as much as six times higher than that achieved with the factors at their default levels.

  • 76.
    McClary, Daniel W.
    et al.
    School of Computing, Informatics and Decision Systems Engineering, Arizona State University.
    Syrotiuk, Violet R.
    School of Computing, Informatics and Decision Systems Engineering, Arizona State University.
    Kulahci, Murat
    Technical University of Denmark, Lyngby.
    Steepest-ascent constrained simultaneous perturbation for multiobjective optimization. 2010. In: ACM Transactions on Modeling and Computer Simulation, ISSN 1049-3301, E-ISSN 1558-1195, Vol. 21, no 1, article id 2. Article in journal (Refereed)
    Abstract [en]

    The simultaneous optimization of multiple responses in a dynamic system is challenging. When a response has a known gradient, it is often easily improved along the path of steepest ascent. In contrast, a stochastic approximation technique may be used when the gradient is unknown or costly to obtain. We consider the problem of optimizing multiple responses in which the gradient is known for only one response. We propose a hybrid approach for this problem, called simultaneous perturbation stochastic approximation steepest ascent, SPSA-SA or SP(SA)2 for short. SP(SA)2 is an SPSA technique that leverages information about the known gradient to constrain the perturbations used to approximate the others. We apply SP(SA)2 to the cross-layer optimization of throughput, packet loss, and end-to-end delay in a mobile ad hoc network (MANET), a self-organizing wireless network. The results show that SP(SA)2 achieves higher throughput and lower packet loss and end-to-end delay than the steepest ascent, SPSA, and the Nelder-Mead stochastic approximation approaches. It also reduces the cost in the number of iterations to perform the optimization.
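    A generic SPSA iteration, the unconstrained building block behind the paper's hybrid, might look as follows; the constraint on the perturbations that defines SP(SA)2 is omitted here, and the objective and step-size constants are made up for illustration.

```python
import numpy as np

# Plain SPSA (a sketch of the building block, not the paper's SP(SA)^2):
# each iteration estimates the full gradient from only two function
# evaluations, using a random simultaneous perturbation of all coordinates.
rng = np.random.default_rng(0)

def spsa_minimize(f, theta, iters=500, a=0.2, c=0.1):
    for k in range(1, iters + 1):
        ak, ck = a / k, c / k ** 0.5              # decaying gain sequences
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Bernoulli +/-1
        ghat = (f(theta + ck * delta) - f(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * ghat                 # gradient-descent step
    return theta

theta = spsa_minimize(lambda t: np.sum(t ** 2), np.array([2.0, -1.5]))
print(theta)  # moves toward the minimizer [0, 0]
```

    In SP(SA)2, the paper's contribution, the perturbation delta would additionally be constrained using the one response whose gradient is known.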

  • 77.
    Montgomery, Douglas C.
    et al.
    Division of Mathematical and Natural Sciences, Arizona State University, Department of Industrial Engineering, Arizona State University, Tempe.
    Almimi, Ashraf A.
    NASA Langley Research Center, Hampton, Department of Industrial Engineering, Arizona State University, Tempe.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University, Tempe.
    Estimation of missing observations in two-level split-plot designs. 2008. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 24, no 2, p. 127-152. Article in journal (Refereed)
    Abstract [en]

    Inserting estimates for the missing observations from split-plot designs restores their balanced or orthogonal structure and alleviates the difficulties in the statistical analysis. In this article, we extend a method due to Draper and Stoneman to estimate the missing observations from unreplicated two-level factorial and fractional factorial split-plot (FSP and FFSP) designs. The missing observations, which can either be from the same whole plot, from different whole plots, or comprise entire whole plots, are estimated by equating to zero a number of specific contrast columns equal to the number of the missing observations. These estimates are inserted into the design table and the estimates for the remaining effects (or alias chains of effects, as is the case with FFSP designs) are plotted on two half-normal plots: one for the whole-plot effects and the other for the subplot effects. If the smaller effects do not point at the origin, then different contrast columns to some or all of the initial ones should be discarded and the plots re-examined for bias. Using examples, we show how the method provides estimates for the missing observations that are very close to their actual values.

  • 78.
    Montgomery, Douglas C.
    et al.
    Division of Mathematical and Natural Sciences, Arizona State University.
    Jennings, Cheryl L.
    Kulahci, Murat
    Industrial Engineering, Arizona State University, Tempe.
    Introduction to time series analysis and forecasting. 2008. Book (Refereed)
  • 79.
    Rauf Khan, Abdul
    et al.
    Department of Electronic Systems, Aalborg University, Denmark.
    Schioler, Henrik
    Department of Electronic Systems, Aalborg University, Denmark.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark.
    Selection of objective function for imbalanced classification: an industrial case study. 2018. In: IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Piscataway, NJ: IEEE, 2018. Conference paper (Refereed)
    Abstract [en]

    Today, in modern factories, each step in manufacturing produces a wealth of valuable and highly precise information. This provides a great opportunity for understanding the hidden statistical dependencies in the process. Systematic analysis and utilization of advanced analytical methods can lead towards more informed decisions. In this article we discuss some of the challenges related to big data analysis in manufacturing and present solutions to some of them.

  • 80.
    Rauf Khan, Abdul
    et al.
    Department of Electronic Systems, Aalborg University, Denmark.
    Schioler, Henrik
    Department of Electronic Systems, Aalborg University, Denmark.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. Department of Applied Mathematics and Computer Science, Technical University of Denmark.
    Knudsen, Torben
    Department of Electronic Systems, Aalborg University, Denmark.
    Big data analytics for industrial process control, 2018. In: IEEE International Conference on Emerging Technologies and Factory Automation (ETFA), Piscataway, NJ: IEEE, 2018, Vol. Part F134116. Conference paper (Refereed)
    Abstract [en]

    Today, in modern factories, each step in manufacturing produces a wealth of valuable and highly precise information. This provides a great opportunity for understanding the hidden statistical dependencies in the process. Systematic analysis and utilization of advanced analytical methods can lead towards more informed decisions. In this article we discuss some of the challenges related to big data analysis in manufacturing and present solutions to some of them.

  • 81.
    Spooner, Max
    et al.
    DTU Compute, Technical University of Denmark, Asmussens Alle 322, 2800 Kgs. Lyngby, Denmark.
    Kold, David
    Chr. Hansen A/S, Hvidovre.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. DTU Compute, Technical University of Denmark, Kgs. Lyngby, Denmark.
    Harvest time prediction for batch processes, 2018. In: Computers and Chemical Engineering, ISSN 0098-1354, E-ISSN 1873-4375, Vol. 117, p. 32-41. Article in journal (Refereed)
    Abstract [en]

    Batch processes usually exhibit variation in the time at which individual batches are stopped (referred to as the harvest time). The harvest time is based on the occurrence of some criterion, and there may be great uncertainty as to when this criterion will be satisfied. This uncertainty increases the difficulty of scheduling downstream operations and results in fewer completed batches per day. A real case study is presented of a bacteria fermentation process. We consider the problem of predicting the harvest time of a batch in advance in order to reduce variation and improve batch quality. Lasso regression is used to obtain an interpretable model for predicting the harvest time at an early stage in the batch. A novel method for updating the harvest time predictions as a batch progresses is presented, based on information obtained from online alignment using dynamic time warping.
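As a rough sketch of the kind of sparse regression the abstract mentions, the following implements the lasso by cyclic coordinate descent on synthetic early-batch features. The feature construction, penalty value, and effect sizes are hypothetical; the authors' actual model and data are not reproduced here.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=500):
    """Lasso by cyclic coordinate descent with soft-thresholding."""
    n, p = X.shape
    beta = np.zeros(p)
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual that excludes feature j's current contribution.
            r = y - X @ beta + X[:, j] * beta[j]
            rho = X[:, j] @ r / n
            z = X[:, j] @ X[:, j] / n
            beta[j] = np.sign(rho) * max(abs(rho) - lam, 0.0) / z
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 6))                      # hypothetical early-stage batch features
harvest_time = 24.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1]    # hours; only two features matter
beta = lasso_cd(X, harvest_time - harvest_time.mean(), lam=0.1)
```

The soft-thresholding step is what produces exact zeros for irrelevant features, which is why the abstract can call the resulting model interpretable.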

  • 82.
    Spooner, Max
    et al.
    DTU Compute, Technical University of Denmark, Kgs. Lyngby.
    Kold, David
    Chr. Hansen A/S, Hvidovre.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. DTU Compute, Technical University of Denmark.
    Selecting local constraint for alignment of batch process data with dynamic time warping, 2017. In: Chemometrics and Intelligent Laboratory Systems, ISSN 0169-7439, E-ISSN 1873-3239, Vol. 167, p. 161-170. Article in journal (Refereed)
    Abstract [en]

    There are two key reasons for aligning batch process data. The first is to obtain same-length batches so that standard methods of analysis may be applied, whilst the second reason is to synchronise events that take place during each batch so that the same event is associated with the same observation number for every batch. Dynamic time warping has been shown to be an effective method for meeting these objectives. This is based on a dynamic programming algorithm that aligns a batch to a reference batch, by stretching and compressing its local time dimension. The resulting "warping function" may be interpreted as a progress signature of the batch which may be appended to the aligned data for further analysis. For the warping function to be a realistic reflection of the progress of a batch, it is necessary to impose some constraints on the dynamic time warping algorithm, to avoid an alignment which is too aggressive and which contains pathological warping. Previous work has focused on addressing this issue using global constraints. In this work, we investigate the use of local constraints in dynamic time warping and define criteria for evaluating the degree of time distortion and variable synchronisation obtained. A local constraint scheme is extended to include constraints not previously considered, and a novel method for selecting the optimal local constraint with respect to the two criteria is proposed. For illustration, the method is applied to real data from an industrial bacteria fermentation process.
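The dynamic programming recursion underlying DTW can be sketched as follows. This minimal version uses the plain symmetric step pattern with no local constraint, i.e. it is the unconstrained baseline, not one of the constraint schemes the paper evaluates; the toy series are hypothetical.

```python
import numpy as np

def dtw_align(x, ref):
    """Align series x to a reference; returns (distance, warping path).

    Uses the unconstrained symmetric step pattern; a local constraint
    would restrict the set of allowed predecessor steps below.
    """
    n, m = len(x), len(ref)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - ref[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    # Backtrack from (n, m) to (1, 1) along the cheapest predecessors.
    path, (i, j) = [(n, m)], (n, m)
    while (i, j) != (1, 1):
        cands = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min((c for c in cands if c[0] >= 1 and c[1] >= 1), key=lambda c: D[c])
        path.append((i, j))
    return D[n, m], path[::-1]

# A batch that dwells at its first value aligns perfectly onto a shorter reference.
dist, path = dtw_align([0.0, 0.0, 1.0, 2.0], [0.0, 1.0, 2.0])
```

The returned path is the "warping function" of the abstract: each pair (i, j) maps batch time i onto reference time j.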

  • 83.
    Spooner, Max
    et al.
    DTU Compute, Technical University of Denmark, Kgs. Lyngby, Denmark.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. DTU Compute, Technical University of Denmark, Kgs. Lyngby, Denmark.
    Monitoring batch processes with dynamic time warping and k-nearest neighbours, 2018. In: Chemometrics and Intelligent Laboratory Systems, ISSN 0169-7439, E-ISSN 1873-3239, Vol. 183, p. 102-112. Article in journal (Refereed)
    Abstract [en]

    A novel data-driven approach to batch process monitoring is presented, which combines the k-Nearest Neighbour rule with the dynamic time warping (DTW) distance. This online method (DTW-NN) calculates the DTW distance between an ongoing batch and each batch in a reference database of batches produced under normal operating conditions (NOC). The sum of the k smallest DTW distances is monitored. If a fault occurs in the ongoing batch, then this distance increases and an alarm is generated. The monitoring statistic is easy to interpret, being a direct measure of similarity of the ongoing batch to its nearest NOC predecessors, and the method makes no distributional assumptions regarding normal operating conditions. DTW-NN is applied to four extensive datasets from simulated batch production of penicillin, and tested on a wide variety of fault types, magnitudes and onset times. Performance of DTW-NN is contrasted with a benchmark multiway PCA approach, and DTW-NN is shown to perform particularly well when there is clustering of batches under NOC.
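The monitoring statistic the abstract describes, the sum of the k smallest DTW distances from the ongoing batch to a NOC reference set, can be sketched as below. The batch profiles, the value of k, and the fault (a sustained sensor offset) are hypothetical choices, not the paper's penicillin datasets.

```python
import numpy as np

def dtw_dist(x, ref):
    """Plain DTW distance with the symmetric step pattern."""
    n, m = len(x), len(ref)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = abs(x[i - 1] - ref[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

def dtw_nn_stat(batch, noc_batches, k=3):
    """Sum of the k smallest DTW distances to the NOC reference batches."""
    d = sorted(dtw_dist(batch, b) for b in noc_batches)
    return sum(d[:k])

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 30)
noc = [np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(30) for _ in range(10)]

normal_stat = dtw_nn_stat(np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(30), noc)
faulty_stat = dtw_nn_stat(np.sin(2 * np.pi * t) + 0.8, noc)  # sustained sensor offset
```

An alarm limit would be set from the statistic's distribution over held-out NOC batches; a faulty batch moves away from all of its nearest NOC neighbours at once.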

  • 84.
    Tyssedal, John Sølve
    et al.
    Department of Mathematical Sciences, The Norwegian University of Science and Technology, Trondheim.
    Kulahci, Murat
    Department of Industrial Engineering, Arizona State University, Tempe.
    Analysis of split-plot designs with mirror image pairs as sub-plots, 2005. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 21, no 5, p. 539-551. Article in journal (Refereed)
    Abstract [en]

    In this article we present a procedure to analyze split-plot experiments with mirror image pairs as sub-plots when third- and higher-order interactions can be assumed negligible. Although performing a design in a split-plot manner induces correlation among observations, we show that with such designs the essential search for potentially active factors can be done in two steps using ordinary least squares. The suggested procedure is tested on a real example and on two simulated screening examples: one with a split-plot design based on a geometric design and one with a split-plot design based on a non-geometric Plackett and Burman design. The examples also illustrate the advantage of using non-geometric designs, where the effects are partially aliased instead of being fully aliased as in highly fractionated fractional factorials.

  • 85.
    Tyssedal, John Sølve
    et al.
    Department of Mathematical Sciences, The Norwegian University of Science and Technology, Trondheim.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Experiments for multi-stage processes, 2015. In: Quality Technology & Quantitative Management, ISSN 1684-3703, E-ISSN 1811-4857, Vol. 12, no 1, p. 13-28. Article in journal (Refereed)
    Abstract [en]

    Multi-stage processes are very common in both process and manufacturing industries. In this article we present a methodology for designing experiments for multi-stage processes. Typically in these situations the design is expected to involve many factors from different stages. To minimize the required number of experimental runs, we suggest using mirror image pairs of experiments at each stage following the first. As the design criterion, we consider projectivity and mainly focus on designs with projectivity P ≥ 3. We provide the methodology for generating these designs for processes with any number of stages and also show how to identify and estimate the effects. Both regular and non-regular designs are considered as base designs in generating the overall design.

  • 86.
    Tyssedal, John Sølve
    et al.
    Department of Mathematical Sciences, The Norwegian University of Science and Technology, Trondheim.
    Kulahci, Murat
    Department of Informatics and Mathematical Modelling, Technical University of Denmark.
    Bisgaard, Søren
    Isenberg School of Management, University of Massachusetts Amherst.
    Split-plot designs with mirror image pairs as sub-plots, 2011. In: Journal of Statistical Planning and Inference, ISSN 0378-3758, E-ISSN 1873-1171, Vol. 141, no 12, p. 3686-3696. Article in journal (Refereed)
    Abstract [en]

    In this article we investigate two-level split-plot designs where the sub-plots consist of only two mirror image trials. Assuming third- and higher-order interactions negligible, we show that these designs divide the estimated effects into two orthogonal sub-spaces, separating sub-plot main effects and sub-plot by whole-plot interactions from the rest. Further, we show how to construct split-plot designs of projectivity P ≥ 3. We also introduce a new class of split-plot designs with mirror image pairs constructed from non-geometric Plackett-Burman designs. The design properties of such designs are very appealing, with effects of major interest free from full aliasing, assuming that third- and higher-order interactions are negligible.
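The construction of sub-plots as mirror image pairs amounts to following each run by its fold-over, i.e. the run with all sub-plot factor signs reversed. A minimal sketch with a hypothetical 4-run base design of sub-plot factor settings (the whole-plot factor settings, not shown, would be held fixed within each pair):

```python
def mirror_image_subplots(sub_plot_design):
    """For each sub-plot factor setting, form the pair (run, -run)."""
    out = []
    for run in sub_plot_design:
        out.append(list(run))
        out.append([-v for v in run])   # the mirror image trial
    return out

base = [[1, 1], [1, -1], [-1, 1], [-1, -1]]   # hypothetical sub-plot settings
design = mirror_image_subplots(base)
```

Because every run is paired with its negation, each factor column sums to zero within each pair, which is what separates the sub-plot effects into their own orthogonal sub-space.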

  • 87.
    Vanhatalo, Erik
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Impact of Autocorrelation on Principal Components and Their Use in Statistical Process Control, 2016. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 32, no 4, p. 1483-1500. Article in journal (Refereed)
    Abstract [en]

    A basic assumption when using principal component analysis (PCA) for inferential purposes, such as in statistical process control (SPC), is that the data are independent in time. In many industrial processes, frequent sampling and process dynamics make this assumption unrealistic, rendering sampled data autocorrelated (serially dependent). PCA can be used to reduce data dimensionality and to simplify multivariate SPC. Although there have been some attempts in the literature to deal with autocorrelated data in PCA, we argue that the impact of autocorrelation on PCA and PCA-based SPC is neither well understood nor properly documented. This article illustrates through simulations the impact of autocorrelation on the descriptive ability of PCA and on the monitoring performance using PCA-based SPC when autocorrelation is ignored. In the simulations, cross- and autocorrelated data are generated using a stationary first-order vector autoregressive model. The results show that the descriptive ability of PCA may be seriously affected by autocorrelation, causing a need to incorporate additional principal components to maintain the model's explanatory ability. When all variables have the same autocorrelation coefficients, the descriptive ability is intact, while a significant impact occurs when the variables have different degrees of autocorrelation. We also illustrate that autocorrelation may impact PCA-based SPC and cause lower false alarm rates and delayed shift detection, especially for negative autocorrelation. However, for larger shifts the impact of autocorrelation seems rather small.
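The simulation setup described, cross- and autocorrelated data from a stationary first-order vector autoregressive, VAR(1), model followed by PCA, can be sketched as follows. The coefficient matrix and noise covariance are hypothetical choices for illustration, not the paper's settings.

```python
import numpy as np

def simulate_var1(phi, noise_cov, n, seed=0):
    """x_t = phi @ x_{t-1} + e_t, with e_t ~ N(0, noise_cov)."""
    rng = np.random.default_rng(seed)
    p = phi.shape[0]
    x = np.zeros((n, p))
    for t in range(1, n):
        x[t] = phi @ x[t - 1] + rng.multivariate_normal(np.zeros(p), noise_cov)
    return x

def explained_variance(x):
    """Eigenvalue shares of the sample covariance = PCA explained-variance ratios."""
    vals = np.linalg.eigvalsh(np.cov((x - x.mean(0)).T))[::-1]
    return vals / vals.sum()

# Variables with *different* autocorrelation, the case the paper flags as harmful.
phi = np.diag([0.9, 0.2])                       # stationary: both eigenvalues < 1
noise_cov = np.array([[1.0, 0.5], [0.5, 1.0]])  # cross-correlated noise
ratios = explained_variance(simulate_var1(phi, noise_cov, n=2000))
```

Repeating this with equal diagonal entries in `phi` and comparing the explained-variance profiles reproduces, in miniature, the comparison the abstract describes.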

  • 88.
    Vanhatalo, Erik
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. Technical University of Denmark, Department of Applied Mathematics and Computer Science.
    The Effect of Autocorrelation on the Hotelling T2 Control Chart, 2015. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 31, no 8, p. 1779-1796. Article in journal (Refereed)
    Abstract [en]

    One of the basic assumptions for traditional univariate and multivariate control charts is that the data are independent in time. For the latter, in many cases the data are serially dependent (autocorrelated) and cross-correlated due to, for example, frequent sampling and process dynamics. It is well known that autocorrelation affects the false alarm rate and the shift detection ability of traditional univariate control charts. However, how the false alarm rate and the shift detection ability of the Hotelling T2 control chart are affected by various auto- and cross-correlation structures for different magnitudes of shifts in the process mean is not fully explored in the literature. In this article, the performance of the Hotelling T2 control chart for different shift sizes and various auto- and cross-correlation structures is compared based on the average run length (ARL) using simulated data. Three different approaches to constructing the Hotelling T2 chart are studied for two different estimates of the covariance matrix: (1) ignoring the autocorrelation and using the raw data with theoretical upper control limits; (2) ignoring the autocorrelation and using the raw data with adjusted control limits calculated through Monte Carlo simulations; and (3) constructing the control chart for the residuals from a multivariate time series model fitted to the raw data. To limit the complexity, we use a first-order vector autoregressive process, VAR(1), and focus mainly on bivariate data.
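Approach (1) in the abstract, ignoring the autocorrelation and charting raw observations against a theoretical limit, can be sketched as below; a minimal sketch in which the bivariate covariance, shift size, and reference sample size are hypothetical. The chi-square upper control limit is computed in closed form for 2 degrees of freedom to avoid a scipy dependency.

```python
import numpy as np

def hotelling_t2(x, mean, cov_inv):
    """T2 = (x - mean)' S^{-1} (x - mean)."""
    d = np.asarray(x, dtype=float) - mean
    return float(d @ cov_inv @ d)

# Phase I: estimate mean and covariance from an in-control reference sample.
rng = np.random.default_rng(42)
reference = rng.multivariate_normal([0, 0], [[1.0, 0.7], [0.7, 1.0]], size=500)
mean = reference.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(reference.T))

# For 2 variables, the chi-square (1 - alpha) quantile is -2*ln(alpha) exactly.
ucl = -2.0 * np.log(0.0027)

t2_in = hotelling_t2(mean, mean, cov_inv)           # observation at the centre
t2_out = hotelling_t2([4.0, -4.0], mean, cov_inv)   # a shift against the correlation
```

A shift that runs counter to the cross-correlation (here, opposite signs despite positive correlation) inflates T2 far beyond what its marginal sizes suggest, which is one reason the multivariate chart is used at all.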

  • 89.
    Vanhatalo, Erik
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering. Technical University of Denmark.
    Bergquist, Bjarne
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    On the structure of dynamic principal component analysis used in statistical process monitoring, 2017. In: Chemometrics and Intelligent Laboratory Systems, ISSN 0169-7439, E-ISSN 1873-3239, Vol. 167, p. 1-11. Article in journal (Refereed)
    Abstract [en]

    When principal component analysis (PCA) is used for statistical process monitoring, it relies on the assumption that data are time independent. However, industrial data will often exhibit serial correlation. Dynamic PCA (DPCA) has been suggested as a remedy for high-dimensional and time-dependent data. In DPCA the input matrix is augmented by adding time-lagged values of the variables. In building a DPCA model the analyst needs to decide on (1) the number of lags to add, and (2) given a specific lag structure, how many principal components to retain. In this article we propose a new analyst-driven method to determine the maximum number of lags in DPCA with a foundation in multivariate time series analysis. The method is based on the behavior of the eigenvalues of the lagged autocorrelation and partial autocorrelation matrices. Given a specific lag structure, we also propose a method for determining the number of principal components to retain. The number of retained principal components is determined by visual inspection of the serial correlation in the squared prediction error statistic, Q (SPE), together with the cumulative explained variance of the model. The methods are illustrated using simulated vector autoregressive and moving average data, and tested on Tennessee Eastman process data.
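The augmentation step in DPCA, appending time-lagged copies of each variable before performing PCA, can be sketched as follows. The lag count and the toy matrix are illustrative only; choosing the lag count is exactly the question the article addresses.

```python
import numpy as np

def augment_lags(X, lags):
    """Each row of the result is [x_t, x_{t-1}, ..., x_{t-lags}].

    The first `lags` time points are dropped because they lack full history.
    An (n, p) input becomes an (n - lags, p * (lags + 1)) output.
    """
    n, _ = X.shape
    return np.hstack([X[lags - l : n - l, :] for l in range(lags + 1)])

X = np.arange(12.0).reshape(6, 2)   # 6 time points, 2 variables
Z = augment_lags(X, lags=2)         # row t holds x_t, x_{t-1}, x_{t-2}
```

Ordinary PCA on `Z` is then DPCA on `X`: the lagged copies let static principal components capture the serial correlation.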

  • 90.
    Vanhatalo, Erik
    et al.
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Bergquist, Bjarne
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Capaci, Francesca
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Lag Structure in Dynamic Principal Component Analysis, 2016. Conference paper (Refereed)
    Abstract [en]

    Purpose of this Presentation
    Automatic data collection schemes and abundant availability of multivariate data increase the need for latent variable methods in statistical process control (SPC), such as SPC based on principal component analysis (PCA). However, process dynamics combined with high-frequency sampling will often cause successive observations to be autocorrelated, which can have a negative impact on PCA-based SPC; see Vanhatalo and Kulahci (2015). Dynamic PCA (DPCA), proposed by Ku et al. (1995), has been suggested as the remedy, 'converting' dynamic correlation into static correlation by adding the time-lagged variables into the original data before performing PCA. Hence an important issue in DPCA is deciding on the number of time-lagged variables to add in augmenting the data matrix, addressed by Ku et al. (1995) and Rato and Reis (2013). However, we argue that the available methods are rather complicated and lack intuitive appeal. The purpose of this presentation is to illustrate a new and simple method to determine the maximum number of lags to add in DPCA based on the structure in the original data.

    Findings
    We illustrate how the maximum number of lags can be determined from time-trends in the eigenvalues of the estimated lagged autocorrelation matrices of the original data. We also show the impact of the system dynamics on the number of lags to be considered through vector autoregressive (VAR) and vector moving average (VMA) processes. The proposed method is compared with currently available methods using simulated data.

    Research Limitations / Implications
    The method assumes that the same number of lags is added for all variables. Future research will focus on adapting our proposed method to accommodate the identification of individual time-lags for each variable.

    Practical Implications
    The visualization possibility of the proposed method will be useful for DPCA practitioners.

    Originality / Value of Presentation
    The proposed method provides a tool to determine the number of lags in DPCA that works in a manner similar to the autocorrelation function (ACF) in the identification of univariate time series models and does not require several rounds of PCA.

    Design / Methodology / Approach
    The results are based on Monte Carlo simulations in R statistics software and in the Tennessee Eastman Process simulator (Matlab).

  • 91.
    Vining, Geoff
    et al.
    Virginia Polytechnic Institute and State University, Blacksburg, VA.
    Kulahci, Murat
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Business Administration and Industrial Engineering.
    Pedersen, Søren
    Technical University of Denmark, Lyngby.
    Recent Advances and Future Directions for Quality Engineering, 2016. In: Quality and Reliability Engineering International, ISSN 0748-8017, E-ISSN 1099-1638, Vol. 32, no 3, p. 863-875. Article in journal (Refereed)
    Abstract [en]

    The origins of quality engineering are in manufacturing, where quality engineers apply basic statistical methodologies to improve the quality and productivity of products and processes. In the past decade, people have discovered that these methodologies are effective for improving almost any type of system or process, such as financial, health care, and supply chains. This paper begins with a review of key advances and trends within quality engineering over the past decade. The second part uses the first part as a foundation to outline new application areas for the field. It also discusses how quality engineering needs to evolve in order to make significant contributions to these new areas.
