1 - 13 of 13
  • 1.
    Arslan, O. (Department of Mathematics, Cukurova University); Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science); Ekblom, Håkan
    Algorithms to compute CM- and S-estimates for regression. 2003. In: International Conference on Robust Statistics: ICORS / [ed] Rudolf Dutter; P. Filzmoser; U. Gather; P. J. Rousseeuw. Physica-Verlag Rudolf Liebig GmbH, 2003, p. 62-76. Conference paper (Refereed)
    Abstract [en]

    Constrained M-estimators for regression were introduced by Mendes and Tyler in 1995 as an alternative class of robust regression estimators with high breakdown point and high asymptotic efficiency. To compute the CM-estimate, the global minimum of an objective function with an inequality constraint has to be located. To find the S-estimate for the same problem, we instead restrict ourselves to the boundary of the feasible region. The algorithm presented for computing CM-estimates can easily be modified to compute S-estimates as well. Testing is carried out with a comparison to the algorithm SURREAL by Ruppert.

  • 2.
    Bergström, Per (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science); Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science)
    Robust registration of point sets using iteratively reweighted least squares. 2014. In: Computational Optimization and Applications, ISSN 0926-6003, E-ISSN 1573-2894, Vol. 58, no 3, p. 543-561. Article in journal (Refereed)
    Abstract [en]

    Registration of point sets is done by finding a rotation and translation that produce a best fit between a set of data points and a set of model points. We use robust M-estimation techniques to limit the influence of outliers, more specifically a modified version of the iterative closest point algorithm where we use iteratively reweighted least squares to incorporate the robustness. We prove convergence with respect to the value of the objective function for this algorithm. Different criterion functions are also compared, to assess their ability to produce appropriate point-set fits when the sets of data points contain outliers. The robust methods prove to be superior to least-squares minimization in this setting.
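The iteratively reweighted least squares scheme described in the abstract can be sketched for plain linear regression with a Huber criterion function. This is a minimal Python/numpy illustration, not the paper's registration code; the function names, the MAD scale estimate, and the tuning constant k = 1.345 are my own choices:

```python
import numpy as np

def huber_weights(r, k=1.345):
    """Huber weight psi(r)/r: 1 for |r| <= k, k/|r| otherwise."""
    a = np.abs(r)
    return np.where(a <= k, 1.0, k / np.maximum(a, 1e-12))

def irls(X, y, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate for linear regression via iteratively
    reweighted least squares, starting from the OLS solution."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(max_iter):
        r = y - X @ beta
        s = 1.4826 * np.median(np.abs(r - np.median(r)))  # robust MAD scale
        if s == 0.0:
            break
        w = huber_weights(r / s, k)
        XtW = X.T * w                     # X^T W (broadcast over columns)
        beta_new = np.linalg.solve(XtW @ X, XtW @ y)
        if np.linalg.norm(beta_new - beta) <= tol * (1.0 + np.linalg.norm(beta)):
            return beta_new
        beta = beta_new
    return beta
```

Each pass solves the weighted normal equations X^T W X beta = X^T W y; observations with large residuals get weight k/|r| and so lose influence, which is the mechanism the paper embeds inside its modified ICP loop.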

  • 3.
    Bergström, Per (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Fluid and Experimental Mechanics); Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science)
    Robust registration of surfaces using a refined iterative closest point algorithm with a trust region approach. 2017. In: Numerical Algorithms, ISSN 1017-1398, E-ISSN 1572-9265, Vol. 74, no 3, p. 755-779. Article in journal (Refereed)
    Abstract [en]

    The problem of finding a rigid-body transformation that aligns a set of data points with a given surface, using a robust M-estimation technique, is considered. A refined iterative closest point (ICP) algorithm is described, where a minimization problem of point-to-plane distances with a proposed constraint is solved in each iteration to find an updating transformation. The constraint is derived from a sum of weighted squared point-to-point distances and forms a natural trust region, which ensures convergence. Only a minor amount of additional computation is required to use it. Two alternative trust regions are introduced and analyzed. Finally, numerical results for some test problems are presented. These results make it clear that, with respect to convergence rate and accuracy, there is a significant advantage in using the proposed trust region approach compared with point-to-point distance minimization, as well as with point-to-plane distance minimization using a Newton-type update without any step size control.

  • 4.
    Bergström, Per (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science); Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science); Söderkvist, Inge (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science)
    Efficient computation of the Gauss-Newton direction when fitting NURBS using ODR. 2012. In: BIT Numerical Mathematics, ISSN 0006-3835, E-ISSN 1572-9125, Vol. 52, no 3, p. 571-588. Article in journal (Refereed)
    Abstract [en]

    We consider a subproblem in parameter estimation using the Gauss-Newton algorithm with regularization for NURBS curve fitting. The NURBS curve is fitted to a set of data points in the least-squares sense, where the sum of squared orthogonal distances is minimized. Control points and weights are estimated. The knot vector and the degree of the NURBS curve are kept constant. In the Gauss-Newton algorithm, a search direction is obtained from an overdetermined linear system with a Jacobian and a residual vector. Because of the properties of our problem, the Jacobian has a particular sparse structure which is suitable for a splitting of variables. We handle the computational problems and report the accuracy obtained using different methods, as well as the elapsed real computational time. The splitting of variables is about twice as fast as using plain normal equations.
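The splitting of variables can be illustrated on a generic problem with the same Jacobian shape: each data point contributes a residual pair that depends on that point's own scalar location parameter plus parameters shared by all points, so J = [A | B] with A block diagonal (2x1 blocks). Eliminating the per-point variables leaves a small Schur-complement system in the shared parameters. A dense Python/numpy sketch of this (my own simplification of the paper's sparse setting; all names are mine):

```python
import numpy as np

def split_gn_direction(a, B, r):
    """Gauss-Newton direction for J = [A | B], where A is block diagonal
    with 2x1 blocks a[i].  Solves J^T J d = -J^T r by eliminating the
    per-point variables dt and solving a Schur-complement system for dc."""
    n = a.shape[0]
    Bp = B.reshape(n, 2, -1)                  # per-point rows of B
    rp = r.reshape(n, 2)
    s = np.einsum('ij,ij->i', a, a)           # A^T A (diagonal entries)
    AtB = np.einsum('ij,ijk->ik', a, Bp)      # A^T B
    Atr = np.einsum('ij,ij->i', a, rp)        # A^T r
    S = B.T @ B - AtB.T @ (AtB / s[:, None])  # Schur complement
    dc = np.linalg.solve(S, -(B.T @ r - AtB.T @ (Atr / s)))
    dt = -(Atr + AtB @ dc) / s                # back-substitute per point
    return dt, dc
```

The reduced system has only as many unknowns as there are shared parameters, which is the source of the efficiency gain over forming the full normal equations.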

  • 5.
    Bergström, Per (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science); Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science); Söderkvist, Inge (Luleå University of Technology, Department of Engineering Sciences and Mathematics)
    Repeated surface registration for on-line use. 2011. In: The International Journal of Advanced Manufacturing Technology, ISSN 0268-3768, E-ISSN 1433-3015, Vol. 54, no 5-8, p. 677-689. Article in journal (Refereed)
    Abstract [en]

    We consider the problem of matching sets of 3D points from a measured surface to the surface of a corresponding computer-aided design (CAD) object. The problem arises in the production line, where the shape of the produced items is to be compared on-line with their prescribed shape. The involved registration problem is solved using the iterative closest point (ICP) method. In order to make it suitable for on-line use, i.e., make it fast, we pre-process the surface representation of the CAD object. A data structure for this purpose is proposed and named the Distance Varying Grid tree. It is based on a regular grid that encloses points sampled from the CAD surfaces. Additional finer grids are added to the vertices in the grid that are close to the sampled points. The structure is efficient since it exploits the fact that the sampled points are distributed on surfaces, and it provides fast identification of the sampled point that is closest to a measured point. A local linear approximation of the surface is used for improving the accuracy. Experiments are done on items produced for the body of a car. The experiments show that it is possible to reach good accuracy in the registration and to decrease the computational time by a factor of 700 compared with using the common kd-tree structure.
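The idea of replacing a kd-tree with a grid-based lookup can be illustrated with a plain uniform-grid index. This is a toy Python/numpy sketch, not the paper's Distance Varying Grid tree: it has no refined sub-grids or local surface approximation, and the expanding search is deliberately naive:

```python
import numpy as np
from collections import defaultdict

class GridIndex:
    """Uniform-grid index for closest-point queries (toy sketch)."""
    def __init__(self, points, cell):
        self.points = np.asarray(points, dtype=float)
        self.cell = float(cell)
        self.buckets = defaultdict(list)
        for i, p in enumerate(self.points):
            self.buckets[tuple((p // self.cell).astype(int))].append(i)

    def closest(self, q):
        """Return index and distance of the stored point closest to q."""
        q = np.asarray(q, dtype=float)
        base = (q // self.cell).astype(int)
        best_i, best_d = -1, np.inf
        r = 0
        while True:
            r += 1
            # scan all cells within Chebyshev distance r of the query cell
            for off in np.ndindex(*(2 * r + 1,) * len(base)):
                key = tuple(base + np.asarray(off) - r)
                for i in self.buckets.get(key, ()):
                    d = float(np.linalg.norm(self.points[i] - q))
                    if d < best_d:
                        best_i, best_d = i, d
            # any unscanned cell lies at least (r - 1) * cell away from q,
            # so the current best is provably the nearest point
            if best_d <= (r - 1) * self.cell:
                return best_i, best_d
```

Bucketing by cell makes each query cost depend only on the local point density, which is the property the paper's structure exploits (and then sharpens with finer grids near the sampled surface).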

  • 6.
    Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science)
    A software package for sparse orthogonal factorization and updating. 2002. In: ACM Transactions on Mathematical Software, ISSN 0098-3500, E-ISSN 1557-7295, Vol. 28, no 4, p. 448-482. Article in journal (Refereed)
    Abstract [en]

    Although there is good software for sparse QR factorization, there is little support for updating and downdating, something that is absolutely essential in some linear programming algorithms, for example. This article describes an implementation of sparse LQ factorization, including block triangularization, approximate minimum degree ordering, symbolic factorization, multifrontal factorization, and updating and downdating. The factor Q is not retained. The updating algorithm expands the nonzero pattern of the factor L, which is reflected in the dynamic representation of L. The block triangularization is used as an "ordering for sparsity" rather than as a prerequisite for block backward substitution. In the symbolic factorization, something called "element counters" is introduced to reduce the overestimation of the number of nonzeros produced by the commonly used methods. Both the approximate minimum degree ordering and the symbolic factorization are done without explicitly forming the nonzero pattern of the symmetric matrix in the corresponding normal equations. Tests show that the average time used for a single update or downdate is essentially the same as the time used for a single forward or backward substitution. Other parts of the implementation show the same range of performance as existing code, but cannot be replaced by it because of the special character of the systems that are solved.
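Since the retained factor of an LQ factorization A = LQ satisfies L L^T = A A^T, appending a row v^T to A amounts to a rank-one update of a triangular factor. The dense textbook version of such an update, done with a sweep of Givens-like rotations, can be sketched as follows (a toy Python/numpy analogue; the package itself works on a sparse L whose nonzero pattern grows during the update):

```python
import numpy as np

def cholupdate(L, v):
    """Return L2 with L2 @ L2.T = L @ L.T + outer(v, v), for
    lower-triangular L, via a sweep of Givens-like rotations."""
    L, v = L.copy(), np.asarray(v, dtype=float).copy()
    n = len(v)
    for k in range(n):
        r = np.hypot(L[k, k], v[k])      # rotate (L[k,k], v[k]) onto (r, 0)
        c, s = r / L[k, k], v[k] / L[k, k]
        L[k, k] = r
        if k + 1 < n:
            # apply the rotation to the rest of column k and to v
            L[k + 1:, k] = (L[k + 1:, k] + s * v[k + 1:]) / c
            v[k + 1:] = c * v[k + 1:] - s * L[k + 1:, k]
    return L
```

Downdating (removing a row) follows the same pattern with hyperbolic rotations and is the numerically delicate direction, which is why the abstract mentions a fallback to full refactorization when it fails.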

  • 7.
    Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science)
    CMregr - A Matlab software package for finding CM-Estimates for Regression. 2004. In: Journal of Statistical Software, ISSN 1548-7660, Vol. 10, no 3, p. 1-11. Article in journal (Refereed)
    Abstract [en]

    This paper describes how to use the Matlab software package CMregr, and also gives some limited information on the CM-estimation problem itself. For detailed information on the algorithms used in CMregr, as well as extensive testing, please refer to Arslan, Edlund & Ekblom (2002) and Edlund & Ekblom (2004).

  • 8.
    Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science)
    Linear M-estimation algorithms in optimization. 1996. Licentiate thesis, comprehensive summary (Other academic)
  • 9.
    Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science)
    Linear M-estimation with bounded variables. 1997. In: BIT Numerical Mathematics, ISSN 0006-3835, E-ISSN 1572-9125, Vol. 37, no 1, p. 13-23. Article in journal (Refereed)
    Abstract [en]

    A subproblem in the trust region algorithm for non-linear M-estimation by Ekblom and Madsen is to find the restricted step. It is found by calculating the M-estimator of the linearized model, subject to an L2-norm bound on the variables. In this paper it is shown that this subproblem can be solved by applying Hebden iterations to the minimizer of the Lagrangian function. The new method is compared with an augmented Lagrangian implementation.
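The Hebden iteration referred to here can be sketched on the generic norm-constrained subproblem: find lambda >= 0 such that the Lagrangian minimizer p(lambda) = -(B + lambda*I)^{-1} g satisfies ||p(lambda)|| = delta. Newton's method is applied to phi(lambda) = 1/||p(lambda)|| - 1/delta, which is nearly linear in lambda. A dense Python/numpy sketch under the assumption that B is positive definite (my own notation, not the paper's implementation):

```python
import numpy as np

def hebden_lambda(B, g, delta, lam=0.0, tol=1e-10, max_iter=50):
    """Find lam >= 0 with ||p(lam)|| = delta, p(lam) = -(B + lam I)^{-1} g,
    by Newton's method on phi(lam) = 1/||p|| - 1/delta (Hebden's iteration)."""
    n = len(g)
    I = np.eye(n)
    for _ in range(max_iter):
        p = np.linalg.solve(B + lam * I, -g)
        norm_p = np.linalg.norm(p)
        if lam == 0.0 and norm_p <= delta:
            return lam, p               # unconstrained minimizer is feasible
        if abs(norm_p - delta) <= tol * delta:
            return lam, p
        q = np.linalg.solve(B + lam * I, p)
        phi = 1.0 / norm_p - 1.0 / delta
        dphi = (p @ q) / norm_p**3      # phi'(lam), positive for PD systems
        lam = max(lam - phi / dphi, 0.0)
    return lam, p
```

Because phi is nearly linear in lambda, a handful of Newton steps typically suffice, each costing one linear solve with the shifted matrix.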

  • 10.
    Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science)
    Solution of linear programming and non-linear regression problems using linear M-estimation methods. 1999. Doctoral thesis, comprehensive summary (Other academic)
    Abstract [en]

    This thesis is devoted to algorithms for solving two optimization problems using linear M-estimation methods, and to their implementation. First, an algorithm for the non-linear M-estimation problem is considered. The main idea of the algorithm is to linearize the residual function in each iteration and thus calculate the iteration step by solving a linear M-estimation problem. A 2-norm bound on the variables restricts the step size, to guarantee convergence. The other algorithm solves the dual linear programming problem by making a "smooth" approximation of edges and inequality constraints using quadratic functions, thus making it possible to use Newton's method to find the optimal solution. The quadratic approximation of the inequality constraints makes it a penalty function algorithm. The implementation uses sparse matrix techniques. Since it is an active set method, it is possible to reuse the old factor when calculating the new step, by up- and downdating the old factor. Only occasionally, when the downdating fails, does the factor instead have to be found with a sparse multifrontal LQ factorization.

  • 11.
    Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science); Ekblom, Håkan
    Algorithms for robustified error-in-variables problems. 1998. In: COMPSTAT [1998]: proceedings in computational statistics; 13th symposium held in Bristol, Great Britain, 1998; with 61 tables / [ed] Roger Payne; Peter Green. Heidelberg: Physica-Verlag Rudolf Liebig GmbH, 1998, p. 293-298. Conference paper (Refereed)
    Abstract [en]

    From the introduction: We consider the problem of fitting a model of the form $y=f(x,\beta)$ to a set of points $(x_i,y_i)$, $i=1,\dots,n$. If there are measurement or observation errors in $x$ as well as in $y$, we have the so-called errors-in-variables problem with model equation $$y_i=f(x_i+\delta_i,\beta)+\varepsilon_i,\quad i=1,\dots,n,\tag{1}$$ where $\delta_i\in\mathbb{R}^m$, $i=1,\dots,n$, are the errors in $x_i\in\mathbb{R}^m$. The problem is then to find a vector of parameters $\beta\in\mathbb{R}^p$ that minimizes the errors $\varepsilon_i$ and $\delta_i$ in some loss function subject to (1). We present algorithms using more robust alternatives to the least squares criterion. We will further discuss, from

  • 12.
    Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science); Ekblom, Håkan
    Computing the constrained M-estimates for regression. 2005. In: Computational Statistics & Data Analysis, ISSN 0167-9473, E-ISSN 1872-7352, Vol. 49, no 1, p. 19-32. Article in journal (Refereed)
    Abstract [en]

    Constrained M-estimates for regression have been previously proposed as an alternative class of robust regression estimators with high breakdown point and high asymptotic efficiency. They are closely related to S-estimates, and it is shown that in some cases the two will necessarily coincide. It has been difficult to use the CM-estimators in practice for two reasons: adequate computational methods have been lacking, and there has also been some confusion concerning the tuning parameters. Both of these problems are addressed; an updated table for the choice of a suitable parameter value is given, and an algorithm to compute CM-estimates for regression is presented. It can also be used to compute S-estimates. The computational problem to be solved is global optimization with an inequality constraint. The algorithm consists of two phases. The first phase finds suitable starting values for the local optimization; the second phase, the efficient finding of a local minimum, is described in more detail. A MATLAB implementation is generally available on the net. A Monte Carlo simulation is performed, using this code, to test the performance of the estimator as well as the algorithm.

  • 13.
    Edlund, Ove (Luleå University of Technology, Department of Engineering Sciences and Mathematics, Mathematical Science); Ekblom, Håkan; Madsen, Kaj (Institute of Mathematical Modelling, Technical University of Denmark)
    Algorithms for non-linear M-estimation. 1997. In: Computational Statistics (Zeitschrift), ISSN 0943-4062, E-ISSN 1613-9658, Vol. 12, no 3, p. 373-383. Article in journal (Refereed)