1 - 6 of 6
  • 1.
    Li, Yuangui
    et al.
    Department of Automation, Shanghai Jiaotong University.
    Lin, Chen
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Space Technology.
    Zhang, Weidong
    Department of Automation, Shanghai Jiaotong University.
    Improved sparse least-squares support vector machine classifiers (2006). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 69, no. 13-15, p. 1655-1658. Article in journal (Refereed)
    Abstract [en]

    The least-squares support vector machine (LS-SVM) can be obtained by solving a simpler optimization problem than that of the standard support vector machine (SVM). Its shortcoming is the loss of sparseness, which usually results in slow testing speed. Several pruning methods have been proposed, and it is found that these methods can be further improved for classification problems. In this paper, a different reduced training set is selected to re-train the LS-SVM, and a new procedure is then proposed to obtain sparseness. The performance of the proposed method is compared with that of other typical methods, and the results indicate that it is more effective.
    (An illustrative code sketch related to this record follows the result list.)

  • 2.
    Lin, Di
    et al.
    School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Tang, Yu
    School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu.
    Yao, Yuanzhe
    School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu.
    Neural Networks for Computer-Aided Diagnosis in Medicine: a review (2016). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 216, p. 700-708. Article in journal (Refereed)
    Abstract [en]

    This survey provides an overview of the most recent applications of neural networks to computer-aided medical diagnosis (CAMD) over the past decade. CAMD can facilitate the automation of decision making and the extraction and visualization of complex characteristics for clinical diagnosis purposes. Over the past decade, neural networks have attracted considerable research interest and are widely employed in complex CAMD systems across diverse clinical application domains, such as disease detection, disease classification, and testing the compatibility of new drugs. Overall, this paper reviews the state of the art of neural networks for CAMD. It helps readers understand the topic by summarizing the findings of recent academic papers and by presenting a few open issues for developing research on this topic.

  • 3.
    Makkie, Milad
    et al.
    Computer Science Department, University of Georgia, Athens, GA, USA.
    Huang, Heng
    School of Automation, Northwestern Polytechnical University, Xi'an, China.
    Zhao, Yu
    Computer Science Department, University of Georgia, Athens, GA, USA.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Liu, Tianming
    Harvard Center for Neurodegeneration and Repair, Boyd GSRC 420, Athens, GA 30602, United States.
    Fast and Scalable Distributed Deep Convolutional Autoencoder for fMRI Big Data Analytics (2019). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 325, p. 20-30. Article in journal (Refereed)
    Abstract [en]

    In recent years, analyzing task-based fMRI (tfMRI) data has become an essential tool for understanding brain function and networks. However, due to the sheer size of tfMRI data, its intrinsically complex structure, and the lack of ground truth for the underlying neural activities, modeling tfMRI data is challenging. Previously proposed data modeling methods, including Independent Component Analysis (ICA) and Sparse Dictionary Learning, provide only shallow models based on blind source separation, under the strong assumption that the original fMRI signals can be linearly decomposed into time-series components with corresponding spatial maps. Given the success of Convolutional Neural Networks (CNN) in learning hierarchical abstractions from low-level data such as tfMRI time series, in this work we propose a novel scalable distributed deep CNN autoencoder model and apply it to fMRI big data analysis. The model aims both to learn the complex hierarchical structure of tfMRI big data and to leverage the processing power of multiple GPUs in a distributed fashion. To deploy the model, we have created an enhanced processing pipeline on top of Apache Spark and TensorFlow, leveraging a large cluster of GPU nodes in the cloud. Experimental results from applying the model to Human Connectome Project (HCP) data show that it is efficient and scalable for tfMRI big data modeling and analytics, thus enabling data-driven extraction of hierarchical neuroscientific information from massive fMRI data.
    (An illustrative code sketch related to this record follows the result list.)

  • 4.
    Ramadan, Rabie A.
    et al.
    Department of Computer Engineering, Cairo University, Egypt and Hail University.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Brain Computer Interface: Control Signals Review (2017). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 223, p. 26-44. Article in journal (Refereed)
    Abstract [en]

    A Brain Computer Interface (BCI) is defined as a combination of hardware and software that allows brain activity to control external devices or even computers. Research in this field has attracted academia and industry alike. The objective is to help severely disabled people live their lives as normally as possible; some of these disabilities are categorized as neurological or neuromuscular disorders. A BCI system goes through many phases, including preprocessing, feature extraction, signal classification, and finally control. A large body of research exists for each phase, which can confuse researchers and BCI developers. This article reviews the state-of-the-art work in the field of BCI, with a main focus on brain control signals, their types, and their classifications. In addition, it surveys current BCI technology in terms of hardware and software, describing the most commonly used BCI devices and explaining the most widely used software platforms. Finally, BCI challenges and future directions are stated. Due to limited space and the large body of literature in the field of BCI, two further review articles are planned: one reviewing up-to-date BCI algorithms and techniques for signal processing, feature extraction, signal classification, and control, and another dedicated to BCI systems and applications. The three articles are written as a base and guideline for researchers and developers pursuing work in the field of BCI.

  • 5.
    Tang, Rui
    et al.
    Department of Computer and Information Science, University of Macau.
    Fong, Simon
    Department of Computer and Information Science, University of Macau.
    Deb, Suash
    INNS India Regional Chapter.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Millham, Richard C.
    Department of Information Technology, Durban University of Technology.
    Dynamic Group Optimisation Algorithm for Training Feed-Forward Neural Networks (2018). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 314, p. 1-19. Article in journal (Refereed)
    Abstract [en]

    Feed-forward neural networks are efficient at solving various types of problems; however, finding efficient training algorithms for them is challenging. The dynamic group optimisation (DGO) algorithm is a recently proposed half-swarm, half-evolutionary algorithm that exhibits a rapid convergence rate and good performance in searching the solution space and avoiding local optima. In this paper, we propose a new hybrid algorithm, FNNDGO, which integrates the DGO algorithm into a feed-forward neural network. DGO plays an optimisation role in training the network, tuning parameters to their optimal values and configuring the structure of the feed-forward neural network. The performance of the proposed algorithm was evaluated by comparing it with other training methods on two types of problems. The experimental results show that the proposed algorithm exhibits promising performance for solving real-world problems.
    (An illustrative code sketch related to this record follows the result list.)

  • 6.
    Zhou, Lina
    et al.
    Information Systems Department, UMBC, Baltimore.
    Pan, Shimei
    Information Systems Department, UMBC, Baltimore.
    Wang, Jianwu
    Information Systems Department, UMBC, Baltimore.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Machine Learning on Big Data: Opportunities and Challenges (2017). In: Neurocomputing, ISSN 0925-2312, E-ISSN 1872-8286, Vol. 237, p. 350-361. Article in journal (Refereed)
    Abstract [en]

    Machine learning (ML) is continuously unleashing its power in a wide range of applications. It has been pushed to the forefront in recent years partly owing to the advent of big data. ML algorithms have never been so promising, nor so challenged, as they are by big data. Big data enables ML algorithms to uncover more fine-grained patterns and make more timely and accurate predictions than ever before; on the other hand, it presents major challenges to ML, such as model scalability and distributed computing. In this paper, we introduce a framework of ML on big data (MLBiD) to guide the discussion of its opportunities and challenges. The framework is centered on ML, which follows the phases of preprocessing, learning, and evaluation. In addition, the framework comprises four other components, namely big data, user, domain, and system. The phases of ML and the components of MLBiD provide directions for identifying the associated opportunities and challenges, and they open up future work in many unexplored or underexplored research areas.
    (An illustrative code sketch related to this record follows the result list.)

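Record 1 above turns on the fact that LS-SVM classification reduces training to a single linear system, which is also why every training sample keeps a nonzero support value and sparseness is lost. The minimal Python sketch below solves that standard linear system and then retrains on a reduced set chosen by a naive keep-the-largest-|alpha| rule; the RBF kernel width, the regularization parameter gamma, and that pruning rule are illustrative assumptions, not the paper's improved selection of the reduced training set.

    # Minimal LS-SVM sketch: train by solving the standard dual linear
    # system, then prune naively and retrain. Hyperparameters and the
    # pruning rule are assumptions for illustration only.
    import numpy as np

    def rbf_kernel(X1, X2, sigma=1.0):
        # Gaussian (RBF) kernel matrix between two sample sets.
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))

    def train_lssvm(X, y, gamma=10.0, sigma=1.0):
        # Solve  [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1],
        # where Omega_ij = y_i * y_j * K(x_i, x_j).
        n = len(y)
        Omega = np.outer(y, y) * rbf_kernel(X, X, sigma)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = y
        A[1:, 0] = y
        A[1:, 1:] = Omega + np.eye(n) / gamma
        rhs = np.concatenate(([0.0], np.ones(n)))
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]              # bias b, support values alpha

    def predict(Xtr, ytr, b, alpha, Xte, sigma=1.0):
        # The decision function uses all retained training samples.
        return np.sign(rbf_kernel(Xte, Xtr, sigma) @ (alpha * ytr) + b)

    def prune_and_retrain(X, y, keep=0.5, gamma=10.0, sigma=1.0):
        # Naive sparseness: keep the samples with the largest |alpha|, retrain.
        _, alpha = train_lssvm(X, y, gamma, sigma)
        idx = np.argsort(-np.abs(alpha))[: int(keep * len(y))]
        b2, alpha2 = train_lssvm(X[idx], y[idx], gamma, sigma)
        return X[idx], y[idx], b2, alpha2
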
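Record 3 describes a deep convolutional autoencoder over tfMRI time series, deployed over Apache Spark and TensorFlow. The sketch below covers only the single-node model part: a small 1-D convolutional autoencoder in Keras. The window length, layer widths, and kernel sizes are assumed values for illustration, and the Spark-based distribution across GPU nodes is not reproduced.

    # A small 1-D convolutional autoencoder over fMRI time-series windows.
    # Shapes and layer sizes are illustrative assumptions.
    import tensorflow as tf
    from tensorflow.keras import layers, models

    T = 284                                   # assumed time-series window length
    inputs = tf.keras.Input(shape=(T, 1))

    # Encoder: stacked 1-D convolutions with downsampling.
    x = layers.Conv1D(32, 9, padding="same", activation="relu")(inputs)
    x = layers.MaxPooling1D(2, padding="same")(x)
    x = layers.Conv1D(16, 9, padding="same", activation="relu")(x)
    encoded = layers.MaxPooling1D(2, padding="same")(x)

    # Decoder: mirror of the encoder with upsampling.
    x = layers.Conv1D(16, 9, padding="same", activation="relu")(encoded)
    x = layers.UpSampling1D(2)(x)
    x = layers.Conv1D(32, 9, padding="same", activation="relu")(x)
    x = layers.UpSampling1D(2)(x)
    decoded = layers.Conv1D(1, 9, padding="same", activation="linear")(x)

    autoencoder = models.Model(inputs, decoded)
    autoencoder.compile(optimizer="adam", loss="mse")
    # autoencoder.fit(signals, signals, epochs=10, batch_size=256)
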
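Record 5 trains a feed-forward network by letting a metaheuristic tune the weights instead of gradient descent. The sketch below shows the general pattern only: the network's weights are flattened into one vector and searched by a plain (1+lambda)-style mutation loop. That stand-in optimiser is an assumption for illustration; the actual dynamic group optimisation (DGO) update rules and the structural configuration described in the paper are not reproduced.

    # Training a tiny feed-forward regression network by searching its flat
    # weight vector with a simple population-based loop (a stand-in for DGO).
    import numpy as np

    def unpack(theta, n_in, n_hid, n_out):
        # Split a flat parameter vector into weight matrices and biases.
        i = 0
        W1 = theta[i:i + n_in * n_hid].reshape(n_in, n_hid); i += n_in * n_hid
        b1 = theta[i:i + n_hid]; i += n_hid
        W2 = theta[i:i + n_hid * n_out].reshape(n_hid, n_out); i += n_hid * n_out
        b2 = theta[i:i + n_out]
        return W1, b1, W2, b2

    def loss(theta, X, y, n_in, n_hid, n_out):
        # Mean squared error of the network defined by theta.
        W1, b1, W2, b2 = unpack(theta, n_in, n_hid, n_out)
        h = np.tanh(X @ W1 + b1)
        out = h @ W2 + b2
        return np.mean((out.ravel() - y) ** 2)

    def train(X, y, n_hid=8, pop=30, iters=200, sigma=0.1, seed=0):
        rng = np.random.default_rng(seed)
        n_in, n_out = X.shape[1], 1
        dim = n_in * n_hid + n_hid + n_hid * n_out + n_out
        best = rng.normal(0.0, 0.5, dim)
        best_loss = loss(best, X, y, n_in, n_hid, n_out)
        for _ in range(iters):
            # Mutate the current best into a population and keep the winner.
            cand = best + sigma * rng.normal(size=(pop, dim))
            losses = [loss(c, X, y, n_in, n_hid, n_out) for c in cand]
            j = int(np.argmin(losses))
            if losses[j] < best_loss:
                best, best_loss = cand[j], losses[j]
        return best, best_loss
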
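Record 6 organizes ML on big data around three phases: preprocessing, learning, and evaluation. The sketch below maps those three phases onto a single-node scikit-learn pipeline; the synthetic dataset, the scaler, and the SGD classifier are illustrative assumptions, and nothing here addresses the distributed-computing or scalability challenges the paper is actually about.

    # The three MLBiD phases as one scikit-learn pipeline (illustrative only).
    from sklearn.datasets import make_classification
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

    pipeline = Pipeline([
        ("preprocessing", StandardScaler()),          # phase 1: preprocessing
        ("learning", SGDClassifier(random_state=0)),  # phase 2: learning
    ])

    # Phase 3: evaluation via cross-validated accuracy.
    scores = cross_val_score(pipeline, X, y, cv=5, scoring="accuracy")
    print("mean accuracy: %.3f" % scores.mean())
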