  • 251.
    Chaltseva, Anna
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Network state estimation in wireless multi-hop networks (2012). Licentiate thesis, comprehensive summary (Other academic)
    Abstract [en]

    Multi-hop wireless networks in general, and those built upon the IEEE 802.11 standard in particular, are known for their highly dynamic and unstable performance. The commonly accepted way of improving the situation is to jointly optimize the performance of protocols across different communication layers. Being able to characterize the state of the network is essential to enable such cross-layer optimization. This licentiate thesis investigates methods for passive characterization of the network state at the medium access control and transport layers based on information accessible from the corresponding layers below. Firstly, the thesis investigates the possibility of characterizing traffic intensity relying solely on statistics of measurements from the physical layer. An advantage of this method is that it does not require decoding of the captured packets, thereby accounting for the effect of long-range interference introduced by transmissions at the border of the communication range of a receiver. Secondly, the question of predicting TCP throughput over a multi-hop wireless path is addressed. The proposed predictor is a practically usable function of statistically significant parameters at the transport, medium access control and physical layers. The presented model is able to predict the TCP throughput with 99% accuracy, which provides an essential input for various cross-layer optimization processes. Finally, during the course of the experimental work, issues concerning the accuracy of simulation-based modelling of communication processes were investigated. The thesis is concluded by a comparative study of the performance characteristics measured in a single-channel multi-hop wireless network test-bed and the corresponding measurements obtained from the popular network simulators ns-2 and ns-3 when configured with identical settings. The thesis evaluates the mismatch between the results obtained in the test-bed and in the simulators with their standard empirical radio models.

  • 252.
    Chaltseva, Anna
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Empirical cross-layer model of TCP throughput in multihop wireless chain (2010). Report (Other academic)
    Abstract [en]

    Analysis of TCP throughput in multihop wireless networks is a continuously important research topic. Yet a neat and practically useful formula for the TCP transfer rate, similar to the macroscopic model of TCP in the Internet but capturing the cross-layer dependencies, is unavailable for wireless networks. In this paper we statistically analyze the significance of parameters at the physical, MAC and transport layers in multihop wireless chains and derive a practically usable cross-layer throughput formula. The resulting model allows estimation of the throughput with less than 2% error.

  • 253.
    Chaltseva, Anna
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Empirical predictor of TCP throughput on a multihop wireless path (2010). In: Smart spaces and next generation Wired/Wireless networking: third Conference on Smart Spaces, ruSMART 2010, and 10th international conference, NEW2AN 2010, St. Petersburg, Russia, August 23-25, 2010; proceedings / [ed] Sergey Balandin; Roman Dunaytsev; Yevgeni Koucheryavy, Springer Verlag, 2010, p. 323-334. Conference paper (Refereed)
    Abstract [en]

    This paper addresses the question of predicting TCP throughput over a multihop wireless path. Since such prediction is useful for a variety of applications, it is desirable that a TCP throughput prediction technique introduce low overhead while avoiding active measurements. Analytical derivation of a throughput predictor for multihop wireless networks is difficult, if not impossible, due to complex cross-layer dependencies. In this article we statistically analyze the significance of parameters at the physical, MAC and transport layers in a multihop wireless chain and empirically derive a practically usable throughput predictor. The resulting model allows prediction of the throughput with less than 2% error.
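
    A minimal sketch of the kind of empirical cross-layer predictor described above, assuming hypothetical per-path features (mean SNR, MAC retransmission rate, RTT) and synthetic data; an ordinary least-squares fit stands in for the paper's statistical analysis, and none of the coefficients or feature choices are taken from the paper.

        # Hedged sketch: fit an empirical cross-layer TCP throughput predictor by
        # ordinary least squares. Feature names, data and coefficients are hypothetical.
        import numpy as np

        rng = np.random.default_rng(0)

        # Hypothetical per-path samples: [mean SNR (dB), MAC retransmission rate, RTT (ms)]
        X = rng.uniform([10.0, 0.0, 5.0], [30.0, 0.5, 80.0], size=(200, 3))
        # Synthetic "measured" throughput (Mbit/s), only to make the sketch runnable.
        y = 0.15 * X[:, 0] - 4.0 * X[:, 1] - 0.02 * X[:, 2] + 2.0 + rng.normal(0, 0.2, 200)

        # Least-squares fit of throughput = w . features + b
        A = np.hstack([X, np.ones((len(X), 1))])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)

        def predict_throughput(snr_db, mac_retx_rate, rtt_ms):
            """Predict TCP throughput (Mbit/s) from cross-layer measurements."""
            return float(np.dot(coef, [snr_db, mac_retx_rate, rtt_ms, 1.0]))

        print("fitted coefficients:", np.round(coef, 3))
        print("example prediction:", round(predict_throughput(25.0, 0.1, 20.0), 2), "Mbit/s")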

  • 254.
    Chaltseva, Anna
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    On passive characterization of aggregated traffic in wireless networks (2011). Report (Other academic)
    Abstract [en]

    We present a practical measurement-based model of aggregated traffic intensity on a microsecond time scale for wireless networks. The model allows estimating the traffic intensity for the period of time required to transmit data structures of different sizes (short control frames and a data packet of the maximum size). The presented model opens a possibility to mitigate the effect of interference in the network by optimizing the communication parameters of the MAC layer (e.g. the size of the contention window, the retransmission strategy, etc.) for the forthcoming transmission, in order to minimize the packet collision probability and further increase the network's capacity. We also discuss issues and challenges associated with PHY-layer characterization of the network state.

  • 255.
    Chaltseva, Anna
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Osipov, Evgeny
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    On passive characterization of aggregated traffic in wireless networks (2012). In: Wired/Wireless Internet Communication: 10th International Conference, WWIC 2012, Proceedings, New York: Springer Verlag, 2012, p. 282-289. Conference paper (Refereed)
    Abstract [en]

    We present a practical measurement-based characterization of the aggregated traffic on a microsecond time scale in wireless networks. The model allows estimating the channel utilization for the period of time required to transmit data structures of different sizes (short control frames and a data packet of the maximum size). The presented model opens a possibility to mitigate the effect of interference in the network by optimizing the communication parameters of the MAC layer (e.g. the size of the contention window, the retransmission strategy, etc.) for the forthcoming transmission. The article discusses issues and challenges associated with the PHY-layer characterization of the network state.
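
    As a rough, self-contained illustration of the passive PHY-only idea above, the sketch below estimates the channel busy fraction over windows matched to different frame airtimes by thresholding raw received-power samples; the sampling period, power threshold, airtimes and the synthetic RSSI trace are placeholders, not the actual parameters of the presented model.

        # Hedged sketch: estimate channel utilization on short time scales from raw
        # RSSI samples, without decoding any packets. All numbers are hypothetical.
        import numpy as np

        rng = np.random.default_rng(1)
        SAMPLE_PERIOD_US = 2.0          # assumed RSSI sampling period (microseconds)
        BUSY_THRESHOLD_DBM = -82.0      # assumed carrier-sense-like power threshold

        # Synthetic RSSI trace: mostly noise floor with bursts of channel activity.
        rssi = rng.normal(-95.0, 2.0, 50_000)
        for start in rng.integers(0, 45_000, size=60):
            rssi[start:start + rng.integers(50, 600)] = rng.normal(-70.0, 3.0)

        busy = rssi > BUSY_THRESHOLD_DBM

        def busy_fraction(window_us):
            """Mean busy fraction over consecutive windows of length window_us."""
            w = max(1, int(window_us / SAMPLE_PERIOD_US))
            n = len(busy) // w
            return float(busy[: n * w].reshape(n, w).mean(axis=1).mean())

        # Windows sized to the airtime of a short control frame vs. a maximum-size packet.
        for label, airtime_us in [("control-frame window", 44), ("max-packet window", 1500)]:
            print(f"{label}: mean busy fraction = {busy_fraction(airtime_us):.2f}")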

  • 256.
    Channakeshava, Karthik
    et al.
    Virginia Tech, Blacksburg, Virginia.
    Phanse, Kaustubh
    DaSilva, Luiz A.
    Virginia Tech, Blacksburg, Virginia.
    Ravindran, Binoy
    Virginia Tech, Blacksburg, Virginia.
    Midkiff, Scott F.
    Virginia Tech, Blacksburg, Virginia.
    Jensen, E. Douglas
    MITRE Corporation, Bedford, MD.
    IP quality of service support for soft real-time applications (2005). In: Proceedings - RTN 2005, 4th Intl. Workshop on Real-Time Networks (formerly RTLIA): Palma de Mallorca, Spain, July 5, 2005; satellite event to ECRTS 2005, 6-8 July, Palma de Mallorca, Spain / [ed] Jörg Kaiser, Magdeburg: Otto-von-Guericke Univ, 2005. Conference paper (Refereed)
    Abstract [en]

    To obtain acceptable timeliness performance for emerging large-scale distributed real-time control applications operating in large IP internetworks, a scalable quality of service (QoS) architecture is needed. In this paper, we propose a scalable QoS architecture (abbreviated as RTQoS) in support of real-time systems, one that implements real-time scheduling at end-hosts and stateless QoS in the core routers. We address challenges and explore potential benefits achieved by integrating network services with real-time systems, through the use of a network testbed. Experimental evaluation demonstrates the RTQoS architecture as a promising approach for soft real-time applications that are subject to time/utility function time constraints and utility accrual optimality criteria.

  • 257.
    Chatterjee, Santanu
    et al.
    Research Center Imarat, Defence Research and Development Organization, Hyderabad.
    Roy, Sandip
    Department of Computer Science and Engineering, Asansol Engineering College, Asansol.
    Kumar Das, Ashok
    Center for Security, Theory and Algorithmic Research, International Institute of Information Technology, Hyderabad.
    Chattopadhyay, Samiran
    Department of Information Technology, Jadavpur University, Salt Lake City.
    Kumar, Neeraj
    Department of Computer Science and Engineering, Thapar University, Patiala.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Secure Biometric-Based Authentication Scheme using Chebyshev Chaotic Map for Multi-Server Environment (2018). In: IEEE Transactions on Dependable and Secure Computing, ISSN 1545-5971, E-ISSN 1941-0018, Vol. 15, no 5, p. 824-839. Article in journal (Refereed)
    Abstract [en]

    Multi-server environments are the most common scenario for a large number of enterprise-class applications. In such environments, user registration at each server is not recommended. Using a multi-server authentication architecture, a user can authenticate to various servers using a single identity and password. We introduce a new authentication scheme for multi-server environments using the Chebyshev chaotic map. In our scheme, we use the Chebyshev chaotic map and biometric verification along with password verification for authorization and access to various application servers. The proposed scheme is lightweight compared to other related schemes. We only use the Chebyshev chaotic map, a cryptographic hash function and symmetric key encryption/decryption in the proposed scheme. Our scheme provides strong authentication, supports biometrics and password change by a legitimate user at any time locally, and supports a dynamic server addition phase. We perform formal security verification using the broadly accepted AVISPA (Automated Validation of Internet Security Protocols and Applications) tool to show that the presented scheme is secure. In addition, we carry out a formal security analysis using Burrows-Abadi-Needham (BAN) logic along with the random oracle model and prove that our scheme is secure against different known attacks. High security and significantly low computation and communication costs make our scheme very suitable for multi-server environments compared to other existing related schemes.
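
    The scheme above builds on the Chebyshev chaotic map; purely as background, the sketch below shows the Chebyshev recurrence and the semigroup property T_r(T_s(x)) = T_s(T_r(x)) = T_rs(x) that enables Diffie-Hellman-style key agreement. Real schemes, including the one summarized here, use an enhanced map over a finite field together with hashing, biometrics and passwords; none of that protocol detail is reproduced in this toy illustration.

        # Hedged sketch: Chebyshev polynomial map and its semigroup property,
        # the algebraic ingredient behind Chebyshev-chaotic-map key agreement.
        def chebyshev(n, x):
            """Evaluate T_n(x) via the recurrence T_n = 2x*T_{n-1} - T_{n-2}."""
            t_prev, t_cur = 1.0, x          # T_0, T_1
            if n == 0:
                return t_prev
            for _ in range(n - 1):
                t_prev, t_cur = t_cur, 2.0 * x * t_cur - t_prev
            return t_cur

        x = 0.53            # public seed in [-1, 1]
        r, s = 7, 11        # toy private exponents of two parties

        # Each side applies its own map to the other's public value; both obtain T_{rs}(x).
        shared_a = chebyshev(r, chebyshev(s, x))
        shared_b = chebyshev(s, chebyshev(r, x))
        assert abs(shared_a - shared_b) < 1e-9
        print("shared value:", round(shared_a, 6))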

  • 258.
    Chen, Feng
    et al.
    Parallel Computing Laboratory, Institute of Software Chinese Academy of Sciences, Beijing.
    Deng, Pan
    Parallel Computing Laboratory, Institute of Software Chinese Academy of Sciences, Beijing.
    Wan, Jiafu
    School of Mechanical and Automotive Engineering, South China University of Technology, Guangzhou.
    Zhang, Daqiang
    School of Software Engineering, Tongji University, Shanghai.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Rong, Xiaohui
    Chinese Academy of Civil Aviation Science and Technology, Beijing.
    Data mining for the internet of things: Literature review and challenges (2015). In: International Journal of Distributed Sensor Networks, ISSN 1550-1329, E-ISSN 1550-1477, Vol. 2015, article id 431047. Article in journal (Refereed)
    Abstract [en]

    The massive data generated by the Internet of Things (IoT) are considered of high business value, and data mining algorithms can be applied to the IoT to extract hidden information from the data. In this paper, we give a systematic review of data mining from the knowledge view, the technique view, and the application view, including classification, clustering, association analysis, time series analysis and outlier analysis. The latest application cases are also surveyed. As more and more devices are connected to the IoT, large volumes of data must be analyzed, and the latest algorithms must be adapted to big data. We review these algorithms and discuss challenges and open research issues. Finally, a suggested big data mining system is proposed.

  • 259.
    Chen, Jialu
    et al.
    Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai, China.
    Zhou, Jun
    Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai, China.
    Cao, Zhenfu
    Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Computer Science and Technology, Fuzhou University, China.
    Dong, Xiaolei
    Shanghai Key Laboratory of Trustworthy Computing, East China Normal University, Shanghai, China.
    Choo, Kim-Kwang Raymond
    Department of Information Systems and Cyber Security, University of Texas at San Antonio, San Antonio, TX, USA.
    Lightweight Privacy-preserving Training and Evaluation for Discretized Neural Networks (2019). In: IEEE Internet of Things Journal, ISSN 2327-4662. Article in journal (Refereed)
    Abstract [en]

    Machine learning, particularly the neural network, is extensively exploited in dizzying applications. In order to reduce the burden of computing for resource-constrained clients, a large number of historical private datasets are required to be outsourced to the semi-trusted or malicious cloud for model training and evaluation. To achieve privacy preservation, most of the existing work either exploited the technique of public key fully homomorphic encryption (FHE) resulting in considerable computational cost and ciphertext expansion, or secure multiparty computation (SMC) requiring multiple rounds of interactions between user and cloud. To address these issues, in this paper, a lightweight privacy-preserving model training and evaluation scheme LPTE for discretized neural networks is proposed. Firstly, we put forward an efficient single key fully homomorphic data encapsulation mechanism (SFH-DEM) without exploiting public key FHE. Based on SFH-DEM, a series of atomic calculations over the encrypted domain including multivariate polynomial, nonlinear activation function, gradient function and maximum operations are devised as building blocks. Furthermore, a lightweight privacy-preserving model training and evaluation scheme LPTE for discretized neural networks is proposed, which can also be extended to convolutional neural network. Finally, we give the formal security proofs for dataset privacy, model training privacy and model evaluation privacy under the semi-honest environment and implement the experiment on real dataset MNIST for recognizing handwritten numbers in discretized neural network to demonstrate the high efficiency and accuracy of our proposed LPTE.

  • 260.
    Chen, Jingsen
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    A framework for constructing heap-like structures in-place (1993). In: Algorithms and Computation: 4th International Symposium, ISAAC '93, Hong Kong, December 15-17, 1993, Proceedings / [ed] K.W. Ng, Berlin: Springer Verlag, 1993, p. 118-127. Conference paper (Refereed)
    Abstract [en]

    Priority queues and double-ended priority queues are fundamental data types in computer science. Several implicit data structures have been proposed for implementing the queues, such as heaps, min-max heaps, deaps, and twin-heaps. Over the years the problem of constructing these heap-like structures has received much attention in the literature, but different structures possess different construction algorithms. In this paper, we present a uniform approach for building heap-like data structures in-place. The study is carried out by investigating the hardest instances of the problem and developing an algorithmic paradigm for the construction. Our paradigm produces comparison- and space-efficient construction algorithms for the heap-like structures, which improve over the previously known fast algorithms.
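
    For orientation only, here is a minimal sketch of the classical in-place bottom-up (Floyd-style) construction of one such implicit structure, the binary max-heap; it is the textbook baseline that uniform construction frameworks like the one above compete with, not the paper's own algorithm.

        # Hedged sketch: classic in-place bottom-up construction of a binary max-heap
        # (Floyd's method), shown as a baseline for heap-like structure construction.
        def sift_down(a, i, n):
            """Move a[i] down until the max-heap property holds in a[0:n]."""
            while True:
                left, right, largest = 2 * i + 1, 2 * i + 2, i
                if left < n and a[left] > a[largest]:
                    largest = left
                if right < n and a[right] > a[largest]:
                    largest = right
                if largest == i:
                    return
                a[i], a[largest] = a[largest], a[i]
                i = largest

        def build_heap(a):
            """Heapify in place in O(n) time by sifting down internal nodes bottom-up."""
            n = len(a)
            for i in range(n // 2 - 1, -1, -1):
                sift_down(a, i, n)
            return a

        data = [5, 3, 17, 10, 84, 19, 6, 22, 9]
        build_heap(data)
        assert all(data[(k - 1) // 2] >= data[k] for k in range(1, len(data)))
        print(data)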

  • 261.
    Chen, Jingsen
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    An efficient construction algorithm for a class of implicit double-ended priority queues (1995). In: Computer journal, ISSN 0010-4620, E-ISSN 1460-2067, Vol. 38, no 10, p. 818-821. Article in journal (Refereed)
    Abstract [en]

    Priority queues and double-ended priority queues are fundamental data types in computer science, and various data structures have been proposed to implement them. In particular, diamond deques, interval heaps, min-max-pair heaps, and twin-heaps provide implicit structures for double-ended priority queues. Although these heap-like structures are essentially the same when presented in an abstract manner, they possess different implementations and thus have different construction algorithms. In this paper, we present a fast algorithm for building these data structures. Our results improve over previously known fast algorithms.

  • 262.
    Chen, Jingsen
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Average cost to produce partial orders (1994). In: Algorithms and Computation: 5th International Symposium, ISAAC '94, Beijing, P. R. China, August 25-27, 1994, Proceedings / [ed] Ding-Zhu Du, Berlin: Springer Verlag, 1994, p. 155-163. Conference paper (Refereed)
    Abstract [en]

    We study the average-case complexity of partial order production. Any comparison-based algorithm for solving problems in computer science can be viewed as producing a partial order. The production of some specific partial orders, such as sorting, searching, and selection, has received much attention in the past decades. For arbitrary partial orders, very little is known about the inherent complexity of their production; in particular, no non-trivial average-case lower bounds were known. By combining information-theoretic lower bounds with adversary-based arguments, we present some non-trivial average-case lower bounds for the production of arbitrary partial orders. More precisely, we first demonstrate a counter-example to some intuition about lower bounds on partial order production, and then present some simple lower bounds. By utilizing adversaries originally constructed for deriving worst-case lower bounds, we prove non-trivial average-case lower bounds for partial order production. Our lower-bound techniques, mixing the information-theoretic and adversary-based approaches, are interesting in their own right, as are the lower-bound results obtained. Moreover, several conjectures concerning the production complexity of partial orders are answered. Motivated by the selection problem and by the design of efficient algorithms, we also investigate the average-case cost of producing many isomorphic copies of some partial order simultaneously.

  • 263.
    Chen, Jingsen
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Constructing priority queues and deques optimally in parallel (1992). In: IFIP transactions. Computer science and technology, ISSN 0926-5473, Vol. A12, p. 275-283. Article in journal (Refereed)
    Abstract [en]

    The author investigates the parallel complexity of constructing data structures that implement priority queues (viz. the heap) and double-ended priority queues (namely, the twin-heap, the min-max heap, and the deap). The study is carried out by developing cost-optimal parallel deterministic algorithms of time complexity Θ(log log n) and randomized algorithms of time complexity Θ(α(n)) with probability that converges rapidly to one on the parallel comparison tree model, where α(n) is the inverse Ackermann function. The author also describes constant-time parallel algorithms for the constructions with n^(1+ε) processors for any constant ε > 0 on the same model. Furthermore, the author designs optimal Θ(log n)-time EREW-PRAM algorithms for constructing double-ended priority queue structures.

  • 264.
    Chen, Jingsen
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Parallel heap construction using multiple selection (1994). In: Parallel Processing: CONPAR 94 - VAPP VI, Third Joint International Conference on Vector and Parallel Processing, Linz, Austria, September 6-8, 1994, Proceedings / [ed] Bruno Buchberger, Berlin: Springer Verlag, 1994, p. 371-380. Conference paper (Refereed)
    Abstract [en]

    We consider the problem of constructing data structures that implement priority queues (viz. the heap) and double-ended priority queues (namely, the twin-heap, the min-max heap, and the deap) quickly and optimally in parallel. Whereas all these heap-like structures can be built in linear sequential time, we show in this paper that the construction problem can be solved in O(log n · log* n / log log n) time using n · log log n / (log n · log* n) processors in the Arbitrary CRCW PRAM model. Moreover, by applying random sampling techniques, we reduce the construction time to O with probability ≥ 1 − n^(−c) for some constant c > 0. As a by-product, we also investigate the parallel complexity of the multiple selection problem. The problem is to select a subset of elements having specified ranks from a given set. We design optimal solutions to the problem with respect to various models of parallel computation.

  • 265.
    Chen, Jingsen
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Carlsson, Svante
    Luleå tekniska universitet.
    Parallel complexity of heaps and min-max heaps (1992). In: LATIN '92: 1st Latin American Symposium on Theoretical Informatics, São Paulo, Brazil, April 6-10, 1992, Proceedings / [ed] Imre Simon, Berlin: Springer Verlag, 1992, p. 108-116. Conference paper (Refereed)
    Abstract [en]

    We study parallel solutions to the problem of implementing priority queues and priority deques. It is known that data structures for the implementation (e.g., the heap, the min-max heap, and the deap) can be constructed in linear sequential time. In this paper, we design optimal Ω((log log n)²)-time parallel algorithms with n/(log log n)² processors for the constructions on the parallel comparison tree model. For building heaps in parallel, our algorithm improves the previous best result of Ω(log n) time with n/log n processors. For building min-max heaps and deaps, our algorithms are the first attempt to design cost-optimal parallel algorithms for constructing the data structures of the priority deque.

  • 266.
    Chen, Jingsen
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Carlsson, Svante
    Luleå tekniska universitet.
    Searching rigid data structures (1995). In: Computing and Combinatorics: First Annual International Conference, COCOON '95, Xi'an, China, August 24-26, 1995, Proceedings / [ed] Ding-Zhu Du, Springer Verlag, 1995, p. 446-451. Conference paper (Refereed)
    Abstract [en]

    We study the exact complexity of searching for a given element in a rigid data structure (i.e., an implicit data structure consistent with a fixed family of partial orders). In particular, we show how the ordering information available in the structure facilitates the search operation. Some general lower bounds on the search complexity are presented, which apply to concrete rigid data structures as well. Optimal search algorithms for certain rigid structures are also developed. Moreover, we consider a general problem of searching for a number of elements in a given set. Non-trivial lower bounds are derived and efficient search algorithms are constructed.

  • 267.
    Chen, Jingsen
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Edelkamp, Stefan
    Faculty 3-Mathematics and Computer Science, University of Bremen.
    Elmasry, Amr
    Department of Computer Science, University of Copenhagen.
    Katajainen, Jyrki
    Department of Computer Science, University of Copenhagen.
    In-place heap construction with optimized comparisons, moves, and cache misses (2012). In: Mathematical foundations of computer science 2012: 37th international symposium, MFCS 2012, Bratislava, Slovakia, August 27-31, 2012: proceedings / [ed] Branislav Rovan; Vladimiro Sassone; Peter Widmayer, Berlin: Springer Verlag, 2012, p. 259-270. Conference paper (Refereed)
    Abstract [en]

    We show how to build a binary heap in-place in linear time by performing ~ 1.625n element comparisons, at most ~ 2.125n element moves, and ~ n/B cache misses, where n is the size of the input array, B the capacity of the cache line, and ~ f(n) approaches f(n) as n grows. The same bound for element comparisons was derived and conjectured to be optimal by Gonnet and Munro; however, their procedure requires Θ(n) pointers and does not have optimal cache behaviour. Our main idea is to mimic the Gonnet-Munro algorithm by converting a navigation pile into a binary heap. To construct a binary heap in-place, we use this algorithm to build bottom heaps of size and adjust the heap order at the upper levels using Floyd's sift-down procedure. On another frontier, we compare different heap-construction alternatives in practice.
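
    As a hedged illustration of the comparison-saving idea mentioned above, the sketch below uses the well-known "bottom-up" sift-down variant: descend to a leaf along the larger-child path with one comparison per level, then climb back up to place the sifted element. The paper's actual construction (navigation piles, bottom heaps, cache-conscious layout) is not reproduced here.

        # Hedged sketch: comparison-saving sift-down for a binary max-heap. It first
        # descends to a leaf following the larger child (one comparison per level),
        # then climbs back up to find where the sifted element belongs.
        def sift_down_bounce(a, i, n):
            x = a[i]
            j = i
            # Phase 1: walk down to a leaf along the path of larger children.
            while 2 * j + 1 < n:
                c = 2 * j + 1
                if c + 1 < n and a[c + 1] > a[c]:
                    c += 1
                j = c
            # Phase 2: climb back up until a position holding a value >= x is reached.
            while a[j] < x:
                j = (j - 1) // 2
            # Phase 3: rotate the elements on the path between i and j one level up
            # and drop x into position j.
            while j > i:
                a[j], x = x, a[j]
                j = (j - 1) // 2
            a[i] = x

        def build_heap(a):
            for i in range(len(a) // 2 - 1, -1, -1):
                sift_down_bounce(a, i, len(a))
            return a

        data = [2, 9, 7, 6, 5, 8, 1, 4, 3, 0]
        build_heap(data)
        assert all(data[(k - 1) // 2] >= data[k] for k in range(1, len(data)))
        print(data)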

  • 268.
    Chen, Jingsen
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Levcopoulos, Christos
    Lunds universitet.
    Improved parallel sorting of presorted sequences (1992). In: Parallel Processing: CONPAR 92 - VAPP V: Second Joint International Conference on Vector and Parallel Processing, Lyon, France, September 1-4, 1992, Proceedings, Berlin: Springer Verlag, 1992, p. 539-544. Conference paper (Refereed)
    Abstract [en]

    An adaptive parallel sorting algorithm is presented which is cost-optimal with respect to the number of oscillations in a sequence (Osc). More specifically, the algorithm sorts any sequence X of length n in time O(log n) by using O(n/log n · log(Osc(X)/n)) CRCW PRAM processors. This is the first adaptive parallel sorting algorithm that is cost-optimal with respect to Osc and, hence, it is also optimal with respect to both the number of inversions (Inv) and the number of runs (Runs) in the sequence. Our result improves previous results on adaptive parallel sorting.

  • 269.
    Chen, Lin
    et al.
    Laboratorie Recherche Informatique (LRI-CNRS UMR 8623), Université Paris-Sud.
    Li, Yong
    Department of Electronic Engineering, Tsinghua University, Bejing.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Oblivious Neighbor Discovery for Wireless Devices with Directional Antennas (2016). In: IEEE INFOCOM 2016: The 35th Annual IEEE International Conference on Computer Communications, San Francisco, 10-14 April 2016, Piscataway, NJ: IEEE Communications Society, 2016, article id 7524570. Conference paper (Refereed)
    Abstract [en]

    Neighbor discovery, the process of discovering all neighbors in a device's communication range, is one of the bootstrapping networking primitives of paramount importance and is particularly challenging when devices have directional antennas instead of omni-directional ones. In this paper, we study the following fundamental problem which we term as oblivious neighbor discovery: How can neighbor nodes with heterogeneous antenna configurations and without clock synchronization discover each other within a bounded delay in a fully decentralised manner without any prior coordination? We first establish a theoretical framework on oblivious neighbor discovery and establish the performance bound of any neighbor discovery protocol achieving oblivious discovery. Guided by the theoretical results, we then design an oblivious neighbor discovery protocol and prove that it achieves guaranteed oblivious discovery with order-minimal worst-case discovery delay in the asynchronous and heterogeneous environment. We further demonstrate how our protocol can be configured to achieve a desired trade-off between average and worst-case performance

  • 270.
    Chen, Lin
    et al.
    Laboratory Recherche Informatique (LRI-CNRS UMR 8623), Université Paris-Sud.
    Li, Yong
    Department of Electronic Engineering, Tsinghua University, Bejing.
    Vasilakos, Athanasios V.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    On Oblivious Neighbor Discovery in Distributed Wireless Networks With Directional Antennas: Theoretical Foundation and Algorithm Design (2017). In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 25, no 4, p. 1982-1993. Article in journal (Refereed)
    Abstract [en]

    Neighbor discovery, one of the most fundamental bootstrapping networking primitives, is particularly challenging in decentralized wireless networks where devices have directional antennas. In this paper, we study the following fundamental problem, which we term oblivious neighbor discovery: How can neighbor nodes with heterogeneous antenna configurations discover each other within a bounded delay in a fully decentralised manner without any prior coordination or synchronisation? We establish a theoretical framework on the oblivious neighbor discovery and the performance bound of any neighbor discovery algorithm achieving oblivious discovery. Guided by the theoretical results, we then devise an oblivious neighbor discovery algorithm, which achieves guaranteed oblivious discovery with order-minimal worst case discovery delay in the asynchronous and heterogeneous environment. We further demonstrate how our algorithm can be configured to achieve a desired tradeoff between average and worst case performance.

  • 271.
    Chen, Yifan
    et al.
    Southern University of Science and Technology.
    Nakano, Tadashi
    Osaka University.
    Kosmas, Panagiotis
    King’s College London.
    Yuen, Chau
    Singapore University of Technology and Design.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Asvial, Muhamad
    University of Indonesia.
    Green Touchable Nanorobotic Sensor Networks (2016). In: IEEE Communications Magazine, ISSN 0163-6804, E-ISSN 1558-1896, Vol. 54, no 11, p. 136-142. Article in journal (Refereed)
    Abstract [en]

    Recent advancements in biological nanomachines have motivated the research on nanorobotic sensor networks (NSNs), where the nanorobots are green (i.e., biocompatible and biodegradable) and touchable (i.e., externally controllable and continuously trackable). In the former aspect, NSNs will dissolve in an aqueous environment after finishing designated tasks and are harmless to the environment. In the latter aspect, NSNs employ cross-scale interfaces to interconnect the in vivo environment and its external environment. Specifically, the in-messaging and out-messaging interfaces for nanorobots to interact with a macro-unit are defined. The propagation and transient characteristics of nanorobots are described based on the existing experimental results. Furthermore, planning of nanorobot paths is discussed by taking into account the effectiveness of region-of-interest detection and the period of surveillance. Finally, a case study on how NSNs may be applied to microwave breast cancer detection is presented.

  • 272.
    Cheng, Jie
    et al.
    Shannon Cognitive Computing Laboratory, Huawei Technologies Company.
    Liu, Yaning
    Harbin Institute of Technology Shenzhen Graduate School.
    Ye, Qiang
    University of Prince Edward Island, Charlottetown.
    Du, Hongwei
    Harbin Institute of Technology Shenzhen Graduate School.
    Vasilakos, Athanasios V.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    DISCS: a distributed coordinate system based on robust nonnegative matrix completion (2017). In: IEEE/ACM Transactions on Networking, ISSN 1063-6692, E-ISSN 1558-2566, Vol. 25, no 2, p. 934-947. Article in journal (Refereed)
    Abstract [en]

    Many distributed applications, such as BitTorrent, need to know the distance between each pair of network hosts in order to optimize their performance. For small-scale systems, explicit measurements can be carried out to collect the distance information. For large-scale applications, this approach does not work due to the tremendous number of measurements that would have to be completed. To tackle this scalability problem, network coordinate systems (NCS) were proposed, which use partial measurements to predict the unknown distances. However, the existing NCS schemes suffer seriously from either low prediction precision or unsatisfactory convergence speed. In this paper, we present a novel distributed network coordinate system (DISCS) that utilizes a limited set of distance measurements to achieve high-precision distance prediction at a fast convergence speed. Technically, DISCS employs the innovative robust nonnegative matrix completion method to improve the prediction accuracy. Through extensive experiments based on various publicly available data sets, we found that DISCS outperforms the state-of-the-art NCS schemes in terms of prediction precision and convergence speed, which clearly shows the high usability of DISCS in real-life Internet applications.
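
    A rough, generic sketch of the underlying idea: complete a partially observed nonnegative distance matrix via low-rank nonnegative factorization with a sampling mask, using standard multiplicative updates. This is not DISCS itself (which is distributed and robust to outliers); the matrix size, rank, observation ratio and iteration count are arbitrary placeholders.

        # Hedged sketch: masked nonnegative matrix factorization to predict the
        # unmeasured entries of a host-to-host distance matrix. Generic illustration
        # only; not the distributed, robust algorithm used by DISCS.
        import numpy as np

        rng = np.random.default_rng(2)
        n_hosts, rank = 30, 4

        # Synthetic "true" nonnegative low-rank distance-like matrix.
        U = rng.uniform(0.1, 1.0, (n_hosts, rank))
        D_true = U @ U.T

        # Observe only about 30% of the pairwise measurements.
        mask = rng.random((n_hosts, n_hosts)) < 0.3

        # Multiplicative updates for min ||mask * (D - W H)||_F^2 with W, H >= 0.
        W = rng.random((n_hosts, rank)) + 0.1
        H = rng.random((rank, n_hosts)) + 0.1
        eps = 1e-9
        for _ in range(500):
            WH = W @ H
            W *= ((mask * D_true) @ H.T) / ((mask * WH) @ H.T + eps)
            WH = W @ H
            H *= (W.T @ (mask * D_true)) / (W.T @ (mask * WH) + eps)

        D_pred = W @ H
        unobserved = ~mask
        rel_err = np.abs(D_pred - D_true)[unobserved] / (D_true[unobserved] + eps)
        print("median relative error on unmeasured pairs:", round(float(np.median(rel_err)), 3))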

  • 273.
    Cherkaoui, S.
    et al.
    Université de Sherbrooke.
    Brännström, Robert
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    On-Move 2011: Message from the On-Move 2011 workshop chairs (2011). Other (Other academic)
  • 274.
    Cherkaoui, S.
    et al.
    Sherbrooke University.
    Åhlund, Christer
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Welcome message from the ON-MOVE 2010 co-chairs (2011). Other (Other academic)
  • 275.
    Cherkaoui, Soumaya
    et al.
    Sherbrooke University.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Khoukhi, Lyes
    University of Technology of Troyes.
    Sahoo, Jagruti
    South Carolina State University.
    Johansson, Dan
    Umeå universitet.
    On-Move 2017 Message from the Chairs (2017). In: Proceedings: 2017 IEEE 42nd Conference on Local Computer Networks Workshops, LCN Workshops 2017, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. xv-, article id 8110191. Conference paper (Refereed)
  • 276.
    Chiu, Wei-Yu
    et al.
    Department of Electrical Engineering, National Tsing Hua University, Hsinchu, Taiwan.
    Sun, Hongjian
    Department of Engineering Durham University Durham, U.K.
    Wang, Chao
    Department of Computer Science University of Exeter, Innovation Center, Exeter, U.K.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Guest Editorial: Special Issue on Computational Intelligence for Smart Energy Applications to Smart Cities (2019). In: IEEE Transactions on Emerging Topics in Computational Intelligence, E-ISSN 2471-285X, Vol. 3, no 3, p. 173-176. Article in journal (Refereed)
    Abstract [en]

    The papers in this special section focus on computational intelligence for smart energy applications in smart cities. By 2050, more than half the world’s population is expected to live in urban regions. This rapid expansion of population in the cities of the future will lead to increasing demands on various infrastructures; the urban economics will play a major role in national economics. Cities must be competitive by providing smart functions to support high quality of life. There is thus an urgent need to develop smart cities that possess a number of smart components. Among them, smart energy is arguably the first infrastructure to be established because almost all systems require energy to operate. Smart energy refers to energy monitoring, prediction, use or management in a smart way. In smart cities, smart energy applications include smart grids, smart mobility, and smart communications. While realizing smart energy is promising to smart cities, it involves a number of challenges. The articles in this section aim to provide in-depth CI technologies that enable smart energy applications to smart cities.

  • 277.
    Chivilikhin, Daniil
    et al.
    ITMO Univ, Comp Technol Lab, St Petersburg, Russia.
    Buzhinsky, Igor
    ITMO Univ, Comp Technol Lab, St Petersburg, Russia; Aalto Univ, Dept Elect Engn & Automat, Espoo, Finland.
    Ulyantsev, Vladimir
    ITMO Univ, Comp Technol Lab, St Petersburg, Russia.
    Stankevich, Andrey
    ITMO Univ, Comp Technol Lab, St Petersburg, Russia.
    Shalyto, Anatoly
    ITMO Univ, Comp Technol Lab, St Petersburg, Russia.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science; ITMO Univ, Comp Technol Lab, St Petersburg, Russia; Department of Electrical Engineering and Automation, Aalto University, Helsinki.
    Counterexample-guided inference of controller logic from execution traces and temporal formulas (2018). In: 2018 IEEE 23rd International Conference on Emerging Technologies and Factory Automation (ETFA), Piscataway, NJ: IEEE, 2018, p. 91-98. Conference paper (Refereed)
    Abstract [en]

    We developed an algorithm for inferring controller logic for cyber-physical systems (CPS) in the form of a state machine from given execution traces and linear temporal logic formulas. The algorithm implements an iterative counterexample-guided strategy: constraint programming is employed for constructing a minimal state machine from positive and negative traces (counterexamples) while formal verification is used for discovering new counterexamples. The proposed approach extends previous work by (1) considering a more intrinsic model of a state machine making the algorithm applicable to synthesizing CPS controller logic, and (2) using closed-loop verification which allows considering more expressive temporal properties.
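
    A toy, self-contained sketch of the counterexample-guided loop itself: synthesize a candidate consistent with the accumulated examples, verify it against the specification, and feed any counterexample back. The synthesis target (a single threshold) and the exhaustive verifier are deliberate oversimplifications standing in for the paper's constraint-programming-based state-machine synthesis and closed-loop model checking.

        # Hedged sketch of a counterexample-guided synthesis loop on a toy problem:
        # infer a threshold controller candidate(x) = (x >= T) that matches a hidden
        # specification, refining the candidate whenever verification finds a
        # counterexample.
        DOMAIN = range(0, 101)

        def spec(x):                      # stand-in for the temporal-logic specification
            return x >= 37

        def synthesize(examples):
            """Return a threshold consistent with all labelled examples."""
            lo = max((x + 1 for x, out in examples if not out), default=0)
            hi = min((x for x, out in examples if out), default=100)
            if lo > hi:
                raise ValueError("no consistent candidate exists")
            return (lo + hi) // 2         # any consistent choice works; bisect the range

        def verify(threshold):
            """Return a counterexample input, or None if the candidate satisfies the spec."""
            for x in DOMAIN:
                if (x >= threshold) != spec(x):
                    return x
            return None

        examples = [(0, spec(0)), (100, spec(100))]   # initial "execution traces"
        while True:
            candidate = synthesize(examples)
            cex = verify(candidate)
            if cex is None:
                break
            examples.append((cex, spec(cex)))         # counterexample-guided refinement
        print("learned threshold:", candidate, "using", len(examples), "examples")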

  • 278.
    Chivilikhin, Daniil S.
    et al.
    Computer Technologies Laboratory, ITMO University, Saint Petersburg.
    Ivanov, Ilya
    Computer Technologies Laboratory, ITMO University, Saint Petersburg.
    Shalyto, Anatoly A
    Computer Technologies Laboratory, ITMO University, Saint Petersburg.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Reconstruction of function block controllers based on test scenarios and verification (2017). In: IEEE International Conference on Industrial Informatics (INDIN), Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 646-651, article id 7819240. Conference paper (Refereed)
    Abstract [en]

    The paper addresses the problem of reverse engineering a function block (FB) in situations when its source code is either not available or too complex to understand. The proposed approach builds upon a recent method for reconstructing FBs based on testing and a search-based optimization algorithm. In our work the method is augmented with candidate solution verification using the NuSMV model checker. Verification is done in a closed-loop way using a manually constructed surrogate model of the plant and environment.

  • 279.
    Chivilikhin, Daniil S.
    et al.
    Computer Technologies Laboratory, ITMO University, Saint Petersburg.
    Ulyantsev, Vladimir I.
    Computer Technologies Laboratory, ITMO University, Saint Petersburg.
    Shalyto, Anatoly A
    Computer Technologies Laboratory, ITMO University, Saint Petersburg.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Computer Technologies Laboratory, ITMO University, Saint Petersburg.
    CSP-based inference of function block finite-state models from execution traces (2017). In: Proceedings: 2017 IEEE 15th International Conference on Industrial Informatics, INDIN 2017, Piscataway, NJ: Institute of Electrical and Electronics Engineers (IEEE), 2017, p. 714-719, article id 8104860. Conference paper (Refereed)
    Abstract [en]

    A method for inferring finite-state models of function blocks from given execution traces based on translation to the constraint satisfaction problem (CSP) is proposed. In contrast to the previous method based on a metaheuristic algorithm, the approach suggested in this paper is exact: it allows one to find a solution if it exists or to prove that none exists. The proposed method is evaluated on the example of constructing a finite-state model of a controller for a Pick-and-Place manipulator and is shown to be significantly faster than the metaheuristic algorithm.

  • 280.
    Chivilikhin, Daniil
    et al.
    Computer Technologies Laboratory, ITMO University, St. Petersburg.
    Shalyto, Anatoly
    Computer Technologies Laboratory, ITMO University, St. Petersburg.
    Patil, Sandeep
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Reconstruction of Function Block Logic Using Metaheuristic Algorithm (2017). In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050, Vol. 13, no 4, p. 1763-1771, article id 7936605. Article in journal (Refereed)
    Abstract [en]

    An approach for automatic reconstruction of automation logic from execution scenarios using a metaheuristic algorithm is proposed. IEC 61499 basic function blocks are chosen as implementation language and reconstruction of Execution Control Charts for basic function blocks is addressed. The synthesis method is based on a metaheuristic algorithm that combines ideas from ant colony optimization and evolutionary computation. Execution scenarios can be recorded from testing legacy software solutions. At this stage results are only limited to generation of basic function blocks having only Boolean input/output variables.

  • 281.
    Chivilikhin, Daniil
    et al.
    ITMO University.
    Shalyto, Anatoly
    ITMO University.
    Patil, Sandeep
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Reconstruction of function block logic using metaheuristic algorithm: Initial explorations (2015). In: IEEE 13th International Conference on Industrial Informatics (INDIN'15): Cambridge, United Kingdom, 22-24 July 2015, Piscataway, NJ: IEEE Communications Society, 2015, p. 1239-1242, article id 7281912. Conference paper (Refereed)
    Abstract [en]

    This paper presents an approach for automatic reconstruction of automation logic from execution scenarios using a metaheuristic algorithm. IEC 61499 basic function blocks are chosen as the implementation language and reconstruction of Execution Control Charts for basic function blocks is addressed. The synthesis method is based on a metaheuristic algorithm most closely related to ant colony optimization and evolutionary computation. Execution scenarios can be recorded from testing legacy software solutions. At this stage results are limited to generation of basic function blocks having only Boolean input/output variables.

  • 282.
    Chivilikhin, Daniil
    et al.
    Computer Technologies Laboratory, ITMO University, St. Petersburg, Russia.
    Ulyantsev, Vladimir
    Computer Technologies Laboratory, ITMO University, St. Petersburg, Russia.
    Shalyto, Anatoly
    Computer Technologies Laboratory, ITMO University, St. Petersburg, Russia.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science. Department of Electrical Engineering and Automation, Aalto University, Espoo, Finland.
    Function Block Finite-State Model Identification Using SAT and CSP Solvers (2019). In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050, Vol. 15, no 8, p. 4558-4568. Article in journal (Refereed)
    Abstract [en]

    We propose a two-stage exact approach for identifying finite-state models of function blocks based on given execution traces. First, a base finite-state model is inferred with a method based on translation to the Boolean satisfiability problem, and then, the base model is generalized by inferring minimal guard conditions of the state machine with a method based on translation to the constraint satisfaction problem.

  • 283.
    Chowdhury, Rumman Rashid
    et al.
    University of Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Hossain, Sazzad
    University of Liberal Arts Bangladesh, Dhaka, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Analyzing Sentiment of Movie Reviews in Bangla by Applying Machine Learning Techniques (2019). In: Proceedings of the International Conference on Bangla Speech and Language Processing, 2019. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a process of sentiment analysis of movie reviews written in Bangla language. This process can automate the analysis of audience’s reaction towards a specific movie or TV show. With more and more people expressing their opinions openly in the social networking sites, analyzing the sentiment of comments made about a specific movie can indicate how well the movie is being accepted by the general public. The dataset used in this experiment was collected and labeled manually from publicly available comments and posts from social media websites. Using Support Vector Machine algorithm, this model achieves 88.90% accuracy on the test set and by using Long Short Term Memory network [1] the model manages to achieve 82.42% accuracy. Furthermore, a comparison with some other machine learning approaches is presented in this paper.
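
    A minimal sketch in the spirit of the SVM approach above: character n-gram TF-IDF features feeding a linear SVM via scikit-learn. The four transliterated placeholder reviews are invented for illustration only; the paper's dataset consists of manually labelled Bangla social-media text, and its exact feature representation is not specified here.

        # Hedged sketch: bag-of-character-n-grams features + linear SVM for sentiment
        # labels. The tiny corpus below is a placeholder, not the paper's dataset.
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.pipeline import make_pipeline
        from sklearn.svm import LinearSVC

        texts = [
            "chobita darun laglo",        # placeholder transliterated positive review
            "ekdom bhalo lage ni",        # placeholder negative review
            "osadharon abhinoy",          # placeholder positive review
            "somoy nosto",                # placeholder negative review
        ]
        labels = ["pos", "neg", "pos", "neg"]

        model = make_pipeline(
            TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # works for Bangla script too
            LinearSVC(),
        )
        model.fit(texts, labels)
        print(model.predict(["darun osadharon chobi", "ekdom somoy nosto"]))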

  • 284.
    Chowdhury, Rumman Rashid
    et al.
    Department of Computer Science and Engineering, University of Chittagong, Bangladesh.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    Islam, Raihan Ul
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hossain, Sazzad
    Department of Computer Science and Engineering, University of Liberal Arts Bangladesh, Dhaka, Bangladesh.
    Bangla Handwritten Character Recognition using Convolutional Neural Network with Data Augmentation (2019). In: Proceedings of the Joint 2019 8th International Conference on Informatics, Electronics & Vision (ICIEV), 2019. Conference paper (Refereed)
    Abstract [en]

    This paper proposes a process of Handwritten Character Recognition to recognize and convert images of individual Bangla handwritten characters into electronically editable format, which will create opportunities for further research and can also have various practical applications. The dataset used in this experiment is the BanglaLekha-Isolated dataset [1]. Using Convolutional Neural Network, this model achieves 91.81% accuracy on the alphabets (50 character classes) on the base dataset, and after expanding the number of images to 200,000 using data augmentation, the accuracy achieved on the test set is 95.25%. The model was hosted on a web server for the ease of testing and interaction with the model. Furthermore, a comparison with other machine learning approaches is presented.
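
    A compact sketch of a CNN with on-the-fly augmentation in Keras, trained on random dummy images so the snippet stays self-contained; the 28x28 input, layer sizes, augmentation ranges and 50-class output are assumptions loosely standing in for the BanglaLekha-Isolated setting, not the paper's exact architecture.

        # Hedged sketch: small CNN + data augmentation for isolated handwritten
        # character classification. Dummy random data keeps the example runnable;
        # architecture and augmentation parameters are illustrative assumptions.
        import numpy as np
        import tensorflow as tf
        from tensorflow.keras import layers, models

        num_classes, img_shape = 50, (28, 28, 1)
        x_train = np.random.rand(256, *img_shape).astype("float32")     # stand-in images
        y_train = np.random.randint(0, num_classes, size=(256,))

        augment = tf.keras.Sequential([
            layers.RandomRotation(0.05),
            layers.RandomTranslation(0.1, 0.1),
            layers.RandomZoom(0.1),
        ])

        model = models.Sequential([
            layers.Input(shape=img_shape),
            augment,                                   # augmentation is active during training only
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(64, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.3),
            layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.fit(x_train, y_train, epochs=1, batch_size=32, verbose=0)
        print(model.predict(x_train[:1]).shape)        # (1, 50) class-probability vector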

  • 285.
    Chowdury, Mohammad Salah Uddin
    et al.
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Bin Emran, Talha
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Ghosh, Subhasish
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Pathak, Abhijit
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Alam, Mohd. Manjur
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Absar, Nurul
    BGC Trust University Bangladesh, Chandanaish, Chittagong-4381, Bangladesh.
    Andersson, Karl
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hossain, Mohammad Shahadat
    University of Chittagong, Bangladesh.
    IoT Based Real-time River Water Quality Monitoring System (2019). In: Procedia Computer Science, ISSN 1877-0509, E-ISSN 1877-0509, Vol. 155, p. 161-168. Article in journal (Refereed)
    Abstract [en]

    The current water quality monitoring system is a manual system with a monotonous and very time-consuming process. This paper proposes a sensor-based water quality monitoring system. The main components of the Wireless Sensor Network (WSN) include a microcontroller for processing, a communication system for inter- and intra-node communication, and several sensors. Real-time data access is achieved by using remote monitoring and Internet of Things (IoT) technology. Data collected at the remote site can be displayed in a visual format on a server PC with the help of Spark streaming analysis through Spark MLlib, deep learning neural network models, and a Belief Rule Based (BRB) system, and is also compared with standard values. If an acquired value is above the threshold value, an automated warning SMS alert is sent to the agent. The uniqueness of our proposed work is a water monitoring system with high frequency, high mobility, and low power. Therefore, our proposed system will immensely help the Bangladeshi population to become aware of contaminated water as well as to stop polluting the water.
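
    A small sketch of the threshold-and-alert step described above, with simulated readings and a stubbed SMS sender; the parameter set, the safe ranges and the send_sms_alert hook are illustrative placeholders, not the paper's Spark/BRB/deep-learning pipeline.

        # Hedged sketch: compare simulated water-quality readings against assumed
        # safe ranges and trigger an alert when a reading falls outside them. The
        # thresholds and the SMS hook are placeholders for the real system.
        import random

        SAFE_RANGES = {           # assumed illustrative limits, not regulatory values
            "ph": (6.5, 8.5),
            "turbidity_ntu": (0.0, 5.0),
            "temperature_c": (10.0, 30.0),
        }

        def read_sensors():
            """Stand-in for the WSN node's sensor sampling."""
            return {
                "ph": random.uniform(5.5, 9.5),
                "turbidity_ntu": random.uniform(0.0, 10.0),
                "temperature_c": random.uniform(5.0, 35.0),
            }

        def send_sms_alert(message):
            """Placeholder for the GSM/IoT gateway that notifies the agent."""
            print("SMS ALERT:", message)

        reading = read_sensors()
        for name, value in reading.items():
            low, high = SAFE_RANGES[name]
            if not (low <= value <= high):
                send_sms_alert(f"{name}={value:.2f} outside safe range [{low}, {high}]")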

  • 286.
    Chronéer, Diana
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Johansson, Jeaneth
    Luleå University of Technology, Department of Business Administration, Technology and Social Sciences, Innovation and Design.
    Nilsson, Michael
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Distance- Spanning Technology.
    Runardotter, Mari
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Digital platform ecosystems: From information transactions to collaboration impact (2017). Conference paper (Refereed)
  • 287.
    Chude-Okonkwo, U.A.K.
    et al.
    University of Pretoria, South Africa.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Malekian, Reza
    University of Pretoria, South Africa.
    Maharaj, Bodhaswar T.J.
    University of Pretoria, South Africa.
    Simulation analysis of inter-symbol interference in diffusion-based molecular communication with non-absorbing receiver (2017). In: Proceedings of the 4th ACM International Conference on Nanoscale Computing and Communication, NanoCom 2017, New York: ACM Digital Library, 2017, article id 13. Conference paper (Refereed)
    Abstract [en]

    This paper presents the analysis of inter-symbol interference (ISI) in a typical diffusion-based molecular communication system for a non-absorbing molecular receiver with no consideration to any artificially applied ISI mitigation technique. We employ stochastic simulation approach to analyze the influence of varied number of transmitted molecules, and molecules' degradation rates.
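
    A simplified, self-contained sketch of the stochastic simulation idea: release molecules as impulses for the transmitted symbols, let them undergo free 3-D Brownian motion, and count how many lie inside a passive (non-absorbing) spherical receiver at each sampling instant, so that molecules left over from earlier symbols show up as inter-symbol interference. Diffusion coefficient, distances, symbol duration and molecule counts are arbitrary placeholders, not the parameters studied in the paper.

        # Hedged sketch: Monte Carlo of diffusion-based molecular communication with a
        # passive (non-absorbing) spherical receiver. Counts at each sampling instant
        # include leftovers from earlier emissions, i.e. inter-symbol interference.
        import numpy as np

        rng = np.random.default_rng(3)
        D = 1e-9            # diffusion coefficient (m^2/s), placeholder
        dt = 1e-3           # simulation time step (s)
        T_sym = 0.2         # symbol duration (s)
        n_mol = 2000        # molecules released per '1' symbol
        rx_center = np.array([20e-6, 0.0, 0.0])   # receiver centre 20 micrometres away
        rx_radius = 5e-6

        bits = [1, 1, 0, 1]
        positions = np.empty((0, 3))
        counts = []
        steps_per_symbol = int(T_sym / dt)
        sigma = np.sqrt(2 * D * dt)                # per-axis Brownian step std dev

        for bit in bits:
            if bit:                                # impulse release at the transmitter
                positions = np.vstack([positions, np.zeros((n_mol, 3))])
            for _ in range(steps_per_symbol):      # free diffusion, nothing is absorbed
                positions = positions + rng.normal(0.0, sigma, positions.shape)
            inside = np.linalg.norm(positions - rx_center, axis=1) < rx_radius
            counts.append(int(inside.sum()))       # sample at the end of each symbol slot

        print("observed counts per symbol slot:", counts)
        # Non-zero counts in '0' slots come from molecules emitted for earlier symbols (ISI).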

  • 288.
    Chude-Okonkwo, Uche A.K.
    et al.
    Department of Electrical, Electronic & Computer Engineering, University of Pretoria.
    Malekian, Reza
    Department of Electrical, Electronic & Computer Engineering, University of Pretoria.
    Maharaj, B.T.
    Department of Electrical, Electronic & Computer Engineering, University of Pretoria.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Molecular Communication and Nanonetwork for Targeted Drug Delivery: a survey (2017). In: IEEE Communications Surveys and Tutorials, ISSN 1553-877X, E-ISSN 1553-877X, Vol. 19, no 4, p. 3046-3096. Article in journal (Refereed)
    Abstract [en]

    Molecular communication (MC) and molecular network (MN) are communication paradigms that use biochemical signalling to achieve information exchange among naturally and artificially synthesized nanosystems. Among the envisaged application areas of MC and MN is the field of nanomedicine, where the subject of targeted drug delivery (TDD) is at the forefront. Typically, when someone gets sick, therapeutic drugs are administered to the person for healing purposes. Since no therapeutic drug can be effective until it is delivered to the target site in the body, different modalities to improve the delivery of drugs to the targeted sites are being explored in contemporary research. The most promising of these modalities is TDD. The TDD modality promises smart localization of an appropriate dose of therapeutic drugs to the targeted part of the body with reduced systemic toxicity. Research in TDD has been going on for many years in the field of medical science; however, the translation of expectations and promises to clinical reality has not been satisfactorily achieved because of several challenges. The exploration of TDD ideas under the MC and MN paradigms is considered as an option for addressing these challenges and for facilitating the translation of TDD from the bench to the patients' bedsides. Over the past decade, there have been some research efforts made in exploring the ideas of TDD on the MC and MN platforms. While the number of research outputs in terms of scientific articles is small at the moment, the desire in the scientific community to participate in realizing the goal of TDD is quite high, as is evident from the rise in research output over the last few years. To increase awareness and provide the multidisciplinary research community with the necessary background information on TDD, this paper presents a visionary survey of this subject within the domain of MC and MN. We start by introducing, in an elaborate manner, the motivation behind the application of MC and MN paradigms to the study and implementation of TDD. Specifically, an explanation of how MC-based TDD concepts differ from traditional TDD explored in the field of medical science is provided. We also summarize the taxonomy of the different perspectives through which MC-based TDD research can be viewed. System models and design challenges/requirements for developing MC-based TDD are discussed. Various metrics that can be used to evaluate the performance of MC-based TDD systems are highlighted. We also provide a discussion on the envisaged path from contemporary research activities to clinical implementation of MC-based TDD. Finally, we discuss issues such as informatics and software tools, as well as issues that border on the requirement for standards and regulatory policies in MC-based TDD research and practice.

  • 289.
    Cleland, I.
    et al.
    School of Computing, Ulster University, Co. Antrim, Northern Ireland, United Kingdom.
    Donnelly, M.P.
    School of Computing, Ulster University, Co. Antrim, Northern Ireland, United Kingdom.
    Nugent, C.D.
    School of Computing, Ulster University, Co. Antrim, Northern Ireland, United Kingdom.
    Hallberg, Josef
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Espinilla, M.
    Department of Computer Science, University of Jaen, Jaen, Spain.
    Garcia-Constantino, M.
    School of Computing, Ulster University, Co. Antrim, Northern Ireland, United Kingdom.
    Collection of a Diverse, Realistic and Annotated Dataset for Wearable Activity Recognition2018In: 2018 IEEE International Conference on Pervasive Computing and Communications Workshops, PerCom Workshops 2018, IEEE, 2018, p. 555-560, article id 8480322Conference paper (Refereed)
    Abstract [en]

    This paper discusses the opportunities and challenges associated with the collection of a large-scale, diverse dataset for Activity Recognition. The dataset was collected by 141 undergraduate students in a controlled environment. Students collected triaxial accelerometer data from a wearable accelerometer whilst each carrying out 3 of the 18 investigated activities, categorized into 6 scenarios of daily living. This data was subsequently labelled, anonymized and uploaded to a shared repository. This paper presents an analysis of data quality through outlier detection and assesses the suitability of the dataset for the creation and validation of Activity Recognition models. This is achieved through the application of a range of common data-driven machine learning approaches. Finally, the paper describes challenges identified during the data collection process and discusses how these could be addressed. Issues surrounding data quality, in particular identifying and addressing poor calibration of the data, were identified. Results highlight the potential of harnessing these diverse data for Activity Recognition. Based on a comparison of six classification approaches, a Random Forest provided the best classification (F-measure: 0.88). In future data collection cycles, participants will be encouraged to collect a set of 'common' activities to support generation of a larger homogeneous dataset. Future work will seek to refine the methodology further and to evaluate the model on new, unseen data.
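
    As a rough illustration of the classifier comparison described in this abstract, the sketch below featurizes windows of tri-axial accelerometer data and scores a few common classifiers by macro F-measure. The window features, classifier choices and hyperparameters are illustrative assumptions, not the pipeline actually used in the paper.

    ```python
    # Sketch: comparing classifiers on windowed tri-axial accelerometer data.
    # Feature set and classifier settings are assumptions for illustration only.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def extract_features(windows):
        """windows: (n_windows, n_samples, 3) raw x/y/z acceleration."""
        feats = [windows.mean(axis=1), windows.std(axis=1),
                 windows.min(axis=1), windows.max(axis=1)]
        return np.concatenate(feats, axis=1)

    def compare_classifiers(windows, labels):
        X = extract_features(windows)
        models = {
            "random_forest": RandomForestClassifier(n_estimators=100),
            "knn": KNeighborsClassifier(n_neighbors=5),
            "svm": SVC(kernel="rbf"),
        }
        # Mean macro F-measure over 5-fold cross-validation per model.
        return {name: cross_val_score(m, X, labels, cv=5, scoring="f1_macro").mean()
                for name, m in models.items()}
    ```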

  • 290.
    Cleland, Ian
    et al.
    Ulster University.
    Donnelly, Mark
    Ulster University.
    Nugent, Chris
    Ulster University.
    Hallberg, Josef
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Espinilla, Macarena
    University of Jaen.
    García-Constantino, Matías
    Ulster University.
    Collection of a Diverse, Naturalistic and Annotated Dataset for Wearable Activity Recognition2018In: 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), IEEE, 2018, p. 555-560Conference paper (Refereed)
    Abstract [en]

    This paper discusses the opportunities and challenges associated with the collection of a large-scale, diverse dataset for Activity Recognition. The dataset was collected by 141 undergraduate students in a controlled environment. Students collected triaxial accelerometer data from a wearable accelerometer whilst each carrying out 3 of the 18 investigated activities, categorized into 6 scenarios of daily living. This data was subsequently labelled, anonymized and uploaded to a shared repository. This paper presents an analysis of data quality through outlier detection and assesses the suitability of the dataset for the creation and validation of Activity Recognition models. This is achieved through the application of a range of common data-driven machine learning approaches. Finally, the paper describes challenges identified during the data collection process and discusses how these could be addressed. Issues surrounding data quality, in particular identifying and addressing poor calibration of the data, were identified. Results highlight the potential of harnessing these diverse data for Activity Recognition. Based on a comparison of six classification approaches, a Random Forest provided the best classification (F-measure: 0.88). In future data collection cycles, participants will be encouraged to collect a set of “common” activities to support generation of a larger homogeneous dataset. Future work will seek to refine the methodology further and to evaluate the model on new, unseen data.

  • 291.
    Cleland, Ian
    et al.
    University of Ulster. School of Computing and Mathematics.
    Kikhia, Basel
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Nugent, Chris
    University of Ulster. School of Computing and Mathematics.
    Boytsov, Andrey
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hallberg, Josef
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Synnes, Kåre
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    McClean, Sally
    Computing and Information Engineering, University of Ulster.
    Finlay, Dewar
    University of Ulster. School of Computing and Mathematics.
    Optimal Placement of Accelerometers for the Detection of Everyday Activities2013In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 13, no 7, p. 9183-9200Article in journal (Refereed)
    Abstract [en]

    This article describes an investigation to determine the optimal placement of accelerometers for the purpose of detecting a range of everyday activities. The paper investigates the effect of combining data from accelerometers placed at various bodily locations on the accuracy of activity detection. Eight healthy males participated in the study. Data were collected from six wireless tri-axial accelerometers placed at the chest, wrist, lower back, hip, thigh and foot. Activities included walking, running on a motorized treadmill, sitting, lying, standing and walking up and down stairs. The Support Vector Machine provided the most accurate detection of activities of all the machine learning algorithms investigated. Although data from all locations provided similar levels of accuracy, the hip was the best single location to record data for activity detection using a Support Vector Machine, providing small but significantly better accuracy than the other investigated locations. Increasing the number of sensing locations from one to two or more statistically increased the accuracy of classification. There was no significant difference in accuracy when using two or more sensors. It was noted, however, that the difference in activity detection using single or multiple accelerometers may be more pronounced when trying to detect finer-grained activities. Future work shall therefore investigate the effects of accelerometer placement on a larger range of these activities.
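
    A minimal sketch of the evaluation strategy described here: score a Support Vector Machine for every combination of sensing locations and compare accuracies. The location names match those listed in the abstract, but the per-location feature layout, subset sizes and SVM settings are assumptions.

    ```python
    # Sketch: scoring an SVM for each combination of accelerometer locations.
    # features_by_location and the kernel/cv settings are illustrative assumptions.
    from itertools import combinations
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    LOCATIONS = ["chest", "wrist", "lower_back", "hip", "thigh", "foot"]

    def score_location_subsets(features_by_location, labels, max_size=3):
        """features_by_location: dict mapping location -> (n_windows, n_feats) array."""
        results = {}
        for k in range(1, max_size + 1):
            for subset in combinations(LOCATIONS, k):
                X = np.hstack([features_by_location[loc] for loc in subset])
                acc = cross_val_score(SVC(kernel="rbf"), X, labels, cv=5).mean()
                results[subset] = acc
        return results
    ```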

  • 292.
    Cruciani, Federico
    et al.
    Ulster University.
    Cleland, Ian
    Ulster University.
    Nugent, Chris
    Ulster University.
    McCullagh, Paul
    Ulster University.
    Synnes, Kåre
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hallberg, Josef
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Personalized Online Training for Physical Activity monitoring using weak labels2018In: 2018 IEEE International Conference on Pervasive Computing and Communications Workshops (PerCom Workshops), IEEE, 2018, p. 567-572Conference paper (Refereed)
    Abstract [en]

    The use of smartphones for activity recognition is becoming common practice. Most approaches use a single pretrained classifier to recognize activities for all users. Research studies, however, have highlighted how a personalized trained classifier could provide better accuracy. Data labeling for ground truth generation, however, is a time-consuming process. The challenge is further exacerbated when opting for a personalized approach that requires user-specific datasets to be labeled, making conventional supervised approaches unfeasible. In this work, we present early results on the investigation into a weakly supervised approach for online personalized activity recognition. This paper describes: (i) a heuristic to generate weak labels used for personalized training, and (ii) a comparison of accuracy obtained using a weakly supervised classifier against a conventional ground-truth-trained classifier. Preliminary results show an overall accuracy of 87% for a fully supervised approach against 74% for the proposed weakly supervised approach.
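
    To make the weak-labeling idea concrete, the sketch below pairs a coarse labeling heuristic with incremental, per-user training. The thresholds, the three-activity label set and the use of GPS speed and step cadence are hypothetical; the paper's actual heuristic is not reproduced here.

    ```python
    # Sketch: weak labels from simple smartphone signals, then online personalized
    # training. Thresholds and label set are hypothetical assumptions.
    from sklearn.linear_model import SGDClassifier

    ACTIVITIES = ["still", "walking", "running"]

    def weak_label(gps_speed_mps, step_cadence_hz):
        """Very coarse heuristic: label a window from speed and step cadence."""
        if step_cadence_hz < 0.3 and gps_speed_mps < 0.5:
            return "still"
        if gps_speed_mps > 2.5:
            return "running"
        return "walking"

    def online_personalize(model, feature_windows, speeds, cadences):
        """Incrementally update a per-user classifier with weak labels."""
        labels = [weak_label(s, c) for s, c in zip(speeds, cadences)]
        model.partial_fit(feature_windows, labels, classes=ACTIVITIES)
        return model

    personal_model = SGDClassifier(loss="log_loss")
    ```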

  • 293.
    Cruciani, Federico
    et al.
    Computer Science Research Institute, Ulster University, Newtownabbey BT370QB, UK.
    Cleland, Ian
    Computer Science Research Institute, Ulster University, Newtownabbey BT370QB, UK.
    Nugent, Chris
    Computer Science Research Institute, Ulster University, Newtownabbey BT370QB, UK.
    McCullagh, Paul
    Computer Science Research Institute, Ulster University, Newtownabbey BT370QB, UK.
    Synnes, Kåre
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Hallberg, Josef
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Automatic annotation for human activity recognition in free living using a smartphone2018In: Sensors, ISSN 1424-8220, E-ISSN 1424-8220, Vol. 18, no 7, article id 2203Article in journal (Refereed)
    Abstract [en]

    Data annotation is a time-consuming process posing major limitations to the development of Human Activity Recognition (HAR) systems. The availability of a large amount of labeled data is required for supervised Machine Learning (ML) approaches, especially in the case of online and personalized approaches requiring user-specific datasets to be labeled. The availability of such datasets has the potential to help address common problems of smartphone-based HAR, such as inter-person variability. In this work, we present (i) an automatic labeling method facilitating the collection of labeled datasets in free-living conditions using the smartphone, and (ii) an investigation of the robustness of common supervised classification approaches under instances of noisy data. We evaluated the results with a dataset consisting of 38 days of manually labeled data collected in free living. The comparison between the manually and the automatically labeled ground truth demonstrated that it was possible to obtain labels automatically with an 80–85% average precision rate. Results obtained also show how a supervised approach trained using automatically generated labels achieved an 84% f-score (using Neural Networks and Random Forests); however, results also demonstrated how the presence of label noise could lower the f-score to 64–74% depending on the classification approach (Nearest Centroid and Multi-Class Support Vector Machine).
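
    The robustness question raised in this abstract can be probed with a simple experiment: corrupt a fraction of the training labels and track the resulting f-score. The sketch below does exactly that; the noise rates, classifier choice and random-flip noise model are illustrative assumptions, not the paper's protocol.

    ```python
    # Sketch: measuring f-score degradation under increasing label noise.
    # Noise model (uniform random flips) and classifier are assumptions.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import f1_score
    from sklearn.model_selection import train_test_split

    def f_score_under_label_noise(X, y, noise_rates=(0.0, 0.1, 0.2, 0.3), seed=0):
        rng = np.random.default_rng(seed)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=seed)
        classes = np.unique(y)
        scores = {}
        for rate in noise_rates:
            noisy = y_tr.copy()
            flip = rng.random(len(noisy)) < rate
            noisy[flip] = rng.choice(classes, size=flip.sum())  # randomly relabel a fraction
            clf = RandomForestClassifier(n_estimators=100).fit(X_tr, noisy)
            scores[rate] = f1_score(y_te, clf.predict(X_te), average="macro")
        return scores
    ```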

  • 294.
    Dadhich, Siddharth
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering.
    Key challenges in automation of earth-moving machines2016In: Automation in Construction, ISSN 0926-5805, E-ISSN 1872-7891, Vol. 68, p. 212-222Article in journal (Refereed)
    Abstract [en]

    A wheel loader is an earth-moving machine used in construction sites, gravel pits and mining to move blasted rock, soil and gravel. In the presence of a nearby dump truck, the wheel loader is said to be operating in a short loading cycle. This paper concerns the moving of material (soil, gravel and fragmented rock) by a wheel loader in a short loading cycle, with emphasis on the loading step. Due to the complexity of bucket-environment interactions, even three decades of research efforts towards automation of the bucket loading operation have not yet resulted in any fully autonomous system. This paper highlights the key challenges in automation and tele-remote operation of earth-moving machines and provides a survey of different areas of research within the scope of the earth-moving operation. The survey of publications presented in this paper aims to highlight previous and ongoing research work in this field, striving for a balance between recent and older publications. Another goal of the survey is to identify the research areas in which knowledge essential to automate the earth-moving process is lagging behind. The paper concludes by identifying the knowledge gaps to give direction to future research in this field.

  • 295.
    Dadhich, Siddharth
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Sandin, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Andersson, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    From Tele-remote Operation to Semi-automated Wheel-loader2018In: International Journal of Electrical and Electronic Engineering and Telecommunications, ISSN 2319-2518, Vol. 7, no 4, p. 178-182Article in journal (Refereed)
    Abstract [en]

    This paper presents experimental results with tele-remote operation of a wheel-loader and proposes a method to semi-automate the process. The different components of the tele-remote setup are described in the paper. We focus on the short loading cycle, which is commonly used at quarry and construction sites for moving gravel from piles onto trucks. We present results from short-loading-cycle experiments with three operators, comparing productivity between tele-remote operation and manual operation. A productivity loss of 42% with tele-remote operation motivates the case for more automation. We propose a method to automate the bucket-filling process, which is one of the key operations performed by a wheel-loader.
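
    The 42% productivity-loss figure quoted above is a straightforward ratio of productivities. The sketch below shows one way such a figure can be computed from moved mass per unit time; the input numbers are placeholders chosen so the example reproduces a loss of roughly 42%, not the measurements reported in the paper.

    ```python
    # Sketch: relative productivity loss of tele-remote vs. manual operation.
    # Productivity is taken as moved mass per unit time; numbers are placeholders.
    def productivity(tonnes_moved, minutes):
        return tonnes_moved / minutes  # tonnes per minute

    def productivity_loss(manual, tele_remote):
        return 1.0 - tele_remote / manual

    manual = productivity(tonnes_moved=60.0, minutes=30.0)
    tele = productivity(tonnes_moved=35.0, minutes=30.0)
    print(f"productivity loss: {productivity_loss(manual, tele):.0%}")  # ~42%
    ```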

  • 296.
    Dadhich, Siddharth
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Sandin, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Andersson, Ulf
    Machine Learning approach to Automatic Bucket Loading2016In: 24th Mediterranean Conference on Control and Automation (MED): June 21-24, Athens, Greece, 2016, Piscataway, NJ: IEEE Communications Society, 2016, p. 1260-1265, article id 7535925Conference paper (Refereed)
    Abstract [en]

    The automation of bucket loading for repetitive tasks of earth-moving operations is desired in several applications at mining sites, quarries and construction sites where larger amounts of gravel and fragmented rock are to be moved. In load-and-carry cycles, the average bucket weight is the dominating performance parameter, while fuel efficiency and loading time also come into play with short loading cycles. This paper presents the analysis of data recorded during loading of different types of gravel piles with a Volvo L110G wheel loader. Regression models of lift and tilt actions are fitted to the behavior of an expert driver for a gravel pile. We present linear regression models for lift and tilt actions that explain most of the variance in the recorded data and outline a learning approach for solving the automatic bucket loading problem. A general solution should provide good performance in terms of average bucket weight, cycle time of loading and fuel efficiency for different types of material and pile geometries. We propose that a reinforcement learning approach can be used to further refine models fitted to the behavior of expert drivers, and we briefly discuss the scooping problem in terms of a Markov decision process and possible value functions and policy iteration schemes.
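
    In the spirit of the regression analysis described here, the sketch below fits separate linear models for the lift and tilt actuations from machine-state signals recorded during expert operation. Which signals are used as inputs is an assumption for illustration.

    ```python
    # Sketch: linear regression of expert lift/tilt actuation from machine state.
    # The particular input signals are an illustrative assumption.
    from sklearn.linear_model import LinearRegression

    def fit_lift_tilt_models(states, lift_cmd, tilt_cmd):
        """
        states:   (n_samples, n_signals), e.g. cylinder pressures and boom/bucket angles
        lift_cmd: (n_samples,) recorded lift lever actuation from the expert driver
        tilt_cmd: (n_samples,) recorded tilt lever actuation from the expert driver
        """
        lift_model = LinearRegression().fit(states, lift_cmd)
        tilt_model = LinearRegression().fit(states, tilt_cmd)
        # R^2 indicates how much of the variance in the expert actions is explained.
        print("lift R^2:", lift_model.score(states, lift_cmd))
        print("tilt R^2:", tilt_model.score(states, tilt_cmd))
        return lift_model, tilt_model
    ```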

  • 297.
    Dadhich, Siddharth
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sandin, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Predicting bucket-filling control actions of a wheel-loader operator using a neural network ensemble2018In: 2018 International Joint Conference on Neural Networks (IJCNN), Piscataway, NJ: IEEE, 2018, article id 8489388Conference paper (Refereed)
    Abstract [en]

    Automatic bucket filling has been an open problem for three decades. In this paper, we address this problem with supervised machine learning using data collected from manual operation. The range-normalized actuations of lift joystick, tilt joystick and throttle pedal are predicted using information from sensors on the machine and the prediction errors are quantified. We apply linear regression, k-nearest neighbors, neural networks, regression trees and ensemble methods and find that an ensemble of neural networks results in the most accurate predictions. The prediction root-mean-square error (RMSE) of the lift action exceeds that of the tilt and throttle actions, and we obtain an RMSE below 0.2 for complete bucket fillings after training with as few as 135 bucket-filling examples.
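
    A minimal sketch of the ensemble idea: several small neural-network regressors are trained on the same data and their averaged output predicts one normalized control action, evaluated with RMSE. The network size, ensemble size and training settings are assumptions, not the paper's exact configuration.

    ```python
    # Sketch: averaged ensemble of small MLP regressors for one control action.
    # Hidden-layer size and ensemble size are illustrative assumptions.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def train_ensemble(X_train, y_train, n_members=5):
        return [MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                             random_state=seed).fit(X_train, y_train)
                for seed in range(n_members)]

    def ensemble_rmse(models, X_test, y_test):
        pred = np.mean([m.predict(X_test) for m in models], axis=0)
        return float(np.sqrt(np.mean((pred - y_test) ** 2)))
    ```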

  • 298.
    Dadhich, Siddharth
    et al.
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Sandin, Fredrik
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Internet Systems Lab.
    Bodin, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Andersson, Ulf
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Signals and Systems.
    Martinsson, Torbjörn
    Volvo CE, Bolindervägen 5, 63185 Eskilstuna, Sweden.
    Field test of neural-network based automatic bucket-filling algorithm for wheel-loaders2019In: Automation in Construction, ISSN 0926-5805, E-ISSN 1872-7891, Vol. 97, p. 1-12Article in journal (Refereed)
    Abstract [en]

    Automation of earth-moving industries (construction, mining and quarry) requires automatic bucket-filling algorithms for efficient operation of front-end loaders. Autonomous bucket-filling has been an open problem for three decades due to difficulties in developing useful earth models (soil, gravel and rock) for automatic control. Operators make use of vision, sound and vestibular feedback to perform the bucket-filling operation with high productivity and fuel efficiency. In this paper, field experiments with a small time-delayed neural network (TDNN) implemented in the bucket control-loop of a Volvo L180H front-end loader filling medium coarse gravel are presented. The total delay time parameter of the TDNN is found to be an important hyperparameter due to the variable delay present in the hydraulics of the wheel-loader. The TDNN successfully performs the bucket-filling operation after an initial period (100 examples) of imitation learning from an expert operator. The demonstrated solution shows only 26% longer bucket-filling time, an improvement over manual tele-operation performance.
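
    A time-delayed neural network can be viewed as a tapped delay line feeding a small feed-forward regressor, which is what the sketch below implements. The number of delay steps stands in for the "total delay time" hyperparameter mentioned in the abstract; the delay length, network size and input signals are assumptions.

    ```python
    # Sketch: TDNN-style regressor built from a tapped delay line of recent samples.
    # n_delays corresponds to the total-delay-time hyperparameter; sizes are assumptions.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    def tapped_delay_features(signals, n_delays):
        """signals: (T, n_signals); returns (T - n_delays, n_signals * (n_delays + 1))."""
        T = signals.shape[0]
        rows = [signals[t - n_delays:t + 1].ravel() for t in range(n_delays, T)]
        return np.asarray(rows)

    def train_tdnn(signals, lift_cmd, n_delays=10):
        X = tapped_delay_features(signals, n_delays)
        y = lift_cmd[n_delays:]  # align targets with the delayed input windows
        return MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000).fit(X, y)
    ```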

  • 299.
    Dai, Hong-Ning
    et al.
    Faculty of Information Technology, Macau University of Science and Technology, Avenida Wai Long, Taipa, Macau.
    Wong, Raymond Chi-Wing
    Department of Computer Science and Engineering, Hong Kong University of Science and Technology (HKUST), Clear Water Bay, Kowloon, Hong Kong.
    Wang, Hao
    Norwegian University of Science and Technology, Norway.
    Zheng, Zibin
    School of Data and Computer Science, Sun Yat-sen University, Xiaoguwei Island, Guangzhou, China.
    Vasilakos, Athanasios
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Big Data Analytics for Large-scale Wireless Networks: Challenges and Opportunities2019In: ACM Computing Surveys, ISSN 0360-0300, E-ISSN 1557-7341, Vol. 52, no 5, article id 99Article in journal (Refereed)
    Abstract [en]

    The wide proliferation of various wireless communication systems and wireless devices has led to the arrival of the big data era in large-scale wireless networks. Big data of large-scale wireless networks has the key features of wide variety, high volume, real-time velocity, and huge value, leading to unique research challenges that are different from existing computing systems. In this article, we present a survey of the state-of-the-art big data analytics (BDA) approaches for large-scale wireless networks. In particular, we categorize the life cycle of BDA into four consecutive stages: Data Acquisition, Data Preprocessing, Data Storage, and Data Analytics. We then present a detailed survey of the technical solutions to the challenges in BDA for large-scale wireless networks according to each stage in the life cycle of BDA. Moreover, we discuss the open research issues and outline the future directions in this promising area.

  • 300.
    Dai, Wenbin
    et al.
    University of Auckland, Shanghai Jiao Tong University.
    Dubinin, Victor N.
    University of Auckland; Department of Computer Science, Penza State University, Penza.
    Christensen, James H.
    Rockwell Automation Advanced Technologies, Mayfield Heights, Holobloc Inc., Cleveland Heights.
    Vyatkin, Valeriy
    Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Computer Science.
    Guan, Xinping
    Key Laboratory of System Control and Information Processing, Ministry of Education of China, Department of Automation, Shanghai Jiao Tong University.
    Towards Self-Manageable and Adaptive Industrial Cyber-Physical Systems with Knowledge-Driven Autonomic Service Management2017In: IEEE Transactions on Industrial Informatics, ISSN 1551-3203, E-ISSN 1941-0050, Vol. 13, no 2, p. 725-736, article id 7523919Article in journal (Refereed)
    Abstract [en]

    An increasingly important goal of industrial automation systems is to continuously optimize physical resource utilization such as materials. Distributed automation is seen as one enabling technology for achieving this goal, in which networked controller nodes collaborate in a peer-to-peer way to form a new paradigm, namely industrial cyber-physical systems (iCPS). In order to achieve rapid response to changes from both high-level control systems and the plant environment, the proposed self-manageable agent relies on the use of a Service-Oriented Architecture (SOA), which improves flexibility and interoperability. It is enhanced by autonomic service management (ASM) to implement software modifications in a fully automatic manner, thus achieving self-manageable and adaptive industrial cyber-physical systems. The architecture design of the autonomic service manager is provided and integration with an SOA-based execution environment is illustrated. Preliminary tests on self-management are completed using a case study of an airport baggage handling system.
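
    As a loose illustration of an autonomic service manager, the sketch below runs a monitor/analyse/plan/execute loop that reconfigures a service when its observed throughput drifts below a target. The service interface, the throughput metric and the one-worker-at-a-time policy are hypothetical and are not taken from the paper's architecture.

    ```python
    # Sketch: minimal autonomic-manager loop (monitor -> analyse -> plan -> execute).
    # The service interface and reconfiguration policy are hypothetical assumptions.
    import time

    class AutonomicManager:
        def __init__(self, service, target_throughput):
            self.service = service
            self.target = target_throughput

        def step(self):
            metrics = self.service.monitor()               # Monitor: read current state
            deficit = self.target - metrics["throughput"]  # Analyse: compare with goal
            plan = None
            if deficit > 0:                                # Plan: decide a modification
                plan = {"workers": self.service.workers + 1}
            if plan:                                       # Execute: apply it automatically
                self.service.reconfigure(**plan)

        def run(self, period_s=1.0, cycles=10):
            for _ in range(cycles):
                self.step()
                time.sleep(period_s)
    ```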
