Wireless Sensor Networks (WSNs) have matured to a point where they can be used in agriculture to enable optimal irrigation scheduling. Since widely available methods to support effective agricultural practice under different weather conditions are lacking, WSN technology can be used to optimise irrigation in crop fields. This paper presents the architecture of an irrigation system incorporating an interoperable IP-based WSN, which uses the protocol stacks and standards of the Internet of Things paradigm. The performance of fundamental aspects of this network is emulated on Tmote Sky motes for 6LoWPAN over an IEEE 802.15.4 radio link using the Contiki OS and the Cooja simulator. The simulation results present the Round Trip Time (RTT) as well as the packet loss for different packet sizes. In addition, the average power consumption and the radio duty cycle of the sensors are studied. These results will facilitate the deployment of a scalable and interoperable multi-hop WSN, the positioning of the border router, and the management of sensor power consumption.
Wireless Sensor Networks (WSNs) make a remarkable contribution to real-time decision making by sensing and actuating in their surrounding environment. As a consequence, contemporary agriculture now uses WSN technology for better crop production, for example irrigation scheduling based on moisture-level data sensed by the sensors. Since WSNs are deployed in constrained environments, the lifetime of the sensors is crucial for the normal operation of the network, and the routing protocol is a prime factor in prolonging that lifetime. This research focuses on the performance analysis of several clustering-based routing protocols in order to select the best one. Four algorithms are considered: Low Energy Adaptive Clustering Hierarchy (LEACH), Threshold Sensitive Energy Efficient sensor Network (TEEN), Stable Election Protocol (SEP), and Energy Aware Multi Hop Multi Path (EAMMH). The simulation is carried out in the Matlab framework using the mathematical models of these algorithms in a heterogeneous environment. The performance metrics considered are stability period, network lifetime, number of dead nodes per round, number of cluster heads (CHs) per round, throughput, and average residual energy per node. The experimental results illustrate that TEEN provides a greater stable region and lifetime than the others, while SEP ensures more throughput.
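The probabilistic cluster-head rotation that distinguishes LEACH (and that SEP and TEEN build on) is governed by a well-known election threshold. A minimal sketch follows, where the node IDs, cluster-head fraction p, and round counter are illustrative assumptions; the full protocol additionally excludes nodes that have already served as cluster head in the current epoch.

```python
import random

def leach_threshold(p, r):
    """LEACH cluster-head election threshold T(n) = p / (1 - p*(r mod 1/p))
    for round r, given a desired cluster-head fraction p.
    (Nodes already elected CH in the epoch get T(n) = 0; omitted here.)"""
    return p / (1 - p * (r % int(1 / p)))

def elect_cluster_heads(node_ids, p, r, rng=random.Random(42)):
    """Each eligible node becomes a cluster head if a uniform
    random draw falls below the threshold for this round."""
    t = leach_threshold(p, r)
    return [n for n in node_ids if rng.random() < t]

heads = elect_cluster_heads(range(100), p=0.1, r=0)
```

Note how the threshold grows as the epoch progresses, so nodes that have not yet served become increasingly likely to be elected, spreading the energy cost of the cluster-head role across the network.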
Because of the increased popularity and rapid expansion of the Internet as well as the Internet of Things, networks are growing in every corner of society. As a result, a huge amount of data travels across computer networks, threatening data integrity, confidentiality, and reliability. Network security is therefore a pressing issue for preserving the integrity of systems and data. Traditional safeguards, such as firewalls with access control lists, are no longer sufficient to secure systems. To address the drawbacks of traditional Intrusion Detection Systems (IDSs), artificial intelligence and machine learning based models open up a new opportunity to classify abnormal traffic as anomalous with a self-learning capability. Many supervised learning models have been adopted to detect anomalies in network traffic. In a quest to select a good learning model in terms of precision, recall, area under the receiver operating characteristic curve, accuracy, F-score, and model build time, this paper illustrates the performance comparison between Naïve Bayes, Multilayer Perceptron, J48, Naïve Bayes Tree, and Random Forest classification models. These models are trained and tested on three subsets of features derived from the original benchmark network intrusion detection dataset, NSL-KDD. The three subsets are derived by applying different attribute evaluator algorithms. The simulation is carried out using the WEKA data mining tool.
Glaucoma detection is an important research area in intelligent systems, and it plays an important role in the medical field. Glaucoma can give rise to irreversible blindness due to a lack of proper diagnosis. Doctors need to perform many tests to diagnose this threatening disease, which requires considerable time and expense. Sometimes affected people may not have any vision loss at the early stage of glaucoma. To detect glaucoma, we have built a model that lessens the time and cost. Our work introduces a CNN-based Inception V3 model. We used a total of 6072 images, of which 2336 were glaucomatous and 3736 were normal fundus images. For training our model we took 5460 images, and for testing we took 612 images. We obtained an accuracy of 0.8529 and an AUC of 0.9387. For comparison, we used the DenseNet121 and ResNet50 algorithms and obtained accuracies of 0.8153 and 0.7761, respectively.
Early prediction of whether a product will go on backorder is necessary for optimal inventory management that can reduce losses in sales, establish a good relationship between supplier and customer, and maximize revenue. In this study, we have investigated the performance and effectiveness of tree-based machine learning algorithms for predicting the backorder of a product. The research methodology consists of data preprocessing, feature selection using a statistical hypothesis test, imbalanced learning using the random undersampling method, and performance evaluation and comparison of four tree-based machine learning algorithms (decision tree, random forest, adaptive boosting, and gradient boosting) in terms of accuracy, precision, recall, F1-score, area under the receiver operating characteristic curve, and area under the precision-recall curve. The three main findings of this study are: (1) the random forest model without feature selection and with the random undersampling method achieved the highest performance on all metrics, (2) feature selection did not contribute to the performance of the tree-based classifiers, and (3) the random undersampling method significantly improves the performance of tree-based classifiers in product backorder prediction.
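The random undersampling step used above to handle class imbalance can be sketched as follows; the helper name and the pure-Python representation of the dataset are illustrative assumptions, not the authors' implementation.

```python
import random

def random_undersample(X, y, rng=random.Random(0)):
    """Balance a dataset by sampling every class down to the size of
    the smallest (minority) class - the random undersampling idea used
    before training the tree-based classifiers."""
    by_class = {}
    for xi, yi in zip(X, y):
        by_class.setdefault(yi, []).append(xi)
    n = len(min(by_class.values(), key=len))  # minority class size
    Xb, yb = [], []
    for label, rows in by_class.items():
        sample = rows if len(rows) == n else rng.sample(rows, n)
        for xi in sample:
            Xb.append(xi)
            yb.append(label)
    return Xb, yb
```

For a binary backorder problem with 90 negative and 10 positive examples, this yields a balanced set of 10 examples per class, which keeps majority-class examples from dominating the split criteria of tree learners.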
Accurate and rapid identification of severe and non-severe COVID-19 patients is necessary for reducing the risk of overloading hospitals, utilizing hospital resources effectively, and minimizing the mortality rate in the pandemic. A conjunctive belief rule-based clinical decision support system is proposed in this paper to identify critical and non-critical COVID-19 patients in hospitals using only three blood test markers. The experts' knowledge of COVID-19 is encoded in the form of belief rules in the proposed method. To fine-tune the initial belief rules provided by COVID-19 experts using real patient data, a modified differential evolution algorithm that can solve the constrained optimization problem of the belief rule base is also proposed in this paper. Several experiments are performed using data from 485 COVID-19 patients to evaluate the effectiveness of the proposed system. The experimental results show that, after optimization, the conjunctive belief rule-based system achieved an accuracy, sensitivity, and specificity of 0.954, 0.923, and 0.959, respectively, while for the disjunctive belief rule base they are 0.927, 0.769, and 0.948. Moreover, with a 98.85% AUC value, our proposed method shows superior performance to four traditional machine learning algorithms: LR, SVM, DT, and ANN. All these results validate the effectiveness of our proposed method. The proposed system will help hospital authorities identify severe and non-severe COVID-19 patients and adopt optimal treatment plans in pandemic situations.
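In a conjunctive belief rule base, a rule fires with a weight proportional to the product of its antecedent matching degrees. The sketch below illustrates this activation step and a simplified weighted-sum aggregation of the consequent belief distributions; the actual BRB inference uses the full evidential reasoning algorithm, and all names and numbers here are illustrative.

```python
def activation_weights(matching_degrees, rule_weights):
    """Activation weight of each conjunctive belief rule: its rule weight
    times the product of its antecedent matching degrees, normalised
    over all rules."""
    raw = []
    for degrees, theta in zip(matching_degrees, rule_weights):
        w = theta
        for d in degrees:
            w *= d
        raw.append(w)
    total = sum(raw)
    return [w / total for w in raw] if total else raw

def aggregate_beliefs(weights, rule_beliefs):
    """Combine the rules' consequent belief distributions by activation
    weight (a weighted sum - a simplification of the ER algorithm)."""
    n = len(rule_beliefs[0])
    return [sum(w * b[k] for w, b in zip(weights, rule_beliefs))
            for k in range(n)]

# Two hypothetical rules over {severe, non-severe}: the first matches
# the blood markers strongly, the second weakly.
w = activation_weights([[0.8, 0.9], [0.2, 0.1]], [1.0, 1.0])
beliefs = aggregate_beliefs(w, [[1.0, 0.0], [0.0, 1.0]])
```

The fine-tuning the paper describes would then adjust the rule weights and consequent belief degrees (under their normalisation constraints) with differential evolution to fit real patient data.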
Tomato leaves can be infected with various infectious viruses and fungal diseases that drastically reduce tomato production and incur a great economic loss. Therefore, tomato leaf disease detection and identification are crucial for maintaining the global demand for tomatoes for a large population. This paper proposes a machine learning-based technique to identify diseases on tomato leaves and classify them into three diseases (Septoria, Yellow Curl Leaf, and Late Blight) and one healthy class. The proposed method extracts radiomics-based features from tomato leaf images and identifies the disease with a gradient boosting classifier. The dataset used in this study consists of 4000 tomato leaf disease images collected from the Plant Village dataset. The experimental results demonstrate the effectiveness and applicability of our proposed method for tomato leaf disease detection and classification.
Artificial intelligence has achieved notable advances across many applications, and the field has recently turned its attention to developing novel methods for explaining machine learning models. Deep neural networks deliver the best accuracy in domains such as text categorization, image classification, and speech recognition. Since neural network models are black boxes, they lack transparency and explainability in their predictions. During the COVID-19 pandemic, fake news detection became a challenging research problem, as misinformation endangers the lives of many online users. Therefore, transparency and explainability in COVID-19 fake news classification are necessary for building trust in model predictions. We propose an integrated LIME-BiLSTM model in which BiLSTM assures classification accuracy and LIME ensures transparency and explainability. In this integrated model, since LIME behaves similarly to the original model and explains its predictions, the proposed model becomes comprehensible. The explainability of this model is measured using Kendall's tau correlation coefficient. We also employ several machine learning models and compare their performance. Finally, because the model takes an integrated strategy, we analyze and compare its computational overhead with that of the other methods.
The widespread use of facial expression recognition in various sectors of computer science has elevated researchers' interest in this topic. Computer vision coupled with deep learning offers a way to solve several real-world problems. For instance, in robotics, analyzing information from visual content is a requirement for carrying out and strengthening communication between expert systems and humans, or even between expert agents. Facial expression recognition is one of the trending topics in computer vision. In our previous work, we delivered a facial expression recognition system that can classify an image into the seven universal facial expressions: angry, disgust, fear, happy, neutral, sad, and surprise. This paper extends that research with a real-time facial expression recognition system that can recognize ten facial expressions from video streaming data: the previous seven plus three additional expressions, mockery, think, and wink. After training, the proposed model achieved high validation accuracy on a combined facial expression dataset, and its real-time validation is also promising.
The novel coronavirus-induced disease COVID-19 is the biggest threat to human health at the present time, and due to the transmission ability of this virus via its conveyors, it is spreading rapidly to almost every corner of the globe. The combined efforts of medical and IT experts are required to bring this outbreak under control. In this research, an integration of data-driven and knowledge-driven approaches in a single framework is proposed to assess the survival probability of a COVID-19 patient. Several pre-trained neural network models (Xception, InceptionResNetV2, and VGG Net) are trained on X-ray images of COVID-19 patients to distinguish between critical and non-critical patients. This prediction result, along with eight other significant risk factors associated with COVID-19 patients, is analyzed with a knowledge-driven belief rule-based expert system, which produces a probability of survival for that particular patient. The reliability of the proposed integrated system has been tested using real patient data and compared with expert opinion, and the performance of the system is found to be promising.
It is my pleasure to welcome you to the sixth Demonstration Session at the IEEE Conference on Local Computer Networks (LCN) 2014. We sought demonstrations on all topics covered by the main conference as well as by all the workshops held in conjunction with it. Technical demonstrations were strongly encouraged to show innovative and original research. The main purpose of the demo session is to provide demonstrations that validate important research issues and/or show innovative prototypes.
Emojis are small icons or images used to express our sentiments or feelings in text messages. They are used extensively on social media platforms such as Facebook, Twitter, and Instagram. In this paper we consider hand-drawn emojis, that is, emojis drawn on a digital platform or simply on paper with a pen, and classify them into 8 classes. This will enable users to classify hand-drawn emojis so that they can use them on any social media platform without confusion. We made a local dataset of 500 images per class, for a total of 4000 images of hand-drawn emojis. We present a system that recognises and classifies the emojis into 8 classes with a convolutional neural network model, achieving an accuracy of 97%. Pre-trained CNN models, namely VGG16, VGG19, ResNet50, MobileNetV2, InceptionV3, and Xception, are also trained on the dataset to compare accuracies and check whether they outperform the proposed model. In addition, machine learning models such as SVM, Random Forest, AdaBoost, Decision Tree, and XGBoost are also implemented on the dataset.
Mosquitoes are responsible for the greatest number of deaths every year throughout the world, and Bangladesh suffers heavily from this problem. Dengue, malaria, chikungunya, Zika, yellow fever, and other diseases are caused by dangerous mosquito bites. The three main types of mosquitoes found in Bangladesh are Aedes, Anopheles, and Culex, and identifying them is crucial for taking the necessary steps to eliminate them in an area. Hence, a convolutional neural network (CNN) model is developed so that mosquitoes can be classified from their images. We prepared a local dataset of 442 images collected from various sources. An accuracy of 70% has been achieved by running the proposed CNN model on the collected dataset; after augmenting the dataset to 3,600 images, the accuracy increases to 93%. We also compare the CNN method with VGG-16, Random Forest, XGBoost, and SVM, and our proposed CNN method outperforms them in terms of mosquito classification accuracy. Thus, this research is an example of humanitarian technology, where data science can support mosquito classification and thereby the treatment of various mosquito-borne diseases.
One of the most vital parts of medical image analysis is the classification of brain tumors. Because tumors are thought to be precursors of cancer, accurate brain tumor classification can save lives. As a result, CNN (Convolutional Neural Network)-based techniques for classifying brain tumors are frequently employed. However, CNNs require vast amounts of training data to produce good performance, and this is where transfer learning enters the picture. We present a 4-class transfer learning approach for categorizing glioma, meningioma, and pituitary tumors, the three most prevalent types of brain tumors, as well as non-tumor images. Our method utilizes a pre-trained InceptionResNetV1 network to extract features from brain MRI images and classifies them with a softmax classifier. The proposed approach outperforms all prior techniques with a mean classification accuracy of 93.95%. For the evaluation of our method we use a Kaggle dataset, and precision, recall, and F-score are among the key performance metrics employed in this study.
Optimization problems such as the Travelling Salesman Problem (TSP) can be solved by applying a Genetic Algorithm (GA) to obtain a good approximation in reasonable time. TSP is an NP-hard optimal minimization problem. Selection, crossover, and mutation are the three main operators of a GA, and the algorithm is usually employed to find the minimal total distance needed to visit all the nodes in a TSP. This research presents a new crossover operator for TSP that allows further minimization of the total distance. The proposed crossover operator consists of selecting two crossover points and creating new offspring by performing a cost comparison. Computational results as well as a comparison with well-established crossover operators are also presented. The new crossover operator is found to produce better results than the other crossover operators.
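The abstract does not give the operator's full details, so the sketch below pairs a classical two-point order crossover (OX) with a cost comparison that keeps the cheaper of the two offspring; it illustrates the idea of combining two-point crossover with cost comparison rather than reproducing the authors' exact operator.

```python
import random

def tour_cost(tour, dist):
    """Total length of a closed tour under distance matrix dist."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def order_crossover(p1, p2, a, b):
    """Classical two-point order crossover (OX): copy p1[a:b], then fill
    the remaining positions with p2's cities in order, keeping a valid
    permutation."""
    child = [None] * len(p1)
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in child[a:b]]
    idx = 0
    for i in range(len(p1)):
        if child[i] is None:
            child[i] = fill[idx]
            idx += 1
    return child

def cost_based_crossover(p1, p2, dist, rng=random.Random(1)):
    """Create both OX offspring at the same two crossover points and
    keep the cheaper one - the cost-comparison step."""
    a, b = sorted(rng.sample(range(len(p1)), 2))
    c1 = order_crossover(p1, p2, a, b)
    c2 = order_crossover(p2, p1, a, b)
    return min(c1, c2, key=lambda t: tour_cost(t, dist))
```

Because only the cheaper offspring survives, average offspring cost can never exceed that of plain OX with random offspring choice, which is the intuition behind cost-aware crossover designs.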
The main challenge for any mobile robot is to detect and avoid obstacles and potholes. This paper presents the development and implementation of a novel mobile robot. An Arduino Uno is used as the processing unit of the robot, and a Sharp distance measurement sensor and ultrasonic sensors are used to take inputs from the environment. The robot trains a neural network based on a feedforward backpropagation algorithm, using a truth table, to detect and avoid obstacles and potholes. Our experimental results show that the developed system can reliably detect and avoid obstacles and potholes and navigate its environment.
An earthquake is a tremor felt on the surface of the earth created by the movement of the major pieces of its outer shell. Many attempts have been made to forecast earthquakes, with some success, but these models are specific to a region. In this paper, an earthquake occurrence and location prediction model is proposed. After reviewing the literature, long short-term memory (LSTM) was found to be a good option for building the model because of its memory-keeping ability. Using the Keras tuner, the best model was selected from candidate models composed of combinations of various LSTM architectures and dense layers. This model used seismic indicators from the earthquake catalog of Bangladesh as features to predict earthquakes in the following month. An attention mechanism was added to the LSTM architecture to improve the model's earthquake occurrence prediction accuracy, which reached 74.67%. Additionally, a regression model was built using LSTM and dense layers to predict the earthquake epicenter as a distance from a predefined location, with a root mean square error of 1.25.
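The supervised framing described above, where past monthly seismic indicators predict the following month, can be sketched as a simple windowing step before the data is fed to an LSTM; the indicator vectors and lookback length here are placeholders.

```python
def make_windows(monthly_indicators, lookback):
    """Turn a chronological list of seismic-indicator vectors into
    (input window, next-month target) pairs - the supervised framing
    used for sequence models such as LSTM."""
    X, y = [], []
    for t in range(lookback, len(monthly_indicators)):
        X.append(monthly_indicators[t - lookback:t])  # past months
        y.append(monthly_indicators[t])               # following month
    return X, y
```

Each X[i] is a lookback-month sequence and y[i] the month immediately after it; for classification, y would instead hold an occurrence label, and for the regression model a distance from the predefined location.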
This article describes the architecture and service enablers developed in the NIMO project. Furthermore, it identifies future challenges and knowledge gaps in upcoming ICT service development for public sector units, empowering citizens with enhanced tools for interaction and participation. We foresee crowdsourced applications in which citizens contribute dynamic, timely, and geographically spread information.
An Internet-of-Things (IoT)-Belief Rule Base (BRB) hybrid system is introduced to assess Autism Spectrum Disorder (ASD). This smart system can automatically collect sign and symptom data from various autistic children in real time and classify them. The BRB subsystem incorporates knowledge representation parameters such as rule weight, attribute weight, and degree of belief. The IoT-BRB system classifies children with autism based on the signs and symptoms collected by the pervasive sensing nodes. The classification results obtained from the proposed IoT-BRB smart system are compared with those of fuzzy and expert-based systems, and the proposed system outperforms the state-of-the-art fuzzy system and expert system.
Distributed ledgers and blockchain technologies can improve system security and trustworthiness by providing immutable replicated histories of data. Blockchain is a linked list of blocks containing digitally signed transactions, a cryptographic hash of the previous block, and a timestamp stored in a decentralized and distributed network. The Internet of Things (IoT) is one of the application domains in which security based on blockchain is discussed. In this article, we review the structure and architectures of distributed IoT systems and explain the motivations, challenges, and needs of blockchain to secure such systems. However, there are substantial threats and attacks to blockchain that must be understood, as well as suitable approaches to mitigate them. We, therefore, survey the most common attacks to blockchain systems and the solutions to mitigate them, with the objective of assessing how malicious these attacks are in the IoT context.
Decentralization is essential when trust and performance must not depend on a single organization. Distributed Ledger Technologies (DLTs) and Distributed Hash Tables (DHTs) are examples, where the DLT is useful for transactional events and the DHT for large-scale data storage. The combination of these two technologies can meet many challenges. The blockchain is a DLT with an immutable history protected by cryptographic signatures in data blocks. Identification is an essential issue, traditionally provided by centralized trust anchors. Self-sovereign identities (SSIs) are proposed decentralized models in which users can control and manage their identities with the help of a DHT. However, slowness is a challenge for decentralized identification systems because of the many connections and requests among participants. In this article, we focus on decentralized identification by DLT and DHT, where users can control their information and store biometrics. We survey some existing alternatives and address the performance challenge by comparing different decentralized identification technologies based on execution time and throughput. We show that the DHT and machine learning model (BioIPFS) performs better than other solutions such as uPort, ShoCard, and BBID.
Video conferencing applications help people communicate via the Internet and provide a significant and consistent basis for virtual meetings. However, integrity, security, identification, and authentication problems are still universal. Current video conferencing technologies typically rely on cloud systems to provide a stable and secure basis for executing tasks and processes. At the same time, video conferencing applications are migrating from centralized to decentralized solutions for better performance without the need for third-party interactions. This article demonstrates a decentralized smart identification scheme for video conferencing applications based on biometric technology, machine learning, and a distributed hash table combined with blockchain technology. We store users' information on a distributed hash table and transactional events on the distributed ledger after identifying users with machine learning functions. Furthermore, we leverage distributed ledger technology's immutability and traceability properties and the distributed hash table's unlimited storage feature to improve the system's storage capacity and immutability, evaluating three possible architectures. The experimental results show that an architecture based on blockchain and a distributed hash table is more efficient but needs a longer execution time than the two other architectures, which use a centralized database.
Blockchain technology has enabled the keeping of a decentralized, tamper-proof, immutable, and ordered ledger of transactional events. Efforts to leverage such a ledger may be challenging when data storage requirements exceed the current capacities of most blockchain protocols. Storing large amounts of decentralized data while maintaining system efficiency is the challenge that we target. This paper proposes using the IPFS distributed hash table (DHT) technology to store information immutably and in a decentralized manner, mitigating the high cost of storage. A storage system involving blockchain and other storage systems in concert should be based on immutable data and allow removal of data from malicious users in the DHT. Efficiency is improved by decreasing the overall processing time in the blockchain with the help of DHT technology and by introducing an agreement service that communicates with the blockchain via a RESTful API. We demonstrate the applicability of the proposed method and conclude that the combination of IPFS and blockchain provides efficient cryptographic storage, an immutable history, and overall better efficiency in a decentralized manner.
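The immutability that IPFS contributes comes from content addressing: an object's address is the hash of its bytes, so changing the data changes the address. A toy sketch of this principle follows (this is not the IPFS API; the class and method names are illustrative).

```python
import hashlib
import json

class ContentStore:
    """Toy content-addressed store in the spirit of IPFS: data is keyed
    by the SHA-256 hash of its content, so any change yields a new
    address and stored objects are effectively immutable."""

    def __init__(self):
        self._blocks = {}

    def put(self, obj):
        """Store an object and return its content address (CID-like key),
        which a blockchain transaction would record instead of the data."""
        data = json.dumps(obj, sort_keys=True).encode()
        cid = hashlib.sha256(data).hexdigest()
        self._blocks[cid] = data
        return cid

    def get(self, cid):
        """Retrieve an object by its content address."""
        return json.loads(self._blocks[cid])
```

Keeping only the short content address on-chain while the bulk data lives in the DHT is what lets such a hybrid design cut blockchain storage cost without giving up tamper evidence: if the stored bytes were altered, they would no longer hash to the recorded address.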
User identification in decentralized systems is a demanding task. Identification systems should work resiliently and perform efficiently. Moreover, they should protect the data they must store against hackers and saboteurs. Keeping identification decentralized, without any intermediary, has attracted attention as an improvement over earlier centralized identification systems. Decentralized Identifiers (DIDs) constitute a solution for identification divided into different modules. The verifiable data registry, one of the main parts of this technology, is a distributed store of identity properties. We analyze the decentralized identification data registry and compare the performance of verifiable data registries based on blockchain and on the Distributed Hash Table (DHT) at different system scales. Our evaluation results show that the DHT has better performance: in addition to immutable storage, it offers faster query times, letting systems search data storage in less time than the Ethereum blockchain, another immutable and secure technology. Finally, our results show that the DHT is a better solution than the other models in different scenarios. Although blockchain shows promising results on a small scale, it still has problems with storage and query time in large-scale systems.
Evolving traceability requirements increasingly challenge manufacturing supply chain actors to collect tamper-proof and auditable evidence about what inputs they process, in what way these inputs are used, and what the resulting process outputs are. Traceability solutions based on blockchain technology have shown ways to satisfy the requirements of creating a tamper-proof and auditable trail of traceability data. However, the existing solutions struggle to meet the increasing storage requirements necessary to create an evidence trail using manufacturing data. In this paper, we show a way to create a tamper-proof and auditable evolving product story that uses a decentralized file system called the InterPlanetary File System (IPFS). We also show how using linked data can help auditors derive a traceable product story from such an accumulating evidence trail. The solution proposed herein can supplement existing blockchain-based traceability solutions and enable traceability in global manufacturing supply chains where forming a consortium incurs prohibitive costs and where storage requirements are high.
Assigned as expert reviewer in the national quality assessment of higher education in the area of Computer Science/ICT/Media technology.
Board of the Faculty of Science and Technology and its research strategy committee
Member of the election committee of the postgraduate students' association
Opponent/External Reviewer for Licentiate thesis. Tahir Nawaz Minhas: "Network Impact on Quality of Experience of Mobile Video". Blekinge Institute of Technology, Karlskrona, Sweden on March 28, 2012
Vice chairman and member of the board of the postgraduate students' association
Demands from users of future generation networks will include seamless access to services across multiple wireless access networks. Handsets and laptops will typically be equipped with several network interfaces and have enough computing power and battery capacity to connect to several access networks simultaneously, within and across administrative domains, in a multi-homed manner. Existing technologies and solutions solve parts of the problems raised in such a scenario. However, an important remaining research challenge is to find an optimal mobility management solution offering terminal mobility as well as session, service, and personal mobility. Furthermore, a uniform and scalable mechanism for quality of service provisioning needs to be defined and integrated into an overall architecture offering differentiated handling of real-time and non-real-time flows. The architecture will be based on the Internet Protocol as its least common denominator, but there are still many research questions to be answered. This article surveys existing solutions and outlines future work in the area of cross-layer designed mobility management solutions and integrated quality of service support using a cross-layer design and policy-based approach.
Fourth generation (4G) wireless systems, targeting 100 Mb/s for highly mobile scenarios and 1 Gb/s for low-mobility communication, are soon to be deployed on a broad basis, with LTE-Advanced and IEEE 802.16m as the two candidate systems. Traditional applications spanning everything from voice, video, and data to new machine-to-machine (M2M) applications with billions of connected devices transmitting sensor data will soon use these networks. Still, interworking solutions integrating these new 4G networks with existing legacy wireless networks are important building blocks for achieving cost-efficient solutions, offering smooth migration paths from legacy systems, and providing means for load balancing among different radio access technologies. This article categorizes and analyzes different interworking solutions for heterogeneous wireless networks and provides suggestions for further research.
Presents the introductory welcome message from the conference proceedings. May include the conference officers' congratulations to all involved with the conference event and publication of the proceedings record.
The aim of this thesis is to define a solution offering end-users seamless mobility in a multi-radio access technology environment. Today an increasing portion of cell phones and PDAs have more than one radio access technology, and wireless access networks of various types are commonly available with overlapping coverage. This creates a heterogeneous network environment in which mobile devices can use several networks in parallel. In such an environment the device needs to select the best network for each application in order to use the available networks wisely; selecting the best network for individual applications constitutes the core problem. The thesis proposes a host-based solution for access network selection in heterogeneous wireless networking environments. Host-based solutions use only information available in mobile devices and are independent of information available in the networks to which these devices are attached. The host-based decision mechanism proposed in this thesis takes a number of constraints into account, including network characteristics and mobility patterns in terms of the movement speed of the user. The thesis also proposes a solution for network-based mobility management, in contrast to the other proposals, which use a host-based approach. Finally, this thesis proposes an architecture supporting mobility for roaming users in heterogeneous environments that avoids the need for scanning the medium when performing vertical handovers. Results include reduced handover latencies achieved by allowing hosts to use multihoming, bandwidth savings on the wireless interface by removing the tunneling overhead, and handover guidance through the use of directory-based solutions instead of scanning the medium. User-perceived quality of voice calls measured on the MOS (Mean Opinion Score) scale shows no or very little impact from the mobility support procedures proposed in this thesis.
Results also include simulation models, real-world prototypes, and testbeds that could all be used in future work. The proposed solutions in this thesis are mainly evaluated using simulations and experiments with prototypes in live testbeds. Analytical methods are used to complement some results from simulations and experiments.
This thesis proposes and evaluates architectures and algorithms for access network selection in heterogeneous networking environments. The ultimate goal is to select the best access network at any time, taking a number of constraints into account, including user requirements and network characteristics. The proposed architecture enables global roaming between access networks within an operator's domain, as well as across operators, without requiring any changes in the data and control planes of the access networks. The proposed architecture also includes an algorithm for measuring the performance of access networks that can be used on a number of access technologies, whether wired or wireless. The proposed access network selection algorithm also has an end-to-end perspective, giving a network performance indication for the user traffic being communicated. The contributions of this thesis include an implementation of a simulation model in OPNET Modeler, a proposal of a network-layer metric for heterogeneous access networks, an implementation of a real-world prototype, a study of the perceived quality of service of multimedia applications, an access network selection algorithm for highly mobile users and vehicular networks, and an extension of that algorithm to support cross-layer decision making taking application-layer and datalink-layer metrics into account.
IP mobility management, allowing handover and location management to be handled at the network layer, has been around for two decades or so now. A number of protocols have been proposed, with the GPRS Tunneling Protocol (GTP), Mobile IP, Proxy Mobile IP, and the Locator/Identifier Separation Protocol (LISP) being the best known and most discussed in the research literature. This paper overviews existing solutions, suggests a new distributed and dynamic mobility management scheme, and evaluates it against existing static approaches.
Mobility management is today handled at various layers of the network stack, including the datalink layer, the network layer, the transport layer, and the application layer. Cross-layer designed solutions also exist; the best-known example is the work performed by IEEE, currently standardizing media-independent handover services under the name IEEE 802.21. This paper proposes a new interworking scheme for Mobile IP and the Session Initiation Protocol, the most popular solutions for mobility management at the network and application layers, respectively. The goal is to deliver seamless mobility for both TCP-based and UDP-based applications by taking the best features from each mobility management scheme. In short, this paper proposes that TCP connections are handled through Mobile IP, while UDP-based connection-less applications may use the Session Initiation Protocol for handling mobility. The two mobility management solutions are integrated into one common solution.