Machine Learning (ML), and especially the use of Neural Networks (NNs), is growing in popularity across numerous application areas. One such area is the use of a trained NN as an interatomic potential in Atomistic Simulations (AS); an NN applied in this manner is referred to as a Neural Network Potential (NNP).
A well-established method for atomistic calculations is first-principles Density Functional Theory (DFT). DFT can model properties of nanomaterials very precisely, but for large systems of atoms it is not a feasible method because of its heavy computational load. NNPs enable accurate simulations of large atomic systems at a reasonably low computational cost.
Previous work by students at Luleå University of Technology (LTU), where NNs were trained on fullerenes and carbon nanotubes (CNTs), demonstrated promising results. The NNs, trained using the Atomistic Machine-Learning Package (AMP), managed to predict energies with considerable accuracy (100 meV/atom), but the force predictions were problematic and did not reach the desired accuracy (the force RMSE was 6 eV/Å). Attempts to run AS such as Molecular Dynamics (MD) and Geometry Optimization (GO) were unsuccessful, likely due to the poor representation of forces.
This work aims to improve the performance of NNs on carbon materials by studying diamond structures using AMP, such that working AS can be achieved. This was done in two stages: first, a feasibility study was made to find appropriate hyperparameters; then, NNs were trained with the hyperparameters found. Two types of feature-mapping descriptors were considered, Gaussian and Zernike. The trained NNs were used as NNPs to perform MD and GO simulations as a means of evaluation. The NNPs were also used to calculate the phonon dispersion curve of diamond.

The trained NNPs managed to perform AS and calculate the phonon dispersion curve with varying success. The best-performing NN, trained on 333 super-cells of diamond, reached an accuracy of 120 meV/atom when predicting energies and 640 meV/Å when predicting forces. An NNP trained with Gaussian descriptors turned out to be 10 times faster than the DFT reference when performing a single step of a GO. The phonon dispersion curve produced by the Gaussian NNP displayed a striking resemblance to the DFT reference, whereas the curves produced by the Zernike NNP were distorted and contained a great deal of imaginary frequencies, although the correct amplitude was reached.

The Gaussian NNPs trained in this work turned out to be faster and better in almost all regards than the Zernike alternative. The only case where the Zernike descriptors outperformed the Gaussian ones was the total energy reached in a GO simulation applying the NNPs from the study: compared to DFT, the Zernike error was 0.26 eV (0.05%) and the Gaussian error was 0.855 eV (0.17%). MD simulations using the NNPs worked well for the Gaussian variant but not for the Zernike. With the AS up and running (at least for the Gaussian NNP), the next step is either to improve the performance on diamond structures or to include more carbon materials, such as CNTs and fullerenes, in the studies.
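The Gaussian descriptors mentioned above map each atom's local environment to a fixed-length feature vector. As a minimal illustration (a generic Behler-Parrinello-style radial symmetry function with a cosine cutoff; the function name and parameter values are illustrative, not AMP's actual implementation), such a descriptor component can be sketched as:

```python
import numpy as np

def cutoff(r, rc):
    # Cosine cutoff function: decays smoothly to zero at r = rc,
    # and is exactly zero beyond the cutoff radius.
    return np.where(r < rc, 0.5 * (np.cos(np.pi * r / rc) + 1.0), 0.0)

def g2_descriptor(distances, eta, rs, rc):
    # Radial (G2) symmetry function for one atom, given the distances
    # r_ij to its neighbours:
    #   G2 = sum_j exp(-eta * (r_ij - rs)^2) * fc(r_ij)
    r = np.asarray(distances, dtype=float)
    return float(np.sum(np.exp(-eta * (r - rs) ** 2) * cutoff(r, rc)))

# Hypothetical example: the four first-neighbour distances of an atom
# in diamond (bond length about 1.54 Å), with an illustrative eta.
neighbours = [1.54, 1.54, 1.54, 1.54]
value = g2_descriptor(neighbours, eta=0.5, rs=0.0, rc=6.5)
```

A set of such values for several (eta, rs) pairs forms the input vector fed to the NN for each atom; the Zernike alternative instead expands the neighbour density in Zernike basis functions.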
Biological sensor systems are remarkably robust and power-efficient systems that solve complex pattern-recognition problems. Neuromorphic engineering concerns the design of very-large-scale integration (VLSI) systems with power-efficient analog circuits that mimic biological sensors and neural systems. In this work, we examine how dynamical models of spiking neural networks (SNNs) and the low-power neuromorphic processor Dynap-se, developed at iniLabs in Zurich, can be used in a pattern-recognition application. We implement and investigate a training protocol for signal classification with an SNN model incorporating the same neuron and synapse models as those implemented in Dynap-se. We use the model to classify sampled vibration signals generated in healthy and faulty states of a wind turbine. We investigate two different methods for converting an analog signal to spikes: a software delta-modulator, and a neuromorphic sensor system known as the Dynamic Audio Sensor (DAS) from iniLabs. The SNN-based classifier is tested on 10 pairs of healthy and faulty signals not included in the training set, and we achieve 90% classification test accuracy using the delta-modulator. The classifier is trained on a rather small dataset; larger training and test sets are needed to increase the performance and the reliability of the results. For the delta-modulator stimuli, the model needs to be further evaluated using, for instance, cross-validation. For the DAS stimuli, the classifier is not functioning well in its current state; possibly, this can be improved by modifying the network architecture. A prototype training protocol for Monte Carlo-based synaptic configuration of Dynap-se is developed using a hardware-in-the-loop approach. The protocol enables optimization of a given cost function, and thus has potential to be further developed for the optimization of neural networks implemented in Dynap-se.
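The software delta-modulator mentioned above converts a sampled analog signal into a stream of ON/OFF spike events, emitting a spike whenever the signal has changed by more than a fixed threshold since the last event. A minimal sketch of this idea (the threshold value and function name are illustrative assumptions, not the thesis implementation):

```python
import numpy as np

def delta_modulate(signal, threshold):
    # Convert a sampled analog signal into ON/OFF spike events.
    # An ON (+1) event is emitted each time the signal has risen by at
    # least `threshold` since the tracked reference level; an OFF (-1)
    # event each time it has fallen by at least `threshold`.
    events = []                      # list of (sample_index, polarity)
    reference = signal[0]
    for i, x in enumerate(signal[1:], start=1):
        while x - reference >= threshold:
            reference += threshold
            events.append((i, +1))
        while reference - x >= threshold:
            reference -= threshold
            events.append((i, -1))
    return events

# Illustrative use: one period of a sine wave sampled at 100 points.
t = np.linspace(0.0, 2.0 * np.pi, 100)
spikes = delta_modulate(np.sin(t), threshold=0.1)
```

The resulting event stream can then be fed to the SNN's input synapses; the DAS performs an analogous conversion in analog hardware, with band-pass filtering in place of the simple threshold tracking sketched here.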