Accepted Papers


Tool Condition Monitoring Method In Milling Process Using Wavelet Transform And Long Short-Term Memory
Fatemeh Aghazadeh, Antoine Tahan and Marc Thomas, École de technologie supérieure (ÉTS), Canada
ABSTRACT
Industrial automation is a promising way to meet the demands of today's competitive manufacturing industry by lowering operating costs and increasing productivity and quality. Monitoring the production process is an important step toward full autonomy of manufacturing plants: it reduces routine checks, enables proactive maintenance and lowers repair costs. This research investigates tool wear, one of the most common faults in the milling process, during cutting of the D2 high-speed steel, a hard-to-cut material, with a Walter Protostar carbide end mill. The vibration signal is chosen to represent the system status because of its applicability in industry. Signals are transformed into the time-frequency domain using the wavelet transform to reveal time-domain and frequency-domain features of the signal simultaneously. To model the complex, non-linear relations between tool wear and vibration signals under varying cutting parameters, a deep learning algorithm, the Long Short-Term Memory (LSTM) artificial neural network (ANN), is employed. Deep learning algorithms have recently attracted considerable attention in the diagnosis and prognosis community because of their exceptional ability to exploit the information in big data to solve complex problems. The LSTM network is a type of recurrent ANN whose internal cells act as long-term or short-term memory units, which makes it well suited to sequential data and time series such as the vibration signals in our analysis. After designing the system, the performance of the monitoring method is validated using data acquired experimentally on a Huron K2X10 high-speed CNC machine in the LIPPS and Dynamo labs of ÉTS.
KEYWORDS

Deep Learning, Tool Wear, Wavelet Transform, Condition Monitoring, Time-Frequency Transformation, Machining Process.
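The time-frequency feature-extraction step the abstract describes can be illustrated with a minimal sketch. The paper does not state which wavelet family it uses; the Haar wavelet below, the per-level energy features, and the synthetic vibration signal are all illustrative assumptions, not the authors' pipeline (which feeds such features to an LSTM).

```python
import numpy as np

def haar_dwt(signal):
    """One level of the Haar discrete wavelet transform:
    returns (approximation, detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    a = (s[0::2] + s[1::2]) / np.sqrt(2)  # low-pass: trend
    d = (s[0::2] - s[1::2]) / np.sqrt(2)  # high-pass: detail
    return a, d

def wavelet_features(signal, levels=3):
    """Stack per-level detail-band energies as a feature vector.
    Signal length should be divisible by 2**levels."""
    feats = []
    a = np.asarray(signal, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.sum(d ** 2))  # energy of this detail band
    feats.append(np.sum(a ** 2))      # residual approximation energy
    return np.array(feats)

# Synthetic "vibration" signal: a low-frequency tone plus noise
t = np.linspace(0, 1, 1024, endpoint=False)
x = np.sin(2 * np.pi * 8 * t) + 0.1 * np.random.default_rng(0).normal(size=t.size)
print(wavelet_features(x, levels=3).shape)  # (4,)
```

The Haar transform conserves energy across the approximation and detail bands, so the per-level energies form a compact time-frequency signature of the kind a downstream sequence model can consume.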


Artificial Intelligence-Based Process For Metal Scrap Sorting
Maximilian Auer, Kai Oßwald, Raphael Volz and Jörg Woidasky, Pforzheim University, Germany.
ABSTRACT
Machine learning offers remarkable benefits for improving workplaces and working conditions, among others in the recycling industry, where, for example, hand-sorting of medium-value scrap is labor-intensive and requires experienced, skilled workers. On the one hand, these workers must stay highly concentrated to read and analyse the material properly; on the other hand, the work is monotonous. Therefore, a machine learning approach is proposed for quick and reliable automated identification of alloys in the recycling industry, while the physical handling of the scrap remains in the hands of the workers. To this end, a set of twelve tool and high-speed steels from the field was selected to be identified by the spectrum induced by electric arcs. For data acquisition, the Thorlabs CCS 100 optical emission spectrometer was used. Spectra were post-processed before being fed into the supervised machine learning algorithm. The development of the machine learning software followed the steps of the VDI 2221 standard method. The software was implemented in Python 3 using the scikit-learn library. By systematic parameter variation, an appropriate machine learning algorithm was selected and validated. Subsequent validation steps showed that automated identification using machine learning and optical emission spectrometry is applicable, reaching a maximum F1 score of 96.9%. This performance matches that of a highly trained worker using visual grinding-spark identification. The tests were based on a self-generated set of 600 spectra per alloy (7,200 spectra in total), produced using an industry workshop device.
KEYWORDS

Supervised Learning, Spectroscopy, Metal scrap recycling
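As a rough illustration of the classification step, the sketch below builds synthetic one-peak "spectra" for two hypothetical alloy classes and scores a nearest-centroid classifier with the F1 measure. The paper used scikit-learn and real arc-induced spectra of twelve steels; the data generator, the nearest-centroid stand-in and every numeric value here are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)

def make_spectra(n, peak, n_bins=128, noise=0.05):
    """Synthetic emission spectra: one Gaussian peak per alloy class."""
    bins = np.arange(n_bins)
    base = np.exp(-0.5 * ((bins - peak) / 3.0) ** 2)
    return base + noise * rng.normal(size=(n, n_bins))

# Two hypothetical alloy classes with peaks at different wavelengths
X_train = np.vstack([make_spectra(50, peak=40), make_spectra(50, peak=80)])
y_train = np.array([0] * 50 + [1] * 50)
X_test = np.vstack([make_spectra(20, peak=40), make_spectra(20, peak=80)])
y_test = np.array([0] * 20 + [1] * 20)

# Nearest-centroid: assign each test spectrum to the closest class mean
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in (0, 1)])
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
y_pred = dists.argmin(axis=1)

# F1 score for class 1, the metric the paper reports
tp = np.sum((y_pred == 1) & (y_test == 1))
fp = np.sum((y_pred == 1) & (y_test == 0))
fn = np.sum((y_pred == 0) & (y_test == 1))
f1 = 2 * tp / (2 * tp + fp + fn)
print(f"F1 for class 1: {f1:.3f}")
```

In practice one would replace the stand-in classifier with a scikit-learn estimator and select its hyperparameters by systematic variation, as the abstract describes.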


Artificial Intelligence Surpassing Human Intelligence: Factual or Hoax
Sana Khanam, Jamia Hamdard University, India.
ABSTRACT
AI is a trending topic in computer science that aims to make computers "smart". A diverse body of technical and specialized research is associated with artificial intelligence, and every day we hear new stories of AI making progress in yet another industry. Artificial intelligence has already taken over many human jobs and often performs them more efficiently and effectively than humans. However, one question keeps arising: will artificial intelligence surpass human intelligence? Is computers' ever-accelerating ability to outpace human skills a matter of concern? The different views and myths on the subject have made it an even hotter topic of discussion. This research studies, analyzes, collects and summarizes existing research and future prospects on the topic. We then discuss the possibility that AI will eventually replace human jobs in the market. We also analyze different types of AI and discuss whether machines will ultimately surpass human beings.


Feedback Control of a Lower Limb Exoskeleton System by Determining Optimal LQR Weighting Matrices
Jatin Gupta1, Rituparna Datta2, Arun Kumar Sharma1 and Bishakh Bhattacharya1, 1Department of Mechanical Engineering, Indian Institute of Technology (IIT) Kanpur 208016, India, 2Department of Computer Science, University of South Alabama, Mobile, AL 36688, USA.
ABSTRACT
The present work discusses optimal feedback control of a lower limb exoskeleton by determining optimal LQR weighting matrices. A simplified model of human gait with four degrees of freedom is developed to describe the dynamics of the Single Support Phase (SSP) of the gait cycle. The system is linearized about a reference trajectory so that a linear optimal controller can be applied. It is observed that the choice of weighting matrices is critical to controller performance. Instead of the conventional diagonal weighting matrices, a more general symmetric weighting matrix is used in this study. An optimization problem is formulated to find the optimal weighting matrices by minimizing the tracking error of the joint angles. The Non-dominated Sorting Genetic Algorithm (NSGA)-II is then used to solve the resulting multi-objective constrained optimization problem.
KEYWORDS

Linear Quadratic Regulator, Exoskeleton, NSGA-II, Human gait
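A minimal sketch of the underlying LQR machinery may help. The paper linearizes a four-degree-of-freedom gait model and searches over symmetric weighting matrices with NSGA-II; the discrete double-integrator plant, the Riccati value iteration, and the example Q matrices below are illustrative assumptions, showing only how a (possibly non-diagonal) Q changes the closed-loop tracking cost.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via Riccati value iteration:
    K = (R + B'PB)^-1 B'PA,  P <- Q + A'P(A - BK)."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Double integrator (position, velocity): a stand-in for one linearized joint
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt * dt], [dt]])
R = np.array([[1.0]])

def tracking_cost(Q, x0=np.array([1.0, 0.0]), steps=300):
    """Regulate from x0 and accumulate the squared position error."""
    K = dlqr(A, B, Q, R)
    x, cost = x0.copy(), 0.0
    for _ in range(steps):
        x = (A - B @ K) @ x
        cost += x[0] ** 2
    return cost

# A non-diagonal symmetric Q is admissible as long as it remains positive
# semi-definite; the paper searches over such matrices with NSGA-II.
Q_diag = np.diag([10.0, 1.0])
Q_full = np.array([[10.0, 1.0], [1.0, 1.0]])  # symmetric, positive definite
print(tracking_cost(Q_diag), tracking_cost(Q_full))
```

The off-diagonal entries of Q couple position and velocity errors in the cost, which is exactly the extra freedom a general symmetric weighting matrix gives the optimizer over a diagonal one.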


Condition Based Maintenance of Turbine and Compressor of a CODLAG Naval Propulsion System using Deep Neural Network
Palash Pal1, Rituparna Datta2, Aviv Segev2 and Alec Yasinsac2, 1University Institute of Technology, Burdwan University, West Bengal, India, 2Department of Computer Science, University of South Alabama, 150 Jaguar Drive, Mobile, AL 36688, USA
ABSTRACT
System and sub-system maintenance is a significant task for every dynamic system. A plethora of approaches, both quantitative and qualitative, have been proposed to ensure system safety and to minimize system downtime. The rapid progress of computing technologies and machine learning makes it possible to integrate complex machine learning techniques with maintenance strategies to predict system maintenance in advance. The present work analyzes different ways of integrating an Artificial Neural Network (ANN), and an ANN combined with Principal Component Analysis (PCA), to model and predict the compressor and turbine decay state coefficients of a Gas Turbine (GT) mounted on a frigate characterized by a Combined Diesel-Electric and Gas (CODLAG) propulsion plant used in naval vessels. The inputs are GT parameters and the outputs are the GT compressor and turbine decay state coefficients. Because of the large number of inputs, more hidden layers are required, so a deep neural network is found appropriate. The simulation results confirm that most of the proposed models successfully predict the decay state coefficients of the naval propulsion gas turbine. The results show that a steadily decreasing hidden-layer size, proportioned between the input and output dimensions, outperforms the other neural network architectures. In addition, the plain ANN outperforms the hybrid PCA-ANN in most cases. The ANN architecture design may be relevant to other predictive maintenance systems.
KEYWORDS

Condition based maintenance, Neural network, Deep neural network, Principal Component Analysis (PCA), Naval propulsion


Research On Gait Prediction Based On LSTM
Bo Fan Liang and Q. Chen, Beijing Information Science and Technology University, China.
ABSTRACT
As the proportion of the aging population increases, the protection and assistance of older persons has become an important social issue. Falls account for a large share of safety problems among the elderly, so fall prediction is very important. Falls are mainly characterized by abnormal gait. Gait is strongly periodic: each step is completed in one cycle, and each cycle is repeatable. In this paper, a gait prediction method is proposed. First, the lumbar posture of the human body is measured with an accelerometer-gyroscope unit and used as the gait feature; then the gait is predicted with an LSTM network. The experimental results show that the RMSE between the predicted gait trend and the actual gait trend reaches a level of 0.06 ± 0.01.
KEYWORDS

Elderly Fall, Acceleration Gyroscope, Lumbar Posture, Gait Prediction, LSTM


Evidence for the Correlation between Conflict Risk Indicators GCRI and FSI using Deep Learning
Vera Kamp1, JP Knust1, Reinhard Moratz1,2, Kevin Stehn1 and Sören Stöhrmann1, 1data42 GmbH, Gotenstrasse 18, Hamburg, Germany, 2Institute for Geoinformatics, University of Münster, Germany
ABSTRACT
Data mining enables an innovative, largely automatic meta-analysis of the relationship between political and economic geography analyses of crisis regions. As an example, two approaches, the Global Conflict Risk Index (GCRI) and the Fragile States Index (FSI), can be related to each other. The GCRI is a quantitative conflict risk assessment based on open-source data and a statistical regression method, developed by the Joint Research Centre of the European Commission. The FSI is based on a conflict assessment framework developed by The Fund for Peace in Washington, DC. In contrast to the quantitative GCRI, the FSI is essentially focused on qualitative data. The two approaches thus have closely related objectives but very different methodologies and data sources. The hope is that these complementary approaches can be combined into an even more meaningful meta-analysis, that contradictions can be discovered, or that the approaches can be validated where they agree. We propose an approach to automatic meta-analysis that makes use of machine learning (data mining). Such a procedure represents a novel approach to the meta-analysis of conflict risk analysis.
KEYWORDS

Data Science, Deep Learning, Conflict Risk Prediction


Automated Advanced Remote-Control Car System
Amit Dutta1 and Dr. B.K Daas2, 1BRAC University, Department of Computer Science and Engineering, Mohakhali, Dhaka 1212, Bangladesh, 2Product Development Engineer, Intel, OR 97229, USA
ABSTRACT
Our work is based on an Arduino, a motor driver, a Bluetooth module, Firebase and a sonar sensor. Arduino is an open-source platform that makes it easy to work between hardware and software; it uses the ATmega328 microcontroller [2]. We present an automated advanced remote-control car that is controlled by an Android application through a server and also has a camera for live broadcasting. Firebase receives data from the web application, stores them, and updates them in our Android application. The robotic car is controlled using these data and can also avoid obstacles. All in all, the project is an IoT-based robotic car controlled over the internet using the Firebase Realtime Database. It can be built at a larger scale for real-time vehicles [2].
KEYWORDS

Automation, IoT RC Car, Bluetooth, Arduino, Google Firebase, Realtime Database.


An Advising System For Parking Using Canny And K-NN Techniques
Chyi-Ren Dow, Wei-Kang Wang, Huu-Huy Ngo and Shiow-Fen Hwang, Feng Chia University, Taiwan.
ABSTRACT
This study proposes a system that characterizes parking behaviour and provides an application service platform to assist drivers in selecting a parking space. The system identifies the contours of vehicles such as cars and motorcycles using the Canny algorithm. These data are used to build the dataset and calculate the parking density. Next, the k-nearest neighbor (K-NN) algorithm is used to produce the parking pattern, and the model makes predictions for different conditions at different times. We also analyze the parking hotspots at each parking location, as well as the popular parking periods.
KEYWORDS

Parking Space, Big Data of Traffic, k-Nearest Neighbor, Canny Edge Detection.


Probability-Directed Problem Optimization Technique for Solving Systems of Linear and Non-Linear Equations
Muhammed J. Al-Muhammed, American University of Madaba, Jordan.
ABSTRACT
Although many methods have been proposed for solving linear or nonlinear systems of equations, there is always a pressing need for more effective and efficient methods that produce solutions with high precision and speed. This paper proposes an innovative method for solving systems of linear and nonlinear equations. The method transforms the problem into an optimization problem and uses a probability-guided search technique to solve it; the solution of the optimization problem is the solution of the system of equations. The transformation yields an aggregate violation function and a criterion function. The aggregate violation function is composed of the constraints that represent the equations, whose satisfaction constitutes a solution of the system. The criterion function intelligently guides the search by determining when the constraints must be checked, thereby avoiding unnecessary, time-intensive constraint checks. Experiments conducted with our prototype implementation showed that the method is effective in finding solutions with high precision and efficient in terms of CPU time.
KEYWORDS

Solutions for systems of linear and non-linear equations; random-guided search; optimization problem; global minimum.
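The transformation described above can be sketched as follows: the equations become residuals, their squared sum forms the aggregate violation function, and a shrinking-step random search accepts only improving moves. The small two-equation system, the step schedule and all constants are illustrative assumptions; the paper's criterion function for scheduling constraint checks is not reproduced here.

```python
import numpy as np

def violation(x):
    """Aggregate violation: sum of squared residuals of the system
        x0 + x1 - 3 = 0
        x0**2 - x1 = 0
    Its global minimum (zero) is a solution of the system."""
    r1 = x[0] + x[1] - 3.0
    r2 = x[0] ** 2 - x[1]
    return r1 * r1 + r2 * r2

def random_guided_search(f, dim=2, iters=20000, step=0.5, seed=1):
    """Accept a random perturbation only when it lowers the violation;
    periodically shrink the step size to refine the search."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, dim)
    best = f(x)
    for i in range(iters):
        cand = x + step * rng.normal(size=dim)
        val = f(cand)
        if val < best:
            x, best = cand, val
        if i % 2000 == 1999:
            step *= 0.5  # narrow the search around the current best point
    return x, best

x, best = random_guided_search(violation)
print(x, best)
```

Driving the aggregate violation to (near) zero means every residual, and hence every original equation, is (near) satisfied simultaneously.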


A Novel Method To Prevent Phishing
Yunjia Wang and Ishbel Duncan, University of St Andrews, UK
ABSTRACT
Phishing is one of the most common attacks in the world, especially with the increasing use of mobile platforms and e-commerce. Although many users are alert to phishing attacks with suspicious paths in the URL address, phishing still accounts for a large proportion of all malicious attacks because it is easy to deploy. Most browser vendors mainly adopt two approaches against phishing: blacklist-based and heuristic-based. However, both have limitations. In this paper, a novel method is presented and developed to protect against phishing attacks. An easy-to-implement prototype demonstrated highly accurate detection in the experimental trials.
KEYWORDS

Phishing, OCR, Phishing Prevention.


Panel Analysis Of Physiological Signals: Study Of Obstructive Sleep Apnea Syndrome
Samir Ghouali1,3, Fayçal Amine Haddam2 and Mohammed Feham3, 1Mustapha Stambouli University, Algeria, 2Zhongxing Telecommunication Equipment Company, Algeria, 3STIC Laboratory, Tlemcen, Algeria.
ABSTRACT
This paper provides an overview of the main unit root tests, panel data co-integration, estimation models and the use of Granger causality in panels. This line of research has developed considerably since the pioneering work of Levin and Lin and is now applied in many empirical settings. The theoretical framework, the basis of any empirical study, lends legitimacy to our problem, as it clarifies concepts and makes it possible to define each notion. In this article, we contribute a study of sleep apnea. Non-stationary panel data estimators can solve a number of problems, including estimation and inference. To estimate co-integrated variable systems, and to perform tests on co-integration vectors, efficient estimation methods are necessary. The FM-OLS and DOLS models are used to quantify our results. The results show the long-term interaction between physiological signals and can help the physician understand the risks associated with these interactions.
KEYWORDS

FM-OLS, DOLS, Panel Granger Causality, Sleep Apnea, MATLAB.


Face Detection In Color Images Using Skin Color Detection And RGB Normalization
Aishwarya, National Institute of Technology Durgapur, India.
ABSTRACT
There are many types of biometric systems, such as fingerprint recognition and face detection and recognition. Face detection automatically identifies or verifies a person in a digital image or a video frame. It has always been a complex and difficult problem because many factors affect the appearance of the face in a picture. Face detection has been used successfully in biometrics, video surveillance, human-computer interfaces and image database management. This paper addresses the detection of different faces against different backgrounds.
KEYWORDS

RGB, Face detection.
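The RGB-normalization idea can be sketched briefly: dividing each channel by R+G+B removes illumination intensity and leaves chromaticity, after which skin pixels cluster in a small r-g region. The threshold values below are illustrative literature-style bounds, not the paper's calibrated values.

```python
import numpy as np

def normalize_rgb(img):
    """Chromaticity normalization: r = R/(R+G+B), g = G/(R+G+B).
    This discounts illumination intensity, keeping only colour."""
    img = img.astype(float)
    s = img.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0  # avoid division by zero on black pixels
    return img / s

def skin_mask(img, r_lo=0.36, r_hi=0.47, g_lo=0.28, g_hi=0.36):
    """Threshold normalized r and g; the bounds here are illustrative
    assumptions, not calibrated skin-model parameters."""
    n = normalize_rgb(img)
    r, g = n[..., 0], n[..., 1]
    return (r > r_lo) & (r < r_hi) & (g > g_lo) & (g < g_hi)

# 2x2 toy image: one skin-like pixel, one blue, one grey, one black
img = np.array([[[200, 150, 110], [10, 10, 240]],
                [[128, 128, 128], [0, 0, 0]]], dtype=np.uint8)
print(skin_mask(img))  # [[ True False] [False False]]
```

A face detector then looks for connected regions of the mask with face-like shape, rather than classifying isolated pixels.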


Brain Computer Interface For Biometric Authentication By Recording Signal
Abd Abrahim Mosslah, Reyadh Hazim Mahdi and Shokhan M. Al-Barzinji, University of Anbar, College of Islamic Science, Anbar, Iraq
ABSTRACT
Electroencephalography (EEG) records what are referred to as brainwaves, which scientists interpret as an electromagnetic phenomenon reflecting activity in the human brain. EEG is used to diagnose brain diseases such as schizophrenia, epilepsy, Parkinson's and Alzheimer's, and it is also used in brain-machine and brain-computer interfaces; in these applications, wireless recording of the waves is necessary. What we need today is authentication. Authentication can be obtained with several techniques, and in this paper we examine the efficiency of techniques such as passwords and PINs. Biometric techniques such as heart rate, fingerprint, eye mesh and voice are also used to obtain authentication, and they give acceptable results. To obtain integrated and efficient authentication, we use brainwave recording. In this paper we work to improve the recording efficiency of the brain's waves and to provide authentication.
KEYWORDS

Related work, EEG brain signal, Brain wave, Overall project outline, System requirements.


Leakage Resilient Additively Homomorphic IBE with Auxiliary Input
Zhiwei Wang and Congcong Zhu, Nanjing University of Posts and Telecommunications, China.
ABSTRACT
Additively homomorphic encryption is a relaxed notion of homomorphic encryption that enables us to compute linear functions over encrypted data. It is an efficient tool for resolving the problem of security with privacy in big data applications. Compared with additively homomorphic public-key encryption (PKE), additively homomorphic identity-based encryption (IBE) may be a better choice, since it does not need to maintain a costly public-key infrastructure (PKI). In this paper, we design a leakage-resilient additively homomorphic IBE scheme with auxiliary input to resist side-channel attacks on end users. We prove that our scheme is auxiliary-input chosen-plaintext attack (AI-CPA) secure, and we implement it on the Intel Edison platform, a resource-constrained system. Theoretical analysis and experimental results show that our scheme is well suited to aggregating data submitted by end users who are at risk of leaking their secret keys.
KEYWORDS

security with privacy; big data; additively homomorphic IBE; auxiliary input; CPA secure


A Dendritic Cell Algorithm Based Approach for Malicious TCP Port Scanning Detection
Nuha Almasalmeh1, Zouheir Trabelsi1 and Firas Saidi2, 1College of Information Technology, United Arab Emirates University, Al Ain, UAE, 2National School of Computer Sciences, University of Manouba, Tunisia
ABSTRACT
The proliferation of cyber-attacks brings an urgent need to develop sophisticated detection tools. Some of these tools are based on algorithms inspired by the Human Immune System (HIS). The Dendritic Cell Algorithm (DCA) is one such HIS-inspired method, based on the Danger model. In this paper, we apply and enhance the DCA to cover malicious TCP port scanning detection. Experiments and evaluation results on different use cases show the efficiency of the two versions of the DCA in abnormal port scanning detection.
KEYWORDS

Artificial immune systems, dendritic cell algorithm, denial of service, intrusion detection, port scanning, performance analysis.


Mitigating The Threat Of LSB Steganography Within Data Loss Prevention Systems
Yunjia Wang and Ishbel Duncan, University of St Andrews, UK.
ABSTRACT
Data Loss Prevention systems need to consider the passing of information out of an organisation through emails or common shared data repositories via images or diagrams. Steganography has historically been used to hide information, images or text within cover images, and as email and shared data spaces allow transfers of files of many megabytes or gigabytes, it is important to detect or destroy hidden information. This paper discusses an empirical trial to validate protection mechanisms against data loss through steganographic images.
KEYWORDS

Security; LSB Steganography; Data Loss Prevention
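For concreteness, the LSB embedding the title refers to can be sketched as follows; the helper names and the random cover image are illustrative. The point relevant to Data Loss Prevention is visible in the last line: the stego image differs from the cover by at most one intensity level per pixel, which is why the payload is hard to spot.

```python
import numpy as np

def embed_lsb(pixels, message):
    """Hide message bytes in the least significant bits of pixel values."""
    bits = np.unpackbits(np.frombuffer(message, dtype=np.uint8))
    out = pixels.copy().ravel()
    if bits.size > out.size:
        raise ValueError("cover image too small for message")
    out[:bits.size] = (out[:bits.size] & 0xFE) | bits  # overwrite LSBs only
    return out.reshape(pixels.shape)

def extract_lsb(pixels, n_bytes):
    """Recover n_bytes hidden by embed_lsb."""
    bits = (pixels.ravel()[:n_bytes * 8] & 1).astype(np.uint8)
    return np.packbits(bits).tobytes()

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
stego = embed_lsb(cover, b"exfiltrated")
print(extract_lsb(stego, 11))  # b'exfiltrated'
print(np.max(np.abs(cover.astype(int) - stego.astype(int))))  # at most 1
```

A DLP countermeasure that re-randomizes or re-compresses image LSBs at the gateway destroys such a payload without visibly altering the image.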


Real-Time Intrusion Detection Of DOS-Type Attacks Based On A Statistical And Neural Approach
Djionang Lekagning Berlin H., Tchuisseu Jules C. and Dr. Tindo Gilbert, University of Yaounde, Cameroon.
ABSTRACT
The Internet offers services to all users in the world. Its infrastructure presents several vulnerabilities that hackers exploit to disrupt the smooth running of networks. More than 90% of attacks on the Internet are denials of service that exploit the TCP protocol. Several approaches have been proposed in the literature to detect this type of attack: router-based approaches, statistical approaches and artificial intelligence approaches; the problem of false positives and false negatives remains. In this work we propose a hybrid approach combining a statistical and a neural method. The neural approach allows us to select the relevant attributes and to learn the normal traffic, while the statistical approach is based on sampling the TCP traffic. From the captured packets, we study the evolution of the difference between the number of TCP SYN packets and the number of TCP FIN or RST packets received by the server. A comparative study was made against several other works, and a real-time simulation was also performed. Our approach yields better results than the other works.
KEYWORDS

Network intrusion detection, DOS attack, Neural network, Statistical model
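The SYN/FIN statistic the abstract describes can be sketched as a sliding-window counter; the window length, the threshold and the synthetic flag sequences below are illustrative assumptions, and the paper's neural attribute-selection stage is not reproduced.

```python
from collections import deque

def syn_fin_detector(flags, window=100, threshold=0.6):
    """Flag a position as suspicious when the fraction of unmatched SYNs
    (SYN count minus FIN/RST count, over SYN count) in the recent window
    exceeds a threshold. A SYN flood leaves most SYNs unmatched."""
    recent = deque(maxlen=window)
    alerts = []
    for i, f in enumerate(flags):
        recent.append(f)
        syn = sum(1 for x in recent if x == "SYN")
        fin = sum(1 for x in recent if x in ("FIN", "RST"))
        if syn >= 10 and (syn - fin) / syn > threshold:
            alerts.append(i)
    return alerts

# Normal traffic: every SYN is eventually answered by a FIN
normal = ["SYN", "FIN"] * 50
# SYN flood: many SYNs, almost no FIN/RST
flood = ["SYN"] * 95 + ["FIN"] * 5
print(len(syn_fin_detector(normal)), len(syn_fin_detector(flood)))
```

In normal traffic the SYN and FIN/RST counts track each other, so the difference stays near zero; under a flood the difference grows with the window.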


A Design For A Secure Malware Laboratory
Xavier Riofrio1,2 and David Galindo1, 1University of Birmingham, Birmingham, United Kingdom, 2Universidad Nacional de Loja, Loja, Ecuador
ABSTRACT
Malicious software is usually taught on the basis of theory alone, so students do not get real practice. Consequently, when they confront a real-world incident, the response is usually neither timely nor valuable enough. A practical focus provides a different understanding of the problem, because the student learns to recognise suspicious behaviour. This paper proposes the design of a complete platform for experimenting with malware-related topics in a controlled and safe environment. The strategy presented is a virtual machine that integrates tools including the Metasploit Framework, vulnerable systems and software scanners. In addition, a web tutorial is available for user orientation; it incorporates additional exclusive components for Metasploit and a tutorial for developing them.
KEYWORDS

Malware, Metasploit, course, practice, virus.


Detection Of Malicious Nodes Using Collaborative Neighbour Monitoring In DSA Networks
Takyi Augustine and David Johnson, University of Cape Town, South Africa.
ABSTRACT
This work addresses position falsification attacks in dynamic spectrum access (DSA) networks. The work models possible threats from malicious nodes and presents a novel detection algorithm. Our detection strategy uses collaborative neighbour monitoring by the secondary nodes within the deployment area to detect malicious nodes. The simulation results show that our algorithm detects position falsification attacks well, provided the distance between the actual malicious node position and the falsified position is at least 0.035 km. Even with highly fluctuating RSSI values, Kullback-Leibler (KL) divergence allowed us to obtain samples close to the true means.
KEYWORDS

Spectrum sensing, neighbor monitoring, malicious node, secondary node, position falsification, attack.
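The KL-divergence check mentioned at the end can be sketched as follows: RSSI samples are binned into histograms, and a falsified position report diverges more from the reference distribution than an honest one. No propagation model is simulated here; the Gaussian RSSI samples, their means and the bin edges are illustrative assumptions.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """Discrete Kullback-Leibler divergence D(P || Q) over shared bins,
    with a small epsilon to guard empty bins."""
    p = p / p.sum()
    q = q / q.sum()
    return float(np.sum(p * np.log((p + eps) / (q + eps))))

def rssi_histogram(samples, bins):
    counts, _ = np.histogram(samples, bins=bins)
    return counts.astype(float)

rng = np.random.default_rng(7)
bins = np.linspace(-100, -40, 31)  # RSSI bins in dBm

# Reference RSSI expected at the claimed position, plus two reports:
# an honest node near that position, and a node falsifying its position
reference = rssi_histogram(rng.normal(-70, 4, 1000), bins)
honest = rssi_histogram(rng.normal(-70, 4, 1000), bins)
liar = rssi_histogram(rng.normal(-85, 4, 1000), bins)

d_honest = kl_divergence(honest, reference)
d_liar = kl_divergence(liar, reference)
print(d_honest < d_liar)  # True: the falsified report diverges more
```

Thresholding such divergences across collaborating neighbours is one way to turn per-node RSSI evidence into a falsification alarm.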


Performance Analysis of Witricity in Typical Real World Model Situations
Hafiz Usman Tahseen and Lixia Yang, Jiangsu University, Zhenjiang, China.
ABSTRACT
Witricity is the wireless transfer of electric power at resonance, where circuits behave as pure resonant circuits. This happens when two electromagnetically coupled coils are in the very near field at the resonant frequency: both circuits then transfer maximum energy with minimum energy reflection. Transformers using the same technique are common in switching circuits, radios, etc. At resonance, the oscillating current becomes a source of magnetic field, so the energy loss over many consecutive cycles is very low; hence any circuit lying in the reactive near field undergoes mutual induction, and definite wireless energy transmission takes place. In this research, the authors design two multi-track coil resonators and observe near-field energy transmission from the first resonator to the second, in both simulation and hardware. They further analyse the technique under misalignment at three different positions: lateral, angular and axial-rotational. Energy transmission is observed at each position, with all three misalignments, at a fixed resonant frequency, to demonstrate the feasibility of the system through practical measurements and meaningful comparison. They develop a design procedure for charging low-power electronic devices wirelessly through an inductive link within a room from a single source.
KEYWORDS

Lateral misalignment, Angular misalignment, Axial rotational misalignment.


Machine Learning And Wearable Devices For Phonocardiogram-Based Diagnosis
Shaima Abdelmageed, University of Vasa, Vasa, Finland.
ABSTRACT
The heart sound signal, the phonocardiogram (PCG), is difficult to interpret even for experienced cardiologists, and interpretation is highly subjective, depending on the hearing ability of the physician. mHealth has been the adopted approach to simplifying this and obtaining quick diagnoses using mobile devices. However, it has been challenging because of the required data quality, high computational load and high power consumption. The aim of this paper is to diagnose the heart condition from phonocardiogram analysis using machine learning techniques, assuming the limited processing power of a future wearable device. The cardiovascular system is modelled as a transfer function to provide the PCG signal as it would be recorded at the wrist. The signal is then decomposed using a filter bank and analysed using a discriminant function. The results show that a PCG with a 19 dB signal-to-noise ratio can lead to 97.33% successful diagnosis.
KEYWORDS

Analysis, Classification, data quality, diagnosis, filter banks, mHealth, PCG, SNR, transfer function, Wavelet Transform, wearable


Constructing a Semantic Graph with Depression Symptoms Extracted from Twitter
Long Ma, Troy University, Troy, USA.
ABSTRACT
Depression diagnosis is a critical challenge in mental precision medicine, since there is currently no gold standard based on depression symptoms. Usually, a doctor diagnoses depression from a patient's answers to interview questions, and the diagnosis depends on the person's behavioural symptoms. Because clinical data in hospitals are private, patients' medical data are very hard to obtain. We therefore directly use public social media data from Twitter, which contain much information from patients, doctors and other people. The research goal is to extract depression symptoms from this massive social data via text mining and then build a semantic graph representing the relations among the symptoms. Unlike commonly used statistical methods, we propose a hybrid method that integrates statistical analysis and natural language processing techniques to build the semantic graph from the depression symptoms discovered in tweets. In the future, the depression symptom semantic graph will be used to build an intelligent depression diagnosis system for medical doctors and a convenient depression self-screening system for ordinary people.
KEYWORDS

Depression, Social Media, Text Mining, Twitter, Social Networks, Natural Language Processing, Word2Vec


Evaluating Sequencing And Alignment Parameters For SNP Calling In Allotetraploid Plants
Amith Chandrashekar, Benjamin J. Mason, Madhav Subedi and Medini Weerasinghe, University of New Hampshire, USA.
ABSTRACT
Allopolyploids are formed by the hybridization of two related plant species. They contain multiple similar gene copies, known as homoeologs, which are used to study major structural rearrangements or conservation between related homoeologous chromosomes. Quinoa (Chenopodium quinoa) is an allotetraploid plant that originated in the Andean region of South America. Single nucleotide polymorphisms (SNPs) are among the most common markers used today across many areas of genomic study, and high-throughput sequencing followed by variant calling is a common technique for calling them. The presence of homoeologous sequences in an allotetraploid plant complicates SNP calling, and the parameters of the reads generated through sequencing directly influence its efficiency, which can be improved by selecting appropriate sequencing and alignment parameters. This research evaluates the efficiency of different sequencing and alignment parameters for SNP calling in homoeologous sequences of the quinoa genome.
KEYWORDS

Homoeologs, Allotetraploid, Quinoa, Sequencing, Single nucleotide polymorphisms (SNPs)


Microscopy Image Segmentation With Improved Deep U-Net Based Models
Shuo Wen Chang and Shih Wei Liao, National Taiwan University, Taiwan.
ABSTRACT
Because of the temporal behaviour of living cells, analysing microscopy sequences is a challenging task. U-Net architectures are normally considered a powerful tool for segmenting biomedical images; however, U-Net is usually trained from scratch starting with randomly initialized weights. Networks initialized with weights pre-trained on a large data set such as ImageNet are believed to perform better than the original network. In this paper, we propose two improved deep U-Net convolutional models: a CLSTM encoder and a VGG13 encoder with weights pre-trained on ImageNet. Our models also extend to biomedical image segmentation in general, aiming to attain consistently good segmentation results, and we compare their performance with each other. In the same experiment, we also compare two weight initialization schemes: Glorot uniform initialization and ImageNet pre-trained weights. A further experiment shows that both proposed models, using pre-trained weights, perform better than the original U-Net. To prevent overfitting, we train and test our models on four data sets, and we adopt two evaluation methods to solidify the improvement. The promising results of the pre-trained networks show that our approach outperforms the original methods, with the CLSTM encoder generally performing best overall.
    KEYWORDS

    Computer Vision, Cell Segmentation, Deep learning, Medical Image Processing, Satellite Imagery
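    Of the two initialization schemes compared above, Glorot uniform has a closed form that is easy to sketch in plain Python (the models themselves were presumably built in a deep-learning framework; the layer sizes below are arbitrary):

```python
import math
import random

def glorot_uniform(fan_in, fan_out, seed=0):
    """Glorot/Xavier uniform init: U(-limit, limit), limit = sqrt(6/(fan_in+fan_out))."""
    rng = random.Random(seed)
    limit = math.sqrt(6.0 / (fan_in + fan_out))
    return [[rng.uniform(-limit, limit) for _ in range(fan_out)]
            for _ in range(fan_in)]

W = glorot_uniform(64, 128)
limit = math.sqrt(6.0 / (64 + 128))
# Every sampled weight stays inside the Glorot bound
assert all(-limit <= w <= limit for row in W for w in row)
print(len(W), len(W[0]))
```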


    Development Of A Smartphone Application Using A Novel Image Processing Method For Detecting Periodontal (Gum) Disease
    Behnam Askarian and Jo Woon Chong, Texas Tech University, USA
    ABSTRACT
    Recently there has been increasing interest in health monitoring and disease diagnosis using smartphones and mobile health applications. However, few efforts have focused on detecting inner-mouth diseases using smartphones. Periodontal disease is the main cause of tooth loss in adults, and early detection of this disease is vital for preventing tooth loss. In this paper, we propose an affordable, easy-to-use, versatile smartphone-based method (application) using a novel image processing approach for periodontal disease detection, which could be used in remote areas with medical shortages. The proposed method uses a state-of-the-art color-conversion method and an SVM classification algorithm to distinguish diseased gums from healthy ones, and achieves an accuracy of over 93% for detecting periodontal disease.
    KEYWORDS

    Periodontal disease, Color gamut, Support Vector Machine, SVM, Smartphone.
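    As a rough sketch of the color-conversion step (the paper's exact converter and trained SVM are not reproduced here), one can map RGB gum pixels into HSV and derive simple features; a saturation threshold stands in for the SVM decision boundary, and all names and thresholds below are hypothetical:

```python
import colorsys

def gum_features(pixels):
    """Mean hue and saturation of RGB pixels (channel values in 0-255)."""
    hs = [colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)[:2]
          for r, g, b in pixels]
    n = len(hs)
    return (sum(h for h, _ in hs) / n, sum(s for _, s in hs) / n)

def classify(pixels, sat_threshold=0.6):
    """Stand-in for the SVM: inflamed gums tend to be deeply saturated red."""
    _, mean_sat = gum_features(pixels)
    return "diseased" if mean_sat >= sat_threshold else "healthy"

healthy = [(230, 170, 160)] * 4   # pale pink patch
inflamed = [(180, 30, 25)] * 4    # deep red patch
print(classify(healthy), classify(inflamed))
```

    In the actual system the HSV (or other color-gamut) features would be fed to a trained SVM rather than a fixed threshold.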


    Bayesian Survival Analysis of Marshall and Olkin Models with Stan
    Mohammed H AbuJarad and Athar Ali Khan, Aligarh Muslim University, India.
    ABSTRACT
    In this paper, an endeavor has been made to fit three distributions (Marshall-Olkin with exponential, Marshall-Olkin with exponentiated exponential, and Marshall-Olkin with exponentiated extension) in order to apply Bayesian techniques to the prognosis of women with breast cancer, demonstrated using Stan. Stan is a probabilistic programming language for Bayesian modeling and inference. The models are applied to real censored survival data, so that all of the concepts and computations revolve around the same data. R code has been developed and refined to implement the censoring mechanism throughout, using Stan. Moreover, parallel simulation tools are also implemented, with extensive use of R.
    KEYWORDS

    Marshall-Olkin with exponential, Marshall-Olkin with exponentiated exponential, Marshall-Olkin with exponentiated extension, Posterior, Simulation, RStan, Bayesian Inference, R.
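    For reference, the Marshall-Olkin family transforms a baseline survival function S0 via S(t) = a*S0(t) / (1 - (1 - a)*S0(t)). A minimal sketch with an exponential baseline (the paper's Stan implementation and censoring machinery are not reproduced; the rate and tilt values below are arbitrary) is:

```python
import math

def mo_survival(t, alpha, base_survival):
    """Marshall-Olkin survival: S(t) = a*S0(t) / (1 - (1 - a)*S0(t)), a > 0."""
    s0 = base_survival(t)
    return alpha * s0 / (1 - (1 - alpha) * s0)

def expo_survival(lam):
    """Exponential baseline: S0(t) = exp(-lam * t)."""
    return lambda t: math.exp(-lam * t)

# With alpha = 1 the family reduces exactly to the baseline distribution
assert abs(mo_survival(1.0, 1.0, expo_survival(0.5)) - math.exp(-0.5)) < 1e-12

s = mo_survival(1.0, 2.0, expo_survival(0.5))  # tilted survival at t = 1
print(round(s, 4))
```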


    Force Parameterization Of Literals
    Bhashyam Ramesh, Mohankumar KJ, J Venkataramana, Shrikant Salunke, Ganit Kumar, Syed Nawaz, Teradata India Pvt Ltd, India
    ABSTRACT
    A query plan cache (QPC) in a database avoids repeated optimization of the same query, and has strong significance in relational database management systems (RDBMSs). In Teradata, the QPC saves the plan of a query when the query is first seen and optimized; the saved plan is reused if the query repeats, so repeated optimization is avoided. In most RDBMSs, a query must exactly match a saved query in the QPC for its plan to be reused. In many cases, user queries are identical except for the literal values in their predicates; because they differ in these literals, they are treated as distinct queries and each is optimized to generate a plan. We propose an approach called Force Parameterization of Literals (FPL). In this approach, we parameterize the literals in the predicates to generate a query template, generate a generic plan from that template, and reuse the plan for all queries that match the template after their literals are parameterized. A key challenge is ensuring that the generic plan remains optimal for all subsequent literal values. We experimentally validate that our approach is efficient in space and time and adaptive, requiring no repeated query optimization for queries that vary only in the literal values of their predicates.
    KEYWORDS

    Query Plan Cache, Query Optimization, Query Plan Generation, Database, RDBMS.
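    A minimal sketch of the FPL idea, with a deliberately simplified regex standing in for a real SQL parser (all names below are hypothetical illustrations, not Teradata internals):

```python
import re

# Matches single-quoted string literals and bare numeric literals.
# A production system would parameterize via the SQL grammar, not a regex.
LITERAL = re.compile(r"'(?:[^']|'')*'|\b\d+(?:\.\d+)?\b")

plan_cache = {}

def query_template(sql):
    """Replace predicate literals with '?' to form the cache key."""
    return LITERAL.sub("?", sql)

def get_plan(sql, optimize):
    """Return a cached generic plan, optimizing only on first sight."""
    template = query_template(sql)
    if template not in plan_cache:
        plan_cache[template] = optimize(template)
    return plan_cache[template]

q1 = "SELECT * FROM orders WHERE qty > 10 AND region = 'EU'"
q2 = "SELECT * FROM orders WHERE qty > 500 AND region = 'US'"
calls = []
optimizer = lambda t: calls.append(t) or f"PLAN[{t}]"
get_plan(q1, optimizer)
get_plan(q2, optimizer)
print(len(calls))  # the optimizer ran once for both queries
```

    Both queries reduce to the same template, so the second lookup is a cache hit and no re-optimization occurs.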


    Determination Of Substructures In Social Networks By Graph Coloring Using Fuzzy Irregular Cellular Automata (FICA)
    Mostafa Kashani, Saeid Gorgin and Seyed Vahab Shojaedini, Iranian Research Organization for Science and Technology (IROST), Iran.
    ABSTRACT
    The relationships between individuals in virtual networks can be modeled as a graph, where each node represents an individual and each edge a relationship between two individuals. Determining substructures in virtual networks can be critical for targeted advertising, so that each advertisement is sent to the subgroups and substructures that are its target audience. In this study, an optimal substructure is derived for virtual networks using graph theory and graph coloring. A new method, fuzzy irregular cellular automata (FICA), was applied to two virtual-network graphs of different complexity and compared with other methods. The results show that the proposed method produces a more optimal substructure than the other methods, and that execution time increases with graph complexity.
    KEYWORDS

    Social networks, Graph coloring, Irregular Cellular automata, Fuzzy System
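    The proposed FICA method is not reproduced here, but the underlying graph-coloring step can be illustrated with a classical greedy coloring, whose color classes partition a toy network into candidate subgroups:

```python
def greedy_coloring(adjacency):
    """Assign each node the smallest color not used by its colored neighbors,
    visiting high-degree nodes first (Welsh-Powell style ordering)."""
    colors = {}
    for node in sorted(adjacency, key=lambda n: -len(adjacency[n])):
        used = {colors[nb] for nb in adjacency[node] if nb in colors}
        colors[node] = next(c for c in range(len(adjacency)) if c not in used)
    return colors

# Toy social network: a triangle bridged to a short chain
net = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
    "d": ["c", "e"], "e": ["d", "f"], "f": ["e"],
}
colors = greedy_coloring(net)
# Proper coloring: no edge joins two nodes of the same color
assert all(colors[u] != colors[v] for u in net for v in net[u])
print(sorted(set(colors.values())))
```

    Each color class is an independent set; in the advertising setting above, such classes (or the subgraphs they induce) serve as candidate target subgroups.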


    Development Of A Knowledge-Based System For Undertaking The Risk Analysis Of Proposed Building Projects For A Selected Client
    Ibrahim Yakubu, Abubakar Tafawa Balewa University, Nigeria.
    ABSTRACT
    A Knowledge-Based System for the risk analysis of proposed building projects was developed for a selected client. The Fuzzy Decision Variables (FDVs) that cause differences between the initial and final contract sums of building projects were identified, the likelihood of the occurrence of the risks was determined, and a Knowledge-Based System that ranks the risks was constructed using the Java programming language and a Graphic User Interface. The Knowledge-Based System is composed of a Knowledge Base for storing data, an Inference Engine for controlling and directing the use of knowledge in problem-solving, and a User Interface that assists the user in retrieving, using and altering data in the Knowledge Base. The developed Knowledge-Based System was compiled, implemented and validated with data from previously completed projects. The client can utilize the Knowledge-Based System in undertaking proposed building projects.
    KEYWORDS

    RISK ANALYZER, Risk analysis, Knowledge-Based Systems, JAVA, Graphic User Interface


    Intelligence analysis with matrix creation model for all models and combining matrix intelligence analysis with network analysis
    Mohammad Hassan Anjom Shoa, Vali-e-Asr University, Rafsanjan, Iran.
    ABSTRACT
    In this paper, we lay out the strengths and weaknesses of each intelligence-analysis technique in order to form a combination of these techniques that yields the greatest intelligence value and a more accurate mathematical structure. The new technique is a combination of the applied methods: for each of the previous techniques (such as the coherent-effect method, actor analysis, and others), a matrix of elements and parameters is constructed, and network-analysis techniques establish the relationships among their parameters. While retaining the weak assumptions used in the analysis-of-competing-hypotheses technique, the other matrices can also be used, and with the help of the network, the parameters most strongly related to the other parameters are selected as priorities for analysis.
    KEYWORDS

    competitor analysis technique, coherent effect analysis model, model of analysis of actors, intelligence network analysis technique, parameter matrix, and component.


    Support Vector Machine method application in the Data Mining process for solving the oil well classification problem
    Mihailov Ilya Sergeevich and Zayar Aung, National Research University, Russia.
    ABSTRACT
    The SVM method for solving classification problems is considered in this article. The main aspects of applying this method to classification problems are formulated, and its advantages and disadvantages are shown. A description of the oil well classification problem and approaches to solving it using the developed SVM modification are also given, and the modification of the SVM method for solving the considered problem is substantiated.
    KEYWORDS

    Artificial Intelligence, Support Vector Machine, Data Mining, oil well.


    Obstacle detection on the road
    Mosbah Ramzi and Guezouli Larbi, University of Batna, Algeria.
    ABSTRACT
    According to the World Health Organization (WHO) [1], more than 1.25 million people have died in car accidents caused by the driver's lack of attention, sleepiness or fatigue. Almost half of those who die on the world's roads are "vulnerable road users": pedestrians, cyclists and motorcyclists. In this work we present an approach that detects roadsides and then searches for objects located in the road area to warn the driver. In parallel, we provide a system for detecting driver drowsiness. For critical cases, we built an Arduino-microcontroller system that takes control of the car when the driver falls asleep or when an obstacle appears and a collision is imminent. We target the 4th level of autonomous driving for our system; these levels [2] are defined by experts who categorize the evolution of autonomous driving into 5 categories, each level describing how the car and driver interact. We compared detection accuracy across object classes and analyzed the recognition results against other detectors on the KITTI dataset.
    KEYWORDS

    Object detection, help driving, road-edge detection, Arduino.


    Edgebase: A Cooperative Query Answering Database System With A Natural Language Interface.
    Edmund Sowah and Jianqiu Xu, Nanjing University of Aeronautics and Astronautics, Nanjing, China.
    ABSTRACT
    Traditional Database Management Systems (DBMSs) require users to meticulously construct and submit queries to generate answers. The lack of query-syntax flexibility in traditional database systems results in simple, direct answers: queries retrieve precisely the matched elements stated in the given Boolean query. In this paper, we propose a Cooperative Query Answering Database System (CDBS) that provides answers to user queries in the same manner as humans do, not as machines. The method of "Cooperative Query Answering (CQA)" emanated from the perception that recognizing users' intentions is vital to providing adequate and "complete" answers to queries. Most database systems require users to submit their queries in SQL syntax. In addition to presenting answers to queries in a human-like manner, we present a cooperative approach to query submission: an architecture that combines the rich features of HTML and Natural Language (NL) with the Query-By-Form (QBF) method and MySQL, enabling our proposed system to accept user queries in plain English. To validate our approach and proposed system, a set of thorough experiments was conducted on two database systems using the mysqlslap benchmark, and a comparative study with other methods was performed.
    KEYWORDS

    Cooperative Query Answering, Natural Language, Query Language, Query-By-Form, Query syntax
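    As a sketch of how a single NL question type might be mapped to SQL behind a form (the actual EdgeBase pipeline combining HTML, NL, QBF and MySQL is richer than one template; the pattern and table names below are hypothetical):

```python
import re

# One hypothetical form-backed question template, e.g.
# "Show me employees whose salary is over 50000"
PATTERN = re.compile(
    r"show (?:me )?(\w+) (?:whose|with) (\w+) (?:is )?(?:over|above) (\d+)",
    re.IGNORECASE,
)

def nl_to_sql(question):
    """Translate one recognized NL question shape into SQL."""
    m = PATTERN.match(question.strip())
    if not m:
        return None  # fall back to the form-based (QBF) interface
    table, column, value = m.groups()
    return f"SELECT * FROM {table} WHERE {column} > {value}"

print(nl_to_sql("Show me employees whose salary is over 50000"))
```

    Unrecognized questions return None, mirroring the cooperative fallback to form-based submission rather than a hard failure.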


    Coping with Class Imbalance in Classification of Traffic Crash Severity Based on Sensor and Road Data: A Feature Selection and Data Augmentation Approach
    Deepti Lamba1, Majed Alsadhan1, William Hsu1, Eric Fitzsimmons2, Greg Newmark3, 1Department of Computer Science, Kansas State University, USA, 2Department of Civil Engineering, Kansas State University, USA and 3Department of Landscape Architecture / Regional and Community Planning, Kansas State University, USA
    ABSTRACT
    This paper presents machine learning-based approaches to classification of historical traffic crashes in Kansas by severity, applied to a data set consisting of highway geometry, weather, and road sensor data. The goal of this work is to identify relevant features using a variety of loss measures and algorithms for feature selection. This is shown to facilitate the discovery of the most relevant sensors for the task of learning to predict severe crashes (those involving bodily injury). The key technical challenges are to cope with class imbalance (as a 75% majority of crashes are non-severe) and a highly correlated and redundant set of features from multiple coalesced sources. The major novel contributions of this work are the development of a random oversampling strategy for data augmentation combined with the systematic application of multiple feature selection measures over a range of supervised inductive learning models and algorithms. Positive results from this approach, on a data set of 277 initial ground features and 20,000 vehicle crashes collected over 9 years (2007 – 2015) by the Kansas Department of Transportation (KDOT), included models trained using 30 features (out of 277) that achieve cross-validation precision and recall comparable to those obtained using the full set of features. These and other results point towards potential use of feature selection findings and the resultant models in planning future road construction.

    KEYWORDS

    machine learning, class imbalance, predictive analytics, feature selection, data augmentation, traffic engineering
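    The random-oversampling strategy for data augmentation can be sketched as follows; the feature values and the "severe"/"non-severe" labels below are toy stand-ins for the KDOT data:

```python
import random

def random_oversample(rows, labels, minority="severe", seed=42):
    """Duplicate randomly chosen minority-class rows until classes balance."""
    rng = random.Random(seed)
    minority_idx = [i for i, y in enumerate(labels) if y == minority]
    majority_n = len(labels) - len(minority_idx)
    # Sample (with replacement) enough minority rows to match the majority
    extra = [rng.choice(minority_idx)
             for _ in range(majority_n - len(minority_idx))]
    new_rows = rows + [rows[i] for i in extra]
    new_labels = labels + [minority] * len(extra)
    return new_rows, new_labels

X = [[0.1], [0.2], [0.3], [0.9]]       # toy crash feature vectors
y = ["non-severe"] * 3 + ["severe"]    # 75% majority, as in the paper
X2, y2 = random_oversample(X, y)
print(y2.count("severe"), y2.count("non-severe"))
```

    Oversampling is applied to the training folds only; feature selection then runs on the balanced data.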


    Optimizing Data Structures With Alternating-Oriented Bisected Layouts
    Aleksandar Tucovic, University of Manitoba, Canada
    ABSTRACT
    Layouts are a general and universal design tool. However, for the purposes of modern computing, layouts lack a general and universal data structure. In this paper, a layout data structure is described and applied to various areas of computer science. We propose that there is one data structure that can be regarded as a general theory of layouts. The XOO layout schema is a general-purpose, scalable tool for design logic. For the purposes of determining and optimizing data structures, the XOO layout schema is constrained by bisecting divisions of the layout in alternating orientations. There has been some interesting research on layouts and applications of layout structure over the years, but with a little insight it extends into many areas of computer science. One interesting property of the XOO schema is that, in applying both a loose and a strict set of rules to its data structure, interesting geometries appear, with overlaps across many areas of computer science.
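    One toy reading of the alternating-oriented bisection constraint (not the paper's full XOO schema) is a recursive split that flips orientation at each level, much like a k-d tree:

```python
def bisect_layout(rect, depth, horizontal=True):
    """Recursively bisect a rectangle, alternating cut orientation per level.

    rect is (x, y, w, h); returns the leaf rectangles of the layout.
    """
    x, y, w, h = rect
    if depth == 0:
        return [rect]
    if horizontal:  # vertical cut: split into left/right halves
        halves = [(x, y, w / 2, h), (x + w / 2, y, w / 2, h)]
    else:           # horizontal cut: split into top/bottom halves
        halves = [(x, y, w, h / 2), (x, y + h / 2, w, h / 2)]
    return [leaf for half in halves
            for leaf in bisect_layout(half, depth - 1, not horizontal)]

leaves = bisect_layout((0, 0, 16, 16), depth=4)
print(len(leaves), leaves[0])
```

    Four levels of alternating bisection yield 16 congruent cells whose areas tile the original rectangle exactly.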