
Proceedings from the Journal on Computing (JoC) Vol.1 No.2 February 2011

Academic Conferences

By : Global Science & Technology Forum

Date : 2011

Location : United States / New York

PDF, 246 pages
Description :

The proceedings present the theory, application, and social implications of diverse frontier research areas in science and technology.

Keywords :

distance learning, Automatic Test Pattern generation, Binary decision diagrams, Boolean difference, Fitness Evaluation Function, Synthesized Binary decision diagram, tripartite Diffie-Hellman key agreement, computing conference proceedings

Keywords inside documents :

algorithm, system, computing, based, image, business, network, processor, curve, application, service, model, quantum, imputation, missing, layer, architecture, quality, rules, pattern


 

The Global Science and Technology Forum (GSTF) publishes international journals that feature peer-reviewed scholarly articles rigorously selected from conferences and open call for papers. These articles are the end result of scholars exploring the theory, application, and social implications of diverse frontier research areas in science and technology.

 


Abstracts from the papers included in the Proceedings from the Journal on Computing (JoC) Vol.1 No.2 February 2011.

 

1. Issues and Challenges in Applying Computer-Based Distance Learning Systems as an Alternative to Traditional Training Methods

Thamer Ahmad, Information Technology Programme, College of Arts and Sciences, Universiti Utara Malaysia (UUM)

Huda Ibrahim, Information Technology Programme, College of Arts and Sciences, Universiti Utara Malaysia (UUM)

Shafiz Affendi Mohd Yusof, Information Technology Programme, College of Arts and Sciences, Universiti Utara Malaysia (UUM)
 

Many scholars have listed the problems that prevent organizations' employees from attending face-to-face training. They have also presented Information and Communication Technology (ICT), and distance learning systems in particular, as an important way to overcome these obstacles. However, they did not rely on empirical studies to identify those problems or to compare traditional training methods with computer-based distance learning systems. Therefore, this survey aims to distinguish between traditional training methods and computer-based distance learning systems as a way to overcome employees' problems with traditional training, including the associated issues and challenges.


2. Test Pattern Generation Algorithm Using Structurally Synthesized BDD

Mousumi Saha, Naveen Singh Bisht, Shrinivas Yadav, Praveen Kumar K

 

Structurally Synthesized Binary Decision Diagrams (SSBDDs) have the important property of preserving information about a circuit's structure. The Boolean difference of a circuit is used to find test patterns for stuck-at faults in combinational circuits, but the algebraic manipulation involved in solving the Boolean difference is tedious. In this paper an efficient algorithm is proposed that computes the Boolean difference and the test patterns simply by searching the paths of the SSBDD. This model reduces algebraic manipulation and takes less time to compute the test patterns.
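The Boolean-difference test condition the abstract mentions can be sketched directly: a test for a stuck-at fault on a line x must drive x to the opposite of the stuck value and make dF/dx = 1 so the fault is observable. A minimal truth-table version (the function names and the example circuit are illustrative, not from the paper):

```python
from itertools import product

def boolean_difference(f, var, n):
    """dF/dx_var: the set of input tuples where F(..x=0..) != F(..x=1..)."""
    diff = set()
    for bits in product([0, 1], repeat=n):
        b0 = list(bits); b0[var] = 0
        b1 = list(bits); b1[var] = 1
        if f(tuple(b0)) != f(tuple(b1)):
            diff.add(bits)
    return diff

def stuck_at_tests(f, var, n, stuck_value):
    """Tests for a stuck-at fault on input `var`: the line must be driven to
    the opposite of the stuck value, and the fault must be observable."""
    want = 1 - stuck_value
    return sorted(bits for bits in boolean_difference(f, var, n)
                  if bits[var] == want)

# Example circuit: F = (a AND b) OR c; test input a (index 0) for stuck-at-0.
F = lambda v: (v[0] & v[1]) | v[2]
print(stuck_at_tests(F, 0, 3, 0))   # [(1, 1, 0)]: set a=1, b=1, c=0
```

The SSBDD algorithm in the paper arrives at the same patterns by path search instead of this exhaustive enumeration.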


3. Implementation of DNA Pattern Recognition in Turing Machines

Sumitha C.H, Department of Computer Science and Engineering, Karunya University, Coimbatore, India

 

Pattern recognition is the act of taking in raw data and taking an action based on the category of the pattern. DNA pattern recognition has applications in almost any field: forensics, genetic engineering, bioinformatics, DNA nanotechnology, history and so on. DNA molecules can be so large that performing pattern recognition on them with common techniques is a tedious task. Hence this paper describes pattern recognition for DNA molecules using the concept of Turing Machines. It also simulates, on the Universal Turing Machine, the standard Turing Machine that performs the DNA pattern recognition.
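As a toy illustration of the approach, a generic single-tape Turing machine simulator can run a small machine that scans a DNA string for a fixed pattern (here "AG"). The transition table below is an illustrative assumption, not the paper's machine:

```python
def run_tm(transitions, accept, tape, state="q0", blank="_"):
    """Simulate a one-tape Turing machine. `transitions` maps
    (state, symbol) -> (new_state, written_symbol, move), move in {-1, +1}."""
    tape = list(tape) + [blank]
    head = 0
    while state != accept:
        key = (state, tape[head])
        if key not in transitions:
            return False                      # no applicable rule: reject
        state, tape[head], move = transitions[key]
        head += move
        if head == len(tape):
            tape.append(blank)                # extend the tape on demand
    return True

# A machine that scans right and accepts iff the tape contains "AG".
T = {
    ("q0", "A"): ("q1", "A", +1),   # saw 'A': maybe the pattern starts here
    ("q0", "C"): ("q0", "C", +1),
    ("q0", "G"): ("q0", "G", +1),
    ("q0", "T"): ("q0", "T", +1),
    ("q1", "A"): ("q1", "A", +1),   # another 'A' can still start the pattern
    ("q1", "G"): ("acc", "G", +1),  # "AG" found
    ("q1", "C"): ("q0", "C", +1),
    ("q1", "T"): ("q0", "T", +1),
}
print(run_tm(T, "acc", "TTAGC"))  # True
print(run_tm(T, "acc", "TTAC"))   # False
```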

 

4. A Precise Evolutionary Approach to Solve Multivariable Functional Optimization

Md. Robiul Islam, Dept. of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna, Bangladesh

M.A.H. Akhand, Dept. of Computer Science and Engineering, Khulna University of Engineering & Technology, Khulna, Bangladesh

 

The Genetic Algorithm (GA) is a stochastic search and optimization method imitating natural biological evolution. A GA manages a population of solutions instead of a single solution to find an optimum for a given problem. Although the GA draws attention for functional optimization, it may search the same point repeatedly due to its probabilistic operations, which hinders its performance. In this study, we modify the standard Genetic Algorithm (sGA) to achieve better performance. The modification is investigated in the selection and recombination stages, yielding the proposed Precise Genetic Algorithm (PGA). The PGA searches the target space efficiently and shows several potential advantages over the conventional GA when tested on functions with multiple independent variables.
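For context, the standard GA loop that the paper modifies can be sketched as follows. The operators and parameters are generic illustrations of an sGA, not the PGA itself (which additionally avoids re-searching points in selection and recombination):

```python
import random

def sga_sketch(fitness, n_vars, bounds, pop=30, gens=60, seed=1):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, with elitism."""
    rng = random.Random(seed)
    lo, hi = bounds
    P = [[rng.uniform(lo, hi) for _ in range(n_vars)] for _ in range(pop)]
    best = min(P, key=fitness)
    for _ in range(gens):
        nxt = [best[:]]                                  # elitism
        while len(nxt) < pop:
            a = min(rng.sample(P, 3), key=fitness)       # tournament selection
            b = min(rng.sample(P, 3), key=fitness)
            w = rng.random()
            child = [w * x + (1 - w) * y for x, y in zip(a, b)]  # blend crossover
            if rng.random() < 0.3:                       # Gaussian mutation
                i = rng.randrange(n_vars)
                child[i] = min(hi, max(lo, child[i] + rng.gauss(0, 0.3)))
            nxt.append(child)
        P = nxt
        best = min(P, key=fitness)
    return best

sphere = lambda v: sum(x * x for x in v)   # optimum 0 at the origin
sol = sga_sketch(sphere, 2, (-5.0, 5.0))
print(sphere(sol))                          # close to 0
```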

 

5. GPUMemSort: A High Performance Graphics Co-processors Sorting Algorithm for Large Scale In-Memory Data
Yin Ye, National Laboratory for Information Science and Technology
Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China
Zhihui Du, National Laboratory for Information Science and Technology
Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China
David A. Bader, College of Computing, Georgia Institute of Technology, Atlanta, GA, 30332, USA
Quan Yang, National Laboratory for Information Science and Technology
Department of Computer Science and Technology, Tsinghua University, Beijing, 100084, China
Weiwei Huo, School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, China

 

In this paper, we present a GPU-based sorting algorithm, GPUMemSort, which achieves high performance in sorting large-scale in-memory data by taking advantage of GPU processors. It consists of two algorithms: an in-core algorithm, which is responsible for sorting data in GPU global memory efficiently, and an out-of-core algorithm, which is responsible for dividing large-scale data into multiple chunks that fit into GPU global memory. GPUMemSort is implemented on NVIDIA's CUDA framework, and some critical and detailed optimization methods are also presented. The tests of different algorithms have been run on multiple data sets. The experimental results show that our in-core sorting can outperform other comparison-based algorithms and that GPUMemSort is highly effective in sorting large-scale in-memory data.
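The out-of-core idea is independent of the GPU: split the input into chunks that fit device memory, sort each chunk in-core, then k-way merge the sorted runs. A CPU-only sketch, with an ordinary sort standing in for the GPU kernel:

```python
import heapq
import random

def out_of_core_sort(data, chunk_size):
    """Out-of-core strategy: split into chunks that fit 'device' memory,
    sort each chunk (the in-core phase), then k-way merge the runs."""
    runs = []
    for i in range(0, len(data), chunk_size):
        chunk = sorted(data[i:i + chunk_size])  # stand-in for the GPU in-core sort
        runs.append(chunk)
    return list(heapq.merge(*runs))             # k-way merge of sorted runs

random.seed(0)
data = [random.randrange(1000) for _ in range(100)]
print(out_of_core_sort(data, 16) == sorted(data))  # True
```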

 

6. Image Segmentation using Two-Layer Pulse Coupled Neural Network with Inhibitory Linking Field

Heggere S. Ranganath, The University of Alabama in Huntsville, Huntsville, Alabama, 35899, USA

Ayesha Bhatnagar, The University of Alabama in Huntsville, Huntsville, AL 35899, USA

 

For over a decade, Pulse Coupled Neural Network (PCNN) based algorithms have been used for image segmentation. Though there are several versions of PCNN-based image segmentation methods, almost all of them use a single-layer PCNN with excitatory linking inputs. Several major issues associated with the single-burst PCNN need attention. Often, the PCNN parameters, including the linking coefficient, are determined by trial and error. The segmentation accuracy of the single-layer PCNN is highly sensitive to the value of the linking coefficient. Finally, in single-burst mode, neurons corresponding to background pixels do not participate in the segmentation process. This paper presents a new two-layer network organization of the PCNN in which both excitatory and inhibitory linking inputs exist. The value of the linking coefficient and the threshold signal at which primary firing of neurons starts are determined directly from the image statistics. Simulation results show that the new PCNN achieves a significant improvement in segmentation accuracy over the widely known Kuntimad's single-burst image segmentation approach. The two-layer PCNN based image segmentation method overcomes all of these drawbacks of the single-layer PCNN.

 

 

7. Efficient Fractal Image Coding using Fast Fourier Transform

S.B. Dhok, Visvesvaraya National Institute of Technology, Nagpur, India

R.B. Deshmukh, Visvesvaraya National Institute of Technology, Nagpur, India

A.G. Keskar, Visvesvaraya National Institute of Technology, Nagpur, India

 

Fractal coding is a novel technique for image compression. Though the technique has many attractive features, its large encoding time makes it unsuitable for real-time applications. In this paper, an efficient algorithm for fractal encoding is presented which operates on the entire domain image instead of overlapping domain blocks. The algorithm drastically reduces the encoding time compared to the classical full search method. The reduction in encoding time is mainly due to a modified cross-correlation based similarity measure. The implemented algorithm employs an exhaustive search of domain blocks and their isometry transformations to investigate their similarity with every range block. The application of the Fast Fourier Transform in the similarity measure calculation speeds up the encoding process. The proposed eight isometry transformations of a domain block exploit the properties of the Discrete Fourier Transform to minimize the number of Fast Fourier Transform calculations. Experimental studies on the proposed algorithm demonstrate that the encoding time is reduced drastically, with an average speedup factor of 538 with respect to the classical full search method and comparable Peak Signal to Noise Ratio values.
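The similarity measure is essentially a sliding cross-correlation between a range block and the domain. A direct 1-D version is shown below for clarity; the paper computes the same sliding sums with the FFT, which turns the O(N·M) direct computation into O(N log N) and is where the speedup comes from:

```python
def cross_correlation_1d(domain, block):
    """Slide `block` over `domain` and return the correlation score at each
    offset. (Computed directly here; an FFT evaluates all offsets at once.)"""
    M = len(block)
    scores = []
    for s in range(len(domain) - M + 1):
        seg = domain[s:s + M]
        scores.append(sum(a * b for a, b in zip(seg, block)))
    return scores

domain = [1, 2, 9, 8, 1, 0, 2]
block = [9, 8]
scores = cross_correlation_1d(domain, block)
best = max(range(len(scores)), key=scores.__getitem__)
print(best)   # 2: the block best matches the domain at offset 2
```

In the 2-D image case the same trick applies per range block, and the eight isometries are folded into the DFT properties as the abstract describes.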

 

8. Application Virtualization for Teaching Hospitals in Nigeria

Desmennu Senanu Rita, Dept. of Computer and Info. Sci, Covenant University Ota, Ogun State, Nigeria

Ikhu-Omoregbe Nicholas, Dept. of Computer and Info. Sci, Covenant University Ota, Ogun State, Nigeria

 

Information technology has improved operations management globally. The health sector has benefited from this revolution through the introduction of eHealth solutions. The cost-effective utilization of information technology, and flexibility in adapting and adopting organizational changes, have posed challenges to health institutions in developing countries. In the case of Nigeria, the implementation of the National Health Policy entails the delivery of a full-packaged health care system; this package includes health education, maternal, newborn and child healthcare, nutrition and immunization. All these require record keeping and data storage. The management of massive data storage and its on-demand availability have been sources of concern to health institutions in the country. This has brought about a slow rate of hospital-to-hospital collaboration, insecure information exchange between and across institutions, and a lack of proper accountability in the health sector, amongst other challenges.

In this paper we propose a Cloud computing infrastructure which will adopt application virtualization to address the challenges in health care delivery in the country. This is an emerging technology that will provide eHealth solutions as services to tenants, a model known as Software-as-a-Service (SaaS). The infrastructure should deliver a single application through the browser to thousands of clients or stakeholders using a scalable multitenant architecture. This will help to minimize cost, manage healthcare resources effectively, and support the realization of the Millennium Development Goals (MDGs) on healthcare.

 

9. Rotation-Independent Hierarchical Representation for Open and Closed Curves and its Applications

Siddharth Shivapuja, Honeywell Scanning and Mobility, Blackwood, NJ 08012, USA
Vineetha Bettaiah, The University of Alabama in Huntsville, Huntsville, AL 35899, USA
Thejaswi Raya, The University of Alabama in Huntsville, Huntsville, AL 35899, USA
Heggere Ranganath, The University of Alabama in Huntsville, Huntsville, AL 35899, USA


The algorithm used for the segmentation of an image, and the scheme used to represent the segmentation result, are mostly selected based on the final image analysis or interpretation objective. The boundary-based image segmentation and representation system developed by Nabors stores the segmentation result as a graph-tree hierarchical structure capable of supporting diverse applications. This paper shows that Nabors' hierarchical representation of curves is not invariant to rotation, and proposes an enhanced representation which retains its structure and remains invariant under rotation. The curve matching algorithm, which matches two curves based on their hierarchical representation, makes it easy to determine whether a curve is a section of a larger curve. The potential of the representation is illustrated by developing image registration and image stitching methods based on the new representation.
 

10. Key Agreement For Large-Scale Dynamic Peer Group

Xun Yi and Eiji Okamoto

 

Many applications in distributed computing systems, such as IP telephony, teleconferencing, collaborative workspaces, interactive chats and multi-user games, involve dynamic peer groups. In order to secure communications in dynamic peer groups, group key agreement protocols are needed. In this paper, we come up with a new group key agreement protocol, composed of a basic protocol and a dynamic protocol, for large-scale dynamic peer groups. Our protocols are natural extensions of the one-round tripartite Diffie-Hellman key agreement protocol. In view of this, our protocols are believed to be more efficient than group key agreement protocols built on the two-party Diffie-Hellman key agreement protocol. In addition, our protocols have the properties of group key secrecy, forward and backward secrecy, and key independence.
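The paper's protocols extend the pairing-based tripartite scheme; the underlying idea that a group key can be formed by repeated exponentiation, independent of the order in which members contribute, can be illustrated with plain modular arithmetic (toy parameters, not the paper's protocol):

```python
def chain(g, p, secrets):
    """Each member in turn exponentiates the received value with its secret;
    the final value g^(a1*a2*...*an) mod p is the shared group key."""
    v = g
    for s in secrets:
        v = pow(v, s, p)
    return v

# Toy 32-bit prime modulus; real protocols use ~2048-bit groups or curves.
p, g = 0xFFFFFFFB, 7
a, b, c = 123457, 654321, 778899

k1 = chain(g, p, [a, b, c])
k2 = chain(g, p, [c, a, b])
print(k1 == k2)   # True: the key is independent of the contribution order
```

This order-independence is what lets group members derive the same key from different message flows; the tripartite construction reduces the number of rounds needed to achieve it.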

 

11. Identifying Potential Security Flaws Using Loophole Analysis and the SECREt

Curtis Busby-Earle, Department of Computing, The University of the West Indies, Mona, Jamaica

Ezra K. Mugisa, Department of Computing, The University of the West Indies, Mona, Jamaica


In contemporary software development there are a number of methods that attempt to ensure the security of a system. Many of these methods are however introduced in the latter stages of development or try to address the issues of securing a software system by envisioning possible threats to that system, knowledge that is usually both subjective and esoteric.

In this paper we introduce the concept of path fixation and discuss how contradictory paths or loopholes, discovered during requirements engineering and using only a requirements specification document, can lead to potential security flaws in a proposed system.

The SECREt is a proof-of-concept prototype tool developed to demonstrate the effectiveness of loophole analysis. We discuss how the tool performs a loophole analysis and present the results of tests conducted on an actual specification document. We conclude that loophole analysis is an effective, objective method for the discovery of potential vulnerabilities that exist in proposed systems and that the SECREt can be successfully incorporated into the requirements engineering process.

 

12. Exploitation of Vulnerabilities in Cloud-Storage

Narendran Calluru Rajasekar, School of Computing, Information Technology and Engineering, University of East London, London, U.K

Chris O. Imafidon, School of Computing, Information Technology and Engineering, University of East London, London, U.K.


The paper presents the vulnerabilities of cloud storage and various possible attacks exploiting them, relating to cloud security, one of the challenging features of cloud computing. The attacks are classified into three broad categories, of which the social networking based attacks are the most recent, evolving out of existing technologies such as P2P file sharing. The study is extended to available defence mechanisms and current research areas in cloud storage. Based on the study, a simple cloud storage system is implemented, and its major aspects, such as the login mechanism, encryption techniques and key management, are evaluated against the presented attacks. The study shows that cloud storage consumers are still dependent on the trust and contracts agreed with the service provider, and that there are no proven defence mechanisms against these attacks. Furthermore, emerging technologies could possibly break all key-based encryption mechanisms.

 

13. How to Improve the Performance of Neural Networks in the Hardened Password Mechanism

Narainsamy Pavaday, Insah Bhurtah and Dr. K.M.Sunjiv Soyjaudah, Member IEEE

 

A wide variety of systems, ubiquitous in our daily activities, require personal identification schemes that verify the identity of individuals requesting their services. A non-exhaustive list of such applications includes secure access to buildings, computer systems, cellular phones and ATMs, crossing of national borders, and boarding of planes. In the absence of robust schemes, these systems are vulnerable to the wiles of an impostor. Current systems are based on the three vertices of the authentication triangle: possession of a token, knowledge of a secret, and possession of the required biometric. Due to weaknesses of the de facto password scheme, the inclusion of its inherent keystroke rhythms has been proposed, and systems that implement such security measures are on the market. This correspondence investigates the possibility of, and ways of, optimising the performance of the hardened password mechanism using the widely accepted Neural Network classifier. It is a continuation of previous work in that direction.
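Keystroke hardening augments the password with timing features extracted from how it is typed. A minimal sketch with a distance rule standing in for the paper's neural network classifier; the feature choices, timestamps and threshold are illustrative:

```python
def features(press, release):
    """Hold times and flight times from key press/release timestamps (ms)."""
    holds = [r - p for p, r in zip(press, release)]
    flights = [press[i + 1] - release[i] for i in range(len(press) - 1)]
    return holds + flights

def enroll(samples):
    """Template = per-feature mean over several typing samples."""
    return [sum(col) / len(col) for col in zip(*samples)]

def verify(template, sample, threshold):
    """Accept if the mean absolute deviation from the template is small.
    (A neural network classifier would replace this simple distance rule.)"""
    dist = sum(abs(t - s) for t, s in zip(template, sample)) / len(template)
    return dist <= threshold

s1 = features([0, 200, 410], [90, 290, 500])
s2 = features([0, 210, 400], [80, 300, 495])
tpl = enroll([s1, s2])
print(verify(tpl, s1, threshold=20))   # True: consistent typing rhythm
```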

 

14. ATM Frauds - Preventive Measures and Cost Benefit

Lawan Ahmed Mohammed, King Fahd University of Petroleum and Minerals HBCC Campus, Hafr Al Batin 31991, Saudi Arabia

 

It is well known that criminals have many ways of illegally accessing ATMs to reach the accounts of legitimate users. In this paper, we briefly provide an overview of the possible fraudulent activities that may be perpetrated against ATMs and justify why the use of biometrics should be considered as a preventive measure. A prototype biometric ATM was designed and questionnaires were distributed to users for their opinions. Finally, the paper concludes with a simple risk and cost-benefit analysis for the proposed design.

 

15. SIGN LANGUAGE

R. Dhanagopal, ECE, Jayaram College of Engg & Tech, Trichy, India

B. Manivasakam, ECE, Jayaram College of Engg & Tech, Trichy, India

 

Human interaction usually focuses on the world of sound, where communication is based on speech and most information is conveyed via voice and other sounds. However, there are people who live in a world of silence: for the hearing impaired, voice communication is impossible or troublesome, and so sign language was invented. A sign language consists of a grammar and a vocabulary. The grammar is usually significantly different from that of spoken and written languages, while the vocabulary is composed of many hand gestures and movements which convey the most important information, supported by whole-body movement and facial expressions. Given the differences in the way the hearing impaired observe the world, they encounter huge difficulties while learning and using written language, which is so common in daily communication. Since sign language cannot be understood by others, systems that can understand it are needed, and existing systems do not handle the task appropriately or accurately. We introduce the concept of a chat for sign-language-based communication which overcomes the deficiencies of the existing approaches. In the current system there is an action sensor with pressure switches; when pressure is applied, the corresponding switch closes and a signal is sent to a microcontroller.

The microcontroller senses the signal from the pressure switch, determines the switch position, and sends a command signal to the computer through an RS232 cable, which converts the microcontroller's signal into one the computer understands. As soon as the computer receives the signal, a program detects the corresponding word and plays it at the same instant. In this way, sign language is converted into voice.
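The decoding step on the computer side reduces to mapping received switch codes to words. A sketch with a hypothetical code-to-word table (the actual mapping depends on the glove design) and audio playback stubbed out as a print:

```python
# Hypothetical code-to-word table; each byte identifies a pressure switch.
WORDS = {0x01: "hello", 0x02: "yes", 0x03: "no", 0x04: "thanks"}

def decode_stream(raw):
    """Turn the byte stream arriving over the RS232 link into words.
    Unknown codes are ignored; in the real system each word would be
    played back as audio rather than printed."""
    words = [WORDS[b] for b in raw if b in WORDS]
    for w in words:
        print(w)              # real system: play the audio clip
    return words

decode_stream(bytes([0x01, 0x04]))   # prints "hello" then "thanks"
```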

 

16. Mobile Based Interaction System for the Paralyzed People

Kirti Srivastava, Undergraduate Student at JIITU

Animesh, Student of JIITU

Dr. M. Hima Bindu, Asst. Professor, JIITU

 

New methods and technologies are needed to face present and future challenges such as addressing the problems of disabled people. In the present work we have focused on enabling entertainment for quadriplegics. By utilizing the sensing and processing capabilities of today's mobile devices it is possible to capture rich quantitative data about the usage and context of mobile and ubiquitous applications in the field. In this paper, we propose a tool which uses one of these capabilities of the mobile device: the accelerometer sensor. A mobile device with accelerometer sensors, fitted on the head of the quadriplegic, is used to track head gestures. Paralyzed people are those whose body movements are restricted due to injury to, or malfunction of, parts of the brain or the spinal cord. The tool captures and recognizes a gesture and performs the required action, i.e. the gesture is mapped onto operations of the computer in front of the subject. The tool thus aids the mobility-restricted person in using the computer for entertainment when alone at home. The final part of the paper describes the experimentation goals, the overall process and preliminary results. Future work directions are also indicated.
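A head-gesture mapping of the kind described can be sketched as simple thresholding on accelerometer axes. The axes, thresholds and action names below are illustrative assumptions, not the paper's recognizer:

```python
def classify_gesture(ax, ay):
    """Map one accelerometer sample (normalized to [-1, 1] per axis) to an
    action. Thresholds and action names are illustrative only."""
    if ay > 0.5:
        return "scroll_up"      # head tilted back
    if ay < -0.5:
        return "scroll_down"    # head tilted forward
    if ax > 0.5:
        return "next_item"      # head tilted right
    if ax < -0.5:
        return "prev_item"      # head tilted left
    return "idle"               # no deliberate gesture

print(classify_gesture(0.1, -0.8))   # scroll_down
```

A real recognizer would look at short windows of samples rather than single readings, to reject tremor and sensor noise.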

 

17. Design and Simulation of High Performance Parallel Architectures Using the ISAC Language

Zdeněk Přikryl, Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic

Jakub Křoustek, Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic

Tomáš Hruška, Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic

Dušan Kolář, Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic

Karel Masařík, Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic

Adam Husár, Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic

 

Most modern embedded systems for multimedia and network applications are based on parallel data stream processing. The data processing can be done using very long instruction word (VLIW) processors, using more than one high-performance application-specific instruction-set processor (ASIP), or even by their combination on a single chip.

The design and testing of these complex systems is a time-consuming and iterative process. Architecture description languages (ADLs) are one of the most effective solutions for single-processor design. However, support for describing parallel architectures and multi-processor systems is very limited or completely missing in today's ADLs. This article presents new extensions to the existing architecture description language ISAC. These extensions are used for easy and fast prototyping and testing of parallel systems and processors.

 

18. Efficient Implementation of Parallel Path Planning Algorithms on GPUs

Ralf Seidler, Department of Computer Science, Chair of Computer Architecture, FAU Erlangen-Nuremberg, Germany

Michael Schmidt, Department of Computer Science, Chair of Computer Architecture, FAU Erlangen-Nuremberg, Germany

Andreas Schäfer, Department of Computer Science, Chair of Computer Architecture, FAU Erlangen-Nuremberg, Germany

Dietmar Fey, Department of Computer Science, Chair of Computer Architecture, FAU Erlangen-Nuremberg, Germany


In robot systems several computationally intensive tasks can be found, with path planning being one of them. Especially in dynamically changing environments, it is difficult to meet real-time constraints with a serial processing approach. For those systems using standard computers, a promising option is to employ a GPGPU as a coprocessor in order to offload those tasks which can be efficiently parallelized. We implemented selected parallel path planning algorithms on NVIDIA's CUDA platform and were able to accelerate all of these algorithms efficiently compared to a multi-core implementation. We present the results and more detailed information about the implementation of these algorithms.
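One classic parallelizable planner is wavefront expansion on an occupancy grid: every cell of the current frontier can be relaxed independently, which is exactly what maps well to one-thread-per-cell GPU kernels. A serial sketch of that structure (the abstract does not state which algorithms the authors selected, so this is a generic example):

```python
def wavefront(grid, goal):
    """Wavefront expansion: each iteration relaxes the whole frontier at once.
    On a GPU every frontier cell would be handled by one thread; here the
    inner loop stands in for that data-parallel step. 0 = free, 1 = obstacle."""
    rows, cols = len(grid), len(grid[0])
    dist = [[None] * cols for _ in range(rows)]
    dist[goal[0]][goal[1]] = 0
    frontier, d = [goal], 0
    while frontier:
        nxt = []
        for r, c in frontier:                        # data-parallel on a GPU
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols \
                        and grid[nr][nc] == 0 and dist[nr][nc] is None:
                    dist[nr][nc] = d + 1
                    nxt.append((nr, nc))
        frontier, d = nxt, d + 1
    return dist

grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
D = wavefront(grid, (2, 0))
print(D[0][0])   # 6: the path must go around the obstacle row
```

A robot then follows the gradient of `dist` from its position down to the goal.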

 

19. An Improved Modular Hybrid Ant Colony Approach for Solving Traveling Salesman Problem

Sudip Kumar Sahana, Assistant Prof, Dept of CSE, Birla Institute of Technology, Ranchi, Jharkhand, India.

Dr. (Mrs.) Aruna Jain, Reader, Dept of IT, Birla Institute of Technology, Ranchi, Jharkhand, India

 

Our primary aim is to design a framework to solve the well-known traveling salesman problem (TSP) using a combined approach of Ant Colony Optimization (ACO) and Genetic Algorithm (GA). Several solutions exist for this problem using ACO or GA, and even a hybrid approach of the two. Our framework finds an optimal solution by using a modular hybrid approach of ACO and GA along with heuristic techniques. We have incorporated the GA and the RemoveSharp and LocalOpt heuristics in the ACO module; each iteration calls the GA and heuristics within the ACO module, which results in a larger amount of pheromone being deposited on the optimal path during the global pheromone update. As a result, convergence is quicker and the solution is optimal.
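A plain ACO loop for the TSP, into which the paper inserts its GA and heuristic calls before the pheromone update, looks roughly like this (parameters are generic, not the paper's):

```python
import random

def aco_tsp(dist, ants=10, iters=30, alpha=1.0, beta=2.0, rho=0.5, seed=3):
    """Basic ACO skeleton: ants build tours guided by pheromone (tau) and
    distance heuristic, then pheromone evaporates and is deposited along
    the best tour.  (The hybrid approach would call the GA and the
    RemoveSharp/LocalOpt heuristics inside this loop.)"""
    n = len(dist)
    rng = random.Random(seed)
    tau = [[1.0] * n for _ in range(n)]
    best_tour, best_len = None, float("inf")
    for _ in range(iters):
        for _ in range(ants):
            tour = [rng.randrange(n)]
            while len(tour) < n:
                i = tour[-1]
                cand = [j for j in range(n) if j not in tour]
                w = [tau[i][j] ** alpha * (1.0 / dist[i][j]) ** beta
                     for j in cand]
                tour.append(rng.choices(cand, weights=w)[0])
            length = sum(dist[tour[k]][tour[(k + 1) % n]] for k in range(n))
            if length < best_len:
                best_tour, best_len = tour, length
        tau = [[t * (1 - rho) for t in row] for row in tau]  # evaporation
        for k in range(n):                                   # deposit on best
            i, j = best_tour[k], best_tour[(k + 1) % n]
            tau[i][j] += 1.0 / best_len
            tau[j][i] += 1.0 / best_len
    return best_tour, best_len

D = [[0, 1, 9, 9],
     [1, 0, 1, 9],
     [9, 1, 0, 1],
     [9, 9, 1, 0]]
tour, length = aco_tsp(D)
print(sorted(tour), length)   # a valid tour over all four cities
```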

 

20. Parallel Solution of Covering Problems: Super-Linear Speedup on a Small Set of Cores

Bernd Steinbach, Institute of Computer Science, Freiberg University of Mining and Technology, Freiberg, Germany

Christian Posthoff, Department Of Mathematics & Computer Science, The University of The West Indies, Trinidad & Tobago

 

This paper aims at better ways to solve problems of exponential complexity. Our special focus is combining the computational power of the four cores of a standard PC with better approaches in the application domain. As the main example we selected the unate covering problem, which must be solved, among others, in the process of circuit synthesis and for graph covering (domination) problems. We give an introduction to the wide field of problems that can be solved using Boolean models. We explain the models and the classic solutions, and discuss the results of a selected model using a benchmark set. Subsequently we study sources of parallelism in the application domain and explore the improvements given by the parallel utilization of the four available cores of a PC. Starting with a uniform splitting of the problem, we suggest improvements by means of an adaptive division and an intelligent master. Our experimental results confirm that combining improvements in the application models and the algorithmic domain leads to a remarkable speedup and an overall improvement factor of more than 35 million in comparison with the basic approach.
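The unate covering problem asks for a cheapest set of columns (e.g. prime implicants) that covers all rows (e.g. minterms). For orientation, the standard greedy approximation is sketched below; the paper itself solves the problem exactly and splits the search across cores:

```python
def greedy_cover(universe, subsets):
    """Greedy approximation for unate covering: repeatedly pick the column
    (subset) that covers the most still-uncovered rows."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(subsets, key=lambda s: len(uncovered & subsets[s]))
        if not uncovered & subsets[best]:
            raise ValueError("instance is not coverable")
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

# Rows 1..6 must all be covered by some chosen column.
S = {"c1": {1, 2, 3}, "c2": {3, 4}, "c3": {4, 5, 6}, "c4": {1, 6}}
print(greedy_cover({1, 2, 3, 4, 5, 6}, S))   # ['c1', 'c3']
```

Exact branch-and-bound solvers explore choices the greedy rule forecloses, which is where the exponential work, and hence the opportunity for parallel splitting, comes from.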


21. Estimating Demand for Dynamic Pricing in Electronic Markets

John Cartlidge, Department of Computer Science, University of Bristol, Bristol, UK

Steve Phelps, Centre for Computational Finance & Economic Agents, University of Essex, Colchester, UK

 

Economic theory suggests sellers can increase revenue through dynamic pricing: selling identical goods or services at different prices. However, such discrimination requires knowledge of the maximum price that each consumer is willing to pay, information that is often unavailable. Fortunately, electronic markets offer a solution, generating vast quantities of transaction data that, used intelligently, enable consumer behaviour to be modelled and predicted. Using eBay as an exemplar market, we introduce a model for dynamic pricing that uses a statistical method for deriving the structure of demand from temporal bidding data. This work is a tentative first step of a wider research program to discover a practical methodology for automatically generating dynamic pricing models for the provision of cloud computing services, a pertinent problem with widespread commercial and theoretical interest.
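One simple way to derive demand structure from bidding data is to treat each bidder's highest observed bid as a proxy for willingness to pay and read off an empirical demand curve. This is an illustrative baseline, not the paper's statistical method:

```python
def revenue_maximizing_price(max_bids):
    """Estimate demand from observed maximum bids: at price p, expected
    sales = number of bidders whose valuation (proxied by their highest
    bid) is at least p.  Return the price maximizing p * demand(p)."""
    def demand(p):
        return sum(1 for b in max_bids if b >= p)
    candidates = sorted(set(max_bids))   # only observed bids can be optimal
    return max(candidates, key=lambda p: p * demand(p))

bids = [5, 7, 7, 10, 12, 20]
p = revenue_maximizing_price(bids)
print(p)   # 7: five bidders buy at 7, giving revenue 35
```

Temporal bidding data complicates this picture, because a bidder's last proxy bid censors their true valuation, which is what the paper's method addresses.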

 

22. The ICT-Induced Business Reconfiguration: from Evolution to Revolution

Poongothai Selvarajan, Faculty of Business Studies, Vavuniya Campus of the University of Jaffna, Sri Lanka

 

This paper explores the recent revolutionary levels of business reconfiguration in the public sector banks in Sri Lanka through an exploratory case study analysis. This study compares the five levels of business reconfiguration introduced by Venkatraman (1991) with the Sri Lankan public sector banks. Findings show that, rather than the evolutionary levels, these banks have achieved the revolutionary levels of business reconfiguration within a short period of time, and it is believed that they will achieve the optimum capability of level five in the near future.

 

23. A Rule Based Taxonomy of Dirty Data

Lin Li, School of Computing, Edinburgh Napier University, Edinburgh, UK

Taoxin Peng, School of Computing, Edinburgh Napier University, Edinburgh, UK

Jessie Kennedy, School of Computing, Edinburgh Napier University, Edinburgh, UK

There is a growing awareness that high quality of data is a key to today's business success and that dirty data existing within data sources is one of the causes of poor data quality. To ensure high quality data, enterprises need to have a process, methodologies and resources to monitor, analyze and maintain the quality of data. Nevertheless, research shows that many enterprises do not pay adequate attention to the existence of dirty data and have not applied useful methodologies to ensure high quality data for their applications. One of the reasons is a lack of appreciation of the types and extent of dirty data. In practice, detecting and cleaning all the dirty data that exists in all data sources is quite expensive and unrealistic, so the cost of cleaning dirty data needs to be considered by most enterprises. This problem has not attracted enough attention from researchers. In this paper, a rule-based taxonomy of dirty data is developed. The proposed taxonomy not only provides a mechanism to deal with this problem but also includes more dirty data types than any existing taxonomy of its kind.
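A rule-based view of dirty data can be made concrete as predicates that flag violating records, one predicate per dirty-data type. The rules below are small illustrative examples, not the paper's taxonomy:

```python
# Each rule: (dirty-data type, predicate flagging a violating record).
RULES = [
    ("missing",      lambda r: r["email"] in ("", None)),
    ("bad_format",   lambda r: r["email"] and "@" not in r["email"]),
    ("out_of_range", lambda r: not (0 <= r["age"] <= 130)),
]

def audit(records):
    """Return (record index, dirty-data type) for every rule violation."""
    report = []
    for i, rec in enumerate(records):
        for name, pred in RULES:
            if pred(rec):
                report.append((i, name))
    return report

rows = [{"email": "a@b.com", "age": 31},
        {"email": "not-an-email", "age": 30},
        {"email": "", "age": 150}]
print(audit(rows))   # [(1, 'bad_format'), (2, 'missing'), (2, 'out_of_range')]
```

Attaching an estimated cleaning cost to each rule is one way such a taxonomy supports the cost trade-off the abstract raises.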

 

24. Effects of Data Imputation Methods on Data Missingness in Data Mining

Marvin L. Brown, Department of Computer Information Systems, College of Business, Grambling State University, Grambling, LA

Chien-Hua Mike Lin, Department of Computer and Information Science, School of Business, Cleveland State University, Cleveland, OH


The purpose of this paper is to study the effectiveness of data imputation methods in dealing with data missingness in the data mining phase of Knowledge Discovery in Databases (KDD). The application of data mining techniques without careful consideration of missing data can result in biased results and skewed conclusions. This research explores the impact of data missingness at various levels in KDD models employing neural networks as the primary data mining algorithm. Four of the most commonly utilized data imputation methods, Case Deletion, Mean Substitution, Regression Imputation, and Multiple Imputation, were evaluated using Root Mean Square (RMS) values, ANOVA testing, t-tests, and Tukey's Honestly Significant Difference test to assess the differences in performance between various Knowledge Discovery and Neural Network models, both in the presence and absence of missing data.
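Two of the four methods can be sketched compactly: mean substitution, and regression imputation in its simplest single-predictor form (Multiple Imputation repeats such a model with added noise across several completed data sets):

```python
def mean_substitution(xs):
    """Replace missing values (None) with the mean of the observed values."""
    obs = [x for x in xs if x is not None]
    m = sum(obs) / len(obs)
    return [m if x is None else x for x in xs]

def regression_imputation(x, y):
    """Fill missing y values from a least-squares line fitted on the
    complete (x, y) pairs; single-predictor case only."""
    pairs = [(a, b) for a, b in zip(x, y) if b is not None]
    n = len(pairs)
    mx = sum(a for a, _ in pairs) / n
    my = sum(b for _, b in pairs) / n
    slope = (sum((a - mx) * (b - my) for a, b in pairs) /
             sum((a - mx) ** 2 for a, _ in pairs))
    intercept = my - slope * mx
    return [b if b is not None else intercept + slope * a
            for a, b in zip(x, y)]

print(mean_substitution([1.0, None, 3.0]))                    # [1.0, 2.0, 3.0]
print(regression_imputation([1, 2, 3, 4], [2.0, 4.0, None, 8.0]))
```

Case Deletion, the remaining simple method, just drops any record containing a None before modeling.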

 

25. Hybrid Distributed Real Time Scheduling Algorithm

A. Prashanth Rao, Research Scholar, JNTU College of Engineering, Hyderabad (Dt), A.P, India

Dr. A. Govardhan, Professor of CSE & Principal, JNTU College of Engineering, KarimNagar (Dt), A.P, India

C. Venu Gopal, Research Associate of CSE, Osmania University, Hyderabad, A.P, India
 

In the design of real-time distributed systems, the scheduling problem is known to be NP-hard and has been addressed in the literature. However, due to the growing complexity of real-time applications, there is a need for an optimal dynamic scheduling algorithm. In this paper, we describe a heuristic hybrid scheduling algorithm which combines both static and dynamic tasks. Initially, a processor is allocated a fixed number of units based on pre-defined tasks generated from different sensors, and a certain number of units is reserved for dynamically created tasks. When a dynamic task arrives at a node, the local scheduler at that node attempts to guarantee that the task will complete execution before its deadline on that node. If the attempt fails, the scheduler searches for a node where the task can be feasibly scheduled. This type of scheduling produces the best results, and the algorithm is configurable.
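The local-then-remote guarantee step can be sketched as follows (a much-simplified feasibility test on committed work; the paper's scheduler presumably uses a richer task and deadline model):

```python
def can_guarantee(node_backlog, exec_time, deadline):
    """Simplified feasibility test: the task meets its deadline if the node's
    currently committed work plus the task's execution time fits before it."""
    return node_backlog + exec_time <= deadline

def schedule_dynamic_task(backlogs, exec_time, deadline):
    """Try the local node first (index 0), then search the remaining nodes,
    mirroring the local-then-remote strategy described in the abstract."""
    for node, backlog in enumerate(backlogs):
        if can_guarantee(backlog, exec_time, deadline):
            backlogs[node] += exec_time   # commit the task to this node
            return node
    return None  # no node can feasibly schedule the task

nodes = [8, 3, 5]   # committed work (time units) per node; node 0 is local
```

With a task of execution time 4 and deadline 10, the local node (backlog 8) fails the test, so the dispatcher falls through to node 1.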

 

26. Applying GPUs for Smith-Waterman Sequence Alignment Acceleration

Phong H. Pham, High Performance Computing Center, Hanoi University of Science and Technology, Hanoi, Vietnam
Tan N. Duong, High Performance Computing Center, Hanoi University of Science and Technology, Hanoi, Vietnam
Ngoc M. Ta, High Performance Computing Center, Hanoi University of Science and Technology, Hanoi, Vietnam

 

The Smith-Waterman algorithm is a common local sequence alignment method which gives high accuracy. However, it requires a large amount of computation and storage memory, so implementations based on common computing systems are impractical. Here, we present our implementation of the Smith-Waterman algorithm on a cluster of graphics cards (GPU cluster) – swGPUCluster. The implementation is tested on a cluster of two nodes: one node is equipped with two dual-GPU NVIDIA GeForce GTX 295 cards, and the other includes one dual-GPU NVIDIA GeForce GTX 295 card and a Tesla C1060 card. Depending on the length of the query sequences, swGPUCluster performance increases from 37.33 GCUPS to 46.71 GCUPS. This result demonstrates the great computing power of GPUs and their high applicability in the bioinformatics field.
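For reference, the dynamic-programming recurrence underlying Smith-Waterman (independent of the GPU mapping) can be written directly; the scoring parameters below are illustrative defaults, not the paper's:

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment: fill the DP matrix H, where each cell
    takes the best of a diagonal match/mismatch, a gap in either sequence,
    or 0 (restart), and return the best local alignment score."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0, H[i - 1][j - 1] + s, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

The O(len(a) x len(b)) matrix fill is what makes the algorithm expensive for genomic databases, and why GPU parallelization of the anti-diagonals pays off.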

 

27. A Framework for Measuring the Performance and Power Consumption of Storage Components under Typical Workload

Dongjin Lee, Department of Engineering Science, The University Of Auckland, New Zealand

Michael O'Sullivan, Department of Engineering Science, The University Of Auckland, New Zealand

Cameron Walker, Department of Engineering Science, The University Of Auckland, New Zealand

 

Although the cost of storage components is reported accurately by vendors, it is not clear whether the performance (IOps, MiBps) and power consumption (W) specifications they provide are accurate under ‘typical’ workloads. Accurately measuring this information is a vital step in providing input for optimal storage system design. This paper measures storage disk performance and power consumption using ‘typical’ workloads. The workloads are generated using an open source version of the (industry standard) SPC-1 benchmark. This benchmark creates a realistic synthetic workload that aggregates multiple users utilizing data storage simultaneously. A flexible current sensor board has also been developed to measure various storage devices simultaneously. This work represents a significant contribution to data storage benchmarking resources (both performance and power consumption), as we have embedded the open source SPC-1 benchmark implementation, spc1, within the open source workload generator fio, in addition to our flexible current sensor development. The integration provides an easily available benchmark for researchers developing new storage technologies. This benchmark should give a reasonable estimate of performance under the official SPC-1 benchmark for systems that do not yet fulfill all the requirements of an official SPC-1 run. With accurate information, our framework shows promise in alleviating much of the complexity in future storage system design.


28. New Architecture for EIA-709.1 Protocol Implementation

Su Goog Shon, The University of Suwon, Bongdam-eup Wau-ri, San 2-2, Hwasung-city, Gyeonggi-do, Republic of Korea

Soo Mi Yang, The University of Suwon, Bongdam-eup Wau-ri, San 2-2, Hwasung-city, Gyeonggi-do, Republic of Korea

 

This paper proposes a new architecture for EIA-709.1 protocol implementation. The protocol is conventionally implemented with a proprietary processor and language, the Neuron chip and Neuron C, respectively, where the Neuron chip contains three processors. The proposed architecture uses only one general-purpose processor and standard ANSI C to implement the layers of EIA-709.1 except the physical layer. The data link, network, and other layers are implemented on one RISC processor, an ARM. Specifically, the data link layer of EIA-709.1, based on predictive p-persistent CSMA/CA, is implemented. The interface between the power-line-communication transceiver and the ARM-based data link layer is described. In conclusion, this research demonstrates improved performance and compatibility with the existing Neuron chip.
 

29. A Novel Approach to Multiagent based Scheduling for Multicore Architecture

G. Muneeswari, Research Scholar, R.M.K Engineering College, Anna University, Chennai
A. Sobitha Ahila, Research Scholar, R.M.K Engineering College, Anna University, Chennai
Dr. K. L. Shunmuganathan, Professor & Head, Department of CSE, R.M.K Engineering College, TamilNadu, India

 

In a multicore architecture, each package consists of a large number of processors. This increase in processor cores brings a new evolution in parallel computing. Besides enormous performance enhancement, the multicore package introduces many challenges and opportunities from the operating system scheduling point of view. Multiagent systems are concerned with the development and analysis of optimization problems; their main objective is to invent methodologies that let developers build complex systems that can solve sophisticated problems which are difficult for an individual agent to solve. In this paper we combine the AMAS theory of multiagent systems with the operating system scheduler to develop a new process scheduling algorithm for multicore architectures. This multiagent-based scheduling algorithm promises to minimize the average waiting time of the processes in the centralized queue and also reduces the task of the scheduler. We modified and simulated the Linux 2.6.11 kernel process scheduler to incorporate the multiagent system concept. The comparison is made for different numbers of cores with multiple combinations of processes, and the results show average waiting time versus the number of cores in the centralized queue.
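The reported metric, average waiting time versus number of cores for a centralized queue, can be illustrated with a toy dispatcher (this is not the paper's multiagent scheduler, just the measurement it reports):

```python
import heapq

def average_waiting_time(bursts, cores):
    """Dispatch processes from a centralized queue to whichever core becomes
    free first; with all processes arriving at time 0, a process's waiting
    time is simply the time at which its core starts running it."""
    free_at = [0] * cores          # time at which each core next becomes free
    heapq.heapify(free_at)
    total_wait = 0
    for burst in bursts:
        start = heapq.heappop(free_at)      # earliest-free core runs the process
        total_wait += start
        heapq.heappush(free_at, start + burst)
    return total_wait / len(bursts)
```

Running the same workload with more cores shows the expected trend: average waiting time in the centralized queue falls as the core count grows.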

 

30. A Knowledge Management Approach: Business Intelligence in an Intranet Data Warehouse

Lisa Soon, School of Information & Communication Technology, Central Queensland University, Mackay, Australia

Campbell Fraser, Department of International Business & Asian Studies, Griffith University, Brisbane, Australia

 

For contemporary businesses to stay viable, business intelligence is mission-critical. Although the importance of business intelligence is recognised, there is limited research on what information contributes to business intelligence and how business intelligence is sought for use in an organisational intranet. This research discusses how business intelligence is sought, captured and used by tapping into an intranet data warehouse as a knowledge management approach. It adopts a qualitative case study method using interview and observation techniques. A case study was conducted to examine how an intranet system was designed, how business intelligence was captured, and how it aided strategic planning and decision making in business operations. The respondents explained how structured business intelligence data was categorised and disseminated to users and how the information empowered staff in their work performance. The intranet design successfully retains staff knowledge within the organisation. It was also successful in drawing all internal resources together, capturing resources from external sources, and forming a common repository of organisational assets for use through organisational work procedures within the intranet.


31. A Real-time Service Oriented Infrastructure

Dimosthenis Kyriazis, National Technical University of Athens, Athens, Greece

Andreas Menychtas, National Technical University of Athens, Athens, Greece

George Kousiouris, National Technical University of Athens, Athens, Greece
Karsten Oberle, Alcatel Lucent, Stuttgart, Germany

Thomas Voith, Alcatel Lucent, Stuttgart, Germany

Michael Boniface, University of Southampton IT Innovation Centre, Southampton, UK

Eduardo Oliveros, Telefónica Investigación y Desarrollo, Madrid, Spain
Tommaso Cucinotta, Scuola Superiore Sant'Anna, Pisa, Italy

Sören Berger, University of Stuttgart, Stuttgart, Germany

 

Service oriented environments and real-time systems have been two mutually exclusive technological areas. Taking into consideration the main concepts of service orientation, significant challenges exist in providing and managing the offered on-demand resources with the required level of Quality of Service (QoS), especially for real-time interactive and streaming applications. In this paper we propose an approach for providing real-time QoS guarantees by enhancing service oriented infrastructures with coherent and consistent real-time attributes at various levels (application, network, storage, processing). The approach considers the full lifecycle of service-based systems including service engineering, Service Level Agreement (SLA) negotiation and management, service provisioning and monitoring. QoS parameters at application, platform and infrastructure levels are given specific attention as the basis for provisioning policies in the context of temporal constraints. We also demonstrate through use cases the need for real-time scheduling as a fundamental process to provide QoS guarantees.

 

32. A Novel Co-operative Channel Assignment Scheme for Indoor Base Stations

Akindele Segun AFOLABI, Graduate School of Engineering, Kobe University, Japan

Chikara OHTA, Graduate School of System Informatics, Kobe University, Japan

Hisashi TAMAKI, Graduate School of System Informatics, Kobe University, Japan

 

This paper presents a co-operation technique for channel assignment (CA) in indoor base stations (BSs). Indoor BSs are usually deployed by users in an ad-hoc manner, which makes prior network planning by network operators impossible. If the same pool of radio resources (e.g., channels) is used by nearby BSs, co-operation between these BSs is vital for resolving problems such as interference. The proposed scheme considers the femtocell base station (FBS), a typical example of an indoor BS. FBSs in close proximity exchange UE-assisted (User Equipment) measured reference power information and, based on the position of each FBS, inter-BS interaction is used to form clusters. In each cluster, the cluster-head (CH) uses channel assignment tables to assign channel resources to cluster members (CMs) in a distributed manner. This scheme helps ensure that the interests of neighbor BSs are always considered whenever a BS makes use of the available network resources. Our simulation results show that co-operative CA using a cluster-based approach yields higher average user throughput than autonomous channel selection by individual BSs.

 

33. Scattered Identities - A Governance Nightmare!

Nachiketa Sharma, Ramana Kapavarapu

 

This study describes the result of collaboration between a hi-tech manufacturer and its partners to develop an identity management architecture. The fundamental goal of this architecture is flexible on-boarding, which can in turn support quarterly quoting processes, enable smooth access to forms, templates, and process documents, and increase direct communication between partners and suppliers via e-mail. Enhancing file exchange, archiving and storage capabilities are further objectives. This study will help in creating a backbone and framework for next-generation collaboration capabilities (e.g., instant messaging, video on demand). Transaction time will be reduced by providing one-click access to the right set of tools and systems, with an enhanced security framework for all data exchanges.

 

34. Developing a neighborhood-scale wireless notification prototype

Sumita Mishra

Murali Venkatesh, Syracuse University

Bahram Attaie, Syracuse University


We outline an innovative approach to the development of a prototype of a neighborhood notification system (NNS). The NNS application residing on smart phones will use software defined radio and cognitive radio components to interface with radio frequency transceivers. Mesh networking is proposed for emergency notification and disaster response coordination using NNS. Our focus has been on the IEEE 802.15.4 and the very recent IEEE 802.15.5 mesh networking standard for low data rate connectivity among low power nodes (or nodes whose power consumption needs to be low). The innovation stems from bringing together different hardware and software components – some of which, like our Software Defined Radio (SDR) platform, are themselves still evolving and others, like the meshing platform, are very new – to propose an adaptive, reconfigurable, infrastructure-less ad hoc wireless solution to emergency communications in the unlicensed ISM RF band.

 

35. Securing End-to-End Wireless Mesh Networks Using Ticket-Based Authentication

Rushdi A. Hamamreh, Computer Engineering Department, Faculty of Engineering, Al-Quds University, Al-Quds, Palestine
Anas M. Melhem, Computer Engineering Department, Faculty of Engineering, Al-Quds University, Al-Quds, Palestine

 

A hybrid wireless mesh network (WMN) consists of two types of nodes: mesh routers, which are relatively static and energy-rich devices, and mesh clients, which are relatively dynamic and power-constrained devices. In this paper we present a new model for WMN end-to-end security which divides the authentication process into two phases: a Mesh Access Point phase based on asymmetric cryptography, and a Mesh Client phase based on a server-side certificate such as EAP-TTLS and PEAP.

 

36. New road traffic network models for control

PETER, Department of Control and Transport Automation, Budapest University of Technology and Economics (BME), Budapest, Hungary

J. BOKOR, Systems and Control Laboratory, Computer and Automation Research Institute of the Hungarian Academy of Sciences, Budapest, Hungary

 

This paper introduces a method for the mathematical modeling of large-scale road traffic networks. The analysed model is applicable to the simulation testing and planning of large-scale road traffic networks and to the regulation of traffic systems. The elaborated model is in state-space form, where the states are vehicle densities on particular lanes and the dynamics are described by a nonlinear, state-constrained positive system. This model can be used directly for simulation and analysis and as a starting point for investigating various control strategies. Stability of the traffic over the network can be analyzed by constructing a linear Lyapunov function and applying the associated theory.
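In generic notation (a sketch of the class of models named in the abstract, not the paper's exact equations), such a positive state-space model and a linear Lyapunov candidate take the form:

```latex
\dot{x}(t) = f\bigl(x(t), u(t)\bigr), \qquad x(0) \ge 0 \;\Rightarrow\; x(t) \ge 0,
```

where $x_i(t)$ is the vehicle density on lane $i$. Because the state stays in the nonnegative orthant, a linear (copositive) Lyapunov function suffices:

```latex
V(x) = c^{\top} x, \quad c \succ 0, \qquad
\dot{V}(x) = c^{\top} f(x, u) < 0 \;\; \text{for } x \ne 0,
```

i.e., stability over the network reduces to finding a positive weight vector $c$ that makes the weighted total density decrease along trajectories.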

 

37. The Computing Journey: From Abacus to Quantum Computer

Nan Wu, Computer Science Department, NanJing University, NanJing, China

FangMin Song, Computer Science Department, NanJing University, NanJing, China

Xiangdong Li,  CST, NYC College of Technology, CUNY, NY, USA

 

This paper briefly reviews the journey of human development of computing technology: from the abacus to the traditional computer, and on to the quantum computer. Quantum information has been heavily studied since the concept appeared in the 1980s. Today’s technology for quantum computing devices is summarized and a future quantum computer architecture is introduced.

 

38. Harmony Search for Finding the Best Hamiltonian Tour in Iran

Seyyed Peyman Emadi, Institution of higher education, Roozbeh
Hamid Maleki, Institution of higher education, Roozbeh
Mina Honari, Institution of higher education, Roozbeh

 

The Traveling Salesman Problem (TSP) is one of the most important applied problems in combinatorial optimization, with transportation being its most prominent practical application. Since the success of a solution method reflects its usefulness across science and engineering, a variety of methods have been suggested for solving it. In this paper, we find the shortest tour by solving the TSP for 104 selected points in Iran, using the Harmony Search algorithm. To assess the applicability of the results, we optimize them by tuning the algorithm's parameters. This comparison shows a remarkable improvement in solution quality obtained from the Harmony Search algorithm by changing its parameters.
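A minimal Harmony Search sketch for the TSP (the operators and parameter values below are illustrative; the paper's exact parameterization is not reproduced here):

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def harmony_search_tsp(dist, memory_size=10, iterations=2000, hmcr=0.9, par=0.3, seed=0):
    """Harmony Search sketch for the TSP: the harmony memory holds candidate
    tours; with rate HMCR a new tour is drawn from memory and, with rate PAR,
    pitch-adjusted by swapping two cities; otherwise it is improvised at random.
    A new tour replaces the worst memory entry if it is shorter."""
    rng = random.Random(seed)
    n = len(dist)
    memory = [rng.sample(range(n), n) for _ in range(memory_size)]
    for _ in range(iterations):
        if rng.random() < hmcr:
            new = memory[rng.randrange(memory_size)][:]
            if rng.random() < par:                  # pitch adjustment: swap two cities
                i, j = rng.sample(range(n), 2)
                new[i], new[j] = new[j], new[i]
        else:
            new = rng.sample(range(n), n)           # random improvisation
        worst = max(range(memory_size), key=lambda k: tour_length(memory[k], dist))
        if tour_length(new, dist) < tour_length(memory[worst], dist):
            memory[worst] = new
    return min(memory, key=lambda t: tour_length(t, dist))

# 4 cities at the corners of a unit square: the optimal tour is the perimeter, length 4.
SQ2 = 2 ** 0.5
dist = [[0, 1, SQ2, 1], [1, 0, 1, SQ2], [SQ2, 1, 0, 1], [1, SQ2, 1, 0]]
best = harmony_search_tsp(dist)
```

Tuning HMCR and PAR trades exploitation of the memory against exploration, which is the parameter study the abstract describes.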

Issues and Challenges in Applying Computer-Based Distance Learning system as an alternative...

Company Description : Many scholars have listed the problems that prevent organizations’ employees from attending face-to-face training. Additionally, they have presented Information and Communication Technology (ICT), especially distance learning systems, as an important way to overcome these obstacles. However, they did not rely on empirical studies to identify those problems or to compare traditional training methods with computer-based distance learning systems. Therefore, this survey aims to distinguish between traditional training methods and a computer-based distance learning system as an important way to overcome employees’ problems with traditional training, including the challenges and some issues.

Product Type : Academic Conferences

Author : Various

PDF 5p

Languages : English

Test Pattern Generation Algorithm Using Structurally Synthesized BDD

Company Description : Structurally Synthesized Binary Decision Diagrams (SSBDDs) have the important characteristic property of keeping information about a circuit’s structure. The Boolean difference of a circuit is used to find test patterns for stuck-at faults in combinational circuits, but the algebraic manipulation involved in solving the Boolean difference is a tedious job. In this paper an efficient algorithm is proposed to compute the Boolean difference and test patterns simply by searching the paths of the SSBDD. This model reduces algebraic manipulation and takes less time to compute the test patterns.
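The Boolean difference underlying the method is easy to state: df/dx = f|x=0 XOR f|x=1, and a stuck-at fault on x is testable exactly where the difference is 1. A brute-force sketch of that definition (the SSBDD path-search algorithm in the paper avoids this kind of enumeration):

```python
from itertools import product

def boolean_difference(f, var, nvars):
    """Boolean difference df/dx_var = f(..x=0..) XOR f(..x=1..).
    Returns the set of assignments of the *remaining* inputs where it is 1,
    i.e. where a fault on x_var is observable at the output."""
    obs = set()
    for bits in product([0, 1], repeat=nvars):
        if bits[var] == 0:
            b0, b1 = list(bits), list(bits)
            b1[var] = 1
            if f(*b0) != f(*b1):
                obs.add(bits[:var] + bits[var + 1:])
    return obs

def stuck_at_tests(f, var, nvars, stuck):
    """Test patterns for x_var stuck-at-`stuck`: drive x_var to the opposite
    value on an assignment where the Boolean difference is 1."""
    tests = []
    for rest in boolean_difference(f, var, nvars):
        bits = list(rest[:var]) + [1 - stuck] + list(rest[var:])
        tests.append(tuple(bits))
    return tests

f = lambda a, b, c: (a and b) or c   # example combinational function f = ab + c
```

For f = ab + c, df/da is 1 only at b=1, c=0, so the single test for "a stuck-at-0" is (a,b,c) = (1,1,0); the exponential sweep here is exactly the cost the SSBDD path search eliminates.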

Product Type : Academic Conferences

Author : Various

PDF 5p

Languages : English

Implementation of DNA Pattern Recognition in Turing Machines

Company Description : Pattern recognition is the act of taking in raw data and taking an action based on the category of the pattern. DNA pattern recognition has applications in almost any field: forensics, genetic engineering, bioinformatics, DNA nanotechnology, history and so on. DNA molecules can be so large that performing pattern recognition on them with common techniques is a tedious task. Hence this paper describes pattern recognition for DNA molecules using the concept of Turing Machines. It also simulates the standard Turing Machine that performs DNA pattern recognition on the Universal Turing Machine.
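A toy version of the idea, a single-tape Turing machine that recognizes a fixed DNA motif, can be simulated directly (the machine below is an illustration, not the paper's construction):

```python
def run_tm(tape, transitions, start="q0", accept="acc", reject="rej", blank="_"):
    """Simulate a single-tape Turing machine. `transitions` maps
    (state, symbol) -> (next_state, symbol_to_write, head_move in {L, R})."""
    cells = dict(enumerate(tape))
    state, head = start, 0
    while state not in (accept, reject):
        symbol = cells.get(head, blank)
        state, written, move = transitions[(state, symbol)]
        cells[head] = written
        head += 1 if move == "R" else -1
    return state == accept

# Toy machine that accepts iff the DNA string contains the motif "AT":
# q0 scans right looking for A; q1 means "just saw A", so reading T accepts.
T = {("q0", "A"): ("q1", "A", "R"),
     ("q1", "A"): ("q1", "A", "R"),
     ("q1", "T"): ("acc", "T", "R")}
for s in "CGT":
    T.setdefault(("q0", s), ("q0", s, "R"))
for s in "CG":
    T[("q1", s)] = ("q0", s, "R")
for q in ("q0", "q1"):
    T[(q, "_")] = ("rej", "_", "R")
```

The same simulator runs any transition table, which is the sense in which a Universal Turing Machine can host a motif-recognition machine as data.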

Product Type : Academic Conferences

Author : Sumitha C.H

PDF 6p

Languages : English

A Precise Evolutionary Approach to Solve Multivariable Functional Optimization

Company Description : The Genetic Algorithm (GA) is a stochastic search and optimization method imitating the metaphor of natural biological evolution. A GA manages a population of solutions instead of a single solution to find an optimal solution to a given problem. Although the GA draws attention for functional optimization, it may search the same point again due to its probabilistic operations, which hinders its performance. In this study, we propose a novel modification of the standard Genetic Algorithm (sGA) to achieve better performance. The sGA is modified in the selection and recombination stages to produce the Precise Genetic Algorithm (PGA). The PGA searches the target space efficiently and shows several potential advantages over the conventional GA when tested on functions having multiple independent variables.
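A plain sGA sketch with a memoized fitness function illustrates the re-evaluation issue the PGA addresses (the PGA's actual selection and recombination changes are not reproduced here; operators and parameters below are generic choices):

```python
import random

def genetic_minimize(f, bounds, pop_size=30, generations=60, mut_rate=0.2, seed=1):
    """Plain real-coded GA sketch: tournament selection, uniform crossover,
    Gaussian mutation. The fitness cache skips re-evaluating points the
    search has already visited; repeated visits are the sGA inefficiency
    the paper targets."""
    rng = random.Random(seed)
    cache = {}
    def fitness(ind):
        key = tuple(ind)
        if key not in cache:
            cache[key] = f(ind)
        return cache[key]
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a = min(rng.sample(pop, 3), key=fitness)    # tournament selection
            b = min(rng.sample(pop, 3), key=fitness)
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            child = [min(max(x + rng.gauss(0, 0.2), lo), hi)
                     if rng.random() < mut_rate else x
                     for x, (lo, hi) in zip(child, bounds)]
            nxt.append(child)
        pop = nxt
    return list(min(cache, key=cache.get))              # best point ever evaluated

sphere = lambda x: sum(v * v for v in x)                # multivariable test function
best = genetic_minimize(sphere, [(-5, 5), (-5, 5)])
```

On the two-variable sphere function the search converges close to the origin; the cache size also gives a direct count of distinct points actually evaluated.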

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

GPUMemSort: A High Performance Graphics Co-processors Sorting Algorithm for Large Scale...

Company Description : In this paper, we present a GPU-based sorting algorithm, GPUMemSort, which achieves high performance in sorting large-scale in-memory data by taking advantage of GPU processors. It consists of two algorithms: an in-core algorithm, which is responsible for sorting data in GPU global memory efficiently, and an out-of-core algorithm, which is responsible for dividing large-scale data into multiple chunks that fit in GPU global memory. GPUMemSort is implemented on NVIDIA’s CUDA framework, and some critical, detailed optimization methods are also presented. Tests of different algorithms have been run on multiple data sets. The experimental results show that our in-core sorting outperforms other comparison-based algorithms and that GPUMemSort is highly effective in sorting large-scale in-memory data.
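The two-level structure can be mimicked on the CPU: sort chunks that fit the fast memory, then merge the sorted runs (a sketch of the idea only, not CUDA code):

```python
import heapq

def chunked_sort(values, chunk_size):
    """Out-of-core sketch in the spirit of GPUMemSort: split the input into
    chunks that fit 'device memory' (chunk_size), sort each chunk
    independently (the in-core step), then k-way merge the sorted runs."""
    runs = [sorted(values[i:i + chunk_size])
            for i in range(0, len(values), chunk_size)]
    return list(heapq.merge(*runs))
```

In GPUMemSort the per-chunk `sorted` call is replaced by the GPU in-core sort over global memory, and the merge recombines the runs on the host side.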

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

Image Segmentation using Two-Layer Pulse Coupled Neural Network with Inhibitory Linking Field

Company Description : For over a decade, Pulse Coupled Neural Network (PCNN) based algorithms have been used for image segmentation. Though there are several versions of PCNN-based image segmentation methods, almost all of them use a single-layer PCNN with excitatory linking inputs. Three major issues associated with the single-burst PCNN need attention. Often, the PCNN parameters, including the linking coefficient, are determined by trial and error. The segmentation accuracy of the single-layer PCNN is highly sensitive to the value of the linking coefficient. Finally, in the single-burst mode, neurons corresponding to background pixels do not participate in the segmentation process. This paper presents a new two-layer network organization of the PCNN in which both excitatory and inhibitory linking inputs exist. The value of the linking coefficient and the threshold signal at which primary firing of neurons starts are determined directly from the image statistics. Simulation results show that the new PCNN achieves significant improvement in segmentation accuracy over the widely known Kuntimad’s single-burst image segmentation approach. The two-layer PCNN based image segmentation method overcomes all three drawbacks of the single-layer PCNN.

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

Efficient Fractal Image Coding using Fast Fourier Transform

Company Description : Fractal coding is a novel technique for image compression. Though the technique has many attractive features, the large encoding time makes it unsuitable for real-time applications. In this paper, an efficient algorithm for fractal encoding which operates on the entire domain image instead of overlapping domain blocks is presented. The algorithm drastically reduces the encoding time compared to the classical full search method. The reduction in encoding time is mainly due to the use of a modified cross-correlation based similarity measure. The implemented algorithm employs an exhaustive search of domain blocks and their isometry transformations to investigate their similarity with every range block. The application of the Fast Fourier Transform in the similarity measure calculation speeds up the encoding process. The proposed eight isometry transformations of a domain block exploit the properties of the Discrete Fourier Transform to minimize the number of Fast Fourier Transform calculations. Experimental studies on the proposed algorithm demonstrate that the encoding time is reduced drastically, with an average speedup factor of 538 with respect to the classical full search method and comparable values of Peak Signal-to-Noise Ratio.
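The FFT-based similarity measure rests on the correlation theorem: a circular cross-correlation that costs O(N^2) directly can be computed in O(N log N) as IFFT(conj(FFT(a)) * FFT(b)). A self-contained sketch (an educational radix-2 FFT, not the paper's optimized implementation):

```python
import cmath

def fft(x):
    """Radix-2 Cooley-Tukey FFT; the length must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        t = cmath.exp(-2j * cmath.pi * k / n) * odd[k]
        out[k], out[k + n // 2] = even[k] + t, even[k] - t
    return out

def ifft(x):
    """Inverse FFT via the conjugation trick: ifft(x) = conj(fft(conj(x))) / n."""
    n = len(x)
    return [v.conjugate() / n for v in fft([v.conjugate() for v in x])]

def circular_xcorr(a, b):
    """c[k] = sum_n a[n] * b[(n + k) mod N], computed via the correlation
    theorem: c = IFFT(conj(FFT(a)) * FFT(b))."""
    fa, fb = fft(a), fft(b)
    return [v.real for v in ifft([u.conjugate() * w for u, w in zip(fa, fb)])]
```

In fractal encoding, the analogous 2-D correlation lets one FFT of the domain image be reused against every range block, which is where the reported speedup comes from.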

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

Application Virtualization for Teaching Hospitals in Nigeria

Company Description : Information technology has improved operations management globally. The health sector has benefited from this revolution through the introduction of eHealth solutions. The cost-effective utilization of information technology and flexibility in adapting to and adopting organizational changes have posed challenges to health institutions in developing countries. In the case of Nigeria, the implementation of the National Health Policy entails the delivery of a full-packaged health care system; this package includes health education, maternal, newborn and child healthcare, nutrition and immunization. All of these require record keeping and data storage. The management of massive data storage and its on-demand availability have been a source of concern to health institutions in the country. This has brought about a slow rate of hospital-to-hospital collaboration, insecure information exchange between and across institutions, and a lack of proper accountability in the health sector, among other challenges. In this paper we propose a Cloud computing infrastructure which will adopt application virtualization to address the challenges in health care delivery in the country. This is an emerging technology that will provide eHealth solutions as services to tenants, a process known as Software-as-a-Service (SaaS). The infrastructure should deliver a single application through the browser to thousands of clients or stakeholders using a scalable multitenant architecture. This will help to minimize cost, manage healthcare resources effectively, and support the realization of the Millennium Development Goals (MDGs) on healthcare.

Product Type : Academic Conferences

Author : Various

PDF 5p

Languages : English

Rotation independent hierarchical representation for Open and Closed Curves and its Applications

Company Description : The algorithm used for the segmentation of an image, and scheme used for the representation of the segmentation result are mostly selected based on the final image analysis or interpretation objective. The boundary based image segmentation and representation system developed by Nabors segments and stores the result as a graph-tree hierarchical structure that is capable of supporting diverse applications. This paper shows that Nabors’ hierarchical representation of curves is not invariant to rotation, and proposes an enhanced representation which retains its structure and remains invariant under rotation. The curve matching algorithm which matches two curves based on their hierarchical representation makes it easy to determine if a curve is a section of a larger curve. The potential of the representation is illustrated by developing image registration and image stitching methods based on the new representation.

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

Key Agreement For Large-Scale Dynamic Peer Group

Company Description : Many applications in distributed computing systems, such as IP telephony, teleconferencing, collaborative workspaces, interactive chats and multi-user games, involve dynamic peer groups. In order to secure communications in dynamic peer groups, group key agreement protocols are needed. In this paper, we come up with a new group key agreement protocol, composed of a basic protocol and a dynamic protocol, for large-scale dynamic peer groups. Our protocols are natural extensions of the one-round tripartite Diffie-Hellman key agreement protocol. In view of this, our protocols are believed to be more efficient than group key agreement protocols built on the two-party Diffie-Hellman key agreement protocol. In addition, our protocols have the properties of group key secrecy, forward and backward secrecy, and key independence.
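The two-party Diffie-Hellman exchange that such group protocols generalize can be sketched as follows (toy parameters for illustration; the one-round tripartite variant additionally relies on bilinear pairings, which are not shown here):

```python
import secrets

# Toy Diffie-Hellman parameters; a real deployment uses a standardized
# large prime-order group, not a 61-bit modulus.
P = 2 ** 61 - 1   # a small Mersenne prime, for illustration only
G = 3

def keypair():
    priv = secrets.randbelow(P - 2) + 1        # private exponent in [1, P-2]
    return priv, pow(G, priv, P)               # (private, public = G^priv mod P)

a_priv, a_pub = keypair()
b_priv, b_pub = keypair()
shared_a = pow(b_pub, a_priv, P)   # A computes (G^b)^a mod P
shared_b = pow(a_pub, b_priv, P)   # B computes (G^a)^b mod P
```

Both sides reach G^(ab) mod P without transmitting it; the group protocols in the paper arrange many such exponentiations (or pairings, in the tripartite case) so that all members converge on one group key.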

Product Type : Academic Conferences

Author : Various

PDF 9p

Languages : English

Identifying Potential Security Flaws Using Loophole Analysis and the SECREt

Company Description : In contemporary software development there are a number of methods that attempt to ensure the security of a system. Many of these methods, however, are introduced in the latter stages of development, or try to address the issue of securing a software system by envisioning possible threats to that system, knowledge that is usually both subjective and esoteric. In this paper we introduce the concept of path fixation and discuss how contradictory paths, or loopholes, discovered during requirements engineering using only a requirements specification document, can lead to potential security flaws in a proposed system. The SECREt is a proof-of-concept prototype tool developed to demonstrate the effectiveness of loophole analysis. We discuss how the tool performs a loophole analysis and present the results of tests conducted on an actual specification document. We conclude that loophole analysis is an effective, objective method for the discovery of potential vulnerabilities in proposed systems and that the SECREt can be successfully incorporated into the requirements engineering process.

Product Type : Academic Conferences

Author : Various

PDF 8p

Languages : English

Exploitation of Vulnerabilities in Cloud-Storage

Company Description : This paper presents the vulnerabilities of cloud storage and the various possible attacks exploiting them, relating to cloud security, one of the challenging aspects of cloud computing. The attacks are classified into three broad categories, of which the social-networking-based attacks are recent attacks evolving out of existing technologies such as P2P file sharing. The study is extended to available defense mechanisms and current research areas in cloud storage. Based on the study, a simple cloud storage system is implemented, and its major aspects, such as the login mechanism, encryption techniques and key management techniques, are evaluated against the presented attacks. The study shows that cloud storage consumers still depend on trust and contracts agreed with the service provider, and that there are no proven defense mechanisms against the attacks. Furthermore, emerging technologies could possibly break all key-based encryption mechanisms.

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

How to Improve the Performance of a Neural Network in the Hardened Password Mechanism

Company Description : A wide variety of systems, ubiquitous in our daily activities, require personal identification schemes that verify the identity of individuals requesting their services. A non-exhaustive list of such applications includes secure access to buildings, computer systems, cellular phones and ATMs, crossing of national borders, and boarding of planes, among others. In the absence of robust schemes, these systems are vulnerable to the wiles of an impostor. Current systems are based on the three vertices of the authentication triangle: possession of a token, knowledge of a secret, and possession of the required biometric. Due to the weaknesses of the de facto password scheme, the inclusion of its inherent keystroke rhythms has been proposed, and systems that implement such security measures are on the market. This correspondence investigates the possibility of and ways for optimising the performance of the hardened password mechanism using the widely accepted Neural Network classifier. It represents a continuation of previous work in that direction.

Product Type : Academic Conferences

Author : Various

PDF 8p

Languages : English

ATM Frauds - Preventive Measures and Cost Benefit

Company Description : It is well known that criminals have many ways of illegally accessing ATMs to reach the accounts of legitimate users. In this paper, we briefly provide an overview of the possible fraudulent activities that may be perpetrated against ATMs and justify why the use of biometrics should be considered as a preventive measure. A prototype of a biometric ATM was designed, and questionnaires were distributed to users for their opinions. Finally, the paper concludes by giving a simple risk and cost-benefit analysis for the proposed design.

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

Sign Language

Company Description : Human interaction usually focuses on the world of sound, where communication is based on speech and most information is conveyed via voice and other sounds. However, there are people who live in a world of silence: for the hearing impaired, nothing can be heard, and voice communication is impossible or troublesome. Hence they have invented sign language. A sign language consists of a grammar and a vocabulary. The grammar is usually significantly different from that of spoken and written languages, whereas the vocabulary is composed of many hand gestures and hand movements which convey the most important information, supported by whole-body movement and facial expressions. Given the differences in the way the hearing impaired perceive the world, they encounter great difficulty in learning and using written language, which is so common in daily communication. Since sign language cannot be understood by others, systems are needed that can understand it; existing systems do not handle this task appropriately or accurately. We introduce the concept of a chat for sign-language-based communication which overcomes the deficiencies of the existing approaches. In the current system an action sensor is available; this sensor has pressure switches, and when pressure is applied the corresponding switch closes and a signal is sent to a microcontroller. The microcontroller senses the signal from the pressure switch, determines the switch position, and sends a command signal to the computer over an RS232 cable, which converts the microcontroller's signal into one the computer understands. As soon as the computer receives the signal, the program running on it detects the particular word and plays it at the same instant. In this way sign language is converted into voice language.

Product Type : Academic Conferences

Author : Various

PDF 4p

Languages : English

Mobile Based Interaction System for the Paralyzed People

Company Description : New methods and technologies are needed to face present and future challenges such as addressing the problems of disabled people. In the present work we have focused on enabling entertainment for quadriplegics. By utilizing the sensing and processing capabilities of today's mobile devices it is possible to capture rich quantitative data about the usage and context of mobile and ubiquitous applications in the field. In this paper we propose a tool that uses one of these capabilities of the mobile device, the accelerometer sensors. A mobile device with accelerometer sensors, fitted on the head of the quadriplegic, is used to track the head gestures made. Paralyzed people are those whose body movements are restricted due to injury to the brain, some other malfunction of parts of the brain, or the spinal cord. The tool captures and recognizes each gesture and performs the required action, i.e. maps it onto the operations of the computer in front of the subject. The tool thus helps a mobility-restricted person to use the computer and entertain himself or herself when alone at home. The final part of the paper describes the experimentation goals, the overall process and preliminary results. Future work directions are also indicated.

Product Type : Academic Conferences

Author : Various

PDF 10p

Languages : English

Design and Simulation of High Performance Parallel Architectures Using the ISAC Language

Company Description : Most modern embedded systems for multimedia and network applications are based on parallel data stream processing. The data processing can be done using very long instruction word (VLIW) processors, using more than one high-performance application-specific instruction set processor (ASIP), or even by their combination on a single chip. The design and testing of these complex systems is a time-consuming and iterative process. Architecture description languages (ADLs) are one of the most effective solutions for single-processor design. However, support for describing parallel architectures and multi-processor systems is very limited or completely missing in today's ADLs. This article presents new extensions for the existing architecture description language ISAC. These extensions are used for easy and fast prototyping and testing of parallel systems and processors.

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

Efficient Implementation of Parallel Path Planning Algorithms on GPUs

Company Description : In robot systems several computationally intensive tasks can be found, with path planning being one of them. Especially in dynamically changing environments, it is difficult to meet real-time constraints with a serial processing approach. For those systems using standard computers, a promising option is to employ a GPGPU as a coprocessor in order to offload those tasks which can be efficiently parallelized. We implemented selected parallel path planning algorithms on NVIDIA's CUDA platform and were able to accelerate all of these algorithms efficiently compared to a multi-core implementation. We present the results and more detailed information about the implementation of these algorithms.
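The paper's CUDA implementations are not reproduced here, but the family of path-planning algorithms it accelerates can be illustrated with a serial sketch. Below is a minimal, hypothetical Python example of wavefront (breadth-first) search on an occupancy grid, a planner whose frontier expansion is a classic candidate for GPU parallelization; all names and parameters are illustrative, not taken from the paper.

```python
from collections import deque

def grid_bfs(grid, start, goal):
    """Breadth-first wavefront search on a 4-connected occupancy grid.
    grid: 2D list where 0 = free cell, 1 = obstacle.
    Returns the shortest path length in steps, or -1 if unreachable."""
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0}
    frontier = deque([start])
    while frontier:
        r, c = frontier.popleft()
        if (r, c) == goal:
            return dist[(r, c)]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] == 0 and (nr, nc) not in dist):
                dist[(nr, nc)] = dist[(r, c)] + 1
                frontier.append((nr, nc))
    return -1  # goal not reachable
```

On a GPU, each wavefront generation can be expanded by many threads at once, which is what makes this class of algorithms profitable to offload.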

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

An Improved Modular Hybrid Ant Colony Approach for Solving Traveling Salesman Problem

Company Description : Our primary aim is to design a framework to solve the well-known traveling salesman problem (TSP) using a combined approach of Ant Colony Optimization (ACO) and Genetic Algorithms (GA). Several solutions exist for this problem using ACO or GA, and even using a hybrid of the two. Our framework obtains an optimal solution by using a modular hybrid approach of ACO and GA along with heuristic methods. We have incorporated the GA, RemoveSharp and LocalOpt heuristics into the ACO module, so each iteration calls the GA and heuristics within the ACO module, which results in a larger amount of pheromone being deposited on the optimal path during the global pheromone update. As a result, convergence is quicker and the solution is optimal.
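The hybrid idea (local-search heuristics called inside each ACO iteration) can be sketched in miniature. The following Python is a simplified illustration, not the authors' framework: a basic ant colony with a 2-opt local improvement standing in for the paper's RemoveSharp/LocalOpt heuristics; the GA module is omitted and all parameter values are illustrative.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def two_opt(tour, dist):
    # Local improvement (stand-in for LocalOpt): reverse segments while it helps.
    improved = True
    while improved:
        improved = False
        for i in range(1, len(tour) - 1):
            for j in range(i + 1, len(tour)):
                new = tour[:i] + tour[i:j][::-1] + tour[j:]
                if tour_length(new, dist) < tour_length(tour, dist):
                    tour, improved = new, True
    return tour

def ant_colony_tsp(dist, n_ants=10, n_iter=30, rho=0.5, seed=0):
    random.seed(seed)
    n = len(dist)
    tau = [[1.0] * n for _ in range(n)]  # pheromone matrix
    best, best_len = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            tour, unvisited = [0], set(range(1, n))
            while unvisited:
                cur = tour[-1]
                weights = [tau[cur][j] / (dist[cur][j] + 1e-9) for j in unvisited]
                tour.append(random.choices(list(unvisited), weights)[0])
                unvisited.discard(tour[-1])
            tour = two_opt(tour, dist)  # hybrid step: heuristic inside the ACO loop
            length = tour_length(tour, dist)
            if length < best_len:
                best, best_len = tour, length
        # Evaporate, then deposit extra pheromone along the best tour so far,
        # reinforcing the optimal path during the global update.
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for i in range(n):
            a, b = best[i], best[(i + 1) % n]
            tau[a][b] += 1.0 / best_len
            tau[b][a] += 1.0 / best_len
    return best, best_len
```

Because the local search is applied before pheromone deposition, improved tours attract disproportionately more pheromone, which is the convergence-speeding effect the abstract describes.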

Product Type : Academic Conferences

Author : Various

PDF 5p

Languages : English

Parallel Solution of Covering Problems - Super-Linear Speedup on a Small Set of Cores

Company Description : This paper aims at better possibilities for solving problems of exponential complexity. Our special focus is combining the computational power of the four cores of a standard PC with better approaches in the application domain. As the main example we selected the unate covering problem, which must be solved, among other places, in the process of circuit synthesis and for graph-covering (domination) problems. We give an introduction to the wide field of problems that can be solved using Boolean models, explain the models and the classic solutions, and discuss the results of a selected model using a benchmark set. Subsequently we study sources of parallelism in the application domain and explore the improvements obtained by utilizing the four available cores of a PC in parallel. Starting with a uniform splitting of the problem, we suggest improvements by means of an adaptive division and an intelligent master. Our experimental results confirm that combining improvements in the application models and the algorithmic domain leads to a remarkable speedup and an overall improvement factor of more than 35 million in comparison with the improved basic approach.
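To make the problem concrete: in a unate covering instance, each column covers a set of rows and one seeks a cheap set of columns covering all rows. The sketch below is a generic greedy heuristic for this problem, written for illustration only; the paper's exact (parallel, branch-based) solvers are not reproduced here, and the function name is hypothetical.

```python
def greedy_unate_cover(rows, columns):
    """Greedy heuristic for the unate covering problem.
    rows: set of row ids that must be covered.
    columns: dict mapping column id -> set of rows that column covers.
    Returns a list of chosen columns covering all rows."""
    uncovered = set(rows)
    chosen = []
    while uncovered:
        # Pick the column covering the most still-uncovered rows.
        best = max(columns, key=lambda c: len(columns[c] & uncovered))
        if not columns[best] & uncovered:
            raise ValueError("instance is infeasible: some row is uncoverable")
        chosen.append(best)
        uncovered -= columns[best]
    return chosen
```

Exact solvers branch on a column being in or out of the cover; those independent branches are one natural source of the parallelism the paper exploits across cores.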

Product Type : Academic Conferences

Author : Various

PDF 10p

Languages : English

Estimating Demand for Dynamic Pricing in Electronic Markets

Company Description : Economic theory suggests sellers can increase revenue through dynamic pricing: selling identical goods or services at different prices. However, such discrimination requires knowledge of the maximum price that each consumer is willing to pay, information that is often unavailable. Fortunately, electronic markets offer a solution, generating vast quantities of transaction data that, if used intelligently, enable consumer behaviour to be modelled and predicted. Using eBay as an exemplar market, we introduce a model for dynamic pricing that uses a statistical method for deriving the structure of demand from temporal bidding data. This work is a tentative first step in a wider research programme to discover a practical methodology for automatically generating dynamic pricing models for the provision of cloud computing services, a pertinent problem of widespread commercial and theoretical interest.
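The core idea, deriving a demand structure from observed bids and pricing against it, can be shown with a toy empirical model. This Python sketch is illustrative only (it is not the paper's statistical method): demand at a price is estimated as the fraction of observed maximum bids at or above that price, and the price maximizing estimated revenue is selected.

```python
def estimate_demand(max_bids, price):
    """Empirical demand: fraction of observed bidders whose maximum bid
    meets or exceeds the candidate price."""
    return sum(b >= price for b in max_bids) / len(max_bids)

def revenue_maximising_price(max_bids):
    """Evaluate expected revenue price * demand(price) at each observed
    bid level and return the best candidate price."""
    return max(sorted(set(max_bids)),
               key=lambda p: p * estimate_demand(max_bids, p))
```

A real model would smooth and extrapolate the demand curve over time rather than use the raw empirical fractions, but the revenue trade-off it optimizes is the same.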

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

The ICT Induced Business Reconfiguration from Evolution to Revolution

Company Description : This paper explores the recent revolutionary levels of business reconfiguration in the public sector banks in Sri Lanka through an exploratory case study analysis. The study compares the five levels of business reconfiguration introduced by Venkatraman (1991) with the Sri Lankan public sector banks. Findings show that, rather than the evolutionary levels, these banks have achieved the revolutionary levels of business reconfiguration within a short period of time, and it is believed that they will achieve the optimum capability of level five in the near future.

Product Type : Academic Conferences

Author : Poongothai Selvarajan

PDF 6p

Languages : English

A Rule Based Taxonomy of Dirty Data

Company Description : There is a growing awareness that high-quality data is a key to today's business success and that dirty data existing within data sources is one of the causes of poor data quality. To ensure high-quality data, enterprises need processes, methodologies and resources to monitor, analyze and maintain the quality of their data. Nevertheless, research shows that many enterprises do not pay adequate attention to the existence of dirty data and have not applied useful methodologies to ensure high-quality data for their applications. One of the reasons is a lack of appreciation of the types and extent of dirty data. In practice, detecting and cleaning all the dirty data that exists in all data sources is quite expensive and unrealistic, so the cost of cleaning dirty data needs to be considered by most enterprises. This problem has not attracted enough attention from researchers. In this paper, a rule-based taxonomy of dirty data is developed. The proposed taxonomy not only provides a mechanism to deal with this problem but also includes more dirty data types than any existing taxonomy.

Product Type : Academic Conferences

Author : Various

PDF 9p

Languages : English

Effects of Data Imputation Methods on Data Missingness in Data Mining

Company Description : The purpose of this paper is to study the effectiveness of data imputation methods in dealing with data missingness in the data mining phase of Knowledge Discovery in Databases (KDD). The application of data mining techniques without careful consideration of missing data can result in biased results and skewed conclusions. This research explores the impact of data missingness at various levels in KDD models employing neural networks as the primary data mining algorithm. Four of the most commonly utilized data imputation methods - Case Deletion, Mean Substitution, Regression Imputation, and Multiple Imputation - were evaluated using Root Mean Square (RMS) values, ANOVA testing, t-tests, and Tukey's Honestly Significant Difference test to assess the differences in performance between various knowledge discovery and neural network models, both in the presence and absence of missing data.
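Of the four methods compared, Mean Substitution is the simplest to show, along with the RMS measure used to score imputation quality against the complete data. The Python below is a minimal sketch of these two pieces only, assuming missing entries are marked as None; it is not the paper's experimental code.

```python
import math

def mean_substitution(values):
    """Mean Substitution: replace missing entries (None) with the mean
    of the observed values."""
    observed = [v for v in values if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in values]

def rms_error(imputed, truth):
    """Root Mean Square error of an imputed series against the full data,
    the kind of score used to compare imputation methods."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(imputed, truth)) / len(truth))
```

Regression and Multiple Imputation replace the single mean with model-based draws, which is why they usually achieve lower RMS at higher missingness levels.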

Product Type : Academic Conferences

Author : Various

PDF 13p

Languages : English

Hybrid Distributed Real Time Scheduling Algorithm

Company Description : In the design of real-time distributed systems, the scheduling problem is known to be NP-hard and has been addressed extensively in the literature. However, due to the growing complexity of real-time applications, there is a need for an optimal dynamic scheduling algorithm. In this paper, we describe a heuristic hybrid scheduling algorithm which handles both static and dynamic tasks. Initially a processor is allocated a fixed number of units based on pre-defined tasks generated from different sensors, while a certain number of units is reserved for dynamically created tasks. When a dynamic task arrives at a node, the local scheduler at that node attempts to guarantee that the task will complete execution before its deadline on that node. If the attempt fails, the scheduler searches for a node where the task can be feasibly scheduled. This type of scheduling yields the best results, and the algorithm is configurable.
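The guarantee-then-search step can be sketched as a simple admission test. The Python below is an illustrative simplification (one task at a time, each node modelled only by the time it next becomes free), not the paper's algorithm; all names are hypothetical.

```python
def can_guarantee(node_ready_time, arrival, exec_time, deadline):
    """Admission check: does the task finish before its deadline if started
    as soon as both the task and the node are ready?"""
    start = max(node_ready_time, arrival)
    return start + exec_time <= deadline

def place_task(nodes, arrival, exec_time, deadline):
    """Try to guarantee the task locally, then search the other nodes,
    mirroring the local-first scheme described in the abstract.
    nodes: dict name -> earliest time the node is free (first entry = local).
    Returns the chosen node name, or None if no node can guarantee it."""
    for name, ready in nodes.items():
        if can_guarantee(ready, arrival, exec_time, deadline):
            nodes[name] = max(ready, arrival) + exec_time  # reserve the slot
            return name
    return None
```

A real scheduler would also account for the statically reserved units and preemption, but the feasibility test at each node has this same shape.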

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

Applying GPUs for Smith-Waterman Sequence Alignment Acceleration

Company Description : The Smith-Waterman algorithm is a common local sequence alignment method which gives high accuracy. However, it needs a large amount of computation and storage memory, so implementations based on common computing systems are impractical. Here, we present our implementation of the Smith-Waterman algorithm on a cluster including graphics cards (a GPU cluster), swGPUCluster. The implementation is tested on a cluster of two nodes: one node is equipped with two dual graphics cards (NVIDIA GeForce GTX 295), and the other includes a dual graphics card (NVIDIA GeForce GTX 295) and a Tesla C1060 card. Depending on the length of the query sequences, swGPUCluster performance increases from 37.33 GCUPS to 46.71 GCUPS. This result demonstrates the great computing power of GPUs and their high applicability in the bioinformatics field.
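For reference, the Smith-Waterman recurrence itself is compact; the cost the paper addresses comes from the O(mn) dynamic-programming matrix over long sequences. Below is a plain serial Python sketch of the score computation with illustrative scoring parameters (not the paper's GPU kernels or parameters).

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Local alignment score via the Smith-Waterman recurrence:
    H[i][j] = max(0, diagonal + substitution score, up + gap, left + gap).
    Returns the best local alignment score."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

GPU implementations parallelize the anti-diagonals of H, whose cells are mutually independent, which is what GCUPS (giga cell updates per second) measures.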

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

A Framework for Measuring the Performance and Power Consumption of Storage Components under...

Company Description : Although the cost of storage components is reported accurately by vendors, it is not clear whether the performance (IOps, MiBps) and power consumption (W) specifications they provide are accurate under ‘typical’ workloads. Accurately measuring this information is a vital step in providing input for optimal storage system design. This paper measures storage disk performance and power consumption using ‘typical’ workloads generated with an open source version of the (industry standard) SPC-1 benchmark. This benchmark creates a realistic synthetic workload that aggregates multiple users utilizing data storage simultaneously. A flexible current sensor board has also been developed to measure various storage devices simultaneously. This work represents a significant contribution to data storage benchmarking resources (for both performance and power consumption), as we have embedded the open source SPC-1 benchmark spc1 within the open source workload generator fio, in addition to our flexible current sensor development. The integration provides an easily available benchmark for researchers developing new storage technologies, and should give a reasonable estimate of performance under the official SPC-1 benchmark for systems that do not yet fulfill all the requirements of an official SPC-1 run. With accurate information, our framework shows promise in alleviating much of the complexity of future storage system design.

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

New Architecture for EIA-709.1 Protocol Implementation

Company Description : This paper proposes a new architecture for implementing the EIA-709.1 protocol. The protocol is conventionally implemented with a proprietary processor and language, the Neuron chip and Neuron C respectively, where the Neuron chip contains three processors. The proposed architecture uses only one general-purpose processor and standard ANSI C to implement all layers of EIA-709.1 except the physical layer. The data link, network, and other layers are implemented on one RISC processor, an ARM. Specifically, the data link layer of EIA-709.1, based on predictive p-persistent CSMA/CA, is implemented, and the interface between the power-line-communication transceiver and the ARM-based data link layer is described. In conclusion, this research shows improved performance together with compatibility with the existing Neuron chip.

Product Type : Academic Conferences

Author : Various

PDF 4p

Languages : English

A Novel Approach to Multiagent based Scheduling for Multicore Architecture

Company Description : In a multicore architecture, each package contains a large number of processor cores. This increase in cores brings a new evolution in parallel computing. Besides enormous performance enhancement, the multicore package introduces many challenges and opportunities from the operating system scheduling point of view. Multiagent systems are concerned with the development and analysis of optimization problems; their main objective is to invent methodologies that let developers build complex systems that can solve sophisticated problems which would be difficult for an individual agent to solve. In this paper we combine the AMAS theory of multiagent systems with the operating system scheduler to develop a new process scheduling algorithm for multicore architectures. This multiagent-based scheduling algorithm promises to minimize the average waiting time of the processes in the centralized queue and also reduces the work of the scheduler. We modified and simulated the Linux 2.6.11 kernel process scheduler to incorporate the multiagent system concept. The comparison is made for different numbers of cores with multiple combinations of processes, and the results show average waiting time vs. the number of cores for the centralized queue.
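The metric the paper reports, average waiting time in a centralized queue versus core count, can be computed with a small simulation. This Python sketch is a toy model (all tasks present at time zero, dispatched in queue order to the first free core); it is not the modified kernel scheduler, and the function name is hypothetical.

```python
import heapq

def average_waiting_time(burst_times, n_cores):
    """Dispatch tasks from a centralized queue to whichever core frees first
    and return the average waiting time (time spent before execution starts)."""
    cores = [0.0] * n_cores          # time at which each core becomes free
    heapq.heapify(cores)
    total_wait = 0.0
    for burst in burst_times:        # tasks taken in queue order, all ready at t=0
        free_at = heapq.heappop(cores)
        total_wait += free_at        # the task waited until this core freed up
        heapq.heappush(cores, free_at + burst)
    return total_wait / len(burst_times)
```

Even this toy model reproduces the qualitative trend the paper plots: with more cores, tasks leave the centralized queue sooner and the average wait drops.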

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

A Knowledge Management Approach: Business Intelligence in an Intranet Data Warehouse

Company Description : For contemporary businesses to stay viable, business intelligence is mission critical. Although the importance of business intelligence is recognised, there is limited research on what information contributes to business intelligence and how business intelligence is sought for use in an organisational intranet. This research discusses how business intelligence is sought, captured and used by tapping into an intranet data warehouse as a knowledge management approach. It adopts a qualitative case study method using interview and observation techniques. A case study was conducted to examine how an intranet system was designed, how business intelligence was captured, and how it aided strategic planning and decision making in business operations. The respondents explained how structured business intelligence data was categorised and disseminated to users and how that information empowered staff in their work performance. The intranet design successfully retains staff knowledge within the organisation. It was also successful in drawing all internal resources together, capturing resources from external sources, and forming a common repository of organisational assets for use through organisational work procedures within the intranet.

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

A Real-time Service Oriented Infrastructure

Company Description : Service oriented environments and real-time systems have been two mutually exclusive technological areas. Taking into consideration the main concepts of service orientation, significant challenges exist in providing and managing the offered on-demand resources with the required level of Quality of Service (QoS), especially for real-time interactive and streaming applications. In this paper we propose an approach for providing real-time QoS guarantees by enhancing service oriented infrastructures with coherent and consistent real-time attributes at various levels (application, network, storage, processing). The approach considers the full lifecycle of service-based systems including service engineering, Service Level Agreement (SLA) negotiation and management, service provisioning and monitoring. QoS parameters at application, platform and infrastructure levels are given specific attention as the basis for provisioning policies in the context of temporal constraints. We also demonstrate through use cases the need for real-time scheduling as a fundamental process to provide QoS guarantees.

Product Type : Academic Conferences

Author : Various

PDF 9p

Languages : English

A Novel Co-operative Channel Assignment Scheme for Indoor Base Stations

Company Description : This paper presents a co-operation technique of channel assignment (CA) for indoor base stations (BSs). Indoor BSs are most of the time deployed by users in an ad-hoc manner, which makes prior network planning by network operators impossible. If the same pool of radio resources (e.g. channels) is used by nearby BSs, co-operation between these BSs is vital for resolving problems such as interference. The proposed scheme considers the femtocell base station (FBS), a typical example of an indoor BS. FBSs in close proximity exchange UE-assisted (User Equipment) measured reference power information, and based on the individual position of each FBS, inter-BS interaction is used to form clusters. In each cluster, the cluster-head (CH) uses channel assignment tables to assign channel resources to cluster members (CMs) in a distributed manner. This scheme helps to ensure that the interests of neighbouring BSs are always considered whenever a BS makes use of the available network resources. Our simulation results show that co-operative CA using a cluster-based approach yields higher average user throughput than autonomous channel selection by individual BSs.

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

Scattered Identities - A Governance Nightmare!

Company Description : This study describes the result of a collaboration between a high-tech manufacturer and its partners to develop an identity management architecture. The fundamental goal of this architecture is flexible on-boarding, which can in turn support quarterly quoting processes and enable smooth access to forms, templates, and process documents. It will also increase direct communication between partners and suppliers via e-mail. Enhancing file exchange, archive and storage capabilities are further objectives. The study will help in creating a backbone and framework for next-generation collaboration capabilities (e.g. instant messaging, video on demand). Transaction time will be reduced by providing access to the right set of tools and systems in one click, with an enhanced security framework for all data exchanges.

Product Type : Academic Conferences

Author : Various

PDF 5p

Languages : English

Developing a neighborhood-scale wireless notification prototype

Company Description : We outline an innovative approach to the development of a prototype of a neighborhood notification system (NNS). The NNS application residing on smart phones will use software defined radio and cognitive radio components to interface with radio frequency transceivers. Mesh networking is proposed for emergency notification and disaster response coordination using NNS. Our focus has been on the IEEE 802.15.4 and the very recent IEEE 802.15.5 mesh networking standard for low data rate connectivity among low power nodes (or nodes whose power consumption needs to be low). The innovation stems from bringing together different hardware and software components – some of which, like our Software Defined Radio (SDR) platform, are themselves still evolving and others, like the meshing platform, are very new – to propose an adaptive, reconfigurable, infrastructure-less ad hoc wireless solution to emergency communications in the unlicensed ISM RF band.

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

Securing End-to-End Wireless Mesh Networks Using Ticket-Based Authentication

Company Description : A hybrid wireless mesh network (WMN) consists of two types of nodes: mesh routers, which are relatively static and energy-rich devices, and mesh clients, which are relatively dynamic and power-constrained devices. In this paper we present a new model for WMN end-to-end security which divides the authentication process into two phases: a Mesh Access Point phase based on asymmetric cryptography, and a Mesh Client phase based on a server-side certificate such as EAP-TTLS and PEAP.

Product Type : Academic Conferences

Author : Various

PDF 5p

Languages : English

New road traffic network models for control

Company Description : This paper introduces a method for the mathematical modeling of large-scale road traffic networks. The analysed model is applicable to simulation testing and planning of large-scale road traffic networks and to the regulation of traffic systems. The elaborated model is in state-space form, where the states are vehicle densities on particular lanes and the dynamics are described by a nonlinear, state-constrained positive system. The model can be used directly for simulation and analysis, and as a starting point for investigating various control strategies. The stability of the traffic over the network can be analyzed by constructing a linear Lyapunov function and applying the associated theory.
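The state-space idea, densities as states updated by flows, with positivity and capacity constraints enforced, can be illustrated with a toy discrete-time update for a chain of lane segments. This Python sketch is not the paper's model: the outflow rule, parameter `alpha` (fraction of a segment's vehicles moving on per step), and the clipping to [0, capacity] are illustrative assumptions.

```python
def step_densities(rho, inflow, capacity, alpha=0.5):
    """One discrete-time update of vehicle densities on a chain of lane
    segments. The outflow of segment i feeds segment i+1; densities are
    clipped to stay in [0, capacity], mimicking a constrained positive system.
    rho: list of current densities; inflow: external inflow to segment 0."""
    out = [alpha * r for r in rho]          # flow leaving each segment this step
    nxt = []
    for i, r in enumerate(rho):
        incoming = inflow if i == 0 else out[i - 1]
        nxt.append(max(0.0, min(capacity, r - out[i] + incoming)))
    return nxt
```

Because each state stays nonnegative and bounded, a weighted sum of the densities can serve as the linear Lyapunov-type function mentioned in the abstract when analysing such updates.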

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

The Computing Journey: From Abacus to Quantum Computer

Company Description : This paper briefly reviews the journey of the human development of computing technology: from the abacus to the traditional computer, and on to the quantum computer. Quantum information has been heavily studied since the concept appeared in the 1980s. Today's technology for quantum computing devices is summarized and a future quantum computer architecture is introduced.

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

Harmony Search for Finding the Best Hamiltonian Tour in Iran

Company Description : The Traveling Salesman Problem is one of the most important applied problems in combinatorial optimization, with transportation being among its most prominent practical uses. Since successful solutions to the problem demonstrate its usefulness across science and engineering, a variety of methods have been suggested for solving it. In this paper, we find the shortest tour by solving the Traveling Salesman Problem for 104 selected points in Iran, using the Harmony Search algorithm. To survey the applicability of the results, we optimize them by changing the algorithm's parameters. This comparison shows a remarkable improvement in solution quality obtained from the Harmony Search algorithm with tuned parameters.
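For readers unfamiliar with Harmony Search, its three moves (memory consideration, pitch adjustment, random selection) are easiest to see on a continuous minimization problem rather than the paper's TSP encoding. The Python below is a generic illustrative sketch with arbitrary parameter values (`hmcr`, `par`, `bw`), not the authors' implementation.

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=1):
    """Minimise f over box bounds with the basic Harmony Search steps:
    hmcr = harmony memory considering rate, par = pitch adjusting rate,
    bw = pitch bandwidth, hms = harmony memory size."""
    random.seed(seed)
    memory = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [f(h) for h in memory]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:
                x = random.choice(memory)[d]       # take a note from memory
                if random.random() < par:
                    x += random.uniform(-bw, bw)   # pitch adjustment
            else:
                x = random.uniform(lo, hi)         # fresh random note
            new.append(min(hi, max(lo, x)))
        score = f(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if score < scores[worst]:                  # replace the worst harmony
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]
```

Changing `hmcr`, `par` and `bw` is exactly the parameter tuning the abstract refers to; for the TSP, the continuous "notes" are replaced by city choices and the objective by tour length.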

Product Type : Academic Conferences

Author : Various

PDF 6p

Languages : English

Organizer : Global Science & Technology Forum

GSTF provides a global intellectual platform for top notch academics and industry professionals to actively interact and share their groundbreaking research achievements. GSTF is dedicated to promoting research and development and offers an inter-disciplinary intellectual platform for leading scientists, researchers, academics and industry professionals across Asia Pacific to actively consult, network and collaborate with their counterparts across the globe.