A Scalable Approach to Detecting Duplicate Data Using the Iterative Parallel Sorted Neighbourhood Method
Journal Title: International Journal for Research in Applied Science and Engineering Technology (IJRASET) - Year 2016, Vol 4, Issue 11
Abstract
Determining redundant data on a data server is an open research problem in data-intensive applications. Traditional progressive duplicate detection algorithms include the progressive sorted neighbourhood method (PSNM), together with its scalable variant, the parallel sorted neighbourhood method, which performs best on small and almost clean datasets, and progressive blocking (PB), which performs best on large and very dirty datasets. Both improve the efficiency of duplicate detection even on very large datasets. In this paper, we propose the iterative progressive sorted neighbourhood method, a progressive duplicate record detection technique that detects duplicate records in any kind of dataset. Compared to traditional duplicate detection, progressive duplicate record detection satisfies two conditions through improved early quality. Our iterative variants of PSNM and PB dynamically adjust their behaviour by automatically choosing optimal parameters, e.g., window sizes, block sizes, and sorting keys, rendering their manual specification superfluous. In this way, we significantly ease the parameterization complexity of duplicate detection in general and contribute to the development of more interactive applications: we can offer fast feedback and alleviate the often difficult parameterization of the algorithms. The contributions of this work are as follows: we propose three dynamic progressive duplicate detection algorithms, PSNM, iterative parallel PSNM, and PB, which expose different strengths and outperform current approaches; we define a novel quality measure for progressive duplicate detection to objectively rank the performance of different approaches; and we evaluate the duplicate detection algorithms on several real-world datasets, testing our own and previous algorithms. The duplicate detection workflow comprises three steps: pair selection, pair-wise comparison, and clustering.
For a progressive workflow, only the first and last steps need to be modified. Experimental results show that the proposed system outperforms state-of-the-art approaches in accuracy and efficiency.
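The sorted neighbourhood idea underlying PSNM can be illustrated with a minimal sketch: records are sorted by a key, and only records that fall within a sliding window of a fixed size are compared, which avoids the quadratic cost of comparing all pairs. The function names, the toy data, and the crude similarity predicate below are illustrative assumptions, not the paper's actual implementation.

```python
def sorted_neighbourhood(records, key, window, similar):
    """Return candidate duplicate pairs found within a sliding window.

    records: list of record dicts
    key:     sorting-key function (the choice of key strongly affects recall)
    window:  window size w; each record is compared with the next w-1 records
    similar: pairwise similarity predicate
    """
    ordered = sorted(records, key=key)
    candidates = []
    for i, rec in enumerate(ordered):
        # compare only with records inside the window, not the whole dataset
        for j in range(i + 1, min(i + window, len(ordered))):
            if similar(rec, ordered[j]):
                candidates.append((rec, ordered[j]))
    return candidates


people = [
    {"id": 1, "name": "john smith"},
    {"id": 2, "name": "jon smith"},
    {"id": 3, "name": "mary jones"},
    {"id": 4, "name": "john smith"},
]

# Crude illustrative similarity: same surname and same first initial.
def similar(a, b):
    return (a["name"].split()[-1] == b["name"].split()[-1]
            and a["name"][0] == b["name"][0])

pairs = sorted_neighbourhood(people, key=lambda r: r["name"],
                             window=3, similar=similar)
```

With window size 3, the two exact "john smith" records and the near-duplicate "jon smith" end up adjacent after sorting and are flagged as candidate pairs, while "mary jones" is never compared against distant records. A progressive variant would additionally order the comparisons so that the most promising pairs are emitted first.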
Authors and Affiliations
Dr. R. Priya, Ms. Jiji. R