A Scalable Approach to Detect Duplicate Data Using the Iterative Parallel Sorted Neighbourhood Method
Journal Title: International Journal for Research in Applied Science and Engineering Technology (IJRASET) - Year 2016, Vol 4, Issue 11
Abstract
Detecting redundant data on a data server is an open research problem in data-intensive applications. Traditional progressive duplicate detection algorithms include the progressive sorted neighbourhood method (PSNM), together with its scalable variant, the parallel sorted neighbourhood method, which performs best on small and almost clean datasets, and progressive blocking (PB), which performs best on large and very dirty datasets. Both improve the efficiency of duplicate detection even on very large datasets. In this paper, we propose an iterative progressive sorted neighbourhood method, a form of progressive duplicate record detection that detects duplicate records in any kind of dataset. In comparison to traditional duplicate detection, progressive duplicate record detection satisfies two conditions, the foremost being improved early quality. The iterative variants of PSNM and PB dynamically adjust their behaviour by automatically choosing optimal parameters, e.g., window sizes, block sizes, and sorting keys, rendering their manual specification superfluous. In this way, we significantly ease the parameterization of duplicate detection in general and contribute to the development of more interactive applications: we can offer fast feedback and alleviate the often difficult parameterization of the algorithms. The contribution of this work is as follows: we propose three dynamic progressive duplicate detection algorithms, PSNM, iterative parallel PSNM, and PB, which expose different strengths and outperform current approaches, and we define a novel quality measure for progressive duplicate detection to objectively rank the performance of different approaches. The algorithms are evaluated on several real-world datasets against our own and previous approaches. The duplicate detection workflow comprises three steps: pair selection, pair-wise comparison, and clustering; for a progressive workflow, only the first and last steps need to be modified. The experimental results show that the proposed system outperforms state-of-the-art approaches in accuracy and efficiency.
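To illustrate the workflow sketched above, the following minimal Python example implements a single, non-progressive sorted neighbourhood pass: records are sorted by a sorting key, each record is compared only with its neighbours inside a sliding window, and similar pairs are reported. The record format, sorting key, window size, and similarity threshold are illustrative assumptions, not the paper's actual implementation, and the final clustering step is omitted.

from difflib import SequenceMatcher

def sorted_neighbourhood_pairs(records, key, window=10, threshold=0.9):
    """Minimal sorted-neighbourhood pass: sort records by a sorting key,
    compare each record only with its neighbours inside a sliding window,
    and report pairs whose similarity exceeds the threshold."""
    # Step 1: pair selection -- sort by the chosen sorting key.
    ordered = sorted(records, key=key)
    duplicates = []
    for i, left in enumerate(ordered):
        # Step 2: pair-wise comparison, restricted to the window.
        for right in ordered[i + 1:i + window]:
            sim = SequenceMatcher(None, key(left), key(right)).ratio()
            if sim >= threshold:
                duplicates.append((left, right, sim))
    return duplicates  # Step 3 (clustering of the matched pairs) is omitted here.

if __name__ == "__main__":
    # Toy records: (id, name) tuples; the sorting key is the lower-cased name.
    people = [(1, "Jon Smith"), (2, "John Smith"), (3, "Jane Doe"),
              (4, "J. Smith"), (5, "Janet Doe")]
    for a, b, s in sorted_neighbourhood_pairs(people, key=lambda r: r[1].lower(),
                                              window=4, threshold=0.8):
        print(f"candidate duplicate: {a} ~ {b} (similarity {s:.2f})")

A progressive or iterative variant would additionally reorder the comparisons (e.g., smallest window distances first) and adapt the window size and sorting key between passes, which is what the proposed algorithms automate.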
Authors and Affiliations
Dr. R. Priya, Ms. Jiji. R