A Scalable Approach to Detecting Duplicate Data Using the Iterative Parallel Sorted Neighbourhood Method
Journal Title: International Journal for Research in Applied Science and Engineering Technology (IJRASET) - Year 2016, Vol 4, Issue 11
Abstract
Determining redundant data on a data server is an open research problem in data-intensive applications. Traditional progressive duplicate detection algorithms include the progressive sorted neighbourhood method (PSNM), along with its scalable variant, the parallel sorted neighbourhood method, which performs best on small and almost clean datasets, and progressive blocking (PB), which performs best on large and very dirty datasets. Both improve the efficiency of duplicate detection even on very large datasets. In this paper, we propose the iterative progressive sorted neighbourhood method, a progressive duplicate record detection technique that detects duplicate records in any kind of dataset. In comparison to traditional duplicate detection, progressive duplicate record detection satisfies two conditions through improved early quality. The iterative variants of PSNM and PB dynamically adjust their behaviour by automatically choosing optimal parameters, e.g., window sizes, block sizes, and sorting keys, rendering their manual specification superfluous. In this way, we significantly ease the parameterization complexity of duplicate detection in general and contribute to the development of more user-interactive applications: we can offer fast feedback and alleviate the often difficult parameterization of the algorithms. The contributions of this work are as follows: we propose three dynamic progressive duplicate detection algorithms, PSNM, iterative parallel PSNM, and PB, which expose different strengths and outperform current approaches; we define a novel quality measure for progressive duplicate detection to objectively rank the performance of different approaches; and we evaluate the duplicate detection algorithms on several real-world datasets, testing our own and previous algorithms. The duplicate detection workflow comprises three steps: pair selection, pair-wise comparison, and clustering.
For a progressive workflow, only the first and last steps need to be modified. The experimental results show that the proposed system outperforms state-of-the-art approaches in accuracy and efficiency.
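The sorted-neighbourhood idea underlying PSNM can be illustrated with a minimal sketch: sort the records by a sorting key, then compare only records that fall within a small sliding window of the sorted order, rather than all pairs. This is an illustrative sketch, not the paper's implementation; the `key`, `window`, and `threshold` parameters and the `SequenceMatcher`-based similarity function are assumptions chosen for demonstration.

```python
from difflib import SequenceMatcher

def sorted_neighbourhood(records, key, window=3, threshold=0.85):
    """Sorted Neighbourhood Method (sketch): sort records by a
    sorting key, then compare each record only against the records
    within a sliding window of size `window` in the sorted order."""
    order = sorted(range(len(records)), key=lambda i: key(records[i]))
    duplicates = set()
    for pos, i in enumerate(order):
        # Compare record i against the next (window - 1) records
        # in sorted order; far-apart records are never compared.
        for j in order[pos + 1 : pos + window]:
            sim = SequenceMatcher(None, records[i], records[j]).ratio()
            if sim >= threshold:
                duplicates.add((min(i, j), max(i, j)))
    return duplicates

records = ["john smith", "jon smith", "mary jones", "john smith "]
pairs = sorted_neighbourhood(records, key=str.lower, window=3)
```

With a window of 3, each record is compared against at most two neighbours, so the number of comparisons grows linearly with the dataset size instead of quadratically; the window size trades recall for speed, which is exactly the parameter the iterative algorithms in this work tune automatically.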
Authors and Affiliations
Dr. R. Priya, Ms. Jiji. R