
International Journal of Computer Techniques (IJCT)

Paper Title : Managing Rich Meta Data In High Performance Computing System Using A Graph Model

ISSN : 2394-2231
Year of Publication : 2020

DOI : 10.29126/23942231/IJCT-V7I2P4
Authors: Mr. S. Sambasivam, M.C.A., M.Phil.; Mr. E. P. Pranesh, M.C.A.

         



MLA Style: Sambasivam, S., and E. P. Pranesh. "Managing Rich Meta Data In High Performance Computing System Using A Graph Model." International Journal of Computer Techniques (IJCT), vol. 7, no. 2, March–April 2020, ISSN: 2394-2231, www.ijctjournal.org.

APA Style: Sambasivam, S., & Pranesh, E. P. (2020). Managing Rich Meta Data In High Performance Computing System Using A Graph Model. International Journal of Computer Techniques (IJCT), 7(2). ISSN: 2394-2231. www.ijctjournal.org.

Abstract
Web data mining and the evaluation of web page structure play a key role in the academic domain, providing a systematic basis for novel implementations over real-time information with different levels of implication. With the rapid development and growth of data on the World Wide Web, and the fast-increasing number of Internet users across the globe, there is an acute need to improve, adapt, or design search algorithms that retrieve the specific required information effectively and efficiently from the huge repository available. Recent work employs specialized web crawlers to obtain search results efficiently. Some systems use a focused web crawler that collects distinct web pages satisfying some specific property, by effectively prioritizing the crawler frontier and managing the link exploration process. A focused web crawler analyzes its crawl boundary to find the links that are likely to be most relevant to the crawl, and avoids irrelevant regions of the web. This leads to significant savings in hardware and network resources, and helps keep the crawl more up to date. The proposed I-Spider, a focused crawler for malicious web pages, nurtures a collection of web documents centered on a set of topical subspaces. It identifies the next most important and relevant link to follow by relying on probabilistic models to correctly predict the relevancy of each document. Researchers have proposed numerous algorithms for improving the performance of focused crawlers for malicious web pages. We investigate various types of crawlers along with their pros and cons; the principal focus area is the focused crawler for malicious web pages, and future directions for improving the performance of focused web crawlers are discussed.
This work can serve as a base reference for anyone who wishes to learn about or apply the concept of a focused web crawler in their research. The performance of a focused web crawler depends on the richness of links within the specific topic being searched by the user, and it usually relies on a general web search engine to provide starting points for the search.
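The frontier-prioritization idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's I-Spider implementation: the toy in-memory "web", the keyword-overlap `relevance` scorer (standing in for the paper's probabilistic relevance model), and the `threshold` parameter are all illustrative assumptions. The sketch shows only the core mechanism: a best-first crawl that always expands the highest-scoring frontier link and prunes links reachable only through off-topic pages.

```python
import heapq

# Toy in-memory "web": page -> (text, outgoing links). This stands in for
# real HTTP fetches so the sketch stays self-contained.
PAGES = {
    "seed": ("graph metadata hpc", ["a", "b"]),
    "a": ("graph model for rich metadata", ["c"]),
    "b": ("cooking recipes", ["d"]),
    "c": ("metadata management in hpc systems", []),
    "d": ("sports news", []),
}

TOPIC = {"graph", "metadata", "hpc"}

def relevance(text):
    """Crude stand-in for a probabilistic relevance model:
    the fraction of topic terms present in the page text."""
    return len(TOPIC & set(text.split())) / len(TOPIC)

def focused_crawl(seed, threshold=0.3, limit=10):
    """Best-first crawl: always expand the most relevant frontier
    link next, and never expand the links of off-topic pages."""
    frontier = [(-1.0, seed)]          # max-heap via negated scores
    visited, harvested = set(), []
    while frontier and len(harvested) < limit:
        _, url = heapq.heappop(frontier)
        if url in visited:
            continue
        visited.add(url)
        text, links = PAGES[url]
        if relevance(text) >= threshold:
            harvested.append(url)
            for link in links:         # expand only on-topic pages' links
                if link not in visited:
                    heapq.heappush(frontier, (-relevance(PAGES[link][0]), link))
    return harvested

print(focused_crawl("seed"))           # on-topic pages, best-first order
```

Note how page "d" is never fetched at all: its only parent "b" scores below the threshold, so its links are pruned, which is the source of the hardware and network savings the abstract describes.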


Keywords
Key element, Internet, Web crawlers, Web search engine.

Copyright ©2015 IJCT – International Journal of Computer Techniques. Published by International Research Group. All rights reserved.

This work is licensed under a Creative Commons Attribution 4.0 (International) Licence. (CC BY-NC 4.0)