WebReader: A mechanism for automating the search and collecting information from the World Wide Web

J. C. Y. Chen, Qing Li

Research output: Chapters, Conference Papers, Creative and Literary Works › RGC 32 - Refereed conference paper (with host publication) › peer-review


Abstract

Current Web search engines are based on keyword search, where the relevance of a web page is determined by the number of keyword hits it contains. Because keyword matching operates below the level of semantic matching, the search scope is unnecessarily broad and the precision (and recall) can be rather low. These problems result in poor performance for Web information searching. In this paper, we describe a mechanism called WebReader, a middleware layer between the browser and the Web that automates searching for and collecting information from the Web. By facilitating meta-data specification in XML and manipulation in XSL, WebReader provides users with a centralized, structured, and categorized means to specify and collect Web information. An experimental prototype based on XML, XSL and Java has been developed to demonstrate the feasibility and practicality of our approach through a real-life application example.
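
The abstract describes the XML/XSL/Java pipeline only at a high level. As a minimal sketch of that idea, assuming a hypothetical metadata specification sources.xml (listing target pages and the fields to collect) and a hypothetical XSL stylesheet collect.xsl holding the manipulation rules, the core XML-plus-XSL step could be expressed with the standard Java XSLT API:

```java
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;

public class WebReaderSketch {
    public static void main(String[] args) throws Exception {
        // sources.xml and collect.xsl are hypothetical file names, not
        // artifacts from the paper; they stand in for the XML meta-data
        // specification and the XSL manipulation rules the abstract mentions.
        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource("collect.xsl"));

        // Apply the stylesheet to the specification and write the
        // structured, categorized result to standard output.
        transformer.transform(new StreamSource("sources.xml"),
                              new StreamResult(System.out));
    }
}
```

In such a design, changing which pages are searched or how results are categorized means editing the XML and XSL files rather than the middleware itself, which matches the centralized, structured specification the abstract emphasizes.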
Original language: English
Title of host publication: Proceedings of the 1st International Conference on Web Information Systems Engineering, WISE 2000
Publisher: IEEE
Pages: 47-54
Volume: 2
ISBN (Print): 0769505775, 9780769505770
Publication status: Published - 2000
Event: 1st International Conference on Web Information Systems Engineering, WISE 2000 - Hong Kong, China
Duration: 19 Jun 2000 - 21 Jun 2000
