A search engine is a program designed to help find information stored on a computer system, such as the World Wide Web or a personal computer. A search engine lets you ask for content that meets specific criteria (typically content containing a given word or phrase) and retrieve a list of references that match those criteria. Search engines use regularly updated indexes to operate quickly and efficiently.
1. Search engines use robots (also known as spiders) to crawl the internet for websites.
2. The results of the spiders' travels are put into a database, which is then indexed based on the words found and where those words were found.
3. Users search for words or phrases related to what they are looking for, and the search engine's index returns related sites, as sketched below.
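For illustration, here is a minimal Python sketch of steps 2 and 3: a toy inverted index built over a couple of made-up pages, plus a query function that returns the pages containing every word of a phrase. The function names and example URLs are invented for this sketch; a real engine keeps a far larger, persistent index.

```python
from collections import defaultdict

def build_index(pages):
    """Map each word to the set of page URLs it appears in (an inverted index)."""
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

def search(index, query):
    """Return the pages that contain every word in the query."""
    words = query.lower().split()
    if not words:
        return set()
    results = set(index.get(words[0], set()))
    for word in words[1:]:
        results &= index.get(word, set())
    return results

# Example usage with made-up pages:
pages = {
    "http://example.com/a": "web crawlers index the world wide web",
    "http://example.com/b": "search engines answer queries from an index",
}
index = build_index(pages)
print(search(index, "index web"))   # {'http://example.com/a'}
```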
A web crawler (also known as a web spider, search engine robot, or search engine spider) is a program that browses the World Wide Web in a methodical, automated manner. A web crawler is one type of bot. Web crawlers keep a copy of every page they visit for later processing.
How Web Crawlers Work
Web crawlers work through a process called "crawling the web", or web crawling. They start with heavily trafficked web servers and the most popular web pages.
The web crawler sets out from the search engine's base computer system, looking for websites to index.
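The loop below is a minimal sketch of that process, assuming a couple of seed URLs stand in for the popular, high-traffic sites a real engine starts from. The crawl function, the seed list, and the crude regex link extraction are all simplifications for this example; production crawlers parse HTML properly and respect robots.txt, politeness delays, and so on.

```python
import re
from collections import deque
from urllib.parse import urljoin
from urllib.request import urlopen

def crawl(seed_urls, max_pages=20):
    """Visit pages breadth-first starting from the seeds; return the URLs seen."""
    frontier = deque(seed_urls)   # URLs waiting to be visited
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", errors="ignore")
        except OSError:
            continue              # skip unreachable pages
        visited.add(url)
        for href in re.findall(r'href=["\']([^"\']+)["\']', html):
            frontier.append(urljoin(url, href))   # queue newly discovered links
    return visited
```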
The web crawler collects information about the website and its links:
- the website URL
- the web page title
- the meta tag information
- the web page content
- the links on the page, and where they lead
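The sketch below shows one way to pull those fields out of a single fetched page using Python's standard-library HTML parser. The PageInfoParser class and page_info helper are names invented for this example; a real crawler would store these fields in the search engine's index database.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin

class PageInfoParser(HTMLParser):
    """Collect the title, meta tags, visible text, and links of one page."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.title = ""
        self.meta = {}          # e.g. {"description": "...", "keywords": "..."}
        self.text_parts = []    # page content (scripts/styles not filtered here)
        self.links = []         # absolute link targets found on the page
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "title":
            self._in_title = True
        elif tag == "meta" and "name" in attrs:
            self.meta[attrs["name"]] = attrs.get("content", "")
        elif tag == "a" and attrs.get("href"):
            self.links.append(urljoin(self.base_url, attrs["href"]))

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data
        elif data.strip():
            self.text_parts.append(data.strip())

def page_info(url, html):
    """Return the fields listed above for one page as a dictionary."""
    parser = PageInfoParser(url)
    parser.feed(html)
    return {
        "url": url,
        "title": parser.title,
        "meta": parser.meta,
        "content": " ".join(parser.text_parts),
        "links": parser.links,
    }
```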