1. Search engines like Google maintain huge databases called “_______” of all the keywords and the web addresses of pages where these keywords appear
2. Search engines also use “bots” (software robots) called web-bots or spider bots that crawl the web 24/7 to scan webpages, _____________ and follow hyperlinks to move from one page to another
3. In their indexes, for each ____ that has been indexed, a page rank score is also stored.
4. A page rank score is a number that will be used to _____________________
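Questions 1–4 describe an inverted index: a mapping from keywords to the pages that contain them, with a rank score stored per page. The sketch below illustrates that idea only; the names and structures are illustrative, not Google's actual ones.

```python
# Minimal sketch of the index described in questions 1-4.
# Illustrative only -- not Google's actual data structures.
from collections import defaultdict

index = defaultdict(set)   # keyword -> set of URLs where it appears
page_rank = {}             # URL -> rank score stored alongside the index

def index_page(url, text, rank_score):
    """Record every keyword on the page, plus the page's rank score."""
    for word in text.lower().split():
        index[word].add(url)
    page_rank[url] = rank_score

index_page("https://example.com/a", "search engines crawl the web", 0.7)
index_page("https://example.com/b", "web crawlers follow hyperlinks", 0.4)

# A query looks up matching pages and orders them by rank score:
hits = sorted(index["web"], key=page_rank.get, reverse=True)
print(hits)  # pages containing "web", highest-ranked first
```

A lookup is a set retrieval followed by a sort on the stored scores, which is why the rank can be computed ahead of time rather than at query time.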
5. Read the following excerpt that lists steps taken by a search engine like Google. Fill in the blanks for no. 4
6. Search engines provide an interface to a group of items that enables users to specify criteria about an item of interest and have the engine find the matching items. The criteria are referred to as a ___________
7. Page rank score is therefore a ________________calculated by a search engine for each page on the web to measure its degree of importance.
8. Note that Google does not disclose its exact PageRank formula. The key concept of this formula though is as follows. Fill in the blanks
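As question 8 notes, Google's exact formula is undisclosed. The publicly known core idea, however, is that a page is important if important pages link to it. The textbook iterative version of that idea (with the commonly cited damping factor d = 0.85) can be sketched as:

```python
# Sketch of the publicly known PageRank idea: each page shares its
# score among the pages it links to, repeatedly, until scores settle.
# Google's real formula is undisclosed; this is the textbook version.

def pagerank(links, d=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}           # start uniform
    for _ in range(iterations):
        new = {p: (1 - d) / n for p in pages}    # base score for every page
        for p, outs in links.items():
            if outs:                             # share p's rank among its links
                share = d * rank[p] / len(outs)
                for q in outs:
                    new[q] += share
        rank = new
    return rank

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
ranks = pagerank(graph)
print(max(ranks, key=ranks.get))  # "C" collects links from both A and B
```

Because C is linked from two pages while A and B each receive one link, C ends up with the highest score, which is exactly the "degree of importance" question 7 refers to.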
9. Which of the following statements are true, in reference to search engine indexing?
10. If you build a website and want it to be as high up as possible in search results, SEO is important. SEO stands for ________
11. An XML sitemap gives search engines a list of all the pages on your site, as well as additional details about each page, such as when it was last modified
12. For HTML pages, the __________ contains keywords which are also included in the index.
13. Optimizing a website may involve editing its content, adding content, including specific keywords and removing barriers to the indexing activities of search engines
14. Promoting a site to increase the number of ______________, or inbound links, is another SEO tactic
15. By May 2015, mobile search had surpassed desktop search. Read the following excerpt and decide whether it is, in its entirety, true or false.
Webmasters and content providers began optimizing websites for search engines in the
mid-1990s, as the first search engines were cataloging the early Web.
Initially, all webmasters needed only to submit the address of a page, or URL,
to the various engines which would send a "spider" to "crawl" that page, extract
links to other pages from it, and return information found on the page
to be indexed.
The process involves a search engine SPIDER downloading a page and storing it on
the search engine's own server.
A second program, known as an INDEXER, extracts information about
the page, such as the words it contains, where they are located,
and any weight for specific words as well as all links the page contains.
All of this information is then placed into a __________ for crawling at a later date
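The excerpt describes two cooperating programs: a SPIDER that downloads and stores a page, and an INDEXER that extracts words, their positions, and outbound links. A toy offline version of that pipeline, using only Python's standard library (the `fetch` function is injected so the sketch needs no network), might look like:

```python
# Toy version of the two programs in the excerpt: a SPIDER that fetches
# a page and an INDEXER that extracts words, their positions, and links.
# Real crawlers are far more elaborate; this only mirrors the excerpt.
from html.parser import HTMLParser

class Indexer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.words = {}    # word -> list of positions where it occurs
        self.links = []    # hyperlinks found on the page
        self.pos = 0

    def handle_starttag(self, tag, attrs):
        if tag == "a":     # collect links to follow at a later date
            self.links += [v for k, v in attrs if k == "href"]

    def handle_data(self, data):
        for word in data.lower().split():
            self.words.setdefault(word, []).append(self.pos)
            self.pos += 1

def spider(url, fetch):
    """Download a page and hand it to the indexer."""
    html = fetch(url)                      # the stored copy of the page
    indexer = Indexer()
    indexer.feed(html)
    return indexer.words, indexer.links

words, links = spider(
    "https://example.com",
    lambda url: '<p>search engines crawl</p><a href="/next">next</a>')
print(links)  # ['/next']
```

The word positions recorded here correspond to "where they are located" in the excerpt, and the collected links are what a real engine would queue up for later crawling.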