Tuesday, January 5, 2010

important-seo-tips

1. Place your keyword phrases in the title tag, and keep the title under 65 characters.
2. Place the most important keyword phrase close to the beginning of the page title.
3. Put your main keywords in the keywords meta tag.
4. Write a good description for the meta description tag; the description must be unique to each page.
5. Keep the meta description short and meaningful: one or two sentences is enough.
6. Target the most important competitive keyword phrase on the home page.
7. Target one or two keyword phrases per page.
8. Use only one H1 header per page.
9. Place the most important keyword in the H1 tag.
10. Use H2 and H3 for subheaders where required.
11. Use bold, italic, or underline on your keyword phrases to give them extra weight in the content.
12. Use bulleted lists to make content easier to read.
13. Use the ALT attribute for images, so that crawlers can understand them.
14. Don't use Flash on your website, because crawlers can't read Flash content.
15. Keep your website's navigation simple and easy to follow.
16. Use text-based navigation.
17. Use CSS to create navigation menus instead of JavaScript.
18. Use keyword phrases in file names; use hyphens (-) to separate the words.
19. Create a valid robots.txt file.
20. Create an HTML site map for both crawlers and users.
21. Create an XML site map for the Google crawler.
22. Add text links to other pages in the footer of the site.
23. Use keyword phrases in anchor text.
24. Link all the pages of the site to each other.
25. Use keyword-rich breadcrumb navigation to help search engines understand the structure of your site.
26. Add a feedback form and link to it from every page.
27. Add a bookmark button.
28. Add a subscription form on every page to grow your mailing list.
29. Add an RSS feed button so that users can subscribe easily.
30. Add social media sharing buttons.
31. Use images on every page, but don't forget the ALT attribute.
32. Use videos on your site that are related to your niche.
33. Write informative, fresh, unique, useful content for your site.
34. Keep page content between 300 and 500 words.
35. Keep keyword density between 3% and 5%.
36. Don't copy content from other websites; fresh and unique content is the key to your success.
37. Add deep links to related articles.
38. Regularly update your website with fresh content.
39. Use CSS to improve the look of your website.
40. Write your content for humans, not for robots.
41. Buy a country-level domain name if your website targets local users.
42. Use a good keyword suggestion tool to find good keyword phrases for your website.
43. Use a 301 redirect to redirect http://www.domainname.com to http://domainname.com (or vice versa).
44. Try to buy local hosting for your website if it targets local users.
45. Use keyword-rich URLs instead of dynamic URLs.
46. Break long articles into paragraphs.
47. Add a full contact address on the contact page, along with a direction map.
48. Validate XHTML and CSS at http://validator.w3.org/.
49. Don't use hidden text or hidden links.
50. Avoid graphic links, because text embedded in an image cannot be read by crawlers.
51. Don't create multiple pages with the same content.
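Two of the on-page rules above, the 65-character title limit (tip 1) and the 3 to 5% keyword density target (tip 35), are easy to check programmatically. The sketch below is a minimal illustration with helper names of my own choosing; the density here is simply the share of the page's words that belong to occurrences of the phrase:

```python
import re

def title_ok(title, max_len=65):
    """Tip 1: the page title should stay under 65 characters."""
    return len(title) <= max_len

def keyword_density(text, phrase):
    """Tip 35: rough keyword density as a percentage of total words."""
    words = re.findall(r"[a-z0-9']+", text.lower())
    phrase_words = phrase.lower().split()
    n = len(phrase_words)
    if not words or n == 0:
        return 0.0
    hits = sum(1 for i in range(len(words) - n + 1)
               if words[i:i + n] == phrase_words)
    # each occurrence of the phrase accounts for n words of the text
    return 100.0 * hits * n / len(words)

text = "cheap flights to paris are easy to find; compare cheap flights today"
print(title_ok("Cheap Flights to Paris | Example Travel"))
print(round(keyword_density(text, "cheap flights"), 1))
```

A real tool would parse the title out of the HTML; this sketch only shows the two measurements themselves.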

social-bookmarking


Social bookmarking


Social bookmarking is a method for Internet users to share, organize, search, and manage bookmarks of web resources. Unlike file sharing, the resources themselves aren't shared, merely bookmarks that reference them.
Descriptions may be added to these bookmarks in the form of metadata, so that other users may understand the content of the resource without first needing to download it for themselves. Such descriptions may be free text comments, votes in favor of or against its quality, or tags that collectively or collaboratively become a folksonomy. Folksonomy is also called social tagging, "the process by which many users add metadata in the form of keywords to shared content".
In a social bookmarking system, users save links to web pages that they want to remember and/or share. These bookmarks are usually public, and can be saved privately, shared only with specified people or groups, shared only inside certain networks, or another combination of public and private domains. The allowed people can usually view these bookmarks chronologically, by category or tags, or via a search engine.
Most social bookmark services encourage users to organize their bookmarks with informal tags instead of the traditional browser-based system of folders, although some services feature categories/folders or a combination of folders and tags. They also enable viewing bookmarks associated with a chosen tag, and include information about the number of users who have bookmarked them. Some social bookmarking services also draw inferences from the relationship of tags to create clusters of tags or bookmarks.
Many social bookmarking services provide web feeds for their lists of bookmarks, including lists organized by tags. This allows subscribers to become aware of new bookmarks as they are saved, shared, and tagged by other users.
As these services have matured and grown more popular, they have added extra features such as ratings and comments on bookmarks, the ability to import and export bookmarks from browsers, emailing of bookmarks, web annotation, and groups or other social network features.

List of social software


Blogs
  • Blogger
  • Telligent Community
  • IBM Lotus Connections
  • Roller Weblogger
  • Tumblr
  • Typepad
  • Wordpress
  • Xanga
Clipping
  • Diigo
  • Evernote
  • Google Notebook
Instant messaging
  • IBM Lotus Sametime
  • Meebo
  • Pidgin
Internet forums
  • phpBB
Media sharing
  • blip.tv
  • Dailymotion
  • Flickr
  • Ipernity
  • Metacafe
  • OneWorldTV
  • Putfile
  • SmugMug
  • Tangle
  • Vimeo
  • YouTube
  • Zooomr
Personals
  • eHarmony.com
  • Facebook
  • Gaydar
  • Match.com
  • Matchmaker.com
  • OkCupid
  • Orkut
  • Passado
  • Plentyoffish.com
  • Yahoo! Personals
Social bookmarking
  • Twitter
  • Balatarin
  • BibSonomy
  • BookmarkSync
  • CiteULike
  • Connotea
  • Delicious
  • Digg
  • Diigo
  • Faves
  • GiveALink.org
  • IBM Lotus Connections
  • Jumper 2.0 Enterprise
  • Linkwad
  • Ma.gnolia
  • My Web
  • Mister Wong
  • Mixx
  • MSDN
  • Newsvine
  • oneview
  • Propeller.com
  • Reddit
  • Simpy
  • SiteBar
  • StumbleUpon
  • TechNet
  • Twine
  • Windows Live Favorites
  • Yattle

Monday, January 4, 2010

url-normalization


URL normalization (or URL canonicalization) is the process by which URLs are modified and standardized in a consistent manner. The goal of the normalization process is to transform a URL into a normalized or canonical URL so it is possible to determine if two syntactically different URLs are equivalent.
Search engines employ URL normalization in order to assign importance to web pages and to reduce indexing of duplicate pages. Web crawlers perform URL normalization in order to avoid crawling the same resource more than once. Web browsers may perform normalization to determine if a link has been visited or to determine if a page has been cached.
Normalization process
There are several types of normalization that may be performed:
  • Converting the scheme and host to lower case. The scheme and host components of the URL are case-insensitive. Most normalizers will convert them to lowercase. Example:
HTTP://www.Example.com/ → http://www.example.com/
  • Adding a trailing slash. Directories are indicated with a trailing slash, which should be included in URLs. Example:
http://www.example.com → http://www.example.com/
  • Removing directory index. Default directory indexes are generally not needed in URLs. Examples:
http://www.example.com/default.asp → http://www.example.com/
http://www.example.com/a/index.html → http://www.example.com/a/
  • Capitalizing letters in escape sequences. All letters within a percent-encoding triplet (e.g., "%3A") are case-insensitive and should be normalized to uppercase. Example:
http://www.example.com/a%c2%b1b → http://www.example.com/a%C2%B1b
  • Removing the fragment. The fragment component of a URL is usually removed. Example:
http://www.example.com/bar.html#section1 → http://www.example.com/bar.html
  • Removing the default port. The default port (port 80 for the “http” scheme) may be removed from (or added to) a URL. Example:
http://www.example.com:80/bar.html → http://www.example.com/bar.html
  • Removing dot-segments. The segments “..” and “.” are usually removed from a URL according to the algorithm described in RFC 3986 (or a similar algorithm). Example:
http://www.example.com/../a/b/../c/./d.html → http://www.example.com/a/c/d.html
  • Removing “www” as the first domain label. Some websites operate in two Internet domains: one whose least significant label is “www” and another whose name omits that label. For example, http://example.com/ and http://www.example.com/ may access the same website. Although many websites redirect the user to the non-www address (or vice versa), some do not. A normalizer may perform extra processing to determine whether a non-www equivalent exists and then normalize all URLs to the non-www form. Example:
http://www.example.com/ → http://example.com/
  • Sorting the variables of active pages. Some active web pages have more than one variable in the URL. A normalizer can remove all the variables with their data, sort them into alphabetical order (by variable name), and reassemble the URL. Example:
http://www.example.com/display?lang=en&article=fred → http://www.example.com/display?article=fred&lang=en
  • Removing arbitrary querystring variables. An active page may expect certain variables to appear in the querystring; all unexpected variables should be removed. Example:
http://www.example.com/display?id=123&fakefoo=fakebar → http://www.example.com/display?id=123
  • Removing default querystring variables. A default value in the querystring will render identically whether it is there or not. When a default value appears in the querystring, it can be removed. Example:
http://www.example.com/display?id=&sort=ascending → http://www.example.com/display
  • Removing the "?" when the querystring is empty. When the querystring is empty, there is no need for the "?". Example:
http://www.example.com/display? → http://www.example.com/display
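Several of the rules above can be sketched with Python's standard urllib.parse module. This is an illustrative subset, not a complete RFC 3986 normalizer: it lowercases the scheme and host, strips the default port and the fragment, adds a trailing slash to an empty path, and sorts querystring variables:

```python
from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

def normalize(url):
    scheme, netloc, path, query, _fragment = urlsplit(url)
    scheme = scheme.lower()
    netloc = netloc.lower()
    # remove the default port for the "http" scheme
    if scheme == "http" and netloc.endswith(":80"):
        netloc = netloc[:-3]
    # an empty path means the root directory: add the trailing slash
    if not path:
        path = "/"
    # sort querystring variables by variable name
    if query:
        query = urlencode(sorted(parse_qsl(query)))
    # drop the fragment entirely
    return urlunsplit((scheme, netloc, path, query, ""))

print(normalize("HTTP://www.Example.com:80?lang=en&article=fred#top"))
# → http://www.example.com/?article=fred&lang=en
```

Rules that need site-specific knowledge (default directory indexes, expected querystring variables, the www/non-www choice) are deliberately left out, since they cannot be decided from the URL syntax alone.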


Normalization based on URL lists

Some normalization rules may be developed for specific websites by examining URL lists obtained from previous crawls or web server logs. For example, if the URL
http://foo.org/story?id=xyz
appears in a crawl log several times along with
http://foo.org/story_xyz
we may assume that the two URLs are equivalent and can be normalized to one of the URL forms.
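A rule learned from such a log can be expressed as a simple rewrite. The pattern below is a hypothetical illustration (site and rule invented for this example) that normalizes the second URL form into the first:

```python
import re

def apply_dust_rule(url):
    # hypothetical rule: http://foo.org/story_xyz → http://foo.org/story?id=xyz
    return re.sub(r"(http://foo\.org/story)_(\w+)$", r"\1?id=\2", url)

print(apply_dust_rule("http://foo.org/story_xyz"))  # → http://foo.org/story?id=xyz
```

Systems like DustBuster discover many such rewrite rules automatically by mining the URL list, rather than hand-writing them as done here.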
Schonfeld et al. (2006) present a heuristic called DustBuster for detecting DUST (different URLs with similar text) rules that can be applied to URL lists. They showed that once the correct DUST rules were found and applied with a canonicalization algorithm, they were able to find up to 68% of the redundant URLs in a URL list.

web-crawler

Web crawler

A Web crawler is a computer program that browses the World Wide Web in a methodical, automated manner. Other terms for Web crawlers are ants, automatic indexers, bots, worms, Web spiders, and Web robots, or (especially in the FOAF community) Web scutters.

This process is called Web crawling or spidering. Many sites, in particular search engines, use spidering as a means of providing up-to-date data. Web crawlers are mainly used to create a copy of all the visited pages for later processing by a search engine that will index the downloaded pages to provide fast searches. Crawlers can also be used for automating maintenance tasks on a Web site, such as checking links or validating HTML code. Also, crawlers can be used to gather specific types of information from Web pages, such as harvesting e-mail addresses (usually for spam).

A Web crawler is one type of bot, or software agent. In general, it starts with a list of URLs to visit, called the seeds. As the crawler visits these URLs, it identifies all the hyperlinks in the page and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies.
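The seed/frontier loop described above can be sketched without any networking by standing in a toy link graph for the fetch-and-parse step (the URLs here are invented for illustration):

```python
from collections import deque

# stand-in for fetching a page and extracting its hyperlinks
LINK_GRAPH = {
    "http://example.com/": ["http://example.com/a", "http://example.com/b"],
    "http://example.com/a": ["http://example.com/b", "http://example.com/c"],
    "http://example.com/b": ["http://example.com/"],
    "http://example.com/c": [],
}

def crawl(seeds):
    frontier = deque(seeds)   # the crawl frontier: URLs waiting to be visited
    visited = set()           # URLs already crawled
    order = []
    while frontier:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        order.append(url)
        # a real crawler would fetch the page and parse out its links here
        for link in LINK_GRAPH.get(url, []):
            if link not in visited:
                frontier.append(link)
    return order

print(crawl(["http://example.com/"]))
```

Using a FIFO queue for the frontier gives breadth-first order; the selection, re-visit, and politeness policies discussed below all refine this basic loop.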

Crawling policies

There are important characteristics of the Web that make crawling it very difficult:
  • its large volume,
  • its fast rate of change, and
  • dynamic page generation.
These characteristics combine to produce a wide variety of possible crawlable URLs.
The large volume implies that the crawler can only download a fraction of the Web pages within a given time, so it needs to prioritize its downloads. The high rate of change implies that by the time the crawler is downloading the last pages from a site, it is very likely that new pages have been added to the site, or that pages have already been updated or even deleted.
The number of pages being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with forty-eight different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
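The gallery arithmetic above (4 × 3 × 2 × 2 = 48 distinct URLs for the same content) can be verified with itertools.product; the parameter names below are invented for illustration:

```python
from itertools import product

sorts = ["date", "name", "size", "rating"]   # four ways to sort images
thumbs = ["small", "medium", "large"]        # three thumbnail sizes
formats = ["jpg", "png"]                     # two file formats
user_content = ["on", "off"]                 # user-provided content toggle

urls = [
    f"http://example.com/gallery?sort={s}&thumb={t}&fmt={f}&uc={u}"
    for s, t, f, u in product(sorts, thumbs, formats, user_content)
]
print(len(urls))  # → 48
```

Every one of these URLs is syntactically distinct, yet they all lead to the same underlying gallery, which is exactly why crawlers need querystring normalization.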
As Edwards et al. noted, "Given that the bandwidth for conducting crawls is neither infinite nor free, it is becoming essential to crawl the Web in not only a scalable, but efficient way, if some reasonable measure of quality or freshness is to be maintained." [3]. A crawler must carefully choose at each step which pages to visit next.

The behavior of a Web crawler is the outcome of a combination of policies:
  • a selection policy that states which pages to download,
  • a re-visit policy that states when to check for changes to the pages,
  • a politeness policy that states how to avoid overloading Web sites, and
  • a parallelization policy that states how to coordinate distributed Web crawlers.

Web crawler architectures



A crawler must not only have a good crawling strategy, as noted in the previous sections, but it should also have a highly optimized architecture.

Shkapenyuk and Suel noted that: "While it is fairly easy to build a slow crawler that downloads a few pages per second for a short period of time, building a high-performance system that can download hundreds of millions of pages over several weeks presents a number of challenges in system design, I/O and network efficiency, and robustness and manageability."

Web crawlers are a central part of search engines, and details on their algorithms and architecture are kept as business secrets. When crawler designs are published, there is often an important lack of detail that prevents others from reproducing the work. There are also emerging concerns about "search engine spamming", which prevent major search engines from publishing their ranking algorithms.

URL normalization
Crawlers usually perform some type of URL normalization in order to avoid crawling the same resource more than once. The term URL normalization, also called URL canonicalization, refers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed including conversion of URLs to lowercase, removal of "." and ".." segments, and adding trailing slashes to the non-empty path component.

Crawler identification

Web crawlers typically identify themselves to a Web server by using the User-agent field of an HTTP request. Web site administrators typically examine their Web servers’ log and use the user agent field to determine which crawlers have visited the web server and how often. The user agent field may include a URL where the Web site administrator may find out more information about the crawler. Spambots and other malicious Web crawlers are unlikely to place identifying information in the user agent field, or they may mask their identity as a browser or other well-known crawler.
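On the crawler side, identification is just a matter of setting the User-agent header on each request. The sketch below builds such a request with Python's urllib; the crawler name and info URL are made up for the example, and the request is never actually sent:

```python
import urllib.request

# a polite crawler names itself and links to a page describing it
USER_AGENT = "ExampleBot/1.0 (+http://example.com/bot-info.html)"

req = urllib.request.Request(
    "http://example.com/",
    headers={"User-Agent": USER_AGENT},
)
# this is the value a Web server would record in its access log
print(req.get_header("User-agent"))
```

The "+URL" convention in the user agent string is what lets an administrator reading the logs find out who runs the crawler and how to contact them.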

It is important for Web crawlers to identify themselves so that Web site administrators can contact the owner if needed. In some cases, crawlers may be accidentally trapped in a crawler trap or they may be overloading a Web server with requests, and the owner needs to stop the crawler. Identification is also useful for administrators that are interested in knowing when they may expect their Web pages to be indexed by a particular search engine.

Examples of Web crawlers

The following is a list of published crawler architectures for general-purpose crawlers (excluding focused web crawlers), with a brief description that includes the names given to the different components and outstanding features:

Yahoo Crawler (Slurp) is the name of the Yahoo Search crawler.

MSNBot is the name of Microsoft's Bing webcrawler.

FAST Crawler is a distributed crawler used by Fast Search & Transfer; a general description of its architecture is available.

Google Crawler is described in some detail, but the reference covers only an early version of its architecture, which was written in C++ and Python. The crawler was integrated with the indexing process, because text parsing was done both for full-text indexing and for URL extraction. A URL server sends lists of URLs to be fetched by several crawling processes. During parsing, the URLs found were passed to the URL server, which checked whether each URL had been seen before; if not, it was added to the URL server's queue.

Methabot is a scriptable web crawler written in C, released under the ISC license.

PolyBot is a distributed crawler written in C++ and Python, which is composed of a "crawl manager", one or more "downloaders" and one or more "DNS resolvers". Collected URLs are added to a queue on disk, and processed later to search for seen URLs in batch mode. The politeness policy considers both third and second level domains (e.g.: www.example.com and www2.example.com are third level domains) because third level domains are usually hosted by the same Web server.

RBSE was the first published web crawler. It was based on two programs: the first, "spider", maintains a queue in a relational database, and the second, "mite", is a modified www ASCII browser that downloads pages from the Web.

WebCrawler was used to build the first publicly-available full-text index of a subset of the Web. It was based on lib-WWW to download pages, and another program to parse and order URLs for breadth-first exploration of the Web graph. It also included a real-time crawler that followed links based on the similarity of the anchor text with the provided query.

World Wide Web Worm was a crawler used to build a simple index of document titles and URLs. The index could be searched by using the grep Unix command.

WebFountain is a distributed, modular crawler similar to Mercator but written in C++. It features a "controller" machine that coordinates a series of "ant" machines. After repeatedly downloading pages, a change rate is inferred for each page, and a non-linear programming method is used to solve the equation system for maximizing freshness. The authors recommend using this crawling order in the early stages of the crawl and then switching to a uniform crawling order, in which all pages are visited with the same frequency.

WebRACE is a crawling and caching module implemented in Java, and used as a part of a more generic system called eRACE. The system receives requests from users for downloading web pages, so the crawler acts in part as a smart proxy server. The system also handles requests for "subscriptions" to Web pages that must be monitored: when the pages change, they must be downloaded by the crawler and the subscriber must be notified. The most outstanding feature of WebRACE is that, while most crawlers start with a set of "seed" URLs, WebRACE is continuously receiving new starting URLs to crawl from.

In addition to the specific crawler architectures listed above, there are general crawler architectures published by Cho and Chakrabarti.

White-hat-versus-black-hat

White hat versus black hat

SEO techniques can be classified into two broad categories: techniques that search engines recommend as part of good design, and those techniques of which search engines do not approve. The search engines attempt to minimize the effect of the latter, among them spamdexing. Some industry commentators have classified these methods, and the practitioners who employ them, as either white hat SEO, or black hat SEO. White hats tend to produce results that last a long time, whereas black hats anticipate that their sites may eventually be banned either temporarily or permanently once the search engines discover what they are doing.

An SEO technique is considered white hat if it conforms to the search engines' guidelines and involves no deception. As the search engine guidelines are not written as a series of rules or commandments, this is an important distinction to note. White hat SEO is not just about following guidelines, but is about ensuring that the content a search engine indexes and subsequently ranks is the same content a user will see. White hat advice is generally summed up as creating content for users, not for search engines, and then making that content easily accessible to the spiders, rather than attempting to trick the algorithm from its intended purpose. White hat SEO is in many ways similar to web development that promotes accessibility, although the two are not identical.

Black Hat

Black hat SEO attempts to improve rankings in ways that are disapproved of by the search engines, or involve deception. One black hat technique uses text that is hidden, either as text colored similar to the background, in an invisible div, or positioned off screen. Another method gives a different page depending on whether the page is being requested by a human visitor or a search engine, a technique known as cloaking.

Search engines may penalize sites they discover using black hat methods, either by reducing their rankings or by eliminating their listings from their databases altogether. Such penalties can be applied automatically by the search engines' algorithms or through a manual site review. Infamous examples are Google's February 2006 removal of both BMW Germany and Ricoh Germany for the use of deceptive practices, and its April 2006 removal of the PPC agency BigMouthMedia. All three companies quickly apologized, fixed the offending pages, and were restored to Google's index.

Many Web applications employ back-end systems that dynamically modify page content (both visible content and metadata such as the page title or meta keywords) and are designed to increase page relevance to search engines based on how past visitors reached the original page. This dynamic optimization and tuning process can be (and has been) abused: Web applications that dynamically alter themselves in this way can be poisoned by attackers.

Gray hat techniques

Gray hat techniques are those that are neither truly white hat nor black hat; some of them can be argued either way, and they carry some risk. A good example of such a technique is purchasing links: the average price for a text link depends on the PageRank of the linking page.
While Google is against the sale and purchase of links, there are people who subscribe to online magazines, memberships, and other resources for the purpose of getting a link back to their website.
Another widely used gray hat technique is a webmaster creating multiple 'micro-sites' under his control for the sole purpose of cross-linking to the target site. Since all the micro-sites have the same owner, this violates the principles behind the search engines' algorithms (it is self-linking), but because ownership of sites is not traceable by search engines, it is hard to detect; the micro-sites can appear as independent sites, especially when hosted on separate Class C IPs.


The Difference Between Black Hat SEO And White Hat SEO

In order to illustrate the difference between white hat and black hat SEO techniques, it is best to summarize how each is aimed at either search engines or humans. White hat techniques would make sense even if search engines weren't involved, while black hat techniques are useless without them.
  • Black hat content and links are aimed at search engines; white hat content and links are aimed at humans.
  • Black hat techniques are invisible to humans; white hat techniques are visible to humans.
  • The quality of black hat work is hidden; the quality of white hat work is visible.
  • Black hat sees search engines as the enemy; white hat sees search engines as an ally.
  • Black hat sees domains and branding as inconsequential; white hat sees them as something to be nurtured.
  • Black hat realizes short-term results and employs unethical techniques; white hat realizes long-term results and employs only ethical techniques.

link-building-tips

Before You Start Building Links…

There are some rules you should follow when you start building links.
  1. Vary the anchor text of your links.
  2. Include different keywords in titles.
  3. Get links from different Class C IPs (an important factor for SERPs).
  4. Vary the type of linking site: blogs, article resources, directories, etc.
  5. Don't get all your links from newer sites; try to get links from older (and trusted) sites as well.
  6. Don't get all your links from the same location (e.g., all links from footers).
  7. Try to get links from relevant content and industries.
  8. Get links with varied PR from the linking pages (if you get a lot of PR6 links, that can hurt you, as Google may think you are purchasing high-PR links).
  9. Try to get authority links, such as links from news sites.
  10. Do not try to build hundreds of links quickly.
Relevant Link Building

It is not difficult to find relevant resource pages for listing your website. Search for the following terms.
Terms You Can Search to Find Relevant Pages:

  • “Suggest link” +”keyword”
  • “Suggest a link” +”keyword”
  • “Suggest site” + “keyword”
  • “Suggest website” + “keyword”
  • “Suggest a site” + “keyword”
  • “Suggest URL” +”keyword”
  • “Suggest a URL” +”keyword”
  • “Suggest an URL” +”keyword”
  • “Submit link” +”keyword”
  • “Submit Website” +”keyword”
  • “Submit resources” + “Keyword”
  • “Submit site” +”keyword”
  • “Submit a site” +”keyword”
  • “Submit URL” +”keyword”
  • “Submit a URL” +”keyword”
  • “Submit an URL” +”keyword”
  • “cool sites” +”keyword”
  • directory +”keyword”
  • directories +”keyword”
  • List of “Keyword” Directory
  • “Your Category” +”add url”
  • “Your Category” +directory
  • “Your Category” +”submit site”
  • “recommended links” +”keyword”
  • “Your Niche Category” +”suggest a site”
  • “favorite links” +”keyword”
  • “Add link” +”keyword”
  • “Add site” +”keyword”
  • “Add website” +”keyword”
  • “Add a site” +”keyword”
  • “Add URL” +”keyword”
  • “Add a URL” +”keyword”
  • “Add an URL” +”keyword”
  • “Add resources” + “Keyword”

You can find relevant website resources with a listing option, where you can submit your website to build links.

Link-building

Link Building

Link building is one of the SEO solutions that help in providing high search engine page ranking and improved visibility to a website. For any professional web developer, link building forms the backbone of SEO operations that helps in bringing quality web traffic to your website. Link building is an efficient way of enhancing the popularity of a website.

While going for a professional link building service, make sure you take the quality services of a web developer who ensures you get quality web content and business-centric write-ups backed by one-way back links. To ensure smooth web traffic and online sales, make sure you avail a quality content management solution that gives your website a professional look and makes it user-friendly.

To avail the benefit of link building, create genuine, information-rich back links that are useful to clients. Make sure that the links are keyword-rich, which helps boost the website's traffic. Once the back links are ready, one can submit articles to highly ranked sites such as Digg, Ezine, Sphinn, del.icio.us, Reddit, Yahoo!, and StumbleUpon, to name a few.

In case one is looking for ways of link building, one can use various kinds of back link strategies that include URL links, text links, dynamic links, and image links. While going for link building service, one should always keep in mind the fact that back links are often spidered by the web crawlers and spiders.

Always remember that a well-written business write-up rich with professional information helps enhance web traffic and web page visibility. Link building usually involves two kinds of links: one-way and two-way links.

While taking the services of a link building provider, it is necessary to use relevant links that already have high search engine result rankings along with useful content. In the process of link building, one-way links are known to be the best way of availing a quality link building strategy; choose link-sharing partners with care, as a reliable link will come in useful in promoting the website.

So, in case you are looking forward to making use of a professional link building strategy, ensure you choose a quality SEO service provider that helps you with business-centric write-ups and ensures high web traffic. As an online marketing and advertising strategy, link building is one of the best ways to enhance your online sales.

To popularize your website and enhance its web traffic, ensure that you use quality web links that are useful in drawing quality sales. Always remember that smart, information-rich content is what every client wants; providing it is the best way to build prospective clientele, and link building is one of the best ways to do so. For high visibility on the World Wide Web, link building is a useful optimization tool that supports online advertising and marketing.

keywords

Keywords

Keywords. That’s a term you hear associated with search engine optimization all the time. In fact, it’s very rare that you hear anything about SEO in which keywords aren’t involved some way. So, what’s so special about keywords?

Simply put, keywords are those words used to catalog, index, and find your web site. But of course, it’s not nearly as simple as it sounds. There is a fine science to finding and using the right keywords on your web site to improve your site’s ranking. In fact, an entire industry has been built around keywords and their usage. Consultants spend countless hours finding and applying the right keywords for their customers, and those who design web sites with SEO in mind also agonize over finding just the right ones.

Using popular (and effective) keywords on your web site will help assure that it is visible in search engine results instead of being buried under thousands of other web-site results. There are keyword research tools that can help you find the exact keywords to use for your site, and therefore for your search engine optimization. Understanding the use of keywords, where to find them, which ones to use, and the best ways to use them allows you to have a highly visible and successful web site.

The Importance of Keywords

Basically, keywords capture the essence of your web site. Keywords are what a potential visitor to your site puts into a search engine to find web sites related to a specific subject, and the keywords that you choose will be used throughout your optimization process. As a small-business owner, you will want your web site to be readily visible when those search engine results come back.

Using the correct keywords in your web-site content can mean the difference in whether you come back in search engine results as one of the first 20 web sites (which is optimum) or buried under other web sites several pages into the results (which means hundreds of results were returned before your site). Studies show that searchers rarely go past the second page of search results when looking for something online.

Take into consideration for a moment the telephone book Yellow Pages. Say you’re looking for a restaurant. The first thing you’re going to do is find the heading “Restaurants,” which would be your keyword. Unfortunately, even in a smaller city, there might be a page or more of restaurants to look through. However, if you narrow your search to Chinese restaurants, that’s going to cut your search time in half. Basically, that’s how keywords work in search engines and search engine optimization. Choosing the appropriate keywords for your web site will improve your search engine rankings and lead more search engine users to your site.
How do you know which keywords to use? Where do you find them? How do you use them? The answers to these questions will save you a great deal of time when creating a web site. Where you rank in search engine results will be determined by which keywords are used and how they are positioned on your web site. It’s critical to choose appropriate keywords, include variations of those keywords, avoid common (or “stop”) words, and know where and how many times to place them throughout your web site.

Used correctly, keywords will allow you to be placed in the first page or two of the most popular search engines. This tremendously increases the traffic that visits your web site. Keep in mind, the majority of Internet users find new web sites through use of a search engine. High search engine rankings can be as effective, if not more effective, than paid ads for publicity of your business. The business you receive from search engine rankings will also be more targeted to your services than it would be with a blanket ad. By using the right keywords, your customer base will consist of people who set out to find exactly what your site has to offer, and those customers will be more likely to visit you repeatedly in the future.

To decide which keywords should be used on your web site, you can start by asking yourself the simplest, but most relevant, question: who needs the services that you offer? It’s an elementary question, but one that will be most important in finding the right keywords and achieving the best search engine optimization. If you’re marketing specialty soaps, you will want to use terms such as soap (which really is too broad a term), specialty soap, bath products, luxury bath products, or other such words that come to mind when you think of your product. It’s also important to use the words real people use when talking about your products. For example, using the term “cleaning supplies” as a keyword will probably not result in a good ranking, because people thinking of personal cleanliness don’t search for “cleaning supplies.” They search for “soap” or something even more specific, like “chamomile soap.”

In addition to the terms that you think of, people also will look for web sites using variations of words and phrases — including misspellings. It might help to have friends and family members make suggestions of what wording they would use to find a similar product and include those words in your keyword research as well as misspellings of those words. An example might be “chamomile.” Some people may incorrectly spell it “chammomile,” so including that spelling in your keywords can increase your chance of reaching those searchers. Also remember to use capitalized and plural keywords. The more specific the words are, the better the chance will be that your web site is targeted. Just remember that words such as “a,” “an,” “the,” “and,” “or,” and “but” are called stop words. These words are so common they are of no use as keywords.
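To make the idea concrete, here is a toy sketch (not a real SEO tool) of how you might expand a seed phrase into a keyword list: it strips the stop words mentioned above, adds a naive plural, and swaps in known misspellings. The misspelling table is just an illustrative example.

```python
# Toy keyword-expansion sketch: stop-word filtering, naive plurals,
# and common-misspelling variants. Both tables are illustrative only.
STOP_WORDS = {"a", "an", "the", "and", "or", "but"}
MISSPELLINGS = {"chamomile": ["chammomile", "camomile"]}  # example entries

def expand_keywords(phrase):
    """Return a set of keyword variants for a seed phrase."""
    words = [w for w in phrase.lower().split() if w not in STOP_WORDS]
    variants = {" ".join(words)}
    # naive pluralization of the last word
    if words and not words[-1].endswith("s"):
        variants.add(" ".join(words[:-1] + [words[-1] + "s"]))
    # swap in common misspellings, one word at a time
    for i, w in enumerate(words):
        for bad in MISSPELLINGS.get(w, []):
            variants.add(" ".join(words[:i] + [bad] + words[i + 1:]))
    return variants

print(sorted(expand_keywords("the chamomile soap")))
```

A real keyword research tool would also pull in search-volume data, but the basic bookkeeping looks much like this.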

search-engine-optimization

What Is a Search Engine?

So you know the basic concept of a search engine. Type a word or phrase into a search box and click a button. Wait a few seconds, and references to thousands (or hundreds of thousands) of pages will appear. Then all you have to do is click through those pages to find what you want. But what exactly is a search engine, beyond this general concept of “seek and ye shall find”?

It’s a little complicated. On the back end, a search engine is a piece of software that uses applications
to collect information about web pages. The information collected is usually key words or phrases that are possible indicators of what is contained on the web page as a whole, the URL of the page, the code that makes up the page, and links into and out of the page. That information is then indexed and stored in a database.
On the front end, the software has a user interface where users enter a search term — a word or phrase — in an attempt to find specific information. When the user clicks a search button, an algorithm then examines the information stored in the back-end database and retrieves links to web pages that appear to match the search term the user entered.
The process of collecting information about web pages is performed by an agent called a crawler,
spider, or robot. The crawler literally looks at every URL on the Web, and collects key words and phrases on each page, which are then included in the database that powers a search engine. Considering that the number of sites on the Web went over 100 million some time ago and is increasing by more than 1.5 million sites each month, that’s like your brain cataloging every single word you read, so that when you need to know something, you think of that word and every reference to it comes to mind.
In a word . . . overwhelming.
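The core of what a crawler does with a single page can be sketched in a few lines: parse the HTML, collect the visible words for the index, and collect the outgoing links to visit next. This is a minimal illustration using Python's standard-library parser, with a hard-coded page instead of a network fetch; real crawlers add robots.txt checks, politeness delays, and far more sophisticated parsing.

```python
# Minimal sketch of per-page crawling: gather indexable words and
# outgoing links from one HTML page (hard-coded here, not fetched).
from html.parser import HTMLParser

class PageCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.words = []   # words to index
        self.links = []   # URLs to crawl next
        self._skip = False  # True while inside <script>/<style>

    def handle_starttag(self, tag, attrs):
        if tag in ("script", "style"):
            self._skip = True
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

    def handle_endtag(self, tag):
        if tag in ("script", "style"):
            self._skip = False

    def handle_data(self, data):
        if not self._skip:
            self.words.extend(data.lower().split())

page = """<html><head><title>Chamomile Soap</title></head>
<body><h1>Luxury bath products</h1>
<a href="/specialty-soap.html">Specialty soap</a></body></html>"""

collector = PageCollector()
collector.feed(page)
print(collector.words)   # → ['chamomile', 'soap', 'luxury', 'bath', 'products', 'specialty', 'soap']
print(collector.links)   # → ['/specialty-soap.html']
```

The collected words feed the search engine's index, and the collected links become the next entries in the crawl queue.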

Search engine optimization (SEO) is the process of improving the volume or quality of traffic to a web site from search engines via "natural" or un-paid ("organic" or "algorithmic") search results as opposed to search engine marketing (SEM) which deals with paid inclusion. Typically, the earlier (or higher) a site appears in the search results list, the more visitors it will receive from the search engine. SEO may target different kinds of search, including image search, local search, video search and industry-specific vertical search engines. This gives a web site web presence.

As an Internet marketing strategy, SEO considers how search engines work and what people search for. Optimizing a website primarily involves editing its content and HTML and associated coding to both increase its relevance to specific keywords and to remove barriers to the indexing activities of search engines.
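Much of that on-page editing can be checked mechanically. As a hedged sketch, the following audits a page's head section for two of the rules from the checklist at the top of this post: a title no longer than 65 characters, and a meta description on every page. The thresholds and messages are illustrative, not a standard API.

```python
# Sketch of a simple on-page SEO check: flag an over-long <title>
# or a missing meta description. Limits follow the checklist above.
from html.parser import HTMLParser

class HeadAudit(HTMLParser):
    def __init__(self):
        super().__init__()
        self.title = ""
        self.description = None
        self._in_title = False

    def handle_starttag(self, tag, attrs):
        if tag == "title":
            self._in_title = True
        if tag == "meta":
            d = dict(attrs)
            if (d.get("name") or "").lower() == "description":
                self.description = d.get("content") or ""

    def handle_endtag(self, tag):
        if tag == "title":
            self._in_title = False

    def handle_data(self, data):
        if self._in_title:
            self.title += data

def audit(html):
    """Return a list of problems found in the page's head."""
    a = HeadAudit()
    a.feed(html)
    problems = []
    if len(a.title) > 65:
        problems.append("title longer than 65 characters")
    if not a.description:
        problems.append("missing meta description")
    return problems

print(audit("<html><head><title>Specialty Soap Shop</title></head></html>"))
# → ['missing meta description']
```

Running a check like this across every page is one way an optimizer removes indexing barriers before the crawler ever arrives.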
The acronym "SEO" can refer to "search engine optimizers," a term adopted by an industry of consultants who carry out optimization projects on behalf of clients, and by employees who perform SEO services in-house. Search engine optimizers may offer SEO as a stand-alone service or as a part of a broader marketing campaign. Because effective SEO may require changes to the HTML source code of a site, SEO tactics may be incorporated into web site development and design. The term "search engine friendly" may be used to describe web site designs, menus, content management systems, images, videos, shopping carts, and other elements that have been optimized for the purpose of search engine exposure.

Another class of techniques, known as black hat SEO or spamdexing, uses methods such as link farms, keyword stuffing, and article spinning that degrade both the relevance of search results and the user experience of search engines. Search engines look for sites that employ these techniques in order to remove them from their indices.