What is On-Page Optimization, and What Are the Procedures for On-Page Optimization?

On-page optimization is one of the major concepts in SEO. Before learning on-page optimization, we should have basic knowledge of SEO and understand the importance of on-page optimization within it.
What is SEO?
The concept of SEO started in the 1990s, with people submitting their websites to search engines. Search engines deploy spiders (also called crawlers): special software programs that peruse the web to find new information on websites, store that data, and classify the millions of pages they come across. Search results are then pulled up using an indexer, which stores the various types of information collected for the search engine.

In simple terms, SEO is the practice of optimizing a website so that it ranks at the top of search engine results.

SEO is mainly divided into two major parts:

1> On-Page Optimization

2> Off-Page Optimization

ON-PAGE OPTIMIZATION

On-page optimization involves changing the HTML code of every page of the website that you want to rank at the top of search engines. It basically involves changing:

1> Title tag

2> Meta Tags

3> Header Tags

4> Image Tags

5> Font Decoration Tags

6> Keyword Density

7> XML Sitemap

8> Robots File

All of the above procedures need to be applied to each webpage. Now we will describe each procedure in detail.

1> Title Tag

The title tag needs to be added anywhere between the <head> … </head> tags of the webpage.

The title of a webpage is displayed in the browser's title bar or tab.

In Google organic search, the title of the page is displayed as the first line of each result, in blue. The title has to be attractive so that the reader is interested in opening the website. Google reads roughly the first 60 characters of the web title.

Rules :-

There should be only one title tag per webpage.

Every page of the website must have a unique title.

Google reads only about the first 60 characters of the web title.

The title should start with the keywords that you are targeting to perform well in the search engine.
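For example, a page targeting the keyword "software development" might use a title like the following (the keyword and company name are placeholders for illustration):

```html
<head>
  <!-- Keyword-first title, kept under roughly 60 characters -->
  <title>Software Development Services | Example Company</title>
</head>
```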

2> Meta Tags

Meta tags play a very important role in on-page SEO. These tags are responsible for how a webpage's information is shown in search engines.

In other words, meta tags provide the main content of the webpage to search engines, such as the page title and the page description, and they also tell search engines how to display a particular webpage's information in search results.

Mandatory Meta tags for webpage

We mainly add 5 types of meta tags to a webpage:

Description Meta tag

Keywords Meta tag

Robots Meta tag

Revisit-After Meta tag

Googlebot Meta tag

-Description Meta Tag

The meta description tag needs to be placed between the <head> … </head> tags of the webpage.

Rules

Google collects the first 25 words of the webpage's content as the meta description.

The meta description must be unique for each page of the website.

There should be only one Meta description tag per webpage.

Highlight the target keywords within the first 25 words of the description.

Syntax of “Meta description tag”

<meta name="description" content="…"/>

The meta description tag should consist of no more than 100 words, and the first 25 words (about 160 characters) of the meta description should contain your keywords.

This part is displayed under the title of each search result in Google organic search.
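A sketch of a filled-in description tag, with placeholder text for illustration:

```html
<head>
  <!-- Keywords appear early in the description; the text is a placeholder -->
  <meta name="description" content="Software development services for startups: web, mobile, and cloud applications built by an experienced team."/>
</head>
```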

-Keywords Meta Tag

This tag should contain all the keywords that you are targeting, separated by commas. There should be only one meta keywords tag per webpage.

Ex:- software company, software company seattle, software company names, software company logos, software company organizational structure, software company for sale, software company business plan, software company name generator, software company valuation, software company website.

For keyword ideas, check the website – “www.keyword-finder.net”

Syntax of “Meta Keywords tag”

<meta name="keywords" content="…"/>
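Using a few of the example keywords above, a filled-in tag would look like this (purely illustrative):

```html
<meta name="keywords" content="software company, software company seattle, software company names"/>
```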

Note:- The keywords tag is no longer used on websites; Google restricts it and blocks it through its algorithms. The main reason for the restriction is that many website owners stuffed the tag with lots of unnecessary keywords that were irrelevant to the website's content. So if you highlight the keywords in the description tag with meaningful content, there is no need to use the meta keywords tag.

-Robots Meta Tags

The robots meta tag is used to inform search engine crawlers which pages should be indexed and which should not.

For example, if a website owner wants to restrict specific pages, such as the admin dashboard, for security purposes, then the robots directives are the best solution.

The robots meta tag can be used in 4 ways. Based on your requirements, use these tags on the individual pages whose crawling you want to control. (The same can also be done through the robots file, which we will discuss in the last part of this section.)

1. <meta name="robots" content="index, follow"> : the default; spiders index the page and follow its links.

2. <meta name="robots" content="index, nofollow"> : spiders index the page, so it is available in search results, but they do not follow its links.

3. <meta name="robots" content="noindex, follow"> : spiders follow the links on the page, but the page itself is not available in search results because noindex is used in the meta tag.

4. <meta name="robots" content="noindex, nofollow"> : the page is not indexed and its links are not followed.

-Revisit After Meta Tags

The revisit-after tag is useful for informing the crawler to revisit the page after a particular time. Set the value according to your posting frequency: if you create new posts every day, set the value to 1 day; if you post weekly or monthly, set the value to 1 week or 1 month. Even when nothing new has been added, if this tag is on the page the crawler will not stop checking for content: on every visit it looks for updated data, whether or not the website owner has posted anything new.

syntax of “Meta Revisit-After” Tag

<meta name="revisit-after" content="…revisit time…">

Ex:- <meta name="revisit-after" content="7 days">

– Meta Name Googlebot

The main purpose of the googlebot tag is to tell Google how to crawl and display your page's description. Suppose you modify the description tag on your website and submit it to Google, but the old description is still stored in the search cache, so Google shows the old description content. This kind of search result problem used to occur with Open Directory Project (DMOZ) descriptions. To overcome this issue we use the 'googlebot' tag, which lets Googlebot pick up the new description data while crawling your website. This tag is applicable only to the Google search engine; otherwise it works the same as the robots meta tag. It can be used in different ways:

syntax of “googlebot tag”

<meta name="googlebot" content="code">

<meta name="googlebot" content="noodp">

The "noodp" directive tells Googlebot NOT to use the Open Directory Project (ODP) description, but instead the description located on your website.

<meta name="googlebot" content="noarchive">

noarchive tells Google not to display a cached copy of the page.

<meta name="googlebot" content="nosnippet">

nosnippet tells Google not to show a snippet (the descriptive text) for the page in search results.
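Multiple directives can be combined in a single tag, separated by commas; for example:

```html
<meta name="googlebot" content="noodp, noarchive, nosnippet">
```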

3> HEADING TAGS

The header tags of a webpage should always include the keywords that you are targeting. There should be only one <h1> tag per webpage to get a good ranking in Google. At the same time, you can add one or more <h2> and <h3> tags.

Syntax of “Header tags”

Only one H1 per page <h1> Targeted keywords </h1>

One or more H2 tags in page <h2> Targeted keywords </h2>

One or more H3 tags in page <h3>Targeted keywords</h3>
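A sketch of how the heading hierarchy might look on a page targeting "software development" (the heading texts are placeholders):

```html
<h1>Software Development Services</h1>
  <h2>Web Application Development</h2>
  <h2>Mobile Application Development</h2>
    <h3>Android Development</h3>
```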

4> IMAGE ALTERNATE TAG

The image alternate (alt) tag should always contain the targeted keywords. Don't use the same text for all image tags; use different text for each individual image tag, built around the targeted keywords.

For example, if my keyword is 'software', then I can have images with alt text such as software java, software oracle, software android, etc.

Syntax for “image alternative text tag”

<img src="pikestewardlogo.png" alt="software projects" />

5> FONT DECORATION TAG

When you want to highlight particular text anywhere on the webpage, use the font decoration tags: bold, italic, and underline. Wherever the webpage has text in bold, italics, or underline, make sure it contains the keywords you are targeting. This is also one of the best methods for getting your keywords crawled and ranking at the top of search results.
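For example, with 'software' as the target keyword (the sentence itself is a placeholder):

```html
<p>We build <b>software</b> for startups, including
   <i>software consulting</i> and <u>software maintenance</u>.</p>
```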

6> KEYWORD DENSITY

Keyword density is not a tag; it is the practice of working your keywords into the page content. The keywords should make up at least 3% of the page content; for example, on a 500-word page the keyword would appear about 15 times. Prepare the content in this manner so that it is crawled and ranks at the top of search results. When Google crawls the page, it fetches the first 20 words of each paragraph, so whenever you prepare content for the website, you must work the chosen keywords meaningfully into the first 20 words of each paragraph.

7> SITEMAP

A sitemap is a list of all the webpages of a website, accessible to crawlers and users. It is a way for website owners to inform search engines about all the pages that exist on their website, and it is useful for better crawling. There are two kinds of sitemaps you need to create:

HTML Sitemap:- Built so that website users can easily access all the webpages of the website.

XML Sitemap:- Built so that search engine spiders or crawlers can easily access all the webpages of the website.

Note:- Crawlers or spiders don't read sitemaps in .html, .txt, or any other format; they read them only in “.xml” format.

How to Create a Sitemap in XML-Understandable Format Manually

To create a sitemap manually, first list out in one document all the webpages you want to add to the sitemap file.

In any text editor, create a new file and save it with the “.xml” extension. Ex:- sitemap.xml

The first line of the sitemap specifies the version and encoding details.

line 1: <?xml version="1.0" encoding="UTF-8"?>

The second line is the <urlset> tag, which declares the format, including all the needed namespaces, and wraps the website URLs.

<changefreq> : This tag sets the expected crawling frequency, such as every day or every month, so crawlers know how often to look for new changes to the webpage.

Ex:- A website owner may modify or update the webpages every day, every week, or every month; fix the frequency of each webpage accordingly, so that Google crawls the updated data at that interval.

<priority> : The priority tag sets the preference of a page relative to other pages of the site. The highest priority is 1.0 and the lowest is 0.0.
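Putting these pieces together, a minimal sitemap.xml might look like this (the domain and pages are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <!-- Home page: updated daily, highest priority -->
  <url>
    <loc>https://www.example.com/</loc>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <!-- Inner page: updated monthly, lower priority -->
  <url>
    <loc>https://www.example.com/services/</loc>
    <changefreq>monthly</changefreq>
    <priority>0.8</priority>
  </url>
</urlset>
```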

Sitemap generation plays a main role in SEO, but adding a sitemap by itself does not affect ranking.

For new websites, a sitemap is very helpful for getting ranked; it allows your website to interact with search engines more effectively.

General static websites have at most 5 to 10 pages, so there is no issue creating a sitemap manually. But if a website owner has a lot of webpages, above 50 or 100, manual creation takes a lot of time and there is a risk that a few webpages will be missed.

To overcome these issues, we use automatic sitemap generator tools:

1> Go to www.xml-sitemaps.com

2> Enter your domain name URL

3> Set the change frequency

4> Set the priority for the overall webpages

5> Click on “start”

It generates “.xml” and “.html” files for the given website. Download those files and store them in the root directory of the website.

Where to Submit the .XML File for Crawling in Search Engines?

Up to now, we have seen how to create sitemaps manually and automatically. We stored the generated sitemaps in the root directory of the website, but how do search engines know to read the sitemap? We haven't yet given search engines any access to it.

To submit a sitemap, Google, Bing, and other search engines each maintain their own webmaster tools. Submit the generated sitemap through the webmaster tool; then the website will be indexed automatically and start getting results in the search engine.

Google :- www.google.com/webmasters

Bing :- www.bing.com/toolbox/webmaster

8> ROBOTS FILE

The robots file is also known as the robots.txt protocol or the Robots Exclusion Standard.

The main use of the robots file is this scenario: a website has some sensitive webpages, such as admin dashboard pages, that you don't want crawled by search engines for security reasons. In this kind of scenario, the robots.txt file lets you restrict those particular pages.

In case the site owner doesn't add a robots file listing the restricted pages, search engine crawlers will crawl and index all pages of the website, including the admin pages, and then general users can easily find the admin pages.

So before starting SEO, check whether a robots.txt file is available; if not, create one and add it to the root directory of the website.

Procedure to restrict particular webpages:

1> User-agent: *

2> Disallow: /about-us/

Here, the symbol “*” indicates that the rules apply to the crawlers of all search engines, such as Google, Yahoo, and Bing. If you want to target only the Google crawler, add “googlebot” in place of “*”; googlebot is the name of Google's crawler (or spider). To restrict a particular page from crawling, write “Disallow: /about-us/” in the robots file after the “User-agent” line; in this example, the “about us” page will not be crawled by search engines. The same per-page crawling restrictions can also be applied through meta tags:
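A complete robots.txt might combine rules for all crawlers with a Google-specific rule and a pointer to the sitemap (the paths and domain are placeholders):

```
# Rules for all crawlers
User-agent: *
Disallow: /about-us/

# Rules for Google's crawler only
User-agent: googlebot
Disallow: /admin/

# Location of the XML sitemap
Sitemap: https://www.example.com/sitemap.xml
```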

<meta name="robots" content="nofollow">

<meta name="googlebot" content="noindex">

nofollow tells the crawler not to follow the links on the page; noindex tells it not to show the page in search results. Here, the noindex rule applies only to Google and not to other search engines, because the meta name is "googlebot", the name of Google's spider. Once a page is excluded from indexing, there is no way for it to appear as a result in the search engine.

All these tags are mandatory and need to be added on every page for on-page optimization.