

Why LinkBacks?

LinkBacks (Trackbacks, Pingbacks and Refbacks) allow you to notify another site that you wrote something related to what is written on a specific page. This improves the chances of contributors to this page noticing that you gave them credit for something, or that you improved upon something they wrote. With LinkBacks, websites are interconnected. Think of them as the equivalents of acknowledgments and references at the end of an academic paper, or a chapter in a textbook.

Linkbacks have long been a major force in the development of the blogging network, by creating an interconnected series of blogs and posts acknowledging one another. Not only does this improve the general community ethos throughout the "blog-o-sphere", but it also helps to make blogs into more powerful link-building tools.

Note: Links built via this method are highly relevant and do not carry the disadvantages typically associated with "link farms" or "link exchanges".

Trackback

A Trackback is simply an acknowledgment. This acknowledgment is sent via a network signal (ping) from Site A (the originator) to Site B (the receptor). The receptor often publishes a link back to the originator, indicating that the originating content is worth reading.

Trackback requires both Site A and Site B to be Trackback enabled in order to establish this communication. Trackback does not require Site A to physically link to Site B.
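Under the hood, a Trackback ping is just an HTTP POST of a few form-encoded fields to the receptor's Trackback endpoint. A minimal sketch (the site URLs and endpoint below are hypothetical):

```python
# Sketch of a Trackback ping. The endpoint and URLs are made-up examples;
# the protocol itself is a plain form-encoded HTTP POST.
from urllib.parse import urlencode
from urllib.request import Request

def build_trackback_ping(trackback_url, title, url, excerpt, blog_name):
    """Build the POST request that notifies Site B of a related post on Site A."""
    body = urlencode({
        "title": title,
        "url": url,            # permalink of the post on Site A
        "excerpt": excerpt,    # short summary shown by the receptor
        "blog_name": blog_name,
    }).encode("utf-8")
    return Request(
        trackback_url,
        data=body,
        headers={"Content-Type": "application/x-www-form-urlencoded; charset=utf-8"},
    )

req = build_trackback_ping(
    "http://site-b.example/trackback/42",   # hypothetical receptor endpoint
    "My related post",
    "http://site-a.example/my-post",
    "A short excerpt of the post...",
    "Site A",
)
print(req.get_method())  # POST
```

Sending the request (e.g. with urllib.request.urlopen) would complete the ping; Site B then decides whether to publish the link.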

Pingback

A Pingback is also a signal (ping) sent from Site A to Site B. However, it's also a link. When Site B receives the notification signal, it automatically goes back to Site A to check for the existence of a live incoming link; if the link exists, the Pingback is recorded successfully. This verification step makes Pingbacks less prone to spam than Trackbacks.

Both sites must be Pingback enabled in order to establish this communication. If a site is Pingback enabled, each time you link out you will be "pinging" external sites. Pingback requires Site A to physically link to Site B.
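Unlike Trackback's form POST, a Pingback is an XML-RPC call: the originator invokes pingback.ping with the source URI (the page containing the link) and the target URI (the page being linked to). A sketch of the request body, using hypothetical site URLs:

```python
# Sketch of a Pingback request body, assuming hypothetical site URLs.
# Pingback is the XML-RPC call pingback.ping(sourceURI, targetURI).
import xmlrpc.client

source = "http://site-a.example/my-post"     # page on Site A containing the link
target = "http://site-b.example/their-post"  # page on Site B being linked to

# Marshal the XML-RPC payload that Site A would POST to Site B's endpoint.
body = xmlrpc.client.dumps((source, target), methodname="pingback.ping")
print("pingback.ping" in body)  # True
```

On receipt, Site B fetches the source URI and verifies that a live link to the target actually exists before recording the Pingback.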

Refback

A Refback is also a link. In this case, however, Site A (the link originator) does not need to "tell" Site B (the receptor) anything. Instead, the receptor site "discovers" the link as soon as the first visitor arrives by clicking on it. This is done by analyzing the Referer header sent by that visitor's browser.

This is an easier method than Pingback, since the site originating the link doesn't have to be Pingback enabled (publishing a plain link on any webpage is enough).
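Refback discovery can be sketched in a few lines: the receptor inspects the Referer header of an incoming request and keeps it when it points at an external page (the host names below are hypothetical):

```python
# Minimal sketch of Refback discovery: Site B inspects the Referer header
# of an incoming request to learn which external page linked to it.
from urllib.parse import urlparse

def discover_refback(headers, own_host="site-b.example"):
    """Return the external referring URL, or None for absent/internal referers."""
    referer = headers.get("Referer")
    if not referer:
        return None
    host = urlparse(referer).netloc
    # Ignore navigation within the site itself.
    return referer if host and host != own_host else None

print(discover_refback({"Referer": "http://site-a.example/my-post"}))
# http://site-a.example/my-post
print(discover_refback({"Referer": "http://site-b.example/other"}))  # None
```

A real implementation would also verify that the referring page actually contains the link, since Referer headers are trivially spoofed.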

New Releases: 2.3.11 and 3.0.4

Posted by michael February 08, 2011 @ 10:39 PM

Two new versions of Ruby on Rails have been released today. As well as including a number of bug fixes, they contain fixes for some security issues. The full details of each of the vulnerabilities are available on the rubyonrails-security mailing list. We strongly urge you to update production Rails applications as soon as possible. Rather than post the advisories individually to this blog, I'll just link to the Google Groups archives.

Install the latest version using gem install rails. Or, if you're using Bundler, edit your Gemfile and run bundle update rails.

The PHP development team would like to announce the immediate availability of PHP 5.3.6. This release focuses on improving the stability of the PHP 5.3.x branch with over 60 bug fixes, some of which are security related.

Security Enhancements and Fixes in PHP 5.3.6:

Windows users: please note that we no longer provide builds created with Visual Studio C++ 6. It is impossible to maintain a high-quality and safe build of PHP for Windows using this unmaintained compiler.
For Apache SAPIs (php5_apache2_2.dll), be sure that you use a Visual Studio C++ 9 version of Apache. We recommend the Apache builds as provided by ApacheLounge. For any other SAPI (CLI, FastCGI via mod_fcgi, FastCGI with IIS or other FastCGI capable server), everything works as before. Third party extension providers must rebuild their extensions to make them compatible and loadable with the Visual Studio C++ 9 builds that we now provide.
All PHP users should note that the PHP 5.2 series is NOT supported anymore. All users are strongly encouraged to upgrade to PHP 5.3.6.
For a full list of changes in PHP 5.3.6, see the ChangeLog. For source downloads, please visit our downloads page; Windows binaries can be found on windows.php.net/download/.

This method is recommended if you want to include the contact form on an existing PHP page of your website (say, your existing contact page). The limitation of this method is that you can only display a confirmation message and cannot redirect to a specific thank-you page you may have. If you would rather access the contact form as a separate page, skip to Method II. Compared to this method, Method II is a bit easier to implement.
Open contact-config.php and edit the three variables below:

$to = 'youremail@email.com';
Change youremail@email.com to the email address where you wish the contact form messages to be delivered.

$subject_prefix = 'My Website Contact';
This is the prefix that will be attached to the subject of all contact form email messages.

$where_included = '';
First, select an existing PHP page on your website where you'll be displaying the contact form. This variable holds the name of the file where you are including the contact form. If you don't have a PHP page, don't worry: you can simply rename your existing HTML page to a PHP page. For example:

contact_us.html > contact_us.php

Note: The file where you are including the contact form and the rest of contact form files must be in the same directory.

So if you are including the contact form in a file called contact_us.php, the variable above becomes:

$where_included = 'contact_us.php';

Now open the file where you are going to include the contact form (in this case contact_us.php) and put the code below where you want the contact form to be displayed:

<?php include "contact.php"; ?>

Then add the code below at the very top of the same file:

<?php session_start(); ?>

That's it! Upload all the files to your server. If you load contact_us.php, the contact form should be visible.
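Putting the Method I pieces together, a minimal contact_us.php might look like this; the surrounding markup is purely illustrative, and only the two PHP lines come from the steps above:

```php
<?php session_start(); // must come before any output ?>
<html>
<head><title>Contact Us</title></head>
<body>
<h1>Contact Us</h1>
<?php include "contact.php"; // the form is rendered here ?>
</body>
</html>
```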

Note: The purpose of the $where_included variable is to define the action attribute of the form tag:
<form action="contact_us.php#cform">
Setting this variable incorrectly will display the form but give an error on submitting.

This method is recommended if you want to access the contact form as a separate page and would like to redirect users to a separate thank you page you may have. You can use this method even if you don't want to redirect users to a separate thank you page.
1. Change:
$use_header_footer = FALSE;
to:
$use_header_footer = TRUE;

2. $thank_you_url = '';
Add the URL of your thank you page to the above variable. Example:
$thank_you_url = 'http://www.yourdomain.com/thank_you.html';

3. Change:
$where_included = '';
to:
$where_included = 'contact.php';
Open contact-header.php and contact-footer.php and put your site header and footer information in them.
That's it! Upload all the files to your server. You can now access your contact form directly by visiting:

http://www.yourdomain.com/contact.php

or:

http://www.yourdomain.com/form/contact.php
depending on where you have uploaded the contact form files.
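For reference, the three Method II edits amount to this fragment of contact-config.php (the thank-you URL is the example value from above; leave it empty if you don't want a redirect):

```php
// Method II settings in contact-config.php
$use_header_footer = TRUE;   // wrap the form in contact-header.php / contact-footer.php
$thank_you_url = 'http://www.yourdomain.com/thank_you.html';  // redirect after submit
$where_included = 'contact.php';  // the form is accessed directly as contact.php
```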

To modify the looks of your contact form - background color, border, fonts, etc. please modify the variables under the "COSMETICS" section in contact-config.php file. All the variables are self-explanatory.

Intro to new Microsoft Bing Webmaster Tools

Microsoft launched a revamp of their Bing Webmaster Tools. I talked to them back in June, when they previewed the tools at SMX Advanced, and they told me that they were starting from scratch and rebuilding the tool from the ground up. So how are things different? They say they are focused on three key areas: crawling, indexing, and traffic. They provide charts that enable you to analyze up to six months of this data. Note that none of this information is available unless Silverlight is installed. More on that later.

Crawling, Indexing, and Traffic Data

Microsoft tells me that they provide, per day, the number of:

pages crawled

crawl errors

pages indexed

impressions

clicks

Sounds pretty cool. Let's go and see it.

Traffic – Impressions and Clicks
The data is very similar to what Google provides. (Although Google currently only provides the latest month’s data. I’m not sure what happened to the historical data they used to provide.)
How does the accuracy stack up? I looked at a few samples.
It’s potentially useful to compare click-through rates for Google and Bing, although Google provides the additional data point of average position. Without that on the Bing side, it’s hard to discern anything meaningful from the comparison. Note that for both Google and Bing, the click numbers reported in webmaster tools in some cases vary significantly from what is reported in Google Analytics (and in other cases are nearly exactly the same). Google has some explanation of why the numbers sometimes vary, but my guess is that Google Analytics reports organic Google traffic from more sources than Google Webmaster Tools does. Google Webmaster Tools also clearly buckets the numbers.
Unfortunately, while Microsoft provides six months of data, it appears that you can only view it on screen and can’t download the data. This makes the data much more difficult to use in actionable ways.

Index Summary
This chart shows the number of pages in the Bing index per day. This certainly seems useful, but it’s deceptive. Decreased indexing over time seems like a bad thing, worthy of sounding the alarms and investing resources to figure out the cause, but indexing numbers should always be looked at in conjunction with traffic numbers. Is traffic down? If not, there may not be a problem. In fact, if a site has had duplication and canonicalization problems, a reduction in indexing is often a good thing.
The ability to use XML Sitemaps to categorize your page types and submit canonical lists of those URLs to Google and monitor those indexing numbers over time provides much more actionable information. (Of course, Google doesn’t provide historical indexing numbers, so in order to make this data truly actionable, you have to manually store it each week or month.)

Index Explorer
The Index Explorer enables you to view the specific pages of your site that are indexed and filter reports by directory and other criteria.
Again, it can be useful to drill in to this data, but it would be significantly more useful if it were downloadable. When you click on a URL, you see a pop-up with controls to block the cache, block the URL and cache, and recrawl the URL. These are the same actions described below (see “Block URLs” and “Submit URLs”).

Crawl Summary
This chart is similar to what Google provides and shows the number of pages crawled each day.
Crawl errors are still available, but the “long dynamic URLs” and “Unsupported content type” reports are missing. In their places are additional HTTP error code reports. (The previous version of the tool listed only URLs with 404 errors.) Since Google provides all of these reports as well, the additional value is mostly in knowing if BingBot is having particular problems crawling the site that Googlebot isn’t. As with the query data, you can’t download any of this information, only view it on screen, which makes it much more cumbersome to use.

Block URLs
The new block URLs feature appears to be similar to Google’s URL removal feature. You can specify URLs that you want to remove from Bing search results. However, this feature differs from Google’s in that you don’t also have to block the URL with robots.txt or a robots meta tag, or return a 404 for the page. Microsoft told me that they are offering this feature because site owners may need to have a page removed from the search results right away but might not be able to quickly block or remove the page from the site itself.
I find this a bit dangerous as it makes troubleshooting later very difficult. I can see someone blocking a bunch of URLs or a directory and someone else, months or years later, building new content on those pages and wondering why they never show up in the Bing index. Microsoft did tell me that they recommend this feature as a short term, emergency solution only, as the pages will still be crawled and indexed, they simply won’t display in results. But recommended uses and actual uses tend to vary.

Submit URLs
This feature enables you to “signal which URLs Bing should add to its index”. When I talked to Microsoft back in June, I asked how this feature was different from submitting an XML Sitemap. (And for that matter, different from the existing Submit URLs feature.) They said that you can submit a much smaller number of URLs via this feature (up to 10 a day and up to 50 a month). So I guess you submit XML Sitemaps for URLs you want indexed and use this feature for URLs you REALLY want indexed?

Silverlight
Yes, I realize this is a technology, not a feature. And in fact, it may well be an obstacle for some users rather than a benefit. (For instance, I primarily use Chrome on my Mac, which Silverlight doesn’t support.) But Microsoft is touting it as the primary new feature of this reboot. Since most of the data is available only graphically, and not as a download, without Silverlight, you basically can’t use Bing Webmaster Tools at all.

What’s Missing
Microsoft says that they “hit the reset button and rebuilt the tools from the ground up.” This means that many of the features from the previous version of the tool are now missing. When I spoke to them, they said that they took a hard look at the tool and jettisoned those items that didn’t provide useful, actionable data. So, what have they removed?

Backlinks report – This feature did, in fact, have useful data if you invested a little effort in configuring the reports. You could only download 1,000 external links (and the UI showed only 20), but you could see a count of the total number of incoming links and could use filters to download different buckets of 1,000. For instance, you could filter the report to show only links to certain parts of your web site or from certain types of sites (by domain, TLD, etc.). Of course, I have no way of knowing how accurate this data was. It seems just about impossible to get accurate link data no matter what tool you use. Below is some comparison data I grabbed before this report went away yesterday.

Outbound links report

Robots.txt validator – This tool enabled you to test a robots.txt file to see if it blocked and allowed what you expected. Google provides a similar tool.

Domain score – I don’t think anyone will be sad that this “feature” has gone away. No one could ever figure out what it (or the related page score) could possibly mean.

Language and region information – This was potentially useful information, particularly in troubleshooting.

Overall, the relaunch provides data that’s potentially more useful than before, although this usefulness is limited without the ability to download the data. I also find the Silverlight requirement frustrating, but it remains to be seen whether this is a significant obstacle to use of the tool. There’s nothing here that Google doesn’t provide in its tools, but with Bing soon to be powering Yahoo search, site owners may find insight into Bing-specific issues and statistics valuable. Historical information is great (although you can get this manually from Google if you download the data regularly), but particularly with query data, it’s hard to know how accurate the reports are (for both Google and Bing). In some cases, the data is misleading without additional data points (such as click-through data without position information, or overall indexing trends without details).

I always welcome additional information from the search engines, but as always, make sure that the data you use to drive your business decisions is actionable and is truly telling you what you think it is.