Make Your Blog Easy To Read For Search Engines



Sometimes one has to hunt for good articles on the Internet through search engines such as Google and Bing.

The content may be excellent and authentic, yet it does not appear without a very specific search.

While content, post headings, categories, and tags are important, it also helps to understand how search engines like Bing and Google read and reach your site.

As the volume of material on the Internet is huge, people are not used to index it.

Instead, search engines use bots.

These are automated tools that locate and read your site.

What do they do?

1. The bot will first look at your file names.

The file names must be bot friendly.

This applies to people who write in raw HTML.

Check with qualified people on this subject for more.

If you are writing on WordPress or Blogspot, the platform takes care of this for you.

2. Once this is over, the bot moves on to meta descriptions.

This is nothing but a note, preferably in a few short words, about what the site contains or is about.

It is recommended that meta descriptions stay short; common guidance is roughly 150–160 characters (the 60-character limit usually quoted applies to page titles).

However, search engines have recently stopped relying on this step because of spamming.

Still, it is better to have a neat, short description of what you plan to say on your site.
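To illustrate, a hand-coded page carries its meta description in the page head; the title and wording below are only a made-up example:

```html
<head>
  <title>Make Your Blog Easy To Read For Search Engines</title>
  <!-- A short note the bot can read; keep it brief and accurate -->
  <meta name="description" content="How search engine bots read and index a blog.">
</head>
```

On WordPress or Blogspot, the platform (or an SEO plugin) generates this tag for you.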

3. The bots then move to the actual content, that is, everything in the body section of your webpage. Be aware that if you use frames or tables in your content area, bots might not crawl through them, and bots are less capable of crawling JavaScript and Flash than plain HTML. So it is better to build your webpage with HTML rather than with Flash or JavaScript.

4. The bot then checks for duplicate content.

If your content duplicates other content on the web, there is a chance that your ranking for that particular page will be low, or that the page will be relegated to the supplementary index.
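One rough way to picture duplicate detection (this is a simple sketch, not the actual algorithm any search engine uses) is to split each page's text into overlapping word "shingles" and compare the sets:

```python
# Sketch of duplicate-content detection via word shingles and
# Jaccard similarity (a common textbook technique, used here only
# as an illustration of the idea).

def shingles(text, size=3):
    """Return the set of overlapping word triples in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + size]) for i in range(len(words) - size + 1)}

def similarity(a, b):
    """Jaccard similarity between two texts' shingle sets (0.0 to 1.0)."""
    sa, sb = shingles(a), shingles(b)
    if not sa or not sb:
        return 0.0
    return len(sa & sb) / len(sa | sb)

original = "search engines use bots to crawl and index the web"
copy     = "search engines use bots to crawl and index the web"
fresh    = "write a short meta description for every post you publish"

print(similarity(original, copy))   # identical pages score 1.0
print(similarity(original, fresh))  # unrelated pages score 0.0
```

Pages whose similarity crosses some threshold would be flagged as duplicates.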


Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.

We use a huge set of computers to fetch (or “crawl”) billions of pages on the web. The program that does the fetching is called Googlebot (also known as a robot, bot, or spider). Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.

Google’s crawl process begins with a list of web page URLs, generated from previous crawl processes, and augmented with Sitemap data provided by webmasters. As Googlebot visits each of these websites it detects links on each page and adds them to its list of pages to crawl. New sites, changes to existing sites, and dead links are noted and used to update the Google index.
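The loop described above (start from seed URLs, fetch each page, extract its links, add unseen ones to the list) can be sketched in a few lines of Python. Real crawlers fetch over HTTP; here the "web" is a hard-coded dictionary so the sketch is self-contained:

```python
# Minimal crawl-loop sketch: frontier of URLs to visit, set of URLs
# already seen, links extracted from each "fetched" page.

from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect href values from <a> tags."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

# Toy "web": URL -> page content (stand-in for real HTTP fetches).
PAGES = {
    "http://example.com/":  '<a href="http://example.com/a">A</a>',
    "http://example.com/a": '<a href="http://example.com/">home</a>',
}

def crawl(seeds):
    frontier, seen = list(seeds), set()
    while frontier:
        url = frontier.pop(0)
        if url in seen or url not in PAGES:
            continue
        seen.add(url)
        parser = LinkExtractor()
        parser.feed(PAGES[url])
        frontier.extend(parser.links)  # newly discovered pages to crawl
    return seen

print(sorted(crawl(["http://example.com/"])))
```

Googlebot's real process adds scheduling decisions (which sites, how often, how many pages), but the discover-links-and-enqueue core is the same idea.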

Google doesn’t accept payment to crawl a site more frequently, and we keep the search side of our business separate from our revenue-generating AdWords service.


Googlebot processes each of the pages it crawls in order to compile a massive index of all the words it sees and their location on each page. In addition, we process information included in key content tags and attributes, such as Title tags and ALT attributes. Googlebot can process many, but not all, content types. For example, we cannot process the content of some rich media files or dynamic pages.
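An index of "all the words it sees and their location on each page" is essentially an inverted index. A toy version, purely illustrative, maps each word to (page, position) pairs:

```python
# Toy inverted index: word -> list of (page, word-position) pairs.

from collections import defaultdict

def build_index(pages):
    index = defaultdict(list)
    for url, text in pages.items():
        for position, word in enumerate(text.lower().split()):
            index[word].append((url, position))
    return index

pages = {
    "/post-1": "make your blog easy to read",
    "/post-2": "search engines read your blog",
}
index = build_index(pages)
print(index["blog"])   # [('/post-1', 2), ('/post-2', 4)]
print(index["read"])   # [('/post-1', 5), ('/post-2', 2)]
```

Serving a query then becomes a lookup in this index rather than a scan of the whole web.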




Internet bots, also known as web robots, WWW robots, or simply bots, are software applications that run automated tasks over the Internet. Typically, bots perform tasks that are both simple and structurally repetitive, at a much higher rate than would be possible for a human alone. The largest use of bots is in web spidering, in which an automated script fetches, analyses, and files information from web servers at many times the speed of a human. Each server can have a file called robots.txt, containing rules for the spidering of that server that the bot is supposed to obey.
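Python's standard library ships a parser for exactly this file, so you can see how a well-behaved bot reads the rules. The rules below are a made-up example; a real bot would download them from the site's /robots.txt:

```python
# Checking robots.txt rules with the standard urllib.robotparser.

from urllib.robotparser import RobotFileParser

rules = [
    "User-agent: *",
    "Disallow: /private/",
]

parser = RobotFileParser()
parser.parse(rules)

# A compliant bot consults can_fetch() before requesting a URL.
print(parser.can_fetch("Googlebot", "http://example.com/private/page.html"))  # False
print(parser.can_fetch("Googlebot", "http://example.com/blog/post.html"))     # True
```

Obeying robots.txt is a convention, not an enforcement mechanism; it only restrains bots that choose to honor it.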

In addition to their uses outlined above, bots may also be implemented where a response speed faster than that of humans is required (e.g., gaming bots and auction-site robots) or less commonly in situations where the emulation of human activity is required, for example chat bots.

Bots are also being used as organization and content-access applications for media delivery, one recent example being the use of bots to deliver personal media across the web from multiple sources. In this case the bots track content updates on host computers and deliver live streaming access to a logged-in, browser-based user. (Wikipedia)


Facebook ‘Likes’ For You.


If you think that clicking ‘Like’ on Facebook is the end of it.

Facebook Like Button.

You are wrong.


Facebook ‘Likes’ on your behalf and updates the counts.


That means not only is your information shared, but some machine decides what it thinks you ‘Like’!


You might think clicking “Like” is the only way to stamp that public FB affirmation on something—you’re wrong. Facebook is checking your private messages and automatically liking things you talk about. Update: Sort of.

The scanning, which is either an oversight on Facebook’s part or a deliberate effort (we’re waiting to hear back from FB), increases the Like count for a given page or Like-able link just by you talking about it. Auto-scanning is nothing new: Gmail has done it since day one to serve us ads. But there are serious potential personal consequences here. What if I’m talking about something disgusting, loathsome, and offensive with a friend? Do I want Facebook to automatically chalk that up as a Like? No. And I doubt you do either.

The auto-liking could also be a big deal for those who want to artificially inflate their popularity online—say, people with something to sell. “Yeap, it won’t drive any traffic to your website. But if your [sic] visiting an online store and you see a lot of likes under the product then this might cloud your judgement,” notes one commenter on Hacker News, where the mechanism was first reported.

To test the auto-scanning, message this link to a friend; it should increase the like count by two. I was able to independently verify the same effect by messaging a friend a link to singer The-Dream’s official page. It increased his Likes without me ever clicking the button. As much as I truly do Like (and love!) The-Dream, this isn’t how it’s supposed to work, Facebook. It turns out this was just a very unlikely coincidence that played out in more than one place: the auto-liking only applies to external links with an embedded Facebook Like button. So, say I send someone a private Facebook message with a Gizmodo post, which contains a Like button. That will increase the counter, not talking about The-Dream on FB itself.

So your name isn’t being associated publicly with something you’re talking about privately—but if even a mention is enough to kick up a Like, it seems like that’s pretty heavily diluting (even further) what “like” even means—from preference to mere reference. Would you say every single proper noun you utter each day should be something you like? [Hacker News via Forbes]

Update: According to a Facebook spokesperson, although messaging will auto-increase a page or link’s Like count, it won’t publicly associate you with that Like. In other words, your identity won’t be exposed. The full statement is below:

Absolutely no private information has been exposed. Each time a person shares a URL to Facebook, including through messages, the number of shares displayed on the social plugin for that website increases. Our systems parse the URL being shared in order to render the appropriate preview, and to also ensure that the message is not spam. These counts do not affect the privacy settings of content, and URLs shared through private messages are not attributed publicly with user profiles.

We did recently find a bug with our social plugins where at times the count for the Share or Like goes up by two, and we are working on a fix to solve the issue now. To be clear, this only affects social plugins off of Facebook and is not related to Facebook Page likes. This bug does not impact the user experience with messages or what appears on their timelines.

Update 2: Facebook has further clarified the auto-like mechanism, explaining that Facebook Pages aren’t affected.

