Search Engine Optimisation: The Soon-to-be-Impossible Dream!
Today there is a plethora of search engine and internet marketing services; in fact, a whole new industry has materialised to exploit the fear of low search rankings.
This is not a new trend. Back when simply resubmitting your website to the engines was enough to keep your site at the top of the index, there was an accompanying boom in resubmission "companies"; as we now know, these were just men in back bedrooms with a host of CGI and Perl submission scripts and a timetable.
Search engine optimisation, or "SEO", is the latest incarnation of this bedroom profiteering. The important difference is that webmasters are now not just passively involved, but are being forced to adopt totally artificial and antisocial practices that ultimately serve only to damage the Internet!
SEO is supposedly the methodology and set of processes for designing search-engine-"friendly" web content. The basic premise is something like: "If I follow all the engines' formatting and connectivity criteria, then my website will rank higher than a comparable website that does not."
All other things being equal, this seems quite positive: since the quality of a search engine's database (index) directly affects its output, webmasters optimising their content so that search engines can correctly categorise the internet should logically improve the speed and quality of "the crawl".
SEO, then, should logically be good for the search providers: maintaining an efficient index should use less raw processing power, require less equipment and thus less energy. It should also be good for the users, who would be able to quickly and intuitively find what they want from a reliable source. Sounds reasonable, right?
Well, that's the happy version. The fact is that while this may initially be true, and you may gain a short-term advantage, once we have all optimised our content for analysis and (in so doing) ignored our users, we will be back where we started. The search providers will then just think up some even more ridiculous "laws" by which to "judge" us, and like sheep we will all follow those as well; thus the causal paradox is perpetuated and the users feel abused!
Even this is a vast oversimplification; the true nature of SEO is a lot more complicated. The heart of the problem, and the real issue here, relates to the search providers' task, which is to strip-mine the information junkyard otherwise known as the Internet: it may be full of interesting stuff, but there is also plenty of garbage, and they need to devise intelligent techniques to mine the interesting stuff!
The current "solution" is literally for the search engines to use their hegemonic standing to bully webmasters into organising their work in ways whose primary effect is to allow quick "analysis" so the website can be categorised. This has the secondary effect of requiring content to be designed "for" analysis, which typically translates to highly distributed connectivity, i.e. the website being effectively divided into "micro sites", which makes the maintenance of links and content more troublesome!
This is not necessarily a bad thing; most of these imposed linking and design methodologies are positive and beneficial for many subjects. My problem is that they are unilaterally enforced, and it is this type of issue that is generating all the money for the SEO boys.
However, this will soon be of no consequence. To understand the problem with this type of SEO operation, it is necessary to think about how we can approximate and simulate the human process of mining information and knowledge.
Let us assume we have set our crawlers to work, automatically indexing pages (at random, guided by previous indexing and by user requests). We then format the resulting text: ASCII is usually used, and validation follows; search engines tend to ignore some tags and make use of the good ones that help identify the content. At this point we would have reduced the Internet to a corpus, i.e. the collection of all HTML documents about no particular subject.
We would then set about item normalisation: identification of tokens (words), characterisation of tokens (tagging words with meaning), and finally running stemming algorithms to remove suffixes (and/or prefixes) to derive the final database of terms. This can be efficiently and compactly represented in lower-dimensional term spaces (Google are still essentially using inverted file structures).
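The normalisation pipeline described above can be sketched in a few lines of Python. This is a toy illustration with a deliberately crude suffix-stripping stemmer and an invented two-document corpus; real engines use far more sophisticated algorithms (e.g. Porter stemming) at vastly larger scale:

```python
import re
from collections import defaultdict

def normalise(text):
    """Tokenise to lowercase word tokens, then apply a crude
    suffix-stripping stemmer (illustrative only)."""
    tokens = re.findall(r"[a-z]+", text.lower())
    stems = []
    for tok in tokens:
        for suffix in ("ing", "ed", "es", "s"):
            # Only strip when enough of the word remains to be a stem.
            if tok.endswith(suffix) and len(tok) > len(suffix) + 2:
                tok = tok[: -len(suffix)]
                break
        stems.append(tok)
    return stems

def build_inverted_index(corpus):
    """Map each normalised term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in corpus.items():
        for term in normalise(text):
            index[term].add(doc_id)
    return index

# Hypothetical mini-corpus.
corpus = {
    1: "Growing tomatoes and potatoes",
    2: "Cooking with tomato and pepper",
}
index = build_inverted_index(corpus)
```

The inverted file structure mentioned in the text is exactly this term-to-documents mapping: lookup by term is fast, but it captures no semantic relationship between terms.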
Imagine each document of a corpus as a point, i.e. a term in an N-dimensional space. Here the literal word-matching type of search is lost, but we acquire more of a semantic flavour, where closely related information can be grouped into clusters of documents bearing similarities. However, N-dimensional vector spaces are of no direct help to the users.
After applying our algorithms to the corpus, we get a term-by-document matrix, where terms and documents are represented by vectors; a query can also be represented by a vector. So, with the query and the corpus represented as vectors of the same dimensions, we can start matching the query against all the available documents using the cosine of the angle between the two vectors.
But we now have a new, artificial "problem". We know the general answer to the question "which websites best match my search terms": this information now exists in our mathematical object, at a high level of abstraction, i.e. the cosine angles of all documents against the query vector. Each angle corresponds to a column of the matrix, and therefore to a document we are after; all we need do is present this to the user. Right? Well...
The issue is that a search engine needs to generate a linear index, i.e. convert the vectors corresponding to the smallest cosine angles into a human-readable format. Until such time as someone thinks of a better way to do it, all engines output lists. Like your shopping list, a list has a start, a middle and an end, and therein lies the problem: how to order the list!
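Turning those similarity scores into the linear list is, mechanically, just a sort. A minimal sketch with invented document vectors (the hard part, as the article argues, is whether this ordering is actually meaningful):

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length term vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rank(query, docs):
    """Produce the linear, human-readable list the engine must output:
    best-matching document first."""
    scored = sorted(docs.items(),
                    key=lambda kv: cosine(query, kv[1]),
                    reverse=True)
    return [doc_id for doc_id, vec in scored]

# Hypothetical documents in a 3-term space.
docs = {"a": [2, 1, 0], "b": [0, 0, 3], "c": [1, 1, 1]}
print(rank([1, 1, 0], docs))
```

Document "a" shares the most weight with the query, "c" shares some, and "b" shares none, so the output list is ordered accordingly.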
The hypothesis seems simple: order information that might look chaotic at first, using the fact that closely associated documents tend to be relevant to similar requests. However, the internet (being a scale-free network) is so vast that it is not practical to compute, in a chosen feature space, the x documents closest to the convergence point of a given cluster under the common Euclidean distance, which is what should then be presented to the user in a more intelligible (semantic) display.
The engines could just present the returns as produced by the matching algorithms after decomposition. Although a grouping generated using probabilistic/fuzzy patterns directly from the cluster might belong to more than one class, the strength (degree of membership), measured as a probability on the [0,1] interval, is quite adequate.
The reason decomposition into singular values works for ordering is that a very high co-occurrence of two terms (say tomato and potato) is reflected in the term-by-document matrix, which shows that only x of the n terms are used very frequently.
The idea is that since a term such as pepper is used/mentioned very little, its axis/dimension does not affect the search space much, making it flat and relevant only in the other dimensions.
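This flattening can be demonstrated with a truncated singular value decomposition of a toy term-by-document matrix (the numbers are invented for illustration; NumPy's SVD stands in for whatever machinery a real engine would use):

```python
import numpy as np

# Toy term-by-document matrix (rows: tomato, potato, pepper).
# "tomato" and "potato" co-occur heavily; "pepper" barely appears.
A = np.array([[3.0, 2.0, 3.0, 2.0],
              [2.0, 3.0, 2.0, 3.0],
              [0.0, 1.0, 0.0, 0.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Keep only the k largest singular values: the rarely used "pepper"
# direction contributes little and is flattened away.
k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

print(np.round(s, 2))  # singular values, largest first
```

The rank-k approximation `A_k` differs from `A` only by the discarded (smallest) singular value, which is exactly the sense in which the weak "pepper" dimension barely affects the space.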
However, the engines' demonic creators can't do this, because they are still essentially using an inverted file structure, yet they still want absolute correctness in their indexes and returned results. That means trouble, because it assumes your index is perfect, incapable of being manipulated, and that you can somehow order the returns in a meaningful way!
So the returned results can't generally represent the documents that match semantically; we now need to account for some subjective quantities that cannot be derived directly from the corpus. The engines attempt to deal with this via a cocktail of criteria that rank the returns in such a way that the "better" results are more likely to be closer to the top of the list.
There are many ways of doing this; the current trend is to use inference about the quality of websites where possible, because such quantities are beyond the direct control of the content creators and the webmasters.
PageRank provides a more sophisticated form of citation counting, embodied in the concept of link analysis: a relative value of importance for a page, measured from the average number of citations per reference item.
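The citation-counting idea can be sketched as a power iteration over the link graph. This is a minimal, self-contained illustration, not Google's actual implementation; the damping factor and iteration count are conventional textbook choices:

```python
def pagerank(links, damping=0.85, iterations=50):
    """Minimal PageRank by power iteration.
    `links` maps each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for p, outs in links.items():
            if not outs:                      # dangling page: spread evenly
                for q in pages:
                    new[q] += damping * rank[p] / n
            else:
                share = damping * rank[p] / len(outs)
                for q in outs:                # each link passes on a share
                    new[q] += share
        rank = new
    return rank

# Hypothetical three-page web: a links to b and c, b to c, c back to a.
links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
pr = pagerank(links)
```

Page "c" ends up most important (it is cited by both "a" and "b"), which is precisely the property beyond the direct control of any one webmaster that the engines are relying on.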
PageRank is currently one of the main factors determining who gets to the top of the listings, but this will all become irrelevant when the engines stop using inverted file structures, because they will then be able to use the groupings generated with probabilistic/fuzzy patterns, derived from the common Euclidean distance to each cluster's convergence point.
When the changeover from inverted file structures occurs, there will be two direct consequences:
1) The corpora will be capable of holding vastly more representative and more detailed data than is currently possible.
2) The corpora will no longer be indexed as is currently done; they will embody semantic meaning and value, where some subjective quantities can be derived directly from the corpora without the need for cocktails or totally artificial rules.
The effect is that corpora will be more accurate and incapable of manipulation; thus variations of SEO that involve indirect manipulation of the index will become pointless overnight.
It is worth noting that the search providers are becoming increasingly pessimistic about website promotion in all its forms. They currently penalise many things that can affect the results, such as duplicated content (which can be perfectly legitimate) and satellite sites, i.e. one webmaster interlinking seemingly separate but highly relevant websites.
They may well start penalising webmasters who promote their websites through articles submitted for third-party distribution, as they already do for people who post their sites' information to bulletin boards!
Being banned from the top search engines can effectively destroy your business, if not directly through loss of visibility then indirectly, in that people tend to judge you on whether you are organised enough to be listed!
The criteria are continually changing as the amoral SEO boys attempt to pervert the results. These "laws" are not always clear and there are no appeals; we are all subject to the providers upending a drum and dispensing swift, hard "judgements" that can doom us at any time!
The part that irks the most is that, as the indexes converge (Google's index is used directly by two of the top three engines, and five others indirectly use it for their rankings), a ban by any one of these engines is effectively enforced by them all.
I am the website administrator of the Wandle Industrial Museum (http://www.wandle.org), established in 1983 by local people determined to ensure that the history of the valley was no longer neglected, but that awareness of its heritage was enhanced for the use and benefit of the community.
Yahoo Listing Still Worth It?
In October 2002, the Yahoo! portal changed the way it delivers search results. In the past, the most prominent results were exclusively culled from websites listed in the Yahoo directory itself. Since October, sites listed in the Yahoo directory no longer enjoy this privileged status.
Optimize Your Web Site on a Shoestring Budget
Let me start off by saying that I'm not a marketing guru, nor do I work for a Fortune 500 advertising company. I don't even have a marketing background. But if you don't have money to blow on optimizing your site, then listen up.
Search Engine Optimization for Beginners
If you are confused about terms like "search engine optimization" or having a "search engine friendly" site, then listen up! I am here to help.
Google Contest - Nigritude Ultramarine
Search engine optimization experts are having fun with Google. Experts, with DarkBlue.com at the helm, are holding a contest to determine how Google really works. Experts are competing, with the goal of optimizing a webpage for a nonsensical phrase: 'nigritude ultramarine'.
Sales And Crawlers, Update! Update! Update!
The algorithmic web crawlers that speed throughout the web are crucial to the successful marketing campaign of your site. Your site would simply be a pretty compilation of HTML, XML, Java and the like if search engine crawlers did not come around and look it over. These amazing little creatures are incredible software inventions whose only purpose in life is to "crawl" the web.
Internet Marketing and SEO
Have you ever seen email offers guaranteeing to get you to the top of the search engine rankings? Like anything in life, there are no quick results without hard work. When it comes to designing a website, proper foundations are the principle: like any great structure, you need a foundation built for search engines.
SEO Tips for Google
Getting a high ranking on Google is a big achievement. There are many factors that go into pulling a high page rank. I have put together a small list of things that should not be overlooked when optimizing your site. Let's start from the top:
What Size Body Section Ranks Highest?
This is another one of the controversial questions in many of the SEO (Search Engine Optimization) forums, yet it is very easy to answer for any particular search engine. While popular belief seems to be that pages should be very short (less than 10K) to rank well with the leading search engine, this article conclusively answers that question with a completely different answer.
Why Pay-Per-Inclusion Search Engines are Dying
A Pay-Per-Inclusion search engine is a service in which a search engine charges you a certain amount to spider and include your website in its database. For this fee, regular repeated spiderings are guaranteed, so you are sure to be indexed.
The Real Search Engine Optimization Guide
Nowadays, there is so much talk about SEO (search engine optimization) that it has become an industry of its own. Still, 90% of webmasters don't know how to achieve high search engine positions. In this article, you'll learn what the 90% doesn't...
How To Rank High On MSN Search
The new MSN Search is quickly gaining popularity among internet search engine users. Google is still the #1 search engine, with Yahoo at #2 and MSN a strong #3. Each of the three major search engines has its own unique algorithm, and MSN is no different. Over the past several months I have had amazing success with MSN. With a few tweaks and tricks I have all of my sites ranked in the top 3 (mostly #1) for my targeted keywords. This article will give you a quick look at how I have managed to rank so high so often.
Do It Yourself SEO
Internet surfers use search engines more than any other tool to find things online. Search engines rank their results using a complex formula that considers web page content, link popularity and other details. This is why you should apply Search Engine Optimization (SEO) to your web site.
How to Avoid Getting OOPed On by Google?
Earlier this year, I was having trouble getting good search engine placement for one of our sites. I was trading links like crazy, spending many hours a week trying to optimize the site properly, checking my linking structures, keyword density, etc., but still I was not climbing the ranks in Google.
Keyword Selection- The Dark Horse of Search Engines Optimization
Below are what I call the "10 Commandments" for Keywords.
Adding City Names At The End Of Your Keywords Can Bring You More Profits
In recent times, I have been closely studying keywords that have famous city names at the end of them, and what I have discovered is nothing short of amazing. My research started off with pay-per-click ads. With Google AdSense, the same keyword with only a city inserted at the end can attract substantially higher-paying AdSense ads to your site. That really surprised me, so I went further and researched the kind of traffic the same keywords get for different well-known American cities. Again I was in for a shock. Some cities have very high traffic for a certain keyword compared to others for the same keyword.
The Google Strategy
Webmasters across the Internet were totally floored by what happened to their Google ranking recently. Those whose pages were ranked on top seemed to totally disappear, and many who had been ranked at the bottom were suddenly in the top 10.
Achieving Better Search Engine Optimization
The search engine giants are locked in an all out power struggle to get your attention and patronage.
SEO = Search Engine Optimization, tips on successful page ranking
One of the key things to remember when developing your web-site presence is to always evaluate your competition. See what's working for them: how they market their products and services, and even evaluate their KEYWORD and DESCRIPTION tags.
Soliciting Search Engines
As your guide through the web, search engines are invaluable when used effectively.
What is Google Pagerank?
PageRank is one of the factors that Google uses to evaluate your web site and determine its position in the Google search engine results. PageRank is a number from 0 to 10.
© Athifea Distribution LLC - 2013