It’s rare for a piece of computer technology to become so dominant in its area that its name becomes a verb. But this is what happened with Google. It built its dominant position back in the early 2000s, when the Internet had already been burnt by the dotcom bubble but was still quite diverse in terms of competing platforms.
One factor that’s crucial for understanding this is that modern search engines are not, in reality, one thing. Even at a very basic level they combine the functions of old library catalogs (finding documents) and yellow pages (finding people, institutions and offers).
This distinction was already obvious during the formative years of the current search situation. It also barely scratches the surface of things that are expected of a search engine.
For a long time many users have typed web addresses into Google instead of the address bar. Today the address bar has effectively become Google’s input field.
Then there are queries like ‘weather in Melbourne’ or ‘who played Harry Potter in the movies’. In these cases the user doesn’t really care about websites or documents; they just want to learn some simple fact quickly.
Maybe I have succeeded in convincing you that Google isn’t a tool, like a library catalog: it’s an interface. Without much fanfare, it became a digital assistant (not unlike the much more hyped ‘futuristic’ voice assistants). Only the commands – the ways of selecting its functions – are so subtle, like body gestures, that we often don’t even notice them.
Or, at least, so says the apparent guiding principle of current search engine development. Some people don’t like it: over the past years, there have been many Reddit and Hacker News threads bemoaning the declining quality of Google results.
The complaints could be summed up as excessive simplification: the engine ignores the details of your query to give more popular results. For example, you may ask Google about ‘tiramisu order of adding ingredients’ and you won’t get pages or forum questions on this very topic. Instead, you will get panels with photos, then some vaguely related Q&A, and only then a list of recipes from (presumably popular) sites.
Hopefully somewhere on those sites there will be an answer to your doubts about adding baking soda or whatever. But this is a fine example of the user knowing with high precision what they are looking for and the search engine force-feeding them cookie-cutter shiny content. In many niche or technical topics this problem is even more apparent and makes finding useful information more difficult, in contradiction to Google’s mission statement.
Now, aside from more conspiratorial stuff about corporations dumbing down software to fully control and optimize money extraction, there seem to be some legitimate reasons for this.
For one, polished content is probably a better alternative to low-effort SEO fluff, generated for each set of keywords and stuffed with trashy ads. If Google optimized just for the verbatim keywords, site owners would have more incentive to machine-generate thousands of pages instead of polishing fewer pages about genuinely popular topics. It’s a compromise.
Second – and this is a little less convincing, but often brought up in discussions of Google Search’s decline – this behavior may really be what “an average user” expects. They want to get something digestible regardless of typos (which, to be sure, everyone makes) and perhaps sloppy wording.
But would the average user take the time to formulate a query they didn’t mean? After all, you can always modify your query. That, say proponents of this hypothesis, is already too much – people have more pressing things to do than interacting with search engines. Better to give them a “4 out of 10”, somewhat relevant response to 90% of typical queries than a mixture ranging from maybe “9 out of 10” to complete garbage. So, again, the more precise use cases have to be sacrificed.
(A third hypothesis, out of many, is more technical and speculative: that maintaining good indexes of terms is hard and costly. This becomes apparent if you try to build any kind of search yourself. Perhaps even at the scale of Big Search, with the Web still growing, it’s deemed more economical to focus on popular terms and let the rest be at least less available.)
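To make the cost argument concrete, here is a toy inverted index – the basic structure behind term lookup – as a minimal Python sketch. All the names and documents here are illustrative, not anything resembling Google’s actual design; the point is only that every distinct term in every document takes up space, so the long tail of rare terms, not the popular ones, dominates the size of the index.

```python
from collections import defaultdict

def build_inverted_index(docs):
    """Map each term to the set of document ids containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """Conjunctive query: ids of documents containing all query terms."""
    postings = [index.get(t, set()) for t in query.lower().split()]
    return set.intersection(*postings) if postings else set()

# Tiny illustrative corpus.
docs = {
    1: "tiramisu recipe order of adding ingredients",
    2: "classic tiramisu recipe with photos",
    3: "weather in Melbourne today",
}
index = build_inverted_index(docs)

print(search(index, "tiramisu recipe"))  # {1, 2}
# Even three short documents yield a dozen-plus distinct terms to store;
# the vocabulary grows with the corpus, popular or not.
print(len(index))
```

A real engine would add tokenization, stemming, ranking and compressed on-disk postings, but the basic trade-off survives all of that: indexing rare terms well costs storage and maintenance that mostly serves infrequent queries.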
One conclusion to be drawn is that the design of search engine responses has to make sacrifices: some users won’t get what they want. I’d argue that this unavoidably follows from trying to fit many different tools into one interface, with no clear way of selecting between them. (If we, as users, were able to choose, that would introduce settings and knobs – something that seems to be verboten for many modern tech designers.)
Aside from this, there is a fear that the Web may have lost the quality that made old Google possible.
People who have something to say or show to the world may not build a personal website for it, as they might have in the year 2000. They are more likely to go to a walled garden like Facebook or LinkedIn, where it is difficult to reach them from the outside. The commercialization and professionalization of website-making seems to have made it less viable for ordinary people. They may feel there is no point in having their own online presence without a monetization scheme and a professional-looking design.
All of this may degrade the informative content of the Web and force search engines to be harsher in trawling through the swamp of commercial, aggressively optimized and walled-off content. It also may be that there are comparatively fewer high-quality websites left to find.
In my experience, though, there is still much of the authentic Web out there. People still feel the need to share information they care about. Niche subreddits, personal websites, active forums – they exist, even if they’re hard to find. Yet for Google, which needs its algorithms to operate at the scale of the whole Web, they are often too small to register or pay attention to. This is something I’d like to tackle with ActualScan.