“Search Neutrality” Is Not Possible
Search neutrality – the idea that a search engine should reveal each and every one of the Internet’s entries without favor, i.e. unbiasedly – is on the rise. But then again, what is an unbiased search engine? Is it the one that returns the most results or the one that returns the best results? Google’s secret in the early 2000s was to rank the results. Is that the best approach?
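To see why even “just ranking the results” is not a neutral act, consider a rough, purely illustrative sketch of the idea behind link-based ranking. The page names, the damping factor, and the toy graph below are invented for this article; the point is that treating a link as a vote, weighting that vote, and deciding what to do with pages nobody links to are all design choices.

```python
# A minimal, hypothetical sketch of link-based ranking (PageRank-style).
# Every number and name here is made up for illustration; each of them
# is a design decision, not a neutral fact.

def rank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    score = {p: 1.0 / n for p in pages}              # start from a uniform guess
    for _ in range(iterations):
        new = {p: (1.0 - damping) / n for p in pages}
        for page, outgoing in links.items():
            if not outgoing:                          # dangling page: spread its score evenly
                for p in pages:
                    new[p] += damping * score[page] / n
            else:                                     # a link counts as a "vote"
                for target in outgoing:
                    new[target] += damping * score[page] / len(outgoing)
        score = new
    return sorted(score.items(), key=lambda kv: kv[1], reverse=True)

# A toy web of four pages linking to each other.
toy_web = {
    "pizza-blog": ["papas-pizza", "yellow-pages"],
    "papas-pizza": ["pizza-blog"],
    "yellow-pages": ["papas-pizza"],
    "lonely-page": [],
}
print(rank(toy_web))  # "papas-pizza" tends to come out on top because others link to it
```

Whatever one thinks of this approach, it already answers the question “what is the best result?” in a very particular way: the best result is the most linked-to one.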
In the general discussion about search neutrality, one fact is consistently omitted: any search is in itself an exercise in bias. Searching for something entails purposefully leaving other things out. So, if search itself requires favoring some results over others, how can the tools for searching produce neutrality?
Pizza in the prairie
Imagine you just arrived in Medicine Hat, Canada – a place you had never been to and know nothing about. You feel hungry and crave pizza. You want to find the pizza places in this southern Alberta town. Before the (mobile) internet, you would have looked at advertisements and signs. But then you would only become aware of the offerings near you or along the roads you drove by. Alternatively, you could have asked locals. But they, too, are biased. Maybe the dominant local flavor – steaks – doesn’t agree with you. And depending on whom you asked, that person’s preferences would color the answers.
You could also have consulted the “restaurant” entry in the Yellow Pages. But, alas, in Medicine Hat, anyone who goes straight to that entry misses most pizza places. In the Hat’s version of the pre-internet search engine, some pizza parlors are listed under “pizza”, some under “restaurant”, and some twice. Those that paid for ads are everywhere. What, then, is the difference between this seemingly prehistoric world and today? The difference is that, in addition to all the old-fashioned search heuristics, many a stranger to that prairie town would use the internet to find a suitable pizza place. How? Most likely, with a search engine.
But it doesn’t get simpler. There are several search engines, for example Google, Yahoo, Ask, or Bing. There used to be fewer and worse ones – CompuServe, AltaVista – remember? Even once you have picked an engine, you still have to formulate your query. You could search for “pizza”, or for “good pizza”, or even for “best pizza in Medicine Hat”, or just type in “pizza restaurant Medicine Hat open now”.
Finally, you have found several entries. How to choose among them? You might go for the first result, have a look at the first page of results, or compare the rankings of the pizza places. These can be compared using search engines again, or special platforms, or both. And every time you decide to take a certain search route, you refrain from taking others or doing other things. That is the nature of choice. But now you have finally settled on Papa’s pizza place. And again, the decision-making begins. Take the first pizza on the menu, study the whole menu, ask for a sampler, combine choices. The cost of deciding on the pizza margherita is, in addition to the 10 dollars it costs, not ordering any of the others.
What is the point of this example? It shows how an apparently simple task involves many decisions. It involves deciding what to search for and how to search for it. It is about how you must exclude and simplify in order to find what you are looking for. You must constantly increase your bias through a series of decisions. Search is a decision-driven process. But then again, all decisions bear costs, at the very least the opportunity cost of deciding and doing otherwise.
If this is the case, how can there be search neutrality? There can’t. The claim for search neutrality rests on two major flaws: “forgetting” that search is an exercise in decision-making, and “denying” that every decision comes with a cost.
Nestor of neutrality
For a long time, there was a debate about net neutrality, i.e. the unbiased management of the internet’s infrastructure. Now, the question seems to have mutated into search neutrality. A benevolent definition of it is:
“Search neutrality aims to keep the organic search results (results returned because of their relevance to the search terms, as opposed to results sponsored by advertising) of a search engine free from any manipulation, while network neutrality aims to keep those who provide and govern access to the Internet from limiting the availability of resources to access any given content”.
Frank Pasquale has been among the most prominent advocates of search neutrality. His main argument: search engines are “essential facilities”. This term comes from antitrust/competition law and denotes a bottleneck so important that without it, a whole system would fail. Therefore, the owner of the bottleneck must (!) provide access to competitors at a reasonable price. Typical examples of essential facilities are railroads and electrical power grids. (Isn’t it ironic that Google once advocated net neutrality based on this same argument?)
But Pasquale’s idea goes beyond claiming that without search engines, the whole internet would fail. He deems them essential political and cultural facilities, i.e. in his view it is impossible to be politically active or to gain an education without search engines. His solution? Passing legislation guaranteeing search neutrality and creating a “federal search commission” (all further quotes are from this paper).
Pasquale uses a plethora of arguments, for example those based on antitrust/competition law:
“While users can locate relevant information on the Net in other ways, search engines now constitute the dominant platform through which content producers and audiences can reach each other. Moreover, the search process itself is structured as a high-stakes, winner-takes-(almost)-all competition”.
Then he resorts to anticapitalist rhetoric:
“The result is that very few entities control the critical junction of Internet communication […]. These new gatekeepers can directly manipulate the flow of information […] [T]he hierarchical ranking system, at least in its current one-size fits all form, has a strong bias toward majority preferences. The majority bias partly overlaps with a dominance of well financed and commercial speakers. Third, the system tilts toward consumerist content […]”.
But he is not alone. Astrid Mager speaks of power structures and an “algorithmic ideology”. One might think that what search engines ought to deliver depends on the people using them. But for Pasquale, the user has no relevance whatsoever in determining how search should be managed:
“Similarly, even if it turns out that users’ behavior demonstrates no concern about possible biases in favor of content supplied by the search engine allies, this does not necessarily dispel the concerns about a degrading effect that such behavior may have on the public sphere or public discourse”.
Judgmental judge
It would be easy to counter all three of Pasquale’s arguments. While it is true that search engines are gatekeepers, it is also true that there is fierce competition among them. Also, non-classical search engines – Amazon, Facebook – are increasingly putting the established ones under pressure. While it is true that network economies are often a winner-takes-all game, the barriers to market entry are so low that today’s winner can easily be displaced tomorrow. Think of the dominant players of yore – CompuServe, AOL, Lycos.
Pasquale also seems to miss a crucial point. Search engines are for-profit undertakings selling a good in the market. It is unsurprising that they commercialize their services, need capital to operate, and want to make money. If their results are “consumerist”, it is because most of them are designed as tools for consumerism. Besides, “consumerist” results are a fairly accurate mirror of the web, more than half of which consists of “consumerism”.
Pasquale lashes out at users/customers: “Personalized search targeted at the specific characteristics of users makes possible more finely tuned manipulation and increases the potential value of each intervention in the search results. The prospects created by customized search are analogous to those of targeted advertising based on profiling and categorization of the target audience”.
But here is the other thing he chooses to ignore: self-profiling and voluntary categorization through subscriptions, loyalty programs, credit cards, apps, coupons, and feedback forms are welcome tools in the navigation of day-to-day decision-making. And most people use them willingly and with dexterity.
What is surprising: Pasquale literally claims that users are not qualified to steer search engine results because they do not care about the supposed shortcomings. Well, if the users don’t care, why should bias be a problem at all? Or should we substitute Pasquale’s personal preferences for personalized queries and the competitive non-neutrality of search engines? Much more pressing: will this really help you find pizza?
Distorted decisions
Countering Pasquale is easy. But there is also a much deeper way of revealing the main failures in the claim for search neutrality. Many have done so before: Manne and Wright explain that the claim is a solution searching for a problem. James Grimmelmann, on the other hand, identifies eight mistakes in search neutrality. Putting search engines in context, he says:
“Search engines are attention lenses; they bring the online world into focus. They can redirect, reveal, magnify, and distort”.
And he also reckons: the most important source of bias is the one between the keyboard and the chair.
Any search, by necessity, is not neutral. Bias occurs at several levels. The user has intentions. The search engine has algorithms. The content of the internet is anything but neutral. It is the biased user who wants to navigate an immense amount of information – and all these other biases. To do so, the user employs a search route and search tools, as the sketch below illustrates.
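Here is a small, hypothetical scoring function to make the algorithmic level concrete. The restaurant names, signals, and weights are invented for this article; the point is that the weights are not discovered, they are chosen, and changing them changes who ends up on top. There is no “neutral” setting.

```python
# A hypothetical relevance scorer. All names and numbers are invented;
# the weights encode editorial choices, and the ranking flips when they change.

from dataclasses import dataclass

@dataclass
class Result:
    name: str
    text_match: float   # how well the page matches the query terms (0..1)
    popularity: float   # links, clicks, reviews (0..1)
    distance_km: float  # how far the place is from the searcher

def score(r: Result, w_match=0.5, w_popularity=0.3, w_proximity=0.2):
    proximity = 1.0 / (1.0 + r.distance_km)  # closer is better
    return (w_match * r.text_match
            + w_popularity * r.popularity
            + w_proximity * proximity)

results = [
    Result("Papa's pizza place", text_match=0.9, popularity=0.4, distance_km=2.0),
    Result("Steakhouse with a pizza menu", text_match=0.4, popularity=0.9, distance_km=0.5),
]

# Rank once with the default weights, once with weights that favour popularity:
# the order of the two results differs between the two runs.
for weights in ({}, {"w_match": 0.2, "w_popularity": 0.7, "w_proximity": 0.1}):
    ranked = sorted(results, key=lambda r: score(r, **weights), reverse=True)
    print([r.name for r in ranked])
```

Whoever picks the weights has already decided what counts as a “good” result, long before any user types a query.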
It is the user who creates different ways of searching through decision-making. Decisions are fundamentally individual and reflect not only subjective preferences but also the context in which they are made. Deciding for anything automatically entails deciding against many other possibilities. These forgone opportunities are the cost of a decision made. Since searching – online and offline – consists of a series of decisions, it is natural that any search carries considerable costs.
So, any search is constructed from a whole series of decisions about what to look for, how to look for it, which tools to use, how to combine them, the context in which the search takes place, and even how to choose among the results. Only the individual can make all these decisions and bear their costs as well as their benefits.
Let’s shift the perspective for a moment. What about those wishing to be found? Some advocates of search neutrality argue that without it, their businesses do not show up in the search results, or at least not as prominently as they deem adequate. Without visibility, their enterprises become unviable. Well, if a business is a business, it treats its marketing as a business decision. If a good location is a necessity, do what the pizza places in Medicine Hat do and buy an ad. Negotiate. Do business with the search engine. But if the business case relies solely on the net and there is no readiness to pay for using the web, the business model might be flawed.
And so, the case for search neutrality errs in several ways. The user is the source of purposeful bias. The user structures a search, on- and offline, based on different and selective decisions. And the user bears the costs and benefits of these decisions, because only the user, from an individual and subjective perspective, can tell how much they are worth.
It’s the market
Eric Goldman takes a pragmatic stance:
“Due to search engines' automated operations, people often assume that search engines display search results neutrally and without bias. However, this perception is mistaken. Like any other media company, search engines affirmatively control their users' experiences, which has the consequence of skewing search results […]. Some commentators believe that search engine bias is a defect requiring legislative correction. Instead, [I] argue that search engine bias is the beneficial consequence of search engines optimizing content for their users. [I] further argue that the most problematic aspect of search engine bias, the ‘winner-take-all’ effect caused by top placement in search results, will be mooted by emerging personalized search technology”.