- A New York judge ruled that the First Amendment protects a search engine’s decision to censor pro-democracy links from its search results because search results are subjective opinions about what is most responsive to a user’s search query.
- However, speech, including opinions, is not protected by the First Amendment unless it conveys a particularized message that listeners are likely to understand.
- The message conveyed by a search engine’s opinion about what is responsive to a search query cannot be understood by users if the opinion is based on a bias that users do not know of.
- If users are not notified of that bias, then the search engine’s message is never actually communicated to them, and its search results cannot receive the First Amendment’s protection.
Everyone is all up in arms about net neutrality, and I sit here worrying about a different kind of neutrality: search neutrality.
You see, to me, my Netflix stream sputtering a few times per episode is much less concerning than the possibility that my search results for, say, a presidential candidate or the debate on search neutrality were selected and sorted based on a bias that skewed the results for reasons I didn’t know about. Typically, these results are selected and sorted based on search engine engineers’ subjective evaluation of which objective factors make the hyperlinked websites most responsive to my search query. Search manipulation occurs when the selection and sorting of these results is also based on a subjective bias that ranks certain websites higher because they contain information that is favorable or agreeable to those with control over the search process. And while I am not saying that my search engine of choice, Google, does this, a recent New York case embodying everything that worries me about this topic made it clear that search engines can and do engage in this type of behavior, and that there is nothing the law will do about it.
In this case, Zhang v. Baidu.com, Inc., a group of pro-democracy activists sued Baidu, the Chinese search engine counterpart to Google, for censoring pro-democracy links from the search results it presented to users located here in the United States. The result of that suit: score one for Baidu and every other search engine operating in the United States. The New York judge presiding over the case ruled that censoring pro-democracy links from a search result is protected by the First Amendment’s guarantee of freedom of speech because the search engine is in fact speaking when it produces these results.
For now, I will gladly concede that search engines are engaged in speech when selecting and sorting search results in order to address a more pertinent question:
If search engines are talking, then what in the heck are they saying?
One possibility is that search engines are speaking through the websites linked to in search results; that is, they are associating themselves with the content contained within the linked websites, either directly, like an editor approving their message, or indirectly, like a publisher endorsing their message. However, we know that is not true because search engines have actually gone out of their way to make it known that they should not be perceived as the speakers, editors, or publishers of any content contained within these websites. And for good reason: if search engines were deemed to be approving or endorsing this content, then that would diminish their right to claim intermediary immunity from tort claims under Section 230 of the Communications Decency Act (CDA).
Perhaps more importantly, people simply do not associate search engines with the speech contained within the websites linked to in search results. When I Google anything, I have never thought of Google as having participated in creating, editing, or publishing any speech contained within the linked websites. Rather, I view Google as performing a functional role similar to that of an on-screen television programming guide, which shows me the available channels and helps me locate the channel or program I want to view.
The other possibility is that the search results themselves are the speech for which search engines receive protection. The idea here is that search engines, like newspapers or parade organizers, inevitably make “editorial judgments” about what information to include and how and where to display that information.
Thus far, this has been the way courts have treated search results. In what has been deemed the most clear and persuasive claim for why search engines should receive First Amendment protection, the NY judge ruling on the Baidu case explained that the editorial judgments contained within search results are inherently subjective opinions about what is the most appropriate response to a user’s search query, and that opinions, even those created by computer algorithms, constitute speech protected by the First Amendment.
But this analysis does not complete the First Amendment equation because simply deciding that something falls within a strict, literal definition of speech does not mean that it is protected by the Constitution. Rather, speech must be communicative for it to fall within the ambit of the First Amendment, and for speech to be communicative, it must “convey a particularized message” that listeners are likely to understand. Spence v. Washington, 418 U.S. 405, 410–11 (1974); see also United States v. O’Brien, 391 U.S. 367, 376 (1968) (“We cannot accept the view that an apparently limitless variety of conduct can be labeled as ‘speech’ whenever the person engaging in the conduct intends thereby to express an idea.”).
So for search engines to claim, as they have, First Amendment protection for their search results, they are necessarily implying that these results are somehow speaking a particularized message that users are likely to understand, which raises my final question:
What message do search results convey to users?
The simple response would be that the message is this: these are the websites indexed by the search engine that are most responsive to a user’s search query. This seems to be the answer the NY judge arrived at (despite the case turning on the fact that searches for pro-democracy content returned anything but websites relevant to the query), and, in a perfect world, this would be the answer.
However, we already know that this is not the entire message because, without even considering manipulative or deceptive search bias, we can see that search results do not present links solely based on relevance. When we enter search queries, the top search results are often sponsored links paid for by advertisers or direct links to services provided by the search engine (e.g., Bing Maps or Google Maps), regardless of whether the search engine has indexed websites that are more responsive to our search query.
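This mixing of sponsored, self-provided, and organic links is easy to picture as a simple page-assembly step. Here is a minimal sketch of a hypothetical engine in which sponsored and own-service links are simply prepended to the relevance-ranked organic list; every name and URL below is invented for illustration:

```python
# Hypothetical sketch of results-page assembly: sponsored links and the
# engine's own services are placed above organic results, regardless of
# whether more responsive indexed websites exist. All names are invented.

def assemble_results_page(query, sponsored, own_services, organic):
    """Return (url, label) pairs in the order shown to the user."""
    page = []
    page += [(url, "sponsored") for url in sponsored.get(query, [])]
    page += [(url, "own service") for url in own_services.get(query, [])]
    # Organic links are assumed pre-sorted by relevance, but they only
    # appear after everything above.
    page += [(url, "organic") for url in organic.get(query, [])]
    return page

results = assemble_results_page(
    "maps",
    sponsored={"maps": ["ads.example.com/maps-app"]},
    own_services={"maps": ["engine.example.com/maps"]},
    organic={"maps": ["openstreetmap.org", "mapquest.com"]},
)
# The first two entries are the sponsored and own-service links; the
# relevance-ranked organic results come only after them.
```

Note that this part of the message is at least labeled: the “sponsored” and “own service” tags are what let a user discern what the engine is saying about those links.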
Still, these are messages that can be easily discerned by a user. When I am presented with these links, I can reasonably infer that the search engine is saying that “we sponsor these links” or that “we offer these services.”
But beyond that, we can discern little more of any message that search results communicate to us because search engines have been far from transparent about why they select and sort links the way they do. How, then, can search engines properly claim First Amendment protection for search results if this protection is based on the communication of a message that is purposely being hidden from us?
Without even attacking the fallible premise that the machine speech constituting search results qualifies for the First Amendment’s protection, these results should still not receive that protection unless the message they deliver is likely to be understood by the search engine’s users, and that understanding requires a less opaque approach to how search results are produced.
I agree that search results are inherently subjective because I understand that what is deemed responsive to a search query will vary from search engine to search engine. But when what makes the result “responsive” departs from factors that a user would normally believe to be included in such an evaluation (e.g., click popularity, frequency of search terms in the document, the location of the search terms in the document, the document’s age, the document’s author), then we must be notified about the basis for that deviation for a search result’s message to be delivered.
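To make that distinction concrete, the factors listed above can be pictured as a scoring function, with manipulation entering as an extra term the user is never told about. This is a purely illustrative sketch; the weights, field names, and data are all hypothetical:

```python
# Purely illustrative sketch: ranking on factors a user would expect
# (term frequency, term location, click popularity, document age), plus
# an optional hidden penalty that silently demotes disfavored pages.
# All weights, field names, and data are hypothetical.

def relevance_score(doc, query_terms):
    """Score a document on the familiar, expected relevance factors."""
    text = doc["text"].lower()
    title = doc["title"].lower()
    term_frequency = sum(text.count(t) for t in query_terms)
    terms_in_title = sum(t in title for t in query_terms)
    return (
        2.0 * terms_in_title      # location of the search terms
        + 1.0 * term_frequency    # frequency of the search terms
        + 0.5 * doc["clicks"]     # click popularity
        - 0.1 * doc["age_years"]  # document age
    )

def rank(docs, query_terms, hidden_bias=None):
    """Sort by score; `hidden_bias` maps URLs to undisclosed penalties."""
    hidden_bias = hidden_bias or {}
    return sorted(
        docs,
        key=lambda d: relevance_score(d, query_terms)
        - hidden_bias.get(d["url"], 0.0),
        reverse=True,
    )

docs = [
    {"url": "a.example", "title": "Democracy report",
     "text": "democracy democracy", "clicks": 10, "age_years": 1},
    {"url": "b.example", "title": "Misc",
     "text": "democracy", "clicks": 5, "age_years": 1},
]
fair = rank(docs, ["democracy"])          # a.example ranks first
biased = rank(docs, ["democracy"],
              hidden_bias={"a.example": 10.0})  # b.example ranks first
```

Both orderings are “subjective opinions” in the court’s sense, but only the first reflects factors a user would expect; nothing on the second results page signals that the hidden penalty exists, which is exactly why notice matters.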
For example, if, instead of completely censoring pro-democracy links, Baidu merely devalued them, pushing all pro-democracy results beyond the top 5 links that receive 84% of clicks on search results, we should be notified that their placement in the rankings is intended as a message about how the search engine feels about this content.
Until we are notified about that message, it is not actually spoken to us, and therefore it should not receive First Amendment protection.