In this week’s news, a PR firm called Burson-Marsteller admitted it had been hired by Facebook to smear Google’s Social Circle via an alleged whisper campaign claiming the service violated users’ privacy. Both the firm and Facebook later apologized. Tsk, tsk, bad Facebook.
It is not clear whether any of this amounts to anything more than noise. We all know our searches and web actions are analyzed, tracked, and used to customize what we are shown and what is advertised to us. The idea, supposedly, is to improve the web experience by focusing the results on things that are relevant to you. Exactly how far this goes is nearly impossible to ascertain, since the algorithms that compute such filtering are closely guarded intellectual property. The real question is: are we being filtered too much?
What we see on television or read in the paper, and even online, is filtered information. Someone has (hopefully) examined the facts, used their judgment to assemble a story, and then presented that story to us. From whether the information is presented at all to the angle it is given, almost everything we see or hear has been filtered by people we inherently trust (freedom of the press) to do so forthrightly.
The Internet is largely the same. We trust that when we enter a search term, the results have been appropriately ranked by relevance, or, in the non-organic display areas, that the most relevant advertising is being shown. If it is, I may just click on that link. However, there is one big difference between the Internet and legacy information distribution: the Internet is driven by algorithms, not people. Granted, people write the algorithms, but we have already seen what that can do to the stock market. This leaves a burning question: are the algorithms, this intellectual property of the web systems, actually preventing me from seeing things that might change my perception of the available information?
The answer may indeed be yes. For example, Facebook will raise or lower the relevance of certain postings on your wall based on whether or not you click on or “like” them. This means some friends may appear to vanish from your feed when in fact they have not stopped posting. Another example: if you and someone you know type the same search term into Google, you will get different results. It is actually a little weird. Experiencing this leads to immediate deeper questions:
- If I am not shown the same information as someone else, will I reach the same conclusions when researching an issue?
- If I don’t hear about something that could be very important, am I as connected as I think I am?
- Could the perceptions of an entire population be affected by this filtering? Could it happen inadvertently?
- Is anyone testing these algorithms, not just for the help they intend to provide, but for the harm they could cause?
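The Facebook feed behavior described above can be sketched as a toy model. To be clear, this is purely illustrative: an assumption about the general shape of engagement-based ranking, not Facebook’s actual (and proprietary) algorithm, and the function and field names here are invented for the example.

```javascript
// Toy model of engagement-based feed ranking -- illustrative only,
// not Facebook's real algorithm. Every click or "like" you give a
// friend raises that friend's weight, so friends you ignore
// gradually sink toward the bottom of the feed.
function rankFeed(posts, engagement) {
  return posts
    .map(post => ({
      ...post,
      score: 1 + (engagement[post.friend] || 0),
    }))
    .sort((a, b) => b.score - a.score);
}

const posts = [
  { friend: "alice", text: "vacation photos" },
  { friend: "bob", text: "thoughts on the news" },
  { friend: "carol", text: "new job!" },
];

// You've clicked alice's posts 5 times, carol's twice, bob's never.
const engagement = { alice: 5, carol: 2 };

const ranked = rankFeed(posts, engagement);
// bob's post now sits at the bottom -- and if the feed only shows the
// top N posts, bob effectively "disappears" even though he kept posting.
```

If the feed then truncates to the top two posts, bob is filtered out entirely, which is exactly the “friends who seem to disappear” effect.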
Like the newspaper editors and others we have trusted to present information with appropriate filtering, we must call on these Internet mega-companies, currently locked in catfights over technology and intellectual property, to do what is right and invest not only in competing, but in safeguarding the filtering behavior of their algorithms.
I have no problem counting on computers programmed by intelligent people to automate this process; I just doubt that these same algorithms will be properly tested, and designed thoroughly enough to handle the complexity of the information I need filtered to stay connected today. For example, shouldn’t there be certain settings I can tweak in the algorithm, beyond whatever it calculates about my preferences or behavior patterns? Say I am brand new to web searching and need to use the Internet to find a part for my car. If I do this for an hour or so, I’ll bet the algorithms start thinking I am a car person or a mechanic, when I may completely abhor cars. Shouldn’t I be able to wipe that history so my categorization is reset?
Hopefully the Internet giants will all prove me wrong.
If you are paranoid about all this, I am sorry. There are, nonetheless, some things you can do. One is to use Firefox or Chrome as your web browser and install an add-on called GreaseMonkey, which can run interesting scripts against your web searching and web mail. Once it is installed, you can add a script that can optionally stop web sites like Google from tracking your activity. The script is called GoogleMonkeyR and is available here. It has lots of options, and you can check out its usage options here.
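To give a flavor of what such a userscript can do under the hood, here is a minimal sketch of one common trick: rewriting Google’s click-tracking redirect links (the `/url?q=...` form) back into direct links. This is my own illustrative code under that assumption, not the actual GoogleMonkeyR source, and `directUrl` is a name made up for the example.

```javascript
// Minimal sketch of a userscript-style link cleanup. Google result
// links are often redirects like "/url?q=<real-url>&sa=..." that log
// your click before forwarding you on. This pulls out the real target.
// Illustrative only -- not the actual GoogleMonkeyR source.
function directUrl(href) {
  const m = href.match(/[?&]q=([^&]+)/);
  return m ? decodeURIComponent(m[1]) : href; // leave non-redirect links alone
}

// Inside a real Greasemonkey script you would walk the page, e.g.:
// for (const a of document.querySelectorAll('a[href*="/url?"]')) {
//   a.href = directUrl(a.href);
// }
```

The DOM-walking part is commented out because it only makes sense inside a browser page; the extraction logic itself is plain JavaScript.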
Let us know what you think, or any tips you want to share.