ICT and Human Rights New Year Review
21 January 2014
A roundup of 2013 and challenges for 2014
Information and Communications Technology (ICT) companies have typically operated in an environment of limited or evolving rules. This has fostered numerous innovations whose benefits can be seen around the world. However, just like every other industry, the ICT sector also faces significant human rights challenges.
In 2013, IHRB launched the Digital Dangers project in collaboration with the School of Law at the University of Washington in Seattle, aimed at fostering greater understanding and more effective action to address such challenges.
IHRB’s work in this area builds on our previous role in developing, in cooperation with SHIFT, the European Commission’s ICT Sector Guide on Implementing the UN Guiding Principles on Business and Human Rights. Drawing on that research, the Digital Dangers project has identified six areas where ICT companies and governments are at risk of violating international human rights standards, focusing in particular on issues relating to freedom of expression and privacy.
We are currently producing a series of Digital Dangers case studies, each based on in-depth research at a particular company facing one or more of the challenges we’ve identified. Our methodology involves “embedding” an IHRB researcher in a company’s operations to observe at close quarters, and in real time, how those working in the company seek to prevent or mitigate risks to human rights linked to their activities.
Last year we published the first case study in the Digital Dangers series, addressing Corporate Responses to Hate Speech in the 2013 Kenyan Presidential Elections. At the company’s invitation, I was embedded in the Kenyan mobile operator, Safaricom, in the run-up to and during the 2013 Presidential elections, to observe how the company tackled the issue of hate speech on its bulk SMS platform. We’ve been encouraged by companies’ positive reaction to this approach and their willingness to explore options for further case studies.
What do these and other experiences tell us about what happened in 2013, and how do the Digital Dangers we’ve identified stack up as we look ahead to potential developments in the ICT sector during 2014? The following sections briefly address these questions.
1. Complying with government orders to impose surveillance
When we developed the Digital Dangers project in early 2013, we could not have predicted how much the issue of government orders to ICT companies would dominate the year. Through documents leaked by former US intelligence contractor Edward Snowden to several newspapers, revelations concerning mass collection and sharing of phone and internet data by the NSA in the US and GCHQ in the UK became one of the biggest news stories of the year. The importance of “metadata”, and allegations that private companies colluded with state secret services in supplying this data, put the ICT sector under unprecedented scrutiny.
A frequent question that emerged in IHRB’s work during 2013 was the extent to which the UN Guiding Principles on Business and Human Rights may be a useful tool to help companies respect human rights in an age of mass surveillance. Recent events have demonstrated that human rights due diligence is key to how a company responds to these challenges. Some companies affected by mass surveillance allegations have dissociated themselves from the practice and even challenged governments in courts of law, seeking protection of user rights. They have taken this position partly because mass surveillance undermines their business model.
There were some positive signs going into 2014 that governments had begun to recognise the need to address concerns relating to surveillance. The UN unanimously approved a resolution on “The Right to Privacy in the Digital Age”, and a report by the US President’s Review Group on Intelligence and Communications Technology recommended 46 curbs on surveillance. In response, President Obama last week outlined several reforms, including ending the storage of metadata under government control, though not ending its collection. The NSA has been given 60 days to come up with suggestions as to where the metadata could be stored and by whom.
Obama also pledged greater transparency: de-classifying opinions and orders of the Foreign Intelligence Surveillance Court (FISC) where possible, establishing a panel of non-governmental advocates to provide an independent voice in cases brought before the FISC, and ensuring that National Security Letters will no longer last indefinitely but will expire after a fixed time. ICT companies will also be allowed to make public more information about the orders they receive to provide data to the government.
President Obama’s announcements ensure that the surveillance debate will continue for some time. There are still questions about what impact these reforms in the US may have on international data sharing agreements and non-US citizens. In addition, it remains to be seen if other governments that have so far remained almost silent will follow suit in initiating their own surveillance reforms.
2. Monitoring user-content under a company’s own policies
The question of how to regulate certain forms of harmful speech in the digital realm without restricting legitimate speech and debate is an on-going controversy that affects every country in the world differently, and an issue that companies often struggle with. Our work as part of the Digital Dangers project provided new insights into how one company, Safaricom, responded to hate speech in the context of elections in Kenya. By analysing this particular context using real-time data from the company, we were able to develop a set of recommendations that aim to assist companies, governments and civil society in other countries in tackling the issue of hate speech through a systematic approach drawing on the framework provided by the UN Guiding Principles.
3. Selling dual use technology where there is a high probability of its misuse
IHRB completed research on our second Digital Dangers case study in 2013. I was embedded in a telecommunications infrastructure vendor to research further the risks of misuse of lawful interception and dual-use technology. We expect this study to be published in Spring 2014.
Companies in the ICT sector are often accused of selling “surveillance” technology to repressive regimes. Civil society organisations such as Citizen Lab and Privacy International have worked to identify companies selling products to governments likely to use them to spy on their citizens and track political dissidents. In 2013, Privacy International filed six complaints with the OECD regarding the sale of surveillance technology to Bahrain.
This industry is driven by the legal requirement that telecommunication networks include a capability for law enforcement to intercept communications to assist with fighting crime. Providing lawful intercept “solutions” traditionally fell to the vendor developing the network, but there are now thousands of companies offering these services. When these functionalities are misused by the State to spy on citizens, network operators are often accused of selling “surveillance” equipment. It is important to unpack what this means: the key questions are what is being supplied, and how it is being used or misused. The challenge for regulators is to avoid placing controls on products in ways that could disrupt the smooth running of a network. We aim to address these dilemmas in our forthcoming case study.
4. Monitoring, evaluating and blocking user content at the request of third parties (state or non-state actors)
Companies often receive requests from governments and from non-state actors, such as religious groups, political groups, and others, to remove certain content that is perceived to be illegal or offensive. Companies need clear frameworks, in line with international human rights standards, to determine the extent to which they can cooperate with such requests. 2013 saw several web-based companies muddle through decisions as they pondered what they would and would not allow on their platforms. For example, Facebook struggled with the decision to allow videos of beheadings and faced criticism over its policies dealing with content depicting violence against women. Companies seem to face particular challenges when dealing with content perceived to be blasphemous. This topic will clearly continue to be one requiring further research and action during 2014.
5. Disconnecting or disrupting network access
2013 saw continued network disruption in countries such as Pakistan, Sudan, and Syria. In Somalia, Al-Shabab reportedly demanded that ISPs shut down the internet indefinitely, a demand the government urged ISPs to ignore.
In contrast, during protests in Ukraine, it was reported that the country’s largest internet service provider, Volia, voluntarily maximised internet speeds and set up free wi-fi points in Kiev, reportedly on the orders of the company director. It was encouraging to see a company respond to events on the ground and go beyond its legal responsibilities in the interest of enabling freedom of expression. However, a particular dilemma facing companies arises when governments request the suspension of services on the grounds of national security, particularly if there is a risk of a bomb being detonated via a mobile phone. We intend to research this aspect of network disconnection further in 2014.
6. Handing over stored user content and data to the state
Before the fallout from the mass surveillance allegations, IHRB was following the trend of transparency reporting. While agreeing that transparency reports were a positive step, during 2013 IHRB raised questions as to how the data in transparency reports can actually be used to address risks to human rights, and highlighted the importance of giving meaning and context to what is essentially a list of numbers. Our concern was, and remains, that transparency reports should answer more questions than they raise.
Now that US-based companies will be able to provide “more information” about government requests, what will this extra transparency look like? How will an increase in the numbers on a transparency report actually improve transparency and ease the concerns of users? What effect, if any, will it have on the on-going surveillance debate? IHRB plans to explore these questions in 2014.
Looking ahead
The Digital Dangers we identified early in 2013 are clearly still important as we head into 2014. The year ahead may uncover still more difficult dilemmas. Our aim for the Digital Dangers project is to build up a body of research that will shed light on these challenges and provide practical assistance to companies seeking to fully implement their responsibility to respect human rights.