Open Rights Group Report: “Digital Surveillance”

May 16th, 2013

The Open Rights Group have recently released “Digital Surveillance – Why the Snoopers’ Charter is the wrong approach: A call for targeted and accountable investigatory powers”. The report sets out various arguments as to why the proposed Communications Data Bill in the UK, which aims to massively extend the scope of surveillance over online communications, is a terrible, counterproductive, expensive, and unnecessary idea.

I was asked to contribute a section on the risks of the existing proposals, and some thoughts on where things should go in the future. My contribution is reproduced here, but please do go and download the full report.

Where laws intersect with technology, as is strikingly the case with surveillance, the discrepancy between the pace of technological change and the pace of legal change requires lawmakers to consider carefully the risks that arise from the future development and application of technologies. Crucially, and challengingly, it is necessary to differentiate between the limitations that exist in current technologies and will disappear as technology develops, and those limitations that are fixed and inherent.

Information technologies, and in particular the Internet, have expanded the potential for surveillance to a degree that would have seemed fantastical in previous decades. Unprecedented levels of data can now be collected, stored, and analysed, and can be combined and controlled with a remarkable degree of centralisation.

The technical capabilities of the Internet not only allow this surveillance, they encourage us, through convenience, to place more and more of our lives into the spotlight. We now read news, search for information, talk to friends, organize social and business life, bank, and meet potential partners via the Internet. There is no precedent that can even approximate a model for the pervasiveness of the Internet in our lives — not the phone network, not post or telegraph, not CCTV surveillance. Equating the Internet with historical technologies when making policy is not simply wrong, it is dangerously misleading.

From the state’s perspective, the desire for surveillance is easy to understand. Such a wealth of data seems to promise an oracle allowing security services not only to investigate, but also to detect, predict and prevent crimes — and ubiquitous surveillance can, certainly, achieve some of these goals.

The sheer wealth of data that surveillance reveals, however, tips the balance decisively from its power to help towards its power to harm. Vast amounts of information can be handled by faster and faster computers, but the power and accuracy of the predictive algorithms are not so scalable — when applied blindly to entire populations the ability to identify suspicious patterns is lost in the flood and becomes either worthless or actively harmful.

Pervasive and detailed information on individuals is a powerful tool. When investigating a crime the details of a suspect’s activities, communications, and habits can be highly valuable. This tool, however, can be used just as effectively against all those individuals who are not under suspicion — blackmail, fraud, stalking, and simple invasion of privacy are all enabled by such collections of data just as effectively as the investigation of crime. Placing an entire population in handcuffs to ensure that the criminals have been caught is not an acceptable policy.

As such, any legal framework for enabling surveillance must, in the first instance, be based on the notion of targeted gathering of data on well-justified grounds. This precludes the a priori gathering and storage of data — such gathering should only occur in response to justified suspicions. Data that is found not to be useful, particularly where it concerns third parties, must be deleted quickly and verifiably. Further, there should be no institutionalised technical mechanism to surveil communications; instead, surveillance requests should be made directly to service providers who must be free to manage and control their own platforms.

As has been observed with existing laws, such as the UK’s own RIPA, surveillance powers are easily and widely abused. Strict and independent audit, both of surveillance requests and of data handling, should therefore be a key feature of any proposed surveillance framework. This must, of course, be supported by stringent penalties for misuse of either powers or data. Transparency, imposed both at a legal level and by the need to interact with private organisations that control infrastructure, is the only hope to mitigate the abuses that inevitably accompany such approaches.

The technological landscape in which we find ourselves is one in which the potential for surveillance is vast and growing. Surveillance law must therefore focus on restraining risks and abuses, without being carried away by false promises of effectiveness. Minimisation, decentralisation, accountability and limitation of access are all necessary steps to ensure that the cure is not worse than the disease.

Categories: Policy, Surveillance

Chinese Internet Filtering: The Curious Case of the Florida Pet Club

December 23rd, 2012

Of the various ways to filter the internet, manipulating DNS is probably the simplest and cheapest in terms of resources. DNS, the Domain Name System, is the mapping between the human-readable domain names that we use, like www.pseudonymity.net, and the more machine-friendly IP addresses, like 87.106.104.43.
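
As a toy illustration of that mapping, here’s a minimal sketch using Python’s standard library; the name and address are just the examples above:

```python
import socket

# Ask the system's configured DNS resolver for the IPv4 address
# behind a human-readable domain name.
print(socket.gethostbyname("www.pseudonymity.net"))  # e.g. 87.106.104.43
```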

The Chinese Golden Shield Project, or Great Firewall, famously makes use of a range of techniques. These include keyword filtering, as reported by Clayton et al., as well as active blocking of services such as Tor at the IP level, and more manual censorship and takedown on services like Weibo.

In the past year or so I’ve spent some time tinkering with exactly how China’s internet is filtered. In particular, I’ve been interested in the extent to which the system is centrally driven, with blanket country-wide decisions and implementation, versus how far its decisions are loosely specified and locally applied by regional authorities and ISPs.

To study this it is more or less useless to fire up a VPN, or a copy of Tor, and run network tests. Filtering conditions may vary by ISP, by province, or by city. When I see a report that some site ‘is blocked in China’, my immediate response has become to ask where. On which ISP? Using what method?

Instead, we need to study internet filtering with enough resolution to capture a potentially complex and varied filtering landscape. Results from multiple systems across the country, on different networks, must be collated and compared. Given my own research interests, I’m particularly interested in how this can be achieved remotely. I’ve presented work on this elsewhere (see my FOCI paper), but I’m concerned by the approach of asking users to install censorship monitoring software on their systems when those users are often not technically qualified to understand the risks. I genuinely don’t understand the risks of accessing http://www.tibet.net from inside China, and I can’t justify asking another, probably less informed, user to do it on my behalf.

As a result, to date I’ve focused largely on looking at how China poisons DNS. DNS poisoning is a common and relatively simple way to stop people reaching web pages, provided that you’re happy to block the entire domain. DNS servers are relatively common, and are usually open to requests from anywhere, meaning that it’s easy to get wide coverage at a relatively high resolution.

The short version of the results is that poisoning is rife across China, with Twitter being the most widely poisoned domain. The most common type of poisoning is to misdirect by providing an incorrect IP address, rather than to claim that a given domain does not exist.
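
The distinction between those two poisoning styles is easy to probe. Here’s a minimal sketch of the kind of check involved, using the dnspython package; the resolver address, domain and trusted addresses below are placeholders, not data from the study:

```python
import dns.exception
import dns.resolver  # pip install dnspython

def classify(server_ip, domain, trusted_ips):
    """Query one open resolver directly and classify its answer."""
    res = dns.resolver.Resolver(configure=False)
    res.nameservers = [server_ip]
    try:
        answer = res.resolve(domain, "A", lifetime=5)
    except dns.resolver.NXDOMAIN:
        return "claims the domain does not exist"
    except (dns.resolver.NoAnswer, dns.exception.Timeout):
        return "no usable answer"
    ips = {rr.address for rr in answer}
    # Misdirection: an answer matching none of the addresses
    # seen from an unfiltered connection.
    return "genuine" if ips & trusted_ips else f"misdirected to {ips}"

# Placeholder values for illustration only.
print(classify("203.0.113.53", "twitter.com", {"104.244.42.1"}))
```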

In general the majority of this IP misdirection seems to be more or less random. Users attempting to reach sites such as http://www.tibet.net or http://www.voanews.com may find themselves redirected to computers in Korea, Azerbaijan, the US or China itself. While these addresses vary across the country, there are cases of correlation. DNS servers in different cities, operated by different ISPs, will quite often redirect to the same incorrect IP addresses. A quick analysis of these shows a range of relatively innocent-looking systems that seem to have been plucked more or less at random.
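
Spotting that correlation is straightforward once results are collected. A small sketch, assuming results have been gathered into a mapping from resolver to the address it returned for a single censored domain (the data here is invented):

```python
from collections import Counter

# Invented results: resolver label -> address returned for one censored domain,
# collected by checks like the one above.
results = {
    "resolver-beijing-1":  "198.51.100.7",
    "resolver-shanghai-2": "198.51.100.7",
    "resolver-dalian-1":   "203.0.113.99",
    "resolver-chengdu-1":  "198.51.100.7",
}

# Addresses returned by several unrelated resolvers hint at some
# level of shared decision-making behind the poisoning.
for ip, count in Counter(results.values()).most_common():
    if count > 1:
        print(f"{ip} returned by {count} separate resolvers")
```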

One of the more interesting examples concerned the redirection surrounding the Tor Project’s website: https://www.torproject.org. Tor is a well-known target of filtering in China, so it isn’t surprising that its domain is being poisoned. What was interesting, however, was finding that across China more than 14 separate servers all redirected https://www.torproject.org to a website owned by “The Pet Club”, a Florida-based pet-grooming service. When New Scientist magazine wrote a short article on this work recently, they contacted the webmaster of http://www.thepetclubfl.net to get his thoughts. The most interesting result of that conversation was that The Pet Club do experience a high volume of traffic from Chinese users, showing that the relevant IP address is not blocked. This was by no means the only example of such behaviour.
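
Checking what actually lives at a misdirection target can be done with an ordinary HTTP request carrying the censored site’s Host header. A hedged sketch using the requests package; the address below is a stand-in, not one of the observed targets:

```python
import requests  # pip install requests

def probe(ip, claimed_host):
    """Fetch the front page at a misdirection target, presenting the
    censored site's hostname, to see what is really served there."""
    try:
        r = requests.get(f"http://{ip}/",
                         headers={"Host": claimed_host}, timeout=10)
        return r.status_code, r.headers.get("Server"), r.text[:200]
    except requests.RequestException as exc:
        return None, None, str(exc)

# Stand-in address for illustration only.
print(probe("203.0.113.99", "www.torproject.org"))
```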

If we question why this all happens we are clearly moving from facts to speculation, but here are my theories for this strange behaviour:

  1. DNS poisoning by returning incorrect IP addresses results in a connection to the (fake) site. Assuming that these are likely to be outside of China, this means that Chinese border routers will observe connections to these IP addresses, which are unlikely to be visited by average Chinese users. Therefore it will be possible to observe users trying to get to https://www.torproject.org while still blocking their attempts. Simply returning ‘no such domain’ would effectively block many users, but would not reveal the scale of the connection attempts.
  2. The correlation of IP addresses shows some level of central decision-making. My theory is that a large proportion of filtering decisions come in the form of guidelines rather than strict rules, resulting in variations as the guidelines are implemented by local ISPs and organisations. Despite this, there do seem to be situations where stronger rules are sent. The Pet Club case, for example, could feasibly be explained by an instruction to ‘redirect the Tor Project’s website to thepetclubfl.net’, which was then inserted into local DNS servers.

What I find fascinating in this work is the complexity of blocking. We can study DNS redirection, IP blocking, keyword filtering, BGP manipulation, and social media takedown, but this is just a technical angle. On top of this we have variations from location to location, variation over time, the choice of what method to use for particular blocking targets, and now whether or not to send precise blocking commands or more flexible guidelines.

I have many plans to extend this work to the rest of the world, and to produce similar high-resolution testing for IP reachability and other forms of filtering while still avoiding the need to involve individuals on the ground. What this study of DNS filtering shows is that the overall story of the filtering is far more complicated than simply asking what websites are blocked in which countries.

Just to conclude: a diagram of the IP redirection poisoning seen for China as of September 2012. The scale runs from green to red, with red showing more DNS redirection for censored sites and green showing less. The size of each circle indicates the number of results available, in terms of the number of servers, for a given city.

Misleading DNS Results across China — Red shows more misdirection, green shows more genuine results. Larger circles denote more results.

Categories: Censorship, China

Workshop on Free and Open Communications on the Internet (FOCI’12)

March 31st, 2012

Following on from the fantastically interesting FOCI workshop last year, I am co-chairing this year’s FOCI workshop along with Roger Dingledine of the Tor Project. The workshop will again be co-located with USENIX Security, which is being held this year in Bellevue, Washington in August.

Although FOCI revolves around USENIX Security, and therefore by default falls on the more technical side of research, we are actively encouraging submissions from any field with something interesting to say on internet censorship. Social science, political science, law, economics, ethics, psychology — if you have something to say then send us your work!

The call for papers is here: https://www.usenix.org/conference/foci12

I hope to see you there!

Categories: Censorship, Conference

Discussing Online Privacy in the Observer, with Tom Chatfield

March 4th, 2012

I was recently approached by the Observer to take part in an email-based discussion with Tom Chatfield about online privacy and the direction that companies like Facebook and Google are taking us.

It was a lot of fun to write, over the course of a day, and there were some interesting points raised. 1000 words each isn’t enough to explore very much, but I found it surprisingly useful for clarifying my thoughts on the subject, and quite inspiring for some of the future work that is constantly buzzing around my head.

The original story on the Observer is here.

Categories: Article, Privacy

Presentation on Mapping Chinese Censorship

December 29th, 2011

I recently presented my work on censorship mapping to my colleagues at the OII, including a couple of maps with early analysis of DNS manipulation in Chinese cities.

The analysis is very preliminary, and there are considerable caveats even for the early results, but here’s the presentation:

Categories: Censorship, China

Workshop on Free and Open Communications on the Internet (FOCI): Fine-Grained Censorship Mapping — Information Sources, Legality and Ethics

November 2nd, 2011

This year saw the first Workshop on Free and Open Communications on the Internet, co-located with USENIX Security in San Francisco. My contribution, co-authored with Ian Brown and Tulio de Souza, focused both on means for mapping censorship in greater detail and on its legal and ethical implications.

The paper was inspired by the realization that censorship at the national level need not be, and clearly often is not, applied equally across a country. The riots in Ürümqi, in Xinjiang, resulted in a blanket internet ban for that region that was not extended to the rest of China. The widely-reported shutdown of Egyptian internet service for several days during the 2011 Egyptian revolution was not experienced, at least at first, by the ISP that provided service for important financial services. The ability to filter selectively is clearly, in the view of a censor, very useful.

Even when censorship is intended to apply equally, practical considerations can cause localized discrepancies. In large-scale or complex censorship regimes total centralization may be infeasible, resulting in censorship being delegated to local authorities or organizations. These may, in turn, make different choices in how to implement filtering at the local level, with varying results.

All previous major studies of internet censorship have considered filtering at the national level, without investigating the potential for local variation. It is therefore valuable to consider what local and organizational variations in censorship can reveal about how filtering is implemented and how it affects users.

The goal of this research, then, is to determine what filtering is being applied to a given remote computer, identified by its IP address. This IP can then be geolocated with a reasonable level of accuracy using the freely-available MaxMind GeoLite City database. To this end, we wish to view the internet as seen by that computer. Tools such as Tor, psiphon and open virtual private networks (VPNs) provide exactly this functionality, but are unfortunately few and far between.
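
As an aside on the geolocation step: the database named here has since been succeeded by MaxMind’s free GeoLite2 City database, and a lookup today might look like the following sketch (the file name and address are illustrative, and the geoip2 package is an assumption of this example):

```python
import geoip2.database  # pip install geoip2

# Assumes the free GeoLite2 City database has been downloaded locally;
# it is the modern successor to the GeoLite City data used at the time.
with geoip2.database.Reader("GeoLite2-City.mmdb") as reader:
    rec = reader.city("87.106.104.43")
    print(rec.country.name, rec.city.name,
          rec.location.latitude, rec.location.longitude)
```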

This problem is exacerbated when researching censorship, as redirection services are typically offered specifically to allow users to bypass censorship by routing their connections outside of a filtered area. There seems little incentive for people to offer anti-censorship tool exit points in known filtered locations. Tor, as an example, did not appear to offer any exit relays in mainland China when we conducted our experiments.

Without available services such as Tor, we began to investigate common services that allow connections to be bounced in a similar fashion. We are not interested in using these connections for any significant data transfer, and so even fragmentary information can be useful. With this perspective, DNS, IRC, FTP and others all offer potential information sources to learn how remote systems see the internet.

It was shortly after beginning to consider these systems, however, that we became concerned with the ethical issues surrounding this type of research. Ignoring more technical approaches, the simplest way to learn how an individual computer’s connection is filtered is by contacting a remote user and asking them to run censorship detection code themselves. Whilst it can be difficult to scale such an approach, a great deal of information can be gained from each experiment run in this way. Unfortunately, probing censorship on the internet almost inevitably involves deliberately triggering a censorship mechanism, by attempting to access a blocked website, by searching for a banned term, or by transmitting data containing filtered keywords. When these attempts are made through a third party’s connection, potentially without their informed consent, consideration must be given as to the level of risk to which that user is exposed.

The nature of the risk can, however, be more subtle than simply coming to the attention of government censors. The Herdict project, a web-based censorship mapping tool, functions by loading potentially blocked pages as an HTML iframe embedded in their webpage, and users report whether the embedded page is visible or not. This embedded page, which loads on screen without warning and which can involve topics such as sexuality and political or religious expression, could cause anything from minor embarrassment to serious social or legal consequences if an unsuspecting user were observed viewing it in certain cultures.

Without an easy answer to these problems, we have limited ourselves to exploring DNS-level censorship. DNS servers are widespread on the internet, are often open to the general internet, and are public services run, in general, by organizations rather than individuals. This allows us to query for sites we believe to be blocked without exposing individuals to any form of risk. Obtaining a reasonable list of DNS servers in China was simple via a request to APNIC. It would also be simple to scan known Chinese IP blocks for open DNS servers, but we felt this to be unnecessary.
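
For illustration, such a scan would amount to sending each candidate address a single recursive query and seeing whether an answer comes back. A sketch of the idea using dnspython (we did not run such a scan; the addresses below are documentation-range placeholders):

```python
import dns.message
import dns.query  # pip install dnspython

def is_open_resolver(ip, timeout=2.0):
    """Send one recursive query for a benign name; any answer means
    the server resolves for arbitrary clients."""
    query = dns.message.make_query("example.com", "A")
    try:
        response = dns.query.udp(query, ip, timeout=timeout)
        return len(response.answer) > 0
    except Exception:
        return False

# Documentation-range placeholders, not real Chinese address blocks.
for candidate in ("203.0.113.1", "203.0.113.2"):
    print(candidate, is_open_resolver(candidate))
```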

With a reasonable list of several hundred DNS servers, we retrieved a list of known blocked domain names from the Herdict project and automated the process of requesting the domain name to IP mapping from the remote DNS servers, comparing the results to those we could obtain from our own unfiltered connections.
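
In outline, that automation is a simple loop over servers and domains, comparing each answer to a baseline gathered over an unfiltered connection. A minimal sketch of the process, with invented names, again using the dnspython package:

```python
import dns.resolver  # pip install dnspython

def lookup(server_ip, domain):
    """Return the set of A records a given resolver offers for a domain."""
    res = dns.resolver.Resolver(configure=False)
    res.nameservers = [server_ip]
    try:
        return {rr.address for rr in res.resolve(domain, "A", lifetime=5)}
    except Exception:
        return set()

def survey(servers, domains, baseline):
    """baseline: domain -> addresses seen over an unfiltered connection."""
    table = {}
    for server in servers:
        for domain in domains:
            got = lookup(server, domain)
            if not got:
                verdict = "no answer"
            elif got & baseline[domain]:
                verdict = "genuine"
            else:
                verdict = f"poisoned: {got}"
            table[(server, domain)] = verdict
    return table
```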

Initial results demonstrate a fair amount of DNS poisoning, with fake results reported by several servers for known blocked sites such as facebook.com, twitter.com and wujie.net (the Chinese domain for the UltraSurf anti-censorship product), as well as many others. In a number of cases, DNS servers simply reported fake IP addresses that, on scanning, did not appear to offer any services. In other cases we observed DNS servers forwarding requests to alternate DNS servers, often located in Beijing, that then returned either fake results or no results at all. A number of servers returned no results at all for well-known blocked sites. Despite this, in a good number of cases we did receive genuine, correct responses from DNS servers.

The most interesting result, at first glance, is the range of responses from the various servers. All possible behaviours, from genuine responses through faked results to no results at all, were observed. There does not, from initial examination, seem to be an obvious pattern to the distribution of these different result types. This is doubly interesting in light of the various other methods of censorship, such as deep-packet inspection and TCP resets, that are known to be employed in China and which could be expected to make DNS poisoning unnecessary.

We currently have the raw data gathered from across China and are analyzing it for interesting patterns; we will also be re-running the experiment at regular intervals in order to observe how the patterns of blocking change over time. Of current interest is whether there are significant correlations between the types of filtering employed and the geographical or organizational distribution of the servers; those DNS servers that chose to redirect our requests are also a very interesting avenue of enquiry. Of the faked results received, we have already observed that these are often redirected to a small pool of “sink” IP addresses; whether these sinks are consistent across regions or organizations is not known.

There are many interesting questions to be answered from this line of research, and China is by no means the only country worth investigating. A more general point of interest is how to learn which sites to test for filtering. We have relied, to a large extent, on both the Herdict project’s list of sites, gathered through manual reporting from users around the world, and on our own knowledge of blocked sites. Automating this process of detecting filtered sites is certainly a problem worthy of further attention.

While there are serious legal and ethical limitations to researching censorship directly in this way, means to do so are available and allow scope for much interesting work. I look forward to sharing our results in the future.

Paper: Fine-Grained Censorship Mapping: Information Sources, Legality and Ethics

Categories: Censorship, China, Conference

Experiences of Chinese Internet Censorship

September 12th, 2011

I was recently invited to speak at Dalian Technical University, in Liaoning Province in Northern China, and took the opportunity afterwards to spend three weeks travelling around China with my family. (Finally putting several years of studying Mandarin into practice, with a reasonable level of success, and having a fantastic time.)

Being in China, I couldn’t help but poke a little at the limitations imposed on my connection. Travelling with 14-month-old twins is a full-time job, albeit one that I can highly recommend, which did not leave me a great deal of time to analyse connections. I will therefore only report on my personal experiences and impressions, although the data that I did gather will hopefully be useful for a future paper based on work that I presented at FOCI’11. As such, anyone who knows a little about Chinese state-level internet censorship is unlikely to find anything new here.

In my time in China, I ran simple filtering tests on all the Internet connections to which I had access, covering locations in Beijing, Dalian, Shanghai and Hangzhou. I also took the chance to run code to test local nameservers for DNS manipulation when requesting known blocked sites.

The most notable observations from my own experiences were:

  • Secondary effects of blocking
    Twitter and Facebook are among the more well-known blocked sites in China. In the course of normal usage, it is simple to avoid such sites. (Chinese users, of course, have a variety of alternatives to Facebook, with Sina Weibo in place of Twitter.)

    What is more noticeable, when browsing normal websites and blogs, is the severe slowdown caused by the inability to load Twitter’s “Tweet this” and Facebook’s “Like this” buttons that are now commonly embedded on blogs and news sites. Firefox is unwilling to render the page until these load or, presumably, time out, which cripples many sites.

    (It’s worth mentioning that all connections to which I had access were relatively slow and unreliable by UK standards, adding to this effect.)

  • Tor blocking
    Tor is a standard presence on my netbook, despite not being used for everyday browsing. As expected, the comforting green onion on my taskbar faded to a sickly yellow for my entire journey. I didn’t, sadly, have time to experiment with Tor bridges.

  • Kindle is still uncensored
    One of the amusing censorship stories of this year has been the discovery that Twitter, along with apparently every other site, is not blocked when using the Kindle’s built-in browser. This is caused by the Kindle automatically routing all browsing requests through Amazon servers located outside of China. I had predicted that this would not be blocked in China; the number of Kindle users is too low, and the browser is just not practical for day-to-day use. Combined with the effort required to force Amazon to reroute requests, it never seemed likely that China would clamp down on the Kindle.

    As expected, browsing via the Kindle showed no evidence of blocking whatsoever.

  • DNS manipulation is widespread
    As part of earlier research I have some very basic code to perform DNS lookups for blocked websites, retrieved from the Herdict Project, against remote nameservers. This was run remotely against a list of Chinese DNS servers to compare relative results in different parts of China.

    Being physically located in China added little to the data that I already have, except to add a number of DNS servers that weren’t in my initial list. A deeper analysis of this data, along with the data captured in my earlier experiments, is forthcoming. The few extra data points from this trip confirm only that DNS manipulation is widespread for blocked sites, alongside any other more sophisticated means to filter content.

    (I will be writing up my FOCI’11 paper here in the very near future, which will go into this work in much more detail.)

  • VPNs have a truly significant positive effect
    On untrusted networks I use VPN software by default where at all possible, for simple security reasons. In almost every location in China, connection to the Oxford University (Cisco) VPN was possible. Where I could not connect, a poor connection was as likely an explanation as anything more sinister.

    More noticeable was that, given the sites I normally visit and the content they include, the VPN made a truly significant difference to achieving anything close to my normal browsing experience.

    As mentioned above this was not simply a matter of being able to access Twitter and Facebook, both of which I rarely visit directly; nor was it a matter of my connection being dropped because I happened to type a politically sensitive term into a search engine. Instead, the most interesting aspect of directly experiencing this form of censorship was a subtle and generalised degradation of the internet — unpredictable connections, failed links, and slow loading times. All of these are a result of the interconnectivity of the web, and the assumption that cross-site links are equally available. (Wikipedia being blocked, however, was surprisingly restrictive. One interesting effect of restricting connectivity is that it draws attention to your own browsing habits.)

In summary, my brief experience of Chinese internet censorship was strikingly different to my expectation. The majority of reports, in my experience, focus on the dramatic blocks of major websites, or on heavy-handed filtering of search results. In practice I was far more struck by the continual, low-level pressure that censorship imposes on normal usage, even though, as a lǎowài, I was largely unaffected by wider social or legal concerns from trying to access blocked sites. Most notably, I was surprised by the level of collateral damage that broad-scale filtering imposes on a wide range of largely unrelated sites.

While the internet in China is by no means unusable, the restrictions are tangible. The context of my own usage, mainly restricted to English-language websites based in the west, is unlikely to be representative of the experience of a Chinese user. My inability to meaningfully browse and engage in Chinese-language websites prevented me from experiencing the less technical aspects of filtering: self-censorship, pro- and anti-government rhetoric, selective news reporting and others.

I can say that I was very glad to be back with a nice Clean Feed in the UK.

Categories: Censorship, China

Free and Open Communications on the Internet (FOCI) Workshop

February 27th, 2011

I’m on my way back from the Workshop on Free and Open Communications on the Internet (FOCI) that was held in the last few days at Georgia Tech in Atlanta. Hosted by Nick Feamster, FOCI brought together a number of computer scientists, activists, lawyers and policy makers to discuss the impact of anti-censorship technologies and to think about future directions from a number of angles.

It’s always interesting to see experts on the same topic from different fields together, and FOCI was no exception. Despite occasional diversions into policy-speak or tech-talk that left half the room baffled, I came away impressed with how often we managed to cross that barrier.

The technical side of the crowd seemed to have the benefit of more time to present, and so there were thorough discussions on the nature of filtering mechanisms and their technical capabilities as well as details of anti-censorship technologies, particularly Tor. Roger Dingledine gave some interesting, if slightly statistically questionable, numbers regarding Tor usage in various countries during the recent events in the Middle East.

An estimate from Hal Roberts, based on surveys of activist bloggers, was that 3% of worldwide internet users employed some form of anti-censorship tool, including web-based proxies. Tor’s own estimated usage figures, hampered by the difficulty of monitoring use of an anonymising tool, showed usage ranging from the tens of thousands in Egypt down to tens in Yemen. Within the Tor project, active research is focusing on more effectively calculating real usage data. (See https://metrics.torproject.org if you’re interested.)

(Tor’s ongoing efforts to bypass filtering and to improve their system of bridges, as well as to improve the performance and security of their network, remain a seemingly endless source of interesting technical challenges.)

On the legal and policy side it was useful to see the international perspective given substantial time, rather than predicating discussions on the First Amendment and Safe Harbor.

What the discussion highlighted is that, despite the existence of tools such as Tor and their increasing use, censorship is a complex and multi-faceted issue. Tor has done an excellent job on the technical side in combating censorship at many levels of the stack, and has extended that to user education, social awareness and discussions with policy makers. In general, though, it seems that it is at the social level that both filtering and anti-filtering will begin to move.

One observation that I’ve heard elsewhere is that “hard” filtering, such as China’s Golden Shield, is being extensively supplemented or replaced with “softer” filtering that aims to drown out dissenting views with waves of government-sponsored information. This can range from sponsored pro-government commentary, such as China’s 50 Cent Party posting on blogs, to legitimate pro-government sites. Approaching this from a technical angle is relatively ineffective, although technologies such as authentication and private access still have their role. Means to combat the resources of a major player, such as a state or government, in order to level the playing field of online debate will be an important question in the future.

For me, one of the most important facts to come out of the day is that we need more effective ways of measuring censorship around the world, in terms of methods used, type and extent of filtering and usage of circumvention tools. Existing approaches to measuring censorship require significant human effort, and often report only relatively crude results. Improving and automating the gathering of this information raises some interesting, and very useful, open questions.

FOCI was a good starting point for interdisciplinary work in this area, and I hope it will lead on to similar events in the future.

Categories: Censorship, Conference

Contentious Connections

January 6th, 2011

I have a comment piece in the Guardian today about network neutrality and BT’s Content Connect service. The online version is here.

I’ll let the article stand largely by itself, while pleading the difficulty of putting the net neutrality debate across in 800 words and simultaneously linking in BT’s Content Connect.

One point I would like to add, for anyone who finds this, is that the term “net neutrality” can be, and often is, very misleading; if you’re new to the subject then “neutrality” almost certainly means something different to what you think it means! Common terms combined with complicated technical subject matter are a recipe for disaster. Tim Wu’s excellent “Network Neutrality FAQ” should be required reading for this subject.

The Guardian article in full:

The desire for high-bandwidth internet services, such as internet TV, is placing ever greater demands on the internet’s infrastructure. New technologies are being developed to meet these demands, but companies are increasingly considering new business models. With its Content Connect service, BT has brought itself into conflict with a fundamental design principle of the internet, raising concerns that the drive for profit could lead to changes that will harm consumers and content producers.

The principle in question is that of net neutrality, which broadly states that data passing over the internet should be treated equally regardless of whose data it is. From a user’s perspective this means that your ISP should not, for example, prioritise Google’s traffic to you over Facebook’s. Net neutrality is the cause of much debate and confusion. It is accepted that prioritising one type of data over another is necessary for the internet to function. An ISP will therefore give preference to voice or streaming video data, as these rely on swift delivery to be useful; however, preferentially treating one content provider’s videos over another’s is considered unacceptable. Differentiation of service should therefore be made solely for engineering or quality-of-service considerations, and not for commercial exploitation.

Proponents of net neutrality, such as the UK’s Open Rights Group, argue that, by treating all content providers equally, the internet provides a level playing field that stimulates innovation and competition. If Google could pay to have their content delivered more quickly than Facebook’s they would have a significant advantage, and smaller companies could be squeezed out of the market. This could result in a higher level of market domination by large companies, and in a “tiered” internet in which access to certain content requires extra payment for premium services.

BT’s Content Connect service is a direct response to the demand for internet TV, and works by reducing the amount of data transferred across the internet by temporarily storing popular content close to end users. From a technical perspective, this is an excellent way to improve content delivery. The controversy lies in the business model that drives the service. Rather than agnostically storing popular content, such as the latest digital episode of Coronation Street, ISPs offer the service to content providers such as Google, who must pay to have their content delivered at higher speeds and quality. This gives those providers that can afford the service a significant advantage over those that cannot, the latter being relegated to the slower traditional network.

The US has recently introduced rules supporting net neutrality, although the EU has indicated that it views such measures as unnecessary “at this time”. But is net neutrality, as a principle, necessary or even desirable? Opponents have argued that, given the essentially democratic nature of the internet, market forces should be sufficient to regulate companies. If ISPs choose not to carry certain content then their customers will leave them for more content-rich providers. Indeed, opponents argue that by preventing commercial differentiation of services, innovation by companies seeking profit will be stifled. BT itself has claimed to support net neutrality as a principle, but stated that “service providers should also be free to strike commercial deals should content owners want a higher quality or assured service delivery”.

As the debate continues, there is increasing pressure on companies to maximise profit while meeting the increasing demands of users. We can hope, and must ensure, that the factors driving the development of the internet sustain it as a free and open medium of exchange, and that the drive for profit is not allowed to override this ideal.

Categories: Article, Net Neutrality

Wikileaks Lessons for Privacy-Enhancing Technologies

December 10th, 2010

I had been studiously avoiding writing about Wikileaks. I’ve been interviewed a couple of times in the last few days on various aspects of the ongoing saga, though, and it has highlighted some points that I think are worth mentioning. (Slightly misquoted in BBC News online here, and brief comments about digital activism on BBC Radio 4’s World at One, about 25 minutes in, here.)

One of the most interesting aspects of the Wikileaks saga, from the point of view of research into privacy-enhancing technologies, is how totally uninteresting it is. Given that we have spent years researching means for sender- and recipient-anonymous communications and censorship-resistant access to content, a hugely subversive and risky site like Wikileaks is nothing more than a website with an encrypted submission form. Use of Tor is advised, but for the highest levels of security postal submission is still considered the gold standard.

In a similar vein, both the attempts to block Wikileaks and Wikileaks’ response to those attempts have been brutally practical and theoretically unexciting. Rather than firewalls and DNS or IP blacklists, we see political and economic pressure on hosting companies and DNS registrars. Rather than untraceable distribution of content and proxying of blocked connections, we see Wikileaks’ hosting hopping between countries and companies, and appeals to the community to mirror content widely. Rather than mixes and onion routing, we see reliance on just how difficult it is to track even normal Internet connections in a real-world environment.

For Wikileaks, the danger is largely for contributors rather than consumers. Viewing Wikileaks is, in most cases, unlikely to have serious consequences for the average reader. Even if it were, the chance of being singled out amongst the millions of hits is protection enough for all but the most paranoid. Submitting documents, however, potentially puts users at great risk. A practical tradeoff between security and usability has been made, though: standard web access is “anonymous enough” even for such potentially dangerous content.

Rather than being concerned with theoretically strong security, privacy or anonymity, Wikileaks’ success has stemmed from the social issues of getting access to information and distributing it. It has developed and promoted a brand, ensuring that it is the market leader for leaks, if not an outright monopoly. The current media storm, involving issues far beyond the original leaked content, is advertising beyond Wikileaks’ wildest dreams, and all but guarantees that the next person who finds themselves holding a potentially explosive set of revelations will be knocking on Wikileaks’ door. Certainly from the point of view of research into censorship resistance, there are lessons to be learnt here.

Of course I don’t think that technical research is of no use, or that we should stop developing interesting and useful new privacy technologies. Wikileaks is a single scenario with given goals, and there are many cases where we would require different or stronger guarantees of various forms of privacy. What I feel is that, as a community, we need to recognise and interact with the wider issues that surround our technologies. This has been known for a number of years in the security community, where research into security economics and security psychology has produced significant results. When we think about new developments in privacy-enhancing technologies, we need to start thinking in the same terms.

Categories: Censorship, Interview