
5 – simple, clear, logical, as can be


There are some very good histories of the development of the internet and I do not intend to retell the whole story here. I especially enjoyed ‘Where Wizards Stay Up Late’ by Katie Hafner and Matthew Lyon.

For the purposes of my story, I want to look at the connections between governments and the community that ‘owns’ the protocols and infrastructure that make it possible for internet nodes to connect with each other.

I include three areas in what I think of as the ‘logical’ layer – the basic internet protocols that can be used for moving packets of data between nodes, the scheme for allocating unique Internet Protocol (IP) addresses to nodes, and the Domain Name System (DNS) that associates human-readable names with the IP addresses of particular nodes and services.
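To make these three elements concrete, here is a minimal Python sketch (standard library only, with example.com standing in for any internet node) that performs the DNS step and then opens a connection using the IP and transport protocols:

```python
import socket

# DNS: translate a human-friendly name into an IP address.
ip = socket.gethostbyname("example.com")
print(f"example.com resolves to {ip}")

# Internet protocols: open a TCP/IP connection to that address
# (port 80 is the conventional port for HTTP).
with socket.create_connection((ip, 80), timeout=5) as conn:
    print(f"connected to {conn.getpeername()}")
```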

Technical Protocols

The internet protocols were initially defined in the 1970s and 1980s by a small group of technologists working in academia and industry. They were mostly, but not exclusively, based in the US and relied on a significant amount of funding from the US Government. The protocols they developed were adopted by the US Department of Defense, but because they were open standards other bodies could also implement them.

The foundational document for the Internet Protocol, RFC 791 (RFC stands for ‘Request For Comments’), explains that it was prepared for the Defense Advanced Research Projects Agency (DARPA) by the University of Southern California. It was not a secret defense document but was shared with a community of researchers to seek their input.

Development of the internet protocols was formalised in 1986 with the creation of a body called the “Internet Engineering Task Force” (IETF), which continues to be the key body for defining standards. The original group consisted of researchers who had received US government funding, but their meetings were quickly opened up to all interested parties. And while there was initial support from the US government, since 1993 the IETF has operated under the auspices of an NGO, the Internet Society.

The history shows us that the key players in developing the internet protocols were, and continue to be, a ‘coalition of the willing’. There have been varying degrees of government support for the people who do this work, with a strong component of US funding for the early pioneers, but it operates outside of any form of governmental or inter-governmental control.

There was something much closer to a government standard for computer networking developed in the 1970s and 1980s called X.25. This was a product of an intergovernmental organisation, the International Telecommunication Union (ITU), and many governments promoted it as the best or only standard for wide area network connections.

When I went to work in the UK’s National Health Service (NHS) in 1991, the official guidance then was to use only X.25 to connect different sites and the new internet protocols had to be installed by stealth. The French government used X.25 for their Transpac network, which had millions of users at one point, and the protocol continues to be used in some specialist areas to this day.

Differences between the more informal, bottom-up way of working in the internet standards bodies and the more formal processes used by many inter-governmental standards organisations create tensions at times. But it is hard to see any scenario in which governments could take control of defining the rules for the internet protocols themselves, or any strong motivation for them to do so.

Numbers and Names

The internet can only work if each node has been allocated a unique address that identifies it to other nodes on the network. Because the internet is a global network, this requires a commonly agreed global schema.

Telephone numbers also require a common global addressing framework. Each country in the world has its own national numbering plan, with some kind of central authority allocating unique numbers to telephone service providers. The ITU allocates a unique prefix to each country and has a role in making sure all the national schemes work with each other so that international calls get through.

Internet networking developed without a similar hierarchical structure for allocating addresses. The protocols allow any IP address to be allocated to any node anywhere in the world. The administrative structure evolved to include a layer of regional bodies (the Regional Internet Registries) that were allocated blocks of IP addresses to maintain, but this was organisationally convenient rather than technically necessary.

A body called the Internet Assigned Numbers Authority (IANA) has responsibility for the orderly allocation of IP addresses. It is part of a body called the Internet Corporation for Assigned Names and Numbers (ICANN) that also oversees the Domain Name System (DNS).

Government interest in the IP address allocation function largely reflects concerns about whether a country has access to enough IP addresses for its needs. The most widely used version of the internet protocol, IPv4, allows for just over 4 billion addresses, but these have now all been allocated globally. The newer version of the internet protocol, IPv6, vastly increases the pool of available addresses, but the transition to using it can be complex and is happening slowly. This may be an area where we see more government intervention over the coming years, promoting the transition to IPv6 or arguing over the current allocation of IPv4 addresses.
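The scale of the difference is easy to check: IPv4 addresses are 32 bits long while IPv6 addresses are 128 bits. A quick back-of-the-envelope calculation in Python:

```python
# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"IPv4 addresses: {ipv4_total:,}")    # 4,294,967,296
print(f"IPv6 addresses: {ipv6_total:.3e}")  # about 3.4e+38
print(f"IPv6/IPv4 ratio: {ipv6_total // ipv4_total:.1e}")  # 2**96, about 7.9e+28
```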

There has been more overt government interest in the Domain Name System (DNS), and this has led to a significant focus on how ICANN is structured and managed. Governments have a range of issues around these user-friendly names that are linked to underlying IP addresses. These include who has the right to issue domain names, the cost of maintaining records, preventing harm in various forms, and how the system should evolve, for example with new domain types.

For many years ICANN operated under a contract from the US Department of Commerce. This created concerns among other governments and the wider internet community that the interests of the US government would be unfairly privileged. In response to these concerns, a Governmental Advisory Committee (GAC) was set up in 1999 and has met regularly ever since. This is perhaps the closest thing there is to government regulation of the logical layer, though, as the name makes clear, it has an advisory rather than executive function.

Even with the GAC in place, concerns remained over the special position of the US government. This culminated in ICANN ending its contractual relationship with the US government in 2016. It is now an independent non-governmental organisation that still invites governments in on an advisory basis but is not beholden to any formal national or inter-governmental body.

Future Trends

We can expect to see continued interest from governments in exerting control over all aspects of the internet as it becomes central to the lives of their citizens. It is uncomfortable for governments to feel dependent on things they cannot control.

A concrete expression of that interest is found in the Internet Governance Forum, which has met periodically since 2006. This is not a decision-making body, but it creates a space where issues of interest to governments can be discussed.

The success of the technical community’s process for developing core protocols, as evidenced by their widespread adoption, provides a strong defence against intervention in this part of the system. This is largely a technical area where everyone has invested significantly and so has a lot at stake.

The allocation of IP addresses could yet become an area where governments want to be more involved. In particular, there is a need for coordination, and possibly funding, around shifting groups of nodes to the new addressing system, IPv6.

The allocation of domain names has flared up as an issue over the years, but the trend towards people accessing services via apps rather than by typing in web addresses may shift the focus. Attention may increasingly turn to the major app stores and the rules they apply for developer access and service naming, where these raise similar issues to those that have featured in domain disputes. For example, there was a long-running argument, involving governments, over the creation of a .xxx set of domain names for adult services, and you could imagine similar issues arising over how adult apps should be managed in mobile app stores.

Blocking and ‘Splinternet’

When governments want to restrict access to particular nodes on the internet, there are several ways they can go about this.

They can intervene at the physical layer if the service is within their jurisdiction or in a country that will accept disconnection orders from them. This means literally pulling the plug on a device that is publishing the content they see as problematic. This may also involve the arrest and prosecution of the person or people who controlled that device.

When the offending content is child abuse material, this approach of disconnect and prosecute is generally seen favourably by governments, including those that strongly support freedom of expression. If anything, complaints arise when countries will not take robust action against servers distributing this content from within their jurisdictions.

The same consensus will often not be found for other kinds of problematic content. Cross-border enforcement is also more onerous than working within a single jurisdiction, and there can be significant procedural and prioritisation challenges even where countries are aligned on something being bad. These factors mean that governments will often be unable to secure the physical disconnection from the internet of nodes distributing content they deem to be illegal.

It is also hard for any single government to direct who has access to the global logical facilities of IP addresses and domain names given that they do not control the administrative structures. Internet services in every country today continue to recognise the allocation of IP addresses to particular nodes within the same global schema and use a common set of DNS data.

What governments can do without breaking the logical layer is to look at the data that flows over networks they can influence – those within their own borders – and disrupt this locally. These disruptions can take a number of forms and the Internet Society has published a good guide to the different methods used by governments to restrict access to content.

These methods can be very effective at preventing ‘normal’ access to content but are generally ineffective against someone with technical skills and determination. The decentralised nature of the internet means that there are multiple ways to connect devices and it is extremely hard to block all routes.

Governments are generally content if they can restrict most connections by closing off commonly used forms of access. They will often be able to see where people are using workarounds and move on to restrict these new routes as well once they become more widely used.

A good example of how this cat and mouse game can play out was seen in Turkey in 2014. The Turkish government had ordered local internet access providers to block access to content by not returning IP addresses when people made a DNS query. People in Turkey started using Google’s public DNS servers instead of local servers as these would respond with the correct IP addresses. The Turkish government then appeared to take measures to direct people back to the local DNS servers.
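A rough sketch of what that workaround looks like in practice, assuming the third-party dnspython library (the 192.0.2.1 nameserver below is a documentation placeholder standing in for an ISP's local resolver):

```python
import dns.resolver  # third-party library: pip install dnspython

def lookup(name: str, nameserver: str) -> set[str]:
    """Ask one specific DNS server for a name's A (IPv4) records."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    return {record.address for record in resolver.resolve(name, "A")}

# Compare the local (ISP-assigned) resolver's answer with Google's
# public resolver at 8.8.8.8 - the workaround used in Turkey in 2014.
local = lookup("example.com", "192.0.2.1")   # placeholder local resolver
public = lookup("example.com", "8.8.8.8")
if local != public:
    print("answers differ - possible DNS-level interference")
```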

These local or national restrictions do not break the internet as a whole but only interfere with the experience of users in that locality trying to make a connection. Anyone whose traffic does not go through the networks or devices that are deploying restriction tools can still get to the service.

Example 6 – RuNet

The Russian government has indicated a wish to make the internet in Russia ‘independent’ of the global internet. This has been reflected in legislation and in technical exercises carried out by ISPs.

The precise intent is the subject of lively debate, but this does appear to be an attempt at least to test breaking away from the global internet at the logical level.

If the network were reconfigured so that internet users in Russia could only access servers physically located on Russian territory, this would break the internet as we know it. There is also rhetoric around preference for Russian services. If this led to people being redirected at the network level from global information society services to Russian equivalents, then this would again take us into a different world.

We can see the impact of national attempts to direct internet traffic in an incident involving Pakistan and YouTube in 2008. Pakistani telcos wanted to point people looking for YouTube to a local server instead. But they did this by announcing a routing change that spread beyond Pakistan, promoting their local server to people trying to access YouTube from anywhere in the world.
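The reason the mistake propagated so widely lies in how routers choose between competing routes: they prefer the most specific (longest) matching prefix. A sketch using Python's ipaddress module, with the widely reported prefixes from the incident, shows why the hijacked announcement won:

```python
import ipaddress

# Widely reported prefixes from the 2008 incident.
routes = {
    ipaddress.ip_network("208.65.152.0/22"): "YouTube's legitimate route",
    ipaddress.ip_network("208.65.153.0/24"): "Pakistan Telecom's announcement",
}

# An illustrative destination address that sits inside both prefixes.
dest = ipaddress.ip_address("208.65.153.238")

# Routers use longest-prefix matching: the most specific route wins,
# so traffic for YouTube was drawn towards the /24 announcement.
best = max((net for net in routes if dest in net), key=lambda net: net.prefixlen)
print(f"{dest} -> {routes[best]} ({best})")
```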

The telcos in Pakistan reversed the change they had made, but if they had not done so then the only way to resolve the conflict might have been to cut them off from the global internet. The drive to promote local services in Russia has the potential to create similar conflicts between local and global internet addressing systems.

Summary: the rules and tools for logically routing traffic across the internet are run outside of direct government control. Public funding helped support the development of the internet protocols, but the key bodies largely exist outside of national or inter-governmental structures. The relationship between the administration of the Domain Name System (DNS) and governments is interesting and has been controversial at times.
