KATE SOUTHWORTH

The Internet, Protocol and Politics

The Internet is a global distributed computer network underpinned by protocol, ‘a set of technical procedures for defining, managing, modulating, and distributing information throughout a flexible yet robust delivery infrastructure.’ [1] Although not designed specifically for warfare, the Internet emerged from American military technology of the 1950s and 1960s. Its origins can be traced to the Advanced Research Projects Agency (ARPA), set up by the United States Defense Department in 1958 with the aim of establishing technological military superiority over the Soviet Union in response to its launch of the first Sputnik satellite. In the early 1960s, amid concerns about the ability of the United States’ telecommunications systems to withstand nuclear attack, Paul Baran at the RAND Corporation (and, independently of Baran’s work, Donald Davies at the British National Physical Laboratory) developed packet switching, a revolutionary new communications transmission technology. Baran proposed the development of a communication network that would allow several hundred major communications stations to ‘intercommunicate with one another’ and ‘operate as a coherent entity’ after an enemy attack. [2]
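The core idea of packet switching can be illustrated with a toy sketch (the function names are mine, and no real routing or transmission is performed): a message is divided into sequence-numbered packets, each packet may travel independently and arrive in any order, and the receiver restores the original message from the sequence numbers.

```python
import random

def to_packets(message: str, size: int = 4):
    """Split a message into (sequence number, payload) packets."""
    return [(seq, message[i:i + size])
            for seq, i in enumerate(range(0, len(message), size))]

def deliver(packets):
    """Simulate independent routing: packets may arrive in any order."""
    arrived = packets[:]
    random.shuffle(arrived)
    return arrived

def reassemble(arrived):
    """The receiver restores the original order from sequence numbers."""
    return "".join(payload for _, payload in sorted(arrived))

message = "packets may take different routes"
print(reassemble(deliver(to_packets(message))) == message)  # True
```

Because each packet carries its own sequence number, no single route (and no central switching station) is indispensable: any path that delivers the packets suffices.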

Having identified a wide range and variety of networks, Baran concluded that they all factored ‘into two components: centralized (or star) and distributed (or grid or mesh)’. [3] For Baran, the significant difference between centralised and distributed networks lay in the extent to which each was capable of maintaining viable communication channels following a targeted assault on military telecommunications infrastructures. Centralised networks are configured with a single central node that hierarchically controls and commands all activities, and as such are the most vulnerable under attack: ‘destruction of the central node destroys intercommunication between the end stations’. [4] Perhaps somewhat surprisingly, a decentralised network is also a hierarchical structure, being merely ‘a multiplication of the centralized network’ [5] in which the destruction of just a small number of nodes can destroy communication. The distributed network differs from both the centralised and the decentralised network, having ‘no central hubs and no radial nodes. Instead each entity in the distributed network is an autonomous agent.’ [6]

This distributed communications network is independent of central command and control and can remain operational even after a number of its components have been destroyed. This research was later incorporated into the development of an interactive computer network known as ARPANET, built by one of ARPA’s smaller departments, the Information Processing Techniques Office (IPTO). By 1973 a number of these distributed networks were being developed, and scientists began to explore the possibilities of linking them together in a network of networks. For computer networks to interact with each other, standardized communication protocols are needed.
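Baran’s contrast between the two topologies can be made concrete in a small sketch (the node names and graphs are illustrative, not drawn from Baran’s paper): destroying the hub of a star network isolates every end station, while a mesh with redundant links survives the loss of a node.

```python
from collections import deque

def reachable(adj, start):
    """Return the set of nodes reachable from `start` (breadth-first search)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        for nxt in adj.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

def remove(adj, dead):
    """Return the topology after node `dead` is destroyed."""
    return {n: [m for m in links if m != dead]
            for n, links in adj.items() if n != dead}

# Star (centralised): every end station communicates only through the hub.
star = {"hub": ["a", "b", "c"], "a": ["hub"], "b": ["hub"], "c": ["hub"]}
# Mesh (distributed): redundant links, no privileged node.
mesh = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}

print(reachable(remove(star, "hub"), "a"))  # {'a'}: b and c are cut off
print(reachable(remove(mesh, "c"), "a"))    # {'a', 'b'}: still connected
```

The redundancy of links, not any special property of the individual nodes, is what allows the distributed form to ‘operate as a coherent entity’ after damage.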

The ultimate goal of the Internet protocols is totality. The virtues of the Internet are robustness, contingency, interoperability, flexibility, heterogeneity, pantheism. Accept everything, no matter what source, sender, or destination. [7]

This was achieved with the development and refinement of the Transmission Control Protocol (TCP) and the Internet Protocol (IP), together giving the TCP/IP protocol upon which the Internet still operates today. A few years later there followed a proliferation of networks linked together through gateways. In 1985, five supercomputer centres were built in the USA and the National Science Foundation built a ‘backbone’ network to connect them; with the later involvement of networks in Europe and Asia, a super network comprising these networks, including the internet, was established. Following the decision taken by the Defense Department in the 1980s to make Internet technology commercially available, the department financially supported American computer manufacturers in including TCP/IP in their protocols. In the early 1990s the Internet was privatized. However, as Manuel Castells suggests, ‘[t]he current shape of the Internet is also the outcome of a grassroots tradition of computer networking’. [9] The sharing of the source code of the UNIX operating system, and the community of users that developed around it, contributed to the emergence of the open source movement: ‘a deliberate attempt to keep access to all information about software systems open’. [10] The World Wide Web, the part of the Internet within which much of today’s online activity exists, was developed in 1990 by Tim Berners-Lee at CERN, the European Laboratory for Particle Physics, as a hypertext system that enables users to retrieve and contribute information from and to any computer connected to the Internet. He developed HTTP, HTML and the URI (later known as the URL), allowing access to millions of hypertext resources through a consistent interface.
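The layering just described can be glimpsed in a minimal sketch (the example URL is illustrative; the actual TCP/IP transmission is not performed here): a URI/URL names a resource, and HTTP frames the request as plain text that TCP/IP would then carry between machines.

```python
from urllib.parse import urlparse

def build_get_request(url: str) -> str:
    """Compose the plain-text HTTP/1.0 request a client would send over TCP."""
    parts = urlparse(url)
    path = parts.path or "/"          # an empty path means the root resource
    return (f"GET {path} HTTP/1.0\r\n"
            f"Host: {parts.hostname}\r\n"
            "\r\n")                   # blank line ends the request headers

print(build_get_request("http://info.cern.ch/hypertext/WWW/TheProject.html"))
```

The point of the ‘consistent interface’ is visible here: any resource on any connected machine is addressed and requested in exactly the same textual form, regardless of what the resource contains.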

The Internet is a centreless structural form that ‘resembles a web or a meshwork’, [11] and which ‘follows a contrary organizational design’ [12] to the bureaucracy and hierarchy of centralised structures. Independent of central control, the Internet is a global distributed computer network underpinned by protocol, ‘a set of technical procedures for defining, managing, modulating, and distributing information throughout a flexible yet robust delivery infrastructure’. [13] Protocol ‘functions largely without relying on hierarchical, pyramidal or centralized mechanisms; it is flat and smooth; it is universal, flexible and robust’. [14] It is not concerned with the content of what passes through the network, but rather with the facilitation and maintenance of communication between nodes. A distributed network has ‘no central hubs and no radial nodes’ [15] to organise communication: it is ‘a specific network architecture characterised by equity between nodes, bi-directional links, a high degree of redundancy and general lack of internal hierarchy’. [16]

The term protocol is most known today in its military context, as a method of correct behaviour under a given chain of command. On the Internet, the meaning of protocol is slightly different. In fact, the reason why the Internet could withstand nuclear attack is precisely because its internal protocols are the enemy of bureaucracy, of rigid hierarchy, and of centralization. [17]

The Internet promotes the relational concepts of cooperation, collaboration, participation, sharing and community whilst being rigidly controlled by protocol. As Alexander Galloway suggests, within distributed networks there is an explicit tension between freedom and control. Freedom is evidenced in the open source culture that emphasises the benefits of making source code free and openly available. Equally participative in form is the flexibility with which Internet protocols enable thousands of diverse networks to be linked together, distributing control into autonomous locales. [18] But these open and participative features of the technology stand in sharp contrast with its controlling functions. It is an irony of Internet technology that ‘for protocol to enable radically distributed communications between autonomous entities, it must deploy a strategy of universalization and homogeneity. It must be anti-diversity. It must promote standardization in order to enable openness. It must organize peer groups into bureaucracies […] in order to create free technologies’. [19]

Collaborative, participative, open source philosophies informed network culture, and, believing the openness of the Internet’s architecture and culture to be its main strength, many artists, thinkers and hackers positioned their network-based activities as ethically and radically progressive. But what seemed, to some, to be an escape from commodified forms of production was, in 2004, beginning to be exposed as a characteristic of the market. In a posting to the nettime discussion list, Martin Hardie, discussing features of ‘post-fordist forms of production and immaterial/unpaid/precarious labour’, suggested that open source/free software development exists in a culture of sharing, cooperation, collegiality and community, but that this culture in turn sets up the individual to compete freely in the market place. [20] Thus the initial assumption that distributed networks presented a challenge to capitalist organisational forms began to be re-evaluated. Geert Lovink and Florian Schneider, in a posting to the nettime discussion list, argued that the concept of participatory networks had been co-opted by capitalism, thus losing much of its radicalism. [21] They suggested that whilst networks still held out the possibility for radical social change, the real purpose of networked participation and sharing had not been adequately debated or defined. In response, Alexander Galloway and Eugene Thacker warned against a ‘metaphysics of networks’ within which the network ‘appears as a universal signifier of political resistance’, [22] arguing that networks needed to be historically contextualised and understood with reference to the technological protocols that control them.

It can be seen, then, that by conflating hierarchical structures with authority and control, and because ‘networks exhibit a set of properties that distinguishes them from more centralized power structures’, [23] many artists and theorists imagined that distributed networks in themselves represented an organisational form that could resist control. And so the network, as that which offers an alternative to hierarchical and centralised forms of organisation, was steadily critiqued as being as restrictive and controlling as the bureaucratic organisational forms it sought to push aside. Galloway suggested that rather than removing authority, ‘distributed networks produce an entirely new system of organization and control, that while incompatible with pyramidal systems of power, is nevertheless just as effective at keeping things in line’. [24] In fact, he argues, it is precisely because distributed networks ‘create new, robust structures for organization and control’ [25] that it is imperative ‘to understand the nature of this new logic of organization’. [25] In the book of the same name, he suggests that protocol is ‘how control exists after decentralization’. [26] For him, protocol is fundamentally a technology of inclusion and openness, a fact that ‘makes it especially difficult to speak about [it] in a negative sense’. [27] He suggests that ‘for protocol to enable radically distributed communications between autonomous entities it must deploy a strategy of universalization and homogeneity. It must be anti-diversity. It must promote standardization in order to enable openness. It must organize peer groups into bureaucracies […] in order to create free technologies’. [28] Protocol is synonymous with the network itself and therefore, as Galloway argues, there is no escape from it. Protocol is not concerned with the content of what passes through the network, but rather with the facilitation and maintenance of communication between entities. Protocol is by definition relational: it prescribes and communicates the parameters of exchange. For Galloway:

Only the participants can connect, and therefore, by definition, there can be no resistance to protocol (at least not in any direct or connected sense). Opposing protocol is like opposing gravity – there is nothing that says it can’t be done, but such a pursuit is surely misguided and in the end hasn’t hurt gravity much. [29]

Galloway and Thacker go on to argue that once protocol is identified as the controlling force in distributed networks, the most effective oppositional strategy is ‘counterprotocol’. Interestingly, Galloway suggests that protocol gains its authority from another place: from technology itself and from how people programme it. If there is no escape from protocol, and protocol gains its authority from how people programme it, then how do artists begin to think the politics, ethics and aesthetics of protocol? Do artists develop distributed networks for which they write the protocol? In what ways would artists’ protocols differ from those which already exist? For Galloway and Thacker, ‘the best way to beat the enemy is to become a better enemy’. This is never fully elaborated, but the text hints at deliberate and unintended misuse of protocol to heighten fissures in the network as a possibility. They are unsure about what kind of counter-protocol practices can develop, but believe it is, in part, dependent upon how we refigure the concepts of resistance, agency and network affect.

Notes

1.  Thacker, “Foreword: Protocol is as Protocol Does,” xv.

2.  Baran, “On Distributed Communications Networks,” 2.

3. Ibid.

4. Ibid.

5. Galloway, Protocol: How control exists after decentralization, 31.

6.  Baran, “On Distributed Communications Networks,” 3.

7.  Galloway, Protocol: How control exists after decentralization, 29.

8. Ibid., 42.

9. Castells, The Internet Galaxy: Reflections on the Internet, Business and Society, 12.

10. Ibid.,14.

11.  Galloway, Protocol: How control exists after decentralization, 5.

12. Ibid., 3.

13.  Thacker, “Foreword: Protocol is as Protocol Does,” xv.

14.  Galloway, Protocol: How control exists after decentralization, 317.

15. Ibid., 33.

16. Ibid., 317.

17. Ibid., 29.

18. Ibid., 2.

19. Ibid., 142.

20. Hardie, “Post Fordist TV,” posting to Nettime mailing list, 24 May 2005
https://www.mail-archive.com/nettime-l@bbs.thing.net/msg02746.html (last retrieved 5th July 2020)

21. Lovink and Schneider, “Notes on the State of Networking,” posting to Nettime mailing list, 29 February 2004
https://www.nettime.org/Lists-Archives/nettime-l-0402/msg00099.html (last retrieved 5th July 2020)

22. Galloway and Thacker, “The Limits of Networking,” posting to Nettime mailing list, 24 March 2004
https://www.nettime.org/Lists-Archives/nettime-l-0403/msg00090.html (last retrieved 5th July 2020)

23.  Ibid.

24. Galloway, Protocol: How control exists after decentralization, 318.

25.  Ibid.

26.  Ibid.

27. Ibid., 147.

28. Ibid., 142.

29. Ibid., 121.