Technological determinism is a reductionist doctrine holding that a society’s technology determines its cultural values, social structure, and history. It should not be confused with the inevitability thesis (Chandler), which states that once a technology is introduced into a culture, the subsequent development of that technology is inevitable.
Technological determinism has been summarized as ‘the belief in technology as a key governing force in society’ (Merritt Roe Smith); ‘the belief that social progress is driven by technological innovation, which in turn follows an “inevitable” course’ (Michael L. Smith); ‘the idea that technological development determines social change’ (Bruce Bimber); ‘the belief that technical forces determine social and cultural changes’ (Thomas P. Hughes); and ‘a three-word logical proposition: “Technology determines history”’ (Rosalind Williams).
The term is believed to have been coined by Thorstein Veblen (1857-1929), an American sociologist.
Most interpretations of technological determinism share two general ideas:
* that the development of technology itself follows a predictable, traceable path largely beyond cultural or political influence, and
* that technology in turn has “effects” on societies that are inherent, rather than socially conditioned, and that society organizes itself to support and further develop a technology once it has been introduced.
Technological determinism stands in opposition to the theory of the social construction of technology, which holds that both the path of innovation and the consequences of technology for humans are strongly if not entirely shaped by society itself, through the influence of culture, politics, economic arrangements, and the like.
Technological determinism has been largely discredited within academia, especially by science and technology studies. However, it remains the dominant view within most news media and popular culture.
Pessimism towards techno-science arose after the mid-20th century for various reasons, including the use of nuclear energy in nuclear weapons, Nazi human experimentation during World War Two, and the lack of economic development in the Third World (also known as the Global South). As a direct consequence, the desire for greater control over the course of technological development gave rise to disenchantment with technological determinism in academia and to the creation of the theory of technological constructivism (see social construction of technology).
HARD AND SOFT
In examining determinism we should also touch upon the ideas of hard determinism and soft determinism. A compatibilist says that it is possible for free will and determinism to exist in the world together, while an incompatibilist would say that they cannot coexist and there must be one or the other. Those who support determinism can be further divided.
Hard determinists would view technology as developing independently of social concerns. They would say that technology creates a set of powerful forces acting to regulate our social activity and its meaning. According to this view of determinism, we organize ourselves to meet the needs of technology, and the outcome of this organization is beyond our control: we do not have the freedom to make a choice regarding the outcome.
Soft determinism, as the name suggests, is a more passive view of the way technology interacts with socio-political situations. Soft determinists still hold that technology is the guiding force in our evolution, but maintain that we have a chance to make decisions regarding the outcomes of a situation. This is not to say that free will exists, but it is possible for us to roll the dice and see what the outcome is. A slightly different variant of soft determinism is the 1922 technology-driven theory of social change proposed by William Fielding Ogburn, in which society must adjust to the consequences of major inventions, but often does so only after a period of cultural lag.
Modern thinkers no longer consider technological determinism to be a very accurate view of the way in which we interact with technology. In his article “Subversive Rationalization: Technology, Power, and Democracy,” Andrew Feenberg argues that technological determinism is not a well-founded concept, illustrating that two of the founding theses of determinism are easily questionable, and in doing so calls for what he terms democratic rationalization (Feenberg 210-212).
In his article “Do Artifacts Have Politics?,” Langdon Winner transcends hard and soft technological determinism by elaborating two ways in which artifacts can have politics.
Another conflicting idea is that of technological somnambulism, a term coined by Winner in his essay “Technologies as Forms of Life.” Winner asks whether we are simply sleepwalking through our existence with little concern for or knowledge of how we truly interact with technology. In this view it is still possible for us to wake up and once again take control of the direction in which we are traveling (Winner 104).
The telegraph and newspaper stories
Effects like those of the telephone are ones we can all readily think of, and there are many other communication technologies with fairly readily discernible effects. One intriguing example is the effect the telegraph supposedly had on the shape of newspaper stories. As you probably know, the typical shape of a newspaper story is the so-called ‘inverted triangle’, quite different from the shape of, say, a detective story, in which the answer to ‘who?’ is conventionally reserved for the end of the story. In a typical newspaper story, the questions Who? What? Where? When? and perhaps How? and Why? are conventionally answered in the first paragraph and then fleshed out in the ensuing paragraphs.

You can often see this ‘inverted triangle’ quite literally in the shape of the articles, especially in a front-page story. The headline ‘MUGGERS ATTACK PENSIONER’ may span three columns. The paragraph below the headline, typically in bold, may also span three columns: ‘A pensioner in her seventies was mercilessly kicked and trampled by a gang of youths in broad daylight yesterday. The brutal attack took place in a London street just after the pensioner had collected her pension from the local post office.’ The ensuing paragraphs may occupy two or only one column as further details are given, including the pensioner’s name, her age, a description of her attackers and so on.

This structure is, of course, very convenient for the reader. It allows us to scan the paper for interesting articles very rapidly. If a headline catches our eye, we can quickly run through the first paragraph to see if the article is likely to interest us. If not, we can turn our attention to something else. There is a suggestion, though, that it was not consideration of the reader’s convenience which gave rise to this structure.
If you look at newspapers from the eighteenth or early nineteenth century, you’ll see that they have a pretty conventional narrative structure, quite unlike today’s ‘inverted triangle’. It is said that the modern structure developed out of the American Civil War. By that time, newspapers in the west of the USA were receiving news via Western Union’s telegraph lines from the recently formed news agencies in the east. For the first time, news from the political decision makers, news of powerful people, news of major technological and commercial developments, as well as international news, was appearing in newspapers in the west. During the war, however, the telegraph wires were constantly being cut as each side tried to disrupt the other’s lines of communication. If a news item has a conventional narrative structure, a large chunk can go missing if the wires are cut early in the transmission. The readership’s recently aroused interest in national and international news is disappointed, and the agency runs the risk of losing its press customers because only a part of the news is getting through. As a result, the agencies adopted a new structure which would ensure that at least the bare bones of the story got through, then a bit more detail, then a bit more and so on, thus allowing for the possibility of disruption.
Perfect binding and pulp fiction
That’s a fairly clear example of a technological effect on communication, though I rather suspect it’s not the whole story. There are others we might suspect, though it would be difficult, I think, to find clear evidence. Consider for example the romantic novel, the detective novel and the western novel. The last two of these are pretty dead now, though the first is still very much alive and kicking. They probably became so dominant in popular fiction as the publishers responded to the invention of ‘perfect binding’, i.e. binding books using glue, rather than the time-consuming and expensive procedure of binding books sewn in separate ‘signatures’. Quite suddenly, a cheap method of mass production became available to publishers, who were naturally keen to exploit the extra capacity. They now had the possibility of mass production for a mass public and, of course, were only too happy to develop ‘formulas’ which worked with their new, relatively poor and relatively poorly educated, readership and so the formulaic novel was born. Was the sudden development of pulp fiction due to ‘technology push’, was it due to the capitalist mode of publishing, or was it due to ‘consumer pull’?
The printing press
Could we have mass literacy without mass publications? Could the French Revolution have taken place without the spread of radical ideas through the philosophes’ books and, perhaps more importantly, illegal pamphlets? Would we now have the image we do of the Spanish Inquisition if the supporters of the Reformation had not been so successful in disseminating their anti-Catholic propaganda? Could the Renaissance have taken place without the rapid spread of knowledge? And, of course, the question underlying all of those questions is: could any of those events have taken place without the invention of movable type by Gutenberg in the middle of the fifteenth century?
Stefik (1999) describes how the development of roads and railways in France between 1870 and 1914 turned ‘peasants into Frenchmen’ in the space of forty years. Until after the middle of the nineteenth century, most French citizens’ lives were limited to their immediate vicinity – their village and the occasional trip to the local market town. In winter, most local roads became almost completely impassable. After 1881, when a law was passed to promote the building of rural roads, the villages became interconnected. The peasants thus became less reliant on the local market town for the sale of their produce and therefore less at the mercy of the tradespeople in the towns, who knew that the peasants had probably spent hours transporting their produce to the towns and would have to sell it before it went rotten.

The railways connected the major towns, which were now within much easier reach of the peasants. This, together with the spread of public education, which gave them new skills in reading, writing and arithmetic, enabled them to start shipping and receiving goods on their own account, becoming less dependent on the big merchants. Stefik says that, as a rule of thumb, productivity in any area served by a railway grew tenfold, and industries grew and prospered as France started to function as a unified marketplace. It is, of course, not the mere increase in the possibilities for movement of goods and people which brought about this rapid change, but also, and perhaps more importantly, the increased opportunities for the dissemination of information and ideas – Stefik refers to the transportation system as having transformed France into a ‘marketplace for memes’.
Currently, of course, there is speculation about the way that information and communication technologies (ICTs) may be determining our world, its social structures and economies, as well as individual consciousness. As with any new technology, the debate is frequently couched in alarmist and reactionary terms. In the college where I work, for example – and I suppose it’s not at all atypical – concern is frequently expressed that computers connected to the Internet are being ‘abused’ by students, who, rather than concentrating consistently on the mind-numbing exercises they are required to complete to achieve their certificates in information technology, are often delving into chatrooms, games cheats, SMS messaging services and the like. I imagine that late-fifteenth century monks may have been similarly concerned that novices kept sneaking a look into printed books rather than getting on with their illuminated manuscripts. It seems a little odd that in education the instinctive reaction to what may be the greatest communications revolution since the invention of movable type is to ban it. If lecturers were able to display half the intellectual curiosity and mental plasticity of their students, they’d be examining ways to turn the technology to educational advantage.
I have dealt with information technology’s putative effects on society and the economy in the article on information technology and society. Concern is also expressed that computers may be affecting consciousness (compare with McLuhan’s claims for the effect of print on consciousness (below)). It is suggested that some young people who spend much of their time in chat rooms and other simulated environments may be developing ‘multiple personae’, a sort of fragmented self, or rather a variety of possible selves which are discarded and assumed as appropriate to negotiate the variety of virtual worlds they explore. The fear is that this fragmentation will lead computer users to experience life as a series of disconnected narratives rather than as something fundamentally grounded in shared social experience, with that experience’s attendant shared values and beliefs. I don’t know whether that’s happening or not, and it will surely be some time before studies can be appropriately framed to attempt to determine whether it’s happening. But, even if it is, I don’t see why that’s necessarily a negative development. As my own children were going through school, I was hugely impressed by their ability to watch a soap, listen to a CD and hold a telephone conversation all at the same time and still complete their homework satisfactorily. If kids are to negotiate the cyberblitz more effectively than I manage, then surely this is appropriate experience. Management gurus have been telling us for decades now of the importance of creativity, flexible response, cooperation, systems thinking and problem solving. If kids are growing up in a networked society and if, as predicted by many commentators, it is being networked which will provide the key to success in the emerging economies, then surely kids need to develop the skills of networking, rather than being forced to concentrate separately each on his or her own little assignment.
Those are all examples of a form of communication technology having an effect on the way we communicate. It might seem pretty obvious that there must be an effect. After all, you might think, we wouldn’t have gone ahead and invented and then developed these technologies if they were not going to have some effect. It’s as well to bear in mind, though, that the development and ultimate use of the technologies often takes surprising routes and ends up in quite unexpected places. The major impact of the automobile on our cities and towns, whose centres atrophy while people move into the leafy suburbs and shop at out-of-town hypermarkets, would have been impossible to predict. After all, it’s not all that long ago that the head of IBM predicted that the world would need five or six computers. And it’s also not too long ago that it was envisaged that the telephone would be used mainly for public broadcasting (as indeed it was in Hungary until the 30s). It is also difficult to determine with any certainty whether a new form of communication pre-dates the development of the technology or vice versa. For example, it would seem obvious that the consumer revolution of the nineteenth and twentieth centuries could not have happened without the industrial revolution, but Storey (1997) points out that there is a growing revisionist account which suggests that a consumer revolution was under way before the industrial revolution. So, here, as always, it seems it is as well not to rely too heavily on common sense.
SOCIAL CONSTRUCTION OF TECHNOLOGY
Social construction of technology (also referred to as SCOT) is a theory within the field of Science and Technology Studies (or Technology and society). Advocates of SCOT — that is, social constructivists — argue that technology does not determine human action, but that rather, human action shapes technology. They also argue that the ways in which a technology is used cannot be understood without understanding how that technology is embedded in its social context. SCOT is a response to technological determinism and is sometimes known as technological constructivism.
SCOT draws on work done in the constructivist school of the sociology of scientific knowledge, and its subtopics include actor-network theory (a branch of the sociology of science and technology) and the historical analysis of sociotechnical systems, such as the work of Thomas P. Hughes. Leading adherents of SCOT include Wiebe Bijker and Trevor Pinch.
SCOT holds that those who seek to understand the reasons for acceptance or rejection of a technology should look to the social world. It is not enough, according to SCOT, to explain a technology’s success by saying that it is “the best” — researchers must look at how the criteria for being “the best” are defined and which groups and stakeholders participate in defining them. In particular, they must ask who defines the technical criteria by which success is measured, why technical criteria are defined in this way, and who is included or excluded.
SCOT is not only a theory, but also a methodology: it formalizes the steps and principles to follow when one wants to analyze the causes of technological failures or successes.
LEGACY OF THE STRONG PROGRAMME IN THE SOCIOLOGY OF SCIENCE
At the point of its conception, the SCOT approach was partly motivated by the ideas of the Strong Programme in the Sociology of Science (Bloor 1973). In their seminal article, Pinch and Bijker refer to the Principle of Symmetry as the most influential tenet of the Sociology of Science, which should be applied in historical and sociological investigations of technology as well. It is strongly connected to Bloor’s theory of social causation.
The Principle of Symmetry holds that in explaining the origins of scientific beliefs, that is, assessing the success and failure of models, theories, or experiments, the historian / sociologist should deploy the same kind of explanation in the cases of success as in cases of failure. When investigating beliefs, researchers should be impartial to the (a posteriori attributed) truth or falsehood of those beliefs, and the explanations should be unbiased. The strong programme adopts a position of relativism or neutralism regarding the arguments that social actors put forward for the acceptance/rejection of any technology. All arguments (social, cultural, political, economic, as well as technical) are to be treated equally.
The symmetry principle addresses the problem that the historian is tempted to explain the success of successful theories by referring to their “objective truth” or inherent “technical superiority”, whereas s/he is more likely to put forward sociological explanations (citing political influence or economic reasons) only in the case of failures. For example, having experienced the obvious success of the chain-driven bicycle for decades, it is tempting to attribute its success to its “advanced technology” compared to the “primitiveness” of the Penny Farthing, but if we look closely and symmetrically at their history (as Pinch and Bijker do), we can see that at the beginning bicycles were valued according to quite different standards than nowadays. The early adopters (predominantly young, well-to-do gentlemen) valued the speed, the thrill, and the spectacle of the Penny Farthing, in contrast to the security and stability of the chain-driven Safety Bicycle. Many other social factors (e.g., the contemporary state of urbanism and transport, women’s clothing habits and feminism) influenced and changed the relative valuations of bicycle models.
A weak reading of the Principle of Symmetry would point out that there are often many competing theories or technologies, all of which have the potential to provide slightly different solutions to similar problems. In these cases it is sociological factors that tip the balance between them: that is why we should pay equal attention to them.
A strong, social constructivist reading would add that even the emergence of the questions or problems to be solved is governed by social determinations, so the Principle of Symmetry is applicable even to apparently purely technical issues.
Interpretative Flexibility means that each technological artifact has different meanings and interpretations for various groups. Bijker and Pinch show that the air tire of the bicycle meant a more convenient mode of transportation for some people, whereas it meant technical nuisances, traction problems and ugly aesthetics to others. Sport cyclists were concerned by the speed reduction caused by the air tire.
These alternative interpretations generate different problems to be solved. Should aesthetics, convenience, or speed be prioritized? What is the best tradeoff between traction and speed?
For example, in early personal computing, computer literacy educators argued for easy-to-use “closed architecture” machines, with color displays, that would keep children engaged (and prevent them from meddling). In contrast, hobbyists, who were later supported by those who foresaw business uses for PCs, argued for user-upgradable “open architecture” machines, with monochrome displays. From SCOT’s perspective, these are actually different artifacts rather than competing versions of the same artifact. Stabilization occurred as the result of a compromise, embodied in the design of the 1977 Apple II and the 1981 IBM Personal Computer: an open architecture design equipped with a video system capable of reproducing color graphics as well as monochrome text.
Relevant Social Groups
According to Pinch and Bijker (1987, 30), a relevant social group consists of “all members of a certain social group [who] share the same set of meanings, attached to a specific artifact”; recall that, during the period of interpretive flexibility, SCOT sees the various alternative designs as distinct artifacts in themselves. In SCOT’s initial formulation, this definition is quite vague — deliberately so, because a major part of the analyst’s task lies in identifying and defining the groups who actually participate in the design process.
The most basic relevant groups are the users and the producers of the technological artifact, but most often many subgroups can be delineated – users with different socioeconomic status, competing producers, etc. Sometimes there are relevant groups who are neither users nor producers of the technology – journalists, politicians, civil groups, etc. (Just think of intercontinental ballistic missiles, for example.) The groups can be distinguished based on their shared or diverging interpretations of the technology in question.
Just as technologies have different meanings in different social groups, there are always multiple ways of constructing technologies. A design is only a single point in the large field of technical possibilities, reflecting the interpretations of certain relevant groups.
Problems and Conflicts
The different interpretations often give rise to conflicts between criteria that are hard to resolve technologically (in the case of the bicycle, one such problem was how women could ride the bicycle decently in a skirt), or conflicts between the relevant groups (the “Anti-cyclists” lobbied for the banning of bicycles). Different groups in different societies construct different problems, leading to different designs.
The first stage of the SCOT research methodology is to reconstruct the alternative interpretations of the technology, analyze the problems and conflicts these interpretations give rise to, and connect them to the design features of the technological artifacts. The relations between groups, problems, and designs can be visualized in diagrams.
Over time, as technologies are developed, the interpretative and design flexibility collapse through closure mechanisms. Two examples of closure mechanisms:
1. Rhetorical Closure: When social groups see the problem as being solved, the need for alternative designs diminishes. This is often the result of advertising.
2. Redefinition of the Problem: A design at the center of conflict can be stabilized by inventing a new problem which is solved by that very design. The aesthetic and technical problems of the air tire diminished as the technology advanced to the stage where air-tire bikes started to win the bike races. Tires were still considered cumbersome and ugly, but they provided a solution to the “speed problem”, and this overrode previous concerns.
Closure is not permanent. New social groups may form and reintroduce interpretative flexibility, causing a new round of debate or conflict about a technology. (For instance, in the 1890s automobiles were seen as the “green” alternative to horse-powered vehicles, a cleaner, environmentally friendly technology; by the 1960s, new social groups had introduced new interpretations of the environmental effects of the automobile.)
The second stage of the SCOT methodology is to show how closure is achieved.
Relating the content of the technological artifact to the wider sociopolitical milieu
This is the third stage of the SCOT methodology, but the seminal article of Pinch and Bijker does not proceed to this stage. Many other historians and sociologists of technology nevertheless do. For example, in his book “The Closed World: Computers and the Politics of Discourse in Cold War America”, Paul N. Edwards shows the strong relations between the political discourse of the Cold War and the computer designs of that era.
In 1993, Langdon Winner published an influential critique of SCOT entitled “Upon Opening the Black Box and Finding it Empty: Social Constructivism and the Philosophy of Technology.” In it, he raises a few problems with social constructivism:
1. It explains how technologies arise, but ignores the effects of the technology after the fact.
2. It is a social construction of knowledge in itself, subject to the same limitations as it postulates (“Who says what are relevant social groups and social interests?”)
3. It disregards dynamics which are not due to its “preferred conceptual strawman: technological determinism.”
Other critics include Stewart Russell, with his letter in the journal “Social Studies of Science” titled “The Social Construction of Artifacts: A Response to Pinch and Bijker”.
Sources: Wikipedia, Mick Underwood, STS Wiki