Introduction: The Myth of a Big-data-based Revolution

Today’s corporate and commercial literature on ‘big data’ constructs a myth of ‘newness’ and ‘nowness’ that turns the present into a time of massive rupture – a big-data-based epoch representing ‘a revolution that will transform how we live, work and think.’1 In the business context, data-centred decision-making is portrayed as a dramatic change compared to accustomed managerial practices.2 Recent innovations in technological abilities to capture, collect, and analyse increasing amounts of data are presented as enabling conditions for a decision-making process that no longer relies on human intuition, but solely on data.3 Correspondingly, more value is currently assigned to data than ever before – as evidenced by common claims about data as ‘the oil of the information economy’4 or ‘the world’s new natural resource’ (Figure 1).5 According to fashionable claims, it has taken under a decade for managerial control to shift to technologies for data collection, storage, and analysis. In this article, I counter these persistent claims of a big-data ‘revolution’ by tracing the technological and cultural origins of data-based decision-making back to the 1970s, when, I argue, database technology shaped and reinforced a data-centric mindset in American management.

Today, the business vogue is to base every decision on data, often directed by ideological motives suggesting that data reflects objective truth.6 In existing critical literature on (big) data and algorithms in the field of media and communication studies, ever more attention has been paid to the ideological beliefs underlying and supporting data practices in the business world. Social scientists Boyd and Crawford were the first to point out that big data’s rationale rests on a set of false beliefs about big data bringing us closer to reality, surrounding big data with an ‘aura of truth, objectivity, and accuracy.’7 According to media scholar Van Dijck, this ideology, which she refers to as ‘dataism’, is rooted in problematic ontological and epistemological claims8 – such as, for example, the false claims that data is synonymous with reality; that data is a new natural resource; and that data contains its economic value within itself, as patterns of relations waiting to be mined.

Figure 1. Cover image of IBM’s in-house magazine Inspire Beyond Today’s Technology (published 2014).

In a time when we collect and store more data than ever before, advocates of a big-data revolution claim that through this data ‘we get a more complete sense of reality.’9 However, as Boyd and Crawford point out, it is not the size of a dataset that matters, but our acknowledgement of the fact that data, regardless of quantity, always constitutes a sample – not the population it is selected from. Understanding this sample status means questioning the mechanisms of sampling and the biases inherent in sample collection and selection. In addition, claims that big data brings us closer to reality are reinforced by understandings of data as a natural resource. Media scholars Gitelman and Jackson argue that these understandings are informed by ‘an unnoticed assumption that data are transparent, that information is self-evident, the fundamental stuff of truth itself.’10 Through this move of naturalising data, data’s basis in preceding processes of collection and storage is concealed, as are the human agencies essential to it. Van Dijck also exposes the falsity of the claim that patterns exist as natural phenomena within the data, waiting to be extracted by automatic methods that enable, as media scholars Schäfer and van Es describe, ‘a seemingly accurate and unbiased assessment of reality.’11 As other media and communication scholars argue as well, the idea that data ‘possesses’ value obscures the fact that knowledge-making involves the acquisition of value and meaning through technological processes and human interpretation, which are never neutral or objective.12

The commercial and corporate discourses associating big data with ideological values of truth, facticity, objectivity and accuracy deserve our critical attention, as they develop a powerful set of business rationalities.13 These in turn facilitate the unquestioned acceptance of big data in the social world and, as Couldry and Yu argue, protect the corporate world from ethical questioning.14 Such critical work from the field of media and communication studies does a great job demonstrating that the big data ‘revolution’ is as much technological as it is cultural and ideological. Importantly, however, all critical interrogations of big data’s ideological underpinnings rest on the unnoticed assumption that data-based practices and thinking are of recent origin, emerging in the wake of the alleged data revolution in the last decade. By and large, the claim of the revolution itself remains unquestioned, and the historicity of data-based management understudied – despite the fact that several scholars have recently stressed the importance of historical lines of inquiry into big data.15

I argue, however, that big data and its associated ideology and reality claims are not new forces only recently unleashed upon an unprepared corporate world. Schäfer and van Es suggest that if we really want to critically understand the big data reality that is currently developing, and what it means for society, it is crucial that we ‘debunk the exceptionalism inherent in the “Big Data” paradigm.’16 This article does so by puncturing the aura of ‘newness’ and ‘nowness’ surrounding big data, revealing the datafication of business as not being current, recent, or imminent, but as fundamentally historical. It traces big data’s technological and cultural origins back to the 1970s and 1980s, arguing that innovations in databases – database management systems and the relational database model – provided a technical condition for the emergence of a data-based mindset in American business. This mindset is manifested in four interlinked concepts of data – data as asset, data as raw, data as reality, and data as relatable – that, I assert, continue to hold value and meaning today. Before I specify how this mindset emerged, I will first use the following section to introduce the abovementioned technological innovations in databases, discuss the academic relevance of my approach by positioning it in relation to existing literature on the history of databases and data processing, and introduce the historical source materials.

The History of Data Processing and Control

The development and use of the database management system (DBMS) in the 1970s constitutes a major shift from the corporate data processing of the 1960s. DBMSs captured the imagination of managerial America. As computer historians Bergin and Haigh argue, ‘[DBMSs] existed both as a tangible technology (…) and as the symbol of a movement to raise the status of computing within the managerial world and establish the idea of data as a corporate resource.’17 In brief, a DBMS is a piece of software that arranges data by providing a predefined structure in which data, and relations between data elements, are organised in order for the data to be manipulated by queries of users (and application programs).18 As media scholar Manovich had already observed in the late 1990s, DBMSs encourage a perception of databases as not just a randomly stored collection of data but ‘as a structured collection of data.’19 The first commercial DBMS packages were introduced to the market in the early 1970s. Examples are IBM’s IMS (1971) and Cullinane’s IDMS (1971).20 In this same decade, DBMSs developed into the most important commercial software systems. Their use increased exponentially, especially in large companies, to eventually become fundamental to almost all business information processing.21

Around the same time, the mathematician Edgar F. Codd published his highly influential article “A Relational Model of Data for Large Shared Data Banks” (1970), which introduced the relational model of data as a particular approach to organising, and accessing, a database. At the time, however, this model existed only in theory and within the imagination of data professionals and researchers. It provided a simpler view of structuring data than the hierarchical and networked data structures implemented by early DBMSs such as IMS. The relational model fashioned data in a manner that privileged usability over complexity, enacting data abstraction: both designers and users no longer needed to know the physical storage mechanisms employed by a computer in order to query databases.22 This allowed ordinary, non-programming users more freedom in relating stored items of data to produce meaningful information, whilst reducing their dependence upon programmers.23 The relational model did not develop into a tangible technology until the 1980s, eventually becoming an industry standard in the 1990s. However, the database theory behind it had already gained a growing following in the 1970s, including amongst professionals in the data processing industry (who were convinced that the model showed great promise for managerial decision-making). I would therefore argue that the relational model enjoyed a similar symbolic status in the managerial world to the actual DBMS technologies.
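
To make this abstraction concrete, consider the following minimal sketch in modern Python. The table, field names, and helper function are hypothetical illustrations, not Codd’s notation or any DBMS’s actual interface: a relation is presented to the user as a plain two-dimensional table, and a query names only logical attributes, never storage locations.

```python
# A relation in Codd's sense: a two-dimensional table of rows and
# columns. Here a table is modelled as a list of dictionaries; the
# user never sees how or where the rows are physically stored.
employees = [
    {"emp_id": 1, "name": "Baker", "dept": "Sales"},
    {"emp_id": 2, "name": "Osei", "dept": "Accounting"},
]

def select(table, predicate):
    """Return the rows that satisfy a logical condition on attributes."""
    return [row for row in table if predicate(row)]

# The query is stated purely in terms of attributes ('dept'), not
# record addresses or access paths: this is Codd's data independence.
print(select(employees, lambda row: row["dept"] == "Sales"))
# -> [{'emp_id': 1, 'name': 'Baker', 'dept': 'Sales'}]
```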

Yet, surprisingly, histories of technology, computing and media seem to neglect database developments in these two decades. Only a handful of studies have appeared in the last decade that focus on the history of databases. The emphasis here is mainly on the origins of databases and the database management system, and the companies and people involved in the development, production and sale of DBMS products during the 1960s–1980s.24 Others have discussed the historical impact of the relational database model on the database research community and commercial and government bureaucracies.25 Some have focused on the relationship between changes in the usability of database technology and transformations in ways of thinking about ‘database literacy’ and/or the role of the database in society at large.26 However, virtually nothing is known about the development of data(bases), and data-based modes of thought, as means of control over managerial decision-making.

There is a select set of historical literature, however, that has studied how innovations in data collection, storage and processing technology in the late nineteenth century and early twentieth century co-evolved with increases in (bureaucratic) control. In his influential book The Control Revolution (1986), Beniger argues that the so-called ‘information society’ originated as a result of developments that started more than a century ago in the speed and volume of information processing – rather than later developments in computing technologies.27 Beniger concludes that bureaucratic control could only be imposed on American offices because of numerous innovations in mechanical and early electric information technologies, starting with Herman Hollerith’s patenting of the electric punch-card tabulator in 1889.28 These technologies, Beniger shows, enabled a shift from the human pace of administration by clerical workers to processing administrative data at industrial scale and speed.29 Accordingly, they provided enabling conditions for the processing of large amounts of data in the tabulating and life insurance industries, and the further re-organisation of such American offices through Taylorist principles of efficiency and system, as historian Yates has argued.30 In the 1950s, computers gradually replaced punch-card tabulators in the context of ‘electronic data processing’, or what computing historian Haigh describes as ‘administrative computing.’31 Computers enabled significant increases in the speed and volume of data processing for administrative applications, usually in payroll and accounting, yet continued the project of office automation that had already started fifty years earlier. In this sense, the application of computers for the automation of clerical routines in the 1950s was, Haigh concludes, ‘evolutionary’ rather than ‘revolutionary.’32 Computers sustained, not initiated, the project of bureaucratic control – as also argued by Cortada, one of the most influential historians of computing.33

The works of Beniger, Yates, Haigh and Cortada have convincingly shown that it was not computers but early mechanical and electric information technologies (in particular technologically induced increases in the speed and scale of processing) that provided key enabling conditions for a revolution in controlling bureaucratic processes (the automation of the office). However, these works do not provide an answer to the question of how we came to value corporate data in and of itself, as a vital resource from which to extract business intelligence to inform managerial decision-making and strengthen managerial control (the automation of management). Here it is key to understand that control over decision-making is not facilitated by the speed at which data is processed, but by the extent to which data provides a base for generating meaningful, and therefore valuable, business insights. In other words, advances in the speed and scale of data processing do not automatically explain how the cultural meaning of data in business shifted from that ‘stuff’ automatically processed by computers to ‘a significant corporate asset, a vital economic input’ (as Mayer-Schönberger and Cukier write in their highly influential book on the big-data revolution).34

To explain how this cultural meaning developed, this article traces the technological and cultural origins of data-based decision-making to the period of the 1970s and early 1980s, when, I argue, managerial control gradually shifted to database technology. Particular attention will be directed to how developments in database technology – DBMSs and the relational model – co-evolved with changes at the level of culture, thus both shaping and reinforcing the development of a data-centric mindset indicating transformations in the meaning and value of data for business. Importantly, as computing historian Haigh argues, this shift towards a managerial interpretation of the computer had already started in the 1960s. In this decade, a certain class of computer people known as the ‘systems men’ began to promote the computer as a managerial tool for producing information, rather than just a machine for speeding up administrative processes.35

This redefinition of the corporate meaning of the computer, Haigh argues, also involved a reinterpretation of data’s value to business, with a view of data developing as ‘a vital resource that powers managerial decision making and corporate success.’36 Haigh’s work has done a great job of showing how computer people re-imagined the meaning of the computer for business. This article will demonstrate that they continued to project their desires and needs onto the machine in the 1970s. However, it will also argue that database technology provided a key enabling condition for them to re-imagine the meaning of the computer – something Haigh’s focus on the role of the systems men does not allow for. Moreover, Haigh’s narrative of the computer’s redefinition stops at the moment business starts employing DBMSs, which is why, I argue, it misses some of the key conceptual developments related to corporate interpretations of the database that only took shape in the 1970s, when innovations in databases provided a new technological ground for rethinking the corporate value of data.

To study the above developments, I will draw exclusively on historical source material from the trade magazine Datamation, which was the leading managerially orientated data-processing publication of the 1960s and 1970s, when the data-processing market was exploding; at the time, there was little competition from other magazines.37 Datamation was founded as Research and Engineering (“The magazine of Datamation”) in 1957 – at a time when corporate data processing was expanding enormously in American organisations.38 It was published in print form by Thompson Publications of Chicago until 1998, and since then, it has continued as a web publication. Datamation was not aimed at computer scientists, but primarily at business personnel working in the entire field of data processing. It covered all aspects of business automation, including managerial use. Most of the articles were written by people from within the data-processing community,39 employing accessible language and emphasising practical relevance instead of theoretical depth.40

DBMSs, the Relational Model, and the Birth of the Data-based Mindset

Both DBMSs and the theory of the relational model were extensively covered in Datamation, feeding a managerial hype around databases in the 1970s.41 Reporting in Datamation in this decade was almost obsessive in its attempts to figure out what a database is, and what it can mean for managerial decision-making in particular. Importantly, for data-processing professionals and people from the database industry, both database innovations provided a technological ground for re-thinking the value and meaning of data processing for business. This re-interpretation was thus based in databases, so to speak, and, as I point out, manifested within the so-called ‘data base approach’ as specified in Datamation.42 For this study, it is important to emphasise that such data-based discourse, as it appeared in Datamation, both shaped and reinforced concepts of data that continue to have value and meaning in the current ‘era’ of big data, despite the fact that DBMSs in this period ‘disappointed as a managerial panacea’ – as computing historian Haigh has pointed out.43

In the article “What Data Base Isn’t” (1977), data-processing manager Appleton explained the importance of the database for management by contrasting it with the disadvantages of the so-called ‘applications approach’ – the state of the art in administrative data-processing systems in the 1960s. In this decade, more and more American corporations, situated in various industries (for example, manufacturing, retail, transportation, finance, insurance), established a data-processing department (consisting of systems analysts and programmers) that had responsibility for the development of application systems.44 As the article discusses, the applications approach employed in this department was characterised by the programming of individual computer applications that satisfied a specific output requirement (for example, payroll, reporting). In the development of these data-processing applications, an ‘information model’ formed the starting point. This meant that data input, processing, and output were coordinated to process information as efficiently as possible in the service of satisfying a specific information requirement now embedded in the applications program. This commitment to a single, and predetermined, need did not match up with the everyday reality of organisational management – or so the supporters of the database approach argued. For example, the article “DSS: An Executive Mind-Support System” (1979) discussed how decision-making was a dynamic process in which information needs changed continuously:

Managers cannot specify in advance what they want from programmers and model builders. Decision-making and planning are often exploratory. Information needs and methods of analysis evolve as the decision-maker and his or her staff learn more about the problem.45

The ‘applications approach’, database advocates claimed, was ill-suited for developing computers as natural extensions of managers’ dynamic approaches for exploring problems. What managers needed, they argued, were data systems able to produce information dynamically, as needed by the user.

The ‘database approach’ was presented as the solution to the managerial shortcomings of the ‘applications approach.’ What made this approach fundamentally different, Appleton argued, is that it replaced the ‘information model’ with a ‘data model.’46 Appleton pointed out that central to the design of systems based on this model was not a pre-defined information need, and thus the coordination of data input and output, but the question of how ‘to capture data at its source, regardless of whether that source will ever see the data in the form of output’47 – or, in other words: through a data model, data collection and storage could be approached completely independently of the output, the information product, and the ultimate use of such information for satisfying a managerial decision-making need. Importantly, Appleton continuously stressed that a successful implementation of the database approach required more than just technology. What was needed, he argued, was ‘a complete psychological reorientation to computerization.’48 So, what Appleton and other advocates of the database approach actively promoted was a view of the database as not just a technology, but as a completely new managerial philosophy and mindset.

For many contributors to Datamation, emerging database technologies and concepts in the 1970s represented a shift away from the idea that data processing was all about administrative efficiency. In the following sections, I will discuss this reorientation as a conceptual development and demonstrate how innovations in database technology conditioned the emergence of a data-based mindset that manifested in the following four interlinked concepts of data: (1) data as asset, (2) data as raw, (3) data as reality, and (4) data as relatable. I argue that this interlinked set of data-based concepts serves as the foundation for a business culture that takes data as the ultimate source of managerial control.

Data as Asset

The idea of using DBMSs gained widespread acceptance across corporate America in the 1970s, and the market for DBMS products grew exponentially.49 By 1974, the main vendors of DBMSs had more than 1,400 installations running, and by 1979, this figure was already up to 5,841.50 As DBMS products settled in, more and more American organisations began to view data as a key business asset that could be leveraged to improve decision-making in financial and operational management. The idea of data as asset, illustrated by this excerpt from the article “Millionaire Machine?” (1981), manifested in a growing belief that data fulfilled an indispensable role at the heart of organisational management:

Information processing systems have become so critical to the operation of an organization that chaos would result if these firms tried to operate for even a few days without the information provided by data systems.51

By and large, Datamation articles attested to a growing awareness that the role of data in an organisation, and thereby the function of its data-processing department and its associated job roles, was radically changing compared to the 1950s and 1960s. This is evidenced by article titles such as “The Changing DP Organization” (1975) and “The changing role of the MIS executive” (1979).52 In such articles, advocates of the database approach developed a view of data processing ‘as an integral plan of a business rather than as a back-office clerical operation.’53 Importantly, database supporters connected data processing to an objective very different to that of speeding up business administrative tasks. As clearly expressed in the following passage: ‘Data Processing exists to provide accurate and timely information to assist the operation of a business.’54

An organisation that treats data as an asset requires that its data are managed carefully to maximise value to the business. This idea is now considered common sense, but it was not yet so in the 1970s. The idea of data as a resource was still in its infancy, and the concept of data management was yet to be ‘invented’ and ‘sold’ to management executives. Datamation played an important role in developing, explaining, and marketing the concept, as demonstrated in this excerpt from the article “DP’s Role Is Changing” (1978):

Only one common element runs through this management maze; it is the data resource, the stuff that computers compute, communication devices communicate, word processors process, and humans use or misuse. It is to the management of data that the manager must turn his attention; the alternative may be to see his influence eroded by other data managers in the organization.55

Obviously, as the name suggests, database management systems provided a technical infrastructure for managing and structuring data. Importantly, however, in the mid- to late 1970s, data management became a managerial concept in companies rather than a purely technological one. The term was increasingly discussed as a consistent methodology for ensuring the deployment of timely, trusted and accurate data across an organisation – that is, ‘assuring that all elements of the organization are provided with the most effective and economical means to gather, process, and use the firm’s data resources’ (as stated in the same article).56

Accordingly, there was a growing body of opinion claiming that responsibility for data management – the effective and cost-efficient unlocking of the data resource in all management departments of an organisation – had to be transferred from the data-processing department to a new domain of management often referred to as ‘information resources management’ (IRM) and to a new set of job roles, including database administrators and MIS managers.57 Indicative of the development of data into a core business asset is the amount of managerial power ascribed to this new position of the MIS manager, as illustrated by this quotation from the article “Can Today’s MIS Manager Make the Transition” (1978):

the rank [of the MIS manager] will be that of vice president and the information he or she controls will be viewed as one of the organization’s most valuable resources. Such an executive, therefore, will sit ‘very close to the throne.’58

Managerial control over the data resource, for that matter, was increasingly discussed as equivalent to an increase in power over the organisation as a whole.

The resource value currently assigned to data, often expressed by an equation of data to a natural resource such as oil, has its origins in the 1970s. The determining condition for data-centric decision-making, now as in the past fifty years, has been to accept that data exists to provide accurate and timely information to assist the operation of a business. In the 1970s, this idea constituted a shift away from the notion that data processing with computers was all about speeding up the processing of increasingly large volumes of data.

Data as Raw

Treating data as a core business asset also attests to a fundamental shift away from data as the stuff being processed in the automation of clerical routines, towards a perception of data as valuable in and of itself, in terms of its potential worth as a raw material for producing information that can feed into managerial decision-making. This also means that data, not information, is increasingly portrayed as the fundamental resource around which an entire organisation revolves. Simultaneously, a perception of data emerges as something that demands corporate attention and finances. This is nicely illustrated in this excerpt from the article “DP’s Role Is Changing” (1978):

It is worth repeating that there is one common thread to the coalescence of these diverse technologies and diverse functions—the data resource. It is data that is being collected, processed, published, filed, transmitted, used and misused. (…) It is data that is demanding more and more of the company’s resource dollar for capital investment. It is data, the raw material (…).59

Database technology, I argue, provided a technical condition for a concept of data as raw to be developed as distinct from information. As this excerpt illustrates, ‘data bases do not store information as information. They store data which can be used to generate information.’60

It is important to remember that DBMSs did not randomly store data, but structured it so as to enable its modification and manipulation by users (or other programs). The article “A Brief History of Data Base Management” (1974) is one of the first publications in Datamation in which a changing concept of the relationship between data and information is explicitly linked to the role of the DBMS:

With the growth and acceptance of data base management systems (DBMS), the computer industry has at long last given itself ‘information system’ potential. Information differs from data in that information ‘participates’ in the corporation – answers questions, solves problems.61

The data control affordances of DBMSs enabled a shift away from understanding databases simply as storage technologies, and data as information stored in digital format, towards understanding data as a raw material – a feedstock for the production of information. Accordingly, advocates of the database approach no longer considered data on a par with information, which was now viewed as ‘[raw] data made useful.’62

Importantly, corporate and commercial rhetoric around database management in the 1970s also reinforced a concept of information as something latently existing within databases, concealed within increasing amounts of raw data collected and stored. The 1980 advertisement for the ‘teletype’, as shown in Figure 2, provides an example of such rhetoric. The teletype, a possible user interface for a DBMS, is promoted by this advertisement as the point ‘where data becomes information.’ The ad here obviously speaks in hyperbolic fashion, as it was the DBMS rather than the teletype that possessed such transformative power. However, it is most striking how the ad connects the ‘big bang’ of an explosion of raw data within American business to an idea of teletype technology as ‘[giving] you the useful information hidden in all that data’ (my emphasis). The emergence of a concept of data as raw cannot be seen as separate from significant increases in the amounts of data collected and stored, which made all the more plausible the idea that hidden somewhere in this data was information, waiting to be uncovered by one’s company and one’s DBMS.

In the 1970s, the notion of data as raw material, unlike today, was not valorised any further through associations with natural resources, such as oil. Nonetheless, accepting the idea of data as raw forms a key enabling condition for these kinds of associations to develop. That is to say, before being able to naturalise the resource value of data, people had to accept the idea that data formed an information resource in the first place. Advocates of the database approach facilitated such acceptance by conceptualising data as distinct from information, and by attributing large amounts of corporate and economic value to data independently of information – value specifically as a building material, and one without which information no longer had (or seemed to have) ground for existence.

Figure 2. Advertisement for Teletype Terminal from the Teletype Corporation (published in Datamation, April 1980).

Data as Reality

In a 1979 advertisement, Cullinane’s IDMS is presented as the DBMS that ‘lets you put your real world into your computer’ (Figure 3). Throughout the 1970s and into the 1980s, advertisements and articles connected DBMSs to the design of a data representation (‘data model’) that accurately reflected the real-world data complexity of a company.63 Such a notion of modelling real-world processes in computer data indicates a shift away from a concept of data as representing information in digital form towards a concept of data as representing reality. DBMSs, as discussed, required a data model as input because they stored data as a structured and organised collection of data items. The data modelling requirement of the DBMS, in close connection with data’s new status as a core business asset, provided the fundamental conditions for a concept of data as reality to come to fruition.

A data model is a specification determining the structure of data in a system at the logical level. It organises elements of data and provides the technical standards for relating those elements to one another, and to the actual properties of the real-world entities they represent. In my introduction to the history of data processing and control, I alluded to the fact that in the early 1970s, approaches to database structure could be grouped into three main categories: hierarchical, networked and relational. Only the first two were actually implemented in DBMSs at the time, as the relational model only existed as an idea and as part of Codd’s theory. Each of the aforementioned approaches enabled database designers to express relationships between real-world entities in a data model. Regardless of the structure employed, the data modelling requirements of DBMSs urged a complete rethinking of data storage as the centre of database design. Accordingly, proponents of the database philosophy promoted the database, not the system, as key to the design of database management systems. That is to say, advocates of the database approach encouraged a design philosophy in which data storage (input) was approached as a problem completely independent of the ultimate output and use of the data. In other words, as Appleton argued, ‘data base development concentrates first on identifying what “base data” should be stored on the computer.’64

Within the applications approach, which database approach proponents opposed, data storage within computer databases was typically treated as file conversion – that is, representing existing business information in a digital format so as to enable its automatic processing by computer. The article “Designing the Data Base” (1978) criticised such conventional methods in which database design was approached as a ‘file conversion problem.’65 It argued that information of actual relevance to management could only be generated when design started with ‘determining the data organization and processing requirements of an enterprise’ and focused on the ‘accurate reflection of these requirements in (…) schemas’ that could be converted to a data format within the DBMS environment.66 The problem of designing the data model of a DBMS, for that matter, developed as a problem of representing the reality of the organisational workflow (the logical interrelationships within and between departments) in data – that is, the problem of ‘reflecting (mapping) real world processes with the maximum possible exactness in our computer data files’, as the article “Anything New in Data Base Technology?” (1976) states.67

Figure 3. Advertisement for the Database Management System IDMS from Cullinane (published in Datamation, January 1979).

Such a mapping of reality, however, was conditioned by the particular data model of the DBMS. Hierarchical, networked, and relational structures allowed designers to express very different models of real-world entities and their relationships. In brief, in both networked and hierarchical models, data was stored as records connected to one another through links encoded in the design.68 The major limitation of these models for mapping reality was that modelling on the logical level (for example, describing what data the database stores, and what relationships exist among those data) was dependent on the physical level (the lowest level of abstraction that describes how a system actually stores data). In layman’s terms, modelling the data reality of an organisation was conditioned, and affected, by underlying technical structures of data storage on physical storage mechanisms.
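
A minimal sketch of this dependence, using hypothetical Python records rather than the actual storage formats of IMS or any networked DBMS: the relationship between two entities exists only as a link encoded in the stored record, and a query must navigate that link.

```python
# Networked/hierarchical style: the connection between an order and
# its customer is an explicit link written into the stored record.
customers = {"C01": {"name": "Acme Corp."}}
orders = [{"order_id": 7, "customer_link": "C01", "amount": 1200}]

# A program must know the link and traverse it to answer even a
# simple question; if the storage design changes, this navigation
# code changes with it -- the logical model depends on the physical.
for order in orders:
    customer = customers[order["customer_link"]]
    print(order["order_id"], customer["name"], order["amount"])
```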

Within the relational view, Codd proposed a model of structuring data on the logical level, or, as he put it, ‘describing data with its natural structure only.’69 This approach avoided any concern for machine data storage complexities at the physical level, something typically characterised as data independence. Equally important was that it only employed one data type, and one very familiar to corporate culture: the relation, which referred to a simple two-dimensional table consisting of rows and columns. The relational model enabled the possibility of representing an organisation’s data as a collection of tables and representing actual connections as potential relationships between tables. By following three logical steps in defining a database through a process called normalisation, the designer could precisely describe the important relations and relationships in the database, whilst avoiding redundancy and securing the integrity of the data.70 Normalisation enabled designers to keep the data representation up to speed with the continuously changing data reality of the organisation – that is, it enabled organisations to keep ‘stored data and reality synchronized over time’ (as explained in the article “Synchronizing Data with Reality” from 1981).71
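
A compressed illustration of what normalisation accomplishes, again in hypothetical Python tables rather than Codd’s formal notation: a table that repeats a departmental fact in every employee row is split into two relations, so that each fact is stored once and the connection survives as a shared attribute.

```python
# Unnormalised: the department's location is repeated in every row,
# so stored data and reality can fall out of sync -- update one row
# and not the other, and the database contradicts itself.
unnormalised = [
    {"emp": "Baker", "dept": "Sales", "dept_city": "Chicago"},
    {"emp": "Osei", "dept": "Sales", "dept_city": "Chicago"},
]

# Normalised: two relations, one per kind of real-world entity. The
# shared attribute 'dept' preserves the relationship without
# redundancy, making the representation easier to keep synchronised
# with a changing organisational reality.
employees = [
    {"emp": "Baker", "dept": "Sales"},
    {"emp": "Osei", "dept": "Sales"},
]
departments = [{"dept": "Sales", "dept_city": "Chicago"}]
```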

Advocates of the database approach preferred the relational model to the networked and hierarchical models, because the former facilitated the design of data models that more clearly and precisely described organisational reality – data models, in other words, that fully reflected business needs, and not technology. As early as the mid-1970s, when Codd’s model still existed only on paper, the relational model gained a growing following amongst advocates of the database approach. In the article “Anything New in Data Base Technology” (1976), the relational model was discussed as a concept for database design that could ‘substantially change our viewing of data as mapping of reality’ because, as it stated, ‘this theory itself is the first serious attempt to reflect complexity of the real systems whose data is being transferred into computer media.’72 The fact that none of the DBMSs in this period implemented the relational model is of little importance here. What is important to note, however, is that Datamation’s corporate and commercial discourse anticipated the potential of Codd’s theory for database development and use, thus providing a key condition for a concept of data as reality to gain ground.

In today’s big-data discourse, the idea that big data brings us closer to reality is dominant. The roots of this idea – and even more broadly, the idea that data can at all represent something other than information in digital form – lie in the 1970s, particularly in the technological affordances of the relational model for creating a data model reflecting real-world relationships. The idea that data can reflect actual relationships and processes is an essential condition for viewing data as (almost) similar to reality. In the 1970s, the idea of a data reality was not yet linked to ideological values such as facticity, objectivity or truthfulness. However, the current idea that data reflects objective truth, presenting us with a neutral perspective on reality and leading to better decisions and higher profits, can only have materialised on the basis of accepting the idea that data and reality are (almost) on a par.

Data as Relatable

Notably, it was not the simplicity of the data model itself that was recognised as the real strength of the relational view, but rather the fact that the data represented could be subjected to relatively simple, yet extremely powerful, operations described by relational algebra.73 What is important to understand here is that the relational model not only provided a means of describing data on the logical level, but, as Codd argues, also provided ‘a basis for a high level data language’ for accessing and querying the data.74 The query language could only be developed because of the relational model of data representation. System R, developed by IBM and marketed by software company Cullinet, was one of the first serious implementations of the relational database approach in the early 1980s.75 A 1983 advertisement promoting System R (Figure 4) connected the DBMS’s affordances for relating data with ease of use:

This is how it works: As a true relational system it allows you to select data from separate and unrelated files; join it, then project it in ways that make it possible for you to handle […] unstructured end-user requests for information quickly, directly and intelligently.

Figure 4. Advertisement for the Relational Database Management System R from Cullinet (published in Datamation, May 1983).

The relational model as implemented in a DBMS such as System R provided the technical condition for a concept of data as relatable – that is, capable of being meaningfully related or connected – to materialise and take root. As previously discussed, information was treated by database approach proponents as a kind of economic value generated from raw data – something brought into existence from the data, so to speak. Accordingly, the relational model encouraged these advocates to develop a view of information as identified relations, or patterns of meaningful relationships, existing within the data, waiting to be discovered by users of the relational model’s access facility.

As mentioned previously, Codd had developed the relational model on the basis of a usability rationale, to facilitate both design and access of databases, making these processes more natural and understandable to regular users and management. The hallmark of Codd’s approach was the concept of data independence, which also conditioned ‘ease of use’ at the query level of the system. On the user level, it referred to data’s independence from dedicated and pre-programmed applications, enabling data to be queried and retrieved for ad hoc information needs.76 The relational model shielded users querying databases from any kind of processes for describing data storage structures at the physical level. This meant that users did not need to know the exact storage locations for the data they sought to find. Instead, they could directly state the information they wanted to retrieve from the database through declarative statements using Boolean operators.

Compared to networked and hierarchical models, then, relational models allowed users a greater degree of control over the formation of relationships between data items.77 Hierarchical and networked models employed explicitly encoded links between data at the logical level, which were specified by the programmers who modelled the database. The relational model did not encode links between data items, but created unique mathematical identifiers for each file to referentially relate common data entities, enabling the identification of relationships at the user level of the system as a form of navigating between logically related tables through the comparison of corresponding attributes.78 In the article “Managing the Very Large Database” (1981), this affordance of relatability was described in terms of the following design axiom: ‘[I]f real relationships exist among stored data, then identification can occur, and (…) retrieval can occur as a function of these identified relationships.’79 The relational model, in this sense, provided the enabling condition for the identification of relationships between data items, and for a view of information as related items of data.

This in turn transferred the responsibility of establishing relationships ‘from the person designing them to the person querying them.’80 Relationships between tables could be produced dynamically, as needed by the user – as opposed to data associations being hardwired into the database at the logical level. Networked and hierarchical models, in contrast, were better suited for pre-programmed applications in which database search-and-retrieval protocols were predefined and dedicated to satisfy pre-determined information needs. The relational model instead permitted managers (at least in theory) to collate, and hence produce new, tables in different ways, for different objectives. This flexibility materialised in the high degree of precision and expression of the relational calculus employed in the accompanying database language, which invited users to manipulate and relate tables via relatively simple operators (SELECT; JOIN; PROJECT).
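
The three operators named above can be sketched in a few lines of Python. The helper functions are simplified stand-ins for the relational algebra, and the tables are invented for illustration – this is not System R’s actual query language, only a sketch of the operations it exposed to users.

```python
employees = [{"emp": "Baker", "dept": "Sales"},
             {"emp": "Osei", "dept": "Accounting"}]
salaries = [{"emp": "Baker", "salary": 21000},
            {"emp": "Osei", "salary": 24000}]

def select(table, predicate):
    # SELECT: keep only the rows satisfying a condition.
    return [r for r in table if predicate(r)]

def join(left, right, key):
    # JOIN: relate two tables through a shared attribute, compared
    # at query time rather than stored as an encoded link.
    return [{**l, **r} for l in left for r in right if l[key] == r[key]]

def project(table, columns):
    # PROJECT: keep only the named columns of each row.
    return [{c: r[c] for c in columns} for r in table]

# A new table, produced dynamically as needed by the user:
report = project(
    select(join(employees, salaries, "emp"),
           lambda r: r["dept"] == "Sales"),
    ["emp", "salary"],
)
print(report)  # -> [{'emp': 'Baker', 'salary': 21000}]
```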

It was especially this ability to re-associate data into information on the fly that advocates of the database approach perceived as a major advantage of the relational model for managerial decision-making:

The significance of being able to do dynamic relational accessing cannot be overemphasized. Since it permits data from any or all databases in the MIS data bank to be pulled out, collated, sifted, and displayed, management can get interactive answers to ‘what if’ and ‘what is’ questions on anything in the organization that is reflected in the data bank.81

Moreover, the ability to relate data into information was perceived as a natural fit with the information needs of managers because, as claimed in the article “What Data Base Isn’t” (1977):

the information required to satisfy 80–90% of management’s decision-making needs is, in most companies, developed from combining, arranging, analyzing, sorting and reporting some subset of between 400–800 basic elements of data.82

For these reasons, the relational model resonated with managers: it promised them a kind of freedom in re-connecting elements of data to seek meaningful relations that might contribute to solving managerial problems. This encouraged the rise of a new concept of data as relatable atoms of information. As a 1980s article put it, this entailed thinking ‘of data in terms of individual groups of connected elements.’83 As another piece put it, ‘DBMS is that tool which enables us to build a framework of data which, when properly related [emphasis added], can generate information.’84 DBMSs, and the relational model in particular, allowed a data practice to be imagined where users, ideally managers, meaningfully related elements of data – that is, it provided the technical infrastructure for imagining a ‘[knowledge production] process by which data from those relations can be combined to form meaningful information’ (in the article “Implementing Relational Databases” from 1980).85

The relational model produced data relationality, opening up new avenues for the relational analysis of data through the use of relational algebra as a query language. This enabled managers to, for example, perform a simple form of pattern detection (for instance, linking sick days with sales statistics). In a way, these relational practices can be understood as an early indication of today’s widespread use of data mining – machine learning algorithms – to extract patterns of meaningful relationships from large data collections. Yet, in the early 1980s, humans, not algorithms, still shouldered the responsibility for detecting meaningful patterns and correlations in data.86 Even so, the concept of data as relatable, in this period, represented a shift away from the idea of data as merely ‘processable.’ Simultaneously, a concept of information took root in terms of identified relations, or patterns of relationships, hidden in databases, patiently waiting within the data to be uncovered by any kind of ‘intelligent’ connector – whether human or algorithm. Arguably, it is a logical step from here to the lowered appreciation of human action and intuition in knowledge production that we see today: if relations already exist in the raw material, then the human brain is no longer needed to create meaningful relationships, but merely to extract them from the database.
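
The sick-days example mentioned above can be made concrete with a small, entirely hypothetical sketch: two tables are related through their common month attribute, and it is left to the human reader, not an algorithm, to spot the pattern in the paired figures.

```python
sick_days = [{"month": "Jan", "days_lost": 8},
             {"month": "Feb", "days_lost": 31},
             {"month": "Mar", "days_lost": 9}]
sales = [{"month": "Jan", "revenue": 104000},
         {"month": "Feb", "revenue": 72000},
         {"month": "Mar", "revenue": 101000}]

# Relate the two tables on the shared attribute; the manager then
# reads the pattern (high absence, low revenue) out of the result.
combined = [{**s, **v} for s in sick_days for v in sales
            if s["month"] == v["month"]]
for row in combined:
    print(row["month"], row["days_lost"], row["revenue"])
```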

Conclusions: A Database Revolution?

Taking Beniger’s historiographical approach as a model, this article has traced the technological and cultural origins of data-based managerial practice and thought back to the 1970s and 1980s, arguing that innovations in databases – database management systems and the relational database model – provided key enabling conditions for a data-based mindset to take root in the United States. It showed that current claims of big data causing a revolution in managerial control need to be approached with caution. I would argue that if we can speak of a revolution (at any point in time), it would have to concern ‘databases’ rather than ‘big data.’ And in my view, the roots of this revolution do not lie in the twenty-first century, but in developments almost five decades ago, when managerial control began to shift to database technology.

However, questioning the narrative of a big data revolution was not the only objective of this article. I also sought to provide insight into the technological and cultural roots of contemporary data centrism in corporate decision-making. The article has shown that the introduction of DBMSs and the relational model formed key technological conditions for advocates of the database approach to develop a data-based mindset – a mindset that in turn manifested in four concepts of data. These data-centred concepts, and the practices through which they were anchored in reality, provided a conceptual grounding for the materialisation and growth of ideological beliefs concerning data’s objectivity, truthfulness, and transparency. The concepts themselves, as they developed in the 1970s and 1980s, were hardly ideological in nature. They did not associate data with values of objectivity, neutrality, truthfulness and accuracy. They did not proclaim data to be a natural resource, nor did they describe data as facts, or an inherently more objective resource than the human mind. Yet, the data-based mindset represented by the four concepts does evidence how, contrary to currently fashionable opinion, a concept of data as a vital economic input for creating business insights materialised long before the current big-data hype in business.

The most fundamental condition for (big-)data-based decision-making is the idea that data exists to provide accurate and timely information to assist the operation of a business. This idea is most clearly represented in the development of the concept of data as asset in the 1970s. There was nothing historically inevitable about how we came to think about the relation between computers and data in this particular period. The concept of data as asset represented a major change of view compared to the belief that data processing was all about enhancing the speed of processing administrative data. Accordingly, the concept of data management illustrates how organisations attributed increasing amounts of economic and managerial value to data and became more and more convinced that data constituted a vital economic resource necessitating governance.

The concept of data as raw indicates how a perception developed of data as valuable in and of itself, as a feedstock for information. This entailed in turn that the burden of corporate decision-making began to shift from information to data. In other words, corporate management was increasingly seen as dependent on the raw material out of which decision-relevant information could be produced. Accordingly, the concept of data as reality indicates how organisations attached growing importance to specifying and organising the contents of the raw material, focusing on the reflection of real-world processes in data. The concept is indicative of a radical shift in understanding the representational nature of data. In administrative computing of the 1960s, data still represented information in a digital format which served to facilitate its rapid processing by computers. Taking the relational model as a thinking tool, advocates of the database approach came to think of data as instead representing real-world relationships. I argue that in accepting the idea of data as a mapping of reality, they provided an enabling condition for associating data with values of truth and objectivity – that is, it had to be accepted first that data and reality were on a par before the perception could emerge of a raw data reality as, what Gitelman and Jackson call, the ‘fundamental stuff of truth itself.’87 In the 1970s, importantly, this ‘truth’ appears as relational in nature – as evidenced by the emergence of the concept of data as relatable. At the same time, we see a concept take root of information as identified relations, or patterns of relationships, waiting to be uncovered within the data. This concept provides a condition for growing attention to identifying meaningful patterns in data as if they were natural phenomena – facts buried within a data reality, waiting to be ‘unearthed.’

To conclude, I would like to make some brief methodological suggestions for continuing the project of studying the historicity of datafication. Recently, media scholar David Beer made a call to media and communication researchers to reflect on the question of ‘how [we] should […] do the history of Big Data.’88 In response to this question, my first methodological step would be to expand the domain of historical inquiry, so as to include the much longer history of data’s intersection with computation – which, as we know, had already started in the late 1950s. Simultaneously, the focus can be redirected to the study of developments in other nations and regions (for instance, in Europe) and to domains other than the private sector – as computer databases had been introduced into government bureaucracies since at least the late 1960s. Secondly, and following media and communication scholars Crawford, Miltner and Gray, I would reject the claim that big data has been precipitated by recent innovations in technology alone.89 Recognising this enables us to look more broadly at the cultural, technological, and political making of big data within a much longer history. Finally, I propose that we practice big-data history not just as an end in itself, but as a means for contributing to the emerging field of critical data studies, with the ultimate goal of debunking the persistent exceptionalism integral to the big-data ‘revolution.’

Notes

1 Viktor Mayer-Schönberger and Kenneth Cukier, Big Data: A Revolution That Will Transform How We Live, Work and Think (London: John Murray, 2017).

2 In this article, I focus on the domain of (commercial) business, as opposed to, for instance, government or state bureaucracies, in tracing the origins of data-based practices and thought. This choice is not a matter of preferring one over the other, as I am absolutely convinced that focusing on the domain of government will lead to different, yet equally interesting, results. However, archival sources for the domain of business are currently more accessible than those for the domain of government, which is why I have started with the former; I intend to further the project by focusing on developments in the public sector.

3 For a key example of a report that promotes big data as a fundamental shift in corporate decision-making, see: Economist Intelligence Unit, “The Deciding Factor: Big Data & Decision Making” (Capgemini, 2012). It is beyond any doubt that data-based management has gained an increasing following in the corporate world. Many organisations today treat data as raw material, employing data analytics to extract information from data, gaining new insights into customers, markets, and supply chains to inform decision-making. The Capgemini report confirms a growing appetite among organisations for data and data-driven decisions, whilst decisions based purely on intuition or experience are increasingly regarded as suspect.

4 Mayer-Schönberger and Cukier, Big Data, 37.

5 It is no surprise that IBM promotes data as a natural resource. Today, IBM performs at the forefront of big-data innovations with, for example, its big-data analytics mining machine, Watson, which combines artificial intelligence (AI) and sophisticated analytical software for optimal performance as a ‘question answering’ machine.

6 See the following blog post for a discussion of the ideological blindness in corporate culture concerning data practices. Cennydd Bowles, “Datafication and Ideological Blindness,” blog post, (2016), https://www.cennydd.com/writing/datafication-and-ideological-blindness.

7 Danah Boyd and Kate Crawford, “Critical Questions for Big Data: Provocations for a Cultural, Technological, and Scholarly Phenomenon,” Information, Communication & Society 15, no. 5 (2012): 663.

8 José Van Dijck, “Datafication, Dataism and Dataveillance: Big Data between Scientific Paradigm and Ideology,” Surveillance & Society 12, no. 2 (2014): 197–208.

9 Mayer-Schönberger and Cukier, Big Data, 104.

10 Lisa Gitelman and Virginia Jackson, “Introduction,” in ‘Raw Data’ is an Oxymoron, ed. Lisa Gitelman (Cambridge; London: The MIT Press, 2013), 2.

11 Mirko Tobias Schäfer and Karin van Es, “Introduction: New Brave World,” in The Datafied Society: Studying Culture through Data (Amsterdam: Amsterdam University Press, 2017), 13.

12 Cornelius Puschmann and Jean Burgess, “Metaphors of Big Data,” International Journal of Communication 8 (2014): 1690–1709.

13 Rob Kitchin, The Data Revolution: Big Data, Open Data, Data Infrastructures and Their Consequences (Sage, 2014).

14 David Beer, “How Should We Do the History of Big Data?,” Big Data & Society 3, no. 1 (2016): 1–10.

15 Ibid.; Amelia Acker, “Toward a Hermeneutics of Data,” IEEE Annals of the History of Computing 37, no. 3 (2015): 70–75.

16 Schäfer and van Es, “Introduction,” 13.

17 Thomas J. Bergin and Thomas Haigh, “The Commercialization of Database Management Systems, 1969–1983,” IEEE Annals of the History of Computing 31, no. 4 (2009): 26–41.

18 Thomas Haigh, “How Data Got Its Base: Information Storage Software in the 1950s and 1960s,” IEEE Annals of the History of Computing 31, no. 4 (2009): 6–25.

19 Lev Manovich, “Database as Symbolic Form,” Convergence 5, no. 2 (1999): 81.

20 For a complete overview of early DBMS packages and their technical characteristics, see Bergin and Haigh, “The Commercialization”.

21 Ibid.

22 For an excellent introduction to the relational model, see Michael Castelle, “Relational and Non-Relational Models in the Entextualization of Bureaucracy,” Computational Culture 3 (2013).

23 Rahul Mukherjee, “Interfacing Data Destinations and Visualizations: A History of Database Literacy,” New Media & Society (2013): 110–128.

24 For an example of the former, see Thomas Haigh, “‘A Veritable Bucket of Facts’: Origins of the Data Base Management System,” ACM SIGMOD Record 35, no. 2 (2006): 73–88. Examples of the latter are Bergin and Haigh, “The Commercialization”; Haigh, “How Data Got Its Base”; Martin Campbell-Kelly, “The RDBMS Industry: A Northern California Perspective,” IEEE Annals of the History of Computing 34, no. 4 (2012): 18–29.

25 For the former, see David Alan Grier, “The Relational Database and the Concept of the Information System,” IEEE Annals of the History of Computing 34, no. 4 (2012): 9–17. For the latter, see Castelle, “Relational and Non-Relational Models”.

26 For the former, see Mukherjee, “Interfacing Data Destinations”. For the latter, see Kevin Driscoll, “From Punched Cards to Big Data,” Communication +1 1, no. 1 (2012).

27 James Beniger, The Control Revolution: Technological and Economic Origins of the Information Society (Cambridge, MA: Harvard University Press, 1986).

28 In the 1930s, punch-card tabulators became widely implemented in American offices. A full tabulating system including ‘a series of devices ranging from key punches to verifiers and sorters to tabulators in which data entered the system in machine-readable form’ could operate on and process ‘thousands, even millions of pieces of data.’ See James W. Cortada, Before the Computer: IBM, NCR, Burroughs, and Remington Rand and the Industry They Created, 1865–1956 (Princeton University Press, 1993), 44.

29 Initially, the punched-card machines merely replaced manual methods of accounting, but soon they were applied for more complex processing tasks such as ‘the immediate and continuous analysis of sales and costs.’ Beniger, The Control Revolution.

30 JoAnne Yates, Control through Communication: The Rise of System in American Management (Baltimore: Johns Hopkins University Press, 1993); Idem, “Co-Evolution of Information-Processing Technology and Use: Interaction between the Life Insurance and Tabulating Industries,” Business History Review 67, no. 1 (1993): 1–51.

31 Thomas Haigh, “The Chromium-Plated Tabulator: Institutionalizing an Electronic Revolution, 1954–1958,” Annals of the History of Computing 23, no. 4 (2001): 85–87.

32 Haigh, “The Chromium-Plated Tabulator”.

33 As historian James Cortada explains, ‘its [tabulating equipment’s] enormous capacity for processing data (…) created the demand and the mindset that largely motivated organizations [with large or data-intensive calculating needs] to want what eventually became known as the computer.’ Cortada, Before the Computer, 44.

34 Mayer-Schönberger and Cukier, Big Data, 37.

35 Thomas Haigh, “Inventing Information Systems: The Systems Men and the Computer, 1950–1968,” Business History Review 75, no. 1 (2001): 15–61.

36 Haigh, “Inventing Information Systems,” 18.

37 The only competition for Datamation came from the magazines Computers and Automation (1953) and Business Automation (1961).

38 In October 1957, Datamation started as a semi-monthly publication. In 1961, it converted to a monthly publication.

39 Robert V. Head, “Datamation’s Glory Days,” IEEE Annals of the History of Computing 26, no. 2 (2004): 16–21.

40 For a short reflection on the tone of Datamation articles, see Nathan L. Ensmenger, The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise (Cambridge, MA: MIT Press, 2010).

41 Early discussions of database management and DBMSs appear in Robert L. Flynn, “A Brief History of Data Base Management,” Datamation 20, no. 8 (1974): 71–88; Richard F. Schubert, “Directions in Data Base Management Technology,” Datamation 20, no. 9 (1974): 48–56; J. Stevens Blanchard, “We Bet Our Company on Data Base Management,” Datamation 20, no. 9 (1974): 61–70; Robert M. Curtice, “The Outlook for Data Base Management,” Datamation 22, no. 4 (1976): 46–49. The relational database and its concepts are first discussed in C.J. Date, “Relational Data Base Concepts,” Datamation 22, no. 4 (1976): 50–53. Since then, the relational model has appeared as an object of discussion in at least twenty articles, most prominently in Michael F. Korns, “Halfway to a Relational Data Base,” Datamation 22, no. 5 (1976): 107–116; Warren J. Polk and Kerry Byrd, “Managing the Very Large Database,” Datamation 27, no. 9 (1981): 115–124; Robert Bowerman, “Relational Database Systems for Micros,” Datamation 29, no. 7 (1983): 128–134.

42 The ‘database approach’ is first mentioned in 1974 by Gerald E. Huhn, “The Database in a Critical Online Business Environment,” Datamation 20, no. 9 (1974): 52–56; and Blanchard, “We Bet Our Company”. It is provided with a more substantive philosophical basis in George Schussel, “When Not to Use a Data Base,” Datamation 21, no. 11 (1975): 82–98; and Daniel S. Appleton, “What Data Base Isn’t,” Datamation 23, no. 1 (1977): 85–92.

43 Haigh, “A Veritable Bucket,” 33.

44 For an excellent discussion on the role of ‘computer people’ in the data-processing department in the 1950s and 1960s, see Ensmenger, The Computer Boys. For a detailed history of computer use in various sectors of the American economy, see James W. Cortada, The Digital Hand: How Computers Changed the Work of American Manufacturing, Transportation, and Retail Industries (Oxford, NY: Oxford University Press, 2004).

45 Peter G. Keen and Jerry R. Wagner, “DSS: An Executive Mind-Support System,” Datamation 25, no. 11 (1979): 117–122.

46 Appleton, “What Data Base Isn’t”.

47 Ibid., 87.

48 Ibid., 86.

49 Bergin and Haigh, “The Commercialization”.

50 Ibid.

51 Alfred R. Berkeley, “Millionaire Machine?,” Datamation 27, no. 8 (1981): 32.

52 Johnson, “The Changing DP Organization,” Datamation 21, no. 1 (1975): 81–83; Joseph Ferreira and James F. Collins Jr., “The Changing Role of the MIS Executive,” Datamation 25, no. 11 (1979): 26–32.

53 Paul R. Hessinger, “Distributed Systems and Data Management,” Datamation 27, no. 11 (1981): 179.

54 Hessinger, “Distributed Systems and Data Management,” 181.

55 Getz, “DP’s Role Is Changing,” 117.

56 Ibid., 124.

57 See ibid.; Ferreira and Collins, “The Changing Role of the MIS Executive”.

58 John C. Gilbert, “Can Today’s MIS Manager Make the Transition?,” Datamation 24, no. 3 (1978): 151.

59 Getz, “DP’s Role Is Changing,” 124.

60 Appleton, “What Data Base Isn’t,” 87.

61 Flynn, “A Brief History,” 71.

62 Getz, “DP’s Role Is Changing,” 120.

63 Vaclav Chvalovsky, “Anything New in Data Base Technology?,” Datamation 22, no. 4 (1976): 54–55; Robert S. Barnhardt, “Implementing Relational Databases,” Datamation 26, no. 10 (1980): 161–172.

64 Appleton, “What Data Base Isn’t”.

65 For further critique of the file-conversion approach, see Byford E. Hoffman and Richard J. Schonberger, “Data Base for Results,” Datamation 22, no. 8 (1976): 155–156.

66 D.C. Tsichritzis and F.H. Lochovsky, “Designing the Data Base,” Datamation 24, no. 8 (1978): 147.

67 Chvalovsky, “Anything New in Data Base Technology?,” 54.

68 For a more detailed discussion of the difference between the network and hierarchical models, see Castelle, “Relational and Non-Relational Models”.

69 Edgar F. Codd, “A Relational Model of Data for Large Shared Data Banks,” Communications of the ACM 13, no. 6 (1970): 377.

70 For a discussion on normalisation in Datamation, see Barnhardt, “Implementing Relational Databases,” 169.

71 Jim Highsmith, “Synchronizing Data with Reality,” Datamation 27, no. 11 (1981): 187.

72 Chvalovsky, “Anything New in Data Base Technology?,” 54.

73 Date, “Relational Data Base Concepts”.

74 Codd, “A Relational Model,” 377.

75 An offshoot of System R was the query language SQL, a data-manipulation language designed to retrieve and modify data stored in System R’s databases. SQL included a handful of basic aggregate functions (SUM, COUNT, AVG, MIN, MAX) for performing elementary calculations on the data, thereby placing a rudimentary form of data-analytical power in the hands of its users.
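
To illustrate the kind of elementary analytics these aggregate functions afford, the following minimal sketch runs all five over a small invented table (in Python, using the standard-library sqlite3 module; the ‘sales’ table, its columns, and its figures are hypothetical, chosen purely for illustration, and Python merely serves as a convenient host for the SQL):

    import sqlite3

    # Build a small in-memory database with a hypothetical sales table.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany(
        "INSERT INTO sales VALUES (?, ?)",
        [("East", 120.0), ("East", 80.0), ("West", 200.0)],
    )

    # The five aggregate functions noted above, computed in one query.
    totals = conn.execute(
        "SELECT SUM(amount), COUNT(*), AVG(amount), MIN(amount), MAX(amount) "
        "FROM sales"
    ).fetchone()
    print(totals)  # (400.0, 3, 133.33..., 80.0, 200.0)

The point of the sketch is simply that even these five operators turned the database itself into a rudimentary calculating instrument.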

76 For a discussion of data independence in terms of separating data from applications, see Robert M. Curtice, “Data Independence in Data Base Systems,” Datamation 21, no. 4 (1975): 65–71.

77 For a more detailed discussion of the greater degree of freedom in producing relationships between data that users experienced when using relational databases, see Castelle, “Relational and Non-Relational Models”.

78 For an insightful introduction to the user-interfacing of the relational model, see Mukherjee, “Interfacing Data Destinations”.

79 Polk and Byrd, “Managing the Very Large Database,” 118.

80 Haigh, “A Veritable Bucket,” 43.

81 Nigel S. Read and Douglas L. Harmon, “Assuring MIS Success,” Datamation 27, no. 2 (1981): 110.

82 Appleton, “What Data Base Isn’t,” 87.

83 Robert H. Holland, “DBMS: Developing User Views,” Datamation 26, no. 2 (1980): 141.

84 Flynn, “A Brief History,” 71.

85 Barnhardt, “Implementing Relational Databases,” 169.

86 For a more detailed discussion of human-driven analysis, see Usama Fayyad and Ramasamy Uthurusamy, “Data Mining and Knowledge Discovery in Databases (Editorial),” Communications of the ACM 39, no. 11 (1996): 416–430. As the authors state, ‘[t]raditionally, analysis was strictly a manual process. One or more analysts would become intimately familiar with the data and – with the help of statistical techniques – provide summaries and generate reports. In effect, the analysts acted as sophisticated query processors.’

87 Gitelman and Jackson, “Introduction,” 2.

88 Beer, “How Should We Do the History of Big Data?”.

89 Kate Crawford, Mary L. Gray, and Kate Miltner, “Critiquing Big Data: Politics, Ethics, Epistemology (Special Section Introduction),” International Journal of Communication 8 (2014): 1663–1672.

Biography

Niels Kerssens holds a Ph.D. in Media Studies and is a researcher and lecturer at the Department of Media and Cultural Studies at Utrecht University. In 2016, he completed his Ph.D. dissertation, entitled Cultures of Use – 1970s/1980s: An Archaeology of Computing’s Integration with Everyday Life (supervised by José van Dijck and Bernhard Rieder). His current research, situated in the fields of critical data studies and media history, investigates datafication as a historical transition that began in the 1960s and continues to this day.