Popular filmgoing is a study of the actions and choices of film consumers. Film popularity is a measure of those choices. If a metric can be found to account for the popularity of films in a particular locality, then it follows that this will be possible in other localities, making comparative analysis possible. This is what this short paper promotes: an analytical tool (RelPOP) for doing comparative popular filmgoing history. The metric can be audience numbers or box office revenue, or in the absence of information about either, a proxy, such as POPSTAT. A second issue arises concerning why one would want to do this. The answer is that film popularity in some way reflects film tastes. It thus necessarily opens a portal onto civil society. Not only are the actions of going to the cinema part of what it is to live, but so too are the choices made. What does the combination of images and sounds tell us about audience sympathies and empathies, and more broadly about social, moral, and ethical values? Clearly, differences in film popularity between two or more localities are a matter of interest, and evidence of differences in tastes between different populations.

For instance, in his new treatment of cinema under the Nazi regime in Germany, Joseph Garncarz, among many striking findings, draws attention to the growth of cinema audiences during the course of the Second World War and, unsurprisingly, the increased presence of women in the audience.1 Furthermore, in response to this change in audience composition, Garncarz observes that during these years the diet of films being screened changes in character, better reflecting women’s tastes.2 Implicit in these observations is the idea that sections of the audience are attracted by particular films and producers respond to these preferences by producing more of the type of film favoured. The sources that support his findings are both secondary – published statistics found in the archive that record the number of cinema attendances and scale of recruitment of men into the German army – and primary: the film programmes of a single representative sample of cinemas, drawn entirely from the city of Berlin, from which the author derives an index of film popularity, known as POPSTAT. Eric Hobsbawm once wrote: ‘I strongly defend the view that what historians investigate is real.’3 (Hobsbawm appears to argue that it is the job of historians to concern themselves with marshalling evidence to support a particular depiction of the past that is of significance: moreover, it should be an account that is capable of being shown to be false through the presentation of better and more pertinent historical evidence.) To my mind, Garncarz’s work is an example of what Hobsbawm terms ‘real’ and the POPSTAT method helps him achieve this.

POPSTAT is now over 25 years old. Based upon all of the films that were screened at least once in one of 80 or so London and provincial city first and second-run cinemas during 1934, I developed the POPSTAT method and the index of film popularity that it gave rise to, in order to bring to the history of cinema a new type of evidence about what films people paid to watch in very large numbers.4 I wanted those countless decisions of what film to see to speak for themselves: to be able to say, unambiguously, that for whatever reason, Film A was more popular with a particular audience than Film B, even though film scholars may well have reversed that order when conducting critical analysis. The essay later led to a book in which the temporal span of the investigation was extended to 1932-1937.5

As an economic historian, capturing relative film popularity gave me a key for conceptualising how a stock of films flows through a stock of cinemas on any particular day, week, year or cluster of years and for understanding that this process was not random, but rather predicated upon those films most popular with audiences being screened more often than films that were not so popular. From this, an economic rationality is apparent: one in which exhibitors and distributors maximised their returns by adjusting supply to better reflect audience preferences, once revealed.

Garncarz argues that my study based on leading cinemas is truncated in scope and therefore will not necessarily reflect aspects of the diffusion process beyond those cinemas in the sample: that is, that my sample was not truly representative.6 I agree with his criticism, and it is the reason why, at the time, I also conducted two local studies – one in the North of England (Bolton) and one in the South (Brighton) – based upon the full population of cinemas in each locality that advertised daily in the local evening newspapers. The intention was to capture the life-and-death dynamics of films in each locality. In response to the discovery of cinema box-office ledgers by Sue Harper,7 a study of the programming history of a third small city, Portsmouth, also on the South Coast of England, was later conducted.8

All three cities screened comparable numbers of films among comparable populations of cinemas. The POPSTAT indices of the three manifest similar statistical properties – highly skewed with a long right tail, in which the median and mean POPSTAT index values fall within the first decile (ten percent) of the range of values. But how best to compare the actual preferences for films? That is, while the statistical distributions of POPSTAT values were comparable, did the preferences of audiences similarly converge? The answer in the main was yes, but not wholly. For instance, the Gracie Fields vehicle Sing As We Go was hugely popular with Bolton audiences, where it ranked 1st, but ranked only 15th in Brighton, 37th in the national study and 65th in Portsmouth.9 Conversely, the screwball comedy It Happened One Night starring Clark Gable and Claudette Colbert was uniformly popular, ranked 1st in Portsmouth, 7th in Brighton, 8th in Bolton and 9th nationally.10 A weakness of the single representative sample is that it does not allow for this kind of comparative analysis.

Thus, the POPSTAT method not only provides a way of establishing a rank order of films according to their popularity within a locality, but, implicitly, also a means of comparing the popularity of films across populations of cinemas sited in different localities. As well, the method is adaptable to the particular circumstances of time and place and associated data constraints, although comparative analysis requires the researcher to compare like with like, making procedures and constraints explicit. For example, in her investigation of filmgoing in the Netherlands in the mid-1930s, Clara Pafort-Overduin, in stark contrast with the single representative sample of cinemas adopted by Garncarz, drew upon the film programmes of 22 different Dutch cities/towns between 1934 and 1936, ranging from Amsterdam with a population of 781,645 to Zierikzee, which had a population roughly 100 times smaller (6,944), to investigate those films popular with Dutch audiences of the time. Her surprising results show that the Dutch film De Jantjes was the most popular film in 13 of the 22 locations over the three-year period, including the cities of Den Haag, Groningen, Haarlem, Rotterdam, and Utrecht, and ranked second in Amsterdam and Eindhoven.11 In an extension of this work, a paper Pafort-Overduin wrote with Jaap Boter and myself addressed the apparently simple question of why the Dutch, citizens of a comparably rich nation, did not go to the cinema as often as, say, the British. Our answer was anything but simple. We found that a complex combination of business, cultural, economic and institutional factors were all at play, giving form to a different attitude towards the cinema than that found in Great Britain.12

Thus, like Garncarz, Pafort-Overduin uses a combination of secondary and primary resources to do analysis. For both scholars, POPSTAT serves as a means of establishing an empirical base from which to estimate the relative size of cinema markets and the preferences of audiences. Pafort-Overduin's extended sampling allows her to investigate intra-locality, inter-locality and national film popularity, while, as stated earlier, Garncarz is concerned with national popularity characteristics drawn from a single sample. From the establishment of the POPSTAT index of popularity, wider questions can then be asked. For instance, Garncarz seeks to understand the preferences of Jewish filmgoers and whether these were substantially different from those of non-Jewish Germans during the Nazi period, while Pafort-Overduin is fascinated by the tastes expressed by Dutch filmgoers implicit in the popularity of three films – what are known as ‘Jordaanfilms’ – including De Jantjes, that depict in a light-hearted way everyday life in the Netherlands at the time. For my part, the evolution of filmgoers’ preferences for particular films, the tastes that lie behind their choices, in conjunction with the pattern of diffusion that prevails across different states with different ideologies at different times, is intriguing. Garncarz shows how this pattern of diffusion was adopted in totalitarian Germany. Preliminary work on programming data obtained from Brno, Czechoslovakia in 1952 and Krakow, Poland suggests that a similar pattern of diffusion prevailed behind the Iron Curtain, following the end of the Second World War.13 What appears to be the case is that audiences select what they want to see from a body of films determined by other agencies, and the distribution system operationalises these choices in a manner which reflects the popularity of the films being chosen.

One major issue that arises in assessing intra-locality film popularity is the distinction between the cardinal (number order) and ordinal (ranking order) nature of POPSTAT Index values: specifically, that the rank order of films established by a POPSTAT Index will only poorly reflect differences in POPSTAT values. The explanation for this is to be found in the highly skewed nature of the frequency distribution associated with the POPSTAT Index, which when smoothed shows an ever diminishing marginal POPSTAT value in relation to rank. That is, the slope of the POPSTAT curve declines as rank order increases: the curve becomes shallower along its length, meaning that a change of rank from, say, 1 to 2 represents a considerably greater change in POPSTAT values than the change from, say, 99 to 100. For this reason, cardinal values are preferred to ordinal values, because they allow us to understand how much more popular any one film is in relation to any other film and thus, by implication, all other films.
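The cardinal/ordinal distinction can be sketched numerically. The short Python fragment below uses an invented power-law-style decay curve – not data from any of the studies cited – to show how equal steps in rank correspond to very unequal steps in index value:

```python
# Hypothetical sketch only: a skewed "POPSTAT-like" curve in which the
# value falls away sharply with rank. The constant 100.0 is arbitrary.

def popstat_value(rank: int) -> float:
    """Toy skewed curve: index value declines steeply as rank increases."""
    return 100.0 / rank

# The one-place change at the top of the ranking dwarfs the same
# one-place change at the tail.
gap_top = popstat_value(1) - popstat_value(2)      # 50.0
gap_tail = popstat_value(99) - popstat_value(100)  # ~0.0101

print(gap_top, gap_tail)
```

The two gaps differ by a factor of several thousand, even though both represent a single step in rank, which is why ordinal comparisons alone mislead.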

Clearly, the same argument applies when making a comparison of the performance of films across various localities. Averaging the ranked performance of a film across different localities does not adequately capture differences in respective performances. Take, for example, the popularity of Sing As We Go in the English towns of Bolton, Brighton and Portsmouth. The arithmetic mean rank is (1+15+65)/3=27. Clearly the variance in rank is marked. But further difficulties arise, since the population of films screened differs between localities, meaning that a particular rank will reflect different levels of relative popularity. To take an extreme hypothetical example, if locality A screens 200 films and locality B just 20 films, rank 20 will mean quite different things in each. This is pertinent, because sometimes we might want to make comparisons across localities of different sizes in which large differences in the number of films screened are observed. For instance, respectively 730 and 598 films were screened at least once in January 1954 in Rome and Milan, while in the same month only 188 films were screened in Bari and 72 films in Cagliari.14
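The point about differently sized film populations can be made concrete by converting the same rank into a share of the films screened locally. This Python sketch uses only the hypothetical localities imagined above, not observed data:

```python
# Hypothetical sketch: the same rank signals very different relative
# popularity when the number of films screened differs between localities.

def rank_share(rank: int, n_films: int) -> float:
    """Share of the local film population ranked at or above this film."""
    return rank / n_films

# Locality A screens 200 films; locality B screens 20.
print(rank_share(20, 200))  # 0.1 -> within the top tenth of films in A
print(rank_share(20, 20))   # 1.0 -> the very bottom of the list in B
```

The identical rank of 20 places a film near the top of one population and at the foot of the other, which is the difficulty that RelPOP, introduced below, is designed to overcome.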

A better way to proceed is to express all POPSTAT values in a given population as proportions of the median. In doing this, a common measure of relative popularity is being proposed: one which standardises each of the various locality populations of films around a measure which represents the film that lies in the middle of a POPSTAT Index when ranked from first to last.15 This statistic is termed RelPOP and was first published in 2019 by Sedgwick, Miskell and Nicoli,16 when analysing actual box-office data gathered by an Italian trade body for the Italian film market, 1957 to 1966. RelPOP takes the general form:

$\mathrm{RelPOP}_{i}=\frac{1}{n}\sum_{loc=1}^{n}\frac{BO_{i}}{BO_{m}}$

where:

RelPOP = Relative Popularity

i = the ith film

loc = locality

n = number of localities

BOi = box-office of the ith film, or an appropriate proxy such as POPSTAT

BOm = box-office of the median film

This formula should be read as follows: The relative popularity of any particular film (the ith film) in a population of films can be expressed as a quotient of its actual popularity (measured by its box-office, or a proxy, such as POPSTAT) and the median of that population. Accordingly, a RelPOP value of five for a hit film indicates that it is five times as popular as the median film in that series; a RelPOP value of 0.5 indicates a film that is only half as popular as the median film. Across a number of localities, the relative popularity of a film is given by the arithmetic mean. Measures of variance (such as the coefficient of variation) can then be used to identify films for which consumption was largely uniform across various markets and those for which it was uneven. A RelPOP Index thus allows a researcher to make statements such as that the films starring the Italian comic actor Totò were very much more popular in Bari than in Milan. Implicit in this statement, and in the method that enables it to be made, is the consideration that each locality in a study is treated as equivalent, irrespective of its size. That is, we can compare the relative popularities of films across unequally sized cities, regions, or (more problematically) countries: Rome with Bari; Amsterdam with Utrecht; London with Bolton.
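As a minimal sketch of the calculation described above, the following Python fragment computes RelPOP and the coefficient of variation from invented POPSTAT values for three placeholder localities and films; only the method, not the data, follows the formula:

```python
from statistics import mean, median, pstdev

# Invented POPSTAT values for illustration: locality -> {film: POPSTAT}.
popstat = {
    "loc1": {"film_a": 50.0, "film_b": 10.0, "film_c": 2.0},
    "loc2": {"film_a": 30.0, "film_b": 10.0, "film_c": 5.0},
    "loc3": {"film_a": 80.0, "film_b": 8.0,  "film_c": 4.0},
}

def quotients(film: str) -> list[float]:
    """Per-locality ratio of the film's POPSTAT to the local median."""
    return [vals[film] / median(vals.values()) for vals in popstat.values()]

def relpop(film: str) -> float:
    """RelPOP: the arithmetic mean of the per-locality quotients."""
    return mean(quotients(film))

def coeff_var(film: str) -> float:
    """Coefficient of variation: flags uneven consumption across markets."""
    q = quotients(film)
    return pstdev(q) / mean(q)

print(relpop("film_a"))     # 6.0 -> six times as popular as the median film
print(coeff_var("film_b"))  # 0.0 -> uniformly the median film everywhere
```

A film with a high RelPOP but a large coefficient of variation is the statistical signature of a locally concentrated hit, such as Sing As We Go in Bolton; a low coefficient of variation marks the uniform popularity of a film like It Happened One Night.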

## Concluding remarks

In this brief essay, I have attempted to convey the idea that during the era when filmgoing dominated all other paid-for leisure activities, the POPSTAT method opens a portal onto civil society. It allows us to understand the process by which films were diffused; the reason why they were diffused in this manner; the preferences of audiences for particular films and by inference what excited them; the manner in which these informal (subjective) preferences co-existed with the formal structures of ideology exercised by the authorities17; and finally gender, class and ethnic differences in taste and how these might have changed over time. I have illustrated the uses to which the POPSTAT method has been put by historians, concentrating on the important contributions of Joseph Garncarz and Clara Pafort-Overduin. At the centre of the method is the behaviour of audiences, the consumers of films. POPSTAT in conjunction with RelPOP allows us to measure, compare and contrast this behaviour.