Journal articles: 'Document Object Model (Web site development technology)' – Grafiati (2024)




Author: Grafiati

Published: 4 June 2021

Last updated: 27 January 2023


Consult the top 24 journal articles for your research on the topic 'Document Object Model (Web site development technology).'


1

Wang, Yanlong, and Jinhua Liu. "Object-oriented Design based Comprehensive Experimental Development of Document Object Model." Advances in Engineering Technology Research 3, no. 1 (December 7, 2022): 390. http://dx.doi.org/10.56028/aetr.3.1.390.


Abstract:

JavaScript code using the Document Object Model (DOM) enables dynamic control of Web pages, which is an important part of the Web development technology course. The application of the DOM is very flexible and involves many knowledge points, so it is difficult for students to master. To help students understand each knowledge point and improve their engineering ability to solve practical problems, a comprehensive DOM experiment project, similar to a blind box, is designed and implemented. The experimental project integrates knowledge points such as DOM events, DOM operations, and communication between objects. Practice has proved that running and debugging the project helps students understand and master the relevant knowledge points.
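The knowledge points the experiment integrates, DOM events, DOM operations, and communication between objects, can be sketched with a minimal hand-rolled event-target stand-in that runs outside the browser; the element names and handler below are illustrative, not taken from the paper:

```javascript
// Minimal stand-in for a DOM event target, so the pattern runs outside a browser.
class MiniElement {
  constructor(id) { this.id = id; this.textContent = ""; this.listeners = {}; }
  addEventListener(type, fn) { (this.listeners[type] ??= []).push(fn); }
  dispatchEvent(type, detail) {
    for (const fn of this.listeners[type] ?? []) fn({ type, detail, target: this });
  }
}

// Two "blind box" elements communicating through an event (illustrative names).
const box = new MiniElement("box");
const display = new MiniElement("display");

// DOM operation: update another element's content when an event fires.
box.addEventListener("open", (e) => {
  display.textContent = `Box revealed: ${e.detail}`;
});

box.dispatchEvent("open", "toy robot");
console.log(display.textContent); // "Box revealed: toy robot"
```

In a real browser the same pattern uses `document.getElementById`, `addEventListener`, and assignments to `textContent`.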

2

Manko, M. O., and Yu. V. Tryus. "Creating a web-oriented expert system for solving problems of optimization." CTE Workshop Proceedings 3 (March 20, 2015): 295–99. http://dx.doi.org/10.55056/cte.283.


Abstract:

Research goal: to create a web-oriented expert system on methods of optimization based on the principles of cloud technologies. Research objectives: to design and develop an expert system based on a production model of knowledge about the subject area. The object of research is a web-oriented expert system; the subjects of research are optimization problems and methods. The research used the methods of mathematical modeling and computer experiment. The result of the research is a knowledge base built on a production model of knowledge about optimization problems and methods, and a web-oriented expert system for solving optimization problems developed on its basis. The expert system is created to guide the learning process in the preparation of mathematicians, applied mathematicians, and professionals in IT and economic cybernetics. Main conclusions and recommendations: the web-oriented expert system was created for use in the educational process at universities training students in mathematics, applied mathematics, information technology, and economic cybernetics. Future work includes developing software modules for solving some classes of optimization problems directly on the site of the expert system, which will enable its use in solving real problems of small and medium businesses.
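The production (rule-based) model of knowledge the abstract describes can be sketched as a small forward-chaining rule engine; the facts and rules below are illustrative placeholders, not taken from the actual system:

```javascript
// Tiny forward-chaining engine over production rules: IF all conditions hold THEN add a fact.
function forwardChain(facts, rules) {
  const known = new Set(facts);
  let changed = true;
  while (changed) {
    changed = false;
    for (const { if: conds, then } of rules) {
      if (conds.every((c) => known.has(c)) && !known.has(then)) {
        known.add(then);
        changed = true;
      }
    }
  }
  return known;
}

// Illustrative rules for recommending an optimization method.
const rules = [
  { if: ["objective is differentiable", "unconstrained"], then: "use gradient descent" },
  { if: ["objective is linear", "constraints are linear"], then: "use simplex method" },
];

const result = forwardChain(["objective is linear", "constraints are linear"], rules);
console.log([...result]);
```

The loop repeatedly fires any rule whose conditions are all known until no new facts appear, which is the usual execution model for a production knowledge base.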

3

Lee, Seung Hyun, and Jaeho Son. "Development of a Safety Management System Tracking the Weight of Heavy Objects Carried by Construction Workers Using FSR Sensors." Applied Sciences 11, no. 4 (February 3, 2021): 1378. http://dx.doi.org/10.3390/app11041378.


Abstract:

It has been pointed out that carrying a heavy object that exceeds a certain weight is a major factor that puts physical burden on a construction worker's musculoskeletal system. However, due to the nature of the construction site, where a large number of workers work simultaneously in an irregular space, it is difficult to determine the weight of the object carried by a worker in real time or to keep track of the workers who carry excess weight. This paper proposes a prototype system to track the weight of heavy objects carried by construction workers by developing smart safety shoes with FSR (Force Sensitive Resistor) sensors. The system consists of smart safety shoes with attached sensors, a mobile device for collecting the initial sensing data, and a web-based server computer for storing, preprocessing, and analyzing these data. The effectiveness and accuracy of the weight tracking system were verified through experiments in which each experimenter lifted a weight from +0 kg to +20 kg in 5 kg increments. The results of the experiments were analyzed by a newly developed machine-learning-based model, which adopts effective classification algorithms such as decision tree, random forest, gradient boosting machine (GBM), and LightGBM. The average classification accuracy was similarly high across algorithms: random forest (90.9%), LightGBM (90.5%), decision tree (90.3%), and GBM (89%). Overall, the proposed weight tracking system classifies how much weight each experimenter carries with a significant 90.2% average accuracy.
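The classification step, mapping aggregated FSR readings to a carried-weight class, can be sketched with a simple nearest-centroid classifier; the paper uses tree-based models (decision tree, random forest, GBM, LightGBM), and the centroid values below are made-up stand-ins, not measured data:

```javascript
// Nearest-centroid stand-in for the paper's tree-based classifiers:
// each class (0, 5, 10, 15, 20 kg added) is represented by a typical
// total FSR pressure reading, and a new reading is assigned to the
// closest one. Centroid values are illustrative only.
const centroids = { 0: 700, 5: 760, 10: 820, 15: 880, 20: 940 };

function classifyWeight(totalPressure) {
  let best = null;
  let bestDist = Infinity;
  for (const [kg, center] of Object.entries(centroids)) {
    const d = Math.abs(totalPressure - center);
    if (d < bestDist) { bestDist = d; best = Number(kg); }
  }
  return best;
}

console.log(classifyWeight(815)); // closest centroid is 820 → class 10
```

A real deployment would train the tree-based models on labelled sensor traces instead of hand-set centroids, but the input/output contract is the same: a pressure summary in, a weight class out.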

4

Khort, D. O., A. I. Kutyrev, I. G. Smirnov, and I. V. Voronkov. "Development of an Automated Management System for Agricultural Technologies in Horticulture." Agricultural Machinery and Technologies 15, no. 2 (June 23, 2021): 61–68. http://dx.doi.org/10.22314/2073-7599-2021-15-2-61-68.


Abstract:

The implementation of intelligent technologies in industrial horticulture is possible with the help of an automated system for managing production processes. (Research purpose) To develop and substantiate the parameters of an automated management system for agricultural technologies in horticulture, with the ability to conduct land inspections using a mobile application. (Materials and methods) The ADO.NET driver Npgsql was used to work with the database, and Dapper was used for object-relational mapping. The web application used the Model-View-Controller design pattern and Bootstrap as the CSS framework. Data visualization from the database was carried out using cloud technology, with the site hosted on Internet Information Services. jQuery (a set of JavaScript functions) served as the main framework for the client side of the program code. The authors also used the PostgreSQL database management system. The mobile application was created in the Android Studio integrated environment. (Results and discussion) The authors developed an automated system for managing agricultural technologies and formed the structure of its hardware and software base. Based on an algorithm for choosing optimal options for technological processes in horticultural production, the system can operate in a dialogue mode with the user through forms. A mobile application was implemented for digital land inspections, and the procedure for conducting such inspections by agronomists was defined. (Conclusions) The authors developed a system for the automated formation and management of technologies in horticulture that provides operational processing of information flows in real time, reflecting the characteristics of plant growth and state in critical phases of development, and supports modern recording devices and a mobile application. They showed that the system automatically optimizes machine technologies for the cultivation of horticultural crops according to biological criteria (realizing the potential biological productivity of crops) and economic criteria (increasing the efficiency of using production resources).

5

Liu, Hongda, Yuxi Luo, Jiejun Geng, and Pinbo Yao. "Research Hotspots and Frontiers of Product R&D Management under the Background of the Digital Intelligence Era—Bibliometrics Based on Citespace and Histcite." Applied Sciences 11, no. 15 (July 23, 2021): 6759. http://dx.doi.org/10.3390/app11156759.


Abstract:

The rise of "cloud-computing, mobile-Internet, Internet of things, big-data, and smart-data" digital technology has brought a subversive revolution to the traditional patterns of enterprises and consumers. Product research and development has become the main battlefield of enterprise competition, facing an environment where challenges and opportunities coexist. Regarding the concepts and methods of product R&D projects, the domestic start was later than the international one, and many domestic companies have used successful foreign cases as benchmarks to innovate their management methods in practice. "Workers must first sharpen their tools if they want to do their jobs well." This article starts from the relevant concepts of product R&D projects and summarizes current R&D management ideas and methods. We combined the bibliometric analysis software Histcite and Citespace to sort out the content of domestic and foreign literature and explore the changing trends of research hotspots. Finally, combined with case analyses from domestic master's and doctoral dissertations to test the theory, a literature review of the product R&D project management theme was carried out from the dual perspectives of theory and practice. This study uses the Web of Science core collection as the source of document extraction. Based on the search conditions "Product development" or "Integrat* product development", 8998 sample documents were initially retrieved. The search deadline was June 2019, with a time range from 2000 to June 2019. Then, using a record count of 50 as the critical condition, the sample was deleted, refined, and cleaned down to 5007 analysis documents. Through the review and measurement of these 5007 papers, the analysis showed that: (1) in the last ten years, sustainability, consumer focus, new approaches to product development management, and organizational design have become critical considerations in the product development process stage; (2) at this stage, researchers are paying more attention to the innovation, design, product development, identification, simultaneous engineering, consequence, and stage/gate model aspects of product development; and (3) factors such as long development cycles, high costs, and poor organizational design are now common problems in the product development process.

6

Nayyar, Anand, Pijush Kanti Dutta Pramankit, and Rajni Mohana. "Introduction to the Special Issue on Evolving IoT and Cyber-Physical Systems: Advancements, Applications, and Solutions." Scalable Computing: Practice and Experience 21, no. 3 (August 1, 2020): 347–48. http://dx.doi.org/10.12694/scpe.v21i3.1568.


Abstract:

Internet of Things (IoT) is regarded as a next-generation wave of Information Technology (IT) after the widespread emergence of the Internet and mobile communication technologies. IoT supports information exchange and networked interaction of appliances, vehicles and other objects, making sensing and actuation possible in a low-cost and smart manner. On the other hand, cyber-physical systems (CPS) are described as engineered systems built upon the tight integration of cyber entities (e.g., computation, communication, and control) and physical things (natural and man-made systems governed by the laws of physics). The IoT and CPS are not isolated technologies. Rather, it can be said that IoT is the base or enabling technology for CPS, and CPS is considered the grown-up development of IoT, completing the IoT notion and vision. Both are merged into a closed loop, providing mechanisms for conceptualizing and realizing all aspects of the networked composed systems that are monitored and controlled by computing algorithms and are tightly coupled between users and the Internet. That is, the hardware and the software entities are intertwined, and they typically function on different time and location-based scales. In fact, the linking between the cyber and the physical world is enabled by IoT (through sensors and actuators). CPS, which includes traditional embedded and control systems, is expected to be transformed by the evolving and innovative methodologies and engineering of IoT. Several application areas of IoT and CPS are smart buildings, smart transport, automated vehicles, smart cities, smart grid, smart manufacturing, smart agriculture, smart healthcare, smart supply chain and logistics, etc. Though CPS and IoT have significant overlaps, they differ in terms of engineering aspects.
Engineering IoT systems revolves around uniquely identifiable, internet-connected devices and embedded systems, whereas engineering CPS requires a strong emphasis on the relationship between computational aspects (complex software) and physical entities (hardware). Engineering CPS is challenging because there is no defined and fixed boundary or relationship between the cyber and physical worlds. In CPS, diverse constituent parts are composed and made to collaborate to create unified systems with global behaviour. These systems need to be assured in terms of dependability, safety, security, efficiency, and adherence to real-time constraints. Hence, designing CPS requires knowledge of multidisciplinary areas such as sensing technologies, distributed systems, pervasive and ubiquitous computing, real-time computing, computer networking, control theory, signal processing, embedded systems, etc. CPS, along with the continuously evolving IoT, has posed several challenges. For example, the enormous amount of data collected from physical things makes Big Data management and analytics difficult, including data normalization, data aggregation, data mining, pattern extraction and information visualization. Similarly, the future IoT and CPS need standardized abstractions and architectures that will allow modular design and engineering of IoT and CPS in global and synergetic applications. Another challenging concern of IoT and CPS is the security and reliability of the components and systems. Although IoT and CPS have attracted the attention of the research communities and several ideas and solutions have been proposed, there are still huge possibilities for innovative propositions to make the IoT and CPS vision successful.
The major challenges and research scopes include system design and implementation, computing and communication, system architecture and integration, application-based implementations, fault tolerance, designing efficient algorithms and protocols, availability and reliability, security and privacy, energy efficiency and sustainability, etc. It is our great privilege to present Volume 21, Issue 3 of Scalable Computing: Practice and Experience. We received 30 research papers, of which 14 were selected for publication. The objective of this special issue is to explore and report recent advances and disseminate state-of-the-art research related to IoT, CPS and the enabling and associated technologies. The special issue will present new dimensions of research to researchers and industry professionals with regard to IoT and CPS. Vivek Kumar Prasad and Madhuri D Bhavsar in the paper titled "Monitoring and Prediction of SLA for IoT based Cloud" described mechanisms for monitoring, using the concept of reinforcement learning, and for prediction of cloud resources, which form critical parts of cloud expertise in support of controlling and evolving IT resources, implemented using LSTM. The proper utilization of the resources will generate revenue for the provider and also increase the trust factor of the provider of cloud services. For experimental analysis, four parameters were used, i.e. CPU utilization, disk read/write throughput and memory utilization. Kasture et al. in the paper titled "Comparative Study of Speaker Recognition Techniques in IoT Devices for Text Independent Negative Recognition" compared the performance of features used in state-of-the-art speaker recognition models and analysed variants of Mel frequency cepstrum coefficients (MFCC), predominantly used in feature extraction, which can be further incorporated and used in various smart devices.
Mahesh Kumar Singh and Om Prakash Rishi in the paper titled "Event Driven Recommendation System for E-Commerce using Knowledge based Collaborative Filtering Technique" proposed a novel system that uses a knowledge base generated from a knowledge graph to identify the domain knowledge of users, items, and the relationships among them; a knowledge graph is a labelled multidimensional directed graph that represents the relationships among users and items. The proposed approach uses nearly 100 percent of users' participation in the form of activities during navigation of the web site. Thus, the system infers the users' interests, which is beneficial for both seller and buyer. The proposed system is compared with baseline recommendation methods using three parameters, precision, recall and NDCG, through online and offline evaluation studies with user data, and it is observed that the proposed system performs better than the baseline systems. Benbrahim et al. in the paper titled "Deep Convolutional Neural Network with TensorFlow and Keras to Classify Skin Cancer" proposed a novel classification model to classify skin tumours in images using deep learning methodology; the proposed system was tested on the HAM10000 dataset comprising 10,015 dermatoscopic images, and the results show accuracy on the order of 94.06% on the validation set and 93.93% on the test set. Devi B et al. in the paper titled "Deadlock Free Resource Management Technique for IoT-Based Post Disaster Recovery Systems" proposed a new class of techniques that do not perform stringent testing before allocating the resources but still ensure that the system is deadlock-free and the overhead is minimal. The proposed technique suggests reserving a portion of the resources to ensure no deadlock will occur. The correctness of the technique is proved in the form of theorems.
The average turnaround time is approximately 18% lower for the proposed technique than for Banker's algorithm, with an optimal overhead of O(m). Deep et al. in the paper titled "Access Management of User and Cyber-Physical Device in DBAAS According to Indian IT Laws Using Blockchain" proposed a novel blockchain solution to track the activities of employees managing the cloud. Employee authentication and authorization are managed through the blockchain server, and user-authentication-related data is stored in the blockchain. The proposed work assists cloud companies in having better control over their employees' activities, thus helping to prevent insider attacks on users and cyber-physical devices. Sumit Kumar and Jaspreet Singh in the paper titled "Internet of Vehicles (IoV) over VANETS: Smart and Secure Communication using IoT" presented a detailed description of the Internet of Vehicles (IoV) with current applications, architectures, communication technologies, routing protocols and open issues. The researchers also elaborated on research challenges and the trade-off between security and privacy in the area of IoV. Deore et al. in the paper titled "A New Approach for Navigation and Traffic Signs Indication Using Map Integrated Augmented Reality for Self-Driving Cars" proposed a new approach to supplement the perception technology used in self-driving cars. The proposed approach uses augmented reality to create and augment artificial objects representing navigational signs and traffic signals, based on the vehicle's location, onto reality. This approach helps navigate the vehicle even if the road infrastructure does not have good sign indications and markings. The approach was tested locally by creating a local navigational system and a smartphone-based augmented reality app; it performed better than the conventional method, as the objects were clearer in the frame, which made them easier for the object detector to detect. Bhardwaj et al.
in the paper titled "A Framework to Systematically Analyse the Trustworthiness of Nodes for Securing IoV Interactions" surveyed the literature on IoV and trust and proposed a hybrid trust model that separates malicious and trusted nodes to secure the interactions of vehicles in IoV. To test the model, simulations were conducted with varied threshold values. The results showed that the PDR of a trusted node is 0.63, which is higher than the 0.15 PDR of a malicious node. On the basis of PDR, the number of available hops, and trust dynamics, the malicious nodes are identified and discarded. Saniya Zahoor and Roohie Naaz Mir in the paper titled "A Parallelization Based Data Management Framework for Pervasive IoT Applications" highlighted recent studies and related information on data management for pervasive IoT applications with limited resources. The paper also proposes a parallelization-based data management framework for resource-constrained pervasive IoT applications. The proposed framework is compared with the sequential approach through simulations and empirical data analysis, and the results show an improvement in the energy, processing, and storage requirements for processing data on the IoT device. Patel et al. in the paper titled "Performance Analysis of Video ON-Demand and Live Video Streaming Using Cloud Based Services" presented a review of video analysis over live video streaming (LVS) and video-on-demand (VoD) applications. The researchers compared different message brokers that deliver each frame in a distributed pipeline, analysing their impact on video analysis for LVS and VoD using AWS Elemental services. In addition, the researchers analysed Kafka configuration parameters for reliability in full-service mode.
Saniya Zahoor and Roohie Naaz Mir in the paper titled "Design and Modeling of Resource-Constrained IoT Based Body Area Networks" presented the design and modeling of a resource-constrained BAN system and discussed various BAN scenarios in the context of resource constraints. The researchers also proposed an Advanced Edge Clustering (AEC) approach to manage resources such as the energy, storage, and processing of BAN devices while performing real-time capture of critical health parameters and detection of abnormal patterns. The AEC approach is compared with the Stable Election Protocol (SEP) through simulations and empirical data analysis, and the results show an improvement in energy, processing time and storage requirements for processing data on BAN devices. Neelam Saleem Khan and Mohammad Ahsan Chishti in the paper titled "Security Challenges in Fog and IoT, Blockchain Technology and Cell Tree Solutions: A Review" outlined major authentication issues in IoT, mapped their existing solutions and further tabulated Fog and IoT security loopholes. Furthermore, the paper presents blockchain, a decentralized distributed technology, as one of the solutions for authentication issues in IoT. In addition, the researchers discussed the strengths of blockchain technology, the work done in this field, and its adoption in the COVID-19 fight, and tabulated various challenges in blockchain technology. The researchers also proposed the Cell Tree architecture as another solution to address some of the security issues in IoT, outlined its advantages over blockchain technology and tabulated some future directions to stir further attempts in this area. Bhadwal et al. in the paper titled "A Machine Translation System from Hindi to Sanskrit Language Using Rule Based Approach" proposed a rule-based machine translation system to bridge the language barrier between Hindi and Sanskrit by converting any text in Hindi to Sanskrit.
The results are produced in the form of two confusion matrices, wherein a total of 50 random sentences and 100 tokens (Hindi words or phrases) were taken for system evaluation. The semantic evaluation of 100 tokens produced an accuracy of 94%, while the pragmatic analysis of 50 sentences produced an accuracy of around 86%. Hence, the proposed system can be used to understand the whole translation process and can further be employed as a tool for learning as well as teaching. Further, this application can be embedded in local-communication-based assisting Internet of Things (IoT) devices like Alexa or Google Assistant. Anshu Kumar Dwivedi and A.K. Sharma in the paper titled "NEEF: A Novel Energy Efficient Fuzzy Logic Based Clustering Protocol for Wireless Sensor Network" proposed a deterministic, novel, energy-efficient fuzzy-logic-based clustering protocol (NEEF) that considers primary and secondary factors in the fuzzy logic system while selecting cluster heads. After the selection of cluster heads, non-cluster-head nodes use fuzzy logic for prudent selection of their cluster head for cluster formation. NEEF is simulated and compared with two recent state-of-the-art protocols, namely SCHFTL and DFCR, under two scenarios. Simulation results show better performance through load balancing and improvements in the stability period, packets forwarded to the base station, average energy, and network lifetime.

7

"Mutual Browser Conflicts Disclosure." International Journal of Recent Technology and Engineering 8, no.4 (November30, 2019): 1401–5. http://dx.doi.org/10.35940/ijrte.d7365.118419.


Abstract:

In today’s world everything is becoming web dependent, and due to the advances made in web technologies, web developers have to face various challenges. Every web application goes through various phases before being deployed and may look different on different browsers. It becomes difficult to verify a web page when it renders differently across browsers; such significant rendering differences are known as cross-browser inconsistencies. AJAX (Asynchronous JavaScript and XML) is a technology that has gained a prominent position; it combines JavaScript and Document Object Model (DOM) manipulation with asynchronous server communication to achieve a high level of user interactivity. With this change in developing web applications comes a whole set of new challenges. One way to address these challenges is through a crawler that can automatically walk through the different states of a highly dynamic AJAX site and create a model of the navigational paths and states. Identifying these conflicts manually is a laborious task. Mutual browser conflict disclosure presents a mechanism to identify such conflicts.
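The core of such conflict disclosure, comparing the DOM states a crawler captures from different browsers, can be sketched as a recursive diff over plain-object DOM snapshots; the `{ tag, children }` node shape here is an assumed simplification, not the paper's actual model:

```javascript
// Compare two DOM snapshots (plain objects: { tag, children }) and
// collect the paths where the trees diverge — a minimal stand-in for
// cross-browser DOM comparison.
function diffDom(a, b, path = "/", conflicts = []) {
  if (a.tag !== b.tag) {
    conflicts.push(`${path}: <${a.tag}> vs <${b.tag}>`);
    return conflicts;
  }
  const ca = a.children ?? [];
  const cb = b.children ?? [];
  const n = Math.max(ca.length, cb.length);
  for (let i = 0; i < n; i++) {
    if (!ca[i] || !cb[i]) {
      conflicts.push(`${path}${a.tag}[${i}]: missing in one browser`);
    } else {
      diffDom(ca[i], cb[i], `${path}${a.tag}/`, conflicts);
    }
  }
  return conflicts;
}

// Snapshots of the same page as captured from two browsers (illustrative).
const browserA = { tag: "body", children: [{ tag: "div" }, { tag: "span" }] };
const browserB = { tag: "body", children: [{ tag: "div" }] };

console.log(diffDom(browserA, browserB));
```

A full system would also compare attributes, computed styles, and the crawler's state graph, but a tree diff like this is where the reported conflicts come from.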

8

Bruns, Axel. "Invading the Ivory Tower." M/C Journal 2, no. 2 (March 1, 1999). http://dx.doi.org/10.5204/mcj.1742.


Abstract:

One of the most frequent comments about Internet-based media, particularly about newsgroups and the Web, is that they provide a forum for everyone, no matter how obscure or specific their interest -- you'll find dedicated fora for every field, from high-energy physics to learning Klingon, from the campaign for an independent country in Northern Italy to Indonesian cooking. This is seen as a positive development as often as it is regarded as a negative force -- optimists see these fora as potential bases for the formation of virtual communities which may be able to reinvigorate previously neglected niche groupings, while pessimists predict a further shattering of societies into disparate fragments with mutually almost unintelligible cultural attitudes. Examples supporting either view can be found amongst the multitudes of newsgroups and Websites available on the Net, but let us skip this debate for the moment; instead, let's focus on some of the potential consequences this situation may have for academia. It requires little prescience to predict that the next few years will see an increasing use of the Web and, to a smaller extent, newsgroups in academic teaching and research. Continuously updated Websites will enable students and scholars to work with the latest developments in their disciplines, rather than limiting themselves to whatever recent books and journals their university library has managed to acquire, and newsgroups can help put interested academics in touch with each other in order to exchange news and pointers to information on the Web, as well as discuss recent research. For anyone with a computer, much of this information will also be accessible more easily electronically, via the Internet, than physically through libraries, bookstores, and photocopies. 
If it is organised efficiently on the Web, interested researchers may also come to be able to better target precisely the information they need, avoiding the need to leaf through volumes of journals to find the one useful article they might contain. Such research isn't limited only to academics and university students anymore, though. As hypertext scholar George P. Landow notes, "hypertext provides the individualistic learner with the perfect means for exploration and enrichment of particular areas of study. By permitting one to move from relatively familiar areas to less familiar ones, a hypertext corpus encourages the autodidact, the continuing education student, and the student with little access to instructors" (Hypertext 129-30) -- particularly the ethos of information freedom that is widespread on the Internet means that any amateur enthusiast may conduct their own self-education with the materials available on the Web. This was already possible, after a fashion, in pre-Web times, of course, but the Net increases the amount of information available, and removes the physical and psychological barrier of entering a university library as a non-student, and facilitates connections to other (self-taught as well as 'official') students through newsgroups and email. What's more, the Web also allows adding one's own voice to academic debates: "in a book one can always move one's finger or pencil across the printed page, but one's intrusion always remains physically separate from the text. One may make a mark on the page, but one's intrusion does not affect the text itself" (Landow, Hypertext 44). By creating a Web page displaying one's own thoughts on the matter, providing links to related sites, and ideally receiving links from those sites, too, any outsider may now invade the discourse in an academic discipline. 
In most cases, such invasions may go largely unnoticed -- but nothing's to stop a self-taught enthusiast from creating a highly useful Website that even 'proper' academics may consider relevant, and so adding their own articles to the discipline's body of knowledge. As a side-effect of such presentation on the Web, then, texts by students are no longer so easily subordinated to those by revered authors, and disparities between them are less visible. The text as a site of authority can also become a site of resistance: in hypertext, indeed, opposition to the canonised texts is more likely to succeed in conditions of hypertextuality than in the print culture, if only because hypertext makes it easier to expose the contradictions and power moves in such texts, and the multiply constructed positions from which they might be read. (Snyder 77) Both these points pose a major problem for the currently prevalent conventions of academic debate, of course, which (despite post-structuralism's argument for the "death of the author") still evaluate the relevance of academic work partly based on its authorial source. Canonisation of particular scholars and their works (a process which is not limited only to literary disciplines) must ultimately fail -- "because all electronic texts are interrelated, none has well-defined borders; instead, each text reaches out to link up with past, present and future texts. It therefore becomes difficult to cordon off and to canonise a few great texts and authors" (Snyder 75). And generally, Nunberg notes, "media like the Web tend to resist attempts to impose the sort of solutions that enable us to manage (even imperfectly) the steady increase in the number of print documents -- the ramification of discourses and forms of publication, the imposition of systems of screening or refereeing, the restriction of the right to speak to 'qualified' participants" ("Farewell" 126).
The freely accessible information on the Web includes texts by revered researchers as well as badly-informed beginners, and elaborate essays as well as superficial scribblings. This realisation has caused many academics who grew up with the apparent simplicities of print to regard Internet-based media with despair and, frequently, with contempt; Nunberg himself provides a good example by stating that "any undergraduate student is free to post her night thoughts on Mary Shelley or the Klingon verb to a 'potential audience' of millions (a quick search of the Web turns up numerous examples of both), and there will be nothing in its mode of circulation to distinguish it from communications from better-qualified contributors" ("Farewell" 127). Such remarkably condescending prose indicates more than anything a paralysing fear of an invasion of the proverbial academic ivory tower by the uncouth hordes of self-taught dilettantes who have no respect for scholarly authority: Nunberg's insistence that a notion of academic 'qualification' (expressed no doubt in degrees and positions) could do any more than indicate vaguely that an author might have something valuable to say, and that anybody not 'qualified' this way cannot possibly contribute anything worth one's while, is surprisingly hierarchistic. Surely, in reality the onus for determining a text's worth should (and must) always eventually lie with the individual reader; the sense a text makes, not the source that made the text, should determine its quality. It's easy to see that this emphasis which Nunberg and others place on a text's source is in fact determined by print as the still-prevalent technology of information dissemination. As Bolter describes it, "the idea of a relatively stable canon made sense in a culture dominated by printed books. ... 
But the notion of a standard has now collapsed, and the collapse is mirrored in the shift from the printed to the electronic writing space, in which a stable canon of works and authors is meaningless" (237). Landow elaborates that hypertext's effects are so basic, so radical, that it reveals that many of our most cherished, most commonplace ideas and attitudes toward literature and literary production turn out to be the result of that particular form of information technology and technology of cultural memory that has provided the setting for them. This technology -- that of the printed book and of its close relations, which include the typed or printed page -- engenders certain notions of authorial property, authorial uniqueness, and a physically isolated text that hypertext makes untenable. The evidence of hypertext, in other words, historicises many of our most commonplace assumptions, thereby forcing them to descend from the ethereality of abstraction and appear as corollaries to a particular technology rooted in specific times and places. (33) Today, on the Web, however, where anyone can participate by adding their own texts or simply rearranging others', we lose once and for all notions of the author or the text as a stable entity. Thus, Nunberg claims, "on the Web ... you can never have the kind of experience that you can have with the informational genres of print, the experience of interpreting a text simply as a newspaper or encyclopedia article without attending to its author, its publisher, or the reliability of its recommender. We read Web documents, that is, not as information but as intelligence, which requires an explicit warrant of one form or another" ("Farewell" 127-8). 
Again, however, Nunberg claims a simplicity of the print media which simply doesn't exist: he goes on to say that "we should look to electronic discourse to provide a counter and complement to the informational forms of print -- a domain that privileges the personal, the private, and the subjective against the impersonal, the public, and the objective" (133). In reality, though, anyone who today still reads a newspaper or any other form of printed information as an 'objective' source, without an awareness of its publisher's or its journalists' political and economic agenda, must certainly be regarded as a naïve fool -- not just in Australia, with its atrocious standards of print journalism. If the modern media have taught us anything, it is that there is no such thing as 'objective truth'; the Web, with its unprecedented opportunities for world-wide publication, just makes this fact particularly obvious. While they may contribute to more openness in dealing with contributions from non-traditionally qualified sources, however, such realisations won't completely eradicate academia's fear of an invasion by the self-trained and the untrained. Some hope is at hand, though: "at the very moment indeed when the new technologies of memory can make us fear an alarming glut of traces -- a true change of scale in the collective accumulation of archives, at once written, audio, visual, and audiovisual -- these same technologies increasingly lighten its load, at almost the same pace, by facilitating individualised retrieval" (Debray 146); more elaborate search engines and resource listings on the Web can help point interested researchers to useful contributions both from within and without the ivory tower, and multiple alternative engines and listings may cater for various definitions of what constitutes 'useful'. 
"In the future, it seems, there will be no fixed canons of texts and no fixed epistemological boundaries between disciplines, only paths of inquiry, modes of integration, and moments of encounter" (Hesse 31). This may also have negative implications, though. On the one hand, as Bazin writes, "the digital empire puts too much emphasis on relation and circulation per se, rather than on the acquisition of content. Instead of the substantialist metaphysics of the hidden meaning which a 'vertical' reading would attempt to reveal, it prefers the rhetoric of exchange and conversation. It counters the aesthetics of depth with a pragmatics of interface" (163-4), and researchers on the Web may stay on the surface of a discipline rather than explore the very depths of its discourse -- they may stick with digests, digest-digests, digest-digest-digests, to borrow from Ray Bradbury (55). "Electronic linking almost inevitably tends to lead to blending and mixing of genres and modes ... . Hypertextualising a text produces not an electronic book but a miniature electronic library" (Landow, "Twenty Minutes" 226-7), and sticking to one's research topic may prove difficult. On the other end of the scale, the Net's tendency to group interests off into niches may lead to specific deeply involved research being done without any awareness of related disciplines that may offer alternative approaches to a subject -- in short, without any knowledge of the bigger picture one's discipline fits into. To avoid both pitfalls demands a researcher's discipline and attention. On the positive side, the invasion of the ivory tower allows for unprecedented public involvement (as Net theorists have often promised it): we are witnessing the appearance ... of a 'dynamic textuality' ... that by freeing itself from the straitjacket of the book is transforming not only the individual's relation to the text but also the traditional model of producing and transmitting learning and practical knowledge. 
In the place vacated by a linear transmission, inherited from forebears and relatively individualised, a system for the coemergence of bodies of knowledge is tending to be progressively substituted -- a system in which instruction, self-apprenticing, intellectual creation, and diffusion all closely cooperate. (Bazin 163) Naturally, this process won't mean that anybody can now easily become a nuclear scientist, economic expert, or cultural historian -- in most fields, to make it to the very top of the profession will still require a level of access to materials and equipment that only academic and professional institutions can offer. Nonetheless, more self-trained amateur enthusiasts will now be able to make meaningful contributions to their discipline -- a development we already begin to see in fields as diverse as astronomy, computer sciences, and some forms of literary studies. At the very least, it will create among the participants a more interested, more informed and more involved public, thinking for themselves and questioning the commonplaces of a print-based culture. "We are promised ... less of the dogmatic and more of the ludic, less of the canonical and more of the festive. Fewer arguments from authority, though more juxtaposition of authorities" (Debray 146). The invasion of the ivory tower is no attack on the Bastille -- the new dilettante invaders come to learn and share, not to destroy. References Bazin, Patrick. "Toward Metareading." Nunberg 153-68. Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1991. Bradbury, Ray. Fahrenheit 451. Berlin: Cornelsen-Velhagen & Klasing, 1985. Debray, Régis. "The Book as Symbolic Object." Nunberg 139-51. Hesse, Carla. "Books in Time." Nunberg 21-36. Landow, George P. Hypertext: The Convergence of Contemporary Critical Theory and Technology. Baltimore: Johns Hopkins UP, 1992. ---. 
"Twenty Minutes into the Future, or How Are We Moving beyond the Book?" Nunberg 209-37. Nunberg, Geoffrey. "Farewell to the Information Age." Nunberg 103-38. ---, ed. The Future of the Book. Berkeley: U of California P, 1996. Snyder, Ilana. Hypertext: The Electronic Labyrinth. Carlton South: Melbourne UP, 1996. Citation reference for this article MLA style: Axel Bruns. "Invading the Ivory Tower: Hypertext and the New Dilettante Scholars." M/C: A Journal of Media and Culture 2.2 (1999). [your date of access] <http://www.uq.edu.au/mc/9903/ivory.php>. Chicago style: Axel Bruns, "Invading the Ivory Tower: Hypertext and the New Dilettante Scholars," M/C: A Journal of Media and Culture 2, no. 2 (1999), <http://www.uq.edu.au/mc/9903/ivory.php> ([your date of access]). APA style: Axel Bruns. (1999) Invading the ivory tower: hypertext and the new dilettante scholars. M/C: A Journal of Media and Culture 2(2). <http://www.uq.edu.au/mc/9903/ivory.php> ([your date of access]).

9

Bruns, Axel. "What's the Story." M/C Journal 2, no. 5 (July 1, 1999). http://dx.doi.org/10.5204/mcj.1774.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Practically any good story follows certain narrative conventions in order to hold its readers' attention and leave them with a feeling of satisfaction -- this goes for fictional tales as well as for many news reports (we do tend to call them 'news stories', after all), for idle gossip as well as for academic papers. In the Western tradition of storytelling, it's customary to start with the exposition, build up to major events, and end with some form of narrative closure. Indeed, audience members will feel disturbed if there is no sense of closure at the end -- their desire for closure is a powerful one. From this brief description of narrative patterns it is also clear that such narratives depend crucially on linear progression through the story in order to work -- there may be flashbacks and flashforwards, but very few stories, it seems, could get away with beginning with their point of closure, and working back to the exposition. Closure, as the word suggests, closes the story, and once reached, the audience is left with the feeling of now knowing the whole story, of having all the pieces necessary to understand its events. To understand how important the desire to reach this point is to the audience, just observe the discussions of holes in the plot which people have when they're leaving a cinema: they're trying to reach a better sense of closure than was afforded them by the movie itself. In linearly progressing media, this seems, if you'll pardon the pun, straightforward. Readers know when they've finished an article or a book, viewers know when a movie or a broadcast is over, and they'll be able to assess then if they've reached sufficient closure -- if their desires have been fulfilled. On the World Wide Web, this is much more difficult: "once we have it in our hands, the whole of a book is accessible to us readers. 
However, in front of an electronic read-only hypertext document we are at the mercy of the author since we will only be able to activate the links which the author has provided" (McKnight et al. 119). In many cases, it's not even clear whether we've reached the end of the text already: just where does a Website end? Does the question even make sense? Consider the following example, reported by Larry Friedlander: I watched visitors explore an interactive program in a museum, one that contained a vast amount of material -- pictures, film, historic explanations, models, simulations. I was impressed by the range of subject matter and by the ambitiousness and polish of the presentation. ... But to my surprise, as I watched visitors going down one pathway after another, I noticed a certain dispirited glaze spread over their faces. They seemed to lose interest quite quickly and, in fact, soon stopped their explorations. (163) Part of the problem here may just have been the location of the programme, of course -- when you're out in public, you might just not have the time to browse as extensively as you could from your computer at home. But there are other explanations, too: the sheer amount of options for exploration may have been overwhelming -- there may not have been any apparent purpose to aim for, any closure to arrive at. This is a problem inherent in hypertext, particularly in networked systems like the Web: it "changes our conception of an ending. Different readers can choose not only to end the text at different points but also to add to and extend it. In hypertext there is no final version, and therefore no last word: a new idea or reinterpretation is always possible. ... By privileging intertextuality, hypertext provides a large number of points to which other texts can attach themselves" (Snyder 57). In other words, there will always be more out there than any reader could possibly explore, since new documents are constantly being added. 
There is no ending if a text is constantly extended. (In print media this problem appears only to a far more limited extent: there, intertextuality is mostly implicit, and even though new articles may constantly be added -- 'linked', if you will -- to a discourse, due to the medium's physical nature they're still very much separate entities, while Web links make intertextuality explicit and directly connect texts.) Does this mark the end of closure, then? Adding to the problem is the fact that it's not even possible to know how much of the hypertextual information available is still left unexplored, since there is no universal register of all the information available on the Web -- "the extent of hypertext is unknowable because it lacks clear boundaries and is often multi-authored" (Snyder 19). While reading a book you can check how many more pages you've got to go, but on the Web this is not an option. Our traditions of information transmission create this desire for closure, but the inherent nature of the medium prevents us from ever satisfying it. Barrett waxes lyrical in describing this dilemma: contexts presented online are often too limited for what we really want: an environment that delivers objects of desire -- to know more, see more, learn more, express more. We fear being caught in Medusa's gaze, of being transfixed before the end is reached; yet we want the head of Medusa safely on our shield to freeze the bitstream, the fleeting imagery, the unstoppable textualisations. We want, not the dead object, but the living body in its connections to its world, connections that sustain it, give it meaning. (xiv-v) We want nothing less, that is, than closure without closing: we desire the knowledge we need, and the feeling that that knowledge is sufficient to really know about a topic, but we don't want to devalue that knowledge in the same process by removing it from its context and reducing it to trivial truisms. 
We want the networked knowledge base that the Web is able to offer, but we don't want to feel overwhelmed by the unfathomable dimensions of that network. This is increasingly difficult the more knowledge is included in that network -- "with the growth of knowledge comes decreasing certainty. The confidence that went with objectivity must give way to the insecurity that comes from knowing that all is relative" (Smith 206). The fact that 'all is relative' is one which predates the Net, of course, and it isn't the Internet or the World Wide Web that has destroyed objectivity -- objectivity has always been an illusion, no matter how strongly journalists or scientists have at times laid claim to it. Internet-based media have simply stripped away more of the pretences, and laid bare the subjective nature of all information; in the process, they have also uncovered the fact that the desire for closure must ultimately remain unfulfilled in any sufficiently non-trivial case. Nonetheless, the early history of the Web has seen attempts to connect all the information available (LEO, one of the first major German Internet resource centres, for example, took its initials from its mission to 'Link Everything Online') -- but as the amount of information on the Net exploded, more and more editorial choices of what to include and what to leave out had to be made, so that now even search engines like Yahoo! and Altavista quite clearly and openly offer only a selection of what they consider useful sites on the Web. Web browsers still hoping to find everything on a certain topic would be well-advised to check with all major search engines, as well as important resource centres in the specific field. The average Web user would probably be happy with picking the search engine, Web directory or Web ring they find easiest to use, and sticking with it. 
The multitude of available options here actually shows one strength of the Internet and similar networks -- "the computer permits many [organisational] structures to coexist in the same electronic text: tree structures, circles, and lines can cross and recross without obstructing one another. The encyclopedic impulse to organise can run riot in this new technology of writing" (Bolter 95). Still, this multitude of options is also likely to confuse some users: in particular, "novices do not know in which order they need to read the material or how much they should read. They don't know what they don't know. Therefore learners might be sidetracked into some obscure corner of the information space instead of covering the important basic information" (Nielsen 190). They're like first-time visitors to a library -- but this library has constantly shifting aisles, more or less well-known pathways into specialty collections, fiercely competing groups of librarians, and it extends almost infinitely. Of course, the design of the available search and information tools plays an important role here, too -- far more than it is possible to explore at this point. Gay makes the general observation that "visual interfaces and navigational tools that allow quick browsing of information layout and database components are more effective at locating information ... than traditional index or text-based search tools. However, it should be noted that users are less secure in their findings. Users feel that they have not conducted complete searches when they use visual tools and interfaces" (185). 
Such technical difficulties (especially for novices) will slow take-up of, and lower satisfaction with, the medium (and many negative views of the Web can probably be traced to this dissatisfaction with the result of searches -- in other words, to a lack of satisfaction of the desire for closure); while many novices eventually overcome their initial confusion and become more Web-savvy, others might disregard the medium as unsuitable for their needs. At the other extreme of the scale, the inherent lack of closure, in combination with the societally deeply ingrained desire for it, may also be a strong contributing factor for another negative phenomenon associated with the Internet: that of Net users becoming Net junkies, who spend every available moment online. Where the desire to know, to get to the bottom (or more to the point: to the end) of a topic, becomes overwhelming, and where the fundamental unattainability of this goal remains unrealised, the step to an obsession with finding information seems a small one; indeed, the neverending search for that piece of knowledge surpassing all previously found ones seems to have obvious similarities to drug addiction with its search for the high to better all previous highs. And most likely, the addiction is only heightened by the knowledge that on the Web, new pieces of information are constantly being added -- an endless, and largely free, supply of drugs... There is no easy solution to this problem -- in the end, it is up to the user to avoid becoming an addict, and to keep in mind that there is no such thing as total knowledge. Web designers and content providers can help, though: "there are ways of orienting the reader in an electronic document, but in any true hypertext the ending must remain tentative. An electronic text never needs to end" (Bolter 87). 
As Tennant & Heilmeier elaborate, "the coming ease-of-use problem is one of developing transparent complexity -- of revealing the limits and the extent of vast coverage to users, and showing how the many known techniques for putting it all together can be used most effectively -- of complexity that reveals itself as powerful simplicity" (122). We have been seeing, therefore, the emergence of a new class of Websites: resource centres which help their visitors to understand a certain topic and view it from all possible angles, which point them in the direction of further information on- and off-site, and which give them an indication of how much they need to know to understand the topic to a certain degree. In this, they must ideally be very transparent, as Tennant & Heilmeier point out -- having accepted that there is no such thing as objectivity, it is necessary for these sites to point out that their offered insight into the field is only one of many possible approaches, and that their presented choice of information is based on subjective editorial decisions. They may present preferred readings, but they must indicate that these readings are open for debate. They may help satisfy some of their readers' desire for closure, but they must at the same time point out that they do so by presenting a temporary ending beyond which a more general story continues. If, as suggested above, closure crucially depends on a linear mode of presentation, such sites in their arguments help trace one linear route through the network of knowledge available online; they impose a linear from-us-to-you model of transmission on the normally unordered many-to-many structure of the Net. 
In the face of much doomsaying about the broadcast media, then, here is one possible future for these linear transmission media, and it's no surprise that such Internet 'push' broad- or narrowcasting is a growth area of the Net -- simply put, it serves the apparent need of users to be told stories, to have their desire for closure satisfied through clear narrative progressions from exposition through development to end. (This isn't 'push' as such, really: it's more a kind of 'push on demand'.) But at the same time, this won't mean the end of the unstructured, networked information that the Web offers: even such linear media ultimately build on that networked pool of knowledge. The Internet has simply made this pool public -- passively as well as actively accessible to everybody. Now, however, Web designers (and this includes each and every one of us, ultimately) must work "with the users foremost in mind, making sure that at every point there is a clear, simple and focussed experience that hooks them into the welter of information presented" (Friedlander 164); they must play to the desire for closure. (As with any preferred reading, however, there is also a danger that that closure is premature, and that the users' process of meaning-making is contained and stifled rather than aided.) To return briefly to Friedlander's experience with the interactive museum exhibit: he draws the conclusion that visitors were simply overwhelmed by the sheer mass of information and were reluctant to continue accumulating facts without a guiding purpose, without some sense of how or why they could use all this material. The technology that delivers immense bundles of data does not simultaneously deliver a reason for accumulating so much information, nor a way for the user to order and make sense of it. That is the designer's task. The pressing challenge of multimedia design is to transform information into usable and useful knowledge. 
(163) Perhaps this transformation is exactly what is at the heart of fulfilling the desire for closure: we feel satisfied when we feel we know something, have learnt something from a presentation of information (no matter if it's a news report or a fictional story). Nonetheless, this satisfaction must of necessity remain intermediate -- there is always much more still to be discovered. "From the hypertext viewpoint knowledge is infinite: we can never know the whole extent of it but only have a perspective on it. ... Life is in real-time and we are forced to be selective, we decide that this much constitutes one node and only these links are worth representing" (Beardon & Worden 69). This is not inherently different from processes in other media, where bandwidth limitations may even force much stricter gatekeeping regimes, but as in many cases the Internet brings these processes out into the open, exposes their workings and stresses the fundamental subjectivity of information. Users of hypertext (as indeed users of any medium) must be aware of this: "readers themselves participate in the organisation of the encyclopedia. They are not limited to the references created by the editors, since at any point they can initiate a search for a word or phrase that takes them to another article. They might also make their own explicit references (hypertextual links) for their own purposes ... . It is always a short step from electronic reading to electronic writing, from determining the order of texts to altering their structure" (Bolter 95). Significantly, too, it is this potential for wide public participation which has made the Internet into the medium of the day, and led to the World Wide Web's exponential growth; as Bolter describes, "today we cannot hope for permanence and for general agreement on the order of things -- in encyclopedias any more than in politics and the arts. 
What we have instead is a view of knowledge as collections of (verbal and visual) ideas that can arrange themselves into a kaleidoscope of hierarchical and associative patterns -- each pattern meeting the needs of one class of readers on one occasion" (97). To those searching for some meaningful 'universal truth', this will sound defeatist, but ultimately it is closer to realism -- one person's universal truth is another one's escapist phantasy, after all. This doesn't keep most of us from hoping and searching for that deeper insight, however -- and from the preceding discussion, it seems likely that in this we are driven by the desire for closure that has been imprinted in us so deeply by the multitudes of narrative structures we encounter each day. It's no surprise, then, that, as Barrett writes, "the virtual environment is a place of longing. Cyberspace is an odyssey without telos, and therefore without meaning. ... Yet cyberspace is also the theatre of operations for the reconstruction of the lost body of knowledge, or, perhaps more correctly, not the reconstruction, but the always primary construction of a body of knowing. Thought and language in a virtual environment seek a higher synthesis, a re-imagining of an idea in the context of its truth" (xvi). And so we search on, following that by definition end-less quest to satisfy our desire for closure, and sticking largely to the narrative structures handed down to us through the generations. This article is no exception, of course -- but while you may gain some sense of closure from it, it is inevitable that there is a deeper feeling of a lack of closure, too, as the article takes its place in a wider hypertextual context, where so much more is still left unexplored: other articles in this issue, other issues of M/C, and further journals and Websites adding to the debate. Remember this, then: you decide when and where to stop. References Barrett, Edward, and Marie Redmont, eds. 
Contextual Media: Multimedia and Interpretation. Cambridge, Mass.: MIT P, 1995. Barrett, Edward. "Hiding the Head of Medusa: Objects and Desire in a Virtual Environment." Barrett & Redmont xi-xvi. Beardon, Colin, and Suzette Worden. "The Virtual Curator: Multimedia Technologies and the Roles of Museums." Barrett & Redmont 63-86. Bolter, Jay David. Writing Space: The Computer, Hypertext, and the History of Writing. Hillsdale, N.J.: Lawrence Erlbaum Associates, 1991. Friedlander, Larry. "Spaces of Experience: On Designing Multimedia Applications." Barrett & Redmont 163-74. Gay, Geri. "Issues in Accessing and Constructing Multimedia Documents." Barrett & Redmont 175-88. McKnight, Cliff, John Richardson, and Andrew Dillon. "The Authoring of Hypertext Documents." Hypertext: Theory into Practice. Ed. Ray McAleese. Oxford: Intellect, 1993. Nielsen, Jakob. Hypertext and Hypermedia. Boston: Academic Press, 1990. Smith, Anthony. Goodbye Gutenberg: The Newspaper Revolution of the 1980's [sic]. New York: Oxford UP, 1980. Snyder, Ilana. Hypertext: The Electronic Labyrinth. Carlton South: Melbourne UP, 1996. Tennant, Harry, and George H. Heilmeier. "Knowledge and Equality: Harnessing the Truth of Information Abundance." Technology 2001: The Future of Computing and Communications. Ed. Derek Leebaert. Cambridge, Mass.: MIT P, 1991. Citation reference for this article MLA style: Axel Bruns. "What's the Story: The Unfulfilled Desire for Closure on the Web." M/C: A Journal of Media and Culture 2.5 (1999). [your date of access] <http://www.uq.edu.au/mc/9907/closure.php>. Chicago style: Axel Bruns, "What's the Story: The Unfulfilled Desire for Closure on the Web," M/C: A Journal of Media and Culture 2, no. 5 (1999), <http://www.uq.edu.au/mc/9907/closure.php> ([your date of access]). APA style: Axel Bruns. (1999) What's the story: the unfulfilled desire for closure on the Web. M/C: A Journal of Media and Culture 2(5). <http://www.uq.edu.au/mc/9907/closure.php> ([your date of access]).

10

Downes, Daniel M. "The Medium Vanishes?" M/C Journal 3, no. 1 (March 1, 2000). http://dx.doi.org/10.5204/mcj.1829.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Introduction The recent AOL/Time-Warner merger invites us to re-think the relationships amongst content producers, distributors, and audiences. Worth an estimated $300 billion (US), the largest Internet transaction of all time, the deal is 45 times larger than the AOL/Netscape merger of November 1998 (Ledbetter). Additionally, the Time Warner/EMI merger, which followed hard on the heels of the AOL/Time-Warner deal and is itself worth $28 billion (US), created the largest content rights organisation in the music industry. The joining of the Internet giant (AOL) with what was already the world's largest media corporation (Time-Warner-EMI) has inspired some exuberant reactions. An Infoworld column proclaimed: The AOL/Time-Warner merger signals the demise of traditional media companies and the ascendancy of 'new economy' media companies that will force any industry hesitant to adopt a complete electronic-commerce strategy to rethink and put itself on Internet time. (Saap & Schwartz) This comment identifies the distribution channel as the dominant component of the "new economy" media. But this might not really be much of an innovation. Indeed, the assumption of all industry observers is that Time-Warner will provide broadband distribution (through its extensive cable holdings) as well as proprietary content for AOL. It is also expected that Time-Warner will adopt AOL's strategy of seeking sponsorship for development projects as well as for content. However, both of these phenomena -- merger and sponsorship -- are at least as old as radio. It seems that the Internet is merely repeating an old industrial strategy. Nonetheless, one important difference distinguishes the Internet from earlier media: its characterisation of the audience. Internet companies such as AOL and Microsoft tend towards a simple and simplistic media-centred view of the audience as market. 
I will show, however, that as the Internet assumes more of the traditional mass media functions, it will be forced to adopt a more sophisticated notion of the mass audience. Indeed, the Internet is currently the site in which audience definitions borrowed from broadcasting are encountering and merging with definitions borrowed from marketing. The Internet apparently lends itself to both models. As a result, definitions of what the Internet does or is, and of how we should understand the audience, are suitably confused and opaque. And the behaviour of big Internet players, such as AOL and MSN, perfectly reflects this confusion as they seem to careen between a view of the Internet as the new television and a contrasting view of the Internet as the new shopping mall. Meanwhile, Internet users move in ways that most observers fail to capture. For example, Baran and Davis characterise mass communication as a process involving (1) an organized sender, (2) engaged in the distribution of messages, (3) directed toward a large audience. They argue that broadcasting fits this model whereas a LISTSERV does not because, even though the LISTSERV may have very many subscribers, its content is filtered through a single person or Webmaster. But why is the Webmaster suddenly more determining than a network programmer or magazine editor? The distinction seems to grow out of the Internet's technological characteristics: it is an interactive pipeline, therefore its use necessarily excludes the possibility of "broadcasting" which in turn causes us to reject "traditional" notions of the audience. However, if a media organisation were to establish an AOL discussion group in order to promote Warner TV shows, for example, would not the resulting communication suddenly fall under the definition as set out by Baran and Davis? 
It was precisely the confusion around such definitions that caused the CRTC (Canada's broadcasting and telecommunications regulator) to hold hearings in 1999 to determine what kind of medium the Internet is. Unlike traditional broadcasting, Internet communication does indeed include the possibility of interactivity and niche communities. In this sense, it is closer to narrowcasting than to broadcasting even while maintaining the possibility of broadcasting. Hence, the nature of the audience using the Internet quickly becomes muddy. While such muddiness might have led us to sharpen our definitions of the audience, it seems instead to have led many to focus on the medium itself. For example, Morris & Ogan define the Internet as a mass medium because it addresses a mass audience mediated through technology (Morris & Ogan 39). They divide producers and audiences on the Internet into four groups: One-to-one asynchronous communication (e-mail); Many-to-many asynchronous communication (Usenet and News Groups); One-to-one, one-to-few, and one-to-many synchronous communication (topic groups, construction of an object, role-playing games, IRC chats, chat rooms); Asynchronous communication (searches; many-to-one, one-to-one, and one-to-many source-receiver relations) (Morris & Ogan 42-3). Thus, some Internet communication qualifies as mass communication while some does not. However, the focus remains firmly anchored on either the sender or the medium because the receiver -- the audience -- is apparently too slippery to define. When definitions do address the content distributed over the Net, they make a distinction between passive reception and interactive participation. As the World Wide Web makes pre-packaged content the norm, the Internet increasingly resembles a traditional mass medium. 
Timothy Roscoe argues that the main focus of the World Wide Web is not the production of content (and, hence, the fulfilment of the Internet's democratic potential) but rather the presentation of already produced material: "the dominant activity in relation to the Web is not producing your own content but surfing for content" (Roscoe 680). He concludes that if the emphasis is on viewing material, the Internet will become a medium similar to television. Within media studies, several models of the audience compete for dominance in the "new media" economy. Denis McQuail recalls how historically, the electronic media furthered the view of the audience as a "public". The audience was an aggregate of common interests. With broadcasting, the electronic audience was delocalised and socially decomposed (McQuail, Mass 212). According to McQuail, it was not a great step to move from understanding the audience as a dispersed "public" to thinking about the audience as itself a market, both for products and as a commodity to be sold to advertisers. McQuail defines this conception of the audience as an "aggregate of potential customers with a known social-economic profile at which a medium or message is directed" (McQuail, Mass 221). Oddly though, in light of the emancipatory claims made for the Internet, this is precisely the dominant view of the audience in the "new media economy". Media Audience as Market How does the marketing model characterise the relationship between audience and producer? According to McQuail, the marketing model links sender and receiver in a cash transaction between producer and consumer rather than in a communicative relationship between equal interlocutors. Such a model ignores the relationships amongst consumers. Indeed, neither the effectiveness of the communication nor the quality of the communicative experience matters. 
This model, explicitly calculating and implicitly manipulative, is characteristically a "view from the media" (McQuail, Audience 9). Some scholars, when discussing new media, no longer even refer to audiences. They speak of users or consumers (Pavlik & Dennis). The logic of the marketing model lies in the changing revenue base for media industries. Advertising-supported media revenues have been dropping since the early 1990s while user-supported media such as cable, satellite, online services, and pay-per-view, have been steadily growing (Pavlik & Dennis 19). In the Internet-based media landscape, the audience is a revenue stream and a source of consumer information. As Bill Gates says, it is all about "eyeballs". In keeping with this view, AOL hopes to attract consumers with its "one-stop shopping and billing". And Internet providers such as MSN do not even consider their subscribers as "audiences". Instead, they work from a consumer model derived from the computer software industry: individuals make purchases without the seller providing content or thematising the likely use of the software. The analogy extends well beyond the transactional moment. The common practice of prototyping products and beta-testing software requires the participation of potential customers in the product development cycle not as a potential audience sharing meanings but as recalcitrant individuals able to uncover bugs. Hence, media companies like MTV now use the Internet as a source of sophisticated demographic research. Recently, MTV Asia established a Website as a marketing tool to collect preferences and audience profiles (Slater 50). The MTV audience is now part of the product development cycle. Another method for getting information involves the "cookie" file that automatically provides a Website with information about the user who logs on to a site (Pavlik & Dennis). 
Simultaneously, though, both Microsoft and AOL have consciously shifted from user-subscription revenues to advertising in an effort to make online services more like television (Gomery; Darlin). For example, AOL has long tried to produce content through its own studios to generate sufficiently heavy traffic on its Internet service in order to garner profitable advertising fees (Young). However, AOL and Microsoft have had little success in providing content (Krantz; Manes). In fact, faced with the AOL/Time-Warner merger, Microsoft declared that it was in the software rather than the content business (Trott). In short, they are caught between a broadcasting model and a consumer model and their behaviour is characteristically erratic. Similarly, media companies such as Time-Warner have failed to establish their own portals. Indeed, Time-Warner even abandoned attempts to create large Websites to compete with other Internet services when it shut down its Pathfinder site (Egan). Instead it refocussed its Websites so as to blur the line between pitching products and covering them (Reid; Lyons). Since one strategy for gaining large audiences is the creation of portals -- large Websites that keep surfers within the confines of a single company's site by providing content -- this is the logic behind the AOL/Time-Warner merger though both companies have clearly been unsuccessful at precisely such attempts. AOL seems to hope that Time-Warner will act as its content specialist, providing the type of compelling material that will make users want to use AOL, whereas Time-Warner seems to hope that AOL will become its privileged pipeline to the hearts and minds of untold millions. Neither has a coherent view of the audience, how it behaves, or should behave. Consequently, their efforts have a distinctly "unmanaged" and slightly inexplicable air to them, as though everyone were simultaneously hopeful and clueless. 
While one might argue that the stage is set to capitalise on the audience as commodity, there are indications that the success of such an approach is far from guaranteed. First, the AOL/Time-Warner/EMI transaction, merely by existing, has sparked conflicts over proprietary rights. For example, the Recording Industry Association of America, representing Sony, Universal, BMG, Warner and EMI, recently launched a $6.8 billion lawsuit against MP3.com -- an AOL subsidiary -- for alleged copyright violations. Specifically, MP3.com is being sued for selling digitized music over the Internet without paying royalties to the record companies (Anderson). A similar lawsuit has recently been launched over the issue of re-broadcasting television programs over the Internet. The major US networks have joined together against Canadian Internet company iCravetv for the unlawful distribution of content. Both the iCravetv and the MP3.com cases show how dominant media players can marshal their forces to protect proprietary rights in both content and distribution. Since software and media industries have failed to recreate the Internet in the image of traditional broadcasting, the merger of the dominant players in each industry makes sense. However, their simultaneous failure to secure proprietary rights reflects both the competitive nature of the "new media economy" and the weakness of the marketing view of the audience. Media Audience as Public It is often said that communication produces social cohesion. From such cohesion communities emerge on which political or social orders can be constructed. The power of social cohesion and attachment to group symbols can even create a sense of belonging to a "people" or nation (Deutsch). Sociologist Daniel Bell described how the mass media helped create an American culture simply by addressing a large enough audience. 
He suggested that on the evening of 7 March 1955, when one out of every two Americans could see Mary Martin as Peter Pan on television, a kind of social revolution occurred and a new American public was born. "It was the first time in history that a single individual was seen and heard at the same time by such a broad public" (Bell, quoted in Mattelart 72). One could easily substitute the 1953 World Series or the birth of little Ricky on I Love Lucy. The desire to document such a process recurs with the Internet. Internet communities are based on the assumption that a common experience "creates" group cohesion (Rheingold; Jones). However, as a mass medium, the Internet has yet to find its originary moment, that event to which all could credibly point as the birth of something genuine and meaningful. A recent contender was the appearance of Paul McCartney at the refurbished Cavern Club in Liverpool. On Tuesday, 14 December 1999, McCartney played to a packed club of 300 fans, while another 150,000 watched on an outdoor screen nearby. MSN arranged to broadcast the concert live over the Internet. It advertised an anticipated global audience of 500 million. Unfortunately, there was such heavy Internet traffic that the system was unable to accommodate more than 3 million people. Servers in the United Kingdom were so congested that many could only watch the choppy video stream via an American link. The concert raises a number of questions about "virtual" events. We can draw several conclusions about measuring Internet audiences. While 3 million is a sizeable audience for a 20 minute transmission, by advertising a potential audience of 500 million, MSN showed remarkably poor judgment of its inherent appeal. The Internet is the first medium that allows access to unprocessed material or information about events to be delivered to an audience with neither the time constraints of broadcast media nor the space limitations of the traditional press. 
This is often cited as one of the characteristics that sets the Internet apart from other media. This feeds the idea of the Internet audience as a participatory, democratic public. For example, it is often claimed that the Internet can foster democratic participation by providing voters with uninterpreted information about candidates and issues (Selnow). However, as James Curran argues, the very process of distributing uninterrupted, unfiltered information, at least in the case of traditional mass media, represents an abdication of a central democratic function -- that of watchdog to power (Curran). In the end, publics are created and maintained through active and continuous participation on the part of communicators and audiences. The Internet holds together potentially conflicting communicative relationships within the same technological medium (Morris & Ogan). Viewing the audience as co-participant in a communicative relationship makes more sense than simply focussing on the Internet audience as either an aggregate of consumers or a passively constructed symbolic public. Audience as Relationship Many scholars have shifted attention from the producer to the audience as an active participant in the communication process (Ang; McQuail, Audience). Virginia Nightingale goes further to describe the audience as part of a communicative relationship. Nightingale identifies four factors in the relationship between audiences and producers that emphasize their co-dependency. The audience and producer are engaged in a symbiotic relationship in which consumption and use are necessary but not sufficient explanations of audience relations. The notion of the audience invokes, at least potentially, a greater range of activities than simply use or consumption. Further, the audience actively, if not always consciously, enters relationships with content producers and the institutions that govern the creation, distribution and exhibition of content (Nightingale 149-50). 
Others have demonstrated how this relationship between audiences and producers is no longer the one-sided affair characterised by the marketing model or the model of the audience as public. A global culture is emerging based on critical viewing skills. Kavoori calls this a reflexive mode born of an increasing familiarity with the narrative conventions of news and an awareness of the institutional imperatives of media industries (Kavoori). Given the sophistication of the emergent global audience, a theory that reduces new media audiences to a set of consumer preferences or behaviours will inevitably prove inadequate, just as it has for understanding audience behavior in old media. Similarly, by ignoring those elements of audience behavior that will be easily transported to the Web, we run the risk of idealising the Internet as a medium that will create an illusory, pre-technological public. Conclusion There is an understandable confusion between the two models of the audience that appear in the examples above. The "new economy" will have to come to terms with sophisticated audiences. Contrary to IBM's claim that they want to "get to know all about you", Internet users do not seem particularly interested in becoming a perpetual source of market information. The fragmented, autonomous audience resists attempts to lock it into proprietary relationships. Internet hypesters talk about creating publics and argue that the Internet recreates the intimacy of community as a corrective to the atomisation and alienation characteristic of mass society. This faith in the power of a medium to create social cohesion recalls the view of the television audience as a public constructed by the common experience of watching an important event. However, MSN's McCartney concert indicates that creating a public from spectacle is not a simple process. In fact, what the Internet media conglomerates seem to want more than anything is to create consumer bases. 
Audiences exist for pleasure and by the desire to be entertained. As Internet media institutions are established, the cynical view of the audience as a source of consumer behavior and preferences will inevitably give way, to some extent, to a view of the audience as participant in communication. Audiences will be seen, as they have been by other media, as groups whose attention must be courted and rewarded. Who knows, maybe the AOL/Time-Warner merger might, indeed, signal the new medium's coming of age. References Anderson, Lessley. "To Beam or Not to Beam. MP3.com Is Being Sued by the Major Record Labels. Does the Digital Download Site Stand a Chance?" Industry Standard 31 Jan. 2000. <http://www.thestandard.com>. Ang, Ien. Watching Dallas: Soap Opera and the Melodramatic Imagination. London: Methuen, 1985. Baran, Stanley, and Dennis Davis. Mass Communication Theory: Foundations, Ferment, and Future. 2nd ed. Belmont, Calif.: Wadsworth 2000. Curran, James. "Mass Media and Democracy Revisited." Mass Media and Society. Eds. James Curran and Michael Gurevitch. New York: Hodder Headline Group, 1996. Darlin, Damon. "He Wants Your Eyeballs." Forbes 159 (16 June 1997): 114-6. Egan, Jack, "Pathfinder, Rest in Peace: Time-Warner Pulls the Plug on Site." US News and World Report 126.18 (10 May 1999): 50. Gomery, Douglas. "Making the Web Look like Television (American Online and Microsoft)." American Journalism Review 19 (March 1997): 46. Jones, Steve, ed. CyberSociety: Computer-Mediated Communication and Community. Thousand Oaks: Sage, 1995. Kavoori, Amandam P. "Discursive Texts, Reflexive Audiences: Global Trends in Television News Texts and Audience Reception." Journal of Broadcasting and Electronic Media 43.3 (Summer 1999): 386-98. Krantz, Michael. "Is MSN on the Block?" Time 150 (20 Oct. 1997): 82. Ledbetter, James. "AOL-Time-Warner Make It Big." Industry Standard 11 Jan. 2000. <http://www.thestandard.com>. Lyons, Daniel. 
"Desperate.com (Media Companies Losing Millions on the Web Turn to Electronic Commerce)." Forbes 163.6 (22 March 1999): 50-1. Manes, Stephen. "The New MSN as Prehistoric TV." New York Times 4 Feb. 1997: C6. McQuail, Denis. Audience Analysis. Thousand Oaks, Calif.: Sage, 1997. ---. Mass Communication Theory. 2nd ed. London: Sage, 1987. Mattelart, Armand. Mapping World Communication: War, Progress, Culture. Trans. Susan Emanuel and James A. Cohen. Minneapolis: U of Minnesota P, 1994. Morris, Merrill, and Christine Ogan. "The Internet as Mass Medium." Journal of Communications 46 (Winter 1996): 39-50. Nightingale, Virginia. Studying Audience: The Shock of the Real. London: Routledge, 1996. Pavlik, John V., and Everette E. Dennis. New Media Technology: Cultural and Commercial Perspectives. 2nd ed. Boston: Allyn and Bacon, 1998. Reid, Calvin. "Time-Warner Seeks Electronic Synergy, Profits on the Web (Pathfinder Site)." Publisher's Weekly 242 (4 Dec. 1995): 12. Rheingold, Howard. Virtual Community: Homesteading on the Electronic Frontier. New York: Harper, 1993. Roscoe, Timothy. "The Construction of the World Wide Web Audience." Media, Culture and Society 21.5 (1999): 673-84. Saap, Geneva, and Ephraim Schwarrtz. "AOL-Time-Warner Deal to Impact Commerce, Content, and Access Markets." Infoworld 11 January 2000. <http://infoworld.com/articles/ic/xml/00/01/11/000111icimpact.xml>. Slater, Joanna. "Cool Customers: Music Channels Hope New Web Sites Tap into Teen Spirit." Far Eastern Economic Review 162.9 (4 March 1999): 50. Trott, Bob. "Microsoft Views AOL-Time-Warner as Confirmation of Its Own Strategy." Infoworld 11 Jan. 2000. <http://infoworld.com/articles/pi/xml/00/01/11/000111pimsaoltw.xml>. Yan, Catherine. "A Major Studio Called AOL?" Business Week 1 Dec. 1997: 1773-4. Citation reference for this article MLA style: Daniel M. Downes. "The Medium Vanishes? The Resurrection of the Mass Audience in the New Media Economy." M/C: A Journal of Media and Culture 3.1 (2000). 
[your date of access] <http://www.uq.edu.au/mc/0003/mass.php>. Chicago style: Daniel M. Downes, "The Medium Vanishes? The Resurrection of the Mass Audience in the New Media Economy," M/C: A Journal of Media and Culture 3, no. 1 (2000), <http://www.uq.edu.au/mc/0003/mass.php> ([your date of access]). APA style: Daniel M. Downes. (2000) The Medium Vanishes? The Resurrection of the Mass Audience in the New Media Economy. M/C: A Journal of Media and Culture 3(1). <http://www.uq.edu.au/mc/0003/mass.php> ([your date of access]).

11

Shaheed Al-Azzawy, Dhyaa, and Sinan Adnan Diwan. "Design of Intelligent Agent Based management security system for E-government." Journal of Al-Qadisiyah for computer science and mathematics 9, no.2 (November20, 2017). http://dx.doi.org/10.29304/jqcm.2017.9.2.322.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

E-Government delivers services electronically to citizens, businesses, and other government entities, in contrast to the document-centric approach of traditional government service delivery. One of the most crucial factors in the reliability of government services is security, which ultimately determines social acceptance and satisfaction. E-Government is today's response to the rapid development of information technology, especially the automation of service delivery. The model introduced in this paper is built on the social behaviour of entities that share knowledge and provide baselines for decision-making. Each entity captures its own knowledge and consolidates it with that of the other entities in its community. In this paper, JAVA agents were built to represent individuals and attached to certain interaction points; for example, each intelligent agent is attached to a web site representing a source of knowledge and behaviour capture. Results proved that the social behaviour of intelligent software agents holds great potential for establishing social acceptance, owing to their smart behaviour in collecting information about service utilisation. Three sites were built during the implementation to pursue the paper's hypothesis: each represents a government web site delivering certain services, with an intelligent agent attached to capture user behaviour and later broadcast the captured knowledge to the other agents (the community is composed of four agents). Knowledge and expertise were mutually exchanged, and the overall knowledge was shown to converge toward that of the most experienced agent.
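The broadcast-and-converge behaviour the abstract describes can be illustrated with a minimal sketch. This is not the paper's JAVA implementation: the class and site names are invented, and knowledge is reduced to a set of observation strings, but it shows how per-site capture plus community-wide broadcasting drives every agent's knowledge toward the union held by the best-informed member.

```python
# Illustrative sketch (not the paper's JAVA system): agents attached to
# hypothetical sites capture local observations and broadcast them to peers.

class SiteAgent:
    def __init__(self, name):
        self.name = name
        self.knowledge = set()  # observations captured at the attached site

    def capture(self, observation):
        """Record behaviour observed at this agent's site."""
        self.knowledge.add(observation)

    def broadcast(self, community):
        """Share everything this agent knows with every other agent."""
        for peer in community:
            if peer is not self:
                peer.knowledge |= self.knowledge

# Three site agents plus a fourth member, mirroring the paper's setup of
# three service sites and a four-agent community.
community = [SiteAgent(n) for n in ("site-a", "site-b", "site-c", "coordinator")]
community[0].capture("users prefer form X")
community[1].capture("peak load at 9am")
community[2].capture("service Y rarely used")

for agent in community:
    agent.broadcast(community)

# After one full round of broadcasts, knowledge has converged: every agent
# holds the union of all captured observations.
assert all(a.knowledge == community[0].knowledge for a in community)
```

A single round suffices here only because every agent broadcasts to every peer; with partial connectivity, convergence would take repeated rounds, which is closer to the gradual knowledge exchange the paper reports.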

12

Burgess, Jean, and Axel Bruns. "Twitter Archives and the Challenges of "Big Social Data" for Media and Communication Research." M/C Journal 15, no.5 (October11, 2012). http://dx.doi.org/10.5204/mcj.561.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Lists and Social Media Lists have long been an ordering mechanism for computer-mediated social interaction. While far from being the first such mechanism, blogrolls offered an opportunity for bloggers to provide a list of their peers; the present generation of social media environments similarly provide lists of friends and followers. Where blogrolls and other earlier lists may have been user-generated, the social media lists of today are more likely to have been produced by the platforms themselves, and are of intrinsic value to the platform providers at least as much as to the users themselves; both Facebook and Twitter have highlighted the importance of their respective “social graphs” (their databases of user connections) as fundamental elements of their fledgling business models. This represents what Mejias describes as “nodocentrism,” which “renders all human interaction in terms of network dynamics (not just any network, but a digital network with a profit-driven infrastructure).” The communicative content of social media spaces is also frequently rendered in the form of lists. Famously, blogs are defined in the first place by their reverse-chronological listing of posts (Walker Rettberg), but the same is true for current social media platforms: Twitter, Facebook, and other social media platforms are inherently centred around an infinite, constantly updated and extended list of posts made by individual users and their connections. The concept of the list implies a certain degree of order, and the orderliness of content lists as provided through the latest generation of centralised social media platforms has also led to the development of more comprehensive and powerful, commercial as well as scholarly, research approaches to the study of social media. 
Using the example of Twitter, this article discusses the challenges of such “big data” research as it draws on the content lists provided by proprietary social media platforms. Twitter Archives for Research Twitter is a particularly useful source of social media data: using the Twitter API (the Application Programming Interface, which provides structured access to communication data in standardised formats) it is possible, with a little effort and sufficient technical resources, for researchers to gather very large archives of public tweets concerned with a particular topic, theme or event. Essentially, the API delivers very long lists of hundreds, thousands, or millions of tweets, and metadata about those tweets; such data can then be sliced, diced and visualised in a wide range of ways, in order to understand the dynamics of social media communication. Such research is frequently oriented around pre-existing research questions, but is typically conducted at unprecedented scale. The projects of media and communication researchers such as Papacharissi and de Fatima Oliveira, Wood and Baughman, or Lotan, et al.—to name just a handful of recent examples—rely fundamentally on Twitter datasets which now routinely comprise millions of tweets and associated metadata, collected according to a wide range of criteria. What is common to all such cases, however, is the need to make new methodological choices in the processing and analysis of such large datasets on mediated social interaction. Our own work is broadly concerned with understanding the role of social media in the contemporary media ecology, with a focus on the formation and dynamics of interest- and issues-based publics. 
We have mined and analysed large archives of Twitter data to understand contemporary crisis communication (Bruns et al), the role of social media in elections (Burgess and Bruns), and the nature of contemporary audience engagement with television entertainment and news media (Harrington, Highfield, and Bruns). Using a custom installation of the open source Twitter archiving tool yourTwapperkeeper, we capture and archive all the available tweets (and their associated metadata) containing a specified keyword (like “Olympics” or “dubstep”), name (Gillard, Bieber, Obama) or hashtag (#ausvotes, #royalwedding, #qldfloods). In their simplest form, such Twitter archives are commonly stored as delimited (e.g. comma- or tab-separated) text files, with each of the following values in a separate column:
text: contents of the tweet itself, in 140 characters or less
to_user_id: numerical ID of the tweet recipient (for @replies)
from_user: screen name of the tweet sender
id: numerical ID of the tweet itself
from_user_id: numerical ID of the tweet sender
iso_language_code: code (e.g. en, de, fr, ...) of the sender’s default language
source: client software used to tweet (e.g. Web, Tweetdeck, ...)
profile_image_url: URL of the tweet sender’s profile picture
geo_type: format of the sender’s geographical coordinates
geo_coordinates_0: first element of the geographical coordinates
geo_coordinates_1: second element of the geographical coordinates
created_at: tweet timestamp in human-readable format
time: tweet timestamp as a numerical Unix timestamp
In order to process the data, we typically run a number of our own scripts (written in the programming language Gawk) which manipulate or filter the records in various ways, and apply a series of temporal, qualitative and categorical metrics to the data, enabling us to discern patterns of activity over time, as well as to identify topics and themes, key actors, and the relations among them; in some circumstances we may also undertake further processes of filtering and close textual analysis of the content of the tweets. Network analysis (of the relationships among actors in a discussion; or among key themes) is undertaken using the open source application Gephi. While a detailed methodological discussion is beyond the scope of this article, further details and examples of our methods and tools for data analysis and visualisation, including copies of our Gawk scripts, are available on our comprehensive project website, Mapping Online Publics. In this article, we reflect on the technical, epistemological and political challenges of such uses of large-scale Twitter archives within media and communication studies research, positioning this work in the context of the phenomenon that Lev Manovich has called “big social data.” In doing so, we recognise that our empirical work on Twitter is concerned with a complex research site that is itself shaped by a complex range of human and non-human actors, within a dynamic, indeed volatile media ecology (Fuller), and using data collection and analysis methods that are in themselves deeply embedded in this ecology. 
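The processing workflow described above (reading a delimited archive and applying a temporal metric) can be sketched briefly. Python is substituted here for the authors' Gawk scripts, which the article does not reproduce; the column names follow the article's field list, while the two sample rows are invented for illustration.

```python
# Sketch of a simple temporal metric over a tab-separated Twitter archive:
# read rows with the thirteen columns described in the article and tally
# tweets per hour. The sample data below is invented for illustration.
import csv
import io
from collections import Counter
from datetime import datetime, timezone

FIELDS = ["text", "to_user_id", "from_user", "id", "from_user_id",
          "iso_language_code", "source", "profile_image_url", "geo_type",
          "geo_coordinates_0", "geo_coordinates_1", "created_at", "time"]

sample = "\t".join([
    "Flood update #qldfloods", "", "alice", "1", "100", "en", "Web",
    "", "", "", "", "Tue, 11 Jan 2011 04:30:00 +0000", "1294720200",
]) + "\n" + "\t".join([
    "Stay safe everyone #qldfloods", "", "bob", "2", "101", "en", "Tweetdeck",
    "", "", "", "", "Tue, 11 Jan 2011 04:45:00 +0000", "1294721100",
])

per_hour = Counter()
for row in csv.DictReader(io.StringIO(sample), fieldnames=FIELDS, delimiter="\t"):
    # Use the numerical Unix timestamp; it is unambiguous across locales.
    stamp = datetime.fromtimestamp(int(row["time"]), tz=timezone.utc)
    per_hour[stamp.strftime("%Y-%m-%d %H:00")] += 1

print(per_hour)  # both sample tweets fall in the same hour
```

The same pattern-per-record loop extends naturally to the other metrics mentioned (counting @replies via to_user_id, client software via source, or sender activity via from_user), which is essentially what a Gawk pattern-action script over the same columns would do.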
“Big Social Data” As Manovich’s term implies, the Big Data paradigm has recently arrived in media, communication and cultural studies—significantly later than it did in the hard sciences, in more traditionally computational branches of social science, and perhaps even in the first wave of digital humanities research (which largely applied computational methods to pre-existing, historical “big data” corpora)—and this shift has been provoked in large part by the dramatic quantitative growth and apparently increased cultural importance of social media—hence, “big social data.” As Manovich puts it: For the first time, we can follow [the] imaginations, opinions, ideas, and feelings of hundreds of millions of people. We can see the images and the videos they create and comment on, monitor the conversations they are engaged in, read their blog posts and tweets, navigate their maps, listen to their track lists, and follow their trajectories in physical space. (Manovich 461) This moment has arrived in media, communication and cultural studies because of the increased scale of social media participation and the textual traces that this participation leaves behind—allowing researchers, equipped with digital tools and methods, to “study social and cultural processes and dynamics in new ways” (Manovich 461). However, and crucially for our purposes in this article, many of these scholarly possibilities would remain latent if it were not for the widespread availability of Open APIs for social software (including social media) platforms. 
APIs are technical specifications of how one software application should access another, thereby allowing the embedding or cross-publishing of social content across Websites (so that your tweets can appear in your Facebook timeline, for example), or allowing third-party developers to build additional applications on social media platforms (like the Twitter user ranking service Klout), while also allowing platform owners to impose de facto regulation on such third-party uses via the same code. While platform providers do not necessarily have scholarship in mind, the data access affordances of APIs are also available for research purposes. As Manovich notes, until very recently almost all truly “big data” approaches to social media research had been undertaken by computer scientists (464). But as part of a broader “computational turn” in the digital humanities (Berry), and because of the increased availability to non-specialists of data access and analysis tools, media, communication and cultural studies scholars are beginning to catch up. Many of the new, large-scale research projects examining the societal uses and impacts of social media—including our own—which have been initiated by various media, communication, and cultural studies research leaders around the world have begun their work by taking stock of, and often substantially extending through new development, the range of available tools and methods for data analysis. The research infrastructure developed by such projects, therefore, now reflects their own disciplinary backgrounds at least as much as it does the fundamental principles of computer science. In turn, such new and often experimental tools and methods necessarily also provoke new epistemological and methodological challenges. 
The Twitter API and Twitter Archives

The Open API was a key aspect of mid-2000s ideas about the value of the open Web and “Web 2.0” business models (O’Reilly), emphasising the open, cross-platform sharing of content as well as promoting innovation at the margins via third-party application development—and it was in this ideological environment that the microblogging service Twitter launched and experienced rapid growth in popularity among users and developers alike. As José van Dijck cogently argues, however, a complex interplay of technical, economic and social dynamics has seen Twitter shift from a relatively open, ad hoc and user-centred platform toward a more formalised media business:

For Twitter, the shift from being primarily a conversational communication tool to being a global, ad-supported followers tool took place in a relatively short time span. This shift did not simply result from the owner’s choice for a distinct business model or from the company’s decision to change hardware features. Instead, the proliferation of Twitter as a tool has been a complex process in which technological adjustments are intricately intertwined with changes in user base, transformations of content and choices for revenue models. (van Dijck 343)

The specifications of Twitter’s API, as well as the written guidelines for its use by developers (Twitter, “Developer Rules”) are an excellent example of these “technological adjustments” and the ways they are deeply intertwined with Twitter’s search for a viable revenue model.
These changes show how the apparent semantic openness or “interpretive flexibility” of the term “platform” allows its meaning to be reshaped over time as the business models of platform owners change (Gillespie).

The release of the API was first announced on the Twitter blog in September 2006 (Stone), not long after the service’s launch but after some popular third-party applications (like a mashup of Twitter with Google Maps creating a dynamic display of recently posted tweets around the world) had already been developed. Since then Twitter has seen a flourishing of what the company itself referred to as the “Twitter ecosystem” (Twitter, “Developer Rules”), including third-party developed client software (like Twitterific and TweetDeck), institutional use cases (such as large-scale social media visualisations of the London Riots in The Guardian), and parasitic business models (including social media metrics services like HootSuite and Klout).

While the history of Twitter’s API rules and related regulatory instruments (such as its Developer Rules of the Road and Terms of Use) has many twists and turns, there have been two particularly important recent controversies around data access and control. First, the company locked out developers and researchers from direct “firehose” (very high volume) access to the Twitter feed; this was accompanied by a crackdown on free and public Twitter archiving services like 140Kit and the Web version of Twapperkeeper (Sample), and coincided with the establishment of what was at the time a monopoly content licensing arrangement between Twitter and Gnip, a company which charges commercial rates for high-volume API access to tweets (and content from other social media platforms). A second wave of controversy among the developer community occurred in August 2012 in response to Twitter’s release of its latest API rules (Sippey), which introduce further, significant limits to API use and usability in certain circumstances.
In essence, the result of these changes to the Twitter API rules, announced without meaningful consultation with the developer community which created the Twitter ecosystem, is a forced rebalancing of development activities: on the one hand, Twitter is explicitly seeking to “limit” (Sippey) the further development of API-based third-party tools which support “consumer engagement activities” (such as end-user clients), in order to boost the use of its own end-user interfaces; on the other hand, it aims to “encourage” the further development of “consumer analytics” and “business analytics” as well as “business engagement” tools. Implicit in these changes is a repositioning of Twitter users (increasingly as content consumers rather than active communicators), but also of commercial and academic researchers investigating the uses of Twitter (as providing a narrow range of existing Twitter “analytics” rather than engaging in a more comprehensive investigation both of how Twitter is used, and of how such uses continue to evolve). The changes represent an attempt by the company to cement a certain, commercially viable and valuable, vision of how Twitter should be used (and analysed), and to prevent or at least delay further evolution beyond this desired stage. 
Although such attempts to “freeze” development may well be in vain, given the considerable, documented role which the Twitter user base has historically played in exploring new and unforeseen uses of Twitter (Bruns), they undermine scholarly research efforts to examine actual Twitter uses at least temporarily—meaning that researchers are increasingly forced to invest time and resources in finding workarounds for the new restrictions imposed by the Twitter API.

Technical, Political, and Epistemological Issues

In their recent article “Critical Questions for Big Data,” danah boyd and Kate Crawford have drawn our attention to the limitations, politics and ethics of big data approaches in the social sciences more broadly, but also touching on social media as a particularly prevalent site of social datamining. In response, we offer the following complementary points specifically related to data-driven Twitter research relying on archives of tweets gathered using the Twitter API.

First, somewhat differently from most digital humanities (where researchers often begin with a large pre-existing textual corpus), in the case of Twitter research we have no access to an original set of texts—we can access only what Twitter’s proprietary and frequently changing API will provide. The tools Twitter researchers use rely on various combinations of parts of the Twitter API—or, more accurately, the various Twitter APIs (particularly the Search and Streaming APIs).
As discussed above, of course, in providing an API, Twitter is driven not by scholarly concerns but by an attempt to serve a range of potentially value-generating end-users—particularly those with whom Twitter can create business-to-business relationships, as in their recent exclusive partnership with NBC in covering the 2012 London Olympics.

The following section from Twitter’s own developer FAQ highlights the potential conflicts between the business-case usage scenarios under which the APIs are provided and the actual uses to which they are often put by academic researchers or other dataminers:

Twitter’s search is optimized to serve relevant tweets to end-users in response to direct, non-recurring queries such as #hashtags, URLs, domains, and keywords. The Search API (which also powers Twitter’s search widget) is an interface to this search engine. Our search service is not meant to be an exhaustive archive of public tweets and not all tweets are indexed or returned. Some results are refined to better combat spam and increase relevance. Due to capacity constraints, the index currently only covers about a week’s worth of tweets. (Twitter, “Frequently Asked Questions”)

Because external researchers do not have access to the full, “raw” data, against which we could compare the retrieved archives which we use in our later analyses, and because our data access regimes rely so heavily on Twitter’s APIs—each with its technical quirks and limitations—it is impossible for us to say with any certainty that we are capturing a complete archive or even a “representative” sample (whatever “representative” might mean in a data-driven, textualist paradigm). In other words, the “lists” of tweets delivered to us on the basis of a keyword search are not necessarily complete; and there is no way of knowing how incomplete they are.
The total yield of even the most robust capture system (using the Streaming API and not relying only on Search) depends on a number of variables: rate limiting, the filtering and spam-limiting functions of Twitter’s search algorithm, server outages and so on; further, because Twitter prohibits the sharing of data sets, it is difficult to compare notes with other research teams.

In terms of epistemology, too, the primary reliance on large datasets produces a new mode of scholarship in media, communication and cultural studies: what emerges is a form of data-driven research which tends towards abductive reasoning; in doing so, it highlights tensions between the traditional research questions in discourse or text-based disciplines like media and communication studies, and the assumptions and modes of pattern recognition that are required when working from the “inside out” of a corpus, rather than from the outside in (for an extended discussion of these epistemological issues in the digital humanities more generally, see Dixon).

Finally, even the heuristics of our analyses of Twitter datasets are mediated by the API: the datapoints that are hardwired into the data naturally become the most salient, further shaping the type of analysis that can be done.
For example, a common process in our research is to use the syntax of tweets to categorise them as one of the following types of activity:

original tweets: tweets which are neither @replies nor retweets
retweets: tweets which contain RT @user… (or similar)
unedited retweets: retweets which start with RT @user…
edited retweets: retweets which do not start with RT @user…
genuine @replies: tweets which contain @user, but are not retweets
URL sharing: tweets which contain URLs

(Retweets which are made using the Twitter “retweet button,” resulting in verbatim passing-along without the RT @user syntax or an opportunity to add further comment during the retweet process, form yet another category, which cannot be tracked particularly effectively using the Twitter API.)

These categories are driven by the textual and technical markers of specific kinds of interactions that are built into the syntax of Twitter itself (@replies or @mentions, RTs); and specific modes of referentiality (URLs). All of them focus on (and thereby tend to privilege) more informational modes of communication, rather than the ephemeral, affective, or ambiently intimate uses of Twitter that can be illuminated more easily using ethnographic approaches: approaches that can actually focus on the individual user, their social contexts, and the broader cultural context of the traces they leave on Twitter.

Conclusions

In this article we have described and reflected on some of the sociotechnical, political and economic aspects of the lists of tweets—the structured Twitter data upon which our research relies—which may be gathered using the Twitter API.
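These syntactic categories can be operationalised directly from the tweet text. The following Python sketch is illustrative only: the actual metrics are implemented as Gawk scripts, and real tweets contain edge cases (such as “via @user” retweet markers) that this simplified version ignores.

```python
import re

# Simplified syntactic markers for the categories listed above.
RETWEET = re.compile(r"\bRT @\w+", re.IGNORECASE)
MENTION = re.compile(r"@\w+")
URL = re.compile(r"https?://\S+")

def classify(text):
    """Assign a tweet to one of the syntactic categories listed above.
    URL sharing is returned as a separate flag, since a tweet of any
    type may also contain a link."""
    if RETWEET.search(text):
        # Unedited retweets start with the RT @user marker;
        # edited retweets carry it later in the tweet.
        kind = "unedited retweet" if RETWEET.match(text) else "edited retweet"
    elif MENTION.search(text):
        kind = "genuine @reply"
    else:
        kind = "original tweet"
    return kind, bool(URL.search(text))

print(classify("RT @alice: check this http://example.com"))
print(classify("Great point RT @alice: check this"))
print(classify("@bob agreed!"))
print(classify("Just thinking out loud"))
```

Button-based retweets, as noted above, leave no such syntactic trace and cannot be recovered by this kind of pattern matching.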
As we have argued elsewhere (Bruns and Burgess)—and, hopefully, have begun to demonstrate in this paper—media and communication studies scholars who are actually engaged in using computational methods are well-positioned to contribute to both the methodological advances we highlight at the beginning of this paper and the political debates around computational methods in the “big social data” moment on which the discussion in the second part of the paper focusses. One pressing issue in the area of methodology is to build on current advances to bring together large-scale datamining approaches with ethnographic and other qualitative approaches, especially including close textual analysis. More broadly, in engaging with the “big social data” moment there is a pressing need for the development of code literacy in media, communication and cultural studies. In the first place, such literacy has important instrumental uses: as Manovich argues, much big data research in the humanities requires costly and time-consuming (and sometimes alienating) partnerships with technical experts (typically, computer scientists), because the free tools available to non-programmers are still limited in utility in comparison to what can be achieved using raw data and original code (Manovich 472).

But code literacy is also a requirement of scholarly rigour in the context of what David Berry calls the “computational turn,” representing a “third wave” of Digital Humanities. Berry suggests code and software might increasingly become in themselves objects of, and not only tools for, research:

I suggest that we introduce a humanistic approach to the subject of computer code, paying attention to the wider aspects of code and software, and connecting them to the materiality of this growing digital world.
With this in mind, the question of code becomes increasingly important for understanding in the digital humanities, and serves as a condition of possibility for the many new computational forms that mediate our experience of contemporary culture and society. (Berry 17)

A first step here lies in developing a more robust working knowledge of the conceptual models and methodological priorities assumed by the workings of both the tools and the sources we use for “big social data” research. Understanding how something like the Twitter API mediates the cultures of use of the platform, as well as reflexively engaging with its mediating role in data-driven Twitter research, promotes a much more materialist critical understanding of the politics of the social media platforms (Gillespie) that are now such powerful actors in the media ecology.

References

Berry, David M. “Introduction: Understanding Digital Humanities.” Understanding Digital Humanities. Ed. David M. Berry. London: Palgrave Macmillan, 2012. 1-20.
boyd, danah, and Kate Crawford. “Critical Questions for Big Data.” Information, Communication & Society 15.5 (2012): 662-79.
Bruns, Axel. “Ad Hoc Innovation by Users of Social Networks: The Case of Twitter.” ZSI Discussion Paper 16 (2012). 18 Sep. 2012 ‹https://www.zsi.at/object/publication/2186›.
Bruns, Axel, and Jean Burgess. “Notes towards the Scientific Study of Public Communication on Twitter.” Keynote presented at the Conference on Science and the Internet, Düsseldorf, 4 Aug. 2012. 18 Sep. 2012 ‹http://snurb.info/files/2012/Notes%20towards%20the%20Scientific%20Study%20of%20Public%20Communication%20on%20Twitter.pdf›.
Bruns, Axel, Jean Burgess, Kate Crawford, and Frances Shaw. “#qldfloods and @QPSMedia: Crisis Communication on Twitter in the 2011 South East Queensland Floods.” Brisbane: ARC Centre of Excellence for Creative Industries and Innovation, 2012. 18 Sep. 2012 ‹http://cci.edu.au/floodsreport.pdf›.
Burgess, Jean E., and Axel Bruns. “(Not) the Twitter Election: The Dynamics of the #ausvotes Conversation in Relation to the Australian Media Ecology.” Journalism Practice 6.3 (2012): 384-402.
Dixon, Dan. “Analysis Tool Or Research Methodology: Is There an Epistemology for Patterns?” Understanding Digital Humanities. Ed. David M. Berry. London: Palgrave Macmillan, 2012. 191-209.
Fuller, Matthew. Media Ecologies: Materialist Energies in Art and Technoculture. Cambridge, Mass.: MIT P, 2005.
Gillespie, Tarleton. “The Politics of ‘Platforms’.” New Media & Society 12.3 (2010): 347-64.
Harrington, Stephen, Timothy J. Highfield, and Axel Bruns. “More than a Backchannel: Twitter and Television.” Audience Interactivity and Participation. Ed. José Manuel Noguera. Brussels: COST Action ISO906 Transforming Audiences, Transforming Societies, 2012. 13-17. 18 Sep. 2012 ‹http://www.cost-transforming-audiences.eu/system/files/essays-and-interview-essays-18-06-12.pdf›.
Lotan, Gilad, Erhardt Graeff, Mike Ananny, Devin Gaffney, Ian Pearce, and danah boyd. “The Arab Spring: The Revolutions Were Tweeted: Information Flows during the 2011 Tunisian and Egyptian Revolutions.” International Journal of Communication 5 (2011): 1375-1405. 18 Sep. 2012 ‹http://ijoc.org/ojs/index.php/ijoc/article/view/1246/613›.
Manovich, Lev. “Trending: The Promises and the Challenges of Big Social Data.” Debates in the Digital Humanities. Ed. Matthew K. Gold. Minneapolis: U of Minnesota P, 2012. 460-75.
Mejias, Ulises A. “Liberation Technology and the Arab Spring: From Utopia to Atopia and Beyond.” Fibreculture Journal 20 (2012). 18 Sep. 2012 ‹http://twenty.fibreculturejournal.org/2012/06/20/fcj-147-liberation-technology-and-the-arab-spring-from-utopia-to-atopia-and-beyond/›.
O’Reilly, Tim. “What is Web 2.0? Design Patterns and Business Models for the Next Generation of Software.” O’Reilly Network 30 Sep. 2005. 18 Sep. 2012 ‹http://www.oreillynet.com/pub/a/oreilly/tim/news/2005/09/30/what-is-web-20.html›.
Papacharissi, Zizi, and Maria de Fatima Oliveira. “Affective News and Networked Publics: The Rhythms of News Storytelling on #Egypt.” Journal of Communication 62.2 (2012): 266-82.
Sample, Mark. “The End of Twapperkeeper (and What to Do about It).” ProfHacker. The Chronicle of Higher Education 8 Mar. 2011. 18 Sep. 2012 ‹http://chronicle.com/blogs/profhacker/the-end-of-twapperkeeper-and-what-to-do-about-it/31582›.
Sippey, Michael. “Changes Coming in Version 1.1 of the Twitter API.” Twitter Developers Blog 16 Aug. 2012. 18 Sep. 2012 ‹https://dev.Twitter.com/blog/changes-coming-to-Twitter-api›.
Stone, Biz. “Introducing the Twitter API.” Twitter Blog 20 Sep. 2006. 18 Sep. 2012 ‹http://blog.Twitter.com/2006/09/introducing-Twitter-api.html›.
Twitter. “Developer Rules of the Road.” Twitter Developers Website 17 May 2012. 18 Sep. 2012 ‹https://dev.Twitter.com/terms/api-terms›.
Twitter. “Frequently Asked Questions.” 18 Sep. 2012 ‹https://dev.twitter.com/docs/faq›.
Van Dijck, José. “Tracing Twitter: The Rise of a Microblogging Platform.” International Journal of Media and Cultural Politics 7.3 (2011): 333-48.
Walker Rettberg, Jill. Blogging. Cambridge: Polity, 2008.
Wood, Megan M., and Linda Baughman. “Glee Fandom and Twitter: Something New, or More of the Same Old Thing?” Communication Studies 63.3 (2012): 328-44.


Livingstone, Randall M. "Let’s Leave the Bias to the Mainstream Media: A Wikipedia Community Fighting for Information Neutrality." M/C Journal 13, no. 6 (November 23, 2010). http://dx.doi.org/10.5204/mcj.315.



Abstract:

Although I'm a rich white guy, I'm also a feminist anti-racism activist who fights for the rights of the poor and oppressed. (Carl Kenner)
Systemic bias is a scourge to the pillar of neutrality. (Cerejota)
Count me in. Let's leave the bias to the mainstream media. (Orcar967)
Because this is so important. (CuttingEdge)

These are a handful of comments posted by online editors who have banded together in a virtual coalition to combat Western bias on the world’s largest digital encyclopedia, Wikipedia. This collective action by Wikipedians both acknowledges the inherent inequalities of a user-controlled information project like Wikipedia and highlights the potential for progressive change within that same project. These community members are taking the responsibility of social change into their own hands (or more aptly, their own keyboards). In recent years much research has emerged on Wikipedia from varying fields, ranging from computer science, to business and information systems, to the social sciences. While critical at times of Wikipedia’s growth, governance, and influence, most of this work observes with optimism that barriers to improvement are not firmly structural, but rather they are socially constructed, leaving open the possibility of important and lasting change for the better.

One such collective effort is WikiProject: Countering Systemic Bias (WP:CSB). Close to 350 editors have signed on to the project, which began in 2004 and itself emerged from a similar project named CROSSBOW, or the “Committee Regarding Overcoming Serious Systemic Bias on Wikipedia.” As a WikiProject, the term used for a loose group of editors who collaborate around a particular topic, these editors work within the Wikipedia site and collectively create a social network that is unified around one central aim—representing the un- and underrepresented—and yet they are bound by no particular unified set of interests.
The first stage of a multi-method study, this paper looks at a snapshot of WP:CSB’s activity from both content analysis and social network perspectives to discover “who” geographically this coalition of the unrepresented is inserting into the digital annals of Wikipedia.

Wikipedia and Wikipedians

Developed in 2001 by Internet entrepreneur Jimmy Wales and academic Larry Sanger, Wikipedia is an online collaborative encyclopedia hosting articles in nearly 250 languages (Cohen). The English-language Wikipedia contains over 3.2 million articles, each of which is created, edited, and updated solely by users (Wikipedia “Welcome”). At the time of this study, Alexa, a website tracking organisation, ranked Wikipedia as the 6th most accessed site on the Internet. Unlike the five sites ahead of it though—Google, Facebook, Yahoo, YouTube (owned by Google), and live.com (owned by Microsoft)—all of which are multibillion-dollar businesses that deal more with information aggregation than information production, Wikipedia is a non-profit that operates on less than $500,000 a year and staffs only a dozen paid employees (Lih). Wikipedia is financed and supported by the WikiMedia Foundation, a charitable umbrella organisation with an annual budget of $4.6 million, mainly funded by donations (Middleton).

Wikipedia editors and contributors have the option of creating a user profile and participating via a username, or they may participate anonymously, with only an IP address representing their actions. Despite the option for total anonymity, many Wikipedians have chosen to visibly engage in this online community (Ayers, Matthews, and Yates; Bruns; Lih), and researchers across disciplines are studying the motivations of these new online collectives (Kane, Majchrzak, Johnson, and Chenisern; Oreg and Nov).
The motivations of open source software contributors, such as UNIX programmers and programming groups, have been shown to be complex and tied to both extrinsic and intrinsic rewards, including online reputation, self-satisfaction and enjoyment, and obligation to a greater common good (Hertel, Niedner, and Herrmann; Osterloh and Rota). Investigation into why Wikipedians edit has indicated multiple motivations as well, with community engagement, task enjoyment, and information sharing among the most significant (Schroer and Hertel). Additionally, Wikipedians seem to be taking up the cause of generativity (a concern for the ongoing health and openness of the Internet’s infrastructures) that Jonathan Zittrain notably called for in The Future of the Internet and How to Stop It.

Governance and Control

Although the technical infrastructure of Wikipedia is built to support and perhaps encourage an equal distribution of power on the site, Wikipedia is not a land of “anything goes.” The popular press has covered recent efforts by the site to reduce vandalism through a layer of editorial review (Cohen), a tightening of control cited as a possible reason for the recent dip in the number of active editors (Edwards). A number of regulations are already in place that prevent the open editing of certain articles and pages, such as the site’s disclaimers and pages that have suffered large amounts of vandalism. Editing wars can also cause temporary restrictions to editing, and Ayers, Matthews, and Yates point out that these wars can happen anywhere, even to Burt Reynolds’s page.

Academic studies have begun to explore the governance and control that has developed in the Wikipedia community, generally highlighting how order is maintained not through particular actors, but through established procedures and norms.
Konieczny tested whether Wikipedia’s evolution can be defined by Michels’ Iron Law of Oligarchy, which predicts that the everyday operations of any organisation cannot be run by a mass of members, and ultimately control falls into the hands of the few. Through exploring a particular WikiProject on information validation, he concludes:

There are few indicators of an oligarchy having power on Wikipedia, and few trends of a change in this situation. The high level of empowerment of individual Wikipedia editors with regard to policy making, the ease of communication, and the high dedication to ideals of contributors succeed in making Wikipedia an atypical organization, quite resilient to the Iron Law. (189)

Butler, Joyce, and Pike support this assertion, though they emphasise that instead of oligarchy, control becomes encapsulated in a wide variety of structures, policies, and procedures that guide involvement with the site. A virtual “bureaucracy” emerges, but one that should not be viewed with the negative connotation often associated with the term.

Other work considers control on Wikipedia through the framework of commons governance, where “peer production depends on individual action that is self-selected and decentralized rather than hierarchically assigned. Individuals make their own choices with regard to resources managed as a commons” (Viegas, Wattenberg and McKeon). The need for quality standards and quality control largely dictate this commons governance, though interviewing Wikipedians with various levels of responsibility revealed that policies and procedures are only as good as those who maintain them. Forte, Larco, and Bruckman argue “the Wikipedia community has remained healthy in large part due to the continued presence of ‘old-timers’ who carry a set of social norms and organizational ideals with them into every WikiProject, committee, and local process in which they take part” (71).
Thus governance on Wikipedia is a strong representation of a democratic ideal, where actors and policies are closely tied in their evolution.

Transparency, Content, and Bias

The issue of transparency has proved to be a double-edged sword for Wikipedia and Wikipedians. The goal of a collective body of knowledge created by all—the “expert” and the “amateur”—can only be upheld if equal access to page creation and development is allotted to everyone, including those who prefer anonymity. And yet this very option for anonymity, or even worse, false identities, has been a sore subject for some in the Wikipedia community as well as a source of concern for some scholars (Santana and Wood). The case of a 24-year old college dropout who represented himself as a multiple Ph.D.-holding theology scholar and edited over 16,000 articles brought these issues into the public spotlight in 2007 (Doran; Elsworth). Wikipedia itself has set up standards for content that include expectations of a neutral point of view, verifiability of information, and the publishing of no original research, but Santana and Wood argue that self-policing of these policies is not adequate:

The principle of managerial discretion requires that every actor act from a sense of duty to exercise moral autonomy and choice in responsible ways. When Wikipedia’s editors and administrators remain anonymous, this criterion is simply not met. It is assumed that everyone is behaving responsibly within the Wikipedia system, but there are no monitoring or control mechanisms to make sure that this is so, and there is ample evidence that it is not so.
(141)

At the theoretical level, some downplay these concerns of transparency and autonomy as logistical issues in lieu of the potential for information systems to support rational discourse and emancipatory forms of communication (Hansen, Berente, and Lyytinen), but others worry that the questionable “realities” created on Wikipedia will become truths once circulated to all areas of the Web (Langlois and Elmer). With the number of articles on the English-language version of Wikipedia reaching well into the millions, the task of mapping and assessing content has become a tremendous endeavour, one mostly taken on by information systems experts. Kittur, Chi, and Suh have used Wikipedia’s existing hierarchical categorisation structure to map change in the site’s content over the past few years. Their work revealed that in early 2008 “Culture and the arts” was the most dominant category of content on Wikipedia, representing nearly 30% of total content. People (15%) and geographical locations (14%) represent the next largest categories, while the natural and physical sciences showed the greatest increase in volume between 2006 and 2008 (+213%, with “Culture and the arts” close behind at +210%). This data may indicate that contributing to Wikipedia, and thus spreading knowledge, is growing amongst the academic community while maintaining its importance to the greater popular culture-minded community. Further work by Kittur and Kraut has explored the collaborative process of content creation, finding that too many editors on a particular page can reduce the quality of content, even when a project is well coordinated.

Bias in Wikipedia content is a generally acknowledged and somewhat conflicted subject (Giles; Johnson; McHenry). The Wikipedia community has created numerous articles and pages within the site to define and discuss the problem.
Citing a survey conducted by the University of Würzburg, Germany, the “Wikipedia:Systemic bias” page describes the average Wikipedian as:

Male
Technically inclined
Formally educated
An English speaker
White
Aged 15-49
From a majority Christian country
From a developed nation
From the Northern Hemisphere
Likely a white-collar worker or student

Bias in content is thought to be perpetuated by this demographic of contributor, and the “founder effect,” a concept from genetics, linking the original contributors to this same demographic has been used to explain the origins of certain biases. Wikipedia’s “About” page discusses the issue as well, in the context of the open platform’s strengths and weaknesses:

in practice editing will be performed by a certain demographic (younger rather than older, male rather than female, rich enough to afford a computer rather than poor, etc.) and may, therefore, show some bias. Some topics may not be covered well, while others may be covered in great depth. No educated arguments against this inherent bias have been advanced.

Royal and Kapila’s study of Wikipedia content tested some of these assertions, finding identifiable bias in both their purposive and random sampling. They conclude that bias favoring larger countries is positively correlated with the size of the country’s Internet population, and corporations with larger revenues work in much the same way, garnering more coverage on the site. The researchers remind us that Wikipedia is “more a socially produced document than a value-free information source” (Royal & Kapila).

WikiProject: Countering Systemic Bias

As a coalition of current Wikipedia editors, the WikiProject: Countering Systemic Bias (WP:CSB) attempts to counter trends in content production and points of view deemed harmful to the democratic ideals of a value-free, open online encyclopedia.
WP:CSB’s mission is not one of policing the site, but rather deepening it:

Generally, this project concentrates upon remedying omissions (entire topics, or particular sub-topics in extant articles) rather than on either (1) protesting inappropriate inclusions, or (2) trying to remedy issues of how material is presented. Thus, the first question is "What haven't we covered yet?", rather than "how should we change the existing coverage?" (Wikipedia, “Countering”)

The project lays out a number of content areas lacking adequate representation, geographically highlighting the dearth in coverage of Africa, Latin America, Asia, and parts of Eastern Europe. WP:CSB also includes a “members” page that editors can sign to show their support, along with space to voice their opinions on the problem of bias on Wikipedia (the quotations at the beginning of this paper are taken from this “members” page). At the time of this study, 329 editors had self-selected and self-identified as members of WP:CSB, and this group constitutes the population sample for the current study.

To explore the extent to which WP:CSB addressed these self-identified areas for improvement, each editor’s last 50 edits were coded for their primary geographical country of interest, as well as the conceptual category of the page itself (“P” for person/people, “L” for location, “I” for idea/concept, “T” for object/thing, or “NA” for indeterminate). For example, edits to the Wikipedia page for a single person like Tony Abbott (Australian federal opposition leader) were coded “Australia, P”, while an edit for a group of people like the Manchester United football team would be coded “England, P”. Coding was based on information obtained from the header paragraphs of each article’s Wikipedia page. After coding was completed, corresponding information on each country’s associated continent was added to the dataset, based on the United Nations Statistics Division listing. A total of 15,616 edits were coded for the study.
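The tallying step behind the coding results is straightforward to automate. The sketch below is a minimal, hypothetical illustration of that aggregation logic, assuming coded edits are stored as (editor, country, category) tuples; the editor names and sample rows are invented for demonstration and are not data from the study.

```python
from collections import Counter

# Hypothetical coded edits as (editor, country, category) tuples, following
# the study's scheme (P person, L location, I idea, T thing, NA indeterminate).
# These sample rows are invented for illustration only.
edits = [
    ("EditorA", "U.S.", "P"),
    ("EditorA", "England", "P"),
    ("EditorB", "Gabon", "L"),
    ("EditorC", "Laos", "I"),
    ("EditorD", "Australia", "T"),
]

def category_shares(coded_edits):
    """Percentage of total edits falling in each conceptual category."""
    counts = Counter(category for _, _, category in coded_edits)
    total = sum(counts.values())
    return {cat: round(100 * n / total, 2) for cat, n in counts.items()}

print(category_shares(edits))
```

Run over the full 15,616-edit dataset, the same function would reproduce the category percentages reported in the table below.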
Nearly 32% (n = 4962) of these edits were on articles for persons or people (see Table A for complete coding results). From within this sub-sample of edits, a majority of the people (68.67%) represented are associated with North America and Europe (Figure A). If we break these statistics down further, nearly half of WP:CSB’s edits concerning people were associated with the United States (36.11%) and England (10.16%), with India (3.65%) and Australia (3.35%) following at a distance. These figures make sense for the English-language Wikipedia; over 95% of the population in the three Westernised countries speak English, and while India is still often regarded as a developing nation, its colonial British roots and the emergence of a market economy with large, technology-driven cities are logical explanations for its representation here (and some estimates make India the largest English-speaking nation by population on the globe today).

Table A: Coding Results

Total Edits: 15,616
(I) Ideas: 2,881 (18.45%)
(L) Location: 2,240 (14.34%)
NA: 333 (2.13%)
(T) Thing: 5,200 (33.30%)
(P) People: 4,962 (31.78%)

People by Continent
Africa: 315 (6.35%)
Asia: 827 (16.67%)
Australia: 175 (3.53%)
Europe: 1,411 (28.44%)
NA: 110 (2.22%)
North America: 1,996 (40.23%)
South America: 128 (2.58%)

The areas of the globe of main concern to WP:CSB proved to be much less represented by the coalition itself. Asia, far and away the most populous continent with more than 60% of the globe’s people (GeoHive), was represented in only 16.67% of edits. Africa (6.35%) and South America (2.58%) were equally underrepresented compared to both their real-world populations (15% and 9% of the globe’s population respectively) and the aforementioned dominance of the advanced Westernised areas.
However, while these percentages may seem low, in aggregate they do meet the quota set on the WP:CSB Project Page calling for one out of every twenty edits to be “a subject that is systematically biased against the pages of your natural interests.” By this standard, the coalition is indeed making headway in adding content that strategically counterbalances the natural biases of Wikipedia’s average editor.

Figure A

Social network analysis allows us to visualise multifaceted data in order to identify relationships between actors and content (Vega-Redondo; Watts). Similar to Davis’s well-known sociological study of Southern American socialites in the 1930s (Scott), our Wikipedia coalition can be conceptualised as individual actors united by common interests, and a network of relations can be constructed with software such as UCINET. A mapping algorithm that considers both the relationship between all sets of actors and each actor to the overall collective structure produces an image of our network. This initial network is bimodal, as both our Wikipedia editors and their edits (again, coded for country of interest) are displayed as nodes (Figure B). Edge-lines between nodes represent a relationship, and here that relationship is the act of editing a Wikipedia article. We see from our network that the “U.S.” and “England” hold central positions in the network, with a mass of editors crowding around them. A perimeter of nations is then held in place by their ties to editors through the U.S. and England, with a second layer of editors and poorly represented nations (Gabon, Laos, Uzbekistan, etc.) around the boundaries of the network.

Figure B

We are reminded from this visualisation both of the centrality of the two Western powers even among WP:CSB editors, and of the peripheral nature of most other nations in the world. But we also learn which editors in the project are contributing most to underrepresented areas, and which are less “tied” to the Western core.
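The bimodal editor–country mapping described above can be approximated without specialist software like UCINET. The sketch below is a minimal, assumed reconstruction in plain Python: it builds an editor-to-countries adjacency from edit records and sorts editors into core-tied, bridge, and periphery groups relative to the U.S./England core. The editor names come from the article, but their edit lists here are purely illustrative.

```python
# Edges of the bimodal network: (editor, country-of-interest) pairs.
# Editor names are taken from the article; the country assignments below
# are illustrative only, not the study's actual data.
edges = [
    ("Wizzy", "U.S."), ("Wizzy", "Gabon"),
    ("Warofdreams", "England"), ("Warofdreams", "Laos"),
    ("Gallador", "Uzbekistan"),
    ("Gerrit", "Gabon"),
]

CORE = {"U.S.", "England"}  # the two central nations in the network

def classify(editor_edges):
    """Split editors into core-only, bridge, and periphery-only groups."""
    by_editor = {}
    for editor, country in editor_edges:
        by_editor.setdefault(editor, set()).add(country)
    groups = {"core": [], "bridge": [], "periphery": []}
    for editor, countries in by_editor.items():
        in_core = bool(countries & CORE)
        in_periphery = bool(countries - CORE)
        if in_core and in_periphery:
            groups["bridge"].append(editor)   # ties to both core and margin
        elif in_core:
            groups["core"].append(editor)
        else:
            groups["periphery"].append(editor)
    return groups
```

With these illustrative edges, the classification mirrors the article's reading of the network: Wizzy and Warofdreams fall in the bridge layer, while Gallador and Gerrit sit on the periphery.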
Here we see “Wizzy” and “Warofdreams” among the second layer of editors who act as a bridge between the core and the periphery; these are editors with interests in both the Western and marginalised nations. Located along the outer edge, “Gallador” and “Gerrit” have no direct ties to the U.S. or England, concentrating all of their edits on less represented areas of the globe. Identifying editors at these key positions in the network will help with future research, informing interview questions that will investigate their interests further, but more significantly, probing motives for participation and action within the coalition.

Additionally, we can break the network down further to discover editors who appear to have similar interests in underrepresented areas. Figure C strips down the network to only editors and edits dealing with Africa and South America, the least represented continents. From this we can easily find three types of editors again: those who have singular interests in particular nations (the outermost layer of editors), those who have interests in a particular region (the second layer moving inward), and those who have interests in both of these underrepresented regions (the center layer in the figure). This last group of editors may prove to be the most crucial to understand, as they are carrying the full load of WP:CSB’s mission.

Figure C

The End of Geography, or the Reclamation?

In The Internet Galaxy, Manuel Castells writes that “the Internet Age has been hailed as the end of geography,” a bold suggestion, but one that has gained traction over the last 15 years as the excitement for the possibilities offered by information communication technologies has often overshadowed structural barriers to participation like the Digital Divide (207).
Castells goes on to amend the “end of geography” thesis by showing how global information flows and regional Internet access rates, while creating a new “map” of the world in many ways, are still closely tied to power structures in the analog world. The Internet Age “redefines distance but does not cancel geography” (207). The work of WikiProject: Countering Systemic Bias emphasises the importance of place and representation in the information environment that continues to be constructed in the online world. This study looked at only a small portion of this coalition’s efforts (~16,000 edits)—a snapshot of their labor frozen in time—which itself is only a minute portion of the information being dispatched through Wikipedia on a daily basis (~125,000 edits). Further analysis of WP:CSB’s work over time, as well as qualitative research into the identities, interests and motivations of this collective, is needed to understand more fully how information bias is understood and challenged in the Internet galaxy. The data here indicates this is a fight worth fighting, at least for a growing few.

References

Alexa. “Top Sites.” Alexa.com, n.d. 10 Mar. 2010 ‹http://www.alexa.com/topsites›.
Ayers, Phoebe, Charles Matthews, and Ben Yates. How Wikipedia Works: And How You Can Be a Part of It. San Francisco, CA: No Starch, 2008.
Bruns, Axel. Blogs, Wikipedia, Second Life, and Beyond: From Production to Produsage. New York: Peter Lang, 2008.
Butler, Brian, Elisabeth Joyce, and Jacqueline Pike. Don’t Look Now, But We’ve Created a Bureaucracy: The Nature and Roles of Policies and Rules in Wikipedia. Paper presented at 2008 CHI Annual Conference, Florence.
Castells, Manuel. The Internet Galaxy: Reflections on the Internet, Business, and Society. Oxford: Oxford UP, 2001.
Cohen, Noam. “Wikipedia.” New York Times, n.d. 12 Mar. 2010 ‹http://www.nytimes.com/info/wikipedia/›.
Doran, James. “Wikipedia Chief Promises Change after ‘Expert’ Exposed as Fraud.” The Times, 6 Mar.
2007 ‹http://technology.timesonline.co.uk/tol/news/tech_and_web/article1480012.ece›.
Edwards, Lin. “Report Claims Wikipedia Losing Editors in Droves.” Physorg.com, 30 Nov. 2009. 12 Feb. 2010 ‹http://www.physorg.com/news178787309.html›.
Elsworth, Catherine. “Fake Wikipedia Prof Altered 20,000 Entries.” London Telegraph, 6 Mar. 2007 ‹http://www.telegraph.co.uk/news/1544737/Fake-Wikipedia-prof-altered-20000-entries.html›.
Forte, Andrea, Vanessa Larco, and Amy Bruckman. “Decentralization in Wikipedia Governance.” Journal of Management Information Systems 26 (2009): 49-72.
Giles, Jim. “Internet Encyclopedias Go Head to Head.” Nature 438 (2005): 900-901.
Hansen, Sean, Nicholas Berente, and Kalle Lyytinen. “Wikipedia, Critical Social Theory, and the Possibility of Rational Discourse.” The Information Society 25 (2009): 38-59.
Hertel, Guido, Sven Niedner, and Stefanie Herrmann. “Motivation of Software Developers in Open Source Projects: An Internet-Based Survey of Contributors to the Linux Kernel.” Research Policy 32 (2003): 1159-1177.
Johnson, Bobbie. “Rightwing Website Challenges ‘Liberal Bias’ of Wikipedia.” The Guardian, 1 Mar. 2007. 8 Mar. 2010 ‹http://www.guardian.co.uk/technology/2007/mar/01/wikipedia.news›.
Kane, Gerald C., Ann Majchrzak, Jeremiah Johnson, and Lily Chenisern. A Longitudinal Model of Perspective Making and Perspective Taking within Fluid Online Collectives. Paper presented at the 2009 International Conference on Information Systems, Phoenix, AZ, 2009.
Kittur, Aniket, Ed H. Chi, and Bongwon Suh. What’s in Wikipedia? Mapping Topics and Conflict Using Socially Annotated Category Structure. Paper presented at the 2009 CHI Annual Conference, Boston, MA.
———, and Robert E. Kraut. Harnessing the Wisdom of Crowds in Wikipedia: Quality through Collaboration. Paper presented at the 2008 Association for Computing Machinery’s Computer Supported Cooperative Work Annual Conference, San Diego, CA.
Konieczny, Piotr.
“Governance, Organization, and Democracy on the Internet: The Iron Law and the Evolution of Wikipedia.” Sociological Forum 24 (2009): 162-191.
———. “Wikipedia: Community or Social Movement?” Interface: A Journal for and about Social Movements 1 (2009): 212-232.
Langlois, Ganaele, and Greg Elmer. “Wikipedia Leeches? The Promotion of Traffic through a Collaborative Web Format.” New Media & Society 11 (2009): 773-794.
Lih, Andrew. The Wikipedia Revolution. New York, NY: Hyperion, 2009.
McHenry, Robert. “The Real Bias in Wikipedia: A Response to David Shariatmadari.” OpenDemocracy.com, 2006. 8 Mar. 2010 ‹http://www.opendemocracy.net/media-edemocracy/wikipedia_bias_3621.jsp›.
Middleton, Chris. “The World of Wikinomics.” Computer Weekly, 20 Jan. 2009: 22-26.
Oreg, Shaul, and Oded Nov. “Exploring Motivations for Contributing to Open Source Initiatives: The Roles of Contribution, Context and Personal Values.” Computers in Human Behavior 24 (2008): 2055-2073.
Osterloh, Margit, and Sandra Rota. “Trust and Community in Open Source Software Production.” Analyse & Kritik 26 (2004): 279-301.
Royal, Cindy, and Deepina Kapila. “What’s on Wikipedia, and What’s Not…?: Assessing Completeness of Information.” Social Science Computer Review 27 (2008): 138-148.
Santana, Adele, and Donna J. Wood. “Transparency and Social Responsibility Issues for Wikipedia.” Ethics of Information Technology 11 (2009): 133-144.
Schroer, Joachim, and Guido Hertel. “Voluntary Engagement in an Open Web-Based Encyclopedia: Wikipedians and Why They Do It.” Media Psychology 12 (2009): 96-120.
Scott, John. Social Network Analysis. London: Sage, 1991.
Vega-Redondo, Fernando. Complex Social Networks. Cambridge: Cambridge UP, 2007.
Viegas, Fernanda B., Martin Wattenberg, and Matthew M. McKeon. “The Hidden Order of Wikipedia.” Online Communities and Social Computing (2007): 445-454.
Watts, Duncan. Six Degrees: The Science of a Connected Age. New York, NY: W. W. Norton & Company, 2003.
Wikipedia. “About.” n.d. 8 Mar.
2010 ‹http://en.wikipedia.org/wiki/Wikipedia:About›.
———. “Welcome to Wikipedia.” n.d. 8 Mar. 2010 ‹http://en.wikipedia.org/wiki/Main_Page›.
———. “WikiProject: Countering Systemic Bias.” n.d. 12 Feb. 2010 ‹http://en.wikipedia.org/wiki/Wikipedia:WikiProject_Countering_systemic_bias#Members›.
Zittrain, Jonathan. The Future of the Internet and How to Stop It. New Haven, CT: Yale UP, 2008.


Lawrence, Robert. "Locate, Combine, Contradict, Iterate: Serial Strategies for PostInternet Art." M/C Journal 21, no. 1 (March 14, 2018). http://dx.doi.org/10.5204/mcj.1374.


Abstract:

We (I, Robert Lawrence and, in a rare display of unity, all my online avatars and agents) hereby render and proclaim this

MANIFESTO OF PIECES AND BITS IN SERVICE OF CONTRADICTIONAL AESTHETICS

We start with the simple premise that art has the job of telling us who we are, and that through the modern age doing this job while KEEPING UP with accelerating cultural change has necessitated the invention of something we might call the avant-garde. Along the way there has been an on-again-off-again affair between said avant-garde and technology. We are now in a new phase of the new, and the technology under consideration is the Internet.

The recent hyperventilating about the term postInternet reflects the artworld’s overdue recognition of the effect of the Internet on the culture at large, and on art as a cultural practice, a market, and a historical process. I propose that we cannot fully understand what the Internet is doing to us through a consideration of what happens on the screen, nor by considering what happens in the physical space we occupy either before or behind the screen. Rather we must critically and creatively fathom the flow of cultural practice between and across these realms. This requires Hybrid art combining both physical and Internet forms.

I do not mean to imply that single discipline-based art cannot communicate complexity, but I believe that Internet culture introduces complexities that can only be approached through hybrid practices. And this is especially critical for an art that, in doing the job of “telling us who we are”, wants to address the contradictory ways we now form and promote, or conceal and revise, our multiple identities through online social media profiles inconsistent with our fleshly selves. We need a different way of talking about identity.
A history of identity:

In the ancient world, individual identity as we understand it did not exist.
The renaissance invented the individual.
Modernism prioritized and alienated him (sic).
Post-Modernism fragmented him/her.
The Internet hyper-circulates and amplifies all these modalities, exploding the possibilities of identity.

While reducing us to demographic market targets, the Web facilitates mass indulgence in perversely individual interests. The now common act of creating an “online profile” is a regular reiteration of the simple fact that identity is an open-ended hypothesis. We can now live double, or extravagantly multiple, virtual lives. The “me meme” is a ceaseless morph. This is a profound change in how identity was understood just a decade ago. Other historical transformations of identity happened over centuries. This latest and most radical change has occurred in the click of a mouse. Selfhood is now imbued with new complexity, fluidity and amplified contradictions.

To fully understand what is actually happening to us, we need an art that engages the variant contracts of the physical and the virtual. We need a Hybrid art that addresses variant temporal and spatial modes of the physical and virtual. We need an art that offers articulations through the ubiquitous web in concert with the distinct perspectives that a physical gallery experience uniquely offers: engagement and removal, reflection and transference. Art that tells us who we are today calls for an aesthetics of contradiction.

— Ro Lawrence (and all avatars) 2011, revised 2013, 2015, 2018.

The manifesto above grew from an artistic practice beginning in 1998 as I started producing a website for every project that I made in traditional media. The Internet work does not just document or promote the project, nor is it “Netart” in the common sense of creative work restricted to a browser window.
All of my efforts with the Internet are directly linked to my projects in traditional media, and the web components offer parallel aesthetic voices that augment or overtly contradict the reading suggested by the traditional visual components of each project. This hybrid work grew out of a previous decade of transmedia work in video installation and sculpture, where I would create physical contexts for silent video as a way to remove the video image from the seamless flow of broadcast culture. A video image can signify very differently in a physical context that separates it from the flow of mass media and rather reconnects it to lived physical culture. A significant part of the aesthetic pleasure of this kind of work comes from nuances of dissonance arising from contradictory ways viewers had learned to read the object world and the ways we were then still learning to read the electronic image world. This video installation work was about “relocating” the electronic image, but I was also “locating” the electronic image in another sense, within the boundaries of geographic and cultural location. Linking all my projects to specific geographic locations set up contrasts with the spatial ubiquity of electronic media. In 1998 I amplified this contrast with my addition of extensive Internet components with each installation I made.

The Way Things Grow (1998) began as an installation of sculptures combining video with segments of birch trees. Each piece in the gallery was linked to a specific geographic location within driving distance of the gallery exhibiting the work. In the years just before this piece I had moved from a practice of text-augmented video installations to the point where I had reduced the text to small printed handouts that featured absurd Scripts for Performance. These text handouts that viewers could take with them suggested that the work was to be completed by the viewer later, outside the gallery.
This to-be-continued dynamic was the genesis of a serial form in work going forward from then on. Thematic and narrative elements in the work were serialized via possible actions viewers would perform after leaving the gallery. In the installation for The Way Things Grow, there was no text in the gallery at all to suggest interpretations of this series of video sculptures. Even the titles offered no direct textual help. Rather than telling the viewers something about the work before them in the gallery, the title of each piece led the viewer away from the gallery toward serial actions in the specific geographic locations the works referred to. Each piece was titled with an Internet address.

Figure 1: Lawrence, Robert, The Way Things Grow, video installation with web components at http://www.h-e-r-e.com/grow.html, 1998.

When people went to the web site for each piece they found only a black page referencing a physical horizon, with a long line of text that they could scroll to the right for meters. Unlike the determinedly embodied work in the gallery, the web components were disembodied texts floating in a black void, but texts about very specific physical locations.

Figure 2: Lawrence, Robert, The Way Things Grow, partial view of webpage at http://www.h-e-r-e.com/growth_variant4.html, 1998.

The texts began with the exact longitude and latitude of a geographical site in some way related to birch trees... a particularly old or large tree... a factory that turned birch trees into popsicle sticks and medical tongue depressors... etc. The website texts included directions to the site, and absurd scripts for performance. In this way the Internet component transformed the suite of sculptures in the gallery into a series of virtual, and possibly actual, events beyond the gallery. These potential narratives that viewers were invited into comprised an open-ended serial structure. The gallery work was formal, minimal, essentialist. On the web it was social, locative, deconstructive.
In both locations, it was located. Here follows an excerpt from the website:

GROWTH VARIANT #25: North 44:57:58 by West 93:15:56. On the south side of the Hennepin County Government Center is a park with 9 birch trees. These are urban birches, and they display random scratchings, as well as proclamations of affection expressed with pairs of initials and a “+” – both with and without encircling heart symbols. RECOMMENDED PERFORMANCE: Visit these urban birches once each month. Photograph all changes in their bark made by humans. After 20 years compile a document entitled, "Human Mark Making on Urban Birches, a Visual Study of Specific Universalities". Bring it into the Hennepin County Government Center and ask that it be placed in the archives.

An Acre of Art (2000) was a collaborative project with sculptor Mark Knierim. Like The Way Things Grow, this new work, commissioned by the Minneapolis Art Institute, played out in the gallery, in a specific geographic location, and online. In the Art Institute was a gallery installation combining sculptures with absurd combinations of physical rural culture fitting contradictorily into an urban "high art" context. One of the pieces, entitled Landscape (2000), was an 18’ chicken coop faced with a gold picture frame. Inside were two Barred Rock hens and an iMac. The computer was programmed to stream live video from the coop to the Internet, the world’s first video chicken cam. As a work unfolding across a long stretch of time, the web cam video was a serial narrative without determined division into episodes. The gallery works also referenced a specific acre of agricultural land an hour from the Institute. Here we planted a row of dwarf corn at a diagonal to the mid-western American rural geometric grid of farmland. Visitors to the rural site could sit on “rural art furniture,” contemplate the corn growing, and occasionally witness absurd performances.
The third stream of the piece was an extensive website, which playfully theorized the rural/urban/art trialectic. Each of the three locations of the work was exploited to provide a richer transmedia interpretation of the project’s themes than any one venue or medium could.

Location Sequence is a serial installation begun in 1999. Each installation has completely different physical elements. The only consistent physical element is 72 segments of a 72” collapsible carpenter's ruler, evenly spaced to wrap around the gallery walls. Each of the 72 segments of the ruler displays an Internet web address. Reversing the notion of the Internet as a place of rapid change compared to a more enduring physical world, in this case the Internet components do not change with each new episode of the work, while the physical components transform with each new installation. Thematically, all aspects of the work deal with various shades of meaning of the term "location."

Beginning/Middle/End is a 30-year conceptual serial begun in 2002, presenting a series of site-specific actions, objects, or interventions combined with corresponding web pages that collectively negotiate concepts related to time, location, and narrative. Realizing a 30-year project via the web in this manner is a self-conscious contradiction of the culture of the instantaneous that the Internet manifests and propagates.

The installation documented here was completed for a one-night event in 2002 with Szilage Gallery in St Petersburg, Florida. Bricks moulded with the URLs for three web sites were placed in a historic brick road with the intention that they would remain there through a historical time frame. The URLs were also projected in light on a creek parallel to the brick road and seen only for several hours.
The corresponding web site components speculate on temporal/narrative structures crossing with geographic features, natural and manufactured.

Figure 3: Lawrence, Robert, Beginning/Middle/End, site-specific installation with website in conjunction with 30-year series, http://www.h-e-r-e.com/beginning.html, 2002-32.

The most recent instalment was done as part of Conflux Festival in 2014 in collaboration with painter Ld Lawrence. White shapes appeared in various public spaces in downtown Manhattan. Upon closer inspection people realized that they were not painted tags or stickers, but magnetic sheets that could be moved or removed. An optical scan tag hidden on the back of each shape directed to a website which encouraged people to move the objects to other locations and send a geo-located photo to the web site to trace the shape's motion through the world. The work online could trace the serial narrative of the physical installation components following the installation during Conflux Festival.

Figure 4: Lawrence, Robert w/ Lawrence, Ld, Gravity Ace on the Move, site-specific installation with geo-tracking website at http://www.h-e-r-e.com/gravityace/. Completed for Conflux Festival NYC, 2014, as part of Beginning/Middle/End.

Dad's Boots (2003) was a multi-sited sculpture/performance. Three different physical manifestations of the work were installed at the same time in three locations: Shirakawa-go Art Festival in Japan; the Phipps Art Center in Hudson, Wisconsin; and the Tampa Museum of Art in Florida. Physical components of the work included silent video projection, digital photography, computer key caps, and my father's boots. Each of these three different installations referred back to one web site. Because all these shows were up at the same time, the work was a distributed synchronous serial. In each installation space the title of the work was displayed as an Internet address.
At the website was a series of popup texts suggesting performances focused, however absurdly, on reassessing paternal relationships.

Figure 5: Lawrence, Robert, Dad’s Boots, simultaneous gallery installation in Florida, Wisconsin and Japan, with website, 2003.

Coincidentally, beginning at the same time as my transmedia physical/Internet art practice, since 1998 I have had a secret other-life as a tango dancer. I came to this practice drawn by the music and the attraction of an after-dark subculture that ran by different rules than the rest of life. While my life as a tanguero was most certainly an escape strategy, I quickly began to see that although tango was different from the rest of the world, it was indeed a part of this world. It had a place and a time and a history. Further, it was a fascinating history about the interplays of power, class, wealth, race, and desire.

Figure 6: Lawrence, Robert, Tango Intervention, site-specific dance interventions with extensive web components, 2007-12.

As Marta Savigliano points out in Tango and the Political Economy of Passion, “Tango is a practice already ready for struggle. It knows about taking sides, positions, risks. It has the experience of domination/resistance from within. …Tango is a language of decolonization. So pick and choose. Improvise... let your feet do the thinking. Be comfortable in your restlessness. Tango” (17). The realization that tango, my sensual escape from critical thought, was actually political came just about the time I was beginning to understand the essential dynamic of contradiction between the physical and Internet streams of my work. Tango Intervention began in 2007. I have now, as of 2018, done tango interventions in over 40 cities. Overall, the project can be seen as a serial performance of contradictions.
In each case the physical dance interventions are manifestations of sensual fantasy in public space, and the Internet components recontextualize the public actions as site-specific performances with a political edge, revealing a hidden history or current social situation related to the political economy of tango. These themes are further developed in a series of related digital prints and videos shown here in various formats and contexts.

In Tango Panopticon (2009), a “spin off” from the Tango Intervention series, the hidden social issue was the growing video surveillance of public space. The first Tango Panopticon production was Mayday 2009, with people dancing tango under public video surveillance in 15 cities. Mayday 2010 was Tango Panopticon 2.0, with tangointervention.org streaming live cell phone video from 16 simultaneous dance interventions on 4 continents. The public encountered the interventions as a sensual reclaiming of public space. Contradictorily, on the web Tango Panopticon 2.0 became a distributed worldwide action against the growing spectre of video surveillance and the increasing control of public commons. Each intervention team was automatically located on an online map when they started streaming video. Visitors to the website could choose an action from the list of cities or click on the map pins to choose which live video to load into the grid of 6 streaming signals. Visitors to the physical intervention sites could download our free open source software and stream their own videos to tangointervention.org.

Figure 7: Lawrence, Robert, Tango Panopticon 2.0, worldwide synchronous dance intervention with live streaming video and extensive web components, 2010.

Tango Panopticon also has a life as a serial installation, initially installed as part of the annual conference of “Digital Resources for Humanities and the Arts” at Brunel University, London.
All shots in the grid of videos are swish pans from close-ups of surveillance cameras to tango interveners dancing under their gaze. Each ongoing installation in the series physically adapts to the site, and with each installation more lines of video frames are added until the images become too small to read.

Figure 8: Lawrence, Robert, Tango Panopticon 2.0 (For Osvaldo), video installation based on worldwide dance intervention series with live streaming video, 2011.

My new work Equivalence (in development) is quite didactic in its contradictions between the online and gallery components. A series of square prints of clouds in a gallery is titled with web addresses that open with other cloud images and then fade into randomly loading excerpts from the CIA torture manual used at Guantanamo Bay Detention Center.

Figure 9: Lawrence, Robert, Equivalence, digital prints, excerpts from CIA Guantanamo Detention Center torture manual, work-in-progress.

The gallery images recall Stieglitz’s Equivalents photographs from the early 20th century. Made in the 1920s to 30s, the Equivalents comprise a pivotal change in photographic history, from the early pictorial movement, in which photography tried to imitate painting, to a new artistic approach that embraced features distinct to the photographic medium. Stieglitz’s Equivalents merged photographic realism with abstraction and symbolist undertones of transcendent spirituality. Many of the 20th century masters of photography, from Ansel Adams to Minor White, acknowledged the profound influence these photographs had on them. Several images from the Equivalents series were the first photographic art to be acquired by a major art museum in the US, the Boston Museum of Fine Arts.

My series Equivalence serves as the latest episode in a serial art history narrative.
Since the “Pictures Generation” movement in the 1970s, photography has cannibalized its history, but perhaps no photographic body of work has been as quoted as Stieglitz’s Equivalents. A partial list includes: John Baldessari’s series Blowing Cigar Smoke to Match Clouds That Are the Same (1973), William Eggleston’s series Wedgwood Blue (1979), John Pfahl’s smoke stack series (1982-89), George Legrady’s Equivalents II (1993), Vik Muniz’s Equivalents (1997), Lisa Oppenheim (2012), and most recently, Berndnaut Smilde’s Nimbus Series, begun in 2012. Over the course of more than four decades each of these series has presented a unique vision, but all rest on Stieglitz’s shoulders. From that position they make choices about how to operate relative to the original Equivalents, ranging from Baldessari and Muniz’s phenomenological playfulness to Eggleston and Smilde’s neo-essentialist approach.

My series Equivalence follows along in this serial modernist image franchise. What distinguishes it is that it does not take a single position relative to other Equivalents tribute works. Rather, it exploits its gallery/Internet transmediality to simultaneously assume two contradictory positions. The dissonance of this positioning is one of my main points with the work, and it is in some ways resonant with the contradictions concerning photographic abstraction and representation that Stieglitz engaged in the original Equivalents series almost a century ago.

While hanging on the walls of a gallery, Equivalence suggests the same metaphysical intentions as Stieglitz’s Equivalents. Simultaneously, in its manifestation on the Internet, my Equivalence series transcends its implied transcendence and claims a very specific time and place – a small, brutal encampment on the island of Cuba where the United States abandoned any remaining claim to moral authority.
In this illegal prison, forgotten lives drag on invisibly, outside of time, like untold serial narratives without resolution and without justice.

Partially to balance the political insistence of Equivalence, I am also working on another series that operates with very different modalities. Following up on the live streaming technology that I developed for my Tango Panopticon public intervention series, I have started Horizon (In Development).

Figure 10: Lawrence, Robert, Horizon, worldwide synchronous horizon interventions with live streaming video to Internet, work-in-progress.

In Horizon I again use live cell phone video, this time streamed to an infinitely wide web page from live actions around the world done in direct engagement with the horizon line. The performances will begin and automatically come online live at noon in their respective time zones, each added to the growing horizontal line of moving images. As the actions complete, the streamed footage will begin endlessly looping. The project will also stream live during the event to galleries, and then HD footage from the events will be edited and incorporated into video installations. Leading up to this major event day, I will have a series of smaller instalments of the piece, with either live or recorded video. The first of these preliminary versions was completed during the Live Performers Workshop in Rome. Horizon continues to develop, leading to the worldwide synchronous event in 2020.

Certainly, artists have always worked in series. However, exploiting the unique temporal dimensions of the Internet, a series of works can develop episodically as a serial work. If that work unfolds with contradictory thematics in its embodied and online forms, it reaches further toward an understanding of the complexities of post-Internet culture and identity.

References

Savigliano, Marta. Tango and the Political Economy of Passion. Boulder: Westview Press, 1995.

15

Wagman, Ira. "Wasteaminute.com: Notes on Office Work and Digital Distraction." M/C Journal 13, no. 4 (August 18, 2010). http://dx.doi.org/10.5204/mcj.243.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

For those seeking a diversion from the drudgery of work there are a number of websites offering to take you away. Consider the case of wasteaminute.com. On the site there is everything from flash video games, soft-core pornography and animated nudity, to puzzles and parlour games like poker. In addition, the site offers links to video clips grouped in categories such as “funny,” “accidents,” or “strange.” With its bright yellow bubble letters and elementary design, wasteaminute will never win any Webby awards. It is also unlikely to be part of a lucrative initial public offering for its owner, a web marketing company based in Lexington, Kentucky. The internet ratings company Alexa gives wasteaminute a ranking of 5,880,401 when it comes to the most popular sites online over the last three months, quite some way behind sites like Wikipedia, Facebook, and Windows Live.

Wasteaminute is not unique. There exists a group of websites, a micro-genre of sorts, that go out of their way to offer momentary escape from the more serious work at hand, with a similar menu of offerings. These include sites with names such as ishouldbeworking.com, i-am-bored.com, boredatwork.com, and drivenbyboredom.com. These web destinations represent only the most overtly named time-wasting opportunities. Video sharing sites like YouTube or France’s DailyMotion, personalised home pages like iGoogle, and the range of applications available on mobile devices offer similar opportunities for escape.

Wasteaminute inspired me to think about the relationship between digital media technologies and waste. In one sense, the site’s offerings remind us of the Internet’s capacity to re-purpose old media forms from earlier phases in the digital revolution, like the retro video game PacMan, or from aspects of print culture, like crosswords (Bolter and Grusin; Straw).
For my purposes, though, wasteaminute permits the opportunity to meditate, albeit briefly, on the ways media facilitate wasting time at work, particularly for those working in white- and no-collar work environments. In contemporary work environments work activity and wasteful activity exist on the same platform. With a click of a mouse or a keyboard shortcut, work and diversion can be easily interchanged on the screen, an experience of computing I know intimately from first-hand experience. The blurring of lines between work and waste has accompanied the extension of the ‘working day,’ a concept once tethered to the standardised work-week associated with modernity. Now people working in a range of professions take work out of the office and find themselves working in cafes, on public transportation, and at times once reserved for leisure, like weekends (Basso). In response to the indeterminate nature of when and where we are at work, the mainstream media routinely report about the wasteful use of computer technology for non-work purposes. Stories such as a recent one in the Washington Post which claimed that increased employee use of social media sites like Facebook and Twitter led to decreased productivity at work have become quite common in traditional media outlets (Casciato). Media technologies have always offered the prospect of making office work more efficient or the means for management to exercise control over employees. However, those same technologies have also served as the platforms on which one can engage in dilatory acts, stealing time from behind the boss’s back. I suggest stealing time at work may well be a “tactic,” in the sense used by Michel de Certeau, as a means to resist the rules and regulations that structure work and the working life. 
However, I also consider it to be a tactic in a different sense: websites and other digital applications offer users the means to take time back, in the form of ‘quick hits,’ providing immediate visual or narrative pleasures, or through interfaces which make the time-wasting look like work (Wagman). Reading sites like wasteaminute as examples of ‘office entertainment’ reminds us of the importance of workers as audiences for web content. An analysis of a few case studies also reveals how the forms of address of these sites themselves recognise and capitalise on an understanding of the rhythms of the working day, as well as those elements of contemporary office culture characterised by interruption, monotony and surveillance.

Work, Media, Waste

A mass of literature documents the transformations of work brought on by industrialisation and urbanisation. A recent biography of Franz Kafka outlines the rigors imposed upon the writer while working as an insurance agent: his first contract stipulated that “no employee has the right to keep any objects other than those belonging to the office under lock in the desk and files assigned for its use” (Murray 66). Siegfried Kracauer’s collection of writings on salaried workers in Germany in the 1930s argues that mass entertainment offers distractions that inhibit social change. Such restrictions and inducements are exemplary of the attempts to make work succumb to managerial regimes which are intended to maximise productivity and minimise waste, and to establish a division between ‘company time’ and ‘free time’. One does not have to be an industrial sociologist to know the efforts of Frederick W. Taylor, and the disciplines of “scientific management” in the early twentieth century which were based on the idea of making work more efficient, or of the workplace sociology scholarship from the 1950s that drew attention to the ways that office work can be monotonous or de-personalising (Friedmann; Mills; Whyte).
Historian JoAnne Yates has documented the ways those transformations, and what she calls an accompanying “philosophy of system and efficiency,” have been made possible through information and communication technologies, from the typewriter to carbon paper (107). Yates evokes the work of James Carey in identifying these developments, for example, the locating of workers in orderly locations such as offices, as spatial in nature. The changing meaning of work, particularly white-collar or bureaucratic labour in an age of precarious employment and neo-liberal economic regimes, and aggressive administrative “auditing technologies,” has subjected employees to more strenuous regimes of surveillance to ensure employee compliance and to protect against waste of company resources (Power). As Andrew Ross notes, after a deep period of self-criticism over the drudgery of work in North American settings in the 1960s, the subsequent years saw a re-thinking of the meaning of work, one that gradually traded greater work flexibility and self-management for more assertive forms of workplace control (9). As Ross notes, this too has changed, an after-effect of “the shareholder revolution,” which forced companies to deliver short-term profitability to their investors at any social cost. With so much at stake, Ross explains, the freedom of employees assumed a lower priority within corporate cultures, and “the introduction of information technologies in the workplace of the new capitalism resulted in the intensified surveillance of employees” (12). Others, like Dale Bradley, have drawn attention to the ways that the design of the office itself has always concerned itself with the bureaucratic and disciplinary control of bodies in space (77).
The move away from physical workspaces such as ‘the pen’ to the cubicle and now from the cubicle to the virtual office is for Bradley a move from “construction” to “connection.” This spatial shift in the way in which control over employees is exercised is symbolic of the liquid forms in which bodies are now “integrated with flows of money, culture, knowledge, and power” in the post-industrial global economies of the twenty-first century. As Christena Nippert-Eng points out, receiving office space was seen as a marker of trust, since it provided employees with a sense of privacy to carry out affairs—whether of a professional or a personal nature—out of earshot of others. Privacy means a lot of things, she points out, including “a relative lack of accountability for our immediate whereabouts and actions” (163). Yet those same modalities of control which characterise communication technologies in workspaces may also serve as the platforms for people to waste time while working. In other words, wasteful practices utilize the same technology that is used to regulate and manage time spent in the workplace. The telephone has permitted efficient communication between units in an office building or between the office and outside, but ‘personal business’ can also be conducted on the same line. Radio stations offer ‘easy listening’ formats, providing unobtrusive music so as not to disturb work settings. However, they can easily be tuned to other stations for breaking news, live sports events, or other matters having to do with the outside world. Photocopiers and fax machines facilitate the reproduction and dissemination of communication regardless of whether it is work or non-work related. The same, of course, is true for computerised applications. Companies may encourage their employees to use Facebook or Twitter to reach out to potential clients or customers, but those same applications may be used for personal social networking as well.
Since the activities of work and play can now be found on the same platform, employers routinely remind their employees that their surfing activities, along with their e-mails and company documents, will be recorded on the company server, itself subject to auditing and review whenever the company sees fit. Employees must be careful to practice image management, in order to ensure that contradictory evidence does not appear online when they call in sick to the office. Over time the dynamics of e-mail and Internet etiquette have changed in response to such developments. Those most aware of the distractive and professionally destructive features of downloading a funny or comedic e-mail attachment have come to adopt the acronym “NSFW” (Not Safe for Work). Even those of us who don’t worry about those things are well aware that the cache and “history” function of web browsers threaten to reveal the extent to which our time online is spent in unproductive ways. Many companies and public institutions, for example libraries, have taken things one step further by filtering out access to websites that may be peripheral to the primary work at hand.

At the same time contemporary workplace settings have sought to mix both work and play, or better yet to use play in the service of work, to make “work” more enjoyable for its workers. Professional development seminars, team-building exercises, company softball games, or group outings are examples intended to build morale and loyalty to the company among workers. Some companies offer their employees access to gyms, to game rooms, and to big screen TVs, in return for long and arduous—indeed, punishing—hours of time at the office (Dyer-Witheford and Sharman; Ross). In this manner, acts of not working are reconfigured as a form of work, or at least as a productive experience for the company at large.
Such perks are offered with an assumption of personal self-discipline, a feature of what Nippert-Eng characterises as the “discretionary workplace” (154). Of course, this also comes with an expectation that workers will stay close to the office, and to their work. As Sarah Sharma recently argued in this journal, such thinking is part of the way that late capitalism constructs “innovative ways to control people’s time and regulate their movement in space.” At the same time, however, there are plenty of moments of gentle resistance, in which the same machines of control and depersonalisation can be customised, and where individual expressions find their own platforms. A photo essay by Anna McCarthy in the Journal of Visual Culture records the inspirational messages and other personalised objects with which workers adorn their computers and work stations. McCarthy’s photographs represent the way people express themselves in relation to their work, making it a “place where workplace politics and power relations play out, often quite visibly” (McCarthy 214).

Screen Secrets

If McCarthy’s photo essay illustrates the overt ways in which people bring personal expression or gentle resistance to anodyne workplaces, there are also a series of other ‘screen acts’ that create opportunities to waste time in ways that are disguised as work. During the Olympics and US college basketball playoffs, both American broadcast networks CBS and NBC offered a “boss button,” a graphic link that a user could immediately click “if the boss was coming by” that transformed the screen to something that was associated with the culture of work, such as a spreadsheet. Other purveyors of networked time-wasting make use of the spreadsheet to mask distraction. The website cantyouseeimbored turns a spreadsheet into a game of “Breakout!” while other sites, like Spreadtweet, convert your Twitter updates into the form of a spreadsheet.
Such boss buttons and screen interfaces that mimic work are the present-day avatars of the “panic button,” a graphic image found at the bottom of websites back in the days of Web 1.0. A click of the panic button transported users away from an offending website and towards something more legitimate, like Yahoo! Even if it is unlikely that boss keys actually convince one’s superiors that one is really working—clicking to a spreadsheet only makes sense for a worker who might be expected to be working on those kinds of documents—they are an index of how notions of personal space and privacy play out in the digitalised workplace. David Kiely, an employee at an Australian investment bank, experienced this first hand when he opened an e-mail attachment sent to him by his co-workers featuring a scantily-clad model (Cuneo and Barrett). Unfortunately for Kiely, at the time he opened the attachment his computer screen was visible in the background of a network television interview with another of the bank’s employees. Kiely’s inauspicious click (which made him the subject of an investigation by his employer) continues to circulate on the Internet, and it spawned a number of articles highlighting the precarious nature of work in a digitalised environment where what might seem to be private can suddenly become very public, and thus able to be disseminated without restraint. At the same time, the public appetite for Kiely’s story indicates that not working at work, and using the Internet to do it, represents a mode of media consumption that is familiar to many of us, even if it is only the servers on the company computer that can account for how much time we spend doing it. Community attitudes towards time spent unproductively online remind us that waste carries with it a range of negative signifiers.
We talk about wasting time in terms of theft, “stealing time,” or even more dramatically as “killing time.” The popular construction of television as the “boob tube” distinguishes it from more ‘productive’ activities, like spending time with family, or exercise, or involvement in one’s community. The message is simple: life is too short to be “wasted” on such ephemera. If this kind of language is less familiar in the digital age, the discourse of ‘distraction’ is more prevalent. Yet, instead of judging distraction a negative symptom of the digital age, perhaps we should reinterpret wasting time as the worker’s attempt to assert some agency in an increasingly controlled workplace.

References

Basso, Pietro. Modern Times, Ancient Hours: Working Lives in the Twenty-First Century. London: Verso, 2003.
Bolter, Jay David, and Richard Grusin. Remediation: Understanding New Media. Cambridge: MIT Press, 2000.
Bradley, Dale. “Dimensions Vary: Technology, Space, and Power in the 20th Century Office”. Topia 11 (2004): 67-82.
Casciato, Paul. “Facebook and Other Social Media Cost UK Billions”. Washington Post, 5 Aug. 2010. 11 Aug. 2010 ‹http://www.washingtonpost.com/wp-dyn/content/article/2010/08/05/AR2010080503951.html›.
Cuneo, Clementine, and David Barrett. “Was Banker Set Up Over Saucy Miranda”. The Daily Telegraph 4 Feb. 2010. 21 May 2010 ‹http://www.dailytelegraph.com.au/entertainment/sydney-confidential/was-banker-set-up-over-saucy-miranda/story-e6frewz0-1225826576571›.
De Certeau, Michel. The Practice of Everyday Life. Vol. 1. Berkeley: U of California P, 1988.
Dyer-Witheford, Nick, and Zena Sharman. “The Political Economy of Canada’s Video and Computer Game Industry”. Canadian Journal of Communication 30.2 (2005). 1 May 2010 ‹http://www.cjc-online.ca/index.php/journal/article/view/1575/1728›.
Friedmann, Georges. Industrial Society. Glencoe, Ill.: Free Press, 1955.
Kracauer, Siegfried. The Salaried Masses. London: Verso, 1998.
McCarthy, Anna. Ambient Television. Durham: Duke UP, 2001.
———. “Geekospheres: Visual Culture and Material Culture at Work”. Journal of Visual Culture 3 (2004): 213-21.
Mills, C. Wright. White Collar. Oxford: Oxford UP, 1951.
Murray, Nicholas. Kafka: A Biography. New Haven: Yale UP, 2004.
Newman, Michael. “Ze Frank and the Poetics of Web Video”. First Monday 13.5 (2008). 1 Aug. 2010 ‹http://www.uic.edu/htbin/cgiwrap/bin/ojs/index.php/fm/article/view/2102/1962›.
Nippert-Eng, Christena. Home and Work: Negotiating Boundaries through Everyday Life. Chicago: U of Chicago P, 1996.
Power, Michael. The Audit Society. Oxford: Oxford UP, 1997.
Ross, Andrew. No Collar: The Humane Workplace and Its Hidden Costs. Philadelphia: Temple UP, 2004.
Sharma, Sarah. “The Great American Staycation and the Risk of Stillness”. M/C Journal 12.1 (2009). 11 May 2010 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/122›.
Straw, Will. “Embedded Memories”. Residual Media. Ed. Charles Acland. U of Minnesota P, 2007. 3-15.
Whyte, William. The Organisation Man. New York: Simon and Schuster, 1957.
Wagman, Ira. “Log On, Goof Off, Look Up: Facebook and the Rhythms of Canadian Internet Use”. How Canadians Communicate III: Contexts for Popular Culture. Eds. Bart Beaty, Derek Briton, Gloria Filax, and Rebecca Sullivan. Athabasca: Athabasca UP, 2009. 55-77. ‹http://www2.carleton.ca/jc/ccms/wp-content/ccms-files/02_Beaty_et_al-How_Canadians_Communicate.pdf›.
Yates, JoAnne. “Business Use of Information Technology during the Industrial Age”. A Nation Transformed by Information. Eds. Alfred D. Chandler & James W. Cortada. Oxford: Oxford UP, 2000. 107-36.

16

Glover, Stuart. "Failed Fantasies of Cohesion: Retrieving Positives from the Stalled Dream of Whole-of-Government Cultural Policy." M/C Journal 13, no. 1 (March 21, 2010). http://dx.doi.org/10.5204/mcj.213.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

In mid-2001, in a cultural policy discussion at Arts Queensland, an Australian state government arts policy and funding apparatus, a senior arts bureaucrat seeking to draw a funding client’s gaze back to the bigger picture of what the state government was trying to achieve through its cultural policy settings excused his own abstracting comments with the phrase, “but then I might just be a policy ‘wank’”. There was some awkward laughter before one of his colleagues asked, “did you mean a policy ‘wonk’”? The incident was a misstatement of a term adopted in the 1990s to characterise the policy workers in the Clinton White House (Cunningham). This was not its exclusive use, but many saw Clinton as an exemplary wonk: less a pragmatic politician than one entertained by the elaboration of policy. The policy work of Clinton’s kitchen cabinet was, in part, driven by a pervasive rationalist belief in the usefulness of ordered policy processes as a method of producing social and economic outcomes, and, in part, by the seductions of policy-play: its ambivalences, its conundrums, and, in some sense, its aesthetics (Klein 193-94). There, far from being characterised as unproductive “self-abuse” of the body-politic, policy processes were alive as a pragmatic technology, an operationalisation of ideology, as an aestheticised field of play, but more than anything as a central rationalist tenet of government action. This final idea—the possibilities of policy for effecting change, promoting development, meeting government objectives—is at the centre of the bureaucratic imagination. Policy is effective. And a concomitant belief is that ordered or organised policy processes result in the best policy and the best outcomes.
Starting with Harold Lasswell, policy theorists extended the general rationalist suppositions of Western representative democracies into executive government by arguing for the value of information/knowledge and the usefulness of ordered process in addressing thus identified policy problems. In the post-war period particularly, a case can be made for the usefulness of policy processes to government—although, in a paradox, these rationalist conceptions of the policy process were strangely irrational, even Utopian, in their view of the transformational capacities of policy. The early policy scientists often moved beyond a view of policy science as a useful tool, to the advocacy of policy science and the policy scientist as panaceas for public ills (Parsons 18-19). The Utopian ambitions of policy science find one of their extremes in the contemporary interest in whole-of-government approaches to policy making. Whole-of-governmentalism, concern with co-ordination of policy and delivery across all areas of the state, can be seen as produced out of Western governments’ paradoxical concern with (on one hand) order, totality, and consistency, and (on the other) deconstructing existing mechanisms of public administration. Whole-of-governmentalism requires a horizontal purview of government goals, programs, outputs, processes, politics, and outcomes, alongside—and perhaps in tension with—the long-standing vertical purview that is fundamental to ministerial responsibility. This often presents a set of public management problems largely internal to government. Policy discussion and decision-making, while affecting community outcomes and stakeholder utility, are, in this circumstance, largely inter-agency in focus. Any eventual policy document may well have bureaucrats rather than citizens as its target readers—or at least as its closest readers. Internally, cohesion of objective, discourse, tool and delivery are pursued as prime interests of policy making.
Failing at Policy

So what happens when whole-of-government policy processes, particularly cultural policy processes, break down or fail? Is there anything productive to be retrieved from a failed fantasy of policy cohesion? This paper examines the utility of a failure to cohere and order in cultural policy processes. I argue that the conditions of contemporary cultural policy-making, particularly the tension between the “boutique” scale of cultural policy-making bodies and the revised, near universal, remit of cultural policy, require policy work to be undertaken in an environment and in such a way that failure is almost inevitable. Coherence and cohesion are fundamental principles of whole-of-government policy, but cultural policy ambitions are necessarily too comprehensive to be achievable. This is especially so for the small arts or cultural offices of government that normally act as lead agencies for cultural policy development within government. Yet these failed processes can still give rise to positive outcomes, or positive intermediate outputs, that can be taken up in a productive way in the ongoing cycle of policy work that categorises contemporary cultural governance. Herein, I detail the development of Building the Future, a cultural policy planning paper (and the name of a policy planning process) undertaken within Arts Queensland in 1999 and 2000. (While this process is now ten years in the past, it is only with a decade passed that, as a consultant, I am in a position to write about the material.) The abandonment of this process before the production of a public policy program allows something to be said about the utility and role of failure in cultural policy-making.
The working draft of Building the Future never became a public document, but the eight months of its development helped produce a series of shifts in the discourse of Queensland Government cultural policy: from “arts” to “creative industries”; and from arts bureaucracy-centred cultural policy to whole-of-government policy frameworks. These concepts were then taken up and elaborated in the Creative Queensland policy statement published by Arts Queensland in October 2002, particularly the concern with creative industries; whole-of-government cultural policy; and the repositioning of Arts Queensland as a service agency to other potential cultural funding-bodies within government. Despite the failure of the Building the Future process, it had a role in the production of the policy document and policy processes that superseded it. This critique of cultural policy-making, rather than of cultural policy texts, announcements and settings, is offered as part of a project to bring to cultural policy studies material and theoretical accounts of the particularities of making cultural policy. While directions in cultural policy have much to do with the overall directions of government—which might over the past decade be categorised as a focus on de-regulation and the out-sourcing of services—there are developments in cultural policy settings and in cultural policy processes that are particular to cultural policy and cultural policy-making. Central to the development of cultural policy studies and to cultural policy is a transformational broadening of the operant definition of culture within government (O'Regan). Following Raymond Williams, the domain of culture is broadened to include high culture, popular culture, folk culture and the culture of everyday life. Accordingly, in some sense, every issue of governance is deemed to have a cultural dimension—be it policy questions around urban space, tourism, community building and so on.
Contemporary governments are required to act with a concern for cultural questions both within and across a number of long-persisting and otherwise discrete policy silos. This has implications for cultural policy makers and for program delivery. The definition of culture as “everyday life”, while truistically defendable, becomes unwieldy as an imprimatur or a container for administrative activity. Transforming cultural policy into a domain incorporating most social policy and significant elements of economic policy makes the domain titanically large. Potentially, it compromises usual government efforts to order policy activity through the division or apportionment of responsibility (Glover and Cunningham 19). The problem has given rise to a new mode of policy-making which attends to the co-ordination of policy across and between levels of government, known as whole-of-government policy-making (see O’Regan). Within the domain of cultural policy the task of whole-of-government cultural policy is complicated by the position of, and the limits upon, arts and cultural bureaux within state and federal governments. Dedicated cultural planning bureaux often operate as “boutique” agencies. They are usually discrete line agencies or line departments within government—only rarely are they part of the core policy function of departments of a Premier or a Prime Minister. Instead, like most line agencies, they lack the leverage within the bureaucracy or policy apparatus to deliver whole-of-government cultural policy change. In some sense, failure is the inevitable outcome of all policy processes, particularly when held up against the mechanistic representation of policy processes typical of policy handbooks (see Bridgman and Davis 42). Against such models, which describe policy as a series of discrete linear steps, all policy efforts fail.
The rationalist assumptions of early policy models—and the rigid templates for policy process that arise from their assumptions—in retrospect condemn every policy process to failure or at least profound shortcoming. This is particularly so with whole-of-government cultural policy-making. To re-think this, it can be argued that the error then is not really in the failure of the process, which is invariably brought about by the difficulty for coherent policy process to survive exogenous complexity, but instead the error rests with the simplicity of policy models and assumptions about the possibility of cohesion. In some sense, mechanistic policy processes make failure endogenous. The contemporary experience of making policy has tended to erode any fantasies of order, clear process, or, even, clear-sightedness within government. Achieving a coherence to the policy message is nigh on impossible—likewise cohesion of the policy framework is unlikely. Yet, importantly, failed policy is not without value. The churn of policy work—the exercise of attempting coherent policy-making—constitutes, in some sense, the deliberative function of government, and potentially operates as a force (and site) of change. Policy briefings, reports, and draft policies—the constitution of ideas in the policy process and the mechanism for their dissemination within the body of government and perhaps to other stakeholders—are discursive acts in the process of extending the discourse of government and forming its later actions. For arts and cultural policy agencies in particular, who act without the leverage or resources of central agencies, the expansive ambitions of whole-of-government cultural policy make failure inevitable. In such a circumstance, retrieving some benefits at the margins of policy processes, through the churn of policy work towards cohesion, is an important consolation. Case study: Cultural Policy 2000 The policy process I wish to examine is now complete.
It ran over the period 1999–2002, although I wish to concentrate on my involvement in the process in early 2000, during which, as a consultant to Arts Queensland, I generated a draft policy document, Building the Future: A policy framework for the next five years (working draft). The imperative to develop a new state cultural policy followed the election of the first Beattie Labor government in July 1998. By 1999, senior Arts Queensland staff began to argue (within government at least) for the development of a new state cultural policy. The bureaucrats perceived policy development as one way of establishing “traction” in the process of bidding for new funds for the portfolio. Arts Minister Matt Foley was initially reluctant to “green-light” the policy process, but eventually in early 1999 he acceded to it on the advice of Arts Queensland, the industry, his own policy advisors and the Department of Premier. As stated above, this case study is offered now because the passing of time makes the analysis of relatively sensitive material possible. From the outset, an abbreviated timeframe for consultation and drafting seemed to guarantee a difficult birth for the policy document. This was compounded by a failure to clarify the aims and process of the project. In presenting the draft policy to the advisory group, it became clear that there was no agreed strategic purpose to the document: Was it to be an advertisement, a framework for policy ideas, an audit, or a report on achievements? Tied to this were questions about the audience for the policy statement. Was it aimed at the public, the arts industry, bureaucrats inside Arts Queensland, or, in keeping with the whole-of-government inflection to the document and its putative use in bidding for funds inside government, bureaucrats outside of Arts Queensland? My own conception of the document was as a cultural policy framework for the whole-of-government for the coming five years.
It would concentrate on cultural policy in three realms: Arts Queensland; the arts instrumentalities; and other departments (particularly the cultural initiatives undertaken by the Department of Premier and the Department of State Development). In order to do this I articulated (for myself) a series of goals for the document. It needed to provide the philosophical underpinnings for a new arts and cultural policy, discuss the cultural significance of “community” in the context of the arts, outline expansion plans for the arts infrastructure throughout Queensland, advance ideas for increased employment in the arts and cultural industries, explore the development of new audiences and markets, address contemporary issues of technology, globalisation and culture commodification, promote a whole-of-government approach to the arts and cultural industries, address social justice and equity concerns associated with cultural diversity, and present examples of current and new arts and cultural practices. Five key strategies were identified: i) building strong communities and supporting diversity; ii) building the creative industries and the cultural economy; iii) developing audiences and telling Queensland’s stories; iv) delivering to the world; and v) a new role for government. While the second aim of building the creative industries and the cultural economy was an addition to the existing Australian arts policy discourse, it is the articulation of a new role for government that is most radical here. 
The document went to the length of explicitly suggesting a series of actions to enable Arts Queensland to re-position itself inside government: develop an ongoing policy cycle; position Arts Queensland as a lead agency for cultural policy development; establish a mechanism for joint policy planning across the arts portfolio; adopt a whole-of-government approach to policy-making and program delivery; use arts and cultural strategies to deliver on social and economic policy agendas; centralise some cultural policy functions and projects; maintain and develop mechanisms for peer assessment; establish long-term strategic relationships with the Commonwealth and local government; investigate new vehicles for arts and cultural investment; investigate partnerships between industry, community and government; and develop appropriate performance measures for the cultural industries. In short, the scope of the document was titanically large, and prohibitively expansive as a basis for policy change. A chief limitation of these aims is that they seem to place the cohesion and coherence of the policy discourse at the centre of the project—when it might have better privileged a concern with policy outputs and industry/community outcomes. The subsequent dismal fortunes of the document are instructive. The policy document went through several drafts over the first half of 2000. By August 2000, I had removed myself from the process and handed the drafting back to Arts Queensland, which then produced a shorter, less discursive version than my initial draft. However, by November 2000, it is reasonable to say that the policy document was abandoned. Significantly, after May 2000 the working drafts began to be used as internal discussion documents within government. Thus, despite the abandonment of the policy process, largely due to the unworkable breadth of its ambition, the document had a continued policy utility.
The subsequent discussions helped organise future policy statements and structural adjustments by government. After the re-election of the Beattie government in January 2001, a more substantial policy process was commenced with the earlier policy documents as a starting point. By early 2002 the document was in substantial draft. The eventual policy, Creative Queensland, was released in October 2002. Significantly, this document sought to advance two ideas that I believe the earlier process did much to mobilise: a whole-of-government approach to culture; and a broader operant definition of culture. It is important not to see these as ideas merely existing “textually” in the earlier policy draft of Building the Future, but instead to see them as ideas that had begun to adhere to the cultural policy mechanism of government, and to be deployed in internal policy discussions and in program design, before finding an eventual home in a published policy text. Analysis The productive effects of the aborted policy process in which I participated are difficult to quantify. They are difficult, in fact, to separate out from governments’ ongoing processes of producing and circulating policy ideas. What is clear is that the effects of Building the Future were not entirely negated by it never becoming public. Instead, despite only circulating to a readership of bureaucrats, it represented the ideas of part of the bureaucracy at a point in time. In this instance, a “failed” policy process, and its intermediate outcome, the draft policy, through the churn of policy work, assisted government towards an eventual policy statement and a new form of governmental organisation. This suggests that processes of cultural policy discussion, or policy churn, can be as productive as the public “enunciation” of formal policy in helping to organise ideas within government and determine programs and the allocation of resources.
This is even so where the Utopian idealism of the policy process is abandoned for something more graspable or politic. For the small arts or cultural policy bureau this is an important incremental benefit. Two final implications should be noted. The first is for models of policy process. Bridgman and Davis’s model of the Australian policy cycle, despite its mechanistic qualities, is ambiguous about where the policy process begins and ends. In one instance they represent it as strictly circular, always coming back to its own starting point (27). Elsewhere, however, they represent it as linear, but not necessarily circular, passing through eight stages with a defined beginning and end: identification of issues; policy analysis; choosing policy instruments; consultation; co-ordination; decision; implementation; and evaluation (28–29). What is clear from the 1999–2002 policy process—if we take the full period between when Arts Queensland began to organise the development of a new arts policy and its publication as Creative Queensland in October 2002—is that the policy process was not a linear one progressing in an orderly fashion towards policy outcomes. Instead, Building the Future is a snapshot in time (namely early to mid-2000) of a fragmenting policy process; it reveals policy-making as involving a concurrency of policy activity rather than a progression through linear steps. Following Mark Considine’s conception of policy work as the state’s effort at “system-wide information exchange and policy transfer” (271), the document is concerned less with the ordering of resources than with the organisation of policy discourse. The churn of policy is the mobilisation of information, or for Considine: policy-making, when considered as an innovation system among linked or interdependent actors, becomes a learning and regulating web based upon continuous exchanges of information and skill.
Learning occurs through regulated exchange, rather than through heroic insight or special legislative feats of the kind regularly described in newspapers. (269) The acceptance of this underpins a turn in contemporary accounts of policy (Considine 252-72) where policy processes become contingent and incomplete. The ordering of policy is something to be attempted rather than achieved. Policy becomes pragmatic and ad hoc. It is only coherent in as much as a policy statement represents a bringing together of elements of an agency or government’s objectives and program. The order, in some sense, arrives through the act of collection, narrativisation and representation. The second implication is more directly for cultural policy makers facing the prospect of whole-of-government cultural policy making. While it is reasonable for government to wish to make coherent totalising statements about its cultural interests, such ambitions bring the near certainty of failure for the small agency. Yet these failures of coherence and cohesion should be viewed as delivering incremental benefits through the effort and process of this policy “churn”. As was the case with the Building the Future policy process, while aborted, it was not a totally wasted effort. Instead, Building the Future mobilised a set of ideas within Arts Queensland and within government. For the small arts or cultural bureaux approaching the enormous task of whole-of-government cultural policy making such marginal benefits are important. References Arts Queensland. Creative Queensland: The Queensland Government Cultural Policy 2002. Brisbane: Arts Queensland, 2002. Bridgman, Peter, and Glyn Davis. Australian Policy Handbook. St Leonards: Allen & Unwin, 1998. Considine, Mark. Public Policy: A Critical Approach. South Melbourne: Palgrave Macmillan, 1996. Cunningham, Stuart. "Willing Wonkers at the Policy Factory." Media Information Australia 73 (1994): 4-7. Glover, Stuart, and Stuart Cunningham.
"The New Brisbane." Artlink 23.2 (2003): 16-23. Glover, Stuart, and Gillian Gardiner. Building the Future: A Policy Framework for the Next Five Years (Working Draft). Brisbane: Arts Queensland, 2000. Klein, Joe. "Eight Years." New Yorker 16 & 23 Oct. 2000: 188-217. O'Regan, Tom. "Cultural Policy: Rejuvenate or Wither". 2001. rtf.file. (26 July): AKCCMP. 9 Aug. 2001. <http://www.gu.edu.au/centre/cmp>. Parsons, Wayne. Public Policy: An Introduction to the Theory and Practice of Policy Analysis. Aldershot: Edward Elgar, 1995. Williams, Raymond. Keywords: A Vocabulary of Culture and Society. London: Fontana, 1976.

Edmundson, Anna. "Curating in the Postdigital Age." M/C Journal 18, no. 4 (August 10, 2015). http://dx.doi.org/10.5204/mcj.1016.

Abstract:
It seems nowadays that any aspect of collecting and displaying tangible or intangible material culture is labeled as curating: shopkeepers curate their wares; DJs curate their musical selections; magazine editors curate media stories; and hipsters curate their coffee tables. Given the increasing ubiquity and complexity of 21st-century notions of curatorship, the current issue of M/C Journal, ‘curate’, provides an excellent opportunity to consider some of the changes that have occurred in professional practice since the emergence of the ‘digital turn’. There is no doubt that the internet and interactive media have transformed the way we live our daily lives—and for many cultural commentators it only makes sense that they should also transform our cultural experiences. In this paper, I want to examine the issue of curatorial practice in the postdigital age, looking at some of the ways that curating has changed over the last twenty years—and some of the ways it has not. The term postdigital comes from the work of Ross Parry, and is used to reference the ‘tipping point’ where the use of digital technologies became normative practice in museums (24). Overall, I contend that although new technologies have substantially facilitated the way that curators do their jobs, core business and values have not changed as a result of the digital turn. While major paradigm shifts have occurred in the field of professional curatorship over the last twenty years, these shifts have been issue-driven rather than a result of new technologies. Everyone’s a Curator In a 2009 article in the New York Times, journalist Alex Williams commented on the growing trend in American consumer culture of labeling oneself a curator. “The word ‘curate’,” he observed, “has become a fashionable code word among the aesthetically minded, who seem to paste it onto any activity that involves culling and selecting” (1).
Williams dated the origins of the popular adoption of the term ‘curating’ to a decade earlier, noting the strong association between the uptake and the rise of the internet (2). This association is not surprising. The development of increasingly interactive software such as Web 2.0 has led to a rapid rise in new technologies aimed at connecting people and information in ways that were previously unimaginable. In particular the internet has become a space in which people can collect, store and, most importantly, share vast quantities of information. This information is often about objects. According to sociologist Jyri Engeström, the most successful social network sites on the internet (such as Pinterest, Flickr and Houzz) use discrete objects, rather than educational content or interpersonal relationships, as the basis for social interaction. So objects become the node for inter-personal communication. In these and other sites, internet users can find, collate and display multiple images of objects on the same page, which can in turn be connected at the press of a button to other related sources of information in the form of text, commentary or more images. These sites are often seen as an opportunity to virtually curate mini-exhibitions, as well as to create mood boards or sites of virtual consumption. The idea of curating as selective aesthetic editing is also popular in online marketplaces such as Etsy, where numerous sellers offer ‘curated’ selections of everything from home wares, to prints, to (my personal favorite) cat toys. In all of these exercises there is an emphasis on the idea of connoisseurship. As part of his article on the new breed of ‘curators’, for example, Alex Williams interviewed Tom Kalendrain, the Fashion Director of a leading American department store, which had engaged in a collaboration with Scott Schuman of the fashion blog the Sartorialist.
According to Kalendrain the store had asked Schuman to ‘curate’ a collection of clothes for them to sell. He justified calling Schuman a curator by explaining: “It was precisely his eye that made the store want to work with him; it was about the right shade of blue, about the cut, about the width of a lapel” (cited in Williams 2). The interview reveals much about current popular notions of what it means to be a curator. The central emphasis of Kalendrain’s distinction was on connoisseurship: exerting a privileged authoritative voice based on intimate knowledge of the subject matter and the ability to discern the very best examples from a plethora of choices. Ironically, in terms of contemporary museum practice, this is a model of curating that museums have consciously been trying to move away from for at least the last three decades. We are now witnessing an interesting disconnect in which the extra-museum community (represented in particular by a postdigital generation of cultural bloggers, commentators and entrepreneurs) are re-vivifying an archaic model of curating, based on object-centric connoisseurship, just at the point where professional curators had thought they had successfully moved on. From Being about Something to Being for Somebody The rejection of the object-expert model of curating has been so persuasive that it has transformed the way museums conduct core business across all sectors of the institution. Over the last thirty to forty years museums have witnessed a major pedagogical shift in how curators approach their work and how museums conceptualise their core values. 
These paradigmatic and pedagogical shifts were best characterised by the museologist Stephen Weil in his seminal article “From being about something to being for somebody.” Weil, writing in the late 1990s, noted that museums had turned away from traditional models in which individual curators (by way of scholarship and connoisseurship) dictated how the rest of the world (the audience) apprehended and understood significant objects of art, science and history—towards an audience-centered approach where curators worked collaboratively with a variety of interested communities to create a pluralist forum for social change. In museum parlance these changes are referred to under the general rubric of the ‘new museology’: a paradigm shift which had its origins in the 1970s, its gestation in the 1980s, and began to substantially manifest by the 1990s. Although no longer ‘new’, these shifts continue to influence museum practices in the 2000s. In her article “Curatorship as Social Practice”, museologist Christina Kreps outlined some of the developments over recent decades that have challenged the object-centric model. According to Kreps, the ‘new museology’ was a paradigm shift that emerged from a widespread dissatisfaction with conventional interpretations of the museum and its functions and sought to re-orient itself away from strongly method- and technique-driven, object-focused approaches. “The ‘new museum’ was to be people-centered, action-oriented, and devoted to social change and development” (315). An integral contributor to the developing new museology was the subjection of the western museum in the 1980s and ‘90s to representational critique from academics and activists. Such a critique entailed, in the words of Sharon Macdonald, questioning and drawing attention to “how meanings come to be inscribed and by whom, and how some come to be regarded as ‘right’ or taken as given” (3).
Macdonald notes that postcolonial and feminist academics were especially engaged in this critique and the growing “identity politics” of the era. There was also a growing engagement with the concept that museological/curatorial work is what Kreps (2003b) calls a “social process”: a recognition that “people’s relationships to objects are primarily social and cultural ones” (154). This shift has particularly impacted on the practice of museum curatorship. By way of illustration we can compare two scholarly definitions of what constitutes a curator; one written in 1994 and one from 2001. The Handbook for Museums, written in 1994 by Gary Edson and David Dean, defines a curator as: “a staff member or consultant who is a specialist in a particular field of study and who provides information, does research and oversees the maintenance, use, and enhancement of collections” (290). Cash Cash, writing in 2001, defines curatorship instead as “a social practice predicated on the principle of a fixed relation between material objects and the human environment” (140). The shift has been towards increased self-reflexivity and a focus on greater plurality—acknowledging the needs of museums’ diverse audiences and community stakeholders. As part of this internal reflection the role of curator has shifted from sole authority to cultural mediator—from connoisseur to community facilitator, acting as a conduit for greater community-based conversation and audience engagement, resulting in new interpretations of what museums are, and what their purpose is. This shift—away from objects and towards audiences—has been so great that it has led some scholars to question the need for museums to have standing collections at all. Do Museums Need Objects? In his provocatively titled work Do Museums Still Need Objects? historian Steven Conn observes that many contemporary museums are turning away from the authority of the object and towards mass entertainment (1).
Conn notes that there has been an increasing retreat from object-based research in the fields of art, science and ethnography, that less object-based research seems to be occurring in museums, and that fewer objects are being put on display (2). The success of science centers with no standing collections; the reduction in the number of objects put on display in modern museums (23); the increasing phalanx of ‘starchitect’-designed museums where the building is more important than the objects in it (11); and the increase of virtual museums and collections online all seem to indicate that conventional museum objects have had their day (1-2). Or have they? At the same time that all of the above is occurring, ongoing research suggests that in the digital age, more than ever, people are seeking the authenticity of the real. For example, a 2008 survey of 5,000 visitors to living history sites in the USA found that those surveyed expressed a strong desire to commune with historically authentic objects: respondents felt that their lives had become so crazy, so complicated, so unreal that they were seeking something real and authentic in their lives by visiting these museums. (Wilkening and Donnis 1) A subsequent research survey aimed specifically at young audiences (in their early twenties) reported that: seeing stuff online only made them want to see the real objects in person even more, [and that] they felt that museums were inherently authentic, largely because they have authentic objects that are unique and wonderful. (Wilkening 2) Adding to the question ‘do museums need objects?’, Rainey Tisdale argues that in the current digital age we need real museum objects more than ever. “Many museum professionals,” she reports, “have come to believe that the increase in digital versions of objects actually enhances the value of in-person encounters with tangible, real things” (20). Museums still need objects.
Indeed, in any kind of corporate planning, one of the first things business managers look for in a company is what is unique about it. What can it provide that the competition can’t? Despite the popularity of all sorts of info-tainments, the one thing that museums have (and other institutions don’t) is significant collections. Collections are a museum’s niche resource – in business speak they are the asset that gives them the advantage over their competitors. Despite the increasing importance of technology in delivering information, including collections online, there is still overwhelming evidence to suggest that we should not be too quick to dismiss the traditional preserve of museums – the numinous object. And in fact, this is precisely the final argument that Steven Conn reaches in his above-mentioned publication. Curating in the Postdigital Age While it is reassuring (but not particularly surprising) that generations Y and Z can still differentiate between virtual and real objects, this doesn’t mean that museum curators can bury their heads in the collection room hoping that the digital age will simply go away. The reality is that while digitally savvy audiences continue to feel the need to see and commune with authentic, materially present objects, the ways in which they access information about these objects (prior to, during, and after a museum visit) have changed substantially due to technological advances. In turn, the ways in which curators research and present these objects – and stories about them – have also changed. So what are some of the changes that have occurred in museum operations and visitor behavior due to technological advances over the last twenty years? The most obvious technological advances over the last twenty years have actually been in data management. Since the 1990s a number of specialist data management systems have been developed for use in the museum sector.
In theory at least, a curator can now access the entire collections of an institution without leaving their desk. Moreover, the same database that tells the curator how many objects the institution holds from the Torres Strait Islands, can also tell her what they look like (through high quality images); which objects were exhibited in past exhibitions; what their prior labels were; what in-house research has been conducted on them; what the conservation requirements are; where they are stored; and who to contact for copyright clearance for display—to name just a few functions. In addition a curator can get on the internet to search the online collection databases from other museums to find what objects they have from the Torres Strait Islands. Thus, while our curator is at this point conducting the same type of exhibition research that she would have done twenty years ago, the ease in which she can access information is substantially greater. The major difference of course is that today, rather than in the past, the curator would be collaborating with members of the original source community to undertake this project. Despite the rise of the internet, this type of liaison still usually occurs face to face. The development of accessible digital databases through the Internet and capacity to download images and information at a rapid rate has also changed the way non-museum staff can access collections. Audiences can now visit museum websites through which they can easily access information about current and past exhibitions, public programs, and online collections. In many cases visitors can also contribute to general discussion forums and collections provenance data through various means such as ‘tagging’; commenting on blogs; message boards; and virtual ‘talk back’ walls. Again, however, this represents a change in how visitors access museums but not a fundamental shift in what they can access. 
In the past, museum visitors were still encouraged to access and comment upon the collections; it’s just that doing so took a lot more time and effort. The rise of interactivity and the internet—in particular through Web 2.0—has led many commentators to call for a radical change in the ways museums operate. Museum analyst Lynda Kelly (2009) has commented that: the demands of the ‘information age’ have raised new questions for museums. It has been argued that museums need to move from being suppliers of information to providing usable knowledge and tools for visitors to explore their own ideas and reach their own conclusions because of increasing access to technologies, such as the internet. Gordon Freedman, for example, argues that internet technologies such as computers, the World Wide Web, mobile phones and email “… have put the power of communication, information gathering, and analysis in the hands of the individuals of the world” (299). Freedman argued that museums need to “evolve into a new kind of beast” (300) in order to keep up with these changes, opening up the possibility of audiences becoming mediators of information and knowledge. Although we often hear about the potential of new technologies to enable multiple authors for exhibitions, I have yet to hear of an example of this successfully taking place. This doesn’t mean, however, that it will never happen. At present most museums seem to be merely dipping their toes in the water. A recent example from the Art Gallery of South Australia illustrates this point. In 2013, the Gallery mounted an exhibition that was, in theory at least, curated by the public. Labeled as “the ultimate people’s choice exhibition”, the project was hosted in conjunction with ABC Radio Adelaide. The public was encouraged to go online to the gallery website and select from a range of artworks in different categories by voting for their favorites.
The ‘winning’ works were to form the basis of the exhibition. While the media spin on the exhibition gave the illusion of a mass-curated show, in reality very little actual control was given over to the audience-curators. The public was presented with a range of artworks which had already been pre-selected from the standing collections; the themes for the exhibition had also already been determined, as they informed the 120 artworks that were offered up for voting. Thus, in the end the pre-selection of objects and themes, as well as the timing and execution of the exhibition, remained entirely in the hands of the professional curators. Another recent innovation did not attempt to harness public authorship, but rather enhanced individual visitor connections to museum collections by harnessing new GPS technologies. The Streetmuseum was a free app program created by the Museum of London to bring geotagged historical street views to hand-held or portable mobile devices. The program allowed users to undertake a self-guided tour of London. After programming in their route, users could then point their device at various significant sites along the way. Looking through their viewfinder they would see a 3D historic photograph overlaid on the live site – allowing users not only to see what the area looked like in the past but also to capture an image of the overlay. While many of the available tagging apps simply allow for the opportunity of adding more white noise – allowing viewers to add commentary, pictures, and links to a particular geotagged site with no particular focus – the Streetmuseum had a well-defined purpose: to encourage its audience to get out and explore London; to share the museum’s archival photograph collection with a broader audience; and to teach people more about London’s unique history. A Second Golden Age?
A few years ago Steven Conn suggested that museums are experiencing an international ‘golden age’, with more museums being built and visited and talked about than ever before (1). In the United States, where Conn is based, there are more than 17,500 accredited museums, and more than two million people visit some sort of museum per day, averaging around 865 million museum visits per year (2). However, at the same time that museums are proliferating, the traditional areas of academic research and theory that feed into museums, such as history, cultural studies, anthropology and art history, are experiencing a period of intense self-reflexivity. Conn writes: At the turn of the twenty-first century, more people are going to more museums than at any time in the past, and simultaneously more scholars, critics, and others are writing and talking about museums. The two phenomena are most certainly related but it does not seem to be a happy relationship. Even as museums enjoy more and more success…many who write about them express varying degrees of foreboding. (1) There is no doubt that the internet and increasingly interactive media have transformed the way we live our daily lives—it only makes sense that they should also transform our cultural experiences. At the same time, museums need to learn to ride the wave without getting dumped by it. The best new media acts as a bridge—connecting people to places and ideas—allowing them to learn more about museum objects and historical spaces, value-adding to museum visits rather than replacing them altogether. As museologist Elaine Gurian has recently concluded, the core business of museums seems unchanged thus far by the adoption of internet-based technology: “the museum field generally, its curators, and those academic departments focused on training curators remain at the core philosophically unchanged despite their new websites and shiny new technological reference centres” (97).
Virtual life has not replaced real life, and online collections and exhibitions have not replaced real-life visitations. Visitors want access to credible information about museum objects and museum exhibitions; they are not looking for Wiki-Museums. Or if they are, they are looking to the Internet community to provide that service rather than the employees of state and federally funded museums. Both provide legitimate services, but they don’t necessarily need to provide the same service. In the same vein, extra-museum ‘curating’ of objects and ideas through social media sites such as Pinterest, Flickr, Instagram and Tumblr provides a valuable source of inspiration and a highly enjoyable form of virtual consumption. But the popular uptake of the term ‘curating’ remains as easily separable from professional practice as the prior uptake of the terms ‘doctor’ and ‘architect’. An individual who doctors an image, or is the architect of their destiny, is still not going to operate on a patient nor construct a building. While major ontological shifts have occurred within museum curatorship over the last thirty years, these changes have resulted from wider social shifts, not directly from technology. This is not to say that technology will not change the museum’s ‘way of being’ in my professional lifetime—it’s just to say it hasn’t happened yet.
References
Cash Cash, Phillip. “Medicine Bundles: An Indigenous Approach.” Ed. T. Bray. The Future of the Past: Archaeologists, Native Americans and Repatriation. New York and London: Garland Publishing, 2001. 139-145. Conn, Steven. Do Museums Still Need Objects? Philadelphia: University of Pennsylvania Press, 2011. Edson, Gary, and David Dean. The Handbook for Museums. New York and London: Routledge, 1994. Engeström, Jyri. “Why Some Social Network Services Work and Others Don’t — Or: The Case for Object-Centered Sociality.” Zengestrom Apr. 2005.
17 June 2015 ‹http://www.zengestrom.com/blog/2005/04/why-some-social-network-services-work-and-others-dont-or-the-case-for-object-centered-sociality.html›. Freedman, Gordon. “The Changing Nature of Museums.” Curator 43.4 (2000): 295-306. Gurian, Elaine Heumann. “Curator: From Soloist to Impresario.” Eds. Fiona Cameron and Lynda Kelly. Hot Topics, Public Culture, Museums. Newcastle: Cambridge Scholars Publishing, 2010. 95-111. Kelly, Lynda. “Museum Authority.” Blog 12 Nov. 2009. 25 June 2015 ‹http://australianmuseum.net.au/blogpost/museullaneous/museum-authority›. Kreps, Christina. “Curatorship as Social Practice.” Curator: The Museum Journal 46.3 (2003): 311-323. ———. Liberating Culture: Cross-Cultural Perspectives on Museums, Curation, and Heritage Preservation. London and New York: Routledge, 2003. Macdonald, Sharon. “Expanding Museum Studies: An Introduction.” Ed. Sharon Macdonald. A Companion to Museum Studies. Oxford: Blackwell Publishing, 2011. Parry, Ross. “The End of the Beginning: Normativity in the Postdigital Museum.” Museum Worlds: Advances in Research 1 (2013): 24-39. Suchy, Serene. Leading with Passion: Change Management in the Twenty-First Century Museum. Lanham: AltaMira Press, 2004. Tisdale, Rainey. “Do History Museums Still Need Objects?” History News (2011): 19-24. 18 June 2015 ‹http://aaslhcommunity.org/historynews/files/2011/08/RaineySmr11Links.pdf›. Weil, Stephen E. “From Being about Something to Being for Somebody: The Ongoing Transformation of the American Museum.” Daedalus, Journal of the American Academy of Arts and Sciences 128.3 (1999): 229–258. Wilkening, Susie. “Community Engagement and Objects—Mutually Exclusive?” Museum Audience Insight 27 July 2009. 14 June 2015 ‹http://reachadvisors.typepad.com/museum_audience_insight/2009/07/community-engagement-and-objects-mutually-exclusive.html›. ———, and Erica Donnis. “Authenticity? It Means Everything.” History News 63.4 (2008). Williams, Alex.
“On the Tip of Creative Tongues.” New York Times 4 Oct. 2009. 4 June 2015 ‹http://www.nytimes.com/2009/10/04/fashion/04curate.html›.

18

Rossiter, Ned. "Creative Industries and the Limits of Critique from." M/C Journal 6, no.3 (June1, 2003). http://dx.doi.org/10.5204/mcj.2208.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

‘Every space has become ad space’. Steve Hayden, Wired Magazine, May 2003. Marshall McLuhan’s (1964) dictum that media technologies constitute a sensory extension of the body shares a conceptual affinity with Ernst Jünger’s notion of ‘“organic construction” [which] indicates [a] synergy between man and machine’ and Walter Benjamin’s exploration of the mimetic correspondence between the organic and the inorganic, between human and non-human forms (Bolz, 2002: 19). The logo or brand is co-extensive with various media of communication – billboards, TV advertisements, fashion labels, book spines, mobile phones, etc. Often the logo is interchangeable with the product itself or a way of life. Since all social relations are mediated, whether by communications technologies or architectonic forms ranging from corporate buildings to sporting grounds to family living rooms, it follows that there can be no outside for sociality. The social is and always has been in a mutually determining relationship with mediating forms. It is in this sense that there is no outside. Such an idea has become a refrain amongst various contemporary media theorists. Here’s a sample: There is no outside position anymore, nor is this perceived as something desirable. (Lovink, 2002a: 4) Both “us” and “them” (whoever we are, whoever they are) are all always situated in this same virtual geography. There’s no outside …. There is nothing outside the vector. (Wark, 2002: 316) There is no more outside. The critique of information is in the information itself. (Lash, 2002: 220) In declaring a universality for media culture and information flows, all of the above statements acknowledge the political and conceptual failure of assuming a critical position outside socio-technically constituted relations.
Similarly, they recognise the problems inherent in the “ideology critique” of the Frankfurt School who, in their distinction between “truth” and “false-consciousness”, claimed a sort of absolute knowledge for the critic that transcended the field of ideology as it is produced by the culture industry. Althusser’s more complex conception of ideology, material practices and subject formation nevertheless also fell prey to the pretence of historical materialism as an autonomous “science” that is able to determine the totality, albeit fragmented, of lived social relations. One of the key failings of ideology critique, then, is its incapacity to account for the ways in which the critic, theorist or intellectual is implicated in the operations of ideology. That is, such approaches displace the reflexivity and power relationships between epistemology, ontology and their constitution as material practices within socio-political institutions and historical constellations, which in turn are the settings for the formation of ideology. Scott Lash abandons the term ideology altogether due to its conceptual legacies within German dialectics and French post-structuralist aporetics, both of which ‘are based in a fundamental dualism, a fundamental binary, of the two types of reason. One speaks of grounding and reconciliation, the other of unbridgeability …. Both presume a sphere of transcendence’ (Lash, 2002: 8). Such assertions can be made at a general level concerning these diverse and often conflicting approaches when they are reduced to categories for the purpose of a polemic. However, the work of “post-structuralists” such as Foucault, Deleuze and Guattari and the work of German systems theorist Niklas Luhmann is clearly amenable to the task of critique within information societies (see Rossiter, 2003). Indeed, Lash draws on such theorists in assembling his critical dispositif for the information age. 
More concretely, Lash (2002: 9) advances his case for a new mode of critique by noting the socio-technical and historical shift from ‘constitutive dualisms of the era of the national manufacturing society’ to global information cultures, whose constitutive form is immanent to informational networks and flows. Such a shift, according to Lash, needs to be met with a corresponding mode of critique: Ideologycritique [ideologiekritik] had to be somehow outside of ideology. With the disappearance of a constitutive outside, informationcritique must be inside of information. There is no outside any more. (2002: 10) Lash goes on to note, quite rightly, that ‘Informationcritique itself is branded, another object of intellectual property, machinically mediated’ (2002: 10). It is the political and conceptual tensions between information critique and its regulation via intellectual property regimes which condition critique as yet another brand or logo that I wish to explore in the rest of this essay. Further, I will question the supposed erasure of a “constitutive outside” to the field of socio-technical relations within network societies and informational economies. Lash is far too totalising in supposing a break between industrial modes of production and informational flows. Moreover, the assertion that there is no more outside to information too readily and simplistically assumes informational relations as universal and horizontally organised, and hence overlooks the significant structural, cultural and economic obstacles to participation within media vectors. That is, there certainly is an outside to information! Indeed, there are a plurality of outsides. These outsides are intertwined with the flows of capital and the imperial biopower of Empire, as Hardt and Negri (2000) have argued. As difficult as it may be to ascertain the boundaries of life in all its complexity, borders, however defined, nonetheless exist. Just ask the so-called “illegal immigrant”! 
This essay identifies three key modalities comprising a constitutive outside: material (uneven geographies of labour-power and the digital divide), symbolic (cultural capital), and strategic (figures of critique). My point of reference in developing this inquiry will pivot around an analysis of the importation in Australia of the British “Creative Industries” project and the problematic foundation such a project presents to the branding and commercialisation of intellectual labour. The creative industries movement – or Queensland Ideology, as I’ve discussed elsewhere with Danny Butt (2002) – holds further implications for the political and economic position of the university vis-à-vis the arts and humanities. Creative industries constructs itself as inside the culture of informationalism and its concomitant economies by the very fact that it is an exercise in branding. Such branding is evidenced in the discourses, rhetoric and policies of creative industries as adopted by university faculties, government departments and the cultural industries and service sectors seeking to reposition themselves in an institutional environment that is adjusting to ongoing structural reforms attributed to the demands by the “New Economy” for increased labour flexibility and specialisation, institutional and economic deregulation, product customisation and capital accumulation. Within the creative industries the content produced by labour-power is branded as copyrights and trademarks within the system of Intellectual Property Regimes (IPRs). However, as I will go on to show, a constitutive outside figures in material, symbolic and strategic ways that condition the possibility of creative industries. 
The creative industries project, as envisioned by the Blair government’s Department of Culture, Media and Sport (DCMS) responsible for the Creative Industry Task Force Mapping Documents of 1998 and 2001, is interested in enhancing the “creative” potential of cultural labour in order to extract a commercial value from cultural objects and services. Just as there is no outside for informationcritique, for proponents of the creative industries there is no culture that is worth its name if it is outside a market economy. That is, the commercialisation of “creativity” – or indeed commerce as a creative undertaking – acts as a legitimising function and hence plays a delimiting role for “culture” and, by association, sociality. And let us not forget, the institutional life of career academics is also at stake in this legitimating process. The DCMS cast its net wide when defining creative sectors and deploys a lexicon that is as vague and unquantifiable as the next mission statement by government and corporate bodies enmeshed within a neo-liberal paradigm. At least one of the key proponents of the creative industries in Australia is ready to acknowledge this (see Cunningham, 2003). The list of sectors identified as holding creative capacities in the CITF Mapping Document includes: film, music, television and radio, publishing, software, interactive leisure software, design, designer fashion, architecture, performing arts, crafts, arts and antique markets, and advertising. The Mapping Document seeks to demonstrate how these sectors consist of ‘... activities which have their origin in individual creativity, skill and talent and which have the potential for wealth and job creation through generation and exploitation of intellectual property’ (CITF: 1998/2001). The CITF’s identification of intellectual property as central to the creation of jobs and wealth firmly places the creative industries within informational and knowledge economies.
Unlike material property, intellectual property such as artistic creations (films, music, books) and innovative technical processes (software, biotechnologies) are forms of knowledge that do not diminish when they are distributed. This is especially the case when information has been encoded in a digital form and distributed through technologies such as the internet. In such instances, information is often attributed an “immaterial” and nonrivalrous quality, although this can be highly misleading for both the conceptualisation of information and the politics of knowledge production. Intellectual property, as distinct from material property, operates as a scaling device in which the unit cost of labour is offset by the potential for substantial profit margins realised by distribution techniques availed by new information and communication technologies (ICTs) and their capacity to infinitely reproduce the digital commodity object as a property relation. Within the logic of intellectual property regimes, the use of content is based on the capacity of individuals and institutions to pay. The syndication of media content ensures that market saturation is optimal and competition is kept to a minimum. However, such a legal architecture and hegemonic media industry has run into conflict with other net cultures such as open source movements and peer-to-peer networks (Lovink, 2002b; Meikle, 2002), which is to say nothing of the digital piracy of software and digitally encoded cinematic forms. To this end, IPRs are an unstable architecture for extracting profit. The operation of Intellectual Property Regimes constitutes an outside within creative industries by alienating labour from its mode of information or form of expression. Lash is apposite on this point: ‘Intellectual property carries with it the right to exclude’ (Lash, 2002: 24). 
This principle of exclusion applies not only to those outside the informational economy and culture of networks as a result of geographic, economic, infrastructural, and cultural constraints. The very practitioners within the creative industries are excluded from control over their creations. It is in this sense that a legal and material outside is established within an informational society. At the same time, this internal outside – to put it rather clumsily – operates in a constitutive manner in as much as the creative industries, by definition, depend upon the capacity to exploit the IP produced by their primary source of labour. For all the emphasis the Mapping Document places on exploiting intellectual property, it’s really quite remarkable how absent any elaboration or considered development of IP is from creative industries rhetoric. It’s even more astonishing that media and cultural studies academics have given at best passing attention to the issues of IPRs. Terry Flew (2002: 154-159) is one of the rare exceptions, though even here there is no attempt to identify the implications IPRs hold for those working in the creative industries sectors. Perhaps such oversights by academics associated with the creative industries can be accounted for by the fact that their own jobs rest within the modern, industrial institution of the university, which continues to offer the security of a salary award system and continuing if not tenured employment despite the onslaught of neo-liberal reforms since the 1980s. Such an industrial system of traditional and organised labour, however, does not define the labour conditions for those working in the so-called creative industries.
Within those sectors engaged more intensively in commercialising culture, labour practices closely resemble work characterised by the dotcom boom, which saw young people working excessively long hours without any of the sort of employment security and protection vis-à-vis salary, health benefits and pension schemes peculiar to traditional and organised labour (see McRobbie, 2002; Ross, 2003). During the dotcom mania of the mid to late 90s, stock options were frequently offered to people as an incentive for offsetting the often minimum or even deferred payment of wages (see Frank, 2000). It is understandable that the creative industries project holds an appeal for managerial intellectuals operating in arts and humanities disciplines in Australia, most particularly at Queensland University of Technology (QUT), which claims to have established the ‘world’s first’ Creative Industries faculty (http://www.creativeindustries.qut.com/). The creative industries provide a validating discourse for those suffering anxiety disorders over what Ruth Barcan (2003) has called the ‘usefulness’ of ‘idle’ intellectual pastimes. As a project that endeavours to articulate graduate skills with labour markets, the creative industries is a natural extension of the neo-liberal agenda within education as advocated by successive governments in Australia since the Dawkins reforms in the mid 1980s (see Marginson and Considine, 2000). Certainly there’s a constructive dimension to this: graduates, after all, need jobs and universities should display an awareness of market conditions; they also have a responsibility to do so. And on this count, I find it remarkable that so many university departments in my own field of communications and media studies are so bold and, let’s face it, stupid, as to make unwavering assertions about market demands and student needs on the basis of doing little more than sniffing the wind! Time for a bit of a reality check, I’d say. 
And this means becoming a little more serious about allocating funds and resources towards market research and analysis based on the combination of needs between students, staff, disciplinary values, university expectations, and the political economy of markets. However, the extent to which there should be a wholesale shift of the arts and humanities into a creative industries model is open to debate. The arts and humanities, after all, are a set of disciplinary practices and values that operate as a constitutive outside for creative industries. Indeed, in their creative industries manifesto, Stuart Cunningham and John Hartley (2002) loathe the arts and humanities in such confused, paradoxical and hypocritical ways in order to establish the arts and humanities as a cultural and ideological outside. To this end, to subsume the arts and humanities into the creative industries, if not eradicate them altogether, is to spell the end of creative industries as it’s currently conceived at the institutional level within academe. Too much specialisation in one post-industrial sector, broad as it may be, ensures a situation of labour reserves that exceed market needs. One only needs to consider all those now-unemployed web designers that graduated from multimedia programs in the mid to late 90s. Further, it does not augur well for the inevitable shift from or collapse of a creative industries economy. Where is the standing reserve of labour shaped by university education and training in a post-creative industries economy? Diehard neo-liberals and true believers in the capacity for perpetual institutional flexibility would say that this isn’t a problem. The university will just “organically” adapt to prevailing market conditions and shape its curriculum and staff composition accordingly. Perhaps.
Arguably, if the university is to maintain a modality of time that is distinct from the just-in-time mode of production characteristic of informational economies – and indeed, such a difference is a quality that defines the market value of the educational commodity – then limits have to be established between institutions of education and the corporate organisation or creative industry entity. The creative industries project is a reactionary model insofar as it reinforces the status quo of labour relations within a neo-liberal paradigm in which bids for industry contracts are based on a combination of rich technological infrastructures that have often been subsidised by the state (i.e. paid for by the public), high labour skills, a low currency exchange rate and the lowest possible labour costs. In this respect it is no wonder that literature on the creative industries omits discussion of the importance of unions within informational, networked economies. What is the place of unions in a labour force constituted as individualised units? The conditions of possibility for creative industries within Australia are at once its frailties. In many respects, the success of the creative industries sector depends upon the ongoing combination of cheap labour enabled by a low currency exchange rate and the capacity of students to access the skills and training offered by universities. Certainly in relation to matters such as these there is no outside for the creative industries. There’s a great need to explore alternative economic models to the content-production one if wealth is to be successfully extracted and distributed from activities in the new media sectors. The suggestion that the creative industries project initiates a strategic response to the conditions of cultural production within network societies and informational economies is highly debatable.
The now well-documented history of digital piracy in the film and software industries and the difficulties associated with regulating violations to proprietors of IP in the form of copyright and trademarks is enough of a reason to look for alternative models of wealth extraction. And you can be sure this will occur irrespective of the endeavours of the creative industries. To conclude, I am suggesting that those working in the creative industries, be they content producers or educators, need to intervene in IPRs in such a way that: 1) ensures the alienation of their labour is minimised; 2) collectivising “creative” labour in the form of unions or what Wark (2001) has termed the “hacker class”, as distinct from the “vectoralist class”, may be one way of achieving this; and 3) the advocates of creative industries within the higher education sector in particular are made aware of the implications IPRs have for graduates entering the workforce and adjust their rhetoric, curriculum, and policy engagements accordingly.
Works Cited
Barcan, Ruth. ‘The Idleness of Academics: Reflections on the Usefulness of Cultural Studies’. Continuum: Journal of Media & Cultural Studies (forthcoming, 2003). Bolz, Norbert. ‘Rethinking Media Aesthetics’, in Geert Lovink, Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002, 18-27. Butt, Danny and Rossiter, Ned. ‘Blowing Bubbles: Post-Crash Creative Industries and the Withering of Political Critique in Cultural Studies’. Paper presented at Ute Culture: The Utility of Culture and the Uses of Cultural Studies, Cultural Studies Association of Australia Conference, Melbourne, 5-7 December, 2002. Posted to fibreculture mailing list, 10 December, 2002, http://www.fibreculture.org/archives/index.html Creative Industry Task Force: Mapping Document, DCMS (Department of Culture, Media and Sport), London, 1998/2001. http://www.culture.gov.uk/creative/mapping.html Cunningham, Stuart.
‘The Evolving Creative Industries: From Original Assumptions to Contemporary Interpretations’. Seminar Paper, QUT, Brisbane, 9 May, 2003, http://www.creativeindustries.qut.com/research/cirac/documen... ...ts/THE_EVOLVING_CREATIVE_INDUSTRIES.pdf Cunningham, Stuart; Hearn, Gregory; Cox, Stephen; Ninan, Abraham and Keane, Michael. Brisbane’s Creative Industries 2003. Report delivered to Brisbane City Council, Community and Economic Development, Brisbane: CIRAC, 2003. http://www.creativeindustries.qut.com/research/cirac/documen... ...ts/bccreportonly.pdf Flew, Terry. New Media: An Introduction. Oxford: Oxford University Press, 2002. Frank, Thomas. One Market under God: Extreme Capitalism, Market Populism, and the End of Economic Democracy. New York: Anchor Books, 2000. Hartley, John and Cunningham, Stuart. ‘Creative Industries: from Blue Poles to fat pipes’, in Malcolm Gillies (ed.) The National Humanities and Social Sciences Summit: Position Papers. Canberra: DEST, 2002. Hayden, Steve. ‘Tastes Great, Less Filling: Ad Space – Will Advertisers Learn the Hard Lesson of Over-Development?’. Wired Magazine 11.06 (June, 2003), http://www.wired.com/wired/archive/11.06/ad_spc.html Hardt, Michael and Negri, Antonio. Empire. Cambridge, Mass.: Harvard University Press, 2000. Lash, Scott. Critique of Information. London: Sage, 2002. Lovink, Geert. Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002a. Lovink, Geert. Dark Fiber: Tracking Critical Internet Culture. Cambridge, Mass.: MIT Press, 2002b. McLuhan, Marshall. Understanding Media: The Extensions of Man. London: Routledge and Kegan Paul, 1964. McRobbie, Angela. ‘Clubs to Companies: Notes on the Decline of Political Culture in Speeded up Creative Worlds’, Cultural Studies 16.4 (2002): 516-31. Marginson, Simon and Considine, Mark. The Enterprise University: Power, Governance and Reinvention in Australia. Cambridge: Cambridge University Press, 2000. Meikle, Graham. 
Future Active: Media Activism and the Internet. Sydney: Pluto Press, 2002. Ross, Andrew. No-Collar: The Humane Workplace and Its Hidden Costs. New York: Basic Books, 2003. Rossiter, Ned. ‘Processual Media Theory’, in Adrian Miles (ed.) Streaming Worlds: 5th International Digital Arts & Culture (DAC) Conference. 19-23 May. Melbourne: RMIT University, 2003, 173-184. http://hypertext.rmit.edu.au/dac/papers/Rossiter.pdf Sassen, Saskia. Losing Control? Sovereignty in an Age of Globalization. New York: Columbia University Press, 1996. Wark, McKenzie. ‘Abstraction’ and ‘Hack’, in Hugh Brown, Geert Lovink, Helen Merrick, Ned Rossiter, David Teh, Michele Willson (eds). Politics of a Digital Present: An Inventory of Australian Net Culture, Criticism and Theory. Melbourne: Fibreculture Publications, 2001, 3-7, 99-102. Wark, McKenzie. ‘The Power of Multiplicity and the Multiplicity of Power’, in Geert Lovink, Uncanny Networks: Dialogues with the Virtual Intelligentsia. Cambridge, Mass.: MIT Press, 2002, 314-325. Links http://hypertext.rmit.edu.au/dac/papers/Rossiter.pdf http://www.creativeindustries.qut.com/ http://www.creativeindustries.qut.com/research/cirac/documents/THE_EVOLVING_CREATIVE_INDUSTRIES.pdf http://www.creativeindustries.qut.com/research/cirac/documents/bccreportonly.pdf http://www.culture.gov.uk/creative/mapping.html http://www.fibreculture.org/archives/index.html http://www.wired.com/wired/archive/11.06/ad_spc.html Citation reference for this article Substitute your date of access for Dn Month Year etc... MLA Style Rossiter, Ned. "Creative Industries and the Limits of Critique from " M/C: A Journal of Media and Culture< http://www.media-culture.org.au/0306/11-creativeindustries.php>. APA Style Rossiter, N. (2003, Jun 19). Creative Industries and the Limits of Critique from . M/C: A Journal of Media and Culture, 6,< http://www.media-culture.org.au/0306/11-creativeindustries.php>

19

Jethani, Suneel. "New Media Maps as ‘Contact Zones’: Subjective Cartography and the Latent Aesthetics of the City-Text." M/C Journal 14, no.5 (October18, 2011). http://dx.doi.org/10.5204/mcj.421.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Any understanding of social and cultural change is impossible without a knowledge of the way media work as environments. —Marshall McLuhan. What is visible and tangible in things represents our possible action upon them. —Henri Bergson. Introduction: Subjective Maps as ‘Contact Zones’ Maps feature heavily in a variety of media; they appear in textbooks, on television, in print, and on the screens of our handheld devices. The production of cartographic texts is a process that is imbued with power relations and bound up with the production and reproduction of social life (Pinder 405). Mapping involves choices as to what information is and is not included. In their organisation, categorisation, modeling, and representation maps show and they hide. Thus “the idea that a small number of maps or even a single (and singular) map might be sufficient can only apply in a spatialised area of study whose own self-affirmation depends on isolation from its context” (Lefebvre 85–86). These isolations determine the way we interpret the physical, biological, and social worlds. The map can be thought of as a schematic for political systems within a confined set of spatial relations, or as a container for political discourse. Mapping contributes equally to the construction of experiential realities as to the representation of physical space, which also contains the potential to incorporate representations of temporality and rhythm to spatial schemata. Thus maps construct realities as much as they represent them and coproduce space as much as the political identities of people who inhabit them. Maps are active texts and have the ability to promote social change (Pickles 146). It is no wonder, then, that artists, theorists and activists alike readily engage in the conflicted praxis of mapping. This critical engagement “becomes a method to track the past, embody memories, explain the unexplainable” and manifest the latent (Ibarra 66). 
In this paper I present a short case study of Bangalore: Subjective Cartographies, a new media art project that aims to model a citizen-driven effort to participate in a critical form of cartography, one which challenges dominant representations of the city-space. I present a critical textual analysis of the maps produced in the workshops, the artist statements relating to these works used in the exhibition setting, and statements made by the participants on the project’s blog. This “praxis-logical” approach allows for a focus on the project as a space of aggregation and on the communicative processes set in motion within it. In analysing such projects we could (and should) be asking questions about the functions served by the experimental concepts under study—who has put it forward? Who is utilising it and under what circumstances? Where and how has it come into being? How does discourse circulate within it? How do these spaces as sites of emergent forms of resistance within global capitalism challenge traditional social movements? How do they create self-reflexive systems?—as opposed to focusing on ontological and technical aspects of digital mapping (Renzi 73). In de-emphasising the technology of digital cartography and homing in on the social relations embedded within the text(s), this study attempts to complement other studies on digital mapping (see Strom) by presenting a case from the field of politically oriented tactical media. Bangalore: Subjective Cartographies has been selected for analysis in this exploration of media as “zone,” as it goes some way towards incorporating subjective narratives into spatial texts. This is a three-step process in which participants tapped into spatial subjectivities by data collection or environmental sensing led by personal reflection or ethnographic enquiry, documenting and geo-tagging their findings in the map. 
Finally, they engaged in an imaginative or ludic process of synthesising their data in ways not inherent within the traditional conventions of cartography, such as the use of sound and distortion to explicate the intensity of invisible phenomena at various coordinates in the city-space. In what follows I address the “zone” theme by suggesting that if we apply to digital maps McLuhan’s notion of media as environment, together with Henri Bergson’s assertion that visibility and tangibility constitute the potential for action, then projects such as Bangalore: Subjective Cartographies constitute a “contact zone”: a type of zone where groups come together at the local level, and where flows of discourse about art, information communication, media, technology, and environment intersect with local histories and cultures within the cartographic text. A “contact zone,” then, is a site where latent subjectivities are manifested and made potentially politically potent. “Contact zones,” however, need not be spaces for the aggrieved or excluded (Renzi 82), as they may well foster the ongoing cumulative politics of the mundane, capable of developing into liminal spaces where dominant orders may be perforated. A “contact zone” is also not limitless, and it must be made clear that the breaking of cartographic convention, as is the case with the project under study here, need not be viewed as resistance per se. It could equally represent thresholds between public and private life, between the city-as-text and the city-as-social space, or the zone where representations of space and representational spaces interface (Lefebvre 233), and where culture flows between the mediated and ideated (Appadurai 33–36). 
I argue that a project like Bangalore: Subjective Cartographies demonstrates that maps as urban text form said “contact zones,” where not only are media forms such as image, text, sound, and video juxtaposed in a singular spatial schematic, but narratives of individual and collective subjectivities (which challenge dominant orders of space and time, and city-rhythm) are contested. Such a “contact zone” in turn may not only act as a resource for citizens in the struggle for urban design reform and a democratisation of the facilities it produces, but may also serve as a heuristic device for researchers of new media spatiotemporalities and their social implications. Critical Cartography and Media Tactility Before presenting this brief illustrative study, something needs to be said of the context from which Bangalore: Subjective Cartographies has arisen. Although a number of Web 2.0 applications have come into existence since the introduction of Google Maps and map application program interfaces, which generate a great deal of geo-tagged user-generated content aimed at reconceptualising the mapped city-space (see historypin for example), few have exhibited great significance for researchers of media and communications from the perspective of building critical theories relating to political potential in mediated spaces. The expression of power through mapping can be understood from two perspectives. The first—attributed largely to the Frankfurt School—seeks to uncover the potential of a society that is repressed by capitalist co-opting of the cultural realm. This perspective sees maps as a potential challenge to, and means of providing emancipation from, existing power structures. The second, less concerned with dispelling false ideologies, deals with the politics of epistemology (Crampton and Krygier 14). According to Foucault, power was not applied from the top down but manifested laterally in a highly diffused manner (Foucault 117; Crampton and Krygier 14). 
Foucault’s privileging of the spatial and epistemological aspects of power and resistance complements the Frankfurt School’s resistance to oppression in the local. Together the two perspectives orient power relative to spatial and temporal subjectivities, and thus fit congruently into cartographic conventions. In order to make sense of these practices, the post-oppositional character of tactical media maps should be located within an economy of power relations where resistance is never outside of the field of forces but rather is its indispensable element (Renzi 72). Such exercises in critical cartography are strongly informed by the critical politico-aesthetic praxis of the political/art collective the Situationist International, whose maps of Paris were inherently political. The Situationist International incorporated appropriated texts into, and manipulated, existing maps to explicate city-rhythms and intensities and to construct imaginative and alternate representations of the city. Bangalore: Subjective Cartographies adopts a similar approach. The artists’ statement reads: We build our subjective maps by combining different methods: photography, film, and sound recording; […] to explore the visible and invisible […] city; […] we adopt psycho-geographical approaches in exploring territory, defined as the study of the precise effects of the geographical environment, consciously developed or not, acting directly on the emotional behaviour of individuals. The project proposals put forth by workshop participants also draw heavily from the Situationists’ A New Theatre of Operations for Culture. A number of Situationist theories and practices feature in the rationale for the maps created in the Bangalore: Subjective Cartographies workshop. For example, the Situationists took as their base a general notion of experimental behaviour and permanent play, where rationality was approached on the basis of whether or not something interesting could be created out of it (Wark 12). 
The dérive is the rapid passage through various ambiences with a playful-constructive awareness of the psychogeographical contours of a specific section of space-time (Debord). The dérive can be thought of as an exploration of an environment without preconceptions about the contours of its geography, focusing instead on the reality of inhabiting a place. Détournement involves the re-use of elements from recognised media to create a new work with a meaning often opposed to the original. Psycho-geography is taken to be the subjective ambiences of particular spaces and times. The principles of détournement and psycho-geography imply a unitary urbanism, which hints at the potential of achieving in environments what may be achieved in media with détournement. Bangalore: Subjective Cartographies carries Situationist praxis forward by attempting to exploit certain properties of information digitalisation to formulate textual representations of unitary urbanism. Bangalore: Subjective Cartographies is demonstrative of a certain media tactility that exists more generally across digital-networked media ecologies, and channels this to political ends. This tactility of media is best understood through the textual properties awarded by the process and logic of digitalisation described in Lev Manovich’s The Language of New Media. 
These properties are: numerical representation in the form of binary code, which allows for the reification of spatial data in a uniform format that can be stored and retrieved in-silico as opposed to in-situ; manipulation of this code by the use of algorithms, which renders the scales and lines of maps open to alteration; modularity that enables incorporation of other textual objects into the map whilst maintaining each incorporated item’s individual identity; the removal to some degree of human interaction in terms of the translation of environmental data into cartographic form (whilst other properties listed here enable human interaction with the cartographic text), and the nature of digital code allows for changes to accumulate incrementally creating infinite potential for refinements (Manovich 49–63). The Subjective Mapping of Bangalore Bangalore is an interesting site for such a project given the recent and rapid evolution of its media infrastructure. As a “media city,” the first television sets appeared in Bangalore at some point in the early 1980s. The first Internet Service Provider (ISP), which served corporate clients only, commenced operating a decade later and then offered dial-up services to domestic clients in the mid-1990s. At present, however, Bangalore has the largest number of broadband Internet connections in India. With the increasing convergence of computing and telecommunications with traditional forms of media such as film and photography, Bangalore demonstrates well what Scott McQuire terms a media-architecture complex, the core infrastructure for “contact zones” (vii). Bangalore: Subjective Cartographies was a workshop initiated by French artists Benjamin Cadon and Ewen Cardonnet. It was conducted with a number of students at the Srishti School of Art, Design and Technology in November and December 2009. 
Using Metamap.fr (an online cartographic tool that makes it possible to add multimedia content such as texts, video, photos, sounds, links, location points, and paths to digital maps) students were asked to, in groups of two or three, collect and consult data on ‘felt’ life in Bangalore using an ethnographic, transverse geographic, thematic, or temporal approach. The objective of the project was to model a citizen-driven effort to subvert dominant cartographic representations of the city. In doing so, the project and this paper posit that there is potential for such methods to be adopted to form new literacies of cartographic media and to render the cartographic imaginary politically potent. The participants’ brief outlined two themes. The first was the visible and symbolic city, where participants were asked to investigate the influence of the urban environment on the behaviours and sensations of its inhabitants, and to research and collect signifiers of traditional and modern worlds. The invisible city brief asked participants to consider the latent environment and link it to human behaviour—in this case electromagnetic radiation linked to the city’s telecommunications and media infrastructure was to be specifically investigated. The Visible and Symbolic City During British rule, many Indian cities functioned as dual entities where flows of people and commodities circulated between localised enclaves and the centralised British-built areas. Mirroring this was the dual mode of administration, where power was shared between elected Indian legislators and appointed British officials (Hoselitz 432–33). Reflecting on this diarchy leads naturally to questions about the politics of civic services such as the water supply, modes of public communication and instruction, and the nature of the city’s administration, distribution, and manufacturing functions. Workshop participants approached these issues in a variety of ways. 
In the subjective maps entitled Microbial Streets and Water Use and Reuse, the food and water sources of street vendors are traced with the aim of mapping water supply sources relative to the movements of street vendors operating in the city. Images of the microorganisms are captured using hacked webcams as makeshift microscopes. The data was then converted to audio using Pure Data—a real-time graphical programming environment for processing audio, video, and graphical data. The intention of Microbial Streets is to demonstrate how mapping technologies could be used to investigate the flows of food and water from source to consumer, and to uncover some of the latencies involved in things consumed unhesitatingly every day. Typographical Lens surveys Russell Market, an older part of the city, through an exploration of the aesthetic and informational transformation of the city’s shop and street signage. In Ethni City, Avenue Road is mapped from the perspective of the local goldsmiths who inhabit the area. Both these maps attempt to study the convergence of the city’s dual function and how the relationship between merchants and their customers has changed during the transition from localised enclaves, catering to the sale of particular types of goods, to the development of shopping precincts, where a variety of goods and services can be sought. Two of the project’s maps take a spatiotemporal-archivist approach to the city. Bangalore 8mm 1940s uses archival Super 8 footage and places digitised copies on the map at the corresponding locations of where they were originally filmed. The film sequences, when combined with satellite or street-view images, allow for the juxtaposition of present-day visions of the city with those of the 1940s pre-partition era. Chronicles of Collection focuses on the relationship between people and their possessions from the point of view of the object and its pathways through the city in space and time. 
Collectors were chosen for this map as the value they placed on the object goes beyond the functional and the monetary, which allowed the resultant maps to access and express spatially the layers of meaning a particular object may take on in differing contexts of place and time in the city-space. The Invisible City In the expression of power through city-spaces, and by extension city-texts, certain circuits and flows are ossified and others rendered latent. Raymond Williams in Politics and Letters writes: however dominant a social system may be, the very meaning of its domination involves a limitation or selection of the activities it covers, so that by definition it cannot exhaust all social experience, which therefore always potentially contains space for alternative acts and alternative intentions which are not yet articulated as a social institution or even project. (252) The artists’ statement puts forward this possible response, an exploration of the latent aesthetics of the city-space: In this sense then, each device that enriches our perception for possible action on the real is worthy of attention. Even if it means the use of subjective methods, that may not be considered ‘evidence’. However, we must admit that any subjective investigation, when used systematically and in parallel with the results of technical measures, could lead to new possibilities of knowledge. Electromagnetic City maps the city’s sources of electromagnetic radiation, primarily from mobile phone towers, but also as a by-product of our everyday use of technologies: televisions, mobile phones, Internet Wi-Fi, computer screens, and handheld devices. This map explores issues around how the city’s inhabitants hear, see, feel, and represent things that are a part of our environment but invisible, and asks: are there ways that the intangible can be oriented spatially? 
The intensity of the electromagnetic radiation being emitted from these sources, which is thought to negatively influence the meditation of ancient sadhus (sages), also features in this map. This data was collected by taking electromagnetic flow meters into the suburb of Yelhanka (which is also of interest because it houses the largest milk dairy in the state of Karnataka) in a Situationist-like dérive, and was then incorporated back into Metamap. Signal to Noise looks at the struggle between residents concerned with the placement of mobile phone towers around the city. It does so from the perspectives of people who seek information about tower placement out of concern for mobile phone signal quality, and of others concerned about the proximity of this infrastructure to their homes due to potential negative health effects. Interview footage was taken (using a mobile phone) and manipulated using Pure Data to distort the visual and audio quality of the footage in proportion to the fidelity of the mobile phone signal in the geographic area where the footage was taken. Conclusion The “contact zone” operating in Bangalore: Subjective Cartographies, and the underlying modes of social enquiry that make it valuable, create potential for the contestation of new forms of polity that may in turn influence urban administration and result in more representative facilities of, and for, city-spaces and their citizenry. 
Robert Hassan argues that: This project would mean using tactical media to produce new spaces and temporalities that are explicitly concerned with working against the unsustainable “acceleration of just about everything” that our present neoliberal configuration of the network society has generated, showing that alternatives are possible and workable—in ones job, home life, family life, showing that digital [spaces and] temporality need not mean the unerring or unbending meter of real-time [and real city-space] but that an infinite number of temporalities [and subjectivities of space-time] can exist within the network society to correspond with a diversity of local and contextual cultures, societies and polities. (174) As maps and locative motifs begin to feature more prominently in media, analyses such as the one discussed in this paper may allow for researchers to develop theoretical approaches to studying newer forms of media. References Appadurai, Arjun. Modernity at Large: Cultural Dimensions of Globalisation. Minneapolis: U of Minnesota P, 1996. “Bangalore: Subjective Cartographies.” 25 July 2011 ‹http://bengaluru.labomedia.org/page/2/›. Bergson, Henri. Creative Evolution. New York: Henry Holt and Company, 1911. Crampton, Jeremy W., and John Krygier. “An Introduction to Critical Cartography.” ACME: An International E-Journal for Critical Geography 4 (2006): 11–13. Chardonnet, Ewen, and Benjamin Cadon. “Semaphore.” 25 July 2011 ‹http://semaphore.blogs.com/semaphore/spectral_investigations_collective/›. Debord, Guy. “Theory of the Dérive.” 25 July 2011 ‹http://www.bopsecrets.org/SI/2.derive.htm›. Foucault, Michel. Remarks on Marx. New York: Semiotext(e), 1991. Hassan, Robert. The Chronoscopic Society: Globalization, Time and Knowledge in the Networked Economy. New York: Lang, 2003. “Historypin.” 4 Aug. 2011 ‹http://www.historypin.com/›. Hoselitz, Bert F. “A Survey of the Literature on Urbanization in India.” India’s Urban Future. Ed. Roy Turner. 
Berkeley: U of California P, 1961. 425–43. Ibarra, Anna. “Cosmologies of the Self.” Elephant 7 (2011): 66–96. Lefebvre, Henri. The Production of Space. Oxford: Blackwell, 1991. Lovink, Geert. Dark Fibre. Cambridge: MIT Press, 2002. Manovich, Lev. The Language of New Media. Cambridge: MIT Press, 2000. “Metamap.fr.” 3 Mar. 2011 ‹http://metamap.fr/›. McLuhan, Marshall, and Quentin Fiore. The Medium Is the Massage. London: Penguin, 1967. McQuire, Scott. The Media City: Media, Architecture and Urban Space. London: Sage, 2008. Pickles, John. A History of Spaces: Cartographic Reason, Mapping and the Geo-Coded World. London: Routledge, 2004. Pinder, David. “Subverting Cartography: The Situationists and Maps of the City.” Environment and Planning A 28 (1996): 405–27. “Pure Data.” 6 Aug. 2011 ‹http://puredata.info/›. Renzi, Alessandra. “The Space of Tactical Media.” Digital Media and Democracy: Tactics in Hard Times. Ed. Megan Boler. Cambridge: MIT Press, 2008. 71–100. Situationist International. “A New Theatre of Operations for Culture.” 6 Aug. 2011 ‹http://www.blueprintmagazine.co.uk/index.php/urbanism/reading-the-situationist-city/›. Strom, Timothy Erik. “Space, Cyberspace and the Interface: The Trouble with Google Maps.” M/C Journal 4.3 (2011). 6 Aug. 2011 ‹http://journal.media-culture.org.au/index.php/mcjournal/article/viewArticle/370›. Wark, McKenzie. 50 Years of Recuperation of the Situationist International. New York: Princeton Architectural Press, 2008. Williams, Raymond. Politics and Letters: Interviews with New Left Review. London: New Left, 1979.

20

Watson, Robert. "E-Press and Oppress." M/C Journal 8, no. 2 (June 1, 2005). http://dx.doi.org/10.5204/mcj.2345.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

From elephants to ABBA fans, silicon to hormone, the following discussion uses a new research method to look at printed text, motion pictures and a teenage rebel icon. If by ‘print’ we mean a mechanically reproduced impression of a cultural symbol in a medium, then printing has been with us since before microdot security prints were painted onto cars, before voice prints, laser prints, network servers, record pressings, motion picture prints, photo prints, colour woodblock prints, before books, textile prints, and footprints. If we accept that higher mammals such as elephants have a learnt culture, then it is possible to extend a definition of printing beyond Homo sapiens. Poole reports that elephants mechanically trumpet reproductions of human car horns into the air surrounding their society. If nothing else, this cross-species, cross-cultural reproduction, this ‘ability to mimic’ is ‘another sign of their intelligence’. Observation of child development suggests that the first significant meaningful ‘impression’ made on the human mind is that of the face of the child’s nurturer – usually its mother. The baby’s mind forms an ‘impression’, a mental print, a reproducible memory data set, of the nurturer’s face, voice, smell, touch, etc. That face is itself a cultural construct: hair style, makeup, piercings, tattoos, ornaments, nutrition-influenced skin and smell, perfume, temperature and voice. A mentally reproducible pattern of a unique face is formed in the mind, and we use that pattern to distinguish ‘familiar and strange’ in our expanding social orbit. The social relations of patterned memory – of imprinting – determine the extent to which we explore our world (armed with research aids such as text print) or whether we turn to violence or self-harm (Bretherton). 
While our cultural artifacts (such as vellum maps or networked voice message servers) bravely extend our significant patterns into the social world and the traversed environment, it is useful to remember that such artifacts, including print, are themselves understood by our original pattern-reproduction and impression system – the human mind, developed in childhood. The ‘print’ is brought to mind differently in different discourses. For a reader, a ‘print’ is a book, a memo or a broadsheet, whether it is the Indian Buddhist Sanskrit texts ordered to be printed in 593 AD by the Chinese emperor Sui Wen-ti (Silk Road) or the US Defense Department memo authorizing lower ranks to torture the prisoners taken by the Bush administration (Sanchez, cited in ABC). Other fields see prints differently. For a musician, a ‘print’ may be the sheet music which spread classical and popular music around the world; it may be a ‘record’ (as in a ‘recording’ session), where sound is impressed to wax, vinyl, charged silicon particles, or the alloys (Smith, “Elpida”) of an mp3 file. For the fine artist, a ‘print’ may be any mechanically reproduced two-dimensional (or embossed) impression of a significant image in media from paper to metal, textile to ceramics. ‘Print’ embraces the Japanese Ukiyo-e colour prints of Utamaro, the company logos that wink from credit card holograms, the early photographs of Talbot, and the textured patterns printed into neolithic ceramics. Computer hardware engineers print computational circuits. Homicide detectives investigate both sweaty finger prints and the repeated, mechanical gaits of suspects, which are imprinted into the earthy medium of a crime scene. For film makers, the ‘print’ may refer to a photochemical polyester reproduction of a motion picture artifact (the reel of ‘celluloid’), or a DVD laser disc impression of the same film. 
Textualist discourse has borrowed the word ‘print’ to mean ‘text’, so ‘print’ may also refer to the text elements within the vision track of a motion picture: the film’s opening titles, or texts photographed inside the motion picture story such as the sword-cut ‘Z’ in Zorro (Niblo). Before the invention of writing, the main mechanically reproduced impression of a cultural symbol in a medium was the humble footprint in the sand. The footprints of tribes – and neighbouring animals – cut tracks in the vegetation and the soil. Printed tracks led towards food, water, shelter, enemies and friends. Having learnt to pattern certain faces into their mental world, children grew older and were educated in the footprints of family and clan, enemies and food. The continuous impression of significant foot traffic in the medium of the earth produced the lines between significant nodes of prewriting and pre-wheeled cultures. These tracks were married to audio tracks, such as the song lines of the Australian Aborigines, or the ballads of tramping culture everywhere. A typical tramping song has the line, ‘There’s a track winding back to an old-fashion shack along the road to Gundagai,’ (O’Hagan), although this colonial-style song was actually written for radio and became an international hit on the airwaves, rather than the tramping trails. The printed tracks impressed by these cultural flows are highly contested and diverse, and their foot prints are woven into our very language. The names for printed tracks have entered our shared memory from the intersection of many cultures: ‘Track’ is a Germanic word entering English usage comparatively late (1470) and now used mainly in audio visual cultural reproduction, as in ‘soundtrack’. ‘Trek’ is a Dutch word for ‘track’ now used mainly by ecotourists and science fiction fans. 
‘Learn’ is a Proto-Indo-European word: the verb ‘learn’ originally meant ‘to find a track’ back in the days when ‘learn’ had a noun form which meant ‘the sole of the foot’. ‘Tract’ and ‘trace’ are Latin words entering English print usage before 1374 and now used mainly in religious, and electronic surveillance, cultural reproduction. ‘Trench’ in 1386 was a French path cut through a forest. ‘Sagacity’ in English print in 1548 was originally the ability to track or hunt, in Proto-Indo-European cultures. ‘Career’ (in English before 1534) was the print made by chariots in ancient Rome. ‘Sleuth’ (1200) was a Norse noun for a track. ‘Investigation’ (1436) was Latin for studying a footprint (Harper). The arrival of symbolic writing scratched on caves, hearth stones, and trees (the original meaning of ‘book’ is tree), brought extremely limited text education close to home. Then, with baked clay tablets, incised boards, slate, bamboo, tortoise shell, cast metal, bark cloth, textiles, vellum, and – later – paper, a portability came to text that allowed any culture to venture away from known ‘foot’ paths with a reduction in the risk of becoming lost and perishing. So began the world of maps, memos, bills of sale, philosophic treatises and epic mythologies. Some of this was printed, such as the mechanical reproduction of coins, but the fine handwriting required of long, extended, portable texts could not be printed until the invention of paper in China about 2000 years ago. Compared to lithic architecture and genes, portable text is a fragile medium, and little survives from the millennia of its innovators. The printing of large non-text designs onto bark-paper and textiles began in neolithic times, but Sui Wen-ti’s imperial memo of 593 AD gives us the earliest written date for printed books, although we can assume they had been published for many years previously. The printed book was a combination of Indian philosophic thought, wood carving, ink chemistry and Chinese paper. 
The earliest surviving fragment of paper-print technology is ‘Mantras of the Dharani Sutra’, a Buddhist scripture written in the Sanskrit language of the Indian subcontinent, unearthed at an early Tang Dynasty site in Xian, China – making the fragment a veteran piece of printing, in the sense that Sanskrit books had been in print for at least a century by the early Tang Dynasty (Chinese Graphic Arts Net). At first, paper books were printed with page-size carved wooden boards. Five hundred years later, Pi Sheng (c.1041) baked individual reusable ceramic characters in a fire and invented the durable moveable type of modern printing (Silk Road 2000). Abandoning carved wooden tablets, the ‘digitizing’ of Chinese moveable type sped up the production of printed texts. In turn, Pi Sheng’s flexible, rapid, sustainable printing process expanded the political-cultural impact of the literati in Asian society. Digitized block text on paper produced a bureaucratic, literate elite so powerful in Asia that Louis XVI of France copied China’s print-based Confucian system of political authority for his own empire, and so began the rise of the examined public university systems, and the civil service systems, of most European states (Watson, Visions). By reason of its durability, its rapid mechanical reproduction, its culturally agreed signs, literate readership, revered authorship, shared ideology, and distributed portability, a ‘print’ can be a powerful cultural network which builds and expands empires. But print also attacks and destroys empires. A case in point is the Spanish conquest of Aztec America: The Aztecs had immense libraries of American literature on bark-cloth scrolls, a technology which predated paper. These libraries were wiped out by the invading Spanish, who carried a different book before them (Ewins). In the industrial age, the printing press and the gun were seen as the weapons of rebellions everywhere. 
In 1776, American rebels staffed their ‘Homeland Security’ units with paper makers, knowing that defeating the English would be based on printed and written documents (Hahn). Mao Zedong was a book librarian; Mao said political power came out of the barrel of a gun, but Mao himself came out of a library. With the spread of wireless networked servers, political ferment comes out of the barrel of the cell phone and the internet chat room these days. Witness the cell phone displays of a plane hitting a tower that appear immediately after 9/11 in the Middle East, or witness the show trials of a few US and UK lower ranks who published prints of their torturing activities onto the internet: only lower ranks who published prints were arrested or tried. The control of secure servers and satellites is the new press. These days, we live in a global library of burning books – ‘burning’ in the sense that ‘print’ is now a charged silicon medium (Smith, “Intel”) which is usually made readable by connecting the chip to nuclear reactors and petrochemically-fired power stations. World resources burn as we read our screens. Men, women, children burn too, as we watch our infotainment news in comfort while ‘their’ flickering dead faces are printed in our broadcast hearths. The print we watch is not the living; it is the voodoo of the living in the blackout behind the camera, engaging the blood sacrifice of the tormented and the unfortunate. Internet texts are also ‘on fire’ in the third sense of their fragility and instability as a medium: data bases regularly ‘print’ fail-safe copies in an attempt to postpone the inevitable mechanical, chemical and electrical failure that awaits all electronic media in time. Print defines a moral position for everyone. 
In reporting conflict, in deciding to go to press or censor, any ‘print’ cannot avoid an ethical context, starting with the fact that there is a difference in power between print maker, armed perpetrators, the weak, the peaceful, the publisher, and the viewer. So many human factors attend a text, video or voice ‘print’: its very existence as an aesthetic object, even before publication and reception, speaks of unbalanced, and therefore dynamic, power relationships. For example, Graham Greene departed unscathed from all the highly dangerous battlefields he entered as a novelist: Riot-torn Germany, London Blitz, Belgian Congo, Voodoo Haiti, Vietnam, Panama, Reagan’s Washington, and mafia Europe. His texts are peopled with the injustices of the less fortunate of the twentieth century, while he himself was a member of the fortunate (if not happy) elite, as is anyone today who has the luxury of time to read Greene’s works for pleasure. Ethically a member of London and Paris’ colonizers, Greene’s best writing still electrifies, perhaps partly because he was in the same line of fire as the victims he shared bread with. In fact, Greene hoped daily that he would escape from the dreadful conflicts he fictionalized via a body bag or an urn of ashes (see Sherry). In reading an author’s biography we have one window on the ethical dimensions of authority and print. If a print’s aesthetics are sometimes enduring, its ethical relationships are always mutable. Take the stylized logo of a running athlete: four limbs bent in a rotation of action. This dynamic icon has symbolized ‘good health’ in Hindu and Buddhist culture, from Madras to Tokyo, for thousands of years. The cross of bent limbs was borrowed for the militarized health programs of 1930s Germany, and, because of what was only a brief, recent, isolated yet monstrously horrific segment of its history in print, the bent-limbed swastika is now a vilified symbol in the West. 
The sign remains ‘impressed’ differently on traditional Eastern culture, and without the taint of Nazism. Dramatic prints are emotionally charged because, in depicting Homo sapiens in danger, or passionately in love, they elicit a hormonal reaction from the reader, the viewer, or the audience. The types of emotions triggered by a print vary across the whole gamut of human chemistry. A recent study of three genres of motion picture prints shows marked differences in the hormonal responses of men compared to women when viewing a romance, an actioner, and a documentary (see Schultheiss, Wirth, and Stanton). Society is biochemically diverse in its engagement with printed culture, which raises questions about equality in the arts. Motion picture prints probably comprise around one third of internet traffic, in the form of stolen digitized movie files pirated across the globe via peer-to-peer file transfer networks (p2p), and burnt as DVD laser prints (BBC). There is also a US 40 billion dollar per annum legitimate commerce in DVD laser pressings (Grassl), which would suggest a US 80 billion per annum world total in legitimate laser disc print culture. The actively screen literate, or the ‘sliterati’ as I prefer to call them, research this world of motion picture prints via their peers, their internet information channels, their television programming, and their web forums. Most of this activity occurs outside the ambit of universities and schools. One large site of sliterate (screen literate) practice outside most schooling and official research is the net of online forums at imdb.com (Internet Movie Database). Imdb.com ‘prints’ about 25,000,000 top pages per month to client browsers. Hundreds of sliterati forums are located at imdb, including a forum for the Australian movie, Muriel’s Wedding (Hogan).
Ten years after the release of Muriel’s Wedding, young people who are concerned with victimization and bullying still log on to <http://us.imdb.com/title/tt0110598/board/> and put their thoughts into print: “I still feel so bad for Muriel in the beginning of the movie, when the girls ‘dump’ her, and how much the poor girl cried and cried! Those girls were such biartches…I love how they got their comeuppance!” bunniesormaybemidgets’s comment is typical of the current discussion. Muriel’s Wedding was a very popular film in its first cinema edition in Australia and elsewhere. About 30% of the entire over-14 Australian population went to see this photochemical polyester print in the cinemas on its first release. A decade on, the distributors printed a DVD laser disc edition. The story concerns Muriel (played by Toni Collette), the unemployed daughter of a corrupt, ‘police state’ politician. Muriel is bullied by her peers and she withdraws into a fantasy world, deluding herself that a white wedding will rescue her from the torments of her blighted life. Through theft and deceit (the modus operandi of her father) Muriel escapes to the entertainment industry and finds a ‘wicked’ girlfriend mentor. From a rebellious position of stubborn independence, Muriel plays out her fantasy. She gets her white wedding, before seeing both her father and her new married life as hollow shams which have goaded her abandoned mother to suicide. Redefining her life as a ‘game’ and assuming responsibility for her independence, Muriel turns her back on the mainstream, image-conscious, female gang of her oppressed youth. Muriel leaves the story, having rekindled her friendship with her rebel mentor. My methodological approach to viewing the laser disc print was to first make a more accessible, coded record of the entire movie. I was able to code and record the print in real time, using a new metalanguage (Watson, “Eyes”).
The advantage of Coding is that it ‘thinks’ the same way as film making; it does not sidetrack the analyst into prose. The Code splits the movie print into:

Vision
Action [vision graphic elements, including text]
(sound)

The Coding splits the vision track into normal action and graphic elements, such as text, so this Coding is an ideal method for extracting all the text elements of a film in real time. After playing the film once, I had four and a half tightly packed pages of the coded story, including all its text elements in square brackets. Being a unique, indexed hard copy, the Coded copy allowed me immediate access to any point of the Muriel’s Wedding saga without having to search the DVD laser print. How are ‘print’ elements used in Muriel’s Wedding? Firstly, a rose-coloured monoprint of Muriel Heslop’s smiling face stares enigmatically from the plastic surface of the DVD picture disc. The print is a still photo captured from her smile as she walked down the aisle of her white wedding. In this print, Toni Collette is the Mona Lisa of Australian culture, except that fans of Muriel’s Wedding know the meaning of that smile is a magical combination of the actor’s art: the smile is both the flush of dreams come true and the frightening self-deception that will kill her mother. Inserting and playing the disc, the text-dominant menu appears, and the film commences with the text-dominant opening titles. Text and titles confer a legitimacy on a work, whether it is a trade mark of the laser print owners, or the household names of stars. Text titles confer status relationships on both the presenters of the cultural artifact and the viewer who has entered into a legal license agreement with the owners of the movie. A title makes us comfortable, because the mind always seeks to name the unfamiliar, and a set of text titles does that job for us so that we can navigate the ‘tracks’ and settle into our engagement with the unfamiliar.
The apparent ‘truth’ and ‘stability’ of printed text calms our fears and beguiles our uncertainties. Muriel attends the white wedding of a school bully bride, wearing a leopard print dress she has stolen. Muriel’s spotted wild animal print contrasts with the pure white handmade dress of the bride. In Muriel’s leopard textile print, we have the wild, rebellious, impoverished, inappropriate intrusion into the social ritual and fantasy of her high-status tormentor. An off-duty store detective recognizes the printed dress and calls the police. The police are themselves distinguished by their blue-and-white checked prints and other mechanically reproduced impressions of cultural symbols: in steel, brass, embroidery, leather and plastics. Muriel is driven in the police car past the stenciled town sign (‘Welcome To Porpoise Spit’ heads a paragraph of small print). She is delivered to her father, a politician who presides over the policing of his town. In a state where the judiciary, police and executive are hijacked by the same tyrant, Muriel’s father, Bill, pays off the police constables with a carton of legal drugs (beer) and Muriel must face her father’s wrath, which he proceeds to transfer to his detested wife. Like his daughter, the father also wears a spotted brown print costume, but his is a batik print from neighbouring Indonesia (incidentally, in a nation that takes the political status of its batik prints very seriously). Bill demands that Muriel find the receipt for the leopard print dress she claims she has purchased. The legitimate ownership of the object is enmeshed with a printed receipt, the printed evidence of trade. The law (and the paramilitary power behind the law) are legitimized, or contested, by the presence or absence of printed text. Muriel hides in her bedroom, surrounded by poster prints of the pop group ABBA. Torn-out prints of other people’s weddings adorn her mirror.
Her face is embossed with the clown-like primary colours of the marionette as she lifts a bouquet to her chin and stares into the real time ‘print’ of her mirror image. Bill takes the opportunity of a business meeting with Japanese investors to feed his entire family at ‘Charlie Chan’s’ restaurant. Muriel’s middle sister sloppily wears her father’s state election tee shirt, printed with the text: ‘Vote 1, Bill Heslop. You can’t stop progress.’ The text sets up two ironic gags that are paid off on the dialogue track: ‘He lost,’ we are told. ‘Progress’ turns out to be funding the concreting of a beach. Bill berates his daughter Muriel: she has no chance of becoming a printer’s apprentice and she has failed a typing course. Her dysfunction in printed text has been covered up by Bill: he has bribed the typing teacher to issue a printed diploma to his daughter. In the gambling saloon of the club, under the arrays of mechanically repeated cultural symbols lit above the poker machines (‘A’ for ace, ‘Q’ for queen, etc.), Bill’s secret girlfriend Diedre risks giving Muriel a cosmetics job. Another text icon in lights announces the surf nightclub ‘Breakers’. Tania, the newly married queen bitch who has made Muriel’s teenage years a living hell, breaks up with her husband, deciding to cash in his negotiable text documents – his Bali honeymoon tickets – and go on an island holiday with her girlfriends instead. Text documents are the enduring site of agreements between people and also the site of mutations to those agreements. Tania dumps Muriel, who sobs and sobs. Sobs are a mechanical, percussive reproduction impressed on the sound track. Returning home, we discover that Muriel’s older brother has failed a printed test and been rejected for police recruitment. There is a high incidence of print illiteracy in the Heslop family. Mrs Heslop (Jeannie Drynan), for instance, regularly has trouble at the post office.
Muriel sees a chance to escape the oppression of her family by tricking her mother into giving her a blank cheque. Here is the confluence of the legitimacy of a bank’s printed negotiable document with the risk and freedom of a blank space for rebel Muriel’s handwriting. Unable to type, her handwriting has the power to steal every cent of her father’s savings. She leaves home and spends the family’s savings at an island resort. On the island, the text print-challenged Muriel dances to a recording (sound print) of ABBA, her hand gestures emphasizing her bewigged face, which is made up in an impression of her pop idol. Her imitation of her goddesses – the ABBA women, her only hope in a real world of people who hate or avoid her – is accompanied by her goddesses’ voices singing: ‘the mystery book on the shelf is always repeating itself.’ Before jpeg and gif image downloads, we had postcard prints and snail mail. Muriel sends a postcard to her family, lying about her ‘success’ in the cosmetics business. The printed missal is clutched by her father Bill (Bill Hunter), who proclaims about his daughter, ‘you can’t type but you really impress me’. Meanwhile, on Hibiscus Island, Muriel lies under a moonlit palm tree with her newly found mentor, ‘bad girl’ Ronda (Rachel Griffiths). In this critical scene, where foolish Muriel opens her heart’s yearnings to a confidante she can finally trust, the director and DP have chosen to shoot a flat, high contrast blue filtered image. The visual result is very much like the semiabstract Japanese Ukiyo-e woodblock prints by Utamaro. This Japanese printing style informed the rise of European modern painting (Monet, Van Gogh, Picasso, etc., were all important collectors and students of Ukiyo-e prints). The above print and text elements in Muriel’s Wedding take us 27 minutes into her story, as recorded on a single page of real-time handwritten Coding. 
Although not discussed here, the Coding recorded the complete film – a total of 106 minutes of text elements and main graphic elements – as four pages of Code. Referring to this Coding some weeks after it was made, I looked up the final code on page four: taxi [food of the sea]. Translation: a shop sign whizzes past in the film’s background, as Muriel and Ronda leave Porpoise Spit in a taxi. Over their heads the text ‘Food Of The Sea’ flashes. We are reminded that Muriel and Ronda are mermaids, fantastic creatures sprung from the brow of author PJ Hogan, and illuminated even today in the pantheon of women’s coming-of-age art works. That the movie is relevant ten years on is evidenced by the current usage of the Muriel’s Wedding online forum, an intersection of wider discussions by sliterate women on imdb.com who, like Muriel, are observers (and in some cases victims) of horrific pressure from ambitious female gangs and bullies. Text is always a minor element in a motion picture (unless it is a subtitled foreign film) and text usually whizzes by subliminally while viewing a film. By Coding the work for [text], all the text nuances made by the film makers come to light. While I have viewed Muriel’s Wedding on many occasions, it has only been in Coding it specifically for text that I have noticed that Muriel is a representative of that vast class of talented youth who are discriminated against by print (as in text) educators who cannot offer her a life-affirming identity in the English classroom. Severely depressed at school, and failing to type or get a printer’s apprenticeship, Muriel finds paid work (and hence, freedom, life, identity, independence) working in her audio visual printed medium of choice: a video store in a new city. Muriel found a sliterate admirer at the video store but she later dumped him for her fantasy man, before leaving him too.
One of the points of conjecture on the imdb Muriel’s Wedding site is, did Muriel (in the unwritten future) get back together with admirer Brice Nobes? That we will never know. While a print forms a track that tells us where culture has been, a print cannot be the future, a print is never animate reality. At the end of any trail of prints, one must lift one’s head from the last impression, and negotiate satisfaction in the happening world.

References

Australian Broadcasting Corporation. “Memo Shows US General Approved Interrogations.” 30 Mar. 2005 <http://www.abc.net.au>.
British Broadcasting Commission. “Films ‘Fuel Online File-Sharing’.” 22 Feb. 2005 <http://news.bbc.co.uk/1/hi/technology/3890527.stm>.
Bretherton, I. “The Origins of Attachment Theory: John Bowlby and Mary Ainsworth.” 1994. 23 Jan. 2005 <http://www.psy.med.br/livros/autores/bowlby/bowlby.pdf>.
Bunniesormaybemidgets. Chat Room Comment. “What Did Those Girls Do to Rhonda?” 28 Mar. 2005 <http://us.imdb.com/title/tt0110598/board/>.
Chinese Graphic Arts Net. Mantras of the Dharani Sutra. 20 Feb. 2005 <http://www.cgan.com/english/english/cpg/engcp10.htm>.
Ewins, R. Barkcloth and the Origins of Paper. 1991. 20 Feb. 2005 <http://www.justpacific.com/pacific/papers/barkcloth~paper.html>.
Grassl, K.R. The DVD Statistical Report. 14 Mar. 2005 <http://www.corbell.com>.
Hahn, C.M. The Topic Is Paper. 20 Feb. 2005 <http://www.nystamp.org/Topic_is_paper.html>.
Harper, D. Online Etymology Dictionary. 14 Mar. 2005 <http://www.etymonline.com/>.
Mask of Zorro, The. Screenplay by J. McCulley. UA, 1920.
Muriel’s Wedding. Dir. PJ Hogan. Perf. Toni Collette, Rachel Griffiths, Bill Hunter, and Jeannie Drynan. Village Roadshow, 1994.
O’Hagan, Jack. On The Road to Gundagai. 1922. 2 Apr. 2005 <http://ingeb.org/songs/roadtogu.html>.
Poole, J.H., P.L. Tyack, A.S. Stoeger-Horwath, and S. Watwood. “Animal Behaviour: Elephants Are Capable of Vocal Learning.” Nature 24 Mar. 2005.
Sanchez, R. “Interrogation and Counter-Resistance Policy.” 14 Sept. 2003. 30 Mar. 2005 <http://www.abc.net.au>.
Schultheiss, O.C., M.M. Wirth, and S.J. Stanton. “Effects of Affiliation and Power Motivation Arousal on Salivary Progesterone and Testosterone.” Hormones and Behavior 46 (2005).
Sherry, N. The Life of Graham Greene. 3 vols. London: Jonathan Cape, 2004, 1994, 1989.
Silk Road. Printing. 2000. 20 Feb. 2005 <http://www.silk-road.com/artl/printing.shtml>.
Smith, T. “Elpida Licenses ‘DVD on a Chip’ Memory Tech.” The Register 20 Feb. 2005 <http://www.theregister.co.uk/2005/02>.
—. “Intel Boffins Build First Continuous Beam Silicon Laser.” The Register 20 Feb. 2005 <http://www.theregister.co.uk/2005/02>.
Watson, R.S. “Eyes And Ears: Dramatic Memory Slicing and Salable Media Content.” Innovation and Speculation, ed. Brad Haseman. Brisbane: QUT. [in press]
Watson, R.S. Visions. Melbourne: Curriculum Corporation, 1994.

Citation reference for this article

MLA Style
Watson, Robert. “E-Press and Oppress: Audio Visual Print Drama, Identity, Text and Motion Picture Rebellion.” M/C Journal 8.2 (2005). <http://journal.media-culture.org.au/0506/08-watson.php>.

APA Style
Watson, R. (Jun. 2005) “E-Press and Oppress: Audio Visual Print Drama, Identity, Text and Motion Picture Rebellion,” M/C Journal, 8(2). Retrieved from <http://journal.media-culture.org.au/0506/08-watson.php>.

21

Potter, Emily. "Calculating Interests: Climate Change and the Politics of Life." M/C Journal 12, no. 4 (October 13, 2009). http://dx.doi.org/10.5204/mcj.182.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

There is a moment in Al Gore’s 2006 documentary An Inconvenient Truth devised to expose the sheer audacity of fossil fuel lobby groups in the United States. In their attempts to address significant scientific consensus and growing public concern over climate change, these groups are resorting to what Gore’s film suggests are grotesque distortions of fact. A particular example highlighted in the film is the Competitive Enterprise Institute’s (CPE—a lobby group funded by ExxonMobil) “pro” energy industry advertisement: “Carbon dioxide”, the ad states. “They call it pollution, we call it life.” While on the one hand employing rhetoric against the “inconvenient truth” that carbon dioxide emissions are ratcheting up the Earth’s temperature, these advertisements also pose a question – though perhaps unintended – that is worth addressing. Where does life reside? This is not an issue of essentialism, but relates to the claims, materials and technologies through which life as a political object emerges. The danger of entertaining the vested interests of polluting industry in a discussion of climate change and its biopolitics is countered by an imperative to acknowledge the ways in which multiple positions in the climate change debate invoke and appeal to ‘life’ as the bottom line, or inviolable interest, of their political, social or economic work. In doing so, other questions come to the fore that a politics of climate change framed in terms of moral positions or competing values will tend to overlook. These questions concern the manifold practices of life that constitute the contemporary terrain of the political, and the actors and instruments put in this employ. Who speaks for life? And who or what produces it? Climate change as a matter of concern (Latour) has gathered and generated a host of experts, communities, narratives and technical devices all invested in the administration of life.
It is, as Malcolm Bull argues, “the paradigmatic issue of the new politics,” a politics which “draws people towards the public realm and makes life itself subject to the caprices of state and market” (2). This paper seeks to highlight the politics of life that have emerged around climate change as a public issue. It will argue that these politics appear in incremental and multiple ways that situate an array of actors and interests as active in both contesting and generating the terms of life: what life is and how we come to know it. This way of thinking about climate change debates opposes a prevalent moralistic framework that reads the practices and discourses of debate in terms of oppositional positions alone. While sympathies may flow in varying directions, especially when it comes to such a highly charged and massively consequential issue as climate change, there is little insight to be had from charging the CPE (for example) with manipulating consumers, or misrepresenting well-known facts. Where new and more productive understandings open up is in relation to the fields through which these gathering actors play out their claims to the project of life. These fields, from the state, to the corporation, to the domestic sphere, reveal a complex network of strategies and devices that seek to secure life in constantly renovated terms.

Life Politics

Biopolitical scholarship in the wake of Foucault has challenged life as a pre-given uncritical category, and sought to highlight the means through which it is put under question and constituted through varying and composing assemblages of practitioners and practices. Such work regards the project of human well-being as highly complex and technical, and has undertaken to document this empirically through close attention to the everyday ecologies in which humans are enmeshed.
This is a political and theoretical project in itself, situating political processes in micro, as well as macro, registers, including daily life as a site of (self) management and governance. Rabinow and Rose refer to biopolitical circuits that draw together and inter-relate the multiple sites and scales operative in the administration of life. These involve not just technologies, rationalities and regimes of authority and control, but also politics “from below” in the form of rights claims and community formation and agitation (198). Active in these circuits, too, are corporate and non-state interests for whom the pursuit of maximising life’s qualities and capabilities has become a concern through which “market relations and shareholder value” are negotiated (Rabinow and Rose 211). As many biopolitical scholars argue, biopower—the strategies through which biopolitics are enacted—is characteristic of the “disciplinary neo-liberalism” that has come to define the modern state, and through which the conduct of conduct is practiced (Di Muzio 305). Foucault’s concept of governmentality describes the devolution of state-based disciplinarity and sovereignty to a host of non-state actors, rationalities and strategies of governing, including the self-managing subject, not in opposition to the state, but contributing to its form. According to Bratich, Packer and McCarthy, everyday life is thus “saturated with governmental techniques” (18) in which we are all enrolled. Unlike regimes of biopolitics identified with what Agamben terms “thanatopolitics”—the exercise of biopower “which ultimately rests on the power of some to threaten the death of others” (Rabinow and Rose 198), such as the Nazis’ National Socialism and other eugenic campaigns—governmental arts in the service of “vitalist” biopolitics (Rose 1) are increasingly diffused amongst all those with an “interest” in sustaining life, from organisations to individuals.
The integration of techniques of self-governance which ask the individual to work on themselves and their own dispositions with State functions has broadened the base by which life is governed, and foregrounded an unsettled terrain of life claims. Rose argues that medical science is at the forefront of these contemporary biopolitics, and to this effect “has […] been fully engaged in the ethical questions of how we should live—of what kinds of creatures we are, of the kinds of obligations that we have to ourselves and to others, of the kinds of techniques we can and should use to improve ourselves” (20). Asking individuals to self-identify through their medical histories and bodily specificities, medical cultures are also shaping new political arrangements, as communities connected by shared genetics or physical conditions, for instance, emerge, evolve and agitate according to the latest medical knowledge. Yet it is not just medicine that provokes ethical work and new political forms. The environment is a key site for life politics that entails a multi-faceted discourse of obligations and entitlements, across fields and scales of engagement.

Calculating Environments

In line with neo-liberal logic, environmental discourse concerned with ameliorating climate change has increasingly focused upon the individual as an agent of self-monitoring, to both facilitate government agendas at a distance, and to “self-fashion” in the mode of the autonomous subject, securing against external risks (Ong 501). Climate change is commonly represented as such a risk, to both human and non-human life. A recent letter published by the Royal Australasian College of Physicians in two leading British medical journals named climate change as the “biggest global health threat of the twenty-first century” (Morton).
As I have argued elsewhere (Potter), security is central to dominant cultures of environmental governance in the West; these cultures tie sustainability goals to various and interrelated regimes of monitoring which attach to concepts of what Clark and Stevenson call “the good ecological citizen” (238). Citizenship is thus practiced through strategies of governmentality which call on individuals to invest not just in their own well-being, but in the broader project of life. Calculation is a primary technique through which modern environmental governance is enacted; calculative strategies are seen to mediate risk, according to Foucault, and consequently to “assure living” (Elden 575). Rationalised schemes for self-monitoring are proliferating under climate change and the project of environmentalism more broadly, something which critics of neo-liberalism have identified as symptomatic of the privatisation of politics that liberal governmentality has fostered. As we have seen in Australia, an evolving policy emphasis on individual practices and the domestic sphere as crucial sites of environmental action – for instance, the introduction of domestic water restrictions, and the phasing out of energy-inefficient light bulbs in the home—provides a leading discourse of ethico-political responsibility. The rise of carbon dioxide counting is symptomatic of this culture, and indicates the distributed fields of life management in contemporary governmentality. Carbon dioxide, as the CPE is keen to point out, is crucial to life, but it is also—in too large an amount—a force of destruction. Its management, in vitalist terms, is thus established as an effort to protect life in the face of death. 
The concept of “carbon footprinting” has been promoted by governments, NGOs, industry and individuals as a way of securing this goal, and a host of calculative techniques and strategies are employed to this end, across a spectrum of activities and contexts all framed in the interests of life. The footprinting measure seeks to secure living via self-policed limits, which also—in classic biopolitical form—shift previously private practices into a public realm of count-ability and accountability. The carbon footprint, like its associates the ecological footprint and the water footprint, has developed as a multi-faceted tool of citizenship beyond the traditional boundaries of the state. Suggesting an ecological conception of territory and of our relationships and responsibilities to this, the footprint, as a measure of resource use and emissions relative to the Earth’s capacities to absorb these, calculates and visualises the “specific qualities” (Elden 575) that, in a spatialised understanding of security, constitute and define this territory. The carbon footprint’s relatively simple remit of measuring carbon emissions per unit of assessment—be that the individual, the corporation, or the nation—belies the ways in which life is formatted and produced through its calculations. A tangled set of devices, practices and discourses is employed to make carbon and thus life calculable and manageable. Treading Lightly The old environmental adage to “tread lightly upon the Earth” has been literalised in the metaphor of the footprint, which attempts both to symbolise environmental practice and to directly translate data in order to meaningfully communicate necessary boundaries for our living. 
The World Wildlife Fund’s Living Planet Report 2008 exemplifies the growing popularity of the footprint as a political and poetic hook: speaking in terms of our “ecological overshoot,” and the move from “ecological credit to ecological deficit”, the report urges an attendance to our “global footprint” which “now exceeds the world’s capacity to regenerate by about 30 per cent” (1). Angela Crombie’s A Lighter Footprint, an instruction manual for sustainable living, is one of a host of media through which individuals are educated in modes of footprint calculation and management. She presents a range of techniques, including carbon offsetting, shifting to sustainable modes of transport, eating and buying differently, recycling and conserving water, to mediate our carbon dioxide output, and to “show […] politicians how easy it is” (13). Governments however, need no persuading from citizens that carbon calculation is an exercise to be harnessed. As governments around the world move (slowly) to address climate change, policies that instrumentalise carbon dioxide emission and reduction via an auditing of credits and deficits have come to the fore—for example, the European Union Emissions Trading Scheme and the Chicago Climate Exchange. In Australia, we have the currently-under-debate Carbon Pollution Reduction Scheme, a part of which is the Australian Emissions Trading Scheme (AETS) that will introduce a system of “carbon credits” and trading in a market-based model of supply and demand. This initiative will put a price on carbon dioxide emissions, and cap the amount of emissions any one polluter can produce without purchasing further credits. In readiness for the scheme, business initiatives are forming to take advantage of this new carbon market. 
Industries in carbon auditing and off-setting services are consolidating; hectares of trees, already active in the carbon sequestration market, are being cultivated as “carbon sinks” and key sites of compliance for polluters under the AETS. Governments are also planning to turn their tracts of forested public land into carbon credits worth billions of dollars (Arup 7). The attachment of emission measures to goods and services requires a range of calculative experts, and the implementation of new marketing and branding strategies, aimed at conveying the carbon “health” of a product. The introduction of “food mile” labelling (the amount of carbon dioxide emitted in the transportation of the food from source to consumer) in certain supermarkets in the United Kingdom is an example of this. Carbon risk analysis and management programs are being introduced across businesses in readiness for the forthcoming “carbon economy”. As one flyer selling “a suite of carbon related services” explains, “early action will give you the edge in understanding and mitigating the risks, and puts you in a prime position to capitalise on the rewards” (MGI Business Solutions Worldwide). In addition, lobby groups are working to ensure exclusions from or the free allocation of permits within the proposed AETS, with degrees of compulsion applied to different industries – the Federal Government, for instance, will provide a $3.9 billion compensation package for the electric power sector when the AETS commences, to enable their “adjustment” to this carbon regime.

Performing Life

Noortje Mares provides a further means of thinking through the politics of life in the context of climate change by complicating the distinction between public and private interest.
Her study of “green living experiments” describes the rise of carbon calculation in the home in recent years, and the implementation of technologies such as the smart electricity meter that provides a constantly updating display of data relating to amounts and cost of energy consumed and the carbon dioxide emitted in the routines of domestic life. Her research tracks the entry of these personal calculative regimes into public life via internet forums such as blogs, where individuals notate or discuss their experiences of pursuing low-carbon lifestyles. On the one hand, these calculative practices of living and their public representation can be read as evidencing the pervasive neo-liberal governmentality at work in contemporary environmental practice, where individuals are encouraged to scrupulously monitor their domestic cultures. The rise of auditing as a technology of self, and more broadly as a technique of public accountability, has come under fire for its “immunity-granting role” (Charkiewicz 79), where internal audits become substituted for external compliance and regulation. Mares challenges this reading, however, by demonstrating the ways in which green living experiments “transform everyday material practices into practices of public involvement” (118) that don’t resolve or pin down relations between the individual, the non-human environment, and the social, or reveal a mappable flow of actions and effects between the public realm and the home. The empirical modes of publicity that these individuals employ, “the careful recording of measurements and the reliable descriptions of sensory observation, so as to enable ‘virtual witnessing’ by wider audiences”, open up to much more complex understandings than one of calculative self-discipline at work.
As “instrument[s] of public involvement” (120), the experiments that Mares describes locate the politics of life in the embodied socio-material entanglements of the domestic sphere, in arrangements of humans and non-human technologies. Such arrangements, she suggests, are ontologically productive in that they introduce “not only new knowledge, but also new entities […] to society” (119), and as such these experiments and the modes of calculation they employ become active in the composition of reality. Recent work in economic sociology and cultural studies has similarly contended that calculation, far from either a naturalised or thoroughly abstract process, relies upon a host of devices, relations, and techniques: that is, as Gay Hawkins explains, calculative processes “have to be enacted” (108). Environmental governmentality in the service of securing life is a networked practice that draws in a host of actors, not a top-down imposition. The institution of carbon economies and carbon emissions as a new register of public accountability brings alternative ways to calculate the world into being, and consequently re-calibrates life as it emerges from these heterogeneous arrangements.
All That Gathers
Latour writes that we come to know a matter of concern by all the things that gather around it (Latour). This includes the human, as well as the non-human actors, policies, practices and technologies that are put to work in the making of our realities. Climate change is routinely represented as a threat to life, with predicted (and occurring) species extinction, growing numbers of climate change refugees, dispossessed from uninhabitable lands, and the rise of diseases and extreme weather scenarios that put human life in peril.
There is no doubt, of course, that climate change does mean death for some: indeed, there are thanatopolitical overtones in inequitable relations between the fall-out of impacts from major polluting nations on poorer countries, or those much more susceptible to rising sea levels. Biosocial equity, as Bull points out, is a “matter of being equally alive and equally dead” (2). Yet in the biopolitical project of assuring living, life is burgeoning around the problem of climate change. The critique of neo-liberalism as a blanketing system that subjects all aspects of life to market logic, and in which the cynical techniques of industry seek to appropriate ethico-political stances for their own material ends, is an insufficient response to what is actually unfolding in the messy terrain of climate change and its biopolitics. What this paper has attempted to show is that there is no particular purchase on life that can be had by any one actor who gathers around this concern. Varying interests, ambitions, and intentions, without moral hierarchy, stake their claim in life as a constantly constituting site in which they participate, and from this perspective, the ways in which we understand life to be both produced and managed expand. This is to refuse either an opposition or a conflation between the market and nature, or the market and life. It is also to argue that we cannot essentialise human-ness in the climate change debate. For while human relations with animals, plants and weathers may make us what we are, so too do our relations with (in a much less romantic view) non-human things, technologies, schemes, and even markets—from carbon auditing services, to the label on a tin on the supermarket shelf. As these intersect and entangle, the project of life, in the new politics of climate change, is far from straightforward.
References
An Inconvenient Truth. Dir. Davis Guggenheim. Village Roadshow, 2006. Arup, Tom.
“Victoria Makes Enormous Carbon Stocktake in Bid for Offset Billions.” The Age 24 Sep. 2009: 7. Bratich, Jack Z., Jeremy Packer, and Cameron McCarthy. “Governing the Present.” Foucault, Cultural Studies and Governmentality. Ed. Bratich, Packer and McCarthy. Albany: State University of New York Press, 2003. 3-21. Bull, Malcolm. “Globalization and Biopolitics.” New Left Review 45 (2007). 12 May 2009 <http://newleftreview.org/?page=article&view=2675>. Charkiewicz, Ewa. “Corporations, the UN and Neo-liberal Bio-politics.” Development 48.1 (2005): 75-83. Clark, Nigel, and Nick Stevenson. “Care in a Time of Catastrophe: Citizenship, Community and the Ecological Imagination.” Journal of Human Rights 2.2 (2003): 235-246. Crombie, Angela. A Lighter Footprint: A Practical Guide to Minimising Your Impact on the Planet. Carlton North, Vic.: Scribe, 2007. Di Muzio, Tim. “Governing Global Slums: The Biopolitics of Target 11.” Global Governance 14.3 (2008): 305-326. Elden, Stuart. “Governmentality, Calculation and Territory.” Environment and Planning D: Society and Space 25 (2007): 562-580. Hawkins, Gay. The Ethics of Waste: How We Relate to Rubbish. Sydney: University of New South Wales Press, 2006. Latour, Bruno. “Why Has Critique Run Out of Steam?: From Matters of Fact to Matters of Concern.” Critical Inquiry 30.2 (2004): 225-248. Mares, Noortje. “Testing Powers of Engagement: Green Living Experiments, the Ontological Turn and the Undoability of Involvement.” European Journal of Social Theory 12.1 (2009): 117-133. MGI Business Solutions Worldwide. “Carbon News.” Adelaide. 2 Aug. 2009. Ong, Aihwa. “Mutations in Citizenship.” Theory, Culture and Society 23.2-3 (2006): 499-505. Potter, Emily. “Footprints in the Mallee: Climate Change, Sustaining Communities, and the Nature of Place.” Landscapes and Learning: Place Studies in a Global World. Ed. Margaret Somerville, Kerith Power and Phoenix de Carteret. Sense Publishers. Forthcoming. Rabinow, Paul, and Nikolas Rose.
“Biopower Today.” Biosocieties 1 (2006): 195-217. Rose, Nikolas. “The Politics of Life Itself.” Theory, Culture and Society 18.6 (2001): 1-30. World Wildlife Fund. Living Planet Report 2008. Switzerland, 2008.

22

Muntean, Nick, and Anne Helen Petersen. "Celebrity Twitter: Strategies of Intrusion and Disclosure in the Age of Technoculture." M/C Journal 12, no. 5 (December 13, 2009). http://dx.doi.org/10.5204/mcj.194.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Being a celebrity sure ain’t what it used to be. Or, perhaps more accurately, the process of maintaining a stable star persona isn’t what it used to be. With the rise of new media technologies—including digital photography and video production, gossip blogging, social networking sites, and streaming video—there has been a rapid proliferation of voices which serve to articulate stars’ personae. This panoply of sanctioned and unsanctioned discourses has brought the coherence and stability of the star’s image into crisis, with an ever more heightened loop forming recursively between celebrity gossip and scandals, on the one hand, and, on the other, new media-enabled speculation and commentary about these scandals and gossip-pieces. Of course, while no subject has a single meaning, Hollywood has historically expended great energy and resources to perpetuate the myth that the star’s image is univocal. In the present moment, however, studios’ traditional methods for discursive control have faltered, such that celebrities have found it necessary to take matters into their own hands, using new media technologies, particularly Twitter, in an attempt to stabilise that most vital currency of their trade, their professional/public persona. In order to fully appreciate the significance of this new mode of publicity management, and its larger implications for contemporary subjectivity writ large, we must first come to understand the history of Hollywood’s approach to celebrity publicity and image management.
A Brief History of Hollywood Publicity
The origins of this effort are nearly as old as Hollywood itself, for, as Richard DeCordova explains, the celebrity scandals of the 1920s threatened to disrupt the economic vitality of the incipient industry such that strict, centralised image control appeared as a necessary imperative to maintain a consistently reliable product.
The Fatty Arbuckle murder trial was scandalous not only for its subject matter (a murder suffused with illicit and shadowy sexual innuendo) but also because the event revealed that stars, despite their mediated larger-than-life images, were not only as human as the rest of us, but that, in fact, they were capable of profoundly inhuman acts. The scandal, then, was not so much Arbuckle’s crime, but the negative pall it cast over the Hollywood mythos of glamour and grace. The studios quickly organised an industry-wide regulatory agency (the MPPDA) to counter potentially damaging rhetoric and ward off government intervention. Censorship codes and morality clauses were combined with well-funded publicity departments in an effort that successfully shifted the locus of the star’s extra-filmic discursive construction from private acts—which could betray their screen image—to information which served to extend and enhance the star’s pre-existing persona. In this way, the sanctioned celebrity knowledge sphere became co-extensive with that of commercial culture itself; the star became meaningful only by knowing how she spent her leisure time and the type of make-up she used. The star’s identity was not found via unsanctioned intrusion, but through studio-sanctioned disclosure, made available in the form of gossip columns, newsreels, and fan magazines. This period of relative stability for the star image was ultimately quite brief, however, as the collapse of the studio system in the late 1940s and the introduction of television brought about a radical, but gradual, reordering of the star’s signifying potential. The studios no longer had the resources or incentive to tightly police star images—the classic age of stardom was over.
During this period of change, an influx of alternative voices and publications filled the discursive void left by the demise of the studios’ regimented publicity efforts, with many of these new outlets reengaging older methods of intrusion to generate a regular rhythm of vendible information about the stars. The first to exploit and capitalize on star image instability was Robert Harrison, whose Confidential Magazine became the leading gossip publication of the 1950s. Unlike its fan magazine rivals, which persisted in portraying the stars as morally upright and wholesome, Confidential pledged on the cover of each issue to “tell the facts and name the names,” revealing what had been theretofore “confidential.” In essence, through intrusion, Confidential reasserted scandal as the true core of the star, simultaneously instituting incursion and surveillance as the most direct avenue to the “kernel” of the celebrity subject, obtaining stories through associations with call girls, out-of-work starlets, and private eyes. As extra-textual discourses proliferated and fragmented, the contexts in which the public encountered the star changed as well. Theatre attendance dropped dramatically, and as the studios sold their film libraries to television, the stars, formerly available only on the big screen and in glamour shots, were now intercut with commercials, broadcast on grainy sets in the domestic space. The integrity—or at least the illusion of integrity—of the star image was forever compromised. As the parameters of renown continued to expand, film stars, formerly distinguished from all other performers, migrated to television. The landscape of stardom was re-contoured into the “celebrity sphere,” a space that includes television hosts, musicians, royals, and charismatic politicians.
The revamped celebrity “game” was complex, but still playable: with a powerful agent, a talented publicist, and a check on drinking, drug use, and extra-marital affairs, a star and his or her management team could negotiate a coherent image. Confidential was gone, The National Enquirer was muzzled by libel laws, and People and E.T.—both sheltered within larger media companies—toed the publicists’ line. There were few widely circulated outlets through which unauthorised voices could gain traction.
Old-School Stars and New Media Technologies: The Case of Tom Cruise
Yet with the relentless arrival of various new media technologies beginning in the 1980s and continuing through the present, maintaining tight celebrity image control began to require the services of a phalanx of publicists and handlers. Here, the example of Tom Cruise is instructive: for nearly twenty years, Cruise’s publicity was managed by Pat Kingsley, who exercised exacting control over the star’s image. With the help of seemingly diverse yet essentially similar starring roles, Cruise solidified his image as the cocky, charismatic boy-next-door. The unified Cruise image was made possible by shutting down competing discourses through the relentless, comprehensive efforts of his management company; Kingsley's staff fine-tuned Cruise’s acts of disclosure while simultaneously eliminating the potential for unplanned intrusions, neutralising any potential scandal at its source. Kingsley and her aides performed for Cruise all the functions of a studio publicity department from Hollywood’s Golden Age. Most importantly, Cruise was kept silent on the topic of his controversial religion, Scientology, lest it incite domestic and international backlash.
In interviews and off-the-cuff soundbites, Cruise was ostensibly disclosing his true self, and that self remained the dominant reading of what, and who, Cruise “was.” Yet in 2004, Cruise fired Kingsley and replaced her with his own sister (and fellow Scientologist), who had no prior experience in public relations. In essence, he exchanged a handler who understood how to shape star disclosure for one who did not. The events that followed have been widely rehearsed: Cruise avidly pursued Katie Holmes; Cruise jumped for joy on Oprah’s couch; Cruise denounced psychology during a heated debate with Matt Lauer on The Today Show. His attempt at disclosing this new, un-publicist-mediated self became scandalous in and of itself. Cruise’s dismissal of Kingsley, his unpopular (but not necessarily unwelcome) disclosures, and his own massively unchecked ego all played crucial roles in the fall of the Cruise image. While these stumbles might have caused some minor career turmoil in the past, the hyper-echoic, spastically recombinatory logic of the technoculture brought the speed and stakes of these missteps to a new level; one of the hallmarks of the postmodern condition has been not merely an increasing textual self-reflexivity, but a qualitatively new leap forward in inter-textual reflexivity, as well (Lyotard; Baudrillard). Indeed, the swift dismantling of Cruise’s long-established image is directly linked to the immediacy and speed of the Internet, digital photography, and the gossip blog, as the reflexivity of new media rendered the safe division between disclosure and intrusion untenable. His couch-jumping was turned into a dance remix and circulated on YouTube; Mission Impossible 3 boycotts were organised through a number of different Web forums; gossip bloggers speculated that Cruise had impregnated Holmes using the frozen sperm of Scientology founder L. Ron Hubbard.
In the past, Cruise simply filed defamation suits against print publications that would deign to sully his image. Yet the sheer number of sites and voices reproducing this new set of rumors made such a strategy untenable. Ultimately, intrusions into Cruise’s personal life, including the leak of videos intended solely for Scientology recruitment use, had far more traction than any sanctioned Cruise soundbite. Cruise’s image emerged as a hollowed husk of its former self; the sheer amount of material circulating rendered all attempts at P.R., including a Vanity Fair cover story and “reveal” of daughter Suri, ridiculous. His image was fragmented and re-collected into an altered, almost uncanny new iteration. Following the lackluster performance of Mission Impossible 3 and public condemnation by Paramount head Sumner Redstone, Cruise seemed almost pitiable.
The New Logic of Celebrity Image Management
Cruise’s travails are expressive of a deeper development which has occurred over the course of the last decade, as the massively proliferating new forms of celebrity discourse (e.g., paparazzi photos, mug shots, cell phone video) have further decentered any shiny, polished version of a star. With older forms of media increasingly reorganising themselves according to the aesthetics and logic of new media forms (e.g., CNN featuring regular segments in which it focuses its network cameras upon a computer screen displaying the CNN website), we are only more prone to appreciate “low media” forms of star discourse—reports from fans on discussion boards, photos taken on cell phones—as valid components of the celebrity image. People and E.T. still attract millions, but they are rapidly ceding control of the celebrity industry to their ugly, offensive stepbrothers: TMZ, Us Weekly, and dozens of gossip blogs.
Importantly, a publicist may be able to induce a blogger to cover their client, but they cannot convince him to drop a story: if TMZ doesn’t post it, then Perez Hilton certainly will. With TMZ unabashedly offering pay-outs to informants—including those in law enforcement and health care, despite recently passed legislation—a star is never safe. If he or she misbehaves, someone, professional or amateur, will provide coverage. Scandal becomes normalised, and, in so doing, can no longer really function as scandal as such; in an age of around-the-clock news cycles and celebrity-fixated journalism, the only truly scandalising event would be the complete absence of any scandalous reports. Or, as aesthetic theorist Jacques Ranciere puts it: “The complaint is then no longer that images conceal secrets which are no longer such to anyone, but, on the contrary, that they no longer hide anything” (22). These seemingly paradoxical involutions of post-modern celebrity epistemologies are at the core of the current crisis of celebrity, and, subsequently, of celebrities’ attempts to “take back their own paparazzi.” As one might expect, contemporary celebrities have attempted to counter these new logics and strategies of intrusion through a heightened commitment to disclosure, principally through the social networking capabilities of Twitter. Yet, as we will see, not only have the epistemological reorderings of postmodernist technoculture affected the logic of scandal/intrusion, but so too have they radically altered the workings of intrusion’s dialectical counterpart, disclosure. In the 1930s, when written letters were still the primary medium for intimate communication, stars would send lengthy “hand-written” letters to members of their fan club. Of course, such letters were generally not written by the stars themselves, but handwriting—and a star’s signature—signified authenticity.
This ritualised process conferred an “aura” of authenticity upon the object of exchange precisely because of its static, recurring nature—exchange of fan mail was conventionally understood to be the primary medium for personal encounters with a celebrity. Within the overall political economy of the studio system, the medium of the hand-written letter functioned to unleash the productive power of authenticity, offering an illusion of communion which, in fact, served to underscore the gulf between the celebrity’s extraordinary nature and the ordinary lives of those who wrote to them. Yet the criteria and conventions through which celebrity personae were maintained were subject to change over time, as new communications technologies, new modes of Hollywood's industrial organization, and the changing realities of commercial media structures all combined to create a constantly moving ground upon which the celebrity tried to affix itself. The celebrity’s changing conditions are not unique to them alone; rather, they are a highly visible bellwether of changes which are more fundamentally occurring at all levels of culture and subjectivity. Indeed, more than seventy years ago, Walter Benjamin observed that when hand-made expressions of individuality were superseded by mechanical methods of production, aesthetic criteria (among other things) also underwent change, rendering notions of authenticity increasingly indeterminate. Such is the case that in today’s world, hand-written letters seem more contrived or disingenuous than Danny DeVito’s inaugural post to his Twitter account: “I just joined Twitter! I don't really get this site or how it works. My nuts are on fire.” The performative gesture in DeVito’s tweet is eminently clear, just as the semantic value is patently false: clearly DeVito understands “this site,” as he has successfully used it to extend his irreverent funny-little-man persona to the new medium.
While the truth claims of his Tweet may be false, its functional purpose—both effacing and reifying the extraordinary/ordinary distinction of celebrity and maintaining DeVito’s celebrity personality as one with which people might identify—is nevertheless seemingly intact, and thus mirrors the instrumental value of celebrity disclosure as performed in older media forms.
Twitter and Contemporary Technoculture
For these reasons and more, considered within the larger context of contemporary popular culture, celebrity tweeting has been equated with the assertion of the authentic celebrity voice; celebrity tweets are regularly cited in newspaper articles and blogs as “official” statements from the celebrity him/herself. With so many mediated voices attempting to “speak” the meaning of the star, the Twitter account emerges as the privileged channel to the star him/herself. Yet the seemingly easy discursive associations of Twitter and authenticity are in fact ideological acts par excellence, as fixations on the indexical truth-value of Twitter are not merely missing the point, but actively distracting from the real issues surrounding the unsteady discursive construction of contemporary celebrity and the “celebrification” of contemporary subjectivity writ large. In other words, while it is taken as axiomatic that the “message” of celebrity Twittering is, as Henry Jenkins suggests, “Here I Am,” this outward epistemological certainty veils the deeply unstable nature of celebrity—and by extension, subjectivity itself—in our networked society. If we understand the relationship between publicity and technoculture to work as Zizek-inspired cultural theorist Jodi Dean suggests, then technologies “believe for us, accessing information even if we cannot” (40), such that technology itself is enlisted to serve the function of ideology, the process by which a culture naturalises itself and attempts to render the notion of totality coherent.
For Dean, the psycho-ideological reality of contemporary culture is predicated upon the notion of an ever-elusive “secret,” which promises to reveal us all as part of a unitary public. The reality—that there is no such cohesive collective body—is obscured in the secret’s mystifying function which renders as “a contingent gap what is really the fact of the fundamental split, antagonism, and rupture of politics” (40). Under the ascendancy of the technoculture—Dean's term for the technologically mediated landscape of contemporary communicative capitalism—subjectivity becomes interpellated along an axis blind to the secret of this fundamental rupture. The two interwoven poles of this axis are not unlike structuralist film critics' dialectically intertwined accounts of the scopophilia and scopophobia of viewing relations, simply enlarged from the limited realm of the gaze to encompass the entire range of subjectivity. As such, the conspiratorial mindset is that mode of desire, of lack, which attempts to attain the “secret,” while the celebrity subject is that element of excess without which desire is unthinkable. As one might expect, the paparazzi and gossip sites’ strategies of intrusion have historically operated primarily through the conspiratorial mindset, with endless conjecture about what is “really happening” behind the scenes. Under the intrusive/conspiratorial paradigm, the authentic celebrity subject is always just out of reach—a chance sighting only serves to reinscribe the need for the next encounter where, it is believed, all will become known. Under such conditions, the conspiratorial mindset of the paparazzi is put into overdrive: because the star can never be “fully” known, there can never be enough information about a star, therefore, more information is always needed.
Against this relentless intrusion, the celebrity—whose discursive stability, given the constant imperative for newness in commercial culture, is always in danger—risks a semiotic liquidation that will totally displace his celebrity status as such. Disclosure, e.g., tweeting, emerges as a possible corrective to the endlessly associative logic of the paparazzi’s conspiratorial mindset. In other words, through Twitter, the celebrity seeks to arrest meaning—fixing it in place around their own seemingly coherent narrativisation. The publicist’s new task, then, is to convincingly counter such unsanctioned, intrusive, surveillance-based discourse. Stars continue to give interviews, of course, and many regularly pose as “authors” of their own homepages and blogs. Yet as posited above, Twitter has emerged as the most salient means of generating “authentic” celebrity disclosure, simultaneously countering the efforts of the paparazzi, fan mags, and gossip blogs to complicate or rewrite the meaning of the star. The star uses the account—verified, by Twitter, as the “real” star—both as a means to disclose their true interior state of being and to counter ersatz narratives circulating about them. Twitter’s appeal for both celebrities and their followers comes from the ostensible spontaneity of the tweets, as the seemingly unrehearsed quality of the communiqués lends the form an immediacy and casualness unmatched by blogs or official websites; the semantic informality typically employed in the medium obscures their larger professional significance for celebrity tweeters. While Twitter’s air of extemporary intimacy is also offered by other social networking platforms, such as MySpace or Facebook, the latter’s opportunities for public feedback (via wall-posts and the like) work counter to the tight image control offered by Twitter’s broadcast-esque model.
Additionally, because of the uncertain nature of the tweet release cycle—has Ashton Kutcher sent a new tweet yet?—the voyeuristic nature of the tweet disclosure (with its real-time nature offering a level of synchronic intimacy that letters never could have matched), and the semantically displaced nature of the medium, it is a form of disclosure perfectly attuned to the conspiratorial mindset of the technoculture. As mentioned above, however, the conspiratorial mindset is an unstable subjectivity, insofar as it only exists through a constant oscillation with its twin, the celebrity subjectivity. While we can understand that, for the celebrities, Twitter functions by allowing them a mode for disclosive/celebrity subjectivisation, we have not yet seen how the celebrity itself is rendered conspiratorial through Twitter. Similarly, only the conspiratorial mode of the follower’s subjectivity has thus far been enumerated; the moment of the follower's celebrification has so far gone unmentioned. Since we have seen that the celebrity function of Twitter is not really about discourse per se, we should instead understand that the ideological value of Twitter comes from the act of tweeting itself, of finding pleasure in being engaged in a techno-social system in which one's participation is recognised. Recognition and participation should be qualified, though, as it is not the fully active type of participation one might expect in, say, the electoral politics of a representative democracy. Instead, it is a participation in a sort of epistemological viewing relations, or, as Jodi Dean describes it, “that we understand ourselves as known is what makes us think that there is a public that knows us” (122).
The fans’ recognition by the celebrity—the way in which they understood themselves as known by the star—was once the receipt of a hand-signed letter (and a latent expectation that the celebrity had read the fan’s initial letter); such an exchange conferred to the fan a momentary sense of participation in the celebrity's extraordinary aura. Under Twitter, however, such an exchange does not occur, as that feeling of one-to-one interaction is absent; simply by looking elsewhere on the screen, one can confirm that a celebrity's tweet was received by two million other individuals. The closest a fan can come to that older modality of recognition is by sending a message to the celebrity that the celebrity then “re-tweets” to his broader following. Beyond the obvious levels of technological estrangement involved in such recognition is the fact that the identity of the re-tweeted fan will not be known by the celebrity’s other two million followers. That sense of sharing in the celebrity’s extraordinary aura is altered by an awareness that the very act of recognition largely entails performing one’s relative anonymity in front of the other wholly anonymous followers. As the associative, conspiratorial mindset of the star endlessly searches for fodder through which to maintain its image, fans allow what was previously a personal moment of recognition to be transformed into a public one. That is, the conditions through which one realises one’s personal subjectivity are, in fact, themselves becoming remade according to the logic of celebrity, in which priority is given to the simple fact of visibility over that of the actual object made visible. Against such an opaque cultural transformation, the recent rise of reactionary libertarianism and anti-collectivist sentiment is hardly surprising.
References
Baudrillard, Jean. Simulacra and Simulation. Ann Arbor: Michigan UP, 1994. Benjamin, Walter. Illuminations. New York: Harcourt, Brace and World, 1968. Dean, Jodi.
Publicity’s Secret: How Technoculture Capitalizes on Democracy. Ithaca: Cornell UP, 2003. DeCordova, Richard. Picture Personalities: The Emergence of the Star System in America. Urbana: University of Illinois Press, 1990. Jenkins, Henry. “The Message of Twitter: ‘Here It Is’ and ‘Here I Am.’” Confessions of an Aca-Fan. 23 Aug. 2009. 15 Sep. 2009 < http://henryjenkins.org/2009/08/the_message_of_twitter.html >.Lyotard, Jean-Francois. The Postmodern Condition: A Report on Knowledge. Minneapolis: Minnesota UP, 1984.Ranciere, Jacques. The Future of the Image. New York: Verso, 2007.

23

Abidin, Crystal. "Micromicrocelebrity: Branding Babies on the Internet." M/C Journal 18, no. 5 (October 14, 2015). http://dx.doi.org/10.5204/mcj.1022.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

Babies and toddlers are amassing huge followings on social media, achieving microcelebrity status, and raking in five-figure sums. In East Asia, many of these lucrative “micro-microcelebrities” rise to fame by inheriting exposure and proximate microcelebrification from their social media Influencer mothers. Through self-branding techniques, Influencer mothers’ portrayals of their young children’s lives “as lived” are the canvas on which (baby) products and services are marketed to readers as “advertorials”. In turning to investigate this budding phenomenon, I draw on ethnographic case studies in Singapore to outline the career trajectory of these young children (under 4yo), including their social media presence, branding strategies, and engagement with their followers. The chapter closes with a brief discussion on some ethical considerations of such young children’s labour in the social media age.
Influencer Mothers
Theresa Senft first coined the term “microcelebrity” in her work Camgirls to describe a burgeoning online trend, wherein people attempt to gain popularity by employing digital media technologies, such as videos, blogs, and social media.
She describes microcelebrities as “non-actors as performers” whose narratives take place “without overt manipulation”, and who are “more ‘real’ than television personalities with ‘perfect hair, perfect friends and perfect lives’” (Senft 16), foregrounding their active response to their communities in ways that maintain open channels of feedback on social media to engage with their following.
Influencers – a vernacular industry term albeit inspired by Katz & Lazarsfeld’s notion of “personal influence” that predates Internet culture – are one type of microcelebrity; they are everyday, ordinary Internet users who accumulate a relatively large following on blogs and social media through the textual and visual narration of their personal lives and lifestyles, engage with their following in “digital” and “physical” spaces, and monetize their following by integrating “advertorials” into their blog or social media posts and making physical appearances at events. A pastiche of “advertisement” and “editorial”, advertorials in the Influencer industry are highly personalized, opinion-laden promotions of products/services that Influencers personally experience and endorse for a fee. Influencers in Singapore often brand themselves as having “relatability”, or the ability to persuade their followers to identify with them (Abidin). They do so by making consciously visible the backstage (Goffman) of the usually “inaccessible”, “personal”, and “private” aspects of mundane, everyday life to curate personae that feel “authentic” to fans (Marwick 114), and more accessible than traditional celebrity (Senft 16).
Historically, the Influencer industry in Singapore can be traced back to the early beginnings of the “blogshop” industry from the mid-2000s and the “commercial blogging” industry. Influencers are predominantly young women, and market products and services from diverse industries, although the most popular have been fashion, beauty, F&B, travel, and electronics.
Most prominent Influencers are contracted to management agencies who broker deals in exchange for commission and assist in the production of their vlogs. Since then, the industry has grown, matured, and expanded so rapidly that Influencers developed emergent models of advertorials, with the earliest cohorts moving into different life stages and monetizing several other aspects of their personal lives such as the “micro-microcelebrity” of their young children. What this paper provides is an important analysis of the genesis and normative practices of micro-microcelebrity commerce in Singapore from its earliest years, and future research trajectories in this field.
Micro-Microcelebrity and Proximate Microcelebrification
I define micro-microcelebrities as the children of Influencers who have themselves become proximate microcelebrities, having derived exposure and fame from their prominent Influencer mothers, usually through a more prolific, deliberate, and commercial form of what Blum-Ross defines as “sharenting”: the act of parents sharing images and stories about their children in digital spaces such as social networking sites and blogs.
Marwick (116-117), drawing from Rojek’s work on types of celebrity, distinguishes between two types of microcelebrity: “ascribed microcelebrity”, where the online personality is made recognizable through the “production of celebrity media” such as paparazzi shots and user-produced online memes, or “achieved microcelebrity”, where users engage in “self-presentation strateg[ies]”, such as fostering the illusion of intimacy with fans, maintaining a persona, and selective disclosure about oneself.
Micro-microcelebrities lie somewhere between the two: in a process I term “proximate microcelebrification”, micro-microcelebrities themselves inherit celebrity through the preemptive and continuous exposure from their Influencer mothers, many beginning even during the pre-birth pregnancy stages in the form of ultrasound scans, as a form of “achieved microcelebrity”. Influencer mothers whose “presentational strategies” (cf. Marshall, “Promotion” 45) are successful enough (as will be addressed later) gain traction among followers, who in turn further popularize the micro-microcelebrity by setting up fan accounts, tribute sites, and gossip forums through which fame is heightened in a feedback loop as a model of “ascribed microcelebrity”.
Here, however, I refrain from conceptualizing these young stars as “micro-Influencers”, for unlike Influencers, these children do not yet curate their self-presentation to command the attention of followers, but instead are used, framed, and appropriated by their mothers for advertorials. In other words, Influencer mothers “curate [micro-microcelebrities’] identities into being” (Leaver, “Birth”). Following this, many aspects of their micro-microcelebrities become rapidly commodified and commercialized, with advertisers clamoring to endorse anything from maternity hospital stays to nappy cream.
Although children of mommybloggers have the potential to become micro-microcelebrities, both groups are conceptually distinct.
Friedman (200-201) argues that among mommybloggers arose a tension between those who adopt “the raw authenticity of nonmonetized blogging”, documenting the “unglamorous minutiae” of their daily lives and a “more authentic view of motherhood”, and those who use mommyblogs “primarily as a source of extra income rather than as a site for memoir”, focusing on “parent-centered products” (cf. Mom Bloggers Club).
In contrast, micro-microcelebrities and their digital presence are deliberately commercial, framed and staged by Influencer mothers in order to maximize their advertorial potential, and are often postured to market even non-baby/parenting products such as fast food and vehicles (see later). Because of the overt commerce, it is unclear if micro-microcelebrity displays constitute “intimate surveillance”, an “almost always well-intentioned surveillance of young people by parents” (Leaver, “Born” 4). Furthermore, children are generally peripheral to mommybloggers, whose own parenting narratives take precedence as a way to connect with fellow mothers, while micro-microcelebrities are the primary feature whose everyday lives and digital presence enrapture followers.
Methodology
The analysis presented is informed by my original fieldwork with 125 Influencers and related actors, among whom I conducted a mixture of physical and digital personal interviews, participant observation, web archaeology, and archival research between December 2011 and October 2014.
However, the material presented here is based on my digital participant observation of the publicly accessible and intentionally-public digital presence of the first four highly successful micro-microcelebrities in Singapore: “Baby Dash” (b.2013) is the son of Influencer xiaxue, “#HeYurou” (b.2011) is the niece of Influencer bongqiuqiu, “#BabyElroyE” (b.2014) is the son of Influencer ohsofickle, and “@MereGoRound” (b.2015) is the daughter of Influencer bongqiuqiu.
The microcelebrity/social media handles of these children take different forms, following the platform on which their parent/aunt has exposed them the most. Baby Dash appears in all of xiaxue’s digital platforms under a variety of over 30 indexical, ironic, or humourous hashtags (Leaver, “Birth”) including “#pointylipped”, “#pineappledash”, and “#面包脸” (trans. “bread face”); “#HeYurou” appears on bongqiuqiu’s Instagram and Twitter; “#BabyElroyE” appears on ohsofickle’s Instagram and blog, and is the central figure of his mother’s new YouTube channel; and “@MereGoRound” appears on all of bongqiuqiu’s digital platforms but also has her own Instagram account and dedicated YouTube channel. The images reproduced here are screenshots from the Influencer mothers’ highly public social media: xiaxue, bongqiuqiu, and ohsofickle boast 593k, 277k, and 124k followers on Instagram and 263k, 41k, and 17k followers on Twitter respectively at the time of writing.
Anticipation and Digital Estates
In an exclusive front-pager (Figure 1) on the day of his induced birth, it was announced that Baby Dash had already received up to SGD25,000 worth of endorsement deals brokered by his Influencer mother, xiaxue. As the first micro-microcelebrity in his cohort (his mother was among the pioneer Influencers), Baby Dash’s Caesarean section was even filmed and posted on xiaxue’s YouTube channel in three parts (Figure 2).
xiaxue had announced her pregnancy on her blog while in her second trimester, following which she consistently posted mirror selfies of her baby bump.
Figure 1 & 2, screenshot April 2013 from ‹instagram.com/xiaxue›
In her successful attempt at generating anticipation, the “bump” itself seemed to garner its own following on Twitter and Instagram, with many followers discussing how the Influencer dressed “it”, and how “it” was evolving over the weeks. One follower even compiled a collage of xiaxue’s “bump” chronologically and gifted it to the Influencer as an art image via Twitter on the day she delivered Baby Dash (Figure 3 & 4). Followers also frequently speculated and bantered about how her baby would look, and mused about how much they were going to adore him.
Figure 3 & 4, screenshot March 2013 from ‹twitter.com/xiaxue›
While Lupton (42) has conceptualized the sharing of images that precede birth as a “rite of passage”, Influencer mothers who publish sonograms deliberately do so in order to claim digital estates for their to-be micro-microcelebrities in the form of “reserved” social media handles, blog URLs, and unique hashtags for self-branding. For instance, at the 3-month mark of her pregnancy, Influencer bongqiuqiu debuted her baby’s dedicated hashtag, “#MereGoRound”, in a birth announcement on her Instagram account. Shortly after, she started an Instagram account, “@MereGoRound”, for her baby, who amassed over 5.5k followers prior to her birth.
Figure 5 & 6, screenshot March 2015 from ‹instagram.com/meregoround› and ‹instagram.com/bongqiuqiu›
The debut picture features a heavily pregnant belly shot of bongqiuqiu (Figure 5), creating much anticipation for the arrival of a new micro-microcelebrity: in the six months leading up to her birth, various family, friends, and fans shared Instagram images of their gifts and welcome party for @MereGoRound, and followers shared congratulations and fan art on the dedicated Instagram hashtag.
During this time, bongqiuqiu also frequently updated followers on her pregnancy progress, not without advertising her (presumably sponsored) gynecologist and hospital stay in her pregnancy diaries (Figure 6) – like Baby Dash, even as a foetus @MereGoRound was accumulating advertorials. Presently at six months old, @MereGoRound boasts almost 40k followers on Instagram, on which, embedded in the narrative of her growth, are sponsored products and services from various advertisers.
Non-Baby-Related Advertorials
Prior to her pregnancy, Influencer bongqiuqiu hopped onto the micro-microcelebrity bandwagon in the wake of Baby Dash’s birth by using her niece “#HeYurou” in her advertorials. Many Influencers attempt to naturalize their advertorials by composing their post as if recounting a family event. With reference to a child, parent, or partner, they may muse or quip about a product being used or an experience being shared in a bid to mask the distinction between their personal and commercial material. bongqiuqiu frequently posted personal, non-sponsored images of her niece engaging in mundane daily activities under the dedicated hashtag “#HeYurou”.
However, this was occasionally interspersed with pictures of her niece holding on to various products including storybooks (Figure 8) and shopping bags (Figure 9). At first glance, this might have seemed like any mundane daily update the Influencer often posts. However, a close inspection reveals the caption bearing sponsor hashtags, tags, and campaign information. For instance, one Instagram post shows #HeYurou casually holding on to and staring at a burger in KFC wrapping (Figure 7), but when read in tandem with bongqiuqiu’s other KFC-related posts published over a span of a few months, it becomes clear that #HeYurou was in fact advertising for KFC.
Figure 7, 8, 9, screenshot December 2014 from ‹instagram.com/bongqiuqiu›
Elsewhere, Baby Dash was incorporated into xiaxue’s car sponsorship with over 20 large decals of one of his viral photos – dubbed “pineapple Dash” among followers – plastered all over her vehicle (Figure 10). Followers who spot the car in public are encouraged to photograph and upload the image using its dedicated hashtag, “#xiaxuecar”, as part of the Influencer’s car sponsorship – an engagement scarcely related to her young child. Since then, xiaxue has considered producing offshoots of “pineapple Dash” products including smartphone casings.
Figure 10, screenshot December 2014 from ‹instagram.com/xiaxue›
Follower Engagement
Sponsors regularly organize fan meet-and-greets headlined by micro-microcelebrities in order to attract potential customers. Photo opportunities and the chance to see Baby Dash “in the flesh” frequently front press and promotional material of marketing campaigns. Elsewhere on social media, several Baby Dash fan and tribute accounts have also emerged on Instagram, reposting images and related media of the micro-microcelebrity with overt adoration, no doubt encouraged by xiaxue, who began crowdsourcing captions for Baby Dash’s photos.
Influencer ohsofickle postures #BabyElroyE’s follower engagement in a more subtle way. On her YouTube channel, which debuted in the month of her baby’s birth, ohsofickle produces video diaries of being a young, single mother who is raising a child (Figure 11). In each episode, #BabyElroyE is the main feature whose daily activities are documented, and while there is some advertising embedded, ohsofickle’s approach on YouTube is much less overt than others as it features much more non-monetized personal content (Figure 12). Her blog serves as a backchannel to her vlogs, in which she recounts her struggles with motherhood and explicitly solicits the advice of mothers.
However, owing to her young age (she became an Influencer at 17 and gave birth at 24), many of her followers are teenagers and young women who respond to her solicitations by gushing over #BabyElroyE’s images on Instagram.
Figure 11 & 12, screenshot September 2015 from ‹instagram.com/ohsofickle›
Privacy
As noted by Holloway et al. (23), children like micro-microcelebrities will be among the first cohorts to inherit “digital profiles” of their “whole lifetime” as a “work in progress”, from parents who habitually underestimate or discount the privacy and long-term effects of publicizing information about their children at the time of posting. This matters in a climate where social media platforms can amend privacy policies without user consent (23), and is even more pressing for micro-microcelebrities whose followers store, republish, and recirculate information in fan networks, resulting in digital footprints with persistence, replicability, scalability, searchability (boyd), and extended longevity in public circulation which can be attributed back to the children indefinitely (Leaver, “Ends”).
Despite minimum age restrictions and recent concerns with “digital kidnapping”, where users steal images of other young children to be re-posted as their own (Whigham), some social media platforms rarely police the proliferation of accounts set up by parents on behalf of their underage children prominently displaying their legal names and life histories, citing differing jurisdictions in various countries (Facebook; Instagram), while others claim to disable accounts if users report an “incorrect birth date” (cf. Google for YouTube). In Singapore, the Media Development Authority (MDA), which governs all print and digital media, has no firm regulations for this but suggests that the age of consent is 16, judging by their recommendation to parents with children aged below 16 to subscribe to Internet filtering services (Media Development Authority, “Regulatory” 1).
Moreover, current initiatives have been focused on how parents can impart digital literacy to their children (Media Development Authority, “Empowered”; Media Literacy Council) as opposed to educating parents about the digital footprints they may be unwittingly leaving about their children.
The digital lives of micro-microcelebrities pose new layers of concern given their publicness and deliberate publicity, specifically hinged on making visible the usually inaccessible, private aspects of everyday life (Marshall, “Persona” 5). Scholars note that celebrities are individuals for whom speculation about their private lives takes precedence over their actual public role or career (Geraghty 100-101; Turner 8). However, the personae of Influencers and their young children are shaped by ambiguously blurring the boundaries of privacy and publicness in order to bait followers’ attention, such that privacy and publicness are defined by being broadcast, circulated, and publicized (Warner 414). In other words, the publicness of micro-microcelebrities is premised on the extent of the intentional publicity rather than simply being in the public domain (Marwick 223-231, emphasis mine).
Among Influencers, privacy concerns have aroused awareness but not action – Baby Dash’s Influencer mother admitted in a national radio interview that he has received a death threat via Instagram but feels that her child is unlikely to be actually attacked (Channel News Asia) – because privacy is a commodity that is manipulated and performed to advance their micro-microcelebrities’ careers. As pioneer micro-microcelebrities are all under two years old at present, future research warrants investigating “child-centred definitions” (Third et al.)
of the transition in which they come of age, grow an awareness of their digital presence, respond to their Influencer mothers’ actions, and potentially take over their accounts.
Young Labour
The Ministry of Manpower (MOM) in Singapore, which regulates the employment of children and young persons, states that children under the age of 13 may not legally work in non-industrial or industrial settings (Ministry of Manpower). However, the same document later ambiguously states that underaged children who do work can only do so under strict work limits (Ministry of Manpower). Elsewhere (Chan), it is noted that national labour statistics have thus far only focused on those above the age of 15, thus neglecting a true reflection of underaged labour in Singapore. This is despite the prominence of micro-microcelebrities who are put in front of (video) cameras to build social media content. Additionally, the work of micro-microcelebrities on digital platforms has not yet been formally recognized as labour, and is not regulated by any authority including Influencer management firms, clients, the MDA, and the MOM. Brief snippets from my ethnographic fieldwork with Influencer management agencies in Singapore similarly reveal that micro-microcelebrities’ labour engagements and control of their earnings are entirely at their parents’ discretion.
As models and actors, micro-microcelebrities are one form of entertainment workers who, if between the ages of 15 days and 18 years in the state of California, are required to obtain an Entertainment Work Permit to be gainfully employed, adhering to strict work, schooling, and rest hour quotas (Department of Industrial Relations). Furthermore, the Californian Coogan Law affirms that earnings by these minors are their own property and not their parents’, although they are not old enough to legally control their finances and rely on the state to govern their earnings with a legal guardian (Screen Actors Guild).
However, this similarly excludes underaged children and micro-microcelebrities engaged in creative digital ecologies. Future research should look into safeguards and instruments among young child entertainers, especially for micro-microcelebrities, among whom commercial work and personal documentation are not always distinct, and are in fact deliberately intertwined in order to better engage with followers for relatability.
Growing Up Branded
In the wake of moral panics over excessive surveillance technologies, children’s safety on the Internet, and data retention concerns, micro-microcelebrities and their Influencer mothers stand out for their deliberately personal and overtly commercial approach towards self-documenting, self-presenting, and self-publicizing from the moment of conception. As these debut micro-microcelebrities grow older and inherit digital publics, personae, and careers, future research should focus on the transition of their ownership, engagement, and reactions to a branded childhood in which babies were postured for an intimate public.
References
Abidin, Crystal. “Communicative Intimacies: Influencers and Perceived Interconnectedness.” Ada: A Journal of Gender, New Media, & Technology. Forthcoming, Nov 2015.
Aiello, Marianne. “Mommy Blog Banner Ads Get Results.” Healthcare Marketing Advisor 17 Nov. 2010. HealthLeaders Media. 16 Aug. 2015 ‹http://healthleadersmedia.com/content/MAR-259215/Mommy-Blog-Banner-Ads-Get-Results›.
Azzarone, Stephanie. “When Consumers Report: Mommy Blogging Your Way to Success.” Playthings 18 Feb. 2009. Upfront: Marketing. 16 Aug. 2015 ‹http://mamanista.com/media/Mamanista_playthings_full.pdf›.
Blum-Ross, Alicia. “’Sharenting’: Parent Bloggers and Managing Children’s Digital Footprints.” Parenting for a Digital Future, 17 Jun. 2015. 16 Aug. 2015 ‹http://blogs.lse.ac.uk/parenting4digitalfuture/2015/06/17/managing-your-childs-digital-footprint-and-or-parent-bloggers-ahead-of-brit-mums-on-the-20th-of-june/›.
boyd, danah.
“Social Network Sites and Networked Publics: Affordances, Dynamics and Implications.” A Networked Self: Identity, Community, and Culture on Social Network Sites. Ed. Zizi Papacharissi. London: Routledge, 2010. 39–58.
Business Wire. “Attention All Mommy Bloggers: TheBump.com Launches 2nd Annual The Bump Mommy Blog Awards.” Business Wire 2 Nov. 2010. 16 Aug. 2015 ‹http://www.businesswire.com/news/home/20101102007005/en/Attention-Mommy-Bloggers-TheBump.com-Launches-2nd-Annual#.VdDsXp2qqko›.
Channel News Asia. “Blogger Xiaxue ‘On the Record’.” Channel News Asia 10 Jul. 2015. 16 Aug. 2015 ‹http://www.channelnewsasia.com/news/singapore/blogger-xiaxue-on-the/1975712.html›.
Chan, Wing Cheong. “Protection of Underaged Workers in Singapore: Domestic and International Regulation.” Singapore Academy of Law Journal 17 (2005): 668-692. ‹http://www.sal.org.sg/digitallibrary/Lists/SAL%20Journal/Attachments/376/2005-17-SAcLJ-668-Chan.pdf›.
Department of Industrial Relations. “California Child Labor Laws.” Department of Industrial Relations, 2013. 16 Aug. 2015 ‹http://www.dir.ca.gov/DLSE/ChildLaborLawPamphlet.pdf›.
Facebook. “How Do I Report a Child under the Age of 13?” Facebook 2015. 16 Aug. 2015 ‹https://www.facebook.com/help/157793540954833›.
Friedman, Mary. Mommyblogs and the Changing Face of Motherhood. Toronto, ON: University of Toronto Press, 2013.
Geraghty, Christine. “Re-Examining Stardom: Questions of Texts, Bodies and Performance.” Stardom and Celebrity: A Reader. Eds. Sean Redmond & Su Holmes. Los Angeles: Sage, 2007. 98-110.
Goffman, Erving. The Presentation of Self in Everyday Life. London: Penguin Books, 1956.
Google. “Age Requirements on Google Accounts.” Google Support 2015. 16 Aug. 2015 ‹https://support.google.com/accounts/answer/1350409?hl=en›.
Holloway, Donell, Lelia Green, and Sonia Livingstone. “Zero to Eight: Young Children and Their Internet Use.” EU Kids Online 2013. London: London School of Economics. 16 Aug. 2015 ‹http://eprints.lse.ac.uk/52630/1/Zero_to_eight.pdf›.
Howell, Whitney L.J. “Mom-to-Mom Blogs: Hospitals Invite Women to Share Experiences.” H&HN 84.10 (2010): 18. ‹http://connection.ebscohost.com/c/articles/54858655/mom-to-mom-blogs-hospitals-invite-women-share-experiences-mommy-blogs-are-catching-as-way-let-parents-interact-compare-notes›.
Instagram. “Tips for Parents.” Instagram Help 2015. 16 Aug. 2015 ‹https://help.instagram.com/154475974694511/›.
Katz, Elihu, and Paul F. Lazarsfeld. Personal Influence: The Part Played by People in the Flow of Mass Communications. New Brunswick: Transaction Publishers, 2009.
Leaver, Tama. “The Ends of Online Identity.” Paper presented at Internet Research 12, Seattle, 2011.
Leaver, Tama. “Birth and Death on Social Media: Dr Tama Leaver.” Lecture presented at Curtin University, 20 Jul. 2015. 16 Aug. 2015 ‹https://www.youtube.com/watch?v=rQ6eW6qxGx8›.
Leaver, Tama. “Born Digital? Presence, Privacy, and Intimate Surveillance.” Re-Orientation: Translingual Transcultural Transmedia: Studies in Narrative, Language, Identity, and Knowledge. Eds. John Hartley & Weiguo Qu. Fudan University Press, forthcoming.
Lupton, Deborah. The Social Worlds of the Unborn. Basingstoke: Palgrave Macmillan, 2013.
Marshall, P. David. “The Promotion and Presentation of the Self: Celebrity as Marker of Presentational Media.” Celebrity Studies 1.1 (2010): 35-48.
Marshall, P. David. “Persona Studies: Mapping the Proliferation of the Public Self.” Journalism 15.2 (2013): 153-170.
Marwick, Alice E. Status Update: Celebrity, Publicity, & Branding in the Social Media Age. New Haven, CT: Yale University Press, 2013.
Media Development Authority. “The Regulatory Options to Facilitate the Adoption of Internet Parental Controls.” Regulations and Licensing 2015. 16 Aug. 2015 ‹http://www.mda.gov.sg/RegulationsAndLicensing/Consultation/Documents/Consultation%20Papers/Public%20consultation%20paper%20for%20Internet%20parental%20controls_21%20Apr_final.pdf›.
Media Development Authority. “Be Empowered! Protecting Your Kids in the Digital Age.” Documents 2015. 16 Aug. 2015 ‹http://www.mda.gov.sg/Documents/Newsletter/Issue08/Pages/02.aspx.html›.
Media Literacy Council. “Clique Click: Bringing Up Children in the Digital Age.” Resources 2014. 16 Aug. 2015 ‹http://www.medialiteracycouncil.sg/Lists/Resources/Attachments/176/Clique%20Click.pdf›.
Ministry of Manpower. “Employing Young Persons and Children.” Employment 26 May 2014. 16 Aug. 2015 ‹http://www.mom.gov.sg/employment-practices/young-persons-and-children›.
Mom Bloggers Club. “Eight Proven Ways to Monetize Your Mom Blog.” Mom Bloggers Club 19 Nov. 2009. 15 Aug. 2015 ‹http://www.mombloggersclub.com/page/eight-proven-ways-to-monetize?id=988554%3APage%3A345278&page=3#comments›.
Morrison, Aimee. “‘Suffused by Feeling and Affect:’ The Intimate Public of Personal Mommy Blogging.” Biography 34.1 (2011): 37-55.
Nash, Meredith. “Shapes of Motherhood: Exploring Postnatal Body Image through Photographs.” Journal of Gender Studies (2013): 1-20. ‹http://www.tandfonline.com/doi/abs/10.1080/09589236.2013.797340#.VdDsvZ2qqko›.
Rojek, Chris. Celebrity. London: Reaktion Books, 2001.
Screen Actors Guild. “Coogan Law.” SAGAFTRA 2015. 16 Aug. 2015 ‹http://www.sagaftra.org/content/coogan-law›.
Senft, Theresa M. Camgirls: Celebrity & Community in the Age of Social Networks. New York, NY: Peter Lang, 2008.
Stevenson, Seth. “Popularity Counts.” Wired 20.5 (2012): 120.
Tatum, Christine. “Mommy Blogs Mull and Prove Market Might.” Denver Post 23 Oct. 2007. 16 Aug. 2015 ‹http://www.denverpost.com/search/ci_7250753›.
Third, Amanda, Delphine Bellerose, Urszula Dawkins, Emma Keltie, and Kari Pihl. “Children’s Rights in the Digital Age.” Young and Well Cooperative Research Centre 2014. 16 Aug. 2015 ‹http://www.youngandwellcrc.org.au/wp-content/uploads/2014/10/Childrens-Rights-in-the-Digital-Age_Report_single_FINAL_.pdf›.
Thompson, Stephanie. “Mommy Blogs: A Marketer’s Dream; Growing Number of Well-Produced Sites Put Advertisers in Touch with an Affluent, Loyal Demo.” AD AGE 26 Feb. 2007. 16 Aug. 2015 ‹http://adage.com/article/digital/mommy-blogs-a-marketer-s-dream/115194/›.
Turner, Graeme. Understanding Celebrity. Los Angeles: Sage, 2004.
Warner, Michael. “Publics and Counterpublics.” Quarterly Journal of Speech 88.4 (2002): 413-425.
Whigham, Nick. “Digital Kidnapping Will Make You Think Twice about What You Post to Social Media.” News.com.au 15 July 2015. 16 Aug. 2015 ‹http://www.news.com.au/lifestyle/real-life/digital-kidnapping-will-make-you-think-twice-about-what-you-post-to-social-media/story-fnq2oad4-1227449635495›.

24

Stewart, Jonathan. "If I Had Possession over Judgment Day: Augmenting Robert Johnson." M/C Journal 16, no. 6 (December 16, 2013). http://dx.doi.org/10.5204/mcj.715.

Full text

APA, Harvard, Vancouver, ISO, and other styles

Abstract:

augment vb [ɔːgˈmɛnt] 1. to make or become greater in number, amount, strength, etc.; increase. 2. Music: to increase (a major or perfect interval) by a semitone. (Collins English Dictionary 107)
Almost everything associated with Robert Johnson has been subject to some form of augmentation. His talent as a musician and songwriter has been embroidered by myth-making. Johnson’s few remaining artefacts—his photographic images, his grave site, other physical records of his existence—have attained the status of reliquary. Even the integrity of his forty-two surviving recordings is now challenged by audiophiles who posit they were musically and sonically augmented by speeding up—increasing the tempo and pitch. This article documents the promulgation of myth in the life and music of Robert Johnson. His disputed photographic images are cited as archetypal contested artefacts, augmented both by false claims and genuine new discoveries—some of which suggest Johnson’s cultural magnetism is so compelling that even items only tenuously connected to his work draw significant attention. Current challenges to the musical integrity of Johnson’s original recordings, that they were “augmented” in order to raise the tempo, are presented as exemplars of our on-going fascination with his life and work. Part literature review, part investigative history, it uses the phenomenon of augmentation as a prism to shed new light on this enigmatic figure. Johnson’s obscurity during his lifetime, and for twenty-three years after his demise in 1938, offered little indication of his future status as a musical legend: “As far as the evolution of black music goes, Robert Johnson was an extremely minor figure, and very little that happened in the decades following his death would have been affected if he had never played a note” (Wald, Escaping xv).
Such anonymity allowed those who first wrote about his music to embrace and propagate the myths that grew around this troubled character and his apparently “supernatural” genius. Johnson’s first press notice, from a pseudonymous John Hammond writing in The New Masses in 1937, spoke of a mysterious character from “deepest Mississippi” who “makes Leadbelly sound like an accomplished poseur” (Prial 111). The following year Hammond eulogised the singer in profoundly romantic terms: “It still knocks me over when I think of how lucky it is that a talent like his ever found its way to phonograph records […] Johnson died last week at precisely the moment when Vocalion scouts finally reached him and told him that he was booked to appear at Carnegie Hall” (19). The visceral awe experienced by subsequent generations of Johnson aficionados seems inspired by the remarkable capacity of his recordings to transcend space and time, reaching far beyond their immediate intended audience. “Johnson’s music changed the way the world looked to me,” wrote Greil Marcus, “I could listen to nothing else for months.” The music’s impact originates, at least in part, from the ambiguity of its origins: “I have the feeling, at times, that the reason Johnson has remained so elusive is that no one has been willing to take him at his word” (27-8). Three decades later Bob Dylan expressed similar sentiments over seven detailed pages of Chronicles: From the first note the vibrations from the loudspeaker made my hair stand up … it felt like a ghost had come into the room, a fearsome apparition …When he sings about icicles hanging on a tree it gives me the chills, or about milk turning blue … it made me nauseous and I wondered how he did that … It’s hard to imagine sharecroppers or plantation field hands at hop joints, relating to songs like these. You have to wonder if Johnson was playing for an audience that only he could see, one off in the future. 
(282-4) Such ready invocation of the supernatural bears witness to the profundity and resilience of the “lost bluesman” as a romantic trope. Barry Lee Pearson and Bill McCulloch have produced a painstaking genealogy of such a-historical misrepresentation. Early contributors include Rudi Blesh, Samuel B. Charters, Frank Driggs’ liner notes for Johnson’s King of the Delta Blues Singers collection, and critic Pete Welding’s prolific 1960s output. Even comparatively recent researchers who ostensibly sought to demystify the legend couldn’t help but embellish the narrative. “It is undeniable that Johnson was fascinated with and probably obsessed by supernatural imagery,” asserted Robert Palmer (127). For Peter Guralnick his best songs articulate “the debt that must be paid for art and the Faustian bargain that Johnson sees at its core” (43). Contemporary scholarship from Pearson and McCulloch, James Banninghof, Charles Ford, and Elijah Wald has scrutinised Johnson’s life and work on a more evidential basis. This process has been likened to assembling a complicated jigsaw where half the pieces are missing: The Mississippi Delta has been practically turned upside down in the search for records of Robert Johnson. So far only marriage application signatures, two photos, a death certificate, a disputed death note, a few scattered school documents and conflicting oral histories of the man exist. Nothing more. (Graves 47) Such material is scrappy and unreliable. Johnson’s marriage licenses and his school records suggest contradictory dates of birth (Freeland 49). His death certificate mistakes his age—we now know that Johnson inadvertently founded another rock myth, the “27 Club” which includes fellow guitarists Brian Jones, Jimi Hendrix and Kurt Cobain (Wolkewitz et al.; Segalstad and Hunter)—and incorrectly states he was single when he was twice widowed. A second contemporary research strand focuses on the mythmaking process itself. 
For Eric Rothenbuhler the appeal of Johnson’s recordings lies in his unique “for-the-record” aesthetic, which foreshadowed playing and song writing standards not widely realised until the 1960s. For Patricia Schroeder Johnson’s legend reveals far more about the story-tellers than it does the source—which over time has become “an empty center around which multiple interpretations, assorted viewpoints, and a variety of discourses swirl” (3). Some accounts of Johnson’s life seem entirely coloured by their authors’ cultural preconceptions. The most enduring myth, Johnson’s “crossroads” encounter with the Devil, is commonly redrawn according to the predilections of those telling the tale. That this story really belongs to bluesman Tommy Johnson has been known for over four decades (Evans 22), yet it was mistakenly attributed to Robert as recently as 1999 in French blues magazine Soul Bag (Pearson and McCulloch 92-3). Such errors are, thankfully, becoming less common. While the movie Crossroads (1986) brazenly appropriated Tommy’s story, the young walking bluesman in O Brother, Where Art Thou? (2000) faithfully proclaims his authentic identity: “Thanks for the lift, sir. My name's Tommy. Tommy Johnson […] I had to be at that crossroads last midnight. Sell my soul to the devil.” Nevertheless the “supernatural” constituent of Johnson’s legend remains an irresistible framing device. It inspired evocative footage in Peter Meyer’s Can’t You Hear the Wind Howl? The Life and Music of Robert Johnson (1998). Even the liner notes to the definitive Sony Music Robert Johnson: The Centennial Edition celebrate and reclaim his myth: nothing about this musician is more famous than the word-of-mouth accounts of him selling his soul to the devil at a midnight crossroads in exchange for his singular mastery of blues guitar. 
It has become fashionable to downplay or dismiss this account nowadays, but the most likely source of the tale is Johnson himself, and the best efforts of scholars to present this artist in ordinary, human terms have done little to cut through the mystique and mystery that surround him. Repackaged versions of Johnson’s recordings became available via Amazon.co.uk and Spotify when they fell out of copyright in the United Kingdom. Predictable titles such as Contracted to the Devil, Hellbound, Me and the Devil Blues, and Up Jumped the Devil along with their distinctive “crossroads” artwork continue to demonstrate the durability of this myth [1]. Ironically, Johnson’s recordings were made during an era when one-off exhibited artworks (such as his individual performances of music) first became reproducible products. Walter Benjamin famously described the impact of this development: that which withers in the age of mechanical reproduction is the aura of the work of art […] the technique of reproduction detaches the reproduced object from the domain of tradition. By making many reproductions it substitutes a plurality of copies for a unique existence. (7) Marybeth Hamilton drew on Benjamin in her exploration of white folklorists’ efforts to document authentic pre-modern blues culture. Such individuals sought to preserve the intensity of the uncorrupted and untutored black voice before its authenticity and uniqueness could be tarnished by widespread mechanical reproduction. Two artefacts central to Johnson’s myth, his photographs and his recorded output, will now be considered in that context. In 1973 researcher Stephen LaVere located two pictures in the possession of Johnson’s half-sister Carrie Thompson. The first, a cheap “dime store” self portrait taken in the equivalent of a modern photo booth, shows Johnson around a year into his life as a walking bluesman. The second, taken in the Hooks Bros. 
studio in Beale Street, Memphis, portrays a dapper and smiling musician on the eve of his short career as a Vocalion recording artist [2]. Neither was published for over a decade after their “discovery” due to fears of litigation from a competing researcher. A third photograph remains unpublished, still owned by Johnson’s family: The man has short nappy hair; he is slight, one foot is raised, and he is up on his toes as though stretching for height. There is a sharp crease in his pants, and a handkerchief protrudes from his breast pocket […] His eyes are deep-set, reserved, and his expression forms a half-smile, there seems to be a gentleness about him, his fingers are extraordinarily long and delicate, his head is tilted to one side. (Guralnick 67) Recently a fourth portrait appeared, seemingly out of nowhere, in Vanity Fair. Vintage guitar seller Steven Schein discovered a sepia photograph labelled “Old Snapshot Blues Guitar B. B. King???” [sic] while browsing Ebay and purchased it for $2,200. Johnson’s son positively identified the image, and a Houston Police Department forensic artist employed face recognition technology to confirm that “all the features are consistent if not identical” (DiGiacomo 2008). The provenance of this photograph remains disputed, however. Johnson’s guitar appears overly distressed for what would at the time be a new model, while his clothes reflect an inappropriate style for the period (Graves). Another contested “Johnson” image found on four seconds of silent film showed a walking bluesman playing outside a small town cinema in Ruleville, Mississippi. It inspired Bob Dylan to wax lyrical in Chronicles: “You can see that really is Robert Johnson, has to be – couldn’t be anyone else. He’s playing with huge, spiderlike hands and they magically move over the strings of his guitar” (287). 
However, it had already been proved that this figure couldn’t be Johnson, because the background movie poster shows a film released three years after the musician’s death. The temptation to wish such items genuine is clearly a difficult one to overcome: “even things that might have been Robert Johnson now leave an afterglow” (Schroeder 154, my italics). Johnson’s recordings, so carefully preserved by Hammond and other researchers, might offer tangible and inviolate primary source material. Yet these also now face a serious challenge: they run too rapidly, by up to 15 per cent (Gibbens; Wilde). Speeding up music allowed early producers to increase a song’s vibrancy and fit longer takes on to their restricted media. By slowing the recording tempo, master discs provided a “mother” print that would cause all subsequent pressings to play unnaturally quickly when reproduced. Robert Johnson worked for half a decade as a walking blues musician without restrictions on the length of his songs before recording with producer Don Law and engineer Vincent Liebler in San Antonio (1936) and Dallas (1937). Longer compositions were reworked for these sessions, with verses re-arranged and edited out (Wald, Escaping). It is also conceivable that they were purposefully, or even accidentally, sped up. (The tempo consistency of machines used in early field recordings across the South has often been questioned, as many played too fast or slow (Morris).) Slowed-down versions of Johnson’s songs from contributors such as Angus Blackthorne and Ron Talley now proliferate on YouTube. The debate has fuelled detailed discussion in online blogs, where some contributors to specialist audio technology forums have attempted to decode a faintly detectable background hum using spectrum analysers. If the frequency of the alternating current that powered Law and Liebler’s machine could be established at 50 or 60 Hz it might provide evidence of possible tempo variation. 
A peak at 51.4 Hz, one contributor argues, suggests “the recordings are 2.8 per cent fast, about half a semitone” (Blischke). Such “augmentation” has yet to be fully explored in academic literature. Graves describes the discussion as “compelling and intriguing” in his endnotes, concluding “there are many pros and cons to the argument and, indeed, many recordings over the years have been speeded up to make them seem livelier” (124). Wald ("Robert Johnson") provides a compelling and detailed counter-thesis on his website, although he does acknowledge inconsistencies in pitch among alternate master takes of some recordings. No-one who actually saw Robert Johnson perform ever called attention to potential discrepancies between the pitch of his natural and recorded voice. David “Honeyboy” Edwards, Robert Lockwood Jr. and Johnny Shines were all interviewed repeatedly by documentarians and researchers, but none ever raised the issue. Conversely Johnson’s former girlfriend Willie Mae Powell was visibly affected by the familiarity in his voice on hearing his recording of the tune Johnson wrote for her, “Love in Vain”, in Chris Hunt’s The Search for Robert Johnson (1991). Clues might also lie in the natural tonality of Johnson’s instrument. Delta bluesmen who shared Johnson’s repertoire and played slide guitar in his style commonly used a tuning of open G (D-G-D-G-B-D). Colloquially known as “Spanish” (Gordon 2002, 38-42) it offers a natural home key of G major for slide guitar. We might therefore expect Johnson’s recordings to revolve around the tonic (G) or its dominant (D); however, almost all of his songs are a full tone higher, in the key of A or its dominant E. (The only exceptions are “They’re Red Hot” and “From Four Till Late” in C, and “Love in Vain” in G.) A pitch increase such as this might be consistent with an increase in the speed of these recordings. 
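The forum contributor's arithmetic is easy to verify. If the mains hum on the transfer measures 51.4 Hz but the studio supply actually ran at 50 Hz, the playback speed is fast by that same ratio, and the pitch shift in semitones is twelve times the base-2 logarithm of the ratio (each semitone being a frequency ratio of 2^(1/12)). A minimal check, assuming the 50 Hz supply frequency the forum debate hinges on:

```python
import math

# Hum frequency measured on the transfer vs. the assumed mains frequency.
measured_hz = 51.4
assumed_mains_hz = 50.0

# Speed error: the transfer runs fast by the ratio of the two frequencies.
speed_ratio = measured_hz / assumed_mains_hz
percent_fast = (speed_ratio - 1) * 100

# Pitch shift in semitones: a semitone is a frequency ratio of 2**(1/12),
# so the shift is 12 * log2(ratio).
semitones_sharp = 12 * math.log2(speed_ratio)

print(f"{percent_fast:.1f}% fast, {semitones_sharp:.2f} semitones sharp")
# prints "2.8% fast, 0.48 semitones sharp"
```

This reproduces both of Blischke's figures: 2.8 per cent fast, roughly half a semitone sharp, which matches the observed full-tone discrepancy only partially and is one reason the debate remains open.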
An alternative explanation might be that Johnson tuned his strings particularly tightly, which would benefit his slide playing but also make fingering notes and chords less comfortable. Yet another is that he used a capo to raise the key of his instrument and was capable of performing difficult lead parts in relatively high fret positions on the neck of an acoustic guitar. This is accepted by Scott Ainslie and Dave Whitehill in their authoritative volume of transcriptions At the Crossroads (11). The photo booth self portrait of Johnson also clearly shows a capo at the second fret—which would indeed raise open G to open A (in concert pitch). The most persuasive reasoning against speed tampering runs parallel to the argument laid out earlier in this piece: previous iterations of the Johnson myth have superimposed their own circumstances and ignored the context and reality of the protagonist’s lived experience. As Wald argues, our assumptions of what we think Johnson ought to sound like have little bearing on what he actually sounded like. It is a compelling point. When Son House, Skip James, Bukka White, and other surviving bluesmen were “rediscovered” during the 1960s urban folk revival of North America and Europe they were old men with deep and resonant voices. Johnson’s falsetto vocalisations do not, therefore, accord with the commonly accepted sound of an authentic blues artist. Yet Johnson was in his mid-twenties in 1936 and 1937, a young man heavily influenced by the success of other high-pitched male blues singers of his era. People argue that what is better about the sound is that the slower, lower Johnson sounds more like Son House. 
Now, House was a major influence on Johnson, but by the time Johnson recorded he was not trying to sound like House—an older player who had been unsuccessful on records—but rather like Leroy Carr, Casey Bill Weldon, Kokomo Arnold, Lonnie Johnson, and Peetie Wheatstraw, who were the big blues recording stars in the mid–1930s, and whose vocal styles he imitated on most of his records. (For example, the ooh-well-well falsetto yodel he often used was imitated from Wheatstraw and Weldon.) These singers tended to have higher, smoother voices than House—exactly the sound that Johnson seems to have been going for, and that the House fans dislike. So their whole argument is based on the fact that they prefer the older Delta sound to the mainstream popular blues sound of the 1930s—or, to put it differently, that their tastes are different from Johnson’s own tastes at the moment he was recording. (Wald, "Robert Johnson") Few media can capture an audible moment entirely accurately, and the idea of engineering a faithful reproduction of an original performance is also only one element of the rationale for any recording. Commercial engineers often aim to represent the emotion of a musical moment, rather than its totality. John and Alan Lomax may have worked as documentarians, preserving sound as faithfully as possible for the benefit of future generations on behalf of the Library of Congress. Law and Liebler, however, were producing exciting and profitable commercial products for financial gain. Paradoxically, then, whatever the “real” Robert Johnson sounded like (deeper voice, no mesmeric falsetto, not such an extraordinarily adept guitar player, never met the Devil … and so on) the mythical figure who “sold his soul at the crossroads” and shipped millions of albums after his death may, on that basis, be equally as authentic as the original. Schroeder draws on Mikhail Bakhtin to comment on such vacant yet hotly contested spaces around the Johnson myth. 
For Bakhtin, literary texts are ascribed new meanings by consecutive generations as they absorb and respond to them. Every age re–accentuates in its own way the works of its most immediate past. The historical life of classic works is in fact the uninterrupted process of their social and ideological re–accentuation [of] ever newer aspects of meaning; their semantic content literally continues to grow, to further create out of itself. (421) In this respect Johnson’s legend is a “classic work”, entirely removed from its historical life, a free-floating form re-contextualised and reinterpreted by successive generations in order to make sense of their own cultural predilections (Schroeder 57). As Graves observes, “since Robert Johnson’s death there has seemed to be a mathematical equation of sorts at play: the less truth we have, the more myth we get” (113). The threads connecting his real and mythical identity seem so comprehensively intertwined that only the most assiduous scholars are capable of disentanglement. Johnson’s life and work seem destined to remain augmented and contested for as long as people want to play guitar, and others want to listen to them. Notes [1] Actually the dominant theme of Johnson’s songs is not “the supernatural”; it is his inveterate womanising. Almost all Johnson’s lyrics employ creative metaphors to depict troubled relationships. Some even include vivid images of domestic abuse. In “Stop Breakin’ Down Blues” a woman threatens him with a gun. In “32–20 Blues” he discusses the most effective calibre of weapon to shoot his partner and “cut her half in two.” In “Me and the Devil Blues” Johnson promises “to beat my woman until I get satisfied”. However in The Lady and Mrs Johnson five-time W. C. Handy award winner Rory Block re-wrote these words to befit her own cultural agenda, inverting the original sentiment as: “I got to love my baby ‘til I get satisfied”. [2] The Gibson L-1 guitar featured in Johnson’s Hooks Bros. 
portrait briefly became another contested artefact when it appeared in the catalogue of a New York State memorabilia dealership in 2006 with an asking price of $6,000,000. The Australian owner had apparently purchased the instrument forty years earlier under the impression it was bona fide, although photographic comparison technology showed that it couldn’t be genuine and the item was withdrawn. “Had it been real, I would have been able to sell it several times over,” Gary Zimet from MIT Memorabilia told me in an interview for Guitarist Magazine at the time, “a unique item like that will only ever increase in value” (Stewart 2010). References Ainslie, Scott, and Dave Whitehill. Robert Johnson: At the Crossroads – The Authoritative Guitar Transcriptions. Milwaukee: Hal Leonard Publishing, 1992. Bakhtin, Mikhail M. The Dialogic Imagination. Austin: University of Texas Press, 1982. Banks, Russell. “The Devil and Robert Johnson – Robert Johnson: The Complete Recordings.” The New Republic 204.17 (1991): 27-30. Banninghof, James. “Some Ramblings on Robert Johnson’s Mind: Critical Analysis and Aesthetic in Delta Blues.” American Music 15/2 (1997): 137-158. Benjamin, Walter. The Work of Art in the Age of Mechanical Reproduction. London: Penguin, 2008. Blackthorne, Angus. “Robert Johnson Slowed Down.” YouTube.com 2011. 1 Aug. 2013 ‹http://www.youtube.com/user/ANGUSBLACKTHORN?feature=watch›. Blesh, Rudi. Shining Trumpets: A History of Jazz. New York: Knopf, 1946. Blischke, Michael. “Slowing Down Robert Johnson.” The Straight Dope 2008. 1 Aug. 2013 ‹http://boards.straightdope.com/sdmb/showthread.php?t=461601›. Block, Rory. The Lady and Mrs Johnson. Rykodisc 10872, 2006. Charters, Samuel. The Country Blues. New York: Da Capo Press, 1959. Collins UK. Collins English Dictionary. Glasgow: Harper Collins Publishers, 2010. DiGiacomo, Frank. “A Disputed Robert Johnson Photo Gets the C.S.I. Treatment.” Vanity Fair 2008. 1 Aug. 
2013 ‹http://www.vanityfair.com/online/daily/2008/10/a-disputed-robert-johnson-photo-gets-the-csi-treatment›. DiGiacomo, Frank. “Portrait of a Phantom: Searching for Robert Johnson.” Vanity Fair 2008. 1 Aug. 2013 ‹http://www.vanityfair.com/culture/features/2008/11/johnson200811›. Dylan, Bob. Chronicles Vol 1. London: Simon & Schuster, 2005. Evans, David. Tommy Johnson. London: November Books, 1971. Ford, Charles. “Robert Johnson’s Rhythms.” Popular Music 17.1 (1998): 71-93. Freeland, Tom. “Robert Johnson: Some Witnesses to a Short Life.” Living Blues 150 (2000): 43-49. Gibbens, John. “Steady Rollin’ Man: A Revolutionary Critique of Robert Johnson.” Touched 2004. 1 Aug. 2013 ‹http://www.touched.co.uk/press/rjnote.html›. Gioia, Ted. Delta Blues: The Life and Times of the Mississippi Masters Who Revolutionised American Music. London: W. W. Norton & Co, 2008. Gioia, Ted. "Robert Johnson: A Century, and Beyond." Robert Johnson: The Centennial Collection. Sony Music 88697859072, 2011. Gordon, Robert. Can’t Be Satisfied: The Life and Times of Muddy Waters. London: Pimlico Books, 2002. Graves, Tom. Crossroads: The Life and Afterlife of Blues Legend Robert Johnson. Spokane: Demers Books, 2008. Guralnick, Peter. Searching for Robert Johnson: The Life and Legend of the "King of the Delta Blues Singers". London: Plume, 1998. Hamilton, Marybeth. In Search of the Blues: Black Voices, White Visions. London: Jonathan Cape, 2007. Hammond, John. From Spirituals to Swing (Dedicated to Bessie Smith). New York: The New Masses, 1938. Johnson, Robert. “Hellbound.” Amazon.co.uk 2011. 1 Aug. 2013 ‹http://www.amazon.co.uk/Hellbound/dp/B0063S8Y4C/ref=sr_1_cc_2?s=aps&ie=UTF8&qid=1376605065&sr=1-2-catcorr&keywords=robert+johnson+hellbound›. ———. “Contracted to the Devil.” Amazon.co.uk 2002. 1 Aug. 2013. ‹http://www.amazon.co.uk/Contracted-The-Devil-Robert-Johnson/dp/B00006F1L4/ref=sr_1_cc_1?s=aps&ie=UTF8&qid=1376830351&sr=1-1-catcorr&keywords=Contracted+to+The+Devil›. ———. 
King of the Delta Blues Singers. Columbia Records CL1654, 1961. ———. “Me and the Devil Blues.” Amazon.co.uk 2003. 1 Aug. 2013 ‹http://www.amazon.co.uk/Me-Devil-Blues-Robert-Johnson/dp/B00008SH7O/ref=sr_1_16?s=music&ie=UTF8&qid=1376604807&sr=1-16&keywords=robert+johnson›. ———. “The High Price of Soul.” Amazon.co.uk 2007. 1 Aug. 2013 ‹http://www.amazon.co.uk/High-Price-Soul-Robert-Johnson/dp/B000LC582C/ref=sr_1_39?s=music&ie=UTF8&qid=1376604863&sr=1-39&keywords=robert+johnson›. ———. “Up Jumped the Devil.” Amazon.co.uk 2005. 1 Aug. 2013 ‹http://www.amazon.co.uk/Up-Jumped-Devil-Robert-Johnson/dp/B000B57SL8/ref=sr_1_2?s=music&ie=UTF8&qid=1376829917&sr=1-2&keywords=Up+Jumped+The+Devil›. Marcus, Greil. Mystery Train: Images of America in Rock ‘n’ Roll Music. London: Plume, 1997. Morris, Christopher. “Phonograph Blues: Robert Johnson Mastered at Wrong Speed?” Variety 2010. 1 Aug. 2013 ‹http://www.varietysoundcheck.com/2010/05/phonograph-blues-robert-johnson-mastered-at-wrong-speed.html›. O Brother, Where Art Thou? DVD. Universal Pictures, 2000. Palmer, Robert. Deep Blues: A Musical and Cultural History from the Mississippi Delta to Chicago’s South Side to the World. London: Penguin Books, 1981. Pearson, Barry Lee, and Bill McCulloch. Robert Johnson: Lost and Found. Chicago: University of Illinois Press, 2003. Prial, Dunstan. The Producer: John Hammond and the Soul of American Music. New York: Farrar, Straus and Giroux, 2006. Rothenbuhler, Eric W. “For–the–Record Aesthetics and Robert Johnson’s Blues Style as a Product of Recorded Culture.” Popular Music 26.1 (2007): 65-81. Rothenbuhler, Eric W. “Myth and Collective Memory in the Case of Robert Johnson.” Critical Studies in Media Communication 24.3 (2007): 189-205. Schroeder, Patricia. Robert Johnson, Mythmaking and Contemporary American Culture (Music in American Life). Chicago: University of Illinois Press, 2004. Segalstad, Eric, and Josh Hunter. The 27s: The Greatest Myth of Rock and Roll. 
Berkeley: North Atlantic Books, 2009. Stewart, Jon. “Rock Climbing: Jon Stewart Concludes His Investigation of the Myths behind Robert Johnson.” Guitarist Magazine 327 (2010): 34. The Search for Robert Johnson. DVD. Sony Pictures, 1991. Talley, Ron. “Robert Johnson, 'Sweet Home Chicago', as It REALLY Sounded...” YouTube.com 2012. 1 Aug. 2013. ‹http://www.youtube.com/watch?v=LCHod3_yEWQ›. Wald, Elijah. Escaping the Delta: Robert Johnson and the Invention of the Blues. London: HarperCollins, 2005. ———. The Robert Johnson Speed Recording Controversy. Elijah Wald — Writer, Musician 2012. 1 Aug. 2013. ‹http://www.elijahwald.com/johnsonspeed.html›. Wilde, John. “Robert Johnson Revelation Tells Us to Put the Brakes on the Blues: We've Been Listening to the Immortal 'King of the Delta Blues' at the Wrong Speed, But Now We Can Hear Him as He Intended.” The Guardian 2010. 1 Aug. 2013 ‹http://www.theguardian.com/music/musicblog/2010/may/27/robert-johnson-blues›. Wolkewitz, M., A. Allignol, N. Graves, and A.G. Barnett. “Is 27 Really a Dangerous Age for Famous Musicians? Retrospective Cohort Study.” British Medical Journal 343 (2011): d7799. 1 Aug. 2013 ‹http://www.bmj.com/content/343/bmj.d7799›.
