The internet runs on habits. This course prompts us to change them from the inside. We will study ignored histories of digital stuff, examine dominant beliefs about technologies and culture, and compose our own content, from audio and images to prototypes and websites, all with the goal of making critical media, with the past in mind, in the present moment.
Please note this course:
- Is not a “gotcha!” course. Our aim is not to identify mistakes and call people on them; we will instead stress awareness, reflection, and engagement through attention to situation and practice.
- Is not a computer science course. The only technical competencies I assume are that you know how to send an email and also check this website.
- Is not about defining “digital humanities.” We will treat digital humanities not as a field but as a collection of strategies for conducting humanities research with and through digital technologies. We will emphasize techniques and methods over the use of specific tools and software.
- Jan 7: Why Unlearn the Internet?
- Jan 10: The Myth of Lone Male Inventors
- Jan 14 + 17: The Desire for Immediate Interfaces
- Jan 21 + 24: The Investment in Neutral Instruments
- Jan 28 + 31: The Rhetoric of Raw Data
- Feb 4 + 7: The Appeal to Interactivity
- Feb 11 + 14: Writing and Publishing Workshops
- Feb 18 + 21: Reading Break
- Feb 25: Choosing Your Own Adventure
- Feb 28 + Mar 4: The Assertion of Autonomous Algorithms
- Mar 7: The Belief in Sourcery
- Mar 11: The Ideology of Landless Territory
- Mar 14: The Futures of Digital Labour
- Mar 18 + 21: Making a Zine
- Mar 25: The Fallacy of Extended Man
- Mar 28: Polishing Your Work
- Apr 1 + 4: Zine Fest and FIN
Why Unlearn the Internet? #
I’ll start by echoing Cathy Davidson (2011) and Jack Halberstam (2012): to unlearn is to identify habits and constraints, rethink them, and experiment with alternatives. Unlearning requires being open about what you don’t know and what you take for granted; it also demands a shift in attention. You might speculate or wonder, unfocus and refocus, resist and reconfigure, or approach things anew.
But why unlearn the internet in particular? Well, the internet is an infrastructure (hardware, software, and services) that shapes how a lot of knowledge is produced these days. It’s at once a force and a relation that easily becomes “common sense” or “second nature.” It is content, too: rather than only using it, you can study it, the material it circulates, and the patterns it enables.
For the purposes of this course, then, to unlearn the internet is a fourfold process:
- To acknowledge the internet as a mode and means of production, without yearning nostalgically for some pre-internet era or rejecting the internet altogether;
- To historicize the internet as culture, without accepting that history as inevitable;
- To try some perhaps unfamiliar techniques, without relying on the default settings of most software, hardware, and services; and
- To assess a few dominant beliefs about some technologies and practices at play in the internet, without assuming such beliefs cannot or will not change. Those beliefs are detailed below, under the section headings for each module.
Jan 7 (M): Early Histories of the Internet
We’ll introduce ourselves, talk about the course structure and topics, and then discuss some early histories of the internet, including histories of movable type, printing presses, steam engines, automata, mechanical digital computers, and telegraph networks, all of which may be considered “old parts” of machines we run today. Before you leave, I’ll ask you to submit an “exit card” to me, communicating any questions you have about the course.
Log Entry for Jan 7 (M): Pick An Old Part
Following today’s brief overview of the internet’s early histories and “old parts,” identify one pre-1900 technology that was both new and interesting to you. Please write 50-100 words explaining why it piqued your interest. Feel free to include a particular question or two you have about the technology and its context. (No need to conduct any additional research here; you can draw on your initial impressions.)
Notes and Slides for this Module
The notes and slides for this module are online. A key term in the module is parthood, and important questions resulting from the module are, “How is the internet an assemblage of ‘old’ parts, and how do those parts and their histories shape knowledge production today?”
The Myth of Lone Male Inventors #
Technologies are frequently treated as creations of genius men working alone. Consider the legacies of Charles Babbage, Thomas Edison, Alexander Graham Bell, Nikola Tesla, and Steve Jobs, for instance. Or just search for “famous inventors.” Individual men are often given credit for technologies, even when those projects were imagined, made, operated, and maintained by groups of people. Sites of technological development, including computing projects, have also been sites of gender inequality and discrimination–sites where women have been paid less than men and also expected to downplay their knowledge and technical expertise. During this module, we will survey histories of gender and labour, including histories of discrimination and unattributed work. Key historical figures include Ada Lovelace (analytical engine, 1840s), six ENIAC programmers (1940s), Beatrice Worsley (Computation Centre at Toronto, 1950s and ’60s), eleven human computers at NASA (1950s), Navajo manufacturers at Fairchild Semiconductor (1960s and ’70s), and Lynn Conway (IBM and elsewhere, 1960s to present). After reading an article by historian Marie Hicks, we will focus on Powers-Samas and the “Powers Girl” advertisements as a case study for discussion. Talking about the myth of lone male inventors will help us to understand how technological shifts may promise change while keeping norms, such as gender and labour norms, intact.
Jan 10 (R): The “Powers Girl” Ads (Case Study)
Before class, please read “Only the Clothes Changed: Women Operators in British Computing and Advertising, 1950–1970,” by Marie Hicks (2010). As you read, note how visual representations of women in the profession not only played a role in the emergence of electronic office computing but also affected the conditions of operating, programming, and managing computers.
Log Entry for Jan 10 (R): The Factory without the Shop Floor
Hicks writes, “Business customers wanted the efficiency and cost benefits of turning the office into a factory without the negatives associated with the shop floor, working class labor, and labor organization” (2010: 4). Following today’s discussion of “Labour and Gender in Computing,” please use 50-100 words to explain one way the “Powers Girl” ad (see Hicks, Figure 1a) addressed Powers-Samas customers. To get started, you might describe what you see (and don’t see), identify important aspects of the ad, consider the audience and context, articulate the ad’s overall message and framing, and supplement Hicks’s interpretation. But ultimately, you only need to write about one of the ad’s rhetorical strategies.
Notes and Slides for this Module
The notes and slides for this module are online. Key terms in the module are gender and labour, and important questions resulting from the module are, “How are norms, habits, and labour conditions reinforced despite, and even through, technological change? What are the risks of assuming technological change, including the rise of the internet, will always bring about social good?”
The Desire for Immediate Interfaces #
Many technologies, including software, are designed with invisibility or user-friendliness in mind, but their interfaces will always involve some sort of mediation, from typing, swiping, and talking to glitches, buffering, and updating. Interfaces are spaces or modalities of engagement between two or more systems or entities (e.g., a keyboard and screen); they help turn this (input) into that (output). During this module, we’ll learn about the contributions of “correction fluid” (e.g., Liquid Paper) to designing the “modes” of present-day computer interfaces, survey several popular computer interfaces (such as batch processing, command lines, TUIs, WYSIWYGs, GUIs, WIMPs, skeuomorphs, and ubicomp), and consider a few experimental ones by artists such as jodi and Erik Loyer. Then we’ll spend some time on text editing and word processing in particular, with WordStar 4.0 (an application that gained traction in the 80s) as our case study. I’ll ask you to read a piece by critic Matthew Kirschenbaum and begin composing with a text editor in a syntax called “Markdown.” Talking about the desire for immediate interfaces will help us to understand why software is so often approached through metaphors and icons (and not as stuff).
Jan 14 (M): WordStar 4.0 (Case Study)
Before class, please read “Software, It’s a Thing,” by Matthew Kirschenbaum (2014). As you read, note how Kirschenbaum describes software as “stuff” that is not only used but made, claimed, and preserved, and as “cruft” that’s not some “intangible, ephemeral trellis of code,” but an artifact accumulating and executing layers of history. We’ll apply Kirschenbaum’s interpretation of software to WordStar 4.0 (in particular) and to word processing and text editing (in general).
Log Entry for Jan 14 (M): “It Looks Like You’re Writing a …”
We might add “software as assistant” to Kirschenbaum’s list. People expect software not just to help them but to work–to both act and function. Please explain in 50-100 words the most important task you think a computer and its interface should help people to perform successfully. In doing so, please detail as thoroughly as possible how that task is mediated. It may help to consider this lecture about the history of “modes,” “modeless interfaces,” and “modal screens.” You might also compare computer performance with how people perform (or would perform) the same task (e.g., helping you to find, write, calculate, or store something). After all, tech is entwined with occupations and labour. (Recall material, including the Hicks article, for previous modules. If you enjoy histories of labour and computing, then you might also find the 1957 film Desk Set of interest.)
Jan 17 (R): Markdown (Workshop)
Before class, please identify and install a text editor to use for the rest of the semester. For what it’s worth, I recommend Sublime Text. You may also wish to review this Markdown cheat sheet by Patrick J. Bartlein. You’ll learn Markdown (MD) during workshop. It is intended to make writing software-agnostic (without format dependencies). Once you produce an MD file, you can convert it into HTML, PDF, or DOCX, for instance. More on file conversion later in the course.
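To preview what “converting” a Markdown file means, here is a toy sketch that maps a tiny subset of Markdown (headings, bold, italics) to HTML by rule. It is purely illustrative; in practice you would use a dedicated converter such as Pandoc, which we’ll cover later in the course.

```python
import re

def md_to_html(md: str) -> str:
    """Convert a tiny subset of Markdown to HTML.
    A toy sketch only; real conversion uses a tool such as Pandoc."""
    html_lines = []
    for line in md.splitlines():
        # ATX headings: one to six leading '#' marks
        m = re.match(r"^(#{1,6})\s+(.*)$", line)
        if m:
            level = len(m.group(1))
            line = f"<h{level}>{m.group(2)}</h{level}>"
        else:
            # handle **bold** before *italic*, so the double marker wins
            line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
            line = re.sub(r"\*(.+?)\*", r"<em>\1</em>", line)
            if line.strip():
                line = f"<p>{line}</p>"
        html_lines.append(line)
    return "\n".join(html_lines)

print(md_to_html("# Log Entry"))  # → <h1>Log Entry</h1>
```

The point of the exercise: a Markdown file is plain text plus a few conventions, which is exactly what makes it software-agnostic.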
Log Entry for Jan 17 (R): Programs with Attitude
Kirschenbaum says software is many things; one of them is craft. Please use 50-100 words to contrast your experiences with a text editor to those of a word processor you’ve used in the past. As you do, feel free to assess the “personalities” and interfaces of your selected software. What is it like to interact with it? To write with it? To format and save things with it? (In the process, please identify the software you’re using: for example, maybe you’re using Sublime and Microsoft Word.) It may help to consider a design perspective: interpret the interface’s use of colour (background and foreground), boundaries (windows and lines), highlighting (cursor use and search results), structure (menus and layout), space (assumed screen location of elements), or sequencing (how the interface guides use). But don’t try to cover all of these features. Pick maybe one or two; after all, you have only a few words. To ground your writing, you might perform the same task on both a text editor and a word processor and then contrast the experiences and “personalities.”
Notes and Slides for this Module
The notes and slides for this module are online. Key terms in the module are mediation and mode, and important questions resulting from the module are, “Even though interfaces are forms of mediation, why and under what assumptions do people want them to be invisible? When should interfaces interrupt us? When should they be transparent?”
The Investment in Neutral Instruments #
Technologies require standards to function. Geoffrey C. Bowker and Susan Leigh Star (1999) describe standards as sets of agreed-upon rules for production that span communities and are deployed to render things interoperable. Standards are also enforced by committees or legal bodies, and they frequently compete with other standards. Most important, standards are vehicles for the ostensible neutrality of tech: once a technology is standardized, people can point to it as a “disinterested” instrument for observation or decision-making. It’s worthwhile, then, to consider how values are baked into this process. How are standards used to plan change? How do they become authoritative? By studying examples such as QWERTY, DOS, VCR, and MP3 during this module, we’ll engage the cultural dimensions of standards, and then we’ll turn to our case study: Shirley Cards (ca. 1954), which are used to standardize skin tones in photography. During workshop, we’ll continue working with photographs, converting page images into discrete characters (recall movable type) using “optical character recognition” (OCR), which renders visual material “machine-readable.” OCR is one reason why people can search digitized print texts, and it is often considered to be a banal labour process akin to copying. Talking about investments in neutral instruments will help us to consider how standards, such as Shirley Cards, and conversion techniques, such as OCR, work to build trust in processes while also shaping them, their output, and attendant habits of use.
Jan 21 (M): Shirley Cards (Case Study)
Before class, please listen to “How Kodak’s Shirley Cards Set Photography’s Skin-Tone Standard,” by Mandalit del Barco (National Public Radio, 2014) and featuring Concordia professor Lorna Roth. As you listen, consider how colour and beauty are normalized through the cards, and how the cards changed over time. During class, we’ll draw upon Roth’s “Looking at Shirley, the Ultimate Norm” (2009) for examples and discussion.
Log Entry for Jan 21 (M): Picture Processing
Photography is often associated with “capturing” or documenting reality, but (as we’ve seen) it’s processed. And now, of course, image filters are ubiquitous forms of processing. Please take two photographs of an object, one without a filter and one with a filter of your choice. (You might add a third image here: an auto-corrected or auto-enhanced version of your photograph.) Then, following today’s discussion of standards and Shirley Cards, use 50-100 words to state whether you used the filter to “correct” or “transform” the photo. Were you trying to “fix” the image (“correction”) or make it look unique (“transformation”)? Please also describe the specific values people may associate with your selected filter. To get started, you might consider temperature, saturation, contrast, exposure, and aging (“vintage”) effects, and how the filter affects aesthetics, gives the photo a (fabricated) sense of history, highlights objects, manipulates colours, and/or serves some social function (e.g., a joke or distortion). Be sure to include the two or three images in your log. Also, an interesting tidbit for you: in a 2015 study of 7.6 million photos uploaded to Flickr and Instagram, Saeideh Bakhshi et al. at Yahoo! Labs found that filtered photos are 21% more likely to be viewed and 45% more likely to be commented on by consumers of photographs. (We’ll talk about such attention economics later in the course.) They also found that warm temperature, higher contrast, and higher exposure increase chances of receiving views and comments.
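To make “processing” concrete, here is a toy sketch of what a “temperature” or “contrast” slider does to a single (R, G, B) pixel. The shift and factor values are invented for illustration; real filters are more elaborate, but the principle is the same: photographs are numbers being adjusted by rule.

```python
def warm_filter(pixel, shift=20):
    """Nudge an (R, G, B) pixel toward warmer tones by raising red
    and lowering blue; a toy stand-in for a 'temperature' slider."""
    clamp = lambda v: max(0, min(255, v))
    r, g, b = pixel
    return (clamp(r + shift), g, clamp(b - shift))

def adjust_contrast(pixel, factor=1.2):
    """Scale each channel's distance from mid-grey (128);
    a toy stand-in for a 'contrast' slider."""
    clamp = lambda v: max(0, min(255, int(v)))
    return tuple(clamp(128 + factor * (c - 128)) for c in pixel)

print(warm_filter((100, 100, 100)))  # → (120, 100, 80)
```

Note that mid-grey passes through the contrast function unchanged; everything lighter gets lighter and everything darker gets darker, which is one reason “higher contrast” photos read as more dramatic.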
Jan 24 (R): Optical Character Recognition (Workshop)
McPherson Library contains 400+ books under the Library of Congress’ “Internet social aspects” subject heading, and not all of them are available electronically. Before today’s class, please visit the stacks in McPherson Library, find an interesting book on the subject of “Internet social aspects” (try that link or visit, say, HM851 in the stacks), take a “book selfie” (a photo of you holding the book), pick a page in the book (preferably one with a lot of text), and then take at least three photos of that one page. (You might experiment with filters, framing, and your flash.) Bring the image files for these photos to class on Thursday. (If you need help taking or sending photos, then see me or a TA.) In workshop, you’ll experiment with converting these photos into machine-readable text (TXT, to be exact). The book selfie simply proves you were in the stacks (not that I’m a suspicious person).
Log Entry for Jan 24 (R): Q i-jtb the Raven
After encountering OCR hiccups such as “Q i-jtb the Raven” (“Quoth the Raven,” from Edgar Allan Poe’s poem, “The Raven”), Ryan Cordell suggests “we might think of OCR as a compositor setting text in a language it does not comprehend–as we know compositors sometimes did in the printing house–copying letters and words by their form rather than their sense” (2017: 196). Another common OCR error is confusing “arms” with “anus.” Please further this inquiry into what OCR recognizes but does not comprehend by first including and briefly describing in this log entry the “book selfie,” page images, and OCR results from your recent visit to the McPherson stacks. (No need to analyze the images; just include and describe them.) Next, please convert at least one more text; this text should use handwritten content (print, not cursive), a non-Roman alphabet, an experimental or old style typeface, or a layout with more than one column. I recommend picking a text with which you’re already familiar; you could also create your own text. (If you need help getting started, then try one of the URLs circulated during class.) Then use 50-100 words to describe the accuracy of the results of this conversion, with an emphasis on one particular factor you believe is important to the processing of page images into machine-readable text. For all texts referenced in this entry, please provide the title, author(s), and publication date of the text you’re converting and also embed or reference both the image files (input) and the text files (output).
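Cordell’s “form rather than sense” point can be sketched with a toy substitution table. The confusion pairs below (e.g., “rn” read as “m,” the long-s of older typefaces read as “f”) are illustrative stand-ins, not the behaviour of any real OCR engine; the point is that shape-alike glyphs, not meanings, drive the errors.

```python
# Shape-based confusions: the machine copies letterforms, not sense.
# These pairs are illustrative examples, not a real engine's rules.
CONFUSIONS = {"rn": "m", "cl": "d", "ſ": "f"}

def simulate_ocr(text: str) -> str:
    """Copy text 'by form rather than sense' by applying
    shape-alike substitutions. An illustrative toy only."""
    for seen, read in CONFUSIONS.items():
        text = text.replace(seen, read)
    return text

print(simulate_ocr("modern"))  # → modem
```

Try your own page images with this in mind: when a result looks wrong, ask which letterforms the engine might have conflated.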
Notes and Slides for this Module
The notes and slides for this module are online. Key terms in the module are standards and processing, and important questions resulting from the module are, “How and under what assumptions do standards appear unbiased while also determining defaults and producing errors? How can standards be changed through research, cultural work, and participation in communities and systems?”
The Rhetoric of Raw Data #
Despite common parlance, data is neither taken nor given. It is produced. And it is used regularly in art, criticism, storytelling, and journalism. Consider Francesca Fionda’s data-driven investigations for The Discourse, Ryoji Ikeda’s datamatics, and Catherine D’Ignazio and Lauren Klein’s data feminism, for instance. If data is always cooked, then a few visits to the kitchen, if you will, may tell people a lot about the dataset at hand and how it got that way. A systems approach to data moves beyond the rhetoric of data (“raw” or “big”) to attend to its preparation, staging, processing, and expression. During this module, I’ll ask you to read a piece by Canadian artist Jer Thorp on data as a system. Alongside Thorp, we’ll discuss The Library of Missing Datasets by artist and researcher Mimi Onuoha as our case study. It will inspire several prototyping and graphical expression exercises in workshop. Talking about the rhetoric of raw data will help us to understand the lived, cultural realities of data, from prep and production to uses and hacks.
Jan 28 (M): The Library of Missing Datasets (Case Study)
Before class, please read The Library of Missing Datasets, including the overview, by Mimi Onuoha (2016), as well as “You Say Data, I Say System,” by Jer Thorp (2017). As you read, reflect on the implications of Onuoha’s argument: “If you haven’t considered the collection process, you haven’t considered the data.” Note, too, how Thorp parses the data production process into collection, computation, and representation.
Log Entry for Jan 28 (M): Blank Spots
Thorp writes, “Whenever you look at data–as a spreadsheet or database view or a visualization, you are looking at an artifact of … a system.” For this entry, I’m asking you to devise a system in the absence of data. Please review Onuoha’s list of missing datasets and imagine another dataset for it. Here, you’ll be making an argument for something that could or should exist. In your log, use 50-150 words to name the missing dataset and then describe its data types (integer, Boolean, character, date, time, or location, e.g.), intended audience, form of expression (graph, map, chart, table, or sculpture, e.g.), and primary purpose. Feel free to sketch or diagram these components (as images in your log), and to note where applicable any obstacles to your missing dataset’s existence. To get started, you might begin with an issue for which data is needed and then work backwards, reverse-engineering its use to articulate its components or “ontology.”
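One way to draft the “ontology” this entry asks for is to sketch a single record of your imagined dataset. The dataset, field names, and values below are invented purely for illustration; swap in the components of your own missing dataset.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class MissingDatasetRecord:
    """One hypothetical record for an imagined 'missing dataset.'
    Dataset, fields, and values are invented for illustration."""
    record_id: int       # integer
    verified: bool       # Boolean
    category: str        # character data
    observed_on: date    # date
    location: tuple      # (latitude, longitude)

# A sample record: one imagined observation near UVic.
sample = MissingDatasetRecord(
    record_id=1,
    verified=True,
    category="noise complaint",
    observed_on=date(2019, 1, 28),
    location=(48.46, -123.31),
)
print(sample.category)  # → noise complaint
```

Working backwards from a sample record like this often exposes the obstacles to the dataset’s existence: who would collect each field, and at what cost?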
Jan 31 (R): Choropleth Maps (Workshop)
Before class, imagine a map of UVic that you believe should exist, and for whom it should exist. During workshop you’ll use tracing paper to sketch data overlays (with legends) that turn UVic campus maps into choropleths, which express data variables through shaded or patterned geographic areas. For an example, see this choropleth map of median household total income in Canada in 2015. You’ll pair up to exchange and compare your overlays.
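The core move of a choropleth, binning a continuous value into a handful of shade classes for the legend, can be sketched like this. The break points and campus values below are invented for illustration; your overlay will do the same work with tracing paper instead of code.

```python
def shade_class(value, breaks=(20000, 40000, 60000, 80000)):
    """Assign a value (e.g., median income) to a legend bin:
    0 = lightest shade, len(breaks) = darkest.
    Break points here are invented for illustration."""
    for i, upper in enumerate(breaks):
        if value < upper:
            return i
    return len(breaks)

# Each mapped area gets one shade; darker means a higher value.
areas = {"Library": 15000, "Quad": 45000, "Gym": 90000}
legend = {name: shade_class(v) for name, v in areas.items()}
print(legend)  # → {'Library': 0, 'Quad': 2, 'Gym': 4}
```

Notice how much rhetorical power sits in the `breaks` argument: move a break point and an area changes shade, and with it the story the map tells.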
Log Entry for Jan 31 (R): Traces and Overlays
As you make your choropleth maps of UVic, please document the process by photographing the overlays made with tracing paper. With the overlay, please provide one sample data record and include it in your log. (I’ll help you to structure and populate a sample record.) Then, in 50-100 words, use your overlay to tell a very brief story about how you or someone else navigated the campus with your map. Please tell your story in the past tense (as if the event already happened), in either first- or third-person (e.g., “I moved” or “They searched”). Don’t forget to give your story a title.
Notes and Slides for this Module
The notes and slides for this module are online. Key terms in the module are systems and absences, and important questions resulting from the module are, “How do we attend to the processes and systems through which data is ‘cooked’? What do we learn from identifying and prototyping missing datasets?”
The Appeal to Interactivity #
The word “interactive” is used quite often to describe digital stuff: interactive fiction, interactive visualization, interactive media, interactive Netflix movie . . . Lev Manovich (2001) demonstrates how this tendency follows in a long history of externalizing and objectifying mental life. Notice how, for instance, links on a website resemble associations or references we might follow during a conversation or while reading. But, of course, a link is encoded: you don’t have to wonder too much about what the author was thinking or implying; you simply click or touch a reference. During this module, we will focus not on programmed interactions but rather on the subjunctive: what has not happened, or what has not yet been programmed. I’ll ask you to read an essay by sci fi author and critic Joanna Russ and then turn to “design fiction,” specifically a video by designer Julian Bleecker. Russ and Bleecker demonstrate how science fiction overlaps with the work of design and development, in part by addressing how people like to entwine sci fi with prediction and even prophecy. (Minority Report, based on the work of sci fi writer Philip K. Dick, is a classic example here.) We’ll also have two workshops during this module. For the first, you’ll experiment with the subjunctive through the fanzine (or “zine”) format, which has a history in science fiction, dating back to at least the 1930s, with projects such as The Comet, published by the Science Correspondence Club. This zine workshop will ask you to prep for the final project, a prompt for which I’ll circulate in class. The workshop will also nudge you to imagine what’s neither possible nor impossible (see Russ) through a pedagogy of technology after the internet. During the second workshop, you’ll blend audio production with design fiction to conjecture about conversations between people and artificial intelligence.
Talking about the appeal to interactivity will help us to imagine new subject matter before baking links and scripts into objects.
Feb 4 (M): Design Fiction Vol7 (Case Study)
Before class, please read “Speculations: The Subjunctivity of Science Fiction” (see me for ID and passcode), by Joanna Russ (1973). As you read, note how Russ distinguishes between science fiction and fantasy. Also note her argument that calling sci fi “prophetic” is, for numerous reasons, problematic and misleading. If you’re new to literary criticism, then focus on how Russ and Samuel Delany, who is quoted at the beginning of the essay, describe sci fi and how it differs from fantasy. Don’t worry about the references to other writers, such as Genet, or to field-specific terms, such as “post-realism” and “Brechtian alienation.” After reading Russ, we’ll watch Bleecker’s Design Fiction Vol7 (2011), and then you’ll begin to imagine your zine (the final project for this course).
Feb 4 (M): Wizard Zines (Workshop)
Wizard Zines (2015-18), by software developer Julia Evans, are about “learning hard things and doing hard things in a practical and friendly way.” Evans’s focus is programming and, more generally, technology and culture. We’ll use her work as inspiration for our own zines about tech and culture, and to consider a pedagogy of tech after the internet.
Log Entry for Feb 4 (M): What Hasn’t Happened?
We know that zines have sci fi histories, but how could sci fi influence other sorts of zines? Please take a moment to read a Wizard Zine or two by Julia Evans. Note how Evans combines technical stuff with cultural issues and humour, and how most of the content is written and drawn manually. Now please use 50-100 words to describe potential topics for your zine this semester. Try including at least three topics (but no more than ten) in your list. For each topic, offer a brief remark (it doesn’t need to be a complete sentence) about how (and even why) the topic engages what has not happened. In the overlaps of technology and culture, or along the theme of unlearning the internet, what do you want to see in the world? And what might you tell or share to help people get there? Please don’t feel like your zine needs to be as technical as any given Wizard Zine. I only ask that it include some dimension of instruction on the topic of technology and culture or unlearning the internet. You might, for example, consider a step-by-step approach, such as “steps to get a cat picture” in Evans’s Networking! Ack! (2017). For details, see the prompt for the final project, and don’t hesitate to contact me or any TA with questions about the zine. (This workshop might carry over to February 7th. Also, if you’re interested in the overlaps of computing with humanities research, especially history, then you might find The Programming Historian of interest. It is in many ways a zine. Honest!)
Feb 7 (R): Audacity (Workshop)
Audacity is a free, open-source application for recording and editing audio. During this workshop, we’ll cover the basics. Before class, please install it as well as the LAME encoder. If you encounter any issues with installation, don’t hesitate to drop a line. You’re also welcome to use the audio app of your choice; just ensure you can record and edit audio and also export MP3s.
Log Entry for Feb 7 (R): “I Don’t Know About That”
For this entry, I’m asking you to speculate about the use of AI-driven “digital assistants” (e.g., Alexa) for social activity, academic life, or undergraduate research. At the moment, most voice-activated applications are about delegating tasks to a machine (e.g., telling the time, playing a song, or getting the news), hence “assistant.” But what if they could be more social, or more relational, or more pertinent to university life and research? Please compose no less than 30 seconds and no more than 90 seconds of audio to demonstrate such an application (or “skill”). The audio should include a verbal prompt for the machine and its verbal response to a person, and it should demonstrate awareness of cultural factors such as voice and labour (see previous modules). Feel free to include a transcript and brief description (~25 words) of your audio, and also feel free to enlist voice actors (e.g., a concatenated voice “bot” or someone else in class). If you do have another human speak in the recording, then please include a signed release from them (see me for details). Don’t forget to reference or embed the audio in your log.
Notes and Slides for this Module
The notes and slides for this module are online. A key term in the module is subjunctive, and important questions resulting from the module are, “How does the subjunctive shape the design of interactions? How is engaging what hasn’t happened also a way to imagine what we want to see in the world?”
Writing and Publishing Workshops #
We’ll slow things down and focus on polishing the writing for your log, getting feedback on it, and preparing it for publication online. (Note that you’re not required to render your log discoverable.) We’ll cover file conversions, file transfer, access restriction, style and encoding, basic metadata, use of evidence (or documentation), clarity and specificity, and awareness of audience and context. Each of these practices is central to the composition of critical media, broadly understood.
Feb 11 (M) and 14 (R): Conversion, Encoding, Transfer, and Feedback (Workshop)
The first half of this workshop will be dedicated to converting your Markdown logs into HTML + CSS files with basic metadata, transferring them using file transfer protocol (FTP), and restricting access to them via .htaccess. (I will assume no familiarity with this stuff.) The second half will be dedicated to revision: getting verbal and written feedback from your peers on how to improve your work, attend to context, and address your audience. In this case, the primary audience for your logs consists of fellow UVic undergraduates, not just me.
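As a preview, restricting access with `.htaccess` typically looks like the fragment below. The realm name and file path are placeholders you would replace with your own (I’ll walk you through creating the password file in class).

```apacheconf
# .htaccess — restrict this directory to password-holding visitors.
# "DHum 150 Logs" and the file path below are placeholders.
AuthType Basic
AuthName "DHum 150 Logs"
AuthUserFile /home/username/.htpasswd
Require valid-user
```

Dropping this file into a directory on most Apache-based web hosts prompts visitors for a username and password before any page in that directory loads.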
The Log Is Due (Feb 15)
Your log is due for assessment this week. Assessment will begin at noon on Friday, February 15th. There is no need to submit anything to me or a TA. In class, we will ask you to provide a URL (open or restricted) for the log, which we will then visit to assess your work. Assessment will follow the rubric provided in the course outline.
Notes for this Module
The notes and handouts for this module are online.
Reading Break #
Feb 18 (M) and 21 (R)
Step away from DHum 150 and take care. Reading break is not a myth, or at least it shouldn’t be.
Choosing Your Own Adventure #
It’s in the internet! Choices abound. And the choices we encounter require an incredible amount of design and planning. They may not follow the habits or conventions we typically associate with most books, either. People who make websites and write material for the web shape how their readers navigate content, and they try to anticipate the choices people will make. Put this way, content and paths through it must both be engaging to keep people from bouncing (that is, leaving the story or site), and readers may have less agency than terms such as “interaction” or “choose your own adventure” imply. During this module, we’ll take a look at how responsive content and choice are designed, with a guest lecture and workshop by Ian Michael Waddell, who will point you to work by Brendan Patrick Hennessy and Sam Kabo Ashwell. Talking about the notion of “choosing your own adventure” will provide us with opportunities to consider the structural dimensions of the internet alongside the content we find on it.
Feb 25 (M): Creative Encoding
During this guest lecture and workshop by Ian Michael Waddell, we’ll survey works of interactive fiction and engage the challenges of designing both choice and responsiveness. During class, we’ll read excerpts of “Small Child in Woods,” published by Brendan Patrick Hennessy in You Will Select a Decision (1987-2015), and we’ll refer to “Standard Patterns in Choice-Based Games” by Sam Kabo Ashwell (2015). We’ll consider some ways to structure responsive content (including stories), too.
Log Entry for Feb 25 (M): A Storymap
At what point does a story become a game, or a game become a story? This workshop about storytelling and fiction builds on our discussions about immediate interfaces and appeals to interactivity. We’ll give you a sheet with sixteen icons on it. The icons will be based on events in “Small Child in Woods,” which we will read together in class. You’ll arrange the icons into a narrative structure on a sheet of paper, drawing arrows to link the icons. You can choose (or create) any narrative structure or pattern you would like, but in your log you’ll have to justify why you made the choices you made. What each icon represents is entirely up to you: some meanings will be more obvious than others, but feel free to experiment. Please take a photo of your “storymap” (the icons connected by arrows) and embed it in your log. Then write 50-150 words explaining the narrative structure represented by your storymap, which narrative structure you chose (see Ashwell), what the narrative structure allows (or doesn’t allow) the reader to do or see, and why. Feel free to reference Hennessy’s and Ashwell’s work in the process, and see Ian’s slides for some narrative structures (care of Ashwell) that you might want to try: “time cave,” “gauntlet,” “branch and bottleneck,” “quest,” and “loop and grow.” (This prompt was written by Ian Michael Waddell.)
Notes and Slides for this Module
The notes and slides for this module are online. A key term in the module is structure, and important questions resulting from the module are, “How is responsiveness designed, and how is movement through responsive content structured? What can people see and not see as they make their way through structured content? What, for instance, do designers and writers decide to hide from readers, and why?”
The Assertion of Autonomous Algorithms #
Algorithms are steps or rules for performing tasks; they are implemented by software (but they can be implemented by people, too). They contribute to decision-making, meaning they are agents of a sort. But calling them “autonomous” can be dangerous. That adjective may suggest algorithms alone determine our situation, or it may bypass a key question, namely how people are or should be held accountable for algorithms. During this module, we’ll return to a few issues from previous modules, including desires for immediate interfaces and investments in neutral instruments. I’ll ask you to read work on algorithmic accountability by reporter Megan Rose Dickey and by researchers Robyn Caplan, Joan Donovan, Lauren Hanson, and Jeanna Matthews. Through these readings, we’ll survey a few examples of machine bias (e.g., the commercial risk-assessment software COMPAS) and algorithms gone wrong (e.g., Microsoft’s bot Tay). Then we’ll ground accountability in a fictitious scenario where an AI (“Professor Bot”) is used to mark student papers. During workshop, we’ll try something called an “algorithm audit,” which researchers Christian Sandvig, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort describe as a way “researchers might look inside the black box of the algorithm to pursue knowledge about pressing public problems” (2014: 8). Talking about the assertion of autonomous algorithms will help us to understand their agency in everyday life without assuming they are somehow self-ruling, detached from production and maintenance, or outside society and culture.
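To make the “steps or rules” definition concrete, here is a toy sketch (hypothetical, and not part of the course materials or the actual Professor Bot scenario): a tiny essay-marking function whose every decision was made by a person and can therefore be audited and contested.

```python
# A hypothetical, deliberately crude essay marker. Every threshold below is
# a human choice: someone decided that length is a proxy for quality, and
# someone can be asked to justify that decision.
def toy_marker(essay: str) -> int:
    """Return a mark out of 10 based only on word count."""
    words = len(essay.split())
    if words >= 500:
        return 10
    if words >= 250:
        return 7
    return 4

# Auditing the rule exposes its bias: a brilliant 200-word essay scores
# the same as a careless one.
print(toy_marker("word " * 300))  # prints 7
```

Nothing here is “autonomous”: the function executes steps its author chose, which is precisely the question of accountability the readings raise.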
Feb 28 (R): Algorithmic Accountability
Before class, please read “Algorithmic Accountability,” by Megan Rose Dickey (2017), and “Algorithmic Accountability: A Primer,” by Robyn Caplan, Joan Donovan, Lauren Hanson, and Jeanna Matthews (2018). As you read, take notes on how algorithms are defined, how they make decisions, how they are audited, and how people are held accountable for them. You might also keep in mind a few examples of machine bias and algorithms gone wrong. (P.S.: How’s the zine coming along? Any questions or concerns? We’ll take a second to discuss.)
Log Entry for Feb 28 (R): Professor Bot
Your goal for this log entry is to articulate what “algorithmic accountability” means in the case of Professor Bot, a fictitious scenario we performed during class. Please gather in groups of no more than five people to respond to the scenario and then (on a whiteboard in the classroom) define through examples “transparency,” “qualified transparency,” and “ethical and social responsibility” in the case of Professor Bot. Your answers should directly address these questions about the scenario: “Transparency of what, exactly?” “Qualified transparency according to whom, exactly?” “Whose ethical and social responsibility, exactly?” And, “Ethical and social responsibility determined by whom, exactly?” You might also consider how various groups or participants in this fictitious scenario would respond to your answers. What obstacles to accountability might these actors emphasize, what concerns might they have, and how might they (or the bot) be biased? Be sure to identify these actors in the process, and to be as specific as possible. Finally, in your log, please use 100-150 words to share your group’s definitions and examples of “transparency,” “qualified transparency,” and “ethical and social responsibility” in the case of Professor Bot and to reflect on what you learned about algorithmic accountability through this scenario. You might even define “algorithmic accountability” in your own words. For the log, feel free to co-author the descriptions with your group, but the reflection should be written by you alone. Also be sure to include the first names of your group members (for the sake of attribution), together with a photograph of the answers you wrote on the whiteboard.
Mar 4 (M): Collaborative Algorithm Audit (Workshop)
This workshop will draw from Sandvig et al.’s “collaborative audit” design for studying algorithms and identifying discrimination on internet platforms. In such a design, people act like computer programs to detect “misbehaving” algorithms. We’ll perform a low-tech version of collaborative auditing; it will not require knowledge of code or the use of a complex tool. We’ll also address the ethical problems of “data injection” for research purposes, and we’ll ask what it means to regulate toward auditability (as opposed to regulating for transparency or against misbehaviour).
Log Entry for Mar 4 (M): Personalization and Recommendation
Sandvig et al. identify five types of algorithm audits: code, noninvasive user, scraping, sock puppet, and collaborative or crowdsourced. You’ll conduct a version of the fifth type for this log entry. Please identify two other like-minded people in class and team up with them to constitute a collective brain for this exercise. Identify an issue from the schedule on the course website that’s most important to you as a group. For the next three weeks, each member of your group will study this issue via routine Google searches (google.ca). In your group, create three search terms corresponding to three different search types relevant to your selected issue: a definitional or descriptive search (e.g., searching for the definition of a particular concept or term relevant to your issue), a historical search (e.g., searching for the history of a specific technology, debate, or area of research relevant to your issue), and a news search (e.g., searching for current events or writing relevant to your issue). Test your search terms in Google before you settle on them. Then, in your log, use 100-200 words to note the issue you selected, provide the search terms (definitional, historical, and news) you’ll be using for Google searches, explain why you’re using these search terms, and share what you expect from the results during the next three weeks. What will you learn from them? How might they change? What factors might shape them? How might they differ across searches by your group members? You should use the spreadsheet I provided to record your results for this collaborative audit.
The spreadsheet asks for your name, date of search, search terms (definitional), search terms (historical), search terms (news), results (file names for screen grabs), device used, browser used, browser mode (default, incognito, private), search type (images, news, all), geographic location (district / neighbourhood), whether you’re logged into Google, and whether your history, cache, and cookies were cleared. Please keep the search terms and search type consistent for the entire process. I recommend changing the browser, browser mode, device used, log-in status, and location from time to time to see how the results change. Each member of your group should search with the same terms at least seven times between now and March 28, when you’ll communicate in your log the results of your audit. To make your life easier, I include recommended search dates in the spreadsheet. On March 28, each group should submit one spreadsheet (with results gathered by all three members), and your results should be tracked using screen grabs stored in a subfolder (e.g., “audit”) for your course log / site. In the screen grabs, try to capture at least five search results. Name the image files for your screen grabs according to this convention: auditDayMonthSearchType.png / .jpg. Here are two example file names: audit4MarchD.png and audit11MarchN.png. For search type, use “D” for definition, “H” for history, and “N” for news. As you conduct this work, you might find “The Anatomy of a Large-Scale Hypertextual Web Search Engine” by Sergey Brin and Lawrence Page (1998) of interest. Also see Google’s own description of how search works.
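The file-naming convention above can also be generated mechanically. Here is a small sketch (an assumption on my part: this helper is illustrative only, not part of the course tooling or the spreadsheet) that builds names like audit4MarchD.png:

```python
# Map each search type to its one-letter code from the naming convention.
SEARCH_TYPE_CODES = {"definitional": "D", "historical": "H", "news": "N"}

def audit_filename(day: int, month: str, search_type: str, ext: str = "png") -> str:
    """Build a screen-grab file name following auditDayMonthSearchType.png."""
    code = SEARCH_TYPE_CODES[search_type]
    return f"audit{day}{month}{code}.{ext}"

print(audit_filename(4, "March", "definitional"))  # prints audit4MarchD.png
print(audit_filename(11, "March", "news"))         # prints audit11MarchN.png
```

Consistent file names make it much easier to sort and compare your group’s screen grabs at the end of the audit.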
Slides and Handouts for this Module
The slides and handouts for this module are online. Important questions resulting from the module are, “How do we determine who or what is accountable for the decisions algorithms make? Or how they shape decisions? What’s the difference between transparency and accountability?”
The Belief in Sourcery #
Code is frequently deemed the be-all and end-all of digital culture and computing. Some may call it the “source.” But, as Wendy Chun’s work (2011) demonstrates, a focus on code alone risks ignoring how, when, and where it’s executed. After all, code does not exist in a vacuum, and practices such as design and communication are fundamental to its interpretation. During this module, we’ll consider how code manages to be at once highly rational and deeply mysterious, hence “sourcery,” a term Chun coined to study how programs are “re-membered” or compiled. Sourcery is about the logic of magic: people may be wowed by whiz-bang effects without knowing how to explain their cause. But code is one component, not the “source,” of an entire process. And the context and content of this process matter, too. We’ll start this module by reading Paul Ford’s “What Is Code?”, and then we’ll turn our attention to the relationship between sourcery and consent. Here, an important issue is how to communicate ethically to people how their data is being processed and disseminated when they cannot see, let alone explain, the magic at play. Talking about sourcery will allow us to consider some ways to learn about code and its contexts without assuming everyone knows (or wants to know) how to program a computer.
Mar 7 (R): What Is Code?
Before class, please read “What Is Code?” by Paul Ford (2015). We’ll talk briefly about it before transitioning into a workshop. Ford’s article is pretty long. Please focus on sections 1, 2, 6, and 7.
Log Entry for Mar 7 (R): Design for Consent (Workshop)
In your log, please use 100-150 words to share your experiences performing two different roles in the “Interfaces and Consent” workshop. What did you learn about consent, interfaces, and data privacy? Then please communicate one important issue you believe designers (such as the AIRC designer in the scenario you performed) should consider when developing technologies (such as AI) with people’s consent in mind. Think, for instance, about licensing agreements that are never read. Is there a way to make such “contractual” processes more meaningful, or to clarify their implications for users? How do you provide a service while also protecting privacy and mitigating risks? To which services should people opt in (instead of opting out), when, and why? How are the social and cultural consequences of systems made of code and algorithms communicated to people who aren’t familiar with the technical dimensions of those systems? How should those consequences be communicated? (This prompt was written by Stefan Higgins and me.)
Handout for this Module
The handout for this module is online. Important questions resulting from the module are, “What can we learn from low-tech approaches? How are design and development processes that involve multiple components irreducible to code and programming?”
The Ideology of Landless Territory #
Mar 11 (M): Guest Lecture: Alana Sayers
Alana Sayers (Hupacasath and Alexander First Nations; UVic English) will join us to give a guest lecture. Prior to class, please read these selections from Network Sovereignty by Marisa Duarte.
Log Entry for Mar 11 (M): Terrain
The internet is often considered to be a neutral space, ungrounded from any social, cultural, or political context. However, as Marisa Duarte writes, “Conceptually, [the internet] represents a richly featured terrain, with the topographies of technical networks shaped by a mixture of the personal agendas, political will, mundane habits, and desires of individuals of those who create and use them” (32). Duarte also attends to the fact that no academic studies address the complex history of Indigenous relationships with Information and Communication Technologies (ICTs) because of the problematic and “unfortunate intellectual inheritance that Native peoples are premodern and anti-technological” (28). Please form small groups (three to five people) and respond to the following questions, engaging Duarte’s work as well as Alana Sayers’s talk along the way: What, in your view, are the power structures responsible for this lack of academic studies of (or this missing dataset for) Indigenous ICTs? What is the relationship between unequal access to the internet and colonial ideas of “progress”? And how can the internet both maintain and potentially disrupt hierarchies of race, class, and/or gender in society? Based on your discussions, Duarte’s research, and Sayers’s talk, please address one of these questions in a log entry of 100-200 words. (This prompt was written by Ria Karve.)
Notes for this Module
Important questions resulting from this module are, “How do land and infrastructure affect internet access? Development? How do people address, resist, and correct claims that the internet is landless?”
The Futures of Digital Labour #
Trebor Scholz (2013) describes the internet as both a playground and a factory. In this context, digital labour is usually unpaid labour. It may also be considered play. Some call it “immaterial,” too; however, it’s most certainly embodied, and it’s shaped by infrastructures and material conditions. This module begins with a brief history of digital labour by way of attention economics. I’ll survey how data (and thus value) was produced from sensual labour (such as seeing and listening) in the 20th century. Here, film, radio, television, and record industries are important. (Think of Nielsen ratings, for instance.) Then we’ll shift to more recent examples of digital labour, such as content moderation, goldfarming, crowdsourcing, friendship marketing, and microtasking, and we’ll discuss where these practices are (or might be) taking us. I’ll ask you to read an article by Lisa Nakamura (2009), watch parts of a documentary by Ge Jin (2010), and conduct a brief study of Amazon’s “Mechanical Turk,” the history of which dates back to an automaton chess player from the late 18th century. This work and, more generally, talking about digital labour will help us to not only foreground how (and for whom) digital technologies produce value but also consider how labour critiques function, or even happen, within today’s networked infrastructures.
Mar 14 (R): Goldfarming (Case Study)
Please read Lisa Nakamura’s “Don’t Hate the Player, Hate the Game” (2009) before class. As you read, please focus on how Nakamura defines and approaches racial discourse, labour, and goldfarming. I also recommend her talk, “Five Types of Online Racism and Why You Should Care,” as well as Ge Jin’s documentary, Gold Farmers.
Log Entry for Mar 14 (R): Mechanical Turk
Amazon Mechanical Turk (AMT) is a crowdsourcing platform where businesses (called “requesters”) contract workers (called “workers”) to perform individual human intelligence tasks (called “HITs”) that computers either cannot perform or do not perform well. HITs are categorized into seven types of work: information finding, verification and validation, interpretation and analysis, content creation, surveys, content access, and research. Some HITs require qualifications, which are determined for each worker by AMT’s HITs assessment system. AMT qualifications are based on HITs abandoned (tasks accepted but not completed in time), approved (task results approved by requesters), rejected (task results not approved by requesters), and returned (tasks accepted but left unanswered). Approved HITs earn “rewards” corresponding with U.S. dollars. A recent task-level analysis of AMT found that the median wage is ~$2 per hour, with only 4% of AMT workers earning more than $7.25 (the U.S. minimum wage) per hour (Hara et al. 2018). With this context in mind, please study AMT in class (I’ll provide you with access) and then use 100-200 words to do at least three things. First, please explain why AMT’s HITs system is a form of digital labour. Next, describe how workers accumulate “avatarial capital” through AMT’s qualifications mechanism. (Recall that Nakamura draws upon Edward Castronova’s research to argue that goldfarmers are unable to accumulate avatarial capital. They are thus unable to possess their own avatars.) Finally, detail at least one important AMT activity for which AMT workers are not or may not be compensated. (Identifying this unpaid work may require an interface analysis of sorts. How do workers traverse AMT to perform tasks and build their qualifications? Which parts of that traversal are uncompensated?)
As you write, feel free to comment on how AMT is both a factory and a playground, or on how digital labour affects people’s working conditions more broadly.
Notes and Slides for this Module
The notes and slides for this module are online. Important questions resulting from the module are, “How do large projects become individual microtasks? How do people accumulate data through microtasking platforms and games? Do they own that data? How are they compensated for their labour?”
Making a Zine #
Mar 18 (M) and 21 (R): Zine Workshops
These workshops draw almost entirely from the handouts referenced below and on the notes page. I also recommend reviewing the Booklyn Education Manual.
Log Entry for Mar 18 (M): Zine Forms and Materials
For this workshop on the forms and materials of zine-making, I provided a series of prompts, and you responded to them using sheets of paper folded into eight panels. You then folded a sheet into a book (instructions care of Booklyn). In your log, please include photographs of all your responses to the prompts. You should have at least five photographs in this log entry: two for the sixteen one-minute exercises (a photograph of each side of your paper), one for the comics exercise (a photograph of one side of your paper), one for the slidedeck exercise (a photograph of the other side of your paper), and one or two for the outlining exercise (a photograph of one or both sides of your paper). Please briefly contextualize these photographs by describing for your readers what you did during the workshop and how you responded to each of the prompts. Your writing for this entry does not have to be analytical; you simply need to describe what you did and how.
Log Entry for Mar 21 (R): Zine Content and Culture
Please provide one or two short paragraphs (totaling 200-300 words) describing your zine. For more instructions and guidance, see this worksheet.
Handouts for this Module
The handouts and workshop materials for this module are online.
The Fallacy of Extended Man #
Mar 25 (M): All Technologies Are Assistive
Before class, please read “All Technology Is Assistive: Six Design Rules on Disability,” by Sara Hendren (2017). Please also check out ScreamBody, by machine therapist Kelly Dobson, and Squeeze Machine, by animal scientist Temple Grandin.
Log Entry for Mar 25 (M): Designing for One
Rule five of six in Hendren’s essay is to “Design for One.” For this log entry, I’m asking you to do just that. Please take a moment to identify a person in your life for whom you’d design a device that (or so you assume) doesn’t exist. What would the device do? For our purposes, it should be intended for public use and address a specific yet subtle need. Perhaps it augments sensory experience, or expresses information, or fosters communication, or facilitates a task, or helps someone to identify other people with similar interests and needs. Don’t worry about whether the device would work, or how exactly it would function. This is a speculative exercise. Once you’ve identified the person and imagined the device, please use 100-150 words to describe the device as if it exists. In your description, state in the present tense what the device does, what subtle needs it addresses, and what experiences it affords. You can even give it a name, if you wish; and feel free to include a sketch of it. But please do not mention the person’s name in your entry or provide any identifying information about them. Focus your description on the device, even if you’re imagining it for a particular individual.
Slides for this Module
The slides for this module are online. Important questions resulting from the module are, “For decades if not centuries, technologies have been described as extensions of man. But this gendered and rather masculine formation of tech as power ignores how technologies also make us vulnerable; they also assist us in performing tasks and building relations. In fact, assistive technologies are often sites of innovation. What happens when we design assistive technologies for one person? Or let technologies ask questions instead of solve problems? Does tech need to be a tool? What if it’s a negotiation?”
Polishing Your Work #
Log for March 28 (R): Results of Your Audit
Congratulations! Your audit is complete! For this log entry, please use 100-200 words to identify and describe the two most interesting findings across your group of three. You might want to compare your search results with those of other people in your group, or note a surprise (a result you didn’t expect, or a result you expected but didn’t get), or share a curious change in the results over time. Along with your two most interesting findings, please point (that is, link) to your group’s spreadsheet as well as screen grabs of your results during the last few weeks. Don’t forget to upload your spreadsheets and image files, and feel free to embed an image or two as evidence of your interesting findings.
The Log Is Due (Mar 29)
Your log is due for assessment this week. Assessment will begin at noon on Friday, March 29th. There is no need to submit anything to me or a TA. In class, we will ask you to provide a URL (open or restricted) for the log, which we will then visit to assess your work. Assessment will follow the rubric provided in the course outline.
Handout for this Module
Here is the checklist for the second half of your log.
Zine Fest #
April 1 (M): Zine Fest!
We’ll have a zine fest during class. We will divide it into two rounds: 10:15-10:45am (first half of the alphabet) and 10:45-11:15am (second half of the alphabet). All you need to do is bring your zine (or even several copies, if you wish) and share it with others.
April 4 (R): FIN
We’ll review the course and the final assignment (the reflection), talk about our experiences during the zine fest, and conduct course evaluation surveys.
The Zine Is Due (Apr 4)
The zine is due for assessment by the end of class on Thursday, April 4th. Please deliver your zine to me in person that day.
The Log Is Due (Apr 16)
Your log is due for assessment (one last time) by noon on Tuesday, April 16th. Here’s the prompt. Please read it closely. There is no need to submit anything to me or a TA. In class, we will ask you to provide a URL (open or restricted) for the log, which we will then visit to assess your work. Assessment will follow the rubric given to you.
Here’s a list of all the stuff I’m asking you to study, consider, use, or prototype this semester.
- Ashwell, Sam Kabo. “Standard Patterns in Choice-Based Games.” These Heterogenous Tasks, 2015.
- Audacity: free, open source, cross-platform audio software
- Bakhshi, Saeideh, David A. Shamma, Lyndon Kennedy, and Eric Gilbert. “Why We Filter Our Photos and How It Impacts Engagement.” Association for the Advancement of Artificial Intelligence. Yahoo Labs, 2015.
- Bleecker, Julian. Design Fiction Vol7. Vimeo, 2011.
- Booklyn Artists Alliance. Booklyn Educational Manual. Brooklyn, NY: Booklyn Education Department, 2018.
- Bowker, Geoffrey, and Susan Leigh Star. Sorting Things Out: Classification and Its Consequences. Cambridge, MA: MIT Press, 1999.
- Brin, Sergey, and Lawrence Page. “The Anatomy of a Large-Scale Hypertextual Web Search Engine.” Stanford, CA: Stanford U, 1998.
- Caplan, Robyn, Joan Donovan, Lauren Hanson, and Jeanna Matthews. “Algorithmic Accountability: A Primer.” Data & Society, 2018.
- Choropleth Maps: thematic maps in which areas are shaded or patterned in proportion to the measurement of a variable that’s displayed
- Chun, Wendy Hui Kyong. Programmed Visions: Software and Memory. Cambridge, MA: MIT Press, 2011.
- Cordell, Ryan. “‘Q i-jtb the Raven’: Taking Dirty OCR Seriously.” Book History 20 (2017): 188-225.
- CSS 3: a style sheet language used for describing the presentation of a document written in a markup language such as HTML
- Davidson, Cathy. Now You See It. New York: Penguin, 2011.
- Del Barco, Mandalit. “How Kodak’s Shirley Cards Set Photography’s Skin-Tone Standard.” National Public Radio, 13 November 2014.
- Dickey, Megan Rose. “Algorithmic Accountability.” TechCrunch, 2017.
- Dobson, Kelly. ScreamBody. Massachusetts Institute of Technology, 1998-2004.
- Duarte, Marisa. Network Sovereignty: Building the Internet across Indian Country. Seattle: U of Washington P, 2017.
- Evans, Julia. Wizard Zines. Wizard Industries, 2015-18.
- Fanzines Archive: “This online archive was first constructed by the Fall 2012 ‘Archives, Media & Materiality’ course led by Dr. Lauren Klein in conjunction with the Georgia Institute of Technology Library Archives, the Bud Foote Collection, and the University of Victoria Intro to Digital Humanities class.”
- File Transfer Protocol: a standard network protocol used for the transfer of computer files between a client and server on a computer network
- Ford, Paul. “What Is Code?” Bloomberg Businessweek, 11 June 2015.
- Gaertner, David. “Why We Need to Talk about Indigenous Literature and the Digital Humanities.” Novel Alliances, 26 January 2017.
- Grandin, Temple. Squeeze Machine. Dr. Temple Grandin’s Website, 1992.
- Halberstam, Jack. “Unlearning.” Profession (2012): 9-16.
- Hendren, Sara. “All Technology Is Assistive: Six Design Rules on Disability.” Making Things and Drawing Boundaries, ed. Jentery Sayers. Minneapolis, MN: University of Minnesota Press, 2017. 139-45.
- Hennessy, Brendan Patrick. “Small Child in Woods.” You Will Select a Decision, 1987-2015.
- Hicks, Marie. “Only the Clothes Changed: Women Operators in British Computing and Advertising, 1950–1970.” IEEE Annals of the History of Computing 32.4 (2010): 5-17.
- HTML 5: a software solution stack that defines the properties and behaviours of web page content by implementing markup
- Hypertext Access (or .htaccess): a directory-level configuration file supported by several web servers, used for configuration of site-access issues
- Jin, Ge. Chinese Gold Farmers. 2008-12.
- jodi: a collective of two internet artists, Joan Heemskerk and Dirk Paesmans
- Kirschenbaum, Matthew. “Software, It’s a Thing.” Medium, 24 July 2014.
- Loyer, Erik: various design projects, including work for Vectors
- Markdown: distilled markup with a plain text formatting syntax
- Manovich, Lev. The Language of New Media. Cambridge, MA: MIT Press, 2001.
- Nakamura, Lisa. “Don’t Hate the Player, Hate the Game: The Racialization of Labor in World of Warcraft.” Critical Studies in Media Communication 26.2 (2009): 128-144.
- Onuoha, Mimi. The Library of Missing Datasets. Mimi Onuoha, 2016.
- Optical Character Recognition: the mechanical or electronic conversion of page images into machine-readable text
- “Powers Girl” Ads: images that appeared in the Powers company magazine during the 1950s and ’60s (see Hicks)
- The Programming Historian: “We publish novice-friendly, peer-reviewed tutorials that help humanists learn a wide range of digital tools, techniques, and workflows to facilitate research and teaching. We are committed to fostering a diverse and inclusive community of editors, writers, and readers.”
- Raskin, Jef. The Humane Interface: New Directions for Designing Interactive Systems. Boston: Addison Wesley, 2000.
- Roth, Lorna. “Looking at Shirley, the Ultimate Norm: Colour Balance, Image Technologies, and Cognitive Equity.” Canadian Journal of Communication 34.1 (2009).
- Russ, Joanna. “Speculations: The Subjunctivity of Science Fiction” (1973). To Write Like a Woman. Bloomington, IN: Indiana University Press, 1995.
- Sandvig, Christian, Kevin Hamilton, Karrie Karahalios, and Cedric Langbort. “Auditing Algorithms: Research Methods for Detecting Discrimination on Internet Platforms.” “Data and Discrimination: Converting Critical Concerns into Productive Inquiry.” Seattle, WA: 22 May 2014.
- Sayers, Alana. Lecture. “Indigenous Memes.” In “Unlearning the Internet: Digital Humanities 150.” University of Victoria. 11 March 2019.
- Scholz, Trebor. Digital Labor: The Internet as Playground and Factory. New York: Routledge, 2013.
- Shirley Cards: colour reference cards that are used to perform skin-colour balance in still photography printing (see Roth)
- Simpkin, Sarah. “Getting Started with Markdown.” Programming Historian 4 (2015).
- Sublime Text: “a sophisticated text editor for code, markup and prose”
- Thorp, Jer. “You Say Data, I Say System.” Hackernoon (Medium), 13 July 2017.
- Waddell, Ian Michael. “Animalia.” Itch, 2018.
- WAVE: used to evaluate the accessibility of web pages
- Wayback Machine: “explore more than 345 billion web pages saved over time”
- WordStar 4.0: a word processor application that had a dominant market share during the early- to mid-1980s; published by MicroPro (see Kirschenbaum)
- YAML: a human-readable data serialization language
- Zine: a self-published work of original, unofficial, or appropriated texts and images; derived from fanzine (a portmanteau of fan and magazine)
I acknowledge with respect the Lkwungen-speaking peoples on whose traditional territory the University of Victoria stands, and the Songhees, Esquimalt, and WSÁNEĆ peoples whose historical relationships with the land continue to this day.
Work by Virgilio Almeida, Saeideh Bakhshi, Cheryl Ball, Michelle Barboza-Ramirez, Patrick J. Bartlein, Lynda Barry, Jonathan Beller, Julian Bleecker, Geoffrey C. Bowker, Anne Burdick, Allison Burtch, Robyn Caplan, Manuel Castells, Edward Castronova, Kim Christen, Wendy Hui Kyong Chun, Anne Cong-Huyen, Ryan Cordell, Ellen Cushman, Valentina D’Efilippo, Cathy Davidson, Samuel R. Delany, Megan Rose Dickey, Kelly Dobson, Joan Donovan, Julia Dressel, Kevin Driscoll, Johanna Drucker, Marisa Duarte, Sarah Elwood, Lori Emerson, Hany Farid, Batya Friedman, Christian Fuchs, David Gaertner, Alexander Galloway, Alex Gil, Lisa Gitelman, Temple Grandin, Melissa Gregg, Dene Grigar, Richard Grusin, Jack Halberstam, Kevin Hamilton, Lauren Hanson, Donna Haraway, Duchess Harris, Sara Hendren, Marie Hicks, Bernard J. Jansen, Ge Jin, jodi, Bernward Joerges, Jessica Marie Johnson, Karrie Karahalios, Darius Kazemi, Lauren Klein, Kari Kraus, Matthew Kirschenbaum, Cedric Langbort, Alan Liu, Elizabeth Losh, Erik Loyer, Julien Maillan, Lev Manovich, Mark Marino, Carolyn Marvin, Shannon Mattern, Jeanna Matthews, Frank McCown, Tara McPherson, Mara Mills, Thomas Mullaney, Dawn Nafus, Lisa Nakamura, Alondra Nelson, Safiya Noble, Bethany Nowviskie, Mimi Onuoha, Frank Pasquale, Amanda Phillips, Miriam Posner, Rita Raley, Roopika Risam, Daniela Rosner, Lorna Roth, Joanna Russ, Christian Sandvig, Alana Sayers, Simon Schaffer, Trebor Scholz, Jonathan Schwabish, Cynthia Selfe, Steven Shapin, Steven Shaviro, Sarah Simpkin, Susan Leigh Star, Nicole Starosielski, Jonathan Sterne, Jer Thorp, William J. Turkel, McKenzie Wark, Jacqueline Wernimont, Matthew W. Wilson, and Langdon Winner informed the design, development, content, and instruction of this course.