We’ll use this page to circulate notes, slides, handouts, key terms, questions, and whatnot throughout the semester.
- January 7: Why Unlearn the Internet?
- January 10: The Myth of Lone Male Inventors
- January 14 and 17: The Desire for Immediate Interfaces
- January 21 and 24: The Investment in Neutral Instruments
- January 28 and 31: The Rhetoric of Raw Data
- February 4 and 7: The Appeal to Interactivity
- February 11 and 14: Writing and Publishing Workshops
- February 18 and 21: Reading Break (no notes)
- February 25: Choosing Your Own Adventure
- February 28 and March 4: The Assertion of Autonomous Algorithms
- March 7: The Belief in Sourcery
- March 11: The Ideology of Landless Territory
- March 14: The Futures of Digital Labour
- March 18 and 21: Making a Zine
- March 25: The Fallacy of Extended Man
- March 28: Polishing Your Work
- Composing, Revising, and Formatting Your Logs
January 7 #
I circulated a handout (a DHum 150 FAQ) at the beginning of class. Enjoy!
Here are the slides for “Early Histories of the Internet.” I focus on movable type, printing presses, automata, steam engines, mechanical digital computers, and telegraph networks, and then I mention railroads, factories, phonography, photography, moving images, radio, and magazines. The slides include video, segments of which I used for illustration. I’m not endorsing or recommending the videos, including their historical accuracy or arguments. This disclaimer pertains to all future uses of video in “Unlearning the Internet.” Thank you for understanding.
A key term for this “Unlearning the Internet” module is parthood: If we begin with the assumption that the internet is composed of parts, or that it’s not some monolithic structure, then we can approach it historically, asking what components were “remediated” into what we use today and thus continue to shape our habits. Of course, these parts are not just things; they are processes, too. Learning more about them helps us to avoid treating the internet as a recent phenomenon and to attend to the relevance of ostensibly “obsolete” technologies.
Questions for the module: How is the internet an assemblage of “old” parts, and how do those parts and their histories shape knowledge production today?
January 10 #
Here are the slides for “Labour and Gender in Computing.” References include Ada Lovelace’s notes (1842); work on the ENIAC by Betty Snyder Holberton, Jean Jennings Bartik, Kay McNulty Mauchly Antonelli, Marlyn Wescoff Meltzer, Ruth Lichterman Teitelbaum, and Frances Bilas Spence; Beatrice Worsley’s contributions to Transcode and FERUT; Duchess Harris’s research on Black human computers at NASA; Lisa Nakamura’s research on female Navajo workers at Fairchild Semiconductor; and Lynn Conway’s contributions to chip design and computer processing.
As Marie Hicks (2010) shows, gender inequalities and labour conditions persisted as technologies of typing, tabulation, and calculation morphed into office computing. While the clothes changed, most of the working conditions did not. Advertisements, such as the “Powers Girl” ads, did not just reflect these conditions; they helped to enable them: women in early computing were paid less than men, and they were also encouraged to downplay their technical expertise as their computing work was advertised as clerical work.
Key terms for this “Unlearning the Internet” module are gender and labour: If we begin with the knowledge that technologies are gendered forms of congealed labour, then we can question popular myths of genius male inventors creating devices on their own. (For example, search for “famous inventors.”) Technologies are imagined, designed, and developed by groups (not lone male inventors) in particular contexts, which are shaped by history (see “Early Histories of the Internet”). Despite their significant contributions to science and technology, women and non-binary people are often not given credit and/or compensation for their research, and technologies and technical expertise are routinely gendered male and masculinized.
Questions for the module: How are norms, habits, and labour conditions reinforced despite, and even through, technological change? What are the risks of assuming technological change, including the rise and growth of the internet, will always bring about social good?
January 14 and 17 #
Here are the slides for “A History of Immediate Interfaces,” from class on Monday, January 14th. I focus on batch processing, command lines, TUIs, WYSIWYGs, GUIs, WIMPs, skeuomorphs, ubicomp, and WordStar, and I conclude with a reminder that people (usually women) have also played the role of computing interfaces. (See, for example, I.J. Good’s description of Colossus.) I draw on work by Bette Nesmith Graham, Jef Raskin, Alan Kay, Jim Henson, Mark Weiser, Wendy Chun, and Matthew Kirschenbaum, whose demonstration or litany (2014) of how software may be variously understood is central to this module: software as asset, package, shrinkwrap, notation (or score), object, craft, epigraphy, clickwrap, hardware, social media, background, paper trail, service, and big data. If you have a moment, then you might familiarize yourself with two or three of these understandings. Package, object, craft, and service are especially relevant to this course.
Also, here is page one of the handout for the Markdown workshop in class on Thursday, January 17th. Page two of the handout was a Markdown guide produced by Patrick J. Bartlein, Professor of Geography at the University of Oregon. I also recommend Sublime Text and Sarah Simpkin’s “Getting Started with Markdown.” Please note that a text editor is not a WYSIWYG environment. You will need to convert your Markdown into another format (e.g., HTML) to render images, links, formatting, and the like. One aim of Markdown, then, is to focus on composition and to avoid toggling much, if at all, between composition and other tasks and windows. Writing in Markdown with a text editor resembles aspects of WordStar in this way, even if WordStar was a WYSIWYG interface.
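To see what that conversion step involves, here is a deliberately tiny sketch of my own (not the workshop handout, and nothing like a full converter such as Dillinger or Pandoc) that turns a small subset of Markdown, headings and bold text, into HTML:

```python
import re

def md_to_html(md: str) -> str:
    """Convert a tiny subset of Markdown (headings and bold) to HTML."""
    html_lines = []
    for line in md.splitlines():
        # **bold** becomes <strong>…</strong>
        line = re.sub(r"\*\*(.+?)\*\*", r"<strong>\1</strong>", line)
        # One to six leading #s become a heading of that level
        m = re.match(r"(#{1,6}) (.*)", line)
        if m:
            level = len(m.group(1))
            html_lines.append(f"<h{level}>{m.group(2)}</h{level}>")
        elif line.strip():
            html_lines.append(f"<p>{line}</p>")
    return "\n".join(html_lines)

print(md_to_html("# My Log\nThis entry is **important**."))
```

The point is not the code itself but the workflow it implies: you compose in plain text, and rendering happens later, in a separate pass.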
Key terms for this “Unlearning the Internet” module are mediation and mode: If we acknowledge that all interfaces are somehow mediated and also (following Kirschenbaum) that software is a thing (not just an abstraction or metaphor), then we can begin to consider how everyday internet tasks, such as writing, shopping, searching, and data entry, are guided and influenced by interface and interaction design. We can also consider why people want their interfaces to feel or appear immediate, where “immediate” may mean “instantaneous,” “consistent,” “immersive,” “invisible,” “glitch-free,” or “friendly,” among other things. Interfaces have been vehicles for the automation of tasks as well as product loyalty. In this history of design, a “mode” (see Raskin) is how an interface changes the output of a gesture or sequence of actions. Examples of modes include caps lock, vibrate, and airplane mode as well as toggling between view and edit, insert and overwrite, or painting and selecting. Moments of conscious mediation correspond with “mode errors,” such as aggressive or annoying popups (“focus stealing”) and the appearance of #&%!#$& instead of 3751347 when caps are locked.
Questions for the module: Even though interfaces are forms of mediation, why and under what assumptions do people want them to be invisible? When should interfaces interrupt us? When should they be transparent?
January 21 and 24 #
Here are the slides for “Standards,” from class on Monday, January 21st. Please also see Lorna Roth (2009), “Looking at Shirley, the Ultimate Norm.” In the slides, I provide a working definition of “standard” (see Bowker and Star 1999) in the context of technology and culture, and I follow it with a few examples: QWERTY, DOS, VCR, and MP3. I then discuss Roth’s work on Shirley Cards (norm reference cards in photography) and survey their history from the 1950s to the present. I conclude with a few questions about how Shirley Cards (as both norms and standards that shape the expression of colour) are entangled with race, gender, and values.
During the OCR workshop, we used the OCR.space service, which stores no data, and I pointed you to the Internet Archive (texts from 1800 to 1900), Google Fonts, and the Harry Ransom Center’s digitized copy of the Gutenberg Bible, which uses the textualis (or textura) typeface. We also discussed how the OCR process is affected by image quality, lighting, language, alphabets, typefaces, layout, and distance between the page and camera.
Key terms for this “Unlearning the Internet” module are standards and processing: If we recognize that the conversion of input into output is always processed and shaped by some standard (an agreed-upon set of rules for production), then we can begin to consider how values and structural issues are baked into seemingly neutral (or unbiased) procedures: how, for instance, the Shirley Cards not only affect colour balance but are also entwined with race and gender norms, or how OCR not only affects how (or whether) texts are machine-readable but also requires the normalization of page images. Standards, in short, produce errors (e.g., OCR mistakes), defaults (e.g., light skin tones in early Shirley Cards), and categories such as “miscellaneous” and “other” (e.g., materials that do not fit a given scheme or format), and yet the internet relies on standardization, which is necessary to the authority of formats and the interoperability of systems. Knowledge of the values enabled by standardization thus affords opportunities to ask who and what should be involved in the standards design process, and to articulate more inclusive (or more critical) procedures for converting this into that.
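The production of a “miscellaneous” category can be made concrete with a small sketch. This is my own hypothetical example (the scheme and items are invented, not from the slides): a fixed classification standard places what it recognizes and lumps everything else together.

```python
# A toy classification standard: an agreed-upon scheme for sorting items.
SCHEME = {
    "letter": {"correspondence", "memo"},
    "image": {"photograph", "engraving"},
}

def classify(item: str) -> str:
    """Place an item according to the scheme, or fall through to the catch-all."""
    for category, members in SCHEME.items():
        if item in members:
            return category
    return "miscellaneous"  # the standard's category for what doesn't fit

for item in ["photograph", "memo", "zine"]:
    print(item, "->", classify(item))
```

Note that “miscellaneous” is not a property of the zine; it is an artifact of the scheme, which is the module’s point about standards producing their own residue.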
Questions for the module: How and under what assumptions do standards appear unbiased while also determining defaults and producing errors? How can standards be changed through research, cultural work, and participation in communities and systems?
January 28 and 31 #
Here are the slides for “Data as a System,” from class on Monday, January 28th. Please also see Jer Thorp (2017), “You Say Data, I Say System,” and Mimi Onuoha (2017), “Missing Data Sets.” In the slides, I start by talking about “raw” data, and then I outline how data is “cooked.” From there, I move to Thorp’s treatment of data as an artifact of a system, followed by Onuoha’s repository for The Library of Missing Datasets. The slides conclude with an exercise premised on expanding Onuoha’s library project. If you were to add a dataset to Onuoha’s list, then what would it be?
Also, here is the handout for the choropleth mapping workshop on Thursday, January 31st. During that workshop, I asked you to create overlays for the UVic Campus map. We used tracing paper for the overlays, which functioned as prototypes for “missing datasets” and UVic maps that do not (yet) exist.
Key terms for this “Unlearning the Internet” module are systems and processes: If, following Thorp and Onuoha, we approach data as artifacts of systems, then we must consider the entire production process, from collection to computation to representation. Such considerations raise questions about how data is structured and expressed, with inquiry into data that is not gathered. The absence of data says a lot about the presence and persistence of certain datasets. And from attention to what’s missing, we can imagine and prototype, for instance, maps that do not (yet) exist or graphs people want but do not have. Such expressions of data are not just useful for readers or consumers; they may also be used as arguments for change.
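One way to surface missing data is simply to compare what a system collects against what we wish it collected. The sketch below is my own, with invented field names, in the spirit of Onuoha’s Library of Missing Datasets:

```python
# Fields a hypothetical campus facilities dataset actually collects
collected = {"building_name", "room_number", "seating_capacity"}

# Fields we might wish it collected (hypothetical examples)
wished_for = {"building_name", "room_number", "seating_capacity",
              "wheelchair_access", "gender_neutral_washrooms"}

# The difference is the "missing dataset": data no one gathered
missing = sorted(wished_for - collected)
print("Missing from the dataset:", missing)
```

The comparison is trivial computationally; the work is in articulating the `wished_for` schema, which is where the prototyping and the argument for change happen.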
Questions for the module: How do we attend to the processes and systems through which data is “cooked”? What do we learn from identifying and prototyping missing datasets?
February 4 and 7 #
Here are the slides for “From ‘Interactive’ to Subjunctive” from class on Monday, February 4th. Please also see Joanna Russ (1973), “Speculations: The Subjunctivity of Science Fiction.” I start by noting how “interaction” is a rather vague term, and how “interactive” media are actually programmed and highly structured. I add that routinized tasks conducted online may feel more like following a designer’s plan or instructions than interacting with something or someone. So instead of asking how to make stuff interactive, we might engage the subjunctive (or what if?) of media. In the slides, I show Julian Bleecker’s clips of science fiction films as an example here. Science fiction is what has not (yet) happened; it’s often a site of design experiments. But, as I share in the slides via excerpts of work by Joanna Russ, science fiction isn’t prediction or prophecy. It isn’t fantasy, either. The slides then further describe science fiction and, moving to our case study, its history in zines. The slides conclude by looking at Wizard Zines made by Julia Evans. While the content of these zines is not exactly science fiction, Evans does ask, “what if?” That is, she imagines both a world and a genre she wants to see, and she uses various techniques to render the bizarre familiar and the familiar bizarre.
Following our discussion of Evans’s work, here is the prompt for the zine assignment. The zine is due by April 4th.
During workshop on Thursday, February 7th, I mentioned Audacity. If you’re interested in using it, then here’s a brief and quite helpful tour of the software. I also mentioned the Alexa project as an example of research on “digital assistants,” for which people program “skills.” Here’s a sample skill.
A key term for this “Unlearning the Internet” module is subjunctive: rather than focusing on baking scripts into media, we might engage what is possible with and through them. Here, we can learn a lot from science fiction, which (echoing Russ) presents what has not happened. The subjunctivity of science fiction is that it is neither possible nor impossible. Science fiction is not possible because it does not reflect our present-day actuality; it is not reportage. Science fiction is not impossible because we don’t know for sure that it could not happen. The subjunctive is evocative and also quite practical because it prompts us to imagine scenarios and experiment with designs before jumping immediately into programming interactions between people and stuff. It often expresses people’s wishes, too.
Questions for the module: How does the subjunctive shape the design of interactions? How is engaging what hasn’t happened also a way to imagine what we want to see in the world?
February 11 and 14 #
This week is dedicated to workshop and, more specifically, to revising and publishing your logs. Here are some useful links:
- Dillinger for Markdown conversion,
- Cyberduck for file transfer,
- UVic’s page on web publishing and hosting,
- UVic’s page (Windows or Mac) for restricting access to your website,
- The checklist for your logs, and
- The instructions for publishing your logs as websites.
February 25 #
Here are the slides for “Creative Encoding,” a lecture given by Ian Michael Waddell during class on Monday, February 25th. Please also see “Small Child in Woods” by Brendan Patrick Hennessy, “Standard Patterns in Choice-Based Games” by Sam Kabo Ashwell, and Ian’s own “Animalia.” During workshop, Ian also circulated a sheet of icons for your use in constructing a “storymap” (a visual narrative structure for a story). He pointed you to Twine, too.
A key term for this “Unlearning the Internet” module is structure: the choices we encounter in responsive content such as choose-your-own-adventure (CYOA) stories require an incredible amount of design and planning, and they may not follow the habits or conventions we typically associate with most books. Recall, for instance, the extensive storymap for Ian’s “Animalia.” People who make websites and write CYOAs shape how their readers navigate this content, and they try to anticipate the choices people will make. Put this way, content and paths through it must both be engaging to keep people from bouncing (that is, leaving the story or site), and readers may have less agency than terms such as CYOA imply. Sam Kabo Ashwell provides us with some sample structures for CYOA: “time cave,” “gauntlet,” “branch and bottleneck,” “quest,” and “loop and grow,” for example. We experimented with them during class.
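A storymap of the kind Ian circulated can be represented as a directed graph. The sketch below is my own toy story (not “Animalia”), showing a miniature “branch and bottleneck” structure in Ashwell’s sense: paths diverge and then reconverge, and we can enumerate every route a reader could take.

```python
# A storymap as a directed graph: each passage maps to the reader's choices.
STORY = {
    "start":    ["forest", "shore"],  # a branch: two choices
    "forest":   ["clearing"],
    "shore":    ["clearing"],         # paths reconverge: a bottleneck
    "clearing": ["end"],
    "end":      [],                   # no choices: the story is over
}

def paths(node="start", trail=()):
    """Enumerate every route a reader could take through the story."""
    trail = trail + (node,)
    if not STORY[node]:
        return [trail]
    routes = []
    for choice in STORY[node]:
        routes.extend(paths(choice, trail))
    return routes

for route in paths():
    print(" -> ".join(route))
```

Even this five-passage story has multiple complete routes, which suggests how quickly the design and planning burden grows, and how much of the structure stays hidden from any single reader, who only ever sees one route.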
Questions for the module: How is responsiveness designed, and how is movement through responsive content structured? What can people see and not see as they make their way through structured content? What, for instance, do designers and writers decide to hide from readers, and why?
February 28 and March 4 #
Here are the handout for “Professor Bot,” an algorithmic accountability scenario we conducted on Thursday, February 28th; the slides for “The Public Scrutiny of Algorithms” from Monday, March 4th; and the spreadsheet for the algorithm audit (feel free to download or make a copy of the spreadsheet). Also see Sandvig et al.
Questions for the module: How do we determine who or what is accountable for the decisions algorithms make? Or how they shape decisions? What’s the difference between transparency and accountability?
March 7 #
Here is the handout for the “Interfaces and Consent” workshop on March 7th.
Questions for the module: What can we learn from low-tech approaches? How are design and development processes that involve multiple components irreducible to code and programming?
March 11 #
Questions for the module: How do land and infrastructure affect internet access? Development? How do people address, resist, and correct claims that the internet is landless?
March 14 #
Here are the slides for “Digital Labour: Then, Now, and Soon,” my lecture on Thursday, March 14th.
Questions for the module: How do large projects become individual microtasks? How do people accumulate data through microtasking platforms and games? Do they own that data? How are they compensated for their labour?
March 18 and 21 #
Perhaps most important, here is the rubric for the zine assignment.
March 25 #
Here are slides for “Other Than Extensions,” my lecture on Monday, March 25th.
Questions for the module: For decades if not centuries, technologies have been described as extensions of man. But this gendered and rather masculine framing of tech as power ignores how technologies also make us vulnerable; they also assist us in performing tasks and building relations. In fact, assistive technologies are often sites of innovation. What happens when we design assistive technologies for one person? Or let technologies ask questions instead of solve problems? Does tech need to be a tool? What if it’s a negotiation?
March 28 #
Notes for Your Logs #
Here are a few tips for composing, revising, and formatting your logs:
- Write with awareness of your audience: not just TAs and me, but undergraduate students who may be interested in “Unlearning the Internet” and the issues we’re covering. What does your audience know? What don’t they know? What terms should be defined? What context should you provide?
- Be specific and avoid repetition, generalities, or abstractions wherever possible. The easiest way to do this is to draw upon examples, explain those examples (in your own words), and communicate to your audience why they matter. Each log entry can be about a specific example or prototype, which responds to the prompt.
- Avoid scope creep. You don’t have many words (usually only 50-200) per entry, so you won’t be able to address all the things. Focus instead on one thing and elaborate.
- Stay near the recommended word count. Do no more than double it.
- List in your references section (at the bottom of your log) any works you cite, quote, paraphrase, or mention. I’ll provide you with the proper formatting and details for references to course materials.
- Enrich your writing with media, such as images. If the images are not yours, then include important details (creator, date, and venue) about them in your references section.
- Draw upon the assigned materials for evidence (e.g., quotation) and, where applicable, add your own perspective to them. Perhaps, for example, you disagree with someone’s argument. Explain why.
- Get feedback from other people in the course and try reading your work aloud. Also see the TAs or me if you’d like feedback. We are happy to provide feedback during office hours or by appointment. (Given the size of this course, we can’t provide feedback by email. Thank you for understanding.)
- Situate knowledge. Write about what you know and consider how your own experiences inform the course material. Don’t hesitate to write in the first-person; after all, it’s a log about what you’re learning and doing during the semester. Feel free to apply your own discipline or major to the log entries, too.
- Briefly introduce your log at the top of your Markdown document. You might mention who you are, what the log is about, what it includes, and what your interests are (especially with respect to “Unlearning the Internet”).
- For each log entry, provide a title and date. (See the example document I circulated during the Markdown workshop.) The date should correspond with the date on the course website (as opposed to, for instance, the date you wrote the entry), and log entries should appear in ascending chronological order (oldest entries at the top, and newest entries at the bottom). You can create your own entry titles or copy and paste the titles on the course site.
- Review the notes and slides (above) for the course modules and, where possible, engage in your log entries the key terms and questions for each week. Put the language, issues, and problems from the course into action.
- If you’re aiming for an A-, A, or A+, then demonstrate that you’re learning more than what’s expected by the prompts. Conduct additional research (with references), improve upon the course material by adding to it or offering another perspective, and provide compelling examples (with evidence) that respond directly to the prompts and push them in productive directions (e.g., toward new research or lines of inquiry). However, please do not write additional entries unless I explicitly state in class or via email that extra credit for a specific entry is available. High marks should be a matter of quality, not just quantity. Also, I don’t want to encourage you to write more than is necessary.