
Weeknotes are where I share what I’m working on or thinking about this week, along with a few things worth passing on, without worrying too much about the ideas being fully formed.

Thinking About / Working On

This will be my last post for the year, as I leave on Wednesday for Hawaii. I’m trying to spend today and tomorrow wrapping up a few work projects and leaving things in a state where I don’t come back to a disaster.

One thing I’ve been thinking about is the value of these weeknotes. When I actually do them, I find them useful for organizing my thinking. Or maybe it’s more accurate to say that they help make what I’m already thinking about feel more real, and that is their value. I hope to be a little more consistent with them next term.

Probably the biggest thing I’m thinking about this week is what changes I want to make in the Makerspace next term. I would still like to encourage and support students, staff, and faculty looking to go a little deeper in the projects they’re doing in the makerspace. Part of this will be trying to build a bit more of a community, and I’m hoping some changes to the weekly “show up and make” events I started this term will help: dropping registration, moving them to once a week later in the afternoon (when we are usually closed, so they will feel special and meet a user need for later hours), and involving staff and student ambassadors instead of just me.

This could also involve micro-grants for students who have ideas, supporting them with small materials purchases, and working a bit more with faculty who are interested in embedding multimodality and maker methodologies into their learning outcomes.

Another thing is that critical making has been one of our values. In the last couple of years I’ve really focused on sustainability and indigenization, but it’s time to try some small things that push users to think more critically about the role of technology in their lives and in the makerspace.

Finally, while our stats have been flat this term compared to last, I think we probably need another small push on outreach. My attention has been stretched this term by serving as sole chair while my co-chair was on sabbatical.

Reading / Consuming / Sharing

David Gray Widder, Meredith Whittaker, and Sarah Myers West in Nature on Why ‘Open’ AI Systems Are Actually Closed, and Why This Matters

I found this article really useful for taking apart all the ways “open” is being used in AI development right now, both rhetorically and otherwise, and for explaining why openness alone is not enough to counter the issues in how AI is being developed and marketed.

We need a wider scope for AI development and greater diversity of methods, as well as support for technologies that more meaningfully attend to the needs of the public, not of commercial interests. And we need space to ask ‘why AI’ in the context of many pressing social and ecological challenges. Creating the conditions to make such alternatives possible is a project that can coexist with, and even be supported by, regulation. But pinning our hopes on ‘open’ AI in isolation will not lead us to that world, and—in many respects—could make things worse, as policymakers and the public put their hope and momentum behind open AI, assuming that it will deliver benefits that it cannot offer in the context of concentrated corporate power.

Rachel Coldicutt on FOMO Is Not a Strategy

I think this is an incredibly smart piece that captures many of the reasons why people who aren’t idiots might use generative AI in their work, even if they’re critical of the production, marketing, and use of generative AI. It comes closest to describing the way I use generative AI. I think it also captures my own thinking around why generative AI is unlikely to be integrated into institution-wide workflows and why, at least right now, productivity gains are probably low.

An individual might enjoy using a tool because it gives them an extra 10 minutes here and there, or because it makes a boring task seem more fun, but that does not guarantee their whole working day will become more efficient or productive.

People who really enjoy using genAI tools seem to particularly like the ad hoc freedom of daisy chaining a few things together and experimenting, and part of the fun is trying things out rather than adopting standardised new protocols. And if every workaround a coworker develops goes on to become standard practice, there are likely to be drawbacks: as well as potentially being irritating for colleagues, there would need to be new routines to manage workflows, such as quality assurance, standards-setting and training.

If your business depends on trust – in delivering services, developing relationships, taking care of people – then generative AI in particular will probably only deliver marginal gains for some individuals, and that may risk the quality of your overall delivery. There might be a good case to empower staff to use genAI and other tools in ways that make their lives easier, but the second and third-order consequences of those decisions need to be understood if you’re going to carry on delivering business as usual.

Helen Beetham on Chips With Everything

A nice dive into some of the stupid ways the use of AI in government is being pushed in the UK right now, especially around education and health.

The subject/citizen is now a body of data, invited to know themselves as a unique bundle of desires and needs (and genetic codes), but known by the state and its corporate partners in terms of quantified risks. Whatever can be ‘personalised’ in a public service is almost by definition non-essential. So individual users are always right about what they want, but the needs people have in common can be refused any reality. Tech capital in particular has no capacity to build the foundational services people need, but can help to manage those needs, providing the interface between citizens and what remains of the common good in the form of ‘choices’, ‘customer services’, ‘personalised plans’ and ‘diagnoses’, chatbots and AI-based apps. Meanwhile the state can use all that data to provide a kind of risk management service or insurance back-stop to private capital as it moves into the public sphere. Calculating risk is exactly what deep learning is good at.

Sun-ha Hong on Predictions Without Futures

I really enjoyed this piece on how motifs of the future, such as self-driving cars or virtual reality, are used to constrain and control the present. It’s something I’m going to be thinking about a lot next term.

The conceit of the open future furnishes a space of relative looseness in what kinds of claims are considered plausible, a space where unproven and speculative statements can be couched in the language of simulations, innovation, and revolutionary duty. What is being traded here are not concrete achievements or end states but the performative power of the promise itself. In this context, claims do not live or die by specifically prophesized outcomes; rather, they involve a rotating array of promissory themes that create space for optimism and investment. Cars that can really drive themselves without alarming swerves, facial recognition systems that can really determine one’s sexuality, and so on—the final justifications for such totally predictive systems are always placed in the “near” future, partly shielded from conventional tests of viability or even morality.

Carlo Iacono on The Authenticity Paradox

The true threat isn’t that students might use AI - it’s that our entire framework for understanding knowledge creation is dissolving before our eyes. We’re witnessing the death of individual authorship, not as a tragedy, but as an overdue evolution.

Think about it. When a student engages in iterative dialogue with an AI, incorporating insights from human peers, building on digital resources, and synthesising multiple perspectives, whose thoughts are they expressing? The question itself betrays our outdated epistemological assumptions. We’re trying to draw clean lines in an increasingly murky cognitive soup.

Kate Armstrong from AI Futures for Art and Design on Dream Machines

This video, which stitches together archival images using body posture and machine learning, is just so cool. Scroll down to the video and jump to 19:44 to watch.

The result is that you can use these ML systems to locate and stitch together a reel in which a figure transforms smoothly from one context to another within the frame, going from a runner, to a firefighter, to two firefighters, to two politicians. It’s hard to describe, so it’s best seen in motion at that timestamp.
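To make the mechanics a little more concrete, here’s a minimal sketch of how that kind of pose matching might work. This is my own illustration, not code from the project: I’m assuming frames are reduced to body keypoints (e.g., a 17-joint COCO-style skeleton from a pose estimator), and all the function names are hypothetical. The real system presumably does something far more sophisticated, but the core idea is finding the pair of frames across two clips where the figures’ poses line up, so the cut lands there.

```python
# Hypothetical sketch of pose-based transition finding: represent each
# frame by its pose keypoints, then pick the frame pair across two clips
# whose poses are most similar, so a cut between them looks seamless.
import numpy as np

def normalize_pose(keypoints: np.ndarray) -> np.ndarray:
    """Center the (n_joints, 2) keypoints and scale to unit size so poses
    are comparable across figures at different positions and scales."""
    centered = keypoints - keypoints.mean(axis=0)
    scale = np.linalg.norm(centered) or 1.0  # guard against degenerate poses
    return centered / scale

def pose_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean distance between two normalized poses (smaller = more alike)."""
    return float(np.linalg.norm(normalize_pose(a) - normalize_pose(b)))

def best_transition(clip_a: list[np.ndarray], clip_b: list[np.ndarray]) -> tuple[int, int]:
    """Return (frame index in A, frame index in B) with the most similar
    poses -- the point where a cut from one clip to the other lines up."""
    pairs = ((pose_distance(fa, fb), i, j)
             for i, fa in enumerate(clip_a)
             for j, fb in enumerate(clip_b))
    _, i, j = min(pairs)
    return i, j

# Toy usage with random stand-in "poses" (17 joints, 2D coordinates each);
# in practice these would come from a pose estimator run over each frame.
rng = np.random.default_rng(0)
clip_a = [rng.random((17, 2)) for _ in range(30)]
clip_b = [rng.random((17, 2)) for _ in range(30)]
print(best_transition(clip_a, clip_b))
```

Chaining that matching step across an archive is what would let a runner hand off to a firefighter, and so on, with each transition anchored at the frames where the bodies align.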