Extraordinary new public humanities resource that curates in near real-time over 50,000 items from sources/channels of humanities-related scholars who create public-facing essays, podcasts, videos, blogs, etc.: publicscholarship.org
Lots more findings and discussion in the pre-print.
And please, share your feedback with us! The article has been accepted for publication at New Literary History, and we consider the pre-print to be part of our peer review process. Thx!
arxiv.org/abs/2603.01220
In order to make sense of these findings, we interpret them by way of recent literary theory on the nature of fictionality. This helps us (1) answer why characters should feature prominently in LLMs learning from novels and (2) articulate a theory of representation for the AI era.
One major finding is that an early model -- BERT -- learns about gender as a dialogical construction between characters.
More surprising is that the relations are characterized by heightened affect: gender differences are represented through anger, confusion, erotics in high stakes situations.
We know that current AI models train on large swaths of novels, thanks to reporting in The Atlantic.
Richard and I trace fiction's role back to the first-generation LLMs, and we empirically test: What do AI models actually learn from fiction, as opposed to non-fiction sources like Wikipedia?
In the paper, we argue that training data is a new and consequential frontier for cultural representation.
It is a truism that LLMs only know about the world what they learn from their training data, but the insight is rarely tested and, as a result, not well theorized. We aim to rectify that.
Excited to share the pre-print for a forthcoming article in NLH with @richardjeanso.bsky.social
Generative AI & Fictionality: How Novels Power Large Language Models
arxiv.org/abs/2603.01220
A dh tutorial web app
tl;dr: I build little wonky things to help with my own teaching and research. If all goes well, they turn out to be useful/interesting for other people too. This post is in the spirit of that sharing. A worsening problem I am having is an overall decline in basic digital […]
2016: entertaining myself by watching -LL/token go down in MALLET sessions
2026: entertaining myself by watching Claude Code google keywords from my prompt
Fired up the old blog for this post. Glad to see DH Now is still kicking!
Wowza. Well, I stand by my initial assessment: that post is a litany of broken things in academia (with varying degrees of self-awareness)
A pop discourse that represents the imaginary relationship of college students to their real conditions of trudgery!
As a tea drinker, the energy I use to boil water each month probably matches that of driving a car ten miles. Wild!
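Out of curiosity, here is a back-of-envelope sketch of that comparison. All figures are assumptions (0.5 L per boil, three boils a day, a 30 mpg gasoline car, 33.7 kWh of energy per gallon of gasoline), not the original poster's numbers:

```python
# Rough energy comparison: a month of tea kettle boils vs. driving 10 miles.
# Assumed figures: 0.5 L per boil, 3 boils/day, 30 days, water heated
# from 20 C to 100 C; car at 30 mpg, 33.7 kWh per gallon of gasoline.
SPECIFIC_HEAT_WATER = 4186   # J per kg per degree C
J_PER_KWH = 3.6e6            # joules in one kilowatt-hour

def kettle_kwh_per_month(liters_per_boil=0.5, boils_per_day=3, days=30,
                         start_c=20, boil_c=100):
    """Energy (kWh) to heat the water for a month of tea, ignoring kettle losses."""
    joules_per_boil = liters_per_boil * SPECIFIC_HEAT_WATER * (boil_c - start_c)
    return joules_per_boil * boils_per_day * days / J_PER_KWH

def car_kwh(miles=10, mpg=30, kwh_per_gallon=33.7):
    """Thermal energy (kWh) in the gasoline burned over a given distance."""
    return miles / mpg * kwh_per_gallon

print(round(kettle_kwh_per_month(), 1))  # ~4.2 kWh
print(round(car_kwh(), 1))               # ~11.2 kWh
```

Under these assumptions the two land in the same ballpark (a few kWh vs. about ten), which is the spirit of the claim; the exact ratio depends on how much tea you drink and what you drive.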
I get the sense you are looking for disruptions that are real "creative destruction": interventions that reorganize the playing field. Any examples you think are (potentially) transformative?
Maybe notebook LM has gotten closest so far, as a diff way to engage library holdings/course readings
The flip side is that insights about "the way things work" tell you what is broken: e.g. Einstein directed attention to the terrible incentives students have in the ed system.
He "digesteth harde yronne" (takes a D3+iron supplement)
Hold the phone! La Sagrada Familia finished being built last week
www.theguardian.com/world/2026/f...
DOGE allegedly used ChatGPT to identify 1,400 NEH grants it said were DEI. Grants were terminated April 2025, according to a court filing. E.g.
Film: 1873 Colfax massacre
Film: first female pilots flying for U.S. military in WWII
Film: "Untold Story of Jewish Women Slave Labor in the Holocaust"
But the elision of code work, when we offload it to AI, puts pressure on the code's ultimate purpose, its end. At some level, that's all that prompting consists of: naming goals for AI agents to pursue.
The goals are weird, overdetermined mixtures of ethical, aesthetic, and analytic dispositions.
I don't say it quite this way in the post, but my hunch is that we, as users, are only partly conscious of our mixture of goals for any project, and that reading chat transcripts of vibe coding will reveal what it actually consists of.
The transcripts themselves may be AI's real value to humanists!
At a practical level, AI's flexibility will reduce some of the familiar friction in research workflows and lesson plans that rely on tools made for other disciplines.
Over the weekend, I made a couple of apps with Claude Code. It was exhilarating, and it gave a peek at how the tectonic plates are shifting beneath Digital Humanities.
What does AI mean for the tools we built and the knowledge they generated? A few thoughts on the blog:
teddyroland.com/2026/02/26/n...
It is funny how just writing down all the tasks you're juggling makes them feel that much more manageable
From time to time I mutter about a secret project that involves benchmarks and historical language models. Here's a formal announcement of the Schmidt Sciences grant. Other PIs include @dmimno.bsky.social , @lauraknelson.bsky.social, @andrewpiper.bsky.social, and @mattwilkens.bsky.social. And +
This is the Star Trek "captain's log" conceit. At the edge of space, keeping a daily log is the only way to recognize that everyone on the Enterprise is under alien mind control … except you!
A positive cultural adaptation to AI (for habitual users) would be a daily journaling practice.
A record of how you think each day would show how your thinking *changes* as you interact with the increasingly capable systems. Reflecting on your own position "in-the-loop" would be its own good.
Cool new essay that analyzes data from @post45data.bsky.social to argue for the rise of literary nationalism in parallel with political nationalism substack.com/home/post/p-...
For the first time in league history, the WNBA generated enough revenue in 2025 to trigger revenue sharing with players. Players will receive $8 million.
The union will also pay out $9.25 million from revenue generated from its group licensing program.
www.espn.com/wnba/story/_...