On changing tools, and tools changing us
Changing research tools changes research practice.
I've written a blog post that takes a CSCW-informed view of tool migration, examining how new platforms reshape workflows, redistribute expertise, and influence what teams come to treat as valid insight.
briangrellmann.work/2026/02/08/o...
09.02.2026 19:56
Credit, I think, to Stephen Farrugia
@fasterandworse
07.12.2025 17:00
An AI button with the tooltip: warm a data centre
Positive friction or persuasive design or shaming? Yes.
07.12.2025 16:58
Try Comet with Pro included
For a limited time, get access to Comet with a month of free Perplexity Pro
If you're looking for an invite to Comet (the AI-powered browser that acts as a personal assistant) with Pro included, then you're in luck: pplx.ai/briangrell35...
21.10.2025 16:06
This new case study shows:
✅ strategies for building & exploring personal knowledge bases
✅ how retrieval shapes the way people create & maintain notes
✅ where AI could support knowledge work in the future
25.09.2025 07:29
Screenshot of paper title: How people manage knowledge in their "second brains"
No way! Some researchers at IBM in Brazil have looked into exactly what I've been trying to figure out myself… how researchers use Obsidian as a "second brain" to manage knowledge 🧠
arxiv.org/pdf/2509.20187
25.09.2025 07:29
Someone please run this study!
22.09.2025 12:54
Wore a suit in central London for the first time and was offered so much cocaine.
Does suit + central London = cocaine offers?
My hypothesis: People in suits are more likely to be approached with illicit drugs than those in casual wear, as suits may signal disposable income, social capital, or lower perceived risk to dealers.
22.09.2025 12:54
📄 A paper relevant to our discussions in HCI curriculum development: how do we encourage critical thinking, understanding, and enquiry around AI for workforce skills, while upholding academic integrity and enforcing against misuse?
arxiv.org/pdf/2506.22231
09.07.2025 18:38
The paper's recommendations, in summary:
1. Redesign assessments to emphasise process and originality
2. Enhance AI literacy for staff and students
3. Implement multi-layered enforcement and detection
4. Develop clear and detailed AI usage guidelines
09.07.2025 18:38
📄 There are pedagogical concerns, like the erosion of academic integrity and the risk of misinformation. Used as a shortcut rather than a learning aid, unfettered AI use risks reducing understanding and the ability to think critically.
So what can universities do? 👇
09.07.2025 18:38
✅ AI can provide great benefit across the academic spectrum: writing research grants, increasing research productivity, and transforming teaching and learning.
⚠️ It also presents risks: misuse is prevalent in student work, and forensic AI detection has limitations.
09.07.2025 18:38
A screengrab of the paper title
📄 In Adapting University Policies for Generative AI: Opportunities, Challenges, and Policy Solutions in Higher Education, Beale asks: how do universities respond to the advent of LLMs?
There are clear benefits, and a clear risk of misuse. Which policies strike the right balance for AI use?
09.07.2025 18:38
In short, the paper combines worker sentiment and expert views to show that AI agents are most valuable when humans and machines collaborate, not when AI operates alone.
Responsible AI should:
✅ Center human agency
✅ Align AI design with worker preferences
✅ Recognise where human strengths truly shine
06.07.2025 07:27
The authors suggest key human skills are shifting with AI adoption: the demand for information-processing skills is shrinking, while interpersonal and organisational skills cluster in tasks that demand high human agency.
Could this have implications for training, hiring, and designing with AI in mind?
06.07.2025 07:27
The authors highlight 4 core insights; here are 2 of them:
2️⃣ There are mismatches between what AI can do and what workers want it to do
4️⃣ There's a broader skills shift underway: from information-processing to interpersonal competence
06.07.2025 07:27
A framework for the Human Agency Scale (H1 to H5), describing the AI-human relationship across team dynamics, the degree of human involvement needed, the AI's role, and some example tasks.
They introduce the Human Agency Scale: a shared language for human-AI task relationships
H1: AI handles the task entirely on its own
H2: AI needs minimal human input
H3: Equal human-agent partnership
H4: AI needs substantial human input
H5: AI can't function without continuous human involvement
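The five levels above read naturally as an ordinal scale. A minimal sketch of how a team might encode it, assuming a simple enum representation; the example tasks at the end are my own illustrations, not from the paper:

```python
from enum import Enum


class HumanAgencyScale(Enum):
    """Hypothetical encoding of the Human Agency Scale (H1-H5)."""

    H1 = "AI handles the task entirely on its own"
    H2 = "AI needs minimal human input"
    H3 = "Equal human-agent partnership"
    H4 = "AI needs substantial human input"
    H5 = "AI can't function without continuous human involvement"


# Illustrative (invented) task labels tagged with a HAS level.
tasks = {
    "spam filtering": HumanAgencyScale.H1,
    "drafting a research plan": HumanAgencyScale.H4,
}

for task, level in tasks.items():
    print(f"{task}: {level.name} - {level.value}")
```

Keeping the levels as an enum (rather than free-text labels) makes it easy to filter or sort task audits by the degree of human involvement.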
06.07.2025 07:27
Future of Work with AI Agents: Auditing Automation and Augmentation Potential across the U.S. Workforce, Shao et al.
📄 What do workers actually want AI agents to do?
A new paper from Stanford titled The Future of Work with AI Agents proposes a principled, survey-based framework to evaluate this, shifting the focus from technical capability to human desire and agency.
🧵
Paper: arxiv.org/pdf/2506.06576
06.07.2025 07:27
Paper here: arxiv.org/pdf/2503.002...
Side note: I especially appreciated the researcher's reflection on doing a solo-authored paper, and how it deepened her appreciation for working collaboratively with co-authors and her team.
04.05.2025 20:33
Eat this magical konjac jelly and you'll instantly know how to speak and understand every single language (the type reads: honyaku konyaku)
In short: Speculative tech in pop culture is a rich resource for rethinking how we design for real human needs in HCI.
Do I wish I could eat a konjac jelly and instantly understand every language instead of using an app? 100% yes.
04.05.2025 20:33
The takeaway: Human needs haven't changed much over the decades, but the technologies used to meet them have. While AI, AR, and VR echo some of Doraemon's inventions, his tools are more seamlessly embedded in everyday life, moving beyond screen-based, modern UI paradigms.
04.05.2025 20:33
For the unfamiliar: Doraemon is a robot cat from the 22nd century who travels back in time to help the hapless Nobita, armed with a seemingly endless supply of intuitive, problem-solving gadgets.
04.05.2025 20:33
Doraemon, Nobita, and friends flying using Doraemon's Take-Copter
📄 In Doraemon's Gadget Lab, Tram Tran explores the speculative tech of the beloved Japanese manga Doraemon through an HCI lens: categorising 379 gadgets by user needs, comparing them to today's technologies, and asking how they might inspire future interaction design paradigms.
04.05.2025 20:33
People + AI Guidebook
A toolkit for teams building human-centered AI products.
An important chapter for anyone designing AI-enabled systems, drawing links between established AI design principles and how users form mental models.
Worksheets: pair.withgoogle.com/worksheet/me...
pair.withgoogle.com/guidebook/ch...
20.04.2025 06:15
✅ Account for user expectations of human-like interaction.
Communicate the nature and limits of the AI to set realistic user expectations and avoid unintended deception.
Try to balance cueing the right interaction with limiting mismatched expectations and failures.
20.04.2025 06:15
✅ Plan for co-learning.
Implicit and explicit feedback improve the AI and change the UX over time.
When the AI fails for the first time, users will be disappointed, so provide a UX that fails gracefully and doesn't rely solely on AI.
Remind users of, and reinforce, mental models, especially when user needs or journeys change.
20.04.2025 06:15
✅ Onboard in stages.
Onboarding starts before users' first interaction and continues indefinitely.
- again, set the right expectations
- explain the benefit, not the technology
- use relevant and actionable 'inboarding' messages
- allow for tinkering and experimentation
20.04.2025 06:15
✅ Set expectations for adaptation.
One of the biggest opportunities for creating effective mental models of AI products is to identify and build on existing models, while teaching users the dynamic relationship between their input and product output.
20.04.2025 06:15
The 🧠 Mental Models chapter of the 📖 People + AI Guidebook explains how AI-enabled systems change over time, yet users' mental models may not match what a product can actually do.
Mismatched mental models lead to unmet expectations, frustration, and product abandonment.
4 key considerations 👇
20.04.2025 06:15
As AI capabilities continue to evolve at speed, it's our responsibility to continually test whether they still resonate, still guide, and still serve the humans these systems are meant to support.
Great paper, found via a @stanfordhai.bsky.social course; required reading.
15.04.2025 12:44