
Dr. Casey Fiesler

@cfiesler

information science professor (tech ethics + internet stuff) kind of a content creator (elsewhere also @professorcasey) though not influencing anyone to do anything except maybe learn things she/her more: casey.prof

16,887
Followers
344
Following
1,280
Posts
09.04.2023
Joined

Latest posts by Dr. Casey Fiesler @cfiesler

A CS MS student working with me is about to start a project about academic writers' attitudes towards LLMs! I'll mention you're working on this, she might want to chat with you. :)

04.03.2026 13:25 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
In the News: what’s going on with AI and the Department of War… 
Pentagon threatens Anthropic punishment (Axios)
Statement from Dario Amodei on our discussions with the Department of War (Anthropic)
Statement on the comments from Secretary of War Pete Hegseth (Anthropic)
How Talks Between Anthropic and the Defense Dept. Fell Apart (The New York Times)
OpenAI strikes deal with Pentagon, hours after rival Anthropic was blacklisted by Trump (CNBC)
Anthropic's Claude overtakes ChatGPT in App Store (Mashable)
Users are ditching ChatGPT for Claude β€” here’s how to make the switch (TechCrunch)
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban (The Wall Street Journal)
Google employees call for military limits on AI amid Iran strikes, Anthropic fallout (CNBC)
Sam Altman says OpenAI is renegotiating with the Pentagon after an β€˜opportunistic and sloppy’ deal (Fortune)


prepping to try to explain to my AI & Society class what has happened since our last class early last week

03.03.2026 20:50 πŸ‘ 33 πŸ” 4 πŸ’¬ 1 πŸ“Œ 0
Analysis | ChatGPT is changing how we ask stupid questions The internet has long been a safe space to ask stupid questions. What do we lose when people switch to asking AI chatbots instead?

I've been thinking a lot about how people interact with chatbots versus online strangers. Recently @heatherkelly.bsky.social (formerly of WaPo siiiigh) asked me a very *interesting* question which is whether chatbots have changed how we ask stupid questions. www.washingtonpost.com/technology/2...

25.02.2026 23:28 πŸ‘ 16 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0

So "bots are doing harassment!" kind of feels like "AI is taking my job!" which attributes agency to AI in a way that is letting actual humans off the hook. Like... why take that very real decision making agency away from THE HUMAN THAT FIRED YOU?

17.02.2026 19:56 πŸ‘ 35 πŸ” 12 πŸ’¬ 2 πŸ“Œ 0

This isn't a story about AI gaining consciousness, it's a story about the capacity for AI agents to contribute to frighteningly scalable harassment. Because "go off my little bot friends and gather intel and write crappy linkedin style blog post hit pieces" is a thing a crappy human can do.

17.02.2026 19:54 πŸ‘ 60 πŸ” 18 πŸ’¬ 3 πŸ“Œ 1
An AI Agent Published a Hit Piece on Me Summary: An AI agent of unknown ownership autonomously wrote and published a personalized hit piece about me after I rejected its code, attempting to damage my reputation and shame me into acceptin…

I find this whole "AI agent wrote a hit piece" thing really troubling for reasons that have nothing to do with bots getting "mad" and "deciding" to take down open source contributors. theshamblog.com/an-ai-agent-...

Why do we keep wanting to give AI so much agency that it lets humans off the hook??

17.02.2026 19:52 πŸ‘ 80 πŸ” 19 πŸ’¬ 4 πŸ“Œ 2

For the record, my AI ethics themed standup set a week ago was mostly a mashup of previous sets, but also included:

- a subtle Donald Trump joke
- a subtle Heated Rivalry joke
- a not-subtle dig at AI bros in my YouTube comments

14.02.2026 02:15 πŸ‘ 24 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0
Privacy & Security

Will CU track usage? What data will the university collect?
CU will not monitor individual users’ interactions with ChatGPT Edu. CU will collect basic use statistics to better understand adoption and use patterns. This data will only be reported publicly in the aggregate. CU will retain the right to audit individual user interactions in isolated and limited cases.


I’m so proud of the students in my AI & Society class! They homed right in on this sentence from the university’s FAQ about their new ChatGPT site license, and had a LOT of questions. So do I! I promised I’ll try to find out what I can.

13.02.2026 00:28 πŸ‘ 40 πŸ” 4 πŸ’¬ 1 πŸ“Œ 1

Oh definitely not. Also faculty are still permitted to e.g. ban it in their classes.

12.02.2026 21:33 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
What about the environmental impacts of artificial intelligence?
While AI proficiency is becoming an important part of a well-rounded education and career-readiness, it is also important to acknowledge that current large language model AI technologies are resource intensive. Large language model AI systems require significant computing power, which can increase energy use and environmental impacts. Because CU is committed to doing our part for environmental stewardship and sustainability, we encourage our community to consider the following options for heightening our sustainable use of the tool:

Optimize AI prompts: Providing AI tools with well-structured prompts can reduce unnecessary AI processing. It reduces the number of queries required, which improves efficiency and lowers computational load. 
Turn off unneeded AI integrations: Some software applications have AI-powered assistants running in the background. Going into the applications’ settings and disabling those assistants when not in use will lower energy consumption.


I'm also just going to leave this FAQ answer here.

12.02.2026 20:18 πŸ‘ 17 πŸ” 1 πŸ’¬ 3 πŸ“Œ 0

This quote comes from yesterday's email announcement from my university that they have acquired a ChatGPT license for everyone.

12.02.2026 20:14 πŸ‘ 21 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

"We know that for some members of our community, generative AI raises significant concerns around privacy, sustainability and ethical use. We share those concerns and are working to mitigate – where possible – the impacts."

Those em dashes are doing a lot of work.

12.02.2026 20:13 πŸ‘ 69 πŸ” 9 πŸ’¬ 3 πŸ“Œ 0

Except it's not actually a word limit, it's a page limit! Though other ACM venues have moved to word limits instead so I'm not sure why FAccT hasn't...

12.02.2026 19:44 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

haha no I mean your point is still very good!!

12.02.2026 14:05 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

My usual plea related to how to do ethics in computer science programs is both standalone classes and in every class. I’m not sure what the version of that is here except just telling people they need to talk about it and put it in the right place ha.

12.02.2026 13:59 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Agreed! It also doesn’t need to all be in one place. Those are different kinds of things! I pointed this out in part because some of the instructions include decisions about human subjects research and that seems so weird to put at the end.

12.02.2026 13:58 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

But again, I am highly appreciative of the intention behind this. Doing something to encourage authors to write about research ethics in their papers is far more than most publication venues do, especially in computing.

12.02.2026 13:54 πŸ‘ 5 πŸ” 1 πŸ’¬ 0 πŸ“Œ 0

Maybe my take on this relates to the work I've done on ethics education. Ethical considerations in special sections of papers at the very end that many people won't read makes me think of the senior level standalone CS ethics class you take after none of your profs mentions ethics for 4 years.

12.02.2026 13:52 πŸ‘ 5 πŸ” 1 πŸ’¬ 2 πŸ“Œ 0

(That last bit is directed at authors. :) )

If as a researcher you're going to leave your ethical decision making out of a paper because you need to find 150 words to cut elsewhere, then you probably shouldn't be submitting to a conference with "transparency" in the title.

12.02.2026 13:49 πŸ‘ 5 πŸ” 1 πŸ’¬ 2 πŸ“Œ 0
Ethical Considerations Statement.
Can be included at submission time
This statement is a description of the ethical concerns and potential adverse impacts that authors considered and mitigated while conducting the work. Authors should describe the ethical challenges they faced in their submission and how they addressed such challenges. In particular, submissions that (1) describe experiments with human subjects/users and/or deployed systems (e.g., websites or apps), or (2) rely on sensitive user data (e.g., social network information) must adhere to precepts of ethical research and community norms. These include compliance with applicable laws and applicable professional ethical codes; respect for privacy; secure storage of sensitive data; voluntary and informed consent when appropriate; avoiding deceptive practices when not essential; beneficence and non-maleficence (maximizing the benefits to an individual or society while minimizing harm to the individual); risk mitigation; and post-hoc disclosure of audits. See also the section on Code of Ethics and Professional Conduct above. We also encourage authors to discuss any potential adverse or unintended impacts the work might have once published, and how they have mitigated those potential impacts.


Now to be clear! My assumption is that the *reason* for ethical considerations sections being part of endmatter is that the conference wants to encourage such statements by not counting them towards the page limit. So good intention, but...

1. death to page limits
2. come onnnnnnnn

12.02.2026 13:48 πŸ‘ 2 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

Interestingly I care much less about mentioning IRB approval (which I tend to just assume happens when human subjects research is conducted, or that they don't have access to an IRB) than about explaining ethical considerations that fall outside of IRB.

12.02.2026 13:45 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Reflexivity, positionality, and disclosure in HCI Are you an HCI researcher thinking about including a positionality statement? Here are some thoughts.

Same with positionality/reflexivity -- this should be part of the methods. This is also a section that authors are instructed to put at the end in FAccT papers.

(I have some thoughts about this one as well, but appreciate the thoughtful pointer to medium.com/@caliang/ref... )

12.02.2026 13:44 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

(I'm getting to this in the thread but) I assume the reason is just that they want to force discussion of ethics and didn't want that to count against the page limit.

12.02.2026 13:41 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I actually feel the same way (stronger, actually) about limitations and am completely baffled by why it's the norm for some fields/publications to put limitations at the end.

I need to know about limitations to appropriately interpret the findings! Why do I care about them when I get to the end!

12.02.2026 13:40 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

For example, let's say that you make a well informed ethical decision to obfuscate data in some way in order to protect the privacy of social media users. The reader is wondering why there aren't direct quotes in the findings or why some information is redacted. They should know this going in.

12.02.2026 13:39 πŸ‘ 4 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

I think that decisions about how to conduct research ethically are at the exact same level as any other methodological decision -- e.g. how to recruit participants or what statistical analyses to run. They are also often just as relevant to understanding and interpreting the findings.

12.02.2026 13:37 πŸ‘ 8 πŸ” 1 πŸ’¬ 2 πŸ“Œ 0

As I finish reviewing for FAccT I'd love to get other opinions on a topic. (I'm trying to decide if this is worth raising to decision makers or if I'm overreacting.)

TL;DR Ethical considerations should be in the methods section, so explicitly instructing authors to put them at the end is bad. 🧡

12.02.2026 13:35 πŸ‘ 35 πŸ” 5 πŸ’¬ 5 πŸ“Œ 0

I'm working on an async online class where I'm only permitted to assign 100% open access readings. I feel like half my prep is *despairing* over my inability to use things I want students to read. The latest: Ted Chiang's "Why AI Isn't Going to Make Art" New Yorker article. 😭

10.02.2026 15:13 πŸ‘ 24 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
Fears about TikTok’s policy changes point to a deeper problem in the tech industry Following the app’s sale, the company’s updated privacy policy and terms of service set off alarm bells. The reaction shows Big Tech has lost the public’s trust.

After going down a rabbit hole re: U.S. TikTok's new privacy policy, I couldn't stop thinking about how even if some of the alarm was based on misleading information, it's not surprising that everyone assumes the worst. Anyway, I wrote about this: theconversation.com/fears-about-...

06.02.2026 16:47 πŸ‘ 23 πŸ” 10 πŸ’¬ 0 πŸ“Œ 1

Here's hoping for another one in the near future! <3

05.02.2026 17:33 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0