A CS MS student working with me is about to start a project about academic writers' attitudes towards LLMs! I'll mention you're working on this, she might want to chat with you. :)
@cfiesler
information science professor (tech ethics + internet stuff) kind of a content creator (elsewhere also @professorcasey) though not influencing anyone to do anything except maybe learn things she/her more: casey.prof
In the News: what's going on with AI and the Department of War…
Pentagon threatens Anthropic punishment (Axios)
Statement from Dario Amodei on our discussions with the Department of War (Anthropic)
Statement on the comments from Secretary of War Pete Hegseth (Anthropic)
How Talks Between Anthropic and the Defense Dept. Fell Apart (The New York Times)
OpenAI strikes deal with Pentagon, hours after rival Anthropic was blacklisted by Trump (CNBC)
Anthropic's Claude overtakes ChatGPT in App Store (Mashable)
Users are ditching ChatGPT for Claude — here's how to make the switch (TechCrunch)
U.S. Strikes in Middle East Use Anthropic, Hours After Trump Ban (The Wall Street Journal)
Google employees call for military limits on AI amid Iran strikes, Anthropic fallout (CNBC)
Sam Altman says OpenAI is renegotiating with the Pentagon after an 'opportunistic and sloppy' deal (Fortune)
prepping to try to explain to my AI & Society class what has happened since our last class early last week
I've been thinking a lot about how people interact with chatbots versus online strangers. Recently @heatherkelly.bsky.social (formerly of WaPo siiiigh) asked me a very *interesting* question which is whether chatbots have changed how we ask stupid questions. www.washingtonpost.com/technology/2...
So "bots are doing harassment!" kind of feels like "AI is taking my job!" which attributes agency to AI in a way that is letting actual humans off the hook. Like... why take that very real decision making agency away from THE HUMAN THAT FIRED YOU?
This isn't a story about AI gaining consciousness, it's a story about the capacity for AI agents to contribute to frighteningly scalable harassment. Because "go off my little bot friends and gather intel and write crappy linkedin style blog post hit pieces" is a thing a crappy human can do.
I find this whole "AI agent wrote a hit piece" thing really troubling for reasons that have nothing to do with bots getting "mad" and "deciding" to take down open source contributors. theshamblog.com/an-ai-agent-...
Why do we keep wanting to give AI so much agency that it lets humans off the hook??
For the record, my AI ethics themed standup set a week ago was mostly a mashup of previous sets, but also included:
- a subtle Donald Trump joke
- a subtle Heated Rivalry joke
- a not-subtle dig at AI bros in my YouTube comments
Privacy & Security: Will CU track usage? What data will the university collect? CU will not monitor individual users' interactions with ChatGPT Edu. CU will collect basic use statistics to better understand adoption and use patterns. This data will only be reported publicly in the aggregate. CU will retain the right to audit individual user interactions in isolated and limited cases.
I'm so proud of the students in my AI & Society class! They homed right in on this sentence from the university's FAQ about their new ChatGPT site license, and had a LOT of questions. So do I! I promised I'll try to find out what I can.
Oh definitely not. Also faculty are still permitted to e.g. ban it in their classes.
What about the environmental impacts of artificial intelligence? While AI proficiency is becoming an important part of a well-rounded education and career-readiness, it is also important to acknowledge that current large language model AI technologies are resource intensive. Large language model AI systems require significant computing power, which can increase energy use and environmental impacts. Because CU is committed to doing our part for environmental stewardship and sustainability, we encourage our community to consider the following options for heightening our sustainable use of the tool:
- Optimize AI prompts: Providing AI tools with well-structured prompts can reduce unnecessary AI processing. It reduces the number of queries required, which improves efficiency and lowers computational load.
- Turn off unneeded AI integrations: Some software applications have AI-powered assistants running in the background. Going into the applications' settings and disabling those assistants when not in use will lower energy consumption.
I'm also just going to leave this FAQ answer here.
This quote comes from yesterday's email announcement from my university that they have acquired a ChatGPT license for everyone.
"We know that for some members of our community, generative AI raises significant concerns around privacy, sustainability and ethical use. We share those concerns and are working to mitigate — where possible — the impacts."
Those em dashes are doing a lot of work.
Except it's not actually a word limit, it's a page limit! Though other ACM venues have moved to word limits instead so I'm not sure why FAccT hasn't...
haha no I mean your point is still very good!!
My usual plea for how to do ethics in computer science programs is both standalone classes and in every class. I'm not sure what the version of that is here, except just telling people they need to talk about it and put it in the right place ha.
Agreed! It also doesn't need to all be in one place. Those are different kinds of things! I pointed this out in part because some of the instructions include decisions about human subjects research and that seems so weird to put at the end.
But again, I am highly appreciative of the intention behind this. Doing something to encourage authors to write about research ethics in their papers is far more than most publication venues do, especially in computing.
Maybe my take on this relates to the work I've done on ethics education. Ethical considerations in special sections of papers at the very end that many people won't read makes me think of the senior level standalone CS ethics class you take after none of your profs mentions ethics for 4 years.
(That last bit is directed at authors. :) )
If as a researcher you're going to leave your ethical decision making out of a paper because you need to find 150 words to cut elsewhere, then you probably shouldn't be submitting to a conference with "transparency" in the title.
Ethical Considerations Statement (can be included at submission time). This statement is a description of the ethical concerns and potential adverse impacts that authors considered and mitigated while conducting the work. Authors should describe the ethical challenges they faced in their submission and how they addressed such challenges. In particular, submissions that (1) describe experiments with human subjects/users and/or deployed systems (e.g., websites or apps), or (2) rely on sensitive user data (e.g., social network information) must adhere to precepts of ethical research and community norms. These include compliance with applicable laws and applicable professional ethical codes; respect for privacy; secure storage of sensitive data; voluntary and informed consent when appropriate; avoiding deceptive practices when not essential; beneficence and non-maleficence (maximizing the benefits to an individual or society while minimizing harm to the individual); risk mitigation; and post-hoc disclosure of audits. See also the section on Code of Ethics and Professional Conduct above. We also encourage authors to discuss any potential adverse or unintended impacts the work might have once published, and how they have mitigated those potential impacts.
Now to be clear! My assumption is that the *reason* ethical considerations sections are part of endmatter is that the conference wants to encourage such statements by not counting them towards the page limit. So good intention, but...
1. death to page limits
2. come onnnnnnnn
Interestingly I care much less about mentioning IRB approval (which I tend to just assume happens when human subjects research is conducted, or that they don't have access to an IRB) than about explaining ethical considerations that fall outside of IRB.
Same with positionality/reflexivity -- this should be part of the methods. This is also a section that authors are instructed to put at the end in FAccT papers.
(I have some thoughts about this one as well, but appreciate the thoughtful pointer to medium.com/@caliang/ref... )
(I'm getting to this in the thread but) I assume the reason is just that they want to force discussion of ethics and didn't want that to count against the page limit.
I actually feel the same way (stronger, actually) about limitations and am completely baffled by why it's the norm for some fields/publications to put limitations at the end.
I need to know about limitations to appropriately interpret the findings! Why do I care about them when I get to the end!
For example, let's say that you make a well informed ethical decision to obfuscate data in some way in order to protect the privacy of social media users. The reader is wondering why there aren't direct quotes in the findings or why some information is redacted. They should know this going in.
I think that decisions about how to conduct research ethically are at the exact same level as any other methodological decision -- e.g. how to recruit participants or what statistical analyses to run. They are also often just as relevant to understanding and interpreting the findings.
As I finish reviewing for FAccT I'd love to get other opinions on a topic. (I'm trying to decide if this is worth raising to decision makers or if I'm overreacting.)
TL;DR Ethical considerations should be in the methods section, so explicitly instructing authors to put them at the end is bad. 🧵
I'm working on an async online class where I'm only permitted to assign 100% open access readings. I feel like half my prep is *despairing* over my inability to use things I want students to read. The latest: Ted Chiang's "Why AI Isn't Going to Make Art" New Yorker article.
After going down a rabbit hole re: U.S. TikTok's new privacy policy, I couldn't stop thinking about how even if some of the alarm was based on misleading information, it's not surprising that everyone assumes the worst. Anyway, I wrote about this: theconversation.com/fears-about-...
Here's hoping for another one in the near future! <3