Phil Chodrow

@philchodrow

Applied math, data science, and computing at Middlebury College. www.philchodrow.prof

386 Followers · 453 Following · 1 Post · Joined 07.08.2023
Latest posts by Phil Chodrow @philchodrow

SFI is a research hub, home to between 20 and 50 resident and visiting researchers at any given time. SFI is currently seeking applications for full-time resident faculty. These 5+ year appointments offer broad intellectual freedom and encouragement to take risks.
1/2

22.01.2025 16:39 πŸ‘ 43 πŸ” 14 πŸ’¬ 1 πŸ“Œ 0
Call for Abstracts A NetSci 2025 Satellite Workshop June 2 or 3, 2025, Maastricht, Netherlands

Announcing the "Software and Data for Supporting Network Science" satellite workshop at NetSci 2025!! Apply at netsci.nascol.net/cfa before Feb. 7th! Organized with @zpneal.bsky.social, @vtraag.bsky.social, @foucaultwelles.bsky.social, and @illegaldaydream.bsky.social!

30.12.2024 16:27 πŸ‘ 17 πŸ” 14 πŸ’¬ 1 πŸ“Œ 0

I just found out that as a faculty member, I can check out books from the library with NO DUE DATE. My mind is blown.

14.12.2024 16:32 πŸ‘ 152 πŸ” 3 πŸ’¬ 11 πŸ“Œ 2

Meanwhile this use is deeply irresponsible.

AI generated titles and abstracts draw their text from somewhere, but you don't know where.

Using them is a great way to plagiarize other scholars β€” right where people are most likely to notice.

11.12.2024 23:47 πŸ‘ 280 πŸ” 45 πŸ’¬ 13 πŸ“Œ 4

This is amazing, and I can’t wait to see where this merger goes! If you’re an early career network scientist, I really recommend getting involved with @netplace.bsky.social πŸ”₯πŸ•ΈοΈ

30.11.2024 13:27 πŸ‘ 12 πŸ” 3 πŸ’¬ 0 πŸ“Œ 1
AI Accountability Lab Stewarding a greater ecology of accountability in the age of AI

Friends, this week marks a monumental step forward in my path towards pushing for a greater ecology of accountability in the age of AI. Thrilled to officially announce the AI Accountability Lab (AIAL) @theaial.bsky.social at Trinity College Dublin is launching today, Thursday Nov 28, 2024 aial.ie

28.11.2024 10:16 πŸ‘ 1259 πŸ” 315 πŸ’¬ 92 πŸ“Œ 32

πŸ“Œ

24.11.2024 23:26 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Most people want a quick and simple answer to why AI systems encode and exacerbate societal and historical bias and injustice. Due to the reductive but common thinking of "bias in, bias out," the obvious culprit is often training data, but this is not entirely true.

1/

24.11.2024 16:26 πŸ‘ 598 πŸ” 217 πŸ’¬ 26 πŸ“Œ 42

One week left!

24.11.2024 21:08 πŸ‘ 2 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0
Fake Personableness The race to find generative AI's GUI is on. Here's why we should avoid the temptation to design our tools to imitate persons.

Designing AI interfaces that imitate human conversation is a bad idea. It shapes us in harmful ways as users. In the next phase of AI development, I hope we see more thoughtful interfaces that don’t rely on the deception of fake personableness.

20.11.2024 01:07 πŸ‘ 3 πŸ” 2 πŸ’¬ 2 πŸ“Œ 0
What’s the impact of artificial intelligence on energy demand? The International Energy Agency (IEA) thinks we should all chill out a bit.

This is an interesting piece from @hannahritchie.bsky.social about the relative energy demands of data centers and generative AI. Well worth a read if you’re curious about putting these numbers into context.

20.11.2024 04:38 πŸ‘ 3 πŸ” 3 πŸ’¬ 1 πŸ“Œ 0

@jamescsanchez.bsky.social @amandagregg.bsky.social @ajayverghese.bsky.social @ajacobel.bsky.social @bencotts.bsky.social @caitlinmyers.bsky.social @philchodrow.bsky.social @jennortegren.bsky.social @eilatg.bsky.social
Hey Midd folks, I made a starter pack to find each other. Who am I missing?

19.11.2024 01:09 πŸ‘ 8 πŸ” 1 πŸ’¬ 2 πŸ“Œ 0

We need to have a science-wide announcement that LLMs networked together in an ABM is *absolutely not* a useful way to study human collective behavior.

18.11.2024 17:43 πŸ‘ 70 πŸ” 14 πŸ’¬ 7 πŸ“Œ 3
We're hiring a postdoc in the &-Lab at Northeastern's Network Science Institute!

Looking for a curious, collaborative scholar to work on computational social science questions, at the intersection of data justice + network science.

northeastern.wd1.myworkdayjobs.com/careers/job/...

13.11.2024 12:36 πŸ‘ 50 πŸ” 40 πŸ’¬ 0 πŸ“Œ 1

Y'all even though I left for industry, I'm still a network scientist / computational social scientist / communication researcher / social media scholar

I want to be included in some starter packs too πŸ₯²

13.11.2024 01:08 πŸ‘ 21 πŸ” 2 πŸ’¬ 2 πŸ“Œ 0
The BlueSky team created a great tool to help newcomers: starter packs.

Here's a quick starter pack for #complexity and network scientists, plus feeds to quickly join the community! #NetSky

Please help share (and if you are not on the list, get in touch and I will add you)

go.bsky.app/KMfiTU2

12.11.2024 20:41 πŸ‘ 168 πŸ” 85 πŸ’¬ 73 πŸ“Œ 13

Very important work from @kspoon and an all-star team from CU Boulder and beyond. Thread.

21.10.2023 00:10 πŸ‘ 36 πŸ” 11 πŸ’¬ 1 πŸ“Œ 0

Asked to comment on the impact of AI in education, I argued that rather than worrying about fraud (an integrity issue) or adaptation to new tools, we should focus on how to educate future developers: shouldn't we include the social risks of our work in ML syllabi? PT only sicnoticias.pt/especiais/in...

23.08.2023 14:36 πŸ‘ 12 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
Beautiful crow against a black background

It seems to me that the time is ripe for a Bluesky thread about howβ€”and maybe even whyβ€”to befriend crows.

(1/n)

20.08.2023 01:55 πŸ‘ 7945 πŸ” 2584 πŸ’¬ 472 πŸ“Œ 736
Representing as your own work the writing of another person, or the output of an AI/LLM such as ChatGPT, is plagiarism and cause for academic misconduct, which will be reported to the university and result in a grade penalty to be determined by the instructor.

Note that using the work/output of another person/AI is not plagiarism as long as you clearly communicate that outside help was used, but your submission may not meet expectations if it does not demonstrate your personal understanding of the appropriate learning outcome.

For ethical reasons, the instructor discourages the use of AIs or LLMs that have been trained on data that were not licensed for such use. Students are also warned that these products can reflect the biases of the data they are trained on, and may produce content that is sexist, racist, or otherwise inappropriate.

My AI/LLM/ChatGPT syllabus statement. On this topic, does anyone know of an LLM that's trained solely on appropriately licensed data (e.g. everything was consented to or openly licensed)?

19.08.2023 16:12 πŸ‘ 4 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

This is going to be a huge problem for educational institutions using Zoom.

07.08.2023 01:07 πŸ‘ 17 πŸ” 8 πŸ’¬ 1 πŸ“Œ 1