
Stanford Tech Impact and Policy Center

@techimpactpolicy

Transforming research into real-world impact to advance human agency and well-being in the era of social media and AI. 🔗 tip.fsi.stanford.edu

563 Followers · 45 Following · 80 Posts · Joined 14.12.2023

Latest posts by Stanford Tech Impact and Policy Center @techimpactpolicy

Coming up today at 12PM PT – join us online or in person!

03.03.2026 16:32 👍 0 🔁 0 💬 0 📌 0
Giovanni Ramos | Equity in Digital Mental Health Interventions

How can we advance #equity in digital #MentalHealth interventions?

Join us on Tuesday, March 3 for a seminar with @gramos.bsky.social, who will explore the state of the science in #DMHIs and share findings for creating more equitable models and expanding access for marginalized groups.

Join us! ⬇️

27.02.2026 16:44 👍 5 🔁 4 💬 0 📌 2
Large Language Models Require Curated Context for Reliable Political Fact-Checking – Even with Reasoning and Web Search
Large language models (LLMs) have raised hopes for automated end-to-end fact-checking, but prior studies report mixed results. As mainstream chatbots increasingly ship with reasoning capabilities and ...

Explore the preprint ⬇️

23.02.2026 17:04 👍 2 🔁 1 💬 0 📌 0
AI Chatbots Struggle at Fact-Checking, but Curated Evidence Can Help
Can AI chatbots reliably tell you whether a political claim is true or false? And if not, what would it take to make them trustworthy fact-checkers?

Matt and co-authors Kai-Cheng Yang, Harry Yaojun, and Filippo Menczer found that today's leading models perform poorly, even when equipped with advanced reasoning and web search capabilities.

👉 The key to better performance? Giving them access to high-quality, curated evidence.

Read the summary ⬇️

23.02.2026 17:04 👍 2 🔁 1 💬 1 📌 0

Can #AI #chatbots reliably tell you whether a political claim is true or false? If not, what would it take to make them trustworthy fact-checkers?

A new study led by Matt DeVerna tackles these questions by evaluating 15 #LLMs on more than 6K claims fact-checked by PolitiFact over an 18-year period.

23.02.2026 17:04 👍 6 🔁 2 💬 1 📌 0
Tom Schnaubelt | Becoming a Citizen in the Age of Algorithms: Civic

What does it mean to become a citizen in an age of polarization, platforms, and declining trust in institutions?

Join us on Tuesday, Feb. 24 for a seminar with Tom Schnaubelt of @hooverinstitution.bsky.social to explore innovative approaches to civic learning in the digital age.

RSVP ⬇️

20.02.2026 18:58 👍 1 🔁 1 💬 0 📌 0

📢 Last call! 📢

Submit your commentaries for the Journal of Online Trust and Safety by March 1 to be considered for the Spring 2026 #JOTS issue.

We invite letters, editorials, or other #TrustAndSafety research outputs to be submitted as commentaries.

Details & submission form ⬇️
bit.ly/4oi8b97

19.02.2026 18:01 👍 0 🔁 0 💬 0 📌 0
Robin Nabi | The Dangers of Focusing on the Danger of Digital Media Use
Toward a prescription for healthier, more balanced media diets

Join us on Feb. 17 for a seminar with Robin Nabi of UCSB!

Sharing evidence of how digital media content can support desirable outcomes, Robin will highlight the need for better understanding of how to empower users to make media choices that can support their psychological well-being.

RSVP ⬇️

12.02.2026 17:56 👍 0 🔁 0 💬 0 📌 0
Inside the marketplace powering bespoke AI deepfakes of real women
New research details how Civitai lets users buy and sell tools to fine-tune deepfakes the company says are banned.

"Civitai...is letting users buy custom instruction files for generating celebrity #deepfakes. Some of these files were specifically designed to make pornographic images banned by the site, a new analysis has found."

Read more ⬇️

10.02.2026 19:23 👍 0 🔁 0 💬 0 📌 0

@technologyreview.com profiled a new study by TIP Center Postdoctoral Scholar Matt DeVerna and co-authors examining the generative #AI platform Civitai and its "Bounties" feature, which allows users to commission the generation of content in exchange for payment.

10.02.2026 19:23 👍 0 🔁 0 💬 1 📌 0

Join us today at noon PT, in person or online!

10.02.2026 16:36 👍 1 🔁 1 💬 0 📌 0

🚨 New WP: Can an AI voter guide (grounded in information from a nonpartisan, fact-checked source) help voters' decision making? 🚨

We built and evaluated an LLM-based chatbot that provided voting info in CA & TX (N=2,474) right before the 2024 election. 🧵👇

09.02.2026 20:56 👍 26 🔁 10 💬 3 📌 3

Professor G'sell advances a central proposition: #blockchain systems reconfigure, rather than eliminate, traditional structures of authority, and remain only partially decentralized in practice.

Explore the preliminary draft for a sneak peek at the findings. ⬇️

09.02.2026 23:03 👍 0 🔁 0 💬 0 📌 0

Is #blockchain technology delivering on its promise to redistribute authority away from centralized institutions?

A forthcoming report by @flogsell.bsky.social analyzes the legal and regulatory challenges posed by decentralized blockchain systems.

09.02.2026 23:03 👍 1 🔁 1 💬 1 📌 0
Metaphors of AI indicate that people increasingly perceive AI as warm and human-like - Communications Psychology
Analyzing 12,000 metaphors of AI from a year-long U.S. survey, this study introduces a scalable framework for quantitatively analyzing implicit perceptions from open-ended language and shows that Amer...

A new paper by Tech Impact and Policy Center Postdoctoral Fellow Angela Lee, Director Jeff Hancock, and co-authors explores the public perception of #AI and how it informs people's #trust and willingness to adopt AI technologies.

Read via Communications Psychology ⬇️

04.02.2026 16:16 👍 1 🔁 1 💬 0 📌 0
Presenter Application: Trust and Safety Research Conference 2026
The Stanford Tech Impact and Policy Center's two-day Trust and Safety Research Conference focuses on research in trust and safety for those in academia, industry, civil society, and government. The co...

Apply by April 30 to present your work at #TSRConf through a presentation, lightning talk, poster, participant-organized panel, or workshop.

Don't miss this opportunity to share your #TrustAndSafety research with a community dedicated to making the internet safer for everyone.

👉 bit.ly/4agUJ0f

03.02.2026 18:23 👍 0 🔁 0 💬 0 📌 0

📢 Call for proposals for #TSRConf 2026 📢

Mark your calendars! The Trust and Safety Research Conference returns on October 1–2, 2026, bringing together 500+ professionals from academia, industry, civil society, and government to tackle the most pressing questions in #TrustAndSafety research.

03.02.2026 18:21 👍 1 🔁 1 💬 1 📌 0
Guilherme Lichand | The Educational Impacts of School Phone Bans
Evidence from Brazil

Join us on February 10 to explore one of the key questions around tech impact and policy in today's K-12 schools: the educational effects of #SchoolCellphoneBans.

Guilherme Lichand will present evidence that phone restrictions in schools *causally* boost K–12 learning outcomes.

Details & RSVP ⬇️

02.02.2026 16:43 👍 1 🔁 0 💬 0 📌 2
January 27 | AI, Automation, and Augmentation

Will #AI replace human workers or will it empower them?

Join us for a talk by @robreich.bsky.social, who will examine the distinction between #automation & #augmentation and discuss how design choices, policy decisions, and adoption patterns will determine AI's effects on labor and society.

RSVP ⬇️

23.01.2026 17:10 👍 1 🔁 1 💬 0 📌 0
Empowering users to discern fact from fiction in the age of AI
A new project will explore interventions that help individuals effectively use AI while building literacy to avoid scams and abuse.

Expanding on the Stanford Social Media Lab's research on digital literacy, the project will design and test interventions to help users harness the powers of #AI while building the literacy they need to avoid scams and other abuse.

Learn more 👇

22.01.2026 16:41 👍 0 🔁 0 💬 0 📌 0

The Stanford Report profiled the Empowering Diverse Digital Citizens research project, which is spearheaded by Tech Impact and Policy Center Director Jeff Hancock and supported by @stanfordimpactlabs.bsky.social.

#AILiteracy #DigitalLiteracy

22.01.2026 16:41 👍 0 🔁 0 💬 1 📌 0
AI is intensifying a 'collapse' of trust online, experts say
From Venezuela to Minneapolis, the rapid rollout of deepfakes around major news events is stirring confusion and suspicion about real news.

"That's going to be the big challenge, is that for a while people are really going to not trust things they see in digital spaces."

Jeff Hancock spoke with NBC News about the shifting dynamics of trust and media literacy as deepfakes and other AI-generated images flood digital ecosystems.

Read ⬇️

21.01.2026 17:32 👍 0 🔁 0 💬 0 📌 0
Why People Create AI "Workslop" – and How to Stop It
With the rise of gen AI tools, offices have had to contend with a new scourge: "workslop," or low-effort, AI-generated work that looks plausibly polished but ends up wasting time and effort as it offl...

Is making more time and space for being *human* the key to making #AI work in the workplace?

In a follow-up to their groundbreaking article on AI #workslop, Jeff Hancock & co-authors share insight on what's driving the rise in workslop, and how organizational leaders can prevent it.

Read @hbr.org ⬇️

20.01.2026 21:00 👍 0 🔁 0 💬 0 📌 0

Starting soon – join us online!

👉 stanford.io/49ott0x

20.01.2026 19:29 👍 0 🔁 0 💬 0 📌 0

How can we shift from seeing #AI as a cheating tool to a pedagogical partner that fosters creativity, critical thinking, and personalization?

Join us in person or online for the next talk in our #WinterSeminarSeries, featuring Peter Norvig of @stanfordhai.bsky.social!

RSVP ⬇️
stanford.io/49ott0x

16.01.2026 16:19 👍 1 🔁 1 💬 0 📌 1
January 13 | How Tech Has Enabled Survey Research and Undermined It
YouTube video by Stanford Tech Impact and Policy Center

Next was an excellent talk by Jon Krosnick on the history of tech-enabled survey research at @techimpactpolicy.bsky.social. I loved the review of industry and academia's fraught relationship with honestly communicating methodological limits. Highly recommend www.youtube.com/watch?v=8Y4B... (7/8)

14.01.2026 03:53 👍 1 🔁 1 💬 1 📌 0
New Report on AI-Generated Child Sexual Abuse Material
Insights from Educators, Platforms, Law Enforcement, Legislators, and Victims

Read the full research report ⬇️

13.01.2026 19:34 👍 0 🔁 0 💬 0 📌 0
Opinion | There's One Easy Solution to the A.I. Porn Problem

In light of Grok's ongoing deepfake nude scandal, Riana Pfefferkorn of @stanfordhai.bsky.social wrote an op-ed for @nytopinion.nytimes.com sharing research she published with Tech Impact and Policy Center last year that found legal risk is impeding AI companies from better safeguarding their models.

13.01.2026 19:34 👍 0 🔁 0 💬 1 📌 0

Coming up today at 12PM PT – join us!

13.01.2026 16:29 👍 1 🔁 0 💬 0 📌 0

Join us on Tuesday for the launch of our #WinterSeminarSeries!

Award-winning Stanford professor, research psychologist, and public opinion expert Jon A. Krosnick will discuss the role of #SurveyResearch in modern life and its evolution in the digital era.

RSVP: stanford.io/4qHo626

08.01.2026 19:58 👍 2 🔁 1 💬 0 📌 1