
Marco Minici

@marcominici

Researcher at ICAR-CNR. I develop computational tools to identify threats to online users posed by malicious actors and by algorithms that behave unpredictably. Personal website: https://mminici.github.io

77 Followers · 141 Following · 12 Posts · Joined 05.12.2024

Latest posts by Marco Minici @marcominici


πŸ“’ New paper! We study urban location recommenders and their feedback loop with human mobility. Simulating this loop reveals a paradox: individuals explore more, yet city-wide visits and encounters concentrate. Cities coevolve with AI, and inequality can grow.
πŸ“„ link.springer.com/article/10.1...

09.01.2026 20:06 πŸ‘ 4 πŸ” 2 πŸ’¬ 1 πŸ“Œ 0
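To make the paradox concrete, here is a toy sketch of a recommender–mobility feedback loop. This is not the paper's model: the dynamics (a recommender that always suggests the currently most-visited place, a `follow_prob` parameter) and all function names are my own illustrative assumptions.

```python
import random

def simulate_recommendation_loop(n_users=200, n_places=50, steps=100,
                                 follow_prob=0.8, seed=0):
    """Toy recommender-mobility feedback loop (illustrative only).

    Each step, every user either follows the recommender (which
    suggests the currently most-visited place) or explores a random
    place. Visit counts feed back into the next recommendation.
    """
    rng = random.Random(seed)
    visits = [1] * n_places  # visit counts per place
    for _ in range(steps):
        for _ in range(n_users):
            if rng.random() < follow_prob:
                # recommender favors the most popular place so far
                place = max(range(n_places), key=lambda p: visits[p])
            else:
                place = rng.randrange(n_places)  # individual exploration
            visits[place] += 1
    return visits

def gini(xs):
    """Gini coefficient of a list: 0 = perfectly even, ->1 = concentrated."""
    xs = sorted(xs)
    n = len(xs)
    cum = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * cum) / (n * sum(xs)) - (n + 1) / n

visits = simulate_recommendation_loop()
print(f"Gini of place visits: {gini(visits):.2f}")
```

Even this crude loop reproduces the qualitative effect: the more users follow the recommender, the more visits concentrate on a few places, which the Gini coefficient picks up.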
screenshot of the title and authors of the Science paper that are linked in the next post

Our new article in @science.org enables social media reranking outside of platforms' walled gardens.

We applied LLM-powered reranking of highly polarizing political content to N=1256 participants' feeds. Downranking this content cools tensions toward the opposite party, but upranking inflames them.

01.12.2025 19:33 πŸ‘ 47 πŸ” 13 πŸ’¬ 1 πŸ“Œ 2
Today, social media platforms hold the sole power to study the effects of feed-ranking algorithms. We developed a platform-independent method that reranks participants’ feeds in real time and used this method to conduct a preregistered 10-day field experiment with 1256 participants on X during the 2024 US presidential campaign. Our experiment used a large language model to rerank posts that expressed antidemocratic attitudes and partisan animosity (AAPA). Decreasing or increasing AAPA exposure shifted out-party partisan animosity by more than 2 points on a 100-point feeling thermometer, with no detectable differences across party lines, providing causal evidence that exposure to AAPA content alters affective polarization. This work establishes a method to study feed algorithms without requiring platform cooperation, enabling independent evaluation of ranking interventions in naturalistic settings.
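The core mechanic (score each post for AAPA, then reorder the fetched feed before display) can be sketched as below. This is not the study's implementation: the `Post` type, the keyword-stub `score_aapa` (standing in for the real LLM classifier call), and the sort scheme are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    aapa_score: float = 0.0  # 0..1, higher = more AAPA content
    platform_rank: int = 0   # original position in the feed

def score_aapa(post: Post) -> float:
    """Placeholder for an LLM classifier call. In the real experiment a
    large language model rated each post; this keyword stub is purely
    for illustration."""
    hostile_markers = ("enemy of the people", "destroy the other party")
    return 1.0 if any(m in post.text.lower() for m in hostile_markers) else 0.0

def rerank(feed: list[Post], direction: str = "down") -> list[Post]:
    """Reorder a fetched feed before display.

    direction="down" pushes high-AAPA posts toward the bottom;
    direction="up" pulls them toward the top. Ties keep the platform's
    original ordering (Python's sort is stable).
    """
    for p in feed:
        p.aapa_score = score_aapa(p)
    sign = -1 if direction == "up" else 1
    return sorted(feed, key=lambda p: (sign * p.aapa_score, p.platform_rank))
```

Because the reranker only needs the fetched feed as input, it can run client-side, which is what makes the approach platform-independent.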

New paper in Science:

In a platform-independent field experiment, we show that reranking content expressing antidemocratic attitudes and partisan animosity in social media feeds alters affective polarization.

🧡

01.12.2025 07:59 πŸ‘ 153 πŸ” 67 πŸ’¬ 4 πŸ“Œ 3

What does coordinated inauthentic behavior look like on TikTok?

We introduce a new framework for detecting coordination in video-first platforms, uncovering influence campaigns using synthetic voices, split-screen tactics, and cross-account duplication.
πŸ“„https://arxiv.org/abs/2505.10867

19.05.2025 15:42 πŸ‘ 21 πŸ” 9 πŸ’¬ 2 πŸ“Œ 2
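One building block of coordination detection, flagging accounts that repeatedly publish identical content, can be sketched as follows. The function name, the `(account, fingerprint)` input shape, and the `min_shared` threshold are my own illustrative assumptions, not the paper's framework.

```python
from collections import defaultdict
from itertools import combinations

def coordination_edges(posts, min_shared=2):
    """Link accounts that repeatedly publish identical content.

    `posts` is an iterable of (account_id, content_fingerprint) pairs,
    where a fingerprint could be a video hash, a synthetic-voice audio
    hash, or a transcript hash. Accounts sharing at least `min_shared`
    fingerprints get an edge; connected components of the resulting
    graph are candidate coordinated networks.
    """
    by_fp = defaultdict(set)
    for account, fp in posts:
        by_fp[fp].add(account)
    shared = defaultdict(int)  # (a, b) -> number of shared fingerprints
    for accounts in by_fp.values():
        for a, b in combinations(sorted(accounts), 2):
            shared[(a, b)] += 1
    return {pair for pair, n in shared.items() if n >= min_shared}
```

Requiring more than one shared fingerprint (`min_shared=2`) is what separates coincidental resharing from the systematic cross-account duplication the post describes.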

We constantly ask our apps where to visit, eat or drink.
AI tells us, and most of the time, we follow it. The loop continues.
But do AIs favor certain places? How would we even know if we don’t own the platforms?
We modeled this complex phenomenon, and results are fascinating!
Spoiler: rich get…

11.04.2025 09:01 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 1

IU's Observatory on Social Media defends citizens from online manipulation – the opposite of censorship
osome.iu.edu/research/blo...

04.03.2025 00:46 πŸ‘ 108 πŸ” 51 πŸ’¬ 0 πŸ“Œ 11

Preprint is available on arXiv arxiv.org/abs/2412.14663

03.03.2025 17:24 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

This work would not have been possible without the other amazing coauthors @luceriluc.bsky.social @frafabbri.bsky.social @emilioferrara.bsky.social

Bonus Pic: myself beyond excited to stand next to my poster!

03.03.2025 17:23 πŸ‘ 3 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

Our work provides a scalable approach for online moderation teams, public institutions, and independent organizations to audit the health of online environmentsβ€”especially crucial during political events such as election cycles.

03.03.2025 17:23 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

2. We explore how our multimodal framework exhibits foundation model behavior in detecting online information operations. Our results show that pretraining IOHunter on past IO datasets enables it to generalize to new, emerging IOs with only a few labeled examples for fine-tuning.

03.03.2025 17:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Key takeaways:

1. We propose a multimodal framework that effectively integrates textual and graph information using a cross-attention mechanism, which is then processed by a GNN.

03.03.2025 17:23 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
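The fusion step described above (text embeddings attending over graph-derived node features before a GNN) can be sketched in NumPy. This is not IOHunter's implementation: the random projection matrices stand in for learned parameters, and the function name and dimensions are illustrative assumptions.

```python
import numpy as np

def cross_attention(text_emb, node_feats, d_k=16, seed=0):
    """Sketch of cross-attention fusing per-account text embeddings
    (queries) with graph-derived node features (keys/values).

    Random weights stand in for learned projections; the fused output
    would then be passed to a GNN over the account graph.
    """
    rng = np.random.default_rng(seed)
    d_t, d_g = text_emb.shape[1], node_feats.shape[1]
    W_q = rng.standard_normal((d_t, d_k)) / np.sqrt(d_t)
    W_k = rng.standard_normal((d_g, d_k)) / np.sqrt(d_g)
    W_v = rng.standard_normal((d_g, d_k)) / np.sqrt(d_g)
    Q, K, V = text_emb @ W_q, node_feats @ W_k, node_feats @ W_v
    scores = Q @ K.T / np.sqrt(d_k)          # (n_accounts, n_accounts)
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)  # row-wise softmax
    return attn @ V  # one fused feature row per account

fused = cross_attention(np.random.rand(5, 32), np.random.rand(5, 8))
print(fused.shape)  # (5, 16)
```

The point of the cross-attention, as the thread notes, is to let the model weight the textual and graph modalities per account rather than simply concatenating them.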

Can we effectively detect covert Information Operations (IOs) that attempt to manipulate socio-political debates on social media?

This is the focus of our work, "IOHunter: Graph Foundation Model to Uncover Online Information Operations", just presented at the #AAAI #AAAI2025

03.03.2025 17:23 πŸ‘ 7 πŸ” 2 πŸ’¬ 2 πŸ“Œ 0
The three horsemen of social media: brain rot, anxiety and foreign interference
With so many media institutions bidding farewell to X, it's a good time to reflect on the relationship status of social media and European society at large.
voxeurop.eu/en/social-me...

26.12.2024 19:31 πŸ‘ 38 πŸ” 9 πŸ’¬ 4 πŸ“Œ 1

Read our preprint available on arXiv at: arxiv.org/abs/2412.14663

23.12.2024 14:01 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Our effort highlights the critical role of multi-modality in modeling malicious user behavior, the value of attention for weighting the modalities, and how pre-training our architecture on a dataset of previous IOs advances toward a GFM for the IO detection task.

23.12.2024 14:01 πŸ‘ 1 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Our work demonstrates how a multi-modal framework based on GNN+LM and massive pre-training produces a model that effectively generalizes to IOs not present in the original training dataset β€” the most realistic scenario for IO detection.

23.12.2024 14:01 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Our model delivers substantial improvements over current IO detection methods across three learning tasks:

1️⃣ Supervised IO Detection
2️⃣ Scarcely-Labeled Supervised IO Detection
3️⃣ Cross-IO Detection (with minimal or no labeled data from emerging IOs)

23.12.2024 14:01 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Maintaining the integrity of online discourse is essential for safeguarding fair democratic processes.

Our multi-modal learning framework IOHunter integrates both content and contextual information to identify actors attempting to manipulate online discussions, i.e., IO drivers.

23.12.2024 14:01 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

"IOHunter: Graph Foundation Model to Uncover Online Information Operations" goes to AAAI'25!
This is the result of an incredible collaboration with @luceriluc.bsky.social @frafabbri.bsky.social and @emilioferrara.bsky.social

Read the entire thread for a summary and the link to the preprint.

23.12.2024 14:01 πŸ‘ 5 πŸ” 1 πŸ’¬ 1 πŸ“Œ 2
Exposing Cross-Platform Coordinated Inauthentic Activity in the Run-Up to the 2024 U.S. Election

Figure 1

Figure 2 & Table 5

Figure 3

New evidence of cross-platform foreign interference on social media during the 2024 U.S. Election that drove the spread of highly partisan, low-credibility, and conspiratorial content, from Cinus, Minici, @luceriluc.bsky.social @emilioferrara.bsky.social arxiv.org/pdf/2410.22716

08.12.2024 20:43 πŸ‘ 77 πŸ” 30 πŸ’¬ 1 πŸ“Œ 5