
EunJeong Hwang

@ejhwang

PhD @ UBC. LLMs/NLP

248 Followers
78 Following
11 Posts
Joined 17.11.2024

Latest posts by EunJeong Hwang @ejhwang


In this amazing multidisciplinary collaboration, we report our early experience with the @openclaw-x.bsky.social ->

23.02.2026 23:32 πŸ‘ 40 πŸ” 21 πŸ’¬ 1 πŸ“Œ 9

Co-lead with @yuweiyin.bsky.social

Huge thanks to @veredshwartz.bsky.social, Peter West, Giuseppe Carenini
Paper: huggingface.co/papers/2509....
Code will be released soon!

02.10.2025 23:44 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Our findings highlight that:
πŸ‘‰ Social reasoning in LLMs cannot be achieved through optimizing their performance on general reasoning benchmarks alone!
πŸ‘‰ It requires explicit modeling of mental states to enable safe, fair, and effective interactions with humans.

02.10.2025 23:44 πŸ‘ 2 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We also examine mental states (beliefs, desires, intentions, emotions, knowledge).

πŸ”Ή ToMA prioritizes intentions > emotions (other dimensions remain similar)
πŸ”Ή It uses +5.6% more 1st-order beliefs than the base models, even when both are prompted equally for 0th-/1st-order states.

02.10.2025 23:44 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

We analyze 4 scenario types: cooperation, negotiation, persuasion, and conflict.

ToMA outperforms the base under all settings. Its reasoning is more strategic (e.g., compromise, accommodation). Even in failures, ToMA shows more active engagement (e.g., failed persuasion).

02.10.2025 23:44 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

ToMA adapts effectively to long conversations, sustaining strategic dialogue. When paired with diverse partners, it improves both its own goal completion and its partners’ success.

02.10.2025 23:44 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

ToMA generates latent mental states and utterances optimized for social interaction goals using dialogue simulation signals. On Sotopia, it improves performance by +18.9% with Qwen2.5-3B and +6.9% with Qwen2.5-7B, while remaining competitive with a GPT-5 nano baseline.

02.10.2025 23:44 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
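To make the lookahead idea in the post above concrete, here is a minimal, self-contained toy sketch in Python. Every name in it (generate_mental_state, simulate_dialogue, score_goal_completion, the random scoring) is a hypothetical stand-in for illustration, not ToMA's actual training code or the paper's API.

```python
import random

def generate_mental_state(history, goal):
    # Illustrative stand-in: sample an inferred partner intention from a fixed set.
    return random.choice([
        "partner wants a lower price",
        "partner is ready to walk away",
        "partner is open to a compromise",
    ])

def generate_utterance(history, mental_state):
    # Illustrative stand-in: condition the next reply on the inferred mental state.
    return f"(reply conditioned on: {mental_state})"

def simulate_dialogue(history, turns=3):
    # Illustrative stand-in: a partner model would continue the conversation here.
    return history + [f"(simulated partner turn {i + 1})" for i in range(turns)]

def score_goal_completion(rollout, goal):
    # Illustrative stand-in: a real system would score goal completion with an evaluator model.
    return random.random()

def lookahead_turn(history, goal, n_candidates=4, horizon=3):
    """Pick the (mental state, utterance) candidate whose simulated
    continuation best advances the speaker's goal; the winning pair can
    then serve as a training signal for the generator."""
    best = None
    for _ in range(n_candidates):
        state = generate_mental_state(history, goal)
        utterance = generate_utterance(history, state)
        rollout = simulate_dialogue(history + [utterance], turns=horizon)
        reward = score_goal_completion(rollout, goal)
        if best is None or reward > best[0]:
            best = (reward, state, utterance)
    return best

print(lookahead_turn(["Hi, I'm selling a bike for $200."], goal="negotiate a fair price"))
```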

Theory of Mind can make LLMs better at dialogue: more strategic, more goal-oriented, and able to adapt over long horizons!

In our new paper, we introduce ToMA, a dialogue lookahead training framework that enables LLMs to generate mental states that are maximally useful for achieving dialogue goals. πŸ§΅πŸ‘‡

02.10.2025 23:44 πŸ‘ 4 πŸ” 1 πŸ’¬ 1 πŸ“Œ 0

πŸ‘‹

24.11.2024 07:16 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I'm also curious about this.

23.11.2024 00:45 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Also, consider presenting a poster showcasing any ongoing projects or previously presented works from recent conferences. It will be a great chance to get feedback and promote your work!

22.11.2024 20:35 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

We are organizing an NLP workshop in Vancouver on Dec 10. Consider registering if you're here for NeurIPS - it's free and open to everyone interested in NLP!
We have a great lineup of invited talks and panel discussions.

More details here: nlp.cs.ubc.ca/future-of-nl...

22.11.2024 20:35 πŸ‘ 3 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0