There are a few with good vibes and (somewhat) specialty coffee. Personally I like KLVN (near Bakery Square), Arriviste (Shadyside), Redhawk (Oakland). They're not super fancy, but way better than the well-known chains!
📍 Tuesday 5:45 pm - 8:00 pm in Exhibit Hall, poster no. 437
My colleague Łukasz Sztukiewicz will present our joint work (with @inverse-hessian.bsky.social) on the relationship between saliency maps and fairness as part of the Undergraduate and Master's Consortium.
📄 Paper: arxiv.org/abs/2503.00234
📍 Monday 8:00 am - 12:00 pm in Room 700
Presenting our work on mitigating persistent client dropout in decentralized federated learning as part of the FedKDD workshop.
🔗 Project website: ignacystepka.com/projects/fed...
📄 Paper: openreview.net/pdf/576de662...
📍 Tuesday 5:30 - 8:00 pm (poster no. 141) and Friday 8:55 - 9:15 (Room 801 A, talk)
I'll be giving a talk and presenting a poster on robust counterfactual explanations.
🔗 Project website: ignacystepka.com/projects/bet...
📄 Paper: arxiv.org/abs/2408.04842
This week I'm presenting some of my work at #KDD2025 in Toronto 🇨🇦
Let's connect if you're interested in privacy/gradient inversion attacks in federated learning, counterfactual explanations, or fairness and XAI!
Here's where you can find me:
Explore more:
📄 Paper: arxiv.org/abs/2408.04842
👨‍💻 Code: github.com/istepka/beta...
🔗 Project page: ignacystepka.com/projects/bet...
🙏 Big thanks to my co-authors Jerzy Stefanowski and Mateusz Lango!
#KDD2025 #TrustworthyAI #XAI 7/7 🧵
📊 Results: Across 6 datasets, BetaRCE consistently achieved target robustness levels while preserving explanation quality and maintaining a competitive robustness-cost trade-off. 6/7 🧵
You control both the confidence level (α) and the robustness threshold (δ), giving statistical guarantees that your explanation will survive model changes! For formal proofs on optimal SAM sampling methods and the full theoretical foundation, check out our paper! 5/7 🧵
⚙️ Under the hood: BetaRCE explores a "Space of Admissible Models" (SAM) representing expected, foreseeable changes to your model. Using Bayesian statistics, we efficiently estimate the probability that an explanation remains valid across these changes. 4/7 🧵
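To make the Beta-Bernoulli idea behind that post concrete, here is a minimal sketch: sample models from the SAM, count how often the counterfactual stays valid, and take a one-sided lower credible bound on the validity probability from the Beta posterior. This is a hypothetical illustration, not the authors' implementation — the function name, prior, and sample counts are all made up, and the posterior quantile is approximated by Monte Carlo draws from Python's stdlib `random.betavariate`.

```python
import random

def validity_lower_bound(successes, trials, confidence=0.9,
                         prior=(1.0, 1.0), n_draws=20000, seed=0):
    """One-sided lower credible bound on the probability that a
    counterfactual stays valid, under a Beta-Bernoulli model.

    successes -- number of sampled models where the CFE was still valid
    trials    -- total number of models sampled from the SAM
    """
    rng = random.Random(seed)
    a = prior[0] + successes                 # posterior alpha
    b = prior[1] + (trials - successes)      # posterior beta
    draws = sorted(rng.betavariate(a, b) for _ in range(n_draws))
    # The (1 - confidence) quantile of the posterior is the lower bound.
    return draws[int((1.0 - confidence) * n_draws)]

# Example: the CFE stayed valid for 48 of 50 models sampled from the SAM.
bound = validity_lower_bound(48, 50, confidence=0.9)
# Accept the explanation as robust if bound >= delta (the chosen threshold).
```

Raising the confidence level pushes the bound lower (a more conservative guarantee), which is exactly the α/δ trade-off described in the thread.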
✅ Our solution: BetaRCE offers probabilistic guarantees of robustness to model change. It works with ANY model class, is post-hoc, and can enhance your current counterfactual methods. Plus, it lets you control the robustness-cost trade-off. 3/7 🧵
❌ This happens constantly in real-world AI systems, and current explanation methods don't address it well: they're limited to specific model classes, require extensive tuning, or lack guarantees about explanation robustness. 2/7 🧵
📣 New paper at #KDD2025 on robust counterfactual explanations!
Imagine an AI tells you "Increase your income by $200 to get a loan." You do it, but when you reapply, the model has been updated and rejects you anyway. We solve this by making CFEs robust to model changes! 1/7 🧵