
Amine El Ouassouli

@aelouass

CS / DS / ML / AI (whatever it is called now) Ph.D. Engineer.

147 Followers · 1,353 Following · 39 Posts · Joined 21.11.2024

Latest posts by Amine El Ouassouli @aelouass

That it’s time-off o’clock.

17.07.2025 21:51 👍 1 🔁 0 💬 0 📌 0
X’s dominance ‘over’ as Bluesky becomes new hub for research

Data indicates more scholars turning to alternative social media site to post about their work after Elon Musk’s Twitter takeover

'Bluesky has overtaken its flailing rival X in hosting posts related to new academic research, indicating the platform is fast becoming the go-to place for scholars to share their work.'

09.04.2025 07:14 👍 17547 🔁 4416 💬 133 📌 318

13 minutes of wisdom.

“No authorities in science”.

Amen to that.

06.03.2025 22:41 👍 2 🔁 0 💬 0 📌 0
ImageNet Moment for Reinforcement Learning? — YouTube video by Machine Learning Street Talk

@jfoerst.bsky.social's take on how the community sees the ARC Challenge, and on how we evaluate models and use benchmarks nowadays, is 👌.

#more_science_less_hype (please).

PS: Amazing discussion and good brain food, as usual with MLST.

18.02.2025 19:26 👍 3 🔁 1 💬 0 📌 0

There is nothing truer than this true statement.

31.01.2025 10:29 👍 1 🔁 0 💬 0 📌 0

📍

bsky.app/profile/aelo...

30.01.2025 18:23 👍 0 🔁 0 💬 0 📌 0
GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models

I missed this one when it came out, but I can tell it is one of the most useful pieces of research I’ve read in a while.

“GSM-Symbolic: Understanding the Limitations of Mathematical Reasoning in Large Language Models”

arxiv.org/html/2410.05...

30.01.2025 18:21 👍 0 🔁 0 💬 1 📌 0
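As a toy illustration of the paper's core idea (my own sketch, not code from the paper): GSM-Symbolic turns fixed benchmark questions into symbolic templates and samples numeric variants of them, the point being that a model which genuinely reasons should keep its accuracy stable when only the surface numbers change.

```python
import random

# Toy sketch of templated math-problem generation in the spirit of
# GSM-Symbolic (hypothetical example, not the paper's actual pipeline).

random.seed(1)

TEMPLATE = (
    "{name} has {a} apples. They buy {b} more and give away {c}. "
    "How many apples does {name} have now?"
)

def sample_variant():
    """Sample one numeric instantiation of the template plus its answer."""
    a = random.randint(5, 20)
    b = random.randint(1, 10)
    # Keep c strictly below a + b so the answer stays positive.
    c = random.randint(1, min(a + b - 1, 8))
    name = random.choice(["Ava", "Liam", "Noah"])
    question = TEMPLATE.format(name=name, a=a, b=b, c=c)
    return question, a + b - c

# Each variant is the "same" problem symbolically, with different numbers;
# an evaluation would compare model accuracy across many such variants.
variants = [sample_variant() for _ in range(3)]
for question, answer in variants:
    print(question, "->", answer)
```

The names, the template, and the sampling ranges are all invented for illustration; the real benchmark derives its templates from GSM8K questions.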

Seeing “very successful” and “have a somewhat loose relationship with the truth” used to describe the same people is what I can’t make sense of…

28.01.2025 20:22 👍 0 🔁 0 💬 1 📌 0

We really need better brain-power allocation. The current algorithm is kind of going crazy.

24.01.2025 14:26 👍 0 🔁 0 💬 0 📌 0

The ToDo list: a revolution.

24.01.2025 14:24 👍 1 🔁 0 💬 0 📌 0

Would we say that "the excessive promotion of mathematics is ideological"? That suggests that "AI" specifically, in high doses, is an inherently eugenicist science. One can very well make a "progressive" interpretation/use of it, even if that is not in the spirit of the times.

20.01.2025 18:44 👍 0 🔁 0 💬 1 📌 0

In my opinion, this type of analysis does not help at all in forming an opinion. "Computation" (because that is what it is, in the end) is just computation. Making it something inherently ideological is a biased over-interpretation. "AI" itself carries nothing at all.

20.01.2025 17:51 👍 0 🔁 0 💬 1 📌 0

😭

20.01.2025 17:42 👍 1 🔁 0 💬 0 📌 0

That’s a very good one 👌🏽

16.01.2025 20:48 👍 2 🔁 0 💬 0 📌 0

Best: the most useful research you can do in the current context.
Worst: it seems it is not the main focus for now, plus maybe the pushback.

16.01.2025 20:45 👍 1 🔁 0 💬 0 📌 0

May the force be with you!

16.01.2025 20:41 👍 1 🔁 0 💬 0 📌 0

Is it an outlier, though?

(and one way of coping for me is to listen to MLST to hear more nuanced, or at least sounder, views and opinions + reading)

16.01.2025 20:39 👍 3 🔁 0 💬 1 📌 0

100%

16.01.2025 12:51 👍 0 🔁 0 💬 0 📌 0

The more I read and listen to current debates in the field, the more I’m convinced that we have a model evaluation crisis.

16.01.2025 11:14 👍 0 🔁 0 💬 0 📌 1

I never understood people going to concerts only to spend their time there watching through the tiny screens of their phones.

11.01.2025 22:49 👍 0 🔁 0 💬 0 📌 0

Basically, IMO, given that all assertions have different degrees of consensus in the population, accurate sequential token prediction may or may not overlap with accurate “truth” in the “meaning” or conceptual realm.

08.01.2025 23:31 👍 1 🔁 0 💬 0 📌 0

The model may represent truthfully what’s in the dataset, even if the dataset itself is untruthful. An analogy I often use: you don’t decide whether evolution exists by popular vote. The vote tells you what the population thinks. Research work, even if it comes from a single individual, is more relevant to “truthfulness”.

08.01.2025 23:27 👍 7 🔁 0 💬 1 📌 0

Is it just me or are we in an Eliza effect pandemic?

07.01.2025 13:45 👍 0 🔁 0 💬 0 📌 0

What is clear for me is that the current hype is not helping the calm development of these methods and collaboration with other fields.

PS: as someone mentioned, cross domain collaboration is key when it comes to ai research. It is hard, but it is key.

06.01.2025 19:05 👍 0 🔁 0 💬 0 📌 0

If something is not satisfactory at an epistemological level, that is not always clear at the moment; advances in the field will highlight it later. Is that a lack of integrity? I would say no (though maybe I’m mistaken).

06.01.2025 19:02 👍 0 🔁 0 💬 1 📌 0

Then, there is epistemology. What people call AI nowadays is inductive reasoning at a huge scale. It’s new, not mature (even for AI researchers), and a work in progress, but really promising. If people are using it, they are using approaches that are still being developed, and thus inherently experimental.

06.01.2025 18:59 👍 0 🔁 0 💬 1 📌 0

In my opinion, there are two layers to that question: an epistemological one and a deontological one. If someone is knowingly using “AI” in a wrong way, or for clearly bad reasons (e.g. to secure funding, or for the hype), then yes, we have an obvious integrity problem. That’s the deontological part.

06.01.2025 18:51 👍 0 🔁 0 💬 1 📌 0

What a year!

Happy new year, everyone! May it be a better one than 2024 (that’s not a high bar, though).

Take care of your loved ones.

31.12.2024 19:46 👍 0 🔁 0 💬 0 📌 0

PS: I was amazed by how many people use a p-value without fully understanding how to interpret it correctly. That doesn’t make their work unacceptable, though. Not everybody needs to be a statistician.

26.12.2024 23:47 👍 7 🔁 0 💬 2 📌 0
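A minimal simulation (my own hypothetical example) of what a p-value actually measures: the probability, assuming the null hypothesis holds, of seeing data at least as extreme as what was observed — not the probability that the null hypothesis is true.

```python
import random

# Observed: 60 heads in 100 flips. Null hypothesis: the coin is fair.
# The p-value asks: if the coin WERE fair, how often would we see a
# result at least this far from the expected 50 heads?

random.seed(0)

def n_heads(n_flips):
    """Simulate n_flips of a fair coin (the null hypothesis) and count heads."""
    return sum(1 for _ in range(n_flips) if random.random() < 0.5)

observed = 60
n_flips = 100
n_sims = 20_000

# Empirical two-sided p-value: fraction of null simulations whose head
# count deviates from 50 by at least as much as the observed count does.
extreme = sum(
    1 for _ in range(n_sims)
    if abs(n_heads(n_flips) - 50) >= abs(observed - 50)
)
p_value = extreme / n_sims
print(f"empirical two-sided p-value: {p_value:.3f}")
```

The numbers (60 heads, 100 flips) are made up for illustration; an exact binomial test would give the same quantity analytically.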

Here is another one:

Do all neuroscientists understand entirely how an MRI works?

IMHO, it is more a matter of epistemologically sound interpretation: understanding what can be concluded from the results and what can’t (and that is, or should be, the job of the people building those AI systems/tools).

26.12.2024 23:44 👍 13 🔁 1 💬 1 📌 1