Thomas Dietterich

@tdietterich

Safe and robust AI/ML, computational sustainability. Former President AAAI and IMLS. Distinguished Professor Emeritus, Oregon State University. https://web.engr.oregonstate.edu/~tgd/

7,810
Followers
538
Following
1,234
Posts
22.09.2023
Joined

Latest posts by Thomas Dietterich @tdietterich

Yes. Lately, Google AI just sends me to low-quality YouTube videos from engagement miners. Those videos are out of date and skip complex cases.

06.03.2026 16:44 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0
The Rot Is Real, and There Is More to It: The Atlantic Piece Just Scratched the Surface

Sobering views from Phillips O'Brien regarding the Iran War. His assessment is that US participation is the result of government corruption.
open.substack.com/pub/phillips...

06.03.2026 15:40 πŸ‘ 9 πŸ” 3 πŸ’¬ 1 πŸ“Œ 1

Excellent editorial

06.03.2026 06:25 πŸ‘ 3 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Excellent news!

06.03.2026 06:24 πŸ‘ 1 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Leibniz, looking at the universe: "Why is there something instead of nothing?"

Me, looking at my Outlook calendar: same

06.03.2026 01:13 πŸ‘ 93 πŸ” 15 πŸ’¬ 2 πŸ“Œ 0

I try to figure out how I could have written the paper. What questions did I fail to ask? What experiment did I not conceive? What can I learn from them?

05.03.2026 23:25 πŸ‘ 6 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

In those days, the sense was that passing the Turing Test would be a useless activity. But I don't think many people realized that applying machine learning to build AI systems was fundamentally about mimicry. At least I, as an ML researcher, didn't appreciate this. So obvious in retrospect!

05.03.2026 19:05 πŸ‘ 5 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

John Searle's Chinese Room paper (1980) set out to show that passing the Turing Test would tell us nothing about language understanding. The Loebner Prize (1991) also demonstrated that mimicry was easy but did not provide any improvements in capabilities.

05.03.2026 19:05 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Original source?

05.03.2026 15:45 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Some day (if history is any guide), we will learn how accurately these systems conformed to the laws of war (Geneva Conventions). And if history is any guide, no one will be held accountable for the failures of these systems. We don't have the information to judge right now.

05.03.2026 06:37 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The Turing Test has been criticized within the AI community for many years. It rewards mimicry rather than high-quality systematic performance. We now have systems trained to be excellent mimics. And they do not exhibit good systematic performance. The Turing Test was a terrible mistake.

05.03.2026 06:33 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

I expected them to eventually advocate divorce to create a quiet sleep environment.

My solution is regular ear cleaning visits.

04.03.2026 20:39 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I stand up for science, but what exactly are "facts"? I know nuance is tough in politics, but the whole point of science is that it is a continual effort to find the truth. Its claims are supported by evidence, and those claims will change if the evidence changes. "Our best current understanding" != Facts

04.03.2026 20:33 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Poor wording on my part. I meant an author who submits a paper that is rejected as not suitable for arXiv. That would usually be a paper that failed to make a claim or provide evidence, a paper that was LLM slop, and so on.

03.03.2026 21:09 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Yes! I would love Google Scholar to remove the counts and h-index information. Semantic Scholar had an interesting approach with their "influential citation" work, but I believe that is no longer being actively maintained.

03.03.2026 18:46 πŸ‘ 4 πŸ” 0 πŸ’¬ 0 πŸ“Œ 1

To address this, there should also be a penalty for endorsing a fake or low-quality user. Each endorser is declaring that "I know this person, they are a real person, and they are a real researcher".

03.03.2026 18:08 πŸ‘ 5 πŸ” 0 πŸ’¬ 1 πŸ“Œ 1

A problem that I'd like people to consider is fake (sock puppet) authors on arXiv submitting papers to boost the citation counts of authors. ArXiv recently tightened its endorsement process, but one fake author who "gets through" can create and endorse many more accounts.

03.03.2026 18:08 πŸ‘ 4 πŸ” 1 πŸ’¬ 2 πŸ“Œ 0

Julian makes many excellent points. I have been opposed to anonymous submissions from the start; they are too open to abuse. His idea that we should create many smaller, more specialized meetings is interesting.

03.03.2026 18:08 πŸ‘ 15 πŸ” 1 πŸ’¬ 2 πŸ“Œ 0

The moral panic is to claim that any method for certifying age online must give up your biometrics. Beyond age, we also need to certify that we are humans (and replace those captchas that are now easier for AIs to solve than for people), that we are registered to vote, that we are citizens etc.

03.03.2026 05:53 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

I'm not sure I count as a leftist, but I think we need age verification for some things both offline (driving, drinking) and online. There are cryptographic methods that can certify age without revealing any other information about a person. We need to adopt those methods.

03.03.2026 05:53 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

This whole incident raises difficult questions. We know these models need guardrails. Guardrails implemented by RL or fine tuning are not modular. Who decides which guardrails to implement? Can they be made switchable? The failure of unlearning suggests not.

02.03.2026 17:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

According to this story, it was the government that initiated the contract discussion with OpenAI.

02.03.2026 17:15 πŸ‘ 0 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0
How Talks Between Anthropic and the Defense Dept. Fell Apart

The real story appears to be more complex, according to the NYTimes

www.nytimes.com/2026/03/01/t...

02.03.2026 17:11 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 1

I just meant that the grant goes to the university, which passes it to the professor, who hires the students. Quite different from the Canadian system, where many students get direct government support.

02.03.2026 17:07 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

The frustrating thing about this is that agencies have used automated targeting filters and machine learning for MASINT and scenario planning for many years. LLMs aren't necessarily a huge analytical leap, except that now they are making command decisions. That is INSANE, and Anthropic is right to balk at it.

01.03.2026 13:16 πŸ‘ 38 πŸ” 3 πŸ’¬ 0 πŸ“Œ 0
Alphabet/Waymo vs Amazon/Zoox roboclot:

20th St by Lexington, Mission, San Francisco

If their respective remote human assistants could communicate directly, this would be faster, surer, and safer.

OP: tiktok.justjimmynajera

02.03.2026 04:19 πŸ‘ 6 πŸ” 2 πŸ’¬ 0 πŸ“Œ 0

Done!

01.03.2026 22:52 πŸ‘ 2 πŸ” 0 πŸ’¬ 0 πŸ“Œ 0

Gather 'round Bluesky, while I tell the hoary tale of epidemic vs. endemic.

How can it be that pandemic interventions that were vitally important in 2020 are marginally effective in 2025?

Science will give us the answers!

Follow me...

1/

17.04.2025 03:59 πŸ‘ 62 πŸ” 28 πŸ’¬ 1 πŸ“Œ 9

Indirectly, it mostly goes to support graduate students.

01.03.2026 22:04 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0

Yes. The National Science Foundation (US). It funds primarily basic research in math, physics, engineering, social sciences (at least pre-Trump), education, and biology.

01.03.2026 22:03 πŸ‘ 4 πŸ” 0 πŸ’¬ 1 πŸ“Œ 0