the self is fake! don't let the liberal media tell you otherwise
I did not care for von Foerster but I would strongly endorse Varela & Maturana
silence 2dcel, a higher dimensionoid is speaking
The "cut fruit diaspora poetry" is a known stereotype within the Asian-American community. I am writing about the women around me, which is gross in the way all writers are gross.
This is Coase on The Nature of the Firm. There is a cost to this kind of contracting/monitoring and hiring someone as an employee (with residual claims on their activity) economizes on this.
Uber/Doordash present a model where this "contracting" is algorithmically mediated
firing all my useless ZIRP-era hiring glut imbeciles under the guise of "AI"
"This is how we beat the communists" I say, adding another PDF to my Zotero library while ignoring my boss's emails.
Most neurons in the brain are "dawdling" from a Marxist perspective. Huge caloric intake relative to individual contribution.
nobody is prepared for what's coming with AI
Setting up an internal futarchy at my company so I can rugpull my chud subordinates at will
someday I'll have a corrido written about me
Claude and I set up OpenClaw on a home server for my brother. Absolutely dogshit software, but he seems happy with it.
not sure why my feedback went unappreciated here
map of my belief system
we've been "past the event horizon" for a while
borges story
what im getting at is dependent origination in the Buddhist sense but that dingus eshear made it cringe
preferences are not exogenous to the systems that could be said to enact said preferences. it's a modeling choice that Yud takes as a fact about reality (given this weak "sufficiently optimized" qualifier now, which is circular)
if the system "has" a utility function in this computational sense, i.e. the behavior is purely "analytic," then "in theory and difficult practice" one can design the right utility function such that AI is friendly
but this analytic/synthetic distinction is self-defeating per Quine and others
There is this quiet equivocation from "the AI is an optimizer" to "the AI behaves as if it were an optimizer" that is doing all the work of "alignment"
i think this extra-snarky yud post gets at the problem
www.lesswrong.com/posts/cnYHFN...
yea ive been fucking losing it honestly
here I'd recommend Quine's "two dogmas of empiricism" to unpack what I mean. we shouldn't take the syntactical content of "rational Bayesian inference" for the semantic content of subjectivity such that bounding one necessarily bounds the other
www.theologie.uzh.ch/dam/jcr:ffff...
but you can claim mastery of LBC thought. that's allowed
anti-yud but also "alignment" treats preferences as ontologically primary in a way that rehashes logical positivist failures. alignment isn't real
my read as well
basically your definition of what counts as "information processing" is doing too much work, given that the distinction between "computing" and "non-computing" matter is asserted but not demonstrated
dn790006.ca.archive.org/0/items/norb...
cybernetics pdf
That there exist algorithms that incorporate randomness does not mean you have accounted for the unobservable randomness (unknown unknowns) of your environment
I would encourage you to read Norbert Wiener's Cybernetics, but specifically the first chapter on Newtonian Time vs Bergsonian/Statistical time for a treatment of the problem of "randomness" that doesn't hand-wave it away