Yeah, because he's right.
@ericpanzer
Arguing since birth. Urban planning, politics, weather, Drag Race, and photography are all fair game. Opinions are strictly my own—unless others agree. I assume that strings of emojis in people's bios are their washing instructions
If you want to understand how the corporate media consciously manipulate the public, look no further than how under Biden they characterized job GAINS as a *liability* for him, and under Trump they characterize job LOSSES as "unexpected" with essentially no mention of Trump's role or policies
I knew working on suicide research was going to be heavy.
But what's actually starting to get to me is reading all of these chatbot suicide laws and legislative proposals that call for measures that have been shown to exacerbate crisis.
www.governor.ny.gov/sites/defaul...
When Raine shared his suicidal ideations with ChatGPT, the bot did issue multiple messages containing the suicide hotline number, according to his family’s lawsuit. But his parents said their son would easily bypass the warnings by supplying seemingly harmless reasons for his queries, including by pretending he was just “building a character.”
If it did this unbidden or unencouraged, it would indeed be *very bad*
I will wait for more information, though, since at least one previous instance involved something like the following, where the chatbot tried to steer the person away from self-harm but they made multiple efforts to bypass it
Which is precisely why we aren't going to beat them at that game
I think that it's usually better to give people a framing that allows them to feel good. And, yes, we fundamentally want to appeal to people's sense of love, because if we're appealing to their sense of hate, we will absolutely lose.
He needs to win TX moderates and crossover Republicans, not please us seething coastal lefties on BlueSky
It's a testament to Jewish humor that instead of breathless offense, most of us seem to be like, "We love how hilariously wrong this is and desperately want to be invited to this party."
"The young civilization of Earth has destroyed itself."
"Pity. Did AI cause them to launch their nuclear weapons?"
"No. Online betting."
Trump's war is crashing stock markets worldwide, here's why that's bad news for Joe Biden
There are absolutely pitfalls and it's overhyped for sure, but the people who say it's just "improv comedy" or "fancy autocomplete" have their heads in the sand
I honestly think they feel existentially threatened because their identity is thinking good and they don't like machines doing it
Talarico has a great opportunity to demand that all votes be counted, to support Crockett getting every damn vote she should have gotten, and to rally everyone around election integrity and basic fairness
Y'all need to understand that this is what people are worried about when they sound the alarm on Trump and the midterms.
Chaos at heavily democratic polling places in red/purple states
You don't need to cancel an election to make it unfair
To all you people who keep tut-tutting that "Trump can't cancel the midterms!": this is the sort of thing people are most worried about, not that every polling place in the US will be commandeered by a platoon of ICE goons
It doesn't take wholesale cancellation to make an election unfree/unfair
Do you like bussy? Well then you'll love Manus!
According to multiple sources, miseria also translates to pittance, which is a synonym of this meaning of "peanuts"
Probably not an AI issue since this is technically a correct translation of a different meaning of the word "peanuts"
So, the reason for this is that it is interpreting "peanuts" here to mean a very small amount. Like "he gets paid peanuts for his translation work"
Arguably this should not be top result as it's a more esoteric use of the word "peanuts" but it's not strictly wrong
With more context:
So it's like an insurance company?
And just to clarify further: when I say text prompt output, I don't mean the output of a text-based LLM, I mean the image output of a text prompt given to a generative image AI model
I think what both of our points elucidate is that allowing for the copyrighting of strict text prompt outputs would create a legal morass, and that the copyright office's approach, affirmed by SCOTUS, is actually the reasonable one
The most popular tools for local image generation actually embed the workflow in the image, making exact and undetectable duplication more likely
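To make the embedding point concrete: PNG files carry metadata in tEXt chunks, and some local tools store the full generation workflow there as JSON. Here's a minimal stdlib-only sketch of pulling those chunks out of a PNG; the "workflow" keyword and the JSON payload below are illustrative assumptions, not any particular tool's guaranteed format.

```python
import struct
import zlib

PNG_SIG = b"\x89PNG\r\n\x1a\n"

def png_chunk(ctype: bytes, data: bytes) -> bytes:
    # A PNG chunk is: 4-byte big-endian length, 4-byte type, data, CRC32.
    return (struct.pack(">I", len(data)) + ctype + data
            + struct.pack(">I", zlib.crc32(ctype + data)))

def extract_text_chunks(png_bytes: bytes) -> dict:
    # Walk the chunk list and collect tEXt entries (keyword, NUL, text).
    assert png_bytes.startswith(PNG_SIG), "not a PNG file"
    out, pos = {}, len(PNG_SIG)
    while pos + 8 <= len(png_bytes):
        length, ctype = struct.unpack(">I4s", png_bytes[pos:pos + 8])
        data = png_bytes[pos + 8:pos + 8 + length]
        if ctype == b"tEXt":
            key, _, val = data.partition(b"\x00")
            out[key.decode("latin-1")] = val.decode("latin-1")
        pos += 12 + length  # length field (4) + type (4) + data + CRC (4)
        if ctype == b"IEND":
            break
    return out

# A tiny stand-in "image": just a metadata chunk plus the end marker.
# The "workflow" keyword and JSON payload are made up for illustration.
demo = (PNG_SIG
        + png_chunk(b"tEXt", b'workflow\x00{"seed": 42}')
        + png_chunk(b"IEND", b""))
print(extract_text_chunks(demo))  # -> {'workflow': '{"seed": 42}'}
```

Anyone holding such a file can read the settings back out, which is what makes exact reproduction trivial.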
And the sort of production information one might include in a copyright application would also potentially contribute to this result
I think part of the issue here is that you may be talking about LLM outputs while I am talking about generative image model outputs. The likelihood of at least seemingly infringing similarity among the latter is, I would argue, much higher.
This probability grows with the use of local models
So what you're saying is, in the context of an AI output, the person holding the copyright could sue someone else, and then the object of their suit would need to demonstrate the AI workflow that resulted in the same image?
See my other comment. The way AI generation works, you would necessarily need the first-to-prompt-wins scenario for copyrightability of the output to even make sense
I mean, given that I have seen some people be super duper protective of their prompts, I think there are some folks who stupidly think they want an outcome like that, but you're right that it was never likely in the real world. It's just not sensible... But we don't live in a sensible world either 🫠
Perhaps theoretically, but there would be absolutely no way to prove this, which would render the copyright protection of the AI output moot, because someone could just claim to have independently discovered it
Being able to copyright the output of a text prompt precludes what you describe
Agree 100%! I think it would be very bad if someone could throw the word "peach" into AI, take the result, and say "I own this image of a peach", especially for people who actually make the effort to do AI locally and control the result
There would be way too much chance of accidental reproduction
If one were actually able to copyright the strict AI output of a text prompt, then that combination of model, prompt, settings, and seed would forever be locked to others in the future
In the case of AI outputs using the same settings, prompt, and seed, there would be no difference whatsoever between the pixels
What you're talking about would be akin to someone using the same prompt and settings but a different seed that yielded a very slightly different image
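The seed behavior described above can be sketched with a toy sampler; this is a pure-Python stand-in, not a real diffusion model, but it mirrors the relevant property: with fixed weights, settings, and sampler, the output is a pure function of (prompt, seed).

```python
import random

def fake_generate(prompt: str, seed: int, n_pixels: int = 8) -> list:
    # Toy stand-in for an image model's sampler: every "pixel" comes
    # from an RNG seeded by (seed, prompt), mimicking how a real model
    # with fixed weights, settings, and sampler is deterministic in
    # its inputs. (random.Random seeds deterministically from a string.)
    rng = random.Random(f"{seed}:{prompt}")
    return [rng.randrange(256) for _ in range(n_pixels)]

a = fake_generate("a peach", seed=42)
b = fake_generate("a peach", seed=42)
c = fake_generate("a peach", seed=43)
assert a == b  # same model/prompt/settings/seed: identical pixels
assert a != c  # new seed: a different (possibly only slightly) image
```

Same seed, bit-identical output; change only the seed and you get a sibling image, which is the scenario described above.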