It is still living in 2023 with regard to the data it's operating with. Try going back to 2023 and warning people that Felon Musk would not only begin performing Nazi salutes but also support the German far right, and you'd get laughed out the door. They've basically made it so that thinking things through even slightly, or looking at the history of the last century, is "too woke". They are trying to make the "Twit-ler Youth" a thing again.
I gave ChatGPT a still image of Musk’s salute, prefaced it with a context where it was being displayed, and it immediately thought it was a nazi salute. With some disclaimers obviously, but still.
You should try Claude and give it an image of the salute, since it can see.
Maybe while using a VPN that shows your location as Germany, just in case they’re tampering with things in the USA.
he was lefty as hell and was voting and donating hundreds of millions to the DNC before the woke brain virus killed his son
His daughter, whom he effectively disowned, is alive and well. Go headbutt a train, nazi fuck.
Edit: Lmao at this guy’s posting history. Can’t even perform a nazi salute with all the gooning you do.
Arguing with an AI because it always fails with humans 👌
Still better than being a troll because no one cares about you.
we are currently in 2023,
Because AI is too smart to join a brain dead witch hunt.
“AI, if I move my arm like this does it mean I will turn into a Nazi?”
AI, “No, of course not. Hand motions in themselves do not turn people into or insinuate they are Nazis. Do you have any other questions?”
“Ein Volk, Ein Reich, Ein Führer!”
AI is not smart, and has no desires or independent values. It is not sentient. Also, the dude supports far right political parties both here and abroad, he constantly retweets and amplifies far right voices on X, including actual Nazis, and that was 100 percent a Nazi Salute, twice. Reality is inconvenient for the Right sometimes, but it’s fucking reality.
that was 100 percent a Nazi Salute
Or maybe some troll posted some frames out of context that happen to validate your hatred?
AI is not smart, and has no desires
So it’s more objective than most humans. No emotional bias is a massive advantage
That’s how I know you have zero interest in the truth. It’s not just some screenshot out of context. There’s video of him doing it. It’s unambiguous. You didn’t even bother watching though before denying it.
And AI is not sentient, has no knowledge or understanding, it literally parrots back recombined shit it was trained on. It throws spaghetti at the wall in different random ways until it’s told it did it right and so it keeps throwing spaghetti that way. It doesn’t know what the spaghetti means. It doesn’t know what your prompt means. It’s an illusion of understanding. It has none.
It's not truly without emotional bias. It's been trained to mimic what the person controlling it thinks is valid. It's a bunch of micro-adjusted dials, tweaked until it responds just the right way.
That's not the same as being truly without desires, and people misunderstand this to mean it isn't emotional. It's built to respond the way we want it to, and it's trained on our biases.
People are trying to create a digital god, but its creators are already flawed and cannot help but pass those flaws along.
Or maybe some troll posted some frames out of context that happen to validate your hatred?
He did this twice in a row.
Thank you, I was going to have to come back and post this just in case he was ignorant.
Fuck off, Nazi.
Keep your head in the sand, man.
You too
you didn’t see sam alt-right-man at the rally? not hard to inject bias into the thing you own
What I got out of ChatGPT:
"If such an event truly occurred, it would signify a dramatic and troubling shift in the political and cultural landscape. Here are the implications and reflections that would arise from Elon Musk performing a Nazi salute at Donald Trump’s second presidential inauguration:
- Mainstreaming of Extremism
Legitimization of Fascist Symbols: A public figure like Musk using a fascist symbol at such a high-profile event would suggest that extremist ideologies are not only resurging but are being normalized within elite and powerful circles.
Cultural and Political Tolerance: The fact that this happened openly would imply that the cultural and political environment has grown tolerant—or even supportive—of these ideologies.
- Economic Power as Political Influence
Unchecked Power of Billionaires: Musk’s actions would highlight the growing influence of billionaires who are able to shape public discourse and political narratives without significant consequences.
Corporate Fascism Concerns: It might signal a troubling intersection of corporate power and authoritarian ideologies, blurring the lines between capitalism and fascism.
- Complicity of Political Leadership
Endorsement or Indifference: If the event was allowed to unfold without rebuke, it would suggest complicity or tacit endorsement from political leadership, including Trump’s administration.
Failure of Democratic Institutions: Such inaction would reveal a significant erosion of institutional strength and the ability to uphold democratic values against extremist threats.
- Global Ramifications
Signal to Authoritarian Leaders: This act, performed at a U.S. presidential inauguration, would send a chilling message worldwide, emboldening authoritarian leaders and movements.
Decline of U.S. Moral Authority: The United States, traditionally seen as a beacon of democracy, would lose credibility in promoting human rights and combating extremism globally.
- Societal Polarization and Reaction
Public Backlash or Acceptance: The public’s reaction (or lack thereof) would reveal the extent of societal polarization. Widespread outrage would indicate resilience, while apathy or acceptance would highlight deep fractures and radicalization.
Proliferation of Conspiracy Theories: Such an act would likely be exploited by conspiracy theorists and extremists to sow further division.
- Historical Context and Responsibility
Revival of Fascist Ideology: This would reflect a failure to learn from history, as fascist ideologies resurface in a world grappling with inequality, disillusionment, and polarization.
Role of Education and Media: The event would underscore failures in education, media, and public discourse to confront and disarm the symbols and ideologies of hate.
Urgent Questions
Accountability: Would there be consequences for Musk, or would his wealth and influence shield him from reprisal?
Cultural Shift: What does this say about the values and priorities of contemporary society?
Resistance: How will those committed to democracy and justice respond, and will they be effective in countering such dangerous displays?
If this event truly happened, it would be a dire wake-up call for individuals, institutions, and governments to urgently address the conditions enabling such dangerous expressions of extremism and to reassert the foundational principles of democracy, equality, and human dignity."
Poor GPT isn’t developed enough to comprehend human stupidity. Imagine internalizing history as a core part of your being and still finding the decline of the United States into fascism to be surprising.
It’s not even the first attempted fascist coup. The United States has been teetering on the edge (at best) since the birth of the concept.
ChatGPT isn’t capable of internalizing things
It finds the info online and repeats it
You’re describing a search engine. That’s not what “AI” does.
And no, I’m not defending slop, I’m just tired of people repeating absolute bullshit arguments against it.
Yes, and my phone isn't "thinking" when I'm waiting on a spinner, but that's how human language works.
Also, not all AI outputs are based on web searches; generative AI can be used offline and will spit out information derived from its internal weights, which were assigned based on training data, so it quite literally internalizes information.
Web searches are a way for the AI to be seeded with relevant context (and to account for its training being a snapshot of some past time), and aren't necessary for it to produce output.
Pedantry is all well and good, but if you're going to be pedantic you should also be precise.
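That "seeded with relevant context" step is worth spelling out, because it's just string concatenation before the model ever runs. Here's a toy sketch of the idea; the function name, instruction wording, and snippet strings are all made up for illustration, and no vendor's real search or model API is being called:

```python
# Toy sketch of how web-search results get "seeded" into a model's context.
# All names here are hypothetical; nothing below calls a real search or LLM API.

def build_prompt(question: str, retrieved_snippets: list[str]) -> str:
    """Prepend freshly retrieved text to the question so the model can use
    information newer than its frozen training snapshot."""
    context = "\n".join(f"- {s}" for s in retrieved_snippets)
    return (
        "Use the context below to answer. If it conflicts with what you "
        "remember from training, prefer the context.\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\n"
        "Answer:"
    )

prompt = build_prompt(
    "What happened at the January 2025 inauguration?",
    ["News snippet A (2025): ...", "News snippet B (2025): ..."],
)
# Without this retrieval step, the model has only its frozen weights,
# i.e. whatever it "internalized" during training (here, up to 2023).
```

Skip the retrieval and the model still generates text; it just generates it from the 2023-era weights alone, which is exactly the behavior the thread is about.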
no
The AI is stuck in 2023 because it cannot bear the dystopia of 2025.
Smart AI.
In Soviet-2025 US, AI tells you that you are hallucinating.
Do people actually bother reading that shit? You know for a fact that it’s inaccurate trash delivered by a deeply-flawed program.
It’s spicy autocorrect running on outdated training data. People expect too much from these things and make a huge deal when they get disappointed. It’s been said earlier in the thread, but these things don’t think or reason. They don’t have feelings or hold opinions and they don’t believe anything.
It’s just the most advanced autocorrect ever implemented. Nothing more.
It’s just the most advanced autocorrect ever implemented.
That’s generous.
The recent DeepSeek paper shows that this is not the case, or at the very least that reasoning can emerge from “advanced autocorrect”.
I doubt it’s going to be anything close to actual reasoning. No matter how convincing it might be.
Okay, but why? What elements of “reasoning” are missing, what threshold isn’t reached?
I don’t know if it’s “actual reasoning”, because try as I might, I can’t define what reasoning actually is. But because of this, I also don’t say that it’s definitely not reasoning.
It doesn’t think, meaning it can’t reason. It makes a bunch of A or B choices, picking the most likely one from its training data. It’s literally advanced autocorrect and I don’t see it ever advancing past that unless they scrap the current thing called “AI” and rebuild it fundamentally differently from the ground up.
As they are now, “AI” will never become anything more than advanced autocorrect.
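For what it's worth, the literal "advanced autocorrect" caricature being argued about here fits in a dozen lines. This toy bigram model (made-up corpus, greedy picks) really does just emit the most frequent next word seen in its training data; whether transformer LLMs do anything fundamentally more than this is exactly the point in dispute:

```python
from collections import Counter, defaultdict

# Toy "autocorrect": count which word follows which in a tiny corpus,
# then always emit the most frequent successor. This is the caricature
# under debate, not how a real transformer LLM works.

corpus = "the cat sat on the mat the cat sat on the rug".split()

successors = defaultdict(Counter)
for a, b in zip(corpus, corpus[1:]):
    successors[a][b] += 1  # e.g. successors["the"] == {cat: 2, mat: 1, rug: 1}

def complete(word: str, n: int = 3) -> list[str]:
    """Greedily chain the most common next word, n times."""
    out = []
    for _ in range(n):
        if word not in successors:
            break
        word = successors[word].most_common(1)[0][0]  # pick the likeliest
        out.append(word)
    return out

print(complete("the"))  # -> ['cat', 'sat', 'on']
```

Note that this thing can only replay transitions it has literally seen; the DeepSeek-style argument upthread is that trained LLMs demonstrably produce behavior this lookup picture can't explain.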
It doesn’t think, meaning it can’t reason.
- How do you know thinking is required for reasoning?
- How do you define “thinking” on a mechanical level? How can I look at a machine and know whether it “thinks” or doesn’t?
- Why do you think it just picks stuff from the training data, when the DeepSeek paper shows that this is false?
Don’t get me wrong, I’m not an AI proponent or defender. But you’re repeating the same unsubstantiated criticisms that have been repeated for the past year, when we have data that shows that you’re wrong on these points.
Until I can have a human-level conversation, where this thing doesn’t simply hallucinate answers or start talking about completely irrelevant stuff, or talk as if it’s still 2023, I do not see it as a thinking, reasoning being. These things work like autocorrect and fool people into thinking they’re more than that.
If this DeepSeek thing is anything more than just hype, I’d love to see it. But I am (and will remain) HIGHLY SKEPTICAL until it is proven without a drop of doubt. Because this whole “AI” thing has been nothing but hype from day one.
Until I can have a human-level conversation, where this thing doesn’t simply hallucinate answers or start talking about completely irrelevant stuff, or talk as if it’s still 2023, I do not see it as a thinking, reasoning being.
You can go and do that right now. Not every conversation will rise to that standard, but that’s also not the case for humans, so it can’t be a necessary requirement. I don’t know if we’re at a point where current models reach it more frequently than the average human - would reaching this point change your mind?
These things work like autocorrect and fool people into thinking they’re more than that.
No, these things don't work like autocorrect. Yes, they are autoregressive, but that's not the same thing, and mathematical analysis of the models shows they aren't a simple Markov process. So no, it doesn't work like autocorrect in any meaningful way.
If this DeepSeek thing is anything more than just hype, I’d love to see it.
Great, the papers and results are open and available right now!
Ask the AI to answer something totally new (not matching any existing training data) and watch what happens. It's highly probable that the answer won't be logical.
Reasoning is being able to improvise a solution from provided inputs, past experience, and knowledge (formal or informal).
AI, or should I say machine learning, is not able to do that today. It is only mimicking reasoning.
DeepSeek shows that exactly this capability can (and does) emerge. So I guess that proves that ML is capable of reasoning today?
Could be! I haven't tested it (yet), so I won't take the commercial/demo/buzz as proof.
There is so much BS sold under the name of ML, selling dreams to top executives that I then have to bring back down to earth when the real product turns out not to be so usable in a real production environment.
Deepseek is Chinese trash, it also refuses to acknowledge the tiananmen square massacre.
AI doesn’t agree with my opinion. AI must be a Nazi sympathizer!
Right. It’s an “opinion” that Musk appeared at a far-right German rally, nothing more.
Dude. The AI is stuck in 2023… Are you dense?
Bye
Its knowledge base hasn't been updated. Seems like a mountain made out of a molehill.
100% this, I've seen this exact claim a half dozen times now. I know we all want to make it a big conspiracy where big tech is censoring everything, but Hanlon's razor tells us it's just a poorly designed system that has no training data after 2023, so asking it about current events will always cause responses like this.
It also seems to resist the suggestion that something new has happened, especially that someone known for supporting fascism back in 2023 became even more fascist in 2025.
Probably just a side effect of the company tweaking the training data so people can’t go “oh, in 2025, new research indicated that it is fine to use glue to keep your pizza together if you eat it while skydiving off of the golden gate bridge”, and have it parrot it as fact.
it is fine to use glue to keep your pizza together if you eat it while skydiving off of the golden gate bridge
Who leaked my Valentine’s Day plans? 😤
It has been; I've had it spit out plenty of info on recent developments even without giving it access to search the internet. I think the "you're from 2023" bit of information just hasn't been updated.