I’ve tried several types of artificial intelligence, including Gemini, Microsoft Copilot, and ChatGPT. A lot of the time I ask them questions and they get everything wrong. If artificial intelligence doesn’t work, why are they trying to make us all use it?
Cause it’s cool
Not to me. If you like it, that’s fine.
Perhaps your personal bias is clouding your judgement a bit here. You don’t seem very open-minded about it. You’ve already made up your mind.
Probably but I’m far from the only one.
Novelty, lack of understanding, and avarice.
Like was said: money.
In addition, they need training data. Both conversations and raw material. Shoving “AI” into everything whether you want it or not gives them the real world conversational data to train on. If you feed it any documents, etc it’s also sucking that up for the raw data to train on.
Ultimately the best we can do is ignore it and refuse to use it or feed it garbage data so it chokes on its own excrement.
That works for me. I’ll just ignore it to spare my sanity
Investors are dumb. It’s a hot new tech that looks convincing (since LLMs are designed specifically to appear correct, not be correct), so anything with that buzzword gets a ton of money thrown at it. The same phenomenon has occurred with blockchain, big data, even the World Wide Web. After each bubble bursts, some residue remains that actually might have some value.
I can see that. That guy over there has the new shiny toy. I want a new shiny toy. Give me a new shiny toy.
And LLMs are mostly for investors, not for users. Investors see you “do AI” even if you just repackage GPT or Llama, and your Series A is 20% bigger.
It amazed people when it first launched, and capitalists took that to mean they could replace all their jobs with AI. Where we wanted AI to make shit jobs easier, they used it to replace whole swaths of talent across the industry. Recent movies read like they were written almost entirely by AI. Like when Cartman was a robot and kept giving out terrible movie ideas.
Rich assholes have spent a ton of money on it and they need to manufacture reasons why that wasn’t a waste.
I genuinely think the best practical use of AI, especially language models, is malicious manipulation: propaganda and advertising bots. There’s a joke that Reddit is mostly bots. I know there are some countermeasures to sniff them out, but think about it.
I’ll keep Reddit as the example because I know it best. Comments are simple puns, one-liner jokes, or flawed/edgy opinions. But people also go to Reddit for advice/recommendations that you can’t really get elsewhere.
Using an LLM I could in theory make tons of convincing recommendations. I get paid by a corporation or state entity to convince lurkers to choose brand A over brand B, to support or disown a political stance, or to make it seem like tons of people support it when really few do.
And if it’s factually incorrect so what? It was just some kind stranger™ on the internet
If by “best practical” you meant “best unmitigated capitalist profit optimization” or “most common”, then sure, “malicious manipulation” is the answer. That’s what literally everything else is designed for.
It depends on the task you give it and the instructions you provide. I wrote this a while back; I find it gives a 10x boost in capability, especially if you use a non-aligned LLM like Dolphin 8x22B.
I have no idea what any of that means. But thanks for the reply.
The natural general hype is not new… I even see it in 1970s sci-fi. It’s like once something pierced the long-thought-impossible Turing test, decades of hype pressure suddenly and freely flowed.
There is also an unnatural hype (that with one breakthrough will come another) and that the next one might yield a technocratic singularity to the first-mover: money, market dominance, and control.
Which brings the tertiary effect (closer to your question)… companies are so quickly and blindly eating so many billions of dollars of first-mover costs that the corporate copium wants to believe there will be a return (or at least cost defrayal)… so you get a bunch of shitty AI products, and pressure towards them.
Interestingly, the Turing test has been passed by much dumber things than LLMs.
I’m not talking about one-offs and the assessment noise floor, more like: “ChatGPT broke the Turing test” (as is claimed). It used to be something we tried to attain, and now we don’t even bother trying to make GPT seem human… we actually train them to say otherwise lest people forget. We figuratively pole-vaulted over the Turing test and are now on the other side of it, as if it were a point on a timeline instead of an academic procedure.
True!
Sounds about right
Holy BALLS are you getting a lot of garbage answers here.
Have you seen all the other things that generative AI can do? From bone-rigging 3D models, to animations recreated from a simple video, to recreations of voices, to art created by people without the talent for it. Many times these generative AIs are very quick at creating boilerplate that only needs some basic tweaks to make it correct. This speeds up production work a hundredfold in a lot of cases.
Plenty of simple answers are correct, breaking entrenched monopolies like Google from search, I’ve even had these GPTs take input text and summarize it quickly - at different granularity for quick skimming. There’s a lot of things that can be worthwhile out of these AIs. They can speed up workflows significantly.
I’m a simple man. I just want to look up a quick bit of information. I ask the AI where I can find a setting in an app. It gives me the wrong information and the wrong links. That’s great that you can do all that, but for the average person, it’s kind of useless. At least it’s useless to me.
So you got the wrong information about an app once. When a GPT is scoring higher than 97% of human test takers on the SAT and other standardized testing - what does that tell you about average human intelligence?
The thing about GPTs is that they are just word predictors. Lots of the time, when asked super specific questions about small subjects that people aren’t talking about, yeah, they’ll hallucinate. But they’re really good at condensing, categorizing, and regurgitating a wide range of topics quickly, which is amazing for most people.
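To make the “just word predictors” point concrete, here’s a toy sketch (my own illustration, not how GPT is actually implemented): a bigram model that predicts the next word as whichever word most often followed it in the training text. Real LLMs do the same thing in spirit, but over subword tokens with a huge neural network instead of raw counts.

```python
from collections import Counter, defaultdict

def train_bigram(text):
    """Count, for each word, which words follow it and how often."""
    words = text.split()
    followers = defaultdict(Counter)
    for a, b in zip(words, words[1:]):
        followers[a][b] += 1
    return followers

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    if word not in model:
        # No training data: roughly where "hallucination" pressure comes from
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ate the fish"
model = train_bigram(corpus)
print(predict_next(model, "the"))  # prints "cat" ("cat" follows "the" most often)
```

The model never checks whether “cat” is true or relevant; it only knows what tends to come next, which is why confident-sounding wrong answers fall out so naturally.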
It’s not once. It has become such an annoyance that I quit using it and asked what the big deal is. I’m sure for creative and computer nerd stuff it’s great, but for regular people sitting at home listening to how awesome AI is and being underwhelmed, it’s not great. They keep shoving it down our throats and plain old people are bailing.
Yeah, see that’s the kicker. Calling this “computer nerd stuff” just gives away your real thinking on the matter. My high school daughters use this to finish their essay work quickly, and they don’t really know jack about computers.
You’re right that old people are bailing - they tend to. They’re ignorant, they don’t like to learn new and better ways of doing things, they’ve raped our economy and expect everything to be done for them. People who embrace this stuff will simply run circles around those who don’t. That’s fine. Luddites exist in every society.
tl;dr: It’s useful, but not necessarily for what businesses are trying to convince you it’s useful for
You aren’t really using it for its intended purpose. It’s supposed to be used to synthesize general information. It only knows what people talk about; if the subject is particularly specific, like the settings in one app, it will not give you useful answers.
I mentioned somewhere in here that I created a document with it and it turned out really good.
Yeah, it’s pretty good at generating common documents like that
Yeah, I feel like people who have very strong opinions about what AI should be used for also tend to ignore the facts of what it can actually do. It’s possible for something to be both potentially destructive and used to excess for profit, and also an incredible technical achievement that could transform many aspects of our life. Don’t ignore facts about something just because you dislike it.
It’s understandable to feel frustrated when AI systems give incorrect or unsatisfactory responses. Despite these setbacks, there are several reasons why AI continues to be heavily promoted and integrated into various technologies:
- Potential and Progress: AI is constantly evolving and improving. While current models are not perfect, they have shown incredible potential across a wide range of fields, from healthcare to finance, education, and beyond. Developers are working to refine these systems, and over time, they are expected to become more accurate, reliable, and useful.
- Efficiency and Automation: AI can automate repetitive tasks and increase productivity. In areas like customer service, data analysis, and workflow automation, AI has proven valuable by saving time and resources, allowing humans to focus on more complex and creative tasks.
- Enhancing Decision-Making: AI systems can process vast amounts of data faster than humans, helping in decision-making processes that require analyzing patterns, trends, or large datasets. This is particularly beneficial in industries like finance, healthcare (e.g., medical diagnostics), and research.
- Customization and Personalization: AI can provide tailored experiences for users, such as personalized recommendations in streaming services, shopping, and social media. These applications can make services more user-friendly and customized to individual preferences.
- Ubiquity of Data: With the explosion of data in the digital age, AI is seen as a powerful tool for making sense of it. From predictive analytics to understanding consumer behavior, AI helps manage and interpret the immense data we generate.
- Learning and Adaptation: Even though current AI systems like Gemini, ChatGPT, and Microsoft Copilot make mistakes, they also learn from user interactions. Continuous feedback and training improve their performance over time, helping them better respond to queries and challenges.
- Broader Vision: The development of AI is driven by the belief that, in the long term, AI can radically improve how we live and work, advancing fields like medicine (e.g., drug discovery), engineering (e.g., smarter infrastructure), and more. Developers see its potential as an assistive technology, complementing human skills rather than replacing them.
Despite their current limitations, the goal is to refine AI to a point where it consistently enhances efficiency, creativity, and decision-making while reducing errors. In short, while AI doesn’t always work perfectly now, the vision for its future applications drives continued investment and development.
Bravo.
We shall see. The above feels like an AI response.
Whoosh
I’m 80% sure this reply was written by an AI. Right now pretty much all it can do is tell people to eat rocks, claim you can leave dogs in hot cars, and starve artists.
lmao I see what you did there
> If artificial intelligence doesn’t work why are they trying to make us all use it?
But it does work. It’s not obviously flawless, but it’s orders of magnitude better than it was 10 years ago and it’ll only improve from here. Artificial intelligence is a spectrum. It’s not like we successfully created it and it ended up sucking. No, it’s like the first cars: they suck compared to what we have now, but it’s a huge leap from what we had before.
I think the main issue here is that common folk have unrealistic expectations about what AI should be. They’re imagining what the “final product” would be like and then comparing our current systems to that. Of course from that perspective it seems like it’s not working or is no good.
We’ll have to wait and see. I’m still not eating rocks or putting glue on my pizza.
This is like saying that automobiles are overhyped because they can’t drive themselves. When I code up a new algorithm at work, I’m spending an hour or two whiteboarding my ideas, then the rest of the day coding it up. AI can’t design the algorithm for me, but if I can describe it in English, it can do the tedious work of writing the code. If you’re just using AI as a Google replacement, you’re missing the bigger picture.
I’m retired. I don’t do all that stuff.
Maybe look into the creativity side more and less ‘Google replacement’?
The hype machine said we could use it in place of search engines for intelligent search. Pure BS.
Yes. Far more useful to embrace its hallucinogenic qualities…
I’ll see if I can think of something creative to do. I was just reading an article from MIT that pointed out that one reason AI is bad at search is that it can’t determine whether a source is accurate. It can’t tell the difference between Reddit and Harvard.
Neither can most of reddit…
A lot of people are doing work that can be automated in part by AI, and there’s a good chance that they’ll lose their jobs in the next few years if they can’t figure out how to incorporate it into their workflow. Some people are indeed out of the workforce or in industries that are safe from AI, but that doesn’t invalidate the hype for the rest of us.
Tech company management loves the idea of ridding themselves of programmers and other knowledge workers, and AI companies love selling the idea of non-productivity impacting layoffs to unsavvy companies (tech and otherwise).
Mooooneeeyyyy
I work as an AI engineer. Let me tell you, the tech is awesome and has a looooot of potential, but it’s not ready yet. Because of the high potential, literally no one wants to miss the opportunity of getting rich quick with it. It’s only been like 2-3 years since this tech was released to the public. If only OpenAI had released it as open source, just like everyone before them, we wouldn’t be here. But they wanted to make money, and now everyone else wants to too.