  • You come from a healthy background is what I’m hearing. And that’s good, and I don’t mean that in a derogatory way. What you have there is absolutely the right mindset. These tools are made by humans, each with their own set of problems they want to solve. A given tool may not be the best one, but it can work pretty damn well.

    However, it’s also not uncommon to see communities rage and fight over the superiority of their tools, or simply shun those they think are inferior. It’s blatantly childish or tribalistic behaviour, depending on how you look at humanity. And you’ll see this outside of programming too: in the office, in town, on the streets. People engage in this behaviour to show that “I am on your side”, for whichever side they believe is the right or superior one, based on factors like perceived group size, perceived power, and perceived closeness. It appeals to a common human desire to belong to a strong group. It appeals to the human desire to feel safe. And when you start looking at it that way, it’s not too different from how animals behave. It’s important to note that not all humans feel this desire for a tribe to the same degree, or would give in to it and engage in such behaviours, but it’s not surprising to see.

    In any case, this article is essentially a callout of that sort of toxic behaviour, done for the sake of feeling superior, which exists within the programming community to the point where some might even call it a major subculture.



  • This. Any time someone tries to tell me that AGI will come in the next 5 years given what we’ve seen, I roll my eyes. I don’t see a pathway where LLMs become what’s needed for AGI. They may be a part of it, but a non-critical one at best. If you can’t reduce hallucinations to the point where they’re virtually indistinguishable from misunderstanding a vaguely worded sentence, it’s useless for AGI.

    Our distance from true AGI (not some goalpost moved by corporate interests) has not significantly shrunk since before LLMs became a thing, in my very harsh opinion, apart from the knowledge and research being produced by those who are actually working towards AGI. Just as we always thought, before 2020, that AI would come one day, maybe soon, it’s no different now. LLMs alone barely close that gap. At best, they give us the illusion that it has closed.