DuckDuckGo, Bing, Mojeek, and other search engines are not returning full Reddit results any more.

  • TheDarksteel94@sopuli.xyz · 6 months ago

    Personally, I really wish it was as easy to search for Lemmy posts with a search engine as it is with Reddit. Idk, maybe I’m doing it wrong.

  • Lvxferre [he/him]@mander.xyz · 6 months ago

    Even back when I used Reddit, I had such a burning hate against Reddit results that I blacklisted them. So this is actually improving things for me, as I use DDG by default.

    As such I hope that this decision becomes another nail in each of their (Google and Reddit’s) coffins.

  • BurnSquirrel@lemmy.world · 6 months ago

    Still couldn’t get me to use it. I use DDG, which can switch between search engines and search sites very quickly with its ! (bang) syntax. (Everyone goes on about the privacy, but this is pretty much its best feature.) Google results are consistently the worst for me when I’m hitting multiple search engines.

  • reddig33@lemmy.world · 6 months ago

    I’m not understanding what stops a search engine from scraping a publicly accessible website?

    • Eril@feddit.org · 6 months ago

      robots.txt, I guess? Yes, you can just ignore it, but you shouldn’t if you’re developing a responsible web scraper.
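
      A responsible scraper doesn’t even have to parse the file by hand. As a rough sketch, Python’s standard library ships a robots.txt parser (the rules below are a made-up example fed in directly; a real crawler would fetch the site’s live /robots.txt):

```python
# Sketch of the check a polite crawler runs before each request.
# The rules here are supplied inline for illustration only; a real
# crawler would download them from https://example.com/robots.txt.
from urllib import robotparser

rules = robotparser.RobotFileParser()
rules.parse([
    "User-agent: *",
    "Disallow: /private/",
])

def may_fetch(agent: str, url: str) -> bool:
    """True only if the parsed robots.txt lets `agent` fetch `url`."""
    return rules.can_fetch(agent, url)
```

      Nothing technically stops a crawler from skipping this check; it’s purely a convention that well-behaved crawlers follow.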

      • reddig33@lemmy.world · 6 months ago

        Doesn’t seem legal that a robots.txt could pick and choose who scrapes. Seems like legally it would have to be all or nothing. Here’s hoping one of the search engines ignores it and makes it a legal case.
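
        For what it’s worth, the robots.txt format itself does support picking and choosing by crawler. A hypothetical file that admits one bot and bars the rest (ExampleBot is a placeholder name, not any real crawler):

```
User-agent: ExampleBot
Allow: /

User-agent: *
Disallow: /
```

        Whether that selectivity should carry any legal weight is exactly the question; the Robots Exclusion Protocol is a voluntary convention, not an access control.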

        • capital@lemmy.world · 6 months ago

          You’d probably feel differently if it were your service. Should you be able to control who scrapes your sites or should that be all or nothing?

          For the record, I fucking hate what the internet is becoming. I naively believed that even if shit got cordoned off into the walled gardens that are mobile phone apps, the web would remain as open as it was. This is a terrible sign of things to come.

          • reddig33@lemmy.world · 6 months ago (edited)

            No, I wouldn’t feel differently. In fact, letting search engines scrape and point to your content is what leads people to your site. It’s free advertising. If you’re going to let one search engine in, you should let them all in. If you want to be public, be public. Otherwise, put up a login wall and go private.

            • capital@lemmy.world · 6 months ago

              It’s not just search engines. Lots of people on Mastodon were using robots.txt to block ChatGPT (and any other LLM company they knew of) from scraping their sites/blogs.

              I disagree, to a point. I want to be able to control my services to the greatest extent possible, including picking who scrapes me.

              On the other hand, orgs as large as Google doing this poses a real threat to how the internet works right now, which I hate.

        • Eril@feddit.org · 6 months ago

          Actually, it currently contains this:

          User-agent: *
          Disallow: /
          

          Well, that actually is a blanket ban for everyone, so something else must be at play here.

      • hotpot8toe@lemmy.world (OP) · 6 months ago

        Also, rate limiting. A publicly accessible website doesn’t mean it will let scrapers read millions of pages each week. Sites can easily identify and block scrapers by the pattern of their activity. I don’t know whether Reddit rate-limits, but I wouldn’t be surprised if it does.
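
        Server-side rate limiting of this kind is often a token bucket: each client gets a budget of tokens that refills at a fixed rate, and requests beyond it are rejected. A minimal sketch (illustrative only; Reddit’s actual setup, if any, isn’t public):

```python
# Token-bucket rate limiter sketch: `rate` tokens refill per second,
# up to `capacity`; each allowed request spends one token.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; otherwise reject the request."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

        A scraper pulling millions of pages drains such a bucket almost immediately, while ordinary readers never notice it.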

  • III@lemmy.world · 6 months ago

    Great, neither Google Search nor Reddit works anymore. They deserve each other.

  • thesmokingman@programming.dev · 6 months ago

    I have not actually been able to use any Reddit results for a while. It might be that I force old[.]reddit[.]com and Reddit has finally cracked down on that?

  • jackyard@lemmy.world · 6 months ago

    After seeing this news, I just created this Lemmy account. I hope people make the right decision and move on to Lemmy.