• sexy_peach@beehaw.org · 1 month ago

    Researchers who work on transformer models understand how the algorithm works, but they don’t yet know how their simple programs can generalize as much as they do.

    They do!

    You can even train small networks by hand with pen and paper. You can also manually design small models without training them at all.
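As a sketch of what "manually design small models" can mean, here is a tiny 2-2-1 network that computes XOR with hand-picked weights and step activations, no training involved (Python chosen just for illustration):

```python
# A hand-designed 2-2-1 network computing XOR.
# All weights and thresholds are chosen by inspection on paper,
# illustrating that small models need no training at all.

def step(x):
    # Heaviside step activation
    return 1 if x >= 0 else 0

def xor_net(a, b):
    # Hidden unit 1 fires for "a OR b" (threshold 1)
    h1 = step(a + b - 1)
    # Hidden unit 2 fires for "a AND b" (threshold 2)
    h2 = step(a + b - 2)
    # Output fires when OR is true but AND is not: h1 - 2*h2 >= 1
    return step(h1 - 2 * h2 - 1)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", xor_net(a, b))
```

The same construction (one "OR" unit, one "AND" unit, and an output that subtracts them) is the classic textbook solution to XOR, which a single perceptron famously cannot represent.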

    The interesting part is that this dated tech is producing such good results now that we throw our modern hardware at it.

  • archomrade [he/him]@midwest.social · 1 month ago

      an acknowledgement of how relatively uncomplicated their structure is compared to the complexity of its output.

      The interesting part is that this dated tech is producing such good results now that we throw our modern hardware at it.

      That’s exactly what I mean.