• Krakaval@jlai.lu · 8 days ago

    Somehow I miss those days. Now you need weeks of training to understand the black magic behind all the build/deployment stuff in whatever cloud provider your company decided to use…

    • xtapa@discuss.tchncs.de · 8 days ago

      We’ve got our own platform based on Kubernetes and CNCF stuff, and we don’t have to care anymore about the metal underneath. AWS? OTC? Azure? That’s just a target parameter; the platform does the rest. It’s great.

      • widerporst@feddit.de · 8 days ago

        How often do you switch cloud providers for this to be a real rather than a hypothetical benefit? (Compared to the cost of dealing with a much more complicated stack.)

        • bamboo@lemmy.blahaj.zone · 8 days ago

          I manage a stack like this. We have dedicated hardware running a steady state of backend processing, but we scale into AWS if there’s a surge in realtime processing and we don’t have the hardware for it. We also had an outage in our on-prem datacenter once, which was expensive for us (I assume an insurance claim was made), but scaling into AWS was almost automatic, and the impact was minimal for a full datacenter outage.

          If we wanted to optimize even more, I’m sure we could scale into Azure instead whenever spot pricing is higher in AWS. The moral of the story is to not get too locked into any one provider, and to use abstraction layers so that AWS, Azure, etc. are just targets you can shop around for by default, without having to scramble.
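
          To sketch that “shop around by default” idea (the provider list and price feed here are invented for illustration, not our actual setup):

          ```python
          # Hedged sketch: pick the cheapest burst target at deploy time.
          # get_spot_price() stands in for whatever pricing feed you use.
          from typing import Callable, Dict

          PROVIDERS = ["aws", "azure"]

          def pick_burst_target(get_spot_price: Callable[[str], float]) -> str:
              """Return the provider with the lowest current spot price."""
              prices: Dict[str, float] = {p: get_spot_price(p) for p in PROVIDERS}
              return min(prices, key=prices.get)
          ```

          The point is that the call site says pick_burst_target(feed) instead of hard-coding AWS, so a price spike never forces a scramble.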

  • Zip2@feddit.uk · 7 days ago

    Oh please, you didn’t even have to turn the cassette or floppy disc over. You and your luxuries.

      • fishpen0@lemmy.world · 8 days ago

        One could argue the requirements have changed because the security and compliance part of the world finally caught up to modern software delivery concepts. Even the most dinosaur apps at compliant orgs are being dragged kicking and screaming into new CI/CD tools, where governance, chains of custody, permissions, and approvals are all applied as self-documenting, automated hooks.

      • Fades@lemmy.world · 8 days ago

        Anybody who actually deals with this kind of thing professionally understands just how wrong you are.

    • JackbyDev@programming.dev · 7 days ago

      Don’t forget EAR files. Oh, and don’t forget the abomination that is the executable WAR file when you’re using Spring Boot but your company hasn’t fully embraced it yet.

  • zzx@lemmy.world · 7 days ago

    This is how I deployed an app less than 5 years ago (healthcare).

    It’s sad

    • drathvedro@lemm.ee · 7 days ago

      I know a place where they still do this. They’ve got an 8-digit user count and 7-digit monthly profits, all running on one server that costs something like $20 a month. They downsized a few years ago to a single-digit headcount and just sit there collecting profits. And this is why I’m now working for a company that casually dropped a few grand on a glorified CPU usage meter, and a few grand on top of that for a deployment tool that does the same thing the old guy at my former place did with his trusty FTP client.

    • Flipper@feddit.de · 7 days ago

      This is how I deploy my personal website today. The hoster doesn’t give ssh access.

  • fmstrat@lemmy.nowsci.com · 8 days ago

    I remember this. I also remember using scp instead. And FTP, if I go back far enough. rsync is still my friend, though zfs has mostly replaced it now.

    • BoneALisa@lemm.ee · 8 days ago

      How has zfs replaced rsync for you? One is a filesystem, and the other is a file-syncing tool. Does zfs do something I’m not aware of lol?

      • fmstrat@lemmy.nowsci.com · 7 days ago

        I used to use rsync to copy data from my storage array on one machine to an external and an off site backup. Since a lot of it was code, it always took forever to scan all the small files, and I had to script unlocking remote partitions.

        With encrypted ZFS, I can just zfs snap then zfs send, and it does the same thing at the block level, raw, so it’s way faster, transfers less data, and there’s no need to send a key or passphrase unless I need to mount it at the destination (meaning a cloud provider could never read the data, for instance).

        ZFS is also recursive, so if I have /storage and /storage/stuff defined, I can snap and send at either level, which makes it as versatile as rsync.
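
        The whole workflow is basically two commands; here’s a minimal sketch wrapped in Python, with the dataset, pool, and host names made up:

        ```python
        # Hedged sketch of the snapshot-then-raw-send workflow described above.
        # Dataset, pool, and host names are invented for illustration.
        import subprocess
        from datetime import datetime, timezone

        snap = f"storage@{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

        # -r snapshots /storage and every child dataset (e.g. /storage/stuff).
        subprocess.run(["zfs", "snapshot", "-r", snap], check=True)

        # -R replicates the whole tree; -w sends the raw encrypted blocks,
        # so the destination never needs the key or passphrase.
        send = subprocess.Popen(["zfs", "send", "-Rw", snap], stdout=subprocess.PIPE)
        subprocess.run(
            ["ssh", "offsite.example", "zfs", "receive", "-u", "backuppool/storage"],
            stdin=send.stdout,
            check=True,
        )
        send.wait()
        ```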

        • BoneALisa@lemm.ee · 7 days ago

          Oh interesting, I’m not super familiar with zfs’s tools, so that’s pretty cool! I’ll have to look at that for my storage array.

    • yrmp@lemmy.world · 8 days ago

      My school had nothing about React, Node, Angular, AngularJS, Sass, etc. back in 2015.

      We learned Perl, PHP, the LAMP stack, SOAP-based APIs, and other “antiquated” things. That provided a solid foundation of fundamentals that I’ve built a nice career on.

      It might have been by design, to give us a feel for the fundamentals. Or maybe it’s just that the people teaching it had mostly left the industry and were teaching how they used to do it.

      My department head was in his 70s and my professors all trended on the older side.

  • Qubbe@programming.dev · 7 days ago

    FTP Explorer all the way! Preferred that to FileZilla… I mean, it didn’t support SFTP, but I liked it.

    • ResoluteCatnap@lemmy.ml · 8 days ago

      They have bundled malware with the main downloads on their own site multiple times over the years, and they’ve even denied it and tried gaslighting people into believing the AV hits were false positives because AV companies are paid off by other corporations. The admin will even try to delete the threads about this stuff, but web archive to the rescue…

      https://web.archive.org/web/20180623190412/https://forum.filezilla-project.org/viewtopic.php?t=48441#p161487

      • 𝙁𝙌𝙌𝘿@lemmy.ohaa.xyz · 7 days ago

        You know what? I didn’t believe you, since I’ve been using it for a long time on Linux and never had any issues with it. Today, when I helped a friend (on Windows) with an SFTP transfer and recommended FileZilla, I realised for the first time that the official downloads page ships adware. The executable even gets flagged by Microsoft Defender and VirusTotal. That’s actually REALLY bad. Isn’t FileZilla operated by Mozilla? Should I stop using it, even though the Linux versions don’t have the sketchy stuff? It definitely leaves a really bad taste.

        • ResoluteCatnap@lemmy.ml · 7 days ago

          Yeah, it’s bad. I’m surprised they’re still serving that crap in their own bundle, but I guess some things don’t change.

          FileZilla has no relation to Mozilla. But yeah, I moved away from it years ago. The general recommendation I’ve seen is “anything but FileZilla”. Personally I use WinSCP on Windows, and I’ll have to figure out what to use when I switch my daily driver to Linux.

    • RonSijm@programming.dev · 8 days ago

      I suppose in the days of ‘cloud hosting’, a lot of people (hopefully) don’t just manually upload new files to a server anymore.

      Even if you still use normal servers that behave like this, a better practice would be to have a build server that creates the builds - for example, whenever you check code into the main branch, it produces a deployable build for the server, and you deploy from there - instead of compiling locally, opening FileZilla, and uploading the result.

      If you’re using ‘cloud hosting’ - for example AWS - and you’re on VMs or bare metal, you’d maybe create Elastic Beanstalk images and upload a new Application or Machine Image as a new version, and deploy that in a more managed way. Or if you’re using Docker, you just push a new Docker image to a registry and deploy that.
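
      A rough sketch of that last flow (registry URL and version scheme invented): build an immutable image, push it, and let the deploy tooling roll the tag out:

      ```python
      # Hedged sketch of "build once on the build server, deploy the artifact".
      # The registry URL and version scheme are made up for illustration.
      import subprocess

      REGISTRY = "registry.example.com/myapp"

      def run(*args: str) -> None:
          subprocess.run(args, check=True)

      def release(version: str) -> str:
          tag = f"{REGISTRY}:{version}"
          run("docker", "build", "-t", tag, ".")  # build from the checked-out main branch
          run("docker", "push", tag)              # publish the immutable artifact
          return tag  # deploy tooling rolls this exact tag out to each environment
      ```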

      • dan@upvote.au · 8 days ago

        For some of my sites, I still build on my PC and rsync the build directory across. I’ve been meaning to set up GitLab or something similar and configure automated deployments.
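
        Something like this minimal sketch (the build command and paths are placeholders), which could later move verbatim into a CI job:

        ```python
        # Hedged sketch of "build locally, rsync the output across".
        # The build command and paths are placeholders, not an actual setup.
        import subprocess

        def deploy() -> None:
            subprocess.run(["npm", "run", "build"], check=True)  # whatever the site's build step is
            subprocess.run(
                ["rsync", "-az", "--delete", "build/", "user@host:/var/www/site/"],
                check=True,
            )

        if __name__ == "__main__":
            deploy()
        ```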

        • amazing_stories@lemmy.world · 8 days ago

          This is what I do, because my sites aren’t complicated enough to warrant a build system. Personally I think most websites out there are over-engineered. Example: a Discord friend made a React site that displays stats from a gaming server. It looks nice, but you literally can’t hyperlink to any of the data - it can only be loaded dynamically - and it only looks coherent on a phone in portrait mode. A lot of people are following trends (some of them good ones) without really thinking about why.

          • dan@upvote.au · 8 days ago

            I’m starting to like the htmx model a lot: a server-rendered app that uses HTML attributes to configure the dynamic bits (e.g. which URL to hit and which DOM element to insert the response into). You don’t have to write much JS (or any, in some cases).

            you literally can’t hyperlink to any of the data

            I thought most React-powered frameworks came with a URL router out of the box these days? The developer does need to have a rough idea of what they’re doing, though.
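
            A minimal sketch of the htmx model, assuming a Flask backend (the endpoint and markup are invented): the hx-* attributes declare which URL to hit and where the returned fragment lands, no hand-written JS:

            ```python
            # Hedged sketch of the htmx model: the server renders HTML fragments,
            # hx-* attributes wire up the dynamic bits. All names are illustrative.
            from flask import Flask

            app = Flask(__name__)

            PAGE = """
            <script src="https://unpkg.com/htmx.org"></script>
            <button hx-get="/stats" hx-target="#stats" hx-swap="innerHTML">Refresh</button>
            <div id="stats">(no stats loaded yet)</div>
            """

            @app.get("/")
            def index():
                return PAGE

            @app.get("/stats")
            def stats():
                # htmx fetches this fragment and swaps it into <div id="stats">.
                return "<ul><li>players online: 42</li></ul>"
            ```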

  • EnderMB@lemmy.world · 8 days ago

    I remember joining the industry and switching our company over to full continuous integration and deployment. Instead of uploading DLLs directly to prod via FTP, we could verify each build, deploy it to each environment, and run service tests to check that pages were loading, all the way up to prod - with rollback. I showed my manager, and he shrugged. He didn’t see the benefit when, in his eyes, all he needed to do was drag and drop, then load the page to make sure all was fine.
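
    In spirit, the pipeline looked something like this sketch (the stage scripts are hypothetical stand-ins; the real thing used proper CI tooling rather than a hand-rolled loop):

    ```python
    # Hedged sketch of promote-through-environments with rollback.
    # deploy.sh / smoke_test.sh are hypothetical stand-ins for real CI steps.
    import subprocess

    ENVIRONMENTS = ["test", "staging", "prod"]

    def deploy(env: str, version: str) -> None:
        subprocess.run(["./deploy.sh", env, version], check=True)

    def smoke_test(env: str) -> bool:
        # e.g. load a few key pages and check they return HTTP 200
        return subprocess.run(["./smoke_test.sh", env]).returncode == 0

    def promote(version: str, last_good: str) -> None:
        for env in ENVIRONMENTS:
            deploy(env, version)
            if not smoke_test(env):
                deploy(env, last_good)  # roll the failed environment back
                raise RuntimeError(f"{version} failed in {env}; rolled back")
    ```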

    Unsurprisingly, I found out that this is how he builds websites to this day…

  • KubeRoot@discuss.tchncs.de · 8 days ago

    This is from before my time, but… deploying an app by uploading a pre-built bundle? If it’s a fully self-contained package, that seems good to me - perhaps better than many websites today…

    • dan@upvote.au · 8 days ago

      That’s one nice thing about Java. You can bundle the entire app in one .jar or .war file (a .war is essentially the same as a .jar, but it’s designed to run within a Servlet container like Tomcat).

      PHP also became popular in the PHP 4.x era because it had a large standard library (you could easily create a PHP site with no third-party libraries), and deployment was simply copying the files to the server. No build step needed. Classic ASP was popular before it and also had no build step, but it had a very small standard library and relied heavily on COM components, which had to be manually installed on the server.

      PHP is mostly the same today, but these days it’s JIT-compiled, so it’s faster than the PHP of the past, which was interpreted.