I put up a vps with nginx and the logs show dodgy requests within minutes, how do you guys deal with these?

Edit: Thanks for the tips everyone!

  • h3x@kbin.social
    · 1 year ago

    Pentester here. Those bad-looking requests are mostly random fuzzing by bots, and sometimes come from benign vulnerability scanners like Censys. If you keep your applications up to date and your credentials strong, there shouldn’t be much to worry about. Of course, you should review the risks and possible vulns of every web application and other service well before exposing them to the public. Search for general server hardening tips online if you’re unsure about your configuration hygiene.

    Another question is: do you need to expose your services to the public at all? If they are purely private or for a small group of people, I’d recommend putting them behind a VPN. WireGuard is probably the easiest one to set up, and it’s so transparent you likely wouldn’t even notice it’s there while using it.
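    If you go that route, a minimal WireGuard config is a good reference point. This is just a sketch; the addresses and key placeholders are illustrative, not from this thread:

```ini
# /etc/wireguard/wg0.conf on the server (illustrative values only)
[Interface]
Address = 10.0.0.1/24
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# one [Peer] block per client device
PublicKey = <client-public-key>
AllowedIPs = 10.0.0.2/32
```

    Bring it up with wg-quick up wg0, then bind your private services to the tunnel address instead of a public one.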

    But if you really just want to get rid of those annoying requests, there are already really good tips posted here.

    Edit: typos

  • orangeboats@lemmy.world
    · 1 year ago

    I only expose services on IPv6, for now that seems to work pretty well - very few scanners (I encounter only 1 or 2 per week, and they seem to connect to port 80/443 only).

    • Pixel@lemmy.sdf.org
      · 1 year ago

      Isn’t that akin to security through obscurity? You might want one more layer of defense.

      • orangeboats@lemmy.world
        · 1 year ago

        I still have a firewall (one that blocks almost all incoming connections) and sshguard set up. I also check the firewall logs daily and block any IPs I find suspicious.

        I could probably do better, but with so few scanners connecting to my home server, I sleep way better than back when I set up a server on IPv4!

        Also, even if my home server gets attacked, at least I know my other devices aren’t sharing an IP with it… NAT-less is a godsend.
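        The daily log review can be semi-automated with a small script. A rough sketch (the log format, names, and threshold here are made up for illustration):

```python
import re
from collections import Counter

# Rough sketch: tally failed SSH logins per source IP from auth-log lines.
# The log format and the threshold are illustrative, not from this thread.
FAILED = re.compile(r"Failed password for .* from (\d+\.\d+\.\d+\.\d+)")

def suspicious_ips(lines, threshold=5):
    """Return the set of IPs with at least `threshold` failed attempts."""
    counts = Counter()
    for line in lines:
        match = FAILED.search(line)
        if match:
            counts[match.group(1)] += 1
    return {ip for ip, n in counts.items() if n >= threshold}
```

        Anything it flags can then get a manual look before the IP goes into the firewall.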

      • orangeboats@lemmy.world
        · 1 year ago

        Lol, I have heard some ISP horror stories from the Down Under.

        I am fortunate enough that my country’s government has been forcing ISPs to implement IPv6 in their backbone infrastructure, so nowadays all I really have to do is flick a switch on the router (unfortunately, many routers still turn off IPv6 by default) to get an IPv6 connection.

        • 🅱🅴🅿🅿🅸@sh.itjust.works
          · 1 year ago

          Yeah, the internet services here are really stuck in the past. Hard to tell if they’re taking advantage of the scarcity of IPv4 addresses to make more money somehow, or if they’re just too fuckin’ lazy.

  • Archy@lemmy.world
    · 1 year ago

    I use an ACL to which I add my home/work IPs, as well as a few commonly used VPN IPs. Cloudflare blocks known bots for me. I don’t see anything in the server logs, but I do see the attempts on the CF side.
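    For anyone wondering, that kind of ACL in nginx looks roughly like this (the addresses are placeholders):

```nginx
# allow only known egress IPs to reach a sensitive path; everyone else gets 403
location /admin/ {
    allow 203.0.113.10;     # home IP (placeholder)
    allow 198.51.100.0/24;  # office range (placeholder)
    deny  all;
}
```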

    • Meow.tar.gz@lemmy.goblackcat.com
      · 1 year ago

      I am actually thinking about going back to Cloudflare tunnels. The only reason I’m hesitant is that I use a fair amount of bandwidth, as I host a Mastodon server as well as a Lemmy one, and I don’t want to be stuck with a huge bandwidth bill.

  • gobbling871@lemmy.world
    · 1 year ago

    Nothing too fancy other than following the recommended security practices, and being aware of and regularly monitoring the potential security holes of the servers/services I have open.

    Though it’s only semi-related, and commonly frowned upon by admins: I have unattended upgrades on my servers, and most of my services are auto-updated. If an update breaks a service, I guess it’s an opportunity to earn some more stripes.

        • exu@feditown.com
          · 1 year ago

          All the legit reasons mentioned in the blog post seem to apply to badly behaved client software. Using a good and stable server OS avoids most of the negatives.

          Unattended Upgrades on Debian for example will by default only apply security updates. I see no reason why this would harm stability more than running a potentially unpatched system.
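          For reference, a security-only setup looks roughly like this (the exact origin patterns vary by Debian release, so treat it as a sketch):

```ini
# /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";

# /etc/apt/apt.conf.d/50unattended-upgrades: keep only the security origin
Unattended-Upgrade::Origins-Pattern {
        "origin=Debian,codename=${distro_codename}-security,label=Debian-Security";
};
```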

          • med@sh.itjust.works
            · 1 year ago

            Hell, Debian is usually so stable that I just run dist-upgrade on my laptop every morning.

            The difference there is that I’d be working with my laptop regularly and would notice problems more quickly

          • gobbling871@lemmy.world
            · 1 year ago

            Even if minimal, the risk of security patches introducing new changes to your software is still there, as we all have different ideas of how/what correct software updates should look like.

  • Teapot@programming.dev
    · 1 year ago

    Anything exposed to the internet will get probed by malicious traffic looking for vulnerabilities. Best thing you can do is to lock down your server.

    Here’s what I usually do:

    • Install and configure fail2ban
    • Configure SSH to only allow SSH keys
    • Configure a firewall to only allow access to public services, if a service only needs to be accessible by you then whitelist your own IP. Alternatively install a VPN
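    The fail2ban part of that can be sketched as follows (the values are illustrative, not prescriptive):

```ini
# /etc/fail2ban/jail.local
[sshd]
enabled  = true
maxretry = 5
findtime = 10m
bantime  = 1h
```

    For the firewall whitelist, something like ufw allow from 203.0.113.10 to any port 22 proto tcp (with your own IP in place of the placeholder) covers the SSH case.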
      • ItsGhost@sh.itjust.works
        · 1 year ago

        Seconded. Not only is CrowdSec a hell of a lot more resource-efficient (Go vs Python, IIRC), having it download a list of known bad actors for you in advance really cuts down what it needs to process in the first place. I’ve had servers DDoSed just by fail2ban trying to process the requests.

        • Alfi@lemmy.alfi.casa
          · 1 year ago

          Hi,

          Reading the thread I decided to give it a go, I went ahead and configured crowdsec. I have a few questions, if I may, here’s the setup:

          • I have set up the basic collections/parsers (mainly nginx/linux/sshd/base-http-scenarios/http-cve)
          • I only have two services open on the firewall, https and ssh (no root login, ssh key only)
          • I have set up the firewall bouncer.

          If I understand correctly, any attack detected will result in the ip being banned via iptables rule (for a configured duration, by default 4 hours).

          • Is there any added value to run the nginx bouncer on top of that, or any other?
          • cscli hub update/upgrade will fetch new definitions for collections, if I understand correctly. Is there any need to run this regularly, scheduled with, say, a cron job, or does crowdsec do that automatically in the background?
          • ItsGhost@sh.itjust.works
            · 1 year ago

            Well I was expecting some form of notification for replies, but still, seen it now.

            My understanding of this is limited having mostly gotten as far as you have and been satisfied.

            For other bouncers, there are actually a few decisions you can apply. By default the only decision is BAN, which as the name suggests just outright blocks the IP at whatever level your bouncer runs at (L4 for the firewall bouncer, L7 for nginx). The nginx bouncer can do more though, with CAPTCHA or CHALLENGE decisions that let false positives still access your site. I tried writing something similar for traefik but haven’t deployed anything yet to comment further.

            With updates, I don’t have them automated, but I do occasionally go in and run a manual update when I remember (usually when I upgrade my OPNsense firewall, which runs it). I don’t think it’s a bad idea at all to automate them; the attack vectors just don’t change that often. One thing to note: newer scenarios only run on the latest agent, something I discovered recently when trying to upgrade. I believe it will refuse to update scenarios if doing so would break them this way, but test it yourself before enabling cron.
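            If you do automate it, a simple cron entry is enough. Something like this hypothetical daily refresh (the path and schedule are just an example):

```
# /etc/cron.d/crowdsec-hub
0 4 * * * root cscli hub update && cscli hub upgrade
```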

  • DigitalPortkey@lemmy.world
    · 1 year ago

    I stopped messing with port forwarding and reverse proxies and fail2ban and all the other stuff a long time ago.

    Everything is accessible for login only locally, and then I add Tailscale (alternative would be ZeroTier) on top of it. Boom, done. Everything is seamless, I don’t have any random connection attempts clogging up my logging, and I’ve massively reduced my risk surface. Sure I’m not immune; if the app communicates on the internet, it must be regularly patched, and that I do my best to keep up with.

  • z3bra@lemmy.sdf.org
    · 1 year ago

    I mean, it’s not a big deal to have crawlers and bots poking at your webserver if all you do is serve static pages (which is common for a blog).

    Now if you run code on the server side (e.g. using PHP or Python), you’ll want to retrieve multiple known lists of bad actors to block them by default, and set up fail2ban to block those that get through. The most important thing, however, is to keep your server up to date at all times.
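    The blocklist part can be as simple as an nginx include that you regenerate from those lists (paths and IPs here are placeholders):

```nginx
# /etc/nginx/conf.d/blocklist.conf, regenerated periodically from public bad-actor lists
deny 198.51.100.4;
deny 203.0.113.0/24;

# and in the server block:
# include /etc/nginx/conf.d/blocklist.conf;
```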

  • OuiOuiOui@lemmy.world
    · 1 year ago

    I’ve been using crowdsec with swag for quite some time. I set it up with a Discord notifier. It’s very interesting to see the types of exploits that get probed, and from which countries. Crowdsec blocks just like fail2ban, and seems to do so in a more elegant fashion.

  • InEnduringGrowStrong@lemm.ee
    · 1 year ago

    I do client ssl verification.
    Nobody but me or my household is supposed to access those anyway.
    Any failure is a ban (I don’t remember how long for).
    I also ban every IP not from my country, adjusting that sometimes if I travel internationally.
    It’s much easier when you host stuff only for your devices (my case) and not for the larger public (like this lemmy instance).

    • ComptitiveSubset@lemmy.world
      · 1 year ago

      That sounds like an excellent solution for web based apps, but what about services like Plex or Nextcloud that use their own client side apps?

      • InEnduringGrowStrong@lemm.ee
        · 1 year ago

        Some apps now have support for client certs (home-assistant ❤).
        Nextcloud is one of the only apps that’s open without client SSL, because it’d be highly inconvenient to have to install a cert on someone’s device just to share a file link with them. The Plex app never works right for me, so I just use the browser. My TV is too old to have it built in, so I have a VM in which I use a browser to watch Plex.

    • karlthemailman@sh.itjust.works
      · 1 year ago

      How do you have this set up? Is it possible to have a single verification process in front of several exposed services? Like as part of a reverse proxy?

      • InEnduringGrowStrong@lemm.ee
        · 1 year ago

        Yes it’s running in my reverse proxy.
        Nginx is doing my “client ssl verify” in front of my web services.
        You can even do this on a per-URI/location basis.
        For example, my nextcloud is open without client certs so I can share files with people, but the admin settings path is protected by client ssl.
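        For anyone wanting to replicate this, the nginx side looks roughly like the following sketch (paths and upstream names are placeholders):

```nginx
server {
    listen 443 ssl;
    ssl_client_certificate /etc/nginx/client-ca.pem;
    ssl_verify_client optional;  # request a cert, but decide per location

    location / {
        proxy_pass http://nextcloud;  # public, so file-share links still work
    }

    location /settings/admin {
        # only clients presenting a valid cert get through
        if ($ssl_client_verify != SUCCESS) { return 403; }
        proxy_pass http://nextcloud;
    }
}
```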

      • dinosaurdynasty@lemmy.world
        · 1 year ago

        Yup, there are many ways of doing that. Most reverse proxies should support basic auth (easy, but browser UX is terrible and it breaks websockets) or TLS client auth (even worse browser UX, phones are awful).

          The best thing is to do something like Caddy + Authelia (which is what I currently do with most things, with exceptions for specific user agents and IPs for apps that require it, i.e. non-browser stuff like Jellyfin).
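          The Caddy + Authelia wiring is roughly this (hostnames and ports are placeholders, and newer Authelia versions use a different verify endpoint):

```caddyfile
app.example.com {
    forward_auth authelia:9091 {
        uri /api/verify?rd=https://auth.example.com/
        copy_headers Remote-User Remote-Groups Remote-Name Remote-Email
    }
    reverse_proxy app:8080
}
```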

  • apigban@lemmy.dbzer0.com
    · 1 year ago

    Depends on what kind of service the malicious requests are hitting.

    Fail2ban can be used for a wide range of services.

    I don’t have a public-facing service (except for a honeypot), but I’ve used fail2ban before on public ssh/webauth/openvpn endpoints.

    For a blog, you might be well served by a WAF; I’ve used ModSecurity before, though I’m not sure if there’s anything newer.