Hi all!

As of today, I am running my services as rootless podman pods and containers. Each functional stack gets its own dedicated user (the user cloud runs a pod with nextcloud-fpm, nginx, postgresql…) with user mapping. My thinking was that if an attacker can escape a container, the damage should be contained to that specific user.

Is it really meaningful? With the service users’ homes set up in /var/lib, a lot of small things become annoying, and I wonder whether the current setup is really worth it.
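
For reference, a stack ends up looking roughly like this (the user name, uid range and images are illustrative, not my exact setup):

    # dedicated account with its home under /var/lib
    sudo useradd --create-home --home-dir /var/lib/cloud cloud

    # private uid/gid range for the rootless user namespace
    echo "cloud:200000:65536" | sudo tee -a /etc/subuid /etc/subgid

    # let the user's containers keep running without an open login session
    sudo loginctl enable-linger cloud

    # then, as that user, one pod with the stack's containers
    sudo machinectl shell cloud@
    podman pod create --name cloud
    podman run -d --pod cloud --name cloud-db  docker.io/library/postgres:16
    podman run -d --pod cloud --name cloud-app docker.io/library/nextcloud:fpm
    podman run -d --pod cloud --name cloud-web docker.io/library/nginx:alpine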

  • SMillerNL@lemmy.world · 4 days ago

    If an attacker can escape a container, a lot of companies worldwide are going to need to patch a 0-day. I do not expect that to be part of my threat model for self-hosted services.

    • qqq@lemmy.world · 4 days ago

      Woah, no. Sure, escaping via a kernel bug or some issue in the container runtime is unexpected, but I “escape” containers all the time in my job because of configuration issues, poorly considered bind mounts, or because the “contained” service itself is designed to manage things outside of the container.
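
      A made-up but representative example of what I mean by a configuration issue or a poorly considered bind mount (root-run container, placeholder image name):

        # a backup container that bind-mounts the host's root filesystem read-write
        podman run -d --name backup -v /:/host backup-image

        # anyone with code execution inside it can now modify the host directly,
        # no kernel 0-day needed (run from inside the container):
        echo 'ssh-ed25519 AAAA... attacker' >> /host/root/.ssh/authorized_keys

        # handing a container the engine's socket (/run/podman/podman.sock or
        # /var/run/docker.sock) is the same class of mistake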

      It might be valid not to consider it for the services you run, but that reasoning is very wrong.

    • sugar_in_your_tea@sh.itjust.works · 3 days ago

      Companies don’t typically host multiple containers on the same host. So having a different user for each is less important than securing the connections between machines, since a given box isn’t particularly interesting on its own. Attackers will still try to break out, of course, so the separation still serves as a backup.

      As a self-hoster, you typically do the opposite. You run multiple services on the same host, and the internal network isn’t particularly secure. So you should be focusing more on mitigating issues, and having each service run as an unprivileged user is one fairly easy way to do that.
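
      Concretely it is just ordinary Unix permissions. With the OP’s cloud user plus, say, a hypothetical media user for another stack, you can sanity-check the separation from the host (assuming the service homes aren’t world-readable):

        sudo -u cloud ls /var/lib/media
        # ls: cannot open directory '/var/lib/media': Permission denied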

        • sugar_in_your_tea@sh.itjust.works · 2 days ago

          Sure, but those will usually be pieces of one app on the same host, not whole separate apps. For an inventory management app, for example, you might have the auth server and its database on one host, the CRUD app and its database on another, and the report server, its database, and a replica of the CRUD db on a third. And I use the term “host” broadly enough to include VMs on the same physical hardware. These hosts will also have restricted communication with each other.

          At least, that’s how I’ve seen it done.

          Self-hosters will generally run multiple full apps on one host. It’s a different setup.

    • mel ♀@jlai.lu (OP) · 4 days ago

      I guess I should define my threat model first. Your answer pulls me towards a single user, though.

  • neidu3@sh.itjust.works · 4 days ago

    I generally don’t containerize things because I’m too old and crusty, but segregating services across several users is basically how it’s been done for ages, and while it may not be particularly useful in your case, I consider it a reasonable best practice that costs you nothing.
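
    The pre-container version of the same idea is one dedicated account per daemon and a unit (or init script) that runs as it; the names here are made up:

      # dedicated system account, no login shell
      sudo useradd --system --create-home --home-dir /var/lib/myapp \
          --shell /usr/sbin/nologin myapp

      # /etc/systemd/system/myapp.service (relevant bits only)
      [Service]
      User=myapp
      StateDirectory=myapp
      ExecStart=/usr/local/bin/myapp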

  • tty5@lemmy.world · 4 days ago

    It’s always effort vs risk.

    Since it’s a do-once-and-forget kind of thing, I’d rate the effort rather low.

    As for risk: in the worst-case scenario, a single compromised service means all of them are compromised, with the attacker getting access to everything those services can access, including all the credentials. Will you make the effort to stay on top of updates for all services?

    As far as I’m concerned: at home, all the containers for a given service share one dedicated user; at work, every container gets its own.
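
    The credentials point is the part people underestimate: rootless podman keeps each user’s images, volumes and secrets under that user’s own home by default, so per-service users split the blast radius along those directories (paths below assume the OP’s /var/lib homes and a hypothetical second media user):

      ls -d /var/lib/*/.local/share/containers/storage
      # /var/lib/cloud/.local/share/containers/storage
      # /var/lib/media/.local/share/containers/storage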