• 0 Posts
  • 89 Comments
Joined 1 year ago
Cake day: June 11th, 2023



  • Is Orca that resource intensive? I’m running it in a container with KasmVNC and have never really checked the resource usage (rough sketch of the setup below). Admittedly it’s on one of my local servers in another room. I guess it also depends on how large your projects are.

    Edit: maybe it’s just my small projects
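    For reference, the setup looks roughly like a single docker run. This is only a sketch: the image name, port, and volume path are assumptions here, so check the docs of whichever KasmVNC-wrapped OrcaSlicer image you actually use.

    ```sh
    # Not my exact setup -- image name, port and volume path are assumptions.
    docker run -d \
      --name orcaslicer \
      --restart unless-stopped \
      -p 3000:3000 \
      -v /path/to/projects:/config \
      lscr.io/linuxserver/orcaslicer:latest
    # Then browse to http://<server-ip>:3000 for the KasmVNC session.
    ```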

  • The volume of cash that Microsoft throws at retailers (custom builders / big box) is astronomical. Worked for a relatively small retailer with some international buying power; the EOFY “MDF” from Microsoft was an absurd figure.

    Our builders would belt out 3–6 machines per day, depending on the complexity of the custom build; the pre-built machines were in the 6+ per day range.

    Considering the vast majority of those machines were running Windows (some were sold without an OS), a quick estimate after too many beers put us out of pocket at most 10% of the bulk-buy price for licence keys once our “market development funds” came through.

    It’s fucking crook.



  • I’ve not tested the linked method, but yeah, it does seem possible that way.

    My lone VM doesn’t need access to those drives, so I’ve not had a reason to.

    You could probably run OMV in an LXC and skip the overhead of a VM entirely. LXCs are containers; you can just edit their config files on the Proxmox host and pass drives straight through (sketch below).

    Your containers will need to be privileged. If you already have something set up as unprivileged, you can clone the container and make the clone privileged!
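    The passthrough itself is just a mount-point entry. A rough sketch, assuming a container with ID 101 and placeholder paths:

    ```sh
    # On the Proxmox host -- container ID and paths are placeholders. Pick one:
    # Option 1: append a mount point to the container config directly:
    echo 'mp0: /mnt/tank/media,mp=/mnt/media' >> /etc/pve/lxc/101.conf
    # Option 2: let pct write the same entry for you:
    pct set 101 -mp0 /mnt/tank/media,mp=/mnt/media
    # Restart the container so the bind mount shows up inside it:
    pct reboot 101
    ```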


  • Yeah there is a workaround for using bind-mounts in Proxmox VMs: https://gist.github.com/Drallas/7e4a6f6f36610eeb0bbb5d011c8ca0be

    If you wanted to, and your drives are mounted on the Proxmox host (not inside a VM), try an LXC for the services you’re running; if you do require a VM, then the workaround above is the way to go, after backing up your data.

    I’ve got my drives mounted in a container with bind mounts in the container config, along these lines:
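    (Sketch only; the container ID, hostname and paths are placeholders, not my exact layout. The mpN lines are the bind mounts from host to container.)

    ```
    # /etc/pve/lxc/105.conf on the Proxmox host -- ID and paths are placeholders
    arch: amd64
    hostname: fileserver
    ostype: debian
    rootfs: local-lvm:vm-105-disk-0,size=8G
    memory: 1024
    mp0: /mnt/tank/media,mp=/mnt/media
    mp1: /mnt/tank/backups,mp=/mnt/backups
    unprivileged: 0
    ```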

    Basicboi config, but it’s quick and gets the job done.

    I’d originally gone down the same route as you had, with VMs and shares, but it was all too much after a while.

    I’m almost rid of all my VMs; Home Assistant is currently the last one I’ve yet to migrate. Frigate has moved to a Docker container under NixOS, and my Tailscale exit node runs under NixOS too, while the vast majority of other services are already in LXCs.


  • Ahh, the shouting from the rooftops wasn’t aimed at you, but at the general group of people in similar threads. Lots of people shill Tailscale because it’s a great service for free, but there needs to be a level of caution with it too.

    I’m quite new to the self-hosting game myself, but services like Tailscale, which have so much insight into and reach inside our networks, are something that, in the end, should be self-hosted.

    If you’re using SMB locally between VMs, maybe try Proxmox; https://clan.lol/ is something I’m looking into to replace Proxmox down the line. I currently share bind mounts between multiple LXCs from the host Proxmox OS; configuration is pretty easy (quick sketch below), and there are lots of tutorials online for getting started.
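    For example, something along these lines, where container IDs and paths are placeholders:

    ```sh
    # On the Proxmox host: give two containers the same host directory,
    # so they share files without any SMB in between (IDs/paths are placeholders).
    pct set 101 -mp0 /mnt/tank/shared,mp=/mnt/shared
    pct set 102 -mp0 /mnt/tank/shared,mp=/mnt/shared
    ```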