Infrastructure Design Question
I'm on the fence about switching from Vultr to Hetzner and could use advice from folks smarter than me.
Right now I'm simply running a Docker Swarm cluster hosting a few high-traffic WordPress sites, a handful of Python scripts and APIs, a few Next.js applications, and a handful of databases (MongoDB, Postgres, and MariaDB). Oh, and Redis. I'm using Vultr's object storage to hold things like WordPress backups, DB dumps, and all assets for my stateless apps. I've also attached block storage to the hosts to keep the /var/lib/docker folder separate from the system disk.
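For reference, the block-storage split is just a separate filesystem mounted at Docker's data dir, roughly like this (the device name /dev/vdb is an assumption; check `lsblk` on your host):

```shell
# Format the attached block volume and mount it as Docker's data directory.
# /dev/vdb is a placeholder -- confirm the real device name with `lsblk` first.
mkfs.ext4 /dev/vdb
mkdir -p /var/lib/docker
echo '/dev/vdb /var/lib/docker ext4 defaults,nofail 0 0' >> /etc/fstab
mount /var/lib/docker
```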
I believe several issues with this setup would be resolved by moving to something like K8s, though I could be wrong here. For one, the WordPress and database instances are bound to specific nodes, which isn't ideal, and I haven't been able to get satisfactory performance from shared-filesystem options like GlusterFS.
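To illustrate the pinning problem, this is roughly what node-bound databases look like in Swarm today (service and label names here are made up for the sketch):

```shell
# Label one node as the database host, then constrain the service to it.
# Without shared storage, the volume only exists on that node, so the
# service can never reschedule elsewhere -- the problem described above.
docker node update --label-add db=true node-1
docker service create --name postgres \
  --constraint 'node.labels.db == true' \
  --mount type=volume,source=pgdata,target=/var/lib/postgresql/data \
  postgres:16
```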
Anyway, my questions are:
1. Would I benefit from converting this over to K8s?
2. Running this on Vultr is really expensive compared to Hetzner (maybe...). However, I'd be looking at the ASH (Ashburn) DC, and Hetzner doesn't offer managed object storage there, so I'd have to run that myself. I could use Backblaze or similar for backups, but for user-uploaded files I'd like to keep storage close for latency and transfer-cost reasons. What options would I have here? Ceph? SeaweedFS?
3. Would it be easier to just run K8s on Vultr instead, since they offer a managed control plane for free?
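On question 2: one thing worth noting is that Ceph (via its RADOS Gateway), SeaweedFS, and MinIO all expose S3-compatible endpoints, so existing backup scripts and app SDKs usually keep working if you just override the endpoint URL. A hedged sketch with the AWS CLI (endpoint and bucket names are placeholders):

```shell
# Upload a backup to a self-hosted S3-compatible store instead of a cloud
# provider's object storage. Only the endpoint URL changes; the S3 API,
# credentials flow, and bucket semantics stay the same.
aws s3 cp backup.sql.gz s3://wp-backups/ \
  --endpoint-url https://s3.internal.example.com
```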
I'm also working on another application that will be rather large scale, with potentially very large storage requirements (images and videos) and heavy CPU usage. I'd plan to put this in its own cluster, but I'm wary that I might hit some limitations with Vultr, e.g. would I really want to store 50TB of data in their S3-compatible storage? I also still need to figure out how best to handle large database servers in K8s. (RAID multiple block storage volumes together and constrain the DB to a node?)
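The RAID-plus-pinning idea sketched above would look something like this (device and node names are assumptions; note that RAID 0 stripes with no redundancy, so backups become critical):

```shell
# Stripe two attached block volumes into one larger, faster device (RAID 0).
# Device names are placeholders -- check `lsblk` for the real ones.
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/vdb /dev/vdc
mkfs.ext4 /dev/md0

# Then pin the database to this node in K8s, e.g. label the node and set a
# matching nodeSelector (db-storage: "true") in the StatefulSet spec.
kubectl label node worker-1 db-storage=true
```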
Sorry this post is all over the place. My mind has been going a thousand miles a minute lately.