Resolved: How to migrate StatefulSet to different nodes?

Question:

I have a Kubernetes cluster of 3 nodes in Amazon EKS. It's running 3 CockroachDB pods in a StatefulSet. Now I want to use a different instance type for all nodes in my cluster, so my plan was this:
  1. Add 1 new node to the cluster, increase the replicas in my StatefulSet to 4, and wait for the new CockroachDB pod to fully sync (see the rough command sketch after this list).
  2. Decommission and stop one of the old CockroachDB nodes.
  3. Decrease the replicas of the StatefulSet back to 3 to get rid of one of the old pods.
  4. Repeat steps 1-3 two more times.
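
For reference, a rough sketch of what steps 1-2 look like as commands, assuming the StatefulSet is named `cockroachdb`, the cluster runs in insecure mode, and the public service is `cockroachdb-public` (all of these are placeholders; adjust for your setup):

```bash
# Step 1: scale up so a fourth pod (cockroachdb-3) is created, then wait
# for the rollout to report all replicas ready
kubectl scale statefulset cockroachdb --replicas=4
kubectl rollout status statefulset cockroachdb

# Step 2: decommission one of the old CockroachDB nodes; look up its
# node ID first with `cockroach node status`
kubectl exec -it cockroachdb-0 -- \
  /cockroach/cockroach node decommission <node-id> --insecure --host=cockroachdb-public
```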

Obviously, that doesn't work, because a StatefulSet deletes the highest-ordinal (most recently created) pods first when scaling down, so my new pod gets deleted instead of an old one. I guess I could just create a new StatefulSet and make it reuse the existing PVs (see the sketch below), but that doesn't seem like the best solution to me. Is there any other way to do the migration?
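
For completeness, that alternative would look something like this, assuming kubectl >= 1.20 (older versions spell the flag `--cascade=false`); the manifest file name is just a placeholder:

```bash
# Delete only the StatefulSet object; its pods and PVCs are orphaned, not deleted
kubectl delete statefulset cockroachdb --cascade=orphan

# Re-apply a (modified) StatefulSet manifest; new pods get the same names
# (cockroachdb-0, ...) and re-bind to the existing PVCs, keeping their data
kubectl apply -f cockroachdb-statefulset.yaml
```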

Answer:

You can consider making a copy of your ASG's current launch template, upgrading the instance type in the copy, pointing your ASG at the new launch template, and then performing an ASG instance refresh. With a cluster of 3 nodes and a minimum healthy percentage of 90%, only 1 instance will be replaced at a time. The affected pod on the drained node will sit in the Pending state for roughly 5-10 minutes and then be redeployed on the new node. This way you do not need to scale the StatefulSet up unnecessarily.
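
A rough sketch of those steps with the AWS CLI, assuming a self-managed node group backed by a launch template (the template ID, the ASG name `cockroach-asg`, and the instance type `m5.xlarge` are placeholders). This uses a new launch template version rather than a literal copy, which achieves the same thing:

```bash
# Create a new launch template version based on the current one,
# overriding only the instance type
aws ec2 create-launch-template-version \
  --launch-template-id lt-0123456789abcdef0 \
  --source-version '$Latest' \
  --launch-template-data '{"InstanceType":"m5.xlarge"}'

# Point the ASG at the new version
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name cockroach-asg \
  --launch-template LaunchTemplateId=lt-0123456789abcdef0,Version='$Latest'

# Start a rolling instance refresh; with 3 nodes, MinHealthyPercentage=90
# means only one instance is taken out of service at a time
aws autoscaling start-instance-refresh \
  --auto-scaling-group-name cockroach-asg \
  --preferences '{"MinHealthyPercentage":90}'
```

You can watch the rollout with `aws autoscaling describe-instance-refreshes --auto-scaling-group-name cockroach-asg`.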


Source: Stackoverflow.com