VictoriaMetrics: migrating VMSingle and VictoriaLogs data between Kubernetes clusters

07/05/2025

We have VictoriaMetrics and VictoriaLogs running on an AWS Elastic Kubernetes Service cluster.

We do major upgrades to EKS by creating a new cluster, and therefore we have to transfer monitoring data from the old VMSingle instance to the new one.

For VictoriaMetrics, there is the vmctl tool which can migrate data through the APIs of the old and new instances, acting as a proxy between the two instances.

With VictoriaLogs, the situation is a bit more complicated, and there are currently two options – let’s look at them below.

So, here’s our setup:

  • old Kubernetes cluster EKS 1.30
  • new Kubernetes cluster EKS 1.33

VictoriaMetrics and VictoriaLogs are deployed with our own Helm chart, which installs victoria-metrics-k8s-stack and victoria-logs-single through dependencies, plus a set of various additional services such as PostgreSQL Exporter.

Migrating VictoriaMetrics metrics

Running vmctl

vmctl supports migration from VMSingle to VMCluster and vice versa, or simply between VMSingle => VMSingle or VMCluster => VMCluster instances.

In our case, these are just two instances of VMSingle.

You can install vmctl in the Pod with VMSingle (see How to build), but since the CLI works through the API anyway, it is easier to create a separate Pod and do everything from it. The Docker image is available here – victoriametrics/vmctl.

Since the entrypoint for this image is set to /vmctl-prod, to get a shell in the container we can pass --command with an echo and sleep loop, and then do everything we need from the console:

$ kubectl run vmctl-pod --image=victoriametrics/vmctl --restart=Never --command -- /bin/sh -c "while true; do echo ping; sleep 5; done"
pod/vmctl-pod created

It makes no difference which cluster you run it on.

Connect to the Pod:

$ kk exec -ti vmctl-pod -- sh
/ #

Check the CLI:

/ # /vmctl-prod vm-native --help
NAME:
   vmctl vm-native - Migrate time series between VictoriaMetrics installations via native binary format

USAGE:
   vmctl vm-native [command options] [arguments...]

OPTIONS:
   -s                              Whether to run in silent mode. If set to true no confirmation prompts will appear. (default: false)
   ...

Start the migration

The Kubernetes Pod with vmctl will act as a proxy between the source and destination, so it must have a stable network.

In addition, if you are migrating a large amount of data, then look towards the --vm-concurrency option to run the migration in several parallel threads, but keep in mind that each worker will use additional CPU and Memory.

The documentation also describes possible issues with limits – see Migrating data from VictoriaMetrics, and it is useful to look at the Migration tips section.

It is also recommended to add the --vm-native-filter-match='{__name__!~"vm_.*"}' filter to avoid migrating metrics that are related to VictoriaMetrics itself, as this can lead to data collision – duplicate time series.

Although in my case, VMAgent adds a label with the cluster name to all metrics:

...
  vmagent:
    enabled: true
    spec:
      externalLabels:
        cluster: "eks-ops-1-33"
...

If resources.limits are set for the VMSingle Pod, it’s better to disable or increase them, and to increase resources.requests as well, because I got 504 errors and Pod evictions a few times.

It may also make sense to move VMSingle to a dedicated WorkerNode, because in our case t3 and Spot EC2 instances are used for monitoring.
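
As a rough illustration, in the victoria-metrics-k8s-stack values this might look like the snippet below – the fields follow the VMSingle CRD spec, but the numbers and the node label are placeholders, not our real values:

...
  vmsingle:
    spec:
      resources:
        requests:
          cpu: "1"
          memory: 4Gi
        # limits intentionally not set for the duration of the migration
      nodeSelector:
        role: monitoring    # hypothetical label of a dedicated WorkerNode
...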

What and where we will migrate:

  • source: VMSingle on EKS 1.30
    • endpoint: vmsingle.monitoring.1-30.ops.example.co
  • destination: VMSingle on EKS 1.33
    • endpoint: vmsingle.monitoring.1-33.ops.example.co

From the Pod with the vmctl, check access to both endpoints:

/ # apk add curl

/ # curl -X GET -I https://vmsingle.monitoring.1-30.ops.example.co
HTTP/2 400

/ # curl -X GET -I https://vmsingle.monitoring.1-33.ops.example.co
HTTP/2 200

And start the migration for the entire period – I don’t remember when exactly this cluster was created, let’s say January 2023:

/ # /vmctl-prod vm-native \
> --vm-native-src-addr=https://vmsingle.monitoring.1-30.ops.example.co/ \
> --vm-native-dst-addr=https://vmsingle.monitoring.1-33.ops.example.co \
> --vm-native-filter-match='{__name__!~"vm_.*"}' \
> --vm-native-filter-time-start='2023-01-01'
VictoriaMetrics Native import mode
...

The process has started:

Memory usage on the source went up to 5-6 gigabytes:

The destination had a little more CPU, but less memory:

And the completion – it took about six and a half hours, but I ran it without --vm-concurrency:

...
2025/06/23 19:07:29 Import finished!
2025/06/23 19:07:29 VictoriaMetrics importer stats:
  time spent while importing: 6h30m8.537582366s;
  total bytes: 16.5 GB;
  bytes/s: 705.9 kB;
  requests: 6882;
  requests retries: 405;
2025/06/23 19:07:29 Total time: 6h30m8.541808518s
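
If I were running it again, I would add --vm-concurrency to speed it up – a sketch based on the same command as above (the value of 4 workers is just an example, pick it based on the available CPU and Memory):

/ # /vmctl-prod vm-native \
> --vm-native-src-addr=https://vmsingle.monitoring.1-30.ops.example.co/ \
> --vm-native-dst-addr=https://vmsingle.monitoring.1-33.ops.example.co \
> --vm-native-filter-match='{__name__!~"vm_.*"}' \
> --vm-native-filter-time-start='2023-01-01' \
> --vm-concurrency=4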

Now we have a month’s worth of graphs on the new EKS cluster, even though the cluster was created just a week ago:

If migration fails

First, check what has already arrived – you need to find the old metrics on the new cluster and remove them before retrying.

In my case, I can check on the new cluster using the cluster label – a useful thing:

$ curl -s 'http://localhost:8429/prometheus/api/v1/series' -d 'match[]={cluster="eks-ops-1-30"}' | jq
...
    {
      "__name__": "yace_cloudwatch_targetgroupapi_requests_total",
      "cluster": "eks-ops-1-30",
      "job": "yace-exporter",
      "instance": "yace-service:5000",
      "prometheus": "ops-monitoring-ns/vm-k8s-stack"
    }
...

Documentation on deleting metrics and working with the VictoriaMetrics API in general – How to delete or replace metrics in VictoriaMetrics and Deletes time series from VictoriaMetrics.

Run a request to /api/v1/admin/tsdb/delete_series:

$ curl -s 'http://localhost:8429/api/v1/admin/tsdb/delete_series' -d 'match[]={cluster="eks-ops-1-30"}'

Check:

$ curl -s 'http://localhost:8429/prometheus/api/v1/series' -d 'match[]={cluster="eks-ops-1-30"}' | jq
{
  "status": "success",
  "data": []
}

Now you can repeat the migration.

Another option is to add the -dedup.minScrapeInterval=1ms flag – then VictoriaMetrics will remove duplicates by itself, but I have not tested this option.
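
If you decide to try it, the flag can presumably be passed to VMSingle through extraArgs in the victoria-metrics-k8s-stack values – a sketch I have not verified:

...
  vmsingle:
    spec:
      extraArgs:
        dedup.minScrapeInterval: 1ms
...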

VictoriaLogs migration

With VictoriaLogs, the situation is a little more complicated, because vlogscli does not yet have an option to transfer data the way vmctl does (hopefully, it will be added).

And there is a problem here:

  • if there is no data in VictoriaLogs on the new cluster yet, you can simply copy the old data with rsync to the PVC of the new VictoriaLogs instance
    • the same applies if the new VMLogs instance already has some data, but no days overlap with the old instance, because VictoriaLogs stores data in per-day directories, which can be transferred safely
  • but if there is data and/or the days overlap, then for now the only option is to run two VictoriaLogs instances – one with the old data, one with the new data – and put a vlselect instance in front of them

When VictoriaLogs adds Object Storage support, this will become easier – it is already on the Roadmap. Then we will be able to just keep everything in AWS S3, as we do now with Grafana Loki.

Option 1: copying data with rsync

So, the first option applies if there is no data in the new VictoriaLogs instance, or if the old and new instances have no records for the same days.

Here we can simply copy the data, and it will be available on the new Kubernetes cluster.

See VictoriaLogs documentation – Backup and restore.

I did it with rsync, but you can try it with utilities like korb.

Let’s check where the logs are stored in the VictoriaLogs Pod:

$ kk describe pod atlas-victoriametrics-vmlogs-new-server-0
Name:             atlas-victoriametrics-vmlogs-new-server-0
...
Containers:
  vlogs:
    ...
    Args:
      --storageDataPath=/storage
    ...
    Mounts:
      /storage from server-volume (rw)
    ...
Volumes:
  server-volume:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  server-volume-atlas-victoriametrics-vmlogs-new-server-0
    ReadOnly:   false
...

And the contents of the /storage directory:

~ $ ls -l /storage/partitions/
total 32
drwxrwsr-x    4 1000     2000          4096 Jun 16 00:00 20250616
drwxrwsr-x    4 1000     2000          4096 Jun 17 00:00 20250617
drwxrwsr-x    4 1000     2000          4096 Jun 18 00:00 20250618
drwxrwsr-x    4 1000     2000          4096 Jun 19 00:00 20250619
drwxrwsr-x    4 1000     2000          4096 Jun 20 00:00 20250620
drwxr-sr-x    4 1000     2000          4096 Jun 21 00:00 20250621
drwxr-sr-x    4 1000     2000          4096 Jun 22 00:00 20250622
drwxr-sr-x    4 1000     2000          4096 Jun 23 00:00 20250623

But there is no rsync or SSH in the container itself, and we can’t even install them:

~ $ rsync
sh: rsync: not found
~ $ apk add rsync
ERROR: Unable to lock database: Permission denied
ERROR: Failed to open apk database: Permission denied
~ $ su
su: must be suid to work properly
~ $ sudo -s
sh: sudo: not found
~ $ ssh
sh: ssh: not found

So let’s just run rsync from the old EC2 to the new one.

How to find the right directory on the host – see my Kubernetes: find a directory with a mounted volume in a Pod on its host post.

Setting up SSH access to EC2 for EKS is described in the AWS: Karpenter and SSH for Kubernetes WorkerNodes post.

Check the Pod on the old cluster – find its EC2 instance and Container ID:

$ kk describe pod atlas-victoriametrics-vmlogs-new-server-0 | grep 'Node\|Container'
Node:             ip-10-0-39-190.ec2.internal/10.0.39.190
Containers:
    Container ID:  containerd://db9fa73a4d37045b0338ae48438f9815e4f6f92c3fd6546604ca5d1338f19844
...

Connect to the WorkerNode:

$ ssh -i ~/.ssh/eks_ec2 [email protected]

In the mounts[] section, find the host directory for /storage:

[root@ip-10-0-39-190 ec2-user]# crictl inspect  db9fa73a4d37045b0338ae48438f9815e4f6f92c3fd6546604ca5d1338f19844 | jq
...
    "mounts": [
      {
        "containerPath": "/storage",
        "gidMappings": [],
        "hostPath": "/var/lib/kubelet/pods/5192e1f9-20ea-49c6-99ed-775af5e44183/volumes/kubernetes.io~csi/pvc-43c427fa-b05c-45b8-8bdb-92b00bff3496/mount",
...

Check its content:

[root@ip-10-0-39-190 ec2-user]# ll /var/lib/kubelet/pods/5192e1f9-20ea-49c6-99ed-775af5e44183/volumes/kubernetes.io~csi/pvc-43c427fa-b05c-45b8-8bdb-92b00bff3496/mount
total 24
drwxrwsr-x  3 ec2-user 2000  4096 Nov 19  2024 cache
-rw-rw-r--  1 ec2-user 2000     0 Jun 20 19:20 flock.lock
drwxrws---  2 root     2000 16384 Sep  4  2024 lost+found
drwxrwsr-x 10 ec2-user 2000  4096 Jun 25 00:25 partitions

We only need the data from the partitions directory here.

Repeat for VictoriaLogs on the new cluster. Amazon Linux 2023 does not have crictl, but it does have ctr.

Check ContainerD Namespaces for containers:

[root@ip-10-0-41-247 ec2-user]# ctr ns ls
NAME   LABELS 
k8s.io

Check the container with the ctr containers info:

[root@ip-10-0-41-247 ec2-user]# ctr -n k8s.io containers info 9fd6fefaec92ab76093651239f6e177686e7c7dd012d53d4bf2e6820260aa884
...
            {
                "destination": "/storage",
                "type": "bind",
                "source": "/var/lib/kubelet/pods/4b2f179d-9ada-403e-9680-b76e3507563f/volumes/kubernetes.io~csi/pvc-da384ead-50e8-425f-b3b0-47c35f3a5155/mount",
...

And the content of the /var/lib/kubelet/pods/4b2f179d-9ada-403e-9680-b76e3507563f/volumes/kubernetes.io~csi/pvc-da384ead-50e8-425f-b3b0-47c35f3a5155/mount directory:

[root@ip-10-0-41-247 ec2-user]# ll /var/lib/kubelet/pods/4b2f179d-9ada-403e-9680-b76e3507563f/volumes/kubernetes.io~csi/pvc-da384ead-50e8-425f-b3b0-47c35f3a5155/mount
total 20
-rw-rw-r--.  1 ec2-user 2000     0 Jun 25 12:18 flock.lock
drwxrws---.  2 root     2000 16384 Jun 10 09:41 lost+found
drwxrwsr-x. 10 ec2-user 2000  4096 Jun 25 00:32 partitions

Pay attention to the user and group IDs – they must be the same on both EC2 instances; in my case it is ec2-user (1000) and group 2000.
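
A quick way to compare them is to check the numeric IDs on both WorkerNodes – a small sketch, with the long PVC path shortened to "...":

# the UID/GID of the user on this host
id ec2-user
# the owner and group of the existing partitions directory
stat -c '%U(%u):%G(%g)' .../mount/partitions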

Create an SSH key on the old cluster and check the connection to the EC2 of the new cluster:

[root@ip-10-0-39-190 ec2-user]# ssh -i .ssh/eks [email protected]
...
[ec2-user@ip-10-0-41-247 ~]$

OK, we have it.

Now install rsync on both instances:

[root@ip-10-0-39-190 ec2-user]# yum -y install rsync

Just in case, you can back up the data on the new instance – either with an EBS snapshot or with tar.
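
For the tar variant, a minimal sketch run on the new EC2, with the path taken from the ctr output above (the archive name is arbitrary):

[root@ip-10-0-41-247 ec2-user]# tar -czf /home/ec2-user/vmlogs-partitions-backup.tar.gz \
> -C /var/lib/kubelet/pods/4b2f179d-9ada-403e-9680-b76e3507563f/volumes/kubernetes.io~csi/pvc-da384ead-50e8-425f-b3b0-47c35f3a5155/mount \
> partitions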

One more thing about the retention period – I’m glad I remembered it: we have only 7 days. So if I copy the data now, the older logs will soon be deleted.

Let’s change it:

... 
retentionPeriod: 30d 
...
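
In our setup this lives in the values of the victoria-logs-single sub-chart under server.retentionPeriod – something like this, with the vmlogs-new alias from our chart assumed here:

...
vmlogs-new:
  server:
    retentionPeriod: 30d
...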

On the new instance, create a directory where we will transfer the data (although this could also be done directly into the PVC directory):

[root@ip-10-0-41-247 ec2-user]# mkdir vmlogs

And from the old EC2, run rsync to the new instance into $HOME/vmlogs:

[root@ip-10-0-39-190 ec2-user]# rsync -avz --progress --delete -e "ssh -i .ssh/eks" \
> /var/lib/kubelet/pods/5192e1f9-20ea-49c6-99ed-775af5e44183/volumes/kubernetes.io~csi/pvc-43c427fa-b05c-45b8-8bdb-92b00bff3496/mount/partitions/ \
> [email protected]:/home/ec2-user/vmlogs/
...

Here:

  • -a: archive mode (preserves permissions, timestamps, and directory structure)
  • -v: verbose mode
  • -z: compress data in transit
  • --progress: show progress
  • --delete: delete data from the destination if it was deleted in the source
  • -e: the remote shell to use, here ssh with the key

The first argument is the local directory, and the second is where to copy to.

For the source, add a trailing "/" at the end of .../mount/partitions/ – this copies the contents of the directory, not the directory itself.
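
To illustrate the difference – a sketch with the long path shortened to "...":

# with the trailing slash: the day-directories land directly in vmlogs/
rsync -a .../mount/partitions/ [email protected]:/home/ec2-user/vmlogs/
# without it: you would get vmlogs/partitions/<day-directories> instead
rsync -a .../mount/partitions [email protected]:/home/ec2-user/vmlogs/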

If you get errors with permission denied, add --rsync-path="sudo rsync".

The transfer is complete:

...
sent 2,483,902,797 bytes  received 189,037 bytes  20,614,869.99 bytes/sec
total size is 2,553,861,458  speedup is 1.03

Check the data on the new instance:

[root@ip-10-0-41-247 ec2-user]# ll vmlogs/
total 0
drwxrwsr-x. 4 ec2-user ec2-user 35 Jun 18 00:00 20250618
drwxrwsr-x. 4 ec2-user ec2-user 35 Jun 19 00:00 20250619
drwxrwsr-x. 4 ec2-user ec2-user 35 Jun 20 00:00 20250620
drwxr-sr-x. 4 ec2-user ec2-user 35 Jun 21 00:00 20250621
drwxr-sr-x. 4 ec2-user ec2-user 35 Jun 22 00:00 20250622
drwxr-sr-x. 4 ec2-user ec2-user 35 Jun 23 00:00 20250623
drwxr-sr-x. 4 ec2-user ec2-user 35 Jun 24 00:00 20250624
drwxr-sr-x. 4 ec2-user ec2-user 35 Jun 25 00:00 20250625

And this is where I encountered the problem of overlapping data:

[root@ip-10-0-41-247 ec2-user]# cp -r vmlogs/* /var/lib/kubelet/pods/84a4ecd3-21a0-4eec-bebc-078a5105bf86/volumes/kubernetes.io~csi/pvc-da384ead-50e8-425f-b3b0-47c35f3a5155/mount/partitions/
cp: overwrite '/var/lib/kubelet/pods/84a4ecd3-21a0-4eec-bebc-078a5105bf86/volumes/kubernetes.io~csi/pvc-da384ead-50e8-425f-b3b0-47c35f3a5155/mount/partitions/20250618/datadb/parts.json'?

I asked the developers about merging these JSON files, but that will not work.

If the data doesn’t overlap, then just copy the data and restart the VictoriaLogs Pod.
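
In that case it boils down to something like this – a sketch using the paths and the group ID found above:

# on the EC2: copy the day-directories straight into the PVC of the running instance
cp -r /home/ec2-user/vmlogs/* /var/lib/kubelet/pods/84a4ecd3-21a0-4eec-bebc-078a5105bf86/volumes/kubernetes.io~csi/pvc-da384ead-50e8-425f-b3b0-47c35f3a5155/mount/partitions/
# fix ownership so the vlogs container can read the data
chown -R 1000:2000 /var/lib/kubelet/pods/84a4ecd3-21a0-4eec-bebc-078a5105bf86/volumes/kubernetes.io~csi/pvc-da384ead-50e8-425f-b3b0-47c35f3a5155/mount/partitions/
# from the workstation: restart the Pod so VictoriaLogs picks up the new partitions
kk delete pod atlas-victoriametrics-vmlogs-new-server-0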

In my case, I had to do it a little differently.

Option 2: run two VMLogs + vlselect

So, if we have data for the same days on the old and new VictoriaLogs instances, we can do the following:

  • create a second VMLogs instance on the new EKS cluster
  • copy data from the old cluster to the PVC of the new VMLogs instance
  • add a new Pod with vlselect
  • for the vlselect we specify two sources – both VMLogs instances
  • and then for the Grafana VictoriaLogs data source we use the URL of the vlselect service

We could just add vlselect and route the requests to the old cluster – but I need to delete the old cluster.

vlselect vs VMLogs

In fact, vlselect is the same binary file as VictoriaLogs, which simplifies the whole setup for us – see the VictoriaLogs cluster documentation:

Note that all the VictoriaLogs cluster components – vlstorage, vlinsert and vlselect – share the same executable – victoria-logs-prod.

Therefore, we can simply take another victoria-logs-single Helm chart and run everything from it.

And we’ll actually be building a kind of “minimal VictoriaLogs cluster”:

  • our current VictoriaLogs instance will play the role of vlinsert and vlstorage – new logs of the new cluster are written there
  • the new VictoriaLogs instance will play the role of vlstorage – we will store data from the old cluster in it
  • the third VictoriaLogs instance will play the role of vlselect – it will be an endpoint for Grafana, and will make API requests to search for logs from both VictoriaLogs instances

Helm chart update

I’m not ready to run the full version of the VictoriaLogs cluster yet, so let’s just add a couple more dependencies to our current Helm chart.

Edit Chart.yaml:

...
dependencies:
...
- name: victoria-logs-single
  version: ~0.11.2
  repository: https://victoriametrics.github.io/helm-charts
  alias: vmlogs-new
- name: victoria-logs-single
  version: ~0.11.2
  repository: https://victoriametrics.github.io/helm-charts
  alias: vmlogs-old
- name: victoria-logs-single
  version: ~0.11.2
  repository: https://victoriametrics.github.io/helm-charts
  alias: vlselect
...

Here we deploy three charts (more precisely, one chart with different values, see Helm: multiple deployments of a single chart with Chart’s dependency), and each one has its own alias:

  • vmlogs-new: the current VMLogs instance on the new EKS cluster
  • vmlogs-old: a new instance to which we will transfer data from the old EKS cluster
  • vlselect: will be our new endpoint for searching logs

The only caveat is that the deployment may fail with an error about the length of the Pod names, because I initially set aliases that were too long:

...
Pod "atlas-victoriametrics-victoria-logs-single-old-server-0" is invalid: metadata.labels: Invalid value: "atlas-victoriametrics-victoria-logs-single-old-server-77cf9cd79d": must be no more than 63 characters 
...

Check the default values.yaml of the victoria-logs-single chart:

...
  persistentVolume:
    # -- Create/use Persistent Volume Claim for server component. Use empty dir if set to false
    enabled: true
    size: 10Gi
...
  ingress:
    # -- Enable deployment of ingress for server component
    enabled: false
...

For the vlselect instance, add the storageNode parameter with the endpoints of both VictoriaLogs instances separated by commas, and, if necessary, set the persistentVolume parameters:

...
vmlogs-new:
  server:
    persistentVolume:
      enabled: true
      storageClassName: gp2-retain
      size: 30Gi
    retentionPeriod: 14d

vmlogs-old:
  server:
    persistentVolume:
      enabled: true
      storageClassName: gp2-retain
      size: 30Gi
    retentionPeriod: 14d

vlselect:
  server:
    extraArgs:
      storageNode: atlas-victoriametrics-vmlogs-new-server:9428,atlas-victoriametrics-vmlogs-old-server:9428
...

Deploy the chart, and check the Pods:

$ kk get pod | grep 'vmlogs\|vlselect'
atlas-victoriametrics-vlselect-server-0                           1/1     Running     0              19h
atlas-victoriametrics-vmlogs-new-server-0                         1/1     Running     0              76s
atlas-victoriametrics-vmlogs-old-server-0                         1/1     Running     0              76s

Services:

$ kk get svc | grep 'vmlogs\|vlselect'
atlas-victoriametrics-vlselect-server                    ClusterIP   None             <none>        9428/TCP                     22h
atlas-victoriametrics-vmlogs-new-server                  ClusterIP   None             <none>        9428/TCP                     42s
atlas-victoriametrics-vmlogs-old-server                  ClusterIP   None             <none>        9428/TCP                     42s

Now Promtail on the new cluster keeps writing to atlas-victoriametrics-vmlogs-new-server, while atlas-victoriametrics-vmlogs-old-server is still an empty VMLogs instance.
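
For reference, Promtail writes there through the Loki-compatible push endpoint of VictoriaLogs, so its client config is presumably something like this – a sketch; the exact structure depends on your Promtail chart values:

...
promtail:
  config:
    clients:
      - url: http://atlas-victoriametrics-vmlogs-new-server:9428/insert/loki/api/v1/push
...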

We can check access to the logs through the vlselect instance:

$ kk port-forward svc/atlas-victoriametrics-vlselect-server 9428
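
And, for example, run a test LogsQL query through it – a quick sketch, the query itself is arbitrary:

$ curl -s 'http://localhost:9428/select/logsql/query' -d 'query=error' -d 'limit=5'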

Transferring data from the old cluster

Next, we simply repeat what we did above: find the PVC directory, and copy the data from the old cluster there.

This time, I’ll first copy the data to my work laptop, and then from the laptop copy it to the new Kubernetes cluster:

[setevoy@setevoy-work ~] $ mkdir vmlogs_back

While I was writing this, the VictoriaLogs Pod on the old cluster had already moved to another EC2 instance, so we have to find the data directory again.

Switch kubectl to the old cluster and find the Pod and its WorkerNode:

$ kk describe pod atlas-victoriametrics-victoria-logs-single-server-0 | grep 'Node\|Container'
Node:             ip-10-0-38-72.ec2.internal/10.0.38.72
Containers:
    Container ID:  containerd://c168d4487282dd7d868aadcfcd1840e4e15cfd360f56f542a98b77978f91e252
...

Connect to the EC2, find the directory:

[root@ip-10-0-38-72 ec2-user]# crictl inspect c168d4487282dd7d868aadcfcd1840e4e15cfd360f56f542a98b77978f91e252
...
    "mounts": [
      {
        "containerPath": "/storage",
        "gidMappings": [],
        "hostPath": "/var/lib/kubelet/pods/f84ef4b9-272f-437e-9f98-649e1707ed09/volumes/kubernetes.io~csi/pvc-43c427fa-b05c-45b8-8bdb-92b00bff3496/mount",
...

Install rsync there, and copy the data to the laptop:

$ rsync -avz --progress -e "ssh -i .ssh/eks_ec2" \
> --rsync-path="sudo rsync" \
> [email protected]:/var/lib/kubelet/pods/f84ef4b9-272f-437e-9f98-649e1707ed09/volumes/kubernetes.io~csi/pvc-43c427fa-b05c-45b8-8bdb-92b00bff3496/mount/partitions/ \
> /home/setevoy/vmlogs_back/
...

Check data locally:

$ ll ~/vmlogs_back/
total 32
drwxrwsr-x 4 setevoy setevoy 4096 Jun 19 03:00 20250619
drwxrwsr-x 4 setevoy setevoy 4096 Jun 20 03:00 20250620
drwxrwsr-x 4 setevoy setevoy 4096 Jun 21 03:00 20250621
...

Now, we’ll move everything to the new cluster, where the atlas-victoriametrics-vmlogs-old-server-0 Pod is running.

Switch kubectl to the new cluster, find the WorkerNode and Container ID:

$ kd atlas-victoriametrics-vmlogs-old-server-0 | grep 'Node\|Container'
Node:             ip-10-0-36-143.ec2.internal/10.0.36.143
Containers:
    Container ID:  containerd://f10118b10afab75c43e03adcc0644af5caa8654687cd81e59cdf15bd8c32cb31
...

SSH to EC2, and find the directory:

[root@ip-10-0-36-143 ec2-user]# ctr -n k8s.io containers info f10118b10afab75c43e03adcc0644af5caa8654687cd81e59cdf15bd8c32cb31
...
            {
                "destination": "/storage",
                "type": "bind",
                "source": "/var/lib/kubelet/pods/297b75ec-63fa-4061-bb23-7a6a120da939/volumes/kubernetes.io~csi/pvc-c7373468-f247-4596-b2e2-87852aad71bb/mount",
...

Check its content – the partitions directory should be there, but still empty:

[root@ip-10-0-36-143 ec2-user]# ls -l /var/lib/kubelet/pods/297b75ec-63fa-4061-bb23-7a6a120da939/volumes/kubernetes.io~csi/pvc-c7373468-f247-4596-b2e2-87852aad71bb/mount/
...
drwxr-sr-x. 2 ec2-user 2000  4096 Jun 26 13:14 partitions
[root@ip-10-0-36-143 ec2-user]# ls -l /var/lib/kubelet/pods/297b75ec-63fa-4061-bb23-7a6a120da939/volumes/kubernetes.io~csi/pvc-c7373468-f247-4596-b2e2-87852aad71bb/mount/partitions/
total 0

Install rsync there, and copy the data from the local directory /home/setevoy/vmlogs_back/ to the new EKS cluster:

$ rsync -avz --progress -e "ssh -i .ssh/eks_ec2" --rsync-path="sudo rsync" \
> /home/setevoy/vmlogs_back/ \
> [email protected]:/var/lib/kubelet/pods/297b75ec-63fa-4061-bb23-7a6a120da939/volumes/kubernetes.io~csi/pvc-c7373468-f247-4596-b2e2-87852aad71bb/mount/partitions/
...

Check the data there:

[root@ip-10-0-36-143 ec2-user]# ls -l /var/lib/kubelet/pods/297b75ec-63fa-4061-bb23-7a6a120da939/volumes/kubernetes.io~csi/pvc-c7373468-f247-4596-b2e2-87852aad71bb/mount/partitions/
total 32
drwxrwsr-x. 4 ec2-user ec2-user 4096 Jun 19 00:00 20250619
drwx--S---. 2 root     ec2-user 4096 Jun 26 13:39 20250620
drwx--S---. 2 root     ec2-user 4096 Jun 26 13:39 20250621
drwx--S---. 2 root     ec2-user 4096 Jun 26 13:39 20250622
drwx--S---. 2 root     ec2-user 4096 Jun 26 13:39 20250623
...

Change the user and group:

[root@ip-10-0-36-143 ec2-user]# chown -R ec2-user:2000 /var/lib/kubelet/pods/297b75ec-63fa-4061-bb23-7a6a120da939/volumes/kubernetes.io~csi/pvc-c7373468-f247-4596-b2e2-87852aad71bb/mount/partitions/
[root@ip-10-0-36-143 ec2-user]# ls -l /var/lib/kubelet/pods/297b75ec-63fa-4061-bb23-7a6a120da939/volumes/kubernetes.io~csi/pvc-c7373468-f247-4596-b2e2-87852aad71bb/mount/partitions/
total 32
drwxrwsr-x. 4 ec2-user 2000 4096 Jun 19 00:00 20250619
drwxrwsr-x. 4 ec2-user 2000 4096 Jun 20 00:00 20250620
...

Restart the atlas-victoriametrics-vmlogs-old-server-0 Pod.
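
With kubectl this is just deleting the Pod – the StatefulSet will recreate it, and VictoriaLogs will pick up the copied partitions (a sketch, with kk being the kubectl alias used above):

$ kk delete pod atlas-victoriametrics-vmlogs-old-server-0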

Checking the data

Let’s look for something.

First, something with the hostname "ip-10-0-36-143.ec2.internal" – it’s an EC2 instance from the new EKS cluster, so these logs should come from the atlas-victoriametrics-vmlogs-new-server-0 instance, i.e. from the instance that was already collecting logs on the new Kubernetes cluster:
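
The same check can be done from the console through the vlselect port-forward – a sketch; the field name ("hostname" here) depends on the labels your Promtail adds:

$ curl -s 'http://localhost:9428/select/logsql/query' -d 'query=hostname:"ip-10-0-36-143.ec2.internal"' -d 'limit=1'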

And now some node from the old cluster:

Everything is there.

It’s done.