
Persistent 32022 – “Hyper-V could not replicate changes… The parameter is incorrect (0x80070057)” errors even after recreating the replica and deleting HRL files on both primary and replica

Mark Gould 0 Reputation points
2026-03-12T10:35:20.5633333+00:00

Windows Server 2022

Failover Clustering

I have stopped replication, removed the replica, and removed any HRL files on the primary. I then recreated the replica, and the initial sync completed with no issue.

There is no issue on any other replicas between the same two clusters.

I have also moved the replica onto a different host within the cluster.

That volume is not close to full.

Volume1 has:

Size: 21,552,127,008,768 bytes, about 19.6 TiB

Free: 4,060,998,516,736 bytes, about 3.69 TiB

So the immediate problem is not lack of free space on the CSV.


PS C:\Windows\system32> Get-VMReplication -VMName ZDATA3 | fl *

VMCheckpointId                          : 00000000-0000-0000-0000-000000000000
VMCheckpointName                        :
Name                                    : ZDATA3
Id                                      : 0a2c5501-d017-4af8-9b7a-82ef0d6aa169
State                                   : Error
Health                                  : Critical
Mode                                    : Primary
FrequencySec                            : 300
RelationshipType                        : Simple
PrimaryServer                           : lon-prod-hv2.KISLNET.LOCAL
ReplicaServer                           : slgh-prod-repl1.KISLNET.LOCAL
ReplicaPort                             : 80
AuthType                                : Kerberos
AuthenticationType                      : Kerberos
AutoResynchronizeEnabled                : True
AutoResynchronizeIntervalEnd            : 06:00:00
AutoResynchronizeIntervalStart          : 18:30:00
BypassProxyServer                       : False
CertificateThumbprint                   :
CompressionEnabled                      : True
EnableWriteOrderPreservationAcrossDisks : True
ExcludedDisks                           : {}
InitialReplicationStartTime             :
RecoveryHistory                         : 0
ReplicaServerPort                       : 80
ReplicateHostKvpItems                   : True
ReplicationFrequencySec                 : 300
ResynchronizeStartTime                  :
VSSSnapshotFrequencyHour                : 0
VSSSnapshotReplicationEnabled           : False
AllowedPrimaryServer                    :
CurrentReplicaServerName                : SLOUGH-PROD-HV3.KISLNET.LOCAL
LastAppliedLogTime                      :
LastReplicationTime                     : 12/03/2026 03:21:09
PrimaryServerName                       : lon-prod-hv2.KISLNET.LOCAL
ReplicaServerName                       : slgh-prod-repl1.KISLNET.LOCAL
ReplicatedDisks                         : {Hard Drive on SCSI controller number 0 at location 0, Hard Drive on SCSI controller number 0 at location 1, Hard Drive on SCSI controller number 0 at location 2}
ReplicationHealth                       : Critical
ReplicationMode                         : Primary
ReplicationRelationshipType             : Simple
ReplicationState                        : Error
TestVirtualMachine                      :
VMId                                    : 0a2c5501-d017-4af8-9b7a-82ef0d6aa169
VMName                                  : ZDATA3
VMSnapshotId                            : 00000000-0000-0000-0000-000000000000
VMSnapshotName                          :
CimSession                              : CimSession: .
ComputerName                            : LON-PROD-HV2
IsDeleted                               : False


Both clusters use SSD disks, with sub-10 ms Avg. Disk sec/Write.

There is a very large .hrl file on the replica host.

(Attachment: Health.jpg)

Windows for business | Windows Server | Storage high availability | Virtualization and Hyper-V

Answer accepted by question author
  1. VPHAN 25,000 Reputation points Independent Advisor
    2026-03-12T12:34:47.2166667+00:00

    Mark Gould

    Since the Hyper-V Replica engine is recording hundreds of gigabytes of physical block-level writes on that virtual hard disk, an unseen process is rapidly altering the underlying data. When a file server experiences a sudden, massive spike in write operations without legitimate user activity, you should immediately investigate the possibility of a ransomware infection. Cryptographic malware systematically opens, encrypts, and overwrites existing files on the volume, which Hyper-V interprets as massive, continuous block modifications that will quickly bloat your replication tracking logs.

    If you verify that your files are fully accessible and unencrypted, the next hidden culprits are native Windows file services that operate independently of your third-party backup software. Even with backups paused, Windows Server might still be running Shadow Copies for Shared Folders, a feature that automatically takes snapshots of the volume to provide users with the Previous Versions feature. If the hidden storage area dedicated to these shadow copies is churning, deleting old snaps, or dynamically resizing, it generates immense block-level traffic. Similarly, if this server participates in Distributed File System Replication, background staging tasks can silently rewrite gigabytes of data into hidden system folders.

    To solve this mystery, you need to catch the responsible process in the act from within the ZDATA3 guest operating system. Open the Windows Run dialog and launch resmon.exe to access the Resource Monitor. Navigate directly to the Disk tab and expand the Processes with Disk Activity section. By sorting this list by the Write column, you will immediately expose the exact executable, whether it is a malicious payload, a runaway Windows service like the search indexer, or a hidden sync agent, that is generating this massive volume of writes and choking your replication queue.
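    If you prefer the command line to Resource Monitor, per-process write throughput can also be sampled with performance counters. A rough sketch, run inside the ZDATA3 guest (counter names assume an English-language OS):

    ```powershell
    # Sample per-process write throughput for ~10 seconds and list the
    # heaviest writers. This surfaces the same data as Resource Monitor's
    # "Processes with Disk Activity" pane, sorted by writes.
    Get-Counter '\Process(*)\IO Write Bytes/sec' -SampleInterval 2 -MaxSamples 5 |
        ForEach-Object { $_.CounterSamples } |
        Group-Object InstanceName |
        ForEach-Object {
            [pscustomobject]@{
                Process          = $_.Name
                AvgWriteBytesSec = [math]::Round(($_.Group |
                    Measure-Object CookedValue -Average).Average)
            }
        } |
        Sort-Object AvgWriteBytesSec -Descending |
        Select-Object -First 10
    ```

    Note that IO Write Bytes/sec includes file, network, and device I/O, so treat the output as a lead to investigate rather than proof on its own.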


4 additional answers

Sort by: Most helpful
  1. VPHAN 25,000 Reputation points Independent Advisor
    2026-03-12T12:56:38.1333333+00:00

    Oh!!! AnyText is a full-text local search indexer, meaning it crawls through your entire three-terabyte dataset and builds a localized database to make file contents instantly searchable. As the software actively indexes those documents, it continuously writes, merges, and reorganizes data blocks within its own underlying database files. Hyper-V Replica is completely blind to the fact that this is merely a rebuildable search index; it simply detects continuous physical block-level modifications on the virtual hard disk and dutifully captures every single change, resulting in the staggering terabyte-sized tracking logs you observed.

    To resolve this permanently without sacrificing your file search capabilities, you need to isolate the indexer's disk input and output from the replication engine. The most robust architectural fix is to attach a new, dedicated Virtual Hard Disk to the ZDATA3 virtual machine and reconfigure the AnyText software to store its index database exclusively on that new volume.  Once the high-churn index data is physically isolated to its own drive, you can use the Set-VMReplication PowerShell cmdlet on your primary Hyper-V host to explicitly exclude that specific VHDX from the replication relationship. This configuration ensures your critical file server data remains fully protected while the disposable, high-write search index is entirely ignored by the replication tracking process.

    If adding a secondary virtual drive is not immediately feasible, you will need to dive into the AnyText configuration settings or the Windows Services console to strictly restrict the indexing engine. You must throttle its disk usage or schedule the indexing service to only run during off-peak hours when the resulting replication backlog has enough time to synchronize without impacting daytime operations. By managing how and where this indexing database writes its physical blocks, your Hyper-V Replica health will stabilize almost immediately.
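    As a sketch of the disk-exclusion approach: the example below assumes the new index VHDX was attached at SCSI controller 0, location 3 (adjust to your actual layout), and is run on the primary host.

    ```powershell
    # Collect every disk on ZDATA3 EXCEPT the dedicated index disk.
    # Assumption: the index VHDX sits at SCSI controller 0, location 3.
    $disks = Get-VMHardDiskDrive -VMName ZDATA3 |
        Where-Object { -not ($_.ControllerNumber -eq 0 -and
                             $_.ControllerLocation -eq 3) }

    # Replicate only the remaining disks; the high-churn index VHDX is
    # left out of the replication set entirely.
    Set-VMReplication -VMName ZDATA3 -ReplicatedDisks $disks
    ```

    After changing the replicated disk set, verify with Get-VMReplication that the excluded disk now appears under ExcludedDisks rather than ReplicatedDisks.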


  2. VPHAN 25,000 Reputation points Independent Advisor
    2026-03-12T12:09:51.2733333+00:00

    Mark Gould

    Since ZDATA3 is strictly a file server and your total dataset is relatively small, the sudden explosion of replication traffic is likely tied to native storage optimization processes altering the physical blocks on the virtual hard disk. Because Hyper-V Replica tracks changes at the physical block level rather than the logical file level, any process that reorganizes data underneath the file system creates an enormous amount of replication churn. For a dedicated Windows file server, the most common and notorious cause of this behavior is Windows Server Data Deduplication.

    If the Data Deduplication role was recently enabled, or if a heavy scheduled optimization, scrub, or garbage collection job ran over the last two days, it would aggressively consolidate and move chunks of data across the volume. Hyper-V intercepts every single one of these underlying block movements as a brand new write operation, generating massive .hrl tracking files even though your users did not add any new data to the server. You need to check Server Manager inside the ZDATA3 guest operating system to see if the Data Deduplication role is installed and active on those file shares. If it is, you can run the Get-DedupJob PowerShell cmdlet within the guest to see if a massive background optimization correlates with the timeline of your replication failures.

    If deduplication is definitely not active on the guest, the next logical places to investigate within the virtual machine are the Volume Shadow Copy Service and the default Optimize Drives scheduled task. A large backup solution triggering massive snapshot creations or a heavy defragmentation pass will yield the exact same block-level bloat. Your adjustments to a fifteen-minute replication frequency and automatic resynchronization were the exact correct administrative responses to allow the target host enough breathing room to process the current backlog. Once the backlog clears, you simply need to identify which internal guest storage operation caused the churn and schedule it carefully so it does not overwhelm your Hyper-V Replica pipeline in the future.
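    The deduplication checks above can be sketched as follows, run inside the ZDATA3 guest (requires the Data Deduplication feature and its PowerShell module to be installed):

    ```powershell
    # Is deduplication enabled on any volume, and how much has it touched?
    Get-DedupVolume | Format-Table Volume, Enabled, SavedSpace, OptimizedFilesCount

    # Any optimization, GC, or scrubbing job running right now?
    Get-DedupJob

    # Last run times and results, to correlate with the replication failures.
    Get-DedupStatus | Format-List *
    ```

    If Get-DedupVolume returns nothing, the feature is not active and you can move on to the VSS and Optimize Drives checks.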

    Hope this helps :)

    VP


  3. VPHAN 25,000 Reputation points Independent Advisor
    2026-03-12T11:12:41.53+00:00

    Hi Mark Gould,

    The statistics in your logs reveal an extreme amount of data modification on this specific virtual machine, with replication tracking files reaching nearly a terabyte. This sheer volume of data churn within five-minute intervals is completely overwhelming the replication mechanism. When these massive Hyper-V Replication Log files land on the target Cluster Shared Volume, endpoint security or backup software on the replica cluster is likely scanning them. This deep scanning places an exclusive file lock on the logs, which prevents the Hyper-V Virtual Machine Worker Process from merging the data into the replica disks and subsequently forces the replication state into a critical error.

    Compounding this file lock issue is your configured auto-resynchronization schedule. Your PowerShell output indicates that the system is only allowed to automatically clear errors and resynchronize between 18:30 and 06:00. Because the initial failure occurred outside of this maintenance window, the massive 2.2 terabyte backlog is currently trapped in a suspended state during your standard operating hours. To resolve this, you must configure authoritative antivirus exclusions across all host nodes in the replica cluster. You need to explicitly exclude the specific C:\ClusterStorage paths hosting the ZDATA3 replica, all files with the .hrl extension, the Virtual Machine Management Service executable vmms.exe, and the Worker Process executable vmwp.exe.

    Once these essential exclusions are actively applied on the target cluster, you can bypass the restricted evening maintenance window by manually forcing a synchronization using the Resume-VMReplication PowerShell cmdlet. Given the extraordinary volume of data being modified by this specific database or workload, you should also adjust the replication frequency from five minutes up to fifteen minutes. Providing this extended interval will allow the target storage subsystem adequate time to ingest and commit the massive tracking logs without timing out and colliding with the subsequent replication cycle.
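    A sketch of those exclusions and the manual resync, assuming Microsoft Defender on the replica cluster nodes (other AV products need equivalent exclusions through their own consoles, and the ClusterStorage path shown is a placeholder for the actual folder hosting the ZDATA3 replica):

    ```powershell
    # On each replica cluster node: exclude the HRL extension, the replica
    # storage path (placeholder -- substitute your real path), and the
    # Hyper-V management and worker processes.
    Add-MpPreference -ExclusionExtension '.hrl'
    Add-MpPreference -ExclusionPath 'C:\ClusterStorage\Volume1\Hyper-V Replica'
    Add-MpPreference -ExclusionProcess 'vmms.exe', 'vmwp.exe'

    # Then, on the primary host, force a resync without waiting for the
    # 18:30-06:00 auto-resynchronize window:
    Resume-VMReplication -VMName ZDATA3 -Resynchronize

    # And widen the interval from 5 to 15 minutes (valid values are
    # 30, 300, and 900 seconds):
    Set-VMReplication -VMName ZDATA3 -ReplicationFrequencySec 900
    ```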

    Hope this answer brought you some useful information. If it did, please hit “accept answer”. Should you have any questions, feel free to leave a comment.

    VP


  4. Q&A Assist
    2026-03-12T10:35:42.3366667+00:00
    AI generated content. This answer was created with AI from Q&A Assist.

    The 32022 / 0x80070057 “parameter is incorrect” on Hyper‑V Replica after initial sync, combined with a very large HRL on the replica, points to a configuration or storage/permissions issue on the replica side rather than free space.

    Use the following checks and fixes that align with the documented causes for Hyper‑V Replica errors:

    1. Validate network and authentication
      1. Confirm that the primary and replica servers (and the Replica Broker if clustered) communicate correctly over the configured port (in this case HTTP/80) and that no firewall or security device is intermittently blocking traffic.
      2. Verify Kerberos authentication:
        • Ensure both hosts are in the same or trusted domains.
        • If using a Replica Broker, confirm its SPNs are correctly registered and there are no domain trust issues.
    2. Re‑check replica configuration for this VM only
      1. Compare the VM’s configuration on primary and replica:
        • Same VHDX layout and disk mapping.
        • No missing or moved VHDX files on the replica path.
      2. Confirm that the storage paths configured for the replica VM are valid and accessible on the replica host/CSV and that the volume has sufficient space (which it does, but also verify the specific folder path and quota if any).
      3. Use Set-VMReplication if needed to explicitly reset parameters such as -ReplicaServerName, -ReplicaServerPort, -AuthenticationType, and the replicated disks list for this VM, then re‑run initial replication.
    3. Check permissions and security on replica storage
      1. Verify NTFS permissions on the replica VM’s VHDX and HRL locations:
        • Ensure the Hyper‑V service account and the VM’s security identifiers have full access to the replica files.
      2. Confirm that antivirus or endpoint security is not scanning or locking the replica VHDX/HRL paths:
        • Temporarily exclude the replica VM folder and test.
        • If the issue disappears, permanently tune exclusions for Hyper‑V Replica data paths.
    4. Investigate storage and HRL behavior
      1. A very large HRL on the replica indicates that change logs are accumulating and not being applied correctly.
      2. Check for storage issues on the replica host/CSV:
        • Use cluster and storage tools (for example, Get-ClusterSharedVolume, Get-StoragePool, Get-VirtualDisk) to confirm there are no underlying disk or path problems.
        • Ensure no other process is locking the replica VHDX/HRL files.
    5. Recreate replication with clean metadata if needed
      1. If the above checks do not resolve the error for this single VM while others replicate fine between the same clusters:
        • Stop replication for this VM.
        • Remove the replica VM completely from the replica host (including its VHDX and HRL files).
        • Recreate replication from scratch to a clean folder on the replica CSV.
      2. After initial replication completes, monitor for new 32022 events and check Get-VMReplication for this VM to confirm ReplicationState : Replicating and ReplicationHealth : Normal.

    These steps directly address the documented causes for Hyper‑V Replica “parameter is incorrect” and 32022 errors: configuration mismatches, permissions/security interference, and storage issues that prevent HRL changes from being applied.
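    Steps 2.3 and 5.2 above can be sketched as follows on the primary host, using the server names from your Get-VMReplication output:

    ```powershell
    # Step 2.3: explicitly reset the replica connection parameters for
    # this one VM, leaving the other replication relationships untouched.
    Set-VMReplication -VMName ZDATA3 `
        -ReplicaServerName 'slgh-prod-repl1.KISLNET.LOCAL' `
        -ReplicaServerPort 80 `
        -AuthenticationType Kerberos

    # Step 5.2: after initial replication completes, confirm the state
    # and health have returned to normal.
    Get-VMReplication -VMName ZDATA3 |
        Select-Object VMName, ReplicationState, ReplicationHealth
    ```

    A healthy relationship should report ReplicationState : Replicating and ReplicationHealth : Normal; any new 32022 events after this point indicate the underlying cause is still present.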


