I tried to reproduce the same scenario on my end and received the same error:
Resource vmpassword has lifecycle.prevent_destroy set, but the plan calls for this resource to be destroyed. To avoid this error and continue with the plan, either disable lifecycle.prevent_destroy or reduce the scope of the plan using the -target flag.
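There are two ways past this. You can narrow the plan with -target so the protected secret never enters it (for example, terraform plan -target=azurerm_key_vault.kv2, using the resource addresses from the sample below), or you can temporarily disable the guard if you really do intend to let Terraform destroy and recreate the secret. A minimal sketch of the second option:

lifecycle {
  # Temporarily allow destruction; set this back to true (or remove the
  # override) once the change has been applied.
  prevent_destroy = false
}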

Note: If you are trying to use two different Key Vaults for the VM, it is better to add a separate resource block for the new Key Vault, plus a second secret resource that points at it, for example:
resource "azurerm_key_vault_secret" "vmpassword" {
name = "vmpassword"
value = random_password.vmpassword.result
key_vault_id = azurerm_key_vault.kv1.id
depends_on = [ azurerm_key_vault.kv1 ]
lifecycle {
prevent_destroy = true
}
}
resource "azurerm_key_vault" "kv2" {
name = "kavy-newkv2"
resource_group_name = data.azurerm_resource_group.example.name
location = data.azurerm_resource_group.example.location
enabled_for_disk_encryption = true
tenant_id = data.azurerm_client_config.current.tenant_id
soft_delete_retention_days = 7
purge_protection_enabled = false
sku_name = "standard"
access_policy {
tenant_id = data.azurerm_client_config.current.tenant_id
object_id = data.azurerm_client_config.current.object_id
key_permissions = [
"Get","Create", "Decrypt", "Delete", "Encrypt", "Update"
]
secret_permissions = [
"Get", "Backup", "Delete", "List", "Purge", "Recover", "Restore", "Set",
]
storage_permissions = [
"Get", "Restore","Set"
]
}
}
resource "azurerm_key_vault_secret" "vmpassword" {
name = "vmpassword"
value = random_password.vmpassword.result
key_vault_id = azurerm_key_vault.kv1.id
depends_on = [ azurerm_key_vault.kv1 ]
}
Import each existing resource manually so that it appears in your state file; Terraform tracks each resource individually.
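For example, with Terraform 1.5 or later you can declare import blocks and run terraform plan, or run terraform import <address> <id> for each resource. The Azure IDs below are placeholders (substitute your own subscription, resource group, vault name, and secret version); the ID formats follow the azurerm provider's import documentation (ARM resource ID for the vault, versioned secret URL for the secret):

import {
  # Existing Key Vault that is already deployed in Azure but not yet in state.
  to = azurerm_key_vault.kv1
  id = "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.KeyVault/vaults/<kv-name>"
}

import {
  # Existing secret; the import ID is the secret's versioned URL.
  to = azurerm_key_vault_secret.vmpassword
  id = "https://<kv-name>.vault.azure.net/secrets/vmpassword/<secret-version>"
}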

If required, you can also tell Terraform to ignore changes to specific attributes of the VM (for example, tags) with an ignore_changes lifecycle block:
lifecycle {
  ignore_changes = [
    tags,
  ]
}
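For context, this block goes inside the VM resource itself. A minimal sketch, assuming an azurerm_linux_virtual_machine resource named example (adjust to your actual VM resource type and name):

resource "azurerm_linux_virtual_machine" "example" {
  # ... your existing VM arguments ...

  lifecycle {
    # Ignore out-of-band tag changes so they do not show up as drift.
    ignore_changes = [
      tags,
    ]
  }
}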

Reference:
- terraform-lifecycle-prevent-destroy | StackOverflow
- https://developer.hashicorp.com/terraform/language/meta-arguments/lifecycle#prevent_destroy