17

I have a repository with several separate configs which share some modules, and reference those modules using relative paths that look like ../../modules/rabbitmq. The directories are set up like this:

tf/
  configs/
    thing-1/
    thing-2/
  modules/
    rabbitmq/
    cluster/
    ...
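
For illustration, one of the configs references a shared module roughly like this (the module name and input here are placeholders, not my exact code):

module "rabbitmq" {
  # relative path from tf/configs/thing-1 up to the shared modules directory
  source = "../../modules/rabbitmq"

  # example input; the real variables differ
  cluster_name = "thing-1-rabbitmq"
}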

The configs are set up with a remote backend to use TF Cloud for runs and state:

terraform {
  backend "remote" {
    hostname     = "app.terraform.io"
    organization = "my-org"

    workspaces {
      prefix = "config-1-"
    }
  }
}

Running terraform init works fine. When I try to run terraform plan locally, it gives me an error saying:

Initializing modules...
- rabbitmq in 

Error: Unreadable module directory

Unable to evaluate directory symlink: lstat ../../modules: no such file or
directory

...as if the modules directory isn't being uploaded to TF Cloud or something. What gives?

mltsy

6 Answers

27

It turns out the problem was (surprise, surprise!) that it was not uploading the modules directory to TF Cloud. This is because neither the config nor the TF Cloud workspace settings contained any indication that this config folder was part of a larger filesystem. The default is to upload only the directory from which you are running terraform (and all of its contents).

To fix this, I had to visit the "Settings > General" page for the given workspace in Terraform Cloud, and change the Terraform Working Directory setting to specify the path of the config, relative to the relevant root directory - in this case: tf/configs/config-1

After that, running terraform plan displays a message indicating which parent directory it will upload in order to convey the entire context relevant to the workspace.
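
If you manage workspaces with the tfe provider rather than the UI, the equivalent setting is the workspace's working_directory argument. A rough sketch (the workspace name here is made up):

resource "tfe_workspace" "config_1" {
  name              = "config-1-prod"        # hypothetical workspace name
  organization      = "my-org"
  working_directory = "tf/configs/config-1"  # path relative to the repository root
}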

mltsy
  • I encountered the same problem with relative paths to my modules, but I'm not sure I understand your solution. My "Terraform working directory" in the cloud is already defined as a path to the 'sandbox' directory, like 'live/sandbox' (the same structure as yours, but with live/different-environments). – Vitaly Karasik DevOps Mar 09 '22 at 13:22
6

Updating @mltsy's answer with a screenshot. Using Terraform Cloud with a free account. Resolving the module source to use the local file system.

terraform version
Terraform v1.1.7
on linux_amd64

[screenshot: Workspace Settings]

Polymerase
0

Here is the thing that worked for me. I used required_version = ">= 0.11" and then put all the tf files which have provider and module blocks in a subfolder, keeping the version.tf which has the required providers at the root level. Somehow I have used the same folder path where terraform.exe is present. Then I built the project instead of executing at the main.tf level or executing without building. It downloaded all the providers and modules for me. I am yet to run it on GCP.
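
For reference, the root-level version.tf I mean looks roughly like this (the provider and version constraints are illustrative guesses, not my exact file):

terraform {
  required_version = ">= 0.11"

  required_providers {
    google = {
      source  = "hashicorp/google"  # assumed, since I plan to run this on GCP
      version = ">= 4.0"            # example constraint only
    }
  }
}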

[screenshot: Folder path on Windows]

[screenshot: IntelliJ project structure]


  • As it’s currently written, your answer is unclear. Please [edit] to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers [in the help center](/help/how-to-answer). – Community Sep 11 '22 at 08:33
  • Use this: source = "mhmdio/rabbitmq/aws" – Shridhar Gavai Sep 12 '22 at 12:13
0

Use this: source = "mhmdio/rabbitmq/aws"

I faced this problem when I started. Go to the HashiCorp Terraform Registry and search for the module/provider block you need; the code snippets there include the full source path. Once you have the path, run terraform get -update and terraform init -upgrade, which will download the modules and providers locally.

Note: on Terraform Cloud the modules are in the repo, but you still need to give the path if the repo path is not mapped by default.
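
For example, pulling the module from the public registry instead of a relative path looks roughly like this (the version pin is only an example - check the registry for the current release):

module "rabbitmq" {
  source  = "mhmdio/rabbitmq/aws"
  version = "0.1.0"  # example pin; use the latest published version
}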

  • As it’s currently written, your answer is unclear. Please [edit] to add additional details that will help others understand how this addresses the question asked. You can find more information on how to write good answers [in the help center](/help/how-to-answer). – Community Sep 20 '22 at 00:02
0

I had a similar issue, which I think someone else might encounter.

In my project the application is hosted inside folder1/folder2. However, when I ran terraform plan inside folder2 there was a problem, because it tried to upload every folder from the root of the repository.

% terraform plan  
Running plan in the remote backend. Output will stream here. Pressing Ctrl-C
will stop streaming the logs, but will not stop the plan running remotely.

Preparing the remote plan...

The remote workspace is configured to work with configuration at
infrastructure/prod relative to the target repository.

Terraform will upload the contents of the following directory,
excluding files or directories as defined by a .terraformignore file
at /Users/yokulguy/Development/arepository/.terraformignore (if it is present),
in order to capture the filesystem context the remote workspace expects:
    /Users/yokulguy/Development/arepository

╷
│ Error: Failed to upload configuration files: Failed to get symbolic link destination for "/Users/yokulguy/Development/arepository/docker/mysql/mysql.sock": lstat /private/var/run/mysqld: no such file or directory
│ 
│ The configured "remote" backend encountered an unexpected error. Sometimes this is caused by network connection problems, in which case you could retry the command. If the issue persists please open a support
│ ticket to get help resolving the problem.
╵

The solution is that sometimes I just need to remove the "bad" folder, which is docker/mysql, and then rerun terraform plan and it works.
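
Another option I have not tried here: a .terraformignore file at the root of the uploaded directory (the path shown in the output above) should let the remote backend skip the offending folder without deleting it. A minimal sketch:

# .terraformignore - gitignore-style patterns excluded from the upload
docker/mysql/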

abmap
0

I got a similar error message during tf init after moving a module to a different folder.

Error: Unreadable module directory
Unable to evaluate directory symlink: lstat

I had to update the path!

I.e. change:

module "my-module" {
  source = "../../component_module"
}

to (depending on the new home of my-module)

module "my-module" {
  source = "../component_module"
}
intotecho