Terraform - Azure Data Lake create error: Failure responding to request: StatusCode=403

I'm trying to create 3 data lakes using Terraform, but I'm getting a 403 error.

I'm using an admin account with the Owner role. I also tried creating a service principal and assigning it the Blob Reader role.

Below are my code and the error.

Terraform v1.2.1 on windows_amd64

  • provider registry.terraform.io/hashicorp/azuread v2.22.0
  • provider registry.terraform.io/hashicorp/azurerm v3.7.0

resource "azurerm_storage_data_lake_gen2_filesystem" "stg-datalake" {
  for_each           = toset(["bronze", "silver", "gold"])
  name               = each.value
  storage_account_id = azurerm_storage_account.stg-datalake.id

  ace {
    scope       = "access"
    type        = "user"
    id          = azurerm_data_factory.adf.identity[0].principal_id
    permissions = "rwx"
  }
}

Error: Error: checking for existence of existing File System "gold" (Account "stgaclientteste"): datalakestore.Client#GetProperties: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: error response cannot be parsed: {"" '\x00' '\x00'} error: EOF

asked Nov 02 '25 by npinotti

2 Answers

The issue still persists after months, so I used the workaround below. An ADLS Gen2 filesystem is somewhat different from a regular storage container: Terraform creates it through the storage data-plane API, and management-plane roles such as Owner don't grant data-plane access, so you need the Storage Blob Data Owner role to create or update the filesystem.

data "azurerm_client_config" "current" {}

# HACK: Role assignment is needed to apply adls gen2 filesystem changes
resource "azurerm_role_assignment" "role_assignment" {
  scope                = var.storage_account_id
  role_definition_name = "Storage Blob Data Owner"
  principal_id         = data.azurerm_client_config.current.object_id
}

resource "azurerm_role_assignment" "role_assignment" {
  scope                = var.storage_account_id
  role_definition_name = "Contributor"
  principal_id         = data.azurerm_client_config.current.object_id
}

# HACK: Sleep is needed to wait for role assignment to propagate
resource "time_sleep" "role_assignment_sleep" {
  create_duration = "60s"

  triggers = {
    role_assignment             = azurerm_role_assignment.role_assignment.id
    role_assignment_contributor = azurerm_role_assignment.role_assignment_contributor.id
  }
}

resource "azurerm_storage_data_lake_gen2_filesystem" "filesystem" {
  name               = var.filesystem_name
  storage_account_id = var.storage_account_id
  depends_on         = [time_sleep.role_assignment_sleep]
}
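
Applied to the question's configuration, a minimal sketch (assuming the asker's original bronze/silver/gold resources and the role assignments above live in the same module; the resource names are taken from the question) would gate the filesystems behind the sleep:

# Sketch: the question's for_each filesystems, created only after the
# role assignments have had time to propagate.
resource "azurerm_storage_data_lake_gen2_filesystem" "stg-datalake" {
  for_each           = toset(["bronze", "silver", "gold"])
  name               = each.value
  storage_account_id = azurerm_storage_account.stg-datalake.id
  depends_on         = [time_sleep.role_assignment_sleep]

  ace {
    scope       = "access"
    type        = "user"
    id          = azurerm_data_factory.adf.identity[0].principal_id
    permissions = "rwx"
  }
}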
answered Nov 04 '25 by Nebulastic


In my case, I had disabled public network access on my storage account, which was the cause of the 403 error:

public_network_access_enabled = false

If you're just testing locally, you can add your public IP to the ip_rules list:

resource "azurerm_storage_account" "lake" {
  # ...

  # Re-enable public network access
  public_network_access_enabled = true

  # Control access with IP rules
  network_rules {
    default_action             = "Deny"
    ip_rules                   = [var.public_ip_address_to_allow]
    virtual_network_subnet_ids = [var.subnet_id]
    bypass                     = ["AzureServices"]
  }
}

To automate this in pipelines such as Terraform Cloud, you can fetch the runner's public IP (or the platform's published IP ranges) with the hashicorp/http provider and feed it into ip_rules, as sketched below.
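
A minimal sketch, assuming the hashicorp/http provider v3+ (which exposes response_body) and a public IP echo service; the service URL is an assumption, not part of the original answer:

# Assumption: https://api.ipify.org returns the caller's public IP as plain text.
data "http" "runner_ip" {
  url = "https://api.ipify.org"
}

resource "azurerm_storage_account" "lake" {
  # ...

  public_network_access_enabled = true

  network_rules {
    default_action = "Deny"
    # chomp() strips any trailing newline from the HTTP response
    ip_rules       = [chomp(data.http.runner_ip.response_body)]
    bypass         = ["AzureServices"]
  }
}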

answered Nov 04 '25 by Evandro Pomatti