Feature Requests

Terraform Plan Policies for Building Block Definitions — safe apply, no blind tf applies
**Problem / Use Case**

Every `terraform apply` in a Building Block (BB) run is blind: there's no way to preview what Terraform intends to do before it executes. A version upgrade, input change, or configuration drift can silently trigger destructive operations — deleting an S3 bucket, removing a VPC, wiping a database — with no warning and no chance to intervene.

Three concrete pain points:

- **Blind applies during BB upgrades.** When rolling out a new BBD version, there's no way to see the plan before apply kicks off. Destructive changes are only discovered from logs after the fact.
- **No abort-safe workflow.** If you spot dangerous output mid-run and abort, all logs disappear — along with the evidence needed to understand what happened.
- **No per-BBD guardrails.** Platform teams managing critical infrastructure cannot declare "this BBD must never delete an `aws_s3_bucket` without approval" — there's no such hook today.

**Value / Impact**

- Prevents accidental destruction of production infrastructure by surfacing dangerous operations before `terraform apply` runs.
- Lets platform engineers define per-BBD safety rules — not one-size-fits-all global settings.
- Reduces support incidents from runaway deletions or unexpected replacements during upgrades.
- Provides natural approval gates for regulated environments (SOC 2, ISO 27001, change management).
- Foundation for drift detection: once meshStack evaluates a plan, periodic dry runs become straightforward.

**Proposed Roadmap**

## Phase 1 — Plan safety checks (opt-in)

- Run `terraform plan` as part of every BB run and surface the JSON plan output in the run view.
- Automatically scan for dangerous operations (deletions, forced replacements).
- Introduced as an opt-in BBD flag, "enable plan safety checks" (default: `false`).
- When enabled and dangerous operations are detected:
  - The BB run is paused — this is a hard gate, not a warning.
  - The platform engineer sees the detected actions in meshPanel and must explicitly approve before `terraform apply` proceeds.
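For context on what such a scan would operate on: `terraform show -json <planfile>` emits a machine-readable plan in which every planned operation appears under `resource_changes` with an `actions` list. A plain deletion shows `["delete"]`, and a forced replacement shows `["delete", "create"]` or `["create", "delete"]`. An illustrative excerpt (the resource address is made up):

```json
{
  "resource_changes": [
    {
      "address": "aws_s3_bucket.artifacts",
      "type": "aws_s3_bucket",
      "change": {
        "actions": ["delete"]
      }
    }
  ]
}
```

A Phase 1 safety check only needs to flag entries whose `actions` list contains `delete`.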
## Phase 2 — Configurable policies with OPA/Rego (conftest)

- Attach OPA/Rego policies to a BBD version. meshStack evaluates them against the plan JSON before `terraform apply` using conftest.
- Policies declare one of three outcomes:
  - `deny` — abort the run, surface the violation to the operator.
  - `warn` — continue the apply, surface the warning in the run log.
  - `require_approval` — pause the BB run; a platform engineer approves in meshPanel before apply proceeds.
- Policy libraries can be fetched from a Git repository or OCI registry, enabling shared org-wide rule sets.

## Phase 3 — Policy management in meshPanel and Terraform provider

- Expose policy management as a first-class feature in meshPanel and the meshStack Terraform provider.
- Platform engineers manage, version, and assign Rego policies to BBD versions as code (GitOps-style).

**As-Code Story — meshStack Terraform Provider**

When managing BB instances via `meshstack_building_block`, policy outcomes surface as warnings and status attributes, never as errors that taint or recreate the resource — BB instances hold Terraform state that would be permanently lost on recreation.

- **`require_approval`:** Apply completes. The provider emits a warning and exposes `last_run_status = "awaiting_approval"`. Downstream automation can detect and react.
- **`deny`:** Apply completes with a warning (not an error). The resource is not tainted and not recreated.

**Related Requests**

- Building Block Deletion Approval
- Drift detection for building blocks
- Retain logs after aborting a building block run
- Approval process for marketplace requests

If you are hitting this today, reach out to support@meshcloud.io.
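As a sketch of what a Phase 2 policy could look like: conftest evaluates `deny` and `warn` rules (by default in package `main`) against the plan JSON. The rule below is illustrative only; the `require_approval` outcome is not part of conftest itself and would need a meshStack-defined convention:

```rego
# Illustrative conftest policy against Terraform plan JSON.
# conftest looks for deny/warn rules in package "main" by default.
package main

import rego.v1

# Abort the run if the plan would delete any S3 bucket.
deny contains msg if {
	rc := input.resource_changes[_]
	rc.type == "aws_s3_bucket"
	"delete" in rc.change.actions
	msg := sprintf("%s would be deleted by this plan", [rc.address])
}
```

Run locally as `conftest test tfplan.json` to get the same verdict a BB run would surface.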
Complete zero-configuration import of Building Block Definitions from meshStack Hub
**Problem / Use Case**

When importing a Building Block Definition (BBD) from meshStack Hub into the meshStack Admin Panel, I still have to manually configure inputs, output types, and variable mappings — even when the Hub module already ships a `meshstack_integration.tf` file that fully describes the BBD.

For example, the `version_spec` of a BBD — which defines every input, its type (user-provided, static, tenant ID, etc.), validation rules, and outputs — is already expressed in the `meshstack_building_block_definition` Terraform resource inside `meshstack_integration.tf`. Today, the import wizard only partially reads this information, leaving me to manually fill in the rest through the UI.

This manual step is especially painful when:

- Adopting a Hub module for the first time, with 10+ inputs to configure
- Upgrading a BBD version and needing to re-enter unchanged metadata
- Onboarding a team that doesn't know Terraform but wants to use pre-built Hub modules
- Doing a live demo where every manual click breaks the "one-click" narrative

**Value / Impact**

A complete, zero-configuration import that reads all fields from `meshstack_integration.tf` would:

- Eliminate manual UI configuration entirely for Hub-sourced BBDs
- Reduce onboarding time from hours to minutes for teams adopting Hub modules
- Provide a compelling "one-click bootstrap" story for live demos and trials
- Ensure consistency between the Terraform definition and the Panel representation (no drift from manual re-entry)
- Automatically detect the correct input type (user-provided, static, tenant ID) from the Terraform variable's type and description — removing a common source of misconfiguration

**Context / Links**

- Descriptor file for Building Block definitions in Git Repos — related in-progress request; the meshStack team is already working on using `meshstack_integration.tf` for import prefill
- Automatically set building block input type from terraform — specific ask for auto-detecting input types

If you're hitting this today when importing Hub BBDs, reach out to our customer success team or support@meshcloud.io — we'd love to hear your specific scenario and help prioritize the missing fields.
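To make the ask concrete, here is a rough sketch of the kind of information a Hub module's `meshstack_integration.tf` already carries and that a zero-configuration import would read. The attribute names and nesting below are assumptions for illustration, not the provider's exact schema:

```hcl
# Illustrative sketch only — attribute names are assumed, not the
# meshstack provider's actual schema.
resource "meshstack_building_block_definition" "storage_bucket" {
  metadata = {
    name = "storage-bucket"
  }

  spec = {
    display_name = "Storage Bucket"

    # Everything the import wizard currently asks for manually is
    # already declared here: inputs, their assignment types, outputs.
    inputs = {
      bucket_name = { assignment_type = "USER_INPUT", type = "STRING" }
      region      = { assignment_type = "STATIC", type = "STRING" }
      tenant_id   = { assignment_type = "TENANT_IDENTIFIER" }
    }

    outputs = {
      bucket_url = { type = "STRING" }
    }
  }
}
```

A complete import would map each of these fields directly into the Panel representation, with no manual re-entry.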
Support GitHub App and Personal Access Token authentication for Building Block repo cloning
**Problem / Use Case**

When configuring a Building Block Definition that clones a GitHub repository, meshStack currently only supports SSH keys. This is a significant operational burden for platform teams in many organizations because:

- SSH keys must be individually managed per Building Block Definition — every definition requires its own key pair, with no way to share credentials across definitions.
- GitHub Apps and Personal Access Tokens (PATs) are the preferred, modern approach for automation in GitHub, offering fine-grained repository permissions, short-lived credentials, and centralized management without requiring a dedicated "machine user."

As a result, platform teams in GitHub-first organizations are forced to maintain workarounds or cannot adopt meshStack Building Blocks at all for their GitHub-based automation.

**Value / Impact**

Supporting GitHub App installations and/or Personal Access Tokens (PATs) as alternative authentication methods for cloning repos would:

- Remove a major adoption blocker for GitHub-heavy platform teams.
- Align meshStack with GitHub's own recommended authentication best practices (see GitHub Apps documentation and Managing personal access tokens).
- Enable fine-grained repository access control without requiring a dedicated GitHub machine user.
- Support short-lived, automatically rotating tokens through GitHub Apps, improving security posture.
- Allow one set of credentials to be reused across multiple Building Block Definitions in a workspace.

**Context / Links**

Related Canny requests that highlight the same pain for other git providers:

- Support Azure DevOps OAuth via Service Principal to checkout Git repositories
- One SSH-key for multiple Building Blocks

If you're running into this issue or have a specific use case, please reach out to support@meshcloud.io — we'd love to hear the details.
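For context, token-based cloning is straightforward on the git side: a GitHub App installation token is used as the HTTPS password with the literal username `x-access-token` (a PAT works similarly). A minimal sketch with a placeholder token value and repository:

```shell
# Sketch: what a clone with a GitHub App installation token looks like.
# Token and repo are placeholders; installation tokens are short-lived
# and obtained via the GitHub Apps API.
GITHUB_TOKEN="ghs_exampleInstallationToken"
REPO="my-org/building-blocks"

# GitHub App installation tokens authenticate over HTTPS with the
# literal username "x-access-token":
CLONE_URL="https://x-access-token:${GITHUB_TOKEN}@github.com/${REPO}.git"
echo "${CLONE_URL}"
# git clone "${CLONE_URL}"   # commented out: requires network access and a real token
```

Because the token is passed at clone time rather than baked into a per-definition key pair, one GitHub App installation could serve many Building Block Definitions.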
Add WIF subject attribute to the Building Block Definition Terraform resource
**Problem / Use Case**

With WIF for Building Block Runs now available, platform engineers configure their backplane (e.g. an AWS IAM role trust policy or GCP WIF pool) to trust the specific subject claim for a Building Block Definition (BBD): `system:serviceaccount:<meshstack_id>:workspace.<workspace_identifier>.buildingblockdefinition.<bbd_uuid>`

When using the meshStack Terraform provider to manage BBDs as infrastructure-as-code, setting up this trust is currently cumbersome and error-prone:

- The BBD UUID is only known after the BBD has been created (it is assigned by meshStack).
- Platform engineers must manually construct the subject string from multiple components (`meshstack_id`, `workspace_identifier`, `bbd_uuid`) and pass it to downstream Terraform resources that configure WIF trust.
- This creates a chicken-and-egg problem: to get the UUID you must create the BBD first, but configuring the WIF backplane trust may be needed before (or alongside) creating the BBD version spec. Today, this requires splitting `terraform apply` into multiple steps.
- There is currently no computed attribute on `meshstack_building_block_definition` that exposes the full WIF subject claim.

Workarounds exist (see the GCP Storage Bucket example and AWS S3 Bucket example), but they require manually composing the subject string and splitting the Terraform apply into multiple steps.

**Value / Impact**

Adding a computed `workload_identity_federation_subject` (or `workload_identity_federation.subject`) attribute to `meshstack_building_block_definition` would:

- Allow platform engineers to reference the WIF subject directly in Terraform without manual string construction: `meshstack_building_block_definition.my_bbd.workload_identity_federation_subject`
- Eliminate string composition errors and make backplane trust configuration declarative and repeatable.
- Bring the developer experience in line with the `meshstack_integrations` data source, which already exposes WIF info for platform replication.
**When does a single `terraform apply` work?**

If your backplane is structured such that the resources consuming the WIF subject (e.g. an IAM role trust policy or a GCP WIF pool condition) are separate from the resources that provide static inputs to the BBD version spec, Terraform can resolve the dependency graph in a single apply pass. In that case, the computed `workload_identity_federation_subject` attribute is all you need.

**When is an additional resource split needed?**

If your backplane configuration (e.g. an IAM role ARN or a GCP pool ID) must also be passed back as a static input to the BBD version spec, a true circular dependency arises that Terraform cannot resolve in a single pass — even with the computed attribute. In that scenario, fully automating the setup in one `terraform apply` would require a future `building_block_definition_version_spec` resource that separates BBD creation from BBD version spec creation. See this follow-on request.

**Context / Links**

- WIF for Building Block Runs — shipped (Canny)
- meshStack Terraform Provider — `meshstack_building_block_definition`
- GCP Storage Bucket BBD workaround example
- AWS S3 Bucket BBD workaround example

If you're hitting this issue, reach out to our customer success team or support@meshcloud.io — we'd love to hear your specific setup and use case.
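To illustrate the single-apply case, here is a sketch of how the proposed computed attribute could feed an AWS IAM trust policy. Note that the attribute does not exist yet, and the OIDC issuer hostname and resource names are placeholders:

```hcl
resource "meshstack_building_block_definition" "s3_bucket" {
  # ... BBD definition omitted for brevity ...
}

# The trust condition consumes the computed subject directly, so no
# manual string composition and no second apply step is needed.
data "aws_iam_policy_document" "bbd_trust" {
  statement {
    actions = ["sts:AssumeRoleWithWebIdentity"]

    principals {
      type        = "Federated"
      identifiers = [aws_iam_openid_connect_provider.meshstack.arn]
    }

    condition {
      test     = "StringEquals"
      variable = "meshstack.example.com:sub" # placeholder issuer hostname
      values = [
        # Proposed computed attribute, not yet available:
        meshstack_building_block_definition.s3_bucket.workload_identity_federation_subject
      ]
    }
  }
}

resource "aws_iam_role" "bbd_backplane" {
  name               = "bbd-s3-bucket-backplane"
  assume_role_policy = data.aws_iam_policy_document.bbd_trust.json
}
```

Terraform orders these resources through the attribute reference alone, which is exactly why a computed subject removes the manual split.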
Self-service replication failure notification contacts for platform owners
**Problem / Use Case**

As a platform engineer or platform owner, I want to subscribe myself (or a team distribution list) to receive email notifications when a tenant replication fails on my platform — without needing to ask the meshStack operator to make a dhall configuration change.

Today, replication failure email recipients are statically configured in the dhall deployment config by an operator. If my team wants to be alerted when an Azure or GCP tenant fails replication, we have to raise a support request and wait for the operator to update the deployment config. This creates unnecessary friction and blocks platform teams from being autonomous.

**Value / Impact**

- **Platform team autonomy:** Platform engineers can manage their own alert contacts directly from the platform configuration screen in the meshStack panel.
- **Faster incident response:** Reducing the time between a replication failure occurring and the right person being notified improves mean time to recovery.
- **Operator offload:** Operators no longer need to make deployment-level config changes for a routine notification preference.
- **Scales with team changes:** Platform teams can add/remove notification contacts as team membership evolves, without a support ticket each time.

**Proposed Experience**

In the platform configuration screen of the meshStack panel, a platform owner should be able to:

- Add one or more email addresses (individual or distribution list) as notification contacts for replication failures on that platform.
- Have these contacts receive the same failure alert that is today only configurable via the dhall config.
- Do all of this without operator/dhall involvement.

**Context / Links**

- Related (complete): E-mail notification when tenant fails replication — the underlying notification feature is already shipped; this request is about making it self-serviceable.
- Related (open): E-Mail notification when tenant replication is stuck in "in progress"
Permission Elevation Workflow for Platform Engineers (Just-in-Time Admin Access)
**Problem / Use Case**

As a Platform Engineer using meshStack's Platform Builder, I can create building blocks that operate using workspace-scoped ephemeral API keys. This model works well for day-to-day automation — but a significant class of operations requires admin-level roles (Organization Admin, Organization User, or FinOps Manager) that go far beyond what ephemeral keys support today.

A concrete example: provisioning or managing `meshstack_payment_method` resources via the Terraform provider requires one of these elevated roles. There is currently no way to perform this operation from within a building block or as a platform engineer without being permanently assigned full admin rights. This creates a damaging trade-off: either I get permanent admin access (violating least-privilege principles), or I can't automate a legitimate part of my platform's lifecycle at all.

Key pain points:

- No mechanism to request a time-limited, scoped elevation of permissions for a specific operation (e.g., "create payment methods in workspace X for the next 2 hours").
- No approval workflow that lets a central admin review and grant such a request, with a clear audit trail.
- Platform engineers are effectively blocked from fully automating platform lifecycle operations that involve financial or cross-workspace resource management.
- The only escape hatch today is permanent full admin assignment — a clear security anti-pattern.

**Value / Impact**

Introducing a just-in-time (JIT) permission elevation workflow would:

- **Enforce least privilege by default:** platform engineers work within their workspace scope, and elevated access is always time-limited and explicitly approved.
- **Unblock legitimate automation:** building blocks could request and use elevated permissions (e.g., for payment method management) as part of a controlled, auditable flow — without requiring permanent admin accounts.
- **Improve auditability:** every elevation request, approval, and use would be traceable in meshStack's event log, supporting compliance and security requirements.
- **Reduce blast radius:** if a workspace API key or building block is compromised, the attacker does not gain persistent admin access — only an already-approved, short-lived token.
- **Align with industry patterns:** JIT access (as seen in Azure PIM, AWS IAM Identity Center, and HashiCorp Vault) is a widely adopted security best practice that our customers' security teams already expect.

**Context / Links**

This request is closely related to the Canny post on Admin Approval for Ephemeral API Key Permission Increases, which addresses the governance side of this same problem (approving a new BBD version with expanded permissions). A JIT elevation workflow is the operational complement: allowing a platform engineer to temporarily gain elevated access to perform a specific task, with admin oversight and a full audit trail.

For questions or to discuss your specific use case, reach out to support@meshcloud.io.