Tri-Secret Secure is only available in Snowflake Business Critical edition or higher, making that edition mandatory to satisfy the first requirement (Answer B). Secure views are required when sharing data with another Snowflake customer to ensure that underlying table data, logic, and metadata are protected and governed during Secure Data Sharing (Answer C). To hide or reveal portions of sensitive data at query time based on role or policy, Dynamic Data Masking is the appropriate Snowflake feature (Answer E).
Row access policies control row-level visibility but do not mask column values. The Enterprise edition does not support Tri-Secret Secure, making it insufficient for the stated requirements. Materialized views are unrelated to security or data masking and introduce unnecessary storage overhead.
This question integrates multiple SnowPro Architect domains—security, governance, sharing, and environment management—and tests the architect’s ability to select the minimal set of Snowflake features that collectively meet complex compliance and operational requirements.
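The masking and sharing features discussed above can be sketched in SQL. This is a minimal illustration with hypothetical object and role names (`pii_mask`, `PII_ADMIN`, `customers`, `shared_customers`), not a definitive implementation:

```sql
-- Dynamic Data Masking: reveal email addresses only to a privileged role.
CREATE MASKING POLICY pii_mask AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('PII_ADMIN') THEN val
    ELSE '***MASKED***'
  END;

-- Attach the policy to a sensitive column; masking is applied at query time.
ALTER TABLE customers MODIFY COLUMN email
  SET MASKING POLICY pii_mask;

-- Secure view for Secure Data Sharing: the view definition and the
-- underlying table data are hidden from share consumers.
CREATE SECURE VIEW shared_customers AS
  SELECT customer_id, region, email
  FROM customers;
```

Because the masking policy is attached to the column rather than embedded in each view, the same policy governs every query path to the data, including the secure view exposed through the share.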
=========
QUESTION NO: 52 [Snowflake Data Engineering]
Which columns can be included in an external table schema? (Select THREE).
A. VALUE
B. METADATA$ROW_ID
C. METADATA$ISUPDATE
D. METADATA$FILENAME
E. METADATA$FILE_ROW_NUMBER
F. METADATA$EXTERNAL_TABLE_PARTITION
Answer: A, D, E
External tables in Snowflake expose a combination of user-defined virtual columns and system-generated metadata columns. The VALUE column is a VARIANT column, automatically present in every external table, that represents a single row of the external file as semi-structured data (such as a JSON or Avro record) read directly from external storage (Answer A).
Snowflake also provides metadata columns that describe the source file. METADATA$FILENAME identifies the name of the file from which a given row was read (Answer D), and METADATA$FILE_ROW_NUMBER indicates the row number within that file (Answer E). These columns are frequently used for auditing, debugging, and data lineage tracking.
METADATA$ROW_ID and METADATA$ISUPDATE are associated with streams and change tracking, not external tables. METADATA$EXTERNAL_TABLE_PARTITION is not a valid selectable column in the external table schema definition. This question reinforces SnowPro Architect knowledge of how Snowflake represents external data and exposes file-level metadata for data lake architectures.
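The three valid columns can be seen in a minimal external table definition. The stage, table, and column names below (`sales_stage`, `sales_ext`, `order_id`) are hypothetical, and the sketch assumes an external stage already exists over JSON files in cloud storage:

```sql
-- User-defined virtual columns are expressions over VALUE or the
-- metadata pseudocolumns; they are computed at query time, not stored.
CREATE EXTERNAL TABLE sales_ext (
    order_id  NUMBER AS (VALUE:order_id::NUMBER),
    file_name STRING AS (METADATA$FILENAME),
    file_row  NUMBER AS (METADATA$FILE_ROW_NUMBER)
  )
  LOCATION = @sales_stage/orders/
  FILE_FORMAT = (TYPE = JSON);

-- VALUE and the metadata pseudocolumns are also queryable directly,
-- which is useful for auditing and lineage:
SELECT VALUE, METADATA$FILENAME, METADATA$FILE_ROW_NUMBER
FROM sales_ext;
```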
=========
QUESTION NO: 53 [Security and Access Management]
An Architect needs to ensure that users can upload data from Snowsight into an existing table.
What privileges must be granted? (Select THREE).
A. Database: USAGE
B. Database: OWNERSHIP
C. Schema: CREATE TABLE
D. Schema: USAGE
E. Table: SELECT
F. Table: OWNERSHIP
Answer: A, D, E
Uploading data into an existing table via Snowsight requires sufficient privileges to access the database and schema and to interact with the target table. Database-level USAGE is required to access objects within the database (Answer A). Schema-level USAGE is required to access the schema containing the table (Answer D).
Table-level SELECT is required by Snowsight to validate and preview the data and table structure during the upload process (Answer E). OWNERSHIP privileges are not required and would grant excessive control. CREATE TABLE is unnecessary when uploading into an existing table.
This reflects Snowflake’s least-privilege security model and is a common SnowPro Architect exam topic when designing user self-service data ingestion workflows.
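The three required grants map directly to GRANT statements. The role and object names here (`uploader_role`, `analytics_db`, `staging`, `events`) are hypothetical placeholders:

```sql
-- Answer A: access objects within the database.
GRANT USAGE ON DATABASE analytics_db TO ROLE uploader_role;

-- Answer D: access the schema containing the target table.
GRANT USAGE ON SCHEMA analytics_db.staging TO ROLE uploader_role;

-- Answer E: let Snowsight validate and preview the table during upload.
GRANT SELECT ON TABLE analytics_db.staging.events TO ROLE uploader_role;
-- (In practice, writing rows also requires INSERT on the table,
-- though that privilege is not among this question's listed options.)
```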
=========
QUESTION NO: 54 [Performance Optimization and Monitoring]
An Architect is troubleshooting a long-running statement and needs to identify blocked transactions and the queries blocking them.
Which views should be used? (Select TWO).
A. QUERY_HISTORY
B. OBJECT_DEPENDENCIES
C. DATA_TRANSFER_HISTORY
D. LOCK_WAIT_HISTORY
E. ACCESS_HISTORY
Answer: A, D
LOCK_WAIT_HISTORY (in the SNOWFLAKE.ACCOUNT_USAGE schema) provides detailed information about transactions waiting on locks, including which transactions are blocked and which ones are blocking them (Answer D). This view is essential for diagnosing lock contention and concurrency issues.
QUERY_HISTORY complements this by providing execution details about the blocking queries, such as duration, user, and SQL text (Answer A). Together, these views allow architects to correlate blocked transactions with the responsible workloads.
The other views are unrelated to transaction locking behavior. This question highlights SnowPro Architect troubleshooting skills related to concurrency and transaction management.
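The correlation between the two views can be sketched as a single query. This assumes the ACCOUNT_USAGE column names (`query_id`, `blocker_queries`, `requested_at`, and so on); verify them against the documentation for your account before relying on the query:

```sql
-- Find transactions blocked in the last 24 hours, joined to the
-- execution details of the blocked statement.
SELECT lw.object_name,
       lw.lock_type,
       lw.query_id         AS blocked_query_id,
       lw.blocker_queries, -- array describing the blocking queries
       qh.query_text       AS blocked_query_text,
       qh.user_name
FROM snowflake.account_usage.lock_wait_history lw
JOIN snowflake.account_usage.query_history qh
  ON qh.query_id = lw.query_id
WHERE lw.requested_at > DATEADD('hour', -24, CURRENT_TIMESTAMP())
ORDER BY lw.requested_at DESC;
```

Note that ACCOUNT_USAGE views have ingestion latency, so very recent lock waits may not yet appear.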
=========
QUESTION NO: 55 [Snowflake Ecosystem and Integrations]
Which functions does the Data Build Tool (dbt) facilitate? (Select TWO).
A. Data loading
B. Data testing
C. Data visualization
D. Data transformation
E. Data replication
Answer: B, D
dbt is a transformation-focused analytics engineering tool that operates inside the data warehouse. It enables SQL-based data transformations and manages dependencies between models (Answer D). dbt also provides built-in data testing capabilities, allowing teams to define and run tests for data quality, such as uniqueness, not-null constraints, and referential integrity checks (Answer B).
dbt does not handle data loading, visualization, or replication; those functions are typically handled by ingestion tools, BI platforms, or replication services. SnowPro Architect candidates are expected to understand dbt’s role in the Snowflake ecosystem and how it fits into modern ELT architectures.
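The transformation role described above can be illustrated with a minimal dbt model. The model and source names (`stg_orders`, `raw_orders`) are hypothetical:

```sql
-- models/stg_orders.sql — a dbt model is just a SELECT statement.
-- dbt compiles the Jinja ref() into a fully qualified relation name
-- and uses it to build the dependency graph between models.
SELECT
    order_id,
    customer_id,
    order_total
FROM {{ ref('raw_orders') }}
WHERE order_total IS NOT NULL
```

The testing capability (Answer B) is then layered on top: generic tests such as `unique` and `not_null` are declared against the model's columns in a YAML properties file and executed with `dbt test`.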