A developer is granted ownership of a table that has a masking policy. The developer's role is not able to see the masked data. Will the developer be able to modify the table to read the masked data?
Yes, because a table owner has full control and can unset masking policies.
Yes, because masking policies only apply to cloned tables.
No, because masking policies must always reference specific access roles.
No, because ownership of a table does not include the ability to change masking policies.
Even if a developer is granted ownership of a table with a masking policy, they will not be able to modify the table to read the masked data if their role does not have the necessary permissions. Ownership of a table does not automatically confer the ability to alter masking policies, which are designed to protect sensitive data. Masking policies are schema-level objects and require specific privileges to modify.
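The behavior can be sketched as follows; the table, column, and role names are hypothetical. The key point is that setting or unsetting a masking policy requires the APPLY MASKING POLICY privilege (or ownership of the policy itself), not ownership of the table:

```sql
-- Hypothetical names for illustration.
USE ROLE developer_role;  -- owns the table, but not the masking policy

-- Fails unless developer_role also holds the APPLY MASKING POLICY
-- privilege (or owns the policy); table ownership alone is not enough.
ALTER TABLE customers MODIFY COLUMN email
  UNSET MASKING POLICY;
```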
How often are encryption keys automatically rotated by Snowflake?
30 Days
60 Days
90 Days
365 Days
Snowflake automatically rotates encryption keys when they are more than 30 days old. Active keys are retired, and new keys are created. This process is part of Snowflake’s comprehensive security measures to ensure data protection and is managed entirely by the Snowflake service without requiring user intervention.
What are value types that a VARIANT column can store? (Select TWO)
STRUCT
OBJECT
BINARY
ARRAY
CLOB
A VARIANT column in Snowflake can store semi-structured value types. This includes OBJECT and ARRAY values.
The VARIANT data type is specifically designed to handle semi-structured data like JSON, Avro, ORC, Parquet, or XML, allowing for the storage of nested and complex data structures.
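A minimal sketch (hypothetical table name) showing OBJECT and ARRAY values stored in a single VARIANT column:

```sql
-- Hypothetical table for illustration.
CREATE OR REPLACE TABLE raw_events (payload VARIANT);

INSERT INTO raw_events
  SELECT PARSE_JSON('{"user": "a", "tags": ["x", "y"]}');

-- TYPEOF reports the underlying semi-structured type of each value.
SELECT TYPEOF(payload)      AS top_level,  -- OBJECT
       TYPEOF(payload:tags) AS nested      -- ARRAY
FROM raw_events;
```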
When reviewing a query profile, what is a symptom that a query is too large to fit into the memory?
A single join node uses more than 50% of the query time
Partitions scanned is equal to partitions total
An AggregateOperator node is present
The query is spilling to remote storage
When a query in Snowflake is too large to fit into the available memory, it will start spilling to remote storage. This is an indication that the memory allocated for the query is insufficient for its execution, and as a result, Snowflake uses remote disk storage to handle the overflow. This spill to remote storage can lead to slower query performance due to the additional I/O operations required.
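Spilling can also be checked after the fact from query history. A sketch using the SNOWFLAKE.ACCOUNT_USAGE.QUERY_HISTORY view (the spill columns shown are real columns of that view):

```sql
-- Find recent queries that spilled to remote storage, a sign that the
-- warehouse was too small for the query's working set.
SELECT query_id,
       warehouse_name,
       bytes_spilled_to_local_storage,
       bytes_spilled_to_remote_storage
FROM snowflake.account_usage.query_history
WHERE bytes_spilled_to_remote_storage > 0
ORDER BY start_time DESC
LIMIT 20;
```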
What is a best practice after creating a custom role?
Create the custom role using the SYSADMIN role.
Assign the custom role to the SYSADMIN role
Assign the custom role to the PUBLIC role
Add __CUSTOM to all custom role names
Assigning the custom role to the SYSADMIN role is considered a best practice because it allows the SYSADMIN role to manage objects created by the custom role. This is important for maintaining proper access control and ensuring that the SYSADMIN can perform necessary administrative tasks on objects created by users with the custom role.
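The practice looks like this in SQL (the custom role name is hypothetical); granting the new role to SYSADMIN places it in the recommended role hierarchy so SYSADMIN can manage objects created under it:

```sql
-- Hypothetical role name for illustration.
USE ROLE SECURITYADMIN;
CREATE ROLE analyst_role;

-- Best practice: attach the custom role to the SYSADMIN hierarchy.
GRANT ROLE analyst_role TO ROLE SYSADMIN;
```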
When reviewing the load for a warehouse using the load monitoring chart, the chart indicates that a high volume of queries is always queuing in the warehouse.
According to recommended best practice, what should be done to reduce the queue volume? (Select TWO).
Use multi-clustered warehousing to scale out warehouse capacity.
Scale up the warehouse size to allow Queries to execute faster.
Stop and start the warehouse to clear the queued queries
Migrate some queries to a new warehouse to reduce load
Limit user access to the warehouse so fewer queries are run against it.
To address a high volume of queries queuing in a warehouse, Snowflake recommends two best practices: use multi-cluster warehouses to scale out warehouse capacity, and migrate some queries to a new warehouse to reduce the load.
These strategies help to optimize the performance of the warehouse by ensuring that resources are scaled appropriately to meet demand.
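The scale-out option can be sketched as follows (warehouse name is hypothetical); in Auto-scale mode Snowflake starts additional clusters when queries begin to queue:

```sql
-- Hypothetical warehouse. Convert it to a multi-cluster warehouse in
-- Auto-scale mode so extra clusters spin up when queries queue.
ALTER WAREHOUSE reporting_wh SET
  MIN_CLUSTER_COUNT = 1
  MAX_CLUSTER_COUNT = 4
  SCALING_POLICY = 'STANDARD';
```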
Which of the following are best practice recommendations that should be considered when loading data into Snowflake? (Select TWO).
Load files that are approximately 25 MB or smaller.
Remove all dates and timestamps.
Load files that are approximately 100-250 MB (or larger)
Avoid using embedded characters such as commas for numeric data types
Remove semi-structured data types
When loading data into Snowflake, it is recommended to load files that are approximately 100-250 MB (or larger) and to avoid using embedded characters such as commas for numeric data types.
These best practices are designed to optimize the data loading process, ensuring that data is loaded quickly and accurately into Snowflake.
What is the default character set used when loading CSV files into Snowflake?
UTF-8
UTF-16
ISO 8859-1
ANSI_X3.4
https://docs.snowflake.com/en/user-guide/intro-summary-loading.html
For delimited files (CSV, TSV, etc.), the default character set is UTF-8. To use any other characters sets, you must explicitly specify the encoding to use for loading. For the list of supported character sets, see Supported Character Sets for Delimited Files (in this topic).
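When a file is not UTF-8, the encoding must be declared on the file format. A sketch with hypothetical table, stage, and format names (ENCODING is a real CSV file format option):

```sql
-- Hypothetical names. UTF-8 is the default; other character sets
-- must be specified explicitly via the ENCODING option.
CREATE OR REPLACE FILE FORMAT latin1_csv
  TYPE = 'CSV'
  ENCODING = 'ISO-8859-1';

COPY INTO my_table
FROM @my_stage
FILE_FORMAT = (FORMAT_NAME = 'latin1_csv');
```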
In the query profiler view for a query, which components represent areas that can be used to help optimize query performance? (Select TWO)
Bytes scanned
Bytes sent over the network
Number of partitions scanned
Percentage scanned from cache
External bytes scanned
 In the query profiler view, the components that represent areas that can be used to help optimize query performance include ‘Bytes scanned’ and ‘Number of partitions scanned’. ‘Bytes scanned’ indicates the total amount of data the query had to read and is a direct indicator of the query’s efficiency. Reducing the bytes scanned can lead to lower data transfer costs and faster query execution. ‘Number of partitions scanned’ reflects how well the data is clustered; fewer partitions scanned typically means better performance because the system can skip irrelevant data more effectively.
A company strongly encourages all Snowflake users to self-enroll in Snowflake's default Multi-Factor Authentication (MFA) service to provide increased login security for users connecting to Snowflake.
Which application will the Snowflake users need to install on their devices in order to connect with MFA?
Okta Verify
Duo Mobile
Microsoft Authenticator
Google Authenticator
Snowflake’s default Multi-Factor Authentication (MFA) service is powered by Duo Security. Users are required to install the Duo Mobile application on their devices to use MFA for increased login security when connecting to Snowflake. This service is managed entirely by Snowflake, and users do not need to sign up separately with Duo.
A marketing co-worker has requested the ability to change a warehouse size on their medium virtual warehouse called mktg__WH.
Which of the following statements will accommodate this request?
ALLOW RESIZE ON WAREHOUSE MKTG__WH TO USER MKTG__LEAD;
GRANT MODIFY ON WAREHOUSE MKTG__WH TO ROLE MARKETING;
GRANT MODIFY ON WAREHOUSE MKTG__WH TO USER MKTG__LEAD;
GRANT OPERATE ON WAREHOUSE MKTG__WH TO ROLE MARKETING;
 The correct statement to accommodate the request for a marketing co-worker to change the size of their medium virtual warehouse called mktg__WH is to grant the MODIFY privilege on the warehouse to the ROLE MARKETING. This privilege allows the role to change the warehouse size among other properties.
A virtual warehouse's auto-suspend and auto-resume settings apply to which of the following?
The primary cluster in the virtual warehouse
The entire virtual warehouse
The database in which the virtual warehouse resides
The queries currently being run on the virtual warehouse
The auto-suspend and auto-resume settings in Snowflake apply to the entire virtual warehouse. These settings allow the warehouse to automatically suspend when it’s not in use, helping to save on compute costs. When queries or tasks are submitted to the warehouse, it can automatically resume operation. This functionality is designed to optimize resource usage and cost-efficiency.
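Both settings are configured on the warehouse itself (warehouse name is hypothetical):

```sql
-- Hypothetical warehouse. Both settings act on the warehouse as a
-- whole: suspend after 5 minutes of inactivity, and resume
-- automatically when the next query arrives.
ALTER WAREHOUSE etl_wh SET
  AUTO_SUSPEND = 300   -- seconds of inactivity before suspending
  AUTO_RESUME = TRUE;
```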
Which is the MINIMUM required Snowflake edition that a user must have if they want to use AWS/Azure Privatelink or Google Cloud Private Service Connect?
Standard
Premium
Enterprise
Business Critical
https://docs.snowflake.com/en/user-guide/admin-security-privatelink.html
User-level network policies can be created by which of the following roles? (Select TWO).
ROLEADMIN
ACCOUNTADMIN
SYSADMIN
SECURITYADMIN
USERADMIN
User-level network policies in Snowflake can be created by roles with the necessary privileges to manage security and account settings. The ACCOUNTADMIN role has the highest level of privileges across the account, including the ability to manage network policies. The SECURITYADMIN role is specifically responsible for managing security objects within Snowflake, which includes the creation and management of network policies.
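A sketch of creating and applying a user-level network policy (policy name, CIDR range, and user name are hypothetical):

```sql
-- Hypothetical names; run as ACCOUNTADMIN or SECURITYADMIN.
CREATE NETWORK POLICY corp_only
  ALLOWED_IP_LIST = ('192.168.1.0/24');

-- Attach the policy at the user level.
ALTER USER jsmith SET NETWORK_POLICY = 'corp_only';
```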
What data is stored in the Snowflake storage layer? (Select TWO).
Snowflake parameters
Micro-partitions
Query history
Persisted query results
Standard and secure view results
The Snowflake storage layer is responsible for storing data in an optimized, compressed, columnar format. This includes micro-partitions, which are the fundamental storage units that contain the actual data stored in Snowflake. Additionally, persisted query results, which are the results of queries that have been materialized and stored for future use, are also kept within this layer. This design allows for efficient data retrieval and management within the Snowflake architecture.
Which Snowflake object enables loading data from files as soon as they are available in a cloud storage location?
Pipe
External stage
Task
Stream
In Snowflake, a Pipe is the object designed to enable the continuous, near-real-time loading of data from files as soon as they are available in a cloud storage location. Pipes use Snowflake’s COPY command to load data and can be associated with a Stage object to monitor for new files. When new data files appear in the stage, the pipe automatically loads the data into the target table.
References:
https://docs.snowflake.com/en/user-guide/data-load-snowpipe-intro.html
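A minimal pipe definition (pipe, table, and stage names are hypothetical); with AUTO_INGEST enabled, Snowpipe loads files as cloud-storage event notifications arrive:

```sql
-- Hypothetical names. The pipe wraps a COPY statement and loads new
-- files from the stage as soon as they are available.
CREATE OR REPLACE PIPE sales_pipe
  AUTO_INGEST = TRUE
AS
  COPY INTO sales_raw
  FROM @sales_stage
  FILE_FORMAT = (TYPE = 'CSV');
```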
Which of the following Snowflake capabilities are available in all Snowflake editions? (Select TWO)
Customer-managed encryption keys through Tri-Secret Secure
Automatic encryption of all data
Up to 90 days of data recovery through Time Travel
Object-level access control
Column-level security to apply data masking policies to tables and views
In all Snowflake editions, two key capabilities are universally available: automatic encryption of all data and object-level access control.
These features are part of Snowflake’s commitment to security and governance, and they are included in every edition of the Snowflake Data Cloud.
A user unloaded a Snowflake table called mytable to an internal stage called mystage.
Which command can be used to view the list of files that have been uploaded to the stage?
list @mytable;
list @%mytable;
list @%mystage;
list @mystage;
The command list @mystage; is used to view the list of files that have been uploaded to an internal stage in Snowflake. The list command displays the metadata for all files in the specified stage, which in this case is mystage. This command is particularly useful for verifying that files have been successfully unloaded from a Snowflake table to the stage and for managing the files within the stage.
What SQL command would be used to view all roles that were granted to USER1?
show grants to user USER1;
show grants of user USER1;
describe user USER1;
show grants on user USER1;
The correct command to view all roles granted to a specific user in Snowflake is SHOW GRANTS TO USER USER1;. This command lists all the roles that have been granted to the specified user.
What is a responsibility of Snowflake's virtual warehouses?
Infrastructure management
Metadata management
Query execution
Query parsing and optimization
Management of the storage layer
The primary responsibility of Snowflake’s virtual warehouses is to execute queries. Virtual warehouses are one of the key components of Snowflake’s architecture, providing the compute power required to perform data processing tasks such as running SQL queries, performing joins, aggregations, and other data manipulations.
Which Snowflake objects track DML changes made to tables, like inserts, updates, and deletes?
Pipes
Streams
Tasks
Procedures
In Snowflake, Streams are the objects that track Data Manipulation Language (DML) changes made to tables, such as inserts, updates, and deletes. Streams record these changes along with metadata about each change, enabling actions to be taken using the changed data. This process is known as change data capture (CDC).
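A sketch of a stream in use (stream and table names are hypothetical); the METADATA$ columns are real and describe each captured change:

```sql
-- Hypothetical names. The stream records DML changes on the table.
CREATE OR REPLACE STREAM orders_stream ON TABLE orders;

-- Inspect captured changes: METADATA$ACTION is INSERT or DELETE;
-- METADATA$ISUPDATE is TRUE when a row pair represents an update.
SELECT metadata$action,
       metadata$isupdate,
       *
FROM orders_stream;
```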
What features does Snowflake Time Travel enable?
Querying data-related objects that were created within the past 365 days
Restoring data-related objects that have been deleted within the past 90 days
Conducting point-in-time analysis for BI reporting
Analyzing data usage/manipulation over all periods of time
Snowflake Time Travel is a powerful feature that allows users to access historical data within a defined period. It enables two key capabilities: querying data as it existed at a point in the past, and restoring tables, schemas, and databases that have been dropped.
While Time Travel does allow querying of past data, it is limited to the retention period set for the Snowflake account, which is typically 1 day for standard accounts and can be extended up to 90 days for enterprise accounts. It does not enable querying or restoring objects created or deleted beyond the retention period, nor does it provide analysis over all periods of time.
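Both capabilities can be sketched in a few statements (table name is hypothetical):

```sql
-- Hypothetical table. Query the table as it existed one hour ago.
SELECT * FROM orders AT (OFFSET => -60*60);

-- Restore a dropped table, provided it is still within the
-- configured data retention period.
DROP TABLE orders;
UNDROP TABLE orders;
```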
How does a scoped URL expire?
When the data cache clears.
When the persisted query result period ends.
The encoded URL access is permanent.
The length of time is specified in the expiration_time argument.
A scoped URL expires when the persisted query result period ends, which is typically when the results cache expires. This is currently set to 24 hours.
How long does Snowflake retain information in the ACCESS HISTORY view?
7 days
14 days
28 days
365 days
Snowflake retains information in the ACCESS HISTORY view for 365 days. This allows users to query the access history of Snowflake objects within the last year.
What is the MOST performant file format for loading data in Snowflake?
CSV (Unzipped)
Parquet
CSV (Gzipped)
ORC
Parquet is a columnar storage file format that is optimized for performance in Snowflake. It is designed to be efficient for both storage and query performance, particularly for complex queries on large datasets. Parquet files support efficient compression and encoding schemes, which can lead to significant savings in storage and speed in query processing, making it the most performant file format for loading data into Snowflake.
Which formats does Snowflake store unstructured data in? (Choose two.)
GeoJSON
Array
XML
Object
BLOB
Snowflake supports storing unstructured data and provides native support for semi-structured file formats such as JSON, Avro, Parquet, ORC, and XML. GeoJSON, being a type of JSON, and XML are among the formats that can be stored in Snowflake. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What role is required to use Partner Connect?
ACCOUNTADMIN
ORGADMIN
SECURITYADMIN
SYSADMIN
 To use Partner Connect, the ACCOUNTADMIN role is required. Partner Connect allows account administrators to easily create trial accounts with selected Snowflake business partners and integrate these accounts with Snowflake
Data storage for individual tables can be monitored using which commands and/or objects? (Choose two.)
SHOW STORAGE BY TABLE;
SHOW TABLES;
Information Schema -> TABLE_HISTORY
Information Schema -> TABLE_FUNCTION
Information Schema -> TABLE_STORAGE_METRICS
 To monitor data storage for individual tables, the commands and objects that can be used are ‘SHOW STORAGE BY TABLE;’ and the Information Schema view ‘TABLE_STORAGE_METRICS’. These tools provide detailed information about the storage utilization for tables. References: Snowflake Documentation
What is the MAXIMUM Time Travel retention period for a transient table?
0 days
1 day
7 days
90 days
The maximum Time Travel retention period for a transient table in Snowflake is 1 day. This is the default and maximum duration for which Snowflake maintains the historical data for transient tables, allowing users to query data as it appeared at any point within the past 24 hours.
What statistical information in a Query Profile indicates that the query is too large to fit in memory? (Select TWO).
Bytes spilled to local cache.
Bytes spilled to local storage.
Bytes spilled to remote cache.
Bytes spilled to remote storage.
Bytes spilled to remote metastore.
In a Query Profile, the statistics that indicate a query is too large to fit in memory are bytes spilled to local storage and bytes spilled to remote storage. These metrics show that the working data set of the query exceeded the memory available on the warehouse nodes, causing intermediate results to be written to disk.
Which URL type allows users to access unstructured data without authenticating into Snowflake or passing an authorization token?
Pre-signed URL
Scoped URL
Signed URL
File URL
Pre-signed URLs in Snowflake allow users to access unstructured data without the need for authentication into Snowflake or passing an authorization token. These URLs are open and can be directly accessed or downloaded by any user or application, making them ideal for business intelligence applications or reporting tools that need to display unstructured file contents
What internal stages are available in Snowflake? (Choose three.)
Schema stage
Named stage
User stage
Stream stage
Table stage
Database stage
Snowflake supports three types of internal stages: Named, User, and Table stages. These stages are used for staging data files to be loaded into Snowflake tables. Schema, Stream, and Database stages are not supported as internal stages in Snowflake. References: Snowflake Documentation
Which of the following activities consume virtual warehouse credits in the Snowflake environment? (Choose two.)
Caching query results
Running EXPLAIN and SHOW commands
Cloning a database
Running a custom query
Running COPY commands
Running a custom query and running COPY commands consume virtual warehouse credits, as both require warehouse compute resources. In contrast, cached query results, EXPLAIN, and SHOW commands are served by the cloud services layer and do not consume warehouse credits. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which statement describes how Snowflake supports reader accounts?
A reader account can consume data from the provider account that created it and combine it with its own data.
A consumer needs to become a licensed Snowflake customer as data sharing is only supported between Snowflake accounts.
The users in a reader account can query data that has been shared with the reader account and can perform DML tasks.
The SHOW MANAGED ACCOUNTS command will view all the reader accounts that have been created for an account.
Snowflake supports reader accounts, which are a type of account that allows data providers to share data with consumers who are not Snowflake customers. However, for data sharing to occur, the consumer needs to become a licensed Snowflake customer because data sharing is only supported between Snowflake accounts. References: Introduction to Secure Data Sharing | Snowflake Documentation.
Which of the following are handled by the cloud services layer of the Snowflake architecture? (Choose two.)
Query execution
Data loading
Time Travel data
Security
Authentication and access control
The cloud services layer of Snowflake architecture handles various aspects including security functions, authentication of user sessions, and access control, ensuring that only authorized users can access the data and services.
Which privilege must be granted to a share to allow secure views the ability to reference data in multiple databases?
CREATE_SHARE on the account
SHARE on databases and schemas
SELECT on tables used by the secure view
REFERENCE_USAGE on databases
To allow secure views the ability to reference data in multiple databases, the REFERENCE_USAGE privilege must be granted on each database that contains objects referenced by the secure view. This privilege is necessary before granting the SELECT privilege on a secure view to a share.
How many resource monitors can be assigned at the account level?
1
2
3
4
Snowflake allows for only one resource monitor to be assigned at the account level. This monitor oversees the credit usage of all the warehouses in the account. References: Snowflake Documentation
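A sketch of an account-level resource monitor (monitor name and quota are hypothetical):

```sql
-- Hypothetical monitor. Only one resource monitor can be set at the
-- account level; it then governs all warehouses in the account.
USE ROLE ACCOUNTADMIN;
CREATE RESOURCE MONITOR account_rm WITH
  CREDIT_QUOTA = 1000
  TRIGGERS ON 90 PERCENT DO NOTIFY
           ON 100 PERCENT DO SUSPEND;

ALTER ACCOUNT SET RESOURCE_MONITOR = account_rm;
```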
Credit charges for Snowflake virtual warehouses are calculated based on which of the following considerations? (Choose two.)
The number of queries executed
The number of active users assigned to the warehouse
The size of the virtual warehouse
The length of time the warehouse is running
The duration of the queries that are executed
Credit charges for Snowflake virtual warehouses are calculated based on the size of the virtual warehouse and the length of time the warehouse is running. The size determines the compute resources available, and charges are incurred for the time these resources are utilized
A Snowflake user executed a query and received the results. Another user executed the same query 4 hours later. The data had not changed.
What will occur?
No virtual warehouse will be used, data will be read from the result cache.
No virtual warehouse will be used, data will be read from the local disk cache.
The default virtual warehouse will be used to read all data.
The virtual warehouse that is defined at the session level will be used to read all data.
Snowflake maintains a result cache that stores the results of every query for 24 hours. If the same query is executed again within this time frame and the data has not changed, Snowflake will retrieve the data from the result cache instead of using a virtual warehouse to recompute the results.
Which pages are included in the Activity area of Snowsight? (Select TWO).
Contacts
Sharing settings
Copy History
Query History
Automatic Clustering History
The Activity area of Snowsight includes the Query History page, which allows users to monitor and view details about queries executed in their account, including performance data. It also includes the Automatic Clustering History, which provides insights into the automatic clustering operations performed on tables.
Which statement MOST accurately describes clustering in Snowflake?
The database ACCOUNTADMIN must define the clustering methodology for each Snowflake table.
Clustering is the way data is grouped together and stored within Snowflake micro-partitions.
The clustering key must be included in the COPY command when loading data into Snowflake.
Clustering can be disabled within a Snowflake account.
Clustering in Snowflake refers to the organization of data within micro-partitions, which are contiguous units of storage within Snowflake tables. Clustering keys can be defined to co-locate similar rows in the same micro-partitions, improving scan efficiency and query performance.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What is the difference between a stored procedure and a User-Defined Function (UDF)?
Stored procedures can execute database operations while UDFs cannot.
Returning a value is required in a stored procedure while returning values in a UDF is optional.
Values returned by a stored procedure can be used directly in a SQL statement while the values returned by a UDF cannot.
Multiple stored procedures can be called as part of a single executable statement while a single SQL statement can only call one UDF at a time.
Stored procedures in Snowflake can perform a variety of database operations, including DDL and DML, whereas UDFs are designed to return values and cannot execute database operations.
How does Snowflake recommend handling the bulk loading of data batches from files already available in cloud storage?
Use Snowpipe.
Use the INSERT command.
Use an external table.
Use the COPY command.
Snowflake recommends using the COPY command for bulk loading data batches from files already available in cloud storage. This command allows for efficient and large-scale data loading operations from files staged in cloud storage into Snowflake tables.
Which activities are included in the Cloud Services layer? (Select TWO).
Data storage
Dynamic data masking
Partition scanning
User authentication
Infrastructure management
The Cloud Services layer in Snowflake includes activities such as user authentication and infrastructure management. This layer coordinates activities across Snowflake, including security enforcement, query compilation and optimization, and more
What type of columns does Snowflake recommend to be used as clustering keys? (Select TWO).
A VARIANT column
A column with very low cardinality
A column with very high cardinality
A column that is most actively used in selective filters
A column that is most actively used in join predicates
Snowflake recommends using columns with very high cardinality and those that are most actively used in selective filters as clustering keys. High cardinality columns have a wide range of unique values, which helps in evenly distributing the data across micro-partitions. Columns used in selective filters help in pruning the number of micro-partitions to scan, thus improving query performance. References: Based on general database optimization principles.
When would Snowsight automatically detect if a target account is in a different region and enable cross-cloud auto-fulfillment?
When using a paid listing on the Snowflake Marketplace
When using a private listing on the Snowflake Marketplace
When using a personalized listing on the Snowflake Marketplace
When using a Direct Share with another account
Snowsight automatically detects if a target account is in a different region and enables cross-cloud auto-fulfillment when using a paid listing on the Snowflake Marketplace. This feature allows Snowflake to manage the replication of data products to consumer regions as needed, without manual intervention.
What are benefits of using Snowpark with Snowflake? (Select TWO).
Snowpark uses a Spark engine to generate optimized SQL query plans.
Snowpark automatically sets up Spark within Snowflake virtual warehouses.
Snowpark does not require that a separate cluster be running outside of Snowflake.
Snowpark allows users to run existing Spark code on virtual warehouses without the need to reconfigure the code.
Snowpark executes as much work as possible in the source databases for all operations including User-Defined Functions (UDFs).
Snowpark brings data programmability to Snowflake, enabling developers to write code in familiar languages like Scala, Java, and Python. That code executes directly within Snowflake’s virtual warehouses, eliminating the need for a separate cluster running outside of Snowflake. Its DataFrame API is also familiar to Spark developers, which eases the migration of existing Spark workloads.
What is a characteristic of the Snowflake Query Profile?
It can provide statistics on a maximum number of 100 queries per week.
It provides a graphic representation of the main components of the query processing.
It provides detailed statistics about which queries are using the greatest number of compute resources.
It can be used by third-party software using the Query Profile API.
The Snowflake Query Profile provides a graphic representation of the main components of the query processing. This visual aid helps users understand the execution details and performance characteristics of their queries.
What privilege should a user be granted to change permissions for new objects in a managed access schema?
Grant the OWNERSHIP privilege on the schema.
Grant the OWNERSHIP privilege on the database.
Grant the MANAGE GRANTS global privilege.
Grant ALL privileges on the schema.
To change permissions for new objects in a managed access schema, a user should be granted the MANAGE GRANTS global privilege. This privilege allows the user to manage access control through grants on all securable objects within Snowflake. References: [COF-C02] SnowPro Core Certification Exam Study Guide
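A sketch of the setup (schema, database, and role names are hypothetical); in a managed access schema, grants are centralized rather than handled by object owners:

```sql
-- Hypothetical names. Create a managed access schema, then grant the
-- global privilege needed to manage permissions on objects inside it.
CREATE SCHEMA finance.secure_sch WITH MANAGED ACCESS;

USE ROLE SECURITYADMIN;
GRANT MANAGE GRANTS ON ACCOUNT TO ROLE governance_role;
```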
How often are the Account and Table master keys automatically rotated by Snowflake?
30 Days
60 Days
90 Days
365 Days.
Snowflake automatically rotates the Account and Table master keys when they are more than 30 days old. Active keys are retired, and new keys are created, ensuring robust security through frequent key changes.
Which of the following statements describes a schema in Snowflake?
A logical grouping of objects that belongs to a single database
A logical grouping of objects that belongs to multiple databases
A named Snowflake object that includes all the information required to share a database
A uniquely identified Snowflake account within a business entity
A schema in Snowflake is a logical grouping of database objects, such as tables and views, that belongs to a single database. Each schema is part of a namespace in Snowflake, which is inferred from the current database and schema in use for the session.
Which file format will keep floating-point numbers from being truncated when data is unloaded?
CSV
JSON
ORC
Parquet
The Parquet file format is known for preserving the precision of floating-point numbers when data is unloaded, preventing truncation of the values.
Which command is used to unload files from an internal or external stage to a local file system?
COPY INTO
GET
PUT
TRANSFER
The command used to unload files from an internal or external stage to a local file system in Snowflake is the GET command. This command allows users to download data files that have been staged, making them available on the local file system for further use.
What happens to the shared objects for users in a consumer account from a share, once a database has been created in that account?
The shared objects are transferred.
The shared objects are copied.
The shared objects become accessible.
The shared objects can be re-shared.
 Once a database has been created in a consumer account from a share, the shared objects become accessible to users in that account. The shared objects are not transferred or copied; they remain in the provider’s account and are accessible to the consumer account
Which privilege must be granted by one role to another role, and cannot be revoked?
MONITOR
OPERATE
OWNERSHIP
ALL
The OWNERSHIP privilege is unique in that it must be granted by one role to another and cannot be revoked. This ensures that the transfer of ownership is deliberate and permanent, reflecting the importance of ownership in managing access and permissions.
What Snowflake feature provides a data hub for secure data collaboration, with a selected group of invited members?
Data Replication
Secure Data Sharing
Data Exchange
Snowflake Marketplace
Snowflake’s Data Exchange feature provides a data hub for secure data collaboration. It allows providers to publish data that can be discovered and accessed by a selected group of invited members, facilitating secure and controlled data sharing within a collaborative environment3. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Which Snowflake command can be used to unload the result of a query to a single file?
Use COPY INTO
Use COPY INTO
Use COPY INTO
Use COPY INTO
The Snowflake command to unload the result of a query to a single file is COPY INTO <location> with the copy option SINGLE = TRUE, which writes the query result to one output file instead of multiple parallel files.
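A sketch of a single-file unload (stage path and source table are hypothetical; SINGLE is a real copy option):

```sql
-- Hypothetical stage and table. SINGLE = TRUE forces the unload into
-- one file instead of multiple parallel files.
COPY INTO @my_stage/report.csv.gz
FROM (SELECT * FROM orders)
FILE_FORMAT = (TYPE = 'CSV' COMPRESSION = 'GZIP')
SINGLE = TRUE;
```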
Which operation can be performed on Snowflake external tables?
INSERT
JOIN
RENAME
ALTER
Snowflake external tables are read-only: operations such as INSERT, RENAME, or ALTER cannot be performed on them. However, external tables can be used in queries and JOIN operations.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What type of query will benefit from the query acceleration service?
Queries without filters or aggregation
Queries with large scans and selective filters
Queries where the GROUP BY has high cardinality
Queries of tables that have search optimization service enabled
The query acceleration service in Snowflake is designed to benefit queries that involve large scans and selective filters. This service can offload portions of the query processing work to shared compute resources, which can handle these types of workloads more efficiently by performing more work in parallel and reducing the wall-clock time spent in scanning and filtering. References: [COF-C02] SnowPro Core Certification Exam Study Guide
A permanent table and temporary table have the same name, TBL1, in a schema.
What will happen if a user executes select * from TBL1 ;?
The temporary table will take precedence over the permanent table.
The permanent table will take precedence over the temporary table.
An error will say there cannot be two tables with the same name in a schema.
The table that was created most recently will take precedence over the older table.
In Snowflake, if a temporary table and a permanent table have the same name within the same schema, the temporary table takes precedence over the permanent table within the session where the temporary table was created.
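A minimal sketch of this behavior (names are illustrative):

```sql
CREATE TABLE tbl1 (c INT);            -- permanent table
INSERT INTO tbl1 VALUES (1);

CREATE TEMPORARY TABLE tbl1 (c INT);  -- same name, same schema, session-scoped
INSERT INTO tbl1 VALUES (99);

-- Within this session, the temporary table shadows the permanent one
SELECT * FROM tbl1;                   -- returns 99
```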
What does SnowCD help Snowflake users to do?
Copy data into files.
Manage different databases and schemas.
Troubleshoot network connections to Snowflake.
Write SELECT queries to retrieve data from external tables.
SnowCD is a connectivity diagnostic tool that helps users troubleshoot network connections to Snowflake. It performs a series of checks to evaluate the network connection and provides suggestions for resolving any issues.
What is a characteristic of materialized views in Snowflake?
Materialized views do not allow joins.
Clones of materialized views can be created directly by the user.
Multiple tables can be joined in the underlying query of a materialized view.
Aggregate functions can be used as window functions in materialized views.
Materialized views in Snowflake have notable limitations: the underlying query can reference only a single table, so joins (including self-joins) are not supported. Of the listed options, "Materialized views do not allow joins" is therefore the accurate characteristic. References: [COF-C02] SnowPro Core Certification Exam Study Guide
What step can reduce data spilling in Snowflake?
Using a larger virtual warehouse
Increasing the virtual warehouse maximum timeout limit
Increasing the amount of remote storage for the virtual warehouse
Using a common table expression (CTE) instead of a temporary table
To reduce data spilling in Snowflake, using a larger virtual warehouse is effective because it provides more memory and local disk space, which can accommodate larger data operations and minimize the need to spill data to disk or remote storage. References: [COF-C02] SnowPro Core Certification Exam Study Guide
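For example, moving a warehouse up one size (warehouse name is hypothetical) gives each cluster more memory and local disk, which reduces spilling:

```sql
ALTER WAREHOUSE my_wh SET WAREHOUSE_SIZE = 'LARGE';
```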
What is the purpose of the STRIP_NULL_VALUES file format option when loading semi-structured data files into Snowflake?
It removes null values from all columns in the data.
It converts null values to empty strings during loading.
It skips rows with null values during the loading process.
It removes object or array elements containing null values.
The STRIP_NULL_VALUES file format option, when set to TRUE, removes object or array elements that contain null values during the loading process of semi-structured data files into Snowflake. This ensures that the data loaded into Snowflake tables does not contain these null elements, which can be useful when the "null" values in files indicate missing values and have no other special meaning.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
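A sketch of the option in use (file format, stage, and table names are hypothetical):

```sql
CREATE FILE FORMAT my_json_format
  TYPE = JSON
  STRIP_NULL_VALUES = TRUE;  -- drop object/array elements whose value is null

COPY INTO my_table
FROM @my_stage/data.json
FILE_FORMAT = (FORMAT_NAME = 'my_json_format');
```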
Which parameter can be set at the account level to set the minimum number of days for which Snowflake retains historical data in Time Travel?
DATA_RETENTION_TIME_IN_DAYS
MAX_DATA_EXTENSION_TIME_IN_DAYS
MIN_DATA_RETENTION_TIME_IN_DAYS
MAX CONCURRENCY LEVEL
The parameter MIN_DATA_RETENTION_TIME_IN_DAYS can be set at the account level to define the minimum number of days for which Snowflake retains historical data for Time Travel. When set, the effective retention period for an object is the larger of this value and the object's DATA_RETENTION_TIME_IN_DAYS setting.
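For reference, both retention parameters can be set with ALTER ACCOUNT; a minimal sketch:

```sql
-- Account-level floor for Time Travel retention
ALTER ACCOUNT SET MIN_DATA_RETENTION_TIME_IN_DAYS = 7;

-- Default retention, also settable at database, schema, or table level
ALTER ACCOUNT SET DATA_RETENTION_TIME_IN_DAYS = 30;
```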
What will prevent unauthorized access to a Snowflake account from an unknown source?
Network policy
End-to-end encryption
Multi-Factor Authentication (MFA)
Role-Based Access Control (RBAC)
A network policy in Snowflake is used to restrict access to the Snowflake account from unauthorized or unknown sources. It allows administrators to specify allowed IP address ranges, thus preventing access from any IP addresses not listed in the policy.
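A minimal sketch (policy name and IP ranges are hypothetical):

```sql
CREATE NETWORK POLICY corp_only
  ALLOWED_IP_LIST = ('192.168.1.0/24')
  BLOCKED_IP_LIST = ('192.168.1.99');

-- Activate the policy for the whole account
ALTER ACCOUNT SET NETWORK_POLICY = 'corp_only';
```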
Which views are included in the DATA SHARING USAGE schema? (Select TWO).
ACCESS_HISTORY
DATA_TRANSFER_HISTORY
WAREHOUSE_METERING_HISTORY
MONETIZED_USAGE_DAILY
LISTING_TELEMETRY_DAILY
The DATA_SHARING_USAGE schema contains views with information about listings published in the Snowflake Marketplace or a data exchange; among the listed options, these include MONETIZED_USAGE_DAILY and LISTING_TELEMETRY_DAILY. Views such as DATA_TRANSFER_HISTORY and WAREHOUSE_METERING_HISTORY belong to the ACCOUNT_USAGE and ORGANIZATION_USAGE schemas instead.
Which command is used to start configuring Snowflake for Single Sign-On (SSO)?
CREATE SESSION POLICY
CREATE NETWORK RULE
CREATE SECURITY INTEGRATION
CREATE PASSWORD POLICY
To start configuring Snowflake for Single Sign-On (SSO), the CREATE SECURITY INTEGRATION command is used. This command sets up a security integration object in Snowflake, which is necessary for enabling SSO with external identity providers using SAML 2.0.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
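A sketch of such an integration for a SAML 2.0 identity provider (all values are placeholders):

```sql
CREATE SECURITY INTEGRATION my_idp
  TYPE = SAML2
  ENABLED = TRUE
  SAML2_ISSUER = 'https://idp.example.com'
  SAML2_SSO_URL = 'https://idp.example.com/sso/saml'
  SAML2_PROVIDER = 'CUSTOM'
  SAML2_X509_CERT = 'MIIC...';  -- IdP certificate, truncated placeholder
```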
What does the LATERAL modifier for the FLATTEN function do?
Casts the values of the flattened data
Extracts the path of the flattened data
Joins information outside the object with the flattened data
Retrieves a single instance of a repeating element in the flattened data
The LATERAL modifier for the FLATTEN function allows joining information outside the object (such as other columns in the source table) with the flattened data, creating a lateral view that correlates with the preceding tables in the FROM clause. References: [COF-C02] SnowPro Core Certification Exam Study Guide
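For illustration (the PERSONS table and its ATTRIBUTES variant column are hypothetical), LATERAL lets the flattened rows be joined with the other columns of the same table:

```sql
SELECT p.id,
       f.value:name::STRING AS item_name
FROM persons p,
     LATERAL FLATTEN(input => p.attributes:items) f;
```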
Which VALIDATION_MODE value will return the errors across the files specified in a COPY command, including files that were partially loaded during an earlier load?
RETURN_-1_ROWS
RETURN_n_ROWS
RETURN_ERRORS
RETURN_ALL_ERRORS
The RETURN_ALL_ERRORS value of the VALIDATION_MODE option in the COPY command instructs Snowflake to validate the data files and return all errors across the specified files, including errors from files that were partially loaded during an earlier load (for example, when ON_ERROR was set to CONTINUE). RETURN_ERRORS, by contrast, reports errors only for the current COPY statement. References: [COF-C02] SnowPro Core Certification Exam Study Guide
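A sketch of validating staged files without loading them (stage and table names are hypothetical):

```sql
COPY INTO my_table
FROM @my_stage
VALIDATION_MODE = 'RETURN_ALL_ERRORS';  -- also covers files partially loaded earlier
```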
What function can be used with the recursive argument to return a list of distinct key names in all nested elements in an object?
FLATTEN
GET_PATH
CHECK_JSON
PARSE_JSON
The FLATTEN function, used with the RECURSIVE => TRUE argument, expands all nested elements within an object, which makes it possible to return a list of distinct key names at every level of nesting. This is particularly useful for exploring semi-structured data in Snowflake.
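For illustration (table and column names are hypothetical), listing the distinct keys at every nesting level:

```sql
SELECT DISTINCT f.key
FROM my_table t,
     LATERAL FLATTEN(input => t.v, RECURSIVE => TRUE) f
WHERE f.key IS NOT NULL;  -- array elements have NULL keys
```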
Which data types can be used in Snowflake to store semi-structured data? (Select TWO)
ARRAY
BLOB
CLOB
JSON
VARIANT
Snowflake supports the storage of semi-structured data using the ARRAY and VARIANT data types. The ARRAY data type can directly contain VARIANT, and thus indirectly contain any other data type, including itself. The VARIANT data type can store a value of any other type, including OBJECT and ARRAY, and is often used to represent semi-structured data formats like JSON, Avro, ORC, Parquet, or XML.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
What are key characteristics of virtual warehouses in Snowflake? (Select TWO).
Warehouses that are multi-cluster can have nodes of different sizes.
Warehouses can be started and stopped at any time.
Warehouses can be resized at any time, even while running.
Warehouses are billed on a per-minute usage basis.
Warehouses can only be used for querying and cannot be used for data loading.
Virtual warehouses in Snowflake can be started and stopped at any time, providing flexibility in managing compute resources. They can also be resized at any time, even while running, to accommodate varying workloads. References: [COF-C02] SnowPro Core Certification Exam Study Guide
Who can grant object privileges in a regular schema?
Object owner
Schema owner
Database owner
SYSADMIN
In a regular schema within Snowflake, the object owner has the privilege to grant object privileges. The object owner is typically the role that created the object or to whom the ownership of the object has been transferred.
References: [COF-C02] SnowPro Core Certification Exam Study Guide
At what level is the MIN_DATA_RETENTION_TIME_IN_DAYS parameter set?
Account
Database
Schema
Table
The MIN_DATA_RETENTION_TIME_IN_DAYS parameter is set at the account level. This parameter determines the minimum number of days Snowflake retains historical data for Time Travel operations.
A JSON file, that contains lots of dates and arrays, needs to be processed in Snowflake. The user wants to ensure optimal performance while querying the data.
How can this be achieved?
Flatten the data and store it in structured data types in a flattened table. Query the table.
Store the data in a table with a variant data type. Query the table.
Store the data in a table with a variant data type and include STRIP_NULL_VALUES while loading the table. Query the table.
Store the data in an external stage and create views on top of it. Query the views.
Storing JSON data in a table with a VARIANT data type is optimal for querying because it allows Snowflake to leverage its semi-structured data capabilities. This approach enables efficient storage and querying without the need for flattening the data, which can be performance-intensive.
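A minimal sketch (table name is hypothetical) of loading and querying JSON in a VARIANT column:

```sql
CREATE TABLE events (v VARIANT);

INSERT INTO events
SELECT PARSE_JSON('{"ts": "2024-01-01", "tags": ["a", "b"]}');

-- Path notation plus casts give typed access to dates and array elements
SELECT v:ts::DATE        AS event_date,
       v:tags[0]::STRING AS first_tag
FROM events;
```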
What is the minimum Snowflake Edition that supports secure storage of Protected Health Information (PHI) data?
Standard Edition
Enterprise Edition
Business Critical Edition
Virtual Private Snowflake Edition
The minimum Snowflake Edition that supports secure storage of Protected Health Information (PHI) data is the Business Critical Edition. This edition offers enhanced security features necessary for compliance with regulations such as HIPAA and HITRUST CSF.
Which function unloads data from a relational table to JSON?
TO_OBJECT
TO_JSON
TO_VARIANT
OBJECT_CONSTRUCT
 The TO_JSON function is used to convert a VARIANT value into a string containing the JSON representation of the value. This function is suitable for unloading data from a relational table to JSON format. References: [COF-C02] SnowPro Core Certification Exam Study Guide
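For illustration (table name is hypothetical), a common pattern combines OBJECT_CONSTRUCT(*) with TO_JSON to turn each relational row into a JSON string:

```sql
SELECT TO_JSON(OBJECT_CONSTRUCT(*)) AS json_row
FROM my_table;
```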
Which command is used to unload data from a Snowflake database table into one or more files in a Snowflake stage?
CREATE STAGE
COPY INTO