ANF QuickStart Tips

  1. Make sure ANF is registered for your Subscription(s):
    • In Cloud Shell:
      • Set your context to the correct subscription:
        Select-AzSubscription -Subscription <subscriptionId>
      • Then Register the Azure Resource Provider:
        Register-AzResourceProvider -ProviderNamespace Microsoft.NetApp
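      • Optionally, verify that the provider shows as Registered (registration can take a few minutes):
        Get-AzResourceProvider -ProviderNamespace Microsoft.NetApp | Select-Object ProviderNamespace, RegistrationState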
  2. Always use Standard network features (not Basic) for your ANF volumes
  3. Enable Accelerated Networking on client NICs whenever possible for better throughput and lower latency.
  4. Consider the Availability Zone when deploying ANF Volumes. Volumes in the same AZ as your clients should see roughly 2 ms lower latency. The AZ placement may also be slightly relevant for your AD DC.
    • Tip: If you already have an ANF volume deployed, you can identify its AZ by clicking “Populate availability zone” on the volume’s Overview page. The current AZ is shown before any change is committed, and you can cancel out after noting the AZ.
  5. Consider the Flexible service level (preview) for lower costs.
    • Note: Cool Data Access is not available for FSL Capacity Pools during the FSL preview. Depending on the workload, Cool Data Access may be the lower-cost option.
  6. Register Cool Tiering for Premium and Ultra (just so you have it as an option in the UI)
    • Run both of these commands in Cloud Shell, in the subscription(s) where ANF is deployed:
    • Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFCoolAccessPremium
    • Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFCoolAccessUltra
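    • To confirm the features show as Registered (propagation can take a while), check RegistrationState with, for example:
      Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFCoolAccessPremium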
  7. Remember Cool Data Access caveats
    • Within a Premium Capacity pool, the combined throughput allocated to Cool Data Access Volumes can only use up to 56.25% (36 MiB/s per TiB) of the total Capacity Pool throughput. Other volumes within that same pool can be allocated the full throughput of the capacity pool (see the worked example at the end of this list).
    • Within an Ultra Capacity pool, the combined throughput allocated to Cool Data Access Volumes can only use up to 53.125% (68MiB/s per TiB) of the total Capacity Pool throughput. Other volumes within that same pool can be allocated the full throughput of the capacity pool.
    • Cool Data Access is not compatible with Double encryption, since one of the encryption layers is hardware-based and cool data is moved to a different storage device/service.
    • Cool Data Access is not supported with the Flexible service level (FSL) while FSL is in preview.
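    • Worked example: a 10 TiB Premium pool provides 10 × 64 = 640 MiB/s of total throughput; the Cool Data Access volumes in that pool can be allocated at most 10 × 36 = 360 MiB/s combined (56.25%), while non-cool volumes can still be allocated up to the full 640 MiB/s.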
  8. Register for the file access logs (FAL) (preview) feature at no cost. From the Cloud Shell:
    • Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFFileAccessLogs
  9. Increase your Subscription’s ANF Quotas before you need it:
    https://learn.microsoft.com/en-us/azure/azure-netapp-files/azure-netapp-files-resource-limits#request-limit-increase
    • For new subscriptions, you’re limited to 25 TiB of ANF per region. Increasing this is free, but it requires a manual request and takes a couple of days to take effect.
    • Other limitations you may want to increase are:
      • Number of NetApp accounts per Azure region per subscription (10)
      • Number of capacity pools per NetApp account (25)
      • Number of volumes per subscription (500)
      • Number of volumes per capacity pool (500)
      • Maximum size of a single large volume (1024 TiBs)
      • Maximum number of files (maxfiles) per volume (variable)
      • Number of cross-region replication data protection volumes (destination volumes) (500)
      • Number of cross-zone replication data protection volumes (destination volumes) (500)
      • Maximum number of volumes supported for cool access per subscription per region (10)
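    • You can review a region’s current and default ANF limits from Cloud Shell (assuming your Az.NetAppFiles module version includes this cmdlet):
      Get-AzNetAppFilesQuotaLimit -Location <region>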
  10. For SMB shares, minimize latency between the ANF delegated subnet and the AD DC vNet. The DC should be read/write and ideally hosted in Azure. Use Active Directory Sites and Services to identify and specify the site of the AD DC that is closest to ANF in terms of latency (a rough latency check is sketched below).
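    • A rough way to compare candidate DCs is to run a ping round-trip test from a client VM in (or peered to) the ANF vNet (DC names below are placeholders):
      Test-Connection -ComputerName dc-azure-1.contoso.local, dc-onprem-1.contoso.local -Count 5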
  11. A hub-and-spoke architecture can be an issue if the firewall/gateway SKU is not able to support ANF’s throughput; if there are throughput concerns, consider peering the ANF vNet directly to the AD DC and client vNets.
  12. Remember to use all your available throughput.
    • When using Auto QoS, throughput (MiB/s) is allocated to each volume based on its capacity (GiB), in the same ratio as the Capacity Pool (e.g. 16 MiB/s for each 1024 GiB). If the entire Capacity Pool capacity is not allocated to volumes, the remaining throughput is left unallocated.
      • This PowerShell script is an example you can use with Manual QoS to mimic Auto QoS, but with one helpful improvement: the total throughput is allocated across all volumes based on their relative size. For example, with the script a 512 GiB volume, alone in a 1024 GiB Standard pool, would be allocated the entire 16 MiB/s. In contrast, the Auto QoS setting would only allocate 8 MiB/s of throughput to the volume, leaving the other 8 MiB/s of Capacity Pool throughput unallocated.
      • In summary, this script makes sure all throughput is allocated to volumes, regardless of their total size within the Capacity Pool (a minimal sketch of the same idea follows after the link below).
      • tvanroo/public-anf-toolbox/ANF QoS Mimic Auto
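      • A minimal, illustrative sketch of the same proportional-allocation idea (not the toolbox script itself), assuming a Manual QoS pool, the Az.NetAppFiles module, and placeholder resource names; verify the -ThroughputMibps parameter and the property names against your module version:
        # Allocate the pool's total throughput across its volumes in proportion to each volume's quota.
        $rg = "myResourceGroup"; $acct = "myAnfAccount"; $poolName = "myManualQosPool"
        $pool = Get-AzNetAppFilesPool -ResourceGroupName $rg -AccountName $acct -Name $poolName
        $vols = Get-AzNetAppFilesVolume -ResourceGroupName $rg -AccountName $acct -PoolName $poolName
        # Derive the pool throughput (MiB/s) from its size and service level: 16/64/128 MiB/s per TiB.
        $mibPerTiB = @{ Standard = 16; Premium = 64; Ultra = 128 }[$pool.ServiceLevel]
        $poolThroughputMibps = ($pool.Size / 1TB) * $mibPerTiB
        $totalQuotaBytes = ($vols | Measure-Object -Property UsageThreshold -Sum).Sum
        foreach ($vol in $vols) {
            # Each volume's share is proportional to its quota (UsageThreshold, in bytes).
            $share = [math]::Floor($poolThroughputMibps * ($vol.UsageThreshold / $totalQuotaBytes))
            Update-AzNetAppFilesVolume -ResourceGroupName $rg -AccountName $acct -PoolName $poolName `
                -Name (($vol.Name -split '/')[-1]) -ThroughputMibps $share
        }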
  13. If using Cool Data Access, fill up the volume: more data = lower cost.
    • Unused space is billed at the “hot” rate. Used data that has been untouched for the defined coolness period is moved to cheaper storage and billed at a lower rate. So, if you have unused space, you can save costs by using it for data that is infrequently used.
    • Options for this include:
      • longer and/or more granular snapshot policy
      • storage of other organizational data that needs low cost storage
      • backup or archive location in your overarching data security and retention strategy
  14. Consider Large Volumes if the volume will be 50+ TiBs to start.
    • A regular volume can’t grow past 100 TiB, but Large Volumes can grow to 1024 TiB (2048 TiB in some regions).
    • Note that Large Volumes don’t currently support ANF Backup and are not best suited for data or log volumes for database workloads.
    • Note that Large Volumes only support FAL in Azure Commercial regions under a preview program; Large Volumes + FAL are not yet supported in Azure Gov regions.
    • More details are here: https://learn.microsoft.com/en-us/azure/azure-netapp-files/large-volumes-requirements-considerations#requirements-and-considerations
  15. Consider using NetApp’s SnapCenter software (at no additional cost) to orchestrate more advanced backup and replication tasks, especially for application-consistent backups of MS SQL, Oracle, and SAP workloads. See: Protect applications running on Azure NetApp Files
  16. Consider the Data Classification & Data Mapping tool (at no additional cost) for ANF to understand your data better and to find old, sensitive, and duplicate data. See: Scan Azure NetApp Files volumes with BlueXP classification | NetApp Documentation
  17. Resources:
    • Azure NetApp Files Effective Price Estimator is an online tool for modeling costs and comparing how ANF’s snapshot efficiency protects massive amounts of data without actually consuming that capacity on disk. “How much will my workload cost on ANF?”
    • Azure NetApp Files Performance Calculator is an online tool for identifying which ANF Service Level can meet your capacity and throughput needs at the lowest cost. “Which ANF Service Level should I pick?”
    • Azure NetApp Files storage with cool access cost savings estimator is an online tool for estimating how much the Cool Data Access feature can save you. “How much can cool data access save me?”
    • ANF Capacity Manager is an Azure Logic App that manages capacity-based alert rules and automatically increases volume sizes to prevent your Azure NetApp Files volumes from running out of space.
    • ANF Health Check is a PowerShell Runbook that provides information about the health of your ANF resources and can optionally remediate various issues.
    • Awesome ANF is a GitHub repository full of ANF-related tools, scripts, and documentation.