My eleven year old daughter recently asked me ‘the question’. You know the one…

“Hey dadda, what’s Kubernetes?”

How do you explain Kubernetes to an eleven year old?! Check out this video to learn how. Brilliant.

I am not a Kubernetes expert. I am learning, probably just like you are. Creating resources, tearing down resources, building them again, kubectl-ing, scratching my head, scratching my head some more, rinse and repeat. I can understand how powerful it is. I can also understand how complex it is.

As an engineer, I am constantly trying to distill complex things down to their simplest form. I figure out how to do something by reading documentation, blogs, dissecting code, and a humorous amount of trial and error. If I don’t distill this information and document it, I’ll have to do it all over again in a few months as my memory is just awful. I do enjoy this process, and I hope these types of posts can help you get to where you are going a tiny bit faster.

If you want to deploy or demonstrate dynamically provisioned, persistent storage without being a Kubernetes expert, this is the post for you.

Let’s start with some basic definitions:

  1. Kubernetes: a container orchestration system
  2. Azure Kubernetes Service (AKS): a fully managed Kubernetes cluster service provided by Microsoft
  3. NetApp Trident: a dynamic, persistent storage orchestrator for Kubernetes (or Docker)
  4. Azure NetApp Files: a lightning fast, enterprise grade file storage service (NFS/SMB) provided by Microsoft

At the end of this tutorial you will have deployed AKS, installed NetApp Trident, configured a Trident backend to dynamically provision Azure NetApp Files volumes, and have a running nginx deployment served by Azure NetApp Files storage.

Before we dive in… You will need a Linux operating system to interact with Trident (tridentctl). I am using an Ubuntu VM running in Azure. You could use the Windows Subsystem for Linux or any Linux distro on the supported host operating systems list found here.
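To give you an idea of what that setup looks like, here is a rough sketch of pulling tridentctl onto a Linux host from the Trident installer bundle on GitHub. The version number below is just a placeholder; check the Trident releases page for the current one.

# download and extract the Trident installer bundle (version is a placeholder)
wget https://github.com/NetApp/trident/releases/download/v20.07.0/trident-installer-20.07.0.tar.gz
tar -xf trident-installer-20.07.0.tar.gz
# put tridentctl somewhere on your PATH
sudo cp trident-installer/tridentctl /usr/local/bin/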

Ok, let’s dive in!

Deploy your Kubernetes cluster using the Azure Kubernetes Service (AKS)

  1. From within the Azure portal, navigate to ‘Kubernetes services’, click the ‘+Add’ button at the top, and choose ‘Add Kubernetes cluster’. [Screenshot: AKS Add Cluster]
  2. You can accept the default settings for everything. You will need to provide the resource group and give your cluster a name. Feel free to reduce the number of nodes and the node size to save yourself some money; a single node is enough for this demo. AKS will create all of the required Azure networking components for you. (An Azure CLI equivalent is sketched just after this list.)
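If you would rather skip the portal clicks, here is that rough Azure CLI equivalent. The resource group name, cluster name, and region are placeholders; adjust them to taste.

# create a resource group and a single-node AKS cluster (names/region are placeholders)
az group create --name aks-anf-demo --location eastus
az aks create --resource-group aks-anf-demo --name aks-anf-cluster --node-count 1 --generate-ssh-keys
# merge the cluster credentials into your local kubeconfig
az aks get-credentials --resource-group aks-anf-demo --name aks-anf-cluster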

Create a delegated subnet for Azure NetApp Files

  1. Navigate to ‘Virtual networks’ within the Azure portal. Find your newly created virtual network. It should have a prefix similar to ‘aks-vnet’. Click on the name of the VNet. [Screenshot: AKS VNet]
  2. Click on ‘Subnets’ and select ‘+Subnet’ from the top toolbar. [Screenshot: ANF New Subnet]
  3. Give your subnet a name like ‘ANF.sn’ and under the ‘Subnet delegation’ heading, select ‘Microsoft.NetApp/volumes’. Do not change anything else. Click ‘OK’. [Screenshot: ANF Subnet Detail] (A CLI equivalent follows this list.)
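The same subnet can be created from the command line; here is that sketch. The resource group, VNet name, and address prefix are placeholders that need to match your environment (the AKS VNet typically lives in the automatically created node resource group).

# create a subnet delegated to Azure NetApp Files (names and prefix are placeholders)
az network vnet subnet create \
  --resource-group <vnet-resource-group> \
  --vnet-name aks-vnet-12345678 \
  --name ANF.sn \
  --address-prefixes 10.1.0.0/24 \
  --delegations "Microsoft.NetApp/volumes"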

Continue reading

I know you have all been waiting patiently for part two of this series. I’d like less time to go by between these multi-part posts, but sometimes life just gets in the way. In part one of this series, I showed you how to get started with Insomnia and how to get your ‘bearer token’. Now that we have these two boxes checked, we can begin to query the Azure Management API to get information about our Azure NetApp Files resources.

First, I’ll show you how to create a basic query to get a list of your NetApp Storage Accounts. For more information about the Azure NetApp Files storage hierarchy, head on over to docs.microsoft.com. Once we have a successful response to our basic query, we’ll create a more specific request and get some useful information about one of our Azure NetApp Files Volumes. Lastly, I’ll show you how to use cURL to query the API from the Linux command line. This can be really useful if you would like to integrate API calls into scripts or other automation tools.

Let’s get started.

Get a List of NetApp Accounts

Open Insomnia and, if needed, switch to your ‘AzureNetAppFiles’ environment. It should look something like this: [Screenshot: Insomnia Core]
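Under the hood, Insomnia is just issuing an HTTP GET against the Azure Management API. As a preview of the cURL approach mentioned above, a minimal sketch looks something like this; the subscription ID is a placeholder and the api-version is an assumption, so use whatever version the current docs recommend.

# list the NetApp accounts in a subscription (subscription ID and api-version are placeholders)
curl -s "https://management.azure.com/subscriptions/<subscription-id>/providers/Microsoft.NetApp/netAppAccounts?api-version=2021-10-01" \
  -H "Authorization: Bearer <your-bearer-token>"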

Continue reading

Hello and welcome to the first post in a three part series that will help you get up and running with the Azure REST API. More specifically I’ll show you how to use the Azure API to interact with your Azure NetApp Files resources.

Getting started with REST APIs can be a little tricky. There are several components to a REST API call. Combine that with the various types of authentication and things can get pretty overwhelming. Microsoft has chosen to use what is called a ‘bearer token’ as the authentication method for their Azure Management API.

I’ll call this first part ‘bearer of good tokens’.

This ‘bearer token’ is unique to you and your Azure subscription. It needs to be passed as part of your REST API call in order to prove to Azure that you are authorized to interact with your Azure resources. This token should be treated as a very sensitive bit of information. Keep it in a secure place and don’t accidentally commit it to a public code repository. (been there, done that!)

“This sounds exciting, how do I get my bear token!?”

Erm… that’s ‘bearer token’ and great question! Microsoft has made this quite easy and I have broken it down into three easy steps:

  1. Create an Azure ‘service principal’
  2. Install and Configure Insomnia Core (REST API client)
  3. Issue a POST request to https://login.microsoftonline.com

Let’s dive into each of these a little bit deeper…

1. Create an Azure ‘service principal’

I think the easiest way to do this is to use the Azure CLI (az cli):

az ad sp create-for-rbac --name "AzureNetAppFilesSP"

If you are not familiar with the Azure CLI, go check out this getting started guide.

The output (in JSON) should look something like this:

{
 "appId": "11111111-1111-1111-1111-111111111111",
 "displayName": "AzureNetAppFilesSP",
 "name": "http://AzureNetAppFilesSP",
 "password": "22222222-2222-2222-2222-222222222222",
 "tenant": "33333333-3333-3333-3333-333333333333"
}

We’ll need some of this information a bit later.
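As a preview of step 3, the POST to login.microsoftonline.com is a standard OAuth2 client-credentials request. A minimal cURL sketch using the service principal values above might look like this; the angle-bracket values are placeholders from the JSON output.

# request a bearer token using the service principal (placeholders from the JSON above)
curl -s -X POST "https://login.microsoftonline.com/<tenant>/oauth2/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<appId>" \
  -d "client_secret=<password>" \
  -d "resource=https://management.azure.com/"

The JSON response includes an access_token field; that is your bearer token.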

Continue reading

Snapshot policies are here! Until today, if you wanted to schedule or automate the creation (and retention) of snapshots in the Azure NetApp Files service, you needed to BYOA (bring your own automation). Like any first-party Azure service, ANF supports all of the standard Azure APIs and automation tools, so this wasn’t terribly difficult and there was even a Logic App that was simple to deploy and took care of the heavy lifting.

But of course, Microsoft and NetApp continue to bring new features and more value to this great service. There is one tiny caveat: the snapshot policy feature is currently in preview. But don’t worry, the registration process is painless. We’ll have you creating snapshot policies in just a few minutes.

As a prerequisite, you’ll need Azure PowerShell installed and connected to your Azure account:

Install-Module -Name Az -AllowClobber -Scope CurrentUser
Connect-AzAccount

Once you have successfully connected to your Azure account, continue with registering the ‘ANFSnapshotPolicy’ feature:

Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSnapshotPolicy

Lastly, verify the feature is ‘Registered’:

Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFSnapshotPolicy


The registration process should take about ten minutes, but your experience may vary slightly. At this point, you should see the ‘Data protection’ sub-heading and ‘Snapshot policy’ menu item within the Azure portal (take a look at the screenshot below). You are now ready to create your first snapshot policy. Head on over to the official Azure NetApp Files documentation to get started.

Continue reading

Controlling cloud spend is a massive challenge for cloud consumers of all sizes. Having good systems in place to monitor resources and provide proper alerting mechanisms when that consumption goes beyond expected levels is critical to any successful cloud deployment.

While Azure NetApp Files is a great enterprise grade file service, the Azure metrics that we have to work with today can be a bit limiting when it comes to capacity consumption. One of the current shortcomings is the absence of a metric that allows you to alert on a percentage of space consumed at a capacity pool or volume level. Currently, the alert threshold needs to be specified in bytes… bytes?! Yes, BYTES. Furthermore, when a capacity pool or volume is resized, the corresponding alert threshold needs to be manually increased or decreased accordingly.
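To put that in perspective, here is the arithmetic you are stuck doing by hand. Say you want to alert at 80% of a 4 TiB volume:

# 80% of a 4 TiB volume quota, expressed in bytes
echo $(( 4 * 1024**4 * 80 / 100 ))
# prints 3518437208883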

After a few conversations with customers and colleagues, I decided to see if I could automate the creation and updating of Azure Monitor alert rules for Azure NetApp Files resources. Inspired by my good friend Kirk Ryan’s anfScheduler Logic App, I thought that might be a good place to start.

And that is how ANFAutoAlerts was born…

ANFAutoAlerts is an Azure Logic App that automates the creation, updating, and deletion of capacity-based alerts for Azure NetApp Files.

Continue reading

Hello and welcome! I guess this is seanluce.com 2.0? 3.0? 4.0? I’ve lost count. I felt like it was time for a fresh start. So this is it. A clean slate. With a new start comes a new content platform. I kicked the tires on a few static site generators like Jekyll, Gatsby, and Pelican (you can see a pretty exhaustive list at staticgen.com if you are curious). I eventually landed on Hugo for static site generation and Netlify to build, test, and deploy. I am still getting used to the workflow, but it goes something like this:

Continue reading


Sean Luce

Cloud Solutions Architect @NetApp

Michigan