On the value of certification.

Recently I sat two of the new Microsoft Security exams, Security Operations Analyst (SC-200) and Identity and Access Management Administrator (SC-300). These two exams are numbers 9 and 10 in my Microsoft certification journey and 17 and 18 overall across the last two years. As beta exams, I won't receive my results for eight weeks, ample time for me to finally get around to writing a post on my journey and the overall value I got from it. Buckle up, because I didn't know I had this many certification thoughts in me!

My certification journey started as I began developing my Azure skills. I had recently moved on from being an on-premises focused engineer, dealing with VMware clusters, SANs, general Windows infrastructure and all the fun stuff you get on premises. Azure was so new to me and I felt like a complete bonehead. Cloud seemed hard and I felt stupid. I'd taken some technology associate exams when I worked as a service desk analyst, but stopped my initial foray into study as I dipped my toe into the MCSA/MCSE ecosystem. I initially enrolled in the AZ-100 exam, half skeptical that I would drop my studies in a similar manner.

What I ended up finding was an accessible ecosystem, where I was able to develop myself and learn new technology. For that first exam, I spent three weeks or so preparing – mostly watching Nick Colyer on A Cloud Guru (actually just re-branded Skylines Academy content). I sat at home and completed the exam quickly, scoring an 885. To say I was stoked was an understatement. I booked my AZ-101 for a week later, buoyed by a decent result and a shorter set of content – the 101 exam prep was only ~5 hours of video at the time.

The day after booking my AZ-101, Microsoft announced the newer AZ-103 exam, with the caveat that AZ-100 holders would automatically receive the associate certification. I was a little annoyed initially, but the decision did take the pressure off my 101 exam. In hindsight, I wish it hadn't, because I really struggled in this one. I remember sitting at my desk on the second question having a mental meltdown, because I had absolutely no idea about some App Service question. I thought I barely passed that exam, yet I managed to luck my way into an 894. According to the scoring, I was better at web than I was at systems and network management. This was (and is) wrong. Hooray for bell curves, I suppose?

At the end of this process, I had two things on my mind. One, the systems admin (AZ-100) exam was way too easy – I would expect a system administrator to know more about Azure than what I was tested on. This would become a theme for me as I continued to take further certifications. Two, the labs were quite literally a lifesaver and will be for a lot of other people. I can't speak for my readers, but having used all three cloud providers, I can say without a doubt that the Azure portal is the most user friendly. The lab scenarios would ask me to configure X thing on Y service. The entire time, I knew that if I didn't know how, I could just hover over the various blades of that resource and have information fed to me about what each option would do. I'm almost certain I passed the 101 exam in this manner.

Following on from my associate success, my supportive employer suggested I move on to the Azure architecture focus areas. Still feeling like I didn't know enough Azure, I was quite happy to take the exam support and move up. AZ-300 and 301 followed a similar pattern, albeit with more study videos. This is probably the laziest way to prepare for an exam, because I found I would just watch content and hope I remembered it – labs were time consuming and I could just watch the instructor anyway. I did do my best to apply any learning in my day to day, so I was still getting some hands-on time, but not in the old manner where labbing solutions out was almost mandatory.

The Solution Architect expert exams followed a similar process to my associate admin exams, with the 301 first (reviews said it was easier) and the 300 second. I passed the 300 comfortably and the 301 barely. At this point, I was well engaged and committed to further exams. This was for a few reasons: I enjoyed learning about new things, even though I could not always see an application in my day to day. My employer was happy to pay – I could do nearly any exam for free, provided I passed on the first attempt. Finally, I liked the community recognition it gave me as someone adaptable and willing to learn.

Following the architect certification, I went on to achieve another 6 Microsoft certifications (hopefully 8 soon), 4 AWS, 1 GCP, 3 Okta and 2 HashiCorp certifications. I'm not afraid to admit that certain exams were taken only at the request of my employer – namely HashiCorp Vault Associate and GCP Associate Cloud Engineer. While I've used both products in production, they are not something I'm super passionate about. This is the nature of working for a service provider; sometimes a channel partner requires certification and you will be asked to assist in that process.

Hardest & Easiest Exams?

Of the twenty or so exams I’ve so far taken, the most difficult were Okta certifications, with the easiest being both HashiCorp exams.

The Okta exams were difficult not because of the content covered, but because of their format. Okta uses a format known as Discrete Option Multiple Choice (DOMC). If you read the blurb on DOMC, it's all about fairness and integrity. Test takers only ever see 50-60% of the total exam, so it becomes a lot harder for dodgy "dumps" websites to steal the questions wholesale. That being said, I found the question format stressful. I was used to applying various exam techniques to questions I wasn't comfortable with, and DOMC takes that away from you – you have to know the content well. The other problem I found was that my anxiety levels increased due to uncertainty about future options. DOMC is really good at ensuring you know a platform well, but if you know it too well, expect to feel stressed. You always want to select the best answer, but you don't know if it will be presented. I distinctly remember stewing on a DOMC question about sign-in data. I knew the prompt of "you can get this x info from the system log" was not the documented/published way of getting this info, but I also knew it was possible to extract system logs and parse them to obtain the relevant data. In a scenario where you don't know whether the "correct" answer will be presented, and a wrong selection means all your progress through the question is null and void, this is horribly stressful. You're punished for knowing more. I also found a woeful lack of third party content available for Okta exams, which would have been quite challenging if I didn't have employer support.

I think the HashiCorp exams were easier mainly due to the maturity of the program. Currently, you can complete exams on Terraform, Consul and Vault at an "Associate" level. I found the ones I took to be more aligned with foundational concepts and understanding of what the products were, rather than how to use a product in anger – similar to the Microsoft 900 series. For their price, I'm definitely not upset about the difficulty level; at $70, they're the most affordable exams I've taken. I would be interested to hear the thoughts of other people on this one, because taking on new technology as a more experienced cloud engineer can be a bit misleading – your past experience helps you pick up the tech a lot faster (CloudFormation -> ARM Templates -> Terraform is an easy example).

Bumpy ride?

I've heard from a few people that their experiences were pretty rough. I have to say, in the scheme of things I feel I've gotten off lightly. In lab scenarios, the worst I've had to deal with is a wait time for resource deletion (I made a mistake). For everything else, it really depends on the delivery method in question. My AWS and GCP exams were completed in person, an experience I would not choose over remote proctoring. It's just a hassle for me to get out to a testing center and it's nowhere near as nice or comfortable as my home office. For remote proctoring, I've had a couple of testing issues and shoddy check-ins – nothing that can't be worked through by chatting to the various providers. My biggest bit of advice in these scenarios is just to stay calm and work through the various support processes.

Value?

For me, the value of certification is only now becoming debatable. I'm a fair bit more experienced than when I started. If I was starting the same certification journey from scratch, I think it would be a hard sell. I wouldn't have the "I suck at this" mentality that I originally struggled with, or the "need an entry level job" driver that people starting out have. All up, I've spent about $2000 out of pocket on various labs, training material and certifications. If I include what my employers have reimbursed me for, I'm well over the $5000 mark. For this investment, I'm glad to say a few things have happened.

  1. I’ve increased my own skills in the technology I care about.
  2. I’ve developed a network of like minded people around me.
  3. I’ve increased my employment opportunities.

Of these three things, the first is self-explanatory: studying something tends to make you better at it. As a social benefit, I found that as I posted and shared my thoughts on the certification journey, people engaged more freely and even connected with me just to chat about the cloud journey. Showing passion definitely encourages other people to engage with you, and generally I found that I was able to learn from these people too. Some people even reached out asking which journey might be right for them – the answer to this question is always: find something that you love. As for the employment opportunities, certifications are a great way to get noticed by recruiters and hiring managers. I receive InMail on LinkedIn about once every three weeks based on little more than my certification record – I don't think this is a good thing and I don't like it. What you learn in a book/video/lab is always different from the real world. I've met people who have 20+ certifications who I would judge to be technically inept, while I've also met people with none who are absolute cloud wizards. My advice to anyone looking for a job is to show your passion through blogging, GitHub or community engagement and THEN look at getting certified. Just focusing on certifications will get you noticed by lazy and ignorant recruiting firms and through some HR filters – it won't get you a job (in my opinion). Passion and experience are way more important here.

Closing Thoughts

If you're still with me after this rambling 2,000-word-or-so monologue, thank you. Hopefully this post has provided you with some insight into cloud certification. As for me, I plan to continue with my journey, albeit with some exams I think will be the most challenging yet. I'm currently preparing to take the (ISC)² CISSP and the Certified Kubernetes Administrator. I'm still getting value out of this process and I plan to for some time to come. Until next time, stay cloudy!

Azure AD Application Policies Simplified

One of the most common arguments I hear when discussing the move to Azure AD is: "ADFS lets me control everything". For change-averse organisations, this can be a legitimate problem. More often than not, however, the challenge is not that Azure AD cannot be customised to the organisational need; it is that operators don't understand how to customise Azure AD. When considering ADFS, the following areas are commonly updated to match business requirements:

  • Branding
  • Claims Policy
  • Home Realm Discovery
  • Token Lifespans

Branding is a pretty common requirement and can be modified in two ways, depending on whether you're focused on business or consumer identity. Claims Policy, HRD and Token Lifespans are all a bit more confusing, and the policies behind them are the topic of today's post.

Policy Types

If you pop the hood on Azure AD using Graph, you will quickly discover that application policies are derived from the "stsPolicy" resource. This ensures that nearly every policy follows a standard format, with the key difference occurring within the definition element. Generally speaking, if you've written one policy type, you can write them all. Application policies can be applied against both the Application and the application Service Principal, meaning that rather than the two types immediately indicated in the Application documentation, we actually have five. If you're not aware of how Azure AD Applications and Service Principals work together, Microsoft provides a good summary here.

| Policy Type | Usage Scenario | ADFS Equivalent |
| --- | --- | --- |
| HomeRealmDiscovery | "Fast forwarding" directly from Azure AD to a branded sign-in page or external IDP. Useful in migration scenarios. | Home Realm Discovery |
| ClaimsMappingPolicy | Mapping data that is not supported by "Optional Claims" into SAML, ID and Access tokens. | Claim Rules |
| PermissionGrantPolicy | Bypass admin approval flows when users request specific permissions, e.g. Graph/User.Read. | N/A |
| TokenIssuancePolicy | Update characteristics of SAML tokens – things like token signing or SAML version. | WS-Fed and custom certificates |
| TokenLifetimePolicy | Extend or modify how long SAML or ID tokens are valid for. | Relying Party Token Lifetimes |

Unfortunately, documentation on application policies is currently a little light on content, and there are a few important details you must understand when applying them:

  1. As of writing, some policy types are in preview, meaning that Microsoft reserves the right to change how they work.
  2. ClaimsMappingPolicies require you to set the "acceptMappedClaims" value to true within the application manifest OR configure a custom signing key (see the snippet after this list).
  3. TokenLifetimePolicy works only for ID and access tokens as of January 31st 2021. Refresh and session token configuration has moved to Conditional Access session controls.
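For point 2, the manifest change itself is a single flag. A minimal illustration of the relevant snippet as it appears in the App Registration manifest (in the Microsoft Graph application resource, the equivalent property sits under api):

{
    "acceptMappedClaims": true
}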

Reading Policy Objects

Thankfully the current specifications for policy objects are quite simple. In the below example we declare a ClaimsMappingPolicy which maps employeeid data from the Azure AD User through to SAML and ID Tokens.

{
    "ClaimsMappingPolicy": {
        "Version": 1,
        "IncludeBasicClaimSet": "true",
        "ClaimsSchema": [
            {
                "Source": "user",
                "ID": "employeeid",
                "SamlClaimType": "http://schemas.xmlsoap.org/ws/2005/05/identity/claims/employeeid",
                "JwtClaimType": "employeeid"
            }
        ]
    }
}

One principle to apply when building policies is to keep them granular. This makes the effect of a policy clear and also enables you to assign one policy to many applications.

Applying Policy

Applying a policy to an application is currently not supported within the Azure AD portal, so you need to use PowerShell and the AzureADPreview module. This is a pretty simple five step process.

1. Import the AzureADPreview Module and sign in to Azure
2. Create your application, either in the portal or using PowerShell
3. Create your application policy using PowerShell

#Create Policy Object
New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true", "ClaimsSchema": [{"Source": "user","ID":"employeeid","SamlClaimType":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/employeeid","JwtClaimType":"employeeid"}]}}') -DisplayName EmitEmployeeIdClaim -Type ClaimsMappingPolicy

4. Assign your policy to your application

#Apply Policy to targeted application.
Add-AzureADServicePrincipalPolicy -Id <ServicePrincipalOBJECTId> -RefObjectId <PolicyId>

5. Validate your policy assignment

Get-AzureADServicePrincipalPolicy -Id <ServicePrincipalOBJECTId>
Policy Assignment Process

Hopefully you have found this post informative, with a few of your policy options de-mystified. As always, feel free to reach out if you have any questions regarding your own Identity and Access Management scenarios.

Effortless sync for Azure AD B2B users within AD Connect

Recently I have been working on a few identity projects where Azure AD B2B users have been a focus point. The majority of organisations have always had a solution or process for onboarding contractors and partners. More often than not, this is simply "create an AD account" and call it a day. But what about Azure AD? How do organisations enable trusted parties without paying for extra licences?

Using native "cloud only" B2B accounts lets organisations onboard contractors seamlessly, but what about scenarios where you want to control password policy, or grant access to on-premises integrated solutions? In these scenarios, retaining the on-premises process can be a hard requirement. Most importantly, we need to solve all of this without changes to existing business process.

Thankfully, Microsoft has developed support for UserTypes within AD Connect. Using this functionality, administrators can configure inbound and outbound synchronisation within AD Connect, with the end result being on-premises AD-mastered guest accounts within Azure AD.

The Microsoft Process

Enabling this synchronisation according to the Microsoft documentation is a pretty straightforward task:

  1. Disable synchronisation – you should complete this before carrying out any work on AD Connect.
  2. Designate and populate an attribute which will identify your partner accounts. "ExtensionAttributes" within AD are a prime target here (see the sketch after these steps for bulk population).
  3. Using the AD Connect Sync manager, ensure that you are importing your selected attribute.
  4. Using the AD Connect Sync Manager, enable “userType” within the Azure AD schema
Enabling UserType within the Azure AD Connector schema

5. Create an import rule within the AD Connect rules editor, targeting your designated attribute. Use an expression rule like so to ensure the correct value is applied.

IIF(IsPresent([userPrincipalName]),IIF(CBool(InStr(LCase([userPrincipalName]),"@partners.fabrikam123.org")=0),"Member","Guest"),Error("UserPrincipalName is not present to determine UserType"))

6. Create an export rule moving your new attribute from the metaverse through to Azure AD
7. Enable synchronisation and validate your results.
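Populating the attribute from step 2 doesn't have to be a manual exercise. A rough sketch using the ActiveDirectory PowerShell module – the OU path and attribute here are examples only, so swap in whatever you designated:

# Hypothetical example: stamp every user in a Partners OU with a marker value
Import-Module ActiveDirectory
Get-ADUser -SearchBase "OU=Partners,DC=ad,DC=contoso,DC=com" -Filter * |
    Set-ADUser -Replace @{extensionAttribute10 = "Partner"}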

A Better Way to mark B2B accounts

While the above method will most definitely work, it has a couple of drawbacks. Firstly, it relies on data entry: if the designated attribute is not set correctly, your users will not update, and if you haven't already got this data, you need to apply it – more work. Secondly, the same result can be achieved with a single sync rule and basic directory management – fewer places for our configuration to break.

To apply this simpler configuration, you still complete steps 1 and 4 from above. Next, ensure that your users are properly organised into OUs. For this example, I'm using a "Standard" and "Partner" OU structure.

Finally, you create a single rule outbound from the AD Connect metaverse to Azure AD. As with most outbound rules, ensure you have an appropriate scope. In the below example we want all users who are NOT mastered by Azure AD.

The critical part of your rule is the transformations. Because DistinguishedName (CN + OU) is imported to AD Connect by default, our rule can quickly filter on the OU which holds our users.

IIF(IsPresent([distinguishedName]),IIF(CBool(InStr(LCase([distinguishedName]),"ou=users - partners,dc=ad,dc=westall,dc=co")=0),"Member","Guest"),Error("distinguishedName is not present to determine UserType"))

Our outbound transformation rule

And just like that, we have Azure AD Accounts, automatically marked as Guest Users!


B2B and Member accounts copied from AD

Empowered Multi Cloud: Azure Arc and Kubernetes

At Arinco, we love Kubernetes, and in this post I'll be covering the basics of configuring Azure Arc on Kubernetes. As a preview feature, this integration enables Azure administrators to connect to remote Kubernetes clusters and manage deployments, policy and monitoring data without leaving the Azure Portal. If you're experienced with Google Cloud, this functionality is remarkably similar to Google Anthos, with the main difference being that Anthos focuses only on Kubernetes, whereas Arc will quite happily manage servers, SQL and data platforms as well.

Azure Arc Architecture

Before we begin, there are a couple of key facts you need to be aware of while Arc for Kubernetes is in preview:

  • Currently only East US and West Europe deployments are supported.
  • Only x64 based clusters will work at this time and no manifests are published for you to recompile software on other architectures.
  • Testing of supported clusters is still in early days. Microsoft doesn’t recommend the Arc enabled Kubernetes solution for production workloads

Enabling Azure Arc

Assuming that you already have a supported cluster, configuring a connected Kubernetes instance is a monumentally simple task. Two steps, to be exact.

1. Enable the preview azure cli extensions

az extension add --name connectedk8s
az extension add --name k8sconfiguration

2. Run the CLI commands to enable an ARC enabled cluster

az connectedk8s connect --name GKE-KUBERNETES-LAB --resource-group KUBERNETESARC-RG01
Enabling Azure Arc

Under the hood, Azure CLI completes the following when we execute the above command:

  1. Creates an ARM Resource for your cluster, generating the relevant connections and secrets.
  2. Connects to your current cluster context (see kubeconfig) and creates a deployment using Helm. ConfigMaps are provided with details for connecting to Azure, with resources being published into an azure-arc namespace.
  3. Monitors this deployment to completion. For failing clusters, expect to be notified of failure after approximately 5-10 minutes.

If you would like to watch the deployment, it generally takes around 30 seconds for the azure-arc namespace to show up, and from there you can watch as the Azure Arc related pods are scheduled.
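If your current kubectl context points at the cluster being onboarded, something as simple as the following lets you follow along:

# Watch the Azure Arc agents come up in the azure-arc namespace
kubectl get pods -n azure-arc --watch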

So what can we do?

Once a cluster is onboarded to Arc, there is actually quite a bit you can do in preview, including monitoring. The most important capability, in my opinion, is the simplified method of controlling clusters via the GitOps model. If you were paying attention during deployment, you will have noticed that Flux is used to deliver this functionality. Expect further updates here, as Microsoft has recently committed publicly to further developing a standardised GitOps model.

Using this configuration model is quite simple, and to be perfectly honest, you don't even need to understand exactly how Flux works. First, commit your Kubernetes manifests to a public repository – don't stress too much about order or structure, Flux is basically magic here and can figure everything out. Next, add a configuration to your cluster and go grab a coffee.

For my cluster, I’ve used the Microsoft demo repository. Simply fork this and you can watch the pods create as you update your manifests.
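For reference, a sketch of what creating that configuration looked like with the preview CLI extension at the time of writing – the cluster name matches the earlier example and the configuration names are placeholders:

# Create a GitOps configuration pointing the cluster at a public repository
az k8sconfiguration create \
  --name cluster-config \
  --cluster-name GKE-KUBERNETES-LAB \
  --resource-group KUBERNETESARC-RG01 \
  --operator-instance-name cluster-config \
  --operator-namespace cluster-config \
  --repository-url https://github.com/Azure/arc-k8s-demo \
  --scope cluster \
  --cluster-type connectedClusters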

Closing Thoughts

There are a lot of reasons to run your own cluster, or a cluster in another cloud. Generally speaking, if you're currently considering Azure Arc, you will be pretty comfortable with the Kubernetes ecosystem as a whole.

Arc enabled clusters are just another tool you could add, and you should apply the same consideration that you give every other service you consider utilising. In my opinion, the biggest benefit of the service is the simplified and centralized management capability across multiple clusters. This allows me to manage my own AKS clusters and AWS/GCP clusters with centralized policy enforcement, RBAC and monitoring. I would probably look to implement Arc if I was running a datacenter cluster, and definitely if I was looking to migrate to AKS in the future. If you are looking to test out Arc for yourself, I would definitely recommend the Azure Arc Jumpstart.
Until next time, stay cloudy!

Originally posted at arinco.com.au

Empowered Multi Cloud: Onboarding IaaS to Azure Arc

More often than not, organisations move to the cloud on a one-way path. This can be a challenging process, with a large amount of learning, growth and understanding required. But why does it all have to be in one direction? What about modernising by bringing the cloud to you? One of the ways organisations can begin this process when moving to Azure is by leveraging Azure Arc, a provider agnostic toolchain that supports integration of IaaS, data services and Kubernetes into the Azure control plane.

Azure Arc management control plane diagram
Azure Arc Architecture

Using Arc, technology teams are able to use multiple powerful Azure tools in an on-premises environment. These include:

  • Azure Policy and guest extensions
  • Azure Monitor
  • Azure VM Extensions
  • Azure Security Centre
  • Azure Automation including Update Management, Change Tracking and Inventory.

Most importantly, the Arc pricing model is my favourite type of pricing model: FREE! Arc focuses on connecting to Azure and providing visibility, with some extra cost required as you consume secondary services like Azure Security Centre.

Onboarding servers to Azure Arc

Onboarding servers to Arc is a relatively straightforward task and is supported in a few different ways. If you're working on a small number of servers, onboarding using the Azure portal is a manageable task. However, if you're running at scale, you probably want to look at an automated deployment using tools like the VMware CLI script or Ansible.

For the onboarding in this blog, I’m going to use the Azure Portal for my servers. First up, ensure you have registered the HybridCompute provider using Azure CLI.

az provider register --namespace 'Microsoft.HybridCompute'

Next, search for Arc in the portal and select add a server. The process here is very much “follow the bouncing ball” and you shouldn’t have too many questions. Data residency is already supported for Australia East, so no concerns there for regulated entities!

Providing basic residency and storage information

When it comes to tagging Arc servers, Microsoft suggests a few location based tags, with the option to include business based tags as well. In a lab scenario like this demo, location is pretty useless; however, in real-world scenarios it can be quite useful for identifying what resources exist in each site. Once tagging is complete, you will be provided with a script for the target server. You can use the generated script for multiple servers; however, you will need to update any custom tags you may have added.

The script execution itself is generally a pretty quick process, with the end result being a provisioned resource in Azure and the Connected Machine Agent on your device.
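The generated script differs per OS, but on Windows it essentially boils down to downloading the Connected Machine Agent and running a connect command along these lines – the IDs, names and tags below are placeholders, not values from my lab:

# Connect the local machine to Azure Arc (the core of the generated onboarding script)
& "$env:ProgramFiles\AzureConnectedMachineAgent\azcmagent.exe" connect `
    --resource-group "ARC-SERVERS-RG01" `
    --tenant-id "<tenant-id>" `
    --location "australiaeast" `
    --subscription-id "<subscription-id>" `
    --tags "Datacenter=HomeLab"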

Connected Machine Agent – Installed
Our servers in Azure

So what can we do?

Now that you've completed onboarding, you're probably wondering: what next? I'm a big fan of the Azure Monitoring platform (death to SCOM), so for me this will always be a Log Analytics onboarding task, closely followed by Security Centre. One of the key benefits of Azure Arc is the simplicity of everything, so you should find onboarding any Arc supported solution to be a straightforward process. For Log Analytics, navigate to Insights, select your analytics workspace, enable, and you're done!

Enabling Insights
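If you prefer scripting over clicking, the preview connectedmachine CLI extension can push the same agent as a VM extension. Treat the parameter names below as assumptions based on that preview extension rather than gospel, and swap in your own workspace details:

# Hypothetical sketch: deploy the Log Analytics (MMA) extension to an Arc server
az extension add --name connectedmachine
az connectedmachine extension create \
  --machine-name "ARC-SERVER01" \
  --resource-group "ARC-SERVERS-RG01" \
  --location "australiaeast" \
  --name "MicrosoftMonitoringAgent" \
  --type "MicrosoftMonitoringAgent" \
  --publisher "Microsoft.EnterpriseCloud.Monitoring" \
  --settings '{"workspaceId":"<workspace-id>"}' \
  --protected-settings '{"workspaceKey":"<workspace-key>"}'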

What logs you collect depends entirely on your logging collection strategy, with Microsoft providing further detail on that process here. In my opinion, having the performance data in a single location is worth its weight in gold.

Performance Data

If you have already connected Security Centre to your workspace, onboarding to Log Analytics often also connects your device to Security centre, enabling detailed monitoring and vulnerability management.

Domain controller automatically enabled for Security Centre

Right for you?

While the cloud enables organisations to move quickly, sometimes moving slowly is just what the doctor ordered. Azure Arc is definitely a great platform for organisations looking to begin using Azure services and most importantly, bring Azure into their data centre. If you’re wanting to learn more about Arc, Microsoft has published an excellent set of quick-starts here and the documentation is also pretty comprehensive. Stay tuned for our next post, where we explore using Azure Arc with Kubernetes. Until next time, stay cloudy!

Managing Container Lifecycle with Azure Container Registry Tasks

Recently I've been spending a bit of time working with a few customers, onboarding them to Azure Kubernetes Service. This is generally a pretty straightforward process: build the cluster, configure ACR, set up CI/CD.

During the CI/CD buildout with one customer, we noticed pretty quickly that our cheap and easy basic ACR was filling up rather quickly. Mostly with development containers which were used once or twice and then never again.

Not yet 50% full in less than a month;

In my opinion the build rate of this repository wasn’t too bad. We pushed to development and testing 48 times over a one week period, with these incremental changes flowing through to production pretty reliably on our weekly schedule.

That being said, the growth trajectory had our development ACR filling up in about 3-4 months. Sure, we could simply upgrade the ACR to a standard or premium tier, but at what cost? A 4x price increase between Basic and Standard SKUs, and an even steeper 9x to Premium. Thankfully, we can solve this in a few ways.

  1. Manage our container size – start from scratch or a container-specific OS like Alpine (see the sketch after this list).
  2. Build containers less frequently – We have almost a 50:1 development to production ratio, so there is definitely a bit of wiggle room there.
  3. Manage the registry contents, deleting old or untagged images.
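On point 1, multi-stage builds make a big difference to image size. A quick sketch, assuming a Go based API – the image and binary names here are made up:

# Build stage: full toolchain
FROM golang:1.16-alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/my-api .

# Runtime stage: nothing but the binary
FROM scratch
COPY --from=build /out/my-api /my-api
ENTRYPOINT ["/my-api"]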

Combining these options provides our team with a long term and scalable solution. But how can we implement item number 3?

ACR Purge & Automatic Cleanup

As a preview feature, Azure Container Registry now supports filter based cleanup of images and containers. This can be completed as an ad-hoc process or as a scheduled task. To get things right, I’ll first build an ACR command that deletes tagged images.

# Environment variable for container command line
PURGE_CMD="acr purge \
  --filter 'container/myimage:dev-.*' \
  --ago 3d --dry-run"

az acr run \
  --cmd "$PURGE_CMD" \
  --registry mycontainerregistry \
  /dev/null

I’ve set an agreed upon container age for my containers and I’m quite selective of which containers I purge. The above dry-run only selects the development “myimage” container and gives me a nice example of what my task would actually do.

Including multiple filters in purge commands is supported, so feel free to build expansive query sets. Once you are happy with the dry run output, it's time to set up an automatic job. ACR uses standard cron syntax for scheduling, so this should be a pretty familiar experience for Linux administrators.

PURGE_CMD="acr purge \
  --filter 'container/my-api:dev-.*' \
  --filter 'container/my-db:dev-.*' \
  --ago 3d"

az acr task create --name old-container-purge \
  --cmd "$PURGE_CMD" \
  --schedule "0 2 * * *" \
  --registry mycontainerregistry \
  --timeout 3600 \
  --context /dev/null

And just like that, we have a task which will clean up our registry daily at 2am.
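If you want to double check the task registered correctly, or kick off an ad-hoc run rather than waiting for the schedule, the following does the trick (using the names from above):

# Inspect the scheduled task and trigger it manually
az acr task show --name old-container-purge --registry mycontainerregistry --output table
az acr task run --name old-container-purge --registry mycontainerregistry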

As an ARM template please?

If you’re operating or deploying multiple container registries for various teams, you might want to standardise this type of task across the board. As such, integrating this into your ARM templates would be mighty useful.

Microsoft provides the "Microsoft.ContainerRegistry/registries/tasks" resource type for deploying these actions at scale. There is, however, a slightly irritating quirk: your ACR command must be base64 encoded YAML following the tasks specification neatly documented here. I'm not sure about our readers, but combining Base64, YAML and JSON generally leaves a nasty taste in my mouth!
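If you want to generate that encoded value yourself rather than trusting mine, a couple of lines of PowerShell will do it – assuming your task definition is saved as purge-task.yaml:

# Base64 encode the task YAML for use in the ARM template parameter
$yaml = Get-Content -Path .\purge-task.yaml -Raw
[Convert]::ToBase64String([Text.Encoding]::UTF8.GetBytes($yaml))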

{
    "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
    "contentVersion": "1.0.0.0",
    "parameters": {
        "containerRegistryName": {
            "type": "String",
            "metadata": {
                "description": "Name of the ACR to deploy task resource."
            }
        },
        "containerRegistryTaskName" : {
            "defaultValue": "old-container-purge",
            "type": "String",
            "metadata": {
                "description": "Name for the ACR Task resource."
            }
        },
        "taskContent" : {
            "defaultValue": "dmVyc2lvbjogdjEuMS4wCnN0ZXBzOiAKICAtIGNtZDogYWNyIHB1cmdlIC0tZmlsdGVyICdjb250YWluZXIvbXktYXBpOmRldi0uKicgLS1maWx0ZXIgJ2NvbnRhaW5lci9teS1kYjpkZXYtLionIC0tYWdvIDNkIgogICAgZGlzYWJsZVdvcmtpbmdEaXJlY3RvcnlPdmVycmlkZTogdHJ1ZQogICAgdGltZW91dDogMzYwMA==",
            "type": "String",
            "metadata": {
                "description": "Base64 Encoded YAML for the ACR Task."
            }
        },
        "taskSchedule"  : {
            "defaultValue": "0 2 * * *",
            "type": "String",
            "metadata": {
                "description": "CRON Schedule for the ACR Task resource."
            }
        },
        "location": {
            "type": "string",
            "defaultValue": "[resourceGroup().location]",
            "metadata": {
                "description": "Location to deploy the ACR Task resource."
            }
        }
    },
    "functions": [],
    "variables": {},
    "resources": [
        {
            "type": "Microsoft.ContainerRegistry/registries/tasks",
            "name": "[concat(parameters('containerregistryName'), '/', parameters('containerRegistryTaskName'))]",
            "apiVersion": "2019-06-01-preview",
            "location": "[parameters('location')]",
            "properties": {
                "platform": {
                    "os": "linux",
                    "architecture": "amd64"
                },
                "agentConfiguration": {
                    "cpu": 2
                },
                "timeout": 3600,
                "step": {
                    "type": "EncodedTask",
                    "encodedTaskContent": "[parameters('taskContent')]",
                    "values": []
                },
                "trigger": {
                    "timerTriggers": [
                        {
                            "schedule": "[parameters('taskSchedule')]",
                            "status": "Enabled",
                            "name": "t1"
                        }
                    ],
                    "baseImageTrigger": {
                        "baseImageTriggerType": "Runtime",
                        "status": "Enabled",
                        "name": "defaultBaseimageTriggerName"
                    }
                }
            }
        }
    ],
    "outputs": {}
}

The above base64 decodes to the following YAML. Note that it includes the required command and some details about the execution timeout limit. For actions that purge a large number of containers, Microsoft advises you might need to increase this limit beyond the default 3600 seconds (1 hour).

version: v1.1.0
steps: 
  - cmd: acr purge --filter 'container/my-api:dev-.*' --filter 'container/my-db:dev-.*' --ago 3d"
    disableWorkingDirectoryOverride: true
    timeout: 3600

Summary

Hopefully, you have found this blog post informative and useful. There are a number of scenarios for this feature-set; deleting untagged images, cleaning up badly named containers or even building new containers from scratch. I’m definitely excited to see this feature move to general availability. As always, please feel free to reach out if you would like to know more. Until next time!

Attempting to use Azure ARC on an RPi Kubernetes cluster

Recently I've been spending a fair bit of effort working on Azure Kubernetes Service. I don't think it really needs repeating, but AKS is an absolutely phenomenal product. You get all the excellence of the K8s platform, with a huge percentage of the overhead managed by Microsoft. I'm obviously biased as I spend most of my time on Azure, but I definitely find it easier than GKE and EKS. The main problem I have with AKS is cost. Not for production workloads or business operations, but for lab scenarios where I just want to test my manifests, Helm charts or whatever. There are definitely a lot of options for spinning up clusters on demand for lab scenarios, or even reducing the cost of an always-present cluster: Terraform, Kind, or even just right sizing and power management. I could definitely find a solution that fits within my current Azure budget. Never being one to take the easy option, I've taken a slightly different approach for my lab needs: a two node (soon to be four) Raspberry Pi Kubernetes cluster.

Besides just being cool, it's great to have a permanent cluster available for personal projects, with the added bonus that my Azure credit is saved for more deserving work!

That's all good and well, I hear you saying, but I needed this cluster to lab AKS scenarios, right? Microsoft has been slowly working to integrate "non AKS" Kubernetes into Azure in the form of Arc enabled clusters – think of this almost as an Azure competitor to Google Anthos, but with so much more. Why? Arc doesn't just cover the K8s platform; it brings a whole host of Azure capability right onto the cluster.

The setup

Configuring a connected Arc cluster is a monumentally simple task for clusters which pass muster. Two steps, to be exact.

1. Enable the preview azure cli extensions

az extension add --name connectedk8s
az extension add --name k8sconfiguration

2. Run the CLI commands to enable an ARC enabled cluster

az connectedk8s connect --name RPI-KUBENETES-LAB --resource-group KUBERNETESARC-RG01

In the case of my Raspberry Pi cluster, arm64 architecture really doesn't cut it. Shortly after you run your commands, you will receive a timeout and discover pods stuck in a pending state.

Timeouts like this are never good.
Our very stuck pods.

Digging into the deployments, it quickly becomes obvious that an amd64 architecture is really needed to make this work. Pods are scheduled across the board with a node selector. Removing this causes a whole host of issues related to what looks like both container compilation & software architecture. For now it looks like I might be stuck with a tantalising object in Azure & a local cluster for testing. I’ve become a victim of my own difficult tendencies!
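If you want to confirm this on your own cluster, the node selector is easy enough to spot on the deployments in the azure-arc namespace:

# List each Azure Arc deployment alongside its node selector
kubectl get deployments -n azure-arc \
  -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.template.spec.nodeSelector}{"\n"}{end}'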

So close, yet so far.

Right for you?

There are a lot of reasons to run your own cluster – generally speaking, if you're doing so, you will be pretty comfortable with the Kubernetes ecosystem as a whole. This will just be "another tool" you could add, and you should apply the same consideration you give every other service you consider using. In my opinion, the biggest benefit of the service is the simplified, centralised management plane across multiple clusters. This allows me to manage my own (albeit short lived) AKS clusters and my desk cluster with centralised policy enforcement, RBAC and monitoring. I would probably look to implement it if I was running my datacenter cluster, and definitely if I was looking to migrate to AKS in the future. If you are considering it, keep in mind a few caveats:

  1. The Arc Service is still in preview – expect a few bumps as the service grows
  2. Currently only available in EastUS & WestEurope – You might be stuck for now if operating under data residency requirements.

At this point in time, I'll content myself with my local cluster. Perhaps I'll publish a future blog post if I manage to work through all these architecture issues. Until next time, stay cloudy!

Security Testing your ARM Templates

In medicine there is a saying: "an ounce of prevention is worth a pound of cure". What this concept boils down to for health practitioners is that engaging early is often the cheapest and simplest method for preventing expensive and risky health scenarios. It's a lot cheaper and easier to teach school children about healthy foods and exercise than to complete a heart bypass operation once someone has neglected their health. Importantly, this concept extends to multiple fields, with cyber security being no different.
Since the beginning of cloud, organisations everywhere have seen explosive growth in infrastructure provisioned into Azure, AWS and GCP. This growth all too often corresponds with an increased security workload, without the required budgetary and operational capability increases. In the quest to increase security efficiency and reduce workload, this is a critical challenge. Once a security issue hits your CSPM, Azure Security Centre or Amazon Inspector dashboard, it's often too late; the security team now has work to complete within a production environment. Infrastructure as Code security testing is a simple addition to any pipeline which will reduce the security team's workload!

Preventing this type of incident is exactly why we should complete BASIC security testing.

We’ve already covered quality testing within a previous post, so today we are going to focus on the security specific options.

The first integrated option for ARM templates is easily the Azure Secure DevOps Kit (AzSK for short). The AzSK has been around for a while and is published by the Microsoft Core Services and Engineering division. It provides governance, security IntelliSense and ARM template validation capability, for free. Integrating it into your DevOps pipelines is relatively simple, with pre-built connectors available for Azure DevOps and a PowerShell module for local users to test with.
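Running the ARM Template Checker locally is a one-liner once the module is installed. A sketch, assuming your templates live in a ./templates folder – check the module documentation if the parameter names have shifted between versions:

# Install the AzSK module and scan a folder of ARM templates
Install-Module AzSK -Scope CurrentUser
Get-AzSKARMTemplateSecurityStatus -ARMTemplatePath ".\templates" -Recurse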

Another great option for security testing is Checkov from Bridgecrew. I really like this tool because it provides over 400 tests spanning AWS, GCP, Azure and Kubernetes. The biggest drawback I have found is the export configuration – Checkov exports JUnit test results, however if nothing is applicable for a specified template, no tests will be displayed. This isn't a huge deal, but it can be annoying if you prefer to see consistent tests across all infrastructure.

The following snippet is all you really need if you want to import Checkov into an Azure DevOps pipeline & start publishing results!

  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.7'
      addToPath: true
    displayName: 'Install Python 3.7'
  
  - script: python -m pip install --upgrade pip setuptools wheel
    displayName: 'Install pip3'

  - script: pip3 install checkov
    displayName: 'Install Checkov using pip3'

  - script: checkov -d ./${{parameters.iacFolder}} -o junitxml -s >> checkov_sectests.xml
    displayName: 'Security test with Checkov'

  - task: PublishTestResults@2
    displayName: Publish Security Test Results (Checkov)
    condition: always()
    inputs:
      testResultsFormat: JUnit
      testResultsFiles: '**sectests.xml'

When to break the build & how to engage

Depending on your background, breaking the build can seem like a really negative thing. After all, you want to prevent these issues getting into production, but you don't want to be a jerk. My position is that security practitioners should NOT break the build for cloud infrastructure testing within dev, test and staging. (I can already hear the people who work in regulated environments squirming at this – but trust me, you CAN do this.) While integrating tools like this is definitely an easy way to prevent vulnerabilities or misconfigurations from reaching these environments, the goal is to raise awareness, not to increase negative perceptions.

Security should never be the first team to say no in pre-prod environments.

Use the results of any tools added into a pipeline as a chance to really evangelize security within your business. Yelling something like “Exposing your AKS Cluster publicly is not allowed” is all well and good, but explaining why public clusters increase organisational risk is a much better strategy. The challenge when security becomes a blocker is that security will no longer be engaged. Who wants to deal with the guy who always says no? An engaged security team has so much more opportunity to educate, influence and effect positive security change.

Don’t be this guy.

Importantly, engaging well within dev/test/SIT and not being that jerk who says no grants you a magical superpower: when you do say no, people listen. When warranted, go ahead and break the build – that CVSS 10.0 vulnerability definitely isn't making it into prod. Even better, that vuln doesn't make it to prod WITH the support of your development and operational groups!

Hopefully this post has given you some food for thought on security testing, until next time, stay cloudy!

Note: Forrest Brazeal really has become my favourite tech related comic dude. Check his stuff out here & here.

Using Azure AD Access Packages in B2B scenarios

With the advent of modern collaboration platforms, users are no longer content to work within the organisational boundary. More and more organisations are being challenged to bring in external partners and users for projects and day to day operations. But how can we do this securely? How do IT managers minimise licensing costs? Most importantly, how can we empower the business to engage without IT? This problem is at the forefront of the thinking behind Azure AD Access Packages: an Azure solution enabling self-service onboarding of partners, providers and collaborators at scale. Even better, this solution enables both internal and external onboarding. You can and should set this up internally – the less work IT has to do managing access, the better, right?

Before we dig too deep, I think a brief overview of how access packages are structured would be useful. At a hierarchy level, packages are placed into catalogs, which can be used to enable multiple packages for a team to use. Each package holds resources, and a policy defines the who and when of requesting access. The below diagram from Microsoft neatly sums this up.

Entitlement management overview
Access Package Hierarchy

This all sounds great, I hear you saying. So what does this look like? If you have an Office 365 account, you're welcome to log in and look for yourself here; otherwise, a screenshot will have to do.

External Access Package UI

To get started with this solution, you will need an Azure AD P2 licensed tenant. Most organisations will obtain P2 licences through an M365 E5 subscription; however, you can purchase them directly if you have M365 E3 or lower and are looking to avoid some costs. You will need at least a 1:1 licence assignment for internal use cases, while external identity has recently moved to a "Monthly Active Users" licensing model. One P2 licence in your tenant will cover the first 50 thousand external users for free!

Once you’ve enabled this, head on over to the “Identity Governance” blade within Azure AD. This area has a wealth of functionality that benefits nearly all organisations, so I would highly recommend investigating the other items available here. Select Access Packages to get started.

The UI itself for creating an access package is quite simple; clicking create new will walk you through a process of assigning applications, groups, teams and SharePoint sites.

Access Package creation UI

Unfortunately some services like Windows Virtual Desktop will not work with access packages, however this is a service limitation rather than an Azure AD limitation. Expect these challenges to be resolved over time.

At the time of writing, the AzureADPreview module does not support Access Packages. Microsoft Graph beta does, however, so here is an MS Graph based script!

While all this PowerShell might look a bit daunting, all that is being done is generating API request bodies and pushing them through six basic API calls (a cut-down sketch of the first two follows the list):

  1. Retrieve information about our specified catalog (General)
  2. Create an Access Package
  3. Add each resource to our specified catalog
  4. Get each resource’s available roles
  5. Assign the resource & role to our Access Packages
  6. Create a Policy which enables assignment of our Access Package
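As a flavour of what those calls look like, here is a cut-down sketch of the first two using Invoke-RestMethod – the catalog and package names are examples only, and the beta endpoints may change over time:

# Assumes a valid Graph access token in $token with entitlement management permissions
$headers = @{ Authorization = "Bearer $token"; "Content-Type" = "application/json" }
$graph   = "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement"

# 1. Retrieve the target catalog
$catalog = (Invoke-RestMethod -Method Get -Headers $headers `
    -Uri "$graph/accessPackageCatalogs?`$filter=displayName eq 'General'").value[0]

# 2. Create an access package inside that catalog
$body = @{
    displayName = "Partner Collaboration"
    description = "Access for external project partners"
    catalogId   = $catalog.id
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Headers $headers -Uri "$graph/accessPackages" -Body $body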

Hopefully this article has provided you with a decent overview of Azure AD Access Packages. There are a lot of benefits when applying this in B2B scenarios, especially when it comes to automating user onboarding & access management. With big investments & changes from Microsoft occurring in this space, expect further growth & new features as the year comes to a close!

Please note: While we do distribute an access package link within this blog, requests for access are not sent to a monitored email address and will not be approved. If you would like to know more, please don't hesitate to reach out as we would be happy to help.

Azure AD Administrative Units – Preview!

Recently I was approached by a customer regarding a challenge they wanted to solve: how to delegate administrative control of a few users within Azure Active Directory to some lower level administrators? This is a common problem experienced by teams as they move to cloud based directories – a flat structure doesn't really allow for delegation based on business rules. Enter Azure AD Administrative Units, a preview feature enabling delegation and organisation of your cloud directory. For Active Directory administrators, this will be quite a familiar experience, akin to Organisational Units and delegating permissions. Okta also has similar functionality, albeit implemented differently.

Active Directory Admins will immediately feel comfortable with Azure AD Admin Units

So when do you want to use this? Basically, any time you find yourself wanting a hierarchical, structured directory. While still in preview, this feature will likely grow over time to support advanced RBAC controls, and in the interim it is quite an elegant way to delegate directory access.

Setting up an Administrative Unit

Setting up an Administrative Unit is quite a simple task within the Azure portal: navigate to your Azure AD portal and locate the option under Manage.

Select Add, and provide your required names and roles. Admin assignment is focused on user and group operations, as device administration has similar capability under custom Intune roles, and application administrators can be managed via specific roles.

You can also create administrative units using the Azure AD PowerShell Module; A simple one line command will do the trick!

New-AzureADAdministrativeUnit -Description "Admin Unit Blog Post" -DisplayName "Blog-Admin-Users"

User Management

Once you have created an administrative unit, you can begin to add users and groups. At this point in time, administrative units only support manual assignment, either one by one or via CSV upload. The process itself is quite simple: select Add user and click through everyone you would like to be included.

While this works quite easily for small setups, at scale you would likely find this to be a bit tedious. One way to work around this is to combine Dynamic Groups with your chosen PowerShell execution environment – for me, this is an Automation Account. First, configure a dynamic group which automatically drags in your desired users.
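The membership rule can be as simple as a single attribute match – the attribute and value here are purely illustrative:

(user.department -eq "Service Desk")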

Next, execute the following PowerShell snippet. Note that I am using the Azure AD Preview module, as support is yet to move to the production module.

https://gist.github.com/jameswestall/832549f95ac7caac80a1f6c74fef1931
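The gist above does the heavy lifting, but if you just want the shape of the approach, a rough equivalent looks like this – the group and Administrative Unit names are examples, and cmdlet behaviour may differ slightly across AzureADPreview versions:

# Sync a dynamic group into an Administrative Unit, treating the group as authoritative
Import-Module AzureADPreview
Connect-AzureAD

$adminUnit = Get-AzureADAdministrativeUnit -Filter "DisplayName eq 'Blog-Admin-Users'"
$group     = Get-AzureADGroup -Filter "DisplayName eq 'Dynamic-Admin-Users'"

$desired = Get-AzureADGroupMember -ObjectId $group.ObjectId -All $true
$current = Get-AzureADAdministrativeUnitMember -ObjectId $adminUnit.ObjectId

# Add anyone in the dynamic group who is missing from the Administrative Unit
foreach ($user in $desired | Where-Object { $_.ObjectId -notin $current.ObjectId }) {
    Add-AzureADAdministrativeUnitMember -ObjectId $adminUnit.ObjectId -RefObjectId $user.ObjectId
}

# Remove members no longer present in the dynamic group
foreach ($member in $current | Where-Object { $_.ObjectId -notin $desired.ObjectId }) {
    Remove-AzureADAdministrativeUnitMember -ObjectId $adminUnit.ObjectId -MemberId $member.ObjectId
}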

This can be configured on a schedule as frequently as you need this information to be accurate!

You will note here that one user gets neatly removed from the Administrative Unit – this is because the above PowerShell treats the dynamic group as an authoritative source for Admin Unit membership. When dealing with assignment through user details (lifecycle management), I find that selecting authoritative sources reduces both work effort and confusion. Who wants to do manual management anyway? Should you want to allow manual additions, simply remove the line marked to remove members!

Hopefully you find this post a useful insight into the usage of Administrative Units within your organisation. There are a lot of useful scenarios where this can be leveraged, and this feature should most definitely help you minimise administrative privilege in your environment (hooray!). As always, feel free to reach out with any questions or comments! Stay tuned for my next post, where I will be diving into Azure AD Access Packages 🙂