Azure Spring Clean – 5 Tips to help you align to Enterprise Scale

Azure Spring Clean. Easily one of my favourite Azure events of the year. I spend a lot of my time helping organisations clean up their Azure tenancies, so even though I'm writing this as Australia enters autumn, I'm super pumped to take you through my contribution for 2022: five tips to help you start your own Enterprise Scale journey, today.

For those who haven’t heard of Enterprise Scale Landing Zones (ES) before – it’s a bloody straightforward concept. Microsoft has developed several Azure best practices through the years, with these being reflected in the Cloud Adoption and Well-Architected Frameworks. Enterprise Scale is guidance on how best to use these techniques in your environment.

This article will take you through five tips for customers who already have an Azure deployment, albeit one not yet aligned to the ES reference architectures. Microsoft also provides guidance on this process here. Let’s dive right in!

1. Understand the right reference architecture for you!

While Enterprise Scale (ES) is generic in implementation, every organisation is unique. As such, Microsoft has provided multiple options for organisations considering ES. Factors such as your size, growth plans or team structure will all influence your design choices. The first tip is pretty simple – Understand where you currently are, compared to the available architectures.

The four reference architectures that Microsoft provides for ES are:

Each Enterprise Scale pattern builds in capability

Note: The ES reference architectures that Microsoft provides here aren’t the only options; Cloud Adoption Framework clearly allows for “Partner Led” implementations which are often similar or a little more opinionated. Shameless Plug 😉 Arinco does this with our Azure Done Right offering.

2. Implement Management Groups & Azure Policy

Once you have selected a reference architecture, you then need to begin aligning. This can be challenging, as you’re more than likely already using Azure in anger. As such, you want changes with minimal effort but a high return on investment. Management Groups and Azure Policy are without a doubt the clear winners here, even for single-subscription deployments.

Starting with a simple Management Group structure is easy, and allows you to segment subscriptions as you grow and align. Importantly, Management Groups will help you to target Azure Policy deployments.

Production/Development is an easy line to draw, but it’s really up to you. In the below plan, I’ve segmented Prod and Dev, Platform and Landing Zone, and finally individual products. Use your own judgement as required. A word from the wise: don’t go too crazy, as you can continue to segregate with subscriptions and resource groups.

Once you’ve set up Management Groups, it’s time to limit any future re-work and minimise the effort for changes. Azure Policy is perfect for this, and you should create a Policy initiative which enforces your standards quickly. Some examples of where you might apply policy are:

If you haven’t spent much time with Azure Policy, the AWESOME-Azure-Policy repository maintained by Jesse Loudon has become an amazing source for anything you would want to know here!
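If you want to stand the basics up quickly, the Az PowerShell module will do it in a few lines. The sketch below creates a Production management group, moves a subscription into it and assigns the built-in “Allowed locations” policy; the group name, subscription ID and locations are placeholders, and parameter names and object shapes can vary slightly between Az module versions.

# Minimal sketch using the Az PowerShell module - names and IDs are placeholders
Connect-AzAccount

# Create a management group for production workloads under the tenant root
New-AzManagementGroup -GroupName 'mg-production' -DisplayName 'Production'

# Move an existing subscription under the new management group
New-AzManagementGroupSubscription -GroupName 'mg-production' -SubscriptionId '00000000-0000-0000-0000-000000000000'

# Assign a built-in policy ("Allowed locations" in this example) at management group scope
# Note: older/newer Az versions expose policy properties slightly differently
$definition = Get-AzPolicyDefinition -Builtin | Where-Object { $_.Properties.DisplayName -eq 'Allowed locations' }
New-AzPolicyAssignment -Name 'allowed-locations' `
  -Scope '/providers/Microsoft.Management/managementGroups/mg-production' `
  -PolicyDefinition $definition `
  -PolicyParameterObject @{ listOfAllowedLocations = @('australiaeast', 'australiasoutheast') }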

3. Develop repeatable landing zones to grow in.

The third tip I have is probably the most important for existing deployments. Most commonly, non-ES organisations operate in a few monolithic subscriptions, sometimes with a few resource groups to separate workloads. In the same way that microservices allow development teams to iterate on applications faster, Landing Zones allow you to develop capability on Azure faster.

A Landing Zone design differs slightly by organisation, depending on which reference architecture you selected and your business requirements.

Some things to keep in mind for your LZ design pattern are:

  • How will you network each LZ?
  • What security and monitoring settings are you deploying?
  • How will you segment resources in a LZ? Single Resource Group or Multiple?
  • What cost controls do you need to apply?
  • What applications will be deployed into each LZ?
A Microsoft Example LZ design

There’s one common consideration that I’ve intentionally left off the above list:

  • How will you deploy a LZ?

The answer for this should be: always as code. Using ARM Templates, Bicep, Terraform, Pulumi or any IaC tooling allows you to quickly deploy a new LZ in a standardised pattern. Microsoft provides some excellent reference ARM templates here and Terraform here to demonstrate exactly this process!
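As a rough illustration (not the official reference implementation itself), kicking off a landing zone deployment at management group scope from PowerShell might look like the sketch below; the template file, management group ID and parameters are placeholders for whatever your chosen reference templates expect.

# Hypothetical example - substitute your own landing zone template, scope and parameters
New-AzManagementGroupDeployment `
  -Name "lz-$(Get-Date -Format 'yyyyMMddHHmm')" `
  -ManagementGroupId 'mg-landingzones' `
  -Location 'australiaeast' `
  -TemplateFile './landing-zone.bicep' `
  -TemplateParameterObject @{
    subscriptionDisplayName = 'sub-workload-01'   # placeholder parameters
    environment             = 'prod'
  }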

4. Uplift security with Privileged Identity Management (PIM)

I love PIM. It’s without a doubt my favourite service in Azure. If you haven’t heard of PIM before (how?), it focuses on granting approved administrative access only within a time-boxed period. This works by automatically removing administrative access when not required, and requiring approval with strong authentication to re-activate it. You can’t abuse an administrator account that has no admin privileges.

While the Enterprise Scale documentation doesn’t harp on the benefits of PIM, the IAM documentation makes it clear that you should be considering your design choices, and that’s why using PIM is my fourth tip.

I won’t deep dive into the process of using PIM; the eight steps you need are already documented. What I will say is: spend the time to onboard each of your newly minted landing zones, and then begin to align your existing subscriptions. This process will give you a decent baseline of access to compare against when minimising ongoing production access.

5. Minimise cost by sharing platform services

Cost is always something to be conscious of when operating on any cloud provider, and my final tip focuses on the hip pocket for that reason. Once you are already factoring things like reserved instances, right-sizing or charge-back models into your landing zones, this tip can really let you eke the most out of a limited cloud spend. That being said, it also requires a high degree of maturity within your operating model: you must have a strong understanding of how your teams are operating and deploying to Azure.

Within Azure, there is a core set of services which provide a base capability you can deploy on top of. Key items which come to mind here are:

  • AKS Clusters
  • App Service Plans
  • API Management instances
  • Application Gateways

Once you have a decent landing zone model and Enterprise Scale alignment, you can begin to share certain services. Take the below diagram as an example. Rather than building a separate plan for each app service or function, a shared App Service plan helps to reduce the operating cost across all of the resources. In the same way, a platform team might use the APIM DevOps Toolkit to provide a shared APIM instance.

Note that multiple different functions are using the same app service plan here.
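To make the consolidation concrete, here’s a minimal sketch (resource names and SKU are placeholders) that creates one App Service plan and drops two web apps onto it, rather than paying for a plan per app.

# One shared plan, multiple apps - resource names and SKU are placeholders
$rg       = 'rg-shared-platform'
$location = 'australiaeast'

New-AzResourceGroup -Name $rg -Location $location

# A single Premium plan owned by the platform team
New-AzAppServicePlan -ResourceGroupName $rg -Name 'asp-shared-01' -Location $location -Tier 'PremiumV2' -NumberofWorkers 2 -WorkerSize 'Small'

# Multiple workloads share the same plan, reducing per-app cost
New-AzWebApp -ResourceGroupName $rg -Name 'app-orders-api' -Location $location -AppServicePlan 'asp-shared-01'
New-AzWebApp -ResourceGroupName $rg -Name 'app-invoicing-api' -Location $location -AppServicePlan 'asp-shared-01'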

Considering this capability model when you develop your alignment is an easy way to minimise the work required to move resources to a new Enterprise Scale deployment. In my opinion, consolidating Kubernetes pods or APIM APIs is a lot easier than moving clusters or Azure resources between landing zones.

Note: While technically possible, try to avoid sharing IaaS virtual machines. This does save cost, but encourages using the most expensive Azure compute. You want to push engineering teams towards cheaper and easier PaaS capabilities where possible.

Final Thoughts

Hopefully you have found some value in this post and my tips for Enterprise Scale alignment. I’m really looking forward to seeing some of the community generated content. Until next time, stay cloudy!

GitHub Advanced Security – Exporting results using the REST API

Recently while working on a code uplift project with a customer, I wanted a simple way to analyse our Advanced Security results. While the GitHub UI provides easy methods to do basic analysis and prioritisation, we wanted to complete our reporting and detailed planning off platform. This post will cover the basic steps we followed to export GitHub Advanced Security results to a readable format!

Available Advanced Security API Endpoints

GitHub provides a few API endpoints for Code Scanning which are important for this process, with the following used today:

This post will use PowerShell as our primary export tool, but reading the GitHub documentation carefully should get you going in your language or tool of choice!

Required Authorisation

As a rule, all GitHub API calls should be authenticated. While you can implement a GitHub application for this process, the easiest way is to use an authorised Personal Access Token (PAT) for each API call.

To create a PAT, navigate to your account settings, then to Developer Settings and Personal Access Tokens. Exporting Advanced Security results requires the security_events scope, shown below.

The PAT scope required to export Advanced Security results

Note: Organisations which enforce SSO will require a secondary step where you log into your identity provider, like so:

Authorising for an SSO enabled Org

Now that we have a PAT, we need to build the basic authorisation API headers as per the GitHub documentation.

  # Build a Basic authentication header from your username and PAT
  $GITHUB_USERNAME = "james-westall_demo-org"
  $GITHUB_ACCESS_TOKEN = "supersecurepersonalaccesstoken"

  $credential = "${GITHUB_USERNAME}:${GITHUB_ACCESS_TOKEN}"
  $bytes = [System.Text.Encoding]::ASCII.GetBytes($credential)
  $base64 = [System.Convert]::ToBase64String($bytes)
  $basicAuthValue = "Basic $base64"
  $headers = @{ Authorization = $basicAuthValue }

Exporting Advanced Security results for a single repository

Once we have an appropriately configured auth header, calling the API to retrieve results is really simple! Set your values for API endpoint, organisation and repo and you’re ready to go!

  # Target a single repository
  $HOST_NAME = "api.github.com"
  $GITHUB_OWNER = "demo-org"
  $GITHUB_REPO = "demo-repo"

  # -FollowRelLink walks the pagination links so we get every alert, not just the first page
  $response = Invoke-RestMethod -FollowRelLink -Method Get -UseBasicParsing -Headers $headers -Uri https://$HOST_NAME/repos/$GITHUB_OWNER/$GITHUB_REPO/code-scanning/alerts

  # Flatten the paged result into a single collection
  $finalResult = @()
  $finalResult += $response | ForEach-Object { $_ }

The above code is pretty straightforward, with the URL being built by providing the “owner” and repo name. One thing we found a little unclear in the doco was who the owner is. For a personal public repo this is obvious, but for our GitHub EMU deployment we had to set this as the organisation instead of the creating user.
Once we have a URI, we call the API endpoint with our auth headers for a standard REST response. Finally, we parse the result to a nicer object format (due to the way the Invoke-RestMethod -FollowRelLink parameter works).

The outcome we quickly achieve using the above is a PowerShell object which can be exported to parsable JSON or CSV formats!

Exported Advanced Security Results
Once you have a PowerShell Object, this can be exported to a tool of your choice
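From here, shaping the output is standard PowerShell. A minimal sketch might look like the following; the selected properties assume the standard code scanning alert schema.

  # Flatten the fields we care about and export for off-platform analysis
  $finalResult |
    Select-Object number, state, created_at,
      @{ n = 'rule';     e = { $_.rule.id } },
      @{ n = 'severity'; e = { $_.rule.severity } },
      @{ n = 'tool';     e = { $_.tool.name } },
      @{ n = 'url';      e = { $_.html_url } } |
    Export-Csv -Path './advanced-security-results.csv' -NoTypeInformation

  # Or keep the full objects as JSON
  $finalResult | ConvertTo-Json -Depth 10 | Out-File './advanced-security-results.json'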

Exporting Advanced Security results for an entire organisation

Depending on the scope of your analysis, you might want to export all the results for your GitHub organisation. This is possible; however, it does require elevated access, meaning your account must be an administrator or security administrator for the org.

  # Target every repository in the organisation (requires elevated access)
  $HOST_NAME = "api.github.com"
  $GITHUB_ORG = "demo-org"

  $response = Invoke-RestMethod -FollowRelLink -Method Get -UseBasicParsing -Headers $headers -Uri https://$HOST_NAME/orgs/$GITHUB_ORG/code-scanning/alerts

  # Flatten the paged result into a single collection
  $finalResult = @()
  $finalResult += $response | ForEach-Object { $_ }
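With the org-wide data in hand, a quick severity breakdown per repository is a one-liner. This sketch assumes the same alert schema as above; the org endpoint also returns a repository object on each alert.

  # Quick prioritisation view: open alerts grouped by repository and severity
  $finalResult |
    Where-Object { $_.state -eq 'open' } |
    Group-Object { $_.repository.full_name }, { $_.rule.severity } |
    Sort-Object Count -Descending |
    Select-Object Count, Name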

Securing Privileged Access with Azure AD (Part 3) – Hybrid Scenarios

While many organisations are well on the journey to exclusively operating in the cloud, the reality is that most companies operate in a hybrid state for an extended period of time. As such, we cannot apply all of our Privileged Access effort to securing only the cloud. In this post, I’ll walk you through three simple methods which allow you to extend Azure AD capability into an on-premises environment, with the support of key “legacy” technology. If you’re just joining us for this series, head over to part one to learn about strategy, or part two for Azure AD Basics!

1. Reducing privileged access on premise with PIM

One of the challenges that many organisations perceive with PIM is that it doesn’t extend to on-premises services. This perception is wrong. Yes, PIM itself doesn’t have native on-premises capability, but it is extremely simple to consume PIM-managed groups within an on-premises environment. This can be done in two ways:

  1. Custom group write-back using Microsoft Identity Manager (MIM)
  2. Automation write-back using a script, automation account or Logic App

Both of these options require a pragmatic approach to trade-offs. For MIM group write-back, precise time-bound access doesn’t really work; MIM generally syncs on a pre-defined schedule, so you would need to configure PIM activation lifespans to cater for this, leaving some wriggle room on either side of the PIM window. Conversely, some companies prefer not to run custom-built integration, so sync scripts running on your own schedule are off the table.

Thankfully, the community has put some excellent effort into this space, with by far the best example of this being the goodworkaround write-back script.

Sync Privileged Access from Azure to Active Directory with custom scripts.
Visualisation of the Hybrid scenario. Source: github.com
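If you’d rather understand the moving parts before adopting the community script, the core reconciliation loop is small. The sketch below is a rough illustration only, using the Microsoft Graph and ActiveDirectory PowerShell modules with placeholder group names; it mirrors the current members of a PIM-managed cloud group into an on-premises AD group, and assumes those members are all users. Run it on a schedule that fits inside your PIM activation window.

# Rough sketch: mirror a PIM-managed Azure AD group into an on-premises AD group
# Assumes the Microsoft.Graph and ActiveDirectory modules are installed
Connect-MgGraph -Scopes 'GroupMember.Read.All', 'User.Read.All'

$cloudGroupId = '00000000-0000-0000-0000-000000000000'   # PIM-managed group (placeholder)
$adGroupName  = 'SRV-OnPrem-Admins'                       # target AD group (placeholder)

# Resolve cloud members to their on-premises sAMAccountNames (assumes members are users)
$cloudMembers = Get-MgGroupMember -GroupId $cloudGroupId -All | ForEach-Object {
    (Get-MgUser -UserId $_.Id -Property onPremisesSamAccountName).OnPremisesSamAccountName
} | Where-Object { $_ }

$adMembers = Get-ADGroupMember -Identity $adGroupName | Select-Object -ExpandProperty SamAccountName

# Add anyone activated in PIM, remove anyone whose activation has expired
$toAdd    = $cloudMembers | Where-Object { $_ -notin $adMembers }
$toRemove = $adMembers    | Where-Object { $_ -notin $cloudMembers }

if ($toAdd)    { Add-ADGroupMember    -Identity $adGroupName -Members $toAdd }
if ($toRemove) { Remove-ADGroupMember -Identity $adGroupName -Members $toRemove -Confirm:$false }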

2. Forcing MFA for administrative access using Windows Admin Center

Regardless of how you choose to manage group membership for administrative access, sometimes the simplest security control is the best one. MFA is by far the most effective control you can apply to admin logins.

But how to achieve this? Unfortunately, Windows Server still doesn’t include native support for Azure AD MFA inside the RDP UI (some third-party products like Duo or Okta have solutions for this). Sure, this is a bit of a bummer, but let’s be honest: direct RDP access to a server should NOT be required in the majority of scenarios. This is for two reasons:

  • Infrastructure as Code – If you’re able to configure a server to be replaced by a pipeline, you should. Maintenance and incident remediation is a lot easier when you can simply replace the infrastructure at the click of a button, without ever logging in.
  • Remote shell – You can do pretty much anything from the command line or PowerShell these days. In my opinion, RDP by default isn’t worth the security hassle. Restrict RDP usage and move to the CLI.

If you’re not comfortable in this space, or would just like an excellent solution which lets you monitor and configure multiple servers, Microsoft provides a world-class solution for remote management: Windows Admin Center (WAC). In my opinion, this is highly under-utilised and a great addition to any IT Pro’s toolkit.

Thankfully, Windows Admin Center has native support for Azure AD authentication. Using Conditional Access, you can then apply MFA to admin access.

Managing server Privileged Access with Windows Admin Centre

Configuring this within WAC is a straightforward task, with the settings for Azure AD authentication available under the “Settings > Access” blade:

Once enabled, you will be able to locate an Admin Center application within your Azure AD Tenant, which you can utilise to scope a targeted Conditional Access policy.



For this capability to truly be effective, you can also combine the WAC solution with an RD Gateway for RDP scenarios. Because RD Gateways operate using a Connection Authorisation Policy with NPS, you can quickly apply MFA to user sessions with the NPS extension. Be warned, this does add a small configuration overhead and occasionally a “double auth” scenario.

3. Extending Azure AD to networking infrastructure using SSO Integration or Network Policy Server

A lot of focus is generally exerted by IT teams on securing server infrastructure. But what about the network? As discussed in our strategy post, a holistic approach to privileged access includes all the solutions you manage. As the network carries all traffic for your solutions, some security practitioners will argue that securing this access is more important than securing the infrastructure!

Because networking infrastructure is so diverse, you can generally enhance network privileged access security in two distinct ways:

  1. Integrate Azure AD into a centralised control plane. This requires standardising access through a network vendor solution.
  2. Integrate networking devices with Azure AD via RADIUS. This requires support for specific RADIUS protocols on your network devices.

The first option is, in my opinion, the best one. Nearly every networking vendor these days provides a centralised access control mechanism: Cisco has Identity Services Engine, Aruba uses ClearPass, Palo Alto uses Panorama, and the list goes on. Because these native tools integrate directly with access control for your networking appliances, it can be an extremely quick win to apply SSO via Azure AD and MFA via Conditional Access. You can then combine this with services like Privileged Identity Management (PIM) to manage access through approval flows and group claims. Each of your networking vendors will provide documentation for this.

The second option works in privileged access scenarios where you don’t have a centralised control plane. Provided you can use the correct RADIUS protocols, admins can configure the Azure MFA extension for NPS, with RADIUS integration enabling MFA for your networking kit! In the below example, I use this to apply MFA to an SSH management interface on a Palo Alto firewall.

Managing Privileged Access for SSH using Radius and the MFA Extension

Up Next

Using the above three techniques, you very quickly end up with a potential architecture that might look like this.

Thanks for sticking with me through this week’s guidance on Hybrid Scenarios. If you’re after some more Privileged Access information, have a read of my AAD Basics guidance, or stay tuned for more info on what can be done using Azure AD, including some tips and techniques to help you stay in control. Topics this series is covering are:

  1. Strategy & Planning
  2. Azure AD Basics
  3. Hybrid Scenarios (This Post)
  4. Zero Trust
  5. Protecting Identity
  6. Staying in Control

Until next time, stay cloudy!

Originally Posted on arinco.com.au

Easy management of Github Wikis with Actions

Recently Arinco has made an internal move to GitHub Enterprise for some of our code storage. For the most part, this has been a seamless process. All of our code is platform agnostic, and we support customers using both Azure DevOps and GitHub already. While supporting this move, some consideration was made for how best to manage documentation. We’ve found the Azure DevOps wiki feature to be extremely useful: it provides a suitable UI for business users to modify documentation, while also enabling developer-friendly markdown. GitHub provides a similar capability using its own wiki feature.

On investigating the process for wiki usage within GitHub, we noticed an interesting difference to Azure DevOps – GitHub stores wiki files in a separate repo. This can be quickly seen when you navigate to the wiki tab and are presented with a second git URL to clone.

Our extra GitHub wiki repository
Another repo to manage? No thanks.

Now while this works in the same manner as Azure DevOps for developer/business scenarios, managing two repositories is annoying. Git does support adding the wiki as a submodule; however, developers are then required to complete a double commit, and to some, the submodule UI on GitHub is a bit clunky.

To solve this challenge, we turned to the community, specifically looking for a pre-canned GitHub Action.
Thankfully this isn’t a new complaint from the community, and SwiftDoc had already created an action. After setting up a PAT and running a couple of tests with this, we found some behaviour annoying on the developer side; specifically, files are only ever created, never deleted, and directory structure is not preserved. And so, we have a slightly modified action:


name: Update Wiki
on:
  workflow_dispatch:
  push:
    branches: [ main ]
    paths:
      - 'wiki/**'
jobs:
  Update-Wiki:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: update wiki data
        env:
          GH_PERSONAL_ACCESS_TOKEN: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
          WIKI_DIR: wiki
        run: |
          echo $GITHUB_ACTOR
          WIKI_COMMIT_MESSAGE='Automatically publish wiki'
          GIT_REPOSITORY_URL="https://${GH_PERSONAL_ACCESS_TOKEN}@${GITHUB_SERVER_URL#https://}/$GITHUB_REPOSITORY.wiki.git"
          echo "Checking out wiki repository"
          tmp_dir=$(mktemp -d -t ci-XXXXXXXXXX)
          (
              cd "$tmp_dir" || exit 1
              git init
              git config user.name $GITHUB_ACTOR
              git config user.email $GITHUB_ACTOR@users.noreply.github.com
              git pull "$GIT_REPOSITORY_URL"
              echo "Removing Files, ensuring deletion."
              rm -r -f ./*
          ) || exit 1

          echo "Copying contents of $WIKI_DIR"
          cp -R $WIKI_DIR "$tmp_dir"
          
          echo "Committing and pushing changes"
          (
              cd "$tmp_dir" || exit 1
              git add .
              git commit -m "$WIKI_COMMIT_MESSAGE"
              git push --set-upstream "$GIT_REPOSITORY_URL" master
          ) || exit 1

          rm -rf "$tmp_dir"

This action doesn’t really cater as well for the business/developer split (files created in the GUI will be deleted), but for us, this works just fine and isn’t annoying. Until next time, stay cloudy!

Enabling Password-less in Azure AD with Feitian security keys!

Recently I was lucky to receive some evaluation security keys from Feitian – one of the select companies currently providing Microsoft-tested FIDO2 hardware. I’ve always been passionate about enabling Windows Hello for Business, so the chance to go even more password-less was something I leaped at.

If you haven’t used a FIDO key before, the first things you will want to know are: what are they, how do I enable them, and how do I use them? Thankfully, the answers to these questions are pretty simple.

My new FEITIAN K40 and K26 security keys

What is a FIDO2 Key?

Most people reading this post will currently maintain some form of password. The key detail being that this is generally a single password, maybe with a few permutations (hats off to the one in every five people using a password manager). These passwords are never very good: hard to remember, simple to steal, easy to brute force, and ripe for massive re-use when stolen. FIDO is a solution to this nightmare.

FIDO as a concept is pretty easy to understand: you own a cryptographically verifiable key, and this is used to authenticate to your services. Because FIDO lets you physically own something holding your security info, organisations can generate long and complex secrets without you having to memorise them. Most importantly, this security data stays with you (sometimes even locked by your biometrics). Possession is nine-tenths of the law, and with FIDO it is much harder for malicious entities or hackers to break into your accounts.

As a protocol, FIDO has a fair bit of minutiae to understand. Microsoft provides an excellent summary within their existing password-less documentation. If you really enjoy long boring technical documents, the Technical Specs for FIDO2 from the FIDO alliance and W3C can be found here.

Enabling security key usage in Azure AD

Enabling this authentication method within Azure AD is a pretty straightforward process. If you have an older AAD tenant, you want to make sure that combined security registration is enabled (on by default since August 15th, 2020).

To do so, navigate to Azure AD user settings, and ensure the combined option is set to all users.

Enable the combined security info experience for users

Next, to enable security keys, navigate to Security > Authentication Methods.

By selecting FIDO2 Security Key, you can enable this authentication method for a select group or for all users. There isn’t any major penalty for enabling this for all users; however, if completing this task as part of a dedicated rollout, you may want to consider who should have a key, or whether to restrict the allowed AAGUIDs using data provided by your manufacturer.

Setting up and using a key

Now that we’ve enabled the service, it’s time for the part we are all keen for – actually using a key! To do so, plug your security key into your PC. Next, I would recommend installing the relevant configuration software for your device. This allows you to configure any settings available in your key.


In the case of my FEITIAN K26 key, I have an option to configure an extra biometric: my fingerprint. This is great, as I’m now protecting the access that this key grants with my unique properties!

Once you’ve configured your key settings, it’s time to connect this to Azure AD. Complete a login to the MyApps portal. From here, you can use the security settings page to add the FIDO key for use.

The Security Info Page – Select Add Method, then Security Key
Follow the Bouncing Ball to configure your key.

Once set up, your next login should present the option to sign in with a security key!

The Security Key User prompt!

A few weeks with my keys

After spending some time using these security keys, I’m thoroughly enjoying the change. They simplify login and provide me with a degree of confidence in my account security. As far as product feedback goes, I have had no issues with the FEITIAN keys, just personal niggles which will vary between users. The build quality is great, with the inbuilt loops letting me easily attach them to my keys for on-the-go use. I opted for two USB-C devices, which I surprisingly found challenging. As a Mac user, Apple has pushed a lot of my devices to USB-C; I thought I was all done with the older USB-A connector, but I didn’t really think of my corporate devices, meaning I wasn’t able to use the keys there. Form factor wise, the devices could be a bit smaller, with the larger keys being a little concerning when moving my laptop around – I was really worried I would snap one off. FEITIAN offers the K21 and K28 keys with a slimline build, so next time I might grab a pair of those!

A big Thank you to Della @ FEITIAN for the opportunity to test these keys, until next time – Stay cloudy!

Securing Privileged Access with Azure AD (Part 2) – The AAD Basics

Welcome back to my series on securing privileged access. In this post, I’m going to walk you through five basic concepts which will allow you to keep your identity secure when using Azure AD. If you missed part one on building your PAM strategy, head over here to learn about the rationale and mentality behind securing privileged access and why it should be part of your cybersecurity strategy.

1. Azure AD Groups

This might seem a bit simple, but if you’re not using group assignments wherever possible, you’re doing something wrong. Assigning applications, roles, resources and service ownership to a group makes everything easier when building a privileged access deployment. If you’re starting out, this is fairly easy to implement. If you’re already using Azure AD, an afternoon is all you need to convert the majority of role assignments to groups for Azure AD (your mileage may vary for Azure IAM!).

When assigning access, develop role and access groups with the following mantra in mind:

Mutually Exclusive, Collectively Exhaustive. (MECE)

This mantra will help you to nest groups together in a fashion that ensures your administrators have access to all the services they need. Take a help desk admin as an example: assign a group to each of the Helpdesk Administrator, Global Reader and Teams Communications Support Engineer roles, then nest a “Helpdesk Admin Users” group within each. As separate access assignments, the role groups are mutually exclusive; once the user group is nested within each of them, the access becomes collectively exhaustive. As an added benefit, applying the above MECE process to role group assignment will make some Identity Governance activities like Segregation of Duties easier!

Make the new group eligible for privileged access assignment
Assigning Privileged Access to Azure AD Groups requires you to enable role assignment on creation
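If you prefer to script this, a rough sketch with the Microsoft Graph PowerShell module follows. The group and role names are placeholders, the group must be created with role assignment enabled (it can’t be switched on later), and the sketch uses an active role assignment; you could equally make the group eligible through PIM instead.

# Sketch: create a role-assignable group and give it the Helpdesk Administrator role
Connect-MgGraph -Scopes 'Group.ReadWrite.All', 'RoleManagement.ReadWrite.Directory'

$group = New-MgGroup -DisplayName 'ROL-Helpdesk-Administrator' `
  -MailEnabled:$false -MailNickname 'rol-helpdesk-admin' `
  -SecurityEnabled -IsAssignableToRole

# Look up the role template rather than hard-coding its ID
$role = Get-MgDirectoryRoleTemplate | Where-Object { $_.DisplayName -eq 'Helpdesk Administrator' }

# Active assignment at tenant scope; swap for a PIM eligible assignment if preferred
New-MgRoleManagementDirectoryRoleAssignment -PrincipalId $group.Id `
  -RoleDefinitionId $role.Id -DirectoryScopeId '/'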

Pro Tip: Dynamic Groups are a great way to grant low-privileged access to business services and minimise operational overhead. However, you need to be aware of lateral movement paths: if users can edit the attribute that dynamic membership is tied to, they may be able to bypass your identity design.

2. Conditional Access (CA)

Easily the most effective identity security control for organisations to implement is Multi-Factor Authentication (MFA). Microsoft has made no secret of its opinion with regard to MFA, even touting that MFA prevents 99.9% of identity-based attacks.

In its simplest form, a Conditional Access rule applies a set of logic to each sign-in that occurs against Azure AD. Combine Conditional Access with ubiquitous application integration into Azure AD and you can secure a large number of applications with a single control.

Conceptual Conditional Access process flow
Conditional Access is a great solution for securing Privileged Access

If you’re wanting the fastest Conditional Access setup ever, apply the Multi-Factor Authentication sign-in control to All Users, for All Applications, on every sign-in.

While this would technically work, I wouldn’t recommend this approach, and the reason is simple: it degrades trust in your MFA setup. As security practitioners, we know that our users will slowly grow accustomed to an enforced behaviour. If you set up Conditional Access to prompt for MFA frequently without a clear scenario, you will very quickly find that MFA is almost useless, as users accept every MFA prompt they see without thought or consideration. If you don’t have time to configure Conditional Access properly, enable the Azure AD security defaults.

A better approach to Conditional Access is to define your scenarios. In the case of Privileged Access, you have a few critical scenarios where Conditional Access configurations should be applied. These are:

  1. MFA Registration from outside your operating country. Block this. Hackers shouldn’t be able to enroll MFA tokens for breached accounts.
  2. Login for Azure, Azure AD and integrated SaaS admin accounts. Require MFA and secure endpoints for all sessions.
  3. High risk logins. Block all or most of these events. Require a password reset by another administrator.
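As an illustration of scenario two above, a rough Microsoft Graph PowerShell sketch might look like the following. The included role is Global Administrator and the target app is Azure management; extend both lists to suit your own roles and applications, and keep new policies in report-only mode until you’ve validated the impact.

# Sketch only: require MFA for Global Administrators accessing Azure management
Connect-MgGraph -Scopes 'Policy.ReadWrite.ConditionalAccess'

$policy = @{
    displayName = 'CA - Require MFA for privileged Azure access'
    state       = 'enabledForReportingButNotEnforced'   # report-only while you validate
    conditions  = @{
        clientAppTypes = @('all')
        users          = @{ includeRoles = @('62e90394-69f5-4237-9190-012177145e10') }        # Global Administrator
        applications   = @{ includeApplications = @('797f4846-ba00-4fd7-ba43-dac1f8f63013') } # Azure management
    }
    grantControls = @{
        operator        = 'OR'
        builtInControls = @('mfa')
    }
}

New-MgIdentityConditionalAccessPolicy -BodyParameter $policy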

3. Split administrative accounts

For the security aficionados reading this post, the “minimal blast radius” concept should be quite familiar. For those of you newer to security, this concept focuses on the idea that one small breach should be isolated by default and not cause one big breach.

The easiest way to do this for Privileged Access is to split up your key administrator accounts: one admin for Azure AD, one admin for Active Directory and one admin for your external SaaS applications. A prominent recent example of this control not being applied was the Solorigate attack against SolarWinds customers. In this attack chain, an on-premises breach was used to compromise cloud administrator accounts using forged ADFS tokens. With segregated admin accounts, this attack would have been reduced in impact: you can’t log into a cloud-only global admin account with an ADFS token.

Microsoft recommends you separate admin accounts in hybrid environments

If you’re on the fence about this control because it may seem inconvenient for day to day operations, consider the following.

Good identity controls are automatic

As you spend more time investing into advanced identity capability, you will notice that operational overhead for identity should decrease. It might start out challenging, but over time you will rely less on highly privileged roles such as global administrator.

4. Configure and monitor break glass accounts

Setting up Privileged Access management is an important process, and perhaps one of the most critical steps within this process is having a plan for when things go wrong. It’s OK to admit it: everyone makes mistakes. Services have outages, or sometimes you just click the wrong button. A break glass account is your magical get-out-of-jail card for these exact scenarios. If you don’t spend the two minutes to set these up, you will definitely curse when you find them missing.

There are a couple of things you should keep in mind when creating break glass accounts. Firstly, how will this access be stored and secured? Organisations may opt to vault credentials in a password manager, print passwords for physical storage in a safe, or have two “keepers” who each retain half of the password (nuclear launch code style). In my opinion, the best option for break glass credentials is to go password-less. Spend the money and get yourself a FIDO2-compliant hardware key such as those from Yubico or Feitian. Store this hardware key somewhere safe and you’re home free: no password management overhead and hyper-secure sign-in for these accounts.

The second thing to keep in mind for break glass accounts is that they should NOT be used. As these accounts are generic (tied to the business and not a user), there isn’t always a way to attribute actions taken by a break glass account to a specific employee. This is a challenge for insider threat scenarios: if all your administrators have access to the account, how are you to know who intentionally deleted all your files with it when they had a really bad day?

Securely storing credentials for a break glass account is the first way you prevent this happening; the second is to alert on usage. If your business process somehow fails and the credentials leak, you get a rapid prompt that lets you know something may be going wrong.
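A native alerting rule in Azure AD or your SIEM is the better long-term answer, but even a scheduled script can provide that prompt. A rough sketch using the Microsoft Graph PowerShell module (the UPN is a placeholder):

# Sketch: flag any sign-in activity for the break glass account in the last 24 hours
Connect-MgGraph -Scopes 'AuditLog.Read.All'

$breakGlassUpn = 'breakglass@contoso.com'   # placeholder
$since = (Get-Date).ToUniversalTime().AddDays(-1).ToString('yyyy-MM-ddTHH:mm:ssZ')

$signIns = Get-MgAuditLogSignIn -All -Filter "userPrincipalName eq '$breakGlassUpn' and createdDateTime ge $since"

if ($signIns) {
    # Replace this with your paging/alerting integration of choice
    Write-Warning "Break glass account used $($signIns.Count) time(s) in the last 24 hours!"
    $signIns | Select-Object CreatedDateTime, IpAddress, AppDisplayName
}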

5. Privileged Identity Management

Azure AD Privileged Identity Management, PIM for short, focuses on applying approved administrative access within a time-boxed period. This works by automatically removing administrative access when not required, and requiring approval with strong authentication to re-activate the access. You can’t abuse an administrator account that has no admin privileges.

The PIM Process. Source: Robert Przybylski

Good PIM implementations are generally backed by strong business process. At the end of the day, identity is a people-centric technology, and sometimes real-world processes need to be considered. The following tips should help you design a decent PIM implementation, keeping in mind your key stakeholders.

  • Be pragmatic about Eligible vs Permanently assigned roles. Your corporate risk profile may allow some roles to be active all the time.
  • Have multiple approvers for each role. What if someone has a day off? You don’t want to block the business because you haven’t got an approver available.
  • Consider the time it takes to execute a common task. If admins have tasks which take two hours but need to re-activate a role every hour, you’re simply adding frustration to people’s days.
  • Build a data driven review process. PIM provides rich reporting on usage and activation of roles, so use this to remove or grant further access at a periodic interval.

Finally, notice how this last item is the only one of the five basics that explicitly mentions privileged access in its name? That’s because PIM provides the most benefit when used within a healthy and well-managed environment. In my opinion, taking the time to use your Azure AD P1 features before paying extra for an Azure AD P2 feature is the best approach. Consider the Microsoft guidance and your own strategy before making that decision, however.

Up Next

Thanks for sticking with me through this week’s guidance on Azure AD Basics. If you’re after some more Privileged Access information, have a read of my strategy guidance, or stay tuned for more info on what can be done using Azure AD, including some tips and techniques to help you stay in control. Topics this series is covering are:

  1. Strategy & Planning
  2. Azure AD Basics (This Post)
  3. Hybrid Scenarios
  4. Zero Trust
  5. Protecting Identity
  6. Staying in Control

Until next time, stay cloudy!

Securing Privileged Access with Azure AD (Part 1) – Strategy and Planning

I’ve been approached a few times recently on how best to govern and secure Privileged Access using the Microsoft stack. Often this conversation is with organizations who don’t have the need, budget or skillset to deploy a dedicated solution such as those from CyberArk, BeyondTrust or Thycotic. Understanding this, these organizations are looking to uplift security, but are pragmatic about doing it within the ecosystem they know and can manage. This series will focus on getting the most out of Azure AD, challenging your thinking on Azure AD capabilities and using the Microsoft ecosystem to extend into hybrid environments!

What is Privileged Access?

Before we dive too deep into the topic, it’s important to understand what exactly privileged access is. Personally, I believe that a lot of organizations look at this in the wrong light. The simplest way to expand your understanding is by asking two questions:

  1. If someone unauthorized to see or use my solution/data had the ability to do so, would the impact to my business be negative?
  2. If the above occurred, how bad would it be?

The first question really focuses on the core of privileged access: it is a special right you grant your employees and partners, with the implicit trust that it won’t be abused. This question is useful because it doesn’t just focus on administrative access (a pitfall which many organizations fall into); it also brings specialized access into scope. Question two is all about prioritizing the risk associated with each of your solutions. Understanding that intentional leakage of the organizational crown jewels matters more than someone merely being able to access a server will often allow you to be pragmatic with your focus in the early stages of your journey.

Access diagram showing the split between privileged and user access.
This Microsoft visual shows how user access & privileged access often overlap.

Building a Strategy

Understanding your strategy for securing privileged access is a critical task, and it should most definitely be distinct from any planning activities. Privileged access strategy is all about defining where to exert your effort over the course of your program. Having a short-term work effort aligned to a long-term light on the hill ensures that your PAM project doesn’t revisit covered ground.

To do this well, start by building an understanding of where your capabilities exist. Something as simple as location is adequate. For example, I might begin with: Azure Melbourne, Azure Sydney, Canberra datacenter and Unknown (SaaS & everything else).

From that initial understanding, you can begin to build out some detail, aligned to services or data. If you have a CASB service like Cloud App Security enabled, this can be a really good tool to gain insights into what is used within your environment. Following this approach, our location-based data suddenly expands to: Azure IaaS/PaaS resources, the Azure control plane, SaaS application X, the data platform (storage accounts) and Palo Alto firewalls.

This list of services and data can then be used to build a list of the access which users have against each service. For IaaS/PaaS and SaaS app X, we have standard users and administrators. ARM and the data platform overlap for admin access, but the data platform also has user access. Our networking admins have access to the Palo Alto devices, but this service is transparent to our users.

Finally, build a matrix of impact, using risks to the identity and likelihood of occurrence. Use this data to prioritize where you will exert your effort. For example: a breach of my SaaS administrator account for a region isn’t too dangerous, because I’ve applied a zero trust network architecture and you cannot access customer data or another region from the service in question, so I’ll move that access down in my strategy. My users with access to extremely business-sensitive data commonly click phishing emails, so I’ll move that access up in my strategy.

How to gauge impact easily: which version of the CEO would you be seeing if control of this privileged access was lost?
Source: Twitter

This exercise is really important, because we have now begun to build our understanding of where the value is. Based on this, a short PAM strategy could be summarized into something like so:

  1. Apply standard controls for all privileged users, decreasing the risk of account breach.
  2. Manage administrative accounts controlling identity, ensuring that access is appropriate, time-bound and audited.
  3. Manage user accounts with access to key data, ensuring that key access is appropriate, reviewed regularly and monitored for misuse.
  4. Manage administrative accounts controlling infrastructure with key data.
  5. Apply advanced controls to all privileged users, enhancing the business process aligned to this access.
  6. Manage administrative accounts with access to isolated company data (no access from service to services).

My overarching light on the hill for all of this could be summarized as: “Secure my assets, with a focus on business-critical data, enhancing the security of ALL assets in my organization.”

Planning your Solutions

After you have developed your strategy, it’s important to build a plan for how to implement each strategic goal. This is really focused on each building block you want to apply and the technology choices you are going to make. Notice how the above strategy did not focus on how we were going to achieve each item. My favourite bit about this process is that everything overlaps! Developing good controls in one area will help secure another, because identity controls generally cover the whole user base!

The easiest way to plan solutions is to build out a controls matrix for each strategic goal. As an example,

Apply Standard Controls for all privileged users

Could very quickly be mapped out to the following basic controls:

Solution | Control | Purpose
Conditional Access | Multi-Factor Authentication | Works to prevent password spray, brute force and phishing attacks. High quality MFA design combined with attentive users can prevent 99.9% of identity-based attacks.
Conditional Access | Sign-In Geo Blocking | Administration should be completed only from our home country. Force this behaviour by blocking access from other locations.
Azure AD Password Protection | Password Policy | While we hope that our administrators don’t use Summer2021 as a password, we can sleep easy knowing this will be prevented by a technical control.

These control mappings can be as complex or as simple as needed. As a general recommendation, starting small will allow you to aggressively achieve high coverage early. From there you can re-cover the same area with deeper and advanced controls over time. Rinse and repeat this process for each of your strategic goals. You should quickly find that you have a solution for the entire strategy you developed!

Up Next

If you’ve stuck with me for this long, thank you! Securing Privileged Access really is a critical process for any cyber security program. Hopefully you’re beginning to see some value in really expanding out the strategy and planning phase for your next privileged access management project. Over the next few posts, I’ll elaborate on what can be done using Azure AD, and some tips and techniques to help you stay in control. Topics we will cover are:

  1. Strategy & Planning (This Post)
  2. Azure AD Basics
  3. Hybrid Scenarios
  4. Zero Trust
  5. Protecting Identity
  6. Staying in Control

Until next time, stay cloudy!

Originally Posted on arinco.com.au

A first look at Decentralised Identity

As an identity geek, something I’ve always struggled with has been user control and experience. Modern federation (such as OIDC/SAML) allows for generally coherent experiences, but it relies on user interaction with a central platform, removing control. Think of how many services you log into via your Facebook account. If Facebook was to go down, are you stuck? The same problem exists (albeit differently) with your corporate credentials.

Aside from centralisation, ask yourself, do you have a decent understanding of where your social media data is being shared? For most, I would guess the answer is no. Sure you can go and get this information, but that doesn’t show you who else has access to it. You own your identity and the associated data, but you don’t always control its use.

Finally, how many of your credentials cross that bridge into the real world? I would posit that not many do. If they do, it’s likely via some form of app or website.

Enter Decentralised Identity

Thankfully, with these challenges in mind, the Decentralised Identity Foundation (DIF) has set to work. As a group, the foundation is working to develop an open, standards based ecosystem for scalable sharing and verification of identity data. At its core, the DIF has developed standards for Decentralised Identifiers (DIDs) as a solution, with varying secondary items under differing working groups. So, how does Decentralised Identity work?

In short, it uses cryptographic protocols and blockchain ledgers to enable verifiers that validate a user claim without talking to the original issuer. The owner of each claim holds full possession of the data, and presentation of the data in question requires the owner’s consent.

Explaining Decentralised Credentials – DID high-level summary. Source: https://sovrin.org/wp-content/uploads/2018/03/Sovrin-Protocol-and-Token-White-Paper.pdf

In English Please?

A few excellent real-world examples exist of where Decentralised Identity could easily be applied. Say you (the owner) are an accredited accountant with Contoso Financial Advisors (the issuer). As a member, you are provided a paper-based certificate of accreditation. On a job application, you provide this paper-based record to a prospective employer (the verifier).

From here, your employer has a few options to validate your accreditation.

  • You have the “accredited” paper, so you must be legit right? Without verification of your accreditation, the work you put into obtaining it is invalidated.
  • Look for security features in your accreditation. This is vulnerable to fraud, with some documents not containing these features.
  • They can contact Contoso to check the validity of your accreditation. This relies on Contoso actually having a service to validate credentials, and still operating.

As you provide this accreditation to the employer, you also have a few concerns about the data contained within. What if they take a copy of this accreditation for another purpose? What if they also sell this information?

Combined, the current options for validation of identity ensure that:

  • Any presented data is devalued, through either lack of verification or fraud,
  • Data control is given away without recourse,
  • A reliance is built on organisations to be permanently contactable.

DIDs work to solve these challenges in a few ways. A credential issued within the DID ecosystem is signed by both the issuer and the owner. As this signature information is shared in a secure, public location, anyone can complete a verification activity with a high degree of confidence. As only the verification data is held publicly, you (the owner) can provide data securely, with the verifier unable to pass this information on to third parties with an authentic signature.

Finally, if Contoso was to close down or be uncontactable, the use of a decentralised ledger still allows your employer to verify that you are who you say you are. The ledger itself has the added benefit of not requiring ongoing communication with Contoso, meaning they also benefit, as they no longer have to validate requests from third parties.

Azure AD Verifiable Credentials

As a new technology, I was quite enthused to see Microsoft as a member-level contributor to the standards and working groups of the DIF. I was even more excited to see Microsoft’s DID implementation, “Azure AD Verifiable Credentials”, announced into public preview. Although it is still a new service and the documentation is light on, I’ve been able to tinker with the Microsoft example codebase and have found the experience to be pretty slick.

To get started yourself, pull down the code from GitHub and step through the documentation snippets. Pretty quickly, you should have a service available using ngrok and a verifiable credential issued to your device. Look at me mum, I’m a verified credential expert!

Verifiable Credentials Expert Card

Using the example codebase, you should note that the credential issuance relies on a Microsoft managed B2C tenant. The first step for anyone considering this technology is to plumb the solution into your own AAD.

To do so, you first need to create an Azure Key Vault resource, as the Microsoft VC service stores the keys used to sign each DID there. When provisioning, make sure your account has the key create, delete and sign permissions; without these, VC activation will fail.
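If you’re scripting this prerequisite, a minimal sketch with the Az PowerShell module might look like the following (resource names and the UPN are placeholders); the important part is granting your own account the create, delete and sign key permissions before running the VC setup.

# Sketch: Key Vault for Azure AD Verifiable Credentials signing keys (names are placeholders)
$rg    = 'rg-identity-vc'
$vault = 'kv-arinco-vc-demo'

New-AzResourceGroup -Name $rg -Location 'australiaeast'
New-AzKeyVault -Name $vault -ResourceGroupName $rg -Location 'australiaeast'

# The account running the VC onboarding needs create, delete and sign on keys
Set-AzKeyVaultAccessPolicy -VaultName $vault `
  -UserPrincipalName 'james@contoso.com' `
  -PermissionsToKeys create, delete, sign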

Next, you need to navigate through to Azure AD, then enable VC under Security > Verifiable Credentials (Preview).

Verifiable Credentials Enablement screen

Take note: if you plan to verify your domain, it must be available over HTTPS and must not utilise redirects. This held up my testing, as my container hit the Let’s Encrypt provisioning limits.

Once you have enabled your environment, you need to create a rules file and a display file before activating a new credential type. These files define what your credential looks like and what must be completed to obtain one. I created a simple corporate theme matching the Arinco environment, plus a requirement to log in with Azure AD. Each rule is defined within an attestations block, with the mapping for my ID token copying claims through to attributes held by the VC. One really nice thing when testing out basic capability is that you can create an attestation which takes only user input, meaning no configuration of an external IdP or consumption of another VC is required.


My Rules File

{
  "attestations": {
    "idTokens": [
      {
        "mapping": {
          "firstName": { "claim": "given_name" },
          "lastName": { "claim": "family_name" }
        },
        "configuration": "https://login.microsoftonline.com/<MY Tenant ID>/v2.0/.well-known/openid-configuration",
        "client_id": "<MY CLIENT ID>",
        "redirect_uri": "vcclient://openid/",
        "scope": "openid profile"
      }
    ]
  },
  "validityInterval": 2592000,
  "vc": {
    "type": ["ArincoTestVC"]
  }
}


My Display File

{
  "default": {
    "locale": "en-US",
    "card": {
      "title": "Verified Employee",
      "issuedBy": " ",
      "backgroundColor": "#001A31",
      "textColor": "#FFFFFF",
      "logo": {
        "uri": "https://somestorageaccount.blob.core.windows.net/aad-vc-logos/Logo-SymbolAndText-WhiteOnTransparent-Small.png",
        "description": "Arinco Australia Logo"
      },
      "description": "This employee card is issued to employees and directors of Arinco Australia"
    },
    "consent": {
      "title": "Do you want to get your digital employee card from Arinco Demo?",
      "instructions": "Please log in with your Arinco Demo account to receive your employee card."
    },
    "claims": {
      "vc.credentialSubject.firstName": {
        "type": "String",
        "label": "First name"
      },
      "vc.credentialSubject.lastName": {
        "type": "String",
        "label": "Last name"
      }
    }
  }
}

Once you have created your files, select Create new under the credentials tab within Azure AD. The process here is pretty straightforward, with a few file uploads and some next-next type clicking!

Verifiable Credentials Provisioning Screen

Once uploaded to Azure AD, you’re ready to build out your custom website and test VC out! The easiest way to do this is to follow the Microsoft documentation, updating the provided sample, testing functionality and then rebranding the page to suit your needs. With a bit of love, you end up with a nice site like below.

And all going well, you should be able to create your second verifiable credential.

Two Verified Credentials Cards

The overall experience?

As Verifiable Credentials is a preview service, there’s always going to be a bit of risk associated with deployment. That being said, I found the experience to be straightforward, with only a few teething issues.

One challenge I would call out for others: do not configure DNS for your DID well-known domain while you are still provisioning HTTPS certificates. Doing so causes Authenticator to attempt to connect over HTTPS, with the user experience slowed by about two to three minutes of spinning progress wheels while the application completes its retries.

As for new capability, I’m really looking forward to seeing where the service goes with my primary wish list as follows:

  1. Some form of secondary provisioning aside from QR Codes. I personally don’t enjoy QR due to a leftover distaste from COVID-19 contact tracing in Australia. A way to distribute magic links for provisioning, or silent admin led provisioning, would be really appreciated.
  2. Any form of NFC support. To me, this is the final frontier to cross for digital/real world identity. Imagine if we could use VC for services such as access to buildings, local shops or even public transport.

Hopefully, you have found this article informative. Until next time, stay cloudy!

Connecting Security Centre to Slack – The better way

Recently I’ve been working on some automated workflows for Azure Security Center and Azure Sentinel. Following best practice, after initial development, all our Logic Apps and connectors are deployed using infrastructure as code and Azure DevOps. This allows us to deploy multiple instances across customer tenants at scale. Unfortunately, there is a manual step required when deploying some Logic Apps, and you will encounter this on the first run of your workflow.

A broken logic app connection

This issue occurs because connector resources often utilise OAuth flows to allow access to the target services. We’re using Slack as an example, but this includes services such as Office 365, Salesforce and GitHub. Selecting the information prompt under the deployed connector display name will quickly open a login screen, with the process authorising Azure to access your service.

Microsoft provides a few options to solve this problem;

  1. Manually apply the settings on deployment. Azure will handle token refresh, so this is a one-time task. While this would work, it isn’t great; at Arinco, we try to avoid manual tasks wherever possible.
  2. Pre-deploy connectors in advance. As multiple Logic Apps can utilise the same connector, operate them as a shared resource, perhaps owned by a platform engineering group.
  3. Operate a worker service account, with a browser holding logged-in sessions. Use DevOps tasks to interact and authorise the connection. This is the worst of the three solutions and prone to breakage.

A better way to solve this problem would be to sidestep it entirely. Enter app webhooks for Slack. Webhooks act as a simple method to send data between applications. These can be unauthenticated and are often unique to an application instance.

To get started with this method, navigate to the applications page at api.slack.com, create a basic application, providing an application name and a “development” workspace.

Next, enable incoming webhooks and select your channel.

Just like that, you can send messages to a channel without an OAuth connector. Grab the CURL that is provided by Slack and try it out.
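If PowerShell is more your speed than curl, the equivalent test is only a couple of lines (the webhook URL below is a placeholder for the one Slack generates for your app):

# Quick webhook test - replace the URL with the one generated for your Slack app
$webhook = 'https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX'

$body = @{ text = 'Hello from Azure Logic Apps (well, PowerShell for now)!' } | ConvertTo-Json

Invoke-RestMethod -Method Post -Uri $webhook -ContentType 'application/json' -Body $body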

Once you have completed the basic setup in Slack, the hard part is all done! To use this capability in a Logic App, add the HTTP task and fill out the details like so:

Our simple logic app.

You will notice here that the request body we are using is a JSON-formatted object. Follow the Slack Block Kit format and you can develop some really nice-looking messages. Slack even provides an excellent builder service.

Block kit enables you to develop rich UI within Slack.

Completing our integration in this manner has a couple of really nice benefits – Avoiding the manual work almost always pays off.

  1. No Manual Integration, Hooray!
  2. Our branding is better. Using the native connector does not allow you to easily change the user interface, with messages showing as sent by “Microsoft Azure Logic Apps”.
  3. Integration to the Slack ecosystem for further workflows. I haven’t touched on this here, but if you wanted to build automatic actions back to Logic Apps, using a Slack App provides a really elegant path to do this.

Until next time, stay cloudy!

Integrating Kubernetes with Okta for user RBAC.

Recently I built a local Kubernetes cluster, for fun and learning. As an identity geek, the first thing I assess when building any service is “How will I log in?” Sure, I can use local Kubernetes service accounts, but where is the fun in that? Getting started with my research, I found a couple of really great articles and docs, and I would be remiss if I didn’t start by sharing these;

I don’t think it needs acknowledgement, but the K8s ecosystem is diverse, and the documentation can be daunting at the best of times. This post is slightly tuned to my setup, so you may need to tinker a bit as you get things working on your own cluster. I’m using the following for those who would like to follow along at home;

  • A MicroK8s based Kubernetes cluster (See here for quick setup on Raspberry Pis)
  • An Okta Developer tenant (Signup here – Free for the first 5 apps)

Initial Okta Configuration

To configure this integration for any identity provider, we will need a client application. First up, create an OIDC application using the app integration wizard. You will want a “web application” with a login URL that looks something like so:

http://localhost:8000

Pretty straightforward stuff; customise the name and logo as you like. Once you pass the initial screen, note down your client ID and secret for use later. Kubernetes services are often able to refresh OIDC tokens for you; to support this, you will need to modify the allowed grant types to include Refresh Token, a simple checkbox under the application options.

Finally, assign some users to your application. After all, it’s great to be able to login with OIDC tokens, but if Okta won’t assign them in the first place, why bother? 😀

Modifying the Kubernetes API Server

Once you’ve completed your Okta setup, it’s time to move to Kubernetes. There are a few configuration flags needed on the kube-apiserver, but only two are hard requirements. The others will make things easier to manage in the long run;

--oidc-issuer-url=https://dev-987710.okta.com/oauth2/default #Required
--oidc-client-id=00a1g3wxh9x9KLciv4x9 #Required
--oidc-username-prefix=oidc: #Recommended - Makes things a bit neater for RBAC
--oidc-groups-prefix=oidcgroup: #Recommended

Apply these to your api config. If you’re using kops, simply add the changes using kops edit cluster. In my case, I’m using microk8s, so I edit the apiserver configuration located at:

/var/snap/microk8s/current/args/kube-apiserver

and then apply this with:

microk8s stop; microk8s start
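If you prefer to script the change on MicroK8s, a rough sketch of the whole step might look like the following; the flag values are the same examples used above, and the commands need to run on the node itself.

# Append the OIDC flags to the MicroK8s kube-apiserver arguments, then restart.
# The issuer URL and client ID are the example values from this post; use your own.
cat <<'EOF' | sudo tee -a /var/snap/microk8s/current/args/kube-apiserver
--oidc-issuer-url=https://dev-987710.okta.com/oauth2/default
--oidc-client-id=00a1g3wxh9x9KLciv4x9
--oidc-username-prefix=oidc:
--oidc-groups-prefix=oidcgroup:
EOF
microk8s stop; microk8s start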

Testing Login

At this point we can actually test our sign-in, albeit with limited functionality. To do this, we need to grab an ID token from the Okta API using curl or Postman. This is a three-step process.

1. Establish a session using the Sessions API

curl --location --request POST 'https://dev-987710.okta.com/api/v1/authn' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data-raw '{
"username": "kubernetesuser@westall.co",
"password": "supersecurepassword",
"options": {
"multiOptionalFactorEnroll": true,
"warnBeforePasswordExpired": true
} 
}'

2. Exchange your session token for an auth code

curl --location --request GET 'https://dev-987710.okta.com/oauth2/v1/authorize?client_id=00a1g3wxh9x9KLciv4x9&response_type=code&response_mode=form_post&scope=openid%20profile%20offline_access&redirect_uri=http%3A%2F%2Flocalhost%3A8000&state=demo&nonce=b14c6ea9-4975-4dff-9cf6-1b475045dffa&sessionToken=<SESSION TOKEN FROM STEP 1>'

3. Exchange your auth code for an access & id token

curl --location --request POST 'https://dev-987710.okta.com/oauth2/v1/token' \
--header 'Accept: application/json' \
--header 'Authorization: Basic <Base64 encoded clientId:clientSecret>' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=authorization_code' \
--data-urlencode 'redirect_uri=http://localhost:8000' \
--data-urlencode 'code=<AUTH CODE FROM STEP 2>'

Once we have collected a valid ID token, we can run any command against the Kubernetes API using the --token and --server flags.

kubectl get pods -A --token=<superlongJWT> --server='https://192.168.1.20:16443'

Don’t stress if you get an “Error from server (Forbidden)” prompt back from your request. Kubernetes has a deny-by-default RBAC design, which means nothing will work until a permission is configured.

If you are like me and are also using MicroK8s, you should only get this error if you have already enabled the RBAC add-on. By default, MicroK8s runs with the API server flag --authorization-mode=AlwaysAllow, which means that any authenticated user is able to run kubectl commands. If you want to enable fine-grained RBAC, the command you need is:

microk8s enable rbac

Applying access control

To make our kubectl commands work, we need to apply a cluster permission. But before I dive into that, I want to point out something that is a bit yucky. Note that for each command, my user is prefixed as configured, but the identifier presented is the user’s unique Okta profile ID.

While this will work when I assign access, it’s definitely hard to read. To fix this, I’m going to add another flag to my kube-api config.

--oidc-username-claim=preferred_username

Now that we have applied this, you can see that I’m getting a slightly better experience; my users are actually identified by a username! It’s important for later to understand that this claim is not provided by default: in my original authorization request, I ask for the “profile” scope in ADDITION to the openid scope.
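If you want to confirm exactly which claims your token carries, you can decode the payload segment of the JWT locally. A small sketch, assuming the ID token sits in an ID_TOKEN variable and jq is installed:

# JWT payloads are unpadded base64url, so swap the URL-safe characters and pad
# before decoding, then pretty-print the claims and look for preferred_username.
PAYLOAD=$(echo "${ID_TOKEN}" | cut -d '.' -f2 | tr '_-' '/+')
while [ $(( ${#PAYLOAD} % 4 )) -ne 0 ]; do PAYLOAD="${PAYLOAD}="; done
echo "${PAYLOAD}" | base64 -d | jq .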

From here, we can begin to apply manifests against each user for access (using an authorized account). I’ve taken the easy route and assigned a clusterAdmin role here:

kubectl apply -f - <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: User
  name: oidc:Okta-Lab@westall.co
EOF

At this point you should be able to test again and get your pods with an OIDC token!

Tidying up the login flow

Now that we have a working kubectl client, I think most people would agree that three curl requests and a really long kubectl command is a bit arduous. One option to simplify this process is to use the native kubectl support for OIDC within your kubeconfig.
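For reference, a sketch of that native approach using kubectl’s built-in oidc auth provider is below. Note that this provider has been deprecated in newer kubectl releases, and the issuer URL must match whatever you configured on the API server.

# Sketch: the built-in (now deprecated) oidc auth provider for a user named "okta".
# The id and refresh tokens come from the curl flow above; kubectl refreshes them as needed.
kubectl config set-credentials okta \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://dev-987710.okta.com/oauth2/default \
  --auth-provider-arg=client-id=00a1g3wxh9x9KLciv4x9 \
  --auth-provider-arg=client-secret=<okta client secret> \
  --auth-provider-arg=id-token=<id token from step 3> \
  --auth-provider-arg=refresh-token=<refresh token from step 3>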

Personally, I prefer to use the kubectl extension kubelogin. The benefit of using this extension is that it simplifies the login process for multiple accounts, and your kubeconfig contains arguably less valuable data. To enable kubelogin, first install it;

# Homebrew (macOS and Linux)
brew install int128/kubelogin/kubelogin

# Krew (macOS, Linux, Windows and ARM)
kubectl krew install oidc-login

# Chocolatey (Windows)
choco install kubelogin

Next, update your kubeconfig with an OIDC user like so. Note the use of the --oidc-extra-scope flag: without this, Okta will return a token without your preferred_username claim, and sign-in will fail!

- name: okta
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://dev-987710.okta.com
      - --oidc-client-id=00a1g3wxh9x9KLciv4x9
      - --oidc-client-secret=<okta client secret>
      - --oidc-extra-scope=profile
      - -v1 #Not required, useful to see the process however
      command: kubectl
      env: null

Finally, configure a new context with your user and run a command. Kubelogin should take care of the rest.
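For example, something like the below should do the trick, assuming your cluster entry is named microk8s in your kubeconfig (check with kubectl config get-clusters).

# Pair the okta user with your cluster, switch to the new context and run a command.
# Kubelogin should pop a browser window for the Okta sign-in on first use.
kubectl config set-context okta@microk8s --cluster=microk8s --user=okta
kubectl config use-context okta@microk8s
kubectl get pods -A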

In my opinion, for any solution you deploy, personal or professional, identity should always be something you properly implement. I’ve been lucky to have most of the K8s magic abstracted away from me (thanks, Azure), and I found this process immensely informative and a useful exercise. Hopefully you will find this post useful in your own identity journey 🙂 Until next time, stay cloudy!