Integrating Kubernetes with Okta for user RBAC

Recently I built a local Kubernetes cluster – for fun and learning. As an identity geek, the first thing I assess when building any service is “How will I log in?” Sure, I could use local Kubernetes service accounts, but where is the fun in that? Getting started with my research, I found a couple of really great articles and documents, and I would be remiss if I didn’t start by sharing these;

I don’t think it needs acknowledgement, but the K8s ecosystem is diverse, and the documentation can be daunting at the best of times. This post is slightly tuned to my setup, so you may need to tinker a bit to get things working on your own cluster. I’m using the following, for those who would like to follow along at home;

  • A MicroK8s-based Kubernetes cluster (see here for a quick setup on Raspberry Pis)
  • An Okta developer tenant (sign up here – free for the first 5 apps)

Initial Okta Configuration

To configure this integration for any identity provider, we will need a client application. First up – create an OIDC application using the app integration wizard. You will want a “web application” with a login URL that looks something like so:

https://localhost:8000

Pretty straightforward stuff – customise the name/logo as you like. Once you pass the initial screen, note down your client ID & secret for use later. Kubernetes clients are often able to refresh OIDC tokens for you; to support this, you will need to modify the allowed grant types to include Refresh – a simple checkbox under the application options.

Finally, assign some users to your application. After all, it’s great to be able to log in with OIDC tokens, but if Okta won’t issue them in the first place, why bother? 😀

Modifying the Kubernetes API Server

Once you’ve completed your Okta setup, it’s time to move to Kubernetes. There are a few configuration flags needed on the kube-apiserver, but only two are hard requirements. The others will make things easier to manage in the long run;

--oidc-issuer-url=https://dev-987710.okta.com/oauth2/default #Required
--oidc-client-id=00a1g3wxh9x9KLciv4x9 #Required
--oidc-username-prefix=oidc: #Recommended - Makes things a bit neater for RBAC
--oidc-groups-prefix=oidcgroup: #Recommended

Apply these to your API server config. If you’re using kops, simply add the changes using kops edit cluster. In my case, I’m using MicroK8s, so I edit the apiserver configuration located at:

/var/snap/microk8s/current/args/kube-apiserver

and then apply this with:

microk8s stop; microk8s start

Testing Login

At this point we can actually test our sign-in, albeit with limited functionality. To do this, we need to grab an ID token from the Okta API using curl or Postman. This is a three-step process.

1. Establish a session token using the Authentication API

curl --location --request POST 'https://dev-987710.okta.com/api/v1/authn' \
--header 'Accept: application/json' \
--header 'Content-Type: application/json' \
--data-raw '{
    "username": "kubernetesuser@westall.co",
    "password": "supersecurepassword",
    "options": {
        "multiOptionalFactorEnroll": true,
        "warnBeforePasswordExpired": true
    }
}'

2. Exchange your session token for an auth code

curl --location --request GET 'https://dev-987710.okta.com/oauth2/v1/authorize?client_id=00a1g3wxh9x9KLciv4x9&response_type=code&response_mode=form_post&scope=openid%20profile%20offline_access&redirect_uri=http%3A%2F%2Flocalhost%3A8000&state=demo&nonce=b14c6ea9-4975-4dff-9cf6-1b475045dffa&sessionToken=<SESSION TOKEN FROM STEP 1>'

3. Exchange your auth code for an access & id token

curl --location --request POST 'https://dev-987710.okta.com/oauth2/v1/token' \
--header 'Accept: application/json' \
--header 'Authorization: Basic <Base64 encoded clientId:clientSecret>' \
--header 'Content-Type: application/x-www-form-urlencoded' \
--data-urlencode 'grant_type=authorization_code' \
--data-urlencode 'redirect_uri=http://localhost:8000' \
--data-urlencode 'code=<AUTH CODE FROM STEP 2>'

Once we have collected a valid ID token, we can run any command against the Kubernetes API using the --token & --server flags.

kubectl get pods -A --token=<superlongJWT> --server='https://192.168.1.20:16443'

Don’t stress if you get an “Error from server (Forbidden)” response to your request. Kubernetes has a deny-by-default RBAC design, which means nothing will work until a permission is configured.

If you are like me and also using MicroK8s, you should only get this error if you have already enabled the RBAC add-on. By default, MicroK8s runs with the API server flag --authorization-mode=AlwaysAllow, which means any authenticated user can run kubectl commands. If you want to enable fine-grained RBAC, the command you need is:

microk8s enable rbac

Applying access control

To make our kubectl commands work, we need to apply a cluster permission. But before I dive into that, I want to point out something that is a bit yucky: for each command, my user is prefixed as configured, but the identifier presented is the user’s unique Okta profile ID.

While this will work when I assign access, it’s definitely hard to read. To fix this, I’m going to add another flag to my kube-api config.

--oidc-username-claim=preferred_username

Now that we have applied this, you can see I’m getting a slightly better experience – my users are actually identified by a username! It’s important for later to understand that this claim is not provided by default: in my original authorization request I asked for the profile scope in ADDITION to the openid scope.
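
If you want to double-check which claims your token actually contains, you can decode the JWT payload locally. A quick sketch (assuming jq is installed):

# Print the claims from the payload (second dot-separated segment) of the JWT.
# If base64 complains about padding, append one or two '=' characters to the segment.
echo '<superlongJWT>' | cut -d '.' -f2 | base64 -d | jq .
# Look for "preferred_username" (and any group claims) in the output.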

From here, we can begin to apply manifests granting each user access (using an already-authorised account). I’ve taken the easy route and assigned the cluster-admin role here:

kubectl apply -f - <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: User
  name: oidc:Okta-Lab@westall.co
EOF
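
Since we also set --oidc-groups-prefix on the API server, the same pattern works for groups. The below is a sketch only – it assumes you have added --oidc-groups-claim=groups to the kube-apiserver, configured Okta to emit a groups claim in the ID token, and created a kubernetes-admins group (both names are placeholders):

kubectl apply -f - <<EOF
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: oidc-group-admin-binding
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: Group
  name: oidcgroup:kubernetes-admins
EOF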

At this point you should be able to test again and get your pods with an OIDC token!

Tidying up the login flow

Now that we have a working kubectl client, I think most people would agree that three curl requests and a really long kubectl command is a bit arduous. One option to simplify this process is to use kubectl’s native OIDC support within your kubeconfig.
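
If you want to go down that path, a minimal sketch looks like the below – it reuses the client ID/secret from earlier and assumes you kept the refresh token from step 3. Note that the issuer URL must match the --oidc-issuer-url configured on your API server, and that newer kubectl releases deprecate the oidc auth-provider in favour of exec plugins like the one shown next:

kubectl config set-credentials okta-native \
  --auth-provider=oidc \
  --auth-provider-arg=idp-issuer-url=https://dev-987710.okta.com \
  --auth-provider-arg=client-id=00a1g3wxh9x9KLciv4x9 \
  --auth-provider-arg=client-secret=<okta client secret> \
  --auth-provider-arg=id-token=<superlongJWT> \
  --auth-provider-arg=refresh-token=<refresh token from step 3>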

Personally, I prefer to use the kubectl extension kubelogin. The benefit of using this extension is that it simplifies the login process for multiple accounts, and your kubeconfig contains arguably less valuable data. To enable kubelogin, first install it:

# Homebrew (macOS and Linux)
brew install int128/kubelogin/kubelogin

# Krew (macOS, Linux, Windows and ARM)
kubectl krew install oidc-login

# Chocolatey (Windows)
choco install kubelogin

Next, update your kubeconfig with an OIDC user like so. Note the use of the --oidc-extra-scope flag. Without this, Okta will return a token without your preferred_username claim, and sign-in will fail!

- name: okta
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1beta1
      args:
      - oidc-login
      - get-token
      - --oidc-issuer-url=https://dev-987710.okta.com
      - --oidc-client-id=00a1g3wxh9x9KLciv4x9
      - --oidc-client-secret=<okta client secret>
      - --oidc-extra-scope=profile
      - -v1 #Not required, useful to see the process however
      command: kubectl
      env: null

Finally, configure a new context with your user and run a command. Kubelogin should take care of the rest.
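
For example (a sketch – microk8s is a placeholder for whatever cluster name already exists in your kubeconfig):

kubectl config set-context okta-context --cluster=microk8s --user=okta
kubectl config use-context okta-context
kubectl get pods -A   # kubelogin pops a browser to complete the Okta sign-in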

In my opinion, identity should always be something you properly implement in any solution you deploy, personal or professional. I’ve been lucky to have most of the K8s magic abstracted away from me (thanks, Azure), and I found this process immensely informative and a useful exercise. Hopefully you will find this post useful in your own identity journey 🙂 Until next time, stay cloudy!

Thoughts from an F5 APM Multi Factor implementation

Recently I was asked to assist with an implementation of MFA in a complex on-premises environment. Beyond the implementation of Okta, all infrastructure was on-premises and neatly presented to external consumers through an F5 APM/LTM solution. This post details my thoughts & the lessons I learnt configuring RADIUS authentication for services behind an F5, utilising Okta RADIUS servers as the AAA source.

Ideal Scenario?

Before I dive into my lessons learnt, I want to preface this article by saying there is a better way. There is almost always a better way to do something. In a perfect world, all services would support token-based single sign-on. When security of a service can’t be achieved by the best option, always look for the next best thing. Mature organisations excel at finding a balance between what is best and what is achievable. In my scenario, the best-case implementation would have been inline SSO with an external IdP. Under this model, Okta completes SAML authentication with the F5 platform, and the F5 then creates and provides the relevant assertions to on-premises services.

Unfortunately, the reality of most technology environments is that not everything is new and shiny. My internal applications did not support SAML, and so here we are with the Okta RADIUS agent and a flow that looks something like the below (replace step 9 with application auth).

Importantly, this implementation is not inherently insecure or bad; however, it does have a few more areas that could be better. Okta calls this out in the documentation for exactly this reason. Something important to understand is that RADIUS secrets can be and are compromised, and it is relatively trivial to decrypt traffic once you have possession of a secret.

APM Policy

If you have a read of the Okta documentation on this topic, you will quickly be presented with an APM policy example.

You will note there are two RADIUS Auth blocks – these are intended to separate the verification of login data. RADIUS Auth 1 is responsible for password authentication, and Auth 2 is responsible for verifying a provided token. If you’re using OTP only, you can get away with a simpler APM policy – Okta supports providing both the password and an OTP inline, separated by a comma (for example, supersecurepassword,123456), for verification in a single step.

Using this option, the policy can be simplified a small amount – always opt to simplify policy; fewer places for things to go wrong!

Inline SSO & Authentication

In a similar fashion to Okta, F5 APM provides administrators with the ability to pass credentials through to downstream applications. This is extremely useful when dealing with legacy infrastructure, as credential mapping can be used to correctly authenticate a user against a service via the F5. The below diagram shows this using an initial login with RSA SecurID MFA.

For most of my integrations, I was required to use HTTP forms. When completing this authentication using the APM, having an understanding of exactly how the form is constructed is really critical. The below example is taken from an Exchange form – leaving out the flags parameter originally left my login failing & me scratching my head.
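
As a rough sketch, a typical OWA forms login boils down to a POST like the below – the hostname and credentials are placeholders, and the exact field list can vary between Exchange versions, so inspect your own form before building the APM mapping:

# Replay of the OWA login form; note the easy-to-miss flags field.
curl 'https://mail.example.com/owa/auth.owa' \
  --data-urlencode 'destination=https://mail.example.com/owa' \
  --data-urlencode 'flags=4' \
  --data-urlencode 'forcedownlevel=0' \
  --data-urlencode 'username=CORP\jsmith' \
  --data-urlencode 'password=supersecurepassword'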

An annoying detail about forms-based inline authentication is that if you already have a session, the F5 will happily log you straight back into the target service. This can be a confusing experience for most users, as we generally expect to stay logged out when we click that logout button. Thankfully, we can handle this conundrum neatly with an iRule.

iRule Policy application

For this implementation, I had a specific set of requirements around when APM policy should be applied to enforce MFA; not all services play nicely with extra authentication. Using iRules on virtual servers is a really elegant way to control when an APM policy applies. On-premises Exchange is something that lots of organisations struggle with securing – especially legacy ActiveSync. The below iRule modifies when policy is applied, using URI contents & device type.

when HTTP_REQUEST {
    # Normalise once so every comparison below is case-insensitive.
    set uri [string tolower [HTTP::uri]]
    set agent [HTTP::header User-Agent]
    if { (($agent contains "iPhone") || ($agent contains "iPad")) && (($uri contains "activesync") || ($uri contains "/oab")) } {
        # Legacy ActiveSync/OAB from iOS devices bypasses the APM policy.
        ACCESS::disable
    } elseif { $uri contains "logoff" } {
        # Kill the APM session on logout so the user isn't silently logged back in.
        ACCESS::session remove
    } else {
        ACCESS::enable
        if { $uri contains "/ecp" } {
            # Permit only personal-settings ECP paths; redirect anything else (admin ECP) to OWA.
            if { not (($uri contains "/ecp/?rfr=owa") || ($uri contains "/ecp/personalsettings/") ||
                      ($uri contains "/ecp/ruleseditor/") || ($uri contains "/ecp/organize/") ||
                      ($uri contains "/ecp/teammailbox/") || ($uri contains "/ecp/customize/") ||
                      ($uri contains "/ecp/troubleshooting/") || ($uri contains "/ecp/sms/") ||
                      ($uri contains "/ecp/security/") || ($uri contains "/ecp/extension/") ||
                      ($uri contains "/scripts/") || ($uri contains "/themes/") ||
                      ($uri contains "/fonts/") || ($uri contains "/ecp/error.aspx") ||
                      ($uri contains "/ecp/performance/") || ($uri contains "/ecp/ddi")) } {
                HTTP::redirect "https://[HTTP::host]/owa"
            }
        }
    }
}

One thing to be aware of when implementing iRules like this is directory traversal – you really do need a concrete understanding of which paths are and are not allowed. If a determined adversary can authenticate against a permitted URI, they should NOT be able to switch to a blocked URI. The above example shows this really well – I want my users to access their personal-account ECP pages just fine. Remote administrative Exchange access? That’s a big no-no, so I redirect to an authorised endpoint.

Final Thoughts

Overall, the solution implemented here is quite elegant considering the age of some of the infrastructure. I will always advocate for MFA enablement on a service – it prevents so many password-based attacks and can really uplift the security of your users. While overall service uplift is always the better option, you should never discount the small steps you can take using existing infrastructure. As always, leave a comment if you found this article useful!

Inbound Federation from Azure AD to Okta

Recently I spent some time updating my personal technology stack. As an identity nerd, I thought to myself that SSO everywhere would be a really nice touch. Unfortunately, SSO everywhere is not as easy as it sounds – more on that in a future post. For my personal setup, I use Office 365 and have centralised the majority of my applications on Azure AD; I find the licensing inclusions for my day-to-day work and lab are just too good to resist. But what about my other love? If you’ve read this blog recently, you will know I’ve invested heavily in the Okta Identity platform. However, aside from a root account, I really don’t want to store credentials any more – especially considering my track record with lab account management. This blog details my experience and tips for setting up inbound federation from Azure AD to Okta, with admin role assignment pushed to Okta using SAML JIT.

So what is the plan?

For all my integrations, I’m aiming to ensure that access is centralised: I should be able to create a user in Azure AD and then push them out to the application. How this occurs is a problem to handle per application. As Okta is traditionally an identity provider, this setup is a little different – I want Okta to act as the service provider. Cue inbound federation. For the uninitiated, inbound federation is an Okta feature that allows any user to SSO into Okta from an external IdP, provided your admin has done some setup. More commonly, inbound federation is used in hub-and-spoke models for Okta orgs. In my scenario, Azure AD is acting as a spoke for the Okta org. The really nice benefit of this setup is that I can configure SSO from either service into my SaaS applications.

Configuring Inbound Federation

The process to configure inbound federation is thankfully pretty simple, although the documentation could probably detail it a little better. At a high level, we’re going to complete three SSO tasks, with two further steps for admin assignment via SAML JIT.

  1. Configure an application within AzureAD
  2. Configure an identity provider within Okta & download some handy metadata
  3. Configure the correct Azure AD claims & test SSO
  4. Update our AzureAD application manifest & claims
  5. Assign admin groups using SAML JIT and our AzureAD claims.

While it does seem like a lot, the process is quite seamless, so let’s get started. First up, add an enterprise application to Azure AD; Name this what you would like your users to see in their apps dashboard. Navigate to SSO and select SAML.

Next, the Okta configuration. Select Security > Identity Providers > Add. You might be tempted to select ‘Microsoft’ for OIDC configuration; however, we are going to select SAML 2.0 IdP.

Using the data from our Azure AD application, we can configure the IDP within Okta. My settings are summarised as follows:

  • IdP Username should be: idpuser.subjectNameId
  • SAML JIT should be ON
  • Update User Attributes should be ON (re-activation is personal preference)
  • Group assignments are off (for now)
  • Okta IdP Issuer URI is the AzureAD Identifier
  • IdP Single Sign-On URL is the AzureAD login URL
  • IdP Signature Certificate is the Certificate downloaded from the Azure Portal

Click Save, and you can download the service provider metadata.

Upload the file you just downloaded to the Azure AD application and you’re almost ready to test. The basic SAML configuration is now complete.

Next, we need to configure the correct data to flow from Azure AD to Okta. If you have used Okta before, you will know the four key attributes on anyone’s profile: username, email, firstName & lastName. If you inspect the downloaded metadata, you will notice this has changed slightly, with mobilePhone included & username seemingly missing. This is because Universal Directory maps username to the value provided in NameID, which we configured in the original IdP setup.

Add a claim for each attribute, and feel free to remove the default claims that use fully qualified namespaces. My final claims list looks like this:
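
Reconstructing from the metadata above, that boils down to one claim per Okta attribute. The exact source attributes are assumptions – adjust to suit your tenant:

  • email → user.mail
  • firstName → user.givenname
  • lastName → user.surname
  • mobilePhone → user.telephonenumber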

At this point, you should be able to save your work ready for testing. Assign your app to a user and select the icon now available on their My Apps dashboard. Alternatively, you can use “Test as another user” within the application SSO config. If you have issues when testing, the “My Apps Secure Sign-in Extension” really comes in handy here. You can grab this from the Chrome or Firefox web store and use it to cross-reference your SAML responses against what you expect to be sent. Luckily, I can complete SSO on the first pass!

Adding Admin Assignment

Now that I have SSO working, admin assignment to Okta is something else I would really like to manage in Azure AD. To do this, I first need to configure some admin groups within Okta. I’ve built three basic groups; however, you can create as many as you please.

Next, we need to update the application manifest for our Azure AD app. This can be done at App registrations > AppName > Manifest. For each group that you created within Okta, add a new appRole like the below, ensuring that the role ID is unique.

{
    "allowedMemberTypes": [
        "User"
    ],
    "description": "Admin-Okta-Super",
    "displayName": "Admin-Okta-Super",
    "id": "18d14569-c3bd-438b-9a66-3a2aee01d14f",
    "isEnabled": true,
    "lang": null,
    "origin": "Application",
    "value": "Admin-Okta-Super"
},

For simplicity, I have matched the value, description and displayName details. The ‘value’ attribute of each appRole must correspond with a group created within the Okta portal; the others can be a bit more verbose should you desire.

Now that we have modified our application with the appropriate Okta roles, we need AzureAD & Okta to send/accept this data as a claim. First, within AzureAD, update your existing claims to include the user’s role assignment. This can be done with the “user.assignedroles” value like so:
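
In my tenant this ends up as one extra claim alongside the attribute claims configured earlier (the claim name itself is my choice – it just needs to match what the Okta IdP is told to read):

  • roles → user.assignedroles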

Next, update the Okta IdP you configured earlier to complete group sync, like so. Note that the group filter prevents any extra memberships from being pushed across.
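
The settings in question sit in the JIT section of the identity provider; approximately the following (option names may differ slightly in your admin console, so treat these as a guide):

  • Group Assignments: map group memberships from a SAML attribute
  • Group Attribute Name: roles (matching the claim added above)
  • Group Filter: a regex such as ^Admin-Okta-.*$, so only the admin groups are touched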

For large numbers of groups, I would recommend pushing attributes as claims and configuring group rules within Okta for dynamic assignment.

Update your Azure AD user/group assignment within the Okta app, and once again, you’re ready to test. A second sign-in to the Okta org should reveal an admin button in the top right, and from there you can validate group memberships. In the below example, I’ve been neatly added to my Super Admins group.

Wrapping Up

Hopefully this article has been informative about the process of setting up SAML 2.0 inbound federation from Azure AD to Okta. Depending on your identity strategy, this can be a really powerful way to centrally manage identity for a service like Okta, bring multiple organisations together, or even connect with customers or partners. Personally, this type of setup makes my life easier across the board – I’ve even started to minimise the use of my password manager just by getting creative with SSO solutions!

If you’re interested in chatting further on this topic, please leave a comment or reach out!

Okta Workflows – Unlimited Power!

If you have ever spoken with me in person, you know I’m a huge fan of the Okta identity platform – it just makes everything easy. It’s no surprise, then, that the Okta Workflows announcement at Oktane was definitely something I saw value in – interestingly enough, I’ve utilised Postman collections and Azure Logic Apps for an almost identical integration solution in the past.


This post will cover my first impressions, Workflows basics & a demo of the capability. If you want to try this in your own org, reach out to your Account/Customer Success Manager – the feature is still hidden behind a flag in the Okta portal, but it is well worth the effort!

The basics of Workflows

If you have ever used Azure Logic Apps or AWS Step Functions, you will instantly find the terminology of Workflows familiar. Workflows are broken into three core abstractions;

  • Events – used to start your workflow
  • Functions – provide logic control (if/then and the like) & advanced transformations/functionality
  • Actions – DO things

All three abstractions have input & output attributes, which can be manipulated or utilised throughout each flow using mappings. Actions & events require a connection to a service – pretty self-explanatory.

Workflows are built from left to right, starting with an event. I found the left-to-right view when building flows really refreshing – if you have ever scrolled down a large Logic App, you will know how difficult it can get! Importantly, keeping your flows short and efficient will make their functionality easy to view & understand.

Setting up a Workflow

For my first workflow, I’ve elected to solve a really basic use case: sending a message to Slack when a user is added to an admin group. ChatOps-style interactions are becoming really popular with internal IT teams and are a lot nicer than automated emails. Slack is supported by Workflows out of the box, and there is an O365 Graph API option available if your organisation is using Microsoft Teams.

First up is a trigger; User added to a group will do the trick!

Whenever you add a new integration, you will be prompted for a new connection, and depending on the service, this will be different. For Okta, this is a simple OpenID app that is added when Workflows is onboarded to the org. Okta domain, client ID, client secret, and we are up and running!

Next, I need to integrate with Slack – same process: select a task, connect to the service.

Finally, I can configure my desired output to Slack. A simple message to the #okta channel will do.

Within about 5 minutes I’ve produced a really simple two-step flow, and I can click Save & Test on the right!

Looking Good!

If you’ve been paying attention, you will have realised that this flow is pretty noisy – I would get a message like this for ALL Okta groups. How about adding conditions so the flow only fires for my desired admin group?

Under the “Functions” option, I can add a simple Continue If condition and drag across the group name from my trigger. Group ID would definitely be a bit more precise, but this is just a demo 💁🏻.

Finally, I want to clean up my Slack message & provide a bit more information. A quick scroll through the available functions and I’m presented with a text concatenate;

Save & Test – Looking Good!

What’s Next?

My first impressions of the Okta Workflows service are really positive – the UI is well designed & accessible to the majority of employees. I really like the left-to-right flow, the functionality & the options available to me in the control pane.

The early support for key services is great. Don’t worry if something isn’t immediately available as an Okta-deployed integration – if something has an API, you can consume it with some of the advanced functions.


If you want to dive straight into the Workflows deep end, have a look at the documentation page – Okta has already provided a wealth of knowledge. This Oktane video is also really great.

Okta Workflows only gets better from here. I’m especially excited to see the integrations with other cloud providers, and I’ve already started planning out my advanced flows! Until then, happy tinkering!

How to set up Okta and Active Directory Integration & Provisioning

More often than not, companies begin a modern identity journey by expanding the capability of an existing identity store. This could be federation using ADFS, identity governance using SailPoint, or integration with a third-party directory.

Active Directory (AD) is a highly extensible and capable solution for the majority of legacy business cases, and is easily the most common identity store in the industry. With the advent of cloud, however, the on-premises directory is starting to show its age – and that’s why my favourite initial capability addition is to integrate Okta.

In this technical blog, I’ll take you through the basics, and demonstrate some Universal Directory capability.

Integrating Okta with AD: An Introduction

Before we get started, it’s valuable to address the three questions most people will ask when I begin the conversation about using Okta with Active Directory:

1. Why choose a cloud directory?

The cloud directory conversation boils down to one point: less infrastructure. I’ll probably need a bit of infrastructure to run the initial phase of any identity uplift, but let’s be honest – infrastructure is hard work. We don’t want it, and we certainly don’t want to plan our transformation around the idea of working harder.


2. Why a third party directory?

The second question isn’t a dig at any one provider, but providers generally operate with their own services in mind. Sure, you can plug external solutions into a proprietary solution, but the integration with the vendor’s own ecosystem products is always a little bit better. A third-party directory really removes this problem, as the vendor is focused on offering businesses well-managed, easy identity and access management (IAM).


3. Why Okta?

Okta is the world’s leading Identity as a Service (IDaaS) solution for both enterprise and small-to-midsize businesses, with some incredible versatility owing to its cloud-based delivery model. For a deeper comparison of some of the Gartner market leaders in the modern identity space, head on over to our comparison of Ping & Okta.

How to connect AD to Okta

So you’ve decided to connect AD to Okta. Great! Now is the time to understand the requirements for your Okta connectors and your AD integration scenario before deployment – generally, two member servers will work for an HA deployment. More than 30,000 users? You should probably have a few more!


As an optional first step, we can create a service account and assign the relevant permissions for provisioning.

By default, the AD Agent installer will create “OktaService”, but you need to make updates for provisioning and group push. I’ve got a quick little script to create the required account with the minimum permissions for both.

#Quick and easy file to write output to – a lazy man's logging
Start-Transcript ./OktaServiceAccountConfig.log
#I would like an AD module please
Import-Module ActiveDirectory
#Basic details for the service account & domain.
$serviceAccountName = "svcOktaAgent"
$serviceAccountUsername = "svcOktaAgent"
$serviceAccountDescription = "svcOktaAgent – Okta AD Agent Service"
$serviceAccountPassword = "1SuperSecretPasswordThatWasRandomlyGenerated!!!"
$serviceAccountOU = "OU=ExampleOU,DC=corp,DC=contoso,DC=com"
$targetUserOUs = @("OU=ExampleOU,DC=corp,DC=contoso,DC=com", "OU=ExampleOU,DC=corp,DC=contoso,DC=com")
$targetGroupOUs = @("OU=ExampleOU,DC=corp,DC=contoso,DC=com")
$domain = Get-ADDomain
$serviceAccountUPN = "svcOktaAgent@$($domain.Forest)"
#Create the AD user, set its password and enable it.
New-ADUser -SamAccountName $serviceAccountUsername -Name $serviceAccountName -DisplayName $serviceAccountName -Path $serviceAccountOU -UserPrincipalName $serviceAccountUPN -CannotChangePassword $true -Description $serviceAccountDescription
Set-ADAccountPassword -Identity $serviceAccountUsername -NewPassword (ConvertTo-SecureString -String $serviceAccountPassword -AsPlainText -Force) -Reset
Enable-ADAccount -Identity $serviceAccountUsername
#Assign permissions for user creation & basic attribute write.
foreach ($targetOU in $targetUserOUs) {
    $userCommands = @(
        "dsacls `"$targetOU`" /G $($domain.Name)\$($serviceAccountUsername)`:CC;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;mail;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;userPrincipalName;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;sAMAccountName;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;givenName;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;sn;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;pwdLastSet;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;lockoutTime;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;cn;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;name;user",
        "dsacls `"$targetOU`" /I:S /G `"$($domain.Name)\$($serviceAccountUsername)`:CA;Reset Password;user`"",
        "dsacls `"$targetOU`" /I:S /G `"$($domain.Name)\$($serviceAccountUsername)`:WP;userAccountControl;user`""
    )
    foreach ($command in $userCommands) {
        CMD /C $command
    }
}
#Permissions required for group push.
foreach ($targetOU in $targetGroupOUs) {
    $groupCommands = @(
        "dsacls `"$targetOU`" /G $($domain.Name)\$($serviceAccountUsername)`:CCDC;group",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;sAMAccountName;group",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;description;group",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;groupType;group",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;member;group",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;cn;group",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;name;group"
    )
    foreach ($command in $groupCommands) {
        CMD /C $command
    }
}
Stop-Transcript

You will notice the account gets write access to a specific set of attributes within the OU – this is by design, as you should only ever assign access to the minimum required attributes. If you intend to add extra attributes to Universal Directory, you will need to grant the service account access to each attribute in a similar fashion.

For my lab, I’ve taken a shortcut and given the account full attribute write access.

Next, we can log into our Okta portal and download the AD agent. Run the installer on a separate server to your domain controller, as you should always strive to separate services between servers.

The agent installer is fairly straightforward, and the process can be initiated from the web portal (Directory Integrations > Add Directory > Active Directory). The key steps are as follows:

Specify the service account: in our case, this will be the custom account.


Provide your subdomain and log in to Okta: This step will allow the agent to register.


Select the essentials: The OU’s you would like to run sync from + attributes to be included in your profiles.


You should finish step 3 with a confirmation prompt. If you get stuck, there are detailed instructions in the Okta documentation.


Disable Active Directory Profile Mastering: for the purpose of this blog, I’m going to disable AD profile mastering. This ensures that Okta remains the authoritative source for my user profiles. You can find this option under the import and provisioning settings of the directory.


Enabling Okta to provision AD Accounts

Now that we have completed a base setup, most administrators will configure user matching and synchronisation (step 2 in the official Okta documentation).

I’m lucky enough to have a brand new AD domain that I would like to push Okta users to, allowing me to skip straight to Okta-mastered AD accounts – winning!

Okta-mastered accounts rely on the concept of Universal Directory. UD is an excellent Okta feature that enables administrators to pivot data from Okta into and out of third-party services. Universal Directory enables HR-as-a-master scenarios and data flow between the vast majority of Okta applications. For a detailed overview of UD, have a look at this excellent page in Okta’s official documentation.


Enabling Okta provisioning in AD: first, I need to navigate to my directory settings and enable “Create Users”. To ensure my user data always stays accurate, I’ll also be enabling “Update User Attributes”.


Create an Okta Group: Self-explanatory! Click Add Group and fill out the details as desired.


Assign a directory to the Okta Group: This ensures users who are added to the Okta Group are automatically written down into the AD OU. To do this, navigate to the group, and select Manage Directories.


Add your AD domain to the “Member Domain” section: you can select multiple AD domains here, which is extremely useful for multi-domain scenarios. You can also use UD to daisy-chain AD domains together, enabling a migration scenario. Select Next and configure your target OU. This should match the OU you specified in the earlier service account setup, as the service account needs permission to create your AD account.


Assign users to AD: everything is now set up! The assignment process is pretty simple – navigate to the group, select Manage Users and move across your targeted user. Select Save to initiate the Okta provisioning action.


A quick look in AD, and my account has already been provisioned! This process is extremely quick, with the account initially remaining disabled – a by-default security feature.


How to set up Okta and Active Directory Integration & Provisioning: Next steps

We hope our walkthrough of Okta and Active Directory integration & provisioning has given you the 10,000-foot overview of what is possible with Okta-to-AD integration – and that you’re able to see the unique value and potential business case within your company.

There is a broad range of integration options, processes and nitty-gritty application settings, and while the barrier to initial entry is quite low, a detailed setup can get complex quickly. As always, please feel free to reach out to myself or the team if you enjoyed this article or have any questions!

Building Okta Resources with Terraform

As a consultant who spends large amounts of time implementing customer solutions, automation has become a key part of my job. A technology that can be automated is instantly more attractive for me to use. This is one of the key reasons that I love Terraform: it enables me to write cross-platform automation, and when automation isn’t natively supported, I can write a custom provider. That’s why the recent announcement of a custom Terraform provider for Okta is my favourite feature announcement of 2019, and why I’ll be covering the basics of Okta & Terraform in this blog. If you’re not sure how to use Terraform, have a look here for an initial overview; otherwise, let’s dive in!

Setting up – Building the provider and API key generation.

The first thing you’ll need to do is build the Okta provider. This isn’t too hard; however, the GitHub readme is written for Unix users. If you’re on Windows like me, you will need a basic understanding of how to compile Go and how to use custom providers in Terraform.


First step, clone the Git repo and CD in.


Next, build the provider. Note that if you try to run the generated EXE, you’ll be prompted that the file is a plugin. To stick with the Terraform documentation, I’ve used the following EXE naming format: terraform-provider-PRODUCT-vX.Y.Z.exe


Finally, copy the provider into the Terraform plugin path. For 64-bit Windows, this is generally: %APPDATA%\terraform.d\plugins\windows_amd64

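For reference, the end-to-end build looks roughly like this from a Windows command prompt (a sketch – substitute the version tag for whatever you’ve just built):

git clone https://github.com/articulate/terraform-provider-okta.git
cd terraform-provider-okta
go build -o terraform-provider-okta-vX.Y.Z.exe
copy terraform-provider-okta-vX.Y.Z.exe %APPDATA%\terraform.d\plugins\windows_amd64\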

Now that we have a functional provider for Terraform, it’s time to generate an API key. Please be careful with these keys, as they inherit your Okta permissions and shouldn’t be left lying around!

Start in the Okta portal and navigate to Security > API > Tokens, where you’ll find the Create Token button.


Fill in a token name – if you’re using this in production, it’s generally worth recording some data about the token’s usage here!



Insert your freshly generated API token into the following Terraform HCL:

provider "okta" {
 org_name  = "demoorg"
 api_token = "API TOKEN HERE"
 base_url  = "okta.com"
}
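
Hard-coding the token in HCL is fine for a lab, but keep it out of source control for anything real. To the best of my knowledge, the provider also reads these values from environment variables, so you can drop api_token from the file entirely (verify against the provider readme):

# PowerShell – set before running terraform
$env:OKTA_ORG_NAME  = "demoorg"
$env:OKTA_API_TOKEN = "API TOKEN HERE"
$env:OKTA_BASE_URL  = "okta.com"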

Building resources

Now that we have a provider configured, let’s create some resources. Thankfully, the provider developers (articulate) have detailed common use cases for the provider here. Creating a user isn’t too difficult, provided you have the four mandatory fields handy:

resource "okta_user" "XelloDemo" {
 first_name = "Xello"
 last_name  = "Okta Terraform"
 login      = "Xello.OktaTerraform@xellolabs.com"
 email      = "Xello.OktaTerraform@xellolabs.com"
 status     = "STAGED"
}

terraform init will set up the base environment and initialise the provider, and terraform apply will deploy the user.
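
If you’re following along, the full loop is the standard one (terraform plan is optional, but worth running to review the changes first):

terraform init
terraform plan
terraform apply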

Now, deploying a user isn’t too imaginative or special. After all, you can easily bulk import from a CSV, and scripting user creation against the API isn’t too hard. This is where the power of articulate’s provider comes in – it currently supports the majority of the Okta API, and the public can add support where needed. Let’s look at something a bit more advanced.

resource "okta_user" "AWSUser1" {
first_name = "XelloLab"
 last_name  = "AWSDemo"
 login      = "XelloLab.AWSDemo@xellolabs.com"
 email      = "XelloLab.AWSDemo@xellolabs.com"
 status     = "ACTIVE"
}

resource "okta_group" "AWSGroup" {
 name = "AWS Assigned Users"
 description = "This group was deployed by Terraform"
 users = [
   "${okta_user.AWSUser1.id}"
 ]
}

resource "okta_app_saml" "test" {
 preconfigured_app = "amazon_aws"
 label             = "AWS - Terraform Deployed"
  groups            = ["${okta_group.AWSGroup.id}"]
 users {
   id       = "${okta_user.AWSUser1.id}"
   username = "${okta_user.AWSUser1.email}"
 }
}

In the above Terraform code, we have a user being configured, assigned to a group, and then assigned to an Okta Integration Network application. Rather than clicking through the Okta portal, I’ve relied on infrastructure as code to deploy all the resources. I can even begin to combine providers to deliver cross-platform integration – this snippet will work nicely with the AWS identity provider, enabling me to neatly configure the SAML integration between the two services without leaving my shell or pipeline. The end result? No ClickOps, just code – and the AWS application configured in a matter of minutes!


Fast and Furious – Okta Drift

One of the things Terraform really excels at is minimising configuration drift. Regularly running terraform apply, either from your laptop or a CI/CD pipeline, can ensure that applications are maintained as documented and deployed. In the below example, you can see Terraform correcting an undesired application update.


You shouldn’t have to worry about an overeager intern destroying your application setup – the Okta/Terraform combo prevents this!

Cleaning up 


The other super useful thing about Terraform is the cleanup process. I’ve lost count of how many times I’ve clicked through a portal or navigated to an API doc just to bulk delete resources – Okta users immediately come to mind! By running terraform destroy, I can immediately clean up my environment. Great for testing out new functionality or lab scenarios.
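
For completeness, teardown is a single command (it will prompt before deleting; add -auto-approve at your own risk):

terraform destroy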

Hopefully by now you’re beginning to understand some of the options available when configuring Okta with Terraform. For my day-to-day work as a consultant, this is an excellent integration, and the varied cross-platform use cases are nearly limitless. As always, for any questions, please feel free to reach out to myself or the team!