More often than not, organisations move to the cloud on a one-way path. This can be a challenging process, with a large amount of learning, growth and understanding required. But why does it all have to be in one direction? What about modernising by bringing the cloud to you? One of the ways organisations can begin this process when moving to Azure is by leveraging Azure Arc, a provider-agnostic toolchain that integrates IaaS, data services and Kubernetes with the Azure control plane.
Azure Arc Architecture
Using Arc, technology teams can use a number of powerful Azure tools in an on-premises environment. These include:
Azure Policy and guest extensions
Azure Monitor
Azure VM Extensions
Azure Security Centre
Azure Automation including Update Management, Change Tracking and Inventory.
Most importantly, the Arc pricing model is my favourite type of pricing model: FREE! Arc focuses on connecting to Azure and providing visibility, with some extra cost required as you consume secondary services like Azure Security Centre.
Onboarding servers to Azure Arc
Onboarding servers to Arc is a relatively straightforward task and is supported in a few different ways. If you're working with a small number of servers, onboarding using the Azure portal is a manageable task. However, if you're running at scale, you probably want to look at an automated deployment using tools like the VMware CLI script or Ansible.
For the onboarding in this blog, I’m going to use the Azure Portal for my servers. First up, ensure you have registered the HybridCompute provider using Azure CLI.
az provider register --namespace 'Microsoft.HybridCompute'
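If you want to confirm the registration has finished before moving on, a quick status check along these lines does the trick:
# Check the provider registration state – should return "Registered" once complete
az provider show --namespace 'Microsoft.HybridCompute' \
  --query registrationState --output tsv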
Next, search for Arc in the portal and select Add a server. The process here is very much "follow the bouncing ball" and you shouldn't have too many questions. Data residency is already supported in Australia East, so no concerns there for regulated entities!
Providing basic residency and storage information
When it comes to tagging Arc servers, Microsoft suggests a few location-based tags, with the option to include business-based tags as well. In a lab scenario like this demo, location is pretty useless; however, in real-world scenarios it can be quite useful for identifying which resources exist at each site. Once tagging is complete, you will be provided with a script for the target server. You can reuse the generated script across multiple servers, however you will need to update any custom tags you add.
The script execution itself is generally a pretty quick process, with the end result being a provisioned resource in Azure and the Connected Machine Agent on your device.
Connected Machine Agent – Installed
Our servers in Azure
So what can we do?
Now that you've completed onboarding, you're probably wondering: what next? I'm a big fan of the Azure Monitoring platform (death to SCOM), so for me this will always be a Log Analytics onboarding task, closely followed by Security Centre. One of the key benefits of Azure Arc is the simplicity of everything, so you should find onboarding any Arc-supported solution to be a straightforward process. For Log Analytics, navigate to Insights, select your Log Analytics workspace, enable and you're done!
Enabling Insights
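If you would rather script the agent install than click through the portal, something along these lines should achieve the same result – a sketch assuming the connectedmachine CLI extension, with placeholder machine, resource group and workspace values:
# Install the connectedmachine CLI extension, then push the Log Analytics (MMA) extension to an Arc server
# Machine name, resource group, location and workspace ID/key are placeholders
az extension add --name connectedmachine
az connectedmachine extension create \
  --machine-name 'ARC-SERVER-01' \
  --resource-group 'ARC-SERVERS-RG' \
  --location 'australiaeast' \
  --name 'MicrosoftMonitoringAgent' \
  --publisher 'Microsoft.EnterpriseCloud.Monitoring' \
  --type 'MicrosoftMonitoringAgent' \
  --settings '{"workspaceId":"<workspace-id>"}' \
  --protected-settings '{"workspaceKey":"<workspace-key>"}'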
Which logs you collect depends entirely on your log collection strategy, with Microsoft providing further detail on that process here. In my opinion, having performance data in a single location is worth its weight in gold.
Performance Data
If you have already connected Security Centre to your workspace, onboarding to Log Analytics often also connects your device to Security Centre, enabling detailed monitoring and vulnerability management.
Domain controller automatically enabled for Security Centre
Right for you?
While the cloud enables organisations to move quickly, sometimes moving slowly is just what the doctor ordered. Azure Arc is definitely a great platform for organisations looking to begin using Azure services and most importantly, bring Azure into their data centre. If you’re wanting to learn more about Arc, Microsoft has published an excellent set of quick-starts here and the documentation is also pretty comprehensive. Stay tuned for our next post, where we explore using Azure Arc with Kubernetes. Until next time, stay cloudy!
Recently I've been spending a bit of time working with a few customers, onboarding them to Azure Kubernetes Service. This is generally a pretty straightforward process: build the cluster, configure ACR, set up CI/CD.
During the CI/CD buildout with one customer, we noticed pretty quickly that our cheap and easy Basic ACR was filling up – mostly with development containers which were used once or twice and then never again.
Not yet 50% full in less than a month.
In my opinion the build rate of this repository wasn’t too bad. We pushed to development and testing 48 times over a one week period, with these incremental changes flowing through to production pretty reliably on our weekly schedule.
That being said, the growth trajectory would have our development ACR filling up in about 3-4 months. Sure, we could simply upgrade the ACR to a Standard or Premium tier, but at what cost? A 4x price increase between the Basic and Standard SKUs, and an even steeper 9x to Premium. Thankfully, we can solve this in a few ways.
Manage our container size – start from scratch or use a container-specific OS like Alpine.
Build containers less frequently – We have almost a 50:1 development to production ratio, so there is definitely a bit of wiggle room there.
Manage the registry contents, deleting old or untagged images.
Combining these options provides our team with a long-term and scalable solution. But how can we implement option number 3?
ACR Purge & Automatic Cleanup
As a preview feature, Azure Container Registry now supports filter-based cleanup of images and containers. This can be completed as an ad-hoc process or as a scheduled task. To get things right, I'll first build an ACR command that deletes tagged images.
# Purge command held in an environment variable – the --dry-run flag only lists what would be deleted
PURGE_CMD="acr purge \
  --filter 'container/myimage:dev-.*' \
  --ago 3d --dry-run"

# Run the purge as an on-demand ACR task (no build context is needed, hence /dev/null)
az acr run \
  --cmd "$PURGE_CMD" \
  --registry mycontainerregistry \
  /dev/null
I've set an agreed-upon age for my containers and I'm quite selective about which containers I purge. The above dry-run only selects the development "myimage" containers and gives me a nice example of what my task would actually do.
Including multiple filters in purge commands is supported, so feel free to build expansive query sets. Once you are happy with the dry-run output, it's time to set up an automatic job. ACR uses standard cron syntax for scheduling, so this should be a pretty familiar experience for Linux administrators.
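Scheduling is just a matter of wrapping the same purge command in an ACR task – a sketch with an example registry name and a daily 2am schedule:
# Create a scheduled ACR task that purges matching dev images every day at 2am (cron syntax, UTC)
az acr task create \
  --name dailyPurgeTask \
  --registry mycontainerregistry \
  --cmd "acr purge --filter 'container/myimage:dev-.*' --ago 3d" \
  --schedule "0 2 * * *" \
  --context /dev/null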
And just like that, we have a task which will clean up our registry daily at 2am.
As an ARM template please?
If you’re operating or deploying multiple container registries for various teams, you might want to standardise this type of task across the board. As such, integrating this into your ARM templates would be mighty useful.
Microsoft provides the "Microsoft.ContainerRegistry/registries/tasks" resource type for deploying these actions at scale. There is, however, a slightly irritating quirk: your ACR command must be Base64-encoded YAML following the task specification neatly documented here. I'm not sure about our readers, but generally combining Base64, YAML and JSON leaves a nasty taste in my mouth!
The Base64-encoded content translates to YAML along the following lines. Note that it includes the required command and some details about the execution timeout. For actions that purge a large number of containers, Microsoft advises you may need to increase this limit beyond the default 3600 seconds (1 hour).
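As a rough sketch (reusing the filter from earlier and an assumed file name), the task content and its encoding might look like this:
# Write the task definition, then Base64-encode it for the encodedTaskContent property of the ARM resource
cat <<'EOF' > purge-task.yaml
version: v1.1.0
steps:
  - cmd: acr purge --filter 'container/myimage:dev-.*' --ago 3d
    disableWorkingDirectoryOverride: true
    timeout: 3600
EOF
base64 -w 0 purge-task.yaml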
Hopefully, you have found this blog post informative and useful. There are a number of scenarios for this feature-set; deleting untagged images, cleaning up badly named containers or even building new containers from scratch. I’m definitely excited to see this feature move to general availability. As always, please feel free to reach out if you would like to know more. Until next time!
Recently I've been spending a fair bit of effort working on Azure Kubernetes Service. I don't think it really needs repeating, but AKS is an absolutely phenomenal product. You get all the excellence of the K8s platform, with a huge percentage of the overhead managed by Microsoft. I'm obviously biased, as I spend most of my time on Azure, but I definitely find it easier than GKE & EKS. The main problem I have with AKS is cost – not for production workloads or business operations, but for lab scenarios where I just want to test my manifests, Helm charts or whatever. There are definitely a lot of options for spinning up clusters on demand for lab scenarios, or even for reducing the cost of an always-present cluster: Terraform, Kind or even just right-sizing/power management. I could definitely find a solution that fits within my current Azure budget. Never being one to take the easy option, I've taken a slightly different approach for my lab needs: a two-node (soon to be four) Raspberry Pi Kubernetes cluster.
Besides just being cool, it's great to have a permanent cluster available for personal projects, with the added bonus that my Azure credit is saved for more deserving work!
That's all well and good, I hear you saying, but I needed this cluster to lab AKS scenarios, right? Microsoft has been slowly working to integrate "non-AKS" Kubernetes into Azure in the form of Arc-enabled clusters – think of this almost as an Azure competitor to Google Anthos, but with so much more. The reason? Arc doesn't just cover the K8s platform; it brings a whole host of Azure capability right onto the cluster.
The setup
Configuring an Arc-connected cluster is a monumentally simple task for clusters that pass muster. Two steps, to be exact.
1. Enable the preview Azure CLI extensions
az extension add --name connectedk8s
az extension add --name k8sconfiguration
2. Run the CLI command to onboard an Arc-enabled cluster
az connectedk8s connect --name RPI-KUBENETES-LAB --resource-group KUBERNETESARC-RG01
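On a supported cluster, a quick sanity check that onboarding worked is to look for the Arc agent pods and the connected cluster resource:
# The Arc agents land in the azure-arc namespace; the connected cluster should also show as "Connected"
kubectl get pods -n azure-arc
az connectedk8s show --name RPI-KUBENETES-LAB --resource-group KUBERNETESARC-RG01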
In the case of my Raspberry Pi cluster, the arm64 architecture really doesn't cut it. Shortly after you run your commands, you will receive a timeout and discover pods stuck in a pending state.
Timeouts like this are never good.
Our very stuck pods.
Digging into the deployments, it quickly becomes obvious that an amd64 architecture is really needed to make this work. Pods are scheduled across the board with a node selector. Removing this causes a whole host of issues related to what looks like both container compilation & software architecture. For now, it looks like I'm stuck with a tantalising object in Azure & a local cluster for testing. I've become a victim of my own difficult tendencies!
So close, yet so far.
Right for you?
There are a lot of reasons to run your own cluster – generally speaking, if you're doing so, you will be pretty comfortable with the Kubernetes ecosystem as a whole. This will just be "another tool" you could add, and you should apply the same consideration you would to every other service you consider using. In my opinion, the biggest benefit of the service is the simplified, centralised management plane across multiple clusters. This allows me to manage my own (albeit short-lived) AKS clusters and my desk cluster with centralised policy enforcement, RBAC & monitoring. I would probably look to implement it if I were running a datacentre cluster, and definitely if I were looking to migrate to AKS in the future. If you are considering it, keep in mind a few caveats:
The Arc Service is still in preview – expect a few bumps as the service grows
Currently only available in East US & West Europe – you might be stuck for now if operating under data residency requirements.
At this point in time, I'll content myself with my local cluster. Perhaps I'll publish a future blog post if I manage to work through all these architecture issues. Until next time, stay cloudy!
In medicine there is a saying: "an ounce of prevention is worth a pound of cure". What this concept boils down to for health practitioners is that engaging early is often the cheapest & simplest method for preventing expensive & risky health scenarios. It's a lot cheaper & easier to teach school children about healthy foods & exercise than to complete a heart bypass operation once someone has neglected their health. Importantly, this concept extends to multiple fields, with cybersecurity being no different. Since the beginning of cloud, organisations everywhere have seen explosive growth in infrastructure provisioned into Azure, AWS and GCP. This explosive growth all too often corresponds with an increased security workload, without the required budgetary & operational capability increases. In the quest to increase security efficiency and reduce workload, this is a critical challenge. Once a security issue hits your CSPM, Azure Security Centre or AWS Trusted Inspector dashboard, it's often too late; the security team now has to complete remediation work within a production environment. Infrastructure as Code security testing is a simple addition to any pipeline which will reduce the security team's workload!
Preventing this type of incident is exactly why we should complete BASIC security testing.
We’ve already covered quality testing within a previous post, so today we are going to focus on the security specific options.
The first integrated option for ARM templates is easily the Azure Secure DevOps Kit (AzSK for short). The AzSK has been around for a while and is published by the Microsoft Core Services and Engineering division; it provides governance, security IntelliSense & ARM template validation capability, for free. Integrating it into your DevOps pipelines is relatively simple, with pre-built connectors available for Azure DevOps and a PowerShell module for local users to test with.
Another great option for security testing is Checkov from Bridgecrew. I really like this tool because it provides over 400 tests spanning AWS, GCP, Azure and Kubernetes. The biggest drawback I have found is the export configuration – Checkov exports JUnit test results, however if nothing is applicable for a specified template, no tests will be displayed. This isn't a huge deal, but can be annoying if you prefer to see consistent tests across all infrastructure.
The following snippet is all you really need if you want to import Checkov into an Azure DevOps pipeline & start publishing results!
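As a sketch (directory and file names are illustrative), the pipeline steps boil down to installing Checkov, scanning your templates and emitting JUnit XML for a PublishTestResults task to pick up:
# Install Checkov and scan the template directory, writing JUnit-formatted results for publishing
pip install checkov
checkov --directory ./templates --output junitxml > checkov-results.xml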
Depending on your background, breaking the build can really seem like a negative thing. After all, you want to prevent these issues from getting into production, but you don't want to be a jerk. My position on this is that security practitioners should NOT break the build for cloud infrastructure testing within dev, test and staging. (I can already hear the people who work in regulated environments squirming at this – but trust me, you CAN do this.) While integration of tools like this is definitely an easy way to prevent vulnerabilities or misconfigurations from reaching these environments, the goal is to raise awareness & not increase negative perceptions.
Security should never be the first team to say no in pre-prod environments.
Use the results of any tools added into a pipeline as a chance to really evangelize security within your business. Yelling something like “Exposing your AKS Cluster publicly is not allowed” is all well and good, but explaining why public clusters increase organisational risk is a much better strategy. The challenge when security becomes a blocker is that security will no longer be engaged. Who wants to deal with the guy who always says no? An engaged security team has so much more opportunity to educate, influence and effect positive security change.
Don’t be this guy.
Importantly, engaging well within dev/test/sit and not being that jerk who says no, grants you a magical superpower – When you do say no, people listen. When warranted, go ahead and break the build – That CVSS 10.0 vulnerability definitely isn’t making it into prod. Even better, that vuln doesn’t make it to prod WITH support of your development & operational groups!
Hopefully this post has given you some food for thought on security testing, until next time, stay cloudy!
Note: Forrest Brazeal really has become my favourite tech-related comic dude. Check his stuff out here & here.
With the advent of modern collaboration platforms, users are no longer content to work within the organisational boundary. More and more organisations are being challenged to bring in external partners and users for projects and day-to-day operations. But how can we do this securely? How do IT managers minimise licensing costs? Most importantly, how can we empower the business to engage without IT? This problem is at the forefront of the thinking behind Azure AD Access Packages – an Azure solution enabling self-service onboarding of partners, providers and collaborators at scale. Even better, the solution supports both internal and external onboarding. You can and should set this up internally; the less work IT has to do managing access, the better, right?
Before we dig too deep, I think a brief overview of how access packages are structured would be useful. On a hierarchy level, packages are placed into catalogs, which can be used to enable multiple packages for a team to use. Each package holds resources, and a policy defines who can request access to them, and when. The diagram below from Microsoft neatly sums this up.
Access Package Hierarchy
This all sounds great I hear you saying. So what does this look like? If you have an Office 365 account, you’re welcome to log in and look for yourself here, otherwise a screenshot will have to do.
External Access Package UI
To get started with this solution, you will need an Azure AD P2 licensed tenant. Most organisations will obtain P2 licences through an M365 E5 subscription, however you can purchase them directly if you have M365 E3 or lower and are looking to avoid some costs. You will need at least a 1:1 licence assignment for internal use cases, while external identities have recently moved to a "Monthly Active Users" licensing model – one P2 licence in your tenant covers the first 50,000 external users for free!
Once you’ve enabled this, head on over to the “Identity Governance” blade within Azure AD. This area has a wealth of functionality that benefits nearly all organisations, so I would highly recommend investigating the other items available here. Select Access Packages to get started.
The UI for creating an access package is quite simple; clicking create new will walk you through the process of assigning applications, groups, teams & SharePoint sites.
Access Package creation UI
Unfortunately some services like Windows Virtual Desktop will not work with access packages, however this is a service limitation rather than an Azure AD limitation. Expect these challenges to be resolved over time.
At the time of writing, the AzureADPreview module does not support Access Packages. The Microsoft Graph beta endpoint does, however, so an MS Graph based script it is!
While all this PowerShell might look a bit daunting, all it is really doing is generating API request bodies and pushing them through six basic API calls (a rough sketch of one of these calls follows the list):
Retrieve information about our specified catalog (General)
Create an Access Package
Add each resource to our specified catalog
Get each resource’s available roles
Assign the resource & role to our Access Packages
Create a Policy which enables assignment of our Access Package
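As a rough illustration of step 2 (the beta endpoint shape may change, and the token and catalog ID below are placeholders), creating the access package itself looks something like this:
# Create an access package in an existing catalog via the Microsoft Graph beta endpoint
curl -X POST "https://graph.microsoft.com/beta/identityGovernance/entitlementManagement/accessPackages" \
  -H "Authorization: Bearer $GRAPH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "catalogId": "<catalog-id>",
        "displayName": "External Collaboration",
        "description": "Access package created via Graph"
      }'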
Hopefully this article has provided you with a decent overview of Azure AD Access Packages. There are a lot of benefits when applying this in B2B scenarios, especially when it comes to automating user onboarding & access management. With big investments & changes from Microsoft occurring in this space, expect further growth & new features as the year comes to a close!
Please note: while we do distribute an access package link within this blog, requests for access are not sent to a monitored email address and will not be approved. If you would like to know more, please don't hesitate to reach out as we would be happy to help.
Recently I was approached by a customer regarding a challenge they wanted to solve: how to delegate administrative control of a few users within Azure Active Directory to some lower-level administrators? This is a common problem experienced by teams as they move to cloud-based directories – a flat structure doesn't really allow for delegation along business rules. Enter Azure AD Administrative Units: a preview feature enabling delegation & organisation of your cloud directory. For Active Directory administrators, this will be quite a familiar experience, akin to Organisational Units & delegated permissions. Okta also has similar functionality, albeit implemented differently.
Active Directory Admins will immediately feel comfortable with Azure AD Admin Units
So when do you want to use this? Basically any time you find yourself wanting a hierarchical & structured directory. While still in preview, this feature will likely grow over time to support advanced RBAC controls and in the interim, this is quite an elegant way to delegate out directory access.
Setting up an Administrative Unit
Setting up an Administrative Unit is quite a simple task within the Azure Portal; Navigate to your Azure AD Portal & locate the option under Manage.
Select Add, and provide your required names & roles. Admin assignment is focused on user & group operations, as device administration has similar capability under custom Intune roles, and application administrators can be managed via dedicated roles.
You can also create administrative units using the Azure AD PowerShell module; a simple one-line command will do the trick!
New-AzureADAdministrativeUnit -Description "Admin Unit Blog Post" -DisplayName "Blog-Admin-Users"
User Management
Once you have created an administrative unit, you can begin to add users & groups. At this point in time, administrative units only support manual assignment, either one by one or via CSV upload. The process itself is quite simple; select Add user and click through everyone you would like to be included.
While this works quite easily for small setups, at scale you would likely find this to be a bit tedious. One way to work around this is to combine dynamic groups with your chosen PowerShell execution environment. For me, this is an Automation Account. First, configure a dynamic group which automatically drags in your desired users.
Next, execute the following PowerShell snippet. Note that I am using the Azure AD Preview module, as support is yet to move to the production module.
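A minimal sketch of that logic – assuming the AzureADPreview module is loaded, Connect-AzureAD has already been run, and the group and admin unit names below are placeholders – looks something like this:
# Resolve the (placeholder) dynamic group and administrative unit
$dynamicGroupId = (Get-AzureADGroup -SearchString "Blog-Admin-Users-Dynamic").ObjectId
$adminUnitId    = (Get-AzureADAdministrativeUnit | Where-Object DisplayName -eq "Blog-Admin-Users").ObjectId

# Treat the dynamic group as the authoritative source of membership
$groupMembers = Get-AzureADGroupMember -ObjectId $dynamicGroupId -All $true
$auMembers    = Get-AzureADAdministrativeUnitMember -ObjectId $adminUnitId

# Add any group member missing from the administrative unit
foreach ($member in $groupMembers) {
    if ($auMembers.ObjectId -notcontains $member.ObjectId) {
        Add-AzureADAdministrativeUnitMember -ObjectId $adminUnitId -RefObjectId $member.ObjectId
    }
}

# Remove anyone no longer in the dynamic group (delete this loop to allow manual additions)
foreach ($member in $auMembers) {
    if ($groupMembers.ObjectId -notcontains $member.ObjectId) {
        Remove-AzureADAdministrativeUnitMember -ObjectId $adminUnitId -MemberId $member.ObjectId
    }
}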
This can be configured on a schedule as frequently as you need this information to be accurate!
You will note here that one user gets neatly removed from the Administrative Unit – this is because the above PowerShell treats the dynamic group as an authoritative source for Admin Unit membership. When dealing with assignment through user details (lifecycle management), I find that selecting authoritative sources reduces both work effort and confusion. Who wants to do manual management anyway? Should you really want to allow manual additions, simply remove the loop marked to remove members!
Hopefully you find this post a useful insight into the usage of Administrative Units within your organisation. There are a lot of useful scenarios where this can be leveraged, and this feature should most definitely help you minimise administrative privilege in your environment (hooray!). As always, feel free to reach out with any questions or comments! Stay tuned for my next post, where I will be diving into Azure AD Access Packages.
Recently I was asked to assist with the implementation of MFA in a complex on-premises environment. Beyond the implementation of Okta, all infrastructure was on-premises and neatly presented to external consumers through an F5 APM/LTM solution. This post details my thoughts & the lessons I learnt configuring RADIUS authentication for services behind an F5, utilising AAA Okta RADIUS servers.
Ideal Scenario?
Before I dive into my lessons learnt, I want to preface this article by saying there is a better way. There is almost always a better way to do something. In a perfect world, all services would support token-based single sign-on. When the security of a service can't be achieved with the best option, always look for the next best thing. Mature organisations excel at finding a balance between what is best and what is achievable. In my scenario, the best-case implementation would have been inline SSO with an external IdP. Under this model, Okta completes SAML authentication with the F5 platform, and then the F5 creates and provides relevant assertions to on-premises services.
Unfortunately, the reality of most technology environments is that not everything is new and shiny. My internal applications did not support SAML, so here we are with the Okta RADIUS agent and a flow that looks something like the below (replace step 9 with application auth).
Importantly, this implementation is not inherently insecure or bad; however, it does have a few more areas that could be better. Okta calls this out in the documentation for exactly this reason. Something important to understand is that RADIUS secrets can be and are compromised, and it is relatively trivial to decrypt traffic once you are in possession of a secret.
APM Policy
If you have a read of the Okta documentation on this topic, you will quickly be presented with an APM policy example.
You will note there are two RADIUS Auth blocks – these are intended to separate the login data verification: RADIUS Auth 1 is responsible for password authentication, and Auth 2 is responsible for verifying a provided token. If you're using OTP only, you can get away with a simpler APM policy – Okta supports providing both the password and an OTP inline, separated by a comma, for verification.
Using this option, the policy can be simplified a small amount – always opt to simplify policy; fewer places for things to go wrong!
Inline SSO & Authentication
In a similar fashion to Okta, F5 APM provides administrators the ability to pass credentials through to downstream applications. This is extremely useful when dealing with legacy infrastructure, as credential mapping can be used to correctly authenticate a user against a service using the F5. The below diagram shows this using an initial login with RSA SecurID MFA.
For most of my integrations, I was required to use HTTP forms. When completing this authentication using the APM, having an understanding of exactly how the form is constructed is really critical. The below example is taken from an Exchange form – leaving out the flags parameter originally left my login failing & me scratching my head.
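As an illustration only – the field names below come from a typical OWA logon form and may differ in your environment – the POST the APM has to reproduce looks roughly like this:
# A typical OWA forms login: note the 'flags' and 'forcedownlevel' fields sent alongside the credentials
curl -k 'https://mail.example.com/owa/auth.owa' \
  --data 'destination=https://mail.example.com/owa&flags=4&forcedownlevel=0&username=EXAMPLE%5Cjsmith&password=********'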
An annoying detail about forms-based inline authentication is that if you already have a session, the F5 will happily log straight back into the target service. This can be a confusing experience for most users, as we generally expect to be logged out when we click that logout button. Thankfully, we can handle this conundrum neatly with an iRule.
iRule Policy application
For this implementation, I had a specific set of requirements on when APM policy should be applied to enforce MFA; not all services play nice with extra authentication. Using iRules on virtual services is a really elegant way to control when an APM policy applies. On-premises Exchange is something that lots of organisations struggle to secure – especially legacy ActiveSync. The below iRule modifies when policy is applied, using URI contents & device type.
One thing to be aware of when implementing iRules like this is directory traversal – you really do need a concrete understanding of which paths are and are not allowed. If a determined adversary can authenticate against a desired URI, they should NOT be able to switch to an undesired URI. The above example is great for showing this – I want my users to access their personal ECP pages just fine. Remote administrative Exchange access? That's a big no-no, and I redirect to an authorised endpoint.
Final Thoughts
Overall, the solution implemented here is quite elegant, considering the age of some of the infrastructure. I will always advocate for MFA enablement on a service – it prevents so many password-based attacks and can really uplift the security of your users. While overall service uplift is always a better option to enable security, you should never discount small steps you can take using existing infrastructure. As always, leave a comment if you found this article useful!
Recently I spent some time updating my personal technology stack. As an identity nerd, I thought to myself that SSO everywhere would be a really nice touch. Unfortunately, SSO everywhere is not as easy as it sounds – more on that in a future post. For my personal setup, I use Office 365 and have centralised the majority of my applications on Azure AD. I find that the licensing inclusions for my day-to-day work and lab are just too good to resist. But what about my other love? If you've read this blog recently, you will know I've invested heavily in the Okta Identity platform. However, aside from a root account, I really don't want to store credentials any more – especially considering my track record with lab account management. This blog details my experience and tips for setting up inbound federation from AzureAD to Okta, with admin role assignment being pushed to Okta using SAML JIT.
So what is the plan?
For all my integrations, I'm aiming to ensure that access is centralised; I should be able to create a user in AzureAD and then push them out to the application. How this occurs is a problem to handle per application. As Okta is traditionally an identity provider, this setup is a little different – I want Okta to act as the service provider. Cue inbound federation. For the uninitiated, inbound federation is an Okta feature that allows any user to SSO into Okta from an external IdP, provided your admin has done some setup. More commonly, inbound federation is used in hub-and-spoke models for Okta orgs. In my scenario, Azure AD is acting as a spoke for the Okta org. The really nice benefit of this setup is that I can configure SSO from either service into my SaaS applications.
Configuring Inbound Federation
The process to configure Inbound federation is thankfully pretty simple, although the documentation could probably detail this a little bit better. At a high level, we’re going to complete 3 SSO tasks, with 2 steps for admin assignment via SAML JIT.
Configure an application within AzureAD
Configure an identity provider within Okta & download some handy metadata
Configure the Correct Azure AD Claims & test SSO
Update our AzureAD Application manifest & claims
Assign admin groups using SAML JIT and our AzureAD claims.
While it does seem like a lot, the process is quite seamless, so let's get started. First up, add an enterprise application to Azure AD; name this whatever you would like your users to see in their apps dashboard. Navigate to SSO and select SAML.
Next, the Okta configuration. Select Security > Identity Providers > Add. You might be tempted to select 'Microsoft' for OIDC configuration, however we are going to select SAML 2.0 IdP.
Using the data from our Azure AD application, we can configure the IDP within Okta. My settings are summarised as follows:
IdP Username should be: idpuser.subjectNameId
SAML JIT should be ON
Update User Attributes should be ON (re-activation is personal preference)
Group assignments are off (for now)
Okta IdP Issuer URI is the AzureAD Identifier
IdP Single Sign-On URL is the AzureAD login URL
IdP Signature Certificate is the Certificate downloaded from the Azure Portal
Click Save and you can download service provider metadata.
Upload the file you just downloaded to the Azure AD application and you’re almost ready to test. Note that the basic SAML configuration is now completed.
Next we need to configure the correct data to flow from Azure AD to Okta. If you have used Okta before, you will know the four key attributes on anyone’s profile: username, email, firstName & lastName. If you inspect the downloaded metadata, you will notice this has slightly changed, with mobilePhone included & username seemingly missing. This is because the Universal Directory maps username to the value provided in NameID. We configured this in the original IdP setup.
Add a claim for each attribute, feeling free to remove the other claims that use fully qualified namespaces. My final claims list looks like this:
At this point, you should be able to save your work ready for testing. Assign your app to a user and select the icon now available on their myapps dashboard. Alternatively, you can select "Test as another user" within the application SSO config. If you have issues when testing, the "MyApps Secure Sign In Extension" really comes in handy here. You can grab this from the Chrome or Firefox web store and use it to cross-reference your SAML responses against what you expect to be sent. Luckily, I can complete SSO on the first pass!
Adding Admin Assignment
Now that I have SSO working, admin assignment to Okta is something else I would really like to manage in Azure AD. To do this, first I need to configure some admin groups within Okta. I’ve built three basic groups, however you can provide as many as you please.
Next, we need to update the application manifest for our Azure AD app. This can be done at App Registrations > AppName > Manifest. For each group that you created within Okta, add a new appRole like the below, ensuring that the role ID is unique.
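A sketch of one such entry follows – the GUID is a placeholder you generate yourself, and the values line up with one of the Okta groups created earlier:
{
  "allowedMemberTypes": [ "User" ],
  "description": "Super Admins",
  "displayName": "Super Admins",
  "id": "<unique-guid-per-role>",
  "isEnabled": true,
  "value": "Super Admins"
}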
For simplicity, I have matched the value, description and displayName details. The ‘value’ attribute for each approle must correspond with a group created within the Okta Portal, however the others can be a bit more verbose should you desire.
Now that we have modified our application with the appropriate Okta roles, we need to ensure that AzureAD & Okta send and accept this data as a claim. First, within AzureAD, update your existing claims to include the user role assignment. This can be done with the "user.assignedRoles" value like so:
Next, update the Okta IDP you configured earlier to complete group sync like so. Note that the group filter prevents any extra memberships from being pushed across.
For a large number of groups, I would recommend pushing attributes as claims and configuring group rules within Okta for dynamic assignment.
Update your Azure AD user/group assignment within the Okta app, and once again, you're ready to test. A second sign-in to the Okta org should reveal an admin button in the top right, and moving into this, you can validate group memberships. In the below example, I've neatly been added to my Super admins group.
Wrapping Up
Hopefully this article has been informative on the process for setting up SAML 2.0 Inbound federation using Azure AD to Okta. Depending on your identity strategy, this can be a really powerful way to manage identity for a service like Okta centrally, bring multiple organisations together or even connect with customers or partners. Personally, this type of setup makes my life easier across the board – I’ve even started to minimise the use of my password manager just by getting creative with SSO solutions!
If you’re interested in chatting further on this topic, please leave a comment or reach out!
If you have ever spoken with me in person, you know I'm a huge fan of the Okta identity platform – it just makes everything easy. It's no surprise, then, that the Okta Workflows announcement at Oktane was definitely something I saw value in – interestingly enough, I've utilised Postman collections and Azure LogicApps for an almost identical integration solution in the past.
Custom Okta LogicApps Connector
This post will cover my first impressions, workflow basics & a demo of the capability. If you want to try this in your own org, reach out to your Account/Customer Success Manager – the feature is still hidden behind a flag in the Okta Portal, however it is well worth the effort!
The basics of Workflows
If you have ever used Azure LogicApps or AWS Step Functions, you will instantly find the terminology of Workflows familiar. Workflows are broken into three core abstractions:
Events – used to start your workflow
Functions – provide logic control (if/then and the like) & advanced transformations/functionality
Actions – DO things
All three abstractions have input & output attributes, which can be manipulated or utilised throughout each flow using mappings. Actions & Events require a connection to a service – pretty self-explanatory.
Workflows are built from left to right, starting with an event. I found the left-to-right view when building flows really refreshing; if you have ever scrolled down a large LogicApp, you will know how difficult it can get! Importantly, keeping your flows short and efficient will allow easy viewing & understanding of functionality.
Setting up a WorkFlow
For my first workflow I've elected to solve a really basic use case – sending a message to Slack when a user is added to an admin group. ChatOps-style interactions are becoming really popular with internal IT teams and are a lot nicer than automated emails. Slack is supported by Workflows out of the box, and there is an O365 Graph API option available if your organisation is using Microsoft Teams.
First up is a trigger; "User added to a group" will do the trick!
Whenever you add a new integration, you will be prompted for a new connection, and depending on the service, this will be different. For Okta, this is a simple OpenID app that is added when Workflows is onboarded to the org. Okta domain, client ID, client secret and we are up and running!
Next, I need to integrate with Slack – same process: select a task, connect to the service.
Finally, I can configure my desired output to Slack. A simple message to the #okta channel will do.
Within about 5 minutes I've produced a really simple two-step flow, and I can click Save & Test on the right!
Looking Good!
If you've been paying attention, you will have realised that this flow is pretty noisy – I would get a message like this for ALL Okta groups. How about adding a condition to this flow for only my desired admin group?
Under the "Functions" option, I can elect to add a simple Continue If condition and drag across the group name from my trigger. Group ID would definitely be a bit more explicit, but this is just a demo.
Finally, I want to clean up my Slack message & provide a bit more information. A quick scroll through the available functions and I'm presented with a text concatenate option.
Save & Test – Looking Good!
Whats Next?
My first impressions of the Okta Workflows service are really positive – the UI is definitely well designed & accessible to the majority of employees. I really like the left-to-right flow, the functionality & the options available to me in the control pane.
The early support for key services is great. Don't worry if something isn't immediately available as an Okta-deployed integration – if something has an API, you can consume it with some of the advanced functions.
REST API Integration
If you want to dive straight into the Workflows deep end, have a look at the documentation page – Okta has already provided a wealth of knowledge. This Oktane video is also really great.
Okta Workflows only gets better from here. I'm especially excited to see the integrations with other cloud providers and have already started planning out my advanced flows! Until then, happy tinkering!
One of the many things I love about the cloud is the ease with which it allows me to develop and deploy solutions. I recently got married – an event which is both immensely fulfilling and incredibly stressful to organise. Being a digital-first millennial couple, my partner and I wanted to deliver our invites electronically. Being the stubborn technologist that I am, I used the wedding as an excuse to practise my cloud & Python skills! This blog neatly summarises what I implemented, and the fun I had along the way.
The Plan – How do I want to do this?
For me, the main goal was to deliver a simple, easy-to-use solution which enabled me to keep sharp on some cloud technology; time and complexity were not deciding factors. Being a consultant, I generally touch a multitude of different services/providers, and I need to stay challenged to keep up to date on a broad range of things.
For my partner, it was important that I could quickly deliver a website, at low cost, with personalised access codes and email capability – a fully fledged mobile app would have been nirvana, but I'm not that great at writing code (yet) – sorry hun, maybe at a future vow renewal?
When originally planning, I really wanted to design a full end-to-end solution using Functions & all the cool serverless features. I quickly realised that this would take me too long to keep my partner happy, so I opted for a simpler path – an ACI deployment, with Azure Traffic Manager allowing a nice custom domain (feature request please, MS). I designed Azure Storage as a simple table backend, and utilised SendGrid as the email service. Azure DNS allowed me to host all the relevant records, and I built my containers for ACR using Azure DevOps.
Slapping together wedding invites on Azure in an afternoon? Why not?
Implementing – How to use this flask thing?
Ask anyone who knows me and they will tell you I will give just about anything a crack. I generally use Python when required for scripting/automation, and I really don't use it for much beyond that. When investigating how to build a modern web app, I really liked the idea of learning some more Python – it's such a versatile language and really deserves more of my attention. I also looked at using React, WordPress & Django. However, I really hate writing JavaScript, this blog is WordPress so no learning there, and Django would have been my next choice after Flask.
Implementing in Flask was actually extremely simple for basic operations. I'm certain I could have implemented my routing in a neater manner – perhaps a task for future refactoring/pull requests! I really liked the ability to test Flask apps by simply running python3 app.py – a lot quicker than a full Docker build process, and super useful in development mode!
The template-based model that Flask enables developers to utilise is extremely quick. Bootstrap concepts haven't really changed since it was released in 2011, and modifying a single template to cater for different users was really simple.
For user access, I used a simple model where a code was utilised to access the details page, and this code was then passed through all subsequent web requests. Any code submitted that did not exist in Azure Storage simply fired a small error!
The end result of my Bootstrap & Flask configuration was really quite simple – my fiancé was quite impressed!
Deployment – Azure DevOps, ACI, ARM & Traffic Manager
Deploying to Azure Container Registry and Instances is almost 100% idiot-proof within Azure DevOps. Within about five minutes in the GUI, you can get a working pipeline with a Docker build & push to your Azure Container Registry, and then refresh your Azure Container Instances from there. Microsoft doesn't really recommend using ACI for anything beyond simple workloads, and I found support for nearly everything to be pretty limited. Because I didn't want a fully fledged AKS cluster/host or an App Service Plan running containers, I used Traffic Manager to work around the custom domain limitations of ACI. As a whole, the Traffic Manager profile would cost me next to nothing, and I knew that I wouldn't be receiving many queries to the service.
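Under the hood, the flow boils down to a couple of CLI calls like the below – registry, resource group and DNS label names are placeholders, and the GUI-generated pipeline uses the equivalent Docker and ACI tasks:
# Build & push the image in ACR, then (re)create the container instance from the new image
az acr build --registry myweddingacr --image wedding-site:latest .
az container create \
  --resource-group wedding-rg \
  --name wedding-site \
  --image myweddingacr.azurecr.io/wedding-site:latest \
  --dns-name-label wedding-invites \
  --ports 80 \
  --registry-username myweddingacr \
  --registry-password "$ACR_PASSWORD"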
At some point I looked at deploying my storage account using ARM templates, however I found that table storage is currently not supported for deployment using this method. You will notice that my Azure pipeline uses Azure CLI commands to do this instead. I didn't get around to automating the integration from storage to container instances – mostly because I had asked my partner to fill out another storage account table manually and didn't want to move anything!
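For reference, the table creation itself is a one-liner – the account and table names here are placeholders:
# Table storage isn't ARM-deployable here, so the pipeline creates it with the CLI instead
az storage table create --name guests --account-name weddinginvitesstorage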
For my outbound email I opted to utilise SendGrid. You can actually sign up for this service within the Azure Portal as a “third party service”. It adds an object to your resource group, however administration is still within the SendGrid portal.
Issues?
As an overall service, I found my deployment to be relatively stable. I ran into two issues during the deployment, both of which were not too simple to resolve.
1. Azure credit & Azure DNS – About halfway through the live period after sending my invites, I noticed that my service was down. This was actually due to DNS not servicing requests because of insufficient credit – a SQL Server I was also labbing had killed my funds! This was super frustrating to fix, as I had another unrelated issue with Owner RBAC on my subscription: the subscription was locked for IAM editing due to insufficient funds, and I couldn't add another payment method because I was not an owner – do you see the loop too? I would love to see some form of payment model that allows for upfront payment of DNS queries in blocks or chunks – hopefully this would prevent full-scale DNS-based outages when using Azure DNS and credit-based payment in the future.
2. Spam – I also had a couple of reports of emails sent from SendGrid being marked as spam. This was really frustrating, however not common enough for me to dig into as a whole, especially considering I was operating in the free tier. I added DKIM & DMARC records for my second run of emails and didn't receive as much feedback, which was good.
The Cost – Was it worth it?
All in all, the solution I implemented was pretty expensive when compared to other online products and even other Azure services. I could definitely have saved money by using App Services, Azure Functions or even static Azure Storage websites. Thankfully, the goal for me wasn't to be cheap – it was practice. Even better, my employer provides me with Azure credit for dev/test, so I actually spent nothing! As such, I really think this exercise was 100% worth it.
Summary – Totally learnt some things here!
I really hope you enjoyed this small write-up on my experience deploying small websites in Azure. I spent a grand total of about three hours over two weeks tinkering on this project, and you can see a mostly sanitised repo here. I definitely appreciated the opportunity to get a little bit better at Python, and will likely revisit the topic again in the future!
(Here's a snippet of the big day – I'm most definitely punching above my average!)