Thoughts from an F5 APM Multi Factor implementation

Recently I was asked to assist with an MFA implementation in a complex on-premises environment. Beyond Okta itself, all infrastructure was on-premises and neatly presented to external consumers through an F5 APM/LTM solution. This post details my thoughts and the lessons I learnt configuring RADIUS authentication for services behind an F5, utilising Okta RADIUS servers as the AAA source.

Ideal Scenario?

Before I dive into my lessons learnt, I want to preface this article by saying there is a better way. There is almost always a better way to do something. In a perfect world, all services would support token-based single sign-on. When security of a service can’t be achieved by the best option, always look for the next best thing. Mature organisations excel at finding a balance between what is best and what is achievable. In my scenario, the best-case implementation would have been inline SSO with an external IdP. Under this model, Okta completes SAML authentication with the F5 platform, and the F5 then creates and provides the relevant assertions to on-premises services.

Unfortunately, the reality of most technology environments is that not everything is new and shiny. My internal applications did not support SAML, so here we are with the Okta RADIUS agent and a flow that looks something like the below (replace step 9 with application auth).

Importantly, this implementation is not inherently insecure or bad; however, it does have a few more areas that could be better, and Okta calls this out in its documentation for exactly this reason. Something important to understand is that RADIUS secrets can be and are compromised, and it is relatively trivial to decrypt traffic once you have possession of a secret.

APM Policy

If you have a read of the Okta documentation on this topic, you will quickly be presented with an APM policy example.

You will note there are two RADIUS Auth blocks – these are intended to separate the login data verification. RADIUS Auth 1 is responsible for password authentication, and Auth 2 is responsible for verifying a provided token. If you’re using OTP only, you can get away with a simpler APM policy – Okta supports providing both the password and an OTP inline, separated by a comma, for verification.

Using this option, the policy can be simplified a small amount – always opt to simplify policy; fewer places for things to go wrong!
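To see what that comma-separated verification looks like from a client’s perspective, below is a minimal sketch using the pyrad Python library. The agent address, shared secret, dictionary path and credentials are all placeholders rather than values from this build, so treat it purely as an illustration of the password,OTP format.

from pyrad.client import Client
from pyrad.dictionary import Dictionary
from pyrad import packet

# Hypothetical Okta RADIUS agent host, shared secret and dictionary file
srv = Client(server="10.1.1.50", secret=b"SuperSecretSharedSecret", dict=Dictionary("dictionary"))

req = srv.CreateAuthPacket(code=packet.AccessRequest, User_Name="jane.doe@example.com")
# Okta accepts "<password>,<OTP>" in the single password field for inline MFA
req["User-Password"] = req.PwCrypt("P@ssw0rd!,123456")

reply = srv.SendPacket(req)
if reply.code == packet.AccessAccept:
    print("Access-Accept – password and OTP verified")
else:
    print("Access-Reject (or challenge) returned")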

Inline SSO & Authentication

In a similar fashion to Okta, F5 APM provides administrators the ability to pass credentials through to downstream applications. This is extremely useful when dealing with legacy infrastructure, as credential mapping can be used to correctly authenticate a user against a service using the F5. The below diagram shows this using an initial login with RSA SecurID MFA.

For most of my integrations, I was required to use HTTP forms. When completing this authentication using the APM, understanding exactly how the form is constructed is really critical. The below example is taken from an Exchange form – leaving out the flags parameter originally left my login failing and me scratching my head.
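The original form capture isn’t reproduced here, so the sketch below shows the general shape of an Exchange OWA forms-based login POST using Python requests. The hostname and credentials are placeholders, and the exact field names (including the flags value I originally missed) vary between Exchange versions, so verify them against your own form before relying on this.

import requests

owa = "https://mail.example.com"  # hypothetical hostname
session = requests.Session()

payload = {
    "destination": owa + "/owa/",
    "flags": "4",              # the parameter that was originally left out
    "forcedownlevel": "0",
    "username": "CORP\\jane.doe",
    "password": "P@ssw0rd!",
    "isUtf8": "1",
}

# OWA forms-based auth typically posts to /owa/auth.owa and replies with a 302 plus cookies
resp = session.post(owa + "/owa/auth.owa", data=payload, allow_redirects=False)
print(resp.status_code, resp.headers.get("Location"))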

An annoying detail about forms based inline authentication is that if you already have a session, the F5 will happily auto log back into the target service. This can be a confusing experience for most users as we generally expect to be logged out when we click that logout button. Thankfully, we can handle this conundrum neatly with an iRule.

iRule Policy application

For this implementation, I had a specific set of requirements for when APM policy should be applied to enforce MFA; not all services play nice with extra authentication. Using iRules on virtual servers is a really elegant way to control when an APM policy applies. On-premises Exchange is something that lots of organisations struggle to secure – especially legacy ActiveSync. The below iRule modifies when policy is applied using URI contents & device type.

when HTTP_REQUEST {
    # Work with a lowercased URI so all matching is case-insensitive
    set uri [string tolower [HTTP::uri]]

    if { (([HTTP::header User-Agent] contains "iPhone") || ([HTTP::header User-Agent] contains "iPad")) && (($uri contains "activesync") || ($uri contains "/oab")) } {
        # Legacy ActiveSync & OAB clients can't complete the APM logon flow - bypass the policy
        ACCESS::disable
    } elseif { $uri contains "logoff" } {
        # Remove the APM session on logoff so the F5 doesn't silently log the user straight back in
        ACCESS::session remove
    } else {
        ACCESS::enable
        if { $uri contains "/ecp" } {
            # Allow personal ECP settings pages; anything that looks like admin ECP is redirected back to OWA
            if { not (($uri contains "/ecp/?rfr=owa") || ($uri contains "/ecp/personalsettings/") || ($uri contains "/ecp/ruleseditor/") || ($uri contains "/ecp/organize/") || ($uri contains "/ecp/teammailbox/") || ($uri contains "/ecp/customize/") || ($uri contains "/ecp/troubleshooting/") || ($uri contains "/ecp/sms/") || ($uri contains "/ecp/security/") || ($uri contains "/ecp/extension/") || ($uri contains "/scripts/") || ($uri contains "/themes/") || ($uri contains "/fonts/") || ($uri contains "/ecp/error.aspx") || ($uri contains "/ecp/performance/") || ($uri contains "/ecp/ddi")) } {
                HTTP::redirect "https://[HTTP::host]/owa"
            }
        }
    }
}

One thing to be aware of when implementing iRules like this is directory traversal – you really do need a concrete understanding of which paths are and are not allowed. If a determined adversary can authenticate against a desired URI, they should NOT be able to switch to an undesired URI. The above example shows this nicely – I want my users to access their personal ECP settings pages just fine. Remote administrative Exchange access? That’s a big no-no, so I redirect it to an authorised endpoint.
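A quick way to sanity-check that behaviour from outside the network is to request a handful of allowed and blocked paths and eyeball the responses. The sketch below uses Python requests with a hypothetical hostname and example URIs – swap in the paths that matter for your environment.

import requests

host = "https://mail.example.com"  # hypothetical F5 virtual server

# The first two paths are in the iRule's allowed list; the last two should bounce to /owa
paths = [
    "/ecp/?rfr=owa",
    "/ecp/PersonalSettings/HomePage.aspx",
    "/ecp/UsersGroups/",
    "/ecp/Servers/",
]

for path in paths:
    r = requests.get(host + path, allow_redirects=False)
    print(path, r.status_code, r.headers.get("Location"))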

Final Thoughts

Overall, the solution implemented here is quite elegant, considering the age of some infrastructure. I will always advocate for MFA enablement on a service – It prevents so many password based attacks and can really uplift the security of your users. While overall service uplift is always a better option to enable security, you should never discount small steps you can take using existing infrastructure. As always, leave a comment if you found this article useful!

Inbound Federation from Azure AD to Okta

Recently I spent some time updating my personal technology stack. As an identity nerd, I thought to myself that SSO everywhere would be a really nice touch. Unfortunately, SSO everywhere is not as easy as it sounds – more on that in a future post. For my personal setup, I use Office 365 and have centralised the majority of my applications on Azure AD. I find that the licensing inclusions for my day-to-day work and lab are just too good to resist. But what about my other love? If you’ve read this blog recently, you will know I’ve invested heavily in the Okta Identity platform. However, aside from a root account, I really don’t want to store credentials anymore – especially considering my track record with lab account management. This blog details my experience and tips for setting up inbound federation from Azure AD to Okta, with admin role assignment being pushed to Okta using SAML JIT.

So what is the plan?

For all my integrations, I’m aiming to ensure that access is centralised; I should be able to create a user in Azure AD and then push them out to the application. How this occurs is a problem to handle per application. As Okta is traditionally an identity provider, this setup is a little different – I want Okta to act as the service provider. Cue inbound federation. For the uninitiated, inbound federation is an Okta feature that allows any user to SSO into Okta from an external IdP, provided your admin has done some setup. More commonly, inbound federation is used in hub-and-spoke models for Okta orgs. In my scenario, Azure AD is acting as a spoke for the Okta org. The really nice benefit of this setup is that I can configure SSO from either service into my SaaS applications.

Configuring Inbound Federation

The process to configure Inbound federation is thankfully pretty simple, although the documentation could probably detail this a little bit better. At a high level, we’re going to complete 3 SSO tasks, with 2 steps for admin assignment via SAML JIT.

  1. Configure an application within AzureAD
  2. Configure an identity provider within Okta & download some handy metadata
  3. Configure the Correct Azure AD Claims & test SSO
  4. Update our AzureAD Application manifest & claims
  5. Assign admin groups using SAML JIT and our AzureAD claims.

While it does seem like a lot, the process is quite seamless, so let’s get started. First up, add an enterprise application to Azure AD; Name this what you would like your users to see in their apps dashboard. Navigate to SSO and select SAML.

Next, the Okta configuration. Select Security > Identity Providers > Add. You might be tempted to select ‘Microsoft’ for OIDC configuration; however, we are going to select SAML 2.0 IdP.

Using the data from our Azure AD application, we can configure the IDP within Okta. My settings are summarised as follows:

  • IdP Username should be: idpuser.subjectNameId
  • SAML JIT should be ON
  • Update User Attributes should be ON (re-activation is personal preference)
  • Group assignments are off (for now)
  • Okta IdP Issuer URI is the AzureAD Identifier
  • IdP Single Sign-On URL is the AzureAD login URL
  • IdP Signature Certificate is the Certificate downloaded from the Azure Portal

Click Save and you can download service provider metadata.

Upload the file you just downloaded to the Azure AD application and you’re almost ready to test. Note that the basic SAML configuration is now completed.

Next we need to configure the correct data to flow from Azure AD to Okta. If you have used Okta before, you will know the four key attributes on anyone’s profile: username, email, firstName & lastName. If you inspect the downloaded metadata, you will notice this has slightly changed, with mobilePhone included & username seemingly missing. This is because the Universal Directory maps username to the value provided in NameID. We configured this in the original IdP setup.

Add a claim for each attribute, and feel free to remove the other claims that use fully qualified namespaces. My final claims list looks like this:

At this point, you should be able to save your work, ready for testing. Assign your app to a user and select the icon now available on their My Apps dashboard. Alternatively, you can use the “Test as another user” option within the application SSO configuration. If you have issues when testing, the “My Apps Secure Sign-in Extension” really comes in handy here. You can grab this from the Chrome or Firefox web store and use it to cross-reference your SAML responses against what you expect to be sent. Luckily, I can complete SSO on the first pass!
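If you’d rather not lean on a browser extension, a few lines of Python will decode a captured SAMLResponse and list the attributes being sent, which makes it easy to compare against your Okta profile mappings. The base64 value below is a placeholder – paste in the one from your browser’s developer tools.

import base64
from xml.dom import minidom

saml_response_b64 = "<paste the SAMLResponse value here>"

xml = base64.b64decode(saml_response_b64).decode("utf-8")
doc = minidom.parseString(xml)

# Print the NameID plus every attribute name/value pair in the assertion
for name_id in doc.getElementsByTagNameNS("*", "NameID"):
    print("NameID:", name_id.firstChild.nodeValue)

for attr in doc.getElementsByTagNameNS("*", "Attribute"):
    values = [v.firstChild.nodeValue for v in attr.getElementsByTagNameNS("*", "AttributeValue") if v.firstChild]
    print(attr.getAttribute("Name"), values)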

Adding Admin Assignment

Now that I have SSO working, admin assignment to Okta is something else I would really like to manage in Azure AD. To do this, I first need to configure some admin groups within Okta. I’ve built three basic groups; however, you can create as many as you please.

Next, we need to update the application manifest for our Azure AD app. This can be done at Application Registrations > Appname>Manifest. For each group that you created within Okta, add a new approle like the below, ensuring that the role ID is unique.

{
    "allowedMemberTypes": [
	"User"
    ],
    "description": "Admin-Okta-Super",
    "displayName": "Admin-Okta-Super",
    "id": "18d14569-c3bd-438b-9a66-3a2aee01d14f",
    "isEnabled": true,
    "lang": null,
    "origin": "Application",
    "value": "Admin-Okta-Super"
},

For simplicity, I have matched the value, description and displayName details. The ‘value’ attribute for each app role must correspond with a group created within the Okta portal; however, the others can be a bit more verbose should you desire.

Now that we have modified our application with the appropriate Okta roles, we need to ensure that Azure AD sends this data and that Okta accepts it as a claim. First, within Azure AD, update your existing claims to include the user role assignment. This can be done with the “user.assignedRoles” value like so:

Next, update the Okta IDP you configured earlier to complete group sync like so. Note that the group filter prevents any extra memberships from being pushed across.

For a large number of groups, I would recommend pushing attributes as claims and configuring group rules within Okta for dynamic assignment.

Update your Azure AD user/group assignment within the Okta App, and once again, you’re ready to test. A second sign-in to the Okta org should reveal an admin button in the top right and moving into this you can validate group memberships. In the below example, I’ve neatly been added to my Super admins group.

Wrapping Up

Hopefully this article has been informative about the process of setting up SAML 2.0 inbound federation from Azure AD to Okta. Depending on your identity strategy, this can be a really powerful way to centrally manage identity for a service like Okta, bring multiple organisations together, or even connect with customers or partners. Personally, this type of setup makes my life easier across the board – I’ve even started to minimise the use of my password manager just by getting creative with SSO solutions!

If you’re interested in chatting further on this topic, please leave a comment or reach out!

Okta Workflows – Unlimited Power!

If you have ever spoken with me in person, you know I’m a huge fan of the Okta identity platform – it just makes everything easy. It’s no surprise, then, that the Okta Workflows announcement at Oktane was definitely something I saw value in – interestingly enough, I’ve utilised Postman collections and Azure LogicApps for an almost identical integration solution in the past.

Custom Okta LogicApps Connector

This post will cover my first impressions, workflow basics & a demo of the capability. If you’re wanting to try this in your own Org, reach out to your Account/Customer Success Manager – The feature is still hidden behind a flag in the Okta Portal, however it is well worth the effort!

The basics of Workflows

If you have ever used Azure LogicApps or AWS Step Functions, you will instantly find the terminology of Workflows familiar. Workflows are broken into three core abstractions:

  • Events – start your workflow
  • Functions – provide logic control (if/then and the like) & advanced transformations/functionality
  • Actions – do things

All three abstractions have input & output attributes, which can be manipulated or utilised throughout each flow using mappings. Actions & Events require a connection to a service – pretty self explanatory.

Workflows are built from left to right, starting with an event. I found the left-to-right view when building flows really refreshing – if you have ever scrolled down a large LogicApp, you will know how difficult that can get! Importantly, keeping your flows short and efficient will allow easy viewing & understanding of functionality.

Setting up a WorkFlow

For my first workflow I’ve elected to solve a really basic use case – Sending a message to slack when a user is added to an admin group. ChatOps style interactions are becoming really popular for internal IT teams and are a lot nicer than automated emails. Slack is supported by workflows out of the box and there is an O365 Graph API option available if your organisation is using Microsoft Teams.
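For comparison, this is roughly the glue code the workflow replaces – a Python sketch that polls the Okta System Log for group membership events and posts matches to a Slack incoming webhook. The org URL, API token, group name and webhook URL are all placeholders, and you’d want scheduling and de-duplication before using anything like this in anger.

import requests

OKTA_ORG = "https://demoorg.okta.com"                               # hypothetical org
OKTA_TOKEN = "00a-hypothetical-api-token"                           # hypothetical token
SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"   # hypothetical webhook
ADMIN_GROUP = "Okta-Admins"                                         # hypothetical group name

# Pull recent "user added to group" events from the Okta System Log
resp = requests.get(
    OKTA_ORG + "/api/v1/logs",
    headers={"Authorization": "SSWS " + OKTA_TOKEN},
    params={"filter": 'eventType eq "group.user_membership.add"', "limit": 100},
)
resp.raise_for_status()

for event in resp.json():
    group = next((t["displayName"] for t in event["target"] if t["type"] == "UserGroup"), None)
    user = next((t["displayName"] for t in event["target"] if t["type"] == "User"), None)
    if group == ADMIN_GROUP:
        requests.post(SLACK_WEBHOOK, json={"text": f"{user} was added to {group}"})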

First up is a trigger; User added to a group will do the trick!

Whenever you add a new integration, you will be prompted for a new connection and depending on the service, this will be different. For Okta, this is a simple OpenID app that is added when workflows is onboarded to the org. Okta Domain, Client ID, Client Secret and we are up and running!

Next, I need to integrate with Slack – Same process; Select a task, connect to the service;

Finally, I can configure my desired output to slack. A simple message to the #okta channel will do.

Within about 5 minutes I’ve produced a really simple two step flow, and I can click save & test on the right!

Looking Good!

If you’ve been paying attention, you would have realised that this flow is pretty noisy – I would get a message like this for ALL Okta groups. How about adding conditions to this flow for only my desired admin group?

Under the “Functions” option, I can elect to add a simple Continue If condition and drag across the group name from my trigger. Matching on group ID would definitely be more robust, but this is just a demo 💁🏻.

Finally, I want to clean up my slack message & provide a bit more information. A quick scroll through the available functions and I’m presented with a text concatenate;

Save & Test – Looking Good!

Whats Next?

My first impressions of the Okta Workflows service are really positive – The UI is definitely well designed & accessible to the majority of employees. I really like the left to right flow, the functionality & the options available to me in the control pane.

The early support for key services is great. Don’t worry if something isn’t immediately available as an Okta deployed integration – If something has an API you can consume it with some of the advanced functions.

REST API Integration

If you want to dive straight into the Workflows deep end, have a look at the documentation page – Okta has already provided a wealth of knowledge. This Oktane video is also really great.

Okta Workflows only gets better from here. I’m especially excited to see the integrations with other cloud providers and have already started planning out my advanced flows! Until then, Happy tinkering!

Happy Wife Happy Life – Building my wedding invites in Python on Azure!

One of the many things I love about the cloud is the ease with which it allows me to develop and deploy solutions. I recently got married – an event which is both immensely fulfilling and incredibly stressful to organise. Being a digital-first millennial couple, my partner and I wanted to deliver our invites electronically. Being the stubborn technologist that I am, I used the wedding as an excuse to practice my cloud & Python skills! This blog neatly summarises what I implemented, and the fun I dealt with along the way.

The Plan – How do I want to do this?

For me, the main goal was to deliver a simple, easy-to-use solution which enabled me to keep sharp on some cloud technology; time and complexity were not deciding factors. Being a consultant, I generally touch a multitude of different services/providers, and I need to stay challenged to keep up to date on a broad range of things.

For my partner, it was important that I could quickly deliver a website, at low cost, with personalised access codes and email capability – A fully fledged mobile app would have been the nirvana, but I’m not that great at writing code (yet) – Sorry hun, maybe at a future vow renewal?

When originally planning, I really wanted to design a full end-to-end solution using Functions & all the cool serverless features. I quickly realised that this would take me too long to keep my partner happy, so I opted for a simpler path – an ACI deployment, with Azure Traffic Manager allowing a nice custom domain (feature request please, MS). I used Azure Storage tables as a simple backend, and utilised SendGrid as the email service. Azure DNS allowed me to host all the relevant records, and I built my containers for ACR using Azure DevOps.

Slapping together wedding invites on Azure in an afternoon? Why not?

Implementing – How to use this flask thing?

Ask anyone who knows me and they will tell you I will give just about anything a crack. I generally use Python when required for scripting/automation, and I really don’t use it for much beyond that. When investigating how to build a modern web app, I really liked the idea of learning some more Python – it’s such a versatile language and really deserves more of my attention. I also looked at using React, WordPress & Django. However, I really hate writing JavaScript, this blog is already WordPress so no learning there, and Django would have been my next choice after Flask.

Implementing basic operations in Flask was actually extremely simple. I’m certain I could have implemented my routing in a neater manner – perhaps a task for future refactoring/pull requests! I really liked the ability to test Flask apps by simply running python3 app.py – a lot quicker than a full Docker build process, and super useful in development mode!

The template based model that flask enables developers to utilise is extremely quick. Bootstrap concepts haven’t really changed since it was released in 2011, and modifying a single template to cater for different users was really simple.

For user access, I used a simple model where a code was utilised to access the details page, and this code was then passed through all the web requests from then on. Any code submitted that did not exist in azure storage simply fired a small error!

import os
from datetime import datetime

import flask
from flask import request, render_template, redirect
from azure.cosmosdb.table.tableservice import TableService
from azure.cosmosdb.table.models import Entity

app = flask.Flask(__name__)
# Storage credentials are injected as environment variables by the container deployment
app.config['StorageName'] = os.environ.get('StorageName')
app.config['StorageKey'] = os.environ.get('StorageKey')

@app.route('/', methods=['GET'])
def home():
    return render_template('index.html')  # render a template

@app.route('/badCode')
def badCode():
    return render_template('index.html', formError = "Incorrect Code, Please try again.")

@app.route('/user/<variable>', methods=['GET'])
def userpage(variable):
    # Personal invite codes are stored as row keys in the 'weddingtable' table
    table_service = TableService(account_name=app.config['StorageName'], account_key=app.config['StorageKey'])
    name = variable.lower()
    try:
        details = table_service.get_entity('weddingtable', 'Invites', name)
        print(details)
        return render_template("user.html", People1=details.Names, People2=details.Names2, hide=details.Hide, userCode=variable, commentmessage=details.Message)
    except Exception:
        # Any lookup failure (typo'd or unknown code) is treated as a bad code
        return redirect('/badCode')


@app.route('/locations')
def locations():
    return render_template('locations.html',HomeLink="./")

@app.route('/locations/<UserCode>')
def authedUser(UserCode):
    link = "../user/" + UserCode
    return render_template('locations.html',HomeLink=link)

@app.route('/code', methods=['POST'])
def handle_userCode():
    codepath = '/user/' + request.form['personalCode']
    return redirect(codepath)

@app.route('/Thankyou/<UserCode>')
def thank(UserCode):

    codepath = '/user/' + UserCode
    return render_template('thankyou.html', HomeLink=codepath)

@app.route('/RSVP', methods=['POST'])
def handle_RSVP():
    print('User Code Is: {}'.format(request.form['userCode']))
    table_service = TableService(account_name=app.config['StorageName'], account_key=app.config['StorageKey'])
    # Each RSVP is written to its own table, keyed by submission time
    now = datetime.now()
    time = now.strftime("%m-%d-%Y %H-%M-%S")
    rsvp = {'PartitionKey': 'rsvp', 'RowKey': time, 'GroupID': request.form['userCode'],
            'comments': request.form['comment'], 'Status': request.form['action']}
    print(rsvp)
    table_service.insert_entity('weddingrsvptable', rsvp)
    redirectlink = '/Thankyou/{}'.format(request.form['userCode'])
    return redirect(redirectlink)

if __name__ == '__main__':
    # debug=True is handy for a hobby site, but should be switched off for anything sensitive
    app.run(host='0.0.0.0', port=80, debug=True)

The end result of my Bootstrap & Flask configuration was really quite simple – my fiancée was quite impressed!

Deployment – Azure DevOps, ACI, ARM & Traffic Manager

Deploying to Azure Container Registry and Instances is almost 100% idiot-proof within Azure DevOps. Within about five minutes in the GUI, you can get a working pipeline with a Docker build & push to your Azure Container Registry, and then refresh your Azure Container Instances from there. Microsoft doesn’t really recommend using ACI for anything beyond simple workloads, and I found support for nearly everything to be pretty limited.
Because I didn’t want a fully fledged AKS cluster/host or an App Service plan running containers, I used Traffic Manager to work around the custom domain limitations of ACI. As a whole, the Traffic Manager profile would cost me next to nothing, and I knew that I wouldn’t be receiving many queries to the services.

At some point I looked at deploying my storage account using ARM templates; however, I found that table storage is currently not supported for deployment using this method. You will notice that my Azure pipeline uses Azure CLI tasks to do this instead. I didn’t get around to automating the integration from storage to container instances – mostly because I had already asked my partner to fill out a storage account table manually and didn’t want to move anything!

# Build on every push to master
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

variables:
  imageName: 'WeddingContainer'

steps:
# Build the site image and push it to Azure Container Registry
- task: Docker@2
  inputs:
    containerRegistry: 'ACR Connection'
    repository: 'WeddingWebsite'
    command: 'buildAndPush'
    Dockerfile: 'Dockerfile'
    tags: |
      v1

- task: Docker@2
  inputs:
    containerRegistry: 'ACR Connection'
    command: 'login'

# Table storage isn't supported by ARM templates, so create the account & table with the CLI
- task: AzureCLI@2
  inputs:
    azureSubscription: 'PAYG - James Auchterlonie(2861f6bf-8886-47a9-bc4b-de1a11df0e5f)'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az storage account create --name weddingazdevops --resource-group CONTAINER-RG01 --location australiaeast --sku Standard_LRS --kind StorageV2'

- task: AzureCLI@2
  inputs:
    azureSubscription: 'PAYG - James Auchterlonie(2861f6bf-8886-47a9-bc4b-de1a11df0e5f)'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az storage table create -n weddingtable --account-name weddingazdevops'

# Deploy (or redeploy) the container instance from the freshly pushed image
- task: AzureCLI@2
  inputs:
    azureSubscription: 'PAYG - James Auchterlonie(2861f6bf-8886-47a9-bc4b-de1a11df0e5f)'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az container create --resource-group CONTAINER-RG01 --name weddingwebsite --image youracrnamehere.azurecr.io/weddingwebsite:v1 --dns-name-label weddingwebsite --ports 80 --location australiaeast --registry-username youracrname --registry-password $(ACRSECRET) --environment-variables StorageName=$(StorageName) StorageKey=$(StorageKey)'

# Restart to pick up the new image version
- task: AzureCLI@2
  inputs:
    azureSubscription: 'PAYG - James Auchterlonie(2861f6bf-8886-47a9-bc4b-de1a11df0e5f)'
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: 'az container restart --name weddingwebsite --resource-group CONTAINER-RG01'
For my outbound email, I opted to utilise SendGrid. You can actually sign up for this service within the Azure Portal as a “third-party service”. It adds an object to your resource group; however, administration is still done within the SendGrid portal.
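Sending a personalised invite through SendGrid’s v3 Python library only takes a few lines. A minimal sketch with placeholder addresses, content and API key:

from sendgrid import SendGridAPIClient
from sendgrid.helpers.mail import Mail

message = Mail(
    from_email="invites@example.com",          # placeholder sender
    to_emails="guest@example.com",             # placeholder recipient
    subject="You're invited!",
    html_content="<p>Your personal code is <strong>SMITH01</strong>. RSVP at https://example.com</p>",
)

sg = SendGridAPIClient("SG.hypothetical-api-key")
response = sg.send(message)
print(response.status_code)  # 202 means SendGrid accepted the message for delivery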

Issues?

As an overall service, I found my deployment to be relatively stable. I ran into two issues during the deployment, neither of which was too simple to resolve.

1. Azure Credit & Azure DNS – About halfway through the live period after sending my invites, I noticed that my service was down. This was actually due to DNS not servicing requests due to insufficient credit. A SQL server I was also labbing had killed my funds! This was actually super frustrating to fix as I had another unrelated issue with the Owner RBAC on my subscription – My subscription was locked for IAM editing due to insufficient funds, and I couldn’t add another payment method because I was not owner – Do you see the loop too?
I would love to see some form of payment model that allows for upfront payment of DNS queries in blocks or chunks – Hopefully this would prevent full scale DNS based outages when using Azure DNS and Credit based payment in the future.

2. SPAM – I also had a couple of reports of emails sent from sendgrid being marked as spam. This was really frustrating, however not common enough for me to dig into as a whole, especially considering I was operating in the free tier. I added a DKIM & DMARC Record for my second run of emails and didn’t receive as much feedback which was good.

The Cost – Was it worth it?

All in all, the solution I implemented was pretty expensive when compared to other online products and even other Azure services. I could have definitely saved money by using App Services, Azure Functions or even static Azure Storage websites. Thankfully, the goal for me wasn’t to be cheap – it was practice. Even better, my employer provides me with an Azure credit for dev/test, so I actually spent nothing! As such, I really think this exercise was 100% worth it.

Summary – Totally learnt some things here!

I really hope you enjoyed this small write-up on my experience deploying small websites in Azure. I spent a grand total of about three hours over two weeks tinkering on this project, and you can see a mostly sanitised repo here. I definitely appreciated the opportunity to get a little bit better at Python, and will likely revisit the topic again in the future!

(Here’s a snippet of the big day – I’m most definitely punching above my average! 😂)

Here’s to 2019.

As we sweat our way through the first days of 2020 (thanks, Melbourne), I think it would be good for me to write a post on the crazy 2019 I’ve had. Probably a bit obnoxious, but who cares – my blog, my rules.

For me, 2019 was a huge success. I started the year as a consultant, with no particular goals aside from presenting at some community events. I finished it as a technical lead, with 5 speaking engagements, 8 certification exams completed and quite a few blog/webinar outings under my belt. Let’s start at the top.

Speaking

I started 2019 with the knowledge that I sucked at public speaking. I wore a suit for the first six months of my consulting career because it made me feel secure when talking to three or four people. I knew I wanted to get better, and at the end of the year I really do feel that I’m getting there – that whole cliché of practice making perfect. In order of appearance, I presented at:

  • Global Integration BootCamp – Azure Service Bus
  • Global Azure BootCamp – Securing Azure DevOps
  • Melbourne PowerApps & Flow Meetup – Vulnerability Scanning with PowerApps
  • Azure Security Meetup – Securing Azure @ Scale, The Basics
  • Melbourne MultiCloud Meetup – One Cloud to Rule them All

I really enjoyed presenting at these events, especially the Global Boot Camps. I think the funniest story from these presentations was the Securing DevOps talk – I arrived at the event with no knowledge that I would be presenting, fully intending to spend the day learning. I yelled some jokes at the MC, at which point I was asked if I was ready to present that afternoon. Cue confused expression. I had originally put my hand up to speak, but due to a communications breakdown, I never received any confirmation. I spent the full day locked in a room scrambling to get a talk together – in the end, I cheated and talked about governance. Half the room fell asleep and I skated out with an alright presentation!

Getting Certified

What a labor of love getting certified is. Thankfully I am fully supported by my employer 🙂 . I started in March, with a goal to get Azure Certified at the Architect level. First up was AZ-100/101 and I really found the certification process an interesting one. I left the 100 exam feeling extremely underwhelmed, and absolutely died in the 101 exam. I do think that the administrator exams didn’t cover the level of detail I would expect from an actual service Administrator, but I’ve had different feedback from multiple sources. It was pretty disappointing to realize that not long after stressing through the 101, it was retired in favor of the new 103 exam! Live and learn I suppose.
From the success of the Azure administrator exams, I did my best to sort my architect exams by the end of July, just scraping in before August with both exams! These were a lot more in-depth than the admin pair, although there were definitely a couple of “vendor speak” questions. These are easily the most frustrating types of questions for me, because they often translate into real-world product selection by architects or administrators where the product isn’t the best fit for the company!

After finishing those four exams I was invigorated, setting myself a goal of doing 12 in 12 months. I’m currently 8 in, having also passed two AWS certifications, Azure Security & Okta Professional! Easily the most difficult exam within these was the Okta Pro; Firstly they use a ridiculous exam format designed to induce stress and anxiety – Check out DOMC for all your heart palpitation needs. Secondly, there just isn’t affordable and accessible high quality content! I love ACloudGuru and Pluralsight for this – They provide world class content for next to nothing.

I’m starting 2020 off with GCP & AWS exams in January before taking a break to get married in February. Two months from March to April to hit 12 in 12, wish me luck!

Blogging & All the Other Successes!

Blogging & Other – What a category 😀 . One of the ways which I learn concepts is through explaining them to other people. This was the primary reason I started blogging here. Once I got the hang of it, I found that I really did enjoy writing here. I’m proud to have published 10 blogs on my site (and on xello.com.au). This year I want to write focused blogs on cloud security & identity topics. If you’re reading this and you have any requests, please reach out! I also contributed to a couple of webinars, mostly work based topics. These are something I find really difficult, as it takes time to actually get a decent attendee list & the shorter time frames are quite difficult. I think the success I am most happy about is to have got Multi Cloud Melbourne off the ground! Community events are honestly the best way to dip your toe into a topic, and the community in Melbourne is actually insane! If you’re not signed up to attend an event, head on over to meetup.com and register now – Seriously it’s great.

I was also extremely blessed to be made a technical lead @ Xello this year. This role has been really fun, and I’ve loved getting involved with more customer work & technical challenges.

Thankyou

Most importantly, I wouldn’t be writing this post without a cadre of people who support me, both professionally and personally. A big thank you to all of these people – they know who they are. Another big thank you goes out to you, my network. Thank you for engaging with me, on whatever medium you choose. It’s always a pleasure to meet another person who is cloud obsessed. Here’s to a brilliant 2020 – may all your plans come to fruition!

How to setup Okta and Active Directory Integration & Provisioning

More often than not, companies begin a modern identity journey by expanding the capability of an existing identity store. This could be federation using ADFS, identity governance using SailPoint, or integration with a third-party directory.

Active Directory (AD) is a highly extensible and capable solution for the majority of legacy business cases and is easily the most common identity store in the industry. With the advent of cloud, however, the on-premise directory is starting to show its age – and that’s why my favourite initial capability addition is to integrate Okta.

In this technical blog, I’ll take you through the basics, and demonstrate some Universal Directory capability.

Integrating Okta with AD: An Introduction

Before we get started, it’s valuable to address the three most common questions most people will ask when I begin the conversation about using Okta with Active Directory:

1. Why choose a cloud directory?

The cloud directory conversation boils down to one point: less infrastructure. I’ll probably need a bit of infrastructure to run the initial phase of any identity uplift, but let’s be honest – infrastructure is hard work. We don’t want it, and we certainly don’t want to plan our transformation with the idea of working harder in mind.


2. Why a third party directory?

The second question isn’t a dig at any one provider, but providers generally operate with their own services in mind. Sure, you can plug external solutions into a proprietary solution, but the integration with the vendor’s own ecosystem products is always a little bit better. A third-party directory really removes this problem, as these providers are focused on offering businesses well-managed, easy identity and access management (IAM).


3. Why Okta?

Okta is the world’s leading Identity as a Service (IDaaS) solution for both enterprise and small and midsize businesses, with some incredible versatility owing to its cloud-based delivery model. For a deeper comparison on some of the Gartner market leaders in the modern identity space, head on over to our comparison of Ping & Okta.

How to connect AD to Okta

So you’ve decided to connect AD to Okta. Great! Now is the time to understand the requirements for your Okta connectors and your AD integration scenario before deployment – generally two member servers will work for a HA deployment. Greater than 30,000 users? You probably should have a few more!


As an optional first step, we can create a service account and assign the relevant permissions for provisioning.

By default, the AD agent installer will create “OktaService”, but you need to make updates for provisioning and group push. I’ve got a quick little script to create the required account with the minimum permissions for both.

#Quick and easy file to write output to – a lazy man's logging
Start-Transcript ./OktaServiceAccountConfig.log
#I would like an AD module please
Import-Module ActiveDirectory
#Basic details for the service account & domain.
$serviceAccountName = "svcOktaAgent"
$serviceAccountUsername = "svcOktaAgent"
$serviceAccountDescription = "svcOktaAgent – Okta AD Agent Service"
$serviceAccountPassword = "1SuperSecretPasswordThatWasRandomlyGenerated!!!"
$serviceAccountOU = "OU=ExampleOU,DC=corp,DC=contoso,DC=com"
$targetUserOUs = @("OU=ExampleOU,DC=corp,DC=contoso,DC=com", "OU=ExampleOU,DC=corp,DC=contoso,DC=com")
$targetGroupOUs = @("OU=ExampleOU,DC=corp,DC=contoso,DC=com")
$domain = Get-ADDomain
$serviceAccountUPN = "svcOktaAgent@$($domain.Forest)"
#Create an AD user for the agent service
New-ADUser -SamAccountName $serviceAccountUsername -Name $serviceAccountName -DisplayName $serviceAccountName -Path $serviceAccountOU -UserPrincipalName $serviceAccountUPN -CannotChangePassword $true -Description $serviceAccountDescription
Set-ADAccountPassword -Identity $serviceAccountUsername -NewPassword $(ConvertTo-SecureString -String $serviceAccountPassword -AsPlainText -Force) -Reset
Enable-ADAccount $serviceAccountUsername
#Assign permissions for user creation & basic attribute write.
foreach ($targetOU in $targetUserOUs) {
    $userCommands = @(
        "dsacls `"$targetOU`" /G $($domain.Name)\$($serviceAccountUsername)`:CC;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;mail;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;userPrincipalName;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;sAMAccountName;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;givenName;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;sn;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;pwdLastSet;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;lockoutTime;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;cn;user",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;name;user",
        "dsacls `"$targetOU`" /I:S /G `"$($domain.Name)\$($serviceAccountUsername)`:CA;Reset Password;user`"",
        "dsacls `"$targetOU`" /I:S /G `"$($domain.Name)\$($serviceAccountUsername)`:WP;userAccountControl;user`""
    )
    foreach ($command in $userCommands) {
        CMD /C $command
    }
}
#Permissions required for group push.
foreach ($targetOU in $targetGroupOUs) {
    $groupCommands = @(
        "dsacls `"$targetOU`" /G $($domain.Name)\$($serviceAccountUsername)`:CCDC;group",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;sAMAccountName;group",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;description;group",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;groupType;group",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;member;group",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;cn;group",
        "dsacls `"$targetOU`" /I:S /G $($domain.Name)\$($serviceAccountUsername)`:WP;name;group"
    )
    foreach ($command in $groupCommands) {
        CMD /C $command
    }
}
Stop-Transcript

You will notice the account only gets write access to a specific set of attributes within the OU – this is by design, as you should only ever assign access to the minimum required attributes. If you intend on adding extra attributes to Universal Directory, you will need to grant the service account access to each attribute in a similar fashion.

For my lab, I’ve taken a shortcut and given the account full attribute write access.

Next, we can log into our Okta portal and download the AD agent. Run the installer on a separate server to your domain controller, as you should always strive to separate out services between servers.

The agent installer is fairly straightforward, and the process can be initiated from the web portal (Directory Integrations > Add Directory > Active Directory). The key steps are as follows:

Specify the service account: in our case, this will be the custom account created above.


Provide your subdomain and log in to Okta: This step will allow the agent to register.


Select the essentials: The OU’s you would like to run sync from + attributes to be included in your profiles.


You should finish step 3 with the following prompt. If you get stuck, there are detailed instructions in the Okta documentation.


Disable Active Directory Profile Mastering: For the purpose of this blog, I’m going to disable AD Profile Mastering. This ensures that Okta remains as the authoritative source for my user profiles. You can find this option under the import and provisioning settings of the directory.


Enabling Okta to provision AD Accounts

Now that we have completed the base setup, most administrators would configure user matching and synchronisation (step 2 in the official Okta documentation).

I’m lucky enough to have a brand-new AD domain that I would like to push Okta users into, allowing me to skip straight to Okta-mastered AD accounts – winning!

Okta-mastered accounts rely on the concept of Universal Directory. UD is an excellent Okta feature that enables administrators to pivot data from Okta into and out of third-party services. Universal Directory enables HR-as-a-master scenarios, and data flow between the vast majority of Okta applications. For a detailed overview of UD, have a look at this excellent page from Okta’s official documentation.


Enabling Okta provisioning in AD: First I need to navigate to my directory settings and enable “Create Users”. To ensure my user data always stays accurate, I’ll also be enabling “Update User attributes”.


Create an Okta Group: Self-explanatory! Click Add Group and fill out the details as desired.


Assign a directory to the Okta Group: This ensures users who are added to the Okta Group are automatically written down into the AD OU. To do this, navigate to the group, and select Manage Directories.


Add your AD domain to the “Member Domain” section: you are able to select multiple AD domains here, which is extremely useful for multi-domain scenarios. You can also use UD to daisy-chain AD domains together, enabling a migration scenario. Select Next and configure your target OU. This should match the OU you specified in the earlier service account setup, as the service does need permission to create your AD account.


Assign users to AD: everything is now set up! The assignment process is pretty simple – navigate to the group, select Manage Users and move across your targeted user. Select Save to initiate the Okta provisioning action.

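If you prefer code to clicking, the same assignment can be made through the Okta Groups API. A short Python sketch with a hypothetical org URL, API token, group ID and user ID:

import requests

OKTA_ORG = "https://demoorg.okta.com"   # hypothetical org
OKTA_TOKEN = "00a-hypothetical-token"   # hypothetical API token
GROUP_ID = "00g0000000000000000"        # the Okta group mapped to the AD OU (placeholder)
USER_ID = "00u0000000000000000"         # the Okta-mastered user to provision (placeholder)

# Adding the user to the group triggers the downstream AD provisioning configured above
resp = requests.put(
    f"{OKTA_ORG}/api/v1/groups/{GROUP_ID}/users/{USER_ID}",
    headers={"Authorization": "SSWS " + OKTA_TOKEN},
)
print(resp.status_code)  # 204 indicates the membership change was accepted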

A quick look in AD, and my account has already been provisioned! This process is extremely quick, with the account initially remaining disabled – a default security feature.


How to setup Okta and Active Directory Integration & Provisioning: Next steps

We hope our walkthrough of Okta and Active Directory Integration & Provisioning has given you the 10,000 foot overview on what is possible with Okta to AD integration – and you’re able to see the unique value and potential business case within your company.

There is a broad range of integration options, processes and nitty-gritty application settings, and while the barrier to initial entry is quite low, a detailed setup can get complex quickly. As always, please feel free to reach out to myself or the team if you enjoyed this article or have any questions!

Building Okta Resources with Terraform

As a consultant who spends large amounts of time implementing customer solutions, automation has become a key part of my job. A technology that can be automated is instantly more attractive for me to use. This is one of the key reasons that I love Terraform. It enables me to write cross platform automation and when automation isn’t natively supported, I can write a custom provider. That’s why the recent announcement of a custom Terraform provider for Okta is my favorite feature announcement of 2019 and why I’ll be covering the basics of Okta & Terraform in this blog. If you’re not sure how to use Terraform, have a look here for an initial overview, otherwise let’s dive in!

Setting up – Building the provider and API key generation.

The first thing you’ll need to do is build the Okta provider. This isn’t too hard; however, the GitHub readme is written for Unix users. If you’re on Windows like me, you will need to have a basic understanding of how to compile Go and how to use custom providers in terraform.

Cloning the Okta Terraform Git Repo

First step, clone the Git repo and CD in.

Building the Okta Terraform Provider 1

Next, build the provider. Note that if you try to run the generated EXE, you’ll be prompted that the file is a plugin. To stick with the Terraform documentation, I’ve used the following EXE naming format: terraform-provider-PRODUCT-vX.Y.Z.exe

Building the Okta Terraform Provider 2

Finally, copy the provider into the Terraform plugin path. For 64 bit windows, this is generally: %APPDATA%\terraform.d\plugins\windows_amd64

Building the Okta Terraform Provider 3

Now that we have a functional provider for Terraform, it’s time to generate an API key. Please be careful with these keys, as they inherit your Okta permissions and shouldn’t be left lying around!

Start in the Okta portal, navigate to Security and then API. You should find the following button on the top left.

Okta new token button

Fill in a token name – If you’re using this in production, generally keep some data about the token usage here!

Okta API Token Name
My super secret Okta key


Insert your freshly generated API token into the following Terraform HCL!

provider "okta" {
 org_name  = "demoorg"
 api_token = "API TOKEN HERE"
 base_url  = "okta.com"
}

Building resources

Now that we have a provider configured, lets create some resources. Thankfully, the provider developers (articulate) have detailed out common use cases for the provider here. Creating a user isn’t too difficult, provided you have the four mandatory fields handy:

resource "okta_user" "XelloDemo" {
  first_name = "Xello"
  last_name  = "Okta Terraform"
  login      = "Xello.OktaTerraform@xellolabs.com"
  email      = "Xello.OktaTerraform@xellolabs.com"
  status     = "STAGED"
}

Terraform Init will begin the deploy process

Terraform Init to setup the base environment

And Terraform apply to deploy!

Using Terraform Apply to configure Okta User

Now, deploying a single user isn’t too imaginative or special. After all, you can easily bulk import from a CSV, and scripting user creation against the API isn’t too hard. This is where the power of articulate’s provider comes in – it currently supports the majority of the Okta API, and the public can add support where possible. Let’s look at something a bit more advanced.

resource "okta_user" "AWSUser1" {
  first_name = "XelloLab"
  last_name  = "AWSDemo"
  login      = "XelloLab.AWSDemo@xellolabs.com"
  email      = "XelloLab.AWSDemo@xellolabs.com"
  status     = "ACTIVE"
}

resource "okta_group" "AWSGroup" {
  name        = "AWS Assigned Users"
  description = "This group was deployed by Terraform"
  users = [
    "${okta_user.AWSUser1.id}"
  ]
}

resource "okta_app_saml" "test" {
  preconfigured_app = "amazon_aws"
  label             = "AWS - Terraform Deployed"
  groups            = ["${okta_group.AWSGroup.id}"]
  users {
    id       = "${okta_user.AWSUser1.id}"
    username = "${okta_user.AWSUser1.email}"
  }
}

In the above Terraform code, we have a user being configured, assigned to a group, and then assigned to an Okta Integration Network application. Rather than clicking through the Okta portal, I’ve relied on infrastructure as code to deploy all the resources. I can even begin to combine providers to deliver cross-platform integration – this snippet will work nicely with the AWS identity provider resources, enabling me to neatly configure the SAML integration between the two services without leaving my shell or pipeline. The end result? No click-ops, just code, and the AWS application configured in a matter of minutes!

AWS Application Configured in Okta

Fast and Furious – Okta Drift

One of the things that Terraform is really excellent for is minimizing configuration drift. Regularly running Terraform apply, either from your laptop or a CICD pipeline, can ensure that applications are maintained as documented and deployed. In the below example you can see Terraform correcting an undesired application update.

Okta Config drift with Terraform

You shouldn’t have to worry about an overeager intern destroying your application setup, and the Okta/Terraform combo prevents this!

Cleaning up 

Destroying Okta resources with Terraform

The other super useful thing about using Terraform is the cleanup process. I’ve lost count of how many times I’ve clicked through a portal or navigated to an API doc just to bulk delete resources – Okta users immediately come to mind for this problem! By running terraform destroy, I can immediately clean up my environment. Great for testing out new functionality or lab scenarios.




Hopefully by now you’re beginning to understand some of the options available when configuring Okta with Terraform. For my day-to-day work as a consultant, this is an excellent integration, and the varied cross-platform use cases are nearly limitless. As always, for any questions please feel free to reach out to myself or the team!

SCOM of the Earth: Replacing Operations Manager with Azure Monitor (Part Two)

In this blog, we continue where we left off in part one, spending a bit more time expanding on the capabilities of Azure Monitor – specifically, how powerful Log Analytics & KQL can be, saving us huge amounts of time and preventing alert fatigue. If you haven’t already decided whether to use SCOM or Azure Monitor, head over to the Xello comparison article here.

For now, let’s dive in!

Kusto Query Language (KQL) – Not your average query tool.

Easily the biggest change that Microsoft recommends when moving from SCOM to Azure Monitor is to change your alerting mindset. Organisations often get bogged down in resolving meaningless alerts – Azure Monitor enables administrators to query data on the fly, acting on what they know to be bad, rather than what is defined in a SCOM management pack. To provide these fast queries, Microsoft developed the Kusto Query Language – the query language behind its big data analytics services, optimised for interactive, ad-hoc queries over structured, semi-structured and unstructured data. Getting started is pretty simple, and Microsoft has provided cheat sheets for those of you familiar with SQL or Splunk queries.

What logs do I have?

By default, Azure Monitor will collect and store platform performance data for 30 days. This might be adequate for simple analysis of your virtual machines, but ongoing investigations and detailed monitoring will quickly fall over with this constraint. Enabling extra monitoring is quite simple: navigate to your workspace, select Advanced settings, and then Data.

From here, you can onboard extra performance metrics, event logs and custom logs as required. I’ve already completed this task, electing to onboard some Service, Authentication, System & Application events as well as guest-level performance counters. While you get platform metrics for performance by default, onboarding metrics from the guest can be an invaluable tool – comparing the two can indicate where systems are failing & whether you have an underlying platform issue!

Initially, I just want to see what servers I’ve on-boarded so here we run our first KQL Query:

Heartbeat | summarize count() by Computer  

A really quick query and an even quicker response! I can instantly see I have two servers connected to my workspace, with a count of heartbeats for each. If I found no heartbeats, something had gone wrong in my onboarding process and I should investigate the monitoring agent health.

Show me something useful!

While a heartbeat is a good indicator of a machine being online, it doesn’t really show me any useful data. Perhaps I have a CPU performance issue to investigate. How do I query for that?


Perf | where Computer == "svdcprod01.corp.contoso.com" and ObjectName == "Processor" and TimeGenerated > ago(12h) | summarize avg(CounterValue) by bin(TimeGenerated, 1m) | render timechart

This looks like a lot, but in reality the query is quite simple. First, I select my performance data. Next, I filter it down: I want data from my domain controller, specifically CPU performance events from the last 12 hours. Once I have my events, I request a one-minute summary of the counter value and push that into a nice time chart! The result?


Using this graph, you can pretty quickly identify two periods when my CPU has spiked beyond a “normal level”. On the left, I spike twice above 40%. On the right, I have a huge spoke to over 90%. Here is where Microsoft’s new monitoring advice really comes into effect – Monitor what you know, when you need it. As this is a lab domain controller, I know it turns on at 8 am every morning. Note there is no data in the graph prior to this time? I also know that I’ve installed AD Connect & the Okta agent – The CPU increases twice an hour as each data sync occurs. With this context, I can quickly pick that the 90% CPU spike is of concern. I haven’t setup an alert for performance yet, and I don’t have to. I can investigate when and if I have an issue & trace this back with data! My next question is – What started this problem?

If you inspect the usage on the graph, you can quickly ascertain that the major spike started around 11:15 – As the historical data indicates this is something new, it’s not a bad assumption that this is something new happening on the server. Because I have configured auditing on my server and elected to ingest these logs, I can run the following query:


SecurityEvent | where EventID == 4688 and TimeGenerated between(datetime("2019-07-14 1:15:00") .. datetime("2019-07-14 1:25:00"))

This quickly returns a manageable 75 records. Should I wish, I could probably look through these manually and find my problem – but where is the fun in that? A quick scan reveals that our friend xelloadmin appears to have been logged into the server during the specified time frame. Updated query?

SecurityEvent | where EventID == 4688 and Account contains "xelloadmin" and TimeGenerated between(datetime("2019-07-14 1:15:00") .. datetime("2019-07-14 1:25:00"))

By following a “filter again” approach, you can quickly bring large 10,000-row data sets down to a manageable number. This is also great for security response, as ingesting the correct events will allow you to reconstruct exactly what has happened on a server without even logging in!
Thanks to that filtering, I’m now able to zero in on a likely root cause: xelloadmin launched two cmd.exe processes less than a second apart, immediately prior to the CPU spike. Time to log in and check!

Sure enough, these look like the culprits! Terminating both processes has resulted in the following graph!

Let’s create alerts and dashboards!

I’m sure you’re thinking at this point that everything I’ve detailed is after the fact, and more importantly, that I had to actively look for this data. You’re not wrong to be concerned about this. Again, this is the big change in mindset that Microsoft is pushing with Azure Monitor: less alerting is better. Your applications are fault tolerant, loosely coupled and scale to meet demand already, right?

If you need an alert, make sure it matters first. Thankfully, configuration is extremely simple should you require one!
First, work out your alert criteria: what defines that something has gone wrong? In my case, I would like to know when the CPU has spiked over a threshold. Have a look in the top right of the query window and you should notice a "New alert rule" button. Clicking this will give you a screen like the following:


The condition is where the magic happens – Microsoft has been gracious enough to provide some pre-canned conditions, and you can write your own KQL should you desire. For the purpose of this blog, we’re going to use a Microsoft rule. 
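If you did choose to write your own, the condition is just a KQL query that returns a value to compare against a threshold. A minimal sketch using the same Perf data as before (the specific counter name and five-minute bin are my own assumptions, not taken from the pre-canned rule):

Perf | where ObjectName == "Processor" and CounterName == "% Processor Time" | summarize AggregatedValue = avg(CounterValue) by bin(TimeGenerated, 5m), Computer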


As you can see, this rule is configured to trigger when CPU hits 50%, so our earlier spike courtesy of the careless admin would definitely be picked up. Once I’m happy with my alert rule, I can configure my actions. This is where you can integrate with existing tools like ServiceNow or Jira, or send SMS/email alerts. For my purposes, I’m going to set up email alerts.
Finally, I configure some details about my alert and click save!

Next time my CPU spikes, I’ll get an email at my specified address and can begin investigating in near real time!

The final, and perhaps easiest, way for administrators to get quick insights into their infrastructure is by building a dashboard. The process is simple: work out your metrics, write your queries, and pin the results.
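For example, a tile showing how many machines are reporting in over time could start from a query like the following (a sketch; the one-hour bin is my own choice):

Heartbeat | summarize dcount(Computer) by bin(TimeGenerated, 1h) | render timechart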

When you pin a chart, you will be prompted to select your desired dashboard. If you haven’t already created one, you can deploy a new one within your desired resource group! With a properly configured workspace and the right queries, you could easily build a dashboard like the one shown below. For those of you who have Azure Policy in place, note that custom dashboards deploy to the Central US region by default, so you may need to allow an exception to your policy to create them.

[Image: example Azure Monitor dashboard]

Final Thoughts

If you’ve stuck with me for this entire blog post, thank you! Hopefully by now you’re well aware of the benefits of Azure Monitor over System Center Operations Manager. If you missed our other blogs, head on over to Part One or our earlier comparison article! As always, please feel free to reach out should you have any questions, and stay tuned for my next blog post, where I look at replacing System Center Orchestrator with cloud-native services!

AWS GuardDuty: What you need to know

One of the most common questions asked by customers across all business sectors is: how do I monitor security in the cloud?

While it’s extremely important to have good governance, design and security practices in place when moving to the cloud, it’s equally important to have tools in place for detecting when something has gone wrong.

For AWS customers, this is where GuardDuty comes in.

A managed threat detection service, GuardDuty utilises the size and breadth of AWS to detect malicious activity within your network. It’s a fairly simple concept with huge benefits. As a business, you have visibility of your own assets and services. As a provider, Amazon has visibility of its network services along with visibility of all customers’ networks.

Using this, Amazon has been able to analyse, predict and prevent huge amounts of malicious cyber activity. It’s hard to see the forest for the trees, and GuardDuty is your satellite, provided courtesy of AWS.

[Image: how Amazon GuardDuty works]

In this blog, we’ll cover why AWS GuardDuty is great for cloud security on AWS deployments, its costs and benefits, and key considerations your business needs to evaluate before adopting the service.

Why is security monitoring & alerting important?

Once a malicious actor penetrates your network, time is key.

Microsoft’s incident response team has the "Minutes Matter" motto for a reason. In 2018, the average dwell time for Asia Pacific was 204 days (FireEye). That’s over half a year in which your data can be stolen, modified or destroyed.

Accenture recently estimated the average breach costs a company 13 million dollars. That’s an increase of 12% since 2017, and a 72% increase on figures from 5 years ago.

As a business, it’s extremely important to have a robust detection and response strategy. Minimising dwell time is critical and enabling your IT teams with the correct tooling to remove these threats can reduce your risk profile.

The result of your hard efforts? Potential savings of huge sums of money.

AWS GuardDuty helps your teams by offloading the majority of the heavy lifting to Amazon. While it’s not a silver bullet, removal of monotonous tasks like comparing logs to threat feeds is an easy way to free up your team’s time.

What does GuardDuty look like?

For those of you who are technically inclined, Amazon provides some really great tutorials for trying out GuardDuty in your environment and we’ll be using this one for demonstration purposes. 

GuardDuty’s main area of focus is the findings panel. Hopefully this area remains empty with no alerts or warnings. In a nightmare scenario, it could look like this:

[Image: GuardDuty findings panel populated with sample findings]

Thankfully, this panel is just a demo, and you can see a couple of useful features designed to help your security teams respond effectively. On the left, you will notice a coloured icon denoting the severity of each incident: red triangles for critical issues, orange squares for warnings and blue circles for information. Under findings, you will find a quick summary of each issue. We’re going to select one and dig into the result.

As you can see, a wealth of data is presented when you navigate into the threat itself. You can quickly see details of the event, in this case command-and-control activity, understand exactly what is affected, and then navigate directly to the affected instance. Depending on the finding and your configuration, GuardDuty may even have automatically completed an action to resolve the issue for you.

AWS GuardDuty: What are the costs?

AWS GuardDuty is fairly cheap because it builds on existing services within the AWS ecosystem.

First cab off the rank is CloudTrail, the AWS audit logging service that records API activity across your account. Amazon themselves advise that CloudTrail will set you back approximately:

  • $8 for 2.15 million events
  • $5 for the log ingestion
  • Around $3 for the S3 storage
  • Required VPC Flow Logs will then set you back 50¢ per GB

Finally, the AWS GuardDuty service itself costs $4 per million events.

Working on the basis that we generate about two million events a month, that’s roughly $8 for CloudTrail and its logging plus $8 for GuardDuty itself (two million events at $4 per million), so we end up paying only around $16 (AUD).

Pretty cheap, if you ask us.

AWS GuardDuty: Key business considerations

GuardDuty is great, but you need to make sure you’re aware of a couple of things before you enable it:

It’s a regional service. If you’re operating in multiple regions, you need to enable it in each, and remember that alerts will only show in the region that raised them. Alternatively, you can ship your logs to a central account or region and use a single instance.

It’s not a silver bullet. While some activity can be remediated automatically if you have configured it, you still need to check in on the findings panel and act on each issue. While the machine learning (ML) capability of AWS GuardDuty is great, sometimes it will get things wrong and human (manual) intervention is needed.

AWS GuardDuty doesn’t analyse historical data. Analysis is completed on the fly, so make sure to enable it sooner rather than later.

Can you extend AWS GuardDuty?

Extending GuardDuty is a pretty broad topic, so I’ll give you the short answer: Yes, you can.

If you’re interested, there’s a wealth of information available at the following locations:

Hopefully by now you’re eager to give GuardDuty a go within your own environment! It’s definitely a valuable tool for any IT administrator or security team. As always, feel free to reach out to me or the Xello team should you have any questions about staying secure within your cloud environment.

Originally Posted on xello.com.au