GitHub Advanced Security – Exporting results using the REST API

Recently while working on a code uplift project with a customer, I wanted a simple way to analyse our Advanced Security results. While the GitHub UI provides easy methods to do basic analysis and prioritisation, we wanted to complete our reporting and detailed planning off platform. This post will cover the basic steps we followed to export GitHub Advanced Security results to a readable format!

Available Advanced Security API Endpoints

GitHub provides a few API endpoints for Code Scanning which are important for this process, with the following used today:

  • List code scanning alerts for a repository – /repos/{owner}/{repo}/code-scanning/alerts
  • List code scanning alerts for an organisation – /orgs/{org}/code-scanning/alerts

This post will use PowerShell as our primary export tool, but reading the GitHub documentation carefully should get you going in your language or tool of choice!

Required Authorisation

As a rule, all GitHub API calls should be authenticated. While you can implement a GitHub application for this process, the easiest way is to use an authorised Personal Access Token (PAT) for each API call.

To create a PAT, navigate to your account settings, and then to Developer Settings and Personal Access Tokens. Exporting Advanced Security results requires the security_events scope, shown below.

The PAT scope required to export Advanced Security results

Note: Organisations which enforce SSO will require a secondary step where you log into your identity provider, like so:

Authorising for an SSO enabled Org

Now that we have a PAT, we need to build the basic authorisation API headers as per the GitHub documentation.

  # Replace these with your own GitHub username and PAT
  $GITHUB_USERNAME = "james-westall_demo-org"
  $GITHUB_ACCESS_TOKEN = "supersecurepersonalaccesstoken"

  # Build a Basic authentication header from the username:token pair
  $credential = "${GITHUB_USERNAME}:${GITHUB_ACCESS_TOKEN}"
  $bytes = [System.Text.Encoding]::ASCII.GetBytes($credential)
  $base64 = [System.Convert]::ToBase64String($bytes)
  $basicAuthValue = "Basic $base64"
  $headers = @{ Authorization = $basicAuthValue }

Exporting Advanced Security results for a single repository

Once we have an appropriately configured auth header, calling the API to retrieve results is really simple! Set your values for the API endpoint, organisation and repo and you’re ready to go!

  $HOST_NAME = "api.github.com"
  $GITHUB_OWNER = "demo-org"
  $GITHUB_REPO = "demo-repo"

  # -FollowRelLink walks the pagination links and returns one collection per page
  $response = Invoke-RestMethod -FollowRelLink -Method Get -UseBasicParsing -Headers $headers -Uri "https://$HOST_NAME/repos/$GITHUB_OWNER/$GITHUB_REPO/code-scanning/alerts"

  # Flatten the paged results into a single collection of alerts
  $finalResult = @()   # initialise once, outside any loop, if you plan to accumulate multiple repos
  $finalResult += $response | ForEach-Object { $_ }

The above code is pretty straightforward, with the URL being built by providing the “owner” and repo name. One thing we found a little unclear in the doco was who the owner is. For a personal public repo this is obvious, but for our GitHub EMU deployment we had to set this as the organisation instead of the creating user.
Once we have a URI, we call the API endpoint with our auth headers for a standard REST response. Finally, we flatten the result to a nicer object format, because the -FollowRelLink parameter returns one collection per page rather than a single list of alerts.

The outcome we quickly achieve using the above is a PowerShell object which can be exported to parsable JSON or CSV formats!

Exported Advanced Security Results
Once you have a PowerShell object, it can be exported to a tool of your choice.
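
For example, here is a minimal sketch of pushing the object out to JSON and CSV – the file names and the choice of columns are just illustrative, based on the standard alert fields:

  # Raw export - keep enough depth so nested properties like rule survive
  $finalResult | ConvertTo-Json -Depth 10 | Out-File "code-scanning-alerts.json"

  # Flattened export for spreadsheet-based planning
  $finalResult |
    Select-Object number, state, created_at,
      @{ Name = 'rule';     Expression = { $_.rule.id } },
      @{ Name = 'severity'; Expression = { $_.rule.severity } },
      @{ Name = 'url';      Expression = { $_.html_url } } |
    Export-Csv "code-scanning-alerts.csv" -NoTypeInformation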

Exporting Advanced Security results for an entire organisation

Depending on the scope of your analysis, you might want to export all the results for your GitHub organisation – this is possible, however it does require elevated access, with your account needing to be an administrator or security manager for the org.

  $HOST_NAME = "api.github.com"
  $GITHUB_ORG = "demo-org"

  # The org-level endpoint returns alerts for every repository you have access to
  $response = Invoke-RestMethod -FollowRelLink -Method Get -UseBasicParsing -Headers $headers -Uri "https://$HOST_NAME/orgs/$GITHUB_ORG/code-scanning/alerts"

  $finalResult = @()
  $finalResult += $response | ForEach-Object { $_ }
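
Once the alerts are in a single collection, a quick group-by on severity is an easy way to start the prioritisation conversation. A minimal sketch, assuming the standard rule.severity field on each alert:

  # Summarise the export by rule severity (e.g. error, warning, note)
  $finalResult |
    Group-Object { $_.rule.severity } |
    Select-Object Name, Count |
    Sort-Object Count -Descending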

Connecting Security Centre to Slack – The better way

Recently I’ve been working on some automated workflows for Azure Security Center and Azure Sentinel. Following best practice, after initial development, all our Logic Apps and connectors are deployed using infrastructure as code and Azure DevOps. This allows us to deploy multiple instances across customer tenants at scale. Unfortunately, there is a manual step required when deploying some Logic Apps, and you will encounter this on the first run of your workflow.

A broken logic app connection

This issue occurs because connector resources often utilise OAuth flows to allow access to the target services. We’re using Slack as an example, but this includes services such as Office 365, Salesforce and GitHub. Selecting the information prompt under the deployed connector display name will quickly open a login screen, with the process authorising Azure to access your service.

Microsoft provides a few options to solve this problem:

  1. Manually apply the settings on deployment. Azure will handle token refresh, so this is a one-time task. While this would work, it isn’t great. At Arinco, we try to avoid manual tasks wherever possible.
  2. Pre-deploy connectors in advance. As multiple Logic Apps can utilise the same connector, operate them as a shared resource, perhaps owned by a platform engineering group.
  3. Operate a worker service account, with a browser holding logged-in sessions. Use DevOps tasks to interact and authorise the connection. This is the worst of the three solutions and prone to breakage.

A better way to solve this problem would be to sidestep it entirely. Enter app webhooks for Slack. Webhooks act as a simple method to send data between applications. These can be unauthenticated and are often unique to an application instance.

To get started with this method, navigate to the applications page at api.slack.com, create a basic application, providing an application name and a “development” workspace.

Next, enable incoming webhooks and select your channel.

Just like that, you can send messages to a channel without an OAuth connector. Grab the curl command that Slack provides and try it out.
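
If PowerShell is more your speed than curl, the same test looks something like this – the webhook URL below is a placeholder for the one Slack generates for your app:

  # Placeholder webhook URL - substitute the one generated for your Slack app
  $webhookUrl = "https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX"

  # Slack incoming webhooks accept a simple JSON payload with a "text" property
  $body = @{ text = "Hello from our Logic App pipeline!" } | ConvertTo-Json
  Invoke-RestMethod -Method Post -Uri $webhookUrl -ContentType 'application/json' -Body $body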

Once you have completed the basic setup in Slack, the hard part is all done! To use this capability in a Logic App, add the HTTP task and fill out the details like so:

Our simple logic app.

You will notice here that the request body we are using is a JSON formatted object. Follow the Slack Block Kit format and you can develop some really nice looking messages. Slack even provides an excellent builder service.

Block kit enables you to develop rich UI within Slack.
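
As a rough illustration (the alert details below are invented for the example), the same webhook call happily accepts a Block Kit payload:

  # Illustrative Block Kit payload - a header block plus a markdown section
  $body = @{
    blocks = @(
      @{ type = "header"; text = @{ type = "plain_text"; text = "Security Center alert" } },
      @{ type = "section"; text = @{ type = "mrkdwn"; text = "*Severity:* High`n*Resource:* demo-vm-01" } }
    )
  } | ConvertTo-Json -Depth 10

  Invoke-RestMethod -Method Post -Uri $webhookUrl -ContentType 'application/json' -Body $body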

Completing our integration in this manner has a few really nice benefits – avoiding the manual work almost always pays off.

  1. No Manual Integration, Hooray!
  2. Our branding is better. Using the native connector does not allow you to easily change the user interface, with messages showing as sent by “Microsoft Azure Logic Apps”
  3. Integration to the Slack ecosystem for further workflows. I haven’t touched on this here, but if you wanted to build automatic actions back to Logic Apps, using a Slack App provides a really elegant path to do this.

Until next time, stay cloudy!

Security Testing your ARM Templates

In medicine there is a saying: “an ounce of prevention is worth a pound of cure” – what this concept boils down to for health practitioners is that engaging early is often the cheapest & simplest method for preventing expensive & risky health scenarios. It’s a lot cheaper & easier to teach school children about healthy foods & exercise than to complete a heart bypass operation once someone has neglected their health. Importantly, this concept extends to multiple fields, with cybersecurity being no different.
Since the beginning of cloud, organisations everywhere have seen explosive growth in infrastructure provisioned into Azure, AWS and GCP. This explosive growth all too often corresponds with an increased security workload, without the required budgetary & operational capability increases. In the quest to increase security efficiency and reduce workload, this is a critical challenge. Once a security issue hits your CSPM, Azure Security Centre or Amazon Inspector dashboard, it’s often too late; the security team now has remediation work to complete within a production environment. Infrastructure as Code security testing is a simple addition to any pipeline which will reduce the security team's workload!

Preventing this type of incident is exactly why we should complete BASIC security testing.

We’ve already covered quality testing within a previous post, so today we are going to focus on the security specific options.

The first integrated option for ARM templates is easily the Azure Secure DevOps Kit (AzSK for short). The AzSK has been around for a while and is published by the Microsoft Core Services and Engineering division; it provides governance, security IntelliSense & ARM template validation capability, for free. Integrating with your DevOps pipelines is relatively simple, with pre-built connectors available for Azure DevOps and a PowerShell module for local users to test with.
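
As a rough sketch of the local experience (cmdlet and parameter names as per the AzSK module documentation – check them against the version you install), scanning a folder of templates looks something like this:

  # Install the AzSK module for the current user, then scan a folder of ARM templates
  Install-Module AzSK -Scope CurrentUser
  Get-AzSKARMTemplateSecurityStatus -ARMTemplatePath ".\arm-templates" -Recurse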

Another great option for security testing is Checkov from Bridgecrew. I really like this tool because it provides over 400 tests spanning AWS, GCP, Azure and Kubernetes. The biggest drawback I have found is the export configuration – Checkov exports JUnit test results, however if nothing is applicable for a specified template, no tests will be displayed. This isn’t a huge deal, but it can be annoying if you prefer to see consistent tests across all infrastructure.

The following snippet is all you really need if you want to import Checkov into an Azure DevOps pipeline & start publishing results!

  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.7'
      addToPath: true
    displayName: 'Install Python 3.7'
  
  - script: python -m pip install --upgrade pip setuptools wheel
    displayName: 'Upgrade pip, setuptools and wheel'

  - script: pip3 install checkov
    displayName: 'Install Checkov using pip3'

  - script: checkov -d ./${{parameters.iacFolder}} -o junitxml -s >> checkov_sectests.xml
    displayName: 'Security test with Checkov'

  - task: PublishTestResults@2
    displayName: Publish Security Test Results (Checkov)
    condition: always()
    inputs:
      testResultsFormat: JUnit
      testResultsFiles: '**sectests.xml'

When to break the build & how to engage

Depending on your background, breaking the build can really seem like a negative thing. After all, you want to prevent these issues getting into production, but you don’t want to be a jerk. My position on this is that security practitioners should NOT break the build for cloud infrastructure testing within dev, test and staging. (I can already hear the people who work in regulated environments squirming at this – but trust me, you CAN do this). While integration of tools like this is definitely an easy way to prevent vulnerabilities or misconfigurations from reaching these environments, the goal is to raise awareness & not increase negative perceptions.

Security should never be the first team to say no in pre-prod environments.

Use the results of any tools added into a pipeline as a chance to really evangelize security within your business. Yelling something like “Exposing your AKS Cluster publicly is not allowed” is all well and good, but explaining why public clusters increase organisational risk is a much better strategy. The challenge when security becomes a blocker is that security will no longer be engaged. Who wants to deal with the guy who always says no? An engaged security team has so much more opportunity to educate, influence and effect positive security change.

Don’t be this guy.

Importantly, engaging well within dev/test/sit and not being that jerk who says no, grants you a magical superpower – When you do say no, people listen. When warranted, go ahead and break the build – That CVSS 10.0 vulnerability definitely isn’t making it into prod. Even better, that vuln doesn’t make it to prod WITH support of your development & operational groups!

Hopefully this post has given you some food for thought on security testing, until next time, stay cloudy!

Note: Forrest Brazeal really has become my favourite tech-related comic dude. Check his stuff out here & here.

Thoughts from an F5 APM Multi-Factor implementation

Recently I was asked to assist with the implementation of MFA in a complex on-premises environment. Beyond the implementation of Okta, all infrastructure was on-premises and neatly presented to external consumers through an F5 APM/LTM solution. This post details my thoughts & the lessons I learnt configuring RADIUS authentication for services behind an F5, utilising Okta RADIUS AAA servers.

Ideal Scenario?

Before I dive into my lessons learnt – I want to preface this article by saying there is a better way. There is almost always a better way to do something. In a perfect world, all services would support token-based single sign-on. When security of a service can’t be achieved by the best option, always look for the next best thing. Mature organisations excel at finding a balance between what is best, and what is achievable. In my scenario, the best case implementation would have been inline SSO with an external IdP. Under this model, Okta completes SAML authentication with the F5 platform and then the F5 creates and provides relevant assertions to on-premises services.

Unfortunately, the reality of most technology environments is that not everything is new and shiny. My internal applications did not support SAML and so here we are with the Okta RADIUS agent and a flow that looks something like below (replace step 9 with application auth).

Importantly, this implementation is not inherently insecure or bad, however it does have a few more areas that could be better. Okta calls this out in the documentation for exactly this reason. Something important to understand is that RADIUS secrets can be and are compromised, and it is relatively trivial to decrypt traffic once you have possession of a secret.

APM Policy

If you have a read of the Okta documentation on this topic, you will quickly be presented with an APM policy example.

You will note there are two RADIUS Auth blocks – these are intended to separate the login data verification. RADIUS Auth 1 is responsible for password authentication, and Auth 2 is responsible for verifying a provided token. If you’re using OTP only, you can get away with a simpler APM policy – Okta supports providing both the password and an OTP inline, separated by a comma, for verification.

Using this option, the policy can be simplified a small amount – always opt to simplify policy; fewer places for things to go wrong!

Inline SSO & Authentication

In a similar fashion to Okta, F5 APM provides administrators the ability to pass credentials through to downstream applications. This is extremely useful when dealing with legacy infrastructure, as credential mapping can be used to correctly authenticate a user against a service using the F5. The below diagram shows this using an initial login with RSA SecurID MFA.

For most of my integrations, I was required to use HTTP forms. When completing this authentication using the APM, having an understanding of exactly how the form is constructed is really critical. The below example is taken from an Exchange form – leaving out the flags parameter originally left my login failing & me scratching my head.
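
To illustrate how much detail those forms hide, here is a hedged sketch of replaying a typical on-premises OWA logon form with PowerShell – the field names (destination, flags, forcedownlevel, username, password) come from a common Exchange deployment and the host is a placeholder, so inspect your own form before relying on them:

  $owaHost = "https://mail.example.com"   # placeholder hostname

  # Field names lifted from a typical OWA logon form - verify against your own Exchange version
  $form = @{
    destination    = "$owaHost/owa/"
    flags          = "4"
    forcedownlevel = "0"
    username       = "DOMAIN\jsmith"
    password       = "examplepassword"
  }

  Invoke-WebRequest -Method Post -Uri "$owaHost/owa/auth.owa" -Body $form -SessionVariable owaSession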

An annoying detail about forms based inline authentication is that if you already have a session, the F5 will happily auto log back into the target service. This can be a confusing experience for most users as we generally expect to be logged out when we click that logout button. Thankfully, we can handle this conundrum neatly with an iRule.

iRule Policy application

For this implementation, I had a specific set of requirements on when APM policy should be applied to enforce MFA; not all services play nice with extra authentication. Using iRules on virtual services is a really elegant way in which we can control when an APM policy applies. On-premises Exchange is something that lots of organisations struggle with securing – especially legacy ActiveSync. The below iRule modifies when policy is applied using URI contents & device type.

when HTTP_REQUEST {
    # Bypass APM for ActiveSync and OAB traffic from iOS devices - these clients don't handle the extra authentication step
    if { (([HTTP::header User-Agent] contains "iPhone") || ([HTTP::header User-Agent] contains "iPad")) && (([string tolower [HTTP::uri]] contains "activesync") || ([string tolower [HTTP::uri]] contains "/oab")) } {
        ACCESS::disable
    } elseif { ([string tolower [HTTP::uri]] contains "logoff") } {
        # Kill the APM session on logoff so forms SSO doesn't silently log the user straight back in
        ACCESS::session remove
    } else {
        ACCESS::enable
        # Only allow the personal-settings subset of ECP; anything else is redirected back to OWA
        if { ([string tolower [HTTP::uri]] contains "/ecp") } {
            if { not (([string tolower [HTTP::uri]] contains "/ecp/?rfr=owa") || ([string tolower [HTTP::uri]] contains "/ecp/personalsettings/") || ([string tolower [HTTP::uri]] contains "/ecp/ruleseditor/") || ([string tolower [HTTP::uri]] contains "/ecp/organize/") || ([string tolower [HTTP::uri]] contains "/ecp/teammailbox/") || ([string tolower [HTTP::uri]] contains "/ecp/customize/") || ([string tolower [HTTP::uri]] contains "/ecp/troubleshooting/") || ([string tolower [HTTP::uri]] contains "/ecp/sms/") || ([string tolower [HTTP::uri]] contains "/ecp/security/") || ([string tolower [HTTP::uri]] contains "/ecp/extension/") || ([string tolower [HTTP::uri]] contains "/scripts/") || ([string tolower [HTTP::uri]] contains "/themes/") || ([string tolower [HTTP::uri]] contains "/fonts/") || ([string tolower [HTTP::uri]] contains "/ecp/error.aspx") || ([string tolower [HTTP::uri]] contains "/ecp/performance/") || ([string tolower [HTTP::uri]] contains "/ecp/ddi")) } {
                HTTP::redirect "https://[HTTP::host]/owa"
            }
        }
    }
}

One thing to be aware of when implementing iRules like this is directory traversal – you really do need a concrete understanding of what paths are and are not allowed. If a determined adversary can authenticate against a desired URI, they should NOT be able to switch to an undesired URI. The above example is really great to show this – I want my users to access personal account ECP pages just fine. Remote administrative Exchange access? That’s a big no-no and I redirect to an authorised endpoint.

Final Thoughts

Overall, the solution implemented here is quite elegant, considering the age of some infrastructure. I will always advocate for MFA enablement on a service – It prevents so many password based attacks and can really uplift the security of your users. While overall service uplift is always a better option to enable security, you should never discount small steps you can take using existing infrastructure. As always, leave a comment if you found this article useful!

Okta Workflows – Unlimited Power!

If you have ever spoken with me in person, you know I’m a huge fan of the Okta identity platform – it just makes everything easy. It’s no surprise then, that the Okta Workflows announcement at Oktane was definitely something I saw value in – interestingly enough, I’ve utilised Postman collections and Azure LogicApps for an almost identical integration solution in the past.

Custom Okta LogicApps Connector

This post will cover my first impressions, workflow basics & a demo of the capability. If you’re wanting to try this in your own Org, reach out to your Account/Customer Success Manager – The feature is still hidden behind a flag in the Okta Portal, however it is well worth the effort!

The basics of Workflows

If you have ever used Azure LogicApps or AWS Step Functions, you will instantly find the terminology of workflows familiar. Workflows are broken into three core abstractions:

  • Events – used to start your workflow
  • Functions – provide logic control (if/then and the like) & advanced transformations/functionality
  • Actions – DO things

All three abstractions have input & output attributes, which can be manipulated or utilised throughout each flow using mappings. Actions & Events require a connection to a service – pretty self-explanatory.

Workflows are built from left to right, starting with an event. I found the left to right view when building functions really refreshing. If you have ever scrolled down a large LogicApp, you will know how difficult it can get! Importantly, keeping your flows short and efficient will allow easy viewing & understanding of functionality.

Setting up a WorkFlow

For my first workflow I’ve elected to solve a really basic use case – Sending a message to slack when a user is added to an admin group. ChatOps style interactions are becoming really popular for internal IT teams and are a lot nicer than automated emails. Slack is supported by workflows out of the box and there is an O365 Graph API option available if your organisation is using Microsoft Teams.

First up is a trigger; User added to a group will do the trick!

Whenever you add a new integration, you will be prompted for a new connection and depending on the service, this will be different. For Okta, this is a simple OpenID app that is added when workflows is onboarded to the org. Okta Domain, Client ID, Client Secret and we are up and running!

Next, I need to integrate with Slack – Same process; Select a task, connect to the service;

Finally, I can configure my desired output to slack. A simple message to the #okta channel will do.

Within about 5 minutes I’ve produced a really simple two step flow, and I can click save & test on the right!

Looking Good!

If you’ve been paying attention, you would have realised that this flow is pretty noisy – I would have a message like this for ALL Okta groups. How about adding conditions to this flow for only my desired admin group?

Under the “Functions” option, I can elect to add a simple Continue If condition and drag across the group name from my trigger. Group ID would definitely be a bit more explicit, but this is just a demo 💁🏻.

Finally, I want to clean up my Slack message & provide a bit more information. A quick scroll through the available functions and I’m presented with a text concatenate;

Save & Test – Looking Good!

Whats Next?

My first impressions of the Okta Workflows service are really positive – The UI is definitely well designed & accessible to the majority of employees. I really like the left to right flow, the functionality & the options available to me in the control pane.

The early support for key services is great. Don’t worry if something isn’t immediately available as an Okta deployed integration – If something has an API you can consume it with some of the advanced functions.

REST API Integration

If you want to dive straight into the Workflows deep end, have a look at the documentation page – Okta has already provided a wealth of knowledge. This Oktane video is also really great.

Okta Workflows only gets better from here. I’m especially excited to see the integrations with other cloud providers and have already started planning out my advanced flows! Until then, Happy tinkering!

AWS GuardDuty: What you need to know

One of the most common recurring questions asked by customers across all business sectors is: How do I monitor security in the cloud?

While it’s extremely important to have good governance, design and security practice in place when moving to the cloud, it’s also extremely important to have tools in place for detecting when something has gone wrong.

For AWS customers, this is where GuardDuty comes in.

A managed threat detection service, GuardDuty utilises the size and breadth of AWS to detect malicious activity within your network. It’s a fairly simple concept, with huge benefits. As a business, you have visibility of your assets & services. As a provider, Amazon has visibility of network services along with visibility of ALL customers’ networks.

Using this, Amazon has been able to analyse, predict and prevent huge amounts of malicious cyber activity. It’s hard to see the forest for the trees, and GuardDuty is your satellite – provided all thanks to AWS.

How Amazon GuardDuty works

In this blog, we’ll cover why AWS GuardDuty is great for cloud security on AWS deployments, its costs and benefits, and key considerations your business needs to evaluate before adopting the service.

Why is security monitoring & alerting important?

Once a malicious actor penetrates your network, time is key.

Microsoft’s incident response team has the “Minutes Matter” motto for a reason. In 2018, the average dwell time for Asia Pacific was 204 days (FireEye). That’s more than half a year in which your data can be stolen, modified or destroyed.

Accenture recently estimated the average breach costs a company 13 million dollars. That’s an increase of 12% since 2017, and a 72% increase on figures from 5 years ago.

As a business, it’s extremely important to have a robust detection and response strategy. Minimising dwell time is critical and enabling your IT teams with the correct tooling to remove these threats can reduce your risk profile.

The result of your hard efforts? Potential savings of huge sums of money.

AWS GuardDuty helps your teams by offloading the majority of the heavy lifting to Amazon. While it’s not a silver bullet, removal of monotonous tasks like comparing logs to threat feeds is an easy way to free up your team’s time.

What does GuardDuty look like?

For those of you who are technically inclined, Amazon provides some really great tutorials for trying out GuardDuty in your environment and we’ll be using this one for demonstration purposes. 

GuardDuty’s main area of focus is the findings panel. Hopefully this area remains empty with no alerts or warnings. In a nightmare scenario, it could look like this:


Thankfully, this panel is just a demo and you can see a couple of useful features that are designed to help your security teams respond effectively. On the left, you will notice a coloured icon denoting the severity of each incident – red triangles for critical issues, orange squares for warnings and blue circles for information. Under findings, you will find a quick summary of the issue – we’re going to select one and dig into the result.

As you can see, a wealth of data is presented when you navigate into the threat itself. You can quickly see details of the event, in this case Command & Control activity, understand exactly what is affected and then navigate directly to the affected instance. Depending on the finding & your configuration,  GuardDuty may have even automatically completed an action to resolve this issue for you.

AWS GuardDuty: What are the costs?

AWS GuardDuty is fairly cheap due to the fact it relies on existing services within the AWS ecosystem.

First cab off the rank is CloudTrail, the consolidated log management solution for AWS. Amazon themselves advise that CloudTrail will set you back approximately:

  • $8 for 2.15 MILLION events
  • $5 for the log ingestion
  • Around $3 for the S3 storage.
  • Required VPC flow logs will then set you back 50¢ per GB. 

Finally, the AWS GuardDuty service itself costs $4 per million events.

Working on the basis that we generate about two million events a month, we end up paying only around $16 (AUD).

Pretty cheap, if you ask us.

AWS GuardDuty: Key business considerations

GuardDuty is great, but you need to make sure you’re aware of a couple of things before you enable it:

It’s a regional service. If you’re operating in multiple regions you need to enable it for each, and remember that alerts will only show in those regions. Alternatively, you can ship your logs to a central account or region and use a single instance.

It’s not a silver bullet. While some activity will be automatically blocked, you do need to check in on the panel and act on each issue. While the machine learning (ML) capability of AWS GuardDuty is great, sometimes it will get it wrong and human (manual) intervention is needed.

It doesn’t analyse historical data. Analysis is completed on the fly, so make sure to enable it sooner rather than later.

Can you extend AWS GuardDuty?

Extending GuardDuty is a pretty broad topic, so I’ll give you the short answer: Yes, you can.

If you’re interested there’s a wealth of information available at the following locations:

Hopefully by now you’re eager to give GuardDuty a go within your own environment! It’s definitely a valuable tool for any IT administrator or security team. As always, feel free to reach out to myself or the Xello team should you have any questions about staying secure within your cloud environment.

Originally Posted on xello.com.au