TechOnTip Weblog

Run book for Technocrats

SuperMicro ZFS Storage Server

Posted by Brajesh Panda on August 5, 2017

I want one of these!!

http://www.jonkensy.com/832-tb-zfs-on-linux-project-cheap-and-deep-part-1/

Posted in Mix & Match | Leave a Comment »

Slow Code: Top 5 Ways to Make Your PowerShell Scripts Run Faster

Posted by Brajesh Panda on July 13, 2017

https://blogs.technet.microsoft.com/ashleymcglone/2017/07/12/slow-code-top-5-ways-to-make-your-powershell-scripts-run-faster/


Posted in Mix & Match | Leave a Comment »

Office 365 – Exchange Online Protection is so bad!!

Posted by Brajesh Panda on July 10, 2017

After so many years, Microsoft's algorithms still cannot catch this kind of phishing email. Not sure what they are doing. AI, machine learning… blah blah blah.

We were piloting a third-party anti-spam product and removed the pilot before going into negotiations. In the week since, we have had 8 major inbound phishing campaigns.

Every time you create a ticket with Microsoft, they treat it as a one-time incident and end up recommending that you submit a copy of the message and block the sending IP.

As much as I like 365 as a solution, I hate this product.

Posted in Mix & Match | Leave a Comment »

Powershell Error Handling

Posted by Brajesh Panda on July 7, 2017

https://blogs.msdn.microsoft.com/kebab/2013/06/09/an-introduction-to-error-handling-in-powershell/

https://rkeithhill.wordpress.com/2009/08/03/effective-powershell-item-16-dealing-with-errors/

https://www.chrisgolden.de/blog/2017/03/09/powershell-error-output/

http://tommymaynard.com/quick-learn-get-pscallstack-demonstration-using-multiple-functions-2016/
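For quick reference, here is a minimal try/catch sketch in the spirit of those posts (the file path below is only a placeholder): stop on the error, inspect the error record, and dump the call stack for context.

try {
    Get-Item 'C:\DoesNotExist.txt' -ErrorAction Stop
}
catch {
    "Caught: $($_.Exception.Message)"
    $_.InvocationInfo.PositionMessage      # where the error was raised
    Get-PSCallStack | Format-Table Command, Location -AutoSize
}
finally {
    "Cleanup always runs."
}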

Posted in Mix & Match | Leave a Comment »

SAML vs OAuth 2.0

Posted by Brajesh Panda on May 8, 2017

Good article by Zach Dennis.

https://www.mutuallyhuman.com/blog/2013/05/09/choosing-an-sso-strategy-saml-vs-oauth2/

Chances are you’ve logged into an application (mobile app or web app) by clicking on a ‘Log in with Facebook’ button. If you use Spotify, Rdio, or Pinterest, then you know what I’m talking about.

As a user, you likely don’t care about how SSO works. You just want to use an application and can be thankful for a smoother experience and that you have to remember fewer logins and passwords.

In order to provide a user with a single sign-on experience, a developer needs to implement an SSO solution. Over the years there have been many attempts at achieving SSO, but this article is going to focus on a comparison between SAML and OAuth2 – a recent exploration that we took on (thankfully coming out the other end unscathed, but with a lot of information).

Our Need for SSO

We’re working on a platform which will have several client applications. Some of these applications will be web-based, others will be native, such as mobile apps.

This platform will roll out being accessible to a few different clients (owned by different organizations). Down the road, additional third-party applications are intended to be built around this platform.

The platform is a front end to a large enterprise system that already has identity information about the people who would be interacting with it. Rather than having each client application maintain their own user database with usernames and passwords, it seems more appropriate to utilize SSO.

Single sign on would allow the enterprise system to securely store and own all of the user credentials. The platform can establish a trust relationship with the enterprise authentication server and client applications can be built to utilize the trusted auth server to authenticate users.

Our goal was to identify an SSO strategy and implementation that could support these needs.

Enter SAML 2.0

We originally looked into SAML 2.0 which is a set of open standards, one of which is specifically designed for SSO.

The SAML 2.0 specification (henceforth SAML) provides a Web Browser SSO Profile which describes how single sign on can be achieved for web apps. There are three main players in SAML:

SAML vs. OAuth2 terminology

SAML and OAuth2 use similar terms for similar concepts. For comparison the formal SAML term is listed with the OAuth2 equivalent in parentheses.

  • Service Provider (Resource Server) – this is the web-server you are trying to access information on.
  • Client – this is how the user is interacting with the Resource Server, like a web app being served through a web browser.
  • Identity Provider (Authorization Server) – this is the server that owns the user identities and credentials. It’s who the user actually authenticates with.

The most common SAML flow is shown below:

Here’s a fictitious scenario describing the above diagram:

  • A – a user opens their web-browser and goes to MyPhotos.com which stores all of their photos. MyPhotos.com doesn’t handle authentication itself.
  • B – to authenticate the user, MyPhotos.com constructs a SAML AuthnRequest, signs it, optionally encrypts it, and encodes it. It then redirects the user’s web browser to the Identity Provider (IdP) in order to authenticate. The IdP receives the request, decodes it, decrypts it if necessary, and verifies the signature.
  • C – with a valid AuthnRequest, the IdP will present the user with a login form in which they can enter their username and password.
  • D – once the user has logged in, the IdP generates a SAML token that includes identity information about the user (such as their username, email, etc.). The IdP takes the SAML token and redirects the user back to the Service Provider (MyPhotos.com).
  • E – MyPhotos.com verifies the SAML token, decrypts it if necessary, and extracts out identity information about the user, such as who they are and what their permissions might be. MyPhotos.com now logs the user into its system, presumably with some kind of cookie and session.

At the end of the process the user can interact with MyPhotos.com as a logged in user. The user’s credentials never passed through MyPhotos.com, only through the Identity Provider.

There is more detail to the above diagram, but this is the high level of what’s going on.
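As a rough illustration of the encoding involved (not part of the original article), here is a hedged PowerShell sketch of the Service Provider side of the HTTP Redirect binding: the AuthnRequest XML is DEFLATE-compressed, Base64-encoded, and URL-encoded into the SAMLRequest query parameter. All URLs are placeholders and signing is omitted.

# Hypothetical SP-side encoding of a SAML AuthnRequest for the HTTP Redirect binding
$authnRequest = @"
<samlp:AuthnRequest xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    ID="_$([guid]::NewGuid())" Version="2.0"
    IssueInstant="$((Get-Date).ToUniversalTime().ToString('o'))"
    AssertionConsumerServiceURL="https://myphotos.example.com/saml/acs">
  <saml:Issuer xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion">https://myphotos.example.com</saml:Issuer>
</samlp:AuthnRequest>
"@

# DEFLATE-compress, then Base64-encode, then URL-encode
$bytes   = [Text.Encoding]::UTF8.GetBytes($authnRequest)
$ms      = New-Object System.IO.MemoryStream
$deflate = New-Object System.IO.Compression.DeflateStream($ms, [IO.Compression.CompressionMode]::Compress)
$deflate.Write($bytes, 0, $bytes.Length)
$deflate.Close()
$samlRequest = [Uri]::EscapeDataString([Convert]::ToBase64String($ms.ToArray()))

# The browser is redirected to the IdP with the encoded request (step B)
"https://idp.example.com/sso?SAMLRequest=$samlRequest"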

SAML token vs. SAML Assertion

When first being introduced to SAML, the term "SAML token" came up over and over again. It’s not actually a term in the SAML spec, but people kept using it, and its meaning was elusive.

As it turns out, the term "SAML token" seems to be a colloquial way to refer to the SAML Assertion, often compressed, encoded, possibly encrypted, and usually looking like gobbledygook. A SAML Assertion is just an XML node with certain elements.

SAML’s Native App Limitation

SAML supports the concept of bindings. These are essentially the means by which the Identity Provider redirects the user back to the Service Provider. For example, in step D above, the user gets redirected back to MyPhotos.com, but how?

The two relevant types of bindings are the HTTP Redirect and the HTTP POST binding defined in the SAML 2.0 spec. The HTTP Redirect binding will use a HTTP Redirect to send the user back to the Service Provider, in the case of our example: MyPhotos.com.

The HTTP Redirect binding is great for short SAML messages, but it is not recommended for longer messages such as SAML assertions. From Wikipedia:

Longer messages (e.g., those containing signed SAML assertions) should be transmitted via other bindings such as the HTTP POST Binding.

The recommended way of using an HTTP POST has its own oddities. For example, the SAML specification recommends that step D above renders an HTML form where the action points back to the Service Provider.

You can either have the user click another button to submit that form, or you can use JavaScript to submit the form automatically. Why is there a form that needs to be submitted? In my opinion, SAML 2.0 is showing its age (circa 2005), as the form here only exists so an HTTP POST can be used to send the SAML token back to the Service Provider. Which, in SAML’s defense, was likely a necessary decision back in 2005.

This is a problem when the client is not a web-based application, but a native one, such as a mobile app. For example, let’s say we’ve installed the MyPhotos iPhone app. We open the app, and it wants us to authenticate against the Identity Provider. Once we authenticate, the Identity Provider needs to send the SAML token back to the MyPhotos app.

Most mobile applications can be launched via a custom URI, such as, "my-photos://authenticate", and presumably, the Identity Provider submits the form that includes the SAML token to that URL. Our MyPhotos app launches, but we’re not logged in. What gives?

Mobile apps don’t have access to the HTTP POST body. They only have access to the URL used to launch the application. This means that we can’t read the SAML token.

Launching Mobile Apps via URLs

On Android: launching an application from a url using Intents.

On iOS: launching an application by registering a custom URI scheme.

No SAML token, no authenticated user.

Working Around SAML’s HTTP POST Binding

The limitation of the HTTP POST binding for native mobile apps can be worked around. For example, you can use embedded web views, in which you write custom code to watch the entire authentication process. At the very end of the process you scrape the HTML of the page and extract out the SAML token.

A second workaround is to implement a proxy server which can receive the HTTP POST, extract the SAML token, and then build a URL that includes the SAML token (e.g. "myphotos://authenticate/?SAMLRequest=asdfsdfsdf"). The proxy server could then use an HTTP Redirect to cause the device to open the MyPhotos app. And since the SAML token is part of the URL, the MyPhotos app can extract it and use it to log in.

A third workaround would be to ignore the specification’s recommendation against using the HTTP Redirect binding. This is very tempting, but it’s hard to shake the feeling that you’re walking into a minefield, just hoping you don’t take one wrong step.

Another option, which avoids workarounds altogether, is to not rely on SAML at all and look at another approach, like OAuth 2.0.

Enter OAuth 2.0

Unlike SAML, OAuth 2.0 (henceforth OAuth2), is a specification whose ink has barely dried (circa late 2012). It has the benefit of being recent and takes into consideration how the world has changed in the past eight years.

Mobile devices and native applications are prevalent today in ways that SAML could not anticipate in 2005.

The basic players with OAuth2 are:

SAML vs. OAuth2 terminology

SAML and OAuth2 use similar terms for similar concepts. For comparison the formal OAuth2 term is listed with the SAML equivalent in parentheses.

  • Resource Server (Service Provider) – this is the web-server you are trying to access information on.
  • Client – this is how the user is interacting with the Resource Server. This could be a browser-based web app, a native mobile app, a desktop app, a server-side app.
  • Authorization Server (Identity Provider) – this is the server that owns the user identities and credentials. It’s who the user actually authenticates and authorizes with.

At a high level, the OAuth2 flow is not that different from the earlier SAML flow:

Let’s walk through the same scenario we walked through with SAML earlier:

  • A – a user opens their web-browser and goes to MyPhotos.com which stores all of their photos. MyPhotos.com doesn’t handle authentication itself, so the user is redirected to the Authorization Server with a request for authorization. The user is presented with a login form and is asked if they want to approve the Resource Server (MyPhotos.com) to act on their behalf. The user logs in and they are redirected back to MyPhotos.com.
  • B – the user’s browser receives an authorization grant code as part of the redirect and then passes this along to the client.
  • C – the Client then uses that authorization grant code to request an access token from the Authorization Server.
  • D – if the authorization grant code is valid, then the Authorization Server grants an access token. The access token is then used by the client to request resources from the Resource Server (MyPhotos.com).
  • E – MyPhotos.com receives the request for a resource and it receives the access token. In order to make sure it’s a valid access token it sends the token directly to the Authorization Server to validate. If valid, the Authorization Server sends back information about the user.
  • F – having validated the user’s request MyPhotos.com sends the requested resource back to the user.

This is the most common OAuth2 flow: the authorization code flow. OAuth2 provides three other flows (or what they call authorization grants) which work for slightly different scenarios, such as single page javascript apps, native mobile apps, native desktop apps, traditional web apps, and server-side applications where a user isn’t directly involved but they’ve granted you permission to do something on their behalf.
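To make the token exchange concrete, here is a hedged PowerShell sketch of steps C and D of the authorization code flow (not from the original article; every URL, client ID, and secret is a placeholder): the client posts the authorization code to the token endpoint, then calls the Resource Server with the resulting bearer token.

# Exchange the authorization code for an access token (steps C/D)
$tokenResponse = Invoke-RestMethod -Method Post -Uri 'https://auth.example.com/oauth2/token' -Body @{
    grant_type    = 'authorization_code'
    code          = $authorizationCode                      # received via the redirect in step B
    redirect_uri  = 'https://myphotos.example.com/callback'
    client_id     = 'myphotos-client'
    client_secret = '<client secret>'
}

# Call the Resource Server with the bearer token (steps D/E)
Invoke-RestMethod -Uri 'https://myphotos.example.com/api/photos' -Headers @{
    Authorization = "Bearer $($tokenResponse.access_token)"
}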

The big advantage of the OAuth2 flows is that the communication from the Authorization Server back to the Client and Resource Server is done over HTTP redirects, with the token information provided as query parameters. OAuth2 also doesn’t assume the Client is a web browser, whereas the default SAML Web Browser SSO Profile does.

Native mobile applications will just work out of the box. No workarounds necessary.

OAuth2’s Favorite Phrase: Out of Scope

The OAuth2 specification doesn’t prescribe how the communication between the Resource Server and the Authorization Server works in a lot of situations, such as validating a token. It also doesn’t say anything about what information should be returned about the user or in what format.

There are quite a few spots where the OAuth2 specification states that things are "outside the scope of this specification." This has brought criticism to the OAuth2 spec because it leaves a lot of things up to implementation which could lead to incompatible implementations at some point.

OAuth2 is still very young, yet it already has widespread adoption from the likes of Google, Facebook, Salesforce, and Twitter, to name a few. The true beauty of OAuth2, though, is its simplicity. In fact, the OpenID Connect Basic Profile, which builds on OAuth2, fills in some of the areas that the OAuth2 spec itself doesn’t define.

OAuth2: Not Requiring Digital Signatures By Default

OAuth2 doesn’t require signing messages by default. If you want to add that in, feel free, but out of the box, the spec works without it. It does prescribe that all requests should be made over SSL/TLS.

This has caused commotion in the past.

Having worked with OAuth2 and OAuth1 in the past, I can say that OAuth2 is much simpler than OAuth1 (and more enjoyable to work with). Interoperability and automatic discovery of services may be something useful in the future, but right now, it’s not anything we’re looking for.

We may be asked to sign messages once the security team of the enterprise does final auditing of the OAuth2 implementation, but for now, OAuth2 fits our current goals in a more standardized manner than SAML. It’s also far simpler.

Who’s Got Your Keys?

If every application has a secured web-server backing it then signing works great, but when that’s not the case the problem becomes more nuanced. How do you securely store your keys in the browser for browser-based JS apps or in native mobile apps?

If you google decompiling iOS and Android apps, your heart will sink. Your keys really aren’t that secure if you can’t own and secure the device.

OAuth2 is for Authorization, not Authentication

The "auth" in OAuth does stand for "Authorization" and not "Authentication". The pedant in you may be smiling. You’ve got me!

But – yes, there’s always a but! Even though the term OAuth is fairly recent, the fact that "auth" meant authorization seems a tad bit anachronistic. It’s already being used to achieve SSO out in the wild (thanks to the likes of Facebook, Twitter, Salesforce, and Google and thousands of sites using them for authenticating and authorizing users).

The biggest complaint I’ve seen is the lack of prescription and the plentiful "out of scope" usages in the OAuth2 spec. The fact that the OpenID Connect Basic Profile is built directly on top of OAuth2 should be enough to dispel the myth that OAuth2 can’t be used for authentication.

What a word meant six years ago is much less important than what it can encompass today.

Summary

SAML has one feature that OAuth2 lacks: the SAML token contains the user identity information (because of signing). With OAuth2, you don’t get that out of the box, and instead, the Resource Server needs to make an additional round trip to validate the token with the Authorization Server.

On the other hand, with OAuth2 you can invalidate an access token on the Authorization Server, and disable it from further access to the Resource Server.

Both approaches have nice features and both will work for SSO. We have proved out both concepts in multiple languages and various kinds of applications. At the end of the day OAuth2 seems to be a better fit for our needs (since there isn’t an existing SAML infrastructure in place to utilize).

OAuth2 provides a simpler and more standardized solution which covers all of our current needs and avoids the use of workarounds for interoperability with native applications.

As this begins to unfold and we work with various security teams we’ll see how far this holds up.

So far, so good.

Posted in Mix & Match | Leave a Comment »

ADFS SAML Claim: Windows Domain Name (NetBIOS)

Posted by Brajesh Panda on March 28, 2017

As there is no attribute in Active Directory that tells you which domain a user account belongs to, I designed my SAML claim rules to retrieve the NetBIOS name of the Active Directory domain.

Edit (June 14, 2017): Well, there is an easier way to do it. Use the Windows Account Name and Name claims from ADFS. These two provide the NetBIOS name in the claim, like domain\samaccountname, and you can then strip out \samaccountname. To pass these two in your claim, you have to create a pass-through claim rule.

  1. Create a new claim description
    1. ADFS Management Console – Service – Claim Descriptions – Add Claim Description
    2. Give it a name
    3. Supply the claim type, like http://custom.techontip.dom/adattribute/windowsdomainname
  2. Claim Rule 1: Create a claim rule to capture/add all AD groups into the claim

c:[Type == "http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname", Issuer == "AD AUTHORITY"]

=> add(store = "Active Directory", types = ("http://schemas.xmlsoap.org/claims/Group"), query = ";tokenGroups(domainQualifiedName);{0}", param = c.Value);

  3. Claim Rule 2: Select only one group. Here I will select the Domain\Domain Users group.

c:[Type == "http://schemas.xmlsoap.org/claims/Group", Value =~ "(?i)Domain Users"]

=> issue(claim = c);

  4. Claim Rule 3: Use RegexReplace to remove \Domain Users and pass the remaining value to the newly created claim description.

c:[Type == "http://schemas.xmlsoap.org/claims/Group", Value =~ "(?i)Domain Users"]

=> issue(Type = "http://custom.techontip.dom/adattribute/windowsdomainname", Value = RegexReplace(c.Value, "\\[^\n]*", ""));

The claim rules need to be in this order.

In Claim Rule 2, "(?i)" makes the match case-insensitive.

In Claim Rule 3, "\\[^\n]*" matches the first backslash and everything after it. To match a backslash you have to escape it with a second backslash (\\), but other special characters such as , or @ do not need to be doubled.

Here is a nice article about RegEx in Claim rule

https://social.technet.microsoft.com/wiki/contents/articles/16161.ad-fs-2-0-using-regex-in-the-claims-rule-language.aspx
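If you prefer to script this instead of clicking through the console, here is a hedged sketch using the ADFS PowerShell module; the relying party trust name, the rules file path, and the claim-type URI are placeholders to adapt to your environment.

# Create the custom claim description once (step 1)
Add-AdfsClaimDescription -Name 'Windows Domain Name' `
    -ClaimType 'http://custom.techontip.dom/adattribute/windowsdomainname' `
    -IsOffered $true -IsAccepted $true

# Apply the three claim rules above, in order, to the relying party trust
$rules = Get-Content 'C:\ADFS\WindowsDomainNameRules.txt' -Raw
Set-AdfsRelyingPartyTrust -TargetName 'My SAML App' -IssuanceTransformRules $rules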

Posted in Mix & Match | 2 Comments »

Replacing legacy Domain Controller Certificates

Posted by Brajesh Panda on January 13, 2017

Original article: http://www.open-a-socket.com/index.php/2012/11/21/replacing-legacy-domain-controller-certificates/

Something you may have noticed in your journey on the road to AD enlightenment is that if you deploy a new Microsoft Enterprise Certificate Authority (CA) and publish the default templates, your Domain Controllers will automatically enroll for a certificate. The template used is the DomainController V1 template, which has been around since the Windows 2000 days.

But what if you wanted to assign a different certificate based on the most recent template designed for use with DCs (KerberosAuthentication)? Easy, you would think, given that the DCs have this built-in autoenrollment capability. All I would need to do is unpublish the old DomainController template, publish the new KerberosAuthentication template, ensure that DCs have autoenroll permissions on the template, and then run certutil -pulse on the DCs. Right? Wrong. It’s actually not that straightforward. From what I have managed to infer (no one will provide me with a definitive answer), it seems the built-in auto-enrollment feature of Domain Controllers is tied specifically to the legacy DomainController template. In other words, it will only work with the DomainController template and no other.

The only way I can get the DCs to successfully autoenroll for a certificate based on the KerberosAuthentication template is to follow the steps shown below.

1. Ensure the Domain Controllers group has permissions on the KerberosAuthentication template (it has by default).

2. Modify the properties of the KerberosAuthentication template to add the DomainController, DirectoryEmailReplication and DomainControllerAuthentication templates to the list of superseded templates

3. Publish the KerberosAuthentication template

4. Modify a GPO linked to the Domain Controllers OU to enable the “Certificate Services Client – Auto-Enrollment” setting as shown below.

5. Wait for policy to apply to the DCs (or run gpupdate /force).

6. Run certutil -pulse from an elevated CMD prompt to force re-enrollment.

7. Confirm that a new certificate has been issued based on the KerberosAuthentication template and that the old certificate based on the DomainController template has been automatically removed.
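For step 7, one way to check which template each certificate in the DC’s machine store came from is to read the certificate template extensions. A hedged sketch (the OIDs are the standard v1 template-name and v2 template-information extension identifiers):

# List machine certificates with their template information
Get-ChildItem Cert:\LocalMachine\My | ForEach-Object {
    $tpl = $_.Extensions | Where-Object { $_.Oid.Value -in '1.3.6.1.4.1.311.20.2','1.3.6.1.4.1.311.21.7' }
    [pscustomobject]@{
        Subject  = $_.Subject
        NotAfter = $_.NotAfter
        Template = if ($tpl) { ($tpl | Select-Object -First 1).Format($false) } else { '(none)' }
    }
}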

Posted in Mix & Match | Leave a Comment »

Get child objects thru ADSI & Powershell

Posted by Brajesh Panda on January 3, 2017

For example, Exchange organization details:

$ForestRootDSE= (Get-ADRootDSE).rootDomainNamingContext

$config = [ADSI]"LDAP://CN=Microsoft Exchange,CN=Services,CN=Configuration,$ForestRootDSE"

$orgName = $config.psbase.children | where {$_.objectClass -eq 'msExchOrganizationContainer'}

$orgName.name

$orgName.objectVersion

I have to understand a little more about psbase. It seems more methods are available under psbase ($config.psbase | gm), and it is mostly used with ADSI, WMI, or XML kinds of objects. Folks say PSBase lets you get at the “raw” object behind the object PowerShell exposes by default; in other words, PSBase lets you get at all the properties and methods of the object.
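A quick way to see the difference, building on the $config object above (a hedged illustration, not from the original post):

# Members PowerShell's ADSI adapter exposes by default (mostly directory attributes)
($config | Get-Member).Count
# The raw DirectoryEntry members exposed through psbase (CommitChanges, Invoke, Children, ...)
($config.psbase | Get-Member).Count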

https://blogs.technet.microsoft.com/heyscriptingguy/2008/03/12/hey-scripting-guy-how-can-i-use-windows-powershell-to-add-a-domain-user-to-a-local-group/

https://blogs.msdn.microsoft.com/powershell/2006/11/24/whats-up-with-psbase-psextended-psadapted-and-psobject/

Posted in Mix & Match | Leave a Comment »

Active Directory LDAP filters

Posted by Brajesh Panda on October 26, 2016

http://social.technet.microsoft.com/wiki/contents/articles/5392.active-directory-ldap-syntax-filters.aspx

Posted in Mix & Match | Leave a Comment »

Powershell Tip: Date Formatting ISO 8601

Posted by Brajesh Panda on October 10, 2016

Convert the current date to ISO 8601 (sortable) format, like 2016-10-10T16:35:48:  Get-Date -Format s

Convert an ISO 8601 string to PowerShell’s DateTime type:  [datetime]::Parse($date), where $date holds the ISO 8601 input.  Here is the reference to one of my old posts, which converts user input to PowerShell’s DateTime format.
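Putting the two together, a small round-trip sketch (the 'o' round-trip format at the end is an extra suggestion, not from the original post):

$date = Get-Date -Format s        # sortable ISO 8601 string, e.g. 2016-10-10T16:35:48
[datetime]::Parse($date)          # back to a DateTime object
(Get-Date).ToString('o')          # round-trip format with full precision, if you need milliseconds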

Here are two other interesting articles.

https://technet.microsoft.com/en-us/library/ee692801.aspx

https://blogs.technet.microsoft.com/heyscriptingguy/2010/08/03/how-to-express-dates-in-different-fashions-with-windows-powershell/


Posted in Powershell | Leave a Comment »

Powershell: Use Convert-String to parse email addresses

Posted by Brajesh Panda on September 13, 2016

Feed: GoateePFE
Posted on: Tuesday, September 13, 2016 8:14 AM
Author: Ashley McGlone
Subject: Use the new PowerShell cmdlet Convert-String to parse email addresses

Tired of hacking away at RegEx and string functions to parse text? This post is for you!

New toys

PowerShell 5.x includes a number of new features. One of the lesser-known and incredibly powerful additions is the string conversion set of cmdlets. The names are very similar. Check out how Get-Help describes them:

PS C:\> Get-Help Convert*-String | Format-Table Name,Synopsis -AutoSize
Name Synopsis
---- --------
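A minimal, hedged sketch of the idea behind the article (requires PowerShell 5.x; the addresses and the example pair are made up): Convert-String learns a before=after transformation from -Example and applies it to the pipeline input.

# Teach Convert-String to pull the alias out of an email address
'jsmith@contoso.com', 'bpanda@contoso.com' |
    Convert-String -Example 'auser@contoso.com=auser'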

Posted in Mix & Match | Leave a Comment »

Powershell – Change CSV Headers

Posted by Brajesh Panda on September 2, 2016

The CSV file has the headers below, and I want to change them by appending 1 to each one.

Name, samaccountname, ComputerEnabled, User, UserEnabled, UserOffice, UserDepartment

$headers = "name1","samaccountname1","ComputerEnabled1","User1","UserEnabled1","UserOffice1","UserDepartment1"

Get-Content C:\AdobeStandard.csv -Encoding Default | Select-Object -Skip 1 | ConvertFrom-CSV -UseCulture -Header $headers
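If you want to persist the result with the new headers, a hedged follow-up (the output path is a placeholder):

Get-Content C:\AdobeStandard.csv -Encoding Default |
    Select-Object -Skip 1 |
    ConvertFrom-Csv -UseCulture -Header $headers |
    Export-Csv C:\AdobeStandard_Renamed.csv -NoTypeInformation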

http://powershell.com/cs/blogs/tips/archive/2016/09/02/replacing-csv-file-headers.aspx

Posted in Mix & Match | Leave a Comment »

Azure Backup Reporting

Posted by Brajesh Panda on August 10, 2016

In the absence of any good reporting mechanism for Azure Backup agents, I have created this one for work. We have just started backing up some branch file servers to Azure, and that number will go up. So for now it is okay; we will see what improvements I can make to it. If you have an idea, please suggest it, or implement it and send me a copy 😉

Download from here.

# Base Script
# Added Recovery Points
# Convert UTC times to server's local time

# Add all of your servers' computer accounts to a group called Azure_Backup_Reporting
$AzureBackupAgents = Get-ADGroupMember Azure_Backup_Reporting | select Name

$Consolidated = @()

foreach($Node in $AzureBackupAgents)
{
 $Agent = $Node.Name
 $Agent   # echo the server name being processed
 # Select last 10 Azure Backup Jobs from the remote server
 $AgentReports = Invoke-Command -ComputerName $Agent -ScriptBlock `
 {
 # Find time (ticks) offset between Local and UTC. 
 $Date = Get-Date;
 $UTC = $Date.ToUniversalTime();
 $OffSet = $Date - $UTC;
 # Find last 10 job details
 Get-OBJob -Previous 10 | select Jobtype, `
 @{Name="StartTime"; Expression ={($_.JobStatus.StartTime).AddTicks($OffSet.Ticks)}}, `
 @{Name="EndTime"; Expression ={($_.JobStatus.EndTime).AddTicks($OffSet.Ticks)}}, `
 @{Name="JobState"; Expression ={$_.JobStatus.JobState}}, `
 @{Name="BackupSizeInGB"; Expression ={($_.JobStatus.DatasourceStatus.ByteProgress.Progress / 1GB).ToString('#.##')}}, `
 @{Name="TotalSizeInGB"; Expression ={($_.JobStatus.DatasourceStatus.ByteProgress.Total / 1GB).ToString('#.##')}}, `
 @{Name="FailedFileLog"; Expression ={$_.JobStatus.FailedFileLog}}
 }
 
 # Filter & Select last 24 hours logs 
 $Day = (Get-Date).AddHours(-24)
 $Filtered = $AgentReports | where{$_.StartTime -gt $Day} | Sort StartTime -Descending | Select @{Name="AgentName"; Expression ={$_.PSComputerName}}, JobType, StartTime, Endtime, JobState, BackupSizeInGB, TotalSizeInGB, FailedFileLog
 ($Filtered | Measure-Object).Count
 $Consolidated += $Filtered
}
$Node = $null

# Recovery Point Details
$Consolidated | Add-Member -MemberType NoteProperty TotalRPs ''
$Consolidated | Add-Member -MemberType NoteProperty OldestRP ''
$Consolidated | Add-Member -MemberType NoteProperty LatestRP ''

foreach($Node in $Consolidated)
{
 $RecoveryPoints = Invoke-Command -ComputerName $Node.AgentName -ScriptBlock `
 {
 Get-OBAllRecoveryPoints | Sort-Object backuptime
 }
 $Node.TotalRPs = ($RecoveryPoints | Measure-Object).Count
 $Node.OldestRP = $RecoveryPoints[0].BackupTime
 $Node.LatestRP = $RecoveryPoints[-1].BackupTime
}

 # HTML formatting; styles to be placed in the header
 $a = "<style>"
 $a = $a + "BODY{font-family: Verdana, Arial, Helvetica, sans-serif;font-size:10;font-color: #000000}"
 $a = $a + "TABLE{border-width: 1px;border-style: solid;border-color: black;border-collapse: collapse;}"
 $a = $a + "TH{border-width: 1px;padding: 0px;border-style: solid;border-color: black;background-color: #E8E8E8}"
 $a = $a + "TD{border-width: 1px;padding: 0px;border-style: solid;border-color: black}"
 $a = $a + "</style>"

 # Mail Alert
 $MailSubject = "Consolidated Azure Backup Report for Last 24hrs"
 $MailBody = $Consolidated | Select AgentName,JobType,StartTime,EndTime,JobState,BackupSizeInGB,TotalSizeInGB,TotalRPs,OldestRP,LatestRP,FailedFileLog | ConvertTo-Html -Head $a

 # Mailout the Report
 $smtpServer = "smtp.server.name"
 $msg = new-object Net.Mail.MailMessage
 $smtp = new-object Net.Mail.SmtpClient($smtpServer)

 $msg.From = "AzureBackupAdmin@abc.com"
 $msg.To.Add("Admin@abc.com")
 $msg.Subject = $MailSubject
 $msg.Body = $MailBody
 $msg.IsBodyHTML=1
 $smtp.Send($msg)

The email output looks like the screenshot below:

[Screenshot: consolidated Azure Backup report email]

Posted in Azure, Powershell | 4 Comments »

Upgrade Windows 10 Pro to Enterprise

Posted by Brajesh Panda on August 9, 2016

From an elevated command prompt, run changepk.exe and supply the MAK activation key. It takes 15-20 minutes to finish the upgrade, including a couple of automatic reboots, before the machine is converted to the Enterprise edition.
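For reference, the command-line form is roughly as follows (a hedged sketch; replace the placeholder with your Enterprise MAK key):

# Run from an elevated prompt; supply the Enterprise MAK key
changepk.exe /ProductKey XXXXX-XXXXX-XXXXX-XXXXX-XXXXX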

Posted in Mix & Match | Leave a Comment »

CRL vs OCSP

Posted by Brajesh Panda on July 26, 2016

Here is a nice article which describes Certificate Revocation List vs Online Certificate Status Protocol.

https://www.fir3net.com/Security/Concepts-and-Terminology/certificate-revocation.html

Credit goes to Mr. R DONATO

Ricky Donato is the Founder and Chief Editor of Fir3net.com. He currently works as a Principal Network Security Engineer and has a keen interest in automation and the cloud.

You can find Ricky on Twitter @f3lix001

INTRODUCTION

Certificate Revocation is used within PKI (Public Key Infrastructure) to instruct the client that the certificate can no longer be trusted. This is required in scenarios where the private key has been compromised.

CERTIFICATE TYPES

Prior to a CA issuing a certificate to a company, the CA performs a level of validation to verify that the company is who they say they are. There are 3 levels of validation, ranging from DV (lowest) all the way up to EV (highest).

Domain Validation (DV) – This type of certificate is the least expensive of the three. It requires only a basic form of domain validation, which is performed by email.
Organization Validation (OV) – When obtaining an OV certificate, the company name is checked against a company register, e.g. a Chamber of Commerce.
Extended Validation (EV) – Like OV, a company search is performed; however, the physical location is also checked and the contact who requested the certificate is validated.

REVOCATION METHODS

CRL (Certificate Revocation List) was first released to provide the CA with the ability to revoke certificates. However, due to limitations with this method, it was superseded by OCSP.

Below details each of these methods along with their main advantages and disadvantages.

CRL

CRLs (Certificate Revocation Lists) contain a list of certificate serial numbers that have been revoked by the CA. The client checks the serial number from the certificate against the serial numbers within the list (sample shown below).

Revoked Certificates:
    Serial Number: 2572757EAAF2BEC5980067579A0A7705
        Revocation Date: May  1 19:56:10 2013 GMT
    Serial Number: 776DDD15D25C713616E7D4A8EACFB4A1
        Revocation Date: May 24 13:03:16 2013 GMT

To instruct the client on where to find the CRL, a CRL distribution point is embedded within each certificate (shown below):

X509v3 extensions:
    X509v3 Authority Key Identifier:
        keyid:D1:6D:2E:7C:5C:AD:14:FC:2A:72:92:C2:82:CB:B9:6E:DC:A5:C4:02
    X509v3 Subject Key Identifier:
        35:42:17:CF:F0:9A:FF:B7:9F:FC:C5:A4:95:D6:68:4F:97:81:1E:1D
    X509v3 Key Usage: critical
        Digital Signature, Key Encipherment
    X509v3 Basic Constraints: critical
        CA:FALSE
    X509v3 Extended Key Usage:
        TLS Web Server Authentication, TLS Web Client Authentication
    X509v3 Certificate Policies:
        Policy: 1.3.6.1.4.1.6449.1.2.2.43
            CPS: https://cps.trust-provider.com
        Policy: 2.23.140.1.2.2
    X509v3 CRL Distribution Points:
        URI:http://crl.trust-provider.com/McAfeeOVSSLCA.crl

The main disadvantages to CRL are:

· It can create a large amount of overhead, as the client has to search through the revocation list. In some cases this can be thousands of lines long.

· CRLs are updated periodically, every 5-14 days, potentially leaving the attack surface open until the next CRL update.

· The CRL is not checked for OV or DV certificates; it is checked for EV certificates.

· If the client is unable to download the CRL, then by default the client will trust the certificate.

OCSP

OCSP (Online Certificate Status Protocol) removes many of the disadvantages of CRL by allowing the client to check the certificate status for a single certificate.

The OCSP process is shown below:

1. The client receives the certificate.

2. The client sends an OCSP request to an OCSP responder (over HTTP) with the certificate's serial number.

3. The OCSP responder replies with a certificate status of either Good, Revoked, or Unknown (shown below).

Response verify OK
0x25F5V12D5E6FD0BD4EAF2A2C966F3B4aE: good
 This Update: Jan 19 00:24:56 2011 GMT
 Next Update: Jan 26 00:24:56 2011 GMT

The main advantage of OCSP is that, because the client can query the status of a single certificate rather than having to download and parse an entire list, there is much less overhead on the client and network.

However, the main disadvantages of OCSP are:

· OCSP requests are sent for each certificate. Because of this, there can be a huge overhead on the OCSP responder (i.e. the CA) for high-traffic websites.

· If the private key was compromised, the attacker would need to leverage a MITM attack to intercept and pose as the server. Because most browsers silently ignore OCSP if the protocol times out, OCSP still cannot be considered a 100% reliable method for mitigating HTTPS server key compromises.

· OCSP is not enforced for OV or DV certificates; it is checked for EV certificates.
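To see CRL/OCSP checking in action from a client, here is a hedged PowerShell sketch (not from the original article) that asks .NET to build and revocation-check the chain for a certificate already in the local machine store:

# Build the chain with online revocation checking (CRL and/or OCSP, per the cert's extensions)
$cert  = (Get-ChildItem Cert:\LocalMachine\My)[0]
$chain = New-Object System.Security.Cryptography.X509Certificates.X509Chain
$chain.ChainPolicy.RevocationMode = 'Online'
$chain.ChainPolicy.RevocationFlag = 'EntireChain'
$null  = $chain.Build($cert)
$chain.ChainStatus | Select-Object Status, StatusInformation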

OCSP STAPLING

OCSP Stapling resolves the overhead issues with OCSP and CRL by having the certificate holder (i.e. the server) periodically perform the OCSP request itself. The OCSP response is then sent back to the client (i.e. stapled) during the SSL handshake.

NOTE: The OCSP response is signed by the CA to ensure that it has not been modified before being sent back to the client.

The main disadvantages of OCSP Stapling are:

· Only supported within TLS 1.2.

· It is still not supported by many browsers. This results in either the OCSP validity method not being used or standard OCSP being used instead.

REFERENCE

OCSP

https://www.grc.com/revocation/commentary.htm

OCSP Stapling

http://en.wikipedia.org/wiki/OCSP_stapling

Certificate Types / Browser Functionality

http://blog.spiderlabs.com/2011/04/certificate-revocation-behavior-in-modern-browsers.html

Posted in Mix & Match | Leave a Comment »

 