TechOnTip Weblog

Run book for Technocrats

Backup Exec 2014 – Windows 2012 R2 Active Directory Backup

Posted by Brajesh Panda on December 19, 2014

I was setting up Backup Exec 2014 SP1 for Active Directory backup. Here are the components involved in a System State backup of a Windows 2012 R2 domain controller.

A few notes:

  • To take a GRT (Granular Recovery Technology) backup of a Windows 2012 R2 Domain Controller, the Backup Exec server needs to run Windows 2012 R2.

Posted in Mix & Match | Leave a Comment »

ADFS 2.0 Proxy Error Event ID 364 – Encountered error during federation passive request

Posted by Brajesh Panda on December 15, 2014

This error may be related to the IIS service, the application pool, or system time. Make sure the app pools are not stopped and that system time is not beyond the Kerberos skew limit (~5 minutes). My DMZ workgroup machines sync time with the underlying ESXi host, and that particular ESXi host had the wrong date and time configured. Fixing the clock on the ESXi host and the ADFS DMZ server resolved my issue.
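A quick way to check both conditions from the proxy is something like the PowerShell sketch below (the target federation server name is a placeholder; adjust for your environment):

Import-Module WebAdministration
Get-ChildItem IIS:\AppPools | Select-Object Name, State               # every ADFS-related pool should show "Started"
w32tm /stripchart /computer:adfs.contoso.com /samples:3 /dataonly     # time offset should stay well under ~5 minutes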

Thanks

Posted in Mix & Match | Leave a Comment »

Posted by Brajesh Panda on November 14, 2014

Applies to: System Center 2012 r2 Data Protection Manager

During installation it kept throwing the two errors below in the event log, and the installation terminated abruptly.

##############

Application: SetupDpm.exe

Framework Version: v4.0.30319

Description: The process was terminated due to an unhandled exception.

Exception Info: System.Reflection.TargetInvocationException

#############

Faulting application name: SetupDpm.exe, version: 4.2.1205.0, time stamp: 0x5226e038

Faulting module name: KERNELBASE.dll, version: 6.3.9600.17278, time stamp: 0x53eebf2e

Exception code: 0xe0434352

Fault offset: 0x000000000000606c

Faulting process id: 0xca4

Faulting application start time: 0x01d000630dc6415a

Faulting application path: C:\Users\BRAJES~1.BRA\AppData\Local\Temp\2\DPMC697.tmp\DPM2012\Setup\SetupDpm.exe

Faulting module path: C:\Windows\system32\KERNELBASE.dll

Report Id: 08f95133-6c59-11e4-80c5-000d3a103dc0

Faulting package full name:

Faulting package-relative application ID:

#############

 

After wasting two attempts, I did a bit of research on Google and found somebody mentioning that IE doesn’t download the DPM ISO from the Volume Licensing Center properly. Here is the community article: https://social.technet.microsoft.com/Forums/en-US/8f6a8a0d-c1ed-4f4d-b71d-7433070a771b/fresh-2012-r2-install-error-4378?forum=dpmsetup.

 

Earlier I had downloaded this ISO from our Volume Licensing Center using the web browser (IE) method. I then tried again with the Google Chrome browser, and the installation went through properly.
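A quick sanity check before another install attempt is to hash the downloaded ISO and compare it against the checksum published on the download page, if one is listed (the path below is just a placeholder):

Get-FileHash -Path 'C:\Downloads\SC2012R2_DPM.iso' -Algorithm SHA1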

The funny part is that this issue was reported on the MS community portal in early 2014. We are now approaching the end of 2014, and MS still hasn’t fixed the binaries on their volume licensing content distribution network for this tool.

Posted in Mix & Match | Leave a Comment »

Setup cannot grant the “FQDN\Username” account access to DPM database

Posted by Brajesh Panda on November 14, 2014

Applies to: System Center 2012 r2 Data Protection Manager

If you are facing the below error during DPM installation, make sure to uninstall and, while re-installing, supply the domain NETBIOS name instead of the domain FQDN on the Prerequisites page.

I have no idea why the Microsoft dev folks coded it like this. If the NETBIOS name is required, they could have mentioned it on the prerequisites page itself. I ended up wasting two hours on this.

Correct Credential Inputs by putting NETBIOS name

Posted in Mix & Match | Leave a Comment »

Important Performance Counters

Posted by Brajesh Panda on October 27, 2014

http://blogs.technet.com/b/bulentozkir/archive/2014/02/14/top-10-most-important-performance-counters-for-windows-and-their-recommended-values.aspx
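As a quick companion to that list, a few of the commonly cited counters can be sampled locally with Get-Counter; this is only a sketch, not the full top-10 set:

Get-Counter -Counter '\Processor(_Total)\% Processor Time',
                     '\Memory\Available MBytes',
                     '\PhysicalDisk(_Total)\Avg. Disk sec/Read' -SampleInterval 5 -MaxSamples 3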

Posted in Mix & Match | Leave a Comment »

PowerShell Trick: Converting User Input to DATE

Posted by Brajesh Panda on August 17, 2014

$TD = Read-Host "Type TO Date in mm-dd-yyyy format"

$Today = [DateTime]::Parse($TD)
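If you want to enforce the stated mm-dd-yyyy format rather than rely on regional parsing, a stricter variant (sketch) is:

$TD = Read-Host "Type TO Date in mm-dd-yyyy format"
$Today = [DateTime]::ParseExact($TD, 'MM-dd-yyyy', $null)   # throws if the input is not in MM-dd-yyyy form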

http://dusan.kuzmanovic.net/2012/05/07/powershell-parsing-date-and-time/

PowerShell: Parsing Date and Time

Parsing Date and Time

Parsing date and/or time information is tricky because formatting depends on the regional settings. This is why PowerShell can convert date and time based on your regional settings or in a culture-neutral format. Let’s assume this date:

PS> $date = '1/6/2013'

If you convert this to a datetime type, PowerShell always uses the culture-neutral format (US format), regardless of your regional settings. The output is shown here on a German system:

PS> [DateTime]$date
Sonntag, 6. Januar 2013 00:00:00

To use your regional datetime format, use the Parse() method which is part of the DateTime type, like this:

PS> [DateTime]::Parse($date)
Samstag, 1. Juni 2013 00:00:00

Alternately, you can use Get-Date and the -date parameter:

PS> Get-Date -Date $date
Samstag, 1. Juni 2013 00:00:00

Parsing Custom DateTime Formats

Sometimes, date and time information may not conform to standards, and still you’d like to interpret that information correctly as date and time.

That’s when you can use ParseExact() provided by the DateTime type. Here’s an example:

PS> $timeinfo = '12 07 2012 18 02'

To tell PowerShell what piece of information belongs to which datetime part, you submit a template like this:

PS> $template = 'HH mm yyyy dd MM'

This template defines the custom format to specify hours first (HH), then minutes (mm), then the year (yyyy), the day (dd) and the month (MM).

Now let’s use the template to interpret the raw datetime information:

PS> $timeinfo = '12 07 2012 18 02'
PS> $template = 'HH mm yyyy dd MM'
PS> [DateTime]::ParseExact($timeinfo, $template, $null)
Samstag, 18. Februar 2012 12:07:00

Voilà! To define patterns, here are the placeholders you can use (note that they are case-sensitive!):

d Day of month 1-31
dd Day of month 01-31
ddd Abbreviated weekday name
dddd Full weekday name
h Hour from 1-12
H Hour from 0-23
hh Hour from 01-12
HH Hour from 00-23
m Minute from 0-59
mm Minute from 00-59
M Month from 1-12
MM Month from 01-12
MMM Abbreviated month name
MMMM Full month name
s Seconds from 0-59
ss Seconds from 00-59
t A or P (first letter of AM or PM)
tt AM or PM
yy Year as 2-digit
yyyy Year as 4-digit
z Time zone offset in hours (one digit)
zz Time zone offset in hours (two digits)
zzz Time zone offset in hours and minutes

Parsing Extra Text

Using ParseExact() to parse custom datetime formats only works if the date and time information does not contain extra characters except whitespace.

To parse date and time information that has extra text in the middle of it, you must escape any ambiguous character. Here’s a sample:

PS> $raw = 'year 2012 and month 08'
PS> $pattern = '\year yyyy an\d \mon\t\h MM'
PS>
PS> [DateTime]::ParseExact($raw, $pattern, $null)

Note how, in the pattern, each character that could be interpreted as a date or time placeholder is escaped. Other characters that are not placeholders for date or time information do not necessarily need to be escaped. If you are unsure, simply escape any character that is not meant to be a placeholder.

Posted in Mix & Match | Leave a Comment »

Enable External GalSync Contacts for Lync Address Book

Posted by Brajesh Panda on July 28, 2014

I found this article at http://uccexperts.com/enabling-ad-mail-contacts-for-lync/ and used the same procedure for my MIIS-based GalSync solution. It works perfectly. I just made one correction to the original article and added a couple of lines here and there. Solution credit goes to the original author. Cheers!!

Situation

I was working in an environment with multiple Exchange 2010 forests where Forefront Identity Manager was used to provide a common global address list (GAL). Each forest also has its own Lync 2010 implementation without Enterprise Voice; in other words, there are two separate Lync environments with two different SIP domains.

By default the Lync address book is automatically populated with all objects that have one of the following attributes filled in:

msRTCSIP-PrimaryUserAddress

telephoneNumber

homePhone

mobile

If the msRTCSIP-PrimaryUserAddress attribute is missing, Lync will not be able to show presence info for the contact and may just show a phone icon instead of a person icon/picture.

By default the FIM GalSync solution synchronizes all of those attributes except msRTCSIP-PrimaryUserAddress. This caused contacts in the remote forest to appear in the address book with a telephone icon:


This situation caused confusion for our users because they expect the Lync client to work for instant messaging with Lync users in the remote forests. When they try to start an IM session with a remote-forest user, Outlook starts and creates a new e-mail message instead.

Note: If you see a phone icon for those users, make sure to test federation using their SIP address directly rather than the default AD objects. You can add a Lync contact to the Outlook address book, stamp the SIP address manually, and test federation that way.

You can also try exporting and manually adding/updating this attribute; that should work too, but it will remain a manual process for future updates. Using the procedure below, you can configure the GalSync management agents to replicate this Lync attribute as well.
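For reference, a one-off manual stamp could look like the sketch below; the contact DN and SIP address are placeholders based on the lab names used later in this post, and the GalSync change described next remains the proper fix.

Import-Module ActiveDirectory
Set-ADObject -Identity 'CN=corporate01,OU=GalSync Contacts,DC=company,DC=nl' `
    -Replace @{'msRTCSIP-PrimaryUserAddress' = 'sip:corporate01@corporate.nl'}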

Solution

The solution is to include the AD attribute “msRTCSIP-PrimaryUserAddress” in the FIM address list synchronization.

Lab Setup

The overview below depicts my lab setup:


The lab is running Exchange 2010, Lync 2010 and FIM 2010 in a Windows 2008 R2 Active Directory. My environment is MIIS GALSync.

Scope

The scope of this procedure is to add the “msRTCSIP-PrimaryUserAddress” in the local forest to the contact in the remote forest by using the built-in Galsync management agents of FIM 2010. This procedure does not cover the implementation of the Galsync itself.

Presence and instant messaging to the remote forest will only be available when you have Lync Edge servers and federation in place. This procedure focuses on changing the AD attributes so that Lync recognizes the contact as a lync-enabled contact.

PROCEDURE

Step 1: Extend the metaverse schema

  1. Start the Synchronization Service Manager and click Metaverse Designer.
  2. Select person in the Object types pane
  3. Click Add Attribute in the Actions pane

  4. Click New Attribute in the “Add Attribute to object type” window

  5. Enter the following information in the “New Attribute” window:

Attribute name: msRTCSIP-PrimaryUserAddress
Attribute type: String (indexable)
Mapping Type: Direct
Multi-valued: Clear check box
Indexed: Clear check box


  6. Click OK to close the New Attribute window
  7. Click OK again to close the Add Attribute to object type window

Step 2: Configure Management Agent of corporate.nl

  • Start the FIM Synchronization Service Manager Console and select “Management Agents”
  • Right click the Management Agent you want to modify and select Properties.
  • Go to the “Select Attributes” section
  • Check the Show All box and select the attribute “msRTCSIP-PrimaryUserAddress”, click OK


  • Return to the properties of the Management Agent and select the section “Configure Attribute Flow”
  • Configure this section according to the following table:

Data source object type: user
Metaverse object type: person
Mapping Type: Direct
Flow Direction: Import
Data source attribute: msRTCSIP-PrimaryUserAddress
Metaverse attribute: msRTCSIP-PrimaryUserAddress


  • Click New
  • Verify this modification by collapsing the following header:

  • Check if the following rule is added:

Step 3: Import modification to the metaverse

  • Right click the management agent you just modified and select Properties
  • Select Run  and do a Full Import and Full Synchronization

Step 4: Verify attribute import

  • Start the FIM Synchronization Service Manager Console and select “Metaverse Search”
  • Click “Add clause”
  • Enter the following clause:

  • Click “Search”
  • In the “Search Results” pane, right click the user with display name corporate01 and select Properties
  • Confirm that the attribute “msRTCSIP-PrimaryUserAddress” contains a value

  • Click Close

Step 5: Configure Management Agent of company.nl

  • Start the FIM Synchronization Service Manager Console and select “Management Agents”
  • Right click the Management Agent you want to modify and select Properties.
  • Go to the “Select Attributes” section
  • Check the Show All box and select the attribute “msRTCSIP-PrimaryUserAddress”, click OK

  • Return to the properties of the Management Agent and select the section “Configure Attribute Flow”
  • Configure this section according to the following table:
Data source object type: contact
Metaverse object type: person
Mapping Type: Direct
Flow Direction: Export (allow nulls)
Data source attribute: msRTCSIP-PrimaryUserAddress
Metaverse attribute: msRTCSIP-PrimaryUserAddress


  • Click New
  • Verify this modification by collapsing the following header:

  • Check if the following rule is added:

Step 6: Export modification to the remote forest

  • Right click the management agent you just modified and select Properties
  • Select Run and do a Full Import and Full Synchronization
  • Right click the management agent you just modified and select Properties
  • Select Run and do an Export

Step 7: Verify attributes in remote forest

  • Start Active Directory Users And Computers and enable the Advanced features
  • Go to the OU where the FIM Galsync creates the contacts
  • Double click the contact “corporate01” and go to the Attribute Editor tab

  • Confirm that the attribute “msRTCSIP-PrimaryUserAddress” contains a value (a quick PowerShell check is sketched below)
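As an alternative to clicking through ADUC, a minimal PowerShell sketch (assuming the ActiveDirectory module; the contact name is the lab example used above):

Import-Module ActiveDirectory
Get-ADObject -Filter "Name -eq 'corporate01'" -Properties 'msRTCSIP-PrimaryUserAddress' |
    Select-Object Name, 'msRTCSIP-PrimaryUserAddress'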

What does it look like in the Lync client?

If I log in as user company01, we can see the following result in the Lync client:

In the screenshot above the users in the remote forest have a status of “Presence Unknown”. This is because I did not have Edge servers implemented in my test environment.

If you have implemented Lync Edge servers and you have your Lync federations between both organizations in place, the presence will be shown for the contacts as if they were users in the local Lync organization.

Posted in Mix & Match | 1 Comment »

SSD Caching versus Tiering

Posted by Brajesh Panda on July 10, 2014

BY TEKINERD, ON NOVEMBER 8TH, 2010

http://tekinerd.com/2010/11/ssd-caching-versus-tiering/

In some recent discussions, I sensed there is some confusion around solid state device (SSD) storage used as a storage tier vs. a cache. While there are some similarities, and both are intended to achieve the same end result, i.e. acceleration of data accesses from slower storage, there are some definite differences which I thought I’d try to clarify here. This is my working viewpoint, so please do post comments if you feel differently.

Firstly, SSD caching is temporary storage of data in an SSD cache, whereas true data tiering is a semi-permanent movement of data to or from an SSD storage tier. Both are based on algorithms or policies that ultimately result in data being copied to, or removed from, SSDs. To clarify further, if you were to unplug or remove your SSDs: in the caching case, the user data is still stored in the primary storage behind the SSD cache and is still served from the original (albeit slower) source, whereas in a data tiering environment, the user data (and capacity) is no longer available if the SSD tier is removed, because the data was physically moved to the SSDs and most likely removed from the original storage tier.

Another subtle difference between caching and tiering is whether the SSD capacity is visible or not. In cached mode, the SSD capacity is totally invisible, i.e. the end application simply sees the data accessed much faster if it has been previously accessed and is still in the cache store (a cache hit). So if a 100G SSD cache exists in a system with, say, 4TB of hard disk drive (HDD) storage, the total capacity is still only 4TB, i.e. that of the hard disk array, with 100% of the data always existing on the 4TB and only copies of the data in the SSD cache, based on the caching algorithm used. In a true data tiering setup using SSDs, the total storage is 4.1TB, and though this may be presented to a host computer as one large virtual storage device, part of the data exists on the SSD and the remainder on the hard disk storage. Typically, such small amounts of SSD would not be implemented as a dedicated tier, but you get the idea if, say, 1TB of SSD storage was being used in a storage area network of 400TB of hard drive based storage, creating 401TB of usable capacity.

So how does data make it into a cache versus a tier? Cache and block level automated data tiering controllers monitor and operate on statistics gathered from the stream of storage commands and in particular the addresses of the storage blocks being accessed.

SSD Caching Simplified

Caching models typically employ a lookup table method based on the block level address (or range of blocks) to establish if the data the host is requesting has been accessed before and potentially exists in the SSD cache. Data is typically moved more quickly into an SSD cache versus say tiering where more analysis of the longer term trend is typically employed which can span hours if not days in some cases. Unlike DRAM based caches however where it is possible to cache all reads, a little more care and time is taken with SSDs to ensure that excessive writing to the cache is avoided given the finite number of writes an SSD can tolerate. Most engines use some form of “hot-spot” detection algorithm to identify frequently accessed regions of storage and move data into the cache area once it has been established there is a definite frequent access trend.

Traditional caching involves one of several classic caching algorithms which result in either read-only or read and write caching. Cache algorithms and approaches vary by vendor and dictate how a read from the HDD storage results in a copy of the original data entering the cache table and how long it “lives” in the cache itself. Subsequent reads of that same data, whose original location was on the hard drive(s), can now be served from the SSD cache instead of the slower HDD, i.e. a cache hit (determined using an address lookup in the cache tables). If this is the first time data is being accessed from a specific location on the hard drive(s), then the data must first be accessed from the slower drives and a copy made in the SSD cache if the hot spot checking algorithm deems it necessary (triggered by the cache miss).

Caching algorithms often try to use more sophisticated models to pre-fetch data based on a trend and store it in the cache if they think there is a high probability it may be accessed soon, e.g. in sequential video streaming or VMware virtual machine migrations, where it is beneficial to cache data from the next sequential addresses and pull them into the cache at the same time as the initial access. After some period of time, or when new data needs to displace older or stale data in the cache, a cache flush cleans out the old data. This may also be triggered by the hot spot detection logic determining that the data is now “cold”.

The measure of a good cache is how many hits it gets versus misses. If data is very random and scattered over the entire addressable range of storage with infrequent accesses back to the same locations, then the effectiveness is significantly lower and sometimes detrimental to overall performance as there is an overhead in attempting to locate data in the cache on every data access.

SSD Auto Tiering Basics

An automated data tiering controller treats the SSD and HDDs as two separate physical islands of storage, even if presented to the host application (and hence the user) as one large contiguous storage pool (a virtual disk). A statistics gathering or scanning engine collects data over time and looks for data access patterns and trends that match a pre-defined set of policies or conditions. These engines use a mix of algorithms and rules that indicate how and when a particular block (or group of blocks) of storage is to be migrated or moved.

The simplest “caching like” approach used by a data tiering controller is based on frequency of access. For example, it may monitor data blocks being accessed from the hard drives, and if a block passes a pre-defined number of accesses per hour “N” for a period of time “T”, a rule may be employed that says when N>1000 AND T>60 minutes, move the data up to the next logical tier. So if data is being accessed a lot from the hard drive and there are only two tiers defined, SSD being the faster of the two, the data will be copied to the SSD tier (i.e. promoted), and the virtual address map that converts real-time host addresses to physical locations is updated to point to the new location in SSD storage. All of this of course happens behind a virtual interface to the host itself, which has no idea the storage just moved to a new physical location. Depending on the tiering algorithm and vendor, the data may be discarded on the old tier to free up capacity. The converse is also true: if data is infrequently accessed and lives on the SSD tier, it may be demoted to the HDD tier based on similar algorithms.
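As a toy illustration only (the thresholds and counters below are made up, not any vendor's actual algorithm), the promotion decision described above boils down to something like this:

$N = 1000; $T = 60                                  # example thresholds: accesses per hour, minutes observed
$accessesPerHour = 1500; $minutesObserved = 90      # pretend statistics gathered for one block
if ($accessesPerHour -gt $N -and $minutesObserved -gt $T) {
    'Promote the block to the SSD tier and update the virtual address map'
} else {
    'Leave the block on the HDD tier (or demote it if it has gone cold)'
}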

More sophisticated tiering models exist of course, some that work at file layers and look at the specific data or file metadata to make more intelligent decisions about what to do with data.

Where is SSD Caching or Tiering Applied?

Typically, SSD caching is implemented as a single SATA or PCIe flash storage device along with operating system driver layer software in a direct attached storage (DAS) environment to speed up Windows or other operating system accesses. In much larger data center storage area networks (SAN) and cloud server-storage environments, there are an increasing number of dedicated rackmount SSD storage units that can act as a transparent cache at LUN level, where the caching is all done in the storage area network layer, again invisible to the host computer. The benefit of cache based systems is that they can be added transparently and often non-disruptively (other than the initial install). Unlike with tiering, there is no need to set up dedicated pools or tiers of storage, i.e. they can be overlaid on top of an existing storage setup.

Tiering is more often found in larger storage area network based environments, with several disk array and storage appliance vendors offering the capability to tier between different disk arrays based on their media type or configuration. Larger tiered systems often also use other backup storage media such as tape or virtual tape systems. Automated tiering can substantially reduce the management overhead associated with backup and archival of large amounts of data by fully automating the movement process, or helping meet data accessibility requirements of government regulations. In many cases, it is possible to tier data transparently between different media types within the same physical disk array, e.g. a few SSD drives in RAID 1 or 10, 4-6 SAS drives in a RAID 10 and 6-12 SATA drives in a RAID, i.e. 3 distinct tiers of storage. Distributed or virtualized storage environments also offer either manual or automated tiering mechanisms that work within their proprietary environments. At the other end of the spectrum, file volume manager and storage virtualization solutions running on the host or in a dedicated appliance can allow IT managers to organize existing disk array devices of different types and vendors and sort them into tiers. This is typically a process that requires a reasonable amount of planning and often disruption, but can yield tremendous benefits once deployed.

SSD Tiering versus Caching: Part 2

 

BY TEKINERD, ON AUGUST 14TH, 2011

http://tekinerd.com/2011/08/ssd-tiering-versus-caching-part-2/

A while back I wrote about some of the differences between caching and tiering when using solid state disk (SSD) drives in a PC or server.

Having just returned from the 2011 Flash Memory Summit in Santa Clara, I feel compelled to add some additional color around the topic given the level of confusion clearly evident at the show. Also, I’d like to blatantly plug an upcoming evolution in tiering, called MicroTiering, from our own company, Enmotus, which emerged from stealth at the show.

The simplest high level clarification that emerged from the show, I’m glad to say, matched what we described in our earlier blog (SSD Caching versus Tiering): caching makes a copy of frequently accessed data from a hard drive and places it in the SSD for future reads, whereas tiering moves the data permanently to the SSD and it’s no longer stored on the hard drive. Caching speeds up reads only at this point, with a modified caching algorithm to account for SSD behavior versus RAM based schemes, whereas tiering simply maps the host reads and writes to the appropriate storage tier with no additional processing overhead. So in tiering, you get the write advantage and, of lesser benefit, the incremental capacity of the SSD, which becomes available to the host as usable storage (minus some minor overhead to keep track of the mapping tables).

Why the confusion? One RAID vendor in particular, along with several caching companies, is calling its direct attached storage (DAS) caching solution “tiering”, even though it is only caching the data to speed up reads and the data isn’t moved. Sure, write-based caching is coming, but it’s still fundamentally a copy of the data that is on the hard drive, not a move, and SSD caching algorithms apply.

Where Caching is Deployed

SSD caching has a strong and viable place in the world of storage and computing at many levels, so it’s not a case of tiering versus caching, but more when to use either or both. Also, caching is relatively inexpensive and will most likely end up bundled for free with the SSD you are purchasing for PC desktop and Windows applications, simply because this is how all caching ends up, i.e. “free” with some piece of hardware, an SSD in this case. Case in point is Intel and Matrix RAID, which has now been enhanced with its own caching scheme called Smart Response Technology (SRT), currently available for Z68 flavor motherboards and systems.

In the broader sense, we are now seeing SSD caching deployed in a number of environments:

  • Desktops (eventually notebooks with both SSD and hard drives) bundled with SSDs or as standalone software e.g. Intel SRT and Nvelo (typically Windows only)
  • Server host software based caching e.g. FusionIO, IOturbine, Velobit (Windows and VMware)
  • Hardware PCIe adapter based server RAID SSD caching e.g. LSI’s CacheCade (most operating systems)
  • SAN based SSD caching software, appliances or modules within disk arrays e.g. Oracle’s ZFS caching schemes (disk arrays) or specialist appliances that transparently cache data into SSDs in the SAN network.

Where Data Tiering is Deployed

Tiering is still fundamentally a shared SAN based storage technology used with large data sets. In its current form, it’s really an automated way to move data to and from slow, inexpensive bulk storage (e.g. SATA drives, possibly even tape drives) to fast, expensive storage based on its frequency of access or “demand”. Why? So data managers can keep expensive storage costs to a minimum by taking advantage of the fact that typically less than 20% of data is being accessed over any specific period of time. YouTube is a perfect example. You don’t want to store a newly uploaded video on a large SSD disk array just in case it becomes highly popular versus the other numerous uploads. Tiering automatically identifies that the file (or more correctly a file’s associated low level storage ‘blocks’) is starting to increase in popularity, and moves it up to the fast storage for you automatically. Once on the higher performance storage, it can handle a significantly higher level of hits without causing excessive end user delays and the infamous video box ‘spinning wheel’. Once demand dies down, it moves the data back, making way for other content that may be on the popularity rise.

Tiering Operates Like A Human Brain

The thing I like about tiering is that it’s more like how we think as humans, i.e. pattern recognition over a large data set, with an almost automated and instant response to a trend, rather than looking at independent and much smaller slices of data as with caching. A tiering algorithm observes data access patterns on the fly and determines how often and, more importantly, what type of access is going on and adapts accordingly. For example, it can determine if an access pattern is random or sequential and allocate storage to the right type of storage media based on its characteristics. A great “big iron” example solution is EMC’s FAST, or the now defunct Atrato.

Tiering can also scale better to multiple levels of storage types. Whereas caching is limited to either RAM or single SSDs, or is tied to a RAID adapter, tiering can operate on multiple tiers of storage from a much broader set, up to and including cloud storage (i.e. a very slow tier), for example.

MicroTiering

At the show, I introduced the term MicroTiering, one of the solutions our company Enmotus will be providing in the near future. MicroTiering is essentially a direct attached storage version of its SAN cousin, applied to the much smaller subset of storage that is inside the server itself. It’s a hardware accelerated approach to tiering at the DAS level that doesn’t tax the host CPU and facilitates a much broader set of operating system and hypervisor support, versus the narrow host SSD caching only offerings we see today that are confined to just a few environments.

Tiering and Caching Together

The two technologies are not mutually exclusive. In fact, it is more than likely that tiering and caching involving SSDs will be deployed together, as they both provide different benefits. For example, caching tends to favor the less expensive MLC SSDs, as the data is only copied and the cache handles the highly read-only, transient or non-critical data, so loss of the SSD cache itself is non-critical. It’s also the easiest way to add a very fast, direct attached SSD cache to your server, provided your operating system or VM environment can handle it.

On the other hand, as tiering relocates the data to the SSD, SLC is preferable for its higher performance on reads and writes, higher resilience and data retention characteristics. In the case of DAS based tiering solutions like MicroTiering, it is expected that tiering may also be better suited to virtual machine environments and databases due to its inherent and simpler write advantage, low to zero host software layers, and VMware’s tendency to shift the read-write balance more toward 50/50.

What’s for sure is that there is lots of innovation and plenty of exciting things still going on in this space, with lots more to come.

Posted in Mix & Match | Leave a Comment »

PCIe Flash versus SATA or SAS Based SSD

Posted by Brajesh Panda on July 10, 2014

BY TEKINERD, ON SEPTEMBER 2ND, 2010

http://tekinerd.com/2010/09/pcie-flash-versus-sata-or-sas-based-ssd/

The impressive results being presented by the new PCIe based server or workstation add-in card flash memory products hitting the market from the likes of FusionIO and others are certainly pushing up the performance envelope of many applications, especially transactional database applications where the number of user requests is directly proportional to the storage IOPS or data throughput capabilities.

In just about all cases, general purpose off the shelf PCIe SSD devices present themselves as a regular storage device to the server, e.g. in Windows they appear as a SCSI-like device that can be configured in the disk manager as a regular disk volume (e.g. E: or F:). The biggest advantage PCIe SSDs have over standalone SATA or SAS SSD drives is that they can handle greater data traffic throughput and I/Os, as they use the much faster PCIe bus to connect directly to multiple channels of flash memory, often using a built-in RAID capability to stripe data across multiple channels of flash mounted directly on board the add-in card.

To help clear up confusion for some of the readers, the primary differences between PCIe Flash memory and conventional SSDs can be summarized as follows:


Where PCIe Flash Works Well

The current generation of PCIe flash SSDs are best suited to applications that require the absolute highest performance with less of an emphasis on long term serviceability as you have to take the computer offline to replace defective or worn out SSDs. They also tend to work best when the total storage requirements for the application can live on the flash drive. Today’s capacities of up to 320G (SLC) or 640G (MLC) are more than ample for many database applications, so placing the entire SQL database on the drive is not uncommon. Host software RAID 1 is typically used to make the setup more robust but starts to get expensive as high capacity PCIe SSD cards run well north of $10,000 retail, the high price typically a result of the extensive reliability and redundancy capability of the card’s on-board flash controller. As the number of PCIe flash adapter offerings grow and the market segments into the more traditional low-mid-high product categories and features, expect the average price of these types of products to come down relatively fast.

Where SSDs Work Well

SATA or SAS based SSDs, by design, work pretty much anywhere a conventional hard drive does. For that reason we see laptops, desktops, servers and external disk arrays adopting them relatively quickly. Depending on the PCIe flash being compared to, it can take anywhere from 5-8 SSDs to match the performance of a PCIe version using a hardware RAID adapter, which tends to push the overall price higher when using the more expensive SLC based SSDs. So SATA or SAS SSDs tend to be best suited to applications that can use them as a form of cache in combination with a traditional SATA or SAS disk array setup. For instance, it is possible to achieve similar performance and significantly lower system and running costs using 1-4 enterprise class SSDs and SATA drives in a SAN disk array versus a Fibre Channel or SAS 15K SAN disk array setup. Most disk array vendors are now offering SSD versions of their Fibre Channel, iSCSI or SAS based RAID offerings.

Enterprise Flash Memory Industry Direction

At the Flash Summit we learned that between SSDs and DRAM a new class of storage will appear for computing, referred to as SCM, or storage class memory. Defined as something broader than just ultra fast flash based storage, it does require that the storage be persistent and appear more like conventional DRAM does to the host i.e. linear memory versus a storage I/O controller with mass storage and a SCSI host driver. SCM is expected to enter mainstream servers by 2013.

Posted in Mix & Match | Leave a Comment »

AD FS 2.0: How to Change the Local Authentication Type

Posted by Brajesh Panda on June 3, 2014

http://social.technet.microsoft.com/wiki/contents/articles/1600.ad-fs-2-0-how-to-change-the-local-authentication-type.aspx

AD FS 2.0, out of the box, supports four local authentication types:

  1. Integrated Windows authentication (IWA) – can utilize Kerberos or NTLM authentication. You should always prefer Kerberos authentication over NTLM and configure the appropriate service principal name (SPN) for the AD FS 2.0 service account so that Kerberos can be used. Credential collection can happen in two ways depending on how your browser is configured:
    1. automatic logon with current user name and  password – used when AD FS 2.0 URL is in IE Intranet Zone or another IE Zone which is configured to automatically logon with current user name and password
    2. Browser-based HTTP 401 authentication prompt – used when credentials cannot be automatically supplied to the 401 challenge for credentials
  2. Forms-based authentication (FBA) – A forms-based .aspx page is presented to the user containing username and password fields. This page is fully customizable so that you can add new sign-in logic or page customizations (logos, style sheet, etc.)
  3. Transport layer security client authentication – a.k.a. Client certificate authentication or Smart Card authentication. The credential is supplied by selecting an appropriate client authentication certificate.
  4. Basic authentication – The web browser displays a credential prompt and the credentials supplied are sent across the network. The advantage of Basic authentication is that it is part of the Hypertext Transfer Protocol (HTTP) specification and is supported by most browsers. The disadvantage is that web browsers that use Basic authentication transmit passwords in an unencrypted form. If a non-user monitors communications on your network, they can easily intercept and decipher these passwords by using publicly available tools. Therefore, Basic authentication is not recommended unless you are confident that the connection between the user and your web server is secure; direct cable connections or dedicated lines are secure connections.

By default, AD FS 2.0 Federation Servers use IWA and AD FS 2.0 Federation Server Proxy servers use FBA. The reason for this is that we assume you would prefer no credential prompt for your internal users, who can directly contact your internal Federation Servers, and we also assume that users coming from the internet via the Federation Server Proxy servers would not be able to use integrated Windows authentication, so a customizable forms-based page is the best fit.

If you prefer to select a non-default local authentication type, perform the following steps:

  1. In Windows Explorer, browse to C:\inetpub\adfs\ls (assuming that inetpub lives in C:\)
  2. Select web.config and Edit in Notepad
  3. Find (Ctrl+F) <localAuthenticationTypes>
  4. There are four lines below <localAuthenticationTypes>. Each line represents one of the local authentication types listed above.
  5. Cut your preferred local authentication type (the entire line), and Paste it to the top of the list (under <localAuthenticationTypes>)
  6. Save and Close the web.config file

Note: There is no need to restart IIS or make any further changes. Your change will be immediately picked up by IIS since you edited the web.config.

Example:

If I want to change the local authentication type for my internal Federation Servers from IWA to FBA, the resultant web.config section would look like this:

  <microsoft.identityServer.web>
    <localAuthenticationTypes>
      <add name="Forms" page="FormsSignIn.aspx" />
      <add name="Integrated" page="auth/integrated/" />
      <add name="TlsClient" page="auth/sslclient/" />
      <add name="Basic" page="auth/basic/" />
    </localAuthenticationTypes>
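To double-check which local authentication type is currently first (and therefore active), a small PowerShell sketch like this works, assuming the default install path:

[xml]$cfg = Get-Content 'C:\inetpub\adfs\ls\web.config'
$cfg.configuration.'microsoft.identityServer.web'.localAuthenticationTypes.add |
    Select-Object name, page    # the first entry listed is the one AD FS uses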

Posted in Mix & Match | Leave a Comment »

PowerShell – Expand Multivalued Attributes

Posted by Brajesh Panda on April 25, 2014

Use -ExpandProperty for multi-valued attributes. Here I am expanding the email addresses of a user.

Get-Recipient brajesh.panda@techontip.com | select name -Expandproperty emailaddresses | where {$_.smtpaddress} | ft name, smtpaddress

Name            SmtpAddress
----            -----------
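If the Exchange cmdlets aren't available, the same idea works directly against AD (a sketch, assuming the ActiveDirectory module; the account name is a placeholder):

Import-Module ActiveDirectory
Get-ADUser -Identity brajesh.panda -Properties proxyAddresses |
    Select-Object -ExpandProperty proxyAddresses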

Posted in Mix & Match | Leave a Comment »

The Claims Rule Language in Active Directory Federation Services

Posted by Brajesh Panda on March 10, 2014

Original Article: http://windowsitpro.com/active-directory/claims-rule-language-active-directory-federation-services

May 16, 2013
Joji Oshima

Microsoft Active Directory Federation Services (AD FS) uses the Claims Rule Language to issue and transform claims between claims providers and relying parties. Dynamic Access Control, introduced with Windows Server 2012, also uses this common language. The flow of claims follows a basic pipeline. The rules we create define which claims are accepted, processed, and eventually sent to the relying party. In this article, I’ll go over the basics of how AD FS builds claims then dive deep into the language that makes it all work. At the end, you should be able to read a claim rule, understand its function, and write custom rules.

The Basics

Before diving into the language used to manipulate and issue claims, it’s important to understand the basics. A claim is information about a user from a trusted source. The trusted source is asserting that the information is true, and that source has authenticated the user in some manner. The claims provider is the source of the claim. This can be information pulled from an attribute store such as Active Directory (AD), or it can be a partner’s federation service. The relying party is the destination for the claims. This can be an application such as Microsoft SharePoint or another partner’s federation service.

A simple scenario would be AD FS authenticating the user, pulling attributes about the user from AD, and directing the user to the application to consume. The scenario can be more complex by adding partner federation services. In any scenario, we’re taking information from some location and sending it somewhere else. Figure 1 shows a sample relationship between federation servers and an application.


Figure 1: Sample Relationship Between Federation Servers and an Application

 
 

Claim Sets

You need to understand claim sets in relation to the claims pipeline. When claims come in, they’re part of the incoming claim set. The claims engine is responsible for processing each claim rule. It examines the incoming claim set for possible matches and issues claims as necessary. Each issued claim becomes part of the outgoing claim set. Because we have claim rules for claims providers and relying parties, there are claim sets associated with each of them.

  1. Claims come in to the claims provider trust as the incoming claim set.
  2. Claim rules are processed, and the output becomes part of the outgoing claim set.
  3. The outgoing claim set moves to the respective relying party trust and becomes the incoming claim set for the relying party.
  4. Claim rules are processed, and the output becomes part of the outgoing claim set.

     
     

    General Syntax of the Claims Rule Language

    A claim rule consists of two parts: a condition statement and an issuance statement. If the condition statement evaluates true, the issuance statement will execute. The sample claim rule that Figure 2 shows takes an incoming Contoso department claim and issues an Adatum department claim with the same value. These claim types are uniform resource identifiers (URIs) in the HTTP format. URIs aren’t URLs and don’t need to be pages that are accessible on the Internet.


    Figure 2: A Simple Claim Rule

     
     

    Condition Statements

    When a rule fires, the claims engine evaluates all data currently in the incoming claim set against the condition statement. Any property of the claim can be used in the condition statement, but the most common are the claim type and the claim value. The format of the condition statement is c:[query], where the variable c represents a claim currently in the incoming claim set.

    The simple condition statement

    c:[type == "http://contoso.com/department"]

    checks for an incoming claim with the claim type http://contoso.com/department, and the condition statement

    c:[type == "http://contoso.com/department", value == "sales"]

    checks for an incoming claim with the claim type http://contoso.com/department with the value of sales. Condition statements are optional. If you know you want to issue a claim to everyone, you can simply create a rule with the issuance statement.

    Issuance Statements

    There are two types of issuance statements. The first is ADD, which adds the claim to the incoming claim set, but not the outgoing set. A typical use for ADD is to store data that will be pulled in subsequent claim rules. The second is ISSUE, which adds the claim to the incoming and outgoing claim sets. The ISSUE example

    => issue(type = "http://contoso.com/department", value = "marketing");

    issues a claim with the type http://contoso.com/department with the value of marketing. The ADD example

    => add(type = "http://contoso.com/partner", value = "adatum");

    adds a claim with the type http://contoso.com/partner with the value of adatum. The issuance statement can pull information from the claim found in the condition statement, or it can use static information. The static data example

    c:[type == "http://contoso.com/emailaddress"]
    => issue(type = "http://contoso.com/role", value = "Exchange User");

    checks for an incoming claim type http://contoso.com/emailaddress and, if it finds it, issues a claim http://contoso.com/role with the value of Exchange User. The static data example

    c:[type == "http://contoso.com/role"]
    => issue(claim = c);

    checks for an incoming claim type http://contoso.com/role and, if it finds it, issues the exact same claim to the outgoing claim set. An example of pulling data from the claim,

    c:[type == "http://contoso.com/role"]
    => issue(type = "http://adatum.com/role", value = c.Value);

    checks for an incoming claim type http://contoso.com/role and, if it finds it, issues a new claim with the type http://adatum.com/role that carries the incoming claim's value.

    Multiple Conditions

    Another possibility is to use multiple conditions in the condition statement. The issuance statement will fire only if all conditions are met. Each separate condition is joined with the && operator. For example,

    c1:[type == "http://contoso.com/role", value=="Editor"] &&
    c2:[type == "http://contoso.com/role", value=="Manager"]
    => issue(type = "http://contoso.com/role", value = "Managing Editor");

    checks for an incoming claim with the type http://contoso.com/role with a value of Editor and another incoming claim with the type http://contoso.com/role with a value of Manager. If the claims engine finds both, it will issue a claim with the type http://contoso.com/role with the value of Managing Editor.

    The values of claims in any condition can be accessed and joined using the + operator. For example,

    c1:[type == "http://contoso.com/location"] &&
    c2:[type == "http://contoso.com/role"]
    => issue(type = "http://contoso/targetedrole", value = c1.Value + " " + c2.Value);

    checks for an incoming claim with the type http://contoso.com/location and a separate incoming claim with the type http://contoso.com/role. If it finds both, it will issue a claim with the type http://contoso/targetedrole, combining the values of the incoming claims.

    Aggregate Functions

    Up to this point, each claim rule checks individual claims or groups of claims and fires each time there’s a match. There are some circumstances in which this behavior isn’t ideal, however. For example, you might want to look at the entire incoming claim set and make a condition statement based on that. In such cases, you can use the EXISTS, NOT EXISTS, and COUNT functions. The EXISTS function checks whether there are any incoming claims that match; if there are, it fires a rule. The NOT EXISTS function checks whether there are any incoming claims that match; if there aren’t, it fires a rule. The COUNT function counts the number of matches in the incoming claim set.

    The EXISTS example

    EXISTS([type == "http://contoso.com/emailaddress"])
    => issue(type = "http://contoso/role", value = "Exchange User");

    checks for any incoming claims with the type http://contoso.com/emailaddress and, if it finds any, issues a single claim with the type http://contoso.com/role and the value of Exchange User. The NOT EXISTS example

    NOT EXISTS([type == "http://contoso.com/location"])
    => add(type = "http://contoso/location", value = "Unknown");

    checks for any incoming claims with the type http://contoso.com/location and, if it doesn’t find any, adds a single claim with the type http://contoso.com/location with the value of Unknown. The COUNT example

    COUNT([type == "http://contoso.com/proxyAddresses"]) >= 2
    => issue(type = "http://contoso.com/MultipleEmails", value = "True");

    checks for any incoming claims with the type http://contoso.com/proxyAddresses and, if there are two or more, issues a single claim with the type http://contoso.com/MultipleEmails with the value of True.

    Querying Attribute Stores

    By default, AD is the only attribute store created when you install AD FS. You can query LDAP servers or SQL Server systems to pull data to be used in a claim. To utilize another attribute store, you first create the attribute store and enter the appropriate connection string. Figure 3 shows how to create an LDAP server as an attribute store.


    Figure 3: Creating an LDAP Server as an Attribute Store

    Once you create the store, you can query the store from a claim rule. For an LDAP attribute store, the query should be in this format:

    query = <query_filter>;<attributes>

    The parameter sent into the query is represented with the {0} operator. If multiple parameters are sent, they would be {1}, {2}, etc. For example,

    c:[Type == "http://contoso.com/emailaddress"]
    => issue(
      store = "LDAP STORE",
      types = ("http://contoso.com/attribute1", "http://contoso.com/attribute2"),
      query = "mail={0};attribute1;attribute2",
      param = c.Value
      );

    queries LDAP STORE for attribute1 and attribute2, where the email address matches, and issues two claims based on the data returned from the query.

    A SQL Server attribute store uses the same basic format of the claim rule language; only the query syntax is different. It follows the standard Transact-SQL format, and the {0} operator is used to pass the parameter. For example,

    c:[Type == "http://contoso.com/emailaddress"]
    => issue(
      store = "SQL STORE",
      types = ("http://contoso.com/attribute1", "http://contoso.com/attribute2"),
      query = "SELECT attribute1,attribute2 FROM users WHERE email = {0}",
      param = c.Value
      );

    queries SQL STORE for attribute1 and attribute2, where the email address matches, and issues two claims based on the data returned from the query.

    Regular Expressions

    The use of regular expressions (RegEx) lets you search or manipulate data strings in powerful ways to get a desired result. Without RegEx, any comparisons or replacements must be an exact match. This is sufficient for many situations, but if you need to search or replace based on a pattern, you can use RegEx. RegEx uses pattern matching to search inside strings with great precision. You can also use it to manipulate the data inside the claims.

    To perform a pattern match, you can change the double equals operator (==) to =~ and use special metacharacters in the condition statement. If you’re unfamiliar with RegEx, let’s start with some of the common metacharacters and see what the result is when using them. Table 1 shows basic RegEx metacharacters and their functions.


    RegExReplace

    You can also use RegEx pattern matching in replacement scenarios. This is similar to a find-and-replace algorithm found in many text editors, but it uses pattern matching instead of exact values. To use this in a claim rule, use the RegExReplace() function in the value section of the issuance statement.

    The RegExReplace function accepts three parameters.

  • The first is the string in which you’re searching. You’ll typically want to search the value of the incoming claim (c.Value), but this could be a combination of values (c1.Value + c2.Value).
  • The second is the RegEx pattern you’re searching for in the first parameter.
  • The third is the string value that will replace any matches found.

    The RegExReplace example

    c:[type == "http://contoso.com/role"]
    => issue (Type = "http://contoso.com/role", Value = RegExReplace(c.Value, "(?i)director", "Manager"));

    passes through any role claims. If any of the claims contain the word Director, RegExReplace will change it to Manager. For example, Director of Finance would pass through as Manager of Finance.

    If you combine the power of RegEx pattern matching with the concepts mentioned earlier in the article, you can accomplish many tasks using the Claims Rule Language.

    Coding Custom Attribute Stores

    AD FS gives you the ability to plug in a custom attribute store if the built-in functionality isn’t sufficient to accomplish your goals. You can use standard .NET code such as toUpper() and toLower() or pull data from any source through the code. This code should be a class library and will need references to the Microsoft.IdentityModel and Microsoft.IdentityServer.ClaimsPolicy assemblies.

    Try Custom!

    Creating custom rules with the Claims Rule Language gives you more flexibility with claims issuance and transformation. It can take a while to familiarize yourself with the syntax, but it becomes much easier with practice. If you want to dive into this language, try writing custom rules instead of using the templates next time.

Posted in Mix & Match | 1 Comment »

AD FS 2.0 Claims Rule Language Part 2

Posted by Brajesh Panda on February 26, 2014

Original: http://blogs.technet.com/b/askds/archive/2013/05/07/ad-fs-2-0-claims-rule-language-part-2.aspx

Hello, Joji Oshima here to dive deeper into the Claims Rule Language for AD FS. A while back I wrote a getting started post on the claims rule language in AD FS 2.0. If you haven’t seen it, I would start with that article first, as I’m going to build on the claims rule language syntax discussed in that earlier post. In this post, I’m going to cover more complex claim rules using Regular Expressions (RegEx) and how to use them to solve real world issues.

An Introduction to Regex

The use of RegEx allows us to search or manipulate data in many ways in order to get a desired result. Without RegEx, when we do comparisons or replacements we must look for an exact match. Most of the time this is sufficient but what if you need to search or replace based on a pattern? Say you want to search for strings that simply start with a particular word. RegEx uses pattern matching to look at a string with more precision. We can use this to control which claims are passed through, and even manipulate the data inside the claims.

Using RegEx in searches

Using RegEx to pattern match is accomplished by changing the standard double equals “==” to “=~” and by using special metacharacters in the condition statement. I’ll outline the more commonly used ones, but there are good resources available online that go into more detail. For those of you unfamiliar with RegEx, let’s first look at some common RegEx metacharacters used to build pattern templates and what the result would be when using them.

Symbol: ^
Operation: Match the beginning of a string
Example rule:
c:[type == "http://contoso.com/role", Value =~ "^director"]
=> issue (claim = c);
Pass through any role claims that start with "director".

Symbol: $
Operation: Match the end of a string
Example rule:
c:[type == "http://contoso.com/email", Value =~ "contoso.com$"]
=> issue (claim = c);
Pass through any email claims that end with "contoso.com".

Symbol: |
Operation: OR
Example rule:
c:[type == "http://contoso.com/role", Value =~ "^director|^manager"]
=> issue (claim = c);
Pass through any role claims that start with "director" or "manager".

Symbol: (?i)
Operation: Not case sensitive
Example rule:
c:[type == "http://contoso.com/role", Value =~ "(?i)^director"]
=> issue (claim = c);
Pass through any role claims that start with "director", regardless of case.

Symbol: x.*y
Operation: "x" followed by "y"
Example rule:
c:[type == "http://contoso.com/role", Value =~ "(?i)Seattle.*Manager"]
=> issue (claim = c);
Pass through any role claims that contain "Seattle" followed by "Manager", regardless of case.

Symbol: +
Operation: Match the preceding character one or more times
Example rule:
c:[type == "http://contoso.com/employeeId", Value =~ "^0+"]
=> issue (claim = c);
Pass through any employeeId claims that start with at least one "0".

Symbol: *
Operation: Match the preceding character zero or more times
Example rule: Similar to above; more useful in RegExReplace() scenarios.

Using RegEx in string manipulation

RegEx pattern matching can also be used in replacement scenarios. It is similar to a “find and replace”, but using pattern matching instead of exact values. To use this in a claim rule, we use the RegExReplace() function in the value section of the issuance statement.

The RegExReplace() function accepts three parameters.

  1. The first is the string in which we are searching.
    1. We will typically want to search the value of the incoming claim (c.Value), but this could be a combination of values (c1.Value + c2.Value).
  2. The second is the RegEx pattern we are searching for in the first parameter.
  3. The third is the string value that will replace any matches found.

Example:

c:[type == “http://contoso.com/role“]

=> issue (Type = “http://contoso.com/role“, Value = RegExReplace(c.Value, “(?i)director”, “Manager”));

 
 

Pass through any role claims. If any of the claims contain the word “Director”, RegExReplace() will change it to “Manager”. For example, “Director of Finance” would pass through as “Manager of Finance”.

 
 

Real World Examples

Let’s look at some real world examples of regular expressions in claims rules.

Problem 1:

We want to add claims for all group memberships, including distribution groups.

Solution:

Typically, group membership is added using the wizard by selecting Token-Groups Unqualified Names and mapping it to the Group or Role claim. This will only pull security groups, not distribution groups, and it will not include Domain Local groups.


We can pull from memberOf, but that will give us the entire distinguished name, which is not what we want. One way to solve this problem is to use three separate claim rules and use RegExReplace() to remove unwanted data.

Phase 1: Pull memberOf, add to working set “phase 1”

 
 

c:[Type == “http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname“, Issuer == “AD AUTHORITY”]

=> add(store = “Active Directory”, types = (“http://test.com/phase1“), query = “;memberOf;{0}”, param = c.Value);

Example: “CN=Group1,OU=Users,DC=contoso,DC=com” is put into a phase 1 claim.

 
 

Phase 2: Drop everything after the first comma, add to working set “phase 2”

 
 

c:[Type == “http://test.com/phase1“]

=> add(Type = “http://test.com/phase2“, Value = RegExReplace(c.Value, “,[^\n]*”, “”));

Example: We process the value in the phase 1 claim and put “CN=Group1” into a phase 2 claim.

 
 

Digging Deeper: RegExReplace(c.Value, “,[^\n]*”, “”)

  • c.Value is the value of the phase 1 claim. This is what we are searching in.
  • “,[^\n]*” is the RegEx syntax used to find the first comma, plus everything after it
  • “” is the replacement value. Since there is no string, it effectively removes any matches.

 
 

Phase 3: Drop CN= at the beginning, add to outgoing claim set as the standard role claim

 
 

c:[Type == “http://test.com/phase2“]

=> issue(Type = “http://schemas.microsoft.com/ws/2008/06/identity/claims/role“, Value = RegExReplace(c.Value, “^CN=”, “”));

Example: We process the value in the phase 2 claim and put “Group1” into the role claim.

Digging Deeper: RegExReplace(c.Value, “^CN=”, “”)

  • c.Value is the value of the phase 2 claim. This is what we are searching in.
  • “^CN=” is the RegEx syntax used to find “CN=” at the beginning of the string.
  • “” is the replacement value. Since there is no string, it effectively removes any matches.

 
 

Problem 2:

We need to compare the values in two different claims and only allow access to the relying party if they match.

Solution:

In this case we can use RegExReplace(). This is not the typical use of this function, but it works in this scenario. The function will attempt to match the pattern in the first data set with the second data set. If they match, it will issue a new claim with the value of “Yes”. This new claim can then be used to grant access to the relying party. That way, if these values do not match, the user will not have this claim with the value of “Yes”.

 
 

c1:[Type == “http://adatum.com/data1“] &&

c2:[Type == “http://adatum.com/data2“]

=> issue(Type = “http://adatum.com/UserAuthorized“, Value = RegExReplace(c1.Value, c2.Value, “Yes”));

 
 

Example: If there is a data1 claim with the value of “contoso” and a data2 claim with a value of “contoso”, it will issue a UserAuthorized claim with the value of “Yes”. However, if data1 is “adatum” and data2 is “fabrikam”, it will issue a UserAuthorized claim with the value of “adatum”.

 
 

Digging Deeper: RegExReplace(c1.Value, c2.Value, “Yes”)

  • c1.Value is the value of the data1 claim. This is what we are searching in.
  • c2.Value is the value of the data2 claim. This is what we are searching for.
  • “Yes” is the replacement value. Only if c1.Value & c2.Value match will there be a pattern match and the string will be replaced with “Yes”. Otherwise the claim will be issued with the value of the data1 claim.

 
 

Problem 3:

Let’s take a second look at a potential issue with our solution to Problem 2. Since we are using the value of one of the claims as the RegEx pattern, we must be careful to check for certain RegEx metacharacters that would make the comparison mean something different. The backslash is used in RegEx escape sequences, so any backslashes in the values will throw off the comparison and it will always fail, even if the values match.

Solution:

In order to ensure that our matching claim rule works, we must sanitize the input values by removing any backslashes before doing the comparison. We can do this by taking the data that would go into the initial claims, putting it into a holding claim, and then using RegEx to strip out the backslashes. The example below only shows the sanitization of data1, but it would be similar for data2.

Phase 1: Pull attribute1, add to holding attribute “http://adatum.com/data1holder

 
 

c:[Type == “http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname“, Issuer == “AD AUTHORITY”]

=> add(store = “Active Directory”, types = (“http://adatum.com/data1holder“), query = “;attribute1;{0}”, param = c.Value);

Example: The value in attribute 1 is “Contoso\John” which is placed in the data1holder claim.

 
 

Phase 2: Strip the backslash from the holding claim and issue the new data1 claim

 
 

c:[Type == “http://adatum.com/data1holder“, Issuer == “AD AUTHORITY”]

=> issue(type = “http://adatum.com/data1“, Value = RegExReplace(c.Value,”\\”,””));

Example: We process the value in the data1holder claim and put “ContosoJohn” in a data1 claim

Digging Deeper: RegExReplace(c.Value,”\\”,””)

  • c.Value is the value of the data1holder claim. This is what we are searching in.
  • “\\” is considered a single backslash. In RegEx, using a backslash in front of a character makes it a literal backslash.
  • “” is the replacement value. Since there is no string, it effectively removes any matches.

 
 

An alternate solution would be to pad each backslash in the data2 value with a second backslash. That way each backslash would be represented as a literal backslash. We could accomplish this by using RegExReplace(c.Value,”\\”,”\\”) against a data2 input value.

 
 

Problem 4:

Employee numbers vary in length, but we need to have exactly 9 characters in the claim value. Employee numbers that are shorter than 9 characters should be padded in the front with leading zeros.

Solution:

In this case we can create a buffer claim, join it with the employee number claim, and then use RegEx to keep only the rightmost nine characters of the combined string.

Phase 1: Create a buffer claim to create the zero-padding

 
 

=> add(Type = “Buffer”, Value = “000000000”);

 
 

Phase 2: Pull the employeeNumber attribute from Active Directory, place it in a holding claim

 
 

c:[Type == “http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname“, Issuer == “AD AUTHORITY”]

=> add(store = “Active Directory”, types = (“ENHolder”), query = “;employeeNumber;{0}”, param = c.Value);

 
 

Phase 3: Combine the two values, then use RegEx to remove all but the nine rightmost characters.

 
 

c1:[Type == “Buffer”]

&& c2:[Type == “ENHolder”]

=> issue(Type = “http://adatum.com/employeeNumber“, Value = RegExReplace(c1.Value + c2.Value, “.*(?=.{9}$)”, “”));

Digging Deeper: RegExReplace(c1.Value + c2.Value, “.*(?=.{9}$)”, “”)

  • c1.Value + c2.Value is the employee number padded with nine zeros. This is what we are searching in.
  • “.*(?=.{9}$)” matches everything except the last nine characters of the string; the lookahead (?=.{9}$) stops the match at the point where exactly nine characters remain. This is what we are searching for and removing, which leaves only the last nine characters. We could replace the 9 with any number and have it keep the last “X” number of characters.
  • “” is the replacement value. Since there is no string, it effectively removes any matches.

 
 

Problem 5:

Employee numbers contain leading zeros but we need to remove those before sending them to the relying party.

Solution:

In this case we can pull the employee number from Active Directory, place it in a holding claim, and then use RegEx to strip out any leading zeros.

Phase 1: Pull the employeeNumber attribute from Active Directory, place it in a holding claim

 
 

c:[Type == “http://schemas.microsoft.com/ws/2008/06/identity/claims/windowsaccountname“, Issuer == “AD AUTHORITY”]

=> add(store = “Active Directory”, types = (“ENHolder”), query = “;employeeNumber;{0}”, param = c.Value);

 
 

Phase 2: Take the value in ENHolder and remove any leading zeros.

 
 

c:[Type == “ENHolder”]

=> issue(Type = “http://adatum.com/employeeNumber“, Value = RegExReplace(c.Value, “^0*”, “”));

Digging Deeper: RegExReplace(c.Value, “^0*”, “”)

  • c.Value is the employee number. This is what we are searching in.
  • “^0*” finds any leading zeros. This is what we are searching for. If we only had ^0 it would only match a single leading zero. If we had 0* it would find any zeros in the string.
  • “” is the replacement value. Since there is no string, it effectively removes any matches.

 
 

Conclusion

As you can see, RegEx adds powerful functionality to the claims rule language. It has a high initial learning curve, but once you master it you will find that there are few scenarios that RegEx can’t solve. I would highly recommend searching for an online RegEx syntax tester as it will make learning and testing much easier. I’ll continue to expand the TechNet wiki article so I would check there for more details on the claims rule language.

Understanding Claim Rule Language in AD FS 2.0

AD FS 2.0: Using RegEx in the Claims Rule Language

Regular Expression Syntax

AD FS 2.0 Claims Rule Language Primer

Until next time,

Joji “Claim Jumper” Oshima

 
 

From <http://blogs.technet.com/b/askds/archive/2013/05/07/ad-fs-2-0-claims-rule-language-part-2.aspx>

Posted in Mix & Match | Leave a Comment »

AD FS 2.0 Claims Rule Language Primer – Ask the Directory Services Team – Site Home – TechNet Blogs

Posted by Brajesh Panda on February 26, 2014

Original: https://blogs.technet.com/b/askds/archive/2011/10/07/ad-fs-2-0-claims-rule-language-primer.aspx

 
 

Hi guys, Joji Oshima here again. On the Directory Services team, we get questions regarding the Claims Rule Language in AD FS 2.0 so I would like to go through some of the basics. I’ve written this article for those who have a solid understanding of Claims-based authentication. If you would like to read up on the fundamentals first, here are some good resources.

An Introduction to Claims

http://msdn.microsoft.com/en-us/library/ff359101.aspx

Security Briefs: Exploring Claims-Based Identity

http://msdn.microsoft.com/en-us/magazine/cc163366.aspx

AD FS 2.0 Content Map

http://social.technet.microsoft.com/wiki/contents/articles/2735.aspx

Claims Rules follow a basic pipeline. The rules define which claims are accepted, processed, and eventually sent to the relying party. You define claims rules as a property of the Claims Provider Trust (incoming) and the Relying Party Trust (outgoing).


Basic flowchart for the Claims Pipeline taken from TechNet.

There is also an authorization stage that checks whether the requestor has access to receive a token for the relying party. You can choose to allow all incoming claims through by setting the Authorization Rules to Permit All. Alternately, you could permit or deny certain users based on their incoming claim set. You can read more about authorization claim rules here and here.

You can create the majority of claims issuance and claims transformations using a Claim Rule Template in AD FS 2.0 Management console, but there are some situations where a custom rule is the only way to get the results you need. For example, if you want to combine values from multiple claims into a single claim, you will need to write a custom rule to accomplish that. To get started, I would recommend creating several rules through the Claim Rule Templates and view the rule language generated. Once you save the template, you can click the View Rule Language button from the Edit Rule window to see how the language works.


 
 


The rule language generated for this example is:

c:[Type == “http://contoso.com/department“]

=> issue(Type = “http://adatum.com/department“, Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType);

This rule translates as follows:

If (there is an incoming claim that matches the type “http://contoso.com/department“)

Then (issue a claim with the type “http://adatum.com/department“, using the Issuer, Original Issuer, Value, and ValueType of the incoming claim)

The claims “http://contoso.com/department” and “http://adatum.com/department” are URIs. These claims can be in the URN or HTTP format. The HTTP format is NOT a URL and does not have to specifically link to actual content on the Internet or intranet.

Claims Rule Language Syntax:

Typically, the claims rule language is structured similarly to an “if statement” in many programming languages.

If (condition is true)

Then (issue a claim with this value)

What this says is “if a condition is true, issue this claim”. A special operator “=>” separates the condition from the issuance statement and a semicolon ends the statement.

Condition statement => issuance statement;

Review some of the claim rules you created and look at the structure. See if you can pick out each part. Here is the one we looked at in the first section. Let’s break it down into the basic parts.


The “if statement” condition:

  • c:[Type == “http://contoso.com/department“]

The special operator:

  • =>

The issuance statement:

  • issue(Type = “http://adatum.com/department“, Issuer = c.Issuer, OriginalIssuer = c.OriginalIssuer, Value = c.Value, ValueType = c.ValueType);

For each rule defined, AD FS checks the input claims, evaluates them against the condition, and issues the claim if the condition is true. You probably notice the variable “C” in the syntax. Think of “C” as an incoming claim that you can check conditions against, and use values from it to add to an outgoing claim. In this example, we are checking if there is an incoming claim that has a type that is “http://contoso.com/department“. We also use the values in this claim to assign the value of Issuer, OriginalIssuer, Value, and ValueType to the outgoing claim.

There are exceptions to this that are discussed later (using ADD instead of ISSUE and issuing a claim without a condition statement).

Issue a claim to everyone:

In the Claims Rule Language, the condition part is optional. Therefore, you can choose to issue or add a claim regardless of what claims are incoming. To do this, start with the special operator “=>“.

Syntax:
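The syntax is not shown in this copy; a minimal rule of this kind, which unconditionally issues a static claim (the claim type and value below are only examples), looks like this:

=> issue(Type = "http://contoso.com/identityprovider", Value = "Contoso");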

You could set similar rules for each Claims Provider Trust so that the Relying Party (or application) can know where the user came from.

Using a Single Condition:

In this example, we will look at a single condition statement. A basic claim rule checks to see if there is an incoming claim with a certain type and if so, issue a claim.
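The rule itself is not shown in this copy; a minimal pass-through rule of this shape (the claim type is illustrative) would be:

c:[Type == "http://contoso.com/department"]
=> issue(claim = c);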

You can create this claim rule using the GUI. Choose the template named “Pass Through or Filter an Incoming Claim” and choose the appropriate incoming claim type.


Screenshot: Entries for a simple pass through claim.

You may also check for multiple values within your condition statement. For example, you can check and see if there is an incoming claim with a specific value. In the following example, we will check for an incoming claim with the type “http://contoso.com/role” that has the value of “Editors” and, if so, issue the exact same claim.
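Reconstructed from that description (the original rule is not shown in this copy), the rule would be:

c:[Type == "http://contoso.com/role", Value == "Editors"]
=> issue(claim = c);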

You can create this claim rule using the GUI as well. Choose “Pass Through or Filter an Incoming Claim”, choose the appropriate incoming claim type, select “Pass through only a specific claim value”, then enter the appropriate value.


Screenshot: Entries to pass through the Role claim if the value is “Editors”

Using Multiple Conditions:

Say you want to issue a claim only if the user has an Editors role claim and an email claim and, if so, issue the role claim. To have multiple conditions, we will use multiple “C” variables. We will join the two condition statements with the special operator “&&“.
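The rule itself is not shown in this copy; based on the description below, it would look roughly like this (the email claim type reuses the contoso.com example from earlier and is illustrative):

c1:[Type == "http://contoso.com/role", Value == "Editors"]
&& c2:[Type == "http://contoso.com/email"]
=> issue(claim = c1);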

The first condition (c1) checks to see if you have an incoming role claim with the value of Editors. The second condition (c2) checks to see if there is an incoming email claim. If both conditions are met, it will issue an outgoing claim identical to the incoming c1 claim.

Combining Claim Values:

Say you want to join information together from multiple incoming claims to form a single outgoing claim. The following example will check for an incoming claim type of “http://contoso.com/location” and “http://contoso.com/role“. If it has both, it will issue a new claim, “http://contoso.com/targeted“, combining the two values.
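The combined-claim rule is not shown in this copy; reconstructed from the description, it would be:

c1:[Type == "http://contoso.com/location"]
&& c2:[Type == "http://contoso.com/role"]
=> issue(Type = "http://contoso.com/targeted", Value = c1.Value + " " + c2.Value);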

The resulting value is the value of the first claim (c1), plus a space, plus the value of the second claim (c2). You can combine static strings with the values of the claims using the special operator “+“. The example below shows a sample set of incoming claims, and the resulting output claim.

Example Incoming Claims:

“http://contoso.com/location” is “Seattle”

“http://contoso.com/role” is “Editor”

Example Outgoing Claim:

“http://contoso.com/targeted” is “Seattle Editor”

Using ADD instead of ISSUE:

As mentioned in an earlier section, you can ADD a claim instead of ISSUE a claim. You may be wondering what the difference between these two statements is. Using the ADD command instead of the ISSUE command will add a claim to the incoming claim set, but it will not add the claim to the outgoing token. Use this for adding placeholder data to use in subsequent claims rules.


The illustration in the original TechNet article shows two rules: the first adds a role claim with the value of Editor, and the second uses this newly added claim to create a greeting claim. Assuming these are the only two rules, the outgoing token will only have a greeting claim, not a role claim.
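The illustration itself is not reproduced here, but rules along those lines would look like this (the greeting claim type and text are illustrative, not taken from the article):

=> add(Type = "http://contoso.com/role", Value = "Editor");

c:[Type == "http://contoso.com/role", Value == "Editor"]
=> issue(Type = "http://contoso.com/greeting", Value = "Hello Editor");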

I’ve outlined another example below.

Sample Rule 1:
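The rule body is missing from this copy; a plausible reconstruction, consistent with Sample Rule 2 below (the “SEA” location value is an assumption), is:

c:[Type == "http://contoso.com/location", Value == "SEA"]
=> add(Type = "http://contoso.com/region", Value = "West");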

Sample Rule 2:

c:[Type == “http://contoso.com/location“, Value==”LAX”]

=> add(Type = “http://contoso.com/region“, Value = “West”);

Sample Rule 3:
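This rule body is also missing here; a sketch of an issue rule that combines the added region claim with another incoming value to produce an area claim (the role claim and the “http://contoso.com/area” type are assumptions) might be:

c1:[Type == "http://contoso.com/region"]
&& c2:[Type == "http://contoso.com/role"]
=> issue(Type = "http://contoso.com/area", Value = c1.Value + " " + c2.Value);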

In this example, we have two rules that ADD claims to the incoming claim set and one that issues a claim to the outgoing claim set. The first two rules add a region claim to the incoming claim set, and the third combines that region value with another claim value to issue an area claim. The ADD functionality is very useful in combination with the aggregate functions covered in the next section.

Using aggregate functions (EXISTS and NOT EXISTS):

Using aggregate functions, you can issue or add a single output claim instead of getting an output claim for each match. The aggregate functions in the Claims Rule Language are EXISTS and NOT EXISTS.

Say we want to use the location claim, but not all users have it. Using NOT EXISTS, we can add a universal location claim if the user does not have one.

In Sample Rule 1, we will add a location claim with the value of “Unknown” if the user does not have a location claim. In Sample Rule 2, we will use that value to generate the “http://contoso.com/targeted” claim.

Sample Rule 1:
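The rule bodies are not shown in this copy; based on the description above, they would look roughly like this (Sample Rule 2 reuses the earlier targeted-claim example, so the role claim there is an assumption):

NOT EXISTS([Type == "http://contoso.com/location"])
=> add(Type = "http://contoso.com/location", Value = "Unknown");

Sample Rule 2:

c1:[Type == "http://contoso.com/location"]
&& c2:[Type == "http://contoso.com/role"]
=> issue(Type = "http://contoso.com/targeted", Value = c1.Value + " " + c2.Value);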

This way, users without the “http://contoso.com/location” claim can still get the “http://contoso.com/targeted” claim.

Claims Rule Language, beyond this post:

There is more you can do with the Claims Rule Language that goes beyond the scope of this blog post. If you would like to dig deeper by using Custom Attribute Stores and using Regular Expressions in the language, I’ve put up a TechNet Wiki article that contains these advanced topics and other sample syntax. In addition, some other articles may help with these topics.

Understanding Claim Rule Language in AD FS 2.0:

http://social.technet.microsoft.com/wiki/contents/articles/4792.aspx

When to Use a Custom Claim Rule:

http://technet.microsoft.com/en-us/library/ee913558(WS.10).aspx

The Role of the Claim Rule Language:

http://technet.microsoft.com/en-us/library/dd807118(WS.10).aspx

The Role of the Claims Engine:

http://technet.microsoft.com/en-us/library/ee913582(WS.10).aspx

The Role of the Claims Pipeline:

http://technet.microsoft.com/en-us/library/ee913585(WS.10).aspx

Conclusion:

Creating custom rules with the Claims Rule Language gives you more flexibility than the standard templates. Syntax familiarization takes a while, but with some practice you should be able to write custom rules in no time. In your lab environment, start by writing custom rules instead of using the templates, and build on those.

- Joji “small claims court” Oshima

 
 

Inserted from <https://blogs.technet.com/b/askds/archive/2011/10/07/ad-fs-2-0-claims-rule-language-primer.aspx>

Posted in Mix & Match | Leave a Comment »

Exchange Calendar Additional Response HTML Formatting

Posted by Brajesh Panda on February 21, 2014

In the last couple of weeks we were setting up our first Lync Room System solution from Crestron. To raise awareness about the Lync Room System, I used the Calendar Additional Response feature on the room mailbox. You can set this text using the shell or the MMC GUI:

Shell – Set-CalendarProcessing -Identity <Identity> -AddAdditionalResponse $true -AdditionalResponse "TXT"

MMC GUI – Mailbox Properties – Resource Information – Additional Text

Here is the HTML-formatted message for the additional response, and how it looks ;-)

If your meeting request was declined please disregard the rest of this message. <p align="justify"> This message is intended to help with using the new Lync Room System (LRS) in Wilanow Palace. If your meeting request was accepted: Congratulations, you have scheduled a meeting with LRS in this meeting room! LRS is a combination of software and hardware that enables rich meeting scenarios, including video conferencing, white boarding, PowerPoint sharing, and more. We are excited to have you try LRS, and we would love to hear your feedback! To use LRS, you need to schedule a Lync Meeting. Key Scenarios: </p>

<p align="justify"><STRONG>1.) Join Meeting. </STRONG> If you’re reading this mail, you’ve already scheduled a meeting. Just touch the join button on your scheduled meeting to join it. Don’t see a join button? Make your meeting an online meeting in Outlook. </p>

<p align="justify"><STRONG>2.) Launch Whiteboard. </STRONG> You can start white boarding, and then invite participants to share the whiteboard. You can also start white boarding from within a meeting. </p>

<p align="justify"><STRONG>3.) PowerPoint Sharing. </STRONG> You can share your PPT slides with the room. To do this, upload the PPT file into the meeting from your machine (just as in Lync). From the room, you can then watch the PPT presentation, or take control and present. </p>

<p align="justify"><STRONG>4.) Display Modes. </STRONG>Try using different display modes to see which one best fits your meeting. </p>

<p align="justify"> If you run into any issues or have any questions, ideas, or feedback for the feature team, please contact us: Service Desk (ServiceDesk@TechOnTip.com) Thanks! </p>

Your request was accepted.

_____

If your meeting request was declined please disregard the rest of this message.

This message is intended to help with using the new Lync Room System (LRS) in Wilanow Palace. If your meeting request was accepted: Congratulations, you have scheduled a meeting with LRS in this meeting room! LRS is a combination of software and hardware that enables rich meeting scenarios, including video conferencing, white boarding, PowerPoint sharing, and more. We are excited to have you try LRS, and we would love to hear your feedback! To use LRS, you need to schedule a Lync Meeting. Key Scenarios:

1.) Join Meeting. If you’re reading this mail, you’ve already scheduled a meeting. Just touch the join button on your scheduled meeting to join it. Don’t see a join button? Make your meeting an online meeting in Outlook.

2.) Launch Whiteboard. You can start white boarding, and then invite participants to share the whiteboard. You can also start white boarding from within a meeting.

3.) PowerPoint Sharing. You can share your PPT slides with the room. To do this, upload the PPT file into the meeting from your machine (just as in Lync). From the room, you can then watch the PPT presentation, or take control and present.

4.) Display Modes. Try using different display modes to see which one best fits your meeting.

If you run into any issues or have any questions, ideas, or feedback for the feature team, please contact us: Colliers Service Desk (ServiceDesk) Thanks!

_____

Sent by Microsoft Exchange Server 2013

Posted in Mix & Match | Leave a Comment »

 