Using .NET objects with PowerShell

Example of showing a folder dialog

In this short example I will demonstrate how easy it is to use .NET objects from PowerShell. This effectively lets you develop straight from PowerShell (although there is no designer.cs and no visual interface).

In the example I will open a dialog where the user can select a folder, which will then be stored in a variable (only if the user clicked “OK”).

Here is the script:

function Get-Folder { 
    if(!([appdomain]::CurrentDomain.GetAssemblies() | ? {$_.GetName().Name -eq "System.Windows.Forms"}))
    {
        Add-Type -AssemblyName System.Windows.Forms
    }
    [System.Windows.Forms.FolderBrowserDialog]$fbdlg = New-Object System.Windows.Forms.FolderBrowserDialog
    [System.Windows.Forms.DialogResult]$result = $fbdlg.ShowDialog();
    $path = $null;
    if($result -eq [System.Windows.Forms.DialogResult]::OK)
    {
        $path = $fbdlg.SelectedPath;
    }
    return $path
}

First I check whether System.Windows.Forms is already loaded (line 2). [appdomain]::CurrentDomain.GetAssemblies() returns all currently loaded assemblies. If none of them is named “System.Windows.Forms”, the assembly is loaded (line 4).

Other assemblies (such as System.Drawing, System.IO, System.Net, etc.) can be loaded in the same way.
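For example, loading System.Drawing follows the exact same pattern:

if(!([appdomain]::CurrentDomain.GetAssemblies() | ? {$_.GetName().Name -eq "System.Drawing"}))
{
    Add-Type -AssemblyName System.Drawing
}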

After this I simply create a FolderBrowserDialog object (line 6) using the default constructor with no parameters.

Next I call the “ShowDialog” method on the FolderBrowserDialog object and store the result in a DialogResult object (line 7).

If this result equals DialogResult.OK (line 9), the selected path is stored in the $path variable (line 11).

Finally the $path variable is returned (line 13). This will be $null if the dialog was canceled.
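Calling the function could look something like this (a minimal usage sketch; the Get-ChildItem call is just an example of doing something with the selected folder):

$folder = Get-Folder
if($folder -ne $null)
{
    Get-ChildItem -Path $folder
}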

Simple search with PowerShell

One day I was looking for a certain video file on my computer but I didn’t know where to look; however, I knew the name (in this case it was a video I posted on YouTube, where I could easily find the original file name).

With the standard search bar in the Windows Explorer window the file could not be found (it was not indexed). I figured: “how hard can it be to use PowerShell to look for a file with a certain name?”. I limited my scope to searching based on the file name only. With that I constructed a fairly simple but effective script which did just that. Not only did I find the file, it even appeared multiple times on my 2TB drive (several copies of the same file). The script went through the 2TB drive faster than I expected (the benefit in this case was the limited scope, which only looked at file names).

I wanted to share this simple script (and really, it doesn’t get much more straightforward than this), which empowers you to search large amounts of files as long as you know a part of the file name. If you are familiar with regular expressions you can use them in the search; if you are not, you can still use a part of the file name.

This is the script:

function Find-File {
    Param(
    [Parameter(Mandatory=$true)]
    [ValidateNotNull()]
    [string]$Path,
    [Parameter(Mandatory=$true)]
    [ValidateNotNull()]
    [string]$regexSearchString
    )
    return Get-ChildItem -Path $Path -Recurse -File -ErrorAction SilentlyContinue | ? {$_.Name -match $regexSearchString};
}
New-Alias -Name ff -Value Find-File;

To make the function easier to work with I also created an alias “ff” as you can see in the last line.

Most of the lines define the two mandatory parameters: Path and regexSearchString (don’t worry if you do not know much about regular expressions; plain text will also work).

The actual search is basically a one-liner. It collects all files below the provided path (as you can see by the switches “-File” and “-Recurse”). If no files are found (or the path is not valid) the result will be $null, which indicates there are no results for the current path/search string combination. All collected files are then filtered by matching each file name against the regular expression (or search string).

If called directly it will simply write the result to the screen:

ff -Path c:\ -regexSearchString test

It makes more sense to store the result in a variable like this:

$result = ff -Path c:\ -regexSearchString test

Here $result will contain a collection of FileInfo objects; if there is only one match it will not be a collection but a single FileInfo object.
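If you always want to work with a collection, you can wrap the call in the array subexpression operator, for example:

# Force an array, even if there is only one (or no) match
$result = @(ff -Path c:\ -regexSearchString test)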

If you are only interested in the location of the files you may choose to only collect the FullName (path+filename) property of the objects. You can do this by piping it to a select:

$result = ff -Path c:\ -regexSearchString test | Select FullName

Here $result will be a collection of objects that only contain the FullName property (or a single such object if there is only one match).
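If you would rather have plain strings, you can expand the property instead (a small illustrative variant of the call above):

# Returns the full paths as plain strings
$result = ff -Path c:\ -regexSearchString test | Select -ExpandProperty FullName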

If you are looking for a directory of which you know the name, the script can easily be modified to look for directories. Simply replace “-File” with “-Directory”, the rest works the same.
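A directory version could look like this (a sketch based on the script above; the name Find-Directory and the alias fd are just illustrative choices):

function Find-Directory {
    Param(
    [Parameter(Mandatory=$true)]
    [ValidateNotNull()]
    [string]$Path,
    [Parameter(Mandatory=$true)]
    [ValidateNotNull()]
    [string]$regexSearchString
    )
    # Same as Find-File, only -Directory replaces -File
    return Get-ChildItem -Path $Path -Recurse -Directory -ErrorAction SilentlyContinue | ? {$_.Name -match $regexSearchString};
}
New-Alias -Name fd -Value Find-Directory;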

Get groups with users from SharePoint Online

One of the things PowerShell enables you to do with Office 365 (particularly SharePoint Online) is collecting bulk info. In this post I will be providing a nice little script which can be used to collect groups from site collections including the names of users in those groups.

The main reason you might want to collect this information up front is that collecting it takes quite some time. By the time the information is actually needed, gathering it on the spot would take an unnecessarily long time. If the data has already been collected, the requested information can be looked up quickly. The only real downside is that you will be working with “old” data; how old depends on how often you execute the function in this post.

Before going into detail about what the script does, let me elaborate about what goes in and what comes out.

There is one mandatory parameter which must be specified: “outputFullFilePath”. This will be the path where the csv will be stored. Providing an invalid or unreachable path will result in the output being lost.

Optional parameters are:

  • csvSeparator: used as the separator for the output csv file; by default its value is ‘;’
  • internalSeparator: used as the separator inside csv fields (make sure it is different from the csvSeparator); by default its value is ‘,’
  • selectSites: if specified you will be prompted to select the site collections from which the groups will be collected (this is a switch, it requires no value; if omitted its value is false).

The output will be a csv file with the following headers: SiteCollectionUrl, LoginName, Title, OwnerLoginName, OwnerTitle, GroupUsers, GroupRoles

If the output file is opened in Microsoft Excel the columns can be used for filtering and searching, making it an easy way to find out who is in which group or where a certain person has access across all selected site collections.

Important note: groups can only be collected if the account that runs the script is site collection admin. Tenant admin is not enough! The account has to be added as site collection admin on each site collection.
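For reference, adding an account as site collection admin on a single site collection can be done with Set-SPOUser (the URL and login name below are placeholders):

Set-SPOUser -Site "https://yourtenant.sharepoint.com/sites/yoursite" -LoginName "admin@yourtenant.onmicrosoft.com" -IsSiteCollectionAdmin $true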

Important note: before the following script can be run, a connection to the Microsoft Online service and the SharePoint Online service must be established. For more information on how to achieve this, check out this previous post.
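In short, establishing the connections looks roughly like this (a sketch; “yourtenant” is a placeholder and the required modules must be installed):

# Connect to the Microsoft Online service
Connect-MsolService
# Connect to the SharePoint Online admin service
Connect-SPOService -Url "https://yourtenant-admin.sharepoint.com"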

Here is the total script (further down I will highlight the main parts of the script):

function Get-SiteCollectionGroups {
    Param(
    [Parameter(Mandatory=$true)]
    [ValidateNotNull()]
    [string]$outputFullFilePath,
    [Parameter(Mandatory=$false)]
    [switch]$selectSites,
    [Parameter(Mandatory=$false)]
    [char]$csvSeparator = ';',
    [Parameter(Mandatory=$false)]
    [char]$internalSeparator = ','
    )
    Write-Host "Collecting site collection groups";
    $SiteCollectionGroups = @();
    $sites = $null;
    if($selectSites)
    {
        $sites = Get-SPOSite -Detailed | Out-GridView -Title "Select site collections to collect groups from" -PassThru;
    }
    else
    {
        $sites = Get-SPOSite -Detailed;
    }
    [int]$counter = 0;
    [int]$total = $sites.Count;
    [string]$counterFormat = "0" * $total.ToString().Length;
    foreach($site in $sites)
    {
        $counter++;
        Write-Host "$($counter.ToString($counterFormat))/$($total.ToString($counterFormat)) [" -ForegroundColor Yellow -NoNewline;
        Write-Host "$($site.Url)" -ForegroundColor Cyan -NoNewline;
        Write-Host "]: " -ForegroundColor Yellow -NoNewline;
        try {
            $groups = Get-SPOSiteGroup -Site $site;
            foreach($group in $groups)
            {
                [string]$groupUsers = "";
                foreach($user in $group.Users)
                {
                    $groupUsers += "$user$($internalSeparator)";
                }
                if($groupUsers -match "$($internalSeparator)$")
                {
                    $groupUsers = $groupUsers.Substring(0, $groupUsers.Length-1);
                }
                [string]$groupRoles = "";
                foreach($role in $group.Roles)
                {
                    $groupRoles += "$role$($internalSeparator)";
                }
                if($groupRoles -match "$($internalSeparator)$")
                {
                    $groupRoles = $groupRoles.Substring(0, $groupRoles.Length-1);
                }
                $group | Add-Member -MemberType NoteProperty -Name "SiteCollectionUrl" -Value $site.Url
                $group | Add-Member -MemberType NoteProperty -Name "GroupUsers" -Value $groupUsers
                $group | Add-Member -MemberType NoteProperty -Name "GroupRoles" -Value $groupRoles
                $SiteCollectionGroups += $group;
            }
            Write-Host "$($groups.Count) groups are successfully collected" -ForegroundColor Green;
        }
        catch
        {
            Write-Host "Groups could not be collected" -ForegroundColor Red;
        }
    }
    $SiteCollectionGroups | Select SiteCollectionUrl,LoginName,Title,OwnerLoginName,OwnerTitle,GroupUsers,GroupRoles | Export-Csv -Path $outputFullFilePath -Delimiter $csvSeparator -NoTypeInformation
    Write-Host "Site collection groups are collected and written to $outputFullFilePath" -ForegroundColor Green
}
# 2 examples, the first is a minimal call, the second is a call with all optional parameters
#Get-SiteCollectionGroups -outputFullFilePath "C:\Backup\AllSiteCollectionGroups$(Get-Date -Format "yyyyMMddhhmmss").csv"
#Get-SiteCollectionGroups -outputFullFilePath "C:\Backup\AllSiteCollectionGroups$(Get-Date -Format "yyyyMMddhhmmss").csv" -csvSeparator ',' -internalSeparator ';' -selectSites

At line 14 we create an empty collection (an array, which can hold any type of object). At line 58 each group is added to this collection. At line 67 the collection is exported to the csv file specified by the outputFullFilePath parameter.

If the switch to manually select site collections is set, a prompt will be shown. This will be in the form of an Out-GridView (line 18); you can select multiple items with Ctrl or Shift. If manual selection of sites is off (the switch is not set), the groups of all site collections will be collected. Because of the time it takes to collect groups it is advised to only collect the most important site collections. Keep in mind that the collection of the groups depends on the permissions of the account that runs the script. If the account is not site collection admin of a site, no groups will be collected for it and the host will show a red line where the site collection URL is mentioned (line 64).

Because the process may take a while I added a progress indicator. It does not give an accurate estimation of the remaining time (it only counts site collections, not the remaining groups or users). For this three variables are used; they are defined at lines 24 through 26. At line 29 the counter is raised by one for every site collection. At lines 30 through 32 the count is written to the host, including the URL of the current site collection. Note the switch “-NoNewline”, which means that the success or error message (lines 60 and 64) is placed behind the counter instead of below it.

The main loops are quite simple. First there is a loop through all site collections (starting at line 27). Inside it there is a loop over all groups of each site collection (starting at line 35). For each group, all users are added to a string, as are all roles of the group (these are only roles on site collection / root site level). After the users and roles are collected, the site collection URL, the group users and the group roles are added to the group object (lines 55 through 57). Finally the group object is added to the SiteCollectionGroups collection.

At the bottom of the script there are three commented lines. The first of the three provides a brief explanation of the two examples that follow.

The first example (second comment line) is the minimum required use of the function. It only specifies the outputFullFilePath (if this parameter is omitted you will be prompted to enter it before the script is run).

The second example (third comment line) uses all optional parameters, including the separators and the manual selection switch.

Save the script someplace, remove the hash (#) before one of the examples, and modify it to suit your needs. Then simply run the file and wait… After completion, check the file at the location specified in the script and start working the numbers.

Because the file is in CSV format it is easy to load it in PowerShell and use scripting to quickly analyse data.
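For example, a quick lookup could look something like this (a sketch; the file path and user name are placeholders and the delimiter must match the csvSeparator you used):

# Load the exported csv file
$groups = Import-Csv -Path "C:\Backup\AllSiteCollectionGroups.csv" -Delimiter ';'
# Find all groups a certain user is a member of
$groups | ? {$_.GroupUsers -match "john.doe"} | Select SiteCollectionUrl, Title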

In my next post I will share a follow-up script which collects external users across all site collections, using the output csv of this script as input.

Which CSS dominates

In my previous post I explained how CSS selection works. But what happens when a property of an element is defined multiple times? Which one will be used?

Take for example the following HTML:

<html>
  <head>
    <title>test</title>
  </head>
  <body>
    <h1>Header</h1>
    <p>Line of text</p>
    <p>Another line of text</p>
    <div class="special">
      <p>Special line of text</p>
    </div>
  </body>
</html>

If we have the following CSS line:

p { color: red; }

The color of all p elements will be red.

If we have the following CSS lines:

div.special p { color: blue; }
p { color: red; }

The p inside the div with class “special” will be blue, while the other p elements will still be red. This is because “div.special p” is more specific than the “normal” p selector.

If we have the following CSS lines:

p { color: red; }
p { color: blue; }

The color of all p elements will be blue. This is because the blue value is more recent than the red value. Take this into account when loading multiple CSS files: the more recent rule will dominate if both are equally specific.

If we have the following CSS lines:

p { color: red !important; }
p { color: blue; }

The color of all p elements will be red, even though the blue value is more recent and both selectors are equally specific. This is because the first rule is marked with !important. The !important marker overrules any later changes to the specified property. This is a way to make sure an earlier property is applied instead of a later one, which can be useful if other style sheets are loaded after your own.

Inline style cannot be overruled by CSS (except for CSS marked with !important); other than that it can only be overruled using JavaScript.
Inline style example:

<p style="color: orange;">This is always orange</p>

This can be overruled using the following jQuery line:

$("p").css("color","blue");

Which will add inline style to all p elements with blue as color property.

In short, this is what decides which CSS style will dominate:

  1. !important suffix
  2. Specificity
    1. Inline
    2. Id
    3. Class
    4. Element
  3. Most recent

Most recent dominates only if none of the others apply. More specific CSS dominates over less specific CSS. A rule marked !important overrules all non-important CSS; between multiple !important rules the most specific one will eventually dominate. The only CSS that cannot be overruled with CSS alone is inline style with !important applied; the only way to overrule that is by replacing it with a piece of JavaScript.
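For example (an illustrative sketch, assuming a paragraph with both id="intro" and class="special"), an id selector dominates a class selector:

#intro { color: green; }   /* id selector: more specific, this one wins */
.special { color: red; }   /* class selector: less specific */

The paragraph will be green, even though the class rule is more recent, because the id selector is more specific.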