What tool am I using for writing scripts?
Most of the time I am using a free third-party ISE (Integrated Scripting Environment) tool called PowerShell Plus Professional by Idera. It has some great features like code highlighting, and it is easy to run and debug scripts with it. One great feature is that you can copy the code as HTML. I find this very useful because I don't use any code highlighters on this blog; instead I just copy my code from the tool and paste it into the HTML source of the posts. The tool comes with a big script library for Windows, AD, SharePoint, SQL and many more. But it has some limitations, so I also use the native PowerShell ISE and, of course, the PowerShell console to run and debug.
How am I reading three different types of input files?
A bit of background first. One of the best practices for PowerShell scripting is modularity: every piece of code with distinct functionality that will be reused is organised into functions with their own parameters and scope. The code in this script that does the initial checks and logic is only 25 lines, but the functions are 222 lines.
So I have three different functions for reading the different types of input files. The script checks the file extension and decides which function to use for reading the input file; a sketch of that dispatch is shown below.
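A minimal sketch of how such a dispatch could look (the variable names $InputFile and $Accounts are illustrative assumptions, not taken verbatim from my script):

# Pick the reader function based on the input file's extension
switch ([System.IO.Path]::GetExtension($InputFile).ToLower())
{
    '.csv'  { $Accounts = Read-CSV   -Path $InputFile -DefaultPass $DefaultPass }
    '.xml'  { $Accounts = Read-XML   -Path $InputFile -DefaultPass $DefaultPass }
    '.xlsx' { $Accounts = Read-Excel -Path $InputFile -DefaultPass $DefaultPass }
    default { Write-Error "Unsupported input file type: $InputFile"; return }
}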
The function for CSV:
function Read-CSV{
    [CmdletBinding()]
    Param (
        [parameter(Mandatory=$true)]
        [string]$Path,
        [parameter(Mandatory=$false)]
        [string]$DefaultPass
    )
    # Import-Csv turns every data row into an object with properties named after the columns
    $csvInput = Import-Csv -Path $Path
    $OutputAll = @()
    ForEach($row in $csvInput)
    {
        $Output = @{
            "AccountName" = $row.AccountName
            "Description" = $row.Description
            "Password"    = $row.Password
        }
        # A default password, if supplied, overrides the one from the file
        If($DefaultPass){
            $Output['Password'] = $DefaultPass
        }
        If (Test-AccountData -Data $Output){
            $OutputAll += $Output
        }
    }
    Write-Output $OutputAll
}
The key to reading CSV files is the native cmdlet Import-Csv. It reads the file and creates an object for each row (excluding the header row with the column names), with properties named after the CSV columns and holding the corresponding values. Then I am building a hash table for each object, and the function returns a collection of hash tables, one for every account that needs to be created.
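For illustration, an input file like this (the column names match the properties the function reads; the values are of course just examples):

AccountName,Description,Password
svc-sql01,SQL Service account,P@ssw0rd1
svc-web01,IIS App Pool account,P@ssw0rd2

Import-Csv would produce two objects from this file, and $row.AccountName on the first one would return svc-sql01.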
The function for XML:
function Read-XML{
    [CmdletBinding()]
    Param (
        [parameter(Mandatory=$true)]
        [string]$Path,
        [parameter(Mandatory=$false)]
        [string]$DefaultPass
    )
    # Casting the file content to [xml] gives us an XmlDocument we can navigate
    [xml]$xmlInput = Get-Content $Path
    $OutputAll = @()
    ForEach ($xElement in ($xmlInput.ServiceAccounts.Account))
    {
        $Output = @{
            "AccountName" = $xElement.AccountName
            "Description" = $xElement.Description
            "Password"    = $xElement.Password
        }
        If($DefaultPass){
            $Output['Password'] = $DefaultPass
        }
        If (Test-AccountData -Data $Output){
            $OutputAll += $Output
        }
    }
    Write-Output $OutputAll
}
For reading XML I am creating a new XmlDocument object by reading the content of the XML file. Then we can work with every XmlElement that describes an account and take its properties. Again, as in the other functions, we create a hash table for every account, and when all the information is read the function returns a collection of hash tables.
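Based on the path the function navigates ($xmlInput.ServiceAccounts.Account), the input XML is expected to look roughly like this (the element values are only examples):

<ServiceAccounts>
  <Account>
    <AccountName>svc-sql01</AccountName>
    <Description>SQL Service account</Description>
    <Password>P@ssw0rd1</Password>
  </Account>
</ServiceAccounts>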
The function for Excel files:
function Read-Excel{
    [CmdletBinding()]
    Param (
        [parameter(Mandatory=$true)]
        [string]$Path,
        [parameter(Mandatory=$false)]
        [string]$DefaultPass
    )
    # Requires Excel to be installed; we drive it through COM without showing the UI
    $objExcel = New-Object -ComObject Excel.Application
    $objExcel.Visible = $false
    $WorkBook  = $objExcel.Workbooks.Open($Path)
    $WorkSheet = $WorkBook.Sheets | Where {$_.Index -eq '1'}
    $intRowMax = ($WorkSheet.UsedRange.Rows).Count
    $OutputAll = @()
    # Start from row 2 to skip the header row
    for($intRow = 2; $intRow -le $intRowMax; $intRow++)
    {
        $Output = @{
            "AccountName" = $WorkSheet.Range("A$($intRow)").Text
            "Description" = $WorkSheet.Range("B$($intRow)").Text
            "Password"    = $WorkSheet.Range("C$($intRow)").Text
        }
        If($DefaultPass){
            $Output['Password'] = $DefaultPass
        }
        If (Test-AccountData -Data $Output){
            $OutputAll += $Output
        }
    }
    Write-Output $OutputAll
    $objExcel.Quit()
    # Not the cleanest way to release a COM instance, but reliable (see the note below)
    (Get-Process -Name excel -ErrorAction SilentlyContinue | Sort-Object StartTime)[-1] |
        Stop-Process -ErrorAction SilentlyContinue
    Remove-Variable objExcel
}
For reading Excel files (.xlsx) we will need the Excel Office application installed. The script creates a COM Excel application instance, opens the file and selects the first worksheet. Then, unfortunately, we need to read the content of each cell; as far as I know there is no way to treat every row as an object, as we can with CSV. So we take the count of the used rows, skip the first (header) row, and read every cell while building a hash table. This is done in a for loop driven by the number of the row being read. And again the function returns a collection of hash tables. All three functions output the same kind of object(s), so after processing the input file we do not care what type it was; we just process every hash table in our collection.
For the record, killing the Excel.exe process is not the correct way of removing a COM object instance, but I had some weird behavior when I used the code below to get rid of the COM instance.
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($x)
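For completeness, the commonly recommended cleanup sequence looks roughly like the sketch below; this is the approach that gave me the weird behavior, so treat it as a reference rather than a guarantee:

# Close the workbook without saving, quit Excel, release every COM reference
$WorkBook.Close($false)
$objExcel.Quit()
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($WorkSheet) | Out-Null
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($WorkBook) | Out-Null
[System.Runtime.Interopservices.Marshal]::ReleaseComObject($objExcel) | Out-Null
# Force the .NET garbage collector to finish releasing the RCWs
[GC]::Collect()
[GC]::WaitForPendingFinalizers()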
All three functions are very similar; they have similar parameters and the same output shape, so we do not have to care about input/output differences when we read different input files.
For more information on PowerShell functions you can read about_Functions, about_Functions_Advanced and the rest of the about_* topics in the PowerShell Help.
Why so many Try/Catch constructions?
Try/Catch/Finally constructions are the preferred way of handling errors in PowerShell. These statements were introduced in v2.0; before that, only the Trap statement was available. Error handling is a big subject in PowerShell, so I will try to explain how Try/Catch/Finally works with an example of creating a new AD user.
Try{
    New-ADUser -SamAccountName "JohnD1111111111111111111" -Name "John Doe" `
        -AccountPassword $secPass -DisplayName "John Doe" -ErrorAction Stop
}
Catch [System.Exception]{
    Write-Host "Unable to create Account, the following exception occurred: $($_.Exception.Message)" -ForegroundColor Red
}
In this sample we run the block in the Try statement. If there is an error/exception, the block in the Catch statement is executed. I have not included a Finally statement, but if there were one, its code would be executed no matter whether we had an exception or not; it is not required. In our example New-ADUser will fail because the value for SamAccountName is longer than 20 characters. However, the Catch block will be triggered only if the error produced in the Try is terminating. Most PowerShell cmdlets produce non-terminating errors, which means that if the command fails, the script/loop/pipeline will continue. Our cmdlet for creating a new AD user also produces a non-terminating error; this is why we have explicitly set ErrorAction to Stop, which makes the error terminating. The whole idea of error handling is to identify that we have an error and do something about it. In this sample we will not see the error that the cmdlet produces; instead we will execute the Catch block, and it will show a custom message that contains the message of the original exception and tells us what went wrong, without dumping the entire exception from PowerShell. And because the error is caught in the Try/Catch construction, the whole script will not stop and will continue.
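To complete the picture, here is a sketch of how a Finally block would fit into the same sample ($secPass is assumed to be defined earlier as a secure string; the Finally body is just an illustration):

Try{
    New-ADUser -SamAccountName "JohnD" -Name "John Doe" -AccountPassword $secPass -ErrorAction Stop
}
Catch [System.Exception]{
    Write-Host "Unable to create Account, the following exception occurred: $($_.Exception.Message)" -ForegroundColor Red
}
Finally{
    # Runs whether the Try block succeeded or threw - the natural place for cleanup or logging
    Write-Verbose "Account creation attempt finished."
}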
How is a locally defined function executed on a remote computer?
You can launch the account creation script from the DC on which you want to create the accounts, or you can run it from a completely different computer and create the accounts on the target DC remotely.
This is achieved very easily: I am just executing the function that creates the accounts locally on our remote DC, passing along the parameters that I will use. The line looks like this:
Invoke-Command -Session $session -ScriptBlock ${function:Create-UsersLocal} -ArgumentList $Hash,$OUnit
Our function for creating accounts when the script runs on the DC is Create-UsersLocal; I am just passing its definition via the ScriptBlock parameter and its parameter values via ArgumentList.
And now the explanation. Let's say we have a function called Hello-World, a very simple function that just says hello from the computer it runs on, in a given color; the color of the text can be set as a parameter. When we load the function, it is registered in a PSDrive called Function:, and we can see that this PSDrive contains our function. If we supply just the name of the function, Invoke-Command will try to execute it on the remote computer as-is, and it will fail because the function is not defined in the remote session. Our goal is to get the Definition of the FunctionInfo object. This is done by adding a dollar sign when supplying the script block for the remote session (${function:Hello-World}), which forces reading the definition of the function and converts it into a script block usable in the remote session. We also transfer the values for the function's parameters via ArgumentList, and this is how the magic happens.
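A small reconstruction of that demo (the function body is an illustrative sketch, not the exact code from my script):

function Hello-World {
    Param (
        [string]$Color = 'Green'
    )
    # Says hello from whatever computer it ends up running on
    Write-Host "Hello from $($env:COMPUTERNAME)!" -ForegroundColor $Color
}

# The Function: PSDrive now contains the definition
Get-ChildItem Function:\Hello-World

# Fails: the bare name means nothing in the remote session
# Invoke-Command -Session $session -ScriptBlock {Hello-World}

# Works: ${function:Hello-World} expands to the function's definition
Invoke-Command -Session $session -ScriptBlock ${function:Hello-World} -ArgumentList 'Yellow'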
For more information on PowerShell scripting I highly recommend visiting http://powershell.org/, especially their Free eBooks section.
You can download my script from the TechNet Gallery.