When using DSC as part of infrastructure as code or in deployment pipelines, we need to be able to identify the version of a node's configuration. With all configurations kept in a source/version control repository, it is easy to increment the version number for a given configuration document. However, there is currently no way to specify the version of the configuration document itself. This makes it hard to determine which version of the configuration is running on a target node without a third-party console or integration with source control.

The ask here is to add support for specifying configuration meta properties, such as the version of the configuration.
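A hypothetical shape for such a meta property (the Version keyword below does not exist in DSC today; it only illustrates the ask):

```powershell
Configuration WebServerBaseline
{
    # Hypothetical meta property -- not supported today. The idea is
    # that the compiler would emit it into the generated MOF so that
    # Get-DscConfiguration or a report server could surface it.
    Version = '1.4.2'

    Node 'localhost'
    {
        WindowsFeature IIS
        {
            Name   = 'Web-Server'
            Ensure = 'Present'
        }
    }
}
```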


Currently the only way to apply conditions is using the Script resource. However, the Script resource has limitations, especially when dealing with Azure credential objects. Conditions would be very useful, similar to what SCCM uses for its Configuration Items.
The syntax would be:
Condition = [Boolean PowerShell expression]

Example #1: Apply a Package Resource only for SQL servers
Example #2: Apply a WindowsFeature Resource only for IIS servers.
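A sketch of how the two examples might look with the proposed syntax (the Condition property is the feature being requested and does not exist in DSC today; package names, paths, and detection expressions are illustrative):

```powershell
Configuration ConditionalPackages
{
    Import-DscResource -ModuleName PSDesiredStateConfiguration

    Node 'localhost'
    {
        # Example 1: apply a Package resource only on SQL servers.
        Package SqlClientTools
        {
            Name      = 'SQL Client Tools'
            Path      = '\\share\installers\SqlClientTools.msi'
            ProductId = '{00000000-0000-0000-0000-000000000000}'
            Ensure    = 'Present'
            # Proposed: a Boolean PowerShell expression evaluated on the
            # target node; the resource is skipped when it is false.
            Condition = '$null -ne (Get-Service -Name MSSQLSERVER -ErrorAction SilentlyContinue)'
        }

        # Example 2: apply a WindowsFeature resource only on IIS servers.
        WindowsFeature ManagementConsole
        {
            Name      = 'Web-Mgmt-Console'
            Ensure    = 'Present'
            Condition = '(Get-WindowsFeature -Name Web-Server).Installed'
        }
    }
}
```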

This is more of an indirect failure experience; please bear with me if this is the wrong place. Expanding a zip file with the DSC Archive resource leaves a process holding the zip file. This means the zip file cannot be deleted by the same script or in the same PowerShell session.
The error message from Remove-Item on the zip file looks like this: "Remove-Item : Cannot remove item C:\temp\jdk1.8.0_112-CE.zip: The process cannot access the file 'jdk1.8.0_112-CE.zip' because it is being used by another process.".
In Plaster issue 240 on GitHub (https://github.com/PowerShell/Plaster/issues/240), Keith Hill (rkeithhill) mentions that Get-FileHash leaves a handle on the file. This seems relevant because the DSC Archive resource has checksum validation options.
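A workaround sketch for the same session, assuming the handle is held by an undisposed stream from the checksum validation (as the Plaster issue suggests for Get-FileHash): forcing a garbage collection before the delete may release it.

```powershell
# Force finalization of any orphaned file streams left behind by the
# checksum validation, then retry the delete in the same session.
[GC]::Collect()
[GC]::WaitForPendingFinalizers()
Remove-Item -Path 'C:\temp\jdk1.8.0_112-CE.zip' -Force
```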


PS /root> netsh
netsh : The term 'netsh' is not recognized as the name of a cmdlet, function, script file, or operable program. Check the spelling
of the name, or if a path was included, verify that the path is correct and try again.
At line:1 char:1
+ netsh
+ ~~~~~
    + CategoryInfo          : ObjectNotFound: (netsh:String) [], CommandNotFoundException
    + FullyQualifiedErrorId : CommandNotFoundException

This requires the PSDscResources module version 2.8.0.0 from the gallery and the File resource from the in-box PSDesiredStateConfiguration module. However, compiling this configuration fails with a message that the File resource cannot be loaded. If we remove -Name File from the first import command, you will see an error that the Registry resource cannot be found. We have to use -Name with PSDscResources since PSDesiredStateConfiguration already exports a resource of the same name.

So, the only workaround at this time is to remove the PSDesiredStateConfiguration import from the configuration completely so that we can at least compile. Here is something that works.
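A sketch of such a configuration (resource names and values are illustrative). The assumption here is that the binary in-box File resource still resolves without an explicit import, while Registry comes from PSDscResources:

```powershell
Configuration WithoutInboxImport
{
    # Import only PSDscResources; omitting the PSDesiredStateConfiguration
    # import avoids the duplicate-resource name clash entirely.
    Import-DscResource -ModuleName PSDscResources

    Node 'localhost'
    {
        Registry ExampleValue
        {
            Key       = 'HKEY_LOCAL_MACHINE\SOFTWARE\Contoso'
            ValueName = 'Deployed'
            ValueData = '1'
            Ensure    = 'Present'
        }

        # File is a binary in-box resource, so it is found even with no
        # explicit import of PSDesiredStateConfiguration.
        File ExampleFile
        {
            DestinationPath = 'C:\Temp\marker.txt'
            Contents        = 'deployed'
            Ensure          = 'Present'
        }
    }
}
```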

Dynamically detecting the taskbar position and adjusting the starting position of the PowerShell window accordingly when it opens might be a good solution. I am on Windows 10 with three monitors; the taskbar is locked, not automatically hidden, and shown only on the main display.

Once a node meta configuration is enacted, it is easy for an administrator or process (with malicious intent) to modify the MetaConfig.mof file in the C:\Windows\System32\Configuration directory. The GetMetaConfiguration method in the MSFT_DscMetaConfiguration class does not validate the property values against the allowed values of the CIM properties.

Steps to reproduce this behavior:
- Author a simple meta configuration and enact it.
- Open the MetaConfig.MOF file in your favorite editor and change the value of ConfigurationMode to some random text.
- Save the file and close it.
- Run Get-DscLocalConfigurationManager.
- You will see the random value assigned to ConfigurationMode in the output, although it is not a valid value for the ConfigurationMode property.
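The steps above can be sketched as a script (path as described; the replacement value is arbitrary, and MOF file encoding details are glossed over):

```powershell
$mof = 'C:\Windows\System32\Configuration\MetaConfig.mof'

# Swap a valid ConfigurationMode value for random text.
$text = Get-Content -Path $mof -Raw
$text = $text -replace 'ConfigurationMode\s*=\s*"[^"]*"', 'ConfigurationMode = "NotARealMode"'
Set-Content -Path $mof -Value $text

# The invalid value is echoed back instead of being rejected.
(Get-DscLocalConfigurationManager).ConfigurationMode
```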


1. We will validate the MOF when it is passed in as part of our API (i.e. Set-DscLocalConfigurationManager) and raise an error if the values are not valid.
2. We will write a warning when Get-DscLocalConfigurationManager reads a MOF that has invalid values, and also at LCM startup. The resulting behavior will be the same as today: invalid values will be read as the default value by the LCM.

I have system modules that are deployed to nodes according to the configuration data file (psd1). For example, one node can have three modules, or there may be three nodes, each with one module.
All modules share a common configuration, so I put this configuration in a separate composite resource. As a result, if there is more than one module on a node, the resulting configuration will contain duplicate resources.
The problem occurs when one of these duplicate resources has a complex property. Here is a simplified version of my configuration:
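A minimal reconstruction of the shape of such a configuration (names are taken from the error message that follows; the resource bodies are illustrative, not the original):

```powershell
# Composite resource shared by all modules (BaseWebConfiguration.schema.psm1).
Configuration BaseWebConfiguration
{
    Import-DscResource -ModuleName xWebAdministration

    xWebsite DefaultSite
    {
        Name         = 'Default Web Site'
        PhysicalPath = 'C:\inetpub\wwwroot'
        # Array-valued (complex) property that trips the conflict check.
        LogFlags     = @('Date', 'Time', 'ClientIP')
        Ensure       = 'Present'
    }
}

# Node configuration: two modules on the same node both pull in the
# composite resource, producing duplicate xWebsite entries.
Configuration ConfigurationTest
{
    Node 'localhost'
    {
        BaseWebConfiguration BaseWebConfigForWebBackend    { }
        BaseWebConfiguration BaseWebConfigForMobileBackend { }
    }
}
```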

PSDesiredStateConfiguration\Configuration : A conflict was detected between resources '[xWebsite]DefaultSite::[BaseWebConfiguration]BaseWebConfigForWebBackend
(C:\Temp\Config\ConfigurationTest.ps1::5::5::xWebsite)' and '[xWebsite]DefaultSite::[BaseWebConfiguration]BaseWebConfigForMobileBackend
(C:\Temp\Config\ConfigurationTest.ps1::5::5::xWebsite)' in node 'localhost'. Resources have identical key properties but there are differences in the following non-key
properties: 'LogFlags'. Values 'System.Object[]' don't match values 'System.Object[]'. Please update these property values so that they are identical in both cases.

If the LogFlags property is not set, everything is ok. The error seems to be in the PSDesiredStateConfiguration.psm1 file, line 1489:

elseif ( $resource[$property] -ne $properties[$property] )

where a comparison by value is done even for complex properties.
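An array-aware check, as a sketch of what the comparison could do instead (this is an assumption about a possible fix, not the module's actual code):

```powershell
# -ne between two arrays filters the left array rather than answering
# "are these equal?", so identical arrays can still register a conflict.
$a = @('Date', 'Time', 'ClientIP')
$b = @('Date', 'Time', 'ClientIP')

# Compare-Object emits nothing when the collections match.
$conflict = [bool](Compare-Object -ReferenceObject $a -DifferenceObject $b)
$conflict   # False: no conflict for identical values
```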

Any way to workaround this for the time being?


It seems it is not possible to pass a complex type to a DSC resource.
I am currently working around this by serializing the complex type to JSON; the resource then deserializes it back to the complex type.
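A sketch of that round trip (the object shape and property names are made up for illustration):

```powershell
# Caller side: flatten the complex object into a JSON string, which the
# MOF can carry as an ordinary [string] property.
$endpoint = [pscustomobject]@{ Host = 'db01'; Port = 5432; UseTls = $true }
$json     = $endpoint | ConvertTo-Json -Compress

# Resource side (e.g. inside Set-TargetResource): rebuild the object
# from the string property.
$restored = $json | ConvertFrom-Json
$restored.Port   # 5432
```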

Is this the recommended implementation? Is this on the roadmap of DSC?

Best regards,
Jens

Dear,

As we use PowerShell v5 classes extensively, we create instances of a class and add them to a hashtable.

This hashtable is assigned to a member of a DSC resource. When creating the .MOF file, I receive a System.ArgumentException from PSDesiredStateConfiguration.psm1.

I have created a pull server configuration. I created the DSC signing certificate using a custom template on an Enterprise Root CA, which has worked for 2012 R2 nodes, and I also tested using New-xSelfSignedDscEncryptionCertificate from xDSCUtils. Using the same certificate to compile and execute the MOF on the same computer works; the problems arise only when you compile on one machine and execute on another.
I kept getting errors when passing in credentials, so I wrote a tiny custom DSC resource that displays the password as clear text when I run Start-DscConfiguration -Wait -Verbose.
You can find it here: https://gist.github.com/aboersch/65e846a4966fe2c4708ed21d655a54a7. The client does not correctly decrypt the credentials. As the password, I am receiving:
-----BEGIN CMS-----
<Long Multi-Line Base64 String>
-----END CMS-----
If I pass this to Unprotect-CmsMessage, I receive the correct password.
The certificate passes both the $_.PrivateKey.KeyExchangeAlgorithm and $_.Verify() checks.
I have tried changing the certificate provider to "Microsoft Enhanced Cryptographic Provider v1.0", "Legacy Cryptographic Service Provider", and "Microsoft RSA SChannel Cryptographic Provider".
I have already tried these:
http://stackoverflow.com/questions/34006865/dsc-problems-with-credentials-and-build-10586
https://wespoint.wordpress.com/2017/01/19/powershell-dsc-encryption-issue/
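The manual check described above can be sketched as follows (the file path and certificate thumbprint are placeholders):

```powershell
# What the resource received instead of the decrypted secret.
$received = Get-Content -Path 'C:\Temp\received-password.txt' -Raw

if ($received -like '-----BEGIN CMS-----*') {
    # Decrypting by hand with the node's certificate succeeds, which
    # points at the LCM's decryption step rather than the certificate.
    $cert = Get-Item -Path 'Cert:\LocalMachine\My\<thumbprint>'
    Unprotect-CmsMessage -Content $received -To $cert
}
```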


I'm not sure why this is, but for some reason Get-TargetResource requires not only the 'Key' parameters but also the 'Required' parameters. My understanding is that only 'Key' parameters should be necessary to find an existing resource, and the 'Required' parameters are only needed when you want to create, update, or delete something related to a resource.

This really makes no sense and will hinder our ability to build higher-level tech on top of DSC. If I have the key values for any piece of data in any system, I should be able to retrieve it without any additional information.

In almost all of the DSC resources I look at, Get-TargetResource accepts a bunch of parameters that are never used, due to this restriction in the engine.

xComputerManagement\xScheduledTask is a quick example. The only information needed to find any scheduled task is the TaskName; all other parameters are unnecessary. However, DSC has forced the authors to include a bunch of extra parameters in the definition that are never even used in the Get-TargetResource code.
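The shape being argued for, as a sketch (a Get that needs only the key; today the schema forces the unused Required parameters to be declared as well):

```powershell
function Get-TargetResource
{
    [CmdletBinding()]
    [OutputType([Hashtable])]
    param
    (
        # The key is all that is needed to locate the task.
        [Parameter(Mandatory = $true)]
        [String]
        $TaskName
    )

    $task = Get-ScheduledTask -TaskName $TaskName -ErrorAction SilentlyContinue

    return @{
        TaskName = $TaskName
        Ensure   = if ($null -ne $task) { 'Present' } else { 'Absent' }
    }
}
```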

If someone can explain a logical reason why we do this, I will happily accept it, but I cannot think of one, and this appears to create a bunch of unnecessary work for development and testing staff.


In Azure Automation DSC, the whole MOF file gets encrypted without the need to manually issue certificates for every node and then collect the public keys.

This extremely useful feature should also be implemented in the on-premises DSC pull server, especially because the needed functionality must already be in WMF 5: nothing more than WMF 5 is needed to use the Azure Automation DSC service.

This would be a huge improvement for the DSC pull server.

Thx!

We use logic to dynamically compose a ConfigurationData structure and pass it to the configuration. This logic executes quickly. Calling the configuration generates more than 5,000 MOF files for unique nodes. The process takes 1.5 hours on modern server-class hardware with 16GB RAM. Also, the MOF files are all created at the end of the process, rather than one-at-a-time throughout the process. This causes high memory usage. Please optimize the PSDesiredStateConfiguration module to generate large quantities of MOF files more quickly. This issue adds significant delay to the DSC pipeline.
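Until the module itself is optimized, one mitigation is to compile in batches so MOFs are flushed to disk incrementally instead of all at once at the end. A sketch, assuming $allNodes is the dynamically composed node list and MyConfiguration is the configuration being compiled:

```powershell
$batchSize = 250
for ($i = 0; $i -lt $allNodes.Count; $i += $batchSize) {
    $last  = [Math]::Min($i + $batchSize, $allNodes.Count) - 1
    $batch = @{ AllNodes = @($allNodes[$i..$last]) }

    # Each call writes its batch of MOFs before the next batch starts,
    # which caps memory usage and produces output progressively.
    MyConfiguration -ConfigurationData $batch -OutputPath 'C:\Mofs'
}
```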

When upgrading to WMF 5.1, we are seeing errors in the DSC event log on servers that use a configuration with encrypted content (passwords). We use encrypted credentials to create application pools and assign a service account as the identity. The data gets decrypted fine and the app pools are created with the correct identities. However, the following error gets logged in the DSC event log every time a consistency check runs:

Job :
Message : Cannot unprotect message. The input contained no encrypted content. Specify the '-IncludeContext' parameter if you wish to output the original content when no encrypted…

We updated our certificate to use the new requirements for WMF 5.1 and everything looks fine there.

Currently, the DSC reporting server only offers the ability to query a single node at a time. For any environment of real size, it is crucial to be able to get the last reported compliance status for all nodes. I'd really like to see this feature added in a future release.
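A stopgap that can be scripted against the existing per-node API: loop over known agent IDs and keep the newest report for each. The server URL and the ID list are placeholders; the per-node Reports query itself is the standard reporting endpoint.

```powershell
$server   = 'https://pullserver.contoso.com:8080'
$agentIds = @('<agent-id-1>', '<agent-id-2>')

foreach ($id in $agentIds) {
    $uri      = "$server/PSDSCPullServer.svc/Nodes(AgentId='$id')/Reports"
    $response = Invoke-RestMethod -Uri $uri -Headers @{
        Accept          = 'application/json'
        ProtocolVersion = '2.0'
    }

    # Keep only the most recent report per node.
    $latest = $response.value |
        Sort-Object -Property EndTime -Descending |
        Select-Object -First 1

    [pscustomobject]@{ AgentId = $id; StatusData = $latest.StatusData }
}
```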