Wednesday, August 10, 2016

The Danger of the Local Administrator Account

The local administrator account exists on every Windows Server and is usually left enabled.  This account is a major security vulnerability and a common target for hacking attempts.

Security flaws with this account include:
  • This account cannot be locked out and does not adhere to local or domain account lockout policies, which allows brute-force attacks to be conducted against it.
  • The local administrator account has a well-known SID: it always begins with S-1-5- and ends with -500.  There are also tools that allow you to log on with a SID rather than an account name, so an attacker could launch a brute-force attack without even knowing the account name!

Quote from Microsoft, https://technet.microsoft.com/en-us/library/jj852165.aspx:

"The built-in Administrator account cannot be locked out no matter how many failed logons it accrues, which makes it a prime target for brute-force attacks that attempt to guess passwords. Also, this account has a well-known security identifier (SID), and there are non-Microsoft tools that allow authentication by using the SID rather than the account name. Therefore, even if you rename the Administrator account, an attacker could launch a brute-force attack by using the SID to log on. All other accounts that are members of the Administrator's group have the safeguard of locking out the account if the number of failed logons exceeds its configured maximum."

If security is your top concern, my recommendation is to disable this account and always create a new administrator account, regardless of whether it is the default domain Administrator account or the default local Administrator account.
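A minimal PowerShell sketch of this approach, assuming the local account cmdlets are available (Windows Server 2016 / PowerShell 5.1); the replacement account name is just an example:

# Identify the built-in Administrator by its well-known RID 500 and disable it
$builtin = Get-LocalUser | Where-Object { $_.SID.Value -like 'S-1-5-*-500' }
Disable-LocalUser -Name $builtin.Name
# Create a replacement administrative account (example name only)
New-LocalUser -Name 'AdminSvc01' -Password (Read-Host -AsSecureString 'Enter password')
Add-LocalGroupMember -Group 'Administrators' -Member 'AdminSvc01'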

Need IT Support in Perth, give me a call now on 08 9468 7575

Thursday, July 21, 2016

Error downloading unsupported File Types with IIS

A customer of mine complained that when users attempted to download unknown file types, they received a "404 - File or directory not found" error message.

The file type my user was attempting to download was a ".indd" file.


This error occurs when the file type is not a registered MIME type on the IIS web server.

To add a new file type, select the website in IIS Manager and click MIME Types.


Add the unknown file extension users are attempting to download.
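If you prefer to script the change instead of using IIS Manager, a sketch using the WebAdministration module is shown below; the .indd extension mirrors the example above and the octet-stream MIME type is an assumption you may want to change:

# Register the .indd extension as a downloadable static content type
Import-Module WebAdministration
Add-WebConfigurationProperty -PSPath 'MACHINE/WEBROOT/APPHOST' -Filter 'system.webServer/staticContent' -Name '.' -Value @{fileExtension='.indd'; mimeType='application/octet-stream'}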


After adding the MIME type, run an "iisreset".

Users will now be able to download the unknown file type.

Wednesday, July 20, 2016

DFSR Event ID 6404 - The Replicated Folder has an Invalid Local Path

I was attempting to add a new DFSR node to an existing DFSR Replication Group consisting of 17 file servers.  When adding the server, the following error was logged.

DFS Replication cannot replicate the replicated folder REPLICATEDFOLDER because the local path E:\replicationgroup is not the fully qualified path name of an existing, accessible local folder. This replicated folder is not replicating to or from this server. Event ID: 6404


This particular error is generally experienced when people attempt to add a non-NTFS volume such as ReFS to a DFSR replication group, as documented on the following forum thread.

https://social.technet.microsoft.com/Forums/windowsserver/en-US/8860b354-3bf2-43ed-9acf-07fa43931046/dfsr-replication-cant-be-used-on-a-refs-volume?forum=winserver8gen

In my case, I experienced this error with DFSR set up on an NTFS volume on a new branch server.

The customer moved the DFSR node from one office to another.  The C:\ "SYSTEM" volume was formatted and reloaded with a fresh copy of 2012 R2 with the latest patches; however, the E:\ "DATA" volume was not formatted.  As a result, it retained its legacy DFSR database, staging data and so on.

To clean up the staging data I created a new directory in C:\ called "empty":

C:\Empty

I then ran the following commands after stopping the DFS Replication service on the spoke server:

Robocopy "C:\Empty" "E:\System Volume Information\DFSR" /MIR

Followed by

rmdir "E:\System Volume Information\DFSR"

Starting the DFS Replication service again still resulted in the error above.

After playing around a bit more, I found that in addition to flushing all data in System Volume Information\DFSR, you must also remove the DfsrPrivate link, which points to the subdirectory of System Volume Information\DFSR holding the DfsrPrivate data.
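A sketch of removing the link, assuming the replicated folder root is E:\replicationgroup as in the 6404 event above; removing a junction with rmdir deletes only the link, not the target data:

# Confirm DfsrPrivate is a reparse point (junction) before touching it
fsutil reparsepoint query "E:\replicationgroup\DfsrPrivate"
# Remove the junction itself - the data under System Volume Information\DFSR has already been flushed
cmd /c rmdir "E:\replicationgroup\DfsrPrivate"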


After doing this, starting the DFS Replication service kicks off its normal DFSR initial sync, shown by a state of 2:

Wmic /namespace:\\root\microsoftdfs path dfsrreplicatedfolderinfo get replicationgroupname,replicatedfoldername,state


State values: 0 = Uninitialized, 1 = Initialized, 2 = Initial Sync, 3 = Auto Recovery, 4 = Normal, 5 = In Error
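The same information can be pulled from the DFSR WMI provider in PowerShell if you prefer (a sketch):

# Query the DFSR WMI provider for replicated folder state (2 = Initial Sync)
Get-WmiObject -Namespace 'root\microsoftdfs' -Class 'DfsrReplicatedFolderInfo' | Select-Object ReplicationGroupName, ReplicatedFolderName, State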

Need help with DFSR In Perth?  Click for IT Support.

Tuesday, July 19, 2016

DFSR Backlog Count Not Reducing - Server 2012 R2

Microsoft recently released an update which causes DFSRS.exe CPU usage to hit 100%.

https://support.microsoft.com/en-us/kb/3156418

I wrote about this problem on the following blog post:

http://clintboessen.blogspot.com.au/2016/07/dfsr-service-100-cpu-usage.html

Since installing and subsequently removing this update, one of our spoke servers failed to replicate back to the primary hub server.  All data from the spoke server replicated correctly to the hub; however, data from the hub was not propagating down to the spoke.

The backlog count continued to increase.
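One way to check the backlog between two members from the command line is shown below; the replication group, folder and server names are placeholders:

# Show the backlog of files waiting to replicate from the hub down to the spoke
DfsrDiag Backlog /RGName:"Replication Group" /RFName:"ReplicatedFolder" /SendingMember:HUBSERVER /ReceivingMember:SPOKESERVER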
 


DFSR Hotfix https://support.microsoft.com/en-us/kb/2996883 was not installed on the affected hub server.

We knew there was no issue with our hub server as this was replicating successfully to 16 other DFSR spoke servers.

On the affected spoke server, I worked with the Directory Services team from Microsoft who made the following changes to the Network Interface.


C:\Windows\system32>netsh int tcp set global chimney=disabled
Ok.


C:\Windows\system32>netsh int tcp set global rss=disabled
Ok.


C:\Windows\system32>netsh int ip set global taskoffload=disabled
Ok.


C:\Windows\system32>netsh int tcp set global autotuninglevel=disabled
Ok.


C:\Windows\system32>netsh int tcp set global ecncapability=disabled
Ok.


C:\Windows\system32>netsh int tcp set global timestamps=disabled
Ok.


C:\Windows\system32>netsh advf set allp state off
Ok.


Microsoft then disabled the following Offload features on the HP Network Interface card on the spoke server:
  • IPv4 Large Send Offload
  • Large Send Offload V2 (IPv4)
  • Large Send Offload V2 (IPv6)
  • TCP Connection Offload (IPv4)
  • TCP Connection Offload (IPv6)
  • TCP/UDP Checksum Offload (IPv4)
  • TCP/UDP Checksum Offload (IPv6)
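The equivalent settings can usually be changed with the NetAdapter PowerShell module rather than through the NIC property pages - a sketch, assuming the adapter is named "Ethernet":

# Disable large send and checksum offloads on the adapter (adapter name is an assumption)
Disable-NetAdapterLso -Name 'Ethernet' -IPv4 -IPv6
Disable-NetAdapterChecksumOffload -Name 'Ethernet' -TcpIPv4 -TcpIPv6 -UdpIPv4 -UdpIPv6
# Review any remaining vendor-specific offload properties
Get-NetAdapterAdvancedProperty -Name 'Ethernet'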
 
After disabling these components of the network interface card and restarting the DFS Replication service, the backlog count from the hub server to the spoke server slowly started decreasing.
 
TCP offload engine is a function used in network interface cards (NICs) to offload processing of the entire TCP/IP stack to the network controller.  By moving some or all of the processing to dedicated hardware, a TCP offload engine frees the system's main CPU for other tasks.

Sunday, July 10, 2016

DFSR Service 100% CPU Usage

An enterprise DFSR customer of mine with 17 file server nodes complained that CPU utilisation was at 100% on numerous DFSR nodes in the cluster.


In addition to the DFSRS.exe process maxing out the CPU on the file server, the following error was being generated on a regular basis:

Log Name:      Application
Source:        ESENT
Date:          7/07/2016 2:33:18 PM
Event ID:      623
Task Category: Transaction Manager
Level:         Error
Keywords:      Classic
User:          N/A
Computer:      SERVER.DOMAIN.LOCAL
Description:
DFSRs (1696)
\\.\E:\System Volume Information\DFSR\database_5AAC_EEEA_ACEE_C01D\dfsr.db: The version store for this instance (0) has reached its maximum size of 127Mb. It is likely that a long-running transaction is preventing cleanup of the version store and causing it to build up in size. Updates will be rejected until the long-running transaction has been completely committed or rolled back.
Possible long-running transaction:
 SessionId: 0x00000032FFE94EC0
 Session-context: 0x00000000
 Session-context ThreadId: 0x00000000000012BC
 Cleanup: 0
 Session-trace:
32997@2:32:18 PM


 
After speaking with the Directory Services team at Microsoft, this was due to a bug with KB3156418.  This was documented on the knowledge base website:
 
 
Symptoms
If you installed update rollup 3156418 on Windows Server 2012 R2, the DFSRS.exe process may consume a high percentage CPU processing power (up to 100 percent). This may cause the DFSR service to become unresponsive to the point at which the service cannot be stopped. You must hard-boot affected computers to restart them.

Workaround

To work around this problem, uninstall update rollup 3156418.

Status

Microsoft is aware of this problem and is working on a solution.

Make sure you do not install KB3156418 on any DFSR nodes until this issue has been fixed by Microsoft.
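To check whether the update is present on a node and remove it from an elevated prompt, something along these lines should work (a sketch):

# Check for the problem update
Get-HotFix -Id 'KB3156418' -ErrorAction SilentlyContinue
# Remove it if found (reboot afterwards)
wusa.exe /uninstall /kb:3156418 /norestart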
 

Wednesday, July 6, 2016

Bug with Windows 7 and Access Based Enumeration

I encountered an interesting bug with Windows 7 workstations when Access Based Enumeration is enabled on an SMB share or DFS Namespace running on Windows Server 2012 R2.

When a user tries to create a file or folder in Windows Explorer under a location where they have "Full Modify Rights", they receive the following error:

Drive Mapping refers to a location that is unavailable. It could be a hard drive on this computer, or on a network. Check to make sure that the disk is properly inserted, or that you are connected to the Internet on your network, and then try again. If it still cannot be located, the information might have been moved to a different location.


This issue occurs under the following circumstances:
  • Access Based Enumeration is enabled on a network share or DFS Namespace
  • A mapped network drive is created to the share
  • The user is connecting from a Windows 7 workstation.

The Windows 7 client works under the following circumstances:
  • If the user creates a file from an application such as Microsoft Word (not Windows Explorer) using a mapped network drive to the folder share, it works correctly.
  • If the user opens the UNC path of the share (\\server\share\folder) rather than the mapped network drive, it works correctly.
Note: If the user connects from Windows Server 2008 R2, Windows 8.1 or Windows 10, it works without issues.

When setting up Access Based Enumeration, the root folder should have:
  • List Folder / Read Data
  • Applies to "This Folder Only"
This ensures that users have the rights to list all folders at the base-level folder for Access Based Enumeration, but requires additional rights on all subfolders, hence the folders are "hidden" as expected.

The root-level permissions are shown below.  All subfolders are provided with full modify permissions for the respective security groups.

This works with 2008 R2, Windows 8.1 and Windows 10.

 
With Windows 7 clients they need additional permissions granted at the root level folder including:
  • Traverse folder / execute file
  • List folder / read data
  • Read Attributes
  • Read extended attributes
  • Read permissions
 
These extra permissions at the root level folder are only required if:
  • You run Access Based Enumeration
  • You have Windows 7 Clients on the network
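If you need to script these extra root-level rights, here is a hedged PowerShell sketch; the path and group name are placeholders, and the inheritance flag of 'None' applies the entry to this folder only:

# Placeholders: adjust the folder path and security group to suit your environment
$root   = 'E:\Shares\CompanyData'
$group  = 'DOMAIN\All Staff'
$rights = [System.Security.AccessControl.FileSystemRights]'ReadData, ExecuteFile, ReadAttributes, ReadExtendedAttributes, ReadPermissions'
$rule   = New-Object System.Security.AccessControl.FileSystemAccessRule($group, $rights, 'None', 'None', 'Allow')
$acl    = Get-Acl -Path $root
$acl.AddAccessRule($rule)
Set-Acl -Path $root -AclObject $acl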

Wednesday, June 29, 2016

IMAP4 Frontend with Exchange 2013

A customer of Avantgarde Technologies was having issues with IMAP4 on Exchange 2013.  In Exchange 2013 the IMAP role has changed and now has a "frontend" and a "backend" role.  This is shown by the two services below.


The IMAP4 backend role listens on 127.0.0.1:143

If we telnet to 127.0.0.1 on port 143 (or to "localhost"), we hit the IMAP backend service.


If we telnet to the server on any other IP address on TCP 143, we hit the IMAP frontend service.

This is where it gets confusing: by default, the IMAP4 frontend proxy component is disabled even if the service is started.

If you telnet to the server on its local IP address, as shown below:


You will simply get a blank Telnet screen which will automatically close the session a few seconds later.


This happens because the ImapProxy component is disabled by default in the Server Component State of the server.
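You can confirm this with Get-ServerComponentState ("Servername" is a placeholder, as in the enable command further below):

Get-ServerComponentState -Identity Servername -Component ImapProxy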


To enable it, use the following command.

Set-ServerComponentState -Identity Servername -State Active -Requester HealthAPI -Component ImapProxy

Need IT Support with Microsoft Exchange in Perth?  Click Here.

Tuesday, June 21, 2016

Modern Public Folders in Multi-Tenant Environments

Modern Public Folders are a feature introduced in Exchange 2013 and also available in Exchange 2016.  They allow companies to leverage Database Availability Groups and use log shipping, rather than SMTP, for replication, which provides numerous advantages that we will not focus on in this article.

If you are new to Modern Public Folders, here is a good article to get you started:

http://www.msexchange.org/articles-tutorials/exchange-server-2013/planning-architecture/exchange-2013-preview-public-folders-part1.html

In this article, we will focus on Modern Public Folders in a multi-tenant environment.  We will be going through how to achieve multi-tenancy with Microsoft Exchange 2013 / 2016.

It is important to note that there are numerous methods for setting up Public Folders in a multi-tenant environment.

In the example below we have:
  • One root Public Folder per tenant/company
  • One Public Folder Mailbox per tenant/company
  • One security group to represent all users from each tenant/company
  • Permissions locked down so each tenant/company can only see their own root-level Public Folder.
With Exchange 2013 and Exchange 2016, the first Public Folder mailbox created is always deployed as the Master Hierarchy Mailbox.  This mailbox is the only Public Folder mailbox with a writeable copy of the Public Folder hierarchy.  All additional Public Folder mailboxes created contain a read-only copy of the hierarchy.

When we are talking about Hierarchy, we are not talking about Public Folder content, only the folder structure which makes up the Public Folders.

The first thing we need to create is a master Public Folder Hierarchy mailbox.  In a multi-tenant environment I generally recommend that no content be placed in the master Public Folder Hierarchy mailbox and that it only be used to maintain the writeable copy of the Public Folder hierarchy.

As the first Public Folder mailbox created is always the Master Hierarchy, simply use the New-Mailbox command to create it.  I always recommend clearly naming this mailbox so it is easily identifiable as the Public Folder Master Hierarchy mailbox.  In this example it was given the name "MasterHierarchy".


Next create a Public Folder mailbox for each tenant/company.  The intent here is that all content for each company will be stored in its respective Public Folder mailbox.  In this example we will use my company, Avantgarde Technologies, as the example tenant.  I'm using the naming convention CompanyPF for each Public Folder mailbox.
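A sketch of both mailbox creation steps in the Exchange Management Shell; the names simply follow the conventions described above:

# The first Public Folder mailbox created always holds the writeable master hierarchy
New-Mailbox -PublicFolder -Name "MasterHierarchy"
# Then create one Public Folder mailbox per tenant
New-Mailbox -PublicFolder -Name "AvantgardeTechnologiesPF"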


Next create a new Public Folder at the root "\" of the hierarchy.  Make sure you specify the Public Folder mailbox you want to store the Public Folder in; otherwise Exchange will automatically pick any Public Folder mailbox, which could be the Master Hierarchy mailbox or another tenant's mailbox.

The name of the Public Folder specified below is the name the tenant will see in Outlook.
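For example (the folder and mailbox names match the tenant above; -Mailbox pins the content to that tenant's Public Folder mailbox):

# Create the tenant's root-level Public Folder in its own Public Folder mailbox
New-PublicFolder -Name "Avantgarde Technologies" -Path "\" -Mailbox "AvantgardeTechnologiesPF"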


By default, all root public folders can be seen by all tenants.  To ensure no tenants can see the "Avantgarde Technologies" root level Public Folder, remove the Default user Access Rights as shown in the screenshot below.  This will ensure no one can see this Public Folder.
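In the shell, the equivalent is removing the Default permission entry from the folder (a sketch):

# Strip the Default user's access so other tenants cannot see this root folder
Remove-PublicFolderClientPermission -Identity "\Avantgarde Technologies" -User Default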


Lastly, ensure you have a security group containing all users from the tenant/company.  Grant the group access to the root-level Public Folder - I recommend Owner or PublishingEditor rights; refer to the TechNet documentation for the other Public Folder permissions you can grant.
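A sketch of granting the tenant's security group rights on its root folder (group name as above; choose the access right that suits your policy):

# Grant the tenant's users rights on their own root-level Public Folder
Add-PublicFolderClientPermission -Identity "\Avantgarde Technologies" -User "Avantgarde Users" -AccessRights PublishingEditor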


Only users of the "Avantgarde Users" security group will be able to see the root Public Folder Avantgarde Technologies and all other Tenants in the environment will be hidden to the Avantgarde Technologies employees.


To add additional tenants to the environment, repeat the process documented above.  Make sure that:
  • All root level public folders have the default user permissions removed straight away to protect privacy of each tenant on your Exchange environment.
  • When creating the root level public folders in PowerShell you manually specify the correct Public Folder mailbox or Exchange will pick one at random.
  • Consider using provisioning scripts to remove user error and protect yourself against privacy breaches.
All sub-folders created in Outlook will automatically be stored in the parent's Public Folder mailbox.

One last thing I want to touch on is the "-DefaultPublicFolderMailbox" parameter of the Set-Mailbox command.  When first setting up Public Folders for multi-tenant Exchange, many people think about creating a Public Folder mailbox for each tenant and then setting "-DefaultPublicFolderMailbox" on all user mailboxes of each tenant.  Before writing this article I googled around to see if there was already a similar article, and saw people attempting this incorrect method of deployment.  The reason this approach will not work is, as mentioned earlier, that all Public Folder mailboxes hold a read-only copy of the entire Public Folder hierarchy (meaning all Public Folders in the Exchange organisation).  In other words, they do contain the Public Folder structure of the other tenants/companies in the environment.  We want to ensure tenants only see Public Folders related to their company by locking down permissions for privacy reasons.

I hope this article has been informative for you and I would like to thank you for reading.

Click here for IT Support in Perth with Microsoft Exchange

Friday, June 17, 2016

Setting Delivery Restrictions and Exchange Administration Center

Delivery Restrictions is a feature of Exchange Server which has been around since Exchange 2000 and allows companies to easily limit who can send to a distribution list or mailbox.  A very common use of Delivery Restrictions is to limit who can send to the "All Users" or "All Staff" distribution group.

In previous releases of Exchange such as 2010, 2007 and even 2003 - delivery restrictions were very easy to configure via the Graphic User Interface (GUI).

In later revisions of Exchange, Delivery Restrictions have been left out of the GUI and now must be configured by PowerShell.

This can be easily configured by using the "AcceptMessagesOnlyFrom" parameter on distribution groups and mailboxes for which you want to configure Delivery Restrictions.  The "AcceptMessagesOnlyFromDLMembers" parameter is used when you want to limit senders to the members of a particular group.


To set this, simply run:

Set-DistributionGroup "Avantgarde Users" -AcceptMessageOnlyFrom "Clint Boessen"

However, what if you want to configure multiple users or groups in the delivery restrictions of a mailbox or distribution group?  Simply re-running the command with -AcceptMessagesOnlyFrom overwrites the existing value rather than adding to it.

This is where it becomes more complicated...

Here is the syntax to add multiple groups or users to Delivery Restrictions in PowerShell:

Set-DistributionGroup -Identity "All Staff" -AcceptMessagesOnlyFromDLMembers "All Managers"

Set-DistributionGroup "All Staff" -AcceptMessagesOnlyFromDLMembers((Get-DistributionGroup "All Staff").AcceptMessagesOnlyFromDLMembers + "Executive Assistant")

Set-DistributionGroup "All Staff" -AcceptMessagesOnlyFromDLMembers((Get-DistributionGroup "All Staff").AcceptMessagesOnlyFromDLMembers + "Executive Group")


I hope this post has been helpful!

Need IT Services in Perth from Subject Matter Experts?

Exchange 2016 CU2 - Automatically Move Exchange Databases back to Preferred Server

In Exchange 2016 CU2, Microsoft has released a new feature which has been a request of mine since the release of Exchange Database Availability Groups (DAGs) in Exchange 2010.  I have built many Exchange 2010, 2013 and 2016 clustered environments for customers around Perth over the years; however, one problem I always see is that customers do not rebalance databases after installing Windows updates.

Many customers (even after training) do not put DAG nodes into maintenance mode; they simply install updates on a node and reboot it, causing a database failover to occur.  They then do the same on the remaining servers in the cluster, usually ending up with all databases residing on a single server.

In Exchange 2016 CU2 there is a shiny new feature which "automatically fails back" databases to their preferred server based on Activation Preference.  This is controlled by "PreferenceMoveFrequency".

After all your Exchange 2016 servers (or more technically the Primary Active Manager) have been upgraded to Exchange 2016 CU2, you will have a new DAG property called PreferenceMoveFrequency.

This parameter defines how frequently (measured as a time span) the Microsoft Exchange Replication service will rebalance the database copies by performing a lossless switchover that activates the copy with an ActivationPreference of 1.

How cool is that... automatic failback to preferred members.

Now when I build a new Exchange cluster for a customer, after I leave and close out the project I can be sure that the databases will automatically fail back to the preferred server after the customer installs Windows Updates on cluster nodes.  This is important as it keeps the load across the cluster balanced.

To set this feature, simply use the following PowerShell command:

Set-DatabaseAvailabilityGroup -Identity DAG01 -PreferenceMoveFrequency ([TimeSpan]::Zero)
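Once the DAG is on CU2, you can confirm the value currently configured (a sketch, using the same DAG name as above):

# Check the rebalance frequency currently configured on the DAG
(Get-DatabaseAvailabilityGroup -Identity DAG01).PreferenceMoveFrequency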

Need specialised Exchange consulting in Perth?  Contact Avantgarde IT Services on 08 9468 7575

Sunday, June 12, 2016

Reduce your IT Support costs by purchasing quality hardware upfront

During my career in the IT industry, I have seen many small business entities purchase cheap consumer-grade hardware from a local computer store in order to reduce IT costs.  This is bad practice and almost always leads to significantly higher IT support costs in the long term.

For more information, please see my article "The Importance of Quality Hardware to reduce IT Support Costs"

http://www.articlesbase.com/computers-articles/the-importance-of-quality-hardware-to-reduce-it-support-costs-7457694.html

Reduce your IT Support Costs in Perth by talking to us now.

Tuesday, June 7, 2016

.NET Framework 4.6.1 and Exchange 2013 / 2016

Currently, with Exchange 2013 CU12 and Exchange 2016 CU1, installing .NET Framework 4.6.1 is not supported, as it causes issues where databases unexpectedly dismount or fail over to alternative servers within a DAG cluster.  This issue is documented in the following KB article:

https://support.microsoft.com/en-us/kb/3095369

To ensure .NET Framework 4.6.1 does not install on your Exchange Servers, make sure you put the following registry key in place on your Exchange Servers:
  1. Back up the registry.
  2. Start Registry Editor. To do this, click Start, type regedit in the Start Search box, and then press Enter.
  3. Locate and click the following subkey:

    HKEY_LOCAL_MACHINE\Software\Microsoft\NET Framework Setup\NDP
  4. After you select this subkey, point to New on the Edit menu, and then click Key.
  5. Type WU, and then press Enter.
  6. Right-click WU, point to New, and then click DWORD Value.
  7. Type BlockNetFramework461, and then press Enter.
  8. Right-click BlockNetFramework461, and then click Modify.
  9. In the Value data box, type 1, and then click OK.
  10. On the File menu, click Exit to exit Registry Editor.
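If you prefer to script the block across your servers, the same key can be created with PowerShell (a sketch of the steps above):

# Create the WU key and the BlockNetFramework461 DWORD described in the steps above
New-Item -Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\WU' -Force | Out-Null
New-ItemProperty -Path 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP\WU' -Name 'BlockNetFramework461' -PropertyType DWord -Value 1 -Force | Out-Null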
What if I have already installed .NET Framework 4.6.1 on my Exchange Servers?  If so, use the following procedure:
  1. If the server has already automatically updated to 4.6.1 and has not rebooted yet, do so now to allow the installation to complete
  2. Stop all running services related to Exchange.  You can run the following cmdlet from Exchange Management Shell to accomplish this:  (Test-ServiceHealth).ServicesRunning | %{Stop-Service $_ -Force}
  3. Go to add/remove programs, select view installed updates, and find the entry for KB3102467.  Uninstall the update.  Reboot when prompted.
  4. Check the version of the .NET Framework and verify that it is showing 4.5.2.  If it shows a version prior to 4.5.2 go to windows update, check for updates, and install .NET 4.5.2 via the KB2934520 update.  Do NOT select 4.6.1/KB3102467.  Reboot when prompted.  If it shows 4.5.2 proceed to step 5.
  5. Stop services using the command from step 2.  Run a repair of .NET 4.5.2 by downloading the offline installer, running setup, and choosing the repair option.  Reboot when setup is complete.
  6. Apply the February security updates for .NET 4.5.2 by going to Windows update, checking for updates, and installing KB3122654 and KB3127226.  Do NOT select KB3102467.  Reboot after installation.
  7. After reboot verify that the .NET Framework version is 4.5.2 and that security updates KB3122654 and KB3127226 are installed.
  8. Follow the steps here to block future automatic installations of .NET 4.6.1.
Need IT Support Perth?  Contact Avantgarde Technologies on 08 9468 7575

Monday, May 2, 2016

Windows 7 Computers Rebooting During Day for Updates

A customer was having an issue where Windows 7 computers randomly rebooted during the day for Windows updates without prompting users with the option to postpone the reboot.  This resulted in frustrated users, with computers rebooting in the middle of important emails, word processing tasks and so on.

We checked the Group Policy Windows Update settings; everything was configured correctly, however computers still rebooted.

After troubleshooting further, we found that a deadline in WSUS was set to "Same day approval at 5:00AM".  Because the deadline was set to 5:00AM, as soon as computers received the update after booting they had already missed the deadline, and the update installed immediately without prompting users to postpone the reboot.


We removed the setting for same day approval and this resolved the problem.

Avantgarde Technologies, a leading IT Support Perth based company.

Wednesday, April 13, 2016

Windows 7 SP1 hanging on Checking for Updates

I was trying to perform the simple task of installing Windows 7 x64 Enterprise with Service Pack 1 on some virtual machines in my lab to test a product.  Windows 7 SP1 ships with Internet Explorer 8 and is very out of date in most aspects for application testing.

After building a few Windows 7 VMs from my ISO, all of them sat there hanging on "Checking for Updates" for hours.

Ugh... something I didn't have time for, as I was trying to test something urgently for a customer.

After installing the latest Windows Update client from https://www.microsoft.com/en-us/download/details.aspx?id=49540 on each freshly built Windows 7 workstation, they then detected updates in 4 minutes and I was able to start patching.

Wednesday, March 16, 2016

Microsoft Word Performance Issues - KB3114717

A customer contacted me mid-February complaining of significant performance issues with Microsoft Word 2013 SP1 (32-bit).  When users copied and pasted text, scrolled up and down a document or changed formatting, Microsoft Word continuously hung and entered a "not responding" state.  In addition, users sometimes experienced up to a 60-second delay between typing and the characters appearing on the screen.

We did significant troubleshooting on the issue, including:
  • Disabled anti-virus products
  • Ran a full malware scan using multiple AV engines
  • Disabled all non-Microsoft services and applications from system startup
  • Disabled all Microsoft Word add-ins
  • Disabled graphics acceleration in Microsoft Word
None of these troubleshooting steps resolved the issue.

After talking to Microsoft, it turned out that Microsoft had released a bad Windows update, KB3114717.  This update was released on the 9th of February and caused numerous performance issues with Microsoft Office.  Removing this update from all workstations resolved the issue.

Tuesday, March 15, 2016

Searching RBL Agent Logs on Microsoft Exchange

In this blog post I will show you a quick way to search through large amounts of Real-time Block List (RBL) agent logs on an Exchange server.  This article assumes you have RBL providers in place on the Exchange server, which can be enabled as per the following article:

http://clintboessen.blogspot.com.au/2014/05/rbl-providers-and-exchange-2013.html

Once RBL filtering is turned on, you will have a bunch of log files under the following directory (provided Exchange was installed to the default C:\ directory):

C:\Program Files\Microsoft\Exchange Server\V15\TransportRoles\Logs\FrontEnd\AgentLog


We want to track down which RBL provider blocked an email from a spammer with the from address of bqmppf@apremiertravel.com, but first we need to identify which log file contains the entry.  To do this, open a command prompt and navigate to the FrontEnd\AgentLog directory.

Run the following command:

find /c "bqmppf@apremiertravel.com" *.log | find ":" | find /v ": 0"

After running this command we can see that it found one entry for bqmppf@apremiertravel.com in the file AGENTLOG20160313-1.LOG.
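An equivalent search in PowerShell, if you prefer (a sketch run from the same AgentLog directory):

# Count matches per log file for the sender address
Select-String -Path '*.log' -Pattern 'bqmppf@apremiertravel.com' -SimpleMatch | Group-Object -Property Path | Select-Object Count, Name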


Open the log and search for the email in question.


We can see this email was blocked by the zen.spamhaus.org RBL provider.


Hope this post was helpful. 

Wednesday, March 9, 2016

Remove Run menu from Start Menu - Policy Bug

Today I stumbled across a bug with Windows 8.1 / 2012 R2 with the Group Policy setting:

User Configuration -> Policies -> Administrative Templates -> Start Menu and Taskbar -> Remove Run menu from Start Menu

I had this policy applied to a Remote Desktop Session Host through Loopback in a user session lockdown policy.

When this policy was applied, I could not access any network share on my 2012 R2 file servers and received the following error message:

Accessing the resource \\fileserver\share has been disallowed.


I received the same error when attempting to access the file server shares through a DFS namespace.

After removing this policy setting from my RD Session Hosts, the issue was resolved.

Monday, February 15, 2016

Windows cannot move object because Access is denied

I came across an interesting error today when attempting to move a security group to a new location in Active Directory.

Windows cannot move object because Access is denied


This issue was caused by "Protect object from accidental deletion"


By default this feature is only enabled on organisational units; however, in this environment some groups and user objects had this feature enabled as well.  I believe this was manually enabled by a sysadmin in the past.
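You can check and clear the flag with the Active Directory module (a sketch - the group name is a placeholder):

# Check whether the object is protected, then clear the flag so it can be moved
Get-ADGroup -Identity 'Example Group' -Properties ProtectedFromAccidentalDeletion
Set-ADObject -Identity (Get-ADGroup -Identity 'Example Group') -ProtectedFromAccidentalDeletion $false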

Tuesday, January 26, 2016

Why Is My Domain Password Policy Not Applying?

Back in 2009 I published a very popular article "The Low-Down on Password Policies" which has been viewed by thousands of IT Professionals and referenced by application vendors in online documentation such as SysOp Tools Software.

http://clintboessen.blogspot.com.au/2009/12/low-down-on-password-policies.html

In this post we are going to talk about password policies further and cover off what appears to be a bug but is actually "by design".

My customer had a handful of domain controllers: a single Server 2008 R2 domain controller and three Server 2012 R2 domain controllers.  The PDC Emulator role resided on the 2008 R2 server.

The Server 2008 R2 domain controller was applying the password policy correctly; however, the 2012 R2 domain controllers were not (or so I thought).

Running rsop.msc on the 2008 R2 domain controller (the PDC) shows the password policy being applied from the Default Domain Policy.

 
On the 2012 R2 domain controllers, the resultant set of policy displayed no password policy being applied.


The same results were seen when running "gpresult /v" on the 2008 R2 and 2012 R2 domain controllers.

"gpresult /v" on 2008 R2:

 
"gpresult /v" on 2012 R2:
 
 
The account policies above are the domain Kerberos policy, not the password policy.
 
The password policy simply did not appear to apply to the 2012 R2 servers.  After further investigation in my test lab, I found that only the domain controller holding the PDC emulator role displays the password policy when performing a Resultant Set of Policy.
 
This means that no domain controller in a domain will display the password policy in a resultant set of policy apart from the PDC emulator.
 
How do I check if the password policy is applying correctly on my domain controllers?
 
There are two commands which check the password policy:
  • net accounts (checks local password policies on a server)
  • net accounts /domain (checks the domain password policy on a server)
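You can also read the effective domain password policy directly with the Active Directory module, as shown below.

# Requires the ActiveDirectory module; returns the domain password policy regardless of which DC you run it on
Get-ADDefaultDomainPasswordPolicy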
 
 
 
Domain Policy always wins over a local policy.
 
A computer role of "BACKUP" means the server is not the Primary Domain Controller (PDC).
 
So in summary... if you see the password policy not showing as applied on a domain controller when you check Group Policy, this is normal behaviour and by design, unless the server is the PDC emulator.

Thursday, January 14, 2016

Exchange 2013 - Could not find any available Global Catalog in forest

I was contracted to redesign a company's AD Sites and Services topology - it was never set up correctly and, despite being a 500-user organisation with 13 branch sites, they were still running off the "Default-First-Site-Name" site which Active Directory generates automatically for a new domain.

As part of the new design, I renamed Default-First-Site-Name to a name which reflects their main datacentres, then went through the process of creating the additional site objects, site links and subnet objects.

After renaming Default-First-Site-Name I also updated the AutodiscoverSiteScope on the Client Access Servers in the Exchange 2013 cluster to reflect the new site name (as required for correct site SCP lookups).

After approximately 30 minutes, the IT Department complained they were no longer able to work on Exchange 2013 servers - all commands in the Exchange Management Shell failed with:

Could not find any available Global Catalog in forest


Oh dear!

After a quick investigation, the issue was isolated to the Exchange 2013 management tools; Outlook clients were not affected by the site object rename.

In order to force Exchange Server to redetect Active Directory Sites and Services topology, a restart of the "Microsoft Exchange Active Directory Topology" service is required on all Exchange servers.  Unfortunately almost every Exchange Service is dependent on this service!
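If a reboot window is hard to get, the alternative is restarting the topology service on each server - a sketch; note that the dependent Exchange services are stopped as well and may need to be started again afterwards, so this still interrupts service:

# Restart the AD Topology service; -Force is required because other Exchange services depend on it
Restart-Service -Name MSExchangeADTopology -Force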


As a result, we needed to wait until after business hours, when we rebooted every Exchange 2013 server in the cluster.

This resolved the problem.

Tuesday, January 5, 2016

Windows DNS Forwarder Population

A customer contacted me today asking why, when they promoted new domain controllers, old DNS forwarders were automatically configured on each server.

When DCPROMO installs the DNS Server service it also activates, by default, the auto-configuration of the DNS Server service. This auto-configuration process configures the forwarders list, the root-hints and the resolver, among other things, like creating the zones if required.

During the automatic configuration of the Forwarders, the following process occurs:
  1. Try to copy the forwarders list from a peer DNS server. A peer DNS server is any DNS server that has a copy of this DC domain’s zone. To get the peer server list the process queries for the NS list of the domain’s zone and then contacts each server returned on the list until it finds one from which it can copy the forwarders list. Once the process finds a peer from which it can copy the forwarders list it skips the next step. If no peer is found (because the NS query returned empty), none of them could be contacted, or none of them has forwarders configured, then move to step #2.
  2. If the previous step was not able to provide a forwarders list, then use as forwarders all the DNS Servers that are currently listed in the resolver for all the adapters, without any specific order.
  3. If none of the previous two steps can provide a forwarders list, then the new DNS server will not have forwarders configured.
If you have different DNS forwarders configured for various sites on your network, the new DNS server will automatically configure itself with one of them at random, so make sure you check the forwarders after promoting a new server!
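The DnsServer module makes it quick to review and correct the forwarders after promotion (a sketch - the forwarder addresses are placeholders):

# Review what the auto-configuration picked up
Get-DnsServerForwarder
# Replace the forwarder list with the correct servers for this site (placeholder addresses)
Set-DnsServerForwarder -IPAddress 10.0.0.10, 10.0.0.11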

Wednesday, December 30, 2015

Data Deduplication Enhancement in Windows Server 2012 R2

Windows Server 2012 R2 has a new feature which I can see being very handy in the real world, especially in VDI environments which have lots of Virtual Hard Disk (VHD) files of a similar nature.

In Server 2012 R2, Data Deduplication is now supported on VHD data stores.  This was not supported with the initial release of Server 2012.

Data Deduplication is also supported on Cluster Shared Volumes (CSVs) with file servers configured as Scale-Out File Servers for high availability.

For companies that run a Microsoft-based VDI pool with multiple hosts, Data Deduplication can reduce the storage requirements of the VDI environment by up to 90%.
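Enabling it on a VDI data volume is straightforward with the deduplication cmdlets - a sketch, assuming the Data Deduplication feature is installed and the VHDs live on E:; the HyperV usage type is the 2012 R2 option intended for VDI stores:

# Enable deduplication on the volume holding the VDI virtual hard disks
Enable-DedupVolume -Volume 'E:' -UsageType HyperV
# Check savings once optimisation jobs have run
Get-DedupStatus -Volume 'E:'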

Sunday, December 27, 2015

The Dirty Little Secret about P2V Migration with System Center Virtual Machine Manager

Physical-to-virtual (P2V) migration has been around for a long time, ever since companies started making the transition to virtualisation as a standard back in 2008 with the release of VMware ESX 3.x, quickly followed by 4.x and vSphere.

There are many tools on the market for physical-to-virtual migration of machines, with the most common being "VMware vCenter Converter: P2V Virtual Machine Converter", "Microsoft Virtual Machine Converter 3.0" and the handy little Sysinternals tool "Disk2vhd".

The brand new shiny System Center Virtual Machine Manager (VMM) 2012 R2 also supports physical-to-virtual migration of machines, providing an easy transition to a virtual platform for physical servers.

However if you look at the fine print in the "prerequisites" you will see:

"Cannot have any volumes larger than 2040 GB"

https://technet.microsoft.com/en-au/library/hh427293.aspx

What the @$%@!!!

Very disappointing, seeing as this is the latest release of VMM and this limitation is still around... it would trip up many companies who are still looking to virtualise that legacy file server or mail server sitting around on their network!

Saturday, December 26, 2015

Event Viewer Tasks

I just want to touch on a feature in Windows Server 2008 R2 - 2012 which I believe is very cool.  Windows Event Viewer has the ability to launch tasks automatically when a particular error occurs.  This is great for companies that do not have System Center (or similar) tools in the environment to perform remediation tasks when problems occur on server infrastructure.

The button in Event Viewer is called "Attach Task to This Event"


Clicking it we can see that it actually relies on the Task Scheduler service to monitor the event logs.




Select "Start a Program", then associate it with cscript.exe or powershell.exe to launch a script that performs remediation tasks whenever the error event recurs.  You can also instruct your script to notify administrators via email, which is very easy in PowerShell using the Send-MailMessage cmdlet.
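As a rough example, a remediation script attached to such a task might look like this; the service name, SMTP server and email addresses are all placeholders:

# remediate.ps1 - restart a problem service and email the administrators (all names below are placeholders)
Restart-Service -Name 'SomeAppService' -ErrorAction SilentlyContinue
Send-MailMessage -SmtpServer 'mail.domain.local' -From 'alerts@domain.local' -To 'admin@domain.local' `
    -Subject "Event remediation ran on $env:COMPUTERNAME" `
    -Body 'Task Scheduler restarted SomeAppService in response to the monitored event.'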

Monday, December 21, 2015

Common DCDIAG Error with NCSecDesc

When running dcdiag.exe on Server 2008 or 2008 R2 domain controllers, it is very common to see the following error:

Starting test: NCSecDesc
   Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context:
   DC=DomainDnsZones,DC=domain,DC=local
   Error NT AUTHORITY\ENTERPRISE DOMAIN CONTROLLERS doesn't have Replicating Directory Changes In Filtered Set access rights for the naming context:
   DC=ForestDnsZones,DC=domain,DC=local
   ......................... DC1 failed test NCSecDesc



This occurs in Active Directory domains which have not been prepared for Read-Only Domain Controllers with "adprep /rodcprep".

Server 2012 / 2012 R2 domain controllers do not receive this error for NCSecDesc.

Also, it is recommended that you do not prepare your domain for RODCs unless you actually intend to deploy Read-Only Domain Controllers, for example where specific branch locations require them for physical security reasons.
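If you do run adprep /rodcprep, you can re-run just this test to confirm the errors clear:

# Re-run only the NCSecDesc test
dcdiag /test:NCSecDesc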

Sunday, December 20, 2015

MSExchange ADAccess EventID 4027

A customer of mine contacted me today regarding an Event ID 4027 from MSExchange ADAccess that they wanted resolved.  This error was being generated on all Exchange 2013 servers in their cluster.


After looking into the issue, I found that legacy cross-forest configuration remained in the configuration partition from a previous cross-forest Exchange migration.  This is located under Configuration --> Services --> Microsoft Exchange Autodiscover.
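A quick way to list what is sitting in that container before deleting anything (a sketch using the Active Directory module):

# List the Autodiscover pointers in the configuration partition so legacy forest entries can be identified
$config = (Get-ADRootDSE).configurationNamingContext
Get-ADObject -SearchBase "CN=Microsoft Exchange Autodiscover,CN=Services,$config" -Filter * | Select-Object Name, ObjectClass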


Simply remove the additional references to the legacy forest (the ones highlighted in yellow above).

Do not remove "Microsoft Exchange Online".  This is a default entry and is used when you create a Hybrid deployment with Office 365.

Monday, December 14, 2015

Outlook 2010 Starting in Safe Mode?

Microsoft recently released a bad Windows update (KB3114409) which caused Outlook 2010 to start loading in safe mode for multiple clients of mine.  This update has since been recalled by Microsoft due to the number of issues it caused.

In the event your company installed it across multiple workstations, you can quietly uninstall it across all computers.

One of the easiest ways to do this is to create a Startup Script in Group Policy that runs a batch file containing the following command:


C:\Windows\System32\wusa.exe /uninstall /kb:3114409 /quiet /norestart


Make sure the batch file is launched via a Startup Script and not a logon script.  Logon scripts in Group Policy run as the user, so making system-wide changes would require users to have local administrator rights (which is not best practice).  Startup scripts run under the SYSTEM account with administrative rights.

Sunday, November 29, 2015

Active Directory Topology Diagrammer Error

I'm running Windows 10 with Visio 2016 (both x64 builds).  I needed to draw out the topology of a customer's Active Directory domain; however, when attempting to draw the topology using the free Microsoft tool, I received the following error message:

Could not open the Visio Stencil !!!
ADTD can not continue with the drawing.
The current drawing will be canceled !

 
To resolve this issue, in Visio click File --> Options.
 
Open the "Trust Center" then click "Trust Center Settings".

Untick all Open and Save boxes as shown below:


After making this change the topology should generate without issues: