
Cohesive Logic LLC

Moving to Exchange Server 2013

What You Need To Know Before You Migrate


Introduction

At first glance, many people think that Exchange Server 2013 isn’t nearly as remarkable as the previous two versions, both of which were revolutionary in capabilities and user experience:

 Exchange Server 2007 was a radical product: a shift to 64-bit architecture built on top of Windows PowerShell. It offered five discrete server roles, including Unified Messaging (UM) and Edge Transport (ET). It introduced Autodiscover and Exchange Web Services for better client integration and configuration. It also boasted native continuous replication for high availability (HA) and disaster recovery (DR) as an alternative to shared storage clusters.

 Exchange Server 2010 took continuous replication to its logical conclusion by integrating HA and DR in a single Database Availability Group (DAG) offering. Traditional clustering, WebDAV, and direct client access to mailbox servers were all eliminated. The RPC Client Access service was introduced, and load balancing became a far more important consideration for Exchange 2010 deployments. Finally, online mailbox moves became a reality.

It is easy to assume that Exchange Server 2013 is just an evolutionary extension of Exchange 2010. Exchange now has only two server roles (Mailbox and Client Access), and the same transport and UM functionality now lives in the Mailbox role. Outlook Web App (OWA) and the Exchange Control Panel (ECP) are redesigned, and the web-based Exchange Admin Center (EAC) console replaces the MMC-based management console. Exchange 2013 boasts improved integration with SharePoint Server 2013 and Lync Server 2013, including changes to public folders.

However, I think that the most radical changes in Exchange Server 2013 are, like the bulk of an iceberg, hidden out of sight. Although they are not directly visible, they will improve operations and user experience if you know about them during deployment. Come with me to explore these hidden features.

Capacity vs. Performance: Mailbox Changes

At its heart, Exchange is a messaging system. Messaging systems have four major components, which are all included in the Exchange Server 2013 Mailbox role:

The message storage component: The Information Store and related services.

The message transport component: The three transport services and related services.

The client access component: The Autodiscover, Exchange Web Services (EWS), Remote Procedure Call (RPC), and Offline Address Book (OAB) virtual directories in Internet Information Services (IIS); the Outlook Anywhere (OA) protocol; and the POP3 and IMAP services.

The client component: The OWA and ECP virtual directories in IIS, although external clients such as Outlook and mobile clients are also part of the system.

While the transport, client access, and client components have been folded into the Mailbox role, the storage component is still the foundation of that role – and of successfully planning, deploying, and operating an Exchange Server 2013 organization. Understanding the changes to the Information Store is critical to understanding how Exchange Server 2013 differs from its predecessors.


Meet the New Information Store

In Exchange Server 2007, Microsoft began the process of rewriting Exchange in managed code (code that runs in the .NET Framework). Even through Exchange Server 2010, though, the Information Store remained a monolithic Windows service written in native C++. Regardless of how many databases or copies were present and mounted, one single service handled all I/O and memory access. As a result, the Information Store could become a bottleneck. Worse, bugs in the code or I/O issues affecting a single database copy could affect all of the others on the same server.

For Exchange Server 2013, Microsoft rewrote the Information Store in managed code, re-implementing the Extensible Storage Engine (ESE) that is the heart of the Information Store. They also changed the search mechanism used to perform content indexing of the mailbox databases; the SQL-based system has been replaced with the Microsoft Search Foundation, which is the same content index used in SharePoint Server. During this time, they took the opportunity to re-architect the store so that each mounted database copy now runs in a separate process.

This rewrite of the Information Store has several consequences:

Faster content indexing and searches. Message data, metadata, and most common attachment formats are indexed during the transport pipeline as messages are created, sent, or delivered to mailboxes – as part of the out-of-the-box experience. You no longer have to manually enable content indexing or ensure that the correct parameters are set. If you need to index additional formats, you can easily add them using the iFilter format used in Exchange Server 2007 and Exchange Server 2010.

 Fault isolation. Despite the great amount of care and testing, there are occasional bugs that slip through. Or, third-party utilities and drivers somehow corrupt Exchange messages and databases. In previous versions, these problems could bring down the Information Store for the entire server. In Exchange Server 2013, only the database copies affected will be impacted. Transient faults that can be dealt with by restarting the Information Store process can now be automatically handled through the Exchange Server 2013 Managed Availability functionality.

 Higher overhead and greater scalability. Many people have asked us why Exchange Server 2013 has higher baseline hardware requirements, especially for smaller organizations. This is a direct consequence of the Information Store rewrite from a monolithic process to a process per database. Starting more processes obviously requires more initial resource availability. Additionally, managed code does require more resources than unmanaged code; in return, certain classes of bugs are less likely to happen.

Add all this up and you get higher initial RAM & CPU requirements, but with that you buy a lot: better reliability, better search performance, and better scalability for Exchange Server 2013 servers.

I/O Improvements

In the Exchange Server 2007 timeframe, we reached an interesting tipping point for Exchange storage. Up until then, almost all Exchange storage configurations needed to use RAID not only to protect data from disk loss, but just to provide enough aggregate IOPS for mailbox database reads and writes. Given the disks of the time, RAID volumes would often have spare capacity and be performance bound; databases would run out of IOPS before they ran out of disk space. With the storage changes in Exchange Server 2007, many properly sized systems now found themselves being capacity bound: running disks out of space while they still had IOPS to spare. In Exchange Server 2010, most deployments were capacity bound, even when deployed on SATA disks.

Exchange Server 2013 continues this trend, as shown in Figure 1. Given modern SATA disk capacities, you can now deploy several large database copies on a single multi-terabyte disk. More than ever, you no longer need to deploy Exchange Server 2013 on RAID storage solutions. To keep this level of IOPS reduction, though, you must provide enough CPU and RAM to your servers; Exchange aggressively uses processor, memory, and sophisticated storage algorithms to provide advanced caching, consolidating multiple I/O reads and writes into a smaller number of contiguous operations.

Figure 1: Comparing per-mailbox IOPS across Exchange versions
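The coalescing idea can be sketched in a few lines of Python – a simplified illustration of the general technique, not Exchange's actual ESE algorithm: merge nearby dirty database pages into contiguous runs so that many small random writes become a few large sequential operations.

```python
def coalesce_writes(dirty_pages, max_gap=2):
    """Merge dirty page numbers into contiguous write runs.

    max_gap lets small holes between pages be absorbed into one
    sequential write, trading a little extra data written for far
    fewer I/O operations. Illustrative only; not the real ESE code.
    """
    runs = []
    for page in sorted(dirty_pages):
        if runs and page - runs[-1][1] <= max_gap + 1:
            runs[-1][1] = max(runs[-1][1], page)  # extend current run
        else:
            runs.append([page, page])             # start a new run
    return [tuple(run) for run in runs]

# Nine scattered page writes collapse into three sequential runs:
print(coalesce_writes([3, 4, 5, 9, 10, 11, 40, 41, 43]))
```

Turning scattered writes into a handful of sequential runs is exactly the kind of trade that lets large, slow SATA disks keep up with mailbox workloads.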

The Exchange Server 2013 sizing process is designed to ensure that you can get the highest level of IOPS reduction possible from your system. For best results, you should follow four important guidelines:

Follow the sizing process before purchasing hardware or storage. One of the most common mistakes I see is when companies buy hardware for Exchange upgrades before running through the sizing process. This often causes you to work within artificial limitations. Plan before purchase – and don’t be afraid to find knowledgeable help if you find the process confusing. Paying for consulting help up front will often save you money, both in hardware costs and in reduced operational issues through the lifetime of your Exchange deployment.

Thoroughly examine your hardware and storage options. While you can of course deploy Exchange Server 2013 using high-end storage systems, in most cases you don’t need to. Storage Area Network (SAN) systems usually are not tuned for Exchange in particular, and the best practice guidelines for Exchange on SAN deployment typically conflict with optimal use of the SAN for general applications. With the Exchange DAG providing multiple copies of your data, the right servers with Direct Attached Storage (DAS) can be more cost-effective both in capital and operational expense than other options, including virtualization.

Don’t designate mailbox databases by size. Another common mistake I see is a holdover from Exchange Server 2003 and earlier: grouping mailboxes in databases based on quota size. All this practice does in Exchange Server 2013 is create IOPS hotspots. IOPS are generated by the number of messages sent and received by a mailbox per day, not by the size of those messages; Exchange works best when database sizes are steady and IOPS are evenly spread between databases. Think of filling a series of jars with various-sized rocks: you can only fit a few bigger rocks in a jar, and that leaves a lot of gaps. Place the large mailboxes evenly between your databases, then fill in the medium mailboxes, then the small mailboxes. Fill the gaps between the big rocks with the gravel and pebbles.

Deploy large mailboxes. I do not see any sense, regardless of which storage system you use, in deploying small mailboxes for Exchange Server 2013. If you’re trying to control the amount of data synchronized to client devices, there are other (and better) mechanisms to use. Exchange Server 2013 is designed to support mailboxes up to 100GB in size. Of course, the cost per GB per mailbox goes up on high-end SANs…yet another reason to look at DAS solutions.
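The rocks-in-jars guideline amounts to a greedy balancing pass: sort mailboxes from largest to smallest and always place the next one into the currently lightest database. Here is a hedged sketch in Python – the mailbox sizes and database count are invented for illustration, and real placement should also weigh message traffic, not just size:

```python
import heapq

def distribute_mailboxes(mailbox_sizes_gb, database_count):
    """Greedy 'largest rock first' placement: each mailbox goes into
    whichever database currently holds the least data, so large
    mailboxes spread out evenly and small ones fill the gaps."""
    # heap of (current_total_gb, database_index, assigned mailbox sizes)
    databases = [(0.0, i, []) for i in range(database_count)]
    heapq.heapify(databases)
    for size in sorted(mailbox_sizes_gb, reverse=True):
        total, idx, members = heapq.heappop(databases)  # lightest DB
        members.append(size)
        heapq.heappush(databases, (total + size, idx, members))
    return sorted(databases, key=lambda db: db[1])

for total, idx, members in distribute_mailboxes(
        [25, 25, 10, 10, 10, 5, 5, 2, 2, 1], database_count=3):
    print(f"DB{idx}: {total} GB {members}")
```

With this hypothetical mix, the three databases end up within one gigabyte of each other, which is the steady, even spread the guideline is after.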

Database Format Changes

As part of the changes to ESE and the performance increases, Microsoft made changes to the database format. How this affects you (and your users) is that when moving a mailbox from Exchange Server 2010 to Exchange Server 2013, even though the number of messages in the mailbox is the same and the amount of data is the same, the space used in the Exchange Server 2013 database will be higher than on previous versions of Exchange.

This seeming increase in mailbox disk usage comes from better accounting inside the mailbox database files. Exchange Server 2013 keeps better track of fields inside the database file and ensures that metadata that was not previously attributed to the mailbox is now counted against it. The exact amount of growth depends on the number and type of items in the mailbox. If your users don’t have quotas, this will be merely a cosmetic reporting difference. However, if they do have quotas set, you may need to adjust them before moving mailboxes to Exchange Server 2013.
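If quotas are set, the adjustment is simple arithmetic. The sketch below assumes a 30 percent growth factor purely for illustration; measure actual growth by moving a few representative test mailboxes, since the real figure depends on item counts and types:

```python
def adjusted_quota_mb(current_quota_mb, growth_factor=0.30, round_to_mb=256):
    """Pad a legacy quota for the larger on-disk footprint the same
    mailbox will report after moving to an Exchange 2013 database.

    growth_factor is a placeholder assumption; derive your own from
    pilot moves, since actual growth varies by item mix.
    """
    padded = current_quota_mb * (1 + growth_factor)
    # round up to a tidy boundary so the new quotas stay easy to read
    return int(-(-padded // round_to_mb) * round_to_mb)

print(adjusted_quota_mb(2048))  # a 2 GB quota, padded and rounded up
```

Applying the headroom before the move avoids users hitting prohibit-send limits the moment their mailbox lands on the new server.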

Reducing Infrastructure: Client Access

Previously, I said that all four major components (transport, store, client access, and client) are present in the Exchange Server 2013 Mailbox role. From a purely technical perspective, you can deploy just the Mailbox role – try it in a lab and you’ll find that the Client Access role is not required for normal operation! (Don’t try this in production.)

Whatever the technical possibilities, the Client Access role is required in every site that hosts a Mailbox role. The good news is that having it means that the infrastructure required by your Exchange deployment gets a lot simpler when compared to previous versions of Exchange. Let us explore exactly what this role is and what it does for you.

The New Stateless Client Access Role

Prior to Exchange Server 2007, the Exchange server hosting the user’s mailbox was responsible for talking to the client and rendering the mailbox data for the appropriate protocols. Front-end Exchange servers were stateful proxies to the mailbox server that translated the incoming client protocol to Remote Procedure Calls (RPC). Once Exchange Server 2007 came out, however, incoming connections would be handled by code on the Client Access role, which would then use RPCs to pull the user’s information from the appropriate Mailbox server.


While this approach was scalable, it created other complications. RPC communications are highly sensitive to network latency. RPCs are also unique per program, making them difficult to trace and diagnose in the event of a problem. This configuration requires the client, the Client Access server, and the Mailbox server to track the state of the session; if one of them gets out of sync with the others, the session must be re-initiated.

For Exchange Server 2013, Microsoft realized that sometimes in order to go forward, you have to take a step back. Once again, the Mailbox server is responsible for rendering all mailbox data for the appropriate client protocols and the Client Access server is a proxy…with several twists:

No RPCs, just native protocols. MAPI RPCs over TCP are latency sensitive, but they can be wrapped in HTTP, a standard protocol that is easily monitored and much more forgiving of latent, busy networks. RPC over HTTP, also known as Outlook Anywhere, allows MAPI-speaking clients to continue to connect to Exchange Server 2013, even though it no longer directly supports incoming RPC connections or uses RPCs to talk to the Mailbox role. Instead, the Client Access role proxies the native protocol on to the appropriate Mailbox server.

No client state. Rendering data for the client would require the Client Access role to keep track of the client state. As a true proxy, the Client Access role no longer keeps track of the state of the session. This is possible because it is no longer transmitting or receiving RPCs; incoming connections are HTTP, POP3, IMAP, or SMTP; if the session is interrupted, the client can simply open a new session. It doesn’t matter if the new session is to the same Client Access server or a new one; the connection is simply passed on to the Mailbox server that hosts the active copy of the user’s mailbox database, permitting state to be re-established between the client and the Mailbox server.

Connection pooling. Not having to keep state makes the new Client Access use fewer CPU, RAM, and disk resources. By pooling and reusing network connections to the Mailbox servers, fewer network resources are used and performance improves.

The Client Access server doesn’t just pass incoming connections through blindly; it has to know which Mailbox server the user’s mailbox is active on. To do this, the Client Access role authenticates incoming connections and performs the appropriate lookups to find the matching mailbox server. Sometimes, this may even include redirecting the client to a different Client Access server.
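Conceptually, the Client Access role's job reduces to three steps: authenticate, look up the Mailbox server holding the active copy of the user's database, and proxy the connection onward. The toy Python sketch below models that flow; the directory data and function names are invented for illustration and are not real Exchange APIs:

```python
# Toy model of stateless Client Access routing. The lookup tables
# stand in for Active Directory and DAG state; all names are invented.
MAILBOX_DIRECTORY = {"alice": "DB01", "bob": "DB02"}   # user -> database
ACTIVE_COPY = {"DB01": "MBX-A", "DB02": "MBX-B"}       # database -> server

def route_connection(user, authenticated):
    """Any CAS can answer: no session state is kept here, so a retry
    against a different CAS yields the same routing decision."""
    if not authenticated:
        raise PermissionError("pre-authentication failed")
    database = MAILBOX_DIRECTORY[user]
    return ACTIVE_COPY[database]  # proxy target for the native protocol

print(route_connection("alice", authenticated=True))
```

Because the decision depends only on directory data, not on any per-session memory, the same answer comes back no matter which Client Access server the load balancer happened to pick.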

Outlook Anywhere, Everywhere

No Exchange Server 2013 mailbox uses MAPI RPCs over TCP; the Client Access role no longer supports it. It’s easy to remember to change Outlook client profiles, but they may not be the only clients using direct RPC for mailbox access. All clients – including third-party clients such as BlackBerry Enterprise Server using the MAPI CDO library – must be configured to use Outlook Anywhere before the corresponding mailboxes are moved to an Exchange Server 2013 Mailbox role. You may have more of these applications in your organization than you know or remember, so take time to thoroughly audit your systems. Otherwise, your users and helpdesk will find them for you.

Load Balancing and SSL Offload

Deploying external client access on Exchange Server 2007 and Exchange Server 2010 could be complicated, depending on which protocols you were publishing. Most organizations suddenly found themselves needing to address load balancing. Smaller organizations tried (and sometimes succeeded) with Windows Network Load Balancing (WNLB), but others deployed hardware load balancers. Hardware load balancers offer many capabilities, including the ability to offload SSL transactions from the Client Access servers and (depending on the manufacturer and model) a variety of options for balancing incoming protocol sessions from clients.

Load balancing legacy Client Access servers adds complexity. Multiple connections coming from the same client need to be sent to the same Client Access server to preserve the session state. Session affinity tracking needs to happen at layer seven, which requires the load balancer to open, inspect, and modify protocol packets. You could slide by using the client (source) IP address to track protocol sessions, but that method has definite drawbacks in any network using Network Address Translation (NAT). Worse, each protocol has its own requirements for full session affinity. In some cases, to make sure that each protocol is being appropriately handled, you need separate namespaces for each protocol! This greatly complicates the configuration of Exchange virtual directories, certificates, and load balancer virtual IP addresses.

By using a stateless Client Access model, Exchange Server 2013 makes load balancing much simpler. SSL offload is no longer needed (and isn’t even included in Exchange Server 2013), as the Client Access servers are no longer rendering data; they just proxy sessions on to the Mailbox server. Client Access server affinity is no longer required, which in turn means that the load balancer can simply round-robin incoming connections at layer four without inspection or modification. Not only does this simplify the namespace and load balancer configuration (and troubleshooting), it removes the need to export the Exchange server certificates and private keys and import them on the load balancing hardware.

So does this mean that hardware load balancers are still required? I unapologetically say yes, they are. This isn’t 2009, when load balancers were big and expensive, and the average administrator does not need to be stuck with workarounds like WNLB and DNS round robin. WNLB may be adequate for the smallest of organizations, but using it adds complexity to your Exchange Server 2013 Client Access servers and to your network – complexity that Exchange Server 2013 no longer requires you to accept. If you deploy multirole servers per Microsoft’s best practice recommendations, you can’t even use WNLB; it cannot co-exist with the clustering components required by the DAG.

If you already have load balancers, they’re probably adequate for Exchange Server 2013. You can still perform layer seven inspection and session affinity, although you will have to do SSL bridging (remember, no SSL offload). However, if you switch to layer four balancing, your existing hardware will be able to scale to handle more connections than it can today. You’ll reduce the number of connections and the amount of packet and session inspection, eliminate the SSL CPU overhead, and dramatically simplify your configuration. No more SSL certificates and keys, service probes, cookie and header manipulations, or per-protocol virtual services. Just point HTTPS to the IP address of your Exchange Server 2013 Client Access roles, select a basic round-robin protocol, step back, and enjoy the increased uptime your users will experience.

If you don’t have load balancers, there are several alternatives on the market. For the best bang for your buck, though, I personally recommend the products from KEMP Technologies. They have a variety of physical and virtual offerings that won’t break your budget, are specially crafted to work with Microsoft Exchange Server (and Lync and SharePoint too), and are easy to set up and maintain. For the cost of one or two more servers, you could have a fully redundant hardware load balancing setup that gives you a ton of features and flexibility you won’t get with WNLB or DNS round robin.


The Death of the Reverse Proxy

For years, many organizations that needed to securely publish Exchange to external users and devices used one of Microsoft’s security solutions:

 Internet Security and Acceleration Server (ISA) was a proxy and reverse proxy server with integrated firewall, VPN, and client caching features. ISA was used most commonly with Exchange Server 2003 and Exchange Server 2007.

 Forefront Threat Management Gateway (TMG) is the successor to ISA, but is no longer sold by Microsoft as of December 2012. TMG was used most commonly with Exchange Server 2007 and Exchange Server 2010.

 Forefront Unified Access Gateway (UAG) is a comprehensive remote access and policy enforcement engine built on top of TMG technology. UAG’s feature set extends far beyond publishing Exchange and is commonly used in situations where the degree of external access to corporate information is correlated to the status of the remote endpoint. UAG is commonly considered to be too expensive just to use to publish Exchange.

If UAG is too expensive and complicated to put in just for Exchange, and TMG is no longer around, what are your options?

Many administrators, at this point, remember that the Exchange Server 2013 Client Access role performs pre-authentication against Active Directory. Then they start thinking, “I’ll put that in my perimeter network, just like I did with the Exchange Server 2003 front-end, and use that instead of a reverse proxy!” Not so fast, hotshot; Microsoft supported that configuration in the Exchange 2003 timeframe because they had no other option (ISA hadn’t been released yet). Once ISA was available, Microsoft couldn’t stop supporting that configuration, but they could change the best practice recommendation. Once they released Exchange Server 2007, though, placing the Client Access role in a separate subnet without full access to the rest of the Exchange servers became unsupported. The canonical answer is “use a reverse proxy,” as shown in Figure 2. Everyone knows, of course, that you have to protect your Exchange servers. But how we protect them is all too often a case of received wisdom passed on from previous generations.

Figure 2: Complexity is the enemy of real security

The Exchange product team recently released a fantastic blog post explaining why they don’t believe you need a separate reverse proxy. Many organizations, including Microsoft, don’t use them. Instead, they publish their CAS servers through a firewall (often through some sort of NAT mapping) directly to the Internet. More accurately, they publish their load balancers to the Internet. The difference between load balancers and reverse proxies like TMG is some of the features offered in the name of security (such as pre-authentication of incoming connections against LDAP, RADIUS, or Active Directory).


We agree that you don’t actually need reverse proxies, and that all too often, deploying them is not giving you the true security benefits that blind faith attributes to them. Consider the following real-world design compromises caused by a hard requirement for reverse proxies:

 Reverse proxies sit in front of the load balancer and in many cases can duplicate the functionality of the load balancer. Now you have to choose whether to duplicate functionality and have separate network paths for internal and external clients, or combine it all into a single path and force your internal clients to talk to the reverse proxy as well. Both approaches have their downsides and make troubleshooting and configuration updates more difficult. I have seen service outages from using both models.

 Reverse proxies are the most common single point of failure I see today in Exchange deployments. After redundant SANs, hardware, and networking gear, having a redundant reverse proxy solution is a bridge too far for many organizations. Reverse proxies need load balancing too, and here is a common place organizations cut corners. Yes, TMG can be run on a WNLB cluster, but that doesn’t make WNLB any better of a choice at that point of your Exchange network stream than it would be closer to the Client Access role.

 Reverse proxies often cause more exposure of internal services, not less, because the features they offer are forbidden by other boilerplate security “best practices.” For example, TMG can be joined to the AD domain so that TMG pre-authentication takes advantage of Kerberos and constrained delegation. TMG has an impressive security record and arguably protects itself from attack better than any other firewall or reverse proxy product on the market. But “domain joined” and “perimeter” automatically set off alarms, so TMG is deployed using RADIUS or LDAP lookups – trading complexity and reduced security for the appearance of security.

Unless you have a legitimate business or regulatory need for a reverse proxy that the previous points don’t satisfy, you’re better off just using your load balancer in a dual-homed configuration. If you determine that pre-authentication is needed, pick one of the two following recommendations:

1. If you already have TMG deployed in your network, continue to use it! There’s no reason (yet) to rip and replace it. Extended support for TMG will last through April 14, 2020 and for now, you can publish Exchange Server 2013 on the existing product. If you need more TMG servers, you can’t buy those licenses directly any more, but you may already have them under Software Assurance/Enterprise Assurance – and if you don’t, there are still appliance vendors selling brand new TMG-in-a-box solutions.

2. If you have yet to deploy any reverse proxy pre-authentication solution, talk to your load balancer manufacturer. Many load balancers now offer the ability to perform pre-authentication and other reverse proxy functions, either in a newer firmware version or through an optional add-on package. This is one reason I like KEMP load balancers; they now offer the Edge Security Pack, which was specifically added to recent firmware versions to handle pre-authentication for Microsoft solutions.

Rethinking Secondary Services

Just installing Exchange Server 2013 isn’t enough to ensure a healthy, functional messaging system – at least, not once your users are granted access. Other applications that I call “secondary services” interact with Exchange and supplement it. Two of the most important secondary services, however, warrant a dramatic rethink of how – and if – you should use them.

We Don’t Need No Stinking Backups

Backups are a given in the modern IT environment. Although there have been several transformative technologies in the past decade – disk to disk, backup to cloud storage – the core of backups for most organizations still revolves around tape-based systems.

Although the tape systems of today have impressive transfer rates and capacities, backup systems still represent a significant limitation on the design of an Exchange deployment: database sizes are governed by backup and recovery time, which in turn limits your ability to get the most IOPS reduction benefit from the new ESE. You either have to limit mailbox sizes or deploy far more databases, which increases the number of servers and overhead in your design.

In Exchange Server 2010, Microsoft made a strong case for using the native single-item recovery, deleted item recovery, deleted mailbox recovery, and three or more copies of your databases in combination with archival and retention capabilities to take advantage of Exchange Native Data Protection (NDP). With NDP in Exchange Server 2013, you have even fewer reasons to take backups:

 Use in-place hold to ensure mailboxes and items are kept immutably instead of taking backups for legal archival.

 Use in-place discovery to perform legal searches in real-time across all mailboxes in the organization.

 Use archive mailboxes and retention features to ensure that messages that shouldn’t be deleted are preserved.

 Use the DAG in multiple sites to ensure that server and site outages do not result in the loss of messaging data.

 Use low-cost SATA direct-attached storage and Automatic Reseed to ensure that database copies are automatically reseeded to spare disks if one disk or copy is lost.

Exchange Native Data Protection is only available when you have a DAG and preserve three or more copies of your databases. If you don’t meet those requirements, or still need to take backups and want to move past the limitations of tape, consider a combination of disk-based and cloud-based backups. A disk-based solution like Microsoft System Center Data Protection Manager or a cloud-based backup solution like Cohesive Backup can help you simplify your backups.

Monitoring

Building Exchange systems is easy. Keeping them running during day-to-day operations is a challenge. Legacy versions of Exchange focused on helping you build inexpensive, manageable native redundancy into the various Exchange roles. However, Microsoft has always recommended that you deploy a comprehensive monitoring solution that can keep track of the health of all components of the Exchange system, such as Microsoft System Center Operations Manager (SCOM). SCOM does have downsides, though; it can be complex to deploy and tune, as the default Exchange management packs can produce a high volume of alerts until the thresholds are adjusted.


Simple IP-based monitoring solutions, or solutions that merely look at event logs, provide only part of the coverage you need to keep the system healthy; they are usually less expensive and less complex than more comprehensive alternatives.

In Exchange Server 2013, Microsoft has focused on a new principle: self-healing systems. Exchange now has built-in health monitoring that looks for errors in every component of Exchange. When it finds a problem, it has a list of automatic responses to perform. The theory is simple: most errors are transient or can be cleared through a simple action such as restarting a service or moving an active database copy to another server. Because Exchange Server 2013 is designed to ensure that protocol rendering happens at the active mailbox database copy, that native protocols are used whenever possible, and that the Client Access role has no state, moving active operations to a redundant healthy copy is an efficient strategy for the majority of problems. If the automatic health options can’t resolve the problem, then an alert is raised for manual operator intervention.
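The escalation ladder can be modeled as an ordered list of responders: try the cheapest fix first, and only raise an operator alert when every automatic option has failed. The Python sketch below is a conceptual model only – the responder names are invented, and the real Managed Availability feature is configured through health sets and responders, not code like this:

```python
def run_responders(probe_failed, responders):
    """Walk an ordered list of (name, action) pairs until one clears
    the fault; each action returns True when the component is healthy
    again. Falls through to an operator alert if none succeed."""
    if not probe_failed:
        return "healthy"
    for name, action in responders:
        if action():
            return f"recovered by {name}"
    return "alert: operator intervention required"

# Restarting the service fails, but failing over the database works,
# so the reboot responder is never reached:
result = run_responders(True, [
    ("restart service", lambda: False),
    ("failover database copy", lambda: True),
    ("reboot server", lambda: True),
])
print(result)
```

The ordering matters: cheap, low-impact actions sit at the top of the list, and each rung only fires if everything above it failed, which is why most transient faults never become visible to operators at all.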

When combined with the latest SCOM management packs, this strategy is even more effective; the noise of meaningless or uncorrelated events is reduced, making it easier to see the important events, understand their significance, and have a good idea of how to proceed in solving the problem. To get these benefits, though, you must deploy a management system that actually understands how Exchange Server 2013 works; otherwise, the automatic options can create even more noise in your monitoring stream, obscuring the good information.

Upgrade and Migration Paths

By now, it should be clear that Exchange Server 2013 is more revolutionary than many people appreciate. The real question now is what do you need to know to upgrade successfully? Exchange upgrades touch many components, as there is a definite link between the Exchange organization and configuration, the Active Directory forest and domains, the network, the desktops and mobile devices, and many other elements in the IT infrastructure.

I often find that customers, even long-time users of Exchange, don't know what upgrade options they have. Allow me to finish by giving you an overview of your choices.

Greenfield Deployments

If you’re just starting out with a new Exchange organization, or have so many problems with your existing Active Directory forest that it’s easier to replace it than fix it, creating a brand-new Active Directory forest and installing a fresh Exchange organization is certainly a compelling choice. Depending on how much data you can stand to lose, this kind of slash-and-burn operation can be a quick fix.

If you can’t leave the data behind but can’t fix the problems in place, a greenfield deployment is the first step in a multiple-forest deployment, which I will talk more about below.

In-Place Upgrade with Native Tools

Upgrading your organization in place is accomplished by installing Exchange Server 2013 on new servers, then transferring features and mailboxes from the old version to the new. This should be a familiar process for those who upgraded to Exchange Server 2007 and Exchange Server 2010. Many of these organizations still have pieces of Exchange Server 2003 left: servers kept running because of legacy software that is not compatible with newer versions of Exchange, servers that haven’t been properly decommissioned, and traces left in Active Directory. The first step is to run an Exchange Best Practices Analyzer (ExBPA) health check on your organization. Any Exchange Server 2003 servers or traces must be decommissioned and cleaned up, or the Exchange Server 2013 installation cannot proceed. If you have been putting off migrating those legacy applications, now is the time to do it.

To install Exchange Server 2013 into an existing Exchange organization you must have all legacy Exchange servers running the following versions at a minimum:

 Exchange Server 2007 Service Pack 3 with Update Rollup 10. This must be present on all Exchange Server 2007 servers, including Edge Transport servers.

 Exchange Server 2010 Service Pack 3. This must be present on all Exchange Server 2010 servers, including Edge Transport servers.

 Install Exchange Server 2013 using the latest Cumulative Update (CU) available. At the time of writing, that is Exchange Server 2013 CU2.
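As a quick pre-flight check on the requirements above, a sketch like the following lists every server in the organization with its roles and build number so you can confirm the minimum versions before introducing Exchange Server 2013 (the sorting and column selection are just illustrative):

```powershell
# Sketch: enumerate existing Exchange servers and their versions
# before installing Exchange Server 2013 into the organization.
Get-ExchangeServer |
    Sort-Object AdminDisplayVersion |
    Format-Table Name, ServerRole, Edition, AdminDisplayVersion -AutoSize
```

Any server reporting a build below Exchange 2007 SP3 UR10 or Exchange 2010 SP3 needs to be patched (or decommissioned) before setup will proceed.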

Upgrade each of your Exchange sites one at a time. There will be a period of coexistence during which client access should initially be handled by the Exchange Server 2013 Client Access servers, which will proxy or redirect connections as required. Once client access is established, you can start moving resources, processes, and mailboxes.
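Once client connections flow through Exchange 2013, the mailbox moves themselves are driven by move requests. This sketch assumes a hypothetical target database name ("EX2013-DB01"), batch name, and user address; substitute your own values.

```powershell
# Sketch: queue an online mailbox move to an Exchange 2013 database.
# The database, batch, and mailbox names are placeholder values.
New-MoveRequest -Identity "jsmith@contoso.com" `
    -TargetDatabase "EX2013-DB01" `
    -BatchName "Pilot-Wave1"

# Track progress of the whole batch
Get-MoveRequest -BatchName "Pilot-Wave1" |
    Get-MoveRequestStatistics |
    Format-Table DisplayName, StatusDetail, PercentComplete -AutoSize
```

Because moves are online, users stay connected while their data copies in the background; only a brief switchover at completion affects them.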

Multiple-Forest Deployments

Exchange is no longer limited to a single Active Directory forest. As mentioned previously, a greenfield deployment can become part of a multiple-forest Exchange organization, of which there are two types:

Cross-org forest topologies are configurations where multiple forests contain both Exchange servers and user accounts. The accounts in each forest are only visible to that forest unless a GAL synchronization solution is established.

Resource forest topologies are configurations where one forest contains the Exchange servers, while one or more other forests contain user accounts. While user accounts must be created in the Exchange forest, these accounts are disabled and linked to accounts from the user forests. These topologies are common during mergers and acquisitions, but can also be used as part of a larger Active Directory migration to solve problems with underlying legacy forests.

Of the two configurations, resource forests are the easier to maintain and operate in the long term using native tools. Even when multiple user forests exist, the linked disabled accounts in the resource forest provide a single consistent view of the entire GAL. Exchange Server 2013 tools and scripts will work with this topology. To perform the necessary GAL synchronization, you need to deploy a tool such as Microsoft Identity Lifecycle Manager 2007 Feature Pack 1 or Forefront Identity Manager 2010, or use a third-party option.
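For illustration, a linked mailbox in a resource forest pairs a disabled account in the Exchange forest with the user's enabled master account in the account forest. The forest names, domain controller, and credentials below are all placeholder assumptions.

```powershell
# Sketch: create a linked mailbox in the resource (Exchange) forest
# for a user whose enabled account lives in the account forest.
# All names, domains, and credentials here are placeholders.
New-Mailbox -Name "Jane Smith" -Alias jsmith `
    -UserPrincipalName jsmith@resource.contoso.com `
    -LinkedMasterAccount "ACCOUNTS\jsmith" `
    -LinkedDomainController dc01.accounts.contoso.com `
    -LinkedCredential (Get-Credential "ACCOUNTS\svc-exchange")
```

The resulting account in the resource forest is created disabled; the user always authenticates with the master account in the account forest, which is what keeps the GAL view consistent.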

Migration Using Third-Party Tools

There are common scenarios where the native Exchange Server 2013 tools may not be adequate for your needs or schedule. When performing migrations across forests, establishing a hybrid solution, or migrating from Exchange Server 2003 directly to Exchange Server 2013, you may have additional coexistence or data retention requirements that justify the expense of a third-party solution. For lightweight migrations involving a hosted service or a simple acquisition/merger scenario, we recommend looking at the MigrationWiz service. This cloud-based subscription service uses connections to and from its server farm to download mailbox data from the source organization, perform any necessary conversions, and upload the content to the target organization. While not all data is transferred, this solution requires no hardware or software to be installed, merely a high-bandwidth, low-latency connection to the Internet.

For longer, more complicated migration projects, there are two main toolsets in the Exchange world: Binary Tree and Quest Migration Manager (now part of Dell Software). Both are commercial offerings that usually require a professional services consulting engagement with a licensed partner to plan and execute. These tools include multiple components that address all phases of a long-term migration project's lifecycle: coexistence, mailbox batching, profile updates, public folders, availability and free/busy information, and more.

Cohesive Logic is a Quest partner with extensive migration experience. Our consultants have performed migrations for hundreds of thousands of Exchange mailboxes using the Quest Migration Manager tools.

Conclusion

By now, I hope I’ve convinced you that Exchange Server 2013 is a significant version of Exchange worthy of closer attention. From the new robust architecture to the self-monitoring and healing capabilities, Exchange Server 2013 will provide great value to you, as long as you know what’s under the hood and how to take advantage of the changes. Good luck with your upgrades and migrations, whether you perform them yourself or contact Cohesive Logic for consultation.

About Cohesive Logic

Cohesive Logic (www.cohesivelogic.com) is a privately owned company based in Issaquah, Washington that offers professional services, hosted services, and remotely managed services for email, collaboration, and unified communications centered on Microsoft Windows Active Directory, Exchange Server, and Lync Server. Cohesive Logic was founded in March 2008 and built its reputation on design, deployment, and migration work for mid-to-large-sized companies across North America. The company has combined deep expertise in Exchange, Lync, and Active Directory with experience building and running Network Operations Centers for Managed Service Providers to offer both fully managed and hosted solutions for Microsoft Exchange Server.

About Devin

Devin L. Ganger is a Principal Consultant for Cohesive Logic. After working as a systems administrator for Windows and UNIX for several years, he became a consultant with a primary focus on Exchange Server, Windows Active Directory, and related Microsoft and third-party technologies. Devin was recognized as a Microsoft MVP for Exchange Server for five years (2007-2011) and earned the Microsoft Certified Master (MCM) Exchange 2007 certification in 2009. He is a blogger, industry speaker, the author of several magazine and newsletter articles, and has co-written and contributed chapters to a number of technical books and a variety of white papers. In his spare time, he studies karate, volunteers with the Boy Scouts, enjoys Xbox gaming, writes speculative fiction, and speaks about himself in the third person.
