Running SAP Applications on the Microsoft Platform

BW Queries Faster by Factors Using FEMS-Pushdown


Complex SAP BW queries often contain FEMS-filters. These filters used to be applied on the SAP application server, which was not very efficient. A new SQL statement generator in SAP BW implements an optimized algorithm for FEMS processing. We have seen performance improvements of up to a factor of 100; the actual improvement varies heavily between BW queries. The main idea was to reduce the processing time on the application server by pushing down the FEMS filters from the application server to the database. However, the new algorithm even reduces the database processing time in most cases. The following example shows the BW query statistics of a simple BW query with 15 Selection Groups. The total runtime went down from almost 42 seconds to 1.4 seconds.

[Figure: BW query statistics before and after FEMS-pushdown]

What are FEMS filters?

A FEMS filter (Form EleMent Selektion) in SAP BW is a filter on Selection Groups. To make a long story short: FEMS filters are filters on key figures, structures or cells. For an existing BW query, you can check the number of Selection Groups in SAP transaction RSRT:

[Screenshots: Selection Groups of a BW query in transaction RSRT]

A BW query with FEMS filters is not necessarily slow. However, complex FEMS filters were often not applied on the database. A SQL query containing almost no filters was created, which returned a huge result set (672,184 rows from the 100,000,000-row cube in the example above). Running such a SQL query requires high processing time on the DB server and results in high network I/O. Afterwards, the FEMS filters have to be applied to the result set on the SAP application server. Due to the architecture of the SAP application server, this step is always performed single-threaded!

SAP realized early on that FEMS filters should be pushed down to the database. However, SAP was convinced that this could not be performed efficiently with SQL. Therefore, an undocumented API was implemented in SAP BWA (and later in HANA) some years ago. This API was used for pushing down FEMS filters to BWA by bypassing the SQL interface. SAP implemented further DB-pushdowns exclusively for BWA (and HANA). However, it turned out that some BW queries were faster without using DB-pushdown. Therefore, the so-called TREXOPS modes were introduced. Mode 2 is the FEMS-pushdown, mode 3 is the MultiProvider-pushdown. You can configure the TREXOPS mode per BW provider and per BW query. A higher mode number always includes all optimizations of the lower mode numbers.

[Screenshot: TREXOPS mode configuration]

SAP BWA supports TREXOPS modes 2 and 3. The most important one is the FEMS-pushdown. The default setting for BWA is TREXOPS mode 2 (see SAP Note 1790426).

FEMS-pushdown with SQL Server

The FEMS-pushdown on SQL Server implements a new algorithm, which uses the standard SQL interface. This algorithm does not only push down the FEMS filters. For example, it further reduces the complexity of the SQL query by factorizing the FEMS filters on the application server before running the SQL query. As a result, even the DB response time decreases in many cases. One may argue that reducing the DB response time will result in a higher utilization of the DB server (which is a unique resource, in contrast to the application servers). However, this is not necessarily the case. The optimized algorithm even reduced the consumed CPU time on the DB server in the example above. You can see the total CPU time in the SQL Server query statistics in column worker_time.

[Screenshot: SQL Server query statistics with column worker_time]

For checking the SQL Server response time, the SAP BW statistics (see the Data Manager time above) are more accurate. The column elapsed_time in the SQL Server query statistics does not contain the compilation of the SQL statement. On the other hand, it contains processing time on the application server between the fetches of the SQL query (which is using a cursor).
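For a rough cross-check on the database side, worker time and elapsed time of cached statements can be compared with a query such as the following sketch against sys.dm_exec_query_stats (times are reported in microseconds; TOP 20 and the millisecond conversion are just illustrative choices):

-- Compare CPU time (worker time) and elapsed time of cached statements
SELECT TOP 20
    qs.execution_count,
    qs.total_worker_time / 1000 AS total_cpu_ms,
    qs.total_elapsed_time / 1000 AS total_elapsed_ms,
    SUBSTRING(st.text, 1, 100) AS statement_start
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY qs.total_worker_time DESC;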

The FEMS-pushdown on SQL Server also uses the TREXOPS modes described above. Any mode higher than 0 enables the new SQL statement generator for FEMS queries. For non-FEMS queries (or TREXOPS mode 0) the old SQL statement generator is used. The performance improvement caused by the FEMS-pushdown depends highly on the actual BW query. In a few cases it may even increase the BW query runtime. However, we have only seen this for BW queries which are fast anyway. It does not really hurt whether a BW query takes 3 or 4 seconds, but it makes a difference whether it takes 300 or 4 seconds. In the worst case, you can disable the FEMS-pushdown for a particular query using the TREXOPS modes.

Prerequisites for FEMS-pushdown

The FEMS-pushdown will be generally available for SQL Server in a few weeks. We want to start now with a few pilot customers to get feedback before finally releasing the new statement generator. You can apply by opening an SAP message in component BW-SYS-DB-MSS. We will then provide you with the required SAP correction instruction, which enables the FEMS-pushdown for Microsoft SQL Server.

The new statement generator can only be used when the following prerequisites are fulfilled:

  • The required SAP BW code is implemented (minimum: 7.50 SP4 + SAP correction instruction)
  • The BW query is a FEMS-query (it has at least 2 FEMS-filters)
  • FEMS-pushdown is activated by setting RSADMIN parameter USE_FEMS_IN_DB = X (and TREXOPS mode ≥ 2); a verification sketch follows this list
  • An existing BWA connection has to be completely disabled (for all cubes)
  • The InfoProvider is a Flat Cube (this requires SQL Server 2014 or newer). You can also benefit from FEMS filter pushdown for a MultiProvider. In this case, a separate SQL query runs against each Part-Provider: SQL queries against Flat Cube Part-Providers use the new statement generator, while SQL queries against other Part-Providers simply use the old statement generator.
  • Inventory queries are currently not supported for FEMS filter pushdown; they always use the old statement generator. However, you can use the new statement generator for inventory cubes, as long as the query does not contain an inventory key figure.
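RSADMIN is an ordinary SAP table, so the parameter from the list above can be spot-checked directly on the database. The sketch below assumes the standard OBJECT/VALUE layout of table RSADMIN and a connect-user schema named after the SID (the schema name prd is a placeholder):

-- Check whether the FEMS-pushdown parameter is set (schema name is a placeholder)
SELECT [OBJECT], [VALUE]
FROM [prd].[RSADMIN]
WHERE [OBJECT] = 'USE_FEMS_IN_DB';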

Summary

There are already several options available to speed up BW queries with SQL Server. Each of them alone can speed up BW queries by factors:

  • Having sufficient hardware resources, you can simply increase SQL Server intra-query parallelism using RSADMIN parameter MSS_MAXDOP_QUERY.
  • You can use an optimized index structure for BW cubes by applying SQL Server Columnstore. With SQL Server 2012, 2014 and 2016 we already released the 3rd generation of SQL Server Columnstore.
  • As of SAP BW 7.40 (SP8) you can apply an optimized table structure for BW cubes by converting them to Flat Cubes.
  • Finally, as of SAP BW 7.50 (SP4) you can use the optimized SQL statement generator as described above.

You can already test the FEMS-pushdown when becoming a pilot customer. In any case, your SAP release planning should include an upgrade to SAP BW 7.50 SP4 (or higher) and SQL Server 2016.


Windows 2016 is now Generally Available for SAP


Windows 2016 has been released by Microsoft and is now Generally Available for any SAP NetWeaver 7.0 or higher components. Release information can be found in SAP Note 2384179 – SAP Systems on Windows Server 2016.

This blog discusses the relevant SAP Notes and release information. The official release status of individual SAP applications can be found in the SAP Product Availability Matrix (PAM).

Windows 2016 includes many features deeply integrated into the Azure public cloud platform. SAP on Windows 2016 is simultaneously released for on-premises deployments and for Azure cloud deployments.

1. Windows 2016 Long Term Servicing Branch

Starting as of Windows Server 2016 the Server product offers two deployment models: Long Term Servicing Branch and Current Branch for Business.

SAP only supports Windows Server 2016 64bit LTSB with full GUI (Desktop Experience). More information about LTSB vs. CBB and the different versions can be found here.

Datacenter and Standard Edition are both supported for SAP applications. The differences between Standard Edition and Datacenter Edition do not impact an SAP NetWeaver application server. Both editions support 640 logical processors and 24TB of RAM. The most significant technical difference between Datacenter and Standard Edition is the virtualization features and related capabilities in the network and storage area. Some information about licensing is available here.

2. Required SAP Kernels

SAP Kernels 7.21_EXT, 7.22_EXT and 7.49 or higher are the only kernels supported on Windows 2016. It is recommended to run either 7.22_EXT or 7.49, as these are the latest generation kernels. 7.22_EXT is fully downward compatible to SAP_BASIS 7.00. This means any NetWeaver 7.00 to 7.31 application can run on the latest 7.22_EXT kernel. Customers are generally advised not to use old kernels such as 7.00, 7.01, 7.21, 7.40 or 7.42, as this impacts the supportability of a system.

SAP Java based components that require the SAP JVM 4.1 are not supported on Windows 2016 at this time. The SAP JVM 4.1 is now end of life and will likely not be validated on Windows 2016. Java based systems must be 7.30 or higher to be supported on Windows 2016.

The screenshot below shows the SAP PAM selection screen and the 7.2x based kernels that are supported on Windows 2016.

3. Supported Databases and SAP Standalone Engines

Windows 2016 supports:

  • SQL Server 2012, 2014, 2016 or later
  • DB2 11.1 or later
  • Sybase 16 SP2 or later
  • MaxDB 7.9
  • Hana 1.0 SP12 and Hana 2.0 SP00 Client Components

This allows SAP customers to leverage the security and high availability features built into Windows for Suite on Hana, BW on Hana and S4 Hana deployments.

Oracle 12c is planned to be supported later. Please check Note 2384179 – SAP Systems on Windows Server 2016 and the SAP Product Availability Matrix.

4. Windows 2016 Hyper-V Support

As of March 2017, Windows 2016 Hyper-V scenarios are not supported. Due to changes in Windows Server 2016 Hyper-V, SAP cannot support SAP applications on Hyper-V 2016. Therefore, support of Windows Server 2016 Hyper-V has been postponed until a suitable solution is provided by Microsoft. Note 1409608 – Virtualization on Windows will be updated when this process is complete.

5. Benefits of Windows 2016

  • Azure Cloud Witness – the cluster File Share Witness, which must be in a third location with diverse network connections, can now be placed on Azure cloud storage
  • Storage Spaces Direct (S2D) – new generation storage solution. Datacenter SKU only
  • Storage Replica – async or sync replication. Works with Hyper-V, Dedup and ReFS improvements
  • PowerShell 5.0 – many new PowerShell cmdlets for networking and Hyper-V

6. Required SAP Notes for Windows 2016

It is required to read the following OSS Notes before installing SAP applications on Windows 2016:

2356977 – Error during connection of JDBC Driver to SQL Server: This error will prevent Java based systems on SQL Server from installing on Windows 2016

2424617 – Correction of OS version detection for Windows Server 2012R2 and higher

2287140 – Support of Failover Cluster Continuous Availability feature (CA)

1869038 – SAP support for ReFS filesystem: It is supported to use ReFS instead of NTFS as of Windows 2016 (only for SAP application servers and SQL Server at this time)

2055981 – Removing Internet Explorer & Remotely Managing Windows Servers: It is recommended to remove any non-essential software from Windows servers. This procedure works on Windows 2016

1928533 – SAP Applications on Azure: Supported Products and Azure VM types

2325651 – Required Windows Patches for SAP Operations

2419847 – Support of Windows in-place upgrade in Failover Cluster environments

Important Links

Windows Server 2016 Evaluation Edition Download

Windows Server 2016 documentation getting started

Cloud Witness documentation and blog

Windows 2016 on Channel9

SAP Business Objects support of SQL Server 2016 and Azure products


These days, we get a lot of questions about SAP Business Objects components supporting SQL Server 2016. As of February 2017, the support situation looks like this:

The support of more recent OS or DBMS releases is not always apparent in the SAP PAM grid where supported OS and DBMS versions usually are listed. Instead, you should open the PDF file that you often can find in the upper right corner of the SAP PAM page of the specific SAP product, as shown below:

[Screenshot: SAP PAM page with the PDF link in the upper right corner]

We will keep you informed on further progress with other SAP BusinessObjects or Data Services components supporting newer SQL Server releases or Azure properties.

Large Australian Energy Company Modernizes SAP Applications & Moves to Azure Public Cloud


This blog is a technical overview of a project completed over the past 12 months by a large Australian energy company (the Company) to transform the Company’s SAP solution from an end-of-life HPUX/Oracle platform to a modern SQL Server 2016 solution running on the Azure public cloud. This blog focuses on the technical aspects of the project, which involved multiple OS/DB migrations, Unicode conversions and SAP upgrades in addition to a move to the Azure public cloud.

The project has now been successfully completed by the Company and BNW Consulting, an SAP consulting company specializing in SAP on Microsoft platforms.

1. Legacy Platform & Challenges

The Company implemented SAP R/3 in 1996 and deployed on HPUX and Oracle, a popular platform for running SAP applications at the time. Over time additional applications such as SAP Business Warehouse, Supply Chain Management, Enterprise Portal, GRC, PI, MDM and SRM were deployed. The UNIX based platform had reached end of life and the Company conducted a due diligence assessment of the available operating systems, databases and hardware platforms for running SAP and selected Azure, Windows 2012 R2 and SQL Server 2016.

Factors that were taken into consideration included:

1. Huge performance and reliability increases on standard commodity Intel platforms over the last 10-15 years.

2. No new investment by UNIX vendors into this technology and near universal move away from UNIX for standard package type applications like SAP.

3. SQL Server 2016 includes a Column Store capability that is integrated into SAP BW and delivers performance sufficient to allow removal of expensive Business Warehouse Accelerator (BWA) from the solution.

4. Azure Public Cloud platform has matured and is fully certified and supported by SAP.

5. Improvements in High Availability and Disaster Recovery technologies for Windows and SQL Server 2016 which allow for improved SLA.

6. Azure cloud has an integrated Disaster Recovery tool called Azure Site Recovery (ASR). This tool allows for DR testing at any time without impacting the DR SLA or production system.

7. Azure platform has a strong partner roadmap with SAP including the provision of 4TB Hana appliances and certifications of Azure VMs for Hana.

8. Overall technical platform capabilities and a single point for support. Availability of skilled partners

In addition, the current platform was running on Oracle 11g, which was also at end of life; there were restrictions on database compression with Oracle, no roadmap for FEMS pushdown for BW on Oracle, and the SAN storage was out of space and needed renewing.

Before the migration to Windows, SQL Server and Azure the SAP application landscape consisted of these SAP components:

SAP ECC 6.0 EHP 5 (Non-Unicode)

SAP BW 7.30 (Non-Unicode)

SAP BWA 7.20

SAP Business Objects 4.1 SP6

SAP SRM 7.31

SAP SRM TREX 7.1

Content Server 6.40

SAP SCM (Non-Unicode)

SAP SCM LiveCache 7.7

SAP PI 7.3

SAP EP 7.3

SAP GRC 7.02

SAP Gateway 7.31

SAP Solution Manager 7.1

SAP MDM 7.1

SAP Console

2. Target Landscape

The target landscape on Azure (Australia Region), Windows 2012 R2 and SQL Server 2016 Service Pack 1 required some components to be simultaneously OS/DB migrated, transferred to Azure and converted to Unicode. The target landscape releases are listed below:

SAP ECC 6.0 EHP 7 (Unicode)

SAP BW 7.40 (Unicode)

SAP Business Objects 4.2 SP2 pl4

SAP SRM EHP2

SAP SRM TREX 7.1

Content Server 6.5

SAP SCM EHP3 (Unicode)

SAP SCM LiveCache 7.9

SAP PI 7.4

SAP EP 7.4

SAP GRC 10

SAP Gateway 7.4

SAP Solution Manager 7.2

SAP SLD/ADS 7.4

SAP MDM 7.1

SAP Console

Operating System: Windows 2012 R2 Datacenter

Database: SQL Server 2016 SP1 Enterprise Edition

Database Server VM types: BW: GS5, ERP: G4, Other SAP: D14v2, MaxDB/TREX: DS12v2

SAP Application Server VM types: D12v2 and D14v2

Storage: Azure Premium Storage used for all DBMS servers (SQL Server, MaxDB and TREX). Standard used for all other workloads

Network Connectivity: ExpressRoute with High Performance Gateway

Azure Deployment Model: Azure Resource Manager (ARM)

3. Upgrades, Unicode Conversions and OS/DB Migrations

To achieve the best outcome for the Company, the SAP OS/DB migration partner BNW Consulting recommended against a “Big Bang” go-live. Such a go-live would involve moving the entire production environment to Azure in a single weekend. While this is certainly technically possible, the advantages of this approach are small and the resources required are considerable.

An incremental go-live approach was successfully used. Smaller NetWeaver systems were migrated, upgraded or reinstalled on Azure. Then the BW system was exported in the existing datacenter using fast Intel-based Windows servers to run R3load. The performance of R3load on HPUX and UNIX platforms is in general far slower than on Intel-based platforms. The dump files were transferred using FTP and SAP migmon, imported in parallel, and then BW was upgraded to BW 7.40 with the latest Support Pack Stack. The database was converted to Unicode during the migration.

In the final phase the SAP ECC 6.0 system was moved to Azure, upgraded from EHP5 to EHP7 and converted to Unicode.

To speed up the import, the SQL Server transaction log file was temporarily placed on the D: drive (non-persistent disk) of the GS5 DB server. Additional log capacity was allocated on a Windows Storage Space created from 6 x P30 Premium disks.
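A minimal sketch of this trick is shown below. The database name, logical file names, paths and sizes are placeholders; the file on D: must be removed again before productive use, because the D: drive is non-persistent:

-- Temporarily add a log file on the non-persistent D: drive for the import
ALTER DATABASE PRD
ADD LOG FILE (NAME = PRDLOG_TEMP, FILENAME = 'D:\PRDLOG_TEMP.ldf', SIZE = 100GB);

-- Additional log capacity on the Storage Space volume (placeholder path)
ALTER DATABASE PRD
ADD LOG FILE (NAME = PRDLOG2, FILENAME = 'S:\PRDLOG2.ldf', SIZE = 100GB);

-- After the import, once the file no longer contains active log, remove it again
ALTER DATABASE PRD REMOVE FILE PRDLOG_TEMP;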

Setup and configuration of the SAP application servers was simplified considerably by placing all the profile parameters into the Default.pfl. There is no reason for individual SAP application servers to have different configurations, and troubleshooting is greatly simplified by having a uniform configuration.

The implementation partner also tested and remediated where necessary more than 200 interfaces to more than 40 non-SAP applications.

4. SQL Server 2016 Enterprise Edition SP1 Features for SAP

SQL Server 2016 has many features of great benefit to SAP Customers:

1. Integrated Column Store: Drastic reduction in storage and vastly increased performance. This technology can be deployed automatically during the RS_BW_POSTMIGRATION phase of a migration or using report MSSCSTORE. Many customers have terminated SAP Business Warehouse Accelerator and achieved the same or better performance after implementing SQL Server Column Store.

2. SQL Server AlwaysOn is a built-in, integrated HA/DR solution that is simple to set up.

3. SQL Server Transparent Data Encryption secures the database at rest and database backups. TDE is integrated with the Azure Key Vault service.

4. SQL Server Backup to Blob Storage allows a backup to write directly to multiple files on a URL path on Azure storage. Full and transaction log backups are taken from SQL Server AlwaysOn secondary databases, which reduces the load on the primary database and also allows for easy creation of offsite backups in the DR datacenter (see the sketch after this list).

5. SQL Server 2016 supports Azure Storage level “snapshots” similar to enterprise SAN level snapshots – currently taken every 4 hours and retained for 48 hours. SQL Server 2016 also has the ability to do larger backups (> 1TB), which unified and simplified the Company’s backup processes across all SAP applications.

6. SQL Server Database Compression reduces the database size dramatically
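Offloading backups to a secondary is driven by the availability group’s backup preference. A backup job can ask SQL Server whether the local replica is the preferred backup source, as in this sketch (database name and backup target are placeholders; full backups on a secondary must be copy-only):

-- Run the backup only if this replica is the preferred backup replica
IF sys.fn_hadr_backup_is_preferred_replica('PRD') = 1
BEGIN
    BACKUP DATABASE PRD
    TO DISK = 'R:\Backup\PRD_full.bak'
    WITH COPY_ONLY, COMPRESSION, STATS = 10;
END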

SQL Server 2016 database compression results:

SAP Component | DB Size on HPUX/Oracle | DB Size on Windows & SQL Server | Comment
SAP BW 7.40   | 7.7TB                  | 2.3TB                           | 7.3 to 7.4 Upgrade, Unicode Conversion and Column Store & Flat Cube
SAP ECC 6.0   | 3.1TB                  | 1.5TB                           | 7.3 to 7.4 Upgrade, EHP6 to EHP7 and Unicode Conversion
SAP SRM       | 261GB                  | 126GB                           |
SAP SCM       | 122GB                  | 53GB                            | Upgrade from 7.0 EHP1 to EHP3 and Unicode Conversion

SQL Server 2016 Column Store for SAP BW drastically improves user query and data load times. The Flat Cube functionality found on SAP BW on Hana is available to SQL Server customers. The graph below shows the performance comparison between Oracle (with SAP Business Warehouse Accelerator) and SQL Server 2016 (BWA removed). The graph was collected and prepared by the Company several weeks after go-live.

Graph 1. On-premises HPUX/Oracle + BWA performance vs. Windows 2012 R2, SQL Server 2016 and SQL Server Flat Cube and Column Store

In addition, the BW process chain times have been reduced by 50%. This has allowed users to report on more up-to-date data.

5. Azure Configuration

The project utilized the following Azure-specific features and capabilities to improve the performance and resiliency of the SAP solution:

1. Availability Sets – SQL servers, ASCS servers and SAP application servers were configured into Availability Sets to ensure that there was at least one node available and running at any one time.

2. Premium Storage – SQL Server and any DBMS like workload utilizes Premium Storage to deliver high IOPS at stable latencies of low single digit milliseconds.

3. Multiple IP address on a ILB – ASCS components were consolidated onto a single ASCS cluster using the multiple frontend IP feature of the Azure Internal Load Balancer (ILB)

4. Storage Pinning – Separate storage accounts were used for SQL Server AlwaysOn nodes. This ensures that the failure of an individual storage account does not result in the storage being unavailable for both nodes. This capability is now built into a new Azure feature called “Managed Disks”.

5. Network Security Groups – Network ACLs can be created per vNet, per subnet and per individual VM.

6. Azure ExpressRoute – High speed connectivity via a dedicated private WAN link between a customer site and the Azure datacenter.

7. Active Directory Group Policy – SAP servers and service accounts are placed into a container in Active Directory and a Group Policy is enforced to harden the Windows operating system, thereby reducing the servicing and patching requirements. The service accounts for SQL Server were configured for Lock Pages in Memory and Perform Volume Maintenance Tasks. Internet Explorer was removed from Windows Server 2012 R2, further reducing the need for servicing and patching. Most customers adopting this deployment configuration are able to eliminate regular patching and move to a yearly update and patching cadence.

6. High Availability & Disaster Recovery

All SAP application components within the primary datacenter have high availability. This means the failure of an individual server or service will not result in an outage. SQL Server is protected via synchronous AlwaysOn with a 0-second RPO and a 10-45 second RTO. The SAP ASCS service is protected using Windows clustering.
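The synchronization state behind these RPO/RTO figures can be monitored with the AlwaysOn DMVs; the following sketch lists the role and synchronization health of each replica:

-- Show role and synchronization health of all availability group replicas
SELECT ar.replica_server_name,
       rs.role_desc,
       rs.operational_state_desc,
       rs.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states rs
JOIN sys.availability_replicas ar
  ON rs.replica_id = ar.replica_id;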

Because Azure does not natively support shared disks a disk replication tool called BNW AFSDrive is used to present a shared disk to the cluster.

The Disaster Recovery site must be in a totally separate geolocation in order to provide true resilient DR capabilities. Completely independent electricity, WAN links and governmental services and security are required to reach the required SLA.

The Disaster Recovery solution incorporates one SQL Server and ASCS node in the DR site. If a DR event forced the Company to run for more than a few days at the DR site High Availability would be added to the DR solution.

 

7. Partner Products & Services

BNW StopLoss – Large enterprise customers require absolute certainty that every single row in the source database has been correctly migrated to SQL Server. The SAP migration tools do keep track of export and import row counts at a basic level, but when these tools have been used by uncertified consultants, cases of inconsistencies have occurred. BNW StopLoss eliminates this possibility. BNW StopLoss is a tool that scans export row counts and asynchronously counts the rows in the target SQL Server database. If any differences are detected, an alert is generated.

BNW AFSDrive – Azure does not natively support a shared cluster disk. BNW AFSDrive creates a virtual shared cluster disk between two or more Azure VMs. BNW AFSDrive is certified and supported on Windows 2012 R2 and Windows 2016.

BNW CloudSnap – SQL Server, Oracle and Sybase databases can be copied using the Azure disk/storage cloning technologies. The CloudSnap utility will briefly quiesce the disks and command the database to checkpoint. This technology allows for backups and system copies to be done effortlessly.

8. Benefits of Azure

The Company was able to move off an end-of-life HPUX platform, leverage the many performance and space saving features in SQL Server 2016 SP1 and eliminate the expensive proprietary BWA appliance. In parallel, all applications were upgraded to the latest versions and patches and converted to Unicode. The solution delivered allows the Company to stabilize its SAP applications on a modern platform that will stay in support and maintenance until 31 December 2025.

Test environments can be created and refreshed much more rapidly than before and performance of BW and ECC was greatly improved.

The runtimes of DBA tasks such as backups, database integrity checks and restores are drastically faster than on HPUX/Oracle.

Overall system performance and response times have improved dramatically. In addition the HA/DR solution has improved and the DR solution is now easily testable.

Links & Notes

SAP & Azure Partner – BNW Consulting http://www.bnw.com.au/

Content from third party websites, SAP and other sources reproduced in accordance with Fair Use criticism, comment, news reporting, teaching, scholarship, and research.

SAP on SQL: General Update for Customers & Partners March 2017


SAP and Microsoft are continuously adding new features and functionalities to the SAP on SQL Server platform. The key objective of the SAP on Windows SQL port is to deliver the best performance and availability at the lowest TCO and simplest operation. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.

1. Urgent Kernel Update Required on 7.4x & 7.5x Systems

A bug in the SAP Kernel can cause the error message below in the SQL Server Errorlog and can cause SQL Server to freeze.

The bug in the SAP Kernel opens a cursor to read from the database, but this cursor is never closed. Eventually all worker threads are occupied and SQL Server may freeze.

It is recommended to update the SAP Kernel on any NetWeaver 7.4x or 7.5x system!

All schedulers on Node 3 appear deadlocked due to a large number of worker threads waiting on ASYNC_NETWORK_IO. Process Utilization 4%.

All schedulers on Node 3 appear deadlocked due to a large number of worker threads waiting on ASYNC_NETWORK_IO. Process Utilization 5%.

All schedulers on Node 3 appear deadlocked due to a large number of worker threads waiting on ASYNC_NETWORK_IO. Process Utilization 5%.

SAP Note 2333478 – Database cursors not closed when table buffer is full
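A leak of this kind can also be spotted from the database side. The sketch below counts the open cursors per session using sys.dm_exec_cursors (passing 0 returns the cursors of all sessions); a count that grows steadily over time for the SAP work process sessions hints at unclosed cursors:

-- Count open cursors per session
SELECT session_id, COUNT(*) AS open_cursors
FROM sys.dm_exec_cursors(0)
WHERE is_open = 1
GROUP BY session_id
ORDER BY open_cursors DESC;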

2. Very Useful SQL Server DMV & Logging for Detecting Issues

The procedures below are useful for diagnosing problems such as those described above

1. A useful script to display blocking, long running queries and how many active requests are running at any given instant:

select session_id, request_id, start_time, status,
       command, wait_type, wait_resource, wait_time, last_wait_type, blocking_session_id
from sys.dm_exec_requests
where session_id > 49
order by wait_time desc;

To show the SQL Server internal engine processes, remove the condition where session_id > 49.

2. A useful script for displaying issues related to memory consumption:

-- current memory grants per query/session
select session_id, request_time, grant_time,
       requested_memory_kb / (1024.0 * 1024) as requested_memory_gb,
       granted_memory_kb / (1024.0 * 1024) as granted_memory_gb,
       used_memory_kb / (1024.0 * 1024) as used_memory_gb,
       st.text
from sys.dm_exec_query_memory_grants g
cross apply sys.dm_exec_sql_text(sql_handle) as st
-- uncomment the where conditions as needed
-- where grant_time is not null  -- these sessions are using memory allocations
-- where grant_time is     null  -- these sessions are waiting for memory allocations

SQL Server Column Store systems with a high value for Max Degree of Parallelism can experience high memory usage

The formula for calculating the amount of memory to run a query on a Column Store table is:

Memory Grant Request in MB = ((4.2 * number of columns in the columnstore index) + 68) * Degree of Parallelism + (number of string columns * 34)
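As a worked example with assumed values: for a hypothetical columnstore index with 300 columns, 50 of them string columns, and a degree of parallelism of 8, the request is ((4.2 * 300) + 68) * 8 + (50 * 34) = 12,324 MB, i.e. roughly 12 GB for a single query. This illustrates why a high Max Degree of Parallelism can drive memory consumption up quickly.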

The DMV above is useful for diagnosing these issues. Remember to uncomment the appropriate where grant_time is null / is not null condition.

3. Diagnostic tool for collecting logs to attach to an OSS Message – Hangman.vbs

It is recommended to be familiar with running this utility. Hangman.vbs captures many useful parameters on a system and should always be run when sending OSS messages for performance or stability problems.

948633 – Hangman.vbs

2142889 – How to Run the Hangman Tool [VIDEO]

Two very good blogs explaining Hangman.vbs analysis

https://blogs.msdn.microsoft.com/saponsqlserver/2008/10/24/analyzing-a-hangman-log-file-part-1/

https://blogs.msdn.microsoft.com/saponsqlserver/2008/11/23/analyzing-a-hangman-log-file-part-2/

4. SQL Server 2016 Query Store

SQL Server Query Store is switched off by default, but it is a very useful tool for pinpointing expensive queries or queries that are sometimes fast and sometimes slow depending on input parameters (such as BUKRS – Company Code).

This feature is only available in SQL Server 2016 and higher. A good video is available on Channel9.
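As a sketch of how this could look (the database name PRD is a placeholder), Query Store is switched on per database and can then be queried through the sys.query_store_* catalog views, for example for the statements with the highest average duration:

-- Enable Query Store for the SAP database (placeholder name PRD)
ALTER DATABASE PRD SET QUERY_STORE = ON (OPERATION_MODE = READ_WRITE);

-- Top 10 statements by average duration (avg_duration is in microseconds)
SELECT TOP 10 qt.query_sql_text,
       rs.count_executions,
       rs.avg_duration / 1000.0 AS avg_duration_ms
FROM sys.query_store_query_text qt
JOIN sys.query_store_query q ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan p ON q.query_id = p.query_id
JOIN sys.query_store_runtime_stats rs ON p.plan_id = rs.plan_id
ORDER BY rs.avg_duration DESC;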

3. Very Useful Windows Perfmon Template for Diagnosing Issues

Attached to this blog is an XML template file that contains a recommended set of Perfmon counters to run on SQL Server database servers and SAP application servers. The template file has been renamed from “zsapbaseline” to “zSAPBaseline.txt”, as downloading XML files is blocked by some firewalls and proxies. After downloading this file, rename it to zSAPBaseline.xml and follow these steps:

1. Open perfmon.msc from the Run menu

2. Navigate to Data Collector Sets -> User Defined

3. Right click on User Defined -> New

4. Create from template from the zSAPBaseline.xml file

5. Ensure the properties below are set to avoid filling up the file system

6. Set the schedule so that the collector will automatically restart if the server is restarted

Note: for those who prefer graphing in Excel a *.blg file can be converted into a csv with relog -f csv inputfile.blg -o outputFile.csv

4. Please Use Windows 2016 for New Installations

Windows 2016 is now generally available for most SAP software including all releases of NetWeaver based applications from 7.00 through to 7.51.

We recommend that all new installations use this operating system. If you are unsure about the exact release status of an SAP application, post a question in this blog.

Windows 2016 is now Generally Available for SAP

5. Windows 2016 Server Hardening Recommendations

Windows 2016 does not have the Security Configuration Wizard. Many of the hardening tasks required on previous Windows releases are no longer required.

It is still recommended to:

1. Remove Internet Explorer and follow SAP Note 2055981 – Removing Internet Explorer & Remotely Managing Windows Servers

dism /online /Disable-Feature /FeatureName:Internet-Explorer-Optional-amd64

2. Always activate the Windows Firewall and configure ports via Group Policy Object

3. Review the security resources here: https://www.microsoft.com/en-us/cloud-platform/windows-server-security

4. Use the Security Baseline for Windows Server 2016

6. SQL Server 2016 Backup to URL Settings

To back up a database to URL (Azure Blob), especially for a VLDB, please use the following best practices:

1. Back up to multiple URL targets. Prior to SQL Server 2016, only one URL target was supported.

2. Specify MAXTRANSFERSIZE = 4194304 to tell SQL Server to use 4MB as the maximum transfer size. Without this parameter, most of the network I/O to Azure blob storage is done in 1MB blocks. The test below shows that this can reduce the number of blocks consumed by about 70%.

3. Use COMPRESSION to reduce the number of block write requests to Azure blob storage. The test below shows this option can reduce the backup size by about 65% and make the backup 2-4 times faster. However, please be aware of the CPU usage when using compression. If the CPU usage on your server is already very high (say, >80%), please monitor CPU usage closely when using compression.

4. You can also increase BUFFERCOUNT to increase backup throughput if your target storage account has enough bandwidth. Values between 20 and 500 are reasonable; choose the one that best meets your needs.

Please note that the issue below has been reported when backing up a VLDB to Azure blob storage:

DBCC execution completed. If DBCC printed error messages, contact your system administrator.

10 percent processed.

20 percent processed.

30 percent processed.

Msg 3202, Level 16, State 1, Line 78

Write on "https://customer.blob.core.windows.net/dbw-db-backups/DBW_20170102111029_FULL_X22.bak" failed: 1117(The request could not be performed because of an I/O device error.)

Msg 3013, Level 16, State 1, Line 78

BACKUP DATABASE is terminating abnormally.

If you enable tracing with the trace command below:

DBCC TRACEON(3004, 3051, 3212,3014, 3605, 1816)

(To turn the trace off, run "DBCC TRACEOFF(3004, 3051, 3212, 3014, 3605, 1816)".)

You can then get more diagnostic information in errorlog:

Write to backup block blob device https://storageaccount.blob.core.windows.net/backup/xx.bak failed. Device has reached its limit of allowed blocks.

The above error occurs because an Azure block blob has a limit of 50,000 blocks. Using the best practices above helps to avoid this error.
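To put numbers on this limit: with 1MB blocks, a single block blob tops out at about 50,000 x 1MB ≈ 48GB of backup data, while MAXTRANSFERSIZE = 4194304 raises the ceiling to about 50,000 x 4MB ≈ 195GB per URL. Striping over 8 URLs as in the example script below therefore supports uncompressed backups of roughly 1.5TB, and considerably larger databases once COMPRESSION is added.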

The testing results below are provided for your reference. The test database is small (about 2GB), so the behavior may differ if your database is very large, but the above best practices still apply.

“Non-TDE database” means the database is without TDE; “TDE database” means the database is encrypted with TDE.

Backup to URL (non-TDE database)       | 64KB network IO | 1MB network IO | 4MB network IO | Target backup size | Backup duration | Backup speed
No option specified                    | 40              | 752            | –              | 761MB              | 85 seconds      | 8.9 MB/sec
MAXTRANSFERSIZE = 4194304              | 39              | –              | 185            | 764MB              | 129 seconds     | 5.86 MB/sec
MAXTRANSFERSIZE = 4194304, COMPRESSION | 5               | –              | 67             | 269MB              | 27 seconds      | 27.9 MB/sec

Backup to URL (TDE database)           | 64KB network IO | 1MB network IO | 4MB network IO | Target backup size | Backup duration | Backup speed
No option specified                    | 40              | 752            | –              | 761MB              | 73 seconds      | 10.3 MB/sec
MAXTRANSFERSIZE = 4194304              | 39              | –              | 185            | 764MB              | 80 seconds      | 9.4 MB/sec
MAXTRANSFERSIZE = 4194304, COMPRESSION | 53              | –              | 67             | 271MB              | 33 seconds      | 22.5 MB/sec

The backup command below is provided for your reference:

DECLARE @Database varchar(3)
DECLARE @BackupPath varchar(100)
DECLARE @TimeStamp varchar(15)
DECLARE @Full_Filename1 VARCHAR(300)
DECLARE @Full_Filename2 VARCHAR(300)
DECLARE @Full_Filename3 VARCHAR(300)
DECLARE @Full_Filename4 VARCHAR(300)
DECLARE @Full_Filename5 VARCHAR(300)
DECLARE @Full_Filename6 VARCHAR(300)
DECLARE @Full_Filename7 VARCHAR(300)
DECLARE @Full_Filename8 VARCHAR(300)

SET @Database = 'DBW'
SET @BackupPath = 'https://customer.blob.core.windows.net/' + Lower(@Database) + '-db-backups/'
SET @TimeStamp = REPLACE(CONVERT(VARCHAR(10), GETDATE(), 112), '/', '')
               + REPLACE(CONVERT(VARCHAR(10), GETDATE(), 108), ':', '')
SET @Full_Filename1 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X1.bak'
SET @Full_Filename2 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X2.bak'
SET @Full_Filename3 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X3.bak'
SET @Full_Filename4 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X4.bak'
SET @Full_Filename5 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X5.bak'
SET @Full_Filename6 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X6.bak'
SET @Full_Filename7 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X7.bak'
SET @Full_Filename8 = @BackupPath + @Database + '_' + @TimeStamp + '_FULL_X8.bak'

-- Backup database to 8 URL targets with 4MB transfer size and compression
BACKUP DATABASE @Database TO
  URL = @Full_Filename1,
  URL = @Full_Filename2,
  URL = @Full_Filename3,
  URL = @Full_Filename4,
  URL = @Full_Filename5,
  URL = @Full_Filename6,
  URL = @Full_Filename7,
  URL = @Full_Filename8
WITH STATS = 10, FORMAT, MAXTRANSFERSIZE = 4194304, COMPRESSION
GO


The SQLCAT team provides a blog on backing up a VLDB here.

Thanks to Simon Su, Escalation Engineer, Microsoft CSS, Asia Pacific & Greater China for contributing this article on Backup to URL after resolving this problem for a customer.

7. Obsolete Windows Server, SQL Server & SAP Kernels – Please Upgrade

The majority of SAP on Windows and SQL Server customers have modernized their operating system and database releases to at least Windows 2012 R2 and SQL Server 2012. However, there are some customers running very old operating system and database releases. Here is a list:

1. Windows 2003 – now over 14 years old and out of support by both Microsoft and SAP. Please update immediately! 2135423 – Support of SAP Products on Windows Server 2003 after 14-Jul-2015

2. SQL Server 2005 – out of support by Microsoft since last year. Please update immediately!

3. Windows 2008 & Windows 2008 R2 – both of these operating system versions are approaching end of life soon and we recommend planning to upgrade to the latest available operating system

4. SQL Server 2008 & SQL Server 2008 R2 – both these database versions are near end of life and we recommend upgrading to SQL Server 2014 or SQL Server 2016. Customers running SAP BW can benefit from performance gains measured in many hundreds of percent due to improvements in modern SQL Server versions

5. SAP Kernels 7.00, 7.01, 7.21, 7.40 and 7.42 are end of life. It is recommended to run either the 7.22_EXT or 7.49 kernel as of March 2017

Recommendation: As of March 2017, deploy Windows 2016 with all the latest updates & SQL Server 2016 with the latest Service Pack & Cumulative Update. SQL Server patches can be downloaded from here.

2254428 – Error while upgrading to SAP NetWeaver 7.5 based systems: OS version 6.1 out of range (too low)

Downward Compatible Kernel Documentation:

2350788 – Using kernel 7.49 instead of kernel 7.40, 7.41, 7.42 or 7.45

2133909 – SAP Kernel 722 (EXT): General Information and Usage

8. Windows Server 2016 Cloud Witness – How to Change the Port Number Used

Windows 2016 includes a very useful feature called Cloud Witness.

Cloud Witness is an enhancement of the previous File Share Witness and offers much better cost and functionality.

Cloud Witness is implemented inside the process rhs.exe and issues an https call on port 443 to the address <storage-account-name>.blob.core.windows.net.

Some customers with high security requirements have requested a process to change the default port used by Cloud Witness.

rhs.exe can be routed via a proxy server. To do this run the following command:

netsh winhttp set proxy proxy-server="https=SERVER:PORT"

To configure the settings to enable PowerShell or the UI to work, enable the .net proxy settings. The easiest way to do this is by setting the proxy settings in Control Panel -> Internet Options (Internet Explorer should already be removed)

If additional security is required the address <storage-account-name>.blob.core.windows.net can be added to the firewall whitelist

The proxy server and any other supporting infrastructure (such as firewalls) become critical to the quorum calculation if the default behavior of the Cloud Witness is changed, meaning a failure of the proxy server could cause the cluster to lose majority, which would deliberately take the cluster role offline.

9. Important Notes for SAP BW on SQL Server Customers & Column Store on SAP ERP Systems

SAP BW on SQL Server Customers can benefit from several new technologies.

1. SAP BW 7.00 to 7.50 customers – SQL Server Column Store on F Fact and E Fact tables
https://launchpad.support.sap.com/#/notes/2116639

2. SAP BW 7.40 to 7.50 customers – SQL Server Flat Cube

3. SAP BW 7.50 SPS 04 or higher can leverage FEMS pushdown

Additional performance improvements for SAP BW are documented here

It is recommended to update the report MSSCOMPRESS. This blog discusses the new version of this report

10. SQL Server AlwaysOn Post Installation Steps – SWPM

The SAP Installation tools are not fully aware of SQL Server AlwaysOn. In general it is recommended to install SAP applications prior to establishing an AlwaysOn availability group.

After adding an AlwaysOn replica the following SAPInst option can be run to create the required users and jobs.

[Screenshot: example high-level install procedure for installing a NetWeaver system]

Note: This option needs to be run on each replica while the replica is online as the Primary node. SAP applications must be shutdown prior to running this option.

Review this Blog – Script sap_synchronize_always_on can be used

https://blogs.msdn.microsoft.com/saponsqlserver/2016/02/25/always-on-synchronize-sap-login-jobs-and-objects/

SAP Note 1294762 – SCHEMA4SAP.VBS

SAP Note 683447 – SAP Tools for MS SQL Server

11. SQL Server 2016 SP1 – Transaction Log Writing to NVDIMM

It is possible to drastically speed up SQL Server transaction log write performance by using NVDIMMs.

This is a new technology released in SQL Server 2016 SP1.

Customers who have an ultra-low downtime requirement for OS/DB migrations may wish to test this new capability.

Information about this feature can be found below

https://blogs.msdn.microsoft.com/psssql/2016/04/19/sql-2016-it-just-runs-faster-multiple-log-writer-workers/

https://blogs.msdn.microsoft.com/bobsql/2016/11/08/how-it-works-it-just-runs-faster-non-volatile-memory-sql-server-tail-of-log-caching-on-nvdimm/

Recommended Notes & Links

555223 – FAQ: Microsoft SQL Server in NetWeaver based systems

1676665 – Setting up Microsoft SQL Server 2012

1966701 – Setting up Microsoft SQL Server 2014

2201060 – Setting up Microsoft SQL Server 2016

1294762 – SCHEMA4SAP.VBS

1744217 – MSSQL: Improving the database performance

2447884 – VMware vSphere with VMware Tools 9.10.0 up to 10.1.5: Performance Degradation on Windows

2381942 – Virtual Machines Hanging with VMware ESXi 5.5 p08 and p09

2287140 – Support of Failover Cluster Continuous Availability feature (CA)

2046718 – Time Synchronization on Windows

2325651 – Required Windows Patches for SAP Operations

2438832 – Network problems if firewall used between database and application servers

1911507 – Background Jobs canceled during failover of (A)SCS instance in windows failover cluster

Netsh config

https://parsiya.net/blog/2016-06-07-windows-netsh-interface-portproxy/

Content from third party websites, SAP and other sources reproduced in accordance with Fair Use criticism, comment, news reporting, teaching, scholarship, and research

More Questions From Customers About SQL Server Transparent Data Encryption – TDE + Azure Key Vault


Recently many customers have been moving from AIX and HPUX to Windows 2016 & SQL Server 2016 running on Azure, as these UNIX platforms are no longer mainstream and are no longer developed or invested in by their vendors.

Most of these customers are deploying TDE to protect the database files and backups. Encrypting databases with strong ciphers like AES-256 is a highly effective way to prevent theft of data; consequently, great care must be taken with the keys. If there is a DR event or some need to restore the database and the keys cannot be found, the database is for all practical purposes lost. It is not possible to decrypt AES-256 by “brute force” methods.

To prevent this from happening we recommend leveraging the Azure Key Vault to securely store the SQL Server TDE keys

Many readers of this blog and customers have asked for an end to end process for a new SAP installation or migration on SQL 2016 on Azure with AlwaysOn with TDE using the Azure Key Vault.

Before reviewing the rest of this blog topic it is recommended to fully review this link https://msdn.microsoft.com/en-us/library/mt720686.aspx

Note: Using SQL Server TDE while storing SQL datafiles on BitLocker or Azure ADE encrypted disks is not tested and is not recommended due to performance concerns.

Prerequisites:

1. Segregate duties between the DBA and the Azure Key Manager. The DBA should not have access to the Azure Key Vault and the Key Administrator should not have access to SQL Server databases and backups

2. Ensure Azure Active Directory has been setup (most commonly this is integrated with on-premises Active Directory)

3. Ask the Key Administrator to assist with the Key Vault steps

4. Download the Azure Key Vault Integration

Before proceeding it is essential to read this documentation and understand the following process flow. More information is here

Implementation:

5. Register the SQL Server Application in Azure Active Directory

Open the ASM portal https://manage.windowsazure.com and navigate to the “Active Directory” service

Click on the Directory Service (either the default directory or if configured the integrated directory). Then click on “Applications”

The values in the URL/URI can be any value so long as the site is available

After creating the Azure Active Directory Application, click on the configure tab and note the Client ID and the Client Secret

Note: Some documentation and the PowerShell scripts refer to the “Client ID” as the “ServicePrincipalName”. In this procedure they are the same, which is a potential source of confusion.

Create the Secret with either 1 or 2 years duration under the “keys” section of the Configuration Tab of the Applications menu in Azure Active Directory

6. Create the vault, master key and authorize SQL Server to access the Key

Grant the Client ID (ServicePrincipalName) permissions to get, list, wrapKey and unwrapKey on the Key Vault that already exists or has just been created

Set-AzureRmKeyVaultAccessPolicy -VaultName SAPKeyVault -ServicePrincipalName 2db602bd-4a4b-xxxx-xxxx-d128c143c8a9 -PermissionsToKeys get,list,wrapKey,unwrapKey

Check permissions on the Key Vault with the following command. The application registered in Azure Active Directory can be seen highlighted below

Get-AzureRmKeyVault -VaultName SAPKeyVault


Create the Key with the following command:

Add-AzureRmKeyVaultKey -VaultName 'SAPKeyVault' -Name 'SAPonSQLTDEKey' -Destination 'Software'

Alternatively a Key can be created via the Azure Portal as shown below

7. Create the database in advance in SQL Management Studio. Make the database size at creation large enough for the installation or import plus a few months of growth. Provided the database is created with the DB name = <SID>, SAPInst will recognize it as the installation target database.
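A minimal sketch of this step, assuming the SID PRD and purely illustrative file paths and sizes (a real SAP database would typically be laid out with several equally sized data files):

-- Pre-create the target database, sized for the import plus a few months of growth
CREATE DATABASE PRD
ON PRIMARY
   (NAME = PRDDATA1, FILENAME = 'E:\PRDDATA1\PRDDATA1.mdf', SIZE = 200GB),
   (NAME = PRDDATA2, FILENAME = 'F:\PRDDATA2\PRDDATA2.ndf', SIZE = 200GB)
LOG ON
   (NAME = PRDLOG1, FILENAME = 'L:\PRDLOG1\PRDLOG1.ldf', SIZE = 50GB);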

8. Set the database recovery model to SIMPLE

9. Enable TDE with this command

-- Enable advanced options.
USE master;
GO
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO

-- Enable EKM provider
sp_configure 'EKM provider enabled', 1;
GO
RECONFIGURE;
GO

-- Create a cryptographic provider, using the SQL Server Connector
-- which is an EKM provider for the Azure Key Vault. This example uses
-- the name AzureKeyVault_EKM_Prov.

On all releases of SQL Server it is still required to download and install the SQL Server Connector for Microsoft Azure Key Vault.

After installation of the connector, run this command:

CREATE CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM_Prov
FROM FILE = 'C:\Program Files\SQL Server Connector for Microsoft Azure Key Vault\Microsoft.AzureKeyVaultService.EKM.dll';
GO

The next part is quite tricky. The Secret in this command is the Client ID (referenced as the ServicePrincipalName) with the hyphens removed, concatenated with the Secret from the Azure Active Directory Application.

Example:

Azure Active Directory Application Client ID = 2db602bd-4x4x-4322-8xxf-d128c143c8a9

Azure Active Directory Application Secret = FZCzXY3K8RpZoK12MxF/WFxxAw6aOxxPU2ixxEkQBbc=

Step A: remove the hyphens 2db602bd-4x4x-4322-8xxf-d128c143c8a9 -> 2db602bd4x4x43228xxfd128c143c8a9

Step B: concatenate Client ID (minus hyphens) and Secret = 2db602bd4x4x43228xxfd128c143c8a9FZCzXY3K8RpZoK12MxF/WFxxAw6aOxxPU2ixxEkQBbc=

******* NEXT STEP

USE master;
CREATE CREDENTIAL sysadmin_ekm_cred
WITH IDENTITY = 'SAPKeyVault', -- for public Azure
-- WITH IDENTITY = 'ContosoDevKeyVault.vault.usgovcloudapi.net', -- for Azure Government
-- WITH IDENTITY = 'ContosoDevKeyVault.vault.azure.cn', -- for Azure China
-- WITH IDENTITY = 'ContosoDevKeyVault.vault.microsoftazure.de', -- for Azure Germany
SECRET = '2db602bd4a4b43228d7fd128c143c8a9fhEP5adz9FTrx2Nt4N36HGxxxx1X0Lo5VcTyJRxte7E='
FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM_Prov;

-- Add the credential to the SQL Server administrator's domain login.
-- The login needs to already exist. This would typically be the DBA or SAP <sid>adm user.
ALTER LOGIN [SQLTDETEST\cgardin] ADD CREDENTIAL sysadmin_ekm_cred;

******* NEXT STEP

-- While logged in as the DBA or SAP <sid>adm run this command.
-- This may not work if logged in as another user.
CREATE ASYMMETRIC KEY SAP_PRD_KEY
FROM PROVIDER [AzureKeyVault_EKM_Prov]
WITH PROVIDER_KEY_NAME = 'SAPonSQLTDEKey',
CREATION_DISPOSITION = OPEN_EXISTING;

******* NEXT STEP

USE master;
CREATE CREDENTIAL Azure_EKM_TDE_cred
WITH IDENTITY = 'SAPKeyVault', -- for public Azure
-- WITH IDENTITY = 'ContosoDevKeyVault.vault.usgovcloudapi.net', -- for Azure Government
-- WITH IDENTITY = 'ContosoDevKeyVault.vault.azure.cn', -- for Azure China
-- WITH IDENTITY = 'ContosoDevKeyVault.vault.microsoftazure.de', -- for Azure Germany
SECRET = '2db602bd4a4b43228d7fd128c143c8a9fhEP5adz9FTrx2Nt4N36HGxxxb1X0Lo5VcTyJRxte7E='
FOR CRYPTOGRAPHIC PROVIDER AzureKeyVault_EKM_Prov;

******* NEXT STEP

USE master;

-- Create a SQL Server login associated with the asymmetric key
-- for the Database Engine to use when it loads a database
-- encrypted by TDE.
CREATE LOGIN TDE_Login
FROM ASYMMETRIC KEY SAP_PRD_KEY;
GO

-- Alter the TDE Login to add the credential for use by the
-- Database Engine to access the key vault.
ALTER LOGIN TDE_Login
ADD CREDENTIAL Azure_EKM_TDE_cred;
GO

******* NEXT STEP

USE PRD;
GO

CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER ASYMMETRIC KEY SAP_PRD_KEY;
GO

-- Alter the database to enable transparent data encryption.
ALTER DATABASE PRD
SET ENCRYPTION ON;
GO

******* NEXT STEP

USE master;
SELECT * FROM sys.asymmetric_keys;

-- Check which databases are encrypted using TDE
SELECT d.name, dek.encryption_state
FROM sys.dm_database_encryption_keys AS dek
JOIN sys.databases AS d
  ON dek.database_id = d.database_id;

11. Continue this procedure only when the encryption_state = 3 (2 means encryption is still in progress, 3 means the database is encrypted)

Even a blank database with no data will take some time to encrypt. The reason is that “nothing” is encrypted using a symmetric key, and the original “nothing” or null value is represented by a completely random value. All of the above steps can be done prior to an SAP OS/DB migration, and therefore these steps do not increase downtime.

12. Run SWPM to install or migrate the SAP NetWeaver system

13. Complete post processing as per the SAP System Copy Guide

14. Set the SQL Server database recovery model to FULL

15. Start a full database backup

16. Copy the database backup file to a location where AlwaysOn Replica #1 can restore the file

17. Run the commands from step 9 up to and including the “ALTER LOGIN TDE_Login” step in this procedure to install the TDE key on Replica #1 [repeat on each AlwaysOn replica node]

18. Restore the database on AlwaysOn Replica #1

19. Configure the Azure Internal Load Balancer – ILB if this has not already been done in advance (ensure Direct Server Return is enabled)

20. The AlwaysOn Availability Group Wizard does not work with TDE databases, so it is not possible to use the wizard to set up AlwaysOn

These two blogs discuss how to setup AlwaysOn on TDE databases

In these blogs, ignore the key management procedures, as in this scenario the keys are stored in Azure and not locally. The T-SQL to create the AlwaysOn Availability Group is the same.

https://blogs.msdn.microsoft.com/alwaysonpro/2015/01/07/how-to-add-a-tde-encrypted-database-to-an-availability-group/

https://blogs.msdn.microsoft.com/sqlserverfaq/2013/11/22/how-to-configure-always-on-for-a-tde-database/

21. Test failover by running the Failover wizard in SSMS

22. Run the step listed in topic #10 of the March 2017 general update blog to create users on the new replica node (SAPInst would have already performed this activity as part of the install or migration on the primary node)

23. Check access to the database with a simple query SELECT * FROM <sid>.T000;

24. Change the default.pfl value dbs/mss/server = <primary node hostname> to dbs/mss/server = <alwayson listener name> (for Java systems use ConfigTool)

25. Start the SAP application servers and run SICK

26. Run the Always On failover wizard again to test failover and failback.

Note: Azure Key Vault integration for SQL Server TDE requires these hosts and ports to be whitelisted

login.microsoftonline.com/*:443
*.vault.azure.net/*:443

If any problems are observed check the contents of the trace file dev_w0. The contents of the tracefile should look something like:

M Fri Mar 24 22:37:40 2017

M calling db_connect …

B Loading DB library ‘C:\usr\sap\PRD\DVEBMGS00\exe\dbmssslib.dll’ …

B Library ‘C:\usr\sap\PRD\DVEBMGS00\exe\dbmssslib.dll’ loaded

B Version of ‘C:\usr\sap\PRD\DVEBMGS00\exe\dbmssslib.dll’ is “745.04”, patchlevel (0.201)

C Callback functions for dynamic profile parameter registered

C Warning: Env(MSSQL_SERVER) [<LISTENER>,<PORT>;MultiSubnetFailover=YES] <> Prof(dbs/mss/server) [<LISTENER>,<PORT>;MultiSubnetFailover=YES]

C Thread ID:15964

C Thank You for using the SLODBC-interface

C Using dynamic link library ‘C:\usr\sap\PRD\DVEBMGS00\exe\dbmssslib.dll’

C 7450 dbmssslib.dll patch info

C SAP patchlevel 0

C SAP patchno 201

C Last MSSQL DBSL patchlevel 0

C Last MSSQL DBSL patchno 201

C Last MSSQL DBSL patchcomment SAP Support Package Stack Kernel 7.45 Patch Level 201 (2340627)

C ODBC Driver chosen: ODBC Driver 13 for SQL Server native

C Network connection used from <APPSERVER> to <LISTENER>,<PORT>;MultiSubnetFailover=YES using tcp: <LISTENER>,<PORT>;MultiSubnetFailover=YES

Using Columnstore on ERP tables


SQL Server columnstore indexes are optimized for aggregations of large amounts of data. Therefore, they have been used successfully in SAP’s data warehouse system SAP BW for years. ERP systems typically still use rowstore (b-tree) indexes, because these are optimized for the most common data access pattern of ERP systems: directly reading a few rows specified by very selective filters or the primary key. However, there are also reporting queries in ERP systems which have to access a large number of rows. Such queries would benefit from a columnstore index, too.

When talking about ERP below, we mean all non-BW products of the SAP Business Suite like ERP or CRM.

Since we released our first version of SQL Server columnstore support in 2012, we have constantly received requests from SAP customers for using the columnstore on an ERP system, too. This was not possible for various technical reasons in SQL Server 2012 and 2014. This restriction is gone in SQL Server 2016; see also https://blogs.msdn.microsoft.com/saponsqlserver/2016/11/11/sql-server-2016-improvements-for-sap-bw. You can now create an additional Nonclustered Columnstore Index (NCCI) on an SAP ERP table, which results in a hybrid data storage: the table itself, the primary key and all existing indexes stay in row format (b-trees). Only the new, additional index is stored in columnar format.

Customer Scenarios

SAP will not deliver columnstore indexes on ERP tables as part of an SAP installation or upgrade. The NCCI is intended as a tuning option for specific customer scenarios. Selecting suitable ERP tables and testing the impact of the NCCI is a consulting project. An NCCI will certainly improve reporting performance. However, it may have a negative impact on transactional throughput: every modification of a data record has to be applied to the columnstore, too.

For cubes of an SAP BW system, the columnstore index replaces several rowstore indexes, which results in disk space savings. This is possible because we know exactly the workload of SAP BW cubes. However, for ERP systems, we cannot tell you exactly which indexes are unneeded. Each customer implements different customizing and has a different workload mix. Therefore, the NCCI is intended as an optional, additional index.

Using the columnstore in an SAP ERP system is not intended as a replacement for a dedicated SAP BW system. SAP BW is fully optimized for reporting queries. A distinct SAP BW system separates the reporting workload from the transactional workload of SAP ERP.

Creating a Columnstore Index

After applying SAP Note 2419662, you can create one NCCI per table using SAP report MSS_CS_CREATE. This columnstore index always contains all fields of the table (with a few exceptions, e.g. IMAGE fields). Report MSS_CS_CREATE has only three parameters: table name, index name and degree of parallelism, which defines the number of logical CPUs used for the index creation.
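On the database side, the effect of the report corresponds roughly to the following statement. This is a minimal sketch only: the table name VBAP and the index name are placeholders, and the real DDL is generated by MSS_CS_CREATE, which also maintains the DDIC definition described below.

-- Hypothetical example of an NCCI covering all columns of an ERP table
CREATE NONCLUSTERED COLUMNSTORE INDEX [VBAP~CS]
ON [VBAP] ([MANDT], [VBELN], [POSNR] /* ...all further columns except LOB fields... */)
WITH (MAXDOP = 8);  -- the degree of parallelism chosen in the report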

[Figure: erp – report MSS_CS_CREATE]

You can schedule report MSS_CS_CREATE as a batch job. The NCCI is always created offline, meaning concurrent row modifications on the same table are blocked while the NCCI is being created. SQL Server 2016 does not support the online creation of an NCCI. This feature is planned for the next version of SQL Server.

Integration in SAP DDIC

All indexes in SAP are defined in the SAP Data Dictionary (DDIC). Unfortunately, an index in DDIC is restricted in the number of columns and the number of bytes per index row. Therefore, we had to trick the DDIC somewhat: for an NCCI, the index columns in DDIC and on the database do not always match. However, you do not have to take care of this: the new SAP report MSS_CS_CREATE creates the NCCI on the database. At the same time, it creates a DDIC definition for the NCCI which fulfills all DDIC requirements.

DDIC does not know anything about the columnstore property of an index (it is stored as a DDSTORAGE parameter). This results in a restriction for creating the NCCI in SAP: you cannot transport an NCCI from the development system to the production system. Instead, you have to create the NCCI on both systems separately using report MSS_CS_CREATE.

Best practices

Columnstore indexes are only useful for large tables. Therefore, you should not even consider creating an NCCI on an ERP table with fewer than 10 million rows. As a matter of course, an NCCI is only useful on tables that are used in long-running, complex reporting queries. Ideally, these tables have a low or moderate rate of change.

For best reporting performance, you should make sure that all columnstore rowgroups are compressed. The concepts of columnstore rowgroups and the procedure of rowgroup compression are described in https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/24/concepts-of-sql-server-2014-columnstore. For SAP BW, rowgroup compression is performed as a final step during data load. In SAP ERP, there is no separate data load phase. Instead, ERP tables are updated all the time during normal working hours. If you have a dedicated time window for your reporting, you might run the columnstore rowgroup compression right before running your reports (which are supposed to use the NCCI). For this purpose, you can use report MSSCOMPRESS as described in https://blogs.msdn.microsoft.com/saponsqlserver/2016/11/25/improved-sap-compression-tool-msscompress.

[Figure: erp2 – report MSSCOMPRESS]

There are two options in report MSSCOMPRESS for processing columnstore rowgroups:

  • When choosing “Compress Rowgroups”, an ALTER INDEX REORGANIZE command (with the option COMPRESS_ALL_ROW_GROUPS = ON) is performed if (and only if) there are uncompressed rowgroups in the columnstore index of the selected table.
  • When choosing “Force CS Reorganize” in addition, the ALTER INDEX REORGANIZE command is always performed. Thereby, small rowgroups are merged (in addition to the rowgroup compression of open rowgroups).

For very large SAP ERP tables (several hundred million rows), rowgroup compression is not as important as for SAP BW. Since an SAP ERP table is never partitioned, there can be at most one million uncompressed rows. On the other hand, ALTER INDEX REORGANIZE also optimizes already compressed rowgroups when many UPDATEs and DELETEs have been executed before. Therefore, you might run the rowgroup compression as a periodic SAP batch job using the scheduler in report MSSCOMPRESS.
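As a minimal sketch, the two MSSCOMPRESS options translate to a statement like the following (table and index names are placeholders); the DMV query below it shows how to check whether open (uncompressed) rowgroups exist at all:

-- Compress open rowgroups; with "Force CS Reorganize" this is always executed
-- and additionally merges small rowgroups
ALTER INDEX [VBAP~CS] ON [VBAP]
REORGANIZE WITH (COMPRESS_ALL_ROW_GROUPS = ON);

-- Overview of rowgroup states for one table (SQL Server 2016 DMV)
SELECT state_desc, COUNT(*) AS rowgroup_count, SUM(total_rows) AS row_total
FROM sys.dm_db_column_store_row_group_physical_stats
WHERE object_id = OBJECT_ID('VBAP')
GROUP BY state_desc;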

Conclusion

The third generation of the SQL Server columnstore provides many improvements. You can now use the columnstore even for SAP ERP systems. Therefore, it is highly recommended to upgrade to SQL Server 2016.

Transparent Data Encryption (TDE) acceleration for SQL 2016 in Windows Azure


Today we want to show you the speed improvements gained by supporting the Intel AES-NI instruction set for transparent data encryption (TDE) on Windows Azure. This instruction set reduces the CPU overhead of turning on Transparent Data Encryption for SQL Server databases.

For the testing scenario we used an Azure DS15 virtual machine with 40 CPUs and 140 GB of memory. All 16 disk drives were SSDs, with the log drive a RAID 1 over 2 SSDs. The tests were performed against SQL Server 2014 and SQL Server 2016 with a 1 TB SAP database. All four encryption algorithms (AES_128, AES_192, AES_256 and TRIPLE_DES) were used. For TRIPLE_DES on SQL Server 2016, the database has to be in compatibility mode for SQL Server 2014 (120) (*), as this algorithm was deprecated in SQL Server 2016.
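For reference, enabling TDE with one of these algorithms follows the usual T-SQL pattern shown below. This is a minimal sketch: the database name PRD, the certificate name and the password are placeholders, and in practice the certificate must be backed up before encrypting anything.

USE master;
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password>';
CREATE CERTIFICATE TDE_Cert WITH SUBJECT = 'TDE certificate';
USE PRD;
CREATE DATABASE ENCRYPTION KEY
WITH ALGORITHM = AES_256
ENCRYPTION BY SERVER CERTIFICATE TDE_Cert;
ALTER DATABASE PRD SET ENCRYPTION ON;

-- Monitor the progress of the encryption scan
SELECT DB_NAME(database_id) AS database_name, encryption_state, percent_complete
FROM sys.dm_database_encryption_keys;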

[Figure: runtimes – encryption and decryption runtimes per algorithm]

This graph shows that the encryption and decryption of this 1 TB database always ran faster on SQL Server 2016 ((*) except for the TRIPLE_DES algorithm, which is only available in SQL Server 2014 compatibility mode on SQL Server 2016). The decrease in runtime goes up to 67% (AES_192 decryption); the average is 38% without TRIPLE_DES (22.5% with TRIPLE_DES). This means SQL Server 2016 improves the encryption/decryption speed by 38% just by making use of the Intel AES-NI instruction set.

For the load test we used a DBCC CHECKDB run and a DBCC CHECKDB run with the physical_only option. These were the measured run times:

[Figure: dbccpersql – DBCC run times per algorithm]

One can see that the run times on SQL Server 2016 (green) are much shorter, due to the hardware support of the Intel AES-NI instruction set and the changes for DBCC in SQL Server 2016. The execution times for the three AES algorithms hardly depend on the algorithm, and they are nearly the same whether the database is encrypted or not. Even the deprecated TRIPLE_DES algorithm is faster on SQL Server 2016 than the default algorithm AES_256 on SQL Server 2014.
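For reference, the two test variants correspond to the following commands (the database name is a placeholder):

DBCC CHECKDB ('PRD');                        -- full logical and physical checks
DBCC CHECKDB ('PRD') WITH PHYSICAL_ONLY;     -- reduced, page-level checks only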

Averaging over all the algorithms and the 4 executions (normal, physical_only, encrypted, encrypted and physical_only), the picture is even clearer:

[Figure: DBCC Averages]

The difference between the encrypted and the unencrypted run (normal or physical_only) is much higher on SQL Server 2014 than on SQL Server 2016: in our test, SQL Server 2014 needed an hour more (1:27 h to 2:22 h, a 38.8% increase) for the encrypted case, whereas SQL Server 2016 only needed 10 minutes more (0:49 h to 0:59 h, a 16.9% increase). The overhead added by TDE (the difference between blue and gray, or between orange and yellow) is much smaller in SQL Server 2016 than in SQL Server 2014.

SQL Server 2016 is able to detect and leverage the Intel AES-NI instruction set on the Azure virtual machine and thus cut the overhead of transparent data encryption in half.


SAP on Azure: General Update for Customers & Partners April 2017


SAP and Microsoft are continuously adding new features and functionalities to the Azure cloud platform. The objective of the Azure cloud platform is to provide the same performance, product availability/support matrix and availability as on-premises solutions, with the added flexibility of the cloud. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.

1. SQL Server Multiple Availability Groups on Azure

The Azure platform requires an Internal Load Balancer (ILB) to support clustering for both Linux and Windows high availability solutions.

Previously there was a limit of one IP address per ILB. This limit has been removed, and up to 30 IP addresses can now be balanced on a single ILB. There is also a limit of 150 port rules per ILB.

This means that it is now possible to consolidate multiple AlwaysOn Listeners onto two or more cluster nodes in the same manner as an on-premises deployment.

Before deploying such a configuration it is highly recommended to make a detailed diagram and plan resources such as:

a. Premium Disk design, Number of SQL Datafiles, NTFS format size and whether to use SQL Datafiles stored directly in Azure blob storage

b. Cluster Quorum model and votes

c. Physical and virtual hostnames and IP addresses

d. AlwaysOn replica configuration (such as auto-failover nodes, synchronous and asynchronous replicas)

e. Document the Port that the SQL Server AlwaysOn Listener will use for each Availability Group

IP Address Name | IP Address Number | Hostname | Port | Probe Port | Comment
Host1 – Physical IP | xx.xx.xx.10 | Host1 | | | IP assigned to SQL Node 1
Host2 – Physical IP | xx.xx.xx.11 | Host2 | | | IP assigned to SQL Node 2
Host3 – Physical IP | yy.yy.yy.12 | Host3 | | | IP assigned to SQL Node 3 in DR DC
Virtual IP for SQL Listener 1 | xx.xx.xx.100 | SAPDB1 | 56000 | 59998 | Virtual IP created by cluster for SQL AG #1 in Primary DC [assigned to ILB]
Virtual IP for SQL Listener 2 | xx.xx.xx.101 | SAPDB2 | 56001 | 59999 | Virtual IP created by cluster for SQL AG #2 in Primary DC [assigned to ILB]
Virtual IP for Windows Cluster | xx.xx.xx.1 | SAPCLUDB1 | | | Virtual IP for internal cluster in Primary DC [not assigned to ILB]
Virtual IP for SQL Listener 1 | yy.yy.yy.100 | SAPDB1 | 56000 | | Virtual IP created by cluster for SQL AG #1 in DR DC
Virtual IP for SQL Listener 2 | yy.yy.yy.101 | SAPDB2 | 56001 | | Virtual IP created by cluster for SQL AG #2 in DR DC
Virtual IP for Windows Cluster | yy.yy.yy.1 | SAPCLUDB1 | | | Virtual IP for internal cluster in DR DC

After careful planning, the ILB can be configured using the PowerShell script found in this link:

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sql/virtual-machines-windows-portal-sql-ps-alwayson-int-listener

https://blogs.msdn.microsoft.com/igorpag/2016/01/25/configure-an-ilb-listener-for-sql-server-alwayson-availability-groups-in-azure-arm/

https://blogs.msdn.microsoft.com/sql_pfe_blog/2017/02/21/trouble-shooting-availability-group-listener-in-azure-sql-vm/

Note: it is recommended to set the SQL Server max memory parameter in this configuration. It is also recommended to enable Direct Server Return (called Floating IP in the Azure Portal). Similar functionality to stack multiple instances on a single VM is also available on other DBMS platforms. If there are 2 AlwaysOn nodes in DR, then another separate ILB is required in the DR datacenter. Probe ports must be unique per ILB, but the same probe port number can be reused on different ILBs.
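One way to cross-check the planned listener names and ports from the table above against the actual Availability Group configuration is a query like the following, run on one of the SQL Server nodes:

-- List all Availability Group listeners with their DNS names and ports
SELECT ag.name AS availability_group, l.dns_name, l.port
FROM sys.availability_groups AS ag
JOIN sys.availability_group_listeners AS l ON ag.group_id = l.group_id;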

2. Multiple ASCS on a Cluster in Azure

Multiple ASCS clusters can be consolidated onto a single cluster. This Multi-SID configuration is explained in detail in this documentation.

It is essential to plan and document the solution before attempting to run the PowerShell Scripts to configure the ASCS

Note: as the same port 445 is shared by multiple frontend IP addresses, Direct Server Return must be enabled in this scenario. Direct Server Return is the PowerShell terminology; Floating IP is the equivalent term in the Azure Portal.

IP Address Name | IP Address Number | Hostname | Ports | Probe Port | Comment
Host1 – Physical IP | xx.xx.xx.20 | Host1 | | | IP assigned to ASCS Node 1
Host2 – Physical IP | xx.xx.xx.21 | Host2 | | | IP assigned to ASCS Node 2
Host3 – Physical IP | yy.yy.yy.22 | Host3 | | | IP assigned to ASCS Node 3 in DR DC
Virtual IP for ECC ASCS | xx.xx.xx.200 | SAPECC | 3200,3300,3600 | 59998 | Virtual IP created by cluster for ECC ASCS in Primary DC [assigned to ILB]
Virtual IP for BW ASCS | xx.xx.xx.201 | SAPBW | 3201,3301,3601 | 59999 | Virtual IP created by cluster for BW ASCS in Primary DC [assigned to ILB]
Virtual IP for Windows Cluster | xx.xx.xx.2 | SAPCLUSAP1 | | | Virtual IP for internal cluster in Primary DC [not assigned to ILB]
Virtual IP for ECC ASCS | yy.yy.yy.200 | SAPECC | 3200,3300,3600 | | Virtual IP created by cluster for ECC ASCS in DR DC
Virtual IP for BW ASCS | yy.yy.yy.201 | SAPBW | 3201,3301,3601 | | Virtual IP created by cluster for BW ASCS in DR DC
Virtual IP for Windows Cluster | yy.yy.yy.2 | SAPCLUSAP1 | | | Virtual IP for internal cluster in DR DC

SAP ASCS on Azure Checklist:

1. Ensure the Timeout on the ILB is set to 30 minutes (this is the default in the PowerShell script)

2. Ensure the default.pfl parameter enque/encni/set_so_keepalive = TRUE

3. On Windows, set the TCP/IP registry values KeepAliveTime and KeepAliveInterval to 180000 (3 minutes). See SAP Note 1593183 – TCP/IP networking parameters for SQL Server https://launchpad.support.sap.com/#/notes/1593183/E

4. Choose Probe Ports (normally 59999)

5. Set the Windows Cluster timeout

With PowerShell

$cluster = Get-Cluster; $cluster.SameSubnetDelay = 1500

$cluster = Get-Cluster; $cluster.SameSubnetThreshold = 10

$cluster = Get-Cluster; $cluster.CrossSubnetDelay = 1500

$cluster = Get-Cluster; $cluster.CrossSubnetThreshold = 20

With Cluster Command

cluster /cluster:<ClusterName> /prop SameSubnetDelay=1500

cluster /cluster:<ClusterName> /prop SameSubnetThreshold=10

cluster /cluster:<ClusterName> /prop CrossSubnetDelay=1500

cluster /cluster:<ClusterName> /prop CrossSubnetThreshold=20

On Windows Server 2016, these parameters are already set to the correct values by default

6. A future blog post will discuss the setup and configuration of HA on Suse

See SAP Note 1634991 – How to install (A)SCS instance on more than 2 cluster nodes https://launchpad.support.sap.com/#/notes/0001634991

3. Does the Internal Cluster Virtual IP Need To Be Assigned the Azure Internal Load Balancer (ILB)?

A Windows cluster has its own internal virtual IP and virtual hostname. These resources are needed for the operation of the cluster. The virtual IP address of the internal cluster does not need to be added as a frontend IP address on the Azure Internal Load Balancer (ILB), although this can optionally be done.

4. Useful PowerShell Commands for Azure

A basic level of PowerShell knowledge is typically required to deploy SAP systems on Azure at large scale.

It is possible to perform nearly all activities via the Azure Portal, but PowerShell scripts are fast, simple and highly repeatable.

To setup Azure PowerShell Cmdlets:

Make sure to install AzureRM PowerShell Cmdlets while running PowerShell as an Administrator

https://msdn.microsoft.com/en-us/library/mt125356.aspx

On a Windows 10 based console it should be possible to open PowerShell as an Administrator and run:

PS C:\> Install-Module AzureRM

PS C:\> Install-AzureRM

Login using the account provided by the Azure administrator. Typically this is username@domain.com

Login-AzureRmAccount

List the available Azure subscriptions with:

Get-AzureRmSubscription

Set-AzureRmContext -SubscriptionName “<subscription name goes here>”

https://docs.microsoft.com/en-us/powershell/#pivot=main&panel=getstarted

https://blogs.technet.microsoft.com/heyscriptingguy/2013/06/22/weekend-scripter-getting-started-with-windows-azure-and-powershell/

https://docs.microsoft.com/en-us/powershell/azure/overview?view=azurermps-3.7.0

5. SAP Hana on Azure – Virtual Machines

SAP Hana is certified for production OLAP workloads on the Azure GS5 VM type. SAP BW on Hana and similar applications can be run in production on this VM type.

GS5 is not Generally Available for production OLTP workloads as at April 2017.

For non-production scenarios, the GS5 VM is certified for all workloads: Suite on Hana, BW on Hana and S/4 Hana.

https://global.sap.com/community/ebook/2014-09-02-hana-hardware/enEN/iaas.html

More information about installing SAP applications on Hana on Azure GS5 VM type can be found here

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/get-started

1928533 – SAP Applications on Azure: Supported Products and Azure VM types https://launchpad.support.sap.com/#/notes/1928533

2015553 – SAP on Microsoft Azure: Support prerequisites https://launchpad.support.sap.com/#/notes/2015553

6. SAP Hana on Azure – Azure Large Instances

Enterprise customers running large-scale Hana scenarios are likely to require more than 2 TB of RAM to run large BW on Hana or Suite on Hana scenarios and to allow for 1-3 years of DB growth. The Azure platform provides the following Large Instances for these scenarios, based on Intel E7 Haswell & Broadwell processors:

  • SAP HANA on Azure S72 (2-socket, 768GB)
  • SAP HANA on Azure S72m (2-socket, 1.5TB)
  • SAP HANA on Azure S192 (4-socket, 2.0TB)
  • SAP HANA on Azure S192m (4-socket, 4.0TB)
  • all use cases are supported, OLAP and OLTP, including BWoH, SoH, S/4H
  • in production and non-production
  • scale-out configurations are possible on “SAP HANA on Azure S144 (4-socket, 1.5TB)” and on “SAP HANA on Azure S192 (4-socket, 2.0TB)” up to 15+1 nodes (15 active (BW: 1 master, 14 worker) nodes, 1 standby)

Common Question & Answer about Azure Large Instances for Hana:

Q1. Are Large Instances VMs or bare metal? Answer = bare metal Hana TDI certified appliances

Q2. Which HA/DR solutions are supported? Answer = both HSR and storage level replication options are possible

Q3. How to order and provision an Azure Hana Large Instance for Hana? Answer = contact Microsoft Account Team

Q4. What is included in the monthly charge on the Azure Bill? Answer = all compute charges, high speed storage equal to 4 x Hana RAM, network costs between SAP application server VMs and the Hana appliance and any network utilization for storage based replication for DR solutions to another Azure DR peer datacenter are included

Q5. Can Azure Large Instances for Hana be upgraded to a larger size? Answer = Yes

Q6. Are all Hana scenarios such as MCOS and MDC supported? Answer = yes, typically the same functionalities that are available with any other TDI solution are available on Azure Large Instances for Hana

Q7. Does Microsoft resell Hana licenses or provide Hana support? Answer = No, Hana licenses and support are provided by SAP. Microsoft provides IaaS (Infrastructure as a Service) only. Hardware, firmware, storage, networking and an initial installation of Suse for SAP Applications or Redhat are provided. Hana should then be installed by a Hana certified consultant. Customers need to buy a Suse or Redhat license and obtain a support contract for Suse or Redhat

Q8. What is the SLA for Azure Large Instances for Hana? Answer = SLA of 99.99% is described here

Q9. Does Microsoft patch and maintain the Suse or Redhat OS on a Hana Large Instance? Answer = No, Hana Large Instances is an IaaS offering. The layers below the OS are managed and supported by Microsoft.

Q10. Do Hana Large Instances fully support Suse or Redhat clustering? Answer = Yes

2316233 – SAP HANA on Microsoft Azure (Large Instances) https://launchpad.support.sap.com/#/notes/2316233

Links:

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/get-started

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-architecture?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json

HA/DR on Large Instances

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-infrastructure-connectivity?toc=%2fazure%2fvirtual-machines%2flinux%2ftoc.json

Backup/Restore Guide

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-guide

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-file-level

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-storage-snapshots

7. Resource Groups on Azure – What Are They & How Should I Use Them?

A Resource Group is a logical collection of Azure objects. The properties of a Resource Group are:

1. All the resources in your group should share the same lifecycle. You deploy, update, and delete them together.

2. If a resource, such as a database server, needs to exist on a different deployment cycle, it should be in another resource group.

3. Each resource can only exist in one resource group.

4. You can add or remove a resource to a resource group at any time.

5. You can move a resource from one resource group to another group. For more information, see Move resources to new resource group or subscription.

6. A resource group can contain resources that reside in different regions.

7. A resource group can be used to scope access control for administrative actions.

8. A resource can interact with resources in other resource groups. This interaction is common when the two resources are related but do not share the same lifecycle.

The question for SAP Basis administrators is: How should I structure resource groups across the Sandbox, Development, QAS and Production environments that make up the entire SAP Landscape?

Feedback so far from actual customer deployments:

1. Small Sandbox or Development systems might all be grouped together into a single Resource Group. A small Development or Sandbox environment might comprise ECC 6.0, BW, SCM, GRC, PI and a Solman system.

2. Small Development or Sandbox systems would share a common storage account or use Managed Disks

3. Often there is a single Vnet, or at most several Vnets for non-production and one for production (Note: it is possible to deploy VMs or other resources from Resource Group A onto a Vnet in Resource Group B).

4. If there is more than one Vnet within the same datacenter, then Vnet peering is used to reduce latencies

5. If there is one Vnet in Datacenter A for Production and another Vnet in Datacenter B for Non-Production and DR, a Vnet-2-Vnet gateway is set up between these two Vnets

6. Network Security Groups are typically setup to only allow SAP specific ports onto the subnets such as 3200-3299, 3300-3399, 3600-3699 etc. A full list of SAP ports can be found here. Windows File Sharing ports 135, 139 and 445 would normally be blocked

7. Prior to the introduction of Managed Disks, the guidance around storage accounts could be summarized as:

-In all cases Premium Storage should be used for DBMS servers or for standalone engines with high IOPS requirements such as TREX

-Small systems such as Development systems might share one storage account

-Larger QAS systems that might be used for performance testing should ideally have their own storage account

-Large Production SAP applications like ECC or BW should have their own storage account

-Smaller Production systems with low IOPS requirement such as Solman, EP, PI or GRC can share a single storage account

Since the introduction of Managed Disks it is generally recommended to use Managed Disks

8. Some customers are deploying individual SAP applications into their own resource groups in production

https://docs.microsoft.com/en-us/azure/azure-resource-manager/resource-group-overview#resource-groups

8. Azure Managed Disks

Managed Disks is a new feature in Azure Storage. It is recommended to evaluate Managed Disks for new SAP deployments. The benefits of Managed Disks are explained here.

In summary the benefits are:

a. No need to create specific storage accounts and balance the number of VHDs across storage accounts, up to a limit of 10,000 disks per subscription

b. There is no need to manually “pin” specific VHDs to different storage stamps to ensure storage level high availability (for example on AlwaysOn cluster nodes)

c. Integration into Azure Backup

Note: Azure Managed Disks require port 8443 to be open

https://azure.microsoft.com/en-us/services/managed-disks/

https://docs.microsoft.com/en-us/azure/storage/storage-managed-disks-overview

https://azure.microsoft.com/en-us/pricing/details/managed-disks/

9. Network Latency on Azure

Azure datacenters are distributed across more geographic locations than any other cloud platform. Network latencies from an Azure datacenter to most worldwide locations are more than acceptable for running SAP applications.

Before deploying SAP on Azure it is recommended to test the network latencies from a customer office to nearby Azure datacenters.

The example below shows a test using http://azurespeed.com/ from Singapore.

A test such as this should be performed on a wired connection. Latencies to geographically nearby datacenters should be 10-50 ms, and trans-continental latencies could be 100-250 ms RTT.

If values well in excess of these are seen it is possible that the ISP may have routing problems.

10. SAP Application Server on Azure Public Cloud

When installing SAP application servers on Azure Public Cloud Virtual Machines we recommend following the deployment pattern detailed below:

a. Do not provision additional disks for SAP application servers. Install the following components on the Windows C: drive or the Linux root file system:

-Boot

-OS

-/usr/sap SAP executable directory

b. Place the OS Pagefile onto the local temporary disk (this is the default for Windows)

c. Deploy the latest OS release possible. As at April 2017 Windows 2016, Suse 12.2 and Redhat 7

d. Linux FS type – review 405827 – Linux: Recommended file systems

e. Use a single virtual network card. If SAP LaMa is used a second virtual network card is recommended

Note: Do not under any circumstances use SAP application servers as file servers for interface files. Create a dedicated Management Station for interface files and SAP DVD Installation Media

11. Windows Dynamic Port Ranges

Windows server uses Dynamic Callback Ports that can overlap with SAP J2EE ports

It is recommended to reserve the ports 50000-50999 for SAP.

The commands below should be run on Windows servers with Java Instances installed:

netsh int ipv4 set dynamicport tcp start=60000 numberofports=5536

netsh int ipv4 show dynamicport tcp

1399935 – Reservation of TCP/UDP ports in Windows https://launchpad.support.sap.com/#/notes/1399935

https://support.microsoft.com/en-us/help/929851/the-default-dynamic-port-range-for-tcp-ip-has-changed-in-windows-vista-and-in-windows-server-2008

12. Switch Off SAP Application Server AutoStart During Maintenance

The availability of SAP application servers is improved by configuring Autostart. In a scenario where an Azure component fails and the Azure platform self-heals and moves a VM to another node, the impact of this restart is much smaller if the application server restarts automatically.

Autostart of an SAP instance is configured by adding Autostart = 1 to the SAP instance profile

Maintenance operations like support packages, upgrades, enhancement packages or kernel updates may assume that the default behavior of the SAP system is not to restart automatically.

Therefore, it is generally recommended to comment out this profile parameter during such activities

13. DevTest Labs – Great for Building Sandbox Systems

The Azure Portal VM properties page has a feature to automatically shut down VMs. This is very useful for Sandbox or Upgrade/Support Pack testing machines. It allows the BASIS team to provision large and powerful virtual machines for testing, while limiting costs by shutting down these VMs when not in use. This is particularly useful for Hana test machines, as Hana needs very large VMs

https://azure.microsoft.com/en-us/services/devtest-lab/

SAP systems that are accessed by end users or SAP functional and ABAP teams must also have an Automatic Start feature in addition to the Automatic Stop feature.

This can be achieved using the following methods:

http://clemmblog.azurewebsites.net/start-stop-windows-azure-vms-according-time-schedule/

https://blogs.technet.microsoft.com/uktechnet/2016/07/18/how-to-schedule-vm-shutdown-using-azure-automation/

14. Disk Cache & Encryption Settings

Azure storage provides multiple options for disk caching and encryption.

General guidance for the use of Premium Azure Storage disk caching:

Premium Disk Type | Default Cache Setting | Recommended Cache Setting for SAP Servers
OS Disk | ReadWrite | ReadWrite
Data Disk – Write Heavy | None | None (for example DB Transaction Log)
Data Disk – Read Only | None | ReadOnly

Do not use the ReadWrite disk cache setting for SAP data disks, including DBMS disks or TREX disks

General Guidance for the use of Encryption:

1. Assess the risk profile of the SAP systems and evaluate if Encryption is required or not

2. Azure platform supports Encryption at two different layers

-Storage Account Level

-Disk Level

3. DBMS platforms also support encryption – SQL Server TDE or Hana encryption for example

4. Typically, DBMS level encryption has the advantage of also encrypting backup files. Backup files are a common attack vector for data theft

5. It is strongly recommended not to use multiple encryption technologies on the same device (for example, combining DB level encryption, file system level encryption and an encrypted storage account). This can lead to performance problems

6. A recommended configuration is:

-For DB servers use Disk encryption to protect the OS/Boot disk only. Use native DBMS level encryption to protect the DB files and backups

-For SAP application server use Disk encryption to protect the OS/Boot disk.

Note: some forms of DB level encryption are vulnerable to cloning the entire Virtual Machine and all disks. Using Azure Disk Encryption on the OS/Boot disk prevents cloning an entire VM.

Customer Case Studies on Azure

Large Australian Energy Company Modernizes SAP Applications & Moves to Azure Public Cloud

Zespri https://customers.microsoft.com/en-us/story/kiwi-grower-prunes-costs-defends-business-from-disaste

PACT Building on synergy for a bold growth strategy https://customers.microsoft.com/en-us/story/building-on-synergy-for-a-bold-growth-strategy

AGL Innovation Spotlight: AGL puts energy into action with the Cloud https://customers.microsoft.com/en-us/story/innovation-spotlight-agl-puts-energy-into-action-with

Coats UK https://customers.microsoft.com/en-us/story/coats

The Mosaic Company https://customers.microsoft.com/en-us/story/mosaicandcapgemini

Several new large Enterprise Case Studies and customer go lives will be released on this blog soon

Content from third party websites, SAP and other sources reproduced in accordance with Fair Use criticism, comment, news reporting, teaching, scholarship, and research

SAP OS/DB Migration to SQL Server–FAQ v6.2 April 2017


The FAQ document attached to this blog is the cumulative knowledge gained from hundreds of customers that have moved off UNIX/Oracle and DB2 to Windows and SQL Server in recent years.

In this document a large number of recommendations, tools, tips and common errors are documented.  It is recommended to read this document before performing an OS/DB migration to Windows + SQL Server.

The latest version of the OS/DB Migration FAQ includes some updates including:

  1. Due to the considerable number of customers moving off AIX/DB2 to Windows and SQL Server running on Azure, a section on creating an R3load server for DB2 has been added
  2. Migration of BW systems to SQL Server to leverage the Column Store, Flat Cube and FEMS pushdown logic continues to be popular with several major customers replacing SAP BWA with SQL Server Column Store
  3. Post migration report RS_BW_POSTMIGRATION automatically converts F Fact and E Fact tables to Column Store when the recommended “All Column-Store” option is selected in SMIGR_CREATE_DDL
  4. Windows 2016 is now Generally Available for almost all SAP applications on all major DB platforms other than Oracle 12c
  5. Older Operating System and Database releases such as Windows 2012 (non-R2) and SQL 2012 are now no longer recommended for new projects
  6. Ensure sp_updatestats is included in the post processing steps after the import
  7. Many other recommendations around kernels, known bugs and Azure specific migration information

Recently, a large Asian airline moved from AIX/DB2 to Windows 2016 and SQL Server 2016 on Azure. The migration was completed in two phases over two weekends and was 100% successful, with the customer realizing significant performance improvements running on D-Series VMs and Premium Storage.

Another large energy company in Australia moved from HPUX/Oracle to Windows 2012 R2 and SQL Server 2016 on Azure. More information on this customer can be found at Large Australian Energy Company Modernizes SAP Applications & Moves to Azure Public Cloud

Latest OS/DB Migration FAQ can be downloaded from oracle-to-sql-migration-faq-v6-2

Performance evolution of SAP BW on SQL Server


In SAP customer support, we still see several customers running old SAP BW code that cannot leverage the improvements we delivered within the last years. In this blog, we want to demonstrate the huge performance improvements that can be achieved even without hardware replacements. Until 2011, the standard configuration of BW queries on SQL Server used only one database thread and ran against a BW cube with b-tree indexes. With the current improvements, you can easily speed up BW queries by a factor of 100!

Test Scenario

All tests were run with SQL Server 2016 on a former high-end server with 48 CPU threads, built in 2008. This server does not even support modern vector operations (SIMD), which SQL Server 2016 can use natively. We created 54 BW test queries of varying complexity and with a varying number of FEMS filters. All queries were run against a BW cube with 100,000,000 rows. BW cube compression had been performed on 90% of all rows, resulting in 100 uncompressed BW requests. The queries had been created for our own internal performance tests. They have not been modified or optimized for this blog. However, they might not be typical for your specific BW query mix.

Optimization levels

The BW queries were running against the following configurations:

  1. MAXDOP 1
    This was the default SAP BW configuration on Microsoft SQL Server until 2011: standard BW cubes with rowstore (b-tree) indexes were used, all tables in SQL Server were PAGE compressed, and BW queries were not using SQL Server intra-query parallelism.
  2. PAGE-compression (Rowstore)
    In this scenario, all SAP BW queries can use 8 CPU threads. Therefore, the SAP RSADMIN parameter MSS_MAXDOP_QUERY is set to 8.
  3. COLUMN-compression (Columnstore)
    Requires: SQL Server 2014 or higher, SAP BW 7.x

    For this scenario, we change the index structure of the SAP BW cubes: a clustered columnstore index is applied to the cubes using SAP report MSSCSTORE. We do not recommend using the restricted read-only columnstore of SQL Server 2012 anymore. An overview of the SQL Server 2014 (and 2016) columnstore is given in the following blog: https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/23/sql-server-2014-columnstore-released-for-sap-bw. Detailed requirements are documented in SAP Note 2116639 – SQL Server Columnstore documentation
  4. FLAT Cube
    Requires: Columnstore, SAP BW 7.40 (SP8) or higher

    The next optimization step is to apply a new table structure to the BW cube: the cube is converted to a Columnstore Optimized Flat Cube, which no longer needs an e-fact table or dimension tables. The Flat Cube is described in https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/27/columnstore-optimized-flat-cube-in-sap-bw.
  5. FEMS Pushdown
    Requires: Flat Cube, SAP BW 7.50 (SP4) or higher

    The last optimization uses a new SQL query structure, which implements the pushdown of FEMS filters from the OLAP engine to SQL Server. A brief overview of this feature can be found here: https://blogs.msdn.microsoft.com/saponsqlserver/2017/03/06/bw-queries-by-factors-faster-using-fems-pushdown.

Measured results

The table below contains the runtimes of the 54 BW queries in the different configurations. The time consumed in SQL Server is displayed in purple, the time spent in the SAP BW OLAP engine in blue. A significant OLAP runtime is only observed for queries with several FEMS filters. The runtimes are rounded to full seconds and were measured using the SAP OLAP statistics in transaction ST03.

[Figure: bw_perf1 – BW query runtimes per configuration]

Comparing optimization levels

The following table shows the performance impact of each optimization step individually. Some optimizations may even be counterproductive for a particular BW query. However, the combination of all optimizations almost always results in great BW query performance.
In this mix of 54 BW queries, the slowest query with FEMS optimization (21 seconds) was still faster than the fastest query without any optimization (27 seconds). The average performance improvement was a factor of 121!

[Figure: bw_perf2 – performance impact per optimization step]

Conclusion

The SAP BW code is continuously being updated to support new Microsoft SQL Server features like the columnstore. Several BW improvements have been implemented to optimize SAP BW running on SQL Server. These optimizations have increased SAP BW query performance by two orders of magnitude within the last 6 years.
Therefore, customers should upgrade to SQL Server 2016 and apply the required SAP BW code soon.

Parallel Processing in SAP


New generations of CPUs do not provide significant single-thread performance improvements anymore. Instead, the number of logical CPU cores increases with each new CPU generation. You can significantly reduce the runtime of a task by running sub-tasks in parallel on many CPU cores. However, splitting a task into sub-tasks consumes additional resources (CPU and memory). This overhead may or may not be significant, depending on the algorithm used.

Therefore, parallelism optimizes one aspect of IT system performance: response time. If you want to optimize the throughput of an IT system, parallelism can be counterproductive on highly loaded systems (because of the overhead of coordinating the sub-tasks).

Parallelism on the SAP application server

Many tasks in an ERP system are already fast enough, for example the recording of sales orders. Therefore, it makes no sense to parallelize such tasks (except parallelism across users, by hiring more employees). In contrast to dialog processing, batch processing can benefit greatly from parallelism on the application server. A typical example is the Data Transfer Process (DTP) in an SAP BW system. Parallelism is used here by default, but you can further customize it (for all DTPs or separately per DTP):

  • You can define whether parallelism is used at all
  • You can define the type (DIA or BTC) and the number of work processes
  • You can define the package size, which is the number of rows processed within a sub-task. If the total number of rows of a DTP is smaller than the package size, then only one work process is used. In this case, you could reduce the package size to enable parallelism.

Historically, the default number of work processes used in a DTP is 3. In the meantime, customers have application servers with many more CPU threads than years ago. Therefore, it makes sense to adjust the number of configured work processes for existing DTPs in an SAP BW system. This can be configured in the Batch Manager settings of SAP transaction RSA1 (at Administration->Housekeeping Tasks->Batch Manager->Mass Maintenance)

[Figure: parallel1 – Batch Manager mass maintenance in RSA1]

Parallelism in SQL Server

SQL Server can utilize many CPU threads at the same time for running a single SQL query. This is called intra-query parallelism. Creating the sub-tasks of such a query is done automatically by SQL Server. You do not have to define a package size, but you can control the maximum number of CPUs used by an operator.

Before executing a SQL query, SQL Server creates an execution plan. This plan can be either a serial or a parallel plan. An execution plan consists of several operators (iterators). Each of them can be either serial or parallel. An execution plan with at least one parallel operator is called a parallel execution plan. A parallel operator is displayed with two arrows within a yellow circle:

[Figure: parallel2 – parallel execution plan]

The same execution plan without any parallelism looks like this:

[Figure: parallel3 – serial execution plan]

The maximum allowed degree of parallelism (MAXDOP) is defined in the SQL Server configuration option “max degree of parallelism” (value 0 means MAXDOP is unlimited). You can overwrite MAXDOP for a particular query using an optimizer hint. SQL Server will not create a parallel plan in the following cases:

  • if the database server has only one logical CPU
  • if MAXDOP is 1
  • if the query optimizer does not see a benefit in using parallelism.

A parallel execution plan does not specify the number of logical CPUs to be used. The actual degree of parallelism (DOP) is decided at runtime of the query. It depends not only on MAXDOP, but also on the available system resources at query runtime. You can check the actual DOP of a query in the SQL Server DMV sys.dm_exec_query_stats. SQL Server intra-query parallelism typically decreases the runtime of a query, but it can result in varying execution times of the same query: you can configure MAXDOP, but the actual DOP may be different when running the same query again. Therefore, the actual runtime of a query is not predictable anymore.
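For example, the actual DOP of recently executed queries can be checked as follows (the min_dop, max_dop and last_dop columns require SQL Server 2016):

SELECT TOP 20 qs.last_dop, qs.min_dop, qs.max_dop, qs.execution_count, st.text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;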

In SQL Server 2014, there is one important limitation: batch-mode operators can only be used if the actual DOP is higher than one (this limitation is gone in SQL Server 2016). Batch-mode operators are much faster than row-mode operators. Hence, we want to make sure that there are always sufficient resources for running the query in parallel (DOP 2 or higher). This can be achieved by restricting MAXDOP to a relatively small value, for example MAXDOP 4 on a machine with 32 CPU threads.

Configuring SQL Server parallelism in SAP ERP

The SAP installation changes the value of the SQL Server configuration option “max degree of parallelism” to 1. Therefore, no parallelism is used for running a single database query (as a matter of course, you can still run many database queries in parallel). For years, we did not recommend changing this configuration for an SAP ERP system. We wanted to avoid the overhead of intra-query parallelism and keep query runtimes predictable. In the meantime, however, customers often have more logical CPUs available on the database server than concurrently running SQL queries. Not using intra-query parallelism would simply be a waste of CPU resources. Therefore, customers can increase “max degree of parallelism” for an SAP ERP system. See https://blogs.msdn.microsoft.com/saponsqlserver/2015/04/29/max-degree-of-parallelism-maxdop for details.
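A minimal sketch of raising the instance-wide setting (the value 4 is only an example; see the linked blog for sizing guidance):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 4;
RECONFIGURE;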

Configuring SQL Server parallelism in SAP BW

In contrast to SAP ERP, SQL Server has been using intra-query parallelism for SAP BW for years. The configuration of SQL Server parallelism is much more sophisticated in an SAP BW system than just configuring a global MAXDOP value. For SAP BW, the SQL Server configuration option “max degree of parallelism” should always be set to 1. Hence, normal Open SQL commands use MAXDOP 1. However, SAP BW queries use MAXDOP 4 by default via optimizer hints, and SQL Server index creation runs with MAXDOP 8 by default. The optimizer hints are controlled by SAP BW RSADMIN parameters, which are documented in the corresponding SAP Note.

The RSADMIN parameters can be changed in SAP report SAP_RSADMIN_MAINTAIN:

[Figure: parallel4 – report SAP_RSADMIN_MAINTAIN]

Unfortunately, we have seen a few cases of SAP BW queries where the SQL Server query optimizer decided to create a serial execution plan. However, SAP BW queries are always quite complex and therefore always benefit from intra-query parallelism. The latest version of SQL Server 2016 provides an optimizer hint that enforces the creation of parallel plans. We use this hint in the SAP BW statement generator. Therefore, all operators in an execution plan for SAP BW queries are parallel, if possible (be aware that execution plans with a spool table never contain any parallel operator):

[Figure: parallel5 – forced parallel execution plan]

To benefit from these forced parallel execution plans, you have to apply the newest SP and CU of SQL Server 2016 and the correction instructions of the corresponding SAP Note.

The created execution plans are slightly different from normal parallel execution plans, because all operators are parallel (see the NonClustered Index Seek in the picture above). However, we have not seen a single case in SAP BW where this slight difference caused an issue.

Parallelism used in DB pushdown

SAP NetWeaver uses a single-threaded, multi-process application server. Parallelism on the application server has to be explicitly coded. Furthermore, it is often not easy (or even impossible) to divide a task on the application server into sub-tasks of similar size. Using intra-query parallelism on the database server is much easier.

More and more functionality in SAP is being pushed down from the application server to the database server. The main idea behind this is to reduce the network load (between application server and DB server). However, such a DB pushdown has further advantages: you can benefit from intra-query parallelism without manually creating sub-tasks. An example is the FEMS-Pushdown, described in https://blogs.msdn.microsoft.com/saponsqlserver/2017/03/06/bw-queries-by-factors-faster-using-fems-pushdown. To be clear: FEMS-Pushdown does not just use parallelism. A new algorithm improves performance beyond the usage of additional CPUs.

Conclusion

Parallelism is the key to reducing response time. However, it can result in reduced throughput, particularly in highly loaded ERP systems. In SAP BW systems, you can increase intra-query parallelism by setting the RSADMIN parameter MSS_MAXDOP_QUERY. Parallelism on the SAP application server has to be adjusted manually, depending on the available CPU resources and the size of the processed data.

End to End Setup for SAP HANA Large Instances


So, you are ready for an “SAP HANA on Azure Large Instances” deployment. Great! And you want to know the step-by-step process, with screenshots, to start the work? Then you are reading the right article.

This blog illustrates the various steps required for the SAP HANA on Azure Large Instances (or in short HANA Large Instances) setup.

Here are the high-level steps:

  1. Setup the vNet
  2. Provide the details to Microsoft for provisioning the HANA Large Instances
  3. Connect your Azure vNet to HANA Large Instances
  4. Test the connectivity from Azure VM to HANA Large Instances
  5. Install HANA on the HANA Large Instances server

Please download the full article End-to-End-Setup-of-SAP-HANA-Large-Instances to get the complete details.

Customer experience with SAP BW FEMS-Pushdown


A few months ago, we released a new SAP BW statement generator which increases BW query performance for complex queries containing FEMS filters, see https://blogs.msdn.microsoft.com/saponsqlserver/2017/03/06/bw-queries-by-factors-faster-using-fems-pushdown. In the meantime, a few customers who tested the new feature provided feedback to the “SAP on SQL Server development”. Based on this feedback, we further improved the performance of FEMS queries and released the optimized code in SAP Note 2483734 (see below). However, in some cases the query performance was still not optimal because of an unsuitable system configuration. The intention of this blog is to give guidance and best practices based on our customer experience.

Prerequisites

Customers typically do not want to apply and test new SAP code on their production system. It is a good idea to use a virtual machine for testing. However, for FEMS-Pushdown you should keep in mind that you want to test performance, not just functionality. Therefore, you should provide sufficient resources to the VM.

  • Hardware requirements
    For FEMS-Pushdown, we consider 16 CPU threads for SQL Server a minimum configuration. As a matter of course, SQL Server should also have access to sufficient RAM and a fast I/O system
  • SQL Server version
    We strongly recommend SQL Server 2016 (SP1 CU2 or newer) when using FEMS-Pushdown. SAP BW can force a parallel execution plan on SQL Server 2016 using an optimizer hint. Furthermore, SQL Server 2016 always uses batch-mode processing for the columnstore. See the following blog for details: https://blogs.msdn.microsoft.com/saponsqlserver/2017/05/18/parallel-processing-in-sap.
    Technically, FEMS-Pushdown also works with SQL Server 2014. In this case, you should set the SQL Server startup parameter -T8649 to force parallel execution plans. However, SQL Server 2014 may fall back to row-mode processing under high workload, which decreases BW query performance.
  • Required BW code
    The SAP code for columnstore and BW queries is permanently being updated. We regularly publish known documentation and code issues in SAP Note 2116639 – SQL Server Columnstore documentation. The scope of this SAP Note has been extended to FEMS-Pushdown. Therefore, it contains a link to the newest code improvements in SAP Note 2483734 (see below).
    FEMS-Pushdown requires the Columnstore Optimized Flat Cube. You can create a Semantically Partitioned Cube (SPO) as a Flat Cube, but you cannot convert an existing SPO to a Flat Cube yet. The conversion report is still under development by SAP.

Best Practices

When running a BW query with FEMS-Pushdown, you can run into the same issues as with conventional BW queries: lack of system resources, sub-optimal execution plans and poorly designed BW queries. Therefore, you should follow these recommendations:

  • Update Statistics
    When loading data into a cube, you should update the database statistics and perform columnstore rowgroup compression within the BW process chain. This is described in https://blogs.msdn.microsoft.com/saponsqlserver/2016/11/14/simplified-and-faster-sap-bw-process-chains. However, when using the Flat Cube, SQL Server execution plans are much more robust, even with outdated database statistics.
  • Force parallel execution plans
    After applying the newest SQL Server 2016 Cumulative Update and the newest SAP BW code, all SQL Server queries created by the SAP BW statement generator will have a parallel execution plan. See SAP Note 2395652 – SQL Query Hints USE HINT for details.
  • Avoid Semantically Partitioned Cubes (SPOs)
    This recommendation is not specific to FEMS-Pushdown; it applies to all cubes using the columnstore. Existing SPOs work fine with the columnstore. However, we do not encourage customers to create additional SPOs
    • A BW Multi-Provider is a logical cube consisting of many physical cubes. This concept is similar to a union-view on many tables in SQL Server. There are often organizational (structure of data load) or business reasons for using Multi-Providers. Therefore, customers often use Multi-Providers (with or without columnstore)
    • An SPO is a specific Multi-Provider where all part-providers have exactly the same structure. It logically “partitions” a cube by time (or another characteristic). The idea is to speed up BW query performance by dividing a cube into smaller chunks and running SQL queries on these chunks in parallel.
      However, when using the columnstore, one large cube results in better performance than many small ones. Selects on the columnstore use intra-query parallelism efficiently and can benefit from rowgroup elimination (similar to partition pruning). Archiving is also very fast on columnstore tables (although archiving is not that important anymore, because columnstore data is stored in a very space-efficient way).

New improvements (with SAP Note 2483734)

Optimized BW code for FEMS-Pushdown has been released in SAP Note 2483734 – FEMS-Pushdown performance improvements for Microsoft SQL Server. The correction instructions of this SAP Note are available as of SAP BW 7.50 SP4. They are not yet available for SAP BW 7.51 or 7.52; on these SAP BW releases, you have to wait for the next SAP Support Package. The following improvements have been implemented:

  • Columnstore-Pushdown
    The idea of FEMS-Pushdown is to shift the evaluation of SAP BW query filters from the SAP application server to the database server. For this purpose, a SQL command is created by the SQL statement generator for FEMS-Pushdown. The new version of this statement generator creates additional, redundant filters in the SQL Server statement. These filters can be evaluated directly in the columnstore clustered index scan (before the first level of aggregation runs). Hereby, the BW filters are pushed down even further within the SQL Server statement execution.
  • Intra-Query parallelism
    BW queries with FEMS-Pushdown benefit much more from additional CPU threads than other BW queries. Furthermore, increasing the maximum intra-query parallelism on SQL Server 2016 does not have the negative side effect seen on SQL Server 2014 (sporadic row-mode processing). With the new FEMS-Pushdown code, the maximum number of CPU threads for a FEMS query is calculated based on the complexity of the query. It can even exceed the value of the RSADMIN parameter MSS_MAXDOP_QUERY, but it will never be higher than the new parameter MSS_MAXDOP_FEMS. Hence, FEMS-Pushdown queries can use more SQL Server threads than normal BW queries. However, SQL Server can reduce the number of CPU threads actually used at query runtime once there is a resource bottleneck. We recommend setting the RSADMIN parameter MSS_MAXDOP_FEMS only for SQL Server 2014; there is no need for this on SQL Server 2016 or newer.
  • BW hierarchy improvements
    We implemented some additional improvements, for example for BW hierarchy filters. Keep in mind, that we did not use any of the new improvements of the new FEMS-Statement generator when measuring the performance in https://blogs.msdn.microsoft.com/saponsqlserver/2017/05/08/performance-evolution-of-sap-bw-on-sql-server.

Analyzing FEMS Queries

FEMS-Pushdown cannot be used for all FEMS queries. For example, inventory queries cannot use FEMS-Pushdown yet. There are several tools with which you can check FEMS-Pushdown usage in SAP BW:

  • SQL statement in RSRT
    You can easily verify that FEMS-Pushdown is actually used by looking at the SQL query in SAP transaction RSRT. The query contains a common table expression (CTE) starting with “WITH [T_FEMS] AS“ (a schematic sketch follows after this list).
  • Statistics Data in RSRT
    For a FEMS-Pushdown, the aggregate name <cube>$F is displayed in the Aggregation Layer tab of the RSRT statistics data
  • Event IDs in Statistics Data
    The idea of FEMS-Pushdown is to reduce the runtime on the SAP application server. In particular, the runtime of OLAP event ID 3110 should be significantly reduced. However, a long runtime for event ID 3110 does not necessarily mean that FEMS-Pushdown was not used. When using BW Exceptional Aggregation, additional time is spent in event IDs 3110 and 3200.

  • ST03 Statistics
    The best way to monitor BW query runtimes is the BI Workload Monitor in SAP transaction ST03. Here you can see the runtime of BW queries by day, cube and query. Furthermore, you can see where the time was spent: “DB Time” is the time consumed by SQL Server, while “OLAP Time” is consumed by the SAP application server. You can reset the statistics (on your test system) by running report RSDDSTAT_DATA_DELETE. Take care: this permanently deletes the ST03 statistics, also for other SAP users.
    Be aware that the SQL statement statistics in SAP transaction DBACOCKPIT can be misleading, particularly for SAP BW queries. SAP BW always opens a database cursor for running a BW query, and processing in the BW OLAP engine is performed in packages between database fetches. SQL Server measures the runtime of a SQL query as the time between the OPEN and the last FETCH. Therefore, the SQL query runtime in DBACOCKPIT contains the processing time on the application server. In SAP transaction ST03 (or RSRT), however, the processing time on the application server is correctly not included in the “DB Time” (or “Data Manager” time).
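To illustrate the statement shape mentioned in the first bullet, here is a purely hypothetical, strongly simplified sketch. All table and column names are invented; the real statement is generated by the SAP BW statement generator and is far more complex:

-- Each row of T_FEMS describes one selection group (FEMS filter)
WITH [T_FEMS] AS (
    SELECT 1 AS FEMS, 'DE' AS COUNTRY   -- selection group 1: country DE
    UNION ALL
    SELECT 2 AS FEMS, 'US' AS COUNTRY   -- selection group 2: country US
)
-- The join applies all FEMS filters in a single pass over the fact table;
-- grouping by FEMS returns one aggregate per selection group
SELECT f.FEMS, SUM(c.REVENUE) AS REVENUE
FROM [/BIC/FSALES] AS c
JOIN [T_FEMS] AS f ON f.COUNTRY = c.COUNTRY
GROUP BY f.FEMS;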

Conclusion

For best BW query performance, we recommend using SQL Server 2016 and the newest SAP BW code from SAP Note 2483734. SAP BW FEMS-Pushdown requires the Flat Cube. More and more customers are actually starting to use the Flat Cube because of the FEMS-Pushdown. We got feedback from many customers that merely the Flat Cube (even without FEMS-Pushdown), running on modern hardware, results in performance similar to what they observed on their BW Accelerator. Using FEMS-Pushdown can reduce peaks in query runtime caused by the most complex BW queries.

Migrating to SAP HANA on Azure


The S/4HANA platform on the SAP HANA DBMS provides many functional improvements over SAP Business Suite 7, and additions to business processes that should provide customers with a compelling reason to migrate to SAP S/4HANA with SAP HANA as the underlying DBMS. Another reason to migrate to S/4 HANA is that support for all SAP Business Suite 7 applications based on the ABAP stack will cease at the end of 2025, as detailed in SAP Note #1648480 – “Maintenance for SAP Business Suite 7 Software”. This SAP Note details support for all SAP Business Suite 7 applications and maintenance dates for the SAP Java Stack versions.

Note: some SAP references linked in this article may require an SAP account.

HANA migration strategy

SAP HANA is sold by SAP as a high-performance in-memory database. You can run most of the existing SAP NetWeaver-based applications (for example, SAP ERP, or SAP BW), on SAP HANA. Functionally this is hardly different from running those NetWeaver applications on top of any other SAP supported DBMS (for example, SQL Server, Oracle, DB2). Please refer to the SAP Product Availability Matrix for details.

The next generation of SAP platforms, S/4HANA and BW/4HANA, are built specifically on HANA and take full advantage of the SAP HANA DBMS. For customers who want to migrate to S/4HANA, there are two main strategies:

  • Migrate directly to S/4HANA.
  • First migrate the existing SAP Business Suite applications to SAP HANA (Suite on HANA), then move to S/4HANA as a second step.

In discussions about migrating to HANA, it is important to determine which strategy to follow. The impact and benefit of each option is quite different from the perspective of SAP, the customer, and Azure. The initial step of the second option is only a technical migration with very limited benefit from a business process point of view, whereas the migration to S/4HANA (either directly or as a second step) involves a functional migration. A functional migration has more impact on the business and business processes, and as such takes more effort. SAP S/4HANA usually comes with significant changes to the mapping of business processes. Therefore, most S/4HANA projects we are pursuing with our global system integrators require rethinking the complete mapping of business processes into different SAP and LOB products, and the usage of SaaS services.

HANA + cloud

Besides initiating a rearchitecting of the business process mapping and integration based on S/4HANA, looking at S/4HANA and/or the SAP HANA DBMS prompts discussions about moving SAP workloads into public clouds like Microsoft Azure. Leveraging Azure usually minimizes migration cost and increases the flexibility of the SAP environment. The fact that SAP HANA needs to keep most data in memory usually increases the cost of the server infrastructure compared to the server hardware customers have been using.

Azure is an ideal public cloud to host SAP HANA-based workloads with a great TCO. Azure provides the flexibility to engage and disengage resources which reduces costs. For example, in a multitier environment like SAP, you could increase and decrease the number of SAP application instances in a SAP system based on workload. And with the latest announcements, Microsoft Azure offers the largest server SKUs available in the public cloud tailored to SAP HANA.

Current Azure SAP HANA capabilities

The diagram below shows the Azure offerings certified to run SAP HANA.

HANA large instances provide a bare-metal solution to run large HANA workloads. A HANA environment can currently be scaled up to 32 TB using multiple units of HANA large instances, with the potential to move up to 60 TB as soon as the newly announced SKUs are available. HANA large instances can be purchased with a 1-year or 3-year commitment, depending on the large instance size. With a 3-year commitment, customers get a significant discount, providing high performance at a very competitive price. Because HANA large instances are a bare-metal solution, the ordering process differs from ordering/deploying an Azure Virtual Machine (VM): you can create a VM in the Azure Portal and have it available in minutes, whereas once you order a HANA large instance unit it can take up to several days before you can use it. To learn about HANA large instances, please check out the SAP HANA (large instances) overview and architecture on Azure documentation.

To order HANA large instances, fill out the SAP HANA on Azure information request.

The above diagram shows that not all Azure SKUs are certified to run all SAP workloads. Only larger VM SKUs are certified to run HANA workloads. For dev/test workloads you can use smaller VM sizes such as DS13v2 and DS14v2. For the highest memory demands, customers seeking to migrate their existing SAP landscape to HANA on Azure will need to use HANA large instances.

The new Azure VM sizes were announced in Jason Zander’s blog post. The certifications for those, as well as for some existing VM sizes, are on the Azure roadmap. These new VM sizes will allow more flexibility for customers moving their SAP HANA, S/4HANA and BW/4HANA instances to Azure. You can check for the latest certification information on the SAP HANA on Azure page.

Combining multiple databases on one large instance

Azure is a very good platform for running SAP and SAP HANA systems. Using Azure, customers can save costs compared to an on-premises or hosted solution, while having more flexibility and robust disaster recovery. We’ve already discussed the benefits for large HANA databases, but what if you have smaller HANA databases?

Smaller HANA databases, common to small and midsize customers or departmental systems, can be combined on a single instance, taking advantage of the power and cost reductions that large instances provide. SAP HANA provides two options:

  • MCOS – Multiple components in one system
  • MDC – Multitenant database containers

The differences are detailed in the Multitenancy article on the SAP website. Please refer to SAP notes #1681092, #1826100, and #2096000 for more details on these multitenant options.

MCOS could be used by single customers, while SAP hosting partners could use MDC to share HANA large instances between multiple customers.
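
As a minimal sketch of the MDC option: a new tenant database is created from the SYSTEMDB of an MDC-enabled HANA system with a single SQL statement (the tenant name and password below are placeholders, not from an actual installation):

-- Run in the SYSTEMDB of an MDC installation; creates a tenant database
-- named H12 with its own SYSTEM user (name and password are illustrative).
CREATE DATABASE H12 SYSTEM USER PASSWORD Placeholder1;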

Customers that want to run SAP Business Suite (OLTP) on SAP HANA can host the SAP HANA part on HANA large instances. The SAP application layer would be hosted on native Azure VMs and benefit from the flexibility they provide. Once M-series VMs are available, the SAP HANA part can be hosted in a VM for even more flexibility.

Momentum of SAP workload moving to Azure

Azure is enjoying great popularity with customers from various industries using it to run their SAP workloads. Although Azure is an ideal platform for SAP HANA, the majority of customers will still start by moving their SAP NetWeaver systems to Azure. This isn’t restricted to lift & shift scenarios running Oracle, SQL Server, DB2, or SAP ASE. Some customers move from proprietary on-premises UNIX-based systems to Windows/SQL Server, Windows/Oracle, or Linux/DB2-driven SAP systems hosted in Azure.

Many system integrators we are working with observe that the number of these customers is increasing. The strategy of most customers is to skip the migration of SAP Business Suite 7 applications to SAP HANA, and instead fully focus on the long term move to S/4HANA. This strategy can be summarized in the following steps:

  1. Short term: focus on cost savings by moving the SAP landscape to industry standard OS platforms on Azure.
  2. Short to mid-term: test and develop S/4HANA implementations in Azure, leveraging the flexibility of Azure to create (and destroy) proof of concept and development environments quickly without hardware procurement.
  3. Mid to long-term: deploy production S/4HANA based SAP applications in Azure.

Highly Available ASCS for Windows on File Share – Shared Disk No Longer Required


SAP Highly Available ASCS Now Supports File Share UNC Source

SAP has released documentation and a new Windows Cluster DLL that enable the SAP ASCS to leverage an SMB UNC source as opposed to a Cluster Shared Disk.

The solution has been tested and documented by SAP for usage in non-productive systems and can be used in the Azure cloud. This feature is for SAP NetWeaver components 7.40 and higher.

The Azure cloud platform fully supports cluster solutions such as Windows Cluster and SUSE cluster, in contrast to other cloud providers.

1. Requirements for SAP Highly Available ASCS on File Share

The requirements for the SAP ASCS on File Share are listed below:

1. SAP kernel update: the latest 7.49 kernel [for NetWeaver 7.40 or higher] is required.

2. The SAP profile parameter service/check_ha_node=1 must be set (see the profile sketch after this list).

3. The Windows cluster DLL must be updated – 1596496 – How to update SAP Resource Type DLLs for Cluster Resource Monitor

4. The SAP landscape must have an SMB server to provide the file share \\<SAPGLOBALHOST>\sapmnt.
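
As an illustration of requirement 2, a minimal profile sketch, assuming a hypothetical SMB host sapsmb3 (all values are placeholders, not from an actual installation):

# DEFAULT profile / (A)SCS instance profile fragment (illustrative values)
SAPGLOBALHOST = sapsmb3
# Enable the HA file-share check in the SAP kernel (requirement 2 above)
service/check_ha_node = 1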

Before deploying this solution, review the documentation:

SAP ASCS on File Share Installation Document

2. SMB Server Options for the File Share

There are many options for providing a highly available SMB 3.x compatible share.

These options are documented in this blog post: How to create a high available SAPMNT share?

Using the Azure Files service is not supported, as Azure Files does not support NTFS ACLs yet.

WARNING: It is not supported to change the share name from \\<SAPGLOBALHOST>\sapmnt to \\<SAPGLOBALHOST>\sapmnt_<SID>

Every SAP SID must have its own unique SAPGLOBALHOST

For example: \\<SAPGLOBALHOST_<SID>>\sapmnt, such as

\\sapsmb3_PRD\sapmnt

\\sapsmb3_BWP\sapmnt

\\sapsmb3_SOL\sapmnt

Review this note: 2492395 – Can the share name sapmnt be changed?

Also review:

2287140 – Support of Failover Cluster Continuous Availability feature (CA)

2506805 – Transport Directory DIR_TRANS

The SMB server used for the SAPMNT share can also be used for interface files and the transport directory DIR_TRANS.

3. Integration with Azure Site Recovery

The SAP ASCS on File Share works in combination with Azure Site Recovery.

Azure Site Recovery is tested and supported with SAP applications.

This blog discusses SAP applications on Azure Site Recovery.

A full whitepaper on protecting SAP applications with Azure Site Recovery is available at:

https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-workload#protect-sap

4. Frequently Asked Questions

Q1. Where to find the Documentation for SAP ASCS on File Share?

A1. SAP ASCS on File Share Installation Document

Q2. Is the SAP ASCS on File Share fully integrated with SWPM (SAPInst)?

A2. The initial installation is done via SAPInst, and then some manual steps are required.

Q3. What is the recommended SMB server technology?

A3. It is recommended to evaluate all the options; there are pluses and minuses for each. DFS-R is a quite mature technology, supports NFS (for Linux systems), and supports DR scenarios. DFS is site aware – review the documentation about the INSITE option.

Q4. Windows 2016 includes a Scale Out File Server (SOFS). Can this be used as the SMB source?

A4. Yes, SOFS is a good option. Note that SOFS is not site aware. This means that if a DR solution is set up with SOFS servers in remote locations (possibly over slow WAN links), the SMB client [the SAP application server] cannot determine which server is local. This may result in unpredictable performance and is not recommended. If all the SOFS nodes are in the same site, SOFS is suitable.

Q5. Does the SAP ASCS work with Azure Site Recovery?

A5. Yes, the SAP ASCS on File Share works well with Azure Site Recovery

Q6. Is the SAP ASCS on File Share supported on old kernels?

A6. No, 7.49 [NetWeaver 7.40 or higher] is required. Do not run old, unsupported kernels. Kernels 7.22 or lower cannot be used with the ASCS File Share, as they do not understand the parameter service/check_ha_node=1.

Q7. Is the SAP ASCS on File Share supported on Cloud platforms?

A7. The SAP ASCS on File Share works on Azure. Windows cluster solutions do not work correctly on other cloud platforms that do not support the dynamic assignment, change, and start of an IP address.

Q8. Is the SAP ASCS Enqueue Replication Server supported?

A8. Yes, use the SWPM tool to add the ERS onto the cluster.

Protecting SAP Solutions with Azure Site Recovery


Protect SAP Applications

Most large and medium-sized SAP solutions have some form of Disaster Recovery solution. The importance of robust and testable Disaster Recovery solutions has increased as more core business processes are moved to applications such as SAP. Azure Site Recovery has been tested and integrated with SAP applications; it exceeds the capabilities of most on-premises Disaster Recovery solutions, and does so at a lower TCO than competing solutions.

A new whitepaper has been written to guide SAP customers through the deployment of Azure Site Recovery for SAP solutions.

Start by reviewing the documentation: Protect a multi-tier SAP NetWeaver application deployment using Azure Site Recovery.

Benefits of Azure Site Recovery for SAP Customers:

  1. Azure Site Recovery substantially lowers the cost of DR solutions. Site Recovery does not start Azure VMs until an actual or test failover, so compute charges are normally not incurred; only the storage cost is charged while a VM is in replication mode.
  2. Azure Site Recovery allows customers to perform non-disruptive DR Tests at any time without the need to roll back the DR solution after the test. Site Recovery Test Failovers mimic actual failover conditions and can be isolated to a separate test network. Test failovers can also be run for as long as required.
  3. The resiliency and redundancy built into Azure far exceeds what most customers and hosting providers are able to provide in their own datacenters.
  4. Site Recovery “Recovery Plans” allow customers to orchestrate sequenced DR failover / failback procedures or runbooks, giving you the ability to achieve true Application level DR.
  5. Azure Site Recovery is a heterogeneous solution and works with Windows and Linux VMs, supports VMware and Hyper-V and works well with a range of database solutions.
  6. Azure Site Recovery has been tested with many SAP NetWeaver and non-NetWeaver applications.

Supported Scenarios

The following scenarios are supported:

  • SAP systems running in one Azure datacenter replicating to another Azure datacenter (Azure-to-Azure DR), as architected here.
  • SAP systems running on VMWare (or Physical) servers on-premises replicating to a DR site in an Azure datacenter (VMware-to-Azure DR), which requires some additional components as architected here.
  • SAP systems running on Hyper-V on-premises replicating to a DR site in an Azure datacenter (Hyper-V-to-Azure DR), which requires some additional components as architected here.

More support information is available at https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-support-matrix-azure-to-azure

SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types

Prerequisites

Before you start, make sure you understand the following:

  1. Replicating a virtual machine to Azure
  2. How to design a recovery network
  3. Doing a test failover to Azure
  4. Doing a failover to Azure
  5. How to replicate a domain controller
  6. How to replicate SQL Server

SAP 3-Tier vs. SAP 2-Tier Systems

3-Tier SAP Systems are recommended for Azure Site Recovery with the following considerations:

  1. Strictly 3-tier systems with no critical SAP software installed on the DBMS server
  2. Replication of the DBMS layer by the native DBMS replication tool (such as SQL Server AlwaysOn); a quick health-check sketch follows this list.
  3. SAP Application Server layer is replicated by Azure Site Recovery.
  4. ASCS layer can be replicated by Azure Site Recovery in most scenarios.
  5. Non-NetWeaver and non-SAP applications need to be assessed on a case by case basis to determine if they are suitable for replication by Azure Site Recovery or some other mechanism.
  6. Only Azure Resource Manager is supported for SAP systems using Site Recovery for DR purposes.

* Note: SAP Host Monitoring agents are not considered critical and may be installed on a 3-tier DBMS server.
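
As a quick sketch of how the DBMS replication state can be verified on the SQL Server layer, the standard AlwaysOn DMVs can be queried (this query is generic, not SAP-specific):

-- Show the synchronization state of every database in every availability
-- group, e.g. SYNCHRONIZED for the local HA pair, SYNCHRONIZING for DR.
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       DB_NAME(drs.database_id) AS database_name,
       drs.synchronization_state_desc
FROM sys.dm_hadr_database_replica_states AS drs
JOIN sys.availability_replicas AS ar ON drs.replica_id = ar.replica_id
JOIN sys.availability_groups AS ag ON drs.group_id = ag.group_id;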

In the diagram below the Azure Site Recovery Azure-to-Azure (ASR A2A) scenario is depicted:

  • The Primary Datacenter is in Singapore (Azure South-East Asia) and the DR datacenter is in Hong Kong (Azure East Asia). In this scenario, local High Availability is provided by having two VMs running SQL Server AlwaysOn in synchronous mode in Singapore
  • The File Share ASCS is used (this does not require a cluster shared disk solution)
  • DR protection for the DBMS layer is achieved using Asynchronous replication
  • This scenario shows “symmetrical DR” – a term used to describe a DR solution that is an exact replica of production; therefore the DR SQL Server solution has local High Availability. The use of symmetrical DR is not mandatory, and many customers leverage the flexibility of cloud deployments to build a local High Availability node quickly after a DR event
  • Customers may also reduce the size of the VM type used in the DR datacenter and increase the VM size after a DR event
  • The diagram shows that the SAP NetWeaver ASCS and Application server layer is replicated to DR via Azure Site Recovery tools

Note: SAP now supports deploying the ASCS without the requirement to have a shared disk (called SAP ASCS File Share Cluster). Azure Site Recovery also supports SIOS Shared Cluster Disks

SAP Notes

Following is a list of useful SAP Notes for various requirements:

License key related

94998 – Requesting license keys and deleting systems

607141 – License key request for SAP J2EE Engine

870871 – License key installation

1288121 – How to download temporary license keys for Analytics Solutions from SAP (BusinessObjects)

1644792 – License key/installation of SAP HANA

2035875 – Windows on Microsoft Azure: Adaption of your SAP License

2036825 – How to Get an Emergency Key from SAP

2413104 – How to get a license key after the hardware exchange

Supported scenarios

1380654 – SAP support in public cloud environments

1928533 – SAP Applications on Azure: Supported Products and Azure VM types

2015553 – SAP on Microsoft Azure: Support prerequisites

2039619 – SAP Applications on Microsoft Azure using the Oracle Database: Supported Products and Versions

Setup and installation

1634991 – How to install (A)SCS instance on more than 2 cluster nodes

2056228 – How to set up disaster recovery for SAP BusinessObjects

Troubleshooting

1999351 – Troubleshooting Enhanced Azure Monitoring for SAP

Microsoft Links & KB Articles

https://blogs.msdn.microsoft.com/saponsqlserver/

https://azure.microsoft.com/en-us/blog/tag/azure-site-recovery/

New SQL Server 2016 functionality helps SAP supportability


Due to the combined effort of the SAP–Microsoft porting group, the SQL Server development team added new functionality to the SQL Server UPDATE STATISTICS command and to the way SQL Server automatically updates statistics.

This new functionality enables SAP customers on SQL Server to persist the sample rate of manual and automatic statistics updates.
In some cases the default sample rate of a manual or automatic UPDATE STATISTICS command is too small to reflect the real distribution of data within the table. This is especially true for very large tables with a low or very low selectivity on the column in question. Note that the sample rate used for the automatic update statistics depends on the total number of rows and decreases as the table size increases, meaning bigger tables get smaller sample rates. With this new addition we can force a sample rate for specific columns that is then used by later manual updates (manual updates without specifying a sample rate) and automatic updates.

The new addition to the UPDATE STATISTICS command is a new option in the WITH clause with the syntax:

PERSIST_SAMPLE_PERCENT = { ON | OFF }

It is officially documented in Books Online, and Pedro Lopes from the Microsoft Tiger Team blogged about it in more detail here. It ships with SQL Server 2016 SP1 CU4 (13.0.4446.0), so you need at least this update if you want to use this feature.
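
A minimal usage sketch (the table and statistics object names below are placeholders, not from an actual SAP system):

-- Persist a 20 percent sample rate for one statistics object on a large,
-- low-selectivity column (object names are illustrative).
UPDATE STATISTICS [VBAK] ([VBAK~STATS]) WITH SAMPLE 20 PERCENT, PERSIST_SAMPLE_PERCENT = ON;

-- Later manual updates without a sample clause, as well as automatic
-- statistics updates, now reuse the persisted 20 percent sample rate.
UPDATE STATISTICS [VBAK] ([VBAK~STATS]);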

Please handle this new option with care and only use it when you have strong evidence that the default sample rate is too small. Wrong usage (e.g. a very high sample rate for many columns on busy tables) can increase the system load tremendously, up to a system standstill.

Few interesting findings on HANA with MDC


I was working on SAP HANA with MDC and had a few very interesting learnings.

If you have been using non-MDC HANA databases, you may come across these scenarios. This article summarizes the issues and their resolutions.

Setup Configuration

I had the following setup for SAP HANA during my testing.

  • HANA Version Installed: HANA 2 SPS1
  • Operating System: SUSE Linux Enterprise Server 12 SP2
  • Hardware: HANA Large Instances in Azure

Along with the above setup, I had the following SAP application layer installed:

  • SAP Application: SAP NetWeaver 7.4
  • Database Name: H11
  • Instance Number: 00

The following common scenarios were tested, and the article describes the fixes for these issues.

  • Scenario 1: Unable to take backup from HANA Studio
  • Scenario 2: Unable to confirm snapshot from HANA Studio
  • Scenario 3: Unable to view Index server process

Please download the complete article Few-interesting-findings-on-HANA-with-MDC for more information.

Distributed Availability Groups extended


SQL Server 2016 supports distributed availability groups (AGs) as an additional HA feature of the engine. Today we want to show how this feature can be used to transfer near-real-time productive data around the globe.

Based on Jürgen's blog about SQL Server 2016 Distributed Availability Groups, we set up a primary AG with the SAP database (e.g. PRD) as the only database. Furthermore, we set up two additional servers with SQL Server 2016, each as a single-node cluster (just for demonstration purposes; a two-node cluster would work as well). One cluster will act as a distribution server that takes the data from the productive system and distributes it to the global locations; the second one-node cluster is the target. This picture illustrates how the complete setup will look:

The availability mode is synchronous within the primary AG and between the primary and the distribution AG. From there the data is sent to the far-far-away location (target AG) in asynchronous mode. The distribution system is located either in the same data center as the primary AG or in a very close one, to be able to use the synchronous mode. With this setup we get a synchronous-like replication to the target: even if the complete primary system goes up in flames, the distribution system will still be able to sync the data to the target AG.

We have three separate AGs (primary, distribution, target) which are connected with two distributed AGs: the first over the primary and the distribution AG, and the second over the distribution and the target AG. With this kind of setup one can replicate multiple systems over one distributor to the target AG, as this picture shows:

How do we set it up? As a prerequisite we have the PRD database from the PRIMARY server restored on all other servers (SECONDARY, DISTRIBUTION, TARGET) in restoring mode (WITH NORECOVERY), so that we can easily set up the AGs. As SQL Server Management Studio 17 does not support distributed AGs in all details, we have to set them up with a script. On a high level the script executes these steps:

  • connect to the PRIMARY server, create and configure an AlwaysOn endpoint (AO_Endpoint)
  • connect to the SECONDARY server, create and configure an AlwaysOn endpoint (AO_Endpoint)
  • connect to the DISTRIBUTION server, create and configure an AlwaysOn endpoint (AO_Endpoint)
  • connect to the TARGET server, create and configure an AlwaysOn endpoint (AO_Endpoint)

  • connect to the DISTRIBUTION server, create and configure a one-node AG (AG_PRD_DISTRIBUTOR) with a TCP/IP listener
  • connect to the TARGET server, create and configure a one-node AG (AG_PRD_TARGET) with a TCP/IP listener
  • connect to the PRIMARY server, create and configure a two-node AG (AG_PRD_MAIN) with a TCP/IP listener

  • still on the PRIMARY create a distributed AG (DAG_PRD_MAIN) over the main AG (AG_PRD_MAIN) and the AG of the DISTRIBUTION server (AG_PRD_DISTRIBUTOR)
  • connect to the DISTRIBUTION server and join the distributed AG (DAG_PRD_MAIN) from the PRIMARY
  • then create a distributed AG (DAG_DISTRIBUTOR_TARGET) over the AG of the TARGET server (AG_PRD_TARGET) and the AG of the DISTRIBUTION server (AG_PRD_DISTRIBUTOR)
  • connect to the TARGET server and join the distributed AG (DAG_DISTRIBUTOR_TARGET) with the DISTRIBUTION server
  • change the status of the DB to readable on the target

You will find the full script at the end as an attachment.
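
To give an impression of the core statements, here is a minimal sketch of the first distributed AG; the listener URLs and port 5022 are assumptions, and the attached script contains the complete, tested statements:

-- Step 1, on the PRIMARY server: create the distributed AG DAG_PRD_MAIN
-- over the main AG and the distribution AG.
CREATE AVAILABILITY GROUP [DAG_PRD_MAIN]
   WITH (DISTRIBUTED)
   AVAILABILITY GROUP ON
      'AG_PRD_MAIN' WITH
         ( LISTENER_URL = 'tcp://ag-prd-main.contoso.local:5022',
           AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
           FAILOVER_MODE = MANUAL,
           SEEDING_MODE = AUTOMATIC ),
      'AG_PRD_DISTRIBUTOR' WITH
         ( LISTENER_URL = 'tcp://ag-prd-dist.contoso.local:5022',
           AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
           FAILOVER_MODE = MANUAL,
           SEEDING_MODE = AUTOMATIC );

-- Step 2, on the DISTRIBUTION server: join the distributed AG created above.
ALTER AVAILABILITY GROUP [DAG_PRD_MAIN]
   JOIN
   AVAILABILITY GROUP ON
      'AG_PRD_MAIN' WITH
         ( LISTENER_URL = 'tcp://ag-prd-main.contoso.local:5022',
           AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
           FAILOVER_MODE = MANUAL,
           SEEDING_MODE = AUTOMATIC ),
      'AG_PRD_DISTRIBUTOR' WITH
         ( LISTENER_URL = 'tcp://ag-prd-dist.contoso.local:5022',
           AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
           FAILOVER_MODE = MANUAL,
           SEEDING_MODE = AUTOMATIC );

-- The second distributed AG (DAG_DISTRIBUTOR_TARGET) is created analogously
-- on the DISTRIBUTION server, with AVAILABILITY_MODE = ASYNCHRONOUS_COMMIT
-- for the far-away target AG.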

To test the scenario we started the SGEN transaction on the underlying PRD SAP system that is connected to the primary AG. We used just one application server with 20 dialog work processes to generate all the report loads of the system. SGEN used up to 15 work processes at the same time:

To measure the throughput flowing through the DAGs, we used the Windows Performance Monitor tool on one of the systems. One can see that the primary is sending an average of 1.8 MB/sec to the secondary (red block). The first DAG to the distributor (green) and the data flow to the target (blue) show nearly the same value, so there is no throughput penalty from using distributed AGs in a system.

If you want to measure the overall travel time of a change, you can set up an easy test. The prerequisite for this is that the target database is set up as a readable secondary, so that we can connect to the DB. On the target system we start SQL Server Management Studio, open a new query, and run the following script:


USE PRD
-- Wait until the test table created on the primary arrives on the target
WHILE NOT EXISTS(SELECT name FROM sys.objects WHERE name = N'DAGTEST')
    WAITFOR DELAY N'00:00:01'

-- Print the arrival time on the target side
PRINT SYSDATETIME()


This script checks if a table named DAGTEST exists in the PRD database and waits a second if it can't be found. Once the table is found, it prints out the current time. On the primary AG we now open a query that creates the table:


CREATE TABLE DAGTEST (f1 INT)
PRINT SYSDATETIME()
-- Later we can drop the table again:
-- DROP TABLE DAGTEST


This script just creates the table DAGTEST and then prints out the current time. Once the table is created, the change is transferred over the distributor to the target. On the target, the still-running script detects the freshly created table and prints out the current time as well. By comparing the time from the primary script and the target script we can determine the overall travel time of changes through the system.

Full script to set up the DAGs:
DAGScript
