Running SAP Applications on the Microsoft Platform

File Server with SOFS and S2D as an Alternative to Cluster Shared Disk for Clustering of an SAP (A)SCS Instance in Azure is Generally Available


We are excited to announce general availability of clustering of an SAP (A)SCS instance on Windows Server Failover Cluster (WSFC) with a file server, that is, Scale Out File Server (SOFS) with Storage Spaces Direct (S2D), in the Azure cloud. This is an alternative to the existing option of clustering an SAP (A)SCS instance with cluster shared disks.

Many SAP customers who run their SAP system on Azure configure their SAP systems as high availability (HA) systems using Microsoft Windows Server Failover Cluster. They cluster the SAP single points of failure (SPOFs), that is, the DBMS and the SAP (A)SCS instance.

When you cluster an SAP (A)SCS instance, the typical setup uses cluster shared disks.

Anyone who has worked with Windows Failover Cluster knows that cluster shared disks are often hardware-based solutions, and hardware-based solutions cannot be used in every environment. Azure is one such example.

In the past, Microsoft did not offer its own solution for cluster shared disks. SAP customers on Azure would typically use a third-party solution to provide a cluster shared disk for HA of an SAP (A)SCS instance.

One of the top requests from SAP customers running their SAP HA systems in Azure was an alternative Microsoft solution to cluster shared disks.

SAP developed a new HA architecture for clustering the SAP (A)SCS instance using a file share, as an additional option to cluster shared disks. SAP also developed a new SAP cluster resource DLL that is file-share aware. For more information, check this blog: New SAP cluster resource DLL is available!

Clustering of SAP (A)SCS instances with a file share is supported for SAP NetWeaver 7.40 (and higher) products with SAP Kernel 7.49 (and higher).

Existing SAP HA Architecture with Cluster Shared Disk

When you install an SAP (A)SCS instance on a cluster shared disk, you install not only the SAP (A)SCS<InstNr> instance but also the SAP GLOBAL HOST, that is, the SYS folder. The virtual cluster host network name of the (A)SCS instance (used by the message and enqueue server processes) is at the same time the SAP GLOBAL HOST name.


Figure 1: Existing SAP (A)SCS HA architecture with cluster shared disks

New SAP HA Architecture With File Share

With the new SAP (A)SCS HA architecture, the most important changes are the following:

  • The SAP (A)SCS instance (message and enqueue server processes) is separated from the SAP GLOBAL HOST SYS folder
  • SAP central services run under an SAP (A)SCS instance
  • The SAP (A)SCS instance is clustered and is accessible using the virtual host name <(A)SCSVirtualHostName>
  • The clustered SAP (A)SCS<InstNr> instance is installed on local disks on both nodes of the SAP (A)SCS cluster, so no shared disk is needed
  • The SAP GLOBAL files are placed on an SMB file share and are accessed using the UNC path \\<SAPGLOBALHost>\sapmnt\<SID>\SYS
  • The <(A)SCSVirtualHostName> network name is different from the <SAPGLOBALHost> name

 

Figure 2: New SAP (A)SCS HA architecture with SMB file share

If we installed the file server on a standalone Windows machine, we would create a single point of failure. Therefore, high availability of the file share server is also an important part of the overall SAP system HA story.

To achieve high availability of a file share:

  • You must ensure that planned or unplanned downtime of the Windows servers/VMs does not cause downtime of the file share
  • The disks used to store files must not be a single point of failure

With Windows Server 2016, Microsoft offers two features which fulfill these requirements:

  •  Scale Out File Server (SOFS)
  • Storage Spaces Direct (S2D)

Scale Out File Server as Microsoft File Share HA Solution

Microsoft recommends the Scale Out File Server (SOFS) solution for enabling HA file shares. In the SAP case, the SAPMNT file share is protected with the HA SOFS solution.


Figure 3: SAP (A)SCS instance and SOFS deployed in TWO clusters

 

As the name implies, this solution is "scale-out", that is, access to the file share is parallelized: different clients (in our case SAP application servers and the SAP (A)SCS instance) access the share through all cluster nodes. This is a big advantage compared to the Generic File Share, another HA file share feature of Windows clustering, where all access to the file share runs through one active node.

Storage Spaces Direct (S2D) as Cluster Shared Storage HA Solution

SOFS stores files on cluster shared volumes (CSV). SOFS supports different shared storage technologies.

For running SOFS on Azure, two criteria are important for cluster shared disks:

  • Support for cluster shared disks for SOFS in the Azure environment
  • High availability and resiliency of the cluster shared storage

The Storage Spaces Direct (S2D) feature that comes with Windows Server 2016 fulfills both of these criteria.

S2D enables us to stripe the local disks of different cluster nodes into a storage pool. Inside those pools, we can create volumes that are presented to the cluster as shared storage, that is, as cluster shared volumes.

S2D synchronously replicates disk content and offers different resiliency options, so losing some disks will NOT bring the whole shared storage down.


Figure 4: SOFS file share used to protect SAP GLOBAL Host files

 

The nice thing about S2D is that it is a software-based shared storage solution that works transparently in the Azure cloud as well as in on-premises physical or virtual environments.
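For orientation, here is a minimal PowerShell sketch of such an S2D-based SOFS setup (all names, sizes and accounts are placeholders; the official documentation linked below describes the complete procedure):

# Enable Storage Spaces Direct on the existing file server cluster
Enable-ClusterStorageSpacesDirect

# Create a mirrored volume on the S2D pool, exposed as a cluster shared volume
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "SAPVol" -FileSystem CSVFS_ReFS -Size 100GB

# Add the Scale Out File Server role using the SAP GLOBAL HOST network name
Add-ClusterScaleOutFileServerRole -Name SAPGLOBALHOST

# Create the sapmnt share on the new volume
New-SmbShare -Name sapmnt -Path "C:\ClusterStorage\SAPVol\sapmnt" -FullAccess "DOMAIN\SAP_Admins" -ContinuouslyAvailable $true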

End-to-End Architecture

The complete end-to-end architecture of SAP NetWeaver HA with a file share looks like this:


Figure 5: End-to-End SAP NetWeaver HA Architecture with SOFS File Share

 

Multi-SID Support

SAP (A)SCS Multi-SID enables us to install and consolidate multiple SAP (A)SCS instances in one cluster. Consolidation reduces your overall Azure infrastructure costs.

SAP (A)SCS Multi-SID clustering is also supported with a file share.

To enable a file share for the second SAP system <SID2> on the SAME SOFS cluster, you can use the existing SAP <SID1> <SAPGLOBALHost> network name and the same Volume1.


Figure 6: SAP Multi-SID configuration in two clusters

 


Figure 7: Multi-SID SOFS using same SAP GLOBAL host name

Another option is to use a new <SAPGLOBALHost2> network name and a new Volume2 for the second <SID2> file share.


Figure 8: Multi-SID SOFS with a different SAP GLOBAL host name 2

 

Available Documentation

For more information, have a look at the new documentation and white papers on the Microsoft SAP on Azure site.

From the SAP side, you can check this new white paper: Installation of an (A)SCS Instance on a Failover Cluster with no Shared Disks.

You can find more information on SOFS and S2D here:

 

 


SAP NetWeaver Installation on HANA database


This blog describes the SAP NetWeaver installation steps on the SAP HANA database – a step-by-step installation guide with real screenshots!

In this setup, the HANA Large Instance server is used to install the SAP HANA database, and the SAP NetWeaver application layer runs on an Azure VM. This is a hybrid-mode installation where the SAP application is installed on the Windows operating system in Azure, and the HANA database is installed on HANA Large Instances on the Linux operating system.

Please download a PDF version for complete details: SAP-NW-on-HANA-Installation-V1

Customer Experience with Columnstore on ERP


SAP released the report MSS_CS_CREATE a few months ago. Using this report, customers can create an additional Nonclustered Columnstore Index (NCCI) on any SAP ERP table. This has already been described here: https://blogs.msdn.microsoft.com/saponsqlserver/2017/04/13/using-columnstore-on-erp-tables.

In the meantime, several customers have tested this feature. They reported performance improvements for reporting scenarios with huge aggregations (see below). Other customers had feature requests for the report MSS_CS_CREATE. A new version of this report is now available in SAP Note 2419662 – Implementing Columnstore Indexes for ERP tables. You have to re-apply the correction instructions of this SAP Note to get the code update.

Performance Improvements in SAP CO-PA

One of our customers is using the NCCI for SAP CO-PA. A huge performance improvement was achieved simply by increasing SQL Server intra-query parallelism. For additional information regarding parallelism in SAP, see https://blogs.msdn.microsoft.com/saponsqlserver/2017/05/18/parallel-processing-in-sap. You might increase the SQL Server configuration option "max degree of parallelism", but this has an impact on all SAP queries (not only on CO-PA). Therefore, the customer decided to use a SQL Server optimizer hint in the ABAP code. Just using this hint resulted in a performance improvement by a factor of 10. Adding an NCCI on the largest CE1, CE2, and CE4 tables further improved the performance, to an overall acceleration by a factor of 77 (from 771 to 10 seconds).

Using ABAP Optimizer Hints for forcing an index

Having rowstore and columnstore indexes at the same time on the same table can become a challenge for the SQL Server Query optimizer. Therefore, you might have to add an ABAP optimizer hint. For example, to enforce the ABAP index IN1 (name of the index in SAP DDIC) on table ERPTEST, you have to add the following hint:
%_HINTS MSSQLNT 'TABLE ERPTEST abindex(IN1)'.

Take care that the table name and index name are in UPPER case. If the SELECT consists of a single table (no JOIN involved), then there is no need to explicitly use the table name. In this case, you can use &TABLE& instead:
%_HINTS MSSQLNT 'TABLE &TABLE& abindex(IN1)'.

You can use several optimizer hints within a single SELECT, for example:
SELECT MAX( msgnr ) sprsl
  FROM t100 INTO l_t_result
  GROUP BY sprsl
  %_HINTS MSSQLNT 'OPTION maxdop 8'
          MSSQLNT 'OPTION hash group'.

You can also combine optimizer hints in a single line:
%_HINTS MSSQLNT 'OPTION maxdop 8 OPTION hash group'.

When using an optimizer hint for SQL Server intra-query parallelism, you should not hard-code the degree of parallelism. Instead, you can use a variable (the same is done in SAP BW with the RSADMIN parameter MSS_MAXDOP_QUERY). The ABAP code could look like this:
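(The original post shows the code as a screenshot; the following is a minimal sketch of the idea. The variable name and hint value are illustrative; the %_HINTS syntax matches the examples above.)

DATA sql_hint TYPE string.
sql_hint = 'OPTION maxdop 8'.  "value could be determined at runtime

SELECT MAX( msgnr ) sprsl
  FROM t100 INTO l_t_result
  GROUP BY sprsl
  %_HINTS MSSQLNT sql_hint.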

Used Optimizer Hints in SAP CO-PA

Our customer added a few SQL Server optimizer hints in the ABAP code of the CO-PA templates. It is a good idea to add optimizer hints using an ABAP variable. When setting this variable in an external form routine (e.g. GET_SQL_HINT), you can change the hints without having to change the CO-PA code again:
Dependent on the input parameter (the name of the CO-PA form routine), GET_SQL_HINT calculates the required optimizer hint and fills the ABAP variable SQL_HINT. Even if the report Z_COPA_SQL_HINTS (which contains GET_SQL_HINT) does not exist, you do not get an error. In this case, the variable SQL_HINT is empty and no optimizer hint is added.
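A minimal sketch of this call pattern (the report and form routine names are as described above; the USING parameter is illustrative):

* If Z_COPA_SQL_HINTS does not exist, IF FOUND silently skips the call,
* SQL_HINT stays empty and no hint is added.
DATA sql_hint TYPE string.
PERFORM get_sql_hint IN PROGRAM z_copa_sql_hints IF FOUND
  USING    'OPEN_CURSOR_NO_HOLD_CE1'
  CHANGING sql_hint.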

  • For the SELECTs with aggregation on the CE1, CE2, and CE4 tables, a MAXDOP and a HASH GROUP hint have been added.
    Therefore, several form routines in the template include RKEVRK2B_READ_COST were changed: the call of GET_SQL_HINT and the hint %_HINTS MSSQLNT SQL_HINT have been added. Here, the variable SQL_HINT is set to 'OPTION maxdop 16 OPTION hash group'. The following example shows the form routine OPEN_CURSOR_NO_HOLD_CE1 from the template include RKEVRK2B_READ_COST:

  • For the SELECTs without aggregation on the CE1, CE2, and CE4 tables, a different hint has been added.
    It turned out that there are some SELECTs without aggregation where the existing rowstore index would be a much better choice than the columnstore index. However, because SQL Server execution plans are reused for parameters with different selectivity, the columnstore index was sometimes used. This resulted in a high CPU load and suboptimal performance. You could force the required index using an optimizer index hint as described above. However, our customer decided to use a different optimizer hint, which solved the issue: OPTIMIZE FOR UNKNOWN
    Therefore, several form routines in the template include RKEVRK2A_POST have been changed: the call of GET_SQL_HINT and the hint %_HINTS MSSQLNT SQL_HINT have been added. Here, the variable SQL_HINT is set to 'OPTION optimize for unknown'. The following example shows the form routine READ_ALL_PAOBJNRS_BY_CHARVALS from the template include RKEVRK2A_POST:
    Keep in mind that changing the templates above has no impact until the ABAP code is regenerated using the new templates. Therefore, the operating concerns must be regenerated using SAP transaction KEA0.

Improvements in SQL Server 2017

SQL Server 2017 allows the online creation of an NCCI. In SQL Server 2016, it was only possible to create an NCCI offline: a shared lock was held during the whole runtime of the index creation, which blocked all data modifications (INSERTs, UPDATEs, DELETEs) while the NCCI was created. As of SQL Server 2017, you can choose in report MSS_CS_CREATE whether you want to use the online option or not. Keep in mind that creating an index online takes longer and consumes tempdb space. In return, you do not block any other SAP users while creating the index.

Improvements in SAP report MSS_CS_CREATE

The NCCI cannot be transported using the SAP transport landscape. Therefore, you have to create the NCCI on the development, consolidation, and production systems separately. This works fine with report MSS_CS_CREATE, even on a production system which is configured in SAP as Not Modifiable. However, you cannot delete an NCCI using SAP transaction SE11 on a Not Modifiable SAP system. Therefore, report MSS_CS_CREATE now has a Delete Index button (Del Indx):

The second improvement in MSS_CS_CREATE is the Online option. It is greyed out in the screenshot above because this SAP system is running on SQL Server 2016.

Conclusion

An NCCI can speed up reporting performance on an SAP ERP system running on SQL Server 2016 or 2017. However, it is probably not useful for tables with a high transactional throughput (permanently many concurrent data modifications in the table). Based on your scenario, you can create an NCCI on the tables of your choice.

Improve SAP BW Performance by Applying the Flat Cube


Overview

SAP released the Columnstore Optimized Flat Cube over two years ago. We want to give a brief explanation of the benefits and advantages of using the Flat Cube – and in consequence encourage customers to apply it.

The Flat Cube has many benefits, for example, improved BW query performance and faster Data Transfer Processes (DTPs) into a cube. Furthermore, the Flat Cube is a prerequisite for using the improved BW statement generator (FEMS-pushdown). Before using the Flat Cube, you have to convert each cube to the Flat Cube design. Below we give guidance for quickly converting all cubes using the new report RSDU_IC_STARFLAT_MASSCONV. The report is available as a Correction Instruction in SAP Note 2116639 – SQL Server Columnstore documentation.

Benefits of Flat Cube

A brief overview of the Flat Cube is contained in https://blogs.msdn.microsoft.com/saponsqlserver/2015/03/27/columnstore-optimized-flat-cube-in-sap-bw. The Flat Cube on SQL Server uses the same table design as BW on HANA has been using for years. The benefits of a Flat Cube are:

  • Faster DTPs
    DTPs into a Flat Cube are typically much faster, because a Flat Cube does not contain dimension tables any more. Therefore, there is no need for the time-consuming DIMID generation when loading data into a cube. The BW Cube Compression is much faster for a Flat Cube. In most cases, it is not even needed any more. However, for Inventory Cubes we still recommend running the BW Cube Compression.
  • BW Query Performance
    The BW query performance is typically much better for the Flat Cube. The generated SQL queries are simpler since there is only one fact table and almost all dimension tables are gone (except the P-dimension). Therefore, there is no need any more for joining the dimension tables with the fact tables. Typical performance numbers are documented here:
    https://blogs.msdn.microsoft.com/saponsqlserver/2017/05/08/performance-evolution-of-sap-bw-on-sql-server
  • FEMS-Pushdown
    The Flat Cube is a prerequisite for further accelerating complex BW queries: The BW FEMS-pushdown is described here: https://blogs.msdn.microsoft.com/saponsqlserver/2017/06/30/customer-experience-with-sap-bw-fems-pushdown.
  • Aggregates not required
    You cannot create BW aggregates on a Flat Cube since you typically do not need them any more. If you still see the need for aggregates on a particular cube, then you can convert the cube back to a non-Flat cube.

Prerequisites

The Flat cube requires at least SQL Server 2014, but we recommend SQL Server 2016 or newer. Read the following SAP Notes:

You can convert the cubes of a BW Multi-Provider separately to the Flat Cube design. You can create a BW Semantically Partitioned Cube (SPO) in Flat Cube design. However, there is currently no conversion report available for converting an SPO to the Flat Cube design. Until SAP releases such a report, you have to create a new SPO as a Flat Cube and then transfer the data using a DTP from the old SPO.

Originally, it was not possible to use the Flat Cube design for a Real-Time Cube. This has been fixed with

Converting to Flat Cube

SAP uses the Repartitioning Framework for converting a Non-Flat Cube to a Flat Cube and the other way around. In the following, we only describe the conversion to the Flat Cube (which is probably the only direction you need on SQL Server).

The conversion to a Flat Cube is not done by simply creating indexes or copying data. Rows in the original e-fact table will be compressed further under some circumstances during the Flat Cube conversion. Therefore, the Flat Cube might contain fewer rows in its fact table than the original f-fact and e-fact tables contain together.

The DIMIDs 0, 1 and 2 in the P-dimension of a Flat Cube are reserved for special purposes. This allows a fixed database partitioning (of 4 partitions) for best performance. Cubes that have never been compressed might use DIMID 2. In this case, you get an error in the prerequisite check of the Flat Cube conversion. You are asked to compress a specific BW request (a particular request number with DIMID 2) before you can run the Flat Cube conversion.

The Flat Cube uses an optimized approach for loading Historical Transactions into Inventory Cubes (see Inventory Management at https://help.sap.com/saphelp_nw73/helpdata/en/e1/5282890fb846899182bc1136918459/frameset.htm). Before converting to a Flat Cube, you must run BW Cube Compression of all BW requests, which contain Historical Transactions.

SAP report RSDU_REPART_UI has been extended for the Flat Cube Conversion:

After choosing "Non-Flat to Flat" and entering the cube name, press "Initialize". A popup window appears, reminding you to perform a full database backup before running the conversion.

In the next screen, you can schedule a batch job. Be aware that the Flat Cube conversion always runs as a batch job (with the job name RSDU_IC_FLATCUBE/<cube>). Report RSDU_REPART_UI is just used for scheduling and monitoring these batch jobs. You should not schedule the report RSDU_REPART_UI itself as a batch job.

The conversion to a Flat Cube can require a huge amount of transaction log space. Therefore, you might have to increase the size of the SQL Server transaction log and the frequency of log backups. To keep the transaction size low, the cube is copied in chunks (each request in the f-fact table and each time-DIMID in the e-fact table is copied separately). The chunks are processed in parallel using RFCs. By default, up to 3 chunks are processed at a time. You can speed up processing by configuring more parallel chunks using the RSADMIN parameter RSDU_REPART_PARALLEL_DEGREE. However, this parameter is overwritten by the RSADMIN parameter QUERY_MAX_WP_DIAG (if that is explicitly set).

After pressing “Monitor” in RSDU_REPART_UI, you can track the progress of the Flat Cube Conversion. How to process failed conversions is described in the section “Troubleshooting” below.

Flat Cube Mass-Conversion

SAP recently released the report RSDU_IC_STARFLAT_MASSCONV (all necessary code changes are described in SAP Note 2116639 – SQL Server Columnstore documentation). This Report allows scheduling the conversion of many BW cubes at the same time. When starting report RSDU_IC_STARFLAT_MASSCONV the first time, you have to press “Generate Work List”. This starts a batch job, which collects information about all non-Flat cubes. When pressing “Refresh Display of Work List”, each non-Flat cube is displayed in one of the three tabs:

Non-Convertible cubes are displayed in the 1st tab. These cubes do not fulfill the prerequisites of the Flat Cube conversion (yet). Once you have applied all necessary prerequisites, you have to run the work list batch job again.

In the 2nd tab, you can select the cubes you want to convert. After pressing the Start icon, a SAP batch job schedule window occurs. Here you can define the start time for the conversion of the first cubes. The conversion of the other selected cubes is scheduled as a chain: The next batch job starts once its predecessor finishes.

The batch scheduling makes sure that only 3 conversions are running at the same time. You can change this number in the main screen of RSDU_IC_STARFLAT_MASSCONV. Furthermore, the total number of rows converted at the same time is also limited. The idea behind this is to reduce the workload and to prevent a full database log (when running in the Simple recovery model).

However, a production system should use the SQL Server recovery model Full or Bulk-logged. In this case, you should increase the size of the transaction log and the frequency of the transaction log backups during the Flat Cube conversions. Otherwise, the transaction log might fill up, whether you run the conversions serially or in parallel. To be on the safe side, you should run the conversion of the biggest cubes separately. Run a transaction log backup immediately before starting the conversion.
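Such a log backup can be scripted, for example, with the SqlServer PowerShell module (a sketch; instance name, database name and backup path are placeholders):

# Transaction log backup immediately before starting a big conversion
Import-Module SqlServer
Backup-SqlDatabase -ServerInstance "SAPSQL01" -Database "PRD" -BackupAction Log -BackupFile "R:\LogBackup\PRD_pre_flatcube.trn"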

In the 3rd tab of RSDU_IC_STARFLAT_MASSCONV, you can see all jobs for the cube conversion (whether they are scheduled, running, or failed). After selecting one conversion job, you can jump to the conversion log screen (which actually is the same screen as the monitor screen in RSDU_REPART_UI).

Troubleshooting Failed Conversions

The Flat Cube conversion of a single cube consists of a sequence of steps. You can see these steps in the monitor screen of report RSDU_REPART_UI. At the very beginning of the cube conversion, a cube conversion lock is set. In addition, a read lock is set in step SET_READ_LOCK. The data and structure of the original cube are not touched until the step SET_READ_LOCK has been executed.

If the conversion fails before reaching step SET_READ_LOCK, then you do not need to take care of the issue immediately. You can release the cube conversion lock and continue working with the Non-Flat cube. For releasing the locks (read lock and cube conversion lock), simply press the UNLOCK button in report RSDU_REPART_UI (for this, SAP Note 2580730 – Unlock failed Flat Cube Conversion has to be applied).

In the following example, the conversion to Flat failed in step COPY_TO_SHD_EFACT. When clicking on the step, you can see SQL error 9002, which means that the transaction log has filled up. Therefore, the first thing to do is to perform a transaction log backup.

You can simply restart the conversion in RSDU_REPART_UI with 2 clicks:

  1. Select the conversion request by clicking on it (“Conversion to Flat”).
  2. Press the button “Restart Request”.
    A popup window appears that lets you schedule a batch job.

In report RSDU_IC_STARFLAT_MASSCONV, you have to restart each failed request individually. A mass restart is planned for a future version of RSDU_IC_STARFLAT_MASSCONV. Therefore, you should take care of the transaction log size and backups when using RSDU_IC_STARFLAT_MASSCONV.

Conclusion

SAP BW performance can be improved by applying the Flat Cube. Using report RSDU_REPART_UI, you can convert a single cube. Using RSDU_IC_STARFLAT_MASSCONV, you can convert many cubes at the same time. We recommend converting all cubes to the Flat Cube design with two exceptions: Converting SPO cubes is currently not possible. For Real-Time Cubes, the benefits of the Flat Cube highly depend on the customer scenario.

SAP on Azure: General Update – January 2018


SAP and Microsoft are continuously adding new features and functionalities to the Azure cloud platform. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.

1. M-Series, Dv3 & Ev3-Series VMs Now Certified for NetWeaver

Three new VM types are certified and supported by SAP for NetWeaver AnyDB workloads. AnyDB refers to NetWeaver applications running on SQL Server, Oracle, DB2, Sybase or MaxDB.

A subset of these VMs is currently being certified for Hana.

The Dv3 VM type has 4 GB of RAM per CPU and is suitable for SAP application servers or small DBMS servers.

The Ev3 VM type has 8 GB of RAM per CPU for E2v3 – E32v3; the E64v3 has 432 GB. This VM type is suitable for large DBMS servers.

The M VM type has up to 3.8 TB of RAM and 128 CPUs and is suitable for very large DBMS workloads.

All three of these new VM types have many new features, in particular greatly improved networking performance. More information on Dv3/Ev3

The Azure Service Availability site details the release status of each VM type per datacenter; however, Ev3/Dv3 should generally be available everywhere.

New VM Types and SAPS values:

VM Type    CPU & RAM           SAPS
D2s_v3     2 CPU, 8 GB         2,178
D4s_v3     4 CPU, 16 GB        4,355
D8s_v3     8 CPU, 32 GB        8,710
D16s_v3    16 CPU, 64 GB       17,420
D32s_v3    32 CPU, 128 GB      34,840
D64s_v3    64 CPU, 256 GB      69,680
E2s_v3     2 CPU, 16 GB        2,178
E4s_v3     4 CPU, 32 GB        4,355
E8s_v3     8 CPU, 64 GB        8,710
E16s_v3    16 CPU, 128 GB      17,420
E32s_v3    32 CPU, 256 GB      34,840
E64s_v3    64 CPU, 432 GB      70,050
M64s       64 CPU, 1000 GB     67,315
M64ms      64 CPU, 1792 GB     68,930
M128s      128 CPU, 2000 GB    134,630

The official list of VM types certified for SAP NetWeaver applications can be found in SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types 

SAP cloud benchmarks are listed here:

E64v3 Benchmark can be found here 

D64v3 Benchmark can be found here 

m128 Benchmark can be found here 

m128 BW Hana Benchmark can be found here 

2. SAP Business One (B1) on Hana & SQL Server Now Certified on Azure

SAP Business One is a common SMB ERP solution. Many SAP B1 customers run on SQL Server today. SAP B1 on SQL Server is now Generally Available for Azure VMs.

SAP has also ported SAP B1 to Hana. SAP B1 on Hana is now certified on Azure DS14v2 for approximately 40 users.

Customers planning to run SAP B1 on Azure may be able to lower costs by using the B1 Browser Access feature available in more modern versions of SAP B1. Browser Access in some cases eliminates the need to install the B1 client on Terminal Server VMs on Azure.

These SAP Notes may be useful:
2442627 – Troubleshooting Browser Access in SAP Business One

2194215 – Limitations in SAP Business One Browser Access

2194233 – Behavior changes in Browser Access mode of SAP Business One as compared to Windows desktop mode

SAP on Azure certification information can be found here:

For all SAP on Azure documentation start at this page 

3. Managed Disks Recommended for SAP on Azure

Managed Disks are generally recommended for all new deployments.

Managed Disks reduce complexity and improve availability by automatically distributing the storage of VMs in an availability set onto different storage nodes, so that the failure of a single storage node cannot cause an outage on two or more VMs in the availability set.

Notes:

1. Managed Standard disks are not supported for SAP NetWeaver application server or DBMS server. The Azure host monitoring agent does not support Managed Standard disks.

2. In general it is recommended to deploy SAP application servers without an additional data disk and to install /usr/sap/<SID> on the boot disk. The boot disk can be up to 1 TB in size; SAP application servers do not require this much storage space and do not require high IOPS under normal circumstances

3. It is not possible to add both managed and unmanaged disks to a VM that is in an availability set

4. SQL Server VMs running with datafiles directly on blob storage cannot leverage the features of Managed Disks

5. In general it is recommended to use Managed Premium disks for SAP application servers so that these VMs are guaranteed the financially backed Azure Single VM SLA of 99.9% (Note: the actual achieved SLA is typically much higher than 99.9%)

6. In general it is recommended to use Managed Premium disks for SAP DBMS servers as detailed in SAP Note 2367194 – Use of Azure Premium SSD Storage for SAP DBMS Instance 

A good overview of managed disks is available here:

A deep dive on managed disks is available here:

Pricing and performance details for the various Azure Disks can be found here:

A very good Frequently Asked Questions is here 

4. Sybase ASE 16 SP3 PL2 "Always-on" on Azure

Sybase ASE 16 SP2 and higher is supported on Windows and Linux on Azure as documented in SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types

Sybase ASE includes an HA/DR solution called "Always-on". The features and functions of this Sybase solution are very different from SQL Server AlwaysOn. This HA solution does not require shared disks.

For information about the Sybase ASE release schedule, review

The central Sybase HA documentation can be found here:

SAP supports multiple replica databases according to SAP Note 2410733 – Always-On (HADR) support for 1-to-many replication – SAP ASE 16.0 SP02 PL05

Sybase "Always-on" does not require an Azure ILB, and the installation on Azure is relatively transparent: there is no requirement to configure an Internal Load Balancer in typical configurations. Sybase 16 SP3 PL2 is expected to offer several new features for SAP customers.

If there are questions about the setup of Sybase on Azure or inconsistencies in the documentation, please open an OSS message to BC-DB-SYB.

5. Resource Groups, Tags, Role Based Access Control, Billing, VNet, NSG, UDR and Resource Locking

Before starting an Azure deployment it is very important to design the core “Foundation Services” and structures that will support the IaaS and PaaS resources running on Azure.

Documenting the recommendations for designing and configuring all these elements would be a multi-part blog in itself. This topic provides a starting point for customers planning their Azure deployment for SAP landscapes, and covers the kinds of questions that are commonly asked by SAP customers.

1. Resource Groups provide a way to monitor, control access to, provision and manage billing for collections of Azure objects. Often SAP customers deploy a Resource Group per environment, such as Sandbox, Dev, QAS and Production. This allows for a simple billing breakdown per environment. If a business unit wants a clone of production for testing new business processes, built-in Azure functionality can be used to clone production and copy it into a new Resource Group called "Project". The monthly cost can be monitored and charged back to the business unit that requested this system

2. Azure Tags are used by some SAP customers to provide additional attributes about a particular VM or other Azure object. For example, a VM could be tagged as "ECC 6.0" or "NetWeaver Application Server". Azure Tags allow for more precise billing and security control with Role Based Access Control. It is possible to query tags and, for example, determine which VMs are SAP or non-SAP VMs, and which VMs are application servers or DBMS servers

3. Role Based Access Control allows a customer to segregate duties, delegate limited administrative rights to teams such as the SAP Basis Team and create a fine-grained security model. It is common to delegate significant Azure IaaS rights to the Basis Team. Basis should be allowed to create and change many Azure resources such as VMs. Typically Basis would not be able to create or change VNet or network level resources

4. Billing allows greater cost transparency than on-premises solutions. Azure Resource Groups and Tags should be designed so that it is very clear which SAP system or environment corresponds to line items on Azure monthly bills. Ideally additional project systems or systems requested by individual business units should be able to be charged back.

5. Azure VNet, NSG and UDR design is normally handled by a network expert and not the SAP Basis Team. There are some factors that must be considered when designing the VNet topology and the NSGs and UDRs:

a. Communication between the SAP application servers and DBMS servers must not be routed or inspected by virtual appliances. SAP is very sensitive to latency between application server and DBMS server. “Hub & Spoke” network topologies are one solution that allows security and inspection of client traffic while avoiding inspection of SAP traffic

b. Often the DBMS servers and Application servers are placed in separate subnets on the same VNet. Different NSGs are then applied to SAP application servers and DBMS servers

c. UDRs should not route traffic unnecessarily back to the on-premises proxy server. Common mistakes include SQL Server datafiles on blob storage being accessed via the on-premises proxy server (leading to very poor performance), or http(s) communication between SAP applications being routed back to on-premises proxies

It is now popular to deploy a “Hub and spoke” network topology.

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-overview

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-nsg

A very good blog https://blogs.msdn.microsoft.com/igorpag/2016/05/14/azure-network-security-groups-nsg-best-practices-and-lessons-learned/

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-networks-udr-overview

6. Azure Resource Locking prevents accidental deletion of Azure objects such as VMs and storage. It is recommended to create the required Azure resources at the start of a project. When most add/move/change activities are finished and the Azure deployment has stabilized, all the resources can be locked. Only a super administrator can then unlock a resource and allow the resource (such as a VM) to be deleted.

https://blogs.msdn.microsoft.com/cloud_solution_architect/2015/06/18/lock-down-your-azure-resources/
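As a sketch, a CanNotDelete lock on a resource group can be created with the AzureRM PowerShell module (the lock and resource group names are placeholders):

# Prevent accidental deletion of all resources in the production resource group
New-AzureRmResourceLock -LockName "NoDelete-SAP-PRD" -LockLevel CanNotDelete -ResourceGroupName "SAP-PRD-RG" -LockNotes "Protect production SAP resources"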

It is considerably easier to implement these best practices before a system is live. It is possible to move Azure objects such as VMs between subscriptions or resource groups as illustrated below (full support for Managed Disk environments is due early 2018; in the interim it is possible to download the VHD files of a Managed Disk VM using the "Export" button in the Azure portal).

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/move-vm

https://docs.microsoft.com/en-gb/azure/azure-resource-manager/resource-group-move-resources (refer to the Virtual Machines limitations section)

6. AzCopy for Linux Released

AzCopy is a popular utility for copying blob objects within Azure or for uploading or downloading objects between on-premises and Azure. An example could be uploading R3load dump files from on-premises to Azure while migrating from UNIX/Oracle to Windows/SQL Server on Azure.

AzCopy is now available for Linux platforms. This utility requires .NET Core 2.0 for Linux to be installed.

AzCopy for Windows is available here. To improve AzCopy throughput, the /NC:<xx> parameter can be specified. Depending on the bandwidth and latency of a connection, values between 16 and 32 could significantly improve throughput. Values much higher than 32 may saturate most internet links.
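A typical upload could look like this (a sketch; the storage account, container and key are placeholders):

AzCopy /Source:C:\R3load_dump /Dest:https://<account>.blob.core.windows.net/migration /DestKey:<storage_key> /S /NC:24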

An alternative to AzCopy is Blobxfer

7. Read Only Domain Controllers – RODC: Is a RODC More Secure on Azure Than a DC?

Read Only Domain Controllers (RODC) are a feature that has been available for many years. This feature is documented here:

The differences between a Read Only Domain Controller and a writeable domain controller are explained here:

Recently multiple customers have proposed to put RODC in Azure stating that they believe this to be “more secure”.

The security profile of an RODC and of a writeable domain controller on Azure with an ExpressRoute connection back to on-premises domain controllers is very similar. The only exception is the "Filtered Attribute Set": some AD attributes may not be replicated to an RODC (but almost all attributes are replicated).

There are some recommendations for securing Domain Controllers on Azure and in general:

1. Intruders can query Active Directory on an RODC or a writeable DC equally – so-called "surveying", trying to find vulnerable, weak or unsecured user accounts. IDS and IPS solutions should be deployed both on Azure and on-premises to detect surveying

2. One of the single biggest security enhancements possible is to implement Multi-factor authentication – Azure has built in services for Multi-factor authentication 

3. It is recommended to use Azure Disk Encryption on the boot disk and the disks containing the AD DS database, logs and SYSVOL. This prevents someone from cloning the entire VM, downloading the VHD files and starting up the RODC or writeable DC, where debugging tools could then be used to try to compromise the AD database

Summary: deploying an RODC instead of a writeable Domain Controller does not significantly change the security profile of an Active Directory solution on Azure deployments with ExpressRoute connections back to on-premises AD infrastructure. Instead, use IDS, multi-factor authentication and Azure Disk Encryption in conjunction with other security measures to build a highly secure AD environment. Do not rely on the simple fact that a Domain Controller is read-only as the sole security mechanism.

8. Azure Site Recovery: Update on Support Status

Azure Site Recovery is a powerful platform feature that allows customers to achieve best-in-class Disaster Recovery capabilities at a fraction of the cost of competitive solutions.

A blog and a whitepaper have been released detailing how to deploy Azure Site Recovery for SAP applications:

https://docs.microsoft.com/en-us/azure/site-recovery/site-recovery-sap

http://aka.ms/asr-sap

Several new features and capabilities will be added to Azure Site Recovery, and some are already available:

1. Azure Disk Encryption is a feature that scrambles the contents of Azure boot and/or data disks. Support for this feature will be in preview soon. If this feature is required please contact Microsoft

2. Support for Storage Spaces and SIOS is Generally Available

3. Support for VMs with Managed Disks will be released soon

4. Cross subscription replication will be added in early 2018

5. Support for Suse 12.x will be added in 2018

More information on ASR and ADE

https://azure.microsoft.com/en-us/blog/tag/azure-site-recovery/

https://azure.microsoft.com/en-us/services/site-recovery/

https://docs.microsoft.com/en-us/azure/security/azure-security-disk-encryption-faq

9. Non-Production Hana Systems

It is supported to run Hana DBMS servers on non-Hana-certified hardware and cloud platforms for non-production use. This is documented in SAP Note 2271345 – Cost-Optimized SAP HANA Hardware for Non-Production Usage

It is recommended to review the PowerPoint and Word document attached to this SAP Note. The note states that the “whitebox” type servers typically used for Hyperscale cloud can be used for non-production systems and that virtualized solutions are also possible.

Therefore, it is completely possible to run non-production Hana systems on Azure VMs.

In general Disaster Recovery systems are considered to be Production as they could run production workloads.

10. Oracle Linux 7.x Certified on Azure for Oracle 11g & 12c

The Azure platform gives customers the widest choice of operating system and database support. Recently SAP has supported Oracle DBMS running on Linux VMs.

The full list of operating systems and database combinations supported on Azure is officially listed in SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types 

Notes:

1. SAP + Oracle + Linux + Azure = Fully supported and Generally Available

2. Oracle DBMS must be installed on Oracle Linux 7.x

3. Oracle Linux 7.x or Windows can be used for SAP application servers and standalone engines (see PAM for details)

4. It is strongly recommended to install the latest updates for Oracle Linux before starting SWPM

5. The Linux host monitoring agent must be installed as detailed in SAP Note 2015553 – SAP on Microsoft Azure: Support prerequisites 

6. Customers wishing to use Accelerated Networking with Oracle Linux should contact Microsoft

7. It is not supported to run Oracle DBMS on Suse or RHEL

Important SAP Notes and information:

https://wiki.scn.sap.com/wiki/display/ORA/Oracle

2039619 – SAP Applications on Microsoft Azure using the Oracle Database: Supported Products and Versions 

2069760 – Oracle Linux 7.x SAP Installation and Upgrade 

405827 – Linux: Recommended file systems

2171857 – Oracle Database 12c – file system support on Linux

2369910 – SAP Software on Linux: General information

1565179 – This note concerns SAP software and Oracle Linux

Note: SAP + Oracle + Windows + Azure = Fully supported and Generally Available (supported for a long time; many multi-terabyte customers are live on Azure)

11. Oracle 12c Release 2 is Certified by SAP and Released on Windows 2016. ASM Support on Azure in Planning

SAP has certified Oracle 12c Release 2 for SAP NetWeaver applications as documented in SAP Note 2133079 – Oracle Database 12c: Integration in SAP environment

Oracle 12c Release 2 is supported on Windows 2016 in addition to certified Linux distributions.

Oracle Database version 12.2.0.1 (incl. RDBMS 12.2.0.1, Grid Infrastructure 12.2.0.1 and Oracle RAC 12.2.0.1) is certified for SAP NetWeaver based SAP products starting December 18th, 2017. The minimum initial RDBMS 12.2.0.1 SAP Bundle Patch (SBP) is SAP12201P_1711 (Unix) or PATCHBUNDLE12201_1711 (Windows).

At least SAP Kernel version 7.21_EXT is required for Oracle 12.2.0.1

SAP Note 2470660 provides important technical information about using Oracle 12.2.0.1 in an SAP environment, such as database installation/upgrade guidelines, software download, patches, feature support, OS prerequisites, etc.

The Oracle features supported in Oracle version 12.1 (like Oracle In-Memory, Oracle Multitenant, Oracle Database Vault and Oracle ILM/ADO) are supported for version 12.2.0.1 as well.

2470660 – Oracle Database Central Technical Note for 12c Release 2 (12.2)

2133079 – Oracle Database 12c: Integration in SAP environment

Microsoft is working to obtain certification of Oracle ASM on Azure. The first combination planned is Oracle Linux 7.4 and Oracle 12c R1/R2. This blog site will be updated with more information later.

998004 – Update the Oracle Instant Client on Windows

12. Accelerated Networking Recommended for Medium & Large SAP Systems

Accelerated Networking drastically reduces the latency and significantly increases the bandwidth between two Azure VMs.

Accelerated Networking is Generally Available for Windows & Linux VMs.

It is generally recommended to deploy Accelerated Networking for all new medium and large SAP projects.

Additional points to note about Accelerated Networking:

1. It is not possible to switch on Accelerated Networking for existing VMs. Accelerated Networking must be enabled when a VM is created; see the PowerShell sketch after this list. It is possible to delete a VM (by default the boot and data disks are kept) and create the VM again using the same disks

2. Accelerated Networking is available for most new VM types such as Ev3, Dv3, M, Dv2 with 4 physical cpu or more (as at December 2017 – note E8v3 is 4 physical CPU with 8 hyperthreads)

3. Accelerated Networking is not available on G-series VM types

4. SQL Server running with datafiles stored directly on blob storage are likely to greatly benefit from Accelerated Networking

5. Suse 12 Service Pack 3 (Suse 12.3) is strongly recommended (Hana certification is still in progress as at December 2017). RHEL 7.4 is recommended. Contact Microsoft for Oracle Linux.

6. It is possible to have one or more Accelerated Network NICs and a traditional non-accelerated network card on the same VM

7. Azure vNet UDR and/or other security and inspection devices should not sit between the SAP application servers and database server. This connection needs to be as high performance as possible

8. SAP application server to database server latency can be tested with ABAP report /SSA/CAT -> ABAPMeter

9. Inefficient "chatty" ABAP code and particularly intensive operations such as large Payroll jobs or IS-Utilities Billing jobs have shown very significant improvements after enabling Accelerated Networking
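As referenced in point 1, a PowerShell sketch for enabling Accelerated Networking when the NIC is created with the AzureRM module (all resource names and the region are placeholders):

# Look up the subnet the SAP VM will use
$vnet = Get-AzureRmVirtualNetwork -Name "SAP-VNet" -ResourceGroupName "SAP-RG"
$subnet = Get-AzureRmVirtualNetworkSubnetConfig -Name "sap-app-subnet" -VirtualNetwork $vnet

# Create the NIC with Accelerated Networking enabled
New-AzureRmNetworkInterface -Name "sapapp01-nic" -ResourceGroupName "SAP-RG" -Location "westeurope" -SubnetId $subnet.Id -EnableAcceleratedNetworking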

Additional useful information about Azure Networking can be found here:

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-create-vm-accelerated-networking

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-optimize-network-bandwidth

https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-bandwidth-testing

https://blogs.msdn.microsoft.com/igorpag/2017/04/06/my-personal-azure-faq-on-azure-networking-v3/

https://blogs.msdn.microsoft.com/saponsqlserver/2016/11/21/moving-from-sap-2-tier-to-3-tier-configuration-and-performance-seems-worse/

Below is an example of very inefficient ABAP code that will thrash the network. Placing a SELECT statement inside a LOOP is a very poor coding practice. Accelerated Networking will improve the performance of poorly coded ABAP, but it is generally recommended to avoid a SELECT statement inside a LOOP. Such code is totally non-scalable: as the number of iterations increases, the performance degrades severely.
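(The original post shows the code as a screenshot; the following is a minimal sketch of the anti-pattern with hypothetical table and variable names, plus the array-fetch alternative. Declarations of lt_orders, ls_order, ls_item and lt_items are omitted.)

* Anti-pattern: one database round trip per loop iteration
LOOP AT lt_orders INTO ls_order.
  SELECT SINGLE * FROM vbap INTO ls_item
    WHERE vbeln = ls_order-vbeln.
ENDLOOP.

* Better: a single array fetch for all loop entries
IF lt_orders IS NOT INITIAL.
  SELECT * FROM vbap INTO TABLE lt_items
    FOR ALL ENTRIES IN lt_orders
    WHERE vbeln = lt_orders-vbeln.
ENDIF.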

13. New Azure Features

The Azure platform has many new features and enhancements added continuously.

A good summary of many of the new features can be found here:

Several new interesting features for SAP customers include:

1. Two different datacenters can communicate over a vNet-to-vNet Gateway connection. An alternative currently in preview is Global Peering 

2. SoftNAS for storing files, DIR_TRANS and interfaces. Supports NFS and SMB protocols

3. Azure Data Box is useful for data center migration scenarios

4. CIS images – these are hardened Windows images. They have not been fully tested with all SAP applications, but should work for SAP Web Dispatcher, SAProuter, etc.

5. SAP LaMa now has a connector for Azure 2343511 – Microsoft Azure connector for SAP Landscape Management (LaMa) 

6. A future blog will cover Hana Large Instance networking, but additional information on Hub & Spoke networking can be found in this Whitepaper

7. Azure Service Endpoints remove some public endpoints and move the service to an Azure vNet

Other useful links below

VMs: Performance| No Boot | Agents | Linux Support

Networking: ExpressRoute | Vnet Topologies | ARM LB Config | Vnet-to-Vnet VPN | VPN Devices | S2S VPN | ILPIP | Reserved IP | Network Security

Tools: PowerShell install | VPN diagnostics | PlatX-CLI | Azure Resource Explorer | ARM Json Templates | iPerf | Azure Diagnostics

Azure Security: Overview | Best Practices | Trust Center
Preview Features | Preview Support

Miscellaneous Topics

SAP on Windows & Oracle presentation covering Oracle features for Windows http://www.oracle.com/technetwork/topics/dotnet/tech-info/oow2016-whatsnew-db-windows-3628748.pdf

A new and interesting feature for UNIX/Oracle customers wishing to terminate UNIX platforms and move to Intel commodity servers is Oracle Cross Platform Transportable Tablespaces

The diagram shows the process for creating a backup on a UNIX Big Endian system that can be successfully restored onto an Intel Little Endian system (Windows or Linux).

Additional information is in SAP Note 552464 – What is Big Endian / Little Endian? What Endian do I have?

A similar question is sometimes received from customers that have installed SAP Hana on IBM Power servers. These customers either want to move away from Hana on Power (a rather niche solution) or wish to run DR in the public cloud. SAP Note 1642148 – FAQ: SAP HANA Database Backup & Recovery states that Hana 2.0 backups can be restored from IBM Power onto Intel-based systems.

Content from third party websites, SAP and other sources reproduced in accordance with Fair Use criticism, comment, news reporting, teaching, scholarship, and research

SAP on SQL Server: General Update – January 2018


SAP and Microsoft are continuously adding new features and functionalities to the SAP on SQL Server platform. The key objective of the SAP on Windows SQL port is to deliver the best performance and availability at the lowest TCO and simplest operation. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.

1. New Case Studies on SAP on SQL Server

Malaysia Airlines migrated their entire datacenter from on-premises to Azure. More than 100 applications were moved to Azure, including a large SAP landscape. The project was executed by Tata Consultancy Services (TCS), whose SAP team demonstrated an outstanding skillset and capability during the entire project. SAP applications were migrated from DB2 to SQL Server 2016 with datafiles running on blob storage. Local high availability is achieved using AlwaysOn. SQL Server AlwaysOn is also used to replicate the databases to Hong Kong. A full case study can be found here:

https://customers.microsoft.com/en-us/story/malaysia-airlines-berhad

https://partner.microsoft.com/it-it/case-studies/tata

Several other customers based in Malaysia, in the agricultural and shipping industries, have also moved their SAP landscapes to Azure.

A useful blog discussing a large Australian Energy customer can be found here

2. Security Recommendations for Windows Servers

It is recommended to implement a few simple security settings on Windows servers. This will greatly increase the security of an SAP landscape running on Windows servers. An additional benefit is that it may not be necessary to implement Windows patches as frequently.

Recommendation #1: Disable SMB 1.0

SAP Note 2476242 – Disable windows SMBv1 describes how to disable this legacy networking protocol. SMB 1.0 is a protocol required by Windows NT 4.0 and should be disabled in all cases for SAP systems. There is no valid reason why SMB 1.0 should be running on any SAP server.
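On Windows Server 2012 R2 and later, SMB 1.0 can be disabled with two PowerShell commands (a sketch; see the SAP Note for the complete procedure and for older OS versions):

# Turn off the SMB 1.0 server protocol
Set-SmbServerConfiguration -EnableSMB1Protocol $false -Force

# Remove the SMB 1.0 feature entirely
Remove-WindowsFeature -Name FS-SMB1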

More information can be found here

Recommendation #2: Remove Internet Explorer

Open a command prompt with administrative rights and run this command:

dism /online /Disable-Feature /FeatureName:Internet-Explorer-Optional-amd64

On previous versions of Windows Server, Windows Update may try to update earlier versions of IE to IE 11. The utility below can prevent Windows Update from updating browser versions. Updating the browser is likely to require a restart of the operating system and should be avoided: https://www.microsoft.com/en-us/download/details.aspx?id=40722

Additional useful information on security & networking can be found in these SAP OSS Notes:

2532012 – SSL Security error while installing database instance on SQL Server

1570930 – SQL Server network encryption with SAP

2356977 – Error during connection of JDBC Driver to SQL Server

2385898 – SSL connection to SQL Server fails with: Could not generate secret

1702456 – Errors with Microsoft JDBC drivers connecting with SSL

1428134 – sqljdbc driver fails to establish secure connection

2559590 – SQL Server connection fails after TLS 1.2 has been enabled

Detailed SAP on Windows Security Whitepaper

3. Uninstall Internet Explorer from Windows Servers? – What about the new SAPInst?

SWPM 1.0 SP20 now has a Fiori-based interface that runs in a web browser. It is a specific recommendation to remove Internet Explorer, third-party browsers and any unnecessary software from all SAP servers, including non-production servers. Fortunately, there are several ways to run the new SWPM on a server where no browser is installed.

This SAP blog discusses several options

Option #1: Run SAPInst with the following command line option to load the previous GUI:

Sapinst.exe SAPINST_SLP_MODE=false

Option #2: Connect to SAPinst remotely from a Management Server with a browser

Run sapinst -nogui

Ensure the Windows Firewall has port 4237 open

From a dedicated Management Server open a browser and connect to https://<sap_server_hostname or_IP>:4237/JfIVyDkxlsXSBFYi/docs/index.html

(Note: the exact URL can be found in the logs of the SAPInst program starter)


The username and password are the OS-level user and password on the server where SAPInst was started (e.g. DOMAIN\sapadmin)

Hint: If there are problems running SWPM add the SAP Server hostname and/or IP to the Trusted Sites

4. SQL Server 2017 for SAP NetWeaver Systems

Microsoft has released SQL Server 2017. SQL Server 2017 has many new features

In general, the required SAP Support Packs for SQL Server 2017 are the same as for SQL Server 2016. Details can be found in SAP Note 2492596 – Release planning for Microsoft SQL Server 2017

SAP and Microsoft plan to complete testing and make SQL Server 2017 generally available in January 2018 or shortly after.

5. Power Options – Set for Maximum Performance

It is recommended to set the Power Plan of SQL Server and SAP application servers to maximum performance.

SAP has released SAP Note 2497079 – Poor performance in SAP NetWeaver and/or MS SQL Server due to power settings

It is important to set the power settings to maximum performance at all layers of an infrastructure, for example the server BIOS, Hypervisor and Windows Operating system.
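On the Windows layer, the high performance power plan can be activated from an elevated command prompt (the GUID below is the built-in High Performance scheme; BIOS and hypervisor settings must be changed separately):

powercfg /setactive 8c5e7fda-e8bf-4a96-9a85-a6e23a8c635c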

6. Business Suite 7 Maintenance, SAP NetWeaver Java Systems, Windows & SQL – End of Life & JDBC Drivers

SAP has documented the end of life of SAP Business Suite in SAP Note 1648480 – Maintenance for SAP Business Suite 7 Software. The note states that the current generation of SAP applications running on "AnyDB", installed at over 200,000 customers worldwide, will be out of support after 31st December 2025.

Many SAP customers have decided to migrate their SAP applications to Windows 2016 and SQL Server 2016/2017 and to upgrade to SAP versions that remain in support until 31st December 2025. Some of these customers plan to implement S/4HANA and wish to move to a supported platform "stack" in the meantime. SAP NetWeaver 7.5 components, Windows 2016 and SQL Server 2016/2017 will remain in support until 2025. This means a customer can move to Windows 2016 and SQL Server 2016/2017 on Azure and never need to upgrade the OS, database or hardware until the end of life of the application.

Windows Server 2016 is in mainstream support until 11th January 2022 and extended maintenance until 11th January 2027 as documented here on the Microsoft Product Lifecycle tool

https://support.microsoft.com/en-us/lifecycle/search?alpha=Windows%20Server%202016%20Datacenter

https://support.microsoft.com/en-us/lifecycle/search

The final service pack of SQL Server 2016 (currently only SP1 is released) will remain in support until 2026. It is possible that another SQL Server 2016 Service Pack will be released, which might further extend the support lifetime of SQL Server 2016. SQL Server 2017 is in support until October 2027.

The Azure platform will automatically upgrade hardware, networking and storage transparently over time.

Moving the current SAP applications to a stack that is fully supported until the end of life of the applications has allowed many customers to focus resources into planning for S/4HANA implementation projects.

Notes:

Java systems for 7.5x will be in maintenance until 31st December 2024

Java 7.0 EHP0, EHP1, EHP2 and EHP3 are out of support as of 31st December 2017 (support for Java 4.1 is terminated)

7. SAP ASCS File Share vs. ASCS Shared Disk

SAP has released documentation and a new Windows Cluster DLL that enable the SAP ASCS to use an SMB UNC path as an alternative to a Cluster Shared Disk.

The solution was initially tested and documented by SAP for usage in non-productive systems and can be used in the Azure cloud. This feature is for SAP NetWeaver components 7.40 and higher.

This feature is now fully Generally Available to all customers (both on-premises and on Azure) and is documented here

File Server with SOFS and S2D as an Alternative to Cluster Shared Disk for Clustering of an SAP (A)SCS Instance in Azure is Generally Available

High Available ASCS for Windows on File Share – Shared Disk No Longer Required

8. ReFS, Cluster, Continuous Access File Share and Windows Update Patches

SAP fully supports the ReFS filesystem as documented in SAP Note 1869038 – SAP support for ReFs filesystem

Some antivirus software or other software that intercepts the Windows IO subsystem requires this patch

It is therefore required to apply this patch on all Windows 2016 systems running ReFS

Older versions of the SWPM prerequisite checker will still warn that it is required to disable the Windows Continuous Availability feature.

SAP now fully supports Continuous Availability as documented in SAP Note 2287140 – Support of Failover Cluster Continuous Availability feature (CA)

https://blogs.sap.com/2017/07/21/how-to-create-a-high-available-sapmnt-share/

https://wiki.scn.sap.com/wiki/display/SI/Should+I+run+the+Web+Dispatcher+as+a+standalone+installation+or+as+part+of+an+ABAP+or+J2EE+system

It is generally recommended to always use the latest SWPM available from here

Recent releases of SWPM should not request to disable this feature.

It is generally recommended to apply these updates to Windows 2012 R2 cluster systems

http://aka.ms/2012R2ClusterUpdates

http://aka.ms/AzureClusterThreshold

Windows Server 2016 Long Term Servicing Branch is the supported release for SAP applications. Do not use the Semi-Annual Channel

https://blogs.technet.microsoft.com/windowsitpro/2017/07/27/waas-simplified-and-aligned/

9. Adding Secondary IP Address onto Cluster Core Resource & Read Only Cluster Admins

Customers installing Windows Geoclusters on-premises and on Azure will need to add a second IP address to the Cluster Core Resource

This is because the Primary and DR cluster nodes are typically on different subnets. To add a second cluster core resource IP address follow this guide here

The key point in the blog is this PS command:

PS > Add-ClusterResource -Name NewIP -ResourceType "IP Address" -Group "Cluster Group"
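A hedged follow-up sketch: after creating the resource, the address properties must be set and the cluster name made dependent on either IP address. The address, subnet mask and network name below are illustrative placeholders, not values from the blog:

PS > Get-ClusterResource -Name NewIP | Set-ClusterParameter -Multiple @{"Address"="10.2.0.99";"SubnetMask"="255.255.255.0";"Network"="Cluster Network 2"}

PS > Set-ClusterResourceDependency -Resource "Cluster Name" -Dependency "[Cluster IP Address] or [NewIP]"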

Some outsourced or managed service customers sometimes want to delegate read-only access to the cluster:

Grant-ClusterAccess -User DOMAIN.com\<non-admin-user> -ReadOnly

https://docs.microsoft.com/en-us/powershell/module/failoverclusters/grant-clusteraccess?view=win10-ps

To block cluster access to specific users (even if admins) run Block-ClusterAccess https://docs.microsoft.com/en-us/powershell/module/failoverclusters/block-clusteraccess?view=win10-ps

10. SQL Server: Important Patch Level for SQL 2016

Customers running SQL Server 2016 are strongly recommended to upgrade to at least SQL Server 2016 SP1 CU6.

There are multiple features that are improved and several bugs resolved. Customers using TDE, Backup Compression, datafiles directly on Azure blob storage or any combination of these should upgrade to the latest available CU, but at least SP1 CU6.
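A quick way to verify the current patch level on an existing system is this T-SQL query (SERVERPROPERTY('ProductUpdateLevel') returns the CU level on recent SQL Server versions):

SELECT SERVERPROPERTY('ProductVersion') AS Version, SERVERPROPERTY('ProductLevel') AS ServicePack, SERVERPROPERTY('ProductUpdateLevel') AS CumulativeUpdate;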

The latest CU and SP are always available here

SQL Server 2017 customers will receive the same corrections in SQL Server 2017 CU3

KB4025628 – FIX: Error 9004 when you try to restore a compressed backup from multiple files for a large TDE-encrypted database in SQL Server

Miscellaneous Topics & Interesting SAP OSS Notes

Setting up SAP applications using virtual hostnames is described in SAP Note 1564275 – Install SAP Systems Using Virtual Host Names on Windows

Updating SAP cluster resource DLL is explained here in SAP Note 1596496 – How to update SAP Resource Type DLLs for Cluster Resource Monitor

A useful note on memory analysis is SAP Note 2488097 – FAQ: Memory usage for the ABAP Server on Windows

AlwaysOn alerting and monitoring is discussed here

Azure Support plans for SAP on Azure customers are described here: https://azure.microsoft.com/en-us/support/plans/

A new format option is available for NTFS that alleviates sparse file errors during CheckDB on very large databases. The syntax for Large FRS is: Format <Drive:> /FS:NTFS /L (or -UseLargeFRS with Format-Volume in PowerShell) https://blogs.technet.microsoft.com/askcore/2015/03/12/the-four-stages-of-ntfs-file-growth-part-2/
https://technet.microsoft.com/en-us/library/dn466522(v=ws.11).aspx


Setting Up Hana System Replication on Azure Hana Large Instances


This blog details a recent customer deployment by the Cognizant SAP Cloud team on HANA Large Instances. From an architecture perspective of HANA Large Instances, the usage of HANA System Replication as disaster recovery functionality between two HANA Large Instance units in two different Azure regions does not work. The reason is that the network architecture does not support transit routing between two HANA Large Instance stamps located in different Azure regions. Instead, the HANA Large Instance architecture offers storage replication between the two regions in a geopolitical area that offers HANA Large Instances. For details, see this article. However, there are customers who already have experience with SAP HANA System Replication and its usage as disaster recovery functionality, and who would like to continue using HANA System Replication on Azure HANA Large Instance units as well. Hence the Cognizant team needed to solve the issue around transit routing between the two HANA Large Instance units located in two different Azure regions. The solution applied by Cognizant was based on Linux IPTables: a Linux Network Address Translation based solution allows SAP Hana System Replication on Azure Hana Large Instances in different Azure datacenters. Hana Large Instances in different Azure datacenters do not have direct network connectivity because Azure ExpressRoute and VPN do not allow Transit Routing.

To enable Hana System Replication (HSR) to work a small VM is created that forwards traffic between the two Hana Large Instances (HLI).

Another benefit of this solution is that it is possible to enable access to SMT and NTP servers for patching and time synchronization.

The solution detailed below is not required for Hana systems running on native Azure VMs, only for the Hana Large Instance offering, which provides bare metal Tailored Data Centre Integration (TDI) infrastructure up to 20TB

Be aware that the solution described below is not part of the HANA Large Instance architecture. Hence support for configuring, deploying, administering and operating the solution needs to be provided by the Linux vendor and the organization that deploys and operates the IPTables based disaster recovery solution.

High Level Overview

SAP Hana Database offers three HA/DR technologies: Hana System Replication, Host Autofailover and Storage based replication

The diagram below illustrates a typical HLI scenario with a Geographically Dispersed Disaster Recovery solution. The Azure HLI solution offers storage based DR replication as an inbuilt solution; however, some customers prefer to use HSR, which is a DBMS software based HA/DR solution.

HSR can already be configured within the same Azure datacenter following the standard HSR documentation

HSR cannot be configured between the primary Azure datacenter and the DR Azure datacenter without an IPTables solution because there is no network route from primary to DR. “Transit Routing” is a term that refers to network traffic that passes through two ExpressRoute connections. To enable HSR between two different datacenters, a solution such as the one illustrated below can be implemented by an SAP System Integrator or Azure IaaS consulting company

Key components and concepts in the solution are:

1. A small VM running any distribution of Linux is placed in Azure on a VNET that has network connectivity to both HLI in both datacenters

2. The ExpressRoute circuits are cross connected from the HLI to a VNET in each datacenter

3. Standard Linux functionality IPTables is used to forward traffic from the Primary HLI -> IP Forwarding VM in Azure -> Secondary HLI (and vice versa)

4. Each customer deployment has individual differences, for example:

a. Some customers deploy two HLI in the primary datacenter, set up synchronous HSR between these two local HLI and then (optionally) configure Suse Pacemaker for faster and transparent failover. The third DR node for HSR is typically not configured with Suse Pacemaker

b. Some customers have a proxy server running in Azure and allow outbound traffic from Azure to Internet directly. Other customers force all http(s) traffic back to a Firewall & Proxy infrastructure on-premises

c. Some customers use Azure Automation/Scripts on a VM in Azure to “pull” backups from the HLI and store them in Azure blob storage. This removes the need for the HLI to use a proxy to “push” a backup into blob storage

All of the above differences change the configuration of the IPTables rules, therefore it is not possible to provide a single configuration that will work for every scenario

https://blogs.sap.com/2017/02/01/enterprise-readiness-with-sap-hana-host-auto-failover-system-replication-storage-replication/

https://blogs.sap.com/2017/04/12/revealing-the-differences-between-hana-host-auto-failover-and-system-replication/

The diagram shows a hub and spoke network topology and Azure ExpressRoute cross connect, a generally recommended deployment model

https://docs.microsoft.com/en-us/azure/architecture/reference-architectures/hybrid-networking/hub-spoke

https://docs.microsoft.com/en-us/azure/expressroute/expressroute-faqs

Diagram 1. Illustration showing no direct network route from HLI Primary to HLI Secondary. With the addition of an IP forwarding solution it is possible for the HLI to establish TCP connectivity

Note: ExpressRoute Cross Connect can be changed to regional vnet peering when this feature is GA https://docs.microsoft.com/en-us/azure/virtual-network/virtual-network-peering-overview

Sample IP Table Rules Solution – Technique 1

The solution from a recent deployment by Cognizant SAP Cloud Team on Suse 12.x is shown below:

HLI Primary     10.16.0.4

IPTables VM     10.1.0.5

HLI DR         10.17.0.4

On HLI Primary

iptables -N input_ext

iptables -t nat -A OUTPUT -d 10.17.0.0/24 -j DNAT --to-destination 10.1.0.5

On IPTables

## this is HLI primary -> DR

echo 1 > /proc/sys/net/ipv4/ip_forward

iptables -t nat -A PREROUTING -s 10.16.0.0/24 -d 10.1.0.0/24 -j DNAT --to-destination 10.17.0.4

iptables -t nat -A POSTROUTING -s 10.16.0.0/24 -d 10.17.0.0/24 -j SNAT --to-source 10.1.0.5

## this is HLI DR -> Primary

iptables -t nat -A PREROUTING -s 10.17.0.0/24 -d 10.1.0.0/24 -j DNAT --to-destination 10.16.0.4

iptables -t nat -A POSTROUTING -s 10.17.0.0/24 -d 10.16.0.0/24 -j SNAT --to-source 10.1.0.5

On HLI DR

iptables -N input_ext

iptables -t nat -A OUTPUT -d 10.16.0.0/24 -j DNAT --to-destination 10.1.0.5

The configuration above is not permanent and will be lost if the Linux servers are restarted.

On all nodes, after the config is set up and verified:

iptables-save > /etc/iptables.local

add “iptables-restore -c /etc/iptables.local” to the /etc/init.d/boot.local

On the IPTables VM edit /etc/sysctl.conf to make IP forwarding a permanent setting

vi /etc/sysctl.conf

add net.ipv4.ip_forward = 1

The above example uses CIDR networks such as 10.16.0.0/24. It is also possible to specify an individual host such as 10.16.0.4, as shown in the sketch below.
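A minimal sketch of a host-specific variant of the Technique 1 rule on HLI Primary, assuming the same addresses as in the table above (only traffic destined for the single DR HLI host is redirected to the forwarder):

## hypothetical host-specific rule: redirect only traffic to the DR HLI (10.17.0.4) via the IPTables VM (10.1.0.5)

iptables -t nat -A OUTPUT -d 10.17.0.4/32 -j DNAT --to-destination 10.1.0.5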

Sample IP Table Rules Solution – Technique 2

Another possible solution uses an additional IP address on the IPTables VM to forward to the target HLI IP address.

This approach has a number of advantages:

1. Inbound connections are supported, such as running Hana Studio on-premises connecting to HLI

2. IPTables configuration is only on the IPTables VM, no configuration is required on the HLI

To implement technique 2 follow this procedure:

1. Identify the target HLI IP address. In this case the DR HLI is 10.17.0.4

2. Add an additional static IP address onto the IPTables VM. In this case 10.1.0.6

Note: after performing this configuration it is not required to add the IP address in YAST. The IP address will not show in ifconfig

3. Enter the following commands on the IPTables VM

iptables -t nat -A PREROUTING -d 10.1.0.6 -j DNAT --to 10.17.0.4

iptables -t nat -A PREROUTING -d <<a unique new IP assigned on the IPTables VM>> -j DNAT --to <<any other HLI IP or MDC tenant IP>>

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

4. Ensure /proc/sys/net/ipv4/ip_forward = 1 on the IP Tables VM and save the configuration in the same way as Technique 1

5. Maintain DNS entries and/or hosts file to point to the new IP address on the IPTables VM – in this case all references to the DR HLI will be 10.1.0.6 (not the “real” IP 10.17.0.4)

6. Repeat this procedure for the reverse route (HLI DR -> HLI Primary) and for any other HLI (or MDC tenants)

7. Optionally High Availability can be added to this solution by adding the additional static IP addresses on the IPTables VM to an ILB

8. To test the configuration execute the following command:

ssh 10.1.0.6 -l <username> (address 10.1.0.6 will be NATed to 10.17.0.4)

After logging on, confirm that the ssh session has connected to the HLI DR (10.17.0.4)

IMPORTANT NOTE: The SAP Application servers must use the “real” HLI IP (10.17.0.4) and should not be configured to connect via the IPTables VM.

Notes:

To purge iptables rules:

iptables -t nat -F

iptables -F

To list iptables rules:

iptables -t nat -L

iptables -L

ip route list

ip route del

http://www.netfilter.org/documentation/index.html

https://www.systutorials.com/816/port-forwarding-using-iptables/

https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/4/html/Security_Guide/s1-firewall-ipt-fwd.html

https://www.howtogeek.com/177621/the-beginners-guide-to-iptables-the-linux-firewall/

https://www.tecmint.com/linux-iptables-firewall-rules-examples-commands/

https://www.tecmint.com/basic-guide-on-iptables-linux-firewall-tips-commands/

What Size VM is Required for the IP Forwarder?

Testing has shown that network traffic during initial synchronization of a Hana DB is in the range of 15-25 megabytes/sec at peak. It is therefore recommended to start with a VM that has at least 2 CPUs. Larger customers should consider a VM such as a D12v2 and enable Accelerated Networking to reduce network processing overhead

It is recommended to monitor:

1. CPU consumption on the IP Forwarding VM during HSR sync and subsequently

2. Network utilization on the IP Forwarding VM

3. HSR delay between Primary and Secondary node(s)

SQL: “HANA_Replication_SystemReplication_KeyFigures” displays among others the log replay backlog (REPLAY_BACKLOG_MB).

As a fallback option you can use the contents of M_SERVICE_REPLICATION to determine the log replay delay on the secondary site:

SELECT SHIPPED_LOG_POSITION, REPLAYED_LOG_POSITION FROM M_SERVICE_REPLICATION

Now you can calculate the difference and multiply it by the log position size of 64 bytes:

(SHIPPED_LOG_POSITION - REPLAYED_LOG_POSITION) * 64 = <replay_backlog_byte>
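A minimal SQL sketch combining both steps and reporting the backlog in MB (column names as in M_SERVICE_REPLICATION above):

SELECT (SHIPPED_LOG_POSITION - REPLAYED_LOG_POSITION) * 64 / 1024 / 1024 AS REPLAY_BACKLOG_MB FROM M_SERVICE_REPLICATION;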

1969700 – SQL Statement Collection for SAP HANA

This SAP Note contains a reference to this script that is very useful for monitoring HSR key figures. HANA_Replication_SystemReplication_KeyFigures_1.00.120+_MDC

If high CPU or network utilization is observed it is recommended to upgrade to a VM Type that supports Accelerated Networking
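A hedged Azure CLI sketch for enabling Accelerated Networking on the IP forwarding VM (resource and NIC names are placeholders; the VM must be deallocated first and the VM type must support the feature):

az vm deallocate -g <resource-group> -n <vm-name>

az network nic update -g <resource-group> -n <nic-name> --accelerated-networking true

az vm start -g <resource-group> -n <vm-name>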

Required Reading, Documentation and Tips

Below are some recommendations for those setting up this solution based on test deployments:

1. It is strongly recommended that an experienced Linux engineer and an SAP Basis consultant jointly set up this solution. Test deployments have shown that considerable testing of the iptables rules is required to get HSR to connect reliably. Sometimes troubleshooting has been delayed because the Linux engineer is unfamiliar with Hana System Replication and the Basis consultant may have only very limited knowledge of Linux networking

2. A simple and easy way to test if ports are opened and correctly configured is to ssh to the HLI using the specific port. For example, from the primary HLI run this command: ssh -p 3<sys nr.>15 <secondary hli>. If this command times out then there is likely an error in the configuration. If the configuration is correct the ssh session should connect briefly

3. The NSG and/or Firewall for the IP forwarder VM must be opened to allow the HSR ports to/from the HLIs

4. More information: Network Configuration for SAP HANA System Replication

5. How To Perform System Replication for SAP HANA

6. If there is a failure on the IPTables VM, Hana will treat this the same way as any other network disconnection or interruption. When the IPTables VM is available again (or networking is restored) there is considerable traffic while HSR replicates queued transactions to the DR site.

7. The IPTables solution documented here is exclusively for asynchronous DR replication. We do not recommend using such a solution for the other replication modes possible with HSR, such as Synchronous in Memory and Synchronous. Asynchronous replication across geographically dispersed locations with truly diverse infrastructure such as power supply always has the possibility of some data loss, as the RPO <> 0. This statement is true for any DR solution on any DBMS, with or without an IP forwarder such as IPTables. It is expected that IPTables would have a negligible impact on RPO assuming the CPU and network on the IPTables VM are not saturated https://help.sap.com/viewer/6b94445c94ae495c83a19646e7c3fd56/2.0.02/en-US/c039a1a5b8824ecfa754b55e0caffc01.html

8. Some recommended SAP OSS Notes:

2142892 – SAP HANA System Replication sr_register hangs at “collecting information” (Important note)

An alternative to this SAP Note is to use this command on the IPTables VM: iptables -t mangle -A POSTROUTING -p tcp --tcp-flags SYN,RST SYN -o eth0 -j TCPMSS --set-mss 1500

2222200 – FAQ: SAP HANA Network

2484128 – hdbnsutil hangs while registering secondary site

2382421 – Optimizing the Network Configuration on HANA- and OS-Level

2081065 – Troubleshooting SAP HANA Network

2407186 – How-To Guides & Whitepapers For SAP HANA High Availability

9. HSR can enable a multi-tier HSR scenario even on small and low cost VMs as per SAP Note 1999880 – FAQ: SAP HANA System Replication. The buffer for the replication target has a minimum size of 64GB or row store + 20GB (whichever is higher) when there is no preload of data

10. Refer to existing SAP Notes and documentation for limitations on HSR and storage based snapshot backup restrictions and MDC

11. The IPTables rules above forward entire CIDR networks. It is also possible to specify individual hosts.

Sample Network Diagrams

The following two diagrams show two possible Hub & Spoke Network topologies for connecting HLI.

The second diagram differs from the first in that the HLI is connected to a Spoke VNET. Connecting the HLI to a Spoke on a Hub & Spoke topology might be useful if IDS/IPS inspection and monitoring Virtual Appliances or other mechanisms were used to secure the hub network and traffic passing from the Hub network to on-premises

Note: In a normal deployment of a Hub and Spoke network leveraging network appliances for routing, a User Defined Route is required on the ExpressRoute gateway subnet to force all traffic through the network routing appliances. If such a user defined route has been applied, it will also apply to the traffic coming from the HLI ExpressRoute, and this may lead to significant latency between the DBMS server and the SAP application server. Ensure an additional user defined route allows the HLIs to have direct routing to the application servers without having to pass through the network appliances.

Thanks to:

Rakesh Patil – Azure CAT Team Linux Expert for his invaluable help.

Peter Lopez – Microsoft CSA

For more information on the solution deployed:

Sivakumar Varadananjayan – Cognizant Global SAP Cloud and Technology Consulting Head https://www.linkedin.com/in/sivakumarvaradananjayan/detail/recent-activity/posts/


Very Large Database Migration to Azure – Recommendations & Guidance to Partners


SAP systems moved onto Azure cloud now commonly include large multinational “single global instance” systems and are many times larger than the first customer systems deployed when the Azure platform was first certified for SAP workloads some years ago

Very Large Databases (VLDB) are now commonly moved to Azure. Database sizes over 20TB require some additional techniques and procedures to achieve a migration from on-premises to Azure within an acceptable downtime and a low risk.

The diagram below shows a VLDB migration with SQL Server as the target DBMS. It is assumed the source systems are either Oracle or DB2

A future blog will cover migration to HANA (DMO) running on Azure. Many of the concepts explained in this blog are applicable to HANA Migrations

This blog does not replace the existing SAP System Copy guide and SAP Notes which should be reviewed and followed.

High Level Overview

A fully optimized VLDB migration should achieve around 2TB per hour migration throughput, or possibly more.

This means the data transfer component of a 20TB migration can be done in approximately 10 hours. Various postprocessing and validation steps would then need to be performed.

In general with adequate time for preparation and testing almost any customer system of any size can be moved to Azure.

VLDB Migrations require considerable skill, attention to detail and analysis. For example, the net impact of Table Splitting must be measured and analyzed. Splitting a large table into more than 50 parallel exports may considerably decrease the time taken to export a table, but too many Table Splits may result in drastically increased import times. Therefore the net impact of table splitting must be calculated and tested. An expert licensed OS/DB migration consultant will be familiar with the concepts and tools. This blog is intended to be a supplement to highlight some Azure specific content for VLDB migrations

This blog deals with heterogeneous OS/DB migration to Azure with SQL Server as the target database using tools such as R3load and Migmon. The steps performed here are not intended for Homogeneous System Copies (a copy where the DBMS and Processor Architecture (Endian Order) stay the same). In general Homogeneous System Copies should have very low downtime regardless of DBMS size because log shipping can be used to synchronize a copy of the database in Azure.

A block diagram of a typical VLDB OS/DB migration and move to Azure is illustrated below. The key points illustrated below:

1. The current source OS/DB is often AIX, HPUX, Solaris or Linux and DB2 or Oracle

2. The target OS is either Windows, Suse 12.3, Redhat 7.x or Oracle Linux 7.x

3. The target DB is usually either SQL Server or Oracle 12.2

4. IBM pSeries, Solaris SPARC hardware and HP Superdome thread performance is drastically lower than low cost modern Intel commodity servers, therefore R3load is run on separate Intel servers

5. VMWare requires special tuning and configuration to achieve good, stable and predictable network performance. Typically physical servers are used as R3load servers and not VMs

6. Commonly four export R3load servers are used, though there is no limit on the number of export servers. A typical configuration would be:

-Export Server #1 – dedicated to the largest 1-4 tables (depending on how skewed the data distribution is on the source database)

-Export Server #2 – dedicated to tables with table splits

-Export Server #3 – dedicated to tables with table splits

-Export Server #4 – all remaining tables

7. Export dump files are transferred from the local disk in the Intel based R3load server into Azure using AzCopy via public internet (this is typically faster than via ExpressRoute though not in all cases)

8. Control and sequencing of the Import is via the Signal File (SGN) that is automatically generated when all Export packages are completed. This allows for a semi-parallel Export/Import

9. Import to SQL Server or Oracle is structured similarly to the Export, leveraging four Import servers. These servers would be separate dedicated R3load servers with Accelerated Networking. It is recommended not to use the SAP application servers for this task

10. VLDB databases would typically use E64v3, M64 or M128 VMs with Premium Storage. The Transaction Log can be placed on the local SSD disk to speed up Transaction Log writes and remove the Transaction Log IOPS and IO bandwidth from the VM quota. After the migration the Transaction Log should be placed onto a persisted disk, as shown in the sketch below
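A hedged T-SQL sketch for moving the Transaction Log back to a persisted disk after the import (database name, logical file name and target path are placeholders):

-- repoint the log file of <SID> at a persisted disk; the physical file must then be moved while SQL Server is stopped

ALTER DATABASE <SID> MODIFY FILE (NAME = '<SID>_log', FILENAME = 'F:\Log\<SID>_log.ldf');

Stop SQL Server, copy the log file from the local SSD to the new location, then restart SQL Server.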

Source System Optimizations

The following guidance should be followed for the Source Export of VLDB systems:

1. Purge Technical Tables and Unnecessary Data – review SAP Note 2388483 – How-To: Data Management for Technical Tables

2. Separating the R3load processes from the DBMS server is an essential step to maximize export performance

3. R3load should run on fast new Intel CPU. Do not run R3load on UNIX servers as the performance is very poor. 2-socket commodity Intel servers with 128GB RAM cost little and will save days or weeks of tuning/optimization or consulting time

4. High Speed Network ideally 10Gb with minimal network hops between the source DB server and the Intel R3load servers

5. It is recommended to use physical servers for the R3load export servers – virtualized R3load servers at some customer sites did not demonstrate good performance or reliability at extremely high network throughput (Note: a very experienced VMWare engineer can configure VMWare to perform well)

6. Sequence larger tables to the start of the Orderby.txt

7. Configure Semi-parallel Export/Import using Signal Files

8. Large exports will benefit from Unsorted Export on larger tables. It is important to review the net impact of Unsorted Exports, as importing unsorted exports to databases that have a clustered index on the primary key will be slower

9. Configure Jumbo Frames between the source DB server and the Intel R3load servers. See the “Network Upload Optimizations” section later

10. Adjust memory settings on the source database server to optimize for sequential read/export tasks. See SAP Note 936441 – Oracle settings for R3load based system copy

Advanced Source System Optimizations

1. Oracle Row ID Table Splitting

SAP have released SAP Note 1043380 which contains a script that converts the WHERE clause in a WHR file to a ROW ID value. Alternatively the latest versions of SAPInst will automatically generate ROW ID split WHR files if SWPM is configured for Oracle to Oracle R3load migration. The STR and WHR files generated by SWPM are independent of OS/DB (as are all aspects of the OS/DB migration process).

The OSS note contains the statement “ROWID table splitting CANNOT be used if the target database is a non-Oracle database”. Technically the R3load dump files are completely independent of database and operating system. There is one restriction however, restart of a package during import is not possible on SQL Server. In this scenario the entire table will need to be dropped and all packages for the table restarted. It is always recommended to kill R3load tasks for a specific split table, TRUNCATE the table and restart the entire import process if one split R3load aborts. The reason for this is that the recovery process built into R3load involves doing single row-by-row DELETE statements to remove the records loaded by the R3load process that aborts. This is extremely slow and will often cause blocking/locking situations on the database. Experience has shown it is faster to start the import of this specific table from the beginning, therefore the limitation mentioned in Note 1043380 is not a limitation at all

ROW ID has a disadvantage that calculation of the splits must be done during downtime – see SAP Note 1043380.

2. Create multiple “clones” of the source database and export in parallel

One method to increase export performance is to export from multiple copies of the same database. Provided the underlying infrastructure such as server, network and storage is scalable this approach is linearly scalable. Exporting from two copies of the same database will be twice as fast, 4 copies will be 4 times as fast. Migration Monitor is configured to export on a select number of tables from each “clone” of the database. In the case below the export workload is distributed approximately 25% on each of the 4 DB servers.

-DB Server1 & Export Server #1 – dedicated to the largest 1-4 tables (depending on how skewed the data distribution is on the source database)

-DB Server2 & Export Server #2 – dedicated to tables with table splits

-DB Server3 & Export Server #3 – dedicated to tables with table splits

-DB Server4 & Export Server #4 – all remaining tables

Great care must be taken to ensure that the databases are exactly and precisely synchronized, otherwise data loss or data inconsistencies could occur. Provided the steps below are precisely followed, data integrity is ensured.

This technique is simple and cheap with standard commodity Intel hardware but is also possible for customers running proprietary UNIX hardware. Substantial hardware resources are free towards the middle of an OS/DB migration project when Sandbox, Development, QAS, Training and DR systems have already moved to Azure. There is no strict requirement that the “clone” servers have identical hardware resources. So long as there is adequate CPU, RAM, disk and network performance the addition of each clone increases performance

If additional export performance is still required open an SAP incident in BC-DB-MSS for additional steps to boost export performance (very advanced consultants only)

Steps to implement a multiple parallel export:

1. Backup the primary database and restore onto “n” number of servers (where n = number of clones). In the case illustrated 3 is chosen making a total of 4 DB servers

2. Restore backup onto 3 servers

3. Establish log shipping from the Primary source DB server to 3 target “clone” servers

4. Monitor log shipping for several days and ensure log shipping is working reliably

5. At the start of downtime shutdown all SAP application servers except the PAS. Ensure all batch processing is stopped and all RFC traffic is stopped

6. In transaction SM02 enter text “Checkpoint PAS Running”. This updates table TEMSG

7. Stop the Primary Application Server. SAP is now completely shutdown. No more write activity can occur in the source DB. Ensure that no non-SAP application is connected to the source DB (there never should be, but check for any non-SAP sessions at the DB level)

8. Run this query on the Primary DB server: SELECT EMTEXT FROM <schema>.TEMSG;

9. Run a native DBMS level INSERT statement, for example: INSERT INTO <schema>.TEMSG (EMTEXT) VALUES ('CHECKPOINT R3LOAD EXPORT STOP dd:mm:yy hh:mm:ss'); (the exact syntax depends on the source DBMS)

10. Halt automatic transaction log backups. Manually run one final transaction log backup on the Primary DB server. Ensure the log backup is copied to the clone servers

11. Restore the final transaction log backup on all 3 nodes

12. Recover the database on the 3 “clone” nodes

13. Run the following SELECT statement on *all* 4 nodes: SELECT EMTEXT FROM <schema>.TEMSG;

14. With a phone or camera photograph the screen results of the SELECT statement for each of the 4 DB servers (the Primary and the 3 clones). Be sure to carefully include each hostname in the photo – these photographs are proof that the clone DBs and the primary are identical and contain the same data from the same point in time. Retain these photos and get the customer to sign off the DB replication status

15. Start export_monitor.bat on each Intel R3load export server

16. Start the dump file copy to Azure process (either AzCopy or Robocopy)

17. Start import_monitor.bat on the R3load Azure VMs

Diagram showing existing Production DB server log shipping to “clone” databases. Each DB server has one or more Intel R3load servers

Network Upload Optimizations

Jumbo Frames are ethernet frames larger than the default 1500 bytes. Typical Jumbo Frame sizes are 9000 bytes. Increasing the frame size on the source DB server, all intermediate network devices such as switches and the Intel R3load servers reduces CPU consumption and increases network throughput. The Frame Size must be identical on all devices otherwise very resource intensive conversion will occur.

Additional networking features such as Receive Side Scaling (RSS) can be switched on or configured to distribute network processing across multiple processors. Running R3load servers on VMWare has proven to make network tuning for Jumbo Frames and RSS more complex and is not recommended unless a very expert skill level is available

R3load exports data from DBMS tables and compresses this raw format independent data in dump files. These dump files need to be uploaded into Azure and imported to the Target SQL Server database.

The performance of the copy and upload to Azure of these dump files is a critical component in the overall migration process.

There are two basic approaches for upload of R3load dump files:

1. Copy from on-premises R3load export servers to Azure blob storage via Public Internet with AzCopy

On each of the R3load servers run a copy of AzCopy with this command line:

AzCopy /source:C:\ExportServer_1\Dumpfiles /dest:https://<storage_account>/ExportServer_1/Dumpfiles /destkey:xxxxxx /S /NC:xx /blobtype:page

The value for /NC: determines how many parallel sessions are used to transfer files. In general AzCopy will perform best with a larger number of smaller files and /NC values between 24-48. If a customer has a powerful server and very fast internet this value can be increased. If this value is increased too high, the connection to the R3load export server will be lost due to network saturation. Monitor the network throughput in Windows Task Manager. Copy throughput of over 1 gigabit per second per R3load Export Server can easily be achieved. Copy throughput can be scaled up by having more R3load servers (4 are depicted in the diagram above)

A similar script will need to be run on the R3load Import servers in Azure to copy the files from blob storage onto a file system that R3load can access, as sketched below.
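A hedged sketch of the corresponding download on an import server, mirroring the upload syntax above (paths and key are placeholders):

AzCopy /Source:https://<storage_account>/ExportServer_1/Dumpfiles /Dest:D:\ImportServer_1\Dumpfiles /SourceKey:xxxxxx /S /NC:xx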

2. Copy from on-premises R3load export servers to an Azure VM or blob storage via a dedicated ExpressRoute connection using AzCopy, Robocopy or similar tool

Robocopy C:\Export1\Dump1 \\az_imp1\Dump1 /MIR /XF *.SGN /R:20 /V /S /Z /J /MT:8 /MON:1 /TEE /UNILOG+:C:\Export1\Robo1.Log

The block diagram below illustrates 4 Intel R3load servers running R3load. In the background Robocopy is started uploading dump files. When entire split tables and packages are completed the SGN file is copied either manually or via a script. When the SGN file for a package arrives on the import R3load server this will trigger import for this package automatically

Note: Copying files over NFS or Windows SMB protocols is not as fast or robust as mechanisms such as AzCopy. It is recommended to test performance of both file upload techniques. It is recommended to notify Microsoft Support for VLDB migration projects as very high throughput network operations might be mis-identified as Denial of Service attacks.

Target System Optimizations

1. Use latest possible OS with latest patches

2. Use latest possible DB with latest patches

3. Use latest possible SAP Kernel with latest patches (e.g. upgrade from 7.45 kernel to 7.49 or 7.53)

4. Consider using the largest available Azure VM. The VM type can be lowered to a smaller VM after the Import process

5. Create multiple Transaction Log files with the first transaction log file on the local non-persistent SSD. Additional Transaction Log files can be created on P50 disks.  VLDB migrations could require more than 5TB of Transaction Log space. It is strongly recommended to ensure there is always a large amount of Transaction Log space free at all times (20% is a safe figure). Extending Transaction Log files during an Import is not recommended and will impact performance

6. SQL Server Max Degree of Parallelism should usually be set to 1. Only certain index build operations will benefit from MAXDOP, and then only for specific tables (see the T-SQL sketch after this list)

7. Accelerated Networking is mandatory for DB and R3load servers

8. It is recommended to use m128 3.8TB as the DB server and E64v3 as the R3load servers (as at March 2018)

9. Limit the maximum memory a single SQL Server query can request with Resource Governor. This is required to prevent index build operations from requesting very large memory grants

10. Secondary indexes for very large tables can be removed from the STR file and built ONLINE with scripts after the main portion of the import has finished and post processing tasks such as configuring STMS are occurring

11. Customers using SQL Server TDE are recommended to pre-create the database and Transaction Log files, then enable TDE prior to starting the import. TDE will run for a similar amount of time on a DB that is full of data or empty. Enabling TDE on a VLDB can lead to blocking/locking issues and it is generally recommended to import into a TDE database. The overhead importing to a TDE database is relatively low

12. Review the latest OS/DB Migration FAQ
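A hedged T-SQL sketch of the SQL Server settings referenced in items 6, 9 and 10 above (the workload group percentage, index and table names are illustrative placeholders, not values from this blog):

-- item 6: set Max Degree of Parallelism to 1
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;

-- item 9: cap the memory grant a single query can request via Resource Governor (25% is illustrative)
ALTER WORKLOAD GROUP [default] WITH (REQUEST_MAX_MEMORY_GRANT_PERCENT = 25);
ALTER RESOURCE GOVERNOR RECONFIGURE;

-- item 10: rebuild a removed secondary index ONLINE after the main portion of the import has finished
CREATE INDEX <index_name> ON <schema>.<table> (<column>) WITH (ONLINE = ON, SORT_IN_TEMPDB = ON);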

Recommended Migration Project Documents

VLDB OS/DB migrations require additional levels of technical skill and also additional documentation and procedures. The purpose of this documentation is to reduce downtime and eliminate the possibility of data loss. The minimum acceptable documentation would include the following topics:

1. Current SAP Application Name, version, patches, DB size, Top 100 tables by size, DB compression usage, current server hardware CPU, RAM and disk

2. Data Archiving/Purging activities completed and the space savings achieved

3. Details on any upgrade, Unicode conversion or support packs to be applied during the migration

4. Target SAP Application version, Support Pack Level, estimated target DB size (after compression), Top 100 tables by size, DB version and patch, OS version and patch, VM sku, VM configuration options such as disk cache, write accelerator, accelerated networking, type and quantity of disks, database file sizes and layout, DBMS configuration options such as memory, traceflags, resource governor

5. Security is typically a separate topic, but should cover network security groups, firewall settings, Group Policy and DBMS encryption settings

6. HA/DR approach and technologies, in addition special steps to establish HA/DR after the initial import is finished

7. OS/DB migration design approach:

-How many Intel R3load export servers

-How many R3load import VMs

-How many R3load processes per VM

-Table splitting settings

-Package splitting settings

-export and import monitor settings

-list of secondary indexes to be removed from STR files and created manually

-list of pre-export tasks such as clearing updates

8. Analysis of last export/import cycle. Which settings were changed? What was the impact on the “flight plan”? Is the configuration change accepted or rejected? Which tuning & configuration is planned for next test cycle?

9. Recovery procedures and exception handling – procedures for rollback, how to handle exceptions/issues that have occurred during previous test cycles

It is typically the responsibility of the lead OS/DB migration consultant to prepare this documentation. Sometimes topics such as Security, HA/DR and networking are handled by other consultants. The quality of such documentation has proven to be a very good indicator of the skill level and capability of the project team and the risk level of the project to the customer.

Migration Monitoring

One of the most important components of a VLDB migration is the monitoring, logging and diagnostics that is configured during Development, Test and “dry run” migrations.

Customers are strongly advised to discuss with their OS/DB migration consultant the implementation and usage of the steps in this section of the blog. Failure to do so exposes a customer to significant risk.

Deployment of the required monitoring and interpretation of the monitoring and diagnostic results after each test cycle is mandatory and essential for optimizing the migration and planning production cutover. The results gained in test migrations are also necessary to be able to judge whether the actual production migration is following the same patterns and time lines as the test migrations. Customers should request regular project review checkpoints with the SAP partner.  Contact Microsoft for a list of consultants that have demonstrated the technical and organizational skills required for a successful project.

Without comprehensive monitoring and logging it would be almost impossible to achieve safe, repeatable, consistent and low downtime migrations with a guarantee of no data loss. If problems such as long runtimes of some packages were to occur, it is almost impossible for Microsoft and/or SAP to assist with spot consulting without monitoring data and migration design documentation

During the runtime of an OS/DB migration:

OS level parameters on DB and R3load hosts: CPU per thread, Kernel time per thread, Free Memory (GB), Page in/sec, Page out/sec, Disk IO reads/sec, Disk IO write/sec, Disk read KB/sec, Disk write KB/sec

DB level parameters on SQL Server target: BCP rows/sec, BCP KB/sec, Transaction Log %, Memory Grants, Memory Grants pending, Locks, Lock memory, locking/blocking

Network monitoring is normally handled by the network team. The exact configuration of network monitoring depends on the customer specific situation.

During the runtime of the DB import it is recommended to execute this SQL statement every few minutes and screenshot anything abnormal (such as high wait times)

select session_id, request_id, start_time, status, command, wait_type, wait_resource, wait_time, last_wait_type, blocking_session_id
from sys.dm_exec_requests
where session_id > 49
order by wait_time desc;

During all migration test cycles a “Flight Plan” showing the number of packages exported and imported (y-axis) should be plotted against time (x-axis). The purpose of this graph is to establish an expected rate of progress during the final production migration cutover. Deviation (either positive or negative) from the expected “Flight Plan” during test or the final production migration is easily detected using this method. Other parameters such as CPU, disk and R3load rows/sec can be overlaid on top of the “Flight Plan”

At the conclusion of the Export and Import the migration time reports must be collected (export_time.html and import_time.html) https://blogs.sap.com/2016/11/17/time-analyzer-reports-for-osdb-migrations/

VLDB Migration Do’s & Don’ts

The guidelines contained in this blog are based on real customer projects and the learnings derived from these projects. This blog instructs customers to avoid certain scenarios because these have been unsuccessful in the past. An example is the recommendation not to use UNIX servers or virtualized servers as R3load export servers:

1. Very often the export performance is a gating factor on the overall downtime. Often the current hardware is more than 4-5 years old and is prohibitively expensive to upgrade

2. It is therefore important to get the maximum export performance that is practical to achieve

3. Previous projects have spent man-weeks or even man-months trying to tune R3load export performance on UNIX or virtualized platforms, before giving up and using Intel R3load servers

4. 2-socket commodity Intel servers are very inexpensive and immediately deliver substantial performance gains, in some cases many orders of magnitude greater than minor tuning improvements possible on UNIX or virtualized servers

5. Customers often have existing VM farms but most often these do not support modern offload or SR-IOV technologies. Often the VMWare version is old, unpatched or not configured for very high network throughput and low latency. R3load export servers require very fast thread performance and extremely high network throughput. R3load export servers may run for 10-15 hours at nearly 100% CPU and network utilization. This is not the typical use case of most VMWare farms and most VMWare deployments were never designed to handle a workload such as R3load.

RECOMMENDATION: Do not invest time into optimizing R3load export performance on UNIX or virtualized platforms. Doing so will waste not only time but will cost much more than buying low cost Intel servers at the start of the project. VLDB migration customers are therefore requested to ensure the project team has fast modern R3load export servers available at the start of the project. This will lower the total cost and risk of the project.

Do:

1. Survey and Inventory the current SAP landscape. Identify the SAP Support Pack levels and determine if patching is required to support the target DBMS. In general the Operating Systems Compatibility is determined by the SAP Kernel and the DBMS Compatibility is determined by the SAP_BASIS patch level.

Build a list of SAP OSS Notes that need to be applied in the source system such as updates for SMIGR_CREATE_DDL. Consider upgrading the SAP Kernels in the source systems to avoid a large change during the migration to Azure (e.g. if a system is running an old 7.41 kernel, update to the latest 7.45 on the source system)

2. Develop the High Availability and Disaster Recovery solution. Build a PowerPoint that details the HA/DR concept. The diagram should break up the solution into the DB layer, ASCS layer and SAP application server layer. Separate solutions might be required for standalone solutions such as TREX or Livecache

3. Develop a Sizing & Configuration document that details the Azure VM types and storage configuration. How many Premium Disks, how many datafiles, how are datafiles distributed across disks, usage of storage spaces, NTFS Format size = 64kb. Also document Backup/Restore and DBMS configuration such as memory settings, Max Degree of Parallelism and traceflags

4. Network design document including VNet, Subnet, NSG and UDR configuration

5. Security and Hardening concept. Remove Internet Explorer, create an Active Directory container for SAP Service Accounts and Servers and apply a Firewall Policy blocking all but a limited number of required ports

6. Create an OS/DB Migration Design document detailing the Package & Table splitting concept, number of R3loads, SQL Server traceflags, Sorted/Unsorted, Oracle RowID setting, SMIGR_CREATE_DDL settings, Perfmon counters (such as BCP Rows/sec & BCP throughput kb/sec, CPU, memory), RSS settings, Accelerated Networking settings, Log File configuration, BPE settings, TDE configuration

7. Create a “Flight Plan” graph showing progress of the R3load export/import on each test cycle. This allows the migration consultant to validate whether tunings and changes improve R3load export or import performance. X axis = hours. Y axis = number of packages complete. This flight plan is also critical during the production migration so that the planned progress can be compared against the actual progress and any problem identified early.

8. Create performance testing plan. Identify the top ~20 online reports, batch jobs and interfaces. Document the input parameters (such as date range, sales office, plant, company code etc) and runtimes on the original source system. Compare to the runtime on Azure. If there are performance differences run SAT, ST05 and other SAP tools to identify inefficient statements

9. SAP BW on SQL Server. Check this blogsite regularly for new features for BW systems including Column Store

10. Audit deployment and configuration, ensure cluster timeouts, kernels, network settings, NTFS format size are all consistent with the design documents. Set perfmon counters on important servers to record basic health parameters every 90 seconds. Audit that the SAP Servers are in a separate AD Container and that the container has a Policy applied to it with Firewall configuration.

11. Do check that the lead OS/DB migration consultant is licensed! Request the consultant name, s-user and certification date. Open an OSS message to BC-INS-MIG and ask SAP to confirm the consultant is current and licensed.

12. If possible, have the entire project team associated with the VLDB migration project within one physical location and not geographically dispersed across several continents and time zones.

13. Make sure that a proper fallback plan is in place and that it is part of the overall schedule.

14. Do select Intel CPU models with fast thread performance for the R3load export servers. Do not use “Energy Saver” CPU models as they have much lower performance and do not use 4-socket servers. The Intel Xeon Platinum 8158 is a good example

Do not:

1. VLDB OS/DB migration requires an advanced technical skillset and very strong process, change control & documentation. Do not do “on the job training” with VLDB migrations

2. Do not subcontract one consulting organization to do the Export and subcontract another consulting organization to do the Import. Occasionally the Source system is outsourced and managed by one consulting organization or partner and a customer wishes to migrate to Azure and switch to another partner. Due to the tight coupling between Export and Import tuning and configuration it is very unlikely assigning these tasks to different organizations will produce a good result

3. Do not economize on Azure hardware resources during the migration and go live. Azure VMs are charged per minute and can be reduced in size very easily. During a VLDB migration leverage the most powerful VM available. Customers have successfully gone live on 200-250% oversized systems, then stabilized while running significantly oversized systems. After monitoring utilization for 4-6 weeks, VMs are reduced in size or shutdown to lower costs

Required Reading, Documentation and Tips

Below are some recommendations for those setting up this solution based on test deployments:

Check the SAP on Microsoft Azure blog regularly https://blogs.msdn.microsoft.com/saponsqlserver/

Read the latest SAP OS/DB Migration FAQ https://blogs.msdn.microsoft.com/saponsqlserver/tag/migration/

A useful blog on DMO is here https://blogs.sap.com/2017/10/05/your-sap-on-azure-part-2-dmo-with-system-move/

Information on DMO https://blogs.sap.com/2013/11/29/database-migration-option-dmo-of-sum-introduction/



Columnstore became default in SAP BW


Overview

The following features have been optionally available in SAP BW on Microsoft SQL Server for several years: the Columnstore and the Flat Cube.

The impact of these features is described in https://blogs.msdn.microsoft.com/saponsqlserver/2017/05/08/performance-evolution-of-sap-bw-on-sql-server. Customer experience showed that using these features almost always resulted in BW query performance improvements. Therefore, these features are turned on by default after applying SAP Note 2582158 – Make Columnstore and Flat Cube default.

Columnstore became the default only on SQL Server 2016 and newer.

Creating a New Cube

A new cube will be automatically created as a Columnstore Cube, except in the following cases:

  • The new behavior has been explicitly turned off by setting the following RSADMIN parameter
    • MSS_DEFAULT_CS = FALSE
  • APO Cubes
    Cubes of the SAP application Advanced Planning and Optimization (APO) never use the Columnstore
  • Realtime Cubes
    SAP Realtime Cubes only use the Columnstore on the e-fact table, unless you have set the RSADMIN parameter
    • MSS_REALTIME_FFACT_CS = X

    For details, see SAP Note 2371454 – Columnstore and Realtime Cubes

Transporting a Cube

A cube is automatically converted to Columnstore on the target system if it is empty on the target system.

Converting all Cubes to Columnstore

There are 3 different ways for converting all cubes to Columnstore. In the second tab of report MSSCSTORE, you can define a global setting for all SAP BW cubes. Be aware that this is a global setting for all (existing and future) cubes, not just a default for new cubes.
When choosing “Always Column Store (CS)” and pressing F8, all cubes are defined in SAP BW as Columnstore Cubes. However, the Columnstore Indexes are not yet created on the database. Report MSSCSTORE should finish within a few minutes, since only the cube definition is changed.

  • Converting via process chain
    All indexes of a BW cube are created when running the BW process chain type “Create Index” or executing “Repair DB indexes” in SAP transaction RSA1. This allows a gradual conversion of the cube indexes, once you have changed the Columnstore definition in report MSSCSTORE as described above.
  • Immediate conversion
    You might additionally select “Repair indexes of all cubes” in report MSSCSTORE. This creates the Columnstore Indexes on the database immediately. In this case, the runtime of report MSSCSTORE can easily take a few hours. Therefore, you should run report MSSCSTORE as a batch job (by pressing F9). Make sure that the SQL Server transaction log is large enough and that log backups are performed regularly.
  • Conversion during R3load System Copy
    You can automatically convert all cubes to Columnstore during an R3load-based system copy. To do so, choose “SQL Server 2016 (all column-store)” as the database version in report SMIGR_CREATE_DDL.

Conclusion

For SAP BW releases below 7.40 SP8 you should convert all BW cubes to Columnstore. For newer SAP BW releases you should consider applying the Flat Cube. A Flat Cube always uses the Columnstore.

Flat Cube became default in SAP BW


Overview

The following features have been optionally available in SAP BW on Microsoft SQL Server for several years: the Columnstore and the Flat Cube.

The impact of these features is described in https://blogs.msdn.microsoft.com/saponsqlserver/2017/05/08/performance-evolution-of-sap-bw-on-sql-server. Customer experience showed that using these features almost always resulted in BW query performance improvements. Therefore, these features are turned on by default after applying SAP Note 2582158 – Make Columnstore and Flat Cube default.

Flat Cube became the default only on SQL Server 2016 and newer.

Creating a New Cube

A new cube will be created as a Flat Cube if you mark the checkbox “Flat InfoCube”.

The default value of this checkbox has been changed. It is now turned on. You can revert to the old behavior by setting the following RSADMIN parameter (In this case, only the default setting of the checkbox changes):

  • MSS_DEFAULT_FLAT = FALSE

A Flat Cube always has a columnstore index. You can choose the Flat option in combination with the RealTime option. Keep in mind that you cannot create aggregates on a Flat Cube.

Transporting a Cube

The flat property of a BW cube can be transported from an SAP source system to an SAP target system once you have applied SAP Note 2550929 – Inconsistent metadata in case of transport of flat cubes non HANA landscape. A Flat Cube in the source system will be created as a Flat Cube in the target system. A non-flat cube will be created as non-flat in the target system. However, the flat property is not changed in the target system when transporting a cube if the cube already exists in the target system and is not empty (meaning the fact tables contain data).

Converting all Cubes to Flat Cube

The procedure of converting a cube to a Flat Cube is described in https://blogs.msdn.microsoft.com/saponsqlserver/2018/01/03/improve-sap-bw-performance-by-applying-the-flat-cube:

  • Use SAP report RSDU_REPART_UI for converting a single cube
  • Use SAP report RSDU_IC_STARFLAT_MASSCONV for converting many or all cubes

Keep in mind that an automatic conversion to Flat Cube during an R3load based system copy or database migration is not possible. You have to convert the cubes after the database migration on the target system.

Conclusion

Since the conversion to Flat Cube can be very time-consuming, you often do not want to perform this on all your cubes. You may want to start using the Flat Cube for your most important cubes. For all non-flat cubes you should at least apply the Columnstore (which is a fast and simple operation). A Flat Cube always uses the Columnstore.

SAP on Azure: General Update – June 2018


SAP and Microsoft are continuously adding new features and functionalities to the Azure platform. The key objective of the Azure cloud platform is to deliver the best performance and availability at the lowest TCO and simplest operation. This blog includes updates, fixes, enhancements and best practice recommendations collated over recent months.

1. M-Series is Certified for SAP Hana – S4, BW4, BWoH and SoH up to 3.8TB RAM

SAP Hana customers can run S4HANA, BW4Hana, BW on Hana and Business Suite on Hana in production in many of the Azure datacenters in the Americas, Europe and Asia (https://azure.microsoft.com/en-us/global-infrastructure/regions/). More information is in this blog: https://azure.microsoft.com/en-us/blog/azure-m-series-vms-are-now-sap-hana-certified/

Requirements: Write Accelerator must be used for the Transaction Log disk only. Suse 12.3 or RHEL 7.3 or higher.

The SAP IaaS catalogue now includes M-series and Hana Large Instances

More information on the Write Accelerator can be found here:

https://azure.microsoft.com/en-us/blog/write-accelerator-for-m-series-virtual-machines-now-generally-available/

https://docs.microsoft.com/en-us/azure/virtual-machines/windows/how-to-enable-write-accelerator
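As an illustration, Write Accelerator can be enabled when attaching a log disk with the AzureRM PowerShell module. This is a minimal sketch only: the resource group, VM and disk names are placeholders, and the cmdlet parameters should be cross-checked against the documentation above.

# attach an empty log disk with Write Accelerator enabled (names are placeholders)
$vm = Get-AzureRmVM -ResourceGroupName "sap-rg" -Name "hana-m64"
$vm = Add-AzureRmVMDataDisk -VM $vm -Name "hana-log-1" -Lun 2 -Caching None -DiskSizeInGB 512 -CreateOption Empty -WriteAccelerator
Update-AzureRmVM -ResourceGroupName "sap-rg" -VM $vm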

The central Note for SAP Hana on Azure VMs is Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types https://launchpad.support.sap.com/#/notes/0001928533

The Note for Hana Large Instances for memory up to 20TB scale up is Note 2316233 – SAP HANA on Microsoft Azure (Large Instances) https://launchpad.support.sap.com/#/notes/2316233

Summary of M-Series VMs for SAP NetWeaver and SAP Hana

M-Series running SAP Hana

1. Transaction Log disk(s) must have Azure Write Accelerator Enabled https://docs.microsoft.com/en-us/azure/virtual-machines/linux/how-to-enable-write-accelerator

2. Azure Write Accelerator must not be activated on disks holding DBMS datafiles, temp files etc

3. Azure Accelerated Networking should always be enabled on M-Series VMs running Hana

4. The precise OS releases that are supported for Hana can be found in the SAP Hana IaaS Directory https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure

5. Where there is any discrepancy between an SAP OSS Note such as 1928533 or another source, the SAP Hana IaaS Directory takes precedence

M-Series running AnyDB (SQL, Oracle, Sybase etc)

1. Windows Server 2012/2016, Suse 12.3, RHEL 7.x and Oracle Linux are all supported

2. Transaction Log disk(s) can optionally have Azure Write Accelerator enabled https://docs.microsoft.com/en-us/azure/virtual-machines/linux/how-to-enable-write-accelerator

3. Azure Write Accelerator must not be activated on Data disks

4. Azure Accelerated Networking should always be enabled on M-Series VMs running AnyDB

5. If running Oracle Linux, the RHEL-compatible kernel must be used (as at June 2018) instead of the Oracle UEK4 kernel. Oracle UEK5 will support Accelerated Networking with Oracle Linux 7.5

Additional Small Certified M-Series VMs

Small M-Series VMs are certified:

1. M64ls with 64vCPU and 512GB

2. M32ls with 32vCPU and 256GB

3. M32ts with 32vCPU and 192GB

Disk configuration and additional information on these new smaller M-Series VMs can be found here https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-vm-operations

2. SAP NetWeaver on Windows Hyper-V 2016 Fully Supported

Windows Server 2016 Hyper-V is now fully supported as a Hypervisor for SAP applications running on Windows.

Hyper-V 2016 is a powerful component for customers wishing to deploy a hybrid cloud environment with some components on-premises and some components on Azure.

A special fix for Windows 2016 is required before using Hyper-V 2016. Apply the latest update for “Windows 10 1607/Windows Server 2016”, but at least this patch level: https://support.microsoft.com/en-us/help/4093120/windows-10-update-kb4093120

https://wiki.scn.sap.com/wiki/display/VIRTUALIZATION/SAP+on+Microsoft+Hyper-V

1409604 – Virtualization on Windows: Enhanced monitoring https://launchpad.support.sap.com/#/notes/0001409604

1409608 – Virtualization on Windows https://launchpad.support.sap.com/#/notes/0001409608

More information on Windows 2016 for SAP is here:

https://blogs.msdn.microsoft.com/saponsqlserver/2017/03/07/windows-2016-is-now-generally-available-for-sap/

https://blogs.sap.com/2017/05/06/performance-tuning-guidelines-for-windows-server-2016-hyper-v/

3. Build High Availability Capability within Azure Regions with Availability Zones

Availability Zones are Generally Available in many Azure Regions and will be deployed to most Azure regions shortly.

Availability Zones are physically separated datacenters with independent network and power infrastructure

VMs running in an Availability Zone achieve an SLA of 99.99% https://azure.microsoft.com/en-us/support/legal/sla/virtual-machines/v1_8/

A very good overview of Availability Zones on Azure and the interaction with other Azure components is detailed in this blog

https://blogs.msdn.microsoft.com/igorpag/2018/05/03/azure-availability-zones-quick-tour-and-guide/

More information below

https://docs.microsoft.com/en-us/azure/load-balancer/load-balancer-standard-availability-zones

https://blogs.msdn.microsoft.com/igorpag/2017/10/08/why-azure-availability-zones/

https://azure.microsoft.com/en-us/global-infrastructure/availability-zones/

A typical topology is depicted below.


The Azure Standard Internal Load Balancer is used for workloads that are distributed across Availability Zones. Even when deploying VMs into an Azure region that does not yet have Availability Zones it is recommended to use the Standard Internal Load Balancer in Zone-redundant mode. This allows the same load balancer configuration to be kept if the deployment is later moved onto Availability Zones.

To view the VM types that are available in each Availability Zone in a Datacenter run this PowerShell command

Get-AzureRmComputeResourceSku | where {$_.Locations.Contains("southeastasia") -and $_.ResourceType.Equals("virtualMachines") -and $_.LocationInfo[0].Zones -ne $null }
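To make the output easier to read, the same query can be piped through Select-Object (a variation of the command above; the Zones property is flattened into a comma-separated string):

Get-AzureRmComputeResourceSku | where {$_.Locations.Contains("southeastasia") -and $_.ResourceType.Equals("virtualMachines") -and $_.LocationInfo[0].Zones -ne $null } | Select-Object Name, @{Name="Zones"; Expression={$_.LocationInfo[0].Zones -join ","}}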

Similar information can be seen in the Azure Portal when creating a VM

Customers building High Availability solutions with Suse 12.x operating system can review documentation on how to deploy single SID and Multi SID Suse Pacemaker clusters

The Microsoft documentation discusses the scenario “Microsoft SAP Fencing Agent + single iSCSI” STONITH configuration

An alternative deployment scenario is “Two iSCSI devices in different Availability Zones”.

A Suse bug fix may be required to configure two iSCSI devices:
https://www.suse.com/support/kb/doc/?id=7022477

https://ptf.suse.com/f2cf38b50ed714a8409693060195b235/sles12-sp3-hae/14410/x86_64/20171219  (a user id is needed)

A recommended deployment configuration is to place each iSCSI source in a different Availability Zone.

4. Sybase ASE 16.3 PL3 “Always-on” on Azure – 2 Node HA + 3rd Async Node for DR

A new blog with step-by-step instructions on how to install and configure a 2 node HA Sybase cluster with a third node for DR has been released.

https://blogs.msdn.microsoft.com/saponsqlserver/2018/05/18/installation-procedure-for-sybase-16-3-patch-level-3-always-on-dr-on-suse-12-3-recent-customer-proof-of-concept

5. Very Useful Links for SAP on Azure Consultants

The listing below is a comprehensive collection of links that have proved very useful for many consultants working at System Integrators.

SAP on Azure Reference Architectures

SAP S/4HANA for Linux Virtual Machines on Azure https://docs.microsoft.com/en-gb/azure/architecture/reference-architectures/sap/sap-s4hana

Run SAP HANA on Azure Large Instances https://docs.microsoft.com/en-gb/azure/architecture/reference-architectures/sap/hana-large-instances

Deploy SAP NetWeaver (Windows) for AnyDB on Azure Virtual Machines https://docs.microsoft.com/en-gb/azure/architecture/reference-architectures/sap/sap-netweaver

High Availability SAP Netweaver Any DB

High-availability architecture and scenarios for SAP NetWeaver

Azure Virtual Machines high availability architecture and scenarios for SAP NetWeaver https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-architecture-scenarios

https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-suse

Azure infrastructure preparation for SAP NetWeaver high-availability deployment

Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and shared disk for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-infrastructure-wsfc-shared-disk

Prepare Azure infrastructure for SAP high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-infrastructure-wsfc-file-share

Prepare Azure infrastructure for SAP high availability by using a SUSE Linux Enterprise Server cluster framework for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-suse#setting-up-a-highly-available-nfs-server

Installation of an SAP NetWeaver high availability system in Azure

Install SAP NetWeaver high availability by using a Windows failover cluster and shared disk for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-installation-wsfc-shared-disk

Install SAP NetWeaver high availability by using a Windows failover cluster and file share for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-high-availability-installation-wsfc-file-share

Install SAP NetWeaver high availability by using a SUSE Linux Enterprise Server cluster framework for SAP ASCS/SCS instances https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/high-availability-guide-suse#prepare-for-sap-netweaver-installation

High Availability SAP Hana

HANA Large Instance

SAP HANA Large Instances high availability and disaster recovery on Azure https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery

High availability set up in SUSE using the STONITH https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/ha-setup-with-stonith

SAP HANA high availability for Azure virtual machines https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/sap-hana-availability-overview

SAP HANA availability within one Azure region https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/sap-hana-availability-one-region

SAP HANA availability across Azure regions https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/sap-hana-availability-across-regions

Disaster Recovery

Protect a multi-tier SAP NetWeaver application deployment by using Site Recovery https://docs.microsoft.com/en-gb/azure/site-recovery/site-recovery-sap

SAP HANA Large Instances high availability and disaster recovery on Azure https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery

Setting Up Hana System Replication on Azure Hana Large Instances https://blogs.msdn.microsoft.com/saponsqlserver/2018/02/10/setting-up-hana-system-replication-on-azure-hana-large-instances/

Monitoring

New Azure PowerShell cmdlets for Azure Enhanced Monitoring https://blogs.msdn.microsoft.com/saponsqlserver/2016/05/16/new-azure-powershell-cmdlets-for-azure-enhanced-monitoring/

The Azure Monitoring Extension for SAP on Windows – Possible Error Codes and Their Solutions https://blogs.msdn.microsoft.com/saponsqlserver/2016/01/29/the-azure-monitoring-extension-for-sap-on-windows-possible-error-codes-and-their-solutions/

Azure Extended monitoring for SAP https://blogs.msdn.microsoft.com/saponsqlserver/2014/06/24/azure-extended-monitoring-for-sap/

https://docs.microsoft.com/en-us/azure/operations-management-suite/

https://azure.microsoft.com/en-us/services/monitor/

https://azure.microsoft.com/en-us/services/network-watcher/

Automation

https://azure.microsoft.com/en-us/services/automation/

Automate the deployment of SAP HANA on Azure https://github.com/AzureCAT-GSI/SAP-HANA-ARM

Migration from on-premises DC to Azure

Transfer data with the AzCopy https://docs.microsoft.com/en-us/azure/storage/common/storage-use-azcopy

Azure Import/Export service https://docs.microsoft.com/en-us/azure/storage/common/storage-import-export-service

Very Large Database Migration to Azure https://blogs.msdn.microsoft.com/saponsqlserver/2018/04/10/very-large-database-migration-to-azure-recommendations-guidance-to-partners/

SAP on Azure – DMO with System Move https://blogs.sap.com/2017/10/05/your-sap-on-azure-part-2-dmo-with-system-move/

SAP on Azure certification

SAP Certified IaaS Platforms https://www.sap.com/dmc/exp/2014-09-02-hana-hardware/enEN/iaas.html#categories=Microsoft%20Azure

SAP Note #1928533 – SAP Applications on Azure: Supported Products and Azure VM types  https://launchpad.support.sap.com/#/notes/1928533

SAP certifications and configurations running on Microsoft Azure https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-certifications

Azure M-series VMs are now SAP HANA certified https://azure.microsoft.com/en-us/blog/azure-m-series-vms-are-now-sap-hana-certified/

Backup Solutions

Azure VM backup for OS https://azure.microsoft.com/en-gb/services/backup/

HANA VM Backup – overview https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-guide

HANA VM backup to file https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/sap-hana-backup-file-level

HANA VM backup based on storage snapshots https://docs.microsoft.com/en-gb/azure/virtual-machines/workloads/sap/sap-hana-backup-storage-snapshots

HANA Large Instance (HLI) Backup – overview https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery#backup-and-restore

HLI backup based on storage snapshots https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-overview-high-availability-disaster-recovery#using-storage-snapshots-of-sap-hana-on-azure-large-instances

Use third party backup tools: Commvault, Veritas, etc.

All the major third-party backup tools are supported in Azure and have agents for SAP HANA, SQL, Oracle, Sybase etc

Commvault

Azure: https://documentation.commvault.com/commvault/v11/article?p=31252.htm

SAP HANA: https://documentation.commvault.com/commvault/v11/article?p=22305.htm

Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/commvault.commvault?tab=Overview

Veritas NetBackup

Azure: https://www.veritas.com/support/en_US/article.100041400

HANA: https://www.veritas.com/content/support/en_US/doc/16226696-127422304-0/v88504823-127422304

Azure Marketplace: https://azuremarketplace.microsoft.com/en-us/marketplace/apps/veritas.veritas-netbackup-8-s?tab=Overview

Security

Network

Logically segment subnets https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#logically-segment-subnets

Control routing behavior https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#control-routing-behavior

Enable Forced Tunneling https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#enable-forced-tunneling

Use virtual network appliances https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#use-virtual-network-appliances

Deploy DMZs for security zoning https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#deploy-dmzs-for-security-zoning

Avoid exposure to the Internet with dedicated WAN links https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#avoid-exposure-to-the-internet-with-dedicated-wan-links

Optimize uptime and performance https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#optimize-uptime-and-performance

HTTP-based Load Balancing https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#http-based-load-balancing

External Load Balancing https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#external-load-balancing

Internal Load Balancing https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#internal-load-balancing

Use global load balancing https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#use-global-load-balancing

Disable RDP/SSH Access to Azure Virtual Machines https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#disable-rdpssh-access-to-azure-virtual-machines

Enable Azure Security Center https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#enable-azure-security-center

Securely extend your datacenter into Azure https://docs.microsoft.com/en-us/azure/security/azure-security-network-security-best-practices#securely-extend-your-datacenter-into-azure

Operational

Monitor, manage, and protect cloud infrastructure https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#monitor-manage-and-protect-cloud-infrastructure

Manage identity and implement single sign-on https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#manage-identity-and-implement-single-sign-on

Trace requests, analyze usage trends, and diagnose issues https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#trace-requests-analyze-usage-trends-and-diagnose-issues

Monitoring services https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#monitoring-services

Prevent, detect, and respond to threats https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#prevent-detect-and-respond-to-threats

End-to-end scenario-based network monitoring https://docs.microsoft.com/en-us/azure/security/azure-operational-security-best-practices#end-to-end-scenario-based-network-monitoring

Azure Security Center https://azure.microsoft.com/en-us/blog/protect-virtual-machines-across-different-subscriptions-with-azure-security-center/

https://azure.microsoft.com/en-us/blog/how-azure-security-center-helps-detect-attacks-against-your-linux-machines/

New VM Type for single tenant isolated VM https://azure.microsoft.com/en-us/blog/new-isolated-vm-sizes-now-available/

Azure Active Directory

Azure Active Directory integration with SAP HANA https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-saphana-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json

Azure Active Directory integration with SAP Cloud Platform Identity Authentication https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-sap-hana-cloud-platform-identity-authentication-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json

Azure Active Directory integration with SAP Business ByDesign https://docs.microsoft.com/en-us/azure/active-directory/active-directory-saas-sapbusinessbydesign-tutorial?toc=%2fazure%2fvirtual-machines%2fworkloads%2fsap%2ftoc.json

Azure Active Directory integration with SAP Cloud for Customer for SSO functionality https://blogs.sap.com/2017/08/02/azure-active-directory-integration-with-sap-cloud-for-customer-for-sso-functionality/

S/4HANA environment – Fiori Launchpad SAML Single Sign-On with Azure AD https://blogs.sap.com/2017/02/20/your-s4hana-environment-part-7-fiori-launchpad-saml-single-sing-on-with-azure-ad/

Very good rollup article on Azure Networking https://blogs.msdn.microsoft.com/igorpag/2017/04/06/my-personal-azure-faq-on-azure-networking-v3/

Special thanks to Ravi Alwani for collating these links

6. New Microsoft Features for SAP Customers

Microsoft has released many new features for SAP customers:

Azure Site Recovery Azure-2-Azure – Support for Suse 12.x has been released! https://docs.microsoft.com/en-us/azure/site-recovery/azure-to-azure-support-matrix

Global vNet Peering – previously it was not possible to peer vNets in different regions. This is now Generally Available in some regions and is being deployed globally. One of the biggest advantages of Global vNet Peering is that network traffic is carried across the Azure network backbone.

https://blogs.msdn.microsoft.com/wushuai/2018/02/04/provide-cross-region-low-latency-service-based-on-azure-vnet-peering/

https://azure.microsoft.com/en-us/blog/global-vnet-peering-now-generally-available/

The new Standard Internal Load Balancer (ILB) is Availability Zone aware and has better performance than the regular Basic ILB

https://docs.microsoft.com/en-us/azure/azure-subscription-service-limits#load-balancer

https://github.com/yinghli/azure-vm-network-performance (scroll down to review performance)

SQL Server 2016 Service Pack 2 (SP2) released https://blogs.msdn.microsoft.com/sqlreleaseservices/sql-server-2016-service-pack-2-sp2-released/

Linux customers are recommended to set up the Azure Serial Console. This allows access to a Linux VM when the network stack is not working. This feature is the equivalent of an RS-232/COM port cable connection https://docs.microsoft.com/en-us/azure/virtual-machines/linux/serial-console

Azure Storage Explorer provides easier management of blob objects such as backups on Azure blob storage https://azure.microsoft.com/en-us/features/storage-explorer/

Azure now offers a Trusted Execution Environment leveraging Intel Xeon Processors with Intel SGX technology. This has not been tested with SAP yet, but may be validated in the future https://azure.microsoft.com/en-us/blog/azure-confidential-computing/

More information on new Network features can be found here https://azure.microsoft.com/en-us/blog/azure-networking-may-2018-announcements/

https://azure.microsoft.com/en-us/blog/monitor-microsoft-peering-in-expressroute-with-network-performance-monitor-public-preview/

7. New SAP Features

SAP has released a new version of SWPM. It is recommended to use this version for all new installations. The tool can be downloaded from https://support.sap.com/en/tools/software-logistics-tools.html

1680045 – Release Note for Software Provisioning Manager 1.0 (recommended: SWPM 1.0 SP 23) https://launchpad.support.sap.com/#/notes/0001680045

Customers interested in automating SWPM can review 2230669 – System Provisioning Using a Parameter Input File https://launchpad.support.sap.com/#/notes/2230669

SAP has released new SAP Downwards Compatible Kernels and has provided guidance to switch to the new 7.53 kernel for all new installations:

SAP recommends using the latest SP stack kernel (SAPEXE.SAR and SAPEXEDB.SAR), available in the Support Packages & Patches section of the SAP Support Portal https://launchpad.support.sap.com/#/softwarecenter.

For existing installations of NetWeaver 7.40, 7.50 and AS ABAP 7.51 this is SAP Kernel 749 PL 500. For details, see release note 2626990.
For new installations of NetWeaver 7.40, 7.50 and AS ABAP 7.51 this is SAP Kernel 753 PL 100. For details, see DCK note 2556153 and release note 2608318.
For AS ABAP 7.52 this is SAP Kernel 753 PL 100. For details, see release note 2608318.

2083594 – SAP Kernel 740, 741, 742, 745, 749 and 753: Versions and Kernel Patch Levels https://launchpad.support.sap.com/#/notes/2083594

2556153 – Using kernel 7.53 instead of kernel 7.40, 7.41, 7.42, 7.45, or 7.49 https://launchpad.support.sap.com/#/notes/0002556153

2350788 – Using kernel 7.49 instead of kernel 7.40, 7.41, 7.42 or 7.45 https://launchpad.support.sap.com/#/notes/0002350788

1969546 – Release Roadmap for Kernel 74x and 75x https://launchpad.support.sap.com/#/notes/1969546

https://wiki.scn.sap.com/wiki/display/SI/SAP+Kernel:+Important+News

8. Recommended Hana on Azure Disk Design Template

The Excel spreadsheet HANA-Disk-Design-Template-for-Azure contains a useful model template for customers planning to deploy Hana on Azure VMs.

The spreadsheet contains a sample Hana deployment on Azure M-series with details such as stripe sizes, Write Accelerator and other useful configuration settings

Further information and details can be found in the Azure documentation here: https://docs.microsoft.com/en-us/azure/virtual-machines/workloads/sap/hana-vm-operations

The spreadsheet is a sample only and should be adapted as required, for example the Availability Zone column will likely need to be updated.

Special thanks to Archana from Cognizant for creating this template

9. Disable 8.3 Filename Generation

Very old Windows releases were limited to filenames in the 8.3 format. Some applications that call very old file-handling APIs can only reference files by their 8.3 format filenames.

Up to and including Windows Server 2016, every file with a name longer than the 8.3 format also has an 8.3 format filename created.

This operation becomes very expensive if there are very large numbers of files in a directory. Frequently SAP customers will keep job logs or interface files on the /sapmnt share and the total number of files can reach hundreds of thousands.

It is recommended to disable 8.3 filename generation on all existing Windows servers and to include disabling 8.3 filename generation as part of the standard build of new Windows servers alongside steps such as removing Internet Explorer and disabling SMB v1.0.
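The commands below are a minimal sketch for checking and disabling 8.3 filename generation from an elevated command prompt. The D:\usr\sap path is only a placeholder; test the strip operation carefully first, because some applications store short names in the registry.

REM check the current 8.3 name creation setting (0 = enabled, 1 = disabled)
fsutil 8dot3name query
REM disable 8.3 name creation for all volumes (affects newly created files only)
fsutil behavior set disable8dot3 1
REM optionally remove existing short names under a directory tree
fsutil 8dot3name strip /s /v D:\usr\sap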

662452 – Poor file system performance/errors during data accesses https://launchpad.support.sap.com/#/notes/662452

https://support.microsoft.com/en-us/help/121007/how-to-disable-8-3-file-name-creation-on-ntfs-partitions

10. Hundreds of vCPU and 12TB Azure VMs Certified for SAP Hana Announced

In a blog by Corey Sanders Microsoft confirms a new generation of M-Series VMs supporting hundreds of vCPU and 12TB of RAM. The blog “Why you should bet on Azure for your infrastructure needs, today and in the future” announces the following:

1. Next Generation M-Series based on Intel Skylake CPU supporting up to 12TB of RAM

2. New Hana Large Instance TDIv5 appliances with 6 TB, 12 TB and 18 TB

3. New Standard SSD based storage – suitable for backups and bulk storage https://azure.microsoft.com/en-us/blog/preview-standard-ssd-disks-for-azure-virtual-machine-workloads/

4. New smaller M-Series VMs suitable for non-production, Solution Manager and other smaller SAP Hana databases

https://azure.microsoft.com/en-us/blog/why-you-should-bet-on-azure-for-your-infrastructure-needs-today-and-in-the-future/

https://azure.microsoft.com/en-us/blog/offering-the-largest-scale-and-broadest-choice-for-sap-hana-in-the-cloud/

Miscellaneous Topics, Notes & Links

2343511 – Microsoft Azure connector for SAP Landscape Management (LaMa) https://launchpad.support.sap.com/#/notes/0002343511

Optimizing SAP for Azure https://www.microsoft.com/en-us/download/details.aspx?id=56819

Useful link on setting up LVM on Linux VMs https://docs.microsoft.com/en-us/azure/virtual-machines/linux/configure-lvm

Updated SQL Column Store documentation – recommended for all BW on SQL customers 2116639 – SQL Server Columnstore Documentation https://launchpad.support.sap.com/#/notes/0002116639

Installation Procedure for Sybase 16.3 Patch Level 3 Always-on + DR on Suse 12.3 – Recent Customer Proof of Concept


In recent months we saw several customers with large investments into Hana technologies approach Microsoft for information about deploying large mission critical SAP applications on Azure with the Sybase ASE database.

SAP Hana customers are typically able to deploy Sybase ASE at little or no additional cost if they have licensed Hana Database.

Many of the customers that have contacted Microsoft are shutting datacenters or terminating UNIX platforms and moving ECC or BW systems in the size range of 25-45TB DB volume to Azure. An earlier blog describes some of the requirements and best practices for VLDB migrations to Azure: https://blogs.msdn.microsoft.com/saponsqlserver/2018/04/10/very-large-database-migration-to-azure-recommendations-guidance-to-partners/

Until recently there was no simple, documented, straightforward installation procedure for a typical two-node High-Availability pair with synchronous replication and a third node with asynchronous replication. This is quite a common requirement for SAP customers.

This blog is designed to supplement the existing SAP-provided documentation and to provide some hints and additional information. The SAP Sybase team is continuously updating and improving the Sybase documentation, so it is always recommended to start with the official documentation and then cross-reference it with this blog. This document is based on real deployments from Cognizant and DXC. The latest versions of Sybase and Suse were then installed in a lab test environment to provide screenshots.

High Level Overview of Installation Steps

The high-level installation process for a 3 tier SAP Distributed Installation is:

  1. Read required OSS Notes, Installation Guides, Download Installation Media and the SAP on Sybase Business Suite documentation
    1. For SUSE Linux Release 12 with SP3, release notes: https://www.suse.com/releasenotes/x86_64/SUSE-SLES/12-SP3/
  2. Provision Azure VMs with Suse for SAP Applications 12.3 with Accelerated Networking enabled
  3. Perform OS patching and preparation steps detailed below
  4. Run SWPM Distributed Install and install the ASCS Instance
  5. Export the /sapmnt NFS share
  6. Mount the /sapmnt NFS share on the Primary, Secondary and DR DB server
  7. Run SWPM Distributed Install and install the Primary DB Instance
  8. Run SWPM Distributed Install and install the Primary Application Server (Optional: add additional App servers)
  9. Perform Sybase Always-on preparation steps on Primary DB Instance
  10. Run setuphadr on Primary DB Instance
  11. Run SWPM Distributed Install and install the Secondary DB Instance
  12. Perform Sybase Always-on preparation steps on Secondary DB Instance
  13. Run setuphadr on Secondary DB Instance
  14. Run SWPM Distributed Install and install the DR DB Instance
  15. Perform Sybase Always-on preparation steps on DR DB Instance
  16. Run setuphadr on DR DB Instance
  17. Run post steps such as installing Fault Manager

Deployment Config

  1. Suse 12.3 with latest updates
  2. Sybase 16.03.03
  3. SWPM version 22 or 23. SAP Kernel 7.49 patch 500. NetWeaver ABAP 7.50
  4. Azure Ev3 VMs with Accelerated Networking and 4 vCPU
  5. Premium Storage – each DB server has 2 x P20 disks (or more as required). App server has only a boot disk
  6. Official Sybase Documentation (some steps do not work, supplement with this blog) https://help.sap.com/viewer/product/SAP_ASE/16.0.3.3/en-US
  7. Sample Response Files are attached here: Sybase-Sample-Response-Files. It is recommended to download and review these files
  8. Sybase Always-on does not leverage OS-level clustering technologies such as Pacemaker or Windows cluster. The Azure ILB is not used. Instead, the SAP work process is aware of the Primary and Secondary Sybase server. The DR node does not support automatic failover; configuring the SAP app servers to connect to the DR node is a manual process
  9. This installation shows a “Distributed” installation. If the SAP Central Services should be highly available, follow the SAP on Azure documentation for Pacemaker
  10. Sybase Fault Manager is automatically installed on the SAP PAS during installation
  11. Be careful of Linux vs. Windows end-of-line (EOL) characters. Use the Linux command cat -v response_file.rs. If ^M characters are seen, the file contains Windows EOL characters.

    Example: cat -v test.sh

    Output:

    Line 1 ^M

    Line 2 ^M

    Line 3 ^M

    (Note: ^M is a single character in Linux, which is the carriage return in Windows. This needs to be fixed before using the file in Linux.)

        To fix the issue

            $> dos2unix test.sh

            Output

                Line 1

                Line 2

                Line 3

  12. Hosts file configuration used for this deployment

    Example: <IP Address> <FQDN> <SHORTNAME> <#Optional Comments>

    10.1.0.9     sybdb1.hana.com     sybdb1     #primary DB

    10.1.0.10    sybapp1.hana.com    sybapp1    #SAP NW 7.5 PAS

    10.1.0.11    sybdb2.hana.com     sybdb2     #secondary DB

    10.1.0.12    sybdb3.hana.com     sybdb3     #tertiary DB for DR

Common Prepare Steps on all Suse Servers

sudo zypper install -y glibc-32bit

sudo zypper install -y libgcc_s1-32bit

# these two 32-bit glibc packages are mandatory, otherwise Always-on will not work

sudo zypper up -y

Note: it is mandatory to reboot the server if kernel patches are applied.

# resize the boot disk. The default Linux root disk of 30GB is too small. Shut down the VM and edit the disks in the Azure Portal or PowerShell. Increase the size of the disk to 60-100GB. Restart the VM and run the commands below. There is no benefit or advantage to provisioning an additional separate disk for an SAP application server

sudo fdisk /dev/sda

## delete the existing partition (this will not delete the data), create a [n]ew [p]rimary partition with defaults and [w]rite the config

sudo resize2fs /dev/sda2

sudo reboot

#Check Accelerated Networking is working

/sbin/ethtool -S eth0 | grep vf_

#Add these entries to the hosts file

sudo vi /etc/hosts

10.1.0.9     sybdb1.hana.com     sybdb1     #primary DB

10.1.0.10    sybapp1.hana.com    sybapp1    #SAP NW 7.5 PAS

10.1.0.11    sybdb2.hana.com     sybdb2     #secondary DB

10.1.0.12    sybdb3.hana.com     sybdb3     #tertiary DB for DR

# edit the waagent config to create a swapfile

sudo vi /etc/waagent.conf

Lines to look for:

ResourceDisk.EnableSwap=n

ResourceDisk.SwapSizeMB=

Modify the above values. Note: the swap size must be given in MB.

#enable the swapfile and set a size of 2GB or more. Example:

ResourceDisk.EnableSwap=y

ResourceDisk.SwapSizeMB=2000

Once done, a restart of the agent is necessary to get the swap file up and active.

sudo systemctl restart waagent

Other Services to be enabled and restarted are:

sudo systemctl restart nfs-server

sudo systemctl enable nfs-server

sudo systemctl status uuidd

sudo systemctl enable uuidd

sudo systemctl start uuidd

sudo systemctl status uuidd

##run sapcar and unpack SWPM 22 or 23

sapcar -xvf SWPM10SP22_7-20009701.SAR

SAP APP Server ASCS Install

sudo /source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

Open a web browser from a Management Server and enter the Suse os-user name and password: https://10.1.0.10:4237/sapinst/docs/index.html

## after the install, export the NFS share for /sapmnt

sudo vi /etc/exports

# add this line: /sapmnt *(rw,no_root_squash)

## open port 2049 for nfs on NSG if required [by default VMs on same vnet can talk to each other]

 sudo systemctl restart nfs-server
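To verify the export is visible from the DB servers, a quick check can be run on each DB node (assuming the NFS client utilities are installed):

showmount -e sybapp1

# the output should list /sapmnt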

SAP DB Instance Install

##do common preparation steps such as zypper and hosts file etc

#create disks for sybase

sudo fdisk -l | grep /dev/sd

sudo fdisk /dev/sdc  -> n, p, w

sudo fdisk /dev/sdd  -> n, p, w

#It is generally recommended to use LVM and create pv, lv etc here so we can test performance later with striping additional disks.

Note: if multiple disks are used for the data / backup / log storage, enable striping to get optimal performance.

Example:

vgcreate VG_DATA /dev/sdc /dev/sdd

lvcreate -l 100%FREE VG_DATA -n lv_data -i 2 -I 256

sudo pvcreate /dev/sdc1 /dev/sdd1

sudo pvscan

sudo vgcreate syb_data_vg /dev/sdc1

sudo vgcreate syb_log_vg /dev/sdd1

sudo lvcreate -i1 -l 100%FREE -n syb_data_lv syb_data_vg

sudo lvcreate -i1 -l 100%FREE -n syb_log_lv syb_log_vg

sudo mkfs.xfs -f /dev/syb_data_vg/syb_data_lv

sudo mkfs.xfs -f /dev/syb_log_vg/syb_log_lv

sudo mkdir -p /sybase/source

sudo mkdir -p /log

sudo mkdir -p /sapmnt

sudo blkid | grep log

sudo blkid | grep data

Edit /etc/fstab and add the entries for the created disks.

Option 1:

Identify based on created volume group and lv details.

Ex: ls /dev/mapper/

And fetch the right devices

Ex: syb_data_vg-syb_data_lv

Add the entries into /etc/fstab

sudo vi /etc/fstab

Add the lines.

/dev/mapper/syb_data_vg-syb_data_lv /sybase xfs defaults,nofail 1 2

Option 2:

# now sudo su - to the root user and run this (replace the GUIDs) – this cannot be run via the sudo command, you must be root

sudo su -

echo "/dev/disk/by-uuid/799603d6-20c0-47af-80c9-75c72a573829 /sybase xfs defaults,nofail 0 2" >> /etc/fstab

echo "/dev/disk/by-uuid/2bb3f00c-c295-4417-b258-8de43a844e23 /log xfs defaults,nofail 0 2" >> /etc/fstab

exit

sudo mount -a

sudo df -h

##create a directory for the source files.

sudo mkdir /sybase/source

## copy source files

sudo chmod 777 /sybase/source -R

## setup automount for /sapmnt

### use automount, not the “old” way: sudo mount -t nfs4 -o rw sybapp1:/sapmnt /sapmnt

sudo mkdir /sapmnt

sudo vi /etc/auto.master

# Add the following line to the file, save and exit

+auto.master

/- /etc/auto.direct

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit

/sapmnt -nfsvers=4,nosymlink,sync sybapp1:/sapmnt

sudo systemctl enable autofs

sudo service autofs restart
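A quick way to verify the automount configuration (accessing the directory triggers the mount, as noted later in this blog):

ls /sapmnt

df -h | grep sapmnt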

sudo /sybase/source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

Open web browser and start installation

SAP PAS Install

##do same preparations as ASCS for zypper and hosts file etc

sudo /source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

https://10.1.0.10:4237/sapinst/docs/index.html

AlwaysOn Install Primary

##do same preparations as ASCS for zypper and hosts file etc

Check that these libraries are installed, otherwise Fault Manager will silently fail

sudo zypper install glibc-32bit

sudo zypper install libgcc_s1-32bit

##Login as syb<sid> – in this case the <sid> = ase

sybdb1 /sybase% whoami

sybase

sybdb1 /sybase% pwd

/sybase

sybdb1 /sybase% ls

ASE  source  sybdb1_dma.rs  sybdb1_setup_hadr.rs

sybdb1 /sybase% cat sybdb1_dma.rs | grep USER_INSTALL_DIR

USER_INSTALL_DIR=/sybase/ASE

sybdb1 /sybase%

sybdb1 /sybase% source/ASE1633/BD_SYBASE_ASE_16.0.03.03_RDBMS_for_BS_/SYBASE_LINUX_X86_64/setup.bin -f /sybase/sybdb1_dma.rs -i silent

 

Note: if the command does not run, put several <space> characters before the -i silent.
Use the full path to setup.bin from the ASE ZIP file and the full path to the response file, otherwise the command fails with a non-specific error message.

 

 

## run this command to unlock the sa account. The command will fail if “-X” is not specified

isql -Usapsso -PSAPHana12345 -SASE -X

sp_locklogin sa, unlock

go

## If any errors occur, review this note

2450148 – ‘Warning: stopService() only supported on windows’ message happened during HADR configuration -SAP ASE

##Run setuphadr after editing the response file based on Sybase documentation (sample response file is attached to this blog)

setuphadr /sybase/sybdb1_setup_hadr.rs

AlwaysOn Install Secondary

##do same preparations as ASCS for zypper and hosts file etc

Check that these libraries are installed, otherwise Fault Manager will silently fail

sudo zypper install glibc-32bit

sudo zypper install libgcc_s1-32bit

##do same preparations as ASCS for zypper and hosts file etc

#create disks for sybase

sudo fdisk -l | grep /dev/sd

sudo fdisk /dev/sdc -> n, p, w

sudo fdisk /dev/sdd -> n, p, w

#only 1 disk, but created pv, lv etc here so we can test performance later with striping additional disks

sudo pvcreate /dev/sdc1 /dev/sdd1

sudo pvscan

sudo vgcreate syb_data_vg /dev/sdc1

sudo vgcreate syb_log_vg /dev/sdd1

sudo lvcreate -i1 -l 100%FREE -n syb_data_lv syb_data_vg

sudo lvcreate -i1 -l 100%FREE -n syb_log_lv syb_log_vg

sudo mkfs.xfs -f /dev/syb_data_vg/syb_data_lv

sudo mkfs.xfs -f /dev/syb_log_vg/syb_log_lv

sudo mkdir -p /sybase/source

sudo mkdir -p /log

sudo mkdir -p /sapmnt

sudo blkid | grep log

sudo blkid | grep data

# now sudo su - to the root user and run this (replace the GUIDs) – this cannot be run via the sudo command, you must be root

sudo su -

echo "/dev/disk/by-uuid/799603d6-20c0-47af-80c9-75c72a573829 /sybase xfs defaults,nofail 0 2" >> /etc/fstab

echo "/dev/disk/by-uuid/2bb3f00c-c295-4417-b258-8de43a844e23 /log xfs defaults,nofail 0 2" >> /etc/fstab

exit

sudo mount -a

sudo df -h


##create a directory for the source files.

sudo mkdir /sybase/source

## copy source files

sudo chmod 777 /sybase/source -R

## setup automount for /sapmnt

### use automount, not the “old” way: sudo mount -t nfs4 -o rw sybapp1:/sapmnt /sapmnt

sudo mkdir /sapmnt

sudo vi /etc/auto.master

# Add the following line to the file, save and exit

+auto.master

/- /etc/auto.direct

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit

/sapmnt -nfsvers=4,nosymlink,sync sybapp1:/sapmnt

sudo systemctl enable autofs

sudo service autofs restart

sudo /sybase/source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

Stop the autofs and unmount the /sapmnt – sapinst will continue

The /sapmnt must be mounted again shortly after

##Login as syb<sid> – in this case the <sid> = ase

/sybase/source/ASE1633/BD_SYBASE_ASE_16.0.03.03_RDBMS_for_BS_/SYBASE_LINUX_X86_64/setup.bin -f /sybase/sybdb2_dma.rs -i silent

isql -Usapsso -PSAPHana12345 -SASE -X

sp_locklogin sa, unlock

go

2450148 – ‘Warning: stopService() only supported on windows’ message happened during HADR configuration -SAP ASE

##Run setuphadr after editing the response file based on Sybase documentation (sample response file is attached to this blog)

setuphadr /sybase/sybdb2_setup_hadr.rs

Do not restart the RMA – this is not required

AlwaysOn FM Install & Post Steps

The Sybase documentation for these steps is here.

https://help.sap.com/viewer/efe56ad3cad0467d837c8ff1ac6ba75c/16.0.3.3/en-US/286f4fc8b3ab4439b3400e97288152dc.html

The documentation is not complete. After doing the steps in the documentation link, review this Note:

1959660 – SYB: Database Fault Management

su - aseadm

rsecssfx put DB_CONNECT/SYB/DR_USER DR_admin -plain

rsecssfx put DB_CONNECT/SYB/DR_PASSWORD SAPHana12345

sybdb1:~ # su - aseadm

sybdb1:aseadm 1> rsecssfx put DB_CONNECT/SYB/DR_USER DR_admin -plain

sybdb1:aseadm 2> rsecssfx put DB_CONNECT/SYB/DR_PASSWORD SAPHana12345

sybdb1:aseadm 3>

sybdb2:~ # su - aseadm

sybdb2:aseadm 1> rsecssfx put DB_CONNECT/SYB/DR_USER DR_admin -plain

sybdb2:aseadm 2> rsecssfx put DB_CONNECT/SYB/DR_PASSWORD SAPHana12345

sybdb2:aseadm 3>

## Run AlwaysOn Tuning & Configuration script on Primary and Companion

isql -UDR_admin -PSAPHana12345 -Ssybdb1:4909

sap_tune_rs Site1, 16, 4

isql -UDR_admin -PSAPHana12345 -Ssybdb2:4909

sap_tune_rs Site2, 16, 4

sybdb2:aseadm 3> isql -UDR_admin -PSAPHana12345 -Ssybdb2:4909

1> sap_tune_rs Site2, 16, 4

2> go

TASK NAME                TYPE               VALUE
-----------------------  -----------------  ------------------------------------------------------------
Tune Replication Server  Start Time         Sun Apr 29 06:20:37 UTC 2018
Tune Replication Server  Elapsed Time       00:07:11
TuneRS                   Task Name          Tune Replication Server
TuneRS                   Task State         Completed
TuneRS                   Short Description  Tune Replication Server configurations.
TuneRS                   Long Description   Waiting 180 seconds: Waiting Replication Server to fully up.
TuneRS                   Task Start         Sun Apr 29 06:20:37 UTC 2018
TuneRS                   Task End           Sun Apr 29 06:27:48 UTC 2018
TuneRS                   Hostname           sybdb2

(9 rows affected)

## On the APP server only

sudo vi .dbenv.csh

setenv dbs_syb_ha 1

setenv dbs_syb_server sybdb1:sybdb2

## Restart the SAP App server

sapcontrol -nr 00 -function StopSystem ALL

sapcontrol -nr 00 -function StartSystem ALL

https://help.sap.com/viewer/efe56ad3cad0467d837c8ff1ac6ba75c/16.0.3.3/en-US/41b39cb667664dc09d2d9f4c87b299a7.html

sybapp1:aseadm 6> rsecssfx list

| Record Key                      | Status             | Time Stamp of Last Update |
|---------------------------------|--------------------|---------------------------|
| DB_CONNECT/DEFAULT_DB_PASSWORD  | Encrypted          | 2018-04-29 03:07:11 UTC   |
| DB_CONNECT/DEFAULT_DB_USER      | Plaintext          | 2018-04-29 03:07:07 UTC   |
| DB_CONNECT/SYB/DR_PASSWORD      | Encrypted          | 2018-04-29 06:18:26 UTC   |
| DB_CONNECT/SYB/DR_USER          | Plaintext          | 2018-04-29 06:18:22 UTC   |
| DB_CONNECT/SYB/SADB_PASSWORD    | Encrypted          | 2018-04-29 03:07:19 UTC   |
| DB_CONNECT/SYB/SADB_USER        | Plaintext          | 2018-04-29 03:07:14 UTC   |
| DB_CONNECT/SYB/SAPSID_PASSWORD  | Encrypted          | 2018-04-29 03:07:42 UTC   |
| DB_CONNECT/SYB/SAPSID_USER      | Plaintext          | 2018-04-29 03:07:37 UTC   |
| DB_CONNECT/SYB/SSODB_PASSWORD   | Encrypted          | 2018-04-29 03:07:27 UTC   |
| DB_CONNECT/SYB/SSODB_USER       | Plaintext          | 2018-04-29 03:07:22 UTC   |
| DB_CONNECT/SYB/SYBSID_PASSWORD  | Encrypted          | 2018-04-29 03:07:34 UTC   |
| DB_CONNECT/SYB/SYBSID_USER      | Plaintext          | 2018-04-29 03:07:30 UTC   |
| SYSTEM_PKI/PIN                  | Encrypted          | 2018-04-27 22:36:39 UTC   |
| SYSTEM_PKI/PSE                  | Encrypted (binary) | 2018-04-27 22:36:45 UTC   |

Summary
-------

Active Records  : 14 (Encrypted: 8, Plain: 6, Wrong Key: 0, Error: 0)

Defunct Records : 12 (180+ days: 0; Show: “list -withHistory”, Remove: “compact”)

## Run the Fault Manager Installation steps on the SAP PAS application server

sybapp1:aseadm 24> pwd

/sapmnt/ASE/exe/uc/linuxx86_64

sybapp1:aseadm 25> whoami

aseadm

sybapp1:aseadm 26> ./sybdbfm install

replication manager agent user DR_admin and password set in Secure Store.

Keep existing values (yes/no)? (yes)

SAPHostAgent connect user: (sapadm)

Enter password for user sapadm.

Password:

Enter value for primary database host: (sybdb1)

Enter value for primary database name: (ASE)

Enter value for primary database port: (4901)

Enter value for primary site name: (Site1)

Enter value for primary database heart beat port: (13777)

Enter value for standby database host: (sybdb2)

Enter value for standby database name: (ASE)

Enter value for standby database port: (4901)

Enter value for standby site name : (Site2)

Enter value for standby database heart beat port: (13787)

Enter value for fault manager host: (sybapp1)

Enter value for heart beat to heart beat port: (13797)

Enter value for support for floating database ip: (no)

Enter value for use SAP ASE Cockpit if it is installed and running: (no)

installation finished successfully.

Restart the SAP Instance – FM is added to the ASCS start profile

sybapp1:aseadm 32> sybdbfm status

fault manager running, pid = 4338, fault manager overall status = OK, currently executing in mode PAUSING

*** sanity check report (5)***.

node 1: server sybdb1, site Site1.

db host status: OK.

db status OK hadr status PRIMARY.

node 2: server sybdb2, site Site2.

db host status: OK.

db status OK hadr status STANDBY.

replication status: SYNC_OK.

AlwaysOn Install 3rd Node (DR) Async

Official SAP Sybase documentation and Links:

https://blogs.sap.com/2018/04/19/high-availability-disaster-recovery-3-node-hadr-with-sap-ase-16.0-sp03/

Documentation https://help.sap.com/viewer/38af74a09e48457ab699e83f6dfb051a/16.0.3.3/en-US

https://help.sap.com/viewer/38af74a09e48457ab699e83f6dfb051a/16.0.3.3/en-US/6ca81e90696e4946a68e9257fa2d3c31.html

1. Install the DB host using SWPM in the same way as the companion host

2. Copy the companion host response file

3. Duplicate the section with all the COMP entries, add it at the bottom, and rename the section of the newly copied COMP entries to DR (for example). Leave the old COMP and PRIM entries as is.

4. Change the setup site to DR

5. All other entries from PRIM and COMP must remain the same, since the setuphadr run for the 3rd node needs to know about the previous two hosts.

6. Execute setuphadr

Review the Sample Response File attached to this blog

##do same preparations as ASCS for zypper and hosts file etc

Check that these libraries are installed, otherwise Fault Manager will silently fail

sudo zypper install glibc-32bit

sudo zypper install libgcc_s1-32bit

##do same preparations as ASCS for zypper and hosts file etc

#create disks for sybase

Note: when multiple disks are added for data/log/backup to create a single volume, use the right striping method to get better performance

Example:

vgcreate VG_DATA /dev/sdc /dev/sdd

lvcreate -l 100%FREE VG_DATA -n lv_data -i 2 -I 256

(for the log volume use a smaller stripe size, e.g. -I 32)

sudo fdisk -l | grep /dev/sd

sudo fdisk /dev/sdc -> n, p, w

sudo fdisk /dev/sdd -> n, p, w

#only 1 disk, but created pv, lv etc here so we can test performance later with striping additional disks

sudo pvcreate /dev/sdc1 /dev/sdd1

sudo pvscan

sudo vgcreate syb_data_vg /dev/sdc1

sudo vgcreate syb_log_vg /dev/sdd1

sudo lvcreate -i1 -l 100%FREE -n syb_data_lv syb_data_vg

sudo lvcreate -i1 -l 100%FREE -n syb_log_lv syb_log_vg

sudo mkfs.xfs -f /dev/syb_data_vg/syb_data_lv

sudo mkfs.xfs -f /dev/syb_log_vg/syb_log_lv

sudo mkdir -p /sybase/source

sudo mkdir -p /log

sudo mkdir -p /sapmnt

sudo blkid | grep log

sudo blkid | grep data

Edit /etc/fstab and add the entries for the created disks.

Option 1:

Identify based on created volume group and lv details.

Ex: ls /dev/mapper/

And fetch the right devices

Ex: syb_data_vg-syb_data_lv

Add the entries into /etc/fstab

sudo vi /etc/fstab

Add the lines.

/dev/mapper/syb_data_vg-syb_data_lv /sybase xfs defaults,nofail 1 2

Option 2:

# now sudo su - to the root user and run this (replace the GUIDs) – this cannot be run via the sudo command, you must be root

sudo su -

echo "/dev/disk/by-uuid/799603d6-20c0-47af-80c9-75c72a573829 /sybase xfs defaults,nofail 0 2" >> /etc/fstab

echo "/dev/disk/by-uuid/2bb3f00c-c295-4417-b258-8de43a844e23 /log xfs defaults,nofail 0 2" >> /etc/fstab

exit

sudo mount -a

sudo df -h


Note: when automount is enabled, mount points are visible in the df -h output only after the folders have been accessed.

##create a directory for the source files.

sudo mkdir -p /sybase/source

## copy source files

sudo chmod 777 /sybase/source -R

## setup automount for /sapmnt

### use automount, not the “old” way: sudo mount -t nfs4 -o rw sybapp1:/sapmnt /sapmnt

sudo mkdir /sapmnt

sudo vi /etc/auto.master

# Add the following line to the file, save and exit

+auto.master

/- /etc/auto.direct

sudo vi /etc/auto.direct

# Add the following lines to the file, save and exit

/sapmnt -nfsvers=4,nosymlink,sync sybapp1:/sapmnt

sudo systemctl enable autofs

sudo service autofs restart

sudo /sybase/source/swpm/sapinst SAPINST_REMOTE_ACCESS_USER=<os-user>

Stop the autofs and unmount the /sapmnt – sapinst will continue

The /sapmnt must be mounted again shortly after

## Install the DMA on the DR Node

##Login as syb<sid> – in this case the <sid> = ase

source/ASE1633/BD_SYBASE_ASE_16.0.03.03_RDBMS_for_BS_/SYBASE_LINUX_X86_64/setup.bin -f /sybase/sybdb3_dma.rs -i silent

isql -Usapsso -PSAPHana12345 -SASE -X

sp_locklogin sa, unlock

go

sybdb3 /sybase% uname -a

Linux sybdb3 4.4.120-92.70-default #1 SMP Wed Mar 14 15:59:43 UTC 2018 (52a83de) x86_64 x86_64 x86_64 GNU/Linux

sybdb3 /sybase% whoami

sybase

##Run setuphadr after editing the response file based on Sybase documentation (sample response file is attached to this blog)

sybdb3 /sybase% setuphadr /sybase/sybdb3_setup_hadr.rs

AlwaysOn Testing & Useful Command Syntax

In the section below planned and unplanned failovers as well as monitoring commands are used.

It is recommended to review the Sybase documentation and also to review these SAP Notes:

1982469 – SYB: Updating SAP ASE with saphostctrl

1959660 – SYB: Database Fault Management

2179305 – SYB: Usage of saphostctrl for SAP ASE and SAP Replication Server

## Check if Fault Manager is running on the SAP PAS with this command

ps -ef | grep sybdbfm

The executable is in /usr/sap/<SID>/ASCS00/work

sybdbfm is copied to sybdbfm.sap<SID>_ASCS00

cd /usr/sap/<SID>/ASCS00/work

./sybdbfm.sapASE_ASCS00 status

./sybdbfm.sapASE_ASCS00 hibernate

./sybdbfm.sapASE_ASCS00 resume

Login as syb<sid> – in this case sybase

## Login to the RMA

isql -UDR_admin -P<<password>> -SASE_RMA_Site1 -I DM/interfaces -X -w999

## to see all the components that are running

sap_version all

go

## to see the status of a replication path

sap_status path

go

## to see the status of resources

sap_status resource

go

## Login to ASE

The syntax “-I DM/interfaces” does a lookup in the Sybase AlwaysOn configuration database to find the host and TCP port

isql -UDR_admin -P<<password>> -SASE_Site1 -I DM/interfaces -X -w999
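For orientation, an entry in the DM/interfaces file typically looks like the illustrative example below (the host name and port 4901 follow the setup in this blog; the actual entries are generated by the installation):

ASE_Site1
    master tcp ether sybdb1 4901
    query tcp ether sybdb1 4901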

## to clear down the transaction log run this command

dump tran ASE with truncate_only

go

## to show freespace in DB

sp_helpdb ASE

go

## Transaction log backups are needed on all replicas, otherwise the transaction log will become full
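For example, a transaction log backup to a file can be taken with a dump transaction command (an illustrative sketch; the dump file path is a placeholder, and in practice these backups are usually scheduled, e.g. via the DBA Planning Calendar in transaction DB13):

dump transaction ASE to '/log/ASE_tran.dmp'

go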

## to start/stop/get info on Sybase DB (and all required components for Always on like RMA) – run this on the DB host

sudo /usr/sap/hostctrl/exe/saphostctrl -user sapadm -function StartDatabase -dbname ASE -dbtype syb

sudo /usr/sap/hostctrl/exe/saphostctrl -user sapadm -function StartDatabase -dbname ASE_REP -dbtype syb

## to get Sybase DB status

sudo /usr/sap/hostctrl/exe/saphostctrl -user sapadm -function GetDatabaseStatus -dbname ASE -dbtype syb

## to get Sybase DB replication status

sudo /usr/sap/hostctrl/exe/saphostctrl -user sapadm -function LiveDatabaseUpdate -dbname ASE -dbtype syb -updatemethod Check -updateoption TASK=REPLICATION_STATUS

## to send a trace ticket logon to RMA and execute these commands

sap_send_trace Site1

go

sap_status active

go

## during HADR testing, leave tail running on the file dev_sybdbfm in /usr/sap/<SID>/ASCS00/work

tail -100f dev_sybdbfm

## to force a shutdown of the DB engine run the command below. Always-on will try to stop a normal shutdown of the DB

shutdown with wait nowait_hadr

go

## to do a planned failover from Primary to Companion DB the normal sequence is:

1. Failover from Primary to Companion

2. Drain logs from Primary to the DR site

3. Reverse Replication Route to start synchronization from the new Primary to the Companion and DR

There is a new command that does all these steps automatically:

/usr/sap/hostctrl/exe/saphostctrl -user sapadm -function LiveDatabaseUpdate -dbname ASE -dbtype syb -updatemethod Execute -updateoption TASK=FAILOVER -updateoption FAILOVER_FORCE=1 -updateoption FAILOVER_TIME=300

## it is recommended to use this command. If there are errors check in the path /usr/sap/hostctrl/work for log files

##other useful commands:

## to disable/enable replication from a Site to all routes

sap_disable_replication Site1, <DB>

sap_enable_replication Site1,Site2,<DB>

## command to manually failover

sap_failover <primary>,<standby>,<timeout>, [force], [unplanned]
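Following the syntax above, a planned failover from Site1 to Site2 with a 300 second timeout would look like this (illustrative values):

sap_failover Site1,Site2,300

go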

## Materialize is a “dump and load” to reinitialize a Sybase AlwaysOn replica.

sap_materialize auto,Site1,Site2,master

sap_materialize auto,Site1,Site2,<SID>

Sybase How To & Links

Customers familiar with SQL Server AlwaysOn should note that although it is possible to take a DB or log backup from a replica, these backups are not compatible between Primary <-> Replica databases. Unlike SQL Server, it is also a requirement to run transaction log backups on the replica nodes.

SAP Notes:

2134316 – Can SAP ASE run in a cloud environment? – SAP ASE

1554717 – SYB: Planning information for SAP on ASE

1706801 – SYB: SAP ASE released for virtual systems

1590719 – SYB: Updates for SAP Adaptive Server Enterprise (SAP ASE)

1959660 – SYB: Database Fault Management

2450148 – ‘Warning: stopService() only supported on windows’ message happened during HADR configuration -SAP ASE

2489781 – SAP ASE 16.0 SP03 Supported Operating Systems and Versions

DBA Cockpit doesn’t work by default after installation.

Setup DBA Cockpit as per:
2293673 – SYB: DBA Cockpit Correction Collection SAP Basis 7.50

1605680 – SYB: Troubleshoot the setup of the DBA Cockpit on Sybase ASE

1245200 – DBA: ICF Service Activation for WebDynpro DBA Cockpit

For SUSE Linux Release 12 with SP3 release note : https://www. suse. com/releasenotes/x86_64/SUSE-SLES/12-SP3/

SAP Software Downloads https://support.sap.com/en/my-support/software-downloads.html

SWPM Download https://support.sap.com/sltoolset

Sybase Release Matrix https://wiki.scn.sap.com/wiki/display/SYBASE/Targeted+ASE+16.0+Release+Schedule+and+CR+list+Information

Sybase Official Documentation https://help.sap.com/viewer/product/SAP_ASE/16.0.3.3/en-US

Special thanks to Wajeeh Samdani from SAP Sybase Development in Walldorf

Special thanks to Cognizant SAP Cloud Team for their input and review of this blog

Content from third-party websites, SAP and other sources is reproduced in accordance with Fair Use: criticism, comment, news reporting, teaching, scholarship, and research.

Installation and Setup of Oracle 12.2 ASM on Oracle Linux on Azure – Installation Videos & Backup Strategies


This short blog contains videos showing the end to end installation process to setup Oracle 12.2 ASM on Azure.

The videos and documentation in this blog are being used to get formal certification for Oracle 12.2 ASM on Azure. This process is still ongoing.

It is important to follow the procedure documented in this blog and not the existing generic Oracle ASM on Azure documentation, which will cause many errors during SWPM installation.

Oracle Linux Preparation, Grid Installation, Oracle 12.2 Installation and SAP Installation

These three videos show the process flow for installing SAP NetWeaver 7.5 on Oracle 12.2 ASM.

The installation sequence is:

  1. Oracle Linux 7.4 Preparation Steps
  2. Install the ASCS
  3. Install the DB Instance
    1. When prompted to install Oracle DBMS with RUNINSTALLER, first install the Grid Infrastructure https://docs.oracle.com/en/database/oracle/oracle-database/12.2/ladbi/installing-oracle-grid-infrastructure-for-a-standalone-server-with-a-new-database-installation.html#GUID-0B1CEE8C-C893-46AA-8A6A-7B5FAAEC72B3
    2. After installing Grid Infrastructure, configure ASM disk groups as required (see the disk group sketch after this list)
    3. Start RUNINSTALLER and install DBMS
    4. Continue SAPInst
  4. Install PAS
  5. Patch Grid Infrastructure and Oracle DB to latest released Oracle patch
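
As a sketch of the disk group step above: disk groups can be created with SQL against the ASM instance. The EXTERNAL REDUNDANCY choice and the device paths below are illustrative and must match your environment; repeat for the ARCH and REDO disk groups.

CREATE DISKGROUP DATA EXTERNAL REDUNDANCY
DISK '/dev/oracleasm/disks/DATA1',
'/dev/oracleasm/disks/DATA2',
'/dev/oracleasm/disks/DATA3';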

The videos below illustrate the process

NOTE: Oracle Client 12.2 is not released yet, therefore the Oracle Client 12.1 should be used and configured. Be sure to specify DB ENGINE = 12.2 and DB CLIENT = 12.1 during setup. Do not attempt to follow Oracle ASM 11.x or ASM 12.1 documentation as there are large differences with ASM 12.2

1.ASCS-OracleASM-Install

2.DB-OracleASM-Install

3.APP-OracleASM-Install

Patching Oracle 12.2 Grid Infrastructure and Oracle 12.2 DB Components

The process for patching the Grid Infrastructure (GI) and DBMS components is illustrated below.

The latest Oracle patches supported for SAP applications are usually available here: http://service.sap.com/oracle

509314 – Downloading Oracle Patches from the SAP Support Portal https://launchpad.support.sap.com/#/notes/509314

The sequence of patching is: (1) patch the Grid Infrastructure; (2) after the Grid Infrastructure, patch the DBMS.

Patching-Oracle-Grid-and-DBMS-12.2

Backup Solutions for Oracle 12.2 ASM on Azure

Three different backup solutions have been tested with Oracle ASM on Azure.

  1. Native Oracle RMAN Backup
  2. SAP BRTools (which is configured to call RMAN)
  3. Azure CommVault Virtual Appliance

During testing, backup times for RMAN and BRTools were around 11 minutes; CommVault was around 30 minutes.
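
As an illustrative sketch of a native RMAN full backup (the destination path and format strings are assumptions; BRTools generates comparable RMAN calls):

RUN {
ALLOCATE CHANNEL c1 DEVICE TYPE DISK;
BACKUP AS COMPRESSED BACKUPSET DATABASE FORMAT '/backup/%d_%U.bck';
BACKUP CURRENT CONTROLFILE FORMAT '/backup/%d_ctl_%U.bck';
}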

Oracle-ASM-Backup-Scenarios

CommVault-Oracle-ASM-Backup

Links & SAP Notes

Details of the VM setup & configuration

Machine Name | Internal IP | Purpose | Data Disks | VM Size | OS
sapappl4 | 10.0.0.13 | App + ASCS | - | Standard E8s v3 (8 vcpus, 64 GB memory) | Oracle-Linux:7.4:7.4.20170828
oradb4 | 10.0.0.10 | Oracle DB | 10 * P20 (512 GB) | Standard E16s v3 (16 vcpus, 128 GB memory) | Oracle-Linux:7.4:7.4.20170828
SAPORAJmp | - | Jump VM / Downloads | - | Standard D2s v3 (2 vcpus, 8 GB memory) | Win 2016

Note: 10 * P20 Premium Disks are allocated as follows: 1 x P20 for /oracle, 3 x P20 for DATA, 3 x P20 for ARCH, 3 x P20 for REDO

It is recommended to use MobaXterm for the GI and Oracle DB installation https://mobaxterm.mobatek.net/

2507228 – Database: Patches for 12.2.0.1 https://launchpad.support.sap.com/#/notes/0002507228

2470718 – Oracle Database Parameter (12.2) https://launchpad.support.sap.com/#/notes/0002470718

2558521 – Grid Infrastructure: Patches for 12.2.0.1 https://launchpad.support.sap.com/#/notes/0002558521

2477472 – Oracle Database Upgrade with Grid Infrastructure (12.2) https://launchpad.support.sap.com/#/notes/0002477472

105047 – Support for Oracle functions in the SAP environment https://launchpad.support.sap.com/#/notes/0000105047

2039619 – SAP Applications on Microsoft Azure using the Oracle Database: Supported Products and Versions https://launchpad.support.sap.com/#/notes/0002039619

1915301 – Database Software 12c Installation on Unix https://launchpad.support.sap.com/#/notes/0001915301

1915317 – Migrating Software Owner to ‘oracle’ https://launchpad.support.sap.com/#/notes/0001915317

1905855 – Oracle database doesn't start in ASM https://launchpad.support.sap.com/#/notes/0001905855

1853538 – Oracle RAC controlfiles on ASM with multiple failure groups https://launchpad.support.sap.com/#/notes/0001853538

1738053 – SAPinst for Oracle ASM installation https://launchpad.support.sap.com/#/notes/0001738053

1550133 – Using Oracle Automatic Storage Management (ASM) with SAP NetWeaver based Products https://launchpad.support.sap.com/#/notes/1550133

2087004 – BR*Tools support for Oracle 12c https://launchpad.support.sap.com/#/notes/0002087004

2007980 – SAP Installation with Oracle Single Instance on Oracle Exadata and Oracle Database Appliance https://launchpad.support.sap.com/#/notes/0002007980

819829 – Oracle Instant Client Installation and Configuration on Unix or Linux https://launchpad.support.sap.com/#/notes/0000819829

https://docs.oracle.com/en/database/oracle/oracle-database/12.2/index.html

https://www.sap.com/community/topic/oracle.html

Recommended yum packages:

sudo yum update

sudo yum install libaio.x86_64 -y

sudo yum install uuid* -y

sudo yum install nfs-utils -y


sudo yum install libstdc++-devel.x86_64 -y

sudo yum install xorg-x11-xauth.x86_64 -y

sudo yum install libaio-devel.x86_64 -y

sudo yum install sysstat.x86_64 -y

sudo yum install smartmontools.x86_64 -y

sudo yum install tcsh.x86_64 -y

sudo yum install xorg-x11-utils.x86_64 -y

sudo yum install ksh.x86_64 -y

sudo yum install glibc-devel.x86_64 -y

sudo yum install compat-libcap1.x86_64 -y

sudo yum install xorg-x11-apps.x86_64 -y

Special Credit & Thanks to Ravi Alwani from Azure CAT GSI Team for creating these videos and lab testing.

 

SAP on SQL Server on Azure – How to Bypass Proxy Server During Backup to URL


This short blog discusses how to avoid overloading an on-premises or Azure proxy server with Backup to URL. Creating a typical .Net configuration file to disable the proxy server will allow BackupToURL.exe to communicate via https directly to Azure blob storage.

1. SQL Server Backup to URL

Modern releases of SQL Server support backups to Azure Blob storage. This method is convenient and popular for SAP on SQL customers running on Azure for full database backups and/or transaction log backups.

Customers running SQL Server Backup to URL are strongly recommended to update to the latest support pack and cumulative update, especially if the databases are running Transparent Database Encryption.

https://blogs.msdn.microsoft.com/saponsqlserver/2017/04/04/more-questions-from-customers-about-sql-server-transparent-data-encryption-tde-azure-key-vault/

SAP always supports the latest Service Pack and Cumulative Update for SQL Server https://launchpad.support.sap.com/#/notes/62988
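
For reference, a minimal Backup to URL sketch striping a full backup across multiple block blobs; the storage account, container, SAS secret and database name are placeholders, not values from this blog:

CREATE CREDENTIAL [https://<account>.blob.core.windows.net/<container>]
WITH IDENTITY = 'SHARED ACCESS SIGNATURE',
SECRET = '<SAS token>';

BACKUP DATABASE [<SID>]
TO URL = 'https://<account>.blob.core.windows.net/<container>/<SID>_full_1.bak',
URL = 'https://<account>.blob.core.windows.net/<container>/<SID>_full_2.bak'
WITH COMPRESSION, MAXTRANSFERSIZE = 4194304, FORMAT;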

2. Proxy Server Configuration & Backup to URL

The Backup to URL feature is implemented as a separate executable called BackupToURL.exe. This executable will call a standard Windows API when sending HTTPS traffic. By default this API will read the Windows Proxy Server configuration from Control Panel -> Internet Options -> Connections -> LAN Settings

Customers running SAP on SQL Server on Azure generally have one of two possible scenarios for the proxy server:

1. Azure is leveraging the central corporate proxy server that is kept in an existing on-premises data center.

2. A separate proxy server has been setup in Azure for Azure VMs/services to leverage.

In either case the proxy is unnecessary and will slow down backup performance considerably. If the proxy server is on-premises, the https call to the proxy server involves a two-way transit of the ExpressRoute link, which adds to the data costs for the link.

Modern versions of SQL Server support backup to many URL targets simultaneously and the traffic volume can be considerable. Customers have noticed that the proxy server and/or ExpressRoute link can become saturated.

It is generally recommended to disable the proxy server for BackupToURL.exe only and allow SQL Server Backup to URL to communicate directly with the target storage account. There are several ways to do this, but the recommended procedure is documented below

3. How to Bypass the Proxy Server for SQL Server Backup to URL

To disable the BackupToURL.exe from using the default proxy server create a file in the following path:

C:\Program Files\Microsoft SQL Server\MSSQL13.<InstanceName>\MSSQL\Binn

The actual requirement is that the file be in the same directory as BackupToURL.exe for a particular version of SQL Server and Instance Name

The filename must be:

BackuptoURL.exe.config

The file contents should be:

<?xml version="1.0"?>
<configuration>
  <system.net>
    <defaultProxy enabled="false" useDefaultCredentials="true">
      <proxy usesystemdefault="true" />
    </defaultProxy>
  </system.net>
</configuration>

Additional information can be found here: https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url-best-practices-and-troubleshooting?view=sql-server-2017

Depending on the exact customer configuration, it is possible that the SQL Server executable does require a valid proxy server for certain activities, such as reading/writing to the Azure Key Vault for TDE. Care must be taken when completely disabling the proxy server via Control Panel.

Additional options for controlling routing include configuring a user-defined route (UDR). Another option is to create a private endpoint for the blob storage account on the VNet of the SQL VM; since the IP address is then local to the VM, the proxy server will not be used.

Additional Links & Information

See point #5 for Backup to URL tuning https://blogs.msdn.microsoft.com/saponsqlserver/2017/04/04/sap-on-sql-general-update-for-customers-partners-march-2017/

https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-backup-to-url?view=sql-server-2017

https://docs.microsoft.com/en-us/sql/t-sql/statements/backup-transact-sql?view=sql-server-2017

https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/deleting-backup-blob-files-with-active-leases?view=sql-server-2017

Managed Backups to Azure Blob Storage https://docs.microsoft.com/en-us/sql/relational-databases/backup-restore/sql-server-managed-backup-to-microsoft-azure?view=sql-server-2017

SAP Support for SQL Server 2017


We are happy to share that SAP released support for SQL Server 2017 for some SAP NetWeaver based systems. In this blog we will talk about how you can update your existing installations on SQL Server to SQL Server 2017, what you need to do when installing a new SAP NetWeaver based system on SQL Server 2017 and what to consider when upgrading an SAP NetWeaver based system which is already running on SQL Server 2017. We will also list all supported products and the required Support Package Stacks.

For the first time, you do not need to apply new Support Package Stacks if you are already running on a supported SPS for SQL Server 2016. The minimum SPS are the same. You only need a new SAP Kernel and ODBC driver.

SQL Server RDBMS DVD

Important: Please read SAP Note 2139358 (Login required) and SAP Note 2534720 (Login required) on more information about how and where to download the SQL Server 2017 RDBMS DVD.

Installation of SAP NetWeaver

The following SAP NetWeaver based products are supported on SQL Server 2017:

  • SAP products based on SAP NetWeaver Java for releases SAP NetWeaver 7.1 or higher
  • SAP products based on SAP NetWeaver ABAP for releases SAP NetWeaver 7.0 and higher
  • SAP NetWeaver PI for releases SAP NetWeaver 7.1 and higher
  • SAP Solution Manager 7.2

The following SAP NetWeaver based products are not supported on SQL Server 2017:

  • SAP Solution Manager 7.1 and earlier releases of SAP Solution Manager
  • SAP CRM 5.0 and SAP CRM 6.0 (also known as SAP CRM 2007)
  • SAP SCM 5.0 and SAP SCM 5.1 (also known as SAP SCM 2007)
  • SAP SRM 5.0
  • SAP NetWeaver Developer Workplace
  • SAP NetWeaver Java 7.0x

Before you start with the installation, please read the following SAP Notes:

  • Release planning Note – SAP Note 2492596 (Login required)
  • Setting up Note – SAP Note 2484674 (Login required)
  • Required SQL Server patches – SAP Note 62988 (Login required)
  • SWPM Note and required Kernel DVD – SAP Note 1680045 (Login required)
  • Central Note for SL Toolset – SAP Note 1563579 (Login required)

Requirements

  • Windows Server 2012 64-bit or higher
  • SQL Server 2017 Enterprise Edition

Preparation

SAP Software Provisioning Manager

See SAP Note 1680045 (Login required) on where to download the SWPM. Please make sure to always download the latest SWPM and the SWPM that matches the SAP NetWeaver version that you want to install (e.g. for SAP NetWeaver 7.0 based products you need to download 70SWPM).
You need to download at least SAP Software Provisioning Manager 1.0 SP23.

Kernel DVD

For installations with SWPM 7.0x (70SWPM), see SAP Note 1680045 (Login required) on where to download the latest Kernel DVD.
Use the archive-based installation for installations with SWPM.

For SAP products based on SAP NetWeaver 7.4 and higher, use the 7.49 DCK Kernel.
For SAP products based on EHP1 for SAP NetWeaver 7.3 and lower, use the 721_EXT Kernel DVD (only the EXT Kernel is supported).

SAP recommends using the latest SP stack kernel (SAPEXE.SAR and SAPEXEDB.SAR), available in the Support Packages & Patches section of the SAP Support Portal https://launchpad.support.sap.com/#/softwarecenter.

For existing installations of NetWeaver 7.40, 7.50 and AS ABAP 7.51 this is SAP Kernel 749 PL 500. For details, see release note 2626990.
For new installations of NetWeaver 7.40, 7.50 and AS ABAP 7.51 this is SAP Kernel 753 PL 100. For details, see DCK note 2556153 and release note 2608318.
For AS ABAP 7.52 this is SAP Kernel 753 PL 100. For details, see release note 2608318.

Java DVD

See SAP Note 1680045 (Login required) on where to download the Java DVD for the product you want to install.

ODBC driver

If you want to install a distributed SAP system (database and SAP application server running on different hosts), make sure that the latest ODBC driver for Microsoft SQL Server is installed on the host running your SAP application server. See SAP Note 1902794 (Login required)

The ODBC driver can also be downloaded from the Microsoft Download Center

Installation

Install your SAP system using

  • The SAP Software Provisioning Manager SAR archive (SWPM.SAR or 70SWPM.SAR) 1.0 SP23 or higher
  • The SAP Software Provisioning Manager Kernel DVD (for installations with 70SWPM.SAR)
  • Kernel archives of SAP Kernel 7.21 EXT, 7.22 EXT, 7.49 or 7.53
  • The Java DVD as described in SAP Note 1680045 (Login required)
  • The DVDs that were originally shipped with the SAP product (Export DVD, language DVDs…)

To install the SAP NetWeaver based system, follow these steps:

  1. After the installation, upgrade your kernel to the latest 7.21 EXT/7.22 EXT Kernel or 7.49 Kernel. The 7.22 EXT Kernel is recommended for SAP systems based on SAP NetWeaver 7.31 or lower.
  2. ABAP/ABAP+JAVA: Connect your new SAP System to your SAP Solution Manager (only required if you do not install SAP Solution Manager)
  3. ABAP/ABAP+JAVA: Create a maintenance stack to implement at least (only required if you do not install SAP Solution Manager):
    • SAP Business Suite 2005
      • SAP ERP 6.0 SPS 28 or higher
    • SAP Business Suite 7 Support Release 1
      • SAP CRM 7.0 SPS 18 or higher
      • EHP4 for SAP ERP 6.0 SPS 18 or higher
      • SAP SCM 7.0 SPS 18 or higher
      • SAP SRM 7.0 SPS 19 or higher
    • SAP Business Suite 7i 2010
      • EHP1 for SAP CRM 7.0 SPS 15 or higher
      • EHP5 for SAP ERP 6.0 SPS 15 or higher
      • EHP1 for SAP SCM 7.0 SPS 15 or higher
      • EHP1 for SAP SRM 7.0 SPS 15 or higher
    • SAP Business Suite 7i 2011
      • EHP2 for SAP CRM 7.0 SPS 16 or higher
      • EHP6 for SAP ERP 6.0 SPS 16 or higher
      • EHP2 for SAP SCM 7.0 SPS 16 or higher
      • EHP2 for SAP SRM 7.0 SPS 16 or higher
    • SAP Business Suite 7i 2013
      • EHP3 for SAP CRM 7.0 SPS 10 or higher
      • EHP7 for SAP ERP 6.0 SPS 10 or higher
      • EHP3 for SAP SCM 7.0 SPS 10 or higher
      • EHP3 for SAP SRM 7.0 SPS 10 or higher
    • SAP Business Suite 7i 2016
      • EHP4 for SAP CRM 7.0 SPS 01 or higher
      • EHP8 for SAP ERP 6.0 SPS 01 or higher
      • EHP4 for SAP SCM 7.0 SPS 01 or higher
      • EHP4 for SAP SRM 7.0 SPS 01 or higher
    • SAP NetWeaver 7.1 SPS 20 or higher
    • SAP NetWeaver 7.1 including EHP1 SPS 15 or higher
    • SAP NetWeaver 7.3 SPS 14 or higher
    • SAP NetWeaver 7.3 including EHP1 SPS 17 or higher
    • SAP NetWeaver 7.4 SPS 12 or higher
    • SAP NetWeaver 7.5 SPS 01 or higher
    • SAP NetWeaver 7.51 and higher do not need additional support packages
  4. ABAP/ABAP+JAVA: Use the Software Update Manager (SUM) 1.0 SP17 or higher to implement the maintenance stack (only required if you do not install SAP Solution Manager)
    DO NOT use SPAM to implement the support packages
    DO NOT update SPAM manually but let the SUM update the SPAM as part of the maintenance stack implementation
  5. SAP Solution Manager 7.2 only: Use the Software Update Manager (SUM) 1.0 SP17 to install SAP Solution Manager 7.2 SR1 which already contains the required SPS 01

Post Steps

Configure your SQL Server as described in SAP Note 2484657 (Login required)

System Copy of SAP NetWeaver

You can also copy your SAP System that is running on an older SQL Server release to a machine running a new SQL Server with the following steps:

  • Make sure that your source system is at least on the support package stack as described in the installation section of this blog.
  • Copy your SAP system using
    • The SAP Software Provisioning Manager 1.0 SP23 or higher SAR archive (SWPM.SAR or 70SWPM.SAR)
    • The SAP Software Provisioning Manager Kernel DVD for installations with 70SWPM, or the latest archives of the 7.21 EXT, 7.22 EXT, 7.49 or 7.53 kernel
    • The Java DVD as described in SAP Note 1680045 (Login required)
    • The DVDs that were originally shipped with the SAP product (Export DVD, language DVDs…)
  • After the installation, upgrade your kernel to the latest 7.21 EXT, 7.22 EXT, 7.49 or 7.53 Kernel. The 7.22 EXT Kernel is recommended for SAP systems based on SAP NetWeaver 7.31 or lower.

The steps for a System Copy where the source SAP system is already running on SQL 2017 are the same.

Update or Upgrade of SAP NetWeaver

Only updates or upgrades that are supported by the SAP Software Update Manager are supported for SQL Server 2017. Please read SAP Note 1563579 (Login required) to find the SAP Note for the latest SAP Software Update Manager that describes the supported update and upgrade scenarios.

As an example, the following upgrades are not supported by the SAP Software Update Manager and are therefore not supported for SQL Server 2017:

Upgrade of SAP CRM 5.0 to SAP CRM 7.0 or EHP1 for SAP CRM 7.0

Upgrade of SAP SCM 5.0 to SAP SCM 7.0 or EHP1 for SAP SCM 7.0

Upgrade of SAP SRM 5.0 to SAP SRM 7.0 or EHP1 for SAP SRM 7.0

Use at least SAP Software Update Manager 1.0 SP22 or SAP Software Update Manager 2.0 SP02.


Orica’s S/4HANA Foundational Architecture Design on Azure


This blog is a customer success story detailing how Cognizant and Orica have successfully deployed and gone live with a global S/4HANA transformation project on Azure. This blog contains many details and analysis of key decision points taken by Cognizant and Orica over the last two years leading to their successful go live in August 2018.

The blog below was written by Sivakumar Varadananjayan. Siva is the Global Head of Cognizant's SAP Cloud Practice and has been personally involved in Orica's 4S Program from day one, first as Presales Head and now as Chief Architect for Orica's S/4HANA on Azure adoption.

Over the last 2 years, Cognizant has partnered and engaged as a trusted technology advisor and managed cloud platform provider to build Highly Available, Scalable, Disaster Proof IT platforms for SAP S/4HANA and other SAP applications in Microsoft Azure. Our customer Orica is the world’s largest provider of commercial explosives and innovative blasting systems to the mining, quarrying, oil and gas and construction markets, a leading supplier of sodium cyanide for gold extraction, and a specialist provider of ground support services in mining and tunneling. As a part of this program, Cognizant has built Orica’s new SAP S/4HANA Platform on Microsoft Azure and provides a Managed Public Cloud Platform as a Service (PaaS) offering.

Cognizant started the actual cloud foundation work during December 2016. In this blog article, we will cover some of the best practices that Cognizant adopted and share key learnings which may be essential for any customer planning to deploy their SAP workloads on Azure.

The following topics will be covered:

  • Target Infrastructure Architecture Design
    • Choosing the right Azure Region
    • Write Accelerator
    • Accelerated Networking
  • SAP Application Architecture Design
    • Sizing Your SAP Landscape for the Dynamic Cloud
    • Increasing/decreasing capacity
  • HA / DR Design (SUSE HA Cluster)
    • SUSE cluster
    • Azure Site Recovery (ASR)
  • Security on Cloud
    • Network Security Groups
    • Encryption – Disk, Storage account, HANA Data Volume, Backup
    • Role-Based Access Control
    • Locking resources to prevent deletion
  • Operations & Management
    • Reporting
    • Costing
    • Creation of clone environments
    • Backup & restore

Target Infrastructure Architecture Design

The design of a fail-proof infrastructure architecture involves visualizing the end-state with great detail. Capturing key business requirements and establishing a set of design principles will clarify objectives and help in proper prioritization while making design choices. Such design principles include but are not limited to choosing a preferred Azure Region for hosting the SAP Applications, as well as determining preferences of Operating System, database, end user access methodology, application integration strategy, high availability, disaster recovery strategy, definition of system criticality and business impacts of disruption, definition of environments, etc. During the Design phase, Cognizant involved Microsoft and SUSE along with other key program stakeholders to finalize the target architecture based on the customer’s business & security requirements. As part of the infrastructure design, critical foundational aspects such as Azure Region, ExpressRoute connectivity with Orica’s MPLS WAN, and integration of DNS and Active Directory domain controllers were finalized.

At the time of discussing the infrastructure preparation, various topics including VNet design (subnet IP ranges), host naming convention, storage requirements, and initial VM types based on compute requirements were derived. In the case of Orica's 4S implementation, Cognizant implemented a three-tier subnet architecture – Web Tier, Application Tier and Database Tier. The three-tier subnet design was applied for each of Sandpit, Development, Project Test, Quality and Production, so that it provides the flexibility for Orica to deploy fine-grained NSGs at subnet level as per security requirements. Having a clearly defined tier-based subnet architecture also helps avoid complex NSGs being defined for individual VM hosts.

The Web Tier subnet is intended to host the SAP Web Dispatcher VMs; the Application Tier is intended to host the Central Services Instance VMs, Primary Application Server VMs and any additional application server VMs, the Database Tier is intended to host the database VMs. This is supplemented by additional subnets for infrastructure and management components, such as jump servers, domain controllers, etc.

Choosing the Right Azure Region

Although Azure operates over several regions, it is essential to choose a primary region into which main workloads will be deployed. Choosing the right Azure region for hosting the SAP Application is a vital decision to be made. The following factors must be considered for choosing the Right Azure Region for Hosting: (1) Legal and regulatory requirements dictating physical residence, (2) Proximity to the company’s WAN points of presence and end users to minimize latency, (3) Availability of VMs and other Azure Services, and (4) Cost. For more information on availability of VMs, refer to the section “Sizing Your SAP Landscape for the Dynamic Cloud” under SAP Application Architecture Design.

Accelerated Networking

Accelerated Networking enables single root I/O virtualization (SR-IOV) to a VM, greatly improving its networking performance. This high-performance path bypasses the host from the data path, reducing latency, jitter, and CPU utilization, for use with the most demanding network workloads on supported VM types. Without accelerated networking, all network traffic in and out of the VM must traverse the host machine and the virtual switch.
With accelerated networking, network traffic arrives at the VM’s network interface (NIC), and is then forwarded directly to the guest VM. All network policies that the virtual switch applies are now offloaded and applied in hardware. While essential for good and predictable HANA performance, not all VM types and operating system versions support Accelerated Networking, and this must be taken into account for the infrastructure design. Also, it is important to note that Accelerated Networking helps to minimize latency for network communication within the same Azure Virtual Network (VNet). This technology has minimal impact to overall latency during network communication over multiple Azure VNets.
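
Accelerated Networking is a property of the network interface and can, for example, be enabled with the Azure CLI on a deallocated VM of a supported type (resource names below are illustrative):

az network nic update --resource-group rg-sap-prod --name hanadb1-nic --accelerated-networking true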

Storage

Azure provides several storage options including Azure Disk Storage – Standard, Premium and Managed (attached to VMs), Azure Blob Storage, etc. At the time of writing this article, Azure is the only public cloud service provider that offers a Single VM SLA of 99.9%, under the condition that the VM operates with Premium Disks attached to it. The cost value proposition of choosing Premium Disks over Standard Disks in order to obtain the Single VM SLA is significantly beneficial, and hence Cognizant recommends provisioning all VMs with Premium Disks for application and database storage. Standard Disks are appropriate to store backups of databases, and Azure Blob is used for snapshots of VMs and for transferring backups and storing them as per the retention policy. For achieving an SLA of > 99.9%, High Availability techniques can be used. Refer to the section 'High Availability and Disaster Recovery' in this article for more information.

Write Accelerator

Write Accelerator is a disk capability for M-Series Virtual Machines (VMs) on Azure running on Premium Storage with Azure Managed Disks exclusively. As the name states, the purpose of the functionality is to improve the I/O latency of writes against Azure Premium Storage. Write Accelerator is ideally suited to be enabled for disks to which database redo logs are written to meet the performance requirement of modern databases such as HANA. For production usage, it is essential that the final VM infrastructure thus setup should be verified using SAP HANA H/W Configuration Check Tool (HWCCT). These results should be validated with relevant subject matter experts to ensure the VM is capable of operating production workloads and is thus certified by SAP as well.
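
Write Accelerator is enabled per data-disk LUN; as a sketch with the Azure CLI (VM name, resource group and LUN are illustrative, and the LUN should hold the redo/log volume):

az vm update --resource-group rg-sap-prod --name hanadb1 --write-accelerator 1=true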

SAP Application Architecture Design

The SAP Application Architecture Design must be based on the guiding principles that must be adopted for building the SAP applications, systems and components. To have a well laid out SAP Application Architecture Design, you must determine the list of SAP Applications that are in scope for the implementation.

It is also essential to review the following SAP Notes that provide important information on deploying and operating SAP systems on public cloud infrastructure:

  • SAP Note 1380654 – SAP support in public cloud environments
  • SAP Note 1928533 – SAP Applications on Azure: Supported Products and Azure VM types
  • SAP Note 2316233 – SAP HANA on Microsoft Azure (Large Instances)
  • SAP Note 2235581 – SAP HANA: Supported Operating Systems
  • SAP Note 2369910 – SAP Software on Linux: General information

    Choosing the OS/DB Mix of your SAP Landscape

    Using this list, the SAP Product Availability Matrix can be leveraged to determine whether the preferred Operating System and Database is supported for each of the SAP application in scope. From an ease of maintenance and management perspective, you may want to consider not having more than two variants of databases for your SAP application database. SAP has started providing support for SAP HANA database for most of the applications and since SAP HANA supports multi-tenant database, you might as well want to have most of your SAP applications run on SAP HANA database platform. For some applications that do not support HANA database, other databases might be required in the mix. SAP’s S/4HANA application runs only on HANA database. Orica chose to run HANA for every SAP application where supported and SQL Server otherwise – as this was in line with the design rationale and simplified database maintenance, backups, HA/DR configuration, etc.

    With SAP HANA 2.0 becoming mainstream (it is also mandatory for S/4HANA 1709 and higher), fewer operating systems are supported than with SAP HANA 1.0. For example SUSE Enterprise for SAP Applications is now the only flavor of SUSE supported, while “normal” SUSE Enterprise was sufficient for HANA 1.0. This may have a licensing impact for the customers, as Azure only provides BYO Subscription images. Hence customers must supply their own operating system licenses.

    Type of Architecture

SAP offers deploying its NetWeaver Platform based applications either in a Central System Architecture (Primary Application Server and Database in same host) or in a Distributed System Architecture (Primary Application Server, Additional Application Servers and Database in separate hosts). You need to choose the type of architecture based on a thorough cost value proposition, business criticality and application availability requirements. You also need to determine the number of environments that each SAP application will require, such as Sandbox, Development, Quality, Production, Training, etc. This is predominantly determined based on the change governance that you plan to set up for the project. Systems that are business critical and have requirements for high availability, such as the Production Environment, must always be considered to be deployed in a Distributed System Architecture scenario with a High Availability Cluster. In the case of public cloud infrastructure, this is even more critical as VMs tend to fail much more frequently than traditional "expensive" on-premises kit (e.g. IBM p-Series). In the past one could afford to be lax about HA, because individual servers tended to fail only rarely. However, we're seeing a relatively higher rate of server failure in public cloud, so if uptime is important, then HA must be set up for business critical systems. For both critical and non-critical systems, parameters should be enabled to ensure the application and database start automatically in the event of an inadvertent server restart. Disaster Recovery is often recommended for most of the SAP applications that are business critical, based on Recovery Point Objective (RPO) and Recovery Time Objective (RTO).

    Cognizant designed a 4-system landscape and a distributed SAP architecture for Orica. We separated the SAP application and DB servers because, when taken in the context of HANA MDC and running everything on HANA by default, a Central System Architecture no longer makes sense. We have also named HANA database SIDs without any correlation to the tenants that each HANA database holds. This is done with the intention of future-proofing and allowing the tenants to change HANA hosts in the future if needed. In the case of Orica, we have also implemented custom scripting for automated start of SAP applications, which can further be controlled (disabled or enabled) by a centrally located parameter file. High availability is designed for the production and quality environments. Disaster recovery is designed as per Orica's Recovery Point Objective (RPO) and Recovery Time Objective (RTO) defined by business requirements.

    Sizing Your SAP Landscape for the Dynamic Cloud

    Once you have determined the type of SAP architecture, you will have a fair idea about the number of individual Virtual Machines required to deploy each of these components. From an infrastructure perspective, the next step is to size the Virtual Machines. You can leverage standard SAP methodologies such as Quick Sizer, using concurrent-user or throughput-based sizing; best practice is throughput-based sizing. This will provide the SAPS and memory requirement for the application and database components, and the memory requirement in the case of a HANA database. Tabulate the critical sizing information in a spreadsheet and refer to the standard SAP notes to determine the equivalent VM types in the Azure cloud infrastructure. Microsoft is getting SAP certification for new VMs on a regular basis, so it is always advisable to check the recent SAP notes for the latest information. For HANA databases, you will most often require VMs of the E-Series (Memory Optimized) and M-Series (Large Memory Optimized), based on the size of the database. At the time of writing this article, the maximum capacity supported with E-Series and M-Series is 432 GB and 3.8 TB respectively. The E-Series offers a better cost value proposition compared to the earlier GS-series VMs offered by Azure. At this point you need to verify that the resulting VMs are available in the Azure region that you prefer for hosting your SAP landscape. Depending upon the geography, there is a possibility that some of these VM types may not be available, and it is essential to choose the right geography and Azure region where all the required VM types are available. However, remember that public cloud offers great scalability and elasticity: you do not need an accurate peak sizing to provision your environments. You always have room to scale your SAP systems up or down based on actual usage, by monitoring utilization metrics such as CPU, memory and disk utilization. Within the same Virtual Machine series, this can be done just by powering off the VM, changing the VM size and powering on the VM; typically, the whole VM resizing procedure does not take more than a few minutes. Ensure that your system will fit into what is available in Azure at any point in time. For instance, spinning up a 1 TB M-Series VM and then finding that a 1.7 TB instance is needed instead does not cause much of a hassle, as it can easily be resized. However, if you are not sure whether your system will grow beyond 3.8 TB (the maximum capacity of the M-Series), that puts you at bigger risk, as complications will start to creep up (Azure Large Instances may be needed for rescue in such cases). Reserved Instances are also available in Azure, and can be leveraged for further cost optimization if accurate sizing of actual hardware requirements is performed before purchasing (to avoid over-committing).
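
    A scale-up of this kind takes only a few Azure CLI calls, sketched here with illustrative resource and VM names:

    az vm deallocate --resource-group rg-sap-prod --name hanadb1

    az vm resize --resource-group rg-sap-prod --name hanadb1 --size Standard_M64s

    az vm start --resource-group rg-sap-prod --name hanadb1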

High Availability and Disaster Recovery

Making business-critical systems such as SAP S/4HANA highly available with > 99.9% availability requires a well-defined High Availability architecture design. In Azure, VMs deployed in an availability set within a region offer 99.95% availability, while Azure offers an SLA of 99.99% when the compute VMs are deployed across multiple Availability Zones within a region. To achieve this, it is recommended to check the availability of Availability Zones in the region chosen for hosting the SAP applications. Note that Azure Availability Zones are still being rolled out by Microsoft and will eventually arrive in all regions over time. Also, components that are a Single Point of Failure (SPOF) in SAP must be deployed in a cluster, such as a SUSE cluster; such a cluster must reside within an availability set to attain 99.95% availability. To achieve high availability at the Azure infrastructure level, all the VMs are added to an availability set and exposed through an Azure Internal Load Balancer (ILB). These components include the (A)SCS cluster, DB cluster and NFS. It is also recommended to provision at least two application servers within an availability set (Primary Application Server and Additional Application Server) to ensure the application servers are redundant. Cognizant, Microsoft and SUSE worked together to build a collaborative solution based on a Multi-Node iSCSI server configuration. This Multi-Node iSCSI server HA configuration for SAP applications at Orica was the first of its kind to be deployed on the Azure platform.

As discussed earlier, in cases where SAP components are not prevented from failure using High Availability setup, it is recommended to provision such VMs with Premium Storage Disks attached to it to take advantage of the Single VM SLA. All VMs at Orica use Premium Disks for their application and database volumes because this is the only way they would be covered by the SLA, and we also found performance to be better and more consistent.

Details about the SUSE cluster are described below.

SUSE Cluster Setup for HA:

(A)SCS layer high availability is achieved using a SUSE HA Extension cluster. DRBD technology is not used for replication of application files such as SAP kernel files. This design is based on the recommendation from Microsoft and is supported by SAP as well. The reason for not enabling DRBD replication is the potential performance issues that can pop up when synchronous replication is configured, and that recovery cannot be guaranteed with such a configuration when ASR is enabled for Disaster Recovery replication at the application layer. NFS layer high availability is achieved using a SUSE HA Extension cluster, with DRBD technology used for data replication. It is also recommended by Microsoft to use a single NFS cluster to cater for multiple SAP systems, to reduce the complexity of the overall design.

HA testing needs to be performed thoroughly and must be simulated for many different failure situations beyond a simple clean shutdown of the VM, e.g. using the halt command to simulate a VM power-off, adding firewall rules in the NSG to simulate problems with the VM's network stack, etc.

We are excited to announce that Orica is the first customer on the Multi-SID SUSE HA cluster configuration.

More details on the technical configuration of setting up HA for SAP are described here. The Pacemaker cluster on SLES in Azure is recommended to be set up with an SBD device; the configuration details are described here. Alternatively, if you do not want to invest in one additional virtual machine, you can also use the Azure Fence Agent. The downside of the Azure Fence Agent is that a failover can take between 10 and 15 minutes if a resource stop fails or the cluster nodes cannot communicate with each other anymore.

Another important aspect of ensuring application availability during a disaster is a well-architected DR solution that can be invoked through a well-orchestrated disaster recovery plan.

Azure Site Recovery (ASR):

Azure Site Recovery assists in business continuity by keeping business apps and workloads running during outages. Site Recovery replicates workloads running on physical and virtual machines (VMs) from a primary site to a secondary location. At the time of failover, apps are started in the secondary location and accessed from there after making the relevant changes in the cluster configuration and DNS. After the primary location is running again, you can fail back to it. ASR was not tested for Orica at the time of the current Go-Live, as support for SLES 12.3 was made GA by Microsoft too close to the cut-over. However, we are currently evaluating this feature and will be using it for DR at the Go-Live of the next phase.

Security on Cloud

Most of the traditional security concepts such as security at physical, server, hypervisor, network, compute and storage layers are applicable for overall security of the cloud. These are provided by the public cloud platform inherently and are audited as well by 3rd party IT security certification providers. Security on the Cloud will help you to protect the hosted applications on the cloud by leveraging features and customization aspects that are available through the cloud provider and those security features provided within the applications hosted on cloud.

Network Security Groups

Network Security Groups (NSGs) are rules applied at the networking layer that control traffic and communication with VMs hosted in Azure. In Azure, separate NSGs can be associated with Prod, Non-Prod, Infra, Management and DMZ environments. It is important to arrive at a strategy for defining the NSG rules in such a way that it is modularized and easy to comprehend and implement. Strict procedures need to be implemented to control these rules; otherwise, you may end up with unnecessary redundant rules which will make it harder to troubleshoot any network communication related issues.

In the case of Orica, an initiative was implemented to optimize the number of NSG rules by adding multiple ports for the same source and destination ranges in the same rule. A change approval process was introduced once the NSG was associated. All the NSG rules are maintained in a custom formatted template (CSVs) which is utilized by a script for the actual configuration in Azure; we expect this would be too difficult to do manually for multiple VNets across multiple regions (e.g. primary, DR, etc.).
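
Each CSV row then maps onto a rule creation call. A sketch of a single rule that groups multiple ports for the same source and destination ranges (all names, prefixes and ports are illustrative):

az network nsg rule create --resource-group rg-sap-prod --nsg-name nsg-db-tier --name Allow-AppTier-To-HANA --priority 200 --direction Inbound --access Allow --protocol Tcp --source-address-prefixes 10.0.2.0/24 --destination-port-ranges 30013 30015 30041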

Encryption of Storage Account and Azure Disk

Azure Storage Service Encryption (SSE) is recommended to be enabled for all the Azure Storage Accounts. Through this, Azure Blobs will be encrypted in the Azure Storage. Any data that is written to the storage after enabling the SSE will be encrypted. SSE for Managed Disks is enabled by default.

Azure Disk Encryption leverages the industry standard BitLocker feature of Windows and the DM-Crypt feature of Linux to provide volume encryption for the OS and the data disks. The solution is integrated with Azure Key Vault to help you control and manage the disk-encryption keys and secrets in your key vault subscription. Encryption of the OS volume will help protect the boot volume’s data at rest in your storage. Encryption of data volumes will help protect the data volumes in your storage. Azure Storage automatically encrypts your data before persisting it to Azure Storage, and decrypts the data before retrieval.

SAP Data at Rest Encryption

Data at rest is encrypted for SAP Applications by encrypting the database. SAP HANA 2.0 and SQL Server natively support data at rest encryption and they provide the additional security that is needed in case of a data theft. In addition to that the backups of both these databases are encrypted and secured by a Pass Phrase to ensure these backups are only readable and can be leveraged by authentic users.

In the case of Orica, both Azure Storage Service Encryption and Azure Disk Encryption were enabled. In addition to this, SAP Data at Rest Encryption was enabled in SAP HANA 2.0 and TDE encryption was enabled in SQL Server database.

Role-Based Access Control (RBAC)

Azure Resource Manager provides a granular Role-Based Access Control (RBAC) model for assigning administrative privileges at the resource level (VMs, Storage, etc.). Using an RBAC model (e.g. service development team, app development team) can help in segregation and control of duties and grant users/groups only the amount of access they need to perform their jobs on selected resources. This enforces the principle of least privilege.

Resource Lock

An administrator may need to lock a subscription, resource group, or resource to prevent other users in the organization from accidentally deleting or modifying critical resources. We can set the lock level to CanNotDelete or ReadOnly; in the portal, the locks are called Delete and Read-only respectively. Unlike RBAC, locking resources prevents intentional and accidental deletion for all users, including users who have Owner access. CanNotDelete means authorized users can still read and modify a resource, but they can't delete it. ReadOnly means authorized users can read a resource, but they can't delete or update it; applying this lock is similar to restricting all authorized users to the permissions granted by the Reader role. For Orica, we have configured this for critical pieces of Azure infrastructure, to provide an additional layer of safety.
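
A delete lock on a resource group, for example, is a single CLI call (resource names are illustrative):

az lock create --name DoNotDelete --lock-type CanNotDelete --resource-group rg-sap-prod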

Operations & Management

Cognizant provides Managed Platform as a Service (mPaaS) for Orica through Microsoft Azure Cloud. Cognizant has leveraged several advantages of operating SAP systems in public cloud including scheduled automated Startup and Shutdown, automated backup management, monitoring and alerting, automated technical monitoring for optimizing the overall cost of technical operations and management. Some of the recommendations are described below.

Azure Costing and Reporting

Azure Cost Management licensed by Cloudyn, a Microsoft subsidiary, allows you to track cloud usage and expenditures for your Azure resources and other cloud providers including AWS and Google. Monitoring your usage and spending is critically important for cloud infrastructures because organizations pay for the resources they consume over time. When usage exceeds agreement thresholds, unexpected cost overages can quickly occur.

Reports help you monitor spending to analyze and track cloud usage, costs, and trends. Using Over Time reports, you can detect anomalies that differ from normal trends. More detailed, line-item level data may also be available in the EA Portal (https://ea.azure.com), which is more flexible compared to the Cloudyn reports and could be more useful.

Backup & Restore

One of the primary requirements of system availability management as part of technical operations is to protect the systems from accidental data loss due to factors such as infrastructure failure, data corruption or even complete loss of the systems in the event of a disaster. While concepts such as High Availability and Disaster Recovery help to mitigate infrastructure failures, a robust backup and restore strategy is essential for handling events such as data corruption or loss of data. Availability of backups allows us to technically restore an application back to a working state in case of system corruption, and backups present the "last line of defense" in a disaster recovery scenario. The main goal of the backup/restore procedure is to restore the system to a known-working state.

Some of the key requirements for Backup and Restore Strategy include:

  • Backup should be restorable
  • Prefer to use native database backup and restore tools
  • Backup should be secure and encrypted
  • Clearly defined retention requirements

    VM Snapshot Backups

    Azure Infrastructure offers native backup for VMs (inclusive of attached disks) using VM Snapshots. VM Snapshot backups are stored within Azure Vaults, which are part of the Azure Storage architecture and are geo-redundant by default. It is to be noted that Microsoft Azure does not support traditional data retention media such as tapes; data retention in a cloud environment is achieved using technologies such as Azure Vault and Azure Blob, which are part of the Azure Storage Account architecture. In general, all VMs provisioned in Microsoft Azure (including databases) should be included as part of the VM Snapshot backup plan, although the frequency can vary based on the criticality of the environment and of the application. Encryption should be enabled at the Azure Storage Account level so that the backups stored in the Azure Vault are also encrypted when accessed outside the Azure Subscription.

    Database Backups

    While the restorability of the file system and database software can be achieved using the VM Snapshot process described above, snapshots of VMs containing a database may not be able to restore the database to a consistent state. Hence, backup of databases is essential to guarantee restorability of databases. It is advisable to include all databases in the landscape in the full database backups; the schedules must be defined based on the business criticality and requirements of the application. The consistency of the database backup file should be checked after the database backup is taken, to ensure restorability of the backup.

    In addition to Full Database backups, it is recommended to perform transaction log backups at regular intervals. This frequency must be higher for a production environment to support point in time recovery requests and the frequency can be relatively lower for non-production environments.

    Both Full Database Backups and Transaction Log Backups must be transferred to an offline device (such as Azure Blob) and retained as per data retention requirement. It is recommended to have all database backups to be encrypted using Native Database Backup Data Encryption methodology if the database supports it. SAP HANA 2.0 supports Native DB Backup Encryption.

    Database Backup Monitoring and Restorability Tests

    Backup Monitoring is essential to ensure the backups are occurring as per frequency and schedule. This can be automated through scripts. Restorability Test of backups will assist in guaranteeing the restorability of an application in the event of a disaster or data loss or data corruption.

    Conclusion

    Cognizant SAP Cloud Practice, in collaboration with SAP, Microsoft and SUSE, leveraged and built some of the best practices for deploying an SAP landscape in Azure for Orica's 4S Program. Through this article, some of the key topics that are very relevant for architecting an SAP landscape on Azure have been presented. We hope you found this blog article useful. Feel free to add your comments.

Moving SAP on SQL Server and Windows into Azure – what SQL Server and Windows release should I use


As Azure has become a very popular, secure and scalable platform on which to deploy mission-critical workloads like SAP applications, Microsoft keeps trying to lower the bar for moving your application to Azure even further. One of the measures Microsoft took was announced in this blog:

https://azure.microsoft.com/en-us/blog/announcing-new-offers-and-capabilities-that-make-azure-the-best-place-for-all-your-apps-data-and-infrastructure/

In essence, the blog announces free extended security patches for Windows Server 2008(R2) and SQL Server 2008(R2) beyond the end of the usual 10-year support life cycle period. Though this sounds appealing at first glance, there are many reasons why you as an SAP customer should not take this extension as motivation to delay a move of your mission-critical system to more recent Windows Server and SQL Server releases.

Why do we not recommend leveraging these older Windows and SQL Server releases in Azure for SAP workload or other similar large-scale, mission-critical enterprise applications? The short answer is that we introduced a lot of features and functionality into the Windows Server 2012 family and Windows Server 2019, and into more recent SQL Server releases, that either accommodate and improve the handling of SAP workload or improve the integration into Azure as an IaaS platform. SAP NetWeaver based systems typically run your company's most critical business processes and shouldn't be left running on a 10-year-old operating system and database platform.

More recent Windows Server versions contain a number of documented changes that improve integration, scalability and reliability in Azure. Beyond the most obvious ones, a lot of improvements made it into more recent Windows kernels as a result of SAP customer support cases; severe improvements regarding scalability and reliability are part of these newer OS releases, which in many cases improve the efficiency of SAP workload in virtualized environments. It is clear that we highly recommend using Windows Server 2016 as the operating system for your SAP deployments in Azure.

On the SQL Server side, the changes are even more impressive. The first striking improvements, introduced with SQL Server 2012 and expanded and improved in more recent releases, are:

SQL Server Columnstore: SQL Server Columnstore is a groundbreaking feature that can be leveraged with SAP BW, but also with other NetWeaver based SAP software. The best SAP-centric documentation can be found in these articles: https://blogs.msdn.microsoft.com/saponsqlserver/tag/columnstore/ .

Besides columnstore for SAP BW, SQL Server 2016 introduced modifiable non-clustered columnstore indexes that can be applied to SAP ERP systems and other non-BW SAP NetWeaver applications.
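
As a sketch, such an updatable non-clustered columnstore index is created like any other index; the schema, table and column names below are hypothetical, not taken from a real SAP system:

CREATE NONCLUSTERED COLUMNSTORE INDEX [VBAP~CSI]
ON [dbo].[VBAP] (MANDT, VBELN, POSNR, MATNR, KWMENG);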

  • SQL Server AlwaysOn: a major improvement over SQL Server Database Mirroring. SQL Server AlwaysOn is not only a complete high availability and disaster recovery functionality, but also allows moving backups to the secondary nodes. Combined with Windows Server 2016 Cloud Witness and SQL Server Distributed Availability Groups, this is the ideal functionality for configuring high availability and disaster recovery in Azure IaaS deployments.
  • SQL Server Query Store: allows you to track long-term query performance. In cases of performance degradation, this functionality allows you to roll back the execution plan of a query to the best performing one (see the example after this list). More details can be read in: https://docs.microsoft.com/en-us/sql/relational-databases/performance/monitoring-performance-by-using-the-query-store?view=sql-server-2017
  • Resumable online index builds allow indexes to be built piecemeal, in chunks, with lower workload. This functionality was introduced in SQL Server 2017. More details can be found here: https://www.mssqltips.com/sqlservertip/4987/sql-server-2017-resumable-online-index-rebuilds/
  • SQL Server consistency checks (checkdb) were majorly improved in SQL Server 2016. The improvements resulted in much better checkdb throughput and better resource control.
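
As referenced in the Query Store item above, the feature is enabled per database with a single statement (the database name is a placeholder):

ALTER DATABASE [<SID>] SET QUERY_STORE = ON;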

Other very important functionality helps SQL Server leverage Azure capabilities, while other functionality, less obvious for SAP workloads, includes:

  • SQL Server 2016 introduced functionality to accelerate execution of SAP CDS views and functions related to CDS views
  • In the latest SQL Server releases, DMVs got changed and added to improve functionality of NetWeaver’s DBACockpit

Given all the new functionality and the improvements that make a big difference in running SAP workload, it is highly recommended to move to a minimum of SQL Server 2016, or to consider SQL Server 2017, when moving your SAP workload to Azure IaaS, instead of staying on nearly 10-year-old releases of Windows and SQL Server that never really got adapted to usage in Azure or to more recent SAP releases and features (like SAP CDS views).

SAPS ratings on Azure VMs – where to look and where you can get confused


Moving SAP software to Azure, you need to decide which Azure VM types you want to use. One of the major criteria for selecting a certain VM is the throughput requirement of the SAP workload. SAP decided decades ago to characterize this throughput in SAPS (SAP Application Performance Standard). To measure the SAPS a server or an Azure VM can deliver, server manufacturers or cloud service providers need to run the SAP SD Standard Application Benchmark. One hundred SAPS are defined as 2,000 fully business processed order line items per hour (https://www.sap.com/about/benchmark/measuring.html ).

As we certify Azure VMs for SAP NetWeaver and SAP HANA, one of the main pieces of information we need to deliver is the throughput, measured in SAPS, of the VM to be certified. SAP, in turn, uses that data to document it for Azure in SAP note #1928533. This note is the official source for the SAPS of SAP NetWeaver-certified VMs; it lists the number of vCPUs, the memory, and the SAPS for each Azure VM type certified to run SAP NetWeaver workload.
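As a rough illustration of how this definition translates into sizing numbers, the sketch below converts an assumed order-line-item throughput into a SAPS requirement and compares it against the SAPS value of one VM type from SAP note #1928533. The throughput figure and the 20% headroom factor are made-up example inputs.

```python
# Sketch: turning the SAPS definition into a sizing estimate.
# 100 SAPS = 2,000 fully business processed order line items per hour,
# i.e. 1 SAPS = 20 order line items per hour.
line_items_per_hour = 150_000   # assumed business requirement (example value)
headroom = 1.2                  # assumed 20% safety margin (example value)

saps_required = line_items_per_hour / 20 * headroom
print(f"Required: {saps_required:.0f} SAPS")        # -> Required: 9000 SAPS

# SAPS of a candidate VM type as listed in SAP note #1928533:
ds14v2_saps = 24180
print("DS14v2 sufficient:", ds14v2_saps >= saps_required)   # -> True
```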

So where is the confusion then?

The first area of confusion is created by the way publicly released SAP SD Standard Application Benchmarks are documented by SAP in the case of hyperscale clouds like Azure. When you follow this link, you get a list of SAP SD benchmarks that were released on Azure VMs. Note that we are not forced to release the benchmarks conducted for the certification of Azure VMs. As a result, you won’t find a released benchmark in this location for every certified Azure VM type. As you click on a single benchmark record, data like this is displayed:

[Screenshot: SAP SD benchmark record of an Azure M128s VM as displayed on the SAP benchmark webpage, showing host technical data]

And this is where the problem starts. Nowhere in this displayed data do you find the vCPU and memory configuration of the VM that was tested. Instead, you find the technical data of the host the VM ran on. In terms of the number of CPUs and cores, that data also does not reflect the real configuration of the host; it lists the theoretical capacity for the case where Intel Hyperthreading is configured on the host. In the end, this means you can’t draw any conclusions about the number of vCPUs or the volume of memory of, e.g., the M128s VM type (as shown above) from the official SAP benchmark webpage. To get that data, you need to check the Azure pricing page or SAP note #1928533. So the data displayed on the official SAP benchmark webpage just confuses anyone trying to get the SAPS of a specific Azure VM type for SAP sizing purposes. Though it displays the SAPS values of a certain VM type (which you can also get in SAP note #1928533), those values are reported alongside the host’s technical data, data that is unnecessary for sizing Azure VM infrastructure according to SAPS requirements.

The second area of confusion is created when you compare different VM types and calculate the SAPS/vCPU. Please note that we are always talking about vCPUs, or CPU threads, as the single unit that shows up in, e.g., Windows Task Manager. We avoid the term ‘core’ for VMs on purpose because it has a very fixed meaning on bare-metal. In the Intel bare-metal world, one core can represent one CPU thread, in case the server is configured without Hyperthreading, or two CPU threads, in case the server is configured with Hyperthreading enabled.

As you calculate the SAPS a single vCPU of an Azure VM can deliver as throughput, you can get somewhat surprising results when comparing different Azure VM types. E.g., let’s check the DS14v2, which in SAP note #1928533 is reported with:

  • 16 vCPUs, 112GiB memory and 24180 SAPS.
  • We look at 24180 / 16 = 1511 SAPS per vCPU as throughput

As we move to the DS16v3, which has the same number of vCPUs (though only 64GiB of memory), you calculate only:

  • 17420 SAPS / 16 = 1088 SAPS throughput per CPU thread

This is kind of surprising, since the DS16v3 is a more recent VM type and you would expect the performance or throughput per vCPU to improve over time. The reason this is not the case is that we introduced VM types that run on hosts where Intel Hyperthreading is enabled. That means one physical processor core on the host server represents two CPU threads on the host, and a single vCPU in an Azure VM is mapped to one of the two CPU threads of a hyperthreaded core on the host server. Hyperthreading on a bare-metal server improves the overall throughput, but it does not double the throughput as it doubles the number of CPU threads of the host. The throughput improvement by Hyperthreading under classical SAP workload ranges from 30-40%. As a result, one core with two hyperthreaded CPU threads on the host delivers 130-140% of the throughput the same processor core delivers without Hyperthreading. In the end, this means that a single CPU thread of a hyperthreaded core delivers between 65-70% of what a non-hyperthreaded core delivers with its single CPU thread.

That is exactly the difference you are seeing between the SAPS a single vCPU delivers on a DS14v2 compared to a DS16v3.
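The per-vCPU arithmetic above condenses to a few lines; this is just the calculation from the text, with the SAPS and vCPU figures quoted from SAP note #1928533:

```python
# SAPS per vCPU for the two VM types discussed above
# (SAPS and vCPU figures as listed in SAP note #1928533).
ds14v2 = {"vcpus": 16, "saps": 24180}   # host without Hyperthreading
ds16v3 = {"vcpus": 16, "saps": 17420}   # host with Hyperthreading enabled

per_vcpu_ds14v2 = ds14v2["saps"] / ds14v2["vcpus"]   # ~1511 SAPS per vCPU
per_vcpu_ds16v3 = ds16v3["saps"] / ds16v3["vcpus"]   # ~1089 SAPS per vCPU

# Ratio of a hyperthreaded CPU thread to a non-hyperthreaded core:
ratio = per_vcpu_ds16v3 / per_vcpu_ds14v2
print(f"{ratio:.0%}")   # ~72%, close to the 65-70% rule of thumb above
```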

So far, the following NetWeaver- and/or SAP HANA-certified Azure VM families run on host hardware where Intel Hyperthreading is enabled:

  • D(S)v3
  • E(S)v3
  • M-Series

As mentioned earlier, the detailed data displayed by SAP on their benchmark webpage gives no indication of whether Hyperthreading is enabled, as it just lists the theoretical capacity of the host. Indications of whether Hyperthreading is enabled on the Azure hosts running certain VM families are also published on the Azure pricing webpage. You can get indications like the ones shown in the screenshot below:

[Screenshot: Azure pricing webpage entry indicating hyperthreaded vCPUs for a VM series]

IMPORTANT:

Keep in mind that VM types have certain bandwidth limitations as well. In general, we can state that the smaller a VM within a VM family is, the smaller its storage and network bandwidth. In the case of large VMs, like M128s, M128ms, or ES64v3, the VM is the only VM running on a host. As a result, it benefits from the complete network and storage bandwidth the host has available. In the case of smaller VMs, the network and storage bandwidth need to be divided across multiple VMs. Especially for SAP HANA, but also for SAP NetWeaver, it is vitally important that a VM running an intensive workload does not steal CPU, memory, network, or storage bandwidth from other VMs running on the same host. As a result, when sizing a VM, you also need to take the required network and storage bandwidth into account; a combined check is sketched below. Detailed data on network and storage throughput for each VM type can be found here:
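To illustrate how SAPS, memory, and bandwidth limits jointly constrain the VM choice, here is a small sketch of such a combined check. The requirement figures and the per-VM bandwidth numbers are hypothetical placeholders; the real network and storage limits per VM type must be taken from the Azure documentation referenced above.

```python
# Sketch: checking candidate VM types against SAPS, memory, and
# bandwidth requirements. Bandwidth values below are placeholders;
# take the real per-VM limits from the Azure VM size documentation.
requirement = {"saps": 20000, "memory_gib": 96,
               "net_mbps": 4000, "storage_mbps": 500}

candidates = {
    # SAPS and memory as per SAP note #1928533; bandwidth values invented
    "DS14v2": {"saps": 24180, "memory_gib": 112,
               "net_mbps": 6000, "storage_mbps": 768},
    "DS16v3": {"saps": 17420, "memory_gib": 64,
               "net_mbps": 8000, "storage_mbps": 768},
}

for name, vm in candidates.items():
    fits = all(vm[key] >= requirement[key] for key in requirement)
    print(name, "fits" if fits else "does not fit the requirement")
# -> DS14v2 fits; DS16v3 fails on SAPS and memory
```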

We hope this clarifies the different SAPS-per-vCPU throughput calculations and measurements, and explains why the SAP benchmark webpages are not suited for sizing checks. Instead, SAP note #1928533 should be consulted.

S/4H Installation in Azure – SETUP AND CONFIG IN ONE DAY


Overview

You want to perform a setup of SAP S/4HANA in Azure. And you want to do it quickly, so that you can experience the overall process and get ready for your landscape deployment. If so, this document is for you. It enables you to perform the setup and configuration of S/4HANA in Azure.

In this setup, we used the embedded option of SAP Fiori. What is the embedded option? Don’t worry, we will cover the basics later in this document. This is a hybrid-mode installation, where the SAP application layer runs on Windows and SAP HANA runs on HANA Large Instances on the Linux operating system.

Yes! You can accomplish all of this in less than 1 day. Excited? Let’s begin…

Read full documentation here: S4H-in-Azure-Setup-and-Config-in-One-day

SAP on Azure High Availability Systems with Heterogeneous Windows and Linux Clustering and SAP HANA


SAP HANA is the well-known technology from SAP, powering different SAP applications like SAP S/4HANA, SAP BW on HANA, SAP Business Suite on HANA, etc.

High availability (HA) is nowadays a commodity and a must-have feature expected by many SAP customers running their productive SAP HANA systems in the Azure cloud.

SAP HANA runs only on Linux distributions.

Typically, in an HA scenario we cluster the SAP single points of failure (SPOFs), i.e. the DBMS and the SAP central services (SAP ASCS/SCS instance), and we have at least two redundant SAP application servers.

If you deployed an SAP system completely on the Linux OS (using SLES or Red Hat), the SAP HA architecture would include:

  • [Linux] Clustered SAP HANA with Pacemaker on Linux
  • [Linux] Clustered SAP ASCS/SCS with Pacemaker on Linux
    • [Linux] A file share for the SAP GLOBAL host
  • [Linux] At least two SAP application servers on Linux

So, what do Windows and Windows Failover Clustering have to do with SAP HA systems running on HANA?

Well, another possible HA setup with SAP HANA is to:
  • [Linux] Cluster SAP HANA with Pacemaker on Linux
  • [Windows] Use Windows Failover Clustering for the SAP ASCS/SCS instance
    • One option is to use an HA file share for the SAP GLOBAL host
    • Another option is to use shared disks
  • [Windows] Run at least two SAP application servers on Windows


In the architecture below, we use a file share construct (instead of shared disks) for the SAP ASCS/SCS instance.

The highly available SMB file share is implemented using Windows Scale-Out File Server (SOFS) and Storage Spaces Direct (S2D).

This scenario is supported by SAP, as the SAP HANA client is available and supported on the Windows OS.

The main question is: why would a customer run such a heterogeneous OS/clustering scenario?

Well, there are a few cool features of Windows clustering for the SAP ASCS/SCS instance that bring valuable benefits which are unfortunately not available in a Linux Pacemaker cluster for ASCS/SCS, such as:

  • Fault-tolerant SAPMNT file share on a Windows cluster

    In a Windows failover cluster, there is a feature called Continuous Availability (CA) which offers a fault-tolerant SAPMNT share during failover.

    CA is available on:
    – a clustered File Server for general use (in combination with a shared disk)
    – a clustered Scale-Out File Server (in combination with S2D)

    A typical benefit is, for example, for active SAP batch processes that continuously write their logs to the SAP GLOBAL host via the SAPMNT file share. Without this feature, a failover of the SAPMNT file share causes the cancelation of active SAP batch jobs (due to the loss of the file handle), and you need to restart them from the beginning, which is not a pleasant situation if you run business-critical reports that must finish on time. With the CA feature enabled, batch jobs are not canceled during an SAPMNT file share failover (whether for unplanned or planned downtime) and keep running happily until they are finished!

    More details can be found in the blog New Failover Clustering Improvements in Windows Server 2012 and Its Benefits for SAP NetWeaver High Availability and in SAP note 2287140 – Support of Failover Cluster Continuous Availability feature (CA). This feature is available in Windows Server 2012 and higher releases.

  • The SAP installer fully supports Windows failover clustering with shared disks as well as with a file share, which greatly simplifies the deployment procedure.

  • The SAP documentation fully describes high availability with Windows clustering for the SAP ASCS/SCS instance.

  • On Azure, Windows Failover Clustering supports the multi-SID option, i.e. the ability to install multiple SAP ASCS/SCS instances belonging to different SAP systems in one cluster. In this way, customers can consolidate SAP load and reduce cost in Azure.

    More information on multi-SID can be found in the official SAP on Azure guides:
    SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and file share on Azure
    SAP ASCS/SCS instance multi-SID high availability with Windows Server Failover Clustering and shared disk on Azure

  • Azure Cloud Witness

    Cloud Witness is a type of failover cluster quorum witness that uses Microsoft Azure to provide a vote on cluster quorum. For more information, check Deploy a Cloud Witness for a Failover Cluster.
    Azure Cloud Witness replaces a cluster quorum file share hosted on a standalone VM. As Azure Cloud Witness is a cloud service, the whole solution is much easier to manage and the overall TCO is reduced.


The following table gives a comparison overview:

| Feature | Windows Failover Cluster | Linux Pacemaker Cluster |
| --- | --- | --- |
| Cluster software included in OS license | Yes | Yes |
| Integrated with Enqueue Replication Service (ERS instance) (fault-tolerant SAP transaction lock) | Yes | Yes |
| Fault-tolerant SAPMNT file share (no downtime for SAP batch jobs) | Yes (Continuous Availability feature) | No (cancelation of SAP batch jobs) |
| Clustering without cluster shared disk | Yes | Yes |
| Direct support by SAP | Yes | No (partner) |
| Integrated in SAP installer | Yes (SWPM supports both file share and shared disks) | No (manual procedure) |
| Described in SAP installation guides | Yes | No (partner) |
| Regular installation & upgrade tests by SAP | Yes | No (partner) |
| Can be used for NW on HANA | Yes | Yes |
| ASCS/SCS multi-SID support in Azure (consolidation & TCO reduction) | Yes | No |
| Cloud cluster quorum service | Yes | No* |

* Although the Azure fence agent is supported, SBD fencing (which requires an extra VM) is the preferred option due to faster failover.

In addition to SAP HANA, the same HA architecture can be used with any other DBMS that runs on Linux and offers a supported DB client on the Windows OS, such as:

  • Oracle
  • SAP ASE (Sybase)
  • SAP MaxDB
  • IBM DB2