The following parameters control the Palantir Foundry Connector 2.0 for SAP Applications ("Connector"):
The default parameter values listed below are for a fresh install of the latest connector version.
Param Id | Param Name | Possible Values | Default Value | Description |
---|---|---|---|---|
BALDAT | SEPARATOR_COLUMN | any character | \t | This parameter will be used to separate columns when concatenating system logs during decompression from the BALDAT table. |
BALDAT | SEPARATOR_NEWLINE | any character | \n | This parameter will be used to separate new lines when concatenating system logs during decompression from the BALDAT table. |
BEX | CELL_TO_STRING_ | CELL_TO_STRING_W / CELL_TO_STRING_M / CELL_TO_STRING_P / CELL_TO_STRING_Q / CELL_TO_STRING_F / CELL_TO_STRING_D / CELL_TO_STRING_T | N/A | If the data type of the InfoObject needs to be changed to a string, it is defined here. A sample usage is CELL_TO_STRING_D, which converts date fields into a character data type. The data types are: W Amount, M Quantity, P Price, Q Quota, F Number, D Date, and T Time. |
BEX | ENGINE | Alphanumeric | V2 | A new BEx Query Engine has been introduced, which brings performance improvements and additional support for query elements such as display attributes. It is not enabled by default but can be turned on by setting this parameter to V2. |
BEX | PAGING | TRUE/FALSE | FALSE | Paging for BEx queries is supported via filters. The Connector automatically generates separate filters for each page, so large BEx queries can be run without splitting the sync manually. Filter generation is based on InfoObjects in the rows of the BEx query. If the number of posted values for an InfoObject does not exceed the threshold value, the InfoObject is used in filter generation; otherwise, it is discarded. The BEx query is then run for each filter separately to extract all BEx query data. By default, paging functionality is not enabled. |
BEX | PAGING_MEMBER_LIMIT | Numeric | 100 | The Connector uses this threshold during BEx paging to prevent unnecessarily fine-grained dimensions from being used as filter candidates. If the number of posted values for an InfoObject exceeds PAGING_MEMBER_LIMIT, the InfoObject is considered too fine-grained and is discarded for filter generation. The parameter can be adjusted in the Connector's parameters. |
BEX | RANGESIZE | Numeric | 1000 | While generating filters, if any InfoObject has many values and member limitation is not applied, this parameter will control the filter list for each page. |
BEX | SHOW_DISPLAY_ATTRIBUTES | TRUE / FALSE | FALSE | To enable display attributes, the BEx Query Engine will need to be set to V2. Display attributes can be enabled system-wide (by maintaining this parameter) or at the individual sync level (in Foundry). |
BEX | TECHNICAL_NAMES | TRUE/FALSE | FALSE | If this parameter is enabled, BEx column names will be retrieved using their technical names instead of human-readable texts. |
BEX | TEXT | TRUE / FALSE / BEX | BEX | If this parameter is set to TRUE, the Key and Text of the characteristic / key figure will be concatenated as the column name. If it is set to FALSE, only the Key of the characteristic / key figure will be used as the column name. If it is set to BEX, the Query parameter will be used to define column names. |
CLEANUP | CDPOS_WINDOW_OFFSET_DAY | Integer values | 4 | This parameter specifies the retention period for CDPOS Window records in the table /PALANTIR/PAG_16. Records older than the specified duration will be deleted. |
CLEANUP | INC_DEL_OLDER_IN_DAYS | Integer values | 45 | This parameter specifies the retention period for records in the table /palantir/inc_04. Records older than the specified duration will be deleted. |
CLEANUP | LOG_DEL_OLDER_IN_DAYS | Integer values | 45 | This parameter specifies the retention period for records in the table /palantir/log_03. Records older than the specified duration will be deleted. |
CLEANUP | MAX_ROW_DELETE_LIMIT | Integer values | 20000 | This parameter indicates the maximum number of rows deleted at a time while the housekeeping job removes data from the paging tables. |
CLEANUP | PAGE_DEL_OLDER_IN_DAYS | Integer values | 45 | This parameter specifies the retention period for records in the tables /palantir/pag_01, /palantir/pag_02, /palantir/pag_08, /palantir/pag_11, /palantir/pag_12, /palantir/pag_14. Records older than the specified duration will be deleted. |
CLEANUP | SCHEMA_DEL_OLDER_IN_DAYS | Integer values | 5 | This parameter specifies the retention period for records in the table /palantir/pag_09. Records older than the specified duration will be deleted. |
CLEANUP | SLTSTR_DEL_OLDER_IN_DAYS | Integer values | 2 | This parameter specifies the retention period for SLT Streaming records in the table /palantir/pag_08. Records older than the specified duration will be deleted. |
CLEANUP | TABLE_DEL_OLDER_IN_DAYS | Integer values | 10 | This parameter specifies the retention period for records in the tables /palantir/pag_13, /palantir/pag_10, and /palantir/pag_15. Records older than the specified duration will be deleted. |
DATAMODEL | DEPTH | Integer values | 1 | This parameter value specifies the number of relationship levels to process between the given tables and their related tables. |
DATATYPE | ParamValue=BOOLEAN | TSVCHARCOMPARE, AUTHCHECK, ENABLEDBAGGREGATION | N/A | This parameter requires the parameter value to be 'BOOLEAN'; the parameter names are the field names of boolean variables sent from Foundry in the request body. If a parameter name is defined with ParamId=DATATYPE and ParamValue=BOOLEAN, that field will be checked in the payload: its value will be set to TRUE if it is sent as 'X', and FALSE if it is sent as a space. |
ENCRYPT | EXTEND_ITAB | |||
EXTRACTOR | DEFAULT_CONFIGURATION | Alphanumeric | None | Extractors support multiple contexts. This parameter can be used to set the default context. By doing so, there is no need to set a context on Foundry syncs, as the Connector will use the default context in these cases. Leaving this parameter as "None" means that the extractor will, by default, run on the local application server rather than a remote context. |
EXTRACTOR | CHECK_CONCURRENT_JOBS | TRUE/FALSE | TRUE | Checks if there are any running jobs for this incremental ID. If the previous attempt has not completed yet, an error occurs to stop new data ingestion. |
EXTRACTOR | CONTEXT_CONFIGURATION | SAPI | N/A | SAPI represents the SAP Service API. For extractors, the context configuration should be set to SAPI. |
EXTRACTOR | DEBUG_MODE | TRUE / FALSE | FALSE | If this parameter is set to TRUE, an infinite loop will start in the background job which ingests data and writes as pages. |
EXTRACTOR | EXT_DATA_<dataType> | Numeric | N/A | If the output length of an ABAP datatype is incorrect, this parameter can be used to change the data length of that data type. In the Parameter Name field, <dataType> refers to the ABAP data type for which the length should be changed. |
EXTRACTOR | EXT_DTYPE_<dataType> | ABAP Data Types | N/A | If a data type cannot be recognized by Foundry, this parameter can be used to change the data type to another type. In the Parameter Name field, <dataType> refers to the data type that needs to be changed. |
EXTRACTOR | FETCH_OPTION | XML / DIRECT | XML | This parameter indicates whether to use fetch method XML or DIRECT. The XML fetch method is fast since it fetches data in a compressed format. The DIRECT fetch method is slower since it fetches data as a string and processes each row individually. The DIRECT fetch method should only be used if there is an error with the XML fetch method due to special characters in the data. |
EXTRACTOR | MAX_ROWS_PER_SYNC | Numeric | N/A | If this parameter is set, it will be taken into account only for syncs with the APPEND transaction type. Each sync will stop when it has ingested MAX_ROWS_PER_SYNC rows of data. |
EXTRACTOR | RFC_CONFIGURATION | | NONE | This parameter indicates the RFC name of the remote server. If this parameter is not set, or is set to blank, the RFC configuration will be set to NONE. |
EXTRACTOR | TIMESTAMP | ON / OFF | OFF | When this parameter is set to ON, the data will include a timestamp showing when the data was fetched and a row order number. This information can be used to deduplicate data later in the pipeline if required. |
EXTRACTOR | TRACE_BEFORE_FETCH | TRUE / FALSE | FALSE | By default, running a trace for extractors also includes the replication (calculation, initial data transfer and replication object generation), which sometimes takes longer than the limit on a trace. By setting this property to TRUE, the trace will start before the extractor fetch operation, bringing more clarity to trace results. |
INCREMENTAL | CDPOS_CHANGENR_FILTER_MODE | | DB | If this parameter is set to DB, the change number filter is applied in the selection from the CDHDR table; otherwise, the filter is applied after documents are selected from the CDHDR table. |
INCREMENTAL | CDPOS_CHANGENR_OFFSET | Integer values | 500000 | When the CDPOS type is used, document numbers will be checked by going back as many entries as specified in this parameter. |
INCREMENTAL | CDPOS_WINDOW_CLEANUP_OFFSETDAY | Integer values | 4 | While ingesting the minimum change number from the /palantir/pag_16 table, this parameter will be used to filter records with a creation date older than the specified value. |
INCREMENTAL | COMPARATOR | GREATER_THAN / GREATER_THAN_OR_EQUAL_TO | GREATER_THAN_OR_EQUAL_TO | This parameter specifies the comparator used during incremental delta ingestions to filter records where the incremental field value is either greater than or greater than or equal to the latest recorded incremental field value. |
INCREMENTAL | ENABLE_CDPOS_CURSOR | TRUE/FALSE | FALSE | If this parameter is enabled, documents from the CDPOS table will be selected via a cursor to prevent excessive data load. |
INCREMENTAL | ENABLE_CDPOS_UDATEFILTER | TRUE/FALSE | TRUE | If this parameter is enabled, the CDHDR table will be filtered according to the UDATE column of the latest change number, and the CDPOS table will then be read again using the documents from the CDHDR table. |
INCREMENTAL | ENABLE_TWIN_CURSOR | TRUE/FALSE | FALSE | If this parameter is enabled, documents from the twin table will be selected via a cursor to prevent excessive data load. |
INCREMENTAL | RANGESIZE | Numeric | 900 | (Internal Parameter) This parameter is used for CDPOS, CDHDR and TWIN incremental types for Table and RemoteTable objects. This parameter indicates how many conditions can exist in the nested range table to ingest data. |
INFOPROVIDER | COMPLETE_ANSWER | TRUE/FALSE | TRUE | When retrieving all authorized values of a user for an InfoObject, if this parameter is set to True, it returns not only the authorized values for the queried InfoObject but also all authorizations that include the characteristic. |
INFOPROVIDER | ENABLE_DB_AGGREGATE | TRUE/FALSE | TRUE | The parameter determines if database aggregation will be used while ingesting Infoprovider's data. |
INFOPROVIDER | FORCE_INIT | TRUE / FALSE | FALSE | If this parameter is set to TRUE, the last successful Job ID will not be marked as successful. |
INFOPROVIDER | READ_OPEN_REQUEST | TRUE / FALSE | TRUE | This parameter toggles between reading only green requests or all requests (green and yellow) in an InfoProvider. The default behavior is to read all requests. |
JSON | CONVERT_RAW_TO_STRING | TRUE / FALSE | TRUE | If this parameter is set to FALSE, then JSON conversion is disabled while saving Page data. If the parameter is TRUE, data will be converted from Xstring to String. |
JSON | JSON_OPTIMIZED | TRUE/FALSE | TRUE | When a page read request is sent to the Connector, data is ingested directly from the paging table. If this parameter is set to True, the data is sent to Foundry without modification. Otherwise, the data is first converted into a table after being ingested from the paging table, then serialized again before being sent to Foundry. |
JSON | NUMC_KEEPZERO | TRUE / FALSE | FALSE | Prior to SP23, leading zeros in NUMC type fields were removed when non-kernel-based JSON conversion was used. Setting this parameter to TRUE keeps leading zeros, in line with the behavior of kernel-based JSON conversion. The default is FALSE to ensure backward compatibility with existing pipelines (useKernelJsonSerialization: false). To avoid data duplication in incremental scenarios, if kernel JSON conversion is enabled or this parameter is set to TRUE, a new initial load is recommended by resetting the incremental state of the existing sync. |
JSON | REMOVE_EXTENDED | TRUE / FALSE | FALSE | If this parameter is set to TRUE, non-printable ASCII codes (char code 128 to 255) are removed from the data before extraction. |
JSON | REMOVE_NONPRINT | TRUE / FALSE | TRUE | If this parameter is set to TRUE, non-printable characters are removed from the data before extraction. |
KERNEL | VALUE_HANDLING | default / move | move | The transformation option controls the tolerance of conversions when mapping elementary ABAP types in KERNEL serialization. default: if there is an invalid value in a field of type n, the exception CX_SY_CONVERSION_NO_NUMBER is raised. move: invalid values in a field of type n are copied to XML or JSON without being changed. |
LOGGER | DB | TRUE / FALSE | TRUE | This parameter is used to control whether logs are saved to the connector’s own logging tables or not. |
LOGGER | PAGEREAD_COMMIT | TRUE / FALSE | FALSE | By default, page read log messages are only sent to Foundry and not stored in the database. |
LOGGER | SLG | TRUE / FALSE | FALSE | This parameter is used to create log entries in SAP SLG logging. |
LOGGER | SLG_EXPIRY | Numeric | 30 | SLG_EXPIRY can be set in days; if it is not set, standard SAP SLG expiration policy applies. |
LOGGER | SLG_KEEP | TRUE / FALSE | FALSE | SLG_KEEP is used to prevent logs being deleted in SLG before expiration. |
LOGGER | TRACE_LEVEL | INFO/WARN/ERROR | WARN | This parameter controls which type of log messages will be saved to the database and returned to Foundry. If there is no record in the configuration table, the Connector will send all messages to Foundry, equivalent to using INFO in the configuration table. Trace log levels are as follows: ERROR – Only log messages with type E-Error; WARN – Only log messages with type W-Warning, I-Information, E-Error, T-Trace; INFO – All log messages (S-Success, W-Warning, I-Information, E-Error, T-Trace) |
NAMESPACE | TIMESTAMP | TRUE / FALSE | TRUE | If this parameter is set to TRUE, the timestamp and row number fields will be named /PALANTIR/TIMESTAMP and /PALANTIR/ROWNO (now the default) instead of ZPAL_TIMESTAMP and ZPAL_ROWNO (the naming convention in earlier versions). |
PAGE | DELETE_BACKGROUND | TRUE/FALSE | TRUE | If this parameter is enabled, page deletion from the paging table occurs in a background job when close and commit requests are sent. |
PAGE | MIN_PAGESIZE | Numeric | 5000 | This parameter sets the minimum row count of a page while writing data in pages. If a user specifies a page size lower than this in Foundry, it will be disregarded and this minimum used instead. This is to protect against poor performance with very small page sizes. |
PAGE | PAGEDELIMITER | any character | \t | When the PAGEFORMAT parameter is set to TSV, the PAGEDELIMITER parameter specifies the character used to separate columns during data serialization. |
PAGE | PAGEFORMAT | TSV / JSON / KRNL | JSON | This parameter indicates the conversion format of the data that will be sent to Foundry. |
PAGE | PAGESIZE | Numeric | 10000 | This parameter sets the default row count of a page while writing data in pages. If a user does not specify a page size parameter in Foundry, this value will be used. |
REMOTEBEX | CELL_TO_STRING_ | CELL_TO_STRING_W / CELL_TO_STRING_M / CELL_TO_STRING_P / CELL_TO_STRING_Q / CELL_TO_STRING_F / CELL_TO_STRING_D / CELL_TO_STRING_T | N/A | If the data type of the InfoObject needs to be changed to a string, it is defined here. A sample usage is CELL_TO_STRING_D, which converts date fields into a character data type. The data types are: W Amount, M Quantity, P Price, Q Quota, F Number, D Date, and T Time. |
REMOTEBEX | ENGINE | Alphanumeric | V2 | A new BEx Query Engine has been introduced, which brings performance improvements and additional support for query elements such as display attributes. It is not enabled by default but can be turned on by setting this parameter to V2. |
REMOTEBEX | PAGING | TRUE/FALSE | FALSE | Paging for BEx queries is supported via filters. The Connector automatically generates separate filters for each page, so large BEx queries can be run without splitting the sync manually. Filter generation is based on InfoObjects in the rows of the BEx query. If the number of posted values for an InfoObject does not exceed the threshold value, the InfoObject is used in filter generation; otherwise, it is discarded. The BEx query is then run for each filter separately to extract all BEx query data. By default, paging functionality is not enabled. |
REMOTEBEX | PAGING_MEMBER_LIMIT | Numeric | 100 | The Connector uses this threshold during BEx paging to prevent unnecessarily fine-grained dimensions from being used as filter candidates. If the number of posted values for an InfoObject exceeds PAGING_MEMBER_LIMIT, the InfoObject is considered too fine-grained and is discarded for filter generation. The parameter can be adjusted in the Connector's parameters. |
REMOTEBEX | RANGESIZE | Numeric | 1000 | While generating filters, if any InfoObject has many values and member limitation is not applied, this parameter will control the filter list for each page. |
REMOTEBEX | TECHNICAL_NAMES | TRUE/FALSE | FALSE | If this parameter is enabled, BEx column names will be retrieved using their technical names instead of human-readable texts. |
REMOTEBEX | TEXT | TRUE / FALSE / BEX | BEX | If this parameter is set to TRUE, the Key and Text of the characteristic / key figure will be concatenated as the column name. If it is set to FALSE, only the Key of the characteristic / key figure will be used as the column name. If it is set to BEX, the Query parameter will be used to define column names. |
REMOTEINFOPROVIDER | CHECK_EXIST | TRUE/FALSE | TRUE | If enabled, a check is performed to verify whether the Infoprovider exists in the remote system. |
REMOTEINFOPROVIDER | COMPLETE_ANSWER | TRUE/FALSE | TRUE | When retrieving all authorized values of a user for an InfoObject, if this parameter is set to True, it returns not only the authorized values for the queried InfoObject but also all authorizations that include the characteristic. |
REMOTEINFOPROVIDER | ENABLE_DB_AGGREGATE | TRUE/FALSE | TRUE | This parameter determines if database aggregation will be used while ingesting Infoprovider's data. |
REMOTEINFOPROVIDER | FORCE_INIT | TRUE / FALSE | FALSE | If this parameter is set to TRUE, the last successful Job ID will not be marked as successful. |
REMOTEINFOPROVIDER | READ_OPEN_REQUEST | TRUE / FALSE | TRUE | This parameter toggles between reading only green requests or all requests (green and yellow) in an InfoProvider. The default behavior is to read all requests. |
REMOTETABLE | CDPOS_FILTER_MAINTABLE_FOR_DELETE | TRUE/FALSE | FALSE | Fixes an issue for the CDPOS incremental type, where deleted documents were retrieved by a query that checked change document records across the whole table family. This caused documents to be marked as deleted even though only their item-level data had been deleted. The key parsing approach has been changed: table keys are now checked only for the main table, or for tables in the same table family that have the same number of keys as the main table or more. For example, if Sales Document 0100020 is deleted and Sales Item is in the same table family, entries for Sales Item with keys like 0100020_0010 should also be deleted. This parameter was introduced to include other tables when retrieving CDPOS deleted items. If the parameter is set to FALSE (the default), deleted documents are retrieved from CDPOS for the main table and for related tables that have the same number of keys as the main table or fewer. If the parameter is set to TRUE, only the main table's deleted documents are retrieved from the CDPOS table. |
REMOTETABLE | DBPAGING_CREATE_SHADOWTAB | TRUE/FALSE | FALSE | If this parameter is disabled and the source system runs on a HANA database, the shadow table will not be created. |
REMOTETABLE | DBPAGING_ENABLED | TRUE/FALSE | TRUE | If this parameter is enabled, data ingestion will be performed using parallel processing. |
REMOTETABLE | DBPAGING_UNICODE_PREFIX | TRUE/FALSE | TRUE | If this parameter is enabled, filter fields that have CHAR data types are flagged ( IS_CHAR_FIELD field is set to X). |
REMOTETABLE | FORCE_INIT | TRUE / FALSE | FALSE | If this parameter is set to TRUE, the last successful Job ID will not be marked as successful. |
REMOTETABLE | PARALLEL | TRUE / FALSE | FALSE | If this parameter is set to TRUE, syncs will run in parallel processing mode. |
REMOTETABLE | PARALLEL_JOB | Numeric | 5 | If PARALLEL is TRUE and the number of parallel jobs to use is not defined at a data sync level, the value of this parameter will be used as a system-wide default. |
REMOTETABLE | PARALLEL_PAGE_LIMIT | Numeric | 50000 | If there are fewer rows in the result set than this value, parallel processing will not be used. |
REMOTETABLE | ROWCOUNT_BY_TABCLASS | TRUE / FALSE | FALSE | If this parameter is set to TRUE, for cluster tables the row count returned by the system will be for the individual table; if it is FALSE (the default) the row count will be for the total cluster. |
REMOTETCODE | SCHEMA_FROM_DATA | TRUE/FALSE | FALSE | If the parameter is set to FALSE, the schema for the ALV report is retrieved from the metadata, which is fetched during runtime. If set to TRUE, the schema is retrieved directly from the data itself. |
RETRY | COUNT | Numeric | 1 | This parameter indicates the maximum number of times the sync will be retried if system resource checks fail. |
RETRY | DELAY | Numeric | 5 | If system resource checks fail, this parameter indicates how long (in seconds) to wait before checking the resources again. |
SLT | CHECK_CONCURRENT_JOBS | TRUE/FALSE | TRUE | Checks if there are any running jobs for this incremental ID. If the previous attempt has not completed yet, an error occurs to stop new data ingestion. |
SLT | CONTEXT_BASED_AUTHORIZATION | TRUE / FALSE | FALSE | If this parameter is set to TRUE, authorization checks will be run against the SLT context based on authorization object /PALAU/SCN . |
SLT | CONTEXT_CONFIGURATION | N/A | This parameter can be used to set a system-wide default context name, to be used in the event no context is sent from Foundry. | |
SLT | DEBUG_MODE | TRUE / FALSE | FALSE | If this parameter is set to TRUE, an infinite loop will start in the background job which ingests data and writes as pages. |
SLT | FETCH_OPTION | XML / DIRECT | XML | This parameter indicates whether to use fetch method XML or DIRECT. The XML fetch method is fast since it fetches data in a compressed format. The DIRECT fetch method is slower since it fetches data as a string and processes each row individually. The DIRECT fetch method should only be used if there is an error with the XML fetch method due to special characters in the data. |
SLT | FORCE_INIT | TRUE / FALSE | FALSE | If this parameter is set to TRUE, the last successful Job ID will not be marked as successful. In addition, the SLT source table will be reset so that initial delta values are ingested again. |
SLT | GET_LIST_MODE | V1/V2 | V1 | V1 ingests the SLT table list using the ODP RFC RODPS_REPL_ODP_GET_LIST. Because this standard RFC has bugs that impact performance, a second version of SLT table list ingestion was introduced. V2 uses the IUUC_GET_TABLES RFC, which ingests the table list from the DD02L table; IUUC_GET_TABLES is also used within RODPS_REPL_ODP_GET_LIST. |
SLT | MAX_ROWS_PER_SYNC | Numeric | N/A | If this parameter is set, it will be taken into account only for syncs with the APPEND transaction type. Each sync will stop when it has ingested MAX_ROWS_PER_SYNC rows of data. |
SLT | PARALLEL | TRUE / FALSE | FALSE | If this parameter is set to TRUE, syncs will run in parallel processing mode. |
SLT | PARALLEL_JOB | Numeric | 3 | If PARALLEL is TRUE and the number of parallel jobs to use is not defined at a data sync level, the value of this parameter will be used as a system-wide default. |
SLT | PARALLEL_PAGE_LIMIT | Numeric | 50000 | If there are fewer rows in the result set than this value, parallel processing will not be used. |
SLT | PARALLEL_FETCH_EXC_MSG | - | 511-RODPS_REPL | When ingesting SLT packages in parallel, fetching the last package can result in an error message due to the absence of an extracted package. To distinguish this specific error from other fetch errors, both the error message number and ID are defined. |
SLT | PARALLEL_PACKAGE_SIZE | Integer values | 0 | This parameter indicates how many SLT packages will be read in a single parallel process when fetching data from ODP. |
SLT | QUEUE | TRUE / FALSE | FALSE | A new approach has been introduced for SLT work process handling. By default, the Connector uses a BTC (background) process to fetch from SLT. If multiple syncs are running, a BTC process is used for each. Set this parameter to TRUE to use a single BTC process to wait for multiple initial loads from SLT. Once the data packages reach SLT, Foundry processes are started. This improves BTC resource efficiency. |
SLT | RECOVER_FROM_CONNECTOR | TRUE/FALSE | TRUE | When a sync is triggered and a pointer is already open in SLT, this parameter determines the recovery method. If set to TRUE, pages will be retrieved from the Connector. If set to FALSE, recovery will be handled within SLT. |
SLT | REMOTEAGENT_<contextName> | N/A | If the SLT server and Connector are on different servers, this parameter should be defined. In Parameter Name, <contextName> indicates the SLT context name. For Parameter Value, the RFC destination should be defined to point to the source system of the SLT context. | |
SLT | RFC_CONFIGURATION | NONE | This parameter indicates the RFC destination name of the SLT server. If this parameter is not set, or is set to blank, the RFC configuration will be set to NONE. |
SLT | SLT_DATA_<dataType> | Numeric | N/A | If the output length of an ABAP datatype is incorrect, this parameter can be used to change the data length of that data type. In the Parameter Name field, <dataType> refers to the ABAP data type for which the length should be changed. |
SLT | SLT_DTYPE_<dataType> | ABAP Data Types | N/A | If a data type cannot be recognized by Foundry, this parameter can be used to change the data type to another type. In the Parameter Name field, <dataType> refers to the data type that needs to be changed. |
SLT | TIMESTAMP | ON / OFF | OFF | When this parameter is set to ON, the data will include a timestamp showing when the data was fetched and a row order number. This information can be used to deduplicate data later in the pipeline if required. |
SLT | TRACE_BEFORE_FETCH | TRUE / FALSE | FALSE | By default, running a trace for SLT also includes the replication (calculation, initial data transfer and replication object generation), which sometimes takes longer than the limit on a trace. By setting this property to TRUE, the trace will start before the SLT fetch operation, bringing more clarity to trace results. |
SLT | TRIGGER_STATE_TIMEOUT | Integer values | 300 | The trigger state status is checked after this timeout (in seconds) has elapsed, to determine whether there is any error in the replication of the table. |
SYSTEM | ABORT_RETRY_COUNT | Numeric | 10 | This parameter defines the number of attempts that will be made to abort the job when a transaction is closed. |
SYSTEM | AUTH_CHECK_SOURCE | TABLE / PFCG | PFCG | Used to configure custom authorizations. |
SYSTEM | AUTH_GET_LIST | TRUE / FALSE | FALSE | This parameter is used to enable or disable authorization checks for the list of values for object types. |
SYSTEM | CONTEXT_VALIDITY_CHECK | TRUE / FALSE | TRUE | If this parameter is set to FALSE, SLT and Remote Agent contexts will not be eliminated from the list returned to Foundry, even if they are not valid contexts. |
SYSTEM | CONTINUOUS_RESOURCE_CHECK | TRUE / FALSE | TRUE | Enables resource checks for all requests (init and all paging requests). If FALSE, resource checks are only carried out for the init request. |
SYSTEM | CPU_CHECK | TRUE / FALSE | TRUE | Enables or disables CPU checks. |
SYSTEM | DISABLE_DYNFILTER | TRUE/FALSE | FALSE | When this parameter is set to TRUE, dynamic filters will not be parsed in the Connector. |
SYSTEM | DYNAMIC_TABLE | V1 / V2 | V1 | This parameter can be used to address an issue seen in CL_ALV_TABLE_CREATE=>CREATE_DYNAMIC_TABLE , which hits the dynamic table limits. To enable new dynamic table routines, set this parameter to V2. |
SYSTEM | ERP_SOURCE_INFO | TRUE / FALSE | TRUE | Used to define whether or not ERP source information will be returned to Foundry with the list of contexts. |
SYSTEM | FAILED_AUTH_MAX_COUNT | Numeric | 200 | This parameter is used to limit failed authorization check messages from SU53. There could be cases where too many error messages are generated in the SAP system, which may potentially affect the extraction process. |
SYSTEM | FILTER_DECODE | TRUE / FALSE | FALSE | Set to TRUE to enable non-Unicode filters. |
SYSTEM | INFOPROVIDER_AUTH_CHECK | TRUE / FALSE | FALSE | When set to TRUE, the connector will check authorizations for the user for authorization-relevant InfoObjects. This avoids authorization errors. SAP APIs are used to generate filters based on row-level authorization. |
SYSTEM | KILL_HANGING_JOB | TRUE / FALSE | FALSE | If set to TRUE, the Connector will check paging requests from Foundry, and if there are no more page read requests after a certain period, the paging job is cancelled. |
SYSTEM | KILL_HANGING_JOB_THRESHOLD | Numeric | 1800 | If KILL_HANGING_JOB is set to TRUE, this parameter defines the number of seconds to wait before a job is considered to be hanging. |
SYSTEM | KILL_HANGING_JOB_RETRY | Integer values | 5 | If a network issue occurs in Foundry, the SAP background job continues running. However, if page read requests are not sent from Foundry to Connector, the job will be aborted. If there is an error while aborting the job, it will retry the specified number of times, as defined by the KILL_HANGING_JOB_RETRY parameter. |
SYSTEM | MEMORY_CHECK | TRUE / FALSE | TRUE | Enables or disables memory checks. |
SYSTEM | MEMORY_CHECK_SOURCE | ST06 / ST02 | ST06 | This parameter determines whether the memory consumption value used for resource checks will be retrieved from the ST02 transaction code or ST06 transaction code in SAP. |
SYSTEM | PROCESS_CHECK | TRUE / FALSE | TRUE | Enables checks on the minimum number of permitted work processes; works in conjunction with PROCESS_MIN_BG and PROCESS_MIN_DIA . |
SYSTEM | RESOURCE_CHECK | TRUE / FALSE | TRUE | Enables or disables resource checks. If FALSE, all checks are disabled; if TRUE, other parameters (CPU_CHECK and MEMORY_CHECK ) are checked. |
SYSTEM | RESOURCE_CHECK_SERVER | LOCAL / ALL | ALL | ALL : Check all application servers, starting with the local server that is processing the current request, and return TRUE if any of them has available resources. LOCAL : Check only the local server and return TRUE / FALSE based on availability. |
SYSTEM | SERVICE_ENCODING | Alphanumeric | Used to enable charset support for non-Unicode NetWeaver 7.4 installations. | |
SYSTEM | SPLIT_SYNC_CANCEL_RETRY | Integer values | When a split sync is initiated from Foundry, it stops ingesting data upon reaching the maximum row limit specified by the maxRowsPerSync parameter. Foundry sends close and commit requests while the background job for the initial load of the split sync continues, so the background job is not canceled when a close request for the split sync is received. However, if the initial load of a split sync is actually canceled, the Connector cannot distinguish this case; the job is checked and aborted in an asynchronous job. The job abortion will be attempted as many times as specified by this parameter. |
SYSTEM | SPLIT_SYNC_CANCEL_THRESHOLD | Integer values | 300 | When a split sync is initiated from Foundry, it stops ingesting data upon reaching the maximum row limit specified by the maxRowsPerSync parameter. Foundry sends close and commit requests while the background job for the initial load of the split sync continues, so the background job is not canceled when a close request for the split sync is received. However, if the initial load of a split sync is actually canceled, the Connector cannot distinguish this case; the job is checked and aborted in an asynchronous job after this threshold (defined in seconds) has passed. |
SYSTEM_THRESHOLD | CPU_LOAD | Numeric | 80 | If current system CPU load is higher than this value, the sync will be aborted. |
SYSTEM_THRESHOLD | CPU_USER | Numeric | 80 | If current system CPU user load is higher than this value, the sync will be aborted. |
SYSTEM_THRESHOLD | MEMORY_FREE | Numeric | 5 | If available memory (%) of the current system is less than this minimum, the sync will be aborted. |
SYSTEM_THRESHOLD | PROCESS_MIN_BG | Numeric | 1 | Minimum required number of background processes available on the SAP Application Server for the sync to proceed. |
SYSTEM_THRESHOLD | PROCESS_MIN_DIA | Numeric | 1 | Minimum required number of dialog processes available on the SAP Application Server for the sync to proceed. |
SYSTEM_THRESHOLD | CPU_IDLE | Integer values | 5 | If the current system's CPU idle percentage is lower than this value, the Connector cannot proceed to ingest or read data. |
TABLE | CDPOS_FILTER_MAINTABLE_FOR_DELETE | TRUE/FALSE | FALSE | Addresses an issue with the CDPOS incremental type. Deleted documents were retrieved by a query that checked change document records across the whole table family, which caused documents to be marked as deleted even though only their item-level data had been deleted. The key parsing approach has changed: table keys are now checked only for the main table, or for tables in the same table family that have the same number of keys as the main table or more. For example, if Sales Document 01000020 is deleted and Sales Item is in the same table family, then Sales Item entries with keys like 01000020_0010 should also be deleted. If the parameter is set to FALSE, CDPOS deleted documents are retrieved for the main table and for related tables that have the same number of keys as the main table or fewer. If the parameter is set to TRUE, only the main table's deleted documents are retrieved from the CDPOS table. |
TABLE | DBPAGING_CREATE_SHADOWTAB | TRUE/FALSE | FALSE | If this parameter is disabled and the source system runs on HANA DB, the shadow table will not be created. |
TABLE | DBPAGING_ENABLED | TRUE/FALSE | TRUE | If this parameter is enabled, data ingestion will be performed using parallel processes. |
TABLE | DBPAGING_MAXJOB | Integer values | 3 | This parameter specifies the number of jobs to be used for parallel database ingestion when database paging is enabled. |
TABLE | DBPAGING_UNICODE_PREFIX | TRUE/FALSE | TRUE | If this parameter is enabled, filter fields that have CHAR data types are flagged (the IS_CHAR_FIELD field is set to X). |
TABLE | FORCE_INIT | TRUE / FALSE | FALSE | If this parameter is set to TRUE, the last successful Job ID will not be marked as successful. |
TABLE | PARALLEL | TRUE / FALSE | FALSE | If this parameter is set to TRUE, syncs will run in parallel processing mode. |
TABLE | PARALLEL_JOB | Numeric | 5 | If PARALLEL is TRUE and the number of parallel jobs to use is not defined at a data sync level, the value of this parameter will be used as a system-wide default. |
TABLE | PARALLEL_PAGE_LIMIT | Numeric | 50000 | If there are fewer rows in the result set than this value, parallel processing will not be used. |
TABLE | ROWCOUNT_BY_TABCLASS | TRUE / FALSE | FALSE | If this parameter is set to TRUE, for cluster tables the row count returned by the system will be for the individual table; if it is FALSE (the default) the row count will be for the total cluster. |
TCODE | DEBUG_MODE | TRUE / FALSE | FALSE | If this parameter is set to TRUE, an infinite loop will start in the background job which ingests data and writes as pages. |
TCODE | SCHEMA_FROM_DATA | TRUE/FALSE | FALSE | If the parameter is set to FALSE, the schema for the ALV report is retrieved from the metadata, which is fetched during runtime. If set to TRUE, the schema is retrieved directly from the data itself. |
TRACE | DURATION_LIMIT | Numeric | 30 | If a sync is running with trace functionality and this limit (in minutes) is reached, then trace will be turned off automatically (to avoid system short dumps). |
TRACE | MAX_SESSION | Numeric | 10 | The default maximum number of sessions per sync for a trace session is 10 but can be modified using this parameter. This is primarily intended for parallel extractions to prevent trace files being generated for every parallel job. |
TSV | CHARLIST | Alphanumeric | N/A | This parameter controls extraction from non-Unicode 4.6C/620/640 systems. Fixed dictionary characters are used by default but this dictionary can be extended using this parameter to add missing characters. |
TSV | ESCAPEMODE | AUTO / HEX / HEX_ZIPPED / CHAR_COMPARE | CHAR_COMPARE | This parameter controls extraction from non-Unicode 4.6C/620/640 systems. There is a fixed algorithm for character escaping, which can be changed using this parameter. |
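Several of the parameters above (RESOURCE_CHECK, the SYSTEM_THRESHOLD values CPU_LOAD, CPU_USER, MEMORY_FREE, PROCESS_MIN_BG and PROCESS_MIN_DIA, and RETRY / COUNT and RETRY / DELAY) work together to gate whether a sync may start. The following Python sketch illustrates only the decision logic described in the table; it is not the Connector's actual ABAP implementation, and all function and field names are hypothetical.

```python
import time

# Threshold values mirroring the SYSTEM_THRESHOLD defaults in the table above.
THRESHOLDS = {"CPU_LOAD": 80, "CPU_USER": 80, "MEMORY_FREE": 5,
              "PROCESS_MIN_BG": 1, "PROCESS_MIN_DIA": 1}

def resources_ok(stats, thresholds=THRESHOLDS):
    """Return True if the system is within all configured thresholds."""
    return (stats["cpu_load"] <= thresholds["CPU_LOAD"]
            and stats["cpu_user"] <= thresholds["CPU_USER"]
            and stats["memory_free_pct"] >= thresholds["MEMORY_FREE"]
            and stats["bg_processes"] >= thresholds["PROCESS_MIN_BG"]
            and stats["dia_processes"] >= thresholds["PROCESS_MIN_DIA"])

def check_with_retry(get_stats, retry_count=1, retry_delay=5, sleep=time.sleep):
    """If the check fails, retry it RETRY/COUNT more times,
    waiting RETRY/DELAY seconds between attempts."""
    for attempt in range(retry_count + 1):
        if resources_ok(get_stats()):
            return True
        if attempt < retry_count:
            sleep(retry_delay)
    return False
```

Note that per RESOURCE_CHECK_SERVER, the real check may be repeated across all application servers; the sketch covers a single server only.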
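The PARALLEL, PARALLEL_JOB and PARALLEL_PAGE_LIMIT parameters (in both the SLT and TABLE groups) describe a simple decision: parallel processing is used only when it is enabled and the result set is large enough. A minimal Python sketch of that decision, with hypothetical names and assuming "fewer rows than the limit" means strictly less than:

```python
def plan_jobs(row_count, parallel=False, parallel_job=3, page_limit=50000):
    """Return the number of jobs to use for a sync.

    Runs serially (1 job) unless PARALLEL is TRUE and the result set
    has at least PARALLEL_PAGE_LIMIT rows; otherwise uses PARALLEL_JOB
    jobs (the system-wide default when no sync-level value is set)."""
    if not parallel or row_count < page_limit:
        return 1
    return parallel_job
```

For TABLE syncs the system-wide PARALLEL_JOB default is 5 rather than 3, which would simply be passed as the `parallel_job` argument here.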
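The KILL_HANGING_JOB and KILL_HANGING_JOB_THRESHOLD parameters describe a timeout on page read requests: a paging job is treated as hanging once no request has arrived for the threshold duration. A hedged sketch of that rule (hypothetical names; timestamps are assumed to be seconds):

```python
def is_hanging(last_page_request_ts, now_ts, enabled=False, threshold=1800):
    """True when KILL_HANGING_JOB is enabled and no page read request
    has arrived for more than KILL_HANGING_JOB_THRESHOLD seconds."""
    return enabled and (now_ts - last_page_request_ts) > threshold
```

In the Connector, a job flagged this way is aborted, with the abort retried up to KILL_HANGING_JOB_RETRY times on error.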