Parameters for the Palantir Foundry Connector 2.0 for SAP Applications

Connector parameters

The following parameters control the Palantir Foundry Connector 2.0 for SAP Applications ("Connector"):

The default parameter values listed below are for a fresh install of the latest connector version.

Param Id | Param Name | Possible Values | Default Value | Description
BALDAT | SEPARATOR_COLUMN | any character | \t | This parameter will be used to separate columns when concatenating system logs during decompression from the BALDAT table.
BALDAT | SEPARATOR_NEWLINE | any character | \n | This parameter will be used to separate new lines when concatenating system logs during decompression from the BALDAT table.
BEX | CELL_TO_STRING_ | CELL_TO_STRING_W / CELL_TO_STRING_M / CELL_TO_STRING_P / CELL_TO_STRING_Q / CELL_TO_STRING_F / CELL_TO_STRING_D / CELL_TO_STRING_T | | If the data type of the InfoObject needs to be changed to a string, it is defined here. A sample usage is CELL_TO_STRING_D, which converts date fields into a character data type. The data types are: W Amount, M Quantity, P Price, Q Quota, F Number, D Date, and T Time.
BEX | ENGINE | Alphanumeric | V2 | A new BEx Query Engine has been introduced, which brings performance improvements and additional support for query elements such as display attributes. It is not enabled by default but can be turned on by setting this parameter to V2.
BEX | PAGING | TRUE / FALSE | FALSE | Paging for BEx queries is supported via filters. The Connector automatically generates separate filters for each page, so large BEx queries can be run without having to split the sync manually. Filter generation is based on InfoObjects in the rows of the BEx query: if the posted InfoObject IDs do not exceed the threshold value, the InfoObject is used in filter generation; otherwise, it is discarded. The BEx query is then run for each filter separately to extract all BEx query data. By default, paging functionality is not enabled.
BEX | PAGING_MEMBER_LIMIT | Numeric | 100 | When BEx paging is enabled, the Connector uses this threshold to prevent unnecessary dimensions being used as filter candidates. If the posted value for an InfoObject is more than PAGING_MEMBER_LIMIT, it is considered too fine-grained and is discarded for filter generation (see the sketch after this group).
BEX | RANGESIZE | Numeric | 1000 | While generating filters, if any InfoObject has many values and the member limitation is not applied, this parameter controls the filter list for each page.
BEX | SHOW_DISPLAY_ATTRIBUTES | TRUE / FALSE | FALSE | Display attributes are supported. To enable display attributes, the BEx Query Engine will need to be set to V3. Display attributes can be enabled system-wide (by maintaining this parameter) or at the individual sync level (in Foundry).
BEX | TECHNICAL_NAMES | TRUE / FALSE | FALSE | If this parameter is enabled, BEx column names will be retrieved using their technical names instead of human-readable texts.
BEX | TEXT | TRUE / FALSE / BEX | BEX | If this parameter is set to TRUE, the Key and Text of the characteristic / key figure are concatenated as the column name. If it is set to FALSE, only the Key of the characteristic / key figure is used as the column name. If it is set to BEX, the Query parameter is used to define column names.
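
The BEX PAGING, PAGING_MEMBER_LIMIT and RANGESIZE parameters interact when the Connector generates per-page filters. The following Python sketch illustrates one way that interaction could look; it is based only on the descriptions above, not the Connector's ABAP implementation, and the function name, data shapes and the choice of the coarsest candidate InfoObject are assumptions.

```python
# Illustrative sketch of BEx paging filter generation (not the Connector's code).
PAGING_MEMBER_LIMIT = 100
RANGESIZE = 1000

def build_page_filters(row_infoobjects: dict) -> list:
    """row_infoobjects maps an InfoObject ID to the distinct members posted for it."""
    # Discard InfoObjects that are too fine-grained to be filter candidates.
    candidates = {iobj: members for iobj, members in row_infoobjects.items()
                  if len(members) <= PAGING_MEMBER_LIMIT}
    if not candidates:
        return []  # no candidate: fall back to a single, unfiltered query
    # Assumption: drive the page filters with the coarsest candidate (fewest members).
    iobj, members = min(candidates.items(), key=lambda kv: len(kv[1]))
    # RANGESIZE caps how many member values go into the filter for one page.
    return [{iobj: members[i:i + RANGESIZE]} for i in range(0, len(members), RANGESIZE)]

filters = build_page_filters({"0CALDAY": [f"202401{d:02d}" for d in range(1, 31)],
                              "0MATERIAL": [f"M{n:05d}" for n in range(5000)]})
print(len(filters))  # 1 page filter built on 0CALDAY; 0MATERIAL was discarded
```
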
CLEANUP | CDPOS_WINDOW_OFFSET_DAY | Integer values | 4 | This parameter specifies the retention period for CDPOS Window records in the table /PALANTIR/PAG_16. Records older than the specified duration will be deleted.
CLEANUP | INC_DEL_OLDER_IN_DAYS | Integer values | 45 | This parameter specifies the retention period for records in the table /palantir/inc_04. Records older than the specified duration will be deleted.
CLEANUP | LOG_DEL_OLDER_IN_DAYS | Integer values | 45 | This parameter specifies the retention period for records in the table /palantir/log_03. Records older than the specified duration will be deleted.
CLEANUP | MAX_ROW_DELETE_LIMIT | Integer values | 20000 | This parameter indicates the maximum row count used by the housekeeping job when deleting data from the paging tables (see the sketch after this group).
CLEANUP | PAGE_DEL_OLDER_IN_DAYS | Integer values | 45 | This parameter specifies the retention period for records in the tables /palantir/pag_01, /palantir/pag_02, /palantir/pag_08, /palantir/pag_11, /palantir/pag_12, and /palantir/pag_14. Records older than the specified duration will be deleted.
CLEANUP | SCHEMA_DEL_OLDER_IN_DAYS | Integer values | 5 | This parameter specifies the retention period for records in the table /palantir/pag_09. Records older than the specified duration will be deleted.
CLEANUP | SLTSTR_DEL_OLDER_IN_DAYS | Integer values | 2 | This parameter specifies the retention period for SLT Streaming records in the table /palantir/pag_08. Records older than the specified duration will be deleted.
CLEANUP | TABLE_DEL_OLDER_IN_DAYS | Integer values | 10 | This parameter specifies the retention period for records in the tables /palantir/pag_13, /palantir/pag_10, and /palantir/pag_15. Records older than the specified duration will be deleted.
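
The CLEANUP retention parameters all follow the same pattern: a cutoff date is derived from a *_DEL_OLDER_IN_DAYS value and older records are removed, with MAX_ROW_DELETE_LIMIT capping how much the housekeeping job deletes at once. A minimal Python sketch of that rule follows; the function and the batching behaviour are illustrative assumptions, not the Connector's code.

```python
# Illustrative sketch of the housekeeping rule behind the CLEANUP* parameters.
from datetime import date, timedelta

PAGE_DEL_OLDER_IN_DAYS = 45
MAX_ROW_DELETE_LIMIT = 20000

def rows_to_delete(row_dates, today):
    cutoff = today - timedelta(days=PAGE_DEL_OLDER_IN_DAYS)
    expired = [d for d in row_dates if d < cutoff]
    # Assumption: the housekeeping job deletes at most MAX_ROW_DELETE_LIMIT rows per run.
    return expired[:MAX_ROW_DELETE_LIMIT]

print(len(rows_to_delete([date(2024, 1, 1), date(2024, 6, 1)], today=date(2024, 6, 30))))  # 1
```
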
DATAMODEL | DEPTH | Integer values | 1 | This parameter value specifies the number of relationship levels to process between the given tables and their related tables.
DATATYPE | ParamValue=BOOLEAN | TSVCHARCOMPARE, AUTHCHECK, ENABLEDBAGGREGATION | N/A | This parameter requires the parameter values to be 'BOOLEAN', and the parameter names represent the variable names that indicate the field names of boolean variables sent via Foundry in the body of the request. If a parameter name is defined with ParamId=DATATYPE and ParamValue=BOOLEAN, then this parameter will be checked in the payload. Its value will be set to TRUE if it is sent as 'X', and FALSE if it is sent as a space.
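
A small sketch of the DATATYPE/BOOLEAN behaviour described above: an ABAP-style flag sent by Foundry ('X' or a space) is mapped to a boolean for the registered field names. The payload shape and the helper function are assumptions for illustration only.

```python
# Illustrative sketch of the DATATYPE/BOOLEAN conversion, not the Connector's code.
BOOLEAN_FIELDS = {"TSVCHARCOMPARE", "AUTHCHECK", "ENABLEDBAGGREGATION"}

def normalize_booleans(payload: dict) -> dict:
    out = dict(payload)
    for field in BOOLEAN_FIELDS:
        if field in out:
            out[field] = out[field] == "X"   # 'X' -> True, ' ' (space) -> False
    return out

print(normalize_booleans({"AUTHCHECK": "X", "ENABLEDBAGGREGATION": " ", "table": "MARA"}))
```
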
ENCRYPT | EXTEND_ITAB | | |
EXTRACTOR | DEFAULT_CONFIGURATION | Alphanumeric | None | Extractors support multiple contexts. This parameter can be used to set the default context. By doing so, there is no need to set a context on Foundry syncs, as the Connector will use the default context in these cases. Leaving this parameter as "None" means that the extractor will run on the local application server, not a remote context, by default.
EXTRACTOR | CHECK_CONCURRENT_JOBS | TRUE / FALSE | TRUE | Checks if there are any running jobs for this incremental ID. If the previous attempt has not completed yet, an error occurs to stop new data ingestion.
EXTRACTOR | CONTEXT_CONFIGURATION | SAPI | N/A | SAPI represents the SAP Service API. For extractors, the context configuration should be set to SAPI.
EXTRACTOR | DEBUG_MODE | TRUE / FALSE | FALSE | If this parameter is set to TRUE, an infinite loop will start in the background job which ingests data and writes it as pages.
EXTRACTOR | EXT_DATA_<dataType> | Numeric | N/A | If the output length of an ABAP data type is incorrect, this parameter can be used to change the data length of that data type. In the Parameter Name field, <dataType> refers to the ABAP data type for which the length should be changed.
EXTRACTOR | EXT_DTYPE_<dataType> | ABAP Data Types | N/A | If a data type cannot be recognized by Foundry, this parameter can be used to change the data type to another type. In the Parameter Name field, <dataType> refers to the data type that needs to be changed (see the sketch after this group).
EXTRACTOR | FETCH_OPTION | XML / DIRECT | XML | This parameter indicates whether to use fetch method XML or DIRECT. The XML fetch method is fast since it fetches data in a compressed format. The DIRECT fetch method is slower since it fetches data as a string and processes each row individually. The DIRECT fetch method should only be used if there is an error with the XML fetch method due to special characters in the data.
EXTRACTOR | MAX_ROWS_PER_SYNC | Numeric | N/A | If this parameter is set, it will be taken into account only for syncs with the APPEND transaction type. Each sync will stop when it has ingested MAX_ROWS_PER_SYNC rows of data.
EXTRACTOR | RFC_CONFIGURATION | | NONE | This parameter indicates the RFC name of the remote server. If this parameter is not set, or set to blank, the RFC configuration will be set to "NONE".
EXTRACTOR | TIMESTAMP | ON / OFF | OFF | When this parameter is set to ON, the data will include a timestamp showing when the data was fetched and a row order number. This information can be used to deduplicate data later in the pipeline if required.
EXTRACTOR | TRACE_BEFORE_FETCH | TRUE / FALSE | FALSE | By default, running a trace for extractors also includes the replication (calculation, initial data transfer and replication object generation), which sometimes takes longer than the limit on a trace. By setting this parameter to TRUE, the trace will start before the extractor fetch operation, bringing more clarity to trace results.
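
The EXT_DATA_<dataType> and EXT_DTYPE_<dataType> overrides can be pictured as simple per-type lookups applied when the extractor schema is built. The Python sketch below shows that idea; the parameter storage, the function and the example type names are assumptions, not the Connector's implementation.

```python
# Illustrative sketch of per-data-type length and type overrides.
parameters = {
    ("EXTRACTOR", "EXT_DATA_DEC"): "31",      # assumed: override output length for DEC fields
    ("EXTRACTOR", "EXT_DTYPE_D16R"): "CHAR",  # assumed: map an unrecognised type to CHAR
}

def resolve_field(abap_type, default_length):
    dtype = parameters.get(("EXTRACTOR", f"EXT_DTYPE_{abap_type}"), abap_type)
    length = int(parameters.get(("EXTRACTOR", f"EXT_DATA_{abap_type}"), default_length))
    return dtype, length

print(resolve_field("DEC", 16))    # ('DEC', 31)
print(resolve_field("D16R", 16))   # ('CHAR', 16)
print(resolve_field("CHAR", 40))   # ('CHAR', 40) - no override, defaults kept
```
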
INCREMENTAL | CDPOS_CHANGENR_FILTER_MODE | | DB | If this parameter is set to DB, the change number filter is applied in the selection from the CDHDR table. Otherwise, filtering is applied after the documents have been selected from the CDHDR table.
INCREMENTAL | CDPOS_CHANGENR_OFFSET | Integer values | 500000 | When the CDPOS type is used, document numbers will be checked by going back as many entries as specified in this parameter.
INCREMENTAL | CDPOS_WINDOW_CLEANUP_OFFSETDAY | Integer values | 4 | While ingesting the minimum change number from the /palantir/pag_16 table, this parameter will be used to filter records with a creation date older than the specified value.
INCREMENTAL | COMPARATOR | GREATER_THAN / GREATER_THAN_OR_EQUAL_TO | GREATER_THAN_OR_EQUAL_TO | This parameter specifies the comparator used during incremental delta ingestions to filter records where the incremental field value is either greater than, or greater than or equal to, the latest recorded incremental field value (see the sketch after this group).
INCREMENTAL | ENABLE_CDPOS_CURSOR | TRUE / FALSE | FALSE | If this parameter is enabled, documents from the CDPOS table will be selected via a cursor to prevent excessive data load.
INCREMENTAL | ENABLE_CDPOS_UDATEFILTER | TRUE / FALSE | TRUE | If this parameter is enabled, the CDHDR table will be filtered according to the UDATE column of the latest change number, and the CDPOS table will then be read again using the documents from the CDHDR table.
INCREMENTAL | ENABLE_TWIN_CURSOR | TRUE / FALSE | FALSE | If this parameter is enabled, documents from the twin table will be selected via a cursor to prevent excessive data load.
INCREMENTAL | RANGESIZE | Numeric | 900 | (Internal parameter) This parameter is used for the CDPOS, CDHDR and TWIN incremental types for Table and RemoteTable objects. It indicates how many conditions can exist in the nested range table used to ingest data.
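
The INCREMENTAL COMPARATOR setting decides whether delta ingestion uses a strict or inclusive comparison against the last recorded incremental value. A short Python sketch of that filter follows; the record layout and the field name (AEDAT) are illustrative assumptions.

```python
# Illustrative sketch of the incremental delta comparator.
import operator

COMPARATORS = {
    "GREATER_THAN": operator.gt,
    "GREATER_THAN_OR_EQUAL_TO": operator.ge,
}

def delta_rows(rows, incremental_field, last_value, comparator="GREATER_THAN_OR_EQUAL_TO"):
    cmp = COMPARATORS[comparator]
    return [r for r in rows if cmp(r[incremental_field], last_value)]

rows = [{"AEDAT": "20240101"}, {"AEDAT": "20240215"}, {"AEDAT": "20240301"}]
print(len(delta_rows(rows, "AEDAT", "20240215")))                   # 2 (>=)
print(len(delta_rows(rows, "AEDAT", "20240215", "GREATER_THAN")))   # 1 (>)
```
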
INFOPROVIDER | COMPLETE_ANSWER | TRUE / FALSE | TRUE | When retrieving all authorized values of a user for an InfoObject, if this parameter is set to TRUE, it returns not only the authorized values for the queried InfoObject but also all authorizations that include the characteristic.
INFOPROVIDER | ENABLE_DB_AGGREGATE | TRUE / FALSE | TRUE | This parameter determines whether database aggregation will be used while ingesting the InfoProvider's data.
INFOPROVIDER | FORCE_INIT | TRUE / FALSE | FALSE | If this parameter is set to TRUE, the last successful Job ID will not be set as successful.
INFOPROVIDER | READ_OPEN_REQUEST | TRUE / FALSE | TRUE | This parameter is used to toggle between reading either green-only requests or all requests (green and yellow) in an InfoProvider. The default behavior is to read all requests.
JSON | CONVERT_RAW_TO_STRING | TRUE / FALSE | TRUE | If this parameter is set to FALSE, JSON conversion is disabled while saving page data. If it is TRUE, data will be converted from Xstring to String.
JSON | JSON_OPTIMIZED | TRUE / FALSE | TRUE | When a page read request is sent to the Connector, data is ingested directly from the paging table. If this parameter is set to TRUE, the data is sent to Foundry without modification. Otherwise, the data is first converted into a table after being ingested from the paging table, then serialized again before being sent to Foundry.
JSON | NUMC_KEEPZERO | TRUE / FALSE | FALSE | Prior to SP23, leading zeros in NUMC type fields were removed when non-kernel-based JSON conversion (useKernelJsonSerialization: false) was used. Setting this parameter to TRUE will ensure leading zeros are kept, in line with the behavior for kernel-based JSON conversion. The default setting is FALSE to ensure backward compatibility with existing pipelines. To avoid data duplication in incremental scenarios, if kernel JSON conversion is enabled or this parameter is set to TRUE, a new initial load is recommended by resetting the incremental state of the existing sync.
JSON | REMOVE_EXTENDED | TRUE / FALSE | FALSE | If this parameter is set to TRUE, non-printable ASCII codes (char code 128 to 255) are removed from the data before extraction.
JSON | REMOVE_NONPRINT | TRUE / FALSE | TRUE | If this parameter is set to TRUE, non-printable characters are removed from the data before extraction (see the sketch after this group).
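
The JSON cleanup switches above (NUMC_KEEPZERO, REMOVE_NONPRINT, REMOVE_EXTENDED) can be summarised as small string transformations applied before a page is serialized. The following Python sketch mirrors the descriptions; it is not the Connector's serializer, and the helper names are assumptions.

```python
# Illustrative sketch of the JSON cleanup switches.
NUMC_KEEPZERO = True
REMOVE_NONPRINT = True
REMOVE_EXTENDED = False

def clean_numc(value: str) -> str:
    # With NUMC_KEEPZERO enabled, leading zeros on NUMC values are preserved.
    return value if NUMC_KEEPZERO else value.lstrip("0") or "0"

def clean_text(value: str) -> str:
    if REMOVE_NONPRINT:
        value = "".join(ch for ch in value if ch.isprintable())
    if REMOVE_EXTENDED:
        value = "".join(ch for ch in value if not (128 <= ord(ch) <= 255))
    return value

print(clean_numc("0000012345"))        # '0000012345' while NUMC_KEEPZERO is TRUE
print(clean_text("MAT\x01ERIAL Ä"))    # control char removed; 'Ä' kept while REMOVE_EXTENDED is FALSE
```
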
KERNEL | VALUE_HANDLING | default, move | move | The transformation option controls the tolerance of conversions when mapping elementary ABAP types in KERNEL serialization. default: if there is an invalid value in a field of type n, the exception CX_SY_CONVERSION_NO_NUMBER is raised. move: invalid values in a field of type n are copied to XML or JSON without being changed.
LOGGER | DB | TRUE / FALSE | TRUE | This parameter is used to control whether logs are saved to the Connector's own logging tables or not.
LOGGER | PAGEREAD_COMMIT | TRUE / FALSE | FALSE | By default, page read log messages are only sent to Foundry and not stored in the database.
LOGGER | SLG | TRUE / FALSE | FALSE | This parameter is used to create log entries in SAP SLG logging.
LOGGER | SLG_EXPIRY | Numeric | 30 | SLG_EXPIRY can be set in days; if it is not set, the standard SAP SLG expiration policy applies.
LOGGER | SLG_KEEP | TRUE / FALSE | FALSE | SLG_KEEP is used to prevent logs being deleted in SLG before expiration.
LOGGER | TRACE_LEVEL | INFO / WARN / ERROR | WARN | This parameter controls which types of log messages will be saved to the database and returned to Foundry. If there is no record in the configuration table, the Connector will send all messages to Foundry, which is equivalent to using INFO. Trace log levels are as follows: ERROR – only log messages with type E-Error; WARN – only log messages with type W-Warning, I-Information, E-Error, T-Trace; INFO – all log messages (S-Success, W-Warning, I-Information, E-Error, T-Trace). See the sketch after this row.
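
The TRACE_LEVEL mapping between log levels and SAP message types can be expressed as a simple lookup, as in the Python sketch below. The mapping follows the description above; the storage and the function are illustrative.

```python
# Illustrative sketch of the TRACE_LEVEL message-type filter.
LEVEL_TYPES = {
    "ERROR": {"E"},
    "WARN": {"W", "I", "E", "T"},
    "INFO": {"S", "W", "I", "E", "T"},
}

def keep_message(msg_type: str, trace_level: str = "WARN") -> bool:
    return msg_type in LEVEL_TYPES[trace_level]

print([t for t in "SWIET" if keep_message(t)])            # WARN (default) drops S-Success
print([t for t in "SWIET" if keep_message(t, "ERROR")])   # ERROR keeps only E
```
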
NAMESPACE | TIMESTAMP | TRUE / FALSE | TRUE | If this parameter is set to TRUE, the timestamp and row number fields will be named /PALANTIR/TIMESTAMP and /PALANTIR/ROWNO (which is now the default) instead of ZPAL_TIMESTAMP and ZPAL_ROWNO (the naming convention in earlier versions).
PAGE | DELETE_BACKGROUND | TRUE / FALSE | TRUE | If this parameter is enabled, page deletion from the paging table occurs in a background job when close and commit requests are sent.
PAGE | MIN_PAGESIZE | Numeric | 5000 | This parameter sets the minimum row count of a page while writing data in pages. If a user specifies a page size lower than this in Foundry, it will be disregarded and this minimum used instead. This protects against poor performance with very small page sizes (see the sketch after this group).
PAGE | PAGEDELIMITER | any character | Horizontal Tab Stop Character | When the PAGEFORMAT parameter is set to TSV, the PAGEDELIMITER parameter specifies the character used to separate columns during data serialization.
PAGE | PAGEFORMAT | TSV / JSON / KRNL | JSON | This parameter indicates the conversion format of the data that will be sent to Foundry.
PAGE | PAGESIZE | Numeric | 10000 | This parameter sets the default row count of a page while writing data in pages. If a user does not specify a page size parameter in Foundry, this value will be used.
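
The PAGE parameters combine as follows: PAGESIZE is the default page size, MIN_PAGESIZE is a lower bound on whatever the sync requests, and PAGEDELIMITER separates columns when PAGEFORMAT is TSV. A minimal Python sketch follows, with the clamping behaviour and function names as assumptions.

```python
# Illustrative sketch of page size selection and TSV serialization.
PAGESIZE = 10000        # used when the sync does not specify a page size
MIN_PAGESIZE = 5000     # requests below this are raised to the minimum
PAGEDELIMITER = "\t"    # column separator when PAGEFORMAT = TSV

def effective_page_size(requested=None):
    if requested is None:
        return PAGESIZE
    return max(requested, MIN_PAGESIZE)

def serialize_tsv_row(values):
    return PAGEDELIMITER.join(values)

print(effective_page_size(None), effective_page_size(1000), effective_page_size(20000))  # 10000 5000 20000
print(serialize_tsv_row(["0000012345", "EA", "2024-01-31"]))
```
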
REMOTEBEX | CELL_TO_STRING_ | CELL_TO_STRING_W / CELL_TO_STRING_M / CELL_TO_STRING_P / CELL_TO_STRING_Q / CELL_TO_STRING_F / CELL_TO_STRING_D / CELL_TO_STRING_T | | If the data type of the InfoObject needs to be changed to a string, it is defined here. A sample usage is CELL_TO_STRING_D, which converts date fields into a character data type. The data types are: W Amount, M Quantity, P Price, Q Quota, F Number, D Date, and T Time.
REMOTEBEX | ENGINE | Alphanumeric | V2 | A new BEx Query Engine has been introduced, which brings performance improvements and additional support for query elements such as display attributes. It is not enabled by default but can be turned on by setting this parameter to V3.
REMOTEBEX | PAGING | TRUE / FALSE | FALSE | Paging for BEx queries is supported via filters. The Connector automatically generates separate filters for each page, so large BEx queries can be run without having to split the sync manually. Filter generation is based on InfoObjects in the rows of the BEx query: if the posted InfoObject IDs do not exceed the threshold value, the InfoObject is used in filter generation; otherwise, it is discarded. The BEx query is then run for each filter separately to extract all BEx query data. By default, paging functionality is not enabled.
REMOTEBEX | PAGING_MEMBER_LIMIT | Numeric | 100 | When BEx paging is enabled, the Connector uses this threshold to prevent unnecessary dimensions being used as filter candidates. If the posted value for an InfoObject is more than PAGING_MEMBER_LIMIT, it is considered too fine-grained and is discarded for filter generation.
REMOTEBEX | RANGESIZE | Numeric | 1000 | While generating filters, if any InfoObject has many values and the member limitation is not applied, this parameter controls the filter list for each page.
REMOTEBEX | TECHNICAL_NAMES | TRUE / FALSE | FALSE | If this parameter is enabled, BEx column names will be retrieved using their technical names instead of human-readable texts.
REMOTEBEX | TEXT | TRUE / FALSE / BEX | BEX | If this parameter is set to TRUE, the Key and Text of the characteristic / key figure are concatenated as the column name. If it is set to FALSE, only the Key of the characteristic / key figure is used as the column name. If it is set to BEX, the Query parameter is used to define column names.
REMOTEINFOPROVIDER | CHECK_EXIST | TRUE / FALSE | TRUE | If enabled, a check is performed to verify whether the InfoProvider exists in the remote system.
REMOTEINFOPROVIDER | COMPLETE_ANSWER | TRUE / FALSE | TRUE | When retrieving all authorized values of a user for an InfoObject, if this parameter is set to TRUE, it returns not only the authorized values for the queried InfoObject but also all authorizations that include the characteristic.
REMOTEINFOPROVIDER | ENABLE_DB_AGGREGATE | TRUE / FALSE | TRUE | This parameter determines whether database aggregation will be used while ingesting the InfoProvider's data.
REMOTEINFOPROVIDER | FORCE_INIT | TRUE / FALSE | FALSE | If this parameter is set to TRUE, the last successful Job ID will not be set as successful.
REMOTEINFOPROVIDER | READ_OPEN_REQUEST | TRUE / FALSE | TRUE | This parameter is used to toggle between reading either green-only requests or all requests (green and yellow) in an InfoProvider. The default behavior is to read all requests.
REMOTETABLE | CDPOS_FILTER_MAINTABLE_FOR_DELETE | TRUE / FALSE | FALSE | This parameter addresses an issue with the CDPOS incremental type. Deleted CDPOS documents were retrieved by a query that checked change document records across the whole table family, which caused documents to be marked as deleted even though only their item-level data had been deleted. The key-parsing approach has therefore been changed: table keys are now checked only for the main table, or for tables in the same table family that have the same number of keys as the main table or more. For example, if Sales Document 01000020 is deleted, and Sales Item is in the same table family, then entries for Sales Item with keys like 0100020_0010 should also be deleted. If the parameter is set to FALSE, deleted documents are retrieved from the CDPOS table for the main table and for related tables that have the same number of keys as the main table or fewer. If the parameter is set to TRUE, only the main table's deleted documents are retrieved from the CDPOS table.
REMOTETABLE | DBPAGING_CREATE_SHADOWTAB | TRUE / FALSE | FALSE | If this parameter is disabled and the source system runs on HANA DB, a shadow table will not be created.
REMOTETABLE | DBPAGING_ENABLED | TRUE / FALSE | TRUE | If this parameter is enabled, data ingestion will be performed with parallel processes.
REMOTETABLE | DBPAGING_UNICODE_PREFIX | TRUE / FALSE | TRUE | If this parameter is enabled, filter fields that have CHAR data types are flagged (the IS_CHAR_FIELD field is set to X).
REMOTETABLE | FORCE_INIT | TRUE / FALSE | FALSE | If this parameter is set to TRUE, the last successful Job ID will not be set as successful.
REMOTETABLE | PARALLEL | TRUE / FALSE | FALSE | If this parameter is set to TRUE, syncs will run in parallel processing mode.
REMOTETABLE | PARALLEL_JOB | Numeric | 5 | If PARALLEL is TRUE and the number of parallel jobs to use is not defined at a data sync level, the value of this parameter will be used as a system-wide default.
REMOTETABLE | PARALLEL_PAGE_LIMIT | Numeric | 50000 | If there are fewer rows in the result set than this value, parallel processing will not be used.
REMOTETABLE | ROWCOUNT_BY_TABCLASS | TRUE / FALSE | FALSE | If this parameter is set to TRUE, for cluster tables the row count returned by the system will be for the individual table; if it is FALSE (the default), the row count will be for the total cluster.
REMOTETCODE | SCHEMA_FROM_DATA | TRUE / FALSE | FALSE | If the parameter is set to FALSE, the schema for the ALV report is retrieved from the metadata, which is fetched during runtime. If set to TRUE, the schema is retrieved directly from the data itself.
RETRY | COUNT | Numeric | 1 | This parameter indicates the maximum number of times the sync will be retried if system resource checks fail.
RETRY | DELAY | Numeric | 5 | If system resource checks fail, this parameter indicates how long (in seconds) to wait before checking the resources again.
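
RETRY COUNT and RETRY DELAY describe a simple wait-and-retry loop around the system resource checks. The sketch below shows that loop in Python; resource_check() is a stand-in for the SYSTEM / SYSTEM_THRESHOLD checks and the function itself is illustrative only.

```python
# Illustrative sketch of the resource-check retry loop.
import time

RETRY_COUNT = 1   # maximum number of retries after the first failed check
RETRY_DELAY = 5   # seconds to wait between checks

def wait_for_resources(resource_check):
    for attempt in range(RETRY_COUNT + 1):
        if resource_check():
            return True
        if attempt < RETRY_COUNT:
            time.sleep(RETRY_DELAY)
    return False  # the sync is aborted when resources never become available

print(wait_for_resources(lambda: True))
```
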
SLT | CHECK_CONCURRENT_JOBS | TRUE / FALSE | TRUE | Checks if there are any running jobs for this incremental ID. If the previous attempt has not completed yet, an error occurs to stop new data ingestion.
SLT | CONTEXT_BASED_AUTHORIZATION | TRUE / FALSE | FALSE | If this parameter is set to TRUE, authorization checks will be run against the SLT context based on authorization object /PALAU/SCN.
SLT | CONTEXT_CONFIGURATION | | N/A | This parameter can be used to set a system-wide default SLT context name, to be used in the event no context is sent from Foundry.
SLT | DEBUG_MODE | TRUE / FALSE | FALSE | If this parameter is set to TRUE, an infinite loop will start in the background job which ingests data and writes it as pages.
SLT | FETCH_OPTION | XML / DIRECT | XML | This parameter indicates whether to use fetch method XML or DIRECT. The XML fetch method is fast since it fetches data in a compressed format. The DIRECT fetch method is slower since it fetches data as a string and processes each row individually. The DIRECT fetch method should only be used if there is an error with the XML fetch method due to special characters in the data.
SLT | FORCE_INIT | TRUE / FALSE | FALSE | If this parameter is set to TRUE, the last successful Job ID will not be set as successful. In addition, the SLT source table will be reset to ingest initial delta values.
SLT | GET_LIST_MODE | V1 / V2 | V1 | The V1 mode ingests the SLT table list using the ODP RFC RODPS_REPL_ODP_GET_LIST. Since this standard RFC has bugs which impact performance, another version of SLT table list ingestion has been introduced. The V2 mode uses the IUUC_GET_TABLES RFC, which ingests the table list from the DD02L table. The IUUC_GET_TABLES RFC is also used inside RODPS_REPL_ODP_GET_LIST.
SLT | MAX_ROWS_PER_SYNC | Numeric | N/A | If this parameter is set, it will be taken into account only for syncs with the APPEND transaction type. Each sync will stop when it has ingested MAX_ROWS_PER_SYNC rows of data.
SLT | PARALLEL | TRUE / FALSE | FALSE | If this parameter is set to TRUE, syncs will run in parallel processing mode.
SLT | PARALLEL_JOB | Numeric | 3 | If PARALLEL is TRUE and the number of parallel jobs to use is not defined at a data sync level, the value of this parameter will be used as a system-wide default.
SLT | PARALLEL_PAGE_LIMIT | Numeric | 50000 | If there are fewer rows in the result set than this value, parallel processing will not be used.
SLT | PARALLEL_FETCH_EXC_MSG | | -511-RODPS_REPL | When ingesting SLT packages in parallel, fetching the last package can result in an error message due to the absence of an extracted package. To distinguish this specific error from other fetch errors, both the error message number and ID are defined in this parameter.
SLT | PARALLEL_PACKAGE_SIZE | Integer values | 0 | This parameter indicates how many SLT packages will be read in a single parallel process when fetching data from ODP.
SLT | QUEUE | TRUE / FALSE | FALSE | A new approach has been introduced for SLT work process handling. By default, the Connector uses a BTC (background) process to fetch from SLT. If multiple syncs are running, a BTC process is used for each. Set this parameter to TRUE to use a single BTC process to wait for multiple initial loads from SLT. Once the data packages reach SLT, Foundry processes are started. This improves BTC resource efficiency.
SLT | RECOVER_FROM_CONNECTOR | TRUE / FALSE | TRUE | When a sync is triggered and a pointer is already open in SLT, this parameter determines the recovery method. If set to TRUE, pages will be retrieved from the Connector. If set to FALSE, recovery will be handled within SLT.
SLT | REMOTEAGENT_<contextName> | | N/A | If the SLT server and the Connector are on different servers, this parameter should be defined. In Parameter Name, <contextName> indicates the SLT context name. For Parameter Value, the RFC destination should be defined to point to the source system of the SLT context.
SLT | RFC_CONFIGURATION | | NONE | This parameter indicates the RFC name of the SLT server. If this parameter is not set, or set to blank, the RFC configuration will be set to "NONE".
SLT | SLT_DATA_<dataType> | Numeric | N/A | If the output length of an ABAP data type is incorrect, this parameter can be used to change the data length of that data type. In the Parameter Name field, <dataType> refers to the ABAP data type for which the length should be changed.
SLT | SLT_DTYPE_<dataType> | ABAP Data Types | N/A | If a data type cannot be recognized by Foundry, this parameter can be used to change the data type to another type. In the Parameter Name field, <dataType> refers to the data type that needs to be changed.
SLT | TIMESTAMP | ON / OFF | OFF | When this parameter is set to ON, the data will include a timestamp showing when the data was fetched and a row order number. This information can be used to deduplicate data later in the pipeline if required.
SLT | TRACE_BEFORE_FETCH | TRUE / FALSE | FALSE | By default, running a trace for SLT also includes the replication (calculation, initial data transfer and replication object generation), which sometimes takes longer than the limit on a trace. By setting this parameter to TRUE, the trace will start before the SLT fetch operation, bringing more clarity to trace results.
SLT | TRIGGER_STATE_TIMEOUT | Integer values | 300 | The trigger state is checked once this timeout has elapsed, to determine whether there is an error in the replication of the table.
SYSTEM | ABORT_RETRY_COUNT | Numeric | 10 | This parameter defines the number of attempts that will be made to abort the job when a transaction is closed.
SYSTEM | AUTH_CHECK_SOURCE | TABLE / PFCG | PFCG | Used to configure custom authorizations.
SYSTEM | AUTH_GET_LIST | TRUE / FALSE | FALSE | This parameter is used to enable or disable authorization checks for the list of values for object types.
SYSTEM | CONTEXT_VALIDITY_CHECK | TRUE / FALSE | TRUE | If this parameter is set to FALSE, SLT and Remote Agent contexts will not be eliminated from the list returned to Foundry, even if they are not valid contexts.
SYSTEM | CONTINUOUS_RESOURCE_CHECK | TRUE / FALSE | TRUE | Enables resource checks for all requests (init and all paging requests). If FALSE, resource checks are only carried out for the init request.
SYSTEM | CPU_CHECK | TRUE / FALSE | TRUE | Enables or disables CPU checks.
SYSTEM | DISABLE_DYNFILTER | TRUE / FALSE | FALSE | When this parameter is set to TRUE, dynamic filters will not be parsed in the Connector.
SYSTEM | DYNAMIC_TABLE | V1 / V2 | V1 | This parameter can be used to address an issue seen in CL_ALV_TABLE_CREATE=>CREATE_DYNAMIC_TABLE, which hits the dynamic table limits. To enable the new dynamic table routines, set this parameter to V2.
SYSTEM | ERP_SOURCE_INFO | TRUE / FALSE | TRUE | Used to define whether or not ERP source information will be returned to Foundry with the list of contexts (get_context requests).
SYSTEM | FAILED_AUTH_MAX_COUNT | Numeric | 200 | This parameter is used to limit failed authorization check messages from SU53. There could be cases where too many error messages are generated in the SAP system, which may potentially affect the extraction process.
SYSTEM | FILTER_DECODE | TRUE / FALSE | FALSE | Set to TRUE to enable non-Unicode filters.
SYSTEM | INFOPROVIDER_AUTH_CHECK | TRUE / FALSE | FALSE | Enables row-level authorization support for InfoProviders. When this parameter is set to TRUE (or the authCheck parameter is TRUE on the Foundry sync), the Connector will check the user's authorizations for authorization-relevant InfoObjects and set filters on them to avoid authorization errors. SAP APIs are used to generate filters based on row-level authorization.
SYSTEM | KILL_HANGING_JOB | TRUE / FALSE | FALSE | If set to TRUE, the Connector will check paging requests from Foundry, and if there are no more page read requests after a certain period, the paging job is cancelled (see the sketch after the SYSTEM rows).
SYSTEM | KILL_HANGING_JOB_THRESHOLD | Numeric | 1800 | If KILL_HANGING_JOB is set to TRUE, this parameter defines the number of seconds to wait before a job is considered to be hanging.
SYSTEM | KILL_HANGING_JOB_RETRY | Integer values | 5 | If a network issue occurs in Foundry, the SAP background job continues running. However, if page read requests are not sent from Foundry to the Connector, the job will be aborted. If there is an error while aborting the job, the abort will be retried the specified number of times, as defined by the KILL_HANGING_JOB_RETRY parameter.
SYSTEM | MEMORY_CHECK | TRUE / FALSE | TRUE | Enables or disables memory checks.
SYSTEM | MEMORY_CHECK_SOURCE | ST06 / ST02 | ST06 | This parameter determines whether the memory consumption value used for resource checks will be retrieved from the ST02 transaction code or the ST06 transaction code in SAP.
SYSTEM | PROCESS_CHECK | TRUE / FALSE | TRUE | Enables checks on the minimum number of permitted work processes; works in conjunction with PROCESS_MIN_BG and PROCESS_MIN_DIA.
SYSTEM | RESOURCE_CHECK | TRUE / FALSE | TRUE | Enables or disables resource checks. If FALSE, all checks are disabled; if TRUE, the other parameters (CPU_CHECK and MEMORY_CHECK) are checked.
SYSTEM | RESOURCE_CHECK_SERVER | LOCAL / ALL | ALL | ALL: check all application servers and return TRUE if any of them has available resources, starting with the local server that is processing the current request. LOCAL: check only the local server and return TRUE / FALSE based on availability.
SYSTEM | SERVICE_ENCODING | Alphanumeric | | Used to enable charset support for non-Unicode NetWeaver 7.4 installations.
SYSTEM | SPLIT_SYNC_CANCEL_RETRY | Integer values | | When a split sync is initiated from Foundry, it stops ingesting data upon reaching the maximum row limit specified by the maxRowsPerSync parameter. Foundry sends close and commit requests while the background job for the initial load of the split sync continues, which is why the background job is not cancelled when a close request for the split sync is received. However, if the initial load of a split sync is actually cancelled, the Connector cannot differentiate this case; it is checked and aborted in an asynchronous job. The job abortion will be attempted as many times as specified by the KILL_HANGING_JOB_RETRY parameter.
SYSTEM | SPLIT_SYNC_CANCEL_THRESHOLD | Integer values | 300 | When a split sync is initiated from Foundry, it stops ingesting data upon reaching the maximum row limit specified by the maxRowsPerSync parameter. Foundry sends close and commit requests while the background job for the initial load of the split sync continues, which is why the background job is not cancelled when a close request for the split sync is received. However, if the initial load of a split sync is actually cancelled, the Connector cannot differentiate this case; it is checked and aborted in an asynchronous job after this threshold duration (defined in seconds) has passed.
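
The KILL_HANGING_JOB, KILL_HANGING_JOB_THRESHOLD and KILL_HANGING_JOB_RETRY parameters describe how a paging job with no recent page read requests is detected and aborted. A hedged Python sketch of that logic follows; the timestamps, the abort callback and the function itself are assumptions for illustration.

```python
# Illustrative sketch of hanging paging-job detection and abort retries.
import time

KILL_HANGING_JOB = True
KILL_HANGING_JOB_THRESHOLD = 1800   # seconds without a page read request
KILL_HANGING_JOB_RETRY = 5          # abort attempts if the cancellation itself fails

def maybe_cancel(last_page_read_ts, abort_job, now=None):
    if not KILL_HANGING_JOB:
        return False
    now = time.time() if now is None else now
    if now - last_page_read_ts <= KILL_HANGING_JOB_THRESHOLD:
        return False                 # the job is still receiving page reads
    for _ in range(KILL_HANGING_JOB_RETRY):
        if abort_job():
            return True
    return False

print(maybe_cancel(last_page_read_ts=0.0, abort_job=lambda: True, now=3600.0))  # True
```
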
SYSTEM_THRESHOLD | CPU_LOAD | Numeric | 80 | If the current system CPU load is higher than this value, the sync will be aborted (see the sketch after this group).
SYSTEM_THRESHOLD | CPU_USER | Numeric | 80 | If the current system CPU user load is higher than this value, the sync will be aborted.
SYSTEM_THRESHOLD | CPU_IDLE | Integer values | 5 | If the current system CPU idle value is higher than the CPU_IDLE parameter, the Connector cannot proceed to ingest or read data.
SYSTEM_THRESHOLD | MEMORY_FREE | Numeric | 5 | If the available memory (%) of the current system is less than this minimum, the sync will be aborted.
SYSTEM_THRESHOLD | PROCESS_MIN_BG | Numeric | 1 | Minimum required number of background processes available on the SAP Application Server for the sync to proceed.
SYSTEM_THRESHOLD | PROCESS_MIN_DIA | Numeric | 1 | Minimum required number of dialog processes available on the SAP Application Server for the sync to proceed.
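
The SYSTEM_THRESHOLD values can be read as a single gate the sync must pass before it proceeds. The Python sketch below combines them; the snapshot dictionary is an assumption, since the Connector reads these figures from the SAP application server (for example via ST06/ST02).

```python
# Illustrative sketch of the combined SYSTEM_THRESHOLD gate.
THRESHOLDS = {"CPU_LOAD": 80, "CPU_USER": 80, "MEMORY_FREE": 5,
              "PROCESS_MIN_BG": 1, "PROCESS_MIN_DIA": 1}

def resources_ok(snapshot: dict) -> bool:
    return (snapshot["cpu_load"] <= THRESHOLDS["CPU_LOAD"]
            and snapshot["cpu_user"] <= THRESHOLDS["CPU_USER"]
            and snapshot["memory_free_pct"] >= THRESHOLDS["MEMORY_FREE"]
            and snapshot["free_bg_processes"] >= THRESHOLDS["PROCESS_MIN_BG"]
            and snapshot["free_dia_processes"] >= THRESHOLDS["PROCESS_MIN_DIA"])

print(resources_ok({"cpu_load": 45, "cpu_user": 30, "memory_free_pct": 20,
                    "free_bg_processes": 3, "free_dia_processes": 8}))   # True
```
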
TABLE | CDPOS_FILTER_MAINTABLE_FOR_DELETE | TRUE / FALSE | FALSE | This parameter addresses an issue with the CDPOS incremental type. Deleted CDPOS documents were retrieved by a query that checked change document records across the whole table family, which caused documents to be marked as deleted even though only their item-level data had been deleted. The key-parsing approach has therefore been changed: table keys are now checked only for the main table, or for tables in the same table family that have the same number of keys as the main table or more. For example, if Sales Document 01000020 is deleted, and Sales Item is in the same table family, then entries for Sales Item with keys like 0100020_0010 should also be deleted. If the parameter is set to FALSE, deleted documents are retrieved from the CDPOS table for the main table and for related tables that have the same number of keys as the main table or fewer. If the parameter is set to TRUE, only the main table's deleted documents are retrieved from the CDPOS table.
TABLE | DBPAGING_CREATE_SHADOWTAB | TRUE / FALSE | FALSE | If this parameter is disabled and the source system runs on HANA DB, a shadow table will not be created.
TABLE | DBPAGING_ENABLED | TRUE / FALSE | TRUE | If this parameter is enabled, data ingestion will be performed with parallel processes.
TABLE | DBPAGING_MAXJOB | Integer values | 3 | This parameter specifies the number of jobs to be used for parallel database ingestion when database paging is enabled.
TABLE | DBPAGING_UNICODE_PREFIX | TRUE / FALSE | TRUE | If this parameter is enabled, filter fields that have CHAR data types are flagged (the IS_CHAR_FIELD field is set to X).
TABLE | FORCE_INIT | TRUE / FALSE | FALSE | If this parameter is set to TRUE, the last successful Job ID will not be set as successful.
TABLE | PARALLEL | TRUE / FALSE | FALSE | If this parameter is set to TRUE, syncs will run in parallel processing mode (see the sketch after this group).
TABLE | PARALLEL_JOB | Numeric | 5 | If PARALLEL is TRUE and the number of parallel jobs to use is not defined at a data sync level, the value of this parameter will be used as a system-wide default.
TABLE | PARALLEL_PAGE_LIMIT | Numeric | 50000 | If there are fewer rows in the result set than this value, parallel processing will not be used.
TABLE | ROWCOUNT_BY_TABCLASS | TRUE / FALSE | FALSE | If this parameter is set to TRUE, for cluster tables the row count returned by the system will be for the individual table; if it is FALSE (the default), the row count will be for the total cluster.
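
The TABLE (and REMOTETABLE / SLT) PARALLEL, PARALLEL_JOB and PARALLEL_PAGE_LIMIT parameters determine whether a sync is parallelised and with how many jobs. A short Python sketch of that decision follows; the function and its inputs are illustrative assumptions.

```python
# Illustrative sketch of the parallel-processing decision for table syncs.
PARALLEL = True
PARALLEL_JOB = 5            # system-wide default number of parallel jobs
PARALLEL_PAGE_LIMIT = 50000

def plan_jobs(expected_rows, sync_parallel_jobs=None):
    if not PARALLEL or expected_rows < PARALLEL_PAGE_LIMIT:
        return 1   # small result set or parallelism disabled: single process
    return sync_parallel_jobs or PARALLEL_JOB

print(plan_jobs(10_000))        # 1
print(plan_jobs(2_000_000))     # 5 (system-wide default)
print(plan_jobs(2_000_000, 8))  # 8 (value set on the sync wins)
```
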
TCODE | DEBUG_MODE | TRUE / FALSE | FALSE | If this parameter is set to TRUE, an infinite loop will start in the background job which ingests data and writes it as pages.
TCODE | SCHEMA_FROM_DATA | TRUE / FALSE | FALSE | If the parameter is set to FALSE, the schema for the ALV report is retrieved from the metadata, which is fetched during runtime. If set to TRUE, the schema is retrieved directly from the data itself.
TRACE | DURATION_LIMIT | Numeric | 30 | If a sync is running with trace functionality and this limit (in minutes) is reached, the trace will be turned off automatically (to avoid system short dumps).
TRACE | MAX_SESSION | Numeric | 10 | The default maximum number of sessions per sync for a trace session is 10, but this can be modified using this parameter. This is primarily intended for parallel extractions, to prevent trace files being generated for every parallel job.
TSV | CHARLIST | Alphanumeric | N/A | This parameter controls extraction from non-Unicode 4.6C/620/640 systems. Fixed dictionary characters are used by default, but this dictionary can be extended using this parameter to add missing characters.
TSV | ESCAPEMODE | AUTO / HEX / HEX_ZIPPED / CHAR_COMPARE | CHAR_COMPARE | This parameter controls extraction from non-Unicode 4.6C/620/640 systems. There is a fixed algorithm for character escaping, which can be changed using this parameter.