Interface Specifications for Rechnungsdatenservice 1.0

DATEV Rechnungsdatenservice 1.0 can be used to transfer digital vouchers and structured document data to DATEV Unternehmen online. From the transferred data, voucher records are generated for DATEV Belege online (processing form "Extended") or DATEV Kassenbuch online. Once the voucher records are provided from Belege/Kassenbuch online, posting proposals can be generated for our DATEV accounting system.

MUST: Only the format Belegsatzdatendatei may be used for the structured data (invoice/cash data).

API WORKFLOW

  1. Check the authorizations for the data service or DATEV Belege online
  2. Query metadata (e.g. fiscal year, processing form, invoice/cash books)
  3. Generate a job ID and verify further metadata
  4. Transfer the files (documents & data)
  5. Check the result of the transfer

MUST: Submitted jobs (POST) and read requests (GET) must stay in a healthy ratio. Issuing another 30 GET requests on top of one successfully submitted job is not an efficient integration.
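One simple way to keep this ratio visible in an integration is a small request counter. The sketch below is an illustration; the threshold of 10 reads per submitted job is an assumption for the example, not a DATEV limit.

```python
class RequestBudget:
    """Tracks the ratio of read (GET) to submitted-job (POST) requests.

    The guideline asks for a "healthy ratio"; the default of at most
    10 GETs per submitted job is an assumed threshold for illustration.
    """

    def __init__(self, max_gets_per_job=10):
        self.max_gets_per_job = max_gets_per_job
        self.gets = 0
        self.jobs = 0

    def record_get(self):
        self.gets += 1

    def record_job_submitted(self):
        self.jobs += 1

    def ratio_is_healthy(self):
        # Before the first job, a few metadata GETs are expected,
        # so we count against at least one job.
        jobs = max(self.jobs, 1)
        return self.gets / jobs <= self.max_gets_per_job
```

An integration can log a warning (or alert the developer) whenever `ratio_is_healthy()` turns false after a transfer.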

ONLINE API & ENDPOINTS for SANDBOX

Our Sandbox for Rechnungsdatenservice 1.0 is a technical mockup which provides predefined return values that are not related to the actual data submitted.

1. Check permissions

    GET https://accounting-dxso-jobs.../clients
    GET https://accounting-dxso-jobs.../clients/{client-id}

    This is where you query the datasets for which the client has the necessary permissions. The first endpoint returns all available datasets; the second returns the basic data for one specific client. The full list can comprise hundreds or thousands of datasets, especially for larger companies, so it may make sense to use only the second endpoint. In that case, however, the 3rd party app must offer an input option for the consultant and client number.

    Our sandbox works with the consultant number 455148 and client numbers 1-6. For the Rechnungsdatenservice 1.0, only the {client-id} 455148-1 can be used, since only client 1 has all the permissions for the data service.
    MUST: At least one of these endpoints for querying permissions for a dataset must be used. Display the company name and the consultant & client number after selecting the dataset.
    MUST: The authorization must be checked before each data transmission. If data is transmitted multiple times during the day, a single check per day is sufficient.
    MUST: The integration must respond to changes in permissions with a meaningful error message.

    Important note: Obtaining the long-term token for a {client-id} does not yet mean that the client actually has permissions for that {client-id}. The 3rd party app should therefore fetch the token and immediately call GET clients with it. Do not indicate to the client that the connection was established successfully until GET clients/{client-id} has confirmed the correct assignment of permissions.
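The selection and confirmation step can be sketched as follows. The field names used here ("consultant_number", "client_number", "name") are illustrative assumptions about the response payload, not the documented schema.

```python
def find_dataset(clients, consultant_no, client_no):
    """Return the dataset matching the consultant & client number entered
    in the 3rd party app, or None if no matching dataset is accessible.

    Only after this lookup succeeds (i.e. the clients endpoint confirmed
    the assignment) should the app report the connection as established.
    """
    for dataset in clients:
        if (dataset.get("consultant_number") == consultant_no
                and dataset.get("client_number") == client_no):
            return dataset
    return None


# Sandbox example: consultant 455148, client 1 is the only dataset with
# full permissions for the Rechnungsdatenservice 1.0. The company name
# is made up for the example.
sandbox_clients = [
    {"consultant_number": 455148, "client_number": 1,
     "name": "Musterfirma GmbH"},
]
selected = find_dataset(sandbox_clients, 455148, 1)
```

After a successful lookup, the app displays `selected["name"]` together with the consultant & client number, as required above.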

    Challenges:
    • Consultant issues a token with access to one test dataset out of several test datasets
    • Consultant enters an invalid consultant and client number

    2. Get & check metadata

    GET https://accounting-dxso-jobs.../clients/{client-id}/

    Use this endpoint to retrieve and check the specific configuration of the dataset.

    For the successful use of the Rechnungsdatenservice 1.0, the API user must have prepared his dataset from DATEV Belege/Kassenbuch online in a suitable way:

    • processing form "Extended" must be set ("basic_accounting_information" != null)
    • an appropriate fiscal year ("fiscal_year_...") must exist:
      • for invoices for Belege online: the relevant fiscal year, or at least the previous one
      • for cash data for Kassenbuch online: the relevant fiscal year
    • the configured account length ("account_length") must match the account length used in the XML files (length of "accountNo" and "bpAccountNo" = "account_length" + 1)
    • the required general ledger must be enabled ("ledgers") for the fiscal year:
      • incoming invoices: "is_accounts_payable_ledger_available" = true
      • outgoing invoices: "is_accounts_receivable_ledger_available" = true
      • cash: "is_cash_ledger_available" = true

    MUST: The endpoint must be used to validate the configuration. Before each data transfer, ensure that the configuration matches the integration's expectations and has not changed in the meantime. If data is transmitted several times a day, a single check before the first transmission is sufficient.
    MUST: The integration must be able to react appropriately to changes in the configuration, either with a meaningful error message or by automatically adapting to the changed conditions (e.g. adjusting the account length or the name of the accounting ledger).
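The checklist above can be turned into a preflight validation before the first transfer of the day. The flat dict layout of `config` below is an assumption for illustration; map it to the actual structure of the metadata response.

```python
def check_dataset_config(config, xml_account_digits, required_ledger_flag):
    """Collect configuration problems for a planned transfer.

    config              -- metadata for the dataset (flat dict, assumed
                           layout carrying the fields from the checklist)
    xml_account_digits  -- account-number length used in the XML files
    required_ledger_flag -- e.g. "is_accounts_payable_ledger_available"
    """
    problems = []

    if config.get("basic_accounting_information") is None:
        problems.append('processing form "Extended" is not set')

    # Account numbers in the XML files carry one digit more than the
    # configured account length.
    expected_digits = config.get("account_length", 0) + 1
    if xml_account_digits != expected_digits:
        problems.append(
            f"XML account numbers have {xml_account_digits} digits, "
            f"expected {expected_digits}")

    if not config.get(required_ledger_flag, False):
        problems.append(f"{required_ledger_flag} is not enabled")

    return problems
```

An empty result means the transfer may proceed; otherwise the integration reports the problems or adapts automatically (e.g. to a changed account length).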

    Challenges:
    • Consultant changes the dataset configuration so that the transmitted data no longer matches it

    3. Generate job ID and check further metadata


    POST https://accounting-dxso-jobs.../clients/{client-id}/dxso-jobs

    The job ID is the API counterpart to the ZIP envelope of the DATEV XML interface online file format. The relevant general ledger and the posting month must be specified in the HTTP body. Besides the job ID from the HTTP response, particular attention must be paid to the designation of the general ledger ("ledger_folder_names"): this designation must also be specified in the XML files. 95% of customers use the DATEV default names, but 5% change these defaults.


    MUST: The designation of the general ledger must be checked and, if necessary, adapted in the XML files.
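A minimal sketch of this step: the body field names ("ledger", "posting_month") and the response handling are assumptions for illustration; only "ledger_folder_names" is taken from the text above.

```python
def build_job_body(ledger, posting_month):
    """HTTP body for creating a job: the relevant general ledger and the
    posting month. Field names here are illustrative assumptions; consult
    the API reference for the exact schema."""
    return {"ledger": ledger, "posting_month": posting_month}


def ledger_designation(job_response, datev_default):
    """Pick the general-ledger designation to write into the XML files.

    Most customers keep the DATEV default name, but if the response
    carries a (customer-changed) name in "ledger_folder_names", that
    name wins and the XML files must be adapted to it."""
    names = job_response.get("ledger_folder_names") or []
    return names[0] if names else datev_default
```

The designation returned by `ledger_designation` is then written into the XML files before they are attached to the job.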

    4. Transferring the files (documents & data)

    In this step, all files are submitted individually via HTTP POST request to the job ID and a final HTTP PUT request triggers the processing of the job.

    POST https://accounting-dxso-jobs.../clients/{client-id}/dxso-jobs/{job-id}/files

    Transfer the individual files to a job ID.


    PUT https://accounting-dxso-jobs.../clients/{client-id}/dxso-jobs/{job-id}

    Initiate processing of the job ID in the DATEV datacenter.

    MUST: Invoices or POS data pending transmission must always be transmitted bundled in one job (per general ledger & month).
    MUST: The storage volume per job ID, summed over all files, must not exceed 150 MB. Larger jobs are inefficient to process and must therefore be avoided.
    SHOULD: Individual files larger than 20 MB are technically rejected by the API. Such large files should be compressed or converted to black and white before transmission, if possible.
    SHOULD: Use only ASCII characters (e.g. a GUID) for all file names to avoid processing problems.
    DONT: Do not generate a separate job ID for each individual invoice/cash register transfer.
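The size and naming rules above can be enforced before any upload starts. A sketch, assuming the planned files are known as a mapping of file name to size in bytes:

```python
MAX_FILE_BYTES = 20 * 1024 * 1024   # per-file limit rejected by the API
MAX_JOB_BYTES = 150 * 1024 * 1024   # ceiling per job ID over all files


def preflight(file_sizes):
    """Check a planned job (file name -> size in bytes) against the
    limits above and the ASCII file-name rule before uploading."""
    problems = []
    for name, size in file_sizes.items():
        if not name.isascii():
            problems.append(f"non-ASCII file name: {name!r}")
        if size > MAX_FILE_BYTES:
            problems.append(f"{name} exceeds the 20 MB per-file limit")
    if sum(file_sizes.values()) > MAX_JOB_BYTES:
        problems.append("total job volume exceeds 150 MB; split the job")
    return problems
```

Files that fail the per-file limit are candidates for compression or black-and-white conversion; a job that fails the total limit should be split by posting month or general ledger.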

    Challenges:
    • Consultant has multiple invoices transferred in a month.

    5. Check result of transfer

    The API allows you to check a rudimentary job status on the one hand and, once the job has been processed, to retrieve a more detailed processing log on the other.

    GET https://accounting-dxso-jobs.../clients/{client-id}/dxso-jobs/{job-id}

    Request the processing status for a job.
    Please note: Our sandbox only returns the processing status "4" here.


    GET https://accounting-dxso-jobs.../clients/{client-id}/dxso-jobs/{job-id}/protocol-entries

    Request a detailed processing log after the job has been processed (corresponds to the import log from DATEV Belege online).

    MUST: Both GET requests must be used. Immediately after the data transfer, the processing result must be made transparent to the API user and documented/logged in a suitable manner.
    MUST: In the log of the 3rd party app, both the processing status (GET 1) and the processing log (GET 2) must be viewable and stored for a longer period (at least 1 year).
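One way to satisfy the logging requirement is to archive both responses as a single timestamped record. The record layout below is an illustration, not a DATEV format; the status and protocol payloads are stored as received.

```python
import json
from datetime import datetime, timezone


def archive_job_result(job_id, status, protocol_entries):
    """Bundle the processing status (GET 1) and the processing log
    (GET 2) into one timestamped JSON record, ready to be persisted
    for the required retention period of at least one year."""
    record = {
        "job_id": job_id,
        "archived_at": datetime.now(timezone.utc).isoformat(),
        "status": status,
        "protocol_entries": protocol_entries,
    }
    return json.dumps(record, ensure_ascii=False)
```

The returned JSON string can be appended to an audit log or stored in a database, so both the status and the log remain viewable in the 3rd party app.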

    Challenges:
    • Consultant views the log of the 3rd party app and checks whether the return values match the predefined data of the sandbox.

ONLINE API & ENDPOINTS for PRODUCTION

To use this API in production, a genuine DATEV authentication medium and a dataset for the DATEV Belege online application in "Extended" processing form are required. Further information on customer-side requirements can be found here.

The endpoints, workflow steps and MUST/SHOULD requirements are identical to the sandbox section above. In production, use the real consultant and client numbers of the connected dataset instead of the sandbox test values.


CHANGELOG

Version  Date        Changes
1.0      03.04.2023  First release