Receiving HTTP 408 errors from the transactiondistributions endpoint

This afternoon I started receiving HTTP 408 errors when using the transactiondistributions endpoint. I've been working my way through our organization's historical transactions to download them so we can use them internally, but this has really halted my progress. I'm wondering if I'm hitting an issue with the API server not fully finishing my request. I was using the bankaccounts API to download check data simultaneously from the same internal server without any issues. Has anyone experienced this and can offer any guidance? I checked with support via chat and they suggested I also ask here.

Thank you!

Comments

  • Erik Leaver (Blackbaud Employee)

    @Daniel Maxwell Were you able to successfully make the call before yesterday? Did you get the same result with the “Try It” tool?

  • @Erik Leaver I've made several hundred requests to the endpoint already, slowly working through retrieving the recordset. I've also just tried a direct request with Postman; the timeout there and in my tool is set to infinite, but the request still timed out after about a minute. From what I've tried so far, I wonder if the server itself is taking too long to respond, since that can also produce a 408 error. I had a routine running for at least 10 hours overnight; it should have made hundreds of requests, but only about 10 of them succeeded.

  • @Erik Leaver I also have a support case open to look into this. I'm not sure what other options I can adjust to make this work consistently again.

  • Erik Leaver (Blackbaud Employee)

    @Daniel Maxwell I'll add myself as a watcher to the case. I see the documentation notes the API has a maximum request size limit of 8 MB. Are your successful returns under this size? Have you tried different limits to return smaller result sets?

  • @Erik Leaver I just used the “Try It” option and got the same error.

    The request size is definitely under 8 MB. The responses are generally under 3 MB, though from what I read the response size isn't what's limited.

    I did not try a different limit. I'll do that now.

  • @Erik Leaver I've tried the “same” request via the “Try It” tool in the API docs, reducing the limit from 5,000 to 1,000 and then to 100, and it didn't work either time.

  • Erik Leaver (Blackbaud Employee)

    @Daniel Maxwell One last request: can you use “Try It” with the SKY Developer Cohort so we can test against that environment?

  • @Erik Leaver I just asked for access. However, to be clear, this was working before. It also works when I don't include an offset. When I provide an offset of, say, 4,980,000 or anywhere in that range, it won't work.

  • Erik Leaver (Blackbaud Employee)

    @Daniel Maxwell Ah, interesting that the issue arises with the offset. I'm guessing that offset has grown as you've worked through the records, so maybe that's what's at play here.

    I've shared our troubleshooting with support & they are going to investigate further. I'll keep an eye on it.

  • @Erik Leaver Thank you. It almost seems like either the indexing isn't being used, so response time is long because of table-scan-style access, or there's another table at play alongside the “fixed indexing” that isn't playing nice. Without knowing the backend systems, and applying what I know of SQL databases, indexing, and general programming, I'd guess the server-side query is simply taking too long, and my perception is that the 408 error comes from the API server closing the connection due to a timeout on its side rather than mine.

    Thanks for keeping an eye on it. I really have to get this fixed, as it's hampering us.

  • Erik Leaver (Blackbaud Employee)

    @Daniel Maxwell Noting support's response here in case others encounter a similar issue:

    Customer should leverage the additional filters to fulfil their use case, as the request is quite large; e.g., they can use the parent-child relationship to filter a specific set of transactions via batch ID.

    See: ListJournalEntriesSingleJournalEntryBatch. This endpoint can be used to pull all the batches, and the batch_id filter can then be applied to get the transactions for each one.
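
    A rough sketch of that two-step pattern in Python; the host, routes, and field names below are assumptions, not confirmed API details:

    ```python
    import requests

    BASE_URL = "https://api.sky.blackbaud.com/generalledger/v1"  # host/route assumed
    HEADERS = {
        "Bb-Api-Subscription-Key": "YOUR_SUBSCRIPTION_KEY",  # placeholder credentials
        "Authorization": "Bearer YOUR_ACCESS_TOKEN",
    }

    # Step 1: pull the journal entry batches (per the pointer to
    # ListJournalEntriesSingleJournalEntryBatch; the exact route is an assumption).
    resp = requests.get(f"{BASE_URL}/journalentrybatches", headers=HEADERS, timeout=120)
    resp.raise_for_status()
    batches = resp.json().get("value", [])

    # Step 2: fetch transaction distributions one batch at a time, so each
    # request stays small instead of paging through the whole table by offset.
    for batch in batches:
        resp = requests.get(
            f"{BASE_URL}/transactiondistributions",
            headers=HEADERS,
            params={"batch_id": batch["batch_id"]},  # field name assumed
            timeout=120,
        )
        resp.raise_for_status()
        distributions = resp.json().get("value", [])
        # ...store distributions locally...
    ```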

  • @Erik Leaver I actually did not use their suggestion. I continued to use the Transaction Distribution (List) API call, incorporated the from_date and to_date parameters, and went after the data in yearly segments, roughly as sketched below. For this set of tasks my goal was to replicate the data as it exists via the API. Next, I need to put an incremental tool in place for ongoing transactions.
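
    A minimal sketch of that yearly-segment approach, assuming the documented from_date, to_date, limit, and offset parameters; the host/route and the response shape are assumptions:

    ```python
    import requests

    BASE_URL = "https://api.sky.blackbaud.com/generalledger/v1"  # host/route assumed
    HEADERS = {
        "Bb-Api-Subscription-Key": "YOUR_SUBSCRIPTION_KEY",  # placeholder credentials
        "Authorization": "Bearer YOUR_ACCESS_TOKEN",
    }

    def fetch_year(year, limit=1000):
        """Page through one calendar year of transaction distributions."""
        rows, offset = [], 0
        while True:
            resp = requests.get(
                f"{BASE_URL}/transactiondistributions",
                headers=HEADERS,
                params={
                    "from_date": f"{year}-01-01",
                    "to_date": f"{year}-12-31",
                    "limit": limit,
                    "offset": offset,
                },
                timeout=120,  # fail fast instead of hanging on a slow request
            )
            resp.raise_for_status()
            page = resp.json().get("value", [])
            rows.extend(page)
            if len(page) < limit:  # a short page means this year is done
                return rows
            offset += limit

    # Walk the history one year at a time so no single offset grows too large.
    all_rows = []
    for year in range(2000, 2025):  # adjust to your organization's history
        all_rows.extend(fetch_year(year))
    ```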

  • Alex Wong (Community All-Star)

    @Daniel Maxwell
    Do you still have the issue? I remember answering a question about transaction distributions on another post; not sure if it was you.

  • @Alex Wong
    I ended up having to use other criteria in my requests. I basically couldn't do a sequential set of requests to get a complete copy, starting from record 1 to the present; I hit somewhere around 4,000,000 records and that was as far as I could get. The team suggested some calls to deal with things in batches, but just before that I had switched to using dates to limit my requests to a year of data at a time. Ultimately, I was able to get the historical data out of the API and stored locally, and now I'm retrieving much smaller numbers of records.

  • Alex Wong (Community All-Star)

    @Daniel Maxwell
    We have just over 4M transaction distribution records and have no problem getting them all.

    I run two parallel loops:


    One loop is a “for loop” that allows concurrency: it runs three GET requests at a time over a list of offsets in increments of 5,000, up to 2,995,000.

    The second loop is a “do until done” loop that starts at offset 3,000,000 and pulls 5,000 records at a time until done.

    The calculation here is that the three concurrent “for loop” requests will average out to getting about 1M records each.

    The “do until done” loop will do a little over 1M, so the two finish around the same time.
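
    A rough sketch of that split in Python, with a thread pool standing in for the concurrent “for loop”; the host/route and response shape are assumptions:

    ```python
    from concurrent.futures import ThreadPoolExecutor

    import requests

    BASE_URL = "https://api.sky.blackbaud.com/generalledger/v1"  # host/route assumed
    HEADERS = {
        "Bb-Api-Subscription-Key": "YOUR_SUBSCRIPTION_KEY",  # placeholder credentials
        "Authorization": "Bearer YOUR_ACCESS_TOKEN",
    }
    LIMIT = 5_000

    def fetch_page(offset):
        """Fetch one page of 5,000 transaction distributions at the given offset."""
        resp = requests.get(
            f"{BASE_URL}/transactiondistributions",
            headers=HEADERS,
            params={"limit": LIMIT, "offset": offset},
            timeout=120,
        )
        resp.raise_for_status()
        return resp.json().get("value", [])

    # Loop 1: fixed offsets 0 .. 2,995,000, three requests in flight at a time.
    with ThreadPoolExecutor(max_workers=3) as pool:
        pages = list(pool.map(fetch_page, range(0, 3_000_000, LIMIT)))

    # Loop 2: "do until done" from offset 3,000,000 until a short page signals the end.
    offset = 3_000_000
    while True:
        page = fetch_page(offset)
        pages.append(page)
        if len(page) < LIMIT:
            break
        offset += LIMIT

    rows = [row for page in pages for row in page]
    ```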

  • Alex Wong (Community All-Star)

    @Daniel Maxwell
    The “FULL DUMP” takes about 2.5 hours.

    So this is only done when needed (e.g., when the transaction distribution tables get out of sync). I normally do an iterative sync, which was explained in more detail here:

    https://community.blackbaud.com/forums/viewtopic/493/73230

    but it looks like it was you who was asking on that post too, so you already know.

  • @Alex Wong
    Yeah, that was me; you gave me some good info. I've been buried in getting data synced and getting internal systems converted, so I now have the bulk of what I need. I still need some other things, but hopefully I'll get to it all in time.