Looking for input for our Webhooks implementation

Hey everyone,

 

As you may have heard, the SKY Developer team has been working on a framework for webhooks. Webhooks will enable your applications to subscribe to events that happen in near real time within Blackbaud SKY solutions. Examples of event triggers include: a new record gets added or deleted, a particular field changes, an action gets completed, etc. Your application will be able to subscribe to these events by specifying a callback URL for the Blackbaud service to call when the event happens.


We'd love to hear about scenarios that webhooks would solve for you. We're going to start with a couple of events to get up and running, so we'd like your input on the following:
  1. What Blackbaud event trigger would you want on day one?
  2. What problem(s) would this solve?
  3. What amount of latency is tolerable? E.g., must it be real-time, within minutes, within hours, daily, etc.?
Please reach out to me if you're interested in participating in further discovery.


Much appreciated!

Comments

  • Hi Ben,


    That's great news.


    For me, the first most useful thing could be as simple as an indicator that some change happened to some contact info related to a constituent. It could even skip all the details aside from ID since I'd just be looking them up anyway.


    Right now I've got at least two other systems where I'd want to copy that update to and this sort of trigger would be great.


    As for latency: immediate would be great. I can't think of a case where I'd want it to take longer. Sorry, I know that's not a really helpful answer, but I don't have a better one. It's hard not to think of a system like that as broken if it takes more than half a minute to trigger the event.


    Thanks!

     
  • I would want webhooks that monitor constituent changes (new email address, donation processed, event registration) so that Prospect Managers (AKA assigned solicitors, AKA fundraisers) could receive a daily digest of changes to anyone for whom they have an active assignment.


    App would use webhooks to record change, then API calls to send consolidated email with constituent photo, link to record, and summary of change(s) made to each Prospect Manager / Fundraiser / Assigned Solicitor.


    Front-line fundraisers would not have to wonder / review / guess as to what last changed in webview.


    Also, it would be great to have something like HRH David Zeidman's Audit Trail that just recorded, to a running change log file, the value before a change, the value after a change, who made the change, and when. In case you need to restore / un-ring a bell.
  • Ben Wong (Blackbaud Employee)
    Thanks for the input!


    Let's focus on one event and see if we can satisfy both your needs as well as a potential scenario for Zeidman's Chimpegration.


    If we have an event for "Constituent email changed" that pings the callback URL you provide with the constituent_id and the id of the email address that changed, your application can then call the GET Email address list (Single constituent) endpoint to get the details of the changed email for that constituent.
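    A minimal sketch of that lookup flow, assuming a hypothetical payload carrying `constituent_id` and `email_id` fields (the field names and the endpoint path below are illustrative, not a confirmed schema):

```python
import json

def parse_email_changed_event(body: bytes) -> dict:
    """Extract record IDs from a hypothetical 'email changed' payload.

    The field names here are illustrative; the real webhook schema
    was not final at the time of this discussion.
    """
    event = json.loads(body)
    constituent_id = event["constituent_id"]
    email_id = event["email_id"]
    # The consuming app would then call the existing SKY API endpoint
    # for that constituent's email addresses and pick out the entry
    # whose id matches email_id. The path below is an assumption.
    return {
        "constituent_id": constituent_id,
        "email_id": email_id,
        "lookup_path": f"/constituent/v1/constituents/{constituent_id}/emailaddresses",
    }
```

    The point is that the notification itself stays lightweight; the details come from the normal authenticated API call.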


    Some questions:
    1. Would you be able to use the response of that existing endpoint to get what you need? i.e. figure out what changed on that email record.
    2. If there is a spike in changes within a narrow window e.g. 10 changes in 5 minutes, would you want to know about those 10 changes, or would you rather have less noise and get the most recent change within a 5-minute window?
    3. If that webhook event was available tomorrow, when would you prioritize the work to consume it?
    If we can define a pattern for email address changes, we can explore other types of contact info.


    Your input is greatly appreciated!

    Thanks!

     
  • Ben, one thing that's occurred to me is the potential for a loop.


    Let's say I set up a webhook-consuming app to forward email updates from RE NXT to, say, Marketo. And let's say I also set up a webhook on Marketo to do the opposite. So, a basic email synchronization app.


    If I'm not careful, I could end up bouncing changes back and forth between the two entities, correct? From RE NXT via my app to Marketo and back again, thus re-triggering the webhook, ad infinitum.


    Does this imply that there needs to be some way of avoiding this situation, or of tracing the change to prevent it? Or do we simply allow this to happen once, trapping the fact that the calls after the first pair don't result in any actual changes? (If so, you'd need to be mindful of changes due to, say, validation in one direction, for example the stripping of leading/trailing spaces.)
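    One common way a sync app handles this is echo suppression: remember the values you just wrote to the other system, and ignore the webhook that merely reflects your own write back at you. A minimal sketch (the names are mine, not from any SKY or Marketo API), including normalization to cover the validation case above:

```python
class EchoSuppressor:
    """Track values this app just wrote, so the webhook event that
    echoes our own write back to us is ignored, not re-forwarded."""

    def __init__(self):
        self._recent_writes = {}  # (system, record_id) -> normalized value

    @staticmethod
    def _normalize(value: str) -> str:
        # Strip whitespace and case so a target system's validation
        # (e.g. trimming spaces) doesn't make an echoed value look new.
        return value.strip().lower()

    def record_write(self, system: str, record_id: str, value: str) -> None:
        self._recent_writes[(system, record_id)] = self._normalize(value)

    def is_echo(self, system: str, record_id: str, value: str) -> bool:
        return self._recent_writes.get((system, record_id)) == self._normalize(value)
```

    In practice you would also expire entries after some time window, but this is enough to stop the infinite bounce, and it keeps loop prevention in the sync app rather than in the webhook service.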


    And thanks for asking for input!


    Cheers,

    Steve Cinquegrana | CEO and Principal Developer | Protégé Solutions

     
  • Hi,


    You would need a webhooks API - just refer to Mailchimp's as a guide, as theirs is obviously well established.


    The webhook should trigger on a change to data and be delayed by a short while to allow users to back out a change, i.e. they change something and then change it back. At most 5 minutes, though.


    The webhook MUST also trigger for deletions as well as additions and changes and there should be a field in the webhook data to indicate which it is.


    Primary webhooks for us would be all the equivalents of the (All constituent) API endpoints, which you would think would be the easiest for you to implement anyway. That, and the Lists, i.e. added/deleted/updated in a list.


    We currently achieve these all via our own servers making 'virtual' webhooks for all the endpoints but it would be much better to have official webhook support for them.


    In terms of the sync loops issue - you need to identify a primary key, i.e. an id field, for each webhook; the external system then needs to track that and prevent loops based upon that field. Not sure the webhook system would need to implement that itself, TBH; that is probably the responsibility of the sync tool. We do it with ours.


    Cheers


    Warren
  • I wish this had existed a year or two ago. I spent last year writing code to keep a local copy of our RE data here so I could do all the things for which the API is too restricted. Webhooks would be extremely useful, and probably worth the time to write new code here so we can update in real time rather than daily/weekly (most fields I do daily; some, owing to API limits, weekly). The snag is that the current system requires us to download most of the data each time. Luckily we have a small database with only 20K constituents.


    I would need only 'constituent number' and 'field updated'.


    But it would be nicer to have 'constituent number', 'field updated', 'date', 'time', 'user', 'data-before', 'data-after' and 'serial-number'. The last is so we can detect whether we have missed any change (e.g. owing to downtime at our end). Data before and after is of course a luxury, so we can be even more sure we have it right. Given all this data, we would not need to make any API calls after getting the webhook information.
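    The serial-number idea is easy to sketch, assuming a hypothetical monotonically increasing `serial` field on each message (no such field was confirmed for SKY webhooks):

```python
def find_missed_serials(last_seen: int, incoming_serial: int) -> list:
    """Return the serial numbers we never received, if any.

    Assumes each webhook message carries a strictly increasing integer
    'serial' (a hypothetical field); a gap means we missed events,
    e.g. owing to downtime on our end, and should backfill via the API.
    """
    if incoming_serial <= last_seen:
        return []  # duplicate or out-of-order delivery; nothing missed
    return list(range(last_seen + 1, incoming_serial))
```

    A consumer would store `last_seen` durably and, on any non-empty result, fall back to a catch-up query rather than trusting the stream alone.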


    It would be nicer still if these data items were also written to a transaction log on your server, one which we can query. Maybe holding just the last month or two of data. Again, in case we have missed anything.


    There could be a spike problem if we make bulk changes, leading to a blizzard of web calls. I think it would be reasonable (and helpful to us too) to rate-limit your calls to us. Maybe allow us to alter the limit within some suitable range.
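    On the consumer side, the same spike can be tamed by coalescing: buffer incoming events and keep only the latest per record, so a bulk change drains as one pending item per record rather than one call per field edit. A sketch (the queue design is mine, not anything Blackbaud proposed):

```python
from collections import OrderedDict

class CoalescingQueue:
    """Buffer webhook events, keeping only the latest per record, so a
    bulk edit of thousands of records becomes one pending item each."""

    def __init__(self):
        self._pending = OrderedDict()  # record_id -> latest event

    def push(self, record_id: str, event: dict) -> None:
        # Re-inserting moves the record to the back, preserving rough
        # arrival order while collapsing repeated changes to one entry.
        self._pending.pop(record_id, None)
        self._pending[record_id] = event

    def drain(self, max_items: int) -> list:
        """Pop up to max_items events in arrival order (a crude rate limit)."""
        out = []
        while self._pending and len(out) < max_items:
            _, event = self._pending.popitem(last=False)
            out.append(event)
        return out
```

    This also speaks to Ben's earlier question about noise: you end up processing the most recent state per record rather than every intermediate change.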


    Did you think this would be easy?!


     
  • One way Mailchimp handles the loop issue is by allowing the subscriber to choose which sources of data to subscribe to, i.e. end-user UI, backend UI, or API. That way, if the end user changes the email address and this triggers a change in another webhook-enabled system, it won't necessarily trigger a webhook response in SKY, because that change will have come back through the SKY API and the SKY webhook may only be set up to deal with end-user UI changes.


    For webhooks to be properly effective, we need to know about all changes, even if there are a lot of them. There is no point in having just some changes, unless they are a reversal of a change.
  • Ben Wong (Blackbaud Employee)
    Great feedback here!


    On the looping issue: this may be an issue in theory, but I'd think the application would be checking the values to make sure they are different before overwriting the field in the other system. I agree with Warren that this would probably be the responsibility of the syncing application to avoid the loop.

    Warren Sherliker‍: for delete events, what would you expect in the message? In the email example, if we provided the constituent_id and id of the email, you wouldn't be able to call the API to get the email address deleted. We wouldn't want to send out the actual email address. I agree that deletions webhooks are just as important, so let me know how you would use it.

    Christopher Dawkins‍: it's an interesting idea to rate-limit the webhook calls to your service. Would there be concerns that some data won't be synced when the limit is reached? I'd be interested to hear other thoughts on this.


    On the payload of data that gets delivered, there are some security reasons why we wouldn't want to share too much. The message should be lightweight and not contain any sensitive information.


    Another question I have is whether it would be useful to have broader-grained events, e.g. instead of "email address changed", a "constituent record changed" event that would trigger whenever any field changes on the record. Would that be valuable?


    Thanks!

     
  • Dan Snyder (Community All-Star)
    Ben Wong, in response to the question about broader-grained events: the more specific the better. If one of our gift officers were told a record changed, their immediate next question would be "what changed?" So if they can be told the email address was updated, that would be more beneficial in my opinion.


    Ideally, it would then follow Graham Getty's suggestion about showing the former and current value, but at a minimum, letting the end user know what changed is a good start.
  • Ben Wong (Blackbaud Employee)
    Thanks, Dan. Would it be feasible for your application that is consuming the webhook event to call the API to determine what changed? I understand that's more work for the application to do as a workaround if we don't have every field triggering webhook events. Would it be a viable workaround?
  • Hi,


    On deletion you would want to see the deleted record's details, TBH. Especially with email addresses, where you may want to then remove that email address from an external system. So:


    {
      "status": "deleted",
      "record": {
        ...
      }
    }


    If not, then you could serve the primary key of the record (id), but TBH that is not as useful.

    With the Lists endpoints, delete is slightly different in that you would usually have the id of the constituent in the external system, so you don't need the rest of the details, although once again it would be useful to have the deleted record in the webhook information.


    Cheers


    Warren
  • Did you think this would be easy?!

    Words of wisdom, Christopher Dawkins‍!


    Maybe this is obvious to everyone else but it wasn't to me initially; care should be taken not to conflate webhooks (data "push") with the API (data "pull"). Though they can be used together, they are separate things and I think there shouldn't be an expectation that the API will be used in conjunction with them beyond what is unavoidable.


    For example, we use the MailChimp webhooks - which are pretty well put together in my opinion - completely independently of MailChimp's - or any other - API. (We often just send email notifications from a very basic consuming PHP application.)


    On the other comment regarding examining MailChimp's webhook implementation, I'd second that, although there are some oddities there, such as having to set up separate webhooks per subscriber list even though the list ID is provided within each response.


    In essence, though, the MailChimp webhooks provide a decent starting model, I think: you simply tick off the events you want the hook to deliver on such as subscribe, unsubscribe, profile update, email change, etc. Obviously, it's a much simpler environment, but not a bad jumping off point.


    The payloads for each event are different but consistently formed. One possible annoyance is that some data is provided with metatagged fields which means that you need to know the field names and contents prior to using the hook and therefore are restricted in what you can change on either end. The base data, though, is consistent so you can still get usable info no matter what.


    If you have a look, you'll note that there is both a Profile and an UpEmail event trigger; the first is triggered by any profile change - including email address - while the latter triggers only on a changed email address. This means that if you're consuming both hooks, you can end up with two calls. I think this is ok, and provides flexibility, as long as it's understood that the Profile event covers both, but that you might only be interested in an email change and so consume only UpEmail calls.


    Also, the MailChimp hooks are immediate; there is no latency to see if the user reverses or makes further changes. I actually think that latency can be a bit of a risk here and potentially cause overwriting of changed data, a scenario you can map out for yourself. In any case, a consuming app can probably take care of this as well as the looping issue.


    And to confirm, the MailChimp webhooks do have the ability to ignore all changes made via its API. Neat, though I think it can have some repercussions where more than two data entities are involved.


    I agree that, as far as possible, the actual changed data should be provided with the call, but I think that is simply impossible or impractical in some cases. E.g., I would never expect to get an actual credit card number change via a webhook, but I would expect DOB, email, address info, etc. Where the data isn't actually provided, then some specificity about the change is to be expected, e.g. "SSN changed", "credit card changed". (Obviously, I'm only referring to the Constituent and maybe Gift APIs here; the General Ledger, School, Payments, etc. APIs will all have their own constraints but hopefully a high degree of conformance with each other.)


    So my questions are:


    1. Does Blackbaud envisage a single dashboard/page to manage all product webhooks, select which events are to be monitored, etc? One per product? One per API? Here, I would hope that configuring webhooks could be done via the UI rather than just via the API; Campaign Monitor only allows webhook access via its API which is pretty limiting.


    2. If there are multiple webhook dashboards, can we please have the flexibility to use a single URI for all, or separate ones? A consuming app can filter by whatever is thrown at it but some organizations might want to separate their processing, security, etc.


    3. Will the data have some base consistency across all products, APIs and endpoints? I really hope so, but the question is borne out of the general lack of consistency across these entities with the SKY API. Sorry, but it's true.


    Anyway, I'm looking forward to seeing what happens with this.

     
  • Hi Ben,


    That's great news.


    At this point we would be delighted if we got the 3 webhooks mentioned below:
    1. Change in constituent data: we will need the constituent id, the date and time of the update, which field changed, the new value, and the old value.
    2. Change in gift data: we will need the gift id, the date and time of the update, which field changed, the new value, and the old value.
    3. Change in event registration: we will need the event id, the date and time of the update, which field changed, the new value, and the old value.
    In addition to this, if we can get webhooks for the delete event for all of the above, that would complete the whole flow for us.


    So basically, if we get a webhook every time a field changes (any CRUD operation), that would be awesome.
  • This would be great for our use - we would want notification of the creation of a new opportunity or that an opportunity just changed status.
  • Ben Wong (Blackbaud Employee)
    Steven Cinquegrana‍ 

    1. Does Blackbaud envisage a single dashboard/page to manage all product webhooks, select which events are to be monitored, etc? One per product? One per API? Here, I would hope that configuring webhooks could be done via the UI rather than just via the API; Campaign Monitor only allows webhook access via its API which is pretty limiting.

    The first release will be via the API only then we may think about having a UI to follow if it provides value or significantly improves the experience.

    2. If there are multiple webhook dashboards, can we please have the flexibility to use a single URI for all, or separate ones? A consuming app can filter by whatever is thrown at it but some organizations might want to separate their processing, security, etc.

    You will have the flexibility to use a different URI for each subscription to a webhook event in an environment. We're not limiting the number of subscriptions that you can have to an event or an environment. 

    3. Will the data have some base consistency across all products, APIs and endpoints? I really hope so, but the question is borne out of the general lack of consistency across these entities with the SKY API. Sorry, but it's true.


    Point taken. We want to have a consistent experience. I accept that the current state has some inconsistencies since we have multiple domain areas and multiple teams (operating across multiple tech stacks for the backend services) contributing to SKY API. Definitely something for us to keep in mind with webhooks.


    I'm hearing the desire for "change" and "delete" events that provide the before and after data. There are some technical and possible security and compliance issues that may limit what data we can send in the message. Assuming that we can't provide the before change data, or the data for what was deleted, is it still valuable if we provided the ID of the record that was changed or deleted? I've heard from other developers that their apps already keep a data store with matching IDs so not receiving the data isn't an issue.


    Keep the feedback coming!

    Thanks!
  • Generally in agreement with everything mentioned above, echoing out a few particularly important things below:
    • Notification of add/edit/delete on constituent records.  Indicate what data, ideally the specific old/new values.  If old/new values can't be supplied, still useful  to know what type of data and what type of action (email added, phone deleted, etc.).  Most important data to know is basic bio data (name, email, address, phone, alt lookup IDs, deceased status), followed closely by communication preference changes (including opt-in/opt-out, contact preference changes).  Also would be useful to know about merge operations (source record ID, target record ID). 
    • Notification of add/edit/delete on gift records--same as above (what changed, old/new values).  Ideally would have enough data on the message to properly track the gift in the external system (date, amount, type, appeal, designations, ...)
    • Notification of add/edit/delete on event registrations.
    • Notification of list membership change (add/edit/delete)
    If old/new data can't be provided, it can't hurt to have the ID for the deleted/changed value, but depending on what you're integrating with, the ID might not be stored in the target system.  For example, if an email is deleted, no guarantees the synchronized system has the email ID. 


    Retries on non-200 responses from the webhook target would be useful--retry up to x times with increasing wait time between retry attempts.  Logging of failed webhook communications would be important. 
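    The retry behavior described here is a standard backoff loop. A sketch of the delivery side, with the actual HTTP POST stubbed out as a callable (so nothing here reflects Blackbaud's real delivery pipeline):

```python
import logging
import time

def deliver_with_retries(send, payload, max_attempts=5, base_delay=1.0):
    """Attempt delivery, retrying on non-200 responses with
    exponentially increasing waits; log if every attempt fails.

    `send` is any callable returning an HTTP status code; real code
    would POST `payload` to the subscriber's callback URL.
    """
    for attempt in range(1, max_attempts + 1):
        status = send(payload)
        if status == 200:
            return True
        if attempt < max_attempts:
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...
    logging.error("webhook delivery failed after %d attempts", max_attempts)
    return False
```

    A real implementation would also persist the failed message for later inspection, which is the logging point made above.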
  • Retries on non-200 responses from the webhook target would be useful--retry up to x times with increasing wait time between retry attempts.  Logging of failed webhook communications would be important.

    +1 Really good points. MailChimp does this, but not everyone else does.

     
  • I'm hearing the desire for "change" and "delete" events that provide the before and after data. There are some technical and possible security and compliance issues that may limit what data we can send in the message. Assuming that we can't provide the before change data, or the data for what was deleted, is it still valuable if we provided the ID of the record that was changed or deleted? I've heard from other developers that their apps already keep a data store with matching IDs so not receiving the data isn't an issue.

    Ben Wong, I am really not sure that just being supplied the id is that useful. This supposes that we, the third party, are storing the organisation's data. This is the sort of thing that we would want to move away from. There are definitely security implications to storing all of an org's data outside of the Blackbaud store. I don't really understand what the possible security and compliance issues are. You are allowing us to retrieve data from an org's database. What is the difference between that and just sending it to us when a change happens? I would say that there are greater security and compliance risks if you assume that we are storing the data anyway by only supplying an id.


     

  • Ben Wong (Blackbaud Employee)
    Hey David Zeidman‍, the value of just having the ID for the record changed or deleted will vary depending on the application for sure. We know there are a lot of applications and customers who have external data stores outside of Blackbaud, where they could look up the ID in their own database. I understand that's not the case for everyone.


    The reason why we see the webhook payload data as being different to standard API calls is because standard API calls require a valid access token from a consenting user to get the data. When a subscription is created for the application to receive the webhook event, it will receive the webhook notification until the subscription is canceled (either by the application or by the customer disconnecting the application). Therefore, webhook messages don't require an access token, so we want to be more careful with what data we're sending out.


    We're also thinking of webhooks as a way for applications to be notified of events that happen, but not necessarily the vehicle to deliver the details of that event. I can see the appeal of receiving more data from a webhook, but that's not how we've been approaching it. However, this is a good time to have that discussion.

     
  • I strongly approve of the "don't send the data" approach for exactly the reasons Ben notes.
  • Ben Wong (Blackbaud Employee)
    Thanks, Reed Wade‍. We may start with just providing the ID and seeing how many scenarios that solves.


    I welcome more feedback on this topic of how much data is expected to be in the webhook notification.


    Thanks!
  • Ben, I would ask that Blackbaud doesn't try to re-invent the wheel here; just implement what is fairly standard practice where possible/secure. Make sure all of the API teams buy in and adhere to some kind of template/schema. Look at what Stripe and MailChimp and others have done. Implement retry and logging logic. Make sure your headers are standard regarding ID/security, etc. Include some test events such as webhook_add, webhook_test, etc. And regarding push data, I too think that there are times when the webhook payload can provide everything the endpoint needs - e.g. an address change - and there are others where this isn't practicable - a new Gift - or secure - SSN updated. Horses for courses.

     
  • Steven Cinquegrana:
    Ben‍ I would ask that Blackbaud doesn't try to re-invent the wheel here; just implement what is fairly standard practice where possible/secure. Make sure all of the API teams buy-in and adhere to some kind of template/schema. Look at what Stripe and MailChimp and others have done. Implement retry and logging logic. Make sure your headers are standard regarding ID/security, etc. Include some test events such as webhook_add, webhook_test, etc. And regarding push data, I too think that there are times that the webhook payload can provide everything the endpoint needs - eg an address change - and there are others where this isn't practicable - new Gift - or secure - SSN updated. Horses for courses.

     

    Have to go with this to be honest. 


    Tie the webhook to the application if you have to add any security beyond the norm. If the application is enabled in the user's RE NXT, then webhooks can be set up and are tied to that app. If the app is disabled, then disable the webhooks. Let users admin the webhooks within their account and via the API (as all the leading tools do), and DO include things like the old email when you have a new one.


    It is a case of not reinventing what is already out there. Simply surfacing the ID puts the onus on the external systems to store data, which goes against the whole security aspect of this, I would suspect.

  • Warren Sherliker:

    It is a case of not reinventing what is already out there. Simply surfacing the ID puts the onus on the external systems to store data, which goes against the whole security aspect of this, I would suspect.

     



    +1 Definitely, couldn't agree more!

  • For any others working through a webhook implementation ...


    We've so far not been able to react to any webhook event requests because we're not receiving them. We have successfully provisioned some test webhook subscriptions, but event POST requests are resulting in 406 Not Acceptable responses. We are receiving, and can process, Postman requests fine.


    We have a simple test endpoint that only returns a 200 OK for any request, followed by an email dump of the request headers. Again, all fine for Postman POSTs; no traffic for SKY webhook event requests.


    We have been working through this with Chris Rodgers‍ over the past two weeks but we're still not there. Quite frustrating considering that this should be a trivial exercise.


    Personally, I don't feel that our feedback, pre-implementation, has been given much attention, considering the over-complication and lack of any actual data in the requests. (What exactly is the pre-authorization handshake security protecting if there is no sensitive payload data?). And the fact that, for us at least, the event functionality simply isn't working.


    > If anyone has come up against the same 406 error, would they please post? And if you got around it, how?


    > Lastly, if anyone has the full complement of POST request headers, would you please post them here along with non-sensitive values? That would be helpful for emulation with PostMan.


    Cheers and thanks, Steve

     
  • We've had it working for the last week (constituent hooks only). Subscribed manually using the API console, so there's only one event_handler script and that is here:
    http://archives.felsted.essex.sch.uk/blackbaud/howto/showphp.php?f=3

    though I am already altering this - amazing how you spot things when you show them to anyone else!

    Simple edits seem to generate two events, arriving from different IP addresses at exactly the same microsecond, but I am still learning.
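    If duplicate deliveries like that are expected (and at-least-once transports generally produce them), de-duplicating on the event's unique ID is the usual fix. A sketch, assuming each message carries a unique `id` the way CloudEvents messages do:

```python
class EventDeduplicator:
    """Drop webhook deliveries whose event ID we've already processed.

    CloudEvents messages carry a unique 'id'; keeping a bounded set of
    recently seen IDs lets a consumer ignore duplicate deliveries.
    """

    def __init__(self, max_remembered=10_000):
        self._seen = set()
        self._order = []          # insertion order, for eviction
        self._max = max_remembered

    def is_new(self, event_id: str) -> bool:
        if event_id in self._seen:
            return False
        self._seen.add(event_id)
        self._order.append(event_id)
        if len(self._order) > self._max:
            self._seen.discard(self._order.pop(0))  # evict oldest
        return True
```

    The bounded memory matters because, as noted above, duplicates can arrive within the same microsecond but the stream runs indefinitely.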


    I am not recording the headers, but will do so next week (at present I don't edit any of the data myself: that's all done by the secretarial staff - the advantage here is that if anything goes wrong it can't be my fault, the disadvantage that I have to wait for them).


    Overall documentation here: http://archives.felsted.essex.sch.uk/blackbaud/howto/

     
  • Thanks Chris.


    The issue our end isn't the processing of the requests, it's that we're simply not seeing the event POST requests arrive.

    Ben Wong kindly provided the POST header set to me on Friday:

    accept-encoding: gzip, deflate
    connection: keep-alive
    content-length: 302
    content-type: application/cloudevents+json; charset=utf-8
    max-forwards: 9
    origin: eventgrid.azure.net
    aeg-subscription-name: DF4555D2-A3DC-4C9E-87CA-3CDAB413874E
    aeg-delivery-count: 0
    aeg-data-version:
    aeg-metadata-version: 1
    aeg-event-type: notification



    I added the missing ones to our Postman test calls and found that when I included the specified content-type, rather than the more standard

    content-type: application/json; charset=utf-8

    it blew up the call and a 406 Not Acceptable response was issued.


    Further delving revealed this to be a common issue if a web server has ModSecurity enabled, which seems to be quite a common thing with Apache web hosts.


    Also, although there is a possible work around in turning ModSecurity off, globally or for specific directories (refer below), this isn't always permitted by the host company. (We don't host our own web servers and it's not permitted by our hosting company).


    I've asked Chris Rodgers‍ for his comments on this. As far as I can work out, we won't be able to tolerate the content-type header remaining as-is, and I think it's likely others will strike the same issue if they are third-party hosted. Personally, I don't see the need to specify the payload content type this explicitly, as most consuming code will just deserialize to a class or do it manually, as your own and our code does. So why not just keep it simple and standard - and compatible?
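    For consumers whose hosts do let the request through, one defensive option is to treat any `+json` media type as JSON rather than special-casing the CloudEvents variant. A small sketch of that check:

```python
def is_json_payload(content_type: str) -> bool:
    """Accept both plain JSON and structured-suffix JSON types.

    By the '+json' structured-syntax-suffix convention, media types
    such as application/cloudevents+json still carry a JSON body;
    only the schema hint differs from application/json.
    """
    media_type = content_type.split(";")[0].strip().lower()
    return media_type == "application/json" or media_type.endswith("+json")
```

    A consumer using this check can deserialize the body the same way regardless of which variant the sender declares.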


    Thanks again for your response; good to see that we're going about things in a similar fashion.


    Oh; I've also requested that a user-agent header be added to all calls for additional validation and filtering, as our Datawise webhook processor handles requests from several providers such as Mailchimp and Campaign Monitor. It seems a bit strange to me that this has been omitted.


    Cheers, Steve


    PS Insert these lines in your .htaccess file to turn ModSecurity off (if permitted by your host):

    <IfModule mod_security.c>
        SecFilterEngine Off
        SecFilterScanPOST Off
    </IfModule>


     
  • Chris Rodgers (Blackbaud Employee)
     

    Hey, Steve. 


    I'm glad that we've been able to make a bit of progress here. I don't believe either of us anticipated the 406 response from your webhook, or that the content-type might be the reason for it to trip up. I'd like to look more into this before we resort to removing the content-type. The CloudEvents specification mandates that the content-type (application/cloudevents+json; charset=utf-8) be specified on our requests. I understand that many apps would not have a problem making assumptions about the type, but this hint may become more useful if we decide to support more types (like JSON batch: application/cloudevents-batch+json). And while it doesn't appear useful in your case, there are cases where webhook processors consuming messages from multiple sources will find the header useful. Still, we want to support as many webhook consumers as possible, so I'd like to look into this and see if we can better support this use case (and I have a soft spot for shared PHP hosting environments).

     

    As for the user-agent header, we may be limited there by our event messaging provider, but I'm still looking into that.
  • Thanks Chris. Do you have a timeline on this please?


    FYI, no other provider we deal with imposes restrictions like this - AND leaves out a user-agent header. Again, we don't see the advantage if it's going to break things for even a few consumers. Why not just send the payload as raw JSON? Simple. Done. Working. Maintainable. Flexible. Etc.


    This seems like complication for complication's sake to us. We had Mailchimp's webhooks up and going in a few hours. So far we're in week three of trying to get Blackbaud's going. That's a little excessive, I think.

     
  • Chris Rodgers (Blackbaud Employee)
    Sorry, Steve, I can't give a clear timeline on this.


    These questions will be on my plate this week. At this point, I don't know the direction we'll go with this. My team will discuss. This functionality is still a work in progress, and our team wanted to receive this kind of feedback early, hence the limited beta. I appreciate you providing this feedback. It is certainly valuable, and better that we receive it now.


    As for the problems we've seen so far, the content-type issue might have been caught earlier if the "Test the event handler" POST request from our tutorial had been run against your endpoint. That test clearly states this header, so it's a shame it was omitted from our troubleshooting--it might have saved us some time. Assuming we don't deviate from the CloudEvents spec, we'll certainly elevate the visibility of the content type in our documentation. FWIW, both application/cloudevents+json and application/cloudevents-batch+json are JSON; they just describe a particular schema. I'm hoping, at the very least, that we'll be able to provide some better guidance there.


    Again, I'll look into our vendor restrictions regarding adding a User-Agent header. While it's not a required header in the 'by the spec' sense, I can appreciate the usefulness you've described. Seems to have been something omitted from other webhook providers in the past. I'll see what we can do.