Designing Your Integration Strategy

At Redox, we like to think of integrated workflows in three steps:

  1. Enrollment: the moment you become aware that there is work to do and the information you collect and store at that point. This could be an action that occurs in the EHR that you’ll receive as a notification, or it could be an action that happens in your application, prompting you to query the EHR for the information you need to support it.
  2. Supplementation: the gathering of additional information to provide context about the patient. Sometimes this is past results or visits for the patient, but it could also be information about the provider, patient demographic/insurance details, or a full Clinical Summary.
  3. Writeback: what (if anything) needs to be sent back to the EHR when your workflow is completed? Is that information a summary report, or something more discrete?

When you have a question about how the integration should work, it’s also good to come back to these steps—at what point do you need any given piece of information? At what point do you need a patient already enrolled in order for your application to do its work? When you work with the Redox team, they’ll help you think through these steps in relation to your workflow and design an integration strategy.

The following articles will help you identify these three moments in your workflow. Once you know what’s happening at each of these three steps, and what information you need when each step happens, you can use our data model documentation to identify how you might receive that information.

Redox also has pre-made integration strategies for many common workflows based on our experience doing integrations across the industry – your Redox team can help you identify if one of these strategies is best for you.

Enrolling Patients in your Application

Enrollment is the moment at which you know there is work to be done for a given patient. For example, for a patient engagement application this might be the moment that a patient creates an account, or for a diagnostic testing application it could be the moment that a provider places an order for the test. In any circumstance, this is the first time that you know of the patient and your trigger to do something with their data.

The first thing to think about for enrollment is where that enrollment event occurs—is it in the EHR, or in your application? 

If the workflow starts in the EHR, think about what information is being created and recorded at that moment—is someone scheduling a visit? Is there an order that will be placed? What about a flowsheet value that’s registered? We call these trigger moments, and they often have related integration actions. For example, when an order is placed, a message can often be triggered outbound from the EHR with information about what was ordered and for whom. Not all EHRs support all trigger points, so if there’s a very specific action you’re looking for, it’s best to talk to your Redox team about what to expect.

If the workflow starts in your application, the EHR trigger points don’t matter as much, but you should consider how you plan to tie your record of this patient to their record in the EHR and whether there’s any additional information from the EHR record that you need to create the patient in your system. 

The following are some of our most common enrollment mechanisms:

  • Patient Search is commonly used in cases where the workflow begins in your application and you need to tie your patient record to the patient’s record in the EHR. The Patient Search allows you to query with certain pieces of demographic information to obtain the patient’s ID and complete demographics from the EHR system.
  • Scheduling is commonly used for workflows centered around a patient appointment. You can query for upcoming visits at a certain clinic or with a certain provider, identify visits occurring today, or follow up on visits that happened yesterday. For more advanced cases, we can push a full feed of new, modified, or canceled visits to your endpoint.
  • Census Query is useful if you work with admitted patients: it allows you to query for the full list of patients admitted at any given moment.
  • Order is commonly used in cases where a provider or other clinician makes a clinical decision to order a certain service for the patient or enroll them in the application. Orders can be identified through a query or pushed to your endpoint as they’re placed.

Note: Single Sign-On may seem like a convenient enrollment moment for your application if users will launch it from a patient context, but the best practice we’ve seen is to enroll the patient before the moment a user attempts to launch their chart in your app. The user is waiting for their link to resolve while your application gathers data to respond, so we recommend having the information for the patient-specific landing page ready to go before a user tries to launch. That way, all you need to do is send back the correct redirect rather than create the page from scratch.
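
As a sketch of the Patient Search mechanism above, the query pairs a Meta block identifying the data model and event type with whatever demographics you collected at enrollment. The shape follows the Redox PatientSearch data model, but the demographic values are invented, and the exact set of Meta fields you send (such as your Source and Destination IDs) comes from your own dashboard configuration.

```javascript
// Sketch of a PatientSearch query body. Demographic values are invented;
// in practice, Meta also carries your Source/Destination IDs from the
// Redox dashboard.
function buildPatientSearchQuery(demographics) {
  return {
    Meta: {
      DataModel: 'PatientSearch',
      EventType: 'Query',
    },
    Patient: {
      Demographics: demographics,
    },
  };
}

// Example: search with the demographics collected at enrollment.
const query = buildPatientSearchQuery({
  FirstName: 'Timothy',
  LastName: 'Bixby',
  DOB: '2008-01-06',
});
```

A successful response returns the patient’s EHR identifiers and complete demographics, which you can then store against your own record of the patient.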

Supplemental Patient Data

Supplementation covers any data that you need after your initial enrollment to complete your work in your application. This might take the form of a query for additional information to give your application a better understanding of the patient’s clinical details, or it could be receiving a feed of additional information to keep your application up to date.

The most important thing to keep in mind for supplemental data is to identify when it’s created/sent out from the EHR. For example, if your application needs to have discharge information to complete its work, but the initial enrollment is earlier in a hospital visit, you may need to wait for the discharge to take place in order to complete your flow. When determining timing for supplemental information, you should consider:

  • Which moment is actually the moment when the work should be done? It’s possible that the later event should actually be your enrollment moment, and you can query for earlier information at that point.
  • Is there information that can’t be queried that needs to be stored until you’re ready to use it?
  • Does the EHR have any timing limitations for triggering this information out via an interface, or does it happen as soon as the action occurs in the EHR?

Writeback

Writeback is the step that returns data to the EHR. This isn’t always necessary, but health systems may request that you write information back if your app provides any form of care for the patient or captures clinical details needed in the EHR for clinical decision making. If you’re new to writeback or not sure what information will provide the most value to your users, we generally recommend starting with a non-discrete form of writeback, like sending a PDF through our Media or Results data models.

Here are some other common methods of writeback to the EHR:

  • Notes is commonly used for narratives written by a provider that should be saved with other progress notes in the EHR.
  • Results are often used to indicate completion of an order or diagnostic exam and can be used for both non-discrete (PDF) and discrete (individual data elements) results.
  • Flowsheets is used for writeback of vitals information and sometimes for responses to questionnaires or other discrete clinical values.
  • Clinical Summary can be used to communicate details of a visit with a patient or updates to the patient’s Problems, Medications, Allergies, and Immunizations.

Workflow Normalization

In the pre-engagement section of this guide we introduced the concept of Workflow Normalization and how the Redox Engine can provide support for consistent integration points through our API regardless of how our platform is obtaining data from your connecting partners. This workflow normalization support can be summarized as two main features our engine provides: Data on Demand, where Redox can support a queryable endpoint based on pushed data, and API Polling, where Redox can poll API endpoints and create push notifications. Let’s take a look at both of these features in more detail.

Data on Demand

In situations where information is being pushed out of a healthcare data source, the volume of messages generated can quickly skyrocket, requiring a large amount of work to build, implement, and maintain the associated business logic needed to keep the information straight. We also often see that many customers receive data today that they don’t need until tomorrow (or next month) and the work of keeping that data maintained over time can be challenging.

With Redox’s Data on Demand feature, we handle the ingestion and organization of the data, allowing you to build a query workflow to recall the data you need at the appropriate point in your product’s workflow. This reduces the overhead of managing message volume and update logic, letting you focus on getting the data you want, when you want it, without any of the fuss in between.

Redox Data on Demand is enabled by four key Redox features:

  1. A multi-tenant, FHIR-conformant data storage architecture with automated scaling infrastructure – We scale your infrastructure needs up and down based on your usage without charging incremental fees or requiring your team’s time.
  2. Pre-existing and customized business logic to ensure consistent data handling across healthcare organizations – Redox works with your partners to determine how their system makes the information you need available. Let our experts handle the scoping, setup, and testing to make sure you’re getting the data you need when you need it.
  3. Pre-built queryable API endpoints – We handle the connection to your partners and translate the output into the same Redox data models every time. No need to spend time on custom data mapping or on creating and maintaining queryable endpoints. We’ve got you covered.
  4. Industry-leading security practices – Redox combines industry certifications with technology best practices to drive our robust security program. We are proud that both our HITRUST and SOC 2 Type 2 certifications have no findings or CAPs.

Please note that this is currently only enabled for certain data model events, which are specified in our documentation.

API Polling

Polling is a scalable framework that provides a consistent configuration and maintenance workflow for Redox to connect to EHR vendor APIs and still POST webhook events to your application.

The majority of our integrations with healthcare organizations are still done over an HL7 feed, as that standard is what is most commonly supported in the healthcare space. When Redox consumes an HL7 feed we typically take in the individual messages, map them to the appropriate data model and event, and POST them to your application. As the healthcare integration landscape has evolved, more and more EHRs have exposed their own proprietary APIs that can be leveraged in addition to or in lieu of HL7 interfaces. 

Some of those vendor APIs cannot be configured to send notification events to Redox as they happen, so in order to make a consistent experience for our customers we’ve developed polling infrastructure that allows us to connect to these APIs and replicate the event-based notifications of an HL7 feed. In these instances, Redox can hit the relevant API endpoints periodically to gather changes to specific resources.* Once we make this call, we may bundle up a number of calls to additional endpoints to fill out a single Redox data model message that we then POST to your application.

Let’s walk through an example of this polling functionality in more detail. To replicate scheduling notifications, we could send a request every minute to a vendor’s appointments endpoint, updating a “changedsince” parameter in each request.


This would return to Redox an array of appointments that have changed since the last time we polled the endpoint. Each appointment object includes a patient identifier, so for each patient with a changed appointment we can make an additional call to a patient demographics endpoint.


We then have enough data to send individual Scheduling messages via webhook to your application.
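
The two-step flow above can be sketched in JavaScript. Everything here is hypothetical: the base URL, endpoint paths, and response shapes are invented for illustration, and real vendor APIs differ (not all support this pattern at all).

```javascript
// Hypothetical sketch of the polling loop described above. The endpoint
// paths and response shapes are invented; real vendor APIs vary.
const BASE = 'https://ehr.example.com/api'; // hypothetical vendor API

let lastPoll = '2024-01-01T00:00:00Z';

async function pollOnce(fetchImpl) {
  // 1. Ask for appointments changed since the last poll.
  const res = await fetchImpl(
    `${BASE}/appointments?changedsince=${encodeURIComponent(lastPoll)}`
  );
  const appointments = await res.json();
  lastPoll = new Date().toISOString();

  // 2. For each changed appointment, fetch the patient it belongs to.
  const messages = [];
  for (const appt of appointments) {
    const patientRes = await fetchImpl(`${BASE}/patients/${appt.patientId}`);
    const patient = await patientRes.json();
    // 3. Bundle into a single Scheduling-style message to POST onward.
    messages.push({ appointment: appt, patient });
  }
  return messages;
}
```

Injecting the HTTP client (`fetchImpl`) keeps the sketch testable without a live API; in production this loop would run on a schedule and POST each bundled message to the destination webhook.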

*Note that there is certainly some variation in the ways we must connect to these APIs, and the type of data we expect in the various requests and responses from the API. Not all APIs support this type of polling.

Backfilling Data

Your first go-live with a connection partner marks the moment you start receiving production data that your application and users can use to provide patient care. Depending on what services you provide, receiving production data on a “go-forward” basis may not be sufficient, and you may require historical data to provide enough context for your users.

It’s important to understand what historical data you may need, work with your HCO to define these requirements as part of the scoping process, and then determine the best path forward to “backload” that historical data into your application in advance of the go-live.

It’s fairly typical to backload a patient population, six months of future appointments, or a current inpatient census in preparation for your go-live. Healthcare organizations can typically provide that data with some ease, and Redox has tools to facilitate the backload of that data from your connection partner to your application. Be careful, however, about the quantity and type of data you want to backload. Asking for 10 years’ worth of laboratory results can put a burden on the healthcare organization and will likely extend your implementation timeline (how quickly can your application consume 10 years’ worth of lab results?). Make sure you prioritize only the absolutely critical data elements for a backload.

Redox’s best practice is to process a backload using the same mechanisms as the real-time integration. That is, if your connection partner is sending an HL7 feed of inpatient visits, then we recommend using an HL7 feed for the backload as well. This simplifies the effort to process the backload. Redox also has CSV flat-file templates for patient and appointment backloads, as many EHRs can produce a report of this data with reasonable ease.

Strategies by Vertical

Healthcare products and applications serve a broad variety of purposes, but data integration strategies tend to follow relatively common patterns. These patterns form along paths of least resistance, showing that simple, streamlined integrations are not only easier to implement and repeat but are also the most successfully maintained.

If the healthcare integration journey is like climbing a mountain, then these common integration strategies are like beaten paths: tried and true. Redox has climbed this mountain hundreds of times and we’ve had the opportunity to learn from the many integrations we’ve implemented. We think critically about what we have done in the past and what was most successful and we take these learnings to inform the future integrations we help design and implement.

An analysis of our live projects reveals pockets of healthcare applications that require similar integrations. Redox calls these groupings “Verticals”—sets of applications that serve comparable purposes and often require the same data resources. If you think your application could be described by one of the following Verticals, contact a Redox representative to get an idea of the kind of integration strategies that will allow you to implement with speed and scalability.

Care Coordination
Clinical Decision Support
Consumer Wearables
Durable Medical Equipment
EHRs or Practice Management
Healthcare CRM (Customer Relationship Management)
Labs, Genomics, and Precision Medicine
Patient Engagement
Patient Transport
Population Health, Analytics, Care Management
RCM & Provider Payment Services
Remote Patient Monitoring
Social Determinants of Health
Workflow Automation

Now that we’ve had a chance to review the high-level components of the Redox platform and how to approach your integration strategy design, we’ll dive deeper into how to actually build against the Redox API and successfully connect your application to Redox. We’ll cover required design, product, and development expectations for integrated applications leveraging Redox, as well as best practices for the specific data models you might be using as part of your integration scope. We also have a compiled list of open-source packages, spanning a number of languages, that other Redox customers have made available as they built against our API.

Integrated Application Expectations

The following section covers a number of guiding principles and related expectations of integrated applications leveraging Redox within the healthcare landscape. Some of these are specific to groups using Redox and others are applicable to every application looking to integrate with healthcare organizations regardless of the vendor they use or how they’re exchanging the data. It is important to make sure all of these are understood and incorporated into your product design and functionality prior to embarking on your first integration project.

Field Reliability & Data Requirements

Redox data models have been built to support a wide range of data elements, but do not expect every message from every connection partner to have every field populated. Each healthcare organization we connect you to will leverage the integration infrastructure they already have in place to connect to Redox, which means in most cases we are translating messages to the Redox API from various customized HL7 interfaces or proprietary APIs, each supporting a different data set by default. This means a Scheduling message we send you from one connection partner will likely have a different set of fields populated by default than a Scheduling message from another partner. To help illustrate this, we developed a framework for how reliably the various fields in our data models are populated across the diversity of connections we have through the network. Our data model documentation uses badges to indicate the reliability of each field based on our experience of how often they are available:

  • Reliable – Expect this field to be present for every message from every health system (> 90%)
  • Probable – Expect this field to be present for most messages from most health systems (> 50%)
  • Possible – This field may be present for some messages from some health systems (< 50%)
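
In practice, this means reading anything below Reliable defensively. A minimal sketch: Visit.VisitNumber is a real field from the Redox data models, while the other field names here are illustrative.

```javascript
// Defensive access to fields that may or may not be populated.
// Visit.VisitNumber comes from the Redox data models; the other
// field names are illustrative.
function extractVisitInfo(message) {
  const visit = message.Visit || {};
  return {
    visitNumber: visit.VisitNumber ?? null,     // treat as Reliable
    visitDateTime: visit.VisitDateTime ?? null, // Probable: often present
    reason: visit.Reason ?? null,               // Possible: plan for absence
  };
}
```

Normalizing missing fields to an explicit `null` at the boundary keeps the rest of your application from having to re-check for absent data everywhere.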

If a field is listed as less than “Reliable,” it’s not the end of the world. The core problem is that data sources have little incentive to share more data than necessary by default:

  1. A core tenet of security in healthcare is to only send what is necessary. In the event of a breach, you’ve let less information slip.
  2. Healthcare puts a premium on things working. For that reason, backwards compatibility is a core principle for integration developers. Data points may be available in new versions of software, but they are off by default.

For the above reasons, it’s very important you have a clear understanding of what your integrated application’s data requirements are before kicking off your first integration project. You’ll work with your Redox team to make sure your requirements are clearly documented before engaging with your connection partner to confirm whether your integration needs are met with default setup or if they require customization.

Data Validation

For applications receiving data over the Redox API via webhook, any validation of the content of a transmission should be done after your endpoint responds with a 200 success response. If any issues are found with the data content after responding, such as a critical field missing or a value set not translating correctly, the expectation is for the receiving application to log an internal error and then reach out to the 24-hour on-call Redox support team.

We’ll go into more detail why later on, but ultimately doing this type of validation after sending a success response allows Redox to continue to post data to your endpoint regardless of whether there is an issue with the content of a single message.

ID Management

When integrating data with healthcare data sources through any method, there will inevitably be different identifiers that your product will receive and need to be able to manage. The first type of identifier that usually comes to mind is patient identifiers. While the Medical Record Number (or MRN) is a common patient identifier used by many health systems as the master ID for the patient, not all health systems use it in the same way. Even health systems that do use an MRN as the master ID may use different ID type values to represent it. Essentially, every healthcare organization will have a unique patient identity structure with their own set or sets of patient identifiers and ID types. 

To accommodate these differing patient identity structures, we recommend that products leveraging the Redox platform be designed to handle multiple patient identifiers and identifier types per connection, especially when planning to connect to multiple systems within a healthcare organization that may use different identifiers, or when working with larger IDNs that have many regions with their own patient identifiers. Products designed with the assumption that there will be a single patient identifier per connection run the risk of mismatching patients and, in general, not appropriately reflecting patient identity through the connection. We additionally recommend building settings into your integrated application to specify the ID type(s) you will need to use for any given site. This way you’re able to work easily with whatever ID type(s) you receive from a given health system.
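
A sketch of that per-site ID type setting in practice. We assume identifiers arrive as a list of { ID, IDType } pairs, which follows the shape of patient identifiers in the Redox data models; the example values are invented.

```javascript
// Select the patient identifier your application should key on,
// based on a per-site configured ID type. The { ID, IDType } shape
// follows the Redox data models; example values are invented.
function findPatientId(identifiers, configuredIdType) {
  const match = (identifiers || []).find((id) => id.IDType === configuredIdType);
  return match ? match.ID : null;
}

// Example: one site keys on MR, another on an EHR-specific type.
const identifiers = [
  { ID: '0000000001', IDType: 'MR' },
  { ID: 'e167267c-16c9', IDType: 'EHRID' },
];
```

Returning `null` when the configured type is absent gives you a clear signal to fall back to a Patient Search or raise the mismatch rather than guessing.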

Beyond patient identifiers, you can also expect to receive IDs representing specific visits or cases. Unlike patient IDs, visit IDs won’t be transmitted with an associated ID type and will always be mapped to the Visit.VisitNumber field in Redox data models. Surgical case IDs transmitted over our SurgicalScheduling model will be mapped to the SurgicalCase.Number field.

Connecting healthcare organizations will also send identifiers representing individual clinicians (providers). The two main types of provider IDs you can expect to encounter are the National Provider Identifier (NPI) and site-specific IDs. The NPI is a nationally unique identifier assigned to providers, whereas a site-specific provider ID only identifies a provider within that specific healthcare organization. If your integrated application needs to track provider identity, it will be important to understand what type of ID your connecting partners will be sending.

Patient De-identification

Digital health applications and systems working with protected health information (PHI) are subject to HIPAA guidelines and have a much higher bar to meet when it comes to securing their platform and data. This has led to many digital products in the healthcare space exploring how they can work with de-identified patient data to reduce the increased burden on maintaining the security of their system. At this time, de-identification is not something the Redox platform supports. If you’re interested in de-identification, please work with your Redox representative so that we can make sure your use case and specific needs are understood and passed along to our relevant product teams as they continue to improve and enhance our engine and API capabilities.

Coded Values

Earlier we covered the data normalization that the Redox API can perform for a subset of data fields across our data models. A type of data we do not yet normalize is clinical coded values. This is largely due to the size of these code sets, which include ICD, SNOMED, NDC, RxNorm, LOINC, CPT, HCPCS, and hundreds of thousands of custom, organization-specific code sets.

Translating codes and mapping them between code sets can also have significant clinical implications, as there isn’t always a perfect mapping alignment between sets. We’ve also found that due to government regulations it’s fairly safe to assume that most healthcare organizations have already implemented a number of these code sets, such as ICD-10 and LOINC codes. 

If you are interested in coded value normalization, please let us know, as we are always interested in hearing about particular use cases and discussing what we could potentially support in the future.

Error Handling

Redox customers leveraging source records—i.e., those sending data to Redox or querying for data against Redox—should be prepared to manage any Redox errors returned in response to their outbound messages. An error oftentimes means your message is missing a critical field or is formatted incorrectly such that our engine is unable to process and transmit it. We’ll cover the specific error states you might encounter in Redox responses, and what they indicate, in the Error Management post later in the Onboarding section of this guide.

Date & DateTime Values

All Date and DateTime values sent over the Redox API will be formatted to the ISO 8601 standard.

  • Date values will use YYYY-MM-DD, for example: 2016-09-22
  • DateTime values will use YYYY-MM-DDThh:mm:ss.sTZD, for example: 2016-09-22T10:20:30.000Z

It is important not to include a time value if only the date value is known or appropriate.
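
A minimal sketch of producing both shapes in JavaScript, using the same example values as above:

```javascript
// Format Date values per ISO 8601 as described above.
function toRedoxDate(d) {
  // Date-only: YYYY-MM-DD, no time component
  return d.toISOString().slice(0, 10);
}

function toRedoxDateTime(d) {
  // Full timestamp: YYYY-MM-DDThh:mm:ss.sTZD (UTC here)
  return d.toISOString();
}

const example = new Date(Date.UTC(2016, 8, 22, 10, 20, 30)); // months are 0-based
```

Keeping two distinct formatters makes it harder to accidentally attach a time component when only the date is known.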

Time Zones

Our engine connects and enables communication between organizations across many time zones. Because part of that communication contains information that’s linked to or dependent on time, it’s natural to wonder how we manage differences in time zones. This is something we had to consider very early on while developing our technology.

Redox Source and Destination records contain a time zone attribute designating the expected time zone of values. This is particularly relevant when a system connecting to Redox assumes a time zone. Specifically, HL7 engines often assume a local time zone and do not generate or parse time zone offsets in a timestamp.

We will parse any values sent with a time component using the source time zone and convert the value to the destination time zone in the subsequent transmission. If no time component is included, no time zone adjustments will be applied. If a source sends an offset in a timestamp, for example 2007-04-05T12:30-0200, we will use this value instead of the source time zone when parsing the date.

Code Completion

Redox customers should complete their development against the Redox API and models prior to kicking off their first integration project. This is not to say you won’t be able to modify this build once your first project starts, but you should have it as complete as you can based on the development and testing tools Redox provides, which we’ll talk about in more depth later in this guide.

Why is this an expectation? Because healthcare organizations are not always predictable in how quickly they can get a given project prioritized or completed, and we’ve found that removing additional variables from your first project’s timeline, like your core development to our API, allows us to move as quickly as your connection partners can once the project is launched.

Exchanging Files through the Redox API

The Redox API is capable of exchanging patient data with EHRs in a variety of message formats. To facilitate the transmission of large files like PDFs, images, audio files, and other media, a separate Redox API endpoint is available for file uploading.

Uploading a file to Redox

  • Post to https://blob.redoxengine.com/upload with your file attached as multipart/form-data. Check the docs for your HTTP client library for how to send a file as multipart/form-data. Below are examples for curl and the Node.js request library. (Note that most HTTP client libraries will set the content-type header for you when you attach a file. Setting content-type explicitly in your code might prevent your library from setting the correct value.)
  • Include the 'Authorization': 'Bearer <<access token>>' header you received from the authentication API.
  • Each file is associated with the source whose access token you use in the Authorization header. It can only be referenced in subsequent API requests associated with that source.
  • For more information on the authentication process, please see the “Authenticating with Redox” section of our Create a Source Record help article.

Check out some code snippets below to help you get started.

The threshold at which you should use the file upload endpoint rather than sending data directly to the API as a base64-encoded string or plain text string is currently 200KB. If you expect that the contents of your messages may occasionally exceed 200KB, you should use this method to upload the file to Redox.
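
A sketch of that decision in code, using the 200KB threshold from above (the helper names are ours):

```javascript
// Decide between inlining file contents and uploading to the blob
// endpoint, per the 200KB threshold described above.
const INLINE_LIMIT_BYTES = 200 * 1024;

function shouldUseBlobUpload(fileBuffer) {
  return fileBuffer.length > INLINE_LIMIT_BYTES;
}

// Small payloads can be embedded directly as base64 instead.
function inlineBase64(fileBuffer) {
  return fileBuffer.toString('base64');
}
```

Note that the comparison is against the raw file size; base64 encoding inflates the payload by roughly a third on top of that.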

There is currently a 30MB file size limit for all files uploaded to Redox. Attempting to load any larger files will return a 413 error response.

Please note: This functionality is only available with a Pre-Production or Production account. A 403 Forbidden error will be returned if you try to upload files using a Free Account.

Example Upload Requests

Here is an example in Node.js:

var fs = require('fs');
var request = require('request');

var options = {
  url: 'https://blob.redoxengine.com/upload',
  headers: {
    'Authorization': 'Bearer <your access token>'
  },
  formData: {
    // request sets the content-type header if a file is attached
    file: fs.createReadStream('test.pdf')
  }
};

request.post(options, function (error, response, body) {
  if (error) {
    console.error('Upload failed:', error);
  } else {
    console.log('Upload succeeded:', body);
  }
});
An example in curl:

curl https://blob.redoxengine.com/upload \
-H "Authorization: Bearer <<ACCESS TOKEN>>" -X POST \
-F 'file=@test.pdf'

Upload Response

If your upload is successful, you’ll get a 201 response with this structure:

{
    "URI": "https://blob.redoxengine.com/123456789"
}

When posting subsequent requests to Redox, include the file URI to reference your uploaded document within the message. Here are two examples where the blob URI is included in the message body.

Media Message

    "Meta": {...},
    "Patient": {...},
    "Visit": {...},
    "Media": {
        "FileType": "PDF",
        "FileName": "SamplePDF",
        "FileContents": "https://blob.redoxengine.com/123456789",
        "DocumentType": "Empty File",
        "DocumentID": "b169267c-10c9-4fe3-91ae-9ckf5703e90l", // external reference number unrelated to Redox File Upload
        "Provider": {...},
        "Authenticated": "False",
        "Authenticator": {...},
        "Availability": "Unavailable",
        "Notifications": [...]

Results Message

"Meta": {...}
"Patient": {...}
"Orders": [
"Results": [
"Value": "https://blob.redoxengine.com/123456789",
"ValueType": "Encapsulated Data",
"FileType": "PDF",
"Visit": {...}

Receiving files from Redox

The Redox blob only supports file uploads, not retrievals. You can still plan to receive files over the Redox API using either the Media or Results data model, but all files will be embedded within the messages and base64-encoded. This applies regardless of the size of the document, including files between 200KB and 30MB.
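
Decoding such an embedded file is a one-liner in Node. A sketch, assuming the FileContents field from the Media data model shown earlier holds the base64 string:

```javascript
// Decode a base64-embedded file from an incoming Media-style message.
// FileContents follows the Media data model shown earlier.
function decodeEmbeddedFile(media) {
  return Buffer.from(media.FileContents, 'base64');
}
```

From there the Buffer can be written to disk or object storage as the original file.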

There you have it. If you have any questions about this topic, reach out to your Redox representative.

Source & Destination Records

In order to start exchanging data over the Redox API, you will need to configure the appropriate Source and Destination records for your application within your Redox dashboard. Sources initiate data pushes and requests. If you want to send data, or if you want to request data (query), you will need to set up a Source in Redox. Destinations are set up to receive pushed data. If you are looking to receive data via webhook, you will need to set up a Destination in Redox.

Environment Types

When setting up a Source or Destination on Redox you must choose an environment type. The environment type defines the type of integration the records will be configured to support: Development, Staging, and Production.

You should start by creating “Development” Source and Destination records with your free account. This is what you will use to build and initially test your connection to Redox using our standard developer tools, which we’ll walk through how to use later in this section. Once you are formally working with us and are ready to start sharing data with another member of the Redox network, your account will be upgraded and you will be able to build Staging Sources and Destinations that will be used for functional and integrated testing with your connection partners during project implementations. Finally, after your integration has been tested in Staging, you will leverage Production Source and Destination setup to exchange production level data with your partners across the Redox network.

Building a Destination Record

You will build and configure destination records directly within your Redox dashboard. After logging in, select the Destinations tab in the navigator on the left. You’ll see that there is a pre-configured dummy destination record available for you to walk through called Redox API Endpoint.

Once you’re ready to build your own destination, just follow the below steps.

  1. Click the Create Destination button.
  2. You’ll then be prompted to name your destination and select an appropriate Environment Type before being redirected to your record’s Settings tab. Note: Staging and Production record names will be visible to your data sharing partners, so err on the side of being specific.
  3. Once you’re in the Settings tab of your new destination, select the connection method and format. The default values here will be Redox API and JSON, which will almost always be the correct values for anyone leveraging our API.
  4. You’re then ready to specify your Endpoint, which is your organization’s webhook endpoint where you want Redox HTTP requests sent. This URL will need to be accessible by Redox’s servers and an HTTPS endpoint is required for both Staging and Production destinations.
  5. Next, you’ll set a Verification Token. This is a token we will send in the ‘verification-token’ field of the HTTP request header in every request Redox makes to your endpoint. This is how you will ensure the communication is coming from Redox and that no other entity is sending data to your endpoint.
  6. The next step is to verify your endpoint by clicking Verify and Save. Read the next section for more details on this important final step in setting up your destination.

Verifying a Destination Endpoint

When you click Verify and Save, RedoxEngine will make a POST or a GET request to your destination’s endpoint in order to perform destination verification and confirm the validity of the URL.

Important Note: For a new destination, we default to using POST for stronger security. Using GET is still permitted if that’s preferred.


Below is an example of the POST request header:

{
    "url": "https://endpoint.yourdomain.com",
    "method": "POST",
    "headers": {
        "source-host": "https://candi.redoxengine.com",
        "application-name": "RedoxEngine",
        "content-type": "application/json"
    }
}

Below is an example of the POST request body:

{
    "verification-token": "verificationtoken",
    "challenge": "cc2f1bdf-af51-4974-af5c-f3af19d6526c"
}

A GET verification request from Redox will also include the same header, with the challenge value appended to the URL as a query string. Below is an example:

https://endpoint.yourdomain.com?challenge=cc2f1bdf-af51-4974-af5c-f3af19d6526c
When your server receives one of these requests, it needs to:

  • Verify the verification-token matches the one you supplied when creating your Destination. This is a security check so that your server knows the request is being made by RedoxEngine and relates to the Destination you just configured.
  • Render a response to the POST or GET request that includes only the challenge value. This confirms that this server is configured to accept POSTs from Redox, and is used for security verification on RedoxEngine’s side.

Example of a response to the verification request, whose body contains only the challenge value:

cc2f1bdf-af51-4974-af5c-f3af19d6526c
The verification step is based on W3C’s WebSub, and we have a 15-minute video to walk you through the challenge step.
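The checks above can be sketched as a small, framework-agnostic handler. This is an illustrative sketch, not Redox-provided code: the function name and return shape are our own, and the token and challenge field names follow the examples above. It also shows the header-token check used for regular (non-verification) transmissions:

```python
def handle_redox_request(headers: dict, body: dict, expected_token: str):
    """Handle an incoming request from Redox on a destination endpoint.

    Returns an (http_status, response_body) tuple. Verification requests
    carry the token and a challenge in the body; regular transmissions
    carry the token in the 'verification-token' request header instead.
    """
    if "challenge" in body:
        # Verification request: confirm the token, then echo the challenge.
        if body.get("verification-token") != expected_token:
            return 403, "token mismatch"
        return 200, body["challenge"]
    # Regular transmission: the token arrives in the request header.
    if headers.get("verification-token") != expected_token:
        return 403, "token mismatch"
    return 200, "OK"
```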

Destination FAQs

How can my endpoint distinguish between a verification POST and a non-verification POST from Redox?

Verification POSTs will include a challenge value and your destination’s verification token (that you specified when you set up the destination record) in the body of the POST. Non-verification POSTs from Redox will always include the verification token in the header of the message.

What are some helpful troubleshooting tips?

Troubleshooting verification with a new destination endpoint will be easier if you first remove encryption and troubleshoot it as a plain HTTP endpoint, which gives you more insight into the message your endpoint is receiving and your endpoint’s response. Before you remove encryption, be sure to set your destination’s verification token to a test value in your dashboard (something like “123”).

I have my app running on my local machine. Can Redox send messages there?

You’ll eventually need a public HTTP server that has the following:

  • HTTPS support
  • A valid SSL certificate
  • An open port that accepts GET and POST requests

However, for the purposes of initial testing, ngrok is a free tool that creates a tunnel from the public internet to a port on your local machine. With ngrok installed and configured, POST requests from Redox will be automatically forwarded and responses returned.

Can I delete a destination record that I’m no longer using or that was made in error?

Destinations cannot be deleted once they’re created. If you want to deprecate a destination, we suggest renaming the record to prepend “ZZZ” to its name so that it falls to the bottom of your list of destinations, which is sorted alphabetically.

Building a Source Record

Similarly to destination records, you will build and configure source records directly within your Redox dashboard. After logging in, select the Sources tab in the navigator on the left. You’ll see that there is a pre-configured dummy source record available for you to walk through also called Redox API Endpoint.

Once you’re ready to build your own source, just follow the below steps.

  1. Click the Create Source button.
  2. You’ll then be prompted to name your source and select an appropriate Environment Type before being redirected to your record’s Settings tab. Note: Staging and Production record names will be visible to your data sharing partners, so err on the side of being specific.
  3. Once you’re in the Settings tab of your new source, select the connection method and format. The default values here will be Redox API and JSON, which will almost always be the correct values for anyone leveraging our API.
  4. You’re then ready to authenticate your new source and send messages to Redox. Continue reading the next section to learn more about it.

Authenticating a Source

In order to send data from your Source to a Destination, you must first authenticate with Redox. Within your Source Settings, you will find both an API Key and a button to set your Source Secret. Once your source secret is set, you can change it at any time. To authenticate, send an HTTPS POST request to https://api.redoxengine.com/auth/authenticate with your source keys in the body of your request:

  • apiKey
  • secret

Authentication request:

curl -X POST https://api.redoxengine.com/auth/authenticate \
-d '{"apiKey": "not-a-real-api-key", "secret": "super-secret-client-secret"}' \
-H 'Content-Type: application/json'

The response body will contain the following keys:

  • accessToken: This is the token that must be sent with every request.
  • expires: This is the time that your accessToken expires. After this time, you’ll need to authenticate with Redox again.
  • refreshToken: You can use the refreshToken (together with your apiKey) to obtain a new accessToken without re-sending your Source Secret. More details below.

Sample Response:

{
    "accessToken": "13d5faa8-aacd-4a0d-a666-51455b1b2ced",
    "expires": "2015-03-25T20:52:35.000Z",
    "refreshToken": "4ed7b234-9bde-4a9c-9c86-e1bc6e535321"
}

To obtain a new accessToken using the refreshToken, make a POST to https://api.redoxengine.com/auth/refreshToken with two keys in the body of your post:

  • apiKey
  • refreshToken

Request an accessToken using a refreshToken:

curl -X POST https://api.redoxengine.com/auth/refreshToken \
-d '{"apiKey": "not-a-real-api-key", "refreshToken": "4ed7b234-9bde-4a9c-9c86-e1bc6e535321"}' \
-H 'Content-Type: application/json'

Sample Response:

{
    "accessToken": "13d5faa8-aacd-4a0d-a666-51455b1b2ced",
    "expires": "2015-03-26T20:52:35.000Z",
    "refreshToken": "4ed7b234-9bde-4a9c-9c86-e1bc6e535321"
}

The response will be the same as if you authenticated using your Source Secret.

After authenticating with Redox, all outbound messages from your Source should have the following header key/value pairs:

  • “Content-Type”: “application/json”
  • “Authorization”: “Bearer ” + accessToken

Below is an example—note that there is a space after the word Bearer in the Authorization header.

{
    "headers": {
        "content-type": "application/json",
        "authorization": "Bearer 13d5faa8-aacd-4a0d-a666-51455b1b2ced"
    }
}

Source FAQs

At what interval should I initiate authentication requests?

Your accessToken is valid for 24 hours, so we recommend re-authenticating close to that interval (e.g. every 23.5 hours).
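One way to implement this is to track the expires timestamp from the authentication response and refresh shortly before it passes. A sketch, assuming the timestamp format shown in the sample responses above (the helper name and 30-minute margin are our own choices, not part of the Redox API):

```python
from datetime import datetime, timedelta, timezone

def needs_refresh(expires_iso: str, now: datetime,
                  margin: timedelta = timedelta(minutes=30)) -> bool:
    """Return True if the access token should be refreshed.

    expires_iso is the 'expires' value from the auth response, e.g.
    '2015-03-25T20:52:35.000Z'. Refreshing `margin` ahead of expiry
    (30 minutes here) yields roughly the 23.5-hour cycle suggested above.
    """
    expires = datetime.strptime(
        expires_iso, "%Y-%m-%dT%H:%M:%S.%fZ"
    ).replace(tzinfo=timezone.utc)
    return now >= expires - margin
```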

What headers should I include in messages sent outbound from my source?

You should include the following headers in your outbound messages over the Redox API:

{
    "headers": {
        "content-type": "application/json",
        "authorization": "Bearer <ACCESS TOKEN>"
    }
}

If I only have a single destination subscribed to my Source, do I still need to specify my connection partner’s destination information in my outgoing messages?

Yes. Our engine requires destination information to be specified in all messages we receive over our API.

Testing Your Connection to Redox

Redox provides standard developer tools that allow you to both test receiving webhook pushes to your destination as well as initiating queries or posting data back to Redox through your source. The data we provide to create test messages and transmissions is static and cannot be modified or added to through your test calls. We’ll cover how to use those standard testing tools as well as others that our team can help you connect to if relevant or helpful for your specific strategy and use case.

Destination Developer Tools

When you are ready to test sending webhooks to a destination you’ve configured you can do so by going to the DevTools tab in your destination record. Once there, you can select the Data Model and event type of the message you want to send to your destination record. You’ll then be able to select options from a few drop-down lists for key data fields—such as the patient name—to create your test message content.

Once you’ve selected options for the required fields, the test JSON will appear below in a grey editor. Once your test JSON appears in the editor, you can modify it however you like before sending it to your endpoint. You can change any content in the message, such as the patient details or clinical codes and values. Keep in mind this just changes the transmission content being sent to you and doesn’t modify the standard data available through our tools.

Once you’re ready to send the message, scroll to the bottom of the JSON payload and click the Send Data Model button. Once sent, a transmission will be created and you should see a link to its log appear.

If everything has been set up properly you will receive a 200 Response from your endpoint. If you run into an error and can’t resolve the issue, reach out to your Redox representative for help.

Source Developer Tools

Each source you create in Redox will also have DevTools sections where you can either put together quick curl snippets or download a Postman Collection and Environment file specific to your source. 

Testing Using Postman

If using Postman for testing, make sure you download and use both files.

Use the Import button to load the file you downloaded above. You will see a new collection on the “Collections” tab.

Locate the environment dropdown (top right), click the gear and choose Manage Environments.

The manage environment popup will appear. Click Import from the options at the bottom and open your environment file.

After you have imported the environment, you should be able to edit the variables. Locate the secret variable and set it to your source secret.

In the Authorization folder, you will find the Initial Login request. Run the login request and it should set your access token automatically.

From here, you can test sending any of our data models from your source. All messages you send using Postman (or any other method for that matter) will appear in the Messages logs.

Testing Using Curl

The curl commands that you can build to send pre-configured messages from your source to Redox are entirely constructed within the dashboard on the DevTools (Curl) tab.

Under Step 1, you’ll first enter your source secret to get an authentication request that you can copy and paste into your terminal session.

Once you receive your accessToken you can return to your dashboard and paste it in the Access Token field under Step 2.

You’ll then select the data model, event type, and other key information to generate the JSON body. Similarly to the authentication request, you’ll then be able to copy the sample curl request and paste it into your terminal session to send the test message.

Data on Demand Sandbox

In addition to the standard dev tools for sources or destinations, Redox also has a Data on Demand Sandbox for testing data on demand flows. Use of this sandbox requires your organization to be upgraded to a customer account and the configuration of new subscriptions, so please speak to your Redox representative if you are testing a workflow that would benefit from use of this sandbox.

The Data on Demand sandbox enables testing of query workflows against test data created by the Redox team. All Redox query events are available in the sandbox with 1-2 example patients for each query type. Your Redox representative will be able to configure the necessary setup to connect you to this sandbox if any Redox queries are part of your integration scope.

If you need additional or specialized data for your testing, your Redox team can also create a new sandbox environment for your own use where you can load any data that you want.

Messages & Transmissions

You may have already seen within your Redox dashboard that there are two types of transactions we refer to: Messages and Transmissions. What’s the difference?

Messages are sent from your Source to a subscribed Destination.

Transmissions are received by your Destination from a subscribed Source.

For all intents and purposes:

  • Messages are what you send to other members of the Redox Network.
  • Transmissions are what you receive from other members of the Redox Network.


Logs

Transaction logs are incredibly helpful from development testing to troubleshooting production errors. Logs allow you to view all Transmissions or Messages in one place. Logs are displayed in the views Messages, Transmissions, and Errors from your organization’s dashboard. You can also view specific logs by Source or Destination by selecting them first and then navigating to the desired log view (Messages, Transmissions, or Errors).

Logs by default are sorted by timestamp with the newest Message or Transmission on top. When you first create an organization your logs will be empty. As soon as you start using Dev Tools (Source > Dev Tools or Destination > Dev Tools) you will see those test Transmissions and Messages start to populate the Logs. Logs can get long as you develop and launch with Redox. The drop-down filters at the top of the Logs can help, allowing you to filter Messages by Data Model, Event type, Filtered, Errors, Resubmissions, and Source.

Since dashboard logs include the body of your organization’s transactions, PHI is part of production record logs and will be viewable to users from your organization whom you give PHI access. You can reference our post on User Management & PHI Access to read more about how that access is configured.


Subscriptions

Before two organizations can exchange data over Redox they need to agree upon and sign off on exactly what information is authorized for exchange. Redox will then set up a Subscription between the organizations that allows them to send, receive, or query authorized data using specific Redox Data Models. A Subscription in Redox is the pairing of a Source and Destination and is unique by Data Model. You can view all Subscriptions in the dashboard.

When you set up your organization on Redox you will see that you already have a good number of Subscriptions. These are Subscriptions set up for connecting your organization’s example Source “Redox API Endpoint” to Redox’s example Destination “Redox EMR” by all data models available. This allows you to use our Dev Tools to test sending and querying information over Redox.

The other group of Subscriptions that you will see is from Redox’s example Source “Redox Dev Tools” connecting to your organization’s example Destination “Redox API Endpoint”. These subscriptions allow you to use our Dev Tools to test receiving information over Redox.

As you work with us to share information with other members of the Redox network we will add new Subscriptions to your organization based on the specific information you need to exchange and workflows you need to support. As you start connecting to other organizations through Redox the Subscriptions tab will help you clearly see what information you are sending to other organizations and what information you are receiving from other organizations. 

This will also be where you obtain the identifiers of the source and destination records of your connection partners that you’re subscribed to. The destination IDs you see subscribed to your source are the destination IDs you will include in the Meta object of messages sent from your source and will be how Redox routes your outgoing requests. Conversely, the IDs of the source records subscribed to your destination will be the source IDs you’ll see on inbound requests from Redox and will be how you determine which of your connection partners sent a particular transmission.

Subscriptions can only be established between Sources and Destinations of the same Environment Type, i.e. a Production Source can only have a Subscription with a Production Destination. In order to connect with another organization, you must create a Source or Destination with a type of Staging or Production; you cannot connect a Development Source or Destination to another health system or organization.


Application Queuing

Applications should have queuing infrastructure in place so that received messages are managed and stored appropriately. Redox won’t send subsequent messages to an application’s endpoint until a 200 success response has been received for the previous message, so we recommend sending this confirmation before doing any data validation so you can continue to receive messages in real time. If, after returning a 200 response, your application identifies an issue with a particular message, we recommend logging an error in your application and reaching out to our 24-hour support team. Alternatively, if there is an issue with how the message was processed or sent, applications should respond to Redox with an appropriate error code. See our post under Technical Considerations on HTTP Status Codes.
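The ack-before-validate pattern above might look like this in a webhook handler. This is a sketch, not Redox-provided code; the queue and handler names are our own:

```python
from queue import Queue

inbound = Queue()  # drained later by a background worker

def receive_transmission(body: dict) -> int:
    """Webhook handler: enqueue first, validate later.

    Returning 200 immediately keeps Redox sending in real time; any
    validation problems found by the worker are logged and reported
    to Redox support rather than surfaced as an HTTP error.
    """
    inbound.put(body)
    return 200  # acknowledge before doing any validation work
```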

It is also recommended that applications have queuing capability for outbound messages being posted to Redox. In the event that Redox does not return a 200 success response to an application’s POST, the application will need to be able to queue any subsequent outbound messages until the issue is resolved.
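On the outbound side, a minimal sketch of queue-until-success (the send function is injected so the example stays self-contained; in practice it would POST the message to the Redox API):

```python
from collections import deque

def drain(queue: deque, send) -> None:
    """Send queued messages in order, stopping at the first failure.

    `send` returns an HTTP status code. Anything other than 200 leaves
    the failed message (and everything behind it) queued for a later
    retry, preserving message order.
    """
    while queue:
        if send(queue[0]) != 200:
            return  # leave the message queued; retry on the next drain
        queue.popleft()
```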

Redox Queuing

In the event that an application or EHR returns an error, Redox will queue subsequent messages and proactively reach out to the appropriate team to resolve the issue. Redox also has queuing in place to process all inbound messages from both EHR and application partners and connecting parties can expect real-time confirmations.

Error Management

Error management is an integral part of maintaining connections with other healthcare entities. Redox will manage the connections between our platform and your connection partners, but you will still be responsible for managing errors in your connection to Redox. In this section we’ll cover the different status codes you’ll encounter, error logs in the Redox Dashboard, and what error alerting you can expect from Redox.

HTTP Status Codes

Since status codes are an important way our API communicates, we want to clearly document what HTTP status codes the API can return, when they can come up, and what applications should do when they encounter each response.

We also want to use this page to share what we expect from webhooks.

Success Response

A 200 response indicates a successful request over the Redox API.

If the message has been sent to your connection partner health systems and no synchronous response is needed (i.e. it is not a query), then the 200 response you receive simply indicates that the data model was valid and subscriptions were set up correctly.

If the health system needs to respond, 200 will be sent along with relevant response data, which may include errors. For example, in ClinicalSummary PatientQuery, the health system may be down or there may be no documents available for the patient. This will be a 200 response with a Meta.Errors array.

{
    "Meta": {
        "Errors": [
            {
                "ID": 531806,
                "Text": "Something went wrong retrieving the document list. Please contact Redox Support. Could not find any documents.",
                "Type": "query",
                "Module": "Send"
            }
        ]
    }
}
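Because a 200 can still carry errors, it's worth checking Meta.Errors on every synchronous response before trusting the payload. A defensive sketch (the helper name is ours):

```python
def response_errors(response_body: dict) -> list:
    """Return the Meta.Errors array from a Redox response, or [].

    Even a 200 response to a query can report problems (e.g. the
    health system is down, or no documents were found for the
    patient), so callers should inspect this first.
    """
    return response_body.get("Meta", {}).get("Errors", [])
```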

400 Bad Request

A 400 status code indicates that the message was received, but something went wrong with the request. Often, this means that a required field is missing. For example, if the device message is missing Device.ID, Meta.Errors will inform you of that.

{
    "Meta": {
        "DataModel": "Device",
        "EventType": "New",
        "Message": {
            "ID": 41962259
        },
        "Source": {
            "ID": "aed02b8f-f001-491d-a521-1d757af59bef",
            "Name": "redox-sample"
        },
        "Errors": [
            {
                "ID": 531829,
                "Text": "Required field missing - Device.ID in Device:New",
                "Type": "message",
                "Module": "DataModels"
            }
        ]
    }
}

If the destination system returned an error, Redox will return a 400 response with additional detail in the Errors array of the response body:

{
    "Meta": {
        "DataModel": "Device",
        "EventType": "New",
        "Message": {
            "ID": 41962259
        },
        "Source": {
            "ID": "aed02b8f-f001-491d-a521-1d757af59bef",
            "Name": "redox-sample"
        },
        "Errors": [
            {
                "ID": 531829,
                "Text": "Destination returned an error response.",
                "Type": "transmission",
                "Module": "Destination Response",
                "DestinationStatusCode": 401,
                "Details": "Not authorized to access this patient."
            }
        ]
    }
}

401 Unauthorized

Redox will return a 401 response, with a plain text body of Invalid request, when authentication to the Redox API fails.

curl -X POST https://api.redoxengine.com/auth/authenticate \
-H "Content-Type: application/json" \
-d '{"apiKey": "<API KEY>", "secret": "<SECRET>"}'

404 Not Found / 403 Forbidden

Redox will return 403 and 404 responses if the API request does not use the POST verb, or if /endpoint or /query is not included in the Redox endpoint after api.redoxengine.com.

500 Internal Server Error

Redox will return a 500 response when an unexpected error occurs in our system. Our automated pager will notify us instantly when this happens, but regardless, please reach out if you receive one.

502 Bad Gateway

In rare cases, the API may return a 502 status code due to internal Redox updates. When this occurs, your application should retry the request. If you receive persistent 502 responses, please reach out to the Redox team at [email protected] for help.
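A simple retry wrapper for these transient 502s might look like the following sketch. The request function is injected so the example stays self-contained, and the attempt count is our own choice:

```python
def call_with_retry(request, max_attempts: int = 3) -> int:
    """Invoke `request` (which returns an HTTP status), retrying on 502.

    Transient 502s from internal Redox updates are safe to retry;
    any other status is returned to the caller immediately.
    """
    status = request()
    for _ in range(max_attempts - 1):
        if status != 502:
            break
        status = request()
    return status
```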

Redox Response Handling

A 200 response from your application indicates that everything is OK. We wait for a 200 response before sending the next message. If we receive a status code >= 400, we will pause before sending you any more transmissions and retry the last one that failed.

For a given destination, each subscription with a connecting source will have its own queue. When a response code >= 400 is received, the following happens:

  • The subscription queue for the particular transmission being sent is paused.
  • The failed transmission goes onto a subscription-specific retry queue.
  • A Redox on-call engineer is paged.
  • When the retry succeeds, the normal queue is un-paused.

As we covered earlier in the Integrated Application Expectations section, because of how we handle response codes, connecting applications should not return a status code of 400 or above unless you need our engine to stop all traffic to your destination. For example, if you have a non-critical error, send back a 200 and log the error in your application before reaching out to our support team directly. Conversely, if you are experiencing downtime or an otherwise critical error, we recommend using the pause to your advantage so that we can start queuing your messages and ensure no data is lost while the issue is resolved.

Retry Logic

As we’ve mentioned, when a destination initially returns an error on a received transmission, Redox will retry the transmission a number of times before throwing an alert. This prevents unnecessary notifications for intermittent internet issues. Within the retry queue, Redox will retry sending the message immediately, then again after 10 seconds, and again after another 10 seconds. The retry logic has a timed backoff period such that after the third retry the period between retries increases in the following order: 20, 30, 50, 80, and 120 seconds.

Error Logging

All transmission and message errors associated with your organization will be logged in the Errors tab of the left-hand dashboard navigator.

Each error log will allow you to access the associated message or transmission log and view the error code and full response, which allows both your team and ours to more effectively troubleshoot any issues.

Error Alerting

During your first implementation, Redox will make sure you have a documented alert email address in your Redox dashboard profile. If your destination starts returning errors against requests from Redox, after the third transmission retry we’ll automatically send an email to the listed address. For outbound messages from your application, you will be responsible for managing any errors returned from Redox.

What about errors that occur between Redox and your partners? Redox is a full-service platform and all monitoring and resolution of connectivity issues between our engine and your connection partners is owned by the Redox support team, which means that our standard alerting doesn’t include programmatically notifying you of these issues. All levels of the Redox stack are continuously monitored using Pagerduty, along with backup services, to instantly notify our engineers when errors occur. We have the ability to replay messages when necessary and will work to ensure that any data you send to us gets to your partners and anything your partners send to us gets to you, regardless of whether any connectivity or transmission issues occur along the way.

API Onboarding Videos

We are working to create a valuable collection of technical onboarding videos that will make getting up to speed with Redox a breeze. Follow this link to view this collection and start learning!

Community Created Tools

Developers using our API often create open source, standard mappings and other tools built to complement our platform. The Redox team does not formally vet these packages, but we do want to call attention to these and keep a compiled list here that you can reference and potentially use. We will maintain this list as new tools are developed. If you or your team are creating a package like those below that you’d like to share, please don’t hesitate to reach out and let us know.

Java – John Ravan, Tyler Dillon, Mark Gunnels

Sample .Net App – Chris Hennen

PHP – Woody Gilk

Java Scala – Aaron Patzer


Redox on Cloudmine – Ben Moser

.Net Class library converter – Trent Torkelson

Redox Express Middleware (Node.js) – Ingenious Agency

Redox Sample Destination (Node.js) – Ingenious Agency

General Tutorial Video (using Node.js) – Tom Jessessky