
How Redox Works

Redox provides integration infrastructure that your team can manage through a single platform and connection. The Redox integration platform is built on experience with millions of messages, thousands of HL7 feeds and EHR vendor API calls, and hundreds of healthcare organizations and healthcare software providers.

Integration is critical to many products, but it can quickly consume a large amount of engineering time and staff as you scale. Traditional point-to-point connections require never-ending customization, complexity, and code, making it challenging to scale both your codebase and your customer base.

Redox uses a networked approach that’s designed to provide a more predictable integration method for software vendors and a reusable integration for healthcare providers. Healthcare software vendors can use a single HTTPS connection and a consistent JSON format across any health data source, including all major EHR vendors, CRMs, Health Information Exchanges (HIEs), and more. Redox can exchange data with healthcare organizations and key data sources, such as HIEs, using each system’s existing data exchange capabilities. Whether you’re receiving HL7 from Epic, C-CDAs from Allscripts, FHIR from Cerner, or using any of the myriad disparate EHR vendor APIs out there, you will always have a consistent way of integrating when you use Redox.

To better understand the answer to the question of “How does Redox work?”, it helps to break down the jobs-to-be-done when integrating healthcare data. Though the answer may seem complex, it’s fairly straightforward to understand how data flows and is managed across a software vendor’s application, the Redox integration platform, and a healthcare organization. Those are the three places data is processed and stored within its journey, so let’s take a closer look at each.

Redox for Integrated Healthcare Products, Devices, and Software

For software vendors connecting to Redox, we provide integration infrastructure that is built to scale, helping you reduce tech debt and address scaling challenges. For any vendor that works with integrated data, there are four jobs your product needs to do:

  1. Ingest data – the ability to receive data in a compatible format
  2. Process data – completion of data processing steps while retaining FIFO order on large quantities of incoming data
  3. Maintain data – application of business logic to define how data should be handled when it’s received
  4. Recall data – reference maintained data at the correct point in product workflows

Redox lightens this load by making sure that you’re working with the data you need in a predictable manner through our API. Vendors write their integration code once and connect to Redox over a single HTTPS webhook. Redox sends messages to an application’s callback URL and receives messages and queries in return. Redox has standard JSON data models that represent a wide variety of patient-centric data. When we send messages to an endpoint, we include an application secret in the message headers. Conversely, we provide an API key for authenticating messages sent to Redox. In the event of a transmission error, Redox automatically retries a number of times before raising an error alert that notifies the application’s escalation path and our on-call engineer to investigate further.
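The receiving side of this exchange can be sketched as follows. This is an illustrative outline only: the header name `verification-token` and the payload shape are assumptions for the example, not Redox’s documented contract.

```python
import hmac

# Minimal sketch of handling a webhook delivery: verify the shared
# secret from the headers, then acknowledge quickly and queue the
# payload for asynchronous processing.

def is_authentic(headers: dict, expected_secret: str) -> bool:
    """Reject any delivery whose secret doesn't match ours."""
    received = headers.get("verification-token", "")
    # Constant-time comparison avoids timing side channels.
    return hmac.compare_digest(received, expected_secret)

def handle_delivery(headers: dict, body: dict, expected_secret: str):
    """Return an (HTTP status, response body) pair for one delivery."""
    if not is_authentic(headers, expected_secret):
        return 401, {"error": "unauthorized"}
    # In a real service, enqueue `body` here instead of processing inline.
    return 200, {"received": body.get("Meta", {}).get("DataModel")}
```

A framework such as Flask or Express would wrap `handle_delivery` in an actual HTTP route; the verification-then-acknowledge split is the part that matters.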

What Happens in the Engine…

Ingest the data

In order to ingest the data, you first need to be able to set up and maintain connectivity with the data source. In healthcare, this often means using a VPN or handling the different authentication and authorization methods used by the variety of EHR vendors and healthcare data sources you need to work with.

Redox handles all of the VPN setup and monitoring over the life of your connection using automated alerts to our on-call support teams. We also have pre-built adapters to connect with the major EHR vendors and a centralized dashboard to manage your specific credentials.

Redox is designed to ingest data from different healthcare data sources, such as EHRs, HIEs, CRMs, and ERPs, using the format that their system already supports (e.g., HL7, vendor APIs, FHIR). Through our experience working with hundreds of healthcare data sources, we already have the necessary mechanisms in place to work with any data source using the most common healthcare-industry integration standards (as defined by the ONC’s Interoperability Standards Advisory).

Process the data

Messages go through three stages of transformation to standardize them into our JSON specifications. We use a proprietary parallel processing system to provide virtually real-time message processing while retaining FIFO order. First, messages are parsed according to type (HL7 v2, CCD, custom API, etc.). Second, they are configured for vendor-specific specifications based on the source system, such as an EHR, HIE, CRM, or ERP. Finally, they are configured for the health-system-level workflow customizations that can affect these messages. At this stage, field-level filters can be applied before messages are routed to their proper destination and transmitted in the specified format.
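The three stages above can be pictured as a pipeline of pure functions. This toy version is not Redox’s internal code; the field names and the single filter rule are illustrative assumptions, but it shows how parsing, vendor mapping, and org-level filtering compose while each message stream stays in order.

```python
# Stage 1: parse the wire format into a neutral structure.
# Here we only sketch pipe-delimited HL7 v2 segments.
def parse_stage(raw: str) -> dict:
    segments = [seg.split("|") for seg in raw.strip().splitlines()]
    return {"segments": {s[0]: s for s in segments}}

# Stage 2: apply vendor-specific mapping, e.g. which PID field
# holds the medical record number for this source system.
def vendor_stage(msg: dict) -> dict:
    pid = msg["segments"].get("PID", [])
    msg["Patient"] = {"MRN": pid[3] if len(pid) > 3 else None}
    return msg

# Stage 3: org-level customization plus field-level filtering,
# keeping only the fields this destination is allowed to see.
def org_stage(msg: dict, allowed_fields=("Patient",)) -> dict:
    return {k: v for k, v in msg.items() if k in allowed_fields}

def process(raw: str) -> dict:
    """Run one message through all three stages in order."""
    return org_stage(vendor_stage(parse_stage(raw)))
```

Because each stage is a function of its input alone, FIFO order is preserved simply by feeding one stream’s messages through sequentially, while independent streams can run in parallel.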

Maintain the data

Redox has begun to roll out support for specific types of data to continue extending the integration infrastructure we provide. For push-based integrations (such as HL7), Redox moves the information into our operational data store and continually updates it as we receive new information for enabled data models. When information is being pushed out of a healthcare data source, the volume of messages means that building, implementing, and maintaining the business logic needed to keep that information straight can quickly become a large amount of work.

We also often see that many customers receive data today that they don’t need until tomorrow (or next month), and keeping that data maintained over time can be challenging. As part of our processing work, we already define certain components of the business logic needed to understand how and why we’re receiving data at certain points from healthcare data sources. Extending this business logic to maintained data makes it easier for organizations using Redox to stay focused on the data that’s essential.

Recall the data

For data models that are enabled with our Data on Demand feature, Redox provides Query and Response events so you can retrieve the data that’s needed at the appropriate moment in time. Learn more about this feature in the Data on Demand section.
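A query in this pattern is just a JSON payload whose Meta block names the data model and event type, plus enough identifiers to find the patient. The exact field and event names below are illustrative, not the authoritative Data on Demand schema; consult the Data on Demand section for the real contract.

```python
# Hedged sketch of building a Query payload in the Query/Response
# pattern. "Query" as the EventType and the identifier shape are
# assumptions for illustration.

def build_query(data_model: str, mrn: str, id_type: str = "MR") -> dict:
    """Assemble a query asking for a patient's data from one data model."""
    return {
        "Meta": {"DataModel": data_model, "EventType": "Query"},
        "Patient": {"Identifiers": [{"ID": mrn, "IDType": id_type}]},
    }
```

The application would POST this payload to the Redox API and receive a Response event carrying the maintained data, rather than waiting for the next pushed message.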

Healthcare Organizations and Data Sources

For the healthcare organizations and data sources connecting to software vendors that are using Redox, the three main jobs we handle are connectivity, data translation into the expected format of the receiving system, and data delivery. This allows healthcare organizations to use the data exchange formats and processes supported by their systems (such as HL7, EHR vendor APIs, FHIR, and C-CDA) while allowing software vendors to use the Redox API as a consistent format across the different groups they work with.

Our integration experts work with health systems to identify the most efficient integration strategy for their unique EHR implementation. For HL7 integrations, Redox most commonly maintains a secure VPN tunnel for MLLP traffic. HL7 messages originating in the EHR are normally routed through an on-premise interface engine already in use at the health system before being pushed to our VPN. Conversely, Redox pushes HL7 messages back to the EHR through this same infrastructure. We also work with health systems to connect to EHR vendors’ custom APIs and to support the DIRECT protocol.
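The MLLP traffic mentioned above is simply HL7 v2 text wrapped in standard framing bytes, which is what lets an interface engine and Redox agree on where one message ends and the next begins. The framing is defined by the HL7 MLLP standard, not by Redox; this sketch shows only the wrapping, not the TCP transport around it.

```python
# MLLP frames each HL7 v2 message with a vertical-tab start byte
# (0x0B) and a file-separator + carriage-return trailer (0x1C 0x0D).
START, END = b"\x0b", b"\x1c\x0d"

def frame(message: str) -> bytes:
    """Wrap one HL7 message for transmission over an MLLP connection."""
    return START + message.encode("utf-8") + END

def unframe(data: bytes) -> str:
    """Strip MLLP framing, rejecting anything that isn't a full frame."""
    if not (data.startswith(START) and data.endswith(END)):
        raise ValueError("not a valid MLLP frame")
    return data[1:-2].decode("utf-8")
```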

The bottom line here is that we handle the mapping and configuration of messages in our engine, making it easy and lightweight for IT teams to work with us.

Throughout this entire process, best practice safeguards are in place to protect patient data. Check out our Security Overview for more information on how we protect client data.

Data Translation

This diagram shows how data consumed from an external source is transformed into our normalized data models. Though it shows an HL7 v2 message transformed into JSON, note that a variety of other standards, such as CDA, FHIR, X12, and web services, can be both consumed and produced.
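As a concrete taste of the translation the diagram depicts, an HL7 v2 PID segment packs name components into one field with `^` separators, while the normalized JSON gives each component its own named key. The output field names below follow Redox’s general style but are illustrative, not the exact data model schema.

```python
import json

# Toy HL7 v2 -> JSON translation: split the PID segment on the field
# separator (|), then split PID-5 (patient name) on the component
# separator (^) into named JSON fields.

def pid_to_patient(pid_segment: str) -> str:
    fields = pid_segment.split("|")
    # PID-5 is "LAST^FIRST"; pad so missing components become "".
    last, first = (fields[5].split("^") + ["", ""])[:2]
    return json.dumps({"Demographics": {"FirstName": first, "LastName": last}})
```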