Adoption & Growth

Your First Project

From a technical perspective, you will be ready for your first project once you’ve finished developing against the Redox API using our available development and testing tools and have a clear understanding of your integrated data requirements and workflow, as outlined in the Onboarding section of this guide. Below, we’ll cover what your technical team can expect once you’ve completed your onboarding tasks and kicked off your first implementation.

Site Scoping

As you read through the Onboarding section, you may have noticed that there is typically a subset of questions you’ll need to ask your connection partners to gauge the feasibility of the integration and to understand the scope of a specific site’s capabilities. Below are the questions that apply to most projects and scopes; plan to discuss these with your connection partners:

  • Can you support any of the connection methods and standards supported by Redox for the specific data sets (e.g. orders) that we are looking to receive?
  • Will your default data exchange infrastructure provide our specific data requirements?
    • If not, what is the level of effort to modify the existing setup to include the required data fields?
  • What is your patient identification structure? (see the sketch following this list)
    • How many patient IDs do you have across your enterprise system, and what are the ID types?
  • What types of visit or encounter identifiers will you send?
  • What types of provider identifiers will you send?
  • What kinds of coded clinical values will you send?
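
To make the patient identification discussion concrete, here is a minimal sketch of how your application might select the identifier a site tells you to key on. It assumes the Patient.Identifiers structure used across Redox Data Models, where each identifier carries an ID and an IDType; the specific IDType values shown are illustrative.

```typescript
// Shape of the identifier list on Redox Data Model payloads
// (Patient.Identifiers); verify field names against the Redox schemas.
interface PatientIdentifier {
  ID: string;
  IDType: string; // e.g. "MR" or an enterprise-specific type
}

// Select the identifier a given site has told you is authoritative.
// Which IDType to key on is exactly what the questions above surface.
function findPatientId(
  identifiers: PatientIdentifier[],
  preferredType: string,
): string | undefined {
  return identifiers.find((id) => id.IDType === preferredType)?.ID;
}

// Example: a site that identifies patients by medical record number.
const mrn = findPatientId(
  [
    { ID: "0000000001", IDType: "MR" },
    { ID: "e1234", IDType: "EHRID" },
  ],
  "MR",
);
```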

In addition to the above, there may be other questions specific to your scope or workflow. Redox will help you identify these during your scoping process and workflow review with our team.

Testing

The following is an overview of the testing phases that ensure your integrations reflect your intended workflow and lead to a successful go-live. While these definitions will help guide you through the testing process, they do NOT replace test scripts or scenarios. Redox expects that your team will work alongside the Healthcare Organization’s project team to develop a test plan prior to project kick-off.

Functional Testing

Functional Testing is the opportunity to confirm that connectivity and basic message exchange are set up successfully. During Functional Testing, we recommend that at least one message for each Data Model in scope (and each EventType, within reason) be successfully exchanged between the Healthcare Organization and your application.
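
As a rough illustration of a Functional Testing pass, the sketch below authenticates and sends a single message. It assumes the classic Redox API endpoints (api.redoxengine.com/auth/authenticate and api.redoxengine.com/endpoint) and placeholder credentials; verify the exact DataModel and EventType strings against the Redox documentation.

```typescript
// Minimal connectivity check: authenticate, then send one message for
// a Data Model in scope. URLs and payload shape assume the classic
// Redox API; substitute your own credentials and in-scope models.
const BASE = "https://api.redoxengine.com";

async function sendFunctionalTestMessage(apiKey: string, secret: string) {
  // Exchange API credentials for a short-lived access token.
  const authRes = await fetch(`${BASE}/auth/authenticate`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ apiKey, secret }),
  });
  if (!authRes.ok) throw new Error(`auth failed: ${authRes.status}`);
  const { accessToken } = await authRes.json();

  // Send one message for a Data Model/EventType in scope.
  // Meta.Test flags this as a test message.
  const res = await fetch(`${BASE}/endpoint`, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${accessToken}`,
    },
    body: JSON.stringify({
      Meta: { DataModel: "PatientAdmin", EventType: "NewPatient", Test: true },
      // ...remaining fields per the Data Model schema
    }),
  });
  console.log(`PatientAdmin NewPatient -> HTTP ${res.status}`);
}
```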

This phase gives you initial insight into the content of the messages and what updates need to be made (and by whom), and helps you confirm that your scope allows you to accomplish all aspects of your workflow.

Integrated Testing

Once Functional Testing is complete, we move into Integrated Testing. This phase moves beyond the basic ability to send and receive messages and focuses on how the content of the messages meets the needs of the outlined workflows. During this phase, your team should ensure that:

  • All workflows can be successfully completed end-to-end
  • Messages contain all required information in the correct and expected format
  • Workflows are executed the same way they will be in the production environment

This phase typically relies on test scripts (created by your team and/or the Healthcare Organization) so that all parties can track the expected workflow, issues encountered, and the path to resolution. At the completion of this phase, all parties should agree that the workflow is complete and ready for production.
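
Test scripts are often tracked as documents or spreadsheets, but the content checks lend themselves to light automation. Below is a minimal sketch of asserting that a received message carries the required information in the expected format; the required paths shown are hypothetical placeholders for whatever your test plan calls for.

```typescript
// Assert that a received message contains the fields your workflow
// requires, in the expected format. The paths below are placeholders;
// derive the real list from your test plan.
type Check = { path: string; test: (value: unknown) => boolean };

const requiredChecks: Check[] = [
  { path: "Patient.Identifiers.0.ID", test: (v) => typeof v === "string" && v.length > 0 },
  { path: "Visit.VisitNumber", test: (v) => typeof v === "string" },
];

// Walk a dotted path ("a.b.0.c") into a parsed message.
function getPath(obj: unknown, path: string): unknown {
  return path.split(".").reduce<any>((o, key) => (o == null ? undefined : o[key]), obj);
}

// Returns the list of failed checks so testers can log issues
// and track them to resolution, as the test scripts describe.
function validateMessage(message: unknown): string[] {
  return requiredChecks
    .filter(({ path, test }) => !test(getPath(message, path)))
    .map(({ path }) => path);
}
```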

User Acceptance Testing (optional)

User Acceptance Testing (UAT) allows end users to experience the workflow prior to Go Live. Note that not all Healthcare Organizations require this step. To avoid poor adoption of new tools or workflows, UAT should not be scheduled until all workflows have been successfully completed by members of the project team.

This phase relies on end users to complete the workflow as they will in real-life scenarios. This allows them to ask questions, provide feedback, and familiarize themselves with a potentially new workflow. 

Production Testing (optional)

Production Testing is a final chance to ensure that all workflows and related messages are operating as expected. This is an optional phase in which workflows are completed in the production environment prior to making them public to end users.

This phase is completed at a scheduled time, often during off-hours, to ensure end users are not disrupted by the testing effort. This is the last step prior to Go Live and is often completed days or hours prior to releasing the workflow to end users.

Retrospective

After you bring your first project live, we strongly encourage your team to conduct a thorough retrospective. The retro should cover all aspects of the project, with particular attention to reviewing technical designs and decision-making. For example, many of our customers find after their first project that their product or integration scope needs updating: perhaps part of the data set you believed to be required turns out not to be a requirement for your initial integration scope, or perhaps you need to add a feature to your product’s UI to better facilitate the user workflow. It’s best to intentionally set aside time for this review shortly after your first project goes live, while the information and context are still fresh. Let your Redox representative know when you’re looking to schedule your retrospective, and our team will be happy to provide and review any relevant feedback from your first project.


Data Model Updates

One thing that sets the Redox API apart from APIs like HL7 FHIR is our ability to support new, emerging use cases and get them live in production quickly. As our data models evolve with new data points, our first focus is always on developer satisfaction. Below, you’ll find information on how we handle updates to our models today and what our plans are down the road.

What will stay constant?

We do our best not to make any of the following types of changes:

  • Changing the data type of a field (e.g. string to Array, or number to string)
  • Removing fields that already exist
  • Adding new required fields
  • Decommissioning a Data Model or Event Type

If we ever do run into an edge case where any of the above updates need to be made (which has yet to happen), please know that we would notify all customers far in advance of the update and, where appropriate, provide a transition plan for all affected customers.

Additions to Redox Data Models

Additions that Redox may currently make to models include:

  • New fields of any data type (Array, string, etc.) to existing data models
  • New Data Model event types

When we make an addition to a data model, we post it in our Change Log, which also posts to the General channel of our public Slack. You can join our Slack community here. Additionally, any changes made to our models are automatically reflected in the schemas available for download here.

The best way to build against our models and account for these additions is to be as “tolerant” a reader as possible: ignore data points you don’t need, and avoid parsing everything into strongly typed objects. If this isn’t feasible for your project, or if you’re running into issues with this approach because of your specific stack or environment, please let us know so we can talk through other potential solutions.
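
As an illustration, a tolerant reader in TypeScript might extract only the fields it uses and ignore everything else, so new fields added to a model never break parsing. The field names below are purely illustrative.

```typescript
// Tolerant reader: pull out only the fields this workflow needs and
// ignore the rest, so new fields added to a Data Model are harmless.
// Avoid exhaustively typed parsers that reject unknown properties.
interface OrderSummary {
  orderId?: string;
  status?: string;
}

function readOrder(payload: Record<string, any>): OrderSummary {
  return {
    // Optional chaining tolerates absent or restructured sections.
    orderId: payload?.Order?.ID,
    status: payload?.Order?.Status,
  };
  // Any new fields added to the model are simply never read.
}
```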


Scaling Architecture

From an architectural standpoint, there are a few key things to think about when looking to scale your integrated product across hundreds or thousands of connections. In this section we’ll share the best practices and common roadblocks we’ve seen.

Avoid Mirroring Data

In almost any software architecture, mirroring data (trying to maintain an in-sync, duplicated representation of another system’s data) is an anti-pattern: it’s extremely difficult to do in a cost-effective and scalable way. Unfortunately, in healthcare integrations it’s relatively common. The reason is that most healthcare integration is done by consuming a health system’s HL7 feeds, for lack of alternative options, which forces the consuming system to save the data if the product needs it at a later date or on an ongoing basis.

If it can be avoided, don’t try to persist a health system’s data. It’s costly not only in technology spend but also in maintenance and operations (especially at scale). Redox offers another option via the supported queries in our Data on Demand feature. Designing your solution to access only the healthcare system’s data your product needs, when it needs it, will reduce your architecture’s maintenance and operating costs while increasing its ability to scale rapidly.
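
For instance, rather than persisting demographics or clinical data, your product could issue a query at the moment of need. The sketch below assumes the classic Redox query pattern (POSTing a Data Model with a query EventType to the /query endpoint) and a hypothetical redoxPost helper that handles authentication like the earlier Functional Testing sketch; verify the exact model and event names against the Redox documentation.

```typescript
// Hypothetical helper: an authenticated POST, as in the earlier
// Functional Testing sketch (token handling elided for brevity).
async function redoxPost(url: string, body: unknown): Promise<any> {
  const res = await fetch(url, {
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Authorization: `Bearer ${process.env.REDOX_TOKEN}`, // placeholder
    },
    body: JSON.stringify(body),
  });
  return res.json();
}

// Fetch a patient's data only at the moment of need instead of
// mirroring the health system's records.
async function getClinicalSummary(patientId: string, idType: string) {
  return redoxPost("https://api.redoxengine.com/query", {
    // Verify exact DataModel/EventType strings against the Redox docs.
    Meta: { DataModel: "Clinical Summary", EventType: "PatientQuery" },
    Patient: { Identifiers: [{ ID: patientId, IDType: idType }] },
  });
}
```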

Multi-tenancy and Dynamic Onboarding

To move from a few integrations to integrations at scale, it’s important to make onboarding of new health systems as dynamic and painless as possible. If you want to integrate with even 100 health systems and it takes one engineering effort-day just to provision new infrastructure each time, you lose 100 engineering effort-days simply adding those customers. That’s a huge opportunity cost compared to the value 100 days’ worth of engineering effort could add to the product itself.

Two strategies to consider for reducing onboarding friction are multi-tenancy on shared infrastructure and automated infrastructure partitioning. It’s tempting to provision segregated infrastructure for each integration, but this strategy is not cost-effective and quickly becomes unmaintainable at scale. Instead, choose a partitioning strategy that guarantees protection against data cross-contamination while still allowing shared infrastructure and dynamic onboarding. A sketch of this approach follows.
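
As a sketch of the shared-infrastructure approach, onboarding a new tenant becomes a configuration change rather than a deployment. The config shape and lookup below are hypothetical; the point is that tenant identity flows through data, never through dedicated code paths or servers.

```typescript
// With shared infrastructure, onboarding a new health system is a
// configuration insert, not an infrastructure deployment.
interface TenantConfig {
  tenantId: string; // partition key on every table and stream
  sourceId: string; // e.g. the Redox source this tenant sends from
  featureFlags: Record<string, boolean>;
}

const tenants = new Map<string, TenantConfig>();

// Dynamic onboarding: one insert provisions the tenant.
function onboardTenant(config: TenantConfig): void {
  tenants.set(config.sourceId, config);
}

// All inbound traffic resolves its tenant from message metadata,
// keeping the processing code itself tenant-agnostic.
function resolveTenant(meta: { Source?: { ID?: string } }): TenantConfig {
  const tenant = meta.Source?.ID ? tenants.get(meta.Source.ID) : undefined;
  if (!tenant) {
    throw new Error("unknown tenant; reject to avoid cross-contamination");
  }
  return tenant;
}
```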

Volume Management

For event-driven workflows, event/message volume scales up with your integrations, and event volume rates vary widely across health systems. Embracing event-streaming architectural paradigms will let your architecture scale resources as needed as your integrations grow.

Some recommendations to keep in mind when designing an event-streaming architecture (a sketch follows the list):

  • Ensure the layers of your architecture are horizontally and dynamically scalable, as well as tenant-agnostic.
  • Segregate tenant event streams while keeping processing code as tenant-agnostic as possible.
    • The impact of high-volume event streams on other streams must be minimal.
  • Provide tenant-specific event stream control:
    • Problems in one tenant’s event stream can’t be allowed to impact other tenants’ streams.
  • Automate load balancing.
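
One way to realize these recommendations is a partitioned log keyed by tenant; the sketch below uses the kafkajs client as an assumed stack, but any partitioned event-streaming system works similarly. Keying messages by tenant keeps streams segregated while the consumer code stays tenant-agnostic, and consumer-group rebalancing gives you automated load balancing.

```typescript
import { Kafka } from "kafkajs"; // assumed stack; any partitioned log works

const kafka = new Kafka({ clientId: "integration-ingest", brokers: ["broker:9092"] });

// Keying by tenant routes each tenant's events to a consistent
// partition, so one noisy tenant can't starve the others, while the
// processing code itself never branches on tenant identity.
export async function publishEvent(tenantId: string, payload: object) {
  const producer = kafka.producer();
  await producer.connect();
  await producer.send({
    topic: "integration-events",
    messages: [{ key: tenantId, value: JSON.stringify(payload) }],
  });
  await producer.disconnect();
}

// Consumers scale horizontally: adding instances to the group
// rebalances partitions automatically (automated load balancing).
export async function runConsumer() {
  const consumer = kafka.consumer({ groupId: "event-processors" });
  await consumer.connect();
  await consumer.subscribe({ topic: "integration-events" });
  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value!.toString());
      void event; // tenant-agnostic processing goes here
    },
  });
}
```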