Business Continuity Plan

Redhorse Technologies Private Limited

Version 1.0 • Effective March 15, 2025

Introduction

At Redhorse Technologies Private Limited, we recognize that our customers depend on React Native Stallion for critical over-the-air (OTA) update delivery to their mobile applications. This Business Continuity Plan (BCP) establishes the framework, strategies, and procedures we maintain to ensure service resilience, minimize disruption, and enable rapid recovery in the event of incidents or disasters.

This document outlines our approach to business continuity planning, disaster recovery, and operational resilience. It is designed to provide transparency to our customers, partners, and stakeholders regarding how we protect service availability and maintain operations during adverse events.

Scope

This Business Continuity Plan applies to all React Native Stallion services, including the Update service, Submit service, Build workflows, Console management interface, and supporting infrastructure. It covers all personnel, systems, and processes necessary to maintain and restore these services.

The plan addresses various disruption scenarios including infrastructure failures, natural disasters, cyber incidents, pandemic events, and third-party service outages.

Business Impact Analysis

We conduct regular business impact analyses to understand the criticality of our services and the potential consequences of disruption. This analysis helps us prioritize recovery efforts and allocate resources appropriately.

Critical Services are those essential to core customer operations and include the OTA Update delivery infrastructure, patch distribution network, and bundle storage systems. These services are designed for maximum resilience with redundancy across multiple availability zones.

Important Services support customer workflows and include the Console management interface, build and submit pipelines, and analytics dashboards. These services have high availability targets with defined recovery procedures.

Supporting Services include documentation, marketing systems, and non-critical internal tools. While important for overall operations, these can tolerate longer recovery times without significant customer impact.

Our impact analysis considers factors such as revenue impact, customer experience degradation, contractual obligations, regulatory requirements, and reputational consequences when determining recovery priorities.

Recovery Objectives

We define clear recovery objectives for our services based on their criticality and business impact.

Recovery Time Objective (RTO)

The Recovery Time Objective represents the maximum acceptable duration between service interruption and restoration. Our RTOs are defined based on service criticality:

  • OTA Update Delivery: 15 minutes RTO — Our update delivery infrastructure is designed with automatic failover capabilities to minimize any interruption to update distribution.

  • Console and Management: 1 hour RTO — Management interfaces are restored promptly to enable customers to manage their applications and deployments.

  • Build and Submit Pipelines: 4 hours RTO — Workflow services are restored within a timeframe that minimizes impact to customer release cycles.

Recovery Point Objective (RPO)

The Recovery Point Objective represents the maximum acceptable data loss measured in time. Our RPOs ensure minimal data loss:

  • Customer Bundle Data: Near-zero RPO — All uploaded bundles and assets are replicated synchronously across multiple storage locations.

  • Configuration and Metadata: 1 hour RPO — Customer configurations, deployment settings, and metadata are backed up continuously with point-in-time recovery capabilities.

  • Analytics and Logs: 24 hours RPO — Historical analytics data is backed up daily, with recent data reconstructable from multiple sources if needed.

Infrastructure Resilience

Our platform is built on cloud infrastructure designed for high availability and fault tolerance.

Multi-Zone Architecture

React Native Stallion services are deployed across multiple availability zones within each region. This architecture ensures that the failure of any single data center does not result in service disruption. Traffic is automatically distributed across healthy zones, and failover occurs without manual intervention.

Geographic Redundancy

Critical data and services are replicated across geographically separate regions. This protects against regional disasters and provides customers with low-latency access from multiple locations worldwide.

Redundant Components

Our architecture eliminates single points of failure through redundant load balancers, database clusters with automatic failover, distributed storage systems, and multiple network paths. All critical components are designed with N+1 or greater redundancy.

Auto-Scaling and Self-Healing

Our infrastructure automatically scales to handle traffic fluctuations and self-heals by detecting and replacing unhealthy instances. This ensures consistent performance and availability even during unexpected load spikes or component failures.

Disaster Recovery Strategies

We employ multiple disaster recovery strategies appropriate to different service tiers and failure scenarios.

Active-Active Configuration

Our core update delivery infrastructure operates in an active-active configuration across multiple data centers. Unlike traditional primary-standby models, all data centers actively serve customer traffic simultaneously. This means there is no "failover" in the traditional sense — if one data center experiences issues, traffic seamlessly continues through the remaining healthy data centers.

Automated Failover

Database systems, caching layers, and compute resources are configured for automatic failover. Health checks continuously monitor service availability, and traffic is automatically rerouted away from unhealthy components within seconds of detection.
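As an illustration of the detection-and-reroute pattern described above (not our production implementation), the following minimal TypeScript sketch shows how a health-check loop can mark a replica unhealthy after repeated failed probes and route subsequent requests only to healthy replicas. The endpoint URLs, thresholds, and intervals are hypothetical.

```typescript
// Illustrative sketch only: endpoint URLs, thresholds, and intervals are
// hypothetical and do not describe Redhorse's production configuration.

interface Replica {
  url: string;
  healthy: boolean;
  consecutiveFailures: number;
}

const FAILURE_THRESHOLD = 3;      // mark unhealthy after 3 failed checks
const CHECK_INTERVAL_MS = 5_000;  // probe every 5 seconds

const replicas: Replica[] = [
  { url: "https://zone-a.example.internal/health", healthy: true, consecutiveFailures: 0 },
  { url: "https://zone-b.example.internal/health", healthy: true, consecutiveFailures: 0 },
];

async function probe(replica: Replica): Promise<void> {
  try {
    const res = await fetch(replica.url, { signal: AbortSignal.timeout(2_000) });
    if (!res.ok) throw new Error(`status ${res.status}`);
    replica.consecutiveFailures = 0;
    replica.healthy = true;
  } catch {
    replica.consecutiveFailures += 1;
    if (replica.consecutiveFailures >= FAILURE_THRESHOLD) {
      replica.healthy = false; // traffic is no longer routed here
    }
  }
}

// Pick a healthy replica for the next request. In an active-active setup all
// replicas serve traffic, so "failover" simply means skipping unhealthy ones.
function pickReplica(): Replica | undefined {
  const healthy = replicas.filter((r) => r.healthy);
  return healthy[Math.floor(Math.random() * healthy.length)];
}

setInterval(() => replicas.forEach(probe), CHECK_INTERVAL_MS);
```

In practice this logic lives in managed load balancers and database failover tooling rather than application code; the sketch only illustrates the continuous health-check and rerouting cycle.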

Backup and Restore

All customer data is backed up using encrypted, automated backup systems. Backups are stored in geographically separate locations and tested regularly to ensure recoverability. We maintain multiple backup generations to enable point-in-time recovery when needed.

Incident Management

Our incident management process ensures rapid response to service disruptions of any severity.

Incident Classification

Incidents are classified by severity to ensure appropriate response:

  • Severity 1 (Critical): Complete service outage or data loss affecting multiple customers. Triggers immediate all-hands response with executive notification.

  • Severity 2 (High): Significant service degradation or outage affecting a subset of customers. Triggers immediate engineering response with management notification.

  • Severity 3 (Medium): Limited service impact or performance degradation. Addressed during normal business hours with customer communication as appropriate.

  • Severity 4 (Low): Minor issues with minimal customer impact. Tracked and resolved through standard support processes.

Incident Response Team

A designated incident commander coordinates all response activities during major incidents. Engineering and operations personnel are available on-call 24/7, with documented playbooks for common scenarios. The incident commander has authority to engage any necessary resources and make operational decisions to restore service.

Communication

During incidents, we communicate proactively with affected customers through our status page, email notifications, and in-console alerts. Enterprise customers receive direct communication from their account representatives. Post-incident reports are provided for significant events.

BCP/DR Testing

We maintain a rigorous testing program to validate our business continuity and disaster recovery capabilities.

Operational Testing

As part of normal operations, we regularly perform controlled failovers during maintenance windows. These operational exercises validate our multi-zone and multi-region failover capabilities under real-world conditions. By incorporating resilience testing into routine operations, we exceed industry-standard annual DR testing requirements.

Tabletop Exercises

We conduct periodic tabletop exercises where team members walk through disaster scenarios and response procedures. These exercises identify gaps in documentation, training, or procedures and ensure all personnel understand their roles during an incident.

Annual DR Exercises

At least annually, we conduct comprehensive disaster recovery exercises that simulate major failure scenarios. These exercises test our ability to recover critical services within defined RTOs and validate the integrity of backup and recovery procedures.

Playbook Maintenance

Operational playbooks are maintained for all critical recovery scenarios and reviewed at least annually. Lessons learned from incidents and exercises are incorporated into playbook updates to continuously improve our response capabilities.

Third-Party Dependencies

React Native Stallion relies on reputable third-party service providers for cloud infrastructure, content delivery, and specialized services. We carefully evaluate and monitor these dependencies as part of our continuity planning.

Cloud Infrastructure

Our primary infrastructure is hosted on major cloud providers that maintain their own comprehensive BCP/DR programs, geographic redundancy, and high availability commitments. We leverage provider-native resilience features while maintaining our own additional redundancy layers.

Content Delivery

Update distribution is accelerated through content delivery networks with global edge presence. CDN providers are selected based on their availability track record, geographic coverage, and failover capabilities.

Payment Processing

Payment processing is handled by PCI DSS-compliant providers, including Razorpay, PayPal, and Paddle. We do not store payment card data, which reduces our exposure and allows us to rely on these specialized providers' security and availability measures.

Vendor Risk Management

All critical vendors are evaluated for their business continuity capabilities as part of our vendor management process. We maintain awareness of vendor dependencies and have contingency plans for scenarios where vendor services become unavailable.

Pandemic and Remote Operations

Redhorse Technologies maintains a distributed workforce with robust remote work capabilities. Our pandemic response plan ensures business operations can continue even during widespread public health emergencies.

All personnel are equipped to work remotely with secure access to necessary systems and tools. Our cloud-based infrastructure and communication platforms enable full operational capability regardless of physical office availability. Support and engineering teams operate across multiple locations and time zones, providing coverage continuity even if specific regions are impacted.

Customer Responsibilities

While Redhorse Technologies maintains comprehensive business continuity measures for our platform, customers are encouraged to incorporate React Native Stallion into their own disaster recovery planning.

Customers should incorporate the following into that planning:

  • Consider how their applications will behave if update services are temporarily unavailable.

  • Implement appropriate fallback behaviors in their mobile applications (see the sketch after this list).

  • Maintain their own backups of critical bundle configurations and deployment settings.

  • Test their applications' resilience to update service interruptions.
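As a minimal, hedged example of such a fallback, the TypeScript sketch below races an update check against a timeout so that a slow or unreachable update service never blocks app startup. The checkForStallionUpdate helper is a hypothetical stand-in for whatever update-check call your integration uses, and the timeout value is an assumption.

```typescript
// Hypothetical sketch of a resilient OTA update check in a React Native app.
// checkForStallionUpdate() stands in for your actual update-check call; it is
// not a documented API. The timeout value and behavior are assumptions.

declare function checkForStallionUpdate(): Promise<void>;

const UPDATE_CHECK_TIMEOUT_MS = 10_000;

async function safeUpdateCheck(): Promise<void> {
  let timer: ReturnType<typeof setTimeout> | undefined;
  const timeout = new Promise<never>((_, reject) => {
    timer = setTimeout(
      () => reject(new Error("update check timed out")),
      UPDATE_CHECK_TIMEOUT_MS
    );
  });

  try {
    // Race the update check against the timeout so the app never waits
    // indefinitely on the update service.
    await Promise.race([checkForStallionUpdate(), timeout]);
  } catch (err) {
    // Fall back to the bundle already shipped with (or previously applied to)
    // the app. Log and retry later instead of surfacing an error to the user.
    console.warn("OTA update check skipped, continuing with current bundle:", err);
  } finally {
    if (timer) clearTimeout(timer);
  }
}
```

The key point is that the application continues on its current bundle whenever the update service is unreachable; an actual implementation should follow the integration patterns in the Stallion documentation.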

We are available to consult with Enterprise customers on integration patterns that maximize resilience.

Plan Governance

Ownership

Executive management is responsible for overall business continuity governance. A designated Business Continuity Manager coordinates plan maintenance, testing, and continuous improvement activities.

Review and Updates

This Business Continuity Plan is reviewed at least annually or upon significant changes to our services, infrastructure, organizational structure, or risk landscape. Reviews incorporate lessons learned from incidents, exercises, and changes in industry best practices.

Risk Assessment

We conduct periodic risk assessments to identify potential threats to service continuity and evaluate the effectiveness of existing controls. Assessment findings inform updates to this plan and drive investment in resilience improvements.

Contact Us

If you have questions about our Business Continuity Plan or require additional information for your own compliance or risk management purposes, please contact us.

Enterprise customers may request additional details through their account representatives.