My own early career experience in business continuity planning and disaster recovery is rooted in the still vividly raw memories of 9/11. Working in telco at the time, I remember first staring in horror at the TV screen in our office cafeteria, as we all did, when the towers fell. Then watching the alarms fire red from our NOC, first along the eastern seaboard and then throughout the US. No one was truly prepared for what happened. Not the systems, not the procedures, and certainly not us as humans.
As a telecommunications provider and part of the country's critical infrastructure, we managed the back ends for the bulk of the anonymous burner phones in the US, with an expectation of five-nines (99.999%) uptime. From that event, we, like many other technology providers, embarked on a serious path toward better disaster recovery and business continuity planning. We invested in hardware. We developed new procedures. We trained, we planned, and we executed. The company even went so far as to acquire a state-of-the-art data center, complete with redundant power grid connections, parallel battery rooms, and a massive diesel generator rumored to be able to run the building and our hardware for well over a month. We adapted from that tragedy and became more resilient for it.
We’re now faced with an equal, and most likely far greater, challenge. A more global one, for sure. Once again, organizations are confronting head-on how to keep their businesses running effectively in the face of such unexpected tragedy, and how to keep running the valued services they provide to their customers and those customers' consumers.
I'm in a different industry now, but one I'd argue can also be deemed critical infrastructure in our connected world, and we're forced to take a disciplined look at how we can be better prepared.
From a broadcast operations perspective, I’d say we’ve passed the first phase of this crisis. Companies have shifted to remote work. They’ve implemented workflows that may be “good enough for now” but aren't viable in any long-term or effective capacity. Perhaps they’re using cobbled-together solutions: VPNs that give access to the local, hardware-based SANs left behind in their now-ghosted offices. Maybe they're using some other remote access software; someone even mentioned using Citrix in their workflow, and I wasn’t aware it still existed. All of these stop-gap solutions are limited, whether by asset transport, single points of failure, bottlenecked access points, or cumbersome, disjointed workflows across disparate systems that are time-consuming and end-user intensive. The work can continue, but not in a sustainable or reliable way.
Over the past few weeks, we’ve been helping both our customers and new connections understand how our remote video editing tool, Vimond IO, can provide a more robust way to manage their video editing workflows in the face of such disasters: moving the workflow fully to massive cloud infrastructures designed and managed for five-nines availability and scalability, accessible from anywhere, at any time.
With Vimond IO, editors can easily gather, share, and edit content from home, without having to worry about latency or server load.
Our industry is facing its largest collective challenge, and ultimately an awakening: we were not ready, operationally or technically, for the shift to come. How we work is changing. The solutions we use will need to adapt, to plan for the unknown unknowns, and to keep bringing valued content to the consumers who rely on it.
Read more about how Vimond can secure your remote workflows with cloud-based video editing ↓.