So far in our series on scaling observability for game launches, we’ve discussed ways to 1) quickly analyze large volumes of telemetry data and 2) ensure high-quality telemetry data for more effective analysis at lower cost.
Those posts outlined best practices for scaling observability on game launch day, which is necessary to keep every infrastructure component performing well so players experience no lag, no glitches, and no bugs.
In this third and final post in the series, let’s review one more critical factor for scaling observability effectively on launch day: maximizing your team’s engineering resources.
Nodes, user sessions, and telemetry data can all scale up quickly – but your team cannot. Limited engineering resources can make it difficult to scale your visibility into production. Let’s look at some ways to ease the stress on your team during a game launch so every engineer can focus on the critical tasks at hand.
A Scalable Support System to Help When You Need It
Maintaining detailed observability into production can be hard. Observability practitioners need to build dashboards, instrument their services to expose the relevant telemetry, build alerts, and navigate their data to monitor and troubleshoot their environment. Technical roadblocks to any of these tasks can undermine visibility into the player experience and production health.
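As a rough sketch of what that instrumentation work can look like, here is a hypothetical matchmaking service emitting trace spans with the OpenTelemetry Python SDK. The service name, attributes, and console exporter are illustrative placeholders, not a prescribed setup:

```python
# A minimal sketch of instrumenting a (hypothetical) matchmaking service
# with the OpenTelemetry Python SDK so it exposes trace telemetry.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire up a tracer provider; ConsoleSpanExporter is only a stand-in for
# whatever backend your team actually ships spans to.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("matchmaking-service")

def find_match(player_id: str) -> None:
    # Each matchmaking request becomes a span with attributes that
    # dashboards and alerts can slice on.
    with tracer.start_as_current_span("find_match") as span:
        span.set_attribute("player.id", player_id)
        span.set_attribute("region", "us-east-1")
        # ... actual matchmaking logic would go here ...

if __name__ == "__main__":
    find_match("player-123")
```

Every service instrumented this way is one more thing someone has to wire up, debug, and keep current as the game evolves, which is exactly where support questions pile up.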
Effective support ensures end users can quickly operate the observability system to identify and troubleshoot problems. But if your observability system serves a hundred engineers, providing that support can become a tedious, full-time job.
Consider a platform that takes support off your team’s plate, so any user can reach out and get the help they need to maintain observability into their services, especially when your game launch is in crunch time.
SaaS vs. Self-Hosted Observability
Large enterprises run their own observability systems to gain insights into their services and ensure reliability. Many others rely on self-hosted open source tools like OpenSearch, Prometheus, and OpenTelemetry to monitor and troubleshoot their systems.
Engineering teams at Facebook, Netflix, and Uber run many of these open source tools – they’re highly customizable and free to install. However, if you’re going to run a self-hosted system, there are plenty of things to consider before the launch of your next big game.
Running self-hosted systems requires dedicated engineering resources to keep them running smoothly. Your team is on the hook to maintain the performance of the open source observability stack; falling behind can slow data ingestion and queries, or worse, cause the system to drop data or crash altogether.
To state the obvious: being blind to application performance and user experience during a game launch is not ideal.
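If you do run your own stack, part of the job is monitoring the monitoring. As a minimal sketch, assuming a self-hosted Prometheus at a made-up internal address, you might periodically check its own ingestion rate so a drop during a launch-day traffic spike doesn’t go unnoticed:

```python
# A sketch of watching the health of a self-hosted Prometheus itself,
# assuming it runs at the (hypothetical) address below.
import requests

PROMETHEUS_URL = "http://prometheus.internal:9090"  # hypothetical self-hosted instance

def ingestion_rate(window: str = "5m") -> float:
    """Return the rate of samples appended to Prometheus's own TSDB.

    A sudden drop can mean the self-hosted stack is falling behind or
    dropping data while player traffic is spiking.
    """
    resp = requests.get(
        f"{PROMETHEUS_URL}/api/v1/query",
        params={"query": f"rate(prometheus_tsdb_head_samples_appended_total[{window}])"},
        timeout=10,
    )
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0

if __name__ == "__main__":
    print(f"samples/sec ingested: {ingestion_rate():.0f}")
```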
In addition to observability system reliability, there are resource constraints to consider.
If you’re feeling the strain of limited talent and resources, managing a self-hosted stack carries a real opportunity cost. If you’d rather allocate those engineering resources to other priorities, you can let somebody else manage the observability stack for you. SaaS platforms handle the entire data pipeline and the underlying clusters, so all you need to do is send your data, log into your account, and begin analyzing.
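As a rough sketch of what “just send your data” can look like, the instrumentation from earlier only needs its exporter swapped for an OTLP exporter pointed at a hosted endpoint. The endpoint and token below are placeholders for illustration, not real provider values:

```python
# A sketch of pointing the same OpenTelemetry setup at a managed backend
# instead of a self-hosted collector. Endpoint and token are placeholders;
# use whatever your SaaS provider actually supplies.
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter

exporter = OTLPSpanExporter(
    endpoint="https://otlp.example-observability.com:4317",  # placeholder endpoint
    headers=(("authorization", "Bearer <YOUR_TOKEN>"),),      # placeholder token
)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(exporter))
trace.set_tracer_provider(provider)
```

With the pipeline and clusters managed for you, the remaining work is the part only your team can do: deciding what to instrument and what to alert on.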
Want to learn more? Sign up for a free trial of Logz.io to see how our observability platform can scale to meet your game launch needs.