Blog Posts✒️

The problem with serverless scaling

May 31, 2024 7 min

In the first article, I wrote about our live statistics system and why we implemented it on our website. In the following articles, I want to dive deep into the problems we have had with scaling the system to a growing number of users and what we did to tackle them.

Releasing the feature to the public

After putting the feature on our website, our first broadcast came around: the Fall Final 2022 event in November. It was the first time we would show our fans the new real-time stats feature we had built over the previous five weeks, and we were stoked to see what they thought of it and how they would react. At the start, around 300-400 people were watching the live stream on our website, and everything was running smoothly, with no errors occurring yet. Then the viewership grew to around a thousand people, and we started to see errors and users reporting that the stats weren't working for them.

We investigated whether the API or the database was failing. It wasn't the API that was struggling, because we had built the API on AWS Lambda serverless functions, which could scale to 1500 running instances at once. Then we looked at how the database was handling the traffic, and it was struggling, with 1000 connections open to it.

Pain with serverless computing scaling

After the event, we sat down to reflect on what caused the outage of real-time stats. We wrote a postmortem about what happened and investigated how we could resolve the issue, so that in the future we could scale this feature to more people than we had at this event (around 1200).

The first place we started looking for issues was CloudWatch. The metrics for the API and the database showed a clear correlation between Lambda concurrent executions and database connections. We could see that we had a lot of concurrent executions on the last day of the event, which made sense because a lot of people tuned in to watch the grand final and the showmatch on BLAST.tv. From the metrics we could conclude that every container Lambda spun up opened its own connection to the database, which meant we needed to implement some API caching or reduce the number of calls each user made to the live stats API every x seconds.

Database Proxy implementation

With the API implementation at the time, every single viewer would send a GET request to the statistics API, which would query the data and return it to the user. Every x seconds we specified, each user would request the stats again. So with 1500 concurrent viewers on our live stream, we would get 1500 requests every 15 seconds, and the requests came in waves because each user's interval timer starts when they load the live page (a rough sketch of this polling pattern is shown below). Our database wasn't happy with 1500 heavy queries every 15-20 seconds, even though it can scale to 128 ACUs.

The database was struggling because of the number of connections to it, so we tried setting up a database proxy in front of it, in the hope that it could reduce the load. The proxy sits in front of the database and handles connection pooling for us: when the Lambdas scale to hundreds of instances, they connect to the database proxy, which manages the connections and the transactions to the actual database.
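To make the polling pattern concrete, here is a minimal sketch of what such a client-side polling loop could look like. This is an illustration under my own assumptions, not BLAST.tv's actual code; the endpoint URL, the renderStats function, and the exact interval are placeholders.

```typescript
// Illustrative sketch only - endpoint, interval and renderStats are assumptions.
const STATS_ENDPOINT = "https://api.example.com/live-stats"; // placeholder URL
const POLL_INTERVAL_MS = 15_000; // each viewer polls on their own 15-second timer

// Assumed UI hook; in a real app this would update the stats widgets.
function renderStats(stats: unknown): void {
  console.log("live stats", stats);
}

async function fetchLiveStats(): Promise<void> {
  const response = await fetch(STATS_ENDPOINT);
  if (!response.ok) {
    throw new Error(`Stats request failed with status ${response.status}`);
  }
  renderStats(await response.json());
}

// The timer starts when the user loads the live page, which is why requests
// from different viewers arrive in waves rather than evenly spread out.
setInterval(fetchLiveStats, POLL_INTERVAL_MS);
```

On the API side, the change is mostly about where the Lambda connects. Here is a minimal sketch, assuming a Node.js Lambda using the node-postgres (pg) client; the environment variable names and the query are illustrative, not the actual implementation. Instead of opening a connection straight to the database, the handler points at the proxy endpoint and reuses its client across warm invocations, while the proxy pools connections to the database behind it.

```typescript
// Sketch of a Lambda handler connecting through a database proxy endpoint.
// Assumes node-postgres (pg); endpoint, env var names and query are illustrative.
import { Pool } from "pg";

// Created outside the handler so warm invocations of the same container
// reuse the connection instead of opening a new one every time.
const pool = new Pool({
  host: process.env.DB_PROXY_ENDPOINT, // the proxy endpoint, not the database itself
  database: process.env.DB_NAME,
  user: process.env.DB_USER,
  password: process.env.DB_PASSWORD,
  max: 1, // one connection per Lambda container; the proxy pools the rest
});

export const handler = async () => {
  const { rows } = await pool.query("SELECT * FROM live_stats LIMIT 1"); // placeholder query
  return {
    statusCode: 200,
    body: JSON.stringify(rows),
  };
};
```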
The database proxy would allow us to scale the Lambdas while keeping the number of connections to the database below a fixed threshold, so the database wouldn't crash. It can do this because it pools the connections to the database and allows them to be reused across queries.

Bug fixing under stressful situations

We tried implementing the database proxy during the actual event, when the problem occurred. After deploying what we thought was the fix, nothing seemed to have changed regarding the database connections; they were still around the ~800 mark. After the event was over (we did our best to survive the remaining days of it), we sat down to figure out why the database proxy changes we had deployed didn't work. After some research, we found that we had missed linking the Lambda function to the database proxy. There is a setting on the Lambda function where you can select the database or database proxy it should connect to; with that in place, the connection from the Lambda function to the database proxy is handled automatically, which reduces the CPU and memory usage of the database.

Conclusion

There are a few points the BLAST.tv team learned during this outage of Live Statistics during Fall Final 2022. First and foremost, we should have load-tested the system before launching it. We had only tested it in our development environment (with a maximum of 5 users), so we didn't know how the system would scale to hundreds of users during a live event.

The second lesson is that even in a stressful situation, we shouldn't panic and start deploying all sorts of things. It's better to sit down, breathe a little, and work out which solutions are available and how to implement them. It's also better to deploy a solution to a development environment first, so you can test it before promoting it to production; doing this minimizes the downtime of a given service. Finally, it's a good idea to deploy one solution at a time, so you can verify whether it fixes the problem and tell what worked and what didn't.

Thank you for reading through this article. In the later articles, I want to explain why we didn't go with API caching and the different types of caching methods there are.

How we scaled real-time data to thousands of people

May 17, 2024 5 min

This is the first part of my first series, 'Real-Time Stats on BLAST.tv.' In this series, I want to show and explain how we work at BLAST and what technical and engineering challenges come with trying to build new functionality for a large user base.

The purpose of Real Time Statistics

I first want to discuss the 'Real-Time Statistics' on our platform, BLAST.tv. The reason we wanted real-time statistics on BLAST.tv was that it was one of the most requested features by our users, and we also aimed to enhance the way our viewers experience esports online. We believed that bringing live statistics closer to the end user would give them more insight into the game while they are watching the livestream. It would also allow users to discuss the statistics in the chat window we have on the website.

The history behind Real Time Statistics on BLAST.tv

It all began with our team wanting to implement real-time statistics for Counter-Strike (the game). These statistics would display in-depth information for the match we are showcasing on BLAST.tv, similar to how the website HLTV does it. Counter-Strike has functionality that allows it to send game events occurring on the game server to a specific HTTP endpoint in the form of log lines.

Example of a log line:

01/01/1970 - 00:00:00.000 - MatchStatus: Score: 0:0 on map "de_overpass" RoundsPlayed: 0

We ended up creating a relatively simple log processor that parsed the log lines and inserted the data into a relational database (PostgreSQL). The data was split into entity events (match events) and player events (kills, deaths, assists, etc.). We stored the 'loggedAt' timestamp as well, so we could list the events in the order they were sent. Splitting the events allows us to query specific data depending on what we need for the product (a rough sketch of such a parser is shown further down).

The design

Our designers created a design for live statistics with two different views: a simple overview and a more in-depth one. The simple view provides users with key statistics for each player, along with the current score on the map and information about the leading team. The two bars on either side indicate the number of maps each team has won. Most Counter-Strike matches are played in a Best of 3 (BO3) format, so winning 2 maps is required to win the match.

The detailed view shows more in-depth statistics for each player. When the user expands the left sidebar, they are given the option to choose between the two teams. After the user has picked one of the two teams, they need to select a specific player from that team. Once the player is selected, they can view in-depth statistics such as Average Damage per Round (ADR), headshot percentage, and so on. When the detailed view is expanded, the bottom stats bar also expands to reveal the round progression over time, showing which team won each round. The user can click into a specific round and see detailed statistics about which player got the first blood, the number of kills each team achieved in that round, as well as who dealt the most damage.

The problems with scaling

As you can see from the designs, we needed to provide a lot of detailed information about the running match, and that's why this became one of the biggest and most complicated systems we have to date. I have chosen to split the problems into separate articles, making it easier to understand the various challenges we have encountered while scaling this product to thousands of users.
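To make the log-processing step above a bit more concrete, here is a minimal sketch of how a line like the MatchStatus example could be parsed. This is an illustration under my own assumptions (TypeScript, a hand-written regex, invented field names, and an MM/DD/YYYY date format), not the actual BLAST.tv processor.

```typescript
// Illustrative sketch of parsing a MatchStatus log line - not the actual BLAST.tv processor.
interface MatchStatusEvent {
  loggedAt: Date;
  scoreTeamA: number;
  scoreTeamB: number;
  map: string;
  roundsPlayed: number;
}

const MATCH_STATUS_PATTERN =
  /^(\d{2}\/\d{2}\/\d{4}) - (\d{2}:\d{2}:\d{2}\.\d{3}) - MatchStatus: Score: (\d+):(\d+) on map "([^"]+)" RoundsPlayed: (\d+)$/;

function parseMatchStatus(line: string): MatchStatusEvent | null {
  const match = MATCH_STATUS_PATTERN.exec(line);
  if (!match) return null; // not a MatchStatus line

  const [, date, time, scoreA, scoreB, map, rounds] = match;
  const [month, day, year] = date.split("/").map(Number); // assuming MM/DD/YYYY
  const [hours, minutes, rest] = time.split(":");
  const [seconds, millis] = rest.split(".");

  return {
    loggedAt: new Date(
      Date.UTC(year, month - 1, day, Number(hours), Number(minutes), Number(seconds), Number(millis))
    ),
    scoreTeamA: Number(scoreA),
    scoreTeamB: Number(scoreB),
    map,
    roundsPlayed: Number(rounds),
  };
}

// Example: parsing the log line from this article
const event = parseMatchStatus(
  '01/01/1970 - 00:00:00.000 - MatchStatus: Score: 0:0 on map "de_overpass" RoundsPlayed: 0'
);
console.log(event);
```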
I have tried to focus the articles on the scaling issues we have been through rather than the actual logic that goes on behind the scenes. In the next article, you can read about the challenges we faced using serverless infrastructure to build out this functionality, and how serverless isn't always the way forward for well-performing infrastructure. I also discuss the different strategies we used to deliver real-time data to the end user and how we scaled that, diving deeper into how we scaled the data to thousands of users using caching in different ways. API caching isn't always the way to go, and I will argue why we chose to move away from it and use static files instead. Thank you for taking the time to read my article. Have a great day! ☀️

Why Flutter and Firebase are best buddies

Jun 28, 2021 4 min

If you are starting a startup or building the next big app for the App Store, I recommend trying out the pair of Flutter and Firebase as a programming stack. Flutter and Firebase are both developed by Google: Flutter is their open-source cross-platform app framework, and Firebase is their BaaS (Backend-as-a-Service).

Arguments for using Flutter and Firebase

And here is the first point for why Flutter and Firebase make a good programming stack. Google stands behind both Flutter and Firebase, which means the integration between the two platforms is almost flawless. Pretty much every product that Firebase offers can be integrated into a Flutter app with just a couple of lines of code. A bare-minimum app linked to a specific Firebase project takes only two lines in two separate files: a dependency in the pubspec.yaml file, and an initialization call in the main.dart file (a sketch of both is shown at the end of this post). So with just two lines, you can link your new Flutter app up to Firebase.

The company Invertase maintains and updates the plugins. Invertase also manages the Firebase plugins for React Native, so they have teams supporting Firebase for both React Native and Flutter. Below is a list of products in their stable version that support Flutter.

My second argument for using Flutter with Firebase is that they are both very beginner-friendly, with a lot of good documentation to help you quickly set up an app that can be distributed to both Android and iOS at the same time. Flutter, as the frontend, helps you a lot when it comes to performance and distributing to two platforms concurrently. Firebase, on the other hand, helps you manage a scalable backend service for your app. It can be a pain to manage your own backend, such as a VPS (Virtual Private Server), all by yourself; by using Firebase, you can focus on making your app and getting it out to the world quickly and securely.

The drawbacks of using Flutter and Firebase

But there are also drawbacks to using Flutter and Firebase together. Flutter is the new kid on the block, so the community is still limited. This, unfortunately, means that there are only a limited number of answered community and Stack Overflow questions, but the Flutter community is expanding rapidly every day. The problem with Firebase as a backend service is that it can be very costly if your app scales very fast. Because Firebase is structured so that you pay for CRUD (Create, Read, Update, Delete) actions on their database, and not only for the bandwidth you use, it can be very costly if you have an app that reads a lot of data from your Firestore. If you want to build an app with Flutter and Firebase and are thinking "This will be the next Facebook," then I recommend finding a good revenue strategy for how your app is going to pay for itself.

Conclusion

Flutter and Firebase are a solid programming stack for building a cross-platform and scalable app. Flutter gives you the ability to create a fast and almost native app in no time. Firebase gives you a free tier of most of its products, which you can use to run an app with a solid 100-1,000 daily users without any backend costs. But if your app scales to more than 50,000 users, you may need to consider either building your own backend (API, database, etc.) or creating a solid revenue plan for your app that can pay its monthly bills.

Final Note: Let me know what you think about using Firebase and Flutter to build cross-platform apps. Are you a fan of those technologies, or are you using other technologies/programming stacks?
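As referenced above, here is a minimal sketch of what those two lines could look like, based on the standard FlutterFire setup. The package version, widget names, and surrounding boilerplate are illustrative assumptions, not taken from the original snippets.

```yaml
# pubspec.yaml - the one added line is the firebase_core dependency
# (version number is illustrative)
dependencies:
  flutter:
    sdk: flutter
  firebase_core: ^2.0.0
```

```dart
// main.dart - the one added line is the Firebase.initializeApp() call
import 'package:firebase_core/firebase_core.dart';
import 'package:flutter/material.dart';

Future<void> main() async {
  WidgetsFlutterBinding.ensureInitialized();
  await Firebase.initializeApp(); // links the app to the configured Firebase project
  runApp(const MyApp());
}

class MyApp extends StatelessWidget {
  const MyApp({super.key});

  @override
  Widget build(BuildContext context) {
    return const MaterialApp(
      home: Scaffold(body: Center(child: Text('Hello Firebase'))),
    );
  }
}
```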

Custom Search Shortcuts on Google Chrome

Jun 21, 2021 4 min

Introductions to !Bangs

Most of my friends who use DuckDuckGo daily have used the !Bang feature alongside normal browsing to increase their productivity. DuckDuckGo has been adding different shortcuts to its search engine since 2008, and it is still the only search engine that promotes this feature. But because I use Google Chrome, and there were no custom shortcuts integrated by default, I needed to figure out a way to implement it. So I began to search the web for how to make my own shortcuts to different websites, and after many hours I could not find any solution to the problem. But I found a built-in feature in Google Chrome called "Custom Search Engines". Here you get a list of some of Chrome's default search engines as well as other websites' search engines. Google Chrome has been adding websites' custom search engines automatically by default without anyone noticing. So if you look at the list of custom search engines in your Chrome browser, it most likely already has a lot of website search engines added.

How to create your own "!Bang"

First, you need to make sure that you have access to Google Chrome's custom and other search engines. You can check by going into the settings, under Search Engine, and clicking "Manage search engines". This is where you can manage your main search engine, which in my case is Google, and your custom search engines (!Bangs).

Find the search URL

Second, we need to find the URL of the specific website we want to add to our custom search shortcuts. You can only add websites that include the search term in the URL. So if a website's URL doesn't change when you search, it most likely can't be added as a custom search shortcut. But websites like YouTube, Twitch, Facebook, and most of the Google apps support it. For example, take Twitch, a live streaming platform for all kinds of things. When we search on Twitch, it updates its URL to include our search term. We are looking for the bit in the URL after twitch.tv/search?term=, which in our example is the search term "Hello World".

Adding the website to our Custom Search Shortcuts

Now that we have found the URL we want to add to our Custom Search Shortcuts, we can add it to the list. First, we click the little add button, which opens a box where we can enter a name for the shortcut, a keyword, and the URL. Here we give our shortcut a name, which in our example is Twitch. Afterward, we pick a keyword phrase that we like; this is the phrase we will type into the search box of our Chrome browser, so it's a good idea to keep it as short as possible while still being understandable, so we can remember it. The last part is to copy-paste the URL we found into the URL box and replace our search term ("Hello World" in our example) with %s. The %s gets replaced when we use our custom search shortcut (a summary of the example configuration is shown below). Finally, click the add button, and you will see your first created shortcut, which you can use to search websites faster. Type the keyword into your browser, press space, and see the magic happen. 🔮

Bonus Tip: Because Google Chrome by default adds a website's search engine to our Custom Search Engines, you can add a little star (*) in front of the name of your custom shortcuts. This will move all of your custom search shortcuts to the top of the list.
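To recap, the example shortcut from above would end up looking something like this (the keyword "tw" is just an example choice; pick whatever is easiest for you to remember):

```
Name:     Twitch
Keyword:  tw
URL:      https://www.twitch.tv/search?term=%s
```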
Bonus Tip #2: If you don't want a star in front of all your shortcuts, you can add a nice little extension that prevents the browser from adding websites' custom search engines. Link Here

Thanks!

Thanks for reading my first article on how to create custom search shortcuts. If you have any questions, feel free to comment below. Have a wonderful day 😄