This being our first post of 2023, we wish everybody a wonderful new year.
If you are here, you are most probably facing scaling issues with your WebRTC application, or you are exploring options for building a production-grade WebRTC app. In both cases, you are at the right place. This post is a continuation of the post we wrote on this topic a couple of months ago, which described in detail when auto-scaling is necessary and when it is not. If you are not sure whether your solution needs WebRTC auto-scaling, you should read the previous post here before reading further.
In the last post we discussed horizontal and vertical scaling as strategic options for scaling mediasoup media servers based on the use case. In this post, we are going to discuss another way of auto-scaling and its use case. We are also going to discuss interesting new enhancements to CWLB.
The third WebRTC scaling strategy
The third approach combines vertical and horizontal scaling into one; it can be called a hybrid scaling approach. Here, vertical scaling is used first to scale one room across all available cores of a mediasoup instance when needed. Once this mediasoup instance is fully occupied but the same room still needs more resources, horizontal scaling kicks in to extend the room to another mediasoup instance on a separate host. All new resource allocation requests for that room then go to the new server, again following the vertical scaling strategy, unless the first server has free resources to spare. This hybrid approach is typically useful for very large rooms, such as large event rooms, where the load-balancer needs to cater to hundreds or thousands of concurrent users in one room in a completely just-in-time resource request mode.
Let's understand the two key terms mentioned in the above paragraph.
Resource request: A request made to the media server to allocate resources to a user so that the user can send/receive audio, video, or screen-share media streams.
Just-in-time (JIT) request: This load-balancer strategy is used when the load-balancer has no prior information about room sizes, so it cannot pre-allocate and reserve resources. Here the load-balancer has to work really hard to track the real-time resource usage of each media server and allocate/free resources as users join and leave a room. This type of implementation is relatively complex compared to a pre-allocation and reservation based load-balancing strategy.
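To make the hybrid + JIT idea concrete, here is a minimal sketch in Python. It is illustrative only: CWLB's actual implementation is not public, and the 8-core server size and "ms-N" names are assumed figures.

```python
# Toy model of the hybrid strategy: fill one media server's cores first
# (vertical scaling), then spill the same room over to a new host
# (horizontal scaling). Core counts and names are illustrative assumptions.

class MediaServer:
    def __init__(self, name, cores):
        self.name = name
        self.free_cores = cores

class HybridAllocator:
    def __init__(self, cores_per_server=8):
        self.cores_per_server = cores_per_server
        self.servers = []

    def allocate(self):
        # Vertical first: use free cores on an already-running server.
        for server in self.servers:
            if server.free_cores > 0:
                server.free_cores -= 1
                return server.name
        # Horizontal spill-over: every server is full, so the room
        # extends to a freshly created instance (creation stubbed here).
        server = MediaServer(f"ms-{len(self.servers) + 1}", self.cores_per_server)
        server.free_cores -= 1
        self.servers.append(server)
        return server.name

lb = HybridAllocator(cores_per_server=8)
placements = [lb.allocate() for _ in range(9)]  # 9 JIT resource requests
# The first 8 requests land on ms-1; the 9th spills over to ms-2.
```

A release path (incrementing `free_cores` as users leave) would complete the JIT picture; it is omitted here for brevity.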
The Hybrid+ WebRTC scaling strategy
The hybrid+ scaling strategy includes everything in the hybrid scaling strategy. In addition, it has some other important aspects which make it a really good choice for medium and large scale deployments:
An additional relay server between the client and the media server, making the media server completely stateless, i.e. the media server does not contain any business logic.
Capable of creating/destroying media servers on demand using cloud provider APIs in a completely automated manner with minimal manual intervention.
Capable of utilizing advanced techniques like media server cascading to keep latency to a minimum while catering to a global user base. Media servers in different geographic locations need to run simultaneously to enable media server cascading.
Capable of an HA (High Availability) setup where standby media servers can take up the load when primary media servers fail while in use. Additional standby media servers need to run to ensure HA.
CWLB 2.0
CWLB 1.0, released in June 2022, supported vertical and horizontal scaling and used AWS EC2 instances for auto-scaling media servers. This was good enough for small and medium use cases. But for large and very large use cases, such as large-scale events, it had 2 disadvantages. First, the load-balancer consumed more media server resources than it ideally should. Second, the data transfer costs each room incurred while using AWS EC2 instances were high.
In CWLB 2.0, we have addressed these 2 points along with many other improvements.
First, the core load-balancer algorithm is now fully JIT-request compatible. It uses media server resources very efficiently by tracking each media server's usage in real time and allocating/de-allocating resources based on real-time user demand. It now supports all strategies, i.e. vertical scaling, horizontal scaling, and a mix of both, aka hybrid scaling.
Second, we have integrated another cloud provider, DigitalOcean, into the load-balancer, which has considerably lower data transfer costs than AWS EC2. Let's take an edtech use case as an example to compare the data transfer costs between AWS EC2 and DigitalOcean so that you can understand why this is important.
Example
A maths tutoring company in India runs online maths classes for high school students. Each teacher teaches high school maths to 1000 students in one online session. They conduct 6 such sessions every day, 6 days a week, with each session lasting 90 minutes. Let's calculate an approximate data transfer cost for a month, using some assumptions to keep it realistic.
Let's calculate the amount of data transferred from the media servers in the cloud to the students who have joined the class.
The teacher speaks while sharing his/her camera or screen for the whole class duration, i.e. 90 minutes.
Let's assume the audio consumes 40 KB/second and the video/screen share consumes 500 KB/second of internet bandwidth, so each student consumes 540 KB/second of data.
Here is how the maths looks.
540 KB/s * 60 s * 90 min ≈ 2.78 GB consumed by one student for the whole 90-minute session.
If there are 1000 students in that session, the total data consumption for the session would be 2780.9 GB, or about 2.71 TB.
With 6 such sessions each day, the data transferred each day would be 16.29 TB.
With sessions happening 6 days a week, the weekly data transfer would be 97.76 TB.
Considering 4 weeks in a month, the data transferred for the whole month would be 391.06 TB. That's a lot of data being transferred!
Now let's look at the cost. AWS EC2 charges $0.08/GB for outbound data transfer from EC2 to the public internet. Essentially, AWS doesn't charge for the teacher sending his/her audio and video streams to the media server, but it does charge for the students receiving the streams relayed by the media server hosted on AWS EC2.
The maths looks like this.
391.06 * 1024 * 0.08 = $32,036
This is the monthly data consumption in TB, converted to GB by multiplying by 1024, times the AWS data transfer cost per GB. This is the cost of data transfer alone; it doesn't include the cost of running the AWS EC2 instances for the media servers, which is added on an actual-usage basis.
Now let's look at the maths for running the same amount of tutoring sessions with media servers on DigitalOcean.
The total amount of data transferred stays the same, 391.06 TB, but DigitalOcean charges $0.01/GB for outbound data transfer.
The maths will look like this.
391.06 * 1024 * 0.01 = $4004
This cost will come down further, as free data transfer is bundled with each DigitalOcean droplet. For example, a 4 vCPU, 8 GB CPU-optimised instance comes with 5 TB of free data transfer per month. With DigitalOcean, we can consider the final cost to be in the range of $3200 to $3500.
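The whole calculation can be reproduced in a few lines. The unit convention follows the post (540 KB per second per student, prices per GB), and the DigitalOcean free-transfer allowance is left out for simplicity:

```python
# Monthly data-transfer cost, AWS EC2 vs DigitalOcean, for the tutoring example.
per_student_gb = 540 * 60 * 90 / (1024 * 1024)  # 540 KB/s for 90 min ≈ 2.78 GB
session_tb = per_student_gb * 1000 / 1024       # 1000 students ≈ 2.72 TB/session
monthly_tb = session_tb * 6 * 6 * 4             # 6 sessions/day, 6 days/week, 4 weeks
monthly_gb = monthly_tb * 1024                  # ≈ 391 TB, back to GB for pricing
aws_cost = monthly_gb * 0.08                    # AWS EC2: $0.08 per outbound GB
do_cost = monthly_gb * 0.01                     # DigitalOcean: $0.01 per outbound GB
# aws_cost ≈ $32,000/month; do_cost ≈ $4,000/month
```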
Because of this difference in data transfer costs, we integrated DigitalOcean into CWLB 2.0 as an alternative to AWS EC2 for running media servers at a lower cost. This is purely optional and configurable from the load-balancer settings of the admin dashboard.
Any organization admin can switch their cloud vendor in the dashboard from AWS to DO or vice versa with a button click, and the media servers will run on the selected cloud. The default cloud for running media servers is now DO (DigitalOcean); it can be changed to AWS EC2 at any time in the load-balancer settings.
Some other important updates in CWLB 2.0 are as below.
Loadbalancing recording servers
Like media servers, the servers responsible for handling meeting recordings can get exhausted quickly when there is high demand for recordings. To solve this, we have integrated recording server auto-scaling into the load-balancer. The load-balancer can now auto-scale not only media servers but also recording servers in a fully automated manner.
Loadbalancing breakout rooms
Breakout rooms were already available in CWLB 1.0, but they were not very resource efficient: customers had to use the same amount of credits for breakout rooms as for the main room. With CWLB 2.0, breakout rooms are fully integrated into the load balancer's JIT request handling, so customers don't pay anything extra for using them. Billing is completely dynamic, based on the actual usage of the breakout rooms irrespective of the main room size.
Due to current work pressure, we are not able to write an exhaustive list of all the updates in CWLB 2.0, though we would love to when time permits. Until then, if you have any query or suggestion related to CWLB 2.0, please feel free to drop us a mail at hello@centedge.io.
A real-life incident that happened with one of our customers.
A customer of ours, with offices in the US and EU, has a nice and innovative video conferencing application with some really cool features for collaborative meetings. They came to us to help fix some critical bugs and load-balance their video backend. An interesting piece of information we discovered was that they were running only one media server, but a really huge one with 72 cores! The reason for running such a large server was that they wanted a lag-free and smooth video experience for all. In the beginning, when they had a small server, they faced video quality issues, so they took the biggest possible server for consistent video quality, without realizing the real root cause of the issue. After digging deep, we made some interesting discoveries about their architecture and suggested changes to their video infrastructure, which included downsizing to an 8-core media server and adding a horizontal load balancer to distribute the load effectively. After the suggested changes, their video infra bill went down by ~80%.
Here is the comparison.
Before:
A 72-core instance in AWS in the EU Frankfurt region costs $3.492/hour which becomes $2514.24 per month.
After:
An 8-core instance in AWS in the EU Frankfurt region costs $0.348/hour, which becomes $250.56 per month.
A horizontal load balancer instance also costs approximately the same, i.e. $250 /month.
So the total becomes ~$500/month: a saving of ~80% per month on the cloud server bill!
When the CEO of the company learned the size of the media server bill, he was skeptical about the business viability of the service because of the cloud bill being paid every month. After the change, the service looks far more viable to him as a business.
Load balancing WebRTC Media Servers, The Need
The rush to create video conferencing apps is here to stay, especially with WebRTC. As WebRTC 1.0 has already been standardized by the Internet Engineering Task Force (IETF) as of this writing, it is going to become mainstream in the coming years with the advent of 5G. Having said that, building a video conferencing app is still much more complicated than building a pure web app. Why? Because too many things need to be taken care of to create a production-ready video conferencing app. Those things can broadly be divided into 2 major parts. One is to code the app and test it on the local network (LAN). Once it is successfully tested locally, it is time to take it to the cloud to make it available to a host of other users through the Internet. This is where dev-ops plays a critical role.
Now let’s understand why it is so important.
Let's assume you have built the service to cater to 50 users in conferencing mode in each room. If you have taken a good VPS like a c5.xlarge on a cloud provider like AWS, let's assume it can support up to 10 conference rooms. What happens if there is a need for an 11th room? In this case, you need to run another server that can handle another 10 rooms. But how will you know when the 11th room request will come? If you don't want to check manually every time a new room creation request comes, there are 2 options. Either you tell the user requesting the 11th room that server capacity is full and they must wait until a room becomes free, OR you create logic so that a new server is created magically whenever a new room creation request comes! This is called auto-scaling, and it is the magical effect of doing proper dev-ops on your cloud provider. The point to note here is that just as you create new servers as demand grows, you also have to delete servers when demand reduces. Else the bill from your cloud vendor will go through the roof!
Here is a brief summary of how a typical load-balancing mechanism works. I am not going to discuss the core logic of when to scale, as that can be completely dependent on the business requirement. If there is a need to up-scale or down-scale (short for creating or deleting servers on demand, programmatically) according to dynamic demand, there has to be a control mechanism inside the application to let the cloud know that there is more demand for rooms, and so more servers need to be created to cater to the surge. The cloud then has to be given the details of the VPS to be created, like instance type, EBS volume needed, etc., along with the other parameters it needs to create the server. Once the server is created, the cloud has to inform the application server that the VPS is ready for use. The application server then uses the newly created server for the new room, catering to the room creation request successfully. A similar but opposite approach is taken when rooms get released after usage: we let the cloud know that specific servers are no longer needed and should be deleted, as they won't be used until a new room creation request comes. This is how one typically manages dev-ops to dynamically create and delete VPSs according to real-time need.
WebRTC auto-scaling/load-balancing, the strategies
Now that we understand what dev-ops is, in brief, let us also understand the general strategies to follow, especially for the video conferencing use case. They can broadly be divided into 2 scenarios based on the level of automation needed to satisfy one's business requirement. Though there can be many variations of automation, let me describe 2 strategies, for the sake of simplicity, that can satisfy the majority of business requirements.
Strategy-1: Multi-cloud semi-automatic load balancing
In this strategy, the point is to automate the load distribution mechanism effectively, to up-scale and down-scale the media servers, while keeping the media servers cloud-agnostic. Here, media server creation and deletion are not in the scope of load balancing. They can be created independently and registered with the load balancer in some manner, so that there are always enough servers available to cater to a surge in demand.
Pros:
Multi-cloud strategy
Better command and control
Less complex to implement
Cons:
Lesser automation
Strategy-2: Uni-cloud fully automatic load balancing
In this strategy, the point is to automate the load distribution mechanism to up-scale and down-scale effectively, bringing in more automation while tightly coupling to a cloud provider.
Here, a cloud provider's APIs are integrated to create and destroy servers in a completely on-demand manner, without much manual intervention. The load balancer can create servers on a specific cloud using APIs when there is an up-scaling need and delete a server whenever the load decreases.
Pros:
Greater automation
Highly resource-efficient
Cons:
More complex to implement
Dependent on a single cloud vendor
There is no general rule that one should follow a specific load-balancing approach; it completely depends on the business requirement. One should properly understand one's business requirements and then decide which load-balancing strategy is suitable. If you need help deciding on a good load-balancing strategy for your video infrastructure, feel free to have an instant or scheduled meeting with one of our core technical guys using this link.
Note: The load balancer mentioned in the above real-life incident is a WebRTC-specific stateful load balancer developed from scratch by us only for the purpose of auto-scaling WebRTC media servers. It is known as CWLB and more details about it can be found here.
A media server in a WebRTC infrastructure plays a critical role in scaling a WebRTC call beyond 4 participants. Whenever you join a call with 8-10 participants or more, know that a media server is doing the hard work behind the scenes to provide you with a smooth audio/video experience. If you need to build a WebRTC infrastructure and select a WebRTC media server for your use case, this post will give you enough information to take an informed decision.
Why and When a WebRTC Media Server is required?
A WebRTC media server is a critical piece of software that helps a WebRTC application distribute audio/video streams to all participants of an audio/video meeting. Without one, creating a large audio/video call beyond 4 users would be highly difficult due to the nature of WebRTC calls. WebRTC is designed for real-time use cases (<1 second of delay between the sender and receiver of an audio/video stream). Without a media server, a user sending his/her audio/video streams has to send them directly to every participant in the conference so that a real conversation can happen. Imagine a call with 10 people, where everybody is sending his/her audio/video stream to the other 9 people so that they can view it in real time. Let's do some maths to find out some interesting details.
When a user joins an audio-video call that is running on WebRTC, he/she can share either audio/video/screen or all of them together.
If joined with only audio: ~40 Kbps of upload bandwidth is consumed
If joined with only video: ~500 Kbps of upload bandwidth is consumed
If joined with only screen share: ~800 Kbps of upload bandwidth is consumed
If all 3 are shared together: ~1340 Kbps or ~1.3 Mbps of upload bandwidth is consumed
If there are 10 people in the meeting, then 1.3 * 9 = 11.7 Mbps of upload bandwidth will be consumed! Remember that you need to send your audio/video/screen-share streams to everybody except yourself. Anybody who doesn't have a consistent 11.7 Mbps of upload bandwidth can't join this meeting!
This also brings another challenge for the device being used to join the conference. Its CPU has to work very hard to compress and encode the audio/video/screen-share streams before sending them over the network as data packets. If the CPU spends 5% of its capacity compressing and encoding the user's streams for one other participant, then it has to spend 9 * 5 = 45% of its capacity to compress, encode, and send the streams to the other 9 participants.
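The mesh-topology maths above can be spelled out in a few lines, reusing the post's bandwidth figures (which are illustrative averages, not guaranteed codec bitrates):

```python
# Per-user cost in a mesh call where every peer sends to every other peer.
AUDIO_KBPS, VIDEO_KBPS, SCREEN_KBPS = 40, 500, 800  # figures from the post
participants = 10

per_peer_kbps = AUDIO_KBPS + VIDEO_KBPS + SCREEN_KBPS  # 1340 Kbps ≈ 1.3 Mbps
upload_mbps = round(per_peer_kbps / 1000, 1) * (participants - 1)  # 1.3 * 9
encode_share_pct = 5 * (participants - 1)  # 5% CPU per peer -> 45% total
```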
Is the CPU not wasting its efforts by trying to do the exact same thing 9 times in this case?
Can we not compress, encode, and send the user's audio/video/screen-share streams just once to the cloud, and have the cloud do some magic to replicate that user's streams and send them to everybody else present in the same meeting room?
Yes, we can do this magic, and the name of this magic is: Media Server!
Different kinds of WebRTC Media Servers, MCU vs. SFU
Primarily there are 2 kinds of media servers: the SFU (Selective Forwarding Unit) and the MCU (Multipoint Control Unit).
According to the last example, now we know that we need a media server that can replicate and distribute the streams of a user to as many people as needed without wasting the user’s network and CPU capacity. Let’s take this example forward.
There is a situation, where the meeting needs to support various UI layouts with a good amount of configuration options regarding who can view and listen to whom! It turns out that this is going to be a virtual event with various UI layouts like Stage, backstage, front-row seats, etc. Here the job of the media server is to replicate and distribute the streams to everybody else except the user himself/herself. Therefore in this case of a 10-user virtual event, every user will be sending only his / her streams to the media server once and receiving the streams from everybody else as individual streams. This way, the event organizer can create multiple UI layouts for viewing by different users according to the place they currently are in, i.e. the backstage/ stage / front row. In this situation, the SFU is helping us by sending all the streams as individual audio/video streams without forcing the way they should be displayed to an individual user. In an SFU, though the user sends only his/her audio/video/screen-share streams it receives from everybody else as individual streams which consumes download bandwidth based on the number of participants. the more the number of participants, the more the download bandwidth is consumed!
Now let's take a different situation: a team meeting of 10 users of an organization who don't need much dynamism in the UI and are happy with the usual grid layout of videos. In this situation, we can merge the audio and video streams of all participants on the server and create one combined audio/video stream, which is then sent to all participants. Each user sends his/her own audio/video stream and receives one combined stream of everybody else (only one stream!) in a fixed layout created by the server. The UI just shows the one video sent by the server as the combined video element. Here an MCU does our job neatly. Download bandwidth consumption stays constant irrespective of the number of users joining the meeting, as every user receives only one audio/video stream from the server. The 2 major downsides of this approach are that the number of servers needed to composite a combined video of all users is much higher than with an SFU's replicate-and-forward approach, and that the UI layout is rigid, decided entirely by the server without the UI having any control over it.
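The bandwidth contrast between the two topologies can be summed up in a few lines. The ~540 Kbps per-stream figure reuses the post's audio + video numbers and is an assumption:

```python
# Download bandwidth per user: SFU grows with room size, MCU stays flat.
STREAM_KBPS = 540  # one audio (40) + one video (500) stream, per the post

def sfu_download_kbps(participants):
    # An SFU forwards one individual stream from every other participant.
    return STREAM_KBPS * (participants - 1)

def mcu_download_kbps(participants):
    # An MCU sends a single composited stream, whatever the room size.
    return STREAM_KBPS

ten_user_sfu = sfu_download_kbps(10)  # 4860 Kbps, and rising with each join
ten_user_mcu = mcu_download_kbps(10)  # 540 Kbps, constant
```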
Two of the largest global video conferencing services use one of the approaches described above.
Gmeet : SFU
MS Teams: MCU
SFUs are slowly gaining more popularity due to the flexibility they provide in creating UI layouts, which is highly important for an engaging user experience, and because they need far fewer servers to cater to a large number of users compared to an MCU. We are going to discuss the most popular SFUs available today and how to choose one for your next WebRTC media server requirement.
How to Choose a WebRTC Media Server for your next requirement?
In this section, we are going to discuss the top open-source media servers currently available and how they perform against each other. I am going to discuss the media servers which use WebRTC/openRTC as their core implementation. I won't be covering the media servers built on Pion, the Go implementation of WebRTC, as that needs a different post.
We would be discussing some of the key things about the below media servers.
Jitsi Video Bridge(JVB), Jitsi (SFU)
Kurento (SFU + MCU)
Janus (SFU)
Medooze (SFU + MCU)
Mediasoup(SFU)
We would primarily be discussing the performance of each media server along with its suitability for building a WebRTC infrastructure.
Jitsi Video Bridge(JVB), Jitsi
Jitsi is a very popular open-source video conferencing solution. It is popular because it provides a complete package for building a video conferencing solution, including a web & mobile UI and the media server component (JVB), along with required add-ons like recording and horizontal scalability out of the box. It also has very good documentation, which makes it easy to configure on a cloud like AWS.
Kurento
Kurento used to be the de facto standard for building WebRTC apps, for the promises it made to WebRTC developers with its versatility (SFU + MCU) and OpenCV integration for real-time video processing, way back in 2014. But after the acquisition of the Kurento team by Twilio in 2017, development stopped and it is now in maintenance mode. One can gauge that it is no longer a great option from the fact that the current team maintaining Kurento has a freemium offering named OpenVidu, which uses mediasoup as its core media server!
Janus
Janus is one of the most performant SFUs available, with very good documentation. It has a very good architecture where the Janus core does the job of routing and allows various plugins to do various jobs, including recording, bridging to SIP/PSTN, etc. It is updated regularly by its backers to keep up with the latest WebRTC changes. It can be a good choice for building a large-scale enterprise RTC application backed by a good amount of time and resource investment. The reason is that it has its own way of architecting the application and can't be integrated as a module into a larger application the way mediasoup can.
Medooze
Medooze is better known for its MCU capabilities than its SFU capabilities, though its SFU is also capable. While it is a performant media server, it lacks documentation, which is key for open source adoption. It was acquired by Cosmo Software in 2020, and Cosmo Software has since been acquired by Dolby. This can be your choice if you are a pro in WebRTC and know most of the stuff yourself. From GitHub commits it seems it is still in active development, but it still needs good effort on the documentation side.
Mediasoup
Mediasoup is a highly performant SFU with detailed documentation, backed by a team of dedicated authors with a vibrant open source community and backers. The best part is that it can be integrated into a larger Node.js/Rust application as a module, doing its job as part of that application. It has a super low-level API structure which lets developers use it however they need inside their application. Though it needs a good amount of understanding to build a production-ready application beyond the demo provided by the original authors, it is not that difficult to work with if one is passionate and dedicated to learning the details.
Below is a set of exhaustive performance benchmarking tests done by the Cosmo Software team back in 2020, at the height of COVID, when WebRTC usage was going through the roof to keep the world running remotely. Below are the important points from the test report worth considering. The whole test report can be found at the bottom of this post for those interested in knowing more.
Load testing a WebRTC application is done with virtual users, which are actually cloud VMs joining a meeting room as test users performing certain tasks. In this case, the test users, aka cloud VMs, joined using the below-mentioned configuration, and each media server was hosted as a single-instance server on a VM as described below.
Next are the load parameters used to test each media server. The numbers are not the same for every media server, as the peak load capacity (after which a media server fails!) differs between them. These peak load numbers were derived after a good number of dry runs.
The test results of the load test:
Page loaded: true if the page can load on the client side which is running on a cloud VM.
Sender video check: true if the video of the sender is displayed and is not a still or blank image.
All video check: true if all the videos received by the six clients from the SFU passed the video check, which means every virtual client's video can be viewed by all other virtual clients.
There are other important aspects of these media servers, like RTT (Round Trip Time), bitrate, and overall video quality.
RTT is an important parameter which tells how fast media stream data, aka RTP packets, are delivered under real-time network conditions. The lower the RTT, the better.
Bitrate is directly responsible for video quality. It simply means how many media data packets are transmitted in real time. The higher the bitrate, the better the image quality, but also the higher the load on the network to transmit and on the client-side CPU to decode. Therefore, it is always a balancing act: trying to send as high a bitrate as possible without congesting the network or overburdening the CPU. Here a good media server plays a big role, using techniques like simulcast/SVC to personalise the bitrate for each individual receiver based on their network and CPU capacity.
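As an illustration of that balancing act, here is a toy simulcast layer picker. The three layer bitrates are made-up figures for the sketch, not mediasoup's (or any SFU's) defaults:

```python
# Pick the highest simulcast layer a receiver's estimated bandwidth can carry.
LAYER_KBPS = [150, 500, 1500]  # low / medium / high encodings (assumed values)

def pick_layer(estimated_kbps):
    best = 0  # always fall back to the lowest layer as a best effort
    for index, rate in enumerate(LAYER_KBPS):
        if rate <= estimated_kbps:
            best = index
    return best

pick_layer(2000)  # -> 2: plenty of bandwidth, forward the high layer
pick_layer(600)   # -> 1: the medium layer fits; high would congest the link
pick_layer(100)   # -> 0: below even the low layer, still send something
```

A real SFU re-runs this decision continuously as its bandwidth estimator updates, which is what keeps the stream smooth on fluctuating networks.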
As the name suggests, overall video quality is the quality of the video transmitted by the media server under various load patterns. The higher the quality, the better.
I hope I was able to provide a brief description of each media server with enough data points so that you can make a good decision in choosing the media server for your next video project. Feel free to drop me an email at sp@centedge.io if you need any help with your selection process or with the video infrastructure development process. We have a ready-to-use cloud video infrastructure built with the mediasoup media server, which can take care of your scalable video infra needs and let you focus on your application and business logic. You can have an instant or scheduled video call with me using this link to discuss anything related to WebRTC, media servers, video conferencing, live streaming, etc.
PS: Here is the link to the full test report, for anybody interested in reading the whole of it, with a detailed description of this load test along with many interesting findings.
In today’s fast-paced world, effective communication is the lifeblood of any successful enterprise. As businesses continue to expand globally, the demand for a reliable, scalable, and secure communication system becomes paramount. This is where CWLB, our cutting-edge Media Stack steps in as the ultimate solution for enterprise usage. In this article, we will explore the myriad benefits of our Media Stack and how, as a dedicated solution provider, we can help enterprises build an Enterprise-grade communication system to meet their specific needs.
1. Unmatched Performance and Scalability:
Our Media Stack boasts exceptional performance, enabling real-time audio and video streaming without latency issues. With built-in load balancing and clustering capabilities, it can effortlessly scale to accommodate growing enterprise requirements, ensuring seamless communication across geographically dispersed teams.
2. Reliable and Secure Communication:
Security is of utmost importance for enterprises, especially when handling sensitive data. Our Media Stack is equipped with state-of-the-art encryption protocols, securing all communication channels and safeguarding against potential threats, ensuring confidential information remains private and protected.
3. Customization to Suit Enterprise Needs:
One of the key strengths of our Media Stack lies in its versatility. As a solution provider, we understand that each enterprise has unique requirements. With our expertise, we can tailor the Media Stack to meet specific needs, integrating it seamlessly with existing infrastructure and applications.
4. Seamless Integration with Communication Tools:
Our Media Stack is designed to effortlessly integrate with a wide array of communication tools, including Voice over Internet Protocol (VoIP), WebRTC, instant messaging, and more. This compatibility ensures that enterprises can leverage their existing tools while enjoying enhanced communication capabilities.
5. Enhanced Collaboration and Productivity:
Effective communication fosters collaboration, thereby boosting overall productivity. Our Media Stack facilitates crystal-clear audio and high-definition video conferencing, breaking down communication barriers and allowing teams to collaborate seamlessly.
6. Real-time Analytics and Monitoring:
Monitoring communication performance is crucial for enterprises to make informed decisions. Our Media Stack provides real-time analytics, enabling businesses to assess call quality, user engagement, and system health, ensuring optimal performance at all times.
7. Reduced Costs and Enhanced ROI:
By choosing our Media Stack, enterprises can benefit from cost savings due to its efficient resource utilization and scalability. Moreover, the enhanced communication system increases efficiency, delivering a higher return on investment (ROI).
8. Reliable Support and Maintenance:
As a solution provider, we take pride in our unwavering commitment to customer satisfaction. Our team of experts is always available to provide reliable support and timely maintenance, ensuring that our Media Stack operates at peak performance throughout its lifecycle.
9. Future-proof Solution:
With technology evolving rapidly, it is essential for enterprises to invest in future-proof solutions. Our Media Stack is built on cutting-edge technology, ensuring it remains relevant and adaptable to emerging trends and industry changes.
10. Seizing the Opportunity:
By partnering with us, enterprises can harness the power of our Media Stack to build a robust, secure, and scalable communication system. Our expert team will work closely with clients to design, implement, and maintain the ideal solution, tailored to their specific needs.
11. Enterprise Mediasoup:
As mediasoup is a highly popular open-source media server with an active community around it, we built CWLB on top of it, which makes CWLB rock-solid and future-proof. CWLB as a Media Stack turns open-source mediasoup into Enterprise Mediasoup, providing unmatched performance along with scalability, reliability, and security.
In conclusion, our Media Stack CWLB is the ultimate choice for enterprises seeking to elevate their communication system to new heights. With unmatched performance, security, scalability, and customization options, it empowers businesses to communicate seamlessly and collaborate effectively. As a solution provider, we are committed to helping enterprises harness this power to build an Enterprise-grade communication system that drives success and growth. By embracing CWLB, our Media Stack, businesses can forge ahead confidently into the future of communication technology.
Feel free to meet one of us for an instant meeting or a scheduled meeting using Meetnow. We are reachable at hello@centedge.io and we would be delighted to hear from you.