This article continues our series on load testing. Today we will analyze the testing methodology and answer the question: "How many IP cameras can be connected to a WebRTC server?"
Load test of WebRTC recording on AWS
Do you remember how, just a few years ago, it was a disaster to lose a camera at the end of a vacation? All the memorable pictures and videos disappeared along with the lost device. This is probably what prompted the great minds to invent cloud storage, so that the safety of recordings no longer depends on the devices they were made with.
WebRTC face to face video chat. Load test
We continue our review of load test variants. In this article we will go over the testing methodology and conduct a load test to determine how many users can watch and stream at the same time, that is, simultaneously publish and view streams.
Load testing for WebRTC mixer
This article is a continuation of our series of write-ups about load tests for our server. We have already discussed how to compile metrics and how to use them to choose the equipment, and we also provided an overview of various load testing methods. Today we shall look at how the server handles stream mixing.
New features of the hybrid monitoring AIOps system Monq
In one of the previous articles, I already wrote about the hybrid monitoring system from Monq. Almost two years have passed since then. During this time, Monq has significantly expanded its functionality, a free version has appeared, and the licensing policy has been updated. If the monitoring systems in your company are starting to get out of control and their number keeps growing beyond the horizon, we suggest you take a look at Monq to bring monitoring back under control. Welcome under the cut.
Using a headless browser for WebRTC load tests
In the previous article we went over a load test whose data could be used to choose a load-appropriate server. In the course of the testing, we would publish a stream on one WCS, and we would pick up that stream several times using a second WCS. The acquired results could be used as a basis for decisions on server operability.
Some would (justly) have concerns regarding possible biases in such a test — after all, one of our servers was used to test another one of our servers. Could it be that we were using specially optimized code that skewed the results in our favor?
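The way around that concern, and the subject of this article, is to generate viewer load from ordinary browsers instead of a second WCS. As a rough sketch of the idea (the player URL, stream name, and number of viewers below are placeholders, not the article's exact setup), one can spawn a batch of headless Chromium instances that each open the player page for a fixed time:
# launch 20 headless Chromium viewers of the published stream for 5 minutes
for i in $(seq 1 20); do
  timeout 300 chromium-browser --headless --disable-gpu --mute-audio \
    --autoplay-policy=no-user-gesture-required \
    "https://wcs.example.com/player.html?streamName=test" &
done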
Clickhouse next to Zabbix or how to collect logs next to monitoring
If you use Zabbix to monitor your infrastructure objects but have not yet thought about collecting and storing logs from those objects, then this article is for you.
Choosing a server for 1000 WebRTC streams
In any project, a great deal of importance is placed on the selection of server hardware and WebRTC streaming is no exception. One of the key principles of such a selection is balance – the hardware should be powerful enough to handle the streams with no drops in quality, but not too powerful so as to waste resources. So, how does one choose the right server?
Monitoring WebRTC streams with Prometheus and Grafana
Monitoring systems are a vital tool for any system administrator, because they can be used to extract specific information about the state of the services being monitored.
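As a taste of what such a setup involves, here is a minimal sketch of a Prometheus configuration that scrapes a streaming server's stats endpoint; the job name, metrics path, and target host are placeholders for whatever your server actually exposes:
# a minimal prometheus.yml scraping a hypothetical WCS stats endpoint
cat > /etc/prometheus/prometheus.yml <<'EOF'
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: 'wcs_streams'
    metrics_path: '/connection/prometheus'    # placeholder path
    static_configs:
      - targets: ['wcs.example.com:8081']     # placeholder host:port
EOF
systemctl restart prometheus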
Application performance monitoring and health metrics without APM
I have already written about AIOps and machine learning methods for handling IT incidents, about hybrid umbrella monitoring, and about various approaches to service management. Now I would like to share a very specific algorithm: how to quickly get information about the health of business applications using synthetic monitoring, and how to build business-service health metrics on that basis at virtually no cost. The story is based on a real case of implementing the algorithm in the IT system of an airline.
There are currently many APM systems, such as AppDynamics, Dynatrace, and others, that include a user-experience module based on synthetic checks. If the task is to learn about failures before your customers do, I will explain why none of these APM systems are needed. Health metrics are also a fashionable APM feature these days, and I will show how to build them without APM.
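To make the idea concrete, here is a minimal sketch of such a synthetic check; the URL and log path are placeholders, and the health metric is then simply the share of probes that return 200 within an acceptable time over a sliding window:
# probe a business-critical page and record the HTTP status and total response time
ts=$(date -Is)
result=$(curl -s -o /dev/null -w '%{http_code} %{time_total}' https://booking.example.com/login)
echo "$ts $result" >> /var/log/synthetic/login.log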
How to connect to FTPS or mount it to local folder
FTPS is FTP with an SSL/TLS layer; please don't confuse it with SFTP. FTPS uses the regular FTP protocol underneath, but all commands and data are encrypted with SSL, so the mechanism is much the same as in HTTPS: an old protocol encapsulated in a security layer. Unfortunately, that breaks a lot of the traditional FTP clients you are used to.
So here are two dead simple solutions I have tested against many FTPS servers, some of which could have been configured far more correctly than they actually were. You will typically encounter FTPS servers configured in one of two ways: using ports 20 + 21 (explicit FTPS) or ports 989 + 990 (implicit FTPS).
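If you just want to check which of the two setups a given server uses, curl can speak both flavours; the host and credentials below are placeholders:
# explicit FTPS: connect to the plain FTP port 21, then upgrade to TLS (AUTH TLS)
curl --ssl-reqd --list-only -u USERNAME:PASSWORD ftp://ftp.example.com/
# implicit FTPS: TLS from the very first byte, usually on port 990
curl --list-only -u USERNAME:PASSWORD ftps://ftp.example.com:990/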
Filezilla
Filezilla is a GUI client available for both Linux and Windows. Its interface is rather specific, but it correctly handles broken certificates, unusual ports, and so on. It can be downloaded here. Just enter the host, username, password, and port (only if needed) and press Connect.
Mounting FTPS under linux
There is a utility called curlftpfs. It works under Linux/*BSD and allows you to mount a remote FTP(S) directory to a local directory. In the simplest case, on Ubuntu/Debian, it looks like this:
sudo apt install curlftpfs
mkdir /tmp/ftp-mount
curlftpfs -o ssl ftp://USERNAME:PASSWORD@HOSTNAME:21/ /tmp/ftp-mount
If the server you are connecting to has a wrong or outdated SSL certificate, you can try:
curlftpfs -o ssl,no_verify_peer,no_verify_hostname ftp://USERNAME:PASSWORD@HOSTNAME:21/ /tmp/ftp-mount
If you need to change the port from 21 to something else, remember that you can only change it in the connection string (or perhaps via .netrc), but NOT with the curlftpfs ftp_port option.
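When you are done, the share is detached like any other FUSE filesystem:
fusermount -u /tmp/ftp-mount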
If you know a solution that allows mounting FTPS folders under Windows, please mention it in the comments.
How to Disable Password Request or Account Password in Windows 10, 8 or 7
How to Recover Data from RAID 5, 1, 0 on Linux
Improving Ansible
Let's improve Ansible once again. Naturally, this won't work without digging into the source code.
Linux Switchdev the Mellanox way
This is a transcription of a talk presented at CSNOG 2020; the video is at the end of the page.
Greetings! My name is Alexander Zubkov. I work at Qrator Labs, where we protect our customers against DDoS attacks and provide BGP analytics.
We started using Mellanox switches around two or three years ago. That is when we got acquainted with Switchdev in Linux, and today I want to share our experience with you.
Starting the server
Even the most experienced and highly qualified system administrators often have only a vague idea of what exactly happens during the server startup process. So, let's look at this process in detail.
The magic of Virtualization: Proxmox VE introductory course
Today, I am going to explain how to quickly deploy several virtual servers with different operating systems on a single physical server without much effort. This will enable any system administrator to manage the whole corporate IT infrastructure in a centralized manner and save a huge amount of resources.
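As a rough sketch of what this looks like from the Proxmox VE shell (the VM ID, storage names, and ISO path below are placeholders):
# create a KVM guest with 2 GB RAM, 2 cores, a 32 GB disk and an installer ISO, then start it
qm create 100 --name test-vm --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 --scsi0 local-lvm:32 \
  --cdrom local:iso/debian-12.iso
qm start 100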
Ansible: CoreOS to CentOS, 18 months long journey
I would like to share the story of a project that used to rely on a custom configuration management solution. The migration lasted 18 months. You may ask 'Why?'. There are some answers below about changing processes, agreements, and workflows.
How to test Ansible and don't go nuts
This is a translation of my talk at DevOps-40 on 2020-03-18:
After the second commit, any code becomes legacy. This happens because the original ideas no longer meet the actual requirements for the system. It is neither a bad nor a good thing; it is the nature of infrastructure and of agreements between people. Refactoring should align the requirements with the actual state. Let me call this Infrastructure as Code refactoring.
Safe-enough linux server, a quick security tuning
The case: you fire up a professionally prepared Linux image at a cloud platform provider (Amazon, DO, Google, Azure, etc.), and it will run some kind of production-level service moderately exposed to hacking attacks (non-targeted, non-advanced threats).
What would be the standard quick security related tuning to configure before you install the meat?
release: 2005, Ubuntu + CentOS (supposed to work with Amazon Linux, Fedora, Debian, RHEL as well)
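As a sketch of the kind of baseline the article has in mind, assuming the Ubuntu/Debian flavour (package names differ on CentOS/RHEL):
# automatic security updates, a firewall that only admits SSH, and brute-force protection
apt update && apt install -y unattended-upgrades ufw fail2ban
ufw allow OpenSSH && ufw --force enable
# key-only SSH logins, no root logins
sed -i 's/^#\?PasswordAuthentication.*/PasswordAuthentication no/' /etc/ssh/sshd_config
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
systemctl restart ssh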