IaC Development Life Cycle
This is the translation of my talk at T-Meetup: DevOps Life Cycle.
I believe you have heard about the SDLC (systems development life cycle). Can the same ideas be applied to IaC?
For the user to be satisfied
This article continues our series on load tests. Today we will analyze the testing methodology and answer the question: "How many IP cameras can be connected to a WebRTC server?"
Do you remember how, just a few years ago, it was a disaster to lose a camera at the end of a vacation? All the memorable pictures and videos disappeared along with the lost device. This is probably what prompted great minds to invent cloud storage, so that the safety of recordings no longer depends on the device they were made on.
We continue our review of load test variants. In this article we will go over the testing methodology and conduct a load test to determine how many users can watch and stream at the same time, that is, simultaneously publish and view streams.
This article continues our series of write-ups about load tests for our server. We have already discussed how to compile metrics and how to use them to choose the equipment, and we have provided an overview of various load testing methods. Today we will look at how the server handles stream mixing.
In the previous article we went over a load test whose data could be used to choose a load-appropriate server. During the test, we published a stream on one WCS and picked that stream up several times using a second WCS. The results could then serve as a basis for decisions about server operability.
Some would (justly) have concerns about possible bias in such a test: after all, one of our servers was used to test another of our servers. Could it be that specially optimized code skewed the results in our favor?
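To make the publish-and-pull scenario more concrete, here is a minimal Python sketch of the pattern: one server publishes a stream, and a second server is asked to pull it many times. The REST path, payload fields, and host names are hypothetical placeholders for the sake of illustration, not the actual WCS API.

```python
# A rough sketch of the load scenario: WCS #2 pulls the same stream,
# published on WCS #1, many times. The REST path and payload fields
# below are hypothetical placeholders, not the actual WCS API.
import requests

PUBLISHER = "https://wcs1.example.com:8444"   # hypothetical WCS #1 (publisher)
TESTER = "https://wcs2.example.com:8444"      # hypothetical WCS #2 (load generator)
STREAM_NAME = "load-test-stream"
PULLS = 100                                   # how many times the stream is pulled


def pull_stream(index: int) -> None:
    """Ask the tester server to pull the published stream one more time."""
    response = requests.post(
        f"{TESTER}/hypothetical-rest-api/pull",   # placeholder path
        json={
            "uri": f"{PUBLISHER}/{STREAM_NAME}",
            "localStreamName": f"{STREAM_NAME}-{index}",
        },
        timeout=10,
    )
    response.raise_for_status()


for i in range(PULLS):
    pull_stream(i)

print(f"Requested {PULLS} pulls of '{STREAM_NAME}'; "
      "now watch CPU, memory, and stream degradation on both servers.")
```

The point of the sketch is only the shape of the test: the load generator requests N subscriptions to a single published stream, and the interesting data comes from the resource and quality metrics collected on both servers while those pulls are active.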
If you use Zabbix to monitor your infrastructure objects but have not yet thought about collecting and storing logs from those same objects, then this article is for you.
In any project, a great deal of importance is placed on the selection of server hardware, and WebRTC streaming is no exception. One of the key principles of such a selection is balance: the hardware should be powerful enough to handle the streams with no drop in quality, but not so powerful that resources are wasted. So, how does one choose the right server?
Monitoring systems are a vital tool for any system administrator, because they can be used to extract specific information from services, such as:
I have already written about AIOps and machine learning methods for handling IT incidents, about hybrid umbrella monitoring, and about various approaches to service management. Now I would like to share a very specific algorithm: how to quickly get information about the health of business applications using synthetic monitoring, and how to build, on that basis, a health metric for business services at almost no cost. The story is based on a real case of implementing the algorithm in the IT systems of an airline.
There are currently many APM systems, such as AppDynamics, Dynatrace, and others, that include a user-experience module built on synthetic checks. If the task is to learn about failures before your customers do, I will tell you why none of these APM systems are needed. Health metrics are also a fashionable APM feature nowadays, and I will show how you can build them without APM.
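As an illustration of the approach, here is a minimal Python sketch of a synthetic check loop that turns check results into a simple health metric. The endpoints, the 60-second interval, and the "share of successful checks over the last 100 attempts" formula are assumptions made for the example, not the exact algorithm from the talk; in practice the resulting value would be pushed to a monitoring system such as Zabbix or Grafana rather than printed.

```python
# A minimal sketch of synthetic checks feeding a health metric, assuming
# "health" is the share of successful checks in a rolling window.
# URLs, timeout, and the formula are illustrative, not from the talk.
import time
import requests

ENDPOINTS = [
    "https://booking.example.com/api/ping",   # hypothetical business endpoints
    "https://checkin.example.com/api/ping",
]
TIMEOUT_S = 3          # a check slower than this counts as failed
results = []           # rolling window of 0/1 outcomes


def synthetic_check(url: str) -> int:
    """Return 1 if the endpoint answers 2xx within the timeout, else 0."""
    try:
        response = requests.get(url, timeout=TIMEOUT_S)
        return 1 if response.ok else 0
    except requests.RequestException:
        return 0


while True:
    for url in ENDPOINTS:
        results.append(synthetic_check(url))
    results = results[-100:]                  # keep only the last 100 outcomes
    health = 100.0 * sum(results) / len(results)
    print(f"business service health: {health:.1f}%")  # push to Zabbix/Grafana instead
    time.sleep(60)
```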
Let's improve Ansible once again. This won't work without digging into its source code.
This is a transcript of a talk presented at CSNOG 2020; the video is at the end of the page.
There was a custom configuration management solution. I would like to share the story of the project that used it and of the migration away from it, which took 18 months. You may ask "Why so long?". The answers below are about changing processes, agreements, and workflows.
This is the translation of my talk at DevOps-40 on 2020-03-18:
After the second commit, all code becomes legacy. This happens because the original ideas no longer meet the actual requirements for the system. It is neither a bad thing nor a good one; it is the nature of infrastructure and of agreements between people. Refactoring should align the requirements with the actual state. Let me call it Infrastructure as Code refactoring.
What would be the standard quick security-related tuning to configure before you install the meat of the system?
Release: 2005, Ubuntu + CentOS (expected to work with Amazon Linux, Fedora, Debian, and RHEL as well)
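As a small taste of that kind of tuning, here is a minimal Python sketch that audits two common sshd_config settings. The particular checks and the file path are illustrative assumptions, not the full list covered by the project.

```python
# A minimal sketch that audits a couple of common SSH hardening settings.
# The checks and the config path are illustrative assumptions only.
from pathlib import Path

SSHD_CONFIG = Path("/etc/ssh/sshd_config")
EXPECTED = {
    "PermitRootLogin": "no",          # do not allow direct root logins
    "PasswordAuthentication": "no",   # keys only
}


def audit_sshd(config=SSHD_CONFIG):
    """Return {setting: ok} for each expected sshd_config directive."""
    lines = config.read_text().splitlines()
    status = {}
    for key, wanted in EXPECTED.items():
        found = [line.split() for line in lines if line.strip().startswith(key)]
        status[key] = any(len(parts) > 1 and parts[1] == wanted for parts in found)
    return status


if __name__ == "__main__":
    for setting, ok in audit_sshd().items():
        print(f"{setting}: {'OK' if ok else 'needs attention'}")
```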