The Systems Engineering Methodology for Startups

Start-up development Product Management *Systems engineering *

Creating a product startup can be an exciting experience, but it can be a daunting one as well. On average, only 1 out of 10 startups is successful, according to the Global Startup Ecosystem Report. Therefore, to improve your prospects, there are quite a number of important considerations to make in advance.

Keeping in mind everything you need when launching a startup is a challenging task, so it is a sound idea to rely on a well-established methodology. That's why we were inspired by the Systems Engineering methodology, presented in such industry standards as ISO 15288 and CFR21. In this article, we'll give a brief overview of this methodology and highlight how it can help entrepreneurs encompass and structure the process of creating and developing a startup.

Read more
Rating 0
Views 724
Comments 3

Stress-testing: How Testers Live in a Turbulent World of Bugs

Иннотех corporate blog IT systems testing *Lifehacks for geeks Brain Health
Translation

Testing is one of the most stressful roles in IT: you constantly need to stay focused and report bugs to the developers on your team. Lidiya Yegorova, QA Lead of Innotech’s “Scoring conveyor” team, shared her practices for minimizing stress while testing.

Read more
Rating 0
Views 3K
Comments 0

Metaverses: hype or the future to come?

Decentralized networks *Machine learning *Artificial Intelligence Social networks and communities AR and VR

Alexander Volchek, IT entrepreneur and CEO of the educational platform GeekBrains

Pretty much everyone in the IT community is talking about metaverses, NFTs, blockchain and cryptocurrency. This time we will discuss metaverses and come back to everything else in the letters to follow. Entrepreneurs and founders of tech giants are passionate about this idea, and investors are allocating millions of dollars to projects dealing with metaverses. Let's start with the basics.

Read more
Total votes 2: ↑1 and ↓1 0
Views 801
Comments 0

What are neural networks and what do we need them for?

Mathematics *Machine learning *Data Engineering *

Explaining through simple examples

For a long time, people have been thinking about how to create a computer that could think like a person. The advent of artificial neural networks is a significant step in this direction. Our brain consists of neurons that receive information from the sensory organs and process it: we recognize people we know by their faces, and we feel hungry when we see delicious food. All of this is the result of brain neurons working and interacting with each other. This is also the principle that artificial neural networks are based on: they simulate the processes occurring in the human brain.

What are neural networks

An artificial neural network is software that imitates the workings of the brain and is capable of self-learning. Like a biological network, an artificial network also consists of neurons, but they have a simpler structure.

If you connect neurons into a sufficiently large network with controlled interaction, they will be able to perform quite complex tasks: for example, determining what is shown in a picture, or independently creating a photorealistic image based on a text description.
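
To make this a bit more concrete, here is a minimal sketch of such a network, assuming nothing beyond NumPy; the layer sizes, random weights and the sigmoid activation are illustrative choices, not taken from the article.

```python
# A tiny feed-forward network: each neuron sums weighted inputs and passes
# the result through a non-linear activation, loosely mimicking how brain
# neurons react to incoming signals.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# 3 input signals (e.g. pixel intensities), 4 hidden neurons, 1 output neuron.
x = rng.random(3)                            # input vector
W1, b1 = rng.random((4, 3)), rng.random(4)   # hidden layer weights and biases
W2, b2 = rng.random((1, 4)), rng.random(1)   # output layer weights and biases

hidden = sigmoid(W1 @ x + b1)                # each hidden neuron reacts to the input
output = sigmoid(W2 @ hidden + b2)           # the output neuron combines their reactions
print(output)
```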

Read more
Total votes 1: ↑1 and ↓0 +1
Views 926
Comments 1

Blood, sweat and pixels: releasing a mobile game with no experience

«Лаборатория Касперского» corporate blog Information Security *Game development *
In January 2022, we at Kaspersky released our first mobile game, Disconnected. The game was designed for companies that want to strengthen their employees’ knowledge of cybersecurity basics. Even though game development is not something you would expect from a cybersecurity company, our motivation was quite clear: we wanted to create an appealing, interactive method of teaching cybersecurity.

Over our many years of experience in security awareness and experimentation with learning approaches (e.g. online adaptive platforms, interactive workshops and even VR simulations), we’ve noticed that even if the material is presented in a highly engaging way, people still lack the opportunity to apply the knowledge in practice. This means that although they are taking in the information, it won’t necessarily be applied.
Read more →
Total votes 7: ↑7 and ↓0 +7
Views 2.6K
Comments 0

ArGOtecture

Go *

This article describes my vision of building a system that actively uses Go as the main programming language and SOA/microservices as a design paradigm.

Here I will try to cover four chapters that together allow us to build a solid and reliable system.

Read more
Rating 0
Views 868
Comments 0

How Analyst Days/14 went for us

Иннотех corporate blog System Analysis and Design *Conferences IT-companies
Translation

Conference participation is one of the most important practices for professional development, which is why Innotech actively sends both speakers and attendees to the biggest events. Senior Analyst Anastasia Kochetova shares her impressions of the Analyst Days/14 conference.

Read more
Rating 0
Views 580
Comments 0

Multilingual Text-to-Speech Models for Indic Languages

Machine learning *Natural Language Processing *Voice user interfaces *

In this article, we shall provide some background on how multilingual multi-speaker models work and test an Indic TTS model that supports 9 languages and 17 speakers (Hindi, Malayalam, Manipuri, Bengali, Rajasthani, Tamil, Telugu, Gujarati, Kannada).

It seems a bit counter-intuitive at first that one model can support so many languages and speakers, given that each Indic language has its own alphabet, but we shall see how it was implemented.

Also, we shall list the specs of these models, such as supported sampling rates, and try something cool: making speakers of different Indic languages speak Hindi. If you are a native speaker of any of these languages, please share your opinion on how these voices sound, both in their respective language and in Hindi.
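
As a rough illustration of the idea (not the actual model tested in this article), the sketch below shows how a single acoustic model can be conditioned on both a speaker and a language via learned embeddings; the class name, sizes and the GRU/mel-frame details are assumptions made purely for illustration.

```python
# Conceptual multi-speaker, multi-language TTS skeleton in PyTorch:
# text tokens live in one shared space, while speaker and language
# identities are injected as learned embedding vectors.
import torch
import torch.nn as nn

class IndicTTS(nn.Module):                       # hypothetical name
    def __init__(self, n_tokens=256, n_speakers=17, n_languages=9, dim=128):
        super().__init__()
        self.token_emb = nn.Embedding(n_tokens, dim)       # shared text tokens
        self.speaker_emb = nn.Embedding(n_speakers, dim)   # voice identity
        self.language_emb = nn.Embedding(n_languages, dim) # language identity
        self.encoder = nn.GRU(dim, dim, batch_first=True)
        self.decoder = nn.Linear(dim, 80)                  # 80-bin mel frames

    def forward(self, tokens, speaker_id, language_id):
        x = self.token_emb(tokens)
        cond = self.speaker_emb(speaker_id) + self.language_emb(language_id)
        x = x + cond.unsqueeze(1)            # condition every time step
        hidden, _ = self.encoder(x)
        return self.decoder(hidden)          # mel spectrogram; vocoder omitted

# Swapping speaker_id while keeping Hindi input tokens is essentially what
# makes a speaker of another Indic language "speak Hindi" in such models.
model = IndicTTS()
tokens = torch.randint(0, 256, (1, 32))
mel = model(tokens, torch.tensor([3]), torch.tensor([0]))
print(mel.shape)  # torch.Size([1, 32, 80])
```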

Read more
Total votes 2: ↑2 and ↓0 +2
Views 1K
Comments 0

Detecting attempts of mass influencing via social networks using NLP. Part 2

Python *Data Mining *Twitter API *Big Data *Natural Language Processing *
Tutorial

In Part 1 of this article, I built and compared two classifiers to detect trolls on Twitter. You can check it out here.

Now the time has come to look more deeply into the datasets to find some patterns using exploratory data analysis and topic modelling.

EDA

To do just that, I first created a word cloud of the most common words, which you can see below.
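
For reference, a word cloud like that takes only a few lines to build; the sketch below assumes the tweets are loaded into a pandas DataFrame with a "text" column (the file and column names are hypothetical, not taken from the dataset used here).

```python
# Build a word cloud of the most common words in the tweet corpus.
import pandas as pd
from wordcloud import WordCloud, STOPWORDS
import matplotlib.pyplot as plt

tweets = pd.read_csv("troll_tweets.csv")          # hypothetical file name
text = " ".join(tweets["text"].astype(str))

cloud = WordCloud(width=1200, height=600,
                  background_color="white",
                  stopwords=STOPWORDS).generate(text)

plt.figure(figsize=(12, 6))
plt.imshow(cloud, interpolation="bilinear")
plt.axis("off")
plt.show()
```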

Read more
Total votes 3: ↑3 and ↓0 +3
Views 607
Comments 0

Detecting attempts of mass influencing via social networks using NLP. Part 1

Python *Data Mining *Twitter API *Big Data *Natural Language Processing *
Tutorial

Over the last decades, the world’s population has been developing into an information society, in which information plays a substantial end-to-end role in all aspects of life and its processes. In view of the growing demand for a free flow of information, social networks have become a force to be reckoned with. The ways of waging war have also changed: instead of conventional weapons, governments now use political warfare, including fake news, a type of propaganda aimed at deliberate disinformation or hoaxes. And the lack of content control mechanisms makes it easy to spread any information as long as people believe in it.

Based on this premise, I’ve decided to experiment with different NLP approaches and build a classifier that could be used to detect either bots or fake content generated by trolls on Twitter in order to influence people. 

In this first part of the article, I will cover the data collection process, preprocessing, feature extraction, classification itself and the evaluation of the models’ performance. In Part 2, I will dive deeper into the troll problem, conduct exploratory analysis to find patterns in the trolls’ behaviour and define the topics that seemed of great interest to them back in 2016.

Features for analysis

Of all the data that could be used (hashtags, account language, tweet text, URLs, external links or references, tweet date and time), I settled on English tweet text, Russian tweet text and hashtags. Tweet text is the main feature for analysis because it contains almost all the characteristics typical of trolling activity in general, such as abuse, rudeness, references to external resources, provocations and bullying. Hashtags were chosen as another source of textual information, as they represent the central message of a tweet in one or two words.
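
A hedged sketch of how such features could feed a simple classifier is shown below; the file name, column names and the logistic-regression baseline are my assumptions for illustration, not the exact pipeline used in this article.

```python
# Turn tweet text and hashtags into TF-IDF features and train a baseline
# troll-vs-genuine classifier.
import pandas as pd
from scipy.sparse import hstack
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

df = pd.read_csv("tweets_labeled.csv")            # hypothetical labeled dataset

# Separate vectorizers: word n-grams for tweet text, plain tokens for hashtags.
text_vec = TfidfVectorizer(ngram_range=(1, 2), max_features=50_000)
tag_vec = TfidfVectorizer(token_pattern=r"#\w+")

X = hstack([text_vec.fit_transform(df["text"].fillna("")),
            tag_vec.fit_transform(df["hashtags"].fillna(""))]).tocsr()
y = df["is_troll"]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, clf.predict(X_te)))
```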

Read more
Total votes 3: ↑3 and ↓0 +3
Views 1K
Comments 0

IDS Bypass at Positive Hack Days 11: writeup and solutions

Positive Technologies corporate blog Information Security *Network technologies *CTF *

The IDS Bypass contest was held at the Positive Hack Days conference for the third time (for a retrospective, see the previous writeups). This year we created six game hosts, each with a flag. To get the flag, participants had either to exploit a vulnerability on the server or to fulfill another condition, for example, enumerating the list of domain users.

The tasks and vulnerabilities themselves were quite straightforward. The difficulty lay in bypassing the IDS: the system inspected network traffic from participants using special rules that looked for attacks. If such a rule was triggered, the participant's network request was blocked, and a bot sent them the text of the triggered rule in Telegram.

And yes, this year we tried to move away from the usual CTFd and IDS logs towards a more convenient Telegram bot. All that was needed to take part was to message the bot and pick a username. The bot then sent an OVPN file to connect to the game network, after which all interaction (viewing tasks and the game dashboard, submitting flags) took place solely through the bot. This approach paid off 100%!

Read more
Total votes 3: ↑3 and ↓0 +3
Views 880
Comments 0

How we tackled document recognition issues for autonomous and automatic payments using OCR and NER

Python *Natural Language Processing *
Sandbox

In this article, I would like to describe how we’ve tackled the named entity recognition (aka NER) problem at Sber with the help of advanced AI techniques. It is one of many natural language processing (NLP) tasks that allow you to automatically extract data from unstructured text, including monetary values, dates, names, surnames and job positions.
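
For readers new to NER, here is a minimal illustration using spaCy's publicly available English pipeline; it only shows what the task looks like and is not the Sber-internal model or data described in this article.

```python
# Extract named entities (dates, organisations, money, people) from raw text.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("On 12 May 2022 Acme Ltd. agreed to pay $14,500 to John Smith, "
          "Chief Procurement Officer, under contract No. 77/2022.")

# Each detected entity comes with a label such as DATE, ORG, MONEY or PERSON.
for ent in doc.ents:
    print(f"{ent.text:<30} {ent.label_}")
```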

Just imagine the countless textual documents even a medium-sized organisation deals with on a daily basis, let alone huge corporations. Take Sber, for example: it is the largest financial institution in Russia and Central and Eastern Europe, with about 16,500 offices, over 250,000 employees, 137 million retail and 1.1 million corporate clients in 22 countries. As you can imagine, at such an enormous scale the company collaborates with hundreds of suppliers, contractors and other counterparties, which implies thousands of contracts. For instance, the estimated number of legal documents to be processed in 2022 is over 65,000, each consisting of 30 pages on average. During its lifecycle, a contract is usually updated with 3 to 5 additional agreements. On top of this, a contract is accompanied by various source documents describing transactions. And in PDF format, too.

Previously, the processing duty fell to our service centre’s employees, who checked whether the payment details in a bill matched those in the contract and then sent the document to the Accounting Department, where an accountant double-checked everything. That is quite a long journey to a payment, right?

Read more
Rating 0
Views 450
Comments 0

An Antidote to Absent-Mindedness, or How I Gained Access to an OpenShift Node without an SSH Key

Иннотех corporate blog System administration **nix *DevOps *Openshift *
Translation

Typically, when a node falls out of an OpenShift cluster, the problem is resolved by simply restarting the offending element. What should you do, however, if you’ve forgotten the SSH key or left it at the office? You can attempt to restore access by using your wits and knowledge of Linux commands. Renat Garaev, lead developer at Innotech, described how he found the solution to this riddle and what the outcome was.

Read more
Rating 0
Views 4.5K
Comments 0

“If I had a heart...” Artificial Intelligence

Reading room Artificial Intelligence Science fiction

Most people fear artificial intelligence (AI) because of the unpredictability of its possible actions and impact [1], [2]. Concerns about this technology are also voiced by AI experts themselves: scientists and engineers, among whom are the foremost figures of their professions [3], [4], [5]. And you may well share these concerns, because it is like leaving a child alone at home with a loaded gun on the table: in 2021, AI was used on the battlefield in a completely autonomous way for the first time, independently selecting a target and deciding to engage it without operator participation [6]. But let’s be honest: since humanity has embraced the opportunities this new tool can give us, there is already no way back; this is how the law of the jungle works [7].

Imagine how a caveman would feel observing our modern everyday world: electricity, the Internet, smartphones, robots and so on. In the next two hundred years, in large part thanks to AI, humankind will undergo as many transformations as it has since the moment we learned to control fire [8]. The effect of this technology will surpass all our previous changes as a civilization, and even as a species, because our destiny is not to create AI but to literally become it.

... more, give me more, give me more ...
Rating 0
Views 2.7K
Comments 0

Text-based CAPTCHA in 2022

Information Security *Machine learning *Artificial Intelligence
Translation

The first text-based CAPTCHA (we’ll call it just CAPTCHA for the sake of brevity) was used in 1997 by the AltaVista search engine. It prevented bots from adding Uniform Resource Locators (URLs) to its web search engine.

Back then it was a decent defense measure. However, progress can't be stopped, and this defense was bypassed using the OCR software available at the time (for example, FineReader).

CAPTCHAs became more complex: noise and distortions were added so that popular OCRs couldn’t recognize the text. Then OCRs custom-made for this task appeared, which cost the attacking side extra money and expertise. CAPTCHA developers, in turn, had to understand the challenges attackers faced and which distortions to add in order to make automated CAPTCHA recognition harder.

Because of a misunderstanding of the principles the OCRs were based on, some CAPTCHAs were given such distortions that they were more of a hassle for regular users than for a machine.

OCRs for different types of CAPTCHAs were built using heuristics, and the most complicated part was segmenting the CAPTCHA into stand-alone symbols, which could then be easily recognized by a CNN (for example, LeNet-5); an SVM also showed good results even on raw pixels.
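
As a hedged illustration of that classical "segment, then classify" pipeline, the sketch below trains an SVM on the raw pixels of already-segmented glyphs, using scikit-learn's digits dataset as a stand-in for segmented CAPTCHA symbols.

```python
# Classify small segmented glyphs with an SVM on raw pixel values,
# with no hand-crafted features.
from sklearn.datasets import load_digits
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

digits = load_digits()                      # 8x8 grayscale glyphs, flattened to 64 values
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.2, random_state=0)

clf = SVC(kernel="rbf", gamma="scale")      # raw pixels in, class labels out
clf.fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```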

In this article, I’ll try to cover the whole history of CAPTCHA recognition, from heuristics to contemporary automated recognition systems, and figure out whether the CAPTCHA is still alive.

I’ll review the yandex.com CAPTCHA. The Russian version of the same CAPTCHA is more complex.

Read more
Total votes 4: ↑3 and ↓1 +2
Views 1.3K
Comments 0