There are such types in Java; they are called "generics". The point here is that the convention is to name generic type parameters with a single capital letter. But we have found a much more readable approach!
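For reference, this is roughly what the conventional single-letter style looks like (a generic illustration of the convention, not code from this article):

// The usual convention: one capital letter per type parameter
public class Pair<K, V> {
    private final K key;
    private final V value;

    public Pair(K key, V value) {
        this.key = key;
        this.value = value;
    }

    public K getKey()   { return key; }
    public V getValue() { return value; }
}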
The problem
Unfortunately, when pursuing their planned business goals, the departments of an organisation rarely take into account such a metric as the quality of the solution's code. And developers usually have no time for a proper code review process.
Hello!
In this article, I will try to briefly describe how the Java Virtual Machine works with fonts. Once I needed to change the font used by the JVM and, surprisingly, found only scraps of legacy information about it. I spent a little time investigating the problem and now want to share what I learned with anybody who may find it useful. Feel free to leave any comments :)
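As a starting point (this snippet is mine, not from the article), you can ask the JVM which font families it has picked up via the standard AWT API:

import java.awt.GraphicsEnvironment;

public class ListFonts {
    public static void main(String[] args) {
        // Print every font family the current JVM knows about
        GraphicsEnvironment ge = GraphicsEnvironment.getLocalGraphicsEnvironment();
        for (String family : ge.getAvailableFontFamilyNames()) {
            System.out.println(family);
        }
    }
}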
In one of the tasks on the project, I had to wrap a collection of ForkJoinTask instances in CompletableFuture for asynchronous execution and for building data-processing pipeline chains.
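A minimal sketch of the idea (the class names and the trivial task are hypothetical, shown only to illustrate the wrapping):

import java.util.concurrent.CompletableFuture;
import java.util.concurrent.RecursiveTask;

public class WrapForkJoinTask {
    // A trivial ForkJoinTask used only for the demonstration
    static class SumTask extends RecursiveTask<Integer> {
        private final int a, b;
        SumTask(int a, int b) { this.a = a; this.b = b; }
        @Override protected Integer compute() { return a + b; }
    }

    public static void main(String[] args) {
        SumTask task = new SumTask(2, 3);

        // Wrap the ForkJoinTask in a CompletableFuture so it can be
        // chained with the rest of the processing pipeline
        CompletableFuture<Integer> future = CompletableFuture
                .supplyAsync(task::invoke)          // runs the task on the common pool
                .thenApply(result -> result * 10);  // further pipeline step

        System.out.println(future.join()); // 50
    }
}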
The point of this article is to explore lambda functions, their differences from regular functions, and how they are implemented, based on the C++, Python, and Java programming languages.
Throughout this article I will be using godbolt.org to compile code and inspect the resulting machine code or bytecode.
I will continue the story "How to put the whole world into a regular laptop: PostgreSQL and OpenStreetMap" with secrets about OpenStreetMap geodata, on which many companies have built their businesses, though not everyone shares the details... Well, today we will reveal the crucial details.
The OSM database in PostgreSQL takes up more than 587 GB after loading from the dump. This is already a large database by DBMS standards, and one huge table for each object type will not work. For manageability, such data must be partitioned, and it is good that PostgreSQL supports declarative partitioning. It only remains to figure out how to split geographic data. After searching and comparing, the H3 hierarchical hexagonal geospatial indexing system came to the rescue. All this was implemented in my openstreetmap_h3 project for fast processing and loading of the world dump into a PostGIS database.
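As a quick illustration of the scheme that won out, before going through the alternatives I compared (my sketch, assuming Uber's h3-java bindings and their v4 method names, not code from the project):

import com.uber.h3core.H3Core;

public class H3CellDemo {
    public static void main(String[] args) throws Exception {
        // Map a coordinate (roughly Berlin) to an H3 cell at resolution 3;
        // rows that share a cell id can land in the same partition
        H3Core h3 = H3Core.newInstance();
        long cell = h3.latLngToCell(52.52, 13.405, 3);
        System.out.println(Long.toHexString(cell));
    }
}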
I considered the following geopartitioning systems...
When a person said that he controlled the whole world, he used to be placed in the room next to Napoleon Bonaparte. I hope those times are in the past, and that everyone can now analyze the geodata of the entire Earth and get answers to their global questions in minutes or seconds. I have published openstreetmap_h3, my project that lets you run geoanalytics on OpenStreetMap data in PostGIS or in any query engine that can work with Apache Arrow / Parquet.
First of all, I say hello to the haters and skeptics. What I have developed is genuinely unique: it solves the problem of transforming and analyzing geodata using the usual, familiar tools available to every analyst and data science specialist, without big data tools, GPGPU, or FPGA. What now looks easy to use and code is my personal project, into which I have invested vacations, weekends, sleepless nights and a lot of personal time over the past 3 years. Maybe I will share the background of the project and the pitfalls I stepped on along the way, but first I will describe the end result.
This is a translation of my article in Russian
In this article, I want to write about my experience with the LeetCode platform and describe my preparation for interviews at FAANG-like companies, broken down into levels.
The whole article is based on my own experience; the numbers are very rough, and I do not pretend to be objective. Perhaps there are best practices for solving LeetCode problems; it would be great if you shared your experience in the comments.
This is a series of articles dedicated to making the optimal choice between different systems on a real project or in an architectural interview.
At work or in a System Design interview, you often have to choose the most suitable message broker. I dug into this question and will tell you what to pick and why. What is better in each case, what the advantages and disadvantages of these systems are, and which one to choose, I will show with several examples.
In this article, I want to describe my experience of paying off technical debt on our project in the form of a guide. In it, I will highlight some of the most common kinds of technical debt and suggest ways to deal with them. Since this is a rather broad topic, I will also recommend several books for further study, because I do not think it is possible to cover everything within a single article. Everything described applies to the back end, but it may be useful for other developers as well. I would be glad if you shared your experience on this topic in the comments.
This is a series of articles dedicated to making the optimal choice between different systems on a real project or in an architectural interview.
This topic seemed relevant to me because such tasks can come up both at work and in a System Design interview, where you will have to choose between these two types of DBMS. I dug into this question and will tell you what and how. What is better in each case, what the advantages and disadvantages of these systems are, and which one to choose, I will show with several examples at the end of the article.
SQL or NoSQL?
At first glance, making REST and Apache Kafka compatible seems quite a challenge. However, the Innotech team nailed the task. Kirill Voronkin, Lead Developer at Innotech, shared the details of turning asynchronous calls into synchronous ones.
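Innotech's actual implementation is not shown here, but the common pattern behind such a bridge looks roughly like this (hypothetical names, Kafka wiring omitted): the HTTP thread parks on a CompletableFuture keyed by a correlation id, and the Kafka listener completes it when the reply arrives.

import java.util.Map;
import java.util.UUID;
import java.util.concurrent.*;

// Sketch of a request/reply bridge: the REST thread blocks on a future
// that is completed later by the consumer receiving the reply message.
public class SyncOverAsyncBridge {
    private final Map<String, CompletableFuture<String>> pending = new ConcurrentHashMap<>();

    // Called from the REST handler: publish a request and wait for the reply
    public String callSynchronously(String payload, long timeoutSec) throws Exception {
        String correlationId = UUID.randomUUID().toString();
        CompletableFuture<String> reply = new CompletableFuture<>();
        pending.put(correlationId, reply);
        publishToRequestTopic(correlationId, payload);      // send via the Kafka producer
        try {
            return reply.get(timeoutSec, TimeUnit.SECONDS); // block the HTTP thread
        } finally {
            pending.remove(correlationId);
        }
    }

    // Called from the Kafka listener when a message arrives on the reply topic
    public void onReply(String correlationId, String payload) {
        CompletableFuture<String> reply = pending.get(correlationId);
        if (reply != null) {
            reply.complete(payload);
        }
    }

    private void publishToRequestTopic(String correlationId, String payload) {
        // placeholder for the real Kafka producer call
    }
}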
In a previous post I described an example of a Kogito-based microservice on Quarkus in native mode, containing one embedded PMML model with a decision tree. While it can be used successfully for prototyping, in real life a microservice might contain several prediction models. At first glance, I got the impression that including several models would be a trivial extension of the single-model prototype. We were completely wrong in that assumption, which is why I decided to write this post. Another reason is the absence of guides in which two (or more) models are put inside DMN diagrams in the Kogito framework.
We often hear that some frameworks fit together so well that they are considered a "match made in heaven". In this article I would like to share our experience with integrating these frameworks.
In this article I describe the problems I faced while integrating Apache Camel with Spring Boot Actuator and Prometheus for collecting metrics, and the solution I came up with (which I had not found anywhere on the internet).
PVS-Studio first provided support for the CWE classification in the 6.21 release, which took place on January 15, 2018. Years have passed since then, and we would like to tell you about the improvements related to the support of this classification in the latest analyzer version.
It seems that the problem of calculating the absolute value of a number is completely trivial. If the number is negative, change the sign. Otherwise, just leave it as it is. In Java, it may look something like this:
public static double abs(double value) {
    if (value < 0) {
        return -value;
    }
    return value;
}
It seems to be too easy even for a junior interview question. Are there any pitfalls here?
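One candidate pitfall (a quick check I ran against the naive version above, not part of the original text) is negative zero: -0.0 is not less than 0, so the sign is kept, while Math.abs is specified to return positive zero.

public class AbsPitfall {
    // The naive implementation from above
    public static double abs(double value) {
        if (value < 0) {
            return -value;
        }
        return value;
    }

    public static void main(String[] args) {
        System.out.println(abs(-0.0));      // prints -0.0
        System.out.println(Math.abs(-0.0)); // prints 0.0
    }
}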
Java 8 introduced two kinds of functional expressions: lambda expressions like s -> System.out.println(s) and method references like System.out::println. At first, developers were more enthusiastic about method references: they are often more compact, you don't need to think up a parameter name, and, as urban legends say, method references are somewhat more optimal than lambda expressions. Over time, however, the enthusiasm waned. One of the problems with method references is the difficulty of debugging.
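A minimal side-by-side sketch of the two forms (my own illustration, not from the article):

import java.util.List;

public class RefVsLambda {
    public static void main(String[] args) {
        List<String> words = List.of("lambda", "method reference");

        // Lambda expression: the parameter name is spelled out
        words.forEach(s -> System.out.println(s));

        // Method reference: more compact, behaves the same here
        words.forEach(System.out::println);
    }
}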
The previous year was very distressing for businesses and employees. Software development, though, was not affected as much and is still thriving. While the tech expansion continues, Java development is also undergoing a significant transformation.
The arrival of new concepts and technologies has put a question mark over the potential of Java developers. From wearable applications to AI solutions, how Java will be used is a real concern for practitioners.
Moreover, it is high time for developers to upgrade their skills to match the changing demands of the industry. If you are a Java developer, you are surely wondering what I am talking about and which things you should learn.
In the microservice world, high load is an extremely common problem, especially for a REST API that is accessed quite intensively. Why do we need throttling? The main reason is to reduce the load on the service at a given moment.
Different frameworks have different solutions, mostly in the form of additional libraries. There are also Guava's RateLimiter and Bucket4j. Interestingly, Spring MVC, one of the most popular solutions for building REST APIs (thank you, Spring Boot), doesn't have any built-in rate limiter. As for external solutions, there are not that many options around.
Today, I would like to present a tiny experimental library specific to Spring MVC, called SpringRateLimitter. The library is very small and works at runtime. The idea is to annotate an entire REST controller or a specific method, then count the number of incoming requests for the annotated URI and, based on those values, check whether the allowed number of calls has been exceeded. If it has, HTTP error code 429 is returned, and once the throttling period is over, the endpoint becomes available again.
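To show the general idea only (this is a sketch with hypothetical names and jakarta.servlet imports, i.e. Spring Boot 3; it is not the SpringRateLimitter API), a runtime rate limiter for Spring MVC can be written as a HandlerInterceptor that counts requests per URI and answers with HTTP 429 once the limit is exceeded:

import jakarta.servlet.http.HttpServletRequest;
import jakarta.servlet.http.HttpServletResponse;
import org.springframework.web.servlet.HandlerInterceptor;

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical interceptor: allows at most 'limit' requests per URI
// within a fixed time window, otherwise replies with 429 Too Many Requests.
public class SimpleThrottlingInterceptor implements HandlerInterceptor {

    private final int limit;
    private final long windowMillis;
    private final Map<String, AtomicInteger> counters = new ConcurrentHashMap<>();
    private volatile long windowStart = System.currentTimeMillis();

    public SimpleThrottlingInterceptor(int limit, long windowMillis) {
        this.limit = limit;
        this.windowMillis = windowMillis;
    }

    @Override
    public boolean preHandle(HttpServletRequest request,
                             HttpServletResponse response,
                             Object handler) {
        long now = System.currentTimeMillis();
        if (now - windowStart > windowMillis) {
            // Throttling period is over: reset all counters
            counters.clear();
            windowStart = now;
        }
        AtomicInteger count = counters.computeIfAbsent(
                request.getRequestURI(), uri -> new AtomicInteger());
        if (count.incrementAndGet() > limit) {
            response.setStatus(429); // Too Many Requests
            return false;            // stop processing the request
        }
        return true;
    }
}

Such an interceptor would be registered through a WebMvcConfigurer; the library described here hides all of that behind an annotation instead.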
So how does it look? As a first step, the Maven dependency must be added: