![](https://habrastorage.org/getpro/habr/upload_files/23e/561/399/23e5613994dfc894521d309456d164ef.jpg)
Nuances of designing distributed systems
I am not an economist, but in light of current events around cryptocurrencies and the economy in general, I would like to share my thoughts on a kind of ideal economy, the one that everything happening now seems to revolve around.
Author: Sergey Lukyanchikov, Sales Engineer at InterSystems
What is Distributed Artificial Intelligence (DAI)?
Attempts to find a “bullet-proof” definition have not produced a result: the term seems to be slightly “ahead of its time”. Still, we can analyze the term itself semantically, deriving that distributed artificial intelligence is the same AI (see our effort to suggest an “applied” definition), only partitioned across several computers that are not clustered together (neither data-wise, nor via applications, nor by providing access to particular computers in principle). I.e., ideally, distributed artificial intelligence should be arranged in such a way that no computer participating in that “distribution” has direct access to the data or applications of another computer: the only alternative is the transmission of data samples and executable scripts via “transparent” messaging. Any deviation from that ideal yields “partially distributed artificial intelligence” – one example being distributed data with a central application server, or the inverse. One way or another, we obtain as a result a set of “federated” models (i.e., models each trained on its own data source, or each trained by its own algorithm, or “both at once”).
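To make the “federated models” idea concrete, here is a minimal sketch in Python (the data, the linear models, and the `federated_average` helper are all illustrative assumptions, not part of any specific product): each participant trains on its own private data and shares only the learned coefficients, which a coordinator then averages.

```python
import numpy as np

def train_local_model(X, y):
    """Ordinary least squares on one participant's private data."""
    # Solve X @ w = y in the least-squares sense; only w leaves the machine.
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

def federated_average(weights):
    """Coordinator step: combine models without ever seeing the raw data."""
    return np.mean(weights, axis=0)

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])

# Three participants, each with a private dataset that is never transmitted.
local_weights = []
for _ in range(3):
    X = rng.normal(size=(100, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=100)
    local_weights.append(train_local_model(X, y))

global_w = federated_average(local_weights)
print(global_w)  # close to [2.0, -1.0]
```

Only the coefficient vectors cross machine boundaries here, which is exactly the “transmission via transparent messaging” constraint described above.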
Distributed AI scenarios “for the masses”
We will not be discussing edge computing, confidential data operators, scattered mobile searches, or similar fascinating but not yet widely adopted scenarios. We will be much “closer to life” if, for instance, we consider the following scenario (its detailed demo can and should be watched here): a company runs a production-level AI/ML solution, and the quality of its functioning is systematically checked by an external data scientist (i.e., an expert who is not an employee of the company). For a number of reasons, the company cannot grant the data scientist access to the solution, but it can send him a sample of records from a required table on a schedule or upon a particular event (for example, the end of a training session for one or several of the solution’s models). We assume that the data scientist owns some version of the AI/ML mechanisms already integrated into the production-level solution the company is running – and it is likely that those mechanisms are developed, improved, and adapted to the concrete use cases of that concrete company by the data scientist himself. Deployment of those mechanisms into the running solution, monitoring of their functioning, and other lifecycle aspects are handled by a data engineer (a company employee).
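As a rough sketch of the data-exchange half of this scenario, assuming a SQLite store and a plain HTTP endpoint (the database path, table name, and URL below are all hypothetical, and a real solution’s storage and transport would differ): the company side periodically samples records from the monitored table and ships them to the data scientist as JSON.

```python
import json
import sqlite3
import urllib.request

# Hypothetical endpoint operated by the external data scientist.
DATA_SCIENTIST_URL = "https://example.com/api/model-audit"

def sample_records(db_path, table, n=500):
    """Draw a random sample of rows from the table the solution writes to."""
    conn = sqlite3.connect(db_path)
    conn.row_factory = sqlite3.Row
    # The table name comes from trusted configuration, not user input.
    rows = conn.execute(
        f"SELECT * FROM {table} ORDER BY RANDOM() LIMIT ?", (n,)
    ).fetchall()
    conn.close()
    return [dict(r) for r in rows]

def send_sample(rows):
    """POST the sample; run on a schedule or on a training-completed event."""
    req = urllib.request.Request(
        DATA_SCIENTIST_URL,
        data=json.dumps(rows).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    send_sample(sample_records("solution.db", "model_quality_log"))
```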
The DHT system has existed for many years now, and so have torrents, which we successfully use to get any information we want.
Along with this system come commands for interacting with it. There are not many of them, and only two are needed to build a decentralized database: put and get.
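As an illustration, here is roughly what those two operations look like with the Python `kademlia` library, where they are spelled `set` and `get` (the bootstrap address and the key are placeholders):

```python
import asyncio
from kademlia.network import Server  # pip install kademlia

async def main():
    node = Server()
    await node.listen(8468)  # port this node listens on
    # Join the DHT through any node already in it (placeholder address).
    await node.bootstrap([("1.2.3.4", 8468)])

    # "put": store a value under a key, replicated to the closest nodes.
    await node.set("my-key", "my-value")

    # "get": any node in the DHT can now look the value up by key.
    print(await node.get("my-key"))

    node.stop()

asyncio.run(main())
```

Everything a torrent-like network builds on top of this pair reduces to agreeing on what the keys and values mean.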
This is an official tutorial published earlier on the Ontology Medium blog.
Excited to publish it for Habr readers. Feel free to ask any related questions and to suggest a better format for tutorial materials.
In the beginning, there was the Word… which quickly became a communication protocol in need of an upgrade.
Hi, I'm one of the developers of the sharded blockchain Near Protocol, and in this article I want to talk about what blockchain sharding is, how it is implemented, and what problems exist in blockchain sharding designs.
It is well known that Ethereum, the most-used general-purpose blockchain at the time of this writing, can process fewer than 20 transactions per second on the main chain. This limitation, coupled with the popularity of the network, leads to high gas prices (the cost of executing a transaction on the network) and long confirmation times: although a new block is produced approximately every 10–20 seconds, the average time it actually takes for a transaction to be added to the blockchain is 1.2 minutes, according to ETH Gas Station. Low throughput, high prices, and high latency together make Ethereum unsuitable for services that need to scale with adoption.
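A back-of-the-envelope calculation shows where the sub-20-transactions-per-second figure comes from (the block gas limit, average gas per transaction, and block time below are rough period-appropriate assumptions, not exact network parameters):

```python
# Rough estimate of Ethereum main-chain throughput; all three constants
# are assumptions for illustration, not authoritative network values.
gas_limit_per_block = 8_000_000  # approximate block gas limit of the era
avg_gas_per_tx = 40_000          # average tx costs more than a bare 21k transfer
block_time_s = 14                # blocks arrive roughly every 10-20 seconds

tx_per_block = gas_limit_per_block / avg_gas_per_tx
throughput = tx_per_block / block_time_s
print(f"~{tx_per_block:.0f} tx per block, ~{throughput:.1f} tx per second")
# ~200 tx per block, ~14.3 tx per second
```

Raising any of these knobs on a single chain runs into hardware and propagation limits, which is precisely the motivation for sharding.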