The main goal of a blockchain explorer is to collect network data and present it in a human-readable format, with the ability to search, filter, and view network entities such as blocks, transactions, and validators.
Like most blockchain networks, Minter needed one. Not all of this data can be obtained from a node; moreover, serving such queries is not what nodes are intended for, and too many of them can lead to network congestion. We decided to develop Minter Explorer, but the project’s requirements soon grew, and the explorer evolved into a gateway for communication between the blockchain network and our applications.
The first release looked more like a prototype: the rapid development of the blockchain network itself often made it hard to commit to long-term decisions and plan our next steps.
Initially, the project was a monolith written in PHP 7.2. For the API we used the Lumen (Laravel) framework; for the database, PostgreSQL 9.6; for queuing, RabbitMQ; and Redis for caching. We also built the website on the Vue.js front-end framework.
Everything worked like this:
- In an infinite loop, a console command queried a node with a five-second delay between requests;
- The returned information was then passed to workers for storage.
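The loop above can be sketched in Go (the original implementation was a PHP console command; this is a simplified, terminating sketch with a stub standing in for real node calls, and the `Block` fields are illustrative):

```go
package main

import (
	"fmt"
	"time"
)

// Block is a minimal stand-in for the data a Minter node returns.
type Block struct {
	Height       uint64
	Transactions int
}

// BlockFetcher abstracts the node query; a real implementation would
// call the node's HTTP API.
type BlockFetcher func(height uint64) (Block, error)

// pollBlocks mirrors the console command described above: query the node
// in a loop with a fixed delay and hand each block off for storage.
// It stops after maxBlocks so this example terminates.
func pollBlocks(fetch BlockFetcher, delay time.Duration, maxBlocks int) []Block {
	var stored []Block
	height := uint64(1)
	for len(stored) < maxBlocks {
		b, err := fetch(height)
		if err != nil {
			time.Sleep(delay) // node not ready yet: wait and retry
			continue
		}
		stored = append(stored, b) // a real worker would write this to the DB
		height++
		time.Sleep(delay)
	}
	return stored
}

func main() {
	// Stub fetcher standing in for a real node call.
	stub := func(h uint64) (Block, error) {
		return Block{Height: h, Transactions: int(h) * 10}, nil
	}
	fmt.Println("stored blocks:", len(pollBlocks(stub, time.Millisecond, 3))) // prints "stored blocks: 3"
}
```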
As you can see, nothing out of the ordinary. :)
Due to a significant number of queries, and because early node versions often ran unstably, we had to ensure the fault tolerance of the service.
For that purpose, we introduced another console command that looped through the pool of nodes with a delay, checking each node’s availability and data freshness and marking it as active or inactive. Thus, upon receiving an error, the system simply moved on to the next node.
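The failover half of this scheme can be sketched as a small node pool in Go (node names and the exact health-check criteria are illustrative; the health-check loop that calls the marking methods is not shown):

```go
package main

import (
	"errors"
	"fmt"
	"sync"
)

// NodePool tracks which nodes are currently usable. A separate health-check
// loop (not shown) would mark nodes active or inactive based on their
// availability and data freshness.
type NodePool struct {
	mu     sync.Mutex
	nodes  []string
	active map[string]bool
}

func NewNodePool(nodes []string) *NodePool {
	active := make(map[string]bool, len(nodes))
	for _, n := range nodes {
		active[n] = true // assume healthy until proven otherwise
	}
	return &NodePool{nodes: nodes, active: active}
}

func (p *NodePool) MarkInactive(node string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.active[node] = false
}

func (p *NodePool) MarkActive(node string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	p.active[node] = true
}

// NextActive returns the first node currently marked active, so a caller
// that got an error from one node can fail over to the next.
func (p *NodePool) NextActive() (string, error) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, n := range p.nodes {
		if p.active[n] {
			return n, nil
		}
	}
	return "", errors.New("no active nodes")
}

func main() {
	pool := NewNodePool([]string{"node-1", "node-2", "node-3"}) // hypothetical node names
	pool.MarkInactive("node-1")                                 // e.g. the node returned an error
	next, _ := pool.NextActive()
	fmt.Println("failing over to:", next) // prints "failing over to: node-2"
}
```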
That is also why we decided to switch the wallets we had developed to sending transactions via Explorer.
Connecting the wallets as clients posed new requirements. The main one was the need to proxy formed transactions to the blockchain network and return the result to the client. At the same time, we needed to store address balances to protect the node from a large number of queries, and to gather information about the coins on the network for users’ convenience. In the old node versions, proxying transactions was fairly simple: the node itself checked whether a transaction had been added to the network and returned the result only after that.
But remember that one of the main advantages of the Minter network is the instant transfer of a massive number of transactions. We did not want to overload the nodes with additional responsibilities, so we moved the checking process to Explorer.
Later, aiming to divide responsibility among services, we moved this part to a separate subproject. That is how Minter Gate was established.
Minter Gate is a service that helps form a transaction and send it to the Minter network.
The service’s API allows you to:
- estimate a transaction fee;
- get the amount you will receive from selling a coin;
- get the amount you will need to spend to buy another coin;
- get the total number of transactions for a specific address;
- send a formed transaction to the network.
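A Gate-style endpoint can be sketched as follows. The real service uses the Gin framework; the standard library is used here to keep the sketch self-contained, and the route paths and fee values are illustrative, not the actual Gate API:

```go
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

// Hypothetical fee table; real commission values come from the Minter network.
var feeByType = map[string]int64{
	"send":     10,
	"sellCoin": 100,
}

// EstimateFee mirrors the core of the "estimate a transaction fee" endpoint.
func EstimateFee(txType string) (int64, bool) {
	fee, ok := feeByType[txType]
	return fee, ok
}

// registerRoutes wires up Gate-style endpoints. Paths are illustrative.
func registerRoutes(mux *http.ServeMux) {
	mux.HandleFunc("/api/estimate/tx-commission", func(w http.ResponseWriter, r *http.Request) {
		fee, ok := EstimateFee(r.URL.Query().Get("type"))
		if !ok {
			http.Error(w, "unknown transaction type", http.StatusBadRequest)
			return
		}
		json.NewEncoder(w).Encode(map[string]int64{"commission": fee})
	})
	mux.HandleFunc("/api/send_transaction", func(w http.ResponseWriter, r *http.Request) {
		// The real Gate would proxy the signed transaction to a node here
		// and report the result back to the client.
		w.WriteHeader(http.StatusAccepted)
	})
}

func main() {
	fee, _ := EstimateFee("send")
	fmt.Println("send fee:", fee) // prints "send fee: 10"
	// To serve: mux := http.NewServeMux(); registerRoutes(mux); http.ListenAndServe(":8080", mux)
}
```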
We developed the service using the Go language and the Gin framework, which made it lightweight and fault-tolerant.
Further Division of Services
A few months in, we uncovered some shortcomings in the system design. PHP was unable to handle blocks with a high number of transactions (>600) while a new block was being generated, which caused the explorer to lag behind the network. The transactions table had grown, and new entities had appeared. It was obvious that the database needed significant restructuring. To increase data-processing speed, we focused on redesigning the part responsible for seeding the database and moved it into a separate service.
Minter Explorer Extender
Extender is a service responsible for seeding the database from the blockchain network. For this service we again chose Go, whose concurrency support let us process data in parallel and significantly improved performance.
When communicating with the database, the service sent a minimum number of read queries, caching the necessary data. This let us substantially reduce the load on the database and eliminate one of the weak spots.
A flexible configuration — for example, of the number of workers that process network entities and the size of the chunks those workers handle — made it possible to achieve high performance on different hardware setups.
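The worker-and-chunk scheme can be sketched in Go. This is a simplified model under assumed names (`Tx`, `processChunks`); the real workers would upsert each chunk into PostgreSQL rather than just count it:

```go
package main

import (
	"fmt"
	"sync"
)

// Tx is a minimal stand-in for a transaction pulled from a block.
type Tx struct{ Hash string }

// processChunks splits a block's transactions into chunks and fans them
// out to workerCount goroutines. chunkSize and workerCount are the
// tunables mentioned above; the per-chunk work is a stub.
func processChunks(txs []Tx, chunkSize, workerCount int) int {
	chunks := make(chan []Tx)
	go func() {
		defer close(chunks)
		for start := 0; start < len(txs); start += chunkSize {
			end := start + chunkSize
			if end > len(txs) {
				end = len(txs)
			}
			chunks <- txs[start:end]
		}
	}()

	var wg sync.WaitGroup
	var mu sync.Mutex
	processed := 0
	for i := 0; i < workerCount; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for chunk := range chunks {
				// A real worker would save the chunk to PostgreSQL here.
				mu.Lock()
				processed += len(chunk)
				mu.Unlock()
			}
		}()
	}
	wg.Wait()
	return processed
}

func main() {
	txs := make([]Tx, 650) // a block with many transactions
	fmt.Println("processed:", processChunks(txs, 100, 4)) // prints "processed: 650"
}
```

Because the chunk channel is unbuffered, slow workers naturally throttle the producer, which keeps memory bounded even on large blocks.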
Minter Explorer API
The API was the last part to be modified. Not wanting to lump all technologies together, we rewrote it in Go as well. As the HTTP framework, we used Gin. Since we were on PostgreSQL, we chose the go-pg ORM for database access, as it was designed specifically for Postgres. Unlike its alternatives, it generated optimal queries, which also helped limit the overall number of queries sent to the database.
Client Notifications in Real Time
Next, we had to reduce the number of queries that Minter Explorer API was making to the DB, so we added client notifications over WebSocket. New blocks and transactions were now broadcast at the moment they were saved.
When it came to a real-time messaging server, our choice fell on Centrifugo. The project claimed support for high loads and also had client libraries for the platforms we needed: JS, iOS, and Android.
Explorer Extender retrieves the data from the node and saves it to the DB, simultaneously sending this data to the socket server, which, in its turn, notifies all interested clients.
Minter Gate works directly with the node, fetching the data a client needs to create a new transaction or sending an already formed one.
Looking back, we can now say that we ended up with an infrastructure capable of fulfilling any task, connecting to the network at any point, and syncing with it in the shortest time possible. And this infrastructure consists of loosely coupled services, each targeting its own specific, narrow area.
The engine of progress does not, however, have a reverse gear — neither do we — and maybe Minter Explorer will be assigned new tasks in the future.