Agents Of I.N.U. Update: Jan 28, 2022

All the happenings in the world of I.N.U.

It’s been an up-and-down January. We kicked off the month with CMC / CG (and can finally put Wen C.M.C. / Wen CG behind us). Since then, Bitcoin and the crypto market have fallen off a cliff, and we haven’t been spared. Marketing has had to take a bit of a back seat while we take a more organic approach to growth.

On the plus side, it’s given us an excellent opportunity to focus on developing our product. While so far we’ve mostly spoken about the website, it is just the tip of the iceberg. Most of the magic that powers Agents of I.N.U. lives on our servers, analysing the Blockchain and transforming the data to make it easily accessible to our users.

Disclaimer. I’ll try to make this update accessible, but things might get a bit technical due to the topics that we’ll be discussing.

How does this Blockchain thing even work?

Before we get into the specifics of Agents of I.N.U., let’s set the scene with a very simplified overview of the Blockchain and tokens.

  1. We operate on the Binance Smart Chain.
  2. The Binance Smart Chain is an Ethereum-compatible blockchain that allows users to create and run Smart Contracts.
  3. Tokens are Smart Contracts that people can trade on the Binance Smart Chain. Tokens implement the BEP-20 standard (identical to Ethereum’s ERC-20 standard). The standard specifies what functions a contract needs to support to be tradeable.
  4. BEP-20 / ERC-20 tokens are traded through a series of Smart Contracts that make up a Decentralized Exchange (DEX). The most popular DEX on the BSC is PancakeSwap.
  5. PancakeSwap (a fork/copy of Uniswap) uses Liquidity Pools instead of order books to enable token trading. This makes it possible to always trade tokens, even if no one else is buying or selling. It’s a type of Automated Market Maker.
  6. All updates to the BSC are done through transactions. This includes creating contracts/tokens, adding liquidity and buying/selling tokens.
  7. Transactions are grouped into blocks (generally 100–200 transactions per block). Blocks are processed roughly every 3 seconds on the BSC, and are validated and confirmed by Validators on the network.
  8. There are Nodes in the wild that sync the state of the Blockchain and expose it to the Internet. These Nodes are the backbone of BSC and power dApps, though they have limitations. When you see data about a token, it’s coming from a Node. When you make any transaction (e.g. through Metamask/Trust Wallet), it’s going through one of these Nodes.
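To make points 6–8 a little more concrete: dApps talk to Nodes using the standard Ethereum JSON-RPC protocol, which BSC Nodes also speak. Here’s a minimal sketch that just builds two such requests offline, without sending them; the endpoint URL and token/holder addresses are placeholders for illustration.

```python
import json

# A public BSC Node endpoint (illustrative; a real dApp would POST to it).
NODE_URL = "https://bsc-dataseed.binance.org/"

def rpc_request(method: str, params: list, request_id: int = 1) -> str:
    """Serialise a JSON-RPC 2.0 request as a Node expects it."""
    return json.dumps({
        "jsonrpc": "2.0",
        "method": method,
        "params": params,
        "id": request_id,
    })

# Fetch the latest block, including full transaction objects.
latest_block = rpc_request("eth_getBlockByNumber", ["latest", True])

# Read a token balance by calling the BEP-20 balanceOf(address) function.
# 0x70a08231 is the standard ERC-20/BEP-20 4-byte selector for balanceOf.
holder = "0x0000000000000000000000000000000000000001"  # placeholder address
call_data = "0x70a08231" + holder[2:].rjust(64, "0")
balance_call = rpc_request("eth_call", [
    {"to": "0xTokenContractAddress", "data": call_data},  # placeholder token
    "latest",
])

print(latest_block)
print(balance_call)
```

Note how narrow these calls are: one block, or one balance, per round trip. That limitation is exactly what the next section is about.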

Right…I knew a lot of that already. Why’s it matter?

There are a few key takeaways from the above:

  • Most Defi blockchains use the ERC-20 standard, with very few changes. So, once you support one Blockchain, it’s not a massive amount of work to support another Blockchain. What works on BSC and PancakeSwap also works on Ethereum and Uniswap, Avalanche and Trader Joe, Fantom and Spookyswap…
  • Nodes are fast and efficient at allowing dApps to interact with the Blockchain. However, they are terrible for querying large amounts of data and doing any level of analysis. This is why most dApps have non-existent sorting and filtering support and generally only provide rudimentary metrics.

Ok…What’s this got to do with Agents of I.N.U.?

Glad you asked. On the surface, it might be easy to think of Agents of I.N.U. as merely a window into the Blockchain. In reality, querying the Blockchain directly is not feasible for the level of analysis that we support.

Instead, there’s a mountain of work involved in transforming the raw data from the Blockchain into a format that makes it easily accessible for the type of analysis that we want to perform. The effort involved is not dissimilar to that of some of the most prominent players in the Defi space: names like Covalent, Moralis and Bitquery.

We’re aware of no other API that offers the level of filtering and sorting that we do, allowing users to home in on tokens that match their exact criteria.

This is only possible because we transform the data into a huge database of minute-level price data for each token that we’re tracking. As of writing, we’ve got 35.6 million minutes in our database across 231,631 tokens.
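To give a feel for what a database of minute-level price data can look like, here’s a hedged sketch using SQLite. The real schema isn’t public, so the table and column names are invented for the example; the key idea is a composite key of token and minute, which keeps per-token time-range queries cheap.

```python
import sqlite3

# Illustrative schema only: table and column names are invented.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE minute_prices (
        token_address TEXT NOT NULL,      -- BEP-20 contract address
        minute        INTEGER NOT NULL,   -- unix timestamp floored to the minute
        price_bnb     REAL NOT NULL,      -- closing price in BNB for the minute
        trade_count   INTEGER NOT NULL,
        volume_bnb    REAL NOT NULL,
        PRIMARY KEY (token_address, minute)
    )
""")

# Two minutes of data for one made-up token.
rows = [
    ("0xToken", 1643328000, 0.0000021, 14, 3.2),
    ("0xToken", 1643328060, 0.0000023, 9, 1.7),
]
conn.executemany("INSERT INTO minute_prices VALUES (?, ?, ?, ?, ?)", rows)

# The composite primary key makes per-token lookups an index scan.
count, = conn.execute(
    "SELECT COUNT(*) FROM minute_prices WHERE token_address = ?", ("0xToken",)
).fetchone()
print(count)
```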

Interesting. So you store price data to make it easy and quick to search for tokens?

Exactly. Though we’re storing much more than just price data. Currently, we store:

  • Minute-level price, trade count and volume data
  • 15-minute, 1-hour and 1-day price, trade count and volume data, kept for performance reasons
  • Periodic checks on whether tokens can be traded (our Honeypot Checker)
  • Contract analysis to look for scams (rudimentary at the moment; we’ll be updating this in the near future)
  • Liquidity lock and contract change events
  • Periodic updates of the number of holders for each token
  • Social links extracted from the contract code (Telegram, Twitter)
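The 15-minute, 1-hour and 1-day figures above can be derived from the minute-level data rather than stored independently. A minimal sketch of that rollup, with illustrative field names (timestamp, price, trade count, volume per minute bar):

```python
from collections import defaultdict

def rollup(minute_bars, bucket_seconds=15 * 60):
    """Aggregate (timestamp, price, trades, volume) minute bars into
    coarser buckets; the last price in a bucket becomes its close."""
    buckets = defaultdict(lambda: {"close": None, "trades": 0, "volume": 0.0})
    for ts, price, trades, volume in sorted(minute_bars):
        key = ts - ts % bucket_seconds   # floor timestamp to bucket start
        bucket = buckets[key]
        bucket["close"] = price          # input is sorted, so last write wins
        bucket["trades"] += trades
        bucket["volume"] += volume
    return dict(buckets)

bars = [
    (1643328000, 0.0000021, 14, 3.2),   # 00:00
    (1643328060, 0.0000023, 9, 1.7),    # 00:01, same 15-minute bucket
    (1643328900, 0.0000025, 5, 0.8),    # 00:15, next bucket
]
print(rollup(bars))
```

Precomputing these rollups trades a little storage for much faster chart and filter queries, which is presumably the “performance reasons” above.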

Awesome. Can you show me how it works?

Here’s a high-level summary of the different parts of our system, and how they work:

  • Event Service listens to relevant Liquidity / Owner events on the Blockchain and updates our database with that information.
  • Price Updater Service listens to token trade events and updates our price database.
  • Contract Analysis Service runs periodically against the Blockchain, checking to see if the contract is safe and whether or not it can be traded (Honeypot Checker).
  • Social Analysis Service attempts to extract social media information from the contract. In future, we’ll be adding other data streams to extract this information.

That’s awesome!

Thanks. We think so too, but there are some issues:

  • We can’t replay data. If our service is down, or the Nodes are laggy, there’s a chance that we miss some price data. While there’s some logic to minimise data loss, it does happen from time to time.
  • Some subtle bugs in the code have caused us to calculate the wrong price for some tokens. Fewer than 4% of tokens are affected, and those only infrequently. The current system has no way of updating incorrect prices.
  • We only track data as it comes in, live off the Blockchain. That means we have no data from before we turned on our services (around October). Due to the way we detect tokens, this also means we’re not tracking some popular, older tokens.
  • We currently only track prices for tokens that are paired against BNB. This is the vast majority of tokens, but we can do better.
  • If we were to launch our service for a different chain (say, Ethereum), we’d only start collecting data from that day. Not very useful, as one of our points of difference is our ability to aggregate data historically.

Not so cool. What are you doing about all that?

A lot. I’ve spent the last two weeks improving our system to rectify most of the issues. Here’s how:

  1. We’ve created two new services, the Data Downloader Service and the Historical Price Updater Service.
  2. The Data Downloader Service will download a day’s worth of data into secondary, portable databases (SQLite databases for those in the know). It then stores these databases in the cloud, ready for processing. Initially, we’re instructing it to download data for the last few months.
  3. The Data Downloader Service will also download yesterday’s data every day at 1 am UTC.
  4. The Historical Price Updater Service will run every time a day is downloaded. It’ll reprocess that day of data and update our DB accordingly. It’ll also pick up any tokens we’re not currently tracking.
  5. As the Historical Price Updater Service updates the data with our latest algorithm, it’ll correct any issues in prices. We’ll also update this to track even more tokens — including those that aren’t paired against BNB.
  6. We’ll also be able to extend our algorithms to track multiple pairs against the same token in the future. Due to our new system, we’ll be able to reprocess all the data very quickly and make sure that we’re tracking even more.
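The pipeline in steps 2–5 can be sketched as two small functions: one that lands a day of raw trades in its own portable SQLite file, and one that reprocesses that file to recompute prices from scratch, which is what makes corrections possible. All file, table and function names here are illustrative, not the real services:

```python
import os
import sqlite3
import tempfile

def download_day(trades, path):
    """Data Downloader Service (sketch): persist one day of raw trades
    into a standalone, portable SQLite file."""
    db = sqlite3.connect(path)
    db.execute("CREATE TABLE trades (token TEXT, ts INTEGER, price REAL)")
    db.executemany("INSERT INTO trades VALUES (?, ?, ?)", trades)
    db.commit()
    db.close()

def reprocess_day(path):
    """Historical Price Updater Service (sketch): recompute the closing
    price per token per minute from raw trades. Because it starts from
    raw data, rerunning it corrects any previously wrong prices."""
    db = sqlite3.connect(path)
    rows = db.execute(
        "SELECT token, ts - ts % 60 AS minute, price FROM trades ORDER BY ts"
    ).fetchall()
    db.close()
    closes = {}
    for token, minute, price in rows:
        closes[(token, minute)] = price   # last trade in the minute wins
    return closes

day_file = os.path.join(tempfile.mkdtemp(), "2022-01-27.sqlite")
download_day([
    ("0xToken", 1643328000, 0.0000021),
    ("0xToken", 1643328030, 0.0000022),   # same minute, later trade
    ("0xToken", 1643328060, 0.0000023),
], day_file)
print(reprocess_day(day_file))
```

Keeping each day in its own file is what makes replaying cheap: a bad day can be re-downloaded and reprocessed in isolation without touching the rest of the database.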

Nice! Any other benefits?

Sure.

  1. These new services will pave the way for us to support other chains. We’ll need to create new versions of our services, point them to the Nodes in the other chains, and instruct the Data Downloader Service to download the last few months’ worth of data. Easy-peasy syncing!
  2. With our data being much more complete and robust, we’ll support much more in-depth charts on our token page, spanning months.
  3. There’s a lot of value in what we’re doing for other apps and products in this space. Given that our services are becoming increasingly rugged and production-grade, there’s a future where we can start licensing out access to our APIs to other companies.

Thanks for the very detailed update! But have there been any changes to the website?

It hasn’t been the priority, but there are a few improvements that we’ll be rolling out shortly:

  • We’ve added Token Search to the navbar, allowing you to quickly and easily navigate to a Token Page
  • We’ll be displaying Liquidity Lock / Owner Renounce information on the Token Page
  • We’ll be displaying logos for a good number of tokens on the BSC

Once these backend changes are done, it’ll enable a host of other improvements to the Token Page, so stay tuned!

Great! Long read, but I learned a lot.

Glad to hear it. And if you want to learn even more:

Start by checking out our web app!

Web App | Telegram | Twitter | Brand Site | Youtube Channel

If you’d like more detail on any of the above, or if there are other topics you’d love to learn about, please do let us know!
