As many of you know, we have been busy here at EOSUSA building out our infrastructure to provide all the services we can for the EOS ecosystem. In the last 6 months we have grown from the single EOS server we originally built after the mainnet launch to now having multiple host servers, dedicated disk arrays, and even spinning up a secondary site in our Panama location. With so much advancement (and more to come), and in the interest of transparency, I wanted to provide everyone with an update on what we currently have in place.
In our primary US site, we currently have 5 bare-metal front-end servers loaded with virtualization software that allows us to run multiple server instances on the hardware and make efficient use of the resources. All servers have multiple Xeon processors along with highly-redundant RAID6 disk arrays, redundant power supplies, and multiple battery backups to all but eliminate hardware-failure-related downtime. There is also a stand-alone 16-drive disk array (SAN) that is shared among all the front-end virtual servers as needed. Everything connects over a dedicated layer-3 network switch and is isolated to a dedicated DMZ port on our FortiGate firewall.
We are working on building out our Panama site, which also has its own dedicated Fortinet firewall, switch, and power, but currently hosts only a single virtual server onsite to provide redundancy. Our new, larger server was destroyed in shipment, so we are working to repair/replace the hardware and complete the buildout of the secondary services in that location.
Now you might be asking yourself: "Self, how is he using all of that equipment?" Lucky for you, I have an updated network diagram detailing each of the servers so you can get an overview of things. But I'll also try to break things down into what the servers do to help make sense:
Web Server:
We host our BP registration/information files on a small, hosted virtual server to ensure our public information is always accessible. We also use this server to monitor external access to our nodes and alert us if they become unreachable.
Proxy Servers:
Proxy servers serve 2 main purposes: 1) provide a filter between the internet and the back-end servers for security and efficiency, and 2) provide fault-tolerance/fail-over options should a server go offline. The proxy servers check the status of all back-end servers and only forward requests if a server is online and functional. For instance, all EOS API traffic is normally split equally between the first 2 API servers, but if 1 goes down, the other handles all requests. If that 2nd server also goes down, the proxy automatically sends all traffic to our backup API server until the first 2 servers are back online. The proxy layer also allows us to present all the back-end services as a single node (i.e. you can use the same seed01.eosusa.news address for normal EOS chain data, full v1 history data, or the new v2 Hyperion data; no need to know different addresses for different services). To provide redundancy, there are 2 identical proxy servers, so if one is offline, the FortiGate redirects all traffic to the 2nd proxy server (which then load-balances the back-end servers as described above).
Servers: EOS-PR01, EOS-PR02, EOS-SE11
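The failover behavior described above can be sketched in a few lines. This is purely illustrative (our proxies are real load balancers, not Python); the server names for the two primaries and the backup are assumptions for the example:

```python
# Illustrative sketch of the proxy's failover decision: share traffic
# across healthy primary API servers, fall back to the backup only when
# both primaries are down. Server names here are assumed for the example.

def pick_backends(health: dict) -> list:
    """Return the back-end servers that should receive traffic right now."""
    primaries = ["EOS-SE01", "EOS-SE02"]
    backup = "EOS-SE03"
    live = [s for s in primaries if health.get(s, False)]
    if live:
        return live            # load-balance across healthy primaries
    if health.get(backup, False):
        return [backup]        # both primaries down: use the backup
    return []                  # nothing healthy; proxy returns an error

# Both primaries up -> traffic is split between them
print(pick_backends({"EOS-SE01": True, "EOS-SE02": True, "EOS-SE03": True}))
```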
API/Seed Servers:
These servers are the primary servers/services for the EOS chain data. Each contains a copy of the EOS blockchain (currently 200GB+) and provides both web API access to EOS chain information and the P2P services used to synchronize other EOS nodes throughout the world.
Servers: EOS-SE01, EOS-SE02, EOS-SE03, EOS-SE11
State History Server:
Recently a new EOSIO plugin was introduced to allow other programs to pull EOS chain information so it can be more easily used. It requires its own 500GB+ of logs, but once synced it allows us to implement many of the new applications/integrations being developed (such as Hyperion and Chronicle).
Servers: EOS-SE02, EOS-SE11
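For reference, enabling the state history plugin on a nodeos instance looks roughly like the config.ini fragment below. The directory and endpoint values are placeholders for illustration, not our actual settings:

```ini
# Enable the state history plugin with both trace and chain-state logs
plugin = eosio::state_history_plugin
trace-history = true
chain-state-history = true
# Where the (500GB+) state history log files are written
state-history-dir = "state-history"
# Local endpoint that consumers (e.g. Hyperion, Chronicle) connect to
state-history-endpoint = 127.0.0.1:8080
```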
Block Producer Servers:
As we are a standby block producer far from seeing active duty (currently ranked in the 120s), our block producer nodes are pretty much sitting idle but ready and waiting to be called into service. Each maintains a current copy of the EOS chain along with our BP signing keys, so when we are tapped to sign blocks, it's ready to go. Each also runs the BP Heartbeat plugin from LiquidEOS, which writes status/validation information to the chain every hour to ensure the node is really online and ready to sign blocks as needed. In the event our primary BP node is unavailable, the secondary node in our Panama location will step in and provide the same services until the primary is back online.
Servers: EOS-BP01, EOS-BP11
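The hourly heartbeat and the primary/secondary handoff described above boil down to two small decisions, sketched below. The real checks are done by the LiquidEOS BP Heartbeat plugin on-chain; this is just an illustration of the timing and failover logic:

```python
# Illustrative sketch of the standby BP logic: write a heartbeat at most
# once per hour, and let the Panama node (EOS-BP11) take over only when
# the primary (EOS-BP01) is unreachable.
from typing import Optional

HEARTBEAT_INTERVAL = 3600  # the plugin writes status once per hour (seconds)

def heartbeat_due(last_sent: float, now: float) -> bool:
    """True when it's time to write another heartbeat to the chain."""
    return now - last_sent >= HEARTBEAT_INTERVAL

def active_bp(primary_up: bool, secondary_up: bool) -> Optional[str]:
    """Which BP node should be signing blocks right now."""
    if primary_up:
        return "EOS-BP01"
    if secondary_up:
        return "EOS-BP11"  # Panama standby steps in
    return None            # neither node reachable
```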
Light History Server:
As you are probably aware, the original plugin for pulling chain history (known as v1 history) has been deprecated and requires an abnormally massive server to run. Many new history solutions are being developed to alleviate this issue, but in the meantime Greymass has developed a custom plugin that allows servers to provide "some" of the v1 history and, if that local information is not enough, send the request over to one of the few servers still running the full v1 history information. We have implemented the light history plugin (storing each account's last 1000 transactions locally) and pass all larger historical requests over to the Greymass servers as needed.
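The local-vs-forward routing described above can be sketched as a single decision. The function name and the exact cutoff check are illustrative, not the plugin's actual internals:

```python
# Illustrative sketch of light-history routing: requests that fit within
# the last 1000 actions we keep per account are served locally; deeper
# history is handed off to a full v1 history node (Greymass, in our case).

LOCAL_ACTIONS_PER_ACCOUNT = 1000  # what our light history store retains

def route_history_request(actions_requested: int) -> str:
    """Decide where a get_actions-style request should be answered."""
    if actions_requested <= LOCAL_ACTIONS_PER_ACCOUNT:
        return "local"               # covered by our light history store
    return "forward-to-full-v1"      # deeper history: hand off upstream
```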
Hyperion History Servers:
Hyperion is a new history platform developed by EOSRio that utilizes the state history plugin mentioned above to pull the chain data out into a much faster database (ElasticSearch) that can then be used to quickly pull the chain history data. While it currently takes just under 1TB of hard drive space, this is much more manageable than the 1TB of memory/RAM required for the v1 history plugin. It actually consists of 3 servers, although 2 are shared by any additional chains being indexed by Hyperion. One handles the reading/indexing of the data as well as the front-end API web interface, one is a message queuing (staging ground) server, and the last is the big ElasticSearch database server. As we add more chains to the Hyperion indexer, they will only require the index/API server (the queuing/database servers are shared).
Servers: EOS-HYP01, EOS-MQ01, EOS-ES01, JUNGLE-HYP01
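The three-part layout described above (indexer/API, message queue, database) is essentially a staged pipeline. Here is a toy model of that flow, where a `queue.Queue` stands in for the message-queue server and a dict stands in for ElasticSearch; the real deployment uses dedicated queuing and ElasticSearch servers:

```python
# Toy model of the Hyperion pipeline: the indexer stages decoded actions
# on a message queue, and a writer drains the queue into the search store.
import queue

mq = queue.Queue()  # stands in for the message-queuing server (EOS-MQ01)
es = {}             # stands in for the ElasticSearch server (EOS-ES01)

def index_block(block_num, actions):
    """Indexer/API server: stage each action on the queue."""
    for act in actions:
        mq.put((block_num, act))

def drain_to_db():
    """Database writer: move staged actions into the search store."""
    while not mq.empty():
        block_num, act = mq.get()
        es.setdefault(block_num, []).append(act)

index_block(1000, ["transfer", "voteproducer"])
drain_to_db()
print(es)  # {1000: ['transfer', 'voteproducer']}
```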
Jungle Testnet Servers:
CryptoLions developed a separate side-chain for testing EOS applications/processes before they are deployed to the mainnet, and we run a full API/P2P/BP node for it as well. It also has the state history plugin enabled, allowing us to offer the same ancillary/supporting services for the JungleNet that we offer on the EOS chain.
Servers: JUNGLE-SE01, JUNGLE-HYP01
Chronicle Server:
EOS Chronicle is another project that utilizes the information exposed by the state history plugin so it can be used in external databases/services. It is still under development, and we are working with the developers to implement any new features/services it provides as they become available.
DSP Server:
LiquidEOS has released LiquidApps, a platform for offering enhanced services to dApps utilizing the EOS chain. DSP stands for "dApp Service Provider"; a DSP basically acts as a middleman between the dApps and the EOS chain to provide additional features/services.
System Servers (Not Listed):
We also have a few non-critical servers dedicated to administration/monitoring purposes. One is my dedicated administrative remote-access box, and the other handles internal monitoring of all nodes, web servers, and vote/account activity.
Servers: EOS-SY01, EOS-ADM01, EOS-ADM11
So that's pretty much a current overview of all the servers we have in place now... at least the ones I can tell you about :) Be on the lookout for additional updates as we announce new services we are providing as well as new hardware I am deploying to handle those new services.
Don't forget to vote for your favorite BPs and be sure to include us (ivote4eosusa) in that list! :) We are always available on Telegram/etc. so if you have any questions or just want to chat, you know where to find us!