Bit (coin) bot
An end-to-end machine learning system for:
- Training and serving cryptocurrency price time-series and heuristic forecasting models
- Backtesting said deep learning models with different strategies and configurations
- Deploying trading solutions as trading signals, automated paper-trading, or live-trading bots
- Dashboarding and visualizing model forecasts, signals, backtesting, or live trading results
Deep Learning Model Training and Deployment
The system allows for both interactive, notebook-based local development and cloud-native, end-to-end ML pipeline execution.
TensorFlow Extended (TFX), with the Fluent TFX API layer, is used extensively to orchestrate these pipelines, which can be executed either locally or in a cloud environment such as GCP Dataflow, Kubeflow Pipelines, Azure Databricks, and others.
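To illustrate the orchestration idea, here is a minimal pure-Python sketch of ordered pipeline stages (ingest, validate, train) passing artifacts to one another. All names are hypothetical stand-ins; this is not the real TFX or Fluent TFX API.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

# Conceptual sketch only: models the ordered stages a pipeline runs,
# where each step consumes the previous step's artifact.

@dataclass
class Pipeline:
    steps: list = field(default_factory=list)

    def step(self, fn: Callable[[Any], Any]) -> Callable[[Any], Any]:
        # Register a stage in execution order.
        self.steps.append(fn)
        return fn

    def run(self, artifact: Any) -> Any:
        for fn in self.steps:
            artifact = fn(artifact)
        return artifact

pipeline = Pipeline()

@pipeline.step
def ingest(_):
    # Stand-in for pulling a price time series.
    return {"rows": [1.0, 2.0, 3.0]}

@pipeline.step
def validate(data):
    # Stand-in for schema/data validation before training.
    assert all(isinstance(x, float) for x in data["rows"])
    return data

@pipeline.step
def train(data):
    # Stand-in "model": the mean of the ingested series.
    return {"model": sum(data["rows"]) / len(data["rows"])}

result = pipeline.run(None)
```

A real TFX pipeline expresses the same shape declaratively (components wired by artifact channels) and hands execution to a runner such as Beam or Kubeflow Pipelines.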
For the pipelines and the online, real-time prediction services, the runtime is abstracted away either with Apache Beam pipelines or through Docker containers. TensorFlow Serving enables zero-downtime deployment of new models, after each trained model passes sophisticated input data validation, model evaluation, and infrastructure validation.
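The "validate before push" gate can be sketched as follows. This is a hypothetical simplification, not the real TFX Evaluator/Pusher or TF Serving API: a candidate model is promoted only if its data validated, the infrastructure check passed, and it beats the currently served baseline, so serving never loads a worse or broken model.

```python
# Hypothetical deployment gate: all names and metrics are illustrative.

def should_push(new_metrics: dict, baseline_metrics: dict,
                data_valid: bool, infra_valid: bool) -> bool:
    # Lower error is better for a forecasting model.
    improves = new_metrics["mae"] < baseline_metrics["mae"]
    return data_valid and infra_valid and improves

# Candidate beats the baseline and passes both validation gates -> push.
push = should_push({"mae": 0.12}, {"mae": 0.15},
                   data_valid=True, infra_valid=True)
```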
Every pipeline step execution, experiment run, prediction, or other artifact produced by this system is logged, either to an ML Metadata (MLMD) store or to custom solutions. Status reports, alerts, and notifications from selected parts of the system are reported to our Discord server (a Slack-like alternative).
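A minimal sketch of this logging-and-alerting flow, assuming an in-memory list as a stand-in for the MLMD store and a plain string for the Discord message (the real system uses the MLMD API and a webhook, neither shown here):

```python
import datetime as dt
import json

# Hypothetical run logging + alert formatting; names are illustrative.

def log_run(store: list, step: str, status: str, **metadata) -> dict:
    record = {
        "step": step,
        "status": status,
        "at": dt.datetime.now(dt.timezone.utc).isoformat(),
        **metadata,
    }
    store.append(record)  # stand-in for an MLMD write
    return record

def format_alert(record: dict) -> str:
    # Message body as it might be posted to a notification channel.
    extras = {k: v for k, v in record.items()
              if k not in ("step", "status", "at")}
    return f"[{record['status'].upper()}] {record['step']}: {json.dumps(extras)}"

store = []
rec = log_run(store, "trainer", "succeeded", val_loss=0.031)
alert = format_alert(rec)
```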
Backtesting and Live Trading
Trading strategies have been developed that cooperate with the trained models, along with some in-house state-keeping components used for online trading activities. These components include, but are not limited to:
- Trailing Stop-Loss and Take Profit Orders
- Vendor (e.g. Binance DEX) connectivity, order services and fee models
- Paper (or real) money allowance, per namespace
- Various ways of invoking the deep learning model prediction services
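As one example of a state-keeping component, a trailing stop-loss for a long position can be sketched like this (illustrative only; the in-house component also handles take-profit legs, vendor fees, and order placement):

```python
# Minimal trailing stop-loss sketch: the stop price trails the highest
# price seen since entry by a fixed percentage.

class TrailingStopLoss:
    def __init__(self, entry_price: float, trail_pct: float):
        self.trail_pct = trail_pct
        self.high_water = entry_price  # highest price seen since entry

    @property
    def stop_price(self) -> float:
        return self.high_water * (1.0 - self.trail_pct)

    def update(self, price: float) -> bool:
        """Feed a new price tick; return True if the stop is triggered."""
        self.high_water = max(self.high_water, price)
        return price <= self.stop_price

stop = TrailingStopLoss(entry_price=100.0, trail_pct=0.05)
stop.update(104.0)              # high-water mark rises to 104
stop.update(101.0)              # 101 > 104 * 0.95 = 98.8, position stays open
triggered = stop.update(98.0)   # 98 <= 98.8 -> exit signal
```

Because the stop only ratchets upward with the high-water mark, gains are locked in as the price rises while the downside per tick stays bounded by `trail_pct`.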
Our backtesting framework of choice is Backtesting.py; real-time deployment, however, is handled differently.
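To show what a backtest measures, here is a tiny pure-Python sketch (deliberately not the Backtesting.py API): go long when the price closes above its simple moving average, go flat when it closes below, and mark the final equity to market. The strategy, window, and price series are illustrative only.

```python
# Toy long/flat SMA-crossover backtest; numbers are illustrative.

def sma(series, n):
    # Simple moving average; None until n points are available.
    return [sum(series[i - n + 1:i + 1]) / n if i >= n - 1 else None
            for i in range(len(series))]

def backtest(prices, n=3, cash=1000.0):
    ma = sma(prices, n)
    position = 0.0  # units held
    for i in range(1, len(prices)):
        if ma[i] is None:
            continue
        if position == 0 and prices[i] > ma[i]:
            position = cash / prices[i]       # buy at close
            cash = 0.0
        elif position > 0 and prices[i] < ma[i]:
            cash = position * prices[i]       # sell at close
            position = 0.0
    return cash + position * prices[-1]       # mark to market

final_equity = backtest([10, 11, 12, 13, 12, 11, 10, 11, 12, 13])
```

A full framework like Backtesting.py layers order types, commissions, and statistics (Sharpe, drawdown, win rate) on top of this same core loop.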
Deployment and DevOps
Everything is packaged as self-contained Docker containers. These containers are deployed on a small, on-premises Kubernetes cluster with Kubeflow Pipelines and Knative installed. Jobs can be scheduled independently, and results are published to the ML Metadata server.