Technology
Kwiz doesn't just use tools; we engineer systems. This page is for the technically curious: developers, CTOs evaluating partnerships, and anyone who wants to understand the infrastructure behind our environmental analytics, enterprise applications, and trading products.
Foundation
R is our primary language — and we treat it as a serious engineering tool, not just a statistical scripting language. Our entire stack is built on R, from environmental analytics to production APIs to trading infrastructure.
Why R? The depth of its statistical ecosystem is unmatched. But more importantly, modern R infrastructure — Rhino for application architecture, box for modular imports, testthat for testing, Plumber for APIs — makes R a viable production language when engineered properly.
Our approach to R development follows software engineering best practices that are standard in other ecosystems but still uncommon in the R world: modular architecture, dependency injection, comprehensive test suites, CI/CD pipelines, and containerised deployment.
R 4.x Rhino Shiny Plumber box testthat renv Docker
# Production R: Not Your Average Script

# Modular imports with box
box::use(
  app/logic/analysis[
    run_pipeline, validate_output
  ],
  app/logic/monitoring[
    log_event, alert_on_failure
  ]
)

# Dependency injection for testability
run_analysis <- function(
  data,
  db = prod_db(),
  logger = prod_logger()
) {
  result <- run_pipeline(data)
  logger$info("Complete: {nrow(result)}")
  db$write(result)
}
Environmental Stack
Our environmental data science is powered by R’s best-in-class geospatial and ecological ecosystem. We build reproducible pipelines for environmental monitoring, impact assessment, and climate analytics.
Geospatial processing with sf for vector operations and terra for raster analysis — satellite imagery, land use classification, NDVI computation, and spatial overlay operations.
Interactive mapping with leaflet and mapview for stakeholder engagement dashboards and environmental monitoring tools.
Reproducible reporting with Quarto for generating transparent, auditable environmental reports that combine code, analysis, and documentation in a single pipeline.
Ecological data via rgbif for GBIF biodiversity data access, with automated quality assessment workflows for coordinate validation, taxonomic checks, and duplicate detection.
sf terra leaflet mapview rgbif Quarto NDVI Remote Sensing
# Environmental Data Pipeline
library(sf)
library(terra)
library(leaflet)

# Process satellite imagery
ndvi <- rast("sentinel_b8.tif") |>
  compute_ndvi(nir = 4, red = 3)

# Spatial analysis
impact_zones <- project_boundary |>
  st_buffer(5000) |>
  st_intersection(protected_areas)

# Interactive stakeholder map
leaflet(impact_zones) |>
  addTiles() |>
  addPolygons(
    color = "#14b8a6"
  )
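The automated quality checks mentioned above can be sketched as plain data-frame validation. This is a minimal, illustrative version: in production the input would come from `rgbif::occ_search()`, but here a small hand-made occurrence table stands in for the download, and `flag_occurrences()` is a hypothetical helper, not a real rgbif function.

```r
# Sketch of automated GBIF occurrence quality checks. The occurrence
# table below stands in for rgbif::occ_search() output.
occ <- data.frame(
  species          = c("Loxodonta africana", "Loxodonta africana",
                       "Panthera leo", "Panthera leo"),
  decimalLatitude  = c(-1.2921, -1.2921, 0, NA),
  decimalLongitude = c(36.8219, 36.8219, 0, 25.0)
)

flag_occurrences <- function(df) {
  df$missing_coords <- is.na(df$decimalLatitude) | is.na(df$decimalLongitude)
  # (0, 0) coordinates are a classic data-entry artefact in GBIF records
  df$zero_coords <- !df$missing_coords &
    df$decimalLatitude == 0 & df$decimalLongitude == 0
  df$duplicate <- duplicated(
    df[, c("species", "decimalLatitude", "decimalLongitude")]
  )
  df
}

flagged <- flag_occurrences(occ)
clean <- flagged[!(flagged$missing_coords | flagged$zero_coords |
                     flagged$duplicate), ]
nrow(clean)  # 1 record survives: the duplicate, zero and NA rows are dropped
```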
Kwiz Quants Platform
The Kwiz Quants platform connects an R-based strategy engine to MetaTrader 5 through KwizStrategyTester (our own Expert Advisor), a two-stage validation pipeline, and a production Shiny dashboard. Here is exactly how it is built.
Strategy engine: Signal generation, validation pipeline (52,248 combinations), portfolio optimisation, and a 13-module Shiny dashboard, all in R.
KwizStrategyTester: Our own MT5 Expert Advisor. Identical behaviour in the Strategy Tester and live trading: one EA, two modes, no drift. Covers 105 instruments across 9 asset classes.
Dashboard: A 13-module Rhino/Shiny app backed by Plumber APIs. Real-time P&L, position tracking, strategy lifecycle management, and client billing.
Storage: Columnar, Arrow-native persistence. Ephemeral DuckDB connections, thread-safe for concurrent Shiny and trading processes. SQLite in WAL mode for the pipeline queue.
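The ephemeral-connection pattern is simple to sketch: each reader opens its own short-lived DuckDB connection and tears it down on exit, so concurrent processes never share connection state. This is an illustrative sketch, not our production code; `with_duckdb()` and the table name are assumptions, and an in-memory database stands in for the Arrow-backed store.

```r
# Sketch of the ephemeral DuckDB connection pattern: open, use, close.
library(DBI)

with_duckdb <- function(db_path, fn, read_only = TRUE) {
  con <- dbConnect(duckdb::duckdb(), dbdir = db_path, read_only = read_only)
  # Guarantee the connection is closed even if fn() errors
  on.exit(dbDisconnect(con, shutdown = TRUE), add = TRUE)
  fn(con)
}

# Usage: every call gets a fresh connection, torn down on exit
positions <- with_duckdb(":memory:", function(con) {
  dbWriteTable(con, "positions", data.frame(symbol = "EURUSD", qty = 2))
  dbGetQuery(con, "SELECT * FROM positions")
}, read_only = FALSE)
```

Because no connection object outlives the call, a Shiny session and a trading process reading the same file never contend over shared handles.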
Containerised (Docker): The three R services — Shiny dashboard (port 3838), Payments/Paystack webhook API (port 8484), and read-only Strategy API (port 8585) — run in Docker with renv-pinned package versions. This guarantees reproducibility across dev and production for the application layer.
Not containerised: MT5 is a Windows binary. It runs natively on Windows or via a Wine64 bridge on macOS. The validation pipeline (Stage 1 grid search, Stage 2 MT5 tester) also runs directly on the host — it needs access to the local MT5 terminal and is orchestrated by R scripts, not Docker.
Stage 1 — Grid Search (Dukascopy OHLC): Each of 52,248 strategy-pair-timeframe combinations runs 10-fold walk-forward cross-validation with per-fold parameter grid search. Independent of broker infrastructure; uses Dukascopy historical data downloaded offline. Multiple runners parallelise via a SQLite WAL-mode shared queue.
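The shared SQLite queue can be sketched in a few lines of R. The schema and column names below are illustrative, not the production ones; the point is that WAL mode lets runners read while another writes, and a single UPDATE claims the next pending combination atomically.

```r
# Sketch of the WAL-mode SQLite work queue shared by Stage 1 runners.
library(DBI)

queue_db <- tempfile(fileext = ".sqlite")
con <- dbConnect(RSQLite::SQLite(), queue_db)
dbExecute(con, "PRAGMA journal_mode = WAL")  # readers don't block writers
dbExecute(con, "CREATE TABLE queue (
                  combo_id INTEGER PRIMARY KEY,
                  status   TEXT DEFAULT 'pending')")
dbExecute(con, "INSERT INTO queue (combo_id) VALUES (1), (2), (3)")

# Atomically claim the next pending combination for this runner
claim_next <- function(con, runner) {
  dbExecute(con, "UPDATE queue SET status = :r
                  WHERE combo_id = (SELECT combo_id FROM queue
                                    WHERE status = 'pending'
                                    ORDER BY combo_id LIMIT 1)",
            params = list(r = runner))
  dbGetQuery(con, "SELECT combo_id FROM queue WHERE status = :r",
             params = list(r = runner))
}

claimed <- claim_next(con, "runner-1")
dbDisconnect(con)
```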
Stage 2 — MT5 Strategy Tester (broker tick data): Stage 1 survivors run inside the native MT5 Strategy Tester via KwizStrategyTester — the same EA used in live trading. This is the first time broker spreads and tick-level execution costs enter the picture. Running identical EA code in both tester and live modes eliminates implementation drift. Wine64 bridges MT5 on macOS.
Quality Assurance
Rigorous validation is our hallmark — whether we’re validating environmental data quality, testing enterprise applications, or backtesting trading strategies.
Our quantitative validation process is inspired by Marcos Lopez de Prado’s work on backtesting methodology:
Combinatorial Purged Cross-Validation eliminates lookahead bias by creating non-overlapping train/test splits with embargo periods. This prevents the data leakage that invalidates most retail backtests.
Deflated Sharpe Ratio accounts for the multiple testing problem — when you test hundreds of strategies, some will look profitable by chance. The DSR adjusts for this, ensuring only strategies with genuine statistical edge survive.
Multi-layer execution testing validates that R-computed signals translate correctly to MT5 execution with realistic spreads and slippage.
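The purging and embargo idea behind the first point can be sketched in base R. This is a deliberately minimal illustration (ordinary K folds over time with an embargo window, not the full combinatorial scheme); `purged_folds()` and its parameters are illustrative, not our production pipeline.

```r
# Minimal sketch of purged time-series folds with an embargo period.
purged_folds <- function(n, k = 5, embargo = 10) {
  fold_id <- cut(seq_len(n), breaks = k, labels = FALSE)
  lapply(seq_len(k), function(f) {
    test <- which(fold_id == f)
    # Purge: ban training points inside the embargo window around the
    # test block, so labels near the boundary cannot leak into training
    banned <- seq(max(1, min(test) - embargo), min(n, max(test) + embargo))
    list(train = setdiff(seq_len(n), banned), test = test)
  })
}

folds <- purged_folds(n = 1000, k = 5, embargo = 20)

# Verify: no training index falls inside any test block or its embargo
overlap <- vapply(folds, function(f)
  any(f$train %in% seq(min(f$test) - 20, max(f$test) + 20)), logical(1))
all(!overlap)  # TRUE
```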
All code — environmental pipelines, enterprise applications, and trading infrastructure — is held to a 95%+ test coverage standard:
Unit tests with testthat: every function, every edge case, every error path.
Integration tests with shinytest2: end-to-end application behaviour validated in headless browsers.
Environmental data validation: automated quality checks for coordinate precision, taxonomic consistency, and temporal completeness in GBIF and EIA datasets.
CI/CD pipelines via GitHub Actions: automated testing on every push, with deployment gates that prevent untested code from reaching production.
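A unit test in the style described above might look like the sketch below. `compute_ndvi_values()` is a hypothetical helper written here for illustration; the point is the shape of a testthat test that covers the happy path, a property, and an error path.

```r
# Illustrative testthat unit test: happy path, invariant, error path.
library(testthat)

compute_ndvi_values <- function(nir, red) {
  if (length(nir) != length(red)) stop("band lengths differ")
  (nir - red) / (nir + red)
}

test_that("NDVI is bounded and errors on mismatched bands", {
  expect_equal(compute_ndvi_values(0.8, 0.2), 0.6)
  # NDVI of positive reflectances always lies in [-1, 1]
  expect_true(all(abs(compute_ndvi_values(runif(10), runif(10))) <= 1))
  expect_error(compute_ndvi_values(1:3, 1:2), "band lengths differ")
})
```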
Explore our environmental capabilities and trading platform, or get in touch to discuss how our engineering can solve your challenges.