AI Eval and Monitoring
Silsilat uses AI to evaluate gold collateral, ensure compliance with policy rules, and continuously monitor model performance through Arize Phoenix, trace logging, and Hedera audit anchoring.
The AI Evaluation & Monitoring Module (AEMM) is the intelligence layer of the Silsilat ecosystem. It brings transparency, accuracy, and accountability to every gold collateral assessment and policy decision — ensuring that every loan is grounded in real data, consistent logic, and auditable reasoning.
Purpose
Silsilat’s AI modules perform two critical functions:
- **Valuation Intelligence:** Calculate precise loan-to-value (LTV) ratios using real-time market data and historical patterns.
- **Continuous Monitoring:** Evaluate, log, and explain every AI decision for compliance, fairness, and auditability.
By integrating Arize Phoenix and Hedera Consensus Service (HCS), Silsilat transforms AI outputs into verifiable trust artifacts.
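The core valuation step can be sketched as follows. This is an illustrative outline only, with hypothetical function names, haircut, and LTV values; the production policy parameters come from the Policy Engine and the Silsilat Registry, not from hard-coded defaults.

```python
# Illustrative sketch of an LTV-capped collateral valuation.
# All names and default values are hypothetical, not the production API.

TROY_OZ_GRAMS = 31.1035  # grams per troy ounce

def max_loan_myr(weight_g: float, purity: float,
                 gold_usd_per_oz: float, usd_to_myr: float,
                 haircut: float = 0.15, max_ltv: float = 0.75) -> float:
    """Fair value of the pledged gold, reduced by a policy haircut and an LTV cap."""
    fair_value_usd = (weight_g / TROY_OZ_GRAMS) * purity * gold_usd_per_oz
    fair_value_myr = fair_value_usd * usd_to_myr
    collateral_value = fair_value_myr * (1 - haircut)  # policy haircut
    return collateral_value * max_ltv                  # regulatory LTV cap

# 10 g of 916-fineness gold at $2,400/oz and 4.70 MYR/USD:
loan = max_loan_myr(10.0, 0.916, 2400.0, 4.70)  # ~ RM 2,117.75
```

The deterministic arithmetic here corresponds to the rule-based policy layer; the ML layers described below adjust it with market-risk estimates.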
System Overview
Core Subsystems
| Subsystem | Function |
| --- | --- |
| Gold Evaluator Agent | Performs collateral appraisal and computes fair LTV ratios. |
| Policy Engine | Applies haircut and regulatory limits (e.g., BNM/Ar-Rahnu rules). |
| Arize Phoenix | Logs and visualizes model performance metrics and trace histories. |
| IPFS Artifact Manager | Stores AI trace data and evaluation outputs for immutability. |
| HCS Publisher | Anchors evaluation summaries and CIDs to Hedera for traceability. |
AI Evaluation Workflow

Evaluation Model Inputs
| Feature | Source | Description |
| --- | --- | --- |
| Gold Price (USD/oz) | MetalPriceAPI | Live spot price for gold in global markets. |
| FX Conversion Rate (USD → MYR) | FastForex API | Ensures consistent local-currency valuation. |
| Purity (%) | Pawnshop / Appraiser | Millesimal fineness (e.g., 916 = 22K, 999 = 24K). |
| Weight (grams) | Pawnshop | Physical mass of the pledged gold. |
| Item Type | Pawnshop | Jewelry or bar (affects haircut rate). |
| Policy ID | Silsilat Registry | Defines haircut, max LTV, and AML thresholds. |
| Historical Volatility | Phoenix Model | Market risk adjustment factor. |
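Assembled together, these features form one evaluation request. A minimal sketch of such a record, with illustrative field names not taken from the production schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EvaluationInput:
    """One appraisal request built from the sources in the table above.

    Field names are illustrative, not the production schema.
    """
    gold_usd_per_oz: float   # MetalPriceAPI spot price
    usd_to_myr: float        # FastForex conversion rate
    purity: float            # e.g. 0.916 for 916 fineness
    weight_g: float          # appraiser-reported weight
    item_type: str           # "jewelry" or "bar" (drives haircut)
    policy_id: str           # Silsilat Registry policy reference
    volatility_30d: float    # Phoenix historical volatility factor

req = EvaluationInput(2400.0, 4.70, 0.916, 10.0, "jewelry", "POL-EXAMPLE-01", 0.12)
```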
Model Architecture
Silsilat employs a hybrid ensemble that combines deterministic policy logic with machine-learning estimation.
| Model Layer | Purpose |
| --- | --- |
| Rule-Based Policy Layer | Enforces regulatory and Shariah constraints. |
| Regression Model | Predicts expected market value based on purity, weight, and FX. |
| Risk Scoring Model | Estimates probability of collateral devaluation or default. |
| Explainability Layer (XAI) | Produces interpretable outputs for compliance and audit. |
Example Output:
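The fields below are a representative shape based on the outputs described in this document (appraisal, haircut, LTV, risk, confidence, explanation); the exact schema and values are illustrative.

```json
{
  "appraisal_value_myr": 3321.97,
  "haircut": 0.15,
  "max_ltv": 0.75,
  "max_loan_myr": 2117.75,
  "risk_score": 0.12,
  "confidence": 0.97,
  "explanation": "Valuation driven by spot price and 916 fineness; haircut applied per registry policy."
}
```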
Observability & Traceability (via Arize Phoenix)
Each AI decision generates a trace event that is logged, visualized, and monitored in Phoenix. This allows Silsilat operators, regulators, and auditors to review model decisions in real time.
Phoenix Tracing Features
- **Input-Output Lineage:** Full record of data sources and transformations.
- **Model Versioning:** Each inference tied to model hash and version ID.
- **Performance Metrics:** Drift, precision, recall, LTV variance, and gold price delta.
- **Explainability View:** SHAP-based factor importance visualization.
- **Alerting:** Automatic anomaly detection if output deviates from expected range.
Example Phoenix Metadata
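A sketch of the trace metadata attached to one inference, combining the lineage, versioning, and metrics fields listed above. Field names, IDs, and hashes are illustrative placeholders.

```json
{
  "trace_id": "trace-a1b2c3d4",
  "model_version": "gold-evaluator-v2.3.1",
  "model_hash": "sha256:9f2c...",
  "inputs": {
    "gold_usd_per_oz": 2400.0,
    "usd_to_myr": 4.70,
    "purity": 0.916,
    "weight_g": 10.0
  },
  "outputs": { "max_ltv": 0.75, "max_loan_myr": 2117.75 },
  "metrics": { "ltv_drift": 0.012, "price_variance_error": 0.004 },
  "timestamp": "2025-01-15T08:30:00Z"
}
```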
Audit & Artifact Storage
Every evaluation produces an IPFS artifact containing:
- Input parameters
- Model version and metadata
- Output (LTV, haircut, appraisal)
- Policy ID and rule set applied
- Confidence and drift scores
The artifact’s CID and SHA256 hash are published to Hedera for immutability and public verification.
Example HCS Message
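A message anchoring one evaluation might carry the artifact CID, its SHA-256 hash, and a compact summary, as described above. The structure and all identifiers here are illustrative, not the on-chain schema.

```json
{
  "type": "AI_EVALUATION",
  "evaluation_id": "eval-000123",
  "artifact_cid": "bafybeigd...",
  "artifact_sha256": "9f2c...",
  "model_version": "gold-evaluator-v2.3.1",
  "policy_id": "POL-EXAMPLE-01",
  "summary": { "max_ltv": 0.75, "max_loan_myr": 2117.75, "confidence": 0.97 },
  "timestamp": "2025-01-15T08:30:05Z"
}
```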
Model Monitoring & Retraining
Continuous Feedback Loop
1. New market data and appraiser feedback collected.
2. Phoenix detects model drift or error spikes.
3. Retraining pipeline triggered (scheduled or on-demand).
4. Updated model deployed with new version hash.
5. Governance Council reviews and signs off on update.
Monitoring Metrics
| Metric | Description | Trigger Threshold |
| --- | --- | --- |
| LTV Drift | Difference between predicted and actual LTV | ±3% |
| Price Variance Error | Deviation from market benchmark | >2% |
| Compliance Fail Rate | % of outputs breaching policy | >0.5% |
| Data Integrity Score | Missing or corrupted inputs | <0.98 confidence |
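The thresholds above can be read as a simple gating rule for the retraining trigger. This is a hedged sketch; the function and field names are illustrative, not the production monitoring pipeline.

```python
# Sketch: map the monitoring metrics to their trigger thresholds.
# Names are illustrative, not the production pipeline.

def breached_metrics(metrics: dict) -> list[str]:
    """Return the names of metrics that breach their trigger thresholds."""
    breaches = []
    if abs(metrics["ltv_drift"]) > 0.03:           # LTV Drift: beyond +/-3%
        breaches.append("ltv_drift")
    if metrics["price_variance_error"] > 0.02:     # Price Variance Error: >2%
        breaches.append("price_variance_error")
    if metrics["compliance_fail_rate"] > 0.005:    # Compliance Fail Rate: >0.5%
        breaches.append("compliance_fail_rate")
    if metrics["data_integrity_score"] < 0.98:     # Data Integrity Score: <0.98
        breaches.append("data_integrity_score")
    return breaches

alerts = breached_metrics({
    "ltv_drift": 0.041,              # over the 3% drift threshold
    "price_variance_error": 0.004,
    "compliance_fail_rate": 0.001,
    "data_integrity_score": 0.995,
})
# alerts == ["ltv_drift"]  ->  retraining pipeline triggered
```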
Governance & Oversight
All AI evaluations and retraining cycles are overseen by the Policy & AI Ethics Committee, comprising:
- Silsilat Data Science Team
- Shariah Compliance Officers
- Regulator Observers
- External Auditor (optional)
Each model update requires:
1. Signed approval from the Governance Council.
2. Publication of the new model version hash on HCS_MODEL_TOPIC_ID.
3. Archive of the old model metadata for trace continuity.
Security, Privacy & Compliance
| Aspect | Implementation |
| --- | --- |
| Data Privacy | No personally identifiable data stored in raw form; anonymized inputs only. |
| Integrity | All inputs hashed and verified before model processing. |
| Auditability | Phoenix + IPFS + HCS ensure immutable end-to-end lineage. |
| Explainability | Model outputs must include rationale for each recommendation. |
| Override Capability | Human administrator may override AI result (HCS_OVERRIDE_TOPIC_ID). |
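The "Integrity" row, hashing inputs before model processing, can be illustrated with canonical JSON and SHA-256. A minimal sketch, assuming the inputs are JSON-serializable; key order is normalized so that equivalent inputs always hash identically.

```python
import hashlib
import json

def canonical_input_hash(inputs: dict) -> str:
    """SHA-256 over canonical JSON: an illustration of pre-processing
    input integrity hashing, not the production implementation."""
    payload = json.dumps(inputs, sort_keys=True, separators=(",", ":")).encode()
    return hashlib.sha256(payload).hexdigest()

digest = canonical_input_hash({"weight_g": 10.0, "purity": 0.916})
# Key order does not affect the digest:
same = canonical_input_hash({"purity": 0.916, "weight_g": 10.0}) == digest
```

Verifying the same digest at evaluation time and again at audit time ties the Phoenix trace, the IPFS artifact, and the HCS anchor to the exact inputs the model saw.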
AI in the Loan Lifecycle
| Stage | AI Function |
| --- | --- |
| Pre-loan | Collateral appraisal and LTV estimation. |
| Active loan | Revaluation monitoring (gold price fluctuation). |
| Compliance | Policy validation and AML risk scoring. |
| Post-loan | Performance feedback for model retraining. |
This creates a living intelligence loop where every transaction improves the model ecosystem.
Summary
The AI Evaluation & Monitoring Module transforms how value, risk, and trust are measured in decentralized finance.
Key Attributes
- Accurate, policy-aware valuations
- Transparent and explainable decisions
- Immutable trace artifacts (IPFS + HCS)
- Continuous model improvement via Arize Phoenix
- Ethical governance and override accountability