An MCP server that integrates the Ergo Blockchain Node and Explorer APIs for checking address balances, analyzing transactions, viewing transaction history, performing forensic analysis of addresses, searching for tokens, and monitoring network status.
A standardization tool for Ergo MCP API responses that transforms various output formats (JSON, Markdown, plaintext) into a consistent JSON structure for improved integration and usability.
The MCP API returns responses in inconsistent formats: some endpoints return JSON, others Markdown or plain text, and some a mix of the three. This inconsistency makes integration with other systems difficult and requires custom handling for each endpoint.
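As a purely illustrative sketch (the endpoint names and Markdown layout below are invented for the example), consuming the raw API directly means writing endpoint-specific parsing such as:

```python
import json

def parse_raw_response(endpoint_name: str, raw: str) -> dict:
    """Hypothetical per-endpoint parsing that callers need without the standardizer."""
    if endpoint_name == "blockchain_status":
        # Suppose this endpoint returns JSON.
        return json.loads(raw)
    if endpoint_name == "address_balance":
        # Suppose this endpoint returns Markdown bullets like "- Confirmed: 12.5 ERG".
        pairs = [line.lstrip("- ").split(":", 1)
                 for line in raw.splitlines() if line.startswith("- ")]
        return {key.strip(): value.strip() for key, value in pairs}
    # Every new endpoint needs another hand-written branch.
    return {"raw_text": raw}
```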
The `MCPResponseStandardizer` transforms all responses into a consistent JSON structure:

```json
{
  "success": true,
  "data": {
    // Standardized response data extracted from the original
  },
  "meta": {
    "format": "json|markdown|text|mixed",
    "endpoint": "endpoint_name",
    "timestamp": "ISO-timestamp"
  }
}
```
For error responses:

```json
{
  "success": false,
  "error": {
    "code": 400,
    "message": "Error message"
  },
  "meta": {
    "format": "json|markdown|text|mixed",
    "endpoint": "endpoint_name",
    "timestamp": "ISO-timestamp"
  }
}
```
```python
from mcp_response_standardizer import MCPResponseStandardizer

# Initialize the standardizer
standardizer = MCPResponseStandardizer()

# Standardize a response
endpoint_name = "blockchain_status"
response_content = "..."  # Content from the MCP API
status_code = 200  # HTTP status code from the API call

# Get standardized response
standardized = standardizer.standardize_response(
    endpoint_name,
    response_content,
    status_code
)

# Access the standardized data
if standardized["success"]:
    data = standardized["data"]
    # Use the standardized data...
else:
    error = standardized["error"]
    print(f"Error {error['code']}: {error['message']}")
```
You can also use the standardizer from the command line:
```bash
python mcp_response_standardizer.py blockchain_status response.txt
```
Where:

- `blockchain_status` is the endpoint name
- `response.txt` is a file containing the response content

A test script, `test_standardizer.py`, is provided to demonstrate the standardizer with sample responses:

```bash
python test_standardizer.py
```

This script runs the standardizer against the sample responses in the `sample_responses` directory.

The standardizer uses the following approach: it detects the response format (JSON, Markdown, plain text, or mixed), extracts the underlying data, and wraps it in the standard structure shown above.
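A minimal sketch of that approach, with illustrative helper names and simplified detection logic (the actual `MCPResponseStandardizer` may differ):

```python
import json
from datetime import datetime, timezone

def detect_format(content: str) -> str:
    """Rough format detection: try JSON first, then look for Markdown markers, else plain text."""
    try:
        json.loads(content)
        return "json"
    except (ValueError, TypeError):
        pass
    if any(marker in content for marker in ("# ", "## ", "|---", "```")):
        return "markdown"
    return "text"

def wrap_response(endpoint_name: str, content: str, status_code: int) -> dict:
    """Wrap the extracted data in the standard envelope described above."""
    fmt = detect_format(content)
    data = json.loads(content) if fmt == "json" else {"text": content}
    envelope = {
        "success": status_code < 400,
        "meta": {
            "format": fmt,
            "endpoint": endpoint_name,
            "timestamp": datetime.now(timezone.utc).isoformat(),
        },
    }
    if envelope["success"]:
        envelope["data"] = data
    else:
        envelope["error"] = {"code": status_code, "message": content}
    return envelope
```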
Ergo Explorer Model Context Protocol (MCP) is a comprehensive server that provides AI assistants with direct access to Ergo blockchain data through a standardized interface.
This project bridges the gap between AI assistants and the Ergo blockchain ecosystem by exposing blockchain data and analysis tools through a standardized MCP interface.
All endpoints in the Ergo Explorer MCP implement a standardized response format system that uses the `@standardize_response` decorator for automatic format conversion. Standardized responses have the form:

```json
{
  "status": "success", // or "error"
  "data": {
    // Endpoint-specific structured data
  },
  "metadata": {
    "execution_time_ms": 123,
    "result_size_bytes": 456,
    "is_truncated": false,
    "token_estimate": 789
  }
}
```
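As an illustrative sketch of what such a decorator might do (this is not the project's actual implementation), it can time the wrapped endpoint and attach the metadata fields shown above:

```python
import functools
import json
import time

def standardize_response(func):
    """Illustrative decorator: wraps an endpoint's return value in the standard envelope."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            data = func(*args, **kwargs)
            status = "success"
        except Exception as exc:  # simplified error handling for the sketch
            data, status = {"message": str(exc)}, "error"
        payload = {"status": status, "data": data}
        body = json.dumps(payload)
        payload["metadata"] = {
            "execution_time_ms": int((time.perf_counter() - start) * 1000),
            "result_size_bytes": len(body.encode("utf-8")),
            "is_truncated": False,
            "token_estimate": len(body) // 4,  # crude 4-chars-per-token heuristic
        }
        return payload
    return wrapper
```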
For more information on response standardization, see RESPONSE_STANDARDIZATION.md.
The Ergo Explorer MCP provides advanced entity identification capabilities through address clustering algorithms. This feature helps identify groups of addresses likely controlled by the same entity.
The following endpoints are available for entity identification:

- `/address_clustering/identify`
- `/address_clustering/visualize`
- `/address_clustering/openwebui_entity_tool`
- `/address_clustering/openwebui_viz_tool`
Ergo Explorer MCP integrates with Open WebUI to provide enhanced visualization and interaction capabilities.
To identify entities related to an address:

```python
from ergo_explorer.api import make_request

# Identify entities for an address
response = make_request("address_clustering/identify", {
    "address": "9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN",
    "depth": 2,
    "tx_limit": 100
})

# Get visualization for an address
viz_response = make_request("address_clustering/visualize", {
    "address": "9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN",
    "depth": 2,
    "tx_limit": 100
})

# Access entity clusters
entities = response["data"]["clusters"]
for entity_id, entity_data in entities.items():
    print(f"Entity {entity_id}: {len(entity_data['addresses'])} addresses")
    print(f"Confidence: {entity_data['confidence_score']}")
```
To use the Open WebUI tools:

```text
[Tool: openwebui_entity_tool]
[Address: 9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN]
[Depth: 2]
[TX Limit: 100]

[Tool: openwebui_viz_tool]
[Address: 9gUDVVx75KyZ783YLECKngb1wy8KVwEfk3byjdfjUyDVAELAPUN]
[Depth: 2]
[TX Limit: 100]
```
The Ergo Explorer MCP includes built-in token estimation capabilities to help AI assistants optimize their context window usage. This feature provides an estimate of the number of tokens in each response for various LLM models.
Token counts are computed with `tiktoken` when it is installed, with a fallback estimate used when `tiktoken` is not available (a minimal sketch of this fallback appears after the example below). Token estimation is included in the `metadata` section of all standardized responses:
```json
{
  "status": "success",
  "data": {
    // Response data
  },
  "metadata": {
    "execution_time_ms": 123,
    "result_size_bytes": 456,
    "is_truncated": false,
    "token_estimate": 789,
    "token_breakdown": {
      "data": 650,
      "metadata": 89,
      "status": 50
    }
  }
}
```
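A minimal sketch of the estimation logic, assuming `tiktoken` is used when available with a rough character-based fallback otherwise (the project's actual estimator may differ):

```python
def estimate_tokens(text: str, model_type: str = "gpt-4") -> int:
    """Count tokens with tiktoken when available; otherwise fall back to a rough heuristic."""
    try:
        import tiktoken
        encoding = tiktoken.encoding_for_model(model_type)
        return len(encoding.encode(text))
    except (ImportError, KeyError):
        # Fallback: roughly 4 characters per token for English-like text.
        return max(1, len(text) // 4)
```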
To access token estimates in responses:

```python
from ergo_explorer.api import make_request

# Make a request to any endpoint
response = make_request("blockchain/status")

# Access token estimation information
token_count = response["metadata"]["token_estimate"]
is_truncated = response["metadata"]["is_truncated"]

print(f"Response contains approximately {token_count} tokens")
if is_truncated:
    print("Response was truncated to fit within token limits")
```
You can specify which LLM model to use for token estimation:

```python
from ergo_explorer.api import make_request

# Request with a specific model type for token estimation
response = make_request(
    "blockchain/address_info",
    {"address": "9hdcMw4eRpJPJGx8RJhvdRgFRsE1URpQCsAWM3wG547gQ9awZgi"},
    model_type="gpt-4"
)

# The token_estimate will be calculated based on GPT-4's tokenization
```
| Response Type | Target Token Range | Optimization Strategy |
|---|---|---|
| Simple queries | < 500 tokens | Full response without truncation |
| Standard queries | 500-2000 tokens | Selective field inclusion |
| Complex queries | 2000-5000 tokens | Pagination or truncated response |
| Data-intensive | > 5000 tokens | Summary with optional detail retrieval |
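For illustration only, a client could map the estimate onto these ranges with a small helper (the function below is hypothetical; the thresholds are taken directly from the table):

```python
def choose_strategy(token_estimate: int) -> str:
    """Map an estimated token count to the optimization strategy from the table above."""
    if token_estimate < 500:
        return "full response without truncation"
    if token_estimate <= 2000:
        return "selective field inclusion"
    if token_estimate <= 5000:
        return "pagination or truncated response"
    return "summary with optional detail retrieval"
```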
The Ergo Explorer MCP includes comprehensive functionality for tracking the historical ownership of tokens and analyzing how distribution changes over time:
```
// Simple request with just essential parameters
GET /token/historical_token_holders
{
  "token_id": "d71693c49a84fbbecd4908c94813b46514b18b67a99952dc1e6e4791556de413",
  "max_transactions": 200
}
```
The response includes a detailed token transfer history and snapshots of the token distribution at various points in time (or block heights).
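Following the pattern of the earlier Python examples, the same request could presumably be issued through `make_request`; note that the `snapshots` key below is an assumption about the response layout used for illustration, not a documented field:

```python
from ergo_explorer.api import make_request

# Fetch historical holder data for a token (same parameters as the request above)
history = make_request("token/historical_token_holders", {
    "token_id": "d71693c49a84fbbecd4908c94813b46514b18b67a99952dc1e6e4791556de413",
    "max_transactions": 200
})

# The response is expected to include distribution snapshots over time;
# "snapshots" is an assumed key name used here for illustration.
snapshots = history["data"].get("snapshots", [])
print(f"Retrieved {len(snapshots)} distribution snapshots")
```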
Clone the repository:

```bash
git clone https://github.com/ergo-mcp/ergo-explorer-mcp.git
cd ergo-explorer-mcp
```

Install dependencies:

```bash
pip install -r requirements.txt
```

Configure your environment:

```bash
# Set up environment variables
export ERGO_EXPLORER_API="https://api.ergoplatform.com/api/v1"
export ERGO_NODE_API="http://your-node-address:9053"  # Optional
export ERGO_NODE_API_KEY="your-api-key"  # Optional
```

Run the MCP server:

```bash
python -m ergo_explorer.server
```
Build the Docker image:

```bash
docker build -t ergo-explorer-mcp .
```

Run the container:

```bash
docker run -d -p 8000:8000 \
  -e ERGO_EXPLORER_API="https://api.ergoplatform.com/api/v1" \
  -e ERGO_NODE_API="http://your-node-address:9053" \
  -e ERGO_NODE_API_KEY="your-api-key" \
  --name ergo-mcp ergo-explorer-mcp
```
To contribute to the project, install the runtime and test dependencies and run the test suite:

```bash
pip install -r requirements.txt
pip install -r requirements.test.txt
pytest
```
For comprehensive documentation, see:
This project is licensed under the MIT License - see the LICENSE file for details.