Step 1: Memetoken Scout Alert System
1️⃣ Perfect the output - scan ALL
2️⃣ Add output numbers, then...
3️⃣ Add subagent numbers (work backwards from the output number!)
4️⃣ Add ACTUAL skills to each subagent
✅ DONE. Copy x4 to Step 3...
What Shall We Build Next?
1. Describe - describe your task
2. Refine - refine the plan
3. SubAgents - review all agents
4. Deploy - deploy your agent
A) SUBAGENT SUMMARY
This subagent retrieves token data from GMGN.ai and filters out tokens that do not meet the specified criteria (liquidity < 100,000, volume < 250,000, age ≥ 24 hours, holders ≤ 300), returning only the eligible tokens.

B) FINAL TASK OUTPUT
A structured list (e.g., JSON or a Python list/dict) of the filtered tokens that meet the specified metrics. At a minimum, each entry should include:
• Contract Address
• Liquidity
• Volume
• Age
• Number of Holders

C) SUBAGENT INPUT
• The GMGN.ai URL where the tokens and their associated metrics can be scraped.
• The filter instructions (liquidity < 100,000, volume < 250,000, age ≥ 24 hours, holders ≤ 300).

E) SUBAGENT TASK SUMMARY
1. Input: the subagent receives two pieces of input:
   • The GMGN.ai URL listing tokens (e.g., https://gmgn.ai/tokens).
   • The filtering instructions (liquidity < 100,000, volume < 250,000, age ≥ 24 hours, holders ≤ 300).
2. Skill #226 – Extract Structured Data From 1x URL
   • DESCRIPTION: Crawls the GMGN.ai URL to gather raw token data.
   • INPUT: the GMGN.ai page URL (e.g., https://gmgn.ai/tokens) and instructions on what to extract (liquidity, volume, age, holders, contract address).
   • OUTPUT: raw (unfiltered) token data in a structured format (text, JSON, or dictionary).
3. Skill #190 – Write or rewrite text based on instructions
   • DESCRIPTION: Takes the raw token data from Skill #226, parses it, and filters out tokens that fail the user's thresholds.
   • INPUT: the raw data (in text/JSON form) and the filter criteria (liquidity < 100,000, volume < 250,000, age ≥ 24 hours, holders ≤ 300).
   • OUTPUT: a final list or JSON of tokens that meet the criteria.
4. Subagent Final Output ([gmgn-filtered-tokens]): the filtered, structured list of tokens that remain after applying all the specified filters.

F) SILOS
SILO 1: Data Extraction
• Step 1 (Skill #226): Extract the relevant metrics for all tokens from the supplied GMGN.ai URL and return them as structured data.
SILO 2: Data Filtering
• Step 2 (Skill #190): From the extracted data, systematically apply the filters (liquidity, volume, age, holders). Any token failing any single criterion is discarded. The filtered tokens are returned as the subagent's final output.
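The Silo 2 filtering step can be sketched in Python. This is a minimal illustration only: the field names (`liquidity`, `volume`, `age_hours`, `holders`, `contract`) are assumed stand-ins for whatever structure Skill #226 actually returns from GMGN.ai, not a confirmed schema.

```python
# Sketch of the Silo 2 filter (Skill #190's job), assuming Skill #226
# returns each token as a dict. Field names are illustrative assumptions.
def filter_tokens(raw_tokens):
    """Keep only tokens meeting all four criteria from the spec:
    liquidity < 100,000; volume < 250,000; age >= 24h; holders <= 300."""
    eligible = []
    for t in raw_tokens:
        if (t["liquidity"] < 100_000
                and t["volume"] < 250_000
                and t["age_hours"] >= 24
                and t["holders"] <= 300):
            eligible.append(t)
    return eligible
```

A token failing any single criterion is dropped, matching the "discard on any failed criterion" rule in Silo 2.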
SubAgent #1 - Diagram
A) SUBAGENT SUMMARY
"Rugcheck Evaluator" reads in a filtered list of tokens, queries each token's contract address on rugcheck.xyz to retrieve its risk rating, then returns only the tokens with a "Good" (or better) rating.

B) FINAL TASK OUTPUT
A refined list of tokens (text/JSON/dict format) whose rugcheck score is strictly "Good" or better. This list is labeled [good-rated-tokens].

C) SUBAGENT INPUT
• A filtered list of tokens with the following data fields for each token:
  – Contract Address
  – Other metadata (e.g., liquidity, volume, age, etc.)
• The subagent specifically reads each token's Contract Address to plug into rugcheck.xyz.

E) SUBAGENT TASK SUMMARY
The step-by-step workflow that "Rugcheck Evaluator" executes:
1) Receive the filtered list of tokens ([filtered-tokens]) from the previous subagent.
2) For each token in [filtered-tokens]:
   a) Use Skill #226 - "Extract Structured Data From 1x URL" to visit rugcheck.xyz, passing the Contract Address in the URL or query parameter.
      – INPUT:
        i. The rugcheck.xyz URL with the token's Contract Address appended (or appropriately passed).
        ii. An instruction to scrape the token's rating/risk (e.g., "Extract the rugcheck score from the page").
      – OUTPUT: a structured snippet (text/JSON/dict) containing the token's rugcheck rating.
   b) Use Skill #190 - "Write or rewrite text based on instructions" to check whether the rating is "Good" (or better). If so, keep the token; otherwise discard it.
      – INPUT:
        i. The rugcheck score.
        ii. The instruction: "Filter out tokens below a 'Good' rating."
      – OUTPUT: the subset of tokens that meet the "Good" threshold.
3) Collate and return the final refined list of tokens with a "Good" (or better) rating ([good-rated-tokens]) for the next steps in the overall pipeline.

F) SILOS
• Silo 1: Data Extraction – the subagent calls Skill #226 (extract structured data from rugcheck.xyz per token).
• Silo 2: Data Processing – the subagent calls Skill #190 (filter out tokens whose rating is below "Good"), then returns the final refined token list ([good-rated-tokens]).
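The rating check can be sketched as follows. Two things here are assumptions, not facts from the source: the rating scale (rugcheck.xyz's real labels may differ) and the `fetch_rating` callable, which merely stands in for the Skill #226 scrape.

```python
# Assumed rating scale, worst to best; rugcheck.xyz's actual labels may differ.
RATING_ORDER = ["Danger", "Warning", "Good", "Excellent"]

def rating_at_least_good(rating):
    """True if the rating is 'Good' or better on the assumed scale."""
    return RATING_ORDER.index(rating) >= RATING_ORDER.index("Good")

def filter_by_rugcheck(tokens, fetch_rating):
    """fetch_rating(contract_address) stands in for the Skill #226 scrape
    of rugcheck.xyz; it is assumed to return one of the RATING_ORDER labels."""
    good = []
    for t in tokens:
        rating = fetch_rating(t["contract"])
        if rating_at_least_good(rating):
            # Carry the rating forward so later stages can show it in alerts.
            good.append({**t, "rugcheck_rating": rating})
    return good
```

Passing the scraper in as a callable keeps the filtering logic testable without any network access.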
SubAgent #2 - Diagram
A) SUBAGENT SUMMARY
"Twitter Inspector" performs a Twitter-focused check for each token, collecting its social media score and top 20 influencers from tweetscout.io, then merges this information with the token's existing data so it can be passed along to the next stage.

B) FINAL TASK OUTPUT
A structured list/array (e.g., JSON) of the tokens (those already filtered to "Good"-rated status by Rugcheck), complete with each token's social media score and top 20 influencers from tweetscout.io. Referred to as [tokens-with-twitter-insights].

C) SUBAGENT INPUT
• A list/array of tokens that have already passed the "Good"-rating check (i.e., [good-rated-tokens]). Each token object should contain at least:
  1) Contract Address (CA)
  2) Twitter handle (or a way to identify the Twitter page)
  3) Other relevant info carried over from earlier subagents

D) SUBAGENT OUTPUT
• A similar list/array ([tokens-with-twitter-insights]) of all tokens from the input list, where each token record now also includes:
  1) The social media score from tweetscout.io
  2) The top 20 influencers who follow the project

E) SUBAGENT TASK SUMMARY
"Twitter Inspector" chains the relevant skills to query tweetscout.io for each token individually, extract the data, and merge it back into the token list:
1. Receive [good-rated-tokens] as the initial input.
2. For each token in [good-rated-tokens]:
   a. Obtain the token's Twitter handle from the token's data.
   b. Use Skill #226 (Extract Structured Data From 1x URL) to visit the tweetscout.io endpoint/URL for that token's Twitter handle, retrieving:
      • The social media score.
      • The top 20 influencers following the project.
   c. Temporarily store the returned data alongside that token's other fields.
3. After all tokens are processed, use Skill #190 (Write or rewrite text based on instructions) to merge each token's newly acquired Twitter metrics into a neat, consistent format.
4. Return the completed collection of tokens ([tokens-with-twitter-insights]) with the updated Twitter data.

F) SILOS
• Silo A: "Fetch Twitter Data"
  – INPUT: a single token object (Contract Address, Twitter handle, etc.)
  – ACTION: Skill #226 extracts structured data from tweetscout.io for the token's Twitter page, obtaining the social media score and top 20 influencers.
  – OUTPUT: the same token object appended with the new Twitter data.
• Silo B: "Combine + Finalize Output"
  – INPUT: the updated token objects (all tokens), each containing the original fields plus the newly fetched Twitter data.
  – ACTION: Skill #190 merges the updated data for each token into a final standardized structure.
  – OUTPUT: the final combined array/list [tokens-with-twitter-insights], with social score and influencer details.
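The enrich-and-merge loop above can be sketched like this. The `fetch_twitter` callable stands in for the Skill #226 scrape of tweetscout.io, and the `score` / `top_influencers` keys are assumed field names, not a documented tweetscout.io response shape.

```python
def merge_twitter_insights(tokens, fetch_twitter):
    """Enrich each token with tweetscout.io data.

    fetch_twitter(handle) stands in for the Skill #226 scrape and is
    assumed to return a dict with 'score' and 'top_influencers' keys."""
    enriched = []
    for t in tokens:
        insights = fetch_twitter(t["twitter_handle"])
        enriched.append({
            **t,
            "social_score": insights["score"],
            # Keep at most the top 20 influencers, per the spec.
            "top_influencers": insights["top_influencers"][:20],
        })
    return enriched
```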
SubAgent #3 - Diagram
A) SUBAGENT SUMMARY
"Telegram Notifier" is responsible for generating the final Python script (as plain text) that:
• Pulls token data from GMGN.ai, filtering by the specified criteria (liquidity, volume, age, holders).
• Queries rugcheck.xyz for a "Good" rating.
• Retrieves the social score and top 20 influencers via tweetscout.io.
• Sends qualifying tokens' contract addresses (and relevant details) to the specified Telegram bot.

B) FINAL TASK OUTPUT
A single .py file (as text) containing the complete Python script (all logic in one script) that executes the above workflow and sends user-friendly alerts to the specified Telegram bot.

C) SUBAGENT INPUT
• The final list of token data (liquidity, volume, age, holders, contract address, rugcheck rating, social score, influencer info).
• Open-ended instructions (e.g., how to structure the code, how to authenticate with Telegram, how to schedule the 60-minute intervals, how to store historical analysis).

D) SUBAGENT TASK SUMMARY
The subagent receives the final token data plus instructions, then uses the following skill to produce the script:
1) Skill #190 (Write or rewrite text based on instructions)
   • INPUT: a detailed prompt instructing the LLM to generate a Python script with:
     – Code to fetch token data from GMGN.ai (#226 usage in the produced script).
     – Code to filter by liquidity < 100,000, volume < 250,000, age ≥ 24 hours, holders ≤ 300.
     – Code to check rugcheck.xyz (#226 usage) for a "Good" rating.
     – Code to retrieve Twitter stats from tweetscout.io for the social score and top influencers (#226 usage).
     – Logic to compile results, apply urgency levels, store historical data for 1 month, limit daily alerts to 100, re-check token metrics, etc.
     – Logic to send the final alerts with relevant metrics to Telegram (https://t.me/GMGN_sol_bot).
   • OUTPUT: the plain text of a single .py script that, when run, handles the entire workflow and notifies you via Telegram.

E) SILOS
This subagent has one main silo:
• Silo 1 (Generate Python Script):
  – Input: the final list of token data + instructions
  – Action: Skill #190, "Write or rewrite text based on instructions", to produce the complete .py script
  – Output: [final-automation-script] (the final .py code in text form)
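One piece the generated script would need is the Telegram delivery itself. The sketch below uses the standard Telegram Bot API `sendMessage` endpoint; the bot token and chat ID are placeholders you must supply for your own bot (the https://t.me/GMGN_sol_bot link alone is not enough to send messages), and the alert layout is only a suggestion.

```python
import urllib.parse
import urllib.request

def format_alert(token):
    """Build a user-friendly alert message for one qualifying token.
    Field names mirror the pipeline's assumed token dict, not a fixed schema."""
    return (
        "New token alert\n"
        f"CA: {token['contract']}\n"
        f"Liquidity: {token['liquidity']:,} | Volume: {token['volume']:,}\n"
        f"Holders: {token['holders']} | Rugcheck: {token['rugcheck_rating']}\n"
        f"Social score: {token['social_score']}"
    )

def send_telegram_alert(bot_token, chat_id, text):
    """POST `text` to the Telegram Bot API sendMessage endpoint.
    bot_token and chat_id are placeholders you must supply."""
    url = f"https://api.telegram.org/bot{bot_token}/sendMessage"
    data = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    with urllib.request.urlopen(urllib.request.Request(url, data=data)) as resp:
        return resp.status == 200
```

Separating the formatter from the sender keeps the message layout testable without touching the network.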
SubAgent #4 - Diagram
A) SUBAGENT SUMMARY
This subagent continuously monitors tokens that have already been alerted on, stores their data for up to one month, and triggers new alerts if their metrics drop below the specified thresholds.

B) FINAL TASK OUTPUT
A text-based alert message (to be sent via the bot) indicating whether any previously alerted token has fallen below the thresholds, along with its updated metrics and a note that the token's data will be stored for only one month.

C) SUBAGENT INPUT
• The list of previously alerted tokens (with metrics, timestamps, and any token identifiers needed for the re-check).
• The thresholds (liquidity < 100,000, volume < 250,000, holders ≤ 300, etc.).
• The schedule or frequency at which to re-check tokens (e.g., every 24 hours).
• The current date/time context, so the subagent can purge data older than one month.

E) SUBAGENT TASK SUMMARY
The step-by-step process for the monitoring-and-notifying workflow. For skill usage, the subagent either reuses a skill like #226 to re-check metrics from websites or calls an internal database read/write routine to compare historical data.
1. Receive the list of previously alerted tokens, each with its relevant metrics (e.g., liquidity, volume, holders, date/time, and Contract Address).
2. For each token, pull updated metrics by calling Skill #226 (Extract Structured Data From 1x URL), or an equivalent local method, on:
   – The GMGN.ai token page (to gather updated liquidity, volume, holder count, etc.).
   – Any additional data sources if needed (e.g., checking the same token on another site to confirm the changes).
3. Compare the newly fetched metrics with the previously stored metrics:
   – If the updated liquidity < 100,000, volume < 250,000, or holders ≤ 300, prepare an alert.
   – If all metrics are still above threshold, do nothing.
4. Use Skill #190 (Write or rewrite text based on instructions) to generate the alert message text:
   – Summarize which metric(s) have fallen below threshold.
   – Indicate that the token was previously alerted but its metrics have changed.
5. (Optional in code) Send this updated alert message to the Telegram bot if any changes are found.
6. (Optional in code) Periodically purge any token data older than one month from local storage. This is an internal housekeeping step (no specific skill required).
7. Subagent final output:
   – If no changes are found, return "No new alerts."
   – If changes are found, return the text-based alert message for each affected token.

F) SILOS
SILO 1: Data Pull and Threshold Comparison
• Input: the token list + thresholds
• Step 1: For each token, read the stored metrics.
• Step 2: Retrieve updated data with Skill #226 from GMGN.ai.
• Step 3: Compare the new metrics with the thresholds.
SILO 2: Alert Generation and Notification
• Input: the list of tokens that have changed, with old vs. new metrics
• Step 4: Use Skill #190 to write the text describing the alert.
• Step 5: Send an alert to Telegram if any metric is below threshold.
SILO 3: Data Housekeeping
• Input: the complete token recordset, with timestamps
• Step 6: Remove historical entries older than 30 days.
• Output: an updated data store ready for future checks.
This completes the subagent flow (commonly referenced as "subagent 5") to monitor, store, and alert on changing token metrics after an initial alert has already been sent.
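The SILO 1 comparison (Steps 1-3) might look like the sketch below. Two assumptions are made beyond the spec: metrics are keyed by contract address in plain dicts, and an alert fires only when a metric newly crosses below its threshold, so tokens already below it are not re-alerted on every cycle.

```python
# Thresholds from the spec; holders is omitted here because the spec's
# holders <= 300 condition is also an original eligibility criterion.
THRESHOLDS = {"liquidity": 100_000, "volume": 250_000}

def detect_degraded(previous, current):
    """Compare stored vs. re-fetched metrics; return one alert note per
    token whose metric newly dropped below its threshold."""
    alerts = []
    for ca, old in previous.items():
        new = current.get(ca)
        if new is None:
            continue  # token no longer listed; skip it this cycle
        drops = []
        for metric, limit in THRESHOLDS.items():
            # Fresh crossing only: below the limit now, at/above it before.
            if new[metric] < limit <= old[metric]:
                drops.append(f"{metric} fell from {old[metric]:,} to {new[metric]:,}")
        if drops:
            alerts.append(f"{ca}: " + "; ".join(drops))
    return alerts
```

The returned notes would feed Step 4, where Skill #190 turns them into the final alert text.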
SubAgent #5 - Diagram
A) SUBAGENT SUMMARY
This subagent continuously monitors tokens that have previously passed all eligibility checks. At regular intervals, it re-fetches the tokens' metrics, compares them against the initial thresholds, and, if any token has dropped below certain thresholds (or otherwise changed significantly), sends a follow-up alert to the Telegram bot, while also managing historical records by discarding data older than 1 month.

B) FINAL TASK OUTPUT
An updated (in-memory or local) dataset containing the latest token metrics, plus any Telegram alerts if a token's updated metrics fall below thresholds. In other words:
• A refreshed state/data structure containing each token's status
• Possible Telegram messages for any tokens that degrade beyond thresholds

C) SUBAGENT INPUT
1. A list of tokens (with their contract addresses and initial metrics) that have previously passed the main eligibility checks.
2. The timestamp of when each token was initially added to the monitoring list (for purging after 1 month).
3. A set of thresholds or conditions that trigger a re-alert (e.g., liquidity, volume, etc.).

E) SUBAGENT TASK SUMMARY (Step-by-Step Flow)
1) Receive the current list of "active monitored tokens" (with contract addresses, stored metrics, and the initial analysis date).
2) For each token, use Skill #226 (Extract Structured Data From 1x URL) to re-scrape new metrics from GMGN.ai (or the relevant source) at the scheduled interval.
   • INPUT to #226:
     – The token-specific page URL on GMGN.ai (or the relevant API endpoint).
     – Instructions to extract the current liquidity, volume, age, holder count, etc.
   • OUTPUT from #226:
     – Structured data with the token's updated metrics (liquidity, volume, age, holder count, etc.).
3) Compare the newly scraped metrics to the original thresholds.
   • Use Skill #190 (Write or rewrite text based on instructions) to parse the numeric fields and evaluate whether a token's updated data has fallen below the required thresholds or changed in a noteworthy way (e.g., "liquidity < 100,000", "volume < 250,000", etc.).
   • If the token still meets or exceeds the thresholds, quietly update the local data store with a "still OK" status.
4) If any token's metrics have dropped below a threshold (or changed significantly for the worse), generate an alert message.
   • Use Skill #190 again to create a descriptive text summarizing the difference (e.g., "Token X liquidity fell from 120,000 to 80,000. This is below the threshold. Updating status.").
5) Send the follow-up alert to the Telegram bot (the same approach as in the main flow, typically done by generating Python code with Skill #190 that performs the Telegram HTTP API call).
   • INPUT to #190:
     – The text of the message to send to the Telegram bot.
     – The authentication token or method for the Telegram bot.
   • OUTPUT from #190:
     – The snippet of Python code responsible for sending the alert. (In the final consolidated script, this is simply executed automatically.)
6) Purge historical data older than 1 month.
   • Use Skill #190 to generate or rewrite the code logic that checks the saved timestamp for each monitored token and, if the token's age in the monitoring system exceeds 1 month, removes it from the data store entirely.
7) Collate all updated tokens (those still active, plus their newly saved metrics) and finalize the updated data structure.
   • This final updated data structure (along with any triggered alerts) is the subagent's final output.

F) SILOS
• SILO 1: Token Metrics Retrieval
  – Steps 1 & 2: Input the existing token list and re-scrape each token's data with Skill #226.
• SILO 2: Comparison & Alert Generation
  – Steps 3 & 4: Use Skill #190 to parse and compare the new metrics with the thresholds and generate alerts for any tokens that degrade below them.
• SILO 3: Telegram Notification
  – Step 5: Use Skill #190 to write the code that sends the newly formed alert message to the Telegram bot.
• SILO 4: Historical Data Management
  – Step 6: Use Skill #190 to purge tokens older than 1 month and update the final data store.
This completes the subagent's actionable workflow.
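The SILO 4 purge in Step 6 can be sketched as follows. Treating "one month" as 30 days is an assumption, and `added_at` is a hypothetical field name for the stored monitoring timestamp.

```python
from datetime import datetime, timedelta

def purge_stale(store, now=None, max_age_days=30):
    """Drop tokens that have been monitored for longer than ~one month.

    `store` maps contract address -> record containing an 'added_at'
    datetime ('added_at' is an assumed field name, and 30 days is an
    assumed reading of the spec's 'one month')."""
    now = now or datetime.now()
    cutoff = now - timedelta(days=max_age_days)
    return {ca: rec for ca, rec in store.items() if rec["added_at"] >= cutoff}
```

Returning a new dict (rather than mutating in place) makes the housekeeping step easy to verify and to re-run safely.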
SubAgent #6 - Diagram