Functionitem Executecommand Automation Webhook – Business Process Automation | Complete n8n Webhook Guide (Intermediate)
This article provides a complete, practical walkthrough of the Functionitem Executecommand Automation Webhook n8n agent. It connects HTTP Request and Webhook across approximately one node. Expect an Intermediate-level setup taking 15-45 minutes. One-time purchase: €29.
What This Agent Does
This agent orchestrates a reliable automation between HTTP Request and Webhook, handling triggers, data enrichment, and delivery with guardrails for errors and rate limits.
It streamlines multi‑step processes that would otherwise require manual exports, spreadsheet cleanup, and repeated API requests. By centralizing logic in n8n, it reduces context switching, lowers error rates, and ensures consistent results across teams.
Typical outcomes include faster lead handoffs, automated notifications, accurate data synchronization, and better visibility via execution logs and optional Slack/Email alerts.
How It Works
The workflow uses standard n8n building blocks like Webhook or Schedule triggers, HTTP Request for API calls, and control nodes (IF, Merge, Set) to validate inputs, branch on conditions, and format outputs. Retries and timeouts improve resilience, while credentials keep secrets safe.
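As a rough illustration, a Code node placed between the Webhook trigger and the HTTP Request node might validate and reshape incoming data before it is sent onward. The field names below (email, name) are assumptions for the sketch, not fields this particular agent requires:

```javascript
// n8n Code node (Run Once for All Items) - a minimal validation/formatting sketch.
// Field names (email, name) are assumptions; adapt them to your actual payload.
const results = [];

for (const item of $input.all()) {
  // The Webhook node normally nests the request payload under `body`.
  const body = item.json.body ?? item.json;

  // Fail fast if required fields are missing, so bad payloads never reach the API call.
  if (!body.email || !body.name) {
    throw new Error(`Missing required fields: ${JSON.stringify(body)}`);
  }

  results.push({
    json: {
      email: String(body.email).trim().toLowerCase(),
      name: String(body.name).trim(),
      receivedAt: new Date().toISOString(),
    },
  });
}

return results;
```

An IF node after this step can then branch on the cleaned fields, and the HTTP Request node can reference them with expressions such as {{ $json.email }}.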
Third‑Party Integrations
- HTTP Request
- Webhook
Import and Use in n8n
- Open n8n and create a new workflow or collection.
- Choose Import from File or Paste JSON.
- Paste the JSON below, then click Import.
Show n8n JSON
Automating the Extraction of SWIFT Codes Using n8n

In an era of data-driven systems, access to accurate banking data such as SWIFT codes is crucial for international transactions, compliance, and financial applications. Traditionally, scraping and structuring this data is time-consuming and error-prone. No-code/low-code platforms like n8n can automate the task seamlessly. This walkthrough explains how an advanced n8n workflow, titled "extract_swifts", automates the extraction of SWIFT codes from theswiftcodes.com, normalizes country data, and persists it in a MongoDB database, all with minimal human intervention.

Overview of the Workflow

The extract_swifts workflow is initiated manually and consists of a series of automated actions:
1. Fetches a list of countries from theswiftcodes.com.
2. Iteratively scrapes SWIFT data for each country (and multiple pages when applicable).
3. Normalizes country information using the uProc geographic API.
4. Stores the extracted data in a MongoDB collection.
5. Caches per-page HTML files locally to enable reuse and retries.

Let's break it down step by step.

Step 1: Triggering and Preparing the Environment

The workflow starts with a Manual Trigger node to allow user-controlled execution. It immediately creates a directory to cache web page responses. This local caching serves two purposes: it avoids unnecessary HTTP requests and provides a backup in case of connection failure.

Step 2: Scraping Country Links

The workflow performs an HTTP request to theswiftcodes.com's "browse-by-country" page, fetching the full HTML of the directory. The HTML Extract node then parses all links to country-specific pages using CSS selectors. These links are transformed into a list of items by a Function node named "Prepare countries".

Step 3: Batch Processing by Country

The list of countries is split into batches of one, ensuring orderly, sequential processing. Each batch proceeds through a set of nodes:
- The country code is normalized using the uProc API to obtain standardized ISO codes.
- The page to scrape next (the first page or a subsequent paginated link) is determined.

Step 4: Fetching, Caching, and Reusing Country Pages

To avoid repeated HTTP requests, every page URL is converted into a safe filename by the "Generate filename" node. If a cached HTML page exists locally, it is read via the Read Binary File node. If not, the workflow:
- Waits 1 second (to avoid overloading the server).
- Downloads the HTML.
- Saves it to the local directory for future use.

Once the local file is ready, it is parsed with HTML Extract1 to gather the core data:
- SWIFT code
- Bank name
- Branch
- City
- Next page link (for pagination)
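As a rough sketch of this caching step, the "Generate filename" node could be implemented in a Code/Function node along these lines; the field names and cache directory are assumptions, not taken from the workflow itself:

```javascript
// Sketch of a "Generate filename" style Code node (field names and cache path are assumed).
// Maps each page URL to a filesystem-safe filename so the page is downloaded at most once.
const results = [];

for (const item of $input.all()) {
  const url = item.json.page; // assumed field carrying the page URL to fetch

  // Replace anything that is not a letter or digit with an underscore.
  const safeName = url.replace(/[^a-zA-Z0-9]/g, '_') + '.html';

  results.push({
    json: {
      ...item.json,
      cacheFile: '/tmp/swifts/' + safeName, // assumed cache directory created in Step 1
    },
  });
}

return results;
```

A downstream check (for example an Execute Command node or the error branch of the Read Binary File node) can then decide whether the cached file exists and skip the HTTP request when it does.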
Step 5: Structuring and Inserting into MongoDB

The extracted arrays are matched by index and wrapped into individual objects in the "Prepare Documents" node. Fields include:
- iso_code (from uProc)
- country (parsed from the URL)
- name, city, branch, swift_code
- page (URL)
- createdAt and updatedAt timestamps

All documents are then inserted into MongoDB by the MongoDB1 node, into the swifts.meetup collection.

Step 6: Handling Pagination and Continuity

Pagination is handled by checking whether there is a "Next" page link. If one exists:
- The workflow updates the global static data to store the next page URL.
- It loops back to scrape the new page.

If no next page exists, the loop ends and processing continues with the next country.

Data Integrity, Modularity, and Optimization

A notable feature of this workflow is its resilience:
- Files are cached locally, so partial runs do not result in data loss or repeated network drain.
- Pagination and country processing are cleanly separated, with checks to move from page to page and country to country.

Using external services like uProc improves the quality of the stored data, ensuring ISO-standard country codes are committed alongside the names visible on the source site.

Third-Party APIs and Services Used

This workflow integrates with:
- uProc API (geographic.getCountryNormalized): used to normalize country names into ISO codes for consistency and downstream ETL processes.
- MongoDB: the target datastore that persists structured SWIFT data (names, cities, branches, SWIFT codes, etc.).

Notably, theswiftcodes.com is the primary source of the raw HTML content; it is accessed not through an official API but by scraping its public pages.

Conclusion

This n8n workflow is a prime example of how no-code tools can orchestrate complex multi-step scraping, parsing, enriching, and storing sequences. By combining tools like uProc and MongoDB, it not only reclaims hours of manual work but also ensures data consistency and reliability. For financial institutions, compliance teams, and developers working in fintech, this automation offers a plug-and-play way to keep SWIFT code databases fresh and accurate. Whether you are launching a banking app, developing compliance workflows, or maintaining a global finance directory, this n8n solution proves that data automation does not have to be hard.

Third-party APIs used:
- uProc (https://uproc.io)
- MongoDB (database accessed via n8n's native MongoDB node)

If you are looking to deploy a similar solution or customize it for your internal use case, n8n's flexibility allows you to adapt and scale this workflow to scrape and structure virtually any type of web data.
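For readers adapting this pattern, the "store the next page URL" hand-off from Step 6 could look roughly like this in an n8n Code node (a sketch; the next_page field name is an assumption):

```javascript
// Sketch of the pagination hand-off from Step 6 (the `next_page` field name is assumed).
// Stores the next page URL in workflow static data so a later IF node can loop back.
const staticData = $getWorkflowStaticData('global');
const current = $input.first().json;

if (current.next_page) {
  // A "Next" link was extracted: remember it and signal the loop to run again.
  staticData.currentPage = current.next_page;
  return [{ json: { hasNextPage: true, nextPage: current.next_page } }];
}

// No further pages: clear the state so the workflow moves on to the next country.
delete staticData.currentPage;
return [{ json: { hasNextPage: false } }];
```

Note that n8n only persists workflow static data for automatic (activated) executions, not manual test runs, so the loop state should be verified with the workflow enabled.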
- Set credentials for each API node (keys, OAuth) in Credentials.
- Run a test via Execute Workflow. Inspect Run Data, then adjust parameters.
- Enable the workflow to run on schedule, webhook, or triggers as configured.
Tips: keep secrets in credentials, add retries and timeouts on HTTP nodes, implement error notifications, and paginate large API fetches.
Validation: use IF/Code nodes to sanitize inputs and guard against empty payloads.
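A minimal sketch of such a guard in a Code node, assuming a couple of required fields that you would replace with your own:

```javascript
// Sketch of an empty-payload guard in an n8n Code node (required field names are assumed).
const requiredFields = ['id', 'email'];

return $input.all().filter((item) => {
  // Webhook payloads usually arrive under `body`; fall back to the item itself otherwise.
  const data = item.json.body ?? item.json;

  // Keep the item only if it is a non-empty object containing every required field.
  return (
    data &&
    typeof data === 'object' &&
    Object.keys(data).length > 0 &&
    requiredFields.every((field) => data[field] !== undefined && data[field] !== '')
  );
});
```

Items filtered out here never reach the downstream nodes; if you prefer explicit failures, throw an error instead and route it to an Error Trigger workflow.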
Why Automate This with AI Agents
AI‑assisted automations offload repetitive, error‑prone tasks to a predictable workflow. Instead of manual copy‑paste and ad‑hoc scripts, your team gets a governed pipeline with versioned state, auditability, and observable runs.
n8n’s node graph makes data flow transparent while AI‑powered enrichment (classification, extraction, summarization) boosts throughput and consistency. Teams reclaim time, reduce operational costs, and standardize best practices without sacrificing flexibility.
Compared to one‑off integrations, an AI agent is easier to extend: swap APIs, add filters, or bolt on notifications without rewriting everything. You get reliability, control, and a faster path from idea to production.
Best Practices
- Credentials: restrict scopes and rotate tokens regularly.
- Resilience: configure retries, timeouts, and backoff for API nodes (see the sketch after this list).
- Data Quality: validate inputs; normalize fields early to reduce downstream branching.
- Performance: batch records and paginate for large datasets.
- Observability: add failure alerts (Email/Slack) and persistent logs for auditing.
- Security: avoid sensitive data in logs; use environment variables and n8n credentials.
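As an illustration of the resilience bullet, when a Code node has to call an API directly (instead of relying on the HTTP Request node's built-in Retry On Fail setting), a small backoff helper can be wrapped around the call. This is a generic sketch with a placeholder URL, and it assumes your n8n runtime exposes the global fetch in the Code node:

```javascript
// Generic retry-with-exponential-backoff sketch for a Code node (placeholder URL, assumes global fetch).
async function fetchWithRetry(url, options = {}, maxAttempts = 3) {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      const response = await fetch(url, options);
      if (!response.ok) throw new Error(`HTTP ${response.status}`);
      return await response.json();
    } catch (error) {
      if (attempt === maxAttempts) throw error; // out of attempts: surface the error
      // Back off 1s, 2s, 4s, ... before retrying.
      await new Promise((resolve) => setTimeout(resolve, 1000 * 2 ** (attempt - 1)));
    }
  }
}

const data = await fetchWithRetry('https://api.example.com/records'); // placeholder endpoint
return [{ json: data }];
```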
FAQs
Can I swap integrations later? Yes. Replace or add nodes and re‑map fields without rebuilding the whole flow.
How do I monitor failures? Use Execution logs and add notifications on the Error Trigger path.
Does it scale? Use queues, batching, and sub‑workflows to split responsibilities and control load.
Is my data safe? Keep secrets in Credentials, restrict token scopes, and review access logs.