Splitout Code Automation Webhook – Business Process Automation | Complete n8n Webhook Guide (Intermediate)
This article provides a complete, practical walkthrough of the Splitout Code Automation Webhook n8n agent. It connects the HTTP Request and Webhook nodes in a single workflow. Expect an Intermediate-level setup taking 15-45 minutes. One‑time purchase: €29.
What This Agent Does
This agent orchestrates a reliable automation between the HTTP Request and Webhook nodes, handling triggers, data enrichment, and delivery with guardrails for errors and rate limits.
It streamlines multi‑step processes that would otherwise require manual exports, spreadsheet cleanup, and repeated API requests. By centralizing logic in n8n, it reduces context switching, lowers error rates, and ensures consistent results across teams.
Typical outcomes include faster lead handoffs, automated notifications, accurate data synchronization, and better visibility via execution logs and optional Slack/Email alerts.
How It Works
The workflow uses standard n8n building blocks like Webhook or Schedule triggers, HTTP Request for API calls, and control nodes (IF, Merge, Set) to validate inputs, branch on conditions, and format outputs. Retries and timeouts improve resilience, while credentials keep secrets safe.
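The retry, timeout, and backoff behavior described above is configured directly on the HTTP Request node, but the underlying pattern is easy to see in plain Python. The sketch below is illustrative only; the callable, parameter names, and backoff schedule are assumptions, not n8n internals:

```python
import time

def fetch_with_retries(fetch, retries=3, backoff_base=2.0, sleep=time.sleep):
    """Call `fetch` with retries and exponential backoff.

    Illustrative stand-in for the retry/timeout options on an n8n
    HTTP Request node; `fetch` would wrap the actual HTTP call.
    """
    for attempt in range(retries):
        try:
            return fetch()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries: let the workflow's error path handle it
            sleep(backoff_base ** attempt)  # wait 1s, 2s, 4s, ...
```

In n8n itself you would enable this via node settings (Retry On Fail and related options) rather than writing code.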
Third‑Party Integrations
- HTTP Request
- Webhook
Import and Use in n8n
- Open n8n and create a new workflow or collection.
- Choose Import from File or Paste JSON.
- Paste the JSON below, then click Import.
Show n8n JSON
Title: Automated Trustpilot Review Analysis with n8n: From Scraping to Insights Using AI

Meta Description: Learn how to build a fully automated pipeline using n8n to scrape Trustpilot reviews, extract key insights via clustering and LLMs, and export the results to Google Sheets. This no-code/low-code workflow uses Qdrant, OpenAI, and LangChain for intelligent feedback analysis.

Keywords: n8n, Trustpilot scraping, customer feedback analysis, sentiment analysis, LLM, OpenAI, Qdrant, LangChain, Google Sheets, no-code automation, vector database, K-means clustering, customer insights, review analysis

Third-party APIs Used:
1. Trustpilot (via HTML scraping with n8n HTTP & HTML nodes)
2. OpenAI API (GPT-4o-mini and embedding models)
3. Google Sheets API (via n8n Google Sheets integration)
4. Qdrant API (vector database for storing and retrieving embeddings)

🚀 Article: From Reviews to Real Insights: Automating Customer Feedback Analysis with n8n and AI

Understanding customer sentiment is no longer a luxury; it's a necessity. And while platforms like Trustpilot give businesses access to raw customer feedback, transforming that data into actionable insights requires a system. That's where n8n shines.

In this article, we'll walk through how a dynamic no-code workflow built in n8n can intelligently scrape Trustpilot reviews, analyze them using clustering and Large Language Models (LLMs), and export neatly organized insights to Google Sheets for easy consumption by stakeholders.

🧠 What You'll Learn:
- How to scrape Trustpilot reviews without writing a single line of code.
- How to store review data in a vector database for semantic analysis.
- How to use OpenAI's GPT models to extract meaningful insights.
- How to automate reporting into a Google Sheets dashboard.

Let's unpack the key steps in this robust workflow.
🔁 Step 1: Start Fresh by Clearing Old Data

Before beginning a new analysis, the workflow ensures there is no redundant data in the Qdrant vector store for the selected company (e.g., freddiesflowers.com). It uses Qdrant's Delete Points API to remove previously stored reviews, ensuring fresh insights every run.

🕸️ Step 2: Scrape Trustpilot Reviews

Using n8n's HTTP Request and HTML Extract nodes, the workflow dynamically scrapes up to the latest three pages of a company's Trustpilot reviews. Key metadata, such as review author, country, rating, date of experience, and review content, is extracted. The HTML node's CSS selectors make it easy to extract structured review data without needing an external scraper or parser.

📦 Step 3: Vectorizing Reviews with Qdrant

Each review's text content is embedded via OpenAI's text-embedding-3-small model. These semantic vectors are then stored in a Qdrant vector database, with additional metadata (e.g., review date, author, rating) attached for future querying and filtering. Qdrant's advanced filtering capabilities let us segment and query these vectors based on metadata, which is essential for tracking sentiment trends over time.

⚙️ Step 4: Trigger Analysis Subworkflow

Once review data is stored, a subworkflow is triggered to analyze the reviews. This structure separates data ingestion from insight generation, making the system easier to maintain and expand.

📆 Step 5: Load Monthly Reviews

The subworkflow initializes a date range (the start and end of the current month) and queries Qdrant for all reviews written in that period using metadata filters. This lets companies generate insights on a rolling monthly basis.

🧮 Step 6: Cluster Reviews Using K-means

The real magic happens here. Using a native Python Code node inside n8n, we apply a K-means clustering algorithm (sklearn) to group similar reviews.
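The Code node in Step 6 relies on sklearn's KMeans. As a dependency-free sketch of the same idea, here is a tiny K-means over plain Python lists, together with the workflow's rule of keeping only clusters of three or more reviews. The function names and fixed iteration count are illustrative choices, not the workflow's exact code:

```python
import random
from collections import Counter

def kmeans(vectors, k, iters=20, seed=42):
    """Tiny K-means over lists of floats; a stdlib-only stand-in for
    sklearn.cluster.KMeans as used in the workflow's Python Code node."""
    rng = random.Random(seed)
    centroids = [list(v) for v in rng.sample(vectors, k)]
    labels = [0] * len(vectors)
    for _ in range(iters):
        # Assignment step: nearest centroid by squared Euclidean distance.
        labels = [
            min(range(k),
                key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])))
            for v in vectors
        ]
        # Update step: move each centroid to the mean of its members.
        for c in range(k):
            members = [v for v, lab in zip(vectors, labels) if lab == c]
            if members:
                centroids[c] = [sum(dim) / len(members) for dim in zip(*members)]
    return labels

def clusters_of_size(labels, min_size=3):
    """Keep only clusters with at least min_size reviews, as the workflow does."""
    counts = Counter(labels)
    return {c for c, n in counts.items() if n >= min_size}
```

In the real workflow the vectors are the OpenAI embeddings pulled from Qdrant, not hand-built lists.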
Each cluster represents a common theme or opinion among reviews, such as delivery issues, pricing concerns, or product quality. Only clusters with three or more reviews are retained to ensure statistical relevance.

📦 Step 7: Fetch Cluster Payloads

With point IDs grouped by cluster, the workflow retrieves each review's full payload from Qdrant. This sets the stage for natural language processing by LLMs.

🤖 Step 8: Generate Insights with OpenAI

An OpenAI GPT-4o-mini model (via LangChain) summarizes each cluster's reviews, providing:
- A short narrative insight
- Overall sentiment (strongly negative to strongly positive)
- Suggested improvements

This transforms unstructured feedback into business-ready intelligence.

📊 Step 9: Export to Google Sheets

Lastly, all insights, including sentiment, advice, raw reviews, and cluster details, are exported into a Google Sheet. This live dashboard allows internal teams to track customer sentiment, identify pain points, and prioritize fixes or features accordingly.

🔧 Bonus Features
- Recursive character text splitting ensures long review segments are processed intelligently.
- Insight generation is modular, making it easy to expand to other platforms beyond Trustpilot.
- The workflow is self-contained yet extensible thanks to modular nodes and reusable subworkflows.

📌 Wrap Up

This n8n workflow is a perfect example of how no-code tools can rival complex data pipelines. It seamlessly combines scraping, vector databases, embedding models, clustering algorithms, and generative AI, all inside a visual builder. Whether you're a product manager looking to improve customer experience, a marketer gauging sentiment, or a founder keeping a finger on the pulse, this workflow delivers high-quality, automated insights from real customers.
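Returning to Step 8: the per-cluster summarization request might be assembled like the sketch below. The workflow's actual LangChain prompt is not shown here, so the wording and field handling are illustrative only:

```python
def build_cluster_prompt(reviews):
    """Assemble a summarization prompt for one cluster of reviews.

    Illustrative only: the workflow sends each cluster to GPT-4o-mini via
    LangChain, and its exact prompt text is not reproduced here.
    """
    numbered = "\n".join(f"{i + 1}. {r.strip()}" for i, r in enumerate(reviews))
    return (
        "You will receive customer reviews that share a common theme.\n"
        "Reply with: (1) a short narrative insight, (2) an overall sentiment "
        "from strongly negative to strongly positive, and (3) suggested "
        "improvements.\n\nReviews:\n" + numbered
    )
```

Numbering the reviews keeps the model's citations unambiguous when a cluster contains near-duplicate feedback.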
📎 Resources:
- GitHub Repo (coming soon)
- Sample Output Sheet: https://docs.google.com/spreadsheets/d/e/2PACX-1vQ6ipJnXWXgr5wlUJnhioNpeYrxaIpsRYZCwN3C-fFXumkbh9TAsA_JzE0kbv7DcGAVIP7az0L46_2P/pubhtml

Join the conversation on the n8n Discord or community forum to share your version of this workflow, or get help making it your own. Happy automating!
- Set credentials for each API node (keys, OAuth) in Credentials.
- Run a test via Execute Workflow. Inspect Run Data, then adjust parameters.
- Enable the workflow to run on schedule, webhook, or triggers as configured.
Tips: keep secrets in credentials, add retries and timeouts on HTTP nodes, implement error notifications, and paginate large API fetches.
Validation: use IF/Code nodes to sanitize inputs and guard against empty payloads.
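A Code-node-style guard for the validation tip above might look like the following sketch. The `text` field name is an assumption for illustration; n8n items do carry their payload under a `json` key:

```python
def sanitize(items):
    """Drop empty payloads and normalize required fields, in the spirit of
    an n8n IF/Code validation step. The `text` field is illustrative."""
    clean = []
    for item in items:
        payload = item.get("json") or {}
        text = (payload.get("text") or "").strip()
        if not text:
            continue  # guard against empty payloads
        clean.append({"json": {**payload, "text": text}})
    return clean
```

Running this early in the flow means downstream branches never have to re-check for missing or whitespace-only input.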
Why Automate This with AI Agents
AI‑assisted automations offload repetitive, error‑prone tasks to a predictable workflow. Instead of manual copy‑paste and ad‑hoc scripts, your team gets a governed pipeline with versioned state, auditability, and observable runs.
n8n’s node graph makes data flow transparent while AI‑powered enrichment (classification, extraction, summarization) boosts throughput and consistency. Teams reclaim time, reduce operational costs, and standardize best practices without sacrificing flexibility.
Compared to one‑off integrations, an AI agent is easier to extend: swap APIs, add filters, or bolt on notifications without rewriting everything. You get reliability, control, and a faster path from idea to production.
Best Practices
- Credentials: restrict scopes and rotate tokens regularly.
- Resilience: configure retries, timeouts, and backoff for API nodes.
- Data Quality: validate inputs; normalize fields early to reduce downstream branching.
- Performance: batch records and paginate for large datasets.
- Observability: add failure alerts (Email/Slack) and persistent logs for auditing.
- Security: avoid sensitive data in logs; use environment variables and n8n credentials.
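The batching and pagination practices above can be sketched as a generic offset-based fetch loop. Here `fetch_page` and its parameters are placeholders for whatever API the HTTP Request node calls, not a specific service's contract:

```python
def paginate(fetch_page, page_size=100):
    """Iterate over a large API result set in fixed-size pages.

    `fetch_page(offset, limit)` is a stand-in for an HTTP Request node
    call; the parameter names are assumptions for illustration.
    """
    offset = 0
    while True:
        page = fetch_page(offset, page_size)
        if not page:
            break
        yield from page
        if len(page) < page_size:
            break  # a short page signals the last one
        offset += page_size
```

In n8n the same effect is achieved with a Loop/SplitInBatches pattern around the HTTP Request node.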
FAQs
Can I swap integrations later? Yes. Replace or add nodes and re‑map fields without rebuilding the whole flow.
How do I monitor failures? Use Execution logs and add notifications on the Error Trigger path.
Does it scale? Use queues, batching, and sub‑workflows to split responsibilities and control load.
Is my data safe? Keep secrets in Credentials, restrict token scopes, and review access logs.