
LM Studio & N8N Tutorial

  • Writer: Muzkkir Husseni
  • 4 days ago
  • 4 min read

Build Your Own AI Chatbot with n8n, LM Studio & Flask: A Fun, Step-by-Step Guide for 2025


Ready to create your own AI chatbot? This 2025 guide combines n8n for automation, LM Studio for local AI models, and Flask for a sleek chat interface. Perfect for beginners or pros, you’ll build a chatbot that answers queries like “What’s a healthy snack?” in just a few steps. With clear code, visuals, and tips, this tutorial makes AI development fun and accessible. Let’s get started!


What You’ll Learn:

  • Install n8n, LM Studio, and Flask.

  • Create an n8n workflow with AI model switching.

  • Build a responsive chat frontend.

  • Test your chatbot locally.

Why It’s Cool: Run a private, customizable AI chatbot on your machine!


1. Essential Tools and Installation

Required Software:

  • LM Studio

    • Obtainable via the official website. 

    • Facilitates local execution of AI models (e.g., Qwen 1.7B).



  • Docker Desktop

    • Downloadable from the Docker homepage.

    • Compatible with Windows, Mac, and Linux systems.

    • Necessary for running n8n in a containerized environment.


  • n8n via Docker

    • Launch Docker.


Start n8n with the following command, which sets the environment variable needed for external access:

docker run -d --name n8n -p 5678:5678 -e N8N_SECURE_COOKIE=false -v ~/.n8n:/home/node/.n8n n8nio/n8n


Running the container with the environment variable below allows n8n to be accessed via its IP address rather than localhost only:

N8N_SECURE_COOKIE=false

Automation workflow webhook URL: http://10.0.0.234:5002/webhook/chat1


2. Configuration of LM Studio

API Preparation:

  • Load the preferred AI model within LM Studio.

  • Set the API endpoint to: http://host.docker.internal:1234/v1

  • This configuration bridges Docker’s network isolation and allows n8n to access LM Studio’s API reliably.
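LM Studio exposes an OpenAI-compatible API, so the request n8n sends can be sketched in a few lines. This is an illustrative sketch, not the exact payload the AI Agent node builds; the model identifier "qwen3-1.7b" is an assumption, so use the exact name shown in LM Studio's model list.

```python
import json

# Endpoint as seen from inside the n8n container (Docker's host alias).
url = "http://host.docker.internal:1234/v1/chat/completions"

# Minimal chat-completions payload; model name is an assumption.
payload = {
    "model": "qwen3-1.7b",
    "messages": [
        {"role": "system", "content": "You're a helpful assistant."},
        {"role": "user", "content": "What's a healthy snack?"},
    ],
    "temperature": 0.7,
}

# Once LM Studio's local server is running, this is roughly what the
# AI Agent node does under the hood:
# requests.post(url, json=payload).json()["choices"][0]["message"]["content"]
print(json.dumps(payload, indent=2))
```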



3. Construction of the n8n Workflow

Workflow Steps:

Add a Webhook node with the endpoint /chat1 (e.g., http://10.0.0.234:5002/webhook/chat1).


Test webhook functionality using:

curl -X POST "http://10.0.0.234:5002/webhook-test/chat1" -H "Content-Type: application/json" -d '{"message": "Hello AI agent, I need your help with something."}'


  • AI Agent Node Integration:

    • Link the AI Agent node to the webhook.

    • Set the API URL to the LM Studio endpoint.

    • Specify the system prompt, such as: “You’re a helpful assistant.”


  • Model Selector Node:

    • Implement conditional logic for model selection (e.g., “Q3” for Qwen 1.7B or “R1” for Deepseek 8B).

    • Apply expressions like: {{ $json.bot === 'Q3' }}



  • Response Node:

    • Finalize workflow by adding a Respond to Webhook node to transmit the AI’s answer to the client.

  • Testing:

    • Activate the workflow.

    • Sample queries (e.g., “healthy snacks”) can be used for initial tests.
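In plain code, the model-selector routing amounts to something like the sketch below. The model identifiers are assumptions for illustration; use the names loaded in your LM Studio instance.

```python
# Sketch of the Switch-node logic: route on the "bot" field from the
# webhook payload, mirroring the n8n expression {{ $json.bot === 'Q3' }}.
MODELS = {
    "Q3": "qwen3-1.7b",       # small and fast
    "R1": "deepseek-r1-8b",   # slower, deeper reasoning
}

def select_model(bot: str) -> str:
    # Unknown selections fall back to the fast default, so the
    # workflow never dead-ends on a bad "bot" value.
    return MODELS.get(bot, MODELS["Q3"])

print(select_model("R1"))
```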



4. Backend Development with Flask

Script Implementation:

  • Save the backend logic as ai_server.py.

  • Features:

    • Serves the main chat page through the / route.

    • Handles /chat POST requests, forwards messages to n8n, processes AI replies, and returns JSON.

  • Launch process:

    • Run the Flask application with: python ai_server.py

  • Consult Flask documentation for further details if needed.




5. Frontend Development (HTML)

Template Configuration:

  • Store the HTML in templates/index.html.

  • Contains:

    • User interface with a model selection dropdown and chat interaction area.

    • CSS for responsive design.

    • JavaScript to post messages to Flask, display a typing indicator (e.g., static/gifs/chatbot.gif), and animate chat exchanges.


from flask import Flask, jsonify, render_template, request
import requests

app = Flask(__name__)

# Single n8n webhook endpoint; the selected model is routed inside the
# workflow via the "bot" field (Q3, R1, ...) sent in the JSON body.
N8N_WEBHOOK_URL = "http://10.0.0.234:5002/webhook/chat1"


@app.route('/')
def home():
    # Serve the chat UI from templates/index.html.
    return render_template('index.html')


@app.route('/chat', methods=['POST'])
def chat():
    message = request.json.get('message')
    if not message:
        return jsonify({"error": "Empty message"}), 400
    bot = request.json.get('bot')
    try:
        response = requests.post(
            N8N_WEBHOOK_URL,
            json={"message": message, "bot": bot},
            headers={"Content-Type": "application/json"}
        )
        if response.status_code == 200:
            n8n_reply = response.json()
            print("+ Response received from n8n:\n", n8n_reply)
            # Drop any <think>...</think> reasoning block before the answer.
            modified_data = n8n_reply[0]['output'].split("</think>")[-1].strip()
            return jsonify({"reply": modified_data})
        return jsonify({"reply": "Something went wrong!"}), 500
    except Exception as e:
        return jsonify({"error": str(e)}), 500


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001, debug=False)
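The split("</think>") line in chat() deserves a note: reasoning models such as Deepseek R1 wrap their chain of thought in <think>...</think> tags before the final answer, so the backend keeps only what follows the closing tag. A slightly more defensive version of that extraction:

```python
def extract_reply(n8n_reply):
    """Pull the assistant's text out of n8n's JSON reply, dropping any
    <think>...</think> reasoning block that models like Deepseek R1 emit."""
    if not n8n_reply:
        return ""
    output = n8n_reply[0].get("output", "")
    # Keep only the text after the closing </think> tag, if one is present.
    return output.split("</think>")[-1].strip()

print(extract_reply([{"output": "<think>reasoning...</think> Apples."}]))
```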

The HTML front page:


<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>AI Chat</title>
<style>
*{box-sizing:border-box;margin:0;padding:0}
body,html{height:100%;width:100%;display:flex;flex-direction:column;font-family:sans-serif;background:#f9fafb;padding:10px}
header{width:100%;background:#0f172a;color:#fff;padding:15px;text-align:center;font-size:1.4rem;font-weight:bold;border-radius:12px;box-shadow:0 2px 4px rgba(0,0,0,0.2);margin-bottom:10px}

/* Dropdown */
#botSelectContainer{display:flex;justify-content:center;margin-bottom:10px}
#botSelect{padding:10px 16px;border-radius:12px;border:1px solid #d1d5db;font-size:1rem;cursor:pointer;outline:none}

/* Chat container */
#chat{flex:1;overflow-y:auto;padding:12px;display:flex;flex-direction:column;gap:12px;width:100%;border-radius:12px;background:#fff;margin-bottom:10px}
.msg{max-width:85%;padding:18px;border-radius:14px;word-wrap:break-word;white-space:pre-wrap;font-size:1.05rem}
.user{align-self:flex-end;background:#2563eb;color:#fff;margin-left:15%;border-bottom-right-radius:6px}
.bot{align-self:flex-start;background:#e5e7eb;color:#111;margin-right:15%;border-bottom-left-radius:6px}

/* Input area */
#inputArea{display:flex;width:100%;gap:8px;padding:12px;background:#fff;border-radius:12px;box-shadow:0 1px 3px rgba(0,0,0,0.1)}
#inp{flex:1;padding:16px;border:1px solid #d1d5db;border-radius:12px;font-size:1rem}
#btn{padding:16px 24px;background:#2563eb;color:#fff;border:none;border-radius:12px;font-size:1rem;cursor:pointer;transition:0.2s}
#btn:hover{background:#1d4ed8}

/* Responsive */
@media(max-width:768px){
.msg{padding:14px;font-size:1rem}
#inp,#btn,#botSelect{padding:12px;font-size:0.95rem}
}
@media(max-width:480px){
.msg{padding:12px;font-size:0.9rem}
#inp,#btn,#botSelect{padding:10px;font-size:0.9rem}
}
</style>
</head>
<body>
<header>Zenith AI</header>

<div id="botSelectContainer" style="display:flex;justify-content:center;margin:10px 0;">
  <select id="botSelect" style="padding:10px 16px;border-radius:12px;border:1px solid #d1d5db;font-size:1rem;cursor:pointer;text-align:center;">
    <option value="Q3" style="text-align:center;">Qwen3 (1.7 Billion) Fast*</option>
    <option value="R1" style="text-align:center;">Deepseek R1 (8 Billion)</option>
    <option value="M8" style="text-align:center;">Meta Llama 3.1 (8 Billion)</option>
    <option value="Q32" style="text-align:center;">Qwen3 (32 Billion) Deepthink*</option>
  </select>
</div>


<div id="chat"></div>

<div id="inputArea">
<input id="inp" placeholder="Type your message..." />
<button id="btn">Send</button>
</div>

<script>
const chat=document.getElementById("chat"),
      inp=document.getElementById("inp"),
      btn=document.getElementById("btn"),
      botSelect=document.getElementById("botSelect");

let selectedBot=botSelect.value;

// Update selected bot when dropdown changes
botSelect.addEventListener("change",()=>{selectedBot=botSelect.value;});

function appendMsg(cls,text){
  const m=document.createElement("div");
  m.className="msg "+cls;
  m.textContent=text;
  chat.appendChild(m);
  chat.scrollTop=chat.scrollHeight;
  return m;
}

function typeWriter(elem,text,speed=15){
  let i=0;
  return new Promise(r=>{
    if(!text){r();return;} // nothing to type
    let t=setInterval(()=>{
      elem.textContent+=text[i];i++;
      if(i>=text.length){clearInterval(t);r();}
    },speed);
  });
}

async function send(){
  let text=inp.value.trim(); if(!text) return;
  appendMsg("user",text); inp.value="";

  // Typing GIF
  const botMsg=document.createElement("div");
  botMsg.className="msg bot";
  const gif=document.createElement("img");
  gif.src="{{ url_for('static', filename='gifs/chatbot.gif') }}"; 
  gif.style.width="50px"; gif.style.height="50px";
  gif.style.display="block"; gif.style.margin="0 auto";
  botMsg.appendChild(gif);
  chat.appendChild(botMsg);
  chat.scrollTop=chat.scrollHeight;

  try{
    let res=await fetch("/chat",{
      method:"POST",
      headers:{"Content-Type":"application/json"},
      body:JSON.stringify({message:text, bot:selectedBot}) // send selected bot
    });
    let data=await res.json();
    botMsg.textContent=""; // remove GIF
    await typeWriter(botMsg,data.reply||"No response");
  }catch(e){botMsg.textContent="⚠️ Error connecting";}
}

btn.onclick=send;
inp.addEventListener("keypress",e=>{if(e.key==="Enter") send();});
</script>
</body>
</html>

File structure:

ai_server.py
templates/
    index.html
static/
    gifs/
        chatbot.gif

Start the Servers



6. Deployment and Testing

Operational Steps:

  • Start LM Studio’s local server.

  • Launch n8n via Docker.

  • Execute the Flask backend script: python ai_server.py


  • Open the interface at http://localhost:5001

  • Select a language model and initiate conversation. Test queries (such as “healthy meals”) are recommended to confirm successful setup.
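Once all three services are up, the end-to-end check can also be scripted instead of typed into the browser. This is a minimal sketch assuming the Flask defaults used above (port 5001); the generous timeout is because local models can be slow on first load.

```python
import requests

def smoke_test(base_url="http://localhost:5001"):
    """Send one query through the Flask /chat endpoint and return the reply.
    Assumes LM Studio, n8n, and ai_server.py are all running locally."""
    resp = requests.post(
        f"{base_url}/chat",
        json={"message": "Suggest some healthy meals.", "bot": "Q3"},
        timeout=120,  # local models can take a while on first load
    )
    resp.raise_for_status()
    return resp.json()["reply"]

if __name__ == "__main__":
    print(smoke_test())
```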




7. Considerations

  • Privacy: Local operation ensures privacy of interactions.

  • Performance: Smaller models offer high speed but limited capabilities. More complex models demand substantial computing resources.

  • Networking: Correct IP configuration is essential; network troubleshooting may be required.

