Getting Your Gemini API Key for the AI Assistant
March 15, 2026

The AI Chat Assistant in MQTTfy is powered by Google's Gemini family of models. To use its features, you need to provide your own Google AI API key. This key is stored locally in your browser and is only sent directly to Google's servers for processing your requests.
This guide will walk you through the process of getting a free API key from Google AI Studio.
Step 1: Visit Google AI Studio
Navigate to ai.google.dev in your web browser. You will need to sign in with your Google account.
Step 2: Create an API Key
- Once you are in Google AI Studio, look for the "Get API key" button. It is usually located in the top left or top right corner of the interface.
- Click on the button. This will take you to the API key generation page.
- You may be prompted to agree to the terms of service.
- Click the "Create API key" button. This will generate a new, unique API key for you.
Step 3: Copy Your API Key
A dialog box will appear showing your newly generated API key. It is a long string of random characters.
This is the only time you will be able to see the full key, so make sure to copy it now. Click the copy icon or manually select and copy the entire string.
Step 4: Add the Key to MQTTfy
- Return to your MQTTfy dashboard.
- Click on the AI Chat Assistant button (the robot icon) at the bottom right of the screen.
- The chat panel will open, prompting you to enter your API key.
- Paste the key you just copied from Google AI Studio into the input field.
- Click "Save API Key".
That's it! Your AI Assistant is now enabled. You can start asking it questions about your dashboard, MQTT concepts, or even ask it to perform simulated data analysis tasks.
The Generative IIoT Revolution: Building AI Agents with Gemini and MQTTfy
Getting a Gemini API key to power the chat assistant in your MQTTfy dashboard is just the first step into a much larger world. The rest of this guide explores the transformative potential of integrating large language models (LLMs) like Gemini directly into your IIoT architecture. We are moving beyond simple chatbots and into the realm of autonomous AI agents that can monitor, analyze, and even control your physical operations through the MQTT protocol. This is the next frontier of OT/IT convergence, where the real-time data streams of the Internet of Things meet the reasoning power of generative AI.
Part 1: Architectural Patterns for Integrating AI with Real-Time MQTT Data
A cloud-based generative AI like Google Gemini cannot directly connect to an MQTT broker. The key is to create a service that acts as a bridge between the two. This service, which we call an AI agent, subscribes to your MQTT data streams, communicates with the Gemini API, and publishes the AI's insights back into your IoT platform.
There are two primary architectural patterns for these agents:
Pattern 1: The "Observer" AI Agent
This is the most common pattern, used for monitoring and analysis. The agent listens to the data firehose from your devices, uses the AI to find insights, and reports back. It does not control anything.
The workflow is as follows:
- Data Ingestion: A device, such as a Modbus PLC gateway, publishes real-time telemetry data (e.g., pressure, vibration, temperature) to a specific topic on your MQTT broker.
- AI Agent Subscription: A backend service (the AI agent), running in the cloud or on an edge server, is subscribed to these telemetry topics, often using a wildcard (e.g., factory/+/telemetry).
- Prompt Engineering & API Call: When the agent receives a message, it formats the data (e.g., a JSON payload) into a carefully constructed prompt for the Gemini API. This prompt gives the AI context, telling it what the data represents and what kind of analysis is needed.
- Publishing Insights: The agent takes the response from the Gemini API, which could be a simple status like "CRITICAL" or a detailed paragraph explaining a potential fault, and publishes it as a new message to an analysis topic (e.g., factory/press-01/analysis).
- Visualization: Your MQTTfy dashboard is subscribed to this analysis topic, allowing you to create widgets that display the AI-generated insights in real-time. This creates a powerful feedback loop for your human operators.
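The core of an observer agent boils down to two small pieces of logic: deciding whether an incoming topic matches the wildcard filter you subscribed with, and wrapping the raw payload in enough context for the model to reason about it. Here is a minimal sketch of both in Python. The function names and prompt wording are illustrative, not part of MQTTfy; a production agent would delegate topic matching to an MQTT client library such as paho-mqtt and send the prompt through the official Gemini SDK.

```python
import json

def topic_matches(pattern: str, topic: str) -> bool:
    """Minimal MQTT topic-filter matching: '+' matches exactly one
    level, '#' (which must be last) matches all remaining levels."""
    p_parts = pattern.split("/")
    t_parts = topic.split("/")
    for i, p in enumerate(p_parts):
        if p == "#":
            return True
        if i >= len(t_parts):
            return False
        if p != "+" and p != t_parts[i]:
            return False
    return len(p_parts) == len(t_parts)

def build_analysis_prompt(topic: str, payload: dict) -> str:
    """Wrap raw telemetry in context so the model knows what it is
    looking at and what form of answer is expected."""
    return (
        "You are an industrial telemetry analyst. "
        f"The following JSON was published on MQTT topic '{topic}'. "
        "Classify the machine state as HEALTHY, DEGRADING, or CRITICAL "
        "and explain your reasoning in one sentence.\n"
        f"Data: {json.dumps(payload)}"
    )
```

With this in place, the agent's message callback simply checks `topic_matches("factory/+/telemetry", msg.topic)`, builds the prompt, calls the Gemini API, and publishes the reply to the matching analysis topic.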
Pattern 2: The "Interactive" AI Agent
This more advanced pattern grants the AI agent the ability to not only observe but also to act. It can respond to natural language commands from users or proactively take control of devices based on its analysis.
This architecture adds a command topic to the mix:
- User Command: A user types a natural language command into the AI chat on the MQTTfy dashboard, for example, "What was the peak pressure on Press-01 in the last hour?"
- Command Publication: The dashboard publishes this text to a command topic, such as factory/ai/commands.
- Intent Parsing: The AI agent, subscribed to this topic, sends the user's text to the Gemini API with a prompt designed to parse the user's intent and extract key parameters (e.g., Intent: get_peak_value, Machine: Press-01, Timespan: 1 hour).
- Action/Response: Based on the parsed intent, the agent can now act. It might query a historical database for the requested data or even publish a new message to a control topic (e.g., factory/press-01/setpoint/cmd) to make a change in the physical world. The results are then published back for the user to see.
This pattern is the foundation for creating a truly conversational IIoT platform, where operators can interact with their machinery using everyday language.
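Because an interactive agent can publish to control topics, the step between "model output" and "MQTT publish" deserves a hard validation layer: the agent should only ever act on intents it explicitly recognizes. The sketch below assumes the Gemini prompt instructs the model to reply with a JSON intent object; the intent names, topic layout, and history-query service are hypothetical placeholders, not MQTTfy features.

```python
import json

# Whitelist of actions this agent is allowed to take on the model's behalf.
ALLOWED_INTENTS = {"get_peak_value", "set_setpoint"}

def dispatch_intent(model_reply: str) -> tuple[str, str]:
    """Validate the intent JSON returned by the model and map it to an
    MQTT (topic, payload) pair to publish. Raises ValueError for any
    intent outside the whitelist, so a hallucinated action is dropped
    instead of reaching the plant floor."""
    intent = json.loads(model_reply)
    name = intent.get("intent")
    if name not in ALLOWED_INTENTS:
        raise ValueError(f"unrecognized intent: {name!r}")
    machine = intent["machine"]
    if name == "get_peak_value":
        # Answer queries by asking a (hypothetical) history service for a max.
        topic = f"factory/{machine}/history/query"
        payload = json.dumps(
            {"metric": intent["metric"], "window": intent["window"], "agg": "max"}
        )
    else:  # set_setpoint: a control action, so coerce the value to a number
        topic = f"factory/{machine}/setpoint/cmd"
        payload = json.dumps({"value": float(intent["value"])})
    return topic, payload
```

The whitelist-plus-exception design is deliberate: when the model returns something unexpected, the safest behavior is to do nothing and report the parse failure back to the user.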
Part 2: In-Depth Use Cases for Gemini-Powered IIoT Agents
Let's move beyond theory and explore concrete, high-value industrial applications that are unlocked by this powerful combination of the MQTT protocol and generative AI.
Use Case A: AI-Powered Predictive Maintenance
Predictive maintenance is the holy grail of industrial efficiency. The goal is to predict equipment failures before they happen, and AI agents are the key.
- Scenario: An AI agent is tasked with monitoring a critical CNC machine. It subscribes to high-frequency data from multiple sensors: machines/cnc-01/vibration, machines/cnc-01/spindle_temp, and machines/cnc-01/power_draw.
- Implementation: Every minute, the agent gathers the last 60 data points from each topic, creating a multi-dimensional time-series snapshot. It sends this data to the Gemini API with a detailed prompt: "You are a master maintenance technician specializing in CNC machines. The following JSON data represents the last minute of high-frequency telemetry. Analyze these interconnected streams for subtle patterns that pre-indicate tool wear, bearing fatigue, or motor strain. Provide a status ('HEALTHY', 'DEGRADING', 'FAILURE_IMMINENT') and a confidence score."
- Value: Instead of just setting simple thresholds, the AI can identify complex, multi-variate correlations that a human or traditional algorithm would miss. The output, visualized on an IoT data visualization tool like the MQTTfy dashboard, gives maintenance teams a powerful early warning system, transforming their workflow from reactive to proactive. This is a core tenet of building a next-generation IIoT architecture.
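The "last 60 data points per topic" snapshot described above maps naturally onto a bounded buffer per sensor topic. A minimal sketch, assuming one sample per second per topic; `windows`, `on_sample`, and `build_maintenance_prompt` are illustrative names, and the Gemini API call itself is left out:

```python
import json
from collections import defaultdict, deque

WINDOW = 60  # keep the last 60 samples per topic, matching the once-a-minute cadence

# One bounded buffer per sensor topic; older samples fall off automatically.
windows = defaultdict(lambda: deque(maxlen=WINDOW))

def on_sample(topic: str, value: float) -> None:
    """Called from the MQTT message callback for each telemetry sample."""
    windows[topic].append(value)

def build_maintenance_prompt() -> str:
    """Serialize every sensor window into one multi-stream snapshot and
    wrap it in the predictive-maintenance prompt from the text."""
    snapshot = {topic: list(vals) for topic, vals in windows.items()}
    return (
        "You are a master maintenance technician specializing in CNC machines. "
        "The following JSON data represents the last minute of high-frequency "
        "telemetry. Analyze these interconnected streams for subtle patterns "
        "that pre-indicate tool wear, bearing fatigue, or motor strain. "
        "Provide a status ('HEALTHY', 'DEGRADING', 'FAILURE_IMMINENT') "
        "and a confidence score.\n"
        f"Data: {json.dumps(snapshot)}"
    )
```

Using `deque(maxlen=60)` means the agent never has to trim the buffers itself; appending the 61st sample silently evicts the oldest one.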
Use Case B: Automated Root Cause Analysis (RCA)
When a machine goes down, the race is on to find out why. An AI agent can perform this RCA automatically and almost instantly.
- Scenario: A packaging line suddenly stops, and an alarm is published to the topic
lines/packaging-03/statuswith the payload"ALARM_STOP". - Implementation: This alarm message is the trigger for our RCA agent. The agent immediately springs into action. It connects to a historical database (or uses retained messages if available) to pull the last 15 minutes of data from all related topics: conveyor speed, sensor states, operator logs, etc. It bundles this entire dataset into a single prompt for Gemini: "An alarm was just triggered on Packaging Line 3. The following is a complete log of all sensor data and operator actions for the 15 minutes prior to the event. Analyze this timeline and provide a step-by-step probable root cause analysis."
- Value: Within seconds, the AI can generate a detailed report and publish it to an incidents/ topic. The factory manager gets a notification on their MQTT dashboard with a clear explanation like: "1. Conveyor speed fluctuated. 2. Photo-eye sensor P-45 began reporting intermittent faults. 3. Box jam occurred at station 4, triggering the alarm. Root Cause: Suspected dirty lens on sensor P-45." This dramatically reduces downtime and provides invaluable data for process improvement.
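The heart of the RCA agent is assembling that 15-minute timeline into one prompt. A sketch under simple assumptions: each logged event is a dict with `ts` (a datetime), `topic`, and `payload` keys, pulled from whatever historian or retained-message store you actually use; the function name and event schema are hypothetical.

```python
import json
from datetime import datetime, timedelta

def build_rca_prompt(events: list[dict], alarm_time: datetime,
                     lookback_min: int = 15) -> str:
    """Select every logged event inside the lookback window before the
    alarm, order it chronologically, and fold it into a single
    root-cause-analysis prompt for the model."""
    cutoff = alarm_time - timedelta(minutes=lookback_min)
    window = [e for e in events if cutoff <= e["ts"] <= alarm_time]
    window.sort(key=lambda e: e["ts"])
    lines = [
        f"{e['ts'].isoformat()} {e['topic']} {json.dumps(e['payload'])}"
        for e in window
    ]
    return (
        "An alarm was just triggered on Packaging Line 3. The following is a "
        f"complete log of all sensor data and operator actions for the "
        f"{lookback_min} minutes prior to the event. Analyze this timeline and "
        "provide a step-by-step probable root cause analysis.\n"
        + "\n".join(lines)
    )
```

Filtering and sorting on the agent side keeps the prompt deterministic and trims irrelevant history before it ever reaches the model, which matters for both cost and answer quality.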
Part 3: Prompt Engineering - The Art of Talking to Your Machines
The effectiveness of any generative AI system depends almost entirely on the quality of the prompt. Giving the AI clear context and a defined role is the key to getting reliable, structured output.
Bad Prompt: "Is this machine okay? Here's the data: {temp: 95}"
This prompt is useless. The AI has no context. Is 95 good or bad? What machine is it?
Excellent Prompt for an AI Agent:
You are an AI assistant for an industrial bakery. Your role is to monitor the proofing ovens. I will provide you with a JSON object representing real-time telemetry. The temperature is in Celsius and should be between 35C and 40C. The humidity is in percent and should be between 80% and 85%. Analyze the following data. If it is within normal operating parameters, respond with a JSON object with status NOMINAL. If any value is outside the acceptable range, respond with a JSON object with status ALARM and a reason explaining which parameter is out of range and by how much. Data: temperature 42.5, humidity 83.1
This prompt is effective because it:
- Assigns a Role: "You are an AI assistant for an industrial bakery..."
- Provides Context: "...monitor the proofing ovens."
- Defines the Data: Explains what temperature and humidity mean and their units.
- Specifies Normal Parameters: Gives the exact acceptable ranges.
- Demands Structured Output: Explicitly tells the AI to respond in a specific JSON format.
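When a prompt demands structured output like this, it is worth keeping a deterministic version of the same rule on the agent side, both to cross-check the model's answer and as a fallback if the API is unavailable. Here is a sketch of the bakery check implemented directly; the function name and exact reason wording are illustrative, but the ranges are the ones the prompt states.

```python
import json

# Acceptable ranges from the prompt: 35-40 C, 80-85 % relative humidity.
RANGES = {"temperature": (35.0, 40.0), "humidity": (80.0, 85.0)}

def check_oven(data: dict) -> str:
    """Produce the same JSON shape the prompt asks the model for, so the
    agent can compare the model's verdict against a hard-coded rule."""
    reasons = []
    for key, (lo, hi) in RANGES.items():
        value = data[key]
        if value < lo:
            reasons.append(f"{key} {value} is {round(lo - value, 2)} below the minimum {lo}")
        elif value > hi:
            reasons.append(f"{key} {value} is {round(value - hi, 2)} above the maximum {hi}")
    if reasons:
        return json.dumps({"status": "ALARM", "reason": "; ".join(reasons)})
    return json.dumps({"status": "NOMINAL"})
```

For the example data in the prompt (temperature 42.5, humidity 83.1), this check flags the temperature as 2.5 degrees above the maximum, which is exactly the kind of answer a well-prompted model should also produce. Where the LLM still earns its keep is in the fuzzier judgments, spotting a slow drift toward a limit, or correlating several borderline readings, that a fixed threshold cannot express.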
Mastering this kind of detailed prompt engineering is the key to unlocking reliable automation and analysis from your AI agents.