feat: Initial commit for 8.0 branch; adds history_json.
10	.gitignore	vendored
@@ -0,0 +1,10 @@
# Docker environment
docker/
z_gen_history_data.sql

# Local docs
QUICKSTART.md
init_extra_users.sql

# Schemas
sql-scripts*/
63	ARCHITECTURE.md	Normal file
@@ -0,0 +1,63 @@
# Zabbix PostgreSQL Partitioning Architecture

This document provides a brief technical overview of the components, logic, and dynamic querying mechanisms that power the PostgreSQL partitioning solution for Zabbix.

## Schema-Agnostic Design

A core architectural principle of this solution is its **schema-agnostic design**. It does not assume that your Zabbix database is installed in the default `public` schema.

When the procedures need to create, drop, or manipulate a partitioned table (e.g., `history`), they do not hardcode the schema. Instead, they dynamically query PostgreSQL's system catalogs (`pg_class` and `pg_namespace`) to locate exactly which schema the target table belongs to:

```sql
SELECT n.nspname INTO v_schema
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = v_table;
```

This allows the partitioning scripts to work even in custom Zabbix deployments where tables are housed in alternative schemas.
## File Structure & Queries

The solution is divided into a series of SQL scripts that must be executed sequentially to set up the environment.

### 1. `00_partitions_init.sql`
* **Purpose:** Initializes the foundation for the partitioning system.
* **Actions:**
    * Creates the isolated `partitions` schema to keep everything separate from Zabbix's own structure.
    * Creates the `partitions.config` table (which stores retention policies).
    * Creates the `partitions.version` table for tracking the installed version.
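For example, a retention policy can be tuned after installation by editing that table's row in `partitions.config` (column names as defined in the script; the values here are illustrative):

```sql
-- Keep 90 days of raw history for the 'history' table and
-- maintain 7 pre-created future daily partitions as a buffer.
UPDATE partitions.config
SET keep_history = '90 days',
    future_partitions = 7
WHERE table_name = 'history';
```

The change takes effect on the next maintenance run.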
### 2. `01_auditlog_prep.sql`
* **Purpose:** Prepares the Zabbix `auditlog` table for partitioning.
* **Actions:**
    * PostgreSQL range partitioning requires the partition key (in this case, `clock`) to be part of the Primary Key.
    * This script dynamically locates the existing Primary Key (usually just `auditid`) and alters it to a composite key `(auditid, clock)`.
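After the script runs, the new composite key can be verified with a read-only catalog query (the constraint lookup mirrors the check the script itself performs):

```sql
SELECT conname, pg_get_constraintdef(oid)
FROM pg_constraint
WHERE conrelid = 'auditlog'::regclass
  AND contype = 'p';
```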
### 3. `02_maintenance.sql`
* **Purpose:** Contains the core PL/pgSQL procedural logic that manages the lifecycle of the partitions.
* **Key Functions/Procedures:**
    * `partition_exists()`: Queries `pg_class` to verify if a specific child partition exists.
    * `create_partition()`: Executes the DDL `CREATE TABLE ... PARTITION OF ... FOR VALUES FROM (x) TO (y)` to generate a new time-bound chunk.
    * `drop_old_partitions()`: Iterates over existing child partitions (using `pg_inherits`) and calculates their age based on their suffix. Drops those older than the defined `keep_history` policy.
    * `maintain_table()`: The orchestrator for a single table. It calculates the necessary UTC timestamps, calls `create_partition()` to build the future buffer, calls `create_partition()` iteratively backward to cover the retention period, and finally calls `drop_old_partitions()`.
    * `run_maintenance()`: The global loop that iterates through `partitions.config` and triggers `maintain_table()` for every configured Zabbix table.
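These procedures can also be invoked manually for a single table, which is useful when testing a new retention policy (argument values here are illustrative):

```sql
-- Maintain daily partitions for 'history': 30 days of retention,
-- plus 5 pre-created future partitions.
CALL partitions.maintain_table('history', 'day', '30 days', 5);

-- Or process every table registered in partitions.config:
CALL partitions.run_maintenance();
```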
### 4. `03_enable_partitioning.sql`
* **Purpose:** The migration script that actually executes the partition conversion on the live database.
* **Actions:**
    * It takes the original Zabbix table (e.g., `history`) and renames it to `history_old` (`ALTER TABLE ... RENAME TO ...`).
    * It immediately creates a new partitioned table with the original name, inheriting the exact structure of the old table (`CREATE TABLE ... (LIKE ... INCLUDING ALL) PARTITION BY RANGE (clock)`).
    * It triggers the first maintenance run so new incoming data has immediate partitions to land in.
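Old rows stay behind in the renamed `*_old` table. If they need to be preserved, they can be backfilled once partitions covering their time range exist; a sketch based on the optional migration step the script leaves commented out:

```sql
-- Copy rows from the renamed table into the partitioned one,
-- then drop the original only after the copy is verified.
INSERT INTO history SELECT * FROM history_old;
-- DROP TABLE history_old;
```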
### 5. `04_monitoring_view.sql`
* **Purpose:** Provides an easy-to-read observability layer.
* **Actions:**
    * Creates the `partitions.monitoring` view by joining `partitions.config`, `pg_class`, and `pg_inherits`, formatting sizes with `pg_size_pretty`.
    * This view aggregates the total size of each partitioned family and calculates how many "future partitions" exist as a safety buffer.
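Checking partition health then becomes a single query against the view:

```sql
-- One row per partitioned table: counts, future buffer, total size.
SELECT parent_table, partition_count, future_partitions, total_size
FROM partitions.monitoring
ORDER BY parent_table;
```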
## Automated Scheduling (`pg_cron`)

While `systemd` timers or standard `cron` can be used to trigger the maintenance, the recommended approach (especially for AWS RDS/Aurora deployments) is using the `pg_cron` database extension.

`pg_cron` allows you to schedule the `CALL partitions.run_maintenance();` procedure directly within PostgreSQL, ensuring the database autonomously manages its own housekeeping without requiring external OS-level access or triggers.
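A minimal schedule could look like the following; this assumes the `pg_cron` extension is available, and the job name and nightly time are illustrative choices:

```sql
CREATE EXTENSION IF NOT EXISTS pg_cron;

-- Run partition maintenance every night at 01:00.
SELECT cron.schedule(
    'zabbix-partition-maintenance',
    '0 1 * * *',
    'CALL partitions.run_maintenance()'
);
```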
@@ -1,52 +0,0 @@
# Quickstart (PostgreSQL Partitioning Test)

## Start Environment

> **Note**: If `docker` commands fail with permission errors, run `newgrp docker` or ensure your user is in the `docker` group (`sudo usermod -aG docker $USER`) and log out/in.

```bash
cd postgresql/docker
sudo ./run_test_env.sh --pg 16 --zabbix 7.0
# Options: --pg <16|17|18> --zabbix <7.0|7.4>
```

## Verify

```bash
# Check status
docker ps

# SQL Shell
docker exec -it zabbix-db-test psql -U zabbix -d zabbix
# Password: zabbix
```

## Reset

```bash
docker compose down -v
```

## Partitioning

See [PARTITIONING.md](../PARTITIONING.md) for details on the implemented declarative partitioning.

## 🐳 Docker Deployment (Production)

The `run_test_env.sh` script automatically populates `init_scripts` for the test environment. To deploy this in your own Docker setup:

1. **Mount Scripts**: Map the SQL procedures to `/docker-entrypoint-initdb.d/` in your PostgreSQL container.
2. **Order Matters**: Scripts execute alphabetically. Ensure they run **after** the Zabbix schema import.

**Example `docker-compose.yml` snippet:**

```yaml
services:
  postgres-server:
    image: postgres:16
    volumes:
      # Mount Zabbix Schema first (e.g., as 01_schema.sql)
      - ./zabbix_schema.sql:/docker-entrypoint-initdb.d/01_schema.sql

      # Mount Partitioning Procedures (prefixed to run AFTER schema)
      - ../postgresql/procedures/00_partitions_init.sql:/docker-entrypoint-initdb.d/02_00_part_init.sql
      - ../postgresql/procedures/01_auditlog_prep.sql:/docker-entrypoint-initdb.d/02_01_audit_prep.sql
      - ../postgresql/procedures/02_maintenance.sql:/docker-entrypoint-initdb.d/02_02_maintenance.sql
      - ../postgresql/procedures/03_enable_partitioning.sql:/docker-entrypoint-initdb.d/02_03_enable.sql
      - ../postgresql/procedures/04_monitoring_view.sql:/docker-entrypoint-initdb.d/02_04_monitor.sql
```

The container will automatically execute these scripts on first startup, partitioning the tables.
@@ -1,20 +0,0 @@
services:
  postgres:
    image: postgres:${PG_VERSION}
    container_name: zabbix-db-test
    environment:
      POSTGRES_PASSWORD: zabbix
      POSTGRES_USER: zabbix
      POSTGRES_DB: zabbix
      PGDATA: /var/lib/postgresql/data/pgdata
    ports:
      - "5432:5432"
    volumes:
      - ./init_scripts:/docker-entrypoint-initdb.d
    tmpfs:
      - /var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U zabbix"]
      interval: 5s
      timeout: 5s
      retries: 5
@@ -1,5 +0,0 @@
-- Create additional user for partitioning tasks
CREATE USER zbx_part WITH PASSWORD 'zbx_part';
GRANT CONNECT ON DATABASE zabbix TO zbx_part;
-- Grant usage on public schema (standard for PG 15+)
GRANT USAGE ON SCHEMA public TO zbx_part;
File diff suppressed because it is too large
@@ -1,49 +0,0 @@
-- ============================================================================
-- SCRIPT: 00_partitions_init.sql
-- DESCRIPTION: Creates the 'partitions' schema and configuration table.
--              Defines the structure for managing Zabbix partitioning.
-- ============================================================================

CREATE SCHEMA IF NOT EXISTS partitions;

-- Configuration table to store partitioning settings per table
CREATE TABLE IF NOT EXISTS partitions.config (
    table_name text NOT NULL,
    period text NOT NULL CHECK (period IN ('day', 'week', 'month', 'year')),
    keep_history interval NOT NULL,
    future_partitions integer NOT NULL DEFAULT 5,
    last_updated timestamp with time zone DEFAULT (now() AT TIME ZONE 'UTC'),
    PRIMARY KEY (table_name)
);

-- Table to track installed version of the partitioning solution
CREATE TABLE IF NOT EXISTS partitions.version (
    version text PRIMARY KEY,
    installed_at timestamp with time zone DEFAULT (now() AT TIME ZONE 'UTC'),
    description text
);

-- Set initial version
INSERT INTO partitions.version (version, description) VALUES ('1.0', 'Initial release')
ON CONFLICT (version) DO NOTHING;

-- Default configuration for Zabbix tables (adjust as needed)
-- History tables: Daily partitions, keep 30 days
INSERT INTO partitions.config (table_name, period, keep_history) VALUES
    ('history', 'day', '30 days'),
    ('history_uint', 'day', '30 days'),
    ('history_str', 'day', '30 days'),
    ('history_log', 'day', '30 days'),
    ('history_text', 'day', '30 days')
ON CONFLICT (table_name) DO NOTHING;

-- Trends tables: Monthly partitions, keep 12 months
INSERT INTO partitions.config (table_name, period, keep_history) VALUES
    ('trends', 'month', '12 months'),
    ('trends_uint', 'month', '12 months')
ON CONFLICT (table_name) DO NOTHING;

-- Auditlog: Monthly partitions, keep 12 months
INSERT INTO partitions.config (table_name, period, keep_history) VALUES
    ('auditlog', 'month', '12 months')
ON CONFLICT (table_name) DO NOTHING;
@@ -1,27 +0,0 @@
-- ============================================================================
-- SCRIPT: 01_auditlog_prep.sql
-- DESCRIPTION: Modifies the 'auditlog' table Primary Key to include 'clock'.
--              This is REQUIRED for range partitioning by 'clock'.
-- ============================================================================

DO $$
BEGIN
    -- Check if PK needs modification
    -- Original PK is typically on (auditid) named 'auditlog_pkey'
    IF EXISTS (
        SELECT 1 FROM pg_constraint
        WHERE conname = 'auditlog_pkey'
        AND conrelid = 'auditlog'::regclass
    ) THEN
        -- Verify if 'clock' is already in the PK (basic check).
        -- Realistically, if 'auditlog_pkey' exists on a default Zabbix schema, it is just (auditid).

        RAISE NOTICE 'Dropping existing Primary Key on auditlog...';
        ALTER TABLE auditlog DROP CONSTRAINT auditlog_pkey;

        RAISE NOTICE 'Creating new Primary Key on auditlog (auditid, clock)...';
        ALTER TABLE auditlog ADD PRIMARY KEY (auditid, clock);
    ELSE
        RAISE NOTICE 'Constraint auditlog_pkey not found. Skipping or already modified.';
    END IF;
END $$;
@@ -1,183 +0,0 @@
-- ============================================================================
-- SCRIPT: 02_maintenance.sql
-- DESCRIPTION: Core functions for Zabbix partitioning (Create, Drop, Maintain).
-- ============================================================================

-- Function to check if a partition exists
CREATE OR REPLACE FUNCTION partitions.partition_exists(p_partition_name text)
RETURNS boolean AS $$
BEGIN
    RETURN EXISTS (
        SELECT 1 FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relname = p_partition_name
        AND n.nspname = 'public'
    );
END;
$$ LANGUAGE plpgsql;

-- Procedure to create a partition
CREATE OR REPLACE PROCEDURE partitions.create_partition(
    p_parent_table text,
    p_start_time timestamp with time zone,
    p_end_time timestamp with time zone,
    p_period text
) LANGUAGE plpgsql AS $$
DECLARE
    v_partition_name text;
    v_start_ts bigint;
    v_end_ts bigint;
    v_suffix text;
BEGIN
    -- No time conversion needed here: the passed parameters are already UTC-adjusted by the caller
    v_start_ts := extract(epoch from p_start_time)::bigint;
    v_end_ts := extract(epoch from p_end_time)::bigint;

    IF p_period = 'month' THEN
        v_suffix := to_char(p_start_time, 'YYYYMM');
    ELSE
        v_suffix := to_char(p_start_time, 'YYYYMMDD');
    END IF;

    v_partition_name := p_parent_table || '_p' || v_suffix;

    IF NOT partitions.partition_exists(v_partition_name) THEN
        EXECUTE format(
            'CREATE TABLE public.%I PARTITION OF public.%I FOR VALUES FROM (%s) TO (%s)',
            v_partition_name, p_parent_table, v_start_ts, v_end_ts
        );
    END IF;
END;
$$;

-- Procedure to drop old partitions
CREATE OR REPLACE PROCEDURE partitions.drop_old_partitions(
    p_parent_table text,
    p_retention interval,
    p_period text
) LANGUAGE plpgsql AS $$
DECLARE
    v_cutoff_ts bigint;
    v_partition record;
    v_partition_date timestamp with time zone;
    v_suffix text;
BEGIN
    -- Calculate cutoff timestamp
    v_cutoff_ts := extract(epoch from (now() - p_retention))::bigint;

    FOR v_partition IN
        SELECT
            child.relname AS partition_name
        FROM pg_inherits
        JOIN pg_class parent ON pg_inherits.inhparent = parent.oid
        JOIN pg_class child ON pg_inherits.inhrelid = child.oid
        WHERE parent.relname = p_parent_table
    LOOP
        -- Parse partition suffix to determine age
        -- Format: parent_pYYYYMM or parent_pYYYYMMDD
        v_suffix := substring(v_partition.partition_name from length(p_parent_table) + 3);

        BEGIN
            IF length(v_suffix) = 6 THEN -- YYYYMM
                v_partition_date := to_timestamp(v_suffix || '01', 'YYYYMMDD') AT TIME ZONE 'UTC';
                -- Compare the END of the month against the cutoff, so a partition
                -- is only dropped once its entire range is past retention.
                IF extract(epoch from (v_partition_date + '1 month'::interval)) < v_cutoff_ts THEN
                    RAISE NOTICE 'Dropping old partition %', v_partition.partition_name;
                    EXECUTE format('DROP TABLE public.%I', v_partition.partition_name);
                    COMMIT; -- Release lock immediately
                END IF;
            ELSIF length(v_suffix) = 8 THEN -- YYYYMMDD
                v_partition_date := to_timestamp(v_suffix, 'YYYYMMDD') AT TIME ZONE 'UTC';
                IF extract(epoch from (v_partition_date + '1 day'::interval)) < v_cutoff_ts THEN
                    RAISE NOTICE 'Dropping old partition %', v_partition.partition_name;
                    EXECUTE format('DROP TABLE public.%I', v_partition.partition_name);
                    COMMIT; -- Release lock immediately
                END IF;
            END IF;
        EXCEPTION WHEN OTHERS THEN
            -- Ignore parsing errors for non-standard partitions
            NULL;
        END;
    END LOOP;
END;
$$;

-- MAIN procedure to maintain a single table
CREATE OR REPLACE PROCEDURE partitions.maintain_table(
    p_table_name text,
    p_period text,
    p_keep_history interval,
    p_future_partitions integer DEFAULT 5
) LANGUAGE plpgsql AS $$
DECLARE
    v_start_time timestamp with time zone;
    v_period_interval interval;
    i integer;
    v_past_iterations integer;
BEGIN
    IF p_period = 'day' THEN
        v_period_interval := '1 day'::interval;
        v_start_time := date_trunc('day', now() AT TIME ZONE 'UTC');
        -- Calculate how many past days cover the retention period (86400 seconds = 1 day)
        v_past_iterations := ceil(extract(epoch from p_keep_history) / 86400)::integer;

    ELSIF p_period = 'week' THEN
        v_period_interval := '1 week'::interval;
        v_start_time := date_trunc('week', now() AT TIME ZONE 'UTC');
        -- 604800 seconds = 1 week
        v_past_iterations := ceil(extract(epoch from p_keep_history) / 604800)::integer;

    ELSIF p_period = 'month' THEN
        v_period_interval := '1 month'::interval;
        v_start_time := date_trunc('month', now() AT TIME ZONE 'UTC');
        -- Approximate 30 days per month (2592000 seconds)
        v_past_iterations := ceil(extract(epoch from p_keep_history) / 2592000)::integer;
    ELSE
        RETURN;
    END IF;

    -- 1. Create Future Partitions (Current + Buffer)
    FOR i IN 0..p_future_partitions LOOP
        CALL partitions.create_partition(
            p_table_name,
            v_start_time + (i * v_period_interval),
            v_start_time + ((i + 1) * v_period_interval),
            p_period
        );
        COMMIT; -- Release lock immediately
    END LOOP;

    -- 2. Create Past Partitions (Covering retention period)
    IF v_past_iterations > 0 THEN
        FOR i IN 1..v_past_iterations LOOP
            CALL partitions.create_partition(
                p_table_name,
                v_start_time - (i * v_period_interval),
                v_start_time - ((i - 1) * v_period_interval),
                p_period
            );
            COMMIT; -- Release lock immediately
        END LOOP;
    END IF;

    -- 3. Drop Old Partitions
    CALL partitions.drop_old_partitions(p_table_name, p_keep_history, p_period);

    -- 4. Update Metadata
    UPDATE partitions.config SET last_updated = now() WHERE table_name = p_table_name;
END;
$$;

-- Global Maintenance Procedure
CREATE OR REPLACE PROCEDURE partitions.run_maintenance()
LANGUAGE plpgsql AS $$
DECLARE
    v_row record;
BEGIN
    FOR v_row IN SELECT * FROM partitions.config LOOP
        CALL partitions.maintain_table(v_row.table_name, v_row.period, v_row.keep_history, v_row.future_partitions);
    END LOOP;
END;
$$;
@@ -1,43 +0,0 @@
-- ============================================================================
-- SCRIPT: 03_enable_partitioning.sql
-- DESCRIPTION: Converts standard Zabbix tables to Partitioned tables.
-- WARNING: This renames existing tables to *_old.
-- ============================================================================

DO $$
DECLARE
    v_row record;
    v_table text;
    v_old_table text;
    v_pk_sql text;
BEGIN
    FOR v_row IN SELECT * FROM partitions.config LOOP
        v_table := v_row.table_name;
        v_old_table := v_table || '_old';

        -- Check if table exists and is NOT already partitioned
        IF EXISTS (SELECT 1 FROM pg_class WHERE relname = v_table AND relkind = 'r') THEN
            RAISE NOTICE 'Converting table % to partitioned table...', v_table;

            -- 1. Rename existing table
            EXECUTE format('ALTER TABLE public.%I RENAME TO %I', v_table, v_old_table);

            -- 2. Create new partitioned table (copying structure)
            EXECUTE format('CREATE TABLE public.%I (LIKE public.%I INCLUDING ALL) PARTITION BY RANGE (clock)', v_table, v_old_table);

            -- 3. Create initial partitions
            RAISE NOTICE 'Creating initial partitions for %...', v_table;
            CALL partitions.maintain_table(v_table, v_row.period, v_row.keep_history, v_row.future_partitions);

            -- Optional: Migrate existing data
            -- EXECUTE format('INSERT INTO public.%I SELECT * FROM public.%I', v_table, v_old_table);

        ELSIF EXISTS (SELECT 1 FROM pg_class WHERE relname = v_table AND relkind = 'p') THEN
            RAISE NOTICE 'Table % is already partitioned. Skipping conversion.', v_table;
            -- Just run maintenance to ensure partitions exist
            CALL partitions.run_maintenance();
        ELSE
            RAISE WARNING 'Table % not found!', v_table;
        END IF;
    END LOOP;
END $$;
@@ -1,28 +0,0 @@
-- ============================================================================
-- SCRIPT: 04_monitoring_view.sql
-- DESCRIPTION: Creates a view to monitor partition status and sizes.
-- ============================================================================

CREATE OR REPLACE VIEW partitions.monitoring AS
SELECT
    parent.relname AS parent_table,
    c.table_name,
    c.period,
    c.keep_history,
    count(child.relname) AS partition_count,
    count(child.relname) FILTER (
        WHERE
            (c.period = 'day' AND child.relname > (parent.relname || '_p' || to_char(now(), 'YYYYMMDD')))
            OR
            (c.period = 'month' AND child.relname > (parent.relname || '_p' || to_char(now(), 'YYYYMM')))
    ) AS future_partitions,
    pg_size_pretty(sum(pg_total_relation_size(child.oid))) AS total_size,
    min(child.relname) AS oldest_partition,
    max(child.relname) AS newest_partition,
    c.last_updated
FROM partitions.config c
JOIN pg_class parent ON parent.relname = c.table_name
LEFT JOIN pg_inherits ON pg_inherits.inhparent = parent.oid
LEFT JOIN pg_class child ON pg_inherits.inhrelid = child.oid
WHERE parent.relkind = 'p' -- Only partitioned tables
GROUP BY parent.relname, c.table_name, c.period, c.keep_history, c.last_updated;
File diff suppressed because one or more lines are too long
File diff suppressed because it is too large
@@ -1,91 +0,0 @@
-- ============================================================================
-- SCRIPT: z_gen_history_data.sql
-- DESCRIPTION: Generates mock data for Zabbix history and trends tables.
--              Creates a dummy host and items if they don't exist.
-- ============================================================================

DO $$
DECLARE
    v_hostid bigint := 900001;
    v_groupid bigint := 900001;
    v_interfaceid bigint := 900001;
    v_itemid_start bigint := 900001;
    v_start_time integer := extract(epoch from (now() - interval '7 days'))::integer;
    v_end_time integer := extract(epoch from now())::integer;
    i integer;
BEGIN
    -- 1. CREATE DUMMY STRUCTURES
    -- Host Group
    INSERT INTO hstgrp (groupid, name, uuid, type)
    VALUES (v_groupid, 'Partition Test Group', 'df77189c49034553999973d8e0500001', 0)
    ON CONFLICT DO NOTHING;

    -- Host
    INSERT INTO hosts (hostid, host, name, status, uuid)
    VALUES (v_hostid, 'partition-test-host', 'Partition Test Host', 0, 'df77189c49034553999973d8e0500002')
    ON CONFLICT DO NOTHING;

    -- Interface
    INSERT INTO interface (interfaceid, hostid, main, type, useip, ip, dns, port)
    VALUES (v_interfaceid, v_hostid, 1, 1, 1, '127.0.0.1', '', '10050')
    ON CONFLICT DO NOTHING;

    -- 2. CREATE DUMMY ITEMS AND GENERATE HISTORY

    -- Item 1: Numeric Float (HISTORY)
    INSERT INTO items (itemid, hostid, interfaceid, name, key_, type, value_type, delay, uuid)
    VALUES (v_itemid_start + 1, v_hostid, v_interfaceid, 'Test Float Item', 'test.float', 0, 0, '1m', 'df77189c49034553999973d8e0500003');

    INSERT INTO history (itemid, clock, value, ns)
    SELECT
        v_itemid_start + 1,
        ts,
        random() * 100,
        0
    FROM generate_series(v_start_time, v_end_time, 60) AS ts;

    INSERT INTO trends (itemid, clock, num, value_min, value_avg, value_max)
    SELECT
        v_itemid_start + 1,
        (ts / 3600) * 3600, -- Hourly truncation
        60,
        0,
        50,
        100
    FROM generate_series(v_start_time, v_end_time, 3600) AS ts;

    -- Item 2: Numeric Unsigned (HISTORY_UINT)
    INSERT INTO items (itemid, hostid, interfaceid, name, key_, type, value_type, delay, uuid)
    VALUES (v_itemid_start + 2, v_hostid, v_interfaceid, 'Test Uint Item', 'test.uint', 0, 3, '1m', 'df77189c49034553999973d8e0500004');

    INSERT INTO history_uint (itemid, clock, value, ns)
    SELECT
        v_itemid_start + 2,
        ts,
        (random() * 1000)::integer,
        0
    FROM generate_series(v_start_time, v_end_time, 60) AS ts;

    INSERT INTO trends_uint (itemid, clock, num, value_min, value_avg, value_max)
    SELECT
        v_itemid_start + 2,
        (ts / 3600) * 3600,
        60,
        0,
        500,
        1000
    FROM generate_series(v_start_time, v_end_time, 3600) AS ts;

    -- Item 3: Character (HISTORY_STR)
    INSERT INTO items (itemid, hostid, interfaceid, name, key_, type, value_type, delay, uuid)
    VALUES (v_itemid_start + 3, v_hostid, v_interfaceid, 'Test Str Item', 'test.str', 0, 1, '1m', 'df77189c49034553999973d8e0500005');

    INSERT INTO history_str (itemid, clock, value, ns)
    SELECT
        v_itemid_start + 3,
        ts,
        'test_value_' || ts,
        0
    FROM generate_series(v_start_time, v_end_time, 300) AS ts; -- Every 5 mins

END $$;
@@ -1,135 +0,0 @@
|
|||||||
#!/bin/bash
|
|
||||||
|
|
||||||
# Default values
|
|
||||||
PG_VERSION=""
|
|
||||||
ZABBIX_VERSION=""
|
|
||||||
|
|
||||||
# Color codes
|
|
||||||
GREEN='\033[0;32m'
|
|
||||||
RED='\033[0;31m'
|
|
||||||
NC='\033[0m' # No Color
|
|
||||||
|
|
||||||
usage() {
|
|
||||||
echo "Usage: $0 --pg <16|17|18> --zabbix <7.0|7.4>"
|
|
||||||
echo "Example: $0 --pg 16 --zabbix 7.0"
|
|
||||||
exit 1
|
|
||||||
}
|
|
||||||
|
|
||||||
# Parse arguments
|
|
||||||
while [[ "$#" -gt 0 ]]; do
|
|
||||||
case $1 in
|
|
||||||
--pg) PG_VERSION="$2"; shift ;;
|
|
||||||
--zabbix) ZABBIX_VERSION="$2"; shift ;;
|
|
||||||
*) echo "Unknown parameter: $1"; usage ;;
|
|
||||||
esac
|
|
||||||
shift
|
|
||||||
done
|
|
||||||
|
|
||||||
if [[ -z "$PG_VERSION" || -z "$ZABBIX_VERSION" ]]; then
|
|
||||||
echo -e "${RED}Error: detailed arguments required.${NC}"
|
|
||||||
usage
|
|
||||||
fi
|
|
||||||
|
|
||||||
# Map Zabbix version to sql-scripts folder
|
|
||||||
if [[ "$ZABBIX_VERSION" == "7.0" ]]; then
|
|
||||||
SQL_DIR="../sql-scripts-70"
|
|
||||||
elif [[ "$ZABBIX_VERSION" == "7.4" ]]; then
|
|
||||||
SQL_DIR="../sql-scripts-74"
|
|
||||||
else
|
|
||||||
echo -e "${RED}Error: Unsupported Zabbix version. Use 7.0 or 7.4.${NC}"
|
|
||||||
exit 1
|
|
||||||
fi
|
|
||||||
|
|
||||||
echo -e "${GREEN}Preparing environment for PostgreSQL $PG_VERSION and Zabbix $ZABBIX_VERSION...${NC}"
|
|
||||||
|
|
||||||
# Cleanup previous run
|
|
||||||
echo "Cleaning up containers and volumes..."
|
|
||||||
docker compose down -v > /dev/null 2>&1
|
|
||||||
rm -rf init_scripts
|
|
||||||
mkdir -p init_scripts
|
|
||||||
|
|
||||||
# Symlink SQL scripts
|
|
||||||
echo "Setting up initialization scripts from $SQL_DIR..."

# 0. Extra Users
if [[ -f "../init_extra_users.sql" ]]; then
    cp "../init_extra_users.sql" ./init_scripts/00_init_extra_users.sql
    echo "Copied extra user init script."
fi

# 1. Schema
if [[ -f "$SQL_DIR/schema.sql" ]]; then
    # Use 01_00 to ensure it comes before 01_10
    cp "$SQL_DIR/schema.sql" ./init_scripts/01_00_schema.sql

    # 1.1 Partitioning Infrastructure
    if [[ -f "../procedures/00_partitions_init.sql" ]]; then
        cp "../procedures/00_partitions_init.sql" ./init_scripts/01_10_partitions_init.sql
    fi
    if [[ -f "../procedures/01_auditlog_prep.sql" ]]; then
        cp "../procedures/01_auditlog_prep.sql" ./init_scripts/01_20_auditlog_prep.sql
    fi
    if [[ -f "../procedures/02_maintenance.sql" ]]; then
        cp "../procedures/02_maintenance.sql" ./init_scripts/01_30_maintenance.sql
    fi
    if [[ -f "../procedures/03_enable_partitioning.sql" ]]; then
        cp "../procedures/03_enable_partitioning.sql" ./init_scripts/01_40_enable.sql
    fi
    if [[ -f "../procedures/04_monitoring_view.sql" ]]; then
        cp "../procedures/04_monitoring_view.sql" ./init_scripts/01_50_monitoring.sql
    fi
else
    echo -e "${RED}Error: schema.sql not found in $SQL_DIR${NC}"
    exit 1
fi

# 2. Images
if [[ -f "$SQL_DIR/images.sql" ]]; then
    cp "$SQL_DIR/images.sql" ./init_scripts/02_images.sql
else
    echo -e "${RED}Error: images.sql not found in $SQL_DIR${NC}"
    exit 1
fi

# 3. Data
if [[ -f "$SQL_DIR/data.sql" ]]; then
    cp "$SQL_DIR/data.sql" ./init_scripts/03_data.sql
else
    echo -e "${RED}Error: data.sql not found in $SQL_DIR${NC}"
    exit 1
fi

# 4. Mock History Data
if [[ -f "../z_gen_history_data.sql" ]]; then
    cp "../z_gen_history_data.sql" ./init_scripts/04_gen_data.sql
    echo "Copied mock data generator."
else
    echo -e "${RED}Warning: z_gen_history_data.sql not found!${NC}"
fi

# Check logic for 7.4 vs 7.0 (file names might slightly differ or be organized
# differently if using packages, but assuming source layout provided)

# Export variable for Docker Compose
export PG_VERSION=$PG_VERSION

# Run Docker Compose
echo -e "${GREEN}Starting PostgreSQL container...${NC}"
docker compose up -d

echo -e "${GREEN}Waiting for database to be ready...${NC}"
# Simple wait loop
for i in {1..30}; do
    if docker exec zabbix-db-test pg_isready -U zabbix > /dev/null 2>&1; then
        echo -e "${GREEN}Database is ready!${NC}"
        break
    fi
    echo -n "."
    sleep 1
done

# Check if data generation finished (it runs as part of init, which might take
# a bit longer than just the port opening). We can check logs.
echo "To follow initialization logs, run: docker logs -f zabbix-db-test"
echo -e "${GREEN}Environment ready.${NC}"
echo "Connect: psql -h localhost -p 5432 -U zabbix -d zabbix"
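The numeric prefixes in the `cp` targets above are what guarantee ordering: the official postgres image executes everything in `/docker-entrypoint-initdb.d` in sorted (lexical) order. A minimal sketch of that ordering, using a throwaway directory and illustrative filenames:

```shell
# Demo only: the directory and filenames are illustrative, mirroring the
# prefixes used by the setup script above.
demo=$(mktemp -d)
touch "$demo/01_10_partitions_init.sql" \
      "$demo/00_init_extra_users.sql" \
      "$demo/01_00_schema.sql"
# Lexical sort is the execution order the postgres entrypoint uses.
first=$(ls -1 "$demo" | sort | head -n 1)
echo "$first"
```

Because `00_` sorts before `01_00`, which sorts before `01_10`, the extra-users script runs first, then the schema, then the partitioning infrastructure.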
@@ -1,5 +0,0 @@
--- Create additional user for partitioning tasks
-CREATE USER zbx_part WITH PASSWORD 'zbx_part';
-GRANT CONNECT ON DATABASE zabbix TO zbx_part;
--- Grant usage on public schema (standard for PG 15+)
-GRANT USAGE ON SCHEMA public TO zbx_part;
@@ -1,7 +1,6 @@
 -- ============================================================================
--- SCRIPT: 00_partitions_init.sql
--- DESCRIPTION: Creates the 'partitions' schema and configuration table.
--- Defines the structure for managing Zabbix partitioning.
+-- Creates the 'partitions' schema and configuration table.
+-- Defines the structure for managing Zabbix partitioning.
 -- ============================================================================
 
 CREATE SCHEMA IF NOT EXISTS partitions;
@@ -34,7 +33,8 @@ INSERT INTO partitions.config (table_name, period, keep_history) VALUES
 ('history_uint', 'day', '30 days'),
 ('history_str', 'day', '30 days'),
 ('history_log', 'day', '30 days'),
-('history_text', 'day', '30 days')
+('history_text', 'day', '30 days'),
+('history_json', 'day', '30 days')
 ON CONFLICT (table_name) DO NOTHING;
 
 -- Trends tables: Monthly partitions, keep 12 months
@@ -1,24 +1,23 @@
 -- ============================================================================
--- SCRIPT: 01_auditlog_prep.sql
--- DESCRIPTION: Modifies the 'auditlog' table Primary Key to include 'clock'.
--- This is REQUIRED for range partitioning by 'clock'.
+-- Modifies the 'auditlog' table Primary Key to include 'clock'.
+-- This is REQUIRED for range partitioning by 'clock'.
 -- ============================================================================
 
 DO $$
 BEGIN
     -- Check if PK needs modification
-    -- Original PK is typically on (auditid) named 'auditlog_pkey'
+    -- Original PK is on auditid named 'auditlog_pkey'
     IF EXISTS (
         SELECT 1 FROM pg_constraint
         WHERE conname = 'auditlog_pkey'
         AND conrelid = 'auditlog'::regclass
     ) THEN
-        -- Verify if 'clock' is already in PK (basic check)
-        -- Realistically, if 'auditlog_pkey' exists on default Zabbix, it's just (auditid).
+        -- Verify if 'clock' is already in PK (basic safety check)
+        -- Realistically, if 'auditlog_pkey' exists on default Zabbix, it's just auditid.
 
         RAISE NOTICE 'Dropping existing Primary Key on auditlog...';
         ALTER TABLE auditlog DROP CONSTRAINT auditlog_pkey;
 
         RAISE NOTICE 'Creating new Primary Key on auditlog (auditid, clock)...';
         ALTER TABLE auditlog ADD PRIMARY KEY (auditid, clock);
     ELSE
@@ -1,6 +1,5 @@
 -- ============================================================================
--- SCRIPT: 02_maintenance.sql
--- DESCRIPTION: Core functions for Zabbix partitioning (Create, Drop, Maintain).
+-- Core functions for Zabbix partitioning (Create, Drop, Maintain).
 -- ============================================================================
 
 -- Function to check if a partition exists
@@ -11,7 +10,6 @@ BEGIN
         SELECT 1 FROM pg_class c
         JOIN pg_namespace n ON n.oid = c.relnamespace
         WHERE c.relname = p_partition_name
-        AND n.nspname = 'public'
     );
 END;
 $$ LANGUAGE plpgsql;
@@ -28,8 +26,17 @@ DECLARE
     v_start_ts bigint;
     v_end_ts bigint;
     v_suffix text;
+    v_parent_schema text;
 BEGIN
-    -- (No changes needed for time here as passed params are already UTC-adjusted in caller)
+    -- Determine the schema of the parent table
+    SELECT n.nspname INTO v_parent_schema
+    FROM pg_class c
+    JOIN pg_namespace n ON n.oid = c.relnamespace
+    WHERE c.relname = p_parent_table;
+
+    IF NOT FOUND THEN
+        RAISE EXCEPTION 'Parent table % not found', p_parent_table;
+    END IF;
     v_start_ts := extract(epoch from p_start_time)::bigint;
     v_end_ts := extract(epoch from p_end_time)::bigint;
 
@@ -43,8 +50,8 @@ BEGIN
 
     IF NOT partitions.partition_exists(v_partition_name) THEN
         EXECUTE format(
-            'CREATE TABLE public.%I PARTITION OF public.%I FOR VALUES FROM (%s) TO (%s)',
-            v_partition_name, p_parent_table, v_start_ts, v_end_ts
+            'CREATE TABLE %I.%I PARTITION OF %I.%I FOR VALUES FROM (%s) TO (%s)',
+            v_parent_schema, v_partition_name, v_parent_schema, p_parent_table, v_start_ts, v_end_ts
         );
     END IF;
 END;
@@ -61,16 +68,19 @@ DECLARE
     v_partition record;
     v_partition_date timestamp with time zone;
     v_suffix text;
+    v_partition_schema text;
 BEGIN
     -- Calculate cutoff timestamp
     v_cutoff_ts := extract(epoch from (now() - p_retention))::bigint;
 
     FOR v_partition IN
         SELECT
-            child.relname AS partition_name
+            child.relname AS partition_name,
+            n.nspname AS partition_schema
         FROM pg_inherits
         JOIN pg_class parent ON pg_inherits.inhparent = parent.oid
         JOIN pg_class child ON pg_inherits.inhrelid = child.oid
+        JOIN pg_namespace n ON child.relnamespace = n.oid
         WHERE parent.relname = p_parent_table
     LOOP
         -- Parse partition suffix to determine age
@@ -85,14 +95,14 @@ BEGIN
             -- To be safe, adding 1 month to check vs cutoff.
             IF extract(epoch from (v_partition_date + '1 month'::interval)) < v_cutoff_ts THEN
                 RAISE NOTICE 'Dropping old partition %', v_partition.partition_name;
-                EXECUTE format('DROP TABLE public.%I', v_partition.partition_name);
+                EXECUTE format('DROP TABLE %I.%I', v_partition.partition_schema, v_partition.partition_name);
                 COMMIT; -- Release lock immediately
             END IF;
         ELSIF length(v_suffix) = 8 THEN -- YYYYMMDD
             v_partition_date := to_timestamp(v_suffix, 'YYYYMMDD') AT TIME ZONE 'UTC';
             IF extract(epoch from (v_partition_date + '1 day'::interval)) < v_cutoff_ts THEN
                 RAISE NOTICE 'Dropping old partition %', v_partition.partition_name;
-                EXECUTE format('DROP TABLE public.%I', v_partition.partition_name);
+                EXECUTE format('DROP TABLE %I.%I', v_partition.partition_schema, v_partition.partition_name);
                 COMMIT; -- Release lock immediately
             END IF;
         END IF;
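The drop logic above boils down to: compute the partition's end (its start plus one period), compute `now() - retention` as a cutoff, and drop only when the end precedes the cutoff. The same arithmetic can be sketched outside the database in shell (GNU `date` assumed; the suffix, retention, and the fixed "now" are example values, not taken from the scripts):

```shell
# A daily partition with suffix YYYYMMDD covers [suffix, suffix + 1 day).
suffix="20240101"
# End of the partition's range, as an epoch (mirrors v_partition_date + '1 day').
part_end=$(date -u -d "${suffix} + 1 day" +%s)
# Cutoff = now - retention; a fixed 'now' keeps the example deterministic.
cutoff=$(date -u -d "2024-06-01 - 30 days" +%s)
if [ "$part_end" -lt "$cutoff" ]; then decision="drop"; else decision="keep"; fi
echo "$decision"
```

With a retention of 30 days and "now" at 2024-06-01, the January partition's end (2024-01-02) is well before the cutoff (2024-05-02), so it would be dropped.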
@@ -1,7 +1,6 @@
 -- ============================================================================
--- SCRIPT: 03_enable_partitioning.sql
--- DESCRIPTION: Converts standard Zabbix tables to Partitioned tables.
+-- Converts Zabbix tables to Partitioned tables.
 -- WARNING: This renames existing tables to *_old.
 -- ============================================================================
 
 DO $$
@@ -10,27 +9,35 @@ DECLARE
     v_table text;
     v_old_table text;
     v_pk_sql text;
+    v_schema text;
 BEGIN
     FOR v_row IN SELECT * FROM partitions.config LOOP
         v_table := v_row.table_name;
         v_old_table := v_table || '_old';
 
+        -- Determine schema
+        SELECT n.nspname INTO v_schema
+        FROM pg_class c
+        JOIN pg_namespace n ON n.oid = c.relnamespace
+        WHERE c.relname = v_table;
+
+
         -- Check if table exists and is NOT already partitioned
         IF EXISTS (SELECT 1 FROM pg_class WHERE relname = v_table AND relkind = 'r') THEN
             RAISE NOTICE 'Converting table % to partitioned table...', v_table;
 
             -- 1. Rename existing table
-            EXECUTE format('ALTER TABLE public.%I RENAME TO %I', v_table, v_old_table);
+            EXECUTE format('ALTER TABLE %I.%I RENAME TO %I', v_schema, v_table, v_old_table);
 
             -- 2. Create new partitioned table (copying structure)
-            EXECUTE format('CREATE TABLE public.%I (LIKE public.%I INCLUDING ALL) PARTITION BY RANGE (clock)', v_table, v_old_table);
+            EXECUTE format('CREATE TABLE %I.%I (LIKE %I.%I INCLUDING ALL) PARTITION BY RANGE (clock)', v_schema, v_table, v_schema, v_old_table);
 
             -- 3. Create initial partitions
             RAISE NOTICE 'Creating initial partitions for %...', v_table;
             CALL partitions.maintain_table(v_table, v_row.period, v_row.keep_history, v_row.future_partitions);
 
             -- Optional: Migrate existing data
-            -- EXECUTE format('INSERT INTO public.%I SELECT * FROM public.%I', v_table, v_old_table);
+            -- EXECUTE format('INSERT INTO %I.%I SELECT * FROM %I.%I', v_schema, v_table, v_schema, v_old_table);
 
         ELSIF EXISTS (SELECT 1 FROM pg_class WHERE relname = v_table AND relkind = 'p') THEN
             RAISE NOTICE 'Table % is already partitioned. Skipping conversion.', v_table;
@@ -1,6 +1,5 @@
 -- ============================================================================
--- SCRIPT: 04_monitoring_view.sql
--- DESCRIPTION: Creates a view to monitor partition status and sizes.
+-- Creates a view to monitor partition status and sizes.
 -- ============================================================================
 
 CREATE OR REPLACE VIEW partitions.monitoring AS
@@ -61,7 +61,7 @@ This procedure should be scheduled to run periodically (e.g., daily via `pg_cron
 ```sql
 CALL partitions.run_maintenance();
 ```
-### Automatic Maintenance (Cron)
+### Automatic Maintenance
 
 To ensure partitions are created in advance and old data is cleaned up, the maintenance procedure should be scheduled to run automatically.
 
@@ -69,16 +69,70 @@ It is recommended to run the maintenance **twice a day** (e.g., at 05:30 and 23:
 * **Primary Run**: Creates new future partitions and drops old ones.
 * **Secondary Run**: Acts as a safety check. Since the procedure is idempotent (safe to run multiple times), a second run ensures everything is consistent if the first run failed or was interrupted.
 
+There are three ways to schedule this, depending on your environment:
+
+#### Option 1: `pg_cron` (If you use RDS/Aurora)
+If you are running on managed PostgreSQL (like AWS Aurora) or prefer to keep scheduling inside the database, `pg_cron` is the way to go.
+
+1. Ensure `pg_cron` is installed and loaded in `postgresql.conf` (`shared_preload_libraries = 'pg_cron'`).
+2. Run the following to schedule the maintenance:
+```sql
+CREATE EXTENSION IF NOT EXISTS pg_cron;
+SELECT cron.schedule('zabbix_maintenance', '30 5,23 * * *', 'CALL partitions.run_maintenance();');
+```
+*Where:*
+* `'zabbix_maintenance'` - The name of the job (must be unique).
+* `'30 5,23 * * *'` - The standard cron schedule (runs at 05:30 and 23:30 daily).
+* `'CALL partitions.run_maintenance();'` - The SQL command to execute.
+
+#### Option 2: `systemd` Timers
+For standard Linux VM deployments, `systemd` timers are modern, prevent overlapping runs, and provide excellent logging.
+
+1. Create a service file (`/etc/systemd/system/zabbix-partitioning.service`):
+```ini
+[Unit]
+Description=Zabbix PostgreSQL Partition Maintenance
+
+[Service]
+Type=oneshot
+User=zabbix
+# Ensure .pgpass is configured for the zabbix user so it doesn't prompt for a password
+ExecStart=/usr/bin/psql -U zabbix -d zabbix -c "CALL partitions.run_maintenance();"
+```
+
+2. Create a timer file (`/etc/systemd/system/zabbix-partitioning.timer`):
+```ini
+[Unit]
+Description=Zabbix Partitioning twice a day
+
+[Timer]
+OnCalendar=*-*-* 05,23:30:00
+Persistent=true
+
+[Install]
+WantedBy=timers.target
+```
+
+3. Enable and start the timer:
+```bash
+systemctl daemon-reload
+systemctl enable --now zabbix-partitioning.timer
+```
+
+#### Option 3: Standard Cron
+This is the legacy, simple method for standard VMs and containerized environments.
+
 **Example Crontab Entry (`crontab -e`):**
 ```bash
-# Run Zabbix partition maintenance twice daily (5:30 AM and 5:30 PM)
+# Run Zabbix partition maintenance twice daily (5:30 AM and 11:30 PM)
 30 5,23 * * * psql -U zabbix -d zabbix -c "CALL partitions.run_maintenance();" >> /var/log/zabbix_maintenance.log 2>&1
 ```
 
 **Docker Environment:**
-If running in Docker, you can execute it via the container:
+If running in Docker, you can execute it via the container's host:
 ```bash
-30 5,23 * * * docker exec zabbix-db-test psql -U zabbix -d zabbix -c "CALL partitions.run_maintenance();"
+30 5,23 * * * docker exec zabbix-db psql -U zabbix -d zabbix -c "CALL partitions.run_maintenance();"
 ```
 ## Monitoring & Permissions
File diff suppressed because it is too large
File diff suppressed because one or more lines are too long
@@ -1,49 +0,0 @@
-ALTER TABLE history RENAME TO history_old;
-CREATE TABLE history (
-    itemid bigint NOT NULL,
-    clock integer DEFAULT '0' NOT NULL,
-    value DOUBLE PRECISION DEFAULT '0.0000' NOT NULL,
-    ns integer DEFAULT '0' NOT NULL,
-    PRIMARY KEY (itemid,clock,ns)
-);
-
-ALTER TABLE history_uint RENAME TO history_uint_old;
-CREATE TABLE history_uint (
-    itemid bigint NOT NULL,
-    clock integer DEFAULT '0' NOT NULL,
-    value numeric(20) DEFAULT '0' NOT NULL,
-    ns integer DEFAULT '0' NOT NULL,
-    PRIMARY KEY (itemid,clock,ns)
-);
-
-ALTER TABLE history_str RENAME TO history_str_old;
-CREATE TABLE history_str (
-    itemid bigint NOT NULL,
-    clock integer DEFAULT '0' NOT NULL,
-    value varchar(255) DEFAULT '' NOT NULL,
-    ns integer DEFAULT '0' NOT NULL,
-    PRIMARY KEY (itemid,clock,ns)
-);
-
-ALTER TABLE history_log RENAME TO history_log_old;
-CREATE TABLE history_log (
-    itemid bigint NOT NULL,
-    clock integer DEFAULT '0' NOT NULL,
-    timestamp integer DEFAULT '0' NOT NULL,
-    source varchar(64) DEFAULT '' NOT NULL,
-    severity integer DEFAULT '0' NOT NULL,
-    value text DEFAULT '' NOT NULL,
-    logeventid integer DEFAULT '0' NOT NULL,
-    ns integer DEFAULT '0' NOT NULL,
-    PRIMARY KEY (itemid,clock,ns)
-);
-
-ALTER TABLE history_text RENAME TO history_text_old;
-CREATE TABLE history_text (
-    itemid bigint NOT NULL,
-    clock integer DEFAULT '0' NOT NULL,
-    value text DEFAULT '' NOT NULL,
-    ns integer DEFAULT '0' NOT NULL,
-    PRIMARY KEY (itemid,clock,ns)
-);
File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because one or more lines are too long
@@ -1,49 +0,0 @@
-ALTER TABLE history RENAME TO history_old;
-CREATE TABLE history (
-    itemid bigint NOT NULL,
-    clock integer DEFAULT '0' NOT NULL,
-    value DOUBLE PRECISION DEFAULT '0.0000' NOT NULL,
-    ns integer DEFAULT '0' NOT NULL,
-    PRIMARY KEY (itemid,clock,ns)
-);
-
-ALTER TABLE history_uint RENAME TO history_uint_old;
-CREATE TABLE history_uint (
-    itemid bigint NOT NULL,
-    clock integer DEFAULT '0' NOT NULL,
-    value numeric(20) DEFAULT '0' NOT NULL,
-    ns integer DEFAULT '0' NOT NULL,
-    PRIMARY KEY (itemid,clock,ns)
-);
-
-ALTER TABLE history_str RENAME TO history_str_old;
-CREATE TABLE history_str (
-    itemid bigint NOT NULL,
-    clock integer DEFAULT '0' NOT NULL,
-    value varchar(255) DEFAULT '' NOT NULL,
-    ns integer DEFAULT '0' NOT NULL,
-    PRIMARY KEY (itemid,clock,ns)
-);
-
-ALTER TABLE history_log RENAME TO history_log_old;
-CREATE TABLE history_log (
-    itemid bigint NOT NULL,
-    clock integer DEFAULT '0' NOT NULL,
-    timestamp integer DEFAULT '0' NOT NULL,
-    source varchar(64) DEFAULT '' NOT NULL,
-    severity integer DEFAULT '0' NOT NULL,
-    value text DEFAULT '' NOT NULL,
-    logeventid integer DEFAULT '0' NOT NULL,
-    ns integer DEFAULT '0' NOT NULL,
-    PRIMARY KEY (itemid,clock,ns)
-);
-
-ALTER TABLE history_text RENAME TO history_text_old;
-CREATE TABLE history_text (
-    itemid bigint NOT NULL,
-    clock integer DEFAULT '0' NOT NULL,
-    value text DEFAULT '' NOT NULL,
-    ns integer DEFAULT '0' NOT NULL,
-    PRIMARY KEY (itemid,clock,ns)
-);
File diff suppressed because it is too large
@@ -1,91 +0,0 @@
--- ============================================================================
--- SCRIPT: z_gen_history_data.sql
--- DESCRIPTION: Generates mock data for Zabbix history and trends tables.
--- Creates a dummy host and items if they don't exist.
--- ============================================================================
-
-DO $$
-DECLARE
-    v_hostid bigint := 900001;
-    v_groupid bigint := 900001;
-    v_interfaceid bigint := 900001;
-    v_itemid_start bigint := 900001;
-    v_start_time integer := extract(epoch from (now() - interval '7 days'))::integer;
-    v_end_time integer := extract(epoch from now())::integer;
-    i integer;
-BEGIN
-    -- 1. CREATE DUMMY STRUCTURES
-    -- Host Group
-    INSERT INTO hstgrp (groupid, name, uuid, type)
-    VALUES (v_groupid, 'Partition Test Group', 'df77189c49034553999973d8e0500001', 0)
-    ON CONFLICT DO NOTHING;
-
-    -- Host
-    INSERT INTO hosts (hostid, host, name, status, uuid)
-    VALUES (v_hostid, 'partition-test-host', 'Partition Test Host', 0, 'df77189c49034553999973d8e0500002')
-    ON CONFLICT DO NOTHING;
-
-    -- Interface
-    INSERT INTO interface (interfaceid, hostid, main, type, useip, ip, dns, port)
-    VALUES (v_interfaceid, v_hostid, 1, 1, 1, '127.0.0.1', '', '10050')
-    ON CONFLICT DO NOTHING;
-
-    -- 2. CREATE DUMMY ITEMS AND GENERATE HISTORY
-
-    -- Item 1: Numeric Float (HISTORY)
-    INSERT INTO items (itemid, hostid, interfaceid, name, key_, type, value_type, delay, uuid)
-    VALUES (v_itemid_start + 1, v_hostid, v_interfaceid, 'Test Float Item', 'test.float', 0, 0, '1m', 'df77189c49034553999973d8e0500003');
-
-    INSERT INTO history (itemid, clock, value, ns)
-    SELECT
-        v_itemid_start + 1,
-        ts,
-        random() * 100,
-        0
-    FROM generate_series(v_start_time, v_end_time, 60) AS ts;
-
-    INSERT INTO trends (itemid, clock, num, value_min, value_avg, value_max)
-    SELECT
-        v_itemid_start + 1,
-        (ts / 3600) * 3600, -- Hourly truncation
-        60,
-        0,
-        50,
-        100
-    FROM generate_series(v_start_time, v_end_time, 3600) AS ts;
-
-    -- Item 2: Numeric Unsigned (HISTORY_UINT)
-    INSERT INTO items (itemid, hostid, interfaceid, name, key_, type, value_type, delay, uuid)
-    VALUES (v_itemid_start + 2, v_hostid, v_interfaceid, 'Test Uint Item', 'test.uint', 0, 3, '1m', 'df77189c49034553999973d8e0500004');
-
-    INSERT INTO history_uint (itemid, clock, value, ns)
-    SELECT
-        v_itemid_start + 2,
-        ts,
-        (random() * 1000)::integer,
-        0
-    FROM generate_series(v_start_time, v_end_time, 60) AS ts;
-
-    INSERT INTO trends_uint (itemid, clock, num, value_min, value_avg, value_max)
-    SELECT
-        v_itemid_start + 2,
-        (ts / 3600) * 3600,
-        60,
-        0,
-        500,
-        1000
-    FROM generate_series(v_start_time, v_end_time, 3600) AS ts;
-
-    -- Item 3: Character (HISTORY_STR)
-    INSERT INTO items (itemid, hostid, interfaceid, name, key_, type, value_type, delay, uuid)
-    VALUES (v_itemid_start + 3, v_hostid, v_interfaceid, 'Test Str Item', 'test.str', 0, 1, '1m', 'df77189c49034553999973d8e0500005');
-
-    INSERT INTO history_str (itemid, clock, value, ns)
-    SELECT
-        v_itemid_start + 3,
-        ts,
-        'test_value_' || ts,
-        0
-    FROM generate_series(v_start_time, v_end_time, 300) AS ts; -- Every 5 mins
-
-END $$;
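The trends inserts in the generator snap each sample's epoch to the hour with integer arithmetic, `(ts / 3600) * 3600`. The same computation can be checked in shell:

```shell
# Integer division discards the sub-hour remainder; multiplying back yields
# the top of the hour. Example epoch: 2024-01-01 12:01:01 UTC.
ts=1704110461
hour=$(( ts / 3600 * 3600 ))
echo "$hour"   # 2024-01-01 12:00:00 UTC
```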