build(repo): structure the repo into postgresql subdirectory with separate template and internal tests
63 postgresql/tests/ARCHITECTURE.md (new file)
@@ -0,0 +1,63 @@
# Zabbix PostgreSQL Partitioning Architecture

This document provides a brief technical overview of the components, logic, and dynamic querying mechanisms that power the PostgreSQL partitioning solution for Zabbix.

## Schema-Agnostic Design

A core architectural principle of this solution is its **schema-agnostic design**. It does not assume that your Zabbix database is installed in the default `public` schema.

When the procedures need to create, drop, or manipulate a partitioned table (e.g., `history`), they do not hardcode the schema. Instead, they dynamically query PostgreSQL's system catalogs (`pg_class` and `pg_namespace`) to locate the schema the target table belongs to:

```sql
SELECT n.nspname INTO v_schema
FROM pg_class c
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE c.relname = v_table;
```

This ensures that the partitioning scripts work correctly even in custom Zabbix deployments where tables are housed in alternative schemas.

## File Structure & Queries

The solution is divided into a series of SQL scripts that must be executed sequentially to set up the environment.

### 1. `00_schema_create.sql`
* **Purpose:** Initializes the foundation for the partitioning system.
* **Actions:**
  * Creates the isolated `partitions` schema to keep everything separate from Zabbix's own structure.
  * Creates the `partitions.config` table, which stores retention policies.
  * Creates the `partitions.version` table for tracking the installed version.
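
Concretely, registering a table for partitioning is just a row in `partitions.config`; this example mirrors the defaults installed by the script:

```sql
-- Partition 'history' daily and keep 30 days of data
INSERT INTO partitions.config (table_name, period, keep_history)
VALUES ('history', 'day', '30 days')
ON CONFLICT (table_name) DO NOTHING;
```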

### 2. `01_auditlog_prep.sql`
* **Purpose:** Prepares the Zabbix `auditlog` table for partitioning.
* **Actions:**
  * PostgreSQL range partitioning requires the partition key (in this case, `clock`) to be part of the primary key.
  * This script dynamically locates the existing primary key (usually just `auditid`) and alters it to a composite key `(auditid, clock)`.
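
As a sketch, the resulting change is equivalent to the following (the constraint name `auditlog_pkey` is an assumption based on PostgreSQL's default naming; the script discovers the real name from the catalogs):

```sql
-- Replace the single-column PK with a composite key that includes the partition key
ALTER TABLE auditlog DROP CONSTRAINT auditlog_pkey;
ALTER TABLE auditlog ADD PRIMARY KEY (auditid, clock);
```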

### 3. `01_maintenance.sql`
* **Purpose:** Contains the core PL/pgSQL procedural logic that manages the lifecycle of the partitions.
* **Key Functions/Procedures:**
  * `partition_exists()`: Queries `pg_class` to verify whether a specific child partition exists.
  * `create_partition()`: Executes the DDL `CREATE TABLE ... PARTITION OF ... FOR VALUES FROM (x) TO (y)` to generate a new time-bound chunk.
  * `drop_old_partitions()`: Iterates over existing child partitions (using `pg_inherits`), calculates their age from their name suffix, and drops those older than the configured `keep_history` policy.
  * `maintain_table()`: The orchestrator for a single table. It calculates the necessary UTC timestamps, calls `create_partition()` to build the future buffer, loops backward over `create_partition()` to cover the retention period, and finally calls `drop_old_partitions()`.
  * `run_maintenance()`: The global loop that iterates through `partitions.config` and triggers `maintain_table()` for every configured Zabbix table.
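
In day-to-day use, the whole pipeline is driven by one call:

```sql
-- Creates missing partitions and drops expired ones for every configured table
CALL partitions.run_maintenance();
```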

### 4. `02_enable_partitioning.sql`
* **Purpose:** The migration script that actually executes the partition conversion on the live database.
* **Actions:**
  * It takes the original Zabbix table (e.g., `history`) and renames it to `history_old` (`ALTER TABLE ... RENAME TO ...`).
  * It immediately creates a new partitioned table with the original name, inheriting the exact structure of the old table (`CREATE TABLE ... (LIKE ... INCLUDING ALL) PARTITION BY RANGE (clock)`).
  * It triggers the first maintenance run so new incoming data has immediate partitions to land in.
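
Spelled out for the `history` table (assuming the default `public` schema; the script itself resolves the schema dynamically):

```sql
ALTER TABLE public.history RENAME TO history_old;
CREATE TABLE public.history (LIKE public.history_old INCLUDING ALL)
    PARTITION BY RANGE (clock);
```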

### 5. `03_monitoring_view.sql`
* **Purpose:** Provides an easy-to-read observability layer.
* **Actions:**
  * Creates the `partitions.monitoring` view by joining `partitions.config`, `pg_class`, and `pg_inherits`, formatting sizes with `pg_size_pretty()`.
  * This view aggregates the total size of each partitioned family and calculates how many "future partitions" exist as a safety buffer.
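
Once created, checking the health of all partitioned tables is a single query:

```sql
SELECT parent_table, partition_count, future_partitions, total_size
FROM partitions.monitoring;
```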

## Automated Scheduling (`pg_cron`)

While `systemd` timers or standard `cron` can be used to trigger the maintenance, the recommended approach (especially for AWS RDS/Aurora deployments) is the `pg_cron` database extension.

`pg_cron` allows you to schedule the `CALL partitions.run_maintenance();` procedure directly within PostgreSQL, ensuring the database autonomously manages its own housekeeping without requiring external OS-level access or triggers.
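
For example, assuming `pg_cron` is installed and enabled, an hourly job could be registered like this (the job name and schedule are illustrative):

```sql
CREATE EXTENSION IF NOT EXISTS pg_cron;
-- Run partition maintenance at the top of every hour
SELECT cron.schedule('zabbix-partition-maintenance', '0 * * * *',
                     $$CALL partitions.run_maintenance()$$);
```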
90 postgresql/tests/QUICKSTART.md (new file)
@@ -0,0 +1,90 @@

# Quickstart (PostgreSQL Partitioning Test)

## Start Environment
> **Note**: If `docker` commands fail with permission errors, run `newgrp docker` or ensure your user is in the `docker` group (`sudo usermod -aG docker $USER`) and log out/in.

```bash
cd postgresql/docker
sudo ./run_test_env.sh --pg 16 --zabbix 7.0
# Options: --pg <16|17|18> --zabbix <7.0|7.4>
```

## Verify
```bash
# Check status
docker ps

# SQL shell
docker exec -it zabbix-db-test psql -U zabbix -d zabbix
# Password: zabbix
```
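
From inside `psql`, you can also confirm that partitioning is active via the monitoring view created by the init scripts:

```sql
SELECT parent_table, partition_count, newest_partition
FROM partitions.monitoring;
```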

## Reset
```bash
docker compose down -v
```

## Partitioning
See [ARCHITECTURE.md](ARCHITECTURE.md) for details on the implemented declarative partitioning.

## AWS RDS / External Database Testing

You can run these partitioning tests against a real AWS RDS instance (or any external PostgreSQL instance).

### 1. Configure Credentials
First, create a `db_credentials` file in the `postgresql/` directory. (This file is ignored by Git to keep your passwords safe.)
Example `postgresql/db_credentials`:
```bash
# Admin credentials
export DB_HOST="your-rds-endpoint.rds.amazonaws.com"
export DB_PORT="5432"
export DB_NAME="postgres"
export DB_USER="postgres"
export DB_PASSWORD="your_admin_password"

# SSL configuration
export DB_SSL_MODE="verify-full"
export DB_PEM_URL="https://truststore.pki.rds.amazonaws.com/global/global-bundle.pem"
export DB_SSL_ROOT_CERT="./global-bundle.pem"

# Zabbix credentials to be created
export ZBX_DB_NAME="zabbix"
export ZBX_DB_USER="zabbix"
export ZBX_DB_PASSWORD="zabbix_password"
```

### 2. Automated Testing
You can run the same automated deployment script, but instruct it to deploy directly to your RDS instance instead of a local Docker container:

```bash
cd postgresql/docker
./run_test_env.sh --pg 16 --zabbix 7.0 --rds
```

If you want to completely clean up the RDS database and start fresh (terminating existing connections and dropping all data), use the `--rds-drop` flag. You will be prompted to type `yes` to confirm the deletion:
```bash
./run_test_env.sh --pg 16 --zabbix 7.0 --rds-drop
```

### 3. Manual Setup & Zabbix Integration
If you want to prepare the real database for your production Zabbix server manually, you can run the initialization script directly:

```bash
cd postgresql
./setup_rds.sh
# To drop an existing database and start fresh, use:
# ./setup_rds.sh --drop
```

The script automatically connects as the `postgres` user, downloads the SSL certificate bundle if needed, and sets up the `zabbix` user and database.
Upon success, the script outputs the exact block you need to copy into your `zabbix_server.conf`, e.g.:

```ini
DBHost=your-rds-endpoint.rds.amazonaws.com
DBName=zabbix
DBUser=zabbix
DBPassword=zabbix_password
DBPort=5432
DBTLSConnect=verify_full
DBTLSCAFile=/full/path/to/global-bundle.pem
```
20 postgresql/tests/docker/docker-compose.yml (new file)
@@ -0,0 +1,20 @@
services:
  postgres:
    image: postgres:${PG_VERSION}
    container_name: zabbix-db-test
    environment:
      POSTGRES_PASSWORD: zabbix
      POSTGRES_USER: zabbix
      POSTGRES_DB: zabbix
      PGDATA: /var/lib/postgresql/data/pgdata
    ports:
      - "5432:5432"
    volumes:
      - ./init_scripts:/docker-entrypoint-initdb.d
    tmpfs:
      - /var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U zabbix"]
      interval: 5s
      timeout: 5s
      retries: 5
@@ -0,0 +1,5 @@
-- Create additional user for partitioning tasks
CREATE USER zbx_part WITH PASSWORD 'zbx_part';
GRANT CONNECT ON DATABASE zabbix TO zbx_part;
-- Grant usage on public schema (standard for PG 15+)
GRANT USAGE ON SCHEMA public TO zbx_part;
3407 postgresql/tests/docker/init_scripts/01_00_schema.sql (new file)
File diff suppressed because it is too large
@@ -0,0 +1,48 @@
-- ============================================================================
-- Creates the 'partitions' schema and configuration table.
-- Defines the structure for managing Zabbix partitioning.
-- ============================================================================

CREATE SCHEMA IF NOT EXISTS partitions;

-- Configuration table to store partitioning settings per table
CREATE TABLE IF NOT EXISTS partitions.config (
    table_name text NOT NULL,
    period text NOT NULL CHECK (period IN ('day', 'week', 'month', 'year')),
    keep_history interval NOT NULL,
    future_partitions integer NOT NULL DEFAULT 5,
    last_updated timestamp WITH TIME ZONE DEFAULT (now() AT TIME ZONE 'UTC'),
    PRIMARY KEY (table_name)
);

-- Table to track installed version of the partitioning solution
CREATE TABLE IF NOT EXISTS partitions.version (
    version text PRIMARY KEY,
    installed_at timestamp with time zone DEFAULT (now() AT TIME ZONE 'UTC'),
    description text
);

-- Set initial version
INSERT INTO partitions.version (version, description) VALUES ('1.0', 'Initial release')
ON CONFLICT (version) DO NOTHING;

-- Default configuration for Zabbix tables (adjust as needed)
-- History tables: daily partitions, keep 30 days
INSERT INTO partitions.config (table_name, period, keep_history) VALUES
    ('history', 'day', '30 days'),
    ('history_uint', 'day', '30 days'),
    ('history_str', 'day', '30 days'),
    ('history_log', 'day', '30 days'),
    ('history_text', 'day', '30 days')
ON CONFLICT (table_name) DO NOTHING;

-- Trends tables: monthly partitions, keep 12 months
INSERT INTO partitions.config (table_name, period, keep_history) VALUES
    ('trends', 'month', '12 months'),
    ('trends_uint', 'month', '12 months')
ON CONFLICT (table_name) DO NOTHING;

-- Auditlog: monthly partitions, keep 12 months
INSERT INTO partitions.config (table_name, period, keep_history) VALUES
    ('auditlog', 'month', '12 months')
ON CONFLICT (table_name) DO NOTHING;
194 postgresql/tests/docker/init_scripts/01_30_maintenance.sql (new file)
@@ -0,0 +1,194 @@
-- ============================================================================
-- Core functions for Zabbix partitioning (Create, Drop, Maintain).
-- ============================================================================

-- Function to check if a partition exists
CREATE OR REPLACE FUNCTION partitions.partition_exists(p_partition_name text)
RETURNS boolean AS $$
BEGIN
    RETURN EXISTS (
        SELECT 1 FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relname = p_partition_name
    );
END;
$$ LANGUAGE plpgsql;

-- Procedure to create a partition
CREATE OR REPLACE PROCEDURE partitions.create_partition(
    p_parent_table text,
    p_start_time timestamp with time zone,
    p_end_time timestamp with time zone,
    p_period text
) LANGUAGE plpgsql AS $$
DECLARE
    v_partition_name text;
    v_start_ts bigint;
    v_end_ts bigint;
    v_suffix text;
    v_parent_schema text;
BEGIN
    -- Determine the schema of the parent table
    SELECT n.nspname INTO v_parent_schema
    FROM pg_class c
    JOIN pg_namespace n ON n.oid = c.relnamespace
    WHERE c.relname = p_parent_table;

    IF NOT FOUND THEN
        RAISE EXCEPTION 'Parent table % not found', p_parent_table;
    END IF;

    -- Passed timestamps are already UTC-adjusted by the caller
    v_start_ts := extract(epoch from p_start_time)::bigint;
    v_end_ts := extract(epoch from p_end_time)::bigint;

    IF p_period = 'month' THEN
        v_suffix := to_char(p_start_time, 'YYYYMM');
    ELSE
        v_suffix := to_char(p_start_time, 'YYYYMMDD');
    END IF;

    v_partition_name := p_parent_table || '_p' || v_suffix;

    IF NOT partitions.partition_exists(v_partition_name) THEN
        EXECUTE format(
            'CREATE TABLE %I.%I PARTITION OF %I.%I FOR VALUES FROM (%s) TO (%s)',
            v_parent_schema, v_partition_name, v_parent_schema, p_parent_table, v_start_ts, v_end_ts
        );
    END IF;
END;
$$;

-- Procedure to drop old partitions
CREATE OR REPLACE PROCEDURE partitions.drop_old_partitions(
    p_parent_table text,
    p_retention interval,
    p_period text
) LANGUAGE plpgsql AS $$
DECLARE
    v_cutoff_ts bigint;
    v_partition record;
    v_partition_date timestamp with time zone;
    v_partition_end timestamp with time zone;
    v_suffix text;
BEGIN
    -- Calculate cutoff timestamp
    v_cutoff_ts := extract(epoch from (now() - p_retention))::bigint;

    FOR v_partition IN
        SELECT
            child.relname AS partition_name,
            n.nspname AS partition_schema
        FROM pg_inherits
        JOIN pg_class parent ON pg_inherits.inhparent = parent.oid
        JOIN pg_class child ON pg_inherits.inhrelid = child.oid
        JOIN pg_namespace n ON child.relnamespace = n.oid
        WHERE parent.relname = p_parent_table
    LOOP
        -- Parse partition suffix to determine age
        -- Format: parent_pYYYYMM or parent_pYYYYMMDD
        v_suffix := substring(v_partition.partition_name from length(p_parent_table) + 3);
        v_partition_end := NULL;

        -- Parse inside a sub-block so non-standard names are skipped. The COMMIT
        -- below must stay OUTSIDE this block: PL/pgSQL does not allow COMMIT
        -- inside a block that has an EXCEPTION clause.
        BEGIN
            IF length(v_suffix) = 6 THEN -- YYYYMM
                v_partition_date := to_timestamp(v_suffix || '01', 'YYYYMMDD') AT TIME ZONE 'UTC';
                -- Compare the END of the month, so a partition is only dropped
                -- once its entire range is older than the retention cutoff
                v_partition_end := v_partition_date + '1 month'::interval;
            ELSIF length(v_suffix) = 8 THEN -- YYYYMMDD
                v_partition_date := to_timestamp(v_suffix, 'YYYYMMDD') AT TIME ZONE 'UTC';
                v_partition_end := v_partition_date + '1 day'::interval;
            END IF;
        EXCEPTION WHEN OTHERS THEN
            -- Ignore parsing errors for non-standard partitions
            v_partition_end := NULL;
        END;

        IF v_partition_end IS NOT NULL AND extract(epoch from v_partition_end) < v_cutoff_ts THEN
            RAISE NOTICE 'Dropping old partition %', v_partition.partition_name;
            EXECUTE format('DROP TABLE %I.%I', v_partition.partition_schema, v_partition.partition_name);
            COMMIT; -- Release locks immediately
        END IF;
    END LOOP;
END;
$$;

-- MAIN procedure to maintain a single table
CREATE OR REPLACE PROCEDURE partitions.maintain_table(
    p_table_name text,
    p_period text,
    p_keep_history interval,
    p_future_partitions integer DEFAULT 5
) LANGUAGE plpgsql AS $$
DECLARE
    v_start_time timestamp with time zone;
    v_period_interval interval;
    i integer;
    v_past_iterations integer;
BEGIN
    IF p_period = 'day' THEN
        v_period_interval := '1 day'::interval;
        v_start_time := date_trunc('day', now() AT TIME ZONE 'UTC');
        -- Number of past days needed to cover the retention period (86400 s = 1 day)
        v_past_iterations := ceil(extract(epoch from p_keep_history) / 86400)::integer;
    ELSIF p_period = 'week' THEN
        v_period_interval := '1 week'::interval;
        v_start_time := date_trunc('week', now() AT TIME ZONE 'UTC');
        -- 604800 s = 1 week
        v_past_iterations := ceil(extract(epoch from p_keep_history) / 604800)::integer;
    ELSIF p_period = 'month' THEN
        v_period_interval := '1 month'::interval;
        v_start_time := date_trunc('month', now() AT TIME ZONE 'UTC');
        -- Approximate 30 days per month (2592000 s)
        v_past_iterations := ceil(extract(epoch from p_keep_history) / 2592000)::integer;
    ELSE
        RETURN;
    END IF;

    -- 1. Create future partitions (current + buffer)
    FOR i IN 0..p_future_partitions LOOP
        CALL partitions.create_partition(
            p_table_name,
            v_start_time + (i * v_period_interval),
            v_start_time + ((i + 1) * v_period_interval),
            p_period
        );
        COMMIT; -- Release locks between partitions
    END LOOP;

    -- 2. Create past partitions (covering the retention period)
    IF v_past_iterations > 0 THEN
        FOR i IN 1..v_past_iterations LOOP
            CALL partitions.create_partition(
                p_table_name,
                v_start_time - (i * v_period_interval),
                v_start_time - ((i - 1) * v_period_interval),
                p_period
            );
            COMMIT; -- Release locks between partitions
        END LOOP;
    END IF;

    -- 3. Drop old partitions
    CALL partitions.drop_old_partitions(p_table_name, p_keep_history, p_period);

    -- 4. Update metadata
    UPDATE partitions.config SET last_updated = now() WHERE table_name = p_table_name;
END;
$$;

-- Global maintenance procedure
CREATE OR REPLACE PROCEDURE partitions.run_maintenance()
LANGUAGE plpgsql AS $$
DECLARE
    v_row record;
BEGIN
    FOR v_row IN SELECT * FROM partitions.config LOOP
        CALL partitions.maintain_table(v_row.table_name, v_row.period, v_row.keep_history, v_row.future_partitions);
    END LOOP;
END;
$$;
56 postgresql/tests/docker/init_scripts/01_40_enable.sql (new file)
@@ -0,0 +1,56 @@
-- ============================================================================
-- Converts standard Zabbix tables to partitioned tables.
-- WARNING: This renames existing tables to *_old.
-- ============================================================================

DO $$
DECLARE
    v_row record;
    v_table text;
    v_old_table text;
    v_schema text;
BEGIN
    FOR v_row IN SELECT * FROM partitions.config LOOP
        v_table := v_row.table_name;
        v_old_table := v_table || '_old';

        -- Determine schema
        SELECT n.nspname INTO v_schema
        FROM pg_class c
        JOIN pg_namespace n ON n.oid = c.relnamespace
        WHERE c.relname = v_table;

        IF EXISTS (SELECT 1 FROM pg_class WHERE relname = v_table AND relkind = 'r') THEN
            RAISE NOTICE 'Converting table % to partitioned table...', v_table;

            -- 1. Rename existing table
            EXECUTE format('ALTER TABLE %I.%I RENAME TO %I', v_schema, v_table, v_old_table);

            -- 2. Create new partitioned table (handling the auditlog PK specially)
            IF v_table = 'auditlog' THEN
                EXECUTE format('CREATE TABLE %I.%I (LIKE %I.%I INCLUDING DEFAULTS INCLUDING COMMENTS) PARTITION BY RANGE (clock)', v_schema, v_table, v_schema, v_old_table);
                EXECUTE format('ALTER TABLE %I.%I ADD PRIMARY KEY (auditid, clock)', v_schema, v_table);
                EXECUTE format('CREATE INDEX IF NOT EXISTS auditlog_1 ON %I.%I (userid, clock)', v_schema, v_table);
                EXECUTE format('CREATE INDEX IF NOT EXISTS auditlog_2 ON %I.%I (clock)', v_schema, v_table);
            ELSE
                EXECUTE format('CREATE TABLE %I.%I (LIKE %I.%I INCLUDING ALL) PARTITION BY RANGE (clock)', v_schema, v_table, v_schema, v_old_table);
            END IF;

            -- 3. Create initial partitions
            RAISE NOTICE 'Creating initial partitions for %...', v_table;
            CALL partitions.maintain_table(v_table, v_row.period, v_row.keep_history, v_row.future_partitions);

            -- Optional: migrate existing data
            -- EXECUTE format('INSERT INTO %I.%I SELECT * FROM %I.%I', v_schema, v_table, v_schema, v_old_table);

        ELSIF EXISTS (SELECT 1 FROM pg_class WHERE relname = v_table AND relkind = 'p') THEN
            RAISE NOTICE 'Table % is already partitioned. Skipping conversion.', v_table;
            -- Just run maintenance to ensure partitions exist
            CALL partitions.run_maintenance();
        ELSE
            RAISE WARNING 'Table % not found!', v_table;
        END IF;
    END LOOP;
END $$;
27 postgresql/tests/docker/init_scripts/01_50_monitoring.sql (new file)
@@ -0,0 +1,27 @@
-- ============================================================================
-- Creates a view to monitor partition status and sizes.
-- ============================================================================

CREATE OR REPLACE VIEW partitions.monitoring AS
SELECT
    parent.relname AS parent_table,
    c.table_name,
    c.period,
    c.keep_history,
    count(child.relname) AS partition_count,
    count(child.relname) FILTER (
        WHERE
            (c.period = 'day' AND child.relname > (parent.relname || '_p' || to_char(now(), 'YYYYMMDD')))
            OR
            (c.period = 'month' AND child.relname > (parent.relname || '_p' || to_char(now(), 'YYYYMM')))
    ) AS future_partitions,
    pg_size_pretty(sum(pg_total_relation_size(child.oid))) AS total_size,
    min(child.relname) AS oldest_partition,
    max(child.relname) AS newest_partition,
    c.last_updated
FROM partitions.config c
JOIN pg_class parent ON parent.relname = c.table_name
LEFT JOIN pg_inherits ON pg_inherits.inhparent = parent.oid
LEFT JOIN pg_class child ON pg_inherits.inhrelid = child.oid
WHERE parent.relkind = 'p' -- Only partitioned tables
GROUP BY parent.relname, c.table_name, c.period, c.keep_history, c.last_updated;
187 postgresql/tests/docker/init_scripts/02_images.sql (new file)
File diff suppressed because one or more lines are too long
277331 postgresql/tests/docker/init_scripts/03_data.sql (new file)
File diff suppressed because it is too large
91 postgresql/tests/docker/init_scripts/04_gen_data.sql (new file)
@@ -0,0 +1,91 @@
-- ============================================================================
-- SCRIPT: z_gen_history_data.sql
-- DESCRIPTION: Generates mock data for Zabbix history and trends tables.
--              Creates a dummy host and items if they don't exist.
-- ============================================================================

DO $$
DECLARE
    v_hostid bigint := 900001;
    v_groupid bigint := 900001;
    v_interfaceid bigint := 900001;
    v_itemid_start bigint := 900001;
    v_start_time integer := extract(epoch from (now() - interval '7 days'))::integer;
    v_end_time integer := extract(epoch from now())::integer;
BEGIN
    -- 1. CREATE DUMMY STRUCTURES
    -- Host group
    INSERT INTO hstgrp (groupid, name, uuid, type)
    VALUES (v_groupid, 'Partition Test Group', 'df77189c49034553999973d8e0500001', 0)
    ON CONFLICT DO NOTHING;

    -- Host
    INSERT INTO hosts (hostid, host, name, status, uuid)
    VALUES (v_hostid, 'partition-test-host', 'Partition Test Host', 0, 'df77189c49034553999973d8e0500002')
    ON CONFLICT DO NOTHING;

    -- Interface
    INSERT INTO interface (interfaceid, hostid, main, type, useip, ip, dns, port)
    VALUES (v_interfaceid, v_hostid, 1, 1, 1, '127.0.0.1', '', '10050')
    ON CONFLICT DO NOTHING;

    -- 2. CREATE DUMMY ITEMS AND GENERATE HISTORY

    -- Item 1: Numeric float (history)
    INSERT INTO items (itemid, hostid, interfaceid, name, key_, type, value_type, delay, uuid)
    VALUES (v_itemid_start + 1, v_hostid, v_interfaceid, 'Test Float Item', 'test.float', 0, 0, '1m', 'df77189c49034553999973d8e0500003');

    INSERT INTO history (itemid, clock, value, ns)
    SELECT
        v_itemid_start + 1,
        ts,
        random() * 100,
        0
    FROM generate_series(v_start_time, v_end_time, 60) AS ts;

    INSERT INTO trends (itemid, clock, num, value_min, value_avg, value_max)
    SELECT
        v_itemid_start + 1,
        (ts / 3600) * 3600, -- Hourly truncation
        60,
        0,
        50,
        100
    FROM generate_series(v_start_time, v_end_time, 3600) AS ts;

    -- Item 2: Numeric unsigned (history_uint)
    INSERT INTO items (itemid, hostid, interfaceid, name, key_, type, value_type, delay, uuid)
    VALUES (v_itemid_start + 2, v_hostid, v_interfaceid, 'Test Uint Item', 'test.uint', 0, 3, '1m', 'df77189c49034553999973d8e0500004');

    INSERT INTO history_uint (itemid, clock, value, ns)
    SELECT
        v_itemid_start + 2,
        ts,
        (random() * 1000)::integer,
        0
    FROM generate_series(v_start_time, v_end_time, 60) AS ts;

    INSERT INTO trends_uint (itemid, clock, num, value_min, value_avg, value_max)
    SELECT
        v_itemid_start + 2,
        (ts / 3600) * 3600,
        60,
        0,
        500,
        1000
    FROM generate_series(v_start_time, v_end_time, 3600) AS ts;

    -- Item 3: Character (history_str)
    INSERT INTO items (itemid, hostid, interfaceid, name, key_, type, value_type, delay, uuid)
    VALUES (v_itemid_start + 3, v_hostid, v_interfaceid, 'Test Str Item', 'test.str', 0, 1, '1m', 'df77189c49034553999973d8e0500005');

    INSERT INTO history_str (itemid, clock, value, ns)
    SELECT
        v_itemid_start + 3,
        ts,
        'test_value_' || ts,
        0
    FROM generate_series(v_start_time, v_end_time, 300) AS ts; -- Every 5 minutes

END $$;
164 postgresql/tests/docker/run_test_env.sh (executable file)
@@ -0,0 +1,164 @@
#!/bin/bash

# Default values
PG_VERSION=""
ZABBIX_VERSION=""

# Color codes
GREEN='\033[0;32m'
RED='\033[0;31m'
NC='\033[0m' # No Color

usage() {
    echo "Usage: $0 --pg <16|17|18> --zabbix <7.0|7.4> [--rds] [--rds-drop]"
    echo "Example: $0 --pg 16 --zabbix 7.0 [--rds-drop]"
    exit 1
}

# Parse arguments
USE_RDS=false
DROP_RDS=false
while [[ "$#" -gt 0 ]]; do
    case $1 in
        --pg) PG_VERSION="$2"; shift ;;
        --zabbix) ZABBIX_VERSION="$2"; shift ;;
        --rds) USE_RDS=true ;;
        --rds-drop) USE_RDS=true; DROP_RDS=true ;;
        *) echo "Unknown parameter: $1"; usage ;;
    esac
    shift
done

if [[ -z "$PG_VERSION" || -z "$ZABBIX_VERSION" ]]; then
    echo -e "${RED}Error: --pg and --zabbix arguments are required.${NC}"
    usage
fi

# Map Zabbix version to sql-scripts folder
if [[ "$ZABBIX_VERSION" == "7.0" ]]; then
    SQL_DIR="../sql-scripts-70"
elif [[ "$ZABBIX_VERSION" == "7.4" ]]; then
    SQL_DIR="../sql-scripts-74"
else
    echo -e "${RED}Error: Unsupported Zabbix version. Use 7.0 or 7.4.${NC}"
    exit 1
fi

echo -e "${GREEN}Preparing environment for PostgreSQL $PG_VERSION and Zabbix $ZABBIX_VERSION...${NC}"

# Cleanup previous run
echo "Cleaning up containers and volumes..."
docker compose down -v > /dev/null 2>&1
rm -rf init_scripts
mkdir -p init_scripts

# Copy SQL scripts
echo "Setting up initialization scripts from $SQL_DIR..."

# 0. Extra users
if [[ -f "../init_extra_users.sql" ]]; then
    cp "../init_extra_users.sql" ./init_scripts/00_init_extra_users.sql
    echo "Copied extra user init script."
fi

# 1. Schema
if [[ -f "$SQL_DIR/schema.sql" ]]; then
    # Use 01_00 to ensure it comes before 01_10
    cp "$SQL_DIR/schema.sql" ./init_scripts/01_00_schema.sql

    # 1.1 Partitioning infrastructure
    if [[ -f "../../procedures/00_schema_create.sql" ]]; then
        cp "../../procedures/00_schema_create.sql" ./init_scripts/01_10_schema_create.sql
    fi
    if [[ -f "../../procedures/01_maintenance.sql" ]]; then
        cp "../../procedures/01_maintenance.sql" ./init_scripts/01_30_maintenance.sql
    fi
    if [[ -f "../../procedures/02_enable_partitioning.sql" ]]; then
        cp "../../procedures/02_enable_partitioning.sql" ./init_scripts/01_40_enable.sql
    fi
    if [[ -f "../../procedures/03_monitoring_view.sql" ]]; then
        cp "../../procedures/03_monitoring_view.sql" ./init_scripts/01_50_monitoring.sql
    fi
else
    echo -e "${RED}Error: schema.sql not found in $SQL_DIR${NC}"
    exit 1
fi

# 2. Images
if [[ -f "$SQL_DIR/images.sql" ]]; then
    cp "$SQL_DIR/images.sql" ./init_scripts/02_images.sql
else
    echo -e "${RED}Error: images.sql not found in $SQL_DIR${NC}"
    exit 1
fi

# 3. Data
if [[ -f "$SQL_DIR/data.sql" ]]; then
    cp "$SQL_DIR/data.sql" ./init_scripts/03_data.sql
else
    echo -e "${RED}Error: data.sql not found in $SQL_DIR${NC}"
    exit 1
fi

# 4. Mock history data
if [[ -f "../z_gen_history_data.sql" ]]; then
    cp "../z_gen_history_data.sql" ./init_scripts/04_gen_data.sql
    echo "Copied mock data generator."
else
    echo -e "${RED}Warning: z_gen_history_data.sql not found!${NC}"
fi

# Note: 7.4 file names might differ slightly depending on packaging;
# the layout above assumes the provided source tree.

# Export variable for Docker Compose
export PG_VERSION=$PG_VERSION

if [ "$USE_RDS" = "true" ]; then
    echo -e "${GREEN}Deploying directly to RDS environment...${NC}"
    if [ ! -f "../db_credentials" ]; then
        echo -e "${RED}Error: ../db_credentials file not found. Please create it first.${NC}"
        exit 1
    fi

    # Initialize RDS (create/drop user and db)
    if [ "$DROP_RDS" = "true" ]; then
        echo "Initializing Zabbix RDS user and database (with DROP requested)..."
        bash ../setup_rds.sh --drop
    else
        echo "Initializing Zabbix RDS user and database..."
        bash ../setup_rds.sh
    fi

    source ../db_credentials
    export PGPASSWORD="$ZBX_DB_PASSWORD"

    echo "Applying scripts from init_scripts/ to RDS..."
    for sql_file in ./init_scripts/*.sql; do
        echo "Executing $sql_file..."
        psql "host=$DB_HOST port=$DB_PORT dbname=$ZBX_DB_NAME user=$ZBX_DB_USER sslmode=$DB_SSL_MODE sslrootcert=../$DB_SSL_ROOT_CERT" -f "$sql_file" -v ON_ERROR_STOP=1
    done

    echo -e "${GREEN}RDS Environment ready.${NC}"
    echo "Connect: psql \"host=$DB_HOST port=$DB_PORT dbname=$ZBX_DB_NAME user=$ZBX_DB_USER sslmode=$DB_SSL_MODE sslrootcert=../$DB_SSL_ROOT_CERT\""
else
    # Run Docker Compose
    echo -e "${GREEN}Starting PostgreSQL container...${NC}"
    docker compose up -d

    echo -e "${GREEN}Waiting for database to be ready...${NC}"
    # Simple wait loop
    for i in {1..30}; do
        if docker exec zabbix-db-test pg_isready -U zabbix > /dev/null 2>&1; then
            echo -e "${GREEN}Database is ready!${NC}"
            break
        fi
        echo -n "."
        sleep 1
    done

    # Data generation continues inside the container after pg_isready succeeds
    echo "To follow initialization logs, run: docker logs -f zabbix-db-test"
    echo -e "${GREEN}Environment ready.${NC}"
    echo "Connect: psql -h localhost -p 5432 -U zabbix -d zabbix"
fi
|
||||
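The zero-padded numeric prefixes used for the `init_scripts` file names (`01_00`, `01_10`, `02`, `03`, `04`) exist so that plain lexicographic sorting — the order in which both the Postgres entrypoint and `psql` loops pick up the files — matches the intended execution order. A quick stand-alone sketch (file names here are just illustrative samples of the convention):

```shell
# Zero-padded prefixes sort correctly as strings: the base schema lands
# first, the partitioning procedure after it, and the data load last.
printf '%s\n' 03_data.sql 01_10_schema_create.sql 01_00_schema.sql 02_images.sql | sort
```

This is also why `01_00` rather than `01_0` is used: without the padding, `01_0` and `01_10` would still sort correctly, but mixed-width prefixes (`01_5` vs `01_10`) would not.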
5
postgresql/tests/init_extra_users.sql
Normal file
@@ -0,0 +1,5 @@
-- Create additional user for partitioning tasks
CREATE USER zbx_part WITH PASSWORD 'zbx_part';
GRANT CONNECT ON DATABASE zabbix TO zbx_part;
-- Grant usage on the public schema explicitly (PG 15+ tightened default public-schema privileges)
GRANT USAGE ON SCHEMA public TO zbx_part;
101
postgresql/tests/setup_rds.sh
Executable file
@@ -0,0 +1,101 @@
#!/bin/bash
set -e

# Change directory to script's location
cd "$(dirname "$0")"

DROP_DB=false
while [[ "$#" -gt 0 ]]; do
    case $1 in
        --drop) DROP_DB=true ;;
    esac
    shift
done

# Source credentials from db_credentials file
if [ -f "./db_credentials" ]; then
    echo "Loading credentials from db_credentials..."
    source ./db_credentials
else
    echo "Error: db_credentials file not found in $(pwd)"
    exit 1
fi

# 1. Provide the PEM key for AWS RDS if it does not exist yet
if [ -n "$DB_PEM_URL" ] && [ ! -f "$DB_SSL_ROOT_CERT" ]; then
    echo "Downloading SSL root certificate from AWS..."
    wget -qO "$DB_SSL_ROOT_CERT" "$DB_PEM_URL"
fi

# Ensure the PEM has the right permissions if it exists
if [ -f "$DB_SSL_ROOT_CERT" ]; then
    chmod 600 "$DB_SSL_ROOT_CERT"
fi

# 2. Log in as the RDS admin user (postgres) to create the zabbix user/database
echo "Connecting to PostgreSQL to create Zabbix user and database..."

export PGPASSWORD="$DB_PASSWORD"

# Create the zabbix user if it doesn't already exist
psql "host=$DB_HOST port=$DB_PORT dbname=$DB_NAME user=$DB_USER sslmode=$DB_SSL_MODE sslrootcert=$DB_SSL_ROOT_CERT" -v ON_ERROR_STOP=1 <<EOF
DO \$\$
BEGIN
    IF NOT EXISTS (SELECT FROM pg_catalog.pg_roles WHERE rolname = '$ZBX_DB_USER') THEN
        CREATE ROLE $ZBX_DB_USER WITH LOGIN PASSWORD '$ZBX_DB_PASSWORD';
    END IF;
END
\$\$;
EOF

echo "User '$ZBX_DB_USER' verified/created."

# Create the zabbix database if it doesn't already exist
DB_EXISTS=$(psql "host=$DB_HOST port=$DB_PORT dbname=$DB_NAME user=$DB_USER sslmode=$DB_SSL_MODE sslrootcert=$DB_SSL_ROOT_CERT" -t -c "SELECT 1 FROM pg_database WHERE datname='$ZBX_DB_NAME'" | tr -d '[:space:]')

if [ "$DROP_DB" = "true" ] && [ "$DB_EXISTS" = "1" ]; then
    echo -e "\n========================================"
    echo -e "                WARNING!                "
    echo -e "========================================"
    echo -e "You requested to completely DROP and RE-INITIALIZE the database '$ZBX_DB_NAME'."
    echo -e "This will delete ALL data. Are you sure you want to proceed?"
    read -p "Type 'yes' to proceed: " confirm_drop
    if [ "$confirm_drop" != "yes" ]; then
        echo "Database drop cancelled. Exiting."
        exit 1
    fi
    echo "Terminating active connections and dropping database..."
    psql "host=$DB_HOST port=$DB_PORT dbname=$DB_NAME user=$DB_USER sslmode=$DB_SSL_MODE sslrootcert=$DB_SSL_ROOT_CERT" -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = '$ZBX_DB_NAME' AND pid <> pg_backend_pid();"
    psql "host=$DB_HOST port=$DB_PORT dbname=$DB_NAME user=$DB_USER sslmode=$DB_SSL_MODE sslrootcert=$DB_SSL_ROOT_CERT" -c "DROP DATABASE $ZBX_DB_NAME;"
    DB_EXISTS=""
fi

if [ "$DB_EXISTS" != "1" ]; then
    echo "Database '$ZBX_DB_NAME' does not exist. Creating..."
    psql "host=$DB_HOST port=$DB_PORT dbname=$DB_NAME user=$DB_USER sslmode=$DB_SSL_MODE sslrootcert=$DB_SSL_ROOT_CERT" -c "CREATE DATABASE $ZBX_DB_NAME OWNER $ZBX_DB_USER;"
else
    echo "Database '$ZBX_DB_NAME' already exists."
fi

# Grant necessary permissions
psql "host=$DB_HOST port=$DB_PORT dbname=$DB_NAME user=$DB_USER sslmode=$DB_SSL_MODE sslrootcert=$DB_SSL_ROOT_CERT" -c "GRANT ALL PRIVILEGES ON DATABASE $ZBX_DB_NAME TO $ZBX_DB_USER;"

echo ""
echo "================================================================================"
echo "✅ Initialization Successful!"
echo "================================================================================"
echo "You can now use these settings in your Zabbix server configuration:"
echo "--------------------------------------------------------------------------------"
echo "DBHost=$DB_HOST"
echo "DBName=$ZBX_DB_NAME"
echo "DBUser=$ZBX_DB_USER"
echo "DBPassword=$ZBX_DB_PASSWORD"
echo "DBPort=$DB_PORT"
echo "DBTLSConnect=verify_full"
echo "DBTLSCAFile=$(realpath "$DB_SSL_ROOT_CERT")"
echo "================================================================================"
echo ""
echo "To connect manually for testing directly to the Zabbix DB:"
echo "export PGPASSWORD=\"$ZBX_DB_PASSWORD\""
echo "psql \"host=$DB_HOST port=$DB_PORT dbname=$ZBX_DB_NAME user=$ZBX_DB_USER sslmode=$DB_SSL_MODE sslrootcert=$DB_SSL_ROOT_CERT\""
echo ""
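The `DB_EXISTS` check in setup_rds.sh pipes the `psql -t` output through `tr -d '[:space:]'` because tuples-only mode still pads the result with spaces and a trailing newline, which would defeat the `= "1"` string comparison. A minimal stand-alone sketch of that trimming (the raw string below just mimics typical `psql -t` output):

```shell
# psql -t prints the value padded with whitespace, e.g. " 1\n";
# stripping all whitespace makes the string compare reliable.
raw=' 1
'
trimmed=$(printf '%s' "$raw" | tr -d '[:space:]')
echo "[$trimmed]"   # prints [1]
```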
277331
postgresql/tests/sql-scripts-70/data.sql
Normal file
File diff suppressed because it is too large
187
postgresql/tests/sql-scripts-70/images.sql
Normal file
File diff suppressed because one or more lines are too long
@@ -0,0 +1,49 @@
ALTER TABLE history RENAME TO history_old;
CREATE TABLE history (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value DOUBLE PRECISION DEFAULT '0.0000' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);

ALTER TABLE history_uint RENAME TO history_uint_old;
CREATE TABLE history_uint (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value numeric(20) DEFAULT '0' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);

ALTER TABLE history_str RENAME TO history_str_old;
CREATE TABLE history_str (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value varchar(255) DEFAULT '' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);

ALTER TABLE history_log RENAME TO history_log_old;
CREATE TABLE history_log (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    timestamp integer DEFAULT '0' NOT NULL,
    source varchar(64) DEFAULT '' NOT NULL,
    severity integer DEFAULT '0' NOT NULL,
    value text DEFAULT '' NOT NULL,
    logeventid integer DEFAULT '0' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);

ALTER TABLE history_text RENAME TO history_text_old;
CREATE TABLE history_text (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value text DEFAULT '' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);
3407
postgresql/tests/sql-scripts-70/schema.sql
Normal file
File diff suppressed because it is too large
287733
postgresql/tests/sql-scripts-74/data.sql
Normal file
File diff suppressed because it is too large
187
postgresql/tests/sql-scripts-74/images.sql
Normal file
File diff suppressed because one or more lines are too long
@@ -0,0 +1,49 @@
ALTER TABLE history RENAME TO history_old;
CREATE TABLE history (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value DOUBLE PRECISION DEFAULT '0.0000' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);

ALTER TABLE history_uint RENAME TO history_uint_old;
CREATE TABLE history_uint (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value numeric(20) DEFAULT '0' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);

ALTER TABLE history_str RENAME TO history_str_old;
CREATE TABLE history_str (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value varchar(255) DEFAULT '' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);

ALTER TABLE history_log RENAME TO history_log_old;
CREATE TABLE history_log (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    timestamp integer DEFAULT '0' NOT NULL,
    source varchar(64) DEFAULT '' NOT NULL,
    severity integer DEFAULT '0' NOT NULL,
    value text DEFAULT '' NOT NULL,
    logeventid integer DEFAULT '0' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);

ALTER TABLE history_text RENAME TO history_text_old;
CREATE TABLE history_text (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value text DEFAULT '' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);
3359
postgresql/tests/sql-scripts-74/schema.sql
Normal file
File diff suppressed because it is too large
319788
postgresql/tests/sql-scripts-80/data.sql
Normal file
File diff suppressed because it is too large
187
postgresql/tests/sql-scripts-80/images.sql
Normal file
File diff suppressed because one or more lines are too long
@@ -0,0 +1,49 @@
ALTER TABLE history RENAME TO history_old;
CREATE TABLE history (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value DOUBLE PRECISION DEFAULT '0.0000' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);

ALTER TABLE history_uint RENAME TO history_uint_old;
CREATE TABLE history_uint (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value numeric(20) DEFAULT '0' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);

ALTER TABLE history_str RENAME TO history_str_old;
CREATE TABLE history_str (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value varchar(255) DEFAULT '' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);

ALTER TABLE history_log RENAME TO history_log_old;
CREATE TABLE history_log (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    timestamp integer DEFAULT '0' NOT NULL,
    source varchar(64) DEFAULT '' NOT NULL,
    severity integer DEFAULT '0' NOT NULL,
    value text DEFAULT '' NOT NULL,
    logeventid integer DEFAULT '0' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);

ALTER TABLE history_text RENAME TO history_text_old;
CREATE TABLE history_text (
    itemid bigint NOT NULL,
    clock integer DEFAULT '0' NOT NULL,
    value text DEFAULT '' NOT NULL,
    ns integer DEFAULT '0' NOT NULL,
    PRIMARY KEY (itemid,clock,ns)
);
3386
postgresql/tests/sql-scripts-80/schema.sql
Normal file
File diff suppressed because it is too large
91
postgresql/tests/z_gen_history_data.sql
Normal file
@@ -0,0 +1,91 @@
-- ============================================================================
-- SCRIPT: z_gen_history_data.sql
-- DESCRIPTION: Generates mock data for Zabbix history and trends tables.
--              Creates a dummy host and items if they don't exist.
-- ============================================================================

DO $$
DECLARE
    v_hostid bigint := 900001;
    v_groupid bigint := 900001;
    v_interfaceid bigint := 900001;
    v_itemid_start bigint := 900001;
    v_start_time integer := extract(epoch from (now() - interval '7 days'))::integer;
    v_end_time integer := extract(epoch from now())::integer;
    i integer;
BEGIN
    -- 1. CREATE DUMMY STRUCTURES
    -- Host Group
    INSERT INTO hstgrp (groupid, name, uuid, type)
    VALUES (v_groupid, 'Partition Test Group', 'df77189c49034553999973d8e0500001', 0)
    ON CONFLICT DO NOTHING;

    -- Host
    INSERT INTO hosts (hostid, host, name, status, uuid)
    VALUES (v_hostid, 'partition-test-host', 'Partition Test Host', 0, 'df77189c49034553999973d8e0500002')
    ON CONFLICT DO NOTHING;

    -- Interface
    INSERT INTO interface (interfaceid, hostid, main, type, useip, ip, dns, port)
    VALUES (v_interfaceid, v_hostid, 1, 1, 1, '127.0.0.1', '', '10050')
    ON CONFLICT DO NOTHING;

    -- 2. CREATE DUMMY ITEMS AND GENERATE HISTORY

    -- Item 1: Numeric Float (HISTORY)
    INSERT INTO items (itemid, hostid, interfaceid, name, key_, type, value_type, delay, uuid)
    VALUES (v_itemid_start + 1, v_hostid, v_interfaceid, 'Test Float Item', 'test.float', 0, 0, '1m', 'df77189c49034553999973d8e0500003');

    INSERT INTO history (itemid, clock, value, ns)
    SELECT
        v_itemid_start + 1,
        ts,
        random() * 100,
        0
    FROM generate_series(v_start_time, v_end_time, 60) AS ts;

    INSERT INTO trends (itemid, clock, num, value_min, value_avg, value_max)
    SELECT
        v_itemid_start + 1,
        (ts / 3600) * 3600, -- Hourly truncation
        60,
        0,
        50,
        100
    FROM generate_series(v_start_time, v_end_time, 3600) AS ts;

    -- Item 2: Numeric Unsigned (HISTORY_UINT)
    INSERT INTO items (itemid, hostid, interfaceid, name, key_, type, value_type, delay, uuid)
    VALUES (v_itemid_start + 2, v_hostid, v_interfaceid, 'Test Uint Item', 'test.uint', 0, 3, '1m', 'df77189c49034553999973d8e0500004');

    INSERT INTO history_uint (itemid, clock, value, ns)
    SELECT
        v_itemid_start + 2,
        ts,
        (random() * 1000)::integer,
        0
    FROM generate_series(v_start_time, v_end_time, 60) AS ts;

    INSERT INTO trends_uint (itemid, clock, num, value_min, value_avg, value_max)
    SELECT
        v_itemid_start + 2,
        (ts / 3600) * 3600,
        60,
        0,
        500,
        1000
    FROM generate_series(v_start_time, v_end_time, 3600) AS ts;

    -- Item 3: Character (HISTORY_STR)
    INSERT INTO items (itemid, hostid, interfaceid, name, key_, type, value_type, delay, uuid)
    VALUES (v_itemid_start + 3, v_hostid, v_interfaceid, 'Test Str Item', 'test.str', 0, 1, '1m', 'df77189c49034553999973d8e0500005');

    INSERT INTO history_str (itemid, clock, value, ns)
    SELECT
        v_itemid_start + 3,
        ts,
        'test_value_' || ts,
        0
    FROM generate_series(v_start_time, v_end_time, 300) AS ts; -- Every 5 mins

END $$;
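The `(ts / 3600) * 3600` expressions in the trends INSERTs snap an epoch timestamp to the start of its hour via integer division, which discards the sub-hour remainder. The same arithmetic in shell, with an arbitrary epoch value for illustration:

```shell
# Integer division truncates, so multiplying back by 3600 yields the
# enclosing hour boundary (here for an arbitrary sample epoch):
ts=1700003605
hour_start=$(( (ts / 3600) * 3600 ))
echo "$hour_start"           # 1700002800
echo $(( ts - hour_start ))  # 805 seconds into that hour
```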