PipelineDB 0.9.1

PipelineDB 0.9.1 is here, download it now!

This release brings continuous triggers to PipelineDB open-source. Previously, they were only available to our enterprise users, but now they are available to everyone!

Continuous Triggers

You can now create triggers on continuous views which greatly simplifies building a real-time alerting system with PipelineDB. Check out our blog post to learn more about how continuous triggers work!
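As a rough sketch of what this looks like (the view definition, trigger name, threshold, and the fire_alert() function below are all illustrative assumptions, not from this release announcement), a trigger is created on a continuous view with the familiar CREATE TRIGGER syntax and fires as the view's rows change:

CREATE CONTINUOUS VIEW error_counts AS
  SELECT minute(arrival_timestamp) AS minute, count(*) AS errors
  FROM error_stream
  GROUP BY minute;

-- fire_alert() is a hypothetical user-defined function that sends the notification
CREATE TRIGGER high_error_rate
  AFTER UPDATE ON error_counts
  FOR EACH ROW
  WHEN (new.errors > 100)
  EXECUTE PROCEDURE fire_alert();

The WHEN clause keeps the alerting logic in the database, so the trigger only fires when the aggregate actually crosses the threshold.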

Debug Mode

PipelineDB packages now come with a debug build of the server binary. The debug binary is compiled at the -O0 optimization level with debug symbols and assertions enabled. Debug mode is designed to let us better support users when something goes wrong. It can be run in two ways:

First, by passing the -d/--debug flag to the pipeline-ctl binary:

pipeline-ctl -d -D ... start
pipeline-ctl --debug -D ... start

Or by executing the pipeline-server-debug binary directly:

pipeline-server-debug -D <data directory>

first_values Improvements

We added the first_values ordered-set aggregate in the last release, but it was fairly unusable: the aggregate's return type was anyarray, which made it impossible to call array functions on the result. We've fixed that in this release. If the sort expression is a single column, the return type is the array type of the column being sorted over. If the sort expression contains multiple columns, the return type is record[].


CREATE CONTINUOUS VIEW v0 AS SELECT first_values(3) WITHIN GROUP (ORDER BY x::int) FROM stream;

\d v0
     Continuous view "public.v0"
    Column    |   Type    | Modifiers
--------------+-----------+-----------
 first_values | integer[] |

CREATE CONTINUOUS VIEW v1 AS SELECT first_values(3) WITHIN GROUP (ORDER BY x::int, y::text) FROM stream;

\d v1
     Continuous view "public.v1"
    Column    |   Type   | Modifiers
--------------+----------+-----------
 first_values | record[] |

array_agg(anyarray) Support

PostgreSQL 9.5 added the ability to aggregate array types into arrays. PipelineDB now supports that as well!

CREATE CONTINUOUS VIEW v AS SELECT array_agg(ARRAY[x::int, y::int]) FROM stream;

INSERT INTO stream (x, y) VALUES (1, 2), (2, 4);
INSERT INTO stream (x, y) VALUES (3, 6), (4, 8);

SELECT array_agg FROM v;

         array_agg
----------------------------
 {{1,2},{2,4},{3,6},{4,8}}
(1 row)

Removed pipeline_kafka From Core Codebase

We removed pipeline_kafka from the core codebase to break the dependency between their release cycles. As a result, pipeline_kafka no longer comes pre-installed with PipelineDB packages, and you'll have to build and install the module yourself. The pipeline_kafka codebase can be found here, along with instructions for building and installing the extension.
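As a minimal sketch of the manual build (assuming a standard PGXS-style workflow, with PipelineDB's pg_config on your PATH and librdkafka installed; the clone URL is the project's GitHub repository, but defer to the repo's own README for the authoritative steps):

git clone https://github.com/pipelinedb/pipeline_kafka.git
cd pipeline_kafka
make
make install

Then, after restarting the server, enable the extension from a client session:

CREATE EXTENSION pipeline_kafka;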