If it is a single-row table, there is no risk whatsoever in filling it with a single row that can be NULL, as @Gordon Linoff suggests.
Internally, be aware that Vertica always implements an UPDATE as a DELETE (by adding a delete vector for the affected row) followed by an INSERT.
That is no problem for a single-row table: the Tuple Mover (to put it simply, the background daemon process that wakes up every 5 minutes to de-fragment the internal storage) will create a single Read Optimized Storage (ROS) container out of: the previous value; the delete vector pointing to that previous value, thus deactivating it; and the newly inserted value it was updated to.
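If you want to watch this happening, Vertica exposes the storage layout through system tables. A sketch, assuming the `v_monitor.storage_containers` system table; the table itself is standard, but its exact column set varies by Vertica version, so treat the column names as assumptions:

```sql
-- list the ROS containers backing table1's projections, including how many
-- of their rows are currently marked deleted by delete vectors
SELECT node_name, projection_name, storage_type,
       total_row_count, deleted_row_count
FROM   v_monitor.storage_containers
WHERE  projection_name ILIKE 'table1%';
```

Running this between the statements below lets you watch the container count grow with each UPDATE and then collapse back to one after the Tuple Mover's mergeout.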
So:
CREATE TABLE table1 (
mycol VARCHAR(16)
) UNSEGMENTED ALL NODES; -- a small table, replicate it across all nodes
-- now you have an empty table
-- for the following scenario, I assume you commit the changes every time, as other connected
-- processes will want to see the data you changed
-- then, only once:
INSERT INTO table1 VALUES (NULL::VARCHAR(16));
-- now, you get a ROS container for one row.
-- Later:
UPDATE table1 SET mycol='first value';
-- a DELETE vector is created to mark the initial "NULL" value as invalid
-- a new row is added to the ROS container with the value "first value"
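-- (optional) at this point you can inspect the delete vector itself;
-- the v_monitor.delete_vectors system table is standard, but treat the
-- exact column names as version-dependent assumptions:
SELECT node_name, projection_name, deleted_row_count
FROM   v_monitor.delete_vectors
WHERE  projection_name ILIKE 'table1%';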
-- Then, before 5 minutes have elapsed, you go:
UPDATE table1 SET mycol='second value';
-- another DELETE vector is created, in a new delete-vector-ROS-container,
-- to mark "first value" as invalid
-- another new row is added to a new ROS container, containing "second value"
-- Now 5 minutes have elapsed since the start, the Tuple Mover sees there's work to do,
-- and:
-- - it reads the ROS containers containing "NULL" and "first value"
-- - it reads the delete-vector-ROS containers marking both "NULL" and "first value"
-- as invalid
-- - it reads the last ROS container containing "second value"
-- --> and it finally merges all of these into a brand new ROS container
-- containing only "second value"; the four other ROS containers are then deleted.
With a single-row table, this works wonderfully. Don't do it like that for a billion rows.
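If you do not want to wait for the periodic mergeout, you can trigger and monitor it yourself. A sketch, assuming the standard DO_TM_TASK function and the v_monitor.tuple_mover_operations system table (again, column names can differ between Vertica versions):

```sql
-- force a mergeout for table1 instead of waiting for the Tuple Mover's timer
SELECT DO_TM_TASK('mergeout', 'table1');

-- check what the Tuple Mover has been doing on this table
SELECT table_name, operation_name, operation_status
FROM   v_monitor.tuple_mover_operations
WHERE  table_name = 'table1';
```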