(this might be related to a dbt seed issue regarding column types: #684)
I have a PostgreSQL 10 table with `text` columns.
However, the archive tool creates the archive table with columns of type `character varying(255)`, causing errors like `value too long for type character varying(255)` when the archive table is populated.
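Roughly what happens, as a minimal sketch (the table and column names here are made up, not my actual schema):

```sql
-- Source table on PostgreSQL 10: note is an unbounded text column
create table source_data (id integer, note text);
insert into source_data values (1, repeat('x', 300));

-- The archive table ends up with character varying(255) instead of text,
-- so copying a row longer than 255 characters fails with:
--   ERROR:  value too long for type character varying(255)
create table source_data_archive (id integer, note character varying(255));
insert into source_data_archive select id, note from source_data;
```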
Thanks @joevandyk - does this happen on the first dbt archive? Or does it happen on subsequent archives?
Archive uses a `create table as (...)` statement to initially build the archive table, so I'd be surprised if this didn't work on the first run. For subsequent runs, dbt executes `insert` statements to add new data to the archive table. Sometimes the new data is "wider" than the existing columns, but dbt should be able to widen the column automatically - roughly the pattern sketched below.
Any other information you can share here would be really helpful!
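For illustration, this is the general shape of what archive runs - a sketch only, with made-up table and column names; the SQL dbt actually generates differs in detail:

```sql
-- First run: the archive table is built with create table as,
-- so column types should be inherited from the source table
create table analytics.orders_archive as (
    select *, current_timestamp as valid_from
    from analytics.orders
);

-- Subsequent runs: new data is appended with insert statements
insert into analytics.orders_archive
select *, current_timestamp as valid_from
from analytics.orders
where updated_at > (select max(valid_from) from analytics.orders_archive);

-- If incoming values are wider than an existing varchar column,
-- the column can be widened first, e.g. on Postgres:
alter table analytics.orders_archive
    alter column status type character varying(512);
```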
Thanks! Yeah - `text` has a max width of 256 on Redshift, but it's unbounded on Postgres. We should 1) change our hardcoded 255 to 256 everywhere (don't know how that happened) and 2) fix the logic around `text` types on Postgres
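For context, a sketch of the difference between the two databases (not dbt's actual DDL):

```sql
-- Postgres: text has no length limit, so the archive column can just stay text
create table archive_pg (note text);
insert into archive_pg values (repeat('x', 100000));  -- works fine

-- Redshift: a column declared as text is stored as varchar(256),
-- so longer values don't fit there - which is where the 255/256 limit comes from
-- create table archive_rs (note text);  -- effectively varchar(256) on Redshift
```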