hasura migrate commands not working with GitHub Actions #3638

Closed

joshuarobs opened this issue Jan 6, 2020 · 2 comments
joshuarobs commented Jan 6, 2020

Edit: I've added a test repo that anyone can clone and try out for themselves: https://github.com/joshuarobs/making-hasura-work-with-github-actions
Just clone it, push something to master, and watch the magic "happen" (hint: it doesn't).

Here's a link to the latest build run log which failed: https://github.com/joshuarobs/making-hasura-work-with-github-actions/runs/375147959

This is my .github/workflows/nodejs.yml file:

name: Node CI

on: [push]

jobs:
  build:

    runs-on: ubuntu-16.04

    strategy:
      matrix:
        node-version: [12.x]

    steps:
    - uses: actions/checkout@v1
    - name: Install Hasura cli
      run: curl -L https://github.com/hasura/graphql-engine/raw/master/cli/get.sh | bash
    - name: Start up the Docker containers and migrate to the latest version of the database
      run: |
        docker-compose up -d
        sleep 4
        cd docker
        echo $(docker ps)
        hasura migrate status --skip-update-check --endpoint http://localhost:8080
    - name: Use Node.js ${{ matrix.node-version }}
      uses: actions/setup-node@v1
      with:
        node-version: ${{ matrix.node-version }}
    - name: npm install, build, and test
      run: |
        npm ci
        npm run build --if-present
        npm test
      env:
        CI: true
        DB_ENDPOINT: http://localhost:8080/v1/graphql

When it gets to the hasura migrate status --skip-update-check --endpoint http://localhost:8080 step, it fails with this error:

time="2020-01-06T05:16:59Z" level=fatal msg="version check: failed to get version from server: failed making version api call: Get http://localhost:8080/v1/version: read tcp 127.0.0.1:39732->127.0.0.1:8080: read: connection reset by peer"
##[error]Process completed with exit code 1.

If I run hasura migrate status --skip-update-check --endpoint http://localhost:8080 locally on my machine, it works fine, but when I run it through GitHub Actions, it fails.
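One way to rule out a race between docker-compose up and the CLI call is to poll the endpoint until it actually responds, instead of a fixed sleep 4, before running the migrate command. This is only a sketch, not what's in the repo; it assumes graphql-engine's /healthz endpoint is reachable on the same localhost:8080 mapping as above, and the 30-attempt limit is arbitrary:

# Poll until graphql-engine answers, instead of sleeping a fixed 4 seconds.
for i in $(seq 1 30); do
  if curl -fsS http://localhost:8080/healthz >/dev/null; then
    echo "Hasura is up after $i attempt(s)"
    break
  fi
  echo "Waiting for Hasura to come up... (attempt $i)"
  sleep 2
done

# Then run the same migrate command as in the workflow.
hasura migrate status --skip-update-check --endpoint http://localhost:8080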

So...

  • Is anyone else having this issue when trying to run Hasura CLI commands on GitHub Actions?
  • Is this Hasura's fault? (Personally, I don't think so, since running it locally works fine for me.)
  • Is this the fault of GitHub Actions somehow not letting me access a Docker container?
  • Is there any other CI that doesn't have this kind of problem?



joshuarobs commented Jan 7, 2020

After some more brute forcing to find the root cause of the problem, I think I've finally found a lead.

I ran pg_dump just to see what's in each database and found a discrepancy. This is the command I used: docker exec container-name_actions_postgres_1 pg_dump -U postgres -d postgres. The container name is obtained from docker ps, looking for the one that ends with postgres_1.

The actual full command, as seen in the repo, is: docker exec making-hasura-work-with-github-actions_postgres_1 pg_dump -U postgres -d postgres. It runs automatically in the build; running it locally and manually also works.
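For anyone trying this, the inspection boils down to the commands below. The Postgres container name is the one from this repo; the graphql-engine container name is an assumption based on docker-compose's project_service_1 naming, so check docker ps for the actual name:

# List running containers to find the ones created by docker-compose.
docker ps --format '{{.Names}}'

# Dump everything in the postgres database from inside the Postgres container.
docker exec making-hasura-work-with-github-actions_postgres_1 \
  pg_dump -U postgres -d postgres

# The graphql-engine container's logs show whether it ever connected to
# Postgres and built its catalog (container name assumed, see above).
docker logs making-hasura-work-with-github-actions_graphql-engine_1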

Here is the pg_dump I get from GitHub Actions:

--
-- PostgreSQL database dump
--

-- Dumped from database version 12.1 (Debian 12.1-1.pgdg100+1)
-- Dumped by pg_dump version 12.1 (Debian 12.1-1.pgdg100+1)

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;
--
-- PostgreSQL database dump complete
--

Here's where it gets even more interesting: the pg_dump on my local machine:

--
-- PostgreSQL database dump
--

-- Dumped from database version 12.1 (Debian 12.1-1.pgdg100+1)
-- Dumped by pg_dump version 12.1 (Debian 12.1-1.pgdg100+1)

SET statement_timeout = 0;
SET lock_timeout = 0;
SET idle_in_transaction_session_timeout = 0;
SET client_encoding = 'UTF8';
SET standard_conforming_strings = on;
SELECT pg_catalog.set_config('search_path', '', false);
SET check_function_bodies = false;
SET xmloption = content;
SET client_min_messages = warning;
SET row_security = off;

--
-- Name: hdb_catalog; Type: SCHEMA; Schema: -; Owner: postgres
--

CREATE SCHEMA hdb_catalog;


ALTER SCHEMA hdb_catalog OWNER TO postgres;

--
-- Name: hdb_views; Type: SCHEMA; Schema: -; Owner: postgres
--

CREATE SCHEMA hdb_views;


ALTER SCHEMA hdb_views OWNER TO postgres;

--
-- Name: pgcrypto; Type: EXTENSION; Schema: -; Owner: -
--

CREATE EXTENSION IF NOT EXISTS pgcrypto WITH SCHEMA public;


--
-- Name: EXTENSION pgcrypto; Type: COMMENT; Schema: -; Owner: 
--

COMMENT ON EXTENSION pgcrypto IS 'cryptographic functions';


--
-- Name: hdb_schema_update_event_notifier(); Type: FUNCTION; Schema: hdb_catalog; Owner: postgres
--

CREATE FUNCTION hdb_catalog.hdb_schema_update_event_notifier() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
  DECLARE
    instance_id uuid;
    occurred_at timestamptz;
    curr_rec record;
  BEGIN
    instance_id = NEW.instance_id;
    occurred_at = NEW.occurred_at;
    PERFORM pg_notify('hasura_schema_update', json_build_object(
      'instance_id', instance_id,
      'occurred_at', occurred_at
      )::text);
    RETURN curr_rec;
  END;
$$;


ALTER FUNCTION hdb_catalog.hdb_schema_update_event_notifier() OWNER TO postgres;

--
-- Name: inject_table_defaults(text, text, text, text); Type: FUNCTION; Schema: hdb_catalog; Owner: postgres
--

CREATE FUNCTION hdb_catalog.inject_table_defaults(view_schema text, view_name text, tab_schema text, tab_name text) RETURNS void
    LANGUAGE plpgsql
    AS $$
    DECLARE
        r RECORD;
    BEGIN
      FOR r IN SELECT column_name, column_default FROM information_schema.columns WHERE table_schema = tab_schema AND table_name = tab_name AND column_default IS NOT NULL LOOP
          EXECUTE format('ALTER VIEW %I.%I ALTER COLUMN %I SET DEFAULT %s;', view_schema, view_name, r.column_name, r.column_default);
      END LOOP;
    END;
$$;


ALTER FUNCTION hdb_catalog.inject_table_defaults(view_schema text, view_name text, tab_schema text, tab_name text) OWNER TO postgres;

--
-- Name: insert_event_log(text, text, text, text, json); Type: FUNCTION; Schema: hdb_catalog; Owner: postgres
--

CREATE FUNCTION hdb_catalog.insert_event_log(schema_name text, table_name text, trigger_name text, op text, row_data json) RETURNS text
    LANGUAGE plpgsql
    AS $$
  DECLARE
    id text;
    payload json;
    session_variables json;
    server_version_num int;
  BEGIN
    id := gen_random_uuid();
    server_version_num := current_setting('server_version_num');
    IF server_version_num >= 90600 THEN
      session_variables := current_setting('hasura.user', 't');
    ELSE
      BEGIN
        session_variables := current_setting('hasura.user');
      EXCEPTION WHEN OTHERS THEN
                  session_variables := NULL;
      END;
    END IF;
    payload := json_build_object(
      'op', op,
      'data', row_data,
      'session_variables', session_variables
    );
    INSERT INTO hdb_catalog.event_log
                (id, schema_name, table_name, trigger_name, payload)
    VALUES
    (id, schema_name, table_name, trigger_name, payload);
    RETURN id;
  END;
$$;


ALTER FUNCTION hdb_catalog.insert_event_log(schema_name text, table_name text, trigger_name text, op text, row_data json) OWNER TO postgres;

SET default_tablespace = '';

SET default_table_access_method = heap;

--
-- Name: event_invocation_logs; Type: TABLE; Schema: hdb_catalog; Owner: postgres
--

CREATE TABLE hdb_catalog.event_invocation_logs (
    id text DEFAULT public.gen_random_uuid() NOT NULL,
    event_id text,
    status integer,
    request json,
    response json,
    created_at timestamp without time zone DEFAULT now()
);


ALTER TABLE hdb_catalog.event_invocation_logs OWNER TO postgres;

--
-- Name: event_log; Type: TABLE; Schema: hdb_catalog; Owner: postgres
--

CREATE TABLE hdb_catalog.event_log (
    id text DEFAULT public.gen_random_uuid() NOT NULL,
    schema_name text NOT NULL,
    table_name text NOT NULL,
    trigger_name text NOT NULL,
    payload jsonb NOT NULL,
    delivered boolean DEFAULT false NOT NULL,
    error boolean DEFAULT false NOT NULL,
    tries integer DEFAULT 0 NOT NULL,
    created_at timestamp without time zone DEFAULT now(),
    locked boolean DEFAULT false NOT NULL,
    next_retry_at timestamp without time zone,
    archived boolean DEFAULT false NOT NULL
);


ALTER TABLE hdb_catalog.event_log OWNER TO postgres;

--
-- Name: event_triggers; Type: TABLE; Schema: hdb_catalog; Owner: postgres
--

CREATE TABLE hdb_catalog.event_triggers (
    name text NOT NULL,
    type text NOT NULL,
    schema_name text NOT NULL,
    table_name text NOT NULL,
    configuration json,
    comment text
);


ALTER TABLE hdb_catalog.event_triggers OWNER TO postgres;

--
-- Name: hdb_allowlist; Type: TABLE; Schema: hdb_catalog; Owner: postgres
--

CREATE TABLE hdb_catalog.hdb_allowlist (
    collection_name text
);


ALTER TABLE hdb_catalog.hdb_allowlist OWNER TO postgres;

--
-- Name: hdb_check_constraint; Type: VIEW; Schema: hdb_catalog; Owner: postgres
--

CREATE VIEW hdb_catalog.hdb_check_constraint AS
 SELECT (n.nspname)::text AS table_schema,
    (ct.relname)::text AS table_name,
    (r.conname)::text AS constraint_name,
    pg_get_constraintdef(r.oid, true) AS "check"
   FROM ((pg_constraint r
     JOIN pg_class ct ON ((r.conrelid = ct.oid)))
     JOIN pg_namespace n ON ((ct.relnamespace = n.oid)))
  WHERE (r.contype = 'c'::"char");


ALTER TABLE hdb_catalog.hdb_check_constraint OWNER TO postgres;

...

That's not even the full output. For some reason, the pg_dump from the GitHub Actions machine doesn't contain any of the SQL (the hdb_catalog schema, its functions, and its tables) that I'd assume makes Hasura work.

And this is with an "empty" Hasura database: going to http://localhost:8080/console, the Data tab shows no tables.

I'm not sure what to do past this point, but I'll try to find a way to make the CI machine start up the same way as my local dev environment.

Does anyone have any ideas as to why none of that SQL is there when this is built on the server?
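A quicker check than reading the whole dump is to ask Postgres directly whether the hdb_catalog schema was ever created (same assumed Postgres container name as above):

# Prints 1 if graphql-engine managed to create its metadata schema,
# and prints nothing if it never initialized (which is what the Actions dump suggests).
docker exec making-hasura-work-with-github-actions_postgres_1 \
  psql -U postgres -d postgres -tAc \
  "SELECT 1 FROM information_schema.schemata WHERE schema_name = 'hdb_catalog';"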

joshuarobs commented

Closing this issue, as I'm going to fold it into a new one that focuses on getting Hasura to work with GitHub Actions as part of a proper DevOps CI/CD pipeline workflow.
