Developer Documentation
First clone the repository if you haven't already:
$ git clone https://github.com/virtualcommons/port-of-mars.git
These instructions assume familiarity with the Linux / macOS command line. They may also work on a Windows Subsystem for Linux (WSL) setup with Docker Engine, but we don't have the resources to test that. If you get it working, please let us know by submitting an issue or posting in our GitHub discussions with the relevant details.
Alternatively, running Linux in a virtual machine should let you follow this guide without issue. A guide for running the latest Ubuntu release (22.04/22.10) with VirtualBox 7 can be found here: https://ubuntu.com/tutorials/how-to-run-ubuntu-desktop-on-a-virtual-machine-using-virtualbox. For this to be practical, allocate as much of your system's resources as possible to the virtual machine (ideally 8GB+ of RAM and half of your CPU cores) and limit background processes on the host. Additional optimizations like https://blogs.oracle.com/scoter/post/oracle-vm-virtualbox-61-3d-acceleration-for-ubuntu-1804-and-2004-virtual-machines can be enabled to squeeze out more performance.
Base dependencies:
- An up-to-date version of Docker installed
- make
- bash 5.x
- gettext package for envsubst
You might want to alias `docker compose` to something easier to type since it is used frequently, for example:
$ alias doc="docker compose"
macOS ships with some pretty ancient CLI tooling that won't cut it to build this application (bash 3.x, for example... come on, mac!). In order to build and run Port of Mars from the command line we'll need to upgrade these tools. We've tested this using MacPorts, but Homebrew should also work. You'll also need Xcode and the Xcode Command Line Tools.
- Download and install MacPorts.
- Follow the MacPorts guide for installing Xcode and the Xcode Command Line Tools.
- Install / upgrade bash and gettext:
$ sudo port install bash gettext
- (Apple Silicon M1/M2 chips only) Set the `DOCKER_DEFAULT_PLATFORM` environment variable to `linux/amd64`, e.g., `$ export DOCKER_DEFAULT_PLATFORM=linux/amd64`. To make sure this is always set in your environment, put the export command in your shell startup script (e.g., `.bash_profile` for bash, `.zprofile` for zsh, etc.). You can find out what shell you are running with the command `$ echo $0`.
Note
Increasing the virtual disk size may also be necessary, since compatibility/OS overhead can take up a large portion of the default allocation. This can be done in Docker Desktop (Settings > Resources > Disk image size).
The rest of these instructions should apply to Linux, macOS, and WSL once you have the base dependencies installed.
In the root directory of your cloned port-of-mars repository, run:
$ ./configure dev # this may fail if you do not have envsubst / gettext installed or are using an old version of bash
$ make initialize # create a new docker-compose.yml, build client and server docker images, initialize database
$ make deploy # start the docker containers up
In general, `$ make deploy` is what you'll use to start all of the services (client, server, database, redis). To pick up new changes, run `$ git pull && make deploy`.
At this point you should be able to access your local instance of Port of Mars at http://localhost:8081
This is a hot-reloading server so any changes to the client should automagically get deployed and made visible without doing anything. If you do run into problems though or need to rebuild your container images, the following commands are often helpful:
$ docker compose restart server # server|client|db|redis or whatever service name in docker-compose.yml you want to restart
$ docker compose down # bring down all services to keep your system load clean
$ docker compose build --pull # rebuild container images (run docker compose up -d to recreate them afterwards)
$ docker compose exec server bash # enter the running server container (replace server with any service name defined in `docker-compose.yml`)
If you are starting from scratch you will need to initialize the database, set up the schema, and then load data fixtures for the Port of Mars tutorial quiz questions.
$ docker compose exec server bash # open a bash shell into the server container
$ npm run initdb # DESTRUCTIVE OPERATION, be careful with this! Drops the existing port of mars database if it exists
You should now be able to test Port of Mars by visiting http://localhost:8081 in your browser.
Note that much of this setup is optional and only serves to give a sensible dev environment. Configuration can be customized to suit one's preferred workflow.
VSCode is recommended as the primary editor/IDE for this project due to native plugin support for Vue/Typescript as well as Live Share, which makes it easy to collaborate with each other.
- Vue - Official - Vue language features
- ESLint and Prettier - code linting/formatting
- GitLens - blame annotations and more for git
- Live Share - collaboration
The following VSCode settings configure the Prettier extension to use the project config and format JS, TS, and Vue files on save.
Create / edit the file `.vscode/settings.json` relative to your project root directory:
{
"editor.tabSize": 2,
"editor.formatOnSave": true,
"editor.formatOnPaste": false,
"prettier.configPath": ".prettierrc",
"[typescript]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[javascript]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
},
"[vue]": {
"editor.defaultFormatter": "esbenp.prettier-vscode"
}
}
Since Port of Mars uses Docker to containerize the application, `node_modules` will only exist inside the containers, and your editor will complain about unknown modules.
VSCode has some support for developing within containers (https://code.visualstudio.com/docs/devcontainers/containers) that you can customize so that library dependencies in your editor resolve properly.
Another way around this, which also gives you access to code completion, is to mirror the dependencies on the host/locally, which we'll go over below:
Install node and npm with Node Version Manager (recommended)
$ curl -o- https://raw.githubusercontent.com/nvm-sh/nvm/v0.39.3/install.sh | bash
$ source ~/.bashrc # for bash shell, alternatively close and re-open the terminal
$ nvm install lts/gallium # install node 16 lts and npm
$ nvm use lts/gallium
# check to make sure everything worked
$ node --version # should be v16.x.x
$ npm --version
Install project dependencies
$ cd port-of-mars # make sure you are in the project root
# install packages from the lockfiles generated by the containers
$ for dir in {client,server,shared} ; do (cd "$dir" && npm ci) ; done
Being able to see git history is extremely useful for understanding a codebase or a specific piece of code. This can be done on GitHub, with `git blame`, or in your editor with something like GitLens. Large changes that provide no context (formatting, for example) can be ignored with `git blame --ignore-rev`. In order to have GitLens and `git blame` ignore these revisions by default, add the `.git-blame-ignore-revs` file, which indexes commit hashes we want to ignore, to your git config with:
$ git config blame.ignoreRevsFile .git-blame-ignore-revs
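The `.git-blame-ignore-revs` file itself is just a list of full 40-character commit hashes, one per line, with `#` comment lines allowed. For example (the hash below is a placeholder, not a real revision):

```
# reformat entire codebase with prettier (placeholder hash)
0000000000000000000000000000000000000000
```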
Research projects, like startups, have evolving needs - this often leads to changes in the data model and underlying schema. To manage these changes we use typeorm migrations so we can reliably test data model changes locally and synchronize staging and production databases reproducibly.
All of these commands need to be executed inside the server Docker container, so first run `$ docker compose exec server bash` before running them.
We can ask TypeORM to automagically generate a migration file with autodetected schema changes using the `typeorm migration:generate` command.
$ npm run -- typeorm migration:generate src/migration/NameOfMigration
This will generate a new file in `migration/` with `up()` and `down()` methods based on the changes to the schema (Entities).
If we want to create an empty migration (for example, to perform a data cleaning migration), we can use the `typeorm migration:create` command instead.
To display or run database migrations (for schema changes etc.) use `npm run typeorm migration:show` and `npm run typeorm migration:run`.
Example:
$ npm run typeorm migration:show # shows all available migrations and any pending ones
ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js migration:show
query: SELECT * FROM "information_schema"."tables" WHERE "table_schema" = current_schema() AND "table_name" = 'migrations'
query: SELECT * FROM "migrations" "migrations" ORDER BY "id" DESC
[X] Initial1600968396723
[ ] UserMetadataAddition1607117297405
$ npm run typeorm migration:run
ts-node -r tsconfig-paths/register ./node_modules/typeorm/cli.js migration:run
query: SELECT * FROM "information_schema"."tables" WHERE "table_schema" = current_schema() AND "table_name" = 'migrations'
query: SELECT * FROM "migrations" "migrations" ORDER BY "id" DESC
1 migrations are already loaded in the database.
2 migrations were found in the source code.
Initial1600968396723 is the last executed migration. It was executed on Thu Sep 24 2020 17:26:36 GMT+0000 (Coordinated Universal Time).
1 migrations are new migrations that needs to be executed.
query: START TRANSACTION
query: ALTER TABLE "user" ADD "isActive" boolean NOT NULL DEFAULT true
query: ALTER TABLE "user" ADD "dateCreated" TIMESTAMP NOT NULL DEFAULT now()
query: INSERT INTO "migrations"("timestamp", "name") VALUES ($1, $2) -- PARAMETERS: [1607117297405,"UserMetadataAddition1607117297405"]
Migration UserMetadataAddition1607117297405 has been executed successfully.
query: COMMIT
Done in 1.75s.
Changes to the schema and migrations should be committed together to keep things in sync.
In addition to the CI workflow, you can run Prettier and ESLint locally with:
$ docker compose exec client bash
$ npm run lint # runs eslint, checking for potential issues in the code
$ npm run style # runs prettier, checking for formatting
# the same can be done for the server
$ docker compose exec server bash
$ npm run lint
$ npm run style
Server tests: https://github.com/virtualcommons/port-of-mars/tree/main/server/tests
Client tests: https://github.com/virtualcommons/port-of-mars/tree/main/client/tests
You can run all of the tests via
$ make test
`make docker-compose.yml` generates the `docker-compose.yml` file from templates and can be re-run to apply any changes to those templates.
`make browser` requires the open-url-in-container Firefox extension: https://addons.mozilla.org/en-US/firefox/addon/open-url-in-container/
$ ./configure staging
$ make deploy
Copy the Sentry DSN url into `keys/sentry_dsn`. Then:
$ ./configure prod
$ make deploy
You can interact with the server through a basic command line interface tool with various utility commands: exporting data, modifying user data, etc. Enter the server container with `docker compose exec server bash`, then run `npm run cli -- <command>`. In production you'll need to run `npm run cli:prod -- <command>` instead.
# list available subcommands
$ npm run cli -- --help
# usage
$ npm run cli -- [options] [command]
- `npm run cli -- accounts setadmin --username <username>` - set the admin flag on a user, allowing access to the admin dashboard
Data can be exported from the database with the `dump.sh` script. You must pass in a required `tournamentRoundId` parameter and, optionally, game ids via `gids` (numbers separated by spaces). Note that open beta games should all fall under the same open beta `tournamentRoundId`, which may vary depending on the state of the database.
Example:
$ ./dump.sh dev --tournamentRoundId <id> # the tournament round to export
$ ./dump.sh dev --tournamentRoundId <id> --gids 1 5 9 15 # also filter by specific games for the given tournament round
Production
In staging / production, change `dev` to `prod`:
$ ./dump.sh prod --tournamentRoundId 11 # dump all games for tournament round 11
This generates CSV files with every persisted game event as well as summary CSV files with game, player, and tournament data. The CSV files will be placed in the `docker/dump` folder.
The entry point is currently `exportData` in https://github.com/virtualcommons/port-of-mars/blob/main/server/src/cli.ts
The general flow is to query all game events and players given the constraints (list of ids, min / from date) and then run a series of Summarizers over them.
The entire Mars Event deck is serialized by `TakenStateSnapshot`, which captures the entire state of the game; currently it only runs once, at the beginning of the game.
The Summarizers iterate over the querysets and generate CSVs based on the data within, usually by applying game events, in order, to a fresh GameState and then serializing the results.
`GameEventSummarizer` emits a CSV where each line has the serialized game event, the game state before the event was applied, the game state after the event was applied, and the role that generated the event. NOTE: to support flexible group sizes this will need to be changed to a player ID, with a server ID sentinel value indicating that the server generated the event.
`AccomplishmentSummarizer` emits the static set of Accomplishments (in-code / in-memory) at the time of data export as a CSV. This may need to be versioned and/or moved into the DB to support differing sets of Accomplishments across different runs of the Port of Mars https://github.com/virtualcommons/port-of-mars/issues/719
The post-processing data workflow developed by @cpritcha runs a series of R functions over the raw CSVs in `/dump`. The entry point is https://github.com/virtualcommons/port-of-mars/blob/a43e95fd827f6e344cd6aa02c3e2dd29d8dec208/analytics/R/export.R
This step generates intermediate mostly long form data.
Once the entire tournament has been run and we want to aggregate data and combine it with the survey data (currently from Qualtrics), we need to convert it into wide form and manipulate its structure. This code is in https://github.com/virtualcommons/port-of-mars-analysis and its entry point is `main.R`
TODO: tournament_dir should probably be a parameter, max_game_rounds should be part of the GameMetadata https://github.com/virtualcommons/port-of-mars/issues/721
`GameMetadata` should be read in from some kind of summary file and then incorporated into `game.R`'s `game_metadata_expand`, which produces the "codebook" describing the data columns and what they mean.
IMPORTANT: The `survey_cultural_renames`, `survey_round_begin_renames`, and `survey_round_end_renames` data structures in https://github.com/virtualcommons/port-of-mars-analysis/blob/c48147011d3854d75e12e8b8d9947faf2e32e912/R/survey.R#L14 will need to be updated whenever the Qualtrics survey changes (any new questions, reordered questions, etc.)
This diagram gives a general idea of the application structure:
The reigning design principles of the Port of Mars client are:
- Consistency - repeating design and components as well as meeting expectations by using common standards
- Simplicity - eliminating visual clutter in order to keep focus on important elements
- User control - give people the freedom to take actions and undo those actions
Port of Mars currently uses Bootstrap-Vue which composes and extends Bootstrap v4 into reusable Vue components. This provides a convenient UI framework that handles responsiveness and accessibility.
In order to maintain consistency in styling, bootstrap-vue components and bootstrap 'helper classes' are used first. Then, if additional styling is needed, use scoped styles within the component file or global styles if it is shared across components.
Port of Mars is a Single-page application.
The main, 'top level' pages of the application are located in `client/src/views/`. These are registered in `shared/src/routes.ts` and then `client/src/router.ts`, at which point vue-router renders the correct view associated with the URL in the browser.
Pages are generally composed of multiple reusable components, creating a hierarchical structure.
Components are defined in `client/src/components/` and are organized by concern, typically corresponding to a page, or in `global/` if used by multiple pages.
The majority of application data is maintained in a global state, eliminating the need for a complex web of components passing data back and forth. Vuex is used as a state management library.
The client API provides a clean interface with the server-side API. For non-game-related components, AJAX requests are sent to the server to either send or retrieve data and either store data in the global state or return it. For game-related components, the Colyseus client API is used to synchronize game state between the client and the server.
The `shared` directory in the project contains code that is used by both the client and the server and is duplicated on each when building the application. This includes shared application settings, some utility functions, and most notably: type definitions shared between the client and server.
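For instance, a shared type definition might look something like this (a hypothetical sketch; the names are illustrative, not the actual contents of `shared/src/`):

```typescript
// Hypothetical shared definitions: because both client and server import
// the same types, payloads exchanged between them stay in sync by construction.
export type Phase = "invest" | "trade" | "purchase" | "discard" | "events";

export interface PlayerClientData {
  role: string;          // the player's role in the game
  victoryPoints: number; // points accumulated so far
  ready: boolean;        // whether the player is ready to advance the phase
}
```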
Entities defined in `server/src/entity/` are Typescript classes that map to database tables. Port of Mars uses TypeORM as an object-relational mapper, which provides an interface for defining the schema and querying the database in Typescript. The official docs are often lacking; a better reference exists at https://orkhan.gitbook.io/typeorm/docs.
Services defined in server/src/services/
contain the majority of logic on the server that is not directly related to game logic. In many cases, services are querying or updating data in the main database using the TypeORM Repository API.
The `Settings` service interfaces with a Redis instance, which is used for dynamic application settings (configuration that we want to be able to change at runtime).
The `Persistence` service is responsible for storing ongoing game data in the form of an event stream in the database.
The `Replay` service, on the other hand, simulates games stored by `Persistence` by re-hydrating the game state and reapplying the events in order. This is how games are exported for further analysis.
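Conceptually, re-hydration amounts to folding the stored event stream over a fresh game state. The following is a simplified sketch with assumed names, not the actual `Replay` service API:

```typescript
// Simplified sketch of event-stream replay; the state shape and event
// interface are illustrative stand-ins for the real server types.
interface GameState {
  systemHealth: number;
  round: number;
}

interface PersistedGameEvent {
  // each stored event knows how to apply itself to the game state
  apply(state: GameState): void;
}

// Replaying a game: start from a fresh state, apply events in order.
function replay(events: PersistedGameEvent[]): GameState {
  const state: GameState = { systemHealth: 100, round: 1 };
  for (const event of events) {
    event.apply(state);
  }
  return state;
}
```

Because the final state is a pure function of the event stream, the same stream can be replayed at any time to reproduce a game for export or analysis.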
`server/src/routes` contains Express endpoints that handle requests from the client by calling a service that usually either returns or updates data. Routes should generally not implement any logic; they should delegate to services and only worry about error handling.
Both the main game and the lobby/waiting room are Colyseus Rooms.
Rooms use Colyseus Schema, a synchronizable structure (meaning syncing between client/server is handled for us), for managing the game state. Schemas can be nested into a hierarchical structure, which the game state makes use of.
The bulk of the game logic is structured as `Commands` that execute `GameEvents`, which encapsulate the functionality needed to apply the event to the game state. A game room receives messages/requests from the client and then executes a command, while a game loop maintains the clock and applies server actions such as bot actions and phase switching.
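A rough sketch of that flow, with assumed names (the real Command and GameEvent classes live in the server game code and are more involved):

```typescript
// Illustrative sketch of the Command -> GameEvent flow; all names here
// are hypothetical stand-ins for the real server classes.
interface GameState {
  timeBlocks: number;
}

interface GameEvent {
  apply(state: GameState): void;
}

// An event encapsulates a single state change.
class TimeInvested implements GameEvent {
  constructor(private amount: number) {}
  apply(state: GameState): void {
    state.timeBlocks -= this.amount;
  }
}

// A command validates a client request and returns the events to apply;
// invalid requests simply produce no events.
class InvestCommand {
  constructor(private amount: number) {}
  execute(state: GameState): GameEvent[] {
    if (this.amount < 0 || this.amount > state.timeBlocks) {
      return [];
    }
    return [new TimeInvested(this.amount)];
  }
}
```

Separating validation (commands) from state changes (events) is what makes the persisted event stream safe to replay: only events that passed validation are ever stored.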
The lobby logic is relatively simple, so it is all contained within the lobby state and lobby room classes.
`server/tests/` contains a suite of automated functional and unit tests written using the Jest framework. We aim for coverage of the most important functionality of the application, i.e. game functionality (is the game state correctly modified after each event?), replay (is a game stored and simulated correctly?), registration (can a user go through the full process of signing up?), etc. These tests run automatically when pushing to the upstream repository but can also be run locally with `make test`.
see also: port-of-mars/wiki/Technology-Stack