Releases · AppScale/gts
AppScale 3.4.0
Highlights of features/bugs in this release:
- Upgraded AppScale images to Xenial
- Introduced ability to deploy and manage multiple services
- Improved celery worker performance by using eventlet pool
- Reduced the number of dashboards running for smaller deployments
- Added capability to store version details in Zookeeper
- Upgraded Cassandra to 3.11
- Improved various autoscaling policies
- Reduced verbosity of logs to improve readability
- Fixed a bug which left behind old celery producer connections
- Moved Hermes configurations to Zookeeper
- Added support for UpdateQueues API
- Allowed external access to Hermes
- Allowed use of separate HAProxy for appservers and services
- Handled graceful instance termination with the AppManager
- Handled graceful stop of Taskqueue servers
- Added some monit interface improvements
- Fixed a bug so that datastore clients keep connections alive
- Changed hosting structure to manage revisions
- Allowed haproxy timeout to be tunable from the tools
- Removed the need for email address while deploying apps
- Fixed a bug so that nodes with the open role are considered before spawning new instances
- Fixed HAProxy stop commands
- Removed tracking of application metadata from UAServer and relied on Zookeeper instead
- Added support for resumable package downloads after failures during the build
- Implemented the datastore_v4.AllocateIds API (see the sketch after this list)
- Upgraded Kazoo to 2.4.0
- Fixed a bug to properly handle non-ASCII characters in the Search API
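As a point of reference for the AllocateIds item above, ID allocation is what an application triggers through the SDK's allocate_ids helpers; a minimal sketch follows, with a hypothetical Counter kind, and which AllocateIds RPC version the call maps to depends on the runtime:

```python
# Minimal sketch: reserving a block of datastore IDs from a Python app.
# The Counter kind is hypothetical; which AllocateIds RPC version the SDK
# call maps to depends on the runtime, so this is only illustrative.
from google.appengine.ext import db


def reserve_counter_keys(count=10):
    # Ask the datastore to set aside `count` IDs for the Counter kind.
    first, last = db.allocate_ids(db.Key.from_path('Counter', 1), count)
    return [db.Key.from_path('Counter', i) for i in range(first, last + 1)]
```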
Known Issues:
- Transactional tasks do not currently work for Java
- Combined application logs are not currently available in Xenial
AppScale 3.3.0
Highlights of features/bugs in this release:
- Added support for Ubuntu Xenial
- Improved the autoscaling mechanism to rely on resource capacity
- Removed an unnecessary DB request when adding a task
- Fixed a bug that caused a DB request to hang indefinitely
- Improved log collection command on appscale-tools
- Simplified the process for enabling the datastore-viewer
- Fixed ejabberd configuration and installation on Azure
- Added manual scaling with add-instances in cloud deployments
- Added retry for queue operations to improve reliability
- Clearer error messages if AppScale is not configured correctly
- Fixed a bug that could leave a task in an inconsistent state when it is deleted
- Starting only one process per BRServer for easier monitoring
- Improved task queue leases performance
- Fixed a bug with single-property cursor queries that have an equality filter on a multi-valued property (see the sketch after this list)
- Improved load balancer configurations
- Improved handling of datastore timeout
- Allocating entity IDs in blocks
- Added base support for the Google Admin API
- Improved monitoring of running AppScale processes
- Added docs for starting datastore on its own
- Starting AppScale services in background
- Fixed a bug with the Pillow version used for the Images API
- Improved RabbitMQ stability
- Keep redirect URL after creating a user
- Added performance profiling
- Other minor improvements and fixes
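The multi-valued cursor fix above is easiest to picture with a repeated property; here is a minimal sketch of the kind of query that exercised it, with a hypothetical Article model and tags property:

```python
# Minimal sketch of a single-property cursor query with an equality filter
# on a multi-valued (repeated) property; the Article model and tags
# property are hypothetical.
from google.appengine.ext import ndb


class Article(ndb.Model):
    tags = ndb.StringProperty(repeated=True)


def first_page():
    query = Article.query(Article.tags == 'appscale')
    # Paging with the returned cursor is the pattern the fix addresses.
    results, cursor, more = query.fetch_page(5)
    return results, cursor, more


def next_page(cursor):
    query = Article.query(Article.tags == 'appscale')
    return query.fetch_page(5, start_cursor=cursor)
```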
Known Issues:
- There can be some brief downtime when redeploying or updating an application
AppScale 3.2.1
Highlights of features/bugs in this release:
- Fixed appscale upgrade from 3.1.0
- Added dependency python-twisted
- Build when repo update is not required
- Added support for appscale ssh to role
- Added request ids to TaskQueue and Datastore logs
- Integrated Azure Scale Sets into the agent
- Modified Azure agent to assign Public IPs only for load balancers
- Refined appscale down: added --terminate option
- Redesigned the AppScale Dashboard and added relocate functionality
- Improved Map Reduce and Pipeline support
- Improved appscale get/set property functionality
- Improved appscale status output (in particular for large deployments)
- Improved latency and behavior for autoscaling AppServers and instances
- Improved startup time of AppScale
- Put centralized Datastore HAProxy on all load balancers
- Put centralized TaskQueue HAProxy on all load balancers
- Fixed a bug that prevented Cassandra from being restarted in some cases after a restore
- Fixed a bug that could lose application requests during a redeploy
- Fixed concurrency issues during commits in the datastore
- Fixed a bug with GCE persistent storage being mounted incorrectly
- Fixed a bug that caused overloading a single taskqueue node
- Fixed a bug parsing cron jobs when the time was 0 or 60
- Fixed a bug where agents would default to spot instances
- Fixed Zookeeper configuration for maximum client connections
- Simplified deployment state handling (merged locations yaml and locations json file)
- Upgraded Cassandra to 3.7
- Upgraded Go to 1.6 and added support for vendoring
- Install Java 8 for Cassandra on compatible machines
- Pin wstools to 0.4.3
- Pin tornado to 4.2.0
- Pin google-api-python-client to 1.5.4
- Added dependencies: capnproto, pycapnp
Known Issues:
- There can be some brief downtime when redeploying or updating an application
AppScale 3.1.0
Notable Changes
- Added support for using Azure as an infrastructure
- Added preliminary support for pull queues (see the sketch after this list)
- Added support for more cron formats
- Changed the dashboard, allowing it to be treated like a normal application
- Added flexibility to the Java queue configuration parsing process
- Upgraded Cassandra to 2.2.7
- Made large batch statements and transactions more reliable
- Fixed a bug that prevented multiple dashboard AppServers from running
- Fixed a bug that caused instability when min was undefined
- Fixed a bug that prevented the dashboard from deploying an application
- Fixed a bug that prevented queue configuration changes from taking effect
- Fixed a crash when instance_class or max_concurrent_requests was defined
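Pull queues (noted above) are driven through the standard taskqueue lease API; a minimal sketch follows, assuming a pull-mode queue named 'work' is declared in queue.yaml:

```python
# Minimal sketch of using a pull queue; the queue name 'work' is an
# assumption and must match a pull-mode queue declared in queue.yaml.
import logging

from google.appengine.api import taskqueue


def enqueue(payload):
    # Pull tasks carry a payload and are leased by a worker instead of
    # being pushed to a handler URL.
    taskqueue.Queue('work').add(taskqueue.Task(payload=payload, method='PULL'))


def work_one_batch():
    queue = taskqueue.Queue('work')
    tasks = queue.lease_tasks(lease_seconds=60, max_tasks=10)
    for task in tasks:
        logging.info('leased task payload: %s', task.payload)
    # Deleting leased tasks marks them as completed.
    queue.delete_tasks(tasks)
```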
AppScale 3.0.0
Highlights of features/bugs in this release:
- Fixed a bug where Memcache API decr() did not cap results at 0 (see the sketch after this list)
- Switched to new clustering tool for RabbitMQ
- Fixed bug with key namespaces in Zookeeper transactions
- Locked down the UserAppServer external port as it's no longer needed
- Removed unused pycassa references
- Modified dev/test script for deleting all data to run for a single app ID
- Fixed bug in deploying the AppScale dashboard that was preventing login
- Added RabbitMQ/Celery cleanup upon appscale clean
- Specified JSON gem that works with supported version of Ruby
- Added composite index deletion logging
- Write datastore transaction data on commit
- Upgraded to Cassandra 2.1.15
- Added retry mechanism for connecting to Cassandra
- Wait until Zookeeper and enough Cassandra nodes are up to perform upgrade
- Fixed bug in entity validation during upgrade
- Don't require user input during SSH
- Initialize Cassandra config before database upgrade
- Fixed bug in choosing a host for a push task URL hook
- Log monit service errors
- Log the upgrade progress
- Remove app archive upon appscale remove/undeploy
- Removed unused code in AppTaskQueue
- Delete push tasks after completion/expiration
- Fixed bug with updating cron upon app redeploy
- Added logging of datastore results in debug mode
- Removed confusing error about non-existing dir during bootstrap
- Avoid unsafe disk operations on mount
- Wait for at most 30 seconds for a monit operation
- Wait for database quorum for all tokens on start
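The Memcache decr() fix above matches the documented behavior that decrements are clamped at zero; a minimal sketch:

```python
# Minimal sketch of the Memcache decrement behavior referenced above:
# per the GAE API, decr() never takes a counter below zero.
from google.appengine.api import memcache


def drain(key, amount):
    memcache.add(key, 3)  # no-op if the key already exists
    # With an initial value of 3 and amount=5, this returns 0, not -2.
    return memcache.decr(key, delta=amount)
```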
AppScale 3.0.1
Highlights of features/bugs in this release:
- Set num_tokens without defining initial_token to fix a problem with restarts
- Fixed bootstrap script to continue upgrade even when detached from HEAD
- Ensure monit is running during an upgrade
- Added the Datastore metadata table to the view-all-records script
AppScale 2.9.0
Highlights of features/bugs in this release:
- Added an upgrade mechanism to make future platform upgrades easier
- Gave ZooKeeper more responsibility for keeping track of configuration state (see the sketch after this list)
- Fixed the "Delete App" functionality in the dashboard
- Fixed a bug that regenerated nginx files needlessly
- Adjusted the placement logic of new AppServers to improve fault tolerance
- Improved AppController stability
- Fixed certificate verification in some cases when a server uses SNI
- Included more AppServer information in the output of 'appscale status'
- Provided a more useful error message for invalid node layouts
- Moved the Cassandra installation directory outside of the Git repository
- Added logging to the UAServer
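With more configuration state kept in ZooKeeper (see above), values can be read with kazoo; a minimal sketch with a hypothetical znode path:

```python
# Minimal sketch of reading a configuration value from ZooKeeper with kazoo.
# The znode path is hypothetical; real AppScale paths may differ.
from kazoo.client import KazooClient


def read_setting(hosts='localhost:2181', path='/appscale/example_setting'):
    zk = KazooClient(hosts=hosts)
    zk.start()
    try:
        if zk.exists(path):
            data, _stat = zk.get(path)
            return data.decode('utf-8')
        return None
    finally:
        zk.stop()
```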
AppScale 2.8.0
Highlights of features/bugs in this release:
- Removed the alternative install for ZooKeeper
- Start cron in docker for faststart
- Run AppDashboard tests by using nosetests in Rakefile
- Include SSL port in AppDashboard /apps/json route
- Round up CPU usage in appscale status
- Added HAProxy for UserAppServer
- Allow datastore to operate in read-only mode
- Better exception handling by raising specific errors
- System stats are now reported by the SystemManager
- Added deployment key to GCE instance metadata
- Added ability to gather system and platform stats in the AppController
- Defined the Celery AMQP backend as a scheme
- Added app log rotation when the appengine role is not running
- Removed the ability to call monit remotely from MonitInterface
- Removed log rotate scripts for apps during terminate
- Hermes collects monitoring stats from all deployment nodes
- Ensure Datastore logging is configured correctly
- Changed AppController's monit start command to only start Ruby process
- Omitted python version in application stats
- Changed the use of killall to stop the AppController
- Allow multiple filters on kindless queries (see the sketch after this list)
- Removed duplicate continue path for Java authentication redirects
- Added HAProxy routing for BlobServer
- Have the controller log to STDOUT
- Cleaned up invalid kind indices
- Fixed multiple equality filters for NDB queries
- Updated the query cursor format to use the one introduced in the 1.5.1 SDK
- Reduced build verbosity
- Updated Go to version 1.4.3
- Added new methods in the AppController for the tools to call out to the UserAppServer
- Coordinate backup and restore operations from head node
- Allow Cassandra restore to run without prompting
- Check available space on backup output directory
- Support for new amd64 relocations
- Deploy appscalesensor app for registered deployments
- Cleaner UserAppServer responses
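Kindless queries (see the fix above) select entities purely by key, across all kinds; a minimal sketch using GQL without a FROM clause, with a hypothetical Greeting kind used only to build the key bounds:

```python
# Minimal sketch of a kindless query with more than one __key__ filter.
# The Greeting kind and the IDs used to build the bounds are hypothetical.
from google.appengine.ext import db


def entities_between(low_id, high_id):
    low = db.Key.from_path('Greeting', low_id)
    high = db.Key.from_path('Greeting', high_id)
    # Omitting the FROM clause makes this a kindless query; both filters
    # apply to __key__, which is what the fix above enables.
    query = db.GqlQuery('SELECT * WHERE __key__ > :1 AND __key__ < :2',
                        low, high)
    return query.fetch(100)
```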
AppScale 2.7.1
Highlights of features/bugs in this release:
- A bug in get_app_data that affected upgrades was fixed
- AppScale can be built on Debian Wheezy
- AppScale can be built on a Raspberry Pi 2
- Python applications can now use the vendor module (see the sketch after this list)
- Datastore debug logs are more useful
- There are fewer ZooKeeper connection errors
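The vendor module support above follows the usual appengine_config.py pattern; a minimal sketch, assuming third-party packages were installed into a local lib/ directory:

```python
# appengine_config.py -- minimal sketch of vendoring third-party packages.
# Assumes dependencies were installed with `pip install -t lib <package>`.
from google.appengine.ext import vendor

# Add any libraries installed in the "lib" directory to the import path.
vendor.add('lib')
```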
AppScale 2.7.0
Highlights of features/bugs in this release:
- Fixed bug in choosing app ports via lsof
- Fixed Vagrant FastStart to pick the correct IP
- Added hourly status log in AppController
- Handle UpdateIndex requests to the Datastore
- Removed hardcoded refs to AppScale home dir
- AppScale now runs on Ubuntu 14.04 LTS (Trusty Tahr)
- Have Monit check app servers via the port
- Increase Java URLFetch API size limit to 33MB
- Clean up expired successful transactions
- Fixed bug in monitoring Nginx
- Updated URLFetch stub to handle socket timeouts (see the sketch after this list)
- Keep about 1GB of app logs
- Replaced nc with logger for centralized app log
- Made start services idempotent
- Have Zookeeper autopurge run more often
- Use Monit to stop all running services during appscale down
- Fixed bug that was spawning additional cloud instances
- Better AppController restore flow
- Fixed bug that was causing an app to be disabled
- Assigned application ports will now persist through down/up
- Use Xenial's version of kazoo
- Fixed AppController's stop command
- Removed the use of root RSA keys
- Have the AppManager set up AppServer routing
- Create separate logrotate script per application
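The URLFetch timeout handling above surfaces at the application level through the fetch deadline; a minimal sketch with a placeholder URL:

```python
# Minimal sketch of a URLFetch call with an explicit deadline; the URL is
# a placeholder. A timed-out fetch surfaces as a DownloadError rather
# than hanging.
from google.appengine.api import urlfetch


def fetch_status(url='http://example.com/'):
    try:
        return urlfetch.fetch(url, deadline=10).status_code
    except urlfetch.DownloadError:
        # Raised when the remote host cannot be reached in time.
        return None
```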